C++ concepts and programming by design

C++ concepts are an upcoming feature being proposed for C++17. While the feature itself is still a work in progress, the general idea behind concepts is easy to talk about, and it is largely language-agnostic.

What are C++ concepts?

From the above, Stroustrup writes:

“A concept is a compile-time predicate.”

Concepts, at least as described here, are a way of describing semi-generic sets of types that impose requirements on their members. In a sense, a concept is a category of types.

The following is an example pulled from the link above:

template<Associative_container C>
Iterator_type<C> find(C&& assoc, const Key_type<C>& key)
{
  return assoc.find(key);
}

Usually, ensuring that types match up with a certain “category” of types is hard in C++. Inheritance is one solution, but forcing a type hierarchy can be awkward and add unneeded complexity, especially if there is more than one “category” to satisfy. Another solution is something like std::enable_if and other compile-time, template-based tricks (a sketch of that approach appears below). As we can see here, the idea of performing a search on an Associative_container, where the algorithm requires a container value and a constant reference to a key-like type and returns an iterator for that container, is expressed quite naturally once we can name categories of types in our code.
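For comparison, here is a minimal sketch of the pre-concepts enable_if route, using a hypothetical has_member_find detection trait (the trait and find_in are illustrative names, not anything from the proposal):

#include <map>
#include <string>
#include <type_traits>
#include <utility>

// Minimal void_t (standardized in C++17; spelled out here to stay self-contained).
template<typename...> using void_t = void;

// Hypothetical trait: does C have a member find() callable with a const K&?
template<typename C, typename K, typename = void>
struct has_member_find : std::false_type {};

template<typename C, typename K>
struct has_member_find<C, K,
    void_t<decltype(std::declval<C&>().find(std::declval<const K&>()))>>
    : std::true_type {};

// The enable_if version of the constrained find() above. The constraint is
// buried in template machinery, and a failing call reports "no matching
// function" instead of naming the unmet requirement.
template<typename C, typename K,
         typename = typename std::enable_if<has_member_find<C, K>::value>::type>
auto find_in(C&& assoc, const K& key) -> decltype(assoc.find(key))
{
    return assoc.find(key);
}

int main()
{
    std::map<std::string, int> m{{"a", 1}};
    auto it = find_in(m, std::string("a"));   // fine: std::map has find()
    // find_in(42, 0);                        // rejected: int has no find()
    return it != m.end() ? 0 : 1;
}

The constraint works, but it reads as machinery rather than as the stated intent that a concept would give us.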

What’s the big idea?

How can we perform logic based on categories rather than types?

This semi-generic abstraction over types is crucial once we think about its various applications. It allows us to model and abstract our programs more closely in terms of design concepts.

Oftentimes, we reason in terms of concepts over types rather than in terms of specific types:

  • Addition is a function that adds two numbers together.
    • The concept of a number can be expressed as a union of many different types: the natural numbers, the integers, real numbers, imaginary numbers, quaternions, etc.
  • The concept of a “string” or “number” or even “linear container” can be split among various implementations that are encoded as different types with a common interface. Forcing inheritance onto them is an invasive move from a maintenance standpoint.
    • Allowing multiple implementations to subscribe to a common interface is the name of the game in many languages; classically, Java has its interface mechanism, but opting in to an interface there works much like inheritance. C++ concepts are opt-in by inspection: the compiler checks the requirements automatically.
  • Many types can be move-constructible but not copy-constructible. The various permutations of movable/non-movable and copyable/non-copyable can be tedious to figure out.
    • The concepts of MoveConstructible and CopyConstructible can simplify things by turning the check for a common feature or interface into a simple compile-time type check and expression evaluation, with fewer errors (see the sketch after this list).
  • Categories are more generic than types; substitution is easier when we can substitute for a class of types rather than just one.
    • For example, finding a substitute for a grain for breakfast is much harder if you need to find a substitute specifically for quinoa.
      • If you have no concept of what a “grain” is, how do you know without having to declare a large inheritance tree that quinoa is a grain?
        • What if something can serve as a substitute for a grain but isn’t actually a grain? Is duck typing okay for inheritance when a duck isn’t a grain?
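Before concepts, the closest approximation to that kind of compile-time category check is a type trait plus a static_assert. A minimal sketch, with store_copy as a made-up function purely for illustration:

#include <memory>
#include <type_traits>

// Checks the "CopyConstructible" requirement explicitly at compile time.
template<typename T>
void store_copy(const T& value)
{
    static_assert(std::is_copy_constructible<T>::value,
                  "store_copy<T> requires a copy-constructible T");
    T copy(value);   // only valid when T really is copy-constructible
    (void)copy;
}

int main()
{
    int x = 42;
    store_copy(x);                        // OK: int is copy-constructible
    // std::unique_ptr<int> p(new int(1));
    // store_copy(p);                     // error: the static_assert fires
    return 0;
}

A concept would let the same requirement sit directly in the template’s declaration instead of inside its body.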

You’re stupid and you write so bad. What’s your point?

Constraint violations are diagnosed at the point of use, just like type errors. C++98 (and C++11) template instantiation errors are reported in terms of implementation details (at instantiation time), whereas constraint errors are expressed in terms of the programmer’s intent stated as requirements. This is a fundamental difference. The diagnostics explain which constraints were not satisfied and the specific reasons for those failures.

  • Concepts, most importantly, are seen as a way to solve the notoriously dense compilation-error problem with templates.
    • Any C++ programmer who has had to deal with compilation errors from libraries like Boost, or even from an object as “basic” as `std::vector<T>`, understands that template diagnostics can be terrifying and soul-wrenching to debug. Having to dig deep into the implementation or the online documentation whenever something doesn’t compile is a major slowdown when you’re trying to fix errors and get on with development.
  • People want to program with their own concepts in C++, not just with specific types or with the pre-defined concepts of C++.
    • Stroustrup mentions that an implementation of concepts was created to test them out, even applying concepts to an implementation of the standard library. I’d bet that understanding why your object can’t be copied because it holds a std::unique_ptr<Node<T>> instead of a Node<T>* would be much faster if the compiler simply told you that std::unique_ptr<Node<T>> is not a Copyable type, instead of telling you it couldn’t match the copy constructor’s parameters or that the function was deleted (see the sketch after this list).
  • The concept, or rather the semi-generic interface, is a more efficient abstraction than inheritance for a “category of types.”
    • Concepts are closer to the idea of a “category of types with similar functionality” than inheritance is, because a concept is not a type. As in mathematics, a set is not the same thing as its elements.
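As a rough sketch of the diagnostic this buys us, here is a hypothetical Copyable concept; the syntax below is roughly what later stabilized, and the exact spelling was still in flux while concepts were being proposed:

#include <memory>
#include <type_traits>

// A hypothetical Copyable concept: satisfied only by copy-constructible types.
template<typename T>
concept Copyable = std::is_copy_constructible<T>::value;

template<Copyable T>
struct Node
{
    T value;
    Node* next = nullptr;
};

int main()
{
    Node<int> ok{42};                    // int satisfies Copyable
    // Node<std::unique_ptr<int>> bad{}; // error: the compiler names the
    //                                   // unsatisfied Copyable constraint
    //                                   // instead of a deleted copy ctor
    return 0;
}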


3 Reasons For Modularization

I’m pretty sure most of the non-beginner people in production already know about this, but I wanted to write about this because I’ve had a moving experience with modules recently.

It’s occurred to me that it’s hard to really get the importance of modularization on the first try. I recently watched a class learning object-oriented programming in C++ coming from C, and while some people got modularization along with encapsulation, others didn’t get it on their first try. I’m of the stance that modularization is probably one of the most crucial lessons for “getting” programming, and it’s also the one that imposes order against entropy.

I mean, just being able to namespace things is a form of modularization and separation if used conventionally, and namespacing is a pretty crucial feature once we start a fiesta with more than maybe 10 libraries/modules/packages/gems/etc. (see: R packages; aliasing on import and dplyr/plyr).

Really, a lot of things come from the concepts of modularization and DRY working together. “Modules? Libraries? Packages? Static vs. dynamic linking? Interfaces? Functions? Objects? Who cares?” the coder might think, annoyed by seemingly large projects full of pointless cruft when everything could just be put into main().

But those are core concepts! Getting somebody to intimately understand the “why,” rather than just reciting it for show, might take a little more.

It’s really hard to explain why modularization matters until you actually run into the problem yourself. It’s something you sort of need to experience, either as an immersive thought experiment that conveys the frustration, or by going down the wrong path and living it. Maybe I’ll write about that sometime.

Either way: here’s a good visual for modularization and 3 reasons why it’s a good thing:

This is your code with modularization.
This is your code exploding with NO modularization.

1. It reduces mind load.

Separation of concerns is one of the best things about modularizing code. It is also what enables procedural and functional programming. Being able to reuse a piece of code wherever its purpose comes up again, without having to manually retrace your steps and program it again, is great!

Additionally, some people say that the human mind can focus on about four to seven distinct things at once. If that’s so, then the mind can only consider a limited number of distinct “steps” in a procedure before it has to consolidate several of them into an “umbrella” step, the way thousands of lines of code can be wrapped up as a single compression function or as “allocate this piece of memory for me.”

The human brain doesn’t seem capable of focusing on every single line of code at once. Modularization reduces the mind load on the programmer by hiding the underlying details and wrapping them up into one nice “capsule” of concept or usage, leaving more “mind-space” for other concepts.

2. It makes for easier debugging.

If you have a program to bake a cake, and main() contains everything from ordering the supplies to delivery, when the entire thing crashes, where in the code do you check first?

“Well, it’s in main(),” one might say. “But, main() is 25,000 lines long…”

Some people might turn to step-wise debugging with something like GDB. Others might turn to test cases. Embedded programmers might have to deal with print-like debugging with LED switches. A few might move to static analysis tools (squee!!!).

Ultimately, your debugging is going to have to take you back to where the error is.

Modularization reduces the space (and, so, time) in which you need to look for bugs. Being able to find the part of the program in which there might be errors by knowing other modules have already been tested or proven to be bug-free under normal operating conditions is extremely valuable! Not having to look for bugs in, say, standard library functions is a huge productivity boost. You want to be squashing bugs in your program’s code, not the code that holds it up, whether you wrote it or not.

This sort of “possibility space” reduction is crucial for certain algorithms as well, such as binary search. Being able to place things into different “buckets” of concepts or steps is always helpful.
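As a quick reminder of what that kind of shrinking looks like in code, here is plain binary search; each step throws away half of the region that could still contain the target, much like ruling out already-tested modules rules out where a bug can hide:

#include <vector>

// Returns the index of target in a sorted vector, or -1 if it is absent.
int binary_search_index(const std::vector<int>& sorted, int target)
{
    int lo = 0;
    int hi = static_cast<int>(sorted.size()) - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (sorted[mid] == target)
            return mid;          // found it
        else if (sorted[mid] < target)
            lo = mid + 1;        // discard the lower half
        else
            hi = mid - 1;        // discard the upper half
    }
    return -1;
}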

3. It enables abstraction.

Being able to “block out unnecessary details” is a big part of interfaces of all kinds. For a given interface, hiding implementation details is often seen as a security and usability boost, but, again, it also simplifies the way somebody uses it and thinks about using it.

Power, simplicity, and consistency might be a few words to describe what good interfaces have in common. Simplicity, coherence, and consistency are exactly what modularization provides. What better starting point for abstraction?

Modularization enables abstraction by being the mechanism that ties details together into a tight, enforced boundary. A “facade API” that breaks whenever people don’t play by the rules isn’t going to hold up once somebody decides to poke at your system. Having the ability to turn your abstract concepts and rules into an encapsulated system is a very powerful tool in your toolset as a programmer.

So what does that mean when I code?

I guess we could turn it into a list like this:

  • Smaller is better. It’s also the heuristic for…
  • Simple is better. Simple things usually take less to code.
  • One procedure, one purpose. The Unix philosophy was built with this in mind, and we know how successful that’s been.
  • Close your modules and systems. A closed system is one whose guts won’t fall out if you shake it up. Code that doesn’t break inside its own assumptions is crucial for stability. Not doing this in, say, C, will often result in something like a segmentation fault. (Because WHY would THIS pointer be NULL?) See the sketch after this list.
  • Diagram and flowchart between “modules.” It’s not easy drawing a flowchart with only one bubble. If your modularized code corresponds to bubbles, then as you split up your code, the relationships between your states and transitions make more sense as you begin to more closely map bubbles to individual steps in a more complex process.
  • Leave the door open by leaving things small. One thing about modularizing and abstraction is that your designs can easily break. When a client comes to you and asks, “Hey, can you add this feature?” and you didn’t allow for that in your abstraction, suddenly you’re stuck with an entire system that can’t bend without breaking.
    • In this situation, you’re left with a few choices: create a strong, flexible initial design; solidify and cement your design according to specifications or requirements once they can no longer change; make the system easy to extend by leaving it non-closed (and carefully guarded); or don’t take your system too seriously, and be ready to scrap and rebuild.
    • Perhaps the best idea is to build from primitives small enough that changes work like mixing and matching puzzle pieces: just take some out and shove others in. It’s much easier to work with small puzzle pieces than to take apart a large block of code, replace the parts that need to change, and hope everything fits back together.
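Here is the sketch promised above: a small “closed” module whose invariants live entirely inside the class, so shaking it from the outside can’t make its guts fall out. (RingBuffer is a made-up example, not code from any particular project.)

#include <stdexcept>
#include <vector>

class RingBuffer
{
public:
    explicit RingBuffer(std::size_t capacity)
        : data_(capacity), head_(0), size_(0)
    {
        if (capacity == 0)
            throw std::invalid_argument("RingBuffer capacity must be > 0");
    }

    void push(int value)
    {
        data_[(head_ + size_) % data_.size()] = value;
        if (size_ < data_.size())
            ++size_;
        else
            head_ = (head_ + 1) % data_.size();   // overwrite the oldest element
    }

    std::size_t size() const { return size_; }

private:
    std::vector<int> data_;   // callers can never reach in and invalidate this
    std::size_t head_;
    std::size_t size_;
};

Callers can only go through push() and size(), so the index arithmetic and the non-empty storage stay correct no matter how the object is used.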


A bite on SAT and proving programs

What’s “SAT?” SAT is shorthand for a well-known computer science problem: the Boolean satisfiability problem. It’s simple: given a formula over Booleans like this:

(A' + B) · (C · D)

…can we figure out an assignment to all the variables to make the entire expression true?

Well, the answer is “yes:” all you have to do is try everything! But, er, that’s a bit much: with 4 variables there are 2^4 = 16 possible assignments, since each variable can independently be true or false. And exponentials are fast-growing: 2^10 is 1,024 and 2^20 is already 1,048,576.
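A minimal brute-force SAT check makes the 2^n blow-up concrete; the formula is passed in as a callable over an assignment, and brute_force_sat is just an illustrative name:

#include <cstdint>
#include <functional>
#include <vector>

// Tries every assignment of num_vars Boolean variables: 2^num_vars in the worst case.
bool brute_force_sat(int num_vars,
                     const std::function<bool(const std::vector<bool>&)>& formula)
{
    const std::uint64_t total = std::uint64_t(1) << num_vars;
    for (std::uint64_t bits = 0; bits < total; ++bits) {
        std::vector<bool> assignment(num_vars);
        for (int i = 0; i < num_vars; ++i)
            assignment[i] = (bits >> i) & 1;
        if (formula(assignment))
            return true;   // found a satisfying assignment
    }
    return false;          // unsatisfiable
}

int main()
{
    // (A' + B) · (C · D), the example formula from the text.
    auto f = [](const std::vector<bool>& v) {
        return (!v[0] || v[1]) && (v[2] && v[3]);
    };
    return brute_force_sat(4, f) ? 0 : 1;   // satisfiable, e.g. A = 0, C = D = 1
}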

The other fascinating thing about SAT is that different restrictions of it land in different complexity classes. 2-SAT, where the formula is a product of sums with only 2 literals per clause, is in P, while 3-SAT is NP-complete. Why is 2-SAT so much easier? Because each 2-literal clause is equivalent to a pair of implications, so the whole formula can be turned into an implication graph over literals, and satisfiability reduces to a reachability check on that graph that runs in polynomial (in fact linear) time.
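Concretely, in the same notation as the formula above, the rewrite behind the implication-graph trick is:

(a + b) ≡ (a' → b) · (b' → a)

Every 2-literal clause contributes two directed edges between literals, and the formula is unsatisfiable exactly when some variable and its negation land in the same strongly connected component, which can be checked in linear time.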

SAT may have applications in other places. If we consider the variables to be, say, input conditions to your program, and the satisfying condition to be your program crashing, being able to solve the problem in polynomial time would be far more helpful than having to grind through an exponential search.

Attributes and static program traits

I recently came across an article called “Hotpatching a C function on x86.” It’s a really cool article because it takes advantage of some compiler-specific and pretty low-level machinery in order to do actual code patching on the popular x86-64 architecture.

But, if you take a look at some of the code:

#include <stdio.h>   /* for puts() */

__attribute__ ((ms_hook_prologue))
__attribute__ ((aligned(8)))
__attribute__ ((noinline))
__attribute__ ((noclone))
void
hello(void)
{
   __asm("");
   puts("hello");
}

Well, all those __attribute__ tags look kind of annoying.

Really, can’t there be a C-like language or some sort of meta-tool that can hook in at the language level or lower to cleanly separate implementation details from interface details?

This would require meticulously surveying the semantics of a language and translating them into traits, but it would give compiler developers, language developers, and programmers alike a clear degree of control over implementation details.

I’d imagine that maybe D or Rust might be able to do this sort of thing. Perhaps I can look into this.
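In the meantime, the everyday workaround is simply to hide the attribute soup behind a macro, as in this sketch in the same GCC-flavored C as the article’s snippet (HOTPATCHABLE is a made-up name):

#include <stdio.h>

/* Keeps the implementation detail in one place so the definition site
 * reads cleanly. It papers over the problem rather than solving it. */
#define HOTPATCHABLE                      \
    __attribute__((ms_hook_prologue))     \
    __attribute__((aligned(8)))           \
    __attribute__((noinline))             \
    __attribute__((noclone))

HOTPATCHABLE
void hello(void)
{
    __asm("");
    puts("hello");
}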

Is Asio Boost-dependent?

You know, it’s really strange that Asio itself ships non-Boost examples that still require Boost.Date_Time in order to build them. On top of that, not being able to compile the examples even with the ASIO_STANDALONE flag makes me think that Asio doesn’t care much about its own independence.
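For what it’s worth, a standalone build is supposed to look something like this minimal smoke test (assuming the Asio headers are on the include path and a C++11 compiler):

#define ASIO_STANDALONE
#include <asio.hpp>

int main()
{
    asio::io_service io;   // plain asio::, not boost::asio::
    io.run();              // returns immediately, since no work was queued
    return 0;
}

Whether that actually compiles cleanly against a given Asio release is exactly the thing the examples made me doubt.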

Boost is quite a large library collection, and downloading the entire thing can take quite a while. It makes me wonder: if somebody were to build a library on top of Boost, would that be okay? And what about Boost-like libraries that later become standard libraries, and the compatibility story that follows?

It’s not too bad, it’s just a mild inconvenience with maybe some future noticeable effects.


Can we model human behavior in games with probability?

It’s a really interesting idea to model human behavior, in the sense of building a behavioral model for external input that is beyond the programmer’s control. For a systems designer (games are just human-made systems, after all), it can be really, really interesting! Sometimes people will put in a weird button combination that breaks an important I/O abstraction. In other cases, the programmer might be at fault for failing to account for an erroneous input that crashes the program with the classic segmentation fault.

But for, say, fighting games? Isn’t that a bit strange? Well, no, not really, as this can become an interesting exercise for understanding how to control minds (got your tinfoil hat on?) and optimize the learning experience of any system, even something like an SQL database or a project management system. It’s all the same.

This is from a recently defunct fighting game called Rising Thunder, whose developer was recently bought by Riot Games, the creators of the immensely popular online game League of Legends.

So, what can we first say about creating such a model? Well, suppose we were to make a model. What would we need?

I’d begin with the relevant ideas and concepts that the subject might possess before he/she/it begins to use our system.

So what would that be for a fighting game? Well, in many 2D fighting games, there are standard conventions for player-game interactions. One example is that most 2D fighters simply have you hold the direction away from your opponent to block all incoming strikes. Another is that simply pressing down on your control pad makes your character crouch. Conventions exist everywhere; every pencil usually has an eraser on its end and most coffee mugs have handles. We take a lot of these for granted, but they’re not givens; somebody put the time and effort into coming up with them, and a lot of other people put in the effort to follow them.

In fighting games, there is also the concept of balance and power. In short, power represents the “strength” of a specific character and their moveset that allows them to perform better during games, either by being able to dish out more damage in a combo or by having some sort of psychological edge that wears down the other player. Balance represents the relative strength of characters to one another. For example, if one character can kill all the others in one hit and no other character can do that, that character and their killing move are not balanced unless the move has a downside.

In many fighting games, a “balance” triangle of moves like this usually shows up. If a player is blocking, you grapple them. Grappling? Make them miss and strike them instead. And, finally, blocking beats strikes.

Now, suppose we only had these three moves in our fighting game. No movement, either! This means that the entire game is based on just 3 moves: Striking, Blocking, and Grappling.

Suppose we want to design characters in this system. How can we design characters that are balanced?

Well, suppose we were to design a character that always dealt 10 points of damage on a successful strike, and that every character had 10 or fewer hit points. That’s a one-hit kill! But if the other player were to always block, then that character, despite all of their power, would be practically useless.

Of course, we assume that nobody’s going to block forever. But why? Because nobody wins a fighting game by not actually fighting. In most games, only striking and grappling do damage.

Already, we see that our fighting game is more than just rock-paper-scissors. There is already an imbalance: players will prefer striking and grappling over blocking, simply because blocking does not directly contribute to winning the game by doing damage.

The real question: do players block less often because blocking does not win any games?

This is the real question: its answer determines if a definite value for “balance” can ever be decided.

Already, we can begin to establish a mathematical basis for analyzing the game.

P(blocking) = ?

Ideally, the probability of all moves would look like this:

P(striking) + P(blocking) + P(grappling) = 0.33 + 0.33 + 0.33 ≈ 1

In reality, we know that as one player’s move probabilities become skewed, so do the other player’s once they adapt to the first player’s style.

Such probabilities are not static; they are in flux and change as the situation changes. But we know they depend on something: at the very least, on the patterns each player perceives in the other.
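To make that concrete, here is a toy best-response sketch over the strike/block/grapple triangle. The payoff numbers are made up purely for illustration: net damage for the row player, with blocking never dealing damage, which is exactly the imbalance described earlier.

#include <array>

enum Action { Strike = 0, Block = 1, Grapple = 2 };

// Net damage (dealt minus taken) for the row player, per matchup.
constexpr double payoff[3][3] = {
    //              vs Strike  vs Block  vs Grapple
    /* Strike  */ {   0.0,       0.0,      1.0 },   // beats grapple, gets blocked
    /* Block   */ {   0.0,       0.0,     -1.0 },   // avoids strikes, loses to grapple
    /* Grapple */ {  -1.0,       1.0,      0.0 },   // beats block, loses to strike
};

// Picks the action with the highest expected net damage, given a guess at
// the opponent's move probabilities.
Action best_response(const std::array<double, 3>& opponent)
{
    Action best = Strike;
    double best_value = -1e9;
    for (int a = 0; a < 3; ++a) {
        double value = 0.0;
        for (int o = 0; o < 3; ++o)
            value += opponent[o] * payoff[a][o];
        if (value > best_value) {
            best_value = value;
            best = static_cast<Action>(a);
        }
    }
    return best;
}

int main()
{
    // Against a player who blocks most of the time, grappling comes out on top.
    std::array<double, 3> turtling{0.1, 0.8, 0.1};
    return best_response(turtling) == Grapple ? 0 : 1;
}

The moment the opponent notices you always grapple, their distribution shifts toward striking, and the “best” response shifts with it, which is the whole point of the paragraphs above.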

Indeed, this uneven “landscape” of move probability distributions is where “the meta” comes from: the patterns of move or choice distributions in a given situation, their “values,” and the optimal strategies for exploiting those distributions eventually form what we call “the meta” in many types of games.

Even a game like chess has a “meta.” The opening moves in chess can vary, but people probably won’t want to play the moves that get them checkmated in fewer than 5 moves. Even knowledge of the basic move tree can affect the way people play a game, already muddying the waters.

Now, we understand that the probability of a player taking some action depends on what they think they should do. In fact, the internal valuation model a player carries shapes how they calculate the value of their next move; in a way, each player has their own model that calculates the “utility” of each possible action in a game. Even the “absence” of such a model is important, since the player’s actual behavior might then be decided by some sort of “default” model that most under-developed players exhibit; for example, new fighting game players tend not to understand the strike-block-grapple triangle and fail to produce a grappling counter against a more skilled opponent who blocks without repercussion. But what causes players to gravitate toward certain styles of play or patterns?

In fact, could we drop the notion of “probability” entirely and opt instead for a notion of utility? In a game with sentient players who understand the rules and how to optimize their own actions toward certain goals, one cannot simply adopt a uniform, static probability distribution as a model of human behavior. What is a player-action probability distribution worth if the parameters that decide it are always changing, and the properties of the data are always changing? Would it be better to use some non-smooth function as a stand-in for probability?

So now the question is: Is there a way to translate a meta-game into a probabilistic model, and is it useful? What would be a suitable substitute?

It’s really hard to create a useful model here; something as simple as a straight-up probabilistic model with a uniform distribution might prove inadequate. Suppose we made such a model for a game of rock-paper-scissors; it’s easy to determine the best move given a known probability distribution: play paper if your opponent is going to play rock. Now suppose a player plays rock three times in a row. A person might think, “Oh, my opponent keeps playing rock, so I should opt for paper to beat it!” But, in reality, the opponent could predict that counter and play scissors to counter the counter. And then what if the original player predicts that? And what is the probability of each case happening? Then, I suppose, we start edging into the inner makeup of each player’s mind, trying to categorize players into different personality types with certain proclivities, say, to read 2 layers into a situation rather than 3. But what if a player knew that? Then perhaps they would read 1 or 4 layers deep to counter that piece of knowledge.
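A toy version of that naive frequency-counting player might look like the sketch below; predict_and_counter and the Move names are made up for illustration, and the whole point of the paragraph above is that a thinking opponent breaks this model immediately:

#include <array>
#include <vector>

enum Move { Rock = 0, Paper = 1, Scissors = 2 };

// Paper beats Rock, Scissors beat Paper, Rock beats Scissors.
Move counter_to(Move m)
{
    return static_cast<Move>((m + 1) % 3);
}

// Treat the opponent's empirical frequencies as their true probabilities
// and answer with the counter to their most likely move.
Move predict_and_counter(const std::vector<Move>& opponent_history)
{
    std::array<int, 3> counts{0, 0, 0};
    for (Move m : opponent_history)
        ++counts[m];

    Move most_likely = Rock;
    for (int m = 1; m < 3; ++m)
        if (counts[m] > counts[most_likely])
            most_likely = static_cast<Move>(m);

    return counter_to(most_likely);
}

int main()
{
    std::vector<Move> history{Rock, Rock, Rock};
    return predict_and_counter(history) == Paper ? 0 : 1;   // plays Paper
}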

Really, even if we constrain ourselves to simple probability, the number of variables that can contribute to those probabilities starts to explode and suddenly we have too much to deal with. Can we even measure the probability that a person reads 10 layers into a situation, and then the probability that a person thinks they should read 10 layers deep?

One must wonder whether knowledge alone can heavily define our actions; the absence of a crucial piece of information can determine whether a player will ever do something. A player who isn’t even aware they can do something, like grapple in our fighting game example, won’t be represented accurately by a uniform distribution, since they’ll never consider performing the action and would only do it by accident.

Even trickier: can knowledge be treated as a discrete variable at all? It’s simple to say something like, “Oh, I know I can grapple now!” and then model that player’s behavior as closer to a uniform distribution over their known options. But it’s strange to have a system where knowledge is only ever “on” or “off” once we consider games with imperfect information, the “fog-of-war”-type games, and then try to reduce a player who is estimating how likely something is before acting to a simple “zero” or “one.”

Perhaps something from decision theory would be a better substitute. It would be worth diving into that area to explore the range of possibilities and to illustrate this problem properly.