C++: Template duck-typing in 30 lines

#include <iostream>  // std::ostream, std::cout, std::endl
#include <cstddef>   // NULL

struct Duck {
    void quack(std::ostream& out) {
        out << "Quack!" << std::endl;
    }
};

struct Platypus {
    void quack(std::ostream& out) {
        int* i = NULL;
        out << *i << std::endl;
    }
};

template <typename MaybeDuck>
void do_quack(std::ostream& out,
        MaybeDuck duck) {

    duck.quack(out);
}

int main() {
    Duck duck;
    Platypus platypus;

    do_quack(std::cout, duck);
    do_quack(std::cout, platypus);
}

C++ has compile-time duck typing

(See it live on Coliru)

The C++ compiler doesn’t actually do many semantic checks to “figure out” whether the type you give a template is correct.

All it does is, more or less, “copy and paste” the type you give it into the placeholder and see whether the result compiles.

Here, both of our classes will quack(), but they will not do the same thing. One will print; the other will most likely cause a segmentation fault, because it dereferences a NULL pointer.

In other words, C++ does not do special semantic checking on templates unless you do it yourself.

Hence, this program crashes.

So what’s the solution?

There are many.
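
For example (a sketch, and only one of many options): you can do the “semantic checking yourself” by making the template demand an explicit opt-in, so that only types you have vouched for are accepted. The is_real_duck trait below is a made-up name for illustration; C++20 concepts are another way to express the same kind of constraint.

#include <iostream>
#include <type_traits>

struct Duck {
    void quack(std::ostream& out) { out << "Quack!" << std::endl; }
};

struct Platypus {
    void quack(std::ostream& out) { out << "(not a real quack)" << std::endl; }
};

// Explicit opt-in: only types we mark as "real ducks" may be passed in.
template <typename T>
struct is_real_duck : std::false_type {};

template <>
struct is_real_duck<Duck> : std::true_type {};

template <typename MaybeDuck>
void do_quack(std::ostream& out, MaybeDuck duck) {
    static_assert(is_real_duck<MaybeDuck>::value,
                  "do_quack only accepts types explicitly marked as ducks");
    duck.quack(out);
}

int main() {
    Duck duck;
    do_quack(std::cout, duck);            // compiles and quacks
    // do_quack(std::cout, Platypus{});   // now a compile-time error, not a crash
}

The point is not this particular trait, but that the check is something you write yourself; the compiler will not infer “is a well-behaved duck” on its own.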

Is intuition in programming important to you?

Note: Read only the bold statements if you want to skim this quickly.

Do you remember the first time you saw classes? Objects? The word “string?” What about binary trees, regular languages, compilation of languages, and recursion?

I don’t realize it myself a lot of times, but I’ve come a long way in changing my mindset to handle the terminology of programming ideas and make them seem natural. I think others are the same.

Do you remember the day when you didn’t understand anything of programming terminology?

Today, hundreds or thousands of people are in that same situation. Today, many people do not know what a “segmentation fault” is or why their program is crashing. They don’t know why a reference for WinMain@16 is missing from their MinGW program that makes use of wxWidgets when first tangling with Code::Blocks, and they certainly won’t know how monads relate to endofunctors.

Do you remember how you first grasped your first to most recent ideas of programming?

I remember my personal experience: it was late at night, trying to figure out why my 5th project in C++ was not compiling, and I began helplessly throwing questions at Google and Stack Overflow.

I had no idea what I was doing, and — who’s to blame? — I just didn’t know what I was doing.

I remember that my account was question-banned on Stack Overflow for trying to ask questions as an independent learner, one or two semesters before I would ever encounter my first C++ program in college.

Of course, Stack Overflow has a strict policy on repeating questions that have already been answered, and an unwritten, unspoken policy on downvoting really basic questions — which I quickly learned after the fact.

But after three months of hitting roadblocks in compilation, unable to pass the barriers of discipline and knowledge required to go directly to the documentation, I found myself on top of a basic GUI system implemented on top of SDL, with Lua scripting on the side.

Learning can be a frustrating and humiliating experience; it can also be thrilling and easy given the right mindset and the right resources. To me, learning is not about wading through descriptions of mind-constructs and precise, compiler-like terminology to simply parse the mechanisms by which we build, arduously and untiringly — we make and mutate what is.

It’s not about the methods by which we gather information — we do not learn by reading documentation, but, rather, by the methods by which we internalize, understand, and distill such knowledge.

Learning is compressing methods and ideas intuitively so that they can be reconstructed and utilized at will.

Who is aiming to precisely teach “how to compress methods and ideas intuitively so that they can be reconstructed and utilized at will” for basic programming concepts? For searching and understanding basic programming terminology?

Opponents of doing such might give the arguments that “we don’t baby people around” and “people are capable of Googling the most basic of concepts,” but I don’t blame a baby for not being able to walk when all he or she can do is crawl.

If you remember the time that you searched “What is the best programming language?” on the Internet, you may remember that the best answer given was no answer at all. People even think that asking such a question is wrong (and “wrong questions” do exist, in a sense).

I tried to write a tutorial myself. I worked on a C++ tutorial blog a few months back. And it’s horrible. It’s a mish-mash of concepts that are out of scope, poorly explained and written, and not focused on compressing knowledge into intuition. There is no notion of audience or aim; just a spewing of bits of knowledge that fail to present a coherent whole.

I now think that it’s not about just simplicity or a short word count, but teaching concepts so that they become intuitive to the receiver.

Many people want to learn programming today. Today, many people are starting at ground zero, with no equipment to learn for themselves. And today, you are the more experienced.

I’m writing this because I want to ask three questions and see what other people think:

  1. Is intuition in programming important for you? How would you teach correct intuition to a beginner?
  2. What concepts were important for you for sustained learning in programming?
  3. What resources do people need to map out programming for themselves and say to themselves: “This is what I know,” “This is what I don’t know,” and, “This is how I’m going to learn this?”

What is a type, and how do you think about it?

I was recently asked by a person who was going to take his first C programming class next semester to show him what programming was like. So I went over the compilation process, showed him some C++ code, and tried to talk about interface vs. implementation and what an API is.

But one of the hardest questions I came across was,

“What is a type?”

Now, as a student programmer, you kind of have to think about it. We aren’t given a whole spiel about the core idea behind types — we simply get something like,

“Oh, there just are different types, you know. You know the difference between an integer and a decimal number, right? Well, in C, they’re called int and float!”

What’s a person to think?

At this point, somebody who has never encountered the idea of types has to think: “What is a type?” Just imagine yourself asking these questions:

Is a type like different numbers? Is 2.0 a different type than 1? Than 1.0? If they’re both decimals, are they the same type? But if they’re the same value, aren’t they exactly the same type of number? Does having the same value mean having the same type?
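
For what it’s worth, a C++ compiler answers these particular questions very concretely: types attach to expressions, not to values, so 1 is an int while 1.0 and 2.0 are both double, regardless of what they equal. A minimal check (standard C++ only):

#include <type_traits>

int main() {
    static_assert(std::is_same<decltype(1), int>::value, "1 is an int");
    static_assert(std::is_same<decltype(1.0), double>::value, "1.0 is a double");
    static_assert(std::is_same<decltype(2.0), decltype(1.0)>::value,
                  "2.0 and 1.0 share a type despite having different values");
    static_assert(!std::is_same<decltype(1), decltype(1.0)>::value,
                  "1 and 1.0 have different types despite being equal in value");
}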

And later:

Is struct pair {int i; int j;} the same as int i; int j;? I mean, they both contain the same data, right? One is a pair of numbers, and the other is a pair of numbers. But why do I have this compiler error? My function expects a pair of two numbers, but I gave it two numbers! Why?!
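
The compiler’s answer here, in C and C++, is nominal typing: a struct is a type in its own right, distinct both from the loose data it bundles and from any other struct with identical members. A small sketch of the error the imagined beginner is hitting (the names pair_t and takes_pair are made up for illustration):

struct pair_t { int i; int j; };   // a named type that bundles two ints
struct point  { int i; int j; };   // same members, yet a different type

void takes_pair(pair_t p) { (void)p; }

int main() {
    pair_t p = {1, 2};
    takes_pair(p);                 // fine: p has exactly the expected type
    // takes_pair(1, 2);           // error: two loose ints are not a pair_t
    // takes_pair(point{1, 2});    // error: same data, different (nominal) type
}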

And even more later:

What is a monad? Is it a type, or a function on types? But isn’t a function itself a type? And a construction of a type with a monadic constructor will “trigger” certain functions? And what do you mean a monad doesn’t require a join operation?!

Somewhere along the line, naive intuition fails. And, having nowhere to go, what’s a person to do?

You’re not going to get a formal talk on type theory until you decide you want one. But if you want to build a correct intuition of what a type is, maybe it’s time for some research.

The Philosophy of Types

You’re not going to get a touchy-feely, truly intuitive sense of anything by reading a computer manual or a dense technical paper that already expects you to have a strong sense of what a type is. For this, I chose to go to the Stanford Encyclopedia of Philosophy.

Why? Because the idea of what a type is has deeper roots than type theory’s birth by Bertrand Russell alongside Russell’s Paradox for naive set theory. I’m sure that people have been categorizing things (and people) into types long before set theory was formalized (e.g. humans into male and female). And philosophy will provide intuition on more than just a pure mathematical level.

From Section 3: Types and Universals:

Are types universals? They have usually been so conceived, and with good reason. (See the entry on properties.) But the matter is controversial. It depends in part on what a universal is.

“But what’s a universal?” I asked:

Universals, in contrast to particulars, have been characterized as having instances, being repeatable, being abstract, being acausal, lacking a spatio-temporal location and being predicable of things. Whether universals have all these characteristics cannot be resolved here. The point is that types seem to have some, but not all, of these characteristics.

Here, predicable is defined as “capable of being asserted,” and predicate as “something that is affirmed or denied of the subject in a proposition in logic” (Merriam-Webster).

This seems to be closer to what a property is:

Properties (also called ‘attributes,’ ‘qualities,’ ‘features,’ ‘characteristics,’ ‘types’) are those entities that can be predicated of things or, in other words, attributed to them. Moreover, properties are entities that things are said to bear, possess or exemplify.

In other words, if there’s a sentence like, “The sky is blue,” here, “The sky” is the object, and “is blue” is the property.

Continuing on, skipping ahead a little bit:

When it comes to being predicable, however, most types diverge from such classic examples of universals as the property of being white or the relation of being east of. They seem not to be predicable, or at least not as obviously so as the classic examples of universals. That is, if the hallmark of a universal is to answer to a predicate or open sentence such as being white answers to ‘is white’, then most types do not resemble universals, as they more readily answer to singular terms. … It is also underscored by the observation that it is more natural to say of a token of a word—‘infinity’, say—that it is a token of the word ‘infinity’ than that it is an ‘infinity’. That is to say, types seem to be objects, like numbers and sets, rather than properties or relations; it’s just that they are not concrete particulars but are general objects—abstract objects, on some views. If, then, we follow Gottlob Frege (1977) in classifying all objects as being the sort of thing referred to by singular terms, and all properties as the sort of thing referred to by predicates, then types would be objects. Hence they would not fall into the same category as the classic examples of universals such as being white and being east of, and thus perhaps should not be considered universals at all. (Although perhaps all this shows is that they are not akin to properties, but are their own kind of universal.) A general exception to what has just been claimed about how we refer to types (with singular terms) might be inferred from the fact that we do more often say of an animal that it is a tiger, rather than that it is a member of the species Felis Tigris. This raises the question as to whether the species Felis Tigris is just the property of being a tiger, and if it isn’t, then what the relation between these two items is.

This last sentence raises a question: is being a member of the set of things that fall under a type the same as being of that type?

Well, we could say yes. But the object “tiger” is not the same as the abstract notion of “the tiger” as a type.

We could say no, but the tiger object is certainly an instance of a type of tiger.

Hm.

But what is a type?

From Section 4: What is a Type? (with selected parts put in bold) —

The question permits answers at varying levels of generality. At its most general, it is a request for a theory of types, the way set theory answers the question “what is a set?” A general answer would tell us what sort of thing a type—any type—is. For example, is it sui generis, or a universal, or perhaps the set of its tokens, or a function from worlds to sets, or a kind, or, as Peirce maintained, a law? These options are discussed in §4.1. At a more specific level, “what is a type?” is a request for a theory that would shed some light on the identity conditions for specific types of types, not necessarily all of them. It would yield an account of what a word (or a symphony, a species, a disease, etc.) is. This is in many ways a more difficult thing to do.

In other words, the answer is almost a non-answer. “What is a type?” I asked. “Well, it depends on your definition of what a type is…” replied the article.

Reading on further, it asks a question:

Is it a set?

It might appear that types are better construed as sets (assuming sets themselves are not universals). The natural thought is that a type is the set of its tokens, which is how Quine sometimes (1987, p. 218) construes a type. After all, a species is often said to be “the class of its members”. There are two serious problems with this construal. One is that many types have no tokens and yet they are different types. For example, there are a lot of very long sentences that have no tokens. So if a type were just the set of its tokens, these distinct sentences would be wrongly classified as identical, because each would be identical to the null set. Another closely related problem also stems from the fact that sets, or classes, are defined extensionally, in terms of their members. The set of natural numbers without the number 17 is a distinct set from the set of natural numbers. One way to put this is that classes have their members essentially. Not so the species homo sapiens, the word ‘the’, nor Beethoven’s Symphony No. 9. The set of specimens of homo sapiens without George W. Bush is a different set from the set of specimens of homo sapiens with him, but the species would be the same even if George W. Bush did not exist. That is, it is false that had George W. Bush never existed, the species homo sapiens would not have existed. The same species might have had different members; it does not depend for its existence on the existence of all its members as sets do.

So, it’s not just a set. A type does not disappear just because one of its members does.

But what if we want the type with a missing member to be a distinct type? I think that an “int, but not 0” type can be quite helpful and is quite distinct from int as a type.

I think the next paragraph is quite good:

Better, then, but still in keeping with an extensional set-theoretic approach, would be to identify a type as a function from worlds to sets of objects in that world. It is difficult to see any motivation for this move that would not also motivate identifying properties as such functions and then we are left with the question of whether types are universals, discussed in §3.

If a world can be treated as a set, then a type is a function that maps that set onto subsets of itself. I think that’s a fair definition: classifying objects depends on there being objects to classify and to draw divisions between.


Is it a kind?

From the section on it:

The example of homo sapiens suggests that perhaps a type is a kind, where a kind is not a set (for the reasons mentioned two paragraphs above). Of course, this raises the question of what a kind is; Wolterstorff (1970) adopts the kind view of types and identifies kinds as universals.

So a kind is not a set, but a type may be a kind. The article then turns to Bromberger’s view, on which the type is not the kind itself:

Instead he views the type as what he calls the archetype of the kind, defined as something that models all the tokens of a kind with respect to projectible questions but not something that admits of answers to individuating questions. Thus for Bromberger the type is not the kind itself, but models all the tokens of the kind. We shall see some difficulties for this view in §5 below.

That’s fair… the concept of a type as an archetype for a set of objects that subscribe to that archetype is reasonable.

Is it a law?

Thus types have a definite identity as signs, are general laws established by men, but they do not exist. Perhaps all Peirce meant by saying they do not exist was that they are “not individual things” (6.334), that they are, instead what he calls “generals”—not to be confused with universals. What he might have meant by a “general law” is uncertain. Stebbing (1935, p.15) suggests “a rule in accordance with which tokens … could be so used as to have meaning attached to them”. Greenlee (1973, p. 137) suggests that for Peirce a type is “a habit controlling a specific way of responding interpretatively.” Perhaps, then, types are of a psychological nature. Obviously two people can have the same habit, so habits also come in types and tokens. Presumably, types are then habit types. This account may be plausible for words, but it is not plausible for sentences, because there are sentences that have no tokens because if Φ,Ψ are sentences, then so is (Φ & Ψ) and it is clear that for Peirce “every [type] requires [tokens]” (2.246). And it is much less plausible for non-linguistic types, like types of beetles, some of which have yet to be discovered.

I do like the idea of having a type be the way of interpreting tokens or instances of objects. But then the question becomes, “Does an object intrinsically possess a type?” Here, it appears that the answer is, “That depends on how people categorize it.” Once again, I think this can be seen as a mapping from a universal set of objects into subsets that satisfy the type.

But so what?

I think Section 5 does a good job at creating a “So What?” moment for us:

The relation between types and their tokens is obviously dependent on what a type is.

I agree. There exist multiple possible ways to look at the abstract idea of a type.

If it is a set, as Quine (1987, p. 217) maintains, the relation is class membership.

Then we can defer to a set theory (a popular choice being ZFC), a basis for modern mathematics.

If it is a kind, as Wolterstorff maintains, the relation is kind-membership.

The idea of a kind is a bit foggy on first read, but a kind looks to be something that models all the objects within the “set” or “category” of a particular kind “with respect to projectible questions,” but for which answers to “individuating questions” are not valid. That is, I can ask you about a property of a kind, but I can’t ask you, “Where was Homo sapiens, as a species or type, on New Year’s Eve?” because Homo sapiens is a species, not a specific object.

If it is a law, as Peirce maintains, it is something else again, perhaps being used in accordance with. (Although Peirce also says a token is an “instance” of a type and that the token signifies the type.)

Seems somewhat ambiguous on a first reading.

Nonetheless, it has often been taken to be the relation of instantiation, or exemplification; a token is an instance of a type; it exemplifies the type.

And to this, I think we can agree: things that are of a type exemplify the type; or, things that are of a type are examples of that type.

Though, that seems pretty useless. An identity property isn’t anything special.

That wasn’t much of a “so what?” moment…

I think, from reading the article on types and tokens, we can assume the following:

  • Types are usually interpreted as universals that cannot be picked out and given individual object properties. That is, we can talk about “all ints have some value in the range a to b,” but we can’t really say, “the type-object int itself has a value between a and b.” A type is not a “physical object”; it is more like an abstract object that stands in for all of the objects admitted to the type, but cannot itself be validly picked out and observed “physically” or “in particular.”
  • Types can exhibit set properties if you adopt such a philosophy. This means that if we match our own philosophy of types to set theory, we can have all the power and work already done on ZFC and, by possible extension, category theory. This has worked out very well — we can see that functional programming is a very valid paradigm of programming, with links to category theory.
  • The mechanism for recognizing objects as “tokens” or “instances” of a type should be defined. And we’ve come up with different ways of handling that decision: simply typed lambda calculus, dependent type theory, refinement types. Most programming languages provide basic types that are already defined, plus operations for defining other types in terms of them, recursively, until you bottom out at those base types again (see the sketch just after this list).
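
Here is the small sketch promised in the last bullet, in plain C++ (the type names are invented for illustration): base types compose into new types, which compose again, and the compiler checks every layer.

#include <string>
#include <vector>

// Base types (double, std::string) compose into a new type...
struct Point {
    double x;
    double y;
};

// ...which composes again into another type...
struct City {
    std::string name;
    Point       location;
};

// ...and again, recursively, until everything bottoms out at base types.
struct Atlas {
    std::vector<City> cities;
};

int main() {
    Atlas a;
    a.cities.push_back(City{"Springfield", Point{1.0, 2.0}});
    // a.cities.push_back(Point{1.0, 2.0});  // error: a Point is not a City
}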

As a programmer and computer engineering candidate, I think the most popular application of type theory is in type systems for programming languages and compilers. Most of the popular languages of today have some notion of type and use it for type checking, statically at compile time or dynamically at run time, for safety and for checking the validity of programs.

As the computer deals with the abstract notion of information, we can consider the following questions:

Final Questions

  • How does type theory mix with information theory, especially as applied in physical hardware architecture and abstract software architecture and design?
  • How is type theory applied to type systems for programming languages in the “real world,” and why is it popular? What are the direct productivity and safety gains, and the properties, that make type systems almost universal among modern programming languages?
  • What are some efficient algorithms for type-system operations? Are some type systems better than others with respect to the time complexity of type checking, and so on?

But wait, what would you say to a beginner?

  • Types represent a class of objects that all have something in common. For example, the int type is a class of integers that all fall within such-and-such a range.
  • In C and languages at a similar level of abstraction, all types are language-level interpretations of the raw binary information that actual computer architectures run on. The same sequence of bits can represent multiple things depending on which “type” lens is being used (see the sketch after this list). This matters less as more abstraction is applied, because the coupling between the raw binary representation of data and our type-level interpretation becomes looser.
  • How you decide whether something is of a type depends on the language, but most often we simply declare a piece of data to be of some type, and the type system takes it from there, limiting how we are allowed to manipulate it. And you can always define more ways to convert between types.
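
To make the second bullet concrete, here is a small sketch (assuming the usual 32-bit IEEE 754 float): the same 32 bits, read through the float lens and through the unsigned-integer lens, print as two very different values. std::memcpy is the portable way to re-read bits under another type.

#include <cstdint>
#include <cstring>
#include <iostream>

int main() {
    float f = 1.0f;
    std::uint32_t bits = 0;
    static_assert(sizeof f == sizeof bits, "assumes a 32-bit float");

    // Copy the raw bytes unchanged; only the "type lens" differs.
    std::memcpy(&bits, &f, sizeof bits);

    std::cout << "as float:    " << f << "\n";      // 1
    std::cout << "as uint32_t: " << bits << "\n";   // 1065353216 (0x3F800000)
}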


Notes on Eclipse

  1. Make sure to configure Eclipse to remove a lot of the startup plugins, and to set up its configuration file nicely.
  2. We can run cmd.exe inside the Eclipse console by following the instructions here. If you take off the initial /C flag, we get the usual output.
  3. MinGW-w64 can be made compatible with Eclipse provided:
    1. You configure your PATH to point to the bin directory for MinGW-w64.
    2. You change the name of the external build program to mingw32-make (you can still use the -j4 flag).