2011-10-20

Adding "Invalid" value to enumeration

Please don't do that:

    enum TrafficLightColor
    {
        red,
        yellow,
        green,
        invalid
    };

You've just artificially extended the domain of every function that wants to operate on TrafficLightColor. Now we have to do stupid things everywhere:

    #include <cassert>
    #include <cstdint>

    uint32_t toRgb(TrafficLightColor c)
    {
        switch (c)
        {
        case red:    return 0x00FF0000;
        case yellow: return 0x00FFFF00;
        case green:  return 0x0000FF00;
        default:
            assert(false);
            return 0xFFFFFFFF;
        }
    }

But why would one pass "invalid" to this function? Wouldn't it be clearer if red, yellow, and green were the only possible values, so that the function simply never fails?

As a general rule, the smaller a function's domain, the better.

The first reason enums grow these stupid "invalid" and "unknown" values is that somewhere in the code we convert from an integer to the enumeration.

    TrafficLightColor fromInteger(uint32_t n)
    {
        switch (n)
        {
        case 0: return red;
        case 1: return yellow;
        case 2: return green;
        default: return invalid;
        }
    }

That is the place where you should fail loudly on an incorrect integer -- in the conversion function! Either by firing an assert or by throwing an exception (which one depends on many factors).
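
For example, a throwing version might look like this (a sketch; std::out_of_range is just one reasonable choice of exception type):

    #include <stdexcept>

    TrafficLightColor fromInteger(uint32_t n)
    {
        switch (n)
        {
        case 0: return red;
        case 1: return yellow;
        case 2: return green;
        // fail right here, at construction time
        default: throw std::out_of_range("not a TrafficLightColor");
        }
    }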

The second reason is that some code wants to know the number of elements in the enumeration, and we want to use it in the following manner:


    enum TrafficLightColor
    {
        red,
        yellow,
        green,
        invalid,
        number_of_elements = invalid // counts the valid colors only
    };

    for (uint32_t c = red; c < number_of_elements; ++c)
       ...

Don't do that either. Just use something like this instead:

    enum TrafficLightColor
    {
        red,
        yellow,
        green
    };
    
    static uint32_t const number_of_elements = green + 1;

Don't worry too much that you'll need to change this constant every time the enum changes: all clients that use TrafficLightColor need to be recompiled anyway.

You want your compiler to complain about switch statements that don't cover some of the enumeration values. And you should fail at enum construction time, not when the enum is actually used.
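
For instance, if you drop the default branch, GCC and Clang will warn when some color has no case (-Wswitch, which is included in -Wall):

    uint32_t toRgb(TrafficLightColor c)
    {
        switch (c) // -Wswitch: warning here if some color is unhandled
        {
        case red:    return 0x00FF0000;
        case yellow: return 0x00FFFF00;
        case green:  return 0x0000FF00;
        }
        assert(false); // unreachable for valid inputs
        return 0xFFFFFFFF;
    }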

2011-09-20

On overtime

It is widely acknowledged these days that chronic overtime is bad: it makes your team unproductive, unmotivated and unhappy.

At the same time, never having to do any extra hours could be an indicator that one's work is unimportant. Or that the schedule is too conservative -- which is both economically wasteful and demotivating.

So I think it is even useful to have occasional bursts during the development cycle. Not only can a burst help meet a tight deadline, prepare a demo, polish the last glitches before a release, or even -- yes -- simply make up for a mistake made during time estimation; it can also actually motivate and unify the team. It is healthy to push yourself once in a while.

For me, "occasional burst" means to work harder than my long-term pace allows for about a week or two twice a year -- otherwise it's neither "occasional" nor "burst" anymore.

2011-09-10

Templates vs. virtual functions in C++, part II

In part I, I wrote that templates should generally be preferred to virtual functions when dynamic polymorphism is not absolutely needed.
In this part I will deal with some common arguments in favor of object-oriented style interfaces and criticisms of templates.

So why do C++ developers often prefer interfaces with virtual functions to solutions based on templates? Here is the list of motives I'm aware of:

- code that uses OO interfaces can be hidden in .cpp/.cc files, whereas templates force you to expose the whole code in the header file;
- templates will cause code bloat;
- OO interfaces are explicit, whereas requirements on template parameters are implicit and exist only in the developer's head;
- heavy usage of templates hurts compilation speed.

Let's scrutinize them in order.

Code that uses OO interfaces can be hidden in .cpp/.cc files, whereas templates force you to expose the whole code in the header file

Here is how code that works with object-oriented hierarchies usually looks:

    // in header file
    void f(Base const& t);

    // in source file
    void f(Base const& t)
    {
        // ...
        // hairy implementation
        // ...
    }

So nobody sees the implementation of f() (*).

Here is how templates are usually implemented:

    // in header file
    template <class T>
    void f(T const& t)
    {
        // ...
        // hairy implementation
        // not only hurts aesthetic feelings of every visitor
        // but also couples the interface with dependencies of the implementation
        // ...
    }

The aesthetic problem is easy to fix by extracting the implementation into a separate "implementation" header file.
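
A sketch of that split (the file names are made up; Boost, for example, uses the .ipp suffix for such implementation headers):

    // f.hpp -- what clients read: the interface only
    template <class T>
    void f(T const& t);

    #include "f_impl.hpp"

    // f_impl.hpp -- the hairy implementation lives here,
    // out of sight but still available to the compiler
    template <class T>
    void f(T const& t)
    {
        // ...
    }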

The second problem -- exposing the dependencies of the implementation -- is more difficult. It can be fixed, but only if you know the subset of types your template will be instantiated with: provide explicit instantiations of the template functions/classes with those parameters, and keep the definition in the source file:

    // in header file
    template <class T>
    void f(T const& t);

    // in source file
    template <class T>
    void f(T const& t)
    {
        // ...
        // hairy implementation is now hidden
        // ...
    }

    // explicit instantiations, in the same source file,
    // after the definition
    template void f<T1>(T1 const&);
    template void f<T2>(T2 const&);

Templates will cause code bloat

Usually your code will actually shrink compared to the non-template version -- because only the template functions, classes, and member functions that you actually use in the code get instantiated, and only with the type arguments you actually use! (**)

    #include <vector>

    void f()
    {
        std::vector<int> v;
        v.push_back(42);
        // only the constructor, destructor, push_back()
        // and whatever they use internally will be instantiated,
        // and only for the type argument int
    }

Bloat can happen, though, if you are not careful. The classic example is containers of pointers. In a C implementation there would be just one container that holds void*, and a couple of macros for convenient (but very unsafe) casting of those pointers to specific types. In a naive C++ implementation, a bunch of functions will be generated for each pointer type:

    std::vector<int*> ints;
    std::vector<float*> floats;
    std::vector<Goblin*> goblins;
    ints.push_back(0);
    floats.push_back(0);
    goblins.push_back(0);
    // three copies of push_back() will be generated... or not?

All decent implementations, though, provide a specialization for pointers:

    // primary template
    template <class T> class vector; // additional template parameters omitted

    // specialization for void*
    template <> class vector<void*>
    {
    public:
        // here comes the real implementation
        void push_back(void* p) { /* ... */ }
    };

    // partial specialization for pointer types
    template <class T> class vector<T*> : private vector<void*>
    {
    private:
        typedef vector<void*> base;
    public:
        // delegates to the vector<void*> version with a cast;
        // inlined, so it causes no code bloat compared with
        // the version one would implement without templates
        // (also safe: the user cannot screw up)
        void push_back(T* p) { base::push_back(static_cast<void*>(p)); }
    };

OO interfaces are explicit, whereas requirements on template parameters are implicit

This one is sad but absolutely true. Remember the example from part I:

    template <class T>
    void f1(T const& t)
    {
        // no requirements on T except in comments if you are lucky
        bool flag = t.do_something();
    }

    // serves as an explicit specification
    class Base
    {
    public:
        virtual bool do_something() const;
        // ...
    };

    void f2(Base const& t)
    {
        // explicit requirement is Base's class interface
        bool flag = t.do_something();
    }

Concepts would have solved this problem, but unfortunately they were rejected from C++11. Let's hope they appear in the next Standard.

Until then, you have two options.
  1. You can specify requirements in comments and documentation (as the SGI STL documentation and the C++ Standard do).
  2. You can emulate concepts; the Boost Concept Check Library is a nice tool for that. At the very least, you can use constraints in the template implementation, as sketched below (***).
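
A minimal hand-rolled constraint might look like this (a sketch; note that taking the member-function pointer demands the exact signature, which is stricter than the real requirement of "callable and convertible to bool"):

    template <class T>
    void f1(T const& t)
    {
        // constraint: fails at instantiation time, close to the point
        // of error, if T has no bool do_something() const
        bool (T::*requirement)() const = &T::do_something;
        (void)requirement; // silence "unused variable" warnings

        bool flag = t.do_something();
        (void)flag;
    }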

Heavy usage of templates hurts compilation speed

This one is also true. Compilation slows down for the following reasons:
  • "real code" is generated from templates, and that takes time. Not much can be done about this "issue". (Alternatively, you could write all the code by hand, but that would take even more time.)
  • templates are usually implemented in header files (see the first point), and thus increase preprocessing time and introduce new dependencies that every template user inherits. Sometimes that can be mitigated with explicit instantiation requests, other times with careful dependency management (don't include what can just be forward-declared, etc.). Sometimes you can just live with it, and other times you should consider another abstraction tool instead of templates.

All in all, in my opinion, in modern C++ templates and static polymorphism should be considered the basic design tool -- especially for libraries -- and object-oriented techniques should be considered only after them, not as something you start with.

_____________________________________________________________________

(*) Unless the developer wants to make it inline, which s/he usually doesn't -- if the efficiency of this function were so important, s/he wouldn't use virtual functions here.

(**) Subject to some restrictions: for instance, virtual member functions are always instantiated, [provide second example]. For more details, read the book C++ Templates: The Complete Guide.

(***) Constraints solve another (but closely related) problem: early diagnosis of violations of type requirements. Unfortunately, they are poorly suited to documenting the interface, as they are specified in the implementation, not in the interface.

2011-09-04

Templates vs. virtual functions in C++, part I

Virtual functions and templates are two major tools for supporting polymorphism in C++.

Virtual functions are basic blocks for supporting dynamic polymorphism and thus object-oriented paradigm in C++. Templates are useful for generic programming and metaprogramming; they allow static polymorphism.

Each tool has its own applications. Here is advice from Bjarne Stroustrup's book The C++ Programming Language, 13.8:
  1. Prefer a template over derived classes when run-time efficiency is at a premium.
  2. Prefer derived classes over a template if adding new variants without recompilation is important.
  3. Prefer a template over derived classes when no common base can be defined.
  4. Prefer a template over derived classes when built-in types and structures with compatibility constraints are important.

Note that this leaves only one case for virtual functions: when adding new variants without recompilation is important. And if you don't need this benefit, then start with templates. There are several reasons for this:

Templates are non-intrusive

Once a type meets the requirements imposed by a function or class template, it can be used unmodified. Usually that means the type is required to provide some functions or member functions with predefined semantics.

To pass an object to a function that operates via pointers/references to the root of some hierarchy, the type has to be specially prepared (either derived from this root, directly or indirectly, from the beginning, or adapted later) and has to have member functions with the exact signature (*):

    template <class T>
    void f1(T const& t)
    {
        // no requirements on T except that it should provide
        // member-function do_something that
        // returns something convertible to bool
        // and can be called without arguments (so it can have e.g. default parameters)
        bool flag = t.do_something();
    }

    class Base
    {
    public:
        virtual bool do_something() const;
        // ...
    };

    void f2(Base const& t)
    {
        // first, t should be of class derived from Base
        // second, Derived::do_something() should follow 
        // the signature of Base::do_something()
        bool flag = t.do_something();
    }

Templates produce faster and smaller code

Call it "premature optimization" if you like. But if you are implementing a library you often don't know what will be on the critical path of an application using it. And if you are implementing a C++ library, you'd better make everything as fast as possible, and without using unnecessary memory.

    template <class T>
    void f1(T const& t)
    {
        // no virtual call overhead
        // t doesn't have to store a pointer to vtable
        bool flag = t.do_something();
    }

    void f2(Base const& t)
    {
        // probably virtual call overhead
        // stores pointer to vtable
        bool flag = t.do_something();

        // that can be undesirable especially for small objects
        // and trivial and/or frequently called operations

    }

Templates don't lose information about types

    template <class T>
    void f1(T const& t)
    {
        // here you know the exact type of t and can make further decisions
        // based on it (perhaps using traits or strategies)
    }

    void f2(Base const& t)
    {
        // here the information about the exact type is erased
        // you cannot get it back without run-time type queries
        // (which are inefficient and don't scale with the number of derived classes)
    }

Making up a hierarchy costs nothing

... but not vice versa. You can always wrap types designed for use as template parameters in an object-oriented hierarchy -- without run-time penalty (**) -- but once you have "virtuality" in place, its overhead is with you forever!
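
A minimal sketch of such a wrapper (Widget, Base, and Adapter are made-up names for illustration):

    // a "template-style" type: no base class, no virtual functions
    class Widget
    {
    public:
        bool do_something() const { return true; }
    };

    // OO interface, introduced later and only where dynamic dispatch is needed
    class Base
    {
    public:
        virtual ~Base() {}
        virtual bool do_something() const = 0;
    };

    // generic adapter: wraps any T that meets the implicit requirements
    template <class T>
    class Adapter : public Base
    {
    public:
        explicit Adapter(T const& t) : t_(t) {}
        virtual bool do_something() const { return t_.do_something(); }
    private:
        T t_;
    };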

I consider this one the most important. It means that, other things being equal, you probably won't make a mistake by starting with templates. This is possible because of the previous benefits: performance, non-intrusiveness, and keeping type information.

In part II we will deal with some arguments in favor of using dynamic polymorphism.

_____________________________________________________________________


(*) There is a relaxation for covariant return types (and for exception specifications, which can be stricter in the derived class -- provided you consider them part of the signature).


(**) Of course, compared with having hierarchy from the start.

2011-08-12

Review your commit changes with 'git add --patch'

I found that git is particularly useful for reviewing changes I'm going to commit.

When I'm ready to commit and my change is more than a couple of lines of code in one file, I do

  $ git add --patch .

Then git nicely shows me every hunk of change I made in the code.

I see two benefits of working that way:
  1. Obviously it is an opportunity to skim through my own code, and it leads to more focused reviews than just reading through the whole source, or even skimming the full patch to be committed.
  2. It leads to atomic commits, which I'm a big fan of:
  • I don't forget to add things to the index. This is not that important with git, 'cause I can always amend my commits, or even rebase interactively, but it's always nicer to form a good commit earlier rather than later. Of course, I could just do 'git add .' or even 'git commit -a' for that, which leads to the next point:
  • if for some reason I made several unrelated changes, I have an opportunity to split my change into several logical commits. That is where the real power comes in.
The workflow is as follows:

  $ <hack hack hack>
  $ git add --patch . # interactive session where I select what comes to commit
  $ git stash save --keep-index # stash the unstaged changes, keeping the index; needed for the next step:
  $ <compile and run tests to make sure you haven't screw with partial adding patches>
  $ git commit
  $ git stash pop # get the remainder of my changes back in working copy
  $ <repeat with git add>

If I have screwed up somewhere:

  $ git reset # unstages the changes from the index (they stay in the working copy)

If I want to review the changes one more time after adding but before commit:

  $ git diff --staged

It sounds like a lot of work for every commit. I don't know. First of all, I do it fast enough (shell history can help do it even faster). And I use this technique only if I've touched code in several places.

Advanced techniques and details of git add you can find in the documentation, and in Markus Prinz's article.

2011-08-04

Hacking in front of a kid

I've been hacking on a simple breakout/Arkanoid clone lately. It uses pygame for graphics/audio/input/windows/etc., so that I can concentrate on the game logic.

My goal is to implement something cool for my 6-year-old son. But I also try to show my son what daddy can do with his text editor and terminal. The game logic is next to trivial for such a simple game, so I trick myself into developing new features in front of him.

I must say that it is an amazing experience. We do it in 10-20 minute sessions. During that time I implement some game feature, and afterwards he plays a bit. While I'm hacking, he is sitting next to me and staring at Python code. And for sure, if I don't produce something exciting in 5 minutes or so, he gets bored. So I'm coding in real time like hell. (And using a dynamic language with an extremely short "hack-try-fail-fix-try-works" cycle suits that really well.)

No tests, no design (except some half-minute thinking in advance), just pure hacking.

At this point I just want to show the kid what you can do with programming. I think you need to excite somebody before teaching them industrial methods. Like, you buy your children Lego, you show them what they can build with it, and then you let them play with it. And only later on do you teach them that they should design their buildings first, pass some regulatory mandated tests, and follow industrial good practices. Hacking simple Python games with a nice game engine is just like that -- constructing Lego buildings.

I think I'm halfway done with exciting my son about programming, because after the third or fourth session he asked me what it takes to be a programmer :)

P.S. Disclaimer: I do clean up the code a bit while he is sleeping, though, so that when we start the next day it doesn't look like a complete mess. One day I will share it on github.

2011-06-14

Being cross-platform

Too often developers sing the praises of cross-platform code. I claim that striving to be cross-platform is not always that good.

Platform independence is an abstraction. Like any abstraction, it is good when it solves a real problem. But it can just as well be premature, contributing nothing but complexity and inconvenience for both the code's clients and its implementers.

Working with just one platform, you get the benefits of using native platform tools; compiling your code with the native build system; using just one compiler and one standard library; using just one set of system primitives; and all that without being forced to work around the infinite quirks of each individual component.

If you cannot afford the luxury of developing on just one platform, strive at least to work with as small a subset as possible, and with platforms as close to each other as makes sense. For example, developing a network server for Linux and FreeBSD can be OK (at least you have POSIX and pretty much the same compiler), but adding Windows to the mix is not so fun. In the same way, developing a desktop game for different Windows versions makes sense, but striving for platform independence only because "one day we may want to run it on Mac" would likely add no value but will definitely increase your budget/schedule.

After all, you have to stop somewhere. Like, "this application is only going to work on desktops", or "this will be a library to help with mobile development". My point is that the earlier you stop, the better. The less specific and more portable the standard you comply with, the fewer useful primitives you get. In the end you'll be left without threads and directories. Sometimes there is a reason for that.

As with any abstraction, don't try to build this one "just in case": 1) you aren't going to need it: build it on demand instead; 2) you will do it wrong: let it grow organically instead.

Having said that, it doesn't mean that platform-dependent primitives should proliferate through all your code. On the contrary, your higher-level code should probably not depend on platform-specific low-level details. But hey, that has little to do with "cross-platform" stuff; it is just how reasonable abstractions are built!

Of course, sometimes you can get abstraction from the platform for free -- for example, when there already is a good cross-platform library or tool that does just what you need. In that case there is no reason not to make use of it. Remember, platform independence is not bad in itself, but only when it implies costs that could otherwise be avoided.