2011-08-12

Review your commit changes with 'git add --patch'

I've found that git is particularly useful for reviewing the changes I'm about to commit.

When I'm ready to commit and my change is more than a couple of lines of code in one file, I do

  $ git add --patch .

Then git nicely shows me every hunk of the changes I've made to the code.
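For each hunk, git prints the diff and asks what to do with it. The prompt looks roughly like this (the exact list of options depends on the git version):

  Stage this hunk [y,n,q,a,d,s,e,?]?

Here 'y' stages the hunk, 'n' skips it, 's' splits it into smaller hunks, 'e' lets me edit it by hand, and '?' prints help for the rest.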

I see two benefits of working that way:
  1. Obviously, it is an opportunity to skim through my own code, and it leads to more focused reviews than if I just read through the whole source, or even skimmed through the full patch to be committed.
  2. It leads to atomic commits, which I'm a big fan of:
  • I don't forget to add things to the index. This is not that important when using git, since I can always amend my commits or even rebase interactively, but it's always nice to form a good commit earlier rather than later. Of course, I could just do 'git add .' or even 'git commit -a' for that, which leads to the next point:
  • if for some reason I have made several unrelated changes, I get an opportunity to split my change into several logical changes if needed. That is where the real power comes in.
The workflow is as follows:

  $ <hack hack hack>
  $ git add --patch . # interactive session where I select what goes into the commit
  $ git stash save --keep-index # stash the unstaged changes, keeping the index; needed for the next step:
  $ <compile and run tests to make sure the partially staged change isn't broken>
  $ git commit
  $ git stash pop # get the remainder of my changes back into the working copy
  $ <repeat with git add>

If I have screwed up somewhere:

  $ git reset # unstages the changes from the index (they remain in the working copy)

If I want to review the changes one more time after adding but before committing:

  $ git diff --staged

It may sound like a lot of work for every commit. I don't know. First of all, I do it fast enough (shell history helps to do it even faster). And I use this technique only if I've touched code in several places.

Advanced techniques and details of git add can be found in the documentation and in Markus Prinz's article.

2011-08-04

Hacking in front of a kid

Lately I've been hacking on a simple breakout/arkanoid game clone. It uses pygame for graphics/audio/input/windowing/etc., so that I can concentrate on the game logic.

My goal is to implement something cool for my 6-year-old son. But I also try to show my son what daddy can do with his text editor and terminal. The game logic itself is next to trivial for such a simple game, so I trick myself into developing new features in front of him.

I must say that it is an amazing experience. We do it in 10-20 minute sessions. During that time I implement some game feature, and afterwards he plays for a bit. While I'm hacking he sits next to me and stares at the Python code. And for sure, if I don't produce something exciting within 5 minutes or so, he gets bored. So I'm coding in real time like hell. (And using a dynamic language with an extremely short "hack-try-fail-fix-try-works" cycle suits that really well.)

No tests, no design (except some half-minute thinking in advance), just pure hacking.

At this point in time I just want to show the kid what you can do with programming. I think you need to excite somebody before teaching them industrial methods. Like, you buy your children Lego, and you show them what they can build with it, and then you want them to play with it. Only later on do you teach them that they should design their buildings first, and that they should pass some regulatory-mandated tests and follow good industrial practices. Hacking simple Python games with a nice game engine is just like that -- constructing Lego buildings.

I think I'm halfway done with getting my son excited about programming, because after the third or fourth session he asked me what it takes to be a programmer :)

P.S. Disclaimer: I do clean up the code a bit while he is sleeping, though, so that when we start the next day it doesn't look like a complete mess. One day I will share it on github.

2011-06-14

Being cross-platform

Too often developers sing the praises of cross-platform code. I claim that striving to be cross-platform is not always that good.

Platform independence is an abstraction. Like any abstraction, it is good -- when it solves a real problem. But it can just as well be premature, contributing nothing but complexity and inconvenience, both for the code's clients and for its implementers.

Working with just one platform, you get the benefits of using native platform tools; compiling your code with the native build system; using just one compiler and one standard library; using just one set of system primitives; and all that without being forced to work around the infinite quirks of each individual component.

If you cannot afford the luxury of developing on just one platform, strive at least to work with as small a set as possible, and with platforms as close to each other as makes sense. For example, developing a network server for Linux and FreeBSD can be OK (at least you have POSIX and pretty much the same compiler), but adding Windows to the mix is not so much fun. In the same way, developing a desktop game for different Windows versions makes sense, but striving for platform independence only because "one day we may want to run it on a Mac" will likely add no value, but will definitely inflate your budget and schedule.

After all, you have to stop somewhere. Like, "this application is only going to work on desktops", or "this will be a library to help with mobile development". My point is that the earlier you stop, the better. The less specific and more portable the standard you comply with, the fewer useful primitives you get. In the end you'll be left without threads and directories. Sometimes there is a good reason for that.

As with any abstraction, don't try to build this one "just in case": 1) you aren't going to need it -- build it on demand instead; 2) you will get it wrong -- let it grow organically instead.

Having said that, it doesn't mean that platform-dependent primitives should proliferate through all your code. On the contrary, your higher-level code should probably not depend on platform-specific low-level details. But that has little to do with the "cross-platform" question -- it is just how reasonable abstractions are built!
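A trivial sketch of what I mean (sleep_ms is my own made-up name, not taken from any library):

    // The #ifdef lives in exactly one low-level spot, behind a tiny interface.
    #ifdef _WIN32
    #include <windows.h>
    void sleep_ms(unsigned ms) { Sleep(ms); }
    #else
    #include <unistd.h>
    void sleep_ms(unsigned ms) { usleep(ms * 1000); }
    #endif

    // Higher-level code just calls sleep_ms() and never mentions the platform.

This is ordinary layering; the rest of the code stays platform-agnostic without any grand cross-platform framework.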

Of course, sometimes you can get abstraction from the platform for free -- for example, when there is already a good cross-platform library or tool that does exactly what you need. In that case there is no reason not to make use of it. Remember, platform independence is not bad in itself; it is bad only when it implies costs that could otherwise be avoided.

2011-06-07

Enums in C++

"Should I use enumeration in this code, or could I just use plain integer/boolean type for representing set of unique integers?"

Enumerations are OK if:
  • you use the names in 'switch' statements
  • you use the values as template parameters and specialize on them
  • the semantics of some code changes if a new value is added
  • the semantics of the code does not change if the values of two members of the set are swapped
You'd better stick with plain integers if:
  • you routinely iterate through the values in a 'for' loop
  • you perform arithmetic on the values
  • the semantics of the code does not change if a new value is added
For values that represent a strict binary choice -- yes/no, forward/backward, good/bad -- it is almost always a great idea to have an enumeration instead of a boolean type: it communicates the variable's semantics much more clearly.
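A tiny sketch (all the names here are mine, made up for illustration):

    enum Direction { Forward, Backward };

    void move_cursor(Direction d) { /* ... */ }

    void example()
    {
        move_cursor(Forward);   // the intent is obvious at the call site
        // compare with a bool parameter: move_cursor(true); -- true what?
    }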

To sum up, you should use enumerations only if you are interested in the names and not in the values (with a possible exception for serialization), and if the set of integers is bounded.
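For example, the first case from the list above might look like this (the names are mine):

    enum Color { Red, Green, Blue };

    const char* name(Color c)
    {
        switch (c) {
        case Red:   return "red";
        case Green: return "green";
        case Blue:  return "blue";
        }
        return "unknown";   // adding a new Color changes the semantics here
    }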

2011-06-03

SCM: atomic commits and merges

(Note: this post is more relevant to the centralized world of SCM.)

There are two types of merges that we face in everyday work: merging from a more stable branch to a less stable branch (release to trunk, trunk to feature, central to local, etc.), and merging from a less stable branch to a more stable one (vice versa, though I hope you never integrate the trunk into a release branch).

The first one is for getting the latest (and hopefully somewhat stable) changes.
The second one is for delivering your work to the world.

Typically, when you want to integrate, the two branches have diverged by more than one commit.

How do you reconcile the idea of atomic commits with merging? I'll show it on the example of integrating back and forth between the trunk and some feature branch, but the same reasoning applies to any kind of integration/merging (just substitute "trunk" with "more stable" and "feature branch" with "less stable").

When merging changes from the trunk to the feature branch, you should merge change by change, not the whole bulk of changes at once. First of all, it is obviously easier to merge this way. A more important reason is that it allows you to test and review each change made on the trunk in isolation. It is a common mistake to merge everything in one lump change: a lump change means a lump diff, and a log message like "merge from trunk" -- which definitely doesn't tell you much.

As every single commit to the trunk is supposed to be atomic, integrating those changes into the feature branch should keep them atomic as well.
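With Subversion, for instance, this could look roughly like the following (the revision number and URL are made up):

    $ cd feature-branch-working-copy
    $ svn merge -c 1234 ^/trunk      # merge exactly one trunk change
    $ <build, run the tests, review the diff>
    $ svn commit -m "Merge r1234 from trunk: <what that change was about>"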



When merging changes from the feature branch to the trunk, you cannot and don't want to integrate change by change: by definition, you created your feature branch because you didn't want to commit the changes to the trunk. So you can only integrate back to the trunk when the policy of the feature branch meets the policy of the trunk.


    (What is "a policy of the branch"? Think about a policy as an invariant of a codeline. A policy of the trunk could be "the code is compilable and passes unit-tests for whole system", whether a policy of some experimental branch could say "commit whatever you want". Clearly you want to integrate your experimental branch back only when it is also compilable and passes unit-tests.)

Special care is needed to keep such lump commits atomic: your feature branch should have a single purpose. In other words, a branch should be feature-oriented, not component-oriented. Don't mix whitespace fixes, refactorings, bug fixes, and features in one feature branch. For example, if you work in the GIS domain and are optimizing memory consumption of the routing component, you should have an "optimize-routing-memory" branch; having a "routing" branch for years is plain stupid (to whom it may concern: any coincidences are not coincidental).




For distributed SCMs all this is much simpler, as such systems preserve the whole history graph (instead of trying to flatten it into a linear history).

Some articles and books that I recommend reading:
High-level Best Practices in SCM from the Perforce site: Perforce Software, and especially Laura Wingerd (their VP of Product Technology), are known for evangelizing good practices in (centralized) SCM usage -- not specific to their own product. This article is about... well, high-level best practices in SCM.

Streamed Lines: Branching Patterns for Parallel Software Development: everything about branching, mostly in "pattern language" format: what, how, when, when not, etc.

2011-05-26

SCM: you should make atomic commits

As with functions/modules/classes, tools, and almost everything else in software development, a change that you are going to commit should have a single purpose, and should accomplish that purpose. I will refer to such a changeset as an atomic commit.
The most important side effect of an atomic commit is that it produces a diff that is easy to read and understand. In turn, having a diff that is easy to read and understand makes you happy because:
- it simplifies your debugging by localizing changes;
- it simplifies code review by localizing changes.

An atomic commit doesn't mix refactoring, bug fixing, development of a new feature, and style changes. Neither does it mix several refactorings, or several bug fixes, or whatever.

An atomic commit is self-contained, and accumulates all the changes that serve its purpose.

An atomic commit has a short and to-the-point log message, which usually contains neither the word 'and' nor lists.

    "But I don't have time to fix that minor issues, like capitalizing and spaces, separately: I want to do that along working on my primary task at hand!"

If those are minor issues, why bother spending time on them? It's not "fixing" then, it's polishing. You should polish your product, not your code (unless your code happens to be the product). Otherwise, just get a piece of paper or a text file and note down what should be done after you have finished your task.

    "But often while working on a task I notice some TODOs or small things that I will forget if I haven't fix them right now!"

TODOs are easy to grep, aren't they? Why not get into the good habit of going through them periodically? Small things can be turned into TODOs so you don't forget them. Otherwise, just get a notebook or a text file and note down what should be done after you have finished your task.

    "But dumping small things to the piece of paper kills my flow!"

And fixing those things while working on the bigger one doesn't? Then just forget about the small things.

Even better, switch to a modern SCM. These days modern means distributed, and most often that means git or Mercurial. For those who use such a tool, there are no excuses at all for not producing atomic commits: in your local repository you can commit absolutely freestyle, and then slice the meat you've just produced into nice atomic cuts before pushing it upstream. (For git users, interactive rebase and partial commits are the primary tools for that.)
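A rough sketch of that workflow with git (the branch names are made up):

    $ <hack hack hack, committing freestyle as I go>
    $ git rebase --interactive origin/master   # reorder, squash, or split the commits
    $ git push                                 # only the cleaned-up commits go upstream

To split a commit during the interactive rebase, mark it with 'edit', then do 'git reset HEAD^' followed by several rounds of 'git add --patch' and 'git commit', and finish with 'git rebase --continue'.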

    "But merging changes from one codeline to another means that corresponding commit doesn't have single purpose!"

Wrong. The single purpose of a merge should be delivering a change to another codeline. That's why you should carefully choose the changes you want to merge in one commit. Once again, this is much easier to do with a distributed tool, but it is also perfectly possible to do right with svn or perforce. (I will post more on branching/merging in subsequent posts.)

Example: this is just awful:

    $ git diff
    diff --git a/my.cpp b/my.cpp
    index 6223d3c..4246210 100644
    --- a/my.cpp
    +++ b/my.cpp
    @@ -1,14 +1,17 @@
    -int fancy_stuff(int arg)
    +int fancyStuff(int n)
     {
    -    // do a lot of stuff
    -    return arg * 2;
    +    // Do a lot of stuff.
    +    return n * 2;
     }

    -int contrived(int arg)
    +int contrivedFunction(int n)
     {
    -    if (arg > 0) {
    -        return fancy_stuff(arg*2);
    -    } else {
    -        throw std::runtime_error("arg is negative!");
    +    if (n >= 0)
    +    {
    +        return fancyStuff(n * 2);
    +    }
    +    else
    +    {
    +        throw std::runtime_error("n is negative!");
         }
     }


... if all you wanted to say was:

     int contrived(int arg)
     {
    -    if (arg > 0) {
    +    if (arg >= 0) {
             return fancy_stuff(arg*2);
         } else {
             throw std::runtime_error("arg is negative!");
         }
     }

2011-05-22

Specifications, part IV

In the previous post I talked about how to write specifications for virtual functions. This post is about the second C++ mechanism in which new code acts as a tuning of the old code: templates.

Templates are probably the most important C++ abstraction-building tool: they allow constructing amazingly powerful abstractions without incurring run-time or memory penalties, and without losing type information along the way. I think they should be preferred over virtual functions whenever run-time dispatch is not necessary (choosing one or the other is a topic for a separate future blog post, though).

Specify template parameters

Class and function templates (and member-function templates, and member functions of class templates) often expect their template parameters to have certain properties. As usual, it is better to be explicit about what is expected.

Consider the following template:

        /// prints its argument
        template <class T>
        void f(T t)
        {
            t.print();
        }

The code doesn't make sense if type T has no member function print that can be called without arguments. So it's better to put that into the documentation of the function template f:

        /// prints its argument
        /// \pre T has a member-function print which can be called without arguments
        template <class T>
        void f(T t)
        ...

(Note that it is wise to specify the absolute minimum -- I'm talking about a function print that can be called without arguments, not about a print that takes no parameters.)

This example is somewhat contrived: if a client programmer misuses f, a compilation error will occur.
Sometimes, though, deciphering the compilation error gets tricky, as the instantiation that causes it is buried deep in the call stack. That is where template parameter constraints help: you can have your types checked by the type system, and as close to the source of the violation as possible. Read Bjarne Stroustrup's FAQ on that.
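That FAQ describes a simple constraints-class idiom; a minimal sketch adapted to the example above (Has_print is my own name for it) might look like this:

        template <class T>
        struct Has_print
        {
            static void constraints(T t) { t.print(); }           // must compile for T
            Has_print() { void (*p)(T) = constraints; (void)p; }   // forces the check
        };

        /// prints its argument
        /// \pre T has a member-function print which can be called without arguments
        template <class T>
        void f(T t)
        {
            Has_print<T> check;   // a violation is reported here, near the misuse
            (void)check;
            t.print();
        }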

And, more importantly, the compiler cannot verify everything: it verifies syntax and basic things (like the presence of print() with a compatible signature), but it will not help you with semantics that are not expressed in the C++ type system. Examples are complexity, exception guarantees, specific side effects, commutativity/associativity of an operation, etc.

Consider std::vector<>. It requires that the type of the elements it holds is CopyConstructible, and CopyConstructible means specific semantics beyond the mere presence of a publicly accessible copy constructor. If you violate this requirement by instantiating std::vector with std::auto_ptr<SomeType>, you don't get a compilation error. Instead, you get undefined behavior (which in this case may well be a crash at runtime).
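Something like this (a sketch of the misuse, not code you should ever write):

        #include <memory>
        #include <vector>

        // auto_ptr's "copy" transfers ownership, so it only pretends to be
        // CopyConstructible; the requirement is violated semantically, and any
        // vector operation that copies elements is undefined behavior.
        std::vector< std::auto_ptr<int> > dangerous;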

    Another example of "semantic requirements" is usual requirement imposed on any clean-up function, and on user-defined swap operation to not throw: otherwise generic transactional code is impossible to implement correctly.

A third example is a user-defined template that operates on a container and promises O(N) run time in terms of the number of elements in the container:

        /// \pre Container is an STL-like container
        /// \pre Op is a class that provides a context-aware operator() with parameters
        ///      of types convertible to Container and to the type held by Container
        /// \post run-time complexity is O(c.size())
        template <class Container, class Op>
        void do_something(Container& c, Op f)
        {
            for (typename Container::const_iterator it = c.begin(); it != c.end(); ++it)
                f(c, *it);
        }

Note that, to keep its promise, do_something should in general require that Op::operator() itself has run-time complexity O(1)! If all you have is the do_something declaration and its comments, you should specify that as well.

As with virtual functions, the specification of a template is only one part of ensuring correctness. The other part is, of course, providing type parameters that satisfy that specification.