DVCS and its most vexing merge

July 11th, 2008

There's this very elegant way to shoot your leg off with a DVCS. Here's the recipe:

  1. I create a clone of the repository A and call it B. You create another clone of A and call it C.
  2. I add an if statement in B, fixing a bug. You fix the same bug in C in a very similar way.
  3. I pull your changes from C to B. There's a conflict around the bug since we both fixed it. I take my patch, throwing away your nearly identical patch.
  4. Meanwhile, your roommate pulled both B and C into his clone, D. And he had to resolve that conflict, too. And he took your patch, and threw mine away.
  5. Now, D and B are pushed back to A. DVCS quiz: what happens?

Answer: the system has accurately recorded the manual merges, so it has enough information to make this new merge automatically. It sees your patch, and it throws it away as it should according to my manual merge. It sees my patch, and it flushes it down the toilet since that's what your roommate said. Net result: both patches are gone, the bug is back in business. (Edit: it doesn't work that way – bk version 4 does a different thing, and other systems reportedly do still other things. Do you still want to read the rest of this?..)

Which of course doesn't matter, since it's immediately discovered by the massive Automated Test Suite. For example, if it's an OS kernel, each revision is automatically tested on all hardware configurations it should run on. And the whole process only takes 10 minutes, according to the Ten Minute Build XP Practice. No harm done, no reason to discuss it. I just thought it was a curious thing worth sharing.

Maybe it's a well-known thing, but I don't think it is, and if I'm right, it's definitely lovable. For example, here's what BitMover, maker of BitKeeper, the common ancestor of DVCSes, has to say about this:

"It's important to note that because BitKeeper has a star topology and its possible to share data with any repository, it's not necessarily recommended."

What this is trying to say is that the graph of pulls shouldn't be a generic graph and you're better off with a tree. That is, I shouldn't pull directly from you; we should both pull from and push to A. You and your roommate should also synchronize via A, or via A's "child" repository, but then you shouldn't push to A directly, only via that child, and so on. If we maintain this tree structure, the same conflict will never be resolved twice, and then we won't get screwed when the merges are merged.

I wonder if you could detect the situations when you "merge merges", that is, when the same conflict was resolved differently. You could then insist on human intervention and save those humans' bacon. I'm too lazy to think this out and too stupid to effortlessly see the answer, so I'll resort to a social heuristic like all of us uber-formal nerds do. The social heuristic is that Larry McVoy of BitMover has probably already thought about this, and found ways in which it could fail. So I'm not going to argue that BitKeeper merges are broken.
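
If you want a feeling for how such a detector could even begin to work, here's a minimal sketch, phrased in terms of git rather than BitKeeper, since git is what I can poke at from a script: a criss-cross history, the "merge of merges" situation, shows up as more than one "best" common ancestor, and git merge-base --all will list all of them. The branch names below are made up; the point is only that the situation is detectable, not that this is how any real system handles it.

    import subprocess

    def merge_bases(a, b):
        # `git merge-base --all` prints every "best" common ancestor of a and b.
        # A clean history has exactly one; a criss-cross history has several.
        out = subprocess.run(["git", "merge-base", "--all", a, b],
                             capture_output=True, text=True, check=True)
        return out.stdout.split()

    # Hypothetical branches about to be merged back into the shared repository.
    if len(merge_bases("my-fix", "roommates-merge")) > 1:
        print("criss-cross merge: the same conflict may have been resolved twice;")
        print("insist on a human looking at the result of the automatic merge")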

What I'm going to argue, at least for the next couple of paragraphs, is that it sucks when they tell you about their superstar topology and then explain that it's best to avoid actually using it. Not only that, but they fail to mention a fairly frightening and, trust me, not at all unlikely scenario which could actually persuade you to follow their advice.

Because when they tell me "we have this simple model of how things work – repositories with complete local history and changes flowing between them – but you should arbitrarily restrict yourself to a subset of this model, for reasons we aren't going to share with you, even though the general case works", when they tell me that, my reply is "I alias rm to rm -f". I understand how rm works, it's fairly simple, and I don't like to talk about it over and over again, "Are you sure?" – yes, I'm sure, thank you very much and good bye.

But the lovable part is, speaking of social heuristics, the lovable part is that BitMover said it right. Because if they mentioned that fairly frightening and not at all unlikely scenario, they'd scare people off rather than illustrate a point. On the other hand, when they say "It's good practice to think about how the data should flow", most people will nod and follow whatever advice they give them.

Just imagine a team of programmers engaging in the practice of thinking about how the data should flow, dammit, all on company time. Yeah, yeah, so BitKeeper earned a sarcastic comment on Proper Fixation. It's still a small price to pay for getting your message to the majority of your users.

You see, the majority of programmers are not just "irrational" as we all are; their reliance on reasoning doesn't even exceed the mean of the population, which means they barely use reasoning at all. It's pure gut feeling.

For example, I was writing a bunch of macros in a proprietary debugger extension language. A guy who came to talk to me looked over my shoulder, and I explained, "Debugger macros. Very useful, a crappy language though." He said, "Yeah, looks like so."

HE COULDN'T POSSIBLY KNOW THAT. I knew he couldn't. How could he look at the code and realize that all variables were global? How could he know they were printed upon assignment, including loop counters ('cause it's a "macro", so it works just like assigning at the debugger console, which prints the variable)? He couldn't know any of that. So why did he agree with the "crappy" part? Oh, I knew why.

"You mean it has dollar signs?" Silence. "You mean it prefixes variable names with the dollar sign, don't you?" "Yeah, that." "Well, I like the dollar signs, helps you distinguish between your macro variables and the variables of the debugged C program. Anything else?" "Well, um, yeah, it looks kinda primitive." Low-end Ignorant Language Bigotry quiz: if "crappy" means "has dollar signs", what does "primitive" mean? Answer: no type declarations. I'm sure it was that, although I didn't go on and ask.

So that's "engineers" for you. If you want to write programs or tech docs for the average engineer, keep that picture in mind. Or don't. Aim for the minority, for people who don't work that way, under the assumption that they are the ones that actually matter the most. I don't know if this assumption is right, but at least it's lofty.


For the record, I had my share of both centralized and distributed version control systems, and today I like it distributed and I wouldn't ever want to go back, The Most Vexing Merge be damned. Why? I'll share my reasons with you through the story of My Last CVS To BitKeeper Exodus. I think I'll illustrate the engineers-and-reasoning-don't-mix point as a by-product, because that's how that story goes.

There recently was this argument about DVCS encouraging "code bombs", a.k.a. "crawling into a cave". I hadn't heard either of these terms, so I've been using a term of my own – "accumulating critical mass". The idea is to develop in your own corner, without integrating your work with the main branch. You then show up with N KLOC of new stuff and kindly ask to merge it in.

Some people claimed this was particularly harmful for open source projects where there was no managerial authority to prevent this. Ha! Double ha. In an open source project, the key maintainers may say, "you see, we can't integrate it when it's done this way; we're sorry, but you should have talked to us." The changes will then be reimplemented or dropped on the floor.

Now, if you think that in a commercial environment a manager can easily decide to drop changes on the floor, together with the cost of implementing them and especially the cost of delaying the delivery of the features, if you think that, well, I wonder why you do. But perhaps a manager could insist on frequent integration? She could try, but she'd have to deal with the real or imaginary cost of merges, increasing over time and getting in the way of deliveries. Of course there are perfect managers and perfect teams where it's all dealt with appropriately; you just have to find them.

So yeah, "code bombing" is a problem, especially in commercial projects. But the idea that DVCS encourages it? Hilarity! It's like saying that guns encourage murder. I prefer to think of guns as something that encourages fear of armed policemen, getting in the way of the natural instinct to club your neighbors to death. I mean, yeah, it's easier to code bomb with a DVCS, but with a centralized system, people use code bombing – or clubbing? – even more, because merging is harder, the cost of merges increases more quickly and the ability to force integration is thus lower. The criminals are poorly equipped, but so is the police.

And this is exactly what happened to the last team stuck with CVS that I helped migrate to BitKeeper. Everybody had their own version made up of file snapshots taken at different times and merged with the repository version to different extents. A centralized system doesn't record these local versions, so unless you immediately commit your merges, you are left with a version of a file which the system doesn't know about. This means that the next merges are going to be really hard, because you've lost your GCA, the greatest common ancestor. So instead of a 3-way merge, you'll be doing a 2-way merge, which really sucks.

So I decided to not talk about the caves they were crawling into and the code bombs they were throwing at each other. Rather, I decided to show them how a 2-way merge couldn't do what a 3-way merge could. I still think it's the ultimate argument for DVCS, because DVCS is basically about accurate recording of all versions and not just the single timeline of the main trunk. So the argument for it has to do with the benefits of such detailed recording.

So I gave this example where you start with a file having two lines:
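
    aaa
    bbb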

And then I add a line, ccc, and you delete a line, aaa. If we have the GCA (a 3-way merge), then clearly the right result is deleting aaa and adding ccc, getting this:
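
    bbb
    ccc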

But with a 2-way merge, we only have your file:
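
    bbb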

...and my file:
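
    aaa
    bbb
    ccc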

This can only be merged manually, because there's no way to automatically figure out that you deleted aaa and I added ccc; for all the tool knows, you could have done nothing and I could have added two lines, so the right merge is:
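
    aaa
    bbb
    ccc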

...canceling your change. So it has to be a manual merge. A manual merge means dozens of boring deltas you have to inspect in each file. That's what I call "costly".
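
And if you don't want to take my word for it, here's a small sketch that feeds exactly this toy example to git merge-file, which does a plain 3-way file merge. It assumes git is installed; the file names and contents are just the aaa/bbb/ccc example from above, nothing more.

    import pathlib, subprocess, tempfile

    base  = "aaa\nbbb\n"        # the GCA we both started from
    mine  = "aaa\nbbb\nccc\n"   # I added ccc
    yours = "bbb\n"             # you deleted aaa

    with tempfile.TemporaryDirectory() as tmp:
        tmp = pathlib.Path(tmp)
        (tmp / "mine").write_text(mine)
        (tmp / "base").write_text(base)
        (tmp / "yours").write_text(yours)
        # 3-way merge: "yours" is merged into "mine" with "base" as the ancestor.
        # The exit status is the number of conflicts, so don't treat it as an error.
        subprocess.run(["git", "merge-file", str(tmp / "mine"),
                        str(tmp / "base"), str(tmp / "yours")], check=False)
        print((tmp / "mine").read_text())   # prints bbb, ccc: both edits survive

Take the base file away, so the tool only sees "mine" and "yours", and you're back to the 2-way situation above, where no amount of cleverness can tell the deletion of aaa from nothing ever having happened.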

Of course it doesn't matter in a right world, where people integrate frequently and always commit their merged files to the centralized repository. Except it wasn't so in the wrong world of the CVS developers I was "helping" to upgrade to new tools (for the last time in my life, people, for the last time in my life). And I thought we could avoid the discussion of the somewhat-technical-but-largely-social reasons for the constantly increasing cost of merges, and instead we could focus on the technical benefits of the 3-way merge and accurate GCA recording.

And of course I was wrong. The discussion immediately shifted to "we don't need merges" because everything is "modular" and there's a single owner to each file. Of course it wasn't, and there wasn't. Some things were used by everybody, like the awful build scripts and the DMA code. Some modules had two owners, or were in a transition state and had 1.5 owners, and so on. There were merges all over the place.

And if there weren't merges and merge-related problems, how come everybody worked on their own "pirate" version which was never recorded in the main trunk and was made from a colorful variety of files partially merged at different times? How come changes propagated with cp and emacs-diff and not cvs update? And why was the tech lead so passionate about moving to BitKeeper, which doesn't let you partially update a repository, so you have to merge everything? And why did everybody anxiously object to that necessity if there were "no problems with merges"?

The final result: the tech lead simply forced the migration to bk. Everybody on the team hated me for my connection with the idea (I wasn't on their team, but I used to be a likable satellite and now became a hated one). The developers who I thought were their best eventually (and I mean eventually) told me it was actually a good thing. So it wasn't a bad closure. And still, I decided that I'm not going to "help" anybody "deploy" any kind of "tool" in this lifetime again, roughly speaking. Too much emotion for this programmer.

And this was supposed to show why I like DVCS, at least in the imperfect world where long-lived branches occasionally happen, and the kind of reasoning I think is interesting in this context, and the kind of reasoning other people I came across found interesting. So there we are.

P.S. Why "most vexing"?

I thought I saw that "C++'s most vexing parse" from Scott Meyers' Effective STL has its own Wikipedia entry, but apparently it doesn't. It's basically a variation on the theme of C++'s declaration/definition ambiguity, and I liked the term, especially the "most" part where parses are unambiguously sorted along the vexing dimension. So I figured "X's most vexing Y" is a good template.

I'd like to use this opportunity to say that I skimmed through Effective C++, 3rd Edition, and... Where do I start? There's a piece of advice to create date objects with "Date d(Day(31), Month::april(), Year(2000))" or something. That is, create special types for the constructor arguments. Well, it doesn't check that April has no 31st day, does it? The Date constructor could test for it though. Well, why not test for April 41st in the Date constructor, too, and, ya know, spare the users some keystrokes, if you see what I mean? The code is verbose. C++ compiler error messages are verbose. VERBOSITY EVERYWHERE! Help!

This raises a question for the author: has he ever worked with a system where every piece of data comes covered with the toxic waste of overzealous static typing? But this borders on an ad hominem attack. And seriously, that sort of thing is to be avoided, at least until somebody proposes to have named constants for days or years and not just months.

So instead of the personal attack, I'll ask Software Development Consultants, all of them, to kindly change the phrasing "it's best to do so and so" to "I like so and so" or something. Because we have this huge crappy-dollar-sign crowd, and they copy style from each other like crazy, and their ultimate source of style is books by Software Development Consultants, and whenever they see a "best practice", their common sense is turned off and they add the technique to the bag of tricks. So Consultants have great power in this world. It doesn't make the common-sense shut-off feature their fault, but power they do have.

And with great power comes great responsibility, profanity deleted. I mean, you're obviously giving advice neither you nor others have tested for very long, out of generic principles, profanity deleted. Like "prefer algorithms such as for_each to loops", advice issued before fully compliant implementations of STL were even widely available, profanity deleted. Quite a piece of advice, that, profanity deleted. Couldn't you at least phrase your advice in a little less self-assured way, fucking profanity deleted?

For example, Meyers has finally lowered the drawbridge and let the enemy, template metaprogramming, occupy a notable share of pages in an Effective C++ edition. I still remember his promise to "never write about templates", in the preface to Modern C++ Design, I think. And now the promise is broken. Hordes of clueless weenies are rushing into the minefield of template metaprogramming as we speak, since it's now officially Mainstream C++. Can you imagine the consequences? I can't. It's too awful. I think I'll go to sleep now.