Git: so far, so good

I started using git two months ago. Basically, colleagues around me fall into three categories:

  • Those who already use git or mercurial (a small minority).
  • Those who are considering starting to use it, as I was a few months ago (a few).
  • Those who don’t get it (the majority).

To those who don’t get it: time to update your skill set. Not getting it is never good in IT, and keeping your skill set current is vital to long-term survival. Git is still new enough that you can get away with not getting it, but I don’t think that will last long.

The truth of the matter is that git mostly works as advertised and there are a few real benefits to using it and a few real problems with not using it. To start with the problems:

  • Not using git limits you to one branch: trunk. Don’t fool yourself into thinking otherwise. I’ve seen branching in svn a couple of times and it was not pretty.
  • Not using git forces you either to work in small, non-invasive increments or to accept prolonged instability on trunk with lots of disruptive change coming in. Most teams tend to have a release heartbeat where trunk is close to useless except when a release is coming.
  • Not using git limits the size of the group of people that can work effectively on the same code base. Having too many people commit to the same code increases the number of conflicting changes.
  • Not using git regularly exposes you to merge problems and conflicts when you update your working copy from trunk.
  • Not using git forces a style of working that avoids the above problems: you don’t branch; people get angry when trunk breaks (which it does, often); you avoid making disruptive changes and, when you do make them, you work for prolonged periods without committing; when you finally commit, you find that some a**hole introduced conflicting changes on trunk in the meantime; once you have committed, other people find that their uncommitted work now conflicts with trunk; and so on.
  • Given the above problems, people avoid the types of changes that cause them. This is the real problem. Not refactoring because of potential conflicts is an anti-pattern. Not making a change because it would take too long to stabilize means that necessary changes get delayed.

All of those problems are real, and the worst part is that people think they are normal. Git is hardly a silver bullet, but it does take away these specific problems. And that’s a real benefit. Because it is a real benefit, more and more people are starting to use git, which puts all those people not using it at a disadvantage. So, not getting it is causing you real problems now (which you may not even be aware of). Just because you don’t get it doesn’t stop people who do get it from competing with you.

In the past few weeks, I’ve been gradually expanding my use of git. I started with the basics, but I now find that my workflow is changing:

I’m no longer paranoid about updating from svn regularly because the incoming changes tend not to conflict with my local work if I “git svn rebase”. Rebasing is a git-specific process where you pull in changes from remote and “reapply” your own local commits on top of them. Basically, before you push changes to remote, you rebase them on top of the latest and greatest available remote. This way your commit to remote is guaranteed not to conflict. So “git svn rebase” pulls changes from trunk and applies my local commits on top of them. Occasionally there are conflicts of course, but git tends to be pretty smart about resolving most of those. E.g. file renames tend to be no problem. In a few weeks of using git, I’ve only had to edit conflicts a couple of times, and in all of those cases it was straightforward. The frequency with which you rebase doesn’t really matter since the process works on a per-commit basis, not on a merge basis as in svn.
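In its simplest form, that loop looks something like this (a minimal sketch; the conflicted file path is made up):

  git svn rebase           # fetch new trunk commits, replay my local commits on top
  # if a conflict stops the rebase: edit the file to resolve it, then
  git add src/Foo.java     # mark the conflict as resolved
  git rebase --continue    # let git finish replaying the remaining commits
  git svn dcommit          # push my rebased local commits back to svn trunk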

I tend to isolate big chunks of work on their own git branch so I can switch between tasks. I have a few experimental things going on that change our production business logic in a pretty big way. Those changes live in their own git branch. Once in a while, I rebase those branches against master (which I in turn rebase against svn trunk regularly), to get the latest svn trunk changes onto the branch and to make sure that I can still push the branch back to trunk when the time comes. Simply being able to work on such changes without them disrupting trunk, or trunk changes disrupting them, is a great benefit. You tend not to experiment on svn trunk because this pisses people off. I can experiment all I want on a local branch, though. However, most of my branches are actually short-lived: just because I can sit on changes forever doesn’t mean I make a habit of doing so needlessly. The main thing for me is being able to isolate unrelated changes from each other and from trunk, and to switch between those changes effortlessly.
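As a sketch, with made-up branch and commit names, the branch dance looks roughly like this:

  git checkout -b experiment master    # park the disruptive work on its own branch
  git commit -a -m "rework pricing logic"   # commit locally as often as I like
  # once in a while: refresh master from svn trunk, then rebase the branch on it
  git checkout master
  git svn rebase
  git checkout experiment
  git rebase master                    # replay the experiment on the fresh master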

Branching and rebasing allow me to work on a group of related changes without committing back to trunk right away. I find that my svn commits tend to be bigger but less frequent now. I’ve heard people who don’t get it argue that this is a bad thing. And I agree: for svn users this would be a bad thing, because of the above problems. However, I don’t have those problems anymore because I use git. So, why would I want to destabilize trunk with my incomplete work?

Whenever I get interrupted to fix some bug or address some issue, I just make the change on whatever branch I’m working on and commit it there. Then I do a git stash save to quickly store any uncommitted work in progress, a git checkout master followed by a git cherry-pick to get the commit with the fix onto master, and finally a git svn rebase and git svn dcommit to get the change into trunk. Then I check out my branch again and do a git stash pop to pick up where I was before I was interrupted. This may sound complicated, but it means that at any time I am no more than two commands away from a completely clean working directory that matches svn trunk exactly, without losing work in progress. So, no matter how disruptive the changes I am working on, I can always switch to a clean replica of svn trunk, do a quick change, and then pick up the work on my disruptive changes. Even better, I can work on several independent sets of changes and switch between them in a few seconds.
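Spelled out, the interruption dance is just a handful of commands (again a sketch; the branch name and commit message are invented):

  git commit -a -m "fix: guard against null order id"  # fix committed on my branch
  git stash save               # park the half-done work in progress
  git checkout master          # back to a clean replica of svn trunk
  git cherry-pick bigrefactor  # copy the fix commit (the tip of my branch) over
  git svn rebase               # sync master with the latest trunk
  git svn dcommit              # push the fix to svn trunk
  git checkout bigrefactor     # back to the disruptive work
  git stash pop                # restore the work in progress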

So those are big improvements in my workflow that have been enabled by using git svn to interface with svn. I’d love to be able to collaborate with colleagues on my experimental branches; git would enable them to do that. This is why them not getting git is a problem.

Btw. you can replace git with mercurial in the text above. They are very similar in capabilities.

Impressive IT project

TheServerSide.com, one of the sites I regularly visit for Java-related news, today posted a link to this case study.

The case study describes an EJB-based software system that runs the health care system in Brazil. When I say “runs the health care system”, I mean that, for example, in São Paulo, a city with 20 million inhabitants, all health centers are based on and interoperate with this system. That is a seriously impressive achievement.

Healthcare software is notorious for so-called island automation, meaning that the IT infrastructure of health care organizations typically consists of many incompatible islands of software that simply refuse to work together in any meaningful way. Brazil fixed that. Both from a technical point of view and from an organizational point of view, that is a very impressive achievement.

I was in a Dutch hospital three years ago. They gave me a plastic card with holes in it, a.k.a. a punch card. My health record actually lives in a software system that uses punch cards! I find that a disturbing thought. In most civilized countries, the IT infrastructure in the health sector is comparably fragmented or worse. The US health system, for example, is notorious for being one of the most expensive (per capita) on this planet while offering a comparatively low quality of service, and a significant part of the population is uninsured.

More on MS

It’s now a few days after my previous post on the Vista delay. The rumour machine on the Vista delays is now rolling. A few days ago a wild claim that 60% of Vista is in need of a rewrite started circulating. Inaccurate, of course, but it woke up some people. Now this blog post on a blog about Microsoft (frequented by many of their employees) has made it to slashdot. Regardless of the accuracy of any statements in that post, this is a PR disaster. Lots of people (the entire IT industry, stockholders) read slashdot.

There’s lots of interesting details in the comments on that post that suggest that MS has at least these problems:

  • Management is clueless and generally out of touch with development progress. Claims on release dates are totally disconnected from software development planning. Release dates announced in press releases are wishful thinking at best. This is one of the reasons the date slips so often.
  • Middle management is worse. Either they have failed to communicate down when to release, or up when their people tell them release is actually impossible. Either way, they have failed to do what middle management is supposed to do: implement corporate strategy and communicate up when that strategy is not working as expected.
  • Software engineers within MS are extremely frustrated with this. Enough to voice their opinions on a public blog. A lot would need to happen before I started criticizing my employer in public; I know where the money comes from. Really, I’d probably leave long before it got to that point. So, I interpret this as MS having a few extremely frustrated employees who might very well represent a large, silently disgruntled majority. Steve Ballmer seems to be rather unpopular in his own company right now (never mind his external image).
  • The best MS software engineers are leaving and are being replaced with people of lesser quality, because MS now has to compete in the job market. I remember that a few years ago MS could cherry-pick from the job market. Now the cherries are leaving. Really, if your best people are leaving and you have billions in cash to fix whatever problem is causing them to leave, you are doing something wrong (like not fixing the problem).
  • Microsoft employees are spilling stock-influencing information on public blogs. Openness is one thing, but this is an out-of-control situation. Regardless of whether they are right, these people are doing a lot of damage.

It’s probably not as bad as the comments suggest, but bad enough for MS, if only for all the negative PR. Anyway, I might be revisiting the predictions I made in my previous post. I have a feeling some of them might prove correct within a few months already. Very amusing 🙂

The coming paradigm shift in TV broadcasting

This article comments on Apple’s latest move to offer video content through iTunes and on how this is a logical and inevitable move with some far-reaching effects.

In this blog post I step back from that and apply the argument to the telecommunications, media and IT industries as a whole. Some things are about to change in this economically important sector.

It’s understandable that these industries put up a fight. The telecommunications sector is built on the notion that exchanging information (in any form) costs money. The media industry is built on the notion that media needs to be distributed (physically) and that they can charge dollars for that. And finally, the IT industry is used to a steady income from software license fees. All these industries may lose a lot of revenue if the rules change.

And that’s what’s going on: Apple just changed the rules for the media industry. This will have a snowball effect. Right now, if you want to watch something (a movie, the news, a tv series, a documentary) you need to turn to one of the industry-controlled and closely guarded media: a cinema, a tv channel, a dvd, etc. Each of these is a source of revenue to the industry, and you pay for it, directly or indirectly, in all sorts of ways. There’s nothing against that in principle: they offer access to scarce resources and people pay a market price for access.

Their problem is that Apple just made these resources a lot less scarce. Distribution of content through the internet is cheap and will become even cheaper. Technology will gradually erode the cost to close to $0. There’s plenty of bandwidth available, and an increasing number of people have what I call a critical amount of bandwidth: enough to make streaming high definition audio and video feasible & desirable.

Apple is tapping into this by letting their users access content over the internet through the iTunes store and by providing the necessary hardware and software. That’s a small change and not at all revolutionary. But it will teach people an important lesson: hey, I can watch Desperate Housewives (one of the offerings used to commercialize the new iTunes capability) whenever I want, wherever I want, and I don’t need to buy the dvd, I don’t need to turn on the tv at a specific time and I don’t need to watch the commercial blocks. The next steps are obvious and imminent: why store the Desperate Housewives episode on an iPod when you can just stream it? Mobile networks will soon mature enough to reach the same critical bandwidth that home users are currently enjoying on their home networks.

That means that anytime, anywhere, you can start streaming anything anyone bothers to put online to your mobile phone, your pda, your iPod or your tv. Inevitably this will replace all existing forms of content distribution. Why tune to a channel to view some program when you can start streaming that program whenever you want, skip to any part you want and pause it whenever you want?

Apple just gave the industry a little reality check, just like it did when it kick-started online music sales a few years ago: if the industry doesn’t move, somebody else will. Over the next few months, one media company after another will join either Apple or similar initiatives from e.g. Microsoft. Once this happens the pressure will be on and the market will do its work. Better content leads to more online revenue, at the cost of traditional revenue. The huge gap between the cost of content production and distribution and the market price (which is obscene) will come under pressure as well. At some point in the near future the market model will change from paid downloads to paid streams (subscription, per view, etc.).

This will put an end to tv networks as we know them. They are content distributors, and we don’t need them anymore.

The same is going on in the telecommunications sector, where revenue used to come from telephony and related services. IP telephony has eliminated the need for paid telephone services since it works just fine over a modest internet connection. If you have a UMTS phone, it is technically possible to use the internet for IP telephony, so why exactly are we paying 30 cents per minute for a local phone call? Some mobile networks already offer fixed-price bandwidth (expensive though). The operators on these networks get their revenue from a number of services, all of which, with the exception of the network connection itself, are technically possible with already available software packages that use that connection. People think it’s normal to pay 25 cents for the delivery of a 160-character message to a cell phone (SMS). If those two cell phones are UMTS phones and run MSN, ICQ, AIM, Jabber or any of the other IM network clients, you can send unlimited messages to anyone for free. Surprisingly few people have figured this out, but they will. These changes are already happening and will kill much of the telecom industry as we know it. A mobile phone is nothing but a general purpose computer with a UMTS modem or similar wireless connection and some general purpose software. The form factor is irrelevant.

Which brings us to the software industry, because nothing of the above requires software with a price tag greater than $0. All of the services mentioned above can be implemented using existing, open source software. In fact, OSS developers have already done most of the work and created open source media centers, video & audio codecs, communication software, real-time operating systems and any other kind of software component you could possibly need to implement any of the services mentioned in this post. It’s just a matter of putting the components together.

So what remains is bandwidth, hardware and intellectual property. Any revenue not coming directly from these will evaporate in the next few decades. The remaining revenue will still be sizable, but probably less than the industry is used to today. $50 for a dvd is considered normal today. I’d be surprised and disappointed if, in about ten years, I couldn’t watch Star Wars III on my mobile phone, anywhere, anytime, for under $5. And no way am I going to watch that shit ten times.

My impression is that the whole process will be slow, thanks to the industry resisting any form of progress. It will take outsiders, like Apple, to change the rules gradually. These outsiders exist and are already changing the rules.