Git and agile

I’ve been working with Subversion since 2004 (we used a pre-1.0 version at GX). I started hearing about git around the 2006-2007 time frame, when Linus Torvalds’ replacement for BitKeeper started maturing enough for other people to use it. In 2008 I met people at Nokia working on Maemo (the Debian-based OS for the N770, N800, N810, and recently the N900) who were really enthusiastic about it. They had to use it to work with all the upstream projects Maemo depends on, and they loved it. When I moved to Berlin everybody there was using Subversion, so I just conformed and ignored git, Mercurial, and all those other cool versioning systems out there for an entire year. It turns out that was lost time: I should have switched around 2007/2008. I’m especially annoyed by this because I’ve been aware of decentralized versioning being superior to centralized versioning since 2006. If you don’t believe me: I had a workshop paper at SPLC 2006 on version management and variability management that pointed out the emergence of DVCSs in that context. I’ve wasted at least three years. Ages for the early-adopter type I still consider myself to be.

Anyway, after weighing the pros and cons for way too long, I switched from Subversion to git last week. What triggered me to do this was, oddly, an excellent tutorial on Mercurial by Joel Spolsky. Nothing against Mercurial, but git has the momentum in my view, and it definitely appears to be the bandwagon to be jumping on right now. I don’t see any big technical argument for using Mercurial instead of git. There’s GitHub, and no Mercurial hub as far as I know. So I took Joel’s good advice on Mercurial as a hint that it was time to get off my ass and get serious about switching to anything other than Subversion. I had already decided in favor of git based on what I’d been reading about both versioning systems.

My colleagues of course haven’t switched (yet, mostly), but that is not a problem thanks to git-svn, which lets me interface with svn repositories. I’d like to say making the switch was an easy ride, except it wasn’t. The reason is not git but me. Git is a powerful tool with quite a few more features than Subversion. Martin Fowler has a nice diagram plotting “recommendability” against “required skill”. Git is in the top right corner (highly recommended, but you’ll need to learn some new skills) and Subversion is lower right (recommended, not much skill needed). The good news is that you need only a small subset of commands to cover the feature set provided by svn, and you can gradually expand from there. Even with this small subset git is worth the trouble IMHO, if only because world + dog are switching. The bad news is that you will just have to sit down and spend a few hours learning the basics. I spent a bit more than I planned to, but in the end I got there.

I should have switched around 2007/2008

The mistake that caused me to delay the switch for years was not realizing that git adds loads of value even when your colleagues are not using it: you will collaborate more effectively even if you are the only one using git! There are two parts to my mistake.

The first part is that the whole point of git is branching. You don’t have a working copy, you have a branch. It’s exactly the same with git-svn: you don’t have an svn working copy but a branch forked off svn trunk. So what, you might think. Well, git excels at merging between branches. With svn, branching and merging are painful, so instead of having branches and merging between them, you avoid conflicts by updating often and committing often. With git-svn, you don’t update from svn trunk, you merge its changes into your local branch. You are working on a branch by default, and creating more than one is really not something to be scared of. It’s painless, even if you have a large amount of uncommitted work (which would get you in trouble with svn). Even if that work includes renaming the top-level directories in your project (I did this). Even if other people are making big changes in svn trunk. That’s a really valuable feature to have around. It means I can work on big changes to the code without having to worry about upstream svn commits. The type of changes nobody dares to take on because it would be too disruptive to deal with branching and merging, and because there are “more important things” to do and we don’t want to “destabilize” trunk. Well, not any more. I can work on changes locally on a git branch for weeks if needed and push them back to trunk when they are ready, while at the same time my colleagues and I keep committing big changes on trunk. The reason I’m so annoyed right now is that the time I spent resolving svn conflicts over the past four years was essentially unnecessary. Not switching four years ago was a big mistake.
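
To make this concrete, here’s a minimal sketch of the workflow I’m describing (the URL and branch name are placeholders):

    # clone the svn repository; -s assumes the standard trunk/branches/tags layout
    git svn clone -s http://svn.example.com/myproject
    cd myproject

    # a long-lived local branch for the disruptive work
    git checkout -b big-refactoring
    # ...rename directories, refactor, commit locally as often as you like...

    # meanwhile, keep pulling in whatever lands on svn trunk
    git checkout master
    git svn rebase
    git checkout big-refactoring
    git merge master

    # weeks later, when the branch is ready
    git checkout master
    git merge big-refactoring
    git svn dcommit    # push the result back to svn trunk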

The second part of my mistake was assuming I needed IDE support for git to be able to deal with refactoring, and particularly class renames (which I do all the time in Eclipse). While there is egit now, it is still pretty immature. It turns out that assuming I needed Eclipse support was a false assumption. If you rename a file in a git repository and commit it, git will automatically figure out that the file was renamed; you don’t need to tell it anything. A simple “mv foo.java bar.java” will work. On directories too. This is a really cool feature. So I can develop in Eclipse without it even being aware of any git specifics, refactor and rename as much as I like, and git will keep tracking the changes for me. Even better, certain types of refactorings that are quite tricky with Subclipse and Subversive just work in git. I’ve corrupted svn working directories on several occasions when trying to rename packages and move stuff around. Git handles this effortlessly. Merges work so well precisely because git can handle the situation where a locally renamed file needs changes from upstream merged into it. It’s a core feature, not an argument against switching. My mistake. I probably spent even more time on corrupted svn directories than on conflict resolution in the last three years.
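
For example, this is all it takes (a sketch; the file names are made up):

    mv foo.java bar.java        # a plain filesystem rename, no special git command
    git add -A                  # stage everything, including the delete + add pair
    git status                  # reports: renamed: foo.java -> bar.java
    git log --follow bar.java   # history even follows the file across the rename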

Git is an Agile enabler

We have plenty of pending big changes and refactorings that we have been delaying because they are disruptive. Git allows me to work on these changes whenever I feel like it without having to finish them before somebody else starts introducing conflicting changes.

This is not just a technical advantage; it is a process advantage as well. Subversion forces you to serialize changes so as to minimize the interactions between them. That’s another way of saying that Subversion is all about waterfall. Git allows you to decouple changes instead and parallelize the work more effectively. Think multiple teams working on the same code base on unrelated changes. Don’t believe me? The Linux kernel community has thousands of developers from hundreds of companies working on the same code base, touching large portions of the entire source tree. Git is why that works at all, and why they push out stable releases every 6 weeks. Linux kernel development speed is measured in thousands of lines of code modified or added per day. Evaluating the incoming changes every day is a full-time job for several people.

Subversion is causing us to delay necessary changes, i.e. changes that we would prefer to make if only they weren’t so disruptive. Delayed changes pile up to become technical debt. Think of git as a tool to manage your technical debt. You can work on business-value-adding changes (and keep the managers happy) and disruptive changes at the same time, without the two interfering. In other words, you can be more agile. Agile has always been about technical enablers (refactoring tooling, unit testing frameworks, continuous integration infrastructure, version control, etc.) as much as it was about process. Having the infrastructure to do rapid iterations and release frequently is critical to the ability to release every sprint. You can’t do one without the other. Of course, tools don’t fix process problems. But then, process tends to be about workarounds for lacking tools as well. Decentralized version management is another essential tool in this context. You can compensate for not using it with process, but IMHO life is too short to play bureaucrat.

Not an easy ride

But as I said, switching from svn to git wasn’t a smooth ride. Getting familiar with the various git commands and how they differ from what I am used to in svn has taken some time, despite the fact that I understand how git works and how I am supposed to use it. I’m a git newbie and I’ve been making lots of beginner mistakes (mainly using the wrong git commands for the things I was trying to do). The good news is that I managed to get some pretty big changes committed back to the central svn repository without losing any work (which is the point of version management). The bad news is that I got stuck several times trying to figure out how to rebase properly, how to undo certain changes, and how to recover from a messed-up checkout on top of my local work directory using the local git repository. In short, I learned a lot, and I still have some more to learn. On the other hand, I can track changes from svn trunk, have local topic branches, merge from those to the local git master, and dcommit back to trunk. That about covers all my basic needs.
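
For fellow beginners, these are the commands that got me out of most of my messes (a sketch, not a complete survival guide; the src/ path is just an example):

    git checkout -- src/        # throw away uncommitted changes under src/
    git reset --hard HEAD       # reset the whole working tree to the last commit
    git reflog                  # list where HEAD has been, including "lost" commits
    git reset --hard HEAD@{1}   # jump back to where you were before the mistake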

Some adventures with EJB 3 and JAX-WS

You may recall some of my recent frustrated posts regarding the poor state of web services in Java. While I still stand 100% behind those comments, I’ve now found a somewhat more convenient way of implementing web services using JAX-WS 2.0. I spent a few hours with JBoss 4.0.4 GA to explore its implementation of some of the new JEE 5 (formerly known as J2EE) stuff. Up to now I’ve never bothered with J2EE 1.x, since I consider it an overarchitected, complicated technology aimed at addressing what are (or should be) simple issues, such as persisting an object to a database.

The nice thing about the latest incarnation of the specification is that it supposedly removes much of the burden: you just tell the application server to do its thing. Using annotations you specify what it should do, and then it actually goes and does it, without me editing hundreds of little xml files, googling an afternoon for the corresponding documentation, googling some more for the stuff the documentation does not tell you, and finally googling yet more to explain the obscure exceptions in the log. Well, it turns out that some googling skills remain essential, but it sort of works as advertised. Basically the process is to write some POJOs, add some annotations, and hand the whole thing to the app server, which generates a magic layer of web services, persistence, and transactional semantics automatically.
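
To give a taste of what that looks like, here’s a minimal sketch of such an annotated POJO (the Customer class is a made-up example):

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;

    // a plain old Java object; the annotations tell the container to persist it
    @Entity
    public class Customer {

        @Id
        @GeneratedValue            // the database hands out the primary key
        private Long id;

        private String name;       // mapped to a column automatically

        public Long getId() { return id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }
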
Let’s be fair: JBoss does not address all the issues in J2EE 1.4, and the JEE 5 implementation is definitely not complete. But overall it is a huge improvement over the way things used to be, provided you do things exactly the way they want you to. The only other open source implementation of the latest specs is Sun’s Glassfish, so there is not exactly much choice. Luckily, JBoss is pretty nice technology.
I had some issues related to various things not being deployed because of errors in (JBoss-specific) files, which cost me most of this morning. My original aim was to re-implement a Hibernate+Tomcat based web application I already had and which I am going to do some feature development on soon. This should be easy, because JBoss uses Hibernate to implement EJB 3 persistence and Tomcat to run web applications. Indeed, I got some benefit out of that, since I could copy-paste a few of the more obscure bits of the Hibernate configuration, thus bypassing several hours of agonizing googling for issues with MySQL (did that a few months ago).

The object-relational mappings were of course reusable, but the whole point of EJB 3 is that these are now replaced with much less verbose annotations. Adding the annotations was pretty straightforward. The next step was convincing JBoss to do something with them. This is as easy as embedding a persistence.xml file in the jar file with the classes. The main purpose of this file seems to be to tell the app container to hook up a database resource to the entity beans inside the jar. Additionally, some database-specific configuration is embedded as well. IMHO mixing configuration with deployment artifacts is not a good idea, but I guess we’re stuck with this for a while since it is part of the standard now.
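
It looks something like this (a sketch; the unit name, datasource, and dialect are placeholders for whatever your setup uses):

    <persistence>
      <persistence-unit name="myapp">
        <!-- hooks the entity beans up to a datasource configured in the app server -->
        <jta-data-source>java:/MyAppDS</jta-data-source>
        <properties>
          <!-- and this is where the database-specific configuration sneaks in -->
          <property name="hibernate.dialect" value="org.hibernate.dialect.MySQLDialect"/>
        </properties>
      </persistence-unit>
    </persistence>
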
The next step was less easy: dependency injection. It turns out that JBoss, or rather Tomcat, is not quite ready for the new servlet specification that comes with JEE 5. In other words, any annotations in a servlet are ignored. If you want to use your persistent objects there, you need to create a so-called EntityManager manually. Some googling delivered various code fragments, one of which seemed to do the trick. The fragments you find everywhere that rely on an annotation only work inside an EJB.
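
The fragment I ended up with looked roughly like this (sketched from memory, so treat it as an illustration; “myapp” must match the persistence-unit name in persistence.xml, and in real code you’d create the factory only once):

    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    public class CustomerLookup {

        // inside a servlet there is no injection, so bootstrap the
        // EntityManager by hand
        public static Customer find(Long id) {
            EntityManagerFactory emf = Persistence.createEntityManagerFactory("myapp");
            EntityManager em = emf.createEntityManager();
            try {
                return em.find(Customer.class, id);
            } finally {
                em.close();
            }
        }
    }
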
Next on the agenda was creating a stateless session bean to encapsulate the business logic. Again, some simple annotations do the trick, and dependency injection does work here, so getting the EntityManager injected actually works (with the added advantage that the app container is a lot smarter about figuring out transactional semantics). The only problem: it wasn’t getting deployed :-(. Entity beans in a war (web application archive) file are no problem, but session beans are.
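
For reference, the bean itself is about this simple (a sketch; EJB 3.0 still wants a separate business interface, and all the names are made up):

    // CustomerService.java
    import javax.ejb.Local;

    @Local
    public interface CustomerService {
        Customer rename(Long id, String newName);
    }

    // CustomerServiceBean.java
    import javax.ejb.Stateless;
    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    @Stateless
    public class CustomerServiceBean implements CustomerService {

        @PersistenceContext         // injected by the container, no manual lookup
        private EntityManager em;

        public Customer rename(Long id, String newName) {
            Customer c = em.find(Customer.class, id);
            c.setName(newName);     // flushed when the container commits the transaction
            return c;
        }
    }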

So I figured I should create a nice ear file (enterprise application archive). This step, btw, is missing in action from the nice JBoss tutorial I had been glancing through. OK, it’s just a zip file with some stuff in it, and pretty straightforward to put together. Essential is the application.xml, which you shouldn’t need but which JBoss needs anyway; crucial is the little jboss-app.xml, which only needs to contain a few lines to trick JBoss into deploying the session beans with the EJB 3 deployer. Similarly, the war file with my servlet needs a jboss-web.xml. Anyway, the whole point of an ear is tricking JBoss into deploying jar files with the right deployer. None of the info in application.xml and jboss-app.xml actually tells JBoss anything it couldn’t figure out itself.
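
The standard part is small enough to show (a sketch with placeholder file names; I won’t reproduce the jboss-app.xml and jboss-web.xml contents here since they are JBoss-version-specific):

    <application>
      <display-name>myapp</display-name>
      <module>
        <ejb>myapp-ejb.jar</ejb>           <!-- the jar with the session beans -->
      </module>
      <module>
        <web>
          <web-uri>myapp-web.war</web-uri> <!-- the war with the servlet -->
          <context-root>/myapp</context-root>
        </web>
      </module>
    </application>
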
The above took quite a few hours of trial and error; the Java code itself I wrote in under five minutes. The rest of the time was spent googling for the right bits and pieces. Any mistake leads to obscure errors, such as a NullPointerException when using the entity manager that should have been injected but wasn’t. Anyway, it now works, and I managed to JAX-WS 2.0 enable my code with a mere two annotations. Two simple annotations to expose a session bean as a web service is way cooler than generating WSDL + crappy stub code using Axis and hooking up your code to it. I’m definitely going to use this some more.
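
Those two annotations, sketched on a made-up bean (the app server generates the WSDL and the SOAP plumbing from this):

    import javax.ejb.Stateless;
    import javax.jws.WebMethod;
    import javax.jws.WebService;

    @WebService     // annotation one: expose this bean as a web service
    @Stateless
    public class GreeterBean {

        @WebMethod  // annotation two: publish this method as an operation
        public String greet(String name) {
            return "Hello, " + name;
        }
    }
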
The good:

  • It actually all works as advertised.
  • Writing the java code is considerably easier when you don’t have to worry about boilerplate stuff for setting up database connections, transactions, etc.
  • I will be able to do all of the above in ten minutes in future projects.
  • JAX WS is getting quite close to how I want to work with web services: i.e. keep the stinking WSDL out of my sight.

The bad:

  • Plenty of container specific gotchas left but much less than there used to be.
  • Still some pointless configuration files that I want to get rid of. application.xml, jboss-*.xml, persistence.xml, and web.xml could all be simplified or removed entirely. IMHO the jboss-* files are only there because the specs omit important features related to deployment; if the spec were improved, there would be no need for them.
  • There are likely to be a few issues I just haven’t run into yet.
  • Documentation is sketchy, misleading, and incomplete. The tutorial not explaining how to hook an EJB up to a servlet without annotation support (kind of essential) cost me quite a bit of time, since I first had to figure out why the EJB wasn’t being deployed and then how to fix that.

Overall, I’m positive and will continue to use this stuff.

that must hurt

Ouch, Forbes unleashes some criticism on Microsoft. Well deserved, IMHO. I don’t see the result of six years of development by thousands of software engineers reflected in the currently marketed feature set.

A few small predictions:

  • Vista and Office 2007 (or whatever it is called) are going to go down in history as the two releases that reversed the growth trend in Microsoft’s market share. I expect both products to do worse than their predecessors. First of all, businesses won’t touch either until forced by licensing conditions. Second, some businesses might opt for alternatives this time; Novell in particular seems well positioned. Also, Google will push some lightweight services into the market before the Vista release that are remarkably well suited for adoption in small businesses.
  • I expect this to have consequences for the current leadership. Specifically, I expect Steve & Bill to be pushed to the sidelines after this.
  • I expect this to be the last major revision of windows this decade. They may think they are going to do another release before 2010 but reality will catch up with them. In fact, I expect that Vista will be the last time they can justify the insane R&D budget to the shareholders. Six years of development resulting in replacement purchases only is going to be a tough sell to shareholders.
  • Clearly, after six years of development, Microsoft stands empty-handed. The shares are due for a downward correction. Things are not going well for Microsoft and they are underperforming.
  • This is not the last delay for Vista. They are hoping it will be ready but their development process is the reason it is being delayed so they can’t actually know right now that they will have a release in 365 days. My guess is that they won’t.
  • Customer feedback on the yet-to-be-announced additional beta will cause them to drop more features from Vista. The user interface in particular is going to get some heavy criticism (performance, general ugliness) and negative publicity. Something will need to be done about it. After dropping the features, they will move to release-candidate status, which may last quite a bit longer than they are now planning for.

good headphones

I enjoy listening to good music. In my opinion, good music is good because it sounds good on anything from a cheap mono transistor radio to the most expensive badass sound system money can buy. However, having something decent to play music on can make even crappy music enjoyable, and it really adds to the experience when playing something genuinely decent.

So I replaced my Sennheiser HD 210 headphones with a new pair from the same brand. The previous pair lasted me about four years. Lately something had started vibrating in an annoying way at particular (low) frequencies; other than that, the sound was as clear as when I bought them. The HD 280 I replaced them with today sounds better and doesn’t come with the annoying vibration. Like the HD 210 at the time, the HD 280 is slightly over 100€. Really good headphones (also available from Sennheiser) cost as much as 600€, but connecting those to a budget sound card is a bit pointless IMHO.

Anyway, the HD 280 so far sounds great and feels great. On top of that, it seems to do a good job of blocking out ambient sounds, such as the noise I’m making typing this and the fans of my PC. I’d say the HD 280 is definitely way better than the HD 210 I listened to for quite a while and was pretty happy with as well.

Unchecked Exceptions

This article presents an elaborate and IMHO misguided approach to handling exceptions: ONJava.com: An Exception Handling Framework for J2EE Applications

The author poses the problem that handling exceptions is tedious and leads to lots of boilerplate code. His proposed solution is to use unchecked, runtime exceptions. His reasoning is flawed for a number of reasons:

  • Most exceptions come from external components. When bad stuff happens, you’re supposed to do something (other than just logging). Thinking that bad stuff won’t happen is naive; it will. In most cases, the reason you get an exception is either that your assumptions were wrong (add some if statements to check) or that there is a real problem (something is misconfigured, the db is down, the network is down, …). In some poorly designed code there may be a third reason: the software is wrapping state information in an exception. Don’t do this, ever.
  • You shouldn’t create new exception types if you can reuse existing ones. Reusing existing types means less boilerplate code and more clarity. Nothing worse than having to figure out the cause of the cause of the cause of the exception that Tomcat logged.
  • A good IDE makes handling exceptions really easy (in Eclipse, ctrl+1 will give you handy quickfixes like “add throws declaration” and “add catch clause for exception”). If you’re typing all this stuff manually, you’re doing something wrong. That leaves the problem of code readability. Poorly written code tends to be unreadable; lots of exception handling code is a symptom, not a cause. If it’s unreadable: refactor it. In general, if your methods don’t fit on a 1600×1200 screen you might want to start thinking about refactoring, and if your classes regularly exceed 500 lines of code you have design issues. What makes code really unreadable is excessive coupling and lack of cohesion. Refactoring is the solution.
  • Unhandled exceptions either end up in front of the user, in the log or both and can leave your application in an unexpected state. Basically all these things are bad. Users should never see any stacktrace and should always get some kind of response from the application. Nothing worse than clicking next and ending up on the same screen because some runtime exception prevented the server from doing anything useful with the request (I see this a lot).

So in short: use a decent IDE (generate the boilerplate code) and handle the exception instead of throwing it to the caller if you can. If your code is still unreadable, don’t make it worse by throwing unchecked exceptions.
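
To illustrate what I mean by handling instead of rethrowing, a minimal sketch (the config file scenario is a made-up example):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    public class Config {

        // a missing or unreadable config file is an expected situation,
        // so recover from it here instead of throwing some unchecked
        // wrapper at the caller
        public static Properties load(String path) {
            Properties props = new Properties();
            props.setProperty("timeout", "30");   // sensible default
            FileInputStream in = null;
            try {
                in = new FileInputStream(path);
                props.load(in);
            } catch (IOException e) {
                // log it so a misconfiguration can be spotted, then carry on
                System.err.println("could not read " + path + ", using defaults: " + e);
            } finally {
                if (in != null) {
                    try { in.close(); } catch (IOException ignored) { }
                }
            }
            return props;
        }
    }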

In search of the One True Layout

A few weeks back, when I re-launched my blog in WordPress, I made a few comments about not being interested in working around the many specification and implementation bugs of CSS just to make a really nice, spiffy layout for my blog. That’s why you are looking at the (pretty) default template of WordPress.

This article captures my point perfectly:
Introduction – In search of the One True Layout

It describes a solution to a very common layout problem: how to position blocks on the page next to each other. The solution outlined works around several IE bugs. Then, once it works, they point out that to make it do what you really want (like put the whole thing in a containing block), you will need to work around even more bugs, including a few Mozilla bugs that surface when you use these workarounds. Oh, and the whole thing does not work in Mozilla anyway, due to a recently introduced bug that (on trunk) has just been fixed (today!).

That’s why I don’t want to do CSS/HTML based web design anymore. Any reasonably complicated design requires you to either compromise on what you want to achieve or use a whole series of bug workarounds, stretching the CSS implementation well beyond its specified/intended behaviour and hoping that next month’s browser updates won’t break things.

Unacceptable.

CSS is a hopelessly complicated and IMHO deeply flawed standard. Sadly, no alternatives are available.

stuff gets released

Lots of stuff has been released or is about to be released. Enough to warrant a little blog post.

Open Office 2.0

The 2.0 version is a nice improvement over 1.1. OOo 1.1 sucked IMHO, but 2.0 might convince me to actually use it. If only they fixed the bugs I reported four years ago on cross-references (not implemented properly). Without fixes for that, I can’t write large, structured content in it (i.e. scientific articles). But still, quite an improvement. Importing of Office files now actually works. I managed to import and save an important spreadsheet at work and removed about 9 MB of redundant data in the process (no idea where it came from), which makes working with the file over the network a lot less frustrating. It also seems to be able to work with Word documents without seriously messing up layout and internal structure (and it’s a lot faster on large documents). In short, compatibility now works more or less as advertised for the past four years (1.1 didn’t, even for trivial stuff). It’s still quite ugly, though, and lots of usability challenges remain unaddressed. Looking cool is not a product feature, nor is blending in with your OS. It remains the poor man’s alternative to MS Office.

Update: it looks like I was wrong about not messing up Word documents. I did some round-trip editing on a document written in Word, and OOo thoroughly messed it up. It turns out it doesn’t handle documents with adjusted page settings: it applied the page settings of the title page to the whole document. As a consequence it looks like shit, and all the headers and footers are in the wrong place. It’s a lot of work to fix, too.

Maven 2.0

I spent some time with a release candidate and decided not to use it. The reasons were a mix of poor documentation and a dislike of the structure it tries to enforce on everything you do. I’m pretty sure the ideas behind it are OK, but it just doesn’t feel right yet. In short, it didn’t pass the fifteen-minute test they put on their website: the documentation keeps telling you how beautiful and useful Maven is without actually telling you anything about how it works. Some crucial things are simply not explained, like how the dependencies actually work, where the repository it magically pulls all these jar files from lives, how to set up your own repository, etc.

In the end I prefer the more verbose nature of Ant. I have a lot of experience writing Ant build files now; I’ve even written a few Ant tasks at work. I happen to both like and need its flexibility a lot. I don’t see how Maven solves any of the more non-trivial stuff I do with it (other than by allowing me to use Ant).

The assumptions Maven is based on are IMHO incorrect. First of all, it is tool-centric: if you don’t structure your projects the way it likes, you’ll have lots of trouble trying to get it to do anything useful (which means it won’t be used where I work now, or at any other place with an existing, complex project). Secondly, it solves a lot of easy stuff that is not really a problem with Ant, and not much else. Compiling, generating javadoc, etc. is not that hard with Ant; in fact, most of the time I reuse the same tasks for that (by importing them). And finally, Maven just adds complexity. I find Maven projects hideously complicated in their structure. I’ve seen quite a few of them, and they all spread their source code over numerous modules in nested directories. I don’t want to structure my projects like that. But the most important thing is that Maven doesn’t actually solve any problem I have.

MySQL 5.0

Nice to finally see this arrive. I expect it to have some consequences for the use of commercial databases in the next few years. At work, our customers still prefer commercial stuff like Oracle or MSSQL. Increasingly this has more to do with irrationality than with features that are actually used. Performance certainly has little to do with it, nor does scalability. Our webapp is a few dozen simple tables with some optional stored procedures. The latter are what have kept us from fully supporting MySQL, though arguably they are not required in our app.

Firefox 1.5 RC1

The release candidate should be ready right about now, or very soon anyway. Beta 2 has worked flawlessly here, as did Beta 1. See my earlier review of the beta for more details.

iTunes review

After about a day of intensive use of iTunes (5.0.1.4, win32) I have decided to stick with it for a while. However, I’m not entirely happy with it yet and I’ll list my detailed criticism here.

1) It looks nice, but it is not very responsive, especially not when managing the amounts of music the iPod is intended for. I am talking about the noticeable lag when switching from one view to another, and the lack of feedback about what it is doing when, apparently, it can’t respond to mouse clicks right away.

2) I am used to Winamp, which has a nerdy interface but gets several things right in the media library that iTunes doesn’t. The most important thing, the notion of a currently playing list of songs, is missing. That means that if you are navigating your songs, you are also editing the list of songs that is currently playing (unless you are playing a playlist in a separate window). This is extremely annoying, because it means you can’t play albums and browse your other music at the same time, which is the way I prefer to listen to my music.

Steps to reproduce: put the library in browse mode (so you can select artist and then album), select one of your albums, and start playing the first song. Browse to some other album and click the next button. Instead of playing track 2 of the album you were listening to (IMHO the one and only desired behavior), the music stops, because by now a different set of files is selected.

A solution (or rather a workaround) would be to create a playlist for each album and play those. This cannot be done automatically, and I have 300+ albums. You can drag m3u files (a common playlist format that simply lists the files in the order they should be played) into iTunes (good), but if you drag more than one it merges them into one big playlist (bad).
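
For reference, an m3u file is nothing more than this (a made-up example; the #EXT lines are optional extras):

    #EXTM3U
    #EXTINF:215,Artist - First Track
    /music/Artist/Album/01 First Track.mp3
    #EXTINF:187,Artist - Second Track
    /music/Artist/Album/02 Second Track.mp3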

3) So if you have m3u files for your albums or other playlists, you still need to import them one by one. That sucks.

An alternative solution would be to treat albums as playlists when clicked upon.

The best solution is of course to do it like Winamp: until you start playing something new, the player plays whatever is in its current playlist, and if you click an album, that becomes the current playlist. So simple, so intuitive, and yet missing. Of course it conflicts with the misguided notion of putting checkboxes in a list of 5000 files. The browse mode sort of covers up for this design error by automatically unchecking everything hidden by the browser. That’s why your album gets unchecked when you select another album.

I can guess why Apple chooses not to fix this issue. It requires changing the user interface to add a list of currently selected songs; this product is for novice users, and adding user interface elements makes it more complex. Incidentally, the iPod is much smarter! It doesn’t change the current selection until you select something new, and browsing is not the same as selecting!

4) Double-clicking a playlist opens a new window! The idea of a playlist is to play one song after another (like I want to do with my albums). Effectively, the playlist becomes the active list once you start playing it. However, as discussed above, iTunes does not have a concept of a current playlist, so they ‘fixed’ it by opening a new window. IMHO this is needlessly confusing (for Windows users anyway; I understand multiple application windows are something Mac users are more used to).

5) Of course this conflicts with the minimize-to-tray option, which only works for the main window. You can also play playlists like albums, but then you run into issue 2 again. Conclusion: Apple’s fix for issue 2 is a direct cause of issue 4 (a serious usability issue) and of this one.

6) A separate issue is album art. Many users have file-based mp3 players like Winamp, which store album art as a separate folder.jpg file in the directory the album mp3s are in. iTunes has an album art feature but ignores those files. Worse, the only way to add album art is to add the image to each individual music file (so if your album has fifteen tracks, the same image must be added to fifteen files). Aside from the waste of disk space (or worse, flash space), this is just too cumbersome to manage. I found a neat tool that can automate fetching and adding album art for albums.

7) Finally, some issues with the help system. I normally do not refer to help files unless I need them; a day of using iTunes has forced me to do so several times, because the user interface has a lot of obscure buttons and options that are not always self-explanatory. For example, the menu option “consolidate library” sounds rather scary, and, as I found out by reading the help file, you probably don’t want to click it. Another beautiful option is “group compilations when browsing”. This one is harder to figure out, because the help search feature returns exactly one result for ‘compilation’, which is a huge list of tips.

The problem: the help information is not organized around the user interface like it should be. Task-based documentation is nice to have, but not when you are looking for information on button X in dialog Y.

So why do I still continue to use it? It is integrated in a clever way with my iPod 🙂 and I hope to find solutions to the problems above using 3rd party tools. iPod integration seems to work rather nicely: just plug it in and it synchronizes. I have the big version with plenty of space, so I just want everything I have synchronized to it, and this seems to work well. Except for one thing:

8) Apparently I have songs that iTunes can play but the iPod can’t. The synchronization process warns me of this by telling me it can’t play some songs, but fails to inform me which ones (so I can’t fix it)! The obvious solution would be to convert these songs to something it can play when copying them to the iPod (and keep the original in iTunes). All the tools to do this are available, so it should just do this, no questions asked.

UPDATE

I’ve found some more serious issues with drag and drop:
9) You can drag albums to the sidebar to create a playlist, and you can drag playlists to a folder, but you cannot drag albums to folders to create a playlist there.

10) Dragging multiple albums sadly creates only one playlist, so this is no solution for problem 2; it probably shares the same cause as problem 3.

Back online

Yesterday I came home, switched on my PC, started Firefox, and got a connection timeout on the start page. From there I followed my usual debug & escalate scheme: try to ping a domain, then an IP number; restart the modem (no connection, no line sync); run diagnostics on the modem (indeed unable to connect). I then rechecked all the modem and network settings and waited a bit to see if the problem would fix itself (this actually works sometimes). Finally, I concluded that the problem was not on my side.

So I picked up the phone to call KPN. Doh! The telephone was dead as well! IMHO that was pretty conclusive: things were broken, and definitely not on my side of the connection. It took four phone calls (on my mobile phone) to get this knowledge through to KPN, who eventually sent over an engineer who indeed concluded I was right (hey, the phone line seems to be dead!) and then figured out it was probably a problem on the other side of the cable. So he went there, found the problem, fixed the problem, and all is well.

Except that I wasted a lot of time that IMHO didn’t need to be wasted.

What really happened: regular maintenance in my area accidentally broke some cable (the engineer fixed this). I phoned the KPN helpdesk, where they went through their usual error checking procedure. Q: what’s your phone number? A: the same one I just typed into the fucking voice-operated menu. Q: did you mess in any way with your DSL connection (this is a dangerous question: if you say yes here, they will try to convince you that you caused the problem)? A: no, and it has worked fine for two years. This led to the inevitable conclusion that they had to do a line check (the only technical thing they can do), which they said was fine (it was broken!). They then decided to put an analyst on it who would call me back. This didn’t happen, so I called them (tip: keep calling them, it’s the only way to get things done). The next day I called them again. They then claimed that they’d tried to call me several times, to which I enquired how that was possible, since my phone was dead (duh!). They explained that they’d been trying the mobile number I gave them (they even read it back to me!). My phone is one of those modern thingies that shows you an overview of missed calls: none whatsoever. After I explained that to them, things went relatively fast. I had to go home early to meet with the aforementioned engineer.

So what went wrong here:

  • KPN was the cause of the problem.
  • When I called them they did not detect that the line was broken (WTF!)
  • Neither did they notice that there had been maintenance on my line that same day (at this point, dots should have been connected but weren’t).
  • They then failed to call me back and even lied to me about that when I called them!
  • I needlessly went home early and wasted a lot of time on a problem that could have been diagnosed and fixed without my involvement.

Why am I posting this? I’ve been here several times. I’ve had multiple incidents over the past few years, all of which took way too many phone calls to get fixed. In 2000, when my phone was installed and then my adsl connection, it took no less than five visits from an engineer to get things working. The whole process took weeks. Then, when a router in my area broke down and slowed things down to mere bytes per minute, it took three weeks to convince them the problem was on their side (only when I organized with some other disgruntled people in my area did the problem mysteriously fix itself). Then, when I moved to Nijmegen, they informed me my connection had been moved when in fact it had not been. That took about two weeks to get acknowledged and fixed.

In all these cases I did everything by the book and had to endure clueless, uninformed helpdesk employees, lies, the inevitable voice menu, etc.

I’m looking forward to getting rid of them permanently when I move to Finland in a few weeks.