Migrating iTunes from Windows to Mac

My recent iMac purchase came with the chore of moving stuff over from my old hard drive, which thankfully survived my old PC's premature demise. Naturally, part of what I wanted to migrate was my iTunes library.

The last time I moved my iTunes library was 3.5 years ago, when I bought my then new and now broken PC. At the time I wrote up a little howto on iTunes migration that over the years has gotten me tens of thousands of hits (dozens per day). Some people were kind enough to leave a message, and apparently those instructions helped several of them.

So, here's the follow-up.

Circumstances are slightly different now. For one thing, I "consolidated" my library ages ago, which makes iTunes move all files into its own directory. I also reorganized my library, fixed tagging issues, etc. In other words, I put a lot of time into organizing my music and keeping it organized, and I would hate to start from scratch. For another, we're several major iTunes versions down the road, and Apple has added album art fetching, Genius and loads of other features.

So, I started out by following the original instructions from three years ago. This nearly works, except that the music ends up not being playable from iTunes, for some vague reason. So the old way is no longer up to date, despite a comment from someone who successfully managed a Windows-to-Mac migration with it.

Next I tried something that I didn't believe would work, but which worked anyway:

  • Copy the library from your former C: drive to where it is supposed to live on your Mac (all files & directories).
  • Don't modify a single file; they're fine as they are.
  • Start iTunes.

Right after starting, it "updates" the library, and that's it. A lot faster and easier. Play counts, playlists, ratings, you name it: it's all there. Thanks, Apple; this is a lot better than three years ago and exactly the way it is supposed to work.
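
For the record, the whole migration boils down to replicating the old library folder into the default Mac location. A minimal sketch in Python; the source path and the assumption that the library lives in the default ~/Music/iTunes location are mine, adjust them to your setup:

    import shutil
    from pathlib import Path

    # Assumed locations; adjust both to match your own machines.
    OLD_LIBRARY = Path("/Volumes/OldDrive/My Music/iTunes")  # the old C: drive, mounted
    NEW_LIBRARY = Path.home() / "Music" / "iTunes"           # default iTunes location on a Mac

    # Copy everything verbatim: the library file, the XML, artwork and the
    # music folders. Nothing needs to be modified afterwards; iTunes fixes
    # up the paths itself on first start.
    shutil.copytree(OLD_LIBRARY, NEW_LIBRARY, dirs_exist_ok=True)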

OOo 3.0 Beta & cross references

It still looks butt ugly, but at least this bug was partially addressed in the latest beta release of OpenOffice. The opening date for this one: "Dec 19 19:13:00 +0000 2001". That's more than seven years ago! This showstopper has prevented me from writing my thesis, any scientific articles, or in fact anything serious in OpenOffice, since writing such things requires proper cross-reference functionality. But finally they have implemented the simple feature of being able to refer to the number of a paragraph elsewhere in the document using an actual cross reference. This is what you need to refer to numbered references, figures, tables, formulas, theorems, sections, etc.
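
To illustrate what I mean: this is the kind of cross reference LaTeX (more on that below) has supported forever. A minimal sketch:

    \documentclass{article}
    \begin{document}
    \section{Results}\label{sec:results}
    Interesting numbers.
    \section{Discussion}
    % \ref expands to the target's current number, so the reference
    % survives renumbering when sections, figures or tables move around.
    As argued in Section~\ref{sec:results}, the numbers are interesting.
    \end{document}

A word processor needs to offer exactly this through its UI, which is what OpenOffice failed to deliver for over seven years.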

The process for this bug went something like this: "you don't need cross references" (imagine a Star Wars-style gesture here). Really, for a bunch of people implementing a word processor, the sheer length of time they maintained this point of view was shocking, and to me it has always been a strong indication that they might not be that well suited to the job of creating an actual word processor. Then they went into an infinite loop of "hmm, maybe we can hack something in for OpenOffice 1.1, 2.0, 2.1, 2.2, 2.3, 2.4, 3.0" and "we need to fix this because imported Word documents are breaking over it" (never mind that real authors might need it for perfectly valid reasons). This went on for a very, very long time, and frankly I long ago stopped considering OpenOffice a serious alternative for my word processing.

I just tried it in the 3.0 beta, and it actually works now. Sort of. Testing new OOo releases for this has become somewhat of a ritual for me: for years, the first thing I did after downloading OOo was try to insert a few cross references before shaking my head and closing the window. The UI is still horribly unusable, but at least the feature is there now, if you know where to look for it.

Six years ago, FrameMaker was the only alternative that met my technical requirements: an actual word processor with a UI and features that support the authoring process (unlike LaTeX, which is a compiler), the ability to use cross references, and flexible but strictly applied formatting. Theoretically Word can do all of this as well, but I don't recommend it because of its bugginess and the surprising ease with which you can lose hours of work to Word automatically rearranging and moving things for you when you, e.g., insert a picture or paste a table (and yes, I've seen documents corrupt themselves from just these actions).

The last few years, I've used OpenOffice only to open the odd Word/PowerPoint file dropping into my inbox at home; I have close to no office application needs there. For my writing needs at work, I usually adapt to whatever my coauthors use (i.e. Word, and sometimes LaTeX). FrameMaker has basically been dying since Adobe bought it; the last version I used was 6.0, and the last occasion was writing my PhD thesis.

GX WebManager

Before joining Nokia, I worked for a small web startup in the Netherlands called <GX> Creative Online Development, during 2004 and 2005. When I started there, I was employee number forty-something (I like to think it was 42, but I'm not sure anymore). When I left, they had grown to close to a hundred employees, and judging from what I've heard since, they've continued to grow, roughly following Moore's law in terms of the number of employees. They also seem to have executed the strategy that took shape while I was still their release manager.

When I joined GX, GX WebManager was a pretty advanced in-house CMS that had already gone through several years of field use and evolution, and it enjoyed a rapidly growing number of deployments, including many big-name Dutch institutions such as KPN, Ajax and ABN AMRO. At that time it was very much an in-house thing that nobody outside the company ever touched, except through the provided UI of course, which was fully AJAX-based before the term became fashionable. By the time I left, we had upgraded the release processes to push out regular technology releases, first internally and later also to a growing number of partners that implemented GX WebManager for their customers.

I regularly check the GX website to see what they have been up to, and recently noticed that they have pushed out a community edition of GX WebManager. They've spent the last few years rearchitecting what was already a pretty cool CMS to refit it with a standardized content repository (JSR 170) based on Apache Jackrabbit and an OSGi container based on Apache Felix. This architecture is designed to allow easy creation of extensions by third parties. Martijn van Berkum and Arthur Meyer (product manager and lead architect) were already musing about how to do this while I was still there, and had gotten pretty far with initial designs and prototyping. Last year they pushed out GX WebManager 9.0, based on the new architecture, to their partners, and now 9.4 to the internet community. They seem to have pretty big ambitions to grow internationally, and in my experience they have the technology and know-how to do it.

So congratulations to them on completing this. If you are in the market for a CMS, go check out their products and portfolio.

Modular Windows

There is a nice article on Ars discussing Microsoft's business practices around Windows and how they appear not to be working that well lately. It used to be that your PC simply came with Windows; nowadays you have to select from around five different versions, and Microsoft is rumored to be moving to an even more modular and subscription-based model. The general idea is to squeeze as much revenue out of the market as possible. On paper it sounds good (for MS, that is).

Rather than buying an overpriced OS with everything and the kitchen sink, you buy just what you need. There's a huge difference between what businesses and some individuals are willing to spend and what the typical home user, who just wants a browser + Skype + The Sims, will pay. Typically the latter group ends up buying the cheapo version and the former group the everything-and-the-kitchen-sink version. The problem is that this leaves value unmonetized: some owners of the cheapo version might be interested in getting access to some of the features of the expensive version, but not all of them.

Now to the obvious problem with the discussed solution. By selling cheapo versions with most of the value removed and factored out into separate chunks you have to pay for, you dilute the overall value of the OS. So instead of buying an OS that can do X, Y and Z out of the box, you are buying an OS that can't do X, Y and Z out of the box. Marketing an OS that can't do stuff is a lot harder than selling one that can. Worse, they are opening the market to third parties that might offer something similar to X, Y and Z for a better price, or in some cases for free (beer & speech). Or, worse still, to companies selling an alternative OS that does X, Y and Z.

That, in a nutshell, is what the Ars article discusses, and it is why Apple's Mac OS X market share is approaching double-digit percentages. I've been giving this some serious thought lately, and I'm also noticing the spike in Safari users in my website statistics.

Anyway, the reason for this write-up is that the article overlooks an important argument, one that I believe is relevant to more markets than just operating systems. In general, the tie between the OS and features such as photo galleries, online backups or TV UIs is artificial. Microsoft only adds features like these to make the overall OS more valuable. That is, they are looking to improve the value of the OS, not of the photo gallery. However, the ongoing and inevitable commoditization of software actually shifts value to the new features. Especially when bundled with online subscriptions, things like online photo galleries can be quite good business; Flickr, for example, has many paying subscribers.

Naturally MS is interested in markets like this (which is why they are interested in Yahoo). However, the tie-in to the OS constrains the market. Why would you not want to sell these services to Apple users? Why not to Sony PlayStation owners? Why would you want to artificially limit who can access your service just to boost sales of your OS? As long as you were trying to artificially (and, in MS's case, apparently illegally) boost the value of your core OS, bundling was a valid strategy. However, as soon as the value shifts, bundling becomes a brake on market growth. The OS market has commoditized to the point where you can get something like Ubuntu for free, which for the low-end market is about as good as the cheapo version of Vista (see my various reviews of Ubuntu for why I'm not ready to claim "better" yet).

So the difference between MS and Google, who is eating their lunch in the services arena, is that the latter is not handicapped by 20 years of Windows legacy and can freely innovate and grow market share without having to worry about maintaining a revenue stream from legacy software. Google doesn't have to sell OS licenses, so they give away software on all platforms to draw more users to their services, which is where they make their money.

Naturally, Google has a lot of software engineers working around the clock to create more value for them. Where possible, Google actively collaborates with the open source community, because they know that while they won't make any money from commodities like browsers, file systems and other important software components, they do depend on those things working as well as possible and evolving in the right direction. Few people appreciate this, but that, and not ads, is why Google sponsors Firefox. It's a brilliant strategy, and it forces their main competitor to keep investing in Internet Explorer rather than shifting those resources to competing with Google directly. $50 million is pocket money if it makes your main competitor crap their pants and waste resources keeping up with you in a market where you are not even trying to make money.

You might have noticed that I have carefully avoided discussing Google's and Microsoft's mobile service strategies, and also that yours truly works for Nokia. Well, my readers ought to be smart enough to figure out what I'm trying to say here, aren't you? :-)

motorola cable modem & bittorrent

I've blogged several times already about my problems connecting my PC to the internet:

  • Getting a cable modem was easy.
  • I mistakenly bought a Siemens wireless USB stick. Solution: don't buy crap; use a decent brand. Currently I'm using an SMC PCI card, my IBM/Lenovo laptop's built-in network card, and my Nokia E70 with its WLAN support.
  • The driver software going paranoid from time to time.

A remaining problem that has been annoying me for months is that my cable modem, a Motorola SBG900E, has some issues. Most of the time it works fine, except when applications like BitTorrent run; then it just resets more or less continuously. Motorola apparently does not believe in supporting their customers with useful advice or firmware updates, so that has basically meant no BitTorrent for the past few months. BitTorrent is a resource-intensive protocol, and it probably represents a worst-case scenario for the modem in terms of the number of connections, bandwidth consumed, etc.
Some googling ("motorola modem reset bittorrent") once again brought me a workaround. It is not the first time I've found out the hard way that solving a technical problem is just a matter of providing Google with the right keywords. Believe me, I had searched many times using the type number of my modem, which brought up nothing but unrelated problems and advertising material.
Anyway, one of the people in this forum was kind enough to explain that the problem is the number of connections the BitTorrent client tries to open simultaneously. If this exceeds a certain number, the modem firmware crashes and the modem resets (apparently earlier models just crashed and did not reset; lucky me :-). The workaround consists of telling your BitTorrent client not to open too many connections at the same time. Having, say, 50-100 connections open at the same time is no problem, but opening them all at once is. True, most BitTorrent clients do not have such a setting, but recent versions of my favourite one (Azureus) do. It's called "max simultaneous outbound connection attempts", it defaults to 16, and you can find it under Connection -> Advanced Network Settings. I find that, so far, limiting it to 8 prevents the modem from crashing.
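
For the curious: the principle behind that setting is plain connection-attempt throttling. A minimal sketch of the idea in Python (the peer list, port and limit are made-up illustrations, and this is obviously not how Azureus implements it):

    import asyncio

    MAX_ATTEMPTS = 8  # the equivalent of "max simultaneous outbound connection attempts"

    async def talk_to_peer(host: str, port: int, gate: asyncio.Semaphore) -> None:
        # The semaphore caps how many TCP connection attempts are in flight
        # at once; it is the connect burst, not the number of established
        # connections, that crashes the modem firmware.
        async with gate:
            reader, writer = await asyncio.open_connection(host, port)
        # ... exchange data with the peer here ...
        writer.close()
        await writer.wait_closed()

    async def main(peers: list[tuple[str, int]]) -> None:
        gate = asyncio.Semaphore(MAX_ATTEMPTS)
        await asyncio.gather(*(talk_to_peer(host, port, gate) for host, port in peers))

    # Hypothetical peers; a real client gets these from the tracker.
    asyncio.run(main([("192.0.2.1", 6881), ("192.0.2.2", 6881)]))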

Problem solved 🙂

reinstalling windows sucks

Luckily my parents' laptop came with a Windows XP Pro CD and a valid product key. So, after backing up the important files using Ubuntu, I proceeded to install Windows. I had forgotten how annoying installing Windows can be.

After having seen Ubuntu, the experience is, well, extremely user-unfriendly and extremely likely to end in disaster unless you know what to do. For the record, the Ubuntu live DVD recognized all the hardware out of the box, without any intervention. Impressive.
The easy, but tedious, part is putting the Windows CD in the tray. The installer gives you a few seconds to trigger it by hitting enter. Then you are confronted with a slowly loading, text-only wizard where you have to do something very risky: remove a partition and create a new one. The provided tool is outdated by the standards set by Linux distributions. The point of this step is to start with a clean disk, so getting it right is important. 99% of the user population would at this point likely make the wrong choice and either end up installing a very old XP over their existing installation, or installing it in some weird location on a hard drive that is still full of spyware. OK, been there, done that, so: clickety, and a nice clean drive.

Then it starts copying. It reboots. More copying, and then some questions about where I am. More copying. Product key. More copying. Modem & network settings. This step fails, but I end up being online anyway (the network cable is plugged in, so how hard can it be?). Another reboot. Then, during the first boot, a wizard with annoyingly loud, shitty music in the background; at this point my laptop's volume buttons are not yet operational (do you have to be logged in for those?). Activation, and finally done. The whole thing takes about one and a half hours, and it requires constant fiddling with wizards, so you can't just leave it alone to do its thing.
Oh wait. We've only just started. This is an old Windows CD, so I do not yet have the billion security updates that are the whole point of this operation and that do not install themselves without manual intervention. OK, two hours later I'm patched all the way up to two years ago (SP2). Now the auto-update finally kicks in. Another hour, three reboots and 50+ security updates (not kidding) later, it is finally over. I think.

Oh wait. Much of the hardware is actually not working correctly. The USB ports, the PC card slot and the graphics chip are all misbehaving, and so is my wireless card (an SMC PCMCIA card; plugging it in hangs the system). And damn, this Internet Explorer default page is annoying. And thanks for showing me that unavoidable welcome-to-Windows promo again (you have to watch it to get rid of the annoying tray icon). And no thanks for the Hotmail/MSN thingy.
It turns out the Compaq support site has a nice list of stuff you could install. The problem with this site is twofold: there are multiple things to choose from, without any clear guidance on what to install and what to skip, and all of the downloads (and there are a lot) are conveniently named sp4325345345.exe, where only the number varies. The idiot who invented that naming scheme should be taken out and shot. Anyway, I installed half a dozen of their driver packs, and everything seems to work now. At least several of the downloads there are crucial to the correct operation of the laptop.

There is no way my parents could have done all of this on their own. The sad truth with Windows is that if it is not configured correctly out of the box, it is really difficult to get everything working correctly. My mother was asking me this weekend whether she could do it herself, or go to a shop and have somebody do it for her. If only that were possible :-). In terms of labour cost, the price would be that of a nice low-end PC, just for babysitting all of the above. And since there is plenty of opportunity for mistakes, it is realistically probably cheaper to just go to Dell and buy new hardware & software.

spyware sucks, but ubuntu doesn’t

Last weekend (Easter holiday, long weekend) was a good opportunity to visit my parents in the Netherlands. Apart from being their beloved son, I'm also their system administrator. Last time, I made a mistake: I left them behind with a partially secured Windows machine. The thing was behind a router and they were using Firefox (I saw to that personally). Anyway, when I checked this weekend, the machine was full of very nasty spyware. It was basically unusable, and the spyware interfered with normal use of the machine.
I tried to fix it using the usual tools (Ad-Aware, Spybot), but this did not work 100%. Both tools managed (on multiple attempts) to identify and remove a shitload of spyware, but the remaining few 'fixed' all that as soon as the tools were done. Eventually I thought the machine was clean, but then the rebooting started. After booting, everything would look alright, and then the machine would reboot. Effectively I only had a few minutes to figure out what was going on before it happened again. That gets old real quick.
That was roughly when I decided to bring the laptop home and start from scratch. Of course, before doing so, I had to make an attempt to back up my parents' files and family photos. Accessing the laptop in its current state is pretty much impossible, hence the title of this post. I stand corrected: Ubuntu does not suck after all. It's merely very unsuitable for end users :-).

A few weeks back I posted my not-so-positive review of Ubuntu. Currently I'm using it to rescue some files, but I won't bother actually installing it on the laptop for my parents. The main reason is that I have a hard enough job supporting my parents without them having to learn an entirely new OS. After years of practice they can sort of do things by themselves now: burning a CD, editing photos, doing their banking, etc. I have no desire to start from scratch with them on all of that.

But the point is that it would work very well. I booted into the live DVD image (I actually mistook the DVD for my Knoppix CD) and was pleasantly surprised to find a booted Ubuntu desktop when I came back. All the hardware, including the SMC PCMCIA wireless card, onboard sound and display, had been recognized. The wireless card needed to be configured for my network, which was easy once I had found the tool. Confusingly, there are both a System and an Administration menu, and both contain network-related tools.

Then I had to mount the NTFS partition. I tried to use the disks tool, but it is useless: you can mount the partition but not access it unless you are root, which is not very convenient in Ubuntu, where you can't log in as root. I had to do some googling to make the correct changes to fstab manually, and then I mounted the partition using the good old command line. That worked. Then I sshed (using Nautilus) into my Windows box (which runs Cygwin), and I'm currently uploading some crucial files. After that completes, I'll wipe the laptop and be sure to lock it down properly this time.
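
For reference, the fstab change amounted to a single line along these lines (the device name and mount point are from my setup and will differ per machine; ro is all a rescue needs, and the umask makes the files readable for the normal desktop user):

    # /etc/fstab: mount the Windows partition read-only for the rescue
    /dev/hda1  /media/windows  ntfs  ro,nls=utf8,umask=0222  0  0

After creating the mount point with sudo mkdir /media/windows, a plain sudo mount /media/windows picks up the options from fstab.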

lessons learned:

  • no auto-update + no firewall + unsecured WLAN = very bad idea
  • Firefox + router != enough protection
  • Ad-Aware and Spybot are not good enough for recovery; they are fine prevention tools, however
  • Ubuntu doesn't suck; it's a nice addition to any system administrator's toolbox 🙂

S80

I first began considering buying a new digital camera around 2004. Forever drooling over dpreview and other sites, I ultimately decided not to buy the Canon PowerShot A90, 520, 620, IS, or IS2. Reasons varied from "I don't really need one right now" and "camera X sure looks nice, but let's wait for camera Y" to "my A40 is still pretty nice".

After reading the reviews of the Canon S80, I decided this camera was perfect for me. A good quality lens, smaller than my clunky Canon PowerShot A40, a nice zoom range, a wide-angle lens, 8 megapixels (yeah I know, not relevant), etc.

That was about three months ago. Since then I've gone to several stores and played with a test model at Verkkokauppa several times. The price was essentially right. Then I queued up to buy it, waited, waited some more, and left the store. I went back on a Saturday (stupid) and again ended up not buying. Then the camera was sold out (i.e. I queued for 10 minutes and then left the store disappointed). Today I checked again: they had it, and money was exchanged.

Whoohoo! Currently the battery is charging; after that I'll take a few pictures with it and maybe upload one or two.

More on MS

It's now a few days after my previous post on the Vista delay, and the rumour machine is rolling. A few days ago, a wild claim that 60% of Vista needs a rewrite started circulating. Inaccurate, of course, but it woke some people up. Now this blog post on a blog about Microsoft (frequented by many of their employees) has made it to Slashdot. Regardless of the accuracy of any statements in that post, this is a PR disaster. Lots of people (the entire IT industry, stockholders) read Slashdot.

There are lots of interesting details in the comments on that post which suggest that MS has at least the following problems:

  • Management is clueless and generally out of touch with development progress. Claims about release dates are totally disconnected from software development planning; release dates announced in press releases are wishful thinking at best. This is one of the reasons the date slips so often.
  • Middle management is worse. Either they have failed to communicate down when to release, or to communicate up when their people tell them a release is actually impossible. Either way, they have failed to do what middle management is supposed to do: implement corporate strategy, and communicate up when that strategy is not working as expected.
  • Software engineers within MS are extremely frustrated with this. Enough to voice their opinions on a public blog. A lot would need to happen before I started criticizing my employer in public; I know where the money comes from. Really, I'd probably leave long before it got to that point. So I interpret this as MS having a few extremely frustrated employees who may very well represent a large, silently disgruntled majority. Steve Ballmer seems to be rather unpopular in his own company right now (never mind his external image).
  • The best MS software engineers are leaving and are being replaced with people of lesser quality, because MS now has to compete in the job market. I remember that a few years ago MS could cherry-pick from the job market; now the cherries are leaving. Really, if your best people are leaving and you have billions in cash to fix whatever problem is causing them to leave, you are doing something wrong (like not fixing the problem).
  • Microsoft employees are spilling stock-influencing information on public blogs. Openness is one thing, but this is an out-of-control situation. Regardless of whether they are right, these people are doing a lot of damage.

It's probably not as bad as the comments suggest, but it is bad enough for MS, if only for all the negative PR. Anyway, I might be revisiting the predictions I made in my previous post; I have a feeling some of them might prove correct within a few months already. Very amusing 🙂

that must hurt

Ouch. Forbes unleashes some criticism on Microsoft. Well deserved, IMHO. I don't see the result of six years of development by thousands of software engineers reflected in the currently marketed feature set.

A few small predictions:

  • Vista and Office 2007 (or whatever it ends up being called) are going to go down in history as the two releases that reversed the growth trend in Microsoft's market share. I expect both products to do worse than their predecessors. First of all, businesses won't touch either until forced to by licensing conditions. Second, some businesses might opt for alternatives this time; Novell in particular seems well positioned. Also, Google will push some lightweight services into the market before the Vista release that are remarkably well suited for adoption by small businesses.
  • I expect this to have consequences for the current leadership. Specifically, I expect Steve & Bill to be pushed to the sidelines afterwards.
  • I expect this to be the last major revision of Windows this decade. They may think they are going to do another release before 2010, but reality will catch up with them. In fact, I expect Vista to be the last time they can justify the insane R&D budget to the shareholders: six years of development resulting only in replacement purchases is going to be a tough sell.
  • Clearly, after six years of development, Microsoft stands empty-handed. The shares are due for a downward correction. Things are not going well for Microsoft, and they are underperforming.
  • This is not the last delay for Vista. They are hoping it will be ready, but their development process is the very reason it keeps being delayed, so they cannot actually know right now that they will have a release in 365 days. My guess is that they won't.
  • Customer feedback on the yet-to-be-announced additional beta will cause them to drop more features from Vista. The user interface in particular is going to get some heavy criticism (performance, general ugliness) and negative publicity, and something will need to be done about it. After dropping the features, they will move to release candidate status, which may last quite a bit longer than they are now planning for.