One week with the N900

This is me pimping a Nokia product. I work for Nokia and I rarely do this, and never without believing 100% what I write.

With some delays, I managed to get my hands on an N900. Our internal ordering system took nearly five months to deliver this thing (something involving bureaucracy and the popularity of this device; I guess the external paying customers lining up for it had some priority too). But it was well worth the wait.

For those who don’t know what the N900 is: it is the first phone in the series of linux based tablet devices from Nokia that started with the N770 in 2005 and continued with the N800 (which I still have) and the N810. This series of devices was the start of something beautiful a few years ago. Not hindered by any operator limitations, these were essentially pocketable linux pcs. So naturally the engineers working on this selected Debian Linux and named it Maemo Linux. Then they built a tool chain and ecosystem around the platform and tapped into all the readily available OSS goodies. It was great. Any research lab in need of some hackable devices jumped on this. As I recall from my own pervasive computing research days, most of the researchers in this field were using these devices to study all sorts of stuff. Because no matter how obscure your OSS project is, barring screen and cpu limitations you can probably get it going on Maemo Linux. You can, and people did. Most of Ubuntu cross compiles to Maemo without much effort. For example, I was running tomcat, equinox, and lucene on a port of Sun’s CDC J2ME environment (roughly equivalent to java 1.4) on an N800 three years ago. It actually ran well too. In short, these babies are the ultimate hacker’s devices. There really is no alternative in terms of openness or scope in the industry. Android may be linux deep down inside, and Palm OS may be linux deep down inside, but Maemo is Debian Linux without ifs or buts.

And now there is the N900. The N900 is about as thick as an N97, about as long, about 3mm wider, and slightly heavier (I actually did the comparison). Unlike its predecessors, it is a phone as well as an internet tablet running Debian linux. So you get all the goodness from the past versions with a 2X performance and memory boost, a good quality phone stack (hey, it’s still a Nokia), and lots of UI work. While it has some rough edges (the software, not the hardware), it is surprisingly useful as a smart phone despite its current status as an early adopter’s device. It has one of the best browsers around (some would say the best); the UI is responsive and very touch friendly, it multitasks without effort, and it comes with tons of goodies like SIP, skype, google talk, Facebook, and twitter support. And that’s just the out of the box stuff. You can do most of what the N900 does on an iphone. But not all of it at once. You can on the N900, plus some.

So, best phone ever as far as I’m concerned. MeeGo, the consumer friendly version of Maemo that was born out of our recent deal with Intel and Moblin, is coming soon in the form of new Nokia phones (you can already get it for netbooks). I can’t wait for world+dog to start porting over their favorite software to that. Meanwhile, I just use it as is, which is plenty good. It’s a great smart phone that plays back my music and browses the web (including Google Maps, Youtube, Facebook, and other web 2.0 heavy AJAX & flash sites) without much effort. Most of the iphone optimized web apps work great on the N900 as well. For example, I use the iphone optimized mobile Google Reader (http://www.google.com/reader/i). Mail support is excellent on this device; I use mail for exchange push email and gmail. I can do regular calls, VOIP, Skype (with video), IM, and upload photos/videos to facebook, flickr, and other networks. Functionally there is little left to desire. Though somebody getting a foursquare client beyond the early alpha stage would be nice (there are two of those).

N900 & Slashdot

I just unleashed the stuff below in a slashdot thread. 10 years ago I was a regular there (posting multiple times per day) and today I realized that I hadn’t actually even bothered to sign into slashdot since buying a mac a few months ago. Anyway, since I spent time writing this I might as well repost it here. On a side note, they support OpenID for login now! Cool!

…. The next-gen Nokia phone [arstechnica.com] on the other hand (successor to the N900) will get all the hardware features of the iPhone, but with the openness of a linux software stack. Want to make an app that downloads podcasts? Fine! Want to use your phone as a modem? No problem! In fact, no corporation enforcing their moral or business rules on how you use your phone, or alienation of talented developers [macworld.com]!

You might make the case that the N900 already has the better hardware when you compare it to the iphone. And for all the people dismissing Nokia as just a hardware company: there’s tons of non-trivial Nokia IPR in the software stack as well (not all OSS, admittedly) that provides lots of advantages in the performance and energy efficiency domain, excellent multimedia support (something a lot of smart phones are really bad at), hardware acceleration, etc. Essentially, most vendors ship different combinations of chips coming from a very small range of companies, so from that point of view it doesn’t really matter what you buy. The software on top makes all the difference, and the immaturity of newer platforms such as Android can be a real deal breaker when it comes to e.g. battery life, multimedia support, support for peripherals, etc. There’s a difference between running linux on a phone and running it well. Nokia has invested heavily in the latter and employs masses of people specialized in tweaking hardware and software to get the most out of the device.

But the real beauty of the N900 for the slashdot crowd is simply the fact that it doesn’t require hacks or cracks: Nokia actively supports & encourages hackers with features, open source developer tools, websites, documentation, sponsoring, etc. Google does that to some extent with Android, but the OS is off limits for normal users. Apple actively tries to stop people from bypassing the appstore and is pretty hostile to attempts to modify the OS in ways they don’t like. Forget about other platforms. Palm technically uses linux, but they are still keeping even the javascript + html API they have away from users. It might as well be completely closed source; you wouldn’t know the difference.

On the other hand, the OS on the N900 is Debian. Like on Debian, the package manager is configured in /etc/apt/sources.list, which apt-get reads (with dpkg doing the low level work), and both work just as you would expect on any decent Debian distribution. You have root access, therefore you can modify any file, including sources.list. Much of Ubuntu actually compiles with little or no modification, and most of the problems you are likely to encounter relate to the small screen size. All it takes to get to that software is pointing your phone at the appropriate repositories. There was at some point even a Nokia sponsored Ubuntu port to ARM, so there is no lack of stuff that you can install. Including stuff that is pretty pointless on a smart phone (like large parts of KDE). But hey, you can do it! Games, productivity tools, you name it, and there probably is some geek out there who managed to get it to build for Maemo. If you can write software, package it as a Debian package, and cross compile it to ARM (using the excellent OSS tooling, of course), there’s a good chance it will just work.
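
To make that concrete, here is a minimal sketch of getting extra software onto the device (the repository URL and package name are placeholders, not real ones; on the N900 you typically become root via the rootsh package first):

    # Placeholder repository and package name; substitute real ones.
    sudo gainroot                          # become root (provided by the rootsh package)
    echo "deb http://repository.example.org/ fremantle free non-free" \
        >> /etc/apt/sources.list           # point apt at an extra repository
    apt-get update                         # fetch the new package indexes
    apt-get install some-obscure-tool      # installs like on any Debian box
    dpkg -l some-obscure-tool              # confirm via dpkg that it is there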

So, you can modify the device to your liking at a level no other mainstream vendor allows. Having a modifiable Debian linux system with free access to all of the OS, on top of what is essentially a very compact touch screen device complete with multiple radios (bluetooth, 3G, wlan), sensors (GPS, motion, light, sound), graphics, and a dsp, should be enough to make any self-respecting geek drool.

Now with the N900 you get all of that, shipped as a fully functional smart phone with all of the features Nokia phones are popular for: excellent voice quality and phone features, decent battery life (of course, with all the radios turned on and video & audio playing non-stop, your mileage may vary), great build quality and form factor, good support for bluetooth and other accessories, etc. It doesn’t get more open than this in the current phone market, and this is still the largest mobile phone manufacturer in the world.

In other words, Nokia is sticking out its neck for you by developing and launching this device & platform while proclaiming it to be the future of Nokia smart phones. It’s risking a lot here, because there are lots of parties in the market that are in the business of denying developers freedom and securing exclusive access to mobile phone software. If you care about stuff like this, vote with your feet and buy this or similarly open devices (suggestions anyone?) from operators that support you instead of preventing you from doing so. If Nokia succeeds here, that’s a big win for the OSS community.

Disclaimer: I work for Nokia and I’m merely expressing my own views and not representing my employer in any way. That being said, I rarely actively promote any of our products and I choose to do so with this one for one reason: I believe every single word of it.

Photos Rome

I’ve been polishing my photos in Picasa and ended up using the nice sync feature to upload them to the corresponding photo sharing site as well. So go here to enjoy them.

Picasa is a bit of a downgrade, since I used to spend way too much time polishing with powerful tools such as Photoshop, Gimp, etc. However, I find I like the workflow in Picasa better. And while the few basic edits you can do there leave something to be desired, it’s good enough. I have the Gimp installed as well, but it’s so slow, buggy, and weird to work with that it’s offensive, and I won’t be investing in Photoshop on my new mac since the price is just way too high. Technically I could go for Photoshop Elements, except it doesn’t come with some features that I really would want (24 & 32 bit images, LAB mode, layers & ways to combine them, flexible masking, etc). You can sort of do some of that in the Gimp, but it is frankly painful and the results tend to be underwhelming. I have some hopes that this KOffice photo thingy might live up to some of the hype. I’ll be giving it a try as soon as I can lay my hands on some Mac OS X binaries. Otherwise, if anyone knows of any other OSS photography tools for Mac, I’d be very interested. I’m already a Hugin user, as blogged earlier this week (see the above linked album for some nice panoramas).

Photos Zurich and Dagstuhl

I’m traveling a lot lately. Two weeks ago I was in Zurich at the first Internet of Things Conference. I already uploaded some pictures last week, and some more today.

Last week I also attended a Dagstuhl seminar on “Combining the advantages of product lines and open source” to present the position paper I posted some time ago. Naturally, I also took some pictures there.

Interestingly, one of the participants was Daniel German, who does a lot of interesting things, including publishing good articles on software evolution and working on a SourceForge project called panotools that happens to power most of what makes Hugin cool. Hugin is of course the tool I have been using for some time now to stitch photos together into very nice panoramas. I felt envious and lucky at the same time watching him take photos. Envious of his nice Canon 40D with its very cool fish eye lens, and lucky because his photo bag was huge and probably quite heavy, considering the fact that he had two more lenses in there.

Attendees of the Dagstuhl Seminar

The whole gang together. Daniel is the guy in the orange shirt.

One of the best features of Dagstuhl: 1 beer = €1. Not quite free beer but close enough. And after all, OSS is about free speech, and cheap beer definitely loosens tongues.

Feisty Fawn

I tried (again) to install Ubuntu and ran into a critical bug that would have left my system unbootable if it weren’t for the fact that I know how to undo the damage the installer does. Since this will no doubt piss off the average linux fanboy, let me elaborate.

  • I ran into the “scanning the mirrors” bug. Google for “scanning the mirrors” + Ubuntu and you will find plenty of material.
  • This means the installer hangs indefinitely trying to scan remote mirrors of apt repositories.
  • Predictably, the servers are under quite a bit of load right now, so this is extremely likely not to work for a lot of people. I recall running into the same issue a month ago with Edgy, when there was no such excuse.
  • The real bug is that the installer doesn’t detect that things are not working and fail gracefully.
  • Gracefully, in the sense that it should:
    • Allow the user to skip this step (all I had was a close button, which was my only way to interrupt the scanning the mirrors procedure).
    • Never, ever let the user exit the installer AFTER removing the boot flag on the ntfs partition but before installing an alternative bootloader (see the sketch after this list for undoing that particular bit of damage).
    • Recover from the fact that the network times out or the servers are down. There’s no excuse for not handling something as common as network failure. Retry is generally a stupid strategy after the second or third attempt.
    • I actually ran ifdown to shut the network down (to force it into detecting there was no connection) and it still didn’t detect the network failure!
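
For the record, that sketch: putting the boot flag back is a one-liner once you have booted something else, such as a live cd (the device and partition number below are examples, not a prescription):

    # Example only: find the real ntfs partition with "fdisk -l" first.
    sudo parted /dev/sda set 1 boot on   # put the boot flag back on partition 1
    # or use fdisk interactively, where the 'a' command toggles the bootable flag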

The “scanning the mirrors” bug is a strange thing. Ubuntu actually configured my network correctly and I could, for example, browse to Google. However, for some reason crucial server side stuff was unreachable. Since Ubuntu never gave an error, I can’t tell you what went wrong there. This in itself is a bug, since Murphy’s law pretty damn much guarantees that potential network unreliability translates into users experiencing network problems during installation.

Could I have fixed things? Probably. Will I do so? Probably not; my main reason for trying out 7.04 was to verify that not much has changed in terms of installer friendliness since 6.10. All my suspicions were confirmed. In short, the thing still is a usability nightmare. The underlying problem is that the installer is built on the assumption that the technology underneath works properly. In light of my 10+ years of experience with installing linux, this is extremely misguided. The installer merely pretends everything is ok. Sometimes it isn’t, and that is where a usable system distinguishes itself from an unusable one: by offering meaningful ways out. For example, display configuration failed (again, see my earlier post on the Edgy installation), which means I was looking at a nice blurry 1024×768 screen on a monitor with a different native resolution. I suspect the nvidia + samsung LCD screen combo is quite popular, so that probably means lots of users end up with misconfigured Ubuntu setups. The only way to fix it is after the installation, using various commandline tools. Been there, done that. The resolution change dialog is totally inadequate, because it mistakenly assumes the display was configured correctly and only offers the resolutions it detected (i.e. 640×480 to 1024×768 @60Hz; no hardware has shipped this decade with that as its actual maximum specs).
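
For those wondering what “various commandline tools” means in practice, it is roughly this (a sketch from memory, so the exact prompts and paths may differ on your system; pick your monitor’s actual native mode):

    # Post-install display fix, roughly as I remember doing it:
    sudo dpkg-reconfigure xserver-xorg     # re-run the X configuration questions
    sudo nano /etc/X11/xorg.conf           # or hand-edit the Modes line yourself,
                                           #   e.g. Modes "1280x1024" "1024x768"
    sudo /etc/init.d/gdm restart           # restart the display manager to apply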

I found the two main new features in the installer misguided and confusing. The installer offered to migrate my settings and create an account. The next screen then asks me who I am. Eh … didn’t I just provide a user/password combo? And BTW, what does it mean to migrate My Documents? Does it mean the installer will go ahead and actually copy everything (which I don’t want, it’s about 80GB), or will it merely mount the ntfs disk (which would be useful)? I need a little more info to make an informed decision here.

The other main new feature is that the installer now actually advertises the binary drivers that most end users would probably want installed by default. That’s good. The problem is that the dialog designed to offer them is very confusing (using terms such as “unsupported”) and also that the drivers are not actually on the cd. In other words, I couldn’t install them, due to the same mysterious network problem outlined above. Similarly, the dialog doesn’t seem to have a good solution for network failure. The reality with these drivers is that they are the only thing the hardware vendors support (i.e. you get better support for the hardware, from the actual vendor that provided it). The problem, of course, is that they are ideologically incompatible with some elements in the OSS community. Which probably led to the no doubt highly debated blob of text explaining to the user that it is not recommended to install the “unsupported” software, which happens to be the only way to get your $300 video card working as advertised. The dialog does not do a good job of explaining this, which is its primary job.

-Ofun

I found this article rather insightful: -Ofun

I agree with most of it. Many software projects (commercial, oss, big & small) have strict guidelines with respect to write access to source repositories and the usage of those rights. As the author observes, many of these restrictions find their roots in the limited ability of legacy revision control systems to roll back undesirable changes and to merge sets of coherent changes, and not in any inherent process advantages (like enforcing reviews or preventing malicious commits). Consequently, this practice restricts programmers in their creativity.

Inviting creative developers to commit on a source repository is a very desirable thing. It should be made as easy as possible for them to do their thing.

On more than one occasion I have spent some time looking at source code from some OSS project (to figure out what was going wrong in my own code). Very often my hands start to itch to make some trivial changes (refactor a bit, optimize a bit, add some functionality I need). In all of these cases I ended up not making the changes, because committing them would have required a lengthy process involving:
– getting on the mailing list
– figuring out who to discuss the change with
– discussing the change and getting permission to send it to this person
– waiting for that person to accept/reject the change
This can be a lengthy process, and upfront you already feel guilty about contacting the person about such a trivial change, given your limited knowledge of the system. In short, the size of the project and its membership scares off all interested developers except the ones determined to get their change in.

What I’d like to do is this:
– Check out tomcat (I work with tomcat a lot; fill in your favorite OSS project).
– Make some change I think is worthwhile having, without worrying about consequences, opinions of others, etc.
– Commit it with a clear message explaining why I changed it.
– Leave it to the people who run the project to laugh away my ignorance or accept the change as they see fit.
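
A distributed version control system gives you exactly that workflow. A sketch with git, purely as an illustration (the URL is made up; tomcat actually lives in Apache’s subversion):

    # Illustration only: the repository URL is made up.
    git clone http://git.example.org/tomcat.git   # the full history, locally
    # ...refactor a bit, optimize a bit, add that missing feature...
    git commit -a -m "Refactor connector setup to avoid double initialization"
    git format-patch origin                       # turn my commits into mailable patches
    # send the patches in and leave it to the maintainers to apply or reject them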

The apache people don’t want the change? Fine. Undo it, don’t merge it, whatever. But don’t restrict people’s right to suggest changes/improvements in any way. If you end up rejecting 50% of the commits, that still means you got 50% useful stuff. The reviewing and merging workload can be distributed among people.

In my current job (for GX, the company that I am about to leave), I am the release manager: the guy in charge of the source repositories of the entire GX product line. I’d like to work as outlined above, but we don’t. Non product developers in the company need to contact me by mail if they want to get their changes in. Some of them do; most of them don’t. I’m convinced that I’d get a lot of useful changes otherwise. We use subversion, which is nice but not very suitable for the way of working outlined above and in the article I quoted. Apache also uses subversion, so I can understand why they don’t want to give people like me commit rights just like that.

So why is this post labelled as software engineering science? Well, I happen to believe that practice is ahead of the academic community (of which I am also a part) in some things. Practitioners have a fine nose for tools and techniques that work really well. Academic software engineering researchers don’t, for a variety of reasons:
– they don’t engineer that much software
– very few of them develop at all (I do; I’m an exception)
– they are not very familiar with the tools developers use

In the past two years in practice I have learned a number of things:
– version control is key to managing large software projects. Everything in a project revolves around putting stuff into and getting stuff out of the repository. If you didn’t commit it, it doesn’t exist. Committing it puts it on the radar of the people who need to know about it.
– Using branches and tags is a sign the development process is getting more mature. It means you are separating development from maintenance activities.
– Creating branches and tags at the planned time and date is an even better sign: things are going according to some plan (i.e. this almost looks like engineering).
– Software design is something non software engineers (including managers and software engineering researchers) talk about, a lot. Software engineers are usually too busy to bother.
– Consequently, very little software actually gets designed in the traditional sense of the word (creating important looking sheets of paper with lots of models on them).
– Instead, two or three developers get together for an afternoon and lock themselves up with a whiteboard and a requirements document to take the handful of important decisions that need to be taken.
– Sometimes these decisions get documented. This is called the architecture document.
– Sometimes a customer/manager (same thing really) asks for pretty pictures. Only in those cases is a design document created.
– Very little new software gets built from scratch.
– The version repository is the annotated history of the software you are trying to evolve. If important information about design decisions is not part of that annotated history, it is lost forever.
– Very few software engineers bother annotating their commits properly.
– Despite the benefits, version control systems are very primitive. I expect much of the progress in development practice in the next few years to come from major improvements in version control systems and the way they integrate with other tools such as bug tracking systems and document management systems.
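
On the annotation point: the difference between a useful commit and an archaeological mystery is a message like this (a made-up svn example, issue number and all) instead of “fixed stuff”:

    # Made-up example of a commit annotation that preserves the design decision.
    svn commit -m "Fix session timeout (issue 1234): the idle timer was reset on read-only requests, so polling clients kept sessions alive forever. See the March dev list thread."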

Some additional observations on OSS projects:
– Open source projects have three important tools: the mailing list, the bug tracking system, and the version control system (and to a lesser extent wikis). These tools are primitive compared to what is used in the commercial software industry.
– Few oss projects have explicit requirements and design phases.
– In fact, all of the processes used in OSS projects are about the use of the aforementioned tools.
– Indeed, few oss projects have designs.
– Instead, oss projects evolve and build a reputation after an initial commit of some prototype by a small group of people.
– Most of the life cycle of an oss project consists of evolving it more or less ad hoc. Even if there is a roadmap, it usually serves only as a common frame of reference for developers rather than as a specification of things to implement.

I’m impressed by how well some OSS projects (mozilla, kde, linux) are run and think that the key to improving commercial projects is to adopt some of the better practices in these projects.

Much commercial software actually evolves in a very similar fashion, despite manager types keeping up appearances by stimulating the creation of lengthy design and requirements documents, usually after the development has finished.