Mobile Linux

A lot has been written lately about mobile and embedded device platforms (a.k.a. ‘phone’ platforms). Articles usually cover the incumbent platforms, Android, iOS, and Windows Phone, plus the handful of alternatives from e.g. RIM and others. Most of the debate seems to revolve around the question of whether iOS will crush Android, or the other way around. It’s kind of a boring debate that generally involves a lot of fanboys from either camp highlighting this or that feature, the beautiful design, and other stuff.

Recently this three-way battle (or two-way battle really, depending on your views regarding Windows Phone) has gotten a lot more interesting. However, in my view this ‘war’ was actually concluded nearly a decade ago, before it even started, and mobile Linux won in a very unambiguous way. What is really interesting is how this is changing the market right now.


N8

I’ve had the Nokia N8 for a few weeks now and since I like it, I thought I’d do a little review.

Does it have rough edges? Sadly, yes, there are some, but no major showstoppers. It’s stable. The camera is awesome. The video playback is great: you can play most movies without re-encoding them. I managed to not run out of battery during a transatlantic flight watching movies non-stop, and the OLED screen is fantastic.

I’m particularly happy with how Maps is improving with each version.

It went from something that barely worked a few years ago to being arguably one of the best mobile maps experiences around. It’s certainly very competitive when it comes to map quality, navigation, and offline usage. Disclaimer: I work on the backend services for this application :-).

I’ve used two Ovi Store applications to write this post. One is called Swype and it replaces the touchscreen keyboard with a version that allows you to ‘draw’ across the keyboard; it figures out the words with great accuracy and speed.

The other is the official WordPress client, which I’m trying for the first time, and of course that’s the real reason for writing all this :-).

So far so good. I managed to write a good chunk of text with Swype, and the WordPress app seems to do everything I need it to do.

Altogether this is a pretty good phone, and it is getting even better with upcoming software releases. I think Nokia’s recent focus on software quality and end-user experience is starting to pay off. It’s not without flaws, of course, and it’s definitely too early to call the battle with our competitors over, but it is progress and I think we’re getting there.

A final note here: I work for Nokia and am of course slightly biased. That being said, I rarely review stuff we produce, and as a rule only when I really like it, without reservations. I loved the N900, which I reviewed a few months ago, but for this reason there have been several Nokia phones that I chose not to review.

N900 & Slashdot

I just unleashed the stuff below in a Slashdot thread. Ten years ago I was a regular there (posting multiple times per day), and today I realized that I hadn’t actually bothered to sign into Slashdot since buying a Mac a few months ago. Anyway, since I spent time writing this I might as well repost it here. On a side note, they support OpenID for login now! Cool!

…. The next-gen Nokia phone [arstechnica.com] on the other hand (successor to the N900) will get all the hardware features of the iPhone, but with the openness of a Linux software stack. Want to make an app that downloads podcasts? Fine! Want to use your phone as a modem? No problem! In fact, no corporation enforcing its moral or business rules on how you use your phone, and no alienation of talented developers [macworld.com]!

You might make the case that the N900 already has the better hardware when you compare it to the iPhone. And for all the people dismissing Nokia as just a hardware company: there’s tons of non-trivial Nokia IPR in the software stack as well (not all OSS, admittedly) that provides lots of advantages in performance and energy efficiency, excellent multimedia support (something a lot of smartphones are really bad at), hardware acceleration, etc. Essentially, most vendors ship different combinations of chips coming from a very small range of companies, so from that point of view it doesn’t really matter what you buy. The software on top makes all the difference, and the immaturity of newer platforms such as Android can be a real deal breaker when it comes to e.g. battery life, multimedia support, and support for peripherals. There’s a difference between running Linux on a phone and running it well. Nokia has invested heavily in the latter and employs masses of people specialized in tweaking hardware and software to get the most out of the hardware.

But the real beauty of the N900 for the Slashdot crowd is simply the fact that it doesn’t require hacks or cracks: Nokia actively supports and encourages hackers with features, open source developer tools, websites, documentation, sponsoring, etc. Google does that to some extent with Android, but the OS is off limits for normal users. Apple actively tries to stop people from bypassing the App Store and is pretty hostile to attempts to modify the OS in ways it doesn’t like. Forget about other platforms. Palm technically uses Linux, but they are still keeping even their JavaScript + HTML API away from users. It might as well be completely closed source; you wouldn’t know the difference.

On the other hand, the OS on the N900 (Maemo) is Debian-based. As on Debian, the package manager is configured in /etc/apt/sources.list, and dpkg and apt-get work just as you would expect on any decent Debian-derived distribution. You have root access, therefore you can modify any file, including sources.list. Much of Ubuntu actually compiles with little or no modification, and most of the problems you are likely to encounter relate to the small screen size. All it takes to get to that software is pointing your phone at the appropriate repositories. At some point there was even a Nokia-sponsored Ubuntu port to ARM, so there is no lack of stuff that you can install, including stuff that is pretty pointless on a smartphone (like large parts of KDE). But hey, you can do it! Games, productivity tools, you name it, and there probably is some geek out there who managed to get it to build for Maemo. If you can write software, package it as a Debian package, and cross compile it to ARM (using the excellent OSS tooling of course), there’s a good chance it will just work.
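
Concretely, the workflow is the plain Debian one. A minimal sketch, where the repository URL is a made-up placeholder for illustration, not an actual Maemo repository:

```
# /etc/apt/sources.list — one "deb" line per repository; the URL here is
# a placeholder for illustration, not a real Maemo repository:
deb http://repo.example.org/maemo fremantle free non-free

# Then, as root, the usual Debian commands apply:
#   apt-get update
#   apt-get install openssh-server
```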

So, you can modify the device to your liking at a level no other mainstream vendor allows. A modifiable Debian-style Linux system with free access to all of the OS, on top of what is essentially a very compact touchscreen device complete with multiple radios (Bluetooth, 3G, WLAN), sensors (GPS, motion, light, sound), graphics hardware, and a DSP, should be enough to make any self-respecting geek drool.

Now with the N900 you get all of that, shipped as a fully functional smartphone with all of the features Nokia phones are popular for, such as excellent voice quality and phone features, decent battery life (of course, with all the radios turned on and video and audio playing non-stop, your mileage may vary), great build quality and form factor, good support for Bluetooth and other accessories, etc. It doesn’t get more open in the current phone market, and this is still the largest mobile phone manufacturer in the world.

In other words, Nokia is sticking out its neck for you by developing and launching this device and platform while proclaiming it to be the future of Nokia smartphones. It’s risking a lot here, because there are lots of parties in the market that are in the business of denying developers freedom and securing exclusive access to mobile phone software. If you care about stuff like this, vote with your feet and buy this or similarly open devices (suggestions, anyone?) from operators that support you in doing so instead of preventing you. If Nokia succeeds here, that’s a big win for the OSS community.

Disclaimer: I work for Nokia and I’m merely expressing my own views and not representing my employer in any way. That being said, I rarely actively promote any of our products and I choose to do so with this one for one reason: I believe every single word of it.

Publications backlog

I’m now a bit more than half a year into my second ‘retirement’ from publishing (and I’m not even 35). The first one was when I was working as a developer at GX Creative Online Development in 2004-2005 and was paid to write code instead of text. Between then and my current job (back to coding), I was working at Nokia Research Center, so naturally I did lots of writing during that time, and naturally I changed jobs before things started to actually appear on paper. Anyway, I have just added three items to my publications page. PDFs will follow later. One of them is a magazine article for IEEE Pervasive Computing that I wrote together with my colleagues in Helsinki about the work we have been doing there for the past two years. I’m particularly happy about getting that one out. It was accepted for publication in August, and hopefully it will end up on actual dead trees soon. Once IEEE puts the PDF online, I’ll add it here as well. I’ve still got one more journal paper in the pipeline. Hopefully I’ll get some news on that one soon. After that I don’t have anything planned, but you never know of course.

However, I must say that I’m quite disappointed with the whole academic publishing process, particularly when it comes to journal articles. It’s slow and tedious, the decision process is arbitrary, and ultimately only a handful of people read what you write, since most journals come with really tight access control. Typically that doesn’t even happen until 2-3 years after you write it (more in some cases). I suspect the only reason people read my stuff at all is because I’ve been putting the PDFs on my site. I get more hits per day (80-100 on average) on a few stupid blog posts than most of my publications have gotten in the past decade. From what I managed to piece together on Google Scholar, I’m not even doing that badly with some of my publications (in terms of citations). But really, academic publishing is a terribly inefficient way of communicating.

Essentially, the whole process hasn’t really evolved much since the 17th century, when the likes of Newton, Leibniz, et al. started communicating their findings in journals and print. The only qualitative difference between a scientific article and a blog post is so-called peer review (well, that and the fact that it’s a shitload of work to write articles, of course). This is sort of like the Slashdot moderation system, but performed by peers in the academic community (with about the same bias to the negative) who get to decide what is good enough for whatever workshop, conference, or journal you are targeting. I’ve done this chore as well, and I would say that, like on Slashdot, most of the material passing across my desk is of a rather mediocre level. Reading the average proceedings in my field is not exactly fun, since 80% tends to be pretty bad. Reading the stuff that doesn’t make it (40-60% for the better conferences) is worse, though. I’ve done my bit of flaming on Slashdot (though not recently) and still maintain excellent karma there (i.e. my peers like me there). Likewise, out of the 30+ publications on my publication page, only a handful are actually something I still consider worth the time I spent writing them.

The reason there are so many bad articles out there is that the whole process is optimized for meeting the mostly quantitative goals universities and research institutes set for their academic staff. To reach these goals, academics organize workshops and conferences with and for each other, which provide them with a channel for meeting those targets. The result is workshops full of junior researchers, like I once was, trying to sell their early efforts. Occasionally some really good stuff is published this way, but generally the more mature material is saved for conferences, which have a somewhat wider audience and stricter reviewing. Finally, the only thing that really counts in the academic world is journal publications.

Those are run by for-profit publishing companies that employ successful academics to do the content sorting and peer review coordination for them. Funnily, these tend to also be the people running conferences and workshops; basically, veterans of the whole peer reviewing process. Journal sales are based on volume (e.g. once a quarter or once a month), reputation, and a steady supply of new material. This is a business model that the publishing industry has perfected over the centuries, and many millions in research money flow straight to publishers. It is based on a mix of good-enough papers that libraries and research institutes will pay to access, and the need of the people in these institutes to get published, which in turn requires access to the published work of others. Good enough is of course a relative term here. If you set the bar too high, you’ll end up not having enough material to make running the journal printing process commercially viable. If you set it too low, no one will buy it.

In other words, top to bottom the scientific publishing process is optimized for keeping most of the academic world employed, while sorting out the bad eggs and boosting the reputation of those who perform well. Nothing wrong with that, except that for every Einstein there are tens of thousands of researchers who will never publish anything significant or groundbreaking but get published anyway. In other words, most stuff published is apparently worth the paper it is printed on (at least to the publishing industry), but not much more. I’ve always found the economics of academic publishing fascinating.

Anyway, just some Sunday morning reflections.

Maps on Ovi

Both Ovi Maps, our maps and navigation client for S60, and the Maps on Ovi companion website (or MoO!!! as we refer to it internally) received a few upgrades in the past week. Maps 3.0 is a solid upgrade with lots of good new features that you will probably want to install if you are still using Maps 2.0 on your Nokia phone. Maps on Ovi is the website that goes along with it. It features such niceties as synchronizing routes and POIs from the site to your phone via Ovi, as well as a new Find Places feature, which is what my colleagues and I have been slaving away on for the past few months (particularly the places bubble that shows up on the map for some POIs).

So go check it out here: maps.ovi.com!

Our Find Places feature is still quite minor at this point, and the whole site is of course still in beta, with lots of known issues and rough edges being worked on, but improvements are coming and the site is actually perfectly usable already. Last Monday was the big 1.0 for our team and our first real publicly available feature set, which we will be building on in the near future. Getting it out was stressful, and part of my work in the next few weeks is helping it become less stressful.

My personal technical contributions are limited to the content provisioning from various vendors such as Lonely Planet and WCities. You can find the same content in the Nokia Here and Now client for S60, which is currently in Nokia’s Beta Labs, as well as on the device if you buy any of the premium content packages.

For the past few months I’ve been working with lots of highly talented people slaving away on all the frontend and JavaScript stuff, as well as the pretty neat and cool server-side architecture. I can’t really reveal anything about that, except to say that cool stuff will be coming from Berlin. So keep following us on e.g. the Nokia YouTube channel, where our marketing people regularly post stuff, including videos featuring me and another one featuring Christian del Rosso that I reported on here earlier.

OpenID, the identity landscape, and social networks

I’m still getting used to no longer being in Nokia Research Center. One of my disappointments from my time in NRC, as a vocal proponent of OpenID, social networks, etc., was that despite lots of discussion on these topics, not much happened in terms of me getting room to work on them or convincing many people of my opinions. I have one publication that is due out whenever the magazine involved gets around to approving and printing the article. But that’s it.

So, I take great pleasure in observing how things are evolving lately and finding that I’ve been pushing the right topics all along. Earlier this week, Facebook became a relying party for OpenID. Outside the OpenID community and regular TechCrunch readers, this seems not to have been a major news story, which is surprising, since just about anybody I discussed this topic with in the past few years (you know who you are) always insisted that “no way will a major network like Facebook ever use OpenID”. If you were one of those people: admit right now that you were wrong.

It seems to me that this is a result of the fact that the social networking landscape is maturing. As part of this maturation process, several open standards are emerging. Identity and authentication are very important topics here, and the consensus increasingly seems to be that no single company is going to own all 6-7 billion identities on this planet. So naturally, any company with the ambition to separate 6-7 billion individuals from their money for some product or service will need to work with multiple identity providers.

So naturally such companies require a standard for doing so. That standard is OpenID. It has no competition. There is no alternative. There are plenty of proprietary APIs that only work with limited sets of identity providers, but none that, like OpenID, can work with all of them.

Similarly, the major identity providers like Google and Facebook are stuck sharing a few hundred million users between them, so they are shifting their attention to somehow involving all those users who didn’t sign up with them. Pretty much all of them are OpenID providers already. Facebook just took the obvious next step in becoming a relying party as well. The economics are mindbogglingly simple: Facebook doesn’t make money from verifying people’s identity, but they do make money from people using their services. Becoming an OpenID relying party means the group of people who can access their services just grew to the entire internet population. Why wouldn’t they want that? Of course this doesn’t mean that world + dog will now become Facebook users, but it does mean that one important obstacle has just disappeared.
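
For those wondering what being a relying party amounts to in code: below is a minimal sketch of the redirect-based login flow using the python-openid library. I’m writing this from memory, so treat the exact API details as an assumption; the URLs are placeholders:

```python
# Sketch of an OpenID relying-party flow (python-openid, from memory;
# verify the API against the library docs). URLs are placeholders.
from openid.consumer import consumer
from openid.store.memstore import MemoryStore

store = MemoryStore()  # a real deployment would use a persistent store
session = {}           # normally backed by the user's web session

def start_login(openid_url):
    c = consumer.Consumer(session, store)
    # Discover the identity provider behind the user's OpenID URL...
    auth_request = c.begin(openid_url)
    # ...and build the URL to send the user's browser to for login.
    return auth_request.redirectURL(
        realm='http://rp.example.com/',
        return_to='http://rp.example.com/openid/return')

def finish_login(query_args, current_url):
    c = consumer.Consumer(session, store)
    # The provider redirects the user back; verify the signed response.
    response = c.complete(query_args, current_url)
    return response.status == consumer.SUCCESS
```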

BTW, Facebook’s current implementation is not very intuitive. I’ve been able to hook up MyOpenID to my Facebook account, but I haven’t actually found a login page where I can log in with my OpenID yet. It seems that this is still a work in progress.

Anyway, this concludes my morning blogging session. Haven’t blogged this much in months. Strange how the prospect of not having to work today is energizing me 🙂

Localization rant

I’ve been living outside the Netherlands for a while now and have noticed that quite a few websites handle localization and internationalization pretty damn poorly. I hate the poor translations unleashed on Dutch users and generally prefer the US English version of UIs whenever available.

I just visited YouTube. I’ve had an account there for over two years. I’ve always had it set to English. So, surprise, surprise, it asked me, for the second time in a few weeks, in German, whether I would like to keep my now fully Germanified YouTube set to German. Eehhhhh?!?!?! Nein (no). Abbrechen (cancel)! At least they ask, even if in the wrong language. Most websites don’t even bother with this.

But stop and think about this. You’ve detected that somebody who has always had his profile set to English is apparently in Germany. Shit happens, so now what? Do you think it is a bright idea to ask this person in German whether he/she no longer wants the website presented in whatever it was set to earlier? Eh, no, of course not. Chances are good people won’t even understand the question. Luckily I speak enough German to know that Abbrechen is the right choice for me. When I was living in Finland, convincing websites that I don’t speak Finnish was way more challenging. I recall fighting with Blogger (another Google-owned site) on several occasions. It defaulted to Finnish despite the fact that I was signed in to Google and had every possible setting Google provides for this set to English. Additionally, the link for switching to English was three clicks away from the main page: impossible to find unless you know the Finnish words for preferences, language, and OK (in which case you might pass for a native speaker). I guess I’m lucky to not live in e.g. China, where I would stand no chance whatsoever of guessing the meaning of buttons and links.

The point here is that most websites seem to be drawing the wrong conclusions based on a few stupid IP checks. My German colleagues are constantly complaining about Google defaulting to Dutch (my native language, which despite the similar name is quite different from Deutsch, i.e. German). Reason: the nearest Nokia proxy is in Amsterdam, so Google assumes we all speak Dutch.

So, cool, you can guesstimate where I am (roughly) in the world, but don’t jump to conclusions. People travel and move around all the time. Mostly they don’t change their preferred language, and when they do, it’s only after a lot of hard work. I mean, how hard can it be? I’m already signed in, right? Cookies set and everything. In short, you know who I am (or you bloody well should, given the information I’ve been sharing with you for several years). Somewhere in my profile it says that my preferred language is English, right? I’ve had that profile for over four years, right? So why the hell would I suddenly want to switch to a language that I might not even speak? A: I wouldn’t. No fucking way that this is even likely to occur.

It’s of course unfair to single out Google here. Another example is iTunes, which has a fully English UI in Finland but made me accept the terms of use in Finnish (my knowledge of Finnish is extremely limited, to put it mildly). Finland is of course bilingual, and 10 percent of its population are Swedish-speaking Finns, most of whom probably don’t handle Finnish that well. Additionally, there are tens of thousands of immigrants, tourists, and travelers, like me. Now that I live in Germany, I’m stuck with the Finnish iTunes version, because I happened to sign up while I was in Finland. Switching to the German store is impossible, i.e. I can’t access the German TV shows for sale on iTunes Germany, never mind the US English ones I’m actually interested in accessing and spending real $$$/€€€ on. Similarly, I’ve had encounters with Facebook asking me to help localize Facebook to Finnish (eh, definitely talking to the wrong guy here) and recently to German (still wrong).

So, this is madness. A series of broken assumptions leads to Apple losing revenue, and to Google and others annoying the hell out of people.

So here’s a localization guideline for dummies (with a sketch of the resulting decision logic after the list):

  • Offer a way out. A large percentage of your guesses as to your users’ language are going to be wrong, and the smaller the number of native speakers, the more likely you are to get it wrong. Languages like Finnish or Chinese are notoriously hard to learn. So design your localized sites such that a non-native speaker can get your fully localized site set to something more reasonable.
  • Respect people’s preferences. Profiles override anything you might detect. People move around, so if your detection deviates from the profile settings, your assumptions are likely broken.
  • Language is not location. People travel around and generally don’t unlearn the language they used to speak. Additionally, most countries have sizable populations of non-native speakers as well as hordes of tourists and travelers.
  • If people managed to sign up, that’s a strong clue that whatever the language of the UI was at the time is a language the user has mastered well enough to understand the UI (otherwise you’d have blind monkeys signing up all the time). So there’s no valid use case for suggesting an alternative language here, never mind defaulting to one.
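
Put together, the decision logic is trivial. Here’s a minimal sketch in Python; the function, its arguments, and the country table are all made up for illustration:

```python
# Made-up fallback table: country code -> default language.
COUNTRY_DEFAULTS = {'DE': 'de', 'FI': 'fi', 'NL': 'nl'}

def pick_language(profile_lang, accept_language, geoip_country):
    # 1. Respect people's preferences: the profile beats anything detected.
    if profile_lang:
        return profile_lang
    # 2. Language is not location: the browser's Accept-Language header
    #    reflects what the user configured, not where they happen to be.
    if accept_language:
        return accept_language[0]
    # 3. Geography only as a last resort, and make sure the resulting page
    #    offers a language-neutral way out (a visible language switcher).
    return COUNTRY_DEFAULTS.get(geoip_country, 'en')

# A signed-in user with English in their profile stays on English,
# no matter which proxy or country their requests come from:
print(pick_language('en', ['de-DE', 'de'], 'DE'))  # -> 'en'
```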

Anyway, end of rant.

Time for a little update

Hmm, it’s been more than two months since I last posted. Time for an update. A lot has happened since January.

So,

  • I moved out of Finland as planned.
  • I stayed in a temporary apartment for a month. Central-home is the company managing the facility where I lived (on Habersaathstrasse 24) and if you’re looking for temporary housing in Berlin, look no further.
  • I managed to find a nice long-term apartment in Berlin Mitte, on Bergstrasse, which is more or less walking distance from tourist attractions like Alexanderplatz, Hackescher Markt, Friedrichstrasse, and of course the Brandenburger Tor.
  • I re-acquainted myself with Java, Java development, and lately also release management. Fun days of hacking, though the normal Nokia routine of meetings creeping into my calendar is sadly kicking in.
  • I learned tons of new stuff.
  • Unfortunately, German is not yet one of those things. My linguistic skills are as pathetic as ever, and English remains the only foreign language I ever managed to master more or less properly. On paper German should be dead easy, since I can get by mumbling in my native language and people can still figure out what I want. In practice, I can understand it if spoken slowly (and clearly), but speaking back is challenging.
  • I’m working on it though, once a week, in a beginners’ class, relearning stuff that three years of trying to cram German grammar into my head in high school did not accomplish.

Moving is tedious and tiresome, but the end result is some genuine improvement in life. I absolutely love Berlin and am looking forward to an early spring. I was on a call with some Finnish people today, discussing the weather. Them: “So, how’s Berlin? Any snow there still?” Me: “No, about 20 degrees outside right now :-).” Nice to have spring start at the normal time again, not to mention the more sane distribution of daylight and darkness throughout the year.

A shitload of updates is overdue, and has been for several months already. I have a ton of photos to upload, WordPress needs upgrading, and some technical stuff might need some blogging about as well. Then there are still some unfinished papers in the pipeline. So, I’ll be back with more. Some day.