Publications backlog

I’m now a bit more than half a year into my second ‘retirement’ from publishing (and I’m not even 35). The first one was when I was working as a developer at GX Creative Online Development in 2004-2005 and was paid to write code instead of text. Between then and my current job (back to coding), I worked at Nokia Research Center. So naturally I did lots of writing during that time, and naturally I changed jobs before things actually started to appear on paper. Anyway, I have just added three items to my publications page; PDFs will follow later. One of them is a magazine article for IEEE Pervasive Computing that I wrote together with my colleagues in Helsinki about the work we have been doing there for the past two years. I’m particularly happy about getting that one out. It was accepted for publication in August and hopefully it will end up on actual dead trees soon. Once IEEE puts the PDF online, I’ll add it here as well. I’ve still got one more journal paper in the pipeline; hopefully I’ll get some news on that one soon. After that I don’t have anything planned, but you never know, of course.

However, I must say that I’m quite disappointed with the whole academic publishing process, particularly when it comes to journal articles. It’s slow and tedious, the decision process is arbitrary, and ultimately only a handful of people read what you write, since most journals come with really tight access control. Typically even that doesn’t happen until 2-3 years after you write it (more in some cases). I suspect the only reason people read my stuff at all is that I’ve been putting the PDFs on my site. I get more hits per day (80-100 on average) on a few stupid blog posts than most of my publications have gotten in the past decade. From what I managed to piece together on Google Scholar, I’m not even doing that badly with some of my publications (in terms of citations). But really, academic publishing is a terribly inefficient way of communicating.

Essentially the whole process hasn’t evolved much since the 17th century, when the likes of Newton and Leibniz started communicating their findings in journals and print. The only qualitative difference between a scientific article and a blog post is so-called peer review (well, that and the shitload of work it takes to write an article). This is somewhat like the Slashdot moderation system, but performed by peers in the academic community (with about the same bias to the negative) who get to decide what is good enough for whatever workshop, conference or journal you are targeting. I’ve done this chore as well, and I would say that, like on Slashdot, most of the material passing across my desk is of a rather mediocre level. Reading the average proceedings in my field is not exactly fun, since 80% tends to be pretty bad. Reading the stuff that doesn’t make it (40-60% for the better conferences) is worse, though. I’ve done my bit of flaming on Slashdot (though not recently) and still maintain excellent karma there (i.e. my peers there like me). Likewise, out of the 30+ publications on my publication page, only a handful are something I still consider to have been worth my time writing.

The reason there are so many bad articles out there is that the whole process is optimized for meeting the mostly quantitative goals that universities and research institutes set for their academic staff. To reach these goals, academics organize workshops and conferences with and for each other, which provide them with a channel for meeting these targets. The result is workshops full of junior researchers, like I once was, trying to sell their early efforts. Occasionally some really good stuff is published this way, but generally the more mature material is saved for conferences, which have a somewhat wider audience and stricter reviewing. Finally, the only thing that really counts in the academic world is journal publications.

Those are run by for-profit publishing companies that employ successful academics to do the content sorting and peer-review coordination for them. Funnily, these tend also to be the people running conferences and workshops: veterans of the whole peer-reviewing process. Journal sales are based on volume (e.g. an issue once a quarter or once a month), reputation, and a steady supply of new material. This is a business model that the publishing industry has perfected over the centuries, and many millions in research money flow straight to publishers. It is based on a mix of good-enough papers that libraries and research institutes will pay to access, and the need of the people in those institutes to get published, which in turn requires access to the published work of others. ‘Good enough’ is of course a relative term here. If you set the bar too high, you won’t have enough material to make running the journal commercially viable. If you set it too low, no one will buy it.

In other words, from top to bottom the scientific publishing process is optimized for keeping most of the academic world employed while sorting out the bad eggs and boosting the reputation of those who perform well. Nothing wrong with that, except that for every Einstein there are tens of thousands of researchers who will never publish anything significant or groundbreaking but get published anyway. In other words, most published work is apparently worth the paper it is printed on (at least to the publishing industry), but not much more. I’ve always found the economics of academic publishing fascinating.

Anyway, just some Sunday morning reflections.

Moving to Berlin

A bit more than a month ago, I posted a little something on the reorganization of Nokia Research Center, where I work, and announced my availability on the job market. This was a bit of a shock, of course, and it has been a hectic few weeks, but the end result is really nice. For me, at least. Unfortunately, some of my colleagues are not so lucky and are now at risk of losing their jobs.

In any case, a few weeks ago I visited Nokia Gate5 in Berlin for a job interview. Gate5 is a navigation software company that Nokia bought in 2006. Their software powers what is now known as OVI Maps, and while the whole industry is shrinking, they are growing like crazy and rolling out one cool product after another. Today they sent me a proposal for a contract. Barring contractual details, this means I will be based in Berlin from February. This is something I’ve known for a few weeks, but having all the necessary approvals from Nokia management and a draft contract is about as good as it gets in terms of certainty. So, since I know a few people are curious about what I’ll be up to next year, I decided on this little update.

I can’t say too much about what I will do there, except that it more or less perfectly matches my Java server-side interests and experience. This means going back to being a good old Java hacker, which is just fine with me and something I’ve not had enough time to focus on lately (much to my annoyance). Just today I submitted an article, and I have one or two other things to finish off in January. After that, my research will be put on hold for a while. That’s fine with me as well. Since returning to a research career three years ago I’ve done a few nice papers, but to be honest, I’m not enjoying it as much as I used to.

Of course, Berlin is a great place to move to. I’ve been there twice now. I remember thinking on my first visit in 2005, “hmm, I wouldn’t mind living here,” and when I was there three weeks ago I had the same feeling again. It’s a big city with a rich history, a nice culture, and lots of stuff to see and do. I also learned that it is one of the few cities in Europe where life is actually cheap: apartments, food, drink and all the essentials of life are really affordable there, and of excellent quality too.

Anyway, I’ll be off to France next week, visiting my parents.

Happy holidays

NRC Reorganization

My employer, Nokia, announced this week that it is reorganizing Nokia Research Center. The why and how of this operation is explained in the press release.

I learned this on Tuesday along with all my colleagues and have since been finding out how it will affect me. I can of course not comment on any organizational details, but it is very likely that I will start 2009 in a new job somewhere within Nokia, since it looks like the research topic I have been working on for the past two years is out of scope for the new Helsinki lab research mission. While I’m of course unhappy about how that decision affects me, I accept and respect it. Short term, I am confident that I will be allowed to finish my ongoing research activity, since it has so far been highly successful within Nokia, and we are quite close to going public with the trial of the system I demoed on YouTube a few weeks ago. I’m very motivated to do this, because I’ve put a lot of time into it and want to see it succeed and get some nice press attention.

However, a topic that is of course on my mind is what I will be doing after that, and where. In short, I’m currently looking at several very interesting open positions within Nokia. Since I’m doing that anyway, I’ve decided to broaden my search and look at all available options, including those outside Nokia, and I will pick the best offer I get. Don’t get me wrong: I think Nokia is a great employer, and I am aware my skills are in strong demand inside Nokia. So if Nokia makes me a good offer, I will likely accept it. But of course the world is bigger than Finland, where I have now spent three years, and I am in no way geographically constrained (i.e. I am willing to move internationally).

So, I’ve updated my CV and am available to discuss any suitable offer.

Since this has happened in the past: please don’t contact me about Symbian or J2ME programming jobs. I’m not interested in either; I’m a server guy.

The Way We Live Next

I stumbled upon somebody writing about Nokia’s 2007 Way We Live Next event in Oulu. This event was intended to give the outside world a view of what is going on in Nokia Research Center.

Nice quote

Lots of interesting stuff was shown off during the course of the two days and the most interesting I came across was the indoor positioning concept. Using WiFi and specially created maps, the devices we were issued with were running the software which enabled you to move through the NRC building and pinpoint exactly where you were. So, if the next presentation was in room 101, the device would simply, and quickly show you the way. It instantly made me think of the frustration of trying to get where I want in huge shopping centres – and I figured this had to be the perfect solution.

Next week, the 2008 edition of WWLN takes place in Espoo, and I will be giving a demo there of our indoor location-based service platform, customized for a real shopping mall. We demoed last year’s version of our software platform at the Internet of Things Conference last April. At the time, our new platform had already been under development for several months, and we are now getting ready to start trialing it. The WWLN event next week will be the first time we show this in public, and hopefully we’ll get some nice attention from the press.

PS. I like (good) beer …

Web application scalability

It seems InfoQ picked up some stuff from a comment I left on TheServerSide about one of my pet topics (server-side Java).

The InfoQ article also mentions that I work at Nokia. I do indeed work for Nokia Research Center, and it’s a great place to work. Only, they require me to point out that I’m not actually representing them when making such comments.

The discussion is pretty interesting, and I’ve recently also ventured into using things other than Java (mainly Python lately, with the Django framework). So far I dearly miss the development tooling, which for most languages that are not Java ranges from non-existent to immature crap. Invariably, the best IDEs for these languages are actually built in Java. For example, I’m using the Eclipse PyDev extension for Python development. It’s better than nothing, but it still sucks compared to how I develop Java in the same IDE. Specifically: no quick fixes, only a handful of refactorings, no inline documentation, and barely working autocompletion make life hell. I had forgotten what it is like to actually have to type whole lines of code.

I understand the situation is hardly better for other scripting languages. There has been some progress on the Ruby front since Sun started pushing things on that side, but none of this stuff is production quality yet. Basically, the state of the art in programming environments is currently focused primarily on statically compiled OO languages like Java or C#. Using something else can be attractive from, for example, a language-feature point of view, but the price you pay is crappy tooling.

Python as a language is quite OK, although it is a bit out of date with things like non-UTF-8 strings and a few other issues that my fellow countryman Guido van Rossum is planning to fix in Python 3000. Not having explicit typing takes some getting used to, and it also means my workload is higher, because I constantly have to use Google to look up things that Eclipse would just tell me (e.g. which methods and properties I can use on the HttpResponse object I’m getting from Django, or the name of the exception I’m supposed to be catching). In my view that’s not progress, and it leads to sloppy coding practices where people don’t bother dealing with fault situations unless they have to (which, long term, in a large-scale server environment is pretty much always).
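To make the lookup problem concrete, here is a tiny sketch in plain Python (no Django involved; the `Response` class below is a hypothetical stand-in for a framework response object, not a real API). The point: without type information in the signature, an IDE cannot tell you what an argument supports, so you fall back on runtime introspection or documentation.

```python
class Response:
    """Hypothetical stand-in for a framework response object."""
    def __init__(self, content, status=200):
        self.content = content
        self.status = status

def handle(resp):
    # The signature says nothing about what 'resp' is, so an IDE has
    # no way to autocomplete its attributes. At runtime, all you can
    # do is introspect with dir() or go read the docs.
    return [name for name in dir(resp) if not name.startswith("_")]

print(handle(Response("hello")))  # ['content', 'status']
```

In a statically typed language, the compiler and IDE would surface those two attributes (and the checked exceptions) without leaving the editor, which is exactly the tooling gap described above.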

Towards Effective Smart Space Application Development: Impediments and Research Challenges

I submitted a nice position paper with two of my colleagues at Nokia to the CMPPC’07 (Common Models and Patterns for Pervasive Computing) Workshop, at Pervasive 2007 in Toronto next month.

Abstract:

State-of-the-art research and existing commercial off-the-shelf solutions provide several technologies and methods for building smart spaces. However, developing applications on top of such systems is quite a complex task, due to several impediments and limitations of the available solutions. This paper provides an overview of these impediments and outlines the main research challenges that still need to be solved in order to enable effective development of applications and systems that fully exploit the capabilities of state-of-the-art technologies and methodologies. The paper also outlines a few specific issues and impediments that we at Nokia Research Center have faced in this field so far, and sheds some light on how we intend to tackle some of these issues in the future.

Full details are on my publication site, and you can download the PDF from there as well.