Digiboksi is Finglish for a DVB-C set-top box. I bought one on Saturday and had it replaced today by a different one.

Basically the story is that Finland is replacing good old analogue TV with digital TV. The terrestrial analogue signal went dark a few months ago. A few weeks ago the ‘interesting’ channels (i.e. the ones in English) disappeared from the cable, and the rest will follow in February.

So, I went to the shop for a digiboksi. Since I barely watch TV, I just wanted something cheap that worked. So I pretty much randomly selected the Samsung DCB-B263Z in the shop (hey, it was black, matches my other equipment). Normally I’d do some research on the internet before such a purchase. Unfortunately this product seems to be specialized for the Finnish market, so most of the available info is in Finnish. Hence the randomness.

So I bought it, plugged it in, and about ten minutes later I was watching TV. I was quite pleased with the picture quality. The UI seemed nice too. Then it crashed. WTF! Anyway, to keep this short: it is a known issue; basically the shops are selling products with broken software. Sigh. So, after about four crashes in less than 24 hours, I went back to the shop today and mentioned the word crash. No further explanation was needed. Apparently lots of people are bringing these things back (I know at least one other guy). Five minutes later I walked out with a Handan 3400.

So far it seems reasonably well behaved, though the picture quality is slightly less nice than the Samsung’s (which was a bit smoother and crisper). I managed to improve it slightly by disabling the built-in contrast. If this one crashes as well, I’ll try another brand.

Workshop paper

Together with my two colleagues Christian Prehofer and Cristiano di Flora, I wrote a nice workshop paper for the upcoming Second Workshop on Requirements and Solutions for Pervasive Software Infrastructures (RSPSI), at UbiComp 2007, Innsbruck, 16-19 September 2007.

Towards the Web as a Platform for Ubiquitous Applications in Smart Spaces (pdf).


We introduce our web-based middleware for smart spaces, which strongly relies on technologies used in Internet services. Following the key requirements and technologies, we present our architecture for ubiquitous applications in smart spaces. It exploits and leverages many of the key web technologies as well as “Web 2.0” collaborative and social Internet services, including browsers, web servers, development tools and content management systems. In this way, we aim to make many of the highly disruptive ubiquitous applications less disruptive from a technology point of view. Furthermore, we discuss a number of new challenges for applying these technologies in ubiquitous applications. These include the areas of discovery/delivery of services, security, content management, and networking.

The article is a nice milestone for our very young research group. An earlier position paper already outlined our vision for software development in smart spaces. This article builds on that vision and makes public a few details of the software we are building in the research group.

Towards Effective Smart Space Application Development: Impediments and Research Challenges

I submitted a nice position paper with two of my colleagues at Nokia to CMPPC’07 (Common Models and Patterns for Pervasive Computing), a workshop at Pervasive 2007 in Toronto next month.


State-of-the-art research and existing commercial off-the-shelf solutions provide several technologies and methods for building smart spaces. However, developing applications on top of such systems is quite a complex task due to several impediments and limitations of available solutions. This paper provides an overview of such impediments and outlines the main research challenges that still need to be solved in order to enable effective development of applications and systems that fully exploit the capabilities of state-of-the-art technologies and methodologies. The paper also outlines a few specific issues and impediments that we, at the Nokia Research Center, have faced in this field so far. It also sheds some light on how we are going to tackle some of these issues in the future.

Full details are on my publication site and you can download the pdf from there as well.

Variability Management and Compositional SPL Development

I submitted a nice position paper to the variability management workshop here in Helsinki next month.


This position paper reflects on the implications for variability management related practices in SPL development when adopting a compositional style of development. We observe that large scale software development is increasingly conducted in a decentralized fashion and on a global scale, with little or no central coordination. However, much of the current SPL and variability practice seems to have a strong focus on centrally maintained artifacts such as feature and architecture models. We conclude that in principle it should be possible to decentralize these practices, and we identify a number of related research challenges that we intend to follow up on in future research.

Full details are on my publication site and you can download the pdf from there as well.

Agile methods

Another interesting article by Martin Fowler: The New Methodology (and again I’m several months late). The 2000 predecessor of this article was equally interesting. In fact, it still is. If you have the time, read both of them.

Martin Fowler, Kent Beck, Erich Gamma and other people of that generation have greatly influenced the way I think about software engineering. I’ve been a scholar in software engineering, and my current work at Nokia also involves a great deal of software engineering research. I see these guys as the pragmatic force driving the industry forward. While we scholars are babbling about Model Driven Architectures, Architecture Description Languages, etc. in our ivory towers, these guys are all “stuff and no fluff”. It’s good to consider once in a while that many ideas and concepts from the software engineering research community have done, and continue to do, a lot of damage. For example, the waterfall model (the mother of all software development methodologies) is still being taught in universities. Out-of-touch SE teachers still teach their students to do things one after the other (instead of iteratively).

The original papers on the waterfall model, iterative development and related methodology from the seventies are an interesting read. Their authors had a lot more vision than past and present proponents of these methods. But they are just that: the vision of software developers coming out of the cold in the seventies. We’ve learned a lot since.

If there’s one thing you can learn from Martin Fowler, Kent Beck or Alistair Cockburn, it is that you should never, ever implement their methodology to the letter. If you do, you didn’t get it. Agile is all about change, including changing the way you work on a permanent basis. The article I’m citing argues this in Martin Fowler’s usual clear fashion. Go read it already.


A little announcement: after nearly two happy years in Nijmegen at GX Creative Online Development, I am now moving to Helsinki, Finland.

I’ve recently been offered a nice job at the Nokia Research Center. I am going to work at the Software & Applications lab as a research engineer. This 200-person division of Nokia Research is headed by Jan Bosch, who was previously responsible for getting me to Sweden (when he was heading the software engineering research group at the Blekinge Institute of Technology) and then to Groningen when we transferred to the university in that city. For a bit more than half a year now, Jan Bosch has been heading the research department at the Nokia Research Center. A few months ago he drew my attention to a vacant position, and after carefully considering my options I said YES!

The final word came in on Friday, and what remains is moving to Finland. If all goes well, I will be living in Finland by the 1st of December.

Scientific content should be free

A recurring topic on Slashdot (www.slashdot.org) and in the scientific community is open journals: peer-reviewed scientific journals that make their articles available for free. Today Slashdot commented on the IEEE’s ideas about possibly opening their vast electronic library to the public. I am a big proponent of this and sincerely hope that they will do it. However, I am very critical of the discussions about cost. These discussions appear to be heavily influenced by publishers, who continuously try to make it appear that these costs have to be very high. They propose that authors cover a cost of $3000 (!!!!) per article.

This is where I disagree, because as far as I can see these costs don’t really exist (or rather, don’t have to exist). I used to be a Ph.D. student. I wrote articles and submitted them to conferences and journals. I also peer reviewed articles for journals and conferences. I never received a single dollar for this work from the publishers, and worse, I now have to pay to get access to my own articles (well, I cheated by saving a copy).

My point is: all the relevant work in the publishing process is done by volunteers like me. Worldwide, scientists write articles for free and review other scientists’ articles for free. The only scientists who receive money from publishers are editors who, in some cases, get a modest compensation for their precious time from a publisher who makes a lot of money. This work mostly consists of making decisions on what to publish and what to reject, organizing and coordinating the review process, etc. I’m convinced there would be plenty of people willing to donate their time to do this; I’m one of them.

Historically we needed publishers to distribute the peer-reviewed articles to libraries, and this is why publishers have enjoyed an enormous revenue stream for centuries now. The profits made by publishers are huge (billions of dollars). They continue to be huge because scientists need to publish in these journals on account of the journal rankings (which are based on references to articles).

Now that we have the internet, this is no longer true.

Well, not entirely. Of course you still have some hosting costs, site maintenance, and maybe a bunch of people coordinating the review process, distribution, and content management. My point is: the per-article cost of the whole process is extremely low. It’s nowhere near the amount named in the article. I’d be surprised and shocked if it were more than a few dollars. An organization like the IEEE should be able to fund this through sponsoring, advertising & volunteer contributions.

Of course they’d have to reorganize how they work. A journal is a periodic bundle of articles intended for paper distribution. Electronic publishing is instant (not periodic) and essentially free. Beyond organizing the process and hosting, there is virtually no cost. The process, which is currently optimized for paper distribution, is therefore obsolete. You need volunteer authors, volunteer reviewers, volunteer editorial boards for specific scientific audiences, supporting staff and hosting (here are some real costs), and a means to establish article and editorial board rankings (this is mostly a technical problem).

Editorial boards consist of key members of a research community who invite other scientists to contribute articles and do peer reviews. The output of an editorial board consists of peer-reviewed articles. Not-for-profit organizations like the IEEE can take care of the editing and hosting. This will require some funding, which is available from sponsoring, advertisements, research funds, universities, society memberships, etc.

Considering the amounts saved by taking publishers out of the equation, this should be no problem. Universities would save millions if the whole scientific publishing community adopted this model.

So, IEEE, ACM and other not-for-profit scientific organizations: do your members and the scientific community a favour (isn’t this what you exist for in the first place?) by making content freely available. There’s no shortage of scientists willing to do the writing and reviewing for you (I’m one of them). The rest of the process can and should be optimized for online hosting. The costs involved in that part should be very modest. The benefits are enormous.