Publications backlog

I’m now a bit more than half a year into my second ‘retirement’ from publishing (and I’m not even 35). The first one was when I was working as a developer at GX Creative Online Development in 2004-2005 and was paid to write code instead of text. In between that job and my current one (back to coding), I was working at Nokia Research Center. So naturally I did lots of writing during that time, and naturally I changed jobs before things started to actually appear on paper. Anyway, I have just added three items to my publications page. PDFs will follow later. One of them is a magazine article for IEEE Pervasive Computing that I wrote together with my colleagues in Helsinki about the work we have been doing there for the past two years. I’m particularly happy about getting that one out. It was accepted for publication in August and hopefully it will end up on actual dead trees soon. Once IEEE puts the PDF online, I’ll add it here as well. I’ve still got one more journal paper in the pipeline; hopefully, I’ll get some news on that one soon. After that, I don’t have anything planned, but you never know of course.

However, I must say that I’m quite disappointed with the whole academic publishing process, particularly when it comes to journal articles. It’s slow and tedious, the decision process is arbitrary, and ultimately only a handful of people read what you write since most journals come with really tight access control. Typically publication doesn’t even happen until 2-3 years after you write the article (more in some cases). I suspect the only reason people read my stuff at all is because I’ve been putting the PDFs on my site. I get more hits per day (80-100 on average) on a few stupid blog posts than most of my publications have gotten in the past decade. From what I managed to piece together on Google Scholar, I’m not even doing that badly with some of my publications (in terms of citations). But, really, academic publishing is a really inefficient way of communicating.

Essentially the whole process hasn’t really evolved much since the 17th century, when the likes of Newton, Leibniz, et al. started communicating their findings in journals and print. The only qualitative difference between a scientific article and a blog post is so-called peer review (well, that and the shitload of work it takes to write articles, of course). This is sort of like the Slashdot moderation system but performed by peers in the academic community (with about the same bias to the negative) who get to decide what is good enough for whatever workshop, conference, or journal you are targeting. I’ve done this chore as well, and I would say that, like on Slashdot, most of the material passing across my desk is of a rather mediocre level. Reading the average proceedings in my field is not exactly fun since 80% tends to be pretty bad. Reading the stuff that doesn’t make it (40-60% for the better conferences) is worse though. I’ve done my bit of flaming on Slashdot (though not recently) and still maintain excellent karma there (i.e. my peers like me there). Likewise, out of the 30+ publications on my publication page, only a handful are actually something that I still consider to have been worth my time writing.

The reason that there are so many bad articles out there is that the whole process is optimized for meeting the mostly quantitative goals that universities and research institutes set for their academic staff. To reach these goals, academics organize workshops and conferences with and for each other, which provide them with a channel for meeting these targets. The result is workshops full of junior researchers, like I once was, trying to sell their early efforts. Occasionally some really good stuff is published this way, but generally the more mature material is saved for conferences, which have a somewhat wider audience and stricter reviewing. Finally, the only thing that really counts in the academic world is journal publications.

Those are run by for-profit publishing companies that employ successful academics to do the content sorting and peer review coordination for them. Funnily enough, these tend to also be the people running conferences and workshops: veterans of the whole peer reviewing process. Journal sales are based on volume (e.g. an issue once a quarter or once a month), reputation, and a steady supply of new material. This is a business model that the publishing industry has perfected over the centuries, and many millions in research money flow straight to publishers. It is based on a mix of good enough papers that libraries and research institutes will pay to access, and the need of the people in these institutes to get published, which in turn requires access to the published work of others. Good enough is of course a relative term here. If you set the bar too high, you’ll end up not having enough material to make running the journal printing process commercially viable. If you set it too low, no one will buy the journal.

In other words, top to bottom the scientific publishing process is optimized for keeping most of the academic world employed while sorting out the bad eggs and boosting the reputation of those who perform well. Nothing wrong with that, except that for every Einstein there are tens of thousands of researchers who will never publish anything significant or groundbreaking but who get published anyway. Put differently, most stuff published is apparently worth the paper it is printed on (at least to the publishing industry) but not much more. I’ve always found the economics of academic publishing fascinating.

Anyway, just some Sunday morning reflections.

Paper on Sensor Actuator Kit evaluation

One of our interns, Filip Suba, presented a paper earlier this week in Turku at the Symposium on Applications & the Internet. He’s one of my former (since today) master’s thesis students who has been working in our team at NRC for the past few months. The paper he presented is actually based on the work he did a few months before that, when he was working for us as a summer trainee (the previous summer, that is). His master’s thesis work (on a nice new security protocol) is likely to result in another paper, when we find the time to write it.

Anyway, we hired him last year to learn a bit more about the state of the art in sensor actuator development kits, with a particular focus on building applications and services that are kit independent and expose their internals via a web API. This proved quite hard, and we wrote a nice paper on our experiences.

In short, wireless sensor actuator kits are a mess. Above the radio level very little is standardized, and the little there is is rarely interoperable. Innovation here is very rapid, though, and the future looks bright. I watched this nice video about Sun SPOT earlier this week, for example:

The notion of making writing sensor node software as simple as creating a J2ME MIDlet is in my view quite an improvement over hacking C using some buggy toolkit, cross compiling to obscure hardware, and hoping it works.
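To give an idea of what that simplicity looks like, here is a minimal sketch of a sensor-sampling MIDlet. The lifecycle methods (startApp, pauseApp, destroyApp) are standard J2ME; the readLightLevel() helper is a hypothetical stand-in for whatever sensor API a particular kit (such as the Sun SPOT demo board) actually exposes, so treat this as an illustration rather than real Sun SPOT code.

```java
import javax.microedition.midlet.MIDlet;
import javax.microedition.midlet.MIDletStateChangeException;

/**
 * Minimal sketch of a sensor-sampling MIDlet. The J2ME lifecycle
 * methods are real; readLightLevel() is a hypothetical stand-in
 * for a kit-specific sensor API.
 */
public class LightSamplerMIDlet extends MIDlet {

    private volatile boolean running;

    protected void startApp() throws MIDletStateChangeException {
        running = true;
        // Sample the sensor once per second on a background thread.
        new Thread(new Runnable() {
            public void run() {
                while (running) {
                    int level = readLightLevel();
                    System.out.println("light=" + level);
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }
        }).start();
    }

    protected void pauseApp() {
        running = false;
    }

    protected void destroyApp(boolean unconditional) {
        running = false;
    }

    // Hypothetical: replace with the kit's real sensor call.
    private int readLightLevel() {
        return 42;
    }
}
```

Compare that to a typical C toolchain for the same task: no cross compiler setup, no flashing ritual, and the same code runs on any node with the right VM.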

New and updated publications

As you saw in yesterday’s post, my publication site has moved to this blog. I also took the opportunity to update the page with recent work.

You can download the PDFs and find the full references here: publications.

Adamus 2008

One of my colleagues, Cristiano di Flora, with whom I’ve written several articles over the past year, is co-organizing the second workshop on Adaptive and DependAble Mobile Ubiquitous Systems (Adamus), which will be co-hosted with this year’s WoWMoM symposium in California this summer.

He asked me to do a bit of promotion on my blog, mainly for the purpose of being able to link to a real blog post. So here goes. The workshop looks like it could be very interesting and is well aligned with what Cristiano and I are working on in the Smart Space Lab, so we will likely be presenting there as well. If you are interested in learning more about preliminary results from our current work, check out the recent publications on publications.jillesvangurp.com.

Experiences with realizing Smart Space Web Service Applications

I had a nice article accepted at the upcoming 1st IEEE International Peer-to-Peer for Handheld Devices Workshop at the CCNC ’08 conference in Las Vegas.

Jilles van Gurp, Christian Prehofer, Cristiano di Flora, Experiences with realizing Smart Space Web Service Applications

This paper presents our approach for building an internet-based middleware platform for smart spaces, as well as a number of services and applications that we have developed on top of it. We outline the architecture for the smart space middleware and discuss several applications and services that we have so far realized with this middleware. The presented material highlights key concepts in our middleware vision: services are HTTP-based and RESTful; applications are accessed through a browser so that they are available on a wide variety of devices; and we demonstrate the concept of bridging non-internet-enabled smart space devices to our IP and HTTP centric smart space network.
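To make the "HTTP-based and RESTful" point a bit more concrete, here is a toy sketch (not our actual middleware) of how a smart space device reading could be exposed as a plain HTTP resource that any browser can fetch. It uses the com.sun.net.httpserver package that ships with Java 6; the readTemperature() helper is a hypothetical stand-in for a real sensor sitting behind a bridge.

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

/**
 * Toy illustration: a smart space device exposing its state as a
 * plain HTTP resource, readable by any browser or HTTP client.
 */
public class TemperatureResource {

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        // GET /temperature returns the current reading as plain text.
        server.createContext("/temperature", new HttpHandler() {
            public void handle(HttpExchange exchange) throws IOException {
                byte[] body = (readTemperature() + " C").getBytes("UTF-8");
                exchange.getResponseHeaders().set("Content-Type", "text/plain");
                exchange.sendResponseHeaders(200, body.length);
                OutputStream out = exchange.getResponseBody();
                out.write(body);
                out.close();
            }
        });
        server.start();
        System.out.println("Listening on http://localhost:8080/temperature");
    }

    // Hypothetical: stands in for a real sensor behind a bridge.
    private static double readTemperature() {
        return 21.5;
    }
}
```

The point of doing it this way is that the device needs no custom client software at all: a browser, curl, or any HTTP library can consume the resource.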

I’ve uploaded the paper to my publications site.

Workshop paper

Together with my two colleagues Christian Prehofer and Cristiano di Flora, I wrote a nice workshop paper for the upcoming Second Workshop on Requirements and Solutions for Pervasive Software Infrastructures (RSPSI) at UbiComp 2007, Innsbruck, 16-19 September 2007.

Towards the Web as a Platform for Ubiquitous Applications in Smart Spaces (PDF).

Abstract:

We introduce our web-based middleware for smart spaces, which strongly relies on technologies used in Internet services. Following the key requirements and technologies, we present our architecture for ubiquitous applications in smart spaces. It exploits and leverages many of the key web technologies as well as “Web 2.0” collaborative and social Internet services, including browsers, web servers, development tools, and content management systems. In this way, we aim to make many of the highly disruptive ubiquitous applications less disruptive from a technology point of view. Furthermore, we discuss a number of new challenges for applying these technologies in ubiquitous applications. These include the areas of discovery/delivery of services, security, content management, and networking.

The article is a nice milestone for our very young research group. An earlier position paper already outlined our vision for software development in smart spaces. This article builds on that vision and makes public a few details of the software we are building in the research group.

Towards Effective Smart Space Application Development: Impediments and Research Challenges

I submitted a nice position paper with two of my colleagues at Nokia to the CMPPC’07 (Common Models and Patterns for Pervasive Computing) Workshop, at Pervasive 2007 in Toronto next month.

Abstract:

State-of-the-art research and existing commercial off-the-shelf solutions provide several technologies and methods for building smart spaces. However, developing applications on top of such systems is quite a complex task due to several impediments and limitations of the available solutions. This paper provides an overview of these impediments and outlines the main research challenges that still need to be solved in order to enable effective development of applications and systems that fully exploit the capabilities of state-of-the-art technologies and methodologies. The paper also outlines a few specific issues and impediments that we at the Nokia Research Center have faced in this field so far. It also sheds some light on how we are going to tackle some of these issues in the future.

Full details are on my publication site, and you can download the PDF from there as well.

Variability Management and Compositional SPL Development

I submitted a nice position paper to the variability management workshop here in Helsinki next month.

Abstract:

This position paper reflects on the implications for variability management related practices in SPL development when adopting a compositional style of development. We observe that large scale software development is increasingly conducted in a decentralized fashion and on a global scale, with little or no central coordination. However, much of current SPL and variability practice seems to have a strong focus on centrally maintained artifacts such as feature and architecture models. We conclude that in principle it should be possible to decentralize these practices, and we identify a number of related research challenges that we intend to follow up on in future research.

Full details are on my publication site, and you can download the PDF from there as well.

Website maintenance

I did some maintenance on my website this morning. I fixed a few broken URLs pointing to websites of friends. I also added a workshop paper to my publications page that I co-authored with Ronald Bos nearly two years ago. Then I discovered that several of the papers there had incomplete references, so I fixed that too.

Finally, I updated my photo site and added pictures I took last Christmas in France and last week when my father, uncle, nephew, and niece’s boyfriend visited Helsinki.

Semantic diffusion

Martin Fowler wrote a nice blog post on semantic diffusion. It’s a term he coins to describe the effect that the meaning of a new term tends to diffuse as people start using it without paying too much attention to the original definition. As examples he uses web 2.0 and agile, both of which have suffered from a lot of semantic diffusion due to the associated hype and buzz.

I’ve noticed the same with the term “software architecture”, which was first coined by Perry and Wolf in 1992. Soon after, people started using it, and of course every self-respecting software firm suddenly had “software architecture”, even the ones you might call “architecturally challenged” in the sense that they had the equivalent of Stonehenge (piled-together rocks) rather than, say, the Eiffel Tower. Also, by the late nineties, at every software architecture conference, workshop, or symposium, some person would come up to kick off a discussion on “hey, what do we actually mean by software architecture”. This was fun the first dozen times, but I found that the discussion resets itself as soon as you leave the room. Nobody reads up, and the older stuff especially gets ignored a lot.

However, the trend is turning around. A lot of serious software architecture books, businesses, and tools have emerged that allow us to separate the men from the boys when it comes to software architecture. The type of discussion listed above still surfaces at basically any related conference, but you can now end it quickly by pointing out a few good references and asking a few simple questions about practices, tools, etc.

This is how language works. Semantic diffusion is a crucial linguistic mechanism through which languages constantly evolve and change. New words and concepts are added on a continuous basis, and old ones are repurposed as well. Good words survive and have their definitions sharpened and eventually documented in dictionaries, encyclopedias, literature, and other reference material.

I sure hope this web X.0 business ends soon. I’ve already seen people blogging about web 3.0. Essentially the semantic web people have already recognized that they are missing the boat for 2.0 and are now targeting 3.0 :-). Of course it’s just a matter of time before we start seeing web 4.0 being coined, by which time the actual meaning of web 2.0 will have diffused to “so 2006”.