Last month a European-funded research project I'm involved in kicked off: SENSEI. The official project website is still under construction, but somebody was kind enough to blog about it at least.
Sensei will be a lot of work and hopefully also very interesting and fun. It's a so-called integrated project in FP7, which means it is one of the many large projects, with lots of partners across Europe (21 in our case, from both academia and industry), financed by the EU in the context of the Seventh Framework Programme, which will distribute billions in research money over the next few years. So if you are an EU citizen, thanks very much for making this possible with your tax money ;-).
Nokia is in Sensei for 9 person-years spread over three years (50% funded). The blog post mentioned above has a pretty good overview of the key points of Sensei, so I won't repeat them here. My personal interest in this project is the software middleware layer.
One of my colleagues, Cristiano di Flora, with whom I've written several articles over the past year, is co-organizing the second workshop on Adaptive and DependAble Mobile Ubiquitous Systems (ADAMUS), which will be co-hosted with this year's WoWMoM symposium in California this summer.
He asked me to do a bit of promotion on my blog, mainly so he'd be able to link to a real blog post. So here goes. The workshop looks like it could be very interesting and is well aligned with what Cristiano and I are working on in the Smart Space Lab, so we will likely be presenting there as well. If you are interested in learning more about preliminary results from our current work, check out recent publications on publications.jillesvangurp.com.
Together with two of my colleagues at Nokia, I submitted a position paper to the CMPPC'07 (Common Models and Patterns for Pervasive Computing) workshop at Pervasive 2007 in Toronto next month.
State-of-the-art research and existing commercial off-the-shelf solutions provide several technologies and methods for building smart spaces. However, developing applications on top of such systems is quite a complex task due to several impediments and limitations of the available solutions. This paper provides an overview of these impediments and outlines the main research challenges that still need to be solved to enable effective development of applications and systems that fully exploit the capabilities of state-of-the-art technologies and methodologies. The paper also outlines a few specific issues and impediments that we at the Nokia Research Center have faced in this field so far, and sheds some light on how we intend to tackle some of these issues in the future.
Full details are on my publication site, and you can download the PDF from there as well.
Slashdot reported on the creation of Citizendium by Wikipedia co-founder Larry Sanger. Citizendium plans to fork Wikipedia, specifically to impose some level of quality control. I've read through the plan currently on the website and it looks quite reasonable. Essentially, it creates a layer of expert editors on top of the regular anonymous editors who do much of the grunt work in Wikipedia. Expert editors are people with established, recognized backgrounds in particular topics. They must disclose their identity and verifiable credentials in order to get the status of expert editor. The idea seems to be that in case of conflicts, expert editors decide.
Like many users, I'm pretty fond of Wikipedia. I am also aware of its limitations with respect to quality. Wikipedia and its associated community have grown rapidly over the past few years. However, the slightly anarchistic model that drives this growth has the disadvantage that the work of more knowledgeable individuals in the community can be damaged (intentionally or unintentionally) by unfortunate edits. This problem is real, not imagined, and it affects the quality of many Wikipedia articles. I was reading an interesting article on mathematics the other day (brushing up on some rusty skills and long-forgotten concepts) that looked like somebody had spent a lot of time on it. The current model of Wikipedia makes it possible for anyone to add to or change that article. However, I'd hate to see any non-trivial edits to that particular article by someone without a solid mathematics background (e.g. me).
For all practical purposes, Wikipedia is rightly conservative in changing the way it operates; after all, it has been very successful so far. Forking therefore seems a good way to experiment with new collaboration strategies. A fork does not need to be permanent: unlike with source code, it's actually pretty easy to do some form of controlled synchronization or even merging of articles. Branching might be a more appropriate name, since both branches will be able to benefit from work done in the other.