Local Interaction demo on YouTube

I gave a demo of our system at a press event for Nokia a few weeks ago. Our PR people were busy filming and have put several short demo movies on YouTube, including my demo and several other cool demos from colleagues in Nokia Research.

So enjoy.

Since I expect people will be interested in learning more about this, I'll try to give some explanation right here.

In the YouTube movie above, I am showing off our Local Interaction demo, a mobile website built on our indoor location based service platform. For the positioning we have collaborated with a different team that has been working on an indoor positioning system, which was demoed separately at the same event. Our demo leverages their technology and provides services on top.

Indoor location based services are similar to outdoor location based services in the sense that search, navigation, social networking, and media sharing can all benefit from knowing where you are. In a nutshell, we have indoor location enhanced variants of these features integrated into our platform. However, indoor location based services differ in the sense that they are much more relevant to people. People spend most of their lives indoors!

Having a platform is nice of course, but working code with real users is nicer. So we have spent most of this year preparing a trial in a real shopping mall here in Helsinki. The website you see has a nicely polished UI, rich indoor content for the shopping mall, and a set of useful services around the mall concept aimed at consumers as well as proprietors of the mall, such as shop owners. The advantage of working with a real place is that it forces you to be very pragmatic about a lot of issues.

Doing a demo like ours is easy if you can assume everybody goes to the same building, uses the same phone with just the right firmware version, and runs your handcrafted application. A substantial part of the Ubiquitous & Pervasive Computing research community is perfectly happy with that sort of demo setup and proof of concept. This is why, despite decades worth of demos, there's no significant technology available for consumers beyond the boring old home automation kits and that sort of thing. It takes a bit more to make a real life impact.

A key motivation for our demo is that we don't want to make such assumptions. Our requirements are: a capable mobile web browser (most modern Nokia phones and many phones from other vendors) and, optionally, the indoor positioning client software. Like outdoor location based services, our services are perfectly usable without positioning, so we don't strictly require it. Being web based means we can reach many more users than we could with a native application.

With everything we do, our vision is that eventually we want to do this not just in one shopping mall but worldwide, in a lot of public places. The primary goal of our trial is to gain experience rolling out this kind of technology and to learn more about the practical and technical obstacles to making this work on a more interesting scale in the future. We want to show that our platform is scalable in both a technical and a business sense. We want to kick off a whole new market for indoor location based services. It doesn't exist today, so we have to build the whole ecosystem from scratch.

For the more technical people: our web platform is based on Python Django and integrates the positioning and other services using REST based services over HTTP. The friends feature we demo is realized using the Facebook API. We rely on AtomPub and Atom feeds internally. We intend to be mashup friendly so as not to have to reinvent every feature ourselves and instead integrate with many existing services. We have an Apache Lucene based search server that powers our indoor location based search feature. We use this feature quite heavily to look for indoor location tagged content such as photos, ads, vouchers, comments, etc. Finally, we use an off the shelf open source map server that serves up the indoor maps. In general, our philosophy is that there are already enough poorly reinvented wheels. We build what we need only if we can't reuse what is out there. The web is out there and we use it.
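To give a flavor of this, here is a minimal sketch of what an indoor location based search endpoint might look like on a Django stack. It is not our actual code: the lucene_search helper, the URL parameters and the search radius are all invented for illustration.

    # A minimal sketch of an indoor location based search endpoint in Django.
    # The lucene_search helper and all URL parameters are invented for
    # illustration; our actual code differs.
    from django.http import HttpResponse, HttpResponseBadRequest
    from django.utils import simplejson

    from myapp.search import lucene_search  # hypothetical wrapper around the Lucene server

    def nearby_content(request):
        """Return location tagged content (photos, ads, vouchers, ...)
        near a given indoor position as JSON."""
        try:
            floor = request.GET['floor']
            x = float(request.GET['x'])
            y = float(request.GET['y'])
        except (KeyError, ValueError):
            return HttpResponseBadRequest('floor, x and y parameters are required')

        # The query itself is delegated to the Lucene based search server,
        # which indexes the indoor location tagged content.
        hits = lucene_search(floor=floor, x=x, y=y, radius=25.0)
        results = [{'id': hit.id, 'type': hit.type, 'title': hit.title}
                   for hit in hits]
        return HttpResponse(simplejson.dumps(results), mimetype='application/json')

The point of the sketch is the division of labor: the web framework only validates parameters and renders JSON, while the search server does the actual lookup of location tagged content over HTTP.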

For the researchers: a few articles that should be published in the next few months will outline the research we have done on this. Meanwhile, you can refer to my publications page for a few workshop papers on a predecessor of this demo that we did in 2007. We also demoed that 2007 system at the Internet of Things conference last April. And of course, there will be more details on the trial once we launch it.

About the project: this demo was developed as part of a Nokia project on an "Application Environment for Smart Spaces", which is currently running in the Smart Space Lab, part of Nokia Research Center. Headcount has varied quite a bit, but currently around 8 people are working full time on this. My role in the project is pushing architecture solutions and coordinating the development, together with a small group of researchers and several excellent software engineers.

The Way We Live Next

I stumbled upon somebody writing about Nokia's 2007 Way We Live Next event in Oulu. This event was intended to give the outside world a view of what is going on in Nokia Research Center.

A nice quote:

Lots of interesting stuff was shown off during the course of the two days and the most interesting I came across was the indoor positioning concept. Using WiFi and specially created maps, the devices we were issued with were running the software which enabled you to move through the NRC building and pinpoint exactly where you were. So, if the next presentation was in room 101, the device would simply, and quickly show you the way. It instantly made me think of the frustration of trying to get where I want in huge shopping centres – and I figured this had to be the perfect solution.

Next week, the 2008 edition of the WWLN is going to be in Espoo, and I will be giving a demo there of our indoor location based service platform, customized for a real shopping mall. We demoed last year's version of our software platform at the Internet of Things Conference last April. At the time, our new platform had already been under development for several months, and we are getting ready to start trialing it now. The WWLN event next week will be the first time we show this in public, and hopefully we'll get some nice attention from the press.

PS. I like (good) beer …

From SPLs to Open, Compositional Platforms

Below is a position paper I submitted to the upcoming Dagstuhl seminar I am attending. It's not peer reviewed, and it is not clear at this point if there will be any proceedings. So, as an experiment, I will just put the full text in a blog post, as well as the pdf you can find on my publications page. The reason I am doing this is twofold: first, I want people to read the stuff I write, and locking it up in some hard to find proceedings just isn't doing the trick. Second, this blog has a comment feature. Please feel free to use it.


From SPLs to Open, Compositional Platforms

Jilles van Gurp & Christian Prehofer
Smart Space Lab
Nokia Research Center
Helsinki, Finland

Abstract. In this position paper we reflect on how software development in large organizations such as ours is slowly changing from being managed top down, as is common in SPL organizations, towards something that increasingly resembles what is happening in large open source organizations. Additionally, we highlight what this means in terms of organization and tooling.

Trends and Issues

Over the past decade of our involvement with Software Product Lines (SPLs), we have seen the research field grow and prosper. By now, many companies have adopted SPL approaches for their core software development. For example, our own company, Nokia, features prominently in the SEI's Product Line Hall of Fame [SEI 2006]. Recently, we [Prehofer et al. 2007], and others [Ommering 2004], have published articles on the notion of compositional development, which decentralizes the development of software platforms and products. The motivation for our work in this area is that we have observed the following trends affecting software development:

  • Widening platform scope and more diverse products. As "victims" of their own success, successful product lines allow for the creation of an ever wider range of products. Necessarily, these products have less and less in common with each other. In particular, they are likely to have substantial product specific requirements and to require increasing amounts of variability in the platform provided features to deal with conflicting and overlapping requirements in the base platform. In other words, the percentage of functionality shared across all products relative to the total amount of functionality in the platform is decreasing. At the same time, the percentage of platform assets actually used in any particular product is also decreasing.
  • Platforms stretch over multiple organizations. As platform and product development start to span multiple organizational entities (companies, business units, open source projects, etc.), more openness towards different and conflicting requirements, features, roadmaps and processes in the different development entities is required. This concerns both open source software and commercial platforms that are developed and productized differently by third party companies.
  • Time to market and innovation speed. While time to market has always been a critical issue, it is particularly an issue given the growing size and complexity of Software Product Lines. In general, large scale software projects tend to have longer development cycles. In the case of Software Product Lines that have to cater for more and more heterogeneous products, development cycles tend to lengthen as the work related to defining, realizing and testing new functionality grows increasingly complex. However, time to market of features includes not only the product line development cycle but also the time needed to do product derivation, as well as the development cycles of any external software the Software Product Line integrates. In the worst case, a feature first needs to be integrated in one of these dependencies; then it needs to be integrated into the next major release of the Software Product Line before, finally, a software product with the new feature can be developed and put on the market.

We are seeing examples of this in Nokia as well. For example, Nokia has software development spread over several major phone platforms (S30, S40, S60 and Linux Maemo) and launches multiple products from each of those platforms every year. It is interesting to note that Nokia has never really retired a mobile phone software platform and is actively using all of them. Roughly speaking, S40's evolution is in sync with the popularization of the notion of Software Product Lines since the mid nineties, and it is indeed this product line that is featured in the aforementioned SEI SPL Hall of Fame [SEI 2006]. Development for products and platforms is spread over many Nokia locations all over the globe, as well as a complex network of subcontractors, customers and supplying companies. Additionally, the use of open source software, and the intensive collaboration Nokia has with many of the associated projects, add more complexity here. Finally, time to market is of course very important in the mobile phone market. Products tend to be on the market for only a short time (e.g. 6-12 months), while developing them from a stable software platform can take more than a year in some cases. This excludes the time needed for major new releases of our software platform. Consequently, disruptive new features in the platform may take years to reach the market in the form of new phones.

The way large organizations such as Nokia manage and organize their software and platform development is constantly pushing the limits of what is possible with software engineering and architecting tools and methodology. Nokia is one of a handful of companies worldwide that manage tens of millions of lines of code across their product lines and products. We see Software Product Lines as a way of developing software that has arguably been very successful in organizations like ours. However, we also note that development practice increasingly deviates from what the literature on Software Product Lines prescribes, particularly with respect to centralized definition, control, ownership and management of software assets and products. Therefore, we argue that the research community now needs to adapt to this new reality as well.

The complexity and scale of the development organization increasingly make attempts to centrally manage it futile and counterproductive. Conflicts of interest between stakeholders, bureaucracy, politics, etc. all affect centralized platform and product decision making and can lead to unworkable compromises or delays in the software development process. Additionally, it is simply becoming impossible to develop software without depending on at least some key open source projects. Increasingly, the industry is also participating as an active contributor in the open source community. Arguably, most of the open source community now consists of software developers sponsored in some way by for profit organizations. For example, Nokia is a very active participant in the mobile Linux community (the Maemo Linux platform) and ships products such as the N810 internet tablet, where the majority of the lines of code actually comes from externally owned and run open source projects and even direct competitors (e.g. Intel and Motorola).

This changes the game of balancing product and platform requirements, needs and interests substantially from what is generally assumed in a classical SPL context, where a single company develops both platform and products in house and where it is possible to drive both product and platform development in a top down fashion. That simply does not work in a context where substantial amounts of critical software in a product come from external sources that are unwilling or unlikely to take orders from internal product managers or other executives external to their organization.

Effectively, this new reality necessitates a different approach to software development. Rather than driving a top down decomposition of products and features and managing development and software assets along this hierarchy, as is very much the consequence of implementing the practices advertised in the SPL literature, we propose to adopt a more compositional style of development.

Compositional Development

In our earlier work [Prehofer et al. 2007], we outlined a more compositional approach to development. Rob van Ommering has argued along similar lines, but still takes the traditional perspective of a (large) company managing a population of products [Ommering 2002][Ommering 2004]. What we propose here, however, is to decentralize development further and organize it like the open source community, where many independent teams of component, framework and product owners work together. Each of those teams acts to represent its own interests (and presumably those of whomever they work for). Their perspective on the external world is simply that of upstream and downstream dependencies. Downstream are the major users and customers that use the software the team produces; these stakeholders act as the primary source of requirements, and probably also of funding. Upstream operate the teams that produce the software required for using and developing the team's own software. These teams in turn depend on their own downstream dependencies for requirements and funding.

This decentralized perspective is very different from the centralized one and essentially allows each team to optimize for what is required from them downstream and what is available to them upstream. For example, requirements for each team come primarily from their downstream dependencies. Since there is no central controlling entity that dictates requirements, picking up these requirements and prioritizing them is very much the task of the teams themselves. Of course, they need to do so in cooperation with their downstream dependencies. Generally, especially when crossing organizational boundaries, requirements are not dictated; rather, the development teams try to assess the needs of their most important customers.

Organization

As Conway's Law [Conway 1968] predicts, the architectural decomposition of software is reflected in organizations. In many open source communities, project team dependencies reflect the architectural decomposition of software into packages, frameworks, libraries, components, or other convenient units of software decomposition. Obviously, without at least some structure and management in place, the approach advocated here results in total anarchy, which is not a good organizational model for accomplishing anything but chaos. Again, we look at the open source world, where organizations such as Ubuntu, Eclipse, Apache and Mozilla are driving the development of thousands of projects. Each of these organizations has a surprisingly sophisticated organizational structure that comes with rules, best practices, decision making processes, etc. While there are no binding contracts enforcing these, participants in the community are required to play by the rules or risk being ignored.

In practice this means that participants voluntarily comply with practices and rules and take part in what is often called a meritocracy, where important decisions are taken by those who have the merits to do so. Generally, this requires a track record of making important contributions and having the trust of the community. For example, in the Eclipse Foundation, which was founded by IBM, this means that individuals from some of IBM's major competitors, such as BEA and Red Hat, actually lead some of the key projects under the Eclipse umbrella. These individuals are essentially trusted by IBM to do the right things, even though they work for a major competitor. Organizations such as Eclipse exist to represent the common interests of the project teams they are composed of. For example, the Eclipse Foundation, which is very much a corporate driven (and financed) institution, represents a broad consortium of stakeholders that covers pretty much the entire spectrum of Java (and increasingly also non-Java) enterprise, desktop and mobile/embedded software development tooling. In the past two years, they have organized two major, simultaneous releases of their major projects. In the latest of these, which goes by the name Europa, they managed to synchronize the release process of around 20 of their top level projects, which are collectively developed by thousands of developers coming from dozens of companies. Many of these companies are competitors; for example, BEA and IBM compete directly in the enterprise market and are major contributors to multiple Eclipse projects.

What this proves is that the way the Eclipse Foundation organizes development is extremely effective and scalable, because it involves dozens of organizations and hundreds or thousands of individuals producing, integrating and testing an enormous amount of new software in a very short time frame. Organizing like this brings the flexibility needed to work seamlessly with numerous internal and external teams, and acknowledges the reality that even internally, relations between teams can be complex and difficult to manage centrally.

Tooling

A consequence of decentralizing is that aligning the use of tools across development teams becomes essential. When collaborating, it helps if tools between teams are at least similar, and preferably compatible or the same. SPL research has over the past few years focused on tooling for variability management, configuration management and requirements management. However, getting these tools adopted and used effectively in a context of thousands of collaborating software development teams is quite a challenge, especially since many of these tools are either developed in house or used in only a handful of companies. Tooling in the open source community tends to focus on the essentials. That being said, the OSS community has also produced many development tools that are now used on a massive scale. For example, Mozilla has had a pioneering role through their contribution of important tools such as Bugzilla and Bonsai (bug tracking and build monitoring). The whole point of the Eclipse Foundation seems to be development tools. Additionally, they have a project called Equinox that implements a very advanced component framework, provides many interesting variability technologies, and has brought into mainstream use notions such as API versioning and provided and required interfaces on components. In short, there seems to be a gradual migration of SPL like tool features to mainstream tooling. Additionally, Eclipse is of course a popular platform for developing such tooling in the research community.
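As an illustration of that last point, this is roughly what provided and required interfaces with explicit version ranges look like in an OSGi style bundle manifest of the kind Equinox implements. The bundle and package names (other than org.osgi.framework) are invented for illustration:

    Bundle-SymbolicName: com.example.mapviewer
    Bundle-Version: 1.2.0
    Export-Package: com.example.mapviewer.api;version="1.2.0"
    Import-Package: org.osgi.framework;version="[1.3,2.0)",
     com.example.positioning.api;version="[2.0,3.0)"

The exported package is the provided interface; the version ranges on the imported packages allow any bundle that satisfies them to be composed in. This is precisely the kind of variability mechanism that SPL tooling has traditionally provided in proprietary ways.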

Conclusions and Future work

In this position paper we have tried to highlight a few of the key issues around the ongoing trend from integrational development towards a more open ecosystem, where many stakeholders work on many pieces of software that are integrated into products by some of the stakeholders. We are currently working on an article about what it means to move from an integrational software development practice to a compositional approach, in terms of organizational models, practices and other aspects. In that article, we will list a number of practices that we associate with compositional development and evaluate these against practices in open source communities as well as selected SPL case studies from the research community. Arguably, SPLs have vastly improved software development in many companies over the past decade or so. Therefore, the key issue for the next decade will be re-aligning with the identified trends towards larger software development ecosystems while preserving and expanding the benefits that SPL development has brought.

We do not see compositional development vs. SPL development as a black and white issue; instead, we regard it as a wide spectrum of development practices, each of which may or may not be applied by individual companies. The more of them they apply, the more compositional their development becomes. In any case, the right set of practices is of course highly dependent on context, domain, stakeholders, etc. However, we observe that in order to scale development, and in order to work effectively with hundreds or even thousands of globally and organizationally distributed software developers, it is necessary to let go of centralized control. Compositional development in this open environment is vastly more complex and organic, and, so we believe, more cost effective.

References

[Conway 1968] M. E. Conway, How do committees invent?, Datamation, 14(4), pp. 28-31, 1968.
[Ommering 2002] R. van Ommering, Building product populations with software components, Proceedings of the 24th International Conference on Software Engineering (ICSE 2002), pp. 255-265, 2002.
[Ommering 2004] R. van Ommering, Building Product Populations with Software Components, Ph.D. thesis, University of Groningen, 2004.
[Prehofer et al. 2007] C. Prehofer, J. van Gurp, J. Bosch, Compositionality in Software Platforms, in A. De Lucia, F. Ferrucci, G. Tortora, M. Tucci (eds.), Emerging Methods, Technologies and Process Management in Software Engineering, Wiley, 2008.
[SEI 2006] Software Engineering Institute, Product Line Hall of Fame, http://www.sei.cmu.edu/productlines/plp_hof.html, 2006.

New and updated publications

As you saw in yesterday's post, my publication site has moved to this blog. I also took the opportunity to update the page with recent work.

You can download the pdfs and find the full references here: publications.

Adamus 2008

One of my colleagues, Cristiano di Flora, with whom I've written several articles over the past year, is co-organizing the second workshop on Adaptive and DependAble Mobile Ubiquitous Systems (Adamus), which will be co-hosted with this year's WoWMoM symposium in California this summer.

He asked me to do a bit of promotion on my blog, mainly so he could link to a real blog post. So here it goes. The workshop looks like it could be very interesting and is well aligned with what Cristiano and I are working on in the Smart Space Lab, so we will likely be presenting there as well. If you are interested in preliminary results from our current work, check out the recent publications on publications.jillesvangurp.com.

OpenID 2.0 and concerns about it

It seems JanRain is finally readying the final version of OpenID 2.0. There's a great overview of some concerns, which I mostly share, on readwriteweb.com. Together with another recent standard, OAuth, OpenID 2.0 could be a huge step forward for web security and privacy.

Let's start with what OpenID is about and why, generally, it is a good idea. The situation right now on the web is that:

  • Pretty much every web site has its own identity solution. This means that users have to keep track of dozens of accounts. Generally users have only one or two email addresses, so in practice most of these accounts are actually tied to one email account. Imagine someone steals your gmail password and starts scanning your mail for all those nice account activation mails you've been getting for years. Hint: "mail me my password", "reset my password". In short, the current situation has a lot of security risks. It's basically all the downsides of a centralized identity solution without any of the advantages. There are many valid concerns about using OpenID, related to e.g. phishing. However, what most people overlook is that the current situation is much worse, and also that many OpenID providers actually address the concerns by implementing various technical solutions and security practices. For example, myopenid.com and Verisign employ very sophisticated technologies that you won't find on many websites where you would happily provide your credit card number. There is no technical reason whatsoever why OpenID providers can't use the same or better authentication mechanisms than you probably already use with your bank.
  • While technically some websites could work together on identity, very few do, and the ones that do tend to have very strong business ties (e.g. banks, local governments, etc.). This means that in most cases, reusable identity is only usable on a handful of partner sites. Google, Microsoft, and Yahoo are great examples: they each have partner programs that allow externals to authenticate people with them. Only problem: almost nobody seems to do that. So, reality check: OpenID is the only widespread single sign on solution on the web. There is nothing else. All the other stuff is hopelessly locked into commercial verticals. Microsoft has been trying for years to get their Passport solution to do what OpenID is doing today; they have failed miserably so far.
  • Web sites are increasingly dependent on each other. Mashups started as an informal thing where site A used an API from site B and did something nice with it. Now these interactions are getting much more complex. The number of sites involved in a typical mashup is increasing, and so is the amount of privacy sensitive data flying around in these mashups. A very negative pattern that I've seen on several sites is the "please provide your gmail/hotmail/yahoo username and password and we'll import your friends" type of feature. Do you really want to share years of private email conversations with a startup run from a garage in California? Of course not! This is not a solution but a cheap hack. The reality is that something like OpenID + OAuth is really needed, because right now many users are putting themselves in danger by happily handing out their usernames and passwords.
  • Social networks like Facebook authenticate people for the many little apps that plug into them. So far Facebook is the most successful here. Facebook provides a nice glimpse of what OpenID makes possible on a much larger scale, but it is still a centralized vertical. I am on Facebook and generally like what I see there, but I'm not really comfortable with the notion that they are the web from now on (which seems to be implied in their centralized business model). Recent scares with their overly aggressive advertisement schemes show that they can't really be trusted.

OpenID is not a complete solution for the above problems, and it is important to realize that this is by design: it tries to solve only one problem and tries to solve it well. But generally it is a vast improvement over what is used today. Additionally, it can be complemented with protocols like OAuth, which is about delegating permissions from one site to another on your behalf. OpenID and OAuth are very well integrated with the web architecture in the sense that they are not monolithic identity solutions but modular ones, designed to be combined with other modular solutions. This modular nature is essential because it allows for very diverse combinations of technology, which in turn allows different sites to implement the security they need, but in a compatible way. For example, for some sites allowing any OpenID provider would be a bad idea. So: implement whitelisting and work with a set of OpenID providers you trust (e.g. Verisign).
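To make the whitelisting idea concrete, here is a minimal sketch of how a relying party might restrict logins to a set of trusted providers before starting OpenID authentication. The TRUSTED_HOSTS list and the start_openid_login function are invented for illustration, and a production implementation should whitelist the discovered provider endpoint rather than just the user supplied identifier:

    # Minimal sketch of OpenID provider whitelisting in a relying party (Python 2).
    # TRUSTED_HOSTS and start_openid_login are illustrative only; a real
    # implementation should check the *discovered* OP endpoint, not the raw identifier.
    from urlparse import urlparse

    TRUSTED_HOSTS = ('myopenid.com', 'pip.verisignlabs.com')  # example providers

    def is_trusted(identifier):
        """Accept the identifier only if it is served from a whitelisted host."""
        if '://' not in identifier:
            identifier = 'http://' + identifier  # OpenID style normalization
        host = urlparse(identifier)[1].lower()  # the network location part of the URL
        # Allow exact matches and subdomains such as alice.myopenid.com.
        return any(host == t or host.endswith('.' + t) for t in TRUSTED_HOSTS)

    def start_openid_login(identifier):
        if not is_trusted(identifier):
            raise ValueError('OpenID provider not whitelisted: %s' % identifier)
        # ... continue with normal OpenID discovery and authentication here ...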

OpenID and OAuth provide a very decent base level of protection that is not available from any other widely used technology today. The closest thing to it is the Liberty Alliance/SAML/Microsoft family of identity products. These are designed for, and applied almost exclusively in, enterprise security products. You find them in banks, financial institutions, travel agencies, etc. These are also used on the web, but invariably only to build verticals. Both Google and Microsoft use technologies like this to power their identity solutions. In fact, many OpenID identity providers also use these technologies. For example, Microsoft is rumoured to be OpenID enabling their solution, and several members of the Liberty Alliance (e.g. Sun) have been experimenting with OpenID as well. They are not mutually exclusive technologies.

It gets better though. Many OpenID providers are employing really advanced anti phishing technologies. Currently, you and your cryptographically weak password are just sitting ducks for Russian/Nigerian/whatever scammers. Even if you think your password is good, it probably isn't. OpenID doesn't specify how to authenticate, so OpenID providers are competing on usability and anti phishing features. For example, Verisign and myopenid.com employ techniques that make them vastly more secure than most websites out there, including some where you make financial transactions. There has been a lot of criticism of OpenID, and this has been picked up by those who implement it.

So now on to OpenID 2.0. This version is quite important because it is the result of many companies discussing, over a very long time, what should be in it. In some respects there are a few regrettable compromises, and maybe not all of the spec is that good an idea (e.g. .name support). But generally it is a vast improvement over OpenID 1.1, which is what is in use currently and which is technically flawed in several ways that 2.0 fixes. The reason 2.0 is important is that many companies have been holding off on OpenID support until it was ready.

The hope/expectation is that those companies will start enabling OpenID logins for their sites over the next few months. The concern expressed in that readwriteweb.com overview is that this may not actually happen, and that in fact the OpenID hype seems past its glory already. Looking at how few sites I can actually sign into with my OpenID today, I'd have to agree. As of yet, no major website has adopted OpenID. Sure, there are plenty of identity providers that support OpenID, but very few relying parties that accept identities from those providers. Most of the OpenID sites out there are simple blogs, startups, web 2.0 type stuff, etc. The problem seems to be that everybody is waiting for everybody else, and also that everybody is afraid of giving up control over their little clusters of users.

So, ironically, even though there are many millions of OpenIDs out there, most of their owners don't use them (or are even aware of having one). Pretty soon OpenID will be the authentication system with the most users on this planet (if it isn't already), and people don't even know about it. Even the largest web sites have no more than something like a hundred million users (which is a lot), and several of those sites are already OpenID identity providers (e.g. AOL).

The reason I hope OpenID does get some adoption is that if it doesn't, it will take a very long time for something similar to emerge, which means the current, very undesirable situation will be prolonged. In my view a vast improvement over the current situation is needed, and besides OpenID there seems to be very little in terms of solutions that can realistically be used today.

The reason I am posting this is that over the past few months my colleagues and I have been struggling with how to do security in decentralized smart spaces. If you check my publications site, you will see several recent workshop papers that provide a high level overview of what we are building. Most of these papers are pretty quiet on security so far, even though security and privacy are obviously huge concerns in a world where user devices use each other's services and mash them up with commercial services on both the local network and the internet. Well, the solution we are applying in our research platform is a mix of OpenID, OAuth and some rather cool add-ons to those that we have invented. Unfortunately I can't detail too much about our solutions yet, except that I am very excited about them. Over the next year, we should be able to push more information out into the public.

Experiences with realizing Smart Space Web Service Applications

I had a nice article accepted at the upcoming 1st IEEE International Peer-to-Peer for Handheld Devices Workshop at the CCNC ’08 conference in Las Vegas.

Jilles van Gurp, Christian Prehofer, Cristiano di Flora, Experiences with realizing Smart Space Web Service Applications

This paper presents our approach for building an internet based middleware platform for smart spaces, as well as a number of services and applications that we have developed on top of it. We outline the architecture of the smart space middleware and discuss several applications and services that we have so far realized with it. The presented material highlights key concepts in our middleware vision: services are HTTP based and RESTful; applications are accessed through a browser so that they are available on a wide variety of devices; and we demonstrate the concept of bridging non internet enabled smart space devices to our IP and HTTP centric smart space network.
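To illustrate the bridging concept mentioned in the abstract (this is not the code from the paper), a small gateway can expose a non IP device as a plain HTTP resource. Everything below, in particular the BluetoothSwitch stand-in, is invented for illustration:

    # Hypothetical sketch of bridging a non internet enabled device to HTTP.
    # BluetoothSwitch and its methods are invented; the point is that any
    # client with an HTTP stack can now talk to the device.
    from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler  # Python 2.x stdlib

    class BluetoothSwitch(object):
        """Stand-in for a driver talking to a non IP device."""
        state = 'off'
        def set(self, state):
            self.state = state  # a real driver would send a radio command here

    switch = BluetoothSwitch()

    class BridgeHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # GET returns the current device state as plain text.
            self.send_response(200)
            self.send_header('Content-Type', 'text/plain')
            self.end_headers()
            self.wfile.write(switch.state)

        def do_PUT(self):
            # PUT with body 'on' or 'off' changes the device state.
            length = int(self.headers['Content-Length'])
            switch.set(self.rfile.read(length).strip())
            self.send_response(204)
            self.end_headers()

    if __name__ == '__main__':
        HTTPServer(('', 8080), BridgeHandler).serve_forever()

Once such a bridge is running, any client with an HTTP stack, including a phone browser, can read and change the device state without knowing anything about the underlying radio protocol.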

I’ve uploaded the paper to my publications site.

Workshop paper

Together with my two colleagues Christian Prehofer and Cristiano di Flora, I wrote a nice workshop paper for the upcoming Second Workshop on Requirements and Solutions for Pervasive Software Infrastructures (RSPSI) at UbiComp 2007, Innsbruck, 16-19 September, 2007.

Towards the Web as a Platform for Ubiquitous Applications in Smart Spaces (pdf).

Abstract:

We introduce our web based middleware for smart spaces, which strongly relies on technologies used in Internet services. Following the key requirements and technologies, we present our architecture for ubiquitous applications in smart spaces. It exploits and leverages many of the key web technologies as well as "Web 2.0" collaborative and social Internet services, including browsers, web servers, development tools and content management systems. In this way, we aim to make many of the highly disruptive ubiquitous applications less disruptive from a technology point of view. Furthermore, we discuss a number of new challenges for applying these technologies in ubiquitous applications. These include the areas of discovery/delivery of services, security, content management, and networking.

The article is a nice milestone for our very young research group. An earlier position paper already outlined our vision for software development in smart spaces. This article builds on that vision and makes public a few details of the software we are building in the research group.

Towards Effective Smart Space Application Development: Impediments and Research Challenges

I submitted a nice position paper with two of my colleagues at Nokia to the CMPPC'07 (Common Models and Patterns for Pervasive Computing) workshop at Pervasive 2007 in Toronto next month.

Abstract:

State-of-the-art research and existing commercial off-the-shelf solutions provide several technologies and methods for building smart spaces. However, developing applications on top of such systems is quite a complex task due to several impediments and limitations of the available solutions. This paper provides an overview of these impediments and outlines the main research challenges that still need to be solved in order to enable effective development of applications and systems that fully exploit the capabilities of state-of-the-art technologies and methodologies. The paper also outlines a few specific issues and impediments that we, at the Nokia Research Center, have faced in this field so far. It also sheds some light on how we are going to tackle some of the mentioned issues in the future.

Full details are on my publications site, and you can download the pdf from there as well.