Updated my publications page …

It’s been a while, but now that the last two big journal articles I had in the pipeline for ages are finally in print, I have updated my publications page with proper references and PDFs.

For those who got to know me more recently (or don’t know me at all): I have been moving back and forth between academic and non-academic jobs for about twelve years. For the last two years I’ve been employed in a strictly non-academic role, which I enjoy very much. Before that I was working (and publishing) at the Nokia Research Center. In that job, I squeezed out several not-so-important workshop articles and book chapters as well as two big journal articles (which in the academic world is all that counts). Those articles took a long time to write and even longer to get reviewed, edited, re-reviewed, re-edited, accepted, re-edited, approved, edited by a professional editor, approved, pre-published, and finally printed.

Practical Web-Based Smart Spaces
Abstract:

Mobile devices are evolving into hubs of content and context information. Many research projects have shown the potential for new pervasive computing applications. This article shows how Web and resource-based smart spaces can support pervasive applications in a wide variety of devices. A framework that employs a resource-based HTTP style for pervasive services called Representational State Transfer (REST) can enable easy mashup of applications. This framework has several important features. First, a flexible access control mechanism on top of the OpenID and OAuth protocols provides security and access control in heterogeneous, dynamic environments. Second, a search engine can collaborate with existing service and network discovery mechanisms to find resources on the basis of their indoor location. Finally, an emerging W3C standard, Delivery Context: Client Interfaces (DCCI), facilitates sharing information within a device in an interoperable fashion.

We decided to write the first article in 2008 as a way to promote the fine research we had been doing at Nokia Research. You can’t get better marketing for your research than an article in a high-profile magazine with wide distribution in the research community. The magazine we selected for this was Pervasive Computing, and I’m very proud and happy we got our article in. Since it is a magazine and not a journal, the article is comparatively limited in size, which posed some interesting challenges regarding what to keep in and what to omit.

Comparing Practices for Reuse in Integration-oriented Software Product Lines and Large Open Source Software Projects
Abstract:

This paper compares organization and practices for software reuse in integration- oriented software product lines and open source software projects. The main observation is that both approaches are successful regarding large variability and reuse, but differ widely in practices and organization. To capture practices in large open source projects, we describe an open compositional model, which reflects their more decentralized organization of software development. We capture key practices and organizational forms for this and validate these by comparing four case studies to this model. Two of these studies are based on published software product line case studies, for the other two we analyze the practices in two large and successful open source projects based on their published developer documentation. Our analysis highlights key differences between the practices in the two open source organizations and the more integrational practices used in the other two cases. Finally, we discuss which practices are successful in which environment and how current practices can move towards more open, widely scoped and distributed software development.

I started writing the second article back in 2006. The first drafts were not very satisfying and we put it on ice for quite some time, until we decided in 2008 to finally get it published, which meant a more or less full rewrite of what we had until then. From there it took until September 2010 to get it printed. Most of those two-plus years were spent waiting, with very occasional major revisions of the article in response to reviews and editor comments.

This last article (for now) has some continuity with my earlier work on software variability, software product lines, and software design erosion, which I covered in my Ph.D. thesis in 2003 (and several related publications). We present a model for large-scale software development that we reconstructed from observing “how it’s done” in several case studies published by others and in the open source community, from our own experience studying various systems and companies, and from getting our own hands dirty with actual software engineering. Two years of subsequently practicing real software development at Nokia have only strengthened my belief in the vision presented in the paper: the only proper way to scale software development to a software ecosystem (i.e. a thriving community of many developers across many organizations working with and on the software) is to decentralize management of the development process. If you are interested in this and want to read more, co-author Jan Bosch, my former Ph.D. supervisor who now works at Intuit, has been publicizing this view as well in his frequent talks and keynotes at various conferences. His website is dedicated to this topic.

Now with both articles out of the way, the question is “what’s next?” The answer, for now, is nothing, because I haven’t started writing any new articles in the last two years. And frankly, I’m not likely to start one soon, since I lack the time and I’m appalled by the snail’s pace at which the publication process progresses. Both articles turned out very nice but would have had much greater impact if we had been able to get them written and published in the same year instead of nearly three to five years apart. Sadly, this is the reality of academic life. You write your stuff, you move on, and some day your stuff actually gets printed on dead trees. There are more reasons, which I won’t rant about here, but this is a big factor in my lack of interest in publishing more, for now.

wolframalpha

A few years ago, a good friend gave me a nice little present: five kilos of dead tree in the form of Stephen Wolfram’s “A New Kind of Science”. I never read it cover to cover and merely scanned a few pages with lots of pretty pictures before deciding that this wasn’t really my cup of tea. I also read some of the criticism of the book from the scientific community. I’m way out of my league there, so no comments from me except a few observations:

  • The presentation of the book is rather pompous and arrogant. The author tries to convince readers that they have the most important piece of science ever produced in their hands.
  • This is what set off most of the criticism: apparently the author fails both to credit related work and to properly back up some of his crucial claims with evidence.
  • These insufficiently substantiated claims affect the credibility of the book and of the author’s claims overall.
  • The author took an ivory-tower approach to writing the book: he quite literally dedicated more than a decade of his life to it, during which he did not seek out much criticism from his peers.
  • So, the book is controversial and may turn out to be either the new theory of relativity (relatively speaking) or a genuine dud. I’m out of my league deciding either way.

Anyway, the same Stephen Wolfram has for years been providing the #1 mathematical software IDE: Mathematica, one of the most popular software tools for anyone involved with mathematics. I’m not a mathematician and haven’t touched such tools in over ten years (I dabbled a bit with linear algebra in college), but as far as I know his company and product have a pretty solid reputation.

Now the same person has brought the approach he applied to his book, and his solid reputation as the owner of Mathematica, to the wonderful world of Web 2.0. That is something I know a thing or two about. Given the above, I was initially quite sceptical when the first, pretty wild rumors around wolframalpha started circulating. However, some hands-on experience has just changed my mind. So here’s my verdict:

This stuff is great & revolutionary!

No, it’s not Google. It’s not Wikipedia either, nor the Semantic Web. Instead it’s a knowledge reasoning engine hooked up to authoritative data sets. So it’s not crawling the web, it’s not user-editable, and it doesn’t rely on traditional Semantic Web standards from e.g. the W3C (though it very likely uses similar technology).

This is the breakthrough that was needed. The Semantic Web community seems stuck in an endless loop, pondering pointless standards, query formats and graph representations, and generally rehashing computer science topics that have been studied for 40 years now, without producing many viable business models or products. Wikipedia is nice but very chaotic and unstructured as well. The marriage of the Semantic Web and Wikipedia is obvious, has been tried countless times, and has so far not produced interesting results. Google is very good at searching through the chaos that is the current web but can be absolutely unhelpful with simple, fact-based questions. Most fact-based questions in Google return a Wikipedia article as one of the links. Useful, but it doesn’t directly answer the question.

This is exactly the gap that wolframalpha fills. There are many scientists and startups with the same ambition, but Wolframalpha.com got to market first with a usable product that can answer a broad range of factual questions using knowledge imported into its system from trustworthy sources. It works beautifully for the facts and knowledge it has and allows users to do two things:

  • Find answers to pretty detailed queries from trustworthy sources. Neither Wikipedia nor Google can do this; at best they can point you at a source that has the answer and leave it up to you to judge the trustworthiness of that source.
  • Fact surfing! Just as surfing from one topic to the next on Wikipedia is a fun activity, I predict that drilling down into facts on wolframalpha will be equally fun and useful.

So what’s next? Obviously, wolframalpha.com will have competition. However, its core asset seems to be its reasoning engine combined with a quite huge fact database that is to date unrivaled. Improvements in both areas will solidify its position as market leader. I predict that several owners of large bodies of authoritative information will be itching to be a part of this, and partnership deals will be announced. Wolframalpha could easily evolve into a crucial tool for knowledge workers. So crucial, even, that they might be willing to pay for access to certain information.

Some more predictions:

  • Several other startups will soon enter with competing products. There must be dozens of companies working on similar or related products. Maybe all they needed was somebody taking the first step.
  • Google likely has people working on such technologies; it will either launch or buy products in this space in the next two years.
  • Google’s main competitors are Yahoo and MS, who have both been investing heavily in search technology and experience. They too will want a piece of this market.
  • With so much money floating around in this market, wolframalpha and similar companies should have no shortage of venture capital, despite the current crisis. Also, wolframalpha might end up being bought up by Google or MS.
  • If not bought up or outcompeted (both of which I consider likely), wolframalpha will be the next Google.

NRC Reorganization

My employer, Nokia, announced this week that it is reorganizing Nokia Research Center. The why and how of this operation is explained in the press release.

I learned this on Tuesday along with all my colleagues and have since been finding out how it will affect me. Of course I can’t comment on any organizational details, but it is very likely that I’ll start 2009 in a new job somewhere within Nokia, since it looks like the research topic I have been working on for the past two years is out of scope of the new Helsinki Lab research mission. While I’m of course unhappy about how that decision affects me, I accept and respect it. Short term, I am confident that I will be allowed to finish the ongoing research activity, since it has so far been highly successful within Nokia and we are quite close to going public with the trial of the system I demoed on Youtube a few weeks ago. I’m very motivated to do this because I’ve put a lot of time into it and want to see it succeed and get a lot of nice press attention.

However, a topic that is of course on my mind is what I will be doing after that, and where. In short, I’m currently looking at several very interesting open positions in Nokia. Since I’m doing that anyway, I’ve decided to broaden my search and look at all available options, including those outside Nokia. I will pick the best offer I get. Don’t get me wrong: I think Nokia is a great employer, and I am aware my skills are in strong demand inside Nokia. So, if Nokia makes me a good offer, I will likely accept it. But of course, the world is bigger than Finland, where I have now spent three years, and I am in no way geographically constrained (i.e. I am willing to move internationally).

So, I’ve updated my CV and am available to discuss any suitable offer.

Since this has happened in the past: please don’t contact me about Symbian or J2ME programming type jobs. I’m not interested in either; I’m a server guy.

Local Interaction demo on Youtube

I gave a demo of our system at a press event for Nokia a few weeks ago. Our PR people were busy filming and have put several short demo movies on Youtube, including my demo and several other cool demos from colleagues at Nokia Research.

So enjoy.

Since I expect there will be people interested in learning more about this, I’ll give some explanation right here.

In the Youtube movie above, I am showing off our Local Interaction demo, a mobile website that showcases our indoor location based service platform. For the positioning we collaborated with a different team that has been working on an indoor positioning system, which was demoed separately at the same event. Our demo leverages their technology and provides services on top.

Indoor location based services are similar to outdoor location based services in the sense that search, navigation, social networking, and media sharing can all benefit from knowing where you are. In a nutshell, we have indoor-location-enhanced variants of these features integrated into our platform. However, indoor location based services differ in that they are much more relevant to people: people spend most of their lives indoors!

Having a platform is nice of course, but working code with real users is nicer. So we have spent most of this year preparing a trial in a real shopping mall here in Helsinki. The website you see has a nicely polished UI, rich indoor content for the shopping mall, and a set of useful services around the mall concept aimed at both consumers and proprietors of the mall, such as shop owners. The advantage of working with a real place is that it forces you to be very pragmatic about a lot of issues.

Doing a demo like ours is easy if you can assume everybody goes to the same building, uses the same phone with just the right firmware version, and runs your handcrafted application. A substantial part of the ubiquitous and pervasive computing research community is perfectly happy with that sort of demo setup and proof of concept. This is why, despite decades’ worth of demos, there’s no significant technology available to consumers beyond the boring old home automation kits and that sort of thing. It takes a bit more to make a real-life impact.

A key motivation for our demo is that we don’t want to make such assumptions. Our requirements are a capable mobile web browser (most modern Nokia phones and many phones from other vendors) and, optionally, the indoor positioning client software. Like outdoor location based services, our services are perfectly usable without positioning, so we don’t actually require it. Being web based means we can reach many more users than we could with a native application.

Everything we do is guided by the vision that eventually we want to do this not just in one shopping mall but worldwide, in a lot of public places. The primary goal of our trial is to gain experience rolling out this kind of technology and to learn more about all the practical and technical obstacles to making this work at a more interesting scale in the future. We want to show that our platform is scalable both technically and as a business. We want to kick off a whole new market for indoor location based services. It doesn’t exist today, so we have to build the whole ecosystem from scratch.

For the more technical people: our web platform is based on Python Django and integrates the positioning and other services using REST-based services over HTTP. The friends feature we demo is realized using the Facebook API. We rely on AtomPub and Atom feeds internally. We intend to be mashup friendly so as not to have to reinvent every feature ourselves, and instead integrate with many existing services. We have an Apache Lucene based search server that powers our indoor location based search feature. We use this feature quite heavily to look for indoor-location-tagged content such as photos, ads, vouchers, and comments. Finally, we use an off-the-shelf open source map server that serves up the indoor maps. In general, our philosophy is that there are already enough poorly reinvented wheels; we build what we need only if we can’t reuse what is out there. The web is out there, and we use it.
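
To give a flavor of what the indoor search feature does, here is a hypothetical sketch (not our actual Lucene-backed implementation; all names, fields and data are made up for illustration) of the kind of location-filtered lookup it performs over location-tagged content:

```python
# Hypothetical sketch of an indoor-location-tagged content search.
# In the real system this is backed by Apache Lucene; here a plain
# in-memory filter stands in for the index.

from dataclasses import dataclass

@dataclass
class ContentItem:
    kind: str    # e.g. "photo", "ad", "voucher", "comment"
    venue: str   # e.g. a shopping mall identifier
    floor: int
    text: str

def search(items, venue, floor=None, keyword=None):
    """Return items tagged with the given indoor location, optionally
    narrowed down by floor and by a keyword in the item text."""
    results = []
    for item in items:
        if item.venue != venue:
            continue
        if floor is not None and item.floor != floor:
            continue
        if keyword is not None and keyword.lower() not in item.text.lower():
            continue
        results.append(item)
    return results

items = [
    ContentItem("ad", "mall-helsinki", 1, "Coffee discount at the cafe"),
    ContentItem("photo", "mall-helsinki", 2, "View from the balcony"),
    ContentItem("voucher", "other-mall", 1, "Coffee voucher"),
]

hits = search(items, "mall-helsinki", floor=1, keyword="coffee")
```

In the real platform the equivalent query arrives as an HTTP GET with location parameters and the results come back as a feed, but the filtering idea is the same.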

For the researchers: a few articles that should be published in the next few months will outline the research behind this. Meanwhile, you can refer to my publications page for a few workshop papers on a predecessor of this demo that we built in 2007. We also demoed that 2007 system at the Internet of Things conference last April. And of course, there will be more details on the trial once we launch it.

About the project: this demo was developed as part of a Nokia project on an “Application Environment for Smart Spaces”, currently running in the Smart Space Lab, which is part of Nokia Research Center. Headcount has varied quite a bit, but currently around eight people work full time on this. My role in the project is pushing architecture solutions and coordinating development together with a small group of researchers and several excellent software engineers.

The Way We Live Next

I stumbled upon somebody writing about Nokia’s 2007 Way We Live Next event in Oulu. This event was intended to give the outside world a view of what is going on in Nokia Research Center.

Nice quote:

Lots of interesting stuff was shown off during the course of the two days and the most interesting I came across was the indoor positioning concept. Using WiFi and specially created maps, the devices we were issued with were running the software which enabled you to move through the NRC building and pinpoint exactly where you were. So, if the next presentation was in room 101, the device would simply, and quickly show you the way. It instantly made me think of the frustration of trying to get where I want in huge shopping centres – and I figured this had to be the perfect solution.

Next week, the 2008 edition of WWLN takes place in Espoo, and I will be giving a demo there of our indoor location based service platform, customized for a real shopping mall. We demoed last year’s version of our software platform at the Internet of Things Conference last April. At the time, our new platform had already been under development for several months, and we are now getting ready to start trialing it. The WWLN event next week will be the first time we show it in public, and hopefully we’ll get some nice attention from the press.

PS. I like (good) beer …

Paper on Sensor Actuator Kit evaluation

One of our interns, Filip Suba, presented a paper earlier this week in Turku at the Symposium on Applications & the Internet. He is one of my former (as of today) master’s thesis students and has been working in our team at NRC for the past few months. The paper he presented is actually based on work he did a few months before that, when he was with us as a summer trainee (the previous summer, that is). His master’s thesis work (on a nice new security protocol) is likely to result in another paper, when we find the time to write it.

Anyway, we hired him last year to learn a bit more about the state of the art in sensor actuator development kits, with a particular focus on building applications and services that are kit independent and expose their internals via a web API. This proved quite hard, and we wrote a nice paper on our experiences.
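
The kit-independence idea can be sketched roughly as follows (a hypothetical illustration, not the code from the paper; the driver names and the JSON shape are made up): wrap each vendor kit behind one minimal driver interface, and let a thin web layer serve readings in a neutral format so clients never see kit-specific details.

```python
# Sketch: one kit-independent driver interface plus a web-style
# handler that serves readings as JSON. A real system would bind
# vendor SDKs behind SensorDriver; here a fake driver stands in.

import json
from abc import ABC, abstractmethod

class SensorDriver(ABC):
    """Minimal kit-independent driver interface."""

    @abstractmethod
    def read(self) -> dict:
        """Return one reading as plain key/value data."""

class FakeTemperatureDriver(SensorDriver):
    # Stand-in for a vendor-specific kit binding.
    def read(self) -> dict:
        return {"sensor": "temperature", "unit": "C", "value": 21.5}

def handle_get_reading(driver: SensorDriver) -> str:
    # What a hypothetical GET /sensors/<id>/reading would return.
    return json.dumps(driver.read())

body = handle_get_reading(FakeTemperatureDriver())
```

The point of the design is that adding a new kit means writing one more driver class; everything above the driver interface, including the web API, stays untouched.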

In short, wireless sensor actuator kits are a mess. Above the radio level very little is standardized, and the little there is is rarely interoperable. Innovation here is very rapid, though, and the future looks bright. I watched this nice video about Sun SPOT earlier this week, for example:

The notion of making the writing of sensor node software as simple as creating a J2ME MIDlet is, in my view, quite an improvement over hacking C with some buggy toolkit, cross-compiling to obscure hardware, and hoping it works.

Photos Zurich and Dagstuhl

I’m traveling a lot lately. Two weeks ago I was in Zurich at the first Internet of Things Conference. I uploaded some pictures already last week and some more today.

Last week I also attended a Dagstuhl seminar on combining the advantages of product lines and open source, where I presented the position paper I posted some time ago. Naturally, I also took some pictures there.

Interestingly, one of the participants was Daniel German, who does a lot of interesting things, including publishing good articles on software evolution and working on a SourceForge project called panotools that happens to power most of what makes Hugin cool. Hugin is of course the tool I have been using for some time now to stitch photos into very nice panoramas. I felt envious and lucky at the same time watching him take photos: envious of his nice Canon 40D with very cool fisheye lens, and lucky because his photo bag was huge and probably quite heavy, considering he had two more lenses in there.

Attendees of the Dagstuhl Seminar

The whole gang together. Daniel is the guy in the orange shirt.

One of the best features of Dagstuhl: 1 beer = €1. Not quite free beer, but close enough. And after all, OSS is about free speech, and cheap beer definitely loosens the tongues.

From SPLs to Open, Compositional Platforms

Below is the position paper I submitted to the upcoming Dagstuhl seminar I am attending. It’s not peer reviewed, and it is not clear at this point whether there will be any proceedings. So, as an experiment, I will just put the full text in a blog post as well as in the pdf you can find on my publications page. I am doing this for two reasons: I want people to read what I write, and locking it up in some hard-to-find proceedings just isn’t doing the trick. Secondly, this blog has a comment feature. Please feel free to use it.


From SPLs to Open, Compositional Platforms

Jilles van Gurp & Christian Prehofer
Smart Space Lab
Nokia Research Center
Helsinki, Finland

Abstract. In this position paper we reflect on how software development in large organizations such as ours is slowly changing from being top down managed, as is common in SPL organizations, towards something that increasingly resembles what is happening in large open source organizations. Additionally, we highlight what this means in terms of organization and tooling.

Trends and Issues

Over the past decade of our involvement with Software Product Lines, we have seen the research field grow and prosper. By now, many companies have adopted SPL approaches for their core software development. For example, our own company, Nokia, features prominently in the SEI’s Product Line Hall of Fame [SEI 2006]. Recently, we [Prehofer et al. 2007] and others [Ommering 2004] have published articles on the notion of compositional development, which decentralizes the development of software platforms and products. The motivation for our work in this area is that we have observed the following trends affecting software development:

  • Widening platform scope and more diverse products. As “victims” of their own success, successful product lines allow for the creation of an ever wider range of products. Necessarily, these products have increasingly less in common with each other. In particular, they are likely to have substantial product-specific requirements and to require increasing amounts of variability in the platform-provided features to deal with conflicting and overlapping requirements in the base platform. In other words, the percentage of functionality shared across all products relative to the total amount of functionality in the platform is decreasing. At the same time, the percentage of platform assets actually used in any particular product is also decreasing.
  • Platforms stretch over multiple organizations. As platform and product development starts to span multiple organizational entities (companies, business units, open source projects, etc), more openness towards different and conflicting requirements, features, roadmaps and processes in different development entities is required. This concerns both open source software and commercial platforms that are developed and productized differently by third party companies.
  • Time to market and innovation speed. While time to market has always been a critical issue, it is particularly an issue with the growing size and complexity of Software Product Lines. In general, large-scale software projects tend to have longer development cycles. In the case of Software Product Lines that have to cater for increasingly heterogeneous products, development cycles tend to lengthen as the work of defining, realizing and testing new functionality grows increasingly complex. However, time to market of features includes not only the product line development cycle but also the time needed for product derivation, as well as the development cycles of any external software the Software Product Line integrates. In the worst case, a feature first needs to be integrated into one of these dependencies; then it needs to be integrated into the next major release of the Software Product Line before, finally, a software product with the new feature can be developed and put on the market.

We are seeing examples of this in Nokia as well. For example, Nokia has software development spread over several major phone platforms (S30, S40, S60 and Linux Maemo) and launches multiple products from each of those platforms every year. It is interesting to note that Nokia has never really retired a mobile phone software platform and is actively using all of them. Roughly speaking, the evolution of S40 is in sync with the popularization of the notion of Software Product Lines since the mid-nineties, and it is indeed this product line that is featured in the aforementioned SEI SPL Hall of Fame [SEI 2006]. Development for products and platforms is spread over many Nokia locations all over the globe, as well as a complex network of subcontractors, customers and supplying companies. Additionally, the use of open source software and the intensive collaboration Nokia has with many of the associated projects add more complexity here. Finally, time to market is of course very important in the mobile phone market. Products tend to be on the market for only a short time (e.g. 6–12 months), while developing them from a stable software platform can take more than a year in some cases. This excludes the time needed for major new releases of our software platform. Consequently, disruptive new features in the platform may take years to reach the market in the form of new phones.

The way large organizations such as Nokia manage and organize their software and platform development is constantly pushing the limits of what is possible with software engineering and architecting tools and methodology. Nokia is one of a handful of companies worldwide that manage tens of millions of lines of code across their product lines and products. We see Software Product Lines as a way of developing software that has arguably been very successful in organizations like ours. However, we also note that development practice increasingly deviates from the practices prescribed by the literature on Software Product Lines, particularly with respect to centralized definition, control, ownership and management of software assets and products. Therefore, we argue that the research community now needs to adapt to this new reality as well.

The complexity and scale of the development organization increasingly make attempts to centrally manage it futile and counterproductive. Conflicts of interest between stakeholders, bureaucracy, politics, etc. all affect centralized platform and product decision making and can lead to unworkable compromises or delays in the software development process. Additionally, it is simply becoming impossible to develop software without depending on at least some key open source projects. Increasingly, the industry is also participating as an active contributor in the open source community. Arguably, most of the open source community now consists of software developers sponsored in some way by for-profit organizations. For example, Nokia is a very active participant in the mobile Linux community (the Maemo Linux platform) and ships products such as the N810 internet tablet, where the majority of the lines of code actually comes from externally owned and run open source projects and even direct competitors (e.g. Intel and Motorola).

This changes the game of balancing product and platform requirements, needs and interests substantially from what is generally assumed in a classical SPL context, where a single company develops both platform and products in house and where it is possible to drive both product and platform development in a top-down fashion. That simply does not work in a context where substantial amounts of critical software in a product come from external sources that are unwilling or unlikely to take orders from internal product managers or other executives external to their organization.

Effectively, this new reality necessitates a different approach to software development. Rather than driving a top-down decomposition of products and features and managing development and software assets according to this hierarchy, as is very much the consequence of implementing the practices advertised in the SPL literature, we propose to adopt a more compositional style of development.

Compositional Development

In our earlier work [Prehofer et al. 2007], we outlined a more compositional approach to development. Rob van Ommering has argued along similar lines but still takes the traditional perspective of a (large) company managing a population of products [Ommering 2002][Ommering 2004]. What we propose here, however, is to decentralize development further and organize it similarly to the open source community, where many independent development teams of component, framework and product owners work together. Each of those teams acts to represent its own interests (and presumably those of whomever it works for). Its perspective on the external world is simply one of upstream and downstream dependencies. Downstream are the major users and customers of the software the team produces; these stakeholders act as the primary source of requirements and probably also funding. Upstream are the teams that produce software required for using and developing the team’s own software. These teams in turn depend on their own downstream dependencies and funding.

This decentralized perspective is very different from the centralized one and essentially allows each team to optimize for what is required of it downstream and what is available to it upstream. For example, requirements for each team come primarily from its downstream dependencies. Since there is no central controlling entity that dictates requirements, picking up these requirements and prioritizing them is very much the task of the teams themselves. Of course, they need to do so in cooperation with their downstream dependencies. Generally, especially when crossing organizational boundaries, requirements are not dictated; rather, the development teams try to assess the needs of their most important customers.
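The upstream/downstream perspective can be made concrete with a minimal sketch; the team names and the `request` operation here are purely illustrative, not part of any actual tooling:

```python
from dataclasses import dataclass, field

@dataclass
class Team:
    """A development team in a decentralized ecosystem (illustrative model)."""
    name: str
    upstream: list = field(default_factory=list)     # teams whose software this team builds on
    requirements: list = field(default_factory=list) # filed by downstream stakeholders

    def request(self, requirement: str) -> None:
        """Downstream stakeholders file requirements; the team itself prioritizes them."""
        self.requirements.append(requirement)

# Hypothetical ecosystem: a product team builds on a framework team,
# which in turn builds on a component team.
component = Team("component")
framework = Team("framework", upstream=[component])
product = Team("product", upstream=[framework])

# Requirements flow upstream one hop at a time: no central entity dictates them,
# and each team decides what to pass on to its own upstream dependencies.
product.request("support indoor location search")
framework.request("expose a location API")  # the product team's derived need
```

The point of the sketch is that each team sees only its immediate neighbors: the product team never files requirements directly against the component team, just as a downstream company cannot dictate priorities to an external upstream supplier.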

Organization

As Conway’s Law [Conway 1968] predicts, the architectural decomposition of software is reflected in organizations. In many open source communities, project team dependencies mirror the architectural decomposition of the software into packages, frameworks, libraries, components, or other convenient units. Obviously, without at least some structure and management in place, the approach advocated here results in anarchy, which accomplishes nothing but chaos. Again, we look at the open source world, where organizations such as Ubuntu, Eclipse, Apache, and Mozilla are driving the development of thousands of projects. Each of these organizations has a surprisingly sophisticated organizational structure with rules, best practices, decision-making processes, etc. While there are no binding contracts enforcing these, participants in the community are required to play by the rules or risk being ignored.

In practice this means that participants voluntarily comply with practices and rules and take part in what is often called a meritocracy, where important decisions are taken by those who have earned the merit to do so. Generally, this requires a track record of important contributions and the trust of the community. For example, in the Eclipse Foundation, which was founded by IBM, individuals from some of IBM's major competitors, such as BEA and Red Hat, actually lead some of the key projects under the Eclipse umbrella. These individuals are essentially trusted by IBM to do the right thing even though they work for a major competitor. Organizations such as Eclipse exist to represent the common interests of the project teams they are composed of. For example, the Eclipse Foundation, which is very much a corporate-driven (and corporate-financed) institution, represents a broad consortium of stakeholders covering pretty much the entire spectrum of Java (and increasingly also non-Java) enterprise, desktop, and mobile/embedded development tooling. In the past two years, it has organized two major simultaneous releases of its major projects. In the latest of these, which goes by the name Europa, it managed to synchronize the release process of around 20 top-level projects, collectively developed by thousands of developers from dozens of companies. Many of these companies are competitors; for example, BEA and IBM compete directly in the enterprise market and are both major contributors to multiple Eclipse projects.

What this shows is that the way the Eclipse Foundation organizes development is extremely effective and scalable: it involves dozens of organizations and hundreds or thousands of individuals producing, integrating, and testing an enormous amount of new software in a very short time frame. Organizing like this brings the flexibility needed to work seamlessly with numerous internal and external teams, and it acknowledges the reality that even internally, relations between teams can be complex and difficult to manage centrally.

Tooling

A consequence of decentralizing is that aligning the use of tools across development teams becomes essential. When collaborating, it helps if the tools used by different teams are at least similar, and preferably compatible or the same. SPL research has over the past few years focused on tooling for variability management, configuration management, and requirements management. However, getting these tools adopted and used effectively in a context of thousands of collaborating software development teams is quite a challenge, especially since many of them are either developed in-house or used in only a handful of companies. Tooling in the open source community tends to focus on the essentials. That being said, the OSS community has also produced many development tools that are now used on a massive scale. For example, Mozilla has played a pioneering role through its contribution of important tools such as Bugzilla and Bonsai (bug tracking and build monitoring). The whole point of the Eclipse Foundation seems to be development tools. Additionally, its Equinox project implements a very advanced component framework that provides many interesting variability technologies and has brought into mainstream use notions such as API versioning and provided and required interfaces on components. In short, there seems to be a gradual migration of SPL-like tool features into mainstream tooling. Eclipse is, of course, also a popular platform for developing such tooling in the research community.
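Equinox is an implementation of the OSGi component model, in which the versioned dependencies and provided/required interfaces mentioned above are declared in each bundle's manifest. As a rough illustration (the bundle and package names here are hypothetical), such a manifest looks along these lines:

```text
Manifest-Version: 1.0
Bundle-SymbolicName: com.example.logging
Bundle-Version: 1.2.0
Export-Package: com.example.logging.api;version="1.2.0"
Import-Package: org.osgi.framework;version="[1.3,2.0)"
```

The framework resolves each Import-Package statement against compatible exported versions at runtime, so multiple versions of a component can coexist in one system, which is exactly the kind of variability mechanism SPL tooling has long aimed to provide.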

Conclusions and Future work

In this position paper we have tried to highlight a few of the key issues around the ongoing trend from integrational development towards a more open ecosystem, in which many stakeholders work on many pieces of software that some of the stakeholders integrate into products. We are currently working on an article about what it means to move from an integration-oriented software development practice to a compositional approach in terms of organizational models, practices, and other aspects. In that article, we will list a number of practices that we associate with compositional development and evaluate them against practices in open source communities as well as selected SPL case studies from the research community. Arguably, SPLs have vastly improved software development in many companies over the past decade or so. Therefore, the key issue for the next decade will be re-aligning with the identified trends towards larger software development ecosystems while preserving and expanding the benefits that SPL development has brought.

We do not see compositional development versus SPL development as a black-and-white choice, but rather as a wide spectrum of development practices, each of which may or may not be applied by individual companies. The more of them a company applies, the more compositional its development becomes. In any case, the right set of practices is of course highly dependent on context, domain, stakeholders, etc. However, we observe that in order to scale development, and in order to work effectively with hundreds or even thousands of globally and organizationally distributed software developers, it is necessary to let go of centralized control. Compositional development in this open environment is vastly more complex and organic and, so we believe, more cost-effective.

References

[Conway 1968] M. E. Conway, How do committees invent, Datamation, 14(4), pp. 28-31, 1968.
[Ommering 2002] R. van Ommering, Building product populations with software components, Proceedings of the 24th International Conference on Software Engineering (ICSE 2002), pp. 255-265, 2002.
[Ommering 2004] R. van Ommering, Building Product Populations with Software Components, Ph.D. thesis, University of Groningen, 2004.
[Prehofer et al. 2007] C. Prehofer, J. van Gurp, J. Bosch, Compositionality in Software Platforms, in A. De Lucia, F. Ferrucci, G. Tortora, M. Tucci (eds.), Emerging Methods, Technologies and Process Management in Software Engineering, Wiley, 2008.
[SEI 2006] Software Engineering Institute, Product Line Hall of Fame, http://www.sei.cmu.edu/productlines/plp_hof.html, 2006.

New and updated publications

As you saw in yesterday’s post, my publication site has moved to this blog. I also took the opportunity to update the page with recent work:

You can download the pdfs and find the full references here: publications.