Nokia Android Phone

It appears hell is freezing over: there are now strong rumors that, on the eve of completing the deal with Microsoft, Nokia is going to push out an Android phone.

I've been more than a bit puzzled about this apparent move for a few weeks, but I think I've figured out a possible universe in which it actually makes sense. Disclaimer: I've been outside Nokia for quite some time now and don't have any information that I shouldn't be sharing. I'm just speculating.

A few days ago Ars Technica published an article arguing that Nokia should not fork Android, which is what it appears to be doing. One of the big arguments against this was that the same approach isn't working that well for Amazon. Amazon has not licensed Google Play Services, which is basically what you need to license to get access to the Play Store, Chrome, Google Maps, and all the rest of the Google circus. So while Amazon's Android-based Kindles are perfectly nice tablets to use, most Android apps are not available for them because of compatibility issues and because most app developers don't look beyond the Google store. BlackBerry has exactly the same problem (insofar as they still have any ambitions in this respect).

Companies like HTC and Samsung have signed licensing deals with Google, which means they have to ship whatever Google tells them to ship. In fact, software updates for anything related to Play Services completely bypass whatever firmware these companies ship and instead arrive over the air continuously. This is Google's fix for the problem that these companies are normally hopelessly behind with updates. I recently played with a Samsung device and most of their added-value software is dubious at best. Most of it is outright crap and most tech-savvy users prefer stock Android. I know I like my Nexus 5 a lot better, at least. Samsung is a hardware manufacturer without a solid software play.

Amazon doesn't want to be in that position: for them the software and hardware business is just a means to an end, namely selling Amazon content. They compete with Google on this front and for this reason a deal between the two is unlikely.

So, I was thinking: exactly. It doesn’t make sense for Amazon to be doing this alone. Amazon needs a partner. What if that partner was Nokia + Microsoft? That would change the game substantially.

Amazon has already done a lot of work trying to provide an implementation of Google's proprietary APIs. Amazon is already a licensee of Nokia Maps, and together they could knock up an ecosystem that is big enough to convince application developers that it's worth porting over to their app store. Microsoft and Nokia need to compete with Android not on the notion that theirs is a better platform (because arguably it is not) but primarily on the notion that its app store is filled with third-party goodies. It's the one thing that comes up in every review of a Windows Phone, BlackBerry (throwing them in for good measure), or Amazon device. Amazon + Nokia + Microsoft could fix this together. If you fix it for (very) low end phones, you can shove tens of millions of devices into the market in a very short time. That creates a whole new reality.

It seems that is exactly what Nokia is doing (if the rumors and screenshots are right): a low end Android phone with a Windows Phone-like shell and without any of the Google services. One step up from this would be open sourcing an API layer like the one Amazon has built for compatibility with Google's proprietary Play Services, but plugged into competing services from Nokia, Microsoft, and Amazon. That would also be portable to other platforms. Platforms like, for example, Windows Phone, which has had some app store related challenges of its own. Microsoft actually has a lot of code that already makes a lot of sense on Android. For example, Mono runs C# and other .NET code just fine on Android. With a bit of work, a lot of code could be ported over quite easily.

Also, Microsoft and Nokia currently have a lot of Android manufacturers as paying customers. All those manufacturers are currently getting in return is a license for the patents they are infringing on. And don't forget that a lot of Android manufacturers are not necessarily happy with the power grab Google has been executing with Android. Play Services is a classic bait and switch: Google lured licensees in with open source, which is now slowly being replaced with proprietary Google code. That's why Samsung is making a big push with Tizen in the low end market this year. And it is also why people are eyeing Ubuntu, Firefox OS, and Sailfish as alternatives to Google.

In short, I’d be very surprised if Nokia was doing this by itself just before it sells the whole phone division. It doesn’t make sense. So, Microsoft has to be in it. And the only way that makes sense is if they want to take this all the way.

Will it work? I don't know. I've seen both Microsoft and Nokia shoot themselves in their collective feet more than enough over the past few years. Both companies have done some amazingly stupid things. There is plenty of room for them to mess this up and they don't have history on their side at this point. But it could work if they get their act together.

What Apple Knows That Facebook Doesn’t


BusinessWeek has an interesting article on the economics of platforms. Interesting, but flawed. It compares two platforms (Facebook and Apple's mobile platform). The argument goes roughly as follows: Apple is using its platform to create a new market by being open, while Facebook is using the traditional approach of treating the market as a control point. Apple is creating an open market and Facebook is making an open market more closed. The author even goes as far as to attach the labels good and evil here.

The article is flawed because Apple is not, in fact, creating an open market. They have been removing applications that don't fit their business model (e.g. anything VOIP related) and are still keeping people from writing about the APIs because the NDA has not been lifted yet. Apple is acting as a dictator here. That it is a mostly benevolent one doesn't matter. It doesn't sound very open to me in any case. Or very new.

Sure, their platform is pretty nice and their online shop pretty usable. That's definitely disruptive to the mobile industry, which is not used to good quality platforms and well designed use cases such as online shops for applications. However, there's a pretty big market for mobile applications, and most people writing for the iPhone don't do so exclusively; they target multiple mobile platforms. You can download several VOIP applications for S60, Windows Mobile and other platforms, as well as numerous games, productivity apps, etc. Then there is J2ME of course, with a few billion phones in the market right now. You might say it is crappy, but it has a huge reach. Incidentally, Apple also blocks components from their shop that would enable people to run J2ME applications, even though an open source Java platform was in fact ported long before Apple even 'opened' up their platform. That's right, a good old case of reverse engineering. Apple's platform is quite unique in the sense that people were developing for it long before Apple decided to hand out developer kits.

Facebook is indeed also not very open, but they were first to a market that they created, which is pretty big by now. As a viral way of spreading new services to users it is pretty much unrivaled so far. It is Google that has created some competition on openness with their OpenSocial platform, which is similar in many ways but has open specifications and may be implemented freely by other social networks. Both Google and Facebook have a very similar centralized identity model that is designed to lock users into their respective platforms (Google Friend Connect & Facebook Connect). Google is maybe being somewhat smarter about it, but they are after the same thing here: making sure traffic flows through their services so that they can sell ads.

So, Facebook's model is advertisement driven and Apple's business is operator driven. Apple makes most of their money from deals with operators who subsidize iPhones and give Apple a share of the subscription revenue. That's brilliant business, and Apple protects it by removing from their shop any application that conflicts with this revenue stream.

However, the key point of the article, that the platform serves as a market creation tool, is interesting. Apple has managed to create an impressive amount of revenue (relative to their tiny share of the overall mobile market) and Facebook has managed to create a huge market for Facebook applications. Both are being challenged by competitors who have no choice but to be more open.

Interestingly, Google is competing on both fronts and can be seen as the primary threat to both Apple's and Facebook's platforms. Google could end up opening up the mobile market for real because it is not protecting any financial interests there but is instead trying to spawn a mobile internet market. Android is designed from the ground up to do just that. It needs to be good enough for developers, users and operators, and Google has worked hard to balance these interests so as not to alienate any of those groups.

All three are fighting for the favours of developers. Developers, developers, developers! (throws chair across the room & jumps like a monkey). That too is not new although Microsoft seems to have forgotten about them lately.

X-Plane 9 review

Last weekend I ordered X-Plane version 9. I bought version 8 in early 2006 and haven't looked back since. Sure, MS Flight Simulator looks great but the flying sucks. Laminar consistently delivers new features and bug fixes. Version 8 got its last major update (8.64) about half a year ago, and since then they have been beta testing version 9. While I could have bought it earlier, I waited until they released it.

A few days ago the package with six double layer DVDs was delivered. Installation was not as smooth as it should have been, as I complained about here. But I managed to sort it out and have a working X-Plane 9 now. I installed the European and US scenery. The six DVDs of worldwide scenery are really nice and detailed, but they consist only of landscapes computed automatically from various databases. Europe now also includes the part I live in (Finland), which was too far north for version 8. However, I prefer to fly southern Europe, where the landscape is a bit more varied.

There are cities, forests, roads, airports, coastlines, etc. where they should be (and in a surprising amount of detail), but the simulator lacks custom content like the massive amount that comes with Microsoft Flight Simulator. To fix that, I installed the excellent Corsica scenery, which is one of the many third party scenery packages available and one of the first to be upgraded for version 9. This adds a nice level of realism. Flying in from Nice (another scenery package, warning: horrible HTML layout) with the new Cirrus jet was pretty cool and surprisingly easy given that the Cirrus was new to me. According to the product announcement, this plane was actually created by Cirrus themselves and presumably tuned to their specifications and needs. Also, the 3D cockpit is pretty cool and much more user friendly on a PC than the average, very complicated panel that comes with an X-Plane jet.

Technically, version 9 includes lots of improvements to the scenery rendering and simulation. The changes are outlined in great detail on the product announcement page by Laminar owner and founder Austin Meyer. I have little to add here except to say that it mostly works and delivers as advertised. Don't expect to max out any of the rendering settings; they have been designed such that this is not possible with any hardware available now. In fact they just raised the bar for future hardware. If you can get your hands on an NVIDIA card with a few GB of video RAM, X-Plane will probably find a use for every byte of it. The good news is that it still looks pretty good with object detail not set to "TOTALLY INSANE" (Austin Meyer loves his capitals). In case you are wondering, I have a three year old AMD 4400+ with 2 GB of RAM and an NVIDIA 7800 GT. Anything similar or better will run X-Plane just fine.

Part of the attraction of X-Plane is that it is a niche product built by some dedicated people who know what they are doing and are totally focused on doing it. Considering that they have a very small team of programmers and not many other people working for them, it is pretty amazing what they manage to deliver. They have to be smart and efficient about a lot of things. So their UI is totally custom and a bit wacky. But it works. The included planes are so-so, but there are plenty of free ones available to fix that (and some better ones for a small fee). With all these nice freeware planes out there (e.g. on x-plane.org), you have to wonder why the selection bundled with X-Plane is so weak. Most of the bundled planes don't have 3D cockpits and quite a few even lack textures.

However, at the core of X-plane is an excellent and extremely detailed simulation of just about anything that flies and everything that makes it fly. I mean, they are worrying about the accuracy of the voltage in electrical systems here and how that behaves under different failure scenarios. The attention to detail is just amazing. This is a simulator made by absolute flight sim geeks for flight sim geeks. It has lots of rough edges but it does its core job extremely well and is arguably the best all round flight simulator available today.

Modular windows

There is a nice article on Ars discussing Microsoft's business practices regarding Windows and how they appear to be not quite working lately. It used to be that your PC simply came with Windows, whereas nowadays you have to select from around five different versions, and Microsoft is rumored to be moving to an even more modular and subscription-based model. The general idea is to squeeze as much revenue out of the market as possible. On paper it sounds good (for MS, that is).

Rather than buying an overpriced OS with everything and the kitchen sink, you buy what you need. There's a huge difference between what businesses and some individuals are willing to spend and what the typical home user, who just wants a browser + Skype + The Sims, will pay. Typically the latter group ends up buying the cheapo version and the former group ends up buying the everything-and-the-kitchen-sink version. The problem is that there is unmonetized value in the latter group, in the sense that some owners of the cheapo version might be interested in getting access to some of the features of the expensive version, but not all of them.

Now to the obvious problem with this solution. By selling cheapo versions with most of the value removed and factored out into separate chunks you have to pay for, you dilute the overall value of the OS. So instead of buying an OS that can do X, Y, and Z out of the box, you are buying an OS that can't do X, Y, and Z out of the box. Marketing an OS that can't do things is a lot harder than selling one that can. Worse, they are opening the market to third parties that might offer something similar to X, Y, and Z for a better price, or in some cases for free (beer & speech). Or, even worse, to companies selling an alternative OS with X, Y, and Z included.

That, in a nutshell, is what is discussed in the Ars article, and why Apple's Mac OS X market share is approaching double digits. I've been giving it some serious thought myself lately, and I'm also noticing the spike in Safari users in my web site statistics.

Anyway, the reason for this write up is that the article overlooks an important argument here that I believe is relevant for more markets than just operating systems. In general, the tie between OS and features such as photo galleries, online backups, or TV UIs is artificial. Microsoft only adds features like this to make the overall OS more valuable. That is, they are looking to improve the value of the OS, not the photo gallery. However, ongoing and inevitable commoditization of software actually shifts value to new features. Especially when bundled with online subscriptions, things like online photo galleries can be quite good business. For example, Flickr has many paying subscribers.

Naturally MS is interested in markets like this (which is why they are interested in Yahoo). However, the tie-in to the OS constrains the market. Why would you not want to sell these services to Apple users? Why would you not want to sell them to Sony PlayStation owners? Why would you want to artificially limit who can access your service just to boost sales of your OS? As long as you were trying to artificially (and, for MS, apparently illegally) boost the value of your core OS, bundling was a valid strategy. However, as soon as the value shifts, it becomes a brake on market growth. The OS market has commoditized to the point where you can get things like Ubuntu for free, which for the low end market is about as good as what you get with the cheapo version of Vista (see my various reviews of Ubuntu for why I'm not ready to claim better yet).

So the difference between MS and Google, who is eating their lunch in the services arena, is that the latter is not handicapped by 20 years of Windows legacy and can freely innovate and grow market share without having to worry about maintaining a revenue stream from legacy software. Google doesn't have to sell OS licenses, so they give away software on all platforms to draw more users to their services, which is where they make their money.

Naturally, Google has a lot of software engineers who are working round the clock to create more value for them. Where possible Google actively collaborates with the open source community, because they know that while they won't make any money from commodities like browsers, file systems and other important software components, they do depend on those things working as well as possible and evolving in the right direction. Few people appreciate this, but that, and not ads, is why Google sponsors Firefox. It's a brilliant strategy and it is forcing their main competitor to keep investing in Internet Explorer rather than shifting those resources to competing with Google directly. $50 million is pocket money if it makes your main competitor crap their pants and waste resources on keeping up with you in a market where you are not even trying to make money.

You might have noticed that I have carefully avoided discussing Google's and Microsoft's mobile service strategies, and also that yours truly works for Nokia. Well, my readers ought to be smart enough to figure out what I'm trying to say here, aren't you? :-)

OpenID 2.0 and concerns about it

It seems JanRain is finally readying the final version of OpenID 2.0. There’s a great overview of some concerns that I mostly share on readwriteweb.com. Together with another recent standard (OAuth), OpenID 2.0 could be a huge step forward for web security and privacy.

Let's start with what OpenID is about and why, generally, it is a good idea. The situation right now on the web is that:

  • Pretty much every web site has its own identity solution. This means that users have to keep track of dozens of accounts. Users generally have only one or two email addresses, so in practice most of these accounts are tied to a single email account. Imagine someone steals your Gmail password and starts scanning your mail for all those nice account activation mails you've been getting for years. Hint: "mail me my password", "reset my password". In short, the current situation has a lot of security risks. It's basically all the downsides of a centralized identity solution without any of the advantages. There are many valid concerns about using OpenID, related to e.g. phishing. However, what most people overlook is that the current situation is much worse, and also that many OpenID providers actually address these concerns with various technical solutions and security practices. For example, myopenid.com and Verisign employ very sophisticated technologies that you won't find on many websites where you would happily provide your credit card number. There is no technical reason whatsoever why OpenID providers can't use the same or better authentication mechanisms than the ones you probably already use with your bank.
  • While technically some websites could work together on identity, very few do, and the ones that do tend to have very strong business ties (e.g. banks, local governments, etc.). This means that in most cases, reusable identity is only usable on a handful of partner sites. Google, Microsoft, and Yahoo are great examples. They each have partner programs that allow external sites to authenticate people with them. Only problem: almost nobody seems to use them. So, reality check: OpenID is the only widespread single sign-on solution on the web. There is nothing else. All the other stuff is hopelessly locked into commercial verticals. Microsoft has been trying for years to get their Passport solution to do what OpenID is doing today. They have failed miserably so far.
  • Web sites are increasingly dependent on each other. Mashups started as an informal thing where site A used an API from site B and did something nice with it. Now these interactions are getting much more complex. The number of sites involved in typical mashups is increasing, and so is the amount of privacy-sensitive data flying around in them. A very negative pattern that I've seen on several sites is the "please provide your Gmail/Hotmail/Yahoo password and we'll import your friends" type of feature. Do you really want to share years of private email conversations with a startup run out of a garage in California? Of course not! This is not a solution but a cheap hack. The reality is that something like OpenID + OAuth is really needed, because right now many users are putting themselves in danger by happily handing over their usernames and passwords.
  • Social networks like Facebook authenticate people for the many little apps that plug into them. So far Facebook is the most successful here. Facebook provides a nice glimpse of what OpenID makes possible on a much larger scale, but it is still a centralized vertical. I am on Facebook and generally like what I see there, but I'm not really comfortable with the notion that they are the web from now on (which seems to be implied in their centralized business model). Recent scares with their overly aggressive advertisement schemes show that they can't really be trusted.

OpenID is not a complete solution for the above problems, and it is important to realize that this is by design: it tries to solve only one problem and tries to solve it well. But generally it is a vast improvement over what is used today. Additionally, it can be complemented with protocols like OAuth, which is about delegating permissions from one site to another on your behalf. OpenID and OAuth are very well integrated with the web architecture in the sense that they are not monolithic identity solutions but modular ones, designed to be combined with other modular solutions. This modular nature is essential because it allows for very diverse combinations of technology. This in turn allows different sites to implement the security they need, but in a compatible way. For example, for some sites allowing any OpenID provider would be a bad idea. So, implement whitelisting and work with a set of OpenID providers you trust (e.g. Verisign).
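
To make this a bit more concrete, here is a minimal sketch of the relying-party side of an OpenID 2.0 login, in Python using only the standard library. It shows the two things a site actually does up front: check the user's provider against a whitelist and redirect the browser to the provider with a checkid_setup request. The provider endpoints and URLs are made-up placeholders, and a real site would of course use a maintained OpenID library rather than hand-rolling this.

    # Minimal sketch of what a relying party does for an OpenID 2.0 login.
    # Endpoints and URLs below are illustrative placeholders, not real configuration.
    from urllib.parse import urlencode

    # Whitelisting in practice: only accept identities from providers you trust.
    TRUSTED_ENDPOINTS = {
        "https://openid.example-provider.com/server",
        "https://pip.example-verisign.com/server",
    }

    def build_checkid_setup_url(op_endpoint, claimed_id, return_to, realm):
        """Build the URL that the user's browser is redirected to at the provider."""
        if op_endpoint not in TRUSTED_ENDPOINTS:
            raise ValueError("untrusted OpenID provider: " + op_endpoint)
        params = {
            "openid.ns": "http://specs.openid.net/auth/2.0",
            "openid.mode": "checkid_setup",   # interactive login at the provider
            "openid.claimed_id": claimed_id,  # the identifier the user typed in
            "openid.identity": claimed_id,
            "openid.return_to": return_to,    # where the provider sends the user back
            "openid.realm": realm,            # the site the user is logging in to
        }
        return op_endpoint + "?" + urlencode(params)

    print(build_checkid_setup_url(
        "https://openid.example-provider.com/server",
        "https://alice.example-provider.com/",
        "https://www.example.com/openid/return",
        "https://www.example.com/",
    ))

When the provider sends the user back to the return_to address, the site still has to verify the signed assertion (via a previously established association or a check_authentication round trip) before trusting the identity; that, plus discovering the endpoint from the identifier in the first place, is exactly the plumbing the existing OpenID libraries take care of.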

OpenID and OAuth provide a very decent base level of protection that is not available from any other widely used technology at the moment. The closest thing to it is the Liberty Alliance/SAML/Microsoft family of identity products. These are designed for, and applied almost exclusively in, enterprise security products. You find them in banks, financial institutions, travel agencies, etc. They are also used on the web, but invariably only to build verticals. Both Google and Microsoft use technologies like this to power their identity solutions. In fact, many OpenID identity providers also use these technologies. For example, Microsoft is rumoured to be OpenID-enabling their solution, and several members of the Liberty Alliance (e.g. Sun) have been experimenting with OpenID as well. They are not mutually exclusive technologies.

It gets better though. Many OpenID providers are employing really advanced anti-phishing technologies. Currently you and your cryptographically weak password are just sitting ducks for Russian/Nigerian/whatever scammers. Even if you think your password is good, it probably isn't. OpenID doesn't specify how to authenticate, so OpenID providers are competing on usability and anti-phishing features. For example, Verisign and myopenid.com employ techniques that make them vastly more secure than most websites out there, including some where you make financial transactions. There has been a lot of criticism of OpenID, and it has been picked up by those who implement it.

So now on to OpenID 2.0. This version is quite important because it is the result of many companies discussing, for a very long time, what should be in it. There are a few regrettable compromises and maybe not all of the spec is that good an idea (e.g. .name support). But generally it is a vast improvement over OpenID 1.1, which is what is in use currently and which is technically flawed in several ways that 2.0 fixes. The reason 2.0 is important is that many companies have been holding off on OpenID support until it was ready.

The hope/expectation is that those companies will start enabling OpenID logins for their sites over the next few months. The concern expressed in the readwriteweb article is that this may not actually happen and that in fact the OpenID hype already seems to be past its peak. Looking at how few sites I can actually sign into with my OpenID today, I'd have to agree. As of yet, no major website has adopted OpenID as a relying party. Sure, there are plenty of identity providers that support OpenID, but very few relying parties that accept identities from those providers. Most of the OpenID-enabled sites out there are simple blogs, startup web 2.0 type stuff, etc. The problem seems to be that everybody is waiting for everybody else, and also that everybody is afraid of giving up control over their little clusters of users.

So, ironically, even though there are many millions of OpenIDs out there, most of their owners don't use them (or are even aware of having one). Pretty soon OpenID will be the authentication system with the most users on this planet (if it isn't already), and people don't even know about it. Even the largest web sites have no more than something like a hundred million users (which is a lot). Several of those sites are already OpenID identity providers (e.g. AOL).

The reason I hope OpenID does get some adoption is that if it doesn't, it will take a very long time for something similar to emerge, which means the current, very undesirable situation will be prolonged for years. In my view a vast improvement over the current situation is needed, and besides OpenID there seems to be very little in terms of solutions that can realistically be used today.

The reason I am posting this is that over the past few months my colleagues and I have been struggling with how to do security in decentralized smart spaces. If you check my publications web site, you will see several recent workshop papers that provide a high level overview of what we are building. Most of these papers are pretty quiet on security so far, even though security and privacy are obviously a huge concern in a world where user devices use each other's services and mash them up with commercial services both in the local network and on the internet. Well, the solution we are applying in our research platform is a mix of OpenID, OAuth and some rather cool add-ons to those that we have invented. Unfortunately I can't say much about our solutions yet, except that I am very excited about them. Over the next year, we should be able to push more information out into the public.

Flock

I just installed Flock – The Social Web Browser. Right now I'm trying out the blog editor included with it to write this little review. To cut the review short: I'm planning to uninstall it after publishing this post.

Let's just start by saying that this feels like a nice bunch of concepts and potentially useful Firefox extensions, but not like a drop-in Firefox replacement. Besides, the default theme feels rather amateurish and I already miss my dozen Firefox extensions. And while I am pleased that it supports Facebook, I find the lack of support for much else a bit disappointing. For example, I'm also on LinkedIn, phib and claimid. I have several OpenID logins; I use several Google services, including Reader, Gmail and Calendar. All of these are unsupported by the self-proclaimed social web browser. Hell, it doesn't even integrate webmail from e.g. Google, Yahoo or Microsoft (I have accounts with all three). You can find an overview of the social networking sites I use on my blog: http://blog.jillesvangurp.com/my-other-sites/. Most of the stuff there is unsupported by Flock.

An exception seems to be del.icio.us. However, the extension functionality I get in Firefox is much better than the bundled del.icio.us support in Flock, which is rather useless. Similarly, the blog editor is nice, but nothing I can't get from several Firefox extensions. I suppose the Facebook sidebar is nice, but again, there is also a Firefox extension for that.

A rather novel feature seems to be the media bar. However, in its current incarnation it is limited to harvesting media from just a handful of popular sites like Facebook (again), YouTube and Flickr. That's nice but not all that useful to me.

So overall I have somewhat mixed feelings. On one hand, this feels like a polished product; on the other hand, there's not much here that I can't get by installing a few Firefox extensions. With Firefox 3 around the corner, I'm not planning to switch to Flock 1.0, which is based on the old Firefox and lacks most of the extensions I can't do without. Nevertheless, there are some good ideas here that I would like to see adopted in the form of Firefox extensions.


porsche gets a good test drive

It's only two days ago that I bought myself a LaCie Porsche 0.5 TB USB drive. Yesterday evening, after a reboot caused by an Apple security update, weird shit started to happen. Basically, Windows informed me that "you have 3 days to activate windows". WTF! So I dutifully clicked "activate now", only to watch a product key being generated and the dialog closing itself, rather than letting me review the screen and opt for internet or telephone based activation. After that it informed me that I had three more days to activate. Very weird and disturbing news! A few reboots and BSODs later (BSODs had now also started to appear on pretty much every reboot), I took a deep breath and decided that the machine was foobarred and I needed to reinstall Windows. I suspect the root cause of my problems was a reset a while back which resulted in a corrupt registry and repeated attempts by Windows to repair it before booting normally. I thought the problem was fixed, but apparently the damage was more extensive than I originally thought.

Considering I had a few more days to reactivate, which despite my attempts I could not do, I decided to back up everything I could think of. I now have about 100 GB left on the external drive; bought it just in time :-). Copying that amount takes shitloads of time. Basically most of the backup ran overnight with the assistance of the Cygwin port of rsync (a rough sketch of the kind of backup command I used follows the list below). After re-installing Windows earlier this evening (which activated fine, to my surprise), I got to work reinstalling everything (I have a few dozen applications I just need to have) and moving back all my data. Some interesting things:

  • Luckily I thought of backing up my c:\drivers dir, in which I stored various system level drivers for my motherboard and other stuff that I downloaded when I installed the machine a year ago. This included the essential driver for the LAN, without which I would have had no network after the install and no way to get the driver onto the machine (or to activate it). Phew.
  • I reapplied the iTunes library migration procedure I described last year (which still gets me loads of hits on the blog). It still works, and my library, including playlists, ratings and play counts, imported fine into my new iTunes install. Would be nice if Apple were a bit more supportive of recovering your stuff in a new install.
  • After installing Firefox 2, I copied back my old profile folder and Firefox launched as if nothing had happened. Bookmarks, cookies, passwords, extensions: all there :-). Since I practically live in this thing, that pleased me a lot.
  • Then I reinstalled gaim and copied back the .gaim directory to my user directory. Launched it and it just worked. Great!
  • Same with jedit.
  • Then I installed Steam, logged in, and ran the restore tool against the 13 GB backup I had created earlier. It seems to have worked fine and I'm glad I don't have to wait a few weeks for the downloads to finish. OK, the restore was not fast either, but it got the job done.
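
For what it's worth, the overnight backup boiled down to something like the sketch below: a small Python script driving the Cygwin rsync binary. The directory names and the mount point of the external drive are made up for illustration; the rsync flags are the standard ones.

    # Rough sketch of the overnight backup, assuming the Cygwin rsync binary is on
    # the PATH; source directories and the external drive mount are illustrative.
    import subprocess

    SOURCES = [
        "/cygdrive/c/drivers",                            # motherboard/LAN drivers saved at install time
        "/cygdrive/c/Documents and Settings/your-user",   # profile, Firefox/Gaim settings, documents
        "/cygdrive/d/music",                              # iTunes library
        "/cygdrive/d/photos",
    ]
    DESTINATION = "/cygdrive/f/backup/"                   # the external USB drive

    for src in SOURCES:
        # -a preserves timestamps and permissions, -v/--progress show what is happening,
        # --delete keeps the backup an exact mirror of the source on repeated runs.
        subprocess.run(
            ["rsync", "-av", "--progress", "--delete", src, DESTINATION],
            check=True,
        )

Restoring after the reinstall was essentially the same trick in reverse.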

Lesson learned: backups are important. I had the opportunity to create them when it turned out I needed them, but I should have been backing up more regularly. A more catastrophic event would have caused me data loss and much more annoyance.

So a big thank you to Bill Gates et al. for wasting my precious spare time with their rude and offensive activation crap. Fucking assholes! I'm a paying customer and very pissed. I will remember this waste of my time and genuine disregard for my rights when making any future Microsoft purchasing decisions. And yes, that probably means lost revenue for you guys in Redmond. I've adopted open source for most of my desktop apps by now. There are only two reasons left for me to boot Windows on my PC: games and Photoshop. I understand the latter is now supported by Wine, and I'm much less active with gaming than I used to be. Everything else I use either runs on Linux or has great alternatives. But for the moment, I'll keep using Windows because I'm lazy.