Asana – killer issue tracker

I recently discovered Asana through @larsfronius. I have had a rocky history with issue trackers and productivity tools in general. Whether it is Jira, Trac, Bugzilla, Trello, the Github, Bitbucket, and Gitlab issue trackers, text files, Excel sheets, or post-its: it just doesn’t work for me; it gets in the way and stuff just starts happening outside it. I’ve devolved to the point where I can’t read my own handwriting, so anything involving paper, pens, crayons, and whatnot is a complete non-starter for me. Besides, paper doesn’t work if you have people working remotely. The combination of too much bureaucracy, bad UX, and a constant avalanche of things landing in my lap means I have a tendency to mostly not document what I’m doing, have done, or am planning to do. This is bad; I know.

Fundamentally, I don’t plan my working weeks waterfall style in terms of tickets which I then pick up and do. In many ways, writing a ticket is often half doing the work, since it triggers my reflex to solve any problem in near sight. It’s what engineers do. If you have ever tried to have a conversation with an engineer, you know what I am talking about. You talk challenges; they talk solutions. Why don’t you just do X or Y? What about Z? It’s hard to separate planning from execution. So, there’s a bit of history with me starting to create a ticket for something, realizing halfway through that just solving the problem takes less time, is more fun, and is probably a better use of my time, and then doing that instead.

I work in a startup company where I’ve more or less labeled myself as chief plumber. This means I’m dealing with a wide variety of topics, all the time. I’m often dealing with three things already when somebody comes along with a fourth and a fifth. All of them urgent. All of them unplanned. We’ve tried dealing with it in the traditional ways of imposing process, tools, bureaucracy, etc. But it always boils down to answering this question: what is the single most productive thing I can do that moves us forward? And to acknowledging that the answer is not a fixed thing we set in stone for whatever sprint length is fashionable at the time, but subject to change. Me hopping from task to task continuously means I don’t get anything done. Me only doing what seems nice means I get the wrong things done. In a nutshell, this doesn’t scale and I need a decent issue tracking tool to solve it properly.

Since my memory is flaky and tends to hold only a handful of things, I write down things that seem important but not urgent, so that I can focus on what I was doing and come back to them later. This process is highly fluid. Something comes along; I write it down. Later I look at what I’ve written and edit a bit, and once in a while I actually get around to doing the stuff I wrote down, but mostly the list just grows and I pick off the things that seem most urgent. The best tool for this process is necessarily something brutally simple. The main goal is to be minimally disruptive to the actually productive thing I was doing when I got interrupted, while still taking note of whatever I was interrupted for so that I don’t forget about it. So, for a long time a simple text editor was my tool of choice here. Alt-tab, type, edit, type, ctrl+s, alt-tab back to whatever I was doing. This is minimally intrusive. My planning process consists of moving lines around inside the file and editing them. This sounds as primitive as it is, and it has many drawbacks, especially in teams. But it beats having to deal with Jira’s convoluted UI or hunting for the right button in a web UI to find stuff across the dozen or so Github and Gitlab projects I work on. However, using a text editor doesn’t scale and I need a decent issue tracking tool to solve it properly.

Enter Asana. As you can probably imagine, I came to this tool with a healthy bias: basically none of the tools I’ve tried over the past decades come close to my preferred but imperfect tool, the text file. My first impression of this tool was wrong. The design and my bias led me to believe that this was another convoluted, over-engineered issue tracker. It took me five minutes of using it to realize how wrong I was.

The biggest hurdle was actually migrating the hundred or so issues I was tracking. Or so I thought. I was not looking forward to clicking new, edit, ok etc. a hundred times, which I assumed would be the case because that is how basically all issue trackers I’ve worked with so far work. So, I had been putting off that job. It turns out Asana does not work that way: copy 100 lines of text, paste, job done. So, one minute into using it I had already migrated everything I had in my text editor. I was impressed by that.

Asana is a list of stuff where you can do all the things that you would expect to do in a decent UI for that. You can paste lines of text and each line becomes an issue. You can drag lines around to change the order, and organize them using sections, tags, and projects. You can multi-select lines using mouse and keyboard commands similar to what you would use in, say, a spreadsheet, and manipulate issues that way. Unlike in every other issue tracker, the check box in the UI is actually there to allow you to mark things as done, not for selecting stuff. Instead, CMD+a, SHIFT+click, or CMD+click selects issues, and then clicking e.g. the tag field does what you’d expect. Typing @ triggers the autocomplete and you can easily refer to things (people, issues, projects, etc.) by name. There are no ticket numbers in the UI, but each line has a unique URL of course. Editing the line updates all the @ references to that issue. There are no modal dialogs or editing screens that hijack the screen. Instead, Asana has a list and a detail pane that sit side by side. Click any line and the pane updates; you do your edits there. Multi-select some lines and anything you do in the pane happens to all the selected issues. There are no save, OK, submit, or other buttons that add unnecessary levels of indirection. Just clicking in the field and typing is enough.

Asana is the first actually usable issue tracker that I’ve come across. I’ve had multiple occasions where I found that Asana actually works the way I would want it to. As in: I wonder what happens if I press CMD+z. It actually undid what I just did. I wonder what happens if I do that again. WTF, that works as well! Multi-level undo; in a web app. OK, let’s CMD+x and CMD+v some issues between Asana projects. Boom, 100 issues just moved. Of course you can also CMD+a and drag selected issues to another Asana project. I wonder if I can assign them to multiple projects. Yes you can, just hit the big + button. This thing just completely fixed the UX around issue tracking for me. All the advantages of a text file combined with all the advantages of a proper issue tracker. Creating multiple issues is as simple as type, enter, type another one, enter, etc. Organizing them is a breeze. It’s like a text editor but backed by a proper issue tracker. This UI wipes out 20 years of forms-based web UX madness and it is refreshing. We’ve been using it for nearly two months at Inbot and are loving it.

So, if you are stuck using something more primitive and are hating it: give Asana a try and you might like it as well.

Mobile Linux

A lot has been written about mobile and embedded device platforms lately (a.k.a. ‘phone’ platforms). Usually the articles are about the usual incumbent platforms, Android, iOS, and Windows Phone, and the handful of alternatives from e.g. RIM and others. Most of the debate seems to revolve around the question whether iOS will crush Android, or the other way around. Kind of a boring debate that generally involves a lot of fanboys from either camp highlighting this or that feature, the beautiful design, and other stuff.

Recently this three-way battle (or two-way battle really, depending on your views regarding Windows Phone) has gotten a lot more interesting. However, in my view this ‘war’ was actually concluded nearly a decade ago, before it even started, and mobile Linux won in a very unambiguous way. What is really interesting is how this is changing the market right now.

One week with the N900

This is me pimping a Nokia product. I work for Nokia; I rarely do this, and never without believing 100% in what I write.

With some delays, I managed to get my hands on an N900. Our internal ordering system took nearly five months to deliver this thing (something involving bureaucracy and the popularity of this device; I guess the external paying customers lining up for it had some priority too). But it was well worth the wait.

For those who don’t know what the N900 is: it is the first phone in a series of Linux-based tablet devices from Nokia that started with the N770 in 2005 and continued with the N800 (which I still have) and the N810. As such, this series of devices was the start of something beautiful a few years ago. Not hindered by any operator limitations, these were essentially pocketable Linux PCs. So naturally the engineers working on this selected Debian Linux and named the result Maemo Linux. Then they built a tool chain and ecosystem around the platform and tapped into all the readily available OSS goodies. It was great. Any research lab in need of some hackable devices jumped on this. As I recall from when I was still doing pervasive computing research, most of the researchers in this field were using these devices to study all sorts of stuff. Because no matter how obscure your OSS project is, barring screen and CPU limitations you can probably get it going on Maemo Linux. You can, and people did. Most of Ubuntu cross-compiles to Maemo without much effort. For example, I was running Tomcat, Equinox, and Lucene on a port of Sun’s CDC J2ME environment (roughly equivalent to Java 1.4) on an N800 three years ago. It actually ran well too. In short, these babies are the ultimate hacker’s devices. There really is no alternative in terms of openness or scope in the industry. Android may be Linux deep down inside, and Palm’s webOS may be Linux deep down inside, but Maemo is Debian Linux without ifs or buts.

And now there is the N900. The N900 is about as thick as an N97, about as long, about 3mm wider, and slightly heavier (I actually did the comparison). Unlike its predecessors, it is a phone as well as a Debian-Linux-running internet tablet. So all the goodness from the past versions, with a 2x performance and memory boost, a good quality phone stack (hey, it’s still a Nokia), and lots of UI work. While it has some rough edges (the software, not the hardware), it is surprisingly useful as a smart phone despite its current status as an early adopter’s device. It has one of the best browsers around (some would say the best); the UI is responsive and very touch friendly, it multitasks without effort, and it comes with tons of goodies like SIP, Skype, Google Talk, Facebook, and Twitter support. And that’s just the out-of-the-box stuff. You can do most of what the N900 does on an iPhone. But not all at once. You can on the N900, plus some.

So, best phone ever as far as I’m concerned. MeeGo, the consumer-friendly successor of Maemo that was born out of our recent deal with Intel and their Moblin platform, is coming soon in the form of new Nokia phones (you can already get it for netbooks). I can’t wait for world+dog to start porting over their favorite software to that. Meanwhile, I just use the N900 as is, which is plenty good. It’s a great smart phone that plays back my music and browses the web (including Google Maps, YouTube, Facebook, and other web 2.0 heavy AJAX & Flash sites) without much effort. Most of the iPhone-optimized web apps work great on the N900 as well. For example, I use the iPhone-optimized mobile Google Reader (http://www.google.com/reader/i). Mail support is excellent on this device; I use Mail for Exchange push email and Gmail. I can do regular calls, VOIP, Skype (with video), IM, and upload photos/videos to Facebook, Flickr, and other networks. Functionally there is little left to desire. Though somebody getting a foursquare client beyond the early alpha stage would be nice (there are two of those).

The Gimp

Since getting an iMac in the summer and not spending the many hundreds of dollars needed for a Photoshop license, I’ve been a pretty happy user of Google’s Picasa. However, it is a bit underpowered and lacks the type of features that are useful for fixing contrast, color, and sharpness issues in the poorly lit, partly blown out, noisy, and otherwise problematic photos that you end up with if you shoot with a nice pocketable compact camera, like I do. My Canon S80 is actually not that bad (great lens, easy to stuff in a pocket, fast to unpocket and aim and shoot, nice controls) but it has three major limitations:

  • When shooting in automatic mode, it tends to blow out the highlights, meaning the sky and other bright areas in the photo end up white. This means you have to manually set aperture, shutter time, and ISO to get the more difficult shots. Most compacts suffer from this problem, BTW.
  • The screen and the histogram on it are not that useful. Basically you will end up with photos that are too dark and that do not use the full available dynamic range if you try to optimize for what’s on the screen. Instead, I’ve been relying on spot metering, measuring different spots and compensating for that using Ansel Adams style zoning and a wet-finger approach (okay, a lot of the latter). Basically this works, but it is tedious.
  • Like most compacts, it is useless at higher ISOs due to the noise. Basically I avoid shooting at ISO 200 or above and usually shoot at ISO 50 unless I can’t get the shot otherwise. This means that in low light conditions, I need a really steady hand to get workable shots.

So as a result, my photos tend to need a bit of post processing to look presentable. Picasa handles the easy cases ok-ish but I know software can do better. So, after exploring the various (free) options on mac and deciding against buying Adobe Lightroom or Photoshop Elements, I ended up taking a fresh look at the Gimp.

The Gimp is, as you no doubt know, an open source photo/bitmap editor (as well as a really funny character in Pulp Fiction). It comes with a lot of features, a UI that is quite ‘challenging’ (some would say unusable), and some technical limitations. To start with the technical limitations: it doesn’t do anything but 8-bit color depth, which means lossy operations like editing contrast or running filters lose a lot more information due to rounding errors that add up the more you edit. It doesn’t do adjustment layers and other forms of non-destructive editing, which adds to the previous problem. It’s slow. Slow as in it can take minutes to do stuff like gaussian blur or sharpening on a large image that would be near real time in e.g. Photoshop. It doesn’t support popular non-RGB color spaces (like LAB or CMYK, though it can be made to work with them if you need to). And it doesn’t come with a whole lot of the filters and user-friendly tools that are common in commercial packages. Finally, the UI is the typical result of engineers putting together a UI for features they want to show off, without agreeing on such things as an overall UI design philosophy or any kind of conventions. It’s nasty, it’s weird in plenty of places, it’s counterintuitive, and it looks quite ugly next to my pretty Mac apps. But it sort of works, and you can actually configure some of its more annoying defaults to be more reasonable.

So there is a lot lacking and missing in the Gimp and plenty to whine about if you are used to commercial grade tooling.

But, the good news is (beyond it being free) that you can still get the job done in the Gimp. It does require a creative use of the features it has. Basically, the Gimp provides all the basic building blocks to do complex image manipulation but they are not integrated particularly well. There are only a handful of other applications that provide the same type of features and implementation quality. Most of those are expensive.

In isolation the building blocks that the Gimp provides are not that useful. You have to put them together to accomplish tasks, often in not so obvious ways (although for anyone with a solid background in advanced photo editing it is not that much of a challenge). Doing things in the Gimp mainly involves understanding what you want to do and how the Gimp does things. It’s really not very forgiving when you don’t understand this.

Here are some things that are generally in my workflow (not necessarily in this order) that work quite well in the Gimp. I just summarize the essentials here, since you can find lengthy tutorials on each of these topics if you start Googling; there is also lots of potential for variation and for perfecting your skills in particular areas:

Contrast: duplicate layer, set blend mode to value (just light, not color), use levels or curves tool on the layer to adjust the contrast. Fine tune the effect with layer transparency. This basically leaves the colors unmodified but modifies brightness and contrast.
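
Most of these recipes can also be scripted. As a minimal sketch of the contrast recipe, here is roughly what it looks like in GIMP 2.x’s Python-Fu console (Filters > Python-Fu > Console); the layer name and the levels numbers are illustrative assumptions, not gospel:

    # Contrast on a value-mode layer copy (GIMP 2.x Python-Fu console;
    # gimp, pdb, and constants like VALUE_MODE are predefined there).
    img = gimp.image_list()[0]            # the image you have open
    base = img.active_layer

    contrast = base.copy()
    img.add_layer(contrast, -1)           # put the copy on top
    contrast.name = "contrast (value)"
    contrast.mode = VALUE_MODE            # affect lightness only, not color

    # Adjust contrast with levels on the copy; 20 and 235 are illustrative
    # input black/white points, 1.0 is gamma, 0-255 the output range.
    pdb.gimp_levels(contrast, HISTOGRAM_VALUE, 20, 235, 1.0, 0, 255)
    contrast.opacity = 70.0               # fine-tune the effect
    gimp.displays_flush()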

Improve black and white contrast with color balance: basically in black and white photography you can use a color filter in front of the lens to change the way light and dark areas affect the negative. E.g. a red filter is great for getting some nice detail in e.g. water or sky. You can achieve a similar effect with the color balance tool and a layer that has its mode set to value. This is nice for creating black and white photos, but also for dealing with things like smog (a mostly red haze -> deemphasize the red) in color photos, or for getting some extra crisp skies. You can examine the individual color channels to find out which have more detail and then boost the overall detail by mixing the channels in a different way. This will of course screw up the colors, but you are only interested in light/dark here, not color. So: duplicate layer, set mode to value, and edit the layer with the color balance tool. Some basic knowledge of color theory will help you make sense of what you are doing, but random fiddling with the sliders also works fine.

Local contrast: duplicate layer, use the unsharp filter to edit local contrast by setting radius to something like 50 pixels and amount to something like 0.20. Basically this will change local color contrast and change the perceived contrast in different areas by locally changing colors and lightness. If needed, restrict the layer to either value or color mode.

Contrast map: duplicate layer, blur at about 40 pixels, invert, desaturate, set layer to overlay. This is a great way to fix images with a high dynamic range (lots of shadow and highlight detail, histogram is sort of a V shape). Basically it pushes some of the detail to the center of the histogram, thus compressing the dynamic range: it brightens dark spots and darkens bright spots. The blurring is the tricky bit, since you can easily end up with some ghosting around high contrast areas. Fiddling with the pixel amount can fix things here. Also, using the opacity on the layer is essential, since you rarely want to go with 100% here.
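
Scripted, under the same Python-Fu assumptions as the contrast sketch above, the contrast map recipe might look like this (the blur radius and opacity are just starting points):

    # Contrast map: blurred, inverted, desaturated copy in overlay mode.
    img = gimp.image_list()[0]
    base = img.active_layer

    cmap = base.copy()
    img.add_layer(cmap, -1)
    cmap.name = "contrast map"

    pdb.plug_in_gauss(img, cmap, 40, 40, 0)   # ~40px blur; tweak against ghosting
    pdb.gimp_invert(cmap)
    pdb.gimp_desaturate(cmap)
    cmap.mode = OVERLAY_MODE                  # overlay onto the original
    cmap.opacity = 60.0                       # you rarely want 100% here
    gimp.displays_flush()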

Overlay to make a bland image pop: duplicate layer, set mode to overlay. This works well for photos with a low dynamic range. It basically stretches detail towards the shadow and highlights and enhances both contrast and saturation at the same time. Skies pop, grass is really green, etc. Cheap success but easy to overdo. Sort of the opposite effect of contrast map.

Multiply the sky: duplicate layer, set mode to multiply, mask everything but the sky (try using a gradient for this or some feathered selection). This has the effect of darkening and intensifying the sky and is great for photos that were overexposed. Also works great for water (though you might want to use overlay).

Color noise: duplicate layer, set mode to color, switch to the red channel and use a combination of blur, and noise reduction filters to smooth out the noise. Selective gaussian blur works pretty well here. Repeat for the green and blue channels. Generally, most of the noise will be in the blue and red channels (because for every cluster of 4 pixels in the sensor, two are green, i.e. most of the detail is in the green channel). Basically, you are only editing the colors here, not the detail or the light so you can push it quite far without losing a lot of detail. Apply a light blur to the whole layer to smooth things out some more.
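
The per-channel work is easiest done interactively, but a coarser, scriptable variant of the color noise recipe (again under the same Python-Fu assumptions) smooths the whole color-mode copy with selective gaussian blur; the radius and max-delta values are illustrative:

    # Color noise: smooth a color-mode copy so only colors change, not detail.
    img = gimp.image_list()[0]
    base = img.active_layer

    denoise = base.copy()
    img.add_layer(denoise, -1)
    denoise.name = "color denoise"
    denoise.mode = COLOR_MODE                     # edits colors, keeps luminosity

    pdb.plug_in_sel_gauss(img, denoise, 8.0, 40)  # selective gaussian blur
    gimp.displays_flush()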

Luminosity noise: duplicate layer, set mode to value, like with color noise, work on the individual channels to get rid of noise. You will want to go easy on the blurring since this time you are actually erasing detail. Target channels in this order, red, blue, green (in order of noisiness and reverse order of amount of detail). Stop when enough of the luminosity noise is gone.

Color: duplicate layer, set blend mode to color, adjust color balance with curves, levels or color balance tool.

Saturation: duplicate layer, set mode to saturation, use the curves tool to edit saturation (try pulling the curve down). This is vastly superior to the saturation tool. You may want to work on the individual color channels, though this can have some side effects.

Dodge/burn: create a new empty layer, set mode to overlay, paint with black and white on it using 10-20% transparency. This will darken or brighten parts of the image without modifying the image. You can undo with the eraser. Smooth things with gaussian blur, etc. This is great for highlighting people’s eyes, pretty reflections, darkening shadow areas, etc.

Crop: select rectangle, copy, paste as new image, save. There is a crop tool hiding in the toolbox, but this works just fine too.

Sharpening: a neat trick I re-discovered in the Gimp is high-pass sharpening. High-pass filtering is about combining a layer that contains just the outlines of the bits that need sharpening with the original photo. This is great for noisy photos, since you can edit the layer with the outlines independently of the photo, which means you end up only sharpening the bits that need sharpening. How this works: copy visible, paste as new image, duplicate the layer in the new image, blur the top layer (10-20px should do it), invert, blend at 50% opacity with the layer below. You should now see a gray image with some lines in it that represent the outlines of whatever is to be sharpened. This is called a high pass. Copy visible, paste as new layer in the original image, set the high pass layer’s blend mode to overlay. Observe that this sharpens your image; tweak the effect with opacity. If needed, manually delete portions from the high pass that you don’t want sharpened. Tweak further with gaussian blur, curves, levels, or unsharp mask on the high pass layer. Basically this is a very powerful way of sharpening that gives you a lot more control than normal sharpening filters. But it involves using a lot of Gimp features together. It works especially well on noisy images, since you can prevent noise artifacts from being sharpened.
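
A Python-Fu sketch of the same idea, done with layers inside the original image rather than a scratch image (same console assumptions as above; the radius and opacities are starting points):

    # High-pass sharpening: build the gray high pass, overlay it on the image.
    img = gimp.image_list()[0]

    hp = pdb.gimp_layer_new_from_visible(img, img, "high pass")
    img.add_layer(hp, -1)

    blur = hp.copy()
    img.add_layer(blur, -1)
    pdb.plug_in_gauss(img, blur, 15, 15, 0)   # 10-20px should do it
    pdb.gimp_invert(blur)
    blur.opacity = 50.0                       # blend 50% with the copy below

    # Merge the inverted blur down: the result is the gray high pass layer.
    hp = pdb.gimp_image_merge_down(img, blur, CLIP_TO_BOTTOM_LAYER)
    hp.mode = OVERLAY_MODE                    # overlay mode does the sharpening
    hp.opacity = 80.0                         # tweak the effect
    gimp.displays_flush()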

A lot of these effects you can further enhance by playing with the opacity and applying masks. A key decision is the order in which you do things and what to use as the base for a new layer (either visible, or just the original layer). Of course some of these effects can work against each other or may depend on each other and some effects are more lossy than others. In general, paste as new image and paste as a new layer together with layer blending modes like color, value, or overlay are useful to achieve the semi non destructive editing that you would achieve with adjustment layers in Photoshop. You can save layers in independent files and edit them separately. And of course you don’t want to lose any originals you have.

It is also nice to be aware that most of the effects above can be accomplished in other software packages as well. In Photoshop, most of the tricks above give you quite a bit more control than the default user-friendly tools (at the price of having to fiddle more). Some other tools tend to be a bit underpowered. I’ve tried to do several of these things in paint.net under Windows and was always underwhelmed by the performance and quality.

Finally, there exist Gimp plugins and scripts that can do most of the effects listed above. I have very little experience with third party plugins and I am aware of the fact that there are a huge number of plugins for e.g. sharpening and noise. However, most of these plugins just do what you could be doing yourself manually, with much more control and precision. Understanding how to do this can help you use such plugins more effectively.

To be honest, my current workflow is to do as much as possible in Picasa and I only switch to the Gimp when I am really not satisfied with the results in Picasa. Picasa does an OK but not great job. But with hundreds of photos to edit, it is a quick and dirty way to get things done. Once I have a photo in the Gimp, I tend to need quite a bit of time before I am happy with the result. But the point is that quite good results can be achieved with it, if you know what to do. The above listed effects should enable you to address a wide range of issues with photos in the Gimp (or similar tools).

Localization rant

I’ve been living outside the Netherlands for a while now and have noticed that quite a few websites handle localization and internationalization pretty damn poorly. In general I hate the poor translations unleashed on Dutch users and generally prefer the US English version of UIs whenever available.

I just visited YouTube. I’ve had an account there for over two years. I’ve always had it set to English. So, surprise, surprise, it asked me for the second time in a few weeks, in German, whether I would like to keep my now fully Germanified YouTube set to German. Eehhhhh?!?!?! Nein (no). Abbrechen (cancel)! At least they ask, even though in the wrong language. Most websites don’t even bother with this.

But stop and think about this. You’ve detected that somebody who has always had his profile set to English is apparently in Germany. Shit happens, so now what? Do you think it is a bright idea to ask this person in German whether he/she no longer wants the website presented in whatever it was set to earlier? Eh, no, of course not. Chances are good people won’t even understand the question. Luckily I speak enough German to know Abbrechen is the right choice for me. When I was living in Finland, convincing websites I don’t speak Finnish was way more challenging. I recall fighting with Blogger (another Google owned site) on several occasions. It defaulted to Finnish despite the fact that I was signed in to Google and had every possible setting Google provides for this set to English. Additionally, the link for switching to English was three clicks away from the main page. Impossible to do unless you know the Finnish words for preferences, language, and OK (in which case you might pass for a native speaker). I guess I’m lucky to not live in e.g. China, where I would stand no chance whatsoever of guessing the meaning of buttons and links.

The point here is that most websites seem to be drawing the wrong conclusions based on a few stupid IP checks. My German colleagues are constantly complaining about Google defaulting to Dutch (i.e. my native language, which is quite different from Deutsch). Reason: the nearest Nokia proxy is in Amsterdam so Google assumes we all speak Dutch.

So, cool, you can guesstimate where I am (roughly) in the world, but don’t jump to conclusions. People travel and move around all the time; mostly they don’t change their preferred language, since that takes a lot of hard work. I mean, how hard can it be? I’m already signed in, right? Cookies set and everything. In short, you know who I am (or you bloody well should, given the information I’ve been sharing with you for several years). Somewhere in my profile, it says that my preferred language is English, right? I’ve had that profile for over four years, right? So why the hell would I suddenly want to switch language to something that I might not even speak? A: I wouldn’t. No fucking way that this is even likely to occur.

It’s of course unfair to single out Google here. Another example is iTunes, which has a fully English UI in Finland but made me accept the terms of use in Finnish (my knowledge of Finnish is extremely limited, to put it mildly). Finland is of course bilingual, and 10 percent of its population are Swedish-speaking Finns, most of whom probably don’t handle Finnish that well. Additionally there are tens of thousands of immigrants, tourists, and travelers, like me. Now that I live in Germany, I’m stuck with the Finnish iTunes version, because I happened to sign up while I was in Finland. Switching to the German store is impossible, i.e. I can’t access the German TV shows for sale on iTunes Germany. Never mind the US English ones I’m actually interested in accessing and spending real $$$/€€€ on. Similarly, I’ve had encounters with Facebook asking me to help localize Facebook to Finnish (eh, definitely talking to the wrong guy here) and recently to German (still wrong).

So, this is madness. A series of broken assumptions leads to Apple losing revenue and Google and others annoying the hell out of people.

So here’s a localization guideline for dummies:

  • Offer a way out. Likely a large percentage of your guesses about your users’ language are going to be wrong. The smaller the number of native speakers, the more likely you are to get it wrong. Languages like Finnish or Chinese are notoriously hard to learn. So, design your localized sites such that a non-native speaker of such languages can get your fully localized site set to something more reasonable.
  • Respect people’s preferences. Profiles override anything you might detect. People move around so your assumptions are likely broken if they deviate from the profile settings.
  • Language is not location. People travel around and generally don’t unlearn the language they used to speak. Additionally, most countries have sizable populations of non native speakers as well as hordes of tourists and travelers.
  • If people managed to sign up, that’s a strong clue that whatever the language of the UI was at the time is probably a language that the user has mastered well enough to understand the UI (or otherwise you’d have blind monkeys signing up all the time). So there’s no valid use case for suggesting an alternative language here. Never mind defaulting to one.
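
To make the ordering concrete, here is a minimal sketch in Python of how these rules stack up; the signal names and the helper function are hypothetical, the point is purely which signal wins:

    # Hypothetical sketch: resolve the UI language from the strongest signal
    # down, with IP geolocation dead last. Only the ordering matters.
    def resolve_ui_language(profile_lang, cookie_lang, accept_languages,
                            geoip_lang, supported):
        # 1. An explicit profile setting always wins; never override it.
        if profile_lang in supported:
            return profile_lang
        # 2. A language the user picked earlier (stored in a cookie) comes next.
        if cookie_lang in supported:
            return cookie_lang
        # 3. The browser's Accept-Language header reflects what the user set up.
        for lang in accept_languages:
            if lang in supported:
                return lang
        # 4. Only now guess from IP location, and only as a default the user
        #    can change from every page (the "way out").
        if geoip_lang in supported:
            return geoip_lang
        return "en"

    # A signed-in user with an English profile visiting from Germany stays
    # on English, no questions asked:
    assert resolve_ui_language("en", None, ["de"], "de", {"en", "de", "fi"}) == "en"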

Anyway, end of rant.

Google Chrome – First Impressions

First impression: Google delivered, I’ve never used a browser this fast. It’s great.

Yesterday, a cartoon was prematurely leaked detailing Google’s vision for what a browser could look like. Now, 24 hours later I’m reviewing what until yesterday was a well kept secret.

So here’s my first impressions.

  • Fast and responsive. What can I say? Firefox 3 was an improvement over Firefox 2, but this is in a different league. There are still lots of issues with having many tabs open in Firefox: I’ve noticed it doesn’t like handling bitmaps, and switching tabs gets unusable with a few dozen tabs open. Chrome does not have this issue at all. It’s faster than anything I’ve browsed with so far (pretty much any browser you can think of, probably).
  • Memory usage. Chrome starts a new process for each domain, not per tab. I opened a lot of tabs in the same domain and the number of processes did not go up. Go to a different domain and you get another Chrome process. However, it does seem to use a substantial amount of memory in total. Firefox 3 is definitely better here. Not an issue with the 2 GB I have, and the good news is that you get memory back when you close tabs. But still, 40-60MB per domain is quite a lot.
  • Javascript performance. Seems fantastic. Gmail and Google Reader load in no time at all. Easily faster than Firefox 3.
  • UI. A bit spartan if you are used to Firefox with custom bells & whistles (I have about a dozen extensions). But it works and is responsive. I like it. Some random impressions here: 
    • no status bar (good)
    • very few buttons (good)
    • no separate search field (could be confusing for users)
    • tabs on top, looks good, unlike IE7.
    • mouse & keyboard. Mostly like in Firefox. Happy to see middle click works. However, / does not work and you need to type ctrl+f to get in-page search
  • URL bar. So far so good, seems to copy most of the relevant features from Firefox 3. I like Firefox 3’s behaviour better though.
  • RSS feeds. There does not seem to be any support for subscribing to, or reading feeds. Strange. If I somehow missed it, there’s a huge usability issue here. If not, I assume it will be added.
  • Bookmarks. An important feature for any browser. Google has partially duplicated Firefox 3’s behaviour with a little star icon but no tagging.
  • Extensions. None whatsoever :-(. If I end up not switching, this will be the reason. I need my extensions.
  • Import Firefox Profile. Seems pretty good, passwords, browsing history, bookmarks, etc. were all imported. Except for my cookies.
  • Home screen. Seems nicer than a blank page but nothing I’d miss. Looks a bit empty on my 1600×1200 screen.
  • Missing in action. No spell checking, no search plugins (at least no obvious way for me to use them, even though all my Firefox search plugins are listed in the options screen), no print preview, no bookmarks management, no menu bar (good, don’t miss it).

So Google delivers on promises they never made. Just out of the blue there is Chrome, and the rest of the browser world has some catching up to do. Firefox and Safari are both working on the right things of course and have been a huge influence on Chrome (which Google gives them plenty of credit for). However, the fact is that Google is showing both of them that they can do much better.

Technically, I think the key innovation here is using multiple processes to handle tabs from different domains. This is a good idea from both a security and a performance point of view. Other browsers try to be clever and do everything in one process, with less than stellar results. I see Firefox 3 still block the entire UI regularly, and that is just inherent to its architecture. This simply won’t happen with Chrome. Worst case, one of the tabs becomes unusable and you just close it. Technically, you might wonder if they could not have done this with threads instead of processes.
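
The difference matters for crashes: threads share one address space, so one bad tab can take everything down, while processes are isolated by the OS. A toy Python sketch of that argument (nothing Chrome-specific, just the general mechanism; the domain names are made up):

    # One process per "site": a crash in one renderer leaves the rest alive.
    import multiprocessing, os

    def render_site(domain):
        if domain == "crashy.example":
            os.abort()  # simulate a renderer crash; in a thread this would
                        # take down the whole browser process
        print("%s rendered fine in pid %d" % (domain, os.getpid()))

    if __name__ == "__main__":
        sites = ["example.com", "crashy.example", "example.org"]
        procs = [multiprocessing.Process(target=render_site, args=(d,))
                 for d in sites]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        # The parent survives; only crashy.example's process died (exitcode != 0).
        print([(s, p.exitcode) for s, p in zip(sites, procs)])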

So, I’m genuinely impressed. Google is really delivering something exceptionally solid here. Download it and see for yourself.

Posting this from Chrome of course.

Songbird Beta (0.7)

Songbird Blog » Songbird Beta is Released!

Having played with several milestone builds of Songbird, I was keen to try this one. This is a big milestone for this music player & browser hybrid. Since I’ve blogged about this before, I will keep it short.

The good:

  • The new feathers (Songbird lingo for UI themes) look great. My only criticism is that it seems to be a bit of an iTunes rip-off.
  • Album art has landed.
  • Stability and memory usage are now acceptable for actually using the application.
  • Unlike iTunes, it actually supports the media buttons on my Logitech keyboard.

The bad (or not so good since I have no big gripes):

  • Still no support for the iTunes-invented but highly useful compilation flag (bug 9090). This means that my well-organized library is now filled with all sorts of obscure artists that I barely know but apparently have one or two songs from. iTunes sorts these into a compilations corner, and I use that feature to keep a nice overview of artists and complete albums.
  • Despite being a media player with extension support, there appear to be no features related to sound quality. Not even an equalizer. Not even as an extension. This is a bit puzzling, because this used to be a key strength of Winamp, the AOL product that the Songbird founders used to be involved with.
  • Despite being a browser, common browser features are missing. So no bookmarks, no apparent RSS feed support, no Google preconfigured in the search bar, etc. Some of these things are easily fixed with extensions.

Verdict: much closer than previous builds, but still no cigar. The key issue for me is compilation flag support. I’d also really like to see some options for affecting audio playback quality. I can see how having a browser in my media player could be useful, but this is not a good browser nor a good media player yet.

OOo 3.0 Beta & cross references

It still looks butt ugly, but at least this bug was partially addressed in the latest beta release of Open Office. The opening date for this one: “Dec 19 19:13:00 +0000 2001”. That’s more than seven years ago! This show stopper has prevented me from writing my thesis, any scientific articles, or in fact anything serious in Open Office, since writing such things requires proper cross reference functionality. But finally, they implemented the simple feature of being able to refer to the paragraph numbers of something elsewhere in the document using an actual cross reference. This is useful for referring to numbered references, figures, tables, formulas, theorems, sections, etc.

The process for this bug went something like this: “you don’t need cross references” (imagine a Star Wars type gesture here). Really, for a bunch of people implementing a word processor, the mere length of the period during which they maintained this point of view was shocking, and to me it has always been a strong indication that they might not be that well suited for the job of creating an actual word processor. Then they went into an infinite loop of “hmm, maybe we can hack something for Open Office 1.1, 2.0, 2.1, 2.2, 2.3, 2.4, 3.0” and “we need to fix this because imported Word documents are breaking over this” (never mind that real authors might need it for perfectly valid reasons). This went on for a very, very long time, and frankly I have long since stopped considering Open Office a serious alternative for doing my word processing.

I just tried it in 3.0 beta and it actually works now, sort of. Testing new OoO releases for this has become somewhat of a ritual for me. For years, the first thing I did after downloading OoO was try to insert a few cross references before shaking my head and closing the window. The UI is still horribly unusable but at least the feature is there now if you know where to look for it.

Six years ago, FrameMaker was the only alternative that met my technical requirements for an actual word processor: a UI and features that support the authoring process (unlike LaTeX, which is a compiler), the ability to use cross references, and flexible but very strictly applied formatting. Theoretically Word can do all of this as well, but I don’t recommend it, for reasons of bugginess and the surprising ease with which you can lose hours of work due to Word automatically rearranging & moving things for you when you e.g. insert a picture, paste a table, etc. (and yes, I’ve seen documents corrupt themselves just by doing these things).

The last few years, I’ve used Open Office only to be able to open the odd Word/PowerPoint file dropping into my inbox at home. I basically have close to no office application needs here at home. For my writing needs at work, I usually adapt to what my coauthors use (i.e. Word and sometimes LaTeX). FrameMaker has basically been dying since Adobe bought it. The last version I used was 6.0, and the last occasion I used it was when writing my PhD thesis.

Ubuntu at work

After my many not-so-positive reviews, you might be surprised to learn that I’m actually using Ubuntu at work now. Last week, a recent Mac convert dumped his ‘old’ laptop on my desk, which happened to be a Lenovo T60 with a nice Core Duo processor, ATI graphics, and 2 GB of memory. One of the reasons for the Mac was that the thing kept crashing. This can either be a hardware or a software problem. I suspect the latter, but I’ll have to see.

It so happens that my own Windows desktop is increasingly less compatible with the Linux-based Python development going on in the team I’m in. So even before taking the laptop, I was playing around with a VMware image to run some server stuff. My idea was to do the development on my desktop (using Eclipse + PyDev) and deploy on a VMware server with Ubuntu and the right dependencies. Slow, but it should work, mostly.

So instead, last Friday I installed Ubuntu 7.10 (the only CD lying around) on the T60 and then upgraded it to 8.04 over the network. The “scanning the mirror” error I described earlier struck again, this time because of a corporate HTTP proxy (gee, only the entire Fortune 500 probably uses one: either add proxy settings to the installer or don’t attempt to use the network during installation). Solution: unplug the network cable and let it time out.

Display detection actually worked this time. Anyway, I was only installing 7.10 to upgrade it to 8.04. Due to the “scanning the mirror” error, the installer had conveniently commented out all apt repositories, and of course there’s no GUI to fix that (except gedit). After fixing that and configuring the proxy in various places, I installed some 150MB worth of upgrades and then tried to convince the update manager to show me the upgrade-to-8.04 dialog that various websites assure users should show up. It refused to in my case. So back to the command line. Having had nasty experiences upgrading Debian from the command line inside X, I opted to do this in a virtual terminal (ctrl+alt+f2). Not sure if this is still needed, but it can’t hurt. Anyway, this took more than an hour. In retrospect, downloading and burning an 8.04 image would have been faster.

So far so good. The thing booted and everything seemed to work, except the wireless LAN was nowhere to be seen (a known issue with the driver apparently; I haven’t managed to fix this yet). Compiz actually works and looks pretty cool. I have sound. I have (wired) network.

Almost works as advertised one might say.

Until I plugged the laptop into its docking station and connected that with a DVI cable to the 1600×1200 external screen. Basically, I’m still struggling with this one. Out of the box, it seems impossible to scale beyond the native laptop screen size. What should happen is that either the docking station acts as a second screen or it replaces the laptop screen with a much better resolution. Neither of these happens.

I finally edited xorg.conf to partially fix the resolution issue by adding 1600×1200 as an option. Only problem: compiz (the 3D accelerated GUI layer) doesn’t like this. I can only use this resolution with compiz disabled. If I enable it, it basically adds a black bar to the right and below. I wasted quite a bit of time trying to find a solution, so far without luck, although I did manage to dig up a few links to compiz/ubuntu bugs (e.g. here) and forum posts suggesting I’m not alone. This seems to be mostly a combination of compiz immaturity and X.org autodetection having some cases where it just doesn’t work. With my home setup it didn’t get this far.
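
For reference, the xorg.conf edit amounted to adding the extra mode to the Display subsection, something like the following (the identifier is whatever your existing Screen section uses; mine was the stock Ubuntu one):

    Section "Screen"
        Identifier "Default Screen"
        SubSection "Display"
            Modes "1600x1200" "1400x1050" "1024x768"
        EndSubSection
    EndSection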

My final gripe concerns the number of pixels Ubuntu/Gnome wastes. I ran into this running Eclipse and noticing that, compared to Windows, it includes a lot of white space and ugly fonts that seem to use a lot of space. Screen real estate really matters with Eclipse, due to the enormous amount of information the GUI is trying to present. Check here for some tips on how to fix Eclipse. This issue was emphasized even more when I tried to access my 1400×1050 Windows laptop using Ubuntu’s remote desktop VNC client and the RealVNC server running on Windows. The genius who designed the UI for that decided in all his wisdom to show the VNC session in an application window with a huge & useless toolbar, with a tab bar below that (!), with in it a single tab for my Windows session. Add the Ubuntu menu bar + task bar, and there is no way it can show a 1400×1050 desktop on a 1600×1200 screen without scrollbars (i.e. altogether I lose around 250-300 pixels of screen real estate). Pretty damn sad piece of UI design if you ask me. Luckily it has a full screen mode.

In case you are wondering why I bother to document this: the links are a great time saver next time I need to do this. Overall, despite all the hardware issues, I think I can agree with Mark Shuttleworth now that under controlled hardware circumstances this is a pretty good OS. Windows 95 wasn’t ideal either, and I managed to live with that for several years.