Elasticsearch failed shard recovery

We have a single node test server with some useful data on it. After an unplanned reboot of the server, elasticsearch failed to recover one shard in our cluster and as a consequence the cluster went red, which means it doesn’t work until you fix it. Kind of not nice. If this were production, I’d be planning an extensive post mortem (how did it happen) and probably doing some kind of restore from a backup. However, this was a test environment, which meant an opportunity to figure out whether the problem could actually be fixed.

I spent nearly two hours figuring out how to recover from this in a way that does not involve going “ahhh whatever” and deleting the index in question. Been there, done that. I suspect I’m not the only one to get stuck in the maze of half-truths and well intentioned but incorrect advice out there. So, I decided to document the fix I pieced together, since I have a hunch this won’t be the last time I have to do this.

This is one topic where the elasticsearch documentation is of little help. It vaguely suggests that this shouldn’t happen and that red is a bad color to see in your cluster status. It also provides you plenty of ways to figure out that, yes, your cluster isn’t working and why, in excruciating levels of detail. However, very few ways of actually recovering beyond a simple delete and restore from backup are documented.

However, you can sometimes actually fix things, and with a few hours of googling I was able to piece together something that works.

Step 0 – diagnose the problem

This mainly involves figuring out which shard(s) are the problem. So:

# check cluster status
curl localhost:9200/_cluster/health
# figure out which indices are in trouble
curl 'localhost:9200/_cluster/health?level=indices&pretty'
# figure out what shard is the problem
curl localhost:9200/_cat/shards

I can never remember these curl incantations, so it’s nice to have them in one place. Also, poke around in the log and look for any errors when elasticsearch restarts.
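
If you are on the same kind of rpm based install as I describe further down, the logs typically end up under /var/log/elasticsearch (the exact path and file name depend on your cluster name and setup), so something like this while restarting should show any shard errors flying by:

# watch the elasticsearch log while restarting; adjust the path to your install
tail -f /var/log/elasticsearch/*.log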

In my case the log was pretty clear about the fact that, due to some obscure exception involving a “type not found [0]”, it couldn’t start shard 2 in my inbot_activities_v29 index. I vaguely recall from a previous episode, where I unceremoniously deleted the index and moved on with my life, that the problem is probably related to some index format change between elasticsearch versions some time ago. It doesn’t really matter: we know that somehow that shard is not happy.

Diagnosis: elasticsearch can’t start shard 2 of the inbot_activities_v29 index because of some kind of corruption. Because of that the whole cluster is marked as red and nothing works. This is annoying and I want this problem to go away fast.

Btw. I also tried the _recovery API, but it seems to lack an option to actually recover anything. Also, it seems not to list any information for the shards that failed to recover; in my case it listed the four other shards in the index, which were indeed fine.
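
For reference, the incantation I used looked roughly like this (the index name is from my setup, so substitute your own):

# show recovery status for the shards of one index
curl 'localhost:9200/inbot_activities_v29/_recovery?pretty'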

Step 1 – org.apache.lucene.index.CheckIndex to the rescue

We diagnosed the problem. Red index. Corrupted shard. No backups. Now what?

OK, technically you are looking at data loss at this point. The question is how much data you are going to lose. Your last resort is deleting the affected index. Not great, but at least it gets the rest of the cluster green.
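
For completeness, the nuclear option is a one liner (index name from my setup again; think twice before hitting enter because this removes all data in the index):

# last resort: delete the affected index and everything in it
curl -XDELETE 'localhost:9200/inbot_activities_v29'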

But what if you don’t actually care about the one or two documents in the index that are blocking the shard from loading? Is there a way to recover the shard and nurse the broken cluster back to a working state, minus those apparently corrupted documents? That might be preferable to simply deleting the whole index.

The answer is yes. Lucene comes with a tool to fix corrupted indices. It’s not well integrated into elasticsearch; there’s an open elasticsearch ticket that may eventually address this. In any case, you can run the tool manually.

Assuming a CentOS based rpm install:

# OK last warning: you will probably lose data. Don't do this if you can't risk that.

# this is where the rpm dumped all the lucene jars
cd /usr/share/elasticsearch/lib

# run the tool; you will probably need to adapt the shard path to your setup
java -cp lucene-core*.jar -ea:org.apache.lucene... org.apache.lucene.index.CheckIndex /opt/elasticsearch-data/linko_elasticsearch/nodes/0/indices/inbot_activities_v29/2/index/ -fix

The tool displays some warnings about what it is about to do and, if you are lucky, reports that it fixed some issues and wrote a new segment. Run the tool again and it should report that everything is fine. Excellent.

Step 2 – Convincing elasticsearch everything is fine

Except, elasticsearch is still red. Restarting it doesn’t help; it stays red. This one took me a bit longer to figure out. It turns out that all those well intentioned blog posts that mention the lucene CheckIndex tool sort of leave the rest of the process as an exercise to the reader. There’s a bit more to it:

# go to wherever the translog of your problem shard is
cd /opt/elasticsearch-data/linko_elasticsearch/nodes/0/indices/inbot_activities_v29/2/translog
ls
# note the recovery file; now would be a good time to make a backup of this file because we will remove it
sudo service elasticsearch stop
rm *recovery
sudo service elasticsearch start

After this, elasticsearch came back green for me (see step 0 for checking that). I lost a single document in the process. Very acceptable given the alternative of having to delete the entire index.

Using Java constants in JRuby

I banged my head against the wall for a while before finding a solution that doesn’t suck for a seemingly simple problem.

I have a project with mixed jruby and java code. In my Java code I have a bunch of string constants. I use these in a few places for type safety (and typo safety). Nothing sucks more than spending a whole afternoon debugging a project only to find that you swapped two letters in a string literal somewhere. Just not worth my time. Having run into this problem with ruby a couple of times now, I decided I needed a proper fix.

So, I want to use the same constants I have in Java in jruby. Additionally, I don’t want to litter my code with Fields:: prefixes. In java I would use static imports. In ruby you can use includes, except that doesn’t quite work for java constants. I also want my ruby code to complain loudly when I make a typo in any of these constants. So ruby constants don’t solve my problem, and ruby symbols don’t either (not typo proof). Attribute accessors don’t really work with modules (you need a class for that). So my idea was to simply add methods to the module dynamically.

So, I came up with the following which allows me to define a CommonFields module and simply include that wherever I need it.

require 'jbundler'
require 'singleton'

java_import my.package.Fields

# hack to get some level of type safety
# simply include the CommonFields module where you need to use
# field names and then you can use the defined fields as if
# they were methods that return a string
module CommonFields
  # ruby only fields that don't (yet) exist on the java side
  @ruby_specific_fields = [
    :foooo,
    :bar
  ]
  @ruby_specific_fields.each do |field|
    CommonFields.send(:define_method, field.to_s) do
      field.to_s
    end
  end

  # add a method for each constant in the java Fields class,
  # named after the constant's value
  Fields.constants.each do |field|
    CommonFields.send(:define_method, Fields.const_get(field)) do
      Fields.const_get(field)
    end
  end
end

include CommonFields
  
puts foooo, email, display_name

Basically all this does is add the Java constants (email and display_name are two of them) to the module dynamically when you include it. After that, you just use the constant names and it just works ™. I also added a list of ruby symbols so I can have fields that don’t exist on the java side yet. This works pretty OK, and I’m hoping jruby does the right thing with respect to inlining the method calls. Most importantly, any typo will lead to an interpreter error about the method not existing. This is good enough, as it will cause tests to fail and be highly visible, which is what I wanted.

Pretty neat, and I guess you could easily extend this approach to implement proper enums as well. I spotted a few enum-like implementations but they all suffer from the prefix verbosity problem I mentioned. The only remaining (minor) issue is that ruby constants have to start with an upper case letter and my Java constant names are upper case as well; since I want to turn them into ruby methods, the names themselves won’t do. Luckily, in my case the constant values are camel cased field names that I can simply use as the method names in ruby. So that’s what I did here. I guess I could have lower cased the names as well.

I’m relatively new to ruby so I was wondering what ruby purists think of this approach.

The Gimp

Since getting an iMac in the summer and not spending the many hundreds of dollars needed for a Photoshop license, I’ve been a pretty happy user of Google’s Picasa. However, it is a bit underpowered and lacks the type of features that are useful for fixing contrast, color, and sharpness issues in poorly lit, partly blown out, noisy, and otherwise problematic photos that you end up with if you are shooting with a nice pocketable compact camera, like I do. My Canon S80 is actually not that bad (great lens, easy to stuff in a pocket, fast to unpocket and aim and shoot, nice controls) but it has three major limitations:

  • When shooting automatic it tends to blow out on the highlights, meaning the sky and other bright areas in the photo are white. This means you have to manually set aperture, shutter time, and ISO to get more difficult shots. Most compacts suffer from this problem BTW.
  • The screen and histogram on it are not that useful. Basically you will end up with photos that are too dark and do not use the full available dynamic range if you try to optimize for what’s on the screen. Instead, I’ve been relying on spot metering, measuring different spots and compensating using Ansel Adams style zoning and a wet finger approach (okay, a lot of the latter). This works, but it is tedious.
  • Like most compacts, it is useless at higher ISOs due to the noise. Basically I avoid shooting at 200 or above and usually shoot at 50 unless I can’t get the shot. This means in low light conditions, I need a really steady hand to get workable shots.

So as a result, my photos tend to need a bit of post processing to look presentable. Picasa handles the easy cases ok-ish but I know software can do better. So, after exploring the various (free) options on mac and deciding against buying Adobe Lightroom or Photoshop Elements, I ended up taking a fresh look at the Gimp.

The Gimp is, as you no doubt know, an open source photo/bitmap editor (as well as a really funny character in Pulp Fiction). It comes with a lot of features, a UI that is quite ‘challenging’ (some would say unusable), and some technical limitations. To start with the technical limitations: it doesn’t do anything but 8 bit color depth, which means lossy operations like editing contrast or running filters lose a lot more information due to rounding errors that add up the more you edit. It doesn’t do adjustment layers and other forms of non destructive editing, which adds to the previous problem. It’s slow. Slow as in it can take minutes to do stuff like gaussian blur or sharpening on a large image that should be near real time in e.g. Photoshop. It doesn’t support popular non RGB color spaces (like LAB or CMYK, though it can be made to work with them if you need to). And it doesn’t come with a whole lot of filters and user friendly tools that are common in commercial packages. Finally, the UI is the typical result of engineers putting together a UI for features they want to show off, without agreeing on such things as an overall UI design philosophy or any kind of conventions. It’s nasty, it’s weird in plenty of places, it’s counterintuitive and it looks quite ugly next to my pretty mac apps. But it sort of works, and you can actually configure some of its more annoying defaults to be more reasonable.

So there is a lot lacking and missing in the Gimp and plenty to whine about if you are used to commercial grade tooling.

But, the good news is (beyond it being free) that you can still get the job done in the Gimp. It does require a creative use of the features it has. Basically, the Gimp provides all the basic building blocks to do complex image manipulation but they are not integrated particularly well. There are only a handful of other applications that provide the same type of features and implementation quality. Most of those are expensive.

In isolation the building blocks that the Gimp provides are not that useful. You have to put them together to accomplish tasks, often in not so obvious ways (although for anyone with a solid background in advanced photo editing it is not that much of a challenge). Doing things in the Gimp mainly involves understanding what you want to do and how the Gimp does things. It’s really not very forgiving when you don’t understand this.

Here are some things that are generally in my work flow (not necessarily in this order) that work quite well in the Gimp. I just summarize the essentials here, since you can find lengthy tutorials on each of these topics if you start Googling; there is also lots of potential for variation and for perfecting skills in particular areas:

Contrast: duplicate layer, set blend mode to value (just light, not color), use levels or curves tool on the layer to adjust the contrast. Fine tune the effect with layer transparency. This basically leaves the colors unmodified but modifies brightness and contrast.

Improve black and white contrast with color balance: basically in black and white photography you can use a color filter in front of the lens to change the way light and dark areas affect the negative. E.g. a red filter is great for getting some nice detail in e.g. water or sky. You can achieve a similar effect with the color balance tool and a layer that has its mode set to value. This is nice for creating black and white photos but also for dealing with things like smog (a mostly red haze -> deemphasize the red) in color photos, or for getting some extra crisp skies. You can examine the individual color channels to find out which have more detail and then boost the overall detail by mixing the channels in a different way. This will of course screw up the colors, but you are only interested in light and dark here, not color. So duplicate layer, set mode to value, and edit the layer with the color balance tool. Some basic knowledge of color theory will help you make sense of what you are doing, but random fiddling with the sliders also works fine.

Local contrast: duplicate layer, use the unsharp filter to edit local contrast by setting radius to something like 50 pixels and amount to something like 0.20. Basically this will change local color contrast and change the perceived contrast in different areas by locally changing colors and lightness. If needed, restrict the layer to either value or color mode.

Contrast map: duplicate layer, blur at about 40 pixels, invert, desaturate, set layer to overlay. This is a great way to fix images with a high dynamic range (lots of shadow and highlight detail, histogram is sort of a V shape). Basically it pushes some of the detail towards the center of the histogram, thus compressing the dynamic range: it brightens dark spots and darkens bright spots. The blurring is the tricky bit since you can easily end up with some ghosting around high contrast areas. Fiddling with the pixel amount can fix things here. Also, using the opacity on the layer is essential since you rarely want to go with 100% here.

Overlay to make a bland image pop: duplicate layer, set mode to overlay. This works well for photos with a low dynamic range. It basically stretches detail towards the shadow and highlights and enhances both contrast and saturation at the same time. Skies pop, grass is really green, etc. Cheap success but easy to overdo. Sort of the opposite effect of contrast map.

Multiply the sky. Duplicate layer, set mode to multiply, mask everything but the sky (try using a gradient for this or some feathered selection). This has the effect of darkening and intensifying the sky and is great for photos that were overexposed. Also works great for water (though you might want to use overlay).

Color noise: duplicate layer, set mode to color, switch to the red channel and use a combination of blur, and noise reduction filters to smooth out the noise. Selective gaussian blur works pretty well here. Repeat for the green and blue channels. Generally, most of the noise will be in the blue and red channels (because for every cluster of 4 pixels in the sensor, two are green, i.e. most of the detail is in the green channel). Basically, you are only editing the colors here, not the detail or the light so you can push it quite far without losing a lot of detail. Apply a light blur to the whole layer to smooth things out some more.

Luminosity noise: duplicate layer, set mode to value, and, as with color noise, work on the individual channels to get rid of noise. You will want to go easy on the blurring since this time you are actually erasing detail. Target the channels in this order: red, blue, green (in order of noisiness and reverse order of amount of detail). Stop when enough of the luminosity noise is gone.

Color: duplicate layer, set blend mode to color, adjust color balance with curves, levels or color balance tool.

Saturation: duplicate layer, set mode to saturation, use the curves tool to edit saturation (try pulling the curve down). This is vastly superior to the saturation tool. You may want to work on the individual color channels, though this can have some side effects.

Dodge/burn: create a new empty layer, set mode to overlay, paint with black and white on it using 10-20% transparency. This will darken or brighten parts of the image without modifying the image. You can undo with the eraser. Smooth things with gaussian blur, etc. This is great for highlighting people’s eyes, pretty reflections, darkening shadow areas, etc.

Crop: select rectangle, copy, paste as new image, save. Kind of sucks that there is no crop tool but this works just fine.

Sharpening: A neat trick I re-discovered in the Gimp is high-pass sharpening. High pass filtering is about combining a layer with just the outline of the bits that need sharpening with the original photo. This is great for noisy photos since you can edit the layer with the outline independent from the photo, which means that you end up only sharpening the bits that need sharpening. How this works: copy visible, paste as new image, duplicate the layer in the new image, blur (10-20px should do it) the top layer, invert, blend at 50% opacity with the layer below. You should now see a gray image with some lines in there that represent the outlines of whatever is to be sharpened. This is called a high pass. Copy visible, paste as new layer in original image, set the high pass layer’s blend mode to overlay. Observe this sharpens your image, tweak the effect with opacity. If needed manually delete portions from the high pass that you don’t want sharpened. Tweak further with gaussian blur, curves, levels, unsharp mask on the high pass layer. Basically this is a very powerful way of sharpening that gives you a lot more control than normal sharpening filters. But it involves using a lot of Gimp features together. It works especially well on noisy images since you can avoid noise artifacts from being sharpened.

A lot of these effects you can further enhance by playing with the opacity and applying masks. A key decision is the order in which you do things and what to use as the base for a new layer (either visible, or just the original layer). Of course some of these effects can work against each other or may depend on each other and some effects are more lossy than others. In general, paste as new image and paste as a new layer together with layer blending modes like color, value, or overlay are useful to achieve the semi non destructive editing that you would achieve with adjustment layers in Photoshop. You can save layers in independent files and edit them separately. And of course you don’t want to lose any originals you have.

Also nice to be aware of is that most of the effects above you can accomplish in other software packages as well. In Photoshop, most of the tricks above give you quite a bit more control than the default user friendly tools (at the price of having to fiddle more). Some other tools tend to be a bit underpowered. I’ve tried to do several of these things in paint.net under windows and was always underwhelmed with the performance and quality.

Finally, there exist Gimp plugins and scripts that can do most of the effects listed above. I have very little experience with third party plugins and I am aware of the fact that there are a huge number of plugins for e.g. sharpening and noise. However, most of these plugins just do what you could be doing yourself manually, with much more control and precision. Understanding how to do this can help you use such plugins more effectively.

To be honest, my current workflow is to do as much as possible in Picasa and I only switch to the Gimp when I am really not satisfied with the results in Picasa. Picasa does an OK but not great job. But with hundreds of photos to edit, it is a quick and dirty way to get things done. Once I have a photo in the Gimp, I tend to need quite a bit of time before I am happy with the result. But the point is that quite good results can be achieved with it, if you know what to do. The above listed effects should enable you to address a wide range of issues with photos in the Gimp (or similar tools).

Security

Knowing slightly more than average about computers, I am regularly assumed to be an expert by family or friends. A returning question is which virus scanner to use. This one always causes me some trouble because I know the right answer is to spend some money on e.g. Norton or McAfee or to experiment with one of the free solutions.

However, I don’t really know anything about the subject because I don’t use virus scanners at all. I don’t need them. But this is hardly something to recommend to somebody who has no clue what to do here. At work we have Norton Antivirus wasting a lot of my time on my slow laptop disk. Each time it boots, it insists on re-examining the same crap it hasn’t found a virus in before, ever. It so happens that this is a particularly bad time to access the disk since a dozen processes are trying to start. So, I usually kill the process to make my laptop boot in a reasonable time frame (< 10 minutes). I think over the past few years, Norton easily wasted several working days of my time by making me wait. Arguably the economic damage of such products is worse than the problems they supposedly prevent.

Installing virus scanners is not a security measure but merely a form of deniability for system administrators. If the shit hits the fan, they can point to Norton and Norton can point to the fine print in their license (which says good luck if the shit hits the fan). Norton’s business model is to provide deniability to companies. The price is paid in dollars and lost productivity. Norton will easily transform any laptop into a dead slow machine, especially if configured for maximum deniability (scan everything, always, every time, all the time).

I know I’m OK because I know how not to get infected, which is why I don’t run any security products at home. A little bit of hygiene goes a long way. Most virus infections are ignorant users clicking on stuff they shouldn’t be clicking. Drive by infections are also common with risky things such as ActiveX and an exposed, unpatched Internet Explorer. That’s no concern for me because I A) use firefox, B) don’t visit suspicious sites and only use up to date, mainstream plugins, C) have adaware filter out a lot of crap, and D) keep my shit up to date. Sure, that still leaves some room for something to slip by, but I’ve never been infected by anything since I stopped accepting floppies from strangers (a long time ago).

A few days ago, some download included Norton Security Scan, a free scanning tool designed to make you buy the full version. Since this computer has been exposed to the nasty internet for a few years now, I thought let’s see what it comes up with.

Well:

  • A tracking cookie in my browser from some ad site. Tedious but not really a risk. It also shows how crappy this tool is, because I have way more advertisement related cookies that I probably should remove. However, I’m too lazy to keep track of all my cookies. Once in a while I clean them up by deleting cookies for any domain I don’t know or care about.
  • Two infected mails in thunderbird’s trashcan (w32.netsky.p@mm!enc worm). That’s risky, if you open the attachment. Using Thunderbird prevents that from happening automatically. Besides, all my mail is handled by Google these days, which uses server-side malware and virus filters, so there’s no reason for me to install a virus scanner. These two mails probably predate me starting to use gmail, which was in 2005.

So: two old and obvious malware mails that I had already deleted (or that thunderbird had filtered) and a tracking cookie. No worms, no rootkits, no spyware, no adware. Just a failed attempt to make me open some shitty attachment and a cookie. Thanks for confirming what I already knew, Norton. I uninstalled it.

This doesn’t prove anything of course. There’s no perfect security. But so far so good. I don’t have a firewall since I have a NAT router which stops any incoming request except the ones I’ve explicitly told it to let through. I know everything outgoing is OK because I know what software I install. My router doesn’t do UPnP configuration so I control everything manually. I don’t have a virus scanner active since all my mail goes to google, which already scans my mails. All my downloads are of course a risk, so I take care to only download from respectable sources. I actually have clamwin antivirus installed to manually scan files if I don’t trust the source, but I rarely have a need for it and it has never found anything. Firefox 3 should warn me against malware sites insofar as they are able to keep their filters up to date. Arguably, if they screw up, Norton probably isn’t doing much better.

So in short, I’m keeping my money and will take my chances. If something goes wrong, I’ll only have myself to blame and will be back online in no time because I do have backups of all my important files. The last time this happened was when a particularly nasty piece of malware struck me: windows activation failed on a fully legal version of windows XP pro after installing a new usb hard disk.

Zurich Photos

Last week I was in Zurich and I took some photos, which have now been added to my photo album. Most of them were taken in the evening. Some of them are pretty nice considering I had no tripod and it was quite dark.

Zurich lake in the evening

For example, this one was taken from a bridge in Zurich with the shutter open for 1.3 seconds. Despite this, it turned out quite OK. I did quite a bit of color correction and curves tweaking on this picture, and it turned out quite a bit brighter than I remember the scene being, or than the original below.

Before editing

Antec SmartPower 2.0 500 Watt review

What happened yesterday was that I got home on monday and tried to restore the pc from standby (where it had been since friday morning). I then went into the kitchen to get some food and when I came back it was off. So I powered it on again and it went through a weird on off cycle that kept repeating until I switched off the power supply with its button. I then unplugged and replugged the power cable, checked everything else and powered on. Same thing. By this point I was quite certain the power supply was toast and removed it from the case. I spent the rest of the evening reading a book and watching some TV, things I normally don’t do that often.

Today I bought an Antec SmartPower 2.0 500 Watt unit. It seems my first ever attempt at connecting it to various components was successful. At least, it’s been running for fifteen minutes now (while writing this) and everything seems OK. Quite a relief that everything is working again. I’ve so far shied away from building my own PCs since I prefer to get the thing pre-assembled, tested and with warranty. The warranty had expired and the company that did the assembling no longer exists, despite doing a fine job (the PC had been stable for over a year until the hardware failed). So, I had no choice but to get off my ass and fix things myself.

Connecting things is not that hard as long as you know what is connected to what in the first place. This is reasonably fool proof since connectors come in various shapes that line up nicely in only one way. To make sure I wouldn’t miss anything, I shot a few pictures of the internals of my PC before disconnecting everything. I consulted these pictures a few times to ensure everything was connected correctly during the whole procedure. So, I guess that counts as useful advice for people in a similar situation. Disconnecting things was a bit cumbersome since the case is full of sharp bits and pieces that tend to get in the way when you try to unplug stuff. My right hand has a few scratches but otherwise, I’m fine.

The package the new unit came in was quite nice, for a power supply. It has two useful features: modular cables and two fans for improved cooling of the unit and the rest of the machine. The modular cables connect to four sockets on the unit. Several cables are provided with the package for all the usual stuff. The nice thing about this is that it allows you to minimize the amount of cable and it also provides some flexibility with respect to how you route the cables. I used three of the four sockets: one for the video card (a dedicated PCI Express cable is provided), one for the sata drive and one with three connectors which I used for the dvd burner, the front side of the case (power button and some other stuff) and the floppy drive, which I’ve never actually used. Having a dedicated cable for the video card seems useful since it is an nvidia 7800 card. These are notorious for sucking a lot of power (hence the 500 watt) and having a dedicated cable ensures it doesn’t have to share the cable with other devices (which presumably helps keep things stable).

Despite the two fans, noise is quite ok (about the same as before) since only the internal fan is active most of the time. Besides, my cpu fan and video card fan can be quite noisy too. I haven’t heard the second one yet but I’m sure I will once I do some gaming. If you are looking for noise free, buy another unit but otherwise things are quite alright. The reason I chose this one was because it had a nice package, more or less the same specs and the above mentioned nice features. There’s not much more to it. Without access to a PC I sort of omitted my usual routine of doing elaborate comparison of various alternatives and trusted the nice people at Verkkokauppa to put the good stuff on the top shelf. Besides, this was one of the few 500 watt units they had.

another ubuntu installation test

Currently ubuntu is my favourite linux distro. I don’t use it day to day, but I do install it and other distros from time to time to see where linux is going. I’ve been doing this a couple of times per year since about 1995, when I was biking home from university with 30 or so slackware floppy disks in my bag. So, I am not a newbie!

Things have been getting a lot easier since then. I bought a new PC last year and recently wiped everything from the old one so I can sell it, eventually. It’s an AMD 2200+ with 1 GB of ram, 2 hard disks (80 and 120 GB), an nvidia 4400, a cd burner, a dvd burner and some minor heat related instability (works fine with the cover off). The stability issue is the main reason I still have this machine; I just feel bad about selling something with issues. If you are interested in taking over the hardware, let me know. I promise full disclosure of any hw issues I’m aware of and you’ll inherit some fine components at a bargain price (negotiable). It should still make an excellent linux desktop or server. I suspect it is either the power supply or the cpu fan that is the issue. Should be easy to fix by somebody with more hardware tinkering experience than me.

In short, it’s ideal for one of my linux tests. So I downloaded the ubuntu 6.10 desktop iso, burned it and popped in the cd. Below is a chronological account of what I did and how that worked out.

  • It boots into a graphical desktop from the install cd. This is nice but it also takes about five minutes for it to properly boot.
  • On the desktop sits a big install icon. I guess I’m supposed to click it even though I already selected install in a previous menu.
  • The wizard launches and leads me through new and improved dialogs for setting the usual stuff (keyboard, language, user and password) and then presents me with a menu with three options: use the first hard drive, use the second one, or manually partition. An option to use both seems to be missing, so I go manual. This seems to work nicely, although I now find out that I should have picked a different cd if I had wanted soft raid 0. Ah, too bad.
  • I partition it to have root and swap on drive 1 and home on drive 2 (the 120 GB one).
  • I click next and the installer goes to work. This takes about 20 minutes, during which I have the opportunity to use the fully functional desktop that runs the installer. It seems to include a couple of games, a browser and some other stuff. Fun, I guess, but cdroms don’t respond nicely to multiple processes doing stuff at the same time, so I leave it alone.
  • Done. Reboot!
  • Oops, it’s 2007 and ubuntu still fucks up display configuration. I smelled a rat when I wasn’t presented with an option to configure the screen, and indeed ubuntu botched it completely. It sort of works, but 1024×768 at 60Hz sort of sucks on a CRT that can do 1280×1024 at 85Hz. It hurts my eyes! And there’s no option to fix it in the menu. There is a resolution dialog, but this only allows me to select even worse resolutions. Naturally, the excellent binary driver nvidia provides is not installed, so I can’t use any of the high end features of my once very expensive nvidia card. Come on guys, fix this. Both monitor and video card are plug and play; you have no good excuse to not recognize either. I guess removing the, admittedly, very crappy monitor and video card configuration was a bad idea, because now I’ll have to do things manually and find out how much X configuration still sucks.
  • It informs me there’s 160 MB worth of updates so I do the autoupdate. Apparently there were quite a few issues with 6.10 when it was released, causing some people to recommend the previous version. Let’s see how good things are after the update.
  • Wow! A linux update that requires me to reboot. That’s a first. I guess it is getting better at reproducing the windows experience :-). OK, let’s do this.
  • Yikes! So I click around a bit and suddenly the desktop freezes. The mouse still works but clicking anywhere doesn’t seem to work as expected. I can switch to the text console using the ctrl+alt+f1 keybinding. So a default install with no custom software installed produces a freeze by just clicking around in the menus a bit.
  • Ctrl+alt+del has not been implemented so I use the good old reboot command from one of the consoles: “sudo reboot now”. Life is great, 0 things installed/configured and I need a command line already. This had better not happen again!
  • OK, rebooted. Let’s fix the fucking screen first. Synaptic package manager sounds right, let’s search for nvidia. OK, clickety, nvidia-glx sounds right. Apply means install, I guess. So much for usability :-).
  • Great, the video driver is installed, but X was not restarted. I guess that requires another manual override. I know there are ways to do this without a restart but the option is not provided in the shutdown screen. So here goes another required reboot. This is feeling more and more like windows by the minute. Random freezing, check. Required reboots (three so far), check!
  • Oops, I still can’t fix the default resolution. This is bad! Effectively I am now stuck with a broken desktop that gives me headaches. I will need to get under the hood to fix it. I know how to do it but it just pisses me off that I have to do this on what is supposedly the most user friendly linux distro.
  • As explained here, you need to fix things from the commandline. The good news: whoohoo, 85 Hz. The bad news: still no 1280×1024 in the screen resolution thingy.
  • This page has the instructions for launching the good old x configuration tool (see the sketch after this list). This sucker hasn’t changed substantially in ten years and insists on leading you through a whole bunch of screens for mouse and keyboard (which were detected just fine). It brings tears to my eyes that this tool actually wants me to enable the emulate three buttons option. Anyway, hitting enter works as advertised and I now run at the right resolution and refresh rate.
  • Let’s launch firefox… seems to work and it’s 2.01! Good!
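
In case it helps, the command line incantations behind those two configuration bullet points boiled down to roughly this on my machine (assuming the stock gnome setup; if you log in via kdm, restart that instead of gdm):

# regenerate /etc/X11/xorg.conf with the old school configuration wizard
sudo dpkg-reconfigure xserver-xorg
# restart X by restarting the display manager (run this from a text console)
sudo /etc/init.d/gdm restart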

So much for the default install. Overall, this is not good. X configuration after ten years of linux distributions is still a mess: the basic job of configuring generic plug and play monitors, as well as the installation of essential drivers, still doesn’t work. Ubuntu has made this more “user friendly” by removing the tools needed to fix it from the setup, which just makes them harder to find.

On the other hand, once installed, ubuntu is a pretty nice distribution. Of course the cd install is a bit limited but I’m fixing this right now by installing a few hundred MB worth of add on packages which are now being downloaded over the network.

Some things I noticed good and bad:

  • GOOD Graphical install seems more user friendly.
  • NOT SO GOOD The graphical install takes ages to launch. Effectively it adds five to ten minutes to the whole installation procedure.
  • VERY GOOD The graphical install boots into a fully functional CD with network connectivity and a browser. Though I did not need it this time, I have had to abort installations in the past in order to boot into windows to be able to access online documentation. So this is great!
  • EXTREMELY BAD Screen configuration is not part of the setup procedure and the automated routines fail miserably. Predictably my monitor and graphics card are misconfigured and the only fix involves entering commands on a commandline. I think it is safe to say that this part of the installation is not compatible with most end user capabilities. I don’t see how a non technical user could fix their screen settings without somebody holding their hands while they execute cryptic commands on the commandline. This is a regression from last time I installed ubuntu.
  • BAD I don’t like that the screen resolution tool provides no obvious references for fixing the extremely likely case of a misconfigured X desktop. Both the binary nvidia driver and a properly configured CRT are things most users would want to fix.
  • BAD installing nvidia-glx via synaptic does not trigger the additional configuration of x.org to actually use it that is obviously implied by this action.
  • BAD X froze on me after the updates but before I had configured anything. I had no obvious way to fix this other than to go to the command line and execute a command that no end user should have to be aware of. If I were an end user, I would have reset the pc.
  • NOT SO GOOD the shutdown screen does not contain an option to restart X. True, you shouldn’t need it but I needed it anyway during screen configuration so it should be there.
  • BAD synaptic is hopelessly complicated and lacks a few simple coarse grained options like just install the bloody KDE desktop without me micromanaging all the details.

BTW. I still think the shitty brown sucks.

UPDATE. The install of additional software went fine, but using the hibernate option resulted in an obscure error. If it doesn’t work, don’t put it in the fucking shutdown screen!

Upgrading to wordpress 2.1

Whoohoo, wordpress 2.1 was released this morning. I’m upgrading tonight so expect a few hours of downtime/reduced functionality.

Update.

Well that went relatively ok. It seems a few of my plugins needed upgrading. Sadly, no compatible upgrade was available for widgets which I used to beautify my sidebar. But the important stuff is still there. I removed a few other plugins that I was hardly using.

Oddly, the database backup plugin is no longer included. I’ll look for a replacement.

In terms of functionality, there don’t seem to be many features that are vastly different.

The editor is not working as advertised. It isn’t autosaving and the wysiwyg is missing in action. I actually like the current non wysiwyg version better, but still. I’m wondering if I did anything wrong.

Update 2.

I’ve deleted all files except my plugins (google analyticator and openid delegation), configuration and uploads. Then I re-uploaded the wordpress 2.1 stuff, this time ensuring that there is no cruft from previous installations. This should have fixed the editor stuff but didn’t. I don’t see tabs for switching to wysiwyg. Also, there does not seem to be any spellchecking except for the default firefox stuff. Suggested solution: allow sites to abuse javascript by messing with stuff they shouldn’t be touching. At least that is what is claimed here: http://wordpress.org/support/topic/101716?replies=9. Only thing is that it doesn’t work.

Update 3.

OK, found the problem. You need to toggle “use visual editor” in the user profile. Weird place for such an option and an obvious usability problem.

Nvidia tv out aspect ratio trouble & workaround

For a few months I’ve been putting off upgrading my nvidia video card drivers from the 80.21 version to the 9x.xx series because of an issue with tv out. Anyway, there are a few other bugs in the 80.xx series that have been annoying me and several nice features in the new 9x.xx series, so I upgraded anyway.

It seems nvidia introduced a bug/feature in the 9x.xx series that fucks up the aspect ratio of tv out. Essentially it detects when the overlay used for the full screen signal on the tv does not have a 4:3 aspect ratio (e.g. a dvd movie would be 16:9 or even wider) and then ‘compensates’ by scaling it to 4:3, unless the resolution of the overlay is less than 640×480. Why is this wrong: the s-video signal is always 4:3, even if you have a widescreen tv. If the movie is widescreen (e.g. 16:9 or wider, as is common with dvds), black bars should be included in the signal rather than stretching the image to 4:3. This used to work correctly but is now broken in the new driver.

There seem to be a lot of people complaining in the various nvidia forums about the aspect ratio of the tv out signal. As usual, most of the suggestions are useless. Essentially this is an unfixed bug in the driver that the 80.xx series did not have, and so far nvidia has not fixed it. There is no magic combination of ‘correct’ settings in the new control panel that fixes the issue. However, one of the suggested solutions provides a usable workaround that involves using ffdshow to add the black bars so that the resulting image is always 4:3. Ffdshow is a popular mpeg4 decoder available on sourceforge that comes with a bunch of useful filters that you can use to postprocess the output of the decoder. This works around the problem by ensuring the aspect ratio of the overlay is always 4:3, so that the driver thinks it does not need to be scaled.
To do this:

  • open ffdshow configuration
  • enable resize & aspect ratio filter
  • configure it as follows
    • toggle resize to on
    • select ‘specify size’ and fill in 1024×768. In the various forum posts everybody seems to recommend 640×480. However, most nvidia cards have a tv out chip that supports 1024×768 and this seems to be the default resolution used. Both resolutions have a 4:3 aspect ratio, which seems to be the main point. Also, the native resolution of 1024×768 is what the image will be scaled to anyway, so scaling it to 640×480 means it gets scaled twice before being sent to the tv!
    • set aspect ratio to keep original aspect ratio. This ensures that ffdshow adds black bars rather than stretching the image to fill the 1024×768.

OK, this addresses the issue, sort of. A limitation is of course that it only works for content actually passing through ffdshow. This excludes, for example, dvds. However, dvd playing software probably exists that can do similar things. This trick is also compatible with another little trick I apply to get a little extra resolution out of my widescreen (16:9) tv. This one requires media player classic (some other players probably have a similar feature).

  • Open a movie, right click on the picture and select pan & scan – scale to 16:9.
  • This scales the movie to an anamorphic aspect ratio; it stretches the picture vertically. Due to the ffdshow config above, this means the black bars are smaller. In other words, fewer pixels are wasted on black bars and more go to the actual picture.
  • This looks bad on both your monitor and tv, don’t worry.
  • Usually a widescreen tv has an option to properly display anamorphic dvds by ‘unstretching them’. Usually it is called something like wide (my tv). It does the opposite of scale to 16:9 on your tv. Use it to ‘fix’ the image on your tv.
  • Why do this? You get a slightly better resolution due to the fact that a larger portion of the 4:3 signal sent to your tv consists of actual movie pixels rather than black bars. The slight detail loss due to stretching and unstretching is more than compensated by the extra pixels you gain.

An obvious improvement would be to actually scale to a 16:9 resolution in ffdshow and then let the driver scale it to 4:3. I tried this of course but then the driver compensates by cutting away a bit on both sides to make the image scale to 4:3. DOH!

ubuntu breezy to dapper upgrade

I still have my old pc sitting next to my new pc. A few months back I installed ubuntu on it. Since then it has just sat there doing nothing. Occasionally I boot it into windows to retrieve some file I forgot to migrate. These occasions are getting very rare now. Soon it will be time to dispose of the box.

Anyway, I’ve been reading that ubuntu breezy is soooo 2005 and that ubuntu dapper is all hot and new. So I’ve attempted an upgrade. Forget about user-friendliness; the procedure for this requires command line usage. But I’ve handled Debian before, so that does not scare me. It’s not like the ubuntu installation is important to me anyway (and yes, I’ve managed to mangle quite a few Debian installs attempting stuff like this; in my experience apt-get dist-upgrade is dangerous stuff).

For the record, it’s a vanilla ubuntu install with the kubuntu package and the kdm login manager added. It should work just fine. Other than that, I did nothing to it beyond getting it to work properly (which I eventually managed, as reported earlier) and installing Java and eclipse.

Here’s the proper procedure: use ctrl+alt+function keys to find a non graphical console. The update procedure will include all your X, Gnome and KDE stuff. Replacing it while it is running is not a good idea IMHO (it might actually work but I didn’t care to find out the hard way this time).
OK, now log in. Fix /etc/apt/sources.list by replacing all occurrences of ‘breezy’ with ‘dapper’. Type sudo apt-get update. This updates your package database with the dapper packages. Now (fingers crossed) type sudo apt-get dist-upgrade. Apt-get now asks you if you are sure that you want to download 750MB (in my case) and really want to install or upgrade 1000+ packages. I said yes; you should say no unless you are sure you want this. Since then it has been downloading and installing stuff by itself. Cryptic messages about packages installing and configuring themselves fly by way too fast to read. Clearly, stuff is happening!
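
For reference, the whole procedure boils down to something like this (the sed one liner is just one way to do the search and replace; run at your own risk):

# point apt at dapper instead of breezy
sudo sed -i 's/breezy/dapper/g' /etc/apt/sources.list
# refresh the package database with the dapper package lists
sudo apt-get update
# download and upgrade everything (roughly 750MB and 1000+ packages in my case)
sudo apt-get dist-upgrade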

Now, I’m writing this while it is happening. Since that takes quite some time, let me elaborate on my expectations. As said earlier, I’ve dist-upgraded debian installs before. Usually to testing, and I even got to unstable once (bad idea, it really was unstable). Usually it works fine, provided both the starting source and target of the procedure are stable (Debian testing by definition is not, so your mileage will vary). Worst case for me was ending up with a situation where nothing worked anymore. Best case, it just worked. Ubuntu claims to be enterprise ready. If it fails here, it is not.

OK, it finished. The whole process took about an hour (excluding the download). I did a sudo reboot (I’m a windows user).

Problem #1: ubuntu boots nicely to the text console. Solution: it’s not a bug but a feature, ubuntu remembered that I was on a text console when I rebooted. Good thinking?!

Whoo, new kubuntu login screen. No more nasty shades of shitty brown. Oh wait, this is just the default KDE look (nice). Log out, log in to gnome. Looks like shitty brown is back on the menu :-(.

In short, the upgrade seems to have worked just fine. The few preferences I had seem to have migrated, including my X settings, my manual install of java and eclipse, and a terminal icon I dragged to the desktop.

Other than this, my main points from my previous review still stand. Despite some slight improvements, the menu layout is a mess, the theme looks shitty and the command line is still required for non trivial stuff like performing an upgrade or configuring X properly (though admittedly I did not try the new installer; I’m merely assuming it doesn’t do a better job than the previous version).

One afterthought: I installed koffice, but the icons only show up in the menu when I use KDE. I guess this unified menu shit is only skin deep.