Maps on Ovi

Both Ovi Maps, our maps and navigation client for S60, and the companion website Maps on Ovi (or MoO, as we refer to it internally) received upgrades in the past week. Maps 3.0 is a solid release with lots of good new features, and you will probably want to install it if you are still using Maps 2.0 on your Nokia phone. Maps on Ovi is the accompanying website, featuring such niceties as synchronizing routes and POIs from the site to your phone via Ovi, as well as a new Find Places feature, which is what my colleagues and I have been slaving away on for the past few months (particularly the places bubble that shows up on the map for some POIs).

So go check it out here: maps.ovi.com!

Our Find Places feature is still quite minor at this point, and the whole site is of course still in beta, with plenty of known issues and rough edges being worked on. Improvements are coming, and the site is already perfectly usable. Last Monday was the big 1.0 for our team and our first real publicly available feature set, which we will be building on in the near future. Getting it out was stressful, and part of my work in the next few weeks is helping to make it less stressful.

My personal technical contributions are limited to the content provisioning from various vendors such as Lonely Planet and WCities. You can find the same content in the Nokia Here and Now client for S60, which is currently in Nokia’s Beta Lab, as well as on the device if you buy any of the premium content packages.

For the past few months I’ve been working with lots of highly talented people slaving away on the frontend and JavaScript work, as well as on a pretty neat server-side architecture. I can’t reveal much about that, except to say that cool stuff will be coming out of Berlin. So keep following us on, for example, the Nokia YouTube channel, where our marketing people regularly post material, including a video featuring me and another featuring Christian del Rosso that I reported on here earlier.

Website reorganization

My old web site

I just made a few changes that effectively retire the static portion of my website. For several years a nice, funky, handcrafted, CSS-styled bit of HTML lived at www.jillesvangurp.com and publications.jillesvangurp.com. The design had been more or less the same since 2003, and the content was not evolving much either. In fact, most of the activity on my domain has been on blog.jillesvangurp.com, which is powered by WordPress.

So as of now, www.jillesvangurp.com is equivalent to blog.jillesvangurp.com. Visiting either will get you to the front page of my blog. The relevant portions of the old www.jillesvangurp.com have been migrated to their own WordPress pages. You can find links to these pages in the sidebar. I’ve taken care not to break outside links to important parts of my old site (publications).

Simple note encrypt/decrypt with AES in JavaScript

Inspired by the hype surrounding the iPhone and web applications, I hacked together a nice little toy to encrypt and decrypt text using AES. I borrowed the AES implementation from here and basically wrote a somewhat nicer UI for it. I still need to integrate SHA-1 hashing of passwords, which the aes.js author suggests is a bit more secure than his current method.

I have no idea whether it will work in the iPhone browser, since I’ve only tested in Firefox. It partially works in IE7, and I have no desire to spend time finding out why it fucks up. Suggestions to improve my little JavaScript hack are of course welcome.

BTW, the password for the default content is: “secret”.

Using rsync for backup

As you may recall, I recently had a nice incident that made me really appreciate being able to restore my data from a backup. Over the years I’ve sort of cobbled together my own backup solution using rsync (I use the Cygwin port on Windows).

First, a little about hardware. Forget about using CDs or DVDs; they are just too unreliable. I’m currently recovering data from a whole bunch of CDs and am horrified to discover that approximately one third of them have CRC errors. Basically, the light-sensitive layer deteriorates to the point that the disc becomes unreadable, sometimes within as little as two years. I’ve used various brands of CDs over the years, and some have higher failure rates than others, but no brand seems to be 100% reliable. In other words, I’ve lost data stored on pretty much every CD brand I’ve ever tried. Fujifilm (1-48x) and unbranded CDs are particularly bad (well over 50% failure rate); on the other hand, most of my Imation CDs seem fine so far. Luckily I didn’t lose anything valuable or irreplaceable, but it has made it clear to me not to trust this medium for backups.

So I’ve started putting money into external hard drives. External drives have several advantages: they are cheap, they are big, and they are much more convenient. So far I have two USB external hard drives: a 300 GB Maxtor drive and the 500 GB Lacie Porsche drive I bought a few weeks back. I also have a 300 GB drive in my PC. Yes, that’s 1.1 TB altogether :-).

The goal of my backup procedure is to be ‘reasonably’ safe. Technically, if my apartment burns down I’ll probably lose all three drives and all the data on them. Moving drives offsite is the obvious solution, but that also makes backups harder. Reasonably safe, in my view, means that my backed-up data survives total media failure of one of the drives and gives me an opportunity to get back to the reasonably safe state. By my data I mean the data that really matters to me: anything I create, plus movies, music, photos, bookmarks, etc.

This data is stored in specific directories on my C drive and also in a directory on my big Lacie drive. I use the Maxtor drive to back up that directory, and use the remaining 200 GB on the Lacie drive for backing up stuff from my C drive.

All this is done using commands like this:

rsync -i -v -a --delete ~/photos/ /cygdrive/e/backup/photos >> /cygdrive/e/backup/photos-rsync.txt

This probably looks a bit alien to a Windows user. I use Cygwin, a port of much of the GNU/Linux toolchain that layers a more Linux-like filesystem on top of the Windows filesystem. So /cygdrive/c is just the equivalent of good old c:\. One of the ported tools is ln, which I’ve used to make symbolic links in my Cygwin home directory to the stuff I want to back up. So ~/photos actually points to the familiar My Pictures directory.
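The symlink setup is a one-time step. A sketch of what it looks like (the Windows path below is an example; substitute your own profile directory):

```shell
# One-time setup (sketch): expose the Windows "My Pictures" folder under a
# short cygwin path so the backup commands stay readable. The path is an
# example; adjust it to your own profile directory.
PICTURES="/cygdrive/c/Documents and Settings/jilles/My Documents/My Pictures"
ln -s "$PICTURES" ~/photos
ls -l ~/photos   # the listing shows the link and its target
```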

Basically, the command synchronizes the first directory to the second: the flags ensure that the content of the second directory is identical to that of the first after execution. The --delete flag allows rsync to remove files that are no longer in the first directory. Rsync is nice because it works incrementally, i.e. it doesn’t copy data that’s already there.

The bit after the >> just redirects rsync’s output to a text file so that you can verify afterwards what was actually backed up. I use the -v flag to have rsync tell me exactly what it is doing.

Of course, typing this command is both error prone and tedious. For that reason I’ve collected all my backup-related commands in a script that I execute frequently. I just turn on the drives, type ./backup.sh, and go get some coffee. I also use rsync to back up my remote website, which is easy because rsync also works remotely over SSH.
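The script is nothing fancy. A minimal sketch of what such a backup.sh can look like (the drive letter and directory names are examples):

```shell
#!/bin/sh
# backup.sh (sketch): mirror a handful of directories to the external drive.
# The drive letter and directory names are examples; adjust to your setup.
BACKUP=/cygdrive/e/backup

for dir in photos documents music; do
    # -a preserves attributes, --delete mirrors removals, and -i -v log
    # exactly what changed into a per-directory text file.
    rsync -i -v -a --delete ~/"$dir"/ "$BACKUP/$dir" >> "$BACKUP/$dir-rsync.txt"
done
```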

Part of my backup script also creates a dump of my Subversion repository. I store a lot of stuff in Subversion these days: my Google Earth placemarks, photos, documents, and also some source code. The Subversion working directories are spread across my hard drive, but the repository itself sits in a single directory on my C drive. Technically I could just back that up using rsync. However, using

svnadmin dump c:/svnrepo | gzip > /cygdrive/e/backup/svnrepo.gz

to dump the repository allows me to recreate it in any version of Subversion from the dump. The dump file also tends to compress nicely compared to either the working directory or the repository directory. In fact, the working directory is the largest because it contains 3 copies of each file. In the repository everything is stored incrementally, and in the dump gzip squeezes it down even further. The nice thing about a version repository is of course that you also preserve the version history.
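The corresponding restore is equally simple. A sketch, assuming the gzipped dump from above (the paths are examples):

```shell
# Restore sketch: recreate a fresh repository from the gzipped dump.
# The paths are examples; svnadmin ships with Subversion.
gunzip -c /cygdrive/e/backup/svnrepo.gz > /tmp/svnrepo.dump
svnadmin create /cygdrive/c/svnrepo-restored
svnadmin load /cygdrive/c/svnrepo-restored < /tmp/svnrepo.dump
```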

Site maintenance

I’ve cleaned up the static part of my website:

  • Deleted most pages. Where appropriate, I’ve moved content to my blog.
    • Stuff I created in the past has been backdated and posted to my blog under the createdbyjilles label
    • There’s a new ‘about’ me page
    • The rest was so out of date, I just removed it.
  • Updated the layout slightly.
    • I’ve adopted a more readable color scheme for links
    • I’ve used a wider banner photo so that it looks good on pages up to 1600 pixels wide before it starts repeating.
    • I’ve tweaked the html slightly.

That’s about it. If you miss old content or see broken links, let me know.

kml2html.xsl

A nice feature in Google Earth is that you can export your placemarks as a KMZ file. This is just a zip file with a .kml file inside. The KML file, in turn, is just an XML format that Google uses to list your placemarks.

I spent some time creating a nice XSL file that you can use to convert these files to HTML. I use it to publish some of my own placemarks on my website. Effectively, it turns Google Earth into a content management system for publishing information about places.

I do the transformation statically using an ant build file, but you can of course also let the browser do the transformation. However, search engines probably have difficulty handling the KML format, and not all browsers can do XSL, e.g. mobile browsers and screen readers. This is why I prefer to generate the HTML.
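If you don’t use ant, any XSLT processor can do the same static transformation. A sketch using xsltproc (the file names are examples; the .kml inside a .kmz is typically called doc.kml):

```shell
# Sketch: run the kml-to-html transformation from the command line.
# File names are examples; xsltproc comes with libxslt.
unzip -o placemarks.kmz          # a .kmz is just a zip containing a .kml
xsltproc kml2html.xsl doc.kml > placemarks.html
```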

The stylesheet has the following nice features:

  • Generates a link to Google Maps for each location. It also creates a link to Google Maps pointing to the original kmz file (note: you need to set the base URL in the build file for that to work)
  • Generates the coordinates of the location formatted using the geo microformat. This allows Firefox extensions such as Operator, and future browsers that support microformats, to detect the coordinates.
  • Produces nice semantic html (makes it easy to style using css).
  • It works for nested folders of placemarks and structures the page using nested unordered lists.
  • Preserves any html you type in the descriptions in Google Earth. So you can add links there and have them appear in the html.
  • Of course includes a link to the original kmz file.

Update: since posting this I have made several updates to the XSL and the CSS. The link above always points to the latest version of the stylesheet. In the zip file you will also find the CSS and an ant build file. As part of this update, I also rewrote the text above.

Anamorphic aspect ratio calculator

Anamorphic aspect ratio calculator. I sometimes play movies from my PC on my widescreen TV. Unfortunately, the TV-out of my GeForce 4 card does not support widescreen; in other words, it sends a signal with a 4:3 aspect ratio to my TV. Luckily, my TV can stretch the image to 16:9. Normally this would result in a flattened picture, which is not the intention. So suppose you have a cinematic DVD movie (aspect ratio 11:5) and want to play it on the TV. If you just send it to the TV, you get a 4:3 picture with enormous black bars at the top and bottom. Using the TV’s zoom function it will display fine, but then a significant portion of the signal goes unused, so you’re losing precious pixels!

What you can do instead is change the aspect ratio of the movie and let the TV stretch it back to its original 11:5 aspect ratio. The new aspect ratio for the film is called the anamorphic aspect ratio, and you can calculate it with this neat little calculator I created. You can enter the results in BSPlayer, which, although enormously feature rich, does not have an anamorphic setting built in (at the time of writing), and play your movie using the full available resolution and enjoy the extra detail :-).
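The arithmetic behind the calculator is simple: the TV’s 4:3-to-16:9 stretch widens the picture by (16/9)/(4/3) = 4/3, so the player has to pre-squeeze the movie’s ratio by that same factor. A quick check for the 11:5 example:

```shell
# The TV widens a 4:3 signal to 16:9, i.e. by a factor of (16/9)/(4/3).
# Pre-squeezing an 11:5 movie by that factor gives the anamorphic ratio.
awk 'BEGIN {
  movie   = 11 / 5              # cinematic source, 2.20:1
  stretch = (16 / 9) / (4 / 3)  # = 4/3, the widening the TV applies
  printf "anamorphic ratio: %.2f:1\n", movie / stretch
}'
# prints: anamorphic ratio: 1.65:1
```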

The jar file can be started by double-clicking it (on Windows; you must of course have a Java 2 JVM installed) or by running “java -jar anamorphic.jar” from the command line. Source code in the form of an Eclipse project can be found here.

If this all sounds too nerdy, just download Media Player Classic from sourceforge.net and use the Options->Pan&Scan->Scale to 16:9 option to get the same effect.

Explorer menu option to generate an m3u file for a directory

If you are like me, you probably ‘receive’ mp3s occasionally. Unfortunately, most mp3s do not have proper filenames and generally have incomplete metadata tags. I’m one of those people who likes the songs on an album played in the order they were put on the album. So I rename the songs with the track number first (with leading zeros where appropriate, for sorting) and then use this small Windows shell extension to right-click on the folder and automatically create a play.m3u file. On Win2k or WinNT you can just right-click on the .reg file and choose install. On Win9x you need to open the file, replace cmd with command, and then right-click and install. One limitation is that the Windows command line does not handle some characters correctly (e.g. Scandinavian characters, as in a few of my favorite bob hund songs).

Update 26 Aug 2002: it seems Bill Gates can’t even get the dir command right.

Update: GRR, on some machines the /s parameter worked correctly while on others it didn’t (it used the ancient 8+3 format for filenames, even on NTFS). I’ve removed it for the moment, so the command does not work recursively.