How to start jEdit

At work I am (in)famous for being responsible for getting jEdit onto everybody's desktop. Despite this, everyone uses TextPad :-/. These primitive souls are perfectly happy (or ignorant?) without syntax highlighting, XML validation, regex search and replace, XML indenting, autocompletion, etc.

Anyway, one of the nastier aspects of jEdit is integrating it properly with Windows and configuring it. Older versions included a convenient but broken .exe frontend; newer versions require some manual setup to get going.

First of all, the JVM matters: jEdit runs faster and looks better on JRE 1.5. Second, select the native look and feel unless you really like the shitty Java look and feel.

A crucial thing is to provide enough memory AND specify a small enough minimum heap size. Contrary to popular belief, Java programs are quite efficient; jEdit, for example, can run with just 10MB of heap. Unless of course you open big files or multiple files, in which case you may need more than that. The trick with Java is that you can specify upper and lower limits on the memory heap: the garbage collector will never shrink the heap below the minimum or grow it above the maximum. With jEdit you don't need much most of the time, so specify 10MB as the minimum. You may need more sometimes though, especially when you are running lots of plugins, so specify 256MB as the upper limit (probably way more than jEdit will ever use).
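The same idea can be put in a tiny wrapper script instead of a shortcut. This is just a sketch: the install path is an assumption, adjust it to your own setup.

```shell
#!/bin/sh
# Sketch: assemble the jEdit launch command with pinned heap limits.
# JEDIT_JAR is an assumed install location -- adjust to taste.
JEDIT_JAR="C:/Program Files/jEdit 4.2/jedit.jar"
MIN_HEAP=-Xms10M   # the GC will never shrink the heap below this
MAX_HEAP=-Xmx256M  # ...and will never grow it above this
CMD="javaw $MIN_HEAP $MAX_HEAP -jar \"$JEDIT_JAR\" -reuseview"
echo "$CMD"
```

Running it just prints the command line so you can paste it into a shortcut.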

Another crucial setting is -reuseview, which lets an already running jEdit window open new files instead of starting a fresh instance.

Use the following settings for a shortcut:

javaw.exe -Xms10M -Xmx256M -jar "C:/Program Files/jEdit 4.2/jedit.jar" -reuseview

I also have a nice Cygwin shell script to open a file straight in jEdit:

javaw -Xms10M -Xmx256M -jar "c:/Program Files/jEdit 4.2/jedit.jar" -reuseview `cygpath -w "$(pwd)/$1"` &

An ‘open in jEdit’ context menu option can be obtained by importing the registry settings below (create a text file jedit.reg, paste in the contents, save, and double-click the file):

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\*\shell\Open with jEdit]

[HKEY_CLASSES_ROOT\*\shell\Open with jEdit\command]
@="javaw -Xms40M -Xmx256M -jar \"C:\\Program Files\\jEdit 4.2\\jedit.jar\" -reuseview \"%1\""

Edited as suggested in the comments; WordPress conveniently removes backslashes when you save the text :-(.

Update 02-04-2011:

It’s been a while since I wrote this, and when I accidentally hit my own post with a Google query, I knew it was time for a little update. All of the above is still valid as far as I know, except I now use a Mac. On a Mac, or in fact any Linux/Unix type installation, there’s a convenient way to start jEdit from a bash function. Just include the line below in your .profile or .bashrc (adjust paths as needed of course):

function jedit() { java -Xms15M -jar /Applications/ -reuseview "$@" &}
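The jar path in the line above appears to have been eaten somewhere along the way. For reference, a slightly more defensive variant of the same function looks something like this; the jar location here is purely an assumption, adjust it to wherever your install actually lives:

```shell
# A more defensive variant of the jedit function above.
# The jar path is an assumption -- point it at your own install.
jedit() {
    local jar="$HOME/Applications/jEdit/jedit.jar"
    if [ ! -f "$jar" ]; then
        echo "jedit.jar not found at $jar" >&2
        return 1
    fi
    java -Xms15M -jar "$jar" -reuseview "$@" &
}
```

Failing loudly when the jar is missing beats a silent no-op when you typo the path.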

Update 11-07-2011:

The above line of .profile voodoo is now also available as a Gist on GitHub, the code snippet sharing site.


I found this article, titled -Ofun, rather insightful.

I agree with most of it. Many software projects (commercial, OSS, big & small) have strict guidelines with respect to write access to source repositories and the usage of those rights. As the author observes, many of these restrictions are rooted in the limited ability of legacy revision control systems to roll back undesirable changes and to merge sets of coherent changes, not in any inherent process advantages (like enforcing reviews or preventing malicious commits). Consequently, this practice restricts programmers in their creativity.

Inviting creative developers to commit to a source repository is a very desirable thing. It should be made as easy as possible for them to do their thing.

On more than one occasion I have spent some time looking at source code from some OSS project (to figure out what was going wrong in my own code). Very often my hands start to itch to make some trivial changes (refactor a bit, optimize a bit, add some functionality I need). In all of these cases I ended up not making the changes, because getting them committed would have required a lengthy process:
– get on the mailing list
– figure out who to discuss the change with
– discuss the change and get permission to send it to that person
– wait for the person to accept/reject the change

This can be a lengthy process, and you already feel guilty upfront about contacting the person over this trivial change with your limited knowledge of the system. In short, the size of the project and its membership scares off all interested developers except the ones determined to get their change in.

What I’d like to do is this:
– Check out Tomcat (I work with Tomcat a lot; fill in your favorite OSS project)
– Make some change I think is worthwhile having without worrying about consequences, opinions of others, etc.
– Commit it with a clear message why I changed it.
– Leave it to the people who run the project to laugh away my ignorance or accept the change as they see fit.

The Apache people don’t want the change? Fine. Undo it, don’t merge, whatever. But don’t restrict people’s right to suggest changes/improvements in any way. If you end up rejecting 50% of the commits, that still means you got 50% useful stuff. The reviewing and merging workload can be distributed among people.

In my current job (at GX, the company I am about to leave), I am the release manager: the guy in charge of the source repositories for the entire GX product line. I’d like to work as outlined above, but we don’t. Non-product developers in the company need to contact me by mail if they want to get their changes in. Some of them do, most of them don’t. I’m convinced I’d get a lot of useful changes otherwise. We use Subversion, which is nice but not very suitable for the way of working outlined above and in the article I quoted. Apache also uses Subversion, so I can understand why they don’t hand out commit rights to people like me just like that.

So why is this post labelled as software engineering science? Well, I happen to believe that practice is ahead of the academic community (of which I am also a part) in some things. Practitioners have a fine nose for tools and techniques that work really well. Academic software engineering researchers don’t, for a variety of reasons:
– they don’t engineer that much software
– very few of them develop at all (I do; I’m an exception)
– they are not very familiar with the tools developers use

In the past two years in practice I have learned a number of things:
– version control is key to managing large software projects. Everything in a project revolves around putting stuff in and getting stuff out of the repository. If you didn’t commit it, it doesn’t exist. Committing it puts it on the radar of people who need to know about it.
– Using branches and tags is a sign the development process is getting more mature. It means you are separating development from maintenance activities.
– Doing branches and tags on the planned time and date is an even better sign: things are going according to some plan (i.e. this almost looks like engineering).
– Software design is something non-software engineers (including managers and software engineering researchers) talk about, a lot. Software engineers are usually too busy to bother.
– Consequently, little software actually gets designed in the traditional sense of the word (creating important-looking sheets of paper with lots of models on them).
– Instead two or three developers get together for an afternoon and lock themselves up with a whiteboard and a requirements document to take the handful of important decisions that need to be taken.
– Sometimes these decisions get documented. This is called the architecture document.
– Sometimes a customer/manager (same thing, really) asks for pretty pictures. Only in those cases is a design document created.
– Very little new software gets built from scratch.
– The version repository is the annotated history of the software you are trying to evolve. If important information about design decisions is not part of the annotated history, it is lost forever.
– Very few software engineers bother annotating their commits properly.
– Despite the benefits, version control systems are very primitive systems. I expect much of the progress in development practice in the next few years to come from major improvements in version control systems and the way they integrate into other tools such as bug tracking systems and document management systems.

Some additional observations on OSS projects:
– Open source projects have three important tools: the mailing list, the bug tracking system and the version control system (and to a lesser extent wikis). These tools are primitive compared to what is used in the commercial software industry.
– Few oss projects have explicit requirements and design phases.
– In fact, all of the processes used in OSS projects concern the use of the aforementioned tools.
– Indeed, few OSS projects have designs.
– Instead, OSS projects evolve and build a reputation after an initial commit of some prototype by a small group of people.
– Most of the life cycle of an OSS project consists of evolving it more or less ad hoc. Even if there is a roadmap, it usually serves as a common frame of reference for developers rather than as a specification of things to implement.

I’m impressed by how well some OSS projects (Mozilla, KDE, Linux) are run, and I think the key to improving commercial projects is to adopt some of the better practices of these projects.

Much commercial software actually evolves in a very similar fashion, despite manager types keeping up appearances by stimulating the creation of lengthy design and requirements documents, usually after development has finished.

FireFox Alpha a.k.a. Deerpark

Quite uncharacteristically, I have not touched any nightly builds of FireFox since 1.0. Part of the reason is that the Mozilla developers seem to have abandoned the notion of bi-monthly milestones (so it has been a long time since any reasonably stable build). But today I gave the Deerpark alpha RC1 a try. That means it is the first release candidate of the first alpha of what will some day be Mozilla FireFox 1.1. It is named Deerpark so that regular FireFox users don’t touch it. Probably a good thing, because you can expect things to break when using alpha software.

Deerpark is a nice browser. Of course it has a few little quirks (hey, it’s an alpha build), but you can browse with it, and overall it is as pleasant to use as its predecessor. Deerpark is not revolutionary in its interface: a few minor tweaks are all you will notice at first glance. The most significant change is in the options pane, which now has a horizontal icon bar instead of a vertical one. Not an improvement IMHO, but I rarely use it anyway. Some of the preferences have also been rearranged, but nothing revolutionary there either.

Under the hood the biggest change is SVG rendering, which has been under development since about 2000; I recall trying SVG builds years ago. SVG is one of those more or less failed W3C standards that still await widespread adoption (outside niche graphical products and Linux desktop decoration). Deerpark could be what triggers this adoption.

Another notable but mostly invisible change is Gecko, which is now a year older than the version shipped with FireFox 1.0 and has presumably had quite a few tweaks (performance, standards compliance, rendering bugs, etc.). Apparently they also fixed inline editing.

Other than that the fixes are minor. Deerpark is a nice incremental change but nothing revolutionary.

So why am I back to FireFox 1.0.4? Answer: extensions. I need my extensions. In particular Sage is important for me, and Sage needs to be fixed for Deerpark. Several other extensions I use also need to be fixed. I suspect many extensions will be fixed in the next few months; the FireFox/Deerpark beta will probably be the moment when many 1.0 users start to switch.


Co-linux is a custom Linux kernel that can run as a Windows application. It is bundled with a Debian base distro. On a whim I tried it today, and I have to say I am impressed. It boots very fast. Once booted you have what is known as the Debian base image. 2D graphics are not implemented in coLinux, but since Linux GUIs can be served over a network, that is not a problem. So rather than emulating some crappy display driver, you just do apt-get install vncserver, download a VNC client for Windows, and tada: graphics.

The rest is just straight Debian configuration. For the average Windows user that is pretty hard, of course. But been there, done that, so no problem for me. I’ve been at it for a bit over an hour now.

The hard part was convincing Windows to do Internet connection sharing and remembering how to configure networking in Debian (it’s been a while, so it took me a few Google attempts). After that it’s apt-get this and that. Woody was obsolete the day it was released years ago, so I fixed sources.list and did an apt-get dist-upgrade to testing. Then a few apt-get install commands to get an X server, kdebase and vncserver (this is all explained in the coLinux documentation). Then I started a VNC server and connected to it using TightVNC (a nice VNC client for Windows), and I am now looking at a KDE 3.2.2 desktop. It’s actually running at native speed; the only bottleneck is VNC, so graphics performance basically sucks. I’m going to try the Cygwin X server as well.
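The steps above boil down to a handful of commands inside the coLinux console. Roughly, and with the caveat that the exact package names (x-window-system-core in particular) are from memory of that era, it looks like this:

```shell
# Rough sketch of the steps described above (run inside the coLinux console).
# 1. Point apt at "testing" instead of the obsolete woody, then upgrade.
sed -i 's/woody/testing/g' /etc/apt/sources.list
apt-get update && apt-get dist-upgrade
# 2. Install an X server, the KDE base packages and a VNC server.
apt-get install x-window-system-core kdebase vncserver
# 3. Start a VNC session, then connect to it from Windows with TightVNC.
vncserver :1
```
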

qemu & knoppix

Recently Knoppix 3.8 was released. Knoppix is a nice Linux distribution that you can boot straight from CD. It has two primary uses: showing off Linux desktop software without actually installing Linux, and doing maintenance on PCs (by bypassing the installed OS). Fairly nice, but useless to people like me, since I am not a system administrator and know how to set up Linux. No, the reason I downloaded it was qemu. Qemu is a computer emulator that can emulate a variety of processors and is apparently advanced enough to run Windows, Linux, Mac OS X and a whole bunch of other operating systems. Now that is interesting. And some guy put a nice qemu/Knoppix bundle up for download.
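For reference, booting the ISO yourself is a one-liner once qemu is installed; something along these lines, where the ISO filename is a placeholder and 256MB of emulated RAM is just a reasonable guess:

```shell
# Boot the Knoppix live CD inside an emulated x86 PC.
# knoppix.iso is a placeholder filename; "-boot d" boots from the CD-ROM.
qemu -m 256 -cdrom knoppix.iso -boot d
```
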

Of course the problem with emulation is that it is slow. And it shows: booting into KDE from the ISO took a whopping 30 minutes, and then loading Konqueror took another two minutes. But the point is that it works. With a tenfold performance improvement it would become quite useful. Of course, stuff like VMware is much more suitable for this kind of thing, but the nice thing about qemu is that it is not limited to emulating x86: it can also emulate a Mac, a SPARC, and probably any processor architecture the qemu developers choose to support. It’s a tool with lots of potential, and I expect to hear a lot more about it as it matures over the next few years.