Category Archives: Solutions

Google Images Plugin For Corripio

A simple post this time. Corripio is a program for OS X that lets you easily find album artwork for your music collection. However, it's a bit rough around the edges, and its built-in, web-based mechanisms for retrieving this artwork are rather limited and frankly a bit rubbish. I've managed to solve all these issues by writing a plugin to fix its unaccountable lack of support for Google Images.

However, a word to the wise: this is probably against the terms of service, so make sure you only use this plugin for personal use!


#!/usr/bin/env ruby
require 'net/http'
require 'open-uri'
require 'uri'
require 'cgi'

# Construct a query for good-sized Google images by taking the supplied keywords and adding "album"
query = (ARGV + ["album"]).join(" ")
sizes = ["medium", "large", "xlarge"].join("|")
uri = "http://images.google.com/images?q=#{CGI.escape(query)}&imgsz=#{CGI.escape(sizes)}"

source = open(URI.parse(uri)).read.gsub("\r\n", "")

# Look for javascript statements of the form dyn.Img("foo", "bar", "lol", "")
matcher = /dyn\.Img\(".*?",".*?",".*?","(.*?)"/
matches = source.scan(matcher)

# Push back the best 10 results
matches.first(10).each { |match| puts match.first }

To install the plugin, copy the above code into a new file in ~/Library/Application Support/Corripio/Plugins and make sure it is marked executable. Now, you can drag that file into the Plugins section of the Corripio preference pane and then (optionally) disable the default image search plugins. When you're done, the preference pane should look something like this:

Corripio preference pane

Breaking The Running Key Cipher: Part 2

This is the second post in a series of three covering the running key cipher. You can find the first part here.

To recap, in the last part we covered the definition of the running key cipher and looked at a simple mechanism that you might use to try and break it: a known plaintext attack. This time we'll cover the next most advanced method you can use to try and effect a break of a piece of running key ciphertext: n-gram analysis.

For those readers unfamiliar with the fields of cryptanalysis and information retrieval, we'll briefly cover what an n-gram is. Essentially it is just a list of n symbols, where the symbols can be letters, words, or whatever else you are interested in analyzing. In typical usage, however, the term implies that each such list has an attached probability that somehow reflects the likelihood of observing it.

In our case we are interested in character based n-grams, simply because the running key cipher is character based, and so example bigrams (n = 2) would include "ot", "ke" and "re". Likewise, we can consider trigrams (n = 3) such as "the", "foo" and "ter", and even higher order n-grams such as the pentagrams (n = 5) "erenc" and "licks" or hexagrams (n = 6) like "okingh". Of course, to be useful, these strings have to come with their attached probabilities. As we are interested in analyzing typical English text, these probabilities will typically be estimated from the frequency of n-gram occurrence in some publicly available corpus such as newspaper archives or the internet itself.
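To make this concrete, here is a quick sketch (in Python rather than the C# used for the analyser itself, and purely my own illustration) of how character bigram probabilities might be estimated from a sample string:

```python
from collections import Counter

def ngram_probabilities(text, n):
    # Keep letters only and uppercase them, mirroring the normalisation
    # we will apply to the cipher alphabet.
    letters = "".join(ch.upper() for ch in text if ch.isalpha())
    counts = Counter(letters[i:i + n] for i in range(len(letters) - n + 1))
    total = sum(counts.values())
    return {gram: count / total for gram, count in counts.items()}

probs = ngram_probabilities("the theory of the thing", 2)
print(probs["TH"])  # the most frequent bigram in this tiny sample
```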

Before we look at how we can use n-grams for cryptanalysis, let's take a look at some code to create them. Let's begin with a class to represent the n-gram itself:

class NGram
{
    public string Text;
    public double Probability;

    public NGram(string text, double probability)
    {
        this.Text = text;
        this.Probability = probability;
    }
}

So far so simple. We will also need a class to analyze and contain a collection of n-grams:

class NGramFrequencies
{
    private Dictionary<string, int> occurrences;
    private int totalOccurrences;

    private NGramFrequencies(Dictionary<string, int> occurrences, int totalOccurrences)
    {
        this.occurrences = occurrences;
        this.totalOccurrences = totalOccurrences;
    }

    public double ProbabilityOf(string nGram)
    {
        int nGramOccurrence;
        if (this.occurrences.TryGetValue(nGram, out nGramOccurrence))
        { return this.ProbabilityOf(nGramOccurrence); }
        else
        { return 0.0; }
    }

    private double ProbabilityOf(int occurrence)
    { return (double)occurrence / this.totalOccurrences; }

    public IEnumerable<NGram> NGrams
    {
        get
        {
            foreach (KeyValuePair<string, int> pair in this.occurrences)
            { yield return new NGram(pair.Key, ProbabilityOf(pair.Value)); }
        }
    }

    public static NGramFrequencies Analyse(StreamReader reader, int length)
    {
        LinkedList<char> accumulator = new LinkedList<char>();
        Dictionary<string, int> occurrences = new Dictionary<string, int>();
        int totalOccurrences = 0;

        while (!reader.EndOfStream)
        {
            char nextCharacter;
            if (TryNormalizeCharacter((char)reader.Read(), out nextCharacter))
            {
                accumulator.AddLast(nextCharacter);
                if (accumulator.Count == length)
                {
                    StringBuilder nGram = new StringBuilder(length);
                    foreach (char value in accumulator)
                    { nGram.Append(value); }
                    string nGramString = nGram.ToString();

                    int currentOccurrences;
                    if (occurrences.TryGetValue(nGramString, out currentOccurrences))
                    { occurrences[nGramString] = currentOccurrences + 1; }
                    else
                    { occurrences[nGramString] = 1; }

                    totalOccurrences++;
                    // Slide the window along by dropping the oldest character
                    accumulator.RemoveFirst();
                }
            }
        }

        return new NGramFrequencies(occurrences, totalOccurrences);
    }

    private static bool TryNormalizeCharacter(char input, out char output)
    {
        if (Char.IsLetter(input))
        {
            output = Char.ToUpper(input);
            return true;
        }
        else
        {
            output = '?';
            return false;
        }
    }
}

That's a fair chunk of code! However, most of it is pretty simple. The real core is the Analyse method, which just counts the frequency of n-gram occurrence, ready to be turned into a probability when the n-grams are enumerated later. It takes as arguments a StreamReader containing the corpus we wish to analyze, and the length of n-grams we are interested in.

Now, before we continue I feel that I should point out the most glaring flaw in this implementation: it deals very poorly with data sparsity. Any n-gram which does not occur in the corpus will be assigned a probability of zero. This is probably undesirable since almost every string is at least possible in practice, if not necessarily particularly likely! The process of estimating probabilities for unseen n-grams is called smoothing, and the methods you can use to do it vary from the simple (initialize all n-gram counts to 1 instead of 0) to the complicated (Good-Turing smoothing). However, I have to say that in practice I've not found a huge difference in outcomes when I use the simple scheme versus any more complicated one.
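The simple scheme mentioned above is add-one (Laplace) smoothing. Here is a hedged Python sketch of it, not the analyser's actual code; `counts` stands in for the raw n-gram counts the analyser collects:

```python
def smoothed_probability(ngram, counts, alphabet_size=26):
    # Add-one (Laplace) smoothing: pretend every possible n-gram was seen
    # once more than it actually was, so nothing gets probability zero.
    n = len(ngram)
    total = sum(counts.values())
    possible = alphabet_size ** n  # number of distinct n-grams of this length
    return (counts.get(ngram, 0) + 1) / (total + possible)

counts = {"TH": 4, "HE": 3}                # invented raw bigram counts
print(smoothed_probability("TH", counts))  # seen n-gram
print(smoothed_probability("QX", counts))  # unseen, but now non-zero
```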

Despite this flaw we will plunge forward with our use of n-grams. The fundamental algorithm we will be using for the cryptanalysis is based on a so-called maximum likelihood criterion. What this means is that for every chunk of ciphertext we will try to decipher it in a brute-force way with every possible keytext, and we will then use our n-grams to estimate the likelihood of each deciphering by taking the product of the probabilities of the resulting plaintext block and the guessed key fragment (which we are assuming to be normal English text). The deciphering which has the highest such product is considered to be the most likely one and is shown to the user. A simple implementation of this algorithm is shown below:

class Program
{
    static void Main(string[] args)
    {
        NGramFrequencies[] nGramFrequencies = ObtainNGramFrequencies(new StreamReader("D:\\Downloads\\Frankenstein.txt"));
        string cipherText = File.ReadAllText("D:\\Downloads\\Ciphertext.txt");

        const int BLOCK_SIZE = 4;
        NGramFrequencies frequencies = nGramFrequencies[BLOCK_SIZE - 1];

        for (int offset = 0; offset < BLOCK_SIZE; offset++)
        {
            StringBuilder plainTextOutput = new StringBuilder();
            StringBuilder keyTextOutput = new StringBuilder();
            for (int a = offset; a < cipherText.Length - (BLOCK_SIZE + offset); a += BLOCK_SIZE)
            {
                string cipherTextBlock = cipherText.Substring(a, BLOCK_SIZE);

                double maximumProbability = 0.0;
                string bestPlainText = null;
                string bestKeyText = null;
                foreach (NGram nGram in frequencies.NGrams)
                {
                    string plainText = Decipher(cipherTextBlock, nGram.Text);
                    double probability = frequencies.ProbabilityOf(plainText) * nGram.Probability;

                    if (probability >= maximumProbability)
                    {
                        maximumProbability = probability;
                        bestPlainText = plainText;
                        bestKeyText = nGram.Text;
                    }
                }

                plainTextOutput.Append(bestPlainText);
                keyTextOutput.Append(bestKeyText);
            }

            Console.WriteLine("After analysis on offset {0}", offset);
            Console.WriteLine("'Plain': {0}", plainTextOutput);
            Console.WriteLine("'Key': {0}", keyTextOutput);
        }
    }

    static string Decipher(string cipherText, string keyText)
    {
        StringBuilder output = new StringBuilder(cipherText.Length);
        for (int a = 0; a < cipherText.Length; a++)
        { output.Append((char)('A' + ReduceMod(cipherText[a] - keyText[a]))); }

        return output.ToString();
    }

    static int ReduceMod(int value)
    {
        if (value < 0)
        { return value + 26; }
        else
        { return value % 26; }
    }

    static NGramFrequencies[] ObtainNGramFrequencies(StreamReader corpusStream)
    {
        NGramFrequencies[] nGramFrequencies = new NGramFrequencies[6];
        for (int a = 1; a <= nGramFrequencies.Length; a++)
        {
            nGramFrequencies[a - 1] = NGramFrequencies.Analyse(corpusStream, a);
            corpusStream.BaseStream.Seek(0, SeekOrigin.Begin);
            corpusStream.DiscardBufferedData(); // reset the reader's buffer after seeking
        }

        return nGramFrequencies;
    }
}

Notice that this program assumes that you have a copy of Frankenstein on your drive to use as a corpus. Obviously in reality you will want to use a somewhat larger text than this, composed of the concatenation of many texts. I have found Project Gutenberg to be a great source for text you can use to construct such a corpus without too much trouble.

The output from this program is rather user unfriendly and it certainly won't give you the plaintext in a straightforward way. I have found it most useful for suggesting possible plaintext fragments that can then be used as a starting point for a known-plaintext search, which I described in the last post. Recall that the plaintext we suggested from last time was:


And we enciphered this horrible secret with the running key:


So let's take a look at what our n-gram based analysis program spits out as possible plaintext fragments when it is fed the corresponding ciphertext, with a BLOCK_SIZE of 4 as in the example code:

After analysis on offset 0
After analysis on offset 1
After analysis on offset 2
After analysis on offset 3

Although this example is really too small and our corpus too sparse to give very helpful results, you can see immediately that the analyzer has got at least some things right: for instance, you can see the string EAT occurs at the appropriate position in the plaintext output for blocks offset by 1. It is worth noting that there is no guarantee that the plaintext will be output in the 'Plain' field and the keytext in the 'Key' field: they could equally be swapped around, due to the commutativity of adding characters (which is essentially addition in modular arithmetic to modulus 26) and the fact that we have assumed that the governing n-gram distributions behind both texts are the same.
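This plain/key symmetry is easy to demonstrate with a Python sketch (the texts here are invented, since they're only there to show the swap):

```python
def encipher(plain, key):
    # Add letters mod 26, with A = 0 .. Z = 25
    return "".join(chr((ord(p) + ord(k) - 2 * ord('A')) % 26 + ord('A'))
                   for p, k in zip(plain, key))

def decipher(cipher, key):
    # Subtract letters mod 26
    return "".join(chr((ord(c) - ord(k)) % 26 + ord('A'))
                   for c, k in zip(cipher, key))

cipher = encipher("MEETATDAWN", "THEWASTELA")
# Subtracting either stream recovers the other, so the analyser cannot
# tell which one was the key:
print(decipher(cipher, "THEWASTELA"))  # MEETATDAWN
print(decipher(cipher, "MEETATDAWN"))  # THEWASTELA
```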

Although this example might have been somewhat disappointing, I've had fairly good results with n-gram analysis on larger ciphertexts (where alignments of common character sequences in the plain and key texts are more common) and with a more complete corpus. However, we can apply even more technology to the problem: next time I'll try and show how we can add hidden Markov models into the mix to resolve some of the more fundamental problems with the crude maximum likelihood approach shown above.

Leaving Windows Behind

So, I finally switched to Mac a month ago. I've had a Mac laptop since last summer and have been very pleased with the experience, so since Windows Vista has been giving me huge amounts of trouble (e.g. see my last post on getting Cygwin to work, though I won't go into the full gamut of issues I had here) I decided to go for an Apple desktop machine too.

Happily, Steve Jobs has heard my cries of Windows-inflicted pain and ordered his minions to release a new revision of this baby:

Mac Pro

Beautiful, isn't it? With 8 cores of Xeon love, it's no slouch in the performance department either. Salivation-inducing hardware aside, it comes with OS X, which is so much better than Vista that it's simply not even funny. Overall it's fair to say that I've been very pleased with my purchase 🙂

There have been some problems switching, of course. I have Parallels Desktop installed so that I can still develop using C# and I will probably end up installing Office 2007 on there at some point as well, but for pretty much everything else I've been able to find an acceptable or beyond-acceptable alternative for OS X. Here are some of my favourites:


LaunchBar

LaunchBar is a very neat application that you can use to quickly access many things on your Mac. For instance, if I want to play the album "Twin Cinema" in iTunes, I just press Option-Space, type "twin" into the box that comes up and press enter: fast and convenient. Similarly, if I want to open TextMate I simply press Option-Space and type "mate". Of course, there are loads more things you can do with it, such as running any AppleScript you like or making a Google search: the list goes on. LaunchBar learns over time what abbreviations you want to associate with an action, and hence it becomes so natural that you soon find it hard to live without it!

Unfortunately it is payware, but it's certainly well worth the price tag.

Plot

Plot is a really nice graphing application. On Windows I was using Gnuplot, which is doubtless powerful but insanely hard to use. Plot just works and supports pretty much every feature I need. The graphs it outputs look very professional: see for yourself.

LyX

LyX is what I'm using instead of Office (NeoOffice, the OS X OpenOffice port, is too sluggish for words so I'm trying to avoid it). It's a nice friendly interface onto an OS X LaTeX distribution that makes the common case fast while still letting you access the full power of LaTeX when you need it. The application is actually nominally cross platform but I had numerous problems with crashes and weird behavior in the Windows version that have yet to occur on Mac.

1Password

I bought 1Password (it actually came as part of the MacHeist deal) to replace my long-time Windows password manager Password Manager XP. I have no complaints: on the contrary, 1Password's integration with Firefox and the OS is much more reliable and complete than Password Manager XP ever managed.

What's more, they are about to release a service called my1Password that will let me get web-enabled access to my passwords from any location and platform! I'm happy as a clam about this as it's proven impossible to find a decent cross platform desktop password manager application. I should give a shout out to Clipperz here as they have had a decent implementation of this for a while, but the lack of integration with my main password manager (so I have to maintain two lists) and minuscule password limit have put me off using it regularly. UPDATE: Marco Barulli from Clipperz has responded to what I said here: please read this post to get the full story.

Time Machine

Time Machine, oh Time Machine, how did I ever get backups done before I had you? The answer is: with great difficulty. On Windows I set up a scheduled task to use SyncBack to clone my hard disk to another server. Unfortunately, this was pretty unreliable (partly because I was backing up onto a Linux file system that had an imperfect emulation of Windows security and didn't seem to support Unicode properly) and also meant that I only had a backup of the most recent version of my filesystem. With Time Machine everything is seamless and I can go back weeks or months in time to see my files at any point, all from within the Finder! Awesome!

Terminal

And finally, maybe you don't find the OS X Terminal very exciting, but for someone who has wasted many hours struggling with Cygwin and its numerous problems (e.g. the awkward attempt to reconcile the Windows and Unix permission models) it is a godsend to finally have a real Unix shell available 🙂

I haven't even mentioned some perennial favourites like Transmission, Perian or AppFresh, but my time is limited! If you really feel the need to see all the applications I have installed, take a peek at my iusethis profile.

Overall my switching experience has been almost entirely painless and has certainly made me more productive and satisfied with my machine. Here's to many more happy years with Apple computers!


Using Cygwin on Windows Vista as an Administrator

I've just managed to fix a particularly annoying issue I was having with Cygwin under Vista with UAC enabled. Essentially, when I ran the Cygwin bash shell as an Administrator I was seeing the following:

  • Bash started in /bin/
  • The bash shell had no knowledge of Unix paths (i.e. the current working directory was /c/Cygwin/bin)
  • The root file system was just my C: drive
  • My .bashrc and .bash_profile were not being executed

All of this made it fairly hard to actually do anything. However, I was given a clue when I started up Poderosa (a nice GUI terminal emulator) and tried to open a Cygwin console: there was an error message about not being able to access "HKEY_CURRENT_USER\Software\Cygnus Solutions\Cygwin\mounts v2\/". This key contains the mount information for the root directory of the Cygwin instance, so it looked like the root of the problem.

I then opened up Process Monitor to see what happened when, running under the Administrator user, Poderosa tried to access that key. It got "NAME NOT FOUND" as expected. I then tried the same thing as a normal user, and found that the response code was "REPARSE" instead! The log looked like this:

Poderosa Process Monitor Log

What, do you ask, is all that VirtualStore nonsense after the REPARSE has occurred? Well, that REPARSE is actually redirecting Poderosa's read of the LOCAL_MACHINE registry key to some keys in the CURRENT_USER hive: this is a new "feature" of Windows Vista designed to ensure that old applications that assume they can write to LOCAL_MACHINE still work. Any such writes by a process without the appropriate permissions are redirected to the VirtualStore instead, and then read back later transparently by this REPARSE mechanism. However, in their wisdom Microsoft have decided that this redirection is disabled when applications are run as an Administrator from a UAC protected account.

What must have happened is that I accidentally installed or modified Cygwin using the Setup program as a normal user, and its writes to HKLM were hence redirected to the VirtualStore, which is then not consulted when you try and run Cygwin proper as an Administrator. My fix was simply to:

  1. Export the VirtualStore branch as a ".reg" file from the registry editor
  2. Open the resulting file in Notepad and replace the text "HKEY_CURRENT_USER\Software\Classes\VirtualStore\MACHINE" with "HKEY_LOCAL_MACHINE"
  3. Import the modified .reg file using the registry editor again
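If you'd rather script step 2 than do the replacement in Notepad, here is a hedged Python sketch (the file names are made up, and note that exported .reg files are normally UTF-16 encoded):

```python
def repoint_virtualstore(reg_text):
    # Step 2: point the exported VirtualStore keys back at HKLM
    return reg_text.replace(
        r"HKEY_CURRENT_USER\Software\Classes\VirtualStore\MACHINE",
        "HKEY_LOCAL_MACHINE")

# Typical usage (file names invented; exported .reg files are UTF-16):
# text = open("virtualstore.reg", encoding="utf-16").read()
# open("fixed.reg", "w", encoding="utf-16").write(repoint_virtualstore(text))
```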

After that process the user-private data written by Cygwin setup was accessible machine-wide, solving my problem. I hope that this post will be able to help out someone else with similar problems!

Breaking The Running Key Cipher: Part 1

I love challenges of all kinds, both algorithmic and cryptographic. There is just something inherently compelling about pitting your wits alone against an intellectual challenge, free to apply any and all means to beat it. I recently took part in a crypto challenge where we were meant to break a text encoded with the running key cipher, which I hadn't encountered before, and I came up with a rather satisfying solution that I thought might be good to share here.

For those of you who aren't familiar with the running key cipher, it's very simple. Say you have some text you would love to be kept secret:


But still feel the need to communicate it to someone, with whom you have agreed some (usually well known) key beforehand, say:


We just "add" the two texts together, with the letters being used as numbers in the obvious way: A = 0, B = 1, ..., Z = 25. This means that e.g. C + B = 2 + 1 = 3 = D. Furthermore, we do this modulo 26, so that e.g. Z + B = A. Additionally, since in our case our key is a bit longer than the secret message, we will just not use the last couple of letters of the key. This means that our encoded message is:


The receiver can just subtract their known key to retrieve the original text. Wow, great! Since we never reuse the key at any point in that message, and assuming we never use the key again, it must constitute a one time pad! This means that we must have perfect security, right? Ah, grasshopper, if only it were so simple! Actually, this is not very secure at all because both our key and our message exhibit distinctly non-random distributions of letters. A simple example of this can be seen if you imagine that we subtract just the letter E (the most commonly used in English) from every letter in the intercepted ciphertext. We would expect considerably more than 1/26 of the resulting letters to come from either the key or the plaintext. If we do this with our example, we obtain:

 K    K         K    K  K      KP

I've labelled each resulting letter with a K or a P depending on whether it came from the key text or the plain text. As you can see, most of the letters are from the key because it contains very few E characters.
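If you want to play along, here is a Python sketch of the letter arithmetic and the subtract-E probe (the message and key below are invented, not the example texts from this post):

```python
def encipher(plain, key):
    # "Add" letters as numbers A = 0 .. Z = 25, modulo 26
    return "".join(chr((ord(p) + ord(k) - 2 * ord('A')) % 26 + ord('A'))
                   for p, k in zip(plain, key))

def decipher(cipher, key):
    # Subtract the key letters modulo 26
    return "".join(chr((ord(c) - ord(k)) % 26 + ord('A'))
                   for c, k in zip(cipher, key))

cipher = encipher("MEETATDAWN", "THEWASTELA")  # invented message and key

# The crude probe: subtract 'E' from every position. Wherever one of the
# two streams really had an E, the other stream's letter appears.
probe = decipher(cipher, "E" * len(cipher))
print(cipher, probe)
```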

This is all very well, but it's clearly a fairly crude attack. If we want to have a chance of actually breaking the cipher we need some more heavyweight methods. As far as I know, there is a choice of three major techniques:

  • Known plaintext attacks
  • Crude N-gram analysis
  • Hidden Markov model N-gram analysis

The last one is what I actually used to break the challenge, and the one that I think is just plain coolest, but we need to build up to it as it's pretty scary :-). So, let's take a look at the other techniques first:

Known Plaintext Attack
This is definitely the easiest attack to employ if you know something about the message being sent. For instance, if you, as the evil attacker, happen to know that the sender is a passionate gourmand, you might guess that EATING will appear somewhere in the key or plaintext. Knowing this, you can exhaustively subtract EATING from the ciphertext at every offset. If we do this for our example text, we obtain:

Text offset Text deciphered with key

Most of those look like gibberish, but one stands out as eminently readable English text: YOURBO. This is, of course, part of the key text. The diligent cryptanalyst will then typically be able to use further guesses (perhaps inspired by the discovered piece of text) to reveal more of the message and key. For instance, he might try every word beginning with BO to try and extend the known plaintext rightwards (though this is a rather difficult proposition since there are at least 368 possibilities according to my dictionary). A small aside here about my methodology: I found the best experience came from combining a suitable dictionary (I use 5desk) with grep in a Cygwin Bash shell. You can then make queries such as the one below, which counts the number of words in the dictionary beginning with "bo".

$ grep '^bo' 5desk.txt | wc -l

Note that it is not strictly possible to tell whether the English words you recover are part of the key or plain text. This is because of the symmetry of the modular addition we used to perform the ciphering step. However, it is usually possible to work it out from your knowledge of the message's probable contents. Once you know which text stream is part of the key, a Google search will usually allow you to obtain the entire thing from your fragmentary knowledge, assuming that the key is a public text (as is typical: Bible extracts are popular). This will let you break the code!
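The offset-by-offset subtraction described above (sometimes called crib dragging) is mechanical enough to script; here is a Python sketch with an invented ciphertext:

```python
def crib_drag(cipher, crib):
    # Subtract the guessed word at every offset; a readable result at
    # some offset suggests the crib occurs there in the plain or key text.
    def subtract(block, word):
        return "".join(chr((ord(c) - ord(w)) % 26 + ord('A'))
                       for c, w in zip(block, word))
    return [(offset, subtract(cipher[offset:offset + len(crib)], crib))
            for offset in range(len(cipher) - len(crib) + 1)]

# Dragging the crib "EATING" across an invented ciphertext:
for offset, candidate in crib_drag("XEGMXLIVIWMHIRGI", "EATING"):
    print(offset, candidate)
```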

However, what if you don't know anything about the message, or, like me, you're just a bit rubbish at making good guesses as to how to extend your known fragment of text? We probably need to employ something a little more sophisticated, like one of the N-gram analyses I mentioned earlier. However, as this blog entry is already getting overlong I shall leave exposition of these rather more clever techniques for a future entry: watch this space, especially those of you whom I have just bored senseless by explaining the absolute basics.

Fixing Experts Exchange With Greasemonkey

Well, I'm a bit behind the curve here. I hadn't until now felt the need to install GreaseMonkey, but Experts Exchange (which frequently shows up in my Google search results) has started to blur the comments people make on the questions. You have to sign up to view them, and since I have a pathological aversion to such inconveniences as 30 seconds of registration I did what any good programmer would and spent 10 minutes writing a script to solve the problem.

The script plugs into GreaseMonkey and first removes the blurring, which is just done by an overlay, and then replaces the answer text with its ROT13ed equivalent, since that's the "encryption" they have opted to use! Anyway, the whole thing is basically a mashup of the scripts here and here, but I include the full source below for your convenience: do with it as you will.

// ==UserScript==
// @name ExpertsExchangeFilter 2
// @namespace All
// @description Remove Experts Exchange Stuff
// @include*
// @include*
// ==/UserScript==

function rot13(src) {
    var dst = new String('');
    var b;
    var t = new String('');
    var clear = 0;
    for (var ctr = 0; ctr < src.length; ctr++) {
        b = src.charCodeAt(ctr);
        if (60 == b || 91 == b) { // '<' or '[' starts a tag: stop rotating
            clear = 1;
        }
        if (!clear) {
            if (((b > 64) && (b < 78)) || ((b > 96) && (b < 110))) {
                b = b + 13;
            } else if (((b > 77) && (b < 91)) || ((b > 109) && (b < 123))) {
                b = b - 13;
            }
        }
        t = String.fromCharCode(b);
        dst = dst.concat(t);
        if (b == 62 || b == 93) { // '>' or ']' ends the tag
            clear = 0;
        }
    }
    return dst;
}

var a = document.getElementsByTagName('div');
for (var i = 0; i < a.length; i++) {
    if (a[i].className == 'infoBody') {
        a[i].removeChild(a[i].childNodes[1]);
    } else if (a[i].className == 'answerBody quoted') {
        a[i].innerHTML = rot13(a[i].innerHTML);
    }
}

Frustration + Lazyweb = Results

OK, to follow up on my last post about the quirks of XMLHTTPRequest, fuzzyman very kindly provided most of the solution I needed.

What I was trying to accomplish is optional HTTP authentication: that is, a resource logs you in if your credentials are correct, but if they aren't present then it just lets you go on as an anonymous user. This is essential if you are developing, say, a web shop: if you want to offer personalized item selections you need to request login, but if you require authentication just to browse the site you've lost a good LARGE_NUMBER% of your customers right there.

However, as fuzzyman rightly pointed out, most browsers do not even bother to send the Authorization header unless they actually get a 401 on a page, even if credentials are explicitly provided (as in my use of XMLHTTPRequest). The solution to this is to "fool" the browser into thinking your site requires authentication by creating a dummy action that just requires authentication. The slight complication is that this action must be in the root directory! If you attempt to create the dummy action in a subdirectory, the browser may only send the authentication information thus forced into it when paths are accessed that appear to be in that directory. This means, for example, that if you have the authenticated action "/sessions/secret" then authorization info will be sent for "/sessions/foo", but will not be for "/products/list". Making an action like "/secret" works around this, although it is slightly ugly.

class SecretController < ApplicationController

  def index
    if authenticated?
      render :nothing => true
    else
      response.headers["WWW-Authenticate"] = 'Basic realm="Controllr"'
      render :nothing => true, :status => 401
    end
  end

end


So then in your user creation view and login view you will have a script block which forces the logged in / newly created user to log in. This example is for user creation (hence @user.password) and uses a modified version of Prototype (the de-facto Rails JS library) to perform the request. I had to modify Prototype to add support for using the username and password parameters of the underlying XMLHTTPRequest object, despite the fact that they are widely supported in practice.

new Ajax.Request("<%= url_for :controller => 'sessions', :action => 'secret' %>", {
    username: "<%= @user.username %>",
    password: "<%= @user.password %>",
    method: "get",
    asynchronous: true,
    onComplete: function() {
        window.location = "<%= url_for :action => 'index' %>";
    }
});
Yes, this does send the username and password in plaintext over the wire :-). However, this is OK since they are sent in the clear during signup anyway. I'm also currently using Basic authentication, which means that upon ANY login the username/password are vulnerable (albeit after Base64 decoding): I will probably change this at some point, but the Digest scheme I should be using requires a bit of server side state to prevent replay attacks so is a little trickier to implement.
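To see just how thin Basic authentication's protection is, here is a Python sketch of what actually travels in the Authorization header (the credentials are invented):

```python
import base64

# The Authorization header for Basic auth is just "username:password"
# base64-encoded, not encrypted:
header = "Basic " + base64.b64encode(b"alice:s3cret").decode("ascii")
print(header)
print(base64.b64decode(header.split()[1]))  # trivially recovers the secret
```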

Right, I should probably try and get some revision done now :-). If only I could answer on Ruby on Rails instead of complexity theory...

Global Hotkeys With .NET

So, last year while working at Resolver Systems I worked with the author of Movable Python, which is a fairly neat application that lets you carry Python around on a USB stick. As an intellectual exercise I reimplemented the core functions in C# with a slightly less eccentric interface (no offence meant, Michael!), but was surprised to find I had to roll my own code to set up global hotkeys to run scripts, which was a nice feature of Movable I wanted to try and add.

The Win32 APIs for this are themselves pretty grungy (all sorts of nastiness with bitflags and message loops), but I think I've been fairly successful in abstracting that away in this simple class, which you can get from here. With this, you can set up a hotkey like this:

Hotkey hk = new Hotkey();
hk.KeyCode = Keys.D1;
hk.Windows = true;
hk.Pressed += delegate { Console.WriteLine("Windows+1 pressed!"); };

if (!hk.GetCanRegister(myForm))
{ Console.WriteLine("Whoops, looks like attempts to register will fail or throw an exception, show an error/visual user feedback"); }
else
{ hk.Register(myForm); }

// .. later, at some point
if (hk.Registered)
{ hk.Unregister(); }

Obviously this is a bit of a kitchen sink example to try and show every feature, but even then it's a damn sight easier than what you would use to set up even a basic usage manually with P/Invoke. If you are wondering about the requirement for a form parameter to Register and GetCanRegister, this is simply a requirement of the underlying APIs, probably due to the fact that they signal hotkey presses in the application message loop rather than via a callback.

I hope someone finds this useful and to facilitate that process I hereby place all the code into the public domain to be messed around with to your heart's content. Enjoy!

So, Does Anyone Even Use All These Darn CPU Instructions?

That's the question I found myself asking earlier this month when I was writing a simple compiler for an OCaml dialect called MinCaml. I don't know if you've ever taken a look at the Intel IA32 instruction references, but there are two volumes of detailed descriptions of all the instructions one of these CPUs provides: about 600 in total!

As this compiler backend was my first real go at writing out assembly (though I'd read quite a lot of it before), I found the task of potentially learning all these instructions extremely intimidating, and wished I had an assembly language "cheat sheet" where I could go to find the most common (and hence probably the most useful) instructions. So, today I got around to writing a little program that uses UDis86 to scan all the binaries in C:\Windows\System32 and my Cygwin /bin directory and spit out some instruction counts. Now, before we see the results, let me first say that my sample is biased in that it's all precompiled stuff designed to take advantage of only the lowest common denominator of the target CPU instruction sets. This means that the distribution below will probably be quite different from what you'd see if you scanned, say, your machine-specific Gentoo installation. Right, on to the big monstrous Excel pie chart:
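The original program used the UDis86 library directly, but the tallying step itself is simple enough to sketch as a shell pipeline. This is a hypothetical equivalent, not my actual program: it pulls the mnemonic column out of `objdump -d`-style disassembly and counts occurrences, with a couple of inlined sample lines standing in for real output so the sketch is self-contained:

```shell
# Hypothetical sketch of the tallying step (the real program used UDis86).
# Two sample objdump-style lines stand in for real disassembly output.
sample=$(printf '  401000:\t55\tpush   %%rbp\n  401001:\t48 89 e5\tmov    %%rsp,%%rbp\n  401004:\t53\tpush   %%rbx\n')

# Field 3 of each tab-separated disassembly line starts with the mnemonic
counts=$(printf '%s\n' "$sample" \
  | awk -F'\t' '/^[ ]+[0-9a-f]+:/ && NF >= 3 { split($3, op, " "); print op[1] }' \
  | sort | uniq -c | sort -rn)
echo "$counts"

# In practice you would feed it real disassembly instead, e.g.:
#   for f in /bin/*; do objdump -d "$f" 2>/dev/null; done | awk ...
```

On the sample input this prints push with a count of 2 and mov with a count of 1, most frequent first.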

IA32 Instruction Incidence

I've also uploaded my raw data to my personal site, along with the program I used to get the results for anyone who is interested.

In the finest traditions of top 10 lists everywhere, let's give a quick rundown of the most popular instructions:

  1. add: is anyone really surprised by this?
  2. mov: again, it's fairly hard to get work done without moving data around
  3. push/pop: these instructions are commonly used for setting up/destroying function stack frames, hence their popularity despite IA32's register orientation
  4. invalid: d'oh! These are from areas of code where udis86 just barfs :-). Their high ranking could mean a lot of instructions are in use that it can't deal with, or (more likely) it is trying to disassemble executable regions of the binary that just don't have anything useful in them
  5. cmp: the absolute bedrock of branching on the IA32, but what is surprising is that it's so much higher than jz. Are there really that many pure numeric comparisons in programs?
  6. dec/inc: I'm fairly surprised that dec beats inc here, especially since sub is so much lower than add in my data: does anyone have a theory?
  7. xor/or: xor no doubt clocks in so high because of the "xor reg, reg" idiom used to zero out registers, but or is quite popular too for something which is really only good for bit flags and the odd if expression...

Well that was fun, in a geeky sort of way! I'd be quite interested in knowing the reasons behind some of the oddities above: maybe another blog entry is in order...?

Edit: an experienced reverse engineer, t_w, commented on Reddit that my count for add was quite high, and that it might be caused by counting junk code. It turns out that the byte sequence 0x0000 disassembles to an add instruction, which skews my results since the null byte is common padding for executable .text sections. However, after correcting for this (the chart above takes it into account), add still comes in first place (it had 83355055 counted occurrences before excluding this sequence, but still 45303805 after exclusion).

Building YAWS For Windows

Inspired by Yariv's Blog, where he talks about a framework for building web applications in Erlang, and my so far abortive attempts to get into Erlang, I decided to give it another go with Erlyweb. Erlyweb depends on YAWS (Yet Another Web Server), however, and this proved to be a bit of a pain to install since I'm being difficult and using Windows on my development machine. So, in order to help any other lost souls who try to duplicate this feat in the future, I'm recording the process (tested against YAWS 1.68):

  1. Install Erlang (obviously), and make sure it is in your PATH
  2. Install Cygwin with the Perl and GNU Make packages at minimum
  3. Unpack the latest YAWS release into your home directory
  4. Now, the first trickiness: there is a small error in the YAWS makefile, so open up yaws-X.XX\src\Makefile and, for the mime_types.erl target, change the first command's $(ERL) to "$(ERL)". The quotes mean that for those of us with Erlang installed in a path with spaces in it (such as Windows users who put it in Program Files) the erl executable will actually be found. If you don't follow this step you'll end up with an error like:

    /cygdrive/c/Program Files/Erlang/5.5.4/bin/erl -noshell -pa ../ebin -s mime_type_c compile
    make[1]: /cygdrive/c/Program: Command not found

  5. Follow the same process to add quotes around $(ERLC) in www\shopingcart\Makefile and www\code\Makefile (somewhat weirdly, every other use of $(ERL) and $(ERLC) has been quoted for us, suggesting this is just something they overlooked, rather than that running on Windows is a blasphemy)
  6. Whack open a Bash Cygwin shell and cd into the yaws-X.XX directory
  7. Do the usual "./configure; make" dance
  8. Open up the newly created yaws file in the bin subdirectory and change the last line so that $erl is in quotes, i.e. from this:

    ${RUN_ERL} "exec $erl $XEC"

    To this:

    ${RUN_ERL} 'exec "$erl" $XEC'

  9. From this point on I'm going to assume you need to do a local install: if you want to do your own thing, you can follow the instructions here, but you may need to adapt them based on what I'm going to talk about below. Anyway, run "make local_install" to do the install if you are following along at home
  10. Now, this is where it can get a bit confusing: although we just built YAWS under Cygwin, since we have a Windows binary of Erlang the paths in yaws.conf (which should have appeared in your home directory) must be Windows paths, but the makefile used Unix ones. Go in and fix all of those (for me, this meant putting "C:/Cygwin" in front of all of them)
  11. Point your web browser at https://localhost:4443 to see what you have wrought
  12. Sit back and take a deep breath to cleanse your soul of the accumulated Unix gunk
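The quoting and path fix-ups in steps 4, 5, 8, and 10 are mechanical enough to script with sed. Here's a hypothetical sketch; the demo files below are illustrative stand-ins, not the real YAWS Makefile or yaws.conf:

```shell
# Hypothetical demo of the fix-ups above; the demo files stand in for the
# real YAWS Makefile and yaws.conf.
printf 'mime_types.erl:\n\t$(ERL) -noshell -pa ../ebin -s mime_type_c compile\n' > Makefile.demo
printf 'logdir = /home/me/yaws_logs\ndocroot = /home/me/www\n' > yaws.conf.demo

# Steps 4/5: wrap bare $(ERL)/$(ERLC) in quotes so paths with spaces survive
sed 's/\$(ERLC)/"$(ERLC)"/g; s/\$(ERL)/"$(ERL)"/g' Makefile.demo > Makefile.fixed

# Step 10: prefix the Unix paths in yaws.conf with the Cygwin root
sed 's|= /|= C:/Cygwin/|' yaws.conf.demo > yaws.conf.fixed

grep ERL Makefile.fixed
cat yaws.conf.fixed
```

After running this, the Makefile command reads "$(ERL)" -noshell ... and the config paths start with C:/Cygwin/, which is exactly the state the manual edits above aim for.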

Here’s one I made earlier