• git-svn, and thoughts on Subversion

    We use Subversion for our revision control system, and it’s great. It’s certainly not the most advanced thing out there, but it has perhaps the best client support on every platform, and when you need to work with non-coders on Windows, Linux and Mac OS X, there are far better things to do than explain how to use the command line to people who’ve never heard of it before.

    However, I also really need to work offline. My usual modus operandi is working at a café without Internet access (thanks for still being in the stone age when it comes to data access, Australia), which pretty much rules out using Subversion, because I can’t do commits when I do the majority of my work. So, I used svk for quite a long time, and everything was good.

    Then, about a month ago, I got annoyed with svk screwing up relatively simple pushes and pulls for the last time. svk seems to work fine if you only track one branch and all you ever do is use its capabilities to commit offline, but the moment you start doing anything vaguely complicated like merges, or track both the trunk and a branch or two, it’ll explode. Workmates generally don’t like it when they see 20 commits arrive the next morning that totally FUBAR your repository.

    So, I started using git-svn instead. People who know me will understand that I have a hatred of crap user interfaces, and I have a special hatred of UIs that are different “just because”, which applies to git rather well. I absolutely refused to use tla for that reason—which thankfully never seems to be mentioned in distributed revision control circles anymore—and I stayed away from git for a long time because of its refusal to use conventional revision control terminology. git-svn in particular suffered even more from (ab)using different terminology than git, because you were intermixing Subversion jargon with git jargon. Sorry, you use checkout to revert a commit? And checkout also switches between branches? revert is like a merge? WTF? The five or ten tutorials that I found on the ‘net helped quite a lot, but since a lot of them told me to do things in different ways and I didn’t know what the subtle differences between the commands were, I went back to tolerating svk until it screwed up a commit for the very last time. I also tried really hard to use bzr-svn since I really like Bazaar (and the guys behind Bazaar), but it was clear that git-svn was stable and ready to use right now, whereas bzr-svn still had some very rough edges around it and wasn’t quite ready for production yet.

    However, now that I’ve got my head wrapped around git’s jargon, I’m very happy to say that it was well worth the time for my brain to learn it. Linus elevated himself from “bloody good” to “true genius” in my eyes for writing that thing in a week, and I now have a very happy workflow using git to integrate with svn.

    So, just to pollute the Intertubes more, here’s my own git-svn cheatsheet. I don’t know if this is the most correct way to do things (is there any “correct” way to do things in git?), but it certainly works for me:

    * initial repository import (svk sync):
    git-svn init https://foo.com/svn -T trunk -b branches -t tags
    git-svn fetch
    git checkout -b work trunk
    
    * pulling from upstream (svk pull):
    git-svn rebase
    
    * pulling from upstream when there's new branches/tags/etc added:
    git-svn fetch
    
    * switching between branches:
    git checkout branchname
    
    * svk/svn diff -c NNNN (NNNN here is a git commit; see the note after this list):
    git diff NNNN^!
    
    * committing a change:
    git add <files>
    git commit
    
    * reverting a change:
    git checkout path
    
    * pushing changes upstream (svk push):
    git-svn dcommit
    
    * importing svn:ignore:
    (echo; git-svn show-ignore) >> .git/info/exclude
    
    * uncommit:
    git reset <SHA1 to reset to>
    
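    A quick note on the diff -c entry above: the NNNN you give to git diff is a git commit, not a Subversion revision number. If all you know is the Subversion revision, git-svn can map it to the corresponding git commit. A small sketch (r1234 is just a made-up revision number):

    git-svn find-rev r1234                  # prints the git commit for Subversion revision 1234
    git diff $(git-svn find-rev r1234)^!    # diff that single commit, like svn diff -c 1234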

    Drop me an email if you have suggestions to improve those. About the only thing that I miss from svk is the great feature of being able to delete a filename from the commit message, which would unstage it from the commit. That was tremendously useful; it meant that you could git commit -a all your changes except one little file, simply by deleting one line. It’s much easier than tediously trying to git add thirteen files in different directories just so you can omit one file.
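
    One workaround in git is to stage everything and then unstage the one file before committing. A rough sketch, assuming a reasonably recent git (the path here is made up):

    git add -u                       # stage every modified, tracked file
    git reset -- path/to/one-file    # unstage just the file you want to leave out
    git commit                       # commit everything else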

    One tip for git: if your repository has top-level trunk/branches/tags directories, like this:

    trunk/
      foo/
      bar/
    branches/
      foo-experimental/
      bar-experimental/
    tags/
      foo-1.0/
      bar-0.5/
    

    That layout makes switching between the trunk and a branch of a project quite annoying, because while you can “switch” to (checkout) branches/foo-experimental/, git won’t let you checkout trunk/foo; it’ll only let you checkout trunk. This isn’t a big problem, but it does mean that your overall directory structure keeps changing: switching to trunk means that you have foo/ and bar/ directories, while switching to foo-experimental or bar-experimental omits those directories and gives you that branch’s contents at the top level instead. This ruins your git excludes and tends to cause general confusion, with old files being left behind when switching branches.

    Since many of us will only want to track one particular project in a Subversion repository rather than an entire tree (i.e. switch between trunk/foo and branches/foo-experimental), change your .git/config file from this:

    [svn-remote "svn"]
        url = https://mysillyserver.com/svn
        fetch = trunk:refs/remotes/trunk
        branches = branches/*:refs/remotes/*
        tags = tags/*:refs/remotes/tags/*
    

    to this:

    [svn-remote "svn"]
        url = https://mysillyserver.com/svn
        fetch = trunk/foo:refs/remotes/trunk
         ; ^ change "trunk" to "trunk/foo" as the first part of the fetch
        branches = branches/*:refs/remotes/*
        tags = tags/*:refs/remotes/tags/*
    

    Doing that will make git’s “trunk” branch track trunk/foo/ on your server rather than just trunk/, which is probably what you want. If you want to track other projects in the tree, it’s probably better to git-svn init another directory. Update: Oops, I forgot to thank Mark Rowe for help with this. Thanks Mark!
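
    If you do want a separate checkout per project, a rough sketch of what that might look like for a second project (reusing the example server and layout from above; bar-git is just a made-up directory name):

    git-svn init https://mysillyserver.com/svn -T trunk/bar -b branches -t tags bar-git
    cd bar-git
    git-svn fetch
    git checkout -b work trunk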

    As an aside, while I believe that distributed version control systems look like a great future for open-source projects, it’s interesting that DVCS clients are now starting to support Subversion, which now serves as a sort of lowest common denominator. (I’d call it the FAT32 of revision control systems, but that’d be a bit unkind… worse-is-better, perhaps?) Apart from the more “official” clients such as command-line svn and TortoiseSVN, it’s also supported by svk, Git, Bazaar, Mercurial, and some other great GUI clients on Mac OS X and Windows. Perhaps Subversion will become a de facto repository format that everyone else can push and pull between, since it has the widest range of client choice.

  • Speaking at DevWorld 2008

    For Mac developers in Australia, I’ll be speaking at the inaugural conference of DevWorld 2008, which will be held from September 29-30 this year in Melbourne. You can check out the full list of sessions; I’ll be giving the very last talk on that page: The Business of Development.

    Coding is just one part of what makes a great product, but there’s always so much else to do and learn. So, what can you do to help ship a great product—besides coding—if you’re primarily a developer? In this talk, learn about important commercial and business issues that you, as a coder, can help to define and shape in your company, such as licensing and registration keys, adopting new technologies, software updates, handling support, your website, and crash reports.

    Note that DevWorld 2008 is unfortunately only open to staff and students at an Australian university (“AUC member university”, to be exact), so unless you’re a student right now at one of those Unis, you’ll have to miss out on this incredibly exciting opportunity to hear me talk at you for an hour (snort). I hear the story behind this is that if this year’s DevWorld is successful, next year’s will be a more standard open conference. Anyway, hopefully catch some of you there in September!

  • Solid State Society

    The traditional hard disk that’s likely to be in your computer right now is made out of a few magnetic circular platters, with a head attached to an actuator arm above the platter that reads and writes the data to it. The head’s such a microscopic distance away from the platter that it’s equivalent to a Boeing 747 flying at 600 miles per hour about six inches off the ground. So, when you next have a hard disk crash (and that’s when, not if), be amazed that the pilot in the 747 flying six inches off the ground didn’t crash earlier.

    Enter solid-state drives (SSDs). Unlike hard disks, SSDs contain no moving parts, and are made out of solid-state memory instead. This has two big advantages: first, SSDs don’t crash (although this is a small lie—more on that later). Second, since SSDs are made out of memory, getting to a particular piece of data on the disk is much faster than on a hard disk. In other words, they have random access times that are orders of magnitude faster than their magnetic cousins. Hard disks need to wait for the platter to rotate around before the head can read the data off the drive; SSDs simply fetch the data directly from a memory column & row. In modern desktop computers, random access I/O is often the main performance bottleneck, so if you can speed that up by an order of magnitude, you could potentially make things a lot faster.

    Unfortunately, while SSDs are orders of magnitude faster than a hard disk for random access, they’re also an order of magnitude more expensive. That was until May this year, when this thing appeared on the scene:



    (Image courtesy of itechnews.net.)

    That boring-looking black box is a 120GB Super Talent Masterdrive MX. As far as SSDs go, the Masterdrive MX is not particularly remarkable for its performance: it has a sustained write speed of just 40MB per second, which is a lot lower than many other SSDs and typical hard disks.

    However, it’s a lot cheaper than most other SSDs: the 120GB drive is USD$699. That’s not exactly cheap (you could easily get a whopping two terabytes of data if you spent that money on hard disks), but it’s cheap enough that people with more dollars than sense might just go buy it… people like me, for instance. I’ve had that SSD sitting in my lovely 17” MacBook Pro for the past two months, as an experiment with solid-state drives. So, how’d it go?

    I’ll spare you the benchmarks: if you’re interested in the raw numbers, there are a number of decent Masterdrive MX reviews floating around the Web now. I was more interested in the subjective performance of the drive. Does it feel faster for everyday tasks? Is it simply a better experience?

    The overall answer is: yes, it’s better, but it’s not so much better that I’d buy the SSD again if I could go back in time. With a hard disk, things occasionally get slow. I’m sure I’m not the only one to witness the Spinning Beachball of Death while I wait 5-10 seconds for the hard disk to finally deliver the I/O operations to the programs that want them completed. With a hard disk, launching a program from the dock would sometimes take 20-30 seconds under very heavy I/O load, such as when Spotlight’s indexing the disk and Xcode’s compiling something. With the SSD, those delays just went away: I can’t even remember a time where I saw the evil Beachball due to system I/O load.

    The most notable difference was in boot time. A lot of people love how Mac OS X is pretty fast to boot (and I agree with them), but when you go to log in, it’s a very different story. If, like me, you’ve got about ten applications and helper programs that launch when you log in, it can take literally minutes before Mac OS X becomes responsive. I clocked my MacBook Pro at taking just over a minute to log in with my current setup on a hard disk (which launches a mere half a dozen programs); the SSD took literally about 5 seconds. 5… 4… 3… 2… 1… done. What is thy bidding, my master? I wish I’d made a video to demonstrate the difference, because it’s insanely faster when you see it. 10x faster login speed is nothing to sneeze at.

    However, aside from boot-up time, normal day-to-day operation really was about the same. Sure, it was nice that applications launched faster and it booted so fast that you don’t need to make a coffee anymore when logging in, but those were the only major performance differences that I saw. Mac OS X and other modern operating systems cache data so aggressively that I guess most of the data you’ll read and write will usually hit the cache first anyway. The lower sustained write performance didn’t end up being a problem at all: the only time I noticed it was when I was copying large torrented files around on the same drive, but that wasn’t slow enough for me to get annoyed. The one benchmark that I really cared about—compiling—turned out to take exactly as long on the SSD as the hard disk. I thought that random I/O write speed might be a bottleneck with gcc; it turns out that’s not true at all. (I’ll also point out that I was using Xcode to drive most of the compilation benchmarks, which is one of the fastest build systems I’ve seen that uses gcc; no spastic libtool/automake/autoconf/autogoat insanity here.) Sorry to disappoint the other coders out there.

    Aside from performance, the total silence of the SSD was a nice bonus, but it’s not something that you can’t live without once you’ve experienced it. In most environments, there’s enough background noise that you usually don’t hear the quiet hard disk hum anyway, so the lack of noise from the SSD doesn’t really matter. It was, however, very cool knowing that you could shake your laptop while it was on without fear of causing damage to your data. I’m usually pretty careful about moving my laptop around while it’s on, but with an SSD in there, I was quite happy to pick up the machine with one hand and wave it around in the air (as much as you can with a 17” MacBook Pro, anyway).

    So, with all the nice small advantages of the SSD, you may be wondering why it’s no longer in my MacBook Pro. Here are some reviews of the disk on newegg.com that may give you a hint:



    It turns out those reviewers were right. Two months after I bought it, the Masterdrive MX completely died, which seemed like a pretty super talent for an SSD. The Mac didn’t even recognise the disk; couldn’t partition it; couldn’t format it. So much for SSDs not crashing, eh?

    While SSDs don’t crash in the traditional manner that a hard disk does, there are plenty of other reasons why one might die. RAM’s known to go wonky; there’s no reason why that can’t happen to solid-state memory too. Maybe the SATA controller on the disk died. No matter what the cause, you have the same problem as a traditional hard disk crash: unless you have backups, you’re f*cked. Plus, since I was on holiday down at Mount Hotham, my last backup was two weeks old, made just before I left for holiday. All my Mass Effect saved games went kaboom, and I’d just finished the damn game. André not very happy, grrr.

    So, what’s the PowerPoint summary?

    • The Super Talent Masterdrive MX would be a great buy if it didn’t friggin’ crash and burn your data with scary reliability. Even if you’re a super storage geek, avoid this drive until they have the reliability problems sorted out.
    • The Powerbook Guy on Market St in San Francisco is awesome. They were the guys who installed the SSD in my MacBook Pro, and were extremely fast (two-hour turnaround time), professional, and had reasonable prices. (I would’ve done it myself, but I’d rather keep the warranty on my A$5000 computer, thanks.) Plus, they sold me the coolest German screwdriver ever for $6. (“This one screwdriver handles every single screw in a MacBook Pro”. Sold!)
    • The MacCentric service centre in Chatswood in Sydney is equally awesome. When the SSD died, they quoted me the most reasonable price I had ever seen for a hard disk swap in a MacBook Pro (have you seen how many screws that thing has?), and also had a two-hour turnaround time. Yeah, I know, decent Mac service in Australia! Woooooah.
    • Back up.
    • SSDs are great. I think they’ll complement rather than replace hard disks in the near future, and possibly replace them entirely if the price tumbles down enough. Next-generation SSDs are going to completely change the storage and filesystem games as they do away with the traditional stupid block-based I/O crap, and become directly addressable like RAM is today. Just don’t believe the hype about SSDs not crashing.

    I, for one, welcome the solid state society. Bring on the future!

  • Mac Developer Roundtable #11

    The Mac Developer Network features an excellent series of podcasts aimed at both veteran Mac developers and those new to the platform who are interested in developing for the Mac. If you’re a current Mac coder and haven’t seen them yet, be sure to check them out. I’ve been listening to the podcasts for a long time, and they’re always both informative and entertaining. (Infotainment, baby.)

    Well, in yet another case of “Wow, do I really sound like that?”, I became a guest on The Mac Developer Roundtable episode #11, along with Marcus Zarra, Jonathan Dann, Bill Dudney, and our always-eloquent and delightfully British host, Scotty. The primary topic was Xcode 3.1, but we also chatted about the iPhone NDA (c’mon Apple, lift it already!) and… Fortran. I think I even managed to sneak in the words “Haskell” and “Visual Studio” in there, which no doubt left the other show guests questioning my sanity. I do look forward to Fortran support in Xcode 4.0.

    It was actually a small miracle that I managed to be on the show at all. Not only was the podcast recording scheduled at the ungodly time of 4am on a Saturday morning in Australian east-coast time, but I was also in transit from Sydney to the amazing alpine village of Dinner Plain the day before the recording took place. While Dinner Plain is a truly extraordinary village that boasts magnificent ski lodges and some of the best restaurants I’ve ever had the pleasure of eating at, it’s also rather… rural. The resident population is somewhere around 100, the supermarket doesn’t even sell a wine bottle opener that doesn’t suck, and Vodafone has zero phone reception there. So, it was to my great surprise that I could get ADSL hooked up to the lodge there, which was done an entire two days before the recording. Of course, since no ADSL installation ever goes smoothly, I was on the phone to iPrimus tech support1 at 10pm on Friday night, 6 hours before the recording was due to start. All that effort for the privilege of being able to drag my sleepy ass out of bed a few hours later, for the joy of talking to other Mac geeks about our beloved profession. But, I gotta say, being able to hold an international conference call over the Intertubes from a tiny little village at 4am in the morning, when snow is falling all around you… I do love technology.

    Of course, since I haven’t actually listened to the episode yet, maybe it’s all a load of bollocks and I sound like a retarded hobbit on speed. Hopefully not, though. Enjoy!

    1 Hey, I like Internode and Westnet as much as every other Australian tech geek, but they didn’t service that area, unfortunately.

  • Talks for 2008

    I’ve given a few talks so far this year, which I’ve been kinda slack about and haven’t put up any slides for yet. So, if you’re one of the zero people who’ve been eagerly awaiting my incredibly astute and sexy opinions, I guess today’s your lucky day, punk!

    Earlier this year, on January 2, 2008, Earth Time, I gave a talk at Kiwi Foo Camp in New Zealand, also known as Baa Camp. (Harhar, foo, baa, get it?) The talk was titled “Towards the Massive Media Matrix”, with the MMM in the title being a pun on the whole WWW three-letter acronym thing. (Credit for the MMM acronym should go to Silvia Pfeiffer and Conrad Parker, who coined the term about eight years ago :). The talk was about the importance of free and open standards on the Web, what’s wrong with the current status quo of Web video basically being Flash video, and the complications involved in trying to find a solution that satisfies everyone. I’m happy to announce that the slides for the talk are now available for download; you can also grab the details off my talks page.

    A bit later this year, in March, Manuel Chakravarty and I were invited to the fp-syd functional programming user group in Sydney to give a talk about… monads! As in, that scary Haskell thing. We understand that writing a monad tutorial seems to be a rite of passage for all Haskell programmers, so it was rather stereotypical of the “Haskell guys” in the group to give a talk about them, but the talk seemed to be well-received.

    Manuel gave a general introduction to monads: what they are, how to use them, and why they’re actually a good thing rather than simply another hoop you have to jump through if you just want to do some simple I/O in Haskell. I focused on a practical use case of monads that didn’t involve I/O (OMG!), giving a walkthrough on how to use Haskell’s excellent Parsec library to perform parsing tasks, and why you’d want to use it instead of writing a recursive descent parser yourself, or resort to the insanity of using lex and yacc. I was flattered to find out that after my talk, Ben Leppmeier rewrote the parser for DDC (the Disciplined Disciple Compiler) to use Parsec, rather than his old system of Alex and Happy (Haskell’s equivalents of lex and yacc). So, I guess I managed to make a good impression with at least one of our audience members, which gave me a nice warm fuzzy feeling.

    You can find both Manuel’s and my slides online at the Google Groups files page for fp-syd, or you can download the slides directly from my own site. Enjoy.

    Finally, during my three-week journey to the USA last month in June, I somehow got roped into giving a talk at Galois Inc. in Portland, about pretty much whatever I wanted. Since the audience was, once again, a Haskell and functional programming crowd, I of course chose to give a talk about an object-oriented language instead: Objective-C, the lingua franca of Mac OS X development.

    If you’re a programming language geek and don’t know much about Objective-C, the talk should hopefully interest you. Objective-C is a very practical programming language that has a number of interesting features from a language point of view, such as opt-in garbage collection, and a hybrid of a dynamically typed runtime system with static type checking. If you’re a Mac OS X developer, there’s some stuff there about the internals of the Objective-C object and runtime system, and a few slides about higher-order messaging, which brings much of the expressive power of higher-order functions in other programming languages to Objective-C. Of course, if you’re a Mac OS X developer and a programming language geek, well, this should be right up your alley :). Once again, you can download the slides directly, or off my talks page.

  • The Long Road to RapidWeaver 4

    Two years ago, I had a wonderful job working on a truly excellent piece of software named cineSync. It had the somewhat simple but cheery job of playing back movies in sync across different computers, letting people write notes about particular movie frames and scribble drawings on them. (As you can imagine, many of the drawings that we produced when testing cineSync weren’t really fit for public consumption.) While it sounds like a simple idea, oh boy did it make some people’s lives a lot easier and a lot less stressful. People used to do crazy things like fly from city to city just to be in the same room with another guy for 30 minutes to talk about a video that they were producing; sometimes they’d be flying two or three times per week just to do this. Now, they just fire up cineSync instead and get stuff done in 30 minutes, instead of 30 minutes and an extra eight hours of travelling. cineSync made the time, cost and stress savings probably an order of magnitude or two better. As a result, I have immense pride and joy in saying that it’s being used on virtually every single Hollywood movie out there today (yep, even Iron Man). So, hell of a cool project to work on? Tick ✓.

    Plus, it was practically a dream coding job when it came to programming languages and technologies. My day job consisted of programming with Mac OS X’s Cocoa, the most elegant framework I’ve ever had the pleasure of using, and working with one of the best C++ cross-platform code bases I’ve seen. I also did extensive hacking in Erlang for the server code, so I got paid to play with one of my favourite functional programming languages, which some people spend their entire life wishing for. And I got schooled in just so much stuff: wielding C++ right, designing network protocols, learning about software process, business practices… so, geek nirvana? Tick ✓.

    The ticks go on: great workplace ✓; fantastic people to work with ✓; being privy to the latest movie gossip because we were co-located with one of Australia’s premier visual effects companies ✓; sane working hours ✓; being located in Surry Hills and sampling Crown St for lunch nearly every day ✓; having the luxury of working at home and at cafés far too often ✓. So, since it was all going so well, I decided that it was obviously time to make my life a lot harder, so I resigned, set up my own little software consulting company, and started working on Mac shareware full-time.

    Outside of the day job on cineSync, I was doing some coding on a cute little program to build websites named RapidWeaver. RapidWeaver’s kinda like Dreamweaver, but a lot simpler (and hopefully just as powerful), and it’s not stupidly priced. Or, it’s kinda like iWeb, but a lot more powerful, with hopefully most of the simplicity. I first encountered RapidWeaver as a normal customer and paid my $40 for it since I thought it was a great little program, but after writing a little plugin for it, I took on some coding tasks.

    And you know what? The code base sucked. The process sucked. Every task I had to do was a chore. When I started, there wasn’t even a revision control system in place: developers would commit their changes by emailing entire source code files or zip archives to each other. There was no formal bug tracker. Not a day went by when I shook my fist, lo, with great anger, and thunder and lightning appeared. RapidWeaver’s code base had evolved since version 1.0 from nearly a decade before, written by multiple contractors with nobody being an overall custodian of the code, and it showed. I saw methods that were over a thousand lines long, multithreaded bugs that would make Baby Jesus cry, method names that were prefixed with Java-style global package namespacing (yes, we have method names called com_rwrp_currentlySelectedPage), block nesting that got so bad that I once counted thirteen tabs before the actual line of code started, dozens of lines of commented-out code, classes that had more than a hundred and twenty instance variables, etc, etc. Definitely no tick ✗.

    But the code—just like PHP—didn’t matter, because the product just plain rocked. (Hey, I did pay $40 for it, which surprised me quite a lot because I moved to the Mac from the Linux world, and sneered at most things at the time that cost more than $0.) Despite being a tangled maze of twisty paths, the code worked. I was determined to make the product rock more. After meeting the RapidWeaver folks at WWDC 2007, I decided to take the plunge and see how it’d go full-time. So, we worked, and we worked hard. RapidWeaver 3.5 was released two years ago, in June 2006, followed by 3.5.1. 3.6 followed in May 2007, followed by a slew of upgrades: 3.6.1, 3.6.2, 3.6.3… all the way up to 3.6.7. Slowly but surely, the product improved. On the 3rd of August 2007, we created the branch for RapidWeaver 3.7, which we didn’t realise yet was going to be such a major release that it eventually became 4.0.

    And over time, it slowly dawned on me just how many users we had. A product that I initially thought had a few thousand users was much closer to about 100,000 users. I realised I was working on something that was going to affect a lot of people, so when we decided to call it version 4.0, I was a little nervous. I stared at the code base and it stared back at me; was it really possible to ship a major new revision of the product, add features to it, and maintain my sanity?

    I decided in my naïvety to refactor a huge bunch of things. I held conference calls with other developers to talk about what needed to change in our plugin API, and how I was going to redo half of the internals so it wouldn’t suck anymore. Heads nodded; I was happy. After about two weeks of being pleased with myself and ripping up many of our central classes, reality set in as I realised that I was very far behind on implementing all the new features, because those two weeks were spent on nothing else but refactoring. After doing time estimates for all the tasks we had planned for 4.0 and finding that the total came to within about a day of the target date, I knew we were completely screwed, because nobody sane does time estimation for software without multiplying the total estimate by about 1.5-2x. 4.0 was going to take twice as long as we thought it would, and since the feature list was not fixed, it was going to take even longer than that.

    So, the refactoring work was dropped, and we concentrated on adding the new required features, and porting the bugfixes from the 3.6 versions to 4.0. So, now we ended up with half-refactored code, which is arguably just as bad as no refactored code. All the best-laid plans that I had to clean up the code base went south, as we soldiered on towards feature completion for 4.0, because we simply didn’t have the time. I ended up working literally up until the last hour to get 4.0 to code completion state, and made some executive decisions to pull some features that were just too unstable in their current state. Quick Look support was pulled an hour and a half before the release as we kept finding and fixing bugs with it that crashed RapidWeaver while saving a document, which was a sure-fire way to lose customers. Ultimately, pulling Quick Look was the correct decision. (Don’t worry guys, it’ll be back in 4.0.1, without any of that crashing-on-save shenanigans.)

    So, last Thursday, it became reality: RapidWeaver 4.0 shipped out the door. While I was fighting against the code, Dan, Aron, Nik and Ben were revamping the website, which now looks absolutely bloody gorgeous, all the while handling the litany of support requests and being their usual easygoing sociable selves on the Realmac forums. I was rather nervous about the release: did we, and our brave beta testers, catch all the show-stopper bugs? The good news is that it seems to be mostly OK so far, although no software is ever perfect, so there’s no doubt we’ll be releasing 4.0.1 soon (if only to re-add Quick Look support).


    A day after the release, it slowly dawned on me that the code for 4.0 was basically my baby. Sure, I’d worked on RapidWeaver 3.5 and 3.6 and was the lead coder for that, but the 3.5 and 3.6 goals were much more modest than 4.0. We certainly had other developers work on 4.0 (kudos to Kevin and Josh), but if I had a bad coding day, the code basically didn’t move. So all the blood, sweat and tears that went into making 4.0 was more-or-less my pride and my responsibility. (Code-wise, at least.)


    If there’s a point to this story, I guess that’d be it: take pride and responsibility in what you do, and love your work. The 4.0 code base still sucks, sitting there sniggering at me in its half-refactored state, but we’ve finally suffered the consequences of its legacy design for long enough that we have no choice but to give it a makeover with a vengeance for the next major release. Sooner or later, everyone pays the bad code debt.

    So, it’s going to be a lot more hard work on the road to 4.1, as 4.1 becomes the release that we all really wanted 4.0 to be. But I wouldn’t trade this job for pretty much anything else in this world right now, because it’s a great product loved by a lot of customers, and making RapidWeaver better isn’t just a job anymore, it’s a need. We love this program, and we wanna make it so good that you’ll just have to buy the thing if you own a Mac. One day, I’m sure I’ll move on from RapidWeaver to other hopefully great things, but right now, I can’t imagine doing anything else. We’ve come a long way from RapidWeaver 3.5 in the past two years, and I look forward to the long road ahead for RapidWeaver 5. Tick ✓.

  • The Long Road to RapidWeaver 4

    Two years ago, I had a wonderful job working on a truly excellent piece of software named cineSync. It had the somewhat simple but cheery job of playing back movies in sync across different computers, letting people write notes about particular movie frames and scribbling drawings on them. (As you can imagine, many of the drawings that we produced when testing cineSync weren’t really fit for public consumption.) While it sounds like a simple idea, oh boy did it make some people’s lives a lot easier and a lot less stressful. People used to do crazy things like fly from city to city just to be the same room with another guy for 30 minutes to talk about a video that they were producing; sometimes they’d be flying two or three times per week just to do this. Now, they just fire up cineSync instead and get stuff done in 30 minutes, instead of 30 minutes and an extra eight hours of travelling. cineSync made the time, cost and stress savings probably an order of magnitude or two better. As a result, I have immense pride and joy in saying that it’s being used on virtually every single Hollywood movie out there today (yep, even Iron Man). So, hell of a cool project to work on? Tick ✓.

    Plus, it was practically a dream coding job when it came to programming languages and technologies. My day job consisted of programming with Mac OS X’s Cocoa, the most elegant framework I’ve ever had the pleasure of using, and working with one of the best C++ cross-platform code bases I’ve seen. I also did extensive hacking in Erlang for the server code, so I got paid to play with one of my favourite functional programming languages, which some people spend their entire life wishing for. And I got schooled in just so much stuff: wielding C++ right, designing network protocols, learning about software process, business practices… so, geek nirvana? Tick ✓.

    The ticks go on: great workplace ✓; fantastic people to work with ✓; being privy to the latest movie gossip because we were co-located with one of Australia’s premiere visual effects company ✓; sane working hours ✓; being located in Surry Hills and sampling Crown St for lunch nearly every day ✓; having the luxury of working at home and at cafés far too often ✓. So, since it was all going so well, I had decided that it was obviously time to make a life a lot harder, so I resigned, set up my own little software consulting company, and start working on Mac shareware full-time.

    Outside of the day job on cineSync, I was doing some coding on a cute little program to build websites named RapidWeaver. RapidWeaver’s kinda like Dreamweaver, but a lot more simple (and hopefully just as powerful), and it’s not stupidly priced. Or, it’s kinda like iWeb, but a lot more powerful, with hopefully most of the simplicity. I first encountered RapidWeaver as a normal customer and paid my $40 for it since I thought it was a great little program, but after writing a little plugin for it, I took on some coding tasks.

    And you know what? The code base sucked. The process sucked. Every task I had to do was a chore. When I started, there wasn’t even a revision control system in place: developers would commit their changes by emailing entire source code files or zip archives to each other. There was no formal bug tracker. Not a day went by when I shook my fist, lo, with great anger, and thunder and lightning appeared. RapidWeaver’s code base had evolved since version 1.0 from nearly a decade before, written by multiple contractors with nobody being an overall custodian of the code, and it showed. I saw methods that were over thousand lines long, multithreaded bugs that would make Baby Jesus cry, method names that were prefixed with with Java-style global package namespacing (yes, we have method names called com_rwrp_currentlySelectedPage), block nesting that got so bad that I once counted thirteen tabs before the actual line of code started, dozens of lines of commented-out code, classes that had more than a hundred and twenty instance variables, etc, etc. Definitely no tick ✗.

    But the code—just like PHP—didn’t matter, because the product just plain rocked. (Hey, I did pay $40 for it, which surprised me quite a lot because I moved to the Mac from the Linux world, and sneered off most things at the time that cost more than $0.) Despite being a tangled maze of twisty paths, the code worked. I was determined to make the product rock more. After meeting the RapidWeaver folks at WWDC 2007, I decided to take the plunge and see how it’d go full-time. So, we worked, and we worked hard. RapidWeaver 3.5 was released two years ago, in June 2006, followed by 3.5.1. 3.6 followed in May 2007, followed by a slew of upgrades: 3.6.1, 3.6.2, 3.6.3… all the way up to 3.6.7. Slowly but surely, the product improved. On the 3rd of August 2007, we created the branch for RapidWeaver 3.7, which we didn’t realise yet was going to be such a major release that it eventually became 4.0.

    And over time, it slowly dawned on me just how many users we had. A product that I initially thought had a few thousand users was much closer to about 100,000 users. I realised I was working on something that was going to affect a lot of people, so when we decided to call it version 4.0, I was a little nervous. I stared at the code base and it stared back at me; was it really possible ship a major new revision of a product and add features to it, and maintain my sanity?

    I decided in my naïvety to refactor a huge bunch of things. I held conference calls with other developers to talk about what needed to change in our plugin API, and how I was going to redo half of the internals so it wouldn’t suck anymore. Heads nodded; I was happy. After about two weeks of being pleased with myself and ripping up many of our central classes, reality set in as I realised that I was very far behind on implementing all the new features, because those two weeks were spent on nothing else but refactoring. After doing time estimation on all the tasks we had planned out for 4.0 and realising that we were about within one day of the target date, I realised we were completely screwed, because nobody sane does time estimation for software without multiplying the total estimate by about 1.5-2x longer. 4.0 was going to take twice as long as we thought it would, and since the feature list was not fixed, it was going to take even longer than that.

    So, the refactoring work was dropped, and we concentrated on adding the new required features, and porting the bugfixes from the 3.6 versions to 4.0. So, now we ended up with half-refactored code, which is arguably just as bad as no refactored code. All the best-laid plans that I had to clean up the code base went south, as we soldiered on towards feature completion for 4.0, because we simply didn’t have the time. I ended up working literally up until the last hour to get 4.0 to code completion state, and made some executive decisions to pull some features that were just too unstable in their current state. Quick Look support was pulled an hour and a half before the release as we kept finding and fixing bugs with it that crashed RapidWeaver while saving a document, which was a sure-fire way to lose customers. Ultimately, pulling Quick Look was the correct decision. (Don’t worry guys, it’ll be back in 4.0.1, without any of that crashing-on-save shenanigans.)

    So, last Thursday, it became reality: RapidWeaver 4.0 shipped out the door. While I was fighting against the code, Dan, Aron, Nik and Ben were revamping the website, which is now absolutely bloody gorgeous, all the while handling the litany of support requests and being their usual easygoing, sociable selves on the Realmac forums. I was rather nervous about the release: did we, and our brave beta testers, catch all the show-stopper bugs? The good news is that it seems to be mostly OK so far, although no software is ever perfect, so there’s no doubt we’ll be releasing 4.0.1 soon (if only to re-add Quick Look support).


    A day after the release, it slowly dawned on me that the code for 4.0 was basically my baby. Sure, I’d worked on RapidWeaver 3.5 and 3.6 and was the lead coder for that, but the 3.5 and 3.6 goals were much more modest than 4.0. We certainly had other developers work on 4.0 (kudos to Kevin and Josh), but if I had a bad coding day, the code basically didn’t move. So all the blood, sweat and tears that went into making 4.0 was more-or-less my pride and my responsibility. (Code-wise, at least.)


    If there’s a point to this story, I guess that’d be it: take pride and responsibility in what you do, and love your work. The 4.0 code base still sucks, sitting there sniggering at me in its half-refactored state, but we’ve finally suffered the consequences of its legacy design for long enough that we have no choice but to give it a makeover with a vengeance for the next major release. Sooner or later, everyone pays the bad code debt.

    So, it’s going to be a lot more hard work on the road to 4.1, as 4.1 becomes the release that we all really wanted 4.0 to be. But I wouldn’t trade this job for pretty much anything else in this world right now, because it’s a great product loved by a lot of customers, and making RapidWeaver better isn’t just a job anymore, it’s a need. We love this program, and we wanna make it so good that you’ll just have to buy the thing if you own a Mac. One day, I’m sure I’ll move on from RapidWeaver to other hopefully great things, but right now, I can’t imagine doing anything else. We’ve come a long way from RapidWeaver 3.5 in the past two years, and I look forward to the long road ahead for RapidWeaver 5. Tick ✓.

  • The Year in Movies, 2007

    It seems that my exercise to keep track of every single movie I watched last year actually worked. Here’s how 2007 turned out for me:

    • 5th of January: Blood Diamond (Hoyts Broadway, 8/10)
    • 1st of February: Perfume (Hoyts Cinema Paris, 7/10)
    • 4th of February: The Fountain (Hoyts George St City, 8/10)
    • 10th of February: Fight Club (DVD, repeat viewing, 8/10)
    • 11th of February: Pan’s Labyrinth (Hoyts George St City, 7/10)
    • 10th of March: Quand j’étais chanteur, a.k.a. The Singer (Palace Academy Paddington, 6.5/10)
    • 18th of March: Hors de Prix, a.k.a. Priceless (Palace Academy Leichhardt, 7/10)
    • 24th of March: The Illusionist (Greater Union Tuggerah, 7/10)
    • 3rd of April: Hot Fuzz (Hoyts Fox Studios, 8.5/10).
    • 10th of April: 300 (Hoyts Westfield Chatswood, 7.5/10).
    • 7th of May: La Science des rêves, a.k.a. The Science of Sleep (Hayden Orpheum, 7/10).
    • 12th of May: Spider-man 3 (Hoyts Westfield Chatswood, 7.5/10)
    • 22nd of May: Shooter (Greater Union Macquarie, 7/10).
    • 27th of May: Tales from Earthsea (Dendy Newtown, 6.5/10).
    • 30th of May: Pirates of the Caribbean: At World’s End (Hayden Orpheum, 7/10).
    • 27th of June: Knocked Up (AMC Pacific Theatres, The Grove, Los Angeles, 8/10).
    • 29th of June: Blades of Glory (Air New Zealand LAX to SYD, 8/10).
    • 1st of July: Transformers (Hoyts Broadway, 8/10)
    • 8th of July: Ocean’s Thirteen (Greater Union Bondi Junction, 7/10).
    • 17th of July: Harry Potter and the Order of the Phoenix (special groovy RSP screening at Hoyts, Fox Studios, 7/10).
    • 2nd of August: Notes on a Scandal (DVD, 7/10).
    • 5th of August: Le Fabuleux destin d’Amélie Poulain, a.k.a. Amélie (DVD, 8.5/10)
    • 7th of August: The Simpsons Movie (Hoyts Fox Studios, 7.5/10).
    • 17th of August: Die Hard 4.0 (Hoyts Broadway, 7.5/10).
    • 14th of September: The Bourne Ultimatum (Greater Union Bondi Junction, 6.5/10)
    • 22nd of September: Ratatouille (Hoyts Broadway, 8.5/10)
    • 23rd of September: An Inconvenient Truth (DVD, 8.5/10).
    • 30th of September: The Holiday (DVD, 7/10)
    • 5th of October: Shaun of the Dead (DVD, 7.5/10).
    • 6th of October: Rush Hour 3 (Hoyts Chatswood Westfield, 6.5/10).
    • 13th of October: Resident Evil: Extinction (Shaw Cinemas, Isetan Singapore, 6.5/10).
    • 4th of November: A Chinese Odyssey (DVD, 6.5/10).
    • 18th of November: Elizabeth: The Golden Age (Reading Cinemas at Rhodes, 7/10).
    • 2nd of December: Stranger Than Fiction (DVD, 8/10).
    • 9th of December: Ghost in the Shell S.A.C.: Solid State Society (DVD, 9/10)
    • 16th of December: The Prestige (DVD, 8.5/10).
    • 24th of December: National Treasure: Book of Secrets (Hoyts Chatswood Westfield, 7.5/10)
    • 28th of December: Aliens vs. Predator: Requiem (Hoyts Chatswood Mandarin, 7.5/10)

    All in all, a pretty good movie year, with Solid State Society topping the list, and The Prestige, Stranger than Fiction, An Inconvenient Truth, Ratatouille, Amélie, Transformers, Blades of Glory, Knocked Up, Hot Fuzz, The Fountain, and Blood Diamond as my personal A-graders. Reflecting back, about the only two ratings I disagree with are Pan’s Labyrinth (should’ve been way higher, probably 8 or 8.5) and An Inconvenient Truth (which I don’t think quite deserved an 8.5).

    I await the arrival of Wall•E this year. The trailer looks like, well, it was done by Pixar. Great humour, fantastic graphics (thank you 1080p), kid-friendly, and with the director and writer of Finding Nemo, Monsters Inc, and Toy Story 2. I’m beginning to believe that Pixar have built a self-reinforcing system of awesome that is going to be impossible to knock down for at least the next fifty years. It’s pretty incredible that most of their blockbuster movies have been directed and produced by completely different people.

    To all my friends working at Pixar, I love you. Please continue doing what you do best.

  • iPhone: Currency Converter

    A small tip for l’iPhone digerati (aw haw haw haw!): if you, like me, like to look up currency rates, forget about all this Web browser and Web application malarkey. Use the Stocks application instead:

    1. go to the Stocks application,
    2. add a new stock,
    3. use a stock name of AUDUSD=X for the Australian dollar to US dollar rate, USDGBP=X for US dollars to British pounds, etc. (Use the Yahoo! finance page if you don’t know the three-letter currency codes.)

    Since a picture is worth a thousand words, here you go:

    Pretty!
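
    If you’d rather check the same rates from a terminal, the ticker convention carries over: the Stocks application pulls its quotes from Yahoo! Finance, and Yahoo’s old CSV quote endpoint understood the same SYMBOL=X names. A rough sketch only (the URL and format codes here are from memory, so treat them as an assumption rather than gospel):

    curl 'http://download.finance.yahoo.com/d/quotes.csv?s=AUDUSD=X&f=sl1'
    # f=sl1 asks for the symbol plus the last traded price, so the output
    # looks something like: "AUDUSD=X",0.9512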

  • Jesus on E's MP4s

    For all the oldskool Amiga demoscene folks out there, I’ve weirdly had a bit of nostalgia for the classic Jesus on E’s demo from 1992. It was understandably not featured on Mindcandy Volume 2, although there are videos of it floating around on the Web. It’s somewhat amusing that the MPEG-4 video is around 120MB when the original version fit on two 880k disks.

    So, I chopped up the MPEG-4 videos I found floating around on the Web, and exported the soundtrack to MPEG-4 audio files so I could throw them onto my iPod. The tracks are available at:

    If you’re into oldskool techno and rave tracks from the ~1992 era, you can’t beat this stuff. (And if you don’t like oldskool techno and rave tracks, the soundtrack will probably send you completely insane.) Have the appropriate amount of fun!
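
    If you want to do the same sort of extraction yourself, ffmpeg is one tool that can do it (a sketch only; the filenames below are placeholders):

    # copy the audio track out of the video untouched, into an iPod-friendly .m4a
    # (assumes the source audio is already AAC)
    ffmpeg -i demo-video.mp4 -vn -acodec copy demo-soundtrack.m4a

    # or grab just a slice of it: -ss picks the start time, -t the duration
    ffmpeg -i demo-video.mp4 -ss 00:01:30 -t 00:04:00 -vn -acodec copy part-two.m4a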

  • Xcode Distributed Builds Performance

    [Sorry if you get this post twice—let’s say that our internal builds of RapidWeaver 4.0 are still a little buggy, and I needed to re-post this ;)]

    Xcode, Apple’s IDE for Mac OS X, has this neat ability to perform distributed compilations across multiple computers. The goal, of course, is to cut down on the build time. If you’re sitting at a desktop on a local network and have a Mac or two to spare, distributed builds obviously make a lot of sense: there’s a lot of untapped power that could be harnessed to speed up your build. However, there’s another scenario where distributed builds can help, and that’s if you work mainly off a laptop and occasionally join a network that has a few other Macs around. When your laptop’s offline, you can perform a distributed build with just your laptop; when your laptop’s connected to a few other Macs, they can join in the build and speed it up.

    There’s one problem with this idea, though, which is that distributed builds add overhead. I had a strong suspicion that a distributed build with only the local machine participating was significantly slower than a simple individual build. Since it’s all talk unless you have benchmarks, lo and behold, a few benchmarks later, I proved my suspicion right.

    • Individual build: 4:50.6 (first run), 4:51.7 (second run)
    • Shared network build with local machine only: 6:16.3 (first run), 6:16.3 (second run)

    This was a realistic benchmark: it was a full build of RapidWeaver, including all its sub-project dependencies and core plugins. The host machine is a 2GHz MacBook with 2GB of RAM. The build process includes a typical number of non-compilation phases, such as running a shell script or two (which takes a few seconds), copying files to the final application bundle, etc. So, for a typical Mac desktop application project like RapidWeaver, turning on shared network builds without any extra hosts incurs a pretty hefty speed penalty: ~30% in my case. Ouch. You don’t want to leave shared network builds on when your laptop disconnects from the network. To add to the punishment, Xcode will recompile everything from scratch if you switch from individual builds to distributed builds (and vice versa), so flipping the switch when you disconnect from a network or reconnect to it requires a full rebuild.
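
    If you want to run the same experiment on your own project, plain old time plus xcodebuild from the command line is enough to reproduce the individual-build numbers; a sketch, with placeholder project and configuration names (the shared/dedicated network build settings themselves are configured in Xcode’s Preferences rather than passed to xcodebuild):

    # full, from-scratch build, timed
    time xcodebuild -project MyApp.xcodeproj -configuration Release clean build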

    Of course, there’s no point to using distributed builds if there’s only one machine participating. So, what happens when we add a 2.4GHz 20” Aluminium Intel iMac with 2GB of RAM, via Gigabit Ethernet? Unfortunately, not much:

    • Individual build: 4:50.6 (first run), 4:51.7 (second run)
    • Shared network build with local machine + 2.4GHz iMac: 4:46.6 (first run), 4:46.6 (second run)

    You shave an entire four seconds off the build time by getting a 2.4GHz iMac to help out a 2GHz MacBook. A 1% speed increase isn’t anywhere near the 40% build-time reduction that you’re probably hoping for. Sure, a 2.4GHz iMac is not exactly a build farm, but you’d hope for something a little better than a 1% improvement from doubling the horsepower, no? Amdahl’s Law strikes again: parallelism is hard, news at 11.

    I also timed Xcode’s dedicated network builds (which are a little different from its shared network builds), but buggered if I know where I put the results for that. I vaguely remember that dedicated network builds were very similar to shared network builds with my two hosts, but my memory’s hazy.

    So, lesson #1: there’s no point using distributed builds unless there’s usually at least one machine available to help out; otherwise your builds are just going to slow down. Lesson #2: you need to add significantly more CPUs to save a significant amount of time with distributed builds. A single 2.4GHz iMac doesn’t appear to help much. I’m guessing that adding a quad-core or eight-core Mac Pro to the build will help. Maybe 10 × 2GHz Intel Mac minis will help, but I’d run some benchmarks on that setup before buying a cute Mac mini build farm — perhaps the overhead of distributing the build to ten other machines is going to nullify any timing advantage you’d get from throwing another 20GHz of processors into the mix.

  • Dick Gabriel on A Lot More Than Lisp

    If you love programming, and especially if you love programming languages, there’s an episode of the Software Engineering Radio podcast that has a fantastic interview with Dick Gabriel, titled “Dick Gabriel on Lisp”. If you don’t know who Gabriel is, he’s arguably one of the more important programming language people around: he’s one of the founding fathers of XEmacs (née Lucid Emacs), wrote the famous Worse is Better essay (along with a pretty cool book that I’ll personally recommend), and also gave one of the most surreal and brilliant keynotes I’ve ever heard, one that received a standing ovation at HOPL III and OOPSLA.

    The episode’s about fifty minutes long, and Gabriel talks about a lot more than just Lisp in the interview: among other things, he gives some major insight into the essence of object-oriented message-passing, how functions are objects and objects are functions, what continuations are, metacircularity, and the relationship between XML and S-expressions (and why XML is just a glorified half-assed version of Lisp). There’s also some great stories in the interview for computing historians: how the Common Lisp Object System was initially inspired by Scheme and the original Actor language (yep, “actors” as in “Erlang processes”), what AI research was like in the 1960s and ’70s, and the story of how John McCarthy and his students implemented the first Lisp interpreter in one night.

    A wonderful interview, and well worth listening to if programming languages are your shindig.