-
Interview with Marshall Kirk McKusick
A website named Neat Little Mac Apps is not the kind of place you’d expect to find an interview with an operating systems and filesystems hacker. Nevertheless, one of their podcasts was just that: an interview with UNIX and BSD legend Marshall Kirk McKusick. (He has his own Wikipedia page; he must be famous!)
There’s some great stuff in there, including the origin of the BSD daemon (Pixar, would you believe? Or, well, Lucasfilm at the time…), and a great story about how a bug was introduced into the 4.2BSD version of the pervasive UNIX diff utility. Marshall’s full of energy, and it’s a great interview; it’s a little amusing to see the stark contrast between the interviewer and McKusick, who have rather different definitions of what constitutes an operating system.
-
Coherence & Groupthink
Charles Petzold, one of the most famous authors of Windows programming books out there, wrote a great entry on his blog over a year ago that I’ve been meaning to comment on:
Once you’ve restricted yourself to information that turns up in Google searches, you begin having a very distorted view of the world.
On the Internet, everything is in tiny pieces. The typical online article or blog entry is 500, 1000, maybe 1500 words long. Sometimes somebody will write an extended “tutorial” on a topic, possibly 3,000 words in length, maybe even 5,000.
It’s easy to convince oneself that these bite-sized chunks of prose represent the optimum level of information granularity. It is part of the utopian vision of the web that this plethora of loosely-linked pages synergistically becomes all the information we need.
This illusion is affecting the way we learn, and I fear that we’re not getting the broader, more comprehensive overview that only a book can provide. A good author will encounter an unwieldy jungle of information and cut a coherent path through it, primarily by imposing a kind of narrative over the material. This is certainly true of works of history, biography, science, mathematics, philosophy, and so forth, and it is true of programming tutorials as well.
Sometimes you see somebody attempting to construct a tutorial narrative by providing a series of successive links to different web pages, but it never really works well because it lacks an author who has spent many months (or a year or more) primarily structuring the material into a narrative form.
For example, suppose you wanted to learn about the American Civil War. You certainly have plenty of online access to Wikipedia articles, blog entries, even scholarly articles. But I suggest that assembling all the pieces into a coherent whole is something best handled by a trained professional, and that’s why reading a book such as James McPherson’s Battle Cry of Freedom will give you a much better grasp of the American Civil War than hundreds of disparate articles.
If I sound elitist, it’s only because the time and difficulty required for wrapping a complex topic into a coherent narrative is often underestimated by those who have never done it. A book is not 150 successive blog entries, just like a novel isn’t 150 character sketches, descriptions, and scraps of dialog.
A related point I’d like to make is that people tend to read things that reinforce their viewpoints, and avoid things that go against their beliefs. If you’re a left-wing commie pinko in Sydney, you’re probably more likely to read the Sydney Morning Herald as your newspaper; if you’re a right-wing peacenik, you’ll probably prefer The Australian instead. If you’re a functional programming maven who sneers at C, you probably hang around Haskell or O’Caml or Erlang or Scheme geeks. If you’re a Mac programmer, you talk all day about how beautiful and glorious the Cocoa frameworks are, and probably have a firm hatred of C++ (even though there’s a decent chance you’ve never even used the language).
Hang around with other cultures sometimes. Like travelling, it’s good for you; it broadens your perspective, and gives you a better understanding of your own culture. The human tendency to seek confirmation of your own viewpoints, combined with Petzold’s astute observations about learning in bite-sized chunks, means that it’s incredibly easy to find information on the Internet that only explains one side of the story. How many of the people on the mailing lists, IRC channels, Web forums and Twitter feeds you frequent have similar opinions to you? And how many people in those communities truly understand other systems, only to be shot down whenever they try to justify something valid that runs contrary to the community’s popular opinion? I’m not saying that hanging around like-minded communities is a bad idea; I’m simply saying to be aware of groupthink and self-reinforcing systems, and to break out of your comfort zone sometimes to learn something totally different and contrary to what you’re used to. Make the effort to find out the whole picture; don’t settle for random snippets and tidbits that you read somewhere on the Web. Probably the best article I’ve ever read on advocacy is Mark-Jason Dominus’s Why I Hate Advocacy piece, written eight years ago in 2000. It still holds true today.
-
git-svn & svn:externals
I’ve written before about git-svn and why I use it, but a major stumbling block with git-svn has been a lack of support for svn:externals. If your project’s small and you have full control over the repository, you may be fortunate enough to not have any svn:externals definitions, or perhaps you can restructure your repository so you don’t need them anymore and live in git and Subversion interoperability bliss.
However, many projects absolutely require svn:externals, and once you have common libraries and frameworks shared amongst multiple projects, it becomes very difficult to avoid them. What’s a git-svn user to do?
If you Google around, it’s easy enough to find solutions out there, such as git-me-up, step-by-step tutorials, explanations about using git submodules, and an overview of all the different ways you can integrate the two things nicely. However, I didn’t like any of those solutions: either they required too much effort, were too fragile and could break easily if you did something wrong with your git configuration, or were simply too complex for such a seemingly simple problem. (Ah, I do like dismissing entire classes of solutions by hand-waving them away as over-engineering.)
So, in the great spirit of scratching your own itch, here’s my own damn solution:
This is a very simple shell script to make git-svn clone your svn:externals definitions. Place the script in a directory where you have one or more svn:externals definitions, run it, and it will:
- git svn clone each external into a .git_externals/ directory.
- symlink the cloned repository in .git_externals/ to the proper directory name.
- add the symlink and .git_externals/ to the .git/info/exclude file, so that you’re not pestered about them when performing a git status.
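To make those steps concrete, here’s a minimal sketch of the approach, not the actual script: it assumes the older pre-Subversion-1.5 “dir URL” externals format and does only naive parsing of git-svn’s show-externals output.

    #!/bin/sh
    # Minimal sketch of the approach described above (assumptions:
    # pre-1.5 "dir URL" externals format, naive output parsing).
    set -e
    mkdir -p .git_externals
    git svn show-externals | grep -v '^#' | while read dir url; do
        # Skip blank lines and anything that doesn't look like "dir URL".
        [ -n "$dir" ] && [ -n "$url" ] || continue
        dir=${dir#/}    # git-svn prints paths with a leading slash
        (cd .git_externals && git svn clone "$url" "$dir")
        ln -s ".git_externals/$dir" "$dir"
        echo "/$dir" >> .git/info/exclude
    done
    echo "/.git_externals/" >> .git/info/exclude

Run it from the directory containing the svn:externals definitions; afterwards, git status stays quiet about both the symlinks and the .git_externals/ directory.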
That’s pretty much it. It’s low-tech and cheap and cheery, but I couldn’t find anything else like it after extensive Googling, so hopefully some other people out there with low-tech minds like mine will find it useful.
You could certainly make the script a lot more complex and do things such as share svn:externals repositories between different git repositories, traverse through the entire git repository to detect svn:externals definitions instead of having to place the script in the correct directory, etc… but this works, it’s simple, and it does just the one thing, unlike a lot of other git/svn integration scripts that I’ve found. I absolutely do welcome those features, but I figured I’d push this out since it works for me and is probably useful for others.
The source is on github.com at http://github.com/andrep/git-svn-clone-externals/tree/master. Have fun subverting your Subversion overlords!
-
MacDev 2009

I have the small honour of being a speaker at the maiden MacDev 2009 conference, a grass-roots, independently run Mac developer conference in the UK that’s being held in April next year. MacDev looks like it’ll be the European equivalent of C4, which was absolutely the best Mac developer conference I’ve ever been to; think of it as the Mac equivalent of linux.conf.au. If you’re a Mac developer at all, come along; it should be great fun, and it’ll give your liver a nice workout. Plus, how can you ignore such a sexy list of speakers?
Update: My talk abstract is now available…
One reason for Mac OS X’s success is Objective-C, combining the dynamism of a scripting language with the performance of a compiled language. However, how does Objective-C work its magic and what principles is it based upon? In this session, we explore the inner workings of the Objective-C runtime, and see how a little knowledge about programming language foundations—such as lambda calculus and type theory—can go a long way to tackling difficult topics in Cocoa such as error handling and concurrency. We’ll cover a broad range of areas such as garbage collection, blocks, and data structure design, with a focus on practical tips and techniques that can immediately improve your own code’s quality and maintainability.
I am a great believer in getting the foundations right. Just as bad code design or architecture often leads to a ton of bugs that simply wouldn’t exist in well-designed code, building a complex system on unsteady foundations can produce a lot of unnecessary pain. What are the foundations of your favourite programming language?
It’s 2008 and we’re still seeing buffer overflows in C.
-
The Business of Development
After one of the longest road-trips of my life, I gave a presentation at DevWorld 08 in Melbourne, Australia, titled “The Business of Development”:
Coding is just one part of what makes a great product, but there’s always so much else to do and learn. So, what can you do to help ship a great product—besides coding—if you’re primarily a developer? In this talk, learn about important commercial and business issues that you, as a coder, can help to define and shape in your company, such as licensing and registration keys, adopting new technologies, software updates, handling support, your website, and crash reports.
The talk has a definite Mac focus and is geared towards people who are writing commercial software, but it arguably applies to all software on any platform, whether you’re a professional programmer or a hobbyist, working on open-source or not. The slides are now online; you can find them on my talks page or download them directly (40MB PDF).
-
Self-Reflection
R. A. Salvatore, Road of the Patriarch, p. 280:
The point of self-reflection is, foremost, to clarify and to find honesty. Self-reflection is the way to throw self-lies out and face the truth—however painful it might be to admit that you were wrong. We seek consistency in ourselves, and so when we are faced with inconsistency, we struggle to deny.
Denial has no place in self-reflection, and so it is incumbent upon a person to admit his errors, to embrace them and to move along in a more positive direction.
We can fool ourselves for all sorts of reasons. Mostly for the sake of our ego, of course, but sometimes, I now understand, because we are afraid.
For sometimes we are afraid to hope, because hope breeds expectation, and expectation can lead to disappointment.
… Reality is a curious thing. Truth is not as solid and universal as any of us would like it to be; selfishness guides perception, and perception invites justification. The physical image in the mirror, if not pleasing, can be altered by the mere brush of fingers through hair.
And so it is true that we can manipulate our own reality. We can persuade, even deceive. We can make others view us in dishonest ways. We can hide selfishness with charity, make a craving for acceptance into magnanimity, and amplify our smile to coerce a hesitant lover.
… a more difficult alteration than the physical is the image that appears in the glass of introspection, the pureness or rot of the heart and the soul.
For many, sadly, this is not an issue, for the illusion of their lives becomes self-delusion, a masquerade that revels in the applause and sees in a pittance to charity a stain remover for the soul.
… There are those who cannot see the stains on their souls. Some lack the capacity to look in the glass of introspection, perhaps, and others alter reality without and within.
It is, then, the outward misery of Artemis Entreri that has long offered me hope. He doesn’t lack passion; he hides from it. He becomes an instrument, a weapon, because otherwise he must be human. He knows the glass all too well, I see clearly now, and he cannot talk himself around the obvious stain. His justifications for his actions ring hollow—to him most of all.
Only there, in that place, is the road of redemption, for any of us. Only in facing honestly that image in the glass can we change the reality of who we are. Only in seeing the scars and the stains and the rot can we begin to heal.
For Rebecca, who holds that glass of introspection higher than anyone else I’ve ever known. Thank you for everything.
-
git-svn, and thoughts on Subversion
We use Subversion for our revision control system, and it’s great. It’s certainly not the most advanced system around, but it has perhaps the best client support on every platform out there, and when you need to work with non-coders on Windows, Linux and Mac OS X, there are better things to do than explain how to use the command line to people who’ve never heard of it before.
However, I also really need to work offline. My usual modus operandi is working at a café without Internet access (thanks for still being in the stone age when it comes to data access, Australia), which pretty much rules out using Subversion, because I can’t commit from where I do the majority of my work. So, I used svk for quite a long time, and everything was good.
Then, about a month ago, I got annoyed with svk screwing up relatively simple pushes and pulls for the last time. svk seems to work fine if you only track one branch and all you ever do is use its capabilities to commit offline, but the moment you start doing anything vaguely complicated like merges, or track both the trunk and a branch or two, it’ll explode. Workmates generally don’t like it when they see 20 commits arrive the next morning that totally FUBAR your repository.
So, I started using git-svn instead. People who know me will understand that I have a hatred of crap user interfaces, and I have a special hatred of UIs that are different “just because”, which applies to git rather well. I absolutely refused to use tla for that reason—which thankfully never seems to be mentioned in distributed revision control circles anymore—and I stayed away from git for a long time because of its refusal to use conventional revision control terminology. git-svn in particular suffered much more from (ab)using different terminology than git itself, because you were intermixing Subversion jargon with git jargon. Sorry, you use checkout to revert a commit? And checkout also switches between branches? revert is like a merge? WTF? The five or ten tutorials that I found on the ’net helped quite a lot, but since a lot of them told me to do things in different ways and I didn’t know what the subtle differences between the commands were, I went back to tolerating svk until it screwed up a commit for the very last time. I also tried really hard to use bzr-svn, since I really like Bazaar (and the guys behind Bazaar), but it was clear that git-svn was stable and ready to use right now, whereas bzr-svn still had some very rough edges and wasn’t quite ready for production yet.

However, now that I’ve got my head wrapped around git’s jargon, I’m very happy to say that it was well worth the time for my brain to learn it. Linus elevated himself from “bloody good” to “true genius” in my eyes for writing that thing in a week, and I now have a very happy workflow using git to integrate with svn.
So, just to pollute the Intertubes more, here’s my own git-svn cheatsheet. I don’t know if this is the most correct way to do things (is there any “correct” way to do things in git?), but it certainly works for me:
- initial repository import (svk sync):

      git-svn init https://foo.com/svn -T trunk -b branches -t tags
      git checkout -b work trunk

- pulling from upstream (svk pull): git-svn rebase
- pulling from upstream when there are new branches/tags/etc. added: git-svn fetch
- switching between branches: git checkout branchname
- svk/svn diff -c NNNN: git diff NNNN^!
- committing a change: git add, then git commit
- reverting a change: git checkout path
- pushing changes upstream (svk push): git-svn dcommit
- importing svn:ignore: (echo; git-svn show-ignore) >> .git/info/exclude
- uncommit: git reset <SHA1 to reset to>

Drop me an email if you have suggestions to improve those. About the only thing that I miss from svk was the great feature of being able to delete a filename from the commit message, which would unstage it from the commit. That was tremendously useful; it meant that you could git commit -a all your changes except one little file, simply by deleting one line. It’s much easier than tediously trying to git add thirteen files in different directories just so you can omit one file.

One tip for git: if your repository has top-level trunk/branches/tags directories, like this:

    trunk/
      foo/
      bar/
    branches/
      foo-experimental/
      bar-experimental/
    tags/
      foo-1.0/
      bar-0.5/

That layout makes switching between the trunk and a branch of a project quite annoying, because while you can “switch” to (checkout) branches/foo-experimental/, git won’t let you check out trunk/foo; it’ll only let you check out trunk. This isn’t a big problem, but it does mean that your overall directory structure keeps changing, because switching to trunk means that you have foo/ and bar/ directories, while switching to foo-experimental or bar-experimental omits those directories. This ruins your git excludes and tends to cause general confusion, with old files being left behind when switching branches.

Since many of us will only want to track one particular project in a Subversion repository rather than an entire tree (i.e. switch between trunk/foo and branches/foo-experimental), change your .git/config file from this:

    [svn-remote "svn"]
        url = https://mysillyserver.com/svn
        fetch = trunk:refs/remotes/trunk
        branches = branches/*:refs/remotes/*
        tags = tags/*:refs/remotes/tags/*

to this:

    [svn-remote "svn"]
        url = https://mysillyserver.com/svn
        fetch = trunk/foo:refs/remotes/trunk
        ; ^ change "trunk" to "trunk/foo" as the first part of the fetch
        branches = branches/*:refs/remotes/*
        tags = tags/*:refs/remotes/tags/*
Doing that will make git’s “trunk” branch track trunk/foo/ on your server rather than just trunk/, which is probably what you want. If you want to track other projects in the tree, it’s probably better to git-svn init another directory. Update: Oops, I forgot to thank Mark Rowe for his help with this. Thanks Mark!

As an aside, while I believe that distributed version control systems look like a great future for open-source projects, it’s interesting that DVCS clients are now starting to support Subversion, which now forms some kind of lowest common denominator. (I’d call it the FAT32 of revision control systems, but that’d be a bit unkind… worse-is-better, perhaps?) Apart from the more “official” clients such as command-line svn and TortoiseSVN, it’s also supported by svk, Git, Bazaar, Mercurial, and some other great GUI clients on Mac OS X and Windows. Perhaps Subversion will become a de facto repository format that everyone else can push and pull between, since it has the widest range of client choice.
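One more snippet: to sanity-check the narrowed fetch mapping above (a hedged example; mysillyserver.com, foo and foo-experimental are the hypothetical names from the config snippets), re-fetch and flip between the project’s trunk and its branch:

    # Re-fetch so git-svn picks up the narrowed trunk/foo mapping,
    # then switch between the project's trunk and a branch.
    git-svn fetch
    git checkout -b work trunk
    git checkout foo-experimental
    git checkout work

If checking out foo-experimental no longer shuffles the foo/ and bar/ directories in and out of your working copy, the mapping is doing its job.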
-
Speaking at DevWorld 2008
For Mac developers in Australia, I’ll be speaking at the inaugural conference of DevWorld 2008, which will be held on September 29-30 this year in Melbourne. You can check out the full list of sessions; I’ll be giving the very last talk on that page: The Business of Development.
Coding is just one part of what makes a great product, but there’s always so much else to do and learn. So, what can you do to help ship a great product—besides coding—if you’re primarily a developer? In this talk, learn about important commercial and business issues that you, as a coder, can help to define and shape in your company, such as licensing and registration keys, adopting new technologies, software updates, handling support, your website, and crash reports.
Note that DevWorld 2008 is unfortunately only open to staff and students at an Australian university (“AUC member university”, to be exact), so unless you’re staff or a student right now at one of those unis, you’ll have to miss out on this incredibly exciting opportunity to hear me talk at you for an hour (snort). I hear the story behind this is that if this year’s DevWorld is successful, next year’s will be a more standard open conference. Anyway, hopefully catch some of you there in September!
-
Solid State Society
The traditional hard disk that’s likely to be in your computer right now is made out of a few magnetic circular platters, with a head attached to an actuator arm above the platter that reads and writes the data to it. The head’s such a microscopic distance away from the platter that it’s equivalent to a Boeing 747 flying at 600 miles per hour about six inches off the ground. So, when you next have a hard disk crash (and that’s when, not if), be amazed that the pilot in the 747 flying six inches off the ground didn’t crash earlier.
Enter solid-state drives (SSDs). Unlike hard disks, SSDs contain no moving parts; they’re made out of solid-state memory instead. This has two big advantages: first, SSDs don’t crash (although this is a small lie—more on that later). Second, since SSDs are made out of memory, getting to a particular piece of data on the disk is much faster than on a hard disk. In other words, they have random access times that are orders of magnitude faster than their magnetic cousins’. Hard disks need to wait for the platter to rotate around before the head can read the data off the drive; SSDs simply fetch the data directly from a memory column & row. In modern desktop computers, random access I/O is often the main performance bottleneck, so if you can speed that up by an order of magnitude, you could potentially make things a lot faster.
Unfortunately, while SSDs are orders of magnitude faster than a hard disk for random access, they’re also an order of magnitude more expensive. That was until May this year, when this thing appeared on the scene:
(Image courtesy of itechnews.net.)
That boring-looking black box is a 120GB Super Talent Masterdrive MX. As far as SSDs go, the Masterdrive MX is not particularly remarkable for its performance: it has a sustained write speed of just 40MB per second, which is a lot lower than many other SSDs and typical hard disks.
However, it’s a lot cheaper than most other SSDs: the 120GB drive is USD$699. That’s not exactly cheap (you could easily get a whopping two terabytes of storage if you spent that money on hard disks), but it’s cheap enough that people with more dollars than sense might just go buy it… people like me, for instance. I’ve had that SSD sitting in my lovely 17” MacBook Pro for the past two months, as an experiment with solid-state drives. So, how’d it go?
I’ll spare you the benchmarks: if you’re interested in the raw numbers, there are a number of decent Masterdrive MX reviews floating around the Web now. I was more interested in the subjective performance of the drive. Does it feel faster for everyday tasks? Is it simply a better experience?
The overall answer is: yes, it’s better, but it’s not so much better that I’d buy the SSD again if I could go back in time. With a hard disk, things occasionally get slow. I’m sure I’m not the only one to witness the Spinning Beachball of Death while I wait 5-10 seconds for the hard disk to finally deliver the I/O operations to the programs that want them completed. With a hard disk, launching a program from the dock would sometimes take 20-30 seconds under very heavy I/O load, such as when Spotlight’s indexing the disk and Xcode’s compiling something. With the SSD, those delays just went away: I can’t even remember a time where I saw the evil Beachball due to system I/O load.
The most notable difference was in boot time. A lot of people love how Mac OS X is pretty fast to boot (and I agree with them), but when you go to log in, it’s a very different story. If, like me, you’ve got about ten applications and helper programs that launch when you log in, it can take literally minutes before Mac OS X becomes responsive. I clocked my MacBook Pro at taking just over a minute to log in with my current setup on a hard disk (which launches a mere half a dozen programs); the SSD took literally about 5 seconds. 5… 4… 3… 2… 1… done. What is thy bidding, my master? I wish I’d made a video to demonstrate the difference, because it’s insanely faster when you see it. 10x faster login speed is nothing to sneeze at.
However, aside from boot-up time, normal day-to-day operation really was about the same. Sure, it was nice that applications launched faster and it booted so fast that you don’t need to make a coffee anymore when logging in, but those were the only major performance differences that I saw. Mac OS X and other modern operating systems cache data so aggressively that I guess most of the data you’ll read and write will usually hit the cache first anyway. The lower sustained write performance didn’t end up being a problem at all: the only time I noticed it was when I was copying large files (ahem, torrented downloads) around on the same drive, but that wasn’t slow enough for me to get annoyed. The one benchmark that I really cared about—compiling—turned out to take exactly as long on the SSD as on the hard disk. I thought random I/O write speed might be a bottleneck with gcc; it turns out that’s not true at all. (I’ll also point out that I was using Xcode to drive most of the compilation benchmarks, which is one of the fastest build systems I’ve seen that uses gcc; no libtool/automake/autoconf/autogoat insanity here.) Sorry to disappoint the other coders out there.
Aside from performance, the total silence of the SSD was a nice bonus, but it’s not something that you can’t live without once you’ve experienced it. In most environments, there’s enough background noise that you usually don’t hear the quiet hard disk hum anyway, so the lack of noise from the SSD doesn’t really matter. It was, however, very cool knowing that you could shake your laptop while it was on without fear of causing damage to your data. I’m usually pretty careful about moving my laptop around while it’s on, but with an SSD in there, I was quite happy to pick up the machine with one hand and wave it around in the air (as much as you can with a 17” MacBook Pro, anyway).
So, with all the nice small advantages of the SSD, you may be wondering why it’s no longer in my MacBook Pro. Here are some reviews of the disk on newegg.com that may give you a hint:
It turns out those reviewers were right. Two months after I bought it, the Masterdrive MX completely died, which seemed like a pretty super talent for an SSD. The Mac didn’t even recognise the disk; couldn’t partition it; couldn’t format it. So much for SSDs not crashing, eh?
While SSDs don’t crash in the traditional manner that a hard disk may, there are a whole number of other reasons why one might die. RAM’s known to go wonky; there’s no reason why that can’t happen to solid-state memory too. Maybe the SATA controller on the disk died. No matter what the cause, you have the same problem as a traditional hard disk crash: unless you have backups, you’re f*cked. Plus, since I was on holiday down at Mount Hotham, my last backup was made two weeks earlier, just before I left. All my Mass Effect saved games went kaboom, and I’d just finished the damn game. André not very happy, grrr.

So, what’s the PowerPoint summary?
- The Super Talent Masterdrive MX would be a great buy if it didn’t friggin’ crash and burn your data with scary reliability. Even if you’re a super storage geek, avoid this drive until they have the reliability problems sorted out.
- The Powerbook Guy on Market St in San Francisco is awesome. They were the guys who installed the SSD in my MacBook Pro, and they were extremely fast (two-hour turnaround time), professional, and reasonably priced. (I would’ve done it myself, but I’d rather keep the warranty on my A$5000 computer, thanks.) Plus, they sold me the coolest German screwdriver ever for $6. (“This one screwdriver handles every single screw in a MacBook Pro”. Sold!)
- The MacCentric service centre in Chatswood in Sydney is equally awesome. When the SSD died, they quoted me the most reasonable price I had ever seen for a hard disk swap in a MacBook Pro (have you seen how many screws that thing has?), and also had a two-hour turnaround time. Yeah, I know, decent Mac service in Australia! Woooooah.
- Back up.
- SSDs are great. I think they’ll complement rather than replace hard disks in the near future, and possibly replace them entirely if the price tumbles down enough. Next-generation SSDs are going to completely change the storage and filesystem games as they do away with the traditional stupid block-based I/O crap, and become directly addressable like RAM is today. Just don’t believe the hype about SSDs not crashing.
I, for one, welcome the solid state society. Bring on the future!
-
Solid State Society
The traditional hard disk that’s likely to be in your computer right now is made out of a few magnetic circular platters, with a head attached to an actuator arm above the platter that reads and writes the data to it. The head’s such a microscopic distance away from the platter that it’s equivalent to a Boeing 747 flying at 600 miles per hour about six inches off the ground. So, when you next have a hard disk crash (and that’s when, not if), be amazed that the pilot in the 747 flying six inches off the ground didn’t crash earlier.
Enter solid-state drives (SSDs). Unlike hard disks, SSDs contain no moving parts, and are made out of solid-state memory instead. This has two big advantages: first, SSDs don’t crash (although this is a small lie—more on that later). Second, since SSDs are made out of memory, it’s much faster than a hard disk to get to a particular piece of data on the disk. In other words, they have a random access time that are orders of magnitude faster than their magnetic cousins. Hard disks need to wait for the platter to rotate around before the head can read the data off the drive; SSDs simply fetch the data directly from a memory column & row. In modern desktop computers, random access I/O is often the main performance bottleneck, so if you can speed that up an order of magnitude, you could potentially make things a lot faster.
Unfortunately, while SSDs are orders of magnitude faster than a hard disk for random access, they’re also an order of magnitude more expensive. That was until May this year, when this thing appeared on the scene:
(Image courtesy of itechnews.net.)
That boring-looking black box is a 120GB Super Talent Masterdrive MX. As far as SSD drives go, the Masterdrive MX is not particularly remarkable for its performance: it has a sustained write speed of just 40MB per second, which is a lot lower than many other SSDs and typical hard disks.
However, it’s a lot cheaper than most other SSDs: the 120GB drive is USD$699. That’s not exactly cheap (you could easily get a whopping two terabytes of data if you spent that money on hard disks), but it’s cheap enough that people with more dollars than sense might just go buy it… people like me, for instance. I’ve had that SSD sitting in my lovely 17” MacBook Pro for the past two months, as an experiment with solid-state drives. So, how’d it go?
I’ll spare you the benchmarks: if you’re interested in the raw numbers, there are a number of decent Masterdrive MX reviews floating around the Web now. I was more interested in the subjective performance of the drive. Does it feel faster for everyday tasks? Is it simply a better experience?
The overall answer is: yes, it’s better, but it’s not so much better that I’d buy the SSD again if I could go back in time. With a hard disk, things occasionally get slow. I’m sure I’m not the only one to witness the Spinning Beachball of Death while I wait 5-10 seconds for the hard disk to finally deliver the I/O operations to the programs that want them completed. With a hard disk, launching a program from the dock would sometimes take 20-30 seconds under very heavy I/O load, such as when Spotlight’s indexing the disk and Xcode’s compiling something. With the SSD, those delays just went away: I can’t even remember a time where I saw the evil Beachball due to system I/O load.
The most notable difference was in boot time. A lot of people love how Mac OS X is pretty fast to boot (and I agree with them), but when you go to log in, it’s a very different story. If, like me, you’ve got about ten applications and helper programs that launch when you log in, it can take literally minutes before Mac OS X becomes responsive. I clocked my MacBook Pro at taking just over a minute to log in with my current setup on a hard disk (which launches a mere half a dozen programs); the SSD took literally about 5 seconds. 5… 4… 3… 2… 1done. What is thy bidding, my master? I wish I’d made a video to demonstrate the difference, because it’s insanely faster when you see it. 10x faster login speed is nothing to sneeze at.
However, aside from boot up time, normal day-to-day operation really was about the same. Sure, it was nice that applications launched faster and it booted so fast that you don’t need to make a coffee anymore when logging in, but those were the only major performance differences that I saw. Mac OS X and other modern operating systems cache data so aggressively that I guess most of the data you’ll read and write will usually hit the cache first anyway. The lower sustained write performance didn’t end up being a problem at all: the only time I noticed it was when I was copying largetorrented downloadsfiles around on the same drive, but that wasn’t slow enough for me to get annoyed. The one benchmark that I really cared about—compiling—turned out to take exactly as long on the SSD as the hard disk. I thought that maybe it was possible that random I/O write speed was a possible bottleneck with gcc; it turns out that’s not true at all. (I’ll also point out that I was using Xcode to drive most of the compilation benchmarks, which is one of the fastest build systems I’ve seen that uses gcc; no spastic libtool/automake/autoconf/autogoat insanity here.) Sorry to disappoint the other coders out there.
Aside from performance, the total silence of the SSD was a nice bonus, but it’s not something that you can’t live without once you’ve experienced it. In most environments, there’s enough background noise that you usually don’t hear the quiet hard disk hum anyway, so the lack of noise from the SSD doesn’t really matter. It was, however, very cool knowing that you could shake your laptop while it was on without fear of causing damage to your data. I’m usually pretty careful about moving my laptop around while it’s on, but with an SSD in there, I was quite happy to pick up the machine with one hand and wave it around in the air (as much as you can with a 17” MacBook Pro, anyway).
So, with all the nice small advantages of the SSD, you may be wondering why it’s no longer in my MacBook Pro. Here’s some reviews of the disk on newegg.com that may give you a hint:
It turns out those reviewers were right. Two months after I bought it, the Masterdrive MX completely died, which seemed like a pretty super talent for an SSD. The Mac didn’t even recognise the disk; couldn’t partition it; couldn’t format it. So much for SSDs not crashing, eh?
While SSDs don’t crash in the traditional manner that a hard disk may, there’s a whole number of other reasons why it might crash. RAM’s known to go wonky; there’s no reason why that can’t happen to solid-state memory too. Maybe the SATA controller on the disk died. No matter what the cause, you have the same problem as a traditional hard disk crash: unless you have backups, you’re f*cked. Plus, since I was on holiday down at Mount Hotham, my last backup was two weeks ago, just before I left for holiday. All my Mass Effect saved games went kaboom, and I just finished the damn game. André not very happy, grrr.So, what’s the PowerPoint summary?
- The Super Talent MasterDrive MX would be a great buy if it didn’t friggin’ crash and burn your data with scary reliability. Even if you’re a super storage geek, avoid this drive until they’ve sorted out the reliability problems.
- The Powerbook Guy on Market St in San Francisco is awesome. They’re the guys who installed the SSD in my MacBook Pro: extremely fast (two-hour turnaround), professional, and reasonably priced. (I would’ve done it myself, but I’d rather keep the warranty on my A$5000 computer, thanks.) Plus, they sold me the coolest German screwdriver ever for $6. (“This one screwdriver handles every single screw in a MacBook Pro.” Sold!)
- The MacCentric service centre in Chatswood in Sydney is equally awesome. When the SSD died, they quoted me the most reasonable price I had ever seen for a hard disk swap in a MacBook Pro (have you seen how many screws that thing has?), and also had a two-hour turnaround time. Yeah, I know, decent Mac service in Australia! Woooooah.
- Back up.
- SSDs are great. I think they’ll complement rather than replace hard disks in the near future, and possibly replace them entirely if prices come down far enough. Next-generation SSDs are going to completely change the storage and filesystem games once they do away with the traditional stupid block-based I/O crap and become directly addressable, like RAM is today. Just don’t believe the hype about SSDs not crashing.
I, for one, welcome the solid state society. Bring on the future!
-
Mac Developer Roundtable #11
The Mac Developer Network features an excellent series of podcasts aimed at both veteran Mac developers and those new to the platform who are interested in developing for the Mac. If you’re a current Mac coder and haven’t heard them yet, be sure to check them out. I’ve been listening to the podcasts for a long time, and they’re always both informative and entertaining. (Infotainment, baby.)
Well, in yet another case of “Wow, do I really sound like that?”, I became a guest on The Mac Developer Roundtable episode #11, along with Marcus Zarra, Jonathan Dann, Bill Dudney, and our always-eloquent and delightfully British host, Scotty. The primary topic was Xcode 3.1, but we also chatted about the iPhone NDA (c’mon Apple, lift it already!) and… Fortran. I think I even managed to sneak the words “Haskell” and “Visual Studio” in there, which no doubt left the other show guests questioning my sanity. I do look forward to Fortran support in Xcode 4.0.
It was actually a small miracle that I managed to be on the show at all. Not only was the recording scheduled for the ungodly hour of 4am on a Saturday morning, Australian east-coast time, but I was also in transit from Sydney to the amazing alpine village of Dinner Plain the day before it took place. While Dinner Plain is a truly extraordinary village that boasts magnificent ski lodges and some of the best restaurants I’ve ever had the pleasure of eating at, it’s also rather… rural. The resident population is somewhere around 100, the supermarket doesn’t sell a single wine bottle opener that doesn’t suck, and Vodafone has zero phone reception there. So, it was to my great surprise that I could get ADSL hooked up to the lodge at all, and the connection only went live two days before the recording. Of course, since no ADSL installation ever goes smoothly, I was on the phone to iPrimus tech support1 at 10pm on Friday night, six hours before the recording was due to start. All that effort for the privilege of dragging my sleepy ass out of bed a few hours later, for the joy of talking to other Mac geeks about our beloved profession. But, I gotta say, being able to hold an international conference call over the Intertubes from a tiny village at 4am, with snow falling all around you… I do love technology.
Of course, since I haven’t actually listened to the episode yet, maybe it’s all a load of bollocks and I sound like a retarded hobbit on speed. Hopefully not, though. Enjoy!
1 Hey, I like Internode and Westnet as much as every other Australian tech geek, but they didn’t service that area, unfortunately.
-
Talks for 2008
I’ve given a few talks so far this year, and I’ve been kinda slack about putting the slides up. So, if you’re one of the zero people who’ve been eagerly awaiting my incredibly astute and sexy opinions, I guess today’s your lucky day, punk!
Earlier this year, on January 2, 2008, Earth Time, I gave a talk at Kiwi Foo Camp in New Zealand, also known as Baa Camp. (Harhar, foo, baa, get it?) The talk was titled “Towards the Massive Media Matrix”, the MMM in the title being a pun on the whole WWW three-letter-acronym thing. (Credit for the MMM acronym should go to Silvia Pfeiffer and Conrad Parker, who coined the term about eight years ago :). The talk was about the importance of free and open standards on the Web, what’s wrong with the status quo of Web video basically meaning Flash video, and the complications involved in trying to find a solution that satisfies everyone. I’m happy to announce that the slides for the talk are now available for download; you can also grab the details from my talks page.
A bit later this year, in March, Manuel Chakravarty and I were invited to the fp-syd functional programming user group in Sydney to give a talk about… monads! As in, that scary Haskell thing. Writing a monad tutorial seems to be a rite of passage for every Haskell programmer, so it was perhaps stereotypical of the “Haskell guys” in the group to pick that topic, but the talk seemed to be well-received.
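(If you’ve never met a monad and want to see why it’s less scary than it sounds, here’s a tiny sketch of my own devising, not anything from the talk: Haskell’s Maybe monad chaining together computations that might fail, with not a single line of I/O in sight.)

    import Text.Read (readMaybe)

    -- Parse two numbers and divide them; any failure short-circuits
    -- the whole do-block to Nothing. That's the Maybe monad at work.
    safeDivide :: String -> String -> Maybe Double
    safeDivide xs ys = do
      x <- readMaybe xs               -- Nothing if xs isn't a number
      y <- readMaybe ys               -- Nothing if ys isn't a number
      if y == 0 then Nothing else Just (x / y)

    main :: IO ()
    main = do
      print (safeDivide "10" "4")     -- Just 2.5
      print (safeDivide "10" "0")     -- Nothing
      print (safeDivide "ten" "4")    -- Nothing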
Manuel gave a general introduction to monads: what they are, how to use them, and why they’re actually a good thing rather than simply another hoop you have to jump through if you just want to do some simple I/O in Haskell. I focused on a practical use case for monads that didn’t involve I/O (OMG!), walking through how to use Haskell’s excellent Parsec library to perform parsing tasks, and why you’d want to use it instead of writing a recursive descent parser yourself or resorting to the insanity of lex and yacc. I was flattered to find out that after my talk, Ben Lippmeier rewrote the parser for DDC (the Disciplined Disciple Compiler) to use Parsec, rather than his old system of Alex and Happy (Haskell’s equivalents of lex and yacc). So, I guess I managed to make a good impression on at least one of our audience members, which gave me a nice warm fuzzy feeling.
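(For a taste of what the Parsec walkthrough covered, here’s a minimal parser in the same spirit; to be clear, it’s an illustration cooked up for this post, not the code from my slides. It parses comma-separated lists of integers, and the grammar reads almost like its own specification, which is precisely the appeal over hand-rolled recursive descent or lex and yacc.)

    import Text.Parsec
    import Text.Parsec.String (Parser)

    -- One integer, with optional surrounding whitespace.
    integer :: Parser Int
    integer = do
      spaces
      digits <- many1 digit
      spaces
      return (read digits)

    -- A comma-separated list of integers, e.g. "1, 2, 3".
    integerList :: Parser [Int]
    integerList = integer `sepBy` char ','

    main :: IO ()
    main = case parse (integerList <* eof) "(example)" "1, 2, 3" of
      Left err -> print err
      Right xs -> print xs            -- prints [1,2,3]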
You can find both Manuel’s and my slides online at the Google Groups files page for fp-syd, or you can download the slides directly from my own site. Enjoy.
Finally, during my three-week journey to the USA last month in June, I somehow got roped into giving a talk at Galois Inc. in Portland, about pretty much whatever I wanted. Since the audience was, once again, a Haskell and functional programming crowd, I of course chose to give a talk about an object-oriented language instead: Objective-C, the lingua franca of Mac OS X development.
If you’re a programming language geek and don’t know much about Objective-C, the talk should hopefully interest you. Objective-C is a very practical programming language that has a number of interesting features from a language point of view, such as opt-in garbage collection, and a hybrid of a dynamically typed runtime system with static type checking. If you’re a Mac OS X developer, there’s some stuff there about the internals of the Objective-C object and runtime system, and a few slides about higher-order messaging, which brings much of the expressive power of higher-order functions in other programming languages to Objective-C. Of course, if you’re a Mac OS X developer and a programming language geek, well, this should be right up your alley :). Once again, you can download the slides directly, or off my talks page.
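(If “higher-order messaging” sounds abstract, the idea it borrows is one that Haskell folks use daily: functions that take other functions as arguments. Here’s a quick sketch of my own, not from the talk, of the one-liner style that HOM brings to Objective-C collections, shown with plain Haskell higher-order functions.)

    main :: IO ()
    main = do
      let names = ["alice", "bob", "carol"]
      -- map applies a function to every element; HOM gives you a similar
      -- one-liner feel for sending a message to every element of an array.
      print (map (++ "@example.com") names)
      -- filter keeps only the elements satisfying a predicate.
      print (filter ((> 3) . length) names)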