Tales of the Rampant Coyote

Adventures in Indie Gaming!

Troll Doll Source Control

Posted by Rampant Coyote on June 22, 2011

For those unfamiliar with it, source control is – at its most basic level – a system for archiving and managing updates to the source code for software. It’s particularly useful when you have multiple people working on the software, and you’d like to make sure that they don’t accidentally and irrevocably step on each other’s changes and wipe out each other’s hard work. It’s handy for keeping backups of your own work even if you are working alone, but it’s really critical for projects involving multiple developers.

For my first few months at SingleTrac, working on the games Twisted Metal and Warhawk, we didn’t have source control. Well, strictly speaking, that’s not true – we had a source control system in place, but it was a manual one. We called it “Troll Doll Source Control.” It consisted of one of those troll dolls – the cute, chubby kind with the wild blue or pink hair. Whoever had the troll doll was in charge of the master repository and merged their code changes – very carefully – into the master.

We’d regularly merge the updated code repository into our own code – you could always grab changes from the master, you just couldn’t make changes to the master repository unless you were the keeper of the troll doll. We did this frequently to make sure we stayed relatively up-to-date, so there wouldn’t be too many surprises when it was our turn to have the troll doll. The goal was not to hold onto the troll doll too long. Inevitably there was at least one other person waiting for it after you, and anybody who had the troll doll sitting on their desk too long would start getting dirty looks.

When Sony asked about our source control system, they were not too pleased. It wasn’t exactly robust.

Then again, neither was Microsoft Visual Source Safe, which we eventually replaced the troll doll system with.

There was one time the system failed – spectacularly – for me. I don’t know if this was a factor in our switch to Source Safe, though I imagine Sony’s disapproval accounted for more. But I stupidly did a merge where I ignored changes to files I wasn’t actively modifying… and then discovered that some recently checked-in features and bug fixes were no longer there! Someone who had possessed the troll doll after me had done a poor job of merging code and had wiped out three days of work!

I went around the office trying to see if anybody had a copy of the source code from between the time I’d checked in my changes and when they were wiped out. I don’t remember that being successful. I re-did the changes and gave up the witch hunt to figure out who had destroyed my work.

As I said before, the troll doll was eventually retired, replaced by Visual Source Safe. That was a small improvement. But a couple of years later, working on another game, Visual Source Safe decided to go one better and completely corrupted the archive. After many attempts to recover the files, I think we eventually just had to replace that archive with the most recent copy we could find on one of our machines. It was still an improvement over the troll doll, but not much of one.

Filed Under: Biz, Production - Comments: 11 Comments to Read

  • Viridian said,

    Back during the early Ultima days, they used a similar system. They had two cutout flames, the “flame of code” and the “flame of world data”. Whoever had the flame was responsible for merging changes into the master repository until someone else asked for it. As long as everyone is rigorous, this system works out okay.

    Didn’t prevent the whole “I’m going to delete my local copy of the Strike Commander code. Oops, I was actually on the network drive when I issued the delete command” thing, though.

    Thank god Subversion exists now, is all I can say.

  • PsySal said,

    Interesting! I had never thought of a system like that, but it makes a lot of sense if you don’t have a dedicated system.

    I too use SVN. There are lots of other options, but I like SVN because I figure it’s been around long enough to avoid corruption (knock on wood) or random stupidity (I have had an SVN *client* delete large amounts of source code locally, though, yipes…)

    Also since I tend to work between machines (sometimes on my laptop or windows box, but usually on my linux box) it’s a really useful way of coordinating between them.

    The other nice thing about a repository is that it gives me one reference point to make backups from (I keep data files in my SVN repo, too), which saves me a lot of time.

    Well, anybody reading this is probably already aware of SVN but there you have it, my $0.02!

  • Groboclown said,

    Ah, the joys of Source Safe. It gently persuades you to find quality source control.

    But, speaking of the Origin guys, I remember back in the 90s when I went for an interview there as a tester. I encountered an elevator conversation that went something like this:

    Guy 1: “I’m going to try the same performance fix on Crusader that I did on Ultima 7.”
    Guy 2: “Make sure you make a backup this time before you do it.”

  • Spaceman Spiff said,

    Almost 20 years ago, at my job we used “sneakerNet” — literally a shoe box full of 3.5″ floppy disks – one disk per source file. Each floppy had the revision history for that file – the file extension was the revision number. You checked out a file by taking the disk. It worked, but you wouldn’t want to do it today.

    Personally, I much prefer perforce to SVN (and it’s what I have used at my last couple jobs).

    Perforce is FREE for up to 2 users and 5 workspaces – which is great for a one or two man indie effort.

    At home I have a dedicated server running WHS 2011 (Windows Home Server 2011) on an HP MediaSmart 495, and have perforce running on it. WHS 2011 is pretty inexpensive, and very well geared for running a home network that doesn’t require much work to maintain and does both development and home media deployment.

    As PsySal mentions, it really helps me when I switch development between the different machines I have at home (Mac mini, Win 7 PC, Win XP PC, and laptop).

    I heartily recommend a similar setup (small home server and source control) to anyone planning an indie development project.

  • Nick Istre said,

    Ah, source control. I’ve used SVN in the past, mainly because it was better than keeping multiple numbered copies of our code, and it was more flexible than CVS. Once I discovered how the new era of distributed version control systems worked via git, there was no going back for me. I ended up convincing my boss to move to Mercurial because of how much easier merging branches from different developers was (though there were only two in our company) and how much faster commits were than in SVN. The last point was actually the most important to me: yes, you still have to push the commits up to the server at the end of the day (or before someone else can pull down your changes, though there are handy tricks around that), but each commit doesn’t require network access. I can make dozens of smaller commits as I finish each part, which ends up being a handy log for time-tracking purposes, rather than having to remember everything I did for the one big SVN commit at the end of the day (or when I finish a feature).

    Even as a single developer, I could be working on a feature for a program and get an emergency call that the production code has a bug. I can save where I was at with the feature in a branch, jump to the production code, make and test the fix, commit back to the production branch (and send the new code to the client), then go back to my feature branch and merge in the fix I just created from the production branch. I have mobile phone code that I’ve been writing for both iOS and Android which I have separated into 4 branches in Mercurial. I’m the only developer on this project, too.

    Though I personally really like git, I do highly recommend Mercurial for the sheer amount of Windows and Mac OS X support it has. I find it’s also the easiest to convert an SVN work process to.

    In either git’s or Mercurial’s case, they store the entire repository on your local drive. If the server you’re pushing to becomes corrupted (lightning strikes the server? cosmic gamma ray flips a random bit on the disk? I have yet to have either git or Mercurial outright corrupt on me by themselves, even over spotty connections), every developer will still have most of the history downloaded! Simply take one copy to ‘seed’ the new server with and have everyone else push their code up, and Mercurial will make sure to only push up new history or branches.

  • Nick Istre said,

    Check out Joel Spolsky’s tutorial on Mercurial: http://hginit.com/

  • xenovore said,

    LOL! Awesome!

    My personal favorite is Subversion. Mercurial is pretty good as well, but has some weirdness (the whole distributed thing, for one; don’t see the point of that yet).

  • xenovore said,

    @Nick: Thanks for the Mercurial info; that actually makes sense now.

  • Nick Istre said,

    @xenovore: Honestly, I had the exact same reaction to the “distributed” part of DVCS. I mean really, what’s the point?

    But as I started working with git (and later, Mercurial), I found out in many little ways why it was a big deal, even as a single programmer. For one, I can start a little “fun” project, do hg init; hg add; hg commit, and it’s in a repository (actually I do all of that using TortoiseHg, which is how I really recommend approaching Mercurial for most people who don’t like the command line). No need to prepare a server or anything. And I can continue committing to the local repository as I make updates to my “fun” project.

    Later, when the “fun” project becomes a “critical” project, I have my entire development history, commit comments, etc. available. Getting it on a server for others to pull down from is a push away in TortoiseHG.

    Alternately, in a more improvisational manner, I can also set up Mercurial to serve the repository directly from my laptop so that others on the local network can pull from it; great if I’m with other programmers on café wifi with a spotty internet connection and we want to get some coding done. It can be SSL-encrypted and password-protected if you don’t want snoopers stealing your code, if I remember correctly.
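
    The ad-hoc sharing described above uses Mercurial’s built-in hg serve command. A rough sketch – the port number, file names, and pid-file location are arbitrary choices for this example:

```shell
# A small repository to share.
hg init shared
echo "important code" > shared/main.txt
hg --cwd shared add main.txt
hg --cwd shared commit -m "initial" -u me

# Serve it over HTTP in the background (pid file lets us stop it later).
hg --cwd shared serve -d -p 8123 --pid-file hg.pid

# Anyone on the same network can now clone or pull from it.
hg clone http://localhost:8123/ teammate-copy

# Shut the server down when done.
kill "$(cat shared/hg.pid)"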

  • Will said,

    There’s an interesting talk that Torvalds gave on git in 2007, here http://www.youtube.com/watch?v=4XpnKHJAok8

    I haven’t had any trouble with TortoiseGit on windows yet.

  • Bad Sector said,

    I prefer git myself too. Under Windows i use msysGit and the TortoiseGit front-end Explorer extension. Under Mac OS X i use GitX. And under Linux i use the command-line git command.

    Usually i put my projects at GitHub, unless i don’t want to share them. In this case i put them in an external USB memory stick. The nice part about the repositories being distributed is that i have everything in both the computer(s) i work on and in the USB/GitHub, so if something goes wrong i’m not going to lose my stuff. Case in point: before git i used Subversion and i had a lot of repositories in a VPS system. When i learned about git, i converted most of them (in an “as needed” basis) from Subversion to git (git can import Subversion repositories). At some point i lost access to this VPS system. Every git repository i had was at some point cloned locally so it was all fine. Every Subversion repository, however, was lost.

    The only reason to use Subversion today is because it is somewhat simpler to use than git or hg. Otherwise there is no other reason: git is faster, more secure, creates smaller repositories, has more tools, it is more advanced and beyond these it has proven itself by being used for some of the most demanding projects (like the Linux kernel).

    The worst reason to not use git is that you don’t like it’s name 🙂