Tales of the Rampant Coyote

Adventures in Indie Gaming!

A Game Dev’s Story, Part V: Language Skills

Posted by Rampant Coyote on February 22, 2012

As I attempted to imitate my favorite arcade games on the Commodore 64, I found myself running up against a pretty major limitation: Speed. Specifically, the lack thereof when running my homemade games. I learned a lot of tricks (most of which I’ve forgotten now) for making my BASIC programs faster. But while those optimizations helped, and I could write some pretty fun little games, there was only so much an interpreted language like BASIC could handle.

What is an interpreted language? I’m glad you asked. Yes, you programmers reading this can snooze for three paragraphs – most of the readers here aren’t programmers and don’t know this stuff. 🙂  Okay, traditionally programming languages come in two different types (though we now have some extra variations): Interpreted and Compiled. The difference between the two is a little like the difference between someone creating a translation of a book from a language you don’t understand into your native language, versus you trying to translate it word-for-word as you read it.

In a compiled language, you use something called (wait for it…) a compiler, which takes your human-readable code and translates it into the computer’s native machine code, which tends to be significantly more challenging for a human to read (and modify). While the compiler may not create machine code quite as optimized as what a human could write by hand, it is in the computer’s “native language” and thus executes as fast as the system can go.

In an interpreted language – which BASIC usually was back in the day – your program is run via another program – an interpreter – which reads and interprets your program as it executes. This pretty obviously slows things down – a lot! So why use an interpreted language? Mainly because it’s easy and often pretty interactive. If speed is not an issue, or you are just “experimenting” with a language (one of my favorite languages, Python, is great for this), an interpreted language is really the way to go. Interpreted languages are also very portable – which is why they are used for web scripting. In theory, at least, you can run the same interpreted code on any system with a compatible interpreter (JavaScript is a popular one) and it will run the same no matter how different the hardware may be.
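
To make that concrete: below is a tiny sketch of an interpreter, written in Python (conveniently, one of the interpreted languages just mentioned). The little line-numbered dialect it runs is invented purely for illustration, but it shows the key point: the interpreter re-reads and re-parses every statement each time that statement executes, and that repeated decoding is exactly the overhead that made interpreted BASIC slow.

    # A toy interpreter for an invented line-numbered language (illustration only).
    program = {
        10: "LET I = 1",
        20: "PRINT I",
        30: "LET I = I + 1",
        40: "IF I <= 3 GOTO 20",
        50: "END",
    }

    def run(program):
        variables = {}
        lines = sorted(program)              # execute in line-number order
        pc = 0                               # index into the sorted line list
        while pc < len(lines):
            statement = program[lines[pc]]   # re-read the source text...
            words = statement.split()        # ...and re-parse it, every time
            if words[0] == "LET":            # e.g. "LET I = I + 1"
                # (eval is a shortcut standing in for a real expression evaluator)
                variables[words[1]] = eval(" ".join(words[3:]), {}, variables)
            elif words[0] == "PRINT":
                print(variables[words[1]])
            elif words[0] == "IF":           # e.g. "IF I <= 3 GOTO 20"
                condition = " ".join(words[1:words.index("GOTO")])
                if eval(condition, {}, variables):
                    pc = lines.index(int(words[-1]))
                    continue
            elif words[0] == "END":
                break
            pc += 1

    run(program)   # prints 1, 2, 3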

There are a couple more variants on languages that are worth mentioning. The first is assembly language. Assembly language is sort of an intermediate stage between machine code and a higher-level language. Assembly is specific to the CPU you are writing code for, and is indeed very similar to machine code, with each statement corresponding to a machine code instruction. The advantage of assembly language is that it allows you to abstract away some of the details – like using variables and labels, mnemonic opcodes, and relative memory locations. This means it still has to be “compiled” like a compiled language, but the process is so straightforward that the tool is called an assembler instead of a compiler. Everything’s one-to-one, so there’s no disadvantage (that I can think of) to using assembly instead of machine code.
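
To illustrate that one-to-one mapping, here’s a little Python sketch of the core of an assembler: a straight table lookup from mnemonic and addressing mode to opcode byte. The three 6502 instructions in the table are real; everything else a real assembler does (labels, expressions, the full instruction set) is left out for brevity.

    # Toy one-pass "assembler" for a few 6502 instructions (illustration only).
    # Each (mnemonic, addressing mode) pair maps to exactly one opcode byte,
    # which is why an assembler is so much simpler than a compiler.
    OPCODES = {
        ("LDA", "imm"): 0xA9,   # load accumulator with an immediate value
        ("STA", "abs"): 0x8D,   # store accumulator to an absolute address
        ("RTS", None):  0x60,   # return from subroutine
    }

    def assemble(source):
        machine_code = []
        for line in source:
            parts = line.split()
            mnemonic = parts[0]
            if len(parts) == 1:                  # implied operand, e.g. RTS
                machine_code.append(OPCODES[(mnemonic, None)])
            elif parts[1].startswith("#$"):      # immediate, e.g. LDA #$05
                machine_code.append(OPCODES[(mnemonic, "imm")])
                machine_code.append(int(parts[1][2:], 16))
            elif parts[1].startswith("$"):       # absolute, e.g. STA $0400
                addr = int(parts[1][1:], 16)
                machine_code.append(OPCODES[(mnemonic, "abs")])
                machine_code.append(addr & 0xFF) # 6502 is little-endian:
                machine_code.append(addr >> 8)   # low byte first
        return bytes(machine_code)

    # Store the value 5 at $0400 (the top-left corner of the C-64 text screen):
    code = assemble(["LDA #$05", "STA $0400", "RTS"])
    print(code.hex())   # a9058d000460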

Another variant – a relatively new one – is “bytecode compiled” languages, like Java. These languages try to straddle the gap between compiled languages and interpreted languages. In bytecode compiled languages, the program is compiled down not to native machine code, but to an optimized intermediate set of instructions that can be executed by a another program very quickly. You don’t get the level of interactivity you may enjoy with a fully interpreted language, but it’s theoretically just as portable and a lot faster.  But this is just an aside – I didn’t have this back on my C-64.
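
As it happens, modern Python works the same way under the hood: it quietly compiles your source to bytecode, then executes that bytecode on a virtual machine. If you’re curious, the standard dis module will show you the intermediate form:

    import dis

    def add(a, b):
        return a + b

    # Disassemble the bytecode Python compiled this function into. The exact
    # instructions vary by Python version, but you'll see something like
    # LOAD_FAST a / LOAD_FAST b / an add instruction / RETURN_VALUE.
    dis.dis(add)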

It became clear that to get speed on my C-64, I needed something faster than BASIC. This meant one of three approaches.

First of all: I could get a compiler for BASIC to convert my BASIC code into machine language. I tried this. The compiler, unfortunately, sucked. It was unreliable, only worked on a subset of BASIC, and the results were very hard to debug. Something would work perfectly in BASIC but not in the compiled version (although it was much, much faster), and all I could do was start taking guesses as to what was malfunctioning and rewrite code until the compiler managed to get it “right.” After only a few attempts, I completely gave up on this option.

Both of the other options involved getting an assembler and learning to write in assembly. I’d experimented a little bit with a straight machine-code editor (which used symbolic opcodes), but it was nightmarish to use. When I finally got a decent assembler, things were pretty wonderful. My second option was to switch to writing my games in assembly. While it was certainly possible (that’s what all the professionals were doing), it was a really slow, painful process to get anything done. There’s a reason higher-level languages exist: programming down at the level of “the iron” takes a lot more work. I never got too far doing this, either.

The third approach – and the one I actually ended up getting things done with – was a hybrid: I could write the speed-critical parts of the game in assembly – really tight “subroutines,” mainly for displaying the graphics and handling things like collision and movement – and the rest of the game in interpreted BASIC. I did this a few times, and in general it worked pretty well.

My biggest success came when I was writing yet another pseudo-3D first-person perspective RPG. I have a history with these things, don’t I? I had things like trees, fountains, and so forth that I wanted to display in a quarter-screen-sized window, but they had to be drawn in the correct position and at a size that reasonably approximated their perspective. And everything had to be overwritten by anything closer. This was pretty slow in BASIC, but in assembly I whipped up a pretty decent rendering subroutine! You could still catch a glimpse of the entire world rendering – which meant you could briefly see what was behind a wall, for example – but it was dang fast. Now, if I’d known better, I would have rendered the whole scene first to an off-screen buffer and then copied the buffer onto the screen, so the player couldn’t even catch that glimpse of my lightning-fast rendering routine, but the idea never occurred to me at the time.
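
For the technically curious, the approach I should have used looks something like the Python sketch below: draw the scene back-to-front (the “painter’s algorithm”) into an off-screen buffer so that nearer objects overwrite farther ones, then copy the finished frame to the screen in one step. The character-based “screen,” the object list, and the size-from-depth formula are all invented stand-ins for the real C-64 graphics.

    # Painter's-algorithm sketch: draw the farthest objects first so nearer
    # ones overwrite them, but do it all in an OFF-SCREEN buffer. The player
    # only ever sees the finished frame, never the intermediate overdraw.
    WIDTH, HEIGHT = 20, 8

    def blank_buffer():
        return [[" "] * WIDTH for _ in range(HEIGHT)]

    def draw_box(buffer, x, y, w, h, ch):
        """Paint a filled w-by-h box of character ch at (x, y), clipped."""
        for row in range(y, min(y + h, HEIGHT)):
            for col in range(x, min(x + w, WIDTH)):
                buffer[row][col] = ch

    def render(objects):
        buffer = blank_buffer()
        # Sort far-to-near so closer objects overwrite more distant ones.
        for obj in sorted(objects, key=lambda o: -o["depth"]):
            # Nearer objects are drawn larger, roughly approximating perspective.
            size = max(1, 8 // obj["depth"])
            draw_box(buffer, obj["x"], HEIGHT - size, size * 2, size, obj["ch"])
        return buffer

    def blit(buffer):
        # Copy the finished frame to the "screen" in a single step.
        print("\n".join("".join(row) for row in buffer))

    # A far wall, a mid-distance tree, and a nearby fountain (all invented).
    scene = [
        {"ch": "#", "x": 0,  "depth": 4},   # wall, far away, drawn small
        {"ch": "T", "x": 6,  "depth": 2},   # tree, closer, drawn bigger
        {"ch": "F", "x": 10, "depth": 1},   # fountain, nearest, overwrites all
    ]
    blit(render(scene))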

I never became an expert at assembly language for the 6502 processor (the chip inside the Commodore 64), though when I was required to take a college course where we developed software in assembly language, I was pleasantly amazed by how much easier it was to use the more advanced Motorola 68000-series chips in the classroom’s Apple Macintoshes.

So what did I get out of all of this?

I learned that tools are very important. The BASIC compiler and the early machine-code “editor” / monitor were pieces of junk that were of almost negative value to me. I was tempted to give up when I used them, and just assumed that anything beyond BASIC programming was too hard for me. But once I got a decent book (okay, it actually wasn’t all that decent) on 6502 programming, and a good assembler, I was able to go on and do some great things. Using the right tool for the job really helped, too. Jumping straight into pure assembly programming might have discouraged me, but I found that I could take a hybrid approach which gave me some early success stories. Those helped motivate me to keep going.

How this applies to you: Get the right tools. Take baby steps so that you can get some early successes under your belt to build confidence and familiarity while you are building your skills.

Next time: My brief early experience as an indie game developer, and a break for college.


Filed Under: A Game Dev's Story, Game Development, Retro

Comments (5):



  • Adamantyr said,

    Wow, I love your retro posts. 🙂

    I absolutely agree about tools. I had similar speed problems with BASIC on my TI. Unfortunately, it was a lot harder to do hybrid work, thanks to TI’s weird engineering decisions with BASIC.

    Assemblers were hard to come by, too. There were two: the expensive disk-based one and the bargain cassette-based one. The latter was nearly impossible to use; it was a line-by-line assembler that literally converted your instructions to code on the fly. It also overwrote itself in memory.

    Books for assembly were tough to find. The TI’s CPU (the TMS9900) was an orphan compared to the 6502, so there was very little information available about it. I had a few assembly books, but I didn’t really start to grok it until a series of articles about assembly language programming was published in a TI magazine.

    I do assembly programming as a hobby now, but I stick with emulation, where I can kick the speed up and compile things fast. If I had overcome all the other problems in the old days, I’d have been waiting hours for builds to complete. Also, I’d have totally fragged my floppy drives with all the read/writes.

  • Anon said,

    “Another variant – a relatively new one – is “bytecode compiled” languages, like Java.”

    Well, way before Java there was Pascal:
    http://en.wikipedia.org/wiki/Pascal_%28programming_language%29

    This language is more than 30 years old, and while the most common form is probably Pascal compilers (‘Turbo Pascal’), there is also a variant called ‘UCSD Pascal’ that produces “p-code”, which needs a runtime (or Pascal operating system) to work. Very similar to Java and the Java runtime, actually.

    http://en.wikipedia.org/wiki/UCSD_p-System

    p-code has two MAJOR advantages:
    a) It is portable – you only need a machine-specific runtime for each platform
    b) It produces relatively small, efficient code.

    For classic CRPG fans it may be interesting to know that the first four ‘Wizardry’ games were implemented first on an Apple II using Apple Pascal, a variant of UCSD Pascal.

    http://en.wikipedia.org/wiki/Wizardry

    I remember the programmers claiming that they couldn’t have realized their game in BASIC because of memory constraints (p-code is way more efficient). They probably could have used an assembler, too, but as anybody with 8-bit assembly experience knows, doing math isn’t nearly as easy as in a higher-level language – especially if you have to do lots of it (the fighting routines in a CRPG are a good example). They would also have needed much more time – a severe problem in a games market that was literally beginning to explode. The Apple II was THE CRPG platform at the time…

  • Rampant Coyote said,

    Maybe that’s why it took Wizardry so frickin’ long to get ported to the C-64. By the time it finally happened, I’d lost interest.

    Anyway, I didn’t mean to imply that Java did it first – it’s simply the most well-known example. But I didn’t realize that Wizardry had been written in bytecode-compiled Pascal. I’d heard (but forgotten) that it had been written in Pascal, but I’d assumed it was the more conventional compiled flavor.

    6502 assembly was a bear to work with. Division? Write your own divider functions (which meant division was always slow, in assembly OR BASIC). And the three registers on the CPU (X, Y, and A – the accumulator) were all special-purpose. There were a very few things all three could do (store a value, I guess), a few more things that two of them could do, and several things that only one of the three registers could do. That was one of the delightful discoveries on the Mac’s CPU… there were eight (IIRC) general-purpose registers that were all identical in functionality. WOW! Luxury!
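
    (For anyone wondering what “write your own divider” involves: with no divide instruction, you compute one quotient bit at a time with shifts, compares, and subtracts – eight passes for an 8-bit value. Here’s that classic shift-and-subtract algorithm sketched in Python; the 6502 version is the same logic, just spelled out in loads, shifts, and branches.)

        def divide_8bit(dividend, divisor):
            """Unsigned 8-bit division via shift-and-subtract -- the way
            you hand-roll it on a CPU with no divide instruction."""
            assert 0 <= dividend <= 0xFF and 0 < divisor <= 0xFF
            quotient = 0
            remainder = 0
            for bit in range(7, -1, -1):    # one pass per quotient bit
                # Shift the next dividend bit into the running remainder.
                remainder = (remainder << 1) | ((dividend >> bit) & 1)
                if remainder >= divisor:    # on the 6502: a compare...
                    remainder -= divisor    # ...then a subtract
                    quotient |= 1 << bit    # set this bit of the quotient
            return quotient, remainder

        print(divide_8bit(200, 7))   # (28, 4): 28 * 7 + 4 == 200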

  • Adamantyr said,

    Concerning registers, I know what you mean. I was decoding an old Atari game in 6502 assembly once, and it was pretty pedantic.

    The TMS9900 has a memory-based architecture, so you can just change a single register (the workspace pointer) to create a new set of sixteen 16-bit registers anywhere in memory you want. It could also do direct memory-to-memory operations, something not even possible on an x86 architecture. And it had opcodes for unsigned multiplication and division. A veritable Cadillac of assembly programming. 🙂

    Of course, the trade-off was that it was slower on register access than an architecture that keeps its registers right on the chip. On the TI-99/4A, you’d use the only 16-bit memory in the machine (a 256-byte ‘scratchpad’) for a 30% speed boost on all operations.
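
    (A toy model of what that means, in Python rather than real TMS9900 code: the “registers” are just sixteen consecutive 16-bit words in RAM, and the workspace pointer says where they start – so switching to a fresh register set is a single pointer change.)

        # Toy model of memory-based registers (illustration, not real TMS9900 code).
        memory = bytearray(64 * 1024)    # 64K address space
        wp = 0x8300                      # workspace pointer (the TI's scratchpad RAM)

        def read_register(n):
            addr = wp + 2 * n            # each register is one 16-bit word
            return (memory[addr] << 8) | memory[addr + 1]   # big-endian word

        def write_register(n, value):
            addr = wp + 2 * n
            memory[addr] = (value >> 8) & 0xFF
            memory[addr + 1] = value & 0xFF

        write_register(0, 0x1234)
        print(hex(read_register(0)))     # 0x1234

        # A "context switch" is just repointing WP at another block of RAM;
        # the old register values remain untouched at the old address.
        wp = 0x2000
        print(hex(read_register(0)))     # 0x0 -- a fresh register set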

  • Brian 'Psychochild' Green said,

    Finally catching up with my reading. Awesome posts, reminds me a bit of my own history. 🙂 Maybe I’ll post about that sometime.
