Tales of the Rampant Coyote

Adventures in Indie Gaming!

Five Things I Unequivocally Love About Modern Mainstream RPGs

Posted by Rampant Coyote on March 2, 2012

I tend to rip on modern RPGs a lot here. Yeah, it’s unfair, but it’s my blog. Part of it is my own reaction to sycophantic game reviewers and wannabe-journalists who either believe video games weren’t invented until 2000, or who believe they are doing publishers a favor by attempting to invalidate anything from prior eras as hopelessly inferior.

Now, maybe I’m just not hitting the right (read: wrong) sites anymore, but I’m not seeing so much of that as I used to. That’s probably helped my blood pressure some. A silly, illogical, hopelessly optimistic side of me wants to believe that maybe, MAYBE, this new generation of gaming cognoscenti might actually be learning a bit of appreciation of past classics, at least in the same way that I learned appreciation of classical architecture in my college humanities class. Which is to say, I didn’t learn to appreciate the actual architecture personally so much as I learned to appreciate the value of keeping my mouth shut to hide my ignorance.

But in the spirit of giving credit where it’s due, I’d like to point out some things I really do love about modern, mainstream RPGs.  Now, this is tough to really do right, because modern RPGs remain a pretty diverse lot. Okay, not quite as diverse as the genre used to be (*coughcough*), but in spite of a noticeable list in a certain direction, these games still have their own take on the genre as a whole. So here are some things that I really feel like they are getting right:

#1 – Graphics.  Duh. We can argue styles all day, but the new games sure are pretty. And impressive. It’s pretty obvious to see where their huge budgets go, but I can’t help but be thrilled at how these worlds come to life. Seriously, this was what I was seeing in my head when I was playing those games way back then.

#2 – Ease of Play. While I gripe about frequently losing the depth and breadth of interactions in certain older titles, it’s not like that was a common thing back then, either. We remember the exceptional ones. And there’s a lot to be said about being able to pick up a game and just play, without needing a cardboard keyboard map propped up against the bottom of the monitor.  Cleaner interfaces, more intuitive (or at least familiar) controls, more gradual introductions to the games – these are generally good things. Maybe I’m just spoiled now by not having to RTFM in order to play a new game (although at least back then the manuals were actually fun to read!). But learning to play by actually playing is definitely the Right Thing.

#3 – Emphasis on Story and Characters. Sure, don’t sacrifice my game mechanics and interesting choices on the altar of storyline – I’m still here to play a game, not watch a movie – but story and characters are still what turns a good game into a great and memorable experience. That’s really a focus in most modern RPGs, and I’m pretty happy about that.  Really, in many older games, as much as I loved the story, like the graphics it was usually the story in my head that I created as I played that I loved, not so much what was there in the game. I think modern games could do a better job outsourcing some of those critical story elements to the player’s imagination – as the old games had to do – but what’s “on the page” has far more depth to it than almost anything created before the era of Baldur’s Gate, Planescape: Torment, and Fallout.  I should probably include jRPGs in that list, too, but you know what they are. Bravo, designers. While missteps have taken place and some games may have been made worse by truly crappy stories, this seems to me to be the right direction.

#4 – Strong Game Systems / Mechanics. Okay, while the emphasis on twitchery has me fuming a bit, going back and retro-gaming old favorites and “new-to-me” classics has reminded me that many of these standouts of the genre’s history had pretty crap systems that were frequently poorly-implemented, poorly-balanced, and most likely poorly-tested retreads of some D&D rules variant. Fortunately, the games were often a lot of fun in spite of this.  As the CRPG Addict keeps rediscovering, some of those games remain quite fun today, but others have not aged well at all.

#5 – Less Grinding. I like some grind. Especially when it’s optional. It’s a great way to make some additional “progress” in the game without taxing the brain or the skills too heavily. But a good portion of older RPGs was grinding. The 50+ hours you’d need to put in to beat it? You weren’t following an epic story all that time. No, much of the time you were hitting the respawns on level 3 of the dungeon so you could get high enough level to cast the spells to keep you alive on level 4 of the dungeon.  There was really too much of that. It’s still an artifact in newer games, but it feels like designers are taking an active role in minimizing it.  That’s a good thing. Killing the same monsters over and over again just doesn’t scream “epic adventure” to me nowadays.


Filed Under: Mainstream Games - Comments: 17 Comments to Read



And Yet More IGF Problems

Posted by Rampant Coyote on March 1, 2012

I don’t want to disparage the IGF right before the awards, ‘cuz I’m sure they will be awesome, and the games will be too. Even if they aren’t necessarily the best indie games of 2011, they will at least be quite worthy of attention.

But this post confirms some deep-rooted suspicions about the competition, which I hope will be improved upon in the future:

What’s Wrong With the IGF?

Quick Summary: The IGF chose TestFlight for their iOS distribution, which gave canny developers access to information on the judges’ activities with the game. In short, of eight total judges assigned to the game, one never installed the game, two others installed it but never played it even once, and of the other five, only three played it for more than ten minutes.  Only one played it for nearly an hour, giving it what the author considered a good, honest shake.

Fair? Maybe, maybe not. While the $95 entry fee is pretty low, I’d still expect the judges to install and try my game for at least a few minutes. I understand the judges are busy folks too, and they may have their own methods of “weeding out” games that are clearly not going to make the cut for the year’s finalists.  With all the entries this year, it’s understandable, and in a sense the first round of judges are the “slush pile editors” for the competition.

But it really makes me wonder, then, as to the value of the IGF. As many people have noted, the only games guaranteed to get a good, hard look are the ones that have already made a name for themselves. Games that are not already popular have the deck stacked against them, particularly if they are unable to stand out to the judges who are supposed to be giving them a serious look.  Isn’t the IGF supposed to be about shining the spotlight on these otherwise “unknown” games? Or is it just a popularity contest?

I don’t know if there’s a good solution.  Maybe the IGF needs to raise its entry fee and pay the judges a bit more and guarantee a certain amount of play-time on a title. Maybe there needs to be a better structure for how the nominations “ripple up” to become finalists in each category.  Whatever the case, while a hundred bucks isn’t very steep, it’s enough that someone shouldn’t be concerned that their entry will be completely ignored.


Filed Under: Indie Evangelism - Comments: 16 Comments to Read



Another Take (of Three) on JRPGs vs. WRPGs…

Posted by Rampant Coyote on February 29, 2012

Courtesy of Extra Credits – now hosted over at Penny Arcade:

Western & Japanese RPGs (Part 1)

I’m interested in hearing what else they have to say, but I’m not sure I’m going to be on board with their conclusion that WRPGs and JRPGs are two distinct genres.  Especially as I made a big case the other day about their similarities and loss of distinctiveness between the two. I guess I’m just a “big tent” kind of guy, trying to include JRPGs, turn-based RPGs, action-RPGs, roguelikes, and other offshoots all under the same general heading. One nation, many states, something like that.

I’m prepared to finish hearing out their arguments. The question is – how narrowly is a genre defined by the core play aesthetics? Action RPGs and turn-based RPGs scratch some of the same itches for me as an RPG fan, but definitely not all. Does that make them completely different genres? I know a lot of readers would emphatically state that it does, but I see too much of a spectrum over the history of CRPGs, and I really don’t know where I’d draw the line. I pretty much considered Deus Ex as an FPS for many years, in spite of people telling me how wrong I was and how it was really an RPG. I can accept that now, in hindsight, but that really enlarges the tent.

I’m happy to talk about subgenres, with the acknowledgement that clear defining lines between them are impossible to come by. I tend to see them more as ingredients. “Game A is one cup action-RPG, a half-cup of JRPG, a dash of hardcore strategy, and a half-pint of old-school console gaming to sauce it up.” And as a side-note, I have no idea what kind of game that would end up being when it came out of the oven…

But case in point: Telepath RPG – Servants of God. What kind of game is it? Which of the two genres – JRPG or WRPG – would it fit under? I’d say, “neither,” but if you put a gun to my head and forced me to make a distinction, I’d say WRPG. But the game borrows a bit from both, possibly at the same time now that the two RPG styles have even more in common. But it also borrows from “tactics” games, favoring the Japanese style of combat tactics over the western one.

And although my own Frayed Knights: The Skull of S’makh-Daon is clearly rooted in the western RPG tradition, there are some aspects – the dialogs and pre-gen party, for example – that are more frequently associated with JRPGs.

I really doubt there’ll ever be a broad consensus. As always, it’s the discussion that’s fascinating and worth having, not the conclusions.


Filed Under: Design - Comments: 8 Comments to Read



Technical Dungeons

Posted by Rampant Coyote on February 28, 2012

I’ve got a new term which I’ve been using all week to describe a feature of a particular flavor of dungeon-crawler RPGs: “Technical Dungeons.”

I don’t know how to define it yet. You get it in many roguelikes and old-school western RPGs. You don’t get it in most JRPGs or modern non-indie RPGs. Darklight Dungeon Eternity has a lot of it (thus my pondering on this matter).  Frayed Knights: The Skull of S’makh-Daon has less of it than I’d like. Ultima V had it. Ultima VI, not so much. Ultima VII had very little of it. Ultima Underworld, Dungeon Master, the Eye of the Beholder games – they had it in spades.  From accounts, Wizardry IV had way too much of it to be any fun at all except to masochists. But all of the Wizardry games had plenty of it, up until Wizardry 8, which didn’t have very much of it in spite of plenty of dungeon crawling. The Elder Scrolls games had tons of dungeon crawling, but didn’t really have much of it either that I could feel.

Am I beginning to sound like Bruce Campbell yet?

What I’m talking about is the technical, analytical approach to navigating a dungeon. It’s a point where the dungeons of a game become more than just a setting where the game and story happens, and more than just a path between combat and puzzles. It’s where the dungeon itself becomes an obstacle, encounter, or character in the game in its own right, offering explicit or implicit clues to its own nature. Where navigation of the dungeon requires a constant weighing of risk and reward. They can be automapped, but the map may actually need to be studied by the player from time to time to determine how to get to where he wants to go, or to figure out its secrets. A technical dungeon is decidedly non-linear, and is not something that will usually be “defeated” in a single session. While any old dungeon may contain combat, traps, puzzles, and secrets, in a technical dungeon these are not stand-alone elements.

Really big dungeons help. Short, quickie dungeons really don’t have the time to develop themselves in the player’s mind, and are so short that navigating them never requires much effort. In that sense, a “technical dungeon” is pretty old-school, as the early D&D campaigns often revolved around a single dungeon (Castle Greyhawk!) that demanded multiple sorties to plumb its depths, defeat its guardians, and dig out its secrets. Respawning enemies are not required in a technical dungeon, but they do promote the judicious use of retreating to safer spots that is one of the characteristics (to me) of technical dungeon gameplay.

But overall, I don’t know if I can truly define it, so much as describe it. It’s not a clear-cut thing even in my own mind. Unfortunately, it’s not something that immediately appeals to people (myself included!), either, as it’s a very slow-burn kind of thing with delayed gratification. And it’s not strictly limited to “dungeons,” either, though this is its classical form. It’s just one of those things where how you approach a dungeon is as important as what you encounter while there.

CRPG Addict (who’s back after a fortunately short hiatus) talked about this a little bit over the weekend, which was a nice bit of synchronicity.  In his post about 1975’s Game of Dungeons, he talks about how RPGs marry the “left-brain” elements of strategy games and deep rules mechanics with the more “right-brain” aspects of setting, plot, and character development.  In the early days, the “left brain” aspects dominated the games, mainly because it is far easier for computers to crunch numbers than to manage story development. The latter takes up a lot more space, and is very difficult to vary on repeat plays. It wasn’t until the early 1980s that these latter aspects gained more equal footing with the technical side of the art. And kept going. Until you get to today, where oftentimes the gameplay mechanics feel like make-work interludes for a linear story.

(Sadly, fans of the mechanical aspect didn’t earn themselves any sympathy when they took out their frustrations on Bioware designer Jennifer Hepler recently over comments she made years ago about making RPGs more “casual” and heresy like making combat skippable in RPGs. It’s fine to disagree, but personal attacks on designers are deplorable.)

So where do I sit? Firmly in the “undecided” category. As I enjoy both action-RPGs and turn-based RPGs, JRPGs and WRPGs, I find aspects of highly technical dungeons very appealing, yet sometimes I enjoy getting on with the story and having dungeons that are little more than interesting places where the “real” game happens. It’s all good. Truly technical dungeon-crawling has been in short supply in recent years. I’m not sure what the “last” mainstream game title was that really offered pretty technical dungeoneering. Maybe Durlag’s Tower in Baldur’s Gate, but I seriously don’t remember that one very much. Or maybe one of these several newer RPGs that I haven’t finished playing. Got any nominations?

I have high expectations for the upcoming Legend of Grimrock. Between that, and other indie RPGs such as the recently-released Darklight Dungeon Eternity and the upcoming “Gold” overhaul of Sword & Sorcery: Underworld, I’d suggest that we indie RPG fans will still have plenty of that flavor to enjoy for a while yet.


Filed Under: Design - Comments: 10 Comments to Read



A Game Dev’s Story, Part VI: Indie Before Indie Was Even Uncool

Posted by Rampant Coyote on February 27, 2012

My dad had an overdeveloped sense of my skills as a game programmer when I was whipping out adventure games and simple arcade-style clones. He owned a tiny holding company – which are sometimes derogatorily referred to as “shell companies.” Mainly it was there to be a “parent” company for the entrepreneurial idea of the week, so he wouldn’t have to go through the time, expense, and paperwork burden every time he decided to launch a new venture. Most of the time it didn’t do too much – Dad was never super-entrepreneur or anything – but I guess he got the bright idea we could get my games out on the market. Great idea, especially with college looming on the not-too-distant-horizon. I’m going to break it to you now, though… it never panned out.

Around this time the industry was still emerging from infancy to toddler-hood, trying to learn how to walk.  What was it going to be like? Was it going to be like the consumer electronics market, selling games (and other software) at Radio Shack next to the stereo speakers and tape decks? Was it going to be like the print publishing industry, with more-or-less solo game authors working with agents and submitting their works to the publishers? Was it going to be like the record industry, signing on “hot talent” for multi-game contracts? Or maybe even like Hollywood? Nah, let’s be serious – the games biz could NEVER even be in the same league as Hollywood, right? 🙂

Anyway, my dad was clueless, I was only slightly less clueless, but we tried the “print publisher” approach. It was something my dad was familiar with, and he already knew a guy who was billing himself as an agent looking to expand into this brave new world of computer entertainment. We took him on, and he pushed my games, and we were actually considered by a couple of publishers (including one book publisher who was flirting with the idea of going into publishing games, but then backed off the idea).

But really, the problem was (in part) that we were about four years too late. The days where Scott Adams could make little text-adventure games and sell ’em on tape by the buttload, or where Richard Garriott or Daniel Lawrence or Freeman/Connelly/Johnson could bash out an RPG in BASIC (or mostly BASIC) and sell 20,000+ copies was largely over *. In 1979, 1980, maybe even 1981 it would have been possible. But a couple years later, as a junior high school student still learning to program, this just wasn’t going to fly on any real scale. Especially with the glut of games on the market, on the cusp of becoming “The Great North American Video Game Crash of 1983.”

But I knew none of these things. It was going to be another decade before I started making games commercially. It’s kind of amusing (and a little embarrassing) to look back on it all.  But hey, I was kinda-sorta an indie way, way, way back!

Anyway, a year or two before I graduated from high school, the computer-programming thing (and computer use thing) began to die down. My brothers demanded (rightfully) time on the “home” computer, the C-64, and I was fairly busy with several other hobbies (some things never change), active participation in a medievalist group, and dating girls (the latter two may sound like mutually exclusive activities, but in practice this wasn’t really the case.)

I was “computerless” when I went off to college. I started out with an Electrical Engineering major, which had zero programming requirements. I still played arcade games and mooched off of friends’ computers for entertainment, but aside from a couple of “elective” programming courses I took and didn’t put too much effort into, I was not keeping up on any coding skills. I took a break from school to serve a mission for my church, which again kept me (mostly) away from computers. We occasionally found ourselves playing some Nintendo at family’s homes after dinner. One church member in Ogden, UT, upon discovering that I was an RPG fan, introduced me to Dungeon Master on his Amiga. We couldn’t stay long, but I was awestruck by what little I saw and played.

I still loved gaming, and the itch to create – which had never completely subsided – returned to the surface.

Upon my return to school, I tried to switch my major to computer science so I could get back into programming. I’d discovered – during a particularly boring E.E. lecture – that I really didn’t want to do that for a living. I didn’t want to make the machines, I wanted to make the machines do something COOL. Like simulations. Or games. All that had to wait pending certain military commitments related to an ROTC scholarship, but the discovery of a particular defect of my knees that had been causing me pain resulted in a medical discharge which changed a lot of my life plans.  In retrospect, I’m pleased with how things turned out, though at the time it was something of a shock. And all my life, I’d thought everybody’s kneecaps were angled diagonally like mine. I’d never really noticed…

So I went back into programming (ahem, “Software Development,” or “Computer Science,” whatever … it was programming), which I loved. While I expected I’d probably end up with some kind of job writing accounting software or something boring like that, deep down inside a part of me really wanted to make games for a living. But how likely was that to actually happen, really?

 

 * The games were, of course, Akalabeth: World of Doom & the Ultima series, Telengard, and Temple of Apshai respectively. In case you were wondering what that video of Apshai was supposed to be about.


Filed Under: A Game Dev's Story, Geek Life, Retro - Comments: 2 Comments to Read



Major Price Reduction on Eschalon Games

Posted by Rampant Coyote on February 24, 2012

If you were looking for an excuse to get the Eschalon games, this is it. Both games have been reduced in price by $10 … now at $9.95 and $14.95 for Eschalon: Book 1 and Eschalon: Book 2 respectively.

Get Eschalon: Book 1

Get Eschalon: Book 2

I haven’t changed the prices on the web pages to reflect the new prices yet, but if you click the “buy now” button, you’ll see the new price on the order form.

This is a great chance to get caught up on the series on the cheap in preparation for the upcoming release of Eschalon: Book 3.

The web pages explain things well enough, but here’s the nutshell: The Eschalon series are old-school style isometric-perspective RPGs. Turn-based everything, text window, rolling up a character, stats, non-linear storyline, ‘n all that good stuff.  You do not need to have completed the first game to enjoy the second, but it’s a good place to start.  And since you can now buy both games for the former price of the second alone, why not grab both?


Filed Under: Deals - Comments: 5 Comments to Read



RPG Design: Variable Encounter Difficulty

Posted by Rampant Coyote on February 23, 2012

A friend asked the other night about combat encounter difficulty in RPGs – is it better to have the challenge be consistent, or variable? I answered pretty firmly in favor of the latter. I think most people who suffered through the automatic level-scaling in The Elder Scrolls IV: Oblivion would probably agree. But here’s why.

First of all – consistent, linearly-increasing difficulty level in video games is generally recognized as a Bad Thing. It’s predictable, tedious, and causes player fatigue. It’s far more entertaining to have hills and valleys along a generally increasing difficulty level slope. Naturally, the peaks of the hills will often represent boss encounters or other significant events, while the lowest dips will generally follow immediately after as the player is given a ‘breather’ after a major challenge. But a little unpredictability in the pattern helps keep things interesting.
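The “hills and valleys along a rising slope” idea is easy to sketch in code. Here’s a toy illustration in Python (all the numbers and function names are hypothetical, just to show the shape of the curve, not anything from an actual game):

```python
import math
import random

def encounter_difficulty(progress, seed=42):
    """Toy difficulty curve for 'progress' in [0, 1] through the game."""
    rng = random.Random(seed + int(progress * 100))
    baseline = 1 + 9 * progress                  # generally increasing slope
    wave = 2 * math.sin(progress * 6 * math.pi)  # boss peaks and 'breather' dips
    jitter = rng.uniform(-0.5, 0.5)              # a little unpredictability
    return max(0.5, baseline + wave + jitter)

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"progress {p:.2f}: difficulty {encounter_difficulty(p):.1f}")
```

The point of the sine term is that a quarter of the way through a “cycle” you’re at a peak (a boss), and shortly after you’re in a valley (the breather), while the baseline keeps the overall trend upward.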

In RPGs, another consideration is anchoring the player in their level progression. The frustration in Oblivion – as well as many other RPGs, though it is less blatant in many other games – is that the reward for level progression is increasing difficulty of encounters, which actually results in the player feeling less powerful relative to his opponents as he theoretically “levels up.” Easier combats – particularly those similar to the more “challenging” combats encountered much earlier in the game – reveal how much the player characters have progressed in ability.

One problem with the modern trend of eliminating almost all resource management in mainstream RPGs is that weaker encounters become meaningless. I first saw this in a dice-and-paper RPG years ago, when playing Fantasy Hero. The Game Master decided to try and provide one of those “anchoring” experiences I just mentioned, and had our group fight some ogres – a creature type that hadn’t challenged us in a very long time. Due to the nature of the system, not only was the fight boring, but it was pretty meaningless (and the GM never tried it again), as our party was fully ‘healed up’ and refreshed for our next encounter. So we never again encountered anything that didn’t offer at least a moderate risk to our survival.

That is exactly the problem with games like Dragon Age where the party heals up fully almost immediately after every fight. And thus designers fall back on overusing things like “waves” of enemies to inhibit resource restoration while re-using less challenging foes.

In D&D, hit points and magic spells refresh slowly, which means a less challenging encounter still impacts the overall game. Sure, a dozen goblins may carry no significant risk to a party of 6th level characters, but any hit points lost or spells expended in the battle won’t be there for the next one – which may be a lot more dangerous. In this way, even the “easy” battles require significant choices to balance risk and reward.  For all its weaknesses as a system, D&D still got the basics right.
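That attrition effect can be shown with a tiny simulation. This is a made-up sketch (hypothetical numbers, not actual D&D rules): each “easy” fight is individually harmless, but without full healing between encounters the costs stack up into real pressure before the dangerous fight arrives.

```python
import random

def run_gauntlet(hp=60, spells=8, fights=6, seed=1):
    """Simulate a string of easy fights with no healing in between."""
    random.seed(seed)
    for fight in range(1, fights + 1):
        hp -= random.randint(2, 8)   # chip damage from an 'easy' encounter
        if spells > 0 and random.random() < 0.5:
            spells -= 1              # occasionally burn a spell slot
        print(f"After fight {fight}: {hp} hp, {spells} spells left")
    return hp, spells

run_gauntlet()
```

With full healing after every fight (the Dragon Age model), the loop above would reset hp and spells each iteration and every easy encounter would be pure filler, which is exactly the Fantasy Hero problem.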

Likewise, throwing in a more challenging encounter other than the conventional “gatekeeper” encounters can help keep things fresh and interesting in an RPG. It can encourage the player to adapt and use less “brute force” tactics. Or encourage him to finally put those one-shot magic items he’s been hoarding to good use. Or he may exercise some freedom of choice and skip the encounter, possibly coming back to it later in the game after he has progressed a little bit further (thus providing another ‘anchor.’)  It keeps things interesting.

One final, tangential word: Even as combat difficulty should be varied as it scales, combat style (for lack of a better word right now) should also be varied. Even at early levels, the player should face combat encounters that require different approaches. Enemies should vary far more than just having more hit points and doing more damage with their claws and bite attacks. Encounters should vary the “best” approach, including types of attacks, types of defenses, tactical positioning, use of items, use of the environment. There can even be encounters which are best fought by not fighting at all, but luring the enemy into a trap.

Variety is the spice of life, and it is definitely the spice of any RPG.

(Image above is from the 1E AD&D Monster Manual. (c) 1979 TSR. Good ol’ David Trampier art…)


Filed Under: Design - Comments: 4 Comments to Read



A Game Dev’s Story, Part V: Language Skills

Posted by Rampant Coyote on February 22, 2012

As I attempted to imitate my favorite arcade games on the Commodore 64, I found myself running up against a pretty major limitation: Speed. Specifically, the lack thereof when running my homemade games.  I learned a lot of tricks (most of which I’ve forgotten, now) to making my BASIC programs faster. But while those optimizations helped, and I could write some pretty fun little games, there was only so much an interpreted language like BASIC could handle.

What is an interpreted language? I’m glad you asked. Yes, you programmers reading this can snooze for three paragraphs – most of the readers here aren’t programmers and don’t know this stuff. 🙂  Okay, traditionally programming languages come in two different types (though we now have some extra variations): Interpreted and Compiled. The difference between the two is a little like the difference between someone creating a translation of a book from a language you don’t understand into your native language, versus you trying to translate it word-for-word as you read it.

In a compiled language, you use something called (wait for it…) a compiler which takes your human-readable code and translates it into the computer’s native machine code, which tends to be significantly more challenging for a human to read (and modify). While the compiler may not create machine code that is quite as optimized as a human being could do by hand, it is in the computer’s “native language” and thus executes as fast as the system can go.

In an interpreted language – which BASIC usually was back in the day – your program is run via another program – an interpreter program – which reads and interprets your program as it executes. This pretty obviously slows things down – a lot!  So why use an interpreted language? Mainly because it’s easy and often pretty interactive. If speed is not an issue, or you are just “experimenting” with a language (one of my favorite languages, Python, is great for this), an interpreted language is really the way to go. Interpreted languages are also very portable – which is why they are used for web-scripting. In theory, at least, you can run the same interpreted language code on any system with a compatible interpreter (JavaScript is a popular one) and it will run the same no matter how different the hardware may be.

There are a couple more variants on languages that are worth mentioning. The first is Assembly language. Assembly language is sort of an intermediate stage between machine code and a higher-level language. Assembly is specific to the CPU you are writing code for, and is indeed very similar to machine code, with each statement corresponding to a machine code instruction. The advantage of assembly language is that it allows you to abstract some of the details – like using variables and labels, mnemonic opcodes, and relative memory locations. This means it still has to be “compiled” like a compiled language, but the process is so straightforward the compiling program is called an assembler instead of a compiler.  Everything’s one-to-one, so there’s no disadvantage (that I can think of) to using assembly instead of machine code.

Another variant – a relatively new one – is “bytecode compiled” languages, like Java. These languages try to straddle the gap between compiled languages and interpreted languages. In bytecode compiled languages, the program is compiled down not to native machine code, but to an optimized intermediate set of instructions that can be executed by another program very quickly. You don’t get the level of interactivity you may enjoy with a fully interpreted language, but it’s theoretically just as portable and a lot faster.  But this is just an aside – I didn’t have this back on my C-64.
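As a modern aside: Python itself is actually a bytecode-compiled language under the hood, and it ships a standard-library module, `dis`, that lets you peek at the intermediate instructions. This little example just inspects a trivial function (the exact instruction names vary between Python versions):

```python
import dis

def add(a, b):
    return a + b

# Python compiles 'add' to bytecode; dis prints the instruction stream
# that the interpreter's virtual machine actually executes.
dis.dis(add)
```

Running this prints a listing of load, add, and return instructions, which is the “optimized intermediate set of instructions” in miniature.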

It became clear that to get speed on my C-64, I needed something faster than BASIC. This meant one of three approaches.

First of all: I could get a compiler for BASIC, to convert my BASIC code into machine language. I tried this. The compiler, unfortunately, sucked. It was unreliable, only worked on a subset of BASIC, and the results were very hard to debug. Something would work perfectly in BASIC but not work in the compiled version (although it was much, much faster), and all I could do was to start taking guesses as to what was malfunctioning and rewriting code until the compiler managed to get it “right.” After only a few attempts, I completely gave up on this option.

Both of the other options involved getting an assembler and learning to write in assembly. I’d experimented a little bit with a straight machine-code editor (which used symbolic opcodes), but it was nightmarish to use. When I finally got a decent assembler, things were pretty wonderful. My second option was to switch to writing my games in assembly. While it was certainly possible (that’s what all the professionals were doing), it was a really slow, painful process to get anything done. There’s a reason higher-level languages exist: programming down at the level of “the iron” takes a lot more work. I never got too far doing this, either.

The third approach – and the one I ended up actually getting things done with – was a hybrid: I could write the speed-critical parts of the game in assembly – really tight “subroutines” mainly for displaying the graphics and handling things like collision and movement – and the rest of the game in interpreted BASIC. I did this a few times, and in general this worked pretty well. My biggest success came when I was writing yet another pseudo-3D first-person perspective RPG. I have a history with these things, don’t I? I had things like trees, fountains, and so forth that I wanted to display in a quarter-screen-sized window, but they had to be drawn in the correct position and at a size that reasonably approximated their perspective. And everything had to be overwritten by anything closer. This was pretty slow in BASIC, but in assembly I whipped up a pretty decent rendering subroutine! You could still catch a glimpse of the entire world rendering – which meant you could briefly see what was behind a wall, for example – but it was dang fast. Now, if I’d known better, I would have rendered the whole scene first to an off-screen buffer and then copied the buffer onto the screen so the player couldn’t even catch a glimpse of my lightning-fast rendering routine, but the idea never occurred to me at the time.
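That draw-far-things-first, overwrite-with-near-things technique is the painter’s algorithm, and the off-screen-buffer trick I missed is just double buffering. A sketch in Python, with a one-row list standing in for screen memory (the “scene” data is invented for illustration):

```python
# Painter's-algorithm + double-buffering sketch: render far-to-near into
# an off-screen buffer, then "blit" the finished frame all at once.
WIDTH = 16

def render_frame(objects):
    """objects: (distance, start_col, pixels) tuples. Drawn farthest
    first, so nearer objects overwrite anything behind them."""
    buffer = ["."] * WIDTH                       # off-screen work buffer
    for _, start, pixels in sorted(objects, reverse=True):
        for i, px in enumerate(pixels):
            buffer[start + i] = px               # overwrite what's behind
    return "".join(buffer)                       # copy to the "screen"

scene = [
    (1, 4, "TT"),      # near tree, overwrites part of the far wall
    (5, 2, "WWWWWW"),  # far wall
]
print(render_frame(scene))  # -> ..WWTTWW........
```

Because the player only ever sees the completed buffer, the mid-frame flicker I described never appears.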

I never became an expert at assembly language for the 6502 processor (the chip inside the Commodore 64). But when I was later required to take a college course where we developed software in assembly language, I was pleasantly amazed at how much easier it was to use the more advanced Motorola 68000-series chips in the classroom’s Apple Macintoshes.

So what did I get out of all of this?

I learned that tools are very important. The BASIC compiler and the early machine-code “editor” / monitor were pieces of junk that were almost of negative value to me. I was tempted to give up when I used these, and just assumed that anything beyond BASIC programming was too hard for me. But once I got a decent book (okay, it actually wasn’t all that decent) on 6502 programming, and a good assembler, I was able to go on and do some great things. Using the right tool for the job really helped, too. Jumping straight into pure assembly programming might have discouraged me, but I found that I could take a hybrid approach which gave me some early success stories. Those helped motivate me to keep going.

How this applies to you: Get the right tools. Take baby steps so that you can get some early successes under your belt to build confidence and familiarity while you are building your skills.

Next time: My brief early experience as an indie game developer, and a break for college.


Filed Under: A Game Dev's Story, Game Development, Retro - Comments: 5 Comments to Read



WRPGs vs. JRPGs – Not as Different as You Might Think?

Posted by Rampant Coyote on February 21, 2012

For a while, I was tempted to split up the RPGs on RampantGames.com into “Western-Style” (WRPG) and “Japanese-Style” (JRPG) role-playing games. I’ve not done that for a number of reasons, the main one being that it is too difficult to draw the line on indie games. A game written with RPG Maker will automatically inherit the look and feel of classic 16-bit JRPGs, but if it’s written with WRPG sensibilities, it’s hard to tell which side of the line it is on. In the last ten years, particularly, the line has blurred even more. This makes some sense, as the stylistic differences we tend to associate with the two “subgenres” were largely a matter of a tiny number of very popular titles influencing a host of imitators.

Rowan Kaiser’s article on Joystiq makes the case that the line was getting pretty blurry almost twenty years ago. I tend to agree. This predates my own “discovery” of console JRPGs, as I didn’t own a console back then, and may help explain why I found transitioning to console RPGs relatively easy. I think the article only scratches the surface, however. You can point to other examples of pretty story-heavy, system-light WRPGs (my favorite example being Ultima VII, of course).

But there’s another side to this coin. While I’ve played very little of the earlier JRPGs, in my limited experience I’ve found that as you go back to the 8-bit consoles they more closely resemble their western, computer-based cousins. They borrowed heavily from Ultima III and Wizardry at first. They also suffered from the same technological limitations, which restricted the amount of story they were able to tell in-game. By my understanding, it was really Final Fantasy IV, released in 1991, that launched the complex, character-based plot that became the signature of the subgenre. PC games, the focus of western RPG development, weren’t far behind, though they were still contending with compatibility issues on gaming-unfriendly machines.


Even though they had much more to work with on the 16-bit machines,  console RPGs (and JRPGs in particular, rapidly gaining popularity in the west) still had to work with restrictions that were not there on the PC. Saved games were still extremely limited on these cartridge-based systems, which forced limited world interactions, fewer character variations, and a more linear plotline.  Consoles were limited to gamepads with very few buttons, which forced simpler controls and menu-based choices. These limitations drove a style that persisted through the 32-bit era, but the distinction was growing thin even then. 1999’s Final Fantasy VIII even did away with much of the stylized “cutesy” character look of its predecessors in favor of the more realistic look of its western counterparts.

PC games started slipping in popularity here in the west towards the latter part of the 1990s, as console games began to clearly outsell their PC counterparts. Actually, I don’t think PC games really slipped that much in popularity so much as consoles gained popularity, and the relative difference in sales caused publishers to switch primary targets. I think the sales numbers on the PC haven’t changed much (as I learn that The Witcher 2, not yet on any console, has managed to sell 1.1 million copies). But the bigger sales on consoles led publishers to treat consoles as the primary platform, which in turn led to some optimizations (some sorely needed) for the console environment. At the same time, JRPGs were taking better advantage of newer technology, larger controllers, etc. Designers in Europe, the U.S., and Japan had all grown up playing all styles of games, and felt free to mix and match the best (and best-selling) ideas.

And so, naturally, we find the differences between JRPGs and WRPGs to be further eroded.

The distinctions that we make such a big deal about (well, those of us who care about such things) were really an artifact of a relatively brief time period that has been over for longer than it ever existed. And, as Joystiq suggests, the differences of that era weren’t really as pronounced as we tend to credit them.

And for indie games? Unlike mainstream publishers, which are compelled by economics to pursue the path of the most recent hits, indies have a bit more leeway to mix and match. We can dip back into old wells of any era (or genre, for that matter), mix them with new ideas, and see what we can come up with. As much as I advertise “old-school” style for Frayed Knights: The Skull of S’makh-Daon, I cannot deny that I’ve taken parts of it in some distinctly not-so-old-school directions as well. Or deny that what “old school” really means depends quite a bit on what kinds of games we’re talking about. Likewise, I can’t pigeonhole an RPG Maker-authored title simply because its graphics and control system are reminiscent of SNES-era JRPGs. It’s a convenient shortcut to take when describing a game (and I’ll keep doing it), but it’s really not very accurate anymore.

And maybe it never was.


Filed Under: Design - Comments: 2 Comments to Read



Blades & Booms – How Games Can Spark Real-World Interests

Posted by Rampant Coyote on February 20, 2012

Most of my knowledge of historical weaponry originally came from games. Not that game stats are the primary source of my knowledge anymore, but the games are what set me on the road. I don’t know why I get intrigued by the subject, but I do. Maybe it started with an article in Dragon Magazine — “Or With a … Weird One” (the second of a pair of articles, the first one on new rules for unarmed combat entitled, “Without Any Weapons”).  The article described a variety of unusual weapons used throughout history – complete with in-game stats. Many of the weapons were “unusual” merely because they came from non-European cultures. The atlatl, chakram (later made famous by the Xena TV show),  katar, manriki-gusari, bullwhip, and other weapons made their way into my campaign directly from this article. But while the stats were handy when running the game, it was the description of their usage in the article that really piqued my interest.

Maybe this was the stepping-off point… one more way in which D&D warped my brain from childhood… but from then on weapons and military history have been a source of curiosity for me. Whether it’s ancient stone or bronze-age weaponry, or the most modern jet fighters and laser-guided munitions, I eat that stuff up.

It’s usually the games that get me started. Combat flight sims drove me to study up on the real aircraft used in various eras – their capabilities, limitations, history of their construction, war stories, etc. I became interested in modern firearms directly as a result of playing Twilight: 2000 (and many other games since then). Twilight: 2000 had some excellent write-ups on modern firearms in one of their supplements, and that’s always how it started. I start out with a mild curiosity about whatever equipment my character is using in a game. The next thing you know, I’m reading books and watching the history channel, collecting weapons at Renaissance Festivals, taking archery or fencing classes, or visiting the firing range!

As a side note, I finally got to have my first weekend in a while this week, as the worst of crunch-mode at the day job is finally over and I didn’t have to work through the weekend. To celebrate I went with some friends to a local gun range, where we took turns trying out each other’s handguns. While there, we also rented a Barrett .50 sniper rifle and put a couple of rounds through this giant weapon, and a fully-automatic H&K MP5. This was my first time firing an automatic weapon. I managed to keep my groups pretty tight, so I was very proud of myself, though I think the stability of the MP5 had a lot to do with it. These weapons have been favorites of mine in games from the Twilight: 2000 and Rainbow Six days, so it was a lot of fun to finally try them out in the real world. Unfortunately, it’s too expensive to do something like that very often!

Games have also caused me to research (for fun!) U.S. Civil War and World War II history, robotics, medieval history, castle architecture, and areas of folklore. I will say that I was fascinated by Greek Mythology before I’d even heard about D&D, so it’s not that I wasn’t already predisposed to fascination in these areas, but it was D&D that made me expand my interest into other areas. As another example, while our interest in visiting New Orleans last year came from many sources, for me part of it was its excellent use as the setting of Jane Jensen’s adventure game Gabriel Knight: Sins of the Fathers.

While games are and should be a creative endeavor, this isn’t to say the worlds shouldn’t be well-researched. Weapons, world geography, folklore, and military history are natural areas of real-world knowledge that can infuse many kinds of games with a sense of realism, but it shouldn’t be restricted to that. One of the oldest computer games – Colossal Cave Adventure – had its origin as an interactive means for Will Crowther to express his fascination with caving. While I’d be hard-pressed to try a game that was explicitly about bird-watching, I could probably catch a spark of a designer’s enthusiasm in a game that included a bit of it in its design. At the very least, its inclusion might increase the verisimilitude – and my own immersion – in the game.

I’m not saying that good game design must include well-researched real-world elements. But from my own experience, I can say those that do have the possibility of having an impact that goes far beyond the few hours spent playing them. Considering what I’ve done with my career, how I met my wife (okay, it was at a dance, but we kept talking because I was annoying and she’d admitted to playing D&D), and my (far too many) hobbies and interests, it’s really no stretch to say that games made me what I am today. And I’m pretty happy ’bout that.


Filed Under: Geek Life - Comments: 2 Comments to Read



A Game Dev’s Story, Part IV: Sixty-Four

Posted by Rampant Coyote on February 17, 2012

When released, the Commodore 64 was quite simply mass-produced awesomeness. We didn’t really know it at the time… it was just another “home computer” released in a sea of competitors. It came from a company that had specialized in business systems (the name was “Commodore Business Machines”), with one weak home computer entry (the VIC-20, for which William “Captain Kirk” Shatner was the spokesperson). However, they did “get” the key to the home computer market: It had to kick butt at playing games, but masquerade as a “serious” machine to justify its purchase price.

I’ll probably talk more about the “C-64” in a few months when its 30th anniversary of release really swings around. But in September 1982, when I finally received one of the first machines, it was incredible. Sixty-four kilobytes of RAM… although only about 38K of that was actually available to BASIC. Still – that was nearly forty times what I was used to. Sixteen colors. Sprites with hardware collision detection. A really decent version of BASIC. Full-screen editing. An incredibly powerful sound chip – the SID chip – that could give the Atari sound chip a run for its money. Atari-style joystick ports. Cartridge ports. An RS-232 interface port. The only significant downside to the entire system was an incredibly slow disk drive, the 1541. But for me, it was a dream machine.

Upon release, software for the C-64 was in short supply. This wasn’t a situation destined to last very long. But since I didn’t have much money to buy games anyway – and nobody I knew yet had a C-64 either – I was left to my own devices to get some entertainment value out of the machine. And with 64K, a decent implementation of BASIC, a beefy machine, and a couple of books filled with Creative Computing’s game listings, I was pretty much set.

I typed in Wumpus I and Wumpus II. Star Trek. Hockey. Sea Battle. Checkers. Nomad. And a half-dozen little games played on a grid (whether displayed or described in text coordinates). I debugged the games when I learned I’d typed them in wrong. I tweaked them. And I wrote my own. Either way, I learned.

Between Wumpus and Nomad, I figured out how to create data structures for nodes representing locations. This was key to my figuring out how to do an adventure game. Rather than having every “room” be a section of code that called parser subroutines, I could make most of the game data-driven with a generic system. Bingo! I was learning. And one night, I put it all together in a little adventure-esque game. Well, not even a game, so much as a game space. It was simply a dungeon of about a dozen rooms. But it had a very basic parser, the “dungeon” was fully data driven, and it had potential. And the whole thing came together at once. I remember the music that was playing on the radio (volume turned down low) when it all worked… it was Saga’s “Wind Him Up.” Like “Queen of Hearts” was linked to playing Asteroids and Pac-Man, that song has ever since been linked to that feeling of victory when making games for me.
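The rooms-as-data idea can be sketched in a few lines of Python: each location is a record in a table, and one generic loop handles parsing and movement for all of them (the room names and layout here are invented for illustration):

```python
# Data-driven adventure skeleton: every "room" is a data record, and a
# single generic routine handles look/move commands for all of them.
ROOMS = {
    "entry": {"desc": "A dusty entry hall.", "exits": {"n": "crypt"}},
    "crypt": {"desc": "A cold stone crypt.", "exits": {"s": "entry", "e": "cell"}},
    "cell":  {"desc": "A cramped cell.",     "exits": {"w": "crypt"}},
}

def step(room, command):
    """Return (new_room, message) for a one-letter command."""
    if command == "l":                         # "L" for Look
        return room, ROOMS[room]["desc"]
    if command in ROOMS[room]["exits"]:        # "N", "S", etc. for movement
        new = ROOMS[room]["exits"][command]
        return new, ROOMS[new]["desc"]
    return room, "You can't go that way."

room = "entry"
room, msg = step(room, "n")   # north into the crypt
room, msg = step(room, "e")   # east into the cell
print(room, "-", msg)         # -> cell - A cramped cell.
```

Adding a room means adding one dictionary entry – no new code – which is exactly why this structure scales where room-as-code doesn’t.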

That opened up the flood gates for me, conceptually. Not just for adventure games, but for all kinds of games. Remember, I was still entirely self-taught, and had only been working in a very limited programming environment before. Now I was finally understanding how programming in general was supposed to work – the ideas of separating code and data, keeping code very reusable (and maintainable), and so forth. I was still working with BASIC, so GOTO and GOSUB commands remained the order of the day on the spaghetti-code menu. But at this point, I was able to start doing some pretty complicated games.

My only two “finished” adventure games were ones I entitled, “Dungeons of Doom” (yeah, yeah, I know) and “The Secret of Red Hill Pass.”  Dungeons of Doom was the first, and in retrospect was pretty horrible. It had limited character-based graphics to illustrate objects in certain rooms, plus a few sound effects and other special effects (like ugly, flashing color backgrounds).  The parser was limited, and I think I only allowed the player three inventory items at a time.  The puzzles were primarily figuring out what item defeated what monster. The spear was for the minotaur, the muzzle was for the dragon, etc. The trick was the hand grenade, which was able to destroy every monster in the dungeon, but one of the monsters was invulnerable to everything but the hand grenade. So if you used the hand grenade against anything else, you could never win the game. I kept score, so it was possible to earn a higher score rather than “complete” the game, but it was a pretty atrocious design. I hope I’ve learned a thing or two since then.

Dungeons of Doom consisted of about 26 rooms, in total. The Secret of Red Hill Pass was, as I recall, almost ninety rooms. It could best be described as “Zork Lite.” I totally cribbed from Colossal Cave and Zork on that game, though it still had plenty of original ideas as well. The game took place in an abandoned manor (and the dungeon beneath it) along the titular pass. I don’t think there really was a “secret” to Red Hill Pass other than the adventure-filled manor. But the parser was pretty good – not quite up to Zork standards, but superior to some of the other popular text adventures of the time.

One project I undertook (mirrored by hundreds if not thousands of other D&D fans with programming know-how) took the random dungeon generator from the back of the Dungeon Master’s Guide, plus random monster tables, plus some custom object interaction code, and created a giant random-dungeon generator. The whole thing would print out with room and event numbers over literally hundreds of sheets of tractor-fed dot-matrix printing.  The rooms had simple descriptions, monster encounters which you could run solo, and actions which you could take, choose-your-own adventure style, leading you to a new room or event number you could then look up on the sheets. Due to the cost of printer paper, I only made about three full-sized dungeons this way, but it was actually possible for me to kinda-sorta play through a solo D&D game this way. Yeah, not as cool as a fully playable RPG on a computer, but it worked. And it was a fun – and educational – exercise.
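The same table-driven trick is simple to sketch: roll room contents from a few tables, number the rooms, and emit choose-your-own-adventure links. (The tables and structure below are invented for illustration – they’re not the DMG’s actual tables.)

```python
import random

# Table-driven random dungeon sketch: roll each room's shape and contents
# from simple tables, and link rooms by number, CYOA-style.
SHAPES   = ["square chamber", "long corridor", "round vault"]
CONTENTS = ["empty", "monster", "treasure", "trap"]

def generate_dungeon(n_rooms, seed=None):
    """Return a list of numbered room records with random exit links."""
    rng = random.Random(seed)   # seed makes the "printout" reproducible
    rooms = []
    for i in range(1, n_rooms + 1):
        exits = rng.sample(range(1, n_rooms + 1), k=min(2, n_rooms))
        rooms.append({
            "number": i,
            "shape": rng.choice(SHAPES),
            "contents": rng.choice(CONTENTS),
            "exits": exits,   # "go to room N" numbers to look up on the sheets
        })
    return rooms

for room in generate_dungeon(3, seed=42):
    print(f"Room {room['number']}: {room['shape']}, "
          f"{room['contents']}; exits -> {room['exits']}")
```

Printed out, each record reads like one of those numbered entries: a description, an encounter, and the room numbers you can move to next.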

I also made dozens of unfinished games.  I had a 3D perspective party-based RPG that was playable enough that friends came over and played the thing. I guess you could say this one was marginally “finished” in that it was playable, but I really just stopped working on it after a while. I had dozens of game clones I’d written that rarely got past the point of being tech demos. I had early attempts at cloning and re-imagining favorite arcade games of the era. My Missile Command-inspired game had flying saucers that were flitting around launching bombs at various angles, for example.

I eventually had the chance to play many of the games I had jealously read about. While this is generally a good thing – certainly a great inspiration for me – it was a lot easier to play games instead of writing them. This remains the case today. Some days, writing games is a lot more fun than playing them. Other days… not so much.

Pretty soon, I did run into some of the limitations of the C-64. Speed, memory, etc. And I actually kinda-sorta had a software business at the time. More on that in the next installment.


Filed Under: A Game Dev's Story, Game Development, Retro - Comments: Read the First Comment



Robert Boyd on How To Become an Indie Game Developer

Posted by Rampant Coyote on February 16, 2012

Robert Boyd of Zeboyd Games (“Cthulhu Saves the World“) has some advice for aspiring indies…

So You Want to Be an Indie Game Developer?

There is probably nothing new here for folks who have been following this blog for a while, but it’s still good advice: Practice, Analyze, Iterate. There’s more, of course, as well as some detailed suggestions, but it really boils down to variations on these three basic actions, and getting help when / where possible.

You become an indie game developer by… DUH… making indie games.

You become a GOOD indie game developer by making games a lot, soliciting feedback, and improving your craft.

You become a SUCCESSFUL indie game developer by getting good at it, and through either luck (playing the field!) or calculated effort meeting the demands of the market, and (which Boyd doesn’t really mention) discovering how to get your message out. Sometimes someone else will take that job for you, or things will just go viral and LOTS of people will do that job for you – but it still has to happen one way or another.

Like most other fields, persistence seems to be the key. As I frequently note, while almost nobody had heard of Notch before Minecraft, it was far from his first title. It wasn’t even his first successful project.

It definitely ain’t easy. But I think for me it’s so ingrained it’s just a part of me.


Filed Under: Game Development - Comments: 5 Comments to Read



A Game Dev’s Story, Part III: Grand Designs

Posted by Rampant Coyote on February 15, 2012

In our previous installment, our hero (er, me) was learning how to program on a tiny, super-cheap computer with 1K of memory. Er, the computer, that is. Our hero has slightly more memory, we’d hope.

While I was playing on a toy computer, the arcade / video game boom was hitting full swing — the boom before the bust, as a matter of fact. I lamented being unable to create anything even approaching the arcade experience. I had the chance to experiment with the Apple II, Atari 400, TRS-80 Color Computer, Atari 800, and even a VIC-20 that friends owned. All of these systems had color graphics (with expansions on the Apple II), sound (ditto), better versions of BASIC, far more memory to work with, and best of all – the programs could run without blanking out the screen. And of course, they cost several times more than our tiny little Sinclair.

My favorite of them all was the Atari 800. It seemed to have everything. The dang thing was BUILT for gaming, from graphics and sound co-processors to cartridge slots. And it was by Atari! Duh! Atari was the king of game machines, and was clearly going to remain the undisputed champion for decades to come. A couple of my friends were into programming as well, and were making very simple little games with awesome-sounding explosions using the Atari sound chip.

So I bugged my parents for a new computer. A lot. I devoured computer magazines, and quoted articles to them. I even bought a couple of books about popular games on other computers that I couldn’t play. This is how I was introduced to Wizardry. Wizardry existed in my mind years before I got to play the real thing. The real thing wasn’t quite as good as the game I’d imagined when I’d read so much about it, but it was still cool. In the meantime, I visited my friend Kevin’s house every other weekend and played on his Atari VCS (or “2600”), and we blew most of our money at the arcades.

Oh, yeah. The arcades. Let me tell you – 1982 was an incredible year for the arcades. It was the heyday. Video game machines were as common as gumball machines, and there were arcades full of them in nearly every shopping center or strip-mall. The graphics were far superior to what you’d get on the consoles of the time. Any time I found myself in a commercial district – shopping with my family, or whatever – I’d set out on an adventure to hunt down whatever video games or arcades were in the area. Inevitably, I would discover a game I’d never played before, and I’d spend a few coins figuring it out. I was definitely a game “grazer” – I would rarely focus on a single game. I’d put a few extra quarters in my favorites, but even then I was all about exploring the variety of games.

Geekiest of all (which I am also somewhat proud of): I was designing text adventure games. I created maps on paper – huge maps – with circles for each room or “node” and notes for the names of the various exits. I had plans, dang it. My methodology was based partly on those books containing hints and partial walk-throughs of adventure games, partly on my experience designing Dungeons & Dragons adventures for friends, and partly on my own limited experience playing adventure games on other people’s computers.

I wasn’t very sure how I was going to implement these adventure game designs, but I understood programming enough by now that I could figure part of it out. I’d written something along the lines of an adventure game on the Sinclair. I think it had a grand total of three rooms. There was nothing you could do besides move between those three locations, as that consumed every bit of memory I had. Each room did its own parsing of input – which was pretty easy, as they only had two or three commands each (“L” for Look, “N” for North, and “S” for South). I imagined that making a larger adventure game was a lot like making that tiny one, only bigger (with more rooms). At the time, I hadn’t really considered the importance of things like moving objects around from room to room, inventory, consistent parsing, etc.

None of those adventure game design plans were ever implemented, by the way.

One day, my dad announced he’d ordered a new “home computer” for “the family.” This was the good news. The bad news was that it didn’t exist yet. He’d ordered the Commodore 64, a recently-announced computer expected to ship early that summer. It didn’t. I expected to spend the summer going wild with sixty-four friggin’ kilobytes of RAM (oh what could I ever do with so much memory!). Instead, I made my paper adventure game maps. Our pre-ordered computer – which cost a mere $600 – didn’t arrive until early September. School had been back in session for over a week. My summer vacation had been lost, forcing me to do (GAH!) Summer Vacation-y things instead of programming.

But I would make up for it. Boy would I make up for it.


Filed Under: A Game Dev's Story, Game Development, Retro - Comments: 7 Comments to Read



A Sinister Valentine’s Day

Posted by Rampant Coyote on February 14, 2012

Craig Stern’s Telepath RPG: Servants of God is a top-down, 2D, story-heavy, turn-based tactical RPG that has been in development for over three years (at least it’s been over three years since I first heard about it and played the original demo). It’s been available for pre-order for something like eighteen months now. But, as previously announced, today is release day for the epic Steampunk / Middle-East flavored indie RPG.

Telepath RPG: Servants of God by Sinister Design

It’s strong on tactics, flavor, theme, world design, and characters. Tons of voice-acting, tons of thought, tons of choices, inter-character relations, turn-based tactical combat, etc. On the surface, this should be a dream indie RPG. Now that it’s released (or rather, should be shortly) and that crunch mode may be finally receding this week, I look forward to getting a chance to devote a few hours to it.

The world of Telepath RPG is one of oppression under a totalitarian theocratic regime. As a religious guy, I do worry about the anti-theocracy message turning into an anti-religion message. But considering how a study of medieval history in high school left me horrified at the atrocities committed in the name of Christianity, and similar acts committed right now in the middle eastern world under the guise of religion, I do feel I can be sympathetic to the theme. We’ll see how I feel if / when I manage to finish the dang thing. Hopefully it avoids preachiness and strikes a good balance. There was nothing that I encountered in my short previous playthroughs that led me to believe otherwise.

Another minor area of concern for me has been the purely deterministic results of combat. It’s all tactics, with no invisible dice determining hit chances or damage. This makes it more of a pure strategy game, like Chess or Go, and less like a traditional war game where players must contend with a chance of failure for even the best-laid tactics, and be prepared to revert to a plan B. Again, in practice with the demo, it didn’t bug me. Maybe several hours into the game this results in a tedious repetition of tactics, but this is a concern with any RPG, whether it has randomness or not.

But ultimately, I’m looking forward to the story, writing, and characters. From what I have seen and played, these look great, and the entire game looks like a winner. Telepath RPG: Servants of God is a fresh concept in RPGs, with a unique world and approach to the genre.  I’m a big fan of tactical combat in RPGs, so this should be a lot of fun to try out. I’m really excited about giving this one a more thorough test-drive.

But maybe not tonight. Valentine’s day and all that. I’m lucky enough to have plans with the lady. 🙂  But soon.



Filed Under: Game Announcements - Comments: 3 Comments to Read



A Game Dev’s Story, Part II – The Kilobyte

Posted by Rampant Coyote on February 13, 2012

Because of the binary nature of computers, they tend to handle things in twos – especially powers of two. The fundamental states used repeatedly by digital machines to represent everything are “on” (one) and “off” (zero). Now, by itself, that doesn’t seem to be a lot of information. But then you can combine bits to increase the amount of information represented.

For example, let’s say you wanted to count to three instead of just to one. You can add a second bit that modifies the first bit, which gives you a possibility of four combinations of values: “00” for 0, “01” for 1, “10” for 2, and “11” for 3. Each additional bit you add doubles the effective range. With three bits, you get eight total values (zero through 7): 000, 001, 010, 011, 100, 101, 110, and 111.  And so forth and so on.
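The doubling is easy to verify: n bits give 2**n distinct values. A quick check in Python:

```python
# Each added bit doubles the number of representable values: 2**n.
for n_bits in range(1, 9):
    values = 2 ** n_bits
    print(f"{n_bits} bit(s): {values} values (0 to {values - 1})")

# The three-bit patterns from the text, generated rather than listed:
patterns = [format(v, "03b") for v in range(8)]
print(patterns)  # -> ['000', '001', '010', '011', '100', '101', '110', '111']
```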

Someone decided to designate a bunch of these bits together like that as a “word.” But the actual number of bits in a word varied from machine to machine in the early days. So somewhere down the line they standardized terminology a bit, and decided eight bits was a pretty good collection, and labeled it a “byte.” Interestingly enough, another useful grouping was four bits, which was called a nibble (sometimes spelled “nybble”). And on x86-based architectures (and many others), a “word” has now been standardized enough to mean two bytes.

A single byte can have a total of two to the power of eight, or 256, different combinations. That’s a pretty useful range for a lot of purposes. It was often used to represent a character of text in the English language – uppercase, lowercase, punctuation, and a little left over. The machine-language code for these old machines usually consisted of pairings of two (or more) of these bytes – one as an “opcode” (the instruction) and one as the operand (the value to be acted upon). So you might have a very simple program that stores the value of five in the accumulator (part of the central processing unit or CPU), adds one to the value in the accumulator, and then stores the resulting value from the accumulator into memory at a particular location. That ridiculously simple program would use up six bytes of memory. Well, seven, actually, because you need that one extra byte left over to store the results.
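That little opcode/operand program can be mimicked with a toy CPU model in Python (the opcode numbers here are invented for illustration – real 6502 encodings differ):

```python
# Toy accumulator machine: each instruction is an (opcode, operand) byte
# pair, matching the load-five, add-one, store example in the text.
LOAD_A, ADD_A, STORE_A = 0x01, 0x02, 0x03   # made-up opcode values

def execute(program, memory):
    """Run opcode/operand pairs against an accumulator and a memory array."""
    acc = 0
    for pc in range(0, len(program), 2):     # step two bytes at a time
        op, arg = program[pc], program[pc + 1]
        if op == LOAD_A:
            acc = arg                        # A = arg
        elif op == ADD_A:
            acc = (acc + arg) & 0xFF         # A += arg, with 8-bit wraparound
        elif op == STORE_A:
            memory[arg] = acc                # memory[arg] = A
    return memory

# Six bytes of program, plus one seventh byte of memory for the result:
mem = execute([LOAD_A, 5, ADD_A, 1, STORE_A, 6], bytearray(7))
print(mem[6])  # -> 6
```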

So as you can see, even simple programs – written in pure machine code – use up memory fast. In order to be useful, home computers in the 1980s had to have lots of bytes of memory. In 1981 or so, this was measured in kilobytes – nominally a thousand bytes, though a kilobyte is actually 1024 bytes (2 to the power of ten). And while a thousand of anything sounds like a reasonable amount, a single kilobyte (or “K”) is really not very much memory to work with.

And that’s what I had to work with when I first got started programming. Not that I had any clue what a bit or byte was when I started. I was about to learn.

One evening, my father brought home a Sinclair ZX80. These never really caught on here in the United States (or anywhere else, for that matter), though its later version – the ZX81 (distributed in the U.S. by Timex – the watch-maker – as the Timex Sinclair 1000) – fared a little better.  They retailed for just under $200, but I think he picked it up for half-price at a show of some kind.

The Sinclair ZX80 was a tiny little computer with a membrane-covered keyboard. It ran on the underpowered Z80 microprocessor, with 1K of memory (RAM), and a low-resolution black-and-white display that plugged into a television.  You could plug a tape recorder into it to (try to) save or load programs, but it was an iffy affair at best.

The ZX80 ran an extremely simple version of the BASIC computer language, which consumed some of that precious 1K of RAM. It was slow, tiny, underpowered, and not really useful for much of anything other than learning how to program. To save memory, it wouldn’t “remember” what was on the screen – it would recalculate the screen output every frame. It could only do this when the CPU was idle, not doing anything else. This meant that when you pressed a button on the membrane keyboard or ran any calculations, the screen would blank out until it was done.

To save memory, it didn’t store all BASIC commands in their complete form. Instead, it tokenized the commands. You hit the “O” button and it would display the entire word “PRINT” on the screen. But in memory, by my understanding, it really just stored a single byte representing the BASIC instruction “PRINT.” While limiting, it was a pretty clever way of taking advantage of extremely limited memory space.
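The keyword-tokenizing trick is simple to sketch. The token values below are made up for illustration – they are not the ZX80’s actual token table:

```python
# A sketch of keyword tokenization: each BASIC keyword is stored as a
# single byte instead of its full text. Token values here are invented
# for illustration, not the ZX80's real token table.
TOKENS = {"PRINT": 0xF5, "GOTO": 0xEC, "IF": 0xEB}

def tokenize(line: str) -> bytes:
    out = bytearray()
    for word in line.split():
        if word in TOKENS:
            out.append(TOKENS[word])          # one byte per keyword
        else:
            out.extend(word.encode("ascii"))  # other text stored as-is
        out.append(ord(" "))
    return bytes(out)

line = 'PRINT "HELLO"'
print(len(line), len(tokenize(line)))  # the tokenized form is shorter
```

Every keyword shrinks from several characters to one byte – exactly the kind of saving that matters when the whole machine has 1K.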

Although I had the manual, which explained (kind of…) the BASIC language – or the subset offered by the ZX80 – my textbooks were really Creative Computing’s “BASIC Computer Games” and “More BASIC Computer Games.” You could type in the game programs directly from the book and then run them to play the games. Then you could modify the program yourself to change the game, which was always more fun.

My problem was that very few programs in the books were (A) small enough to fit on the ZX80, and (B) written using only the subset of BASIC supported by the ZX80. In the latter case, once I learned more about the language, I could figure out ways around the language subset restrictions, but the memory restrictions were another matter. Quite simply, only the tiniest, most useless little games would fit on my little computer. Even then, I had to cut some bits to get it all to fit. Since I was just entering the games for my own use, I could dispense with the prompt that said, “Enter your move (1-8):”, which would save me around 26 bytes right there.

There was only one (count ’em: one) action video game to my knowledge for the ZX80. It was a Space Invaders clone written in machine language that used some kind of a trick to alternate between reading input, processing game logic, and displaying the visuals on the screen, so you wouldn’t get the ‘screen blank’ problem (at least not as badly). I don’t know for sure, and I never played it. The dinky little game was something like $30 ($60 in today’s dollars). However, I was intrigued by this programming language called “machine code” or “machine language,” which sounded like it could do things that BASIC clearly could not. I had no idea that it was something fairly similar to those confusing books on programming I’d checked out of the library a few months earlier.

But in the meantime, I learned BASIC, and the fundamentals of computer programming, and learned to work under some pretty tight restrictions. I wrote tiny little games, and dreamed of having a better machine on which to make games that would rival my favorites in the arcade, or emulate that Colossal Cave Adventure game.

Meanwhile, I heard about other computers with no small amount of jealousy. The Apple II, or the Atari 800 (or even 400). Heck, even the TI-99/4A, TRS-80, and new VIC-20 were quite a bit beefier than my little Sinclair. A friend of mine had a cartridge for his Atari video game system that let you program in BASIC; it was even more underpowered than the ZX80, but at least it had sound and color. But all in all, my little Sinclair did have one decent use, and that was to learn the basics of programming (and BASIC). And I did. After a couple of months, I’d gone from knowing almost nothing about computers to doing just about all there was to do within the confines of BASIC on the ZX80.


Filed Under: A Game Dev's Story, Game Development, Retro - Comments: 4 Comments to Read



A New Legend Forms Around Legendary Adventure Game Designer

Posted by Rampant Coyote on February 10, 2012

So file this one away under “Adventure Games Are Dead and Other B.S. the Games Industry Tries to Sell Us.”

There’s really only one topic I could comment on today, and that is the amazing crowdfunding success experienced by Double Fine Productions. In case you missed it (it’s sort of the top story everywhere in the games-related biz right now), here’s the run-down. Tim Schafer, a game designer who’s known more for his games than for his name (he worked on Day of the Tentacle, the Monkey Island series, and was lead designer for Grim Fandango, Full Throttle, Psychonauts, Brutal Legend, etc.), decided that he wanted to do a good-old-fashioned point-and-click adventure game again with a small team from his studio. But to do that, he needed funding, and — well, the publishing world isn’t interested in giving (good) terms for that kind of thing anymore. ‘Cuz, you know, Adventure Games Are Dead. It’s been repeated often enough by marketing guys and many gaming “journalists” for over a decade, so it must be true.

There’s a little side-story out there about how Schafer complained in an interview that no publisher would fund a sequel to Psychonauts, to which Minecraft developer Markus “Notch” Persson offered to do just that, which he could pay for in something like one week’s worth of income from Minecraft‘s sales. But Schafer decided to try and “crowd-fund” a traditional adventure game instead, using the money not only to fund the game itself, but to fund an “open process” to document what really goes into making a game like this.

So could he do it? Could he get $400,000 in funding in just over a month through Kickstarter? Well, it was worth a try.

The results were a little better than expected. They hit the $400,000 goal in just under eight hours. After twenty-four hours, they’d blown past all of Kickstarter’s previous records, hitting over $1 million in funding with over 30,000 contributors.

Someone on reddit created this little image to the right to illustrate what happened.

So I guess they will be able to afford to make a little adventure game. Woot!

I also have little doubt that Notch’s name and cheerleading were just as important as the reputation of the team involved. And I enjoyed his own little artistic contribution to commemorate the event:

So what should we take away from this? Besides a totally awesome new point-and-click adventure game coming soon from Schafer & co?

Here are my thoughts:

Adventure Games Aren’t Dead: They are only “not as profitable as major AAA publishers would like.” But then, we indies have known that, and we’ve been enjoying new, decent-quality adventure games for frickin’ years now. I actually get a laugh whenever I read somewhere about how the genre is long-buried. It didn’t die, it just retired young and is partying it up on the beach in Acapulco. But it is super-cool to see the genre get big headlines again.

The Era of the Giant Middleman is Over: See the above picture by Notch. Now, this doesn’t mean that the behemoths are going to go extinct, or that we don’t need these guys at all. I mean, you only need look at the difference between sales figures for Games Sold On Steam versus Games Not Sold On Steam to realize that these guys aren’t going anywhere. But the giant, all-powerful gatekeeping middleman publisher / studio model is really an artifact of the 19th and 20th centuries, the entry-level optimizations made possible by the combination of technology and the industrial revolution. A historical blip while technology caught up with ideas. Alternative models are popping up weekly, and the traditional publisher-developer system which was the one true way to sell games for three decades is becoming just one of many.

Tim Schafer Can Do It, But You Probably Can’t.  Or as former Gamepro senior editor Sid Shuman comments, “Indies who see Kickstarter as an ATM are asking for trouble.”  Indie developer Dave Toulouse writes, “It’s not a recipe. It’s just the story of the week. Tomorrow we’ll all be back to doing the best we can with the resources we have. That’s the only true way to do things.” I don’t share Dave’s disappointment that the money didn’t get redistributed to fund lots of indie games – I would rather see a game succeed because of a truly impassioned community, and I’ve never been a fan of redistribution for the sake of “fairness.” I would like to see more opportunities for indies to get visibility and succeed on merits other than “luck,” however. While luck isn’t everything, there’s still too great of a dependency on it. The bottom line here is that this whole thing happened because of several factors: The right team with a proven track record coming up with the right “story” at the right time and getting viral attention. Luck was a factor, no doubt. But that’s not a repeatable process, and even far more modest expectations from Kickstarter by relatively unknown indies are likely to be met with grave disappointment. It is what it is. But for now, I’m happy to celebrate the successes of fellow game developers and the indie process. Like Minecraft, this is a special example of what is possible, not what is likely.

And the possibilities are endless.

The “One Size Fits All” Sales Model Is Dead. The bizarro thing that many Kickstarter projects have shown (I’m also thinking Rich Burlew’s Order of the Stick reprint funding, and more…) is that it almost seems that many people are more willing to contribute to a future project than to pay for an existing product. And it’s not limited to crowdfunding. I’m talking indie bundles, paid betas, DLC, Pay-What-You-Want, “freemium”, sponsorships, subscriptions, collectors’ editions, and everything else. It’s clear that the old way (which I’ve admittedly clung to with my own releases) of selling a product at a fixed price is often NOT the best way to go. It will always be an option, and a popular one, but its era of dominance is at an end. Indies need to be more creative. And yes, I mean me, too.  With a global economy (especially for digital downloads), customers and fans are just too friggin’ diverse in their tastes and means. While $25 for a “big” indie game in a niche genre is pretty reasonable for a middle-class American,  it’s far too expensive for many potential fans, and is leaving money on the table for a few others who would be happy to contribute more.

This is a great time to be a gamer. Seriously.  It’s the new frickin’ golden age as far as I am concerned. I am overwhelmed with games to play – both indie and mainstream – and time is a far more limiting factor than money. We’re still getting plenty of high-end AAA games with incredible graphics and streamlined gameplay; retro-gaming has made a pretty nice resurgence; and the indies are on fire. Life is good.

Have fun!


Filed Under: Adventure Games, Biz, Indie Evangelism - Comments: 13 Comments to Read


