Tales of the Rampant Coyote

Adventures in Indie Gaming!

Using Genetic Algorithms In Games

Posted by Rampant Coyote on April 18, 2012

Because my upcoming post on my experiences developing Jet Moto is getting kinda long, I thought I’d break out this story for your entertainment. Programmers will especially get a kick out of it.

An interesting aspect of the development of Jet Moto was that we did not do anything like the traditional racing game approach. Our tracks were not on a spline. The AI were not simply moving obstacles acting as multiple pacers. Our AI was running a real race. It was possible to win just by having the AI do poorly. Our tracks could be extremely open-ended, only requiring that the bikes hit the checkpoints in order – like a rally race – rather than confining them to the track.  We made it hard for ourselves in the short term, but it was also what made us stand out from most of the other racing games at the time that did things the “right” way.

I was not looking forward to figuring out optimal paths for the various bikes (because they all had different physical characteristics, so an optimal path for one would not resemble the optimal path for another). And I’d read up on some experiments with genetic algorithms (GAs). GAs are sort of an evolutionary approach to creating AI – you start out with several sets of characteristics or instructions, plus a ‘fitness algorithm’ to score them on their level of success at completing a task. You treat the characteristics / instructions as DNA, and you create a means of ‘mutating’ these values from generation to generation.

So what you do (the simple version) is this: You create the first generation of AI pretty much randomly. Dump them in their environment. Have them do their thing. When they are done, score them. Have the best scorers of the generation ‘mate’ – creating children that are some mix of the two parents’ attributes. While this mixture is random, you’ll also want to introduce some random mutations into the next generation, so they get some attributes that come from neither parent. Then have a face-off between the new generation (and possibly their parents as well). Keep doing that for as many generations as it takes, and see what kind of ‘optimal’ results you get at the end.
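To make that loop concrete, here is a rough sketch in Python. The genome here is just a list of placeholder numbers and the fitness function is a stand-in; none of this is the actual Jet Moto code.

```python
import random

GENOME_LENGTH = 8        # number of tunable values per AI bike
POPULATION_SIZE = 20
MUTATION_RATE = 0.1      # chance that any one gene gets a random tweak
GENERATIONS = 50

def random_genome():
    # First generation: pretty much random.
    return [random.uniform(0.0, 1.0) for _ in range(GENOME_LENGTH)]

def crossover(mom, dad):
    # Child takes each gene from one parent or the other at random.
    return [random.choice(pair) for pair in zip(mom, dad)]

def mutate(genome):
    # Occasionally swap in a value that comes from neither parent.
    return [random.uniform(0.0, 1.0) if random.random() < MUTATION_RATE else g
            for g in genome]

def fitness(genome):
    # Placeholder. For the racing case this would run the sim for a fixed
    # time and score by progress along the course (the biggest lead wins).
    return sum(genome)

def evolve():
    population = [random_genome() for _ in range(POPULATION_SIZE)]
    for _ in range(GENERATIONS):
        # Score everyone and keep the best half as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[:POPULATION_SIZE // 2]
        # Breed children to refill the population; parents survive too.
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(POPULATION_SIZE - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(best)
```

Swap the placeholder fitness function for whatever simulation you can run unattended, and the rest of the loop stays the same.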

I did this for Jet Moto one day, when it was still fairly early in development. The first generation of bikes didn’t even come close to completing the race. I think I’d run them for a maximum of three minutes. Scoring was easy for a racing game – the bikes with the biggest lead won out. Once I was satisfied the test was working properly, I went home for the night. I came in the next day, excited to see what had happened. So were my coworkers.

Much to our amusement, the AI bikes had found – and exploited – a collision hole in our track. So they were breaking through the wall, cutting across the middle, going a little bit backwards to hit the checkpoint, turning around, and then finishing the lap and going on to the next – with a few weird quirks or unnecessary movements still in evidence.

The team thought I’d come up with an amazing way to automatically test for bugs. I explained to them that this was no brute-force way of exhaustively finding problems… a future test might never find that hole… but it had proved a fun little experiment.

Eventually, I dumped the idea and used a more traditional approach to calculate paths through the game. But I still like the concept. Here are some thoughts if you want to experiment with the idea.

Genetic Algorithms are generally best for solving a single problem. As is true in real life, ‘fitness’ is very dependent upon the situation. When the situation changes, the optimal approach does, too. Games are often pretty dynamic, frustrating the AI’s best efforts at following the optimal path and leaving it unable to adapt. That’s a weakness of GAs. It’s something that makes GAs – as far as I can tell – pretty useless for a game like Chess.

But for some other kinds of games, GAs may still be useful. Standard heuristic code can be made to help the AI get ‘back on track’ as best as possible in these situations. I also believe that adaptation strategies can be included in the “DNA code” to make them more robust. This is actually an area I would be really interested in exploring in the future.
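One hypothetical way to encode that: make the recovery behavior itself part of the genome, rather than only evolving a fixed racing line. The field names below are made up for illustration.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical genome that evolves reactions, not just a fixed racing line.
# The "recovery" genes decide how the bike behaves when it gets knocked off
# its planned route, so the evolved result isn't one brittle optimal path.
@dataclass
class RacerGenome:
    waypoint_offsets: List[float]   # preferred line through each checkpoint
    throttle_bias: float            # how aggressively to use the throttle
    recovery_steering: float        # adaptation: steering strength when off-line
    recovery_braking: float         # adaptation: how much to slow down to re-aim
```

During the fitness runs you could deliberately bump the bikes around, so genomes with poor recovery genes score badly even if their ideal line is great.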

Another interesting experiment is in how the ‘fitness algorithm’ scores the AI. In a game, you don’t necessarily want the AI to be really good at its job – you want it to be entertaining to the player. But can a fitness algorithm incorporate this factor into its logic? Can you identify the features in advance that make an AI more fun? Can you write a fitness algorithm that rates an actor higher if they display a critical weakness in their otherwise challenging strategy?
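As a sketch of what that might look like (every metric and weight here is hypothetical), a fitness function could reward close finishes and a visible, learnable weakness rather than raw speed:

```python
def entertainment_fitness(result):
    # 'result' is a hypothetical record of one simulated race against a
    # reference player: how close the finish was, whether the AI showed an
    # exploitable weakness, and how often it did something that reads as
    # cheating or erratic to a human. All weights are guesses.
    score = 0.0

    # Reward staying close to the player rather than simply winning big.
    score += max(0.0, 10.0 - abs(result["finish_gap_seconds"]))

    # Reward having at least one identifiable weakness the player can learn.
    if result["exploitable_mistakes"] > 0:
        score += 5.0

    # Punish behavior that looks like cheating or flailing.
    score -= 2.0 * result["unnatural_moves"]

    return score
```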

Could this idea be extended to automatically create not just active AI characters, but environments as well? Can you create a requirement that a game level will challenge skills X and Y of a player and exhibit features A and B, and then set it loose overnight to create an optimally interesting level?
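For level generation, one hypothetical fitness function treats those requirements as hard constraints and everything else as soft scoring. The metrics below are stand-ins for whatever analysis or automated playtest you could actually run on a candidate level:

```python
MIN_JUMP_CHALLENGE = 0.5   # hypothetical threshold for "skill X"
TARGET_DIFFICULTY = 0.7    # hypothetical target for overall difficulty

def level_fitness(metrics):
    # 'metrics' is a hypothetical dict of measurements taken from a candidate
    # level, e.g. by an automated playtest bot:
    #   jump_challenge, has_shortcut, route_variety, difficulty

    # Hard requirements first ("must challenge skill X, must exhibit feature A"):
    if metrics["jump_challenge"] < MIN_JUMP_CHALLENGE:
        return 0.0
    if not metrics["has_shortcut"]:
        return 0.0

    # Softer goals: reward variety and a difficulty close to the target.
    score = metrics["route_variety"]
    score += 1.0 - abs(metrics["difficulty"] - TARGET_DIFFICULTY)
    return score
```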

Naturally, in all of these cases, it may be best if the results are considered a ‘rough draft’ for a human developer to then come along and polish up. And you may have to construct your gameplay deliberately to take advantage of the use of GAs (and I’m typically wary of building gameplay around a gimmick like that… but sometimes it does work pretty well).

Yet another twist is possible thanks to online gaming, with an existing base of players: The fitness algorithm could take player feedback into consideration. You could have self-modifying game elements that tweak the gameplay daily based on player response metrics.
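A feedback-driven fitness function might look something like this. The telemetry fields and weights are invented for the example, and you would want a minimum sample size before trusting any of it:

```python
def live_fitness(variant_stats):
    # 'variant_stats' is a hypothetical daily telemetry summary for one
    # evolved gameplay variant: sessions, completion_rate,
    # average_session_minutes, quit_rate. The weights are invented.
    if variant_stats["sessions"] < 100:
        # Not enough players saw this variant yet; don't let noise drive evolution.
        return None

    score = 0.0
    score += 10.0 * variant_stats["completion_rate"]
    score += variant_stats["average_session_minutes"]
    score -= 20.0 * variant_stats["quit_rate"]
    return score
```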

Fun stuff to noodle on, doncha think?

 


Filed Under: Design, Game Development, Programming - Comments: 6 Comments to Read



  • John Evans said,

    You should read about Galactic Arms Race: http://gar.eecs.ucf.edu/

  • Felix Pleșoianu said,

    In the case of an AI that is too strong, I suspect you could time a great number of players, figure out how good you can expect them to be, and change the AI’s fitness function to eliminate AIs that are too strong along with those that are too weak. You’ll want a whole range of AI opponents anyway.

    At least with this kind of game, the fitness function is obvious. That doesn’t happen in too many cases.

  • Anon said,

    > You’ll want a whole range of AI opponents anyway.

    Very important – especially in a racing game where you don’t want to follow a pack of seven “drivers” with the same performance (ignoring different car performances for now).
    Make them all idiot drivers and you won’t have fun, either.

    Ideally, you want to easily crush the slower and “unskilled” cars and then battle the top two or three cars to get an entertaining race.

    Game AI is a very interesting subject, actually, and it’s a pity that many games use scripting even when they don’t need to. For many things scripting is usable or even good (advancing the plot, for example), but a more universal approach with clever AI that adapts to its environment can go a long way. Some first-person shooters are really good examples of this, especially older ones like the first Half-Life and the first Far Cry.

    The challenge is not to create the most powerful AI but the most entertaining one – and behaving realistically (with driver errors for example) is one way to make the game more fun.

  • jzoeller said,

    GAs and neural nets are often combined, and they do pretty well for certain applications.

  • slenkar said,

    what kind of genes did the bike riders have?

    Like how they take corners? or simpler stuff?

  • Resources Update 3rd May 2012 « Teach Games said,

    […] discussion on using Genetic Algorithms in computer games. Genetic algorithms are used to generate solutions to problems, by repeatedly […]
