The Little-Used Tools of Game AI

This week’s developer discussion on AiGameDev.com is introduced by Dave Mark from Intrinsic Algorithm. Let him know how many tools you use for your game AI and why; post a comment below or in the forums.

What? No hammer?

I’ve never been the type of guy who can spend all weekend in the garage or the workshop. I am far more likely to linger in the electronics department at the store than I am to sigh wistfully over the endless aisles of power tools. However, like many people, I remember being rather impressed by things such as the almost clichéd Swiss Army Knife. Examining one is an adventure of exploration. As you unfold each appendage from its slot, there is that moment of curiosity (“What could this be?”), followed by a moment of discovery (“Wow! It’s a corkscrew!”), followed by a sense of wonder (“That surgical knife could come in handy!”). It’s only later that a more sobering, pragmatic reality sets in. (“It would certainly suck to try to cut firewood with a 2-inch saw.”) Put into practice, the vast majority of people use one of the knives and maybe a screwdriver. The rest of the now overly bulky tool is relegated to the “gee whiz” effect wherein you pull out your massive knife and, while exchanging wise, serious nods with your camping buddies, state with grave certainty, “It’s nice to have these other tools around just in case I need them.”

Game AI seems to have a similar problem of late. Looking through web sites, books, and the various conferences such as GDC and AIIDE, there is an endless parade of esoteric, seemingly mystical techniques. As perpetual students in our rapidly-changing art, we read and attend with the reverent demeanor of an exploratory scientist. We soak up all the knowledge and ponder the applications and implications. We engage in heady, philosophical discussions with our peers. We exclaim our exuberance and proclaim our allegiance to new methodologies. And then, upon returning home to our individual, pragmatic realities, we resign ourselves to the relatively bland, yet utilitarian knife and screwdriver: Finite State Machines and Pathfinding.

And what of the other tools in the Swiss Army Knife of game AI? What about planning and fuzzy logic? The lofty towers of neural networks and genetic algorithms? Game theory and reasoning under uncertainty? Influence maps? Minimax plays a killer game of Tic-Tac-Toe, right? Flocking? We’ve all seen articles on flocking! Not being used? Wow… there sure are a lot of tools in this knife. We can all see places where they may come in handy. Granted, some of them may be like trying to cut firewood with a 2-inch saw, but aren’t some of them truly useful? So why don’t we use them in the real world of creating our pretend worlds, rather than simply pretending we are going to use them in the real world?


Game AI Roundup Week #19 2008:
7 Stories, 1 Quote, 2 Videos

Weekends at AiGameDev.com are dedicated to rounding up smart links from the web relating to artificial intelligence and game development. This was a quieter week, but you’ll still find a few great blog posts, articles and videos. Remember, there’s also lots of great content to be found in the forums here! (All you have to do is introduce yourself.) Also don’t forget the Twitter account for random thoughts!

This post is brought to you by Novack and Alex Champandard. If you have any news or tips for next week, be sure to email them in to editors at AiGameDev.com. Remember there’s a mini-blog over at news.AiGameDev.com (RSS) with game AI news from the web as it happens.


Common AI Challenges for Modern First-Person Shooters (Part 1, Video)

A few weeks ago, a promotional trailer was released for Brothers in Arms 2. It’s interesting from more than a marketing perspective because it reveals many challenges that should be familiar to developers working on first-person shooters. A big thanks goes to Remco Straatman for pointing out some of these issues and the original video. It only hit me a while later that this would make an awesome video blog post.

In this first part of my analysis, I look into destructible cover, in particular how it becomes much harder for the AI to handle when it’s simulated using a regular physics engine. When raycasts are too computationally expensive as a solution, a behavioral approach makes a good fallback. Watch the video below for more details; it’s 7.8 MB and runs 2:47.
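
To make the trade-off concrete, here’s a minimal Python sketch of what such a fallback might look like. Everything in it is my own assumption rather than anything shown in the video: the per-frame raycast budget, the destruction-event heuristic, and the function names are invented for illustration. The point is simply that the exact raycast and the cheap behavioral guess answer the same question at very different costs.

```python
import math
from dataclasses import dataclass, field

RAYCAST_BUDGET_PER_FRAME = 4   # assumed cap on exact physics queries per frame
DESTRUCTION_RADIUS = 2.0       # assumed: a destruction event this close invalidates cover

@dataclass
class Agent:
    position: tuple
    # World positions of destruction events the agent perceived recently.
    recent_destruction_events: list = field(default_factory=list)

def cover_still_valid(agent, cover_pos, threat_pos, raycast_blocked, budget):
    """Return True if cover at cover_pos still protects the agent from threat_pos."""
    if budget["remaining"] > 0:
        budget["remaining"] -= 1
        # Exact but expensive: ask the physics engine whether geometry still
        # blocks the line of fire from the threat to the cover position.
        return raycast_blocked(threat_pos, cover_pos)
    # Behavioral fallback: out of raycasts this frame, so reason from events.
    # If something was destroyed right next to the cover, assume it is gone.
    return all(math.dist(event, cover_pos) > DESTRUCTION_RADIUS
               for event in agent.recent_destruction_events)

# Example usage with a stub raycast standing in for a real physics engine call.
budget = {"remaining": RAYCAST_BUDGET_PER_FRAME}
soldier = Agent(position=(0.0, 0.0, 0.0),
                recent_destruction_events=[(10.0, 0.0, 1.0)])
print(cover_still_valid(soldier, (10.0, 0.0, 0.0), (30.0, 0.0, 0.0),
                        raycast_blocked=lambda a, b: True, budget=budget))
```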


Sharing the Sandbox:
Can We Improve on GTA’s Playmates?

For this week’s developer discussion on AiGameDev.com, Dave Mark looks into the recent best-seller Grand Theft Auto 4. Let him know how you think the AI could be improved by posting a comment below!

Let’s face it. There is only one game on the radar these past few weeks. The release of the latest entertainment excursion into criminal mischief has gamers and non-gamers alike a-buzz with the usual dichotomous din of praise and vilification. From the seat of a game industry professional, releases such as these are interesting in a different sort of way. As the avalanche of reviews, both paid and amateur, comes rolling in, game developers of all stripes start digging through the fluff and the furor. We are always searching for the sometimes obscure fibers of opinion that almost take on the role of forensic evidence. And to what end are we perusing this cavalcade of critique? To find a clue that may lead us closer to the holy grail that is “customer satisfaction” in our industry — what do people either worship or abhor about the latest and greatest title on the shelves? Put simply… what do we need to focus on?


Game AI Roundup Week #18 2008:
10 Stories, 1 Video, 3 Jobs, 3 Quotes

Weekends at AiGameDev.com are dedicated to rounding up smart links from the web relating to artificial intelligence and game development. There are even more jobs this week, plus blog posts and some great papers. Also don’t forget the Twitter account for random thoughts!

This post is brought to you by Novack and Alex Champandard. If you have any news or tips for next week, be sure to email them in to editors at AiGameDev.com. Remember there’s a mini-blog over at news.AiGameDev.com (RSS) with game AI news from the web as it happens.


Chasing Strawmen Out of Game AI Research

The call for papers for the AIIDE ‘08 conference has just closed. This year I’m on the Program Committee, so today I got my hands on four papers to review based on my selected topics. There are some pretty great research projects this year…

However, I find the opening paragraphs that describe the motivation of these papers very frustrating. While many of the arguments presented may have been valid quite a few years ago, technology is moving very quickly these days. Most of the cited papers written by people in industry are already a few years out of date and, as such, don’t really represent a valid basis for research anymore.

So, until I write a paper documenting the typical techniques used in industry these days, here’s an article dismissing the four most common fallacies that you shouldn’t base your arguments on.


Automated AI Testing:
Unraveling the Combinatorial Explosion

Dave Mark is back this week to introduce AiGameDev.com’s regular developer discussion. When he’s not working on lucrative database contracts, he spends his time developing AI over at Intrinsic Algorithm. Post a comment below and let him know how you think testing game AI code compares to SQL statements!

When I started to write this column over a week ago, I was in the middle of a minor crunch-worthy catastrophe on a project that I was doing for a client. I’m working on a database project for a local retailer, and we are in the process of trying to export a very large amount of data to his clients. (This is all relevant, stick with me here!) When we dumped the more than 32,000 result rows from the database, my client decided to quickly check whether a few select items were included. Much to his consternation, he couldn’t find the items that he was looking for. In the process of exploring why, I noticed that some of the data that was in there seemed like it might not be correct. My client looked at my samples and confirmed that, indeed, they were showing errors in dollar amounts that were subtle but significant. And this is where my weekend went to heck… and why Alex covered for me in this column last week. (Again, hang in there, this description is more than an excuse for not being here!)

Ironically, the false starts that I made on this column were starting to outline the theme of the column — that of automated testing. I didn’t think about the connection until later in the week. It came to mind when my client asked me, “and how do we know that this new run is completely correct?” My short answer truly had to be, “unless we are willing to hand-check every one of the 32,000 rows of data, we can’t be sure.”

What it came down to was that we had to trust all the layers of data sources, queries, and algorithms — some of which were not even in our control. I had to check each step in the process, confirm that it was doing what it was supposed to do, check a quick sample of what it was spitting out, and move on to the next. I had to proceed under the tenuous premise that, if each step along the way was correct, the end result would be as well. As for that black-box data we were getting from elsewhere? Well, we had to trust that the programmer responsible for that had done his due diligence as well. Unless we wanted to check all 32,000 rows by hand and eye. And how do you tell at a glance if the numbers that spill out of a formula are correct or if they are off by some amount? Wow… too bad we didn’t have a way of automating the testing of the data! Or did we?
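
As it happens, the kind of check I was wishing for is not hard to sketch. The snippet below is a minimal, hypothetical Python illustration rather than the actual project code: the column names, the pricing formula, and the one-cent tolerance are all made up for the example. The idea is simply to recompute each exported value from its source inputs and flag any row that disagrees, so nobody has to read 32,000 rows by hand.

```python
from decimal import Decimal

TOLERANCE = Decimal("0.01")  # allow a cent of rounding slack

def expected_price(row):
    # Stand-in for the real pricing rule; the actual formula lives in the
    # export queries and is invented here purely for illustration.
    return (Decimal(row["base_price"]) * (1 - Decimal(row["discount"]))).quantize(Decimal("0.01"))

def audit_export(rows):
    """Yield (row, expected, actual) for every exported row whose price is off."""
    for row in rows:
        actual = Decimal(row["export_price"])
        expected = expected_price(row)
        if abs(actual - expected) > TOLERANCE:
            yield row, expected, actual

# Usage: an empty list means the whole dump checked out without hand-checking.
failures = list(audit_export([
    {"base_price": "100.00", "discount": "0.10", "export_price": "90.00"},
    {"base_price": "59.99", "discount": "0.25", "export_price": "45.10"},  # deliberately wrong
]))
print(failures)
```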


Game AI Character