Tuesday, December 23, 2008

Personalities as Problem Solving Aides

Last post I talked about personalities in computing, but I only briefly touched on why we might want personalities in computer programs or robots.

Aside from the obvious applications where a personality is necessary - robot pets, care bots for the disabled, sex bots, etc - the idea of putting a personality into the equation doesn't seem like a good one.

The most obvious example is Clippy. The basic idea of Clippy was that Word was getting rather complicated, and MS wanted some way to have a human-feeling tutorial system. A "smart guide".

Of course, it was a travesty, and is a sterling example of why NOT to make your programs have a personality. After all, if the solution or path is clear, the user doesn't need someone popping up and getting in their way. And, usually, if the solution or path isn't clear, it's easier to do a keyword search rather than clumsily interact with some primitive personality program.

But I think there are situations in which personable programs are required. A personable program may not have an overt, anthropomorphic personality, but there are times when even going that far might be useful.

First off, the basic idea of a personable program is that it understands the user better than current programs do. As time goes on and our need for context-sensitive data increases, our programs are going to have to get better at handling that. One example is a music bot that would pull music off the internet for you to listen to. Right now, the music bot software that's out there can be customized to your overall preferences, but it doesn't understand that sometimes you feel like listening to reggae, sometimes pop, sometimes filk. Humans have moods, and the current generation of software is only dimly aware of that fact.

A more advanced music bot would try to track your moods, trying to determine if, for example, you generally like funk in the morning and rap in the evening. A more physically present robot would probably try to determine your mood by scanning you - facial expression, body temperature, what clothes you decided to wear, whatever.
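To make the idea concrete, here's a minimal sketch of that kind of mood-tracking music bot. Everything here is invented for illustration - the class name, the time-of-day buckets, the "tally what you didn't skip" heuristic - it's just the simplest possible version of "try to determine if you generally like funk in the morning and rap in the evening":

```python
from collections import Counter, defaultdict

class MoodyJukebox:
    """Toy sketch of a 'personable' music bot: it tallies which genres
    you actually let play in each part of the day, then guesses from
    that history. All names and structure here are illustrative."""

    def __init__(self):
        # time-of-day bucket -> tally of genres the user didn't skip
        self.history = defaultdict(Counter)

    def bucket(self, hour):
        if hour < 12:
            return "morning"
        if hour < 18:
            return "afternoon"
        return "evening"

    def listened(self, hour, genre):
        """Record that a song of this genre played through at this hour."""
        self.history[self.bucket(hour)][genre] += 1

    def suggest(self, hour):
        """Guess a genre for right now based on past behavior."""
        tallies = self.history[self.bucket(hour)]
        if not tallies:
            return None  # no history yet: no opinion, no personality
        return tallies.most_common(1)[0][0]

bot = MoodyJukebox()
bot.listened(8, "funk")
bot.listened(9, "funk")
bot.listened(8, "reggae")
bot.listened(20, "rap")
print(bot.suggest(8))   # funk
print(bot.suggest(21))  # rap
```

Even something this crude exhibits the behavior described below: its guesses will sometimes look inexplicable from the outside, which is exactly where the "personality" impression comes from.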

At this point the software can't be said to have an anthropomorphic personality. There's no pair of glasses with a mouth popping up and saying "It looks like you're trying to listen to Aerosmith. Would you like some help?"

But, despite that, you can see that the program is developing some rudimentary personality. In trying to predict your personal preferences, it will show personality, at least as far as you are concerned. Today it played military marches all day. Why? Yesterday it seemed to like the B52s. How interesting. It's got a personality, although not a clear or deep one.

Now the issue is that you've got this thing with a personality, but your interactions with it are extremely limited, largely consisting of fast-forwarding through songs. So, when it gets some weird idea in its head, you don't really have any way to (A) figure out why it's doing that or (B) get it to stop doing that.

On the other hand, if we do have a pair of glasses with a mouth pop up, we can see why it thinks what it thinks. Obviously, a cutesy icon is not what I'd choose. I'd probably either go for a blinking cursor or a sexy librarian, but we'll stay neutral for the moment.

You are fast-forwarding through songs it really thought you would like. Instead of flailing around randomly to try to determine if you're in a shitty mood or have a guest or something, it can gently pop up and say "Hey, what's the deal? You in a bad mood or something?"

And you can type back - "entertaining guests" or something. The software will understand, or at least fake it, and try to find a mode of music that fits your interests better.
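That exchange is really just a context override layered on top of the learned model. A toy sketch, with the hint-to-mode mapping and the three-skips threshold both made up for illustration:

```python
# Toy sketch of the "hey, what's the deal?" exchange: a typed hint
# temporarily overrides the bot's learned guess. All names, the canned
# hint mapping, and the skip threshold are hypothetical.

LEARNED_GUESS = "military marches"  # whatever the model predicts today

# canned mapping from user explanations to music modes (illustrative)
HINT_MODES = {
    "entertaining guests": "background jazz",
    "bad mood": "something mellow",
}

def pick_music(skips_in_a_row, user_hint=None):
    """If the user keeps skipping, ask why; if they explain, believe them."""
    if user_hint in HINT_MODES:
        return HINT_MODES[user_hint]
    if skips_in_a_row >= 3:
        return "ASK: Hey, what's the deal? You in a bad mood or something?"
    return LEARNED_GUESS

print(pick_music(0))                         # trusts the learned model
print(pick_music(3))                         # too many skips: asks
print(pick_music(3, "entertaining guests"))  # user explained: adapts
```

The point of the avatar, in this framing, is that it gives the user a natural channel for supplying `user_hint` instead of flailing at the fast-forward button.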


Now, at this stage, I've only covered about a quarter of the reason that software may require personalities.

The other half of the first half is that, as time goes on, this kind of contextual data feeding will become more and more critical and overwhelming. What sounds like a cute little semi-feature will become much more necessary when you're, say, trawling through Twitter, Flickr, YouTube, and whatever comes out next trying to siphon out today's trends.

The second half happens on the back end - the side that doesn't face you. The programs require a personality because the data they'll be filtering - and how they filter it - is based on personalities.

If you don't have a personality, how can you form a meaningful opinion on whether a cute cat video is fun or not? On whether your friend (another piece of software) would like to hear this news from an earthquake-destroyed city? On this insider detail about a specific game developer...

As the volume of our media increases, we'll see a corresponding increase in the number of agents (programs) trawling through it, trying to make sense of it. Because the media is fundamentally based on humans, it's easiest to judge if you have some semblance of human in you. A personality.

This is especially true when it comes to creating and navigating the semantic net that will arise from all this data filtering...

So, in long, I think that we're going to see a rise in personality-filled programs because we're going to see a rise in the number of programs that have to interact with personalities.

Saturday, December 20, 2008

For Love of Robots

I've been thinking about Nintendogs, and Tamagotchi, and Animal Crossing - all these games which feature the player nurturing some kind of pseudo-realtime entities.

A lot of people really like these games. But that's not terribly surprising to me. What is terribly surprising to me is the reason they like these games. And one of those reasons is the same surprising reason that people like The Sims, even though a lot of people who like The Sims don't like Nintendogs and vice versa.

When you talk to these people, they talk about how their dogs or citizens or whatever other characters there are feel. They enjoy "keeping their Nintendog happy", and they notice that "one of my people has a grudge against another".

In regards to how I feel about it, maybe I dive a bit too deeply, a bit too quickly. My drive is not to figure out how these characters are feeling, it's to figure out how they feel things. So, when I dive into Nintendogs and The Sims, I am rapidly disappointed by the shallow simulation and brutally predictable responses. But the satisfied players either don't notice or don't care about that sort of thing.

I think it might be a race between how quickly a character can establish themselves in your mind and how quickly you can tear their mind apart. The question in my mind is whether the characters could establish themselves even after I've figured them out. What I mean is this: I stop playing Nintendogs and The Sims because I figure them out, so I get bored and close up shop. But if I was forced to play them every day for a month, would I grow to have personal emotions about the characters?

I think I would. They would probably be hate, though, because it'd be a real bother to have to play the same boring game day after day. But... could it be another emotion? Could I grow to like a character whose actions and emotions are painfully transparent?

Well, research suggests yes, but the dynamics are a bit more difficult. In order to make it so that I like the character, the character has to serve some useful purpose in my life. That purpose could be almost anything, but it can't be "to waste your time".

The problem is that software isn't really advanced enough to do useful things on a personable, semi-automated level. For example, we could theoretically make our calendar-management software a character - a virtual secretary - but it turns out that interacting with our virtual secretary is more difficult than simply filling out the raw calendar ourselves. So, even though she theoretically exists to help us out, in actuality she just wastes our time. Example: Clippy.

But I think the solution can't be applied because the problem doesn't exist yet. Right now, there aren't many applications that require a "personable heuristic", so there aren't many spaces to put one.

There are a lot of hints of this kind of thing on the horizon, though. One example is the elderly care robots you might see in Japan. These robots are pretty bad, not due to any design problems, but simply because interacting with the real world is quite difficult and our techniques aren't quite there yet. The end result is that most of these robots are close to useless. We can imagine them growing more capable over time, however, and it stands to reason that they will get more and more common until they become ubiquitous.

Another possible hardware example is prosthetics, or meta-prosthetics. For example, if you're deaf, I can see a personable piece of software detecting all the sounds around you and regurgitating the important ones as text. It would need to be personable because what sounds any given person wants to be alerted to at any given time will change, as will the "expression" they should be reported in.

A prosthetic arm is fairly straightforward and probably wouldn't require a personality of any sort. But what about larger meta-prosthetics such as cars, houses, and computers as they become ever more closely linked to us? It makes sense to give them a personality that can not only adapt to the mood of the owner and situation, but can also express its own "mood" to easily reveal the very complex details of the system's state.

Pure software is also an option. Right now it doesn't seem to make any sense to have a web browser with a personality: we've restricted our use of the web to that which Google can provide us. However, even in that limited term, Google's search engine attempts to adapt to our requests and even do some contextual analysis. In essence, it's a primitive version of personable software.

Is there any reason to think this will do anything besides advance? Let's look at a version that could be made today:

What if we had a Twitter aide? A virtual character who exists to feed us twitters. By knowing the sorts of things, people, and places we're interested in, he could bring us relevant twitters and, judging our responses to them, give us more twitters along the same line or even send us out to web pages with more information.

Moreover, such an aide could "time-compress" Twitter. For example, if I want to watch someone's Twitter, but a lot of it is crap, I could have the character give me a summary, or at least filter out the crap.

Right now all this stuff can be done manually, of course, but the point is to give you a hint of what might come in the future. The amount of data spooling out of Twitter is a microscopic fraction of the amount of data that will spool out of the social networks of tomorrow, and the idea that you'll spend time manually going through all those entries is silly.
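For flavor, here's a minimal sketch of what the filtering half of that Twitter aide might look like. The interest list and the word-overlap scoring rule are invented for illustration - the real thing would obviously need something far smarter - but it shows the basic shape of "judge each post against what you know about me":

```python
# A minimal sketch of the "Twitter aide": score each post against the
# user's known interests and pass along only the ones worth their time.
# The interest set and the scoring rule are hypothetical.

INTERESTS = {"robots", "game", "design"}

def score(post):
    """Crude relevance: how many known interests does the post mention?"""
    words = set(post.lower().split())
    return len(words & INTERESTS)

def filter_feed(posts, threshold=1):
    """Keep posts mentioning at least `threshold` known interests,
    best matches first - the aide's crude notion of 'relevant'."""
    kept = [(score(p), p) for p in posts if score(p) >= threshold]
    kept.sort(key=lambda t: -t[0])
    return [p for _, p in kept]

feed = [
    "just ate a sandwich lol",
    "new robots in game design panel today",
    "thoughts on game balance",
]
print(filter_feed(feed))  # sandwich post gets filtered out
```

Scale that up a few million posts and add the mood-tracking from earlier, and you get the back-end filtering problem the rest of this post is about.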

"But, Craig, why would these entities need a personality? Sure, I understand for things like pets and nursemaids, but why would a super-Twitter aggregator need a personality?"

Well, I think that the next tier of software is going to have to have a personality because it will allow the users to psychologically deal with the complexity of the software. That was the idea behind Clippy, after all: create a simple interface to the complicated parts of Microsoft Word.

Microsoft Word isn't complex enough to require that kind of assistance, but high-volume data filtering very well may be. As users, we have to "understand" why our entity is bringing us this particular data, and we have to not get upset by the communication difficulties we're going to have with these entities. Both of these things are easily accomplished by a carefully chosen avatar, and that's quite apart from the fact that our software entity (whether strictly software or partially hardware) will need to empathize with us and our moods.

In some regards, this can be seen as depressing: oh, old people cared for by soul-less ro-bots instead of hu-mans. People making friends with ro-bots instead of hu-mans. Pretty sad! Or is it?

I think it's a cultural artifact. I think that once we're there, we'll find it's not sad at all.

ANYWAY, I think I'll wrap up there.

What are your opinions?

Friday, December 19, 2008

Dead Space

So, I wasn't planning on reviewing Dead Space, since it's... not really very interesting. It's basically System Shock II minus Shodan, psychic powers, and hacking. IE, a random repairman against The Many.

I was just gonna let it slide peacefully into oblivion but, but...

Then I met the Asteroid Shooting Minigame. A mandatory minigame where they put you in a seat and make you play TIE Fighter. Not the later ones. The original one. On the Apple II. The one that isn't even listed in Wikipedia, presumably because it was someone's basement hack.

I always hated that game, and I have not improved any with experience. So I've tried to beat this dumb, infuriating, pointless thing several times. Each time I think to myself, "IF I WANTED TO PLAY TIE FIGHTER, I'D TIME TRAVEL BACK TO 1988! SHUT UP AND BE A SHOOTER!"

But every time I die. And every time, I get a little further. Oh, good, maybe I'll eventually beat it!

Except your captain-type-dude is sitting in your ear the whole time. "Almost got it! Just a little more!" "You said that five minutes ago, dipshit, what's the point of lying to me? Just to fool me into thinking maybe I'll win this time, maybe I'll hold out long enough on this idiotic, sub-par minigame designed by brain-damaged, idiot monkey-men and implemented by sadistic, gibbering idiots and playtested, evidently, by savants with astonishingly good control over the MOST IRRITATING MOTIONS ON THE CONTROLLER? We'll call them idiot savants, just to keep with the theme."

Yeah, I'm really enjoying it.

So the game went from being "decent" to being "totally shitty" in one fell swoop.

Reading the walkthroughs, I find that not only does nobody have any useful suggestions aside from "not sucking", but there's ANOTHER ONE OF THEM LATER ON.

See, this kind of shit is just bad game design. The idea here is that they mix it up a bit, you know? Give you a break from the regular gameplay. Maybe the game needs it: the regular gameplay consists almost entirely of walking around slowly, then freezing and dismembering zombies. It's not exactly rapid-fire. The minigame certainly is.

But you know what? Mandatory minigames are a sign that your game design is fundamentally flawed. Doesn't matter what they are - quicktime sequences, turret fighting sequences, PRESS A REALLY QUICK sequences... they're all a sign of shitty basic play being desperately propped up by other shitty play.

You can put minigames in, sure. System Shock II, which Dead Space obviously wanted to be, had a hacking minigame. I'm sure it irritated some people. But you know what?


Man, I go on and on about weird, advanced little theories about game design, but then I go play a so-called triple-A game and I find they need BASIC DESIGN LESSONS.

I can't imagine the designers were really this bad. All I can think of is that they had a boss breathing down their neck and two days to do something. Because it's really bad. Ugh.


The funny thing is that every other review on the planet seems to have loved the game. Not only did they not even notice this minigame, they thought the game itself was better than I think it is. This is probably the most negative review written about the game, but even before I got into this dumbass minigame, I didn't consider the game to be so great.

Maybe I'm spoiled by the fact that I'm kind of a scifi-survival-horror specialist. All these people comparing Resident Evil to Dead Space. Nooo, you did NOT. No wonder you think it's good. Maybe you should play System Shock II again. Or, hell, Shadowgrounds is scarier than this is.

This artificially crippling the camera crap? It doesn't make the game scarier to me. At all. More responsive - even eagle-eye - cameras work just fine because in survival horror, a big part of sustaining the scare is in maneuvering. And there isn't any in RE4 OR in Dead Space.

It's just someone walking around shooting zombies.

And, you know, playing painfully retarded minigames.

Tuesday, December 16, 2008

Condition RED Line!

Stolen more or less (more less and less more, I guess) verbatim from the MBTA here in Boston:

"Hi, this is Dan Grabauskas, general manager of the MBTA. Safety is our number one concern on the T.

"As our eyes and ears in the system, it's more important than ever for you to keep alert for suspicious behavior and activities. Even though there has not been a significant terrorist attack on a bus, train, or subway in America in our lifetime, even though Boston has never been the target of a significant terrorist attack, and even though there is no reason to think that we will be targeted any time soon, we're relying on you to report any suspicious activities, such as people with funny hats.

"Remember: in these dark times where literally ones of Americans are being murdered by foreigners every day, occasionally within two thousand miles of our borders, paranoia is a virtue and a welcome distraction. So, see something, say something: it's better to wrongfully arrest ten innocent civilians with dark skin than to let one person remember that they are safer in Boston than virtually anywhere else on the planet.

"Please stay tuned for another, back-to-back announcement on the topic of unfounded paranoia, and remember: watch that dirty rotten foreigner two seats down LIKE A HAWK."

Sunday, December 14, 2008

Dangerous Fabricators

I happened to watch this today, only a day after writing this, and the two overlap.

I always get irritated at the beginning of these kinds of talks. It's so easy for people to get worked up about the potential horrors that theoretically await us. Perhaps unsurprisingly, Bill Joy isn't as blindly reactive as many of these kinds of people, but I thought I'd take the moment to give MY opinion on the matter.

The basic concern is that as individuals gain more capability to create higher-technology devices, someone will do something horrible. For example, once we have home printers for printing life forms (viruses and bacteria first), what's to stop someone from printing up some new superdisease, causing an epidemic, and killing a billion people? The home printers for printing life forms are not scifi, they're... ten years away. Twenty at the most. Then we'll all die OMG!

Welllllll, the answer isn't easy, mostly because the question presumes that false things are true. It supposes an incorrect social dynamic: it is in error about the way human minds work. I suppose you could call it a misunderstanding of basic memetics, except that basic memetics doesn't exist. It ALSO misunderstands the nature of science in a fundamental way. These two misunderstandings radically alter the dynamic of this kind of terrorism.

One thing that isn't an answer that I need to address is regulation. Regulation will not work when you're regulating home use. If we can do something in our house, it cannot be regulated without destroying the society you're trying to protect. Once I have a DNA printer, it's impossible to stop me, impossible to catch me, short of omnipresent monitoring from an insanely overpowered government.

Nothing less will accomplish anything, as you can clearly see from the explosion of music, video, and more flat-out illegal things that is saturating modern computers. These things - especially toxic data such as child porn - are controlled by law. Stringently. But they still range somewhere between available and omnipresent, especially if you have access to gray networks (such as Freenet) or can speak multiple languages (to search outside your government's jurisdiction).

Some people are happy to settle for omnipresent monitoring - a total loss of privacy - out of the stark fear that someday some asshole with a DNA printer will kill off half the world's population. Those people, as I mentioned, are operating under at least two false premises: they misunderstand human dynamics and they misunderstand scientific dynamics.

I'll cover the scientific dynamics first.

It's forty years in the future. I'm at home with my printer and, having been spurned by my erobot, I'm planning to destroy the world with a particularly clever little microbe that makes people explode violently. Like in video games.

I release it into the wild. What happens?

Well, if I released it TODAY, man, I'd kill off half the world. It would be aweso... ful. Yeah. Awful.

But I'm not releasing it today, I'm releasing it forty years from now. When the technology has been developed to allow a home user to build this sort of thing.

I can't predict precisely what the defense will be, but I know there will be defenses, because we always develop defenses in line with the technology. Sometimes those are nontechnical (such as the very political defenses against atomic weapons), but the majority of times they evolve as sister technologies or practices and grow to a level where we stop even considering the original technology a threat.

For example, a government worker can easily email billions of dollars of secrets to China and get paid to an anonymous account. Nobody the wiser. But it's not something people run around screaming about.

Because we've evolved defenses against it. Some are technical - email monitoring from secure sites, security clearance requirements, etc - and some are political. I don't really know what those are, but given that I could get around the technical requirements in half an hour, I presume they exist.

I'm going to list a few technical defenses that may evolve simultaneously with home DNA printers, but in honesty I think the biggest defense will be a sister technology allowing for a more... socially advanced?... human. I don't really want to go into it here, but it's pretty clear that this level of technological change fundamentally changes human society.

Anyway, technical. I think these are the most likely:

1) Alarm vaccines. These are intelligent defenses, probably bacteria, that scan for a wide variety of known intruders and report on the kinds of microbial intruders you're experiencing. If they find a NEW one, the alarms howl! They immediately report the structure and activities of this new form, as well as beginning basic defensive operations. Sort of like a biological Norton, if you want.

2) Airborne scanners. Same idea, but in boxes rather than people. This has the advantage of being easier to update and able to use technology that couldn't exist in people's bodies. However, it would also be less widespread and unable to accurately watch the progress of a new monster against a host. Plus side, they'd catch nonhuman-targeted microbes.

3) DNA Reverse Engineering. Get a sample of the new critter, run it through the reverse engineer, and you've got instant vaccine and cure. Release the hounds within an hour of detecting the thing in the first place.

4) Internal Weather Control. Instead of injecting a drug, we inject a bacteria that can manufacture the drugs we want. Moreover, the bacteria can be recalibrated if it turns out that cocktail isn't working.

Now these all sound hopelessly farfetched and futuristic, but you have to remember that we're talking about fighting off a microbe that I created on my home machine. They're not really more futuristic than that.

The point is that by the time we have home machines for this, we'll have developed defenses and responses to the threats the home machines create.

You can argue this isn't true - that we don't have defenses against nuclear bombs, for example - but those can't be created at home. Computers and the internet were once predicted to have catastrophic consequences for humanity back in the eighties, if you recall. "A hacker brought down wall street! OMG!"

But that passed because we developed safeguards, built them right into the infrastructure of the system. Built them right in because we WERE the system. The home users actually enabled the development of the defenses we use against the many horrors of the internet.

Same idea. Technology doesn't advance on its own. It advances for a large number of people and in tandem with a large number of brother and sister technologies.


The OTHER issue here is a fundamental misunderstanding of humans. And this is perhaps a more serious misunderstanding.


Nearly every crime ON THE PLANET is perpetrated by someone who spends a great deal of time near the victim. Likely it's a member of their family. Kidnapped children? It's usually a family member. Abuse? It's usually a family member or a trusted friend. Theft? It's often someone you know, and the rest of the time it's usually someone who lives relatively nearby. Mugging? Usually muggers stay pretty close to home or their primary hangout.

Believe it or not, we humans shit where we eat. Criminals commit most of their crimes within spitting distance of their normal zone of activity.

There are exceptions. For example, a lot of people knock over convenience stores. But it's usually a convenience store they've been in before or at least driven by a bunch. The convenience store makes a particularly appetizing target, so I guess it's understandable.

"But that doesn't hold true for terrorists and serial murderers... does it?"

Yyyyeah, it does. To a large extent.

It seems the majority of terrorist attacks happen in the same city as the terrorist lives. Nearly all of the rest happen in the same nation. You'll notice that most nations that suffer terrorist attacks have a higher percentage of immigrants from so-called "at risk cultures".

Our very own terrorist attack was a black swan. A particularly aggressive gambit using a particularly appetizing method against a particularly appetizing target.

I need to be clear. I'm not arguing against immigration or whatever else your personal concerns are. I'm explaining that terrorists do not simply pop over on a boat and blow you up. They blow themselves up near where they live, in the same way that muggers tend to mug people on the same set of streets, often within walking distance (or bus distance) of their home.

The same is true for our theoretical disease-criminal of the future. He is operating not out of coordinated misanthropy, but out of greed or UNcoordinated misanthropy. He won't cause any more damage than criminals today do. In fact, he'll probably cause less, because we'll be operating under augmented reality, which will make us very community-driven organisms. But let's assume the same level of misanthropy as today. Not a big threat.

"How can you be so sure? Isn't it DANGEROUS?!?!?!"

Well, hell, I can put together a mean chemical cocktail out of the crap under my sink and kill seventy people on a crammed-full red line train. Anyone can. But it hasn't happened, because that's not how humans work. We just don't work like that.

There are some amazing exceptions, most of them American. The Unabomber, for example. But to call these uncommon is an understatement. They are so rare that we still remember him, even though he only killed three people. THREE PEOPLE! Similarly, some of you remember the Japanese guy who attempted pretty much what I described in the last paragraph. But more people are harmed in bowling-related accidents than by this kind of coordinated criminal misanthropy. Perhaps it's because anyone whose brain is working so poorly as to WANT to do these things can't possibly do them WELL. I'd like to think that if I went off to kill people, I'd do it quite a lot better... but maybe there's something inherent in the act, some kind of mental screw-up? Perhaps there's some other reason. But... these people seem incapable of doing massive harm, even though they already have the tools to do so.

There are exceptions. Black-swan events like 9/11. That's what the technical defenses are for.

Also, there are some memetic factors worth considering. For example, suicide rates go up when there's lots of news coverage of suicides. School shootings went up when the media started going nuts about school shootings. (In fact, they basically didn't exist until the media went nuts.)

These are worth considering, but it's not something that we can stop progress for.

In fact, you can't stop progress at all.

Even if I haven't convinced you, in twenty years you'll find me sipping lemonade as I genetically engineer a green puppy.

Play and Story

I guess this is a rant. I doubt you've heard it before.

As time goes on and I build little demos and tabletop games, I find myself less and less interested in play. But in a really weird way. Lemme sum up.

I have noticed a huge difference in how I design tabletop vs how I design computer games. Here's an example of what I mean. I've designed several pseudo-star-wars games, both as computer game prototypes and as tabletop games (mostly RPGs).

When I design the computer game, I spend eons going over the play dynamics. "Well, how should light saber combat be? Just like so, using time-sensitive balance and yardeyaryaryar?"

When I design the tabletop game, I spend eons going over the, um, "narrative components". "Well, here, I've drawn a picture of the ambassador from Mrrhork, and his stats are on the back. He'll be useful because he's good friends with a rebel naval admiral and..."

Now, it's important to note that I'm not designing tabletops with common RPG mechanics. I'm not popping off to GURPS or d20 and pulling out some tired standard. I'm actually creating very new and strange mechanics for every tabletop game I make. But... but it's so easy. It takes maybe half an hour.

On the computer, it takes ten times that long just to duplicate freakin' pong, man. There's no time left for the story - I've spent all my time on trying to get the game to work in the first place!

The big factor at work here is that the interface for generating and executing rules for tabletop games is significantly simpler than computer games. IE, I write them down, then I remember how to do it. If it's vague or something I didn't expect crops up, I can modify them on the fly.

But! But!

But I hate using standard gameplay.

A big part of my dislike for the new Fallout game (which I thought was merely "quite good") was that they literally used the exact same system as for Oblivion. Even though the system wasn't very well suited to the Fallout theme. (I also disliked the world design, which was also inherited from Oblivion and had the same problem.)

Sure, they applied desperate patches and some varnish. (I'm SPECIAL. How cute.) But underneath it, we're looking at a recycled system. I can't stand that.

It's why I can't stand D&D or d20 or GURPS, either: these are systems that aren't designed specifically for the game the players are playing today. It feels like someone's trying to hammer a round peg into a Hulk-Hogan-shaped hole.

I want my gameplay to match the story (narrative elements, whatever). The game is a cohesive experience, and the idea that half of the game can be largely recycled from a completely unrelated experience is repugnant to me. So I quite literally cannot take a piece of gaming middleware such as Game Maker and make a game out of it. I hit a wall where I feel the grind between what dynamics the game needs to have and the dynamics that the middleware allows.

In some cases you can go digging, script up your specific gameplay system... but it takes just as long as making the damn thing from scratch!

It used to be that I'd make my little demos, focus on the gameplay aspects, and then drop them. But I'm having a really hard time doing that now, because I'm starting to really feel the missing half of the experience. It's like I'm arranging furniture in a house that has no roof or walls.

The horrible part is that this isn't a problem when designing tabletop games of any sort. Doesn't matter how hideously complex the game is. 200 page GM guide on time-traveling probability mathematicians? No problem, takes me a week. Board game with 500 illustrated cards? No problem, happy to spend the time.

But freakin' Tetris with a story, man? NO CAN DO.

Anyone else feel this way?

Saturday, December 13, 2008

Information Economy

Futurist crap

I've mentioned this before, but I guess it bears repeating. A lot of the stuff we now consider information wasn't considered information until technology allowed the local user to render the final form from that information.

I know it's not terribly clear, so let me give an example: music.

Music wasn't information for most of the history of humans. It might CONTAIN information, but the music itself was created by the use of complicated instruments, and would be considered to be a product of that instrument plus someone who could play it.

Then we got phonographs which, in combination with records, allowed us to play back music that was recorded on any number of instruments with a very simple instrument: the speaker. The physical presence of the instrument and the time of its playing was removed from the equation: you could listen to the symphony or a jug band, all out of your home device.

This continued to improve, of course, with the advent of radio stations, tapes, CDs, and now MP3s. Moreover, some music is created without anyone ever playing an instrument at any point! At each step, the creation of the song and the end user's enjoyment of it is separated more and more. It becomes information.

It may be difficult to imagine a time when music wasn't considered information - a time when songs weren't protected by law because it wasn't the schematic of the song that mattered, but the individual performing it.

As songs became more and more informationesque, laws were pushed to limit the spread of the information component. Whereas before anyone could pretty much play whatever song they wanted, now there are extremely strict limits on which songs musicians are allowed to play and in what situations they are allowed to play them. There are similarly complex laws governing the final rendering of music. This absurdity that most of the world takes as a given only exists to commoditize something that has always existed freely, but has only recently become useful.

I'm not going to argue whether it's good or bad, and I'm not someone who's advocating an information anarchy. I'm simply pointing out that as songs have had their physical requirements lifted and become information, laws have been made about who is allowed to access and replicate that information. Information that, only a few generations ago, would have been happily passed from person to person without anyone even noticing.

Of course, we all break those laws every day. Well, except me, the sterling example of not-a-music-pirate (cough). But that's not because we're evil, that's because it's gotten to the point where music is about a half a millimeter from actually being nothing but information. It has become so insanely easy to transmit and replicate that it is almost impossible for us computer nerds to really imagine it restricted.

Now, the point of this essay is that many things march towards information in the same way as song did. This is not obvious because music is the only one we're really very familiar with in our modern culture. But here are some examples:

Writing! Writing wasn't originally information, although it conveyed information. Instead, writing was in heavy books or scrolls (or cave walls or bark or whatever). It required the physical presence of these displays to exist. Now, of course, we have a billion displays that can render writing on the fly, and writing has turned into information. Just like music: we no longer need to have the physical form that writing originally required. Our ability to manufacture any "blueprint" of writing on a wide variety of personal screens or printers makes those things obsolete. We take the information - the "blueprint" of the book or essay - and we render it on our screen instantly and painlessly. You're doing it right now.

We're seeing slow strides in that direction for many, many, many things. For example, Pepsi keeps their recipes secret because, unlike three hundred years ago, that information would allow their competitors to easily "render" Pepsi. It's not feasible for individuals to do it, but other large companies can easily do it. That means that Pepsi is largely an information product. Sure, you buy a can of Pepsi, but that can's contents could be manufactured by anyone with a soda mixing plant anywhere on the planet. Pepsi's existence as a unique product is only true because of the countless laws that protect their specific mixture from being stolen and duplicated.

Does that seem odd, to consider a soft drink "information"? Well, how about coffee? More and more homes are getting coffee-savvy, with their own coffee machines, espresso makers, and so forth. Obviously, they're still using beans that are physically grown somewhere, but the final drinks they're making are the result of simple recipes - information about coffee.

My uncle makes something almost indistinguishable from Starbucks' Frappuccino. In fact, we call it a Frappuccino. My dad prefers hot drinks, so on his machine he simply makes coffee, but he grinds the beans to a very specific level and lets them steep for a very specific amount of time at a very specific heat, and it often varies from bean to bean. He's producing, from his home, coffee far superior to most public coffeehouses because his recipes - his "coffee information" - are superior and calibrated specifically to his taste.

I can't go so far as to say that coffee is information, but I can say that it is moving in that direction in the same way that the record began making music into information. It sounds insane to say that someday we'll just say "Coffee, Barbados blend from 2024", and out will pop coffee. But two hundred years ago, it would have sounded insane to say that someday we could just say "Bach, concerto #7, Philharmonic symphony orchestra 2001"... but today, that's almost literally what we can do.

"But that's so far in the future it doesn't bear to be considered!"

It's actually amusing to look back at those times and see their predictions for the future, when people were saying things like, "The phonograph of the future will allow the family to listen to up to thirty different concerts in the quiet of their own home!"

Yeah, I've got three hundred concerts on this machine, and that's only classical music. Um, legally, yeah.

The thing about progress is it's not linear. It's exponential. Just when you can see far enough to start realizing what might happen, it explodes into insanity.

So, let me go ahead and explode this "informatization" of products into insanity.

Cars as information. Download a blueprint from OpenSourceCars, print it out on your home machine, and drive a Ferrarenti to work today. I'm officially coining that pun.

Genetics as information. Plan out a garden consisting entirely of plants to grow in exactly your soil, in exactly your weather conditions, and form a stable biome. Print them out. AS PLANTS, not seeds.

Neighborhoods as information. Get together with your community and pound out the specifics of your community power generation, water filtering, shared spaces, optimal parking... then grow it using a pseudo-life form that excretes complex structures like a clam forms a shell.

Insanely over-futuristic?

Well, it's certainly not going to happen tomorrow!

But all it needs is something that lets you render a product locally. Just a machine that lets you print out the things you want.

Wednesday, December 03, 2008

Characters that feel like Characters

I'm trying to trim this down to readability...

There are more and more games that try for an open world with some level of social play. Fable II, for example, lets you do any pose to anyone. But though the world has many NPCs in it, none of them feel like a character. They all feel like cardboard cut-outs.

There's something to be said for better play involving characters. In fact, I've said tons on the subject myself. So I'll skip that stuff.

I've come to believe that's only half the issue. I think the other half lies in how you interact with the character.

Most games of this sort give you generic interaction options that can be pointed at any character. For example, in Fable II you can "dance" at anyone, you can "yell" at anyone, etc. Other games featuring open-world (or character-gen) social systems use the same idea, although they often use a different set of generics. Even I have, in the past, done this for nearly all of my prototypes.

But I think that's the flaw. I think that's the big flaw.

The idea behind it is that the characters will react to your generic action in a specific way, showing off their personality. But that's a crippled framework because it depends on the generic message. It's a reply to a generic comment. It's like this: you want to get a feel for how interesting a photograph is. But you're only allowed to ask a few specific questions: how big is it? How red is it? Is there a person in it?

Even if the photo is very interesting, you'll be hard-pressed to tell because your questions are so shaky. And if you have enough generic, pre-defined questions to tell you, then you have thousands of generic questions, 99.9% of which are useless in any given situation.

Social gameplay is the same way. Even if the character is interesting, you won't see that in how they respond to your thumbs-up or your yell. You can't really get to know a character because your AVATAR can't get to know the character. Your avatar's behavior doesn't change to be more specific to the character.

What if it did?

Let's pretend that instead of generic responses like thumbs-up and laugh and so forth, we have four "social action slots". The slots are filled by the places where your character's personality, the other character's personality, and your relationship collide.

The idea is that there could be hundreds, thousands, even billions of potential interactions that could be loaded up. They could be built out of sub-interactions, for example, or even made with a numeric scale involved.
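To make the slot idea concrete, here's a minimal sketch in Python. Everything here - the trait names, the candidate actions, and the scoring rule - is a hypothetical illustration of the "collision" between two personalities and a relationship, not a real system:

```python
def fill_social_slots(avatar_traits, target_traits, relationship, actions, n_slots=4):
    """Load the n_slots best-fitting interactions out of a large candidate pool.

    avatar_traits / target_traits: hypothetical trait weights in [0, 1].
    relationship: how developed the relationship is, in [0, 1].
    actions: candidate interactions, each tagged with the traits it touches.
    """
    def score(action):
        # An interaction is relevant where BOTH characters care about its
        # traits, scaled by how far along the relationship is.
        overlap = sum(avatar_traits.get(t, 0.0) * target_traits.get(t, 0.0)
                      for t in action["traits"])
        return overlap * relationship

    ranked = sorted(actions, key=score, reverse=True)
    return [a["name"] for a in ranked[:n_slots]]

# Hypothetical example: a jokey avatar meets a mixed-mood NPC.
avatar = {"humor": 0.9, "aggression": 0.1}
npc = {"humor": 0.8, "aggression": 0.7}
actions = [
    {"name": "share a joke", "traits": ["humor"]},
    {"name": "playful shove", "traits": ["aggression"]},
    {"name": "tense standoff", "traits": ["aggression", "humor"]},
]
slots = fill_social_slots(avatar, npc, relationship=0.5, actions=actions, n_slots=2)
```

The point of the sketch is just that the slots are computed, not authored: a pool of thousands of interactions could be filtered down to the handful that make sense for this pair, right now.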

A simple example is a hug. If you can hug someone in a game, they use the same animation no matter who you're hugging. But you hug people very, very differently depending on your relationship, your current mood, etc. And they hug you back (or punch you in the nose) very, very differently. Not just big differences like bum-grabbing or carefully maintaining inches of air between you, but small differences like how long they hesitate before returning your hug, exactly where on your back their hands fall, etc.

Calling it a "hug" is a crime caused by the limitations of our language. There are a billion different kinds of hugs. Instead of trying to make a big list of them, it would be much better to create a hug generating system.
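As a toy illustration of what a hug generating system might look like, here's a sketch in Python. The parameter names and formulas are invented for illustration; a real system would presumably drive animation blending from far more inputs:

```python
def generate_hug(closeness, warmth, mood):
    """Generate hug parameters from relationship and mood values in [0, 1].

    All names and formulas here are hypothetical - the idea is that "hug"
    is a family of outputs, not a single canned animation.
    """
    # Closer relationships mean less hesitation before returning the hug.
    hesitation_ms = int(800 * (1.0 - closeness))
    # Warmth and current mood drive how tight the embrace is.
    tightness = min(1.0, 0.2 + 0.5 * warmth + 0.3 * mood)
    # Duration grows with both closeness and mood.
    duration_s = 0.5 + 2.5 * closeness * mood
    # Hand placement drifts from shoulders (formal) toward mid-back (intimate).
    hand_height = "shoulders" if closeness < 0.4 else "mid-back"
    return {
        "hesitation_ms": hesitation_ms,
        "tightness": tightness,
        "duration_s": duration_s,
        "hand_height": hand_height,
    }

# A warm hug between close friends vs. an awkward one between strangers.
warm_hug = generate_hug(0.9, 0.8, 0.7)
awkward_hug = generate_hug(0.1, 0.2, 0.3)
```

Even this crude version produces a continuum of distinct hugs from three numbers, which is the whole argument: generate the variation instead of listing it.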


I should stress that this is not an always-on thing. You cannot simply walk up to anyone and hug them. You only get the option to when your avatar thinks a hug would be appropriate (or, well, whenever he wants to).

This is a breach of common game design philosophy. We've grown very used to the idea that the player should be permitted to do any legal action at any time. If you take away a player's ability to jump "because his character doesn't feel like it", the players will probably crucify you.

This is really no different, and I expect the players would be irritated that they can't simply choose to try to hug (or whatever) their favorite character whenever they like. However, if it's built fairly transparently, getting that option would be one of the fun gameplay challenges.

But, as with all this week, this relies heavily on the player's avatar having a personality.

What do you think?

Monday, December 01, 2008

Filling in the Blanks

To continue this theme of hollow characters, I'd like to do a little thought exercise. I think you should try it too, because it's fun.

Picture one of the games you've played recently that had a hollow or window main character. Now replace them with a very strongly NON-hollow character. Imagine how the game would feel differently and then imagine how, if the game was designed with that character in mind from the start, how it might be fundamentally different.

For example, replace Mass Effect's "Shepard" with, say, Captain Kirk. This one's easy.

In most regards, Kirk is easy to picture in Shepard's shoes. It's not much of a stretch to see Kirk going to war like that, although it's not exactly what he would do in Trek. I can't see Kirk driving around on barren planets looking for mineral deposits and loot, but I couldn't see Shepard doing that, either.

From a plot perspective, Kirk fits fine into the role of the young renegade captain sent on a special mission with a special ship. However, all the interpersonal aspects of the plot will screw up a bit because Kirk's character doesn't really have those dynamics. His romances are always held at arm's length, for example.

The whole idea of "choosing noble or asshole" is still viable, but it would be done in true Kirkian style instead of by a mushy, wishy-washy hollow character. "Garrus... you can't... go around killing people!" "It's Wrex - he's gone out of... control!"

The fact that Kirk actually has a personality lets the writing justify letting Kirk take the lead more often, whereas Shepard is ENTIRELY a reactive character. Writing an active role for a hollow character leaves players feeling like they've been cheated out of the options they would really like... but that problem is much reduced if the main character has such a strong personality that the player can't deny the options are the only ones that make sense for him.

If the game was designed with Kirk in mind from the start, I think it would feature the ship more centrally, because Kirk is a captain above all. Sure, he gets in fist fights and fires lasers and boffs a space elf, but the whole purpose of his character is to be the beating heart of his ship.

This sort of character replacement is kind of a fun exercise. That example was pretty straight - a substitution of a bland character with a very similar non-bland character. But it's often fun to imagine really zany mixups, and we can still claim it's educational so long as we think about how it would actually change the game.

For example, imagine Mirror's Edge with Raz from Psychonauts as the main character.

Or imagine Ash from Evil Dead as the main character in Crackdown. (Or Ash from Pokemon, I suppose.)

Or imagine SHODAN as the "main character" in SimCity. Go nuts.

The point isn't "How would these characters fit into the game?" The point is "How would the game change to fit in these characters?"


I especially like that SimCity one. Imagine a city-building game where you play an evil artificial intelligence. Ha! "The only thinggggs of beauty in the dirt you call a citttyyyy... are the thiinnnnggggss IIIiiii builllllt therrrrrre."

Sunday, November 30, 2008

No, Don't Shake It!

After my last post, I realized that it sounds like I'm saying that it would be a good idea for an IP such as Final Fantasy or Gears of War to come out with a bizarre new style of game instead of a normal sequel.

I'm definitely not suggesting that! Those IPs are not built to support that kind of variation. If you came out with Gears of War the RPG, it would probably be considered a very awkward move.

If IPs like these do come out with games from outside their previous style, they tend to be fairly conservative and standard. Chances are very high that the Halo RTS will not have any amazing new dynamics in it: it'll basically feel similar to most other RTSes on the market. I don't mean boring - I just mean that the mechanics will be very similar to those things that RTS fans expect.

When I wonder if it's possible to have an IP that lets you experiment with very new and unusual games as sequels, I'm not talking about any existing IPs. I'm talking about whether you can come up with an IP that would allow that.

It would need to have a specific set of attributes. For example, it would need to be diverse enough to allow for a variety of gameplay styles. Halo is better at this than Final Fantasy, because Halo concentrated on developing its non-gameplay aspects across multiple games, while Final Fantasy always develops new non-gameplay aspects. This means that Halo can abandon their typical play style and still feel like Halo so long as they keep those assets in play.

I don't think Halo is a really great example, though, because the Halo universe is very confined. It's not just a matter of how many different styles of games it can support, but also how many different styles of... narrative, for lack of a better word. You'll never get a Halo game that is about falling to your darker urges, like in Star Wars. It's just not supported by the nature of the game, and if you published such a game, it would be heralded as "a new direction for the Halo franchise".

It would be hard to do an adventurous game (whether an adventure or an open-worldish RPG) because the IP is completely militarized: the lives and adventures outside of the military are not part of the Halo IP, even though they obviously must exist in a technical sense. Again, you could make a Halo the Adventure game, but it would feel very un-Halolike.

So the question is...

Can you imagine an IP and an approach that would allow you to develop radically different games from sequel to sequel without ever losing the "feel" of the IP?

Friday, November 28, 2008


Most of the best-selling games these days are sequels. A lot of people don't like that.

I think it's a sign of the industry's health: it's a good sign that the first game did well enough to fund the second game, the third game, however many games come out. It's true that it seems like most people only buy sequels to games they've already played, but that doesn't mean there are no new, creative games. It just means they aren't as well marketed. In many cases (Psychonauts, cough) this dooms them to die a horrible death, but that's true even if there are no sequels running around.

For me, I like sequels because I know what to expect. I'm a little disappointed by how WELL I know what to expect, but that's a different matter. The point is that, in general, sequels are strong games made by a strong, experienced team working with an IP and tech base they are very experienced with. They're successful because they're usually quite good.

If I want to be surprised by something, I buy something that doesn't have a number in the title. Usually, I buy strictly indie games on that front: I'm nervous buying anything like, say, Mirror's Edge, because they tend to have the worst of both worlds: an inexperienced team working beneath a "creative board" that cripples any creative impulses. Creative games with inexperienced teams are fine, clones with experienced teams are fine, but clones with inexperienced teams? No thanks.

(Obviously, Mirror's Edge wasn't a clone... but it was definitely rough, especially in the "writing" department. I put it in quotes because I would hesitate to call that "writing".)

For me, there is an interesting edge to new games, games that aren't cycling through an old IP. And this relates to my last post about "full characters", actually. A new game will typically revolve around some powerful organizing concept (often the main character's weird abilities) and will therefore have a very unique flavor.

Even if an IP is quite good, that doesn't happen in sequels. The powerful organizing concept might be there in the first game, but after that, it's pretty familiar, pretty well explored. There's a push to keep the nth game feeling like the nth-1 game, and that means that the edge wears off even as the team starts to come together and polish the game to a shine.

I wonder if it's possible to build an IP that explores weird new games, an IP that keeps its edge no matter how many games you release. I have a sneaking suspicion that even if you managed to come up with a way to do it, it would be instantly derailed the moment you became a success as the well-meaning (greedy) board of directors gets its talons into the project.


Wednesday, November 26, 2008

Fill 'Er Up

I hate blank-slate main characters, and here's why.

These days, most games have blank slate main characters. The idea is that the player's avatar is so incredibly nonexistent (especially outside of cut scenes) that the player can fill him with any personality he chooses.

To say this is common is an understatement. It's ubiquitous. To the point where it exists even in games with seemingly strongly defined main characters.

For example, in Gears of War, you play the improbably named "Marcus Fenix". He seems like an actual character. He's got a badass voice actor, he has lines of dialog, he's definitely not visually neutral.

But he's still an empty character. After establishing his very basic personality and history in the early cut scenes, he rarely says anything more emotional than "keep moving" and "could be a trap".

This sort of thing is easy to see if you compare the main character to the secondary characters. Compare and contrast: your main character in any BioWare RPG. Even if we take into account all the dialog you choose (and we shouldn't, because it doesn't actually constitute the avatar feeling any emotion), you STILL have less character development dialog than any given party member's left big toe. The other characters will have strong judgments, wacky opinions, fun lines of dialog, actual emotions on their faces... you, on the other hand, get to choose whether to kill the beggar or give him money.

Obviously, you can have good games with hollow avatars. The world is full of them: the Final Fantasies, if you like them. System Shock II. Fallout I & II. Half-Life. However, this long list is not because of any strength in the idea of a hollow character: it's because there are ten times more hollow-character games than defined-character games. Even games that theoretically have strongly defined characters, such as the Spider-Man games, end up hollowing out the character for your use.

But compare these to games which have a main character that is not afraid to have a strong personality. Psychonauts. Beyond Good and Evil. Planescape: Torment. Sly Cooper. Grim Fandango. Most of the best adventure games (and the worst, I admit).

These games have a very different feel from hollow-avatar games of the same sort. The main character's existence as an actual person shapes the whole game. It shapes the plot, the gameplay, the color, the EVERYTHING. You are living in their world.


The hollow avatar started back in the beginning, when you didn't really have much choice. Even if you made your avatar a bright red and blue clown, you couldn't really show him doing much in terms of emotion or characterization. But these characters were not intended to be hollow. They just HAD to be.

In the game designers' minds, the characters were fully formed. You can tell by the way the world is so strongly shaped around their personalities... even though you can't actually see their personalities. Little Nemo and Bonk are very clear examples: you don't really see much in the way of emotion or personality during the normal course of the game, but the whole world is built around their very strong personalities.

In some games, hollow avatars are used to great effect. System Shock II is an excellent example, or Half-Life if you prefer the wussy little brother. These games make use of the same basic limitations: you never see your avatar's emotions because A) you never see your avatar and B) your avatar never runs into anyone he can have a chat with.

Similarly, Ico and Shadow of the Colossus both have hollow main characters, but, again, they are hollow because their personalities rarely get a chance to show. The worlds and plots are built around a very strong personality and set of emotions, and this shines through.

That's not the case in a game like Gears of War, where you spend the majority of the game staring at an armor-plated football player hanging out with other armor-plated football players. Obviously, your avatar has a billion chances to show emotion and personality. You can always see his body language, you almost always have people within talking distance, so why is it that the other characters all have interesting lines and you always just grunt? Why is it that the OTHER characters express all the things I, as THE PLAYER, am feeling?

Even in games where you make your own character, such as Oblivion, I think I would enjoy playing someone with a personality.

I think people assume that a hollow character is the best thing because it allows anyone to give any personality to the character. But that's stupid. If you've played Beyond Good and Evil, which avatar feels more real? Which avatar adds more to the game? The photorealistic, extremely cool Fenix... or the cartoony, slightly insipid Jade? I'll give you a hint: it's Jade. If you doubt it, go play both games again.

In honesty, I would have preferred to play Cole or either of the Carmines in Gears of War. I have a feeling I would have felt the war far more personally that way.

It's not as if a hollow character can be filled with any given personality, either. Even if we're playing something like Oblivion, with an almost unlimited set of options, the personality we give our character is going to be tightly linked to how the character plays. A good example is the assassin: a lot of people wanted to be assassins in Oblivion, but you can't make your avatar feel like a dangerous assassin. A) Assassins have to frolic (literally) through the woods picking daisies and B) bows suck. The "dangerous assassin" personality dies quickly, mutated into something bizarre and perhaps hilarious.

To me, this means there is no excuse not to give the avatar a strong personality. Make it permeate every facet of the game.

"But what if the players don't like the main character?"

That actually happens pretty rarely in decent games, and the reason is because the quality of the gameplay and plot will change their judgment on the quality of the character. Jade's character is actually quite irritating, taken out of context. But the gameplay is good and the way she's exposed to plot points takes the butter out of the smarmalade and lets us admire her.

Sly Cooper's about as interesting as a piece of cardboard, but he's got an extremely STRONG two-dimensional personality, and the rest of the game supports it. The main character from Destroy All Humans is practically ONE-dimensional but, again, the game makes him interesting to play as.

So... characters. Yeah. That feel things. And have personalities.

It's cool.

I think.


Tuesday, November 25, 2008

If You Can't Kill the Children, It's Not Fun

I'm sure this is familiar to some of you, but I need to type it out anyway.

Bethesda released a new Fallout game, which is a good example to use, although the same complaints can be aimed at a wide variety of games. In Fallout 3, you can't kill children. In the earlier games, you can. In fact, it's something that can happen by accident, especially if you're having a firefight in the middle of a village. Using plasma cannons and missiles. Which happens fairly often, actually.

Non-gamers (and probably most gamers) will stare at you in horror when you say that the game is not as good because you can't kill children. "You want to kill children?!"

Tch, that's not the point. This is not a matter of programming children to be killable. This is a matter of programming children specifically to be exempt from death in a world where you can kill anything and everything else. It's roughly the equivalent of going to see Blade Runner or some other beautifully atmospheric movie, only to have a corner of the screen filled with a cheerful cartoon animation doing the Macarena. For the whole movie. But it gets bigger when the movie is at its darkest and most atmospheric. After all, you wouldn't want anyone to get depressed by the dark and atmospheric nature of the movie!

Let me see if I can explain it in another way.

You want to live in that game's world, at least for the moment. You want to live in that world, but THE CHILDREN AREN'T LIVING IN THAT WORLD. They are exempt from the world. They are immune to its dangers, pressures, etc. They are the equivalent of a comic book character breaking the fourth wall, and that's not always suitable.

So, YES, if you can't kill the children, it's no good. Not because killing the children is good or even because I want to. It's because if they're immortal god-beings, it completely ruins the immersion. Destroys the reality of the game. For the sake of some little old lady's heart?

It's rated M, lady. Kill the children.

Open World Games

... They're very popular, these open world games. But I think it's worth considering their strengths and weaknesses rather than simply screaming "open world!" and spending ten million dollars on it.

The core idea of an open world game is that you can interact with the world in any way you see fit, rather than being stuck in some kind of linear mission progression. The truth is usually that you're stuck in a linear mission progression, but you can ignore it and go diddle around.

I don't like open world games.

Okay, that's really wrong. I love open world games. I love them so much that I hate them for falling short so much.

For example, Crackdown is an "open world game". You have the whole city. You can go anywhere you want. You can even take your starter pistol and baby-fat starting character over into ninjas-kill-you land. If you're good enough, you might even win. And the game can handle that.

The issue here is that this is not a simulationist open world. You can tackle the quests in whatever order you like, you can collect orbs, you can even fight the cops, but none of this stuff is terribly emergent or adaptive. Even the cops get tired of chasing you after a bit.

This is true of every modern open world game I've played, except maybe some of the roguelikes. GTA3 and Mass Effect are the same way, as simple examples: you can do the missions, you can explore the city/universe, and maybe you can play with the cops for a bit until you get bored.

There are things to do in the city - steal cars, do races, etc - but these are simply side missions scattered around the city just for kicks. They are scripted in, preprogrammed pieces that change nothing except, perhaps, your XP meter.

I find that these are a disappointment. I feel that an open world game should maybe be about THE WORLD. Hence, you know, "open WORLD".

Fable II offers a very basic glimpse into this kind of idea, although I only bring it up because it's recent: half the space-adventure games since 1993 have done the same thing. It's relatively easy to model the economy of a system if you ignore realism, so these games all have an economic system for you to open-worldly abuse.

The problem with these kinds of games is that I find them a bit unsatisfying. Once you've bought the city (or learned that buying the city will take all year and isn't worth it), what's the point? All of those stupid prescripted missions do have a purpose: they give the world texture and flavor. They make THIS planet different from the last one you landed on more than superficially.

But the missions are boring! Not only are they minimally adaptive, they're not "open": the mission goes no deeper or shallower than scripted. For example, if I save a group of slaves from Plorbax the Grundarian, I'm generally given a good option (let them go, even though we're in the middle of a jungle full of fifty-foot-high monsters) or an evil option (usually kill them; occasionally, sell them).

I can't actually interact with these newly rescued people. I can't offer to schlep them back to their homeworlds, can't offer to make them my crew, can't try to date one of them, can't try to settle down in the jungle like the Robinsons, can't do anything.

Obviously, simulation of PEOPLE is a bit more difficult than simulating an economy, not least because we can't easily simplify people without losing what makes them interesting. It's a bit more difficult, yeah, like 'nobody's ever done it' difficult.


But what, I thought, if we're coming at this from the wrong angle? What if instead of trying to simulate the people, we just try to simulate the illusion of emergent behavior?

In my mind, the point of an open world game is that I am permitted to explore the universe as quickly or slowly as I see fit, in as much or as little detail as I wish.

Let's say that we build a typical open world with all its kajillion little side quests. But, instead of placing those side quests in the world, we leave them floating free in the database.

As the player explores our world, we can assign these side quests. So, he's trying to chat up someone on the street? Now she's the main character in the "I'm being chased by the mafia!" side quest. Going down a dark alley? Now it's the "out of control car!" alley side quest. Looking more closely at a corpse? Now it's the "mysterious letter in my pocket!" side quest.

Furthermore, using this method it would be easy to release supplements or mods, free or for a price, that instantly integrate into the world. You would keep the density down, obviously: you don't want every random passerby to hit the player up with a side quest, or every new building to be a weird new situation, just the ones the player seems to show an interest in. It's also quite possible to string quests together: while rescuing the slaves, you can take a special interest in one of them and they will have a suitable side quest assigned, just as if they were an actual character with an actual, meaningful existence.

In addition I think this would permit a whole new set of "flavor" subquests. For example, if you're on top of a building, admiring the view, it could spawn the "side quest" for someone else admiring the view with you. There's no competition, no challenge, it just adds some targeted flavor to the world.
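To make the idea concrete, here's a minimal sketch of that free-floating quest pool. All the class and method names are hypothetical (this isn't any real engine's API); the point is just that quests live unattached in a database and get bound to whatever the player lingers on:

```python
import random
from dataclasses import dataclass

@dataclass
class Quest:
    name: str
    tags: set       # what kinds of target this quest can attach to

@dataclass
class Target:
    kind: str       # "npc", "alley", "corpse", ...
    quest: Quest = None

class QuestPool:
    """Side quests float free in the database until the player shows
    an interest in someone or something that fits one of them."""

    def __init__(self, quests, density=0.3):
        self.unassigned = list(quests)   # free-floating quest templates
        self.density = density           # chance an "interest" spawns a quest

    def on_player_interest(self, target):
        """Call when the player lingers on an NPC, alley, corpse, etc."""
        if target.quest is not None or not self.unassigned:
            return None
        if random.random() > self.density:
            return None                  # keep the quest density down
        # only assign a quest whose tags fit this kind of target
        fits = [q for q in self.unassigned if target.kind in q.tags]
        if not fits:
            return None
        quest = random.choice(fits)
        self.unassigned.remove(quest)
        target.quest = quest             # she's now "chased by the mafia!"
        return quest
```

With `density` low, most passersby stay ordinary; a supplement or mod just appends more templates to `unassigned`.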

What do you think?

Monday, November 24, 2008

Poor Bill Paxton

So, yesterday someone was watching TV and I heard a very familiar voice. It was saying something boring about some TV series, but in my head I clearly heard "That's it! Game over, man! Game over!"

I wonder what it must be like to have your WHOLE LIFE defined by one relatively minor role. I mean, does anyone see him as Bill Henrickson, or, like me, is it all "Why is Hudson wearing a suit?"

Twenty years later, and I still can't picture this poor guy as anything other than a doomed space marine. Yeah, I know he starred in Twister and played a big role in Apollo 13. At those times, I thought "Why is Hudson driving a car?" and "Why is Hudson wearing a space suit?"

Oh, no, wait, that makes sense.

Sorry, Bill!

Wednesday, November 19, 2008

Transition Team

As a side note, I'd like to point out that Obama's various transition teams contain a Nobel Prize-winning chemist and two people who actually know something about virtual worlds/the internet. This so far makes his teams the most qualified we've had in eight years. (Actually, any one of them makes his teams the most qualified we've had in eight years.)

What bad news! I'm not happy about him assigning people who actually know their business to the FCC transition team. I prefer my oppressors inept, you know? ;)

Cityscapes and Immersion

It sounds silly, but one of the things I enjoyed most about Mirror's Edge was the menu sequence. The city in the background was beautifully done, and the music was perfect. I've always had a thing for cityscapes, even if they're only fleeting: the intro for Megaman and Streets of Rage, for example. The games had nothing about cities in them, but I liked the intro sequences. Don't even get me started on Blade Runner.

There's something very vibrant about a cityscape that vanishes when the game actually starts. Even if the game is about the city, such as Simcity, it doesn't contain the same feeling of potential and vibrancy.

So I was thinking about what kind of game could still give that kind of feeling. If we zoom in, we can get the same kind of feeling from unique things in the city. Not simply traffic patterns and police stations scattered in among the commercial zone, but unique bits: what makes this police station different from other stations? Why does the foot traffic linger at that particular toy store? Why is that particular person unique?

You can see it in games like Assassin's Creed, Crackdown, and even City of Heroes. There is an inkling that THIS roof is particularly interesting, or that the crowd is interesting, or that any number of other bits of city are interesting. However, these all tend to fall flat if you get close: the crowd is just a milling of identical NPCs, the roof is a nice enough view, but gets boring instantly, the graffiti is fun as background noise, but if you actually look at it, it's one of five designs repeated a thousandfold.

I think that one major reason for this is because video games are not particularly immersive. We talk about immersion, but one reason we talk about it so much is that we're really pretty BAD at it. Video games are marginally okay at immersing us in a "flow" sense - getting us involved in the mechanical rules and so forth - but they're pretty bad at getting us immersed in a more classical sense. Movies are much better at it, because a skilled director will draw our attention to immersive details, such as a unique piece of graffiti, an interesting view from a roof, a crowd that contains real individuals...

A big component of these tricks is that they calibrate our judgment by showing us what "people" think and do (even if those people are imaginary). Sometimes this is extremely straightforward: they simply show people thinking, doing, and feeling. This is especially common in children's shows (and scifi), with their hundreds of close-ups on people's smiling or unhappy or angry faces. As the audience matures and learns the thousands of cultural clues, the directors simply linger on those cultural clues instead of the faces proper.

For example, we know how to feel about dingy underlevels of the spaceship in Alien because the director uses the same cultural clues we've learned about dingy industrial areas. He then plays it up by focusing on the interactions of the workers, showing how THEY feel about the place (and the crew above them). Using these cues we can also judge the lower levels strongly, and when the fight starts raging down there, we feel a lot more immersed.

Video games often use these methods, although usually pretty clumsily. However, they are very limited by camerawork. The only ways to zoom in on someone are to either have a cutscene and take movie-like control or to have pop-up portraits. Neither of these is ideal: one is extremely limited, the other makes the game into a non-game, however temporarily.

But if we acknowledge that this is something we need to implement, we can use our limited methods to give judgment clues a lot more adaptively and precisely, perhaps (in the long run) even better than a movie director.

For example, let's say you're playing a game where you're moving around a city, similar to Assassin's Creed or Crackdown (or Mirror's Edge if you could actually move around the city). When you discover a particularly nifty place, you stop for a moment to enjoy it. But there is no real sense of immersion, nor any reason to dally, so the enjoyment quickly wears off and you move on.

But what if there was someone with you up on that neat rooftop, someone who could say "wow!" and thoroughly enjoy the view? In a movie, that's what would happen: the actors involved would stand at the edge of the roof and look around, and we would mirror their judgments. In a video game, that doesn't happen. But it could.

In theory it could be our own avatar that does these things, but assuming you have any kind of direct control normally, I think that stealing control away would BREAK immersion rather than reinforce it. So we are more or less limited to having another character present.

There's no reason it has to be human. Maybe it is - maybe it's your sidekick, or your girlfriend, or whatever. But it could also be an AI, a ghost, a memory, a flying dog... they don't have to be human, they just have to make judgments that we can agree with.

Pacing is an issue, too: most modern games are paradise for people with no attention spans. If we're deeply engaged in the rules and mechanics of a game, taking time out to think about how pretty the view is would be distracting.

On the other side of the affair is the difficulty in actually getting characters to judge things in a way that aligns with the things our player wants to judge. In a movie, this is easy: the audience is damn well going to judge whatever you point them at. In a game, it's more difficult, because you can't point the bastards. You can't even rely on where they point themselves: in a first-person shooter, I might be running along looking down the street, but as a player I'll be noting the cool sunlight effects in the corner of the screen. If a character behind me pops up and says, "wow, this street sure is dirty!" I'll reply, "durrwhocares?"

Add to this the difficulty of actually getting characters to have the breadth of judgment required, and you've got quite a feat. After all, these characters will have to be able to not just comment intelligently on a nice view, but also have to feel immersed in that view. Then, ten steps later, they've got to be equally immersed and judgmental about, say, a traffic jam. Or a man wearing a pink tutu.

I think it might be possible to do this to some extent, although I'm not sure how suitable it would be in any modern game. These days, all the games we play are horribly egocentric and impatient: the city doesn't exist, it's just an excuse to provide us with city-themed levels. We don't like slowing down to smell the roses because we've been trained to think in terms of advantage, and there isn't any in smelling roses.

I'm not sure that the characters' judgment clues would actually MATTER, either, since we don't often consider them to be worth thinking about. They inhabit a lesser world and are obviously not intelligent beings, especially if we save and load and replay this mission a lot. It might be necessary to radically change that in order to make this sort of thing work.


I guess that's my rambly essay. What do you think?

Tuesday, November 18, 2008

Mirror's Edge & Motion Sickness

A few of my less game-related RSS feeds are pointing to a Wired article which suggests that Mirror's Edge is a proprioception hack. IE, it screws up your sense of where your real limbs and body are.

I guess, technically, you can call motion sickness (which is what he's talking about) a proprioceptive effect, because it originates from visual cues not matching up to your body's more subtle cues about movement, speed, etc. But to imply it's something new and amazing is really pretty silly.

Motion sickness is a pretty complicated subject, and most people don't realize the full spread of cues and effects. For example, my mom is fine on boats, in cars, and so forth, but revolving cameras in movies and first-person video games give her serious trouble. Similarly, I have a friend who can play FPS games all day, but if you expand the view angle to more than ninety degrees, he gets really ill.

What we're seeing in Mirror's Edge has nothing to do with the fact that you can see her hands and everything to do with the motion of the camera. We're not used to this kind of bobbing, tilting camera work in a video game. You know the only thing it's been used for until now?

HORRIBLE MONSTERS. We use it in movies to show the viewpoint of a werewolf or some other slavering beasty. Why do we use it? Well, one of the reasons is BECAUSE IT MAKES PEOPLE UNCOMFORTABLE AND NAUSEOUS.

I will say that I am 99% sure Mirror's Edge is not doing some amazing new "proprioception hack". It's just making you (well, some people) motion sick.

It's riding the edge of providing too much visual data about motion your real body isn't doing. The more data it provides, the more realistic the game will feel... but the more people will tend to get motion sick. If that realism is a "proprioception hack", people's standards on the matter are way, way too low.

Saturday, November 15, 2008

Mirror's Edge

This is in two parts. The first part is me making my typical whiny review of a game (no spoilers). The second is me discussing design details. If you don't care to hear my whining, skip to the "***". The other "***", obviously. I mean, uh, the FOURTH "***".

So I'm playing Mirror's Edge. I'd like to remind everyone that I love moving in games. I love my avatar maneuvering through levels. It's a passion. So, I would say that I am their target audience.

It's not a game I can give a numeric rating to. Like the first Sly Cooper, there's nothing else in the niche to rate it against. Unfortunately, unlike Sly Cooper, it is a pretty badly flawed game.

It's not a BAD game. It's just that, like its main character, it tends to fall short a lot.

First, the cutscenes are mostly 2D animations done by the Esurance guys. That style works well for snappy commercials, but is ill-suited for anything faster, slower, or more fluid. When they can tween, they run at full frames, but when they actually have to redraw a frame, they drop to animating on twos. There's nothing wrong with animating on twos IF YOU STICK TO IT, but switching framerates mid-scene is horribly jarring. Also, they don't use any animator's tricks, so things like sprinting appear very disjointed and jumpy instead of appearing properly fluid.

If I had to describe their animation style, it would be a graphical quality slightly below that of Teen Titans with the animation quality of South Park. So, not very impressed.

Also not impressed by the character design. All the characters are painfully bog-standard except the main character, who is painfully bog-standard and ugly. I'm impressed by the shoe design, and by the way that the level designers played around with color and luminescence.

Graphically, the game is glitchy even on the 360, with numerous dropped polys and thin black origin lines flickering around. It also features The Return of the Long Ass Elevator Rides Two: Electric Boogaloo.

The story has only one upside: they didn't try to write a romance plot in. Unfortunately, everything they DID write in was painfully stereotypical. The only surprises in this game are in the level design - you won't find any in the plot. The "big twists" were evidently shocking to my avatar, but they only served to make me think Faith is a bit of a retard.

No, I take that back. There were surprises in the plot: when I overestimated them, they consistently surprised me by falling short.

Only bloody-mindedness kept me from skipping the cut scenes, which are badly written and animated in a style I don't much like. At least the audio is solid, with decent voice actors, decent sound effects, and very suitable and listenable-toable music.

The setting is a painfully bland police state with painfully bland corruption running its painfully bland course. At least it doesn't look so standard, thanks to the graphics used in the levels. It's a very CLEAN painfully bland city, you see, which actually adds to the experience.

"But that's all beside the point! It's all about the gameplay! How's this first-person free-running game PLAY?"

Weeeeelllllll... let's leave the bulk of that for the design section. But let's briefly talk about level design and pacing.

The game features a lot of classic navigation puzzlers a'la Prince of Persia, but the pacing is awkward unless you've memorized the levels. The fighting is a shame, included just because nobody can imagine releasing a noncombat game. It's terrible, whether you're kung-fu fighting with ninja (AKA "strafe, slide-kick, press Y") or fighting cops (AKA "Press X, press Y, hold the trigger"). It's not terrible in that it's EASY - you die like a dog - it's terrible in that it's NOT FUN.

The game is laser-focused on its linear path. There is no side exploration, no bonuses, no nothing. The closest you can get to expressing yourself as a player is to find little bags which serve ABSOLUTELY NO PURPOSE. There are no interesting secret zones, very little secret information (a few screens with a bit of text hidden in a corner, a few voice mail messages), very little of interest. There are occasionally multiple paths to your goal, but one is obviously better than the other and neither is particularly interesting.


Let's talk about the design details.

Moving a platformer into first-person view is a dangerous move, but one that's been brewing for a while. Games have become more and more free-running-esque as the years go by and our hardware improves. Prince of Persia has become a nostalgic look into the days when swinging from flagpoles was new and amazing. These days, even the (surprisingly good) TMNT game features more advanced motion than PoP, and games like Assassin's Creed and Crackdown make PoP look like a tinkertoy.

All of these games are third-person for a reason. As you know, Bob, platformer games are mostly about where you are in relation to other things. Like, say, the platforms.

If you can see where you are in relation to other things (IE, you can see your feet), you can know when and how to jump, land, and so forth. This is, as you might have guessed, an important part of your balanced daily not-falling-to-your-deathfast.

So moving to a first-person view offers some serious challenges. You have to keep in mind that your players don't have as clear a view as they normally do. It's very hard to chain motions together because first-person view means they'll only be looking at whatever they are about to hit instead of being able to see everything around them.

Mirror's Edge tried to solve this problem in two ways. First, it painted things red. Red means "jump here", and if you see something red, you'll remember where it is even if you can't see it precisely at the moment. In the game it is used primarily to tell you where it is safe to make blind jumps, but it is occasionally (and in my mind, more importantly) useful in highlighting a path through puzzling terrain. Either way, it works to offset the limited view of the player, although in the former application (jumping off buildings) it offsets a limited view that even 3rd-person players have. Crackdown had to zoom pretty far out and up to give you a decent view of where you were jumping to in such situations, and it still wasn't perfect.

The second way Mirror's Edge tries to solve the problem is through level design. Most of the levels consist of climbing up sheer, blind faces (or staircases) for a while, and then running and jumping in a generally downward direction. This is helpful because if you're looking down at where you're about to go, you can get a pretty good mental map of the place even if your viewpoint will be too limited to see much when you're actually in it. This considerably offsets the blindness, especially since you can usually see the red marker showing you where you ought to go next.

However, these methods (and other aspects of their level design) take most of the spontaneity out of navigating the levels. It's all made very linear, mostly a matter of pressing the left bumper at the right times, occasionally while fiddling with the control sticks. When you are "free" to move around (say, on a rooftop), your freedom is pointless as the only things worth doing are moving to whatever is red or climbing to the highest point so you can see where you're going.

My problem isn't that such movement is boring: it isn't. Or, rather, it doesn't have to be. My problem is that they didn't really embrace it at all.

In most modern movement games (Assassin's Creed, for example) a big part of the gameplay is figuring out how to work your way to some location. Sometimes, it's painfully linear, but there's usually a rewarding rhythm and sense of progress - for example, finding a new plot element, getting an upgrade, climbing to someplace high, or finding something weird. Unfortunately, Mirror's Edge tries to do this, but there is never any sense of progress because they wanted to focus on relentless forward movement rather than the slow figuring-and-working-forward pace of normal movement games. They wanted to be more like Sonic, so they don't ever give you any roses to stop and smell.

In other movement games, flow is the point. TMNT and Crackdown are good examples. When these are linear, they are linear in a really obvious way to allow you to know well in advance what buttons you should press when, giving your avatar a continuous stream of uninterrupted forward movement. When they aren't linear, they allow you to freely explore an INTERESTING terrain, rather than a tiny rooftop. Mirror's Edge tried to do this with red things but they didn't take it very far: they wanted to be more like Assassin's Creed.

This has the added problem that Mirror's Edge is entirely first person, which means you are mostly good at seeing places you AREN'T. Fundamentally, I think it's a rotten choice for a movement-based game, because movement-based games are about changing where you are, not changing where you aren't.

Another big issue is that the players are limited by what they can recognize in addition to what they can see. Games like Crackdown make it easy because your method of movement is actually pretty simple and limited, just amped up to silliness. But in a proper free running game, your movement capabilities are going to be a lot more nuanced: you aren't just jumping the fence, you're scrambling over it, or maybe it's low enough to Kong. Similarly, maybe you need to wall jump up to the lattice across the way.

This would be almost impossible to properly see using third-person view, and it IS impossible to see using first-person view. This is why real parkour and free runner folks carefully familiarize themselves with wherever they are before they do anything dangerous: they need to know the "grooves" in the local space, places where their capabilities will allow them to fluidly move through. It's not something that can really be done on the fly.

Most games try to get around this with level design. Oh, look, it's a wall of exactly that height, so I know I can scramble up it, every time. And I know there will never be a wall of ALMOST that height, or a wall of that height I can't scramble up for some reason... and I know that if I vault that wall, I will land safely on the other side, no matter what the terrain.

But this doesn't help chains of movements. You know you can vault the wall, but you don't realize that you need to spring off the wall to the chandelier and then swing across through the window. The only way you can know that is if you've seen all the parts of the chain and have them mapped out in your head. That's much easier to do from third person.

Well, there are a few other options, I think. One is touched on in Mirror's Edge: important elements can stand out (in this case, be colored red). If used more excessively, they can form an unbroken chain of where you SHOULD be going, allowing you to maintain flow. I would actually suggest something a bit less world-centric, such as lines of flowing, floating pips.

This limits the puzzle-ability of your game, though. If there's always an obvious path to get where you want to go, you're not going to have to think about it much. Instead, you'll be more like Sonic or TMNT or Crackdown: navigating is a lot of fun, but many of the challenges come from exploration or combat rather than navigation.

Another option is to remove the realtime element from the game. If you "draw" your path in space rather than navigating it personally, it will allow you to analyze exactly what the situation is in a more relaxed way, allowing your avatar's progress to be fluid and sequenced even if YOUR progress involves tweaking paths for a minute.

Anyhow, either or both of these options would get rid of one of the stupidest parts of Mirror's Edge: DYING. Whoops, I died. Respawn. Whoops, I died. Respawn. Often five, six times in a row. Unlike TMNT, the respawn is a significant irritation which often takes you back forty or fifty seconds (this doesn't sound irritating until you realize that you have to repeat that stretch many times if you're trying for a difficult jump). Also unlike TMNT, it MAKES NO SENSE. There is absolutely no reason given for your amazing respawn talent. It's not part of the game world in any way. It's just there to let the designers create levels that you have to play through several times to be able to beat.

Nothing breaks immersion like inexplicable and unrewarding failure mechanics. :P

All that aside, I think that there IS some potential here. First-person mechanics are inherently more immersive, and you could make a very exciting game if only you could figure out how the player would navigate it.

Friday, November 14, 2008

Crossing the Geographical Wires

Here's a thought experiment. I'm not suggesting we do this, but it's an interesting thought experiment.

Imagine that voting for president works the same way it does right now, with one tiiiiiny little difference: any citizen can cast their vote in any one state (or county, or however you divide it).

So I could cast my vote for MA, where I live, or I could cast my vote in, say, Texas.

Put aside the idea of ideal behavior: people aren't ideal. Instead, think about how many people of what types you think would take advantage of this "switched voting", and what you think the effects might be as years pass and people get used to the idea. What sort of new groups do you think voters would create to try to leverage their votes?

I'll get things started: I think that most groups of independents (such as libertarians) would get together and agree to win a specific (low-population) state. "Green party wins Alaska!" would be hilarious...

Sunday, November 09, 2008

Space Navigation

This is a technical essay for beginners on the subject of AI navigation in space games.

A few weeks ago, I tried to build a space-fleet-combat-game. In the process, I learned a whole lot about how to get an AI to navigate in space. Recently, one of my friends tried the same thing, and I realized it might be worth doing an introductory essay on space navigation, plotting intercept courses, taking into account gravity wells, etc.

Proper space games, whether Star Control 2-style or Wing Commander-style, have a few factors that make them completely different from other games. First, the main engines point directly backwards and turning isn't instant. Second, you keep going on the same vector: no (or very low) friction.

This totally destroys the old fashioned interception calculations that you might use for a first-person-shooter or RPG, because as time passes, the target will be in a different location relative to you (whether it's you, he, or both that are moving).

The easiest way I found to calculate out a basic intercept course is as follows:

Get the distance between you and your target. Divide it by your average speed. That is the ETA.

Calculate the new positions of you and your target at ETA, but cap your vector at a certain constant number of seconds (3 is good for most action space games). IE, if the enemy is moving x+1m/s, in ten seconds he'll be at x+10m. However, if you are moving at x+1m/s, in ten seconds you'll be at x+3m, because your vector vanishes after three seconds. We negate our own vector because we plan on changing it radically.

Now, determine how long it would take to reach these virtual positions. IE, how quickly you can turn to the proper heading and close at speed. Do not bother taking into account your current vector. This is the estimated course (EC).

If the EC is roughly the same length as the ETA, go for it. If it's significantly different, increase or decrease your ETA significantly and try again until you find an EC that works or your ETA becomes stupid (error out).
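The steps above can be sketched in a few lines. The ship interface (`.pos`, `.vel`, `.avg_speed`, `.turn_time_to()`), the convergence tolerance, and the averaging nudge are all my own assumptions; the source only specifies "increase or decrease your ETA significantly and try again":

```python
import math

VECTOR_CAP = 3.0   # seconds: how long we trust our own current vector

class Ship:
    def __init__(self, pos, vel, avg_speed=10.0):
        self.pos, self.vel, self.avg_speed = pos, vel, avg_speed
    def turn_time_to(self, point):
        return 0.0   # assume instant turning for this sketch

def intercept_eta(me, target, max_iter=20):
    """Iterate the ETA/EC scheme: guess an ETA from raw distance,
    extrapolate both ships, then refine until the estimated course
    roughly matches the ETA (or give up)."""
    eta = math.dist(me.pos, target.pos) / me.avg_speed
    for _ in range(max_iter):
        # target keeps its full vector; ours is capped at VECTOR_CAP seconds,
        # since we plan on changing it radically
        t_pos = (target.pos[0] + target.vel[0] * eta,
                 target.pos[1] + target.vel[1] * eta)
        dt = min(eta, VECTOR_CAP)
        m_pos = (me.pos[0] + me.vel[0] * dt,
                 me.pos[1] + me.vel[1] * dt)
        # estimated course (EC): time to turn, then close at average speed
        ec = me.turn_time_to(t_pos) + math.dist(m_pos, t_pos) / me.avg_speed
        if abs(ec - eta) < 0.1 * eta:     # roughly the same: go for it
            return eta
        eta = (eta + ec) / 2.0            # nudge the ETA and try again
    return None                           # ETA became stupid: error out
```

For a stationary target this converges on the first pass; for a fleeing target the ETA creeps up each iteration until the two estimates agree.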

This system works in most space games because most space games have a maximum speed cap. If your maximum speed is 100m/s and you're traveling at x+100m/s, accelerating along the z axis will make you arc away from that and to z+100m/s, no x velocity at all. This is true both of 2D and 3D space games: most of them impose this artificial speed cap so that people have a hope of being able to navigate intuitively.

The slower you turn and the longer it takes to reach your maximum speed, the more of a factor your current velocity will be, and the longer your personal velocity cut-off time should be. In addition, when calculating your estimated course (EC), you should take into account that your average speed will be lower than your maximum speed because you have to accelerate - and if your acceleration is slow, that will be more important. Obviously, the longer the thrust, the closer to your maximum speed your average speed will be.
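For example, assuming you start from rest and accelerate at a constant rate up to your maximum speed, that average works out like this (a hypothetical helper, not from the text):

```python
def average_speed(v_max, accel, travel_time):
    """Average speed over `travel_time` seconds, starting from rest and
    accelerating at `accel` until hitting the `v_max` cruise speed."""
    t_acc = v_max / accel                # time spent ramping up
    if travel_time <= t_acc:
        return 0.5 * accel * travel_time # never reach top speed
    # distance = ramp-up distance + cruising distance
    dist = 0.5 * accel * t_acc ** 2 + v_max * (travel_time - t_acc)
    return dist / travel_time
```

Note how the average approaches `v_max` as the trip gets longer, exactly as described: the longer the thrust, the less the ramp-up matters.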


Now, if you're like me, you're the sort of person who doesn't want a speed cap. If you're traveling at x+100m/s and you start thrusting along z, you'll reach x+100m/s & z+100m/s. No speed cap except maybe light speed.

This makes navigation difficult because you have to factor in your vector much more strongly, often moving to negate it. On the surface, you should just be able to do away with the velocity cutoff. However, it's not quite that simple.

First, you need to calculate not just where the enemy ship will be at ETA based on his current vector, but also based on his acceleration. If a ship is accelerating away, you need to take that into account or you'll be hopelessly off course.

Second, this system produces radically inappropriate intercept speeds. You'll blow by your enemy doing 0.3c, you won't even see the bastard as he blips by. So you need to put in some kind of rough vector matching mixed into the basic intercept package. It's possible to program, but the actual physics of the matter make it highly difficult: unless your ship has many times the thrust of the ship you're intercepting, it will take you a long, long time to close and match vectors. And if you've got any human-controlled ships, forget it!
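Both pieces, acceleration-aware prediction and rough vector matching, are small on their own (hypothetical helpers; the hard part, as noted, is having the thrust to actually execute the match):

```python
def predict_position(pos, vel, acc, t):
    """Constant-acceleration extrapolation: p + v*t + a*t^2 / 2."""
    return tuple(p + v * t + 0.5 * a * t * t
                 for p, v, a in zip(pos, vel, acc))

def matching_thrust(my_vel, target_vel, close_dir, close_speed):
    """Velocity change needed to match the target's vector while still
    closing at `close_speed` along the unit vector `close_dir` --
    instead of blowing past him at 0.3c."""
    desired = tuple(tv + d * close_speed
                    for tv, d in zip(target_vel, close_dir))
    return tuple(dv - mv for dv, mv in zip(desired, my_vel))
```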

Another factor in this kind of game is the light-speed cap, which is a common thing to want to include. The idea is that the speed cap is light speed, and the closer you get to light speed, the more thrust it takes to change your vector.

There are a lot of problems with this. Basically, this means that your current velocity will screw up your acceleration. Plotting an intercept course may involve having to slow down so you can accelerate along another vector.

Of course, the truth is that the issue is radically more complex than that, because you're always operating from your own frame of reference. You aren't going 0.9c: everything else is. The light speed limit is no good for gameplay unless you (A) are a physicist or (B) throw away any realism it may have had.
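If you do throw the realism away, the usual game-frame hack is to simply scale thrust down as speed approaches c. This is a sketch of option (B), emphatically not real relativity; the quadratic falloff is my own choice:

```python
C = 299_792_458.0   # light speed, m/s

def effective_accel(speed, thrust_accel):
    """Game-frame hack: thrust effectiveness falls off near c, so speed
    asymptotes at light speed instead of hitting a hard cap."""
    return thrust_accel * (1.0 - (speed / C) ** 2)
```

Full thrust when slow, zero at c; the AI's course plotter then has to treat "slow down first" as a cheaper way to buy back maneuverability.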


With the described system, basic navigation is possible. However, most games want more advanced navigation: they want to be able to orbit planets, do gravitational slingshots, present broadsides, interpose specific faces of their ships, ram (and avoid ramming), etc, etc, etc.

Each of these things requires some pretty significant tweaking... and, of course, the basic calculations presented here are not exactly optimized. They give pretty rough (but functional) intercepts.

Those are the basics. Hope it helps.