A few days ago, Darius posted a link to this, a video game where the players all form a cooperating bridge crew. The details are a bit scarce, but the basic idea is pretty clear.
This got me thinking about games like this (including tabletop RPGs): games where a fairly large number of players cooperate. I've run scads of them, so I have a pretty good eye for how party dynamics tend to develop.
The difficulty in any party-based game, whether it's a bridge crew or a team of adventurers, is that not everyone is always busy. It's very easy to end up with a minimal role - a mistake made in a lot of old LARPs. "Color characters", we derisively called them. The same problem applies to characters that have the same amount of business but dramatically different amounts of tension.
The classic example of a color character is a LARP in which a few characters are simply "henchmen" or "thugs". It's possible to write such characters to have an interesting role, but classically their role is just to follow the boss around and provide muscle when it's needed. 90% of the time, they do nothing.
This is "realistic", sure. And some players can turn this kind of crap role into something entertaining, but rarely without upsetting the whole boat. In general, the use of color characters is a sign that your LARP design is painfully oldschool.
The classic example of the "tensionless" character is the cleric in a D&D game. The cleric can fight and heal, but it's pretty rare for them to feel tension: healing is rarely tense, and they rarely have an actual dog in the fight. They're just playing fighter support, and they know it. Even though they get the same number of rolls and have access to the same complexity of combat as the fighter, the tension just isn't there. They're just fighting to pass time between throwing heals and turning zombies.
These two kinds of characters - tensionless and color - are the banes of every team game. They are going to plague a game like a "bridge crew" simulator especially hard, because there is very little crossing over from one role to another. If you're an engineering officer, you might become important if things are going badly, but you're basically a cleric: you have no 'natural' role to play unless things get bad.
...
So far, I've been casting tensionless and color characters as uniformly bad, and I think that they are. However, it's also possible to make the opposite mistake, and desperately try to keep everyone busy all the time.
If we fall back to the bridge crew example, not every player needs to always be working flat-out. The immersion is quite strong due to the physical setting. Momentary downtime could be useful to build tension and immersion, as players sit by helplessly awaiting the next moment. This is generally quite difficult to do in a low-immersion setting, but in something like a LARP it can be fairly effective if your physical setting fits your game setting.
A major difficulty is the modern person's reaction to a lull: Facebook.
Trained by almost two decades of on-line games, players have learned that the game is only one of the ten things they do while playing the game. They fill the empty grind time with random internet doings.
This will prevent immersion. If you are playing a game in person, absolutely ban non-game activities, especially if people are using computers. They can do non-game activities if they need to, but they need to leave the game arena to do so. Otherwise, even momentary lulls will sabotage the flow of the game as the party is torn apart by their secondary pursuits.
Ahhh, such concerns are at the heart of LARP design, and LARP design is at the heart of in-person AR games like the bridge crew example.
It would be interesting to do more of these designs, but it takes a large number of players with a big chunk of time on their hands, so you basically need a college.
Wednesday, October 27, 2010
Sunday, October 24, 2010
Empty Worlds Plus
A steadily increasing number of indie games are following the Dwarf Fortress trend: the point of the game is to build. The world is normally a randomly generated mess to give everything variety, but the focus is definitely on construction rather than exploration.
This is part of a trend towards larger game worlds in general, but games with an actual budget tend to try to populate their large worlds themselves rather than relying on the player. However, I'd like to focus on player-generated worlds for now.
Most player-generated worlds are generated for the entertainment of the player generating them. If you play Dwarf Fortress, you're probably mostly interested in seeing how far you can stretch the game world.
A few newer games are beginning to add human (by which I mean player-human, not game-world human) elements. For example, in Minecraft it is pretty common for players to get together on a shared server to build. Even in Dwarf Fortress, shared fortresses and long, amusingly-told war stories are common.
It's clear that sharing player-constructed worlds with other players in various ways is a concept with legs. One way to do this is through shared construction or management of a world, and another way is through creating artistic sets and allowing other players to experience/use them. Both of these have merit, and they both have, at their core, something that isn't in the game: other humans.
Whether you're interacting with other humans in real time or delayed, whether they have management rights or are just here for the view, these humans are experiencing the world through their own filters, their own judgment.
This sounds painfully obvious, but this is actually an important point that isn't addressed well enough in these games with player-generated shared worlds. What we could really use is some mechanism for helping players to share their thoughts with each other in game space.
Text or voice chat is more or less all that exists right now, and it serves well enough that people don't tend to think about better options. However, if you look at any out-of-game sharing, you'll see an immense amount of emotional value added through pictures, music, camera control, snarky voice- or text-over that's linked to specific times or camera angles.
This kind of machinima is certainly not unique to shared, constructed worlds. For example, there are whole series using this kind of stuff with the Halo engines. However, I think that shared, generated worlds have a much higher need for this sort of thing and need to have it integrated into the game itself.
It's a kind of scripting. We can't use normal scripting languages as the interface, though, because they're not really suited for what we want to do. We almost certainly need a scripting language to actually control the world, but it's overly cumbersome for sharing our emotions and judgments.
So what can we use?
Well, what do we want to do? We want to allow players to inject their own values into the experience of other players. For example, if there's a part of the map with an incredible view, we want to lure players into looking out over the map from that viewpoint and hopefully have them struck by the same sights in roughly the same way. Or we want the player to be able to tell a story about the people who lived in a given place through tiny details that can be discovered by investigating, leading you on a cross-map treasure hunt.
To a large extent, a simple tagging system would work - allowing players to add text or other media to in-world locations or objects. Of course, you would also have to be able to add a camera vector and a "flag" to alert nearby players that there's a thingie to trigger.
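If you want to picture it, the tag itself doesn't need to be complicated. Here's a minimal Python sketch of what one might carry - every field name is invented for illustration, not taken from any real engine:

```python
from dataclasses import dataclass, field

@dataclass
class WorldTag:
    """A player-authored annotation pinned to a spot in the shared world."""
    position: tuple                   # (x, y, z) world coordinates
    camera_dir: tuple                 # suggested view vector, for framing a vista
    trigger_radius: float             # how close a player must be to see the flag
    media: list = field(default_factory=list)  # text, image, or audio payloads
    author: str = ""

def nearby_tags(tags, player_pos):
    """Return the tags whose trigger radius contains the player."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return [t for t in tags if dist2(t.position, player_pos) <= t.trigger_radius ** 2]
```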
Unfortunately, the sort of experience we're talking about isn't really "chunky". It's theoretically possible to cleverly put down triggers such that they cause a more continuous flow of... what other people think... but it would require an extremely deft touch.
Instead, we might want to think about how to allow players to sculpt the world in not just a spatial way, but an emotional one.
The problem isn't the sculpting - there's a thousand ways to do that. The problem is how to give it to the other participants as they pass through the world without (A) stealing their control or (B) being irritating.
The three ways I can think of are music, avatar animation, and companions.
Music is probably the best way, since music is so good at causing emotions. If your game has a music generator in it, it should feed partially off the "emotional sculpting" the players have been doing. It probably is less about the exact spot you're standing on and more about the emotional content of the spots you can see. So if you look out over a vast plain of nostalgia-covered lands, the music should get extremely nostalgic. If you're in a tight space, able to only really see walls, the claustrophobic lack of other areas with music triggers will be reflected in the music.
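Here's a rough Python sketch of that "weight what you can see" idea. The region objects, their emotion labels, and the line-of-sight test are all assumptions, not any real engine's API:

```python
def emotional_mix(player, regions, visible):
    """Blend the emotion tags of every visible region, weighted so that
    large, close regions dominate what the music generator hears."""
    mix, total = {}, 0.0
    for region in regions:
        if not visible(player, region):            # assumed line-of-sight test
            continue
        weight = region.size / max(region.distance_to(player), 1.0)
        for emotion, strength in region.emotions.items():  # e.g. {"nostalgia": 0.8}
            mix[emotion] = mix.get(emotion, 0.0) + strength * weight
        total += weight
    # An empty mix (nothing visible but walls) is itself a signal:
    # the generator can read it as claustrophobia.
    return {e: s / total for e, s in mix.items()} if total else {}
```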
Still, adaptive music may not be something you can do. So two more options.
Avatar animation is suitable for third-person games. By allowing people to make "points of interest" or "ideal camera angles", you can get the avatar to look in that direction, signalling to the player to do the same. Emotional values in the landscape can affect the avatar's gross animation or coloration as well.
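A sketch of the look-at logic, assuming points of interest are plain 2D positions and the avatar's facing is a unit vector:

```python
import math

def head_turn_target(avatar_pos, avatar_facing, points_of_interest, max_range=30.0):
    """Pick the closest point of interest in front of the avatar; the
    animation system can then ease the head toward it."""
    best, best_d = None, max_range
    for px, py in points_of_interest:
        dx, dy = px - avatar_pos[0], py - avatar_pos[1]
        d = math.hypot(dx, dy)
        if d == 0 or d >= best_d:
            continue
        # Only glance at things roughly in front of us (within 90 degrees).
        if (dx * avatar_facing[0] + dy * avatar_facing[1]) / d > 0.0:
            best, best_d = (dx / d, dy / d), d
    return best  # unit direction to look toward, or None
```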
The last option I can think of is similar to avatar animation: a companion follows you around or, at least, is nearby. The companion would be scripted to do specific kinds of things, but this scripting should be open to disruption rather than rock-solid. Companions might be ghosts, people, fairies, dogs - anything that can move around and can show an emotion. They can be aggressive like Navi, or completely passive inhabitants of the region.
The companion option requires a pretty powerful pathfinder and adaptive scripting system, though, so it's not quite as easy to do as you might think.
Anyway, those are my quick thoughts on the matter. What are yours?
Friday, October 22, 2010
The Proxy Publisher
I've been reading and viewing a lot of stuff, and I've recently seen the Creative Commons license popping up everywhere. Even in paper books! If you're not sure exactly what Creative Commons is, I recommend this as a first read, keeping in mind that it's three years old.
So CC is spreading, and it's popular. Yay!
With that said, I also have seen a lot of people make the same mistake.
Keep in mind that the Creative Commons community is large and diverse: my opinions are my own. However, my opinions are not unique.
The mistake many people make is right at the beginning. They decide they're going to wade into the Creative Commons waters, so they release something under a Creative Commons license. They feel good, they're participating, they're "next wave", yeah!
The problem is that they publish under the most restrictive version of the license they can imagine: CC BY-NC-ND. This is the "proxy publisher" setting. It says "spread this around, please, but don't change anything!"
There's nothing fundamentally wrong with this license, and in some cases it is what you might actually want to do. However, in most cases, it's a mistake.
The problem is the ND clause: "No Derivatives".
I understand the idea. These artists would like to keep control over what they release. They don't want it mutating out of their control. BUT. This is a flawed presumption. In many cases, it is the wrong way to come at the situation.
The question to ask yourself: what does the ND license actually protect me from?
Has a CC-released work ever been derived into something that upset the author and was popular enough that it got noticed? I know a few derivatives have been pretty crap, but the community simply ignores them. No harm is prevented by taking the ND license.
Also, derivatives (in the unlikely chance you are that popular) are what give your product (and your personal brand) serious legs. The ability for people in various circumstances to adapt your work to suit their needs means that your work can reach them - and everyone they touch.
To be honest, the only thing I think might be better released as ND is intensely personal art. Even then, I would probably argue against it. ND protects you from something that isn't a threat and cripples your circulation.
I know that a lot of artists have problems with piracy. But that's a completely unrelated issue. Completely. The two aren't even vaguely related, and ND will not protect you from pirates, or from tweens who make your pictures into avatars, or from fanfic writers - all it will do is discourage talented people who actually want to build on what you've created.
What I'm trying to say is... think carefully about what you're releasing. Do you really need to maintain an iron grip? Don't you think the community has something to add? Do you really think you are the only person with useful thoughts on this matter?
When you release, go whole hog: leave off the ND. Release with SA, instead. The worst that could happen is that people are led to your work through screwball videos or parodies, and is that really so bad?
Wednesday, October 13, 2010
Social AI Redux
I've talked a lot about social AI in the past, but it's been at least a year, so here's another post about it. Please note that by "social AI", I really mean "the appearance of social AI". I don't have any intention to solve any fundamental AI challenges.
One issue with social AI is that it requires really dense, nuanced information to react realistically. On the order of hundreds or thousands of times more nuanced than today's video games. Perhaps the Kinect will provide enough human feedback, barely, but in more general situations you're going to have to synthesize a lot of the density out of the game world without many cues from human input.
As an example, if your friend touches you, the exact meaning varies hugely depending on the location and type of touch. Arm, shoulder, head, back, waist, chest, hand, etc, etc. Each location gives a different impression. Is it a tap, a pat, a reassuring grip, a restraining grip, a rap, a friendly punch, a warning punch, a caress, a guiding push?
To say that there are N ways to get touched, or even N thousand ways, is a mistake. The fact is that there are an infinite number of ways to get touched. You can't list them all, and even if you could, you don't want to try to interpret them based on a big list. Instead, you want input that is sufficiently dense so as to allow the program to figure out the nature of the touch based on context.
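To make that concrete, here's a toy Python sketch of the difference: the touch arrives as a handful of continuous measurements, and the interpretation is scored from those measurements plus context rather than looked up in a giant list. Every feature and weight here is invented:

```python
from dataclasses import dataclass

@dataclass
class Touch:
    """A touch as continuous measurements, not an entry in a big list."""
    location: str      # body region, e.g. "shoulder"
    pressure: float    # 0.0 (graze) .. 1.0 (grip)
    speed: float       # 0.0 (resting) .. 1.0 (sharp rap)
    duration: float    # seconds of contact

def read_touch(touch, context):
    """Score possible readings from the raw measurements plus context.
    Context is just a dict like {"mood": 0.7, "familiarity": 0.9}."""
    warmth = context.get("familiarity", 0.5) * context.get("mood", 0.5)
    scores = {
        "reassurance": touch.pressure * touch.duration * warmth,
        "warning":     touch.pressure * touch.speed * (1.0 - warmth),
        "greeting":    touch.speed * (1.0 - touch.duration) * warmth,
    }
    return max(scores, key=scores.get)
```

The point isn't that these three labels are right - it's that the same raw touch lands differently as the context values shift, with no branch ever enumerating "friendly punch" versus "warning punch".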
By the way, this density is also important to humans. Humans who are stuck in simple and restricted environments tend to have simpler and less nuanced responses. They often go a little bit batty, like an edge case in a simulation. For example, being stuck in an arctic base for six months. This is fairly well documented, to the point where there are specific recommendations for how to keep your people from going nuts when you station them somewhere with so little stimulation.
Well, aside from that, there's also a ton of interpretive complexity. What is a friendly tap in one country might be an aggressive warning in another, or even a flirty move. Even within one country, different people will react differently. Even the same person will react differently depending on the moods of the people involved and the surrounding context.
The normal method of trying to make a social AI for a game is to give you a variety of interactions, and the AI responds to those interactions in a fairly straightforward way. At its peak, this consists of basically building up a tremendous expert system which takes the mood and the situation and the type of tap and then spits out a response.
This is not a good way to do it for the same reason that carefully scripting every branch of a plot is not a good way to do it. A) it creates distinct 'paths' or 'branches', rather than giving real freedom. B) it gets radically more complex with every choice or branch you add.
So, to quickly state where we are:
You need to do social interactions with a culturally and contextually aware algorithm, rather than using a state machine or expert system, if you want really adaptable social interactions.
You need extremely varied and nuanced inputs to feed that algorithm, or you'll end up basically creating a state machine. AKA "The Arctic Base Issue".
Very few or perhaps none of the human input devices available to you can actually transmit that much nuance.
...
That's really only a tenth of the story. It's the foundation on which you start to talk about social AI, or the appearance of social AI. But it's plenty long as is, I think.
Saturday, October 09, 2010
Interactive Movies
I've always been a fan of Quantic Dream's aspirations. The games they create are always worth playing, if just for how unusual they are. For those of you playing at home, Quantic Dream is known for creating games that are interactive movies. Heavy Rain was the most recent one.
I think it's a genre waiting to unfold. But I also think that Quantic Dream's games so far haven't started the unfolding process.
When I play their games, I am happy to play them. But the instant I set one down, there's absolutely no draw to pick it back up. In fact, there's a barrier.
To me, this feels the same way as when I play a long, constructive game like Civilization. If I don't finish the game in one stretch, I start over. Trying to pick up a half-built world is no fun. All the details that kept you interested have faded. You can remember the big strokes, but there's not much emotional investment any more.
Of course, when I lose track of a Civilization game, I can just start a new, random game. With an interactive movie, that's not really the case. Starting over means retreading the same steps, and it gets more boring each time.
What I'm saying is that I love playing interactive movies, but they're like movies: you don't watch half a movie and then come back to it the next day. At least, most people don't, and it's not really recommended.
The techniques you use to establish emotional investment in a movie or world-building game like Civilization revolve around details. The good acting, the interesting setpiece, the particular way he talks about her, and exactly what that city is building right now. This works well, it really gets us going. But it fades fast.
Games that are intended to be dropped and picked up again use different techniques. The Gears of War and Halo goons don't have to have nuanced relationships and excellent body language. The draw is in the quick drop into gameplay. It doesn't matter what the details are, the algorithm of the gameplay is what brings you back.
Or, in the case of things like Star Wars and Star Trek, setting. I think that strong settings feel like gameplay in some ways. A setting with strong, evocative points that all the details hang from seems easier to immerse yourself in, especially on repeat visits.
At least, that's my feeling.
What's in the future for interactive movies and story-games?
Well, I think we'll find they become strongly episodic, and the episodes will be very short - four to five hours. Maybe the episodes will be built and sold in distinct packages, but I think it's more likely that the engine that runs the game will know how to build the next episode based on what happened in your last episode, within the limits of the dramatic arc.
We already see this a bit in Heavy Rain, where the game's progression is radically different depending on exactly what your characters do. I think Heavy Rain accomplishes this through insane amounts of carefully scripted events, but our techniques for doing some of this generatively are steadily advancing. As they do, I expect games to draw very clear distinctions between their internal episodes.
I also think that story games will have to be built around settings rather than stories. The stories are important, but I think that a really vibrant setting with very clear "centerpieces" (such as the Force, the Enterprise, Mordor, etc) will be what draws the players back in after each episode, and I think most games will lean that way.
The biggest problem with creating a game like this is what to do at chapter ends.
Players are notoriously unreliable. Some players will happily play for twelve hours straight, while others will only eke out maybe forty-five minutes. You can ask them how long they plan on playing - and I expect we will start to - but even that is only vaguely accurate.
The ending of a chapter is therefore a real problem. If your player quits halfway into the chapter, that can still be salvaged, although it's not ideal. But what if a player quits fifteen minutes before the end boss? Or what if the chapter ends, but the player wants to continue playing for another twenty minutes?
I think there are two solutions, one for each of those problems. Here are my suggestions.
For chapters ending early, I recommend an adaptive progression. If your player breaks it off, then when he comes back, make sure there's at least an hour of chapter left, even if he quit literally at the final fight. This hour of chapter gives you time to reintroduce the player to all the tiny details that create the moment-to-moment emotional investment. After all, you have to assume that they have forgotten all the little details from the first part of the chapter.
I don't mean that Gloria is trying to level up her fireball, or that Sven loves Sue. I mean the emotional touches - Gloria's husky voice, Sven's nervous coin-rolling trick, and the way Sue struggles with her umbrella in the rain. You can't just rush back into the scene where Sven is declaring his love for Sue, you've got to re-establish both Sven and Sue as characters worth caring about.
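As a sketch, the adaptive progression might be nothing more than this. The one-hour floor and the ten-minute minimum are assumptions pulled from the argument above, not tested numbers:

```python
def plan_resume(total_minutes, played_minutes):
    """Decide how much chapter to serve when a player comes back,
    stretching what's left to at least an hour and spending the
    stretch on beats that re-establish the characters."""
    if played_minutes == 0:
        return {"reintroduction": 0, "plot": total_minutes}  # fresh start
    remaining = max(total_minutes - played_minutes, 0)
    stretched = max(remaining, 60)                   # never less than an hour left
    reintroduction = max(stretched - remaining, 10)  # always re-establish a little
    return {"reintroduction": reintroduction, "plot": remaining}
```

So a player who quit literally at the final fight (say, five minutes of plot left) gets fifty-five minutes of husky voices and coin tricks before the fight fires.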
The other side of the problem is the post-chapter play.
Done right, an ending will really kick you down. There's a feeling like you just want to sit quietly for a while. As gamers, we seem to discourage this feeling. These days, our games either never end, or have endings that immediately scoot us on into the rest of the IP. It's rare that a game ends with as much emotional force as Chrono Trigger, Beyond Good and Evil, and so many other famous games.
I recommend that we do not allow players to instantly move on to the next episode. I recommend that there is a second type of play where the players basically play house. Allowing the players to do some neutral gameplay like walking around the town, designing costumes, and playing around with move sets will allow them to decompress from what was hopefully a fantastic ending.
So, that's my prediction for the future of story games. You?
Thursday, October 07, 2010
Simulating the Early Years
As you know, I'm a big fan of creating worlds and then colonizing them. Some of the research I do towards this end is interesting enough that it can stand on its own. Like this post about the very early years, when hunter-gatherers gave way to sedentary farmers.
If you have any familiarity with the subject, you might have the impression that this was an explosion or revolution. Poom! Now we're all farmers and we have cities, high population density, specialization, and so on.
That's not really true. Farming flickered on and off for thousands of years before it really took root, and even then its spread was pretty rough. I don't think there's any textbook-endorsed reason for this "flickering", but a little thought yields an obvious answer.
The stuff we grow on farms today is not wild stuff, it's tame stuff. It's domesticated, just like the dog: we brought in wild barley and bred it decade after decade until it behaved like we wanted it to.
So the first farmers were starting with wild plants. Wild plants that grow perfectly fine wherever they currently grow in the wild, and don't really grow much better if carefully tended. It's easier just to go pick raspberries than grow bushes of them. Sure, you might plant some raspberries along the river bank so that when you swing by again in a few years you'll have plenty of raspberries, but you aren't going to sit down and watch.
In order to successfully go agricultural, your farm needs to produce an immense yield compared to the wild land's yield. Factors that can improve your relative yield are: plant domestication, irrigation, and having crappy wild yield (or being unable to range very far to collect it). I imagine that early farmers would often need all three of these factors in order to really consider agriculture as a way of life.
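As a toy model, those factors just multiply. All the numbers here are invented; the only point is the shape of the comparison:

```python
def worth_settling(wild_yield, foraging_range, domestication, irrigation):
    """Compare what a band gathers by ranging against what a farm on the
    same land would produce. Inputs are abstract multipliers."""
    gathered = wild_yield * foraging_range
    farmed = wild_yield * domestication * irrigation  # the farm starts from wild stock
    return farmed > gathered * 3.0  # demand an immense margin before committing

# Rich land plus free ranging keeps a band mobile:
worth_settling(wild_yield=1.0, foraging_range=2.0, domestication=1.5, irrigation=1.2)  # False
# Hemmed-in band, part-tamed grain, a river to irrigate - they settle:
worth_settling(wild_yield=0.6, foraging_range=0.8, domestication=3.0, irrigation=2.5)  # True
```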
You may think that plant domestication is kind of one-way - once someone domesticates it, it stays domesticated. However, that's probably not really true. Unfortunately, even a successful farmstead is in a tremendous amount of danger. Not just from raiders or weather, but also from soil degradation and salinity buildup. Every year, your field probably produces less than it did the year before. The farm goes bust, the slightly-tamed crops interbreed with their wild brothers.
Eventually, there were enough pseudo-domesticated strains of cereal grass running around that farming could finish the job and properly take root. This seems to have happened on earth about 10,000 years ago, although there are signs that it took root and then un-rooted many times before.
The Fertile Crescent is the famous "birth of agriculture" spot, and this is because the crescent has long dry seasons and short rainy seasons. Grasses grow in these locations instead of dense forests, and that has two effects. 1) long grasses produce excellent farming soil, and 2) cereal grasses are pretty adaptable, and can be domesticated to higher yields very easily. The highly variable terrain also makes settling down a bit more attractive: it's harder to range very far on foot.
These conditions satisfy two of the three preconditions mentioned (easy domestication and limited ranging), and irrigation was also possible, so the crescent was an ideal place for agriculture to set in.
In Africa, things like millet and coffee were being domesticated, although not at quite such an early date. The places these agricultural revolutions took place in (such as the Ethiopian highlands) have many of the same characteristics as the Fertile Crescent: wet season and dry season, variable terrain, available water.
Asia and the Americas followed the same kind of pattern.
Why does this matter?
Well, if you're putting down initial settlements in a world-building game, and you're starting at the agricultural revolution, then this tells you precisely where to put them: on variable terrain with a wet and dry season.
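In a world generator, that placement rule is a one-function tile score. A sketch, with all the tile fields assumed:

```python
def agriculture_score(tile):
    """Score a map tile as a starting settlement site, following the
    Fertile Crescent pattern: seasonal rain contrast, rough terrain, water."""
    seasonality = max(tile.wet_season_rain - tile.dry_season_rain, 0.0)
    score = seasonality * (1.0 + tile.terrain_variance)  # rough country discourages ranging
    if tile.has_fresh_water:                             # irrigation must be possible
        score *= 2.0
    return score

def place_settlements(tiles, count):
    """Drop the first farming settlements on the best-scoring tiles."""
    return sorted(tiles, key=agriculture_score, reverse=True)[:count]
```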
It also gives you a new lever. Normally, farms are treated as farms. However, farm technology is not simply irrigation and crop rotation. The crops themselves are a technology. The products we grow today have spent thousands of years slowly shifting from the original versions. Over time, as long as war and ruin don't interfere, that technology will improve.
If you really want the game to rely on food, you can also work in soil degradation and salinity. The Middle East wasn't always a desert: many of those lands were lush and fertile until they were catastrophically overfarmed. There's quite a bit of evidence that many of the mostly-forgotten great old empires from across the world collapsed due to overfarming.
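Both levers fit into a few lines of yearly bookkeeping. A sketch with invented rates (the region fields are hypothetical):

```python
def advance_year(region, disrupted=False):
    """Crops slowly domesticate while farms run; soil quietly degrades.
    War or collapse lets part-tamed strains slide back toward the wild."""
    if disrupted:
        region.domestication = max(1.0, region.domestication * 0.95)  # interbreeds with wild stock
    else:
        region.domestication *= 1.01      # selection pressure, year after year
    region.fertility *= 0.995             # soil degradation and salinity buildup
    region.harvest = region.domestication * region.fertility * region.irrigation
```

Run that for a few in-game centuries and you get the "flickering" for free: farms that boom, exhaust their soil, collapse, and hand their half-tamed grain back to the wild.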
Anyhow, I actually wanted to post on weather simulation. Maybe some other day.
Tuesday, October 05, 2010
The Future of Tweeting
I was going to call this "the Social Network", until I realized there's a movie out now by that name. This post talks about potential descendants of Twitter and the like.
I just had a conversation with a friend, where we talked about the difficulties of actually finding the right things in this world full of stuff. As a quote:
"They understand they want a cube that's two inches to an edge and blue, but they don't know what that cube is called, or what its manufacturer would market it as. They know they want a material that is liquid at room temperature and has a viscosity somewhere between water and grapefruit juice, but they don't know who would make such a thing, or what market its currently being sold for."
This is something that does crop up fairly regularly in both large and small cases. For example, I met a man who needed a "poster-quality technical illustration", but didn't know the terminology, so he was searching for "data visualization" and other keywords that kept leading him down dead ends.
My response to my friend was "I think a descendant of Twitter will solve this problem."
His response was "I don't think this is one of those problems that social networking or its derivatives is going to help with. We're not searching based on who knows who, or even who knows what, or even for people at all (expect to say we're searching for the person who makes the thing we want.) We're searching for an object, process, or intellectual property that meets certain parameters."
My response was "I'll write a blog post!" and his response was "facepalm". And now you are up to speed.
The future of everything is the social network.
First things first: the term "social network" is being radically misused. Facebook is not a social network; it's a web site that enables social networks. Twitter is not a social network; it's a web site and API that enables social networks. So, when I say "social network", I don't mean "Twitter". I mean the underlying mass of connections between the participants on various social networking sites, and all the context those connections contain.
Right now, social networks are seen as just that - social. But that's just how the current generation is marketed. In reality, a social network is about connections that you know how much to trust. The people you follow on Twitter, you follow because you value their input at some level. Maybe you follow Gibson because you trust his judgment, maybe because he throws out interesting links, maybe because you hate him but you want to track what he says. The point is, you understand how much he can be trusted on what subject.
(As it turns out, Gibson is mostly a retweeter, so most people that follow him are using him as a source for filtered links. But we trust his filter to do as we expect it to, letting through certain kinds of links and not others.)
Things go the other direction, too. If you look at the Freakonomics blog or Warren Ellis, you'll find that these people use their readers as a vast resource. They constantly ask for information - what's a good band, give us quotes from 1930, send me your pictures, what do you think of this analysis... people with a lot of readers tend to be very interactive with those readers. If the Freakonomics people had posted "I need this kind of data visualization..." they wouldn't have needed to search for the right match for days: some of their readers would have instantly known what they were asking for, and they would have been hooked up within hours.
It's not that these people trust their readers. I'm sure a lot of their readers send in crap, and I'm sure they get a lot of spam. But they have a lot of really great readers specifically because their readership trusts them (in specific ways) and therefore wants to impress them/participate.
That's the state of the world today.
It's not really that much of a jump to imagine the next generation of software will help with this sort of thing, will be more useful for the kinds of meaningful interactions that really make social networking worthwhile. Of course, you'll still be able to hear that Anne just ate a sandwich, if you want. Those interactions have value, too.
The other half of this is that I don't think it's much of a jump to imagine companies (or people within companies) using social networking software to figure out exactly what they're looking for and a good source of it.
Social networks have been used for thousands or even tens of thousands of years for precisely this purpose. The purchasing manager wants to buy Ye Old Paste, so he asks his friends if they know a good paste maker, maybe someone who'll hook them up at a discount. It continues today - my company regularly gets requests from people who want to know more about which specific panels they should buy, even though we don't sell or install them. It's because they know us, and we know these guys...
It's the exact same as Facebook or Twitter, except without the technical assistance of a piece of software.
Sure, if your social network fails (or is hopelessly inadequate) you fall back on reading advertising or Google-combing. But those are techniques I would like to render obsolete. If we can radically expand social networking software to the point where it allows people to talk to each other in a businesslike way without feeling the stigma of "found it on the internet", we may very well end up making that desperate and blind search for an answer a thing of the past.
Actually, I've already hired people via Twitter, so I can't imagine it'll be long in coming.
I just had a conversation with a friend, where we talked about the difficulties of actually finding the right things in this world full of stuff. As a quote:
"They understand they want a cube that's two inches to an edge and blue, but they don't know what that cube is called, or what its manufacturer would market it as. They know they want a material that is liquid at room temperature and has a viscosity somewhere between water and grapefruit juice, but they don't know who would make such a thing, or what market its currently being sold for."
This is something that does crop up fairly regularly in both large and small cases. For example, I met a man who needed a "poster-quality technical illustration", but didn't know the terminology, so he was searching for "data visualization" and other keywords that kept leading him down dead ends.
My response to my friend was "I think a descendant of Twitter will solve this problem."
His response was "I don't think this is one of those problems that social networking or its derivatives is going to help with. We're not searching based on who knows who, or even who knows what, or even for people at all (expect to say we're searching for the person who makes the thing we want.) We're searching for an object, process, or intellectual property that meets certain parameters."
My response was "I'll write a blog post!" and his response was "facepalm". And now you are up to speed.
The future of everything is the social network.
First thing first, the term "social network" is being radically misused. Facebook is not a social network, it's a web site that enables social networks. Twitter is not a social network, it's a web site and API that enables social networks. So, when I say "social network", I don't mean "Twitter". I mean the underlying mass of connections between the participants on various social networking sites, and all the context those connections contain.
Right now, social networks are seen as just that - social. But that's just how the current generation is marketed. In reality, a social network is about connections that you know how much to trust. The people you follow on Twitter, you follow because you value their input at some level. Maybe you follow Gibson because you trust his judgment, maybe because he throws out interesting links, maybe because you hate him but you want to track what he says. The point is, you understand how much he can be trusted on what subject.
(As it turns out, Gibson is mostly a retweeter, so most people that follow him are using him as a source for filtered links. But we trust his filter to do as we expect it to, letting through certain kinds of links and not others.)
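To make that concrete, here's a minimal sketch in Python of the structure I'm describing - a social network modeled not as a flat friend list but as per-topic trust weights, which is what a "descendant of Twitter" would actually have to query. Every name, topic, and number here is invented for illustration; this is not anyone's real API.

```python
# A minimal sketch of a social network as trust-weighted, per-topic
# connections, rather than a flat list of "friends". All names, topics,
# and weights are invented for illustration.

from collections import defaultdict

class SocialGraph:
    def __init__(self):
        # trust[follower][source][topic] -> how much `follower`
        # trusts `source` on `topic` (0.0 to 1.0)
        self.trust = defaultdict(lambda: defaultdict(dict))

    def follow(self, follower, source, topic, weight):
        """Record that `follower` trusts `source` on `topic` at `weight`."""
        self.trust[follower][source][topic] = weight

    def best_sources(self, follower, topic, limit=3):
        """Rank the people this follower trusts most on a given topic."""
        ranked = [
            (source, topics[topic])
            for source, topics in self.trust[follower].items()
            if topic in topics
        ]
        ranked.sort(key=lambda pair: pair[1], reverse=True)
        return ranked[:limit]

graph = SocialGraph()
graph.follow("me", "gibson", "filtered_links", 0.9)
graph.follow("me", "gibson", "politics", 0.2)
graph.follow("me", "freakonomics", "data_visualization", 0.8)

# "Who do I trust on this subject?" becomes a lookup, not a blind search.
print(graph.best_sources("me", "data_visualization"))
# -> [('freakonomics', 0.8)]
```

The point of the sketch is the query at the end: once the connections carry subject-specific trust, finding a source is a lookup through people you already trust, not a Google-comb.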
Things go the other direction, too. If you look at the Freakonomics blog or Warren Ellis, you'll find that these people use their readers as a vast resource. They constantly ask for information - what's a good band, give us quotes from 1930, send me your pictures, what do you think of this analysis... people with a lot of readers tend to be very interactive with those readers. If the Freakonomics people had posted "I need this kind of data visualization...", they wouldn't have needed to search for the right match for days: some of their readers would have instantly known what they were asking for, and they would have been hooked up inside hours.
It's not that these people trust their readers. I'm sure a lot of their readers send in crap, and I'm sure they get a lot of spam. But they have a lot of really great readers specifically because their readership trusts them (in specific ways) and therefore wants to impress them/participate.
That's the state of the world today.
It's not really that much of a jump to imagine that the next generation of software will help with this sort of thing - that it will be more useful for the kinds of meaningful interactions that really make social networking worthwhile. Of course, you'll still be able to hear that Anne just ate a sandwich, if you want. Those interactions have value, too.
The other half of this is that I don't think it's much of a jump to imagine companies (or people within companies) using social networking software to figure out exactly what they're looking for and to find a good source for it.
Social networks have been used for thousands or even tens of thousands of years for precisely this purpose. The purchasing manager wants to buy Ye Old Paste, so he asks his friends if they know a good paste maker, maybe someone who'll hook them up at a discount. It continues today - my company regularly gets requests from people who want to know more about what specific panels they should buy, even though we don't sell or install them. It's because they know us, and we know these guys...
It's exactly the same as Facebook or Twitter, just without the technical assistance of a piece of software.
Sure, if your social network fails (or is hopelessly inadequate) you fall back on reading advertising or Google-combing. But those are techniques I would like to render obsolete. If we can radically expand social networking software to the point where it allows people to talk to each other in a businesslike way without feeling the stigma of "found it on the internet", we may very well end up making that desperate and blind search for an answer a thing of the past.
Actually, I've already hired people via Twitter, so I can't imagine it'll be long in coming.
Monday, October 04, 2010
The Fog of the Uncanny Valley
Recently, I've been getting irked by the topic of the "uncanny valley", so I'm going to wrestle it into submission in this post. If you don't know what the uncanny valley is, the Wikipedia article is a decent primer - and if you haven't read it before, it contains some of what I'll be talking about. Or, rather, rambling about. This isn't very well pruned.
Moving Target
The first thing is that the uncanny valley is a moving target. People talk about "staying out of the uncanny valley", but the position of the uncanny valley depends on the person doing the viewing, and even then can change over time.
For example, my uncanny valley is far to the "left" of many people's. The dead-eyed near-humans in modern video games are pretty far up my right-hand slope, even though they seem to fall pretty clearly into the valley for many non-gamers. On the other hand, Bayonetta is the creepiest thing ever made, wallowing in the uncanny valley even though she's obviously intended to be reasonably far up the left-hand side of that slope.
I don't think this is because of any intrinsic trait I have. I think the curve is a learned one. The hypothesis of the valley is that something, by failing to be human enough, sets off our creepy alert. I've spent enough time working with fictional 3D human beings that their attributes no longer trigger that alert. I think this can happen to anyone.
Therefore, I think the idea of "staying out of the valley" is actually a wrong-headed one. Attempting to climb the right side of the valley doesn't even have to succeed for it to work: just making the attempt and exposing the gaming population to these entities will cause their valleys, as a whole, to shift left. I.e., the valley will always be located just to the left of what we normally think of as human, even if what we normally include in that group is a bunch of fictional characters.
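Here's a toy way to picture that shift: a baseline affinity that rises with human-likeness, minus a dip whose position depends on the viewer's exposure. The curve shape and every number below are made up purely to illustrate the left-shift argument - nothing here is measured from anything.

```python
# A toy model of the "moving" valley: affinity rises with human-likeness,
# minus a dip that slides left as the viewer's exposure grows. All
# constants are invented for illustration.

import math

def affinity(likeness, exposure=0.0):
    """likeness: 0 (abstract) to 1 (human).
    exposure: 0 (non-gamer) to 1 (lots of time with fictional 3D humans)."""
    valley_center = 0.8 - 0.2 * min(exposure, 1.0)  # more exposure, dip moves left
    baseline = likeness                              # more human, more appealing
    dip = 0.6 * math.exp(-((likeness - valley_center) ** 2) / 0.005)
    return baseline - dip

# The same game character at 0.75 likeness, seen by two different viewers:
print(round(affinity(0.75, exposure=0.0), 2))  # 0.39 - deep in a non-gamer's valley
print(round(affinity(0.75, exposure=1.0), 2))  # 0.74 - well up a gamer's right-hand slope
```

Same character, same pixels, very different reaction - which is the whole point: the valley belongs to the viewer, not the artwork.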
You can see the same effect with people who collect dolls or similar. Walk into a room full of dolls and it's incredibly creepy - but the collector is completely used to it.
Of course, I'm talking about "to the left" and "uncanny valley" and all these other words, but there's an important thing to remember.
Not a Good Simplification
Despite the way we treat it, the uncanny valley is not a proven fact. It's not really even a scientific theory: it's barely a hypothesis. It's more of a simplification. It's easy to think of things in terms of the uncanny valley, but it's like thinking of planes as big metal birds: it doesn't actually explain what's going on and is absolutely no help in designing them.
My theory is actually the opposite. I don't believe the uncanny valley happens because something fails to be human enough. I think it's caused by the same thing that causes creepiness in any situation.
If you look at creepy characters and situations, the creepiness is usually caused by magnifying a particular attribute to the point where it is no longer reasonable. Sometimes this is straightforward, such as Freddy Krueger having a fire-scarred face and a bladed glove: we take an attribute that is discomforting, and we amplify it.
Sometimes it can be a bit more abstract. Pyramid Head's creepiness comes from the way we amplify inhuman characteristics. Not non-human, but inhuman: Pyramid Head gives off all the body language of a seriously disturbed person, amped to eleven. His namesake - the pyramid he has instead of a head - serves not to make him creepy, but to make him iconic. He is not creepy because he has a pyramid for a head, he's just easy to remember. His creepy traits are the ones he inherits and amplifies from the creepy people in our lives.
Approached from this standpoint, what we call the "uncanny valley" is clearly just what we call this sort of thing when we do it by accident. The dead-eyed characters from a modern video game aren't creepy because they just fail to be human, they're creepy because having pallid skin and flat, unfocused eyes is an unsettling attribute.
It may be true that they have to be fairly close to human for these attributes to matter. But that's not necessarily any reason to label it as a specifically "relative to human" thing. After all, there are plenty of creepy attributes out there, and even some attributes which are only creepy in certain situations. I can make a creepy cat in the same way I make a creepy person, but I'd probably amp different attributes. Moreover, what's creepy in one situation might be completely uncreepy in another.
For example, the new Street Fighter character C. Viper is extremely creepy to me. It's because she's wearing high heels. Obviously, high heels aren't creepy in and of themselves: it's the fact that a warrior in the Street Fighter universe is wearing high heels.
I also find cel-shaded Link disquieting. Not because he fails to be human, but because he fails to be a cartoon. He amplifies several cartoon traits and leaves others unamplified, creating a really uncomfortable result. To me.
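Here's a sketch of this amplification theory: give each attribute a band that reads as "reasonable" in a given context, and score creepiness by how far an amplified attribute lands outside that band. The attributes, contexts, and numbers below are all invented for illustration - the point is only that the same trait scores differently in different contexts.

```python
# Creepiness as context-relative amplification: each attribute has a band
# that reads as "reasonable" in a given context, and creepiness is how far
# a value lands outside it. All attributes, contexts, and bands invented.

REASONABLE = {
    "person":         {"eye_focus": (0.7, 1.0), "skin_tone_warmth": (0.5, 1.0)},
    "street_fighter": {"heel_height": (0.0, 0.1)},  # warriors fight in flats
    "cocktail_party": {"heel_height": (0.0, 0.8)},  # same trait, wider band
}

def creepiness(context, attributes):
    """Sum how far each attribute falls outside its context's reasonable band."""
    total = 0.0
    for name, value in attributes.items():
        low, high = REASONABLE[context].get(name, (0.0, 1.0))
        if value < low:
            total += low - value
        elif value > high:
            total += value - high
    return total

# Dead-eyed game character: ordinary human traits, amplified out of range.
print(creepiness("person", {"eye_focus": 0.2, "skin_tone_warmth": 0.3}))  # 0.7
# C. Viper's heels: unremarkable at a party, creepy in a fight.
print(creepiness("cocktail_party", {"heel_height": 0.6}))  # 0.0
print(creepiness("street_fighter", {"heel_height": 0.6}))  # 0.5
```

Notice there's no "distance from human" anywhere in the model - the score comes entirely from context-relative amplification, which is the claim.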
So, in the end, I think that what we call the "uncanny valley" is simply when we accidentally amp up disquieting traits.