Wednesday, December 02, 2009

An Article on Choice

I seem to be all about reading other people's articles this week. Here's one from Pixel Poppers on player choice. Doctor Professor is harping on the difference between "real" choice and "fake" choice, which is something I have also harped on. Out here in Boston-land we use the term "agency" as an umbrella word to refer to how much the game allows the players to express themselves and change the game world. I presume that it's not just a local term.

I agree with most of the article, although I think the Little Sister "choice" was a stupid one rather than an interesting one. The reason I think it's stupid is the same reason I think all light side/dark side choice threads are stupid. Although you are faced with the choice fifty times (or five hundred times) in your play, you only actually choose ONCE, near the beginning, and after that you're simply reassuring a skittish computer that you are still playing the same character.

That's a major problem with all these games that let you choose between two narrative options. In order to really make them even vaguely interesting, you'd have to (A) have a lot more characterization of the avatar and (B) have choices present themselves very differently depending on past choices. For example, in Fallout 3, you can blow up or disarm a nuclear bomb in the first city. This is a simple "good vs evil" decision. However, blowing up the city is a rather hideously evil thing to do, just fantastically evil.

Why is it that your avatar, a man with such intense evil in his past, can then go on to cheerfully befriend the other cities and people in the game? Obviously, the choices for these other cities were scripted to be "compatible" with "every play thread", which in turn means they don't express the avatar's personality very well at all. I could see getting along with these future towns if it were played in a sleazy way, or a repentant way, but that doesn't happen.

The act of blowing up the city has profound game effects - it actually makes you use an entirely different city, and gives you access to all kinds of other quests. However, it doesn't actually change your AVATAR. Your avatar still thinks to himself, at all further moments, "well, I could be nice here, or maybe a little mean". That choice fails to have any meaning to someone who personally nuked a city.

So while I agree with the linked article, I would stress the need to drag a little attention off of scripting the world and on to scripting the characters. Funnily, the current situation is actually the reverse of the old "dost thou love me" trick. While the game does give you lots of agency, it gives the illusion that your earlier choices didn't matter. Ha!

Properly scripting the character to allow the player to define their avatar's personality is more or less impossible at the current time. But I'd be satisfied with small steps.

Anyway, just more mumbling.

8 comments:

Kevin T said...

I just finished playing Iji not long ago. Not the greatest game, but very good at what it does.

I believe it is the only real example I have seen of a sliding scale of moral choice. Basically, your only real decision in the game is how much you murder enemies, and when. That said, it becomes extremely difficult to go to either extreme, and if you are playing the game on hard mode you might have a brief moment where you murder a horde of enemies just to make things more convenient.

The effects on the game are, for the most part, fairly limited. At most you can make a few fewer enemies for yourself in the future, and change the text-based plot a bit.

Your effects on the world are somewhat limited, but I believe this is the best example of a real gradient when you present the same choice over and over (to kill or not to kill).

If you haven't played the game, I recommend it. It is worth a couple hours of your time.

Craig Perko said...

I think it is possible to make a game that asks moral questions such as a light side/dark side game. But you have to change the contexts for each question, and change them far enough that the answer will be different.

For example, if you are a good guy, the meaningful exploration is what kind of good guy you are. Will you save children if it means killing an innocent? Will you be dishonorable to help someone? How far will you go to be only slightly good?

These questions have meaning and are not all the same question with a pat answer. That sort of exploration is theoretically possible.

I played Iji and it did have a scent of this sort of thing. While the impulse was always the same - to kill or let live - there were contextual limits. Mostly in how hard it was to accomplish the murder/sneakery.

Kevin T said...

I am thinking that it would be possible to explore this kind of thing in a piecemeal fashion.

Let's assume your #1 way of interacting with your environment is NOT through text. It is through violence, or some such. By default people are non-violent towards you, and you experience a large number of situations in a semi-forced story that develop the story as you go.

For example, there are orphans beating an old man to death. Help the orphans and you will get orphan help from then on; help the old man and you will be attacked by crazy orphans in the streets, but the old man joins your party. Things you can do later in the game become easier or harder depending upon who you have with you and what choices you made in the past, but are not solely dependent upon them.

Because you have 'trophies' from your past moral decisions, you can use them to mark what type of person your character is, and it can shape the story because of the changes to your condition.
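To make the "trophies" idea concrete, here is a toy sketch (not from the comment itself; all names are hypothetical): past moral decisions leave persistent markers on the avatar, and later content reads those markers to adjust its difficulty without being hard-gated by them.

```python
# Toy sketch of "trophies": persistent markers from past moral choices.
# All names here are hypothetical, for illustration only.
class Avatar:
    def __init__(self):
        self.trophies = set()  # markers left by past moral decisions

    def earn(self, trophy: str):
        self.trophies.add(trophy)

    def difficulty_modifier(self, quest_favors: str) -> float:
        # Later tasks get easier or harder depending on your past,
        # but are never flatly impossible.
        return 0.5 if quest_favors in self.trophies else 1.5

hero = Avatar()
hero.earn("saved_old_man")
print(hero.difficulty_modifier("saved_old_man"))   # 0.5 - easier for you
print(hero.difficulty_modifier("helped_orphans"))  # 1.5 - harder for you
```

The point of the sketch is that the marker shapes the story by changing your condition, rather than by locking branches outright.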

I think the best way to go about something like moral choices is contextual limits. Unless you killed the guy because you felt there was no choice, the decision doesn't really mean anything. Unless you are making the decisions with an in-game context, they are just arbitrary.

Craig Perko said...

Yes, I think you're right. We now have the power to make worlds that both offer you interesting choices and actually change to reflect your choices.

What I'd really like to see are games that take that deeper. Instead of having one piece (the world) that changes and the rest (your avatar, conversation choices, etc) stay the same, I hope for games where they all change.

I think that it's difficult because of the way we approach writing for our game worlds. Scripting every line of dialog means you can't fork much because the amount of work you have to do rises exponentially. This is in stark contrast to the relative ease of writing a game world that lets you take character A or character B, or lets you blow up a building or go in through the roof, or any of that stuff.
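Back-of-the-envelope, the scripting cost works out like this (a toy illustration, not anything from the post): if every binary choice forks all remaining dialog, the number of distinct paths to write doubles with each fork, whereas a world that varies shared scenes by checking state grows only linearly.

```python
# Toy arithmetic for the "exponential forking" problem.
def scripted_paths(forks: int) -> int:
    """Fully scripted branching: each binary fork doubles the paths to write."""
    return 2 ** forks

def flagged_scenes(forks: int, variants_per_scene: int = 2) -> int:
    """Shared scenes that check world-state flags: work grows linearly."""
    return forks * variants_per_scene

print(scripted_paths(10))   # 1024 distinct dialog paths to author
print(flagged_scenes(10))   # 20 scene variants to author
```

Ten meaningful choices is a modest game, and fully scripting them already means over a thousand paths, which is why forking the world (blow up the building or go in through the roof) scales so much better than forking every line of dialog.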

The question is: how do we link up the stuff we can do algorithmically with a more "meaningful" exploration of the "human condition"?

Kevin T said...

You've already said this in the past, but trying to script everything is going about it the wrong way. Even attempting to make the player have a perfect story is going about it the wrong way.

It is about atmosphere. Attempting to make a sand-box where people feel immersed is unrealistic. Sand-box style games are inherently emotionless, because real life isn't that free-form. There is no context, no hard decisions; everything is inherently whimsical.

I think the key to making a meaningful exploration in the game is to apply more limiting factors to the player. You might let them roam wherever they want, but the plot will move on without them. They might be able to sit and watch, but more horrible things will occur with inaction.

The player needs to be moving in the world, as an agent of change. If the world moves according to the player's whims, then it is all too easy for them.

I think making things open-ended is easier when there is less to be open-ended. At the same time, the choices will be more real to the player because they have meaningful effects on the world around them.

Craig Perko said...

Well, I agree, but only to a point.

I mean, think of all the fun that the physics model gave us in Halo and GTA. So much fun that later games, even in other IPs, tried to simulate the same level of physics.

But the reason it was so successful in Halo and GTA was because the game gave you the freedom to, say, grenade-launch the warthog, or drive a Ferrari off a hundred-foot cliff.

I think there may be something similar in the future for... I hesitate to say "drama". For "people-things".

I'm no fan of open worlds, but this doesn't require an open world. Halo certainly wasn't open-world. It just requires giving the players enough rope to hang themselves with, while still providing enough structure that they have a game to play in between.

Jamey Stevenson said...

Hi Craig, I've been reading your blog for quite a while now, as I find many of the topics you cover interesting and enjoy the thoughtful perspective you apply to them.

First of all, I completely agree with your assessment about why the Little Sister dilemma is uninteresting. The whole nagging "do you still want to be a hero/villain?" thing just makes me imagine a little devil/angel version of Navi from Ocarina of Time. "Hey! Listen! There is an ETHICAL CHOICE to be made!"

However, I feel some trepidation regarding your proposed solution of limiting future options based on past player behavior. To me, this sort of use of ethical metrics in a prescriptive, rather than descriptive, sense is akin to limiting a player's options in order to enforce "good role-playing", which I believe is only really possible to achieve when the player is already on board anyway.

For example, this is a problem I always had with the ethical alignment system in D&D, or at least how I often see it enforced: it assumes that if your character is neutral good, they will always be neutral good. There is nothing within the system to account for personal growth: change, penitence, corruption, and so on. But that strikes me as similar to what you seem to be describing in your proposed solution, which leads back to the same essential problem of really only having a single choice that the player makes toward the beginning of the game.

Now granted, the type of person that would nuke an entire town is unlikely to later have a Grinch-esque change of heart and be suddenly brimming over with goodwill toward others, but that's more of a larger issue with the ludicrous severity of the whole good/evil dichotomy employed by most current ethical game systems. Personally, choices that are so blatantly simplistic aren't that interesting to me to begin with, since, as in the BioShock example, the only real choice the player makes is a single "do I want to play as a hero or villain?" at the beginning of the game.

But if games must include these kinds of unrealistically stark moral choices, is it really an improvement to preclude the possibility of redemption by limiting the player's future choices based on past ones? That's hardly a better reflection of human nature, is it? At best, I would call it an extremely cynical take on the subject. Even if, as you claim, players rarely roleplay this kind of transformative arc for their avatars, why not at least provide the option for the few that do?

Oh, also, I thought it was a really salient point you guys were making about context, and how that could be used to kind of escalate and further explore the ethical ramifications of previous choices. If you have the time, one game I played recently that did an interesting job of this was the interactive fiction game The Baron. Even if you aren't big on text adventures, you should definitely check it out as it is a compelling example of contextual variance as a means of ethical exploration. I'd be curious to hear reactions from either of you if you do play it.

Craig Perko said...

You're completely right. I skipped over the redemption and change aspects due to a lack of space, but I have considered it.

The problem is a more fundamental one. You obviously should be able to change over time: character arcs are a fundamental part of stories, and are painfully badly implemented in games at the moment. In order to be relevant to the player's choices so far, the system has to understand "where" the player is in which arc, and understand the choices that should be available.

This is an extremely tall order. With current methods, that would require hundreds of thousands of scripted lines for any game of significant length.

There are some methods that can be used to get around that a bit, but the problem remains: so long as we explicitly script options, every additional fork doubles our work.
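The "where is the player in which arc" requirement can be sketched as a tiny state machine (purely illustrative; the stage and event names are made up): the system tracks the avatar's current stage and advances it on significant acts, so later choices can be drawn from the current stage rather than from a single early alignment pick.

```python
# Hypothetical sketch: a redemption arc as a tiny state machine.
# Stage and event names are invented for illustration.
TRANSITIONS = {
    ("innocent", "atrocity"): "fallen",
    ("fallen", "act_of_mercy"): "repentant",
    ("repentant", "sacrifice"): "redeemed",
}

def advance(stage: str, event: str) -> str:
    """Move along the arc if the event fits the current stage; else stay put."""
    return TRANSITIONS.get((stage, event), stage)

stage = "innocent"
for event in ["atrocity", "act_of_mercy", "sacrifice"]:
    stage = advance(stage, event)
print(stage)  # "redeemed"
```

A GM does this in their head for free; the hard part Craig describes is generating stage-appropriate options without hand-scripting every combination.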

I have made some tabletop games that tackle these sorts of questions, but they work specifically because the GM and the players can figure out what choices are available and act in character without an explicit set of options.

It's a tough situation.

I haven't played The Baron, and I don't know that I will be able to play it any time soon, but if I do, I'll consider posting about it.