I was cruising around and found this post, a little toss-off piece about marketing. But I'd never heard the argument about different kinds of reason-giving before, and it strikes me as an interesting idea.
According to Tilly, by way of Gladwell, there are four ways we explain what we're doing: social conventions, stories, codes, and accounts. If you use one when another is called for, you come off looking foolish. Now, I'm not going so far as to say that's right, but I am going to think about how you might use it in a game.
It doesn't explain how we decide something, just how we explain how we've decided something. But, as everyone knows, in a game it's really the explanation that matters, not the action. Your characters can do semi-random things (with a central theme making them reliable), and as long as we can rationalize why, we're happy.
Now, I think it's possible to drive an entire narrative AI using something similar to these four basic ideas. See, each person in a simulation needs to do things, but making those things both interesting and comprehensible is currently beyond us. Instead, we program that stuff in by hand, which limits us to "small" and "everyone gets the same stuff".
In my experience, the hard part of any social engine is that there are several "pieces" to life, and these pieces don't appear to act the same. If you drive your characters with one algorithm, they become "flat": obsessed with social contacts, or obsessed with money, or whatever. Even if you tweak your algorithm, it's hugely difficult to make your people feel different from each other in any interesting way. This is made worse by how hard it is to show that people change depending on who they're with. There's a big difference between me at a party, me at home, me hanging out with friends, and me trying to get a job done. With a monolithic algorithm, I haven't seen those differences come out in anything short of spaghetti code.
Well, how about giving each NPC several AI algorithms that tell it what to do? Depending on the situation, it either calls one AI or refers one AI to another.
To use a modified version of the four systems of explanation Tilly apparently uses, we could have four AIs (each of which would have a different level of power in each NPC):
The social convention AI determines what kind of norms this culture has, and presses the NPC to follow them. For example, it cuts in with "murder is bad" and "keep your clothes on". When another AI forces the character to break these social conventions, there is a lot of discomfort. How much depends on the situation, of course.
The social story AI attempts to build a network of people you know. It is what refers you to friends rather than strangers, what makes you crave vengeance or feel jealous. It tells you to give your friends an absurd amount of help, and allows you to make sacrifices that the other AIs wouldn't.
The law AI recognizes the laws of the area but does not emotionally internalize them. For example, a sociopath with no social convention AI would not feel upset about murder, but he would recognize that he needs to be careful about it. If the NPC is in the same place they grew up in, the law AI and social convention AI are likely to be very similar, except for silly laws without a solid emotional hook. Like, say, jaywalking. But the law AI adapts to the situation: you can add laws from the culture you're standing in, figure out the limits of your party's tolerance, and so forth. It allows for mental adaptation.
The reality AI is what drives the NPC to do things, big and small. It says, "time to eat." It says, "you don't have any food, go get some." It says, "conquer the world!" Whatever the NPC's goals are - short or long term - the reality AI's job is to tell him how to accomplish them using a simple logic search. The reality AI also learns "habits", and an NPC generally comes preprogrammed with several habits picked up in youth (such as, say, brushing your teeth and taking a shower). The longer you deny a habit, the more it bothers you, until it begins to decay and bother you less.
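To make that last bit concrete, here's a rough sketch of how the habit pressure might work: a pressure value that builds while the habit is denied, and a strength value that erodes once it's been denied long enough. The class name, rates, and thresholds are all made up for illustration, not a real implementation:

```python
# Hypothetical sketch of the habit mechanic: pressure builds while a habit is
# denied, then past a breaking point the habit itself erodes and nags less.

class Habit:
    def __init__(self, name, build_rate=1.0, breaking_point=12.0, decay_rate=0.2):
        self.name = name
        self.pressure = 0.0        # how much the denied habit currently bothers the NPC
        self.strength = 1.0        # how ingrained the habit is (0.0 = forgotten)
        self.hours_denied = 0.0
        self.build_rate = build_rate
        self.breaking_point = breaking_point   # hours of denial before the habit starts to fade
        self.decay_rate = decay_rate

    def tick(self, hours, performed):
        if performed:
            # doing the habit relieves the pressure and reinforces it slightly
            self.pressure = 0.0
            self.hours_denied = 0.0
            self.strength = min(1.0, self.strength + 0.05 * hours)
        elif self.hours_denied < self.breaking_point:
            # denied, but still ingrained: it bothers the NPC more and more
            self.hours_denied += hours
            self.pressure += self.build_rate * self.strength * hours
        else:
            # denied long enough that the habit decays and bothers the NPC less
            self.hours_denied += hours
            self.strength = max(0.0, self.strength - self.decay_rate * hours * 0.1)
            self.pressure = max(0.0, self.pressure - self.decay_rate * hours)
```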
In any given situation, all four of these AIs might be called. Getting up in the morning? Call each of the AIs. Combine the results. Refer them to each other if needed.
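One way that "combine the results" step might work, assuming each sub-AI just scores candidate actions and each NPC carries per-AI weights (the "different level of power" from above). Everything here is an illustrative guess:

```python
# Sketch: each advisor scores a candidate action, the NPC's per-advisor weights
# decide how much each opinion counts, and the highest total wins.

def choose_action(npc, situation, candidate_actions):
    best_action, best_score = None, float("-inf")
    for action in candidate_actions:
        score = 0.0
        # npc.advisors: {"convention": ..., "story": ..., "law": ..., "reality": ...}
        for name, advisor in npc.advisors.items():
            opinion = advisor.evaluate(npc, situation, action)  # -1.0 (horrified) .. +1.0 (eager)
            score += npc.weights[name] * opinion
        if score > best_score:
            best_action, best_score = action, score
    return best_action
```

The loudmouth-who-backs-down case mentioned further on falls out of exactly this kind of weighting: one advisor scores "pick a fight" high, another scores it very low, and the weights decide where the NPC lands.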
The catch is that this sounds computationally expensive. You'd need shortcuts. For example, building a social network takes months or years, and our NPCs are supposed to have had those months or years... but our computer can't possibly simulate them. So there would be two tiers of AI: "on-screen", in full detail, and "off-screen", where it just does the rough stuff when loading and then more or less freezes.
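A sketch of the two tiers could be as simple as a distance check each tick; the method names here are hypothetical stand-ins for the real work:

```python
# NPCs near the player run the full four-advisor loop every tick; everyone else
# gets a coarse pass (rough schedule, mostly frozen state) roughed in at load time.

def simulate(world, player, dt):
    for npc in world.npcs:
        if npc.distance_to(player) <= world.detail_radius:
            npc.promote_to_full_detail()   # unpack relationships, habits, goals
            npc.run_full_ai(dt)            # the expensive four-advisor loop
        else:
            npc.run_coarse_ai(dt)          # cheap routine only; state mostly frozen
```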
Used correctly, the interactions of these four AIs could lead to all the stereotypical behavior which eludes normal AI. For example, the loudmouthed, pushy bastard who backs down from any real fight: the social AI says, "push! Be the biggest!" and the reality AI says, "whoa, that's too dangerous!" :)
Comments?
2 comments:
OK, I can understand this somewhat from an explanatory point of view... but the breakdown of the AI categories doesn't sit well with me. Also, I don't think 'AI' is the term you're looking for. Each of these layers isn't so much an independent intelligence as it is a 'filter' or 'motivator' for the actions of your character.
In the case you've outlined, the 'Social Story' and 'Reality' layers would be motivators, while the 'Social Convention' and 'Law' layers would be filters through which the motivators are moderated.
But I think that the breakdown you use here is poorly named. For motivational categories, I would have chosen 'Immediate necessities' and 'Eventual goals'. 'Reality' maps fairly closely to 'Immediate necessities', but it lops off some of the abstractness and places it into 'Eventual goals', along with everything in 'Social Story' that's not an activity filter.
Similarly, I would use 'Personal morality' instead of 'Social convention', even if they are identical, because it gives the idea that it can be tuned to the individual. For a second filter, I'd use 'Situational awareness'. It can act just like 'Law' does if you're in a roomful of people, but if you're alone in that same room, you might not be so inclined to follow all the laws of the society. Anyone who follows laws out of personal choice or habit would have that fall under 'Personal morality', after all.
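In code, that motivator/filter split might look something like this. The layer names and the 0-to-1 acceptability scale are just for illustration:

```python
# Motivator layers propose actions; filter layers moderate (or veto) them.

def decide(npc, situation):
    proposals = []
    for motivator in (npc.immediate_necessities, npc.eventual_goals):
        proposals.extend(motivator.propose(npc, situation))

    def moderated_score(action):
        score = action.urgency
        for layer in (npc.personal_morality, npc.situational_awareness):
            score *= layer.acceptability(npc, situation, action)  # 0.0 vetoes the action
        return score

    return max(proposals, key=moderated_score, default=None)
```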
I had an idea like this a ways back. As Kestrel points out, your "AI" are more a filter than an actual intelligence, though I think that's pretty much true of all "AI" yet created.
My model takes three AIs: one concerned with the total shape and direction of the discourse (like a PAC engine running a model of social drama), one that constructs and adapts the material constraints of the world (possibly balanced by the directing agent to provide the right attunement), and a final one that constructs the formal constraints, which include all the goals, thoughts, social mores, personally unique memes, etc.
They each run a heuristic-powered evolutionary algorithm, but they do distinctly different things with the data. These outcomes then point to each other: the characters' motivations point to adjusting the overall plot vector, which creates different material situations, which affect how the characters think and behave. And on and on it goes, a hurricane for the player to steer from the center.
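Reduced to a sketch, that loop might look like the following; the engine and method names are invented, and the real thing would run the evolutionary search inside each step:

```python
# Three engines each adapt their own layer, then hand results to the next,
# round and round: motivations -> plot vector -> material world -> constraints.

def story_tick(drama_director, material_world, formal_constraints, characters):
    # character motivations nudge the overall plot vector
    plot_vector = drama_director.adapt(characters)
    # the plot vector reshapes the material situation (scenes, resources, obstacles)
    material_world.adapt(plot_vector)
    # the new material situation changes the goals, thoughts, and mores in play
    formal_constraints.adapt(material_world, characters)
    for character in characters:
        character.update(formal_constraints, material_world)
```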
How's that for a description?