I was cruising around and found this post, a little toss-off about marketing. But I'd never heard the argument about different kinds of reason-giving before, and it strikes me as an interesting idea.
According to Tilly (by way of Gladwell), there are four ways we explain what we're doing: social conventions, stories, codes, and accounts. If you use one when another is called for, you come off looking foolish. Now, I'm not going so far as to say that's right, but I am going to think about how you might use it in a game.
It doesn't explain how we decide something, just how we explain how we've decided something. But, as everyone knows, in a game it's really the explanation that matters, not the action. If your characters do semi-random things (with a central theme keeping them plausible), we're happy so long as we can rationalize why they did them.
Now, I think it's possible to drive an entire narrative AI using something similar to these four basic ideas. See, each person in a simulation needs to do things, but making those things both interesting and comprehensible is currently beyond us. Instead, we program that stuff in by hand, which limits us to "small" and "everyone gets the same stuff".
In my experience, the hard part of any social engine is that there are several "pieces" to life, and these pieces don't appear to act the same. If you drive your characters with one algorithm, they become "flat" - obsessed with social contacts, or money, or whatever. Even if you tweak your algorithm, it's hugely difficult to make your people feel different from each other in any interesting way. It gets worse: you also have a hard time showing that people change depending on who they're with. There's a big difference between me at a party, me at home, me hanging out with friends, and me trying to get a job done. With a monolithic algorithm, I haven't seen those differences come out in anything short of spaghetti code.
Well, how about giving each NPC several AI algorithms that tell it what to do? Depending on the situation, it either calls one AI, or refers one AI to another.
To use a modified version of the four systems of explanation Tilly apparently describes, we could have four AIs (each carrying a different level of influence in each NPC):
The social convention AI determines what kind of norms this culture has, and presses the NPC to follow them. For example, it cuts in with "murder is bad" and "keep your clothes on". When another AI forces the character to break these social conventions, there is a lot of discomfort. How much depends on the situation, of course.
The social story AI attempts to build a network of people you know. It is what refers you to friends rather than strangers, what makes you crave vengeance or feel jealous. It tells you to give your friends an absurd amount of help, and allows you to make sacrifices the other AIs wouldn't.
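To make that concrete, here's a minimal Python sketch of the social story AI as a weighted relationship graph. The names, bond values, and the squared generosity curve are all my own illustrative assumptions, not a spec:

```python
# The social story AI as a relationship graph: bond strength decides whom
# the NPC turns to, and close friends get disproportionately generous help.

def best_helper(bonds):
    """Pick the person with the strongest bond; strangers (bond 0) come last."""
    return max(bonds, key=bonds.get)

def help_budget(bond, base=10.0):
    """Superlinear in bond strength: absurd generosity for close friends."""
    return base * bond * bond

bonds = {"old_friend": 0.9, "coworker": 0.4, "stranger": 0.0}
print(best_helper(bonds))  # old_friend
```

The superlinear curve is one cheap way to get "an absurd amount of help" for friends without a special case: doubling the bond more than doubles the help.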
The law AI recognizes the laws of the area but does not emotionally internalize them. For example, a sociopath with no social convention AI would not feel upset about murder, but he would recognize that he needs to be careful about it. If the NPC is in the same place they grew up in, the law AI and social convention AI are likely to be very similar, except for silly laws without a solid emotional hook - like, say, jaywalking. But the law AI adapts to the situation: you can add laws from the culture you're standing in, figure out what your party will tolerate, and so forth. It allows for mental adaptation.
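One toy way to picture the law AI: an explicit, swappable rule set, kept separate from the emotionally internalized conventions. All the rule names below are made-up examples:

```python
# Laws are plain data; conventions carry the emotional hook. Moving to a
# new region just merges in the local laws - no "feelings" involved.

home_laws = {"no_murder", "no_theft", "no_jaywalking"}
conventions = {"no_murder", "no_theft", "keep_clothes_on"}  # internalized

local_laws = {"no_murder", "no_dueling"}  # the culture you're standing in
known_laws = home_laws | local_laws       # the law AI adapts by merging

# Laws with no emotional hook: recognized, but breaking them costs no guilt.
hollow_laws = known_laws - conventions
print(sorted(hollow_laws))  # ['no_dueling', 'no_jaywalking']
```

The set difference is the jaywalking case from above: the sociopath's whole law AI would be "hollow" in exactly this sense.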
The reality AI is what drives the NPC to do things, big and small. It says, "time to eat." It says, "you don't have any food, go get some." It says, "conquer the world!" Whatever the NPC's goals are - short or long term - the reality AI's job is to tell him how to accomplish them using a simple logic search. The reality AI also learns "habits", and NPCs generally come preprogrammed with several habits picked up in their youth (such as, say, brushing your teeth and taking a shower). The longer you deny a habit, the more it bothers you - until it begins to decay and bother you less.
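The habit mechanic could be sketched as a rise-then-decay pressure curve. The curve shape and the 48-hour peak are arbitrary assumptions, just to show the idea:

```python
import math

def habit_pressure(hours_denied, peak=48.0):
    """Pressure from a denied habit: rises, peaks, then decays as it fades."""
    # t * exp(-t/peak) climbs until hours_denied == peak, then falls off,
    # matching "bothers you more... until it begins to decay."
    return hours_denied * math.exp(-hours_denied / peak)
```

So skipping your shower nags harder through the first two days, then the habit itself starts to wither and the nagging fades.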
In any given situation, all four of these AI might be called. Getting up in the morning? Call each of the AI. Combine the results. Refer them to each other if needed.
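Here's one hedged sketch of that arbitration step: each AI scores the candidate actions, per-NPC weights combine the scores, and the highest total wins. The advisor functions and every number below are invented for illustration:

```python
# Four advisors score each candidate action; per-NPC weights are what make
# one character's morning differ from another's.

def choose_action(actions, advisors, weights):
    def total(action):
        return sum(weights[name] * score(action)
                   for name, score in advisors.items())
    return max(actions, key=total)

advisors = {
    "convention": lambda a: {"eat_breakfast": 1.0, "skip_shower": -0.5}.get(a, 0.0),
    "story":      lambda a: 0.0,  # no friends involved this morning
    "law":        lambda a: 0.0,  # nothing here is illegal
    "reality":    lambda a: {"eat_breakfast": 0.8, "skip_shower": 0.2}.get(a, 0.0),
}
weights = {"convention": 1.0, "story": 1.0, "law": 1.0, "reality": 1.0}
print(choose_action(["eat_breakfast", "skip_shower"], advisors, weights))
# eat_breakfast
```

"Referring" one AI to another would be a second pass - e.g. the winner gets vetoed or rescored by the law AI - but the weighted vote alone already covers the getting-up-in-the-morning case.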
The trick is that this sounds computationally expensive. You'd need shortcuts. For example, building a social network takes months or years, and our NPCs are supposed to have had months or years... but our computer can't possibly simulate those. So there would be two tiers of AI: "on-screen" in full detail, and "off-screen", where it just does the rough stuff when loading and then more or less freezes.
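The two-tier idea might look something like this: off-screen NPCs get a cheap statistical "backfill" of their social network at load time and then freeze, while on-screen NPCs get woken up for full simulation. Everything here (class names, random bonds) is an assumption for illustration:

```python
import random

class NPC:
    def __init__(self, name):
        self.name = name
        self.bonds = {}
        self.frozen = True  # off-screen tier by default

    def backfill(self, others, rng):
        # A rough stand-in for years of socializing: roll random bonds
        # once at load time instead of simulating the months that made them.
        for other in others:
            if other is not self:
                self.bonds[other.name] = round(rng.random(), 2)

    def wake(self):
        self.frozen = False  # promoted to full on-screen simulation

rng = random.Random(42)  # seeded so the town loads the same way every run
town = [NPC(n) for n in ("ada", "bo", "cy")]
for npc in town:
    npc.backfill(town, rng)
town[0].wake()  # only ada gets the expensive treatment
```

The point is that the expensive four-AI loop only ever runs for the handful of unfrozen NPCs.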
Used properly, the interactions of these four AIs could lead to all the stereotypical behavior that eludes normal AI. For example, the loudmouthed, pushy bastard who backs down from any real fight. The social AI says, "push! Be the biggest!" and the reality AI says, "whoa, that's too dangerous!" :)
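That loudmouth could fall out of the weights almost for free. A tiny sketch, with every threshold invented for illustration:

```python
# A strong social drive to posture, versus a reality check that scales
# with how dangerous the situation actually is.

def posture_or_fold(danger, social_weight=2.0, reality_weight=1.0):
    push = social_weight * 1.0            # "push! Be the biggest!"
    fear = reality_weight * danger * 3.0  # "whoa, that's too dangerous!"
    return "posture" if push > fear else "back_down"

print(posture_or_fold(danger=0.1))  # posture: all talk when stakes are low
print(posture_or_fold(danger=0.9))  # back_down: folds in a real fight
```

No loudmouth code anywhere - just an NPC whose social weight happens to be high and whose reality AI still outvotes it when danger spikes.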