Nothin' but theory.
It's obvious we can't keep scripting every aspect of our NPCs' lives. There are too many NPCs, too many subtle differences in how the player can treat them. We need to build algorithms that can reasonably drive an NPC's actions - not for all games, but for the growing number that require extremely detailed, player-driven interactions with NPCs.
We've tried to use alignments - you know, lawful good, chaotic good, lawful neutral. But these alignments were created specifically to give tabletop players a framework for making tough moral choices. If you're lawful good, you'll eventually have to choose between honor and justice. If you're lawful neutral, you'll have to decide which is better: order or peace?
These tough questions are possible because of the framework of "lawful/chaotic, good/evil", but they can only be answered by a human mind. Well, a computer could pick a pre-scripted answer (or a random one), but that's not the same at all.
Another attempt is factions. If a character is in the magicians' faction, he wants to help magicians and hinder their enemies. But this breaks down for the opposite reason that alignments do: factions are too simplistic. The pat answer of "anything the magicians do is right" is robotic and unrealistic. In fact, the greatest source of personal strife in this environment would be a "fall from grace" - a character deciding the magicians' guild isn't very good after all, and then deciding what to do about it. As with answering the questions an alignment poses, there's no way to create a meaningful answer out of this data set.
Don't even think that factions plus alignment is the answer: that just introduces two dimensions of moral choice instead of one, and no answers on either axis.
Unfortunately, to make NPCs capable of facing these kinds of moral dilemmas and making subtle moral choices, we need a much more robust and nuanced model.
The first step is to build a graph (node graph, not bar graph) of the things the character cares about. This could be people, places, ideals, etc. This would probably need to be scripted, or created from augmented stereotypes: randomly assigning them wouldn't make much sense. This is a simple positive or negative number for each.
From this foundation we can create their opinions on other people, places, ideals, and things. Some of these would probably have defaults set up - for example, if you are for the ideal of law and order, then you probably like the town guard. If you have a father who is a town guard, you probably like the town guard.
These defaults can be overridden if the designer feels it would be interesting to have a different value, and of course things that are unrelated in most people's minds might be related in a given NPC's mind due to their personal experiences.
All of these values are positive or negative, and there are edges linking them back to the node(s) they spawn from.
This propagation can continue indefinitely - if you like the town guard, then you like the guy who likes the town guard - but should probably be capped at three layers.
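To make the shape of this concrete, here's a minimal sketch in Python of the graph and the capped propagation described above. The class and node names are hypothetical - the text doesn't specify a data structure - and it assumes a simple adjacency list with signed edge weights that attenuate a core value at each hop.

```python
from collections import defaultdict

class OpinionGraph:
    """A sketch of an NPC's value graph: scripted core values plus
    weighted edges along which feelings spill over onto related things."""

    def __init__(self):
        self.core = {}                  # node -> scripted core value (+/-)
        self.edges = defaultdict(list)  # source node -> [(target, weight)]

    def add_core(self, node, value):
        self.core[node] = value

    def link(self, source, target, weight):
        # An edge meaning "feelings about `source` spill over onto `target`."
        self.edges[source].append((target, weight))

    def opinions(self, max_depth=3):
        # Propagate each core value outward, attenuating by the edge weight
        # at each hop and stopping at three layers, as suggested above.
        result = defaultdict(float)
        for node, value in self.core.items():
            frontier = [(node, value, 0)]
            while frontier:
                cur, strength, depth = frontier.pop()
                result[cur] += strength
                if depth < max_depth:
                    for target, weight in self.edges[cur]:
                        frontier.append((target, strength * weight, depth + 1))
        return dict(result)

npc = OpinionGraph()
npc.add_core("law and order", 0.8)            # scripted fundamental value
npc.link("law and order", "town guard", 0.9)  # liking law -> liking the guard
npc.link("town guard", "guard captain", 0.5)  # and on to the captain, layer 2
print(npc.opinions())
```

The depth cap also conveniently prevents infinite loops if the designer ever wires up a cycle ("you like the guy who likes you").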
This foundation is significantly more complex than the simpler faction model, but it gives us half of the equation we need to make more nuanced decisions. You like the city guard, but if you see the city guard going bad, you'll have second thoughts and perhaps even turn against them, because you only like the city guard in the first place because you like law and order. This holds even if you like the city guard far more than you like law and order: whether you're aware of it or not, your liking of the city guard ultimately descends from those fundamental values.
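One way to realize that "descends from fundamentals" property - a sketch with made-up numbers, since the text doesn't pin down the math - is to keep the derived opinion tethered to its source value through the link weight, so witnessing the guard betray law and order flips the derived liking no matter how large it was:

```python
def derived_opinion(core_value, link_weight):
    # Opinion of the guard = feeling about law and order, filtered through
    # how well the NPC believes the guard embodies that ideal.
    return core_value * link_weight

law_and_order = 0.6   # scripted fundamental value
guard_link = 1.5      # guard seen as a strong embodiment of the ideal

# Note the NPC likes the guard (0.6 * 1.5) more than law and order itself.
liking = derived_opinion(law_and_order, guard_link)

guard_link = -0.5     # witnessed the guard going bad: the link turns negative
liking = derived_opinion(law_and_order, guard_link)  # now negative too
```

Because the opinion is recomputed from the fundamental value rather than stored as a free-floating number, "turning against the guard" falls out of the model automatically.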
When talking about simple reactive responses, this model is not better than either of the more basic models. If the player attacks a guard, the NPC's response to the player is no different than if the NPC simply had a faction preference for the guards (or, more likely, the government, since it's always abstracted way out).
But the whole point is to pull the NPC away from simple reactive responses into having justified moral reactions. This framework allows the NPC to change their feelings over time in a meaningful manner, especially in response to the aftereffects of player intervention. If the player kills a cop, that festers in the minds of the NPCs who care... but if the cop shoots at the player in cold blood, that also festers.
It also lets them stay cozy in their biases, because the positive reactions from positive propaganda would offset a larger amount of negative press, purely due to the math involved.
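Here's one guess at what that math could look like (an assumption on my part - the text doesn't spell it out): news about a subject is filtered through the NPC's existing opinion of it, so reports that agree with the existing opinion are amplified and reports that disagree are dampened.

```python
def apply_news(opinion, report, bias_strength=0.5):
    """Update a signed opinion given a signed news report.

    A report that agrees in sign with the existing opinion is amplified;
    one that disagrees is dampened, proportional to bias_strength.
    (Hypothetical update rule; 0.1 is an arbitrary learning rate.)
    """
    agrees = (opinion * report) > 0
    scale = 1.0 + bias_strength if agrees else 1.0 - bias_strength
    return opinion + report * scale * 0.1

guard_opinion = 0.72                               # NPC already likes the guard
guard_opinion = apply_news(guard_opinion, +0.5)    # positive propaganda lands hard
guard_opinion = apply_news(guard_opinion, -0.7)    # larger negative story is blunted
# Net effect: the opinion rises despite the negative story being bigger.
```

With this rule, a steady drip of friendly coverage keeps the NPC loyal through a scandal, which is exactly the "cozy in their bias" behavior described above.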
Add a news/rumor system on top of this, and you could create a city that actually responds to events in an intelligent and emotional manner, even though the NPCs are probably stuck expressing it with canned catchphrases from a voice actor. It would also support a "disinformation" system of crooked politicians and self-centered media clowns, just like the real world. Although that's optional when you're creating the world from scratch.
However, I don't really think that's enough, because the NPCs still have no way to be proactive. This allows them to know what they think about things, and allows them to change how they think according to what they see, but it doesn't allow them to make or interpret plans.
I haven't really come up with anything solid on that side, but I have the strong idea that it involves ranking change over time and remembering causes of change. This would have the benefit of also allowing for recollection - an NPC who feels maudlin when they go to the park where they spent much of their childhood.
However, the progression doesn't work out yet.
What do you think?