Saturday, December 20, 2008

For Love of Robots

I've been thinking about Nintendogs, Tamagotchi, and Animal Crossing - all games that feature the player nurturing some kind of pseudo-realtime entity.

A lot of people really like these games. But that's not terribly surprising to me. What is terribly surprising to me is the reason they like these games. And one of those reasons is the same surprising reason that people like The Sims, even though a lot of people who like The Sims don't like Nintendogs and vice versa.

When you talk to these people, they talk about how their dogs or citizens or whatever other characters feel. They enjoy "keeping their Nintendog happy", and they notice that "one of my people has a grudge against another".

As for how I feel about it, maybe I dive a bit too deeply, a bit too quickly. My drive is not to figure out how these characters are feeling, it's to figure out how they feel things. So when I dive into Nintendogs and The Sims, I am rapidly disappointed by the shallow simulation and brutally predictable responses. But the satisfied players either don't notice or don't care about that sort of thing.

I think it might be a race between how quickly a character can establish itself in your mind and how quickly you can tear its mind apart. The question in my mind is whether the characters could establish themselves even after I've figured them out. What I mean is this: I stop playing Nintendogs and The Sims because I figure them out, so I get bored and close up shop. But if I were forced to play them every day for a month, would I grow to have personal emotions about the characters?

I think I would. The emotion would probably be hate, though, because it'd be a real bother to have to play the same boring game day after day. But... could it be another emotion? Could I grow to like a character whose actions and emotions are painfully transparent?

Well, research suggests yes, but the dynamics are a bit more difficult. In order to make it so that I like the character, the character has to serve some useful purpose in my life. That purpose could be almost anything, but it can't be "to waste your time".

The problem is that software isn't really advanced enough to do useful things on a personable, semi-automated level. For example, we could theoretically make our calendar-management software a character - a virtual secretary - but it turns out that interacting with our virtual secretary is more difficult than simply filling out the raw calendar ourselves. So, even though she theoretically exists to help us out, in actuality she just wastes our time. Example: Clippy.

But I think the solution can't be applied because the problem doesn't exist yet. Right now, there aren't many applications that require a "personable heuristic", so there aren't many spaces to put one.

There are a lot of hints of this kind of thing on the horizon, though. One example is the elderly-care robots you might see in Japan. These robots are pretty bad, not because of any design problem, but simply because interacting with the real world is quite difficult and our techniques aren't quite there yet. The end result is that most of these robots are close to useless. We can imagine them growing more capable over time, however, and it stands to reason that they will get more and more common until they might be ubiquitous.

Another possible hardware example is prosthetics, or meta-prosthetics. For example, if you're deaf, I can see a personable piece of software detecting all the sounds around you and regurgitating the important ones as text. It would need to be personable because what sounds any given person wants to be alerted to at any given time will change, as will the "expression" they should be reported in.

A prosthetic arm is fairly straightforward and probably wouldn't require a personality of any sort. But what about larger meta-prosthetics such as cars, houses, and computers as they become ever more closely linked to us? It makes sense to give them a personality that can not only adapt to the mood of the owner and situation, but can also express its own "mood" to easily reveal the very complex details of the system's state.

Pure software is also an option. Right now it doesn't seem to make any sense to have a web browser with a personality: we've restricted our use of the web to that which Google can provide us. However, even within that limited role, Google's search engine attempts to adapt to our requests and even do some contextual analysis. In essence, it's a primitive version of personable software.

Is there any reason to think this will do anything besides advance? Let's look at a version that could be made today:

What if we had a Twitter aide? A virtual character who exists to feed us twitters. By knowing the sorts of things, people, and places we're interested in, he could bring us relevant twitters and, judging our responses to them, give us more twitters along the same line or even send us out to web pages with more information.

Moreover, such an aide could "time-compress" Twitter. For example, if I want to watch someone's Twitter, but a lot of it is crap, I could have the character give me a summary, or at least filter out the crap.
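To make the idea concrete, here's a minimal sketch of the kind of interest-based "crap filter" such an aide might run under the hood. The scoring scheme, keyword weights, and example feed are all invented for illustration - a real aide would learn the weights from your reactions rather than have them hard-coded.

```python
def score(tweet, interests):
    """Sum the weights of any interest keywords appearing in the tweet."""
    words = set(tweet.lower().split())
    return sum(weight for keyword, weight in interests.items() if keyword in words)

def filter_feed(tweets, interests, threshold=1.0):
    """Keep only tweets scoring at or above the threshold - the 'crap filter'."""
    return [t for t in tweets if score(t, interests) >= threshold]

# Hypothetical interest weights and a hypothetical feed.
interests = {"robots": 2.0, "games": 1.5, "lunch": 0.1}
feed = [
    "new elderly care robots demoed in japan",
    "had a sandwich for lunch",
    "thinking about emotion in games",
]
print(filter_feed(feed, interests))
```

The interesting part isn't the keyword matching, which is trivially crude - it's that "judging our responses" amounts to nudging those weights up or down over time.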

Right now all this stuff can be done manually, of course, but the point is to give you a hint of what might come in the future. The amount of data spooling out of Twitter is a microscopic fraction of the amount of data that will spool out of the social networks of tomorrow, and the idea that you'll spend time manually going through all those entries is silly.

"But, Craig, why would these entities need a personality? Sure, I understand for things like pets and nursemaids, but why would a super-Twitter aggregator need a personality?"

Well, I think that the next tier of software is going to have to have a personality, because a personality will let users psychologically deal with the complexity of the software. That was the idea behind Clippy, after all: create a simple interface to the complicated parts of Microsoft Word.

Microsoft Word isn't complex enough to require that kind of assistance, but high-volume data filtering very well may be. As users, we have to "understand" why our entity is bringing us this particular data, and we have to not get upset by the communication difficulties we're going to have with these entities. Both of these things are easily accomplished by a carefully chosen avatar, and that's quite apart from the fact that our software entity (whether strictly software or partially hardware) will need to empathize with us and our moods.

In some regards, this can be seen as depressing: oh, old people cared for by soul-less ro-bots instead of hu-mans. People making friends with ro-bots instead of hu-mans. Pretty sad! Or is it?

I think it's a cultural artifact. I think that once we're there, we'll find it's not sad at all.

ANYWAY, I think I'll wrap up there.

What are your opinions?


Patrick said...

There's probably a whole other dimension here for the mediation of multi-player or multi-party interactions.

Greg Tannahill said...

The first part of your post is Koster's Theory of Fun stuff - that you approach the game as a system to be understood and mastered, and once you've reduced it from a challenge into a machine which you can reliably and predictably operate, the fun stops.

I've always found a certain irony in procedurally-driven characters. If you have a character who behaves by reference to a set of understandable rules, players will write him off as "just a machine". Conversely, in systems driven by genuine randomness, especially where some kind of reward system is involved, players will invent totally spurious rules that allegedly guide the system, and start heaping (often malevolent) personality on a genuinely random event.

I think the key therefore to AIs who evoke an emotional response is some kind of sequential randomness, where the AI is crazily unpredictable, except that the unpredictability is funnelled in the direction of whatever task the AI was created to perform.

Craig Perko said...

There's some truth in that, but I'm not sure it needs much randomness. You don't want it to be so random that it adversely affects its performance, after all.

One option is, instead of making it random, to have it affected by outside influences. For example, if you have an AI that's heavily influenced by whatever people are Twittering about today, it's going to seem random to someone who isn't in on the whole Twitter phenomenon.

But it won't BE random, and it can be tracked back (via conversation) to something interesting and theoretically useful.

This would be especially useful in social robots, who would need to be at least slightly interesting as part of their basic function.
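A tiny sketch of what I mean by "outside influence instead of randomness": the agent below is fully deterministic given the outside data, but looks unpredictable to anyone who isn't following that data. The topic lists and interests are invented for illustration.

```python
def pick_topic(trending, user_interests):
    """Bring up the highest-ranked trending topic the user cares about;
    fall back to the top trend. No randomness anywhere - the variety
    comes entirely from the outside feed changing day to day."""
    for topic in trending:
        if topic in user_interests:
            return topic
    return trending[0]

print(pick_topic(["elections", "robots", "weather"], {"robots"}))
print(pick_topic(["elections", "robots", "weather"], set()))
```

Because the behavior is traceable to real outside data, a conversation about "why did you bring that up?" has an actual, interesting answer.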

Greg Tannahill said...

The problem is that, for most tasks, there is an optimal solution. Either the AI is taking the optimal path, which is predictable, or it features a deliberately introduced random inefficiency for the purpose of being non-predictable.

Social bots are different in that their goal is to (ultimately) entertain. They can be reactive, or generative.

Reactive sociality is about responding to player input in a way that gratifies the player, which is the kind of randomised predictability I'm talking about - the responses have to be GENERALLY of the same character in order to be engaging, but need to be random to avoid being redundant. Many chatbots do this by regurgitating past answers from other users in the hope that what was appropriate previously remains appropriate now, but this generally yields very mixed results. Good reactive sociality really does require some hard rules as to what sort of responses are appropriate in a given context, and then a deep pool of randomness to draw from in order to find a reaction that fits those rules.

Generative sociality involves entertaining by offering the user new content sorted according to their taste and served wrapped in a unifying sense of style. This is akin to grafting a smiley-face onto YouTube and saying it wishes to be called "Bob" now. It can only be done by reference to other people (whether the user or a third-party community) so it really only represents a refinement and personification of existing platforms.
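A minimal sketch of the reactive scheme described above - hard rules decide which pool of responses fits the context, and randomness only chooses within that pool. The contexts, the keyword classifier, and the canned lines are all invented for illustration.

```python
import random

# Hypothetical response pools, one per rule-detected context.
RESPONSES = {
    "greeting": ["Hi there!", "Hello again.", "Good to see you."],
    "complaint": ["That sounds rough.", "Sorry to hear it.", "I know the feeling."],
}

def classify(message):
    """The hard rules: a crude keyword check standing in for real context rules."""
    greeting_words = ("hi", "hello", "hey")
    return "greeting" if any(w in message.lower().split() for w in greeting_words) else "complaint"

def respond(message, rng):
    """The 'deep pool of randomness': pick any response from the rule-selected pool."""
    return rng.choice(RESPONSES[classify(message)])

print(respond("hello bot", random.Random(0)))
```

The structure guarantees responses are GENERALLY of the right character (the rules), while staying non-redundant (the pool).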

Craig Perko said...

Well, that's certainly true of today's applications, but the whole point of my post is that I don't think it's true of tomorrow's applications.

If something has a clearly visible optimal solution, pasting a personality onto it will just waste the user's time. Like Clippy.

But if the situation is very complicated, arriving at a solution can be thought of as a little bit of an art. It's then that personalities become useful.

Greg Tannahill said...

My feeling is that any task with a clear goal does have an optimal solution, and in an optimal program a personality is just a needless inefficiency, although the illusion of personality might be part of the optimal solution.

Where an optimal solution isn't clear, you either have insufficient processing power to deduce it, you're operating with incorrect data or assumptions, or you're suffering from a confusion of goals with unclear priorities among the goals. (Social interaction is an example of this second situation.)

Don't know that this takes the discussion any further though. I'll look forward to your next post on this or a related topic.