I've been thinking about Nintendogs, and Tamagotchi, and Animal Crossing - all these games which feature the player nurturing some kind of pseudo-realtime entities.
A lot of people really like these games. But that's not terribly surprising to me. What is terribly surprising to me is the reason they like these games. And one of those reasons is the same surprising reason that people like The Sims, even though a lot of people who like The Sims don't like Nintendogs and vice versa.
When you talk to these people, they talk about how their dogs or citizens or whatever other characters feel. They enjoy "keeping their Nintendog happy", and they notice that "one of my people has a grudge against another".
As for how I feel about it, maybe I dive a bit too deeply, a bit too quickly. My drive is not to figure out how these characters are feeling, it's to figure out how they feel things. So, when I dive into Nintendogs and The Sims, I am rapidly disappointed by the shallow simulation and brutally predictable responses. But the satisfied players either don't notice or don't care about that sort of thing.
I think it might be a race between how quickly a character can establish themselves in your mind and how quickly you can tear their mind apart. The question in my mind is whether the characters could establish themselves even after I've figured them out. What I mean is this: I stop playing Nintendogs and The Sims because I figure them out, so I get bored and close up shop. But if I were forced to play them every day for a month, would I grow to have personal emotions about the characters?
I think I would. The emotion would probably be hate, though, because it'd be a real bother to have to play the same boring game day after day. But... could it be another emotion? Could I grow to like a character whose actions and emotions are painfully transparent?
Well, research suggests yes, but the dynamics are a bit more difficult. In order to make it so that I like the character, the character has to serve some useful purpose in my life. That purpose could be almost anything, but it can't be "to waste your time".
The problem is that software isn't really advanced enough to do useful things on a personable, semi-automated level. For example, we could theoretically make our calendar-management software a character - a virtual secretary - but it turns out that interacting with our virtual secretary is more difficult than simply filling out the raw calendar ourselves. So, even though she theoretically exists to help us out, in actuality she just wastes our time. Example: Clippy.
But I think the solution can't be applied because the problem doesn't exist yet. Right now, there aren't many applications that require a "personable heuristic", so there aren't many spaces to put one.
There are a lot of hints of this kind of thing on the horizon, though. An example is the elderly-care robots you might see in Japan. These robots are pretty bad, not due to any design problems, but simply due to the fact that interacting with the real world is quite difficult and our techniques aren't quite there yet. The end result is that most of these robots are close to useless. We can imagine them growing more capable over time, however, and it stands to reason that they will get more and more common until they might be ubiquitous.
Another possible example of a hardware sort is prosthetics or meta-prosthetics. For example, if you're deaf, I can see a personable piece of software detecting all the sounds around you and regurgitating the important ones as text. It would need to be personable because what sounds any given person wants to be alerted to at any given time will change, as will the "expression" they should be reported in.
A prosthetic arm is fairly straightforward and probably wouldn't require a personality of any sort. But what about larger meta-prosthetics such as cars, houses, and computers as they become ever more closely linked to us? It makes sense to give them a personality that can not only adapt to the mood of the owner and situation, but can also express its own "mood" to easily reveal the very complex details of the system's state.
Pure software is also an option. Right now it doesn't seem to make any sense to have a web browser with a personality: we've restricted our use of the web to that which Google can provide us. However, even in that limited role, Google's search engine attempts to adapt to our requests and even do some contextual analysis. In essence, it's a primitive version of personable software.
Is there any reason to think this will do anything besides advance? Let's look at a version that could be made today:
What if we had a Twitter aide? A virtual character who exists to feed us twitters. By knowing the sorts of things, people, and places we're interested in, he could bring us relevant twitters and, judging our responses to them, give us more twitters along the same line or even send us out to web pages with more information.
Moreover, such an aide could "time-compress" Twitter. For example, if I want to watch someone's Twitter, but a lot of it is crap, I could have the character give me a summary, or at least filter out the crap.
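To make the idea concrete, here's a minimal sketch of the aide's core loop - weight topics by your interests, filter the feed, and nudge the weights based on your reactions. Everything here (the `TwitterAide` class, the topic labels, the weight values) is invented for illustration; none of it is a real Twitter API.

```python
class TwitterAide:
    """A toy interest-weighted feed filter - the hypothetical aide's brain."""

    def __init__(self, interests):
        # topic -> weight, seeded from what the aide knows about us
        self.weights = dict(interests)

    def score(self, topics):
        # a tweet's relevance is the sum of the weights of its topics
        return sum(self.weights.get(t, 0.0) for t in topics)

    def filter(self, tweets, threshold=0.5):
        # tweets: list of (text, topics); keep only the relevant ones
        return [text for text, topics in tweets
                if self.score(topics) >= threshold]

    def feedback(self, topics, liked):
        # nudge topic weights up or down based on how we responded
        delta = 0.5 if liked else -0.5
        for t in topics:
            self.weights[t] = self.weights.get(t, 0.0) + delta


aide = TwitterAide({"games": 1.0, "robotics": 0.8})
feed = [
    ("New Nintendogs clone announced", ["games"]),
    ("What I had for lunch", ["food"]),
    ("Care robot trials in Japan", ["robotics", "news"]),
]
print(aide.filter(feed))  # the lunch tweet gets filtered out

# if we enjoy a food tweet anyway, the aide adapts
aide.feedback(["food"], liked=True)
```

Obviously a real aide would need far richer modeling than a bag of topic weights, but even this crude loop captures the two behaviors described above: bringing us relevant items and adjusting based on our responses.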
Right now all this stuff can be done manually, of course, but the point is to give you a hint of what might come in the future. The amount of data spooling out of Twitter is a microscopic fraction of the amount of data that will spool out of the social networks of tomorrow, and the idea that you'll spend time manually going through all those entries is silly.
"But, Craig, why would these entities need a personality? Sure, I understand for things like pets and nursemaids, but why would a super-Twitter aggregator need a personality?"
Well, I think that the next tier of software is going to need a personality because a personality will let users psychologically deal with the complexity of the software. That was the idea behind Clippy, after all: create a simple interface to the complicated parts of Microsoft Word.
Microsoft Word isn't complex enough to require that kind of assistance, but high-volume data filtering very well may be. As users, we have to "understand" why our entity is bringing us this particular data, and we have to not get upset by the communication difficulties we're going to have with these entities. Both of these things are easily accomplished by a carefully chosen avatar, and that's quite apart from the fact that our software entity (whether strictly software or partially hardware) will need to empathize with us and our moods.
In some regards, this can be seen as depressing: oh, old people cared for by soul-less ro-bots instead of hu-mans. People making friends with ro-bots instead of hu-mans. Pretty sad! Or is it?
I think it's a cultural artifact. I think that once we're there, we'll find it's not sad at all.
ANYWAY, I think I'll wrap up there.
What are your opinions?