Friday, August 05, 2005

Social Robots

I've done a lot of these things. I've programmed dozens of different versions, including several versions which made new versions. The problem is simple:

Your social structures are based on your reality.

As an obvious example: most people don't feel real comfortable strutting in the buff. Why?

There's a hundred different theories. I won't discuss them in this post. But the inevitable conclusion is that it hinges around you having a buff to strut around in. Moreover, around you having a strut to strut with. And the fact that there are other people who share your strutting and buffing nature, so can judge it.

It's not inborn. Young children don't seem to have a predilection for clothes - quite the opposite. You'll find most "primitives" also tend towards a rather minimalist wardrobe. If they fear their flesh being shown to millions of horny teens pawing through National Geographic, it's not because of their flesh, it's some weird religious thing about having your soul sucked out.

A computer simulation can only simulate the things it can simulate. Durrrr. And no computer I've ever seen can simulate human life with anything approaching fidelity. Meaning: even if you COULD get a "thinking AI", you can't expect it to share our cultural foibles. It has entirely different innie and outie bits. Whereas we might blush being caught surfing porn sites, it might "blush" being caught using Appletalk, or reading up on radiation. Dirty, dirty radiation. Electrons turn me on!

Moreover, much of our culture arises from difficult situations and the contrast with luxury. When stripped of a need to provide for themselves, humans seem to inevitably gravitate towards sex and signs of power. Assuming the social algorithm was at all accurate, you'd probably be living in a world filled with some of the scariest, most perverse culture you could imagine. Electrons everywhere. Ewwwww.

This has, in turn, given social simulations more than a black eye: it has cut off their arms and legs. Not only do all the possible interactions have to be programmed in, but all the cultural weights we lend them, too. It doesn't evolve out of the core rules: it's all special cases.

There are three ways to get a "decent" social interaction system. The first is to program in rules, such as "naked is mortifying". These rules can be set and weighted for individuals however you want. Downside being, of course, that there are millions of these, including ones like "don't stare at people on elevators" and "hip-hop is liked by many young folk, but hated by most mature adults". The upside is, of course, that you can program for any culture you like, including ones where it is a standard ritual to stare at one another on the elevator.

Of course, the OTHER downside is that this doesn't provide an 'accurate' culture. These piecemeal rules will leave strange gaps and bizarre behaviors galore.
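Just to make the flavor of that first method concrete, here's a toy sketch of what hand-coded, individually-weighted rules might look like. Every rule name and number here is invented purely for illustration - the real problem, as noted above, is that you'd need millions of these:

```python
# Toy sketch of method one: hand-coded social rules with per-individual
# weights. All rule names and weights are hypothetical illustrations.

class SocialRule:
    def __init__(self, name, default_weight):
        self.name = name
        self.default_weight = default_weight  # culture-wide discomfort level

class Agent:
    def __init__(self, name, overrides=None):
        self.name = name
        # Per-agent weights override the culture-wide defaults.
        self.overrides = overrides or {}

    def discomfort(self, rule):
        return self.overrides.get(rule.name, rule.default_weight)

# Culture-wide defaults: every one of these is a special case,
# and none of them follows from any deeper principle.
RULES = [
    SocialRule("naked_in_public", 0.9),
    SocialRule("staring_on_elevator", 0.6),
    SocialRule("liking_hip_hop", 0.2),
]

teen = Agent("teen", overrides={"liking_hip_hop": 0.0})
elder = Agent("elder", overrides={"liking_hip_hop": 0.8})

for rule in RULES:
    print(rule.name, teen.discomfort(rule), elder.discomfort(rule))
```

The per-agent overrides are also how you'd get the "any culture you like" upside - an elevator-staring culture is just a different default table.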

The second way to program in rules is to let it learn the rules by parsing human interactions. This would be great, except that it requires a pattern recognizer, and we don't have one.

The third way to program in rules is to create a base set of behavior chunks, set some variables, and let it cogitate on the matter. This, of course, only produces results as good as the rules that define them.
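A rough sketch of what that third method might mean in practice: a small set of abstract behavior chunks (drives, really) combined with situational variables, from which specific behavior falls out. Everything here - the chunk names, the numbers, the combining rule - is invented for illustration:

```python
# Toy sketch of method three: a few abstract behavior "chunks" plus
# tunable situational variables, with specific behavior extrapolated
# from them rather than hand-coded. All values are hypothetical.

# Core chunks: a handful of abstract drives, not thousands of special cases.
CHUNKS = {
    "display_status": 0.7,
    "seek_approval": 0.5,
    "avoid_exposure": 0.3,
}

def extrapolate(chunks, situation_modifiers):
    """Combine base drives with situational variables; strongest drive wins."""
    scores = {}
    for chunk, base in chunks.items():
        scores[chunk] = base * situation_modifiers.get(chunk, 1.0)
    return max(scores, key=scores.get)

# In a "public" situation, exposure matters far more than usual,
# so a normally weak drive ends up dominating behavior.
print(extrapolate(CHUNKS, {"avoid_exposure": 3.0}))  # avoid_exposure
print(extrapolate(CHUNKS, {}))                       # display_status
```

The appeal is that the behaviors emerge from the interaction of chunks and situations - but, per the caveat above, they're only as good as the chunks you chose.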

I, as always, choose the most abstract. So I've been thinking about that third method. I've got some interesting ideas, but they require diagrams to explain.

It would be pretty interesting if it worked. It probably won't, though. We'll see.


Textual Harassment said...

Have you thought about simulating social conditioning?

I would say that humans learn in two ways: reinforcement and imitation. People have learned to be uncomfortable while naked because a) after a certain age it was strongly discouraged, and b) they observe other people being uncomfortable while naked. c) There's probably a built-in subconscious thing about vulnerability, too, but I don't know if you'd need to teach such things to your AI.
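The two learning modes described here could be sketched in a few lines - reinforcement nudges comfort up or down on feedback, imitation drifts it toward what's observed in others. The class, names, and rates below are all hypothetical:

```python
# Toy sketch of the two learning modes: reinforcement (adjust on
# feedback) and imitation (drift toward observed norms). Hypothetical.

class SocialLearner:
    def __init__(self):
        self.comfort = {}  # behavior -> comfort level in [0.0, 1.0]

    def reinforce(self, behavior, feedback, rate=0.2):
        """Nudge comfort up on approval (+1) or down on discouragement (-1)."""
        current = self.comfort.get(behavior, 0.5)
        self.comfort[behavior] = min(1.0, max(0.0, current + rate * feedback))

    def imitate(self, behavior, observed_comfort, rate=0.2):
        """Drift toward the comfort level observed in other people."""
        current = self.comfort.get(behavior, 0.5)
        self.comfort[behavior] = current + rate * (observed_comfort - current)

child = SocialLearner()
for _ in range(5):
    child.reinforce("naked_in_public", -1)  # strongly discouraged
    child.imitate("naked_in_public", 0.0)   # others visibly uncomfortable
print(child.comfort["naked_in_public"])     # driven down to 0.0
```

Both channels push the same direction here, which is exactly the "recognizable social structure" point: the environment supplies both the feedback and the examples.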

So the necessary first step to building a socially recognizable AI is to put it into a recognizable social structure. It would require the AI to have things like a body and to be able to interact physically with other people, because that exists in our world.

Maybe this is similar to your second approach but you'll need a human teacher to administer and interpret the lessons. Like a parent, except the child, being a computer, can't interpret anything on its own.

Indeed, your first method won't work because there will be gaps.

Textual Harassment said...

Continued (I didn't hit post, I swear!)

The third method won't work because it necessitates a comprehensive understanding of the root of all human social interaction, which is pretty much impossible. The theories and systems people come up with about society are really only attempts to verbalize what they know through a lifetime of conditioning.

So I think the way to make social AI is to "grow" it in some social environment. Whether it's a robot interacting with real life or people in a virtual world, the AI should be programmed to take on the norms of its environment.

Craig Perko said...

You're right: the best method is to let it grow and learn. But I don't have that kind of pattern recognition capability.

I'm not simulating a creature with my rules - I'm creating, from whole cloth, an entire social system. Extrapolating from the rules will create specific behavior patterns.

They'll be strange and foreign, but at least they'll be coherent.