It's always on the back burner: the idea that games are The Big Thing that will Enable Us To Solve The Big Problems. Well, I don't think that's true. If something vaguely gamelike does those sorts of things, it will be about as similar to today's games as we are to plankton.
To give a specific example, let's talk artificial intelligence. Real artificial intelligence - hard, general intelligence.
It would seem that games are just perfect for "training up" such an intelligence, don't you think? They've got clear rules, clear rewards and punishments, and a wide variety of situations. Plus, they have the ability to interact with other players! I mean, sure, you aren't just going to plunk one down in front of Final Fantasy MMCXLVIII, but surely you could start simple and work your way up to it, right?
No, not really.
You see, any general intelligence algorithm is going to build a mind out of experiences. There are a few algorithms out there already that might work... but they can't really be tested due to some very specific constraints.
One major constraint is hardware: even the biggest, nastiest supercomputers can't handle the kind of data wrangling that these algorithms require. This is especially true because of the messy, high-fidelity inputs we receive from cameras and microphones and gyroscopes.
To get around this constraint, we can try to use these algorithms on simpler data. Instead of using cameras, we use a short movie clip that can be analyzed at leisure. Instead of microphones, we might try text.
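To make that concrete, here's a minimal sketch of what that substitution looks like - everything here is invented for illustration, not a real library. The learning loop stays exactly the same; only the sensor gets swapped out for pre-encoded data:

# Minimal sketch (Python); all names are hypothetical.

class ExperienceLearner:
    """Builds its 'mind' out of whatever observations it is fed."""
    def __init__(self):
        self.experiences = []

    def observe(self, observation):
        self.experiences.append(observation)
        # ...update the internal model from accumulated experience...

class TextSource:
    """Stand-in for a microphone: pre-encoded words, one at a time."""
    def __init__(self, text):
        self._words = iter(text.split())

    def next_observation(self):
        return next(self._words, None)

learner = ExperienceLearner()
source = TextSource("lfg ts13+ need healer and tank")
obs = source.next_observation()
while obs is not None:
    learner.observe(obs)
    obs = source.next_observation()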
But the thing is, this doesn't end up working. The "encoded" data might be simpler, but it carries with it a huge slew of assumptions that the algorithm cannot learn, because they are wholly outside its realm of perception. It's theoretically possible to create "neutered" encoded data that will "train up" an algorithm, but it's painfully difficult, and the result is obviously not a terribly good example of general intelligence. Even then, it doesn't fix the underlying problem.
For example, text is very simple to represent. And there are just cartloads of algorithms out there that can analyze vast swaths of text and come up with something resembling a good representation of the language.
But this doesn't really allow the algorithm to discuss things. The algorithm can build a representation of the text, but even if it could "think" about things like context, it really has no way to determine context. Context is not really part of our written language. Instead, context is learned through exchanges of language. Interactions.
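Here's the kind of algorithm I mean, boiled down to a toy bigram model (the corpus is invented). It builds a genuine statistical representation of the text - and there is nowhere in it for context to live:

import random
from collections import defaultdict

# Toy bigram model: learns which words follow which. That is a real
# statistical representation of the text - and nothing more.

def train(corpus):
    model = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def babble(model, word, length=8):
    out = [word]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

model = train("the healer heals the tank and the tank holds the boss")
print(babble(model, "the"))  # plausible word order, zero understanding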
On the surface, it seems that, given enough time and enough people chatting with it, a general algorithm could learn to represent context and whatever other details need representing. The general algorithm could "think". But, unfortunately, this never happens.
Why?
Well, let's look at a game. Let's skip the easy steps and move straight to a game you would think would be suitable: WoW.
Let's put a general intelligence in WoW. WoW has a few big advantages that should be obvious: easy-to-understand rules, other players, conversations at every level.
In theory, the general intelligence would learn (if it was built correctly) how to fight, how to scout, how to run away.
Things like buying equipment and grouping aren't in the same domain, but our general intelligence has a miraculous set of fundamental systems, so he will eventually pick up on those things too.
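If you squint, that learning problem looks like this toy loop - a bare-bones tabular Q-learning sketch, with states, actions, and rewards invented for illustration:

import random
from collections import defaultdict

ACTIONS = ["fight", "scout", "flee"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

q = defaultdict(float)  # (state, action) -> estimated long-term value

def choose(state):
    # Mostly pick the best-known action; sometimes explore.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

def update(state, action, reward, next_state):
    # Nudge the value estimate toward reward plus discounted future value.
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])

# One step of experience: fighting a weak mob pays off.
update("weak_mob_nearby", "fight", reward=10, next_state="loot_on_ground")
print(choose("weak_mob_nearby"))  # usually "fight" now, barring exploration

Fight, scout, and flee fit in a table like that. "LFG TS13+" does not.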
Okay, so now our general intelligence is running around in WoW, understanding "LFG TS13+" or whatever.
Or is he?
How is he understanding these complex things?
Through that algorithm, right?
Yes, that algorithm and many months of experience. Many, many months of experience, all carefully analyzed and collated. He's built up a massive set of interpretive systems to deal with this complex data and produce a decent result.
He has to know that this land slopes too steeply for anyone to run up... which means that he can't escape if he's on the bottom, but the monster can't reach him if he's on the top. He has to understand that the monster over there can heal, and the monster over there is a DPSer. He has to know that when someone says "orly" they are going to suck in a party and need to be ignored.
Even if his algorithm can handle that amount and diversity of data, what is it running on? What computer is running this maze of information? What system is supporting this nightmarishly complex structure that - I guarantee - will be at least fifty gigs of compressed data?
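Back-of-envelope, with numbers I am frankly making up:

# All of these figures are loose guesses, not measurements.
model_bytes      = 50e9   # the ~fifty gigs of interpretive structure
fraction_touched = 0.10   # say each decision consults 10% of it
bandwidth        = 10e9   # bytes/sec a beefy machine can stream

bytes_per_decision   = model_bytes * fraction_touched   # 5 GB
seconds_per_decision = bytes_per_decision / bandwidth
print(f"{seconds_per_decision:.1f} s per decision")     # 0.5 s

And that half-second is just reading the data, before a single cycle of actual thinking.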
There is no possible way to exaggerate how much computation this level of sophistication requires. Regardless of whether you're very sophisticated in a game or very sophisticated in real life, that sophistication has to be calculated.
See, getting around the complex inputs from cameras and microphones doesn't actually solve anything. The amount of computation required to process complex inputs is significantly less than the amount required to intelligently put the processed results together.
So, sure, if you had the computational power to run a general intelligence, you might be able to run it in a game. Personally, I think that 99.99999999% of games - certainly all non-MMOGs - simply don't have enough interactive complexity to allow such an algorithm to learn adequately. But, in theory, you could run it in a game.
Why would you?
Why wouldn't you slap a camera and a microphone on it?
I can see reasons you might want to develop a game-like environment for such a thing, but those are specific applications, not general research. You could argue that hardware is irritating and faulty, but games are buggy, shallow, and have very poor interactivity compared to real life.
No, games are not The Key to this issue...
Although, obviously, feel free to disagree and tell me why.
Comments:
So if I can distill your argument, it's that there isn't (a) enough interactivity and (b) enough computational power. What if you designed a world specifically around interaction with an AGI? It's definitely a hard design problem, but you concede it's not impossible.
I wouldn't assert that "games" per se are the key, but rather a tool, and that interactivity in general is the key. I don't intend to build a palace with a sharpened stone, but I can craft more complex tools with it.
As to computational power, I'm not sure about the specifics, since that would depend on the particulars of the AGI's architecture. Can you unequivocally state that there is no AGI that can be designed in a way where this kind of fielded interaction is computationally feasible?
Seems to me like it's a matter of design in both cases.
As to (A), there's very little reason to use any kind of interactive virtual world for general research of this type: it's extremely inefficient, especially compared to the ready-made high-interactivity real world.
The only reason to use a virtual world would be if you were getting some kind of radical efficiency boost. For example, being able to simultaneously simulate hundreds of agents, or being able to simulate deep space. But these are not feasible because of the development costs, reduced/misdirected complexity, and computational difficulties.
It's theoretically possible that there is a supremely efficient algorithm that can reason more than a million times more efficiently than any yet discovered - which is about the level of improvement we would need, and would make it thousands of times more efficient than nature's solutions.
But chances are that our computational abilities will improve a millionfold before that algorithm is discovered. It will be doubly difficult to discover that more advanced algorithm with no baseline to compare against: most better solutions are discovered once the inferior solution is tested and found wanting.
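Roughly - and every figure here is an assumption, the kind of estimate that is notoriously squishy:

brain_ops   = 1e16   # common (hotly debated) estimate of brain ops/sec
machine_ops = 1e14   # generous figure for an accessible big machine
usable      = 0.01   # fraction realistically available in real time
overhead    = 100    # guess at our algorithms' inefficiency vs. nature's

shortfall = brain_ops / (machine_ops * usable) * overhead
print(f"~{shortfall:,.0f}x short")   # ~1,000,000x - hence "millionfold"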
For me, I plan on being satisfied by weak AI combined with pretty smoke and mirrors.
Maybe, but even if that is true (I don't have the expertise to judge CPU usage), Strong AI combined with smoke and mirrors probably has potent commercial applications, which is my focus. The added value of a virtual world is that you can instance the AGI as thousands or millions of different avatars, each with their own distinct "personality" filter and play style, and aggregate interactions with thousands to millions of people. You seem to be saying that isn't computationally possible...
Your definition of "strong AI" seems to be "any AI that has an application".
It's not nearly as hard to create some kind of automatic personality system as it is to solve the mystery of intelligent thought! That's not computationally crippling, it's just extremely difficult to create.
Actually, there's a specific AGI I have in mind. I'm looking into the computational overhead issue you brought up, because it directly affects the feasibility of commercial applications. However, I have a suspicion that the computational load is architected in such a way that it can do what you claim is currently impossible; let me get back to you on that after consulting a third party.
One other thing: having a singly-embodied AGI interacting as a robot isn't mutually exclusive with games training AGI. ARGs and such can apply there, as do the general games of human society: language, economics, law, and so on. The point is, the game can be structured to emphasize certain things, and implicit in that structure is a contract of "empathy", in the sense of mutually reinforcing models of self and other. I hadn't really considered robotics much in my thinking, because that seems more expensive and is outside the realm of my expertise, but I appreciate the jolt to my worldview.
You may be right; this isn't the AGI I was referring to, but supposedly they're using 100 teraflops to run one instance, which is pretty expensive. However, an architecture could use the 100 teraflops as the baseline, then use an additional 20 GHz or so per instance, something like that - the economy of scale would make sense there.
"Impossible and then easy" is still "impossible" first and "easy" second.
I forgot to drop the link.
Can I get a touché?
No. It's a motherfucking chatbot. I made one when I was ten.
It's just a chatbot with a big budget.
I'm beginning to think you don't know the difference between general AI and weak AI.
This discussion is getting out of hand. Let's leave assumptions aside and see how things play out in the near term.
Though I should note, it probably takes a generally intelligent chatbot to fuck mothers.