It's always on the back burner: the idea that games are The Big Thing that will Enable Us To Solve The Big Problems. Well, I don't think that's true. If something vaguely gamelike does those sorts of things, it will be about as similar to today's games as we are to plankton.
To give a specific example, let's talk artificial intelligence. Real artificial intelligence - hard, general intelligence.
It would seem that games are just perfect for "training up" such an intelligence, don't you think? They've got clear rules, clear rewards and punishments, and a wide variety of situations. Plus, they have the ability to interact with other players! I mean, sure, you aren't just going to plunk one down in front of Final Fantasy MMCXLVIII, but surely you could start simple and work your way up to it, right?
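The appeal is easy to see: with explicit rules and rewards, a game reduces to exactly the kind of loop a reinforcement-style learner expects. Here's a minimal sketch of that idea, assuming a toy one-dimensional "game" I made up for illustration (nothing here is a real system):

```python
import random

# Toy "game": a corridor of 6 cells; reaching cell 5 pays +1.
# Clear rules, clear reward -- the property games offer a learner.
N_STATES = 6
ACTIONS = [-1, +1]          # step left, step right
GOAL = N_STATES - 1

def step(state, action):
    nxt = max(0, min(GOAL, state + action))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

# Tabular Q-learning: Q[state][action_index]
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.1

random.seed(0)
for episode in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit, occasionally explore.
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda i: Q[s][i])
        nxt, r, done = step(s, ACTIONS[a])
        # Standard Q-learning update toward reward + discounted best next value.
        Q[s][a] += alpha * (r + gamma * max(Q[nxt]) - Q[s][a])
        s = nxt

# After training, the greedy policy marches right toward the goal.
policy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(N_STATES)]
print(policy[:5])  # index 1 == "step right"
```

A six-cell corridor is about as far from Final Fantasy MMCXLVIII as you can get, which is rather the point: the loop scales in principle, but not in practice.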
No, not really.
You see, any general intelligence algorithm is going to build a mind out of experiences. There are a few algorithms out there already that might work... but they can't really be tested due to some very specific constraints.
One major constraint is hardware: even the biggest, nastiest supercomputers can't handle the kind of data wrangling these algorithms require. This is especially true because of the confusing, high-fidelity inputs we receive from cameras, microphones, and gyroscopes.
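To put rough numbers on "high-fidelity" (the figures below are illustrative, assuming a modest camera and microphone, not any particular hardware):

```python
# Back-of-envelope data rates for raw sensor input (illustrative numbers).
cam_bytes_per_sec = 640 * 480 * 3 * 30      # 640x480 RGB at 30 frames/sec
mic_bytes_per_sec = 44_100 * 2 * 2          # 44.1 kHz, 16-bit, stereo
total_mb_per_sec = (cam_bytes_per_sec + mic_bytes_per_sec) / 1e6
print(round(total_mb_per_sec, 1))  # ~27.8 MB/s of raw input, before any interpretation
```

And that is just the firehose; the hard part is what the algorithm has to do with every byte of it.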
To get around this constraint, we can try to use these algorithms on simpler data. Instead of using cameras, we use a short movie clip that can be analyzed at leisure. Instead of microphones, we might try text.
But the thing is, this doesn't end up working. The "encoded" data may be simpler, but it carries with it a huge slew of assumptions that the algorithm cannot learn, because they are wholly outside its realm of perception. It's theoretically possible to create "neutered" encoded data that will "train up" an algorithm, but not only is it painfully difficult, the result is obviously not a terribly good example of general intelligence. Even then, it won't fix the underlying problem.
For example, text is very simple to represent. And there are just cartloads of algorithms out there that can analyze vast swaths of text and come up with something resembling a good representation of written language.
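As a trivially small instance of that family (the corpus here is invented for illustration), a word-bigram counter picks up the surface regularities of text without any notion of what the words refer to:

```python
from collections import Counter, defaultdict

# Toy corpus; a real system would ingest vast swaths of text.
corpus = [
    "the orc attacks the knight",
    "the knight attacks the orc",
    "the knight heals the knight",
]

# Count word-bigram frequencies: a crude statistical "representation" of the text.
follows = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1

# The model "knows" which words tend to follow which --
# pure surface statistics, with no idea what an orc or a knight is.
print(follows["the"].most_common(1))  # [('knight', 4)]
```

The model can tell you that "knight" usually follows "the"; it cannot tell you what a knight is, which is exactly the gap described next.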
But this doesn't really allow the algorithm to discuss things. The algorithm can build a representation of the text, but even if it could "think" about things like context, it really has no way to determine context. Context is not really part of our written language. Instead, context is learned through exchanges of language. Interactions.
In principle it's possible that, given enough time and enough people chatting with it, a general algorithm could learn to represent context and whatever other details need representing. The general algorithm could "think". But, in practice, this never happens.
Well, let's look at a game. Let's skip the easy steps and move straight to a game you would think would be suitable: WoW.
Let's put a general intelligence in WoW. WoW has a few big, obvious advantages: easy-to-understand rules, other players, and conversations at every level.
In theory, the general intelligence would learn (if it were built correctly) how to fight, how to scout, how to run away.
Things like buying equipment and grouping aren't in the same domain, but our general intelligence has a miraculous set of fundamental systems, so he will eventually pick up on those things too.
Okay, so now our general intelligence is running around in WoW, understanding "LFG TS13+" or whatever.
Or is he?
How is he understanding these complex things?
Through that algorithm, right?
Yes, that algorithm and many, many months of experience. Many, many months of experience, all carefully analyzed and collated. He's built up a massive set of interpretive systems to deal with this complex data and produce a decent result.
He has to know that this land slopes too steeply for anyone to run up... which means that he can't escape if he's on the bottom, but the monster can't reach him if he's on the top. He has to understand that the monster over there can heal, and the monster over there is a DPSer. He has to know that when someone says "orly" they are going to suck in a party and need to be ignored.
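The judgments above can be caricatured as hand-written rules, which is precisely what a general learner would have to induce for itself from play. Every name and number below is invented for illustration, not real game data:

```python
# Caricature of the interpretive judgments described above, as explicit rules.
# A general intelligence would have to *learn* each of these from experience.

def can_escape(my_slope_limit, terrain_slope, i_am_uphill):
    # Too steep to run up: trapped at the bottom, safe at the top.
    if terrain_slope > my_slope_limit:
        return i_am_uphill
    return True

def target_priority(monster):
    # Healers first, damage-dealers second -- a standard player heuristic.
    return {"healer": 2, "dps": 1}.get(monster["role"], 0)

def should_ignore(chat_line):
    # "orly" in chat -> likely a weak party member, per the example above.
    return "orly" in chat_line.lower()

monsters = [{"name": "shaman", "role": "healer"}, {"name": "raptor", "role": "dps"}]
first_target = max(monsters, key=target_priority)
print(first_target["name"], can_escape(30, 45, False), should_ignore("ORLY??"))
```

Three rules fit in a screenful; the full set of such judgments a competent player carries around does not.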
Even if his algorithm can handle that amount and diversity of data, what is it running on? What computer is running this maze of information? What system is supporting this nightmarishly complex structure that - I guarantee - will be at least fifty gigs of compressed data?
It is impossible to exaggerate how much computation this level of sophistication requires. Whether you're very sophisticated in a game or very sophisticated in real life, that sophistication has to be computed.
See, getting around the complex inputs from cameras and microphones doesn't actually solve anything. The computation required to process complex inputs is significantly less than the computation required to put the processed results together intelligently.
So, sure, if you had the computational power to run a general intelligence, you might be able to run it in a game. Personally, I think that 99.99999999% of games - certainly all non-MMOGs - simply don't have enough interactive complexity to allow such an algorithm to learn adequately. But, in theory, you could run it in a game.
Why would you?
Why wouldn't you slap a camera and a microphone on it?
I can see reasons you might want to develop a game-like environment for such a thing, but those are specific applications, not general research. You could argue that hardware is irritating and faulty, but games are buggy, shallow, and have very poor interactivity compared to real life.
No, games are not The Key to this issue...
Although, obviously, feel free to disagree and tell me why.