Wednesday, November 02, 2005


A lot of people like the idea of a "singularity", or other, similar theories.

Personally, I love the idea. But there is something to remember:

99% of humanity isn't interested in participating.

This creates one hell of a drag! As far as I can see, there are only three ways a "singularity" could occur. On the plus side, I can't see many ways in which a singularity will fail to occur - just ways in which it can be pushed forward and back.

1) The early adopter curve. In theory, a singularity could be steadily "adopted" in, the way the internet has been. This would probably take 20 years per "stage", plus an additional 20 years. However, I don't see this as very likely when compared to the other options.

2) A biological upgrade. The reason there's so much cultural drag is that the majority of people are against changing. Offer eternal youth, and that 99/1 ratio becomes a 1/99 ratio. Given the remarkable advances in biotech, this is plausible. However, given society's behavior toward such research, providing it would be akin to an act of sheer anarchy. If this route is followed, human society would utterly collapse and be rebuilt in less than twenty years.

3) Carnivores. This is the way I see as most likely. Carnivores don't much care about bringing humanity out of their ancient ways. Instead, we - uh, I mean, they - utilize existing human infrastructure to fund our future, thanks to our vastly improved ability to manipulate it. The end result from the human side is "business is good"... but the end result from our side is "thanks for the space ship, see you never." The end result is an obsolete, languishing humanity watching the ever-growing exploits of their children, who may be in space or might stay on earth. Please note, this is compatible with type (2), if type (2) bioadvances are restricted, either by law or by cultural norms.

The ideas of intelligence amplification and artificial intelligence are entirely type 3. Most of humanity won't get amped. That's even more true if it's an AI, rather than a human, doing the thinking. I hold that true AI is inevitable, but my short-term bet is on IA, which is happening as we speak. As you read this blog, for example.

The final thought here is a bit unsettling:

All your "learning" is useless, save as practice. What you need to do is adopt practices which increase your capacity for learning and cogitating. You can expand your "vision" by, say, reading really good blogs, but that doesn't actually increase your capacity above what the intellectual exercise provides.

Increase your capacity by using all the methods you can find. For example, making friends with geniuses is a great way to increase both your and their intelligence. Unfortunately, this is still very difficult on the internet due to a lack of peer pressure. Keeping extensive and well-networked data records is another way to increase your intelligence, as is subjecting yourself to extensive memetic control (also known as "reading something incredibly inspiring every day").

To expand your intelligence in the "correct" direction requires you to know what direction you're expanding in.

I'm still working on that bit...

But if you don't think of your mind as a device that can be improved, you're really missing out. Whatever your intellectual capacity, it can be shoved through the roof if you try.

Don't think about acting. Act about thinking. Then thinking about acting will be so much easier!


Patrick Dugan said...

I subscribe to the Yudkowskyian/Goertzelian viewpoint that recursively self-improving AGI is going to blaze the trail and realize that most extreme moment of change referred to as "The Singularity". Sure, IA happens when people read stuff and interact with tech to become "smarter" - a very nice book by Stephen Johnson makes this very point about popular culture in general (including videogames). As you pointed out, these new connections and innovations are merely manifestations of memetic evolution under the constraint of human intelligence. The real challenge is figuring out how to build an implicitly Friendly AGI (a FAGI, I suppose) and ensure that the AGI community complies with the necessary standards. If you're really interested in the technical details of the AGI movement, you should subscribe to the SL4 mailing list. I recommend reading some of Ben Goertzel's essays as well; he's working on one of the most far-along AGI projects at the moment, in addition to being a very cool guy.

Here's my thing: yes, 99% of the human population is going to resist transhumanism/singularity the same way a large percentage has resisted the internet and globalization. However, I think the best way to help people wake up and smell the quantum computing coffee is by using interactive games as a means of not merely transmitting memes, but encouraging personal memetic growth in the audience. The closer global human culture gets to the critical moment, the more games along those lines will be needed.

Craig Perko said...

Those are the same things I thought when I was younger - to a "T". Now I believe those ideas to be untenable.

Patrick Dugan said...

What has made you find recursively self-improving AGI untenable, besides the obvious hard software problems? Do you disagree that the interactive medium can be a tool in helping mass amounts of people improve their cognitive faculties?

Craig Perko said...

I believe hard AI is untenable in the near future. Perhaps a genius will uncover it in the next five years, but I see no signs of advancement in the field, so I doubt it.

I don't believe anything can improve "mass amounts" of people, because "mass amounts" of people rarely can be improved. I simply believe that the top few can improve themselves significantly using IA. The rest will have it forced on them at an insultingly slow pace and with astonishing inefficiency.

Patrick Dugan said...

I'd say a genius might pull it off in 5-15 yrs. Personally, I'm very interested in how Ben Goertzel's Novamente project turns out, though it's probably got a few more years of implementation and learning feedback to go. It probably won't undergo take-off, but it will likely have a big effect on cognitive science as well as applications in pattern recognition and data representation (maybe even game design).

As for the rest of them, maybe I'm a populist, but I think striving to "enlighten" as many people as possible is a worthwhile aim. The question is, if highly potent memetic adaptations which might start someone on a path of IA (whether with raw hardware or just interfacing ability) are inspired in someone by playing an interactive story, is this "forcing" IA on someone or letting them choose for themselves? You're a reductionist, right? You concede that absolute free will is an illusion; even if you convince someone to do something, there's still a subtle sort of coercion going on, isn't there?

Craig Perko said...

I don't see Novamente as a viable project. They do not have a viable pattern recognition system, so their engine cannot "learn" save at the most simplistic levels. "Naturally emerges" is not a viable method, as a hundred projects have demonstrated.

As to your second point, I'm not saying humans can't be convinced to "enhance" themselves. However, that takes time, and time is a luxury I'm not planning on giving them. I'm not going to wait around for them: the race has already started.