Tuesday, January 20, 2009

Extending the Human Mind

Yeah, this one's all futurism. Try to ignore the political tone if it grates on you. It's not the central theme, it's just setting the stage.

Today I was not listening to the inauguration. To be specific, it wasn't that I simply wasn't listening; it was that I didn't want to listen. Political speeches are, by their very nature, almost completely devoid of anything resembling meaning. I didn't want to hear the speech, and I definitely didn't want to weather the teeth-grittingly obvious televangelist-adapted speaking techniques. Memetic infection vectors, if you want to be suitably super-nerdy.

But I was interested in understanding what the speech meant and, perhaps more importantly, how people were reacting to it. So I watched the Twittering. I got my piecemeal analysis of the speech. And then, just to check, I read a transcription and verified that the analysis was, in fact, correct.

There were some parts I cared about. Not a fan of letting an ancient death cult with a coat of paint determine my future; there's still not much actual separation of church and state going on. But his paragraph about science was the most promising part. If he means it.

But the speech was more or less unimportant. The Twitter feed was more important, because those people are pretty good indicators of the pulse of the nation and, I'm hoping, pretty good predictors of future trends.

For example, after he said "and non-believers", virtually every post for thirty seconds was about how great it was that he gave "us" a "shout out". We'll politely ignore the paragraphs where he talks about following god's will and being guided by scripture, since that sort of crap is mandatory for US politicians and we are easily quieted by table scraps. But it means that, at least for a while, the nonbelievers - who outnumber the American Jewish and Muslim contingents combined - are going to be very happy with our new president. That's probably not an analysis you'll see much in more mainstream news sources, and it's not something you could get just from reading the speech.

The point is that the analysis was more important than the content. So instead of doing the analysis myself, I allowed thousands of people to do the analysis and then picked out the best bits I could find. Open source analysis, I guess.

Things come to light when you do this. Correlations and details not in the actual event but brought up in the analysis. Trends that may predict future political activities... for example, based on the Twitters I saw, I would predict that African aid is going to be an extremely low priority in the upcoming four years. I'm not saying that because I've analyzed the variables or made any logical connections, I'm saying it because it was the only part of his speech that I didn't see one Twitter about. It's possible they were just lost in the staggering number of posts, but it's statistically unlikely that there were very many.

But it's possible. There were a staggering number of posts, and they might have been out of sync with my rabid refresh-clicking. What I would have preferred was to read all the twitters, but since they were coming at roughly 100 per second, that wasn't going to happen.
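To put rough numbers on that hunch - and every number here is a guess of mine, not a measurement - the back-of-envelope math looks something like this:

    # Back-of-envelope check (all hypothetical numbers): if I only caught a few
    # hundred posts, how likely is it that I'd see zero posts about a topic
    # that some fraction of the stream actually mentioned?
    sample_size = 300        # posts I actually managed to read (a guess)
    topic_fraction = 0.01    # suppose 1% of all posts mentioned African aid

    miss_probability = (1 - topic_fraction) ** sample_size
    print(f"Chance of seeing none of them: {miss_probability:.1%}")
    # Roughly 5% with these guesses - so seeing zero suggests the topic really
    # was rare in the stream.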

What I really needed, you see, was an analysis of the Twitter posts in the same way that the Twitter posts were an analysis of the speech. With the goal of reducing the posts down to a reasonable number, either by displaying only the most trend-representative posts or by simply describing the trends in the posts.
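Just to make "describing the trends" concrete, here's a minimal sketch of the kind of reduction I'm imagining. The plain word-count scoring and the sample posts are placeholders I made up, not a real design:

    from collections import Counter

    def trend_summary(posts, top_terms=5, top_posts=3):
        # Crude trend finder: count words across all posts, then surface
        # the posts that contain the most of the trending words.
        counts = Counter()
        for post in posts:
            counts.update(set(post.lower().split()))
        trending = {word for word, _ in counts.most_common(top_terms)}

        def score(post):
            return len(trending & set(post.lower().split()))

        representative = sorted(posts, key=score, reverse=True)[:top_posts]
        return sorted(trending), representative

    posts = ["loved the science paragraph", "science funding at last",
             "he gave non-believers a shout out", "great speech overall",
             "the science bit was the best part"]
    print(trend_summary(posts))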

Humans couldn't do it: even if we put ten thousand more people on analyzing the first ten thousand people's posts, there are too many posts, coming too quickly. It requires software.

A software agent. A piece of personal software that reads all the Twitter posts and tells me what to think about them.

... what? ...

"Wait! You want software to think for you?"

Well, DUH. There's a lot of information-analysis scut-work. I don't have time to keep up with all the information, even if I wanted to. Keeping up is going to require us to outsource our thinking, to create a layer of abstraction between us and the original data.

Of course, I'm not interested in letting just any old program do my thinking for me. It has to be software that I can trust and that I can dive into and make sure it really is thinking like me. I'm not going to buy "MS Brain Replacement", but I might get "Neuralinux" or whatever. Properly understood and under my control, this software would be able to parse the high-volume information streams, understanding what my personal values are. It would also be able to leverage other infomorphs' analysis: for trend analysis, for saving computation, and for trusted analysis on restricted information.
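To sketch what I mean by "understanding what my personal values are" and "leveraging other infomorphs' analysis" - and this is purely illustrative, with every interest weight, agent name, and trust score invented for the example:

    # Hypothetical sketch: rank an item by how well it matches my own values,
    # blended with other agents' opinions of it, weighted by how much I trust
    # each agent. None of these names or numbers are real.
    my_interests = {"science": 1.0, "separation of church and state": 0.8}
    trusted_agents = {"agent_a": 0.9, "agent_b": 0.4}

    def relevance(item_text, agent_opinions):
        text = item_text.lower()
        personal = sum(weight for topic, weight in my_interests.items()
                       if topic in text)
        borrowed = sum(trust * agent_opinions.get(agent, 0.0)
                       for agent, trust in trusted_agents.items())
        return personal + borrowed

    items = [
        ("Pledge to restore science to its rightful place",
         {"agent_a": 0.7, "agent_b": 0.2}),
        ("Celebrity spotted in the inauguration crowd",
         {"agent_a": 0.1, "agent_b": 0.9}),
    ]
    for text, opinions in sorted(items, key=lambda x: relevance(*x), reverse=True):
        print(round(relevance(text, opinions), 2), text)

The point isn't the arithmetic; it's that both my priorities and my trust in other people's analysis become explicit, inspectable knobs rather than something baked into somebody else's editorial process.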

This method of extending the human mind has been talked about a lot in the more futuristic segment of geekhood. Some people talk about outrigger brains that will directly interface with our own, but at least in the foreseeable future, it's more likely to be this kind of software agent. I can see, looking further ahead, that the line would become blurred - where our mind ends and the software begins. I don't see that as a problem, as long as our software is our software.

After all, we trust our cars. We trust airplanes. We trust our banks. We do a lot of trusting of external agencies, and these agencies radically enhance our capabilities. Our brain is no different, except that we're only now starting to take hold of the reins. Until recently, we've been forced to trust news agencies, politicians, and other rather questionable sources of analysis. But connectivity works in our favor: our personal software agents will be able to "see" a huge number of sources and get millions of opinions.

I look forward to it, personally.

I look forward to being in the voting booth and seeing names I don't recognize. Pull up my infomorph. It tells me what I think of their politics, their scandals, and so on. No reason to blind-vote or go the party line.

I look forward to posting an essay like this one and suddenly getting a rash of highly educated specialists poking holes in it.

I look forward to the capabilities I can't even imagine.

How about you?

10 comments:

Adrian Lopez said...

Brain 2.0?

I think software intelligent enough to think for me is likely to suffer from many of the same cognitive faults as myself and my fellow humans, and perhaps from other faults unique to these digital brains. With that in mind, I wouldn't trust these brains enough to outsource my thinking to them.

Brain 2.0 as a debating partner? Can't wait. As a tool for unfiltered brain augmentation? Get away from me with that thing, doc!

Craig Perko said...

I think you'll find your views become old-fashioned as we get used to it. Sort of like when the first high-speed trains were being tested and people were terrified of the idea of a device that could go a whole 30 mph.

Foolish humans, speeding so fast: any error was sure to leave them as paste, even assuming that the sheer velocity didn't make them go gibberingly insane!

Of course, over time, these concerns abated as faster travel became safer and more common. I expect the same thing to happen with this kind of software.

Adrian Lopez said...

Would my views become old-fashioned due to these brains being more reliable than my pessimistic outlook suggests, or would they become old-fashioned simply because humans have historically demonstrated a willingness to leave the thinking to others?

If it's the former, I simply lack the imagination to conceive of an intelligent device that's practically invulnerable to magical thinking even though it draws its conclusions from data supplied by large numbers of people who do in fact suffer from magical thinking. If, however, it is the latter, I can say without any reservations that humanity is doomed.

Let's not forget... the wisdom of crowds has given us all kinds of religions... and Hitler.

Craig Perko said...

No, I think you're approaching the situation from the angle suggested by our culture, rather than dispassionately.

It is not a matter of quality, but of quantity. When machine looms came into being, there was a lot of uproar over them. Sure, they allow fewer people to make more, higher-quality goods, but those goods lack "soul", and sometimes the looms eat people. (Not as bad for the workers, overall, as being crouched over an old-school loom 12 hours a day, but they skip that part.)

Regardless of what people thought - and still think - about the matter, machine looms reign supreme. This is because we can do more with machine looms. As the years went by, our machine looms didn't just surpass the production of hand looms, but left it so far in the dust that a single textiles factory in England can produce more textiles than the entire population of England could back when it was done by hand. But only an idiot would argue that this increase in production has been bad for society: everyone's quality of life is so much higher with our mass-produced pants.

Personal infomorphs are the same matter, only we're weaving thought.

There is some concern over the concept of "losing our minds" to software, but it's not a concept that exists in reality, it's a romanticized ideal based on our current culture's preoccupation with our mental "hand looms".

Our minds are not some kind of inviolate temple. Our brains have a clear boundary, but our minds do not: they are influenced deeply by many external factors, ranging from how you use your cell phone to what paintings you hang on your wall to what news media you watch.

This is not a matter of replacing a part of our minds. It is a matter of switching us from relying on external sources we DON'T control (news media) to external sources we DO control (an infomorph we've customized and secured).

Except this new source is so much more capable than the old sources that it's almost impossible to compare.

We see it from our own perspective. We don't see the vast tapestries, complex patterns, and new fabrics that these new thought-looms can produce... we only see that the way we think today is in danger of going extinct.

... I hope I'm being clear, but it's hard to tell, when you're working with this kind of ridiculously far-out concept.

Ellipsis said...

In a way, we're already doing this. Not just with major news sources like CNN - I'm thinking of feed readers, for example, which are a tool for filtering content from blogs, which are themselves interpretations of the wealth of information.

If, instead of trying to read all the thousands of posts, you just read a couple dozen, you can take what you see as the relevant aspects of those posts and distill them into a single post. If a hundred people do this, you now have the relevant content compacted into a hundred posts, and those posts will then be shared to varying degrees depending on their perceived insight. So at the end of the day, you might get 5-10 posts that are representative, in some sense, of the original set of thousands.

Patrick said...

Surely, any sophisticated person in this future will refine means of risk-managing the worldview positions offered to them by their software. Remember, most hedge funds are going down because the patterns of the market became too chaotic and broke their trading systems; the same applies to worldviews.

Soyweiser said...

If I read this correctly - http://www.moserware.com/2008/05/who-is-this-licklider-guy.html - having machines expand human knowledge and assist us in doing clearer thinking has always been the plan. That idea jumpstarted the internet.

Adrian Lopez said...

"Surely, any sophisticated person in this future will refine means of risk managing the worldview positions offered to them by their software."

But that's not having your software think for you, is it? An intelligent data mining tool I can deal with. Software that thinks for me, I'm not sure that I can.

Anyway... it's tough to argue against a futurist's pet theories. There's always something the critic "doesn't get", supposedly just like the old-fashioned masses of people who didn't get today's most important inventions.

People "laughed" at Galileo when he suggested the Earth orbited the Sun, but people also laughed at Bozo the Clown. The best thing about being a futurist is that you usually get to be dead long before being proved right or wrong.

Ellipsis said...

At this point I feel obliged to point out that part of the resistance to the idea in Perko's post stems from the fact that he's choosing the most dramatic sounding words to describe it.

There's opposition to the idea of having software "think for us" but none to the idea of having an "information filtering tool," when they're basically the same thing. Books are a way of "outsourcing memory," for example, but no one's worried that books are ruining our memories (even though they actually have - the average person's memory isn't as robust in a literate era, because it doesn't have to be). It isn't even because we're just used to books that we don't mind them - we immediately recognize that they're a tool to be used by us, and they only became ubiquitous because they really are useful.

Not that I mind dramatic ways of describing things myself...

Craig Perko said...

Sorry that I take so long to post replies, guys, I only have time to do so in the evenings.

I'd been thinking about exactly how to say what I meant, but then Ellipsis went and said it for me.

I really am describing a data mining tool. But there isn't some clear line as to... well, he described it, I won't go into my duplicate description.

Things like CNN and, hell, signing up for this blog on your reader - those are all data mining tools. They affect how you view the world.

I'm simply talking about a tool that actually reports intelligently, taking into account your personal concerns and preferences.

Then, of course, I'm wandering off into the future where this sort of thing becomes so powerful that it is hard to tell where you stop and where your software begins.

Re: Patrick. I think it's an important point. You don't STOP thinking just because you have something very effective to help you think. The same pattern recognition system you use for driving safely and cooking tacos will serve equally well at managing these information agents.

Soyweiser: Yeah, Licklider was ahead of his time. And ours.

I never intended to claim this was my idea... it's fairly common in certain crowds. I just wanted to highlight how it got brought to the fore in my mind.

Re: Ellipsis (older post): I think you could do that, and at least in the near term, most of the more valuable analysis will still be done by humans. But computers are faster and have better memories, so I suspect that they will start doing more and more of the grunt work, even if it's only preliminary to having a human weigh in.