Thursday, July 10, 2008

Cheap Computing Means...

One thing that always amazes me is the way that people can't understand that "more" does not necessarily mean "surplus". The cost of computing is getting so cheap that any punch-card programmer teleported to the modern era would be unable to comprehend how we could use a cell phone's processor at 100%, let alone a desktop's. But we do. Constantly.

It's only going to get cheaper, of course. What are we going to do when computation gets so absurdly cheap it is close to free?

I like this silly idea, and I'd love to see you comment with your own ideas. But remember: no miracle breakthroughs. Just the ever-present doubling.

My idea is this: a mesh network made out of super-cheap nodes. Yeah, pretty tame. Probably be here in five years. So let's kick it up a notch.

Let's give every node a speaker that can pump out high-frequency pulses, and a mic to pick up the same.

So, the first step in this mess is that the mesh network tries to map itself out. By sending out a wireless notification and a sound pulse at the same time, a node lets its neighbors measure the gap between wireless and sonic reception and convert it into a distance. With at least three nodes in sounding range, they can compare numbers and come up with exact relative positions.
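The ranging trick here is just time-difference-of-arrival: the radio packet arrives effectively instantly, so the lag before the sound shows up is all travel time. A minimal sketch of the idea (Python, assuming an idealized ~343 m/s speed of sound and perfect clocks — real nodes would average many noisy pings):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C (an assumption)

def distance_from_delay(delay_s):
    # Radio is effectively instantaneous, so the gap between
    # wireless and sonic reception is (almost all) sound travel time.
    return SPEED_OF_SOUND * delay_s

def trilaterate(p1, r1, p2, r2, p3, r3):
    # Standard 2-D trilateration: subtracting pairs of range-circle
    # equations gives two linear equations in (x, y); solve the 2x2 system.
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a, b = 2 * (x2 - x1), 2 * (y2 - y1)
    c = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    d, e = 2 * (x3 - x2), 2 * (y3 - y2)
    f = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a * e - b * d  # zero when the three nodes are collinear
    return (c * e - f * b) / det, (a * f - d * c) / det
```

Three ranges pin down a unique 2-D position as long as the three reference nodes aren't in a straight line — which is why "at least three nodes in sounding range" is the magic threshold.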

This is obviously messy when we're talking about, say, inside a house. However, by measuring sound quality we'll probably be able to at least tell that we're not on a clear path. (Sound that echoes from walls and is channeled through doorways/windows is going to be much "fuzzier" than the same sound going straight through the air.)

If you have enough nodes, it's actually possible to map out the blockages by who hears who at what resolution... but let's skip that, because it's not required.

Okay, who cares? Why do we want a mesh network that knows its own physical locations? How POINTLESS!

Ahhh... what we've actually built is a distributed sonar system.

A node sends out a sonar pulse and receives back a mess of echoes. Because a node is not a wide array of mics, it cannot personally interpret these echoes. (It probably has two mics, one on each end, but that's not really enough to be able to tell much.)

However, all of the other nodes within sounding distance hear the pulse, and also get a wad of echoes. Different echoes, because they're in a different location.

What we have is a wide array of mics. We know their exact location. By sharing their information, we can calculate what echoes originated where, and we can build a sonar map.
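One way to picture the shared-echo math: each echo's arrival time at a mic constrains the reflecting object to an ellipse with the ping source and that mic as foci, and intersecting the constraints from several mics at known positions pins the reflector down. A brute-force sketch, assuming a 2-D world, a shared clock, and hypothetical helper names of my own invention:

```python
import math

C = 343.0  # m/s, assumed constant speed of sound

def echo_delay(source, reflector, mic):
    # The ping travels source -> reflector -> mic.
    return (math.dist(source, reflector) + math.dist(reflector, mic)) / C

def locate_reflector(source, mics, observed_delays, candidates):
    # Grid search: pick the candidate point whose predicted echo
    # delays best match what every mic actually heard.
    def error(p):
        return sum((echo_delay(source, p, m) - d) ** 2
                   for m, d in zip(mics, observed_delays))
    return min(candidates, key=error)
```

A real system would use something smarter than a grid search, and would have to untangle which echo came from which object first — but the core is just geometry over known mic positions.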

By originating from different nodes, we can get different angles on our sonar, and clearly see "behind" and "through" things that any given node can't handle.

By changing the frequency of the ping, we may even be able to tell what kind of materials any given surface consists of.

So, what we have now is a mesh network that knows the world around it. Because these are intended to be pathetically cheap, this is not real-time sonar analysis: it keeps an averaged map because it pings maybe once every few minutes. So if every street lamp had one, the street would be mapped but cars and pedestrians would be "traffic blurs" rather than "this car is moving at 60mph down main street".

Okay, this is starting to be kind of interesting. If done right, it would be the ultimate traffic system, able to tell how bad traffic is, whether there's construction happening, and so forth. Hell, because the speed of sound is slightly different depending on temperature and moisture, you might actually be able to tell what the weather is. You can certainly hear rain!
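The weather trick works because the speed of sound in air rises by roughly 0.6 m/s per °C. The nodes already know their exact spacing, so a measured ping time gives the current sound speed, and the speed gives a temperature estimate. A dry-air approximation (humidity adds a smaller correction this sketch ignores):

```python
def speed_of_sound(temp_c):
    # Dry-air approximation: ~331.3 m/s at 0 °C, +0.606 m/s per °C.
    return 331.3 + 0.606 * temp_c

def estimate_temp(known_distance_m, measured_ping_s):
    # Nodes know their spacing, so a ping time yields the current
    # sound speed, and inverting the formula yields temperature.
    c = known_distance_m / measured_ping_s
    return (c - 331.3) / 0.606
```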

Let's keep going.

The point of a mesh network is that you can drop new nodes in. So let's say that you carry a portable node with you. Using the locator ping (not a full sonar burst, just a ping like we talked about first), you can easily be tracked if you want to be. So the mesh network not only allows you to connect and do data-stuff, it also knows precisely where you are.

There are some interesting uses at this stage, but let's push it further.

Your node has headphones. You wear the headphones.

Suddenly, there's a "sonic world" around you. All of the nodes know precisely where you are (perhaps even specifically your left and right ear), so they can tell your headphones exactly what sounds to play.
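Rendering that "sonic world" is mostly geometry once positions are known: a virtual sound source farther from an ear arrives later and quieter, and the tiny delay difference between ears is what places it in space. A toy sketch (ignoring head shadowing and the HRTF processing a real spatial audio system would need):

```python
import math

C = 343.0  # m/s, assumed speed of sound

def ear_params(virtual_source, ear_pos):
    # Per-ear delay and a simple 1/distance gain (clamped so a
    # source right at the ear doesn't blow up to infinite volume).
    d = math.dist(virtual_source, ear_pos)
    return d / C, 1.0 / max(d, 0.1)

def interaural_delay(virtual_source, left_ear, right_ear):
    # Positive means the sound reaches the left ear first --
    # the cue your brain uses to place the source on the left.
    dl, _ = ear_params(virtual_source, left_ear)
    dr, _ = ear_params(virtual_source, right_ear)
    return dr - dl
```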

This intermediate step would allow for some interesting "phantasmal presences"... but let's push on to the big finish.

Assuming a slightly greater amount of computation, let's make your node have a camera and playback system. Slim little VR goggles, for instance. The nodes all know where you are and where you are looking. They know exactly what you can see, or should be able to see.

They can map what you ARE seeing to that. They can build up a full internal representation of what the world should look like, and they can play it back to you.

This would allow for a lot of interesting things. For example, they could tag other augmented-reality users, make them glow. They can highlight a path you should take - a big step up from "in 300 yards, turn left" of today. They can "clear up" bad weather and let you clearly see where walls and curbs are even in the dark. They can allow you to post messages on any surface, using any kind of grouping: "tag" a bus with something only your friends (or internet group) can see.

Virtual pets are, of course, a given: able to run around the physical world, they are rendered with loving care by the mesh network, realistically interacting with things from that world.

Advanced uses might include "virtual makeovers": interactive mirrors that let you see yourself dressed in whatever the shop is offering (or you've found online). Custom displays for various surfaces, such as windows that display a view of Mars, or a street that is tagged with the locations of every accident in the past five years... cars that are "souped-up cyber-rides" - take off your specs, it's a Pinto. Put 'em on, it's half Batmobile, half pneumatic main battle tank.

And this is all with the mesh network not being able to get moment-to-moment sonar data. If we were to upgrade the network to use something more continuous, it would be able to do a lot more things!

These are just the uses I can think of... I'm sure there are a lot of others. I'm sure that, in fifty years, augmented reality will be as important as real reality in the same way cell phones are often as important as real conversations...

What sort of future-shock do YOU see cheap computation accomplishing?

2 comments:

Chris said...

I take it you've seen or heard of Denno Coil?
http://en.wikipedia.org/wiki/Denno_Coil

If not, I think you'll be in for an interesting surprise. You can find a fansub copy of the first episode at dattebayo if you haven't already.

That said, I like a lot of the mechanics of your approach. Watching the first episode of Denno coil, I wasn't quite sure how the entire thing worked/fit together, but mesh networks actually make the entire thing "click" together quite nicely.

Craig Perko said...

Hmmm, I MAY have...