One of the things I've been playing around with more and more recently is different input types. I've used touchscreens a lot, but I've also been toying with Xbox controllers, augmented reality gear, and accelerometers. Nothing worth talking about in particular, but it highlights something I really want to talk about: what you can do is largely limited by how you can do it.
For example, one of the things I frequently want to do is adjust some other element of my interface temporarily. Right now, I want to turn down my music so I can focus on this paragraph a bit better. I did, but it meant stopping writing, reaching for the mouse, dragging the pointer over to a window on the other screen, and turning the volume down. In other words, it was distracting. Ideally, I could have just glanced over, or made a quick hand gesture.
This is just one of a thousand tiny details I wish I could tweak. It's a little hard to see it now, because we can't do it: it's like someone using VisiCalc trying to imagine the kinds of features hardware a million times more powerful will allow.
Trying to imagine the kind of work environment we might get out of more aware computers is a similarly difficult task. Let's do some thought experiments.
If you've read this far, take a moment to try to imagine a future computer. It can read you like a book. What kind of interface features does it have?
...
...
Come up with anything?
One of the big features we'll see almost immediately is some kind of "broad vs. precise" gesture set. The ability to affect environmental programs without having to hunt them down is already becoming very common. Keyboards and laptops are often equipped with volume controls, even play and stop buttons. My keyboard actually has more than a dozen buttons I've never used, ever, on any keyboard, and that's in addition to the useless OLD buttons I never bothered using.
But the addition of an audio environment is a fairly recent one. The idea that a computer would constantly play music as you worked is relatively new and weird, so it's only now that we realize we want to be able to control it without interrupting whatever we're actually doing.
There are a lot of other features that we might see added to this environmental backbone.
For example, I can imagine my other monitor having a pastiche of imagery popping along it. The idea would be twofold: 1) to give me "microbreaks", and 2) to produce a kind of visual "white noise" to cushion me against any actual visual distractions. Such a program could easily integrate various alerts and email notifications, and I can imagine a lot of really interesting low-impact ways to handle it (a Twitter map would be fun).
However, this kind of thing is only good if the computer can tell fairly well when you've gone into "work mode", and even, fairly precisely, what mood you're in at the moment. The "sideline" stuff would be highly irritating, even distracting, if it assumed you were in a mode you weren't in. This could easily be handled by reading eye movements, even if they couldn't be read precisely. But this kind of system can't be controlled with a mouse or keyboard: it needs to be driven largely by a passive system, something the user doesn't need to activate.
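Just to make that concrete, here's a rough sketch of what I mean. It's entirely hypothetical: the gaze source, the screen geometry, and the thresholds are all invented, and a real version would sit on top of an actual eye tracker. The ambient panel only rotates imagery or surfaces alerts when the gaze stream suggests I'm not locked onto the work screen:

```python
# A toy sketch of a "passive" ambient panel: it never asks for input, it just
# watches a stream of gaze samples and decides whether I'm in work mode.
# Hypothetical throughout -- the gaze source, screen geometry, and thresholds
# are made up for illustration.

import random
from collections import deque

WORK_SCREEN = (0, 0, 1920, 1080)   # primary monitor, in pixels (assumed)
WINDOW_SECONDS = 10                # how much recent gaze history to consider
FOCUS_THRESHOLD = 0.8              # fraction of gaze on the work screen => "work mode"


def on_work_screen(x, y):
    """True if a gaze sample lands on the primary (work) monitor."""
    left, top, width, height = WORK_SCREEN
    return left <= x < left + width and top <= y < top + height


class AmbientPanel:
    """Rotates low-impact imagery and queued alerts, but only when the user
    doesn't appear to be locked onto the work screen."""

    def __init__(self):
        self.samples = deque()     # (timestamp, landed_on_work_screen) pairs
        self.pending_alerts = []

    def feed_gaze(self, x, y, now):
        self.samples.append((now, on_work_screen(x, y)))
        # Drop samples that have fallen out of the sliding window.
        while self.samples and now - self.samples[0][0] > WINDOW_SECONDS:
            self.samples.popleft()

    def in_work_mode(self):
        if not self.samples:
            return False
        focused = sum(1 for _, hit in self.samples if hit)
        return focused / len(self.samples) >= FOCUS_THRESHOLD

    def tick(self, now):
        if self.in_work_mode():
            return None            # stay quiet; just keep queueing things up
        if self.pending_alerts:
            return self.pending_alerts.pop(0)
        return "rotate background imagery"


# Quick simulation with fake gaze data, since there's no real eye tracker here.
if __name__ == "__main__":
    panel = AmbientPanel()
    panel.pending_alerts.append("2 new emails")
    for step in range(30):
        now = step * 0.5
        # Pretend the user glances off the work screen occasionally.
        x = random.choice([960, 960, 960, 2500])
        panel.feed_gaze(x, 540, now)
        action = panel.tick(now)
        if action:
            print(f"t={now:4.1f}s  {action}")
```

The point isn't the particulars; it's that the user never touches anything. The panel reads the situation and decides for itself when it's allowed to be interesting.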
It may sound like this sort of thing would be distracting. But... that's not really the case. It's very hard to see how much something would get used until it's actually being used. There have been plenty of famous quotes from experts radically underestimating adoption rates - a certain amount of RAM should be enough for everyone, or someday there will be a phone in every city in America...
Imagine further. Today I'm reading a textbook on investing. Sometimes the book is very dense. Imagine that a delicately configured program can tell not only what text I'm reading, but my comprehension level (by reading my pupil dilation, skin tension, or similar). As I read, some kind of halo follows my center of attention around. When I hesitate or backtrack, it highlights related parts of the sentence and runs through it again.
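The backtracking part, at least, isn't hard to sketch. Here's a toy version (again hypothetical: real fixation data would come from some future eye tracker, and the "jumped back three or more words" rule is invented) of how a reader might decide which span to run through again:

```python
# Toy sketch: given a stream of fixations (word indices, in the order the
# reader looked at them), decide when the reader has backtracked and which
# span of the sentence is worth re-highlighting.
# Hypothetical: real fixation data would come from an eye tracker, and the
# "jump back 3+ words" rule is invented for illustration.

def find_regressions(fixations, min_jump_back=3):
    """Yield (start_word, end_word) spans to re-highlight.

    A regression is a fixation that lands well behind the previous one,
    i.e. the reader has jumped back to re-read something.
    """
    furthest = -1   # furthest word the reader has reached so far
    previous = None
    for word_index in fixations:
        if previous is not None and previous - word_index >= min_jump_back:
            yield (word_index, furthest)
        furthest = max(furthest, word_index)
        previous = word_index


if __name__ == "__main__":
    sentence = ("the marginal tax rate applies only to income "
                "above each bracket threshold").split()
    # Simulated fixation path: reads ahead, stumbles, jumps back to word 4.
    fixations = [0, 1, 2, 3, 4, 5, 6, 7, 8, 4, 5, 6, 7, 8, 9, 10, 11]
    for start, end in find_regressions(fixations):
        print("re-highlight:", " ".join(sentence[start:end + 1]))
```

Add pupil dilation or skin tension as a confidence signal and you'd have a textbook that notices you struggling before you do.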
Wow, that sounds really distracting!
Your children won't think so...
But none of this is possible until we get something more than a mouse and keyboard!
2 comments:
On the other hand, many keyboards allow you to adjust volume by pushing one of two buttons.
Biometrics and non-invasive BCIs have a lot of potential here, but there are some tall walls to climb. I think the low-hanging fruit is in interaction models that, regardless of interface, tend to coax out enough data to steer the user in the right direction.
I quote me: "Keyboards and laptops are often equipped with volume controls, even play and stop buttons. My keyboard actually has more than a dozen buttons I've never used, ever, on any keyboard, and that's in addition to the useless OLD buttons I never bothered using."
Unfortunately, it is a slapdash, clumsy solution. For example, I want a lot more control. I want the thing to start playing techno sometimes, other times rock, etc., etc. Should I remap each of my function keys to a different music style?
I think that there is definitely a need for better UI design in general, but we're reaching a very hard limit on the kind of accurate subtlety we can glean from today's interfaces.