Here's something a bit lighter and software-related.
I've just seen Microsoft's "year 2019" videos, and they're all about touchscreens.
As someone who actually WORKS with touchscreens and tablets both in my hobbies and in my work, allow me to edjumicatify you.
I llllove touchscreens and I think that the killer app is going to be flexible touchscreens. Once we have a flexible Kindle, we're going supernova. Nothing will ever be the same. Imagine a thin, hardback book, about the size of a MacBook Air. It has a dozen pages in it. Each of the pages can display moving video and is touch sensitive.
The pages are designed to be removed from the book, to operate as independent displays or, if arranged in a lattice, larger displays. They can hook up with other books and pages. You can use "smart inertial dragging" to flick things from one page to another even if they are not attached to each other.
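To make the "smart inertial dragging" bit concrete, here's a toy sketch of how a page might decide where a flicked object lands. Every name in it is made up for illustration; assume each page knows its bearing from the source page in some shared coordinate space.

    # A minimal sketch of "smart inertial dragging": given the flick
    # direction on the source page and each page's bearing in a shared
    # coordinate space (all hypothetical), pick the page the object
    # should fly to.
    import math

    def pick_target_page(flick_angle, pages):
        """pages: list of (page_id, bearing), bearing being the angle in
        radians from the source page to that page. Returns the page whose
        bearing best matches the flick, or None if nothing is close."""
        best_id, best_err = None, math.radians(30)  # 30-degree tolerance
        for page_id, bearing in pages:
            # smallest absolute difference between the two angles
            err = abs(math.atan2(math.sin(flick_angle - bearing),
                                 math.cos(flick_angle - bearing)))
            if err < best_err:
                best_id, best_err = page_id, err
        return best_id

    # Flick roughly to the right; the page sitting to the right wins.
    print(pick_target_page(0.05, [("left_page", math.pi),
                                  ("right_page", 0.0)]))

The nice part is that the pages don't need to be touching: as long as they can agree on where they are relative to each other, a flick just becomes a direction to match.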
Yay!
But!
And this needs to be stressed.
BUT!
Touchscreens are quite literally an interim technology.
The real next step is going to be visual recognition.
Touchscreens are limited by the reach of your arm. The screen I develop for at work is fairly large as such things go, and it's pretty tiring to use. I can't imagine using a wall-sized touchscreen for anything besides collaboration (where it's important for the other users to clearly see your motions).
Instead, what I expect is for larger screens (and virtual screens) to instead detect your intentions by your eye and hand gestures. You don't reach up and drag something across three feet of wall. You look at it, gesture slightly with your hand, and look at where you want it to go. Vwip, it goes there.
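If you want that look-gesture-look move in pseudocode-ish form, here's a toy state machine. The event names (gaze_at, pinch, release) are invented; a real system would get them from eye and hand trackers.

    # A toy state machine for the look-gesture-look move described above.
    class GazeMover:
        def __init__(self):
            self.held = None         # object currently "grabbed"
            self.gaze_target = None  # whatever the eyes are on right now

        def gaze_at(self, thing):
            self.gaze_target = thing

        def pinch(self):
            # Grab whatever the user is looking at when the gesture fires.
            if self.gaze_target is not None:
                self.held = self.gaze_target

        def release(self):
            # Drop the held object at the current gaze point. Vwip.
            if self.held is not None and self.gaze_target is not None:
                print(f"moving {self.held} to {self.gaze_target}")
                self.held = None

    ui = GazeMover()
    ui.gaze_at("window_3"); ui.pinch()     # look at it, gesture slightly
    ui.gaze_at("left_wall"); ui.release()  # look at where it should go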
Similarly, interacting with controls is a matter of looking at them and making subtle hand gestures. You don't need to be anywhere near the screen, as long as it can tell where you are looking precisely enough.
Now, you're not always going to want to be looking at what you want to control. For example, an audio guy might be studying the waveforms of whatever he's listening to while adjusting knobs somewhere else. It's pretty easy to set this up, either with positional recognition (when he looks there, he wants the controls over here) or with lock-assignation, the way signers set up referents in space. That is, look at what you want, gesture to "lock it in" to a place (say, your left hand, or just off to the right), and then gesture relative to that.
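Same deal in sketch form: bind whatever you're looking at to an anchor, then route gestures made at that anchor back to the bound control. All the anchor and control names here are hypothetical.

    # A sketch of lock-assignation: lock a gazed-at control to a spatial
    # anchor, then forward gestures at that anchor to the locked control.
    class LockAssigner:
        def __init__(self):
            self.bindings = {}  # anchor -> control currently locked to it

        def lock(self, gazed_control, anchor):
            # e.g. lock the EQ knobs to your left hand while you watch
            # the waveform display
            self.bindings[anchor] = gazed_control

        def gesture_at(self, anchor, delta):
            control = self.bindings.get(anchor)
            if control is not None:
                print(f"adjust {control} by {delta}")  # send to the control

    desk = LockAssigner()
    desk.lock("eq_knobs", "left_hand")  # glance at the knobs, lock them in
    desk.gesture_at("left_hand", 0.1)   # eyes never leave the waveforms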
THAT is the real future of large screens.
Touchscreens are just a handy-dandy interim technology that we'll use until we get there.