Sunday, February 21, 2010

Talkative Technology

Ugh, wake up on Sunday morning and there's Warren Ellis, posting at the top of his game. I haven't even had coffee yet.

Here's an interesting article for you about ubiquitous computing. Apparently, there was an interesting convention on the other side of the planet. Fortunately, Mr. Nova gives us a summary.

I tend to focus on the software side of things, so when I think of ubiquitous computing, I have a specific vision: the hardware fades away and we're left with ubiquitous transparent interfaces. The thought exercise is how being "virtualized" or "made smart" can make things in your life easier and better to use.

The classic example is a toaster. We have a toaster in the office. It is almost unusable because it is "too smart". Instead of letting you set a heat level, a timer, and then press a button, it requires you to select an exact type of thing you want to toast ("bagel", for example) and a number of slices. Of course, the list of things you can toast is too long to be easy, but too short to actually include the things you're likely to want to toast, and the idea of "slices" is insufficient for that all-important brownness level. So you end up guessing: is toasting this cut-open croissant more like toasting a single slice of bagel, or more like toasting two slices of bread?

Whatever you choose, the toaster is a bad design. It looks nice, very Star Trek, very chrome, but it actually does its job worse than a toaster with two dials and a button.

I can't really think of a toaster that's significantly better than two dials and a button. There's not much need for a toaster that's smarter than that. The next step would be a toaster that could fetch the bread, butter it, and toast it up while you're still figuring out whether your pants are on frontwards or not. That's not something that can be virtualized, and your toaster will never get there by simply being smarter.

But there are things in your life that can be made better if they are smarter.

Almost all of them are, for me, methods of putting yourself into a community without needing to actually be there. As an example, I wouldn't at all mind if my stereo played songs recommended by my friends (or even respected strangers). Sure, the music tastes sweeter hunted down track by track, but I seem to spend most of my time hunting down ridiculous futurism articles instead.

How about a picture frame which fades between images that your friends have recommended, or even to newly taken pictures from their photo galleries? Or displays a virtual world where you all "live"?

This just starts to touch on the idea of passive integration into some kind of community. It will be a while after that before we start to see more aggressive integration. This is because (A) we haven't come to the stage where an on-line community is as strong as a real community and (B) it'll take us that long to figure out the privacy concerns.

Still, think about all the more aggressive options.

For example, your notebook that you scribble little notes in. What if there's a "public notes" section which works a bit like Buzz, except it's got a tablet interface for doodles and scribbles as well? Doodles and scribbles are a lot more interesting for everyone involved, and I find that half-formed ideas often offer more potential than the final idea, simply because they can go a dozen different ways while the final idea can only go one. I call this "idea aji", which I won't bother to explain.

This notebook idea has no place in our current lives. There's just no mechanic for it. Even if we had a notebook capable of doing that (which we do, actually, but let's pretend we don't), there's no mechanic in our on-line life for this kind of sharing. It looks like Twitter on the surface, but it's very different underneath.

There are dozens more examples. For example, my piano keyboard would be a lot more entertaining if it could connect to faraway piano keyboards and allow us to play distant duets. My kitchen would be a lot more useful if the various cabinet doors were screens for recipes and labels. My clothes would be more entertaining if they could arbitrarily change their patterns, especially if you lost a bet.

And it'd all be a lot more useful if I could "flip" windows from one screen to another across the hall, or in my pocket, or across the city.

Really, ubiquitous computing isn't about ubiquitous computing at all. It's about ubiquitous interfacing. The problem is that, at the moment, we don't have the community infrastructure to allow for that interfacing.

Even if all the technology was cheap, even if all the software existed, we still wouldn't be ready, socially, to use it.

So the question then becomes "which baby steps get taken first?"

The more immediate question is "why am I posting stuff like this before coffee?"

Friday, February 19, 2010

Piracy and Economy and Ethics

The recent debacle of over-protected games has made the situation even more clear. The old ways are on their last legs. If people are refusing to buy your game because it is easier to steal it, there's something wrong with your business model. Especially if those people are a major part of your target audience.

A government's laws are basically unimportant and unenforceable when it comes to this sort of thing. Sure, there's always froth, always some poor innocent grandma or seven-year-old getting crushed on the public stage, but you're more likely to get hit by lightning than charged with piracy. This is true no matter what kind of piracy you engage in, aside from fairly obvious exceptions such as actual, physical hijacking of naval vessels.

I don't pirate because I find I (A) want to support people who produce things I like and (B) have enough money to do so. The only time I am tempted to pirate is when a game I want to play has ridiculous anti-piracy measures. Once pirated, I don't have to worry about the piracy protection. Which is an indication of the hopeless nature of the fight.

However, I haven't pirated in quite a long time. I generally find there are plenty of games I want to play, so I just go get a different one. (This is actually pretty severe. I went to Bioshock 1's opening party, got myself "adam" shots from models in bloody nurse outfits, the whole affair. I did not buy (or pirate) Bioshock 2. Because of its "protection".)

My "morality" of not pirating is not some kind of fundamental morality. In this situation, there is no fundamental morality. There's a reality, and it is our nature as humans to adapt to the reality, and then backwards-reason excuses for ourselves. Anyone who proclaims that piracy is wrong, or anyone who angrily claims pirate protection is oppressive, they're both reasoning backwards to reach that point.

The truth is that the cost of data duplication is so low that it's more or less free. So if you think that someone has the right to steal music, or pirate games, or whatever, that's irrelevant. Same with thinking that the people who produced it have a right to gate it. Also irrelevant. Both of those are reasons provided by your clever backwards-thinking brain in response to older experiences in an older reality.

Welcome to today.

As time progresses, our morality will shift to a data-centric view. Because data is cheap to duplicate and extremely hard to contain, we are going to drift towards a "data is free" mentality. Your kids (or grandkids, I don't know how old you are) will look back on the idea that game companies charged for distributing games with awe. Sort of like you looking back on the laws intended to protect scribes from the printing press. How could anyone have tried to stop or slow down the printing press? Sure, you feel for the scribes, but you can't save them.

It's the same situation here.

Right now, most of our morality still derives from rationalizing our old behaviors in our economy of stuff. In an economy of stuff, stealing something is bad because it makes that something not available to the person who owns it. The idea of patents and copyrights was perhaps the first major reference to data economies, but that was in a very different time with very different characteristics. We have formed our morality around them, but it isn't a fundamental morality. Our current era works differently, and we'll watch as our morality swings to favor reality.

How far this will go or what will result is hard to say. Some people believe it will be the death of art, games, music, and so on. These people are really dumb. It may be the end of the reign of pop music and EA, but games and music will continue on without any problems. Hell, people will make them for free even if they can't figure out a way to profit from it. (Although I think there are plenty of ways to profit.)

Other people think it will be the death of money. This seems equally unlikely. Our data economy still exists in the "cracks" of our fundamental economy, the one that sells us power and lattes and other physical things. It may be that in the fullness of time these things will become so easy to produce that they will not be worth much money, but until then, we'll always need cash.

All I can say for sure is that it means the death of copyright and patents as we know them. It may take twenty more years, but eventually people will begin to think that distributing data is a fundamental right, like being allowed to walk down main street. They will begin to think this because we will rationalize the fact that we do distribute data willy-nilly. Our moralities arise from our world, they are not some magic set of rules handed down from on high.

Please note this also applies to DNA, and our ability to manipulate it.

Thursday, February 18, 2010

FF13 and Weak Play

So, I pre-ordered FF13 yesterday. Because I buy almost every RPG in the desperate hope of finding one that has a spine. I know, I know, Final Fantasy: not the place to look.

When you pre-order, you get a little book right away. The main purpose of the manual is to sell you the strategy guide, but along the way it accidentally teaches you a bit about how to play the game. And I learned something interesting: it saves immediately before each battle, and completely restores all your health after each battle.

Not two weeks ago I was busy ragging on Mass Effect 2. Every encounter in ME2 is packaged up separately from the rest of the game, because every encounter is a boolean win/lose: you are always restored to full before the next fight, and no fight has lasting consequences as a result of your actions. What I mean is this: if you can replace any fight with "press A to continue" without having any impact on the rest of the game, it may be that those fights are A BIT POINTLESS. This is made worse by the fact that the fights themselves are painfully boring, although I'm apparently the only person who thinks so.

ME1 was the same way, but at least it featured a complex system of leveling and inventory management that allowed your adventure to interact with your combat, if not the other way around. ME2 ditched that, so you end up with what is essentially a movie with four endings. A movie that often pauses itself and forces you to go find the remote before you can continue.

Now, I expect bad gameplay from FF13. It's a given: every FF game has worse gameplay than the last in their head-over-heels rush to become more MMORPG-like and less... interesting, fun, or deep. But to take it so far? To make it so that there is NO attrition in your adventure? NO worry about the future?

I sat down to think a bit more. The idea is inherently disgusting. It's like instead of letting you paint a beautiful picture, they are instead asking you to paint one line on ten thousand sheets of paper. But I must be more careful. If it's just one AAA game doing this, I can see it being a shortsighted mistake, or targeting a dumber breed of gamer, or something. But virtually EVERY AAA game on the market these days has this same "dumbed down" approach.

FF13 has a few details that are obviously a desperate attempt to reintroduce some kind of non-boolean result to the fight. You get a rank in stars depending on how well you do, although it's not clear what that rank earns you. And, of course, the recent FF games have started to rely extremely heavily on consumables (probably another MMORPG relic). So if you do badly and have to use a potion, then that's a potion you won't have for later. But I haven't seen an FF game in the past decade hard enough to make me use even a tenth of the potions I just randomly pick up off the floor, so that's not really much of a price to pay.

Also, you do, eventually, start getting various kinds of rewards from fights. According to the booklet, the first half (185,561,839 hours) of the game features NO statistical character development, though, so I'm not sure how well that is going to work. It seems like fights could be easily replaced by "press A to continue" with no effect on the larger game.

I have to ask myself: is it important that consequences cascade from your actions? Or is that a relic of my childhood?

ME2 has completely separated combat and adventure. While you can get adventure-side upgrades that affect your combat, they are extremely minor. Leveling up has almost no relationship to what you do in combat OR adventure, so while it is important, it doesn't descend from your actions much.

You could argue that they make your social interactions matter, but that's a lie that should be obvious to anyone. There is very little difference between choosing to be a pushover or an asshole. It's just minor flavor changes, nothing significant on any level.

Still, it gets exceedingly, insanely high scores. I don't hate it, either. It's just that the gameplay is about as deep as a square.

Perhaps my focus on gameplay has been too tight? I like my pretty graphics, I like my fun storylines, but... I just can't get excited about a game where the gameplay is literally just squatting behind a rock and occasionally popping your head out to automatically hit an enemy with a special attack. It's so... low-agency. Especially when it doesn't matter how well you did.

I always thought most people over the age of 15 felt the same way, but evidently not.

What do you think?

Wednesday, February 17, 2010

Squaring Space

I've been thinking a lot about the interfaces of the future. Not just the hardware, but the actual visuals involved. One reason I'm so interested is because there is a symbiotic relationship between what we think we want to do and what we can do with our current systems. It is hard to "jump" forward, because it is hard to see around the corner.

I'd like to talk about a few potential uses for very different interfaces. I cannot think of any way to do these interactions on our current linear (sometimes fractured- or multi-linear, but always linear) interfaces. All of these require some kind of new interface format. I'd like to talk about several, but I want this post to be shorter than a book, so I'll stick to just one.

The one example is "the open conversation". This is the idea that you want to talk about a given topic. For example, you want to talk about green power. Specifically, you want to talk about the best setup for a solar power system.

In all the current methods of having an open conversation, you have two problems. One is getting the conversation started: nobody will hear you unless you've already got a pack of people listening to you, and even then, it's not likely that pack of people are solar power experts. Topic watch lists can help, but they aren't a very good solution.

The other problem is derailment. Your talk of solar power will get infested by other topics, eventually spiraling into a grand mess where Fox meme-bots argue that there is no global warming and nobody should ever use solar power. Even without them, you'll be pulled into all sorts of random, off-topic chatter.

The "open conversation" of the future needs to allow for both drift and on-topic conversing at the same time.

One possible way to do this is to have the posts in a kind of "starfield" arrangement instead of this linear vertical system we normally use. When you make a new post, you can link it to an existing post by simply dragging a line between the two. Similarly, you can link any post to any other post simply by dragging a line between them (and you may "label" this line with a post, so curious people can highlight the line and see why you've linked them).

Linked posts stay larger, longer, while less linked posts rapidly shrink away. However, links are not universal. They are inherited by degrees of separation. You get the links your friends can see at half strength. So this means that if your friend makes a link, you see it at half-strength, and your friends see it at a quarter strength. If someone is following both you and him, they see it at 75% strength (or similar) because they see it from both you and him. Some care needs to be taken to avoid recursive link-strength calculations, but it's really not that hard. Links can become extremely strong, well over 100%. Individual posts can also be upranked and downranked, which is also shared and affects their size.
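The inheritance rule above can be sketched in a few lines. Here is a hypothetical Python model: the function name, the breadth-first walk, and the depth cap (a crude stand-in for the cycle handling mentioned above) are my own assumptions, but the halve-per-degree decay and the additive combination of paths come straight from the description.

```python
# Sketch of inherited link strength: each degree of separation
# halves the contribution, and contributions from independent
# paths add together (so strength can exceed 100%).

def link_strength(viewer, link_maker, follows, max_depth=6):
    """Strength at which `viewer` sees a link made by `link_maker`.

    `follows` maps each user to the set of users they follow.
    The depth cap is a crude guard against the recursive
    link-strength calculations mentioned in the text.
    """
    total = 0.0
    frontier = [(viewer, 0)]
    while frontier:
        user, depth = frontier.pop(0)
        if depth >= max_depth:
            continue
        for followed in follows.get(user, set()):
            if followed == link_maker:
                # One more hop: halve the strength per degree of separation.
                total += 0.5 ** (depth + 1)
            else:
                frontier.append((followed, depth + 1))
    return total

# Your friend makes a link: you see it at half strength; someone
# following both of you sees it from two paths, so at 75%.
follows = {"you": {"friend"}, "both": {"you", "friend"}}
print(link_strength("you", "friend", follows))   # 0.5
print(link_strength("both", "friend", follows))  # 0.75
```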

A very basic physics system pulls the linked posts into a loose formation, where you can clearly see that they are clustered, but they're not so overlapped you can't highlight one and expand it. Fresh posts and links are the biggest and most obvious, so you won't overlook new commentary... but their size is relative to the importance of the posts they link to. So downrank the "root" node of the conversation and you're officially ignoring the whole thing. There may be some "bleed" if other links fold back into the parts of the conversation you're interested in, but it is small and easily ignored.

So if someone starts a long chain of global warming denialism, just downrank the root node and continue with your conversation.

At first glance, this doesn't seem very significant. It seems like it's a lot more maintenance than Twitter requires. But it's actually not as much maintenance as you think - it's basically a very, very fast way to reply, retweet, or create links. Not everyone bothers to link every tweet - quite the opposite. Usually it's only the original poster that will link the tweet, or perhaps their followers if they think the tweet is relevant to something else. Also, of course, these are not "tweets", they are posts, and can be any size and contain any kind of media.

How does this system solve the two problems of current methods of open conversation?

Well, first, the ability to link your post to an existing galaxy of posts allows you to ask your question and link it to a similar post. Anyone who thinks that sort of thing is interesting or was involved in the previous conversation will see your post standing out of the background radiation, because they have that conversation ranked pretty high (automatically, due to link-creating). The system will, of course, put your post in some kind of "possibly interesting" category for them: they won't have to hunt and peck through the galaxy of posts to try to find the new sparkles.

This is a method to automatically contact the people who are most likely to be able to help you, without knowing who they are, without having to bother them explicitly.

It solves the derailing problem by allowing for very complex threading - if people want to talk about some random side issue, they can. Over there. It doesn't derail your thread, so you can still have the conversation you want.

There are a fair number of usability questions. For example, on a normal screen, it would be fairly hard to really display the "galaxy" of posts and smoothly figure out where everything is. There are a few solutions to this, my favorite of which is using not-so-normal screens. However, even with normal screens it should be possible to do a "clustering" system where tightly linked posts are considered "a post" at the top level. As you zoom out further, whole conversations might be condensed to "a post", and you can simply link to the whole conversation rather than an individual post. This has some drawbacks, but can be used even on a normal screen. Smart phone screens are probably too small, period, but they could leverage your follows and preferences to give you some added capabilities.
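The zoom-out clustering could work as a simple thresholded union-find: the further out you zoom, the weaker a link can be and still hold posts together into "a post". A sketch, with the names and the threshold mechanic being my own invention:

```python
# Hypothetical zoom clustering: merge posts connected by links at or
# above a strength threshold. Zooming out lowers the threshold, so
# whole conversations condense into a single "post".

def clusters_at(posts, links, threshold):
    """Group `posts` using `links`, a list of (a, b, strength) tuples."""
    parent = {p: p for p in posts}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]  # path halving
            p = parent[p]
        return p

    # Union any two posts joined by a strong-enough link.
    for a, b, strength in links:
        if strength >= threshold:
            parent[find(a)] = find(b)

    groups = {}
    for p in posts:
        groups.setdefault(find(p), []).append(p)
    return list(groups.values())

posts = ["q", "reply", "tangent", "spam"]
links = [("q", "reply", 0.9), ("reply", "tangent", 0.4), ("tangent", "spam", 0.1)]
print(len(clusters_at(posts, links, 0.5)))   # 3 clusters, zoomed in
print(len(clusters_at(posts, links, 0.05)))  # 1 cluster, fully zoomed out
```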

Anyway, I really want to talk about some other possibilities, but this is already plenty long.

Monday, February 15, 2010

Fer the Children

Like most censorship, Australia's censorship attempts are touted as being "for the children". Everyone, even the Australian government, knows that's crap. You may be unaware of the full extent of Australia's censorship interests, which include not only censoring the internet, but also things like making pictures of flat women illegal. Obviously, this is a level of retarded that makes rocks and bits of dirt look pretty bright.

But I'm actually here to come at this from the other angle.

A lot of people argue that these censorship attempts are stupid because parents should look after their children. An adult should be responsible for the things their children see, and if they want to be safe without shoulder-surfing all the time, they should get a net nanny program of some sort.

Fair enough, except that, um, that's not gonna cut it.

Your daughter can browse the web on her phone. Is your net nanny installed on her phone?

The phone is just the tip of the iceberg. We're advancing very fast, and children are growing up with more and more computing devices, more and more ability to access data. But the parents are still thinking in terms of porno mags and filched beer.

Frankly, kids are more than a match for their parents. Even if mommy and daddy are computer experts, they're going to be very hard-pressed to keep up with their kid's movements on line. The kid has a lot of time, a lot of access points, and an almost infinite amount of energy to throw at any given problem.

And it's going to get a hundred times worse. If your kid is a baby right now, their teenage years are going to be unrecognizable to you. They'll be unfolding their flashpaper rig and popping onto the darknet that the kids at school have formed. They'll be getting answers to questions before the teacher has even asked them. And they'll have more information on the most recent earthquake across the planet than you do.

We are not breeding cute little humanoids to grow up in the seventies or eighties and become sharp-dressed Wall Street execs and prim wives. We're breeding infomorphs. We're breeding creatures of data. They're gonna run circles around their parents.

Some argue that this is a good reason for the government to step in. Hah! Parents don't stand a chance, but the government is just hilariously inept. I recommend it! The more censorship a government enforces, the earlier the children will learn to hack. Someday, a four year old with a smart watch is going to stand in front of Congress and bring up every blacklisted site on the planet, one by one.

What we should be thinking of is not censorship, not walls and fences, but instead havens. A child in the eighties could walk far enough to find truly dangerous and bad parts of town. However, they generally didn't, because they were satisfied to just walk down to the playground or the 7-Eleven alone. The need for independence, to figure stuff out on your own, was satisfied without needing to dodge into some sleazy brothel or mafia drug deal.

Wouldn't it make a lot of sense to have child data havens? Full of places children could go - with or without adult accompaniment? Places to discover things, to play with other children, or just to hang out? To be independent and figure stuff out on their own, without actually being in a dangerous area?

The seeds for these places already exist in MySpace or any given low-rent MMORPG. These could be expanded, structured carefully. A sort of "data suburbia". It would be tricky, but possible.

The idea wouldn't be to limit the children. It would be to make an environment rich enough that their emerging culture can handle occasional encounters with the nasty and vile deeps of the internet. In the same way that your kids survive falling off the monkey bars, or crashing their bike. They don't suddenly panic whenever they see monkey bars or bikes, they just get back on. Once they've had their cry and gotten a band-aid.

That's the sort of environment I think we need to create on the web. Not to make the web safe for children, but to make a place for children on the web. Not even a safe place for children. Just a place.

It would be very hard to do, because the creators would inevitably misunderstand what the children want, underprovide, talk down to them, and in general not make a place designed to grow with children year by year by year. But it should be POSSIBLE, and unlike censorship, it wouldn't be a fundamentally immoral act.

Thursday, February 11, 2010

What's the Buzz

Like everyone else on the planet, I'd like to talk about Buzz for a second.

I'm not one of the people who thinks Buzz is stupid or off-base: Buzz actually fits rather nicely into Google's framework and policies. But I'm also not someone who thinks Buzz is great.

There are things I like about Buzz that I think we'll see a lot of in future products, and I'd like to hit those and talk about further development on them.

First, Buzz is a prototype for an email replacement system. With just a bit more polish, Buzz could quite literally replace email entirely. I think that's a good thing, and I think we'll see a lot more things with that kind of functionality. I don't think Buzz will replace email, but it is a step towards something that will.

Second, Buzz does threaded conversations. So does Gmail proper, of course, but most "social" solutions don't. Twitter, especially, does threading extremely badly.

Buzz doesn't thread very well, to be honest. There are some nice innovations, but it is still making the assumption that (A) threads are inviolate and (B) I care more about popular threads. Both of these assumptions are wrong.

I expect that threading will become more and more important as time goes on, and in the future I expect to see products that thread much more intelligently. This includes not just Slashdot-style recursive threading, but also "out-threading", connecting to other threads or posts in a way that isn't a manual, clumsy link.

For example, Amy and I are talking about a cool YouTube video about man-eating squirrels. The threading system automatically "out-threads" us to other people who are also talking about that YouTube video. Their comments do not appear "in line" - they're not part of the same thread. But they do appear, and can be tracked.

Similarly, if Bob joins in and links us to another video about a sumo wrestling squirrel, his post is automatically "out-threaded" to link up to other people talking about the sumo wrestling squirrel.
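Mechanically, out-threading could start as nothing more than indexing posts by the media they reference. A toy sketch, with the data shapes invented purely for illustration:

```python
# Toy out-threading: any two posts that reference the same media URL
# are linked across threads, without anyone creating the link by hand.

def out_threaded(posts, author):
    """Return other authors whose posts share a referenced URL
    with any of `author`'s posts.

    `posts` is a list of (author, text, referenced_urls) tuples.
    """
    mine = set()
    for who, _text, urls in posts:
        if who == author:
            mine.update(urls)
    return sorted({who for who, _text, urls in posts
                   if who != author and mine & set(urls)})

posts = [
    ("me",  "man-eating squirrels!",   {"youtube/squirrels"}),
    ("amy", "saw that one too",        {"youtube/squirrels"}),
    ("bob", "sumo wrestling squirrel", {"youtube/sumo-squirrel"}),
]
print(out_threaded(posts, "me"))  # ['amy']
```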

The problem with these kinds of advanced threads is that they can't be done blind. There are a lot of people out there that I just don't care about, and I don't want to hear what they have to say. It's not a matter of just ignoring the little out-threads, either: I want good out-threads, not bad out-threads.

To this end there needs to be an algorithm which tries to determine if someone who I'm not directly connected to is someone I would find interesting. There are some algorithms to do this, but they vary in complexity and some of them are pretty absurd. Normally, I would recommend making them run on the client's machine, because it would be a lot of work to run on the servers. On the other hand, Google (or whoever) will want to abuse it for money, so running on the server might be the only profitable solution.
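One of the simplest versions of such an algorithm, given here only as a sketch, would score a stranger by how much their follow list overlaps yours (Jaccard similarity). A real system would layer on many more signals.

```python
# Minimal "would I find this stranger interesting?" score:
# Jaccard similarity of follow sets, from 0.0 (nothing shared)
# to 1.0 (identical tastes). Purely illustrative.

def interest_score(my_follows, their_follows):
    if not my_follows or not their_follows:
        return 0.0
    shared = my_follows & their_follows
    return len(shared) / len(my_follows | their_follows)

# Two of four distinct follows are shared: score 0.5.
print(interest_score({"ellis", "nova", "amy"}, {"nova", "amy", "bob"}))  # 0.5
```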

Anyway, "threading", "context", and a kind of "user terrain" are probably going to be more and more important as the years go by.

I want to follow someone's posts, but I don't want to hear their posts about their shoes or how much weight they've lost. These posts tend to also gain the most comments. This is why new algorithms are required: not only does the algorithm have to tell that I might want to listen to someone, but it also needs to be able to tell when I don't want to listen to someone I follow.

Buzz isn't this solution, but it's starting to point in the right direction.

Monday, February 01, 2010

The New Protocols

Originally, this was going to be about human language, but I had to build up too much of a computer-protocol backstory. So now it's about computer protocols. Maybe I'll do the human language one later.

Protocols evolve as the requirements change. Early computer-to-computer protocols were simply transferring bits and bytes wholesale. This worked because there wasn't as much traffic and all the sharing computers ran the same operating system and understood bits in the same way.

As time passed, we evolved dozens of more specialized protocols. Nearly every computer has most of the same protocols even if it runs completely different operating systems. We all access the internet, we all read text files, we all view images. There are differences: EOL characters, for example. But, for the most part, these protocols allow our computers to trade information transparently, without needing to even know what kind of system we're talking to.

The reason I'm rambling on like this is actually fairly straightforward. Computers have fundamentally changed how humans can gather and think about data. Protocols help that along. We need (and will get) better protocols.

We see the problem in email protocols. Email protocols were designed back in the early days, before the internet was really an "open" place. Therefore, they didn't worry much about contexts and limits and so on. So today, most of the email on the internet is blind-mailed spam. Most of the rest is unwanted but targeted spam, such as my continuous emails from Amazon.com telling me I want a Kindle or the newest book from Oprah's list, both of which I despise. Email is "sort of" efficient in that it seems efficient to us, but only because we aren't aware of the poor performance it's actually giving us. Sort of like a skateboard feels efficient until you get on a bike, and a bike feels efficient until you drive a car.

New email protocols have been discussed. A lot of people have talked about doing things like charging a tenth of a penny per sent email. But changing the email protocol is almost impossible, sort of like trying to replace a jet engine in mid flight. Personally, I think the answer is easier than that: I think email is on its way out. We don't need to replace the jet engine, because the jet is going to land safely at its destination and we're all going to get off.

That whole method of conversation, that whole protocol, is going to fade away. It's going to be replaced by feeds: RSS feeds, Twitter feeds, and new kinds of feeds that haven't really been invented yet. Feeds are a high-context pseudo-whitelist system that minimizes unwanted spam and maximizes comprehension.

These next generation feeds are going to radically increase the amount of context a message contains. If someone Twitters "I like these shoes!", we see the thread back to the shoes. Not an embedded link, exactly. A context thread. If we visit the shoes, we'll see threads lancing "forward" from them to people who have talked about them, and threads lancing "backwards" to things like shipping information, company information, who makes the shoes in what nation, and so on.

This is not something that is easy to imagine if you try to stuff it into today's linear, square-screen environment. Instead, throw that image away and try to think of something entirely different.

I don't know exactly what the dominant interface will look like. Perhaps some kind of mouseover-friendly "constellation" feeling. Perhaps it will be bullet points, and the computer will quickly give details about each in series, highlighting your salient concerns. Perhaps it will be more like a river, with contextualized information flowing rapidly by, random pieces flipping up to give you a feeling for the overall flow. I don't know. But the point is to think wonky, not to think of an evolution of today's displays.

We're talking about a fundamental change from a 1D system of information gathering and display to a much more complex system of information gathering and display. The tree of interconnectivity and context will grow faster and faster and faster, become more and more automated. This is not 1D by any stretch of the imagination, and even now our web pages fracture under the strain, separating into menus and sidebars and ads and last-posted links and and and. If you think this is a good way to display data, you're wrong. It's got to be replaced.

This complexity hasn't much leaked into email, which is actually bad. The fracturing is not a bad progression. It is simply growth pains. Our experience has gotten more interactive, higher context, faster communication. If we reach our news front page and we know how it works, we can see all of today's news quickly lined up, and instantly navigate in if we want more detail. Much more efficient than newspapers, certainly. We've gotten very efficient at navigating the web due to this fracturing.

More efficient with everything except how we, as actual people, talk to other people, as actual people. I.e., personal emails and communications.

That seems like the most important kind of communication to me. And I think the next generation of protocols, the next big method we use to interact with each other, will replace email. Replace it with something much deeper, much higher-context. And, hopefully, much less prone to spam.