Subvocalization Will Someday Be The Killer App for Google Glass

by Mike Levin SEO & Datamaster, 05/06/2014

It’s time for one-handed morning writing, wishing once again I had a Dictaphone built into my head. I can feel the nerd-need for silent narration à la The Wonder Years or Scrubs. It’s clearly tomorrow’s killer app waiting to happen—not just for next-gen blogging, but also for silent text messaging and everywhere else that writing and speaking are the preferred user interface (as opposed to audio/video and gestures). It’s the killer app of the wearable, embedded, Internet-of-Things world we’re embarking upon, right on the edge of springing into reality.

The vision is not new—it’s been articulated well in two of my all-time favorite SciFi books: Orson Scott Card’s Ender’s Game and Vernor Vinge’s Rainbows End. It uses a trick called sub-vocalization, where sensitive microphones pick up the movements of your voice box without your actually pushing air through it to make audible sounds—something that “feels” feasible even with today’s bone-conduction mics. This morning’s article is about pining for this feature—the ultimate user interface that Siri and Google Now are inching towards: silent command of, and interaction with, the machine world.

I’ve recently heard the term “embedded” start to be used to mean “embedded into the human body.” But really, the term merely means building sensors and data input/output capabilities (and perchance a tiny general-purpose computer) into common devices like toys or thermostats. Think Furbys, Nest and cars. The recent movement towards wearables merely extends an already broadly popular concept to articles we carry on our bodies.

The great part about this not-in-the-body embedded vision is how quickly you can get “off the net” or go offline when you wish. Addiction and bad habits aside, you merely take off your glasses or watch and you can revert to an Amish existence again if you like. Being plugged into and immersed in tomorrow’s enhanced-human world doesn’t mean losing your humanity. And the inevitable sub-vocalization user interfaces are an ultimate example of technology providing yet another tool.

Luddites fear change that upsets their worldview—the realities they were indoctrinated into, probably by their parents. The world works in such-and-such a way, and if you only follow these rules, you will be okay. You’ll find a mate, get a job, buy a house, retire with a pension, and have grandchildren running around the house whom you can spoil on weekends. Every culture has its version of the promise, and riding the excess industrial capacity and nationalism of a war, out-of-whack societal promises can actually work for a time. So now we’ve got suburbs, car dependency, and frustrating traffic congestion and gas prices eating away at our happiness and pocketbooks.

Change happens in great societies. It has to, because societies such as ours and the Romans and Egyptians and Mayans and all others are based on the social, economic and geographic realities of the day. Those realities change. Natural disasters strike. Nations are conquered. And world-changing inventions are made. Of all these, technology is the one that all of us can see play out within our lifetimes. Luddites can’t stem the tide. All they can do is group together in increasingly small enclaves waiting to go extinct through attrition. There are a few small exceptions, but I would propose they’re pragmatic traditionalists, and not really Luddites—Amish, for example.

Who wouldn’t benefit from a more perfect memory? And this whole sub-vocalization thing isn’t really one of those technologies that promotes intellectual laziness. Quite the opposite, in fact: recording quality thoughts requires quality thinking! Spoken and written languages are the encoding of ideas for later playback by similarly equipped idea-machines and reality simulators—our brains! And it’s frankly a difficult form of encoding, one that requires being thoughtful and really working the brain. This is as opposed to audio/visual languages that only require hitting “record” and framing the picture—thereby permitting a certain intellectual laziness. There, you don’t have the bottleneck of the human brain having to absorb, process, interpret, and encode it all as writing.

This is what’s missing in Google Glass. This is the killer app I’m working towards. If it actually turns out to be me creating it, I will be standing on the shoulders of many giants. I won’t be inventing the hardware. It’ll be Google or Jawbone or some company like that. I won’t be inventing the speech-to-text software. That’ll be… well, who knows. I’ll only be connecting the dots and bleeding a little on the bleeding edge as one of the first people trying to make it work, maybe with tongue-clicking Morse code or whatever’s actually possible today with Google Glass.
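Just to show how low the bar is for that first dot-connecting step, here’s a minimal sketch of the Morse-code part in Python. It assumes some hypothetical input layer (not any real Glass API) hands us tap timings as (press duration, following gap) pairs; short taps become dots, long taps become dashes, and longer pauses close out a letter or a word. Every name and threshold here is made up for illustration.

```python
# Hypothetical sketch: turn a stream of tongue-click (tap) timings into text.
# Assumes some input layer supplies (press_duration, gap_after) pairs in seconds;
# no real Google Glass API is used or implied here.

MORSE = {
    ".-": "a", "-...": "b", "-.-.": "c", "-..": "d", ".": "e", "..-.": "f",
    "--.": "g", "....": "h", "..": "i", ".---": "j", "-.-": "k", ".-..": "l",
    "--": "m", "-.": "n", "---": "o", ".--.": "p", "--.-": "q", ".-.": "r",
    "...": "s", "-": "t", "..-": "u", "...-": "v", ".--": "w", "-..-": "x",
    "-.--": "y", "--..": "z",
}

DOT_MAX = 0.20     # taps shorter than this count as dots, longer as dashes
LETTER_GAP = 0.60  # a pause this long ends the current letter
WORD_GAP = 1.50    # a pause this long also ends the current word


def decode(taps):
    """taps: iterable of (press_duration, gap_after) pairs in seconds."""
    text, symbol = [], ""
    for press, gap in taps:
        symbol += "." if press < DOT_MAX else "-"
        if gap >= LETTER_GAP:
            text.append(MORSE.get(symbol, "?"))  # '?' marks a mis-keyed letter
            symbol = ""
        if gap >= WORD_GAP:
            text.append(" ")
    if symbol:
        text.append(MORSE.get(symbol, "?"))
    return "".join(text)


if __name__ == "__main__":
    # "hi" tapped out: h = ...., i = ..
    taps = [(0.1, 0.1), (0.1, 0.1), (0.1, 0.1), (0.1, 0.7),
            (0.1, 0.1), (0.1, 2.0)]
    print(decode(taps))  # -> "hi "
```

The hard part, obviously, isn’t the lookup table; it’s getting reliable tap or sub-vocal events out of the hardware in the first place.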

Oh well, my subway ride is coming to an end. This is how much I can write in the morning, iPhone in one hand, coffee in the other. Take that, you suburban commuting chumps stuck in traffic. Even when sub-vocalization hits, you won’t be able to compose thoughtful pieces while driving because it’ll be too much like daydreaming—that is, I guess, until Google is driving your car for you.