Google Glass Almost Fades Into Background of Wonderful Weekend

by Mike Levin, SEO & Datamaster, 04/28/2014

Google Glass at Washington Square Park

It’s Monday morning and I’m wearing Google Glass again on the subway on my way to work after an activity-packed weekend, typing out this article one-handed on my iPhone. It’s my fifth day with prescription Google Glass, and over the weekend I did the kind of fun events Glass seems made for, raising my estimation of it. On Saturday, we went to the Cherry Blossom Festival at the Brooklyn Botanic Garden, full of Japanese-style cosplayers and beautiful blossoming trees and flowers. It was a feast for the eyes, and it was fun and almost natural to be winking off pics, never once stopping to pull out a phone and fiddle with an app and framing. I just kept pace walking with my group and kept both eyes constantly on my 3-year-old daughter, who had her own ideas about which way to head. The Cherry Blossom Festival was indeed a Google Glass sweet spot.

The next day we ended up downtown in Greenwich Village, at Washington Square Park, where they had these wonderful AstroTurf-covered rolling hills for the kids to romp around on, equipped with a hand-over-hand rope ladder crossing a little valley. Other tykes were sledding down the AstroTurf on cardboard beer boxes that some of the more creative parents must have brought with them. The area was only lightly fenced in with a low chain, and kids were running all around. Now, I’m not a believer in stranger danger, but one blink and my child could be out of my sight. It was a golden picture opportunity, but fiddling with a phone would have felt irresponsible, opening even that small opportunity to lose track of said kid.

Needless to say, this was another Glass sweet spot. The killer app is the default built-in picture-taking ability. I tried to make the experimental wink-to-take-a-pic feature work reliably, but I kept having to recalibrate it, and as my wife said, I didn’t want it to give me a nervous tic from trying and re-trying. Plus, you don’t want to risk missing that perfect moment because an eye sensor didn’t register a wink. Better to just reach up and press the hardware shutter button.

Speaking of which, I’m trying out that long-press on the shutter button to capture ten seconds of video more and more. Even with Glass, it’s hard to capture that one ideal precious moment, so if there’s any doubt, press and hold for an instant. I’m thinking of it like burst photo mode on recent smartphones and cameras, with the admittedly lower resolution of video. But the nuances of a situation you can capture that way are a thousand times greater; it’s just not that one perfect high-resolution moment. And I don’t yet see a way to snap high-res pics while the video is recording, as you can with smartphones.

Ah, the next point: my whole time in Washington Square Park, not one of the parents and almost none of the kids even seemed to notice I was wearing Glass. And I was regularly interacting with them, chatting with parents and weaving through kids. I was just another watchful parent, which I think is a major testament to the new prescription designs. Charcoal black fades into your face, even the hovering glass piece and thick side-pad, apparently. One kid, an observant little 10-year-old or so running along with his friends, suddenly looked up at me and said: “Is that Google Glass?” I smiled, said yes, and answered a few questions. He was all smiles and, it seemed, enamored of the idea. I congratulated him on being the only one who noticed, and he was very proud.

I notice my observational skills suddenly sharpening. There’s something about knowing that your very field of vision is always potentially composing a picture. I have always said to myself that one should observe one’s situation through internal narration: she was wearing this, he had such-and-such features. That habit predates Glass, and I have always imagined the killer app of a platform like this to be that very internal-narration system. There is, however, one critical flaw in Glass as it exists today, though I am not sure (yet) whether it is a hardware limitation: the lack of undetectable, voice-like input.

This undetectable thought-to-text self-narration, as a computer user interface, is not a new idea. Speaking out loud with a computer like that is a mainstay of Star Trek with the ship’s computer, but doing it on a more personal and clandestine level appears in the Ender’s Game series, where Ender talks to an AI named Jane through a tiny earring able to detect his sub-vocalizations. I’m sure the demand for such a feature will finally register with the mainstream when Ender’s Game or its sequels finally hit. The idea behind subvocalization is that when you’re talking to yourself in your head, as when slowly reading a book without skimming, you’re actually moving your larynx (voice box) just as if you were talking; you’re simply not pushing air through it to vocalize. These motions are theoretically detectable by a wearable. It’s not mind-reading. It’s just voice-to-text with much more sensitive mics, not terribly different from the jawbone microphones that already exist, or the bone-conduction transducer built into Glass.

Subvocalization is only one of the ways to go. There’s brainwave reading, as built into some game-console peripherals, which my very own company used to make a tug-of-war game for our #client Subway at the SXSW conference. This holds some promise too, but the signals need far more interpretation than with subvocalization, where you’ve already essentially composed and encoded your thoughts into words and the computer need only snatch them. There are other types of encoding too, like tapping out yes/no responses on the smartphone that’s still probably in your pocket (Microsoft is experimenting with this), or blinking yes/no responses. However, I’m skeptical of any scheme that has you twitching facial features to communicate, other than just plain talking.

I have to admit that I did just buy a Pressy from its Kickstarter project, which adds a hardware button to your Android smartphone by plugging into the headphone jack and can be programmed to trigger anything your phone can do. So one of my first Android apps will probably be a remote shutter button for Google Glass, so you don’t have to wink for a pic, tilt your head up and say “Okay Glass, take a picture,” or reach up to the hardware button on the frames. You will just reach into your pocket and press a button that’s already perfectly positioned there.
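If I do build that pocket shutter button, the heart of it is just mapping press patterns to the camera actions Glass already has: a tap for a photo, a hold for ten seconds of video. Here’s a minimal plain-Java sketch of that mapping — the class name, the 500 ms threshold, and the enums are all my own hypothetical choices, and a real app would also need the Android plumbing (Pressy shows up as a headset button, so you’d listen for the media-button broadcast), which I’ve left out:

```java
import java.util.EnumMap;
import java.util.Map;

// Hypothetical dispatcher for a Pressy-style pocket shutter button.
// Classifies a press by how long the button was held, then looks up
// the camera action to send to Glass.
public class PressyShutter {
    enum Press { SHORT, LONG }
    enum Action { TAKE_PICTURE, RECORD_VIDEO_10S }

    private final Map<Press, Action> bindings = new EnumMap<>(Press.class);

    public PressyShutter() {
        // Mirror Glass's own controls: tap = photo, hold = 10s of video.
        bindings.put(Press.SHORT, Action.TAKE_PICTURE);
        bindings.put(Press.LONG, Action.RECORD_VIDEO_10S);
    }

    // Anything under half a second counts as a tap (threshold is a guess).
    static Press classify(long heldMs) {
        return heldMs < 500 ? Press.SHORT : Press.LONG;
    }

    public Action dispatch(long heldMs) {
        return bindings.get(classify(heldMs));
    }
}
```

A quick tap (say, 120 ms) would dispatch a photo, while holding the button for a second would start the ten-second video, matching the press-and-hold habit described above.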

I am aware of the controversial nature of such plans, but hey, I did spend the money on Google Glass and the prescription lenses to go in them. I am determined to start developing for them, and my self-narration killer app is on the verge of sci-fi (another book to touch on the capability is Vernor Vinge’s Rainbows End, which reads like a Google twenty-year plan). I will work progressively toward self-narration, but along the way I need to take baby steps. And my baby steps should be buzzworthy, to recoup my Glass investment in notoriety at least. So, is this just boorish attention-grabbing? Well, yes, a little bit. After twenty-plus years of flying under the radar, doing internal projects for employers that only ever reached a minuscule fraction of their potential, it is time.

Google themselves built the Google+ reputation system to amplify messages and carry ripples of communication throughout the world. They even made a Ripples visualization inside Google+, so you can think of every posted picture, article, or status update as a stone dropped into a pond, with the ripples being +1s, shares, and other amplifications that radiate your message out, potentially around the world. I think about that as I snap and share pics and archaically type out my thoughts on a smartphone touchscreen that hasn’t quite caught up with my imagination, or even with the hardware sitting on my head.