Mike Levin SEO

Future-proof your technology-skills with Linux, Python, vim & git... and me!

Tackling Google Glass Development - Deciphering Precepts

by Mike Levin SEO & Datamaster, 09/19/2013

Make today count… again! Yesterday had some fantastic parts. Start by summarizing your projects. Do that DIRECTLY in email. Okay, I got that out. That’s a load off my back. Always CYA. Now, update the time system! Okay, that was an interesting reminder of how easy the alternate time entry app COULD BE! Really think about some hacking. Think about things like Wireshark and tcpdump. I was on that route earlier in my career (along with proxy servers like Charles), but I backed off, seeing it as a brittle sort of life. Working through official protocols and services gives you something that’s infinitely more reliable. But snooping on net traffic does have its uses.

You’ve sent out your update and you’ve got your time into eSM (mostly - going to try an HTTP snooping experiment with the remaining entries). But your next distraction sure as eff better be what you’re SUPPOSED to be doing right now - figuring out how to program Google Glass. Get them fully charged! And start making your daily journal entries interesting as eff! You’ve been handed a pair of Google Glasses and told to make them go. You even have apps to carry out - one of your own design and one from the company’s brainstorming process. The company’s idea has huge priority and precedence, but compile your learnings! New platform stuff!

Tackling Google Glass in such a way as to help catapult my career forward, advance the cause and tribe of Levinux, and benefit my employer to the maximum capacity is my next task. There are a lot of simultaneous equations to be solved, and it is tricky to truly even know which questions need to be asked at this point. Give it a try…

Do you HAVE TO use the Google App Engine to use the Mirror API? Their page https://developers.google.com/glass/downloads/ states…

To start development quickly, download a starter project. These projects deploy to Google App Engine and provide you a foundation from which you can develop your own project.

It’s time to be a bit of a Sherlock Holmes here. This is a brand new experimental platform, and as such, there is little likelihood of good documentation, examples, or a developer tool-chain, and certainly none of the advanced insider-knowledge nuanced tips and tricks that revolve around a mature platform. This is much like the over-5-years-ago introduction of the iPhone, and the shock that there were no installable apps at all - the entire responsibility was punted to Safari, HTML5 and “web apps”. Only later did support arrive for Objective-C apps created with Xcode, using very clearly defined API calls for future compatibility (and merely even just getting approved for the App Store). This was quite a ballsy move. This was quite an ecosystem that was created. Wow! We see something with loose parallels going on, but with radically different precepts and process.

Google is iterating big and bold in public. Google is tipping its hand on many important details surrounding their attitude towards wearables. Overcoming the cultural resistance to heads-up-display (HUD) wearables is so paramount to their success, that tossing some vision of them out into the world zeitgeist and language lexicon is a priority - an even greater priority than secret product development and a big world-changing reveal.

So, I can’t get too hung up in the significance of what’s going on, or I could write about it forever without ever doing any coding, and that can’t be. I need that critical breakthrough today that lets all future steps be that easy. I have to get over that deficit hurdle. It’s just like the old Scala days - a lot of my learnings are new, but I have to be a sort of Kung Fu master of the tools, right as I figure out which tools the heck I’m even using in the first place!

Okay, so is the Levinux short stack an appropriate platform for Glass development? Because if it is, I may just do a Hello World app on GitHub for Glass in Python on Levinux. It will install on any server, and I will also go out of my way to see that it deploys on the Google App Engine (where they recommend), so that I always have a well-understood (or understandable) code execution platform. But I will actually UNDERSTAND things infinitely better if I develop it on Levinux. NOTHING UNKNOWN.

1, 2, 3… 1?

Step 1: Re-familiarize yourself with the Google “pitch” of what’s possible with Glass. Most of that is on this page: https://developers.google.com/glass/about

Grasp the difficult concepts here. Really plow through the euphemisms, of which there are plenty.

There is a Google Mirror API. This is something that runs much like GData or any of the other APIs I use. It’s just a bunch of Google middleware infrastructure to take addressed and authenticated HTTP requests and route them to the appropriate information systems within Google to do what they’re told (hopefully, successfully) and then to send back a response. While a new unified API movement has gripped Google, my experience with the GData API (which one of these is not like the other) will be invaluable in helping me wrap my mind around Glass.
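To make that “addressed and authenticated HTTP request” idea concrete, here is a minimal sketch in Python of how a timeline insert might be assembled. The endpoint path follows the Mirror API v1 REST layout as I understand it, and the token value is a made-up placeholder - a real one would come out of Google’s OAuth 2.0 flow. The request is built but never actually sent.

```python
import json
import urllib.request

# Hypothetical OAuth 2.0 bearer token -- a real one comes from Google's
# OAuth consent flow, not from a hard-coded string.
ACCESS_TOKEN = "ya29.EXAMPLE_TOKEN"

def build_timeline_insert(text, token=ACCESS_TOKEN):
    """Build (but do not send) an authenticated POST that would insert
    a simple text card into the Glass timeline via the Mirror API."""
    body = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        url="https://www.googleapis.com/mirror/v1/timeline",
        data=body,
        headers={
            "Authorization": "Bearer " + token,  # routes + authenticates us
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_timeline_insert("Hello from Levinux!")
print(req.get_method(), req.full_url)
```

The point is just the shape of the thing: a plain HTTPS request with a bearer token, which Google’s middleware routes to the right internal system and answers with a JSON response.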

Google Glass, in addition to the minimal and economical hardware, is just a full-screen chrome-less web browser with left/right-scrolling virtual screens. This modified web browser constitutes the Glass user interface. Everything exists and works within this context. That is to say, anything written for Glass is an HTML5 app sent to Glass through the Google Mirror API to “run locally” on Glass as an HTML5 app inserted as the current virtual screen. There are a few other models here, such as notifications, but for the sake of figuring out the platform, I will draw parallels to desktop and mobile. And the parallel here bears a resounding similarity to the decisions Apple has made regarding scrolling between virtual screens, as anyone who has worked in multiple spaces, swiping left and right between them with the three-finger swipe on a trackpad, knows. Google Glass: same thing… more or less.
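A quick sketch of what that “HTML5 app inserted as the current virtual screen” looks like as data: a timeline-item payload whose html field renders as one card. The field names (html as the rendered body, text as the plain fallback) follow the Mirror API’s timeline item schema as I understand it - treat the exact markup as an assumption, not gospel.

```python
import json

def make_html_card(title, body_html):
    """A minimal timeline-item payload with an HTML body. The 'html'
    field is what renders as a single virtual screen on Glass."""
    return {
        "html": "<article><h1>%s</h1>%s</article>" % (title, body_html),
        # A "text" field would be the plain-text fallback; when both are
        # present, "html" is what the card actually shows.
    }

card = make_html_card("Hello Glass", "<p>Rendered as one virtual screen.</p>")
print(json.dumps(card, indent=2))
```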

Okay, so everything I’m discussing here is interacting with Glass nicely without side-loading Android apps that hit the hardware and void the warranty - another development route entirely, which although probably more interesting, is of vastly less applicability to marketers than things that work formally through the Mirror API. So, the Mirror API it is for me.

Okay, so let your passion drive you from here. It’s still only 11:30 AM on a Thursday, and you are AT THIS POINT in your thinking. You are nearly in the zone with getting started… inching towards something meaningful… AH!

Step 2: Do something meaningful!

Now that you’ve accepted the limiting fact that you’ll be working through the Google Mirror API, come to grips with the types of things this means you’ll be able to do:

1. Manage timeline cards (a list-management API)
2. Interact with menu items (the UI on a particular card)
3. Subscribe to timeline notifications (interprocess communication)
4. Share to contacts (sending the content of a card to a contact)
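A hedged sketch of what those four operations look like as REST calls, assuming the Mirror API v1 layout. The contact id, display name, and callback URL here are made-up placeholders; sharing works by registering your Glassware as a “contact” that cards can be shared to, which is why items 1 and 4 both show up below.

```python
# Each entry is (HTTP method, URL, JSON payload) for one Mirror API call.
BASE = "https://www.googleapis.com/mirror/v1"

operations = {
    # 1. Manage timeline cards: insert a card...
    "insert_card": ("POST", BASE + "/timeline", {
        "text": "Tap to reply",
        # 2. ...menu items ride along on the card itself.
        "menuItems": [{"action": "REPLY"}],
    }),
    # 3. Subscribe to timeline notifications: Google POSTs change
    # notices to your callbackUrl (placeholder below).
    "subscribe": ("POST", BASE + "/subscriptions", {
        "collection": "timeline",
        "callbackUrl": "https://example.com/notify",
    }),
    # 4. Sharing targets are registered as "contacts" (placeholder id).
    "add_contact": ("POST", BASE + "/contacts", {
        "id": "my-glassware",
        "displayName": "My Glassware",
    }),
}

for name, (method, url, payload) in operations.items():
    print(name, method, url)
```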

So, can a card take a picture in the first place, and then share it? Is camera functionality something you can just code into an HTML5 app? So interesting! While most HTML5 stuff is moving towards putting up digital displays like wallpaper, this is moving towards a pure Turing-machine-like tape scrolling in front of your eyes, with each symbol on the Turing tape itself being an entire Turing computer. Wow, very meta. Google could never possibly discuss the full depths of what they’re doing human-evolution-wise with anyone in the press. They would sound like wackos… oh, so just hire Ray and you already sound like a wacko, so how much crazier could it possibly get, right? I get the PR strategy.

You simply need to get through that single breakthrough.