Mike Levin SEO

Future-proof your technology-skills with Linux, Python, vim & git... and me!

Truly Switching Daily Work Journal Into WordPress

by Mike Levin SEO & Datamaster, 01/31/2012

I’m starting out my day’s journaling directly in my MikeLev.in WordPress, going against all my instincts to keep it in my vim code repository for my latest project. This is an attempt to force a shift in thinking that I’ve come to believe is absolutely necessary to push myself to the next level of my professional career. I have recently re-positioned myself from the old Microsoft Active Server Page platform to Linux/Python. I based a fairly ambitious professional project on Linux/Python to force myself to learn it by jumping off a cliff and learning to fly… I did.

Now, it’s pretty widely agreed within my company that it’s time to release my work to the public, at least for use (not the code yet), in order to solidify our thought-leadership position in this area. People have been posting little pieces of the puzzle here and there online, but no one has assembled it into one sweeping, comprehensive system that can transform the way you work day-to-day. I have. I actually already have it running on the Rackspace cloud, so even scaling the app is not going to be an issue. Just spin up a bunch of instances and put a load-balancer in front of them. The app doesn’t need to maintain session state between mouse-clicks, and it has a very small footprint, so I expect it to scale excellently.

The final step is selling the big-boss on the fact that we want to unveil this to the public. My huge priority right now is mitigating any potential downsides. Those downsides are of course the application sucking and crashing and not working properly, so all my attention needs to be focused on stability and scaling. Other people are going to be working on documentation, public messaging and the like. But if all goes as planned, I’m pretty certain a “tribe” will rise up around this, similar to how it did with one of my earlier endeavors, HitTail, which is now sold and being brought back to life.

My new endeavor is based on API programming with high allowable latency and an anti-24/7 reliability philosophy, quite the opposite of HitTail, which was a tracking gif that had to run 24/7, lest page loads slow down. At my current place of employment, the 1-pixel gif webbug trick, or whatever you want to call it, is firmly “owned” by not just one but several other groups in the company. Even though using this pixel for SEO is very different from using it for bid management or conversion optimization, this is already owned turf, and there’s no reason for me to go there. Pick the battles you can win.
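For readers who’ve never looked under the hood of the webbug trick: here’s a minimal Python sketch of the idea (not HitTail’s actual code — the function and field names are my own illustration). The page being tracked embeds an invisible 1×1 image whose URL points at your tracker; every request for that image is a logged page view, and the server just hands back a tiny transparent gif.

```python
from datetime import datetime, timezone

# A minimal 43-byte 1x1 transparent GIF -- the classic "webbug" payload.
TRANSPARENT_GIF = (
    b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00"
    b"\x00\x00\x00!\xf9\x04\x01\x00\x00\x00\x00,\x00"
    b"\x00\x00\x00\x01\x00\x01\x00\x00\x02\x02D\x01\x00;"
)

def serve_pixel(referrer, user_agent, log):
    """Record the page view implied by the pixel request, then
    return the gif bytes and MIME type to send to the browser."""
    log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "referrer": referrer,
        "user_agent": user_agent,
    })
    return TRANSPARENT_GIF, "image/gif"

# The tracked page embeds something like:
#   <img src="https://tracker.example.com/pixel.gif" width="1" height="1">
log = []
body, mime = serve_pixel("https://example.com/page", "Mozilla/5.0", log)
```

The whole value is in the log, not the image, which is why this has to answer fast, around the clock: every page load on every tracked site blocks on it.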

That’s why, when it came time for my “next big thing”, I radically switched my focus. I let go of the tracking pixel kicking and screaming, because I still feel that your own web logs are your best source of data for site optimization, bar none. The tracking pixel is usually the key to that data, but I can’t go there. What’s left is “everything else”. That means connecting to data sources of all sorts, everything other than your web logs. Crawling a site might be a data source. Google Analytics or Google AdWords might be a data source, as might Majestic SEO, Facebook, Twitter, or any of the other net-accessible data sources.

So, I was back in the realm of “generic” systems again, but this time for connecting to any old data source. However, I had had my fill of frameworks, having made several of my own in the past only to see them rendered obsolete by Ruby on Rails. And even then, you still have to be a programmer to take advantage of such frameworks, no matter how joyful they may be. And so it was with this mindset that I realized the data you pull from these sources, unless it’s massively big, always ends up in spreadsheets anyway, and spreadsheets were now just Web applications.

Therefore, the key insight was that if your spreadsheet was already on the Web, and your data was already on the Web—or at least the Internet—why have anything else complicating the picture? Why can’t your data just “flow” into the spreadsheet? And indeed, it can. It just needs something in the background to orchestrate the show, and some “conventional behavior”, so that you don’t have to go about kludging new user interfaces onto spreadsheets. The data has to live naturally within spreadsheets as they currently exist, and it is fully capable of doing so.
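The “flow” idea boils down to a background loop: poll a data source, append whatever is new to the spreadsheet, remember where you left off. Here is a bare-bones Python sketch of that shape — the `Sheet` class and `fetch_new_rows` connector are hypothetical stand-ins, not the actual system; a real deployment would call a spreadsheet API’s append method and a real data-source API instead.

```python
class Sheet:
    """Stand-in for a Web spreadsheet; a real orchestrator would
    call a spreadsheet API's append method here instead."""
    def __init__(self):
        self.rows = []

    def append_rows(self, rows):
        self.rows.extend(rows)

def fetch_new_rows(source, since):
    """Hypothetical connector: return whatever rows the data
    source has produced since the last poll."""
    return [row for row in source if row["id"] > since]

def orchestrate(source, sheet, polls=3):
    """The background show-runner: repeatedly pull from the data
    source and let the results flow into the spreadsheet."""
    last_seen = 0
    for _ in range(polls):
        new = fetch_new_rows(source, last_seen)
        if new:
            sheet.append_rows(new)
            last_seen = max(row["id"] for row in new)
        # A real loop would sleep between polls; high allowable
        # latency means the interval can be generous.
    return sheet

feed = [{"id": 1, "keyword": "seo"}, {"id": 2, "keyword": "python"}]
result = orchestrate(feed, Sheet())
```

Because the loop only has to catch up eventually, not answer in real time, this is exactly the high-latency, anti-24/7 posture described above: if a poll fails, the next one picks up where the last successful one left off.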

And that brings me to where I am today. That was all exposition, mostly to help me clear my mind for next steps, although it may be useful to you in understanding what I’m about to release to the world. This blog at this point is mostly a tool for me to think out loud and pontificate. These next steps are probably going to be fairly difficult to follow, because even with that exposition, they will lack certain context. But nonetheless, this is my method of pushing myself past the most difficult step: the final polishing of my system for the public.

I will cut these journal entries as often as I need when I think I’ve reached a good stopping point, and can begin a new blog post with a fresh thought. And this is such a point. I am about to dive into code.