Generalized Systems and Micro-server DNA

by Mike Levin SEO & Datamaster, 03/04/2010

I am deeply engaged in my latest generalized system or framework. I count it as my fifth, with the first four being Microsoft-based variations on the Ruby on Rails concept. I started on something called IDC/HTX, Microsoft's Internet Database Connector, back in 1997, a system that pre-dates Active Server Pages. Even then, I was designing my code as generalized application-builder frameworks.

These app-builders were mostly used to create websites, discussion boards, and business systems for my employers. They never amounted to much as stand-alone products, because they were all a means to an end. My version 4 system served as the underpinnings of HitTail, tweaking SQL Server and IIS for every last bit of performance. I am still amazed at the performance you can get out of a single machine when you optimize the application to the underlying hardware. My challenge then was real-time data collection and reporting, and that was the driving priority behind the HitTail version of the generalized app-builder.

Now the agile framework space is flooded, and my challenges are different. I have no desire to compete with the likes of Rails and Django. Most of these frameworks aim to alleviate developer time wasted on CRUD user interfaces. That means “create, read, update and delete”, usually abstracting away direct interaction with SQL. Relational database web apps can now be stamped out in cookie-cutter fashion, making Web user interfaces a mostly solved problem. These frameworks aren't high-performance enough for real-time tracking and reporting, but I don't have any tracking projects on the horizon anyway.

Ruby on Rails showed just how elegant and fun agile frameworks can be through a shift in design philosophy: putting sensible conventional behavior ahead of the complex “config files” usually required to make a program spring to life, which is exactly what my VBScript version did. You could basically trip and fall, and your app was working; you then customized it from there. Now, with Rails and all its copycats, there's really no reason to pursue CRUD generalized systems anymore. To drive the point home, generic web office apps like Google Docs are doing something very similar to CRUD agile frameworks, though few want to admit it. Making a new online database is now as simple as making a new spreadsheet in the cloud. So now it can become more about the data, and the purging of anything proprietary or vendor-specific.

Version 5 of my system, and the real subject of this post, is not only my first executed on Linux, but also my first SQL-free implementation. I've got a lot of new stuff on my mind lately, and this is the project where it all culminates. It's a perfect opportunity to unlearn a lot of conventional wisdom and try to create something highly disruptive. My vision is to enable average users to harvest data off the Internet without any human intermediary, and then iteratively post-process it with formulas and further harvesting of their own design, producing remarkably useful findings and uncommon competitive advantage. This is not another CRUD builder, and it is unlikely anyone will “get it” without seeing it in person. Even then, it's easy to misinterpret and dismiss it as just a web crawler.
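To make that harvest-then-post-process loop concrete, here is a toy sketch of the idea, not the actual system; the URLs, the extraction pattern, and the output file are purely illustrative stand-ins:

    #!/bin/bash
    # Toy sketch of the harvest/post-process loop: pull pages, extract a
    # field, then apply a user-defined "formula" over the results.
    # The URLs and the extraction pattern are illustrative only.
    for url in "https://example.com/page1" "https://example.com/page2"; do
        curl -s "$url" | grep -o '<title>[^<]*</title>'
    done | sed -e 's/<[^>]*>//g' | sort -u > findings.txt

The point is the shape of the pipeline: harvested rows flow into post-processing steps the user composes, and the output of one pass can feed further harvesting.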

I still have interest in all the other pieces I used to hand-build, like content management, tracking systems, shopping carts, and discussion boards. Only now, I'm going to see if there aren't light-touch alternatives, either in the free and open source world or among web services that won't leave me high and dry. Fundamental to this approach is the ability to rapidly wire up with APIs. The valuable part, from a developer's perspective, consequently shifts from deep familiarity with a particular implementation to the ability to rapidly re-implement, using the art of loosely coupling and re-wiring components.

Now, instead of tweaking hardware to its ultimate performance levels, which was the thrust of Version 4 because it had to support real-time tracking and reporting, Version 5 says “to hell with performance”. It sits back, ploddingly working its way through Web APIs, knowing that its own execution speed is the farthest thing from being the bottleneck, because now HTTP chatter is in the picture. Once you accept that high-latency reality, you can switch from very big servers to very small ones.
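You can see the imbalance for yourself with two quick timings; the URL here is just a stand-in:

    # Rough illustration that the network, not the CPU, is the bottleneck
    # once everything flows through web APIs: one HTTP round trip dwarfs
    # a decent burst of local work.
    time curl -s https://example.com > /dev/null   # network-bound round trip
    time seq 1000000 | wc -l                       # a million local operations

When the first number is routinely larger than the second, shaving microseconds off your own code stops mattering.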

And let me tell you, once you're in the world of very small servers, everything changes. You can design things to work equally well on a real hardware instance or a virtual one. You don't hit all those issues of trying to virtualize a high-performance production machine, which force you to investigate SANs, VMotion, and a series of other things that leave you $30K in the hole before getting back the performance of a $5K box. A first-class instance of your work can be carried around on your keychain. A simple bash script can build a wholly new instance of your server on a fresh Linux install, as sketched below. You can use ARM-based $50 servers instead of $5K Xeon ones. Combine this with a distributed revision-control system to keep all your instances in sync with full revision histories, and the server whack-a-mole game has begun (in a good way).
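Here is a minimal sketch of what such a build script might look like on a Debian-style install; the package list, repository URL, and setup script are hypothetical stand-ins for whatever your own stack needs:

    #!/bin/bash
    # Minimal sketch: rebuild a server "instance" on a fresh Debian-style
    # Linux install. Packages, repo URL, and setup.sh are illustrative.
    set -e

    # Base stack
    sudo apt-get update
    sudo apt-get install -y apache2 php5 git-core

    # Pull the application and its full revision history from the
    # distributed version-control system that keeps instances in sync
    sudo git clone git://example.com/myapp.git /srv/myapp

    # Hand off to the app's own setup script
    cd /srv/myapp && sudo ./setup.sh

Because the whole build is one idempotent script plus one repository, any fresh box, physical or virtual, becomes a first-class instance in minutes.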

But small servers suck, you say? Hey, it's 2010, and they're already capable of running full LAMP installations on $50 of hardware. There's nothing keeping you from hanging a terabyte hard drive or SSD off the USB2 ports of one of these plug computers. And once you've separated the data layer, the interchangeability and upgradeability of this approach is astounding. How long do you think it's going to be before the capabilities of a micro-server surpass those of today's multi-core rack-mounted servers? The micro-server approach is like using a blade box without the chassis. Put one at home on the back of your wifi router. Put one in the office. Put one in the co-location facility. Use dynamic DNS to make the domain names resolve properly.
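Separating the data layer on a plug computer can be as plain as this sketch; the device name and directory layout are illustrative assumptions:

    # Sketch: hang external USB storage off a plug computer and point the
    # application's data layer at it. Device and paths are illustrative.
    sudo mkdir -p /mnt/data
    sudo mount /dev/sda1 /mnt/data          # the USB drive

    # Keep the app's data on the external drive so the tiny server
    # itself stays disposable and interchangeable
    sudo ln -s /mnt/data/myapp-data /srv/myapp/data

With the data living off-board, swapping or upgrading the little server is just a matter of moving a cable.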

And better still, you can employ a “DNA” strategy whereby a whole host of applications lives on a tiny $50 hardware instance and, depending on the hardware at your disposal or the amount of cloud space you decide to use, the apps can sprawl out and scale to occupy the available space. So what I am saying is: put all your eggs in one basket, then clone that basket endlessly. Advance your entire career and capabilities as a developer and entrepreneur by pushing forward that platform in your pocket. Carry around a tiny, humble virtual instance on your USB keychain, and have a whole enterprise set up by the end of the day, distributing components across hardware and the cloud, each component according to its optimal requirements.
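Cloning the basket can be as blunt as imaging the keychain instance onto new hardware and letting version control keep the clones in sync afterward; the device names and repository path here are illustrative:

    # Sketch of "clone the basket": image the instance on the USB
    # keychain onto new hardware, then keep clones in sync over git.
    # Device names and the repo path are illustrative.
    sudo dd if=/dev/sdb of=instance.img bs=4M      # snapshot the keychain instance
    sudo dd if=instance.img of=/dev/mmcblk0 bs=4M  # stamp it onto a new micro-server

    # Thereafter, every clone pulls from the shared revision history
    cd /srv/myapp && git pull

Each stamped-out copy starts identical, then grows into whatever hardware or cloud space it happens to land on.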

*BAM* Welcome to the future… or at least, my future.