Cloud Backlash Prediction
by Mike Levin SEO & Datamaster, 09/09/2010
I spent considerable time unbricking my SheevaPlug, losing precious time at work. But I chalk this up to gaining experience on Linux/Unix embedded systems. I’m doing this just barely after getting my legs on Linux itself, so my thinking is to position my professional expertise for the cloud backlash. I figure I’m about a year ahead of popular trends with this, and whether or not the Diaspora boys get it right with their “nodes”, the idea of (and dire need for) yanking a unit out of the cloud and running it somewhere else, occasionally on real hardware for the sake of privacy and control, will hit everyone like a pie in the face in about a year.
The beautiful thing is that what you need to do for cloud backlash is precisely the same thing you need to do to take advantage of the cloud itself. So the Plug Computer approach, if you keep clouds in mind, is a super-set of system administration knowledge, ultimately more valuable than simply selecting and running with a cloud infrastructure like Amazon EC2 or Rackspace alone. You can exceed the capabilities of the cloud “computer unit” provided to you by having ultimate control over the software and hardware.
What is this cloud backlash of which I speak? Well, mainstream chip manufacturers have been taking advancements in miniaturization to load more CPU cores into single chips, setting the stage for servers that are larger, hotter, and more difficult to program and take advantage of, in the tradition of Xeon processors. Meanwhile, consumer electronics folks (read: Apple) have taken the opposite approach, quietly designing devices that put precisely the amount of metal into the box appropriate to its function, making it smaller, cooler and easier to program.
In essence, the seemingly settled age-old war of RISC vs. CISC has been rekindled. The chips in the RISC camp all appeared to have lost: DEC Alpha, PowerPC, MIPS and the like. But what few realized was that this was RISC chips battling on CISC terrain, where you had a nice electric outlet to plug the computer into, and no one cared or even thought about “being green” with processor power consumption. But the terrain of the battle has utterly changed, and the dark horse in the race has suddenly pulled ahead from out of nowhere: the result of a collaboration between Acorn, VLSI and Apple to break the stranglehold Commodore held on the low-end computer market with the 6502 chip in the 80’s… the ARM processor.
But no one but cellphone manufacturers needed such a low-power RISC chip in those days, so ARM quietly evolved in the obscurity of cellphones and embedded devices, with virtually no competitors. It wasn’t very sexy, and ARM kept a low profile, not manufacturing the chips itself but only licensing others to do so. One such licensee was Digital Equipment Corporation (DEC), which contributed the expertise of its DEC Alpha team to push ARM ahead in the form of StrongARM, which was later sold to Intel to become XScale, which Intel in turn sold to Marvell to become the Kirkwood processor in the… drumroll, please… SheevaPlug that we are talking about today.
And so, how is this cloud backlash? Well, as it turns out, there are all sorts of advantages to walking away from the Intel methodology of larger, hotter, difficult-to-program processors that could only naturally evolve into the heart of virtualization and datacenters, keeping the power in the hands of the establishment. Smaller, cooler, easier-to-program devices put the power of yesterday’s quite capable servers literally in the palm of your hand. And thankfully, there was an entire branch of the processor industry working to ensure that this could happen. The term right now, popularized by Apple with the A4 processor in the iPad (yes, also ARM), is SoC, or “system on a chip”. Call this low-power, personal-server-friendly movement a serendipitous convergence of separate developments in the industry… all of which happen to trace back to Apple.
Intel has responded by releasing its low-power Atom processor, which draws much less power than even the Pentium M, which had already saved Intel’s ass during the low-power war kicked off by AMD and laptops. Atom processors are targeted at netbooks, smartphones and tablets, and would be a viable approach in this whole shankserver mission of mine. But ultimately, one of the attributes of a developer training themselves to become a force of nature is the ability to deal with a wide array of heterogeneous hardware. It’s good to know how to do things “in general”, and not only so long as it’s some variety of x86 architecture. You will inevitably be dealing with x86 architecture anyway, so why not start your ARM experience here? And ARM is nothing to sneeze at, with an installed base of processors arguably even larger than Intel’s.
So again, why is this cloud backlash? Because devices like the SheevaPlug exist, allowing the average Jane or Joe to have learning experiences that used to be exclusive to people with the resources. You couldn’t just play around with servers unless you had the money to buy them, the room to put them, and the bandwidth to host them. But today, they cost $100, vanish behind furniture, and work quite nicely off a home cable-modem or DSL broadband connection. Pretty much anyone can master these little buggers, which are themselves easy to virtualize and put on the cloud, thanks to their small, efficient footprint.
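To give a sense of just how little it takes: the sketch below (standard-library Python 3, my assumption for illustration; any scripting stack the plug runs would do) spins up the kind of bare-bones file server a SheevaPlug could serve off a home connection, and fetches one request from itself so the example is self-contained:

```python
# Minimal sketch: the entire web server a plug computer needs to share a
# directory over a home broadband connection -- nothing but the standard library.
import threading
import urllib.request
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Port 0 asks the OS for any free port; on a real plug you'd pick 80 or 8080.
server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
port = server.server_address[1]

# Handle exactly one request in the background, then fetch it ourselves.
worker = threading.Thread(target=server.handle_request)
worker.start()
response = urllib.request.urlopen("http://127.0.0.1:%d/" % port)
worker.join()
server.server_close()

print("served a directory listing on port %d, HTTP status %d" % (port, response.status))
```

Point a router’s port-forwarding rule at a box running something like this, and the “cloud” is suddenly sitting behind your furniture.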
Even with clouds, ultimately your code has to run on real hardware. Even if someone tries to convince you that your code is running on a virtual processor… uh, no. Virtualization is ultimately just memory management and slicing up resources. Your code still runs on real processors somewhere. When you save something, even on Amazon S3 or other virtual storage services, it’s still going onto a hard drive somewhere. There is nothing inherently better about being on the cloud, except that you can spin up more instances easily and consume more of those real resources when you need to.
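A tiny sketch of that point: ask Python where it is actually executing, and even a “virtual” cloud instance answers with a real processor architecture (the values in the comments are examples, not guarantees; your machine will report its own):

```python
# Sketch: whatever the marketing says, this process is executing on real silicon
# somewhere, and the operating system will tell you what kind.
import platform

arch = platform.machine()    # e.g. 'x86_64' on a typical cloud VM, 'armv5tel' on a SheevaPlug
system = platform.system()   # e.g. 'Linux'
print("this process is running on %s / %s -- real hardware either way" % (system, arch))
```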
And if your work is going to run on real hardware somewhere anyway, and everything you store is going to go onto a real hard-drive anyway, why not learn to do it on your own hardware, and have the ability to pack up your marbles and go home when you want? Why should you need a billion-dollar data center to run your systems? Why should you even need all the overhead of a private OpenStack cloud to run your code? Why not just the metal it actually takes? And if that happens to be enough to fit in your pocket, all the better. In fact, being able to do so is fundamental to functional privacy and ownership.
You can always take for granted your ability to take your system and reproduce it in the cloud. But you cannot take for granted your ability to take something that was born on the cloud and get it running on real hardware. Sure, you could, but there’s a pretty good chance that you will never have had the occasion to develop basic hardware skills. Thanks to the cloud, dealing with real hardware may be in danger of becoming a lost art at precisely the time it should be more achievable and valuable than ever.
In fact, looking ahead a few years, because processor circuitry is so much like printing technology, free and open source processors that get printed out from ink-jet-like printers (desktop fabs) are likely on the horizon, putting the “free” into free and open source hardware. Hardware will seem a lot more like software at that time. And when that day arrives, you can be pretty sure that it will require a skill-set very similar to the one you’re learning here to get your own little hand-held servers running.