How To Synchronize Multiple Servers In The Cloud? CSSH!
by Mike Levin SEO & Datamaster, 05/21/2012
I switched my 360iTiger application over to a Rackspace load-balancer a couple of months ago, and have been on something of a code lock-down since, making sure that the public launch of the application went flawlessly. I do have my Mercurial distributed version control system keeping the application in sync across the cloud, in two different locations per server: one for 360i users, and another for the public. But it’s the same code-base, so the synchronization model is correspondingly easy. I just have an hourly cron job doing a repository pull from the same location on each server.
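That hourly pull is nothing fancy. Here’s a sketch of what the crontab entries look like, with made-up paths — the real checkout locations and repository URL aren’t shown here:

```
# Hypothetical crontab entries: every hour, pull and update both
# Mercurial checkouts from the central repository. Paths are
# illustrative only, not the actual 360iTiger layout.
0 * * * * cd /var/www/internal && hg pull -u
5 * * * * cd /var/www/public && hg pull -u
```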
As long as I only have to update the core app, I’m fine. But if I have to launch a third application directory, edit Apache config files, or touch anything else outside the established DVCS directory, I’m pretty screwed. You just can’t do this manually behind a load-balancer, because one little mistake can stay hidden indefinitely. Why? Because I’m using least-connections as my load-balancer algorithm, so there’s no guarantee that my testing, even six tests in a row, actually hit all six of the servers. Of course, I could change the load-balancer algorithm just for testing, but that seems silly. There are obviously better ways, and I’m in well-traveled territory here. I just need to choose a good solution.
One solution is making cron check for and execute a runonce.sh script on each server, as part of the boilerplate cloud image. This is appealing because it stays entirely within a limited set of known tools: cron and a little bash script. But the downside is that I don’t particularly want to wait out the full hour each time I bake a new script for deployment. I need to push stuff out over multiple servers in near real-time, so that any iterative work doesn’t take hours. And I want visual feedback that what I’m doing is having its desired effect. Using runonce.sh self-destructing bash scripts feels like sending out messenger pigeons that never return.
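For what it’s worth, the runonce pattern itself is only a few lines of shell. This is a sketch under my own assumptions (the paths and the demo are made up), not the script I’d actually bake into the image — cron would call something like check_runonce every hour:

```shell
#!/bin/sh
# Sketch of the self-destructing runonce pattern. Cron calls the
# checker hourly; if a runonce.sh has been dropped in place, it is
# executed once and then deleted so it can never fire again.
check_runonce() {
    script="$1"
    if [ -x "$script" ]; then
        "$script" && rm -f "$script"
    fi
}

# Demo: drop a runonce.sh that records a marker file, then let the
# checker fire it. (Directory is temporary; all names are made up.)
demo_dir=$(mktemp -d)
printf '#!/bin/sh\ntouch %s/marker\n' "$demo_dir" > "$demo_dir/runonce.sh"
chmod +x "$demo_dir/runonce.sh"

check_runonce "$demo_dir/runonce.sh"   # first pass: runs it, removes it
check_runonce "$demo_dir/runonce.sh"   # second pass: nothing left to do
```

The self-destruct (the rm after a successful run) is exactly what makes it a one-way messenger pigeon: once the script is gone, there’s no record on the server of whether it did what you hoped.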
I’ve played with rsync in the past, but it has the same cron-latency issue as a runonce script, plus SSH key management issues, which can become quite tricky in the cloud. I want to use simple username/password authentication on each server, so I don’t have to worry about losing keys. It’s possible that I could simply bash-script the SSH commands that I want to run on each server. That actually is very appealing.
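The bash-script-it-yourself option would look something like the sketch below. The hostnames and username are hypothetical, and note the catch: with password authentication you get prompted once per host, which is exactly the kind of tedium that makes this approach less appealing than it first sounds.

```shell
#!/bin/sh
# Sketch: run one command on every server in the pool, one at a time.
# Hostnames and the admin user are made up for illustration; with
# password auth, ssh prompts interactively for each host in turn.
run_everywhere() {
    cmd="$1"
    for host in web1 web2 web3 web4 web5 web6; do
        echo "== $host =="
        ssh "admin@$host" "$cmd"
    done
}

# Example usage (not executed here):
# run_everywhere 'sudo apachectl configtest'
```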
Okay, time to consult uncle Google… the concept is… simultaneous SSH connections… Bingo! ClusterSSH. It’s free and open source software hosted on SourceForge.net, AND it’s in the Ubuntu apt-get software repository. So without further ado, we type “sudo apt-get install clusterssh” …and presto! We’ve got our simultaneous deployment tool. Okay, truth be told, it’s not the scripting/synchronization tool that I was expecting, but it certainly gives real-time feedback. No mysterious carrier pigeons here.
Okay, so you simply list your servers in an /etc/clusters file, and from that point on you can log into all of them simultaneously with cssh clustername. Now, one of these servers is my repository, and some of the files I need to edit there are different, so you’d think that would be a hurdle, right? No! There are at least two immediately obvious ways to handle exceptions. First, you can just close that server’s SSH terminal and run the commands only on the servers that are identical. Better yet, if the change still needs to be made on the repository server, I can watch that one window and issue commands carefully so they apply correctly there too, such as appending text onto the bottom of a file when adding new virtual hosts to an Apache config file.
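The /etc/clusters file is about as simple as config files get: each line is a cluster tag followed by the hosts that belong to it. The hostnames below are made up for illustration:

```
# /etc/clusters -- a tag, then the hosts in that cluster
tigers web1 web2 web3 web4 web5 web6
```

After that, cssh tigers opens one terminal window per host plus a small console window, and whatever you type into the console is replayed into every terminal at once — which is where the real-time visual feedback comes from.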
And so, mission accomplished for today. I didn’t actually DO the work I set out to do — launching another app directory and activating it as an Apache virtual host — but actually doing that work is going to be trivial now that I have the right tools. And so begins my adventure of management in the cloud. I don’t know whether this will scale, having all these open windows. It’s perfect for right now, while I only have six servers, but when it gets up to thousands, I’ll need a windowless equivalent.