Welcome to the Questions & Answers site of my website. I bet you didn’t even know you were asking me these questions, did you? Well, you do ask me questions every time you search in Google and click on my site. Think of my site (and really, everybody’s) as a giant website content suggestion box—if only you knew how to open that box and read the suggestions.
All about Yours Truly, Mike Levin
First of all, clear thinking is everything. We only experience the world through our senses, and it is up to us to sort it all out and make decisions about how we live our lives in general, and the things we do day to day in particular. Like it or not, there is a concept of "quality of thought". Some thinking is garbage, and will get you stuck in the daily grind, forever at the mercy of fate. To avoid this, you must do exercises for clear thinking. First and foremost is daily journal writing. It's not the keeping of the journal that's important. It's the process of forcing yourself to write and be reflective for some time every day—maybe a half-hour. Think and write and think and write. Rinse and repeat. That's how you process your thoughts and revel in being the matter-stuff that's privileged enough to be self-directed, called a human being. The next exercise is the one-page plan that you can read about elsewhere on this site. Then, make yourself aware of both the journal and the plan on a daily basis. Do you need to write more to process your thoughts? Are you on plan? Does it need revision? Rinse and repeat. Think clearly. Keep your hand planted firmly on the rudder of life. Navigate!
The difference is the same as between a Mark Zuckerberg and the Winklevoss twins. Anyone can have a billion-dollar idea, but only the rarest individuals can act on one, and only the rarest of the rare can get all the tiny little nuances of implementation correct that make all the difference. Many people with great ideas think if they just find a technical co-founder to be some code-tweaking robot monkey, they can make the next big thing, but it's not true. The code-tweaking robot monkey is the one who has to do all the really important stuff. Put 100 code-tweaking robot monkeys next to each other working on the same task, and which one is going to win? The point is that actually knowing how to do things is a different and much rarer and more valuable skill than having that initial spark of an idea.
Note: For a little positive self-image therapy, try learning Linux server administration the Levinux way. You might be surprised at how the seemingly impossible becomes relatively easy with the right mentoring, coaching, etc.
It's not that everything is so difficult for YOU. It's that everything actually is so difficult. People who make it look easy have done similar things many times before. Why is this?
If everything worth doing were easy, everyone would be doing it, and it would lose its special-ness, and wouldn't be worth doing anymore (this may be debated, but I'm talking in a broad competitive sense). Therefore, the initial difficulty you encounter in mastering a new skill, a musical instrument, or merely being more successful in life is just part of the grand scheme of things. Patience, persistence, and keen observational skills can help you get over the initial difficulties built into any task worth doing, and truly separate you from the crowd.
On the other hand, if even the chores of day-to-day life are difficult for you (not what I am addressing here), like getting up out of bed or keeping yourself groomed, then get help. It may be that all you need is a better diet and exercise, or to get off the weed. It may also just be your genetic predisposition and beyond your reasonable ability to control, in which case you should see a doctor and see if he/she advises better living through chemistry.
I read. Reading is the best medium. It forces other people's thoughts through your own vocal cords, reproducing their thoughts in your own head, like some magical thought playback engine. This process is better than passively watching other media like TV or listening to radio, because passive media lets you slip into couch-potato mode and doesn't really replay the author's thoughts as if they were your own original thoughts.
But you can't just read anything—think how intensely personal a thought playback engine is! You don't want someone slipping a virus into you, so you have to think of your reading life as a carefully directed adventure, skimming many headlines and blurbs here and there, then intelligently—like a bloodhound on a scent—sniffing out what you need to know and why. Then, take DEEP DIVES into carefully chosen material. Alternate between being critical and open-minded as necessary.
And I don't mean just tech stuff. Seek out the classics, the genre-definers, the seminal works PLUS the new stuff defining our age, and the off-the-beaten-track stuff that makes all the difference. Use technology then to always have that reading material on you, filling in your down-time standing in lines, riding the train, and the like. Oh, and never believe at face value things people tell you. Verify! Even if... especially if... they sound credentialed.
The larger list is located here, but in summary, we're all a lot like monkeys living in a big programmable machine. Some of those monkeys rise to the top and figure out how to program the machine, while most of the monkeys are just like gears in the machine, and just have to step in line and get with the program—each day going through their monotonous, mundane, pre-programmed moves. Contrary to what this might sound like, I DO NOT believe that this is by any conspiratorial design—because even the top monkeys got to their position by chance. This complex hierarchy of monkey-dynamics is just part of who we are as social animals. In the end, all you should do is a little Maslow-style self-actualization so that you can see more clearly, and know where you want to—and actually do—fit in. After that, you can't complain about your role, 'cause it's your own friggin' fault if you don't do anything about it.
I'm a puzzle-solver and information-organizer with lots of patience and just a little touch of ambition. Therefore, I fancy myself as being able to do almost anything that involves unlocking a riddle, documenting the process, and making new use of the findings. It also helps to have some recognition or reward, which makes me well-suited for the new world of interconnected information. In my dreams, I would be cracking the free energy puzzle and designing molecule-scale fabricators, but I missed my calling as a scientist and engineer. So, now I fill my time with info-tech/business projects such as dragging Scala, Inc. kicking and screaming out of the red, building a 24/7 tracking system called HitTail that boosts your web traffic, and most recently, 360iTiger, a system that crawls your websites into Google Spreadsheets for SEO and social analysis... oh, and it's got lots of other applications and implications, including getting your ass onto free and open source software—which I expect will be the main focus of my time for the next few years.
I keep this online journal or diary to make me more effective at my job—and perchance build up some notoriety. One of the things that makes us human is carefully articulating and cultivating our thoughts over time. Ever since Jane Goodall's observations about tool-using chimps, animals have been systematically taking away all the "rules" that make us human. I think the last surviving thing may be the PROLONGED shaping of abstract thought over time. Maybe call it thinking out loud, or even story-telling. It's just not something we do naturally when we get sucked into the daily grind—making most people actually not much different than other animals. I am always amazed at how much more I can do when I "FORCE" myself to write it down or say it out loud, and this journal is my way of formalizing the process—at least on the professional front.
Life is too short to learn everything. In fact, life is too short to learn the WRONG things. I burned much of my life on Amiga and Windows expertise, only to have those skills suddenly go obsolete, like having the ground suddenly disappear out from under you. My solution has been to pursue a sort of obsolescence-proof platform, based on the fewest parts possible that can do the most possible, without forcing you to be a career system admin or programmer—something that can always be there in your life, on your servers, in your pocket (etc.) as a reliable place and way to run code. My answer has been Linux, Python, vim and Mercurial. There's not much else you need to work technical magic, while not being a career tech. Oh, and your hardware—it should have a cheap, disposable, rapidly reconstruct-able, whack-a-mole quality to it, hence my interest in virtualization, the Raspberry Pi, the cloud, etc.
It's because I'm just sort of thinking out loud, stream of consciousness-style. I'm not taking a lot of time editing—just minimal "sanitizing" to make sure I'm not leaking too much proprietary information. It's all so technical, because that's the way I think. I care deeply about my tools, and I like to use as few of the "right" tools as possible, and master them deeply. That's why you'll hear a lot of talk about Linux, Python and vim. I feel this sort of thing is going to be as mainstream as reading, writing and arithmetic in a few years, and many more people will be able to speak this sort of language. It's the "plumbing" of the information technology world.
I am a reformed Philadelphia-born suburbanite now living in Manhattan with my wife Rachel and daughter Adiella. I was born an artist, but somehow followed the technical path. Today, I work for 360i and practice search engine optimization (SEO), but identify myself professionally as a Peter Drucker-styled knowledge worker. I pack up my tool-box and work anywhere on a broad array of problem domains, working for "the man" because I can be more productive than trying to do it all on my own. This site is where I think out loud, mostly for myself, but it will surely be a springboard for when I eventually do something big.
About dealing with bullies
Being an eyeglass-wearing, physically scrawny, drawn-to-science person all my life, I've had my fair share of being bullied... or at least initial attempts at bullying me. Now, anyone who's read Ender's Game knows Ender's strategy for putting an end to fighting the same horrible fight every day for your whole life—you kill 'em. Now, I never took things that far. But I have put someone in the hospital for six weeks—along with various other colorful stories about how I've dealt with horrendous bullying situations throughout my life. It seems to be a recurring theme of my life. There's nothing like a mild-mannered, even-tempered fellow such as myself to make the bully-sharks smell blood in the water. So, I have some special information to share with you on how to both avoid smelling like chum, and how to defend against a frenzied swarm of bully-sharks when you have to—both as a kid, and again later in life with adult bullying. I have one doozy of a situation I'm resolving right now. Stay tuned as this FAQ section develops.
SEO and Social – My career of late
Strategy and Accomplishment
Okay, so first google the Pareto principle, also known as the 80–20 rule, the law of the vital few, and the principle of factor sparsity. It states that, for many events, roughly 80% of the effects come from 20% of the causes. So, it's a natural law that about 20% of anything—for the sake of our discussion, the population—can become the big winners high on the achievement scale. The world is not naturally communist or socialist. It's not entirely laissez faire capitalist or survival of the fittest, either. It's just competitive due to limited resources and a bunch of people. Therefore, things that society agrees are valuable are of necessity made somewhat difficult to achieve—to the point where a lifetime of dedication may be necessary for some of the bigger prizes: wealth, fame, mastery of music, game development, sports, etc. Sure, there's the occasional child prodigy that's born brilliant, but don't be jealous of those aberrations of nature. For every one of them, there are 10,000 people who had to work their asses off to only just approach similar levels. Don't let it consume you. Consider becoming fairly good at something, and then just being a big fish in a small pond. Google and learn the fake and the real story of Mozart and Antonio Salieri (hint: the movie's not accurate). Measure success by how happy you are in relation to yourself, your family, friends, and the activities you do every day. Tweak your course in life to align what you're naturally predisposed to being good at, with what you enjoy, with what you can make a living doing. The new economy will continuously broaden how these three things overlap. Have hope. Follow your passion. But have a pragmatic view of things, and work on your technical diagnostic skills. Try to understand the real underlying reasons for things. Think of 5 or 6 ways you might do something before pursuing the first one that occurs to you.
But there are actually about 5 right ways to do something, so don't become paralyzed deliberating. Choose a good one and trek on ahead! Go, go, go!
Face it, most of the world can't live in New York City—and like your religion, race, and socio-economic status, you're pretty much just born into the suburbs and dropped into the zombie-producing babysitting-factory known as the public school system. So, if you're a dull dimwit with no future, it's not entirely your fault. I was extremely fortunate being born only about 2 hours from NYC, and had the pleasure of walking to a local Philly train station and taking train rides into the very seedy NYC with my next-door neighbor when I was around 16 years old in the mid-80's. It made NY seem very accessible to me, and when the time came to take a job there, I didn't hesitate. New York has been Disney-fied, but it is still the place to be to set things straight. The only problem for me was, I was already 35 years old at the time I made the move! If you want to start fixing the faulty wiring in your head, try to do it much, much earlier. Like, say, your first job out of college. You can always move back later if you hate it. And until then, there's always the Internet. Also, don't believe anything anyone tells you. Learn as much of your information first-hand as you can. And do something better than anyone else in the world and find your market. That will provide you a way out sooner or later. Google "new economy".
I'm having difficulty getting myself to be productive. You need a tiny success—of any kind! And probably much smaller than you think. Deconstruct the problem or task in front of you, and figure out the first baby-step you can take, which is neither intimidating nor possesses too many unknowns. It is too many unknowns all in a row that creates a much larger obstacle to taking the first step than it seems like it should. In your defense, it actually is a much larger task than you initially thought. Your subconscious mind knows this, but your conscious mind doesn't, and only feels guilt. So, all you have to do is change your approach. First, you WILL have to do SOMETHING, so if pure procrastination is your poison, you simply have to take some advice from Nike, and Just Do It. If it's distraction, then don't let your issue-evading subconscious get one over on you! Unplug the Internet. Put those Angry Birds to sleep. Make the task at hand somehow more interesting to you than the distractions. But if the battle is over next-step ambiguity, take a page from FlyLady.net, and shine your sink.
Robert Cialdini's Influence: The Psychology of Persuasion. This dude went undercover into eerily effective businesses to see what made them tick, and distilled it down to six principles. Go read the Wikipedia page about these six principles of Reciprocity, Commitment and Consistency, Social Proof, Authority, Liking and Scarcity, and you can probably forgo the book, but it is full of excellent research and stories. It's pre-Web, but totally applicable online. He pinpointed tasty little tidbits like: you'll gladly fork your dough over to a thief... but only if he's a likable and friendly thief. This very blog is a conscious application of his Commitment and Consistency principle: write it down somewhere the public can read and hold you accountable to it, and you are more likely to follow through.
Sun Tzu, The Art of War. But beware of bloated commentary translations that make it into something that it is not. Get a very tiny version, such as the Thomas Cleary Pocket Edition. The brilliance of this book is that it distills strategy down to a timeless and easy-to-comprehend essence, and it was written around 2,500 years ago by a Chinese general. Sure, it frames strategy in military terms, which isn't always the best, but it puts it in context. War is a skill leaders must take up out of necessity and for survival, so you might as well kick ass when you have to. Only pick fights you can win, and win the fight before even walking into it. Thanks to good information and planning, victory should feel like throwing rocks on top of eggs, and all that sort of good stuff. There are plenty of other good strategy books, but few others are so tiny, well organized and timeless.
The only thing more important than the cloud is actual hardware
Aside from the obvious, such as taking less room on your hard drive and memory, and being easier to copy and transport, there are countless other subtle advantages of your virtual machines being as small as reasonably possible. Two of the most exciting are security and scalability. As with "real" hardware, the smaller your software "footprint", the less "surface area" is exposed for hacking. The less software you load onto your virtual machine, the less that can potentially be running, and the less that can be exploited. Similarly, the less software you have running and the lower your processor requirements, the more times you can instantiate (copy off running instances) for massively parallel applications in the cloud. Now, for the even less obvious advantages. When you start stripping your virtual machine down to bare essentials, you'll learn the first thing to go is graphical window managers like Gnome, KDE, and even the lightweight stuff like FLWM. Suddenly you have something that looks and feels a lot more like an old-school server, encouraging you to take up the old-school Unix/Linux Bash shell, which will serve you well. You'll then be in a better position to monitor every little thing running, and ensure your work is designed with the shortest software stack possible, cutting out needless vendor dependencies, and making your work more timeless in nature, as well as running faster. And eventually, all these things you learn with a small virtual machine translate over to real hardware and, probably pretty soon, personal cloud data centers that you can run out of your home.
New to load balancers, huh? Yes. A load balancer is assigned a new "virtual IP" through which all your individual servers will now be accessed, and therefore you will actually be changing the IP to which your existing sites resolve. So if you already have a site on a Rackspace server, and you just added a load balancer, then you now need to change the DNS entry for your existing site to the virtual IP of your load balancer. Now, since lots of sites can reside on the same IP, you can do the additional neat trick of using your one load balancer for lots of sites, by putting them all on the same replicated server using Apache's virtual host feature.
All three of these devices are designed to be pocket-sized nano-computers with media center- and game console-capable graphics. At first glance, the difference is price, with the CuBox intended to be around $130 and the Cotton Candy around $200, while the Raspberry Pi is only $35.
The next difference is availability. Only the CuBox is fully released, while the Raspberry Pi is shipping in its raw, pre-release, case-less form, and the Cotton Candy is not available at all. The third difference is that while both the Raspberry Pi and CuBox are designed to be stand-alone devices, needing only a monitor, keyboard and mouse to be fully functional PCs, a Cotton Candy works through USB and needs a "host" computer, and presumably special software to access a virtual terminal to the device. There are plenty more nuanced differences, but these are the broad strokes.
You either use the cloud, or you use tiny computers such as the Fit-PC, Raspberry Pi, SheevaPlug, or one of the multitude of others that are coming out, right? The cloud and microservers are mutually exclusive, right? Not so! Billion-dollar, power-hungry, monolithic data centers, while they were all the rage in the 2000's, may slip in popularity in the 2010's, in favor of "just enough metal" to keep your app running. This will take many forms, but the one most readily accessible to individuals is a system for building your own tiny cloud based on microservers. It's cheaper than cloud hosting, as Raspberry Pi's are only $35 apiece, and the cheapest viable cloud server is $10/mo. And you're already paying for broadband Internet at home, which is already perfectly capable of hosting reasonable-traffic websites and web services. You do the math. The era of micro-server based clouds is almost upon us. Even the big data centers are likely to switch over to smaller, cheaper, cooler, even more modular servers as part of this trend.
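The "you do the math" above is easy to sketch in Python, using only the post's own figures (the $35 Pi and $10/mo cloud server):

```python
# Rough break-even math: one-time Raspberry Pi cost vs. recurring cloud rent.
pi_cost = 35.00          # one-time hardware cost, USD (figure from above)
cloud_monthly = 10.00    # cheapest viable cloud server, USD/month (figure from above)

breakeven_months = pi_cost / cloud_monthly
print(breakeven_months)  # 3.5 — the Pi pays for itself in under four months
```

After that, the only recurring cost is the home broadband you're already paying for.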
It may take a little bit of work, but the easiest way is to just plop a text file in the web serving space of each server, and put different text in each text file. Set the balancing algorithm to round-robin, and then visit the text file from your web browser. Each time you hit the file, you should see different text. A cleaner way is to use a dynamic language like PHP or Python to output the computer's hostname. That way, it can be the same file on each of your servers—but the text file is a quick and easy way to do this test—especially for your first experience with load balancing.
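For the hostname variant, here's a minimal sketch of a Python CGI-style script (the filename and deployment details are my assumptions, not a recipe from this post). Drop the same file into each server's web root; every request through the load balancer then prints the hostname of whichever backend actually served it, making the round-robin rotation visible in your browser:

```python
#!/usr/bin/env python3
# Hypothetical script (e.g. whichserver.py) placed identically on each backend.
import socket

print("Content-Type: text/plain")
print()  # blank line terminates the CGI headers
print(socket.gethostname())  # each backend reports its own hostname
```

Refreshing the page should cycle through your backends' hostnames one by one.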
Don't let 'em fool you. Cloud servers are real hardware. It's not like your code is running in thin air. Your code is running on real hardware on real servers, even if you use Amazon EC2, Rackspace, or other cloud services—the difference being the clever way resources are partitioned up on the beefy multi-CPU/multi-core servers that cloud farms use. But the cloud advantages are those of management interfaces, and what those interfaces let you do, like "instantiate" new servers quickly, and throw them as new nodes behind a load balancer—all without thinking about the real hardware that's actually there. The hardware has effectively been "abstracted away". The downside of this approach (versus "real" hardware) is the lack of ability to "tweak out" your server for optimal performance. So if you're running a real-time tracking system (like HitTail), you probably want real hardware. But if you're making an API mash-up service (like 360iTiger), you're better off in the cloud.
Of course! For $35 a pop, you get a 100% dedicated CPU that can be treated like a blade in a blade server. The only difference is, it can run out of your house for less money than an Amazon EC2 or Rackspace cloud server instance. You only pay the hardware cost once, then use your home broadband for serving—which you pay for anyway! Services like DynDNS can give you a fixed domain name even from a dynamic IP. And you get the hands-on hardware experience of spinning up your own load balancers, etc.
Old school is cool – The Short Stack
Above all else, the Unix command-set (also the GNU command-set) is worth knowing—stuff like cd, ls, less, and all those. No matter how much technology advances, the original system put in place by Ken Thompson, involving programs represented as words that have standard input and output and can pipe their data to each other, is infinitely scalable. We could have robot descendants like those from the movie A.I., and they'll still be using Unix. Now that the Unix SCO lawsuit nonsense is over and Linux has reverse engineered almost everything, Unix/Linux has become the standard plumbing of the information age. So, learn the Bash shell and a bunch of commands that you need to get around. And then, if you want to master a "short stack" of software tools that let you really do incredible things old-skool-style without having to invest a lifetime into learning, then learn the vim text editor, the Python programming language, and the git distributed revision control system. These are the old school tools that will be around forever and give most modern "fat stacks" a run for their money... oh yeah, all the old-school stuff is free.
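That pipe idea is easy to demonstrate, even from Python. A minimal sketch (assuming the standard Unix `echo` and `tr` commands are on the PATH) that wires two small programs together exactly the way the shell's `|` does:

```python
import subprocess

# Emulates the shell pipeline:  echo hello | tr a-z A-Z
p1 = subprocess.Popen(["echo", "hello"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["tr", "a-z", "A-Z"], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()  # lets p1 receive SIGPIPE if p2 exits early

out, _ = p2.communicate()
print(out.decode().strip())  # HELLO
```

Each program knows nothing about the other; standard output flows into standard input, which is the whole trick.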
With technology having rapidly moved from the era of mainframes to desktops to laptops to smartphones, we are on the verge of the post-smartphone era, in which computers and sensors are embedded into everything. What do you think is going to be inside those embedded computers? How do you think you're going to interact with them? They mostly aren't going to have traditional monitors and keyboards. From a developer standpoint, unless you're going to use the exact custom tools the vendors provide you every time, you're going to be connecting in over serial or secure shell (SSH) type-in style logins. And what you're going to find there is some version of Unix or Linux, and most likely "BusyBox" Linux in particular. So, old school is coming back in a big way.
I am all about stripping out as much software as possible, and working much like you would with embedded systems. This offers the greatest flexibility in terms of code execution platforms: cloud, keychain, VM, wifi router, wristwatch, etc. Taking this approach often means getting a fat enterprise database, webserver, and even most of the GNU command-set out of the picture. When you strip out more and more until you only have enough to do cool work easily, you're left with Linux (the kernel), Python and a text editor (why vim is another story). If you need a lightweight db, SQLite is usually enough, and 50MB of GNU commands are replaced by 0.5MB of BusyBox. For the pedantic out there, okay, vim is not part of the code execution stack, but even with it, this code execution platform weighs in at only 50MB.
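As a taste of how little ceremony that lightweight db needs, here's a minimal SQLite sketch using Python's built-in sqlite3 module (the table and values are made up for illustration):

```python
import sqlite3

# ":memory:" keeps everything in RAM; pass a file path for a real database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hits (url TEXT, count INTEGER)")
conn.execute("INSERT INTO hits VALUES (?, ?)", ("/index.html", 3))

row = conn.execute(
    "SELECT count FROM hits WHERE url = ?", ("/index.html",)
).fetchone()
print(row[0])  # 3
```

No server process, no configuration, no daemon—the entire database engine ships inside the Python standard library.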
Python is a love-worthy programming language
The answer is "yes". Even if you're not a programmer, within several months, if you have any predisposition towards programming at all, you will probably find programming in Python a pleasure. It is different from other languages, particularly in how indents matter, making the relationship of Python code to your brain distinctly different from other "sloppy" languages, in which you have to bring your own discipline—which might be different from the next person's discipline, and different from the next. There are other things that make Python a pleasure. You can start out with the most sensible and intuitive programming style, called Structured (or Procedural) programming. This satisfies the 80/20 rule: you'll get 80% of the benefit of programming with 20% of the effort. If you want to upgrade your programming style into some of the other big paradigms out there, like Object Oriented (OO), you can do so with no compromise. If you start out with Ruby, for example, you have to go right to OO, which is just a sort of mental gymnastics and artificial imposition for small tasks that you don't really need to deal with as a beginner. You can even go to a very powerful "functional" programming style—there are compromises there, versus LISP or Haskell—but at least it's possible, and you won't be a stranger to the broad diversity of programming paradigms available to you as a programmer if you start with Python. And there's plenty of community to provide help and positive affirmation that you made a good decision if you take up Python. Oh, and read this
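To make the indentation point concrete, here's a tiny sketch: the indented lines ARE the block structure, with no braces or end-keywords required:

```python
def classify(n):
    # Everything indented under the `if` belongs to it; the visual
    # shape of the code is the actual structure of the program.
    if n % 2 == 0:
        return "even"
    return "odd"

print(classify(4))  # even
print(classify(7))  # odd
```

Two Python programmers can't argue about brace style, because there isn't any; the discipline is built into the language.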
Although Jeff Atwood would tell you otherwise, yes, learning to program very much works as a hobby. But don't think of it as programming. Think of it as the discipline of precise thinking. Programming is speaking in a language, just like any other language. But whereas spoken language reproduces thoughts in someone else's head when it is read and "run", language for machines makes the machines carry out your bidding, in a process called automation. Everybody should be able to automate machinery to improve their own personal capabilities and exercise their brain. And depending on your language choice and the types of projects you undertake, you definitely DON'T have to be a professional, and can learn everything you need to know in your spare time. Languages like Python are excellent if you're taking up programming as a hobby, because they are fairly advanced without all the "nonsense" overhead of other languages.
Note: Want to run Python right now without even an install on your computer? Download Levinux, go through the SSH & vi tutorial, then choose option #4 from the menu.
Okay, so here's the thing: you don't really need a framework. Python IS the framework, and one of the first things people do in Python is make yet another framework, because Python gets you like 90% of the way there. I learned this when I started investigating the Twisted framework to get webserver functionality from Python, then realized writing a webserver was ONE LINE of code. So, things like Flask make things just a bit incrementally nicer, like isolating you from the urllib2 library and front-end dev work. I personally would also choose Flask over Django, given its minimalism and its low dependencies. The longer you wait, the more likely the really best frameworks will rise to the top and make selection easy, the way jQuery did.
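That "one line of code" webserver survives in Python's standard library to this day. A minimal sketch using the Python 3 module names (the post's era would have spelled it SimpleHTTPServer; the port choice here is arbitrary, and the one-liner shell form is `python -m http.server`):

```python
import threading
import urllib.request
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Port 0 asks the OS for any free port; serves the current directory.
server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

resp = urllib.request.urlopen(f"http://127.0.0.1:{port}/")
print(resp.status)  # 200
server.shutdown()
```

A framework like Flask adds routing, templating, and request parsing on top of this, but the serving itself really is batteries-included.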
Note: Thinking of learning Python? Run it on your desktop within minutes of reading this, without even a software install, using my virtual machine-based Linux distro made for just that purpose.
You can do as much with the Python programming language as you can with most other languages. To best answer this, maybe I should point out what you CAN'T do with Python. At first glance, it would appear that you can't make optimized compiled binaries for particular hardware, the way you can with C++, but there are projects like Psyco, Cython and PyPy that let you do just that.
At second glance, it would appear that you can't do complex anonymous functions in the functional programming style, such as with Haskell and LISP. This is probably true. If you are a functional programming purist, Python is not the best first choice—though it does have these functional one-liners called lambdas that help. There is a certain style of meta programming achievable in LISP that Python is not as suited for as more purely functional paradigm languages.
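A quick sketch of those functional one-liners, for the curious (the numbers are just illustration):

```python
from functools import reduce

nums = [1, 2, 3, 4, 5]

squares = list(map(lambda n: n * n, nums))        # [1, 4, 9, 16, 25]
evens = list(filter(lambda n: n % 2 == 0, nums))  # [2, 4]
total = reduce(lambda a, b: a + b, nums)          # 15

print(squares, evens, total)
```

Python's lambdas are limited to a single expression, which is exactly the "compromise versus LISP or Haskell" mentioned above—but for everyday map/filter/reduce work, they're plenty.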
And finally, Python is not the best choice for taking advantage of multi-core processors with concurrency and parallelism. For that, Go or Erlang might be a better choice. Aside from these cases, I think you will find that Python can do most anything that other general-purpose programming languages can do. Lastly, specialized languages geared towards a particular problem domain, like R for statistical computing, will always be better at their task than generalized languages like Python.
Despite my love for Python, there are downsides. Programming purists will rip Python apart for how it differs from other languages or leaves out this favorite feature or that. High-performance snobs will talk about the Global Interpreter Lock, or GIL, which will always prevent Python from becoming massively scalable. I say pshaw! Why exclude a language that's so rocking awesome in so many ways for such a broad audience of people, because of a few outlying use cases, or the predispositions of already-professional programmers? The things wrong with Python in my mind mostly amount to the necessary use of this statement, which is so inelegant versus so many other things in Python: if __name__ == "__main__":
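For readers who haven't met it, the inelegant-but-necessary idiom looks like this in context (the module name and function are made up for illustration):

```python
# demo.py — a hypothetical module
def greet():
    return "hello"

if __name__ == "__main__":
    # This block runs only when the file is executed directly
    # (python demo.py), NOT when another module does `import demo`.
    print(greet())
```

It's the standard way to make one file serve as both an importable library and a runnable script, which is exactly why it's necessary despite being ugly.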
I program FOR my job, and not AS my job. Therefore, my criteria for a programming language are entirely different from most professional programmers'. I couldn't care less where the job market is and which languages will get you hired. Every major programming project I've undertaken (and yes, some have been major) was for myself, and I've had the luxury of testing out a number of different languages, including some of the "usual suspects" (Java, ASP.NET, C++, Perl, Tcl, VBScript, ARexx). I know what I love and I know what I hate, and have always considered programming a necessary evil, an imperfect solution to things not already being how they're supposed to be. Programming is almost a violent process of banging reality into the proper shape, and therefore the language needs to feel like it's letting you do that. Python does. From what I've gathered, I imagine LISP does even more, but I just don't have the time for it.
I'm not really a LISP guy, but I would like to be. When I went through my excruciating process of choosing my new day-to-day language, I tried learning Scheme, and reading the seminal Structure and Interpretation of Computer Programs book. Problem is, in about the same time I dedicated to penetrating the first few chapters, I learned enough of the Python language to get to work. LISP is a truly self-referential, reflective meta-language—meaning the structure of data and programming syntax is the same. So, when you write data, you're writing programs, and vice-versa. That's profound, and is useful for self-modifying systems, artificial intelligence, inventing new languages, and solving certain classes of problems that are nearly impossible to solve any other way. But if you're not doing that sort of thing, you may find yourself a bit alienated, living in a world of your own invention—an incredibly powerful world where you have god-like powers—but one nobody else can inhabit but you.
Paul Graham calls it the Python Paradox
. Python programmers are smarter, but Python will rarely land you a job. Huh? Well, people who learn a programming language as radically different and useful as Python are doing it because they want to—out of love. They sought it out. They decided not to run with the pack. Python doesn't make you smarter, but smarter people tend to discover and use it. So the advantage of Python in the industry is minimal. But the advantage in your LIFE is incalculable.
vim – A text editor that is one with the force
It's not frequent, but sometimes the tools that make you work the hardest to learn are the best. vim is one of them. vim, the modernized version of vi that is rapidly becoming the de facto standard vi replacement, is a carry-over from the earliest days of Unix, but Bill Joy got something right with vi that has not been done better since: incorporating into an editor a text-manipulation language that lets you get better and faster with time. The mouse-and-windows user interface popularized by the Macintosh actually slowed down people's ability to work fast in a text environment, because of the continual need to reorient yourself—doing much more damage to your flow than a mere pause, because it actually throws off your rhythm and pace. If only you could "telepathically" control the text on the screen—well, vim is about the closest you can get to that without ever taking your hands off the keyboard. Sometimes old-school tools are best.
QEMU – The underdog workhorse of virtualization
One of the things you might like to do with a QEMU virtual machine is run it like a server so it can receive and answer requests from the "outside world" like a webserver or SSH server. QEMU's default security context makes this uniquely challenging, and even once you know the trick, you will only ever be able to make the services work on "high ports" of the host machine's address. By this, I mean the standard port for http is 80, but if you're running a webserver on QEMU, it has to be mapped (NAT) to port 8080 on the host machine and there's no way to change this unless you patch the source code. This is a security precaution because QEMU can be run clandestinely on a host machine and intercept traffic intended for the host, processing and relaying it for nefarious purposes. Once you accept the fact that your services are going to appear as if on host machine high-ports, then allowing traffic sent to the host machine to reach QEMU is a simple matter of running QEMU with the -redir parameter. For example, qemu -redir tcp:8080::80 will redirect any TCP/IP traffic sent to the IP of the host machine on port 8080 to port 80 of QEMU, where you might be running Apache, nginx or g-wan on port 80. QEMU will respond and it will be returned seemingly from the host machine on port 8080, and the entire process is transparent.
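Assuming you started QEMU with that -redir mapping, a quick way to confirm from the host side that something is answering on the high port is a probe like the following, sketched here with Python's standard socket module (the host and port are just the example values from above):

```python
import socket

def port_is_open(host, port, timeout=1.0):
    """Return True if something is accepting TCP connections on host:port."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return True
    except OSError:  # refused, timed out, or unreachable
        return False
    finally:
        s.close()

# With the VM's webserver forwarded via -redir tcp:8080::80, this
# should report True once the guest's server is up:
print(port_is_open("localhost", 8080))
```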
QEMU has dependencies that are not normally "baked in" to the binary and relies on common libraries on the host machine. The exact set of libraries required varies with the version of QEMU, and this makes it a bit challenging to make a truly portable QEMU of the type you can carry around on USB and run with a double-click on any host machine without an install. However, there are various tricks you can use to learn what those dependencies are, put them in the same directory as the QEMU binary, and set an environment variable before running QEMU so that it looks for its library dependencies in the same folder it's run from. This is what I have done with Levinux
for various host OSes including Mac OS X, Windows, and various versions of Linux.
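The environment variable in question is the dynamic loader's search path: LD_LIBRARY_PATH on Linux, DYLD_LIBRARY_PATH on OS X (on Windows, DLLs sitting beside the .exe are found automatically). Here is a minimal launcher sketch in Python, with hypothetical paths and file names:

```python
import os
import subprocess
import sys

def build_env(qemu_dir):
    """Copy the current environment and prepend qemu_dir to the
    dynamic loader's search path, so QEMU finds the shared libraries
    bundled alongside its binary instead of system-wide copies."""
    env = dict(os.environ)
    var = "DYLD_LIBRARY_PATH" if sys.platform == "darwin" else "LD_LIBRARY_PATH"
    env[var] = qemu_dir + os.pathsep + env.get(var, "")
    return env

# Hypothetical layout: the qemu binary and its libraries live in ./qemu/
qemu_dir = os.path.abspath("qemu")
env = build_env(qemu_dir)
# Uncomment to actually launch (disk.img is a placeholder):
# subprocess.call([os.path.join(qemu_dir, "qemu"), "-hda", "disk.img"], env=env)
```

To find out which libraries to copy in the first place, `ldd` on Linux or `otool -L` on OS X will list a binary's dependencies.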
Well, you can, but every time you restart your computer, you're going to have to remember to restart your virtual machines as well, or figure out a way to auto-start them. There are other complications that make it almost not worth doing, such as attempting to create a static IP on the network for your virtual machine, while the host machine in all likelihood is getting a dynamic IP. It's possible, but kooky. Also, you'll probably have to use something other than QEMU for your VMs, unless you want to use strange high-number ports for your services, like putting your webserver on port 8080 (instead of the default 80). That's because the security context of QEMU only lets you route high-port traffic into your VM, in an attempt to cut down on the type of trojan abuses QEMU could be put to when running secretly. For these and about a dozen other reasons, it's really best to just run your hobby servers on real hardware. The good news is that real hardware can be real cheap these days, like the $35 Raspberry Pi, and can ultimately save you money: leaving a Raspi running 24x7 will take a lot less electricity and cost less on your electric bill than preventing your main desktop computer from ever going into hibernate or sleep mode—another thing you'd have to do to keep a VM server always available. This is the point at which I advise people who are gradually getting acquainted with Linux servers to just throw some dedicated, 24x7, always-on, low-power hardware at the cause.
I've never done this myself, but it is theoretically possible—more so than on other platforms like VirtualBox, VMWare and Virtual PC that require an install. The problem is that the things that happen during the install actually greatly enhance the performance of the virtual machine (they make virtual Windows faster), while QEMU, with its very non-intrusive no-install approach, has fewer ways to achieve acceleration. So while your portable Windows may run from a USB thumb drive or Dropbox with no install, it may not be very fast. The exception is when your host machine is a new enough Linux to have KVM (the kernel virtual machine), which QEMU should automatically detect and use for much higher hardware-enhanced performance. But if your host machine is another Windows box or OS X, you are unlikely to see this performance enhancement, and may go nuts waiting for things to happen, in which case you'd be better off with VMWare, VirtualBox or Virtual PC.
That's like half the point of QEMU. QEMU doesn't need to be installed. Unlike VMWare, which completely transforms your system with all the virtual network interfaces and other software it permanently wedges into your host OS, QEMU can run with a double click from a USB thumb drive you pop into a PC, Mac or Linux box to which you have no administrative privileges. If you use Dropbox and keep a QEMU virtual machine in the cloud, you can automatically just run a virtual machine with a double-click on nearly any x86-based machine
you log into Dropbox with—meaning all modern Macs, Windows and other Linux desktops and laptops. Once you quit out of QEMU, it doesn't even leave a trace on the host machine. It's like it was never there—making QEMU a uniquely (versus VirtualBox, VMWare, Virtual PC, etc) nomadic virtual machine. QEMU is very special not only because it's a totally free and open source VM, but because of how it doesn't need to alter the host OS at all to work.
There is one main default networking setup under QEMU which is poorly understood, but extremely intelligent and worth understanding. Then you can just go with it, instead of fighting it. The main thing to know is that QEMU has a DHCP server and router built in, much like your home WiFi router, but essentially secret, known only to your host machine (the computer running QEMU) and your QEMU virtual machine guest. So long as your Linux boot image is set up to ask for a dynamic IP from a DHCP server (a common setup), QEMU networking should just automatically work. This is smart, because the presence of QEMU itself is kept secret from your network: it never asks for an IP or otherwise advertises its presence. This is an excellent security context for VMs.
Instead of asking your network's DHCP server for an IP, your QEMU guest always just gets the IP 10.0.2.15, issued by the built-in DHCP server at 10.0.2.2, which is also QEMU's firewall and router for when your guest makes outbound requests to reach the Internet at large—but which makes it look like normal traffic from the host machine and not a virtual machine running on it. When you run QEMU, you are essentially running a virtual local area network (VLAN) on your one physical machine shared by guest and host. Once you truly understand this fact, you can embrace it and then start to expand on the default to suit your needs—like using NAT rules (redir params when running QEMU) to make QEMU servers available to your host machine on addresses like localhost:8080, such as I do with Levinux
. Unarchive. Double-click. Server set up.
Note: Why not try my respin of Tiny Core Linux called Levinux, which runs from your Mac, Windows or Linux desktop with a double-click and no install?
Tiny Core Linux. This is for all the same reasons that it's the best Linux distribution for a USB drive. The two questions are very related. QEMU is the best choice for the virtual machine, because it can be configured so that a double-click will allow your virtual machine to run from the desktop of a PC, Mac or Linux machine—complete x86 portability! The files are so small that if you partition your drives right, you can even run it from a Dropbox cloud drive without causing too much network traffic while your VM is running. This is an amazing way to work, and I highly recommend it.
Note: Download Levinux and see the Core Linux (text-only) portion of Tiny Core Linux running with a double-click from your Mac, Windows or Linux desktop without even an install. It's set up as a full-fledged persistent development environment.
Tiny Core Linux... hands-down. Thanks to busybox, and minimal necessary hardware support, it comes in at about 12MB, and only about 8MB if you don't want graphics. That's 1/10th the size of the smallest Linux images using debootstrap, or even the stripped-down AMI virtual machines in the Amazon EC2 cloud. And that's with a modern Linux kernel and its own repository system for installing software! Of course, this also makes it a very strange Linux for most people—but for the same reasons, a very good one to learn on. It also makes Tiny Core Linux the ideal choice for a USB-stick-based Linux, particularly with QEMU, so it's portable between all x86 platforms, and can run without an install or admin privileges.
Until there is a better solution, you use the Q port of QEMU
. Problem is, the last update to their site was FOUR YEARS AGO! And while the version they're giving out works just fine—even on Lion—it's feeling a bit crusty. The main branch of QEMU has come out of beta and is at version 1 stable. If anyone knows a way to get some basic, competent, minimally patched compile of QEMU off the main branch to work on the Mac, please let me know. This is key to one of my projects!
Unix, Linux and everything else that matters
As always, we pre-qualify this question by stating we are talking about the non-graphical Unix/Linux command set. If you are interested in a version of Unix or Linux with a windows manager and desktop to make it look and behave much like Macs or Windows, there's not much to learn (know one windowing operating system, know them all). But if you are interested in the Unix or Linux command-set that underlies almost all other information technology these days, then a great place to start is with my own special Linux remix for education
If you're talking about a tiny version of Linux server (no graphics) that's easy to copy, loads fast and can work as a development platform, then download Levinux
. It is a virtual Linux that runs with a double click from your Mac, Windows or Linux desktop with no install. Because no install is required, it's a perfect version to run "in location" from your USB drive, or better still, Dropbox. Why is Levinux so good for Dropbox? Because you can start some work sitting at your PC at the office, and continue it on your Mac when you get home, as if you were on the same computer. The Levinux hard drives are carefully organized so that as you do work, only your "home" folder, where changes are actually occurring (and not your entire system), gets sync'd between Dropbox locations.
Like with similar questions, we must draw the distinction between Unix with a desktop environment and window manager, versus the "old-school" text-based interface that lies beneath. If you're just looking for an alternative desktop environment to Windows or OS X, then GNOME and KDE on Unix or Linux are no harder to learn than Windows or Mac. And in fact, the Ubuntu Unity desktop environment (based on Linux rather than Unix) is actually easier to learn than Windows or Mac. But if you're talking about the old-school Unix commands as run from the "shell" type-in command line interface, then yes, it is more difficult to learn than windowing environments, simply by virtue of being so different, and having a more difficult process of experimentation. Thankfully, one of the main obstacles to learning has been eliminated: tightly controlled hardware that a system administrator won't let you touch. Today, you can just download a ready-to-run virtual machine, such as my Levinux
or actual hardware costing only $35
that you could run 24x7 as a personal server where you are free to experiment, and can reset everything to its original state with just an SD Card file copy. So while the barrier to entry on command-line Unix is still quite high, it is infinitely more approachable in today's world than it used to be—and of course, is worth doing because it is the beating heart behind almost every information technology system today.
Yes! But we need to add some qualifications. This is very related to the question: is Linux worth learning? Unix and Linux are not the same thing, but when you're at the point that you're asking this question, they might as well be. Their differences do not impact early learning much, and will become clear as you go. You might be thinking: is Unix worth learning as an alternative to Windows or Mac OS X? Meh. Windowing OSes are windowing OSes, regardless of the vendor. So, if you really want Unix, try PC-BSD. If Linux, try Ubuntu or Mint—either of which will provide an adequate Windows replacement. But if that's all you want, maybe just stay on whatever you're using. However, if you're here to ask whether "lower-level" Unix, stripped of any graphical user interface, is worth learning, then again: YES! If you are a technical professional looking for your next step, it will change your life and start to make your skills sort of obsolescence-proof, disruption-proof and timeless. Unix has been around for 40 years, withstanding the test of time, and is likely to be around for another 40, even despite the technology acceleration curve. It underlies almost everything in infotech. In fact, being able to "take for granted" the great timeless underlying unixy platform (having committed everything there long ago to "muscle memory") lets you focus more on the higher-level things in the stack where your special customizations occur. In other words, Unix is worth learning so you don't get the rug pulled out from under you by vendors every 5 to 10 years.
One would think that a personal project of one employee at AT&T in the late 1960s wouldn't be the foundation for the entire information tech industry today, but it is and shows no sign of waning. And while Linux is not Unix, you can lump them together when speaking about the future of infotech. There have been many attempts at alternatives—some arguably better—but Unix/Linux is a case where good enough is better than any newcomer (such as Plan 9 or BeOS). The reason is that the Unix command set provides a simple, consistent, and surprisingly powerful way to manipulate information with text commands. Don't mistake Unix/Linux for the graphical environments like Ubuntu Unity, which are really no different than Windows or Mac. The important part is the underlying command-set and information piping system. In fact, many OSes that you will find as "alternatives" to Unix/Linux are actually just Unix/Linux with a different graphical user interface slapped on it. Android, iOS and Mac OS X all actually have Unix or Linux lurking beneath. Therefore, the obsolescence of Unix is not only not inevitable, but is also highly unlikely even in light of technology acceleration. In other words, it's been around for 40 years and is likely to be around for another 40.
I am considerably less passionate about the future of the Linux desktop than I am about the awesomeness of the underlying text-only "short stack" of software that lurks beneath for software development—consisting of a Unix-like operating system, a nearly telepathically controllable text editor (vi) that's pre-installed on almost all hardware in existence, a distributed version control system like git or Mercurial (hg), and the language of your choice, such as Python, Common LISP, Standard C, etc. All that windowing stuff is an interchangeable distraction. OSes are not important anymore. OSes just need to provide a nice way to launch applications, then fade away into the background to make room for the apps. OSes need to let you organize and switch between your apps seamlessly. So all this stuff about Linux someday being a threat to Windows on the desktop... well, it's unimportant. GNOME, KDE, Unity, blah blah blah. But Canonical's Unity interface for Ubuntu is actually pretty awesome, and Mark Shuttleworth is about the closest thing to a Steve Jobs-like character we have left on the planet. And they do have that plan for a unified interface across phones, tablets, desktops and TVs. And Ubuntu's built on Debian, which is itself awesome, and not beholden to any large corporate interests, like Fedora, OpenSUSE and other "major" versions of Linux desktop are. So yeah, I'll go with Canonical Ubuntu Unity as the future of the Linux desktop. Let's go with that.
Unix and Linux office administrators used to be some of the high priests of the office—regardless of how skilled they were. But with the bottom falling out of the price of hardware, it is much easier to get into the game and be perceived as a valuable entry-level sysadmin. So, you are going to have to be much more skilled to have a real career beyond tech grunt. There are a few areas where better sysadmin kung fu really makes a difference: security, large-scale deployment, and high-availability, highly scalable apps. But calling yourself a sysadmin might not be the wisest thing anymore. If you want an interesting and rewarding career a step up from administration, the buzzwords to google are infosec and devops. If you stay in just administration, services like Google Apps for Business and Zoho are eventually going to make you obsolete.
It's worth learning text-based Linux, because it is a Unix-like operating system, which has become the underlying technology of almost everything these days, from WiFi routers to supercomputers to remote control helicopters. Most things with a computer inside them these days don't have monitors and keyboards hooked to them like desktop PCs or laptops. Instead, you have to "log into" them with SSH or serial connection. This is very old-school, but old-school is more important today than ever. To learn Unix and Linux in this way, you need to log into something and start playing around. Your options are setting up a multi-boot system, starting your system with a Live-CD, or logging into some remote server in the cloud. But a tiny virtual Linux like Levinux
that runs with a double-click from your Mac, Windows or other Linux desktop gives you a much easier option to get started.
Yes, but you have to use either a Live-CD, a multi-boot system, or a virtual Linux install. All this sounds pretty intimidating, but it's not. The easiest of these options to use without having to mess with your system and reboot all the time is virtual Linux. If what you're interested in is the old-school type-in Linux server, then just download my distribution, Levinux
. This is recommended, because if you learn Linux with a window manager like Gnome or KDE, you're learning just another windowing environment, and not the underlying timeless tech.
However, if you ARE interested in one of the graphical popular distributions of Linux, like Ubuntu, Fedora, OpenSUSE or the many others, then you can usually find a ready-made Linux Virtual Appliance to download and run. It pops open like a program, but it is a full-fledged virtual computer with most of the capabilities of a native hardware install, and is just fine for getting started learning. You may have to download software to be able to run these virtual appliances, but that too can be found for free, such as VMWare Player or Oracle VirtualBox.
Yes, however there are various ways to go about it. First off, Mac OS X is actually based on a derivative of Unix called Darwin. So merely by opening a Terminal window (type Terminal into Spotlight), you're using something VERY similar to a Unix bash shell—meaning, you're using Unix. Linux and Unix are very similar, so by learning one, you are learning many of the essentials of the other. But for various reasons, it's not always best to work directly on your computer's native OS when a virtual environment is so easy. So, you can run virtual versions of Linux—not touching your native OS X installation—and it's much easier than you think.
If you just want a minimal Linux server (recommended), you can use Levinux
, my own Linux distribution. If you want a more complete version of Linux, you can use Parallels, VMWare Fusion or VirtualBox—three different products that allow you to seamlessly run a complete additional operating system—to run more graphical and popular versions of Linux, such as Ubuntu, Fedora, OpenSUSE, Arch or many others. And the final note: if you're interested in truly learning Linux, and not just window dressing like Gnome or KDE (making Linux work like a Mac or PC), then what you're really interested in is Linux Server, and the old-school type-in interface.
As technology improves, there will be unlimited ability to mold and shape interactive information devices like phones, tablets, wristwatches and monitors into new shapes. Also, the sensors that let them interact with the real world will undergo (are undergoing) similar explosive variation. Every one of these will need some sort of underlying embedded operating system unencumbered by restrictive licenses, one that developers can quickly understand, rely upon, and generalize about without having to learn the nuances of every single device their apps might target—and that embedded OS will often be Linux.
In that scenario, Linux becomes just sort of generic plumbing. Oh sure, some people will use Linux on the "desktop", but that won't be any philosophical debate. It'll just be a by-product of Linux being everywhere else, and it only making sense. Of course in some cases, it will be Unix and other variations of *nix platforms that have even less restrictive licenses than GNU. But as a rule, manufacturers will use Linux as a short-cut to get around all the work they would have to do themselves for their own custom embedded Unix, or a proprietary one that could be even more expensive than "just sharing back" as GNU licenses generally require.
NOTE: If you believe like me that Linux is the future, but have yet to take the plunge, try out my unique Linux distribution that runs with a double-click from your desktop.
Is Linux the future of... computing? Embedded systems? Information tech in general? Yes, probably. This opinion comes under intense criticism, but the fact is that in the long run, open wins out over proprietary. Now, proprietary can result in astounding "just works" products, and I'm a HUGE Apple fan. But even the iPhone is based on an open source Unix, which is along very similar lines to Linux. All due respect to AROS, but Linux or Unix is already becoming the underlying plumbing of nearly all info-tech devices. And so long as Linux remains well-maintained, truly free and open source, and under the rule of a benevolent dictator like Linus Torvalds or some credible successor, Linux adoption in all things will just continue to grow as its hardware support expands and developer barriers-to-entry diminish. Linux is currently undergoing a major expansion, being adapted to a diversity of ARM platforms, such as the $35 Raspberry Pi, whose effect on computer education and worldwide developer recruitment cannot be overstated. And Linux is already one of the most common operating systems on supercomputers. So, Linux has already won on the very large and very small. Now, it's just a matter of Linux's ubiquity meeting in the middle.
The "difficulty learning" just doesn't apply anymore, if what you want is an easy "desktop" Linux/Unix to serve as a free and powerful alternative to Mac or Windows. I personally use Ubuntu, whose Unity interface is becoming consumer-friendly, with a solid installer, expansive hardware support, intuitive user interface, built-in software store, and fewer and fewer ways available to the user to screw things up—more and more equivalent to Windows and Mac, and better in many ways.
However, if what you're looking for is old-school Unix server, then yep, it's not very easy to learn—but only because it's different from most things you already know. Each Unix command is like a world unto itself, with limitless things to learn and master. And then the output of one command can be "piped" in as the input of the next command, making some remarkable things possible. It takes a while to get used to, but if you stick with it, there is a lot of reward, like so many things that take patience and practice.
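To make the piping idea concrete, here is the shell pipeline `printf 'one\ntwo\nthree\n' | wc -l` reproduced from Python's standard subprocess module; the wiring between the two processes is exactly what the shell's `|` operator does:

```python
import subprocess

# Equivalent of the shell pipeline:  printf 'one\ntwo\nthree\n' | wc -l
# The producer's stdout is wired directly into the consumer's stdin.
producer = subprocess.Popen(
    ["printf", "one\ntwo\nthree\n"], stdout=subprocess.PIPE)
consumer = subprocess.Popen(
    ["wc", "-l"], stdin=producer.stdout, stdout=subprocess.PIPE)
producer.stdout.close()  # so the producer sees SIGPIPE if wc exits early
out, _ = consumer.communicate()
print(out.decode().strip())  # counts the three lines: prints 3
```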
NOTE: If you actually have a sincere interest in beginning to learn Linux server, I prepared what I think is the easiest way to begin getting some hands-on Linux experience.
Desktops for Unix and Linux are graphical user interfaces (GUIs), just like Mac OS X or Windows. Unix desktops started out many years ago with the X Window System, upon which the desktop environments known as Gnome and KDE have since been built on Linux. Today, both are very popular and used in nearly every Linux distribution labeled as the "desktop" or "workstation" version, and they make Linux very usable to the average user. The versions known as "server" are usually very different in that they don't include a window manager, and for good reason.
Linux desktops, while providing a free and often superior desktop computer experience, are not giving you that extra "edge" you may be looking for in taking up a *nix operating system. For that, you want to use "Server", which is just a way of saying that no window manager is installed, and tons less software is pre-installed. Servers generally don't need graphics, because they run "headless". So, when you use Linux Server, you are using that type-in command-line interface that is lurking beneath, and in common to, every Unix and Linux system.
Learning this, as archaic as it may seem, is where the edge really resides for a developer or knowledge worker, for practical security reasons in addition to old-school know-how. The less software you have running (and window managers bring A LOT of software with them), the more secure your system. It is said to have a smaller footprint or "attack surface". And many instances of Unix and Linux are "embedded" into devices where it would not make sense to have a desktop, but you should still be able to get around in such a server environment.
NOTE: I believe so strongly that "embedded" Busybox Unix/Linux is so important a platform to master, that I made my own derivative distribution of Linux to help you learn Linux.
Unix AND Linux are rapidly becoming—or by some measures, already are—the generic plumbing of the information technology world, and the lingua franca of knowledge workers. That all sounds pretty highfalutin, but actually Windows is just about the last remaining massively deployed unusual proprietary system—while Macs and almost every electronic gadget has made the switch. Over the years, proprietary systems will get weeded out, replaced by systems that have "standard plumbing" underpinning them.
Why? Because in order to do anything interesting, you have to be able to take all the stuff you're building on top of for granted. That means it should be generic, interchangeable, multi-source-able, and unencumbered by proprietary licenses... in other words, plumbing. The future of Unix is fading invisibly into the background so that you can focus on the things that are built on top of it and important to you. Oh, and it's awful good to know Unix/Linux in that future... uh make that, "current" world!
Unix and Linux are not the same thing, but they do share many things in common. Unix is the original, with a history going back to 1969, when Ken Thompson and Dennis Ritchie of AT&T Bell Laboratories released the first version—the genesis that spawned thousands of versions. Linux was started in 1991 by Finnish student Linus Torvalds to bring the benefits of Unix to lower-end (cheaper) 386 PC hardware, becoming the staple of the free and open source software movement. These different origins illuminate the subtle differences that continue to differentiate the two today, with a higher degree of formality, control, and centralization surrounding Unix, versus a free-for-all, spin-your-own-distribution culture surrounding Linux.
Ubuntu Unity – One UI to Rule Them All!
By far, it is how every system upgrade keeps turning the defaults for the printscreen feature back on. While it's wonderful to have such an easily accessible screen capture capability, come on! The PRTSC key is so prominently positioned on so many keyboards that it's just impossible not to hit it from time to time. You can dismiss it with repeated taps on the ESC key, but it is really jarring and disrupts your work-flow like a punch in the nose. To turn it off, go into System Settings / Keyboard / Shortcuts tab / Screenshots, click on "Take a screenshot", hit CTRL+PRTSC on your keyboard (which becomes the new binding), then close out of System Settings. Repeat with every system version upgrade.
I'm only on the first public release beta right now, but one of my favorite features is turned off by default: workspaces. To turn it back on, just go to System Settings / Appearance and click the Behavior tab and check Enable workspaces.
The answer is MyUnity. If it's not on your system already, get it from the Ubuntu Software Center. It is the "less dangerous" replacement for CompizConfig Settings Manager. Load MyUnity, go to the Desktop tab, and adjust the H Desktop and V Desktop numbers. If you're on a dual-monitor system, let me suggest 1x3.
To upgrade from Ubuntu 12.04 to the 12.10 Alpha, open a terminal window and type: "sudo do-release-upgrade -d"
How workspaces work in Unity is in flux. Apple changed expectations with their "as-needed" workspaces, and Gnome 3 followed suit with a workspaces panel that does about the same thing, but Unity does not ship with it. So, you're stuck with four virtual screens right now, but you can get the Gnome panel with "sudo apt-get install gnome-panel" (probably only works in Unity 3D). But there are two other ways (in Unity 3D): MyUnity and ccsm (CompizConfig Settings Manager), neither of which ships with Unity, but both work just fine with it. Ubuntu is trying to phase out ccsm because of the amount of damage you can do to your system with it. If you need it, go to a terminal and type "sudo apt-get install compizconfig-settings-manager". Run it, and go into General compiz options / Desktop Size to adjust your workspaces. MyUnity is a much safer way to customize Unity without damaging your system. If you're using MyUnity, which you can get from the Ubuntu Software Center, run it, go to the Desktop tab, and adjust your H Desktop and V Desktop numbers. But if you're on Unity 2D, you have to go into gconf-editor (either type gconf-editor from a terminal, or configuration in Dash) / apps / metacity / general / num_workspaces and change the 4 to another even number (because it automatically does the x by y grid). Log out and back in. If you want one column of virtual screens, or, as with the Mac, one row of virtual screens in Unity 2D... well, too bad.
One of the biggest annoyances for me in Ubuntu is how by default the screen snapshot feature is SO easy to invoke, and with a dual-monitor setup, this is particularly annoying because the screenshot is of such a large space, and takes that much longer before you can hit the ESC key to dismiss the unwanted screen capture. To turn off this feature, simply go into System Settings / Keyboard / Screenshots, and highlight "Take a screenshot" and that will put you in new hotkey capture mode. Set it to something much less likely to be hit by accident. I use Shift+Ctrl+Print, which seems pretty logical. Unfortunately, this setting seems to be lost occasionally through updates. You can also just hit backspace to clear the keyboard shortcut.
By default on a dual-monitor system, you're going to get the app launcher on the left-hand side of both monitors, with an ultimately annoying magnetic sticky effect when you drag the mouse between monitors. Now, I understand the reasoning, and whether this is good or bad depends on how you use dual monitors. It sucks for me, so I'm turning off the second launcher. Rumor has it that you can change Edge Stop Velocity with CompizConfig Settings Manager if you re-installed it. Wrong. Another rumor has it that turning off Sticky edges under System Settings / Displays is supposed to do it. Wrong again—though you do have to turn it off as part of the solution. The right answer is going into the X config program (in my case, NVIDIA X Server Settings) and fiddling with the "Make this the primary display for the X screen" checkbox until you have just one launcher, and when you do (if the monitors are on the wrong side), make your life easier and just switch monitor cables. Trust me. You will win back hours per week in dual-monitor sticky-mouse frustration.
Update: check out this post on Ask Ubuntu
Ahhhh, the old full-screen remote desktop in Ubuntu question—particularly on a dual-monitor system. Okay, here's the scoop: it's still flaky. All RDP clients suck for one reason or another. Remmina is the least bad, but it still has problems with full-screen, particularly on dual-monitor systems. I won't get into all the nuances, like the technically accurate but odd definition of what "full screen" is on a dual-monitor system (where full-screen mode spans both monitors). But you are MUCH better off allowing Remmina to keep your RDP session window "framed" in an Ubuntu window, maximized to full-screen with the full-screen gadget or with a double-click on the window's drag-bar. This way, you can make it behave and live nicely on just one of your two monitors, which is probably what you want, and you give up only a little bit of vertical screen real estate. The trick is figuring out how large to set the x by y resolution for your terminal window so that when you maximize, you have neither black stripes nor scroll bars. My magic numbers are 1912x988 on a 1920x1080 monitor. So in summary, just use maximized windows instead of full-screen mode, and live with it until things get less flaky.
Yes, Precise Pangolin looks better, handles multiple monitors better, and handles logging in and out better. But most importantly, it performs better—including a more responsive Dash. I am currently working on a Commodore 64x "Basic", which has an Intel Atom D525 1.8 GHz dual-core CPU, which is actually pretty weak (upgrading soon to Ivy Bridge), but the performance is totally sufficient for day-to-day work (multiple web browsers, lots of terminal windows, two 1920x1080 24-bit monitors, etc.). Also, I am gradually getting used to the Super key for Dash and tapping the Alt key for HUD... potentially transforming how I use desktop computers... the further reduction of the absolute need to use the mouse, and the increased ability to "get into the zone" while you work, without being jarred out of it by re-orienting your hands on the keyboard and mouse. A big step in the right direction for desktop OSes.
Ubuntu 12.04 Precise Pangolin Long Term Support (LTS) is scheduled for April 26th. Feature freeze happened on February 26th, and it's all been about inching in on stability and user interface refinements since then. Ubuntu uses a controversial but reliable six-month release schedule, resulting in each version being "whatever we can get in" by the deadline rather than "released whenever it's ready"... but it does keep things moving forward quite nicely.
You don't. Workspaces were moved to a configure-on-the-fly model that keeps pace with both Macintosh OS X Lion and Gnome 3, which have forgone "fixed" workspaces in favor of a workspace-when-you-need-one. So, when you hit the Super+s key combination that shows your virtual screens, you will always see your current one, plus a 2x2 quadrant, with the 3 unused quadrants "greyed out" until you drag something onto one of them.
No, you are actually the cool one, recognizing that there are advantages in using a polished Linux desktop that doesn't necessarily sport infinite customization, but instead focuses on practical conventions. These conventions may not be mainstream yet, but they are well reasoned, are undergoing rigorous testing, and are intended to provide a consistent experience across platforms (phone, tablet, netbook, desktop, etc.). While contrary to much of the customization spirit of the Linux community, it is exactly in this spirit that Canonical is free to proceed in this direction. I for one am going along for the ride, because features like HUD really speak to me, and my hatred of dropdown menus. I support you here, Mark Shuttleworth, you crazy space tourist.
This is harder than it used to be, because of the removal of CompizConfig Settings Manager (ccsm). So far, I had managed not to re-install ccsm on my Ubuntu 12.04 Precise Pangolin system, trying to do it the "Ubuntu Unity Way", but this was the deal-breaker that forced me to re-install it. I need this keyboard shortcut, as it is fundamental to my day-to-day workflow—I don't know how this is not a first-class keyboard shortcut! After it's installed, go into ccsm, and scroll down to Window Management / Put. Check Put (activate it), then click into it. Enable "Put To Next Output" for the keyboard (not the mouse), then tell it to grab your keyboard combination. I use Ctrl+Alt+Spacebar, which it translates into <Control><Primary><Alt>space. Once done, press Ctrl+Alt+Space and marvel.
Ubuntu Unity is increasingly trying to prevent you from blowing up your own system. You can blow up your system with CompizConfig Settings Manager, so it is not included. Also, if you go to the Ubuntu Software Center, it will look like Compiz is already installed, but this is not CCSM. If you want the old controls, go to a terminal, and type: "sudo apt-get install compizconfig-settings-manager" (without the quotes). Now you can type compiz in Dash and launch CompizConfig Settings Manager, but it will give you a warning that you may cause damage to your system. There is a movement to provide MyUnity as a less dangerous Compiz, which still provides some level of Unity customization. I ended up putting ccsm back on 12.04, but ONLY to move windows between dual screens with a hotkey.
The way Workspaces work has been significantly changed in Ubuntu 12.04, following the model set forth by both Gnome 3 and Apple OS X—workspaces when you need them. In other words, on-demand workspaces. This takes a lot of getting used to for people who have trained their muscle memory to know exactly where their fixed workspaces "were" and how to get to them. You now have to use the Super+s key combination, or the Workspace manager icon left out on the Launcher, to drag open apps to a ghosted, greyed-out, inactive Workspace, which then suddenly becomes active.
Change it to something more obscure, like Ctrl+Shift+PRTSC or press Backspace to clear the keyboard shortcut. On a side-note, Ubuntu is chock-full-O' great screen-capture shortcuts that don't actually interfere with day-to-day work, such as my favorite: Shift+PRTSC, which immediately gives you the cross-hairs to capture a region of the screen. To re-assign the keyboard shortcut for a full screen capture, just go to System Settings / Keyboard / Shortcuts / Screenshots... go to the SECOND listing of Take a screenshot, click on it, and enter your new keyboard shortcut. I chose Shift+Ctrl+Print to get rid of the annoying BAM in your face that you get when you accidentally hit that key while typing.
No need to wipe your hard drive! I started from CD-ROM install media for Ubuntu 11.04, then upgraded to 11.10, then set my Update Manager to work off of the developer-release repository, and allowed the 12.04 alpha 2 upgrade to install, which carried me along to the 12.04 beta, which I am now on. Every step of the way, the installer offered to remove files that are no longer required. So, an upgraded Ubuntu is, to all appearances, as clean and pristine as installing it from scratch. This I believe is due to Ubuntu being derived from Debian Linux, which maintains a giant database of dependencies and software interactions. The kind of .dll file accumulation and registry cruft that happens on Windows just doesn't plague Ubuntu (or any other Debian-derived Linux).
All you Ubuntu / Mac crossover fans are finding my site on this question. While I'm quite confident you can do this using VirtualPC or VMWare Fusion, I don't really know if you can multi-boot, which I know some people are trying to figure out. If you have the answer, please leave a comment here so I can update the answer. Thanks!
HUD is a type-in alternative to dropdown menus in Ubuntu, starting with late 12.04 alpha releases. It was a result of Mark Shuttleworth's observation that users lose lots of time hunting through arbitrarily organized dropdown menus, when they already know the name of what they're looking for—for example, "crop" in GIMP. If you know you want to crop, why can't you just start typing it? And now you can! It loosely resembles the Macintosh Spotlight feature, but for finding product features rather than documents and files.
Just briefly tap the Alt key with any application active. HUD has shipped included and turned on since the late alphas (now released). It takes some getting used to, but is worth it. I think we finally know why the Alt key was created. To use the Alt key in the old way, simply don't quick-tap. Instead, do a typical multiple-key hold-and-modify, such as Ctrl+Alt+Del.
To upgrade from Ubuntu 11.10 on a desktop system, press Alt+F2 and type in: update-manager -d. This will also work if you just open a terminal by whatever method you prefer, and type the same thing. You will now see an "Upgrade" button in the lower-right of Update Manager, and it will take care of the rest for you. You now have the developer-release sources in your repository, and the updates will be more frequent, but less prompted. Go into update manager often. You will further be promoted from alpha to beta releases, and so on. Alternatively, you can download the install images
One of the improvements to Precise Pangolin Ubuntu Unity 12.04 in the current Beta 1, released March 1, is putting the launcher (or launchpad) on both screens in a dual-screen system. Currently, it only appears to be supported in Unity 3D, so if you're using Unity 2D, you are only seeing the launcher on your main screen. If your hardware supports Unity 3D, just log out, check the 3D option under the login options, and log back in.
Tiny Core Linux – a lesser known greater Linux
After a fairly long search, I've determined that the absolute tiniest version of Linux that runs well on a virtual machine and provides networking services with virtually no effort is Tiny Core Linux. You can run Tiny Core Linux on almost any virtualization product using its boot image, but I find that the way it combines with QEMU is particularly useful, because between GPL licensing and the lack of an "install" procedure with QEMU, you have a fully working virtual machine that you can distribute or run directly from USB drives or Dropbox without an install. Now, even though you have networking services from Tiny Core Linux, you're still working within QEMU's networking context, meaning you have a virtual LAN that requires some port redirection between host and guest computers. It takes a while to really get it, but you can download Levinux and pick it apart to see how it can be accomplished.
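As a rough sketch of what that port redirection looks like, QEMU's user-mode networking lets you forward host ports to the guest with hostfwd rules. The kernel and initrd file names below are placeholders, not Levinux's actual file layout, and Levinux's real launch scripts will differ:

```shell
# Boot a Tiny Core-style kernel/initrd under QEMU user-mode networking,
# forwarding host port 2222 to guest SSH (22) and 8080 to a guest web
# server (80). File names here are hypothetical.
qemu-system-i386 \
  -kernel vmlinuz -initrd core.gz \
  -netdev user,id=net0,hostfwd=tcp::2222-:22,hostfwd=tcp::8080-:80 \
  -device e1000,netdev=net0

# Then, from the host machine:
#   ssh -p 2222 tc@localhost
```

The guest lives on QEMU's private virtual LAN, so the host can only reach services in the guest through forwards like these.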
Yes, absolutely! Tiny Core Linux's stripped-down design makes it an ideal choice for both virtual and real appliances—along with its sister version, Micro Core Linux, which doesn't have the graphical desktop. Because it is based on the BusyBox single-file replacement of the GNU commands, Tiny Core has a lot in common with embedded systems. There is therefore less file-bloat, making your appliance smaller. There is less software running (less surface area), making your appliance more secure. The knowledge and know-how you acquire in getting through such a project will familiarize you with many of the issues of embedded systems. And finally, Tiny Core's creative incorruptible design makes your appliance uniquely resistant to corruption due to power-loss, malicious software, and general software cruft accumulating over time. It is always as pure and pristine an installation as the day you released it.
I know that there is a lot of interest in this question, and although I have investigated this, I do not know the answer. I will participate in the Tiny Core Linux community a bit more, and post the answer here when I have it.
Raspberry Pi – The $35 computer that’s changing the world
Yes. All you need is a USB-to-MicroB USB cable. Lots of phone charging kits and juice packs come with this cable.
Note: You should really watch my Raspberry Pi Unboxing YouTube video on this page and read the comment thread. This question is answered over and over ad nauseam. Bottom line: Education!
You may think "big deal". It's under-powered, and it's neither Windows-compatible nor a Mac. But what you're missing is that the Raspberry Pi marks the passing of traditional computers into the realm of ubiquitous disposable appliances—much to the benefit of children and developing nations. The thought process goes like this: modern computers do everything possible to protect that $2000 price point that has prevailed for 30 years. Only recently have netbooks and the iPad driven prices down to the $500 range for something decent. But we are 30+ years into the home computer revolution, and shouldn't we be at a point where actual computers that you can do interesting things with are handed out like textbooks in school? In fact, they should be cheaper than actual textbooks by now. Oh wait a second... Yes, yes they are. Thank you, Raspberry Pi Foundation, for finally setting things straight with a $35 full-featured computer that everybody can get their hands on. And an honorary thank you to the OLPC foundation for a good college try.
You need a micro B USB cable connected to a minimum 700mA (milliamp), 5V power supply. Sounds intimidating? It's not! This is an unbelievably standard power arrangement for the Raspberry Pi organization to have chosen, and typical of their "open" decisions whenever possible. Got an iPhone or iPad charger? You've got the power supply part covered. All you need now is a micro B USB connector on the end—it's the smaller, flatter tiny USB connector, not the taller, notched-looking one. You may have it from a digital camera or other device. A lot of cellphones, like Motorola's, use this connector, but typically only provide 500mA. There's no harm in trying it. Also, it's safe to go to a higher mA supply. I use a 1500mA Sony that I foolishly bought for $20 before realizing my iPhone power supplies would have done the trick.
You can run the Raspberry Pi headless, just as you can nearly any modern computer, making it a logical choice for a very cheap micro server. The trick with the Raspberry Pi is that it does in fact have a high performance VideoCore graphics processing unit (GPU), which theoretically could be turned off to make the RPi's already incredibly efficient ~2 watt power-draw even lower, but it would have to be disabled in software. Also theoretically, you could re-purpose the GPU for other non-graphics uses. In either case, yes, the Raspberry Pi can be run headless, though all the standard requirements apply... you need to have an SSH server running, like OpenSSH or Dropbear, have proper network access to the device (like getting SSH in through any firewalls), and of course know the IP address or NAT mapping of the RPi server.
What? Are you kidding? Someone actually searched on that and clicked through to my site. I'm already in a top search rank for this, so I don't really have to write this for the audience. But come on, people! A full-fledged computer with media-center / game-console capabilities for $35, and that includes networking. The point is that computers like that are fast approaching disposable, and that has socially transforming implications. The computer itself is cheaper than the textbook about computing! It could be the cornerstone of a modernized education curriculum for an entire country... exactly what it was designed to be and do.
Hey, if you're just trying to learn Linux, then one alternative to the Raspberry Pi is to just download Levinux and run a Linux server on your desktop with a double-click from your Mac, Windows or Linux desktop with no install! What's cheaper than $35? Free!
Not at the price-point of $35, there isn't. There are plenty of "little computers" that fall into the category of plug computers—my favorite being the SheevaPlug, of which I own five for development and micro-server purposes. There's also the GuruPlug and TonidoPlug, and I'm sure a whole host of others. But these are in the $100 range and intended to run as "headless" servers, leaving out the most difficult, and might I add, sexiest bit for consumers: high-resolution, high-performance graphics output. This is what lets the Raspberry Pi work as a media center or game console. It can play back 1080p video, which is BluRay quality—and at $35, THAT'S what makes the Raspi unique right now.
The closest piece of hardware to do anything like this at the price is AppleTV, which is 3 times the price and a closed system, requiring hacking to do anything interesting. The CuBox comes in next at 5x the price. Raspberry Pi is the first, and currently the only one of its kind in this low-cost-to-the-point-of-being-disposable category of media-center-capable PCs, and perhaps more significant than the Commodore 64 (C64) or One Laptop Per Child (OLPC).
For the sake of full coverage, and per the Mashable article, a few other boards worth mentioning are the Panda Board (another embedded 1080p-capable board), the radically different Cotton Candy USB PC, the Arduino ARM competitor Beagle Board, and the extremely raw and component-like Gumstix. None are real competitors to the Raspberry Pi, as they are either not full-fledged computers, or come in at 4x the price-tag. Raspberry Pi is also a charity organization, so I suppose that helps keep the price low. But expect real competitors to break out over the next couple of years. This is another computer revolution in the making.
And another one just appeared that looks a lot like Cotton Candy, the Aliexpress Rikomagic.
The primary difference between the SheevaPlug and the Raspberry Pi, aside from the obvious $65 more for the SheevaPlug, is the Raspberry Pi's graphics. The SheevaPlug is designed to be a headless system, with no VGA, DVI, or HDMI ports. The only way to see what's going on with a SheevaPlug is to connect over a serial or network connection, which does not make it a good choice for running a "desktop" such as Ubuntu, or any of the other popular Linux desktop distributions. The same goes for media centers such as XBMC, which are turning out to be popular applications for the Raspberry Pi, as it is capable of 1080p BluRay-quality playback. Besides graphics, the SheevaPlug uses a faster 1.2 GHz ARM9E, while the Raspberry Pi uses a seemingly slower 700 MHz ARM11, but don't be fooled by the numbers. Yes, Raspberry Pi chose to go with the seemingly less powerful and older ARM11 versus the newer Cortex A8 or Cortex A9 pushed out as a response to Intel Atom processors, but the subtleties of that decision are for another article. Plus, of course, the SheevaPlug comes with a plastic case.
The short answer is yes, but it may be slightly less reliable and more difficult to administer, as you will have to host it out of your home. But you will learn a lot in the process, and save the roughly $10/mo that tier-1 Rackspace hosting costs. It's perfect for running small websites.
This question is rising in popularity. I originally documented it so that I could make interchangeable boot environments for the SheevaPlug, but now that the Raspberry Pi is positioned to become 1000x more popular, this question is probably for you. First of all, you need to use a machine that has enough storage to hold the image... or else, you need TWO SD slots. So that pretty much rules out using the Raspberry Pi or SheevaPlug itself to do the copy. You preferably want to do the work on a Linux box with enough hard drive space and an SD card reader, because it has all the necessary tools, including dd, a raw media copy command built into Linux. The rest of the answer of how to copy an SD card is here
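For the dd step itself, here is a minimal sketch, assuming the card shows up as /dev/sdX (that device name is a placeholder—confirm yours with lsblk or dmesg first, because pointing dd at the wrong device will destroy data):

```shell
# Dump the whole SD card, boot sectors and all, to an image file
sudo dd if=/dev/sdX of=sdcard.img bs=4M

# Write the image back out to a second (same-size or larger) card
sudo dd if=sdcard.img of=/dev/sdY bs=4M
sync   # flush disk buffers before pulling the card
```

Because dd copies the raw device byte-for-byte, the clone carries over the partition table and bootloader, which is exactly what you want for interchangeable boot cards.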
Computing’s heterogeneous again! Cross-platform issues
The amount of re-learning you have to do from one desktop OS to the next is minimal, so the question is not as important as "which server platform". Nonetheless, it's where you live every day, so it's worth considering. Definitely not Windows. It's both proprietary and forces you to re-train every few years during forced upgrade cycles (revenue windfalls). OS X is the best proprietary desktop, and my second choice for daily productivity. First is (controversially) Ubuntu Unity, thanks to the unsung hero of the pragmatic desktop, Mark Shuttleworth, who is extending Apple's "Spotlight" innovations into application menus with HUD. I use it every day as my main desktop environment. The notion that Linux isn't ready for prime-time desktop use is long-since dead. And Ubuntu "Precise" is just about as polished as any product can be.
UPDATE: Ubuntu 12.10 Quantal Quetzal is even more polished. My affinity for Ubuntu continues to grow, as it becomes cleaner, faster, and gets even more out of your way. Ubuntu has been making some gutsy decisions, and I like them. Decide for yourself. Don't let old-school Linux desktop bullies tell you Ubuntu lost its way. It's forging forward on a brand-new path, and few realize how significant it might become.
Ahhhh—my favorite character—the em-dash. You can type it on any platform. Macs and Linux make it easy. PCs only really let you do it easily if you have a number-pad on your keyboard, which is mind-bogglingly dumb. Anyway, it goes as follows.
Mac: Option+Shift minus, or however you want to say it. It's the easiest of all the platforms, and you basically just hold down the Option and the Shift keys simultaneously, then tap minus.
Windows: Alt+0151. You HAVE TO use the number pad for the 0151, which is just so dumb. So, if you're on a compact keyboard or certain laptops, you just can't type an em-dash. Ugh! Oh yeah... you have to HOLD DOWN the alt-key while typing 0151, and when you LET GO of the alt-key is when the em-dash appears—just so strange!
Ubuntu Linux: Ctrl+Shift+U 2014+Enter. You FIRST press Ctrl+Shift+U simultaneously, then let go, and type 2014 and hit Enter. Okay, this is the most sensible, but perhaps the most disruptive to your flow. You can type almost any Unicode character in Ubuntu Linux by using the Ctrl+Shift+U prefix, then entering the Unicode number for the glyph that you want. There are other ways, but they require customizations, and I try to avoid that.
iOS iPhone, iPad, etc.: Simply press-and-hold the minus sign on the on-screen keyboard, and in a moment, extended character options will appear with the em-dash among them.
Android: Basically the same as iOS. Tap the ?123 key to see the minus key. Press and hold the minus key until the em-dash and other alternatives appear. Tap it.
vim (bonus): Press Ctrl+v u2014. That is, press Control and v simultaneously, then let go and type the letter u and the numbers 2014. There is no need to press Enter at the end.
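One more bonus for terminal dwellers: since U+2014 encodes to the UTF-8 byte sequence e2 80 94, you can emit an em-dash straight from the shell (a minimal sketch, assuming a UTF-8 terminal):

```shell
# Print an em-dash by its UTF-8 byte sequence
printf '\xe2\x80\x94\n'

# Verify the raw bytes with od
printf '\xe2\x80\x94' | od -An -tx1
```

Handy for pasting into scripts or confirming that your terminal and locale are actually speaking UTF-8.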
The answer varies from platform to platform. I discuss double-clicking bat files on this page. In broad strokes, you have to go through a VBScript (.vbs) file on Windows, using a WScript.Shell object with its run-window set to hidden to suppress output. On Mac OS X, you create an OS X "application bundle" that's wired up to run a bash script (.sh file) that it contains. And on Linux... well... it differs per your distribution, but the only cross-distribution reliable thing is to use a bash script .sh file, deal with the warnings that pop up when you double-click it, and make whatever selection is NOT "run in console".
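On the Linux side, a minimal sketch of such a double-clickable script looks like this (the file name and message are placeholders):

```shell
# Create a shell script; most Linux file managers will offer to run it
# when double-clicked once the executable bit is set.
cat > hello.sh <<'EOF'
#!/bin/bash
echo "Hello from a double-clicked script"
EOF
chmod +x hello.sh

# What the file manager effectively does when you choose "Run":
./hello.sh
```

The shebang line and the executable bit are the two things people forget; without them the file manager opens the file in a text editor instead.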
Everybody’s favorite loser, Commodore!
The Commodore 64x, or C64x, is a re-issue of the approximate design of the Commodore 64, using an extremely high quality modern keyboard that is a delight to type on. I am something of a keyboard snob, using nothing but IBM Model M's until this thing came along, offering the perfect combination of retro-coolness and typing delight—for which I have received much criticism. The insides accommodate almost any mini-ITX motherboard, which is a form-factor created for embedded systems like cable boxes, meaning this thing is easily and infinitely upgradable, offsetting what some think is "too expensive". I spent $595 on mine, which is exactly what the original C64 cost over 25 years ago.
Nothing evokes passions, both positive and negative, like Commodore computers. There are many layered reasons for this, but the most fundamental one is that you never forget your first love, and the C64 was the first experience with computers at a young age for millions of children of the 80's, representing a time of wonder and amazement in their lives. Commodore's horrid treatment of its users, and its subsequent demise, had much of the flavor of a betrayal and break-up. I missed the C64 days (not because of age - I'm 41), but was around for the later Amiga days, which had many parallels - a superior, creative and liberating technology allowed to die, and the hopes of a generation of alternative computing rebels along with it. This multifaceted story runs much deeper, in a story barely told.
Ah, what a sad story. The world could have been much different if the technically superior, cheaper, and inspiring computer had become the standard in the late 80's instead of the IBM PC. To make a long story short, Commodore was in the process of buying Amiga from a company founded by ex-Atari engineers, but Commodore's founder Jack Tramiel was pushed out by their uninspired chairman Irving Gould, and without Jack, Commodore was a very poor steward of the Amiga technology, fumbling both in promoting it and advancing it technically. I attribute it to the mind-numbing effect of the suburbs in which their world headquarters was based—versus Silicon Valley, where they should have been.
I was about 12 years old (1982) when the home computer revolution really started to kick in. I missed jumping on the Commodore bandwagon the first time around, getting the failed Coleco Adam, and always sorta having C64 envy of my software-pirating friends. This was worsened by the fact that some of these friends REALLY LOVED Commodore and followed it like some follow sports—because the C= world headquarters was somehow inexplicably located in our suburban Philly back-yards—instead of Silicon Valley, where it rightfully should have been. Then around '87, as it happened, I got a used Amiga 1000 that needed a $300 disk-drive repair, fell immediately in love with Electronic Arts' Deluxe Paint (as an artist), and became the head of the Philadelphia Amiga Users Group... then, in an amazing twist, was actually SOUGHT OUT by C= education executives Howard Diamond and John Harrison at a Valley Forge World Of Commodore show where I was helping man the PAUG booth. They recruited me to be a student rep on campus, and I went to work for them on the Drexel University co-operative education program—Drexel being the first college to require all incoming students to have computers (Macs). This Commodore co-op experience plugged me into a remarkable set of people that I built up into heroes in my mind, but then I lost my innocence trying to "save" Commodore as a shareholder, travelling TWICE to their Bahamas fortress to beat sense into Irving Gould, to no avail. After graduating, I went to work for a Commodore spin-off, which REALLY opened my eyes. Suffice it to say, I revised my understanding of what killed Commodore. Half was from kicking out their spit-and-vinegar, Auschwitz-surviving, man-of-steel founder Jack Tramiel... but the other half was definitely inane suburban mediocrity, and the effect it has on people unfortunate enough to end up living and working there. I almost fell victim to that trap myself, and have been correcting it ever since.
The Robots are coming!
I've owned them all: the Roombas (including Scooba), the Neato, and the Mint. First off, the Mint floor cleaner is an entirely different animal... uh, robot. Unlike the Roomba and Neato, the Mint doesn't vacuum. It takes Swiffer wet and dry cloths, and basically can't do anything more than a Swiffer, except go places you never thought. And that makes it worth it in its own right, but very different from the robot vacuums. I owned a second-generation Roomba Discovery with scheduler, which was my first robot-helper love, until being a robot maintenance repairman ended the honeymoon. That's not to say this won't happen with my new Neato XV-21, but I'm still on the honeymoon, and am loving how the Neato navigates around furniture and baby toys without bumping into or vacuuming them up. Neato's just smarter, and less frustrating to watch than the zigzagging insect-intelligence Roomba (which might have improved since my experience). I'll defer Scooba discussion for another post. Today, it's a very tough call between the latest-generation Roomba vs. Neato, but my robot army to help me clean around the house consists of the Neato XV-21 and the Mint Plus, and I'm pretty happy. Whichever you get, also take a Mint.
Mobile RSS Readers – Still a great source for your daily news fix
WordPress – lest we forget what made this site possible
Kau-Boy's AutoCompleter is the first one that worked well for me as expected, with no strange side effects. I had gone through quite a number of them to reach this conclusion. No special registration. Just install the plug-in, and you have full-article search autocomplete. Easy-peasy.
I've tried every one. I like Slick Social Buttons with the default skin and only Facebook, Twitter and Google+ turned on. People who actually use LinkedIn, Digg, StumbleUpon and other sites like that already know how to add articles there, and don't need the help at the expense of your site's aesthetics. Chances are, Facebook Likes and Twitter Tweets are going to become the new kingmaker signals. Google's trying desperately to "own" such a social signal of its own with +1. But unless your needs are very specific, these three kingmaker buttons are all you need, and Slick Social Buttons gives you them with the least fuss, that cool floating effect, and a good deal of ability to customize the look.