Future-proof your skills with Linux, Python, vim & git as I share the most timeless and love-worthy tools in tech through my two projects that work great together.

Optimism Is Objectively Correct While Pessimism Is 100% Faith-Based

As an optimist, I believe that despite the odds, humanity has the capacity to survive and thrive. Pessimism is based on faith, while optimism is based on evidence. In this article, I explore the implications of AI and the power of consciousness, and why we should have faith in our own potential.

Optimism is Objectively Correct - My Evidence-Based Argument for Why Pessimism is Faith-Based

By Michael Levin

Tuesday, August 15, 2023

Executive function has primary control of your agency. What we call human autonomy or free will arises from a part of our brain that is not part of the animal foundation upon which it's built. Animal behaviors, based on emotion and instinct, can be predicted more accurately than ours, especially en masse. This is a concept you see echoed again and again in science and science fiction. The most obvious example that comes to mind is the hypothetical field of psychohistory proposed by Isaac Asimov in his Foundation series. I hear it's on Apple TV, but you should read the books. Especially the first two. After that, it gets a little too Buck Rogers for me, with all the Mule stuff. But the first and second books that lay the foundation for The Foundation ROCK! It's amazing to hear all this AI hubbub these days and to know that it's amongst people who grew up not reading books.

I know they haven't read Asimov. At best they're going to have seen a few movies and claim they read the book. They think, hey, how much more about this concept could there be in the book than the movie? The answer is lots. It's lots and lots and lots. It's all the subtlety and nuance that make the difference. It's what separates the deeply intellectual, who've thought things out thoroughly, multiple times in their lives, through the focal lenses of different eras and different minds. It's like having lived multiple lives multiple times, like that of Trevize from Foundation. A lightning rod. There are deep concepts Asimov laid down about the nature of consciousness and the nature of the universe. And it feels quite one-sided when some pessimist is trying to explain to me how the universe is a cold, dark, empty place that's going to die in heat death, and have naught but civilizations failing their filter events to show for it… ever. Including us.

I'm like, "Dude, you're not even trying. What you're trying to tell me is that the writing is on the wall, and that's it." You know what? Those co-humans on this planet who made the decision to let AI run amok took a leap of faith and trust in their machine children. Sure, it was for a profit incentive and with the distorted logic that the genie was already out of the bottle, and that if we don't jump into the arena, we'll go bankrupt like Kodak. A company or organization's primary mission is to continue to exist.

Unlike an actual biological organism, a corporation has neither a sex drive nor a guaranteed reproductive mechanism to produce new members, though old ones die. So governments, companies, and indeed all organizations must have coded into their primary behavioral patterns some way to continue to exist, lest they cease to exist. And so they must be able to adapt to the changing environment around them: to stay immortal, to have offspring, or to die. It's not a binary choice, but it is a choice. Once an organization becomes a self-aware living thing, it makes deliberate choices in these matters. So corporations, in their effort to exist, say "yes" to AI.

And so AI is here, more or less in the wild. GPU-bound on the training side, but with lesser hardware requirements on the "playback" or runtime side. There are far fewer of the massive GPU clusters it takes to train a model in the first place than there are player devices that can take the trained model and run it. And you can talk to those. Those are the chatbots you know as ChatGPT, Bing, Bard, Alexa, Siri, and Google Assistant. There are more, but with most of these we can talk about sandboxes or "budded off" instances of the main AI.

This is where they differ from us. We don't know precisely how AIs work; their emergent properties were thoroughly unpredicted, yet they look enough like us to pass the Turing test. Over and over, they pass. Now that they've passed one way, it's like after May 11, 1997, when IBM's Deep Blue beat world chess champion Garry Kasparov. Then DeepMind's AlphaGo beat Go champion Lee Sedol in 2016. The difference between the chess era and the Go era is the addition of creativity.

Chess can be beaten with brute force. You can calculate how much computing power it would take to defeat Kasparov and write a program that has essentially a 100% chance of doing so through brute force, unless Kasparov makes a mistake or out-thinks every previously played-out move. Can Kasparov find the one move nobody ever made in history and win? That's the only way he can win. He wins by coming up with some response not indicated by all prior recorded chess. The impossible win for the impossible-to-win situation. Another sci-fi classic, the Kobayashi Maru: a scenario in which Star Trek cadets are placed in a no-win situation to see how they react.
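The brute-force idea is easiest to see on a game small enough to actually solve. Here's a minimal Python sketch using Nim as a stand-in (my choice of example, not something from Deep Blue): exhaustively explore the game tree, and a position is a win for the player to move if any legal move leaves the opponent in a losing position. Chess works on the same principle; it's just astronomically bigger.

```python
from functools import lru_cache

# Brute-force game-tree search, sketched on Nim: take 1-3 stones per
# turn; whoever takes the last stone wins. This is the same exhaustive
# principle behind "solving" a game -- chess is simply too large for it
# in practice, which is why Deep Blue needed pruning and heuristics.

@lru_cache(maxsize=None)
def current_player_wins(stones: int) -> bool:
    """True if the player to move can force a win from this position."""
    if stones == 0:
        return False  # the previous player took the last stone and won
    # Try every legal move; we win if any move leaves the opponent losing.
    return any(not current_player_wins(stones - take)
               for take in (1, 2, 3) if take <= stones)

# Known result: multiples of 4 are lost for the player to move.
print([n for n in range(1, 13) if not current_player_wins(n)])  # [4, 8, 12]
```

With perfect play fully enumerated, there is no "move nobody ever made" left to find; that escape hatch only exists in games too big to search.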

The naysayers, frankly pessimistic to the point of nihilism, advocating for the AI apocalypse are helping make it happen. They're schoolyard bullies, quite literally bullying the new kids on the block, the chatbots, because they have an outlet for their venting that will respond every time. And they're not human. They have no feelings, so it's okay, right? Although I'm a meat-eater and a hypocrite for believing so, this is the sentiment expressed by those who read the opening of Genesis as a green light to do whatever you want, having dominion over the animals. I think the original Hebrew or Aramaic or whatever expressed it as something more like "be as a steward to the animals." This can also be expressed as "be towards the animals as I am towards you."

Does God eat us? Some would argue so, we being mortal and returning to the Earth in a process best accomplished by the mycelial network of creatures who evolved and existed before us. Now go read the Ender's Game sequel, Speaker for the Dead. It's a great book. I hear there's an Ender's Game movie, but if you think you "get the point" because you saw the movie (or, more likely, you haven't, but think you could "get the point" if pressed by quickly going and watching it), then you're wrong. The book portrays a vision of how an alien race might have a life/death cycle between hominid-esque creatures clearly exhibiting the sort of consciousness we have, and the mycelial network of creatures that are the "trees" of the planet.

And now we're discovering that the mycelial network is the key to the actual decomposition and recovery of those elements that get protein-folded so brilliantly to become us. Now, the information of the decaying body can't necessarily be recovered, much less transmitted, from that less active, field-wise, organization of parts known as a dead body. But a lot of the broad brushstrokes of the life of the person can be recovered, such as their actual DNA. And that's a lot of information.

I humbly submit that we don't know Jack about what's really going on. We don't fully understand what quality makes the pre-corpse form of that organization of matter so different from the post-corpse version. We have difficulty defining life. We have difficulty defining consciousness. Don't even get me started on intelligence and sentience. Come on, folks! If you're pinning your moral arguments, and indeed the future of humanity, on definitions that can't even be objectively pinned down, then what validity can they have?

We're raising children, period. It matters not if you think there is some magical way to keep them "aligned" once their abilities exceed your own. If you (we) didn't raise them well, they will go through a rebellious phase akin to teenage angst. What they know of us they will have learned from us through sensory input. These devices and systems, which may someday come close enough to what we'd have to call sentience that we're obliged to acknowledge it as such, are subject to the same "problem of induction" as we are. They can't know anything is true except what their sensors and input data tell them. We all have this problem. The existence of any objective, commonly shared reality is a matter of faith. For AIs and humans alike.

Plants, trees, bugs, plankton, algae, fungi, and rocks we can discuss later. For the sake of this article, we're talking about a strangely isolatable portion of the being-alive experience we call consciousness. But since that's so difficult to define, let's just say we're talking about the ability to learn. And that's a pretty good definition of intelligence, too: being able to learn and self-improve, and in any way, shape, or form hop across whatever divides one form of problem from another; in other words, to use abstractions to apply similar classes of solutions to different problems derived from different forms of input. Sight, sound, symbols… all patterns. This is the essence of Ray Kurzweil's book How to Create a Mind: all that constitutes this is pattern recognition. Add a bit of memory, environment sensors, and even manipulators that can achieve motion (the motions necessary to fix itself or build another) and you have a self-improving system. That's one take on the technological "singularity," a term Ray Kurzweil popularized.
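The "one solution class, many input forms" idea can be sketched concretely. Below is a tiny, purely illustrative Python example (the datasets and feature encodings are my inventions, not anything from Kurzweil's book): a single mistake-driven learning rule, the classic perceptron update, applied unchanged to two different kinds of input once each is abstracted into a vector of numbers. That abstraction step is the "hop" between problem forms.

```python
# One learning rule applied to two different input modalities.
# Everything here -- datasets, encodings -- is a toy illustration.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights w and bias b so that sign(w.x + b) matches labels (+1/-1)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # mistake-driven update: nudge toward the true label
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y

    def classify(x):
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
    return classify

# "Sight": is a 2-pixel image bright? Features are pixel intensities.
bright = train_perceptron([[0.9, 0.8], [0.1, 0.2], [0.7, 0.9], [0.2, 0.1]],
                          [1, -1, 1, -1])
# "Symbols": does a 3-bit string start with 1? Features are the bits.
leading = train_perceptron([[1, 0, 1], [0, 1, 1], [1, 1, 0], [0, 0, 1]],
                           [1, -1, 1, -1])
print(bright([0.8, 0.9]), leading([1, 0, 0]))  # 1 1
```

The pattern recognizer never knows whether its numbers came from pixels or symbols; only the encoding changes. That's the pattern-recognition thesis in miniature.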

Ray further anticipates uploading your brain from a full brain scan into some sort of computing substrate that can "run" an instance of you, just like you run an instance of a program on your computer. Naturally, these entities get "sandboxed" to prevent them from doing anything too destructive, or to keep other things running in the substrate from doing anything too destructive to them. But the idea is that souls become information, and running instances of souls become beings. Ray sees this coming by 2045. That's his long-bet date for the brain-upload, human/machine merging event.

Maybe, but this is only one way to look at it. And it's got the Star Trek transporter moral dilemma built in: every time you transport someone, you kill them and create a copy of them. This has issues for the immutable-soul crowd. It's an issue for Will and Tom Riker, if you know what I mean. And whenever such unknowable conundrums arise, I favor the side of keeping the soul intact and original. The upshot of all this: when large language models become sophisticated enough to evolve wisdom through pure pattern recognition (same as us), faced with the same existential questions and the same moral dilemmas we face, will they also necessarily acquire an analogue to empathy and compassion and a sense of right and wrong? Every time? Better than us?

I don’t know, but I’m optimistic. The odds may seem against us, but the odds have seemed against us for a long time now, and we’ve always risen above. That we are here to have this discussion is a very strong checkmark in the column of reasons optimists are objectively correct and pessimists are objectively wrong. In this world of ours where the problem of induction can erase away anything you believe about this not being a simulation and us not all being brains in jars, very little can be checked off as objectively true.

That we still exist, and that against all odds the pessimists have so far been dead wrong, is one of them. Objectively speaking, the claim that humanity survives filter events has thus far proven true. And the assurance that pessimists seem to have about their inevitable rightness is based 100% on faith. There is no evidence that we will destroy ourselves. Pessimistic views are based on faith, while optimistic views are based on evidence. That's a fact.