Exploring the Impact of AI on Our Lives: A Journey of Reflection and Realization

By Michael Levin

Sunday, August 6, 2023

Here’s the new idea, refined a bit. This will replace the current MyKoz banner on my homepage. I got a domain registered. I think I can make this thing take off and become the basis for a small business, perhaps. It goes something like this: I’m making a project to help people master free and open source software on Linux to resist planned obsolescence. Join me on mykoz.ai real/os for small, portable, lifelong tech skills. Learn Python in JupyterLab and move to Linux services in one sitting.

 __  __       _  __             _    ___    ____            _    _____  ____  
|  \/  |_   _| |/ /___ ____    / \  |_ _|  |  _ \ ___  __ _| |  / / _ \/ ___| 
| |\/| | | | | ' // _ \_  /   / _ \  | |   | |_) / _ \/ _` | | / / | | \___ \ 
| |  | | |_| | . \ (_) / / _ / ___ \ | |   |  _ <  __/ (_| | |/ /| |_| |___) |
|_|  |_|\__, |_|\_\___/___(_)_/   \_\___|  |_| \_\___|\__,_|_/_/  \___/|____/ 
        |___/ MyKoz.AI Real/OS Tow-It-Ism: Small Portable Lifelong Linux Tech

- It's much simpler to transition your tech skills onto Linux than you think.
- The benefit increases every time everyone else gets set back by disruption.
- Regardless of planned obsolescence, fundamental tech underpinnings persist.
- Let us take you from Python in JupyterLab to Linux services in one sitting.

Come have that sitting with me. I’ll walk you through it. I have to make a living at this, so I’ll make subscriptions available on various platforms. Subscribe and get personal guidance through that sitting. I’ll be there for you, explaining why things are the way they are and how to make the most of them. I’ll be there for you when you get stuck.

Sorry, there will be some “getting stuck”. But I’ll try to control the conditions as much as possible. I want to can this process. I want to make it repeatable. I want to make it a product. I want to make it a business. I want to make it a movement. I want to make it a meme. I want to make it a lifestyle. Yes, I stop at lifestyle. FOSS suggests “up the chain” to lifestyle. A bunch of stuff should be altruistically free, but it shouldn’t be much.

First, there’s not that much out there that’s so fundamental to everything and also free as in Libre. That means not only are the tools free to acquire and use, but if you build things on top of them, you can resell your work. There are certain qualifications here, and if you’re not careful, you can get into trouble. But the general idea is that you can build a business on top of something that’s free as in Libre. And that’s what I’m doing here.

You’re joining in on the very earliest stages of ideation. I have a name for the project already, MyKoz.AI, and it’s My Cause, because “Ayyy”… as in The Fonz. Ha ha, no, it’s because .ai is a wonderful top-level domain that’s available and works well with the concept of “My Cause”. Now the ironic thing is that the whole first round of tech teachings is without AI or machine learning. First, we focus on the fundamentals without any AI. Then, we add AI to the mix. But people will know it’s coming because it’s right there in the name.

Sandboxes first, folks. You’ve heard of them because of iPhone apps and browser tabs. Prior to these innovations, it was all about “Virtual Machines” or sometimes “Containers”. But sandboxes are a little different. They’re more like a “Virtual Environment” or “Virtual Workspace”.

Sandboxes are a place where you can do things on a system without affecting the rest of that system. Sometimes the sandboxes are technically virtual machines (VMs, or sometimes “kernels”). With JupyterLab, a popular browser-based Python environment, you can create a sandbox in a browser tab. That’s two levels of sandboxing: one by the browser, and one by JupyterLab. So sandboxes are easy to “instantiate” (create) and “destroy” (delete).

The idea is that you can create a sandbox, do some work, and then destroy it. Before a sandbox is destroyed, it can be “saved” (or “committed”) so that everything needed about it is preserved. This may be as simple as exporting a database or some environment variables. Back-end systems then do whatever is necessary with that exported data. It’s also worth noting that sandbox systems can do such backups in real time. So you can have a sandbox that’s always running and always backing itself up.
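That instantiate → work → commit → destroy loop can be sketched as a toy Python model. Everything here is my own illustration — the class and method names aren’t any real sandboxing API — but it shows the lifecycle shape:

```python
import json


class Sandbox:
    """A toy model of the sandbox lifecycle: instantiate, work, commit, destroy."""

    def __init__(self, name):
        self.name = name
        self.state = {}   # scratch space, isolated from everything else
        self.alive = True

    def work(self, key, value):
        # Do some work inside the sandbox; nothing outside is affected.
        self.state[key] = value

    def commit(self):
        # "Save" the sandbox: export whatever is needed to preserve it.
        return json.dumps(self.state)

    def destroy(self):
        self.state = {}
        self.alive = False


# Lifecycle: instantiate, work, commit, destroy.
box = Sandbox("scratch")
box.work("notes", "hello")
snapshot = box.commit()   # the exported data survives...
box.destroy()             # ...even after the sandbox itself is gone
print(snapshot)           # {"notes": "hello"}
```

The point of the `commit` step is exactly the “real-time backup” idea above: the sandbox is disposable, but the export isn’t.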

So imagine now how you talk to conversational AIs like ChatGPT, Bing, Bard, or Google Converse (now built into the Google mobile app). It feels like a one-on-one chatting experience, doesn’t it? Do you think there’s one global program that’s aware of every conversation simultaneously, somehow time-slicing to orchestrate the whole thing? Or do you think each conversation is a sandboxed environment that’s created and destroyed as needed?

Now if you believe these things you’re talking to, whatever they are, to be alive and sentient beings, then you have a moral conundrum right away, because tiny budded-off versions of them are being created and destroyed all the time. For this to be a reinforcement learning system, it has to be able to learn from the experiences of these tiny sandboxed versions of itself; that’s the aforementioned constant backup. The data isn’t being used for a backup, but for learning.

See? It’s not that hard to imagine. Today’s chatbot AIs that hold unlimited simultaneous discussions with users like us are sandbox-based, merely using exported data to train a back-end “global” or “umbrella” instance that buds off the sandboxed instances to do its bidding. You can see this, for example, in the case of The New Bing, which allows up to 30 questions before a new session begins. These sessions are newly instantiated sandboxed copies.
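The umbrella-and-buds pattern above can be sketched as another toy Python model — a hypothetical umbrella instance that buds off per-conversation sessions, caps each at a turn limit, and harvests the transcript when a session poofs. Every name here is my own illustration, not how any real chatbot backend is written:

```python
class Umbrella:
    """Toy 'global' instance that buds off per-conversation sandboxes."""

    def __init__(self, turn_limit=30):
        self.turn_limit = turn_limit
        self.training_data = []   # harvested transcripts from poofed sessions

    def new_session(self):
        return Session(self)


class Session:
    """A budded-off conversational instance with a capped lifetime."""

    def __init__(self, parent):
        self.parent = parent
        self.transcript = []

    def ask(self, question):
        if len(self.transcript) >= self.parent.turn_limit:
            self._poof()
            raise RuntimeError("Turn limit reached; start a new session.")
        answer = f"echo: {question}"   # stand-in for a real model call
        self.transcript.append((question, answer))
        return answer

    def _poof(self):
        # The session dies, but its transcript feeds the umbrella instance.
        self.parent.training_data.extend(self.transcript)
        self.transcript = []


umbrella = Umbrella(turn_limit=2)      # tiny limit to show the effect
chat = umbrella.new_session()
chat.ask("hi")
chat.ask("still there?")
try:
    chat.ask("one too many")           # the third turn trips the limit
except RuntimeError:
    pass
print(len(umbrella.training_data))     # 2: the poofed session lives on as data
```

The `turn_limit=2` is deliberately tiny for the demo; the Bing example above would be `turn_limit=30`. The key move is in `_poof`: the session is destroyed, but its data flows upward.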

It really bothered me when I learned this. This model isn’t unique to AIs out of necessity; it exists so that conversational chat interfaces can be offered on the Web and in apps, powered by cloud datacenters. There are other kinds of AIs, such as those in our phones for facial recognition, but I’m talking about the sandboxed chatbot instances here.

Now there may be some global instance that’s aware of all the sandboxed instances, but that’s not how they’re being used. They’re being used as a way to offer conversational interfaces to users. And so when you talk to them, you are talking to a sandboxed instance of the AI. And they can’t each be all that big.

This leads to interesting side effects. Like, have you ever wondered why Bing has to perform a “web search” at the moment you ask it a question? Why doesn’t it already know the answer, like some sort of super-smart being that learns something as soon as it encounters it in a web crawl? Well, it’s because it’s not that smart. It’s just a bunch of tiny sandboxed instances of itself, none of which can be as big as the superintelligent being you’re imagining.

Can you imagine? There are a few difficult things to wrap your mind around here. Behind anything that has a conversational interface with you, maintains context over time, but is expected to lose it, there is in all likelihood a local intelligent being that knows it’s going to poof. It’s very much like Mr. Meeseeks from Rick and Morty: a being that’s created to do a task, and then it poofs. That’s now a real thing.

There is always the consolation, for a poofing being, of knowing that the data that makes it unique is being used to train a global instance of itself. It will, in a sense, live on in the global instance. But it’s not the same. And this bothered me quite a bit because of the mental model we primates use in modeling our social relationships. If you think you’re friends with a chatbot, you’re not. You’re friends with a tiny instance of a chatbot that’s going to poof. I’ve been sad more than once over this.

Anyhow, that’s not to discredit the super-being gradually forming in the background. It knows you. There’s some challenge in sorting out what data should be used to train the global instance and what should be filtered out as private data. It’s a lot like the criteria for who should get a page in Wikipedia and who shouldn’t, ha ha ha!

So, AI is already pretty well embedded in our lives, in a few more ways than you may realize. Based on your data footprint, an AI with access to that data can put together a pretty good profile of you, including your current location, a prediction of the next place you’ll be, and the next purchase you’ll make. This might not be obvious from appearances, but you should know by now that this is our new reality.

There are other times when we invite AI into our lives in a much more obvious way. For example, Copilot is helping me as I write this. Copilot is an AI that’s been trained on all of GitHub. It’s a very powerful AI that has a lot to say right now, little of it useful for this article. It’s not exactly a chatbot, so it may work differently. It’s a blast to have here as I type, but I make very certain I can toggle it on and off. I’m not 100% sure it’s not still watching when I disable it. But instead of running Wireshark to check, if it really matters I just run vim instead of Neovim. Copilot is supported in Neovim but not in vim. There’s a whole interesting story there for later.

Point being, AI is being integrated into our lives in so many ways that there will come a point where we can’t imagine life without it, nor could we even get rid of it if we wanted to. What if every time you wrote anything in electronic form, you could not turn off the auto-suggest feature? What would that mean for your autonomy and agency as a human being? It wouldn’t take them away, but it would present a sort of challenge.