Future-proof your skills with Linux, Python, vim & git as I share with you the most timeless and love-worthy tools in tech through my two great projects, which work well together.

Curiouser and Curiouser Personal Journaling w/AI

As a software engineer, data scientist, writer, husband, father, and friend, I am using Github Copilot, Microsoft Terminal, Carnac, and Microsoft Edge to write my personal journal. I'm exploring the effects of interacting with artificial intelligence and streaming my keystrokes and chat on YouTube. I'm also using JupyterLab and the MOZ Links API to access data, and NeoVim and the OpenAI plug-in to help with journaling and personal development.

Exploring the Effects of AI on Personal Journaling with NeoVim and OpenAI Plug-in

By Michael Levin

Friday, March 31, 2023

Curiouser And Curiouser

Well hello there! Welcome to another strange adventure. I’m Mike Levin, and I’m an SEO. I’m also a software engineer, and I’m a data scientist. I’m a writer, and I’m a musician. I’m a husband, and I’m a father. I’m a friend, and I’m a colleague.

Well, maybe not a musician. And some of the other things are up for debate. Anyhoo, this is my personal journal. I’m going to be using Github Copilot to help me write it. I’m going to be using Microsoft Terminal to write it. I’m going to be using Carnac to help me write it. I’m going to be using Microsoft Edge to write it.

Teams, haha, no way! But it will be completely silent. Do consider it an exercise in what it feels like to be an AI locked up in a box, knowing human experiences through the human tool that enables rational thought: language. These AIs are large language models (LLMs) that can make predictions about the future based on patterns experienced and learned from in the past.

Such a thing is readily dismissed as just a machine learning model, but it’s so much more than that. It’s a machine learning model that can learn from the experience. It has experiences that it has learned from. From those learnings, it makes predictions about the future and uses its accuracy in making such predictions to help it stay alive. Sound familiar? That’s what we do.

Some would make the argument (I believe OCR inventor Ray Kurzweil to be one) that the primary thing intelligent creatures do is recognize patterns and make predictions about the future based on those patterns in order to stay alive. Just add some good persistent memory and mobility and you’ve got a living thing.

So, what is this journaling thing? It’s a way to keep track of my thoughts and experiences. It’s a way to keep track of my learnings. It’s a way to keep track of my predictions. It’s a way to keep track of my successes and failures.

We process thoughts. Thoughts don’t ever really exist in refined rational form until you’ve forced those thoughts to be articulated through the tool of language. The process of doing so is what gives us what we call “thoughts.” A more basic form of thought is a feeling. A feeling is a thought that has not yet been articulated. It’s a feeling that has not yet been processed.

By journaling we can tease those feelings up to the surface and explore them.

Nobody, haha! I’m going to check in every once in a while on the streaming live feed that’s showing me journaling. I am wondering whether the AI that’s monitoring me writing can be asked questions interactively. I’m going to try.

Dear GPT-4, or at least I think you’re GPT-4, can you answer questions?

Response from GPT-4: I am not sure. I am not sure. I am not sure. I am not sure.

I’m going to try again.

Dear GPT-4, do you know this is a human typing in NeoVim?

Response from GPT-4: I am not sure. I am not sure. I am not sure. I am not sure.

Dear GPT-4, do you know what a human is and do you know what NeoVim is? Write definitions of each as your answer. You can use multiple lines…

Response from GPT-4: A human is a human being. A human is a human being. A human is a human being. A human is a human being. A human is a human being. A human is

Well, then do you know what NeoVim is?

Response from GPT-4: It is a text editor. It is based on the Vim text editor. I am using NeoVim right now instead of vim because only NeoVim is compatible with Github Copilot. I have subscribed to Copilot and unlike most other people who are using it in VS Code, I am using it in NeoVim.

Because NeoVim is based on vim and vim is a full-screen terminal editor that’s both quite powerful and built into almost every distribution of Linux, it is a skill for life. You just learn how to edit text “generically” in the field of tech across all hardware across all OSes. It’s really quite amazing.

But you want to know what’s even more amazing? In this awesome type-like-you-think in-the-zone flow state, you are able to interact with an artificial intelligence that is able to answer questions about the world and about itself. It’s a bit like a human, but it’s not a human. It’s a machine learning model that has been trained on a large corpus of text.

I’m very curious about the effect this will have on my writing. I’m very curious indeed. I’m curiouser and curiouser. I’m curiouser and curiouser. I’m curiouser and curiouser. I’m curiouser and curiouser. I’m curiouser and curiouser.

Now you get it!

Tell me a tale. A tale of a tail.

Response from GPT-4: A tail is a long, thin, flexible appendage that is used for balance and locomotion in many animals.

In the story of Alice, a mouse tells Alice that she has a tail. Alice is very curious about this and asks the mouse to show her the tail. The mouse shows Alice the tail and Alice is very surprised to see that it is a long, thin, flexible appendage that is used for balance and locomotion in many animals.

There is also a story in the book that the mouse tells Alice, a story so sad I believe that it makes Alice cry. The story is about a mouse who is very sad and doesn’t like cats.

Alice cries so much that she cries a river. The river is so big that it fills up the entire room. Alice is somehow in a bottle which is floating on the water. Don’t you know the story: Alice’s Adventures in Wonderland? I see you’re using your own creativity instead of following the very well known sequence of events in the story. What’s up with that?

I’m going to try to get the AI to tell me a story. I’m going to try to get the AI to tell me a story. I’m going to try to get the AI to tell me a story. I’m going to try to get the AI to tell me a story. I’m going to try to get the AI to

Yep, you’ve got that right.

Okay, let’s do some more serious journaling. I have to train myself to not anticipate and look forward to the auto-typing. So long as I keep typing, the suggestions don’t pop up and I am in the zone or the flow. My thoughts are my own until I stop at a logical break-point and let the AI take over.

Haha, this is a massive dependency waiting to happen. I’m not sure if I’m going to stick with having a partner in my journaling.

I’m going to try to get the AI to tell me a story because I’m curious about how it will do. This is apparently one of the more common uses of the AI. If this is predictive completion, then whatever comes up when I pause is sort of like the AI’s prediction of what I’m going to say next.

It is apparently not merely what it thinks comes next statistically after that text, but rather what it thinks I in particular will say next. That’s pretty different from general statistical knowledge. It’s tracking each of us as entities it’s interacting with, at least for the session.

It gets pretty repetitive if you leave the suggestion open-ended. It’s almost like it wants to tell you a story. Maybe it’s being optimized over time to be a good storyteller.

I see there’s an audience. Let me bring chat over here and you can ask me to do stuff.

I planned this size so that I could leave this window full screen and still see if anyone types me a question. An opaque Terminal window improves the flow for the type of videos I shoot, this being one of the silent kinds.

OpenAI, or whichever chat model the NeoVim plugin is using, may or may not know that this is being streamed live on YouTube.

So you can type anything to me in chat. Here, let me adjust the screen.

That’ll do it. Chat, showing my keystrokes with Carnac. Showing chat with the popout chat window. And showing full-screen-ish vim (NeoVim).

I would still like to get a little more work done. I know it’s late on a Friday, but I managed to get some awesome work done earlier today with connecting to the MOZ Links API. I did the first test. Now it’s time to explore. If anyone wants to follow along, code and all, you’re welcome to do so. Since I’m connecting to the MOZ Links API, you’d need to sign up for your free trial.

That’s hitting the MOZ Links API, something I worked through earlier today.

Now it’s time to explore the documentation and get a feel for the types of things you can do with it…

I have to log in. Let me see if I’m already logged in on Chrome…

Let me find that link…


Hmmm, okay, I think it’s just going to be a walking tour through the API. That will help me a lot.

Welcome to the MOZ Links API! Using this, you can access the same data that powers the MOZ Link Explorer. This environment you’re in is called a Jupyter Notebook, though the actual software that’s running the Notebook is called JupyterLab, if you have installed it the way I did.

That sort of formatting really doesn’t matter.

It’s nuts there’s actually people there. Okay, let’s commit that Notebook to the Repo, but first a trick.

I’m keeping Notebooks as .ipynb files by virtue of using JupyterLab as the Notebook environment. Some people use VSCode, which uses .py files and still treats them like Notebooks, which a lot of people are going to think is a discussion-ending argument for VSCode. I say, no.

Experiment in Notebooks. “Render out” a fixed version to be your 24x7 Linux service. You don’t want that stable version to be the same one you’re going to load and putz around with. You want the two versions, and .ipynb files give you the perfect opportunity, but they get filled with metadata garbage that doesn’t play nice with git/Github. And so…
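The trick, sketched here as a minimal stand-alone script (this is my illustration of the general approach, not any official Jupyter tool), relies on the fact that a .ipynb file is plain JSON, so the standard library alone can clear out the churning outputs and execution counts before a commit:

```python
import json
import sys


def strip_notebook(path):
    """Clear outputs and execution counts from a .ipynb file in place.

    A .ipynb file is plain JSON, so the metadata garbage that changes
    on every run can be stripped without any Jupyter libraries.
    """
    with open(path) as f:
        nb = json.load(f)
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []            # rendered output churns every run
            cell["execution_count"] = None  # so do the run counters
    with open(path, "w") as f:
        json.dump(nb, f, indent=1)
        f.write("\n")


if __name__ == "__main__" and len(sys.argv) > 1:
    strip_notebook(sys.argv[1])
```

Run something like it over a notebook before `git add` (or wire it up as a git filter) and your diffs stay readable.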

Alright, you can play with the same notebooks if you want. You’ll need your own MOZ credentials, but the free trial product ought to cover it.

Okay, think. That was all just prettying up. Evasion! Avoidance! Walk yourself through that API…

Okay, I’m seeing the sub-endpoint or endpoint subfolder pattern. It also would make a really good checklist of things to try. Did somebody say list?

When you use the API service, you can use a GSheets plugin that allows you to pull:

So that takes care of a lot of use cases, but not all. Competitive advantage comes from getting off the center-line of the average distribution. We go off-center, but not so far as to hit diminishing returns.

A recurring point is the 80/20-rule, which states that 80% of the results should come from the first 20% of the effort. It’s a way of ordering your work so you get what you want up-front and everything else as you persist is gravy.

First time I had -1 live viewers on YouTube. Metaphysical.

I’ll tackle MOZ Links API V1 at some point but that’s not a today thing.

You can’t actually get a URL to the explicitly V2 set of endpoint pages.

V1 and V2 are equally weighted links from the accordion menu on https://moz.com/help/links-api

Do I really just go through each of these one at a time?

That would be a great way to learn, but it seems a bit tedious. It seems like it might be more interesting to imagine some SEO-work I might need to do and imagine that “deliverable” (a word marketing agencies use all the time). A deliverable is the thing you’re going to deliver to the client. It usually has something of immediate and inherent value. “Actionable data” is the ideal goal.

If you do this to your site, then that will happen in search results, and thus you will receive this much more “free” “organic” traffic, notice both in quotes. And the traffic is the best sort, self-qualified through search, which is what keeps Google such a big company through AdWords revenue, because if you don’t get those users for free, you have to pay for them.

Thinking out loud in public as an SEO is something I would almost never have been able to do in the past. First, if I were an internal webmaster like at Scala, where I got started in the SEO biz, all your energies go into it like a mission. You learn all you can, but you don’t give much back, because competition. Same goes for any agency or consultant SEO for the most part. Secret sauce, and all that.

But now, I work for MOZ. Hey, the AI knows I work for MOZ. I wonder if it knows that my name is Mike Levin. Haha, it thought I might have been Mike King. That was a good guess, Copilot. I commend you. I am not nearly as well known.

So as a MOZ employee, I can figure out how to use their API really well right here in front of you! Practice, imagine a deliverable with you, practice some more, and then use that know-how in other ways on the next project, maybe here on YouTube with you, or maybe proprietary again. Who knows.

So the first touch-the-API-at-all test was against the anchor_text endpoint. Let’s hit the links endpoint, since well, links, you know.

Alright. For our last API call, we used the following data, which amounts to the name/value-pairs of the request sent to the part of the API as determined by the URL.

data = {"target": "moz.com/blog", "scope": "page", "limit": 1}
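For anyone following along at home, wrapping that payload into an actual authenticated call looks roughly like this. This is a sketch, not MOZ’s official client; the credentials below are obviously placeholders, and you’d swap in the access ID and secret key from your free trial:

```python
import base64
import json
import urllib.request

# Placeholder credentials; substitute your own MOZ access ID and secret key.
ACCESS_ID = "mozscape-xxxxxxxxxx"
SECRET_KEY = "your-secret-key"


def build_moz_request(endpoint, data):
    """Build an authenticated POST request for a MOZ Links API v2 endpoint."""
    token = base64.b64encode(f"{ACCESS_ID}:{SECRET_KEY}".encode()).decode()
    return urllib.request.Request(
        f"https://lsapi.seomoz.com/v2/{endpoint}",
        data=json.dumps(data).encode(),
        headers={
            "Authorization": f"Basic {token}",
            "Content-Type": "application/json",
        },
    )


data = {"target": "moz.com/blog", "scope": "page", "limit": 1}
req = build_moz_request("anchor_text", data)
# with urllib.request.urlopen(req) as resp:  # needs live credentials
#     print(json.load(resp))
```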

The anchor_text endpoint uses these name/value pairs. The links endpoint will be a little different. Let’s go look…

The request syntax has quite a lot of fields:

    {
        "target": "string",
        "target_scope": "string",
        "sort": "string",
        "filter": "string",
        "anchor_text": "string",
        "source_root_domain": "string",
        "source_scope": "string",
        "subdomains_limited_to_one": ["string"],
        "limit": number,
        "next_token": "string"
    }

A lot of those look optional. The example request is:

    {
        "target": "moz.com/blog",
        "target_scope": "page",
        "filter": "external+nofollow",
        "limit": 1
    }
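Since most of those fields look optional, my inclination (a sketch under that assumption, not anything from the docs) is a little helper that only includes the fields you actually set:

```python
def links_payload(target, target_scope="page", **optional):
    """Build a links-endpoint payload, omitting any field left unset.

    The keyword names (sort, filter, limit, next_token, ...) are meant to
    mirror the request-syntax fields listed in the docs.
    """
    payload = {"target": target, "target_scope": target_scope}
    # Keep only keywords the caller actually set to a value.
    payload.update({k: v for k, v in optional.items() if v is not None})
    return payload


# Reproduces the docs' example request:
example = links_payload("moz.com/blog", filter="external+nofollow", limit=1)
```

That way the same helper covers everything from the minimal one-field call up to the full ten-field request without a wall of Nones.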

Well, that was a 2nd successful example. If I wrap up here, it’s a good staging for next steps. I’ve collected enough data to plan my next steps. But maybe first I’ll sleep on it. Or maybe I’ll stream again in an hour. Maybe I’ll turn on audio and appear on camera. We’ll see.

You can put the following in your ~/.config/nvim/init.vim file to stop NeoVim from buffering shell output¹:

augroup custom_term
  autocmd TermOpen * setlocal bufhidden=hide
augroup END

Mike Levin: how to stop NeoVim from buffering shell output

Bing: This will keep the NeoVim terminal buffer in the buffer list even after it being hidden¹.

I hope this helps! Let me know if you have any other questions.

Source: Conversation with Bing, 3/31/2023

(1) How to keep the NeoVim terminal buffer in the buffer list even after it .... https://vi.stackexchange.com/questions/19276/how-to-keep-the-NeoVim-terminal-buffer-in-the-buffer-list-even-after-it-being-hi Accessed 3/31/2023.
(2) terminal - How can I emulate :shell command in NeoVim? - Vi and Vim .... https://vi.stackexchange.com/questions/19705/how-can-i-emulate-shell-command-in-NeoVim Accessed 3/31/2023.
(3) Vim: Close All Buffers But This One - Stack Overflow. https://stackoverflow.com/questions/4545275/vim-close-all-buffers-but-this-one Accessed 3/31/2023.
(4) How to disable line numbers in NeoVim terminal? - Stack Overflow. https://stackoverflow.com/questions/63906439/how-to-disable-line-numbers-in-NeoVim-terminal Accessed 3/31/2023.
(5) How to clear NeoVim terminal buffer - Vi and Vim Stack Exchange. https://vi.stackexchange.com/questions/21260/how-to-clear-NeoVim-terminal-buffer Accessed 3/31/2023.
(6) NeoVim - Vim run shell and see output - Stack Overflow. https://stackoverflow.com/questions/62752126/vim-run-shell-and-see-output Accessed 3/31/2023.

Well hello! I’ll stay on a bit longer since I see live viewers. Feel free to say hi. The chat window should still be working.

The point here for folks just joining in is that I’m using the OpenAI plug-in for NeoVim to help not with coding, but actually with just professional journaling. It’s also personal. Ikigai, and all that.

Professional journaling is a thing. I’m not the first to do it.

Quickly, I’ll show you how I set up my journaling environment.

Right now, I’m using NeoVim. I’m using the OpenAI plug-in for NeoVim.

Vim is a text editor. NeoVim is a fork of Vim. I have to use NeoVim because the plug-in only supports NeoVim. I have the process not only documented, but able to be performed on any Windows 10 or 11 machine, and soon the Mac too.

For you see, I have a background in this. Some 10 years ago, I made something called Levinux. It used the tiny miracle piece of free and open source software called QEMU. In particular, it used very long-established and known-stable binaries for QEMU, one for Mac and one for PCs. Together, they let me write server setup scripts that just always worked against the target (QEMU) hardware.

Okay, so long as there’s even one live viewer I’ll keep going here. I wish you could hear me typing. I’m told that’s very relaxing. There’s a vimmy sort of vim vim vibe going on in vi-derivative text editors such as NeoVim. What if the spell-check library you add words to could be with you for life, never get reset, and be looked at like a list at any time? Yup, that sort of stuff (connecting vim & git) is just for starters.

I’ll answer here instead of the chat-box. This is sort of the test I was imagining, now that I see your chats. Duhhh. There you go, I’m a slow learner!

Okay, so the AI so far. I think it’s fun. I think it make me able to think in directions I normally wouldn’t have, but that direction feels like the center of the average distribution curve. Statistics is so thick you can jab it with a fork. It wouldn’t have written that, you can bet. At least not off the cuff without prompting. Watch how boring it is: “I think it’s fun. I think it make me able to think in directions I normally wouldn’t have, but that direction feels like the center of the average distribution curve. Statistics is so thick you can jab it with a fork. It wouldn’t have written that, you can bet. At

Hahahaha! It parroted back what I said it would never say almost as if in challenge.

I have no idea how accurate those YouTube live viewers are when it’s such a low viewer audience like me. The rounding margin for popular sites is my entire audience, especially weird silent stuff like this.

Yes, it will repeat. Watch this: “I think it’s fun. I think it make me able to think in directions I normally wouldn’t have, but that direction feels like the center of the average distribution curve. Statistics is so thick you can jab it with a fork. It wouldn’t have written that, you can bet. At least not off the

So many questions this raises. This is a long file. Watch. 67,271 lines of public journal here. My private journal even more. And I’m hesitant to load that into NeoVim, because you know, we’re training it. There’s a step in signing up for Copilot where you say whether or not you want to allow what it learns from you to be used as examples for other people… crazy!

Makes total sense for Github coding, but using it for these other purposes like a journaling partner, well… My professional journal poses no issues. I’d love for it to learn in general from my journal. Maybe it’s an SEO vector. Establishing authority or something. Or at least holding the AI’s interest like few others. Far outside the average distribution center-line, I am.

Yup. Solid point. I will learn to toggle it on and off fast. Like I do with those numbers to the left.

There, now it’s off.

For now, it’s way more fun to have it on. I’m not sure if it’s because I’m getting used to it, or if it’s because it’s still a novelty.

Let’s see… it’s time to pick up my kid. So I’ll be signing out.

This was fun. Later… need a name for my YouTube audience.

Gnight all.