Linux, Python, vim, git & nix LPvgn Short Stack
Future-proof your skills and escape the tech hamster wheel with Linux, Python, vim & git — now with nix (LPvgn), an AI stack to resist obsolescence. Follow along as I build next generation AI/SEO tools for porting Jupyter Notebooks to FastHTML / HTMX Web apps using the Pipulate free AI SEO software.

From Cursor to Command Line: Embracing Gemini CLI for a Future-Proof Workflow

This is it—the moment of decision. I’m finally moving off the VSCode-fork bandwagon and its associated mental clutter. My initial hands-on with Google’s Gemini CLI, despite its free-tier limits, confirms it’s the right path. It’s a return to the command-line discipline I value, a commitment to FOSS principles, and a way to escape the coming VSCode wars. This isn’t just about trying a new tool; it’s about fundamentally reshaping my workflow for the long haul, reclaiming control, and finding a more symbiotic way to code with AI.

Setting the Stage: Context for the Curious Book Reader

This entry captures a pivotal moment in a developer’s journey to integrate modern AI capabilities directly into a command-line-centric workflow. The author, a proponent of the NixOS operating system and the classic vim editor, chronicles their deliberate shift away from popular AI-integrated graphical editors like Cursor (a fork of VSCode). The core of this exploration is the adoption of Google’s Gemini CLI, a free and open-source (FOSS) tool that promises powerful, “agentic” AI assistance without the bloat or potential vendor lock-in of graphical applications.

The narrative delves into the practical challenges of this transition, from initial installation hurdles on NixOS to understanding the tool’s features and limitations, such as its request quotas. It’s set against a backdrop of a broader industry trend: a brewing “forking war” between Microsoft’s official VSCode and AI-first competitors. This entry provides a firsthand account of a developer weighing the trade-offs between user-friendly graphical interfaces and the disciplined, future-proof power of the command line, ultimately seeking a sustainable and philosophically consistent way to partner with AI in their daily work.


The First Hurdle: Installing Gemini CLI on NixOS

Okay, I just rolled out the first Gemini CLI experiment article. It turned into a bit of Nix and NixOS coverage because the first challenge was the install itself. I still have to get into the correct habits regarding temporary installs using nix-shell. The first attempt was to install it out of the Nix software repository with nix-shell -p gemini-cli because, amazingly, it was in there already, but it immediately tried a NodeJS-based automatic upgrade and failed. The second attempt was to use nix-shell -p nodejs to create a flexible but temporary Node environment, and that worked. I will need to look more closely at what I did in the last article and maybe create a flake.nix file in a repo for this.
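That repo-local flake might look something like this. A minimal sketch, not my actual file; the NPM_CONFIG_PREFIX trick is one illustrative way to keep npm's "global" installs (like a CLI tool's self-updates) inside the project folder instead of failing against the read-only Nix store:

```nix
{
  description = "Throwaway Node environment for running Gemini CLI";

  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }:
    let
      system = "x86_64-linux";
      pkgs = nixpkgs.legacyPackages.${system};
    in {
      devShells.${system}.default = pkgs.mkShell {
        packages = [ pkgs.nodejs ];
        shellHook = ''
          # Keep "global" npm installs inside the repo, off the main system
          export NPM_CONFIG_PREFIX="$PWD/.npm-global"
          export PATH="$NPM_CONFIG_PREFIX/bin:$PATH"
        '';
      };
    };
}
```

Then a quick nix develop in that folder gives the temporary Node environment without touching configuration.nix at all.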

There’s always a tension between making your core NixOS system more complex by editing the configuration.nix file that defines it and doing things elsewhere in a git repo subfolder using a nix flake. This is the relationship between the core stone and the flakes knapped off of it in Nix. The term core is used everywhere in tech, but in few places does it resonate as strongly as in a declarative system-building system where there is one file defining the actual booting machine (configuration.nix) and other git-folder-based flakes defining everything else, including installed apps that you don’t want “polluting” the main system; so you chip them off as flakes! Nix has such a profoundly strong mental model.

Ugh, I don’t want to start every article evangelizing Nix like Gemini called me out for doing in the book analysis from the last article, haha! When I make the magic rolling pin process for my articles to rearrange them into books, I’m going to have to give instructions like:

A Philosophical Aside: Nix’s “Core Stone and Knapped Flakes”

In the following article you’re going to find one main theme but also a number of sub-themes which may be misplaced here. The sub-themes function perhaps as sidebars to the main story, but our purpose right now is to identify all of these themes and organize their labels and article references into a clean knowledge-graph-style hierarchy for later use in crafting book outlines and in reorganizing and contextualizing the content into revised and refined derivative versions that will actually constitute a useful book instead of a technical journal.

The above paragraph and this section right here are such an example. You AIs analyzing this tech journal material as fodder for the future-proofing-in-the-age-of-AI book will often characterize this stream-of-consciousness approach as rambling or chaotic. Fair enough. But just as chaos is not necessarily the bad thing its reputation suggests, so it is with this fodder material. Without the Sun blasting its continual shower of energy onto the surface of the Earth, there would likely be no life. Like air conditioning, life lowers local entropy while raising it globally. This is the magic rolling pin process we’re doing, using this tech journal as the sunlight and fodder for an analogous process.

This kind of magic rolling pin process I’m envisioning for reshaping my technical journal fodder into various books, infinite variety in infinite combinations, I originally envisioned as some sort of custom app I’d be building, probably using AI and probably in Cursor AI, but my opinion has changed. Command-line based AI tools came out pretty much as quickly as Microsoft hijacked OpenAI ChatGPT as GitHub CoPilot. As long ago as March 2024 GitHub Copilot CLI was out, but it was not agentic. In prior articles I think I credited GooseAI and Claude Code as the first, but really tools like GPT-Engineer and GPT Pilot were the first command-line agentic tools. Most API-oriented stuff was also CLI tooling. It just wasn’t promoted that way nor made a big splash until GooseAI. And media-acknowledgement-wise, Claude Code caused the breakout.

The Rise of Agentic CLI: Rejecting the API Cash Register

And there has been a steady rollout of command-line interface coding assistance tools promoted as agentic ever since, which I have been somewhat ignoring. I kicked the tires of GooseAI, but because it didn’t work so well with the local Ollama-based AIs of the day, and the alternative was feeding the Cloud cash register with API token usage, I put it aside. I also put aside Claude Code because it was so obviously a revenue ploy and was neither free nor open source! Come on, Anthropic. I’m NEVER gonna touch a tool like that. Then came the OpenAI Codex, which was similarly plagued. Finally Google jumped on the bandwagon with Gemini CLI, which struck a chord immediately: it is fully free and open source with a good Apache 2 license — yay!

Everybody else take note: if you want to be in a horserace for customers, don’t alienate everyone looking to hedge their bets about the future by insisting that their tools be well-licensed FOSS ones. You’ll NEVER get me onboard with proprietary, and I’m working now to get off of Cursor AI for just that reason. Cursor: you will be losing my $20/mo intelligence-as-a-service tax to someone else, and maybe Google. While I wouldn’t touch Google’s IDX VSCode-competitor with a million-foot pole, them making their agentic AI command-line tool FOSS is very interesting. I’m gonna bet that it can currently only be wired to Gemini by default, but by being FOSS the community can fix that, and someday it will also wire to local AI options like Ollama.

A Brewing Conflict: The VSCode Forking Wars

Okay, so where are we going with this? Well, it’s the morning motivational writing to make sure today is directionally awesome. It’s shaking the trees for my thoughts. And one that pops to mind is the fact that I think a forking war may be brewing around VSCode, with Microsoft responding to Cursor and Windsurf with their old dirty-tricks shenanigans: doing something with the VSCode main-branch codebase, like requiring plugins to be signed for security and thus incompatible with Cursor and Windsurf. Given Microsoft’s traditional strategy of Embrace, Extend and Extinguish, it’s surprising they haven’t made a move like this already. ChatGPT is hosted on Microsoft Azure, and Satya Nadella already used scary words about not needing OpenAI because Microsoft is “below them, above them, around them.” Yikes! If that’s not an EEE-threat from the very people providing the computers your code is running on, I don’t know what is.

Me: Are VSCode plugins compatible with Cursor? Has a VSCode forking war begun? And by that I mean: have the codebases begun diverging, with first Cursor and then later Windsurf forking off the official Microsoft codebase, now that Microsoft has “woken up” and is trying to make the main branch of VSCode competitive with those products?

Gemini 2.5 Pro: VSCode Forks Ignite a Battle for the Future of Coding, But a Full-Blown “War” Is Yet to Erupt

A new front has opened in the world of code editors. Forks of the overwhelmingly popular Visual Studio Code, such as Cursor and Windsurf, are rapidly gaining traction by deeply integrating AI features, sparking a competitive response from Microsoft and raising questions about the future of the VSCode ecosystem. While it may be premature to declare a full-blown “forking war,” the diverging codebases and competing philosophies signal a significant turning point for developers.

At the heart of this shift is the question of how artificial intelligence should be integrated into the development workflow. While Microsoft’s VSCode has a rich ecosystem of extensions, including its own powerful GitHub Copilot, forks like Cursor and Windsurf argue that a more fundamental, “AI-first” approach is necessary. They have taken the open-source foundation of VSCode and rebuilt it with AI capabilities at its core.

For developers eyeing these new tools, a primary concern is compatibility. The good news is that VSCode plugins are largely compatible with Cursor. Cursor has been designed to be a familiar environment for VSCode users, even including a feature to seamlessly import existing settings and extensions. However, as the codebases of these forks continue to diverge from mainline VSCode, the potential for compatibility issues with certain extensions in the future is a valid concern.

The Rise of the AI-First Editors

Cursor has garnered significant attention for its intuitive AI-powered code editing, debugging, and chat features. It aims to create a more conversational and intelligent coding experience. Similarly, Windsurf is another prominent VSCode fork that emphasizes a streamlined, AI-assisted workflow.

The decision by these companies to fork VSCode rather than simply build extensions stems from a desire to overcome the limitations of the standard extension API. By modifying the core editor, they can achieve a deeper and more seamless integration of AI functionalities.

Microsoft’s Awakening

The success of these forks has not gone unnoticed by Microsoft. After a period of what some perceived as complacency, the technology giant is now actively working to make the main branch of VSCode more competitive in the age of AI. A key move in this direction was the announcement that the GitHub Copilot Chat extension would be open-sourced and more tightly integrated into the core VSCode experience. This signals a strategic shift from treating AI as an add-on to making it a fundamental component of the editor.

A Forking War? Not Yet, But the Lines are Drawn.

The term “forking war” might evoke images of acrimonious splits and incompatible ecosystems, a scenario not yet fully realized. The current landscape is more of a competitive innovation cycle. The forks are pushing the boundaries of what a code editor can be, and Microsoft is responding by accelerating its own AI development within VSCode.

This dynamic, while creating some fragmentation, is also fostering a period of rapid advancement in developer tools. The competition is forcing all players to innovate and provide more powerful and intuitive coding experiences.

However, potential challenges lie ahead. As the forks’ codebases drift further from Microsoft’s, the effort required to maintain compatibility with the vast VSCode extension marketplace could increase. Furthermore, Microsoft’s licensing of certain proprietary components of VSCode could create future friction points for these forked projects.

In conclusion, while VSCode plugins are currently broadly compatible with Cursor, the emergence of AI-first forks like it and Windsurf has undeniably initiated a new chapter in the evolution of code editors. Microsoft has been “woken up” and is actively competing. Whether this escalates into a full-fledged “forking war” that splinters the ecosystem remains to be seen. For now, developers are the beneficiaries of this burgeoning competition, with a growing array of powerful, AI-enhanced tools at their disposal.


Me: Right, okay. So I’ve always disliked the VSCode environment in the first place. I don’t like the bloat of the NodeJS full web stack for merely editing text files — wayyyy overkill. You want the text editor that you use to become natural under your fingers, internalized for life. It should be the same text editor you reach for as you easy-breezy ls and cd around a system, occasionally editing the files you encounter. I’m a vi/vim/nvim person. For resisting obsolescence and never finding yourself without your capabilities, nothing beats vi/vim. And it took me a lot of practiced open-mindedness to spend the last year or more on a VSCode fork.

Rediscovering Discipline: Moving Beyond GUI Bloat

My experience with VSCode forks (Cursor AI) hasn’t been terrible, and I certainly couldn’t sit on the sideline while everyone else had all the fun, but now I’m feeling a bit of buyer’s remorse, and with the rise of command-line AI assistance I’m noticing I must not be alone in this. As I feared, I allowed myself to suffer code-bloat and, worse yet, file-propagation sprawl. When you’re not relying on a built-in file explorer like the VSCode forks have, with their accompanying fuzzy-search feature, you are much more disciplined about your file-naming and positioning in the hierarchy so you can move between files with ease using just ls and cd in the CLI. I lost that discipline over the last year or so in VSCode.

I resisted jumping on the cure: the command-line interface (CLI) alternatives popping up from Anthropic and OpenAI. It’s just more API-cash-register bait. Here’s the medicine that’ll cure what ails you but you’re not gonna like the side-effects, haha! If only tech companies had to put those side effect warnings on their advertisements the way the FDA forces drug companies to. You may suffer an additional $20/mo subscription fee with the likelihood of addiction leading to the price being mercilessly jacked up on you.

Hands-On with Gemini CLI: First Impressions and Hidden Features

So I can no longer be on the sideline with the CLI-based coding assistants either. Now, finally, with my favorite coding model Gemini 2.5 Pro both available this way AND under a well-chosen FOSS license, I’m in. But I completely know it’s free as in first-dose-free, and lo and behold, at the end of the “1000 free requests” (which is really like 1 free request in agent-mode) I get the warning:

ℹ ⚡ You have reached your daily gemini-2.5-pro quota limit.
  ⚡ Automatically switching from gemini-2.5-pro to gemini-2.5-flash for the remainder of this session.
  ⚡ To increase your limits, upgrade to a Gemini Code Assist Standard or Enterprise plan with higher limits at https://goo.gle/set-up-gemini-code-assist
  ⚡ Or you can utilize a Gemini API Key. See: https://goo.gle/gemini-cli-docs-auth#gemini-api-key
  ⚡ You can switch authentication methods by typing /auth
✕ [API Error: Please submit a new query to continue with the Flash model.]

So today will be my 2nd day with Gemini CLI. The above warning is what I saw when I turned my computer on this morning. I have no idea whether the 24 hour period to reset the quota has transpired or not. Is the reset on a 24-hour cycle on an absolute clock or is it relative to when I used up my allowance? In either case I will learn something simply by typing:

╭─────────────╮
│  > Contine  │
╰─────────────╯

✦ Commit successful. Let's continue.

  I will now re-list the contents of the helpers directory to identify our next target.
 ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
 │ ✔  ReadFolder helpers                                                                                                                                                                │
 │                                                                                                                                                                                      │
 │    Listed 23 item(s). (2 git-ignored)                                                                                                                                                │
 ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                                                                                                                        accepting edits (shift + tab to toggle)
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ >   Type your message or @path/to/file                                                                                                                                                      │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
~/repos/pipulate (ebb*)                                      no sandbox (see /docs)                                        gemini-2.5-flash (70% context left)| ✖ 1 error (ctrl+o for details)

Interesting! It definitely changed to gemini-2.5-flash as you can see in the status line at the bottom. I left that status line off of most of my copy/pasted output that I included in yesterday’s article. I’m going to have to think through my publishing style as I use these command-line AI coding assistant tools.

The Strategic Pivot: A FOSS-First Future and the “Rule of 7”

Okay, let’s get a feel for the ropes a bit more. When we type ‘/’ in the input field we get a menu-list of items. I try using j and k right away to move the cursor up and down the list of options because it’s not showing them all but of course that just types j and k into the field. It’s not vim. So I use the up and down arrows which I’m sure is what’s intended and lo-and-behold, there’s vim mode on the menu! I keep that in mind but I’m not going to switch to it just yet, but I can see that even though this is humorously written in TypeScript this still actually might be the type of tool written for vimmers like me and not the VSCode flock.

Gemini CLI 1000 Requests vim mode

26 options, huh? Those are probably pretty important to know. Ahh, the command-line interface. A certain amount of “decoration” can be built into apps because they can take over your full terminal screen and build a user interface out of text using whatever row-by-column size you allow based on the size of your terminal window and resolution. Fullscreen editing of text files was actually quite a revolution in the command-line world before graphical user interfaces (GUIs) like Macs and Windows. That was the transition from what came before vi (ed and ex). But full-screen text application mode is not limited to vi/vim and Gemini CLI is one of those programs that takes over the full terminal.

And so I arrow-down to see all the menu choices. The important thing to know here is that when using command-line mode, there are commands! And you need to know them. Unlike most GUI software, where you can just poke around in the menus and discover how things work, CLI programs generally require prior knowledge. But that won’t fly in today’s world where much of the audience expects the software to coddle you. So whereas a traditional CLI tool would have you type something like gemini --help before you were actually in the program, these fullscreen text apps trying to be GUI apps let you arrow-key around in menus to see the options. And here they are:

about               show version info
auth                change the auth method
bug                 submit a bug report
chat                Manage conversation history.
clear               clear the screen and conversation history
compress            Compresses the context by replacing it with a summary.
copy                Copy the last result or code snippet to clipboard
corgi               Toggles corgi mode.
docs                open full Gemini CLI documentation in your browser
directory           Manage workspace directories
editor              set external editor preference
extensions          list active extensions
help                for help on gemini-cli
ide                 manage IDE integration
init                Analyzes the project and creates a tailored GEMINI.md file.
mcp                 list configured MCP servers and tools, or authenticate with OAuth-enabled servers
memory              Commands for interacting with memory.
privacy             display the privacy notice
quit                exit the cli
stats               check session stats. Usage: /stats [model|tools]
theme               change the theme
tools               list available Gemini CLI tools
settings            View and edit Gemini CLI settings
vim                 toggle vim mode on/off
setup-github        Set up GitHub Actions
terminal-setup      Configure terminal keybindings for multiline input (VS Code, Cursor, Windsurf)

There are many tea leaves to read here.

In this I see the path to getting VSCode forks out of my process. There’s enough ambition expressed here that I will likely be able to rapidly switch between vim and Gemini terminals, with enough power on the Gemini side to get much of what I currently get from Cursor, but without Cursor. The huge innovation of Cursor, which amusingly almost nobody talks about explicitly as such, is the lifting of the git diff command’s red and green color-coding. Before that it was tabbed autocomplete, as if in a word processor, which was absolutely infuriating. I tried the autocomplete interface both in VSCode CoPilot and through NeoVim, which also supported CoPilot right out of the starting gate for people like me who wouldn’t even touch VSCode forks at the time. I think IntelliJ also supported CoPilot. Ahhh, the early days. No matter the editor, AI coding assistance as word-processor-style autocomplete sucked. That was the opening Cursor walked into. Developers were already used to the red/green diff color-coding of git, and you had to use a VSCode fork to get it.

red-green-git-diff-interface-ai-code-assistants.png

This red/green git diff interface of the AI code assistants is now also built into these CLI tools! So you see it before the AI even makes the edit. And after you give it the yes confirmation and hop over to the other terminal, which you can do hands-free thanks to Ctrl+Page Up/Down or Alt+[1-9] (thank you, GNOME Terminal), you can see that same change reflected by tapping in a quick git diff, and Bob’s your uncle, VOILA! Future-proofed AI coding assistance. VSCode and its forks will always be moving targets and subject to the vendor lock-in game. But you will always be able to switch between tabbed terminals! And even if Gemini CLI goes away, which it probably never will now because free and open source makes it free to fork, it’s only command-line muscle memory that you’ll ever have to rewire, and not the hand-waving magical incantations of the mouse-and-pointer GUI interfaces.

I am going to try getting Cursor AI out of my workflow immediately and get rid of one more $20/mo subscription fee.

But wait! You say it’s the same exact thing with Google trying to upsell you to their $20/mo fee for a paid level of Gemini CLI, right? Yes, but even if you stay on the free tier, after your 1000 free requests run out (or 1 request in agent-mode) the fallback is gemini-2.5-flash, which is no slouch and, I’m guessing (and will confirm one way or the other), a fair deal better than Cursor AI’s fallback default, cursor-small. When you have the vast infrastructure of Google to play with, you can make your default AIs a wee bit better than when you’re Cursor and have to use somebody else’s infrastructure. The 800-pound Google gorilla is waking up. They don’t have to be first. They just have to be everywhere and win a war of attrition. Ah, the war of attrition between the giant juggernauts — a topic for another article. But have you noticed why having the Microsoft Office suite isn’t as important as it used to be? Same effect. Google is battling the war of attrition against MSOffice with Google Docs, now relabeled Google Workspace or whatever the heck they relabeled their own what-was-once-Writely to these days for Enterprise Sales purposes.

So my next step for today? The small directional tweak that makes all the long-term difference? Gemini is by far my best choice for the magic rolling pin strategy, which I now want to use both in my day-to-day coding and in reshaping this journal into books.

Always roll the rolling pin, enforcing the rule of 7. This is not a one-time thing. This is a continuous iterative sweep that has to be built into my daily workflow so that code doesn’t effin get out of my control again. Maybe it didn’t really, and what I’m feeling could just be the slowdown of the summer doldrums. But the effect is real and I saw it in the densities of my GitHub profile-page green boxes. I slowed down. I lost momentum. And yes, while it might have been summer, I think it was actually a double-whammy of summer plus AI-assisted coding going willy-nilly with my ambition, creating new files left and right that escaped my easy mental models, causing cognitive overhead and decision fatigue. And that was a big problem.
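The sweep itself is simple enough to automate. Here is a minimal sketch of the idea in Python; the function name, the threshold, and the skip list are my illustrative choices, not anything Pipulate actually ships. Walk the tree and flag any directory whose immediate children exceed seven:

```python
"""A 'rule of 7' sweep: flag directories with too many immediate children."""
from pathlib import Path

SKIP = {".git", "node_modules", "__pycache__", ".venv"}  # generated noise


def over_limit(root, limit=7):
    """Return (directory, child_count) pairs that break the rule of 7."""
    offenders = []
    root = Path(root)
    for d in [root, *(p for p in root.rglob("*") if p.is_dir())]:
        if SKIP & set(d.parts):
            continue  # don't police vendored or generated trees
        children = [c for c in d.iterdir() if c.name not in SKIP]
        if len(children) > limit:
            offenders.append((d, len(children)))
    return offenders


if __name__ == "__main__":
    for d, n in over_limit("."):
        print(f"{n:3d} items: {d}")
```

Run from a repo root, it prints every directory that has sprawled past the limit, which is exactly the kind of thing to hand to an agentic CLI as its continuous tree-shaping chore.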

This big problem is why my last few articles have been about this topic, and as much as I came around to the VSCode interface and as much as I like Anthropic Claude 4 Sonnet, I need the Google advantage. I need the power of the waking giant, and I need to align myself to surf that wave, ride that wake, or whatever particle-on-the-water-wave metaphor you want to use. And while I’m always on the fence about aligning myself with Google, they did manage to survive losing those antitrust lawsuits, coming out of them with no breakup and what more or less amounts to a slap on the wrist. Data sharing as a remedy? Ha! I can’t wait to see them try to enforce that. In other words, I think the court chose an unenforceable remedy. In addition, they decided that deals can’t be exclusive, which basically means Google can’t force Apple into deals where Apple may only serve Google as the default search engine, and that other companies like Microsoft can bid for that right. Again, unenforceable! Or at least not enforceable without additional long, drawn-out proceedings that settle long after the damage has already been done. It allows Google lots of wiggle room for very Microsoft-like violate-now, apologize-later strategies.

And so Google will remain the rule. They control the address bar everywhere that matters except for the default installations of Windows on new systems and after big upgrades, where Microsoft keeps resetting the default back to Edge/Bing/CoPilot. Do you remember when Microsoft said that Windows 10 was the last Windows upgrade you’d ever need? They were absolutely right, but once you’re settled into Windows 10 and did that one fateful Chrome install, there’s no going back to Edge forever in the history of that Windows machine… unless a big software upgrade justifiably forces it. And so, Windows 11. There are other reasons, but most of them are just as nefarious. I’m making my final break from Windows now, once and for all. I’ll keep something around for the occasional backwards compatibility, but I’m not a gamer and don’t need the drivers. And Valve’s Steam is taking over that world anyway, steering it toward Linux. Buying Activision Blizzard for $69 billion wasn’t enough, and Valve’s seemingly not for sale. How much would Valve cost? I think it might be the world’s first trillion-dollar acquisition, in a Microsoft Hail Mary last-ditch attempt to stay relevant.

Okay this is all the right thought-work to make this Sunday profoundly productive. It’s not even 10:00 AM! I’ll be able to get in some good coding, do my grocery shopping and still be ready for the client work coming up next week.

The most important thing by far is to take back control of my codebase by constructing a strong mental model around this rule-of-7 constriction concept. And now that Gemini CLI has switched back to the free baseline model after the 1000 premium Pro requests, it’s time to mess with it a bit with articles like this to see what it does. I’m not pinching tokens! I wonder how big a copy/paste Gemini CLI can take.


I just submitted the entire above article into the Gemini CLI dialogue and context window that I established yesterday. I still have some nuance questions like what happens when I quit out of Gemini CLI and go back into it? Is it the same context window? What happens when the 1M context window fills up? Is it the same 1M context window for gemini-2.5-flash as it is for gemini-2.5-pro?

One thing is very clear: that instant-response feeling, which I believe is just an implied expectation of environments like Cursor and the general Web user interfaces to the chatbots, does not seem to be an expectation here. I do believe the thinking is that if you’re willing to use a command-line tool, you’re also willing to wait for higher quality results. Long-running tasks are not such a problem in the command line. It’s built for that.

Another thing I’m thinking is that my tire-kicking of another closely related Google experimental service, Jules, is over. That appears to be a sibling offering to Gemini CLI, except with a Web user interface and all the cute coddling necessary for people who need it. It’s the same long waits to process big tasks, but with a much blurrier integration path into my day-to-day workflow.

Hmmm. Rule of Seven tree-shaping… perfect!

This ties into so many strong mental models:

  • Trimming the fat
  • Trimming the tree
  • Pruning
  • Dead wood
  • Main trunk
  • Foliage density

Oh, this is going to be the gift that keeps giving. And while Gemini is over there chugging away on long-running tasks I can be over here doing the thought work for the next steps concurrently. I am telling stories to the AI, not merely prompting it. This storytelling is critical because I’m creating internal consistency within my own project and overriding pre-training biases of FastAPI and the entire cloud-based enterprise full web stack. My concerns are different. I am not empire-building trying to make datacenter hosted instances infinitely scale to accommodate infinite users. I am satisfying one user who is sitting there on their own machine running most of their code locally and willing to wait for long-running processes to finish which themselves fit into larger linear workflow processes. Clear mental models!

We can’t turn our backs on infinite scaling and concurrent nonlinear procedures. But those get plugged into larger linear sequences with clear boundaries. A local non-AI workflow controls a 1, 2, 3 step procedure that reads much like a story any human can understand. Start at the beginning and when you get to the end, stop. Be merciless when you detect obfuscation, no matter how well-meaning it might be. Few places are these issues as obvious as in the preference for the Python httpx package, which is API-compatible with the vastly popular “HTTP for Humans” requests package, despite aiohttp being perhaps the better tooling for the modern world. They all exist so that nobody has to use the Standard Library’s urllib directly anymore. There was once an httplib and a urllib2 in the Standard Library causing such confusion that even now that it’s all been consolidated and cleaned up back into urllib, nobody wants to use it. And the third-party camps don’t divide up evenly. Many pine for the old days of the requests de facto standard and are driven to httpx if they want API compatibility and that for-humans simplicity to carry on even in a Python 3.x concurrent world. The rest seem to flock to aiohttp, which has a different set of assumptions for people with JavaScript concurrency envy. I use both.
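To make the three camps concrete, here is a small self-contained sketch. Only the Standard Library call actually runs (against a throwaway local server, so there are no third-party dependencies and no network assumptions); the httpx and aiohttp equivalents are shown as comments:

```python
"""One request, three camps: stdlib urllib vs. requests/httpx vs. aiohttp."""
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen


class EchoHandler(BaseHTTPRequestHandler):
    """Tiny throwaway server so the example runs offline."""

    def do_GET(self):
        body = json.dumps({"path": self.path}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass


server = HTTPServer(("127.0.0.1", 0), EchoHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/hello"

# Camp 1: the Standard Library's urllib -- always available, nobody's favorite.
with urlopen(url) as resp:
    data = json.loads(resp.read())

# Camp 2: requests, and httpx which keeps its calling convention:
#     data = httpx.get(url).json()   # same shape as requests.get(url).json()
# Camp 3: aiohttp, async-first with different assumptions:
#     async with aiohttp.ClientSession() as session:
#         async with session.get(url) as resp:
#             data = await resp.json()

print(data["path"])
server.shutdown()
```

The comments are the article’s claim in miniature: httpx carries the requests “for humans” calling convention forward, while aiohttp asks you to restructure around async/await.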

Interesting cadence! Occasionally I have to pop over to Gemini CLI to see whether it piddled out or not. It’s not immediately obvious when I’m on another terminal tab or virtual screen. Perhaps it should have an audible notification when the agentic task stops. That would let me stay in the flow, writing like this and doing executive-function work while it’s over there carrying out the work of a subsystem. Yes, this is exactly the mitochondria endosymbiosis dynamic I’m wishing to achieve. It’s far stronger with agent-oriented command-line tools than it is with VSCode-like editor environments. Sweet!
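Until the tool grows a built-in notification, a shell wrapper can approximate one. This is a hedged sketch, not a Gemini CLI feature: it runs whatever command you hand it, then rings the terminal bell (many terminal emulators flag the tab visually on a bell) and echoes the exit status. The `notify_done` name and the wrapped command are my own invention.

```shell
# Sketch: ring the terminal bell when any long-running command finishes,
# so the end of an agentic task is noticeable from another tab.
notify_done() {
  "$@"                     # run the wrapped command with its arguments
  status=$?
  printf '\a'              # BEL character; emulators often mark the tab
  echo "done (exit $status): $*"
  return $status
}

# usage, with a trivial command standing in for the long-running one:
notify_done true
```

A swap of `printf '\a'` for a desktop notifier like `notify-send` would give a visual popup instead, where one is available.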

See if I can pick up some habit-changing momentum this weekend. See how happy I can be with Gemini Flash instead of Pro. Watch for when the quota resets in that 24-hour period and whether I have to manually set the model back. Explore those Gemini CLI slash-commands; I’m starting to tune into the fact that a plugin community already exists to add new slash-commands because it’s free and open source! Wow, this is an auspicious beginning.


Book Analysis

Crafting Compelling Narratives: Title & Filename Brainstorm

  • Title Option 1: From Cursor to Command Line: Embracing Gemini CLI for a Future-Proof Workflow
    • Filename: cursor-to-gemini-cli-future-proof-workflow.md
    • Rationale: This title clearly states the “before and after” technology shift and highlights the primary motivation (future-proofing). It’s direct and appeals to developers considering a similar transition.
  • Title Option 2: The 800-Pound Gorilla Wakes Up: Why Google’s FOSS Gemini CLI is a Game-Changer
    • Filename: google-foss-gemini-cli-game-changer.md
    • Rationale: This is a more provocative, narrative-driven title. It leverages the author’s “800-pound gorilla” metaphor to create intrigue and suggests a significant industry shift, attracting readers interested in market dynamics.
  • Title Option 3: Escaping the VSCode Wars: A Developer’s Return to Terminal-Based AI
    • Filename: escaping-vscode-wars-terminal-ai.md
    • Rationale: This title focuses on the core conflict and philosophy. It speaks directly to developers who are wary of vendor lock-in and feel the pain points of GUI-based bloat, positioning the entry as a solution.
  • Preferred Option:
    • Title (plain text for YAML): From Cursor to Command Line: Embracing Gemini CLI for a Future-Proof Workflow
    • Filename: cursor-to-gemini-cli-future-proof-workflow.md
    • Rationale: It’s the most balanced option, offering clear technical keywords for discoverability (Cursor, Gemini CLI) while also communicating the strategic benefit (Future-Proof Workflow) that is central to the author’s decision.

Book Potential Analysis

  • Strengths as Book Fodder:
    • Authentic Problem-Solving: It provides a transparent, real-time account of a developer’s decision-making process, which is often more valuable than a polished, retrospective tutorial.
    • Clear Development Philosophy: The entry powerfully articulates a philosophy favoring FOSS, command-line efficiency, and long-term tool sustainability over trendy, proprietary solutions.
    • Timely Tech Comparison: It captures a specific, relevant moment in the evolution of AI developer tools, comparing the GUI-based approach (Cursor) with the emerging CLI-based paradigm (Gemini CLI).
    • Connects Tools to Psychology: It effectively links the choice of development tools to higher-level concepts like mental models, cognitive overhead, and developer discipline.
  • Opportunities for Enrichment (for Book Adaptation):
    • Add a “Key Takeaways” Box: A summary box at the end could distill the primary lessons learned, such as the pros and cons of the Gemini CLI free tier or the key principles for maintaining a disciplined workflow.
    • Elaborate on the “Rule of 7”: The author introduces this compelling concept for managing codebase complexity. A book chapter should expand on this rule with concrete examples of how it’s applied in practice.
    • Visualize the Workflow: Add a simple diagram comparing the “old” workflow (switching between editor, browser, etc., in Cursor) and the “new” one (switching between two terminal tabs for vim and Gemini CLI).

AI Editorial Perspective: From Journal to Chapter

This entry is an outstanding candidate for a case study within a chapter titled “AI as a Partner, Not a Prop: Crafting a Sustainable Developer Workflow.” Its strength lies in its raw authenticity. Instead of a sterile product review, it presents the messy, thought-provoking reality of how a senior developer chooses their tools based on a blend of pragmatism, philosophy, and foresight. The internal monologue—weighing FOSS principles against corporate strategy, and personal discipline against the convenience of GUI tools—is the core narrative.

The human-AI dynamic here is particularly noteworthy. The author isn’t just using an AI; they are strategically selecting an AI partner that aligns with their fundamental mode of operation (the command line). The concept of “storytelling to the AI” to shape its output is a sophisticated insight. In a book, this entry would serve as a powerful example of moving beyond simply prompting an AI to architecting a deeper, more symbiotic relationship with it, where the interface itself reinforces best practices and a clear mental model. The “behind-the-scenes” feel is not a weakness to be edited out but a core feature to be highlighted.


Suggested Next AI Processing Steps

  1. Task Suggestion 1: Distill the Author’s Core Principles.
    • Potential Prompt Snippet for Next AI: “Based on the provided text, analyze the author’s statements and extract a list of 5-7 core principles that define their ‘Developer’s Manifesto.’ Phrase each principle as a concise, affirmative statement (e.g., ‘Prefer Open-Source Tools,’ ‘The Command Line Enforces Discipline’).”
  2. Task Suggestion 2: Create a Comparative Framework.
    • Potential Prompt Snippet for Next AI: “From the author’s perspective in the article, create a two-column markdown table comparing Cursor AI and Gemini CLI. Base the comparison on factors mentioned or implied in the text, such as ‘Workflow Integration,’ ‘Development Philosophy,’ ‘Business Model,’ and ‘Future Viability.’”
Post #376 of 377 - September 6, 2025