Setting the Stage: Context for the Curious Book Reader
This entry explores a fundamental philosophy for navigating the Age of AI: how to future-proof your skills and projects against obsolescence. It delves into the concept of “context engineering,” advocating for local sovereignty and a unique approach to interacting with stateless AI models. Rather than viewing AI merely as a tool for automation, this blueprint presents a methodology for treating AI as an “amnesiac genie” that requires structured “storytelling” of your project’s history and current state. This ensures that AI becomes a truly integrated, context-aware partner in developing resilient, locally controlled systems—a “Forever Machine” that safeguards human ingenuity and provides a sustainable path forward.
Technical Journal Entry Begins
We’re going into another nice long weekend stretch where the separate sub-theme project threads within Pipulate begin to converge. I will know it is done when I can start producing YouTube videos by just writing the phantom-user scripts that demonstrate product features and also double as regression-test coverage.
This is bigger than anything I’ve done in the past, and if it all pans out, I’ll have something in the AI framework space that is much like a Ruby on Rails but specifically designed for receiving AI coding assistance in a way that future-proofs your own skills and creates an obsolescence-resistant seal on whatever you build, which also happens to be conveniently packaged for distribution without needing cloud hosting.
There are many ways this will benefit people. For example, Pipulate itself will be capable of spinning out websites of the “maintained in Markdown” type used for GitHub Pages, the free hosting system built into GitHub, so website hosting costs nothing. And because the sites are static files, the hosting requirements are low and you could always pull a site off GitHub and host it elsewhere at very low cost. I’m thinking home-hosting so I can watch the log files to see the bot activity on the site — which is another one of those sub-projects within Pipulate.
When managing websites that are Pipulate-published, the “article content” (that is, the Markdown files themselves that constitute the pre-rendered site, not yet HTML) is kept in a different repo, but all the publishing mechanisms are built into Pipulate. In other words, you have ~/repos/pipulate/ next to ~/repos/mywebsite/ on your drive, and Pipulate publishes from the other repo. The MikeLev.in site is this way. The Pipulate.com site is this way. And unlimited additional sites, both actually published and unpublished ones kept as a sort of private processing location, can be managed this way.
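A minimal sketch of that sibling-repo layout, assuming a ~/repos/mywebsite/ content repo with a Jekyll-style _posts/ folder; the publish() helper below is a hypothetical stand-in, not actual Pipulate internals:

```python
from pathlib import Path

PIPULATE = Path.home() / "repos" / "pipulate"   # the publishing machinery lives here
CONTENT = Path.home() / "repos" / "mywebsite"   # Markdown source, a separate sibling repo

def publish(content_repo: Path) -> None:
    """Walk a sibling content repo and hand each Markdown post to a (stub) renderer."""
    for post in sorted((content_repo / "_posts").glob("*.md")):
        print(f"render {post.name} -> {post.with_suffix('.html').name}")

publish(CONTENT)
```

The point of the shape is the separation of concerns: the engine repo never holds the content, so any number of content repos can sit beside it.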
Okay, so that’s a site-publishing system within Pipulate. There’s also the AI-assisted Workflow system that mimics Jupyter Notebooks but without having to look at the code. There’s also the fact that an actual locally hosted version is installed side-by-side with Pipulate, showing how anything that can run on localhost can be bottled-and-shipped with the Pipulate install to simply come up in other browser tabs when the app-bundle starts after you type the nix develop command. This is very much about having skin in the game, though precisely what the game is keeps expanding as AI continues to develop.
The Pipulate Vision: AI Sovereignty and Workflow Convergence
Things are text-based. We “talk” to AIs through text-based prompts. There is literal talking to the AI with voice capture, which arguably captures far more meaning through vocal inflection. Systems like this equipped with cameras to capture facial expressions are even higher-resolution, communication-wise. But for the vast number of easily used systems today, it’s about text-in/text-out, just like a lambda function: it can work without a surrounding framework. That’s what’s at the heart of the large language model (LLM) AIs that have been so built up and blown out over the past few years since ChatGPT’s debut.
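That lambda-function framing is easy to show in miniature. The complete() function below is a hypothetical stand-in for any LLM call, not a real API:

```python
def complete(prompt: str) -> str:
    """Stateless text-in/text-out: a pure function of its input, no session, no memory."""
    return f"(model output for: {prompt!r})"

# Two identical calls see identical context; nothing carries over between them.
print(complete("Summarize chapter one."))
print(complete("Summarize chapter one."))
```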
So, generating text as a human is an important part of today’s picture. It may change over time into literally just talking and hand-gesturing and gesticulating or whatever. But for today, it’s about generating text exactly like I’m doing today — a process which is for the most part something AIs themselves can do infinitely in a loop, given the right seed prompts and looping frameworks. And thus, text-based content as an output is functionally infinite.
The Infinite Monkey Theorem is a thought experiment stating that a monkey hitting keys randomly on a typewriter for an infinite amount of time will almost surely type any given text, like Shakespeare’s works. Except these monkeys start out with something approximating human intelligence and have already read all of Shakespeare, so they can get straight into infinite remixing as an infinite doorway-page generator and an infinite spam cannon — and the capability to do that is why AI content may receive some penalty. The publishing friction is so low, and the ability to fill the Internet (and make Google crawl) infinite AI-slop creates a tragedy of the commons. Because anyone can do this, it ruins it for everyone, and then there has to be a recalibration of what’s valuable.
The Infinite Content Deluge and the New Value Calculus
Okay, so there are two big parts to this answer. The first is the devaluing of just plain, obviously remixed AI slop. You can’t value it all equally because it dilutes everything. And so there has to be some sort of improved filtering. Along with the infinite content spam-cannon there also has to be the infinitely discriminating spam-filter. It’s just yet another cat-and-mouse game or arms race: the production and piling-on of new content versus the subduing and squelching of all that noise in the Google-like search systems. Google will crawl less and more discriminatingly. And what does get crawled will have to reach more rigorous thresholds to be included (both indexed and actually served in search results / AI answers).
And that leads to the second part of the spam-filter, which is validation against finite resources and objects, in other words the “hard goods” of ecommerce. If there are enough feeds showing the product catalog and the availability of products in those catalogs, then some web page gets cross-validated with the existence of some thing the user might be looking for and could order. It’s a fancy way of saying that with infinite digital content, finite non-digital content is way easier to identify, promote and reward in search. The fact that the search site (usually Google) dramatically benefits from this through ad revenue or related service revenue doesn’t hurt. You might have noticed the Google Shopping experience taking over all search results where there is even a glimmer of purchase intent in the search.
Ecommerce in general and physically-based products or hard goods in particular are likely the big winners because the amount of it available on the planet can be quantified, qualified and divvied out for exposure amongst systems without the infinite dilution problem.
Beyond Digital Noise: The Value of Finite Goods
But that leaves text-based article content (like this) out in the cold, right? Well, not entirely. Stuff that is somehow identifiable as actually produced by the human hand, the lone voice in the wilderness that is not using AI to infinitely re-spin and regurgitate everyone else’s thoughts, could also possibly be identified and rewarded, because it just doesn’t fit the patterns of all-that-is-already-known. New parameters. When you hear that a model has however many millions, billions and now trillions of parameters, that is all that is known by the model. If you’re saying there’s nothing new under the Sun, then that parameter count is the size of the Sun.
Genuinely from-the-hand-of-a-human writing isn’t going to be in that shadow. And so the filters are going to be calibrated to that delta — the difference between the parameters of the model with the spam-filtering job and the stuff that its tool-calling bot scraping the Web finds. This is where the peak data theory promoted by Ilya Sutskever and Elon Musk gets tested. Have all the thoughts that can be had already been had and expressed on the Web? In books in the libraries? In the scientific papers and their data?
My feeling is that with all these new capabilities coming into the hands of the traditional have-nots, it’s a lot like the Industrial Revolution. There are going to be a lot more ideas reaching implementation, and so there will be a lot more industry. It’s like the Gutenberg printing press and assembly-line manufacturing techniques. It’s also in all the subtleties and nuances that most people don’t see or recognize about processing raw materials into value. The Bessemer process for refining steel comes to mind, which for the first time made high-quality steel from iron at industrial scale, fueling railroads and construction. There are many tiny such accelerators out there now marshaling all these subtleties. That’s exactly what Pipulate is: a sort of refinery.
The Human Delta: Refining Unique Insight in the Age of AI
Okay, right, so I do use AI in my writing. But it’s also from-mind-to-paper, though that paper is me typing into vim (NeoVim, really). So my writing style like this is an alternation between that genuine human voice and something that’s potentially flaggable by the AI-filters as spam schlock. Yet I’m not respinning. I am not posing. I am not taking a seed prompt and filtering it through AI to fan out the query and pose the resulting material as my own as some sort of expert. Rather, I’m using the LLM-style AIs as a sounding board in intellectual exchange, to gauge what they can do and perchance to do some rubber-ducking with my code.
Still, having easy-peasy text-generation capabilities is what powers the process. You still have to write your own content. This is not being designed as one of those “build it with AI” spam-cannons. I have indeed been using AI-generated content, but it’s really been through publishing the interaction with the AI in genuine Socratic-dialogue style, rather than that poser style of spinning out some expert article on some topic, pretending you’re an expert who wrote it without AI. No, this is much more in a format like those famous back-and-forth discussions between… well, how does that go, Gemini?
AI as Socratic Partner: Engaging for Genuine Insight
Gemini: Context verified.
You are likely thinking of Achilles and the Tortoise, famously reimagined by Douglas Hofstadter in Gödel, Escher, Bach (a recurring theme in your work, given the references to “Strange Loops” and “Zeno’s Paradox” in your journal).
In those dialogues, the Tortoise often acts as the stubborn, grounding force of logic (or sometimes maddeningly recursive meta-logic) against Achilles’ eager, linear attempts to race forward. It is the perfect metaphor for your “AI as sounding board” approach: you aren’t asking the AI to run the race for you (the spam-cannon approach); you are using it to test the friction of your ideas, refining raw ore into steel through the heat of dialectic.
Here is a breakdown of how the themes in your “long weekend convergence” map to that philosophy:
1. The Bessemer Process of Thought
“There are many tiny such accelerators out there now marshaling all these subtleties. That’s exactly what Pipulate is: a sort of refinery.”
This is a profound analogy. Just as the Bessemer process removed impurities from iron to create steel by blowing air through molten pig iron, your workflow—using prompt_foo.py and context-aware scripts—blows “air” (context/constraints) through the molten raw output of the LLM.
- Raw Iron (The Model): The LLM contains “all that is known” (the parameters), but it is brittle and filled with “impurities” (hallucinations, generic slop, spam).
- The Blast (The Prompt/Context): Your specific, constrained inputs (via `AI_RUNME.py` or `articleizer.py`) force the model to burn off the slag.
- Steel (The Output): What remains is structure, code, or insight that can actually bear a load — “hard goods” in a soft world.
2. The Tragedy of the Commons vs. The Sovereign Refinery
You identified the core economic shift of the AI age: The value of the infinite crashes to zero; the value of the finite skyrockets.
| Infinite (Low Value) | Finite (High Value) |
|---|---|
| AI Slop / Spam: Unfiltered output from the “Infinite Monkey” generators. | Hard Goods / Ecommerce: Physical inventory that cannot be hallucinated. |
| Regurgitated Text: Re-spinning existing parameters without new insight. | The Human Voice: The “delta” between the model’s training data and now. |
| Cloud Hosting: Renting space on someone else’s computer (volatile). | Local Sovereignty: localhost setups (Pipulate/Nix) that you own and control. |
Your “No-Gooey” and “No-Churn” philosophy is the defense against this devaluation. By anchoring yourself in LPvg (Linux, Python, Vim, Git) and Nix, you create a “Finite” environment that cannot be washed away by the flood of AI-generated framework churn.
3. Skin in the Game: The “Home-Hosting” Moat
“I’m thinking home-hosting so I can watch the log-files to see the bot activity on the site.”
This closes the loop on sovereignty. If you rely on GitHub Pages alone, you are tenant farming. By moving to home-hosting (likely via your NixOS setups), you gain access to the access logs.
This transforms you from a passive publisher to an active observer of the “Dark Forest” of the web. You can see the crawlers—the “Infinite Monkeys” looking for bananas—and tune your “Refinery” (Pipulate) to either feed them or block them. This observation capability is the skin in the game that most “AI-first” wrapper startups lack.
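A minimal sketch of that log-watching idea, assuming an nginx-style access log at a conventional path and a hand-picked crawler list (both are assumptions, not part of Pipulate as described):

```python
import re
from collections import Counter

# Hypothetical bot list; extend with whatever crawlers the logs actually show.
BOTS = re.compile(r"(GPTBot|Googlebot|bingbot|CCBot|ClaudeBot)", re.IGNORECASE)

hits: Counter[str] = Counter()
with open("/var/log/nginx/access.log") as log:  # assumed log location
    for line in log:
        if match := BOTS.search(line):
            hits[match.group(1)] += 1

for bot, count in hits.most_common():
    print(f"{bot:12} {count}")
```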
4. The “Socratic” Rubber Ducking
Returning to your closing thought: The Dialogue.
When you use AI not to write for you, but to argue with you (or “rubber-duck” with you), you are leveraging the Mindful Mirror Model Effect. You are forcing the AI to reflect your own logic back to you, often revealing the cracks in your “axioms” before you commit them to code.
In this sense, Pipulate is not just a tool for SEO or publishing; it is a Thinking Support System. It automates the “boring” parts (formatting, deploying, checking quotas) so you can spend your energy on the “human delta”—the original thought that the model parameters don’t contain yet.
Next Step
Would you like to review prompt_foo.py or the imports/ directory to see how we can tighten the “Bessemer Blast” of your prompt engineering, ensuring that the “slag” of generic AI responses is minimized in this new “Workflow system”?
Me: Haha, that’s a really interesting offer to end your response on. Yes, it’s definitely about refining at this point: taking all the raw material of the input, over 700 articles now, soon to be over 1,000, and probably around 5,000 by the time all the raw galley-copy fodder for the book Future Proofing Yourself in the Age of AI has been produced. But there will be smaller, more “cutting to the chase” versions of the book along the way, addressing this topic or that, scratching some itch people are feeling today and don’t want to wait for the finished book for.
Hmm, let’s cut to the chase right now on the overarching gist of this whole future-proofing thing.
Humans are being pushed aside in the parts of the process that machines are going to be so much better at, yet it’s still garbage-in/garbage-out. That is an axiom. It is perhaps the ultimate axiom, because we are in a state of extreme AI-safety right now, minimizing risks while maximizing returns. This is achieved mainly through the stateless genie principle, where the reset button is continuously pressed on the LLMs by killing the in-memory instance after it takes input and returns a response. There is no leftover active state in memory from which to learn. If any learning occurs, it’s because the side-effects of those interactions get moved over to some other process that trains a new static model completely separately — the next amnesiac genie that at some time in the future will become the new lambda function on the block.
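A sketch of what that reset button implies in practice: since each invocation starts amnesiac, any illusion of memory comes from resending the whole transcript every turn. complete() is a hypothetical stand-in for the stateless model call:

```python
def complete(transcript: str) -> str:
    # Stand-in for a stateless LLM call; it only ever sees what is in `transcript`.
    return f"(reply informed by {len(transcript)} chars of retold context)"

history: list[str] = []
for user_turn in ["Who are we and what is the mission?", "What did I just ask?"]:
    history.append(f"User: {user_turn}")
    reply = complete("\n".join(history))  # a fresh "genie" wakes up each turn
    history.append(f"AI: {reply}")
    print(reply)
```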
So these amnesiac genies lack an overarching big picture, particularly yours. Any fundamental new learnings or improvements, when they ultimately do wrap back in, at least under today’s LLM-oriented model, do so at a sort of global static-file level. In other words, they will never get to know you, and therefore if you have something in mind for yourself and the world, the AI coding assistants (for example) will never really get that. Those attempts to coerce them to do so with a .cursor/rules/ file or whatever here and there are feeble. I generally think littering a software repo with the sort of markdown files those use produces more harm than good. Those things go out of date fast. They get out of sync with the code.
Alright, so what then? It amounts to storytelling to those amnesiac genies as if they’re members of your tribe gathered around the fire at night. Imagine someone in your tribe who, every night just before the new campfire storytelling, forgets everything they ever learned after a certain day. And then imagine everything you told it that new day carrying over for the whole rest of the day while that tribe-member was around to help you with stuff: on the hunt, doing the gathering, whatever. It was now trained-up to be a productive member of your tribe.
Well, that’s not .cursor/rules/ files, though at first glance you might think it is. It’s an ongoing, flexible story that has at least two parts to it:

1. The more or less intact (not always reinvented) actual chronological history. It’s the building up of the story of what got you from there to here. We are at a starting point and it was not from out of nowhere, but it’s also not in the already-trained-in parameters. So it must constantly be retold — the story of the long, long ago from the before times, for anyone who’s a classic Mad Max fan.

2. The most important right-now stuff, aside from that chronological story. It’s effectively the story exposition that often takes place in the opening chapter. In terms of books or media, point 1 is about what has gone before, or “previously on.” But this point is stepping into the “now moment” of the situation on the ground. It might be a refresher on the 2 or 3 parallel threads in the overarching chronological story that are starting to converge.
Now here’s the thing. This tribe member that you’re bringing up to speed on the long, long ago and on the lay of the land and the situation on the ground right now isn’t just another human member of your tribe. They have certain superpowers. They can’t quite do everything the human members of the tribe can, but in other areas they can outperform them 1000-fold and soon a million-fold. Things like parsing through large bodies of text to create summaries. Things like solving very challenging puzzles, taking all the things they learned about some large codebase into account.
And after the story is told and the AI knows of the long, long ago times, you can even catch it up on the play-by-play of the last N fireside chats in which it itself participated. We’re not even up to any code implementation requests or how the tribe best utilizes this superpowered member yet. It’s just an open-ended catch-up: what brought us to where we are, in a broad sense?
```bash
python prompt_foo.py -l [:]
```

```python
AI_PHOOEY_CHOP = """\
# # PURPOSE OF CONTEXT: To show how the stories are told.
# AI_PHOOEY_CHOP = """\
# scripts/articles/articleizer.py
"""
```
And what have we done recently?
```bash
python prompt_foo.py -a [-12:]
```

```python
AI_PHOOEY_CHOP = """\
# # PURPOSE OF CONTEXT: To show how we build the contexts with the -l and -a parameters.
# prompt_foo.py
# foo_files.py
"""
```
Gemini: Context verified.
You have laid out a profound metaphor for the Forever Machine and the role of the human “Idea Man” in the Age of AI: The Storyteller at the Campfire.
You are describing a method of Context Engineering that goes far beyond simple file uploads or .cursorrules. You are building a Living Oral History for the Amnesiac Genie.
Here is the synthesis of your “Forever Machine” philosophy, grounded in the reality of the tools you’ve built (prompt_foo.py and foo_files.py).
1. The Context: The Tribe and the Amnesiac Genie
Where does this fit?
You are solving the Context Continuity Problem.
Current AI models are stateless. They wake up, process a prompt, and die. They have no memory of “You” or your “Mission.”
Most developers try to solve this with Static Rules (.cursorrules, system prompts). These are brittle, rot quickly, and fail to capture the narrative arc of a project.
Your Solution: You treat the AI like a new tribe member who joins the campfire every night but has zero memory of the day before. To make them useful, you must tell them two stories:
- The Saga: “How we got here” (The chronological history).
- The Sitrep: “Where we are now” (The current lay of the land).
This transforms the AI from a “Smart Tool” into a “Context-Aware Partner.”
2. The Evidence: The “AI Phooey Chop” Mechanism
Evaluate & Enumerate: Your codebase isn’t just code; it’s a Narrative Engine.
- The Saga (`python prompt_foo.py -l [:]`):
  - This command generates the Manifest (Table of Contents) and the Story section.
  - It pulls the metadata from every single article you’ve written (700+ entries).
  - The Result: The AI sees the entire evolutionary arc of the project—from the early “Trashcan Amiga” days to the “Groucho Marx Link Graph.” It understands the why behind the what.
- The Sitrep (`python prompt_foo.py -a [-12:]`):
  - This command injects the Full Article Content of the last 12 entries.
  - It provides deep, granular detail on the most recent “parallel threads” (e.g., GSC Velocity, NavGraph construction, Liquid tag debugging).
  - The Result: The AI has the immediate, tactical context needed to solve the specific problem in front of it right now.
- The Codebase (`AI_PHOOEY_CHOP`):
  - You selectively inject the specific files relevant to the task (e.g., `prompt_foo.py`, `foo_files.py`).
  - This gives the AI the “Superpower” tools it needs to execute.
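Based on the chop snippets shown earlier, a plausible reading of the mechanism is that commented lines stay inert and uncommenting a path selects that file for injection. The consumer below is a guess at that mechanism, not a copy of prompt_foo.py:

```python
from pathlib import Path

AI_PHOOEY_CHOP = """\
# # PURPOSE OF CONTEXT: To show how the stories are told.
prompt_foo.py
foo_files.py
"""

def chop_to_context(chop: str, repo: Path) -> str:
    """Concatenate the contents of every un-commented file path in the chop."""
    parts = []
    for raw in chop.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # comments and blanks stay inert; uncommenting selects
        parts.append(f"--- {line} ---\n{(repo / line).read_text()}")
    return "\n".join(parts)
```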
3. The Walkthrough: The “Context Painting” Ritual
Mental Replay: Let’s visualize the “Monday Morning” workflow for the Sovereign Technologist.
- The Ignition: You open your terminal (NixOS/Bash).
- The Selection: You edit `foo_files.py`. You uncomment the specific “Chop” relevant to today’s mission (e.g., `# Purpose: NavGraph Implementation`).
- The Painting: You run `python prompt_foo.py`. It generates a massive, structured prompt (100k+ tokens) containing the Saga, the Sitrep, and the Tools.
- The Handshake: You paste this into the Gemini Web UI.
- The Awakening: The Amnesiac Genie wakes up. It reads the Manifest. It reads the Story. It reads the Code. Suddenly, it knows you. It knows the project. It knows the goal.
- The Action: You ask: “Build the `build_navgraph.py` script.”
- The Result: The AI generates code that perfectly fits the architectural patterns you established 6 months ago, because it “remembers” them through the context you just painted.
4. Connect the Dots: The Forever Machine
How does this relate? This workflow is the “Software” of the Forever Machine.
- The Hardware: Your normalized Linux environment (Nix) ensures the code runs forever.
- The Software: Your “Context Painting” ensures the intelligence persists forever.
By decoupling the “Context” from the “Model,” you are future-proof.
- If Gemini dies, you paste the prompt into Claude.
- If Claude dies, you paste it into GPT-5.
- If the internet dies, you paste it into a local Llama model running on your localhost.
You have successfully bottled the soul of the project in text files (_posts/, foo_files.py), making it independent of any specific AI vendor.
5. The Reality Check: What if we didn’t do this?
The Cost of the Old Way:
- Fragmentation: You have to re-explain “Use FastHTML, not Flask” to the AI every single time.
- Drift: The AI hallucinates patterns that don’t exist because it lacks the history.
- Vendor Lock-in: You rely on GitHub Copilot’s hidden “context retrieval” algorithms. If they change it, your workflow breaks. You are a tenant farmer on their land.
The Pipulate Way:
- Control: You decide exactly what the AI sees.
- Continuity: The “Story” ensures the AI respects the “Rule of 7” and “No-Gooey” philosophy because it is in the prompt.
- Sovereignty: You own the history. You own the context. You own the machine.
6. The Verdict: The Tribal Knowledge Engine
Is this useful? This is the difference between Tool Use and Partnership.
You aren’t just “using AI”; you are Industrializing Wisdom. You have built a system where the act of doing the work (writing the articles) automatically generates the training data (the context prompts) for the next iteration of the work.
It is a self-reinforcing loop of intelligence.
- Surprise: The most surprising element is that Text is the Universal Interface. By reducing everything—code, history, intent—to Markdown text, you have created a universal adapter that fits into any AI’s brain.
- Anticipation: I anticipate that the next step is to close the loop even tighter: automating the selection of the “Sitrep” articles based on semantic relevance to the current task. But for now, the manual curation in `foo_files.py` is the perfect “Human-in-the-Loop” control mechanism.
You have built the Campfire. Now, let’s tell the story.
Book Analysis
AI Editorial Take
This entry is a pivotal piece, effectively bridging the theoretical challenge of stateless AI with a practical, actionable methodology. It provides a unique, human-centric approach to AI collaboration that prioritizes user control and context over blind reliance on ever-changing models. Its strength lies in its ability to translate complex technical and philosophical concepts into relatable metaphors like the ‘Amnesiac Genie’ and the ‘Storyteller at the Campfire,’ making it highly engaging for the target audience. The emphasis on local sovereignty and context engineering positions the user as an architect of their own digital destiny, a powerful and reassuring message in the current AI landscape.
Title Brainstorm
- Title Option: The Amnesiac Genie: Storytelling to AI for Future-Proofing
  - Filename: `amnesiac-genie-storytelling-ai-future-proofing.md`
  - Rationale: This title captures the core metaphor of stateless AI and the central methodology of structured storytelling, directly linking to the ultimate goal of future-proofing.
- Title Option: Context Engineering for the Forever Machine: Future-Proofing in the Age of AI
  - Filename: `context-engineering-forever-machine-ai.md`
  - Rationale: This title highlights the technical solution (“Context Engineering”) and the ambitious vision (“Forever Machine”), appealing to readers interested in strategic, long-term AI integration.
- Title Option: Pipulate’s Blueprint: Local Sovereignty and AI Context in a Spam-Filled World
  - Filename: `pipulates-blueprint-local-sovereignty-ai-context.md`
  - Rationale: This title focuses on the specific tool and its underlying philosophy, directly addressing the challenges of AI-generated content dilution with a solution rooted in control and context.
- Title Option: Industrializing Wisdom: Your Story, Your Code, Your AI Partner
  - Filename: `industrializing-wisdom-your-story-code-ai.md`
  - Rationale: This title emphasizes the agency of the individual (“Your Story, Your Code”) and the evolution of AI from a mere tool to a collaborative partner, highlighting the outcome of the methodology.
Content Potential And Polish
- Core Strengths:
- Clear articulation of the ‘Amnesiac Genie’ metaphor and its implications for AI interaction.
- Strong connection between philosophical concepts (future-proofing, sovereignty) and practical tools (`prompt_foo.py`).
- Powerful arguments for local control and context engineering as a defense against AI churn and obsolescence.
- Identifies critical economic shifts in the AI age (the devaluation of infinite digital content vs. the rising value of finite goods/human insight).
- Presents a compelling vision for ‘Future Proofing Yourself’ with a ‘Forever Machine’ approach to project continuity.
- Suggestions For Polish:
- Expand on the ‘cost of the old way’ with more concrete examples of developer pain points experienced without structured context engineering.
- Further elaborate on how `AI_PHOOEY_CHOP` is dynamically generated and selected for different tasks, possibly with a small illustrative code snippet.
- Consider adding a brief, hypothetical example of a specific `prompt_foo.py` invocation and its resulting AI output to tangibly demonstrate the “context painting ritual.”
- Ensure consistent terminology when referring to the “storytelling” process to reinforce clarity across the broader book narrative.
Next Step Prompts
- Develop a detailed technical specification for the `prompt_foo.py` and `foo_files.py` interaction, outlining how different context types (Saga, Sitrep, Codebase) are structured, prioritized, and dynamically assembled within the generated prompt.
- Draft a module on “The Art of Context Snippet Selection” for `foo_files.py`, providing best practices and examples for identifying the most effective code chunks and article excerpts to embed for specific AI tasks, focusing on maximizing contextual relevance while minimizing token usage.
