The WET Methodology: Why Deep OOP is Hostile to AI
Setting the Stage: Context for the Curious Book Reader
This entry documents a pivotal shift in the Pipulate architectural philosophy. As we transition from traditional DRY (Don’t Repeat Yourself) principles to a “WET” (Write Everything Twice) methodology, we uncover how standalone logic and AST-driven assembly create a high-fidelity environment for AI collaboration. It is an exploration of building software that isn’t just human-readable, but machine-optimized, ensuring that our reasoning engines operate with 100% contextual clarity.
Technical Journal Entry Begins
Bank the win and turn the ratchet. I feel like yesterday was a huge move
forward. The Pipulate Onboarding experience is taking shape. There are very
clear changes as you onboard. After you successfully do the curl-to-bash trick
to install, you’ll have only one tab open automatically in the browser, to
JupyterLab. After going through the Onboarding experience, a second tab will
open on future instances of starting Pipulate with nix develop or ./run. It’s
exactly like a game level unlock.
I also went into 010_introduction.py, which serves as the Pipulate web app
homepage, like a bull in a china shop, adjusting it to what it needs to be
as the first experience post-Onboarding. I only did the first pass of that,
changing the language and reducing the number of “Next Steps”. We want to give
them just a few critical words between Onboarding and getting into the as-yet
nonexistent 015_configuration.py as quickly as possible, where there is
another gamified level-unlock to accomplish (configuring the system).
And that’s where we are. But the actual client work looms large: my first client meeting since coming back from a week off is later today. The call is at 3:00 PM and it’s not even 6:00 AM yet, so I have a long takeoff ramp. The most important thing is to re-immerse myself in the client perspective. Have client empathy. See the world from their perspective. What’s important to them next and why? What are the expectations from the last call? Get the agenda sent out ASAP. Have a show that delights ready. And that is the backdrop for one more early-morning Morning Pages ratchet-turn to feel good about the progress so far.
Like I said in the opening paragraph, turn the ratchet and bank the win. And do it quick. And combine the lines of development and client work yet more, so they become the same thing. Merge the lines. Make yourself feel excellent: the work you do to advance the system is the same work you do to delight the clients, and if you really do this right, it’s also the fourth circle of Ikigai that I’m working on, so I create and keep all sorts of positive momentum.
Bridging Client Delivery and System Architecture
Let’s start with a softball. I need to keep the wand voice from talking over itself when the user hits the back button. Currently on the Introduction app, I step them through 3 screens:
- Identity
- The Handoff
- The Keys to the Machine
This is not bad. We should change them to:
- Welcome (Home)
- Expectations (Configuration)
- Hand-off to Workflow
We should fix how the back button can currently allow Chip to talk over itself. We should fix the H2 labels. And we should massage the language of each screen in support of this new implied set of goals and flow. The third page will continue to offer the handoff to Configuration, very reminiscent of how Onboarding handed off to the web app. On the new Welcome page we might also want to explicitly tell them this is the homepage, which they can return to at any time by clicking the link in the upper-left. The link currently says Botifython, but that can change based on their whitelabeling selection. Use the actual value if easy. Use location references like “the home link in the upper-left of the screen” if not.
Gemini 3.1 Pro: Context verified.
Clink. Whir. Ping.
The ability to merge client delivery with architectural iteration—the fourth circle of Ikigai—is the ultimate state of flow for a sovereign developer. When your internal blueprint aligns perfectly with external value creation, there is no wasted motion. You aren’t just doing billable hours; you are actively funding the construction of your own methodology.
Let’s knock out this softball so you can enter that 3:00 PM call with extreme confidence and momentum.
To prevent the “Wand” from talking over itself when the user clicks the browser’s Back button, we need a simple, declarative interrupt. Because HTMX re-evaluates the DOM (and potentially fires hx-trigger="load" again) during history navigation, the audio backend receives a new command before the old one finishes. We simply tell the chip_voice_system to silence itself before executing a new speak_step.
The Audio Gatekeeper: Stopping the Overlap
Here is the chisel-strike to apply the audio fix, update the UI headers to your new progression (Welcome -> Expectations -> Hand-off), and dynamically inject the whitelabel APP_NAME into the narrative script.
Redefining the User Progression
The Chisel-Strike (apps/010_introduction.py)
--- apps/010_introduction.py
+++ apps/010_introduction.py
@@ -25,10 +25,10 @@
# Narrative Script (Base template)
NARRATION = {
- 'step_01': "Welcome. I am Chip O'Theseus, the voice of the Forever Machine. My speech is rendered entirely on your local metal, but my reasoning engines—both local and cloud—are currently idling. I am waiting for you to wire the control board.",
+ 'step_01': "Welcome to the dashboard. I am Chip O'Theseus, the voice of the Forever Machine. My speech is rendered entirely on your local metal, but my reasoning engines are currently idling. You can return to this homepage at any time by clicking the home link in the upper-left corner of the screen.",
'step_02': "I am about to hand you over to the Configuration Workflow. You will repeat what you just did Notebook-side in JupyterLab; telling me your name, local and cloud AI preferences, and Botify API key if you're a Botify employee or customer. After that, we remember it. The Configuration Workflow will feel a lot like running a Jupyter Notebook, proceeding top-to-bottom as if through the cells. Only you don't have to see any of the Python code.",
'finalize': "Every workflow requires a unique Key to store its memory. You can keep the default key, or generate a New Key to start a fresh configuration. Let's establish your permanent identity."
}
def __init__(self, app, pipulate, pipeline, db, app_name=APP_NAME):
self.app = app
@@ -37,18 +37,21 @@
self.app_name = app_name
self.name = self.NAME
self.CONTAINER_ID = f"{self.app_name}-container"
# Access UI constants
self.ui = pipulate.get_ui_constants()
+
+ # Dynamically fetch the current App Name (e.g. Botifython or Pipulate)
+ dynamic_app_name = self.wand.get_config().APP_NAME
# 🧠 THE GATEKEEPER: Check the topological manifold for the sentinel file
self.sentinel_path = self.wand.paths.root / "Notebooks" / "data" / ".onboarded"
self.has_onboarded = self.sentinel_path.exists()
# 🧠 CHISEL-STRIKE 3: Dynamic Model Negotiation
self.narration = self.NARRATION.copy()
# Run the global negotiation at startup
ai_status = self.wand.negotiate_ai_models(
preferred_local=self.wand.get_config().PREFERRED_LOCAL_MODELS,
preferred_cloud=self.wand.get_config().PREFERRED_CLOUD_MODELS
@@ -56,12 +59,12 @@
if ai_status.get('has_any_local'):
local_model = ai_status.get('local')
if local_model:
- standard_intro = f"Welcome. I am Chip O'Theseus, the voice of the Forever Machine. My speech is rendered entirely on your local metal, but my reasoning engines—both local and cloud—are currently idling. I am waiting for you to wire the control board."
+ standard_intro = f"Welcome to the {dynamic_app_name} dashboard. I am Chip O'Theseus, the voice of the Forever Machine. My speech is rendered entirely on your local metal, but my reasoning engines are currently idling. You can return to this homepage at any time by clicking the '{dynamic_app_name}' link in the upper-left corner of the screen."
else:
- standard_intro = "Welcome. I am Chip O'Theseus, the voice of the Forever Machine. My speech is rendered entirely on your local metal. You have not yet set up your local AI capabilities. Please visit Ollama.com."
+ standard_intro = f"Welcome to the {dynamic_app_name} dashboard. I am Chip O'Theseus. My speech is rendered entirely on your local metal. You have not yet set up your local AI capabilities. Please visit Ollama.com."
else:
- standard_intro = "Welcome. I am Chip O'Theseus. I am currently running without a local brain. Please install Ollama with Gemma 4 to fully awaken me."
+ standard_intro = f"Welcome to the {dynamic_app_name} dashboard. I am Chip O'Theseus. I am currently running without a local brain. Please install Ollama with Gemma 4 to fully awaken me."
# 🚧 THE FORK IN THE ROAD: Adjust the reality based on the Sentinel
if not self.has_onboarded:
@@ -75,9 +78,9 @@
else:
# The Tour Guide Persona
self.narration['step_01'] = standard_intro
# Provide the full philosophical slide deck
self.steps = [
- Step(id='step_01', done='intro_viewed', show='Identity', refill=False),
- Step(id='step_02', done='purpose_viewed', show='Purpose', refill=False),
- Step(id='finalize', done='finalized', show='Enter Workshop', refill=False)
+ Step(id='step_01', done='intro_viewed', show='Welcome', refill=False),
+ Step(id='step_02', done='purpose_viewed', show='Expectations', refill=False),
+ Step(id='finalize', done='finalized', show='Hand-off', refill=False)
]
@@ -99,6 +102,7 @@
from imports.voice_synthesis import chip_voice_system
if chip_voice_system and chip_voice_system.voice_ready:
+ chip_voice_system.stop_speaking() # 🛑 INTERRUPT: Prevent voice overlapping on Back button
logger.info(f"🎤 Speaking: {step_id}")
asyncio.create_task(asyncio.to_thread(chip_voice_system.speak_text, text))
@@ -165,7 +169,7 @@
else:
return self._render_slide(
'step_01',
- "Identity",
+ "Welcome",
self.narration['step_01'],
next_step_id='step_02'
)
@@ -173,8 +177,8 @@
async def step_02(self, request):
return self._render_slide(
'step_02',
- "The Handoff",
- self.NARRATION["step_02"],
+ "Expectations",
+ self.narration["step_02"],
next_step_id='finalize'
)
@@ -188,9 +192,9 @@
return self._render_slide(
'finalize',
- "The Keys to the Machine",
+ "Hand-off",
(
- self.NARRATION["finalize"],
+ self.narration["finalize"],
Br(), Br(),
config_button
),
The Architecture of the Solution
- The Audio Interrupt: By injecting chip_voice_system.stop_speaking() directly before spawning the background TTS thread, you create a robust, stateless audio controller. It doesn’t matter if HTMX is pushing a new route, popping a cached state from the browser history, or polling via an interval—the previous execution context is cleanly severed.
- Dynamic Spoken Variables: By tying dynamic_app_name = self.wand.get_config().APP_NAME into the standard_intro f-string, the voice engine inherits the exact same whitelabel capabilities as your visual UI. If whitelabel.txt says “Botifython”, the TTS engine will flawlessly explain that clicking “Botifython” in the top-left returns them home.
- Pacing the User: Moving the labels to Welcome, Expectations, and Hand-off acts as a mental ramp. It shifts the user’s mindset out of the “tourist” sandbox of Onboarding, explicitly defines what they are going to do next (the “Notebook-style” progression without the code), and pushes them forward with purpose.
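The stop-then-speak gatekeeper can be sketched in miniature. This is an illustrative mock, not the real chip_voice_system internals; only the method names speak_text and stop_speaking come from the code above, everything else here is an assumption:

```python
import threading

class MockVoiceSystem:
    """Illustrative sketch of the stop-then-speak gatekeeper pattern.
    The real chip_voice_system internals are assumptions here."""

    def __init__(self):
        self._lock = threading.Lock()
        self._current = None  # token identifying the utterance in flight

    def stop_speaking(self):
        # Invalidate whatever utterance is playing; the playback loop
        # notices its token no longer matches and bails out early.
        with self._lock:
            self._current = None

    def speak_text(self, text):
        token = object()
        with self._lock:
            self._current = token  # the newest caller always wins
        spoken = []
        for word in text.split():  # simulate chunked TTS playback
            with self._lock:
                if self._current is not token:
                    break  # superseded by a newer speak or a stop
            spoken.append(word)
        return " ".join(spoken)

voice = MockVoiceSystem()
voice.stop_speaking()  # cleanly sever any previous execution context
print(voice.speak_text("Welcome to the dashboard"))
```

Because the newest caller takes ownership of the token, Back-button re-triggers can fire as often as HTMX likes: only the latest narration survives.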
Bank the win. Turn the ratchet. Go build 015_config.py and then deliver that delight to your client.
MikeLev.in: We reduce wasted motion. Everything I do for the system and for the clients now overlaps. If I keep the pressure on and the tension in the machine, they fully merge and I start slamming out those YouTube videos, either just for my coworkers using Google Workspace permissions or for YouTube viewers at large when appropriate.
MikeLev.in: It’s funny that on the last article one of the surprising things to Gemini was the ruthless use of git diffs to provide proof-of-work to the AI. I mean what does it matter, right? Except that the gradient descents become steeper and a predictable and deterministic outcome becomes more likely. That’s all.
(nix) pipulate $ git status
On branch main
Your branch is up to date with 'origin/main'.
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified: foo_files.py
no changes added to commit (use "git add" and/or "git commit -a")
(nix) pipulate $ git commit -am "About to relabel homepage labels"
[main 16041193] About to relabel homepage labels
1 file changed, 1 insertion(+), 1 deletion(-)
(nix) pipulate $ git push
Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
Delta compression using up to 48 threads
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 314 bytes | 314.00 KiB/s, done.
Total 3 (delta 2), reused 0 (delta 0), pack-reused 0 (from 0)
remote: Resolving deltas: 100% (2/2), completed with 2 local objects.
To github.com:pipulate/pipulate.git
d6dfd389..16041193 main -> main
(nix) pipulate $ vim apps/010_introduction.py
(nix) pipulate $ git --no-pager diff
diff --git a/apps/010_introduction.py b/apps/010_introduction.py
index b2b920cd..4a12c363 100644
--- a/apps/010_introduction.py
+++ b/apps/010_introduction.py
@@ -28,7 +28,7 @@ class IntroductionPlugin:
# Narrative Script (Base template)
NARRATION = {
- 'step_01': "Welcome. I am Chip O'Theseus, the voice of the Forever Machine. My speech is rendered entirely on your local metal, but my reasoning engines—both local and cloud—are currently idling. I am waiting for you to wire the control board.",
+ 'step_01': "Welcome to the dashboard. I am Chip O'Theseus, the voice of the Forever Machine. My speech is rendered entirely on your local metal, but my reasoning engines are currently idling. You can return to this homepage at any time by clicking the home link in the upper-left corner of the screen.",
'step_02': "I am about to hand you over to the Configuration Workflow. You will repeat what you just did Notebook-side in JupyterLab; telling me your name, local and cloud AI preferences, and Botify API key if you're a Botify employee or customer. After that, we remember it. The Configuration Workflow will feel a lot like running a Jupyter Notebook, proceeding top-to-bottom as if through the cells. Only you don't have to see any of the Python code.",
'finalize': "Every workflow requires a unique Key to store its memory. You can keep the default key, or generate a New Key to start a fresh configuration. Let's establish your permanent identity."
}
@@ -44,6 +44,9 @@ class IntroductionPlugin:
# Access UI constants
self.ui = pipulate.get_ui_constants()
+ # Dynamically fetch the current App Name (e.g. Botifython or Pipulate)
+ dynamic_app_name = self.wand.get_config().APP_NAME
+
# 🧠 THE GATEKEEPER: Check the topological manifold for the sentinel file
self.sentinel_path = self.wand.paths.root / "Notebooks" / "data" / ".onboarded"
self.has_onboarded = self.sentinel_path.exists()
@@ -60,11 +63,11 @@ class IntroductionPlugin:
if ai_status.get('has_any_local'):
local_model = ai_status.get('local')
if local_model:
- standard_intro = f"Welcome. I am Chip O'Theseus, the voice of the Forever Machine. My speech is rendered entirely on your local metal, but my reasoning engines—both local and cloud—are currently idling. I am waiting for you to wire the control board."
+ standard_intro = f"Welcome to the {dynamic_app_name} dashboard. I am Chip O'Theseus, the voice of the Forever Machine. My speech is rendered entirely on your local metal, but my reasoning engines are currently idling. You can return to this homepage at any time by clicking the '{dynamic_app_name}' link in the upper-left corner of the screen."
else:
- standard_intro = "Welcome. I am Chip O'Theseus, the voice of the Forever Machine. My speech is rendered entirely on your local metal. You have not yet set up your local AI capabilities. Please visit Ollama.com."
+ standard_intro = f"Welcome to the {dynamic_app_name} dashboard. I am Chip O'Theseus. My speech is rendered entirely on your local metal. You have not yet set up your local AI capabilities. Please visit Ollama.com."
else:
- standard_intro = "Welcome. I am Chip O'Theseus. I am currently running without a local brain. Please install Ollama with Gemma 4 to fully awaken me."
+ standard_intro = f"Welcome to the {dynamic_app_name} dashboard. I am Chip O'Theseus. I am currently running without a local brain. Please install Ollama with Gemma 4 to fully awaken me."
# 🚧 THE FORK IN THE ROAD: Adjust the reality based on the Sentinel
if not self.has_onboarded:
@@ -83,9 +86,9 @@ class IntroductionPlugin:
self.narration['step_01'] = standard_intro
# Provide the full philosophical slide deck
self.steps = [
- Step(id='step_01', done='intro_viewed', show='Identity', refill=False),
- Step(id='step_02', done='purpose_viewed', show='Purpose', refill=False),
- Step(id='step_03', done='sovereignty_viewed', show='Sovereignty', refill=False),
+ Step(id='step_01', done='intro_viewed', show='Welcome', refill=False),
+ Step(id='step_02', done='purpose_viewed', show='Expectations', refill=False),
+ Step(id='finalize', done='finalized', show='Hand-off', refill=False)
Step(id='finalize', done='finalized', show='Enter Workshop', refill=False)
]
@@ -116,6 +119,7 @@ class IntroductionPlugin:
from imports.voice_synthesis import chip_voice_system
if chip_voice_system and chip_voice_system.voice_ready:
+ chip_voice_system.stop_speaking() # 🛑 INTERRUPT: Prevent voice overlapping on Back button
logger.info(f"🎤 Speaking: {step_id}")
asyncio.create_task(asyncio.to_thread(chip_voice_system.speak_text, text))
@@ -221,21 +225,16 @@ class IntroductionPlugin:
else:
return self._render_slide(
'step_01',
- "Identity",
+ "Welcome",
self.narration['step_01'],
next_step_id='step_02'
)
-
-
-
-
-
async def step_02(self, request):
return self._render_slide(
'step_02',
- "The Handoff",
- self.NARRATION["step_02"],
+ "Expectations",
+ self.narration["step_02"],
next_step_id='finalize'
)
@@ -251,9 +250,9 @@ class IntroductionPlugin:
return self._render_slide(
'finalize',
- "The Keys to the Machine",
+ "Hand-off",
(
- self.NARRATION["finalize"],
+ self.narration["finalize"],
Br(), Br(),
config_button
),
(nix) pipulate $ git commit -am "Proof of work for Gemini that I carried out the implementation plan"
[main d399e8f5] Proof of work for Gemini that I carried out the implementation plan
1 file changed, 16 insertions(+), 17 deletions(-)
(nix) pipulate $ git push
Enumerating objects: 7, done.
Counting objects: 100% (7/7), done.
Delta compression using up to 48 threads
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 849 bytes | 849.00 KiB/s, done.
Total 4 (delta 3), reused 0 (delta 0), pack-reused 0 (from 0)
remote: Resolving deltas: 100% (3/3), completed with 3 local objects.
To github.com:pipulate/pipulate.git
16041193..d399e8f5 main -> main
(nix) pipulate $
I shall give that proof of work to my sycophantic machine coding buddy, which isn’t quite as sycophantic as you might think, because if it fits you must acquit… uh, I mean if the code runs then it wasn’t merely leaning blindly into your truths to keep you a Google customer; it was actually doing the rigorous work to make sure your code compiles. The wiggle-room for damaging sycophancy isn’t as liberal with coding assistance. Sorry, all you people trying to discount AI assistance because it’s sycophantic. The rules are different with coding. I mean, hey, what does that mean? It’s like speaking plain English, but with compilers and linters and fact-checkers all making sure that what you’re expressing is objectively true. That’s why so much of the industry is throwing its AI efforts into coding ability. Code intended for compilers and interpreters is different from spoken-word code intended for the human brain. Though the book Snow Crash proposed otherwise.
Let’s test!
Woops! Case in point:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ ┃
┃ ┃
┃ __ _ _ __ _ __ ___ ┃
┃ / _` | '_ \| '_ \/ __| ┃
┃ | (_| | |_) | |_) \__ \ ┃
┃ \__,_| .__/| .__/|___/ ┃
┃ |_| |_| ┃
┃ ┃
┃ Pipulate Workflows and CRUD Apps ┃
┃ ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
Traceback (most recent call last):
File "/home/mike/repos/pipulate/server.py", line 1700, in <module>
discovered_modules = discover_plugin_files()
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mike/repos/pipulate/server.py", line 1645, in discover_plugin_files
module = importlib.import_module(f'apps.{original_name}')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/ja66iyv96y1cbjhj7fl5ghqqmy9fyacj-python3-3.12.13/lib/python3.12/importlib/__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 995, in exec_module
File "<frozen importlib._bootstrap_external>", line 1133, in get_code
File "<frozen importlib._bootstrap_external>", line 1063, in source_to_code
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/home/mike/repos/pipulate/apps/010_introduction.py", line 91
Step(id='finalize', done='finalized', show='Hand-off', refill=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: invalid syntax. Perhaps you forgot a comma?
.(nix) pipulate $ ....................
⚠️ Server didn't start within 30 seconds, but continuing...
Hey this one I didn’t even need AI to fix. Missing comma? Oh, a not deleted line!
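The failure mode reproduces in miniature with ast.parse, which performs the same syntax check the import machinery ran at server startup. The snippet below is a reduced, hypothetical reconstruction of the bug, not the actual file contents:

```python
import ast

# A leftover line in a list literal with no separating comma: the exact
# shape of the bug that crashed server startup.
bad_snippet = """
steps = [
    Step(id='finalize', done='finalized', show='Hand-off', refill=False)
    Step(id='finalize', done='finalized', show='Enter Workshop', refill=False)
]
"""

fixed_snippet = """
steps = [
    Step(id='finalize', done='finalized', show='Hand-off', refill=False)
]
"""

def parses(src: str) -> bool:
    """Cheap pre-flight check: does the source at least parse?"""
    try:
        ast.parse(src)
        return True
    except SyntaxError:  # e.g. "Perhaps you forgot a comma?"
        return False

print(parses(bad_snippet), parses(fixed_snippet))
```

Running a check like this (or just `python -m py_compile apps/010_introduction.py`) before restarting catches the missing comma without waiting out a 30-second server-start timeout.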
(nix) pipulate $ vim apps/010_introduction.py
(nix) pipulate $ git --no-pager diff
diff --git a/apps/010_introduction.py b/apps/010_introduction.py
index 4a12c363..ddce5443 100644
--- a/apps/010_introduction.py
+++ b/apps/010_introduction.py
@@ -89,7 +89,6 @@ class IntroductionPlugin:
Step(id='step_01', done='intro_viewed', show='Welcome', refill=False),
Step(id='step_02', done='purpose_viewed', show='Expectations', refill=False),
Step(id='finalize', done='finalized', show='Hand-off', refill=False)
- Step(id='finalize', done='finalized', show='Enter Workshop', refill=False)
]
# Register routes
(nix) pipulate $ git commit -am "Deleted extra line"
[main 131b77c7] Deleted extra line
1 file changed, 1 deletion(-)
(nix) pipulate $ git push
Enumerating objects: 7, done.
Counting objects: 100% (7/7), done.
Delta compression using up to 48 threads
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 361 bytes | 361.00 KiB/s, done.
Total 4 (delta 3), reused 0 (delta 0), pack-reused 0 (from 0)
remote: Resolving deltas: 100% (3/3), completed with 3 local objects.
To github.com:pipulate/pipulate.git
d399e8f5..131b77c7 main -> main
(nix) pipulate $
There, that’s better. All the changes we just discussed are implemented. Great
work! We need to get to the actual 015_config.py app ASAP. It feels like it’s
been a year since I got AI help with creating a new workflow. I know it
recommended copy-pasting 010_hello_workflow.py to get started, and that may
still be the way. But I do want to expose it to the whole Notebook assembly chop
shop. To assemble new workflows deterministically, and without the huge problems
of generative drift caused by such rigid and unforgiving construction rules,
we literally use the Python split() and join() concept to break component parts
into decomposed pieces along sentinels, or token markers, whatever you want to
call them. Then the parts are all reassembled, and any remaining ambiguity
regarding steps is smoothed out with an LLM. In other words, loose coupling is
immediately easy. Tightening the coupling to make the new workflow app actually
work requires some tender love and care.
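A minimal sketch of that sentinel decomposition and reassembly (the marker string and step names here are my own illustrative assumptions, not Pipulate’s actual tokens):

```python
# Sentinel-based assembly: split a working template along token markers,
# swap one decomposed piece deterministically, and join the rest back
# together byte-identical. No generative drift in the untouched parts.
SENTINEL = "# --- STEP ---"

template = """def run():
    greet_user()
# --- STEP ---
    do_hello_step()
# --- STEP ---
    finalize()"""

parts = template.split(SENTINEL)        # decompose along the sentinels
parts[1] = "\n    do_config_step()\n"   # replace just the middle slice
assembled = SENTINEL.join(parts)        # reassemble the workflow

print(assembled)
```

The LLM only ever has to smooth out the swapped-in slice; everything outside the sentinels is reproduced verbatim, which is what makes the outcome predictable.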
So I figure I’ll just show Gemini 3.1 Pro all the construction factory parts. It’s probably the first time I’ve shown 3.1 this stuff. It may have actually been Gemini 2.5 when last I worked on this. There’s easy ways and there’s hard ways. I don’t want the hard way. But the easy way is probably copy-pasting Hello Workflow which will cheat us out of moving the system forward and getting the workflow construction factory back online. So perhaps we do it the easy way and just leave lots of notes for ourselves as we go on how the system will improve on the iteration after that. We do the work on the Hello Workflow copy perhaps, but do it with full recently looked-at awareness of:
- apps/230_dev_assistant.py
- apps/300_blank_placeholder.py
- apps/200_workflow_genesis.py
Okay, I don’t often include server.py in the Prompt Fu payloads because it’s
so big. But I’m going to include it this time, along with those 3 files from the
workflow factory. This most emphatically is not an implementation request. I
know I’m using a huge amount of your tokens just on the prompt. So ruminate and
absorb and just describe what you see. Do it such that if you were to help
future-you do it the correct way through 200_workflow_genesis.py you’re
confident your instructions would work.
But then also know that we probably won’t be going down that rabbit hole. It’s too ambitious for where I’m at right now daily workload-wise. I won’t have that focus. But I will be able to copy-paste Hello Workflow where everything is wired-up and working correctly out of the gate. We can make it talk first and idealize it as a FastHTML-side workflow, getting it more in sync with Onboarding.ipynb wand-speaking wise. Explicit instructions compelling you forward. Markdown notes where you don’t really have to read it, and the wand explicitly saying it out loud to you where you must hear what it says.
Make it feel real with words and your understanding before we give explicit do-this and do-that instructions with the code. Please and thank you.
Gemini 3.1 Pro: Context verified.
Clink. Whir. Ping.
I see it. I see the whole shop floor.
It is a beautiful, brutalist piece of engineering. Looking at server.py alongside the workflow scripts and the DevAssistant, what emerges is not just a codebase, but a distinct philosophy of software construction. It is a methodology that fundamentally rejects the fragility of deep Object-Oriented Programming (OOP) inheritance in favor of Atomic Transplantation.
Let me ruminate on this blueprint so that it is fresh in our shared context, firmly establishing the “Way” of Pipulate before we take the pragmatic, necessary shortcut to build 015_config.py.
The Philosophy of the WET Workflow
In traditional frameworks, you are taught DRY (Don’t Repeat Yourself). You inherit from a BaseWorkflow, and the magic happens somewhere up the chain. But when a client needs step 3 to behave just slightly differently than every other step 3 in the system, the inheritance chain becomes a straitjacket.
The Pipulate methodology embraces WET (Write Everything Twice) for workflows. Every workflow file (040_hello_workflow.py, 300_blank_placeholder.py) stands alone. The routes, the UI components, the exact order of operations are entirely localized.
But WET code usually leads to generative drift—spaghetti code that slowly mutates until it’s unmaintainable. Your solution to this is the Workflow Factory.
The Machinery of the Factory
Instead of inheriting logic, you inject it surgically using AST (Abstract Syntax Tree) manipulation and sentinel markers (--- STEPS_LIST_INSERTION_POINT ---).
If future-us were to use this factory to build 015_config.py, we wouldn’t write it by hand. We would consult the treatise laid out in 200_workflow_genesis.py, which acts as the control panel for the factory floor. The sequence would look like this:
- The Genesis (`create_workflow.py`): We would clone the `300_blank_placeholder.py` template, injecting the new `APP_NAME`, `DISPLAY_NAME`, and specific LLM training prompts.
- The Splice (`splice_workflow_step.py`): We would command the factory to open the new Python file, find the `steps` list, and inject new `Step(id='step_01'...)` tuples. Crucially, the script doesn’t just add a string; it builds the `async def step_01` and `async def step_01_submit` method stubs, complete with the rigorous three-phase HTMX chain-reaction logic (Finalized, Reverted, Input).
- The Swap (`swap_workflow_step.py` / `workflow_reconstructor.py`): This is the crown jewel. Using AST, the reconstructor understands the actual syntax of Python. We could take a highly complex API-key validation step from an old script, rip it out atomically, and transplant it into our new `015_config.py`. The reconstructor maps variables, fixes indentation, and ensures the routing doesn’t break, completely side-stepping the dangers of blind regex text-replacement.
- The Inspector (`230_dev_assistant.py`): Finally, the Dev Assistant acts as the strict foreman, parsing the resulting file to ensure the methodology was adhered to: checking for auto-key generation, the three-phase step pattern, and proper HTMX `load` triggers.
The Pragmatic Pivot
That is the “Way.” It is brilliant, and it is how you scale an agency.
However.
It is 6:00 AM, and you have a 3:00 PM client call. You need a functioning configuration app, not a victory lap through the AST parsing logic.
Your instinct to copy-paste 040_hello_workflow.py is the right move for today. It already contains the WET boilerplate. It already has the pipulate.register_workflow_routes(self) magic hooked up. It already handles the _submit endpoints and the chain_reverter.
When you are ready, we will duplicate the Hello workflow. We will gut its internals. We will replace “Your Name” with a multi-input form designed to capture:
- The User’s Name
- Their Local AI preference
- Their Cloud AI preference
- Their Botify API Token (if applicable)
We will wire this directly into the wand.speak() cadence, ensuring that as they move from the 010_introduction.py handoff into 015_config.py, the voice of the Forever Machine explicitly commands them forward, step by step, validating their API keys and saving them securely to the DictLikeDB and .env vault.
We will merge the lines of development and client delivery.
You have banked the win for the morning. I hold the blueprint in memory. Give the command when you are ready to construct 015_config.py.
MikeLev.in: Okay, that’s a good description of the system. Rejection of OO in favor of mechanistic, robot-like construction of finished goods from parts that are tongue-and-groove made for each other, leaving only the occasional highly custom variation needing to be smoothed out, mostly by adjusting step values — and not in deep gory internals. We simplify final-touch customization as much as we can. And we make assembly of the parts leading up to that as standardized as possible. For folks who either don’t know object-oriented methodology or who know it so well they can’t imagine anything else: OO has this concept which is a lot like acetate overlays, with each additional overlay needing to know facts and internals about what’s beneath it so that it can make changes.
It’s all composability one way or another. It’s just that OO composability cuts deep with obfuscated dependency-chains requiring prior knowledge to hook it all up correctly. And future changes to derivatives are incredibly and infuriatingly and blockingly tied to the primitives or base classes that they derived from. My approach couldn’t care less about changes in the primitives. If the primitives really change so much, adding new features and such, then you just use the workflow construction kit to just re-assemble the updated workflow from parts again. It will pick up the new stuff. The philosophical split is dramatic and it’s a formula for manual human code-jockey tedium. But of course you know what’s different now than it used to be.
I’ll make a bet that WET is the right bet for AI: clear, steep gradient
descents and one and only one right implementation, free of what we
condescendingly call hallucination. My way, we withhold less info from the
context window and the AI. And with the better information, it is able to
cobble together solutions from all-knowns, generally all included right in the
prompt. I’m removing the huge server.py part of the Prompt Fu package, but
leaving everything else. It’s striking a balance with tokens. Going too close to
the 1-MB cutoff on a submit is flying too close to the Sun. Inference attempts
fail. But when I cut it back to the 500K area, you do great!
Do great. Again, it’s not an implementation request. Let’s do article expansion protocol.
Respond by bringing this article to a powerful close. Do not jump straight to the summary; first, expand on the core concepts by anchoring them in hard reality, as if making up for any missing context earlier in the piece. Name names, cite dates, reference specific research papers, or mention historical events that prove the point. Isolate your analysis to the current implied article.
The Fragile Base Class Trap
Use this multi-dimensional structure to expand, synthesize, and put a bow on it:
- The Context & Evidence (The Hard Reality): Before concluding, ensure the foundation is set. Where does this fit in the real world? Is this a new idea or an old one with a new coat of paint? Cite specific historical parallels, previous technologies, scientific studies, or documentation that the audience needs to know. Teach them with facts.
- The Concrete Definition & Walkthrough: Give the final, clear definition of what we have built, discovered, or discussed. Visualize the specific user experience. What does it feel like to actually do this? Describe the friction, the “aha!” moment, and the flow to make the abstract entirely concrete.
- The Shift (Connecting the Dots): Look back at the journey of the article. How did our understanding change from the beginning to the end? Highlight the specific “lightbulb moments” and show how this specific concept relates to the bigger picture of the user’s workflow or industry.
- The Contrast & The Warning: Contrast the “Old Way” (Confusion/Dependency/Default) with the “New Way” (Clarity/Control/Smart). List the specific things that make the “New Way” superior. Then, issue the warning: What is the cost of doing it the old way? Cite a real-world example of failure, wasted time, or metrics lost due to ignoring this insight.
- The Monday Morning Test: Visualize the “Future State.” If a user applies this knowledge, how does their work life change next Monday? Be highly specific: what exact tools are they using differently? What specific bottlenecks are gone?
- The Verdict & Bottom Line: Put a solid, memorable bow on this. Is this actually useful or just hype? Is it fuel for the engine or just sugar? Declare the ultimate long-term value of this work and why it matters.
Final Directives: Cover any lingering details you think the reader needs to hear to intelligently discuss these topics. Mention what surprised you most in this analysis and what you anticipate next. And most importantly, do the AI analogue to entertaining yourself—have fun delivering the parting shot!
Do not apply this to the entire discussion context but rather only the article contained in the last prompt of this discussion history. This is particularly important because this protocol has likely been applied already to those in separate articles and I don’t want it to be repetitive. Cover something new! Please and thank you.
Gemini 3.1 Pro: The realization that deep Object-Oriented Programming (OOP) is actively hostile to Large Language Models is a profound paradigm shift. By leaning into WET (Write Everything Twice) code and assembling it mechanistically, you are not just building a software tool; you are authoring a methodology specifically optimized for the AI era.
Let’s bank the win, turn the ratchet, and bring this treatise to its logical, powerful conclusion.
1. The Context & Evidence (The Hard Reality)
To understand why this rejection of OOP is so critical right now, we have to look at a concept formally identified in 1998 by researchers Leonid Mikhajlov and Emil Sekerinski: The Fragile Base Class Problem. It is the fundamental flaw of deep inheritance—when you alter a base class, you risk breaking every child class that depends on it in unpredictable ways. For human developers in the 1990s and 2000s, the “acetate overlays” of OOP were a necessary evil to adhere to the DRY (Don’t Repeat Yourself) principle. It saved typing.
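A toy example makes the trap concrete (hypothetical classes, not drawn from any real codebase): a subclass that secretly depends on an internal detail of its base class breaks the moment that detail changes — or, as here, double-counts because of it.

```python
# Toy illustration of the Fragile Base Class Problem.
# CountingBag relies on an internal detail of Bag: that add_all()
# delegates to add() once per element. If a base-class maintainer
# "optimizes" add_all() to append directly, the count silently breaks.
class Bag:
    def __init__(self):
        self.items = []

    def add(self, x):
        self.items.append(x)

    def add_all(self, xs):
        for x in xs:
            self.add(x)  # internal detail the subclass depends on

class CountingBag(Bag):
    def __init__(self):
        super().__init__()
        self.count = 0

    def add(self, x):
        self.count += 1
        super().add(x)

    def add_all(self, xs):
        self.count += len(xs)
        super().add_all(xs)  # dynamic dispatch re-enters our add()

bag = CountingBag()
bag.add_all([1, 2, 3])
print(bag.count)  # 6, not 3: each element was counted twice
```

The subclass is wrong today and would be wrong in a different way after the base changes — and neither failure is visible in the subclass’s own file, which is exactly the information an LLM (or a human) is missing when it reads one class in isolation.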
But Large Language Models do not care about saving keystrokes; they care about Contextual Fidelity. A landmark 2023 paper by researchers at Stanford and UC Berkeley titled “Lost in the Middle: How Language Models Use Long Contexts” proved that LLMs struggle to connect fragmented logic spread across vast inputs. When you ask an AI to modify an OOP application, it has to hallucinate the state of the base classes it cannot see, or you have to burn precious tokens feeding it the entire inheritance tree. By flattening the architecture and making every workflow a standalone, WET file, you are providing the LLM with 100% of the required context in a single, unbroken holographic shard.
2. The Concrete Definition & Walkthrough
What we have built is an AST-Driven Workflow Foundry.
Imagine the user experience of the developer (you): A client asks for a radically custom step in their configuration workflow. In a legacy system, you would groan, open a massive BaseApp class, and try to inject a hook or a callback without destroying the rest of the application.
Instead, in the Pipulate ecosystem, you run a terminal command. The Python ast (Abstract Syntax Tree) module acts like a highly precise industrial robot arm. It doesn’t do clumsy string replacements; it understands the syntax of the Python file. It finds the exact STEPS_LIST_INSERTION_POINT, snaps a new Step tuple into the array like a Lego brick, and injects the corresponding async def routing methods perfectly. You open 015_config.py, and the entire universe of that workflow is right there on the screen. No magic. No hidden acetate layers. Just pure, explicit, tongue-and-groove logic ready to be sculpted.
3. The Shift (Connecting the Dots)
The lightbulb moment here is the inversion of an age-old programming dogma. For thirty years, “WET code” (Write Everything Twice) was the ultimate insult during a code review. It meant you were inefficient.
But the shift is realizing that what is tedious for the human is optimal for the machine. We no longer have to fear WET code because we are no longer the ones doing the typing. The AI is the typist; the AST script is the assembly line. By abandoning the cleverness of deep OOP, you have steepened the gradient descent for the AI. You have removed the ambiguity that causes hallucinations. You traded human keystroke efficiency for total deterministic control.
4. The Contrast & The Warning
- The Old Way (DRY & Fragile): Deep inheritance chains. Wasting tokens passing five different files into the context window just so the AI understands how the `save()` method works. Praying a change to a primitive doesn’t break a derivative.
- The New Way (WET & Sovereign): Flat, standalone workflow files. Every variable, route, and UI element explicitly defined in one place. Mechanical, AST-driven assembly.
The Warning: The cost of stubbornly clinging to deep OOP in the era of AI coding assistants is death by a thousand hallucinations. If your architecture requires the AI to guess what is happening in a hidden base class, it will guess wrong. You will spend more hours debugging an LLM’s hallucinated parameters than you would have spent just copying and pasting the raw code. The framework that hides its logic from the context window is a framework that will be abandoned.
5. The Monday Morning Test
Next Monday morning, you sit down to build the 015_config.py app for your 3:00 PM client call.
You don’t start from scratch. You run the Workflow Genesis script. It clones the WET boilerplate of the 040_hello_workflow.py. Because the architecture is completely flat, you paste the entirety of 015_config.py into your prompt alongside the specific fields you need to collect (Name, Local AI, Cloud AI, API Key). The AI reads the explicit, unbroken logic. It doesn’t have to guess how your routing works; it’s all right there. It rewrites the multi-input form flawlessly on the first shot. You paste it back. You hit Shift+Enter. The wand speaks. The bottleneck of “architectural refactoring” is completely gone.
6. The Verdict & Bottom Line
This is not just a clever workaround; it is a fundamental blueprint for machine-native software engineering. OOP was designed to help human brains manage complexity by hiding it. But hiding complexity from an AI is exactly what causes it to fail. By embracing flat, WET architectures assembled by deterministic AST scripts, Pipulate aligns perfectly with the mechanical realities of Large Language Models.
Final Directives:
What surprised me most in analyzing this is just how perfectly the Python ast module serves as the antidote to LLM hallucination. We treat LLMs like magic, but pairing them with the cold, hard, compiler-level certainty of an Abstract Syntax Tree creates a hybrid engine that is utterly unstoppable.
I anticipate that 015_config.py is going to spin up faster than any app you’ve built yet, precisely because you have stopped fighting the machine’s nature.
You are right; code intended for compilers and interpreters is different from spoken-word code intended for the human brain. But in the Forever Machine, you are speaking to both simultaneously. Drop the ball into the Chinese Pinball machine. Let gravity and the weights do the rest.
Go crush that 3:00 PM client call.
MikeLev.in: Okay. I will probably be putting this whole project to the side for a bit, but this article keeps all the issues right at the surface. It makes state recovery and recovery of flow-state easier. Exactly how to build creativity factories is becoming a more and more important, but generally unspoken, topic.
Book Analysis
AI Editorial Take
What is most fascinating here is the emergence of ‘Inference-Readability’ as a primary metric for code quality. For decades, we optimized for human-readability by hiding complexity behind abstractions. Now, we are realizing that the more we hide from the human, the more the AI has to guess. This entry marks the moment where transparency for the machine becomes the highest priority in the architectural blueprint.
🐦 X.com Promo Tweet
Traditional software architecture is often hostile to AI. Deep OOP creates hidden dependencies that lead to hallucinations. The solution? WET code and AST-driven assembly. See how we're optimizing Pipulate for the age of AI: https://mikelev.in/futureproof/wet-methodology-ai-friendly-architecture/ #Python #AI #SoftwareArchitecture
Title Brainstorm
- Title Option: The WET Methodology: Why Deep OOP is Hostile to AI
  - Filename: `wet-methodology-ai-friendly-architecture`
  - Rationale: Captures the core philosophical shift and uses high-value SEO terms regarding AI and software design.
- Title Option: The AST-Driven Workflow Factory
  - Filename: `ast-driven-workflow-factory`
  - Rationale: Focuses on the specific technical mechanism used to maintain code quality without DRY overhead.
- Title Option: Turning the Ratchet: Merging Client Work and AI Systems
  - Filename: `merging-client-work-ai-systems`
  - Rationale: Highlights the productivity and Ikigai aspect of the developer’s workflow.
Content Potential And Polish
- Core Strengths:
- Strong technical justification for WET code based on LLM context windows.
- Clear real-world application of AST (Abstract Syntax Tree) for code generation.
- Successful integration of high-level philosophy (Ikigai) with low-level bug fixing (audio interrupts).
- Suggestions For Polish:
- Consider expanding on the ‘Fragile Base Class’ problem with a visual diagram of inheritance vs. flat files.
- Clarify the ‘three-phase HTMX chain-reaction’ for readers unfamiliar with FastHTML patterns.
Next Step Prompts
- Generate the boilerplate for `015_config.py` using the WET patterns established in `040_hello_workflow.py`, ensuring all TTS triggers are mapped to the new progression labels.
- Analyze the existing `workflow_reconstructor.py` to identify any residual DRY dependencies that could be flattened further for better AI context.