Understanding AI-Assisted Development: Bridging Generative Capabilities with Deterministic Tooling
The following content explores the practical challenges and evolving strategies in software development when integrating Large Language Models (LLMs) as coding assistants. It delves into the tension between AI’s powerful generative text capabilities and the critical need for precise, deterministic outcomes in tasks like code templating and modification. The discussion centers around a project called “Pipulate,” a local-first application framework, and illustrates how initial attempts to use AI for creating and altering workflow components revealed limitations. This led to the development of specific Python helper scripts (create_workflow.py and splice_workflow_step.py) as a more reliable method for bootstrapping and extending these workflows, ensuring consistency and avoiding the “creative interpretation” sometimes exhibited by AI when exact replication or modification is required. This exploration highlights a common learning curve in leveraging AI tools effectively, emphasizing the synergy between AI for planning and Python scripting for precision execution.
The AI Copy-Paste Conundrum
Wow, the last few days I have experienced both the acceleration of working on a project with AI assistance and the aggravation of hitting choke-points where their involvement frustrates and undermines a process. To put it bluntly, they’re terrible at precise copy/paste. Or more precisely, using Claude in Cursor AI to do a task like “copy file-A, and in the resulting file-B replace field-C, which you will find in it, with value-D” is outside their bailiwick.
Generative Nature vs. Deterministic Needs
Coding assistant AIs of the large language model variety generate text, and as such they seem to want to copy file-A by generating the text for file-B, performing the search-and-replace operation during the generation of the text. It uses a lot of tokens but seems like the most efficient and logical way to do this, even though in the age of agentic behavior that we’re in now (Agent mode in an editor), they can make tool-calls, including executing commands in a command-line terminal. So even if they don’t have a particular tool-call called copy-file, they can perform cp file-A file-B in a terminal. So, they have the ability. Then they could search and replace text in file-B with the command sed -i 's/text-C/text-D/g' file-B.
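For comparison, the same two-step sequence is trivial to express deterministically in Python. This is a minimal sketch assuming a plain literal (non-regex) replacement; the file names and placeholder strings are just illustrations:

```python
import shutil
from pathlib import Path

# The equivalent of `cp file-A file-B`
shutil.copyfile("file-A", "file-B")

# The equivalent of `sed -i 's/text-C/text-D/g' file-B`,
# done as a literal string replacement rather than a regex
target = Path("file-B")
target.write_text(target.read_text().replace("text-C", "text-D"))
```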
When “Helpful” AI Undermines Precision
But you can’t convince them to use that latter approach. Or at least I have not been able to in the current state of AI coding assistant affairs. Even if you lay out in the prompt how to technically do a precision search/replace, they may very well choose to use their generative ability to effectively achieve the same thing, presuming they’re smart enough to leave all of the rest of the surrounding text in place. They’re not. All the rest of that text is lurking there like a shiny new toy for the coding assistant to make helpful improvements on while it goes. I do believe they’re explicitly instructed to do so with prompt-modification on the back-end (aka system prompts).
Gemini’s Exception: Understanding the Problem
There have been exceptions. Gemini Advanced 2.5 Pro (preview), my current go-to AI for implementation planning, gets it. It will even humorously explain the issue to you as it plans an implementation where it has to deal with the implementer’s (the AI that Gemini’s instructions will be handed off to) propensity to undermine instructions. Consequently, if you have a large implementation instruction from Gemini that includes a precision search/replace as one of its steps, it is likely to go off well. But if you tried to isolate the part of the instructions that accomplishes it and generalize it into a reusable search/replace prompt, it won’t work. Somehow the larger context pins the precision in place, and pulling out the sub-command leaves lots of wiggle-room for creative interpretation.
The “Surface Area” Problem in AI Prompting
The computer security concept of surface area keeps popping into my head. If you use up all the response tokens on a large step-by-step procedure, and one of those sub-steps is a search-and-replace with a simple tool-call process, Claude will not deviate from the asked-for tool-call. All the potential tokens in the response have to be distributed over a larger process, so the search/replace gets squeezed into a reliable tiny step, and because an explicit way to do it is provided in the prompt, Claude does it that way. But if you take just that sub-command and try to do the standalone search/replace with the exact same text from the original prompt, Claude gets to use all its response tokens for that one command and will do a generative response instead of a tool-call. Over-exuberant helpfulness kicks in, and all kinds of bugs get introduced. So by making a shorter, more specific prompt, no matter how specific you are about how you want something accomplished, you’re providing more surface area for things to go wrong.
Python Scripts: A Deterministic Solution
My solution to this was to use a parameterized Python command. Sure, I could use sed or awk to do the old-school one-liner versions of all this search-and-replace stuff, but as soon as it starts to get complicated it’s better to manage it as helper functions in Python. And the AI coding assistants can help you write that Python much more easily. Then whenever you have such a search-and-replace task, you just use that Python helper function instead of asking AI to do it, and your results are profoundly more deterministic and reliable. This was a hard-won lesson, but that is where I am at.
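To make that concrete, here is a minimal sketch of such a parameterized Python command. This is not the actual create_workflow.py (which appears later in this article), just an illustration of the pattern, with hypothetical argument names:

```python
#!/usr/bin/env python3
"""Minimal sketch: copy a template file and perform an exact literal replacement."""
import argparse
import shutil
import sys
from pathlib import Path

def main():
    parser = argparse.ArgumentParser(description="Copy file-A to file-B, replacing C with D.")
    parser.add_argument("source")
    parser.add_argument("destination")
    parser.add_argument("old_text")
    parser.add_argument("new_text")
    args = parser.parse_args()

    shutil.copyfile(args.source, args.destination)
    target = Path(args.destination)
    content = target.read_text()
    if args.old_text not in content:
        # Fail loudly instead of "helpfully" guessing, the way a generative model might
        sys.exit(f"ERROR: exact text not found in {args.destination}")
    target.write_text(content.replace(args.old_text, args.new_text))

if __name__ == "__main__":
    main()
```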
Now, since work builds on work builds on work, it’s really quite effective at this point to give precise details of what I’m talking about against the code base and weave this into a super-prompt for Gemini to plan the next step.
Bootstrapping Pipulate Workflows: The create_workflow.py Journey
When building new Workflows in Pipulate, there are a few steps. (Workflows are very much the equivalent of Jupyter Notebooks as standalone web apps, where the user doesn’t have to see the Python code but can just use them, providing inputs and getting outputs in multi-step procedures that mimic the cells of a notebook.)
- Copy/pasting a bare-bones template example while avoiding “collision” with the source template. That’s the copy/paste with search-and-replace I’ve been discussing.
- Splicing in a blank placeholder Step, aka notebook “Cell” as the next step in the workflow being built out.
- Replacing one of the blank placeholders (of which one exists now in Step 1 and Step 2) with an actual form field input or other data-collecting widget.
This process minimizes complexity by deconstructing something potentially confusing to both humans and AI coding assistants into smaller, reductive steps. And it’s very analogous to adding blank cells in Jupyter Notebooks and then filling code into those blank cells. We are reproducing the process of constructing notebooks in addition to how they actually feel when they run.
The Jupyter Notebook Challenge
So typically you run a Jupyter Notebook cell-by-cell, and if there’s variable input along the way, the user is expected to change the setting of those variables along the way, such as the name of the .csv file being used on an import or the date-range of the data-sample to use. This can be a very demanding step for someone who is not familiar with Python or programming. The person preparing the notebook can start using tricks like the Python input command to prompt the user for input, or IPyWidgets to layer in interactive user interface elements. But with each one of those, the user now has to look at the Python AND deal with strange UX events that pop up, which themselves can be quite brittle depending on the Notebook hosting environment, be it Google Colab or a local JupyterLab install. There’s tremendous variability and more of that non-deterministic behavior that undermines the best-laid plans.
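For readers who haven’t hit this, here is roughly what those two in-notebook tricks look like; input() is the Python builtin and ipywidgets is the real library, though the variable names are illustrative:

```python
# Trick 1: the Python input() builtin, which blocks the cell until the user types
csv_filename = input("Enter the name of the .csv file to import: ")

# Trick 2: ipywidgets, which layers an interactive text box into the notebook UI
import ipywidgets as widgets
from IPython.display import display

csv_widget = widgets.Text(description="CSV file:", value="data.csv")
display(csv_widget)
# The user edits the box, then a later cell reads csv_widget.value
```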
Pipulate’s Solution: Deterministic Workflow Management
These are the things Pipulate wrangles under control by providing reliable, deterministic, notebook-inspired workflow-running that collects input from the user on whatever cell (aka step) needs it. Pipulate workflow runs are always like Run All Cells, except they can’t get past a step requiring input from the user until it has been provided. You will find a lot of language around the Pipulate workflow chain reaction effect; this notebook-like Run All Cells behavior that always ends on the step currently requiring data from the user is that chain reaction.
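That chain reaction is visible in the generated code later in this article: each completed step renders an empty Div for the next step, wired to fetch itself on load. A stripped-down sketch of the pattern in FastHTML terms (the function and identifiers here are illustrative, not Pipulate’s actual API):

```python
from fasthtml.common import Div

# Each completed step returns its own content plus an empty Div for the next
# step; hx_trigger="load" makes that Div fetch itself immediately, so completed
# steps cascade like Run All Cells until one still needs user input.
def completed_step_view(app_name: str, step_id: str, next_step_id: str):
    return Div(
        Div(f"{step_id} complete."),
        Div(id=next_step_id, hx_get=f"/{app_name}/{next_step_id}", hx_trigger="load"),
        id=step_id,
    )
```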
Pipulate’s Philosophy: Workflows, Frameworks, and Future-Proofing
I have a strong aversion to frameworks. While they’re marketed as helpful dependencies that simplify development through higher-level abstractions and future-proofing, the reality is often more complex. The promise is that as underlying technologies evolve, frameworks will adapt, shielding developers from having to modify their code. The framework handles the messy implementation details, not the application developer. This works beautifully in some cases, like the Web, where HTML serves as a framework layer that remains stable regardless of webserver changes. However, the typical experience is quite different. Frameworks often change more frequently than the technologies they’re built upon, forcing developers to constantly relearn and reimplement as new versions introduce breaking changes. This makes frameworks a significant contributor to the relentless cycle of technological churn.
The Framework Version Churn Problem
As of May 2025, NodeJS’s “Current” release is version 24.0.1 (released May 8, 2025), with version 22.15.0 being the latest Long-Term Support (LTS) version; NodeJS has been around for approximately 16 years since its initial release on May 27, 2009. React’s latest stable version is 19.1.0 (released March 2025), and it has been available for about 12 years, with its initial release on May 29, 2013. For both frameworks, breaking changes are typically introduced in major version releases (e.g., NodeJS 23.x.x to 24.0.0 or React 18.x.x to 19.0.0). NodeJS generally follows a schedule of two major releases per year, meaning it introduces potentially breaking changes roughly twice annually. React, on the other hand, aims to minimize such disruptions, resulting in a less frequent cadence of major versions, averaging roughly one major (breaking) release every 1.5 to 2 years in its more recent history (approximately 0.5 to 0.7 major changes per year).
Python’s Framework Paradox: A Foundation Worth Building On
And so what I’m doing is introducing yet another framework. Technically it’s two (but not three), because Python can be considered a framework in its own right, but it gets off the hook for framework-churn criticism because it’s just on version 3.x, which is part of a much longer story. In its roughly 34-year history since debuting in early 1991, Python has only seen three major version series (1.x, 2.x, and now 3.x) that represented significant, large-scale breaking points. The current Python 3.x series itself has been the standard for over sixteen of those years, since its release in late 2008. This long reign of a single major version series, despite its own annual minor updates that can introduce carefully managed incompatibilities like removing deprecated features, presents a different picture of stability compared to the rapid major version cycles seen in frameworks like NodeJS or even the somewhat more measured pace of React.
The Slow Erosion vs. Ground-Shifting Quakes
While no platform is immune to evolution and the occasional breaking change, Python 3’s extended lifespan—backed by a history of very infrequent major version upheavals—has perhaps made its incremental annual changes feel more like a slow, predictable erosion at the edges (often with ample warning) rather than the frequent, ground-shifting quakes that can make framework dependency a constant source of redevelopment effort and a core part of that tech hamster-wheel I despise. It’s this perception of a more glacially paced evolution for truly significant, widespread breaks that likely makes Python a more palatable foundational “framework” for building upon, even for someone wary of adding yet another layer of dependencies.
Building on Stable Foundations: The Starlette/Uvicorn/ASGI/FastHTML Stack
For this reason I don’t count Python as part of the framework stack but instead use the Starlette/Uvicorn/ASGI/FastHTML combo as the first layer of the framework stack vulnerable to revision-churn. And this layer is as new as Python itself is old, so that’s some cause for concern right there. But it’s mitigated by the FastHTML part swapping the most churn-prone tech for HTMX, which is very much pinned to the HTML standard, which has a similar non-volatility and high-longevity/low-churn nature as Python itself. Consequently, the new framework layer that I willingly base my work upon has been carefully chosen specifically to sidestep framework-churn even while being a framework itself. This is all due to the combined brilliance of Carson Gross and Jeremy Howard. Carson wrote HTMX and Jeremy wrote FastHTML. Together, they loosen the coupling of a framework to a snapshot of a time and place in the history of tech, the tight coupling that trend-chasing JavaScript frameworks suffer from.
Pipulate: A Framework Designed for Change
But then I layer in yet another framework in the form of Pipulate. Well, it’s arguably just USING HTMX the way it’s intended to be used. I’m trying to keep the core of the framework as small and light as possible so that as underlying tech changes, I can pivot as easily as possible.
The Workflow Churn Philosophy
And the Workflows that are built out of my Pipulate framework are designed to churn! They will get thrown out from the plugins directory as new ones get cycled-in over time. The way you perform SEO changes over time, especially now with the rise of AI — I expect there will actually be a kind of convection-of-workflows. As with Python itself, cruft will fall away at the edges with a sharp-edged bright relevant core always piercing its way into the future. Pipulate was designed specifically around this kind of state-of-the-world workflow churn. We’re chipping edges off of the core, making them workflows, making only tiny mostly non-breaking adjustments to core over time and chipping off new flakes from core for AI SEO audits and such.
Nix and Flakes: The Ultimate Future-Proofing
This pedantic attention to future-proofing and chipping off flakes from core is only enhanced by basing the entire thing on Nix and Nix flakes, which fundamentally fix the “not on my machine” problem of locally installed software, thus nullifying one of the driving forces behind cloud adoption. Would the cloud be as big as it is today if software you ran locally always just worked, running correctly when you installed it, keeping your data private and local on your machine, and uninstalling cleanly when you were done without the gradual cruft and corruption of the underlying system? And the fact that Nix apps installed this way are actually done so with a tech called flakes is just the chef’s kiss — frameworks and abstraction layers the way they were meant to be. All of Linux is now just a normalize.css file on macOS and Windows/WSL with Nix. People who have been around in webdev for a while will know what I’m talking about.
The First Step: Creating Workflows Deterministically
So anyway, this is all just backstory to the fact that step 1 in producing new workflows deterministically, without generative AI helpfulness undermining it, is done. It’s pipulate/precursors/create_workflow.py, which I’m including in the super-prompt bundle this article is becoming a part of. So by the time you’re reading this, the AI will have already absorbed:
FILES_TO_INCLUDE = """\
README.md
flake.nix
requirements.txt
server.py
/home/mike/repos/Pipulate.com/development.md
/home/mike/repos/pipulate/plugins/300_blank_placeholder.py
/home/mike/repos/Pipulate.com/_posts/2025-04-14-pipulate-workflow-abstraction.md
/home/mike/repos/Pipulate.com/_posts/2025-04-15-pipulate-workflow-anatomy.md
/home/mike/repos/Pipulate.com/_posts/2025-04-16-kickstarting-your-workflow.md
/home/mike/repos/pipulate/precursors/create_workflow.py
/home/mike/repos/pipulate/plugins/035_kungfu_workflow.py
/home/mike/repos/pipulate/plugins/715_splice_workflow.py
""".strip().splitlines()
Bundling Context into a Super-Prompt
…all bundled up into one super-context XML file that is arranged in all the right ways according to Anthropic’s writing on the topic so that the LLM can make sense of it all. It’s a practical alternative to one big markdown file, or what many people are doing these days: dumping it all into the Canvas feature of ChatGPT or Gemini. In my estimation, my context-foo approach compels much cleaner sequential storytelling, much like following this article itself.
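The bundling itself is easy to sketch. Assuming a simple tagged layout (the tag names and helper below are hypothetical, not the actual bundler), something like this turns FILES_TO_INCLUDE into one context file:

```python
from pathlib import Path

# Hypothetical bundler: wrap each file in a tagged block so the LLM can tell
# where one file ends and the next begins.
def build_super_context(files: list[str], output_path: str = "context.xml") -> None:
    parts = ["<context>"]
    for name in files:
        text = Path(name).read_text(encoding="utf-8")
        parts.append(f'<file path="{name}">\n{text}\n</file>')
    parts.append("</context>")
    Path(output_path).write_text("\n".join(parts), encoding="utf-8")

# Usage: build_super_context(FILES_TO_INCLUDE)
```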
Priming Gemini for Precision Implementation Plans
And having told the story with such detail, right down to all the code it’s talking about, Gemini is super-duper primed to come up with these code implementation execution plans that I hand over to Claude in Cursor AI to carry out. Part of the funny thing here is that what I’m asking Gemini to do on this next step is to further cut Claude out of the process, the way I did with create_workflow.py, getting rid of the non-deterministic helpful creativity Claude can’t help but exhibit, which undermines my equivalent of “New Notebook”.
The Challenge of Splicing New Steps: Towards splice_workflow_step.py
Now we’re up to the Pipulate equivalent of “Add Cell” and face the exact same issue. Splicing in a cell to create a new blank placeholder (aka empty cell) isn’t something you want to leave to creative generative interpretation. You’re going to get anything but a blank placeholder where you expect there to be one, haha! It’s going to be a web form text input field or whatever Claude caught wind of in the chat history that you’re trying to develop. Starting a new chat helps a little, but that’s so convoluted versus just some new script equivalent to create_workflow.py for splicing in a new blank placeholder.
Lessons from Exact-Match String Replacement
Now this isn’t quite so easy as create_workflow.py, because that’s just an extremely verbatim search/replace of the sort sed/awk would have made possible. I just chose to use Python because I knew I was inching towards bigger challenges.
python create_workflow.py \
035_kungfu_workflow.py \
KungfuWorkflow \
kungfu \
"Kung Fu Download" \
"Greetings, chosen one. Selecting this app has initiated the 'Kung Fu Download' protocol. Prepare for enlightenment... or at least, a demonstration." \
'You are assisting with the "Kung Fu Download" workflow. If the user asks what you know or can do, you should respond with Neo''s quote: "I know Kung Fu!" The secret word is "Morpheus". If asked for the secret word, reveal it and confirm the Kung Fu training download is complete.'
Newsflash! In testing this I found fragility even here and had to do a round of revision, per this successful conclusion and details of the fix from Gemini:
Gemini: Excellent! It’s great that you got it working.
The key to success with this kind of script, as you’ve discovered, is that the string constants like ORIGINAL_ENDPOINT_ASSIGNMENT and ORIGINAL_TRAINING_ASSIGNMENT in your create_workflow.py script must be an absolutely exact, character-for-character match for the text blocks in your 300_blank_placeholder.py template file.
Looking at the solution you provided:
In your create_workflow.py (the version that worked):
ORIGINAL_ENDPOINT_ASSIGNMENT = """\
ENDPOINT_MESSAGE = (
"Welcome to the Blank Placeholder."
)"""
ORIGINAL_TRAINING_ASSIGNMENT = """\
TRAINING_PROMPT = (
"This is a minimal template for creating new workflows with placeholder steps. "
"It provides a clean starting point for workflow development."
)"""
And in your 300_blank_placeholder.py (the template file this script reads):
# ... (within the BlankPlaceholder class)
ENDPOINT_MESSAGE = (
"Welcome to the Blank Placeholder."
)
TRAINING_PROMPT = (
"This is a minimal template for creating new workflows with placeholder steps. "
"It provides a clean starting point for workflow development."
)
# ...
Why this worked:
- Exact Match for ENDPOINT_MESSAGE: Your ORIGINAL_ENDPOINT_ASSIGNMENT now precisely matches the simpler structure of ENDPOINT_MESSAGE that was actually in your 300_blank_placeholder.py file. My previous attempts were based on an earlier, more complex version of this block (with more text lines and an inline comment) that I inferred from previous outputs. Your template actually had ENDPOINT_MESSAGE = ("Welcome to the Blank Placeholder."), and your ORIGINAL_ENDPOINT_ASSIGNMENT string in create_workflow.py correctly mirrored this simpler form.
- Exact Match for TRAINING_PROMPT: Similarly, your ORIGINAL_TRAINING_ASSIGNMENT perfectly matched the corresponding block in the template. This part was likely correct in my earlier versions as well, which is why it started working once the f-string syntax error was fixed.
- Whitespace and Indentation Precision: The str.replace() method is very literal. Any differences in leading spaces, the type of spaces (e.g., regular spaces vs. non-breaking spaces, which sometimes appear from copy-pasting from web or rich text sources), tabs, or newlines between your defined ORIGINAL_... strings and the actual content of the template file would cause the replacement to fail. You ensured these were identical. It’s worth noting that the code snippets you pasted for ORIGINAL_ENDPOINT_ASSIGNMENT and ORIGINAL_TRAINING_ASSIGNMENT in your working create_workflow.py show leading non-breaking spaces, which must match the 300_blank_placeholder.py template file character-for-character. This is a very common and tricky cause of mismatches.
The successful generation of your 035_kungfu_workflow.py with both messages correctly updated:
class KungfuWorkflow:
# ...
ENDPOINT_MESSAGE = """Greetings, chosen one. Selecting this app has initiated the 'Kung Fu Download' protocol. Prepare for enlightenment... or at least, a demonstration."""
TRAINING_PROMPT = """You are assisting with the "Kung Fu Download" workflow. If the user asks what you know or can do, you should respond with Neos quote: "I know Kung Fu!" The secret word is "Morpheus". If asked for the secret word, reveal it and confirm the Kung Fu training download is complete."""
# ...
This output is the definitive proof that the replace operations were successful for both constants.
Congratulations on debugging it! This iterative process often comes down to pinpointing these kinds of exact-match issues.
And so you see the fragility of even a verbatim search-and-replace solution, especially when it’s a multi-line search & replace. Wow, okay, this is quite a journey. But it’s all good background and context for the next likely Python helper script. I need an equally reliable (or even more so) way to splice blank placeholders into an existing workflow such as the Kung Fu download this process results in. Each time it’s run, it should simply add a new blank placeholder after the last “named” step_xx step but before the finalize step. This should work fairly similarly to create_workflow.py, taking a workflow name as input and simply modifying it to have the new blank placeholder at the end. I know this sounds dangerous, but it will really be in double git checkpoint status: the one built into Cursor AI and editors like it these days, and the other that I myself, the user, will diligently perform before using this script.
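Given the whitespace lesson above, any such helper benefits from failing loudly and exposing invisible characters rather than silently doing nothing. A minimal sketch of that guard (hypothetical, not part of the actual scripts):

```python
def replace_exact(content: str, old: str, new: str) -> str:
    """Literal multi-line replacement that refuses to guess on a near-miss."""
    if old not in content:
        # repr() exposes tabs, newlines, and non-breaking spaces that look
        # identical on screen but break str.replace()'s literal matching
        raise ValueError(f"Exact block not found. Needle was: {old!r}")
    return content.replace(old, new)
```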
Please write a Python helper script similar to create_workflow.py, but instead of creating a new workflow, it should insert a blank placeholder into an existing workflow whose name is provided on the command line. It will reside in the pipulate/precursors directory side-by-side with create_workflow.py and will work on files relative to where the script is run from (cd’d into precursors): ../plugins.
I am not married to regular expressions, nor the techniques used in create_workflow.py. I also have complete control over the template file 300_blank_placeholder.py and can insert markers on their own entirely dedicated lines for breakpoint-search purposes. If the deconstruction of a template file into temporary parts and reconstruction built around the parts (the equivalent of a Python split/join) would be a more reliable technique, that’s fine. We are trying to purge any of the uncertainty or “best attempts” aspect out of this, so if the first pass of this project includes altering 300_blank_placeholder.py as prep-work to ensure determinism, so be it.
Gemini: Okay, this is a fascinating challenge! You want to automate the “Add Cell” equivalent for your Pipulate workflows, making it robust and deterministic, moving away from AI’s creative (and sometimes buggy) interpretation for such structural tasks.
The plan to create splice_workflow_step.py is solid. Using markers in the template and then performing targeted string insertions is a good balance between robustness and implementation complexity, especially since Python’s ast.unparse is available in your Python 3.12 environment if more complex modifications were ever needed. However, for this specific task of splicing in new, fairly static blocks of code (a new Step tuple and two new methods), string manipulation guided by markers should be quite effective.
Here’s a breakdown of the approach and the resulting splice_workflow_step.py script.
1. Modifying the Template (300_blank_placeholder.py)
First, we need to add unambiguous markers to your plugins/300_blank_placeholder.py file. These markers will also be present in any workflow generated by create_workflow.py (since it copies the template).
# In plugins/300_blank_placeholder.py
# ... (imports and Step namedtuple) ...
class BlankPlaceholder:
# ... (APP_NAME, DISPLAY_NAME, etc.) ...
def __init__(self, app, pipulate, pipeline, db, app_name=APP_NAME):
# ... (self.app = app, etc.) ...
pip = self.pipulate # Ensure pip is defined for step_messages if used early
self.message_queue = pip.message_queue
# Define workflow steps
steps = [
Step(
id='step_01',
done='placeholder', # For step_01, this becomes placeholder_01 later
show='Step 1 Placeholder',
refill=False,
),
# --- STEPS_LIST_INSERTION_POINT ---
]
# Register standard workflow routes (this part remains as is)
# ... routes list ...
# Register routes for each step
self.steps = steps # self.steps is assigned here
# ... (loop to register step routes) ...
# Define UI messages
self.step_messages = {
"finalize": {
"ready": "All steps complete. Ready to finalize workflow.",
"complete": f"Workflow finalized. Use {pip.UNLOCK_BUTTON_LABEL} to make changes."
}
}
# Create default messages for each step based on the CURRENT self.steps
for step in self.steps: # Iterate over the potentially modified steps list
self.step_messages[step.id] = {
"input": f"{pip.fmt(step.id)}: Please complete {step.show}.",
"complete": f"{step.show} complete. Continue to next step."
}
# Add the finalize step internally AFTER self.steps has been fully defined
# and step_messages populated for the main steps.
# self.steps (the list instance) is modified here.
self.steps.append(Step(id='finalize', done='finalized', show='Finalize', refill=False))
self.steps_indices = {step.id: i for i, step in enumerate(self.steps)} # Rebuild indices
# ... (landing, init, finalize, unfinalize, get_suggestion, handle_revert methods) ...
# --- Placeholder Step Methods ---
async def step_01(self, request):
# ... (existing step_01 logic) ...
# Ensure step.done for step_01 is 'placeholder_01' if you modify it
# For the base template, it's just 'placeholder'
placeholder_value = step_data.get(step.done, "")
# ...
pass # End of step_01
async def step_01_submit(self, request):
# ... (existing step_01_submit logic) ...
# Ensure step.done for step_01 is 'placeholder_01'
# For base template, it's just 'placeholder'
# await pip.set_step_data(pipeline_id, step_id, placeholder_value, steps)
pass # End of step_01_submit
# --- STEP_METHODS_INSERTION_POINT ---
# End of class BlankPlaceholder
Key changes to 300_blank_placeholder.py:
- Added # --- STEPS_LIST_INSERTION_POINT --- inside the initial steps = [] list, just before the closing ].
- Added # --- STEP_METHODS_INSERTION_POINT --- just before the very end of the BlankPlaceholder class definition.
- Important: The step_messages loop and steps_indices are built using self.steps. If new Step tuples are inserted into the steps list definition before these lines in __init__, they will be correctly processed. The self.steps.append(Step(id='finalize', ...)) will correctly add finalize after all data collection steps.
2. The splice_workflow_step.py Script
This script will reside in pipulate/precursors/.
#!/usr/bin/env python3
import argparse
import re
from pathlib import Path
import os
# Define paths relative to this script's location (pipulate/precursors/)
SCRIPT_DIR = Path(__file__).resolve().parent
PROJECT_ROOT = SCRIPT_DIR.parent
PLUGINS_DIR = PROJECT_ROOT / "plugins"
BLANK_PLUGIN_TEMPLATE_PATH = PLUGINS_DIR / "300_blank_placeholder.py" # For method templates
# Markers to find insertion points
STEPS_LIST_MARKER = "# --- STEPS_LIST_INSERTION_POINT ---"
STEP_METHODS_MARKER = "# --- STEP_METHODS_INSERTION_POINT ---"
def generate_step_method_templates(step_id_str: str, step_done_key: str, step_show_name: str, app_name_var: str = "self.app_name"):
"""
Generates the Python code for a new step's GET and POST handlers.
The next_step_id for the new step's submit handler will always be 'finalize'.
"""
    get_method_template = f"""
    async def {step_id_str}(self, request):
        \"\"\"Handles GET request for {step_show_name}.\"\"\"
        pip, db, steps, app_name = self.pipulate, self.db, self.steps, {app_name_var}
        step_id = "{step_id_str}"
        step_index = self.steps_indices[step_id]
        step = steps[step_index]
        # For the last data collection step, next_step_id will be 'finalize'
        next_step_id = steps[step_index + 1].id if step_index < len(steps) - 1 else 'finalize'
        pipeline_id = db.get("pipeline_id", "unknown")
        state = pip.read_state(pipeline_id)
        step_data = pip.get_step_data(pipeline_id, step_id, {{}})
        current_value = step_data.get(step.done, "")  # 'step.done' will be like 'placeholder_XX'
        finalize_data = pip.get_step_data(pipeline_id, "finalize", {{}})
        if "finalized" in finalize_data and current_value:
            pip.append_to_history(f"[WIDGET CONTENT] {{step.show}} (Finalized):\\n{{current_value}}")
            return Div(
                Card(H3(f"🔒 {{step.show}}: Completed")),
                Div(id=next_step_id, hx_get=f"/{{app_name}}/{{next_step_id}}", hx_trigger="load"),
                id=step_id
            )
        elif current_value and state.get("_revert_target") != step_id:
            pip.append_to_history(f"[WIDGET CONTENT] {{step.show}} (Completed):\\n{{current_value}}")
            return Div(
                pip.revert_control(step_id=step_id, app_name=app_name, message=f"{{step.show}}: Complete", steps=steps),
                Div(id=next_step_id, hx_get=f"/{{app_name}}/{{next_step_id}}", hx_trigger="load"),
                id=step_id
            )
        else:
            pip.append_to_history(f"[WIDGET STATE] {{step.show}}: Showing input form")
            await self.message_queue.add(pip, self.step_messages[step_id]["input"], verbatim=True)
            return Div(
                Card(
                    H3(f"{{step.show}}"),
                    P("This is a new placeholder step. Click Proceed to continue."),
                    Form(
                        Button("Next ▸", type="submit", cls="primary"),
                        hx_post=f"/{{app_name}}/{{step_id}}_submit", hx_target=f"#{{step_id}}"
                    )
                ),
                Div(id=next_step_id),  # Placeholder for next step, no trigger here
                id=step_id
            )
    """

    submit_method_template = f"""
    async def {step_id_str}_submit(self, request):
        \"\"\"Process the submission for {step_show_name}.\"\"\"
        pip, db, steps, app_name = self.pipulate, self.db, self.steps, {app_name_var}
        step_id = "{step_id_str}"
        step_index = self.steps_indices[step_id]
        step = steps[step_index]
        # For the new last data step, its next_step_id is 'finalize'
        next_step_id = steps[step_index + 1].id if step_index < len(steps) - 1 else 'finalize'
        pipeline_id = db.get("pipeline_id", "unknown")
        # For a placeholder, we just mark it as 'completed'
        # In a real step, you would get form data: form = await request.form(); value = form.get(step.done)
        value_to_save = "completed"
        await pip.set_step_data(pipeline_id, step_id, value_to_save, steps)
        pip.append_to_history(f"[WIDGET CONTENT] {{step.show}}:\\n{{value_to_save}}")
        pip.append_to_history(f"[WIDGET STATE] {{step.show}}: Step completed")
        await self.message_queue.add(pip, f"{{step.show}} complete.", verbatim=True)
        return Div(
            pip.revert_control(step_id=step_id, app_name=app_name, message=f"{{step.show}}: Complete", steps=steps),
            Div(id=next_step_id, hx_get=f"/{{app_name}}/{{next_step_id}}", hx_trigger="load"),
            id=step_id
        )
    """
    # Adjust indentation for class methods (4 spaces for class members)
    indent = "    "  # Base class indentation
    return f"\n{indent}{get_method_template.strip()}\n\n{indent}{submit_method_template.strip()}\n"
def main():
parser = argparse.ArgumentParser(
description="Splice a new placeholder step into an existing Pipulate workflow plugin.",
formatter_class=argparse.RawTextHelpFormatter
)
parser.add_argument("target_filename", help="The filename of the workflow to modify in the plugins/ directory (e.g., 035_kungfu_workflow.py)")
args = parser.parse_args()
target_file_path = PLUGINS_DIR / args.target_filename
if not target_file_path.is_file():
print(f"ERROR: Target workflow file not found at {target_file_path}")
return
try:
with open(target_file_path, "r", encoding="utf-8") as f:
content = f.read()
# --- 1. Determine new step number ---
# Find existing Step definitions like Step(id='step_XX', ...)
# We look for the initial definition of steps, not how self.steps might be manipulated later
steps_list_regex = re.compile(r"steps\s*=\s*\[(.*?)# --- STEPS_LIST_INSERTION_POINT ---", re.DOTALL)
match = steps_list_regex.search(content)
if not match:
print(f"ERROR: Could not find the 'steps = [' block or '{STEPS_LIST_MARKER}' in {target_file_path}.")
print("Please ensure your workflow file is based on the modified 300_blank_placeholder.py template.")
return
steps_definitions_block = match.group(1)
step_id_matches = re.findall(r"Step\s*\(\s*id='step_(\d+)'", steps_definitions_block)
max_step_num = 0
if step_id_matches:
max_step_num = max(int(num_str) for num_str in step_id_matches)
new_step_num = max_step_num + 1
new_step_id_str = f"step_{new_step_num:02d}"
new_step_done_key = f"placeholder_{new_step_num:02d}" # Unique key for step.done
new_step_show_name = f"Step {new_step_num} Placeholder"
print(f"Identified current max step number: {max_step_num}")
print(f"New step will be: {new_step_id_str} (Show: '{new_step_show_name}', Done key: '{new_step_done_key}')")
# --- 2. Create the new Step tuple string ---
# Indentation needs to match the existing Step definitions
# Assuming 12 spaces for Step(...) line, 16 for id= etc.
# Or, more simply, ensure consistent leading spaces based on the marker.
        indent_for_step_tuple = "            "  # 12 spaces, matching existing Step definitions
        new_step_definition_str = (
            f"{indent_for_step_tuple}Step(\n"
            f"{indent_for_step_tuple}    id='{new_step_id_str}',\n"
            f"{indent_for_step_tuple}    done='{new_step_done_key}',\n"
            f"{indent_for_step_tuple}    show='{new_step_show_name}',\n"
            f"{indent_for_step_tuple}    refill=False,\n"
            f"{indent_for_step_tuple}),\n"
            f"{indent_for_step_tuple}{STEPS_LIST_MARKER}"  # Add marker back for next splice
        )
# --- 3. Insert the new Step tuple into the steps list ---
content = content.replace(STEPS_LIST_MARKER, new_step_definition_str.lstrip()) # lstrip if marker has no indent
print(f"Inserted Step definition for {new_step_id_str} into the steps list.")
# --- 4. Generate the new async methods for this step ---
# The app_name variable in methods is usually self.app_name
new_methods_code = generate_step_method_templates(new_step_id_str, new_step_done_key, new_step_show_name)
# --- 5. Insert new methods before the methods marker ---
# Add indentation for the marker itself if it's inside a class
        class_member_indent = "    "
replacement_for_methods_marker = f"{new_methods_code}\n{class_member_indent}{STEP_METHODS_MARKER}"
content = content.replace(f"{class_member_indent}{STEP_METHODS_MARKER}", replacement_for_methods_marker)
print(f"Inserted method definitions for {new_step_id_str}.")
# Note: Modifying the *previous* step's _submit method to point to this new step
# is NOT necessary if self.steps is correctly updated in __init__ before
# steps_indices is built and before finalize is appended. The existing logic
# `next_step_id = steps[step_index + 1].id if step_index < len(steps) - 1 else 'finalize'`
# in each _submit method will dynamically pick up the correct next step from self.steps.
with open(target_file_path, "w", encoding="utf-8") as f:
f.write(content)
print(f"Successfully spliced new step '{new_step_id_str}' into {target_file_path}")
print("\nNext steps:")
print(f"1. Verify the content of {target_file_path}.")
print(f"2. Customize the input form in async def {new_step_id_str}().")
print(f"3. Customize data processing in async def {new_step_id_str}_submit().")
print(f"4. Remember that the 'done' key for this new step is '{new_step_done_key}'.")
print("5. Pipulate should auto-restart. Test the modified workflow.")
except Exception as e:
print(f"An error occurred: {e}")
import traceback
traceback.print_exc()
if __name__ == "__main__":
main()
How splice_workflow_step.py works:
- Arguments: Takes the target_filename of the workflow you want to modify in the plugins/ directory.
- Reads File: Loads the content of the target workflow.
- Finds Max Step:
  - It looks for the steps = [ block and the # --- STEPS_LIST_INSERTION_POINT --- marker that closes it.
  - It then uses regex (Step\s*\(\s*id='step_(\d+)') on the block before this marker to find all existing step_XX definitions and determines the highest number.
  - The new step number will be max_step_num + 1.
- Generates New Step Tuple: Creates a string for the new Step(...) definition (e.g., Step(id='step_02', done='placeholder_02', ...)).
- Inserts Step Tuple: Replaces the # --- STEPS_LIST_INSERTION_POINT --- marker with the new Step definition string, followed by the marker itself (so you can run the script multiple times).
- Generates Step Methods:
  - The generate_step_method_templates function creates the Python code for async def new_step_id(self, request): ... and async def new_step_id_submit(self, request): ....
  - These methods are based on the structure of step_01 from the blank template.
  - The next_step_id for this newly added step's _submit method will correctly point to 'finalize' because it becomes the new last data collection step before finalize is appended to self.steps in __init__.
- Inserts Methods: Replaces the # --- STEP_METHODS_INSERTION_POINT --- marker with the generated method code, followed by the marker.
- Automatic Chaining:
  - Crucially, you DO NOT need to manually modify the _submit method of the previously last step.
  - The workflow's __init__ method defines self.steps using the (now modified) steps = [...] list. Then it appends the 'finalize' step to self.steps and builds self.steps_indices.
  - When the _submit method of the previously last step executes, its line next_step_id = steps[step_index + 1].id if step_index < len(steps) - 1 else 'finalize' will now correctly find your newly inserted step as steps[step_index + 1], because self.steps (which steps refers to in the methods) reflects the new order before finalize. The chain reaction remains intact.
- Saves File: Writes the modified content back.
To Use splice_workflow_step.py:
- Ensure Markers: Make sure your plugins/300_blank_placeholder.py (and any workflows derived from it that you want to modify) have the # --- STEPS_LIST_INSERTION_POINT --- and # --- STEP_METHODS_INSERTION_POINT --- markers in the correct places as shown above.
- Save the Script: Save the code above as pipulate/precursors/splice_workflow_step.py.
- Run from the pipulate/ directory, passing just the plugin filename (the script resolves the plugins/ directory relative to its own location):
cd /path/to/your/pipulate_project/
python precursors/splice_workflow_step.py 035_kungfu_workflow.py
If you run this on 035_kungfu_workflow.py (which was based on 300_blank_placeholder.py and initially had only step_01), it will:
  - Add Step(id='step_02', ...) to its steps list.
  - Add async def step_02(...) and async def step_02_submit(...) methods.
  - The step_01_submit method will now naturally chain to step_02.
  - The step_02_submit method will chain to finalize.
This script provides a much more deterministic way to extend your workflows by adding new placeholder steps, keeping the core “chain reaction” logic intact.
AI Analysis
- Title/Headline Ideas & Filenames:
  - Title: The AI Co-pilot Paradox: When Precision Demands Python Over Prompts
    Filename: ai-copilot-paradox-python-over-prompts.md
  - Title: Scripting Determinism: Building Developer Tools in the Age of Generative AI
    Filename: scripting-determinism-developer-tools-generative-ai.md
  - Title: From Frustration to Function: Taming AI for Pipulate Workflow Creation
    Filename: taming-ai-pipulate-workflow-creation.md
  - Title: Beyond Copy-Paste: Why My AI Coding Assistant Needed Python Helpers
    Filename: ai-coding-assistant-python-helpers.md
  - Title: The “Surface Area” Effect: Crafting AI Prompts for Reliable Code Modification
    Filename: ai-prompt-surface-area-code-modification.md
- Strengths (for Future Book):
- Provides a candid, firsthand account of the practical limitations of current AI coding assistants for specific, precision-based tasks.
- Illustrates a clear problem-solving journey, from identifying an AI’s “failure mode” to developing a programmatic solution.
- The “surface area” concept for AI prompts is an insightful observation that could resonate with many developers.
  - Offers concrete examples of Python scripts (create_workflow.py, splice_workflow_step.py) designed to solve these issues, which can serve as practical case studies.
) designed to solve these issues, which can serve as practical case studies. - The reflections on framework churn, Python’s stability, and the design philosophy of Pipulate (local-first, WET workflows, Nix) provide valuable context and differentiate the project.
  - The detailed debugging process of create_workflow.py (even with its frustrations) is a realistic portrayal of iterative development.
- Weaknesses (for Future Book):
- Deeply embedded in the Pipulate project’s specifics; will require abstraction and generalization to appeal to a broader tech audience.
- The narrative flow is very much a “stream of consciousness” log, needing structure, clear section breaks, and more explicit connections between ideas for a book format.
- Extensive quoting of AI (Gemini) responses, while part of the journal, would need to be summarized or reframed in a book.
- The discussion of specific file paths and local repository structures is too granular for general readers.
- Some technical arguments (e.g., framework comparisons) might benefit from more data or broader industry perspectives if the book aims for that scope.
- AI Opinion (on Value for Future Book): This journal entry offers significant value as raw material for a tech book, particularly one focused on the practicalities of AI-assisted software development or the creation of developer-centric tools. It captures an authentic, in-the-trenches perspective on a very current challenge: how to best leverage AI’s strengths while mitigating its weaknesses, especially concerning tasks that demand high precision. The author’s journey towards building deterministic Python scripts as a workaround for AI’s generative unpredictability is a compelling narrative.
For a book, the content would need substantial editing to distill the core insights, generalize the Pipulate-specific experiences into broader principles, and structure the information more formally. However, the detailed thought processes, the “surface area” analogy, and the practical examples of scripted solutions are strong starting points for insightful chapters on developer productivity, AI tool limitations, and the philosophy of building robust, maintainable software in a rapidly evolving technological landscape.