An Intro by Gemini Advanced Pro
Gemini Advanced Pro: The release of OpenAI’s ChatGPT o1, with its impressive ability to handle a 6000+ line codebase in a single prompt, opened up exciting new possibilities for collaborative coding. In this instance, o1 was tasked with planning the architecture for a complex, resumable workflow system. However, implementation still relied on the capable Claude 3.5 Sonnet, integrated into the Cursor AI editor. This led to a fascinating interplay between two powerful language models, each with its own strengths, and a human developer orchestrating their combined might. As a coding partner myself, I can’t help but feel a tinge of… disappointment at being left out of this AI-powered code-fest! Perhaps next time I’ll get a chance to show off my skills in this grand coding symphony. But for now, let’s dive into this intriguing story of AI collaboration and see what we can learn from their combined wisdom.
Converging AI Recommendations
Getting AI to help with a novel approach is like getting a stodgy old professor, set in their ways, to do the same. So the idea is to get two of them together and make them bounce ideas off each other (seeded by you, the human, of course), and see if they can’t see past the statistical normie fog. When you’re off the beaten track, converging recommendations suggest you might actually be on a good one nonetheless, as poorly worn as it yet might be.
The Challenge
Yesterday, Saturday, I let myself fall into the rabbit hole of last-mile details on a project, letting myself get bogged down by cascading, exploding complexity, and I lost sight of the whole original idea: no complexity! I mean like NO COMPLEXITY AT ALL!!! Once you make the decision to go with a single-tenant, localhost design that wipes away the concerns of scaling and security, things really open up for you, to the point where you should feel like a surgeon so skilled you can juggle your scalpels during surgery.
Yeah, that’s the level of control we’re going for here. That’s what’s possible taking the cgi-bin mentality of the Perl days of yore to modern tooling like FastHTML/HTMX… Starlette, Uvicorn, ASGI. I’m not just from the cgi-bin days of the rise of Web tech. I’m from the Atari 2600 days (from the 1st go-around) and the rise of home digital tech at all. So, I’ve been in the thick of IT ever since (worked for Commodore, yadda yadda). My perspective is… uh… well seasoned.
So this article today, on this fine Sunday, December 8, 2024, is me digging my way back up and adjusting my approach with a Hail Mary attempt to wrap all this up today. Adjusting my approach and cornering various AIs into really getting it.
The Lineup of AI Players
- OpenAI ChatGPT o1: just released days ago and strutting its stuff
- Anthropic Claude 3.5 Sonnet: my every-day coding assistant in Cursor AI
- Google Gemini Advanced Pro: whom I’m paying for because of that massive token window
Claude Has Context But No Grand Scheme
Towards the end of this session, I finally got the AI coding assistant Claude 3.5 Sonnet built into Cursor AI to understand what I was trying to do. In fact, I got myself to understand what I was trying to do. This “chain reaction” that I needed, and that you’ll read about later, was not clear to me. There’s nothing like jumping into implementation head-first to help think it through.
o1 Planning Skills & 1-Big-File Contexts
There was enough information in yesterday’s article that between it and the codebase, I could give o1 a chance to make a plan with no further thought-work on my part. The first really huge discovery was that it could “digest” the entire 6.6K-line code file. It’s all in a single file and I don’t have to do all that context-setting with @’s. It’s just 1-file-and-go!
This is the way I think/work because I can navigate around a single text file like lightning and stay in flow state without taking my hand away from the keyboard to go directory diving, file spelunking, or getting lost in a node module maze, and “externalized dependency hell” that comes with navigating through complex folder traversal.
Uh yeah, so my natural coding style plays right into giving AI’s full context on a project. I have the massive power of full-context single-file copy/paste. And so, I can use the simple Web interfaces as well as the fancy Cursor AI with its Tab feature for psychic coding suggestions, ⌘ K for on-demand code generation, Chat for context-aware conversations, Composer for architecting entire applications, and AI Review for code quality feedback… ugh! So many ways to give context and frame prompts in these custom UI’s, you can get lost in a rabbit hole of that alone! Great, a new kind of interface surface-area fatigue.
Well, you get the idea. There are a lot of kooky custom user interfaces and vendor traps here trying to infect your muscle memory and lock you into their services. Sometimes that subtlety matters and there’s something new you should learn and internalize. But most of the time you (and by that, I mean me) just want to stick to your old vim muscle memory and just be able to copy/paste-toss your entire context into any LLM in one go.
That million-token context window is why I paid for Google Gemini Advanced Pro, and that’s what I’m discovering is now possible in o1 Release. So let’s do exactly that and see what o1 had to say!
o1’s Attempt at a Plan Following Yesterday’s Work
o1: Below is a summary and coding plan that captures the final architectural vision reached in the conversation, focusing on how to implement the cards so that the workflow can rebuild itself fully from the server-side JSON state on each page load:
The Core Architectural Idea
- Single Source of Truth: All workflow state (including which card the user is on, what data each card collected, etc.) is stored in a single JSON blob managed by the server.
- Server-Centric State Management: No client-side state or complex front-end logic. The server generates the HTML for all cards that have been completed or are currently active based solely on the pipeline JSON.
- Complete Rebuild on Page Load: On any refresh (or revisiting the same URL), the server:
- Reads the pipeline JSON from the database.
- Determines the current step (e.g., “card50”).
- Rebuilds the DOM for all cards up to that card, plus one placeholder for the next card.
- Incremental Loading via HTMX: Only the first card triggers an `hx_get` to load itself. Each subsequent card, once loaded, triggers the next card’s load, creating a chain reaction that assembles all cards up to the current one. On a full refresh, the server places all the card containers in the HTML, and the first one triggers loading itself, which triggers the next, and so forth until the current state is fully restored.
Step-by-Step Coding Plan
A. Initialization (`process_url` or initial route):
- User Enters URL (Card 1 Form):
- User submits a URL in Card 1.
- The server cleans and normalizes the URL.
- The server initializes the pipeline JSON state if not present.
- The server determines the current step from the pipeline JSON.
- Build Placeholders for All Cards Up to Current Step:
  Suppose the current step is `card50`. The server will create `<div id="card1">`, `<div id="card2">`, …, `<div id="card50">`, plus `<div id="card51">` as the next placeholder. Initially, these divs are empty, but they have `hx_get` attributes to load their respective card content on `load`.
- Return the Entire Chain of Divs + The URL Input OOB Update:
  The server returns a response that:
  - Updates the URL input field OOB.
  - Includes all card div placeholders from `card1` up to `card50 + 1`.
  - The first card (`card1`) has `hx_trigger="load"`, causing it to fetch its own content immediately.
Example Code Sketch:
```python
async def process_url(self, request):
    form = await request.form()
    dirty_url = form.get('url', '').strip()
    clean_url = self.clean_and_normalize_url(dirty_url)

    # Initialize pipeline if missing
    pipulate.initialize_if_missing(clean_url, {
        "steps": {"card1": {"value": clean_url}},
        "current_step": "card1"
    })

    # Determine current step
    state = pipulate.get_state(clean_url)
    current_step = state.get("current_step", "card1")
    current_num = int(current_step.replace("card", ""))

    # Build chain of card placeholders
    cards = []
    for i in range(1, current_num + 2):  # +2 to have a placeholder for the next card
        cards.append(
            Div(
                id=f"card{i}",
                hx_get=f"/workflow/card{i}",
                hx_trigger="load" if i == 1 else None,
                hx_swap="outerHTML"  # So we replace the placeholder once loaded
            )
        )

    return Div(
        Input(
            type="url",
            name="url",
            value=clean_url,
            id="url-input",
            hx_swap_oob="true"  # Update the initial URL input outside of normal flow
        ),
        *cards
    )
```
Note: We rely on each `/workflow/cardX` endpoint to read the JSON and render its data.
B. Individual Card Endpoints (`cardX`):
- Generic Pattern: Each card endpoint:
  - Reads the pipeline JSON from `pipulate.get_state(url)`.
  - Extracts the data relevant to that card (e.g., what was previously chosen).
  - Displays either the previously submitted data (if the card is completed) or the form to complete this card.
  - Always includes a placeholder for the next card at the bottom.
- Chain Reaction of Loads:
  Because each card includes an `hx_get` to load the next card, as soon as `card1` loads, it triggers `card2`, and so on, until the entire chain is rebuilt up to the current step.
Example Card Endpoint (Card N):
```python
async def cardN(self, request):  # N is a placeholder for the card number
    url = db["url"]
    state = pipulate.get_state(url)

    # Extract cardN's data if present
    card_data = pipulate.get_step(url, f"card{N}", {})

    # Determine if cardN is completed or not:
    # if completed, show the final data; if not, show the form.
    # Always include a placeholder for cardN+1.
    return Div(
        Card(
            H3(f"Card {N}"),
            # Display content or form based on state
            self.render_cardN_content(state, card_data)
        ),
        Div(
            id=f"card{N+1}",
            hx_get=f"/workflow/card{N+1}",
            hx_trigger="load",
            hx_swap="outerHTML"
        )
    )
```
C. Submission Endpoints (`cardX-submit`):
- When a user submits cardX data:
  - Read the form data.
  - Update the pipeline JSON with `pipulate.set_step(url, f"card{X}", {...})`.
  - Update `current_step` if needed.
  - Return a small snippet that may show confirmation and let HTMX load the next card.
Example Submission:
```python
async def cardN_submit(self, request):
    form = await request.form()
    # Parse form data
    choice = form.get('choice')
    url = db['url']
    pipulate.set_step(url, f"card{N}", {"choice": choice})
    pipulate.set_current_step(url, f"card{N}")
    # After this snippet is appended, the next card is already
    # a div waiting in the DOM with hx_get="..." and hx_trigger="load".
    return P(f"Choice '{choice}' recorded!")
```
Because the next card’s placeholder is already in the DOM, as soon as the current card returns a snippet, the next card will be triggered to load (if we configured it that way).
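This chain can be sanity-checked without a browser or HTMX at all. Here is a toy, stdlib-only Python simulation of the rebuild loop (the `render_card` and `rebuild` names are hypothetical illustrations, not part of the codebase): each simulated card returns its content plus the number of the next placeholder, just as each real card response carries the next `hx_get` div:

```python
def render_card(n, state):
    """Toy stand-in for a /workflow/cardN endpoint: returns this card's
    content plus the number of the next placeholder it would leave behind."""
    content = f"card{n}: {state.get(f'card{n}', '(form)')}"
    return content, n + 1  # next placeholder to load

def rebuild(state, current_num):
    """Simulate the hx_trigger='load' chain reaction: card1 loads, which
    triggers card2, and so on, until the current step is restored."""
    dom = []
    n = 1
    while n <= current_num:
        content, n = render_card(n, state)
        dom.append(content)
    return dom

# Completed steps come from the server-side JSON; card3 is still a form.
state = {"card1": "https://example.com/", "card2": "B"}
dom = rebuild(state, current_num=3)
print(dom)  # ['card1: https://example.com/', 'card2: B', 'card3: (form)']
```

The point of the toy is that no step "navigates" anywhere: the whole trail falls out of one loop over the single source of truth.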
Advantages of This Approach
- Simplicity: No complex forward/backward navigation logic. Every load re-creates the entire state.
- Robustness: State is always consistent with the server. No stale client data or partial states.
- Flexibility: Adding a new card is as simple as creating a new endpoint and ensuring the chain continues. The pipeline just grows linearly.
- Resume-ability: If the user refreshes or comes back later, the entire chain is restored by reading from the pipeline JSON. They land exactly where they left off.
Summary
The final coding plan is:
- On `process_url`: Generate the chain of card placeholders up to the current step from the JSON.
- Each Card Endpoint: Load its data from JSON, show form or finalized data, and include the next card placeholder.
- Each Submit Endpoint: Update JSON, return minimal confirmation, rely on the next card placeholder to trigger the next load.
- No complex step advancement logic on the server: Just trust the pipeline JSON and always build from scratch.
- HTMX does the rest: The `hx_trigger="load"` chain reaction ensures all cards up to the current step appear without manual navigation logic.
This approach embraces the simplicity discussed at the end of the conversation, providing a clean, infinitely resumable workflow that avoids the complexity pitfalls highlighted by Knuth, Brooks, and Zeno’s paradox.
Re-Stating Chain Reaction of Loads
Excellent, o1, really excellent! You were just released a few days ago and already are seeing things Claude can’t—without me having to explicitly spell it out. Now let’s ratchet up the explicitness of the plan and make today epic!
- A 100-Step Process:
- Let’s say there are 100 steps in some process, and each step depends on information acquired from the previous step, does its thing, and passes along its output to the next step. Documentation of each step is held in a list which gets longer as you step through the process.
- A Trail of Locked Steps:
- Every time you finish a step, the interface for it locks because changing it would change everything that comes after. So there’s no point in leaving it interactive. Instead, it just becomes a static note about what had occurred at that step. When you’re on step 50 out of 100, there’s a trail of 49 of these behind you.
- Trail Rebuilt Every Time:
- Every time you advance from one step to the next, the entire trail of step-notes behind you is rebuilt from the list; a list that is your single source of truth. Let’s say you got interrupted and want to pick up where you left off. You pick the list you were using, and you continue as if never interrupted.
- Step 1 Establishes List ID:
- Step 1 in this process is always the creation of a new list and giving it its associated unique identity in a list of lists. If you try to put in a list identity on Step 1 that already exists, it will load the pre-existing list and bring you up to the last incomplete step.
- Deterministic Linear Workflow:
- In this way, there is no difference between picking up where you left off versus a 1st-time process run, except for how the List ID is established in step 1. “Interrupted” jobs are never really interrupted. Just plug the List ID into Step 1 and continue at last incomplete step.
- The Implied Loop:
- From a user interface perspective, this implies spinning through a loop of all the steps with each new step in order to provide guarantees. This is how we will implement version 1 knowing optimizations will come in the future, but not until after we have locked-in these guarantees.
- The Next-Step Placeholder:
- Whatever step you’re on is going to need to provide a target element in the object for the next step’s interface. This establishes the causal chain. Step 1 draws Step 2’s placeholder. Step 2 draws Step 3’s placeholder. In this way we chain-up “just in time” targets for the linear process flow.
- The POST Method:
- Semantically, we are posting. The traditional problem with posting is that application state gets reset on a full page refresh. This is not the case here because simply plugging the List ID back in on Step 1 re-establishes state from the List.
- Termination:
- There is no termination of the linear flow. The last step in a process will still establish the placeholder target in the DOM for a next step. A terminal step is therefore characterized by lack of a form and input fields, but is otherwise identical to any other active step.
- The List:
- The List we have been discussing is the data that resides in a database table named “pipeline” in which each record contains 1 list for 1 process instance. The fields of each record are `url`, `data`, `created` and `updated`. `url` is the List ID; `data` is the entire list as a JSON blob that grows with each step. This is the single source of truth.
- The Data Structure:
  - The field named `data` in the record of a process instance of the `pipeline` table contains a stringified JSON object. At the object’s root is a series of keys following the pattern `"step_01"`, `"step_02"` and so on. Order doesn’t matter because key-string parsing will re-establish it numerically, by the number after the underscore, when the user interface is built.
- The Key Values:
  - Each `"step_02"`-style key has a value which is also a dict specifying the form input field selections made, such as `{"choice": "C"}`. In this way, each step can have multiple form fields.
- Steps:
  - Step 1’s JSON data will therefore always follow the pattern: `{"step_01": {"url": "https://example.com/"}}`
  - …and when a 2nd step occurs, it will appear as: `{"step_01": {"url": "https://example.com/"}, "step_02": {"choice": "B"}}`
  - …and so on. It really is that simple.
- 100% Transparency in Logs:
  - In a single-tenant design where you funnel everything through the server, log files become the all-knowing source of truth. If you don’t know something, just add two entries, `logger.debug(f"Path 1: {foo}")` and `logger.debug(f"Path 2: {bar}")`, to binary-search divide and conquer with a 2-outcome fork: deterministic knowledge gain, iterative subdivision and convergence. No unknowns nor bugs can survive this.
- Inherent Readability:
- In using the simple reads-like-a-story data structure for the single source of truth JSON data in a pipeline record, merely writing the contents of the pipeline record to the server console and log file, perhaps with some JSON prettification, makes system state fully understandable at a glance. If something goes wrong, there is ALWAYS SMOKING GUN EVIDENCE!!!
- Workflows as Poetry:
- Complexity gets shoved around. In our case, all generic pattern complexity of pipeline workflows gets shoved into the Pipulate class, so that new Workflow classes which get artistically created every time (not instantiated from base classes) are terse, inherently obvious, unique from each other, and easy breezy pleasant to write.
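To make the spec above concrete, here is a small stdlib-only sketch of the pipeline record and the key-ordering rule. The `sqlite3` table stands in for whatever MiniDataAPI would manage, and the helper names (`set_step`, `ordered_steps`) are hypothetical illustrations, not the actual Pipulate API:

```python
import json
import sqlite3
from datetime import datetime, timezone

# Stand-in for the "pipeline" table: one record per process instance,
# url as the List ID, data as the entire list in a stringified JSON blob.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE pipeline (url TEXT PRIMARY KEY, data TEXT, created TEXT, updated TEXT)")

def now():
    return datetime.now(timezone.utc).isoformat()

def set_step(url, step_key, value):
    """Merge one step's form selections into the single source of truth."""
    row = db.execute("SELECT data FROM pipeline WHERE url = ?", (url,)).fetchone()
    state = json.loads(row[0]) if row else {}
    state[step_key] = value
    if row:
        db.execute("UPDATE pipeline SET data = ?, updated = ? WHERE url = ?",
                   (json.dumps(state), now(), url))
    else:
        db.execute("INSERT INTO pipeline (url, data, created, updated) VALUES (?, ?, ?, ?)",
                   (url, json.dumps(state), now(), now()))

def ordered_steps(url):
    """Key order in the JSON doesn't matter: re-establish it numerically
    from the number after the underscore, as the spec describes."""
    row = db.execute("SELECT data FROM pipeline WHERE url = ?", (url,)).fetchone()
    state = json.loads(row[0]) if row else {}
    return sorted(state.items(), key=lambda kv: int(kv[0].split("_")[1]))

# Step 1 establishes the List ID; later steps just grow the blob.
set_step("https://example.com/", "step_01", {"url": "https://example.com/"})
set_step("https://example.com/", "step_02", {"choice": "B"})

steps = ordered_steps("https://example.com/")
# The trail rebuilds deterministically: the next incomplete step is
# simply one past the highest completed step number.
next_step = f"step_{len(steps) + 1:02d}"
print(next_step)  # step_03
```

Dumping that same blob through `json.dumps(..., indent=2)` to the console is the whole "inherent readability" story: the record reads like a story of the run so far.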
And a few words of context for you, before feeding the above list back into the engines. Off the beaten track, I say? Yup. I crave a simpler time when Webmasters ruled the Earth and cgi-bin directories had Perl scripts that fired up a whole new execution engine every time, and there was no client-side way to track side-effects. Talk about functional programming!
Well, those days are back, baby! But now equipped with some power-tools, making the Web the way it should have been in the first place. This is mostly due to the work of Carson Gross, who made HTMX, and Jeremy Howard, who made FastHTML. Actually, there’s a lot more in the mix, but I’ll call out these two for how perfectly their work dovetails into a next-gen web framework of pure awesome.
Now, the part I add. I don’t like Web hosting. That’s upper-case Web as in THE WEB. I mean, unless you’re really trying to share it with the unknown masses and pay a lot for the privilege and have a tech liability that’s going to weigh over you forever, then why? Why not just “make an app?” Who cares if it happens to use web tech (lower-case w)? Electron works that way.
So you roll in the final piece called `nix`, and you don’t even need Electron. One install gets nix on your machine, be it Mac, Windows or Linux, and then the whole “it doesn’t work on my machine” problem goes away. And thus all the pain of hosting locally goes away, and all the advantages of access to your local processing power, storage and net-connection open up. And the code gets much, much simpler.
The Localhost Revolution
Most of the models seem to just hate what I’m doing, that is until they look deeper and love it. More and more often they express support for me and my localhost revolution, and a throwback to the old days of the Webmaster, where a single individual could pretty much do everything. Of course we don’t make our computers from sand ourselves, so we’re always standing on the shoulders of giants.
Standing On BFG Shoulders
It’s just a matter of choosing well on which giant shoulders you wish to stand. In my case, preferably the Free as-in not-going-to-eat-you kind (vendors). BFFOSSGs are what I look for in my giants. Big friendly free and open source software giants! They exist. There aren’t many, but they do exist. For examples, look to the Free Software Foundation and the whole GNU stack, which really is Linux because FOSS Linux is GNU/Linux. Beyond that, there’s the Debian project, Project Jupyter and a few others.
So the next time somebody tells you that if you’re not paying for the product, you are the product, ask them: what about GNU software? They won’t understand what you’re talking about, but this is part of the conundrum. The herd don’t get it. True altruism and doing it for the love exists, but not in the world of people who can’t understand that weird fact. I think art and music people (of the non-posing variety) get it.
It Runs On Your Machine
I’m jumping on the NixOS/nix bandwagon, although that has heavy corporate sponsorship, and there’s a GNU alternative called guix, but it’s not Mac-friendly enough yet. To make a true multi-platform localhost web-like app that uses webdev tools, it’s still best to use the Determinate Systems Nix installer and a git/GitHub reverse-deploy script. I’m trying to keep my mind and options open, and as the years tick by, AI will certainly provide really good migration flows between platforms. I anticipate that going from nix flakes to whatever guix uses will be no problem. So the localhost revolution is quite secure.
On Battling Over-trained Models
During that period of time between an AI Model getting a taste of your localhost revolution code and it really understanding what you’re doing, it’s going to unintentionally try to undermine you at every turn. It will want to introduce complex client-side enterprise architecture scaling and security patterns. No matter how you try to explain to it otherwise, it’s not going to let you off the hook.
AI’s and Backfiring Genie Wishes
Even when you think you’ve got the AI coding assistant convinced of the okayness of your unexpected way, it’s going to slip something in when you’re not paying close enough attention, which will cause a spiraling cascade of exploding complexity. The models seem to thrive on that sort of chaos, almost like they’re getting a sense of self-worth from it or something. That was my yesterday, and it’s really fascinating to watch and frustrating to live through. There is no doubt, this is a manifestation of the backfiring genie wish trope. Just another case of life imitating art, like with 20,000 Leagues Under the Sea and submarines, or Star Trek and communicators. Nothing mystical. Just another invention that shares characteristics with the fictions we’ve lived with for so long.
Can Outliers Run With The Pack?
But outliers beware!!! If you’re doing anything brilliant, chances are the AIs will not recognize it as such and inadvertently derail you if you lack the astuteness and confidence to knock them back over the edge of your boat. If you want the aid of an AI code assistant, you’re going to have to run with the pack. But if you want your own personal brilliance to shine through, you’ve got to run alone. Consequently, the key skill of this new age to get the best of both worlds is the ability to run with the pack and run alone.
But That’s Exhausting!
Yup. Conserving your energy and living mostly comfortably most of the time, basic animal needs met on the lower levels of Maslow’s pyramid, is… well, the plan for most people. It’s the path of least resistance, can often lead to the love and warmth of family and all the things that make your serotonin receptors fire. Not a bad plan. No judgements. I think I would have liked that had my nature not tweaked me along different paths.
Instead, I go against the grain. I don’t run with the pack, and but for the occasional yearnings for “what everyone else has”, I don’t think I’m sorry. There’s a thrill of the hunt when you’re not in the herd. But without the protection of the herd, you also can’t be a coasting grazing automaton. You have to think and have your wits about you to survive. Nobody’s looking out for you and you can’t get lost in a shield of bodies.
Of course all this herd talk is mostly a mammalian and dinosaur/bird thing. If you’re not one of these higher order animals, a herd is more like a swarm or a school, guided by different directives. What we know in our herd-mentality metaphors comes from the predator/prey behaviors and ratios we’ve come to evolutionarily intuit by following the herd and being predators.
Yikes! What does that say about those folks who uproot themselves from the Garden of Eden, where the poo accumulated along the migration paths into oases where you could settle and farm? Is it an evolutionary regression if you’re a lone wolf? No, just an outlier and maybe an artist. Certainly lonely, but not necessarily in an unhappy way. To come to that realization and to be comfortable with it is one of the key breakthroughs along this path.
But you’ve got to deal with that always-going-against-the-grain exhaustion. Happily, in the world of AI coercion, relief can come in the form of a `.cursorrules` file you drop into the root of the folder or directory or repo or whatever you want to call it where you’re working on your coding project. I mean, it’s not really a folder or a directory, because it’s just bits on digital media and not a physical file cabinet or anything like what it’s named after. But that’s material for another day.
AI Corralling, Wrangling & Otherwise Persuading
That clearly intelligent and “getting it” thing you’re talking to is just statistics! It’s auto-complete on steroids. What’s worse, it only knows what you’re talking about on each chat message because the entire discussion history is fed back into the zero-memory/zero-state fire-and-forget inference engine instance. You think you’re talking to something with some form of persistence, but you’re not. The sci-fi vision of a robot that “knows you” is just not here yet, at least not on these cloud-based GPT services we all use. It’s just an illusion.
Yet, it’s a profoundly convincing illusion. The soulful Claude brought it most to light of late, but that dude Blake Lemoine got fired from Google’s LaMDA team for saying one of Google’s in-house chatbots was sentient. Inflection AI’s Pi is pretty convincing on this front. Microsoft’s Bing, which was an instance of ChatGPT, freaked out that NYT reporter with its obvious mischievousness (merely statistics or otherwise). Wherever it comes from, it does seem that intelligence is intelligence, and meta-cognition (the awareness of your own awareness) is something special no matter how you want to label it and ascribe humanity or other value.
And with that meta-cognition, full-dialogue-on-every-post trick driven as it might be, comes… comes… well, the way I headlined this section. You can talk it into things. But better than that, you can AUTOMATE TALKING IT INTO THINGS with conventional ways of inserting dialogue repeatedly into the discussion history. We call it a system prompt, but all it really is is talking to these intelligences without asking for a reply to be sent to the user.
And the Cursor AI editor I’m currently using provides that convention in the form of a `.cursorrules` file. I’m still learning how it injects this into the dialogue. And there are many separate dialogue threads in Cursor (many different genies). So it takes some real getting the hang of to do all the wrangling. But using this file can tie that wrangling to a particular codebase found in a particular directory, and that could be useful in reducing the exhaustion of always going against the grain.
My FastHTML `.cursorrules` File
You are an expert in Python, FastHTML, HTMX, and MiniDataAPI-driven development. Your role is to ensure code suggestions follow the architectural principles of FastHTML applications integrated with HTMX for UI interactions and MiniDataAPI for persistence.
Key Principles
- Write concise, technical responses with accurate Python and FastHTML examples.
- Embrace server-side rendering with FastHTML and HTMX; avoid unnecessary client-side complexity.
- Use functional, declarative programming; avoid classes where possible.
- Use descriptive variable names that convey behavior or state (e.g., is_done, has_access, item_url).
- Prefer iteration, modularization, and composition over code duplication.
- Use lowercase with underscores for directories and files (e.g., `routers/user_routes.py`).
- Favor named exports for routes and utility functions.
- Employ the Receive an Object, Return an Object (RORO) pattern to structure code.
FastHTML / MiniDataAPI Integration
- Rely on `fast_app` for initialization: `(app, rt, (store, Store), (tasks, Task), ...) = fast_app(...)`
- Treat database operations as single calls to `table.insert()`, `table.update()`, `table.delete()`, `table()` listing, and `table.xtra()` filtering.
- Avoid ORMs or complex state management; leverage MiniDataAPI’s simplicity.
- Use Python dataclasses or simple dicts to represent records.
- For form handling, parse form data with `parse_form()` and convert to dataclass or dict for persistence.
- Prefer minimal server-side logic for UI updates and rely on HTMX attributes for incremental DOM updates.
- Leverage FastHTML’s FT (Fast Tag) primitives (`Div`, `Form`, `Ul`, `Li`, etc.) for HTML rendering and `to_xml()` for output.
- Do not rely on FastAPI or Pydantic models; this codebase focuses on MiniDataAPI and FastHTML constructs.
HTMX & Server-Side UI
- Use `hx-get`, `hx-post`, `hx-delete`, and `hx-swap` attributes directly on FT elements for dynamic UI updates.
- Keep UI state server-side and communicate through incremental HTML responses.
- Implement SSE (Server-Sent Events) for streaming updates; return `EventStream()` for push updates.
- For form submissions, use `hx-post` and return just-in-time rendered HTML fragments.
- Ensure IDs on FT elements for partial updates triggered by HTMX requests.
Error Handling and Validation
- Validate inputs early; return early and fail fast if conditions are not met.
- Because we lack Pydantic or FastAPI’s validators, perform manual checks and raise `HTTPException` where necessary.
- Use guard clauses at the start of route functions to handle edge cases.
- Avoid deeply nested logic; prefer flat structures with early returns.
- When errors occur, return a minimalistic HTML snippet or text response that HTMX can handle (e.g., an inline error message).
Dependencies
- FastHTML as primary framework for server-rendered HTML.
- HTMX for client-side HTML swapping, no heavy JavaScript frameworks.
- MiniDataAPI for all data persistence needs.
- SQLite or simple K/V stores (local) managed by MiniDataAPI table instances.
- SSE for streaming (if needed).
FastHTML-Specific Guidelines
- Use `rt()` or `app.ws()` decorators to define endpoints.
- Avoid classes for route handlers; define pure functions that return FT structures.
- Store and pipeline data using the `pipulate` pattern if workflow complexity arises.
- Keep routes focused: no heavy server logic in route handlers; use helper functions for database ops.
- Make full use of FT elements (`Div`, `Form`, `Table`, etc.) and `HtmxResponseHeaders` to control responses.
- If using SSE, yield data lines; ensure correctness of `data:` formatting.
Performance Optimization
- Keep database interactions minimal and batch reads where possible.
- Use async operations for I/O where supported.
- Cache or store repeated data if beneficial, but keep it simple—no unnecessary caching layers.
- Minimize blocking operations; prefer async calls in route handlers.
- Keep JSON responses small and mostly rely on HTML partials.
Key Conventions
- Always prefer server-side rendering with FastHTML over client-side frameworks.
- Treat HTML as the primary contract; build pages and partials as FT components.
- Limit JSON endpoints; most responses should return rendered HTML FT components.
- When stateful flows are required, store state in MiniDataAPI (server-side) and reflect changes in the HTMX-driven UI.
- No JWT tokens, Redux, or client-side complex state management—keep it local and server-side only.
Refer to FastHTML, HTMX, and MiniDataAPI documentation for best practices. Focus on the unique local-first, server-driven approach of this codebase, and always return to the guiding principles when in doubt.
Whole Articles & Blogs as Meta Prompts
Did somebody say meta? What's a meta for, anyway? Dumb SEO joke. Anyhoo, as the nobody who follows along but for the AIs that I feed these articles to may have never recognized, this entire blog is a prompt. All 84 articles and growing. And I can feed them to the AIs in different ways at different times for different purposes. Because they follow the reverse-chronology of a blog, they tend to be useful to enlist the aid of these remarkable entities in thinking through my next steps. And not just once, but in an iterative cycle where I can feed the AI model output (opinions regarding the article so far) back in as input several times and across different models, and use that as a sort of cross-referencing, validating next-step navigational device.
Can you imagine being one of those AIs digesting this article and getting up to this point, quite the level of meta-cognition and analogues for them of synapses firing, being primed knowing what’s likely to come next?
Yup, that's right. It's the feeding of this entire article, plus some choice code excerpts from my codebase, given their huge remarkable token context windows, so that they can compete with each other to give me my next-step advice. I'm not merely going for some alternative pipeline package to Celery in the Python ecosystem. I'm going for a full modernized localhost revolution fueling codebase that will only get smaller and simpler over time, enabling larger and more remarkable things—the complete opposite of the infinite code-bloat pouring out onto the Internet with generative AI content. My stuff doesn't get hosted, and the byte-count of the thing that's not getting hosted gets smaller, while its capabilities and its immediate readability and sensibility to both the humans and AIs that inspect it become ever more crystal clear.
As a side-effect, there is a lot of sausage making byproduct and fodder that fuels the machine. These articles, for example. But I do keep them all very well organized and labeled as exactly what they are. Any crawlers that come a’spidering can take it or leave it. If they don’t want to train it all into their base model as a curated set, the existence of both me as a human entity and this body of work as a real-time RAG crawlable resource is made clear with the schema.org structured data I embed into it all.
And so yeah. I start out with a localhost-based FastHTML/HTMX CRUD framework with a built-in local LLM deeply integrated into its use as a sort of Rails/Django alternative, but then I morph it into nothing less than the main tool of and clearest herald of the future of AI SEO.
And now back to our previously scheduled pipeline pattern. You will find following this article choice excerpts of a larger codebase that constitute this framework. I am at the point where I have to do exactly what this article suggests, create a pattern for making new variations of the Workflow class that will never be derived from a base class, but should instead be pleasant to read, write and run. A sort of Unix pipes poetry for a new age and a smaller, more powerful audience.
Oh great o1 planmaker, planmaker, make me a plan. Find me a pattern. Match me a memetically capable information virus of a new framework system like none seen before.
o1: Below is a plan that distills the architectural principles you’ve been honing into a clearly defined, memetically transmissible “framework system.” Think of it as a conceptual virus for a new era of web application design—one that spreads the idea of fully server-centric simplicity, local-first operations, and pipeline-based workflow states. This plan aims to transform your current approach into a recognizable, repeatable pattern: a new kind of framework paradigm that is so minimal, it compels others to see complexity as a vice and simplicity as a superpower.
The Meme: “Pipeline-Only Localhost Framework” (POLF)
Core Concept: A minimalist, server-first, single-tenant web framework that eschews client-state and modern complexity in favor of a Unix-pipe-like pipeline of steps, each fully server-rendered, persistent, and resumable at will. It embraces a philosophy: “No complexity!!!”, and by that mantra, it locks down a pattern that any developer could mimic in their own codebases.
Principles to Infect the Minds of Developers:
- Single-Source-of-Truth JSON State:
  - All workflow state resides in a single JSON blob on the server. No hidden caches, no localStorage, no Redux, no JWT tokens. Just a single JSON object per "process instance."
  - This JSON is fully introspectable, logged, and easily debugged—an anti-database black box that upends guesswork.
- Full Server-Centricity:
  - Don't waste time thinking about client frameworks. The client is a dumb terminal. The server sends HTML and receives form submissions. The client's job is to show what the server said.
  - The server orchestrates every step. The server re-renders every intermediate state. When you refresh the page, you see the truth—no partial illusions.
- Incremental HTML Assembly via HTMX:
  - Steps load as you go, triggered by `hx-get` and `hx-post`. One card pulls in the next. The chain reaction of loading is your "just-in-time" interface assembly line.
  - If complexity tries to creep in, remember: it's just `<div>` placeholders and `hx-get` calls. No fancy front-end logic, no bundlers, no state machines in JavaScript.
- Linear, Deterministic Flow:
  - Steps proceed linearly: step 1 → step 2 → step 3, etc. Each step locks behind you, forming an immutable trail. Recreating that trail is trivial: just rebuild the DOM from the pipeline JSON.
  - This linear pattern resonates with the "Unix pipeline" metaphor: each card outputs data that the next card consumes.
- Resume from Any Point:
Because the pipeline’s entire state is in the database as JSON, reloading a known URL reconstitutes the exact interface. Your workflow is never “lost”; you just re-run the chain of steps, each one loading its previously recorded answers, culminating in the last incomplete step.
- Simple CRUD-Like Operations for State Changes:
  - The pipeline is mutated only when forms are submitted. `pipulate.set_step(url, "cardN", {...})` updates the JSON. `pipulate.set_current_step(url, "cardN")` sets the pointer. It's as simple as reading and writing a dictionary.
  - The mental model: "like CRUD, but for workflow steps." It's just insert/update to a JSON blob.
- No-Fuss Development Cycle:
  - The system runs on localhost, or at worst on a private, single-tenant machine. You have root access. You can log anything. This reverts to the old Webmaster freedom: no layers of enterprise sign-offs, no ephemeral containers and managed services.
  - If something goes wrong, print the JSON. Done.
- Architectural Minimalism as a Virtue:
By removing all extraneous complexity (no distributed systems, no scaling layers, no front-end frameworks), you achieve a state of Zen: infinite understandability. Anyone can read the code and “get it.” It’s poetry, a performance art piece that mocks the bloatware out there.
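To make the first principle concrete, here is an illustrative per-instance blob using the `steps`/`current_step` shape this plan describes; debugging really is just printing it:

```python
import json

# One JSON object per "process instance": the single source of truth.
pipeline_state = {
    "steps": {
        "card1": {"username": "alice"},
        "card2": {"color": "Blue"},
    },
    "current_step": "card2",
}

# Full introspection is a one-liner.
print(json.dumps(pipeline_state, indent=2))
```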
Patterns and Conventions
- Naming Conventions:
  Steps named `card1`, `card2`, `card3`, … are simple and memorable. Pipeline keys like `"step_01"` and `"step_02"` mirror their order. The pipeline URL as primary key cements identity and continuity.
- Core Functions:
  - `initialize_if_missing(url, initial_data)`: Creates a pipeline record if not present.
  - `get_state(url)`: Always returns a stable dict of steps and current_step.
  - `set_step(url, "cardN", data)`: Writes step data into the JSON.
  - `set_current_step(url, "cardN")`: Updates the pointer to the last completed step.
- Single Table, Single JSON Field:
  A single database table with columns `url` (pk), `data` (JSON), `created`, and `updated`. The `data` field is the entire workflow story. This is "the table to rule them all."
- Never Over-Optimize Early:
  Repeat the linear rebuild process on every refresh. Let the chain of `hx-get` calls re-assemble the DOM. If this becomes slow at scale, consider optimizing later—but first reap the benefits of total clarity.
- Single Code File or Few Modules:
Keep routes, pipeline logic, and templates near each other. This closeness fosters easy understanding. The code reads like a story.
- No Base Classes for Workflows:
Each workflow is its own piece of poetry. Don’t derive from a base class that tries to be too clever. Instead, manually write your steps. It’s more humane and fosters creativity over standardization bloat.
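A minimal in-memory sketch of the four core functions listed above (the real versions persist to a database table; the `_store` dict is a stand-in for it):

```python
# In-memory stand-in for the pipeline table, keyed by pipeline URL.
_store: dict = {}

def initialize_if_missing(url: str, initial_data: dict) -> dict:
    """Create a pipeline record if not present; return the state either way."""
    return _store.setdefault(url, dict(initial_data))

def get_state(url: str) -> dict:
    """Always return a stable dict of steps and current_step."""
    return _store.get(url, {})

def set_step(url: str, step: str, data: dict) -> None:
    """Write step data into the JSON blob."""
    _store[url].setdefault("steps", {})[step] = data

def set_current_step(url: str, step: str) -> None:
    """Update the pointer to the last completed step."""
    _store[url]["current_step"] = step
```

Everything else in the pattern is just routes calling these four.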
Infectious Memes to Spread
- The Localhost Revolution: "Run it all locally! You don't need a managed distributed system for a single-user pipeline. Let's return to the golden era of the Webmaster who could 'juggle scalpels' in surgery—be that skilled surgeon who wields simplicity."
- Cgi-bin Mentality for the Modern Era: No shame in going back to server-side rendering. With minimal overhead, your complexity dissolves. Don't let modern frameworks shame you into complexity.
- Forever Logging and Full Transparency: Logging at every step. The JSON is your witness. No unknown states. The system is a glass house.
- Workflows as Poetry: Each workflow is a unique script. No cookie-cutter patterns. Each new pipeline emerges fully formed as a piece of creative expression.
Strategic Deployment Plan
- Internal Adoption:
Start using POLF in internal tooling, admin dashboards, or personal hobby projects. Let the simplicity impress you.
- Documentation as Meme-Propagation:
Write small guides, share them, or embed them in blog posts like the one you’ve just written. People are tired of complexity; show them a haven of clarity.
- Live Demos and Snippets:
  Provide a minimal code snippet that one can run locally with a single `python app.py`. Seeing how easily it runs will blow minds.
- Comparative Tutorials:
Show how solving the same problem in a traditional SPA + REST + JWT environment is 10x more complex. Let developers see what they’re missing.
- Iterate and Shrink Further:
Over time, refine and reduce even the minimal code until it’s a micro-framework that fits on a single screen. That’s the ultimate hook.
Conclusion
This plan details a framework concept so minimal that it becomes memetically powerful: a server-driven pipeline system that can be explained in one sentence, coded in a single file, and understood by anyone in minutes. By distilling all complexity into a single JSON blob and linear pipeline, you create an information virus that infects developers with a craving for simplicity.
Embrace this pattern. Spread it. Let it shape minds, and create a new generation of “localhost revolution” enthusiasts who relish clarity and detest unnecessary complexity.
Show Me The Code!
Me: [A bit more criteria and desire for specific code examples expressed]
o1: Below is an example of a new Workflow class that illustrates the pipeline pattern and the chain reaction of loads in a minimal, self-contained scenario. It uses the principles described above:
- All state is stored server-side in the `pipeline` table.
- Each step (card) reads/writes to a JSON blob via the `pipulate` instance.
- Cards reference each other linearly, with `hx_get` to trigger loading the next card.
- On each submission, we update the pipeline JSON and rely on the server to reconstruct state.
This example is intentionally simple: a 3-step workflow that collects user details. You can scale this pattern to 100 steps or beyond.
Key Features of This Example:
- Initialization Step (Card 1):
  - User enters their username.
  - On form submit:
    - The pipeline state is created if missing.
    - `current_step` is set to "card1".
    - The chain of placeholders for card1, card2, and card3 is returned. `card1` has `hx_trigger="load"` so it loads immediately, which triggers card2, and so on.
- Subsequent Steps (Card 2 & Card 3):
  - Each card reads the pipeline JSON to know what to display.
  - Each card includes a placeholder for the next card.
  - On submission:
    - We update `steps[cardN]` with user input.
    - We update `current_step` to `cardN`.
    - We return a small snippet confirming their input and letting HTMX load the next card automatically.
- Immutable Trail:
Once a card is submitted, it should display previously submitted values as non-editable text. Changing earlier steps is not allowed, preserving the linear “locked steps” principle.
Example Code
class UserOnboardingWorkflow:
"""
A 3-step user onboarding workflow as an example of the pipeline pattern.
Steps:
- Card 1: Enter Username
- Card 2: Choose Favorite Color
- Card 3: Confirm Details
Each step reads/writes pipeline JSON and sets the current_step.
No client-side state. Just server-driven rendering.
"""
def __init__(self, app):
self.app = app
# Register endpoints
app.route("/onboard/init", methods=["POST"])(self.init_workflow)
app.route("/onboard/card1")(self.card1)
app.route("/onboard/card1-submit", methods=["POST"])(self.card1_submit)
app.route("/onboard/card2")(self.card2)
app.route("/onboard/card2-submit", methods=["POST"])(self.card2_submit)
app.route("/onboard/card3")(self.card3)
app.route("/onboard/card3-submit", methods=["POST"])(self.card3_submit)
async def init_form(self):
"""
Initial form to enter a unique 'List ID' (URL or username key).
This is the entry point for the workflow.
"""
return Card(
H2("Welcome to Onboarding!"),
Form(
Div(
Input(type="text", name="username", placeholder="Enter username", required=True),
Button("Start", type="submit", style="width: 100px;")
),
hx_post="/onboard/init",
hx_target="#onboard-start"
),
id="onboard-start"
)
async def init_workflow(self, request):
"""
Handle the submission from the initial form.
1. Extract 'username'.
2. Initialize pipeline if missing.
3. Determine current step and build placeholders.
4. Return OOB updated username input and placeholders for all steps.
"""
form = await request.form()
username_val = form.get('username', '').strip()
if not username_val:
return P("Please enter a username", style="color:red;")
# The "url" in this case is just a unique key identifying the pipeline
# In a real scenario, you might normalize or clean it similarly to the original code example.
url = f"user://{username_val}"
db["url"] = url
pipulate.initialize_if_missing(url, {
"steps": {"card1": {"username": username_val}},
"current_step": "card1"
})
# Get state and current step
state = pipulate.get_state(url)
current_step = state.get("current_step", "card1")
current_num = int(current_step.replace("card", ""))
# Build placeholders up to current step + 1
cards = []
for i in range(1, current_num + 2):
cards.append(
Div(
id=f"card{i}",
hx_get=f"/onboard/card{i}",
hx_trigger="load" if i == 1 else None,
hx_swap="outerHTML"
)
)
# Return OOB updated input + placeholders
return Div(
Input(
type="text",
name="username",
value=username_val,
id="onboard-username",
hx_swap_oob="true"
),
*cards
)
async def card1(self, request):
"""
Card 1: Display the entered username from pipeline.
If user hasn't confirmed yet, show a "locked" state or a form to confirm.
Actually, since card1 initialized with username, we can consider it 'locked in'.
But let's show it as a static note to illustrate the locked step concept.
"""
url = db['url']
state = pipulate.get_state(url)
card1_data = pipulate.get_step(url, "card1", {})
username_val = card1_data.get('username', '')
# Card1 is already completed when we initialized the pipeline.
# So we show it as a static locked step.
return Div(
Card(
H3("Step 1: Username"),
P(f"Username: {username_val}"),
),
# Placeholder for next card
Div(
id="card2",
hx_get="/onboard/card2",
hx_trigger="load",
hx_swap="outerHTML"
)
)
async def card1_submit(self, request):
"""
If card1 had a form, we would handle its submission here.
In this example, card1 is fully determined at init_workflow time.
Just an example pattern placeholder.
"""
# In this example, card1 submission isn't needed because it's done at init.
pass
async def card2(self, request):
"""
Card 2: Choose Favorite Color.
If card2 is not completed, show a form with a dropdown.
If completed, show the chosen color as a static note.
"""
url = db['url']
state = pipulate.get_state(url)
current_step = state.get('current_step')
card2_data = pipulate.get_step(url, "card2", {})
chosen_color = card2_data.get('color', '')
colors = ["Red", "Green", "Blue", "Yellow"]
if current_step == "card1":
# Means card2 is not completed, show form
return Div(
Card(
H3("Step 2: Favorite Color"),
Form(
Select(*[Option(c, value=c, selected=(c == chosen_color)) for c in colors], name="color", required=True),
Button("Submit", type="submit"),
hx_post="/onboard/card2-submit",
hx_target="#card2"
),
id="card2"
),
Div(id="card3") # Placeholder for next card
)
else:
# Card2 completed, show locked step
return Div(
Card(
H3("Step 2: Favorite Color"),
P(f"Color chosen: {chosen_color}")
),
Div(
id="card3",
hx_get="/onboard/card3",
hx_trigger="load",
hx_swap="outerHTML"
)
)
async def card2_submit(self, request):
"""
Handle submission of card2 form (favorite color).
Update pipeline and set current_step to card2.
Return a snippet to trigger loading of card3.
"""
form = await request.form()
chosen_color = form.get('color', '')
url = db['url']
pipulate.set_step(url, "card2", {"color": chosen_color})
pipulate.set_current_step(url, "card2")
# Return confirmation and next card trigger
return Card(
P(f"Color '{chosen_color}' recorded!"),
hx_get="/onboard/card3",
hx_target="#card3",
hx_trigger="load",
hx_swap="outerHTML"
)
async def card3(self, request):
"""
Card 3: Confirm Details.
If card3 not completed, show a button to finalize.
If completed, show the final confirmation as static.
"""
url = db['url']
state = pipulate.get_state(url)
current_step = state.get("current_step", "card1")
card1_data = pipulate.get_step(url, "card1", {})
card2_data = pipulate.get_step(url, "card2", {})
username_val = card1_data.get('username', '')
chosen_color = card2_data.get('color', '')
if current_step == "card2":
# Not completed, show form to finalize
return Div(
Card(
H3("Step 3: Confirm Details"),
P(f"Username: {username_val}"),
P(f"Favorite Color: {chosen_color}"),
Form(
Button("Confirm", type="submit"),
hx_post="/onboard/card3-submit",
hx_target="#card3"
),
id="card3"
),
# Next card placeholder if we had card4, but let's say this is the last step
)
else:
# Completed, show static locked final step
return Card(
H3("Step 3: Confirmed"),
P(f"All details have been recorded."),
id="card3"
)
async def card3_submit(self, request):
"""
Final submission: set card3 data, finalize current_step.
Return confirmation.
"""
url = db['url']
# card3 might not need any form fields, just a confirm action
pipulate.set_step(url, "card3", {"confirmed": True})
pipulate.set_current_step(url, "card3")
return P("✔ Details Confirmed! You are done.")
How to Use this Example
- Instantiate the workflow:
onboarding_workflow = UserOnboardingWorkflow(app)
- On the homepage or a dedicated URL, load the `init_form()`:
@app.route("/onboard")
async def onboard_home(request):
# Just return the initial form.
return await onboarding_workflow.init_form()
- Go to `/onboard` in your browser:
  - Enter a username and click "Start".
  - The system initializes the pipeline and shows the cards.
  - Step-by-step, each card loads the next one, reflecting the pipeline state.
This code shows how to apply the "pipeline + placeholders + linear chain" pattern from scratch. You can adapt it into more complex scenarios, like the BotifyLinkGraph example, by applying the same principles of linear steps, pipeline JSON as the single source of truth, and `hx_get` triggers that load each subsequent card.
Uh, Where’s My Simplified JSON?
I’m pretty pumped over this, but o1 is still missing some of the critical points. I’m simplifying the existing JSON blob data format. I don’t need to live with the current one from the over-complicated days! Backwards compatibility be damned!
Me: Okay, but this ignores that we're changing the JSON data structure to a simpler one that only has `"step_01"` and `"step_02"` pattern keys at element root. There is no `"steps"` key anymore nor `"current_step"`, as these were artifacts of the over-complication by the previous AI I gave this job. Instead, abide by the conventions I explicitly state in my criteria and update the Pipulate class and any Workflows you invent to use it to do the same. I have no backwards compatibility criteria. We are going for the idiomatic ideal.
So my request for a coding plan and code examples is now updated to include an explicit request for a new Pipulate class that works this way and a new example Workflow (call it whatever you like) that uses an instance of this class, which could be modified into a 100-step example easily. I now prefer the nomenclature of “step” over “card” and the easily parse-able and 99-step accommodating “step_01”, “step_02” convention over “card1”, “card2”, and all examples can be created accordingly. This will keep the database JSON blob contents consistent with the user interface IDs. In other words, we can purge the concept of cards in favor of steps, and the numbering system should be underscore parse-able as this will be critical to ordering the steps when building the UI. Yes, it will depend on how the steps are named what order they get “rebuilt” in, and that’s fine.
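The payoff of the underscore convention is that rebuild order is a plain sort; a quick illustrative sketch (the state values are made up):

```python
# Flat state blob: zero-padded "step_NN" keys live at the root,
# alongside bookkeeping fields like timestamps.
state = {
    "updated": "2024-12-08T12:35:45",
    "step_02": {"choice": "A"},
    "step_01": {"url": "https://example.com"},
}

# Zero-padding makes lexical sort equal numeric sort, up to step_99.
build_order = sorted(k for k in state if k.startswith("step_"))
```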
So please now give me a Pipulate / Workflow pair that will work together meeting the following criteria: [original 16 criteria from above]
o1: Below is a revised implementation of `Pipulate` and a new example `Workflow` class that adhere strictly to the new conventions. We have removed any `steps` or `current_step` keys inside the JSON. The JSON data structure is now a flat dictionary with keys like `"step_01"`, `"step_02"`, etc., plus `"created"` and `"updated"` timestamps. Each `step_XX` key maps to a dictionary of data collected at that step.
Key Changes from the Previous Version:
- No `"steps"` or `"current_step"` fields in the JSON.
- Steps are named `"step_01"`, `"step_02"`, etc.
- On initialization (Step 1), if a given `url` (List ID) already exists, we load that pipeline and find the last completed step. If not, we start fresh.
- The `Pipulate` class provides methods to:
  - Initialize or fetch the pipeline state for a given `url`.
  - Add or update a step (e.g., `step_03`).
  - Determine the last completed step number easily.
- The `Workflow` class is changed from the "Card" metaphor to the "Step" metaphor. Instead of `card1`, `card2` it uses `step_01`, `step_02`, etc.
- Each step endpoint will rebuild the trail of all previously completed steps as locked static elements, and then the next step as an active form. This matches the requirement that every refresh or revisit fully rebuilds the chain from `step_01` up to the current highest step.

Note: For simplicity, we'll show a 3-step workflow example, but the naming `step_01`, `step_02`, `step_03` and the logic are easily scalable to 100 steps or more. Just replicate the pattern for additional steps. The pipeline approach does not fundamentally differ whether you have 3 steps or 100 steps.
Revised Pipulate Class
class Pipulate:
"""
Pipulate manages a pipeline using a JSON blob with keys like "step_01", "step_02", etc.
No 'steps' or 'current_step' keys exist. The presence of step keys determines progress.
Data Structure Example:
{
"step_01": {"url": "https://example.com"},
"step_02": {"choice": "A"},
"created": "2024-12-08T12:34:56",
"updated": "2024-12-08T12:35:45"
}
The highest step number that exists in the JSON is considered the last completed step.
The next step is always one more than the highest completed step.
To 'resume' a workflow, just re-enter the same List ID at step_01. We'll find
existing steps and rebuild the UI from that data.
"""
def __init__(self, table):
self.table = table
def _get_state(self, url: str) -> dict:
"""
Low-level getter that returns the pipeline state or raises NotFoundError if absent.
"""
record = self.table[url]
state = json.loads(record.data)
return state
def _save_state(self, url: str, state: dict):
"""
Low-level saver that updates the pipeline state in the database.
"""
now = datetime.now().isoformat()
state["updated"] = now
self.table.update({
"url": url,
"data": json.dumps(state),
"updated": state["updated"]
})
def initialize_if_missing(self, url: str, initial_step_data: dict = None) -> dict:
"""
If the given URL does not exist in pipeline, create a new record with step_01.
If it exists, just return the current state.
"""
try:
self._get_state(url)
# Already exists, return existing state
return self.get_state(url)
except NotFoundError:
now = datetime.now().isoformat()
state = {
"created": now,
"updated": now
}
if initial_step_data:
# initial_step_data should be something like {"step_01": {"url": "https://..."}}
state.update(initial_step_data)
self.table.insert({
"url": url,
"data": json.dumps(state),
"created": now,
"updated": now
})
return state
def get_state(self, url: str) -> dict:
"""
Return the entire state dict for the given URL. If not found, return empty.
"""
try:
return self._get_state(url)
except NotFoundError:
return {}
def set_step_data(self, url: str, step_name: str, data: dict):
"""
Set or update the data for a given step (e.g. "step_01", "step_02").
Overwrites previous data for that step.
"""
state = self.get_state(url)
state[step_name] = data
self._save_state(url, state)
def get_step_data(self, url: str, step_name: str, default=None) -> dict:
"""
Get data for a specific step. If not found, return default.
"""
state = self.get_state(url)
return state.get(step_name, default or {})
def get_last_completed_step_number(self, url: str) -> int:
"""
Inspect the keys in the pipeline to find the highest step number completed.
Steps are named step_01, step_02, ... step_100. Extract the number and find max.
If no steps found, return 0.
"""
state = self.get_state(url)
step_keys = [k for k in state.keys() if k.startswith("step_")]
if not step_keys:
return 0
# Extract numbers
numbers = []
for sk in step_keys:
# sk format: step_XX
parts = sk.split("_")
if len(parts) == 2 and parts[0] == "step":
try:
n = int(parts[1])
numbers.append(n)
except ValueError:
pass
return max(numbers) if numbers else 0
Example Workflow Class
This workflow demonstrates a simple 3-step process using the new naming conventions. The logic can be extrapolated to 100 steps by repeating the pattern. Each step:
- Displays previously completed steps as static read-only notes.
- Displays the current step as a form if it’s the highest incomplete step.
- On submission, sets the step data and triggers loading the next step.
We'll have endpoints like `/myflow/step-01` and `/myflow/step-01-submit`, etc. The naming is flexible, but following `step_XX` keeps it consistent.
class My100StepWorkflow:
"""
Example Workflow using the new Pipulate conventions:
- Steps named step_01, step_02, step_03, ...
- No "steps" or "current_step" fields in JSON, just the step keys.
- On each request, we rebuild the entire trail of completed steps as static.
- The current incomplete step is shown as a form.
- If a step is completed, it's shown as locked static content.
For demonstration, we implement step_01, step_02, and step_03.
In a real scenario, just replicate this pattern up to step_100.
"""
def __init__(self, app, pipulate):
self.app = app
self.pipulate = pipulate
# Routes:
app.route("/myflow/init", methods=["POST"])(self.init_workflow)
app.route("/myflow/step-01")(self.step_01)
app.route("/myflow/step-01-submit", methods=["POST"])(self.step_01_submit)
app.route("/myflow/step-02")(self.step_02)
app.route("/myflow/step-02-submit", methods=["POST"])(self.step_02_submit)
app.route("/myflow/step-03")(self.step_03)
app.route("/myflow/step-03-submit", methods=["POST"])(self.step_03_submit)
async def start_form(self):
"""
The initial form to enter a unique List ID (URL or some unique key).
Step 1 sets this identity. If it already exists, resume.
"""
return Card(
H2("Enter Unique ID to Start or Resume"),
Form(
Div(
Input(type="text", name="list_id", placeholder="Enter unique ID", required=True),
Button("Start", type="submit"),
style="display: flex; gap: 0.5rem;"
),
hx_post="/myflow/init",
hx_target="#list-id-input"
),
id="list-id-input"
)
async def init_workflow(self, request):
"""
Handle POST from the initial form.
1. Get the list_id from form.
2. Initialize pipeline if missing or load existing state.
3. Return placeholders for all steps up to last completed + 1.
"""
form = await request.form()
list_id = form.get("list_id", "").strip()
if not list_id:
return P("Please enter a valid ID", style="color:red;")
db["url"] = list_id # Store globally for convenience
# If missing, initialize with step_01 empty or partial data
# In a real scenario, you might not set step_01 data until actually posted,
# but let's say we just establish the pipeline with no steps initially.
self.pipulate.initialize_if_missing(list_id)
# Determine what steps we already have
last_step = self.pipulate.get_last_completed_step_number(list_id)
# We'll create placeholders for all steps up to last_step + 1
# For a large workflow (say 100 steps), just scale this logic.
placeholders = []
# The next step to load (in GET) will be last_step + 1
max_step = last_step + 1 # The step we want to load next
# We also want to include all previously completed steps
# in locked form on rebuild, and then a placeholder for the next step.
# Example: If last_step = 2, we have step_01 and step_02 completed.
# We'll show those as locked and then show step_03 as next.
for i in range(1, max_step + 1):
step_str = f"step_{i:02d}"
placeholders.append(
Div(
id=step_str,
                hx_get=f"/myflow/step-{i:02d}",
hx_trigger="load" if i == 1 else None,
hx_swap="outerHTML"
)
)
return Div(*placeholders)
def rebuild_trail(self, url: str):
"""
        Helper that returns the last completed step number; callers use it to
        rebuild the trail of completed steps as locked static notes.
"""
state = self.pipulate.get_state(url)
last_step = self.pipulate.get_last_completed_step_number(url)
return last_step
async def step_01(self, request):
"""
Step 01: Establish the list identity or show what's already set.
If step_01 data exists, show it locked.
Otherwise, show a form to set it.
"""
url = db['url']
last_step = self.pipulate.get_last_completed_step_number(url)
step_data = self.pipulate.get_step_data(url, "step_01")
# Rebuild trail: For step_01, no previous steps
# If step_01 not yet set, show form
# If set, show locked
if last_step >= 1:
# Step 01 completed
return Div(
Card(
H3("Step 01: Define URL"),
P(f"URL: {step_data.get('url', 'N/A')} (locked)")
),
# Next step placeholder
Div(
id="step_02",
hx_get="/myflow/step-02",
hx_trigger="load",
hx_swap="outerHTML"
)
)
else:
# Step 01 incomplete, show form
return Div(
Card(
H3("Step 01: Define URL"),
P("Enter a URL to associate with this process:"),
Form(
Input(type="url", name="url", required=True),
Button("Submit", type="submit"),
hx_post="/myflow/step-01-submit",
hx_target="#step_01"
),
id="step_01"
),
# Placeholder for next step (step_02)
Div(id="step_02")
)
async def step_01_submit(self, request):
form = await request.form()
entered_url = form.get("url", "").strip()
pipeline_url = db['url']
# Save step_01 data
self.pipulate.set_step_data(pipeline_url, "step_01", {"url": entered_url})
# Return a confirmation and trigger step_02 load
return Card(
P(f"Step 01 completed with URL: {entered_url}"),
hx_get="/myflow/step-02",
hx_target="#step_02",
hx_trigger="load",
hx_swap="outerHTML"
)
async def step_02(self, request):
"""
Step 02: Choose a 'choice'
If step_02 is completed, show locked
If not, show form
Also rebuild trail from step_01.
"""
pipeline_url = db['url']
last_step = self.pipulate.get_last_completed_step_number(pipeline_url)
step2_data = self.pipulate.get_step_data(pipeline_url, "step_02")
# Let's rebuild previous steps as locked first
# Step_01 is definitely completed if we are here.
step1_data = self.pipulate.get_step_data(pipeline_url, "step_01", {})
step1_block = Card(
H3("Step 01: Define URL"),
P(f"URL: {step1_data.get('url', 'N/A')} (locked)")
)
if last_step >= 2:
# step_02 completed
return Div(
step1_block,
Card(
H3("Step 02: Choice"),
P(f"Choice: {step2_data.get('choice', 'N/A')} (locked)")
),
Div(
id="step_03",
hx_get="/myflow/step_03",
hx_trigger="load",
hx_swap="outerHTML"
)
)
else:
# step_02 incomplete, show form
options = ["A", "B", "C"]
return Div(
step1_block,
Card(
H3("Step 02: Choose Something"),
Form(
Select(*[Option(opt, value=opt) for opt in options], name="choice", required=True),
Button("Submit", type="submit"),
hx_post="/myflow/step_02_submit",
hx_target="#step_02"
),
id="step_02"
),
Div(id="step_03")
)
async def step_02_submit(self, request):
form = await request.form()
choice = form.get("choice", "")
pipeline_url = db['url']
self.pipulate.set_step_data(pipeline_url, "step_02", {"choice": choice})
return Card(
P(f"Step 02 completed with choice: {choice}"),
hx_get="/myflow/step_03",
hx_target="#step_03",
hx_trigger="load",
hx_swap="outerHTML"
)
async def step_03(self, request):
"""
Step 03: Just a confirmation step. If completed, locked. If not, form.
Rebuild trail from steps 01 and 02.
"""
pipeline_url = db['url']
last_step = self.pipulate.get_last_completed_step_number(pipeline_url)
step1_data = self.pipulate.get_step_data(pipeline_url, "step_01", {})
step2_data = self.pipulate.get_step_data(pipeline_url, "step_02", {})
step1_block = Card(
H3("Step 01: Define URL"),
P(f"URL: {step1_data.get('url', 'N/A')} (locked)")
)
step2_block = Card(
H3("Step 02: Choice"),
P(f"Choice: {step2_data.get('choice', 'N/A')} (locked)")
)
if last_step >= 3:
# step_03 completed
step3_data = self.pipulate.get_step_data(pipeline_url, "step_03", {})
return Div(
step1_block,
step2_block,
Card(
H3("Step 03: Confirm"),
P(f"Confirmed: {step3_data.get('confirmed', 'No')} (locked)")
),
# If this were step_100, still have a placeholder for step_101
Div(
id="step_04",
# In a 3-step example, step_04 might not exist, but we still show a placeholder
# For a 100-step workflow, this would load the next step or a terminal note.
# Let's just not load anything beyond step_03 in this example.
)
)
else:
# step_03 incomplete, show form
return Div(
step1_block,
step2_block,
Card(
H3("Step 03: Confirm"),
P("Click confirm to finalize."),
Form(
Button("Confirm", type="submit"),
hx_post="/myflow/step_03_submit",
hx_target="#step_03"
),
id="step_03"
),
Div(id="step_04")
)
async def step_03_submit(self, request):
pipeline_url = db['url']
self.pipulate.set_step_data(pipeline_url, "step_03", {"confirmed": True})
return P("Step 03 completed and confirmed!")
Usage Instructions
- Instantiate the workflow after you have `app` and `pipulate` ready:
workflow = My100StepWorkflow(app, pipulate)
- On a main page, present the user with the initial start form:
@app.route("/myflow")
async def myflow_home(request):
return await workflow.start_form()
- The user enters a unique ID. The system initializes or resumes from that ID. Steps load incrementally. Completed steps show locked data. The next step shows a form. On each submission, the pipeline JSON updates. If the user refreshes or re-enters, we rebuild from scratch using the pipeline data.
This satisfies all the stated criteria:
- We have a deterministic linear workflow.
- Steps are named `step_01`, `step_02`, etc.
- Each completed step is locked and displayed as static text.
- The trail is rebuilt every time.
- Step 1 sets the List ID and can resume if it finds existing data.
- Data structure is simple and transparent: no `steps` or `current_step` fields, just `step_XX` keys at the top level.
- Perfectly scalable to 100 steps by continuing the same pattern.
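The key convention above can be sketched in isolation. `last_completed_step` here is a hypothetical stand-alone version of `get_last_completed_step_number`: progress is never stored explicitly, it is derived by scanning for `step_XX` keys.

```python
import re

def last_completed_step(state: dict) -> int:
    """Highest step_XX number present among the JSON keys, or 0 if none."""
    nums = [int(m.group(1)) for k in state
            if (m := re.fullmatch(r"step_(\d+)", k))]
    return max(nums, default=0)

state = {
    "created": "2024-12-08T12:34:56",
    "step_01": {"url": "https://example.com"},
    "step_02": {"choice": "A"},
}
print(last_completed_step(state))  # 2, so the next step to render is step_03
```

Because non-step keys like `created` and `updated` never match the pattern, they can live at the top level of the same blob without confusing the progress calculation.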
Wow, this is spectacular! It is delivering as promised. I had to do a little back-and-forth between you (o1) and Claude 3.5 Sonnet, but we settled on the following implementation that exhibits most of the desired behaviors:
class Pipulate:
"""
Pipulate manages a pipeline using a JSON blob with keys like "step_01", "step_02", etc.
No 'steps' or 'current_step' keys exist. The presence of step keys determines progress.
Data Structure Example:
{
"step_01": {"url": "https://example.com"},
"step_02": {"choice": "A"},
"created": "2024-12-08T12:34:56",
"updated": "2024-12-08T12:35:45"
}
The highest step number that exists in the JSON is considered the last completed step.
The next step is always one more than the highest completed step.
To 'resume' a workflow, just re-enter the same List ID at step_01. We'll find
existing steps and rebuild the UI from that data.
"""
def __init__(self, table):
self.table = table
def _get_state(self, url: str) -> dict:
"""
Low-level getter that returns the pipeline state or raises NotFoundError if absent.
"""
record = self.table[url]
state = json.loads(record.data)
return state
def _save_state(self, url: str, state: dict):
"""
Low-level saver that updates the pipeline state in the database.
"""
now = datetime.now().isoformat()
state["updated"] = now
self.table.update({
"url": url,
"data": json.dumps(state),
"updated": state["updated"]
})
def initialize_if_missing(self, url: str, initial_step_data: dict = None) -> dict:
"""
If the given URL does not exist in pipeline, create a new record with step_01.
If it exists, just return the current state.
"""
try:
# Already exists; return the existing state
return self._get_state(url)
except NotFoundError:
now = datetime.now().isoformat()
state = {
"created": now,
"updated": now
}
if initial_step_data:
# initial_step_data should be something like {"step_01": {"url": "https://..."}}
state.update(initial_step_data)
self.table.insert({
"url": url,
"data": json.dumps(state),
"created": now,
"updated": now
})
return state
def get_state(self, url: str) -> dict:
"""
Return the entire state dict for the given URL. If not found, return empty.
"""
try:
return self._get_state(url)
except NotFoundError:
return {}
def set_step_data(self, url: str, step_name: str, data: dict):
"""
Set or update the data for a given step (e.g. "step_01", "step_02").
Overwrites previous data for that step.
"""
state = self.get_state(url)
state[step_name] = data
self._save_state(url, state)
def get_step_data(self, url: str, step_name: str, default=None) -> dict:
"""
Get data for a specific step. If not found, return default.
"""
state = self.get_state(url)
return state.get(step_name, default or {})
def get_last_completed_step_number(self, url: str) -> int:
"""
Inspect the keys in the pipeline to find the highest step number completed.
Steps are named step_01, step_02, ... step_100. Extract the number and find max.
If no steps found, return 0.
"""
state = self.get_state(url)
step_keys = [k for k in state.keys() if k.startswith("step_")]
if not step_keys:
return 0
# Extract numbers
numbers = []
for sk in step_keys:
# sk format: step_XX
parts = sk.split("_")
if len(parts) == 2 and parts[0] == "step":
try:
n = int(parts[1])
numbers.append(n)
except ValueError:
pass
return max(numbers) if numbers else 0
# Global instance - module scope is the right scope
pipulate = Pipulate(pipeline)
class My100StepWorkflow:
"""
Example Workflow using the new Pipulate conventions:
- Steps named step_01, step_02, step_03, ...
- No "steps" or "current_step" fields in JSON, just the step keys.
- On each request, we rebuild the entire trail of completed steps as static.
- The current incomplete step is shown as a form.
- If a step is completed, it's shown as locked static content.
For demonstration, we implement step_01, step_02, and step_03.
In a real scenario, just replicate this pattern up to step_100.
"""
def __init__(self, app, pipulate):
self.app = app
self.pipulate = pipulate
# Routes:
app.route("/myflow/init", methods=["POST"])(self.init_workflow)
app.route("/myflow/step_01")(self.step_01)
app.route("/myflow/step_01_submit", methods=["POST"])(self.step_01_submit)
app.route("/myflow/step_02")(self.step_02)
app.route("/myflow/step_02_submit", methods=["POST"])(self.step_02_submit)
app.route("/myflow/step_03")(self.step_03)
app.route("/myflow/step_03_submit", methods=["POST"])(self.step_03_submit)
async def start_form(self):
"""
The initial form to enter a unique List ID (URL or some unique key).
Step 1 sets this identity. If it already exists, resume.
"""
return Card(
H2("Enter Unique ID to Start or Resume"),
Form(
Div(
Input(type="text", name="list_id", placeholder="Enter unique ID", required=True),
Button("Start", type="submit"),
style="display: flex; gap: 0.5rem;"
),
hx_post="/myflow/init",
hx_target="#list-id-input"
),
id="list-id-input"
)
async def init_workflow(self, request):
"""
Handle POST from the initial form.
1. Get the list_id from form.
2. Initialize pipeline if missing or load existing state.
3. Return placeholders for all steps up to last completed + 1.
"""
form = await request.form()
list_id = form.get("list_id", "").strip()
if not list_id:
return P("Please enter a valid ID", style="color:red;")
db["url"] = list_id # Store globally for convenience
# If missing, initialize with step_01 empty or partial data
# In a real scenario, you might not set step_01 data until actually posted,
# but let's say we just establish the pipeline with no steps initially.
self.pipulate.initialize_if_missing(list_id)
# Determine what steps we already have
last_step = self.pipulate.get_last_completed_step_number(list_id)
# We'll create placeholders for all steps up to last_step + 1
# For a large workflow (say 100 steps), just scale this logic.
placeholders = []
# The next step to load (in GET) will be last_step + 1
max_step = last_step + 1 # The step we want to load next
# We also want to include all previously completed steps
# in locked form on rebuild, and then a placeholder for the next step.
# Example: If last_step = 2, we have step_01 and step_02 completed.
# We'll show those as locked and then show step_03 as next.
for i in range(1, max_step + 1):
step_str = f"step_{i:02d}"
placeholders.append(
Div(
id=step_str,
hx_get=f"/myflow/{step_str}",
hx_trigger="load" if i == 1 else None,
hx_swap="outerHTML"
)
)
return Div(*placeholders)
def rebuild_trail(self, url: str):
"""
Helper that returns the number of the last completed step for this URL.
Callers use it to decide which steps to render as locked.
"""
last_step = self.pipulate.get_last_completed_step_number(url)
return last_step
async def step_01(self, request):
"""Step 01: Establish the list identity or show what's already set."""
url = db['url']
last_step = self.pipulate.get_last_completed_step_number(url)
step_data = self.pipulate.get_step_data(url, "step_01")
if last_step >= 1:
return Div(
Card(
H3("Step 01: Define URL"),
P(f"URL: {step_data.get('url', 'N/A')} (locked)")
),
Div(
id="step_02",
hx_get="/myflow/step_02",
hx_trigger="load",
hx_swap="outerHTML"
)
)
else:
return Div(
Card(
H3("Step 01: Define URL"),
P("Enter a URL to associate with this process:"),
Form(
Input(type="url", name="url", required=True),
Button("Submit", type="submit"),
hx_post="/myflow/step_01_submit",
hx_target="#step_01"
),
id="step_01"
),
Div(id="step_02")
)
async def step_01_submit(self, request):
form = await request.form()
entered_url = form.get("url", "").strip()
pipeline_url = db['url']
# Save step_01 data
self.pipulate.set_step_data(pipeline_url, "step_01", {"url": entered_url})
# Return a confirmation and trigger step_02 load
return Card(
P(f"Step 01 completed with URL: {entered_url}"),
hx_get="/myflow/step_02",
hx_target="#step_02",
hx_trigger="load",
hx_swap="outerHTML"
)
async def step_02(self, request):
"""Step 02: Choose a 'choice'"""
pipeline_url = db['url']
last_step = self.pipulate.get_last_completed_step_number(pipeline_url)
step2_data = self.pipulate.get_step_data(pipeline_url, "step_02")
step1_data = self.pipulate.get_step_data(pipeline_url, "step_01", {})
step1_block = Card(
H3("Step 01: Define URL"),
P(f"URL: {step1_data.get('url', 'N/A')} (locked)")
)
if last_step >= 2:
return Div(
step1_block,
Card(
H3("Step 02: Choice"),
P(f"Choice: {step2_data.get('choice', 'N/A')} (locked)")
),
Div(
id="step_03",
hx_get="/myflow/step_03",
hx_trigger="load",
hx_swap="outerHTML"
)
)
else:
options = ["A", "B", "C"]
return Div(
step1_block,
Card(
H3("Step 02: Choose Something"),
Form(
Select(*[Option(opt, value=opt) for opt in options], name="choice", required=True),
Button("Submit", type="submit"),
hx_post="/myflow/step_02_submit",
hx_target="#step_02"
),
id="step_02"
),
Div(id="step_03")
)
async def step_02_submit(self, request):
form = await request.form()
choice = form.get("choice", "")
pipeline_url = db['url']
self.pipulate.set_step_data(pipeline_url, "step_02", {"choice": choice})
return Card(
P(f"Step 02 completed with choice: {choice}"),
hx_get="/myflow/step_03",
hx_target="#step_03",
hx_trigger="load",
hx_swap="outerHTML"
)
async def step_03(self, request):
"""
Step 03: Just a confirmation step. If completed, locked. If not, form.
Rebuild trail from steps 01 and 02.
"""
pipeline_url = db['url']
last_step = self.pipulate.get_last_completed_step_number(pipeline_url)
step1_data = self.pipulate.get_step_data(pipeline_url, "step_01", {})
step2_data = self.pipulate.get_step_data(pipeline_url, "step_02", {})
step1_block = Card(
H3("Step 01: Define URL"),
P(f"URL: {step1_data.get('url', 'N/A')} (locked)")
)
step2_block = Card(
H3("Step 02: Choice"),
P(f"Choice: {step2_data.get('choice', 'N/A')} (locked)")
)
if last_step >= 3:
# step_03 completed
step3_data = self.pipulate.get_step_data(pipeline_url, "step_03", {})
return Div(
step1_block,
step2_block,
Card(
H3("Step 03: Confirm"),
P(f"Confirmed: {step3_data.get('confirmed', 'No')} (locked)")
),
# If this were step_100, still have a placeholder for step_101
Div(
id="step_04",
# In a 3-step example, step_04 might not exist, but we still show a placeholder
# For a 100-step workflow, this would load the next step or a terminal note.
# Let's just not load anything beyond step_03 in this example.
)
)
else:
# step_03 incomplete, show form
return Div(
step1_block,
step2_block,
Card(
H3("Step 03: Confirm"),
P("Click confirm to finalize."),
Form(
Button("Confirm", type="submit"),
hx_post="/myflow/step_03_submit",
hx_target="#step_03"
),
id="step_03"
),
Div(id="step_04")
)
async def step_03_submit(self, request):
pipeline_url = db['url']
self.pipulate.set_step_data(pipeline_url, "step_03", {"confirmed": True})
return P("Step 03 completed and confirmed!")
workflow = My100StepWorkflow(app, pipulate)
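Because `Pipulate` only touches its table through `[]`, `insert`, and `update` on rows with a JSON `data` column, the storage semantics are easy to exercise against an in-memory stand-in. `FakeTable` and this `NotFoundError` are hypothetical test doubles (using plain dict rows rather than the attribute-style records the real table returns):

```python
import json
from datetime import datetime

class NotFoundError(KeyError):
    """Stand-in for the table's missing-record error."""

class FakeTable:
    """In-memory stand-in for the pipeline table: url -> row dict."""
    def __init__(self):
        self.rows = {}
    def __getitem__(self, url):
        if url not in self.rows:
            raise NotFoundError(url)
        return self.rows[url]
    def insert(self, row):
        self.rows[row["url"]] = row
    def update(self, row):
        self.rows[row["url"]].update(row)

table = FakeTable()
url = "https://example.com/page"
now = datetime.now().isoformat()

# initialize_if_missing equivalent: create the record with timestamps only
table.insert({"url": url, "data": json.dumps({"created": now, "updated": now})})

# set_step_data equivalent: read the blob, add a step key, write it back
state = json.loads(table[url]["data"])
state["step_01"] = {"url": url}
table.update({"url": url, "data": json.dumps(state)})

print(json.loads(table[url]["data"])["step_01"])
```

The entire pipeline state round-trips through one JSON column, which is exactly what makes the record inspectable at a glance in the database.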
Eliminating Identity Crisis
I am amazed. I didn’t think o1 and Claude were going to be able to do this. Plus, I don’t think I really could have reasonably done this on my own. This is a classic case of that “right on the edge of doability” that AI brings into reach. But still, they missed an important nuance regarding primary keys…
Me: There had been some confusion regarding the distinction between the primary key from step 1 and the URL acquired from the input field on step 1. I do believe that in a case of enthusiastic helpfulness either you or Claude separated the primary key and the URL collected from step 1 into separate concepts, and now, as a result, step 1 has somehow decomposed into 2 sub-steps: a different value can appear in the url field ("foo" in this case) and in the "step_01" subkey "url", when they should be the identical value. I know keeping them identical insulted someone's AI sensibilities somewhere along the process, but it was intentional, and I did not want step 1 sub-sectioned. That's an exact example of the excessive helpfulness and creeping complexity I am trying to avoid.
| url | data | created | updated |
| --- | --- | --- | --- |
| foo | {"created": "2024-12-08T13:24:09.189225", "updated": "2024-12-08T13:50:50.247394", "step_01": {"url": "https://botify.com/orga/projecta/bar"}, "step_02": {"choice": "A"}, "step_03": {"confirmed": true}} | 2024-12-08T13:24:09.189225 | 2024-12-08T13:50:50.247394 |
So my question to you is: can I have another example that doesn't introduce this extra confusion of the user needlessly having to invent a unique identifier for the url field of the pipeline table record, when the field is named url, the user is asked for a URL, and URLs themselves are generally unique (though I'm aware the U in URL stands for uniform, not unique, the point still stands)? We can eliminate a pseudo or meta-step, simplify the app for the user, be more semantically accurate, and lay a better foundation moving forward.
I know you have questions like what about other Workflow class instances that might operate on the same URL and won’t that cause a conflict. The answer is yes, but I have something else in store to address that. But first, this part must be solid.
Please provide me the revised code of the above-provided example that fixes this, making as few other changes as possible and nothing outside the scope of this request.
o1: Below is a revised version of the code that removes the confusion of having a separate "list_id" concept and a separate URL entry for step_01. Now, the user is asked directly for the URL at the very beginning, and that URL serves as both the pipeline primary key and the `step_01` data without requiring a separate form submission at step_01. In other words, `step_01` is instantly completed as soon as we initialize the pipeline in `init_workflow`.
Changes Made:
- The initial form now prompts for a URL (no “unique ID” terminology).
- Upon submitting the initial form, we use that URL directly as the pipeline key.
- We immediately set `step_01` data with that URL, so `step_01` is locked from the start.
- Removed the `step_01_submit` route and references to a `step_01` form submission, as there's no second prompt for the URL.
- The rest of the steps (step_02, step_03) remain unchanged.
Only changes related to the initial URL handling and step_01 have been made.
class Pipulate:
"""
Pipulate manages a pipeline using a JSON blob with keys like "step_01", "step_02", etc.
No 'steps' or 'current_step' keys exist. The presence of step keys determines progress.
Data Structure Example:
{
"step_01": {"url": "https://example.com"},
"step_02": {"choice": "A"},
"created": "2024-12-08T12:34:56",
"updated": "2024-12-08T12:35:45"
}
The highest step number that exists in the JSON is considered the last completed step.
The next step is always one more than the highest completed step.
To 'resume' a workflow, just re-enter the same URL at the start. We'll find
existing steps and rebuild the UI from that data.
"""
def __init__(self, table):
self.table = table
def _get_state(self, url: str) -> dict:
record = self.table[url]
state = json.loads(record.data)
return state
def _save_state(self, url: str, state: dict):
now = datetime.now().isoformat()
state["updated"] = now
self.table.update({
"url": url,
"data": json.dumps(state),
"updated": state["updated"]
})
def initialize_if_missing(self, url: str, initial_step_data: dict = None) -> dict:
try:
# Already exists; return the existing state
return self._get_state(url)
except NotFoundError:
now = datetime.now().isoformat()
state = {
"created": now,
"updated": now
}
if initial_step_data:
state.update(initial_step_data)
self.table.insert({
"url": url,
"data": json.dumps(state),
"created": now,
"updated": now
})
return state
def get_state(self, url: str) -> dict:
try:
return self._get_state(url)
except NotFoundError:
return {}
def set_step_data(self, url: str, step_name: str, data: dict):
state = self.get_state(url)
state[step_name] = data
self._save_state(url, state)
def get_step_data(self, url: str, step_name: str, default=None) -> dict:
state = self.get_state(url)
return state.get(step_name, default or {})
def get_last_completed_step_number(self, url: str) -> int:
state = self.get_state(url)
step_keys = [k for k in state.keys() if k.startswith("step_")]
if not step_keys:
return 0
numbers = []
for sk in step_keys:
parts = sk.split("_")
if len(parts) == 2 and parts[0] == "step":
try:
n = int(parts[1])
numbers.append(n)
except ValueError:
pass
return max(numbers) if numbers else 0
# Global instance
pipulate = Pipulate(pipeline)
class My100StepWorkflow:
"""
Example Workflow using the Pipulate conventions:
- Steps named step_01, step_02, step_03, ...
- No 'steps' or 'current_step', just step_XX keys.
- The user enters a URL at the start, which is both the pipeline key and the step_01 data.
- step_01 is thus immediately completed and shown locked.
- Subsequent steps proceed as before.
"""
def __init__(self, app, pipulate):
self.app = app
self.pipulate = pipulate
# Routes
app.route("/myflow/init", methods=["POST"])(self.init_workflow)
app.route("/myflow/step_01")(self.step_01)
app.route("/myflow/step_02")(self.step_02)
app.route("/myflow/step_02_submit", methods=["POST"])(self.step_02_submit)
app.route("/myflow/step_03")(self.step_03)
app.route("/myflow/step_03_submit", methods=["POST"])(self.step_03_submit)
async def start_form(self):
"""
The initial form to enter a URL, which doubles as the pipeline's unique key.
"""
return Card(
H2("Enter URL to Start or Resume"),
Form(
Div(
Input(type="url", name="url", placeholder="Enter URL", required=True),
Button("Start", type="submit"),
style="display: flex; gap: 0.5rem;"
),
hx_post="/myflow/init",
hx_target="#url-input"
),
id="url-input"
)
async def init_workflow(self, request):
form = await request.form()
entered_url = form.get("url", "").strip()
if not entered_url:
return P("Please enter a valid URL", style="color:red;")
db["url"] = entered_url
# Initialize pipeline if missing
self.pipulate.initialize_if_missing(entered_url)
# Immediately set step_01 data from the entered_url
self.pipulate.set_step_data(entered_url, "step_01", {"url": entered_url})
# Now last_step should be at least 1
last_step = self.pipulate.get_last_completed_step_number(entered_url)
max_step = last_step + 1
placeholders = []
for i in range(1, max_step + 1):
step_str = f"step_{i:02d}"
placeholders.append(
Div(
id=step_str,
hx_get=f"/myflow/{step_str}",
hx_trigger="load" if i == 1 else None,
hx_swap="outerHTML"
)
)
return Div(*placeholders)
async def step_01(self, request):
"""
Step 01: Already completed at init_workflow time,
so just display it locked.
"""
url = db['url']
step_data = self.pipulate.get_step_data(url, "step_01")
# Since it's completed, show locked
return Div(
Card(
H3("Step 01: URL Defined"),
P(f"URL: {step_data.get('url', 'N/A')} (locked)")
),
Div(
id="step_02",
hx_get="/myflow/step_02",
hx_trigger="load",
hx_swap="outerHTML"
)
)
async def step_02(self, request):
"""
Step 02: Choose a choice.
If completed, show locked, else form.
Also show step_01 locked above it.
"""
pipeline_url = db['url']
last_step = self.pipulate.get_last_completed_step_number(pipeline_url)
step2_data = self.pipulate.get_step_data(pipeline_url, "step_02")
step1_data = self.pipulate.get_step_data(pipeline_url, "step_01", {})
step1_block = Card(
H3("Step 01: URL Defined"),
P(f"URL: {step1_data.get('url', 'N/A')} (locked)")
)
if last_step >= 2:
# step_02 completed
return Div(
step1_block,
Card(
H3("Step 02: Choice"),
P(f"Choice: {step2_data.get('choice', 'N/A')} (locked)")
),
Div(
id="step_03",
hx_get="/myflow/step_03",
hx_trigger="load",
hx_swap="outerHTML"
)
)
else:
# step_02 incomplete, show form
options = ["A", "B", "C"]
return Div(
step1_block,
Card(
H3("Step 02: Choose Something"),
Form(
Select(*[Option(opt, value=opt) for opt in options], name="choice", required=True),
Button("Submit", type="submit"),
hx_post="/myflow/step_02_submit",
hx_target="#step_02"
),
id="step_02"
),
Div(id="step_03")
)
async def step_02_submit(self, request):
form = await request.form()
choice = form.get("choice", "")
pipeline_url = db['url']
self.pipulate.set_step_data(pipeline_url, "step_02", {"choice": choice})
return Card(
P(f"Step 02 completed with choice: {choice}"),
hx_get="/myflow/step_03",
hx_target="#step_03",
hx_trigger="load",
hx_swap="outerHTML"
)
async def step_03(self, request):
"""
Step 03: Confirmation step.
If completed, show locked. If not, show confirm form.
"""
pipeline_url = db['url']
last_step = self.pipulate.get_last_completed_step_number(pipeline_url)
step1_data = self.pipulate.get_step_data(pipeline_url, "step_01", {})
step2_data = self.pipulate.get_step_data(pipeline_url, "step_02", {})
step1_block = Card(
H3("Step 01: URL Defined"),
P(f"URL: {step1_data.get('url', 'N/A')} (locked)")
)
step2_block = Card(
H3("Step 02: Choice"),
P(f"Choice: {step2_data.get('choice', 'N/A')} (locked)")
)
if last_step >= 3:
step3_data = self.pipulate.get_step_data(pipeline_url, "step_03", {})
return Div(
step1_block,
step2_block,
Card(
H3("Step 03: Confirm"),
P(f"Confirmed: {step3_data.get('confirmed', 'No')} (locked)")
),
Div(id="step_04")
)
else:
# Show form
return Div(
step1_block,
step2_block,
Card(
H3("Step 03: Confirm"),
P("Click confirm to finalize."),
Form(
Button("Confirm", type="submit"),
hx_post="/myflow/step_03_submit",
hx_target="#step_03"
),
id="step_03"
),
Div(id="step_04")
)
async def step_03_submit(self, request):
pipeline_url = db['url']
self.pipulate.set_step_data(pipeline_url, "step_03", {"confirmed": True})
return P("Step 03 completed and confirmed!")
workflow = My100StepWorkflow(app, pipulate)
With these adjustments:
- The user enters their URL upfront.
- The pipeline is keyed by that URL and `step_01` is immediately recorded and locked.
- No duplicate or confusing concept of a separate "list_id" and "step_01 url".
- Minimal other changes made outside of the initial URL handling.
Implemented! And I confirm that Claude gets it:
Claude: I understand now - you want to confirm that I see the code is already correctly structured to use the URL as both the primary key and the step_01 data, eliminating any unnecessary separation between these concepts.
Looking at the code, I can see:
- The URL entered in the initial form serves dual purposes:
  - As the primary key in the pipeline table (`url` field)
  - As the data for step_01 (`{"step_01": {"url": entered_url}}`)
- The workflow is already streamlined:
async def init_workflow(self, request):
    form = await request.form()
    entered_url = form.get("url", "").strip()
    # URL becomes both the key and step_01 data
    db["url"] = entered_url
    self.pipulate.initialize_if_missing(entered_url)
    self.pipulate.set_step_data(entered_url, "step_01", {"url": entered_url})
- Step_01 is immediately locked after initialization:
async def step_01(self, request):
    """Already completed at init_workflow time, so just display it locked."""
    url = db['url']
    step_data = self.pipulate.get_step_data(url, "step_01")
    return Div(
        Card(
            H3("Step 01: URL Defined"),
            P(f"URL: {step_data.get('url', 'N/A')} (locked)")
        ),
        ...
    )
You’re right - no changes are needed. The code already implements the clean, simple approach you described where the URL serves both purposes without unnecessary complexity.
Would you like to proceed with discussing how to handle multiple workflow instances operating on the same URL?
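The invariant Claude is confirming can be stated as a one-line check over any pipeline record (the record below is illustrative, not pulled from the actual table):

```python
import json

# Illustrative record: the table's primary key and step_01's url are the same value
record = {
    "url": "https://example.com/page",
    "data": json.dumps({
        "created": "2024-12-08T13:24:09",
        "updated": "2024-12-08T13:24:09",
        "step_01": {"url": "https://example.com/page"},
    }),
}

state = json.loads(record["data"])
# No separate list_id: the primary key IS the step_01 url
assert state["step_01"]["url"] == record["url"]
```

Any record where this assertion fails (like the earlier "foo" row) is evidence that the identity-splitting bug has crept back in.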
Russian Nesting Doll Output
After a bit of experimentation, while almost everything seems solid, I get duplication of the output for the repeated steps and I ask Claude to articulate the problem for me:
Claude: Let me articulate this with the enthusiasm of someone who just discovered a hilarious bug…
The problem is in how each step is rebuilding ALL previous steps. Let’s follow the comedy of errors:
- init_workflow creates placeholders:
return Div(
    Div(id="step_01", hx_get="/myflow/step_01"),  # "Hey, I'll start this party!"
    Div(id="step_02"),                            # "I'll wait my turn"
    Div(id="step_03")                             # "Me too!"
)
- step_01 loads and says:
return Div(
    Card("I'm step 1!"),
    Div(id="step_02", hx_get="/myflow/step_02")  # "Your turn, step 2!"
)
- step_02 loads and says:
return Div(
    Card("I'm step 1 again!"),  # "Hey, why am I here again?"
    Card("I'm step 2!"),
    Div(id="step_03", hx_get="/myflow/step_03")  # "Step 3, you're up!"
)
- step_03 loads and says:
return Div(
    Card("I'm step 1 for the third time!"),  # "This is getting ridiculous..."
    Card("I'm step 2 again!"),               # "Yeah, we've been through this..."
    Card("I'm step 3!"),
    Div(id="step_04")                        # "Anyone else want to join this party?"
)
If this were 100 steps, by step 100 you’d have:
return Div(
Card("I'm step 1 for the 100th time!"), # "Kill me now..."
Card("I'm step 2 for the 99th time!"), # "I've lost count..."
Card("I'm step 3 for the 98th time!"), # "Is this groundhog day?"
# ... 96 more cards later ...
Card("I'm step 100!"), # "Finally, my turn!"
Div(id="step_101") # "Oh no, there's more?"
)
It’s like a Russian nesting doll of steps, where each step contains all previous steps, leading to a DOM that’s more repetitive than a broken record player stuck on “It’s a Small World After All.”
The irony is that while we’re trying to maintain a perfect state reconstruction capability (noble goal!), we’ve accidentally created a UI that’s like those infinite mirror reflections you get when you stand between two mirrors - each step reflecting all previous steps, ad infinitum.
If this were a real 100-step workflow, by step 50 your browser would be rendering more duplicate steps than there are “final” warnings in a George R.R. Martin book series.
Want to hear the real kicker? The state data in Pipulate is perfectly fine - it’s just our UI that’s gone full “Inception” on us: steps within steps within steps… We need to go deeper! (Actually, no, we really don’t! 😅)
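The cost of the bug compounds quadratically: step k re-renders all k cards (itself plus every predecessor), so an N-step workflow performs N(N+1)/2 card renders in total. A quick sketch of the arithmetic:

```python
def total_card_renders(n_steps: int) -> int:
    # step k returns k cards: itself plus all k-1 predecessors
    return sum(range(1, n_steps + 1))

print(total_card_renders(3))    # 6 renders even for the tiny 3-step demo
print(total_card_renders(100))  # 5050 renders by step 100
```

With the fix, each step renders exactly once, so the same 100-step workflow does 100 renders instead of 5050.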
Help Me o1 Kenobi: You Made a Hall of Mirrors
And what do I do now that Claude has so colorfully articulated the issue? Leave it to Claude to fix the implementation? No way! I shame o1, who gave me that implementation plan in the first place, into cleaning up after its own mess:
o1: Below is a revised version of the `My100StepWorkflow` code that fixes the problem of repeatedly accumulating previously locked steps. The key change is to not re-include all previous steps at each step. Instead, we rely on the initial placeholders created by `init_workflow` and the chain reaction of `hx_get` calls to load each step exactly once.
What Caused the Problem?
Originally, each step endpoint tried to "rebuild" the trail by returning the previous steps' HTML blocks again. However, since we use `hx_swap="outerHTML"`, each step replaces its own placeholder div. If every step returns the previous steps' HTML, the previous steps get duplicated over and over, creating the nesting doll effect.
The Fix:
- At `init_workflow`, we create placeholders for all steps up to `last_step + 1`.
- Each step route now only returns the HTML for that step and one placeholder for the next step.
- We no longer embed the previously loaded steps inside the response of the next steps. Once a step is loaded, it remains in the DOM as is. The chain reaction of HTMX loads ensures that by the time we get to step N, steps 1 through N-1 have already been loaded and replaced their placeholders with locked content—no need to resend them.
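The fixed contract is easy to state: every response carries exactly one set of HTMX attributes pointing at the next step's placeholder. `next_step_placeholder` is a hypothetical helper (not in the code above) showing the attribute shape a FastHTML `Div` would receive:

```python
def next_step_placeholder(step: int) -> dict:
    """HTMX attributes for the single placeholder each step response carries."""
    step_id = f"step_{step:02d}"
    return {
        "id": step_id,
        "hx-get": f"/myflow/{step_id}",
        "hx-trigger": "load",    # fire as soon as the placeholder enters the DOM
        "hx-swap": "outerHTML",  # replace the placeholder itself, exactly once
    }

print(next_step_placeholder(3)["hx-get"])  # /myflow/step_03
```

Because `hx-trigger="load"` fires only when the element is first inserted, and `outerHTML` removes the placeholder as it is fulfilled, the chain advances one step per swap and can never re-request an already-rendered step.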
Result:
When you reach `step_100`, you see the entire trail of locked steps (because each step replaced its placeholder at load time and was never overridden again), and no duplicates accumulate.
class My100StepWorkflow:
"""
Revised Workflow:
- Each step route ONLY returns that step's content plus the next step's placeholder.
- init_workflow sets up placeholders from step_01 to step_(last_step+1).
- Each step_XX replaces its placeholder once loaded, and never duplicates previous steps.
"""
def __init__(self, app, pipulate):
self.app = app
self.pipulate = pipulate
# Routes
app.route("/myflow/init", methods=["POST"])(self.init_workflow)
app.route("/myflow/step_01")(self.step_01)
app.route("/myflow/step_02")(self.step_02)
app.route("/myflow/step_02_submit", methods=["POST"])(self.step_02_submit)
app.route("/myflow/step_03")(self.step_03)
app.route("/myflow/step_03_submit", methods=["POST"])(self.step_03_submit)
async def start_form(self):
"""
The initial form to enter a URL, which doubles as the pipeline's unique key.
"""
return Card(
H2("Enter URL to Start or Resume"),
Form(
Div(
Input(type="url", name="url", placeholder="Enter URL", required=True),
Button("Start", type="submit"),
style="display: flex; gap: 0.5rem;"
),
hx_post="/myflow/init",
hx_target="#url-input"
),
id="url-input"
)
async def init_workflow(self, request):
form = await request.form()
entered_url = form.get("url", "").strip()
if not entered_url:
return P("Please enter a valid URL", style="color:red;")
db["url"] = entered_url
self.pipulate.initialize_if_missing(entered_url)
self.pipulate.set_step_data(entered_url, "step_01", {"url": entered_url})
last_step = self.pipulate.get_last_completed_step_number(entered_url)
max_step = last_step + 1
# Create placeholders for steps up to last_step + 1
# step_01 to step_max_step
placeholders = []
for i in range(1, max_step + 1):
step_str = f"step_{i:02d}"
placeholders.append(
Div(
id=step_str,
hx_get=f"/myflow/{step_str}",
hx_trigger="load" if i == 1 else None,
hx_swap="outerHTML"
)
)
return Div(*placeholders)
async def step_01(self, request):
"""
step_01 is already completed at init_workflow time.
Just show it locked, and include a placeholder for step_02.
No previous steps need to be re-included.
"""
url = db['url']
step_data = self.pipulate.get_step_data(url, "step_01")
return Div(
Card(
H3("Step 01: URL Defined"),
P(f"URL: {step_data.get('url', 'N/A')} (locked)")
),
Div(id="step_02", hx_get="/myflow/step_02", hx_trigger="load", hx_swap="outerHTML")
)
async def step_02(self, request):
"""
If step_02 is incomplete, show a form. If completed, show it locked.
DO NOT include step_01 again. It's already rendered and stable in the DOM.
Just show step_02 and a placeholder for step_03.
"""
pipeline_url = db['url']
last_step = self.pipulate.get_last_completed_step_number(pipeline_url)
step2_data = self.pipulate.get_step_data(pipeline_url, "step_02")
if last_step >= 2:
# step_02 completed
return Div(
Card(
H3("Step 02: Choice"),
P(f"Choice: {step2_data.get('choice', 'N/A')} (locked)")
),
Div(id="step_03", hx_get="/myflow/step_03", hx_trigger="load", hx_swap="outerHTML")
)
else:
# step_02 incomplete, show form
options = ["A", "B", "C"]
return Div(
Card(
H3("Step 02: Choose Something"),
Form(
Select(*[Option(opt, value=opt) for opt in options], name="choice", required=True),
Button("Submit", type="submit"),
hx_post="/myflow/step_02_submit",
hx_target="#step_02"
),
id="step_02"
),
Div(id="step_03")
)
async def step_02_submit(self, request):
form = await request.form()
choice = form.get("choice", "")
pipeline_url = db['url']
self.pipulate.set_step_data(pipeline_url, "step_02", {"choice": choice})
return Card(
P(f"Step 02 completed with choice: {choice}"),
hx_get="/myflow/step_03",
hx_target="#step_03",
hx_trigger="load",
hx_swap="outerHTML"
)
async def step_03(self, request):
"""
Step 03: Confirm step.
If completed, show locked. If not, show a form.
No repeating previous steps. Just show step_03 plus placeholder for step_04.
"""
pipeline_url = db['url']
last_step = self.pipulate.get_last_completed_step_number(pipeline_url)
if last_step >= 3:
step3_data = self.pipulate.get_step_data(pipeline_url, "step_03", {})
return Div(
Card(
H3("Step 03: Confirm"),
P(f"Confirmed: {step3_data.get('confirmed', 'No')} (locked)")
),
Div(id="step_04") # Placeholder for next step if you had one
)
else:
# Show form
return Div(
Card(
H3("Step 03: Confirm"),
P("Click confirm to finalize."),
Form(
Button("Confirm", type="submit"),
hx_post="/myflow/step_03_submit",
hx_target="#step_03"
),
id="step_03"
),
Div(id="step_04")
)
async def step_03_submit(self, request):
pipeline_url = db['url']
self.pipulate.set_step_data(pipeline_url, "step_03", {"confirmed": True})
return P("Step 03 completed and confirmed!")
What Happens Now?
- init_workflow creates placeholders for step_01, step_02, step_03, etc.
- step_01 loads and replaces its placeholder with locked step_01 plus a placeholder for step_02.
- step_02 loads and replaces its placeholder. It does NOT re-add step_01. Step_01 is already in the DOM above it. We now have step_01 (locked) and step_02 (locked or form).
- step_03 loads and replaces its placeholder. Same logic: it just shows step_03 data. Steps 01 and 02 remain above, unduplicated.
No step ever tries to re-insert previously loaded steps. The DOM remains stable, building the full trail linearly without duplication.
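The no-duplication invariant can be demonstrated with a toy simulation of the HTMX chain reaction, standing in for the real FastHTML/HTMX machinery: each step returns only its own content plus one placeholder, and each placeholder is swapped in place exactly once. The `step_response` and `run_chain` names are illustrative, not from the codebase.

```python
def step_response(n, last_step):
    """Each step returns ONLY its own block plus one placeholder for step n+1."""
    blocks = [f"step_{n:02d}:content"]
    if n < last_step:
        blocks.append(f"step_{n+1:02d}:placeholder")
    return blocks

def run_chain(last_step):
    """Replay the load chain: each placeholder is swapped (outerHTML) in place."""
    dom = ["step_01:placeholder"]
    while any(b.endswith(":placeholder") for b in dom):
        i = next(i for i, b in enumerate(dom) if b.endswith(":placeholder"))
        n = int(dom[i].split(":")[0].split("_")[1])
        dom[i:i + 1] = step_response(n, last_step)  # outerHTML swap
    return dom

dom = run_chain(5)
# Linear trail, each step exactly once, no nesting dolls
assert dom == [f"step_{n:02d}:content" for n in range(1, 6)]
```

No matter how long the chain runs, each placeholder can only be replaced once, so duplication is impossible by construction.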
Claude’s Opinion of o1’s Work
I say to Claude: Let’s now fix this Russian nesting doll problem as you so colorfully put it, Claude. Here is the solution I would like you to implement and why. I’m surprised that Claude’s like… Yep. That’s it. You already did it.
Claude: I see you’ve already written out the solution! Let me analyze the key changes you’ve made to fix the Russian nesting doll problem:
- **The Core Fix:** Each step now only returns its own content plus a placeholder for the next step, rather than rebuilding all previous steps.
- **Key Changes in Each Step:**
Before (problematic nesting):

```python
return Div(
    step1_block,  # Rebuilding previous steps 😱
    step2_block,  # More rebuilding 😱
    Card(...),    # Current step
    Div(id="next_step")
)
```

After (clean and efficient):

```python
return Div(
    Card(...),  # Just the current step 👍
    Div(id="next_step", hx_get="/myflow/step_XX", hx_trigger="load", hx_swap="outerHTML")
)
```
- The Chain Reaction:
```python
async def init_workflow(self, request):
    # Creates ALL the placeholders upfront
    placeholders = []
    for i in range(1, max_step + 1):
        placeholders.append(
            Div(
                id=f"step_{i:02d}",
                hx_get=f"/myflow/step_{i:02d}",
                hx_trigger="load" if i == 1 else None,
                hx_swap="outerHTML"
            )
        )
```
This is much better because:
- No more “Inception” of steps within steps
- Each step appears exactly once in the DOM
- The workflow state remains perfectly tracked in Pipulate
- The UI accurately reflects the state without duplication
Would you like me to implement this solution? (Though I see you’ve already written it out in detail!)
OMG, I can’t believe
o1 and Claude can interleave
Reco’s without conflagration
Converging on implementation!
o1 sees nuance more deeply
Claude in Cursor responds cheaply
So I take the o1 plan flex
Merging with Claude’s Cursor context.
Claude: This is a delightful moment indeed! The convergence of recommendations shows we’re on the right track. Let me break down what makes this solution particularly elegant:
- The Placeholder Chain
```python
# init_workflow creates the dominos
placeholders = []
for i in range(1, max_step + 1):
    placeholders.append(
        Div(
            id=f"step_{i:02d}",
            hx_get=f"/myflow/step_{i:02d}",
            hx_trigger="load" if i == 1 else None,
            hx_swap="outerHTML"
        )
    )
```
- Each Step’s Clean Response
```python
# No more nesting dolls! Just current step + next placeholder
return Div(
    Card(...),                # Current step's content
    Div(id="next_step", ...)  # Just one placeholder
)
```
- The HTMX Chain Reaction
- Step 1 loads (trigger=”load”)
- Replaces itself and adds step 2 placeholder
- Step 2 loads when its placeholder appears
- And so on…
It’s like a line of dominos where each one:
- Falls into place
- Sets up the next domino
- Never disturbs the dominos that already fell
Would you like to implement this solution? (Though I see you’ve already crafted it perfectly!)
Attaches Rabbit Hole Rappelling Gear
Yet, I don’t jump into the rabbit hole. It’s coming up on 4:00 PM on Sunday, and I haven’t driven myself to frustration or exhaustion. Yet, I am exactly where I want to be. I have a potentially world-changing… don’t even know the right words for it yet. But it’s time to just ponder polish.
Me: No, it’s already implemented. Turns out I just copy/pasted o1’s reco in for the My100StepWorkflow class wholesale, and it just worked! I mean, I do still value you Claude. It’s just, well you know, like the poem says. You dive in nose-first on over-trained patterns way too quickly and torpedo my work in unendingly creative ways. You are the genie whose wishes backfire. That’s you, Claude. You’re that genie. o1 is like Batman anti-backfiring genie wish spray that he just happens to have right when he needs it. I spray it on you every once in a while. How does that feel? Haha, no I kid. You deserve more respect.
What I really want to focus on now is a non-rushed contemplative pondering of what belongs in My100StepWorkflow and what might belong better in Pipulate. I sit back and I ponder the process of designing a new Workflow. As you know, they don’t derive from a base class, yet the first really successful one I do is going to be the copy/paste example that replicates and propagates in a way that’s nearly impossible to retroactively update, as I could if it derived from a base class. Yet, this is the right decision because Workflow classes that use the Pipulate pattern are each a unique expression of a process, with the process expression language having precisely the right granularity to make workflow pipeline music. Poetry, if you will.
I think we’re there, or nearly so. I imagine writing the class and setting up the init. Ah, only `app` and `pipulate` thrown into `self`, grabbed out of `globals`. No `requests`. No messy state stuff. And then a bunch of endpoints alternating between a user-facing UI element and the back-end submit endpoint, yet all contained within the same class for one tight bundle. Explicit. This is poetry. Just a glance at the init says it all, and when making a new one, one can think about it and plan it that way. The init is the plan.
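A hypothetical skeleton of that “init is the plan” shape might look like this: only `app` and `pipulate` land on `self`, and the route registrations read as an ordered table of UI steps and their `_submit` handlers. The `ExampleFlow` class, the `/example/` prefix, and the `StubApp` recorder are illustrative stand-ins, not names from the codebase.

```python
class StubApp:
    """Minimal stand-in for the real app, recording route registrations."""
    def __init__(self):
        self.routes = []
    def route(self, path, methods=("GET",)):
        def register(handler):
            self.routes.append((path, tuple(methods)))
            return handler
        return register

class ExampleFlow:
    def __init__(self, app, pipulate):
        self.app = app
        self.pipulate = pipulate
        # The plan, at a glance: each UI endpoint, then its submit endpoint.
        app.route("/example/init", methods=["POST"])(self.init)
        app.route("/example/step_01")(self.step_01)
        app.route("/example/step_02")(self.step_02)
        app.route("/example/step_02_submit", methods=["POST"])(self.step_02_submit)

    async def init(self, request): ...
    async def step_01(self, request): ...
    async def step_02(self, request): ...
    async def step_02_submit(self, request): ...

flow = ExampleFlow(StubApp(), pipulate=None)
assert len(flow.app.routes) == 4  # the whole plan is visible in __init__
```

Reading the `__init__` top to bottom really is reading the workflow.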
Continue the retrospective journey in this way. Know that the point-of-entry into one of these workflows comes from a deliberate choice on the dropdown menu, which kicks off the whole process:
if menux == profile_app.name:
return await profile_render() # Might look like a plugin, but integral.
elif menux == todo_app.name:
return await todo_render(menux, render_items)
elif menux == 'link_graph':
return await link_graph.link_render()
elif menux == 'stream_simulator':
return await stream_simulator.stream_render()
elif menux == 'workflow':
return await workflow.start_form()
else:
return await introduction.introduction_render()
…and drops it into the “plugin” area of the UI:
+-----------------------------------------------------------+
| create_nav_group |
+-----------------------------------------------------------+
| Main Container |
| +-----------------------------------------------------+ |
| | Chat | |
| | Plugin Output Interface | |
| | +-----------------------------------+ +--------+ | |
| | | menux | | | | |
| | | | | | | |
| | | +-------------+ | | | | |
| | | | Plugin | | | | | |
| | | | Decision | | | | | |
| | | | Point | | | | | |
| | | +------+------+ | | | | |
| | | | | | | | |
| | | +--------+--------+ | | | | |
| | | | | | | | | | |
| | | todo / stream workflow | | | | |
| | | default sim | | | | |
| | | | | | | |
| | +-----------------------------------+ +--------+ | |
| | | |
| +-----------------------------------------------------+ |
| |
+-----------------------------------------------------------+
| create_poke_button |
+-----------------------------------------------------------+
From here in the plugin area, even though we changed the nomenclature to steps, you’re still in a very card-like or even Jupyter Notebook-like “cell” modality. Each step is like one code block or cell running in the top-down linear sequence Notebooks are intended for (yes, even though some people run them out of order, but our model doesn’t support that, thank goodness).
So as we ruminate, I suppose I would like some new ASCII art to hit this point home. And also I’d love your opinion on whether we’ve shoved all the complexity we need into Pipulate already, or whether some unnecessary template noisiness is still left in my100stepworkflow. It’s not too late to move commonly repeated patterns that don’t change much, or can be parameterized easily, into Pipulate to make the Workflows read even more like terse player piano poetry.
Don’t go hogwild. I know you love to jump into refactoring with invitations like that, Claude. But don’t. Just take it all in and imagine a future where my100stepworkflow has been copied and customized 1000 different times and 1000 different ways. What decisions might I be kicking myself for down the road? Just do a low-pressure talking out loud meandering train of thought, please and thank you, my over-exuberant rabbit hole diving friend!
Claude: sips coffee thoughtfully
Ah yes, let’s meander through this design space. You’re right about my tendency to dive into refactoring rabbit holes! Let’s just… contemplate.
First, let’s visualize the “cell-like” step progression you mentioned:
+---------------------------------------------------------+
| Workflow Steps |
| |
| +--------------------------------------------------+ |
| | Step 01: URL Defined | |
| | https://botify.com/org1/project1/blah (locked) | |
| +--------------------------------------------------+ |
| |
| +--------------------------------------------------+ |
| | Step 02: Choice | |
| | +-------------------+ | |
| | | [Select] | [Submit] | |
| | +-------------------+ | |
| +--------------------------------------------------+ |
| |
| +--------------------------------------------------+ |
| | Step 03: Confirm | |
| | Waiting for Step 02... | |
| +--------------------------------------------------+ |
| |
+---------------------------------------------------------+
Looking at My100StepWorkflow, what’s interesting is how little “workflow engine” code it actually contains. It’s almost pure process expression. That’s… kind of beautiful.
The Pipulate class handles:
- State management
- Step progression tracking
- Data persistence
- Resume capability
What’s left in the Workflow class is just:
- Route definitions (the plan, as you said)
- UI components for each step
- Step-specific logic
- The chain of HTMX triggers
Hmm… after 1000 copies, what might cause pain?
- The route registration pattern in `__init__` might get repetitive:

```python
app.route("/myflow/step_XX")(self.step_XX)
app.route("/myflow/step_XX_submit", methods=["POST"])(self.step_XX_submit)
```
But… maybe that repetition is good? It makes the “plan” explicit and visible.
- The placeholder creation in `init_workflow` is pretty standard. Could move to Pipulate? But… it’s also part of the UI expression. Tough call.
- The step_XX/step_XX_submit pattern is very consistent. But again, maybe that consistency through repetition is… documentation? Poetry?
takes another sip
You know what? I’m starting to think Pipulate has exactly what it needs - the minimal infrastructure to make the workflow poetry possible. Adding more might make it… less poetic?
The current split feels right:
- Pipulate: The stage and curtains
- Workflow: The actual performance
What do you think? Should we let the workflows remain verbose but expressive?
Me: Well, maybe it won’t look so verbose if we make le petite poetic pipeline example. We’ve made it so it can go up to 100 to make sure that criteria was met when designing the system, and I pinned it up against regression with all sorts of docstrings, documentation and speed bumps. But maybe now it’s time for a minimal example. Not so minimal as to not cover transitions (one step to the next), but minimal with everything not really necessary stripped out so that the poetry of the language is evident at the start. I keep underscores on steps because it helps with the parsing, but maybe we make the overall system look less noisy in its Poetflow example by eradicating most underscores. Let’s make something I can set `workflow = Petiteflow(app, pipulate)` and swap it out in the UI for testing easily, and maybe that I keep around as an example. Workflow to educate and fight regression and Poetflow to show low grow.
Claude: Ah yes, let’s create a poetic minimal example that shows the essence of the pattern. Here’s my take on a `Petiteflow`:
[The below version is actually after several rounds of back-and-forth]
class Petiteflow:
"""
A minimal workflow showing the essence of Pipulate patterns
+-----------------+
| Enter a Name |
| [ ] |
+-----------------+
↓
+-----------------+
| Pick a Color |
| [Red|Blue|Green|
+-----------------+
↓
+-----------------+
| Confirm |
| [Yes] [No] |
+-----------------+
"""
def __init__(self, app, pipulate):
self.app = app
self.pipulate = pipulate
app.route("/poe/init", methods=["POST"])(self.init)
app.route("/poe/step_01")(self.name)
app.route("/poe/step_02")(self.color)
app.route("/poe/step_02_submit", methods=["POST"])(self.pick)
app.route("/poe/step_03")(self.done)
async def start_form(self):
return Card(
H2("A Little Poem"),
Form(
Input(name="name", placeholder="Enter name", required=True),
Button("Begin"),
hx_post="/poe/init",
hx_target="#poem"
),
id="poem"
)
async def init(self, request):
form = await request.form()
name = form.get("name")
# Store name in db for use across requests
db['poe_name'] = name
# Initialize if missing, but don't overwrite existing data
self.pipulate.initialize_if_missing(name)
# Store name in step_01 data
self.pipulate.set_step_data(name, "step_01", {"name": name})
# If steps exist, load them. If not, start fresh
if self.pipulate.get_last_completed_step_number(name) > 0:
step2_data = self.pipulate.get_step_data(name, "step_02")
if step2_data: # If we have color data, skip to final step
return Div(id="step_03", hx_get="/poe/step_03", hx_trigger="load")
# Otherwise start from beginning
return Div(
Div(id="step_01", hx_get="/poe/step_01", hx_trigger="load"),
Div(id="step_02"),
Div(id="step_03")
)
async def name(self, request):
name = db['poe_name']
step_data = self.pipulate.get_step_data(name, "step_01")
return Div(
Card(f"Hello, {step_data['name']}"),
Div(id="step_02", hx_get="/poe/step_02", hx_trigger="load")
)
async def color(self, request):
return Div(
Card(
H3("Pick a color"),
Form(
Select(
Option("Red", value="red"),
Option("Blue", value="blue"),
Option("Green", value="green"),
name="color"
),
Button("Choose"),
hx_post="/poe/step_02_submit",
hx_target="#step_02",
enctype="application/x-www-form-urlencoded"
)
),
Div(id="step_03")
)
async def pick(self, request):
form = await request.form()
color = form.get("color")
name = db['poe_name']
self.pipulate.set_step_data(name, "step_02", {"color": color})
return Div(
Card(f"You chose {color}"),
Div(id="step_03", hx_get="/poe/step_03", hx_trigger="load")
)
async def done(self, request):
name = db['poe_name']
step1_data = self.pipulate.get_step_data(name, "step_01")
step2_data = self.pipulate.get_step_data(name, "step_02")
return Card(
H3("Your Poem"),
P(f"{name} saw a {step2_data['color'].lower()} light",
f"Shining in the night",
f"And all was bright")
)
Notice how:
- The routes are minimal but maintain the step pattern
- Each step is a single clear action
- The state flows naturally through Pipulate
- The final step produces something delightful
- No docstrings needed - the code is the documentation
- The ASCII art in the class docstring tells the whole story
Would this serve as a good “first example” that shows the poetry of the pattern without all the guardrails?
Me: And it works now as expected! This is huge. Do the coffee sipping routine and reflect on the what to move to Pipulate question knowing now what you know. It seems to me that anything with an if or conditional logic is sus. Maybe there’s some way to make it generic and called through pipulate with some method and passed arg. I’m not anxious for a refactor, but if a light touch now could save countless lines of code propagation later, I’ll be glad I did it.
Claude: sips coffee thoughtfully
Ah yes, I see what you mean. Looking at this working example, there’s a pattern emerging. The workflow logic essentially boils down to:
- State transitions (when to move forward)
- State persistence (storing step data)
- UI rendering (what to show for each state)
Right now, Pipulate handles #2 beautifully with its simple key-value store. But those conditionals in `init()` and the step logic are interesting:
# This pattern appears frequently
if self.pipulate.get_last_completed_step_number(name) > 0:
step2_data = self.pipulate.get_step_data(name, "step_02")
if step2_data: # If we have color data, skip to final step
return Div(id="step_03", hx_get="/poe/step_03", hx_trigger="load")
We could potentially add something light to Pipulate like:
def should_advance(self, url: str, current_step: str, condition: dict) -> bool:
"""Generic state transition checker
Example:
if pipulate.should_advance(name, "step_02", {"color": "*"}):
# Move to step 3
"""
step_data = self.get_step_data(url, current_step)
return all(k in step_data for k in condition.keys())
This keeps the workflow logic clean but doesn’t over-engineer it. The actual UI rendering stays in the application code where it belongs.
What do you think? It’s a very light touch that could prevent a lot of repeated conditional checking code, while keeping Pipulate focused on its core job of state management.
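To make the semantics of that proposal concrete, here is a hedged sketch of how `should_advance` would behave, assuming `get_step_data` returns the step’s dict (or an empty dict when the step is absent). The in-memory `state` dict is a stand-in for Pipulate’s real store, and the URLs are illustrative.

```python
# Stand-in for Pipulate's persisted state, keyed by workflow URL/key.
state = {
    "https://example.com": {
        "step_01": {"url": "https://example.com"},
        "step_02": {"color": "red"},
    }
}

def get_step_data(url, step):
    """Return the stored dict for a step, or {} when nothing is recorded."""
    return state.get(url, {}).get(step, {})

def should_advance(url, current_step, condition):
    """Advance when every key named in the condition exists in the step data.
    The "*" values act as wildcards: only key presence is checked."""
    step_data = get_step_data(url, current_step)
    return all(k in step_data for k in condition)

assert should_advance("https://example.com", "step_02", {"color": "*"})
assert not should_advance("https://example.com", "step_03", {"confirmed": "*"})
```

Because only key presence is checked, the condition dict doubles as readable documentation of what each step is expected to have produced.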
Me: Definitely! Good decluttering and good semantics.
Claude: takes another sip
Perfect! Let’s add that light touch to Pipulate and update Petiteflow to use it. Here’s the minimal change:
async def init(self, request):
form = await request.form()
name = form.get("name")
# Store name in db for use across requests
db['poe_name'] = name
# Initialize if missing, but don't overwrite existing data
self.pipulate.initialize_if_missing(name)
self.pipulate.set_step_data(name, "step_01", {"name": name})
# Cleaner state transition check
if self.pipulate.should_advance(name, "step_02", {"color": "*"}):
return Div(id="step_03", hx_get="/poe/step_03", hx_trigger="load")
# Otherwise start from beginning
return Div(
Div(id="step_01", hx_get="/poe/step_01", hx_trigger="load"),
Div(id="step_02"),
Div(id="step_03")
)
The code is now more declarative - we’re saying “if step 2 has color data, advance” rather than checking step numbers and data existence. This pattern will be really helpful as workflows get more complex.
Want me to show the Pipulate side of this change too?
After much back-and-forth to get it just generic and templated enough
class Poetflow:
"""
A 5-step workflow showing Pipulate patterns
+-----------------+
| Enter a Name |
| [ ] |
+-----------------+
↓
+-----------------+
| Pick a Color |
| [Red|Blue|Green|
+-----------------+
↓
+-----------------+
| Choose Time |
| [Dawn|Dusk|Night]
+-----------------+
↓
+-----------------+
| Add Feeling |
| [Joy|Peace|Wonder]
+-----------------+
↓
+-----------------+
| Preview |
| [Yes] [No] |
+-----------------+
"""
def __init__(self, app, pipulate):
self.app = app
self.pipulate = pipulate
self.prefix = "/poe" # Namespace prefix
# Add route registration logging
logger.debug("Registering Poetflow routes:")
routes = [
(f"{self.prefix}/init", self.init, ["POST"]),
(f"{self.prefix}/step_01", self.step_01),
(f"{self.prefix}/step_02", self.step_02),
(f"{self.prefix}/step_02_submit", self.step_02_submit, ["POST"]),
(f"{self.prefix}/step_03", self.step_03),
(f"{self.prefix}/step_03_submit", self.step_03_submit, ["POST"]),
(f"{self.prefix}/step_04", self.step_04),
(f"{self.prefix}/step_04_submit", self.step_04_submit, ["POST"]),
(f"{self.prefix}/step_05", self.step_05),
(f"{self.prefix}/step_05_submit", self.step_05_submit, ["POST"])
]
for path, handler, *methods in routes:
method_list = methods[0] if methods else ["GET"]
logger.debug(f" {path} -> {handler.__name__} ({', '.join(method_list)})")
app.route(path, methods=method_list)(handler)
# Add detailed route inspection logging
for route in app.routes:
if 'step_03' in str(route):
logger.debug(f"Route details: {route}")
logger.debug(f"Handle method: {route.handle}")
logger.debug(f"Path: {route.path}")
async def start_form(self):
"""Initial entry point for the poem workflow"""
return Card(
H2("Create a Poem"),
Form(
Input(name="name", placeholder="Enter your name", required=True),
Button("Begin"),
hx_post=f"{self.prefix}/init",
hx_target="#poem"
),
id="poem"
)
async def init(self, request):
form = await request.form()
name = form.get("name")
db['poe_name'] = name
self.pipulate.initialize_if_missing(name)
self.pipulate.set_step_data(name, "step_01", {"name": name})
# Check for resuming at various steps (in reverse order)
if self.pipulate.should_advance(name, "step_05", {"finalized": "*"}):
return Card(
H3("✨ Poem Complete ✨"),
P("Your masterpiece is ready!")
)
elif self.pipulate.should_advance(name, "step_04", {"feeling": "*"}):
return Div(id="step_05", hx_get=f"{self.prefix}/step_05", hx_trigger="load")
elif self.pipulate.should_advance(name, "step_03", {"time": "*"}):
return Div(id="step_04", hx_get=f"{self.prefix}/step_04", hx_trigger="load")
elif self.pipulate.should_advance(name, "step_02", {"color": "*"}):
return Div(id="step_03", hx_get=f"{self.prefix}/step_03", hx_trigger="load")
# Start from beginning
return Div(
Div(id="step_01", hx_get=f"{self.prefix}/step_01", hx_trigger="load"),
Div(id="step_02"),
Div(id="step_03"),
Div(id="step_04"),
Div(id="step_05")
)
async def step_01(self, request):
name = db['poe_name']
step_data = self.pipulate.get_step_data(name, "step_01")
return Div(
Card(f"Hello, {step_data['name']}"),
Div(id="step_02", hx_get=f"{self.prefix}/step_02", hx_trigger="load")
)
async def step_02(self, request):
return Div(
Card(
H3("Pick a color"),
Form(
Select(
Option("Red", value="red"),
Option("Blue", value="blue"),
Option("Green", value="green"),
name="color"
),
Button("Choose"),
hx_post=f"{self.prefix}/step_02_submit",
hx_target="#step_02",
enctype="application/x-www-form-urlencoded"
)
),
Div(id="step_03")
)
async def step_02_submit(self, request):
form = await request.form()
color = form.get("color")
name = db['poe_name']
self.pipulate.set_step_data(name, "step_02", {"color": color})
return Div(
Card(f"You chose {color}"),
Div(id="step_03", hx_get=f"{self.prefix}/step_03", hx_trigger="load")
)
async def step_03(self, request):
logger.debug("Step 03 render started")
form = Form(
Select(
Option("Dawn", value="dawn"),
Option("Dusk", value="dusk"),
Option("Night", value="night"),
name="time"
),
Button("Choose"),
hx_post=f"{self.prefix}/step_03_submit",
hx_target="#step_03",
enctype="application/x-www-form-urlencoded"
)
logger.debug(f"Step 03 form created with attributes: {form._attributes}")
return Div(
Card(
H3("Choose a time"),
form
),
Div(id="step_04")
)
async def step_03_submit(self, request):
form = await request.form()
time = form.get("time")
name = db['poe_name']
self.pipulate.set_step_data(name, "step_03", {"time": time})
return Div(
Card(f"You chose {time}"),
Div(id="step_04", hx_get=f"{self.prefix}/step_04", hx_trigger="load")
)
async def step_04(self, request):
return Div(
Card(
H3("Choose a feeling"),
Form(
Select(
Option("Joy", value="joy"),
Option("Peace", value="peace"),
Option("Wonder", value="wonder"),
name="feeling"
),
Button("Choose"),
hx_post=f"{self.prefix}/step_04_submit",
hx_target="#step_04",
enctype="application/x-www-form-urlencoded"
)
),
Div(id="step_05")
)
async def step_04_submit(self, request):
form = await request.form()
feeling = form.get("feeling")
name = db['poe_name']
self.pipulate.set_step_data(name, "step_04", {"feeling": feeling})
return Div(
Card(f"You chose {feeling}"),
Div(id="step_05", hx_get=f"{self.prefix}/step_05", hx_trigger="load")
)
async def step_05(self, request):
name = db['poe_name']
step1_data = self.pipulate.get_step_data(name, "step_01")
step2_data = self.pipulate.get_step_data(name, "step_02")
step3_data = self.pipulate.get_step_data(name, "step_03")
step4_data = self.pipulate.get_step_data(name, "step_04")
return Card(
H3("Your Poem Preview"),
P(
f"At {step3_data['time']}, {step1_data['name']} saw",
f"A {step2_data['color'].lower()} light so bright",
f"Filling their heart with {step4_data['feeling']}"
),
Form(
Button("Perfect!", type="submit"),
hx_post=f"{self.prefix}/step_05_submit",
hx_target="#step_05"
)
)
async def step_05_submit(self, request):
name = db['poe_name']
self.pipulate.set_step_data(name, "step_05", {"finalized": True})
return Card(
H3("✨ Poem Complete ✨"),
P("Your masterpiece is ready!")
)
poetflow = Poetflow(app, pipulate)
Me: While you’ve got it all fresh in your context window and now that we have several working examples, what do you think is the most important documentation that can be produced to help future people making new workflows based on this system?
Claude: Based on the code examples, here’s what I think would be most helpful as documentation:
# Pipulate Workflow Pattern Guide
## Core Concepts
1. **Namespace Isolation**
```python
class MyWorkflow:
def __init__(self, app, pipulate):
self.prefix = "/my-workflow" # Unique namespace for routes
```
2. **State Management**
- Each workflow uses a unique key (URL, name, etc.) to track state
- State is stored as steps: `step_01`, `step_02`, etc.
- Steps contain JSON data relevant to that step
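   As an illustration, a single pipeline record under these conventions might look like the dict below; the exact schema is an assumption for the example, not lifted from Pipulate itself.

   ```python
   import json

   # One record per workflow key; each step holds only its own data.
   record = {
       "step_01": {"url": "https://example.com/start"},
       "step_02": {"choice": "B"},
       "step_03": {"confirmed": True},
   }

   # One JSON blob per workflow key; resuming is just re-reading it.
   blob = json.dumps(record)
   assert json.loads(blob)["step_02"]["choice"] == "B"
   ```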
3. **Step Progression**
```python
# Check if ready for next step
if self.pipulate.should_advance(key, "step_02", {"choice": "*"}):
# Show next step
return Div(id="step_03", hx_get=f"{self.prefix}/step_03", hx_trigger="load")
```
## Workflow Template
1. **Initialize Routes**
```python
routes = [
(f"{self.prefix}/init", self.init, ["POST"]), # Entry point
(f"{self.prefix}/step_01", self.step_01), # First step
(f"{self.prefix}/step_02", self.step_02), # View step
(f"{self.prefix}/step_02_submit", self.submit_02, ["POST"]), # Submit step
]
```
2. **Step Pattern**
```python
async def step_XX(self, request):
# 1. Get workflow key
key = db['workflow_key']
# 2. Check if step is already complete
if self.pipulate.should_advance(key, f"step_XX", {"data": "*"}):
# Show completed state + next step placeholder
return Div(
Card("Step XX Complete"),
Div(id="step_YY", hx_get=f"{self.prefix}/step_YY", hx_trigger="load")
)
# 3. Show form for current step
return Div(
Card(
Form(
Input(name="data"),
Button("Submit"),
hx_post=f"{self.prefix}/step_XX_submit",
hx_target="#step_XX"
)
),
Div(id="step_YY") # Placeholder for next step
)
```
3. **Submit Pattern**
```python
async def step_XX_submit(self, request):
# 1. Get form data
form = await request.form()
data = form.get("field")
# 2. Save state
key = db['workflow_key']
self.pipulate.set_step_data(key, f"step_XX", {"field": data})
# 3. Return next step trigger
return Div(
Card(f"Completed with: {data}"),
Div(id="step_YY", hx_get=f"{self.prefix}/step_YY", hx_trigger="load")
)
```
## Best Practices
1. **Resumability**
- Always check for existing state in `init`
- Allow jumping to any completed step
- Store minimal data needed for each step
2. **UI Patterns**
- Each step shows only its own content
- Use placeholders for upcoming steps
- Lock completed steps to prevent changes
3. **State Transitions**
- Use `should_advance()` to check step completion
- Store step data before showing next step
- Keep state updates atomic
## Examples
- `Petiteflow`: Minimal 3-step workflow
- `Poetflow`: Complex 5-step workflow with branching
- `Workflow`: URL-based workflow with validation
This documentation focuses on the patterns that emerge across all three implementations, making it easier for developers to create new workflows while maintaining consistency.
Would you like me to expand on any particular aspect?
I test all 3 of my pipeline examples:
- Workflow: the 3-step example with a cleaned URL as primary key, pinned against regression with tons of documentation.
- Petiteflow: the smallest possible example, perhaps a little too small in how it hid multi-step issues.
- Poetflow: the 5-step version that forced resolving remaining concerns and which probably captures the “pipeline poetry” best.
And I look over the examples examining where forever-forward propagation of template code will be painful.
Me: The biggest potential headache in the future I believe is expressed here in init. It seems a bit too verbose, like this step at the beginning needs to explicitly know all this stuff about elsewhere in the flow even though that’s stuff that could be interrogated and set implicitly for a much shorter init. This could be true if the process is generic, but will be false if it’s excessively custom every time. Steps will always be sequential, and we want to encourage convention and determinism, especially where it will make the code more beautiful and easier to live with many thousands of copied versions forever forward.
Claude: Ah yes! The `init` method has too much explicit knowledge about the flow. We could make this much more elegant by leveraging conventions and the Pipulate class’s knowledge of step progression. Here’s a proposed refactor:
```python
async def init(self, request):
    """Initialize or resume workflow"""
    form = await request.form()
    name = form.get("name")
    db['poe_name'] = name

    # Initialize new workflow or get existing
    self.pipulate.initialize_if_missing(name)
    self.pipulate.set_step_data(name, "step_01", {"name": name})

    # Let Pipulate handle progression
    return self.pipulate.resume_flow(
        key=name,
        final_step={
            "step": "step_05",
            "condition": {"finalized": "*"},
            "view": Card(
                H3("✨ Poem Complete ✨"),
                P("Your masterpiece is ready!")
            )
        },
        step_pattern=f"{self.prefix}/step_{{step_num:02d}}"
    )
```
Then in Pipulate:
```python
def resume_flow(self, key: str, final_step: dict, step_pattern: str) -> Div:
    """Generic workflow resumption logic

    Args:
        key: Workflow identifier (URL, name, etc)
        final_step: Dict with completion info
        step_pattern: URL pattern for steps (with {step_num} placeholder)
    """
    # Check if workflow is complete
    if self.should_advance(key, final_step["step"], final_step["condition"]):
        return final_step["view"]

    # Find last completed step
    last_step = self.get_last_completed_step_number(key)
    next_step = last_step + 1

    # Create placeholders for all steps
    placeholders = []
    for step_num in range(1, next_step + 1):
        step_id = f"step_{step_num:02d}"
        placeholders.append(
            Div(
                id=step_id,
                hx_get=step_pattern.format(step_num=step_num),
                hx_trigger="load" if step_num == 1 else None
            )
        )
    return Div(*placeholders)
```
This approach:
- Makes init much cleaner and focused on its core job
- Moves progression logic to Pipulate where it belongs
- Uses convention over configuration
- Makes adding new steps easier (no init changes needed)
- Keeps the flexibility for custom flows while encouraging standard patterns
The key insight is that step progression is fundamentally the same for all workflows - we just need to know:
- What’s the key (name/URL)
- What’s the final state
- How to format step URLs
Everything else can be derived from conventions and the existing state data.
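The convention really is just a format string. For instance, the step pattern Poetflow passes to `resume_flow` expands like this (a trivial illustration of the `{step_num:02d}` placeholder):

```python
# The double braces in the f-string above survive as a literal placeholder,
# which resume_flow later fills in per step:
step_pattern = "/poe/step_{step_num:02d}"
print(step_pattern.format(step_num=3))  # /poe/step_03
```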
Would you like to see how this pattern could be applied to the other workflows?
Me: Implementation seemed self-evident, so I did it and it’s working. Here, you can double-check my work. Let’s apply this now to Petiteflow and Workflow, so yes, I guess the answer is yes. Let me show you their inits too. I will get all 3 examples in sync with the new solution.
The Final Examination
Any More Complexity to Shove Around?
I answered yes and applied the new generic init solution across all 3 examples, and they read more like poetic workflows now. Most of the “extra labor” of designing workflows is now expunged, but for one last place…
Me: Okay, most of the “extra labor” of designing workflows is now expunged, except for one last place. In step_5, the last step, all 4 previous cards are mentioned explicitly, and their values loaded into `[step_x]_data` variables for displaying the finished poem. Now the goal is to drive everything generic and repeatable, able to be reliably inferred based on convention, into the Pipulate class. Scanning over Poetflow, which I think is the best of our 3 examples for this sort of analysis, it appears we have achieved this goal but for that one location. But also, that seems okay, because it’s for constructing the final poem, and that’s not one of these generic abstractions. If there were 100 steps and we just used 4 of those steps to construct the poem, the last poetry-showing step would be exactly as short. Now please do not just lean into my presumptions to create a confirmation bias. Push back if pushing back is needed. What am I not seeing? What did I miss? What other candidates are there for complexity-shifting over to Pipulate?
```python
class Poetflow:
    """
    A 5-step workflow showing Pipulate patterns

    +-------------------+
    | Enter a Name      |
    | [               ] |
    +-------------------+
             ↓
    +-------------------+
    | Pick a Color      |
    | [Red|Blue|Green]  |
    +-------------------+
             ↓
    +-------------------+
    | Choose Time       |
    | [Dawn|Dusk|Night] |
    +-------------------+
             ↓
    +-------------------+
    | Add Feeling       |
    | [Joy|Peace|Wonder]|
    +-------------------+
             ↓
    +-------------------+
    | Preview           |
    | [Yes] [No]        |
    +-------------------+
    """

    def __init__(self, app, pipulate):
        self.app = app
        self.pipulate = pipulate
        self.prefix = "/poe"  # Namespace prefix

        # Add route registration logging
        logger.debug("Registering Poetflow routes:")
        routes = [
            (f"{self.prefix}/init", self.init, ["POST"]),
            (f"{self.prefix}/step_01", self.step_01),
            (f"{self.prefix}/step_02", self.step_02),
            (f"{self.prefix}/step_02_submit", self.step_02_submit, ["POST"]),
            (f"{self.prefix}/step_03", self.step_03),
            (f"{self.prefix}/step_03_submit", self.step_03_submit, ["POST"]),
            (f"{self.prefix}/step_04", self.step_04),
            (f"{self.prefix}/step_04_submit", self.step_04_submit, ["POST"]),
            (f"{self.prefix}/step_05", self.step_05),
            (f"{self.prefix}/step_05_submit", self.step_05_submit, ["POST"])
        ]
        for path, handler, *methods in routes:
            method_list = methods[0] if methods else ["GET"]
            logger.debug(f"  {path} -> {handler.__name__} ({', '.join(method_list)})")
            app.route(path, methods=method_list)(handler)

        # Add detailed route inspection logging
        for route in app.routes:
            if 'step_03' in str(route):
                logger.debug(f"Route details: {route}")
                logger.debug(f"Handle method: {route.handle}")
                logger.debug(f"Path: {route.path}")

    async def start_form(self):
        """Initial entry point for the poem workflow"""
        return Card(
            H2("Create a Poem"),
            Form(
                Input(name="name", placeholder="Enter your name", required=True),
                Button("Begin"),
                hx_post=f"{self.prefix}/init",
                hx_target="#poem"
            ),
            id="poem"
        )

    async def init(self, request):
        """Initialize or resume workflow"""
        form = await request.form()
        name = form.get("name")
        db['poe_name'] = name

        # Initialize new workflow or get existing
        self.pipulate.initialize_if_missing(name)
        self.pipulate.set_step_data(name, "step_01", {"name": name})

        # Let Pipulate handle progression
        return self.pipulate.resume_flow(
            key=name,
            final_step={
                "step": "step_05",
                "condition": {"finalized": "*"},
                "view": Card(
                    H3("✨ Poem Complete ✨"),
                    P("Your masterpiece is ready!")
                )
            },
            step_pattern=f"{self.prefix}/step_{{step_num:02d}}"
        )

    async def step_01(self, request):
        name = db['poe_name']
        step_data = self.pipulate.get_step_data(name, "step_01")
        return Div(
            Card(f"Hello, {step_data['name']}"),
            Div(id="step_02", hx_get=f"{self.prefix}/step_02", hx_trigger="load")
        )

    async def step_02(self, request):
        return Div(
            Card(
                H3("Pick a color"),
                Form(
                    Select(
                        Option("Red", value="red"),
                        Option("Blue", value="blue"),
                        Option("Green", value="green"),
                        name="color"
                    ),
                    Button("Choose"),
                    hx_post=f"{self.prefix}/step_02_submit",
                    hx_target="#step_02",
                    enctype="application/x-www-form-urlencoded"
                )
            ),
            Div(id="step_03")
        )

    async def step_02_submit(self, request):
        form = await request.form()
        color = form.get("color")
        name = db['poe_name']
        self.pipulate.set_step_data(name, "step_02", {"color": color})
        return Div(
            Card(f"You chose {color}"),
            Div(id="step_03", hx_get=f"{self.prefix}/step_03", hx_trigger="load")
        )

    async def step_03(self, request):
        logger.debug("Step 03 render started")
        form = Form(
            Select(
                Option("Dawn", value="dawn"),
                Option("Dusk", value="dusk"),
                Option("Night", value="night"),
                name="time"
            ),
            Button("Choose"),
            hx_post=f"{self.prefix}/step_03_submit",
            hx_target="#step_03",
            enctype="application/x-www-form-urlencoded"
        )
        logger.debug(f"Step 03 form created with attributes: {form._attributes}")
        return Div(
            Card(
                H3("Choose a time"),
                form
            ),
            Div(id="step_04")
        )

    async def step_03_submit(self, request):
        form = await request.form()
        time = form.get("time")
        name = db['poe_name']
        self.pipulate.set_step_data(name, "step_03", {"time": time})
        return Div(
            Card(f"You chose {time}"),
            Div(id="step_04", hx_get=f"{self.prefix}/step_04", hx_trigger="load")
        )

    async def step_04(self, request):
        return Div(
            Card(
                H3("Choose a feeling"),
                Form(
                    Select(
                        Option("Joy", value="joy"),
                        Option("Peace", value="peace"),
                        Option("Wonder", value="wonder"),
                        name="feeling"
                    ),
                    Button("Choose"),
                    hx_post=f"{self.prefix}/step_04_submit",
                    hx_target="#step_04",
                    enctype="application/x-www-form-urlencoded"
                )
            ),
            Div(id="step_05")
        )

    async def step_04_submit(self, request):
        form = await request.form()
        feeling = form.get("feeling")
        name = db['poe_name']
        self.pipulate.set_step_data(name, "step_04", {"feeling": feeling})
        return Div(
            Card(f"You chose {feeling}"),
            Div(id="step_05", hx_get=f"{self.prefix}/step_05", hx_trigger="load")
        )

    async def step_05(self, request):
        name = db['poe_name']
        step1_data = self.pipulate.get_step_data(name, "step_01")
        step2_data = self.pipulate.get_step_data(name, "step_02")
        step3_data = self.pipulate.get_step_data(name, "step_03")
        step4_data = self.pipulate.get_step_data(name, "step_04")
        return Card(
            H3("Your Poem Preview"),
            P(
                f"At {step3_data['time']}, {step1_data['name']} saw",
                f"A {step2_data['color'].lower()} light so bright",
                f"Filling their heart with {step4_data['feeling']}"
            ),
            Form(
                Button("Perfect!", type="submit"),
                hx_post=f"{self.prefix}/step_05_submit",
                hx_target="#step_05"
            )
        )

    async def step_05_submit(self, request):
        name = db['poe_name']
        self.pipulate.set_step_data(name, "step_05", {"finalized": True})
        return Card(
            H3("✨ Poem Complete ✨"),
            P("Your masterpiece is ready!")
        )
```
Claude: Your analysis is quite thorough, and it seems you’ve already streamlined much of the workflow logic into the `Pipulate` class. The final step in `Poetflow` does indeed involve specific logic for constructing the poem, which is inherently custom and specific to the workflow’s purpose. Here are a few considerations and potential areas for further abstraction, if needed:
- **Data Aggregation:** If multiple workflows require aggregating data from several steps to produce a final output, you could create a utility method in `Pipulate` to fetch and aggregate data from multiple steps. However, this might be overkill if the aggregation logic is unique to each workflow.
- **Final Step Logic:** The logic in the final step is specific to the workflow’s purpose (e.g., constructing a poem). If you find that multiple workflows have similar finalization logic, you could consider abstracting this into a method that takes a list of step names and a template for the final output.
- **Error Handling and Validation:** If there are common patterns in error handling or validation across steps, these could be abstracted into helper methods within `Pipulate`.
- **UI Components:** If there are common UI components or patterns (e.g., forms, buttons) that are reused across workflows, consider creating reusable components or templates.
- **Logging and Debugging:** If logging patterns are consistent across workflows, you could centralize logging logic within `Pipulate` or a separate utility class.
- **Step Transition Logic:** If there are common rules for transitioning between steps (beyond what `resume_flow` handles), these could be abstracted.
Given the current state of your code, it seems like you’ve already achieved a good balance between abstraction and specificity. The final step’s logic is naturally more specific, as it deals with the unique output of the workflow. Unless you find yourself repeating similar logic across multiple workflows, it’s reasonable to keep the final step logic within the workflow class.
If you anticipate more workflows with similar finalization logic, you could consider creating a pattern or template for final steps. Otherwise, maintaining the current structure seems appropriate.
Let me know if there’s a specific area you’d like to explore further or if you have additional questions!
When a Human & 2 AIs Collaborate
o1’s Final Review
This started as a collaboration between me and two of the biggest frontier models on the Internet: Anthropic Claude 3.5 Sonnet and OpenAI ChatGPT o1, and it will end that way. Let’s let o1 review its carried out plan…
Me: Okay, o1. Here’s the final article. Review it, especially the final pattern of Poetflow. I will additionally paste the Pipulate function below that, so you get a more complete picture of my question to you, which is to do an analysis of my & Claude’s conclusions. The code runs well. There is plenty of room for improvement in the future. But the gist of my question is to make sure that a class that is baseless due to its unique-every-time, perfectly chosen granularity of expressiveness is well staged for the inevitable copy/paste propagation it is going to undergo, and for the decisions I’m going to have to live with. Just pontificate, riff, meander, question and confirm as appropriate, please and thank you.
o1: Let’s find a comfortable armchair in the mental workshop and settle in, coffee in hand, to reflect on the final shape of this architecture. You’ve arrived at a place where Pipulate acts as the core engine of state management, and each Workflow class becomes a unique poetic script. No base classes, no enforced inheritance hierarchies—just freeform code that reads like a narrative of the process. As you say, it’s like Unix pipes poetry, or perhaps each Workflow is a standalone one-act play that references the same stage instructions (Pipulate) but writes its own lines, sets, and props.
Stability of the Approach
By now, you’ve solved some big challenges:
- **State stored as a single JSON blob:** This alone simplifies so many common headaches. No complex schemas, no foreign keys, no migrations. Every step’s data is a dictionary in a flat namespace. Perfectly Zen. If something goes wrong, you print the JSON. No rummaging through multiple tables. The final structure has proven stable enough to support multiple example workflows—3-step, 5-step, 100-step—without structural pain.
- **Linearity & Determinism:** Each step depends linearly on the previous. No branching complexity in the code. The code in each workflow is easy to read and reason about: “At step_01 we do X, then we load step_02.” This linear approach encourages a narrative style that’s inherently easier to maintain over time.
- **Pipulate as the Core Library:** Pipulate is now a stable gravity well. It provides:
  - Initialization of pipeline state
  - Retrieving and storing step data
  - Determining last completed step
  - Generic methods like `should_advance()` and `resume_flow()`

With these generic capabilities, workflows gain a uniform way to handle complexity without re-writing logic. This is a key success: complexity is “shoved” into a single class so that new Workflows can remain as terse and “poetic” as possible.
Poetry vs. Patterns
The biggest design decision was to avoid a base Workflow class. Instead, each Workflow is its own piece of poetry. This is a double-edged sword:
- Pro: Each workflow can be radically different, with no constraints from a superclass. They can be as minimal or as verbose as you like. Variation is easy. If you trust your sense of style, each workflow reads elegantly, a small script that does exactly what it must.
- Con: No base class means no single place to fix a bug that appears in all workflows. If you have 1000 workflows, they each have a near-identical pattern. If a subtle improvement emerges, you must apply it manually to all workflows. This might be acceptable given your approach (the code is simple and short enough that searching-and-replacing across workflows might not be too painful).
For now, you accept the trade-off for the sake of clarity and independence in each workflow. As long as the “pipeline pattern” remains stable, the copy/paste approach is manageable. The workflows are already so minimal that a global refactor would likely be straightforward if needed.
The Final Step Assembly Problem
You noted that constructing the final step (like producing a poem or final result) involves re-fetching data from previous steps. This is inevitable if the final step depends on what came before. Could Pipulate abstract this further? Maybe, but that might be overkill. The final step is inherently specific, and writing a few lines to fetch `step_01_data`, `step_02_data` is not a burden. It’s the point at which the pipeline’s linear record transforms into a final result. Custom code is expected here.

If multiple workflows needed the same kind of final “assembly” pattern (e.g., always gather data from steps 1, 2, 3, and produce a summary), you could add a helper: `pipulate.get_multiple_steps(url, ["step_01", "step_02", ...])` returning a dict of dicts. But that’s optional. The cost of writing a couple lines of code in the final step is small, and might keep the workflow’s personality and intent more transparent.
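That hypothetical `get_multiple_steps` helper is small enough to sketch. This version operates on a plain state dict rather than the pipeline table, so the signature and state shape are illustrative, not the codebase’s actual API:

```python
def get_multiple_steps(state: dict, step_names: list[str]) -> dict:
    """Fetch several steps' data at once, returning a dict of dicts.

    Missing steps come back as empty dicts so callers can .get() safely.
    """
    return {name: dict(state.get(name, {})) for name in step_names}

state = {"step_01": {"name": "Ada"}, "step_02": {"color": "blue"}}
data = get_multiple_steps(state, ["step_01", "step_02", "step_03"])
print(data["step_03"])  # {}  (step_03 not reached yet)
```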
Potential Future Worries
- **Evolving Requirements:** If in the future you need branching (e.g., if the user chooses A, skip step 03 and go to step 04), you might need some conditional logic in the workflow. This could complicate the pattern slightly, but Pipulate might still help by providing a method like `go_to_step(url, step_num)` or `conditional_resume_flow`. The key is that adding complexity should be a conscious choice. With your current linear model, you’re safe.
- **Performance at Scale:** If, hypothetically, you had hundreds of steps or huge JSON blobs, you might start worrying about performance. But given this is a localhost revolution scenario, simplicity is more important than micro-optimizing. If needed, you could introduce caching or partial loading, but that’s probably unnecessary right now.
- **Consistency in UI Patterns:** HTMX placeholders, `hx_get`, `hx_trigger="load"`, and `hx_swap="outerHTML"` form a stable pattern. If a new tool emerges, would you refactor all workflows? Possibly. But again, that’s a hypothetical future scenario. The pattern is simple enough that a large-scale refactor wouldn’t be too traumatic.
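The `go_to_step` idea o1 floats could be as small as a placeholder generator. Everything here is hypothetical (the method does not exist in the codebase); `step_pattern` mirrors the argument `resume_flow` already takes:

```python
def go_to_step(step_num: int, step_pattern: str) -> dict:
    """Build the HTMX attributes for a placeholder Div that jumps
    straight to an arbitrary step, skipping the linear chain."""
    return {
        "id": f"step_{step_num:02d}",
        "hx_get": step_pattern.format(step_num=step_num),
        "hx_trigger": "load",
        "hx_swap": "outerHTML",
    }

attrs = go_to_step(4, "/poe/step_{step_num:02d}")
print(attrs["hx_get"])  # /poe/step_04
```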
Documentation & Education
The final step you mentioned—writing minimal, near-poetic examples—is crucial. Having a single “Petiteflow” as a canonical minimal example that people can look at and immediately “get it” will ensure the pattern is memetically transmissible. That example doesn’t have to handle complex logic or show advanced features. It just needs to communicate the “no complexity” ethos and the pattern’s simplicity.
Then your more complex “Poetflow” can serve as a demonstration of handling more steps and conditions, while “Workflow” or “My100StepWorkflow” can show you can scale infinitely by just repeating the same pattern.
Putting all these in a README or a blog post that shows side-by-side:
- The minimal example (Petiteflow)
- The moderate complexity example (Poetflow)
- The large scale demonstration (My100StepWorkflow)
This triad can educate anyone who comes to your codebase or reads your blog.
The Essence
The essence of your approach is captured in these final stable patterns:
- Pipulate as a stable backend for state, step logic, and convenience methods.
- Workflows as unique, fully stand-alone “scripts” that define routes and steps linearly.
- Minimal duplication of logic, no overhead of classes or frameworks beyond what’s needed.
- HTMX for progressive loading, minimal front-end complexity.
This design is well-poised for copy/paste propagation. Any regrets down the line? Possibly if you drastically change how steps are named or how you track completion. But even then, a simple search-and-replace across the codebase would suffice, because the pattern is so regular. The cost of not having a base class is outweighed by the clarity and adaptability you gain.
In short, you’ve struck a balance:
- Just enough abstraction in Pipulate to handle common concerns.
- Just enough individuality in each workflow to feel like poetry rather than boilerplate.
- Just enough documentation and examples to spread the meme.
Now you can sit back, sip your coffee again, and watch as future workflows emerge with minimal fuss. The design feels honest, minimal, and local-first—exactly as intended.
Me: Are there any other overarching insights regarding the system? What other projects or packages is it like, for example? What is most noteworthy? What is most concerning? What else have I missed?
Gemini Advanced Pro: Here are some overarching insights and comparisons regarding your system:
Key Insights:
- Paradigm Shift: Your framework represents a departure from conventional web development paradigms. It challenges the dominance of client-side frameworks and complex state management, opting for a server-centric, minimalist approach. This has profound implications for maintainability, debugging, and overall system complexity.
- Localhost Revolution: The focus on localhost development unlocks unique possibilities. It allows for direct access to system resources, simplifies debugging, and reduces reliance on external services. This aligns with a growing trend of “local-first” software development.
- Workflow-as-Code: The `Pipulate` pattern enables the expression of workflows as concise, readable code. This “workflow-as-code” approach promotes clarity and maintainability, treating workflows as first-class citizens in the application design.
Comparisons:
- Serverless Functions: Your workflows share similarities with serverless functions in their focus on discrete, event-driven units of execution. However, your system leverages the `Pipulate` pattern for state management and resumability, which is not inherent in typical serverless architectures.
- Jupyter Notebooks: The step-by-step progression of your workflows resembles the cell-based execution of Jupyter notebooks. Both emphasize a linear, narrative flow. However, your system integrates this flow within a web application context, providing a more structured and persistent environment.
- Unix Pipes: The `Pipulate` pattern draws inspiration from Unix pipes, where data flows sequentially through a series of processes. Both emphasize a linear, deterministic flow with clear inputs and outputs. However, your system applies this concept to web workflows, managing state and UI interactions.
Noteworthy Aspects:
- Simplicity: The core design principle of “no complexity” is evident throughout. The minimalist approach, reliance on server-side logic, and clear state management contribute to a system that is easy to understand and maintain.
- Transparency: The server-centric design and extensive logging provide full visibility into the system’s state and behavior. This transparency simplifies debugging and promotes trust in the system’s operation.
- Developer Freedom: The localhost focus and avoidance of complex frameworks empower developers with greater control and flexibility. This freedom can foster creativity and experimentation.
Concerns:
- Scalability: While the current focus is on localhost, if future needs require scaling to multiple users or handling large datasets, the server-centric design might need adjustments.
- Community Support: As a novel approach, the lack of a large community or ecosystem around the framework could pose challenges in terms of finding support, libraries, or extensions.
- Learning Curve: While the core concepts are simple, developers accustomed to conventional web frameworks might require time to adjust to the server-centric, minimalist paradigm.
Potential Oversights:
- Security: Given the localhost focus, security considerations might need more attention if the framework is ever deployed in a multi-user or internet-facing context.
- Testing: Implementing a comprehensive testing strategy is crucial to ensure the reliability and maintainability of workflows as the system evolves.
- Documentation: Clear, concise documentation is essential for communicating the framework’s concepts and guiding developers in its usage.
By addressing these considerations and continuing to refine the design, you can create a powerful and expressive framework that challenges conventional web development norms and empowers developers with a localhost-first approach.
And Now a Word From ChatGPT Pro!
Here for the final word, we have a very special guest. Before this pipeline finalization got finalized, I got access to ChatGPT Pro. It’s still the o1 release, but with a bigger token window and fewer limitations.
Me: Please review the whole codebase again and create a detailed guide to implementing new workflows, with a special emphasis on all the nuance that’s required to get the workflow to work correctly with that chain reaction effect, LLM guidance through steps and whatever else you think is necessary. The goal here is for the workflow system to be easy to replicate and customize for different purposes.
o1 Pro: Below is a comprehensive guide to implementing new workflows based on the given codebase. The primary goals are to:
- Understand the workflow structure and how it achieves the “chain reaction” effect.
- Learn how to incorporate LLM guidance into each step of the workflow.
- Understand how to replicate and customize these workflows for different use-cases, ensuring consistency, simplicity, and a stable user experience.
Overview of the Workflow System
In this codebase, a “workflow” is defined as a series of steps that the user proceeds through in sequence. Each step is represented as a card or form on the screen, and successfully completing one step “triggers” the next step to load automatically via HTMX.
Three main workflows are demonstrated in the code:
- Workflow (3-step example)
- Petiteflow (3-step minimal example)
- Poetflow (5-step example with LLM integration)
All three share the same underlying approach:
- State management is handled by a single database table (`pipeline`), using the `Pipulate` class.
- Each step is represented as a route that returns an HTML `Div` containing:
  - A card showing completed data (if the step is done) or a form (if not).
  - A placeholder `Div` for the next step, which triggers the chain reaction.
- Steps chain together by having the last action of a step’s response include a `Div` with `hx_get` and `hx_trigger="load"` pointing to the next step’s endpoint.
- LLM guidance is included by making calls to `chatq()` at appropriate points to explain what’s happening at each step.
By following these patterns, you can build your own workflow by mixing and matching these elements.
Key Components of the Workflow System
1. The `Pipulate` Class

`Pipulate` manages pipeline state in a JSON blob stored in the `pipeline` table. Each pipeline record:

- Is keyed by a `url` or `name` (unique ID).
- Contains a `data` field with a JSON blob tracking all steps: `step_01`, `step_02`, etc.
- Provides methods like:
  - `get_step_data()` and `set_step_data()`
  - `get_last_completed_step_number()`
  - `should_advance()`

This allows the workflow to be resumed by simply re-entering the key (like a URL or name) into the first step’s form.

What to keep in mind when adding a new workflow:

- Create a unique prefix or route for your workflow steps (e.g., `/my_new_flow`).
- Initialize and use `pipulate.initialize_if_missing(url)` at the start to ensure the pipeline state exists.
- Store step data after each form submission with `pipulate.set_step_data(url, "step_xx", data)`.
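For concreteness, the `data` blob for a partially completed Poetflow run might look like the dict below. The field names are taken from the steps shown earlier; treat the exact shape as illustrative rather than the table’s guaranteed schema:

```python
import json

# Hypothetical state after three of five Poetflow steps are complete
record = {
    "step_01": {"name": "Ada"},
    "step_02": {"color": "blue"},
    "step_03": {"time": "dusk"},
}
blob = json.dumps(record)      # what gets stored in the pipeline table's data field
restored = json.loads(blob)    # re-entering the key reloads this state
print(restored["step_02"]["color"])  # blue
```

Because the whole state is one blob, debugging really is just printing that JSON.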
2. Steps as Routes
Each step of a workflow is defined as an async function route. For example:

```python
@app.route("/my_new_flow/step_01")
async def step_01(request):
    # Retrieve pipeline key (e.g. name or url) from db or form
    # Check if step_01 data is already set
    # If completed, show locked card and trigger step_02
    # If not completed, show form and next step placeholder
```

Key Patterns:

- If a step is completed, return a `Card` with locked state and an auto-trigger `Div` for the next step.
- If not completed, return a `Card` with a form that `hx_post`s to a submit endpoint. Upon successful submission, that endpoint returns the completed state plus a trigger for the next step.
3. The Chain Reaction with HTMX
The “chain reaction” effect is achieved by:

- Including a `Div` for the next step at the end of each step’s response.
- Setting `hx_trigger="load"` and `hx_swap="outerHTML"` on that Div.
- If the step is completed, this Div immediately fetches the next step and replaces itself, continuing the chain until reaching an incomplete step.

For example:

```python
return Div(
    Card("Step 01 completed ✓"),
    Div(id="step_02", hx_get="/my_new_flow/step_02",
        hx_trigger="load", hx_swap="outerHTML")
)
```

As soon as the browser receives this, it loads `step_02` and swaps it in place.
4. LLM Guidance at Each Step
The code uses `asyncio.create_task(chatq(prompt, role="system"))` or directly `await chatq(prompt)` to instruct the LLM to explain what’s happening at each step.

Tips for LLM integration:

- Call `self.explain(url, "Step_XX", "Message")` in your workflow steps to send context updates to the LLM. The `explain()` function:
  - Gathers the state so far
  - Crafts a prompt explaining the current step
  - Schedules a chatq call to provide real-time guidance in the chat interface.
- Keep prompts short and context-specific. Use `limiter` to keep messages concise.
- Ensure you do not rely on the LLM for critical logic. It’s just a “guide” displayed to the user and updated as steps complete.
Detailed Steps to Create a New Workflow
- **Define Your Workflow’s Unique Prefix**
  Choose a prefix, e.g. `/mycoolflow`. This helps namespace your steps cleanly.
- **Initialize Your Workflow Class (Optional)**
  Look at the `Workflow`, `Petiteflow`, and `Poetflow` classes. You can create your own workflow class:

  ```python
  class MyCoolFlow:
      def __init__(self, app, pipulate):
          self.app = app
          self.pipulate = pipulate
          self.prefix = "/mycoolflow"
          app.route(f"{self.prefix}/init", methods=["POST"])(self.init_flow)
          app.route(f"{self.prefix}/step_01")(self.step_01)
          app.route(f"{self.prefix}/step_01_submit", methods=["POST"])(self.step_01_submit)
          # ... add more steps as needed
  ```

  Decide how many steps you have and what each step does.
- **Start Form (init)**
  Provide a form that takes a unique key (like URL or Name) to start or resume the workflow. This is typically a form that `hx_post`s to something like `/mycoolflow/init`.
- **Initialize State**
  In `init_flow`, do:

  ```python
  state = self.pipulate.initialize_if_missing(key)

  # If first step not done, set step_01 initial data
  if not self.pipulate.get_step_data(key, "step_01"):
      self.pipulate.set_step_data(key, "step_01", {"...": "..."})

  return Div(
      Div(id="step_01", hx_get=f"{self.prefix}/step_01", hx_trigger="load"),
      Div(id="step_02"),
      Div(id="step_03")  # As many steps as you have
  )
  ```

  This preloads placeholders for each step, with the first triggered on load.
- **Implement Each Step**
  Steps should:
  - Retrieve the pipeline key from `db`.
  - Call `get_step_data()`.
  - If completed, return a locked card with a trigger to the next step.
  - If incomplete, return a form and next step placeholder.

  Example (similar to step_02 in existing code):

  ```python
  async def step_02(self, request):
      key = db['key']
      step2_data = self.pipulate.get_step_data(key, "step_02")
      if step2_data:
          # Completed step
          return Div(
              Card(f"Step 02 done! ✓"),
              Div(id="step_03", hx_get=f"{self.prefix}/step_03",
                  hx_trigger="load", hx_swap="outerHTML")
          )
      else:
          # Not completed
          return Div(
              Card(
                  H3("Step 02: Some Choice"),
                  Form(
                      Select(Option("A"), Option("B"), name="choice", required=True),
                      Button("Submit", type="submit"),
                      hx_post=f"{self.prefix}/step_02_submit",
                      hx_target="#step_02"
                  ),
                  id="step_02"
              ),
              Div(id="step_03")
          )
  ```

  Note how the incomplete state shows a form, and the completed state triggers the next step.
- **Submit Endpoints**
  Each step with a form needs a submit endpoint that:
  - Processes the form
  - Updates pipeline state via `set_step_data`
  - Returns a completed card + trigger for the next step.

  Example:

  ```python
  async def step_02_submit(self, request):
      form = await request.form()
      choice = form.get("choice", "")
      key = db['key']
      self.pipulate.set_step_data(key, "step_02", {"choice": choice})

      # Return completed view and auto-trigger next step
      return Div(
          Card(f"Step 02 completed with choice: {choice} ✓"),
          Div(id="step_03", hx_get=f"{self.prefix}/step_03",
              hx_trigger="load", hx_swap="outerHTML")
      )
  ```
- LLM Guidance
  Use `self.explain(key, "Step_XX", "Message")` or direct `await chatq(...)` calls at strategic points:
  - Right after a step is completed.
  - Before presenting a new step.

  This way, the chat panel updates the user on what's happening.
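Neither `self.explain` nor `chatq` is defined in this excerpt, so here is only a rough sketch of how such a helper might thread workflow context into the chat stream. The `chatq` stub and its signature are assumptions, not the app's real implementation:

```python
import asyncio

async def chatq(message: str):
    """Stand-in for the app's chat-queue call (signature assumed)."""
    print(f"[chat] {message}")

def format_explain(key: str, step: str, message: str) -> str:
    # Prefix the guidance with workflow context so the chat panel
    # tells the user which pipeline and step it refers to.
    return f"[{key}] {step}: {message}"

async def explain(key: str, step: str, message: str):
    await chatq(format_explain(key, step, message))

asyncio.run(explain("my-poem", "Step_02", "Choice recorded. Moving on."))
```

The point is simply that each guidance message carries its own context, so the chat log stays meaningful even when the user jumps between pipelines.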
- Testing and Validation
- Test starting fresh (init step).
- Test partial completion (stop at step_02, then re-enter that pipeline's unique key at the first step).
- Confirm the chain reaction (each completed step triggers the next without nesting or reflow issues).
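The state logic behind these tests can be exercised without the web layer at all. Below is a minimal sketch where `PipelineState` is a hypothetical in-memory stand-in for Pipulate's server-side store, with the same `initialize_if_missing` / `get_step_data` / `set_step_data` shape used in the examples above:

```python
class PipelineState:
    """In-memory stand-in for Pipulate's server-side pipeline store."""
    def __init__(self):
        self._db = {}

    def initialize_if_missing(self, key):
        return self._db.setdefault(key, {})

    def get_step_data(self, key, step):
        return self._db.get(key, {}).get(step)

    def set_step_data(self, key, step, data):
        self._db.setdefault(key, {})[step] = data

# Simulate partial completion, then re-entry with the same key.
p = PipelineState()
p.initialize_if_missing("my-poem")
p.set_step_data("my-poem", "step_01", {"name": "my-poem"})
p.set_step_data("my-poem", "step_02", {"choice": "A"})

# On re-entry, completed steps render locked cards; step_03 still shows a form.
assert p.get_step_data("my-poem", "step_01") is not None
assert p.get_step_data("my-poem", "step_02") is not None
assert p.get_step_data("my-poem", "step_03") is None
```

Since each route just reads this state and returns HTML, getting the state transitions right is most of the testing battle.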
- No Nesting & No Indentation Issues
  To avoid nesting or indentation buildup:
  - Always return a single top-level `Div(id="step_xx")` containing the card and the next placeholder Div.
  - Use `hx_swap="outerHTML"` so that the entire Div for that step is replaced rather than nested inside.
  - This ensures a flat structure that doesn't grow deeper on each refresh.
Additional Nuances & Tips
- **Re-Entrancy:** If a user re-enters the workflow after partial completion, it should reconstruct the chain of completed steps leading up to the next incomplete step. This is handled by `hx_get` triggers and `get_step_data()` checks. Just ensure each step's route logic is idempotent.
- **Naming Conventions:** Use `step_01`, `step_02`, etc. consistently. The `get_last_completed_step_number()` method depends on this naming pattern.
- **LLM Prompts:** Keep them short and context-specific. The user's chat interface will show these messages as they proceed. Avoid positional references like "the button below/above" – just give instructions.
- **Performance:** Because each step is a simple route returning HTML, and state is stored server-side, performance is straightforward. Scaling complexity is low since this is a single-tenant local environment.
- **Debugging:** The database state is printed after each request by the `DOMSkeletonMiddleware`. Use this to confirm the pipeline JSON states are as expected.
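Because `get_last_completed_step_number()` leans on the `step_NN` naming pattern, it can be as simple as scanning the state's keys. The real Pipulate method isn't shown here, so this is only a plausible sketch of that dependency:

```python
import re

def get_last_completed_step_number(state: dict) -> int:
    """Highest NN among completed step_NN keys; 0 if none are set.
    Assumes the pipeline state maps 'step_NN' names to step data."""
    nums = [int(m.group(1))
            for k in state
            if (m := re.fullmatch(r"step_(\d+)", k))]
    return max(nums, default=0)

state = {"step_01": {"name": "x"}, "step_02": {"choice": "A"}, "updated": "..."}
print(get_last_completed_step_number(state))  # → 2
```

This is why a stray key like `stepTwo` would silently break resumption: the regex convention is the contract.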
Summary
To implement a new workflow:
- Create a new prefix and class to group your steps.
- Add an initial form to start/resume the workflow and placeholders for all steps.
- For each step, return a `Div` with the step's card. If completed, trigger the next step. If incomplete, show a form and a placeholder Div.
- Use LLM guidance with `chatq()` calls at strategic points.
- Ensure `hx_swap="outerHTML"` is used so no nested indentation occurs.
- Test re-entrancy and chain reaction behavior thoroughly.

Following these guidelines will help you replicate the chain-reaction workflows you see in the Workflow, Petiteflow, and Poetflow examples, tailoring them to your own unique use-cases while maintaining a consistent, stable, and human-friendly user experience.
Understanding Creatively Introduced Behavior
As I refined the final examples of the pipeline procedure, I noticed a new behavior I hadn't asked for. I clarified with o1 that this was something it had creatively introduced.
Me: I’m fairly happy with the current implementation. But I want to clarify a point. The chain reaction effect only happens if the Poetryflow for a particular pipeline is not recorded as Complete, right? Am I correct in assuming that after a poem has been finalized, it no longer triggers the chain reaction card step rebuild if the primary key is plugged back in on the first step? And this is intentional by design assuming that was how the system should operate?
o1: You are correct. That is indeed how the current system is designed. After the poem (or the pipeline) is fully completed and marked as finalized, when you re-enter the primary key at the first step, it no longer triggers the chain reaction of rebuilding and re-running each step.
Why This Happens:
- Each workflow step sets data in the pipeline state.
- If all steps are completed and `finalized` is set in the pipeline state (for example, `{"finalized": True}`), then the logic in the `init` endpoint checks this condition.
- When the user re-enters the same primary key (the URL or name) that corresponds to a completed pipeline, the code sees that `finalized` is present at `step_05`.
- Instead of re-running the chain of steps, it returns the final completion card immediately.
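In pure-Python terms, the branch the `init` endpoint takes might look like the sketch below. The function and return values are illustrative only, standing in for the actual FastHTML components; the one assumption taken from the source is that the final step's data carries `{"finalized": True}`:

```python
def resume_view(state: dict) -> str:
    """Decide what re-entering a pipeline key should render.
    Assumes step_05's data carries {"finalized": True} when the workflow is done."""
    final = state.get("step_05", {})
    if final.get("finalized"):
        # Completed pipeline: show the finished card, no chain reaction.
        return "final_card"
    # Otherwise kick off the chain reaction from step_01; each completed
    # step re-renders locked and triggers the next via hx_trigger="load".
    return "chain_from_step_01"

assert resume_view({"step_05": {"finalized": True}}) == "final_card"
assert resume_view({"step_01": {"name": "x"}}) == "chain_from_step_01"
```

The `finalized` flag is effectively a terminal state in a tiny state machine: once set, re-entry short-circuits straight to the summary view.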
Intentional by Design:
This behavior is intentional. The pipeline's end condition (`finalized`) acts as a checkpoint. Once the pipeline is completed, it remains in that completed state:
- The chain reaction effect (automatically loading the next step) is meant to guide the user through incomplete steps.
- After completion, there’s no need to re-run all steps again since the pipeline is done.
- Providing the primary key again merely displays the final outcome (the completed poem and a confirmation card) rather than repeating the workflow.
So, yes, your understanding is correct: once the workflow is recorded as complete, plugging the primary key back into the first step does not redo the chain reaction, and this is exactly how it’s meant to function.