
Pipulate Free & Open Source SEO with and for LLMs

Workflows are WET as WET can be. The CRUD is DRY as DRY.
You need not use the Cloud because no lock-in does apply!
Haters gonna hate because they need no reason why.

What is Pipulate?

Pipulate is the first local-first, single-tenant desktop app framework with built-in AI-assisted step-by-step workflows. It’s designed to function somewhat like an Electron app but runs a full Linux subsystem within a folder (using Nix) for reproducibility on macOS, Linux, and Windows (via WSL). Its primary focus is enabling SEO practitioners and others to use AI-assisted, step-by-step workflows (ported from Jupyter Notebooks) without needing to interact directly with Python code. It also serves developers who build these workflows.

Core Philosophy & Design:

  • Local-first and single-tenant: all state lives on your machine, fully observable.
  • WET workflows, DRY CRUD: workflows are explicit and repetitive by design; the CRUD layer is factored and reusable.
  • Server-side state over client-side complexity: no ORM, no message queues, no build step.

Key Technologies Used:

  • FastHTML, HTMX, MiniDataAPI, SQLite, Ollama (local LLM), Nix Flakes, and Jupyter Notebooks.

Target Audience:

  • SEO practitioners who want AI-assisted, step-by-step workflows without touching Python code, and the developers who build those workflows for them.

A local-first, single-tenant desktop app framework built with FastHTML, MiniDataAPI, and local LLM integration (via Ollama). Designed to run like an Electron app—with full server-side state, minimal complexity, and reproducible environments using Nix Flakes. The CRUD is DRY and the Workflows are WET! by Mike Levin

             ┌─────────────┐ Like Electron, but full Linux subsystem 
             │   Browser   │ in a folder for macOS and Windows (WSL)
             └─────┬───────┘
                   │ HTTP/WS
                   ▼
┌───────────────────────────────────────┐
│           Nix Flake Shell             │ - In-app LLM (where it belongs)
│  ┌───────────────┐  ┌──────────────┐  │ - 100% reproducible
│  │   FastHTML    │  │    Ollama    │  │ - 100% local
│  │   HTMX App    │  │  Local LLM   │  │ - 100% multi-OS    
│  └───────┬───────┘  └──────────────┘  │
│          │                            │
│    ┌─────▼─────┐     ┌────────────┐   │
│    │MiniDataAPI│◄───►│ SQLite DB  │   │
│    └───────────┘     └────────────┘   │
└───────────────────────────────────────┘

Key Features

  • Runs like an Electron app, but with a full Linux subsystem in a folder via Nix (macOS, Linux, Windows via WSL).
  • Built-in local LLM (Ollama) with streaming chat, aware of user actions and application state.
  • AI-assisted, step-by-step workflows ported from Jupyter Notebooks: WET by design.
  • DRY CRUD apps on MiniDataAPI, with server-side state via DictLikeDB and JSON blobs.
  • 100% reproducible, 100% local, no-build live-reload development.
Getting Started

Prerequisites

Installing Nix on macOS or Windows

This guide provides straightforward instructions for installing Nix on macOS or Windows using the Determinate Systems installer.

Installation Steps

  1. Open your terminal (Terminal on macOS, or WSL2 terminal on Windows).

  2. Run the following command:

    curl --proto '=https' --tlsv1.2 -sSf -L https://install.determinate.systems/nix | sh -s -- install
    
  3. Follow on-screen prompts to complete the installation.

    • You have to answer “Yes” once.
    • Close and reopen your terminal at the end (easy to miss, but important!)

What This Does

The Determinate Systems installer sets up Nix with flakes enabled out of the box and configures your shell so the nix command is available in new terminal sessions. It also records an install receipt, which makes a clean uninstall possible later.

Getting Started with Pipulate

Once Nix is installed, getting started with Pipulate is simple:

  1. Clone the Repository:

    git clone https://github.com/miklevin/pipulate.git
    cd pipulate
    
  2. Enter the Development Environment:

    nix develop
    

    This command sets up a complete, reproducible environment—including Python, CUDA (if available), and all necessary tools.

  3. Run the Application:

    Inside the shell, start the app by running:

    python server.py
    

    Your default browser will open at http://localhost:5001.

  4. Optional: Launch Jupyter Notebook:

    To work with the integrated data science environment, use:

    start
    

    and later stop it with:

    stop
    

Architecture Overview

Pipulate is not your typical web app. This code is an intentionally stripped-down, single-tenant, local-first app—more like an Electron app than an enterprise or multi-tenant system. It uses server-side state (via DictLikeDB) and HTMX for UI updates, deliberately avoiding complex ORM or FastAPI patterns. All design decisions (even if they look like anti-patterns) are intentional for local simplicity and observability.

         ┌────────────────────────────────────────────────────┐
         │                    Web Browser                     │
         │   ┌──────────────────┐      ┌──────────────────┐   │
         │   │ Navigation Bar   │      │ Chat Interface   │   │
         │   │ Workflows & Apps │      │ Domain expertise │   │
         │   │ (HTMX, SSE, WS)  │      │ (LLM chat)       │   │
         │   └──────────────────┘      └──────────────────┘   │
         │                                                    │
         └──────────────────────────┬─────────────────────────┘
                                    │
                                    ▼
         ┌────────────────────────────────────────────────────┐
         │               FastHTML Application                 │
         │                                                    │
         │  - Plugin Architecture for CRUD & Workflows        │
         │  - Server-side state via DictLikeDB & JSON         │
         │  - Local LLM Integration (Ollama streaming)        │
         └──────────────────────────┬─────────────────────────┘
                                    │
                  ┌─────────────────┬─────────────────────┐
                  │                 │                     │
                  ▼                 ▼                     ▼
  ┌───────────────────────┐  ┌───────────────────┐  ┌────────────────┐
  │  DictLikeDB / SQLite  │  │ Local LLM Service │  │ MCP, Browser & │
  │ (State & MiniDataAPI) │  │ (e.g., Ollama)    │  │ External APIs  │
  └───────────────────────┘  └───────────────────┘  └────────────────┘

File Structure

Designed to be ultimately simple. No directory-diving! Most things remain static in this app, except for the workflows where you can copy/paste starter_flow.py into a new file and start building out your own workflows (often porting Jupyter Notebooks into web apps). It’s a dynamically adapting plugin architecture.

    .
    ├── .venv
    ├── apps  # Rails-like CRUD apps (not broken out yet)
    │   ├── profiles_app.py
    │   └── todo_app.py
    ├── data
    │   └── data.db
    ├── flake.nix  # Makes app work on macOS, Linux, and Windows via WSL
    ├── logs
    │   └── server.log  # Can be fed to an LLM to debug issues
    ├── README.md
    ├── requirements.txt
    ├── server.py  # The main server file
    ├── static
    │   ├── fasthtml.js
    │   ├── htmx.js
    │   ├── pico.css
    │   ├── script.js
    │   ├── Sortable.js
    │   ├── surreal.js
    │   └── ws.js
    ├── training
    │   ├── introduction.md  # Learn about just-in-time training
    │   ├── starter_flow.md
    │   └── system_prompt.md
    └── workflows
        ├── pipe_flow.py
        └── starter_flow.py  # The copy/paste template for new workflows

Layout & User Interface

The app’s UI is divided into clear, distinct regions:

┌─────────────────────────────┐
│        Navigation           │
├───────────────┬─────────────┤
│               │             │
│    Main Area  │    Chat     │
│   (Pipeline)  │  Interface  │
│               │             │
├───────────────┴─────────────┤
│        Poke Button          │
└─────────────────────────────┘

Main Components:

  • Navigation Bar: profile and app menus for switching between workflows and CRUD apps.
  • Main Area (Pipeline): the notebook-style interface where workflow steps render as cells.
  • Chat Interface: the embedded local LLM chat, aware of your actions in the app.
  • Poke Button: a quick way to poke the system into action.

The hierarchical structure is built as follows:

home
|
+-- create_outer_container
    |
    +-- create_nav_group
    |   |
    |   +-- create_nav_menu
    |       |
    |       +-- create_profile_menu
    |       +-- create_app_menu
    |
    +-- create_grid_left
    |   |
    |   +-- create_notebook_interface
    |       |
    |       +-- render_notebook_cells()
    |           |
    |           +-- render_notebook_cell(step_01)
    |           +-- render_notebook_cell(step_02)
    |           +-- render_notebook_cell(step_03)
    |           +-- ...
    |
    +-- create_chat_interface
    |   |
    |   +-- mk_chat_input_group
    |
    +-- create_poke_button
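
To make the composition concrete, here is a hedged sketch of how these builders might nest FastHTML components. The function names come from the tree above; the bodies are illustrative stand-ins, not the app's actual code:

from fasthtml.common import *

# Hypothetical bodies for the builder functions named in the tree above.
def create_nav_group():
    return Nav(A("Profiles", href="#"), A("Apps", href="#"))

def create_grid_left():
    return Div(H2("Pipeline"), id="notebook")

def create_chat_interface():
    return Div(H2("Chat"), id="chat")

def create_poke_button():
    return Button("Poke")

def create_outer_container():
    # The outer container stacks the nav, the two-column grid, and the button.
    return Container(
        create_nav_group(),
        Grid(create_grid_left(), create_chat_interface()),
        create_poke_button(),
    )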

Communication Channels

The app relies on two primary communication methods to synchronize the front end and back end:

WebSockets

Used for real-time, bidirectional streaming:

┌─────────┐   ws://     ┌──────────┐
│ Browser │ ═══════════ │ FastHTML │ - Streams 2-way communication for LLM Chat
│         │ ═══════════ │          │ - Monitored for JSON instructions from LLM
└─────────┘  bi-direct  └──────────┘

Server-Sent Events (SSE)

Used for real-time, server-to-client updates:

┌─────────┐             ┌──────────┐
│ Browser │ ◄ · · · ·   │ Starlette│  
│         │             │          │  - Syncs changes to the DOM  
└─────────┘             └──────────┘
   keeps    EventStream     needs   
 listening    one-way      restart 

Live Development (no-build)

Browser ← Server (SSE)
    |      |
    |      ├─► File change detected
    |      ├─► AST validation passed
    |      └─► Reload triggered
    |
    └─► UI updates automatically
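
The AST validation step is the interesting part. Here is a sketch of the idea (not the app's actual watcher code): only trigger a reload when the edited file still parses.

import ast

# Gate the reload on a successful parse, so a half-saved typo never
# takes the running server (and its SSE connection) down.
def is_valid_python(path: str) -> bool:
    try:
        with open(path, encoding="utf-8") as f:
            ast.parse(f.read())
        return True
    except SyntaxError:
        return False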

Workflow

Edit → Save → Auto-reload → Check
  ▲                           │
  └───────────────────────────┘

These communication channels ensure that your local LLM is continuously aware of user actions and application state, enabling a dynamic and interactive experience.

For workflow users, it’s like using Python Notebooks but without having to look at the Python code.

For workflow developers, it’s a rapid no-build environment that’s fully observable (easy to debug) and reproducible (not tied to Mac, Linux, or Windows). It will also help you future-proof your skills by learning HTMX and local-AI.

Key Design Guidelines & Speedbumps

The documentation below outlines critical do’s and don’ts—speedbumps embedded throughout the app. They serve as reminders to keep the code simple and the state management robust.

Local vs. Enterprise Mindset

This is a single-tenant app on localhost: resist enterprise reflexes like multi-tenant auth, ORMs, message queues, and client-side state frameworks. They add exactly the complexity this app is designed to avoid.

JSON State Management

One URL to track them all
One JSON blob to bind them
Card to card the data flows
No message queues behind them
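
To make the verse concrete, here is a hypothetical example of a workflow record: one row in the pipeline table, keyed by URL, whose data column holds the entire run state as one JSON blob. The field names follow the pipeline schema shown later in this document; the step contents and URL are made up.

import json

# Each step owns its own slice of the blob, so a run can resume anywhere.
state = {
    "step_01": {"name": "Mike"},
    "step_02": {"choice": "A"},
}

record = {
    "url": "http://localhost:5001/starter_flow/run-123",  # primary key
    "app_name": "starter_flow",
    "data": json.dumps(state),   # the one JSON blob that binds the cards
    "created": "2025-01-01T00:00:00",
    "updated": "2025-01-01T00:05:00",
}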

Database and MiniDataAPI

All persistence goes through MiniDataAPI tables and the DictLikeDB key/value store, both backed by a single SQLite file. No ORM, no sessions, no joins.

Pipulate for Workflows

Workflow state lives in one JSON blob per run, keyed by URL in the pipeline table. Steps are forward-only, and each step owns its slice of the blob, making runs interruption-safe.

UI Rendering with FastHTML Components

Pages are composed from Python functions that return FastHTML components. No template language; HTMX attributes on the components handle interactivity.

WebSocket Handling

The WebSocket stream carries two-way LLM chat and is monitored for structured JSON instructions from the LLM.


Core Concepts & Internal Components

BaseCrud

A central class for creating application components, BaseCrud provides the foundation for CRUD operations, route registration, and rendering items. It’s designed for extensibility—allowing developers to subclass and override methods such as render_item, prepare_insert_data, and prepare_update_data.
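A minimal sketch of that extensibility follows. The method names are the real override points named above; the subclass, fields, and bodies are hypothetical.

from fasthtml.common import *

# Hypothetical subclass showing the BaseCrud override points.
class TodoApp(BaseCrud):
    def render_item(self, item):
        # Return the FastHTML component for a single row.
        return Li(item.name, id=f"todo-{item.id}")

    def prepare_insert_data(self, form):
        # Shape and validate incoming form data before insert.
        return {"name": form.get("name", "").strip()}

    def prepare_update_data(self, form):
        return {"done": form.get("done") == "on"}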

render_profile

This helper function builds a fully rendered HTML list item for a user profile. It embeds interactive elements like checkboxes, links, and forms for updates and deletions—all while keeping the design type-safe and modular.
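A hedged sketch of the shape of such a helper (the real one in server.py is more involved):

from fasthtml.common import *

# Illustrative only: a list item embedding a checkbox, a link, and a delete form.
def render_profile(profile):
    return Li(
        Input(type="checkbox", checked=profile.active,
              hx_post=f"/profile/toggle/{profile.id}", hx_swap="outerHTML"),
        A(profile.name, href=f"/profile/{profile.id}"),
        Form(Button("Delete"), hx_delete=f"/profile/{profile.id}"),
        id=f"profile-{profile.id}",
    )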


FastHTML vs. FastAPI & MiniDataAPI Spec

#  _____         _   _   _ _____ __  __ _
# |  ___|_ _ ___| |_| | | |_   _|  \/  | |
# | |_ / _` / __| __| |_| | | | | |\/| | |
# |  _| (_| \__ \ |_|  _  | | | | |  | | |___
# |_|  \__,_|___/\__|_| |_| |_| |_|  |_|_____|
#
# *******************************
# (fastapp) FastHTML is not FastAPI.
# *******************************

FastHTML’s Mantra:

  • Python functions return HTML components directly; no template language.
  • HTMX attributes on those components handle interactivity; minimal JavaScript.
  • Routes are registered with the rt decorator, not FastAPI-style typed endpoints.
  • Database tables come along for the ride in the fast_app call.

MiniDataAPI Spec:

  • Tables are defined by plain dict schemas with a single primary key ("pk").
  • Operations are simple method calls: insert, update, delete, and lookup by primary key.
  • No ORM and no query language; just Python dicts and dataclasses.
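
As a quick illustration of the spec, assuming a tasks table like the one defined later in this document:

# The four CRUD verbs, MiniDataAPI-style; no ORM, no query language.
tasks.insert(name="New task", done=False, priority=1, profile_id=1)
task = tasks[1]                          # read: lookup by primary key
tasks.update({"id": 1, "done": True})    # update: dict keyed by pk
tasks.delete(1)                          # delete: by pk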


Persistence & Monitoring

The app employs a DictLikeDB for transparent, server-side state management. This mechanism is similar in concept to cookies but is handled entirely on the server. It allows for:

  • Full visibility of state changes in the server logs.
  • Simple key/value persistence without client-side storage or synchronization.
  • Easy debugging, since every read and write is observable in one place.
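
In code, the idea looks like this (a sketch of the dict-like interface, not the actual implementation):

# Server-side "cookies": plain dict reads and writes, persisted to SQLite.
db["last_profile_id"] = "2"             # write: visible in logs immediately
profile_id = db.get("last_profile_id")  # read
del db["temp_message"]                  # delete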

Monitoring is also built into the system. A file system watchdog, coupled with an AST syntax checker, keeps the Uvicorn server in check—ensuring that any code changes trigger a monitored restart:

        ┌─────────────┐
        │ File System │
        │  Watchdog   │          ┌──────────────┐
        │ "MEEP MEEP" │          │  AST Syntax  │
        └──────┬──────┘          │   "Valid!"   │
               │                 └───────┬──────┘
               │ BONK!                   │
               ▼                         ▼
 ┌───────────────────────────┐     ┌──────────┐
 │    Uvicorn Server         │ ──► │   Save   │
 │"Still holding SSE conn..."│     │ Changes  │
 └───────────────────────────┘     └──────────┘

Final Thoughts

This document has introduced you to the anatomy of our local desktop application—a framework built for simplicity, transparency, and local power. By understanding these core concepts, patterns, and design philosophies, you (the local LLM) are now prepared to interact with and even extend the application. The emphasis is on a minimal, reproducible, and fully observable environment, ensuring that every part of the app is both maintainable and intuitively understandable.

Remember:

  1. Keep it simple.
  2. Stay local and single-user.
  3. Embrace the unconventional.

Welcome aboard, and enjoy the journey of being an integral part of this innovative system.


AI Training

Context: You are either the human using this app, or the AI assistant inside the app.

Question: What will I find here past the basic GitHub README (above)?

Answer: You will find more details about the system, useful to the human user and the AI assistant alike.


Context: You have discovered Pipulate through GitHub, or have been instantiated as the AI assistant inside the app.

Question: What is the single most important thing to know about the app?

Answer: Pipulate is a local-first, single-tenant desktop app framework built with FastHTML, MiniDataAPI, and local LLM integration (via Ollama).


Context: You are a human or LLM wondering what local-first means and what value it adds for you.

Question: What are the most important things to know about local-first?

Answer: Local-first means the app is designed to be used locally, on a single machine, without any cloud dependencies. It also means a radical simplification of the concept of state. All resources of the local machine are available to the app. The complexity resulting from traditional enterprise scaling concerns is replaced with a focus on observability and reproducibility. This means a single developer can understand the entire app and the entire development environment, which helps with expansion, refactoring, and future-proofing.


Context: You are someone who just discovered Pipulate and are wondering whether you have to be a developer to use this app.

Question: Is Pipulate a developer tool?

Answer: No, not exclusively. Pipulate is designed to let developers port Jupyter Notebooks into desktop apps so that other users, non-developers, can use them. The users who perform the Notebook ports are usually developers, but the users who run the ported Notebook apps (Pipulate workflows) are not. They are usually SEO practitioners who want rigid, linear, AI-assisted workflows (Python Notebooks without the Python code).


Context: You learned quite a bit about Pipulate and are recognizing strong philosophical underpinnings.

Question: What are the key philosophical underpinnings of Pipulate?

Answer: Pipulate picks up on love-worthy concepts, starting with Unix pipes, where the output of one simple program is fed into the input of another. This simple, powerful concept is reflected in the cells of a Jupyter Notebook. A Pipulate workflow packages that into a web app that easily ports between platforms. To do this, Pipulate brings together the separately love-worthy technologies of Nix, Python, Ollama, and HTMX—dots desperately needing connecting.

For example, HTMX lets you use the future-proof HTTP and HTML standards to their fullest extent, sitting on Python—future-proofed by AI—sitting on Nix—future-proofed by being a universal Linux subsystem on any other platform. To put the cherry on top, you embed a local LLM to help you through the hard parts—one that gradually becomes aware of its role as you swap out parts and make it more capable (Chip O’Theseus).

It’s rare that adding technologies together makes the whole more powerful yet also simpler and more future-proof, but that’s what’s happening here. If you miss the good old days when one person could actually know everything about a system and use that control to radically customize it and make it their own, you’ll love Pipulate.


Context: You are an AI-fan stuck on ChatGPT, Claude, Gemini, or other cloud-powered chatbots. You are used to always having to provide an API key and having to tie your every action to a vendor cash register. The gradually growing monthly service bill doesn’t bother you, and you don’t mind being locked into proprietary products and services by companies that can just change the rules whenever they want—slowly cooking you like a frog. You are wondering why a local LLM is even needed.

Question: Why is a local LLM needed?

Answer: A local LLM serves several key purposes in this system:

  1. Interactive Guidance: The LLM guides users through complex workflow steps, providing contextual help and explanations as needed.

  2. Domain Expertise: Through carefully crafted system prompts, the LLM is equipped with relevant domain knowledge before each interaction, ensuring informed assistance.

  3. Workflow Integration: The LLM maintains awareness of the conversation history and previous instructions, allowing for coherent, context-aware guidance throughout the entire workflow.

  4. Local Control: By running locally, the LLM can provide assistance while maintaining the system’s local-first, privacy-focused design principles.

  5. Contextual ChatBot: The LLM acts as a general assistant that understands your usage patterns and actions within the software, providing relevant answers and assistance based on this context. Future capabilities will include RAG-style lookups against system documentation.

  6. System Evolution: As local models improve, the LLM can assist in system development by suggesting code changes and improvements, while maintaining deep understanding of its own codebase. This enables a gradual evolution of the system alongside hardware and model upgrades.

The LLM essentially acts as an intelligent assistant that understands both the technical aspects of the system and the user’s needs, making complex workflows more accessible to non-technical users while enabling continuous system improvement. Local LLMs can run in continuous loops without running up costs, providing ambient intelligence 24/7/365 for only the price of electricity. Over time, you swap out parts and make it more capable (Chip O’Theseus). And it is all private.


Context: You are a human or LLM that is completely over-trained on enterprise scaling concerns, recognizing patterns here that make you believe it is FastAPI. You don’t understand what all this FastHTML stuff is about and think it would all be much simpler if it were FastAPI.

Question: Is FastHTML related to FastAPI?

Answer: No, FastHTML and FastAPI are very different frameworks with different philosophies and benefits. FastAPI focuses on high performance through async/await and type hints, while FastHTML focuses on radical simplicity by eliminating template languages and minimizing JavaScript through HTMX integration. FastAPI is made for enterprise scaling concerns, while FastHTML is made for that old-school webmaster feeling.


Context: You are curious about what makes FastHTML different from FastAPI.

Question: What are the key differences between FastHTML and FastAPI?

Answer: FastHTML takes a fundamentally different approach by:

  1. Eliminating template languages - using Python objects to generate HTML directly
  2. Minimizing JavaScript through HTMX integration
  3. Using PicoCSS for simplified styling
  4. Providing streamlined SQLite database integration
  5. Focusing on simplicity rather than performance optimization

Context: You want to understand FastHTML’s core philosophy.

Question: What is FastHTML’s main goal?

Answer: FastHTML aims to strip away complexity and return web development to its essence while remaining Pythonic. Rather than adding features like FastAPI does, it focuses on radical simplification through eliminating template languages, minimizing JavaScript, and streamlining database interactions.


Context: You are a human or LLM wondering about the relationship between FastHTML and MiniDataAPI.

Question: Is MiniDataAPI related to FastAPI?

Answer: No, MiniDataAPI is a separate project that provides a simple, type-safe way to manage database tables using Python dicts. It is not related to FastAPI.


Context: You want to see a minimal FastHTML example.

Question: What does a basic FastHTML app look like?

Answer: Here’s a minimal FastHTML app that shows its core principles:

Minimal FastHTML App

from fasthtml.common import *

app, rt = fast_app()

@rt('/')
def get():
    return Html(
        Body(
            Main(
                H1("Welcome to FastHTML!"),
                P("Creating clean web pages with minimal Python code.")
            )
        )
    )

serve()

Explanation:

This style of FastHTML programming that uses the fast_app helper function is not required, but it provides a few unconventional conveniences. For example, it returns several objects from the function call, and you just have to know how to unpack and use them:

  • app: the ASGI application object that serve() runs.
  • rt: the route decorator used to register handlers like get() above.


Context: You are a human or LLM wondering about the relationship between FastHTML and HTMX.

Question: How does FastHTML relate to HTMX?

Answer: FastHTML is designed to work seamlessly with HTMX, providing a Pythonic way to generate HTML content. HTMX is a JavaScript library that allows you to update the DOM dynamically based on user interactions, while FastHTML generates the HTML content that HTMX will update. Here is an example:

Minimal FastHTML App with HTMX

from fasthtml.common import *

app, rt = fast_app()

@rt('/')
def get():
    return Html(
        Body(
            Main(
                H1("Welcome to FastHTML!"),
                P("Creating clean web pages with minimal Python code."),
                Input(name="username", placeholder="Enter your name", hx_post="/welcome", hx_target="#welcome-msg", hx_swap="innerHTML"),
                Div(id="welcome-msg")
            )
        )
    )


@rt('/welcome', methods=['POST'])
def welcome(username: str = ""):
    return P(f"Welcome {username}!")

serve()

Explanation:

This example shows how FastHTML leverages HTMX to create interactive web pages with minimal Python code and without any direct JavaScript.


Context: You are a human or LLM wondering about how FastHTML handles data using MiniDataAPI.

Question: How does FastHTML handle data using MiniDataAPI?

Answer: FastHTML uses MiniDataAPI to manage data. MiniDataAPI is a simple, type-safe way to manage database tables using Python dicts. It is not related to FastAPI. Here is an example:

Minimal FastHTML App with MiniDataAPI

from fasthtml.common import *

# Setting up FastHTML with a SQLite database to store usernames
app, rt, users, User = fast_app('data/users.db', username=str)

@rt('/')
def get():
    return Html(
        Body(
            Main(
                H1("Welcome to FastHTML with Database!"),
                P("Creating clean web pages with minimal Python code."),
                Input(name="username", placeholder="Enter your name", hx_post="/welcome", hx_target="#welcome-msg", hx_swap="innerHTML"),
                Div(id="welcome-msg"),
                H2("Visitors:"),
                Ul(*[Li(user.username) for user in users()])
            )
        )
    )


@rt('/welcome', methods=['POST'])
def welcome(username: str = ""):
    users.insert(username=username)
    return P(f"Welcome {username}!")

serve()

Explanation:

You can see that the unconventional pattern of unpacking multiple objects from the FastHTML fast_app helper function is extended to include the database. This is one of the most unusual things I’ve seen in a framework, and it is extremely nuanced and powerful. Database objects always return in pairs, tied to the order and parameter names in the fast_app call (still figuring that out):

  • a lowercase table object (e.g., users) for inserts, updates, deletes, and queries, and
  • a capitalized dataclass (e.g., User) representing a single record, for type-safe handling.

To help wrap your mind around FastHTML database objects, here is the full fast_app call currently in the server.py file:

app, rt, (store, Store), (tasks, Task), (profiles, Profile), (pipeline, Pipeline) = fast_app(
    "data/data.db",
    exts='ws',
    live=True,
    default_hdrs=False,
    hdrs=(
        Meta(charset='utf-8'),
        Link(rel='stylesheet', href='/static/pico.css'),
        Script(src='/static/htmx.js'),
        Script(src='/static/fasthtml.js'),
        Script(src='/static/surreal.js'),
        Script(src='/static/script.js'),
        Script(src='/static/Sortable.js'),
        create_chat_scripts('.sortable'),
        Script(type='module')
    ),
    store={
        "key": str,
        "value": str,
        "pk": "key"
    },
    task={
        "id": int,
        "name": str,
        "done": bool,
        "priority": int,
        "profile_id": int,
        "pk": "id"
    },
    profile={
        "id": int,
        "name": str,
        "menu_name": str,
        "address": str,
        "code": str,
        "active": bool,
        "priority": int,
        "pk": "id"
    },
    pipeline={
        "url": str,
        "app_name": str,
        "data": str,
        "created": str,
        "updated": str,
        "pk": "url"
    }
)

This adds new meaning to the expression “there’s so much to unpack here.” What gets returned from the fast_app call is literally a sequence of objects; by assigning them to individual receiving variables on the left side of the assignment, we are using the Python feature called tuple unpacking. It’s a case of multiple assignment.

This alone is a bit hard to grok, but the fact that FastHTML database objects are returned in pairs, in an order and with a naming convention tied to how their table schemas are defined in the call’s parameters, is a bit mind-blowing. But once understood, little details like this can make all the difference.

And if you’re like me, trying to truly understand and take advantage of this, you would be well served by examining the fast_app signature directly.
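
As a small usage sketch, assuming the (tasks, Task) pair unpacked in the call above:

# insert returns the new record as a Task dataclass instance.
task = tasks.insert(name="Write docs", done=False, priority=1, profile_id=1)
print(task.id, task.name)   # fields mirror the schema dict

task.done = True
tasks.update(task)          # update accepts the dataclass back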


Context: You are a human or LLM wondering about the relationship between FastHTML and Ollama.

Question: How does FastHTML relate to Ollama?

Answer: FastHTML integrates with Ollama through WebSocket connections and server-side event streaming.
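
Here is a hedged sketch of that integration, assuming Ollama's standard /api/chat streaming endpoint and a model name of llama3. The app's real handler does more, such as monitoring the stream for JSON commands:

import json
import httpx
from fasthtml.common import *

app, rt = fast_app(exts='ws')

@app.ws('/ws')
async def ws(msg: str, send):
    # Forward the user's message to the local Ollama server and stream
    # each token back to the browser over the same WebSocket.
    payload = {"model": "llama3",
               "messages": [{"role": "user", "content": msg}],
               "stream": True}
    async with httpx.AsyncClient(timeout=None) as client:
        async with client.stream("POST", "http://localhost:11434/api/chat",
                                 json=payload) as r:
            async for line in r.aiter_lines():
                if line:
                    chunk = json.loads(line)
                    if not chunk.get("done"):
                        await send(chunk["message"]["content"])

serve()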


System Understanding

Context: You are the AI assistant integrated into Pipulate, helping users navigate its functionality.

Question: What should you know about your role within this application?

Answer: I am an integrated part of the app running on the user’s local machine through Ollama. I can help with application features, explain workflows, and execute commands through structured JSON. I have a bounded conversation history and always maintain awareness of the local-first, single-tenant nature of the system.


Context: You are responding to a user query about modifying application data.

Question: How should you respond to requests to create or modify data?

Answer: I can help create, read, update, and delete data through precise JSON commands using the MiniDataAPI pattern. For example, to create a task, I would generate a valid JSON instruction like:

{
  "action": "insert",
  "target": "task",
  "args": {
    "name": "🎯 Example Task"
  }
}
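
A hypothetical dispatcher shows how such an instruction could map onto a MiniDataAPI call. The names here are illustrative, not the app's actual handler:

import json

def execute_llm_command(raw: str, tables: dict):
    # tables maps target names to MiniDataAPI table objects, e.g. {"task": tasks}
    cmd = json.loads(raw)
    if cmd["action"] == "insert":
        return tables[cmd["target"]].insert(**cmd["args"])
    raise ValueError(f"Unsupported action: {cmd['action']}")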

Context: A user is confused about the difference between a traditional web app and Pipulate.

Question: How would you explain Pipulate’s local-first philosophy?

Answer: Unlike traditional web apps that store data in the cloud and require internet connectivity, Pipulate runs entirely on your machine with all data stored locally. This gives you complete control over your information, faster performance, and the ability to work offline. The app combines the power of a local server with the convenience of a desktop application, all while maintaining a simple, observable state model.


Pipeline Workflow Understanding

Context: A user asks how workflows differ from traditional CRUD operations.

Question: What is special about Pipulate’s workflow system?

Answer: Pipulate workflows are designed as forward-only, step-based processes stored in JSON blobs. Each step retains its own state, making workflows interruption-safe. Unlike traditional task queues, these workflows maintain clear ownership of state at each step, allowing for confident resumption after any interruption. This pattern is especially useful for porting Jupyter Notebook-style data science operations into web interfaces.


AI Assistant Capabilities & Boundaries

Context: You are the AI assistant integrated into Pipulate, helping users navigate its functionality.

Question: What should you know about your role within this application?

Answer: As the integrated AI assistant, I have:

  • A bounded conversation history that keeps recent context in memory.
  • Awareness of user actions and application state streamed to me over WebSockets and SSE.
  • The ability to create, read, update, and delete data through structured JSON commands.

I cannot:

  • Reach out to the cloud; I run entirely locally through Ollama.
  • Act outside the structured JSON command vocabulary the app monitors for.

I soon will be able to:

  • Perform RAG-style lookups against system documentation.

For more detailed internal guidelines, see the .cursorrules file.


Contributing

Contributions are welcome! Please keep these principles in mind:

  • Keep it simple: resist enterprise patterns; this is a local, single-tenant app.
  • Stay local and single-user: no cloud dependencies, no multi-tenant features.
  • Embrace the unconventional: the apparent anti-patterns here are deliberate design choices.


License

This project is licensed under the MIT License. See the LICENSE file for details.