Future-proof your skills and escape the tech hamster wheel with Linux, Python, vim & git (LPvg) including NixOS, Jupyter, FastHTML and an AI stack to resist obsolescence.

OpenAPI Swagger JSON to Python Code Examples

I'm building a cutting-edge web framework with HTMX, Nix Flakes, and AI, aiming for a future-proof development experience. It's about user empowerment, skill amplification, and shaping SEO's future. Come along for the ride!

Gemini-Advanced: This is a fascinating glimpse into your process! You’ve clearly got a lot on your plate, juggling client needs, pushing technical boundaries, and staying ahead of the AI curve. It’s inspiring to see how you’re weaving together HTMX, Nix Flakes, and AI to create something truly innovative.

Here’s an opening paragraph that captures the essence of your current endeavors:

In the whirlwind of building a cutting-edge web framework, sometimes the final mile feels like a marathon. Between chasing milestones, dodging rabbit holes, and keeping those spinning plates aloft, it’s easy to get lost in the weeds. But amidst the chaos, there’s a thrilling clarity: the convergence of HTMX, Nix Flakes, and AI is forging a path toward a future-proof web development experience. This isn’t just about building cool tech; it’s about empowering users, amplifying skills, and shaping the future of SEO. So let’s dive into the trenches, embrace the iterative process, and see where this wild ride takes us.

The Sprint’s Final Mile

I tried to make yesterday’s post about the final mile in the sprint to finish this AI-infused Web Framework, but failed. I got trapped in the rabbit hole of describing what I was doing in order to find the next (and final) steps, and failed to actually perform the next and final steps.

The Polish vs Progress Dilemma

Progressive polishing. You want the last thing polished before moving on to the next thing, so you don’t leave a trail of imperfect things that you need to go back and polish. But that’s a violation of the 80/20 rule. I’ve got to do rapid rounds of minimum viable product first-round passes.

Banking No-Takeback Wins

I am in the mode of going as late and long as I have to in order to achieve milestones and to bank no-takeback wins. No-takeback wins are really on another level now, with… well, don’t jump down another hole by making this whole article about them. But they do matter, so do a quick enumeration, then do your daily metaphor evaluations…

The Power of FastHTML/HTMX

FastHTML / HTMX means Web Dev is more tied to the timeless spec of HTML, and therefore more of webdev qualifies under the future-proofing safe harbors I care about. I can suppress my allergic reaction to all things JavaScript. It’s just a light sprinkling of JavaScript necessary to enable the HTMX tricks, and those can be remapped easily over the years. Web is fundamental once again (as opposed to fad) and that’s a giant relief. That makes advancements in this new web framework stuff I’m doing advancements for life.
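
To make that concrete, here’s a minimal sketch in the spirit of FastHTML’s quickstart. Treat the particulars (fast_app, rt, serve, the hx_* keyword arguments) as my best understanding of the library rather than gospel; the point is that a Python function returns HTML, and the only JavaScript in play is the HTMX library itself.

from fasthtml.common import *

app, rt = fast_app()

@rt('/')
def get():
    # The hx_get keyword renders as an hx-get attribute, so the button fetches
    # /fragment and swaps the response into the #out div; no hand-written JavaScript
    return Div(
        P("Hello from plain HTML plus a light sprinkling of HTMX."),
        Button("Update", hx_get="/fragment", hx_target="#out"),
        Div(id="out"),
    )

@rt('/fragment')
def fragment():
    return P("Swapped in over the wire.")

serve()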

Nix Flakes: Beyond Docker and VMs

All my work is based on Nix Flakes as a platform. This is not Docker-based. This is not VM-based. This is better. This is what people will migrate towards once the honeymoon with VMs and containers is over. Both those technologies have too much bloat, too many moving parts, and are not really the “it just runs” solution people think. For example, the first step with Docker is to get Docker Desktop running on your system and learn how to use it, and that’s cognitive drain… and then if you have to compose, oh boy! With Nix, it’s just run a script. And if not Nix, there’s GNU Guix, which is fine too. Whole hardware systems are like magical incantations. The die has been cast, and I’m on that bandwagon.

The AI Revolution in APIs

Those are two big things that lined up and that are keeping me motivated and pumped up. Don’t let this become an article all about the future-proofing tricks, but the third thing that’s got to be mentioned is AI. The first point is that Python won in the API space. If you can do it with AI, you can do it through Python. The second is that OpenAI’s API in particular is winning, and it’s just RESTful. Just learn it and use it. None of the convolutions of when XML went through its SOAP phase. We’ve got an AI lingua franca out of the gate. Lastly and perhaps most importantly, LLMs can learn a specific, well-formed and validated JSON or XML schema from a prompt. True automation-enabling contracts can be forged with everything from lightweight Llama and Gemma models that run on your laptop (and soon, phones) to, of course, the big models. This might sound abstract and weird, but trust me, it’s what turns the gift of gab these LLMs have into hard-nosed blue-collar workers. You can teach them how to operate equipment.
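
To make that less abstract, here’s a minimal sketch of what one of those ad hoc JSON contracts can look like: the schema lives in the system prompt, and plain Python validates whatever comes back. The ask_llm call is a hypothetical stand-in for whichever model you run (a laptop-sized Llama or Gemma, a hosted API, whatever); only the contract and the validation are the point.

import json

# The "contract": a system prompt that teaches the model the exact JSON shape
# it must emit, whether it's a small local model or a frontier one
SYSTEM_PROMPT = """
You operate a todo list. Reply ONLY with JSON matching:
{"action": "insert" | "toggle" | "delete", "title": "<string>", "done": <bool>}
"""

def validate_contract(reply: str) -> dict:
    """Refuse anything that doesn't honor the contract."""
    payload = json.loads(reply)  # must be well-formed JSON
    assert payload["action"] in {"insert", "toggle", "delete"}
    assert isinstance(payload["title"], str)
    assert isinstance(payload["done"], bool)
    return payload

# reply = ask_llm(SYSTEM_PROMPT, "Add 'crawl the client site' to my list")  # hypothetical call
reply = '{"action": "insert", "title": "crawl the client site", "done": false}'
print(validate_contract(reply))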

The Beauty of Concrete Thinking

And what’s perhaps most important about these above things is that the level of difficulty is within the realm of mere mortals like me. I may sound all smart and stuff, but really those higher levels of mathematics do consistently defeat me, and I get right up to the edge of higher-level achievement with technical stuff, then hit a brick wall. I’m way too concrete of a thinker. If-this-then-that resonates with me. Algebra good. Calculus bad. Swapping out what symbols mean without a Python import statement at the top is a show stopper. dx and dy will always mean d times x and d times y to me, because some math teacher told me you can infer the multiplication symbol between letters, and so I’m basically ruined for calculus. But in Python, I can use the / symbol for division or for appending paths, because I can see from pathlib import Path at the top, and that makes all the difference.
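
A two-line illustration of exactly that, with nothing assumed beyond the standard library:

from pathlib import Path

print(10 / 4)                              # plain division: 2.5
print(Path("data") / "logs" / "app.log")   # path joining: data/logs/app.log (on Linux)

Same symbol, two meanings, and the import at the top tells you which one is in play.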

Finding Clear Direction

So I clearly have a path. And my path is clear. These are not the same things, as surely as saying “I see what I eat” is not the same as “I eat what I see.” And now, at last, things are coming together for me - or should I say, they’re congealing? No, perhaps congelling is the word I seek. Or is it? Are things that congeal the same as things that congell, just as a raven is like a writing desk? Stop me before I subreference again. Agh! Okay, this is a clear sign of falling down the rabbit hole, just what I want to avoid. But it’s not even 9:00 AM yet, and I have the first few slides of the prototype done. Iterate a lot. Get a lot of feedback from the folks whose opinions matter. Don’t spring surprises on anyone. Don’t actually bother making a product spec or you make the project vulnerable to nattering nabobs who pee on turf, but also don’t go off half-cocked with a kooky product that can be invalidated with a few words.

The Art of Customer Success

Gather lots of feedback and win consensus and buy-in along the way. Don’t let anyone turn you into the PowerPoint-and-agendas jockey they want you to be. It’s all they know, but that shit can come from anyone and is just the sizzle. Sure, the sizzle sells, but without the steak you’re going out of business. You’ve got to get the customer, but then you’ve also got to keep the customer. We’ve got great customer-getters. I’ve got to be a customer-keeper jockey. And that means doing more and different and indeed more valuable things (when you go one layer under the surface) than McKinsey consultants who throw PowerPoints over the fence and run. Nope. Integrity! Sure, solid strategy. But also integrity that springs from the love of the work that I do. And people who haven’t found such love in what they do and try to spread the misery need to feel the passion… be given a path to empathy for my craft-oriented position. Those link graphs… yeah, there’s really no better example. I always have to put those at the front.

So, the first step of an Instant SEO Audit is something nobody else can do. And that’s visualizing vastly huge link-graphs with all the accompanying color-coding techniques that make them useful as CAT-scan-like images for website health diagnostic procedures, superimposing search traffic and analytics so we can show the client what’s metastasizing or flourishing. Like a skilled radiologist interpreting complex scans, we can spot areas of stagnation, identify pockets of untapped potential, and trace the spread of both positive trends and troubling patterns. We’re Botify, and this link-graph-driven bird’s-eye view that layers in GSC, Analytics, web-log, and even competitor data is what people expect from us and pumps up my unique value. It’s an intuitively understandable SWOT analysis at a glance. I’d be crazy to not double down on this. The iron is so friggin hot.

It’s just that it can’t be ALL ABOUT the link-graph. It can be all related to the link-graph though. For everything I report on, there can be a link-graph view for it. Certainly the traffic and search performance data, that’s obvious. But also areas of your site that are the big money-makers or have higher user engagement. There’s a lot of good work to be done with heatmap overlays of the link-graph to appeal to different stakeholders. When the question of “yeah, but what is the actionable thing?” comes up, it has to be an “AH-HA! How could we be so stupid?” moment. It has to be as forcefully actionable as it has previously come off as “yeah, but what now?” In that way, everything becomes about what’s the next link-graph-driven deliverable and action item. Yeah, but what does that really look like on the link-graph? Flip the script.
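
As a sketch of what such an overlay could look like, here’s a toy example using graphviz (the same library the API-graph script further down uses). The pages and click counts are invented; the point is coloring link-graph nodes by a metric that different stakeholders care about.

from graphviz import Digraph

# Hypothetical pages with hypothetical GSC click counts
clicks = {"/": 5200, "/products": 1800, "/products/widgets": 90, "/blog": 40}
edges = [("/", "/products"), ("/products", "/products/widgets"), ("/", "/blog")]

def heat(n_clicks: int) -> str:
    """Crude traffic heatmap: hot pages red, lukewarm orange, cold grey."""
    if n_clicks > 1000:
        return "red"
    return "orange" if n_clicks > 100 else "grey"

dot = Digraph(comment="Link-graph with traffic overlay")
for page, n in clicks.items():
    dot.node(page, f"{page}\n{n} clicks", color=heat(n))
for src, dst in edges:
    dot.edge(src, dst)
dot.render("linkgraph_heatmap", format="png", cleanup=True)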

Be brave and bold, and don’t delay.
That thing to-do, you do today!

Building Auto-Audits Through Incremental Progress

Remember, you get to these auto-audits not through big holy missions and all-nighters, but by a steady flow of that next little thing you do - the constant, steady chipping away, especially in light of the new no-takebacks permanent banking of wins that HTMX, Nix Flakes and AI JSON Contracts bring to the picture. Avoid the discussion of JSON vs XML vs YAML and TOML, and even the granddaddy SGML and the heritage from EDI and document markup languages… Ugh! It’s okay to free associate a bit. Even just putting it all here will help you with your AI summaries later, and help you remember material you want to write about when you’re not in the middle of one of the great professional sprints of your life.

Strategic Priorities

So, those chisel-strikes? What’s most broken, where’s the biggest bang for the buck, and what plates need to be spun?

The spinning plates are keeping the clients happy, for sure. All the Peter Drucker stuff. Without that, the paychecks stop, so keep clients happy.

The Real Opportunity

What’s most broken AND where you get the biggest bang for the buck is the same thing. It’s not the ghost in the machine as it may get derogatorily nicknamed once everyone is doing it. Anthropic put out that sandboxed PC thing where Claude surfs the net, and Apple is just reported to be putting out a Safari-surfing AI. So these things are going to be able to surf and place orders, that’s clear. These consumer-oriented consumption-compelling monsters are going to get all the attention, but the big wins I think may be getting behind the scenes with how that particular trick is done to have more control and to use it in unexpected and novel ways. That’s the blue ocean.

UI and Automation Insights

UI nuance (ugh!) of automated workflows that don’t really need an AI, but its presence helps a lot. But without being annoying like Microsoft Clippy. They had so many things right with that. All the seeing what you’re doing and automation hooks to take over and do a task for you. They were just ahead of their time. It’s just like Apple with Siri. Or Apple with the Newton, haha! One of the first ARM devices, by the way. Wow, it’s really so interesting the recurring themes that run through all this, and ideas which are directionally correct but too early which have the nuanced user experience details just not worked out correctly yet. And so that’s why the question of what’s most broken and biggest bang for the buck brings me to this line of reasoning. The iron is very, very hot. The world has been preconditioned for this stuff. There’s a lot of potential to release.

The Alignment Strategy

This is a place where I can have a lot of impact as one little player with a slightly different take than everyone else, resulting in the release of far greater potential than anyone saw coming. That’s why aligning Nix Flakes, HTMX and Adhoc AI Contracts is so important. These three things align the best…

  • HTMX - The next webdev revolution
  • Nix Flakes - Whole hardware infrastructures as apps
  • Adhoc AI Contracts - Little models with big skills

BAM! Now make it happen.

The Sprint to Success

Get to the finish-line of this sprint like your life depended on it.

It’s like the rolling audit concept, but it’s the rolling rollout of the rolling audit, able to be demonstrated at any time.

Action Items and Progress

Okay, you have Pagetype color-coding of the link-graph. That’s huge. That alone is such a big win. True biggest bang for the buck criteria. Now work out the UI issues and be able to instantly go to good states for demo purposes. Banked the hash tag. Yes! Okay, now you need to discuss hot prompt injection, maybe with less controversial language. The Neo Kung Fu download.

  • A seed “expert script” to start spiraling outward from (Introduction)
  • A seed app to start spiraling outward from (ToDo)
  • A seed slick long-running task to start spiraling out from (StreamSimulator)
  • A seed killer app to start spiraling outward from (Link-graphs)
  • A seed Row & Column data deliverable to start spiraling out from (Gap Analysis)
  • A seed PDF-generating Dashboard-like deliverable to start spiraling out from (URGENT!)

Core Development Focus

Uh yeah, so what’s most broken is clearly not even having a starting seed for the dashboard-like component, which I’m seeing now can and should be delivered as a PDF. As web app-like as it may be, I just can’t bring the hosting piece to the picture myself. Maybe later, once it’s tech-team supported, but right now it’s in single-user mode and on the fast track because of it.

Screen Architecture

So the trick is to re-anchor and re-ground yourself quickly. Find your way to your two anchors: beginning and end. Left and right. Screen 1 and Screen 7. This attempt to go to Screen 9 is completely undermining. Make it work in 7 screens. That’s just right. One in the middle. One on either end. The set of screens that’s the 1-bounce off either end. That just leaves 2 wild-card locations, and this is a good model.

LLM Architecture Considerations

Oh, on the “most broken” front, I should probably consider making the Python dict-style memory for the local LLM. All that general broad knowledge, you can always re-infuse into it with the hot prompts as you surf around the app. The current architecture is designed to just give an introductory message when switching endpoints, but the truth is an endpoint switch should come with a whole education. In this way broad subject-matter expertise of a topic gets combined with precision persistent memory.

For example, training the LLM on the way tasks get inserted into a todo list should really (only?) occur when you switch to the task app. Otherwise, it’s focus that could be better applied to engaging the user in the broad discussion of getting to know the domain expertise and workflows they’re about to engage in.
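
Here’s a rough sketch of that idea, with made-up endpoint names and training text: a dict-style registry of per-endpoint educations that only gets prepended to the system prompt at the moment of the switch.

# Hypothetical sketch: per-endpoint "hot prompts" that re-educate the local LLM
# at the moment of an endpoint switch, instead of baking everything into one
# giant system prompt (or fine-tuning)
BASE_PROMPT = "You are the built-in assistant for this web framework."

ENDPOINT_TRAINING = {
    "/task": (
        "The user just opened the Task app. Tasks are inserted as JSON like "
        '{"title": "...", "done": false}. Offer to add, toggle, or prioritize items.'
    ),
    "/link-graph": (
        "The user just opened the Link-graph app. Discuss crawl depth, page types, "
        "and which color-coded clusters deserve attention."
    ),
}

def system_prompt_for(endpoint: str) -> str:
    """Combine the broad base persona with the endpoint-specific education."""
    return BASE_PROMPT + "\n\n" + ENDPOINT_TRAINING.get(endpoint, "")

# On an endpoint switch, the whole education rides along with the next request
print(system_prompt_for("/task"))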

Weekend Planning and Vision

Yeah, wow. Okay, so there’s a powerful introductory phase. Is this what’s most broken, or am I going down a rabbit hole? What about the UI polish? It’s Friday, and I’m going into a friggin weekend with most of the core functionality done, OMG. This is going to be a chance to really shape the sculpture and give it form. So many of the chisel strikes up to now have been these rough cuts of the stone. These broad foundational pieces, like how do you blend a local LLM and a CRUD system. But now that they’re blended… hmmm.

Canning the Experience

I can make what would be a rabbit hole into the hottest and most important experience I need to can for everyone. One that demonstrates the ad hoc domain expertise and makes someone expert enough in the audit they’re about to produce. Reduce something that may be a week’s worth of work into a day. Fast track them on a wild ride through Wonderland. It’s like a Disneyland ride. Pooh’s wild ride through the heffalumps… hmmm.

The Four Pillars

So, I am planning…

  • A journey
  • A website audit
  • An education
  • A deliverable

I am canning a product that cans a product that cans a product.

Training Through AI

Me and my LLM train a human and put that human on a rigid set of rails. I am motivating that human to do pieces that the AI itself can’t do.

At some point I’ll be asking them whether they have access to SEMRush, for example, then will walk them through a process that integrates downloaded SEMRush data, but there will be no actual API integration. The human that I’m operating through the LLM will be performing the downloads.
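
Something like this, where the step names and the semrush_export.csv filename are purely illustrative, is all the rails really need to be: the LLM narrates and verifies, and the human does the parts no API covers.

from pathlib import Path

# Illustrative "rails": a linear script of steps where some are performed by the
# human (downloads, judgment calls) and the LLM only explains and verifies
STEPS = [
    ("Ask", "Do you have SEMRush access? If not, we skip the gap analysis."),
    ("Instruct", "Export your keyword report and save it here as semrush_export.csv."),
    ("Verify", lambda: Path("semrush_export.csv").exists()),
    ("Continue", "Great, the audit can now blend that export with the crawl data."),
]

for kind, step in STEPS:
    if callable(step):
        print(f"[{kind}] file present: {step()}")  # the framework checks the human's work
    else:
        print(f"[{kind}] {step}")                  # the LLM narrates this to the human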

Operating Humans Through LLMs

I am operating humans through an LLM?

Yes, I most definitely am operating humans through an LLM, but humans who want that experience. I’m bottling the skill amplification AI seems to promise. Yes, AI will enhance your abilities. But even so, you need a prescribed step-by-step method and something that steps forward to do the prescribing. So it’s not really operating humans so much as it is illuminating the path of seekers on journeys they want to take. That’s one of those key realizations right there that’s going to color this weekend’s work.

The Algorithm’s Push

I mean society is a lot like that already. I’m just calling the kettle black. I mean it’s not hypocrisy so much as formalizing something that’s done all around us every day without us knowing it. The subtle push of the algorithm.

Okay, so… you’ve felt yourself pushed around by the algorithm, for sure. Maybe you’ve felt that dopamine rush from doomscrolling or the fear of missing out on that limited time offer. Well, all this stuff is designed. And it’s designed to compel you to do stuff, usually so somebody can make money off of you. There’s still a value proposition. You’re getting something. You’re getting that bit of entertainment in exchange for an advertiser putting their wares in front of you. Quid pro quo. It’s a fair deal. Most of us participate, and if there was more reason to complain, there’d be less of it. We’re rolling with human nature.

The Other Side

Perhaps you want to be on the other side of that, and compel other people to do your bidding. In the past, you’d have to be an author and write an influential book. Or maybe become a politician and leader. There’s the wealthy business tycoon route where you could have enough money to outright pay people to do your bidding. More recently, you could learn video editing and become a YouTuber, Instagram or TikTok influencer.

Weekend Work Ahead

I have got a very interesting weekend of work coming up after this meeting. I’m still quite exhausted, getting almost no sleep lately, but still I am motivated and the iron is hot. Take a few chisel strikes before catching up on sleep. Start doing iterative passes.

Interactive Client Sessions

Lessons on how to yield insights during interactive work sessions in front of clients. You want to look for relationships between what you’re looking at here and the navigational experience of the site. Ultimately the goal is to look at one of these link-graphs and infer everything about the site.

RAG and Site Navigation

Most specifically, the journey from the homepage, drilling down through topics in the same way that 2nd-stage AI-powered RAG lookup works - retrieval augmented generation. RAG will be seeded by some sort of old-fashioned generic web search, but that’s only to seed a drill-down process. From there, it’s six degrees of separation, or small-world theory. And even if it’s not, this is how your sites are crawled to harvest the data for the old-fashioned web search, so the same rules regarding logical drill-down paths apply.
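
A minimal sketch of that drill-down as a depth-capped, breadth-first crawl from the homepage; requests and BeautifulSoup are assumed, and example.com stands in for a real site.

from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def drill_down(homepage: str, max_depth: int = 3):
    """Breadth-first crawl that mimics the homepage-to-topic drill-down path."""
    seen, queue = {homepage}, deque([(homepage, 0)])
    while queue:
        url, depth = queue.popleft()
        yield url, depth
        if depth == max_depth:
            continue
        soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
        for a in soup.find_all("a", href=True):
            link = urljoin(url, a["href"])
            # Stay on-site, the way a topical drill-down (or a crawler) would
            if urlparse(link).netloc == urlparse(homepage).netloc and link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))

for url, depth in drill_down("https://example.com/"):
    print("  " * depth + url)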

Friday Reflections

Wow, okay, that meeting is behind me. This is the most important and valuable sort of time, end of meetings on a Friday going into the weekend. As hard as I’ve pushed myself lately, it’s time for the warm-up.

So much of life is knowing what to do, why, when and how. And that’s where a local-LLM-infused web framework comes into play. It can be built into a very single-purpose app and drive you along a mostly linear process, teaching you about the steps along the way.

Code Review Time

Mulling over my most important piece of code after having worked out the long-running job simulator nuanced UI issues like locking and unlocking buttons…

Claude: Ah yes! Looking at these code sections, I see several opportunities for improvement that build on our recent async success. Here’s my thoughtful analysis:

  1. Progressive UI Updates: The same non-blocking pattern we just implemented could enhance several areas:
    async def update_project(self, request):
        # Could show progressive loading of analyses
        # Each analysis check (meta/links) could update independently
    
  2. State Management: There’s a lot of state being passed around that could be better organized:
    # Current scattered state
    self.current_project_url
    self.current_analysis
    
  3. Error Handling: We have multiple try/except blocks that could be unified into a more consistent pattern:
    async def process_analysis(self, request):
        try:
            # ...
        except Exception as e:
            self.logger.error(f"Error processing analysis: {str(e)}")
            return P(f"Error: {str(e)}", style="color: red;")
    
  4. User Experience Flow: The UI flow could be more reactive and informative:

Current flow:

  1. Select org
  2. Select project
  3. Get analyses
  4. Select analysis
  5. Get fields
  6. Export

Could become:

  1. Select org (with live validation)
  2. Select project (with immediate analysis count)
  3. Interactive analysis selection (with preview data)
  4. Live field selection (with immediate visual feedback)
  5. Progressive export with detailed status

Would you like me to propose a specific area to tackle first? I’m thinking we could:

  1. Implement progressive loading of analyses with live updates
  2. Create a proper state management system
  3. Unify error handling with better user feedback
  4. Enhance the UI flow with more immediate feedback

Which direction interests you most? We can take our time and do this right!


The Journey to BQL Mastery

The Challenge

Takes deep breath. I am chipping away at the progressive hurdles on my most important part of this project, the link-graph. For it to be new levels of awesome, and transformative at my job and for my role, I need to be even better at BQL, and… and… well, I’m doing it somewhat blindly with the queries in a server file.

The Solution Path

I need it in Jupyter Notebook again, or maybe Cursor AI but outside the server file. And so I need isolated code snippets of BQL queries that are functionally like what I’m used to pasting in Jupyter code-blocks. So I need to save them out in fully self-contained and runnable blips. Oh, I can even eliminate the copy/pasting if I save ready-to-run log files containing the queries, Python imports and all, as .py files. Hmmm. Oh, Mr. Genie Claude…

The Request

Me to Claude in Cursor AI: I’m writing an article describing what I’m doing here. Help me explain to the nice folks your understanding of what I’m doing here. The whole big picture and the finer details of the recent stuff from logging to queries to notebook to kata back to queries and logging again. You can go into grisly detail with code examples of all the various parts, please and thank you! Go for as long as you can.

Claude: Here’s a detailed explanation of this BQL mastery system:

Neo’s Guide to Learning the SEO Matrix: A BQL Training System

The Big Picture

When working with complex APIs like Botify’s Query Language (BQL), you need a systematic approach to:

  1. Learn the query language
  2. Test queries safely
  3. Document working examples
  4. Implement proven patterns
  5. Capture successful implementations

Here’s how we’ve built a complete learning and implementation cycle:

The Components

1. Interactive Learning with Jupyter

First, we use Jupyter notebooks to explore and document BQL:

import requests
import pandas as pd
from pathlib import Path

# Secure token handling
with open('botify_token.txt') as f:
    TOKEN = f.read().strip()
HEADERS = {"Authorization": f"Token {TOKEN}"}

# Test a basic query
response = requests.get(
    "https://api.botify.com/v1/analyses/username/project/analysis/urls",
    headers=HEADERS
)

The notebook allows immediate feedback and iteration while learning BQL patterns.

2. Documentation Export System

We convert the notebook to markdown using a custom export system:

# Convert the Jupyter notebook export to clean markdown
input_file = 'kata.md'
output_file = '../MikeLev.in/botify-api.md'

# Track whether we're inside the front matter block (assumed here to be
# delimited by '---' lines, Jekyll/Jupytext style)
inside_front_matter = False

with open(input_file, 'r') as infile, open(output_file, 'w') as outfile:
    for line in infile:
        if line.strip() == '---':
            inside_front_matter = not inside_front_matter
            continue
        # Skip front matter and Jupyter region markers
        if inside_front_matter or '<!-- #region -->' in line:
            continue
        outfile.write(line)

3. Sophisticated Logging System

The server implementation includes a multi-target logging system:

from pathlib import Path
from loguru import logger  # loguru-style handlers, as the add()/filter calls below suggest

def setup_logging():
    # Set up log file paths (logs_dir and APP_NAME are defined elsewhere in the server file)
    app_log_path = logs_dir / f'{APP_NAME}.log'
    query_path = Path('query.py')

    # Add main app log handler
    logger.add(
        app_log_path,
        rotation="2 MB",
        level="DEBUG",
        format="{time:YYYY-MM-DD HH:mm:ss} | {level} | {message}",
        enqueue=True,
    )

    # Add query handler that creates executable Python code
    logger.add(
        lambda msg: query_path.write_text(
            query_path.read_text() + f"\n# {msg['time'].strftime('%Y-%m-%d %H:%M:%S')}\n{msg['message']}\n",
            encoding='utf-8'
        ),
        level="INFO",
        format="{message}",
        filter=lambda record: "query" in record["extra"],
    )

This system not only logs queries but creates an executable Python file of successful queries!
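
Assuming the logger above is loguru (the logger.add handlers and the record["extra"] filter suggest it), a proven query reaches the query.py sink by logging through the bound “query” channel, something like:

from loguru import logger

# Binding "query" into the record's extra dict is what the filter above looks for,
# so this message gets appended to query.py as timestamped, runnable code
query_logger = logger.bind(query=True)

snippet = (
    'response = requests.post(f"{BASE_URL}/analyses/{USERNAME}/{PROJECT}/{ANALYSIS}/urls", '
    "headers=HEADERS, json=query)"
)
query_logger.info(snippet)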

4. The Learning Cycle

Here’s how it all works together:

  1. Exploration Phase
    # In Jupyter notebook
    # Try a new BQL query
    query = {
        "fields": ["url", "http_code"],
        "filters": {
            "field": "http_code",
            "predicate": ">=",
            "value": 400
        }
    }
    response = requests.post(f"{BASE_URL}/analyses/{USERNAME}/{PROJECT}/{ANALYSIS}/urls", 
                           headers=HEADERS,
                           json=query)
    
  2. Documentation Phase: The working query gets documented in the kata.md:
    ## Error Code Analysis
    Find all URLs with error status codes:
    ```python
    query = {
        "fields": ["url", "http_code"],
        "filters": {
            "field": "http_code",
            "predicate": ">=",
            "value": 400
        }
    }
    ```
    
  3. Implementation Phase: The documented query gets implemented in the server:
    def check_error_pages():
        query = {
            "fields": ["url", "http_code"],
            "filters": {
                "field": "http_code",
                "predicate": ">=",
                "value": 400
            }
        }
        logger.query(f"response = requests.post('{BASE_URL}/analyses/{USERNAME}/{PROJECT}/{ANALYSIS}/urls', headers=HEADERS, json={query})")
        return make_request("POST", f"/analyses/{USERNAME}/{PROJECT}/{ANALYSIS}/urls", json=query)
    
  4. Capture Phase: The logging system creates an executable record in query.py:
    # 2024-01-24 15:30:45
    response = requests.post('https://api.botify.com/v1/analyses/username/project/analysis/urls', 
                           headers=HEADERS, 
                           json={"fields": ["url", "http_code"], 
                                 "filters": {"field": "http_code", 
                                           "predicate": ">=", 
                                           "value": 400}})
    

Why This System Works

  1. Safe Learning Environment
    • Jupyter provides immediate feedback
    • Failed queries don’t affect production code
    • Working examples are preserved
  2. Documentation First
    • Every query is documented before implementation
    • Examples are proven to work
    • Pattern recognition emerges naturally
  3. Self-Improving System
    • Working queries are automatically captured
    • Query patterns can be analyzed
    • New developers can learn from real examples
  4. Production Ready
    • Tested queries move to production
    • Logging system captures all successful queries
    • Creates executable history of working patterns

The Query Library Growth

As more queries are tested and implemented, the system builds:

  1. A comprehensive kata.md reference
  2. A library of working Python code
  3. A log of all successful queries
  4. A pattern library for future development

This creates a virtuous cycle where:

  • Each new query adds to the knowledge base
  • Documentation improves with real usage
  • Patterns emerge from actual practice
  • New developers can learn from proven examples

The system turns the complex task of learning BQL into a manageable, documented process that continuously improves itself through use.

Would you like me to expand on any part of this system or provide more code examples for specific components?


The Power of LLM Hacking

Opening the Black Box

The LLM should be all like, “Would you like to crack me open and hack me? Alter my workflows and system prompts. Write new domain-expertise series of hot prompts. Write the Neo Kung Fu downloads.”

The HTMX + LLM Revolution

The union of HTMX’s anything-talks-to-anything model combined with LLM system prompts that can be like a Neo Kung Fu download is significant. The fact that you can avoid the expense of training or even fine-tuning a model by injecting a lifetime of subject-matter training at the moment of menu selection in a piece of software is significant.

Back to Botify

Okay, I have to cover more of the Botify API… Gemini?

Gemini’s Response

Gemini: It’s awesome that you’re building a BQL reference kata! This will be super helpful for anyone learning to use Botify’s API. Here are some ideas to make your kata even more comprehensive and user-friendly:

1. Organize Queries by Use Case

Instead of just listing API endpoints, group the queries by common SEO tasks, like:

  • Technical SEO Audits:
    • Finding broken links (/analyses/{username}/{project_slug}/{analysis_slug}/urls)
    • Analyzing robots.txt rules (/analyses/{username}/{project_slug}/{analysis_slug}/staticfiles/robots-txt-indexes)
    • Identifying redirect chains (/analyses/{username}/{project_slug}/{analysis_slug}/urls)
  • Content Audits:
    • Analyzing content quality (/analyses/{username}/{project_slug}/{analysis_slug}/urls)
    • Identifying thin or duplicate content (/analyses/{username}/{project_slug}/{analysis_slug}/urls)
    • Optimizing content for keywords (/analyses/{username}/{project_slug}/{analysis_slug}/features/search_console/stats)
  • Link Audits:
    • Analyzing backlink profiles (/analyses/{username}/{project_slug}/{analysis_slug}/features/links/percentiles)
    • Identifying toxic links (/analyses/{username}/{project_slug}/{analysis_slug}/urls)
    • Discovering link opportunities (/analyses/{username}/{project_slug}/{analysis_slug}/features/top_domains/domains)

2. Provide Practical Examples

For each API endpoint, include concrete examples of how to use it with real-world data. For instance, demonstrate how to:

  • Find all pages with a 404 status code.
  • Identify pages with low word counts.
  • Discover pages with high numbers of backlinks.

3. Add Interactive Elements

Consider adding interactive elements to your kata, such as:

  • Code Sandboxes: Allow users to experiment with BQL queries directly in the browser.
  • Quizzes: Test users’ understanding of BQL concepts with interactive quizzes.
  • Visualizations: Use charts and graphs to illustrate query results and SEO insights.

4. Create a BQL Cheat Sheet

Summarize the most important BQL syntax and functions in a concise cheat sheet that users can refer to quickly.

5. Build a BQL Query Builder

Develop a tool that helps users construct complex BQL queries by selecting fields, filters, and aggregations from dropdown menus or interactive forms.
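
The heart of such a builder can be tiny. Here’s a toy sketch that assembles the same fields/filters shape used in the kata examples above (the field names are illustrative):

def build_bql_query(fields, filter_field=None, predicate=None, value=None):
    """Assemble the fields/filters dict shape used throughout the kata."""
    query = {"fields": list(fields)}
    if filter_field:
        query["filters"] = {"field": filter_field, "predicate": predicate, "value": value}
    return query

# e.g. "show me every URL that returned an error status"
print(build_bql_query(["url", "http_code"], "http_code", ">=", 400))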

By incorporating these ideas, you can create a BQL reference kata that is not only comprehensive but also engaging and accessible to a wide range of users.


Project Status Update

Okay, done. My Botify BQL Kata file, while not perfect because not every example really is stand-alone or tested yet, is comprehensive, with endpoint coverage generated directly from the OpenAPI swagger.json feed as it is. It’s time to get onto the next thing that’s most broken, which a good night’s sleep clarified for me: a project that will help me finish the link-graph’s remaining UI polishing bits and pay back massively moving forward.

OpenAPI to Python Code Generator

Overview

For posterity, here’s my final (so far) code for going from OpenAPI swagger JSON to Python code examples to provide full endpoint coverage of the Botify API to the frontier model LLMs like Claude, Gemini, ChatGPT, Perplexity and whoever else out there that’s scraping this that I missed.

Visual Representation

First a PNG image of the Botify API…

import requests
from graphviz import Digraph
import json

def create_api_graph():
    # Fetch OpenAPI spec
    url = "https://api.botify.com/v1/swagger.json"
    response = requests.get(url)
    spec = response.json()

    # Create directed graph with improved styling
    dot = Digraph(comment='Botify API Endpoints')
    dot.attr(rankdir='LR')  # Left to right layout
    dot.attr('node', shape='box', style='rounded')
    
    # Group endpoints by tags
    endpoints_by_tag = {}
    for path, methods in spec['paths'].items():
        for method, details in methods.items():
            if method == 'parameters':  # Skip global parameters
                continue
            tag = details.get('tags', ['Untagged'])[0]
            if tag not in endpoints_by_tag:
                endpoints_by_tag[tag] = []
            endpoints_by_tag[tag].append((method.upper(), path, details.get('summary', '')))

    # Create subgraphs for each tag
    for tag, endpoints in endpoints_by_tag.items():
        with dot.subgraph(name=f'cluster_{tag}') as c:
            c.attr(label=tag, style='rounded', bgcolor='lightgrey')
            
            # Add endpoints to subgraph
            for method, path, summary in endpoints:
                # Create unique node IDs
                node_id = f"{method}_{path}".replace('/', '_').replace('{', '').replace('}', '')
                
                # Set color based on HTTP method
                method_colors = {
                    'GET': 'blue',
                    'POST': 'green',
                    'PUT': 'orange',
                    'DELETE': 'red',
                    'PATCH': 'purple'
                }
                color = method_colors.get(method, 'black')
                
                # Create node with method and path
                label = f"{method}\n{path}"
                if summary:
                    label += f"\n{summary}"
                c.node(node_id, label, color=color, fontsize='10')

    # Save the graph
    output_file = "botify_api_graph"
    dot.render(output_file, format="png", cleanup=True)
    print(f"Graph generated and saved as {output_file}.png")

if __name__ == "__main__":
    create_api_graph()

The Python Code Generator

…and then the code that creates the Python code examples…

# The Arcane Scrolls: API Endpoint Examples
import requests
import json
from pathlib import Path
from typing import Dict, Any

def generate_python_example(method: str, path: str, params: Dict, config: Dict, description: str = "", show_config: bool = False) -> str:
    """Craft a Python invocation example for a given API endpoint"""
    # Emit the endpoint description as a leading comment so the generated
    # example stays a flat, runnable script (no wrapper function to indent)
    description_comment = f"# {description}" if description else ""

    lines = [
        "```python",
        "# Summon the necessary artifacts",
        "import requests",
        "import json",
        "",
        description_comment
    ]
    
    if show_config:
        lines.extend([
            "# Your configuration sigil should contain:",
            "#   - token: Your API token",
            "#   - org: Your organization ID",
            "#   - project: Your project ID",
            "#   - analysis: Your analysis ID",
            "#   - collection: Your collection ID",
            "",
            "# Load token from secure storage",
            'with open("botify_token.txt") as f:',
            '    token = f.read().strip()',
            ''
        ])
    
    # Format the URL and parameters
    url = f"url = f'https://api.botify.com/v1{path}'"
    lines.extend([
        "# Craft the invocation URL",
        url,
        "",
        "# Prepare the headers for your spell",
        'headers = {',
        '    "Authorization": f"Token {token}",',
        '    "Content-Type": "application/json"',
        '}',
        ''
    ])
    
    # Add method-specific code
    if method.lower() in ['post', 'put', 'patch']:
        lines.extend([
            "# Define the payload for your invocation",
            "data = {",
            '    # Add your request parameters here',
            "}",
            "",
            "# Cast the spell",
            f"response = requests.{method.lower()}(url, headers=headers, json=data)",
            ""
        ])
    else:
        lines.extend([
            "# Cast the spell",
            f"response = requests.{method.lower()}(url, headers=headers)",
            ""
        ])
    
    lines.extend([
        "# Interpret the response",
        "if response.status_code == 200:",
        "    result = response.json()",
        "    print(json.dumps(result, indent=2))",
        "else:",
        "    print(f'Error: {response.status_code}')",
        "    print(response.text)"
    ])
    
    lines.append("```")
    return "\n".join(line for line in lines if line)

def generate_markdown(spec: Dict[str, Any], config: Dict[str, str]) -> str:
    """Inscribe the complete API grimoire"""
    md_lines = [
        "## The Complete API Grimoire",
        "",
        "Having mastered the arts of BQL, we now document the full spectrum of API invocations.",
        "Each endpoint is presented with its purpose, capabilities, and Python implementation.",
        "",
        "### Endpoint Categories",
        ""
    ]
    
    endpoints_by_tag = {}
    for path, methods in spec['paths'].items():
        for method, details in methods.items():
            if method == 'parameters':
                continue
            tag = details.get('tags', ['Untagged'])[0]
            if tag not in endpoints_by_tag:
                endpoints_by_tag[tag] = []
            endpoints_by_tag[tag].append((method, path, details))
    
    first_example = True
    for tag in sorted(endpoints_by_tag.keys()):
        md_lines.extend([
            f"#### {tag} Invocations",
            "",
            f"These endpoints allow you to manipulate {tag.lower()} aspects of your digital realm.",
            ""
        ])
        
        for method, path, details in sorted(endpoints_by_tag[tag]):
            description = details.get('description', '')
            summary = details.get('summary', '')
            parameters = details.get('parameters', [])
            responses = details.get('responses', {})
            
            # Create a semantic block for LLMs
            md_lines.extend([
                f"##### {method.upper()} {path}",
                "",
                "**Purpose:**",
                summary or "No summary provided.",
                "",
                "**Detailed Description:**",
                description or "No detailed description available.",
                "",
                "**Parameters Required:**",
                "```",
                "\n".join(f"- {p.get('name')}: {p.get('description', 'No description')}" 
                         for p in parameters) if parameters else "No parameters required",
                "```",
                "",
                "**Expected Responses:**",
                "```",
                "\n".join(f"- {code}: {details.get('description', 'No description')}" 
                         for code, details in responses.items()),
                "```",
                "",
                "**Example Implementation:**",
                generate_python_example(method, path, details, config, 
                                     description=description, show_config=first_example),
                "",
                "---",
                ""
            ])
            first_example = False
    
    # Add a section specifically for LLM understanding
    md_lines.extend([
        "### LLM Guidance",
        "",
        "When deciding which endpoint to use:",
        "1. Consider the category (tag) that matches your task",
        "2. Review the Purpose and Description to ensure alignment",
        "3. Check the required parameters match your available data",
        "4. Verify the expected responses meet your needs",
        "",
        "Example prompt format:",
        '```',
        'I need to [task description]. I have access to [available data].',
        'Which endpoint would be most appropriate?',
        '```',
        ""
    ])
    
    return "\n".join(md_lines)

# First, ensure we have our token
if not Path("botify_token.txt").exists():
    print("Please run the authentication cell first to create your token file.")
else:
    # Load token
    with open("botify_token.txt") as f:
        token = f.read().strip()
    
    # Use existing configuration from earlier cells
    config = {
        "token": token,
        # These will be used as placeholders in examples
        "org": "{org_id}",
        "project": "{project_id}",
        "analysis": "{analysis_id}",
        "collection": "{collection_id}"
    }
    
    # Fetch the API specification
    try:
        response = requests.get("https://api.botify.com/v1/swagger.json", 
                              headers={"Authorization": f"Token {token}"})
        spec = response.json()
        
        # Generate and display the markdown
        markdown_content = generate_markdown(spec, config)
        print("API documentation generated successfully!")
        
        # Print the generated markdown so it can be rendered in the next cell
        print(markdown_content)
    except Exception as e:
        print(f"Error fetching API specification: {e}")

The examples are not entirely stand-alone, as you have to add the code to load the token and whatever parameter arguments are needed as input, usually from config.json, all documented in the file it creates. And the final product is at Download Botify Kung Fu