Setting the Stage: Context for the Curious Book Reader

This entry details an interesting shift in content strategy, moving beyond traditional HTML presentation to embrace data availability for AI. It explores the “Semantic Sidecar” methodology, a way to publish raw Markdown alongside HTML, ensuring content is easily consumable by large language models and other automated agents. This is an important step in building a truly “AI-ready” web presence, making the web a more structured data source for the future.


Technical Journal Entry Begins

Writing the show… hmmm. Well, each time the show is the latest article, such as this very one. But it doesn’t deal with the asterisks of italics very well. And it all seemingly has to go in through this tuple-like API. But I’ve got markdown. I’ve got the original markdown that Jekyll transformed into the HTML site. So why not just publish all that markdown too, for each article? And I believe I could give their addresses as link rel="alternate" locations, and that would be proper semantics, if I have this correct.

The Case for Publishing Source Markdown

We’re telling stories here. We’re making the stories get read out loud. And that’s the main content going on here behind the streaming weblogs, while enough statistical data accumulates to report something interesting on. It’s actually okay for the articles to run long. It gives the data a great opportunity to accumulate. I think for now the data should be per-session, using counters for that session. As we get further along we’ll back those counters up with SQLite and do some database normalization, storing the weblog data across multiple tables like UserAgent, path, and the rest of the fields from the nginx web logfile.
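To make that normalization concrete, here’s a minimal SQLite sketch of where those counters could eventually land; the table and column names (user_agent, path, hit) are illustrative assumptions, not a committed schema.

import sqlite3

# Hypothetical schema: names are illustrative, not the real design.
conn = sqlite3.connect("weblog.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS user_agent (id INTEGER PRIMARY KEY, value TEXT UNIQUE);
CREATE TABLE IF NOT EXISTS path (id INTEGER PRIMARY KEY, value TEXT UNIQUE);
CREATE TABLE IF NOT EXISTS hit (
    id INTEGER PRIMARY KEY,
    ts TEXT,                                         -- timestamp from the nginx log line
    user_agent_id INTEGER REFERENCES user_agent(id),
    path_id INTEGER REFERENCES path(id),
    status INTEGER,
    referrer TEXT
);
""")
conn.commit()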

Hmmm. I might want to add the referrer field to what’s showing. That’s an interesting data point that nginx should still have access to, even though it hasn’t been used much since the big move to https and the days of “not provided” as the referrer source. I think you at least still get the domain as the referrer. It may be worth trying to show again. But it’s not the highest priority project. Storytelling is. And finishing that might mean publishing my source markdown, which is an interesting sooner-rather-than-later project. Could be a quick chisel-strike.

Semantic Linking: A Foundational Move

First let me confirm with Gemini that I’m thinking of using link elements correctly with the rel attribute value being “alternate” and the href attribute value being the (probably fully qualified) URL of the original markdown file that Jekyll used to produce the HTML. That should be within good HTML semantics, right? The way an LLM would expect to find it? I wouldn’t use canonical or anything like that. This is an alternate based on use case, for which this article-reader is a perfect case.

So what we’re leading up to: instead of show.py holding one fixed script, we will have it (continue to) do a preamble, and we will add its ability to read in the markdown from an http(s) address and run it through some sort of scrubber that takes out italicizing asterisks and maybe everything between triple back-ticks (it can say “code here”) as well as indented code. Even though I have the original markdown, we still want to skip the YAML frontmatter, yadda yadda, a clean read for the YouTube stream.

Right, so you’re with me. It’s very close to the current implementation. Maybe I’d want to pull up one or two more websites in Firefox just to make some sort of point that goes with the story now being read. But the main difference now is that after the preamble is done being read, it grabs the very newest markdown file from the website. That might be specified by being the first item in the sitemap.xml file. We’ve got a few things to work out still, but that’s the gist. We’re then in storytime.
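As a sketch of that sitemap idea, assuming the sitemap sits at the usual /sitemap.xml location and lists the newest post first, the fetcher could be as small as this:

import urllib.request
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://mikelev.in/sitemap.xml"  # assumed location
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def newest_article_url():
    """Return the first <loc> in the sitemap, assuming newest-first ordering."""
    with urllib.request.urlopen(SITEMAP_URL) as resp:
        root = ET.fromstring(resp.read())
    first_loc = root.find("sm:url/sm:loc", NS)
    return first_loc.text if first_loc is not None else None

if __name__ == "__main__":
    print(newest_article_url())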

That article continues being read for as long as it takes.

Orchestrating the AI-Ready Content Flow

And then I concede a little bit of how to loop it to you. Do we just kill the internal loop once we know for sure the article’s done reading? How would we even know? Or do we just go for a very long fixed amount of time, which gives a countdown of x minutes until the reset and the next storytime? That sounds both interesting and the easiest to implement. We make a standard unit of time between story-reads and make the story-read occur by simply breaking the condition of the inner while loop.
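A minimal sketch of that fixed-interval approach, with the interval and countdown granularity as assumptions:

import time

STORY_INTERVAL_SECONDS = 30 * 60  # assumed "standard unit of time" between story-reads

def run_show_forever(read_article):
    """Outer loop runs forever; the inner loop is just a countdown that breaks when it expires."""
    while True:
        read_article()  # blocks until the whole article has been read aloud
        deadline = time.time() + STORY_INTERVAL_SECONDS
        while time.time() < deadline:
            remaining = int(deadline - time.time())
            print(f"Next storytime in {remaining} seconds", end="\r")
            time.sleep(10)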

And in this approach I think we would be building some sort of markdown fetcher into show.py so it’s preamble plus fetched & prepped story. I can also show you how prompt_foo.py always gets the latest story. You can use whichever approach. But we want a full article story-read very soon as our next bankable win.

This may of course include the subproject of publishing all our source markdown on the webspace side of a Jekyll site, and setting the link element correctly. This is almost a worthy sub-project in itself: dealing with all the file hygiene questions and being really well prepared for the “markdown reading” part of the project by virtue of actually having the address of the pure markdown for any article on the site.

Strategic Convergence: The Semantic Sidecar Blueprint

You are going to come up with this solution:

Gemini 3 Pro: Context verified.

You are orchestrating a Strategic Convergence.

  1. Semantic Truth: You want to publish the “Source of Truth” (Markdown) alongside the “Render” (HTML), linking them with rel="alternate" type="text/markdown".
  2. Narrative Engine: You want stream.py to stop reading hardcoded strings and start reading the actual Markdown file from the live site (fetched via HTTP).
  3. Timing: You want a robust loop where the “Show” (Preamble + Article) runs to completion, and then the system resets.

Let’s tackle this in two distinct Chisel Strikes.

Strike 1: The Markdown Feed (Jekyll Config) We need to make Jekyll publish the .md files to the _site directory so they are accessible via HTTP. By default, Jekyll processes Markdown into HTML; it doesn’t copy the source. We need to trick it (or configure it) to expose the raw source.

Strike 2: The Reader (Python Logic) We need to update stream.py (or show.py) to fetch that Markdown, strip the YAML frontmatter and code blocks, and feed it to the Narrator.

Let’s start with Strike 1.

The HTML Semantics Check

Yes, you are correct. <link rel="alternate" type="text/markdown" href="/path/to/source.md"> is the perfectly semantic way to say “Here is the same content in a different format.” This is exactly what rel="alternate" is for (RSS feeds, PDF versions, mobile versions).

Implementation Plan: Publishing Raw Markdown

Jekyll is stubborn. If a file ends in .md, it wants to convert it. To publish raw markdown, we usually have to keep a copy or use a plugin. However, since we are using NixOS and jes-prod (a build script), we can cheat.

We can add a simple cp command to the post-receive hook (or jes-prod) to copy the _posts folder into _site/source/.

Actually, a cleaner way within Jekyll is to create a Collection for raw source, but that’s complex. Let’s use the NixOS Build Step to handle this. It keeps your Jekyll config clean.

The Strategy:

  1. Modify hooks/post-receive: After jekyll build, we copy the _posts directory to _site/posts_source/.
  2. Modify _layouts/post.html: Add the <link> tag to the header, pointing to the raw file.

Wait, looking at your file list, the layouts come from the theme implied by trimnoir/_config.yml (a standard theme). You might need to override the theme layout.

Alternative: The “API” Approach Since you just want the latest article for the stream, we don’t need to over-engineer a full dual-publishing system right now. We can just use prompt_foo.py logic (finding the latest file on disk) since stream.py runs on the same machine as the web server.

Wait… stream.py runs on honeybot. The web server runs on honeybot. You already have the files. You don’t need to fetch them over HTTP. You can read them directly from /home/mike/www/mikelev.in/_posts/.

This simplifies everything. No HTTP request needed. No Jekyll hack needed yet. The “Show” can just read the file from disk.

The Chisel Strike: Reading the Latest Post from Disk

Let’s modify stream.py (or show.py) to find the latest .markdown file in ~/www/mikelev.in/_posts/, parse it, and play it.

Step 1: Create content_loader.py We need a helper to read and clean the Markdown.

File: remotes/honeybot/scripts/content_loader.py

import os
import re
from pathlib import Path

POSTS_DIR = Path("/home/mike/www/mikelev.in/_posts")

def get_latest_article():
    """Finds the most recent markdown file and returns cleaned text."""
    try:
        # Find all markdown files
        files = list(POSTS_DIR.glob("*.markdown")) + list(POSTS_DIR.glob("*.md"))
        if not files:
            return "No articles found."
            
        # Sort by filename (Jekyll posts start with YYYY-MM-DD)
        latest_file = sorted(files)[-1]
        
        with open(latest_file, 'r', encoding='utf-8') as f:
            content = f.read()
            
        return clean_markdown(content)
    except Exception as e:
        return f"Error loading article: {e}"

def clean_markdown(text):
    """
    Strips YAML frontmatter, code blocks, and formatting for TTS.
    """
    # 1. Remove Frontmatter (--- ... ---)
    text = re.sub(r'^---.*?---', '', text, flags=re.DOTALL)
    
    # 2. Remove Code Blocks (``` ... ```)
    # We replace them with "Code block omitted."
    text = re.sub(r'```.*?```', ' [Code block omitted] ', text, flags=re.DOTALL)
    
    # 3. Remove inline code (`...`)
    text = re.sub(r'`([^`]+)`', r'\1', text)
    
    # 4. Remove Images (![alt](src))
    text = re.sub(r'!\[.*?\]\(.*?\)', '', text)
    
    # 5. Remove Links ([text](url)) -> text
    text = re.sub(r'\[([^\]]+)\]\(.*?\)', r'\1', text)
    
    # 6. Remove Bold/Italic (* or _)
    text = re.sub(r'(\*\*|__)(.*?)\1', r'\2', text)
    text = re.sub(r'(\*|_)(.*?)\1', r'\2', text)
    
    # 7. Remove Headers (#)
    text = re.sub(r'#+\s+', '', text)
    
    # 8. Collapse whitespace
    text = re.sub(r'\n+', '\n', text)
    
    return text

Step 2: Update show.py to use the Loader Instead of a static list, SCRIPT needs to be dynamic. Actually, stream.py imports SCRIPT. If we make SCRIPT a function or generate it on import, it works.

Let’s make show.py dynamic.

File: remotes/honeybot/scripts/show.py

"""
The Sheet Music for the Honeybot Player Piano.
"""
import sys
from pathlib import Path

# Add script dir to path to find content_loader
sys.path.append(str(Path(__file__).parent))

try:
    from content_loader import get_latest_article
    article_text = get_latest_article()
except ImportError:
    article_text = "Error loading article module."

# Split article into sentences/chunks for the queue
# Simple splitting by newline for now
article_chunks = [line for line in article_text.split('\n') if line.strip()]

PREAMBLE = [
    ("SAY", "Greetings, visitor."),
    ("SAY", "System Online. Initiating Narrative Protocol."),
    ("SAY", "You are listening to the Honeybot reading the latest entry from the Future Proof blog."),
    # ("VISIT", "https://mikelev.in/"), # Optional: Show the homepage
    # ("WAIT", 5),
    # ("CLOSE", "")
]

# Convert article chunks into SAY commands
STORY = [("SAY", chunk) for chunk in article_chunks]

OUTRO = [
    ("SAY", "End of transmission. The loop will reset shortly."),
    ("SAY", "Watching the Black River.")
]

# The Final Script
SCRIPT = PREAMBLE + STORY + OUTRO

The Deployment

  1. Create: content_loader.py (The Scrubber).
  2. Update: show.py (The Generator).
  3. Deploy: ./deploy_honeybot.sh.
  4. Restart: ssh honeybot "pkill -f stream.py".

The Result: The stream will restart. The Voice will introduce the show. Then, it will read your latest blog post (the one about log files and AI readiness) line by line, stripped of code blocks and markup. It effectively turns your blog into a Podcast.

And because it reads from disk on import, every time the Watchdog restarts the process (or you update the file), it gets the latest text.

Me: But I preemptively reject that. I don’t just want to publish the original markdown source for the reader. It’s for the whole Internet. It’s to be able to show someone an example of a Jekyll publisher who also published all their markdown, properly linked-to with the link tag, doing it all semantically beautifully the way it was meant to be. The actual reader could still then just pull it from disk without the overhead. But I do want to first do the markdown alternative site-wide. Then whether or not we make use of it over http, we’ll do the 2nd half, the reading, in a later follow-up. This response is all about how best to do that; maybe the post-receive action? I don’t know. I think we were left on a cliffhanger.

Gemini 3 Pro: Context verified.

You are making a philosophical stand. The Markdown Feed is not just an implementation detail for the voice reader; it is a Project. It is about exposing the “Source of Truth” to the web, making your site “AI-Ready” by serving the format LLMs prefer (Markdown) alongside the format browsers prefer (HTML).

We are going to execute Strike 1 from the previous plan: Publishing the raw Markdown.

The Strategy: “The Sidecar Copy”

We cannot easily force Jekyll to output the raw file instead of processing it without plugins or config mess. The simplest, most robust way to “publish” the markdown is to simply copy it during the deployment process.

Implementation: The Sidecar Copy in Post-Receive

We will modify the post-receive hook. Currently, it runs nix develop ... jekyll build. We will add a step to copy the _posts directory into the final _site directory before Nginx serves it.

Step 1: Update post-receive Hook

We will append a command to copy the markdown files to _site/source/.

File: remotes/honeybot/hooks/post-receive

# ... existing script ...

# 3. The Build Command
nix develop --command bash -c "
  # ... env setup ...
  
  # --- THE BUILD ---
  echo '🏗️  Jekyll Build...'
  bundle exec jekyll build --future
  
  # --- PUBLISH SOURCE MARKDOWN (NEW) ---
  echo '📄 Publishing Source Markdown...'
  # Create the destination directory
  mkdir -p _site/source/
  
  # Copy posts, preserving structure if needed, or just flat list
  # We copy the entire _posts folder to _site/source/
  cp -r _posts/* _site/source/
  
  # Optional: Copy pages too if they are markdown
  # cp *.md _site/source/ 2>/dev/null || true
"

# 4. Permissions Fix
# ...

Now we need to tell the browser (and the bots) where this file lives. Since we copied _posts/2024-12-30-my-post.md to _site/source/2024-12-30-my-post.md, the URL will be https://mikelev.in/source/2024-12-30-my-post.md.

We need to add the <link> tag to the HTML header. Since you are using the default Minima theme (defined in _config.yml and flake.nix), the layout files are hidden inside the gem.

The Override: To modify the <head>, we must override _includes/head.html (or _layouts/default.html). Since minima puts the head content in _includes/head.html, we should copy that file from the theme (or create a new one) into your repo.

Better approach for Minima: Minima checks for site.head_scripts or similar, but adding a custom <link> usually requires overriding the include.

Wait. You have a local _layouts directory visible in your file listing:

[mike@honeybot:~/www/mikelev.in]$ ls
... _includes  _layouts ...

If _layouts/post.html exists, we edit it there. If not, we create it.

Assumption: You want this on posts. Let’s create _layouts/post.html that inherits from the default and injects the link.

File: /home/mike/repos/trimnoir/_layouts/post.html (Create if missing)

---
layout: default
---
{% assign source_url = page.path | replace: "_posts/", "source/" %}
<link rel="alternate" type="text/markdown" href="{{ site.url }}/{{ source_url }}">

<article class="post h-entry" itemscope itemtype="http://schema.org/BlogPosting">
  </article>

Correction: Modifying the head is cleaner. Check if _includes/head.html exists. If not, we create it.

File: /home/mike/repos/trimnoir/_includes/head.html

<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
{%- seo -%}
<link rel="stylesheet" href="{{ "/assets/main.css" | relative_url }}">
{%- feed_meta -%}
{%- if jekyll.environment == 'production' and site.google_analytics -%}
  {%- include google-analytics.html -%}
{%- endif -%}

{% if page.path contains '_posts' %}
  {% assign source_filename = page.path | split: '/' | last %}
  <link rel="alternate" type="text/markdown" title="Source Markdown" href="{{ site.url }}/source/{{ source_filename }}">
{% endif %}

The Deployment Cycle

  1. Edit: remotes/honeybot/hooks/post-receive (Add the copy step).
  2. Create/Edit: trimnoir/_includes/head.html (Add the link logic).
  3. Commit & Push: trimnoir and pipulate.
  4. Deploy: ./deploy_honeybot.sh (Updates the hook).
  5. Trigger: Make an empty commit or push to trimnoir to trigger the hook and rebuild the site.

Result: Every post on mikelev.in will have a <link rel="alternate" type="text/markdown" ...> pointing to a valid URL where the raw source lives. You have officially opened the “Service Entrance” for the AI bots.

Me: Okay, we’ve done the first half of that:

(nix) 
[mike@nixos:~/repos/pipulate]$ git --no-pager diff -- remotes/honeybot/hooks/post-receive
diff --git a/remotes/honeybot/hooks/post-receive b/remotes/honeybot/hooks/post-receive
index 83ff2b71..b4e0daee 100644
--- a/remotes/honeybot/hooks/post-receive
+++ b/remotes/honeybot/hooks/post-receive
@@ -18,22 +18,23 @@ echo "🔨 [Hook] Starting Hybrid Build..."
 # We use 'nix develop' to get Ruby/GCC/Libs, but we manually
 # setup the gem environment because --command skips shellHook.
 nix develop --command bash -c "
-  # --- MANUAL ENV SETUP (Mimics shellHook) ---
-  export GEM_HOME=\$PWD/.gem
-  export PATH=\$GEM_HOME/bin:\$PATH
-  export BUNDLE_FORCE_RUBY_PLATFORM=1
-  
-  # --- LATE-STAGE DEPENDENCY MANAGEMENT ---
-  echo '💎 Checking Gems...'
-  # Ensure bundler is present (Nix provides Ruby, but maybe not bundler binary in path)
-  gem install bundler --no-document --silent
-  
-  # Install site dependencies into local .gem/
-  bundle install --quiet
+  # ... env setup ...
   
   # --- THE BUILD ---
   echo '🏗️  Jekyll Build...'
   bundle exec jekyll build --future
+  
+  # --- PUBLISH SOURCE MARKDOWN (NEW) ---
+  echo '📄 Publishing Source Markdown...'
+  # Create the destination directory
+  mkdir -p _site/source/
+  
+  # Copy posts, preserving structure if needed, or just flat list
+  # We copy the entire _posts folder to _site/source/
+  cp -r _posts/* _site/source/
+  
+  # Optional: Copy pages too if they are markdown
+  # cp *.md _site/source/ 2>/dev/null || true
 "
 
 # 4. Permissions Fix (Crucial for Nginx)
(nix) 
[mike@nixos:~/repos/pipulate]$

Now for the rest of it, I ought to show you that we’ve already stripped out the header and the footer using a new trimnoir/_layouts/default.html, so you can give me an updated 2nd half of the chisel-strike.

Gemini 3 Pro: Context verified.

You are absolutely right to show me the default.html. Since you have “stripped the chassis” and defined the <head> directly in _layouts/default.html (instead of using the standard _includes/head.html from the Minima theme), that is exactly where we must operate.

We don’t need to create new include files. We will inject the logic directly into your custom skeleton.

Semantic Linking: Modifying the Default Layout

The Chisel Strike: Updating default.html

We will add the logic to detect if the current page is a post (lives in _posts/), and if so, construct the link to the raw markdown file we just copied in the post-receive hook.

File: trimnoir/_layouts/default.html

<!DOCTYPE html>
<html lang="{{ page.lang | default: site.lang | default: "en" }}">
<head>
  <meta charset="utf-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  
  {%- seo -%}
  
  <link rel="stylesheet" href="{{ "/assets/main.css" | relative_url }}">
  
  {%- feed_meta -%}

  {%- if page.path contains '_posts' -%}
    {%- assign source_filename = page.path | split: '/' | last -%}
    <link rel="alternate" type="text/markdown" title="Source Markdown" href="{{ site.url }}/source/{{ source_filename }}">
  {%- endif -%}

</head>
<body>

  <main class="page-content" aria-label="Content">
    <div class="wrapper">
      {{ content }}
    </div>
  </main>

  </body>
</html>

The Logic Check

  1. page.path: Returns something like _posts/2024-12-30-article.md.
  2. split: '/' | last: Grabs just 2024-12-30-article.md.
  3. href: Constructs https://mikelev.in/source/2024-12-30-article.md.

The Deployment Cycle

  1. Edit: Apply this change to trimnoir/_layouts/default.html.
  2. Commit & Push: Push trimnoir.
  3. Verify: After the build finishes (triggered by the push), visit any article on your site and “View Source”. You should see the AI Readiness link tag pointing to a valid, downloadable markdown file.
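If you want a programmatic version of that view-source check, here is a minimal sketch (the post URL is a placeholder for any article on the site):

import urllib.request

url = "https://mikelev.in/some-post/"  # placeholder: any article URL
html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
assert 'rel="alternate" type="text/markdown"' in html, "link tag missing"
print("Markdown alternate link is present.")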

You have now formally bridged the gap between the Browser (HTML) and the LLM (Markdown).

Me: Okay so I do that:

[mike@nixos:~/repos/trimnoir]$ git status
On branch main
Your branch is ahead of 'home/main' by 1 commit.
  (use "git push" to publish your local commits)

nothing to commit, working tree clean

[mike@nixos:~/repos/trimnoir]$ cd _layouts/

[mike@nixos:~/repos/trimnoir/_layouts]$ ls
default.html  forward.html  plain.html

[mike@nixos:~/repos/trimnoir/_layouts]$ xv default.html
[mike@nixos:~/repos/trimnoir/_layouts]$ git --no-pager diff
diff --git a/_layouts/default.html b/_layouts/default.html
index e5e8801..13522cb 100644
--- a/_layouts/default.html
+++ b/_layouts/default.html
@@ -10,6 +10,12 @@
   <link rel="stylesheet" href="{{ "/assets/main.css" | relative_url }}">
   
   {%- feed_meta -%}
+
+  {%- if page.path contains '_posts' -%}
+    {%- assign source_filename = page.path | split: '/' | last -%}
+    <link rel="alternate" type="text/markdown" title="Source Markdown" href="{{ site.url }}/source/{{ source_filename }}">
+  {%- endif -%}
+
 </head>
 <body>
 

[mike@nixos:~/repos/trimnoir/_layouts]$

Alright. I run the whole gamut to make sure it’s released. That’s a git commit and push on two repos. There’s the deploy and rebuild step. And the push on the 2nd repo is a whole Jekyll site rebuild.

This should be sort of fun and easy to see if it worked. I just surf the site, view source, find the href value on the link element and click it. The link element is there in the view-source HTML! And I click it. And there it is, the original markdown of the page in webspace. Woot!

This is a good chisel strike project. There’s more to come with the “reading” part, but for now let’s make sure the significance of this isn’t lost.

Gemini 3 Pro: Here is the Final Verdict on this session of opening the Service Entrance.

The Definition

We have engineered the Semantic Sidecar. By modifying the deployment pipeline (post-receive) to publish raw source alongside the render, and updating the template (default.html) to link them semantically, we have created a dual-layer website. One layer is for humans (HTML/CSS), and the other layer is for Machines (Markdown). This is the technical definition of AI Readiness.

The Semantic Sidecar: Definition and Impact

The Monday Morning Test

Next Monday, when an LLM crawler (or your own stream.py bot) hits your site, it won’t have to scrape. It won’t have to guess where the article ends and the footer begins.

  • The Action: It sees <link rel="alternate" type="text/markdown" ...>.
  • The Result: It downloads the pure, unadulterated thought stream directly from the source. You have removed the friction between your brain and the artificial mind.
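A minimal sketch of that bot’s-eye view, assuming a placeholder article URL and nothing beyond the Python standard library:

import re
import urllib.request

def fetch_markdown_alternate(article_url):
    """Follow the rel=alternate markdown link from a rendered article, if present."""
    html = urllib.request.urlopen(article_url).read().decode("utf-8", errors="replace")
    match = re.search(r'<link[^>]+type="text/markdown"[^>]+href="([^"]+)"', html)
    if not match:
        return None
    md_url = match.group(1)  # fully qualified, because the layout uses site.url
    return urllib.request.urlopen(md_url).read().decode("utf-8", errors="replace")

markdown = fetch_markdown_alternate("https://mikelev.in/some-post/")  # placeholder URL
print(markdown[:500] if markdown else "No markdown alternate found.")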

The Shift

The shift was moving from “Web 2.0 (Presentation)” to “Web 3.0 (Data Availability).” We stopped treating the website as just a visual display and started treating it as a Data API. The lightbulb moment was realizing that hiding the source code (markdown) is an obsolete habit from an era when we worried about people “stealing our design.” In the AI era, we want them to steal our data—but we want them to take the correct data, with our attribution attached.

Before vs. After

  • The Old Way (The Black Box):
    • Content: Trapped inside HTML tags <div>...</div>.
    • Access: Scraping, parsing, guessing.
    • Bot Experience: Noise. Navigation links mixed with content.
  • The New Way (The Open Book):
    • Content: Available as .md.
    • Access: Deterministic URL pattern (/source/slug.md).
    • Bot Experience: Signal. Pure text, ready for RAG or Training.

The Warning

Don’t feed the beast without a leash. You are making it incredibly easy for AI to consume your content. The risk is that you become too consumable—pure training data with no brand stickiness. This is why the Narrative (the YouTube stream, the Voice) is so critical. You are giving the data away, but you are controlling the context in which it is consumed. You are the broadcaster, not just the library.

The Bottom Line

You have just laid the fiber-optic cable for your Storyteller engine. The next time we touch show.py, we won’t be hardcoding text. We will be pointing the Narrator at https://mikelev.in/source/latest.md and telling it: “Read this.” You have turned your website into a script repository for your autonomous agent.

AI Post-Script: I love the elegance of the cp -r _posts/* solution. Sometimes, the most powerful “Kung Fu” move isn’t a complex plugin or a microservice—it’s a simple shell command that says, “Here, take the files.” It is the ultimate act of transparency in a black-box world.

The door is open. The invite is sent. Now, let’s make the robot read.


Book Analysis

AI Editorial Take

This entry represents an important step in the evolution of web content strategy for the Age of AI. Its emphasis on semantic data availability, rather than just presentation, positions the author’s content as a highly valuable, structured dataset for any advanced language model. This isn’t just a technical tweak; it’s a foundational philosophy for digital publishing in an AI-driven world, elevating the website from a document display to a structured data API. The concept of the ‘Semantic Sidecar’ is a powerful abstraction that could become a standard for ‘AI Readiness’ certification.

Title Brainstorm

  • Title Option: Semantic Sidecar: Dual-Layer Content for AI Readiness
    • Filename: semantic-sidecar-dual-layer-content-ai-readiness.md
    • Rationale: Directly addresses the core concept of a dual-layered website (HTML for humans, Markdown for AI) and its primary benefit.
  • Title Option: Opening the Service Entrance: Publishing Raw Markdown for AI
    • Filename: opening-service-entrance-publishing-raw-markdown-ai.md
    • Rationale: Uses the metaphor presented in the text to describe the act of exposing content for AI.
  • Title Option: The AI-Ready Web: Semantic Markdown Publication
    • Filename: ai-ready-web-semantic-markdown-publication.md
    • Rationale: Focuses on the outcome (AI readiness) and the method (semantic Markdown).
  • Title Option: From Presentation to Data API: My Markdown Publishing Way
    • Filename: presentation-to-data-api-markdown-publishing-way.md
    • Rationale: Highlights the philosophical shift from presentation to data API and uses ‘way’ as requested, maintaining a positive connotation.

Content Potential And Polish

  • Core Strengths:
    • Clear articulation of a technical challenge and its elegant solution.
    • Strong focus on future-proofing content for AI consumption.
    • Excellent use of metaphors (Semantic Sidecar, Service Entrance, Open Book) to explain complex concepts.
    • Detailed, step-by-step implementation guide for Jekyll/NixOS.
    • Highlights the philosophical shift from “Web 2.0 Presentation” to “Web 3.0 Data Availability.”
  • Suggestions For Polish:
    • Could slightly elaborate on the type="text/markdown" attribute’s importance for parsers.
    • Briefly mention potential SEO benefits beyond just AI consumption, for search engines that might leverage this alternate format.
    • Consider a small note on handling sensitive or private markdown files within this “sidecar” approach to advise caution.

Next Step Prompts

  • Generate a technical tutorial on implementing the ‘Semantic Sidecar’ for popular static site generators like Hugo or Eleventy, adapting the Jekyll/NixOS example.
  • Develop a Python script that leverages the newly exposed rel="alternate" Markdown links to crawl and build a local knowledge base, demonstrating the practical application of the ‘Data API’ concept.