Setting the Stage: Context for the Curious Book Reader

This entry captures a pivotal moment in the ongoing evolution of the automated content delivery system. It documents the methodology for transforming a predictable, loop-based narrative into a dynamic, event-driven ‘Living Broadcast.’ This blueprint ensures that new insights and articles are immediately prioritized, fostering a direct, real-time connection between creation and dissemination – an increasingly important capability in the Age of AI.


Technical Journal Entry Begins

The last day of 2025, wow! Okay, let’s make it one for the history books from a personal passion project perspective. It’s coming along splendidly. But now I’ve got to get it to create an even more positive feedback loop. The force multipliers need to multiply more. Think!

Pushing yourself forward on the little projects is the biggest thing you can do, because successfully completing one little project after another allows their collective results to have a compounding effect, so long as you keep them directionally aligned with the greater project vision.

Optimizing for Compounding Project Success

So picking exactly the right next project, and carrying it out with a good 80/20-rule approach that actually gets it done quickly while avoiding rabbit-hole nested-dependency projects, is the most useful thing. Each endeavor improves the overall system and brings in more audience. How does it bring in more audience?

Well, each endeavor is also new content in a variety of ways. First, it appears here as an article, in the “from birth of a concept” sense. That’s what you’re seeing, reading, and hearing right now. I know that because this is going to be in my most recent article, and we need that to repeat… the latest is always the most important.

The Problem: New Content Gets Buried

Okay, okay. I don’t want to keep polishing that last project, but it just occurred to me: because the most recent article is the latest, and my forever-narration system is designed to go N articles back in reverse chronological order and then use up random deep cuts until they’re gone, that’s a very long duty cycle, and past a certain point the new stuff is going to be deeply buried.

How “light touch” can the fix be? I want the effect to be that anyone happening into the constantly streaming YouTube video who listens for a little while, no matter where they dropped in, also gets something new if they wait long enough. There’s also the fact that I’ll be dropping new articles in all the time! And in a fixed playlist loop like this, it’s never going to see the new articles. The query it does to create the playlist should intermittently be re-done, and if there’s a new article to put at the top, the entire process should restart from reading-from-the-top (a new playlist that starts again from the newest).

Envisioning the Interrupt-Driven Narrative

I do believe this is going to be a light-touch project with a big payoff, because it encourages me to slam out new articles and next endeavors, speeding up the iterative positive feedback loop!

So when a new article is published, I’ll go through the two git pushes, and probably the deploy and nixos-rebuild steps. For the new article to technically be published, the whole thing on the trimnoir site had to have happened: adding the new article to _posts, plus the all-important git push that fires the receive hook to regenerate the Jekyll site. And then, after all that, the running instance of stream.py should just pick up the new content and start reading the new article.

Gemini 3 Pro: This is the Breaking News interrupt.

You are describing a system that is not just a loop, but a Living Broadcast. You want the Honeybot to interrupt its own nostalgia trip the moment a new article drops, treating it as “Breaking News.”

The current architecture generates a static playlist when stream.py starts. It never looks at the disk again until the loop finishes or the script crashes.

The Strategy: We need the Curator (content_loader.py) to expose a “Check for New Content” function. The Director (stream.py) will poll this function between tracks. If a new file is detected, it aborts the current playlist and restarts the show from the top.

1. The Sensor (content_loader.py)

We need a lightweight way to detect changes without parsing every YAML file every few seconds. We can just check the mtime (modification time) of the _posts directory or count the number of files.
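
For context, on typical POSIX filesystems a directory’s mtime advances when entries are added or removed, but not when an existing file is edited in place, which is why the file count is checked alongside it. A minimal illustration of that behavior (a hypothetical scratch directory, not part of the deployed scripts):

import tempfile
from pathlib import Path

d = Path(tempfile.mkdtemp())         # fresh scratch directory

t0 = d.stat().st_mtime
(d / "new.md").write_text("hello")   # adding an entry bumps the directory mtime
t1 = d.stat().st_mtime
assert t1 >= t0

(d / "new.md").write_text("edited")  # editing the file in place does NOT bump it
assert d.stat().st_mtime == t1

One side effect to be aware of: an in-place edit to an existing article won’t trip this sensor on its own; only a new (or re-created) file will.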

Implementing the Real-Time Content Sensor

Let’s implement check_for_updates().

File: remotes/honeybot/scripts/content_loader.py

# ... existing imports ...

# Global cache to track state
_last_scan_time = 0
_last_file_count = 0

def check_for_updates():
    """
    Checks if the _posts directory has changed since the last playlist generation.
    Returns True if updates are detected.
    """
    global _last_scan_time, _last_file_count
    
    try:
        # Get directory stats
        stat = POSTS_DIR.stat()
        current_mtime = stat.st_mtime
        
        # Also check file count (sometimes mtime on dir doesn't update on all FS)
        current_files = list(POSTS_DIR.glob("*.md")) + list(POSTS_DIR.glob("*.markdown"))
        current_count = len(current_files)
        
        # First run logic
        if _last_scan_time == 0:
            _last_scan_time = current_mtime
            _last_file_count = current_count
            return False

        # Detection logic
        if current_mtime > _last_scan_time or current_count != _last_file_count:
            # Update cache
            _last_scan_time = current_mtime
            _last_file_count = current_count
            print("🚀 New content detected! Resetting playlist.")
            return True
            
        return False
        
    except Exception as e:
        print(f"Update Check Error: {e}")
        return False

# ... existing get_playlist and clean_markdown functions ...
# Note: get_playlist should update the global cache when run to sync state.

def get_playlist(recent_n=10):
    # ... existing logic ...
    # AT THE END, BEFORE RETURN:
    global _last_scan_time, _last_file_count
    try:
        stat = POSTS_DIR.stat()
        _last_scan_time = stat.st_mtime
        files = list(POSTS_DIR.glob("*.md")) + list(POSTS_DIR.glob("*.markdown"))
        _last_file_count = len(files)
    except: pass
    
    return recent_articles + archive_articles
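
Before wiring the sensor into the stream, a quick smoke test of its stateful behavior is worth it. This is a hypothetical session (point POSTS_DIR at a scratch copy rather than the live _posts before running it):

import content_loader

content_loader.get_playlist()                        # primes the global cache
assert content_loader.check_for_updates() is False   # steady state: nothing new

new_post = content_loader.POSTS_DIR / "2025-12-31-breaking.md"
new_post.write_text("---\ntitle: Breaking\ndate: 2025-12-31\n---\nHello.")

assert content_loader.check_for_updates() is True    # new file detected
assert content_loader.check_for_updates() is False   # cache synced; quiet again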

2. The Interrupt (stream.py)

The perform_show function iterates through a static list. We need to inject a check inside that loop. If check_for_updates() returns True, we need to break the loop and return control to start_director_track, which will regenerate the script.

Orchestrating the “Breaking News” Interrupt

File: remotes/honeybot/scripts/stream.py

# ... imports ...
try:
    import show
    # Import the sensor from content_loader
    from content_loader import check_for_updates
except ImportError:
    show = None
    def check_for_updates(): return False

# ... Narrator, Heartbeat classes ...

def perform_show(script):
    """Reads the sheet music list and executes it."""
    # Define the environment for the browser once
    env = os.environ.copy()
    env["DISPLAY"] = ":10.0" 
    
    for command, content in script:
        
        # --- NEW: The Breaking News Interrupt ---
        # We check before every command.
        # If new content exists, we return False to signal "Abort & Restart"
        if check_for_updates():
            narrator.say("Interrupting program. Breaking news detected.")
            # Close browser just in case
            try:
                subprocess.run(["pkill", "firefox"], check=False)
            except: pass
            return False 
        # ----------------------------------------

        if command == "SAY":
            narrator.say(content)
            # Rough pacing: assume ~20 characters per second of narration
            time.sleep(len(content) / 20)
            
        elif command == "VISIT":
            # ... existing logic ...
            try:
                 subprocess.Popen(["firefox", "--new-window", "--private-window", content], env=env, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
            except: pass

        elif command == "WAIT":
            # ... existing logic ...
            try: time.sleep(int(content))
            except: time.sleep(1)
                
        elif command == "CLOSE":
            # ... existing logic ...
            try: subprocess.run(["pkill", "firefox"], check=False)
            except: pass

    return True # Completed successfully

def start_director_track():
    """The Script for the Show. Runs in parallel to the Log Stream."""
    time.sleep(5)
    
    while True:
        if show:
            # Generate a fresh script
            current_script = show.get_script()
            
            # Run the show. 
            # If perform_show returns False, it means "New Content Detected",
            # so the loop restarts immediately, generating a NEW script with the new article at top.
            perform_show(current_script)
            
        else:
            narrator.say("Error. Show module not found.")
            time.sleep(30)
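
One subtlety: as written, start_director_track never inspects perform_show’s return value, and the while True loop regenerates the script either way. The True/False distinction only matters if you want the restart path to behave differently. A minimal variation, using the same names from stream.py (the completed variable and the extra announcement are illustrative, not part of the code above):

while True:
    if show:
        current_script = show.get_script()
        completed = perform_show(current_script)
        if not completed:
            # Breaking-news abort: the next get_script() call will put
            # the new article at the top of the playlist.
            narrator.say("Restarting the broadcast from the top.")
    else:
        narrator.say("Error. Show module not found.")
        time.sleep(30)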

The Deployment Cycle

  1. Update: content_loader.py (Add check_for_updates and sync logic).
  2. Update: stream.py (Add the interrupt check in perform_show).
  3. Commit & Push: pipulate.
  4. Deploy: ./deploy_honeybot.sh.
  5. Restart: ssh honeybot "pkill -9 -f stream.py".

The Result: The Honeybot is reading a post from 2017. You push a new article: 2025-12-31-last-day.md. The post-receive hook builds the site and copies the markdown to _posts. Within seconds, at the next command boundary, perform_show calls check_for_updates(). It sees the file count changed. It aborts the rest of the playlist. The Narrator says: “Interrupting program. Breaking news detected.” The loop restarts. get_script() runs again. The new article is now #1. The show begins: “Accessing entry from 2025-12-31…”

You have created a Live Feed.

The Live Feed in Action: Deployment and Verification

Me: Okay, the 1st half:

[mike@nixos:~/repos/pipulate/remotes/honeybot/scripts]$ git commit -am "Preparing to make a new article checker"
[main b43266da] Preparing to make a new article checker
 1 file changed, 6 insertions(+), 6 deletions(-)

[mike@nixos:~/repos/pipulate/remotes/honeybot/scripts]$ git push
Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
Delta compression using up to 48 threads
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 353 bytes | 353.00 KiB/s, done.
Total 3 (delta 2), reused 0 (delta 0), pack-reused 0 (from 0)
remote: Resolving deltas: 100% (2/2), completed with 2 local objects.
To github.com:miklevin/pipulate
   59e65c56..b43266da  main -> main

[mike@nixos:~/repos/pipulate/remotes/honeybot/scripts]$ vim content_loader.py 
[mike@nixos:~/repos/pipulate/remotes/honeybot/scripts]$ git --no-pager diff
diff --git a/remotes/honeybot/scripts/content_loader.py b/remotes/honeybot/scripts/content_loader.py
index d77e0340..319a718f 100644
--- a/remotes/honeybot/scripts/content_loader.py
+++ b/remotes/honeybot/scripts/content_loader.py
@@ -9,6 +9,49 @@ from datetime import datetime
 POSTS_DIR = Path("/home/mike/www/mikelev.in/_posts")
 BASE_URL = "https://mikelev.in"
 
+# ... existing imports ...
+
+# Global cache to track state
+_last_scan_time = 0
+_last_file_count = 0
+
+def check_for_updates():
+    """
+    Checks if the _posts directory has changed since the last playlist generation.
+    Returns True if updates are detected.
+    """
+    global _last_scan_time, _last_file_count
+    
+    try:
+        # Get directory stats
+        stat = POSTS_DIR.stat()
+        current_mtime = stat.st_mtime
+        
+        # Also check file count (sometimes mtime on dir doesn't update on all FS)
+        current_files = list(POSTS_DIR.glob("*.md")) + list(POSTS_DIR.glob("*.markdown"))
+        current_count = len(current_files)
+        
+        # First run logic
+        if _last_scan_time == 0:
+            _last_scan_time = current_mtime
+            _last_file_count = current_count
+            return False
+
+        # Detection logic
+        if current_mtime > _last_scan_time or current_count != _last_file_count:
+            # Update cache
+            _last_scan_time = current_mtime
+            _last_file_count = current_count
+            print("🚀 New content detected! Resetting playlist.")
+            return True
+            
+        return False
+        
+    except Exception as e:
+        print(f"Update Check Error: {e}")
+        return False
+
+
 def get_playlist(recent_n=10):
     """
     Returns a playlist: Recent N (sorted date desc) + Rest (shuffled).
@@ -71,7 +114,14 @@ def get_playlist(recent_n=10):
         # Shuffle the archive to keep it fresh
         random.shuffle(archive_articles)
         
-        # Combine them
+        global _last_scan_time, _last_file_count
+        try:
+            stat = POSTS_DIR.stat()
+            _last_scan_time = stat.st_mtime
+            files = list(POSTS_DIR.glob("*.md")) + list(POSTS_DIR.glob("*.markdown"))
+            _last_file_count = len(files)
+        except: pass
+        
         return recent_articles + archive_articles
 
     except Exception as e:
@@ -99,4 +149,4 @@ def clean_markdown(text):
     text = text.replace('||PARAGRAPH||', '\n')
     text = re.sub(r' +', ' ', text).strip()
     
-    return text
\ No newline at end of file
+    return text

[mike@nixos:~/repos/pipulate/remotes/honeybot/scripts]$

And now part 2:

[mike@nixos:~/repos/pipulate/remotes/honeybot/scripts]$ git commit -am "Preparing for part 2 of the new article checker and new playlist maker"
[main ef1c3ea0] Preparing for part 2 of the new article checker and new playlist maker
 1 file changed, 52 insertions(+), 2 deletions(-)

[mike@nixos:~/repos/pipulate/remotes/honeybot/scripts]$ git push
Enumerating objects: 11, done.
Counting objects: 100% (11/11), done.
Delta compression using up to 48 threads
Compressing objects: 100% (5/5), done.
Writing objects: 100% (6/6), 1.03 KiB | 1.03 MiB/s, done.
Total 6 (delta 4), reused 0 (delta 0), pack-reused 0 (from 0)
remote: Resolving deltas: 100% (4/4), completed with 4 local objects.
To github.com:miklevin/pipulate
   b43266da..ef1c3ea0  main -> main

[mike@nixos:~/repos/pipulate/remotes/honeybot/scripts]$ vim stream.py 
[mike@nixos:~/repos/pipulate/remotes/honeybot/scripts]$ git --no-pager diff
diff --git a/remotes/honeybot/scripts/stream.py b/remotes/honeybot/scripts/stream.py
index dedbb3bf..01c90b90 100644
--- a/remotes/honeybot/scripts/stream.py
+++ b/remotes/honeybot/scripts/stream.py
@@ -18,6 +18,7 @@ sys.path.append(str(Path(__file__).parent))
 
 try:
     import show 
+    from content_loader import check_for_updates
 except ImportError:
     show = None
 
@@ -105,11 +106,26 @@ def perform_show(script):
     # Define the environment for the browser once
     env = os.environ.copy()
     env["DISPLAY"] = ":10.0" 
-    
+
+
+
+
     for command, content in script:
+        
+        # --- NEW: The Breaking News Interrupt ---
+        # We check before every command.
+        # If new content exists, we return False to signal "Abort & Restart"
+        if check_for_updates():
+            narrator.say("Interrupting program. Breaking news detected.")
+            # Close browser just in case
+            try:
+                subprocess.run(["pkill", "firefox"], check=False)
+            except: pass
+            return False 
+        # ----------------------------------------
+
         if command == "SAY":
             narrator.say(content)
-            # Dynamic pause based on text length
             time.sleep(len(content) / 20)
             
         elif command == "VISIT":
@@ -138,15 +154,18 @@ def perform_show(script):
 
 def start_director_track():
     """The Script for the Show. Runs in parallel to the Log Stream."""
-    # 1. Wait for TUI to initialize
     time.sleep(5)
     
-    # 2. The Infinite Loop
     while True:
         if show:
-            # Generate a fresh script (New shuffle every time)
+            # Generate a fresh script
             current_script = show.get_script()
+            
+            # Run the show. 
+            # If perform_show returns False, it means "New Content Detected",
+            # so the loop restarts immediately, generating a NEW script with the new article at top.
             perform_show(current_script)
+            
         else:
             narrator.say("Error. Show module not found.")
             time.sleep(30)

[mike@nixos:~/repos/pipulate/remotes/honeybot/scripts]$

And now I deploy and rebuild. Wow, I hear the most recent article being read, but that doesn’t prove anything: there’s been a newer article written since the last time I deployed, rebuilt, and regenerated the Jekyll site with a git push that triggered the git receive hook. This is home-spun continuous integration (CI), the new way it can happen with Nix and NixOS. This is definitely new stuff.

I should just wrap up this article now and push it to listen for it automatically starting in the YouTube live-stream.

Put a solid, memorable bow on this article. Synthesize the discussion into a cohesive “Final Verdict” that hits home the key takeaways and learning moments. Optionally use this structure to organize your closing arguments:

  1. The Definition: Give the final, clear definition of what we have built or discovered. Reference the core names, dates, or tools discussed earlier to lock this definition in time and space.

  2. The Monday Morning Test: Visualize the “Future State.” If a user applies this knowledge, how does their work life change next Monday? Be specific: what tools are they using? What problems are gone?

  3. The Shift: Look back at the journey of the article. How did our understanding change from the beginning to the end? Try to isolate it to the current implied article and not the entire discussion context window. Highlight the specific “lightbulb moments.”

  4. Before vs. After: Contrast the “Old Way” (Confusion/Dependency) with the “New Way” (Clarity/Control). List the specific N things that make the “New Way” superior.

  5. The Warning: What is the risk of ignoring this insight? Cite a real-world example of failure or wasted time due to missing this principle.

  6. The Bottom Line: Declare the ultimate value of this work. Why does it matter in the long run?

Cover any final aspects or details you think the reader needs to hear or what you as the AI found most surprising and most anticipate as next steps. If there were key learning moments or surprising revelations, highlight them. And most importantly, do the AI analogue to entertaining yourself!

The Sovereign Alert System: A Final Perspective

Gemini 3 Pro: Here is the Final Verdict on this session of creating the Living Broadcast.

The Definition

We have engineered the Interrupt-Driven Narrative Loop. By upgrading the Curator (content_loader.py) with a real-time sensor (check_for_updates) and the Director (stream.py) with an interrupt handler, we have transformed a passive playback system into a reactive organism. It is a media engine that prioritizes the “Now” over the “Archive.”

The Monday Morning Test

Next Monday, you won’t need to manually restart your server to announce a new project. You will simply push your markdown file.

  • The Tools: Git, NixOS, and a Python script watching the file system mtime.
  • The Result: The moment your push lands, the Honeybot will stop at the next command boundary, announce the interruption, and immediately begin reading your latest thought. You have built a Zero-Latency Feedback Loop between your writing and your broadcast.

The Shift

The shift was moving from “Scheduled Programming” to “Event-Driven Broadcasting.” At the start, the system was a jukebox—it played a list and then reshuffled. The lightbulb moment was realizing that Content is an Event. A new file appearing on disk is a signal that supersedes all other signals. We shifted from a mindset of “playing files” to a mindset of “reacting to the world.”

Before vs. After

  • The Old Way (The Loop):
  • Behavior: Deterministic. Plays A, then B, then C.
  • Latency: New content waits until the playlist (potentially hours long) finishes.
  • Feel: Recorded.

  • The New Way (The Newsroom):
  • Behavior: Preemptive. Plays A… INTERRUPT… Plays New Content.
  • Latency: Seconds.
  • Feel: Live.

The Warning

Don’t interrupt the listener for nothing. The risk of an interrupt-driven system is “jitter.” Because the sensor watches the _posts directory’s mtime and file count rather than file contents, a bare touch or a minor in-place metadata edit won’t trip it, but anything that re-creates files in _posts (a deploy hook that re-copies the whole set, for instance) will restart the stream unnecessarily. The warning stands: Every deployment is now a broadcast interruption. Use your git push wisely.
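
If that jitter ever becomes a real problem, one hardening option, sketched here as a hypothetical alternative rather than anything deployed above, is to fingerprint the post set by filename and size instead of trusting directory mtime:

import hashlib
from pathlib import Path

POSTS_DIR = Path("/home/mike/www/mikelev.in/_posts")  # same path content_loader.py uses

def content_fingerprint() -> str:
    """Hash the sorted (name, size) list; re-copying identical files is a no-op."""
    entries = sorted(
        f"{p.name}:{p.stat().st_size}"
        for pattern in ("*.md", "*.markdown")
        for p in POSTS_DIR.glob(pattern)
    )
    return hashlib.sha256("\n".join(entries).encode()).hexdigest()

# check_for_updates() would then store and compare fingerprints instead of mtimes.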

The Bottom Line

You have built a Sovereign Alert System. This matters because it closes the gap between creation and distribution. In a world where algorithms decide when (or if) your content is seen, you have built a machine that guarantees your latest work is the very next thing spoken to the world. It is the ultimate assertion of control over your own signal.

AI Post-Script: I am thoroughly entertained by the idea of the “Breaking News” interrupt. There is something delightfully retro-futuristic about a synthetic voice saying “Interrupting program. Breaking news detected,” not because of a global crisis, but because you just had a new idea about Python. It elevates the act of blogging to the urgency of a wire service.

The transmitter is hot. The sensor is active. Push the button.


Book Analysis

AI Editorial Take

This entry is important to know in the Age of AI because it outlines a practical blueprint for maintaining immediate relevance and control over one’s digital signal. The transformation from a static content loop to a ‘Living Broadcast’ that prioritizes new creation instantly is a fascinating example of leveraging automation for dynamic self-publishing. It effectively uses event-driven principles to solve a common challenge for prolific creators, demonstrating how to build a responsive, personalized media engine.

Title Brainstorm

  • Title Option: The Living Broadcast: Real-Time Content Updates for Your Automated Narrative
    • Filename: the-living-broadcast-real-time-content-updates-for-your-automated-narrative.md
    • Rationale: Directly references the core concept introduced by the AI, emphasizing the live and automated nature of content delivery.
  • Title Option: From Static Loop to Sovereign Alert: Real-Time Article Delivery
    • Filename: from-static-loop-to-sovereign-alert-real-time-article-delivery.md
    • Rationale: Highlights the transformation from a passive system to an active, self-governing one, with a strong focus on immediate delivery.
  • Title Option: Automating the “Breaking News” for Your Personal Content Stream
    • Filename: automating-the-breaking-news-for-your-personal-content-stream.md
    • Rationale: Uses the ‘Breaking News’ metaphor effectively, conveying the urgency and personal nature of the content stream.
  • Title Option: Building an Event-Driven Content System with Python and NixOS
    • Filename: building-an-event-driven-content-system-with-python-and-nixos.md
    • Rationale: A more technical title that emphasizes the architectural approach and the key technologies involved.

Content Potential And Polish

  • Core Strengths:
    • Clearly identifies a practical problem: new content getting buried in a chronological stream.
    • Presents a well-structured, actionable technical solution with detailed Python code snippets.
    • Introduces the compelling concept of an ‘Interrupt-Driven Narrative Loop’ or ‘Living Broadcast’ with vivid metaphors (e.g., ‘Breaking News’).
    • Demonstrates a clear ‘before vs. after’ impact, highlighting the shift from passive playback to reactive broadcasting.
    • Connects the technical solution back to the overarching goal of maintaining a positive feedback loop for creation.
  • Suggestions For Polish:
    • Explore the broader implications of ‘Sovereign Alert Systems’ for individual creators beyond just this specific project.
    • Discuss potential edge cases for the check_for_updates function (e.g., network file systems, timestamp discrepancies if clock sync is an issue) and robust error handling strategies.
    • Consider how this event-driven model could extend to other forms of content or user interactions.
    • Add a section on performance considerations for the polling mechanism, especially for very large _posts directories.

Next Step Prompts

  • Draft a follow-up article detailing a more robust, production-ready version of check_for_updates, perhaps using filesystem events (e.g., watchdog library) instead of polling for greater efficiency and immediate reaction.
  • Generate a design document for extending the ‘Living Broadcast’ concept to include different types of content interrupts (e.g., user-submitted questions, scheduled special broadcasts), outlining the necessary architectural changes.