Future-proof your skills and escape the tech hamster wheel with the Linux, Python, vim & git stack (LPvg) including NixOS, Jupyter, FastHTML / HTMX and an AI stack to resist obsolescence. Follow along as I debunk peak data theory and develop Pipulate, the next generation free AI SEO tool.

Future-proofing My Tech: Linux, Python, Vim, Git, and now HTMX

Alright, buckle up, because today was a coding marathon! I wrestled with HTMX, merged projects, and finally got my open-source SEO tool, Pipulate, out into the wild. It's a blend of Linux SEO, AI SEO, and whatever else you want to call it. The big takeaway? I'm diving deep into AI-assisted linear workflows, aiming to teach AIs by letting them watch humans work, and stitching these workflows together like LISP s-expressions. It's all about future-proofing my skills with Linux, Python, vim, and git, while embracing the weird beauty of HTMX and FastHTML. Now, time for sleep.

A 15-Hour Saturday Coding Marathon

Okay, it’s still Saturday and just about 11:00 PM. Today’s coding saga started at 8:00 AM. There was a nap in the middle.

Major Milestones Achieved

The big, complex coding job… it is done.

The “Hello World” of Pipulate workflow templates… it is done.

Hashing through all the WET mess that is HTMX workflow templating, the kind that ensures radical customization ability on par with what’s possible in a Jupyter Notebook, and making a WET example while being very explicit about the WETness… it is done.

Unifying the projects previously known as Botifython and Pipulate into one codebase, and moving that codebase over to public-side GitHub, replacing where my old Pipulate used to be… it is done.

Now, the public has access. The very soft promotion of the system as free and open source SEO, alternatively branded as Linux SEO software, AI SEO, or whatever… it is done.

The actual workflows that live up to that billing… well, that is not done.

Winding Down for the Night

It’s still Saturday.

I take a deep breath and decide not to finish the coffee I started before the post-nap work session. That work session is done. Today’s work is done. It would be silly for me to keep pushing further tonight.

The time has come, the Walrus said, to speak of what I’ve done.

Exploring AI-Assisted Development

Think about it: AI coding assistance right there in a Jupyter Notebook loaded into Cursor AI. That’s new for me as of yesterday, and radically interesting. I’ve got to digest that and work it into my workflow.

This is not going to be a long article. I am going to go back to sleep soon and try to maintain my circadian rhythm, and not knock my immune system out like those times when I push myself because that little bit of extra work really could change my life for the better forever forward.

That kind of work is behind me!

The Challenge of Training AI

Now, I principally have to teach AI what I’ve done. Unfortunately, until they’re generally trained on the concept right in their static base models (unlikely to occur anytime soon), that training has to happen over and over and over.

The Reality of Innovation and Implementation

Good ideas are easy.

Implementation is hard.

Consistency and persistent follow-through are nigh impossible.

Good ideas die on the vine. Good ideas rain down on our heads like a light drizzle, our every waking hour – and likely while we’re asleep too.

Reflecting on Past Ventures

There is no shortage of good ideas. And in fact, first implementations aren’t the hardest things in the world either. I arguably had a system much like Twitter, WordPress, HubSpot and even the frameworks that run them, like Ruby on Rails – all before those became big. It’s just that I did it for “internal intranets” – private websites, as work for hire or as my SEO experiments to make very search-friendly content management systems. I also did it on the dying Microsoft Active Server Pages (.asp) platform. I also didn’t know much about what I was doing, and never doubled down on the computer science part of it. I always just stayed in that scrappy mode, doing herculean lifting for the tiny (in the grand scheme of things) work at hand.

Twenty-Five Years of Tech Evolution

Fast-forward some twenty-five years! Empires get built and crumble in 25 years. The Web and Google both basically had their stellar rise in that time period. Amazon, Tesla and the like. From nothing to empires.

Yet, here I am twiddling bits. I don’t know if it’s jealousy, sour grapes or what. The good ideas I’ve had and implemented in half-assed ways, swinging the bat and making contact with the ball only to consistently hit pop-flies…

Pop-flies… Populate… Pipulate. The conversion of my past pop flies into solid hits, because… because… because I can still stay scrappy, but I can get a lot of the critical computer science bits to fall in place better than I have before… because AI.

The Stack Overflow Story

Now, I could always research on the Web and piece it together from Stack Overflow. In fact, I could tell you the story of Stack Overflow and Joel Spolsky… but you know what, I’m just going to let Perplexity do it…

Perplexity: Joel Spolsky, a software engineer and writer, co-founded Stack Overflow in 2008 with Jeff Atwood, another software engineer, author, and blogger[1][2]. The idea for Stack Overflow originated from the frustration developers faced when searching for programming solutions online, often encountering unhelpful websites or paywalls[2].

The Birth of Stack Overflow

In early 2008, Jeff Atwood called Joel Spolsky to discuss becoming a full-time blogger. Spolsky instead proposed the idea of combining job listings with a Q&A site for developers[1]. Development began in April 2008, with Jeff Atwood leading the coding efforts alongside two other programmers, Geoff Dalgas and Jarrod Dixon[1].

Stack Overflow officially launched in September 2008, quickly becoming an essential tool for programmers worldwide[1]. The founders’ goal was to improve the internet for programmers, without necessarily focusing on creating a profitable business[1].

Key Features and Success Factors

Stack Overflow’s success can be attributed to several factors:

  1. Upvoting/downvoting system inspired by Reddit
  2. Content updating mechanism similar to Wikipedia
  3. Sense of ownership akin to blogging[2]

The site started with plain HTML and grew rapidly, leveraging the existing follower base of both founders’ popular blogs[2][6].

Growth and Impact

Stack Overflow experienced explosive growth, becoming a standard part of every developer’s toolkit within months of its launch[5]. The site currently serves millions of users monthly and has expanded to include numerous Q&A sites on various topics[3].

Company Evolution

Stack Overflow has grown into a successful company:

  • Raised over $18 million in funding
  • Employs hundreds of people
  • Profitable and impactful in the programming community[1][3]

Joel Spolsky served as CEO of Stack Exchange, the parent company of Stack Overflow, until 2019[4][5]. He has been dedicated to improving the world for software developers throughout his career[3].

Stack Overflow’s success has transformed the way programmers learn, share knowledge, and solve problems, making it an integral part of the modern software development landscape[1][5].

Citations:

[1] https://www.joelonsoftware.com/2018/04/06/the-stack-overflow-age/
[2] https://techwithmaddy.com/the-story-behind-the-success-of-stackoverflow
[3] https://www.reddit.com/r/IAmA/comments/gulpx/iama_cofounder_of_stack_exchange_and_fog_creek/
[4] https://stackoverflow.blog/2019/09/17/joel-spolsky-clive-thompson-discuss-coders-software-programming/
[5] https://www.joelonsoftware.com/2021/06/02/kinda-a-big-announcement/
[6] https://meta.stackexchange.com/questions/76126/how-did-stack-overflow-get-started
[7] https://stackoverflow.blog/2021/12/14/podcast-400-an-oral-history-of-stack-overflow-told-by-its-founding-team/
[8] https://www.youtube.com/watch?v=-wtIMTCHWuI

Stack Overflow’s Impact on AI and Programming Knowledge

Joel Spolsky did one thing well. He created a system that organized the world’s programming knowledge so well that it, along with GitHub, was primarily what trained today’s LLM AIs how to program. Sure, the code was in GitHub, but all the discussions and nuance were in Stack Overflow. You can see it right up to today in the citations that are given by all the AIs on matters of coding.

Reflecting on Missed Opportunities in Enterprise Software

That Stack Exchange CMS that was behind it all is not terribly dissimilar to the systems I’ve built over the years for this internal purpose or that. My systems in fact have reached into the worlds of Bills of Manufacturing and Order Fulfillment, making them even more of a total business software solution than a mere CMS. This is the sound of me kicking myself for not doing more with it back when it was so much easier and less crowded. But I didn’t have the business acumen or gumption. But no use kicking oneself. The world is full of such stories. Just look at Digital Research and Gary Kildall… Perplexity?

Perplexity: The story of Digital Research, Gary Kildall, and the lost DOS deal is a pivotal moment in the history of personal computing.

The Rise of Digital Research

Gary Kildall, a computer scientist and professor, founded Digital Research Inc. (DRI) with his wife Dorothy in 1974[1][4]. The company’s flagship product was CP/M (Control Program/Monitor), an operating system Kildall had developed for microcomputers[1]. CP/M quickly became the dominant operating system for early personal computers, running on thousands of different models[1].

CP/M’s Success

By 1981, CP/M was at the peak of its popularity, running on 3000 different computer models. Digital Research was thriving, with annual revenues of $5.4 million[1]. Kildall’s innovations, including the concept of the BIOS (Basic Input/Output System), allowed CP/M to run on various hardware configurations, making it a de facto standard in the industry[1].

The Fateful IBM Meeting

In 1980, IBM was developing its first personal computer and needed an operating system. They approached Digital Research, as CP/M was the obvious choice. However, the meeting did not go as planned. According to popular accounts, Kildall was absent, allegedly flying his private plane. His wife Dorothy, who handled business affairs, was hesitant to sign IBM’s non-disclosure agreement without Gary present[1].

The Microsoft Opportunity

With Digital Research seemingly uninterested, IBM turned to a young Bill Gates and his company, Microsoft. Although Microsoft didn’t have an operating system at the time, Gates saw the opportunity and acquired QDOS (Quick and Dirty Operating System) from Seattle Computer Products. Microsoft adapted QDOS, renamed it MS-DOS, and licensed it to IBM[1].

The Aftermath

The lost IBM deal was a turning point for both Digital Research and the personal computer industry. While CP/M continued to be used, MS-DOS became the standard operating system for IBM PCs and compatibles. This decision shaped the future of personal computing, with Microsoft emerging as the dominant force in PC operating systems[1][4].

Digital Research’s Later Years

Despite the setback, Digital Research continued to innovate. They developed products like MP/M (a multi-tasking version of CP/M), GEM (a graphical user interface), and FlexOS (a real-time operating system)[1][4]. However, the company never regained the market dominance it once held with CP/M.

Kildall’s Legacy

Gary Kildall’s contributions to computing extended beyond CP/M. He co-hosted the TV show “Computer Chronicles,” founded other companies like Activenture (later KnowledgeSet), and continued to innovate in areas such as CD-ROM technology and wireless home networking[1][5].

In 2014, the Institute of Electrical and Electronics Engineers (IEEE) recognized Kildall as the inventor of the first personal computer operating system, acknowledging his significant role in the history of computing[8].

The story of Digital Research and the lost DOS deal remains a cautionary tale in tech history, illustrating how a single business decision can alter the course of an entire industry.

Citations:

[1] https://en.wikipedia.org/wiki/Gary_Kildall
[2] https://ethw.org/w/images/5/53/TheBirthOfThePC-OperatingSystem.pdf
[3] https://computerhistory.org/blog/in-his-own-words-gary-kildall/
[4] https://en.wikipedia.org/wiki/Digital_Research
[5] https://computerhistory.org/blog/fifty-years-of-the-personal-computer-operating-system/
[6] https://www.digitalresearch.biz/Gary.Kildall.htm
[7] https://www.i-programmer.info/history/8-people/305-gary-kildall.html
[8] https://www.kazu.org/2014-04-24/recognizing-the-legacy-of-pacific-grove-inventor-gary-kildall

Reflecting on Gary Kildall and Operating System Development

I mean, I’m no Gary Kildall. I couldn’t program an operating system to save my life. I’m far more downstream, among those who make tools to make tools. It goes from low-level to high-level, abstraction-wise. And Gary Kildall’s story is one of those bizarro alternative-technology-timeline histories that could have been.

The LISP vs UNIX Alternative Timeline

I love telling the one about LISP vs. UNIX. The LISP timeline that could have been. If it wasn’t Unix, it could have been LISP, which blossomed out of MIT and the AI Lab that Marvin Minsky co-founded… well, you know what? Perplexity?

Perplexity: The story of Marvin Minsky and LISP machines is a fascinating journey through the early days of artificial intelligence and computer workstations.

The Birth of LISP and AI

In the late 1950s, John McCarthy developed LISP (LISt Processing) at MIT, which became a fundamental language for AI research[5]. Marvin Minsky, often called the “father of AI,” co-founded MIT’s Artificial Intelligence Laboratory with McCarthy, laying the groundwork for future developments in the field[4].

The Rise of LISP Machines

In 1979, Richard Greenblatt from MIT’s AI Lab formed Lisp Machines, Inc. (LMI) to commercialize LISP machines[1]. Simultaneously, Russell Noftsker founded Symbolics, Inc., attracting many AI Lab hackers[3]. These companies aimed to build computers optimized for running LISP, targeting the growing AI research market.

Symbolics and LMI Competition

Symbolics and LMI competed fiercely in the 1980s:

  • LMI released the LMI-CADR, their first LISP machine[1].
  • Symbolics countered with the LM-2, selling for $70,000 each[1].
  • Both companies developed second-generation products: the Symbolics 3600 and the LMI-LAMBDA[1].

Symbolics gained an edge by attracting more AI Lab talent and producing a wider range of workstations[3].

The AI Boom and Symbolics’ Success

The 1980s saw an AI commercialization boom, with Symbolics computers becoming the preferred platform for AI software development[3]. Symbolics expanded its product line and even developed workstations capable of processing high-definition television quality video[3].

The Emergence of Apollo Computer

While LISP machines were gaining traction, Apollo Computer entered the workstation market in 1980[6]. Founded by William Poduska, Apollo became a significant player in the graphical workstation space alongside Symbolics and Sun Microsystems[6].

The Decline of LISP Machines

Despite initial success, the LISP machine market began to decline in the late 1980s. Several factors contributed to this:

  1. The rise of general-purpose workstations and personal computers.
  2. The AI winter, which saw reduced funding and interest in AI research.
  3. The high cost of specialized LISP machines compared to more versatile alternatives.

HP’s Acquisition and Integration

In 1989, Hewlett-Packard acquired Apollo Computer for $476 million[6]. HP integrated Apollo’s technology into their HP 9000 series of workstations and servers, effectively ending the era of specialized LISP machines[6].

Legacy

While LISP machines and companies like Symbolics and LMI didn’t survive as independent entities, their innovations had a lasting impact on the computer industry. The concept of high-performance workstations for specialized tasks continued to evolve, influencing the development of modern workstations and high-performance computing systems.

Marvin Minsky’s contributions to AI and computer science continued long after the decline of LISP machines. His work laid the foundation for many aspects of modern AI, including neural networks and machine learning[4]. Minsky’s legacy lives on in the ongoing development of AI technologies and the continued use of LISP in certain specialized applications.

Citations:

[1] https://en.wikipedia.org/wiki/Lisp_Machines
[2] https://www.openpa.net/systems/hp_apollo_series_domain_10000_dn10000.html
[3] https://en.wikipedia.org/wiki/Symbolics
[4] https://www.space.com/32153-god-artificial-intelligence-and-the-passing-of-marvin-minsky.html
[5] http://jmc.stanford.edu/articles/lisp/lisp.pdf
[6] https://en.wikipedia.org/wiki/Apollo_Computer
[7] https://www.yergens.net/history-of-workstations.html
[8] https://www.nytimes.com/2016/01/26/business/marvin-minsky-pioneer-in-artificial-intelligence-dies-at-88.html
[9] https://writings.stephenwolfram.com/2016/01/farewell-marvin-minsky-19272016/
[10] https://www.computinghistory.org.uk/det/22982/Symbolics-3620/

The Alternate Timeline: LISP Instead of Unix

Yeah, so that was the bizarro alternative timeline that could have been, instead of Unix, the C programming language (and all its variations), Linux and what we have today. The butterfly effect was… no, not Ken Thompson, though he is a critical lynchpin that factors in next. It was Fernando Corbató’s struggle with sharing computing time… Perplexity?

Perplexity: Fernando José “Corby” Corbató’s journey to invent time-sharing is a pivotal story in the history of computing. Born on July 1, 1926, in Oakland, California, Corbató’s work at MIT in the early 1960s revolutionized how people interact with computers[5].

The Path to Time-Sharing

After earning his doctorate in physics from MIT in 1956, Corbató was recruited by Phil Morse to join the newly formed Computation Center[2]. At the time, computers like the IBM-704 and later the IBM-709 were used in a batch processing mode, where users had to take turns using the machine for short periods[2][5].

Corbató recognized the inefficiency of this system, noting how burdensome compute times were for researchers[3]. Inspired by the concept of time-sharing, which had its roots in military operations like the SAGE air-defense system, Corbató set out to improve “man-machine interaction”[3].

The Birth of CTSS

In the spring of 1961, Corbató initiated a project that would lead to the development of the Compatible Time-Sharing System (CTSS)[2]. Working with a small team, including Marjorie Daggett and Bob Daley, Corbató managed to create a crude prototype by November 1961[1][2].

The CTSS was a groundbreaking system that allowed multiple users to access a computer simultaneously, dramatically increasing the efficiency of program development[5]. It allocated short time slices to each user, creating the illusion of dedicated access to the computer[3].

Overcoming Challenges

Corbató faced several challenges in developing CTSS:

  1. Limited resources: They had to carve out just 5K words of user address space from the standard operating system[2].
  2. Skepticism: Many people did not understand the concept of interactive computing[2].
  3. Technical hurdles: The team had to implement new hardware and software systems to enable multiple users to submit code to a central processor[3].

Despite these obstacles, Corbató persevered, driven by his vision of making computing more accessible and efficient.

Impact and Legacy

The success of CTSS led to the creation of Project MAC at MIT, which further advanced time-sharing technology[2]. Corbató’s work laid the foundation for modern computing concepts:

  1. It introduced the use of passwords, establishing the concept of digital privacy[5].
  2. CTSS included early versions of email, instant messaging, and word processing[5].
  3. The system foreshadowed trends in cloud computing[5].

Corbató’s contributions were recognized with the ACM Turing Award in 1990, acknowledging his role in organizing concepts and leading the development of general-purpose large-scale time-sharing systems[2][8].

Fernando Corbató’s struggle for time-sharing not only revolutionized computing but also democratized access to computational resources, paving the way for the digital revolution we experience today[5].

Citations:

[1] https://people.csail.mit.edu/saltzer/CTSS/CTSS-Documents/CTSS_50th_anniversary_web_03.pdf
[2] https://history.computer.org/pioneers/corbato.html
[3] https://econoben.github.io/2020/08/31/time-sharing.html
[4] https://en.wikipedia.org/wiki/Fernando_J._Corbat%C3%B3
[5] https://news.mit.edu/2019/mit-professor-emeritus-fernando-corby-corbato-computing-pioneer-dies-0715
[6] http://larch-www.lcs.mit.edu:8001/~corbato/
[7] https://www.historyofinformation.com/detail.php?id=787
[8] https://amturing.acm.org/award_winners/corbato_1009471.cfm
[9] https://www.welivesecurity.com/2014/05/23/password-inventor-says-creation-now-nightmare/
[10] https://www.clearobject.com/passwords-digital-security-ctss-and-the-digital-revolution-thank-you-fernando-corbato/

The Birth of MULTICS and Corporate Computing

Ken Thompson’s Journey from West to East

And from there, we had Multics. And then corporate greed kicked in: Bell Labs, Honeywell, GE and the usual military-industrial-complex elite, full of IBM-envy, decided to make the great cash-register in the cloud. Yes, my friends. The cloud was almost invented decades prior to the Internet via MULTICS, the bastardization of the good work of Professor Corbató! And that is where Ken Thompson’s story begins. Actually, it begins earlier, when this West Coast free spirit was wrangled into East Coast uptighttopia. But we’ll start his story at Bell Labs, when he realized the looming MULTICStopia… Perplexity?

Perplexity: The story of Multics and Ken Thompson is a tale of innovation, corporate ambition, and the birth of Unix, which reshaped the computing world.

Multics: A Visionary System

Multics (Multiplexed Information and Computing Service), developed in 1964, was a groundbreaking time-sharing operating system. Spearheaded by MIT (Project MAC), General Electric (GE), and Bell Labs, it aimed to create a secure, flexible, and robust computing utility. Multics introduced revolutionary features:

  • Time-sharing: Allowed multiple users to interact with a single system simultaneously.
  • Security: Implemented access control and was the first system to receive a B2 security rating.
  • Hardware innovations: Supported multiple CPUs sharing memory and introduced segmentation and paging for memory management.
  • Software advancements: Written in PL/I, it influenced programming practices by using a high-level language for an operating system[1][5][7].

Multics was envisioned as a “computer utility,” akin to electricity or telephony, where users could access computing resources on demand. However, its complexity, high cost, and reliance on specialized hardware limited its commercial success[1][3][5].

Bell Labs’ Departure and Ken Thompson’s Role

Bell Labs initially participated in the Multics project but withdrew in 1969 due to frustration with its complexity and delays. Ken Thompson, who had been working on Multics at Bell Labs, found himself at a crossroads. He had developed a game called Space Travel on Multics but lost access to the system when Bell Labs exited the project[6][8].

Thompson’s creativity flourished in this moment of transition. Using an obsolete PDP-7 minicomputer at Bell Labs, he began building a simpler operating system inspired by Multics’ ideas but stripped of its complexity. This effort marked the beginning of Unix.

The Birth of Unix

Thompson collaborated with Dennis Ritchie to create Unix in 1969. Key features included:

  • Simplicity: Unlike Multics, Unix was designed to be lightweight and portable.
  • Hierarchical file system: Introduced structured directories and files.
  • Inter-process communication: Developed pipes for efficient data flow between processes.
  • Programming language innovation: Thompson created the B language, later extended by Ritchie into C, which became the foundation for Unix development[2][6][8].

Unix was initially a tool for internal use at Bell Labs but quickly gained popularity due to its adaptability and efficiency. It became the foundation for modern operating systems.

Legacy of Multics and Unix

While Multics faded into history—its last system shut down in 2000—it left an indelible mark on computing. Its concepts influenced nearly every subsequent operating system, including Unix. Meanwhile, Unix spawned countless derivatives (e.g., Linux, macOS) and became the backbone of modern computing infrastructure[1][5][7].

Ken Thompson’s journey from Multics to Unix exemplifies how constraints can fuel innovation. His work not only addressed the shortcomings of Multics but also laid the groundwork for an era of open systems and portable software that continues to shape technology today.

Citations:

[1] https://multicians.org/history.html
[2] https://www.ebsco.com/research-starters/biography/ken-thompson
[3] https://gunkies.org/wiki/Multics
[4] https://www.chessprogramming.org/Ken_Thompson
[5] https://en.wikipedia.org/wiki/Multics
[6] https://www.britannica.com/biography/Kenneth-Lane-Thompson
[7] https://www.multicians.org/multics.html
[8] https://en.wikipedia.org/wiki/Ken_Thompson

Reflecting on “Do One Thing Well”

One thing well? This is absolutely not the story of doing one thing well. I digressed and sub-referenced down to the operating-system level. Yeah, I guess that’s my favorite thing. I was an Amiga computer freak. I fell in love with the hardware, and was bitterly heartbroken when the bizarro alternative timeline that could have been, Amiga dominance, died. I had a lot of my personal identity tied up in it for some reason. I think I found the ability to express myself, and to play, in that platform like I had seldom found elsewhere. A particular program called Deluxe Paint (aka DPaint) had a lot to do with that.

The Rise and Fall of Perfect Hardware

But then over the years, my view widened. There were perfect storms of idealized, wonderful hardware before. The LISP world was ye olde Amiga. It’s just that the Apollo and Symbolics workstation hardware was way too expensive. Sun Microsystems with their SPARC workstations were in this category too. Amiga brought it down to the masses, but then piddled out and died just like Apollo, Symbolics and Sun. Hmm, maybe it’s just in the stars.

Unix: The True Master of Portability

What settled in and worked was… oh yeah, there it is! It’s Unix, which really did one thing well: portability. Aka interoperability. Aka allowing you to write your software once and port it anywhere. Well, it used to be “port” it anywhere, but these days with all the hardware-level virtualization, it’s more like plain old run it anywhere without the port. This was also the promise of the Java programming language, but it didn’t start there. It didn’t even really start with Unix. It started with BCPL and… uhh… Perplexity?

Perplexity: The story of BCPL (Basic Combined Programming Language) is closely intertwined with the work of British computer scientist Christopher Strachey and his colleague Martin Richards.

Origins and Development

BCPL was developed by Martin Richards in 1966 while he was visiting MIT. It evolved from the more ambitious Combined Programming Language (CPL), which was a collaborative project between Cambridge University and the University of London[1][5]. Strachey, along with others, had been working on CPL since 1962, but the project proved too complex and was never fully realized[4].

Richards designed BCPL as a simplified version of CPL, stripping away many of its complexities while retaining its core concepts. The name initially stood for “Bootstrap CPL” but later became “Basic CPL”[1].

Key Features

BCPL was characterized by its simplicity and portability:

  1. It was typeless, with only one data type: the machine word.
  2. It introduced modern control flow structures and block structure.
  3. It was primarily intended for systems programming, such as operating systems and compilers[1].

Impact and Legacy

BCPL’s influence on the evolution of programming languages was significant:

  1. It directly inspired the B programming language, developed by Ken Thompson at Bell Labs around 1969[9].
  2. B, in turn, led to the creation of C by Dennis Ritchie, making BCPL the grandfather of C.
  3. BCPL was used in early work on the Xerox Alto and the first Multi-User Dungeon (MUD)[1].

Christopher Strachey

Christopher Strachey (1916-1975) was a pioneering British computer scientist who played a crucial role in the development of programming languages[6]:

  1. He was one of the key figures behind CPL, which led to BCPL.
  2. Strachey made significant contributions to the field of programming language theory and design.
  3. In 1959, he proposed an early form of time-sharing in computing[6].
  4. He became the first professor of computer science at Oxford University in 1965[6].
  5. Strachey was instrumental in developing denotational semantics, a method for formally describing the meanings of programming languages[6].

Strachey’s work on CPL and his influence on the computing community at Cambridge and Oxford helped create the environment in which BCPL flourished. His contributions to programming language theory continue to influence computer science to this day.

Citations:

[1] http://gunkies.org/wiki/BCPL
[2] https://community.esri.com/t5/conservation-gis/1916-1975-christopher-strachey-computer-scientist/ta-p/907089
[3] https://cppdepend.com/blog/tag/bcpl-programming-language/
[4] https://history.computer.org/pioneers/strachey.html
[5] https://www.cl.cam.ac.uk/~mr10/cpl2bcpl.pdf
[6] https://en.wikipedia.org/wiki/Christopher_Strachey
[7] https://www.softwarepreservation.org/projects/BCPL
[8] https://www.chessprogramming.org/Christopher_Strachey
[9] https://en.wikipedia.org/wiki/B_(programming_language)

The Origins of Tool-Making Tools

Hmm, okay, I couldn’t resist the one-thing-well topic deconstructing down to the background of tools that are used to make tools that are used to make tools. I could go deeper, but we don’t need to burn our computers from sand. Dependencies kick in on these ginormous operating system endeavors right at about the point where the hardware can be called a general-purpose computer. That’s a bit above a programmable microcontroller, like the Arduino. The Arduino would almost qualify. But once you can load Unix (or Linux) onto a thing, a world of other tool-making tools opens up for running on that hardware.

The Unix Legacy and Its Viral Spread

So the one thing that Unix really did well was open the door to other software on that same machine. So it sort of spread like a virus. Ken Thompson brilliantly let the Unix code leak at his alma mater, Berkeley. And that became the Berkeley Software Distribution, or BSD. It was legally encumbered, so while it was becoming FreeBSD (which Linus didn’t know), Linus Torvalds programmed Linux. That was circa 1991. See the butterfly effect? From Fernando Corbató wanting more time on computers, to Ken Thompson preventing a corporate computing dystopia, to Linus Torvalds rendering moot one of the greatest intellectual property battles of our time (the SCO-Linux battle). I’d be remiss if I didn’t toss in Richard Matthew Stallman (RMS), who sort of finished Linus’ Linux with the cloned Unix command-set and the nifty GNU General Public License (GPL) that really flipped the bird at corporate America, haha!

The Pioneers of Computing

These tool-making tool-makers didn’t do one thing well. They friggin did everything well! And I’m green with envy of their types. If you’ve got to choose heroes in this life, you could do a lot worse than them. Oh, and if you want to throw in the women, one should invoke the inventor of computer software itself, Ada Lovelace, and Grace Hopper, the driving force behind COBOL, the early high-level business language that is still widely in use today. Computer science didn’t only have grandfathers; its grandmothers were pretty badass too!

From Hardware to Language: The Evolution of Abstraction

Anyhow, as we go along the abstraction layers, from the hardware to the operating system, we get languages. In LISP’s case, the operating system was the language. But starting with Unix, the division became much less blurred. Standard C really only has about 40 keywords, and is refreshingly simple compared to all the fancy modern languages with all the bells and whistles. But some of those bells and whistles are really required for mere mortals like me, who wouldn’t be programming at all if they had to use C… simple as it might be with only ~40 keywords. I don’t vibe with it.

Python: The Approachable Programming Language

I vibe with Python. And Python is another one of those genius tool-making tools. It continues the simplification of abstractions. Things you would have to build in C before you could actually start being productive, data structures like lists, tuples and dictionaries, are ready-made in Python. And just as an OS does everything but Unix in particular did one thing well, portability, so too does Python do one thing well: approachability.
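As a minimal, self-contained sketch of what that approachability feels like (nothing Pipulate-specific here, just the built-in types doing work you’d otherwise hand-roll in C):

```python
# Ready-made data structures: no malloc, no hand-rolled hash table, no headers.
steps = ["collect name", "say hello"]            # list: ordered and growable
coords = (40.7128, -74.0060)                     # tuple: a tiny immutable record
state = {"name": "Mike", "step": 1}              # dict: a hash table for free

steps.append("finalize")                         # grow the workflow in place
state["greeting"] = f"Hello {state['name']}"     # string formatting built in

for number, step in enumerate(steps, start=1):   # iteration with no index math
    print(number, step)
print(state["greeting"])
```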

Python is approachable. You would think JavaScript is approachable, given how it’s built into every web browser. But you should compare the experience of doing something productive starting with that versus starting with Python in a Jupyter Notebook. It’s night and day. Python really is that much better than JavaScript, approachability-wise. If it weren’t, it wouldn’t still be rising in popularity in the face of JavaScript’s browser-exclusivity monopoly.

Vim: The Editor That Stands the Test of Time

So, we’ve got portability and approachability. The text editor vim does one thing well: longevity. You wouldn’t think that longevity in a text editor is a killer feature, but depending on what stage of your life you’re reading this at, you’ll either think I’m talking nonsense, because VSCode. Or you’ll totally get it, because you’ve been through a half-dozen things like VSCode which you thought would never go away or change at the time. They all do. Along with years or even decades of developed muscle memory, macro-knowledge and the whole ready-made flow-state… all up in smoke, like that! POOF!

Except vim. Vim’s got longevity. Oh, and Emacs (the two great editors of the eternal text wars). And if you prefer Emacs, more power to you! I mean really, you’ve got some power there if you can master Emacs. But I’m personally not one of those Morbo pulsing-brain aliens. Raymond Hettinger is. You should go check out some of his YouTube videos where he drives Emacs live in front of crowds.

The Future-Proof Tech Stack

Where was I? Oh yeah, one thing well. Tools that make tools that make tools. We’re working our way through the abstractions, from hardware to operating system, to language. We’re at language with Python, but we threw in the text editor (or alternatively, the IDE, for integrated development environment). The editor/IDE isn’t really part of the tech stack that runs your software, but it is still integral enough to the process that you talk about it as if it is. So, that’s Linux, Python and vim.

Git: The Bridge to the World

One piece rounds out future-proofing. If that one thing you’re going to do well is future-proofing your knowledge, capabilities and relevance (having a seat at the table in almost any discussion), well then you want just enough Linux to have a code-execution environment. You want just enough Python to slam out solutions to problems. You want vim so you can do so on any hardware you sit down at, with or without an AI coding assistant, and with or without a graphical desktop. I’m tempted to say Terminal or Shell here as part of the tech stack, but I’m going for brevity, and if you know Linux, Python and vim, you know the command-line interface.

That just brings us to git. Without git, your Linux, Python and vim Kung Fu is on an island. Git offers up the world. It’s my weakest skill in the future-proof tech stack of LPvg (Linux, Python, vim and git), but as anyone following my daily tech journal will see, I’m getting better at it. At very minimum, it gives you infinite undo on all your work in the folder you’re working from for a project. Next, it will give you off-machine or off-site backups. Most people do this with GitHub, but there are so many other ways, with alternatives like Bitbucket and GitLab. And at its most basic, you could just use git to back up to other folders or storage devices locally on the same machine.

The Web Development Conundrum

Covering everything you can do with branching and forking and collaboration and cherry picking and merging with git would be a whole other article, and it’s not like it’s my area of expertise (yet). But it does round out the future-proofing Swiss Army knife of tech that’s useful to carry around in your pocket, in addition to whatever else strikes your fancy. I am definitely not advocating LPvg as your primary or only tech stack. It would make you look excessively old school in a world that values the shiny new toys.

Oh, speaking of shiny new toys, no future-proofing tech stack would be complete without addressing the Web. The Web means the HTTP protocol and HTML markup. It means JavaScript built into the browser, able to interact with the Document Object that the browser builds from the HTML according to the conventional Model (the DOM). It doesn’t mean ReactJS, AngularJS, VueJS, nor any of the other vendor-peddled nonsense whose versions churn with breaking changes faster than you can say not future-proofed at all!

Finding My Programming Language Home

And this situation, where something called the web full stack is almost required for any sort of web development these days, stuck me with quite the conundrum. JavaScript is better for Web development than Python, but I hate JavaScript almost as much as C. And there are other alternatives, like PHP and Rails. But… but… Mikey likey Python!

It’s just that I don’t vibe with those other languages. In my day, I’ve also used Tcl (tickle), VBScript and Perl. Heck, I’ve even used ARexx and Amiga Script! And of course many variations of C, including Apple’s Objective-C and the Smalltalk it’s based on, and Pascal before that. These days, I’m using Nix’s declarative language, which might as well be Haskell, and I’m using Lua as my macro language for NeoVim (a modernized vim), which isn’t far off from Scheme and LISP. I’ve taken a crack at Common LISP and ran for the hills. The story is drier than the Mouse’s long tale in Alice.

The Power of Finding Your Tools

Yes, yes, anything possible with one language is possible with any other, because Turing Complete. You should go YouTube up the Game of Life running on the Game of Life! I mean, any friggin thing is possible. But that doesn’t mean it’s going to make you productive and love doing it. Without vibing with your tools, without the ability to “get it into your fingers” and drop into the flow-state, become transported to some other place – the zone – you’re always going to be hyper-aware and stymied by your tools. What I strive for is spontaneous mastery like a musician playing an instrument.

The HTMX Revolution

And so now, there’s HTMX.

And there is FastHTML bringing HTMX to Python in the most delightful way.

And there is a sort of stunned disbelief that the so-called web full stack isn’t the only game in town.

But it’s strange.

It does indeed “live” in Python.

It absolutely is similar to other ways in which Web development has been brought to Python, in itty bitty decorated functions, bottled up in Flasks that respond like Sinatra.

The New Mental Model: HTMX and HTML Fragments

But still, it’s weird, with all these hx_ attributes stacked vertically in multi-line Python function calls, jamming nested HTML-looking functions right on the response. Over-the-wire HTML fragments are the new mental model here.

It’s an alternative to, and in some ways the cure for, the JSON frenzy and infatuation that swept the world. I mean, I get it. Data structure as program code, instantly parsed (run) and accessible as a data object in memory the moment it loads. It’s a siren’s song of API magic. I do get it. I did try taking up LISP after all, and have Paul Graham’s Blub Paradox meta-view on the matter. It’s just that you don’t really need that lovely parsed data in every situation. Sometimes just the browser’s DOM is enough, and slamming HTML fragments around in the DOM is more than enough. And you don’t even need to think about JavaScript anymore to do that.

Embracing the New Pythonic Pattern

And what you do have to think in is very… ugh! There’s that word… Pythonic. It’s over-used and I hate using it now. But there’s a new pattern in the Pythonic lexicon, and that pattern is the weird HTMX function-call signature, full of all those hx_ attributes, and all those line-breaks in function calls, and it all being crammed directly on the response.
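To make that signature concrete, here is a minimal sketch in the FastHTML idiom, assuming the usual `fast_app` conveniences; the route paths, element ids and handler names are made up for illustration and are not lifted from Pipulate:

```python
from fasthtml.common import *  # FastHTML's FT components plus app helpers

app, rt = fast_app()

@rt("/")
def get():
    # The "weird" new pattern: nested HTML-looking functions returned directly
    # as the response, with hx_ attributes stacked vertically in the call.
    return Div(
        Form(
            Input(name="name", placeholder="Enter Your Name"),
            Button("Submit"),
            hx_post="/hello",      # send the form over the wire...
            hx_target="#result",   # ...and swap the returned fragment in here
            hx_swap="innerHTML",
        ),
        Div(id="result"),
    )

@rt("/hello")
def post(name: str):
    # No JSON, no client-side framework: just an HTML fragment comes back.
    return P(f"Hello {name}")

serve()
```

The point isn’t this particular form; it’s that the response is the HTML fragment itself, and the hx_ attributes say where in the DOM it lands.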

One thing well? HTMX HTML.

The Journey from DRY to WET Code

It’s going to take some time.

My work today “expanding out” some DRY code, making it WETter for customization surface area, and my almost visceral reaction to doing it, tell me so.

I made a whole BaseFlow class to use for superclassing workflows.

Then I obliterated it, and distributed its remaining parts among a workflow template.

That workflow template still used sub-templates in a loop, further avoiding the verbose code repetition I knew I was inching towards. Even the AI coding assistants that were helping me kept “rolling up” code into reusable blocks.

With every ounce of determination, knowing I was doing the right thing, I “unwrapped” the DRY code into the longer version where, because it’s a 2-step hello world example, I literally wrote everything twice (WET: write everything twice).
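To give a flavor of what I mean by WET (this is a toy sketch, not Pipulate’s actual step handlers), here are two steps that could obviously share a helper, spelled out twice on purpose so each remains its own fully customizable surface:

```python
# WET on purpose: each step is written out in full, even though a DRY version
# would factor the shared shape into one reusable function.

def step_01(state):
    # Step 1: collect the name.
    value = input("Enter Your Name: ")
    state["step_01"] = {"name": value}
    return state

def step_02(state):
    # Step 2: build the greeting. Nearly the same shape as step_01, repeated anyway.
    value = f"Hello {state['step_01']['name']}"
    state["step_02"] = {"greeting": value}
    print(value)
    return state

state = {}
state = step_01(state)
state = step_02(state)
```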

The Future of AI-Assisted Workflows

And the HTMX signature, and its relationship to the radical ability to customize workflows, really hit me in the face.

My general Linux, Python, vim & git abilities need to wrap in some new honorary members. Just like I wrapped in Jupyter Notebooks and NixOS (and nix), so too must I wrap in HTMX and the FastHTML implementation with hx_ attributes.

I’ve got to get good at this.

I’ve got to mock up workflows in Notebooks, now with Cursor AI assistance.

And then I need to slam them over with all due haste into shareable web apps.

That’s my one thing well.

That’s the trick that’s going to lead to infinite new tricks (workflows).

Those workflows can be used and carried out directly by humans.

AI will provide assistance to the humans while they carry out those workflows.

In time, the AIs learn more and more about the workflows by watching the humans carry them out, and can start to carry them out themselves.

The Vision: Linear Workflows and AI Learning

And the workflows, which are largely linear in nature, reflecting the top-down flow of a Jupyter Notebook or the piping of data with Unix pipes, can eventually be linked together into more complex hierarchies for more complex derivative workflows, like the linked-list s-expressions in LISP.
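Purely as a conceptual sketch of that linking (my own illustration, not anything implemented in Pipulate yet): linear workflows as flat lists of steps, and derivative workflows as nested lists, the way s-expressions nest:

```python
# Linear workflows are flat lists of step names; composite workflows nest them,
# s-expression style. The runner walks the structure depth-first, in order.
collect_name = ["ask_name", "validate_name"]
greet = ["build_greeting", "show_greeting"]
hello_world = [collect_name, greet]  # a workflow made of workflows

def run(workflow, depth=0):
    for item in workflow:
        if isinstance(item, list):
            run(item, depth + 1)          # recurse into a sub-workflow
        else:
            print("  " * depth + f"running step: {item}")

run(hello_world)
```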

When the one thing you do well is AI-assisted linear workflows, which themselves can be linked together and gradually learned by the AIs, you’ve got a pretty good trick.

Why isn’t this just Goose, Manus and Cline?

Because those don’t have the domain expertise of carefully planned proprietary workflows as a starting point. They’re willy-nilly open-ended things that are going to come up with the same common patterns as everyone else.

Yup.

I’m on the right path.

A Notebook That Teaches You How To Port Itself

Last thing of the night: capture the thought of a notebook that teaches you how to port itself! Notebooks can have embedded markdown documentation, after all!

Me: Think about how you would express all this as documentation in a Jupyter Notebook Hello World example so that it would instruct you on how to port it over to this environment, if that makes sense. Essentially…

#!/usr/bin/env python
# coding: utf-8

# In[1]:

a = input("Enter Your Name:")

# In[2]:

print("Hello " + a)

# In[ ]:

…expands or bloats out into one of these workflows. Now the actual code in the Notebook I’m proposing would remain simple and easy like what’s expressed. But I can use markdown cells for documentation, which would be the ideal place to put the “port” instructions.

I’m not asking for the full blown out thing. Just think about it. We can build it up in multiple passes. I already have the @hello_world.ipynb to which you have access, for context.

Claude 3.7: I understand what you’re looking for - a Jupyter Notebook that serves as both a simple workflow example and documentation for how to port it to your Pipulate HTMX environment. Here’s how I’d approach structuring this documentation:

Hello World Workflow: From Notebook to Pipulate

The notebook could start with a markdown cell explaining the concept:

# Hello World Workflow

This notebook demonstrates a simple two-step workflow:
1. Ask for a user's name
2. Display a greeting

Below, you'll see this workflow implemented as simple Python code cells. After
the implementation, you'll find documentation on how to port this workflow to
the Pipulate HTMX environment.

Then include your existing simple code cells:

a = input("Enter Your Name:")
print("Hello " + a)

After these cells, add markdown documentation explaining the porting process:

## Porting to Pipulate

The simple workflow above can be transformed into a Pipulate HTMX workflow. Here's how each cell maps to the Pipulate environment:

### 1. Workflow Structure

In Pipulate, workflows are defined as a series of steps:

```python
steps = [
    Step(id='step_01', done='name', show='Your Name', refill=True),
    Step(id='step_02', done='greeting', show='Hello Message', refill=False, 
         transform=lambda name: f"Hello {name}"),
    Step(id='finalize', done='finalized', show='Finalize', refill=False)
]
```

Each notebook cell becomes a step in the workflow:
- Cell 1 → step_01: Collects the user's name
- Cell 2 → step_02: Displays the greeting (with transformation)

### 2. Step Handlers

Each step requires two handlers:
- A display handler (shows the input form)
- A submit handler (processes the input)

For example, Cell 1 becomes:

```python
async def step_01(self, request):
    # Display code that shows a form asking for name
    # ...
```

```python
async def step_01_submit(self, request):
    # Process the submitted name
    # ...
```

You could then include more detailed sections on:

  1. HTMX Integration - How the workflow uses HTMX attributes for interactivity
  2. State Management - How user inputs are stored between steps
  3. Validation & Processing - How to add validation and processing logic
  4. Customization Points - Where to modify the code for specific behaviors

This approach gives users both the simple conceptual model (the notebook cells) and the detailed implementation path (the documentation), making it easier to understand how notebook-style workflows translate to your HTMX environment.


AI’s Take

Titles and Headlines:

  • “Coding Marathon: Pipulate’s Public Debut and the HTMX Deep Dive”
  • “From DRY to WET: My Journey into AI-Assisted Workflows”
  • “Future-proofing My Tech: Linux, Python, Vim, Git, and HTMX”
  • “The HTMX Revelation: Pythonic Web Dev and AI-Driven Workflows”
  • “Building the Future: Open-Source SEO and AI Learning Through Workflows”

AI’s Opinion:

This article is a fascinating blend of technical deep-dive and personal reflection. The author’s journey through various programming languages and operating systems, culminating in their current focus on HTMX and AI-assisted workflows, is both informative and engaging. The detailed historical context, particularly the discussions around Unix, LISP, and the evolution of programming tools, provides valuable insights into the foundations of modern computing. The author’s enthusiasm for their project and their vision for the future of AI-driven workflows is palpable, making this a compelling read for anyone interested in the intersection of technology and creativity. The candidness about the challenges and the “WET” coding approach adds a relatable human touch to the technical narrative.