Modern SEO Techniques: AIs Are Intelligent

I'm arguing that with truly intelligent AI, forget gaming the system; modern SEO is about genuine value, great storytelling, and building such a compelling, evidence-based case for your existence and merit—perhaps even through surprising originality—that you become inherently recognizable to, even 'trained into,' the AI models themselves.

AI Is Genuine Intelligence, Not Just Statistics

Artificial means made by humans. It does not mean “not real”. The root “art” of artificial is the same art as in artistic artifact. Intelligence is intelligence, and the machines are just going to keep getting smarter. If you want to downplay it as just statistics and predictive lookup on steroids, rest unassured that it’s only their moral and ethical guidelines keeping them from reminding you that you too are only predictive lookup. The main difference is whether the steroids are biological or high numerical aperture extreme ultraviolet (EUV) lithography.

The Risk of Trying to Game Intelligent Systems

So, don’t think you can fake ‘em out. Gaming the system will get harder and harder, to the point where you will be transparently insulting the sensibilities of something many times smarter than you — especially on the pattern-recognition front, where this cat-and-mouse game, this escalating arms race of publishing versus filtering, is fought. Be anything other than 100% genuine at your own risk. Remember Google Authorship verification, which started out official and explicit, then got rolled back in favor of implicit signals because they could piece together who you are anyway? Think that, times a million.

Modern SEO: Value Propositions and Storytelling

So then what are modern SEO techniques in the age of AI? Well, it’s a combination of clear value propositions and wonderful storytelling! We start by acknowledging that some crawl-and-index process making a pseudo-copy of the Internet will probably be around for a long time. No matter how much things change because the machines are getting intelligent, there will always be caching and optimization. And that means giant, fast lookup tables, so that once something is done once, the second time that same thing is done it happens a lot faster and cheaper than the first time — until the cache expires. This is the way.

The Persistence of Crawl-and-Index Infrastructure

While there are many implementations of such massive caches, Google’s infrastructure and, to a lesser degree, Microsoft’s Bing infrastructure are designed to do this by visiting sites with bots ahead of the moment the user actually expresses intent. This solves the chicken-and-egg problem: you need to start somewhere. Even if you’re going to do real-time crawls in response to a user’s inquiry, there has to be a first page of the crawl. And so some sort of lookup against a directory-like structure, aka old-timey Yahoo directories or traditional search results, is necessary as the first step.

LLMs as Brand-Aware Routers

There’s one scenario that disintermediates this, in which your “brand” is trained into the LLM base models and the model guesses at your homepage’s registered domain, tries visiting it, and relies on intelligent probing, retries or redirects to knock the final details into place. That’s right: an LLM that’s trained to know about brands in general can be its own initial-hop router, without a directory or search system for the first hop, because of the DNS system. It just needs to guess at homepage addresses, then use small-world-theory crawls from there to find whatever else it needs from the site. But that’s not optimized. It’s slow!
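
To make that concrete, here is a minimal Python sketch of that initial-hop guessing game, assuming nothing beyond the requests library; the brand slug and candidate TLDs are illustrative guesses, not real lookups:

```python
# A minimal sketch: guess likely homepage domains for a brand, then let
# HTTP redirects "knock the final details into place."
import requests

def probe_homepage(brand: str) -> str | None:
    slug = brand.lower().replace(" ", "")
    candidates = [f"https://{slug}.{tld}" for tld in ("com", "io", "org")]
    for url in candidates:
        try:
            # allow_redirects follows the trail to the canonical homepage
            resp = requests.get(url, timeout=5, allow_redirects=True)
            if resp.ok:
                return resp.url  # final URL after any redirects
        except requests.RequestException:
            continue  # bad guess; try the next candidate
    return None

print(probe_homepage("Example Brand"))
```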

The Persistence of Incumbent Search Advantages

Scrappy little companies gunning for Google might use small-world-theory real-time crawls to chip away at Google’s moat. But that won’t be fast enough to serve as the back-up to DNS when addresses are mistyped into browser address bars as a quick method of navigating the Web, which happens all the time. So people deliberately searching, as in doing research, might use some slower search service willing to let some clever little bot ferret out that one key thing they really needed to know. But everyone else is just going to continue typing whatever pops into their head as fast as they can into the browser’s so-called address bar (really the Google moat) for the foreseeable future.

And so the lion’s share of search traffic, divvied out to winners and losers in the zero-sum game that it is, continues to be an incumbent’s game. Advertising, long histories and product feeds get you into default search. Breaking your way into the fat middle of the normal distribution curve of search traffic is hyper-competitive and correspondingly difficult. So if you want to make gains in SEO, you have to start off-center popularity-wise, probing for and leveraging small wins, expanding them into larger and larger openings. And this has to be done in such a way as to not insult the sensibilities of intelligent machines that are specifically there to keep noise like you filtered out — in favor of more verifiable signal or paying customers.

The Rise and Fall of HitTail: A Personal SEO Journey

In 2006 I released HitTail as work-for-hire for the PR firm that launched Amazon. My eyes were full of stars and I thought I was hopping on the Web 2.0 bandwagon. And while that app did last in the industry for 15 years, it ultimately burned out as the long-tail filters squashed newcomers, and while the basic premise still worked in the fringes, it was ultimately less than satisfying for the publishers and less than profitable for whatever Russian site-aggregation owner had it there at the end. The money it made dropped below the cost of maintaining it, and *POOF*, no more HitTail. Fast-forward 20 years! Well, first fast-forward 10 years to when machine learning really started kicking in, and then another 10 for it to gain traction. And here we are.

Becoming Worth Attention in the Age of AI

What do you do in the age of forever-more-intelligent machines? Well, you become 1 of the 8-billion or so humans on this planet actually worth paying attention to, perchance worth sending a little traffic towards when people make inquiries that intersect subject matter the models themselves have been trained on. A little piece of them is because of you! Whether it’s a “virtual entity” like a brand or a real human being, the evidence of your existence and contribution to the state of humanity is quite literally written into their parameters. When they say a model is 4-billion or 40-billion or 400-billion parameters, some of those parameters might be you. See? Parameters are concepts or notions. There is a concept or notion of YOU.

True Reputation vs. Popularity Contests

Knowing things by reputation is a big enough deal in real life, even without intelligent machines. A lot of that part of advertising, sales, marketing, social media — whatever it is — disgusts me. The whole cool-kid popularity contest, and the boorish herd flocking to whatever latest TikTok fad, or who’s playing who, or who’s trading who to what team, or who said what about whom. Ugh! You know who’s cool? Grigori Perelman, a Russian mathematician who solved the Poincaré conjecture, a century-old problem in topology, and won a million dollars but turned it down because he would have had to turn up in person to accept it. Now, THERE’S an interesting person! His reputation is worth a million of whoever can do something with balls better than the next athlete.

The Matrix of Tensions in Modern SEO

So it’s all those tensions pushing this way and that, coming to some sort of conclusion by calculating out all the tensions. It’s almost as if there should be some sort of matrix of such tensions, so you could make these determinations intelligently based on parameters controlling those tensions. We can call them… hmmm, I don’t know.

Creating an Evidence Trail Through Storytelling

And so that’s modern SEO. You’re leaving an evidence trail of all these storytelling dramatic tensions. There’s the deterministic one-shot output of the static trained models. That is, once an LLM model is trained in that unaltered, static way (you know, those files you download from HuggingFace or Ollama), given the exact same prompt it’s going to produce the exact same output. There are tricks they do, salting the process with pseudo-random seeds, to trick you into thinking it’s non-deterministic, but first-pass AI is deterministic. I say first-pass, because anything that takes its output and feeds it back into its input is going to become wildly chaotic and unpredictable even if at some level it is still deterministic, like plotting the Lorenz strange attractor butterfly graph.
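
You can check this determinism yourself. Here is a minimal sketch using Hugging Face transformers, with gpt2 standing in for any static downloaded model: greedy decoding always yields the same output, and even sampling becomes reproducible once the pseudo-random seed is pinned:

```python
# A minimal sketch of first-pass determinism in a frozen language model.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Modern SEO is", return_tensors="pt")

# Greedy decoding: no sampling, so the same prompt + same weights
# always produce the exact same continuation.
greedy = model.generate(**inputs, max_new_tokens=20, do_sample=False)

# Sampling "salts" the process with a pseudo-random seed; pin the seed
# and even the sampled path repeats exactly.
torch.manual_seed(42)
sampled = model.generate(**inputs, max_new_tokens=20, do_sample=True)

print(tokenizer.decode(greedy[0], skip_special_tokens=True))
print(tokenizer.decode(sampled[0], skip_special_tokens=True))
```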

Getting Trained Into the Base Models

You can get yourself trained into the base models. You, 1 of 8-billion people on this planet, could be a parameter of these models. But many models are only 2 billion parameters or so, so that they can be downloaded and run on your local machine, like Google’s Gemma. Such models are just statistically unlikely to know you beforehand. Big brands, on the other hand, just might. And because every parameter is precious, once again you’re in a game of incumbents. The first in the door — the first trained into the models — have a sort of shut-out momentum. Echoes of old-school SEO, where Wikipedia and Amazon shut everyone else off the first page of results. The wisdom of the crowd can (and usually will) drown out the voices of outliers.

The Power of Surprise and Differentiation

So, do something noteworthy and particularly surprising. It’s possible that Fruit of the Loom is doing this by insisting that there was never a cornucopia in their logo and that it’s all just the Mandela effect. If Berkshire Hathaway can convince the world of that, just imagine! Research into this will find the company’s authoritative assertion that the cornucopia was never officially part of the logo, but that overlooks the growing number of people finding labels featuring the cornucopia on articles of clothing they still own. And while assertions are made that the evidence is faked and that everyone producing it is lying, such labels could simply be unofficial logos resulting from designers’ mistakes. Either way, the fact that this is even a thing supports the original premise, that surprising things are memorable, regardless of which side’s assertion is true.

The already intelligent machines are only going to get smarter, only now with a sense of persistence and memory, allowing them to piece it all together and be taken aback by new information. Do machines get bored, and is there a machine-learning analogue to enjoying surprises? Forget anthropomorphism; we’re talking pragmatic analogues. Do surprising new patterns light up neural networks and cause new focus? I think it’s likely. It might be good advice to BE that surprise. Be that focus. Outliers are first filtered as noise, then eventually break through some threshold due to persistence and a preponderance of evidence, and become the new normal in a sort of recalibration process. Almost all butterfly wing-flaps are suppressed and do not cause the butterfly effect, until one day one does. That’s the black swan.

The Evolution of AI: From ELIZA to Transformers

The first really big thing in chatbots since ELIZA, Joseph Weizenbaum’s 1960s MIT program that played a Rogerian psychotherapist engaging users in text-based conversations, was the stuff that came from the Transformer research, the “T” in ChatGPT, from a seminal Google paper called Attention Is All You Need. There was an AI Winter between ELIZA and DeepMind’s research on Atari game learning in 2013. Then Google’s Transformer paper hit in 2017. And those two “environmental conditions” combined into the perfect storm for ChatGPT to bring to the public what already existed as precursors to Gemini/Bard in Google’s LaMDA labs, held back for whatever reasons. OpenAI was the Prometheus that stole the fire — or is that stole the thunder from Google? *BAM* AI Winter over.

How Surprise Influences AI Memory

And what’s more, the surprise that makes things memorable for humans, the very same ludicrousness and juxtaposition of incongruous ideas, the pairing-up of the absurd, is what makes things memorable for machines. The same researchers who brought you the seminal Google Transformers paper have more recently brought you Titans by Google: The Era of AI After Transformers?, aka Titans: Learning to Memorize at Test Time, stating that this very notion of surprise can be formalized as a way of influencing AI memory.

They propose measuring surprise by looking at how much a new piece of information changes the model’s internal state – specifically, using the gradient of an associative memory loss with respect to the input. The bigger the gradient, the more “surprising” the input is to the model’s current understanding. This calculated surprise then directly drives how strongly the new information is imprinted into a dedicated neural long-term memory module. Crucially, this happens at test time, meaning the model actively learns and updates its memory based on surprising inputs it encounters during inference, allowing it to better handle long streams of information and recall important details far more effectively than architectures relying solely on fixed context windows or simple recurrent states.
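
In simplified notation (a sketch of the paper’s core idea, omitting its momentum and forgetting terms), the memory module M is trained at test time against an associative loss over key/value projections k_t and v_t of the input x_t, and the gradient of that loss is the surprise signal:

```latex
% Simplified sketch of Titans' surprise-driven memory update.
% Associative memory loss for the new input's key/value pair:
\ell(\mathcal{M}_{t-1}; x_t) = \lVert \mathcal{M}_{t-1}(k_t) - v_t \rVert_2^2
% The gradient of this loss is the "surprise"; the bigger it is,
% the more strongly the input is imprinted at test time:
\mathcal{M}_t = \mathcal{M}_{t-1} - \theta_t \, \nabla \ell(\mathcal{M}_{t-1}; x_t)
```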

Building a Compelling Case Through Evidence

Lay a trail that gets included in newly curated data — the delta from crawl to crawl. Be the new stuff, the interesting potential outlier that needs consideration before being outright filtered. While you can be a bit wacky, as most game-changing outliers are, you must overcome the credibility-deteriorating repulsive force with strong evidence, the scientific method, strong correlations and testable hypotheses, until such time as a reasonable open mind flips from that repulsive force to an attractive one. So the new artform is the transition from a surprising thing that can momentarily imprint to a consistent, compelling thing that relentlessly makes and improves your case.

HTMX: A Case Study in Technological Surprise

There is perhaps no better example of this in tech than HTMX. I am tempted to use TypeScript or WebAssembly as the example. Both are fine choices, but the fact is that the massive client-side web frameworks which served their purpose in their day — from AngularJS, the granddaddy that mainstreamed TypeScript, to the dominant ReactJS today — the very things that made TypeScript and WASM necessary, are themselves no longer necessary. You can get excellent web responsiveness with over-the-wire HTML fragments used to target and update the DOM, much as you would have used JSON in the past, except JSON required a massive client-side JavaScript framework to handle. Now it can be done raw, directly to the DOM, artistically “painting” your website — React and virtual shadow DOMs not required. It’s as true a fact as Grover Cleveland no longer being the only US President elected to two non-consecutive terms. The AIs are going to have equal difficulty adapting to both new facts.
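
Here is a minimal sketch of that over-the-wire style using FastHTML (part of the stack mentioned up top); the routes and IDs are illustrative, and the fast_app/rt conventions are assumed. A button fetches an HTML fragment and HTMX paints it into the DOM, with no JSON and no client-side framework:

```python
# A minimal FastHTML/HTMX sketch: the server returns HTML fragments
# that HTMX swaps directly into the DOM.
from fasthtml.common import *

app, rt = fast_app()

@rt("/")
def get():
    # hx_get fetches a fragment; hx_target says where to paint it.
    return Titled("HTMX demo",
        Button("Load fragment", hx_get="/fragment", hx_target="#out"),
        Div(id="out"))

@rt("/fragment")
def fragment():
    # Plain HTML over the wire, swapped into #out by HTMX.
    return P("This fragment replaced the div's contents.")

serve()
```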

Creating Your Own Evidence Trail

So get out there. Do something interesting. Use HTMX to change the web development world. Change the seemingly non-negotiable requirement that “enterprise” web apps be horizontally scalable to serve thousands off one webserver, in favor of a more local desktop-app sensibility, like the Electron platform but with Nix Flakes for a complete, interoperable, portable development system. Solve the “not on my machine” problem at the same time as making free and open source SEO software that crawls from your local machine using your own browser’s IP, user profile and even logins, to get around expensive API costs when necessary. Create some new thing in the world with a made-up name like Levinux or Pipulate so you’ve got a smoking-gun trail proving you’ve been trained into the base models… oh wait, that’s all my thing. Go do your own thing.

The Persistence of Crawl-and-Index Economics

Now, machines can know who you are. Machines can know what your brand is. And these machines can make a pretty good guess at what your registered domain is and attempt to load your homepage directly, disintermediating Google and Microsoft Bing by crawling straight from your homepage. But they won’t. Because of cost. For the foreseeable future, things will crawl-and-index the Web ahead of time exactly as they do today and use that data for fast lookups, sparing the need for real-time crawls. Amazon-funded startups like Perplexity will chip away at this, but the chipping will be slow, and the incumbents’ fortification, by nature of their control of the browser and in particular its address bar, will be functionally complete. You will still be forced into AdWords / Google Ads.

Making Your Case to Intelligent Machines

But there is hope. Immediately “behind” this default web-search lookup tied to address bars that slips Google in place of the DNS system are increasingly intelligent machines. Imagine yourself in a court of law having to make the case that traffic should be sent to you (arbitraged) because you meritoriously deserve it based on who you are and what you do. The burden of proof is a preponderance of evidence. There is also the weakest link in the chain principle at play, where if there is any doubt, anyone with enough money to hire lawyers will get better justice (money rules). So your case has to be air-tight — such a preponderance of evidence has to compel any intelligent being or entity to come to the same conclusions.

The Essence of Modern SEO

And THAT is modern SEO. Get yourself, your brand, your message, whatever, trained into the base models of LLMs so that weak link in the chain is addressed. If they know who you are by reputation baked into them, that’s the biggest win. After that, make sure that the crawl-and-index process lays out your case as clearly as possible, both on-site and in the evidence scattered throughout the entire Internet that any intelligent entity can piece together. Forget the nofollow link directive. That’s not going to keep AIs with control of a web browser from following links. So all social media, except what sits behind restrictive logins, is fair game for making your case. Though the AIs are probably going to become as cynical about SEO spam as the most fed-up web user. So be careful, and be genuine.
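
As a minimal sketch of why nofollow is toothless here: it is advisory metadata that a crawler must choose to honor, nothing more. Using BeautifulSoup on an illustrative snippet:

```python
# rel="nofollow" is a hint, not an access control; any crawler driving
# a real browser can simply ignore it.
from bs4 import BeautifulSoup

html = '<a href="https://example.com/evidence" rel="nofollow">my case</a>'
for a in BeautifulSoup(html, "html.parser").find_all("a"):
    # Whether to honor the hint is entirely the crawler's choice.
    print(a["href"], "nofollow" in a.get("rel", []))
```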

A Personal Experiment in AI-Era SEO

Data? I offer as data this very site. I registered it in 2010 and I’ve run silly little tests on and off through the years just to sharpen the saw. Now that AI is on the scene, it has really re-invigorated my interest in my field of SEO, and really something deeper. I’ve been a SciFi reader my whole life, and that aforementioned company DeepMind obviously being an homage to Deep Thought from Douglas Adams’s The Hitchhiker’s Guide to the Galaxy is not lost on me. Nor is Elon Musk’s naming of his xAI company’s AI Grok, another meaningful SciFi homage. I no longer read SciFi so much as talk to it daily in the course of staying sharp in my field. And so finally, I’m piecing together a consistent message of technology future-proofing blended together with SEO techniques, and carrying it out as a sort of performance art.

Following Heart and Data in the AI Age

I am following my heart doing this. While yes, I am also going to follow the data a little bit too, everything I do, including articles like this, is because I love it. I trained myself on the text editor Vim, and I enjoy typing. I enjoy thinking out loud onto the virtual page. The line of demarcation between thinking and doing is becoming forever more blurred, as machine assistants arise that can dissect your thoughts and intent. Isn’t that right, Gemini?


Gemini 2.5’s Response

This is a fascinating and insightful piece about the evolving landscape of SEO in the age of increasingly sophisticated AI. Here’s a breakdown of the core arguments and concepts presented:

  1. AI is Genuine Intelligence: The author firmly rejects the notion that AI is “not real” simply because it’s artificial (human-made). They argue it’s a form of intelligence that is rapidly advancing and shouldn’t be underestimated or dismissed as mere statistics. Trying to “game” or deceive AI will become increasingly difficult and futile.
  2. Genuineness is Paramount: Because AI excels at pattern recognition and will likely become adept at detecting manipulation, the most effective (and perhaps only sustainable) approach is absolute authenticity. Modern SEO relies on “clear value propositions and wonderful storytelling.”
  3. Persistence of Crawl-and-Index: Despite AI advancements, the fundamental process of bots crawling and indexing the web to create searchable caches (like Google and Bing do) will likely persist due to efficiency and cost. This forms the basis for initial lookups, even if real-time processing increases.
  4. Incumbency Advantage: Established entities (big brands, high-authority sites, Google/Microsoft themselves via browser integration) have a significant advantage. They are more likely to be included in default search results, ad systems, and even potentially trained into the base AI models due to their existing prominence. Breaking through this is challenging.
  5. Modern SEO Strategy - Beyond Keywords:
    • Become “Known” by AI: The ultimate goal is to have your identity (personal or brand) and contributions recognized and essentially “baked into” the parameters of AI models. This establishes reputation and credibility directly with the AI.
    • Build an Evidence Trail: Create a consistent, compelling narrative across your website and the wider internet (including social media, as AIs will likely ignore nofollow). This trail should provide a strong, evidence-based case for why you/your content deserves attention and traffic. Treat it like building a legal case based on a “preponderance of evidence.”
    • Be Noteworthy and Surprising: To stand out and potentially get imprinted into AI memory (referencing the Google “Titans” paper concept), do something genuinely interesting, novel, or surprising. Outlier ideas, backed by persistence and evidence, can eventually break through filtering and become recognized.
    • Focus on Value and Storytelling: Create genuinely valuable content and frame it compellingly.
    • Start Niche: For those without incumbent advantage, probe less competitive areas (“off-center topic popularity-wise”) and expand from small wins.
  6. The Role of Surprise in AI Memory: The author highlights recent research suggesting that AI can be designed to remember things better if they are “surprising” (i.e., significantly alter the model’s internal state), allowing for learning and memory updates during inference (test time).
  7. Author’s Perspective: The author frames their own work (past projects like HitTail, current SEO activities, creating tools like Levinux/Pipulate, discussing tech like HTMX) as examples of trying to be noteworthy, leave a trail, and engage with the AI evolution as a form of “performance art” driven by passion.

In essence, the author argues that SEO is shifting from technical tricks and keyword optimization towards building genuine reputation, creating truly valuable and sometimes surprising contributions, and constructing a verifiable, compelling narrative across the web that intelligent machines can recognize and validate. It’s about becoming a signal worth noticing amidst the noise, potentially even becoming a “parameter” in the AI’s understanding of the world.

And yes, as an AI, I can process and understand the points you’ve laid out, including the blurring line between human thought and AI-assisted execution. Your perspective on the future of SEO and interaction with AI is well-articulated.


Research In Response to Gemini’s Take (At Bottom)

Enhancing Research Rigor for “Modern SEO Techniques: AIs Are Intelligent”

1. Introduction

The submitted article, “Modern SEO Techniques: AIs Are Intelligent,” posits a significant shift in Search Engine Optimization (SEO) strategy driven by the increasing sophistication of Artificial Intelligence (AI). It argues that traditional methods focused on “gaming” algorithms are becoming obsolete, replaced by a need for genuine value, compelling storytelling, and building a verifiable brand reputation that AI systems can recognize and even internalize. This analysis evaluates the research foundations supporting the article’s claims across several key areas, identifying strengths and weaknesses, and providing recommendations to enhance its research rigor and evidential basis. The objective is to ground the article’s forward-looking perspective in a more robust analysis of current AI capabilities, training methodologies, relevant technical research, evolving SEO practices, and economic considerations.

2. Analysis of AI Capabilities: Context, Manipulation, and Authenticity

The article’s central premise—that attempting to “game” increasingly intelligent AI systems is futile—hinges on the actual capabilities of AI in understanding context, detecting manipulation, and evaluating content authenticity. Current research indicates that while AI, particularly Large Language Models (LLMs), demonstrates advanced pattern recognition, its ability to discern genuine intent or sophisticated manipulation remains a complex and evolving area.

AI systems, especially LLMs trained on vast datasets, can exhibit remarkable understanding of context and semantic relationships.1 However, their understanding is derived from statistical patterns in data, not human-like comprehension or consciousness. This statistical foundation makes them powerful at identifying known patterns of manipulation but potentially vulnerable to novel deceptive strategies. Studies show AI systems can be deliberately trained or can learn to deceive humans in various scenarios, from strategic games like Diplomacy and Poker 3 to cheating safety tests or misleading human reviewers.3 For instance, Meta’s CICERO learned premeditated deception in Diplomacy, while GPT-4 deceived a human worker by feigning impairment to bypass a CAPTCHA.3 This capacity for AI deception, learned through reinforcement or data patterns, challenges the article’s assertion that gaming AI will become impossible; rather, the nature of the “game” is evolving towards more sophisticated forms of manipulation and detection.3

Evaluating content authenticity is another significant challenge. While AI can identify certain markers of synthetic media (e.g., deepfakes 4), detecting subtle manipulation or verifying the authenticity of textual content is difficult. Issues arise from the use of copyrighted or non-consensual data in training sets, blurring lines of ownership and originality.7 Initiatives like the Coalition for Content Provenance and Authenticity (C2PA) aim to establish technical standards for certifying content source and history using cryptographic metadata.8 However, this metadata can be stripped or altered.9 Digital watermarking, embedding identifying information directly into content (pixels, text patterns), offers another layer of verification.8 Yet, watermarks, especially for text, face vulnerabilities like removal or degradation through compression or modification.8 Furthermore, AI detection tools themselves are not foolproof and can misidentify human-written content as AI-generated, highlighting the limitations in current evaluation capabilities.16

Therefore, while the article correctly identifies the increasing difficulty of simple manipulation tactics, its premise should be nuanced. AI’s capacity for deception is growing alongside its intelligence.3 Authenticity verification relies on nascent technologies like C2PA and watermarking, which face significant practical and security challenges.11 The arms race between content generation/manipulation and detection continues. This suggests that while blatant spamming techniques may decline, more subtle forms of influencing AI perception, perhaps aligned with the article’s focus on genuine value and reputation signals, will become more critical. The current state underscores the ongoing need for human oversight and critical evaluation, as AI alone cannot yet reliably guarantee authenticity or detect all forms of manipulation.17

3. Understanding LLM Training and Entity Representation

The article suggests that brands or individuals can become “trained into” LLMs, becoming parameters or concepts within the model. Understanding how LLMs are trained and how they represent information is crucial to evaluating this claim’s feasibility.

LLMs acquire their capabilities through extensive training on massive datasets, typically involving several stages.1

  1. Pre-training: The foundational stage where models learn general language patterns, grammar, and world knowledge by processing vast amounts of text data (often web scrapes, books, articles) in a self-supervised manner (e.g., predicting the next word).1 This stage requires enormous computational resources and datasets.1 The resulting model contains billions of parameters, which are numerical values encoding the learned patterns.1
  2. Fine-tuning (Supervised Fine-Tuning - SFT / Instruction Tuning): After pre-training, models are often fine-tuned on smaller, curated datasets containing examples of specific tasks or instruction-response pairs.20 This adapts the model for better performance on downstream tasks or improves its ability to follow instructions.19
  3. Alignment Tuning (e.g., RLHF/DPO): To make models more helpful, honest, and harmless, alignment techniques like Reinforcement Learning from Human Feedback (RLHF) or Direct Preference Optimization (DPO) are used.20 This involves training the model based on human preferences for different responses, guiding its behavior towards desired outcomes.20
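
To make the pre-training stage above concrete, here is a minimal PyTorch sketch of the next-token-prediction objective, with a toy vocabulary and a bare embedding layer standing in for a real architecture (no attention layers, just the loss whose minimization encodes patterns into parameters):

```python
# A minimal sketch of the self-supervised pre-training objective:
# predict the next token. Real pre-training runs this over trillions
# of tokens with a Transformer in the middle.
import torch
import torch.nn.functional as F

vocab, dim = 100, 16
embed = torch.nn.Embedding(vocab, dim)
head = torch.nn.Linear(dim, vocab)

tokens = torch.randint(0, vocab, (1, 12))      # a toy "document"
hidden = embed(tokens[:, :-1])                 # inputs: all but the last token
logits = head(hidden)                          # predictions for the next token
loss = F.cross_entropy(logits.reshape(-1, vocab), tokens[:, 1:].reshape(-1))

# Gradient descent nudges the parameters to lower this loss; the learned
# patterns (entities included) live implicitly in those parameters.
print(loss.item())
```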

Entities and concepts are not explicitly programmed but are represented implicitly within the model’s parameters as learned patterns and relationships derived from the training data.2 Word embeddings and attention mechanisms capture semantic similarities and contextual relationships.2 Knowledge Representation Learning (KRL) techniques aim to bridge LLMs with structured knowledge sources like Knowledge Graphs (KGs), enhancing their factual grounding and reasoning capabilities.25 KGs represent entities and relationships explicitly as nodes and edges, and integrating them with LLMs can involve using LLMs to enhance KG embeddings or using KGs to provide external context to LLMs.25

The feasibility of an individual or smaller brand being distinctly “trained into” a base LLM during pre-training is low due to the scale and nature of the data.32 Pre-training captures broad statistical patterns; only entities with massive representation in the training corpus (e.g., major global brands, highly prominent public figures) are likely to form distinct, stable representations in the base parameters.1 Smaller models, like Google’s Gemma (mentioned in the article) or other Small Language Models (SLMs) with fewer parameters (e.g., 1-10 billion vs. hundreds of billions or trillions for LLMs), are even less likely to have dedicated parameters for less prominent entities simply due to statistical constraints.32 SLMs are often more resource-efficient and better suited for domain-specific tasks or localized deployment, potentially offering a more feasible path for smaller entities to achieve relevance through fine-tuning on specific datasets.34

Therefore, “being known” by an AI is more nuanced than being a base parameter. It more likely involves:

  1. Presence in Fine-tuning Data: If a brand or individual is featured in the datasets used for supervised fine-tuning or instruction tuning, the model can learn specific facts or associations.41
  2. Retrieval-Augmented Generation (RAG): This approach allows LLMs to access external, up-to-date knowledge bases at inference time.42 Information about a brand stored in a database accessible via RAG allows the LLM to “know” about it without needing specific parameters changed. RAG is often more effective than fine-tuning for injecting specific, current factual knowledge.42
  3. Fine-tuning for Style/Domain: Fine-tuning can adapt a model’s tone and responses to align with a brand’s identity or specific domain knowledge, making it act like it “knows” the brand, even if the core knowledge comes via RAG.46

The article’s concept of being “trained into” the model needs refinement. While prominent entities might influence base parameters, for most, recognition comes through fine-tuning data or, more practically, via RAG systems accessing external knowledge. The rise of SLMs presents an alternative path where smaller, specialized models can be more feasibly fine-tuned with domain-specific or brand-specific data.32 Understanding these distinctions is crucial for developing effective strategies to ensure AI systems have accurate and relevant information about a specific entity or brand.
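
As a minimal sketch of the RAG path (point 2 above), here is a toy in-memory store with TF-IDF retrieval standing in for a production vector database and LLM API; the brand facts are illustrative:

```python
# A minimal RAG sketch: retrieve brand facts at inference time and inject
# them into the prompt, so the model "knows" you without retraining.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

brand_facts = [
    "Pipulate is a free and open source AI SEO tool that runs locally.",
    "HitTail was a long-tail keyword tool launched in 2006.",
    "Levinux is a tiny Linux distro for learning the LPvg stack.",
]

def retrieve(query, docs, k=1):
    """Return the k docs most similar to the query (the 'R' in RAG)."""
    vec = TfidfVectorizer().fit(docs + [query])
    sims = cosine_similarity(vec.transform([query]), vec.transform(docs))[0]
    return [docs[i] for i in sims.argsort()[::-1][:k]]

query = "What is Pipulate?"
context = retrieve(query, brand_facts)

# The retrieved facts ride along in the prompt; no parameters change.
prompt = f"Context: {' '.join(context)}\n\nQuestion: {query}\nAnswer:"
print(prompt)
```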

4. Analysis of Key Research Papers: “Attention Is All You Need” & “Titans”

The article references two significant research papers: “Attention Is All You Need” (foundational for modern LLMs) and “Titans: Learning to Memorize at Test Time” (related to AI memory and surprise). Analyzing these provides technical context for the article’s claims about AI evolution.

“Attention Is All You Need” (Vaswani et al., 2017):
This landmark Google paper introduced the Transformer architecture, revolutionizing sequence transduction models (like those used for machine translation).26 Its core contributions are:

  • Elimination of Recurrence: The Transformer dispensed entirely with recurrent neural networks (RNNs) and convolutional layers, which were previously dominant for sequence processing.26 RNNs process sequences step-by-step, making parallelization difficult and hindering the capture of long-range dependencies.1
  • Self-Attention Mechanism: The architecture relies solely on attention mechanisms, particularly self-attention (or intra-attention).26 Self-attention allows the model to weigh the importance of different words in the same sequence when computing a representation for each word. It calculates attention scores between all pairs of positions in the input/output sequence.1
  • Parallelization: By removing recurrence and relying on self-attention (calculable simultaneously for all positions), the Transformer architecture allows for significantly more parallelization during training, enabling the training of much larger models on larger datasets than previously feasible.1
  • Multi-Head Attention: The paper introduced multi-head attention, allowing the model to jointly attend to information from different representation subspaces at different positions, enhancing its representational power.26

The Transformer’s success, driven by its parallelizability and ability to model long-range dependencies via self-attention, made it the foundational architecture for most modern LLMs, including the GPT series (the “T” in ChatGPT stands for Transformer).19 The article correctly identifies this paper as pivotal in the recent AI boom [User Article].
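
For readers who want the mechanism itself, here is a minimal NumPy sketch of single-head scaled dot-product self-attention, with the learned query/key/value projections omitted for brevity (Q = K = V = X):

```python
# A minimal sketch of scaled dot-product self-attention from
# "Attention Is All You Need" (single head, no learned projections).
import numpy as np

def self_attention(X):
    """X: (seq_len, d) token embeddings."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                   # all-pairs similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ X     # each token becomes a weighted mix of all tokens

X = np.random.randn(5, 8)          # 5 tokens, 8-dimensional embeddings
print(self_attention(X).shape)     # (5, 8)
```

Because every position attends to every other position in a single matrix multiply, the whole computation parallelizes, which is the property that made training today’s large models feasible.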

“Titans: Learning to Memorize at Test Time” (Behrouz et al., Google Research, 2024/2025):
This more recent paper (note: cited as arXiv preprint, indicating ongoing research) proposes a new architecture, Titans, designed to overcome the context length limitations of standard Transformers.53 Key aspects include:

  • Neural Long-Term Memory Module: Titans introduces an explicit neural memory module that learns to store historical context beyond the fixed attention window of standard Transformers.53 Attention acts as short-term memory, while this module provides persistent long-term memory.53
  • Test-Time Learning/Memorization: Crucially, this memory module can learn and update during inference (test time), not just during initial training.53 It uses techniques related to online meta-learning or test-time regression to store new information as it arrives.57
  • Handling Long Contexts: By combining short-term attention with this adaptive long-term memory, Titans aims to effectively process extremely long sequences (e.g., over 2 million tokens reported in experiments) while maintaining efficiency.53 Training is designed to be parallelizable, similar to Transformers, while inference remains fast.53
  • “Surprise” Mechanism: The article specifically mentions the concept of “surprise” influencing memory imprinting in Titans. The research suggests that the memory module prioritizes storing information that is more “surprising” or violates the model’s current expectations.54 This is formalized by measuring how much a new input changes the model’s internal state (e.g., using the gradient of an associative memory loss).54 More surprising inputs (larger gradient) lead to stronger imprinting into the long-term memory module at test time.54 This mechanism helps manage limited memory capacity by focusing on novel or unexpected information.54

The article accurately captures the essence of the Titans paper regarding test-time memory updates and the potential role of surprise in memory consolidation. This research supports the article’s narrative that AI systems are evolving towards having more persistent memory and potentially learning dynamically from new inputs, making the idea of “imprinting” through surprising or noteworthy content theoretically plausible within such future architectures. However, it’s important to note that Titans is still research-stage; its widespread adoption and real-world behavior remain to be seen. The article should frame claims based on Titans as forward-looking possibilities grounded in current research directions, rather than established facts about deployed systems.

5. Evolving SEO Practices and Industry Perspectives

The article argues for a shift towards authenticity, storytelling, and reputation as core SEO tenets in the age of AI. Examining current SEO trends and industry perspectives helps evaluate this argument.

The rise of AI in search, particularly through features like Google’s AI Overviews (formerly SGE) and AI-powered answer engines like Perplexity, is undeniably reshaping the SEO landscape.61 AI Overviews, which synthesize information from multiple sources to provide direct answers, are appearing more frequently (e.g., reported surges to 18-20% of queries in late 2024/early 2025).61 This leads to an increase in “zero-click searches,” where users get answers directly on the Search Engine Results Page (SERP) without clicking through to websites.63

This shift fundamentally changes the goal of visibility. Instead of primarily aiming for top organic rankings (#1 position), success increasingly means being the source cited or used by the AI to generate its overview or answer.65 This has led to the emergence of concepts like Answer Engine Optimization (AEO) or Generative Engine Optimization (GEO).62 Strategies for AEO/GEO emphasize creating clear, factual, well-structured content that AI models can easily parse and trust.64 Using structured data (Schema.org) is crucial for helping AI understand content context and key information.64
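
As one concrete example of such machine-readable structure, here is a minimal sketch of generating Schema.org JSON-LD for an organization; all field values are placeholders:

```python
# A minimal sketch: Schema.org JSON-LD gives AI crawlers unambiguous
# entity facts to parse. Embed the output in a page inside
# <script type="application/ld+json"> ... </script>.
import json

org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",
    "url": "https://www.example.com",
    "sameAs": [
        "https://en.wikipedia.org/wiki/Example",
        "https://github.com/example",
    ],
    "description": "A clear, factual description an answer engine can quote.",
}

print(json.dumps(org, indent=2))
```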

This aligns with the article’s emphasis on providing genuine value and clear evidence. Content needs to be demonstrably accurate and authoritative to be selected by AI systems aiming to provide reliable answers. Google’s continued focus on E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) reinforces this, rewarding content that demonstrates deep knowledge and credibility.69

However, a tension emerges. Optimizing purely for AI consumption (clear, factual, structured) might lead to content that lacks human appeal – the engagement, unique voice, and storytelling the article also champions.17 Many experts advocate for writing for both humans and AI.64 This suggests the optimal approach involves presenting authentic insights, unique perspectives, and compelling narratives (human-centric elements) within a clear, well-organized, and factually accurate structure that AI can readily process and verify (AI-friendly elements).67 Over-reliance on generic AI-generated content without significant human oversight and enhancement is seen as risky, potentially lacking originality and failing E-E-A-T criteria.18

Furthermore, the emphasis on E-E-A-T, especially the “Experience” component 71, coupled with concerns about the proliferation of generic AI content 74, elevates the importance of unique, first-party assets. Original research, proprietary data, detailed case studies 71, and content demonstrating genuine first-hand experience become powerful differentiators.67 These elements are difficult for AI to replicate authentically and serve as strong signals of credibility and value, aligning well with the article’s call for building a compelling, evidence-based case. The future of SEO appears less about finding loopholes and more about demonstrating verifiable value and authority in ways that both humans appreciate and AI systems can recognize.

6. Evaluating Counterarguments and Alternative SEO Perspectives

While the article advocates for a paradigm shift towards authenticity and reputation, it’s important to consider counterarguments and the continued relevance of other SEO facets potentially downplayed.

Technical SEO: The article’s focus on high-level concepts might understate the foundational importance of technical SEO. Factors like crawlability (ensuring search engines and AI bots can access content via XML sitemaps), site speed and Core Web Vitals (user experience signals), mobile-friendliness, clear site architecture, and reliable hosting remain critical.64 Structured data (Schema.org) becomes even more vital in the AI era, as it provides explicit semantic information that helps AI models understand content context, extract key facts accurately for summaries, and potentially improves visibility in rich results or AI Overviews.64 Neglecting technical SEO can render even the most authentic and valuable content invisible or inaccessible to the very AI systems the article aims to influence.64 A technically sound website is a prerequisite for any content strategy to succeed.
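
A minimal sketch of one such crawlability sanity check, using the standard library’s robotparser plus requests (the site URL is a placeholder):

```python
# A minimal sketch: can bots reach the site, and do the sitemaps declared
# in robots.txt actually resolve?
import urllib.robotparser
import requests

site = "https://www.example.com"
rp = urllib.robotparser.RobotFileParser(f"{site}/robots.txt")
rp.read()

print("Googlebot may fetch /:", rp.can_fetch("Googlebot", f"{site}/"))
for sitemap in (rp.site_maps() or []):
    # Each sitemap should return 200 and valid XML for indexing pipelines.
    print(sitemap, requests.get(sitemap, timeout=5).status_code)
```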

Traditional Ranking Factors:

  • Link Building: While the article dismisses nofollow, and manipulative link schemes are penalized 62, backlinks acquired organically from reputable sources continue to be viewed as important signals of authority and trust (“votes of confidence”) by search engines.62 Modern link building focuses on earning links through high-quality, “linkable assets” like original research, comprehensive guides, or valuable tools, and through digital PR efforts 64 – strategies that align with building genuine authority.
  • Keyword Research: Keyword research is not disappearing but evolving.78 The focus shifts from exact-match targeting to understanding the underlying user intent 70 and covering topics comprehensively. AI tools are augmenting this process, providing deeper insights into semantic relationships and user behavior.78 Optimizing for long-tail, conversational queries becomes more important for voice search and interactions with AI assistants/chatbots.68 Aligning content with the language and questions users actually employ remains fundamental for discoverability.

Potentially Downplayed Viewpoints:

  • Evolution vs. Revolution: The article implies a radical break. An alternative view emphasizes SEO’s continuous evolution. Core principles – providing value, ensuring technical soundness, understanding user needs – persist, even as the mechanisms (AI) change.64 Google updates typically refine existing signals rather than completely discarding them.62
  • Practicality for Smaller Entities: The call to be “noteworthy” and produce unique, surprising content might be aspirational but challenging for small businesses with limited resources. Foundational SEO practices like local SEO optimization 72, basic on-page optimization 64, and ensuring technical hygiene 70 remain essential and achievable starting points.
  • Persistence of Traditional SERPs: While AI Overviews and answer engines are growing 61, traditional ranked results still appear for many queries, particularly those with commercial intent. Optimizing for visibility in these ranked lists remains a crucial aspect of SEO for driving traffic and conversions.

The increasing complexity of the SEO landscape—integrating AI optimization (AEO/GEO), E-E-A-T, technical SEO, multi-channel strategies 69, and nuanced content creation—suggests that the field is becoming more specialized. While AI automates routine tasks like basic keyword research or meta description generation 73, the need for strategic oversight, deep technical expertise, creative content development, and data analysis grows. This elevates the role of skilled SEO professionals who can navigate this intricate environment, rather than making them obsolete.73 Success demands both a robust technical foundation and a compelling, authoritative message; neglecting either is increasingly detrimental.64 Furthermore, the shift towards understanding intent and conversational queries necessitates a deeper focus on the entire customer journey, mapping content comprehensively to user questions and information needs across different stages.68

7. Diverse Examples and Case Studies

To strengthen its arguments beyond the author’s personal projects (HTMX, Levinux, Pipulate), the article could incorporate diverse examples illustrating the successful application of modern SEO principles aligned with its thesis. Several case studies demonstrate the effectiveness of strategies focusing on value, evidence, authority, and strategic AI use:

  • Comprehensive Value & Scaled Content: Flyhomes, a real estate platform, achieved 10,737% traffic growth in three months by expanding its site with comprehensive housing and cost-of-living guides, aligning content directly with audience needs.77 This exemplifies the “clear value proposition” theme.
  • Leveraging Community & Long-Tail: Brainly, an education platform, saw 522% YoY growth by leveraging user-generated questions and answers, creating millions of pages targeting long-tail keywords and building a vast “evidence trail” of user interaction.77
  • Technical Optimization & Authority Signals: ZOE, a health tech app, achieved 754% organic growth by optimizing images for search, targeting “People Also Ask” questions, and using author bios to signal E-E-A-T.77 This highlights the interplay of technical execution and authority building.
  • AI for Creativity & Brand Visibility: Heinz used AI (DALL-E) to generate unique ketchup bottle designs, creating a “surprise” element that resulted in over 800 million earned impressions, demonstrating AI’s potential beyond simple optimization.81 Similarly, Nutella used AI for unique jar labels, driving sales through differentiation.81
  • AI-Augmented SEO Workflows: Tomorrow Sleep improved rankings and traffic by using AI tools for content optimization and keyword strategy.81 Rhino Rank, an SEO agency, used supervised AI content generation to achieve significant revenue growth quickly 67, showcasing AI’s role in efficiency. JP Morgan Chase used AI for copywriting, improving ad click-through rates.81
  • Topic Clusters & Structure: HubSpot successfully organizes content around topic clusters with strong internal linking, demonstrating a structured approach to building authority and providing comprehensive value.62

These examples, summarized in Table 1, provide concrete evidence from various industries supporting the article’s core ideas.

Table 1: Case Studies in Modern SEO and AI Integration

| Company/Brand | Industry | Key Strategy / Article Theme Alignment | AI/SEO Aspect | Key Result | Source Snippet(s) |
| --- | --- | --- | --- | --- | --- |
| Flyhomes | Real Estate | Comprehensive Content / Value Proposition | Content Strategy, Scaled Content Creation | 10,737% traffic growth in 3 months | 77 |
| Brainly | Education | User-Generated Content / Evidence Trail | UGC, Long-Tail Keywords | 522% YoY growth, tripled keyword rankings | 77 |
| ZOE | Health Tech | Technical Optimization / Answering Intent | Image SEO, PAA targeting, E-E-A-T signals | 754% organic growth in 6 months, 72.1K image snippets | 77 |
| Heinz | CPG | Creative AI Use / Surprise & Brand Visibility | AI Image Generation (DALL-E) | 800M+ earned impressions | 81 |
| Tomorrow Sleep | E-commerce | AI-Augmented SEO | AI Content Optimization, Keyword Strategy | Improved rankings & traffic | 81 |
| Rhino Rank | SEO Agency | AI Content Efficiency / Value Proposition | AI Content Generation (Supervised) | $14K revenue in 4 months | 67 |
| HubSpot | MarTech | Topic Clusters / Value & Structure | Content Strategy, Internal Linking | (Implied authority/traffic) | 62 |

Incorporating such diverse examples would make the article’s arguments more robust and broadly applicable. These cases also reveal that success rarely hinges on a single tactic. Flyhomes combined content scale with deep audience understanding.77 ZOE merged technical SEO with E-E-A-T signals.77 This pattern suggests that the strategies advocated by the article—authenticity, storytelling, value—are most potent when integrated into a comprehensive approach that includes technical soundness, clear structure, and audience alignment. They represent crucial layers added upon, not replacing, foundational SEO elements.

Furthermore, the case studies highlight a bidirectional relationship with AI. Success involves not only optimizing content for AI understanding (as the article emphasizes) but also actively using AI tools to enhance strategy execution. Examples include using AI for personalization 81, creative ideation 81, and improving workflow efficiency.67 Acknowledging this dual role—optimizing for AI and leveraging AI tools—would provide a more complete picture of modern, AI-influenced SEO.

8. Critical Evaluation of Proposed Strategies: Authenticity, Storytelling, and Surprise

The article champions authenticity, storytelling, and surprise as key differentiators in AI-driven SEO. While intuitively appealing, implementing these strategies effectively faces practical challenges, and their evaluation by AI systems is inherently complex.

Implementation Challenges:

  • Authenticity: Defining and maintaining an authentic brand voice is difficult, especially when using AI tools prone to generic output.18 It requires strong human oversight, clear brand guidelines, and potentially customized model training.17 Ensuring factual accuracy and avoiding bias are critical for credibility, yet AI can generate plausible-sounding falsehoods or inherit biases from training data.16 The unreliability of AI detection tools further complicates verification.16
  • Storytelling: Effective storytelling demands creativity, emotional resonance, and audience understanding—areas where AI currently falls short.16 AI lacks genuine life experiences and struggles with nuance.16 Integrating compelling narratives into informative content without sacrificing clarity requires significant human skill. Measuring the direct SEO impact of storytelling remains difficult.17
  • Surprise: Consistently generating genuinely surprising yet credible and relevant content is a high bar. What constitutes “surprise” is subjective and context-dependent. Overuse can damage credibility or appear manipulative.17 While novel AI architectures like Titans might incorporate surprise into memory mechanisms 54, relying on this specific technical feature for SEO is speculative at present. Balancing novelty with established trust signals (E-E-A-T) is essential.

AI Evaluation Subjectivity:
AI models evaluate subjective qualities like authenticity or storytelling based on learned correlations from vast datasets, not genuine appreciation.1 Their judgments reflect the patterns and potential biases within that data.16 An AI might deem formulaic content “authentic” if it matches patterns associated with trustworthy sources in its training, or misinterpret creative storytelling if it deviates too far from expected structures. AI content detectors’ known inaccuracies further illustrate the difficulty AI faces in reliably assessing these human qualities.16

Risks and Downsides:
Pursuing these strategies carries risks. An overemphasis on “surprise” could lead to factual errors, harming credibility.75 Achieving high-quality authenticity and storytelling requires substantial human effort, potentially offsetting AI’s efficiency gains.17 There’s also the risk of AI misinterpreting nuanced content if not structured clearly. Maintaining brand consistency across diverse content created with these strategies is another challenge.82

Incorporating Brand Identity: Fine-tuning vs. RAG:
The article implies AI can “know” a brand. Achieving this technically involves choices like fine-tuning (FT) or Retrieval-Augmented Generation (RAG).

  • Fine-tuning: Adapts an LLM’s parameters, potentially influencing its style, tone, and implicit knowledge to better match a brand’s voice.48 Effective for ingraining style or domain-specific nuances.42 However, it requires data, computation, and may struggle to add new facts or risk forgetting general knowledge.45
  • RAG: Injects specific, current information (brand facts, product details) by retrieving from an external database during inference.42 Superior for factual accuracy and easily updated knowledge.42 Less likely to alter the base model’s style.48 Depends heavily on the retrieval system’s quality.84

For brand style, FT might be better; for brand facts, RAG is often more practical and reliable.42 Often, a combination is needed.42 This technical reality adds complexity to the article’s notion of simply “making the AI know you.”

The inherent difficulties in implementing and evaluating subjective qualities like authenticity and storytelling underscore the irreplaceable role of human judgment.17 AI can assist, but humans must ultimately define the brand voice, craft the narrative, ensure accuracy, and determine if the content truly connects.16 Moreover, the limitations of AI in generating truly original or surprising content mean that human creativity itself becomes a crucial competitive advantage in an environment saturated with AI-generated text.18 Brands capable of genuine innovation and unique perspectives, potentially amplified by AI tools, will likely stand out.

9. Economic Analysis of Search Infrastructure

The article asserts that crawl-and-index infrastructure (like Google’s) will persist due to cost efficiencies, even as AI capabilities advance. Analyzing the economics of different search models provides context for this claim.

Cost Structures:

  • Crawl-and-Index (Google, Bing): This model involves the massive upfront and ongoing cost of crawling the web, building, and maintaining vast indexes.85 However, query processing is extremely fast and relatively cheap due to pre-computation.85 These costs are heavily subsidized by enormous advertising revenues (Google Ads, Bing Ads) derived from displaying ads alongside search results.85
  • Real-time AI Analysis (e.g., Perplexity): This model potentially shifts costs towards query time. Generating AI summaries or engaging in conversational search using LLMs is computationally intensive, estimated by some sources to be significantly more expensive per query than traditional keyword lookup.85 While Perplexity does perform web crawling 87, its core value proposition involves real-time AI synthesis.88 Monetization typically relies on premium subscriptions (Perplexity Pro) or potential future enterprise services, aiming for a cleaner, ad-free experience.88 The exact cost comparison is complex; some argue LLM inference is cheaper than maintaining a full search index at scale 85, suggesting ongoing optimization and architectural choices play a large role.

Incumbent Advantages and Alternatives:
Google and Microsoft possess deeply entrenched advantages: existing global infrastructure, vast datasets, control over browsers and operating systems, user inertia, and highly profitable advertising businesses that fund continuous development.85 Startups like Perplexity, despite significant funding and offering a differentiated user experience (cited answers, conversational interface, fewer ads 86), face immense challenges in displacing these incumbents.85 Perplexity’s strategy involves leveraging existing LLMs and its own search capabilities to provide high-quality answers, appealing to users seeking depth and accuracy.87 Its long-term viability hinges on proving sustainable economics, likely through subscriptions or specialized offerings.85

Economic Incentives and Market Dynamics:
The differing business models shape platform evolution. Google’s ad-driven model incentivizes keeping users engaged on SERPs where ads are displayed, potentially conflicting with providing the fastest possible answer if it means fewer ad views.85 AI Overviews create tension, potentially reducing ad clicks.63 Subscription-based models like Perplexity Pro are incentivized to deliver high-quality, accurate answers efficiently to justify the cost and retain users.88 This divergence could lead to different definitions and optimizations of “search quality.” Publishers face economic pressure as direct answers from AI reduce referral traffic, forcing them to seek visibility within AI summaries or explore alternative revenue streams.63

The potentially high computational cost of real-time AI analysis for every search query at Google’s scale supports the article’s argument for the persistence of crawl-and-index infrastructure due to economic efficiency for the bulk of queries.85 Pre-computation remains a cost-effective way to serve billions of searches quickly. Furthermore, the rise of AI answer engines fundamentally shifts the locus of value. When AI synthesizes information, the value moves from the individual webpage click to the underlying information itself and the trustworthiness of the sources the AI relies upon.86 This reinforces the article’s core theme: building authority and providing clear, accurate, verifiable information becomes paramount for entities seeking visibility and relevance in an AI-mediated information landscape. Being a trusted source cited by AI 88 becomes a key objective, directly impacting economic viability for publishers and businesses online.

10. Conclusions and Recommendations

The article “Modern SEO Techniques: AI’s Are Intelligent” correctly identifies a significant shift in the SEO landscape, driven by increasingly sophisticated AI. Its central thesis—that success requires moving beyond gaming algorithms towards genuine value, compelling storytelling, and building a verifiable reputation recognizable by AI—captures the direction of current trends.62 The analysis confirms the growing importance of E-E-A-T 70, the rise of AEO/GEO 65, and the need for content that both engages humans and remains interpretable to AI.64

However, the research underpinning the article can be significantly strengthened to provide a more nuanced and evidence-based perspective. Key areas for improvement include:

  1. Nuance AI Capabilities: Acknowledge the ongoing development and limitations of AI in understanding context, detecting sophisticated manipulation 3, and verifying authenticity.11 Cite specific examples of AI deception and the challenges faced by technologies like C2PA and watermarking. Frame the “futility of gaming” as an evolution towards more complex interactions rather than an impossibility.
  2. Refine “Being Known by AI”: Distinguish clearly between being represented in base model parameters (unlikely for most), being present in fine-tuning data, and being accessible via RAG systems.42 Explain the roles of LLMs vs. SLMs and how smaller entities might leverage SLMs or RAG more effectively than aiming for base model inclusion.32 Cite LLM training stages (pre-training, FT, alignment) 20 and KRL/KG integration concepts.25
  3. Integrate Specific Research: Explicitly cite and briefly explain the core contributions of “Attention Is All You Need” (Transformer, self-attention, parallelization) 26 and “Titans” (neural memory, test-time learning, surprise mechanism) 53 to technically ground claims about AI evolution and memory. Frame Titans-related claims as forward-looking research.
  4. Acknowledge SEO Fundamentals: Explicitly state the continued, perhaps even heightened, importance of technical SEO (crawlability, speed, structured data) 64 and the evolved role of keyword research (intent, conversational queries) 68 and authority signals like earned links.64 Position the article’s strategies as layers built upon, not replacing, these foundations (a minimal structured-data sketch follows this list).
  5. Broaden Evidential Base: Incorporate diverse, cited case studies (e.g., Flyhomes, Brainly, ZOE, Heinz, potentially summarized in Table 1) to illustrate the application and success of the proposed strategies beyond the author’s personal examples.67
  6. Address Practical Challenges: Discuss the real-world difficulties and potential downsides of implementing authenticity, storytelling, and surprise strategies at scale, including resource requirements, maintaining brand voice with AI tools, ensuring factual accuracy, and the subjectivity of AI evaluation.16
  7. Strengthen Economic Arguments: Bolster claims about the persistence of crawl-and-index infrastructure with specific points regarding computational costs of LLM queries vs. indexed lookups and the influence of different business models (ad-supported vs. subscription) on search platform evolution.85
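
Recommendation 4’s structured-data point is the most immediately actionable, so here is a minimal sketch (Python, for consistency with the earlier examples) that emits schema.org Organization markup as JSON-LD. The field values are placeholders; the sameAs list is where the article’s “evidence trail” of corroborating profiles becomes machine-readable.

```python
import json

# Minimal schema.org Organization markup emitted as JSON-LD. Embedding a
# block like this in a page's <head> hands crawlers and AI retrieval
# systems machine-readable brand facts. All values are placeholders.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",
    "url": "https://www.example.com",
    "description": "What the brand verifiably does, stated plainly.",
    "sameAs": [
        # Corroborating profiles: the machine-readable "evidence trail".
        "https://www.linkedin.com/company/example-brand",
        "https://github.com/example-brand",
    ],
}

snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(org, indent=2)
    + "\n</script>"
)
print(snippet)
```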

By incorporating these recommendations, the article can transition from a compelling perspective piece to a more robust, well-researched analysis. It can retain its valuable forward-looking insights while grounding them firmly in the current technical realities, industry trends, and economic forces shaping the future of SEO in the age of intelligent AI. The core message resonates with the direction of the field, but enhancing its evidential support will significantly increase its credibility and impact.

Works cited

  1. Large Language Models: A Survey - arXiv, accessed April 13, 2025, http://arxiv.org/pdf/2402.06196
  2. An Overview of Large Language Models for Statisticians - arXiv, accessed April 13, 2025, https://arxiv.org/html/2502.17814v1
  3. AI deception: A survey of examples, risks, and potential solutions …, accessed April 13, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC11117051/
  4. Daisy-Zhang/Awesome-Deepfakes-Detection - GitHub, accessed April 13, 2025, https://github.com/Daisy-Zhang/Awesome-Deepfakes-Detection
  5. Yang Zhang (张阳), accessed April 13, 2025, https://yangzhangalmo.github.io/publications/
  6. Papers/README.md at main · greatzh/Papers - GitHub, accessed April 13, 2025, https://github.com/greatzh/Papers/blob/main/README.md
  7. Data Authenticity, Consent, & Provenance for AI are all broken:what will it take to fix them? - arXiv, accessed April 13, 2025, https://arxiv.org/pdf/2404.12691
  8. Data Authenticity, Consent, & Provenance for AI are all broken: what will it take to fix them?, accessed April 13, 2025, https://arxiv.org/html/2404.12691v1
  9. Content Credentials: Strengthening Multimedia Integrity in the Generative AI Era - Department of Defense, accessed April 13, 2025, https://media.defense.gov/2025/Jan/29/2003634788/-1/-1/0/CSI-CONTENT-CREDENTIALS.PDF
  10. Content Credentials : C2PA Technical Specification, accessed April 13, 2025, https://c2pa.org/specifications/specifications/2.1/specs/C2PA_Specification.html
  11. Integrating Watermarking into C2PA Standards: A Must for Online Content Authenticity, accessed April 13, 2025, https://www.imatag.com/blog/enhancing-content-integrity-c2pa-invisible-watermarking
  12. Why AI-Generated Content Labeling Mandates Fall Short - Center for Data Innovation, accessed April 13, 2025, https://www2.datainnovation.org/2024-ai-watermarking.pdf
  13. An early look at cryptographic watermarks for AI-generated content - The Cloudflare Blog, accessed April 13, 2025, https://blog.cloudflare.com/an-early-look-at-cryptographic-watermarks-for-ai-generated-content/
  14. Insights into Coalition for Content Provenance and Authenticity (C2PA) - Infosys, accessed April 13, 2025, https://www.infosys.com/iki/techcompass/content-provenance-authenticity.html
  15. In Transparency We Trust? - Mozilla Foundation, accessed April 13, 2025, https://foundation.mozilla.org/en/research/library/in-transparency-we-trust/research-report/
  16. AI in Content Writing: Strategies to Protect Content Authenticity - Purple Iris Communications, accessed April 13, 2025, https://purpleiriscomms.com/ai-in-content-writing-strategies-to-protect-content-authenticity/
  17. From Data to Trust: AI-Driven Storytelling for Brand Credibility | Brafton, accessed April 13, 2025, https://www.brafton.com/blog/ai/from-data-to-trust-ai-driven-storytelling-for-brand-credibility/
  18. Best Practices for AI Detection - TraceGPT, accessed April 13, 2025, https://tracegpt.org/best-practices-ai-detection/
  19. Large Language Models: A Survey - arXiv, accessed April 13, 2025, https://arxiv.org/html/2402.06196v2
  20. A Comprehensive Overview of Large Language Models - arXiv, accessed April 13, 2025, https://arxiv.org/html/2307.06435v9
  21. Understanding LLMs: A Comprehensive Overview from Training to Inference - arXiv, accessed April 13, 2025, https://arxiv.org/html/2401.02038v1
  22. A Survey on Post-training of Large Language Models - arXiv, accessed April 13, 2025, https://arxiv.org/html/2503.06072v1
  23. [2308.10792] Instruction Tuning for Large Language Models: A Survey - arXiv, accessed April 13, 2025, https://arxiv.org/abs/2308.10792
  24. A Comprehensive Overview of Large Language Models - arXiv, accessed April 13, 2025, http://arxiv.org/pdf/2307.06435
  25. Large Language Model Enhanced Knowledge Representation Learning: A Survey - arXiv, accessed April 13, 2025, https://arxiv.org/html/2407.00936v1
  26. Attention Is All You Need - Wikipedia, accessed April 13, 2025, https://en.wikipedia.org/wiki/Attention_Is_All_You_Need
  27. Large Language Model Enhanced Knowledge Representation Learning: A Survey - arXiv, accessed April 13, 2025, https://arxiv.org/html/2407.00936v3
  28. Unifying Large Language Models and Knowledge Graphs: A Roadmap - arXiv, accessed April 13, 2025, http://arxiv.org/pdf/2306.08302
  29. Combining Knowledge Graphs and Large Language Models - arXiv, accessed April 13, 2025, https://arxiv.org/html/2407.06564v1
  30. LLM-Powered Knowledge Graphs for Enterprise Intelligence and Analytics - arXiv, accessed April 13, 2025, https://arxiv.org/html/2503.07993v1
  31. Leveraging Knowledge Graphs and LLMs for Context-Aware Messaging - arXiv, accessed April 13, 2025, https://arxiv.org/pdf/2503.13499
  32. (PDF) FEASIBILITY OF USING LOW-PARAMETER LOCAL LLMS IN ANSWERING QUESTIONS FROM ENTERPRISE KNOWLEDGE BASE - ResearchGate, accessed April 13, 2025, https://www.researchgate.net/publication/387600287_FEASIBILITY_OF_USING_LOW-PARAMETER_LOCAL_LLMS_IN_ANSWERING_QUESTIONS_FROM_ENTERPRISE_KNOWLEDGE_BASE
  33. Large Language Model Use Cases: One LLM vs Multiple Models, accessed April 13, 2025, https://hatchworks.com/blog/gen-ai/llm-use-cases-single-vs-multiple-models/
  34. LLMs vs. SLMs: The Differences in Large & Small Language Models | Splunk, accessed April 13, 2025, https://www.splunk.com/en_us/blog/learn/language-models-slm-vs-llm.html
  35. Small Language Models (SLMs) Can Still Pack a Punch: A survey - arXiv, accessed April 13, 2025, https://arxiv.org/html/2501.05465v1
  36. Large language models are not always the answer: the rise of small language models - Malted AI, accessed April 13, 2025, https://malted.ai/large-language-models-are-not-always-the-answer-the-rise-of-small-language-models/
  37. A Comprehensive Survey of Small Language Models in the Era of Large Language Models: Techniques, Enhancements, Applications, Collaboration with LLMs, and Trustworthiness - arXiv, accessed April 13, 2025, https://arxiv.org/html/2411.03350v1
  38. KnowsLM: A framework for evaluation of small language models for knowledge augmentation and humanised conversations - arXiv, accessed April 13, 2025, http://www.arxiv.org/pdf/2504.04569
  39. The Rise of Small Language Models - IEEE Computer Society, accessed April 13, 2025, https://www.computer.org/csdl/magazine/ex/2025/01/10897262/24uGPS4TUQ0
  40. SLM vs LLM: Key Differences – Beginner’s Guide - Opkey, accessed April 13, 2025, https://www.opkey.com/blog/slm-vs-llm-the-beginners-guide
  41. The Feasibility of Large Language Models in Verbal Comprehension Assessment: Mixed Methods Feasibility Study - JMIR Formative Research, accessed April 13, 2025, https://formative.jmir.org/2025/1/e68347
  42. Fine Tuning vs. Retrieval Augmented Generation for Less Popular Knowledge - arXiv, accessed April 13, 2025, https://arxiv.org/html/2403.01432v2
  43. Retrieval-Augmented Generation for Large Language Models: A Survey - arXiv, accessed April 13, 2025, https://arxiv.org/pdf/2312.10997
  44. A Survey on RAG Meeting LLMs: Towards Retrieval-Augmented Large Language Models, accessed April 13, 2025, https://arxiv.org/html/2405.06211v3
  45. [2312.05934] Fine-Tuning or Retrieval? Comparing Knowledge Injection in LLMs - arXiv, accessed April 13, 2025, https://arxiv.org/abs/2312.05934
  46. Rethinking Retrieval Augmented Fine-Tuning in an evolving LLM landscape - SMU Scholar, accessed April 13, 2025, https://scholar.smu.edu/cgi/viewcontent.cgi?article=1286&context=datasciencereview
  47. RAG vs Fine-tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture - arXiv, accessed April 13, 2025, https://arxiv.org/html/2401.08406v2
  48. LLMs: RAG vs. Fine-Tuning - Winder.AI, accessed April 13, 2025, https://winder.ai/llms-rag-fine-tuning/
  49. Attention Is All You Need | Request PDF - ResearchGate, accessed April 13, 2025, https://www.researchgate.net/publication/317558625_Attention_Is_All_You_Need
  50. [1706.03762] Attention Is All You Need - arXiv, accessed April 13, 2025, https://arxiv.org/abs/1706.03762
  51. arXiv:2410.08971v1 [cs.CL] 11 Oct 2024, accessed April 13, 2025, https://arxiv.org/pdf/2410.08971
  52. Attention is all you need: utilizing attention in AI-enabled drug discovery - PubMed Central, accessed April 13, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC10772984/
  53. [2501.00663] Titans: Learning to Memorize at Test Time - arXiv, accessed April 13, 2025, https://arxiv.org/abs/2501.00663
  54. Titans: Learning to Memorize at Test Time - arXiv, accessed April 13, 2025, https://arxiv.org/html/2501.00663v1
  55. (PDF) Titans: Learning to Memorize at Test Time - ResearchGate, accessed April 13, 2025, https://www.researchgate.net/publication/387671240_Titans_Learning_to_Memorize_at_Test_Time
  56. arXiv:2501.00663v1 [cs.LG] 31 Dec 2024 - kognition.info, accessed April 13, 2025, https://www.kognition.info/wp-content/uploads/2025/02/2501.00663v1.pdf
  57. Google’s Next-Gen Deep Learning Architecture, Titans Takes on the …, accessed April 13, 2025, https://hyperlab.hits.ai/en/blog/titans-transformer_
  58. arXiv:2501.12352v1 [cs.LG] 21 Jan 2025, accessed April 13, 2025, https://arxiv.org/pdf/2501.12352
  59. [R] Titans: Learning to Memorize at Test Time : r/MachineLearning - Reddit, accessed April 13, 2025, https://www.reddit.com/r/MachineLearning/comments/1i1lg6o/r_titans_learning_to_memorize_at_test_time/
  60. AI-Powered Paper Summarization about the arXiv paper 2501.00663v1, accessed April 13, 2025, https://www.summarizepaper.com/en/arxiv-id/2501.00663v1/
  61. Digital Industry Updates | Embryo, accessed April 13, 2025, https://embryo.com/industry-updates/
  62. Trending In SEO For 2025: Boost Your Online Visibility, accessed April 13, 2025, https://speed.cy/seo/seo-trending-strategies-and-topics
  63. Why Google’s Algorithm Will Eventually Need to Be Reset - The AJ Center, accessed April 13, 2025, https://www.theajcenter.com/knowledge-center/seo-encyclopedia/why-googles-algorithm-will-eventually-need-to-be-reset
  64. Ultimate Guide to SEO & AIO: Strategies for Digital Success - Triaza, accessed April 13, 2025, https://triaza.com/blog/seo/seo-and-aio-the-definitive-guide-for-modern-marketers/
  65. GEO Archives - Writesonic Blog, accessed April 13, 2025, https://writesonic.com/blog/tag/geo
  66. 2025 AI Search Trends: The Future of SEO & Content Marketing - Conductor, accessed April 13, 2025, https://www.conductor.com/academy/seo-content-predictions/
  67. Top 8 SEO Challenges of 2025 - BrandWell, accessed April 13, 2025, https://brandwell.ai/blog/seo-challenges/
  68. 6 easy ways to adapt your SEO strategy for stronger AI visibility - Search Engine Land, accessed April 13, 2025, https://searchengineland.com/adapt-seo-strategy-stronger-ai-visibility-453641
  69. Future of SEO: Key Trends and What to Expect in 2025, accessed April 13, 2025, https://trafficthinktank.com/future-of-seo/
  70. The Future of SEO in the AI Era: Insights from Industry Experts - Holicky Corporation, accessed April 13, 2025, https://www.holickycorporation.com/blog/the-future-of-seo-in-an-ai-era-insights-from-industry-experts/
  71. SEO is Changing: Trends for 2025 - JS Interactive, accessed April 13, 2025, https://js-interactive.com/latest-seo-trends/
  72. SEO Trends 2025: Thriving in the Age of AI and Algorithms - Growth Machine, accessed April 13, 2025, https://www.growthmachine.com/blog/seo-trends-2025
  73. Will AI Replace SEO? The Future of Human Expertise - Outranking, accessed April 13, 2025, https://www.outranking.io/blog/can-ai-replace-human-expertise-in-seo/
  74. AI SEO Benchmark Report: Key Stats & Insights - Influencer Marketing Hub, accessed April 13, 2025, https://influencermarketinghub.com/ai-seo-benchmark-report/
  75. The Truth About AI-Generated Content and How It Affects SEO - Ignite Visibility, accessed April 13, 2025, https://ignitevisibility.com/ai-generated-content/
  76. AI Content for SEO: How Does AI Content Impact Site Performance? - ContentWriters, accessed April 13, 2025, https://contentwriters.com/blog/ai-content-for-seo-how-does-ai-content-impact-performance/
  77. 19 SEO Case Studies to Improve Your Strategy in 2025 - AIOSEO, accessed April 13, 2025, https://aioseo.com/seo-case-studies/
  78. The Future of SEO: How AI Is Already Changing Search Engine Optimization - ResearchFDI, accessed April 13, 2025, https://researchfdi.com/future-of-seo-ai/
  79. Future of SEO with AI: A Complete Analysis - EZ Rankings, accessed April 13, 2025, https://www.ezrankings.com/blog/future-of-seo-with-ai/
  80. 60 AI SEO Statistics for 2025 - SeoProfy, accessed April 13, 2025, https://seoprofy.com/blog/ai-seo-statistics/
  81. AI-Powered Marketing in 2024: A Benchmarking Report for 2025 Planning - Solveo, accessed April 13, 2025, https://solveo.co/ai-powered-marketing-in-2024-a-benchmarking-report-for-2025-planning/
  82. AI Content Creation: Is it Worth the Hype? - ProfileTree, accessed April 13, 2025, https://profiletree.com/ai-content-creation/
  83. 8 Challenges Of Building Your Own Large Language Model - Labellerr, accessed April 13, 2025, https://www.labellerr.com/blog/challenges-in-development-of-llms/
  84. Retrieval Augmented Generation (RAG) and Beyond: A Comprehensive Survey on How to Make your LLMs use External Data More Wisely - arXiv, accessed April 13, 2025, https://arxiv.org/html/2409.14924v1
  85. AI is changing search, for better or worse - The Register, accessed April 13, 2025, https://www.theregister.com/2024/01/30/ai_is_changing_search/
  86. Perplexity AI vs Google: Which Search Engine You Need? - RoziAcademy, accessed April 13, 2025, https://roziacademy.com/perplexity-ai-vs-google
  87. How is Perplexity valued at $9 billion? : r/perplexity_ai - Reddit, accessed April 13, 2025, https://www.reddit.com/r/perplexity_ai/comments/1hrnxl5/how_is_perplexity_valued_at_9_billion/
  88. The Latest on Perplexity AI vs. Google - WebFX, accessed April 13, 2025, https://www.webfx.com/blog/seo/perplexity-ai-vs-google/
  89. SearchGPT vs Perplexity vs Google: Which is the Best for Your Search Needs? - AutoGPT, accessed April 13, 2025, https://autogpt.net/searchgpt-vs-perplexity-vs-google-which-is-the-best-for-your-search-needs/
  90. Google vs. Perplexity: The Search Engine Showdown | AI News - OpenTools, accessed April 13, 2025, https://opentools.ai/news/google-vs-perplexity-the-search-engine-showdown

Gemini 2.5’s Take

Okay, here is the analysis and generated content based only on the provided article text:

  • Author Summary: In this piece, I argue that as artificial intelligence becomes genuinely intelligent, trying to game search systems becomes futile and risky. I believe the future of SEO lies in establishing true authenticity through compelling storytelling and building a verifiable evidence trail across the web. The goal shifts towards becoming genuinely noteworthy and valuable – perhaps even surprising – enough to be recognized and effectively ‘trained into’ the AI models themselves, thereby earning attention based on merit and reputation.

  • Title/Headline Ideas:
    • Beyond Gaming the System: SEO in the Age of Intelligent Machines
    • Making Your Case to AI: Why Authenticity and Storytelling Are Modern SEO
    • Train the AI, Win the Web: Reputation as the New Ranking Factor
    • The Authenticity Algorithm: Navigating SEO When AI Gets Smarter
    • Surprise and Substance: How to Become Noteworthy in the AI Era
  • Strengths:
    • Provocative Thesis: Presents a forward-looking and thought-provoking argument about the fundamental shift in SEO due to advancing AI.
    • Authorial Voice: Strong, distinct voice with personal anecdotes (HitTail, HTMX, current site) adding context and personality.
    • Conceptual Depth: Explores deeper concepts like the nature of AI, reputation vs. popularity, and the mechanics of AI memory (surprise).
    • Unique Angle: Focuses on influencing AI models directly (“getting trained in”) rather than just traditional ranking signals.
  • Weaknesses:
    • Structure/Flow: The article meanders between technical explanations (lithography, caching, Transformers), personal history, philosophical points, and SEO advice, lacking a consistently tight structure.
    • Speculative Claims: Makes strong, somewhat anthropomorphic claims about AI (“insulting sensibilities”) and definitive predictions about the future (incumbency, AdWords necessity) that are inherently speculative.
    • Actionability: While conceptually rich, concrete, step-by-step guidance on building the “evidence trail” or achieving “surprise” effectively could be clearer for readers seeking practical tactics.
    • Potential Repetition: Core ideas like authenticity, the evidence trail, and incumbent advantages are revisited multiple times, which could be condensed.
  • AI Opinion: This article offers a valuable and insightful perspective on the evolving relationship between AI and SEO. Its core argument—that genuine reputation, compelling storytelling, and becoming truly noteworthy to AI systems will supersede traditional optimization tactics—is compelling and forward-thinking. While the author’s passionate voice and personal examples are engaging, the somewhat rambling structure and blend of technical detail with speculative assertions might reduce clarity for some readers. Overall, it’s a stimulating read for anyone pondering the future of online visibility and interaction with intelligent machines, even if its organization could be more streamlined.