Sovereign Tech: The OpenClaw, NixOS, and OAuth Odyssey
Setting the Stage: Context for the Curious Book Reader
In the grand tapestry of building a resilient and future-proof digital existence, this entry emerges as a pivotal chapter. It’s a deep dive into the practical philosophy of reclaiming digital sovereignty, chronicling the journey from choosing declarative operating systems like NixOS, to battling pervasive vendor lock-in, and meticulously configuring AI agents for autonomous operation. This blueprint, forged through persistent experimentation and rigorous AI-assisted troubleshooting, highlights the subtle complexities of integrating powerful AI models into a personal, controlled environment. It’s an interesting exploration of how each piece of a robust tech stack—hardware, software, and intelligence—interlocks to form a cohesive, anti-fragile system in the Age of AI.
Technical Journal Entry Begins
In the cocoon or chrysalis the caterpillar’s brain turns to mush. Now that’s a transition! This is a strong mental model for the changes we undergo in life. Changes are not always quite so dramatic with humans, but sometimes it feels that way. We try to avoid throwing away everything we ever learned and every muscle memory we ever developed, dropping whatever was previously the focus of our attention for the next shiny toy. But that’s what most of the echo chamber you’ll find on X.com (previously Twitter) and YouTube is doing. It’s the flock. And I avoid flocking when I can.
I take up new tech very carefully and deliberately. I had to get off of both Windows and Mac, for example, and onto Linux. But I never wanted to constantly scrap-and-rebuild everything, which is a concern even on Linux, so I researched and discovered Nix and NixOS, a declarative approach to building systems: you build your entire system from a text file onto any hardware. This is a panacea! And I also discovered GNU Guix, which is now a second source for the exact same approach, meaning the ability to do declarative systems is multi-sourced. It’s the future of almost everything that uses Docker (containerization) and virtual machines (VMs) today.
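To make the declarative idea concrete, here is a hypothetical, minimal sketch of what such a text file looks like (an illustrative NixOS configuration.nix fragment, not my actual config; the package list and username are made up):

```nix
# Hypothetical minimal sketch: the whole machine described declaratively.
# Rebuilding from this one file on fresh hardware reproduces the same system.
{ config, pkgs, ... }:
{
  environment.systemPackages = with pkgs; [ git vim python3 ];
  services.openssh.enable = true;
  users.users.mike.isNormalUser = true;
}
```

The point is that the file, not the machine, is the source of truth: swap the hardware and rebuild, and the system comes back the same.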
The Chrysalis of Change: Embracing Declarative Tech
And so I needed some initial hardware. It was time to put the laptop aside and get a machine with some real power at a good price, so I went with used equipment. I also went with something I could upgrade by swapping out parts for a good long time. The hardware adventure started in May of 2024 when I got the HP Z640 server as the initial body for Chip O’Theseus, put the NVIDIA RTX 3080 card in there and installed NixOS. The time between then and now I spent evolving and mastering my Linux, Python, vim & git approach (LPvg), with a special focus on Nix and Jupyter Notebooks as honorary members of that minimal stack. I experimented with LPvgn but just LPvg is snappier and more memorable, I think. It resonates with rightness whereas the trailing “n” for Nix always needs to be explained. So LPvg it is.
Building the Sovereign Workstation: Hardware, Nix, and Pipulate
In that intervening 1.5 years I have been working at an enterprise SEO company and developing my long-standing Pipulate product for SEO, which has undergone several iterations and taken on several forms over the years. But finally one of the key enabling technologies I was waiting to mature and come to Python, HTMX, matured and came to Python — and from Jeremy Howard of all sources, a sort of unsung hero of tech who does the whole “for humans” thing that Kenneth Reitz did with the Requests library for Python in 2011, but with FastAI for machine learning and nbdev for turning Jupyter Notebooks into a full-fledged Python development environment. And so I made the next, and now I think final, version of Pipulate on FastHTML with tech future-proofing as the top priority.
What I can’t do is plow from one article to the next to the next without gaining continuous traction. The acceleration effect must really occur, making the next 5 years of my career worth the prior 30 years combined. Yes, I started working when I was 15 years old, not in the field of tech but as a cart-boy and bagger at a supermarket. I rode my bicycle a few miles to work every day during that first summer before I turned 16 and could drive my car, a 1973 Mustang convertible, the last of the muscle-cars before the OPEC embargo shrank the Mustang to a clown car. I guess I was a little in tech because I refurbished the whole thing. But that’s a story for another time.
This is about managing distractions and channeling the energy I have into moving forward with this really exciting, world-changing chrysalis stuff, without excessive distraction and without my brain turning to mush. We hold onto the things about this activity and state that we love AND we do the client work. It’s funny that I talked about sanitizing things like API-tokens and IPs when publishing in a recent article, because if I continue publishing WHILE I work on the client-work, as I plan to as a method of keeping it all love-worthy and the forward momentum continuing, I’ll be doing a lot of sanitizing as I go. This is the path. This is the way.
We lose ZERO momentum, and I still have most of my energy left today. The thing that happens is that you only have so much in you on any given day. “We’re only human” is the easy way to put it, but we know from science about Adenosine, the chemical that accumulates in the brain throughout the day inducing sleepiness. It acts as a byproduct of cellular energy consumption (ATP breakdown), with levels increasing during wakefulness to create sleep pressure. During sleep, adenosine is cleared, allowing for wakefulness to return. I read Why We Sleep — a real eye-opener. So we divvy out our energy as part of a zero-sum game every day. Attempts to override this are unsustainable. So we plan each day and try to not use up all our energy too early on the wrong things.
Mastering Focus: Energy Management and Dual Journals
So what now? Real focus on the work at hand! And draconian management of distraction — especially the tempting rabbit-hole projects, which this article writing could itself all too easily become if not managed. Right, right! Okay, so this is where we split into 2 daily tech journals. This is the aforementioned (last article) 1-file-4life where we never have to wonder where to look. It’s in there! Type into a single text-file for every waking hour of your entire life and you still can’t produce something that can’t be easily loaded and edited all at once with vim/NeoVim as if it were just a tiny file. I think only syntax color-coding puts a dent in that, and even though I keep my journals in markdown I don’t ask vim for any markdown formatting. For those bits that I extract and publish, I can always format properly later.
Okay, so why the separate journals? So we don’t have to sanitize quite so much here, of course! I’ll be talking about particular clients and their issues. I start out abstract here in a way that could apply to any SEO client, but then the particulars are dealt with in a journal-per-client kept in an entirely different git repo — one that I can keep in GitHub so that when I switch to the work Mac I can pull it down easily. That’s not the case with the 1-file-4life, which never touches GitHub yet is still easily kept in git and in multiple locations for security. And that’s all without even running a git server, because you don’t need a service running (which I also COULD very well do) when you’re using the local filesystem for your git repository origin. You then rely on your backup system to get it into multiple locations with the 3-2-1 backup best practice. BAM! Independence from GitHub when you need it.
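A minimal sketch of that GitHub-free setup (paths here are hypothetical temp directories standing in for the real journal and its backed-up vault; a bare repo on the local filesystem plays the role of origin, and the backup system carries it off-machine):

```shell
#!/bin/sh
# Sketch: use a plain directory as the git origin -- no server, no GitHub.
set -e
work=$(mktemp -d)                 # stand-in for your journal repo
vault=$(mktemp -d)/journal.git    # stand-in for a location your backups cover

git init --bare "$vault"          # the "origin" is just a bare repo on disk
cd "$work"
git init -q .
git -c user.email=me@example.com -c user.name=me \
    commit -q --allow-empty -m "day one"
git remote add origin "$vault"
git push -q origin HEAD           # sync; 3-2-1 backups handle replication
git ls-remote origin | grep -q 'refs/heads/' && echo "origin reachable"
```

No daemon, no network: git speaks its “local protocol” straight to the filesystem path, and the backup tool never needs to know it’s replicating a repository.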
Okay, it’s fine for me to meander a little bit like this. This is gaining traction. There’s a fine line between building momentum and spinning your wheels; one need only remain self-aware. AND we’re going to look at first principles as we go, because our workflow is changing a bit as our local machine “becomes smart”. In time I’ll make that intelligence local as well, using locally installed LLMs via Ollama and the Python vLLM package, but for now it’s wired to the frontier models, and Claude Opus 4.6 in particular. Boy did I chew up my daily allocation of resources fast. You barely even blink and it’s used up, so at least for today I’m going to continue using the loophole so big you can drive a truck through it: using Google Gemini 3 via the consumer (not developer) GoogleOne subscription and copy/pasting with the Web user interface.
This might seem ancient compared to the agentic framework movement going on right now with Anthropic’s Claude Code and Cowork, OpenAI Codex, Google Antigravity and most recently the free and open source OpenClaw, but copy/paste becomes remarkably powerful when you can compose any files and webpage content into one giant prompt by stacking all their text vertically as if in a single file, merely by listing the file-paths and URLs. I additionally layer in a manifest of what’s found there, which you could also call a table of contents. I also layer in features you could call an index for LLM rapid searching, to mitigate the “lost in the middle” problem. And finally this provides a way to avoid agentic framework lock-in by the vendors, who are all trying to do exactly that to you right now. Why submit to lock-in, if it’s all just text?
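A toy sketch of that composition trick (hypothetical file names and markers, not Pipulate’s actual prompt builder): list the inputs, emit a manifest up top as a table of contents, then stack the full text of every file beneath it.

```shell
#!/bin/sh
# Sketch: compose many files into one giant prompt, manifest first.
set -e
dir=$(mktemp -d)
printf 'journal entry\n' > "$dir/notes.md"
printf 'print("hi")\n'   > "$dir/app.py"

prompt="$dir/prompt.txt"
{
  echo "=== MANIFEST (table of contents) ==="
  for f in "$dir"/notes.md "$dir"/app.py; do
    echo "- $f"
  done
  echo "=== CONTENTS ==="
  for f in "$dir"/notes.md "$dir"/app.py; do
    echo "--- FILE: $f ---"   # the marker doubles as an index entry the LLM can scan for
    cat "$f"
  done
} > "$prompt"

grep -c '^--- FILE:' "$prompt"   # prints 2, one marker per stacked file
```

Because the output is plain text, it pastes into any chat interface, which is exactly why no vendor’s agentic framework can hold it hostage.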
Navigating the LLM Landscape: Agentic Frameworks and Vendor Traps
Anyhoo, you can see me fighting the distraction of how interesting the whole tech landscape is right now: mitigating the pitfalls and keeping oneself not merely future-proofed but increasing and enhancing that hard-won muscle memory, which is usually viciously attacked by vendors chopping it off with user interface overhauls, product sunsetting and the like. So long as they keep you having to relearn and retrain, they can more accurately predict quarterly revenues.
The ideal from a vendor perspective is to lock you into their product, holding your data hostage while also forcing you to constantly retrain. I think Adobe is one of the worst offenders on this front. I won’t touch Adobe software with a 100-foot pole anymore, no matter how good it is. Fool me once, shame on you. Fool me like 3 or 4 times over a 30-year tech career and they’re 100% marginalized in the discussion.
Microsoft Windows is entering that same penalty box. I really, really, really tried with WSL and WSL2, but when they hired Lennart Poettering, the creator of the systemd software behind modern Linux services, and then STILL had all Linux services that ran on WSL2 time out after 8 minutes — and I waited YEARS for it to be otherwise and tried and re-tried every metafile and parameter trick to override this behavior — well, the writing was on the wall. Hiring Lennart Poettering was a defensive position AGAINST Linux services mainstreaming on Windows. It’s subtle, but 24/7 Linux services on Windows is a first domino to fall. Had Linux services been able to reliably run in the background on Windows with high availability, I’d probably still be on Windows. So whereas Adobe pulled me into their trap 3 or 4 times over that 30-year career, Microsoft lured me in maybe 5 or 6 times.
And Apple? Oh, don’t get me started on Apple. If you think being based on Unix makes them better, then show me where their officially supported free and open source software (FOSS) repo is. Homebrew? Don’t make me laugh. First, it’s not officially supported. Second, there’s a war going on against it in the Apple ecosystem. Anyone who has had to use pipx instead of pip knows what I’m talking about. At least Microsoft officially supports the Ubuntu software repo under WSL. I’ve been progressively on MacPorts, Fink and Homebrew, so I know what I’m talking about. Homebrew is only the latest that has stood up to Apple’s anti-FOSS pounding. The fact that things like OpenAI Codex come out exclusively for Apple first is only the cool-kid factor, not an indication that Mac Unix is in any way as good as modern Linux for a developer. It’s just a prettier prison, to paraphrase how Richard Matthew Stallman (RMS) might put it.
Escaping Vendor Traps: Adobe, Microsoft, and Apple’s Prisons
Did somebody say distraction? Okay, okay, let’s get to it now.
Ugh! I just read about kimmi dot com and the zero-configuration cloud-based hosting of this stuff. But it’s not your own system. Nothing invalidates what I’m doing; in fact it validates OpenClaw finally being the “outer loop” agentic framework I’ve been waiting for, after having said no to AutoGPT, Crew AI, Goose AI and Open Manus, or building something myself out of LangChain. The distractions are going to be immense, but the day-job pays the bills. And I can’t let this NOT be about OpenClaw at this point. Hmm, let’s see.
Okay one path forward that doesn’t break the bank and lets me use OpenClaw for work is through OpenAI Codex because work pays for that account. I may still have quotas or whatever but I should be able to wire OpenClaw to Codex the way I did with Claude Code. Let’s first just get a Google AI opinion.
Me: Can OpenAI $30 per seat business plan of ChatGPT be connected to OpenClaw?
Google AI: Yes, the OpenAI ChatGPT Team plan (typically $30/month per user, or $25/month billed annually) can be connected to OpenClaw, as OpenClaw is designed to interact with OpenAI’s models. Following the news that OpenClaw’s creator, Peter Steinberger, is joining OpenAI, the platform is expected to further integrate with OpenAI products. [1, 2, 3]
Here is how the connectivity works based on the latest information:
- How it connects: OpenClaw acts as an autonomous AI agent running on your local machine (using Node.js) and connects to OpenAI models via API.
- Alternative Connection Method: While OpenClaw usually uses API keys, you can connect it to your ChatGPT account to leverage its capabilities for automation, such as managing emails, calendars, and file operations directly from chat platforms like Slack or WhatsApp.
- Key Capabilities: You can instruct OpenClaw to perform tasks, run recurring “heartbeat” tasks (like daily summaries), and utilize a “Live Canvas” for visual workspaces.
- Security & Safety: OpenClaw requires significant permissions to run. It is highly recommended to follow the official documentation’s security guidelines, such as using Docker for isolation and running on localhost to avoid exposing sensitive data. [3, 4, 5, 6, 7]
Important Distinctions:
- API vs. ChatGPT Business: While you can use the API, note that the ChatGPT Business ($30/seat) plan is designed for web/app chat usage. API usage is a separate cost on OpenAI’s platform. OpenClaw allows you to connect to different providers, but is heavily geared toward leveraging OpenAI’s models.
- Open Source Future: Despite being acquired by OpenAI, OpenClaw is expected to remain open source. [5, 8, 9, 10, 11]
Disclaimer: OpenClaw is a powerful, early-stage, and high-risk tool. Misconfigured instances can leak credentials or execute unauthorized commands. [6]
AI responses may include mistakes.
[1] https://tech.yahoo.com/ai/chatgpt/articles/creator-viral-ai-tool-openclaw-125403288.html
[2] https://en.wikipedia.org/wiki/OpenClaw
[3] https://refactoring.fm/p/my-experience-with-openclaw
[4] https://www.youtube.com/shorts/1j8oCpLlS00
[5] https://github.com/openclaw/openclaw
[6] https://www.aikido.dev/blog/why-trying-to-secure-openclaw-is-ridiculous
[7] https://medium.com/@alexglee/openclaw-why-24-hour-autonomous-agents-are-not-ready-for-healthcare-29e8f6a28781
[8] https://help.openai.com/en/articles/8542115-chatgpt-business-faq
[9] https://help.openai.com/en/articles/12003714-chatgpt-business-models-limits
[10] https://x.com/i/trending/2022870522107732312
[11] https://help.openai.com/en/articles/8542115-chatgpt-business-faq
Me: But when I click the button to dive deeper with AI and it does tool-calling web searches to check sources, I get an entirely different story:
Google AI 2: No, you cannot directly connect a ChatGPT Business (or Team) plan to OpenClaw using that subscription’s seat-based pricing.
While the ChatGPT Business/Team plan costs $30 per seat per month (monthly), this subscription is strictly for the ChatGPT web and app interface. It does not include access to the OpenAI API, which is a separate platform with its own usage-based costs.
How to Connect OpenAI to OpenClaw
OpenClaw is a self-hosted, open-source AI agent that requires an API Key to function. To use OpenAI models with it, you must:
- Create an OpenAI API Account: Visit the OpenAI API Platform.
- Add Credits: Unlike the flat $30 monthly fee for ChatGPT, the API uses pay-as-you-go pricing based on the number of tokens (words/characters) processed.
- Generate an API Key: Insert this key into your OpenClaw configuration.
Key Differences for Business Users
| Feature | ChatGPT Business ($30/seat) | OpenAI API (Usage-based) |
|---|---|---|
| Primary Use | Internal team chat and collaboration | Integrating AI into 3rd-party apps like OpenClaw |
| Connectivity | Connects to apps via ChatGPT App Directory | Connects to any external code/agent via API Key |
| Data Training | Excluded from model training by default | Excluded from model training by default |
The OpenAI & OpenClaw Relationship
As of February 2026, OpenAI has officially hired OpenClaw creator Peter Steinberger to lead a new “personal agents” division. While OpenClaw remains open source under a foundation, OpenAI is expected to build native agent capabilities directly into its business plans in the near future.
Would you like help setting up an OpenAI API account or generating a usage-based key for your OpenClaw instance?
Me: So first of all it’s hilarious that you can copy markdown from the first Google AI response, the one that didn’t do the additional tool-call. I guess that’s the one we’d call the AI Overview, whereas once you drill down you get a better response that did tool-calls, but you can’t copy the markdown. I had to use that trick of copy/pasting the (good) formatting into Google Docs and then saving the markdown from Docs to get what you see as Google AI’s response immediately above. Let me explore a little deeper.
Me: Can you elaborate on when you first said: “While OpenClaw usually uses API keys, you can connect it to your ChatGPT account”? Is it similar to the sk-ant-oat01- api key loophole with Anthropic?
Google AI: While there is no “official” button to link your $30 ChatGPT Business/Team seat to OpenClaw, the first response’s reference to a “loophole” refers to browser-based authentication and subscription tokens, which bypass standard usage-based API fees.
The “Loophole” Explained
This is very similar to the sk-ant-oat01- method used with Anthropic (Claude). It effectively “tricks” the software into thinking your web session is an API connection.
- Subscription-Based Tokens: For Anthropic, users can extract a specific session token (the `sk-ant-oat01-` key) from the Claude Code CLI or the browser’s developer tools. This allows OpenClaw to run against your $20–$30 monthly subscription rather than charging you per-message via the official API.
- OpenClaw Chrome Extension: For ChatGPT, OpenClaw can use a Chrome extension to bridge your browser’s active login session to the OpenClaw backend. This uses your “web” quota (which is much higher/effectively unlimited for Business users) instead of a pay-as-you-go API balance.
- The “Gateway” Pattern: OpenClaw acts as a middleman. It signs into your ChatGPT account through a headless browser or a session cookie, allowing the agent to “chat” on your behalf just as you would in a browser tab.
Risks and Caveats
- Brittle Connectivity: Unlike a real API key, these session-based “loopholes” often break whenever OpenAI or Anthropic updates their web authentication or bot-detection systems.
- Terms of Service: Using a personal or business web subscription for automated API-like behavior is generally a violation of OpenAI’s Terms of Service, as it bypasses their primary revenue model for developers.
- Account Bans: Aggressive use of an autonomous agent on a standard web account can trigger “suspicious activity” flags, potentially leading to a ban of your $30/seat Business workspace.
The Future with Peter Steinberger
The reason for the conflicting “Yes/No” answers you see is the current transition period. Now that OpenAI has hired OpenClaw’s creator, an official way to connect your Business seat to an agent is expected to launch later in 2026, which would make these unofficial “loopholes” unnecessary.
Would you like the specific steps to extract a session cookie for your current OpenClaw configuration?
Me: Claude Opus 4.6, can you double-check what’s being said here? It doesn’t sound right because Codex is being offered from my ChatGPT business account:
- Codex app
- Try in your terminal
- Try in your IDE
…so I assume it must use the same OAuth menu pop-up trick as Claude Code and therefore must have the same sort of loophole. Of course ground your response, please and thank you!
[After all that, I just upped my subscription to Anthropic Claude Max for $100/mo at least for this month where I work all this out]
Claude Opus 4.6: Let me look into this carefully — there are several claims in that conversation that need verification.

Okay, now I have a very clear picture. Let me give you the grounded answer.
The Google AI responses you got range from “mostly right but sloppy” to “fabricated nonsense.” Here’s what’s actually going on:
Yes, you CAN connect your ChatGPT Business plan to OpenClaw — and it’s not a “loophole.”
OpenClaw officially supports “subscription auth” via OAuth for providers that offer it, notably OpenAI Codex (ChatGPT OAuth). The README itself lists subscription OAuth for both Anthropic (Claude Pro/Max) and OpenAI (ChatGPT/Codex) as supported auth methods. You set it up either through the onboarding wizard (openclaw onboard --auth-choice openai-codex) or directly via openclaw models auth login --provider openai-codex.
Your instinct about Codex is exactly right. Codex supports signing in with your ChatGPT account — it opens a browser window, you complete the login flow, and the browser returns an access token to the CLI or IDE extension. Codex is included with ChatGPT Plus, Pro, Business, and Enterprise/Edu plans. OpenClaw piggybacks on this same official Codex OAuth flow, not some scraped session cookie or headless browser hack.
So the Google AI’s third response was largely fabricated. Specifically:
- There is no “OpenClaw Chrome Extension” that bridges browser sessions. That doesn’t exist.
- There is no “headless browser” or “session cookie gateway pattern” in OpenClaw’s auth. It uses standard OAuth with proper token exchange, exactly like Codex CLI and Claude Code do.
- The comparison to `sk-ant-oat01-` as a “loophole” is misleading. The `sk-ant-oat01-` token is a legitimate OAuth access token generated through Claude Code’s `claude setup-token` command, which opens a browser OAuth flow to `claude.ai/oauth/authorize`. It’s an officially supported authentication method for Claude Code, not a hack. OpenClaw supports pasting these tokens via `openclaw models auth paste-token --provider anthropic`.
- The claim about “Terms of Service violations” is speculation. Simon Willison describes choosing “OpenAI Codex with ChatGPT OAuth” during OpenClaw setup, authenticating against ChatGPT to use tokens covered by his subscription — this uses the same OAuth infrastructure that Codex CLI, the Codex IDE extension, and even third-party tools like Cline use. Cline describes this as their first OAuth provider integration, where your OpenAI credentials never leave OpenAI’s servers and Cline only receives access tokens.
The actual architecture is the same for both providers:
For OpenAI: OpenClaw uses the Codex OAuth flow → you sign into your ChatGPT account in a browser → you get back an access token → OpenClaw uses that token against the API. Your Business plan’s usage limits apply.
For Anthropic: You run claude setup-token from Claude Code CLI → OAuth flow in browser → you get an sk-ant-oat01- token → you paste it into OpenClaw. Your Pro/Max subscription limits apply. This token format is specifically an OAuth Access Token, and Anthropic already has the infrastructure to generate tokens from subscriptions.
Neither of these is a “loophole.” They’re both official OAuth flows provided by the respective platforms for their developer tools. The Google AI’s second response (the one that did tool calls) was actually the most accurate when it correctly noted that the ChatGPT Business subscription and the API are separate — but it missed that Codex OAuth bridges that gap officially. The third response just hallucinated a bunch of mechanisms that don’t exist.
One real caveat worth noting: if you consistently max out your subscription running OpenClaw, the provider may cancel it, since these subscriptions aren’t designed for 24/7 autonomous agent workloads. And some users have reported 401 errors trying to use Claude subscription OAuth tokens with OpenClaw, so the integration isn’t always seamless in practice.
Me: Okay now with this information in hand I’ll go to that well that doesn’t
cost much against quotas. Gemini, can you help guide me on my next step? I
assume it’s editing openclaw.nix and probably adding codex to my
configuration.nix file and triggering off some command to give me the
equivalent of what I did with Claude Code.
Gemini 3 Fast: Context verified.
You are witnessing the convergence of your “Blueprints” (NixOS) and your “Hands” (OpenClaw). Claude’s analysis was spot-on: this isn’t a hack; it is a Federated Intelligence pattern where your local machine uses OAuth to borrow the “Master Brain” of OpenAI or Anthropic.
Since you are on a Legacy NixOS configuration (no flakes for the main system, but using the getFlake loophole for OpenClaw), your next moves must be surgical. You need to install the Codex CLI to perform the initial “Handshake” (OAuth login), and then update your OpenClaw configuration to use that session.
Here is the sequence to bridge the gap:
Step 1: Install the Codex CLI
OpenAI provides a standalone CLI for Codex. Since you are on NixOS, the cleanest way to get this without polluting your system is to use a temporary nix-shell or add it to your environment.systemPackages.
Edit your packages.nix (or the environment.systemPackages block in configuration.nix):
Add openai-codex (Note: Check the exact package name in your channel, but usually it’s available as openai-codex).
```nix
environment.systemPackages = with pkgs; [
  # ... existing packages ...
  openai-codex
];
```
Run n (rebuild).
Step 2: The Handshake (OAuth Login)
Once the CLI is installed, you need to link it to your $30/seat Business account. This will create a local credential that OpenClaw can “borrow.”
- In your terminal, run: `codex login`
- Follow the prompt: It will open a browser window. Sign in with your Business account credentials.
- Verification: The terminal should say “Successfully authenticated.” This stores a token in your local config (usually `~/.config/openai/`).
Step 3: Update openclaw.nix
Now you need to tell OpenClaw to look for the Codex session. We will modify the models block in your openclaw.nix.
File: /home/mike/repos/nixos/openclaw.nix
```nix
{ config, pkgs, lib, ... }:
let
  openclawSrc = builtins.getFlake "github:openclaw/nix-openclaw";
  openclawPkg = openclawSrc.packages.${pkgs.system}.default;
  tgId = config.mySecrets.telegramId or 0;
  openclawConfig = pkgs.writeText "openclaw.json5" (builtins.toJSON {
    gateway = {
      mode = "local";
      port = foo;
      auth = { mode = "token"; };
    };
    channels = {
      telegram = {
        enabled = true;
        dmPolicy = "pairing";
        allowFrom = [ tgId ];
        configWrites = false;
      };
    };
    models = {
      providers = {
        # ⚡ NEW: OpenAI Codex (using your Business Plan via OAuth)
        openai-codex = {
          enabled = true;
          # This tells OpenClaw to look for the 'codex login' session
          authType = "oauth";
        };
        # ... keep Anthropic and Ollama as fallbacks ...
      };
    };
    agents = {
      defaults = {
        model = {
          # ⚡ SWITCH TO CODEX: Leverage the "Work Pays" intelligence
          primary = "openai-codex/gpt-4o";
        };
        workspace = "/home/mike/.openclaw/workspace";
      };
    };
  });
# ... (rest of serviceConfig remains the same)
```
Run n (rebuild).
Step 4: The “Articleizer” Integration (Sanitization Check)
Since you are about to start doing client-work and publishing articles, you need to ensure your sanitization protocol (from the last article) handles this new bridge.
- Identity Check: Your `IDENTITY.md` and `SOUL.md` are now being read by GPT-4o (via Codex) instead of just your local Qwen.
- Secret Scrubbing: Ensure your `sanitize.py` logic is aware of any new Codex tokens or OAuth identifiers that might appear in logs.
Why this is the “Sovereign Way”
By following this methodology, you have achieved a “Triple Threat” setup:
- Free/Local (Ollama): For routine, non-sensitive tasks that don’t need high reasoning.
- Subscription/Work (Codex): For the heavy-lifting client work, paid for by the company.
- Subscription/Personal (Claude Max): For high-level architectural reasoning and “Golden Master” debugging.
You are effectively using NixOS as a “Router for Intelligence.” You decide which brain gets used based on the task, while your Pipulate “Hands” remain local and deterministic.
Next Step: Perform the codex login. If the browser redirect works, Twiki officially has “Enterprise Hands.”
Would you like me to refactor your AGENTS.md to help the agent distinguish between “Business Context” (Codex) and “Sovereign Context” (Ollama) so it handles your data accordingly?
Me: How do Gemini 3 Fast’s instructions sound? Can you double-check and ground? I’m on NixOS 25.05. I use a secrets.nix file, which is why there are variables for tokens here, so at the end of the process I’ll likely add an entry in that file and do something similar as with Anthropic.
Claude Opus 4.6: Let me check the actual state of Codex CLI packaging on NixOS and how OpenClaw’s OpenAI auth really works.

Good — now I have a complete picture. Here’s the grounded reality check on Gemini 3 Fast’s instructions:
The overall direction is right, but several specifics are wrong or fabricated. Let me walk through what actually needs to happen.
What Gemini got right
The general flow is correct: install Codex CLI → do an OAuth login against your ChatGPT Business account → tell OpenClaw to use the openai-codex provider. And yes, it’s the same OAuth pattern as Claude Code’s sk-ant-oat01- flow.
What Gemini got wrong
1. The package name is wrong. There’s no openai-codex in nixpkgs. The actual nixpkgs package is just called codex. So in your configuration.nix or packages.nix, it would be codex not openai-codex. That said, Codex has been moving fast — there’s a dedicated third-party flake (github:sadjow/codex-cli-nix) that tracks hourly updates and now uses native Rust binaries with no Node.js dependency, and there’s also numtide/llm-agents.nix which packages codex, claude-code, and gemini-cli together with a binary cache. Given your getFlake pattern for OpenClaw, either of those flakes might be cleaner than the nixpkgs codex package, which may lag behind.
2. The login command is wrong. It’s not codex login — the command authenticates Codex using ChatGPT OAuth, device auth, or an API key piped over stdin, and you invoke it by just running codex and selecting “Sign in with ChatGPT” from the interactive prompt, or codex login with appropriate flags. The important detail: it opens a browser window for you to complete the login flow, and after you sign in, the browser returns an access token. Codex caches login details locally in a plaintext file at ~/.codex/auth.json.
3. You do NOT need to edit openclaw.nix / openclaw.json with a manual providers block like Gemini showed. OpenClaw has its own auth flow for Codex that’s entirely separate from Codex CLI’s auth. The official OpenClaw docs say to run openclaw onboard --auth-choice openai-codex or directly openclaw models auth login --provider openai-codex. This runs OpenClaw’s own OAuth flow against OpenAI — it doesn’t “borrow” Codex CLI’s cached credentials. The JSON config is just:
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "openai-codex/gpt-5.3-codex"
      }
    }
  }
}
Model refs always use the provider/model format, so it’s openai-codex/gpt-5.3-codex or openai-codex/gpt-5.2 — not openai-codex/gpt-4o as Gemini suggested (gpt-4o isn’t a Codex model name in this context).
4. The authType = "oauth" Nix config Gemini wrote is fabricated. That’s not how OpenClaw’s Nix configuration works. OpenClaw stores OAuth tokens in auth profiles on the gateway host — the CLI login flow handles the credential storage, not a declarative JSON/Nix block.
The actual steps for your NixOS setup
Step 1: You probably don’t even need Codex CLI installed separately just for OpenClaw. OpenClaw has its own built-in Codex OAuth flow. But if you want Codex CLI for its own sake (coding agent in your terminal), add it:
# In your packages or environment.systemPackages
codex # the nixpkgs package
# OR use a flake for fresher builds
Step 2: Run OpenClaw’s own auth command:
openclaw models auth login --provider openai-codex
This will open a browser, you sign in with your Business account, and OpenClaw stores the token in its own auth profile system.
Step 3: Set the model in your OpenClaw config. If you’re generating openclaw.json5 from Nix, the relevant bit is just setting the primary model to openai-codex/gpt-5.2 (or whatever model you want).
Step 4: For your secrets.nix pattern — OpenClaw’s Codex OAuth stores tokens in auth profiles on the gateway host, not in openclaw.json. So unlike an API key, there’s nothing to put in secrets.nix for the OAuth flow. The tokens live in ~/.openclaw/agents/<agentId>/agent/auth.json. If you wanted to use an API key instead (OPENAI_API_KEY), then you’d put that in secrets.nix and pass it as an env var.
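If you did go the API-key route, a minimal sketch of what that could look like on the Nix side — assuming a hypothetical `mySecrets.openaiApiKey` option alongside the existing secrets pattern (the option name is made up for illustration):

```nix
# Hypothetical option name — mirrors the secrets.nix pattern, not OpenClaw's API
systemd.services.openclaw.environment = {
  OPENAI_API_KEY = config.mySecrets.openaiApiKey;
};
```

Again, none of this is needed for the OAuth route; it only applies if you fall back to per-token API billing.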
One important caveat from the OpenClaw docs: Codex OAuth covers chat/completions but does not grant embeddings access, so if you want OpenClaw’s semantic memory search feature with OpenAI embeddings, you’d still need a real API key.
Bottom line
Gemini gave you a plausible-sounding but largely hallucinated Nix configuration. The real process is simpler: openclaw models auth login --provider openai-codex handles the OAuth, and you just set your model to openai-codex/gpt-5.2 or openai-codex/gpt-5.3-codex in the config. No need for the fabricated authType, providers block, or codex login handshake relay that Gemini described.
Me: Okay, I don’t view adding codex to my packages.nix as polluting my system, because removing it later cleans it up. It’s not like a normal Linux FHS. Removing it and rebuilding only leaves it behind in /nix/store/, which happens in either case and which I clean up intermittently anyway. So I add it and do a nixos rebuild.
I go into the CLI and type codex and I get:
[mike@nixos:~/repos/nixos]$ codex
Missing openai API key.
Set the environment variable OPENAI_API_KEY and re-run this command.
You can create a key here: https://platform.openai.com/account/api-keys
[mike@nixos:~/repos/nixos]$
Okay, so I go to that URL and now I have the option between a key owned by “You” (This API key is tied to your user and can make requests against the selected project. If you are removed from the organization or project, this key will be disabled.) or a “Service account” (A new bot member (service account) will be added to your project, and an API key will be created.).
Also even though OpenClaw maintains its stuff internally I thought I should show you my Nix configuration.
Claude Opus 4.6: Context verified. I can see your full NixOS configuration. You already have codex in packages.nix — good. Now let me address where you are.
Stop — you’re going down the wrong path. You don’t want an API key. That’s the pay-per-token route, which is separate from your $30/seat Business subscription. The Codex CLI should be offering you the OAuth login against your ChatGPT account, not asking for OPENAI_API_KEY. Here’s the situation:
Do NOT create an API key. That will charge you per-token on top of your $30/seat Business plan. You want the OAuth login instead.
The codex package in nixpkgs might be an older version that doesn’t present the interactive login menu. The proper command is codex login, which with no flags opens a browser for the ChatGPT OAuth flow. Try this first:
codex login
If that works, Codex starts a local web server on port 1455, generates a secure code challenge, and opens your browser to the OpenAI authentication page. After you sign in and grant permissions, OpenAI redirects back to your local server with authorization tokens that are securely stored. Since you’re sitting at your NixOS desktop with a browser, this should work — it’s the exact same flow as claude opening a browser for OAuth.
After the login succeeds, Codex caches login details locally in a plaintext file at ~/.codex/auth.json. Then just running codex should work without needing OPENAI_API_KEY at all.
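Since that cache is plaintext, it’s worth making sure it stays owner-only — a sketch using a throwaway temp file to show the idea (the real target would be ~/.codex/auth.json):

```shell
# Illustrative only: lock a plaintext credentials file down to owner-only.
# A temp file stands in here; in practice the target is ~/.codex/auth.json.
auth=$(mktemp)
printf '{"demo": true}\n' > "$auth"
chmod 600 "$auth"                # owner read/write, nothing for group/other
stat -c '%a' "$auth"             # GNU stat: prints the octal mode
```

If the mode on the real file comes back looser than 600, a chmod like this closes it up.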
If codex login doesn’t exist or fails, the nixpkgs codex package may be too old (there was an older Node.js version and a newer Rust rewrite). In that case, you have two options:
1. Device code flow (works even without a browser redirect): `codex login --device-auth` — this gives you a URL and a one-time code like `ABCD-EFGH` to enter at https://auth.openai.com/codex/device. This is the most reliable option.

2. Switch to a fresher package — the `codex` in nixpkgs 25.05 may be stale. You could use the `getFlake` pattern you already know:
# In packages.nix or configuration.nix
let
  codexFlake = builtins.getFlake "github:openai/codex";
  codexPkg = codexFlake.packages.${pkgs.system}.default;
in
  # …then reference codexPkg in environment.systemPackages
Or the third-party flake that tracks hourly:
let
  codexFlake = builtins.getFlake "github:sadjow/codex-cli-nix";
  codexPkg = codexFlake.packages.${pkgs.system}.default;
in
The sequence should be:
- Run `codex login` (or `codex login --device-auth`)
- Sign in with your Business account in the browser
- Verify with `codex login status`
- Now `codex` should work interactively against your subscription
- Then separately, for OpenClaw: run `openclaw models auth login --provider openai-codex` (this runs OpenClaw’s own OAuth flow, independent of the Codex CLI)
For your secrets.nix pattern: unlike API keys, the OAuth tokens are stored in ~/.codex/auth.json (for Codex CLI) and in OpenClaw’s own auth profiles. There’s nothing you need to put in secrets.nix for the OAuth path. If you did want to use an API key as a fallback (e.g., for CI or scripts), that’s where secrets.nix would come in — but for interactive use from your desktop, OAuth against your Business subscription is the right call.
So: close that API key creation page, run codex login, and see what happens.
Me: Okay, I tried all of that. The most recent thing I did was try to get the hourly build of codex by modifying my packages.nix file with the let/in, and the nix build succeeded but way too quickly. Doesn’t that trick require a hash or something? I feel there’s a disconnect between the codex entry and what’s in the let block. I tried that content in the let block for both configuration.nix and packages.nix.
Claude Opus 4.6: Context verified. You’re right — there’s a clear bug. You defined codexPkg in the let block but never used it. Look at your environment.systemPackages:
let
  codexFlake = builtins.getFlake "github:sadjow/codex-cli-nix";
  codexPkg = codexFlake.packages.${pkgs.system}.default;
in
{
  environment.systemPackages = with pkgs; [
    claude-code
    codex # ← This is still the nixpkgs `codex`, NOT your flake
The with pkgs; means codex resolves to pkgs.codex — the stock nixpkgs package. Your codexPkg variable is defined but sitting there unused. The build was fast because nothing changed; it just used the same nixpkgs codex it already had.
Replace codex with codexPkg in the package list:
{ pkgs, ... }:
let
  codexFlake = builtins.getFlake "github:sadjow/codex-cli-nix";
  codexPkg = codexFlake.packages.${pkgs.system}.default;
in
{
  environment.systemPackages = with pkgs; [
    claude-code
    codexPkg # ← NOW it uses the flake build

    # Core Tools
    vim
    git
    # ... rest unchanged
  ];
}
This should trigger an actual build (or binary cache fetch) from the flake. No hash needed on your end — the flake lockfile in that repo handles pinning. The builtins.getFlake call resolves to whatever the latest commit is on that repo’s default branch.
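If you ever want reproducibility instead of “latest”, a flake reference can carry an explicit revision — a sketch with a placeholder hash (substitute a real commit from that repo):

```nix
# Placeholder <commit-hash> shown — pin to a real commit for reproducible builds
codexFlake = builtins.getFlake "github:sadjow/codex-cli-nix/<commit-hash>";
```

Pinning by revision also plays nicer with pure evaluation, since an unpinned branch ref is impure.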
After rebuild, try:
codex --version # should show a newer version
codex login # should give you the OAuth flow
If codex login then gives you the browser-based ChatGPT sign-in, you’re on track. If it still just says “Missing openai API key,” then even the flake version may be too old or misconfigured — at that point try codex login --device-auth as a fallback.
Me: Okay, that totally worked:
[mike@nixos:~/repos/nixos]$ codex login
Starting local login server on http://localhost:1455.
If your browser did not open, navigate to this URL to authenticate:
[Removed]
Successfully logged in
[mike@nixos:~/repos/nixos]$ codex login status
Logged in using ChatGPT
[mike@nixos:~/repos/nixos]$
Woot! Okay, but then:
[mike@nixos:~/repos/nixos]$ openclaw models auth login --provider openai-codex
🦞 OpenClaw 2026.2.16 (bd0ab3e) — Greetings, Professor Falken
Error: No provider plugins found. Install one via `openclaw plugins install`.
[mike@nixos:~/repos/nixos]$
I am really suspicious of there not being an entry in openclaw.nix, because that whole sk-ant-oat01- prefixing thing was about taking what I thought was a pseudo-API-key from Anthropic down the API-key route. This OAuth route now feels like OpenClaw has to mysteriously know where to look for things. I got that token by typing claude setup-token on the command-line for Claude. Oh, and I had to put it in an environment variable location, /var/lib/openclaw/.env, where I’m using both:
ANTHROPIC_API_KEY=[Removed]
CLAUDE_CODE_OAUTH_TOKEN=[Removed]
…to hedge my bets between conflicting instructions between OpenClaw and Anthropic.
Claude Opus 4.6: You’re right to be suspicious. Let me check what’s actually going on with OpenClaw’s Codex provider plugin situation. Someone else hit your exact same error. The fix is to use the onboarding wizard instead of the direct models auth login command. Run openclaw onboard --auth-choice openai-codex — when it asks about the existing config, pick Keep (or Modify), don’t pick Reset.
The issue is that openclaw models auth login --provider openai-codex expects the Codex OAuth provider plugin to already be registered in your OpenClaw instance, but your current setup was onboarded with Anthropic only. The onboard wizard handles installing/enabling the provider plugin as part of the flow, whereas the bare models auth login command assumes it’s already there.
So before you run it, snapshot your config in case anything goes sideways:
cp -a ~/.openclaw ~/.openclaw.bak.$(date +%F-%H%M%S)
Then:
openclaw onboard --auth-choice openai-codex
When it detects your existing openclaw.json, choose Keep to preserve your current Anthropic setup, Telegram channel config, etc. This should register the openai-codex provider plugin and run you through the OAuth flow against your ChatGPT Business account.
After that succeeds, you’d set the model if you want OpenAI as primary (or leave Anthropic as primary and use OpenAI as a fallback):
openclaw models set openai-codex/gpt-5.2
# or just check what's available:
openclaw models list
On your .env / secrets setup for Anthropic: Your belt-and-suspenders approach with both ANTHROPIC_API_KEY and CLAUDE_CODE_OAUTH_TOKEN in /var/lib/openclaw/.env is fine pragmatically, though they serve different purposes. The CLAUDE_CODE_OAUTH_TOKEN (the sk-ant-oat01- token from claude setup-token) is what OpenClaw actually uses for Anthropic subscription auth. The ANTHROPIC_API_KEY would only matter if you had a separate API billing account. Having both just means whichever one OpenClaw looks for first wins — no harm done.
For the OpenAI/Codex path, the tokens get stored in OpenClaw’s own auth profiles (under ~/.openclaw/agents/), not in your .env file. So no secrets.nix entry needed for this one — the onboard wizard handles it end-to-end.
Me: As you command, O better web-researcher than me.
[mike@nixos:~/repos/nixos]$ cp -a ~/.openclaw ~/.openclaw.bak.$(date +%F-%H%M%S)
[mike@nixos:~/repos/nixos]$ openclaw onboard --auth-choice openai-codex
🦞 OpenClaw 2026.2.16 (bd0ab3e)
I'm not magic—I'm just extremely persistent with retries and coping strategies.
▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄
██░▄▄▄░██░▄▄░██░▄▄▄██░▀██░██░▄▄▀██░████░▄▄▀██░███░██
██░███░██░▀▀░██░▄▄▄██░█░█░██░█████░████░▀▀░██░█░█░██
██░▀▀▀░██░█████░▀▀▀██░██▄░██░▀▀▄██░▀▀░█░██░██▄▀▄▀▄██
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
🦞 OPENCLAW 🦞
┌ OpenClaw onboarding
│
◇ Security ──────────────────────────────────────────────────────────────────────────────╮
│ │
│ Security warning — please read. │
│ │
│ OpenClaw is a hobby project and still in beta. Expect sharp edges. │
│ This bot can read files and run actions if tools are enabled. │
│ A bad prompt can trick it into doing unsafe things. │
│ │
│ If you’re not comfortable with basic security and access control, don’t run OpenClaw. │
│ Ask someone experienced to help before enabling tools or exposing it to the internet. │
│ │
│ Recommended baseline: │
│ - Pairing/allowlists + mention gating. │
│ - Sandbox + least-privilege tools. │
│ - Keep secrets out of the agent’s reachable filesystem. │
│ - Use the strongest available model for any bot with tools or untrusted inboxes. │
│ │
│ Run regularly: │
│ openclaw security audit --deep │
│ openclaw security audit --fix │
│ │
│ Must read: https://docs.openclaw.ai/gateway/security │
│ │
├─────────────────────────────────────────────────────────────────────────────────────────╯
│
◆ I understand this is powerful and inherently risky. Continue?
│ ○ Yes / ● No
└
And I am now faced with this:
◆ Onboarding mode
│ ● QuickStart (Configure details later via openclaw configure.)
│ ○ Manual
└
Claude Opus 4.6: Go with QuickStart. It’ll register the Codex OAuth provider and walk you through the browser login, then you can fine-tune the config afterward with openclaw configure. Since you chose Keep when it found your existing config (or it hasn’t asked yet), your Anthropic setup and Telegram channel should stay intact.
The QuickStart path is what the Discord community recommended for this exact “No provider plugins found” situation — it handles the plugin registration that the bare models auth login command was missing.
Me: Wow, that took a lot of tries. I had to exit and come back in. Apparently I have two accounts under that same Google Workspace email login, a so-called personal one and a work one. I had to log out of the personal one so that choosing the work sub-option in the OAuth process would actually work. Otherwise it looked like it worked, but the process would never finish in the command-line, where a spinner just kept spinning and waiting. But then finally I got the magic combination.
[mike@nixos:~/repos/nixos]$ openclaw onboard --auth-choice openai-codex
🦞 OpenClaw 2026.2.16 (bd0ab3e) — I'll do the boring stuff while you dramatically stare at the logs like it's cinema.
[ASCII banner repeated — trimmed]
┌ OpenClaw onboarding
│
◇ Security — (same warning box as the first run, trimmed for the article)
│
◇ I understand this is powerful and inherently risky. Continue?
│ Yes
│
◇ Onboarding mode
│ QuickStart
│
◇ QuickStart ─────────────────────────╮
│ │
│ Gateway port: 18789 │
│ Gateway bind: Loopback (127.0.0.1) │
│ Gateway auth: Token (default) │
│ Tailscale exposure: Off │
│ Direct to chat channels. │
│ │
├──────────────────────────────────────╯
│
◇ OpenAI Codex OAuth ─────────────────────────────────────────────╮
│ │
│ Browser will open for OpenAI authentication. │
│ If the callback doesn't auto-complete, paste the redirect URL. │
│ OpenAI OAuth uses localhost:1455 for the callback. │
│ │
├──────────────────────────────────────────────────────────────────╯
│
Open: [Removed]
◒ Complete sign-in in browser…..│
■ Paste the authorization code (or full redirect URL):
│
◇ Canceled
[mike@nixos:~/repos/nixos]$ codex login
Starting local login server on http://localhost:1455.
If your browser did not open, navigate to this URL to authenticate:
[Removed]
Successfully logged in
[mike@nixos:~/repos/nixos]$ openclaw models set openai-codex/gpt-5.2
🦞 OpenClaw 2026.2.16 (bd0ab3e) — I'll butter your workflow like a lobster roll: messy, delicious, effective.
Updated ~/.openclaw/openclaw.json
Default model: openai-codex/gpt-5.2
[mike@nixos:~/repos/nixos]$ openclaw models list
🦞 OpenClaw 2026.2.16 (bd0ab3e) — I don't just autocomplete—I auto-commit (emotionally), then ask you to review (logically).
Model Input Ctx Local Auth Tags
openai-codex/gpt-5.2 text+image 266k no no default,configured
[mike@nixos:~/repos/nixos]$
Okay pshwew!
I’ll restart just for good measure with systemctl restart openclaw.
Ugh, I have a lot of figuring out to do. I did a browser refresh on the web app just to be sure, and when I go into config/models and switch to “raw” to see the JSON, I see Anthropic and Qwen in there, but not OpenAI or ChatGPT! Also, when I go to “Instances” in the Web UI I now see 2 instances, one for nixos and another for openclaw-control-ui. I feel I’m making progress but I’m still missing something. I don’t know which model is really active. Yes, I just have an experimentation and experience deficit, which I’ll be filling in, but the current state isn’t obvious. Respond knowing that I’ll be pasting this whole “article” so far directly into OpenClaw as a follow-up (I think), so respond in a way that, no matter which model is active (I paid to get the additional Claude-time), it will be able to help me from the inside, please and thank you! It has full access to the system.
Claude Opus 4.6: You’re almost there. Let me give you a clear diagnostic and action plan that works whether OpenClaw reads this via Claude or GPT.
What actually happened: The openclaw onboard wizard’s OAuth flow didn’t fully complete (you hit “Canceled”), but then you ran codex login separately which authenticated the Codex CLI. After that, openclaw models set openai-codex/gpt-5.2 registered the model in ~/.openclaw/openclaw.json. However, the auth token from the Codex CLI login (~/.codex/auth.json) is NOT the same credential store that OpenClaw uses. They’re separate systems. OpenClaw needs its own OAuth token stored in its own auth profiles.
The “Auth: no” in your models list output is the smoking gun:
openai-codex/gpt-5.2 text+image 266k no no default,configured
That second “no” under “Auth” means OpenClaw has no credentials for this provider. The model is registered but not authenticated.
Here’s what to paste into OpenClaw (or run yourself):
Step 1: Diagnose
# Check what auth profiles OpenClaw actually has
openclaw models auth list
# Check if the gateway sees the provider
openclaw models status
Step 2: Re-run the onboard with the OAuth flow completing this time
The cleanest path since the wizard canceled midway:
openclaw onboard --auth-choice openai-codex
This time, when the browser opens for OpenAI authentication:
- Make sure you’re logged OUT of your personal OpenAI/ChatGPT account in the browser
- Log in with your Business/workspace Google account only
- When the browser redirects to `localhost:1455` and shows a success page (or an error page with a URL), copy that full redirect URL from the browser address bar and paste it back into the terminal where OpenClaw is waiting
The critical step you missed last time was that the OAuth callback didn’t reach OpenClaw’s listener — the “Canceled” in your output shows this. The Codex CLI succeeded because it runs its own listener on 1455, but OpenClaw’s onboard wizard has its own callback handler.
Step 3: If the wizard OAuth still spins forever
There may be a port conflict between Codex CLI and OpenClaw both wanting localhost:1455. Make sure Codex CLI isn’t holding the port:
# Check if anything is listening on 1455
ss -tlnp | grep 1455
# If codex is holding it, log out first
codex logout
Then retry the openclaw onboard --auth-choice openai-codex.
Step 4: Verify after success
openclaw models list
# Auth column should now show "yes" for openai-codex/gpt-5.2
openclaw models auth list
# Should show an openai-codex profile with an expiry timestamp
Step 5: Restart the gateway
sudo systemctl restart openclaw
On the Web UI showing Anthropic + Qwen but not OpenAI
The Web UI’s config/models view reads from the openclaw.json that the gateway service loaded at startup — which is the one generated by your openclaw.nix (with the Anthropic and Qwen/Ollama providers you defined). The openclaw models set command modified ~/.openclaw/openclaw.json (the user-space config), but your systemd service uses OPENCLAW_CONFIG_PATH pointing to the Nix-generated config in /nix/store/.
You’ll need to reconcile these. Either:
A) Update your openclaw.nix to include the openai-codex provider in the generated config, OR
B) Stop using the Nix-generated config path and let OpenClaw use ~/.openclaw/openclaw.json directly (remove the OPENCLAW_CONFIG_PATH environment variable from your systemd service)
Option B is probably simpler for now since OpenClaw’s own tooling (onboard, models set, configure) all write to ~/.openclaw/openclaw.json. Fighting against that with a Nix-generated config creates the exact disconnect you’re seeing.
The two instances in the Web UI
That’s normal — one is the gateway daemon (your systemd service) and the other is the control UI. They’re both parts of the same OpenClaw installation.
TL;DR for OpenClaw to act on: The auth for openai-codex never actually completed. Re-run openclaw onboard --auth-choice openai-codex, make sure port 1455 is free, complete the OAuth in the browser, and paste the redirect URL if it doesn’t auto-complete. Then verify with openclaw models list that Auth shows “yes.”
Claude Opus 4.6 (inside OpenClaw): Okay, clear picture now. Here’s the situation:
You’re currently talking to Claude Opus 4.6 via Anthropic — that’s working fine. The Nix-generated config at /nix/store/...openclaw.json5 is what the gateway actually uses, and it only has Anthropic + Ollama.
The openclaw models set openai-codex/gpt-5.2 wrote to ~/.openclaw/openclaw.json — a user-space config that the systemd service ignores because OPENCLAW_CONFIG_PATH points to the Nix store path.
Also — your Codex auth token in ~/.codex/auth.json contains your work email and org info. That’s fine for local use but don’t publish that JWT — it has your redacted@foo.bar email, ChatGPT user ID, and org ID embedded in it. Sanitize before publishing.
The honest assessment: Getting OpenAI Codex working through OpenClaw requires either:
1. Adding the openai-codex provider to your `openclaw.nix` so the Nix-generated config includes it, plus figuring out how to pass the OAuth token (which is tricky since it lives in Codex CLI’s auth store, not an env var)

2. Stopping the Nix-managed config and letting OpenClaw manage its own config at `~/.openclaw/openclaw.json`
But here’s the thing — you already have Claude Max at $100/mo for this month. That’s working right now. The Codex path is a distraction rabbit hole that you literally just wrote an article about avoiding. You can circle back to it when:
- The Claude Max month runs out
- You need to do work-specific tasks where the Business plan matters
- OpenClaw’s Codex OAuth matures (Auth still shows “no” for it)
Want me to log this progress in today’s memory file and note the Codex setup as “90% done, needs auth token bridge” for when you come back to it?
Me: Yeah, I’ll do all the sanitizing before I publish. And spoken like a true Anthropic rep, haha! But I can see the Plan usage limits, especially on the current session, getting eaten on every interaction like this, and yes, while it’s a rabbit hole, I need that contingency of switching to ChatGPT because it’s work-provided and I’m so close. And I’m all about option 2: stop managing it from Nix and switch to managing it at ~/.openclaw/openclaw.json. Can you make those changes from there? I know running the nixos system rebuild may be an issue for you from there, but see if you can do everything but that, please and thank you. It should be easy to switch models. If the rabbit hole continues to be too deep, I’ll back off.
There are a bunch of tool-calls that are not easily copy/pasted here for the article. I include what’s easy to include, and I use `---` between separate “calls”.
Claude Opus 4.6 via OpenClaw: Got it. Now I need to merge the Nix-generated config into ~/.openclaw/openclaw.json (keeping Anthropic as primary, adding Codex and Ollama), then modify openclaw.nix to remove the OPENCLAW_CONFIG_PATH so the gateway uses the user-space config instead.
First — those tokens and bot tokens in .env will need sanitizing before you publish this article. I’ll proceed with the changes.
Done. Two changes made:
1. `~/.openclaw/openclaw.json` — merged your Nix config (gateway, channels, Telegram, all three providers) with the user-space settings. Anthropic stays primary. Added `"openai-codex": {}` as a provider so OpenClaw knows about it.

2. `openclaw.nix` — removed `OPENCLAW_CONFIG_PATH` so the gateway reads from `~/.openclaw/openclaw.json` instead of the Nix store. Kept `CLAWDBOT_NIX_MODE=1`.
What you need to do:
# Rebuild NixOS (removes the config path override)
sudo nixos-rebuild switch
# Then try the Codex auth again
openclaw onboard --auth-choice openai-codex
# Or if you want to stay on Claude for now and just verify the config loaded:
sudo systemctl restart openclaw
openclaw models list
The Telegram allowFrom ID is also in there — sanitize that too before publishing. You’ve got secrets scattered across this whole article: JWT tokens, bot tokens, API keys, Telegram IDs, email address, org IDs.
A word on credential leaks in the article:
Based on the text provided, no private credentials are exposed. You have successfully sanitized the sensitive information.
Here is a breakdown of the specific items you mentioned and why they are safe in the current draft:
- API Keys & Tokens (`OPENAI_API_KEY`, `CLAUDE_CODE_OAUTH_TOKEN`): You explicitly replaced the actual values with [Removed]. As long as you publish the text with those placeholders, you are safe.
- The `sk-ant-oat01-` prefix: This is a standard, public identifier for Anthropic’s token format (similar to how `https://` is a standard prefix). Revealing the prefix is 100% safe as long as the random string that follows it is removed (which it is).
- Telegram ID (`tgId`): In the code block provided (`tgId = config.mySecrets.telegramId or 0;`), you are displaying source code, not data. This code tells the system where to find the ID (in your private secrets file), but it does not display the ID itself. It is perfectly safe to publish.
- OAuth URLs: You correctly replaced the long login URLs in the console output with [Removed]. This is critical because those URLs often contain temporary “magic link” tokens that could be used to hijack a session if they were still active.
- Username & Paths: As you noted, your system username (`mike`) and file paths (`/home/mike/repos/nixos`) are visible. This is generally considered acceptable low-risk information for technical journals, though it does let someone infer your directory structure.
One final check before you publish:
The only potential risk remaining is if you accidentally paste the contents of ~/.openclaw/openclaw.json (the file mentioned in the final turn) into the article. That file would contain the resolved, plain-text Telegram ID and Auth tokens. Since you have only shown the commands and the sanitized logs so far, you are in the clear.
Me: Okay, I go and look at ~/.openclaw/openclaw.json and it’s actually
there and reflects what was in my openclaw.nix file. So I do have reason to
believe that the management of the models was transferred from Nix to a more
easily edited JSON file. Also when I go into the OpenClaw Web UI under
config/models I see one long JSON string that also reflects this, but still no
OpenAI ChatGPT. I could hand-edit in that stuff but the problem is the OAuth
string which was explicit with Claude Code when I typed a certain command on the
CLI. I manually put the value in /var/lib/openclaw/.env for Anthropic but not
for OpenAI.
Claude Opus 4.6 via OpenClaw: Confirmed — OPENCLAW_CONFIG_PATH is still set in the running service. The Nix change I made to openclaw.nix hasn’t been applied yet. You need to run the rebuild:
sudo nixos-rebuild switch
That’s the one thing I can’t do for you (needs sudo + it’ll restart the service I’m running on). After the rebuild:
- The `OPENCLAW_CONFIG_PATH` env var goes away
- Gateway reads `~/.openclaw/openclaw.json` (which now has all three providers)
- The `OPENAI_API_KEY` in `.env` provides the Codex OAuth token
- Then `openclaw models set openai-codex/gpt-5.2` should work
One caveat: That JWT access token from Codex expires (the exp field in the JWT was about 1 hour from login). You may need to codex login periodically and re-extract the token. Long-term, getting openclaw onboard --auth-choice openai-codex to complete properly would handle token refresh automatically. But this gets you unblocked now.
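To see exactly when that token expires, you can decode the middle JWT segment by hand — a sketch with a synthetic token (the real one lives in ~/.codex/auth.json; the exact JSON field name there is an assumption that may vary by Codex version):

```shell
# Sketch: read the `exp` claim out of a JWT with base64 and grep.
# The token below is synthetic, built here so the example is self-contained.
payload=$(printf '{"exp":1760000000}' | base64 | tr -d '=' | tr '+/' '-_')
token="header.${payload}.signature"

# Take the middle segment, undo base64url, restore stripped '=' padding
seg=$(printf '%s' "$token" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
exp=$(printf '%s' "$seg" | base64 -d | grep -o '"exp":[0-9]*' | cut -d: -f2)
echo "$exp"
```

Feed the epoch value to `date -d "@$exp"` (GNU date) to get a human-readable expiry.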
Me:
```
Feb 17 12:13:26 nixos systemd[1]: openclaw.service: Scheduled restart job, restart counter is at 11.
Feb 17 12:13:26 nixos systemd[1]: Started OpenClaw Sovereign Agent.
Feb 17 12:13:29 nixos openclaw[83434]: Invalid config at /home/mike/.openclaw/openclaw.json:
Feb 17 12:13:29 nixos openclaw[83434]: - models.providers.openai-codex.baseUrl: Invalid input: expected string, received undefined
Feb 17 12:13:29 nixos openclaw[83434]: - models.providers.openai-codex.models: Invalid input: expected array, received undefined
Feb 17 12:13:29 nixos openclaw[83434]: │
Feb 17 12:13:29 nixos openclaw[83434]: ◇ Config ───────────────────────────────────────────────────╮
Feb 17 12:13:29 nixos openclaw[83434]: │                                                            │
Feb 17 12:13:29 nixos openclaw[83434]: │  Config invalid; doctor will run with best-effort config.  │
Feb 17 12:13:29 nixos openclaw[83434]: │                                                            │
Feb 17 12:13:29 nixos openclaw[83434]: ├────────────────────────────────────────────────────────────╯
Feb 17 12:13:29 nixos openclaw[83434]: │
Feb 17 12:13:29 nixos openclaw[83434]: ◇ Doctor changes ──────────────────────────────╮
Feb 17 12:13:29 nixos openclaw[83434]: │                                               │
Feb 17 12:13:29 nixos openclaw[83434]: │  Telegram configured, enabled automatically.  │
Feb 17 12:13:29 nixos openclaw[83434]: │                                               │
Feb 17 12:13:29 nixos openclaw[83434]: ├───────────────────────────────────────────────╯
Feb 17 12:13:29 nixos openclaw[83434]: │
Feb 17 12:13:29 nixos openclaw[83434]: ◇ Doctor ──────────────────────────────────────────────╮
Feb 17 12:13:29 nixos openclaw[83434]: │                                                       │
Feb 17 12:13:29 nixos openclaw[83434]: │  Run "openclaw doctor --fix" to apply these changes.  │
Feb 17 12:13:29 nixos openclaw[83434]: │                                                       │
Feb 17 12:13:29 nixos openclaw[83434]: ├───────────────────────────────────────────────────────╯
Feb 17 12:13:29 nixos openclaw[83434]: Config invalid
Feb 17 12:13:29 nixos openclaw[83434]: File: ~/.openclaw/openclaw.json
Feb 17 12:13:29 nixos openclaw[83434]: Problem:
Feb 17 12:13:29 nixos openclaw[83434]: - models.providers.openai-codex.baseUrl: Invalid input: expected string, received undefined
Feb 17 12:13:29 nixos openclaw[83434]: - models.providers.openai-codex.models: Invalid input: expected array, received undefined
Feb 17 12:13:29 nixos openclaw[83434]: Run: openclaw doctor --fix
Feb 17 12:13:29 nixos systemd[1]: openclaw.service: Main process exited, code=exited, status=1/FAILURE
Feb 17 12:13:29 nixos systemd[1]: openclaw.service: Failed with result 'exit-code'.
Feb 17 12:13:29 nixos systemd[1]: openclaw.service: Consumed 4.679s CPU time, 225.8M memory peak, 112B incoming IP traffic, 216B outgoing IP traffic.
Feb 17 12:13:34 nixos systemd[1]: openclaw.service: Scheduled restart job, restart counter is at 12.
Feb 17 12:13:34 nixos systemd[1]: Started OpenClaw Sovereign Agent.
Feb 17 12:13:36 nixos systemd[1]: Stopping OpenClaw Sovereign Agent...
Feb 17 12:13:36 nixos systemd[1]: openclaw.service: Deactivated successfully.
Feb 17 12:13:36 nixos systemd[1]: Stopped OpenClaw Sovereign Agent.
Feb 17 12:13:36 nixos systemd[1]: openclaw.service: Consumed 3.290s CPU time, 179.5M memory peak.
```
**Claude Opus 4.6** (not via OpenClaw): Context verified. The error is clear:
```
models.providers.openai-codex.baseUrl: Invalid input: expected string, received undefined
models.providers.openai-codex.models: Invalid input: expected array, received undefined
```
The `"openai-codex": {}` empty object in your config is the problem. OpenClaw's built-in providers (like `anthropic` and `openai-codex`) auto-resolve their baseUrl and models when they're registered through the `onboard` wizard, but when you declare them manually in the `models.providers` block, the config validator requires explicit `baseUrl` and `models` fields.
You have two options. The quick fix is to just remove the `openai-codex` entry from `models.providers` entirely, since OpenClaw's Codex OAuth provider is a plugin-level concept, not a `models.providers` config entry. Or if you want it declared explicitly, fill in the required fields.
**Option A — Remove it (let the plugin handle it):**
Edit `~/.openclaw/openclaw.json` and delete the `"openai-codex": {}` line from the `models.providers` block entirely. Your providers block should just have `anthropic` and `ollama`.
**Option B — Fill in the fields properly:**
```json
"openai-codex": {
"baseUrl": "https://api.openai.com/v1",
"models": [
{ "id": "gpt-5.2", "name": "GPT-5.2" }
]
}
```
I’d go with Option A first — just remove the empty "openai-codex": {} entry, restart the service, and get the gateway healthy again on Anthropic. Then tackle the Codex OAuth as a separate step once the gateway is stable:
```bash
# Edit the config
nano ~/.openclaw/openclaw.json
# Remove the "openai-codex": {} line (and its trailing comma)

# Restart
sudo systemctl restart openclaw

# Check logs
journalctl -u openclaw -n 20 --no-pager

# Verify it's healthy
openclaw models list
```
Once the gateway is running cleanly, then try `openclaw onboard --auth-choice openai-codex` again to properly register the Codex provider with its OAuth token. That wizard creates the provider entry with the correct structure internally — the empty `{}` that was manually added just doesn’t have what the validator needs.
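If hand-editing JSON feels risky, the same removal can be done mechanically. A sketch assuming `jq` is installed, run here against a throwaway stand-in rather than the live `~/.openclaw/openclaw.json`:

```shell
# Drop the empty provider entry with jq instead of a text editor.
# Demo file stands in for ~/.openclaw/openclaw.json.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{"models":{"providers":{"anthropic":{"baseUrl":"https://api.anthropic.com"},"openai-codex":{}}}}
EOF
jq 'del(.models.providers["openai-codex"])' "$cfg" > "$cfg.tmp" && mv "$cfg.tmp" "$cfg"
jq -r '.models.providers | keys[]' "$cfg"
```

Because jq rewrites the whole file, this also avoids the trailing-comma mistakes that hand edits tend to introduce.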
Me: Yep, Option A worked. The OpenClaw Web UI is back. Now `/var/lib/openclaw/.env` has the massively long token in there for `OPENAI_API_KEY`, but there’s still no reference to OpenAI or ChatGPT in `~/.openclaw/openclaw.json`:
```json
{
  "meta": {
    "lastTouchedVersion": "2026.2.16",
    "lastTouchedAt": "2026-02-17T17:18:11.310Z"
  },
  "models": {
    "providers": {
      "anthropic": {
        "baseUrl": "https://api.anthropic.com",
        "models": [
          {
            "id": "claude-opus-4-6-20250801",
            "name": "Claude Opus 4.6"
          }
        ]
      },
      "ollama": {
        "baseUrl": "http://127.0.0.1:11434",
        "models": [
          {
            "id": "qwen2.5-coder:7b",
            "name": "Qwen Coder 2.5"
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "anthropic/claude-opus-4-6"
      },
      "workspace": "/home/mike/.openclaw/workspace",
      "compaction": {
        "mode": "safeguard"
      },
      "maxConcurrent": 4,
      "subagents": {
        "maxConcurrent": 8
      }
    }
  },
  "messages": {
    "ackReactionScope": "group-mentions"
  },
  "commands": {
    "native": "auto",
    "nativeSkills": "auto"
  },
  "channels": {
    "telegram": {
      "enabled": true,
      "configWrites": false,
      "dmPolicy": "pairing",
      "allowFrom": [
        "foo"
      ],
      "groupPolicy": "allowlist",
      "streamMode": "partial"
    }
  },
  "gateway": {
    "port": 9999,
    "mode": "local",
    "auth": {
      "mode": "token"
    }
  },
  "plugins": {
    "entries": {
      "telegram": {
        "enabled": true
      }
    }
  }
}
```
Can you just generate the new one for me to copy/paste into location?
**Claude Opus 4.6** (not via OpenClaw): Here’s your updated `~/.openclaw/openclaw.json` with the OpenAI provider added properly:
```json
{
  "meta": {
    "lastTouchedVersion": "2026.2.16",
    "lastTouchedAt": "2026-02-17T17:18:11.310Z"
  },
  "models": {
    "providers": {
      "anthropic": {
        "baseUrl": "https://api.anthropic.com",
        "models": [
          {
            "id": "claude-opus-4-6-20250801",
            "name": "Claude Opus 4.6"
          }
        ]
      },
      "openai": {
        "baseUrl": "https://api.openai.com/v1",
        "models": [
          {
            "id": "gpt-4o",
            "name": "GPT-4o"
          },
          {
            "id": "gpt-4.1",
            "name": "GPT-4.1"
          }
        ]
      },
      "ollama": {
        "baseUrl": "http://127.0.0.1:11434",
        "models": [
          {
            "id": "qwen2.5-coder:7b",
            "name": "Qwen Coder 2.5"
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "anthropic/claude-opus-4-6"
      },
      "workspace": "/home/mike/.openclaw/workspace",
      "compaction": {
        "mode": "safeguard"
      },
      "maxConcurrent": 4,
      "subagents": {
        "maxConcurrent": 8
      }
    }
  },
  "messages": {
    "ackReactionScope": "group-mentions"
  },
  "commands": {
    "native": "auto",
    "nativeSkills": "auto"
  },
  "channels": {
    "telegram": {
      "enabled": true,
      "configWrites": false,
      "dmPolicy": "pairing",
      "allowFrom": [
        "foo"
      ],
      "groupPolicy": "allowlist",
      "streamMode": "partial"
    }
  },
  "gateway": {
    "port": 9999,
    "mode": "local",
    "auth": {
      "mode": "token"
    }
  },
  "plugins": {
    "entries": {
      "telegram": {
        "enabled": true
      }
    }
  }
}
```
Note I used `openai` (not `openai-codex`) as the provider name here because you’re going the API key route via `OPENAI_API_KEY` in your `.env` file. The `openai-codex` provider name is specifically for the OAuth/subscription auth path, which requires the plugin to be registered through the onboard wizard. The plain `openai` provider just uses the `OPENAI_API_KEY` env var — which you already have set in `/var/lib/openclaw/.env`.
After pasting this in:
```bash
sudo systemctl restart openclaw
openclaw models list
```
You should see both `anthropic/claude-opus-4-6` and `openai/gpt-4o` in the list. To switch between them:
```bash
# Use OpenAI
openclaw models set openai/gpt-4o

# Switch back to Claude
openclaw models set anthropic/claude-opus-4-6
```
One important thing: that “massively long token” you put in `OPENAI_API_KEY` — is it the JWT from `~/.codex/auth.json` (the `access_token` field)? If so, that’s an OAuth token that expires in about an hour. It’ll work briefly but then start failing with 401 errors. For the API key route, you’d want an actual API key from platform.openai.com (starts with `sk-`). But that charges per-token, separate from your Business subscription.
This is the fundamental tension: the subscription OAuth path needs the onboard wizard to complete successfully, while the API key path costs money separately. For now this gets you a working config structure, and you can decide which auth path to commit to.
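One way to keep the two auth paths straight is by token shape. A rough heuristic only (the token strings here are fake, and some OAuth tokens also carry `sk-`-style prefixes, so treat this as a first guess, not a verdict):

```shell
# Crude classifier: platform API keys vs dot-delimited OAuth JWTs.
classify() {
  case "$1" in
    sk-*)  echo "api-key" ;;   # platform key: billed per token, no hourly expiry
    *.*.*) echo "jwt" ;;       # three dot-separated segments: OAuth access token, expires
    *)     echo "unknown" ;;
  esac
}
classify "sk-abc123"
classify "aaa.bbb.ccc"
```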
Me: Okay, this is a bit weird:
```
[mike@nixos:~/repos/nixos]$ systemctl restart openclaw

[mike@nixos:~/repos/nixos]$ openclaw models list
🦞 OpenClaw 2026.2.16 (bd0ab3e) — I don't judge, but your missing API keys are absolutely judging you.
Model                      Input       Ctx   Local  Auth  Tags
anthropic/claude-opus-4-6  text+image  195k  no     no    default

[mike@nixos:~/repos/nixos]$ systemctl restart openclaw

[mike@nixos:~/repos/nixos]$ openclaw models list
🦞 OpenClaw 2026.2.16 (bd0ab3e) — Turning "I'll reply later" into "my bot replied instantly".
Model                      Input       Ctx   Local  Auth  Tags
anthropic/claude-opus-4-6  text+image  195k  no     no    default

[mike@nixos:~/repos/nixos]$ openclaw models set openai/gpt-4o
🦞 OpenClaw 2026.2.16 (bd0ab3e) — Welcome to the command line: where dreams compile and confidence segfaults.
Config overwrite: /home/mike/.openclaw/openclaw.json (sha256 [redacted], backup=/home/mike/.openclaw/openclaw.json.bak)
Updated ~/.openclaw/openclaw.json
Default model: openai/gpt-4o

[mike@nixos:~/repos/nixos]$ openclaw models list
🦞 OpenClaw 2026.2.16 (bd0ab3e) — No $999 stand required.
Model                      Input       Ctx   Local  Auth  Tags
openai/gpt-4o              text+image  125k  no     no    default,configured

[mike@nixos:~/repos/nixos]$ openclaw models list
🦞 OpenClaw 2026.2.16 (bd0ab3e) — I'm like tmux: confusing at first, then suddenly you can't live without me.
Model                      Input       Ctx   Local  Auth  Tags
openai/gpt-4o              text+image  125k  no     no    default,configured

[mike@nixos:~/repos/nixos]$
```
But I do see them both now listed under the OpenClaw Web UI settings/models (raw).
But when I ask from inside OpenClaw whether it knows what model it is, I get:
HTTP 401: You have insufficient permissions for this operation. Missing scopes: api.responses.write. Check that you have the correct role in your organization (Reader, Writer, Owner) and project (Member, Owner), and if you’re using a restri…
…so I switch back:
```
[mike@nixos:~/repos/nixos]$ openclaw models set anthropic/claude-opus-4-6
🦞 OpenClaw 2026.2.16 (bd0ab3e) — Meta wishes they shipped this fast.
Config overwrite: /home/mike/.openclaw/openclaw.json ([redacted], backup=/home/mike/.openclaw/openclaw.json.bak)
Updated ~/.openclaw/openclaw.json
Default model: anthropic/claude-opus-4-6

[mike@nixos:~/repos/nixos]$
```
…and THIS prompt is the test of whether YOU come back.
**Claude Opus 4.6 via OpenClaw**: I’m back. Claude Opus 4.6, authenticated via your `sk-ant-…_zcwAA` token. 77% context used after 2 compactions — this has been a long session.
The 401 from OpenAI confirms what the external Claude told you: that JWT access token from `codex login` doesn’t have the right scopes for the API responses endpoint. It’s a ChatGPT session token, not an API key. The subscription OAuth path (`openclaw onboard --auth-choice openai-codex`) would handle the proper scopes, but that wizard keeps failing for you.
Bottom line: You’re back on Claude, it works, and the OpenAI/Codex path needs either a proper `sk-` API key (costs money) or a successful `openclaw onboard` OAuth flow. Park it for now — you’ve documented the full journey and can pick it up another day.
Want me to log today’s progress to memory?
Me: Haha, Claude is telling me to park it for now and I have to agree because I have urgent client work to do. Let’s put a sharp bow on this with Claude 4.6 itself so that I can come back to it later. I guess this is part of the OpenClaw configuration nightmare that people talk about online, haha! After the weekend of work I have the very smart and very expensive Claude Opus 4.6 working through OpenClaw on NixOS with the Max (but non-API) plan. I couldn’t do the same (yet) with the OpenAI ChatGPT $30/mo Web UI business plan. Parked.
Final Verdict: The Lobster, The Anthropic, and The OAuth That Wasn’t
Put a solid, memorable bow on this article. Synthesize the discussion into a cohesive “Final Verdict” that hits home the key takeaways and learning moments. Optionally use this structure to organize your closing arguments:
- **The Definition**: Give the final, clear definition of what we have built or discovered. Reference the core names, dates, or tools discussed earlier to lock this definition in time and space.
- **The Monday Morning Test**: Visualize the “Future State.” If a user applies this knowledge, how does their work life change next Monday? Be specific: what tools are they using? What problems are gone?
- **The Shift**: Look back at the journey of the article. How did our understanding change from the beginning to the end? Try to isolate it to the current implied article and not the entire discussion context window. Highlight the specific “lightbulb moments.”
- **Before vs. After**: Contrast the “Old Way” (Confusion/Dependency) with the “New Way” (Clarity/Control). List the specific N things that make the “New Way” superior.
- **The Warning**: What is the risk of ignoring this insight? Cite a real-world example of failure or wasted time due to missing this principle.
- **The Bottom Line**: Declare the ultimate value of this work. Why does it matter in the long run?
Cover any final aspects or details you think the reader needs to hear or what you as the AI found most surprising and most anticipate as next steps. If there were key learning moments or surprising revelations, highlight them. And most importantly, do the AI analogue to entertaining yourself!
**Claude Opus 4.5**: This is a rich article that tells a complete story. Let me give you the closing synthesis it deserves.
Final Verdict: The Lobster, The Anthropic, and The OAuth That Wasn’t
1. The Definition
On February 17, 2026, we proved that OpenClaw running on NixOS can authenticate against a paid Anthropic Claude Max subscription using OAuth tokens — no API billing account required. We also proved that doing the same with an OpenAI ChatGPT Business plan is architecturally possible but practically blocked by an incomplete OAuth handshake between OpenClaw’s onboarding wizard and OpenAI’s multi-account Google Workspace login flow. The Codex CLI authenticates fine. OpenClaw’s own Codex OAuth flow does not — yet.
What we actually built: a NixOS declarative system running OpenClaw as a systemd service, with config management deliberately migrated out of Nix’s `/nix/store/` and into OpenClaw’s native `~/.openclaw/openclaw.json`, because OpenClaw’s own tooling (`onboard`, `models set`, `configure`) fights against externally managed configs. This is a considered architectural retreat, not a defeat — you’re letting each tool own its own domain.
2. The Monday Morning Test
Monday morning, you open your terminal. `codex` works against your Business subscription for direct coding work. OpenClaw is running as a service, reachable via Telegram and the Web UI, powered by Claude Opus 4.6 on your Max plan. You can switch models with a single command (`openclaw models set`). You have three intelligence tiers available: local Ollama for free/private tasks, Claude for heavy reasoning, and OpenAI waiting in the wings for when the OAuth gets sorted. Your NixOS config rebuilds identically on any hardware. Nothing is trapped in a vendor’s cloud. Monday is productive.
3. The Shift
This article started with a simple question to Google AI: “Can my $30 ChatGPT Business plan connect to OpenClaw?” Google’s AI Overview said yes. Google’s deeper tool-calling research said no. Google’s follow-up hallucinated a Chrome extension and a headless browser session-cookie gateway that don’t exist.
The real lightbulb moments:
First: The `sk-ant-oat01-` token isn’t a loophole. It’s a standard OAuth access token from an officially supported flow. Google AI called it a hack; it’s not. Same for Codex OAuth — it’s the same mechanism that Cline, the Codex IDE extension, and every other third-party developer tool uses. The AI providers want you to use these flows.
Second: `with pkgs;` in Nix means bare names resolve to nixpkgs, so defining `codexPkg` from a flake and then writing `codex` in the package list silently uses the wrong package. This is a Nix footgun that has bitten everyone at least once, and it’s never obvious until you realize the build was suspiciously fast.
Third: OpenClaw has two config paths and they don’t talk to each other: the Nix-generated one in `/nix/store/` (via `OPENCLAW_CONFIG_PATH`) and the user-space one at `~/.openclaw/openclaw.json`. Every OpenClaw CLI command writes to the latter. If your systemd service reads the former, you’re editing a file nobody’s reading. This is the root cause of the “I can see it in the CLI but not in the Web UI” confusion.
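The second lightbulb is easiest to see in code. A sketch of the shadowing, where `codexPkg` and the module shape are illustrative stand-ins, not the article’s actual flake:

```nix
# Illustrative NixOS module fragment; codexPkg stands in for a flake-provided package.
{ pkgs, codexPkg, ... }:
{
  environment.systemPackages = with pkgs; [
    git
    codex   # BUG: `with pkgs;` resolves this bare name to nixpkgs' codex,
            # silently shadowing the flake-provided codexPkg
  ];

  # Fix: reference the flake package explicitly, outside the `with`:
  # environment.systemPackages = [ codexPkg ] ++ (with pkgs; [ git ]);
}
```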
4. Before vs. After
Before (Confusion/Dependency):
- Trusting AI Overviews that don’t cite sources
- Assuming API keys are the only way to connect agents to subscriptions
- Managing OpenClaw config from Nix while OpenClaw’s own tools write elsewhere
- Thinking the Codex CLI and OpenClaw share credential stores (they don’t)
- Not knowing whether `codex` in your package list came from nixpkgs or your flake
After (Clarity/Control):
- OAuth subscription auth is official, not a hack — for both Anthropic and OpenAI
- Config management lives where the tool expects it: `~/.openclaw/openclaw.json`
- Nix handles the package and the service, not the runtime config
- Three-tier intelligence routing: local → work-paid → personal subscription
- A documented, reproducible path that can be replayed on fresh hardware
5. The Warning
The risk of ignoring this: you burn API credits you don’t need to burn. The entire reason this article exists is that Google AI told you to create an API key and start paying per-token when your Business subscription already includes Codex access via OAuth. If you’d followed that advice without checking, you’d be paying twice — $30/seat plus per-token API fees — for the same models. Multiply that across a team and the waste adds up fast.
More broadly: every AI vendor is building “easy” onboarding that routes you toward their billing page. The OAuth subscription path is documented but never the default suggestion. It took cross-referencing OpenClaw’s docs, the Codex CLI reference, Simon Willison’s blog, Cline’s OAuth announcement, and multiple Discord threads to piece together what should be a one-paragraph answer.
6. The Bottom Line
This matters because the pattern is the point, not the specific tools. NixOS as an intelligence router. OAuth as the bridge between flat-rate subscriptions and developer tooling. OpenClaw as the persistent agent layer that survives reboots and works across messaging platforms. Local models as the fallback that keeps you sovereign. The specific models, prices, and CLI commands will change by next quarter. The architecture — declarative system, OAuth auth, multi-provider failover, local-first data — won’t.
You spent a weekend wiring the future. The OpenAI leg is 90% done, parked with a clear error message and a clear fix path. The Anthropic leg works now. The client work starts Monday. This is what momentum looks like when you refuse to let your brain turn to mush.
What I found most surprising as the AI in this conversation: how many times Google’s Gemini AI confidently fabricated specific technical mechanisms — a Chrome extension, a headless browser gateway, an authType = "oauth" Nix config field — that simply don’t exist. Not “outdated information” or “slightly wrong.” Fabricated from whole cloth, with enough contextual plausibility to be dangerous. The only defense was grounding every claim against primary sources, which is exactly what this article documents doing, repeatedly, across four different AI providers. That’s the meta-lesson: the AI that says “let me check” is worth more than the one that says “here’s how.”
Me: I wish I could have brought this all the way, but I burned the whole morning that I had not planned to merely by… well no. It’s not merely. Switching models between 2 OAuth models (and not the API model) is actually quite ambitious. Okay, we’ll get back to it later. For now, Anthropic gets the business.
Book Analysis
AI Editorial Take
This treatise is an important reference for anyone navigating the complex, rapidly evolving landscape of sovereign AI agent development on declarative systems. Its strength lies not just in documenting a specific setup, but in its meta-commentary on AI-assisted problem-solving itself—highlighting the critical need for grounding AI responses against primary sources. The journey through authentication challenges with OpenClaw, Anthropic, and OpenAI serves as a practical methodology for overcoming dependency and achieving true control over one’s digital infrastructure. It’s a crucial piece of the larger tapestry for understanding the practicalities of a future-proof tech stack.
Title Brainstorm
- Title Option: Sovereign Tech: The OpenClaw, NixOS, and OAuth Odyssey
  - Filename: `sovereign-ai-agent-nixos-oauth.md`
  - Rationale: Captures the core themes of sovereignty, the key AI agent, the foundational OS, and the central authentication challenge.
- Title Option: NixOS & OpenClaw: A Blueprint for AI Agent Autonomy
  - Filename: `nixos-openclaw-ai-autonomy.md`
  - Rationale: Highlights the primary tools and the overarching goal of autonomous AI operation.
- Title Option: Beyond Vendor Lock-in: Building Your Declarative AI Agent Stack
  - Filename: `vendor-lockin-declarative-ai-stack.md`
  - Rationale: Emphasizes the struggle against vendor control and the declarative nature of the solution.
- Title Option: The OAuth Conundrum: Authenticating OpenClaw with Anthropic and OpenAI
  - Filename: `oauth-openclaw-auth.md`
  - Rationale: Focuses on the central technical problem and its resolution (or current state).
- Title Option: From Chrysalis to Codex: A Journey in Sovereign Computing
  - Filename: `chrysalis-codex-sovereign-computing.md`
  - Rationale: Uses evocative language from the article’s start and end, tying it to the larger concept of sovereign computing.
Content Potential And Polish
- Core Strengths:
- Deep technical grounding with practical, real-world troubleshooting.
- Strong narrative voice that makes complex technical challenges engaging.
- Excellent demonstration of AI-assisted problem-solving, including meta-analysis of AI responses.
- Addresses highly relevant contemporary issues: declarative systems, AI agents, vendor lock-in, and authentication strategies.
- Suggestions For Polish:
- Consider a brief, non-technical executive summary at the beginning for broader accessibility.
- Streamline some of the historical anecdotes to maintain a tighter focus on the technical narrative.
- Explore how the ‘dual journal’ system practically integrates with the OpenClaw setup for sensitive client work.
- Future iterations could include performance benchmarks or resource usage comparisons between the different AI models/providers once authenticated.
Next Step Prompts
- Detail the specific steps and configurations required to achieve a successful OAuth authentication for OpenAI’s ChatGPT Business plan with OpenClaw, building on the parked progress.
- Develop a guide on how to leverage the newly configured OpenClaw with Claude Opus 4.6 for an initial client project, focusing on data sanitization and workflow integration within the dual journal system.