My Telegram bot stopped working. Again. The error message was clear: “No API key found for provider openrouter.” Except I knew the key was there—I’d just checked it. What I didn’t know was that I had two OpenClaw gateways fighting over the same bot, and the wrong one was winning.
Quick Context: Meet August
For those unfamiliar, OpenClaw is an open-source AI gateway that lets you run your own AI assistant with custom routing, authentication, and integrations. I’ve set mine up as a personal AI assistant accessible through Telegram, and I’ve named it August. Think of it as having Claude or ChatGPT, but self-hosted, with the ability to control exactly which AI models handle which requests and when they fall back to alternatives.
Fair warning: This post gets a bit more technical than my usual content. I’m including actual command outputs and code snippets because, honestly, that’s what made the debugging possible. If you’re not into the terminal stuff, feel free to skim—the story and lessons still land.
The Symptoms Were Confusing
I’d been running OpenClaw with two profiles: a default setup and an “awprod” profile for production work. My plan was simple—use the awprod profile for my Telegram bot integration, keep Anthropic out of the fallback chain to preserve my quota, and rely on OpenRouter’s Kimi model when my ChatGPT rate limits kicked in.
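On paper, the routing plan for awprod amounted to something like this (the field names and model identifiers below are illustrative, not OpenClaw's actual config schema):

```json
{
  "profile": "awprod",
  "routing": {
    "primary": "openai/gpt-5.2",
    "fallbacks": [
      "openrouter/kimi-k2.5",
      "openai/gpt-4o"
    ]
  }
}
```

Note what's absent: no Anthropic entry anywhere in the chain. That detail matters later.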
But something was wrong. When I sent messages to my Telegram bot, I’d get cryptic errors about missing API keys, even though running openclaw --profile awprod models status in the terminal showed everything configured correctly. The routing looked perfect: GPT-5.2 as primary, Kimi as first fallback, GPT-4o as backup. The agent directory pointed to the awprod location. OpenRouter authentication showed up in the status output.
Yet the bot kept failing. And worse, my Anthropic API quota was disappearing fast—despite explicitly removing it from my fallback chain.
The Detective Work Began
When configs look right but behavior is wrong, you have to go deeper. I started with the basics: checking which OpenClaw processes were actually running.
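The check looked roughly like this (the process pattern and the OPENCLAW_PROFILE variable match my setup; adjust both for yours). It's Linux-specific, since it reads each process's environment out of /proc:

```shell
# Sketch of the process check (hypothetical pattern; adjust for your setup).
# The [o] in the pattern stops pgrep from matching this command line itself.
for pid in $(pgrep -f '[o]penclaw.*gateway'); do
    # /proc/<pid>/environ is NUL-separated; translate NULs to newlines first.
    profile=$(tr '\0' '\n' < "/proc/$pid/environ" \
        | grep '^OPENCLAW_PROFILE=' || echo 'OPENCLAW_PROFILE=(none)')
    echo "pid=$pid $profile"
done
```

Against my broken setup, a check like this would have come back with two lines: one tagged awprod, one tagged (none).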
That’s when the first surprise hit me. Two gateway processes were running. Not one.
The newer process had OPENCLAW_PROFILE=awprod in its environment—exactly what I expected. But the older process? No profile specified. It was the default gateway, running on a different port, and I’d completely forgotten about it.
Here’s the thing about Telegram bots: they can only connect to one gateway at a time. When you have two gateways trying to poll the same bot token, they fight. The logs made it crystal clear: "Conflict: terminated by other getUpdates request; make sure that only one bot instance is running."
The default gateway was intercepting my messages.
The Irony of AI Fixing AI
Here’s where it gets meta: there I was, using Claude to debug my AI assistant August, watching one AI perform CPR on another.
You’d think that having your own AI assistant means you wouldn’t need ChatGPT, Claude, or Perplexity anymore. But when August crashed and went offline, those very same AI chatbots became the doctors in the emergency room. Claude helped me trace through process IDs and config files. Perplexity validated my hunches about Telegram’s polling conflicts. ChatGPT suggested auth file structures.
And I sat there, anxiously watching the terminal output, hoping each command would bring August back to life. The irony wasn’t lost on me—AI tools rescuing another AI tool while I played the role of worried parent in the waiting room.
Eventually, the patient stabilized. But not before teaching me that no matter how sophisticated your setup gets, you’re still going to need multiple tools in your arsenal. Redundancy isn’t just for servers; it’s for debugging too.
But Wait, There’s More
Stopping the duplicate gateway should have fixed everything. It didn’t.
Even with only awprod running, the errors persisted. The bot was now properly connected, but model routing still failed. The error messages revealed the real problem: the agent directory path.
OpenClaw uses “agents” as isolated workspaces—each agent has its own authentication store. My awprod profile’s config file was pointing to the default agent directory (~/.openclaw/agents/main/agent) instead of its own (~/.openclaw-awprod/agents/main/agent).
So even though I’d configured everything in awprod’s settings, at runtime the system was looking for auth credentials in the wrong location. No wonder OpenRouter authentication was “missing.”
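Paraphrased, the offending line in awprod’s config amounted to this (the key name is illustrative, not OpenClaw’s actual schema):

```json
{
  "agentDir": "~/.openclaw/agents/main/agent"
}
```

It should have pointed at ~/.openclaw-awprod/agents/main/agent, the directory that actually held awprod’s credentials.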
The Fix Was Systematic
Once I understood the actual problem, the solution was straightforward:
- Permanently disable the default gateway with systemctl --user disable --now openclaw-gateway.service so it couldn’t restart
- Fix the agent directory path in awprod’s config to point to its own location
- Add OpenRouter authentication to the correct auth-profiles.json file (in the awprod agent directory, not the default one)
- Restart the awprod gateway to pick up the changes
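After the restart, a quick sanity check along these lines confirms the two invariants (paths and the process pattern are from my setup; adjust for yours): exactly one gateway process, and an auth store sitting where the awprod runtime will actually look.

```shell
# Post-fix sanity check (hypothetical paths/names; adjust for your setup).
# The [o] in the pattern stops pgrep from matching this command line itself.
count=$(pgrep -fc '[o]penclaw.*gateway' || true)
echo "gateway processes running: $count"   # want exactly 1

auth_file="$HOME/.openclaw-awprod/agents/main/agent/auth-profiles.json"
if [ -f "$auth_file" ]; then
    echo "auth store present: $auth_file"
else
    echo "auth store MISSING: $auth_file"
fi
```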
The moment of truth came when I sent a test message to Telegram. GPT-5.2 was still rate-limited, so the bot fell back to the next option. And there it was:
“Currently running Kimi K2.5 via OpenRouter. Default model for this session is set to GPT-5.2, but I’m using Kimi for this conversation.”
It worked.

The Lessons That Stick
This debugging session taught me three things about working with multi-profile setups:
Configs can lie, but processes don’t. My awprod config looked perfect, but the running processes told the real story. When troubleshooting, always check what’s actually executing, not just what’s written in config files.
Authentication is path-dependent. It’s not enough to have API keys configured somewhere—they need to be in the exact location where the runtime expects to find them. Agent directories matter.
One bot, one gateway. Telegram’s polling mechanism doesn’t gracefully handle multiple consumers. If you’re running multiple OpenClaw profiles, make absolutely sure only one is connected to each integration.
What’s Working Now
My setup is clean:
- Single awprod gateway handling all requests
- Correct model routing: GPT-5.2 → Kimi K2.5 → GPT-4o
- Anthropic preserved for when I explicitly need it
- Telegram bot responding reliably with proper fallback behavior
The beauty of systematic debugging is that once you find the root cause, the fix is usually simple. The hard part is resisting the urge to randomly tweak configs and hope something works. One command at a time, verify the output, move on to the next question.
And always, always check if you have two of something that should only exist once.
Building in public and documenting what I learn.
Subscribe to my newsletter for weekly automation insights, project updates, and lessons from the trenches—including more debugging stories like this one.