A few weeks ago I said “make making money fun” and launched an AI trading bot I’d built using Claude as the logic machine and Alpaca as the crypto playground. I genuinely believed it. Then the bot lost money. Then my own slogan started to feel like a personal attack.
On April 3, I handed a $100,000 paper trading account to that bot and told it to figure out crypto. By April 7 it was up over $2,000 and I was feeling good about the whole thing. By April 11 most of those gains were gone. By April 20 the AI brain had quietly switched off while the bot kept running, logging zeros, and looking completely normal.
This is that story. And yes, making money fun turns out to be surprisingly expensive.
What I Actually Built
If you missed the first post, here is the short version. TBot is an automated crypto trading system I built from scratch. Claude acts as the signal brain, evaluating market conditions and deciding when to buy or sell. Alpaca provides the paper trading account and executes the trades. The bot runs on a Hetzner virtual machine, logs everything to Supabase, and sends me Telegram alerts so I can pretend I am actively monitoring it while doing other things.

The core idea was to use AI not just as a content tool or a chatbot, but as an actual decision-making layer inside a live system. That part worked. What happened around it is the more interesting story.
Act One: The Hot Start
The first five days were almost suspiciously good. The market was ranging, meaning prices were moving sideways within a predictable band rather than trending sharply in either direction. That is exactly the condition tbot was designed for. It would identify assets trading near their lower range, buy in, and exit when they bounced back toward the middle.
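I won't reproduce TBot's actual code here, but the ranging-market rule can be sketched in a few lines. Everything below, the function name, the band thresholds, is illustrative, not the real implementation:

```python
def range_signal(price: float, range_low: float, range_high: float,
                 entry_band: float = 0.15, exit_band: float = 0.5) -> str:
    """Toy version of the ranging-market rule: buy near the bottom of the
    recent range, sell once price reverts toward the middle.
    Names and thresholds are illustrative assumptions, not TBot's code."""
    # Where does the price sit inside the range? 0.0 = low, 1.0 = high.
    position = (price - range_low) / (range_high - range_low)
    if position <= entry_band:
        return "buy"   # trading near the lower edge of the range
    if position >= exit_band:
        return "sell"  # bounced back toward the middle or above
    return "hold"
```

With the range pinned at $100 to $110, a price of $101 would trigger a buy and $106 a sell, while $103 sits in the hold zone.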
By April 7, cumulative profit had hit $2,058. The API costs were running about $6 a day and the bot was more than covering them. Everything was working exactly as designed.
I should have known that was too clean.
Act Two: The Crash
On April 8, a tariff announcement hit the markets. Bitcoin dropped sharply. The kind of drop that, if you were watching a chart, would make your stomach move in an unpleasant direction.
Here is where the first big structural problem showed itself. TBot's market regime classifier uses a 7-day lookback window to decide whether the market is bullish, bearish, or ranging. Seven days is a long time in crypto. When Bitcoin fell sharply on April 8, the classifier kept calling BULL for three more days because the previous week still looked positive in the data. The bot kept buying into a falling market because it was looking at last week's weather to predict today's flood.
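To make that lag concrete, here is a toy lookback-based regime classifier. Only the 7-day window comes from the post; the 3% threshold and the function's shape are my assumptions:

```python
def classify_regime(daily_closes: list[float], lookback: int = 7) -> str:
    """Toy regime classifier: label the market by the net change across the
    last `lookback` daily closes. The 3% threshold is an assumption."""
    window = daily_closes[-lookback:]
    change = (window[-1] - window[0]) / window[0]
    if change > 0.03:
        return "BULL"
    if change < -0.03:
        return "BEAR"
    return "RANGE"

# Six days of steady gains, then a roughly 7% one-day crash:
closes = [100, 103, 106, 109, 112, 115, 118, 110]
print(classify_regime(closes))  # prints BULL: last week's gains mask today's drop
```

Feed it a week of steady gains followed by a sharp one-day drop and it still answers BULL, because the start-to-end change across the window remains positive. That is the three-day blind spot in miniature.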
By April 11, cumulative profit had collapsed from $2,058 to $385. Three days. Almost entirely erased.
BCH, DOGE, LTC, and ETH all stopped out in quick succession. The stop losses fired correctly. That part of the system worked. The problem was the entries should never have happened.
Act Three: Recovery, Then Silence

The market settled back into a ranging pattern and the bot recovered. By April 15 cumulative profit was back up to around $2,000. Then a second bullish wave hit on April 17 and erased another $600.
On April 13, before that second wave, I had deployed three fixes: a 24-hour Bitcoin momentum guard to catch sharp reversals faster, raised confidence thresholds for bull regime entries, and restricted buy signals to ranging conditions only. Good fixes. Logical fixes. Fixes that never got a proper evaluation window because a week later the Anthropic API budget ran out completely.
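The first of those fixes, the 24-hour momentum guard, might look something like this in code. The 3% cutoff and the function name are guesses based on the description above, not TBot's actual diff:

```python
def momentum_guard_ok(btc_hourly_closes: list[float],
                      max_drop: float = 0.03) -> bool:
    """24-hour Bitcoin momentum guard, sketched from the post's description:
    refuse new buy entries when BTC has fallen more than max_drop over the
    last 24 hours. The 3% threshold is an assumption."""
    if len(btc_hourly_closes) < 25:
        return True  # not enough history to judge; failing open is a design choice
    day_ago, now = btc_hourly_closes[-25], btc_hourly_closes[-1]
    return (now - day_ago) / day_ago > -max_drop
```

The point of a guard like this is to give the slow 7-day classifier a fast veto: the regime label can stay BULL, but a sharp 24-hour drop in Bitcoin blocks new entries anyway.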
After April 20, the signal runs returned zero results every cycle. The bot kept running its scheduled jobs. It kept saving portfolio snapshots. It kept logging that everything was fine. The AI brain was offline and the system looked, from the outside, entirely operational.
That is the worst possible failure mode. Not a loud crash. A quiet, invisible one.
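A cheap defense against this failure mode is a dead man's switch on the signal output itself: treat "zero signals, several cycles in a row" as an alert condition rather than a normal result. TBot did not have this check, which is exactly the point. A hypothetical sketch, with names and threshold invented for illustration:

```python
def check_signal_health(signal_counts_per_cycle, max_empty_cycles=3):
    """Dead man's switch for the quiet failure mode described above: if the
    AI brain returns zero results for several consecutive cycles, that is
    itself an alert, even though every scheduled job 'succeeded'.
    Illustrative sketch; all names and thresholds are assumptions."""
    recent = signal_counts_per_cycle[-max_empty_cycles:]
    if len(recent) == max_empty_cycles and all(n == 0 for n in recent):
        return f"ALERT: {max_empty_cycles} consecutive empty signal cycles"
    return None  # healthy, or not enough history yet
```

Wired into the existing Telegram alerts, a check like this would have turned days of silent drift into a message within a few hours.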
What the Numbers Actually Say
The final account sat at $96,171 against a $100,000 starting point, a loss of 3.8% on paper. But the realized profit from closed trades was actually positive at $1,025. The gap comes from unrealized positions and the account’s exposure during the crash period.

The more revealing numbers are inside the regime breakdown. In ranging markets, TBot hit a 78% win rate across 32 trades and generated $3,657 in profit. In bull markets, it hit a 20% win rate across 35 trades and lost $2,341. The regime classifier was essentially a binary switch between a system that works and one that does not.
The API cost across roughly 25 days came to approximately $150 total, around $6 a day. On a $100,000 paper account that generated $41 a day in realized profit at its best, that is a 15% overhead. On a real $10,000 account with real position sizes, the economics get significantly worse. The cost structure was designed for serious capital, not an experiment.
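The overhead math is worth making explicit, since it drives the whole conclusion. All inputs are the post's own numbers:

```python
# Back-of-envelope version of the cost math above, using the post's numbers.
total_api_cost = 150    # dollars of Claude API spend over the experiment
total_realized = 1025   # dollars of realized profit from closed trades
days = 25

cost_per_day = total_api_cost / days    # ~$6/day
profit_per_day = total_realized / days  # ~$41/day at its best
overhead = cost_per_day / profit_per_day
print(f"API overhead: {overhead:.0%} of realized profit")  # ~15%
```

The profit side scales with capital while the API side is fixed, so on a real $10,000 account the same $6 a day eats roughly ten times the share of profit.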
What I Actually Learned
Three things are worth keeping from this experiment.
The AI signal works in the right conditions. A 78% win rate in ranging markets from an AI making real-time trading decisions is not noise. That edge is real and worth building on.
The architecture was sound. Modular design, full logging, Telegram alerts, a documented strategy log with reasoning behind every change. Twenty meaningful strategy decisions in three weeks. The system was observable and fast to iterate on. That matters more than most people realize when you are debugging something live.

And the biggest lesson: cost structure is a product decision, not an afterthought. At $6 a day in API calls, the bot needs meaningful capital to justify itself. The fix exists: batch multiple assets into a single prompt rather than making one API call per asset. But it needed investment to get there before the budget ran out.
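The batching fix is straightforward in shape: pack every asset's market snapshot into one prompt and parse one response, instead of issuing one Claude call per asset. A sketch under assumptions, since the prompt wording and data shape here are mine, not TBot's:

```python
def build_batched_prompt(assets: dict) -> str:
    """Sketch of the batching fix: one prompt covering all assets instead of
    one API call per asset. Prompt wording and the snapshot fields are
    illustrative assumptions, not TBot's actual prompt."""
    lines = [
        "For each asset below, reply with BUY, SELL, or HOLD "
        "and a confidence between 0 and 1.",
        "",
    ]
    for symbol, snap in assets.items():
        lines.append(
            f"{symbol}: price={snap['price']}, 24h_change={snap['change_24h']:+.1%}"
        )
    return "\n".join(lines)
```

At 30 assets and 12 to 19 signal cycles a day, that turns hundreds of daily API calls into a dozen or two. Total token spend drops by less than the call count, since the market data still has to be sent, but the shared instructions and per-call overhead are paid once per cycle instead of thirty times.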
Frequently Asked Questions
Can an AI trading bot actually make money? Based on this experiment, yes, in specific conditions. TBot proved a genuine edge in ranging markets with a 78% win rate. The challenge is that markets do not stay ranging forever, and the bot's ability to detect regime changes fast enough is still an unsolved problem.
How much does it cost to run an AI trading bot? In this experiment, Claude API calls cost approximately $6 a day running 12 to 19 signal cycles across 30 assets. Over 25 days that came to roughly $150. The cost scales with how many assets you evaluate and how frequently you run signals.
Is paper trading a reliable way to test a trading bot? It tests the logic and the architecture well. What it does not test is slippage, liquidity, and the psychological experience of real money moving. The results here are a useful directional signal, not a guarantee of real-market performance.
Where TBot Goes From Here
The experiment is paused, not abandoned. The three fixes deployed on April 13 never got a fair evaluation. The scanner concept, which hit a 57% win rate on the small sample it got, needs a proper asset universe to operate in. Alpaca's pool of 36 crypto pairs is too small. The roadmap points toward Kraken with 200+ pairs, batched API calls to cut costs, and faster regime detection.
The slogan still stands. Making money fun is the goal. Getting there is apparently going to take a few more iterations. Subscribe and follow as the story unfolds.