I built a Bitcoin trading bot. A program that watches the market and places trades automatically based on rules I wrote. It's been working. Sort of.

Since the latest overhaul: 61 trades. 68.9% win rate. Total profit: $7.59.

Last week I shut it down.

The honest backstory.

The first version of this bot was bad. Like, take-your-money-and-run bad. The win rate was mediocre, but worse, it had a tendency to go on rapid losing streaks. A bug or an unconsidered scenario would compound into a $100+ loss in a single afternoon before I caught it. Then I'd patch the hole, deploy a fix, and find a different issue the next day.

I was careful from the start. The bot ran in paper-trading mode for weeks. No real dollars, just simulated trades against live market data, so I could see how it would have behaved without any real money on the line. I logged every trade. I did postmortems on the bad ones. I asked Claude to stress-test the logic. I read forums. I rewrote the position sizing. I made it faster and decreased API costs. I did all the responsible things you're supposed to do.

And it still had real issues once I made it live. Theory is one thing. Real markets are 10x harder. In paper-trading mode you're not really competing with anyone. The moment dollars are on the line, you're up against millions of traders, hedge funds, and other bots, all trying to win the same trade you are. Some are smarter. Some are faster. All of them showed up before you did.

The Wall Street Journal just published an analysis of the biggest prediction markets. On Polymarket, more than 70% of users lose money and just 0.1% of accounts captured 67% of all profits. For comparison, only around 13% of casino patrons walk out with any profit at all. So a prediction market is bad and a casino is worse. People may treat casinos like gambling and trading apps like investing, but the math has them in the same neighborhood.

Honestly, that part was fun. I wanted to do something tangible with AI and automation outside of work, and this was a contained way to do it. I learned a lot. I'd recommend it to anyone curious about this stuff.

Just go in with realistic expectations. I didn't expect it to make me millions. I figured there was a 99% chance I'd lose money and learn a lot. I was right on both counts, to the tune of about $400. I consider that the cost of this education. It made me better at building with AI. It taught me to stress-test before launching, to monitor closely, to soft-launch with no real money, to confirm instead of trust, and to question every single thing AI tells me.

Successful but not worth it.

A 68.9% win rate sounds incredible. In most trading contexts, that's the headline number. What you don't hear: a win rate doesn't tell you if you're profitable. My bot was making lots of small wins and giving most of them back on a few losses. The math worked out to $7.59.
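The gap between a high win rate and a near-zero profit comes down to expectancy, the average dollars per trade. A minimal sketch, using the bot's published totals (61 trades, ~68.9% wins, $7.59 net) with per-trade averages that are illustrative guesses, not figures from the real trade log:

```python
# Hypothetical per-trade averages, chosen only to reproduce the
# published totals -- the real trade log may look quite different.
wins, losses = 42, 19            # 42 / 61 trades won ≈ 68.9% win rate
avg_win, avg_loss = 3.00, 6.232  # dollars per winning / losing trade

gross_wins = wins * avg_win          # many small wins
gross_losses = losses * avg_loss     # a few larger losses
net = gross_wins - gross_losses

# Expectancy: expected profit per trade. A strategy is only
# profitable if this number is positive after fees.
expectancy = net / (wins + losses)
print(f"net: ${net:.2f}, expectancy: ${expectancy:.2f}/trade")
```

With these assumed numbers, the losers are roughly twice the size of the winners, so a 68.9% win rate nets about twelve cents a trade. The win rate alone never reveals that ratio.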

If I told a friend "I have a side project that nets me $7.59 a month," they'd ask why I'm bothering. Fair question.

Most advice on side projects is to keep going, ship it, iterate, double down on what works. Almost no one suggests killing the thing even though it's "working," because "working" isn't enough. We just don't have enough time for mediocre.

This wasn't a hard call once I framed it that way. I'm not running a hedge fund. I'm running a company, raising two boys, finishing a house, and trying to find time to sleep a reasonable amount. Time and attention are the scarcest things we have. A few issues ago I quoted Arnold Bennett: "If you have time, you can obtain money. You cannot buy more time." The bot was charging me in the wrong currency.

This week's takeaway.

Find a project, habit, subscription, or recurring commitment that's technically "working." Make a brutally honest accounting of what it actually costs. Time, attention, mental space, money. Compare that to what it actually returns. If returns are less than cost, the answer isn't to optimize. The answer is to kill it.

ONE MORE THING

I turned off two other distractions this week: a weather-based trading experiment I'd lost interest in, and a Bitcoin email I'd been deleting unread for weeks. Nassim Taleb calls this via negativa in Antifragile. What you remove is more reliable than what you add, because removal has knowable effects and addition carries hidden ones.

— Matt
