Sometimes the most valuable lessons come from the moments you almost rage quit.
This weekend, three of my apps (a trading bot, my self-hosted n8n server, and ReadRecall) decided to remind me of something important: software isn’t a fire-and-forget thing you launch once and then ignore. It’s more like keeping a plant alive; you water it, trim it, re-pot it, and every so often it wilts and forces you to figure out what went wrong.
This is the story of how my Docker apps broke, how fixing one kept mysteriously breaking the others, and what it taught me about patience, architecture, and becoming the kind of person who can sit with a broken system instead of walking away frustrated.

When “Set It and Forget It” Stops Working
I’ve been running a few web apps in containers on my Hetzner VM: a trading bot dashboard, an automation tool, and a web app for one of my domains. They were all living on the same server, behind the same nginx proxy, sharing similar infrastructure. For a while, everything just worked.
Then the quiet, invisible stuff started to fail.
One domain’s HTTPS certificate expired while the others were still fine. Restarting or recreating one container sometimes broke access to a different app. Under the hood, nginx and certbot had grown more tangled than I realised. From the outside, it just looked like “Why is my site down again?” But underneath? There was a deeper architectural issue: the way I’d wired everything together meant touching one part could accidentally knock something else over.
It wasn’t one dramatic failure. It was death by a thousand tiny misconfigurations.
The Hidden Problem: Three Apps, One Knot
At a high level, the problem came down to how I’d designed the system. I had three separate apps running in containers, a shared reverse proxy (nginx) handling all incoming traffic, and Let’s Encrypt certificates managed by certbot with renewal jobs scheduled in the background. On paper, that’s solid. In practice, I’d evolved the setup over time without a proper blueprint, so old and new configurations were mixed together. Some certificates lived in one directory, others in a custom one. Renewal settings for each domain weren’t unified—some config files were healthy, others were broken.
The result? Recreating or updating one app or certificate could confuse the rest. The system was “working” but fragile. Think of it like building a house room by room without a blueprint. It stands for a while, until you try to move a wall and realise three other rooms are leaning on it.
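That kind of drift is easy to spot once you go looking for it. Here's a minimal sketch of the sort of sanity check that would have caught my out-of-sync renewal configs (the /tmp/le paths and domain names are made up for the demo; on a real box, certbot keeps its renewal configs in /etc/letsencrypt/renewal, each with a `cert = ...` line pointing at the live certificate):

```shell
# Build a fake letsencrypt-style layout: one healthy domain, one whose
# renewal config points at a certificate that no longer exists.
mkdir -p /tmp/le/renewal /tmp/le/live/good.example
touch /tmp/le/live/good.example/fullchain.pem
printf 'cert = /tmp/le/live/good.example/fullchain.pem\n' > /tmp/le/renewal/good.example.conf
printf 'cert = /tmp/le/live/gone.example/fullchain.pem\n' > /tmp/le/renewal/gone.example.conf

# Does each renewal config point at a cert that actually exists?
for conf in /tmp/le/renewal/*.conf; do
  cert=$(sed -n 's/^cert *= *//p' "$conf")
  if [ -f "$cert" ]; then
    echo "OK      $conf"
  else
    echo "BROKEN  $conf"
  fi
done
```

A loop like this makes the "some config files were healthy, others were broken" situation visible in seconds instead of leaving it to surface as a mystery outage.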
Debugging: From Frustration to Clarity
This is the part no one glamorises when they talk about “shipping products”—the grind of debugging.
I started by restarting the proxy and checking whether nginx was even happy with its configuration. Then I inspected which certificate each domain was actually serving and when it expired. I looked inside the certificate folders to see how many different “lineages” and copies existed. I ran the certificate renewal commands manually to see which ones succeeded, which ones errored, and why. Some renewal configs turned out to be broken or out of sync with where the real certificates now lived.
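The certificate-inspection step looks roughly like this with openssl (a sketch, not my exact commands; on the live server you'd point openssl at the domain itself, while this demo generates a throwaway self-signed certificate so it's runnable anywhere):

```shell
# On a real server, check what a domain is actually serving, e.g.:
#   echo | openssl s_client -servername example.com -connect example.com:443 2>/dev/null \
#     | openssl x509 -noout -subject -enddate
# Self-contained demo: make a throwaway cert so there's something to inspect.
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -subj "/CN=demo.local" 2>/dev/null

# Who is this certificate for, and when does it expire?
openssl x509 -in /tmp/demo.crt -noout -subject -enddate
```

The `notAfter=` line is the expiry date; comparing that per domain is what revealed that only one of my certificates had actually lapsed.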
It was frustrating. At one point, I genuinely thought: “Why is this so brittle? I just want my sites to stay up.” But here’s the thing—each little command taught me something. That one domain’s certificate was fine and not due for renewal. That another had a broken renewal configuration that certbot was skipping. That the proxy container was reading from a specific directory, and not all certificates were being treated equally.
Slowly, the mess started to make sense.
The Turning Point: Simplify and Separate
The big breakthrough wasn’t a clever one-liner. It was a mindset shift: stop patching symptoms, and clean up the structure instead.
First, I separated the proxy from the application containers more cleanly, so restarting an app wouldn’t confuse the networking or certificates. Then I standardised where certificates lived instead of scattering them across multiple places. I made sure all domains shared a sane, unified certificate setup with renewal targeting the right config. I kept cron jobs for renewal, but became precise about what they do instead of stacking new jobs on top of half-working ones.
The final state was simple and robust: one proxy stack handling traffic for all domains, certificates living in predictable locations, renewal jobs that know exactly which configuration to use, and apps that can be redeployed or updated without randomly breaking SSL for something else.
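That end state can be sketched as a hypothetical Compose layout (the service names, network name, and paths are illustrative, not my actual files): the proxy is its own stack, certificates mount read-only from one canonical Let's Encrypt directory, and each app simply joins a shared external network.

```yaml
# proxy stack (its own compose file) -- restarting an app never touches this
services:
  nginx:
    image: nginx:stable
    ports: ["80:80", "443:443"]
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - /etc/letsencrypt:/etc/letsencrypt:ro   # one canonical cert location
    networks: [web]

networks:
  web:
    external: true   # created once, shared by every stack

# Each app's compose file then only declares the same external `web`
# network, so the app can be recreated without touching the proxy or certs.
```

The design choice that matters here is the external network: because neither stack owns the other's resources, `docker compose up` on one can no longer knock the other over.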
Here’s the part I almost skipped—I explicitly archived the old Docker Compose files into an archive_compose folder. That’s a key part of the story. Not just fixing, but curating history and pruning old config rather than letting it rot in place. Future me would have no temptation to accidentally roll back into the messy, all-in-one setup.
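The archiving itself is nothing fancy; a minimal sketch (the /tmp/stack directory and the legacy filename are made up for the demo):

```shell
# Demo layout: a project dir with an active compose file and a retired one
mkdir -p /tmp/stack
cd /tmp/stack
touch docker-compose.yml docker-compose.monolith.yml   # names are illustrative

# Move retired compose files out of the active path instead of deleting them
mkdir -p archive_compose
mv docker-compose.monolith.yml archive_compose/

ls archive_compose/   # old config preserved, but out of the way
```

Deleting would also work, but moving keeps the history browsable while guaranteeing that `docker compose up` can never pick the old file up by accident.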
Nothing magical. Just intentional design.

What This Really Taught Me
The most important lesson wasn’t “how to fix nginx” or “how to debug certbot.” It was about how to approach systems in general.
Apps aren’t appliances. They’re more like plants or pets. They need maintenance, observation, and occasional surgery. You can’t just build something, deploy it, and pretend it doesn’t exist.
Frustration is normal. But it’s also a signal that you’re at the edge of your current understanding, and that’s exactly where growth happens. I could have rage quit and called someone else. Instead, I stayed curious.
Research is a skill. Reading error messages, looking up concepts, understanding the difference between a quick fix and a structural fix: that’s all learnable. I used Perplexity to help me think through the debugging process, but understanding what the actual problem was? That was the real work.
Architecture matters, even for small projects. The way you wire things together determines how easily they break and how painful they are to repair.
You don’t have to be “super technical” to start building this muscle. You just need to be curious instead of purely annoyed. Ask “Why does this work that way?” one more time than you feel like. Keep notes on what you changed, so today’s fix doesn’t become tomorrow’s mystery.
Over time, you stop seeing a broken app as a personal failure and start seeing it as an invitation to understand your own systems better.
The Habit Worth Building
If there’s one habit this whole saga reinforced, it’s this: don’t just fix the problem; understand the system.
That doesn’t mean becoming a full-time DevOps engineer. It means when something breaks, you slow down, take a breath, and look at the architecture. How do the pieces connect? Who depends on what? Are you willing to refactor, not just patch?
The more you do that, the more confidence you build. Suddenly, “everything is broken” turns into “okay, the proxy is fine, the certificates are here, the renewal config is wrong for this domain, and here’s how to fix it.”
A Closing Thought
The apps you build with AI tools are still your responsibility. They’ll get messy, out of date, and occasionally fall over. But that’s where the real learning lives, right in the middle of the frustration, when you decide to keep going, keep digging, and come out the other side with both your app and your skill set upgraded.
If you’re building apps, maintaining systems, or learning to troubleshoot as you go, you’re in good company. We’re learning in public here. Share your debugging stories, your wins, your “almost rage quit” moments in our community. That’s how we all get better.




