There is a post making the rounds on X right now. A developer named Nattu typed one prompt into the Lottie Creator MCP and watched Claude Opus 4.7 generate a 500-particle animation. Each particle had its own path, easing curve, and arrival frame. He did not touch a single keyframe.


That is the kind of moment that makes you put your phone down for a second.

Anthropic released Claude Opus 4.7 on April 16, 2026, and within hours people were sharing what it could actually do. Not benchmark scores. Actual things they built, often with results that surprised them. I was one of those people, and I tested it the same day it dropped. For context, OpenAI’s GPT-5.4 remains the benchmark to beat for breadth across coding, computer use, and knowledge work in one model. But Opus 4.7 is where Claude pulls ahead on depth and sustained autonomous work, which is a different kind of useful.

What Is Claude Opus 4.7?

Image courtesy: Anthropic

Claude Opus 4.7 is Anthropic’s latest flagship and their most capable publicly available model right now. It is available across all Claude products, the API, Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft Foundry. If you are on Claude.ai, you can select it directly from the model picker today.

It costs the same as Opus 4.6. Five dollars per million input tokens, twenty-five dollars per million output tokens. Anthropic could have raised the price with this release. They did not. The cost-per-capability ratio just quietly improved without anyone paying more.
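At those list prices, per-request cost is easy to estimate. A minimal sketch (the token counts in the example are hypothetical, not measured):

```python
def opus_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost at Opus 4.7 list prices:
    $5 per million input tokens, $25 per million output tokens."""
    INPUT_PER_M = 5.00
    OUTPUT_PER_M = 25.00
    return (input_tokens / 1_000_000) * INPUT_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PER_M

# A hypothetical agentic session: 200k tokens in, 40k out.
print(round(opus_cost_usd(200_000, 40_000), 2))  # → 2.0
```

Output tokens dominate the bill at a 5:1 price ratio, which is worth keeping in mind for a model that thinks more by default.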

What People Are Already Doing With It

Lottie is a format for lightweight animations on websites and apps. Think smooth loading icons or animated illustrations in modern apps. Designers typically spend hours setting keyframes, timing curves, and motion paths to produce them. Nattu used one prompt through the Lottie Creator MCP and got five hundred individually animated particles in one shot. No keyframe editing. No manual adjustments. Motion design work happening in seconds through natural language.
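Lottie animations are JSON under the hood, and the real schema is involved, but the gist of "each particle gets its own path, easing, and arrival frame" can be sketched with simplified data. This is illustrative only; the field names below do not follow the actual Lottie schema:

```python
import math
import random

def make_particles(n: int, arrival_window=(30, 90)):
    """Generate simplified per-particle animation specs. Each particle
    gets its own scattered start point, easing curve, and arrival frame.
    (Field names are illustrative, not the real Lottie JSON schema.)"""
    rng = random.Random(42)  # seeded so the example is reproducible
    easings = ["linear", "ease-in", "ease-out", "ease-in-out"]
    particles = []
    for i in range(n):
        # Scatter starting positions on a ring around the target shape.
        angle = rng.uniform(0, 2 * math.pi)
        radius = rng.uniform(50, 300)
        particles.append({
            "id": i,
            "start": (radius * math.cos(angle), radius * math.sin(angle)),
            "easing": rng.choice(easings),
            "arrival_frame": rng.randint(*arrival_window),
        })
    return particles

particles = make_particles(500)
print(len(particles))  # → 500
```

Generating this data is the easy part; the manual labor Lottie designers normally do is choosing those five hundred values so the motion looks intentional, which is exactly the step the one-prompt workflow collapses.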

On the engineering side, Anthropic’s own testing showed Opus 4.7 autonomously building a complete Rust text-to-speech engine from scratch, then verifying its own output against a Python reference implementation. Months of senior engineering work, delivered without hand-holding. The codebase is public if you want to look.

Image courtesy: CodeRabbit

CodeRabbit ran Opus 4.7 against a hundred real pull requests from open-source projects and measured it against known issues in each. It caught 68 out of 100, up from 55 with the previous baseline. A 24% improvement in finding the specific bug that matters. To put that practically: if your team merges twenty pull requests a week, that is roughly three more real bugs caught before they reach production.
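The back-of-envelope math behind those two figures, with the twenty-PRs-a-week team as the hypothetical:

```python
baseline, improved = 55, 68        # known bugs caught per 100 PRs
gain = (improved - baseline) / baseline
print(f"{gain:.0%}")               # → 24%

# A team merging 20 PRs a week: extra real bugs caught pre-production.
weekly_prs = 20
extra_per_week = weekly_prs * (improved - baseline) / 100
print(extra_per_week)              # → 2.6, i.e. roughly three
```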

I Tested It Myself

I run a personal productivity dashboard called Mission Control at mc.taskcocoon.com. It is a custom Next.js app that aggregates my tasks, ops events, cron jobs, news, and agent status into one place. It sits behind a Cloudflare access gate so it is not publicly browsable, but it is a real working tool I use every single day.

The hero card had a particle sphere animation. It looked fine but felt generic. I wanted to replace it with something that carried meaning: a particle animation spelling AUGUST, scattering, then reforming into WHEEL with the two E's rendered as spinning mechanical gears. The August Wheel brand living on my personal command center.

I built it using Claude Opus 4.7 through the Claude Desktop app’s Code section on my Mac, connected directly to the Mission Control GitHub repository. One starting instruction: read the existing component, confirm the setup, report back before touching anything. It returned four precise questions covering library preference, file location, container styling, and whether the animation needed to respond to a mood prop. No fluff. I answered them and it got to work.

Three iterations to get from nothing to a working gold particle animation matching the dashboard’s dark background, border, and scanline styling. It was not guessing randomly across iterations. It was diagnosing what was not working and fixing it with intent. It also created a GitHub branch before making any changes and waited for my approval before pushing to main. I did not ask it to do that. It just did it because that is the correct way to work on a live codebase.

The honest caveat: it consumed tokens fast. I am on the $20 Pro plan. Session usage went from roughly 5% to 70% across three iterations on a real codebase. If you are on Pro and planning to use Opus 4.7 for anything substantial, budget your sessions before you start.

What stood out most compared to my daily Sonnet 4.6 sessions: far fewer stopping points. Sonnet pauses and hands things back to you more often. Opus 4.7 just kept going. Faster, fewer errors, less input needed from me. For a task with multiple moving parts and a live GitHub workflow, that autonomy made a real difference.

What Actually Changed Under the Hood

Image courtesy: Anthropic

Vision got a serious upgrade. Visual acuity jumped from 54.5% to 98.5%, and the maximum supported image resolution rose from 1568px to 2576px, nearly tripling the total pixel count. It can now read small text in charts, scanned documents, and detailed screenshots without you needing to crop or zoom anything first.

It follows instructions more literally. Where previous Claude models would sometimes interpret instructions charitably, Opus 4.7 takes what you write at face value. Mostly a good thing, but if your prompts were loose before, tighten them up before switching over.

Thinking became adaptive. With Opus 4.6 you managed the extended thinking toggle yourself. Opus 4.7 decides how much reasoning each question needs. Simple things get answered quickly. Hard ones get the full treatment. You can leave it on and stop managing it manually.

Memory improved for long-horizon work. Opus 4.7 is better at referencing notes across tasks without you repeating context every session. For multi-step agentic workflows, this matters more than it sounds.

Frequently Asked Questions

How much does Claude Opus 4.7 cost?

Five dollars per million input tokens and twenty-five dollars per million output tokens, unchanged from Opus 4.6. No price increase with this release. That said, the model thinks more deeply by default which means more tokens per task. Budget accordingly, especially if you are on the Pro plan.

Is Opus 4.7 worth using over Sonnet 4.6 for everyday tasks?

Not for everything. Sonnet 4.6 is fast, capable, and significantly cheaper to run at scale. Opus 4.7 is the model to reach for when the task is genuinely complex, multi-step, or requires sustained autonomous reasoning. Think of it like calling in a specialist versus your regular go-to. Use Sonnet for daily volume. Use Opus when the work is substantial enough to justify it.

The Takeaway

Claude Opus 4.7 is not an incremental update. The vision gains are significant. The coding improvements are real, tested against actual production codebases by teams running it in the wild. And from my own session: the autonomy is genuinely different. It keeps going where other models stop, manages its own workflow, and checks its own work before reporting back.

The price stayed the same. The capability went up considerably.

The animation is live on my dashboard now. One session. Three iterations. Opus 4.7 built it, branched it, and waited for me to say go before touching main.

