Last year, Andrej Karpathy tweeted a throwaway thought about how he was building things with AI and didn’t really look at the code anymore. He called it “vibe coding.” That one tweet turned into conference keynotes, VC pitch decks, and a thousand LinkedIn posts about building without knowing how to code. Now, a year later, Karpathy sat down with Sequoia at their AI Ascent 2026 event and essentially said: the vibes were just the beginning. Something more serious is taking shape on top of it. He called it agentic engineering. And if you’ve been paying attention to how AI tools have actually changed over the past six months, it clicks immediately.

What Karpathy Actually Said (And Why It Matters)

The interview, hosted by Sequoia partner Stephanie Zhan, was framed around what has changed since Karpathy first coined “vibe coding” in early 2025. His answer, in short: everything.

He described a moment in December 2025 when something flipped. Before it, AI coding tools were helpful but inconsistent. After it, they were producing correct, usable chunks of code with a reliability that changed how he worked entirely. He went from writing 80% of his code manually to writing maybe 20%. The rest was delegated to AI agents. He said he told his parents. They didn’t fully grasp it. That gap, between what’s actually happening and what most people understand is happening, is a theme he kept returning to.

The term he landed on to describe where things have moved is agentic engineering. He defined it carefully. “Agentic” because the new default is that you are not writing the code directly most of the time; you are orchestrating agents that do the work and acting as oversight. “Engineering” to make clear that this is a real discipline with standards to uphold. Karpathy was specific about what those standards include: development velocity, yes, but also the quality bar and security standards expected in professional software.

Vibe coding raised the floor: anyone can now build a working prototype from a prompt. Agentic engineering is about maintaining the ceiling, doing professional-quality work at a dramatically higher speed, by directing AI agents with judgment rather than hope.


The Ghost Metaphor That Actually Helps

One of the most useful things Karpathy said in the interview was not about coding at all. It was about how to think about what these AI systems actually are.

He said most people make the mistake of treating LLMs like animals. Predictable. Instinctive. Trainable in a consistent direction. His preferred framing is that they are ghosts. Jagged, statistical, summoned entities that require a new kind of taste and judgment to direct.

What does jagged mean in practice? It means an AI model can refactor a 100,000-line codebase and simultaneously give you directions to a car wash with your car still in it. The capability is uneven in ways that human expertise is not. A human expert in one domain tends to be reasonably competent in adjacent ones. AI models do not work that way. They perform brilliantly inside the reward distribution they were trained on and struggle outside it in ways that can be hard to predict.

Karpathy described them as “spiky, stochastic, and fallible entities.” That framing is more useful than thinking of them as smart assistants. Smart assistants you trust. Spiky, stochastic, fallible entities you verify.

I have run into this personally. I have used Claude to write automation logic that was genuinely impressive, and then watched the same model completely misread a straightforward task in a workflow I thought was simpler. The jaggedness is real. Once you accept that, you stop being surprised by it and start designing around it instead.

The ghost framing is useful because it shifts your posture. You are not managing a tool. You are directing something that requires your judgment, your context, and your oversight to produce reliable output.


Software 3.0 and What It Changes for Non-Technical People

Karpathy has been developing a framework he calls Software 3.0. It goes like this. Software 1.0 was explicit code, rules written by hand. Software 2.0 was trained neural networks, behaviour shaped by datasets. Software 3.0 is prompting an LLM. Programming is now, in large part, writing instructions in natural language directed at a model that interprets and executes them.
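One way to make the progression concrete is to sketch the same toy task in all three styles. Everything below is invented for illustration, not from the talk: the keyword rule, the weights (which stand in for what training on a dataset would actually produce), and the prompt are all hypothetical.

```python
# Sketch: one toy task (flagging an urgent email) under the three paradigms.

# Software 1.0: explicit code, rules written by hand.
def is_urgent_v1(subject: str) -> bool:
    keywords = {"urgent", "asap", "immediately"}
    return any(word in subject.lower() for word in keywords)

# Software 2.0: behaviour shaped by data. These hardcoded weights are a
# stand-in for parameters a real training run would learn.
LEARNED_WEIGHTS = {"urgent": 2.1, "meeting": 0.3, "asap": 1.8}

def is_urgent_v2(subject: str) -> bool:
    score = sum(LEARNED_WEIGHTS.get(w, 0.0) for w in subject.lower().split())
    return score > 1.0  # threshold that would, in principle, be tuned on data

# Software 3.0: the "program" is a natural-language instruction to an LLM.
# No API call here; the point is that the prompt itself is the artefact.
def urgency_prompt(subject: str) -> str:
    return (
        "Classify the following email subject as URGENT or NOT URGENT. "
        f"Answer with one word.\n\nSubject: {subject}"
    )

print(is_urgent_v1("URGENT: server down"))    # True
print(urgency_prompt("lunch tomorrow?"))      # the prompt is the program
```

The trade is the same one the article describes: 1.0 is transparent but brittle, 2.0 generalises but is opaque, and 3.0 moves the programming into language, which is exactly why directing it becomes the skill.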

This is not just interesting for developers. It is arguably more interesting for people who never considered themselves builders at all.

The barrier between “people who code” and “people who don’t” is becoming more permeable by the month. The Anthropic 2026 Agentic Coding Trends Report noted this explicitly, describing how sales teams, legal teams, operations, and marketing are building their own tools without engineering support. Not vibe coding for fun. Actually solving real workflow problems.

For someone running a small business, managing a team, or trying to automate something tedious in their day job, this is the part worth paying attention to. The tools have crossed a threshold. The question is no longer whether you can build something with AI. It is whether you understand enough about what you are trying to build to direct the process usefully.


The Skill That Actually Matters Now

Karpathy made a point in the interview that I think gets underplayed in most write-ups. He said that as AI takes on more of the execution, the human bottleneck shifts to understanding.

You can outsource the typing. You cannot outsource the understanding.

This sounds obvious until you think about what it means in practice. If you use an AI agent to build you a system and you do not understand what that system is doing, you cannot catch the errors, you cannot make meaningful improvements, and you cannot adapt it when something breaks. The agent writes the code. You still have to know enough to know whether it wrote the right thing.

He framed verifiability as the key variable here. AI automates fastest in domains where you can check whether the output is correct. Maths. Code. Tasks with clear right and wrong answers. The harder it is to verify something, the slower AI moves into it. Which means the people who can define what “correct” looks like in their domain, even informally, have a real advantage right now.

The practical implication for anyone using AI tools today, whether you are a developer or not, is to invest in the judgment layer. Learn enough about what you are building to review it meaningfully. Not to replace the AI. To direct it well.


What This Means If You Are Not a Developer

The most honest thing I can say is this: agentic engineering as a formal discipline is aimed at professional software teams. Karpathy was speaking partly to CTOs and engineering leads about how to scale quality in AI-assisted development.

But the principles apply at every level.

If you are vibe coding a prototype, you are already in this territory. You are making decisions about what to delegate to AI and what to keep for yourself. You are (or should be) reviewing what comes back rather than shipping it blindly. The only difference between vibe coding and agentic engineering is the level of structure and scrutiny you apply.

The direction Karpathy is pointing is worth internalising regardless of where you are on the technical spectrum. AI tools are not getting less capable. The floor for what anyone can build is rising. The ceiling belongs to people who bring genuine understanding to the process.

That is the real upshot of the AI Ascent talk. Not that vibe coding is dead. That it was always the entry point, not the destination.


FAQ

What is the difference between vibe coding and agentic engineering? Vibe coding means using AI to generate something from a prompt without much oversight, usually for quick prototypes or personal projects. Agentic engineering is a more structured practice where you orchestrate AI agents to do the work, but you maintain meaningful oversight, review outputs, and apply real judgment to what gets kept. The difference is not the tools. It is the level of deliberate human involvement in the process.

What did Andrej Karpathy say at AI Ascent 2026? Karpathy spoke with Sequoia’s Stephanie Zhan about how the year since he coined “vibe coding” has changed the landscape entirely. He introduced agentic engineering as the more serious discipline building on top of vibe coding, described LLMs as jagged ghosts rather than predictable tools, and argued that the irreducible human contribution in an AI-assisted workflow is understanding, not execution.

Do I need to know how to code to do agentic engineering? Not necessarily, but you need to know enough about what you are building to verify whether the AI’s output is correct. Karpathy’s point is that you can outsource the typing but not the understanding. The people who will get the most out of these tools are the ones who bring enough domain knowledge to catch errors and direct the process meaningfully.


If you want to keep up with how AI tools are actually changing the way people work, not just the headlines but the practical implications, subscribe to the August Wheel newsletter at newsletter.augustwheel.com. No fluff, no hype. Just what’s actually useful.

