Andrej Karpathy, the AI researcher and former OpenAI employee who coined the term “vibe coding,” says AI-generated code is still messy and requires human supervision. 

The idea of “vibe coding” has quickly gained traction among developers and non-developers alike. It describes a growing approach to building software with AI tools—where people generate code based on prompts or intent, often without fully understanding the underlying logic. As tools like GitHub Copilot and ChatGPT make it easier to “describe what you want” and get working code, more people are shipping products with little to no traditional programming experience. 

But Karpathy says the reality is still far from seamless. 

Speaking in a Sequoia Capital interview released Wednesday, he stressed that human input remains essential. When asked which skills will matter more as AI agents improve, he pointed to clarity as critical. “People still need to define the spec and the plan in detail,” he said, noting that without clear direction, even capable AI systems can go off track.


He described today’s AI-generated code as comparable to the work of inexperienced interns. “Right now, the agents are like these intern entities,” he said, adding that humans still need to step in on aesthetics, judgment, taste, and overall oversight. 

From Karpathy’s perspective, the uneven performance of large language models comes down to their training data. A founding member of OpenAI and now the founder of Eureka Labs, he described their intelligence as “jagged”—meaning their capabilities can spike in specific areas depending on what they’ve been trained on. 

He pointed to chess as an example, referencing the jump from GPT-3.5 to GPT-4. “A huge amount of chess data made it into the pre-training set. Because of that data distribution, the model improved much more in that area than it would have by default,” he explained. 

While he acknowledges that AI models have improved significantly at generating usable chunks of code, Karpathy said the pace of progress is such that, as a programmer, he has “never felt more behind.” 

As AI continues to evolve, he argues that large language models represent more than just better software—they signal a shift in how software itself is created. 

“Software 1.0 is writing code. Software 2.0 is programming by creating datasets and training neural networks,” he said. “And now, with GPT models or LLMs trained on a sufficiently large set of tasks, they become like a programmable computer in a certain sense.” 

He describes this shift as “Software 3.0,” where programming moves from writing code to shaping prompts and managing context. In this model, what sits in the context window becomes the developer’s primary lever over the system—effectively turning prompting into a new form of programming. 
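To make the idea concrete, here is a minimal sketch of what “programming by shaping context” might look like. Everything in it is illustrative: `call_llm` is a hypothetical stand-in for any model API (stubbed here so the example runs), and `build_context` shows the actual “program” in the Software 3.0 sense—the decisions about what spec, examples, and task text go into the context window.

```python
# Illustrative sketch of "Software 3.0": the prompt and the contents of the
# context window act as the program. `call_llm` is a HYPOTHETICAL stand-in
# for a real LLM API and is stubbed so this example is self-contained.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; a real version would send `prompt` to a model."""
    return f"[model response to {len(prompt)} chars of context]"

def build_context(spec: str, examples: list[tuple[str, str]], task: str) -> str:
    """Assemble the context window: spec, few-shot examples, then the task.

    Deciding what goes in here—and in what order—is the developer's
    primary lever over the system, i.e. the 'programming' step.
    """
    parts = [f"Specification:\n{spec}\n"]
    for inp, out in examples:
        parts.append(f"Input: {inp}\nOutput: {out}\n")
    parts.append(f"Input: {task}\nOutput:")
    return "\n".join(parts)

# The "program" is this assembled prompt, not handwritten logic.
prompt = build_context(
    spec="Convert a product name to a URL slug (lowercase, hyphen-separated).",
    examples=[("Vibe Coding 101", "vibe-coding-101")],
    task="Software 3.0",
)
print(call_llm(prompt))
```

The point of the sketch is that changing the spec or swapping the few-shot examples changes the system’s behavior without touching any traditional code—the context window is doing the work that source files did in Software 1.0.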
