GPT-3.5 scored in the bottom 10% of human test-takers on the Uniform Bar Exam in 2022. Months later, GPT-4 scored in the top 10%. That kind of leap, compressed into a matter of months, is why the question of whether AI will surpass human intelligence has shifted from a matter of speculation to one of scheduling.

But "smarter" is a slippery word. A calculator has been "smarter" than any human at arithmetic since the 1960s. A chess engine outmatched the world champion in 1997. What researchers, entrepreneurs, and policymakers are actually debating is something different: the arrival of artificial general intelligence. Hassan Taher, a Los Angeles-based AI expert and author of The Rise of Intelligent Machines, has spent years examining both the promise and the limits of that ambition.

Key Takeaways

  • AI already outperforms humans in narrow, well-defined domains — legal drafting, medical image detection, protein structure prediction, code generation — but remains brittle at spatial reasoning, intuitive physics, and tasks requiring genuine contextual understanding.
  • Progress is accelerating fast. GPT-3.5 scored in the bottom 10% on the Bar Exam in 2022; months later, GPT-4 scored in the top 10%.
  • AGI timelines are compressing. Industry leaders like Hassabis (5–10 years), Amodei (2–3 years), and Clark (end of 2026–2027) project near-term arrival, while broader expert surveys and forecasting platforms place 50% odds around 2033.
  • Key bottlenecks remain, especially continual long-term memory, which researchers call the most uncertain gap between current systems and AGI and may require a genuine breakthrough.
  • "Smarter" doesn't mean "wiser." A system that aces every test still won't understand what it means to be wrong, feel the weight of a decision, or care about outcomes — and no one has solved that problem yet.
  • Governance and deployment matter more than raw capability. Hassan Taher argues ethical frameworks must keep pace with AI improvements, and high-stakes applications always require human oversight.

Where AI Already Outperforms Humans

Current AI systems dominate humans in specific, well-defined domains. Large language models can draft legal briefs, summarize medical literature, and generate functional code at speeds no human can match. AI image recognition systems outperform radiologists at detecting certain cancers. AlphaFold, developed by Google DeepMind, solved protein structure prediction — a problem that had stumped biologists for decades — in a fraction of the time traditional methods required.

Hassan Taher has acknowledged these achievements while cautioning against conflating narrow excellence with broad intelligence. "As AI becomes more prevalent, understanding when to trust AI models is crucial," Taher has written. 

That caution is warranted. AI models still struggle with spatial reasoning, intuitive physics, and tasks that require genuine contextual understanding rather than pattern-matching. Apple researchers recently tested GPT-4o on a spatial reasoning benchmark called SPACE and found it scored just 43.8%. GPT-5, released in August 2025, improved to 70.8%, but humans averaged 88.9%.

What the Experts Predict

Forecasts for AGI have shortened dramatically. Google DeepMind CEO Demis Hassabis said in March 2025 that he expected AGI to start emerging within five to ten years: "I think you will see meaningful evidence of AGI being in play in 2025." Anthropic CEO Dario Amodei told CNBC at Davos in January 2025 that he expected a form of AI "better than almost all humans at almost all tasks" to emerge within two to three years.

Anthropic co-founder Jack Clark went further in September 2025, stating that AI would be "smarter than a Nobel Prize winner across many disciplines by the end of 2026 or 2027."

Not everyone agrees with this compressed timeline. A September 2025 review of surveys spanning 15 years found that most experts agreed AGI would arrive by 2100, but many placed it decades away. Metaculus gives a 25% chance of AGI by 2029 and 50% by 2033. 

The Scaling Hypothesis and Its Limits

Much of the optimism around near-term AGI rests on what's known as the scaling hypothesis — the observation that AI models get predictably better as you train them with more data and computing resources. OpenAI CEO Sam Altman has said he realized in 2019 that AGI might arrive sooner than expected after researchers discovered these scaling laws.

80,000 Hours, a research organization focused on high-impact career decisions, estimated in a March 2025 analysis that by 2028, someone will have trained a model with 300,000 times more effective compute than GPT-4 — the same increase that separated GPT-2 from GPT-4.
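The arithmetic behind that projection is worth making concrete. A back-of-the-envelope sketch shows what a 300,000-fold increase implies as a compounding annual growth rate (the four-year window and the assumption of constant year-over-year compounding are illustrative, not from the 80,000 Hours analysis):

```python
# Back-of-the-envelope: what constant annual multiplier compounds to
# a 300,000x increase in effective training compute over ~4 years?
# (The window length and constant compounding are assumptions for
# illustration only.)

total_multiplier = 300_000
years = 4  # assumed window between GPT-4-scale training and 2028

annual_multiplier = total_multiplier ** (1 / years)
print(f"Implied growth: ~{annual_multiplier:.1f}x per year")
```

An annual factor of roughly 23x, sustained for four years, is what it takes to reach 300,000x, which gives a sense of how aggressive the projection is.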

Hassan Taher's Perspective on What Comes Next

Hassan Taher has argued that whether AI is "smarter" than humans matters less than how it is governed and deployed. Through his firm, Taher AI Solutions, and his writing, including AI and Ethics: Navigating the Moral Maze and The Future of Work in an AI-Powered World, he has maintained that ethical frameworks must keep pace with capability improvements.

Taher has noted that even highly capable AI systems require human oversight in high-stakes contexts. "An AI system that predicts a high probability of cancer should also convey the certainty of that prediction," he has written. "If the model's confidence is low, it signals the need for further human review and additional tests. This approach ensures that AI is used as a tool to augment human decision-making rather than replace it entirely."
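The pattern Taher describes, surfacing a prediction's confidence and escalating uncertain cases to a human, can be sketched in a few lines. This is a minimal illustration of the idea, not any real system: the function name, labels, and the 0.85 cutoff are all assumptions, and a production system would calibrate such a threshold empirically.

```python
# Minimal sketch of confidence-aware triage: act on a model's
# prediction only when its reported confidence clears a threshold;
# otherwise flag it for human review. All names and the 0.85
# threshold are illustrative assumptions.

REVIEW_THRESHOLD = 0.85  # assumed cutoff; real systems calibrate this

def triage(label: str, confidence: float) -> str:
    """Route a model prediction either to automatic use or to a human."""
    if confidence >= REVIEW_THRESHOLD:
        return f"auto: {label} (confidence {confidence:.2f})"
    return f"human review: {label} (confidence {confidence:.2f})"

print(triage("high cancer risk", 0.97))  # confident -> surfaced directly
print(triage("high cancer risk", 0.62))  # uncertain -> escalated
```

The point of the sketch is the routing decision itself: the model's output is never the end of the pipeline, only an input to one that keeps a human in the loop for low-confidence calls.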

Smarter Doesn't Mean Wiser

A machine that outscores every human on every standardized test will still lack something fundamental. It will not understand what it means to be wrong. It will not feel the weight of a consequential decision. It will not care about the outcome, unless we figure out how to make it care, a problem that no one has solved and that current AI regulations do not address.

MIT Sloan professor Danielle Li, speaking at the World Economic Forum in January 2025, framed the central challenge this way: "How do we move from an AI that is a genius to one that's capable within an organization?"

Whether AGI arrives in 2027 or 2047, the harder question isn't about timeline. It's about readiness.