Silicon Valley has a story it tells about itself: move fast, ship early, fail fast, learn faster. The narrative has produced extraordinary companies, and it’s also, increasingly, a blind spot the size of a research lab.
Kazu Gomi has spent more than thirty years inside one of the world’s largest technology organizations. As president and CEO of NTT Research – the Silicon Valley-based fundamental research arm of Japanese telecom giant NTT – he occupies a position that is almost structurally at odds with the Valley’s default operating mode.
His lab doesn’t ship products on six-month cycles; its researchers don’t have quarterly targets, and some of what they’re working on won’t be relevant for a decade.
That makes him a useful critic. And at the “Research to Reality” Upgrade 2026 conference in San Jose, California, he was willing to be one.
The speed problem nobody is talking about
The thing that Gomi keeps coming back to isn’t funding or talent or compute; it’s diligence – the kind that only time makes possible.
“Research to Reality used to take ten years, twenty years. That was the norm,” he told Techloy. “But now, when it comes to AI, that is shortened so much. Research to Reality is maybe six months, even.” He paused. “I don’t know if it’s a good thing or a bad thing. I probably would say it’s a good thing. But there’s a scary part.”
The scary part is what gets skipped. In traditional research timelines, the slow pace isn’t just a function of scientific difficulty – it’s where problems surface, side effects get caught, assumptions get stress-tested, and edge cases accumulate into corrections. Compress that process to six months, and you don’t have time to scan.
The evidence that this matters is accumulating. The International AI Safety Report found that existing testing methods no longer reliably predict how AI systems will behave after deployment: performance on pre-deployment tests is a poor guide to real-world utility or risk.
“AI investments face increasing pressure, so companies need to demonstrate tangible business impact early,” said Cesar D’Onofrio, CEO and co-founder of Silicon Valley–based software development firm Making Sense. “The focus is shifting toward targeted initiatives that can deliver ROI quickly in real operating environments.”
Meanwhile, enterprise data suggests the scanning isn’t happening at all. A March 2026 survey of 500 CIOs and CTOs at U.S. companies with at least $500 million in annual revenue, conducted by AI engineering firm Solvd, found that 80% reported at least one AI project failure due to lack of visibility and oversight; only 20% said they had found high-value use cases for AI at this stage.
“Most companies are still in active experimentation mode,” said Solvd CEO Mike Hulbert. Boards are noticing: 82% of respondents said their boards are increasingly questioning how much their company is spending on AI.
Gomi doesn’t frame this as a reason to slow down AI development. Rather, he suggests it as a reason to take seriously what speed costs:
“AI looks fabulous and very interesting, but has it been well-examined from a safety perspective? Has that rigorous check happened? Most likely not,” he stressed.
The labs that got cut
The second thing Gomi keeps coming back to is institutional memory – specifically, its disappearance.
The hollowing out of corporate research labs over the past three decades is a well-documented phenomenon: Bell Labs was separated from AT&T and placed under Lucent in 1996; Xerox PARC was spun off into a separate company in 2002; IBM, under Louis Gerstner, redirected research toward commercial applications in the mid-90s.
A 2019 paper documents the trend plainly: businesses still spend heavily on R&D, but a growing share of that spending goes to development – getting things ready for market – rather than to research itself. The result, researchers argue, is a growing division of labor in which universities handle science and corporations handle products.
It sounds efficient. Yet, Gomi thinks it’s a trap: “Somebody has to fund basic research. If Microsoft is retracting, somebody has to fill the gap.” He’s direct about where he thinks this leads. “Overall investment into fundamental research is kind of declining, which is perhaps a bit of a warning light. I think it’s a societal problem.”
The gap is becoming harder to ignore; the U.S. is now 13th in government R&D intensity and sixth in basic science intensity. And while the country remains the largest R&D spender overall, it holds that position by the smallest margin since the 1990s, while China’s numbers continue to grow year over year, according to the American Association for the Advancement of Science.
Gomi’s prescription is blunt: the government needs to step up. “Tax money should be spent for this type of thing… for the betterment of society, not only for the short term,” he said. That’s not a radical position in most of the world, but in the Valley, it reads as nearly contrarian.
The definition problem
There’s a subtler argument beneath both of the above, and it’s the one Gomi seemed most interested in making: the industry doesn’t actually know what fundamental research is anymore.
When asked who is doing the most interesting fundamental AI research today, his answer is complicated – in a revealing way. Of the big AI labs, he allowed: “A lot of good research is actually happening there – they take a ‘good for society’ approach, and there’s definitely a flavor of fundamental research.”
But Gomi is not fully comfortable with this categorization. “The definition of fundamental research is a little vague.” His own definition: no product in mind when you choose the topic. “You find something scientifically interesting, and therefore you spend time and money to investigate. That’s fundamental research.”
By that standard, most of what the industry calls research isn’t; it is applied work with a two-year commercialization horizon dressed up in academic language. That’s not necessarily wrong – applied work produces real value – but it means the genuinely open-ended inquiry, the kind where you don’t know what you’re looking for, is getting crowded out.
The governance data from the Solvd report makes this concrete: just a year ago, only 38% of large enterprises had any formal internal oversight for AI; today that figure is 100%. Progress, except that half still describe their governance as evolving, and an 80% project failure rate suggests the infrastructure isn’t mature enough to catch what fast deployment misses.
Drew Naukam, CEO of software consultancy Gorilla Logic, sees this pattern repeatedly with enterprise clients:
“Most organizations haven’t defined what success looks like,” he told Techloy. “The foundational work that would make AI effective – the process discipline, metrics, governance structure – isn’t there yet. So what ends up happening is AI accelerates the chaos that already existed. You get a productivity spike, and then you hit the same constraints you always had, just faster.”
His prescription echoes Gomi’s: “AI is a governance decision before it’s a tooling decision.”
Companies are building the guardrails after the fact and calling the whole process rigorous. Meanwhile, the funding gap is widening at the other end. “Universities are struggling simply because of the funding gap. Anthropic, OpenAI, they have tons of money. Many magnitudes of difference,” Gomi said.
The consequence isn’t just fewer papers, but rather fewer unexpected discoveries. The history of technology is full of research that looked irrelevant until it wasn’t – the transistor, the laser, the internet itself all emerged from basic science programs with no clear product roadmap.
Xerox PARC in the 1970s built the first computer with a graphical user interface, the first laser printer, Ethernet networking, and the first user-friendly word processor – none of which were central to the core business.
Naukam draws a direct parallel to today. “The cloud analogy is apt here – we started that journey in 2007, and we still have enterprise clients who haven’t fully made the leap,” he said.
“The agentic development lifecycle is the same kind of ten-year transformation. The companies that will win are the ones that treat it that way: not as a sprint to deploy tools, but as a fundamental shift in how work gets done.”
The question Gomi is implicitly asking is: who is doing that work now? And who is funding it?
What Silicon Valley gets right
Gomi doesn’t romanticize the alternative. He came up in a large-organization culture and spent years running U.S. operations for a Japanese telecom – he knows what slow-moving institutions cost.
“I like this culture,” he said of Silicon Valley’s fail-fast ethos, without hedging. “It’s one of the good ways to push the envelope and challenge everybody.”
The CEO contrasted it with what he grew up around. “I came from Japan, and the Japanese culture is quality-sensitive and risk-averse – very slow. To be at the frontier of innovation, this ‘let’s try’ culture is very important. Silicon Valley is very risk-tolerant.”
The point is not that Silicon Valley should become more like a Japanese R&D department. Speed-to-market is a feature of healthy innovation ecosystems – it just shouldn’t be the value system that governs how all science gets done.
What’s missing is more patience alongside it.
Gomi’s critique of the Valley comes down to a single observation: the industry has optimized hard for one mode of working and is now systematically underfunding the other.
“Maybe five, ten years down the road, real, true, fundamental innovation will be slowing down. And it’s hard to regain.”
Not a prediction, but a warning signal. Whether the industry reads it in time is a different question.
Article Co-authored by Salomé Beyer Vélez.