A former OpenAI employee has revised his prediction for when artificial intelligence might reach dangerous levels of capability. Daniel Kokotajlo now expects AI systems to achieve fully autonomous coding in the early 2030s, not 2027 as he first estimated. The update marks a significant shift in thinking about how fast AI development will progress.
Revised Predictions for AI Development
According to The Guardian, Kokotajlo gained attention in April when he released AI 2027, a scenario describing how unchecked AI development could lead to superintelligence. That scenario suggested AI agents would fully automate coding and AI research by 2027, triggering an intelligence explosion. The system would then create smarter versions of itself.
The original timeline sparked debate across the tech community. US Vice President JD Vance appeared to reference the scenario in a May interview about the AI arms race with China. Gary Marcus, an emeritus professor at New York University, dismissed the piece as a work of fiction and called some of its conclusions science fiction.
Kokotajlo and his team have now adjusted their forecast, setting 2034 as the new horizon for superintelligence. The updated prediction does not specify a date for potential catastrophic outcomes.
Why the Timeline Changed
"Things seem to be going somewhat slower than the AI 2027 scenario," Kokotajlo wrote in a post on X. "Our timelines were longer than 2027 when we published, and now they are a bit longer still."
Expert Views on AGI Timelines
Malcolm Murray, an AI risk management expert, said many people have been pushing their timelines further out in the past year. "They are realizing how jagged AI performance is," he explained. For dramatic scenarios to happen, AI would need more practical skills useful in real-world complexities.
Henry Papadatos, executive director of French AI nonprofit SaferAI, noted that the term AGI "made sense from far away when AI systems were very narrow. Now we have systems that are quite general already, and the term does not mean as much."
OpenAI CEO Sam Altman said in October that having an automated AI researcher by March 2028 was an internal goal, adding that the company "may totally fail" at it.
Andrea Castagna, a Brussels-based AI policy researcher, said dramatic AGI timelines do not address many complexities. "The more we develop AI, the more we see that the world is not science fiction," she said. "The world is a lot more complicated than that."