Human-Level AI May Arrive Decades Earlier Than Expected
The survey found a dramatic shift in how experts perceive the timeline for advanced artificial intelligence. Researchers gave a 50% probability that systems capable of performing all tasks better and more cheaply than humans would be feasible by 2047, 13 years earlier than the estimate in the 2022 edition of the survey. They placed a 10% probability on such systems arriving by 2027.
In practical terms, participants thought that within the decade, leading AI labs could produce systems capable of autonomously fine-tuning large language models, building complex online services like payment-processing websites, or writing songs indistinguishable from those of hit artists.
Yet, despite optimism about capabilities, respondents estimated only a 50% chance that every occupation would be fully automated by 2116, highlighting a long lag between technical feasibility and societal transformation.
Confidence and Concern
The study revealed both excitement and anxiety among experts. Around 68% said positive outcomes from advanced AI were more likely than negative ones, but 48% of these optimists still assigned at least a 5% chance of catastrophic outcomes. Between 38% and 51% of respondents estimated at least a 10% probability that advanced AI could cause human extinction or permanent loss of control.
Concern about specific near-term risks was even more widespread. Misinformation, including deepfakes, was flagged by 86% of respondents as an area of “substantial” or “extreme” concern; 79% pointed to the manipulation of public opinion; and 73% cited authoritarian misuse. Economic inequality followed closely behind, with 71% warning that AI could widen global disparities.
Researchers were also skeptical that future systems would be transparent. Only 5% believed that by 2028, leading AI models would truthfully explain their reasoning in ways humans can understand.
Preparing for the Next Phase
The JAIR survey adds empirical weight to broader institutional warnings.
The Stanford HAI AI Index 2025 reported record investment levels and benchmark breakthroughs but noted that governance and interpretability lag behind capability growth.
The World Economic Forum’s Global Future Council on Artificial General Intelligence is calling for early frameworks to manage cross-border risk, while Bloomberg Law has described how vague definitions of “AGI” complicate regulation and public debate. The World Economic Forum’s “Artificial Intelligence in Financial Services 2025” white paper emphasizes that financial-industry systems are already integrating advanced AI, which raises the urgency of governance, auditability and systemic resilience.
Meanwhile, a PYMNTS article observed that “70% of executives said AI has increased their exposure to digital risk, even as it improved productivity” and noted that only “39% of firms surveyed said they have a formal framework for AI governance.” Together, these sources suggest a convergence of urgency: technical progress is accelerating faster than social systems can adapt.
More than 70% of the JAIR respondents said AI safety research deserves greater priority, up sharply from 49% in 2016. Even so, experts remain deeply divided on what alignment and oversight should look like in practice.