The views and assessments expressed in this review are those of the author and do not necessarily reflect the views of AERC. This material is provided for informational purposes only and does not constitute investment advice.
In 2013 the economist Alan S. Blinder published a book with the evocative title “After the Music Stopped…”. The book examined the global financial crisis of 2007–09 and the vulnerabilities that had long remained outside the spotlight while the “music” of economic expansion and rising stock markets continued to play. More than a decade earlier, in 2000, Robert J. Shiller released the first edition of “Irrational Exuberance”, a work devoted to the dot-com boom and the mechanisms through which speculative bubbles take shape in financial markets.
Why recall these books today? Because even after severe crises, financial markets tend to return surprisingly quickly to a familiar state – optimism grounded in expectations of the next technological breakthrough. Today that role is increasingly played by artificial intelligence (AI). As early as 2020, OpenAI introduced GPT-3, one of the most advanced language models of its time. In November 2022 the launch of ChatGPT brought generative AI into mainstream public awareness. The release of GPT-4 in 2023 cemented artificial intelligence at the centre of global debate. At first, the spread of AI technology was accompanied by widespread concern about its potential impact on employment and quality of life. Yet as practical applications accumulated and real-world use cases expanded, much of this anxiety gradually gave way to a more pragmatic acceptance of the technology.
The pace at which AI adoption has spread is unprecedented. According to Resourcera, by 1 January 2026 the number of active AI users had reached roughly 1.1bn people – around 13.3% of the world’s population. In itself, the expansion of the user base is hardly a problem. Yet its rapid growth has also become a source of optimism in financial markets.
That optimism is reinforced by the information environment. According to the AI Index 2025 Annual Report published by Stanford University, the number of academic and professional publications on artificial intelligence more than doubled between 2013 and 2023 – from about 102,000 to more than 242,000. In 2024 alone the volume of publications increased by a further 19.7% (YoY).
Why does the growth of publications matter? The answer was articulated long ago by Robert J. Shiller in “Irrational Exuberance”. As he observed, “Enhanced reporting of investing options leads to increased demand for stocks, just as advertisements for a consumer product make people more familiar with the product, remind them of the option to buy, and ultimately motivate them to buy”[1].
And indeed, current trends appear to support this thesis. As the volume of publications on artificial intelligence has expanded, so too has investor interest in AI-related ventures. According to the same Stanford report, global corporate investment in AI reached $252.3bn in 2024, an increase of 25.5% compared with 2023. The most pronounced growth occurred in private investment, which rose by 44.5% (YoY). Meanwhile, the value of mergers and acquisitions involving AI companies increased by 12.1% (YoY). Over the past decade, overall investment associated with artificial intelligence has grown nearly thirteenfold.
Yet beneath this wave of optimism, some economists and investors have begun to raise a number of uncomfortable questions – questions for which there are, as yet, no clear answers. It is these questions that merit closer attention. Three, in particular, stand out.
[1] Shiller, R. J. (2005). Irrational Exuberance (2nd ed.). Princeton, NJ: Princeton University Press. (Ch. 3)
One natural comparison is with the 1990s, when the world was first introduced to the internet as a widely accessible technology. Can artificial intelligence be considered comparable in scale to the arrival of the internet? Even the emergence of the internet itself, however transformative in hindsight, was not universally described at the time as a revolution on par with the First Industrial Revolution. Robert J. Shiller, for example, noted that the early internet could not plausibly have had as large an impact on aggregate corporate profits as markets initially expected, not least because internet companies of that era were still immature and generated relatively modest earnings. A later study by the OECD in 2013 reached a similarly sobering conclusion: a 10% increase in internet penetration – through the expansion of broadband access – was associated with an increase in GDP per capita in OECD countries of only about 0.9–1.5% per year. Compared with the exuberant expectations that often accompany new technologies, such figures appear rather modest.
What, then, can be said about artificial intelligence? A definitive answer – whether it constitutes a genuine technological revolution or not – remains elusive. Any transformative technology requires time before its economic effects become visible. Yet one pattern is already evident: public perception once again displays a familiar asymmetry. Success stories – those of companies such as OpenAI or Higgsfield – tend to dominate public discussion, while stories of disappointment receive far less attention.
A case in point is the report “The State of AI in Business 2025”, produced by researchers at MIT. Although the report attracted some attention, the broader public debate suggests it did little to temper market expectations. Yet the numbers it presents are striking. Based on a review of more than 300 publicly disclosed AI initiatives, the authors conclude that 95% of organizations reported no measurable return on their investments in AI. The divergence in outcomes proved so pronounced – across users (large corporations, mid-sized firms and small businesses alike) as well as among developers (start-ups, vendors and consulting firms) – that the authors coined a term for the phenomenon: the “GenAI Divide”, referring to the stark gap in the effectiveness of generative AI across organizations.
The study’s authors note that tools such as ChatGPT or Copilot have indeed spread widely: more than 80% of organizations have explored them or launched pilot projects, and nearly 40% report having implemented them in some form. Yet these tools tend primarily to enhance the productivity of individual workers rather than directly improving firms’ financial performance. The situation is reminiscent of the famous observation by Robert Solow, who remarked that the effects of computer technology were visible “everywhere but in the productivity statistics” (see IMF Blog).
According to the MIT report, enterprise-level AI systems – whether developed internally by companies or offered by technology vendors – are often rejected altogether. Around 60% of organizations evaluated such systems, but only 20% proceeded to pilot programmes, and a mere 5% moved to full-scale deployment. Many projects falter because of fragile organizational processes, insufficient contextual training of models and a poor fit with everyday operational tasks.
Evidence that the integration of AI can prove difficult even at the level of public policy can be found in South Korea. A large-scale government experiment involving the introduction of 76 AI-powered digital textbooks ended just four months after its launch when the program was deemed unsuccessful.
The initiative – known as the “AI Digital Textbook Promotion Plan” – was launched in partnership with a dozen publishing companies and championed by former president Yoon Suk Yeol in June 2023. According to Rest of World, the textbooks first became available to schools in March 2024. Policymakers promised that the new materials would enable personalised learning in mathematics, English and computer programming, reduce the workload of teachers and help lower dropout rates. The AI-enabled textbooks were designed to adapt lesson plans and generate assignments tailored to each student’s interests and abilities.
In practice, however, the results proved disappointing. Once introduced into classrooms, the textbooks were found to contain numerous errors. Rather than easing the burden on teachers and students, they often required additional time and effort from both. The programme encountered difficulties from the outset. When it was first announced, the then education minister, Lee Ju-ho, stated that AI-based textbooks would eventually become mandatory under law. Faced with mounting opposition, however, the government revised the plan, converting it into a voluntary pilot programme lasting one academic year. By October 2024 more than half of the 4,095 schools that had initially joined the initiative had withdrawn from it.
The financial implications were significant. Publishing companies had invested roughly $567m in the project, anticipating government procurement worth around $850m. Once the textbooks’ status was downgraded from mandatory to voluntary and many schools withdrew, a substantial portion of those investments became unrecoverable.
Against this backdrop, the rising optimism surrounding artificial intelligence raises uncomfortable questions. Why do failures remain largely at the margins of public debate? It is as though the discussion itself avoids acknowledging an obvious point: artificial intelligence does not automatically lead to productivity gains, nor can it easily translate into meaningful financial results in every sector.
Behind the high-profile success stories lies a quieter but increasingly evident reality. The economic impact of AI appears uneven, gradual and highly context-dependent – far more so than the prevailing narrative might suggest.
The success of OpenAI has inspired considerable enthusiasm among investors, but it also raises a number of questions. Many have taken note of the company’s announcement that its revenue had reached $20bn in 2025. Yet one crucial detail has largely escaped attention: this figure referred not to actual annual revenue, but to annualised revenue.
The distinction matters. Annual revenue reflects income actually received over a given period. Annualised revenue, by contrast, is a calculated figure: the revenue from a particular month or short period is extrapolated across an entire year. Such a metric can be useful for estimating growth dynamics, but it is not equivalent to realised financial performance. This naturally raises a question: if a company’s underlying results are as strong as suggested, why rely on a metric based on extrapolation? And what, then, might the company’s actual revenue look like?
Some observers have pointed to indirect indicators. In the first half of 2025 OpenAI reportedly paid Microsoft roughly $454.7m under its revenue-sharing agreement. According to the terms of their partnership, OpenAI must transfer 20% of its revenue to Microsoft. On that basis, OpenAI’s revenue for the first half of 2025 may have been around $2.27bn. In the third quarter of the same year OpenAI paid Microsoft a further $411.1m, implying revenue of around $2bn for that quarter alone. If one assumes a broadly comparable growth trajectory in the fourth quarter of 2025, the company’s total revenue for the year might plausibly fall somewhere in the range of $9–10bn – less than half the widely cited $20bn figure.
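The back-of-envelope arithmetic above can be sketched in a few lines. This is a rough illustration only, using the payment figures and the reported 20% revenue share cited in this review; the growth assumption for the fourth quarter is a hypothetical extrapolation, not a disclosed number.

```python
# Implied OpenAI revenue, backed out from reported payments to Microsoft.
# All figures in $bn; the 20% share and payment amounts are reported
# estimates cited in this review, not audited numbers.

SHARE = 0.20          # Microsoft's reported share of OpenAI revenue

payment_h1 = 0.4547   # ~$454.7m paid in H1 2025
payment_q3 = 0.4111   # ~$411.1m paid in Q3 2025

revenue_h1 = payment_h1 / SHARE   # implied H1 revenue ≈ $2.27bn
revenue_q3 = payment_q3 / SHARE   # implied Q3 revenue ≈ $2.06bn

# Hypothetical Q4: assume it grows over Q3 at the same rate at which
# Q3 grew over the average quarter of H1.
avg_quarter_h1 = revenue_h1 / 2
growth = revenue_q3 / avg_quarter_h1
revenue_q4 = revenue_q3 * growth

full_year = revenue_h1 + revenue_q3 + revenue_q4
print(f"Implied H1 revenue: ${revenue_h1:.2f}bn")
print(f"Implied Q3 revenue: ${revenue_q3:.2f}bn")
print(f"Extrapolated full-year revenue: ~${full_year:.1f}bn")
```

Under this particular growth assumption the full-year figure lands in the high single digits of billions; more generous Q4 assumptions push it toward the $9–10bn range, but any plausible variant remains far below the widely cited $20bn.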
Such calculations do little to inspire confidence. Some investors remain convinced that the weak profitability of AI start-ups merely reflects the heavy upfront capital costs associated with developing and scaling new technologies, costs that will eventually be recouped. Others see a deeper issue. Writing in Harvard Business Review, Danielle Kost argues that the challenge lies not simply in high initial expenditures but in the absence of a clearly viable business model for many AI companies.
Indeed, generative AI technologies did not initially emerge within a well-defined commercial framework. According to OpenAI itself, by the end of 2025 ChatGPT had roughly 800m weekly users. Yet only around 5% of them – about 40m people – were paying customers. It is difficult to imagine that such revenues alone could fully offset the enormous costs associated with training and maintaining large AI models.
Against this backdrop, the search for sustainable monetization strategies has become unavoidable. It is therefore unsurprising that in early 2026 OpenAI announced that it was testing advertising within ChatGPT. For now the approach appears cautious: clearly separated ad placements, shown only in the free version of the service. Even so, such a model is unlikely to match the scale of the advertising ecosystem underpinning search platforms such as Google.
This leads to a more fundamental question. Why do investments continue to flow into companies whose business models remain so uncertain? Yes, financial markets have often been willing to finance rapid growth long before firms achieve sustained profitability. Yet the longer the gap persists between investment and profitability, the more the situation begins to resemble a bubble – one sustained primarily by investors’ faith in an optimistic future rather than by stable operating revenues.
Raising such a question typically provokes two immediate objections.
The first is straightforward: why focus so heavily on OpenAI? After all, the AI market today extends far beyond a single company. Numerous lesser-known start-ups operate in the sector, some of which appear to possess clearer paths to monetization. Yet scale matters. The larger a company becomes, the stronger the multiplier effects associated with its strategic decisions – and the broader the consequences should those decisions prove misguided.
OpenAI has already entered into agreements implying infrastructure expenditures of roughly $1.4trn over the next 8 years. These include commitments estimated at about $500bn for chips supplied by Nvidia, around $300bn for computing services from Oracle, roughly $22bn for capacity provided by CoreWeave, and additional obligations involving Broadcom related to the development and deployment of custom chip solutions. For context, the entire GDP of the United States in 2025 amounted to about $30.6trn. Meanwhile Oracle, having secured such contracts, plans to raise around $50bn in 2026 through a combination of debt issuance and equity sales in order to finance the construction of new data centres.
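As a rough scale check, the commitments cited above can be tallied and set against US GDP. This is a sketch using the review’s own figures; the gap between the itemized deals and the $1.4trn headline reflects the Broadcom arrangement and other obligations whose values are not publicly detailed.

```python
# Scale check on OpenAI's reported infrastructure commitments
# (estimates cited in this review, not audited figures). Values in $trn.
itemized_deals = {
    "Nvidia chips": 0.500,
    "Oracle computing services": 0.300,
    "CoreWeave capacity": 0.022,
}
total_commitments = 1.4   # headline figure incl. Broadcom and other deals
us_gdp_2025 = 30.6        # approximate 2025 US GDP

itemized_total = sum(itemized_deals.values())
gdp_share = total_commitments / us_gdp_2025

print(f"Itemized deals: ${itemized_total:.3f}trn of ${total_commitments}trn")
print(f"Total commitments as share of 2025 US GDP: {gdp_share:.1%}")
```

The commitments amount to roughly 4.6% of one year of US output, though they are spread over eight years, so the annual burden is closer to half a percent of GDP; the point is the order of magnitude, not precision.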
This highlights a broader trend: even the largest infrastructure providers cannot finance the expansion of AI capacity entirely from their own balance sheets. According to Bloomberg, the construction of the data-centre infrastructure required to support the artificial-intelligence boom could cost more than $3trn globally. Where will these funds come from? Largely from debt markets. In such circumstances, it becomes difficult to ignore both the unusual financial metrics reported by OpenAI and the occasionally ambiguous public statements of its chief executive, Sam Altman.
A second objection concerns the premise itself: why even discuss the possibility that the music might stop? To some, such speculation sounds like unwarranted skepticism. Yet the answer again returns to business models. A useful comparison is drawn in a blog post by Carnegie Investment Counsel, which contrasts the early growth trajectories of Alphabet and OpenAI. At first glance the analogy appears natural. Google’s search engine sought to organize the web and provide users with direct answers to their queries. In that sense ChatGPT can be interpreted as the next stage in the evolution of search: it also responds to user questions, drawing on information from across the internet, but does so in a more conversational, contextual and expansive way.
According to the estimates discussed in that analysis, the early growth of OpenAI appears broadly comparable with the trajectory of Google in the early 2000s – at least if one relies on the revenue figures reported by OpenAI (though, as discussed earlier, those figures invite closer scrutiny). Yet the central issue is not merely the current pace of revenue growth but the sustainability of that trajectory.
Alphabet (then known simply as Google) was able to achieve exponential growth thanks to a rare combination of technological advantage and an extraordinarily simple business model. Google’s search engine rapidly displaced competitors, the verb “to google” entered everyday language, and the company began to redirect advertising budgets away from traditional media – newspapers, radio and television – towards digital platforms. This transformation was made possible by the superior relevance of search results and the remarkable efficiency of targeted advertising. The platform’s ability to collect user data and offer a straightforward mechanism for purchasing search-based advertising allowed Google to capture a substantial share of a global advertising market worth roughly $300bn in the early 2000s. Importantly, the core of this growth was driven by desktop search long before the emergence of YouTube or Android.
OpenAI’s model, however, is structured very differently. At a minimum, it remains far less clearly defined in terms of sustainable monetization, while simultaneously requiring far greater capital investment. This does not mean the model cannot ultimately succeed. But it does mean that replicating Google’s trajectory is far from guaranteed.
Herein lies the central risk, and it is not technological but financial. Will investors remain patient if geopolitical uncertainty intensifies, the cost of capital rises and the timeline for returns remains unclear? In this context it is hard not to note that Michael Burry – the investor famous for betting against the U.S. housing bubble depicted in “The Big Short” – has reportedly begun positioning himself against the AI boom as well.
History suggests that the music on financial markets often continues to play longer than sceptics expect. But it never plays forever.