AI-tocracy, or Orwell's dystopia made real

A review of the article

Beraja, Martin, Andrew Kao, David Y. Yang, and Noam Yuchtman (2023). "AI-tocracy." The Quarterly Journal of Economics 138 (3): 1349–1402. https://doi.org/10.1093/qje/qjad012

“The invention of print, however, made it easier to manipulate public opinion, and the film and the radio carried the process further. With the development of television, and the technical advance which made it possible to receive and transmit simultaneously on the same instrument, private life came to an end.” These prophetic lines of Orwell's are the best preface to what can be called “AI-tocracy”. The term comes from Beraja et al. (2023), who take up a frighteningly relevant topic: the relationship between the development of AI and autocracy, studied through the example of China.

Artificial intelligence, or AI, is the breakthrough of the last decade. Although the idea of AI arose long ago, and has long been reflected in fiction, the conditions for realizing it took shape only at the end of the 20th century. The reason is that AI is a product of innovative thinking, and until the 20th century the word “innovation” carried a negative connotation: innovations were feared because they disrupt the accustomed way of life, and a person thirsting for reform could easily be accused of heresy. The situation began to change only in the 1930s, with the advance of scientific progress.

Subsequently, the experience of the USSR, or rather its collapse, implicitly showed that innovation cannot thrive in a planned economy. Since then, the view has spread in economic circles that creative destruction, the process through which innovation displaces the old, is by its nature possible only under capitalism, where the entrepreneurial spirit is free. In authoritarian regimes, incentives to innovate are suppressed by threats and acts of expropriation. Yet Schumpeter himself, who introduced the term “innovation” into economic theory, thought differently: in “Capitalism, Socialism and Democracy” he argued that socialism in the theoretical sense, as opposed to its Soviet execution, has a number of “correct” elements capable of supporting innovation.

Beraja et al. (2023) show that Schumpeter may have been right. An autocracy's need to collect and process data for the purposes of political control can become a starting point for the development of AI, and government orders for AI products can, in turn, give impetus to the development of other innovative products on the market.

To test the hypothesis that there is a two-way relationship between AI and autocracy, the authors focus on facial-recognition AI in China. Maintaining political control is a top priority for the ruling Chinese Communist Party (Shirk 2007). All citizens, even China's most successful entrepreneurs, are threatened by the autocrat's unlimited ability to violate their property rights and, at times, their civil rights.

The authors use the available data to answer three questions: whether autocracies purchase facial-recognition AI for purposes of political control, whether facial-recognition AI is effective at containing unrest, and whether the acquisition of AI is accompanied by further investment in the technology of political control (for example, the purchase of surveillance cameras). To answer these questions, they draw on three blocks of data (a hypothetical sketch of how such data might be combined follows the list):

  • episodes of local political unrest in China from the GDELT project;
  • local public security agencies' procurement of facial-recognition AI (and complementary surveillance technology), drawn primarily from records of China's Ministry of Finance;
  • the development of new software by Chinese facial recognition AI companies (registered with the Ministry of Industry and Information Technology), as well as their software export deals (compiled from press releases, news reports and other sources).
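
Purely for intuition, here is a minimal, hypothetical sketch of how the first two of these blocks might be combined into a region-by-quarter panel. The column names, toy values, and use of pandas are assumptions of this sketch, not the authors' actual pipeline.

    import pandas as pd

    # Block 1: episodes of local political unrest (e.g., derived from GDELT), by region and quarter
    unrest = pd.DataFrame({
        "region": ["A", "A", "B"],
        "quarter": ["2015Q1", "2015Q2", "2015Q1"],
        "unrest_events": [3, 0, 1],
    })

    # Block 2: local public-security procurement of facial-recognition AI
    procurement = pd.DataFrame({
        "region": ["A", "B"],
        "quarter": ["2015Q2", "2015Q2"],
        "facial_rec_contracts": [2, 1],
    })

    # Join on region and quarter; quarters with no recorded contracts become zeros
    panel = unrest.merge(procurement, on=["region", "quarter"], how="left")
    panel["facial_rec_contracts"] = panel["facial_rec_contracts"].fillna(0)

    # Block 3 (firm-level software releases and exports) would be linked separately,
    # by matching contracting firms to their registered software products
    print(panel)
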
AI and civil unrest

“Until they become conscious they will never rebel, and until after they have rebelled they cannot become conscious,” wrote Orwell. Indeed, social unrest is born of discontent and is the most common way to resist dictatorship. The ability to prevent such unrest can sustain the existing power structure for a long time.

It is therefore obvious that autocrats will strive for the ability to “prevent” social unrest. Facial-recognition AI serves exactly this purpose, and nowhere has the technology gained greater popularity than in China. Accordingly, the authors test whether autocrats respond to political unrest by purchasing facial-recognition AI. They find that this is indeed the case: regions of China that experience episodes of political unrest increase their subsequent purchases of facial-recognition AI for public security. That local governments buy the technology in response to outbreaks of political unrest suggests, at the very least, a belief in its effectiveness at containing future unrest.
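
For intuition, the kind of test involved can be summarized by a stylized two-way fixed-effects specification (a sketch of the general form only, not the authors' exact equation), where p indexes regions and t time periods:

    \text{Procurement}_{p,t} = \alpha_p + \delta_t + \sum_{k \geq 1} \beta_k \, \text{Unrest}_{p,t-k} + \varepsilon_{p,t}

Positive estimates of the \beta_k coefficients would indicate that past unrest predicts additional facial-recognition AI procurement, which is the pattern the authors report.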

Can autocracy generate innovation?

Having established that AI does enhance the political control of autocrats, the authors examine whether politically motivated AI procurement stimulates additional AI innovation. They find that in the first year after receiving a government contract firms produce significantly more AI software; two years after the contract is awarded, they have produced approximately 10 (48.6%) more software products. This growth is seen not only in software intended for government use but, importantly, also in software intended for broader commercial applications.

Moreover, government procurement of AI can encourage firms to export their innovative products. To gauge the international competitiveness of new AI software produced under politically motivated contracts, the authors test whether winning a contract is associated with a firm's likelihood of exporting its technology. They find that the likelihood of exporting roughly triples after a firm receives a large government contract for AI.

Freedom is slavery

The paper's findings imply that China's autocratic political regime and rapid innovation in its artificial intelligence sector are not contradictory, but rather mutually reinforcing. China is a prime example of how an equilibrium can be maintained—an “AI-tocracy”—in which an autocratic regime takes root and frontier AI innovation is supported. At the same time, the politically motivated acquisition of this innovation stimulates further innovation, which, in turn, further strengthens autocrats.

More generally, this study, by examining the forces underpinning the relationship between autocratic political repression and cutting-edge innovation in facial recognition AI, sheds light on other notable episodes of cutting-edge innovation in non-democratic regimes.

Episodes such as the development of aerospace technology in the Soviet Union and the chemical-engineering innovations of the Third Reich are difficult to reconcile with the extensive literature highlighting the forces that limit innovation and growth in non-democratic contexts.

These episodes suggest three important general conclusions: first, non-democratic regimes can derive political power from cutting-edge innovation; second, recognizing the political benefits of innovation, authoritarian regimes provide financial and institutional support that can promote technological development; and third, to the extent that these mutually reinforcing forces overcome the traditional tensions of autocracy, innovation can strengthen and sustain autocracies.
