Machina Economica, Part I: Autonomous Economic Agents in Capital Markets

Phillip Moreira Tomei

This is the first entry in a series on AI integration in the global economy, exploring its ethical, sociopolitical, and financial consequences. This entry focuses on financial markets and firm behaviour.

On the afternoon of May 6, 2010, a sudden and inexplicable market phenomenon unfolded. Within minutes, the Dow Jones and S&P 500 indices plummeted over six percent, erasing approximately one trillion dollars in market value. This event, later dubbed the "flash crash," sowed panic among investors due to its magnitude, velocity, and seemingly irrational nature. No discernible real-world event at the time seemed to justify such a drastic downturn, and intriguingly, the market rebounded almost as swiftly as it had declined.

The flash crash is now largely attributed not to human action but to the dominance of automated high-frequency trading in equities. On any given market day, algorithmic trading accounts for around 80% of equities trading volume.

Finance has not been in human hands for more than a decade, and we are now about to hand over the rest of the economy.

From Homo Economicus to Machina Economica

Institutions are structured around the costs of organisational operation: bargaining, coordination, the gathering and processing of information, and the monitoring and enforcement of rules and norms.1 As Samuel Hammond argues, the integration of AI into industry in the near future promises to reshape these costs dramatically, radically reducing them and changing how we collect the information necessary to transact. The result will be a shift away from the institutional frameworks and corporate organisation established in the early 20th century, which were built around previous flows of information and transaction costs. As automated agents take on a growing role in creating economic value, they will necessitate a novel institutional organisation and upend market dynamics.

Our primary observation is that, for firms, the adoption of significant algorithmic advances is a zero-sum game. If firm A replaces a business function with an AI, say answering customer emails, the resulting gains in speed and efficiency give it a competitive advantage, and the competing firm B, in a free market, is obliged to adopt the same technology. As algorithmic technologies advance across multi-modal domains, we can imagine this dynamic repeating across business functions, from supply chain management to employee monitoring, from quality assessment to research and development. Ultimately, as more and more business tasks are completed by AI agents, executives and shareholders will see a corresponding loss of direct control and oversight over the engine of value creation in their companies.
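This competitive dynamic has the structure of a classic two-player game and can be sketched in a few lines. The payoff numbers below are purely illustrative assumptions, not estimates; the point is the structure, in which adoption dominates for each firm even though mutual restraint would leave both better off:

```python
# Toy 2x2 adoption game between firms A and B. Payoffs are (A, B) and
# purely illustrative: adopting against a non-adopter wins the market,
# but mutual adoption (with its costs and risks) pays less than
# mutual restraint.
payoffs = {
    ("hold", "hold"):   (3, 3),
    ("hold", "adopt"):  (0, 4),
    ("adopt", "hold"):  (4, 0),
    ("adopt", "adopt"): (1, 1),
}

def best_response(player, rival_action):
    """The action maximising `player`'s payoff given the rival's move."""
    actions = ("hold", "adopt")
    if player == "A":
        return max(actions, key=lambda a: payoffs[(a, rival_action)][0])
    return max(actions, key=lambda b: payoffs[(rival_action, b)][1])

# 'adopt' is each firm's best response whatever the rival does, so the
# equilibrium is mutual adoption, even though (3, 3) beats (1, 1).
assert best_response("A", "hold") == best_response("A", "adopt") == "adopt"
```

Under these assumed payoffs the game is a prisoner's dilemma: the free-market equilibrium is universal adoption, regardless of whether adoption is jointly optimal.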

Adoption of AI technology is already a competitive necessity in many fields, and this will only intensify as use cases widen, holding even when the marginal costs outweigh the marginal benefits. Driven by this competitive necessity, firms may adopt AI technologies to their own disadvantage, overexposing themselves to business risks, be they reputational, cybersecurity, regulatory, or simply overspending on innovation. There will of course be winners, but as we shall discuss later, they may be fewer than expected.

Economics has for decades been criticised for its underlying assumption that markets are composed of utility-maximising individuals who make more or less rational decisions: the so-called homo economicus. We are beginning to observe that AI agents may come to mirror this economic model of human behaviour more closely than humans themselves. This convergence heralds an era in which AI, as an integral part of the economic landscape, promises to enhance the efficacy of the models that dissect and explain social, political, and financial dynamics. From an institutional perspective, this shift suggests that the 'rules of the game', as formulated by economic theory, may find a more resonant and effective application in the realm of artificial agents than in the unpredictable domain of human behaviour. Agent-based modelling has so far been unable to reproduce the behaviour of financial markets, likely due to the irreducibility of investor psychology. Yet when the key players in the economy cease to be human, we may begin to see a phase transition: from homo economicus to machina economica.

Lessons from Finance

It’s important to understand that the loss of human agency in the economy is a natural consequence of the incentive structure of markets. This has already happened in finance with algorithmic trading firms such as quant funds, which trade securities based on complex ‘black box’ algorithms with minimal human intervention. The underlying philosophy of quant funds aligns closely with the Efficient Market Hypothesis: market prices reflect all available information, rendering traditional human stock-picking strategies ostensibly ineffectual. The best-performing fund of all time is not truly run by a human. Renaissance Technologies’ Medallion Fund, which has maintained an average annual return of 66% since 1988, has its trading executed by a model originally programmed by the late James Simons and algebraist James Ax.

The flash crash of 2010 was not an isolated event. In 2011 alone, commodities markets witnessed a series of swift and startling fluctuations, painting a vivid picture of the expansion of algorithmic trading beyond equities. In February, the sugar market dropped 6% in a single second. This was followed by a plunge in cocoa futures, which fell 13% in under a minute on the Intercontinental Exchange on March 1st. On March 16th, the U.S. dollar fell 5% against the yen in a few minutes, one of the dollar's most significant movements in history. According to a former cocoa trader: "The electronic platform is too fast; it doesn't slow things down like humans would.”

Since algorithmic traders provide a significant portion of the market's liquidity, their withdrawal in volatile conditions reduces liquidity and magnifies price movements. The interconnectedness of modern financial markets means that a disturbance in one market quickly spreads to others, presenting a new kind of systemic risk. Algorithmic trading strategies often span multiple markets and asset classes, allowing a flash crash in one market to trigger similar phenomena in others. The 2010 flash crash was largely attributed to the interplay of algorithmic trading strategies, which reacted to each other and to a large sell order in the futures market, creating a feedback loop that drove prices down rapidly.
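The liquidity-withdrawal feedback loop can be made concrete with a toy simulation. All parameters below are stylised assumptions, not calibrated to any real market; the mechanism is simply that the same sell pressure hits an ever-thinner order book:

```python
# A minimal sketch of the liquidity-withdrawal feedback loop.
# Parameters are stylised assumptions, not market data.
def simulate_sell_off(sell_pressure=10.0, liquidity=100.0, steps=5):
    """Each step, price impact = pressure / liquidity, and liquidity
    providers withdraw quotes in proportion to the impact they just
    observed, so the same sell pressure moves the price more each round."""
    price, path = 100.0, []
    for _ in range(steps):
        impact = sell_pressure / liquidity
        price *= 1 - impact                    # proportional price drop
        liquidity *= max(0.1, 1 - 2 * impact)  # quotes are pulled
        path.append(round(price, 2))
    return path
```

Running `simulate_sell_off()` produces successive drops that grow each round: the loop amplifies the initial shock rather than absorbing it, which is the signature of a flash crash.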

Central banks are increasingly worried that the rise of algorithmic trading not only introduces systemic risks but also exacerbates the problems they are trying to solve, injecting uncertainty into the global economy. Today's markets are remarkable in their speed to assimilate fresh information. However, this rapidity comes with a trade-off: they are inherently less stable, more susceptible to abrupt and often inexplicable collapses. Consider the findings of a 2014 investigation examining the effects of algorithmic trading in 42 international stock markets. This study revealed that while these markets gained in liquidity and efficiency, they also experienced heightened volatility. An even more intriguing observation comes from a 2013 study on commodity markets. Over time, these markets have evolved into increasingly introspective entities: a substantial 60-70 percent of their price fluctuations are no longer primarily driven by the integration of novel information, but by ‘reflexivity’. In essence, these markets move themselves without reference to the physical world in which their assets lie.

As finance becomes increasingly run by advanced algorithmic models, so will the ‘traditional’ industries that underlie much of global GDP. The corresponding failure modes may thus also cross over. When factories, logistics networks and transport systems are run by AI, systemic risk in the economy may reach unprecedented levels. 

The Production Function and Its Natural Monopolies

Spirits I have conjured, no longer pay me heed.

Goethe, 1797

As competitive necessity forces firms to adopt AI technologies haphazardly, and AI replaces human decision-making beyond the financial world, local as well as systemic risks will be introduced. Algorithmically controlled supply chains will collapse, factory robots will fail, and driverless cars will kill.

On the large scale, firms with significant AI implementation will yield algorithmic advances that monopolise industries. Let us take our companies A and B, both within a burgeoning field, say, rare earth mineral exploration. Company A, with significant capital expenditure, develops an algorithmic technology that enables it to predict the distribution of cobalt from satellite multispectral data alone. This significantly lowers its marginal cost by eliminating the need to collect expensive physical samples, and thus supercharges the firm’s productivity.

In the following quarters, firm A rapidly increases its profitability and is thus able to raise even more capital, which it spends on more data, more engineers, and perhaps even those physical samples, now integrated into an even better multi-modal model, yielding even greater productivity gains. The algorithmic innovation thus enters a positive feedback loop, resulting in exponential productivity growth. Firm B, on the other hand, faces such extreme capital expenditure to build out the necessary tech stack and data that it cannot compete, and the minerals exploration business veers toward a natural monopoly.
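The flywheel can be captured in a stylised model: firm A's output is proportional to its accumulated data, and a fixed share of that output is reinvested into acquiring more data, while firm B improves only incrementally. All parameters are illustrative assumptions:

```python
# Stylised data flywheel. Output scales with the data stock, and a
# share of output buys more data, so the stock compounds geometrically
# by a factor of (1 + k * reinvest) each period. Parameters are
# illustrative assumptions, not estimates.
def firm_a_output(periods=12, k=0.5, reinvest=0.4, data=1.0):
    out = []
    for _ in range(periods):
        y = k * data          # productivity scales with the data stock
        data += reinvest * y  # profits buy more data: data begets data
        out.append(y)
    return out

def firm_b_output(periods=12, y0=0.5, step=0.05):
    # No flywheel: a fixed incremental improvement each period.
    return [y0 + step * t for t in range(periods)]

a, b = firm_a_output(), firm_b_output()
gap = [x / y for x, y in zip(a, b)]  # A's lead over B widens every period
```

Both firms start at the same output, yet firm A's exponential path pulls away from firm B's linear one in every period: the divergence is driven entirely by the reinvestment loop, not by any initial advantage.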

Joseph Schumpeter articulated this idea back in 1942. He posited that the pursuit of a temporary monopoly is a fundamental activity of profit-driven entities, and that this endeavour is a catalyst for substantial innovation and economic expansion. Fast forward to the 1970s, and we witness the emergence of the discipline of business strategy, a field that essentially revolves around the methodologies of constructing and preserving these monopolies, acting as a conceptual counterbalance to the notion of antitrust. Simply put, a natural monopoly is an industry in which multi-firm production is more costly than production by a monopoly.
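That definition amounts to subadditivity of the industry cost function: when there is a large fixed cost (the model, the data, the tech stack) and a low marginal cost, one firm serving the whole market is cheaper than two firms duplicating the fixed cost. The numbers below are illustrative assumptions:

```python
# Natural monopoly as cost subadditivity: C(q1 + q2) < C(q1) + C(q2).
# F stands in for the fixed cost of the AI tech stack and data;
# the numbers are illustrative assumptions.
F, c = 100.0, 1.0  # fixed cost of the stack; marginal cost per unit

def cost(q):
    """Total cost of producing q units (zero if the firm produces nothing)."""
    return F + c * q if q > 0 else 0.0

demand = 50.0
one_firm = cost(demand)                          # F paid once:  150.0
two_firms = cost(demand / 2) + cost(demand / 2)  # F duplicated: 250.0
assert one_firm < two_firms  # single-firm production is cheaper
```

With an affine cost and positive fixed cost, the inequality holds at any split of demand, which is why heavy up-front algorithmic investment pushes such industries toward single-firm production.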

In past epochs, the supremacy of a corporation was largely tethered to its physical assets. Orthodox economic theory distinguished three factors of production: natural resources, labour, and capital. Data can today be considered the fourth. It has been described throughout the popular media as the ‘new oil’, but while both must be refined to be useful, the consumption of data, crucially, is not rivalrous. Unlike oil, two entities can hold the same data at the same time, and using it does not diminish it; quite the opposite, data begets more data.

If this sounds familiar, it is because this production factor has propelled the digital monopolies of today. There is, in essence, only one search engine, one browser, one online store, and one online mapping service. All of these markets have enormous costs of entry, and their economies of scale benefit from exponential algorithmic improvement driven by the positive feedback loop of data-driven productivity gains.

In the narrative of classical economies of scale there is a clear simplicity: expanding production leads to a proportional reduction in unit costs. This doctrine, deeply rooted in the tangible realms of manufacturing facilities, workforce expertise, and infrastructural strength, extolled the virtues of industrial expansion. The integration of AI, however, may change this landscape. Observe, for instance, the evolution of modern e-commerce platforms: their edge lies not solely in expansive inventories, but in the data-driven deciphering of complex consumer patterns and the optimisation of the corresponding supply chain. AI-driven value creation in traditional industries (such as minerals) thus represents the exportation of data-driven economic feedback loops into the non-tech economy. Some possible iterations of this relationship have been modelled by Richard von Maydell at ETH Zurich.

These AI-driven economic entities (machina economica) will expand their automated decision-making out of competitive necessity, deploying capital, directing workers, and supplying millions of humans. The behaviour of simple automata could be predicted under the formalisable AI paradigms of yesteryear, but the current frontier of LLMs does not share these features. Models inherit biases from their training data, with demonstrated harms to humans; the output of a large language model is unpredictable; and it is seemingly easy to revert safety-aligned and optimised models to their base state. Indeed, our new machina economica may not be the perfect utility-maximising automaton we wished for. Yet rather than the familiar human irrationality, we face uncharted risks.

By the time these machines hold economic power, they will be operating beyond the ability of humans to oversee them. We will need new economic institutions, rules, and metrics to keep them working in our interest, lest we end up like Goethe’s young magician, helpless before the creature we have conjured.
