Introducing the AI Objectives Institute
The implications of artificial intelligence for the future of humanity are among the greatest questions of our era. We are increasingly delegating high-stakes decisions to AI systems and relying on them for our understanding of the world. Can we define what matters to us well enough to entrust important decisions to those systems? Can we guarantee that AI won’t simply discard the values we try to give it? If we succeeded in making systems as flexible and autonomous as humans, would we trust them?
While we wrestle with those questions, another set of artificial intelligences already exists, more powerful than any person and capable of both great benefit and great harm to the world: corporations, with their objective of maximizing shareholder value. Capitalism already provides a setting in which we need to align impersonal superintelligences with human flourishing.
While the implied 'will' of capitalism does not always win out over other forces (for example, some profitable but toxic manufacturing processes are illegal, and some family businesses forgo growth to preserve their lifestyle), the scale and incentive force of decisions made within capitalism's machinery around the globe are tremendous, and shape the world in decisive ways. Left to their own devices, competition and the drive to accumulate capital will often squeeze out any other set of values.
Thinking of capitalism through the lens of artificial intelligence is not merely metaphorical—emerging research suggests deep algorithmic connections between the ways that markets and neural networks function, and many opportunities to address their problems together.
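One classic instance of this correspondence is worth sketching (the specific model below is our illustrative choice, not something claimed in the text above): Walrasian tâtonnement, the textbook story of how prices adjust toward market-clearing, has the same shape as gradient descent, the workhorse algorithm for training neural networks. Prices move in proportion to excess demand, just as a network's weights move in proportion to the gradient of a loss. The demand curve used here (demand for good i equals 1/pᵢ) is an arbitrary toy assumption chosen so the equilibrium is easy to check.

```python
import numpy as np

def excess_demand(prices, supply):
    """Toy excess demand: each good's demand is 1/p_i, an assumed demand curve."""
    return 1.0 / prices - supply

def tatonnement(prices, supply, lr=0.1, steps=200):
    """Price update p <- p + lr * z(p), mirroring gradient descent's
    w <- w - lr * grad(loss): both iterate small corrections toward a
    fixed point where the adjustment signal vanishes."""
    for _ in range(steps):
        prices = prices + lr * excess_demand(prices, supply)
    return prices

supply = np.array([2.0, 4.0])
eq = tatonnement(np.array([1.0, 1.0]), supply)
# At equilibrium the market clears: demand 1/p equals supply, so p = 1/supply.
print(eq)  # approximately [0.5, 0.25]
```

The analogy is of course loose — real markets are decentralized and strategic in ways this sketch ignores — but it shows why tools for analyzing one kind of iterative optimizer can transfer to the other.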
Noticing these correspondences opens up multiple fruitful inquiries:
How are the problems we fear in the next generation of AI already taking place in capitalism today?
E.g. Is profit (or GDP) really the value that we want to maximize? Could a particular formulation of ESG (environmental, social, governance) goals augment profit successfully at the full scale of capitalist markets? What side effects would ESG metrics likely have if they were set as goals? Are there ways to prevent those side effects?
Would more aligned market objectives improve the incentives and resource allocation in the boom industry of machine learning, thereby reducing the risks that misdirected or unsafe AI poses to humanity?
How might we use new tools and ideas to improve upon capitalism?
E.g. Can new tools from the AI and AI Safety communities help incorporate external information into the market’s objective function more effectively than prior regulation, taxation, and subsidy regimes?
How might we use lessons from capitalism to assist in the project of building a safely aligned AI?
E.g. What have been the most successful cases of adjusting capitalism's optimization power? What can existing case studies teach us about entities which will do anything that is not explicitly forbidden in order to reach their goals?
How much does successful and aligned deployment of transformative AI systems depend directly on market incentives themselves being more aligned?
Finding the better versions of these questions and pursuing answers may be of critical importance to humanity's development in the 21st century. While there are many people and institutions working on pieces of this inquiry from many different perspectives, we believe we are overdue for an institution that can direct more dedicated attention to the project of aligning capitalism as the first case of powerful AI.