Our People

Team

Shyal Beardsley
Senior Software Engineer

Shyal Beardsley is a seasoned software engineer with a deep background in computer graphics and in building pipelines for international CGI groups. At AOI, he drives the productization of early-stage projects. He has also consulted on service-oriented architectures and microservices. His passion extends to algorithmic, functional-style programming, and he is an active contributor to open-source projects in the realtime cryptocurrency scaling domain.

Alex Fox
Project Director, Transformative Simulations Research

Alexandra Fox is an experienced researcher, team builder, engineer, and product strategist focused on tackling large-scale data privacy and security problems. Most recently she was at Apple, where she built tools that give macOS and iOS users greater transparency into first- and third-party data privacy practices, and contributed to infrastructure security and AI/ML products. Her past roles include Data Scientist/Researcher at PAX Labs, where she authored human usage studies for the FDA and built predictive algorithms, and Software Engineer at Knight Foundation-funded efforts such as Project Gitenberg, now known as the Free eBook Foundation.

Matija Franklin
Researcher

Matija Franklin is a researcher at UCL interested in AI safety and alignment. His current AI safety projects include writing policy papers, conducting empirical research, and developing evals aimed at understanding and addressing AI manipulation. On the alignment side, he is exploring better methods for gathering human feedback to develop richer representations of human values, as well as methods of AI alignment that go beyond preferences.

Brittney Gallagher
Co-founder & Vice President

Brittney Gallagher is a storyteller and creator. She focuses on human interaction with the technology that is changing our world. Over the years she has interviewed technologists and thought-leaders for Digital Village, one of the oldest public radio shows in Los Angeles. You can also hear her on SciFi-preneurship with Phil Libin, which explores how great works of science fiction have inspired creators. In her other life, she builds and manages large-scale software systems for media companies.

Peli Grietzer
Researcher

Peli Grietzer is a researcher and writer specializing in ML, philosophy, and literary studies. He holds a PhD in Comparative Literature from Harvard, completed in collaboration with the HUJI Einstein Institute of Mathematics, where he wrote about a technical analogy between autoencoding and the idea of 'vibe.' Before joining the AI Objectives Institute he worked as a technical project manager in applied reinforcement learning.

Tushant Jha (TJ)
Affiliate Researcher

TJ (Tushant Jha) is a research scholar at the Future of Humanity Institute (FHI). Their work focuses on AI strategy for beneficial use and agency amplification, with interdisciplinary engagement across complex economic systems, institutional economics and legal theory, and theories of deliberation and cooperation. They also conduct conceptual research on agent foundations and the philosophy of knowledge and action, and their relevance to AI risk theory. In a previous life, they were a researcher in computational economics and algorithmic game theory.

Anna Leshinskaya
Program Lead, Moral Learning

Dr. Anna Leshinskaya is a computational cognitive neuroscientist interested in learning and abstraction in human and artificial brains. She earned her PhD from Harvard University investigating the neural organization of semantic memory. Most recently, she led an NSF-funded project at UC Davis investigating computational principles of predictive learning. She is interested in how AI can learn in ways that are human-like and human-aligned.

Bruno Marnette
Chief Technology Officer

Dr. Bruno Marnette is a technology entrepreneur based in London. He holds a PhD in Computer Science from Oxford and an MSc in Cognitive Science from EHESS. He co-founded two VC-backed AI-driven companies. As CTO and Head of Product at Enki, he impacted millions of students by enabling mobile coding education. As CEO of Prodo, he pioneered AI-powered code generation. Bruno also played key roles at Meta, where he developed novel infrastructure to combat misinformation. Prior to joining AOI, he co-founded Depolarized AI, a nonprofit initiative promoting productive online debates.

Colleen McKenzie
Executive Director

Colleen McKenzie is a researcher, designer, and product strategist, interested in exploring and improving humans' interaction with technology in a variety of contexts. She co-founded the Median Group, a scientific research nonprofit, where she explores technological trends and their societal impacts. Other past roles include Product Manager and Software Engineer at Google, Chief of Staff at the Center for Humane Technology, and Head of Product for distributed computing startup Kalix Systems. She holds a B.A. in Computer Science and Neuroscience from Columbia.

Amanda Schochet
Program Co-Lead, AI Supply Chain Observatory

A former computational ecologist and researcher for USAID and NASA, Amanda Schochet draws inspiration from the systems she spent years studying to create MICRO, a new kind of educational institution. MICRO aims to weave compact, engaging exhibits about contemporary science into neighborhoods and public spaces everywhere, increasing public access to key tools and concepts and empowering people to shape the trajectory of our collective future. As a researcher, Amanda employed computer vision and machine learning techniques to develop public health tools and policy recommendations. You can learn more about her philosophy of change in her TED Talk, her Science Friday feature, or on an impromptu urban ecology tour when the weather is right in NYC.

Değer Turan
President

Vehbi Değer Turan previously founded Cerebra Technologies, an AI platform that identifies shifts in public opinion and intent, used by governments, hedge funds, and international retailers alike. Cerebra facilitated deliberation and insights into public discourse trends for over 300 million citizens. He was a Research Fellow at the Freeman Spogli Institute for International Studies, developing toolkits for forecasting shifts in public opinion. At Stanford University, Değer was awarded the Firestone Medal for Excellence in Research for his work with Francis Fukuyama and Dan Jurafsky on “Augmenting Citizen Participation in Governance through Natural Language Processing.” Passionate about pushing the boundaries of language understanding, he can be found teaching colors and numbers to his pet parrot when he wants to switch from AI to natural intelligence.

Edmund Zagorin
Program Co-Lead, AI Supply Chain Observatory

Edmund Zagorin is the Founder and Chief Strategy Officer of Arkestro, the leading Predictive Procurement Orchestration platform. Prior to founding Arkestro, Edmund worked as a strategic sourcing advisor helping large enterprises apply game theory and behavioral science to complex procurement operations. Edmund is a globally recognized thought leader and Forbes contributor on the emerging role of AI/ML in procurement and supply chain. He also serves on the Thought Leadership Committee of the Institute for Supply Management (ISM) and the Advisory Board of the Sourcing Industry Group (SIG), and was named among the 2023 “Pros to Know” by Supply & Demand Chain Executive. Edmund is passionate about AI alignment issues in the supply chain and organized the first AI Ethics & Safety Workshop at the ISM Annual Committee Meeting.

Board

  • Tasha is a technology entrepreneur living in Los Angeles. Her current work with GeoSim Systems centers around a new technology that produces high-resolution, fully interactive virtual models of cities. Prior to her involvement with GeoSim, she co-founded Fellow Robots, and was formerly on the faculty of Singularity University. She sits on the boards of OpenAI, the Centre for the Governance of AI, the Centre for Effective Altruism, and the AI Objectives Institute.

  • Vehbi Değer Turan previously founded Cerebra Technologies, an AI platform that identifies shifts in public opinion and intent, used by governments, hedge funds, and international retailers alike. Cerebra facilitated deliberation and insights into public discourse trends for over 300 million citizens. He was a Research Fellow at the Freeman Spogli Institute for International Studies, developing toolkits for forecasting shifts in public opinion. At Stanford University, Değer was awarded the Firestone Medal for Excellence in Research for his work with Francis Fukuyama and Dan Jurafsky on “Augmenting Citizen Participation in Governance through Natural Language Processing.” Passionate about pushing the boundaries of language understanding, he can be found teaching colors and numbers to his pet parrot when he wants to switch from AI to natural intelligence.

  • Brittney Gallagher is a storyteller and creator. She focuses on human interaction with the technology that is changing our world. Over the years she has interviewed technologists and thought-leaders for Digital Village, one of the oldest public radio shows in Los Angeles. You can also hear her on SciFi-preneurship with Phil Libin, which explores how great works of science fiction have inspired creators. In her other life, she builds and manages large-scale software systems for media companies.

  • Timothy Telleen-Lawton is the former Head of Procurement for Anthropic, an organization working on making AI helpful, honest, and harmless. Before Anthropic, Tim ran the Center for Applied Rationality (CFAR) for 3 years, after spending 4 years in research and operations at GiveWell. He previously lobbied for Environment America, wrote policy reports for Frontier Group, and ran political campaign offices. He holds a B.S. and M.S. in Earth Systems from Stanford.

  • Dr. Anders Sandberg has a background in computational neuroscience, but is currently senior research fellow at the Future of Humanity Institute (FHI) at the University of Oxford. His research centres on management of low-probability high-impact risks, societal and ethical issues surrounding human enhancement and new technology, estimating the capabilities of future technologies, and very long-range futures. He is a fellow for Ethics and Values at Reuben College, senior Oxford Martin fellow, and research associate of the Oxford Uehiro Centre for Practical Ethics, the Center for the Study of Bioethics (Belgrade), and the Institute of Future Studies (Stockholm).

Advisors

  • David A. Dalrymple (also known as “davidad”) recently completed a 2-year Research Fellowship at Oxford University, where he developed the Open Agency Architecture. He has backgrounds in theoretical computer science, applied mathematics, software engineering, and neuroinformatics. In 2008 he was the youngest person to receive a graduate degree from MIT, and he went on to study biophysics at Harvard. His neuroscience work has been funded directly by technology leaders such as Larry Page and Peter Thiel in their personal capacities. David has also worked in machine learning and software performance engineering at major tech companies and startups alike, and co-invented the top-40 cryptocurrency Filecoin with Protocol Labs, a decentralized technology firm where David continues to guide R&D strategy.

  • Brian Christian is the bestselling author of three acclaimed books of nonfiction about the interdisciplinary and human implications of computer science. The Most Human Human (2011) was a Wall Street Journal bestseller, New York Times Editors’ Choice, and New Yorker favorite book of the year. Algorithms to Live By (2016), with Tom Griffiths, was a #1 Audible bestseller, Amazon best science book of the year, and MIT Technology Review best book of the year. The Alignment Problem (2020) is a Los Angeles Times finalist for best science and technology book of the year. A visiting scholar at the University of California, Berkeley, he lives in San Francisco.

  • Gaia is an experienced entrepreneur and leader in technology and innovation-based startups. Prior to Metaculus, she co-founded DAQRI, an AR hardware company catering to the industrial and enterprise market. She served as a senior executive there from 2010 to 2017 and helped build the company to six international offices and 500 employees. Gaia has published academic work in the field of AI policy and has presented widely at conferences, including Inspirefest and Foresight Vision Weekend.

Collaborators

Tantum Collins

Tantum Collins is a researcher and former policymaker. He focuses on how machine learning can help build better systems for democratic governance and collective intelligence. Tantum is an Affiliate at the Collective Intelligence Project and at the Centre for the Governance of AI. He will be based at Harvard during the 2023-2024 academic year as a Safra Center Fellow in Residence. Previously, Tantum worked at the White House Office of Science and Technology Policy, where, as Assistant Director for Technology Strategy, he oversaw a portfolio of issues related to AI and national security. Prior to that, he worked at DeepMind as a Research Scientist, leading the Meta-Research team. He currently lives in London.

Dylan Hadfield-Menell

Dylan Hadfield-Menell is an Assistant Professor of Artificial Intelligence and decision-making in the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. He has a Ph.D. in Computer Science from UC Berkeley. His research focuses on the value alignment problem in artificial intelligence. His goal is to design algorithms that learn about and pursue the intended goal of their users, designers, and society in general. He has focused on algorithms for human-robot interaction with unknown preferences and reliability engineering for learning systems. In his recent work, he has studied the limitations of proxy metrics in AI systems and the design of distributed control and verification mechanisms for autonomous agents, with a focus on aligning recommendation systems.

Natasha Jensen

Natasha Jensen is a product strategist. Her past experience ranges from Product Manager at Google, focusing on speech for the Google Assistant, to work on consumer credit fraud and research helping people use credit wisely at Capital One. She previously studied Mathematics and Brain and Cognitive Science at MIT. At AOI, she is helping build out the Human Autonomy program: researching methods and building prototypes to increase a person's coherence and foster more transparent and consenting engagement between individuals and the technology they use.

Joel Lehman

Joel Lehman is a machine learning researcher who has studied algorithmic creativity, AI safety, artificial life, and AI for wellbeing. Most recently he was a research scientist at OpenAI, co-leading the Open-Endedness team (studying algorithms that can innovate endlessly). Previously he was a founding member of Uber AI Labs, the first employee of Geometric Intelligence (acquired by Uber), and a tenure-track professor at the IT University of Copenhagen. With Kenneth Stanley, he co-wrote the popular science book “Why Greatness Cannot Be Planned,” on what AI search algorithms imply for individual and societal accomplishment. His recent interests include the intersection of machine learning with developmental psychology, psychotherapy, and moral philosophy.

Cat Mooney

Catherine (Cat) Mooney is an agent of elegant chaos and technological transformation. She serves as a Technical Program Manager at Okta, where she convenes strategy conversations for the fastest-growing identity company on the Internet. She also assists the AI Objectives Institute in gathering brilliant minds from every direction and envisioning better futures for AI and transformations of the human economy.

Philip Moreira Tomei

Philip Tomei trained as a cognitive scientist at the University of Oxford, where he focused on Cognitive Linguistics and dynamical systems. He has worked on developing technical model evaluations for AI systems with scholars from SERI MATS, and on EU policy design to enforce such evaluations with the OECD. In 2023 he co-founded the NGO Pax Machina to focus on research integrating AI risk into financial markets.

Aviv Ovadya

Aviv Ovadya researches and supports tractable processes for technology governance and alignment, building on validated methods from offline deliberative democracy and innovative AI-augmented deliberation technology. He is affiliated with Harvard's Berkman Klein Center (RSM), a visiting researcher at Cambridge University's Center for the Future of Intelligence, and consults for technology companies, civil society organizations, and funders. Aviv’s related work explores how we can make our information ecosystem and decision-making systems robust in the face of new technologies. This has involved measurement, mapping harm dynamics, identifying potential levers, navigating limitations (e.g. of deepfake detection), and ensuring that social media ranking systems can bridge divides instead of fomenting division. Aviv's work has been covered regularly, including by the BBC, NPR, the Economist, and The New York Times and his writing has been published by Bloomberg, HBR, the MIT Technology Review, and the Washington Post.

Max Shron

Max Shron is a data scientist, author, and entrepreneur. He is currently the Director of Data at Warby Parker, where his work often involves aligning definitions, metrics, models, and feedback loops to the company's values. Prior to his work at Warby Parker, Max founded and ran Polynumeral, a team of data scientists who consulted for major NGOs, media companies, and tech startups on a wide variety of projects. He's the author of Thinking With Data (O'Reilly, 2014).

Orowa Sikder

Orowa Sikder is CEO and cofounder of Cophi, which uses computational methods to help organizations measure and improve collaboration. He is also finalizing a Computer Science PhD at University College London, focusing on mathematical methods to analyze network data. He previously trained in Economics and Philosophy at Oxford University. At AOI, he is researching methods to help language models better understand human preferences, beliefs and behaviors.

Jessica Taylor

Jessica Taylor is a researcher at Median Group who focuses on topics including artificial intelligence, global catastrophic risks, cryptocurrency scalability, and philosophy of mind. Previously, she researched AI alignment at the Machine Intelligence Research Institute, helping to design formalisms such as reflective oracles, quantilizers, and logical induction. She is passionate about algorithm design, reconciling ontological frameworks, and using AI to study social and psychological phenomena. She holds an MSc in Computer Science from Stanford University.

Justin Stimatze

Justin Stimatze is an engineering leader with an eclectic background in software engineering, management, and academia, and has applied his generalist skills across a variety of industries. He holds a Ph.D. in Physics and has worked on a diverse mix of projects spanning logistics, mapping and location technology, gaming, research, and dialog management. Beyond his professional work, he is an enthusiastic volunteer, driven by a desire to make a positive impact on his community. He embraces a servant-leadership approach, leading teams and fostering supportive workplace environments.

Zoey Tseng

Zoey Tseng is an independent researcher specializing in public goods and self-sovereign identity. Actively involved in the g0v community, Tseng conducted research on designing a new government funding mechanism for open-source projects. Tseng's collaborations extend to Taiwan's Ministry of Digital Affairs (MoDA), where she has participated in web3 and AI-related projects. Notably, she played a key role in deploying Talk to the City for MoDA. Beyond research and development, Tseng has facilitated several deliberation workshops in Taiwan, fostering discussions on the impact and development of large language models (LLMs).

Darko Stojilović

Darko Stojilović is a researcher with extensive experience in industry, NGOs, and academia. He holds MSc degrees in Cognitive and Decision Sciences from UCL and in Psychology from the University of Belgrade. Darko’s current work focuses on moral psychology and AI ethics, with a particular interest in understanding the moral responsibility of AI systems. His work is dedicated to shaping legal guidelines for assigning liability in instances where AI systems cause harm.

Carroll Wainwright

Carroll “Max” Wainwright is an AI research scientist, technologist, and physicist. He is the co-founder and CTO of Metaculus, a crowd-sourced forecasting platform that predicts the future of AI progress along with other important topics. He previously worked at Partnership on AI, building reinforcement learning environments to benchmark AI safety. Before joining the AI research community, Carroll earned his doctorate studying the physics of the very early universe. He currently lives in San Francisco.

Stacey Svetlichnaya

Stacey Svetlichnaya is a deep learning engineer at Weights & Biases, building developer tools for visualization, explainability, reproducibility, and collaboration in AI. She’s not sure if climate change or AI safety is the bigger existential threat, so she strives to maximize impact on both. She enjoys the intersection of machine learning research, application, and UX, mostly for vision & language models. Previously, she worked on image search, productionizing ML systems, and discovery & recommendation on Flickr, following the acquisition of LookFlow, a visual similarity search engine. Stacey holds a Stanford BS ’11 and MS ’12 in Symbolic Systems, focusing on neuroscience.

Alek Chakroff

Alek Chakroff is a researcher with a background in Social Psychology and Cognitive Neuroscience. Alek's primary research focus during doctoral work at Harvard was in "moral psychology," pursuing a descriptive understanding of how humans think about moral values, make moral judgments, behave well, and decide how to treat others who act in good or bad ways. More recently, at Google / Jigsaw, he researched psychological interventions on-platform to reduce the spread of misinformation online.