Intelligence Rising

A Strategic Exploration of AI Futures

Exploring AI futures through role-play.

Intelligence Rising is a strategic role-playing game designed to simulate and illustrate the possible paths that artificial intelligence capabilities and risks might take in the world.

Players assume the roles of leaders in governments and technology companies, then try to emulate how they believe these actors would behave in the pursuit of AI development over the coming decades.

The game was designed by a group of researchers from the Universities of Cambridge, Oxford and Wichita State to allow decision-makers to stress-test their assumptions and develop a stronger sense of the possible impacts of the gradual development of AI.

Background on our Game

"AI is one of the most important things we’re working on … as humanity.
It’s more profound than fire or electricity or any of the bigger things we have worked on.
It has tremendous positive sides to it, but, you know, it has real negative consequences, [too].”

- Sundar Pichai, CEO of Alphabet, 2020


AI is expected to be one of the most transformative technologies in human history. Its implications are hard to fully grasp: AI has the potential to transform society dramatically, even radically, in ways that may be difficult to anticipate.

For more than a century, military planners have used games to explore plausible futures, imagining potential conflicts and enacting them in order to be prepared should they come to pass. These techniques, widely known as ‘wargames’, are frequently used by all modern militaries. Their success has been so profound that they have been adapted for a variety of applications, including strategic planning by policy makers and organizations. In such applications the techniques are known as ‘serious games’ or ‘simulation games’, and they can be very useful when preparing for future scenarios that involve high degrees of uncertainty, where even a marginal reduction of uncertainty is valuable.

Inspired by these techniques, Intelligence Rising was created to explore plausible futures involving the development of radically transformative AI. In development since 2018, the game has evolved into five distinct versions, each meant for a different audience:

  • an online role-play game
  • a tabletop role-play game
  • an educational board game
  • a free-form role-play for large groups
  • a free-form role-play for training AI researchers and high-level decision makers

Playing a Game in Your Organization

Games typically include between four and twelve players and take around four hours, including introductions and debriefs.

Every round of player actions represents two years in-game, with games frequently spanning ten to fourteen years, covering the global dynamics of competing AI laboratories and state actors into the 2030s.

Games can also be tailored to the needs of your organization, and we are happy to work with you to find custom solutions that can improve your team’s foresight on the issue of transformative AI.

Please reach out to us via email for quotes or more information.

Senior Game Masters

Our senior game masters have robust expertise in global affairs, technology policy and AI technologies. They are also experienced in facilitating Intelligence Rising for a wide variety of audiences, including governments, think tanks, and industry organizations. They are comfortable facilitating games with participants at all levels of expertise, from C-level executives, senior research managers, government officials and career policy professionals to junior students and entry-level researchers.

Shahar Avin, PhD

Shahar Avin is a senior research associate at the Centre for the Study of Existential Risk (CSER) at the University of Cambridge. He works with CSER researchers and others in the global catastrophic risk community to identify and design risk prevention strategies, through organising workshops, building agent-based models, and frequently asking naïve questions.

Ross Gruetzemacher, PhD

Ross Gruetzemacher is an assistant professor of business analytics in the Barton School of Business at Wichita State University and a Research Affiliate at CSER. His research involves strategic planning, foresight and forecasting for transformative artificial intelligence. He has developed novel techniques in this area, including a workshop format and methods for mapping paths to AGI. He also works on improving organizational decision making and on applications of NLP in business.

James Fox

James Fox is the research director at the London Initiative for Safe AI. James works on multi-agent AI behaviour, which encompasses game theory, causality, reinforcement learning, and temporal logics. This work has resulted in six peer-reviewed conference papers (including at AAAI, ECAI, and AAMAS), a best-paper award (TARK), and two journal publications (including in AIJ). James obtained his PhD in Computer Science at the University of Oxford.