A strategic simulation of AI futures
Artificial Intelligence (AI) is expected to be one of the most transformative technologies in human history.
The implications of this are difficult to fully grasp: AI has the potential to radically transform society, in ways that are hard to anticipate. In many domains AI technologies are expected to bring significant benefits, but there are also reasons to think that powerful AI systems introduce the risk of unprecedented harm to society.
Intelligence Rising is a training workshop that lets present and future decision-makers experience the tensions and risks that can emerge in the highly competitive environment of AI development through an educational roleplay game. Learn to expect the unexpected.
Why Play Intelligence Rising?
Due to the scale of outcomes and unpredictability inherent in AI technologies, it is crucial that decisions relating to the development, deployment and governance of transformative artificial intelligence be guided by foresight, responsible decision-making, and a deep understanding of the potential impacts these decisions will have on the future.
Intelligence Rising is about leaning into change: understanding that challenges lie ahead, anticipating what they might be, and identifying potential opportunities. It provides orientation in a rapidly changing domain marked by tremendous uncertainty. Participants actively inhabit powerful actors, see how those actors think about the world, and start building mental models for how the future might play out.
Inspired by the practice of strategic wargames, and informed by the latest research and guidance of AI risk experts, Intelligence Rising exposes participants to dynamics and decision junctures representative of possible futures of AI tech and policy development. Participants discover, explore and internalise opportunities, while facing safety, ethical and security concerns related to AI.
Understanding how dynamics between great powers evolve, and how frontier AI labs influence governments and vice versa, will allow your team to make considered decisions when planning for the next 2-5 years and when responding to surprises. As regulations emerge, teams trained through Intelligence Rising gain a competitive advantage by being aware of expected changes ahead of time.
In 2019 Intelligence Rising games were already featuring the rise in market power and political influence of AI-focused tech giants, the importance of AI hardware and cybersecurity, and international agreements on AI safety, security and benefit sharing. Today these are the focus of government policies, international fora and leading think tank reports.
50%
chance of high-level machine intelligence by 2047, 13 years sooner than the estimate from the 2022 survey.
38%
of top AI researchers put at least a 10% chance on extremely bad outcomes from high-level artificial intelligence.
The figures above are aggregate forecasts from 2,778 researchers who had published in top-tier artificial intelligence venues. You can read the study here.
The explanation given to researchers was:
“High-level machine intelligence (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers.”
The researchers instructed respondents to “ignore aspects of tasks for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption.” They asked for predictions, “assuming human scientific activity continues without major negative disruption.”
Learning objectives
Unprecedented risks
The development of AI has huge potential, and with it, a huge capacity for misuse and for accidental and systemic harm. Within a decade, the technology may advance enough to pose catastrophic threats to entire communities, from the destabilisation of local governance up to an existential risk to global society. In this respect, responsible AI development requires levels of coordination and investment previously seen in only a handful of fields, such as nuclear energy – except that AI’s development and adoption is faster, involves many more actors, and has consequences harder to predict than anything that came before.
As you play your role, you will be exposed to a variety of unpredictable outcomes arising from your actions and the actions of others. You can choose to divert attention and resources to the development of safety technologies and cross-actor cooperation, or face the consequences of failing to do so.
Mitigating the dangers of competition
Powerful actors can clearly see the potential competitive advantages of AI, and therefore tend to perceive its development as a race. This incentive actively encourages neglect of, or wilful ignorance about, the threats these technologies pose, even though most of those threats require cooperation to address.
Applying lessons from wargames, we invite you and the other players to act against or with each other and experience the consequences for yourselves, while learning about the opportunities and challenges inherent in large-scale coordination.
Destabilisation
Eventually, AI research and development might bring about radically transformative technologies, with far-reaching effects on global society. There is no need to wait, however: existing AI technology has already had a dramatic impact, becoming a destabilising force in many aspects of modern life, from changing employment patterns and disempowerment to copyright disputes and the spread of misinformation.
During the workshop you will learn about many of these destabilising factors, how to anticipate them, and how they might be handled. You will have to face the consequences of emerging technologies, whether created by you or by another actor.
What to expect
In Intelligence Rising, players embody characters such as elected officials and their AI advisors in major states, and CEOs and their executive teams at leading technology firms.
Like a wargame, Intelligence Rising is designed to be both generative and structured. Adversarial interactions between teams with distinct, conflicting objectives, set in a decade-long scenario where AI technologies can advance rapidly along academically informed pathways, create possible futures that non-experts are unlikely to have imagined outside this exercise.
Sessions are moderated by trained, knowledgeable facilitators, who bring extensive AI strategy backgrounds and in-depth knowledge of the AI landscape to provide insightful and realistic feedback on player decisions. Participants’ strategic abilities are stretched and challenged as the facilitators introduce a rapidly changing world.
About us
Intelligence Rising was designed by researchers at the University of Cambridge, the University of Oxford and Wichita State University, alongside non-academic collaborators.
→ Learn more about the team, our story, and why we have created Intelligence Rising.
Our facilitators
-
Shahar Avin
Senior Research Associate
Institute for Technology and Humanity at the University of Cambridge
-
Ross Gruetzemacher
Assistant Professor of Business Analytics
Wichita State University
-
James Fox
Research Director
London Initiative for Safe AI
-
Anine Andresen
Chief of Staff to senior AI policy practitioner
PECC/OECD/GPAI/UNESCO
-
Auriane Técourt
European Tech Policy Fellow
-
Vaniver Gray
Staff Writer
Machine Intelligence Research Institute
What Our Clients Say
84% of participants would recommend Intelligence Rising after playing.
This is based on an evaluation of Intelligence Rising conducted by the University of Cambridge in the period 2020-2023, based on 50 survey responses from 14 different games. Results from the evaluation are to be published in an academic paper.
We are proud to have facilitated Intelligence Rising for teams in government, AI labs, industry, academia, NGOs, think tanks, training programs, student groups and the general public.
“It prompted me to more actively put myself in the shoes of other actors as I consider AI strategy questions.”
(Manager at AI company, UK)
“It’s about the feeling of unexpectedness that will stick with me.”
(AI PhD candidate, UK)
“I got an insight into the kinds of decisions, points of leverage, and dynamics that make up the world stage with respect to AI development and deployment.”
(AI researcher, USA)
Get in touch
For questions or quotes, please fill out this form or email us at team@intelligencerising.org
Are you interested in playing individually?
Please fill out this form and we will let you know if there is an opportunity to join a game.