Publications
While our publications are all listed here, they are easier to browse on our research page.
Why some people disagree with the CAIS statement on AI
Previous research from Rethink Priorities found that a majority of the population agreed with the Center for AI Safety (CAIS) statement that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” This research piece explores why 26% of the population disagreed with the statement.
AI Safety Bounties
This report examines AI safety bounties: programs in which members of the public or approved security researchers receive rewards for identifying issues in powerful ML systems, analogous to bug bounties in cybersecurity.
US public perception of CAIS statement and the risk of extinction
On June 2-3, 2023, Rethink Priorities conducted an online poll of US adults to assess their views regarding a recent open statement from the Center for AI Safety (CAIS). The statement read: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
US public opinion of AI policy and risk
This nationally representative survey of US public opinion on AI aimed to replicate and extend other recent polls. The findings suggest that people are cautious about AI and favor federal regulation, though they perceive other risks (e.g., nuclear war) as more likely to cause human extinction.
Prospects for AI safety agreements between countries
In this report, Associate Researcher Oliver Guest investigates the idea of bringing about international agreements to coordinate on safe AI development (“international safety agreements”), evaluates the tractability of such agreements, and suggests the best means of pursuing them.
Survey on intermediate goals in AI governance
As one effort to increase strategic clarity, the AI Governance and Strategy team sent a survey to 229 people they had reason to believe were knowledgeable about longtermist AI governance.
Conclusion and Bibliography for “Understanding the diffusion of large language models”
This is the ninth and final post in the “Understanding the diffusion of large language models” sequence, which presented key findings from case studies on the diffusion of eight language models that are similar to GPT-3. This post provides a conclusion, highlighting key findings from the research, along with a bibliography.
Questions for further investigation of AI diffusion
This is the eighth post in the “Understanding the diffusion of large language models” sequence. In this post, Ben Cottier lists questions about AI diffusion that he thinks would be worthy of more research at the time of writing.
Implications of large language model diffusion for AI governance
This is the seventh post in the “Understanding the diffusion of large language models” sequence. While the sequence is primarily descriptive, this post explores how to beneficially shape AI diffusion, and what the project’s findings mean for the governance of transformative AI (TAI).
Publication decisions for large language models, and their impacts
This is the sixth post in the “Understanding the diffusion of large language models” sequence. In this piece, the researcher provides an overview of the information and artifacts that have been published for the GPT-3-like models studied in this project, estimates some of the impacts of these publication decisions, assesses the rationales for these decisions, and makes predictions about how decisions and norms will change in the future.
Drivers of large language model diffusion: incremental research, publicity, and cascades
This is the fifth post in the “Understanding the diffusion of large language models” sequence. This piece describes the most important factors for GPT-3-like model diffusion.
The replication and emulation of GPT-3
This is the fourth post in the “Understanding the diffusion of large language models” sequence. This piece explores what was required for various actors to produce a GPT-3-like model from scratch, and when various GPT-3-like models were developed. It includes a timeline of selected GPT-3-like models (and attempts at producing them) since GPT-3’s release, noting each model’s significance.
GPT-3-like models are now much easier to access and deploy than to develop
This is the third post in the “Understanding the diffusion of large language models” sequence. This piece describes some GPT-3-like models that are widely available for download and what resources are required to actually use them.
Background for “Understanding the diffusion of large language models”
This is the second post in the “Understanding the diffusion of large language models” sequence. This piece provides background, including definitions of relevant terms, the inputs to AI development, the relevance of AI diffusion, and other information to contextualize the remainder of the sequence.
Understanding the diffusion of large language models: summary
How might transformative AI technology (or the means of producing it) spread among companies, states, institutions, and even individuals? What might the impact of that be, and how can we minimize risks in light of that?
This is the first post in the “Understanding the diffusion of large language models” sequence, which introduces and summarizes the research project.