
Import AI 421: Kimi K2 - a great Chinese open weight model; giving AI systems rights and what it means; and how to pause AI progress

Welcome to Import AI, a newsletter about AI research. Import AI runs on lattes, ramen, and feedback from readers. If you’d like to support this, please subscribe.

Want to stop or slow AI progress? Here's what you need:
…MIRI enumerates the option space…
Researchers with MIRI have written a paper on the technical tools it'd take to slow or stop AI progress. For those not familiar with MIRI, the organization's leaders will shortly publish a book called "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All", so that should tell you where they're coming from as an organization. Though people have a range of views on this, I think it's very helpful to dispassionately look at what would be required to achieve a goal like this, which is what the researchers do here.

So, you want to stop AI progress? Here's how: these are the different categories you'd need to do work in, and some of the capabilities you'd need:

  • Chip location: Track shipments via manufacturers and distributors; include hardware-enabled location tracking; centralize compute in a small number of secure and registered datacenters; inspect datacenters containing non-trivial amounts of compute; continuously monitor these datacenters.

  • Chip manufacturing: Monitor for the construction of new fabs; restrict/control the equipment and materials; spy on and inspect fabs to ensure they're producing in line with policy restrictions; be able to verifiably deactivate fabs; be able to validate that fabs can make on-chip hardware-enabled governance mechanisms.

  • Compute/AI monitoring: Create 'if then' measures to let you change your governance of a system depending on its capabilities (see the sketch after this list); figure out compute thresholds for different governance regimes; keep track of consumer compute sales as well; build 'inference-only' hardware that can't be used for AI training.

  • Non-compute monitoring: Tell companies to report specific capabilities; ensure you can do third-party and/or government evaluations of AI models; be able to inspect what happens at AI labs or other AI developers; place people inside AI organizations to audit them; create 'automated auditors'; use espionage to figure out what is going on in the private sector; protect whistleblowers who work at AI organizations.

  • Avoiding proliferation: Ensure model weights are hard to steal; ensure algorithmic secrets are hard to steal; mandate 'structured access' (e.g., API-mediated) to AI systems; limit the release of open weight models with powerful capabilities; tie models to hardware so they can only run on certain compute substrates (see the second sketch after this list).

  • ...
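
To make the 'if then' idea a bit more concrete, here's a minimal sketch (mine, not the paper's) of what a compute-threshold trigger could look like in code. The thresholds, tier names, and report fields are placeholder illustrations rather than anything the paper proposes:

```python
# A toy 'if then' governance check: if a training run's compute (or a flagged
# capability) exceeds a threshold, then stricter requirements apply.
# All thresholds, tier names, and report fields below are hypothetical.

from dataclasses import dataclass

# Hypothetical governance tiers keyed to total training compute (FLOP).
# Real thresholds would be set by a regulator; these numbers are placeholders.
GOVERNANCE_TIERS = [
    (1e26, "tier-3: licensing, third-party evaluation, continuous monitoring"),
    (1e24, "tier-2: mandatory capability reporting and incident disclosure"),
    (0.0,  "tier-1: no additional requirements"),
]

@dataclass
class TrainingRunReport:
    """A developer-submitted declaration for a single training run."""
    model_name: str
    total_training_flop: float
    evaluated_capabilities: dict[str, bool]  # e.g. {"autonomous_replication": False}

def governance_tier(report: TrainingRunReport) -> str:
    """Map a training run onto a governance tier with an if-then rule."""
    # Capability triggers escalate a run regardless of how much compute it used.
    if any(report.evaluated_capabilities.values()):
        return GOVERNANCE_TIERS[0][1]
    for flop_threshold, tier in GOVERNANCE_TIERS:
        if report.total_training_flop >= flop_threshold:
            return tier
    return GOVERNANCE_TIERS[-1][1]

if __name__ == "__main__":
    run = TrainingRunReport(
        model_name="example-model",
        total_training_flop=3e25,
        evaluated_capabilities={"autonomous_replication": False},
    )
    print(governance_tier(run))  # -> tier-2 under these placeholder thresholds
```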
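And here's a similarly hand-wavy sketch of what 'tie models to hardware' could look like: a weight loader that refuses to serve a model unless the chip it's running on presents a valid attestation. The HMAC-over-a-device-ID scheme and the device registry are toy stand-ins for real on-chip, hardware-enabled attestation, invented here purely for illustration:

```python
# A toy version of binding model weights to approved hardware: the loader only
# releases weights to devices on an approved registry that can prove their
# identity. Real mechanisms would use secure enclaves / on-chip attestation,
# not this simplified HMAC check; all names and keys below are placeholders.

import hmac
import hashlib

# Hypothetical registry of approved accelerators and their provisioning keys,
# which a regulator or vendor would maintain.
APPROVED_DEVICES = {
    "accelerator-0001": b"per-device-secret-key-0001",
    "accelerator-0002": b"per-device-secret-key-0002",
}

def device_attestation(device_id: str, device_key: bytes, nonce: bytes) -> bytes:
    """What the chip would produce: a MAC over its identity and a fresh nonce."""
    return hmac.new(device_key, device_id.encode() + nonce, hashlib.sha256).digest()

def may_load_weights(device_id: str, attestation: bytes, nonce: bytes) -> bool:
    """What the weight loader would check before decrypting or serving a model:
    the device must be on the approved list and present a valid attestation."""
    key = APPROVED_DEVICES.get(device_id)
    if key is None:
        return False
    expected = hmac.new(key, device_id.encode() + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, attestation)

if __name__ == "__main__":
    nonce = b"fresh-challenge-from-loader"
    att = device_attestation("accelerator-0001", b"per-device-secret-key-0001", nonce)
    print(may_load_weights("accelerator-0001", att, nonce))   # True
    print(may_load_weights("unregistered-chip", att, nonce))  # False
```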