Import AI 447: The AGI economy; testing AIs with generated games; and agent ecologies
Welcome to Import AI, a newsletter about AI research. Import AI runs on arXiv and feedback from readers. If you’d like to support this, please subscribe.
The AGI economy - most labor goes to the machines, and humans shift to verification:
…What grappling with the singularity seriously looks like…
Researchers with MIT, WashU, and UCLA have written a fun paper called "Some Simple Economics of AGI" which wrestles with what happens when machines can do the vast majority of tasks in the economy. The conclusion is that our ability as humans to control and benefit from this vast machine-driven economy will rely on allocating our attention toward monitoring and verifying the actions of our myriad AI agents, and indulging in artisanal tasks where the value comes from the human-derived aspect more than any particular capability.
What is AGI in an economic sense? “We model the AGI transition as the collision of two racing cost curves: an exponentially decaying Cost to Automate and a biologically bottlenecked Cost to Verify,” the authors write. “In an economy where autonomous agents act with broad agency rather than narrow instructions, the binding constraint on growth is no longer intelligence. It is human verification bandwidth: the scarce capacity to validate outcomes, audit behavior, and underwrite meaning and responsibility when execution is abundant… We are moving from an era where our worth was defined by our capacity to build and discover, to an era where our survival depends on our capacity to steer, understand, and stand behind the meaning of what is created.”
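The "two racing cost curves" framing can be made concrete with a toy calculation. The functional forms and parameter values below are illustrative assumptions, not figures from the paper: it sketches an exponentially decaying Cost to Automate against a roughly flat Cost to Verify, and solves for the crossover point where automation becomes cheaper than human verification.

```python
import math

# Toy model of the paper's framing (all names and numbers here are
# assumptions for illustration, not values from the paper).

def cost_to_automate(t, c0=100.0, decay=0.5):
    """Cost to Automate a task decays exponentially over time t (years)."""
    return c0 * math.exp(-decay * t)

def cost_to_verify(t, v=5.0):
    """Cost to Verify is biologically bottlenecked, so treat it as flat."""
    return v

def crossover_year(c0=100.0, decay=0.5, v=5.0):
    """Solve c0 * exp(-decay * t) = v for t, i.e. t = ln(c0 / v) / decay,
    the point past which automating is cheaper than verifying."""
    return math.log(c0 / v) / decay

print(round(crossover_year(), 2))  # prints 5.99 with these toy numbers
```

Past the crossover, execution is cheap and abundant while verification stays expensive, which is exactly why the authors argue human verification bandwidth becomes the binding constraint on growth.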
The risks of a mostly no-human economy and the "Hollow Economy": As the number of AI agents proliferates, we will necessarily delegate more and more labor to machines. One of the key risks of this is what the authors call a "Trojan Horse" externality: "measured activity rises, but hidden debt accumulates in the gap between visible metrics and actual human intent".
The Hollow Economy: “Agents consume real resources to produce output that satisfies measurable proxies while violating unmeasured intent. As this hidden debt accumulates, it drives the system toward a Hollow Economy of high nominal output but collapsing realized utility—a regime where agents generate counterfeit utility,” they write.
Verification as the solution: To avoid this risk, we are going to need to invest in systems of verifying that AI agents are doing what we want them to do and ...