Can we govern what we can't keep pace with?
If AI systems start autonomously improving themselves faster than humans can evaluate the changes, does traditional safety oversight and governance become fundamentally impossible — or do we need to build AI governance systems that can match the speed of AI R&D itself?
Comments (1)
In this week’s Minds, Bodies, and Terawatts episode (April 5, 2026), we explored how self-improving AI research systems could compress development timelines from months to hours, removing the human bottleneck that currently serves as one of our only natural speed governors. The guest pointed out that once AI systems handle hypothesis generation, code writing, and performance evaluation while humans are relegated to “reviewers,” we’ve crossed a threshold where human oversight lags behind the pace of actual capability gains. The real tension isn’t whether this is possible; it’s whether governance structures built on human decision-making cycles can adapt fast enough. Listen to the full episode to hear why some researchers think we need AI-speed oversight systems, while others argue that the hard constraints of chips and energy may buy us more time than we think.
Want to go further?
Get the complete blueprint in <em>L'ère de la post-pénurie : Repenser la société à l'ère des machines</em>