AI as Referee, Humans as Conscience
This principle assigns mechanistic rule-enforcement to AI while reserving normative judgment for people. In practice, AI handles logistics, measurement, and enforcement (like Hawk-Eye in tennis or Wikipedia's bots), while humans decide whether rules are fair, meaningful, or in need of revision. The model scales governance without surrendering moral authority to software.
In Unscarcity's MOSAIC, this division keeps automated coordination from sliding into technocracy. AI can flag transparency violations or resource shortfalls; Commons deliberations still determine cultural priorities, aesthetics, and the boundaries of acceptable risk. The division also provides auditability: every automated action remains reversible under human oversight, preventing lock-in and silent drift.
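The reversibility guarantee can be sketched as an audit log in which automated enforcement is always recorded and any human can undo it. This is an illustrative sketch, not an implementation from the book; all names (`AuditLog`, `enforce_rule`, the transparency check) are invented for the example:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

@dataclass
class Action:
    actor: str          # "bot" or a human username
    description: str
    reverted: bool = False

@dataclass
class AuditLog:
    entries: List[Action] = field(default_factory=list)

    def record(self, action: Action) -> Action:
        self.entries.append(action)
        return action

    def revert(self, action: Action, by: str) -> None:
        # Any human can undo an automated action; the undo is itself logged,
        # so oversight leaves its own audit trail.
        action.reverted = True
        self.entries.append(Action(actor=by, description=f"revert: {action.description}"))

def enforce_rule(log: AuditLog, check: Callable[[Dict], bool], item: Dict) -> Optional[Action]:
    # The AI referee applies a mechanical check; it never decides
    # whether the rule itself is just -- that stays with humans.
    if not check(item):
        return log.record(Action(actor="bot", description=f"flagged {item['id']}"))
    return None

# Usage: a transparency rule enforced by the bot, then overruled by a human.
log = AuditLog()
transparency = lambda item: item.get("disclosed", False)
flag = enforce_rule(log, transparency, {"id": "proposal-7", "disclosed": False})
log.revert(flag, by="alice")  # the human conscience overrules the referee
```

The design point is that the bot can only append to the log; reversal authority is reserved for named humans, which is what keeps the enforcement layer auditable rather than sovereign.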
The idea parallels socio-technical scholarship on "human-in-the-loop" systems and draws on Wikipedia's track record: bots perform 10-16% of edits, yet any human can revert them. It rejects both full AI rule and purely human bureaucracy, aiming instead for a resilient partnership.
References
- UnscarcityBook, chapter 2
- Geoffrey C. Bowker & Susan Leigh Star, "Sorting Things Out: Classification and Its Consequences" (1999)
- Wikipedia: "Wikipedia bots" (accessed 2024)