This was an interesting (and exhaustive) survey on what automation and AI might mean for the future of human agency. Some of the verbatims were quite insightful.
This passage from Micah Altman of MIT’s Center for Research in Equitable and Open Scholarship really resonated with me (emphasis mine):
Decisions affecting our lives are increasingly governed by opaque algorithms, from the temperature of our office buildings to what interest rate we’re charged for a loan to whether we are offered bail after an arrest. More specifically, complex, opaque, dynamic and commercially developed algorithms are increasingly replacing complex, obscure, static and bureaucratically authored rules.
Over the next decade and a half, this trend is likely to accelerate. Most of the important decisions affecting us in the commercial and government sphere will be ‘made’ by automated evaluation processes. For the most high-profile decisions, people may continue to be ‘in the loop,’ or even have final authority. Nevertheless, most of the information that these human decision-makers will have access to will be based on automated analyses and summary scores – leaving little for nominal decision-makers to do but flag the most obvious anomalies or add some additional noise into the system.
This outcome is not all bad. Despite many automated decisions being outside of both our practical and legal (if nominal) control, there are often advantages to a shift toward out-of-control automaticity. Algorithmic decisions often make mistakes, embed questionable policy assumptions, inherit bias, are gameable, and sometimes result in decisions that seem (and for practical purposes, are) capricious. But this is nothing new – other complex human decision systems behave this way as well, and algorithmic decisions often do better, at least in the ways we can most readily measure. Further, automated systems can, in theory, be instrumented, rerun, traced, verified, audited and even prompted to explain themselves – all at a level of detail, frequency and interactivity that would be practically impossible to achieve with human decision systems. This affordance creates the potential for a substantial degree of meaningful control.