The Robot Overlords Are Coming... Or Are They?
AI's agency, or its ability to act independently, poses a significant regulatory problem. Current frameworks struggle to address AI's decision-making capabilities.
Artificial Intelligence (AI), once the stuff of science fiction, now strides confidently across the stage of reality, presenting possibilities as wondrous as they are worrisome. As machines begin to perform tasks that were once the exclusive domain of humans, we find ourselves standing at the crossroads of opportunity and challenge. While AI can enhance everything from decision-making to everyday tasks, its most disruptive potential lies in its agency: the capacity to act autonomously. And therein lies the rub. This very autonomy is the crux of the problem when it comes to regulating AI.
As the European Commission's High-Level Expert Group on Artificial Intelligence observed in 2019, AI is not just an isolated device or a single piece of technology. Rather, it is a sophisticated system, one built upon the intertwined threads of information processing, learning, and reasoning. But this seemingly innocuous observation belies a deeper philosophical dilemma: what happens when machines, rather than merely responding to inputs, begin to make decisions with agency? And more importantly, how do we regulate them?