Local governments are increasingly adopting technologies that automate city services, which can put the behavior of these “smart city” tools at odds with citizens’ ethical expectations. Researchers from North Carolina State University are advocating for a framework to help policymakers and technology developers align the values embedded in these systems with the ethical standards of the communities they serve.
In a recent study, Veljko Dubljević, a professor of philosophy and the corresponding author of the paper, stated, “Our work here lays out a blueprint for how we can both establish what an AI-driven technology’s values should be and actually program those values into the relevant AI systems.” This research addresses the complexities of smart cities, a term encompassing various technological and administrative practices that have emerged in urban areas over the last few decades.
The technologies in question include automated systems that, for example, dispatch law enforcement when potential gunfire is detected or use sensors to regulate pedestrian and vehicle traffic. Such tools raise significant ethical questions. Dubljević highlights a critical scenario: “If AI technology presumes it detected a gunshot and sends a SWAT team to a location, but the noise was actually something else, is that reasonable?”
As cities increasingly implement these technologies, fundamental questions arise regarding surveillance and tracking. “Who decides to what extent people should be tracked or surveilled by smart city technologies? Which behaviors should trigger escalated surveillance?” Dubljević asks. Currently, there is no standardized procedure for addressing these concerns, nor is there clarity on how to train AI to handle them.
To tackle these challenges, the researchers propose the Agent-Deed-Consequence (ADC) model, which weighs three elements of moral judgment: the agent (the intent of the person performing an action), the deed (the action itself), and the consequence (the outcome of the action). The study illustrates how this model can be used both to understand and to program ethical decision-making in AI systems.
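As a rough illustration, the three elements might be represented in software along the following lines. The class name, the valence scale, and the example scores are assumptions made for this sketch, not notation from the paper.

```python
from dataclasses import dataclass

@dataclass
class ADCJudgment:
    """One situation scored under the Agent-Deed-Consequence model.

    Each element carries a valence from -1 (clearly negative) to +1
    (clearly positive); the scale itself is an illustrative assumption.
    """
    agent: float        # intent of the actor, e.g., rushing a patient to care
    deed: float         # the action itself, e.g., running a red light
    consequence: float  # the outcome, e.g., the patient arrives safely

# A morally mixed case: good intent, a rule-breaking deed, a good outcome.
red_light_case = ADCJudgment(agent=0.9, deed=-0.5, consequence=0.8)
```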
Daniel Shussett, the first author of the paper and a postdoctoral researcher at North Carolina State University, explains, “The ADC model uses deontic logic, a form of imperative logic. It allows us to capture not only what is true but what should be done.” This capability is essential for AI systems to differentiate between legitimate and illegitimate requests.
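A toy example of that distinction, using the gunshot-detection scenario described earlier: facts record what the system believes to be true, while a deontic status captures what should be done about it. The rule and the 0.9 confidence threshold below are illustrative assumptions, not values from the study.

```python
from enum import Enum

class Deontic(Enum):
    """Deontic statuses: what *should* be done, not what *is* true."""
    OBLIGATORY = "must do"
    PERMITTED = "may do"
    FORBIDDEN = "must not do"

# Descriptive layer: facts the system currently believes to be true.
facts = {"loud_noise_detected": True, "gunshot_confidence": 0.55}

# Normative layer: a hypothetical rule mapping facts to a deontic status.
def dispatch_status(facts: dict) -> Deontic:
    if not facts["loud_noise_detected"]:
        return Deontic.FORBIDDEN    # nothing was detected; do not dispatch
    if facts["gunshot_confidence"] >= 0.9:
        return Deontic.OBLIGATORY   # high confidence: dispatch is required
    return Deontic.PERMITTED        # uncertain: defer to a human dispatcher

print(dispatch_status(facts))  # Deontic.PERMITTED
```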
For instance, if an AI system manages traffic and an ambulance with flashing emergency lights approaches a traffic signal, the AI should recognize that the ambulance warrants priority and adjust the traffic signals accordingly. Conversely, if an ordinary vehicle attempts to use flashing lights to bypass traffic, the AI should deny that request as illegitimate.
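Here is one hedged sketch of how that legitimacy check might be coded; the registration lookup is invented for illustration and is not drawn from the paper.

```python
from dataclasses import dataclass

@dataclass
class PriorityRequest:
    lights_flashing: bool
    registered_emergency_vehicle: bool  # hypothetically verified against dispatch records

def grant_priority(request: PriorityRequest) -> bool:
    """Grant signal priority only to legitimate requests.

    Flashing lights are treated as a claim, not proof: an ordinary car
    flashing its lights fails the registration check and is refused.
    """
    return request.lights_flashing and request.registered_emergency_vehicle

ambulance = PriorityRequest(lights_flashing=True, registered_emergency_vehicle=True)
impostor = PriorityRequest(lights_flashing=True, registered_emergency_vehicle=False)

print(grant_priority(ambulance))  # True: signals adjust to clear the route
print(grant_priority(impostor))   # False: the request is denied as illegitimate
```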
Dubljević notes, “With humans, it is possible to explain things in a way that enables learning about appropriate actions. However, with computers, a mathematical formula must represent the reasoning process. The ADC model facilitates the creation of that formula.”
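One plausible shape for such a formula, offered here as an assumption rather than the authors’ actual construction, is a weighted sum of the three ADC valences:

```python
def adc_score(agent: float, deed: float, consequence: float,
              weights: tuple[float, float, float] = (0.3, 0.3, 0.4)) -> float:
    """Hypothetical weighted-sum formalization of an ADC judgment.

    Each input is a valence in [-1, 1]; the weights (summing to 1 here)
    are placeholders that would need calibration against the moral
    judgments of the community in question.
    """
    w_a, w_d, w_c = weights
    return w_a * agent + w_d * deed + w_c * consequence

# The red-light case again: good intent, bad deed, good outcome.
print(adc_score(agent=0.9, deed=-0.5, consequence=0.8))  # about 0.44: net positive
```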
The researchers assert that as smart city technologies are implemented globally, the ADC model offers a viable solution to the ethical dilemmas these innovations present. The next step involves testing various scenarios across multiple technologies in simulations to ensure consistent and predictable results. If successful, this model could be ready for application in real-world environments.
The study, titled “Applying the Agent-Deed-Consequence (ADC) Model to Smart City Ethics,” was published in March 2024 in the open-access journal Algorithms. The research received support from the National Science Foundation under grant number 2043612.
