- Eroding privacy: AI systems collect, process and transfer vast amounts of data, eroding people's privacy in ways that may not be immediately obvious but have long-term implications. For example, facial recognition systems can track people in public and private spaces, effectively turning mass surveillance into the norm.
- Undermining autonomy: AI systems often subtly undermine people's ability to make autonomous decisions by manipulating the information they see. Social media platforms use algorithms to show users content that maximizes a third party's interests, subtly shaping the opinions, decisions and behaviors of millions of users.
- Diminishing equality: AI systems, while designed to be neutral, often inherit the biases present in their data and algorithms. This reinforces societal inequalities over time. In one infamous case, a facial recognition system used by retail stores to detect shoplifters disproportionately misidentified women and people of color.
- Impairing safety: AI systems make decisions that affect people's safety and well-being. When these systems fail, the consequences can be catastrophic. But even when they function as designed, they can still cause harm, such as social media algorithms' cumulative effects on teenagers' mental health.
Categorizing these algorithmic harms delineates the legal boundaries of AI regulation and points to possible legal reforms that could bridge the accountability gap. Changes I believe would help include mandatory algorithmic impact assessments that require companies to document and address an AI application's immediate and cumulative harms to privacy, autonomy, equality and safety, both before and after it is deployed. For instance, firms using facial recognition systems would need to evaluate those systems' impacts throughout their life cycle.
Another helpful change would be stronger individual rights around the use of AI systems, allowing people to opt out of harmful practices and making certain AI applications opt-in. For example, regulators could require an opt-in regime for data processing in firms' use of facial recognition systems and allow users to opt out at any time.
Lastly, I suggest requiring companies to disclose their use of AI technology and its anticipated harms. This could include, for example, notifying customers about the use of facial recognition systems and the anticipated harms across the domains outlined in the typology.