Contemporary discussion about automated computer systems is feeding into a moral panic in which technology figures as both culprit and savior. As various (and usually United States-based) news stories and popular discourse recount, systems powered by bad data, bad algorithmic models, or both lead to ‘high-tech’ discrimination – misclassifications, over-targeting, disqualifications, and flawed predictions that affect some groups, such as historically marginalized ones, more than others. To remedy this problem, many argue, the introduction of fair, accountable, and transparent machine learning will thwart biased, racist, or sexist automated systems – or so the story goes.


But what computer scientists, engineers, and industry evangelists of fair machine learning get wrong is the sufficiency of technical tweaks to prevent or avoid discriminatory outcomes. This weakness stems not only from the fact that fairness, the counterpart to discrimination, means many different things depending on one’s normative understanding of equality. It also derives from the fact that these competing frameworks marshal different resources and remedies that variously involve laws, institutional policies, and procedures, as well as require cultural transformation to shift people’s behaviors, norms, and practices towards individuals and groups that differ from the status quo. Moreover, as Young (1990) explains, discrimination is tied to larger processes of oppression, which leave socially different groups susceptible to violence, marginalization, exploitation, cultural imperialism, and powerlessness.

Our aims

In this article, we grapple with the insufficiency of a techno-centric focus on data and discrimination by decentering debates on algorithmic bias and data injustices and connecting them to ongoing and often entrenched debates about traditional discrimination and injustice, which is not technologically mediated. This reflexive turn requires acknowledgment not only of the growing threats of surveillance capitalism (Zuboff, 2019), but also of other social institutions and practices that have contributed to differential treatment of social groups.

To accomplish this aim, we briefly review the ‘techno-centricity’ of fairness, accountability, and transparency studies, as well as data justice studies, which adopt a more sociotechnical approach but which nonetheless privilege technology. We then develop a normative ‘decentered’ framework that relies on Fraser’s (2010) recent theory of social justice. We use this framework to analyze how European civil society groups make sense of data and discrimination. Attending to ideas of maldistribution, misrecognition, and misrepresentation, our thematic analysis of interviews with 30 civil society representatives in Europe’s human rights sector shows how many groups prioritize the specific experiences of marginalized groups and ‘see through’ technology, acknowledging its connection to larger systems of institutionalized oppression. This decentered approach contrasts with the process-oriented perspective of tech-savvy civil society groups that shy away from an analysis of systematic forms of injustice. We conclude by arguing for a plurality of approaches that challenges both discriminatory processes (technological or otherwise) and discriminatory outcomes and that reflects the interconnected nature of injustice today.

Technologically mediated discrimination

To appreciate the relevance of Fraser’s theory of justice, it is helpful to understand differences in how technology has been centered in discussion about discrimination. A comparison between the emergent fields of fairness, accountability, and transparency in machine learning, on the one hand, and data justice, on the other, also reveals how marginalization or systems of oppression do – and do not – feature alongside discussions of technology.

Fairness, accountability, transparency, and data justice in automated systems

A highly influential field focuses on engineering and technical choices to deal with problematic automated systems that risk harming specific groups. This field, known as fairness, accountability, and transparency studies, concentrates on various ethical dilemmas related to automated computer systems (Barocas, 2015).
