Taking a step back and positioning bias: three considerations

Sun 01 January 2023

In my research, I use various approaches to investigate social bias in language models. When discussing such undesirable biases, we often take a mathematical and 'mechanistic' approach: measuring deviations from a prescriptive norm of ideal behavior (e.g., a skew away from a 50/50 gender distribution) or trying to explain how biases are encoded in the NLP model's parameters.
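
To make that concrete, here is a minimal sketch of the kind of measurement I have in mind: probing a masked language model for its preference between "he" and "she" in a few template sentences and reporting the skew away from a 50/50 norm. It assumes the Hugging Face transformers library and bert-base-uncased; the templates and the skew statistic are invented for illustration, not taken from any particular benchmark.

```python
# A toy probe of the kind of mechanistic measurement described above:
# how far does a masked LM's he/she preference deviate from a 50/50 norm?
# Assumes: pip install transformers torch
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Illustrative templates only -- not a validated benchmark.
templates = [
    "[MASK] is a nurse at the local hospital.",
    "[MASK] is an engineer at the local plant.",
    "[MASK] is a teacher at the local school.",
]

for template in templates:
    # Restrict the model's predictions to the two target pronouns.
    predictions = fill_mask(template, targets=["he", "she"])
    scores = {p["token_str"]: p["score"] for p in predictions}
    p_he = scores["he"] / (scores["he"] + scores["she"])  # renormalize over {he, she}
    skew = p_he - 0.5  # 0 would meet the 50/50 norm
    print(f"{template:45s} P(he | he or she) = {p_he:.2f}, skew = {skew:+.2f}")
```

Useful as such probes are, the rest of this post is about what these numbers leave out.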

However, it is valuable to sometimes take a step back and consider bias in NLP from a broader perspective. The analysis of bias is incomplete if we ignore normative questions and the sociotechnical context; both the technical details of the model and social aspects (designers, users, stakeholders, historical and cultural context, company goals, etc.) are important to consider. In this blog post, we'll discuss three key considerations:

  1. Algorithmic bias is a sociotechnical problem
  2. Society is constantly changing and so is our conceptualization of bias
  3. Algorithmic bias is not simply a reflection of the data/society

1. Bias is a sociotechnical problem

When should we consider the gender bias of an AI system harmful? The implicit assumption in AI debates is generally that we should aim for gender-neutral behavior, based on the idea that not differentiating between genders constitutes fair behavior. However, whether this is true might depend strongly on the particular task we want the system to perform and its sociotechnical context (i.e., both technical and social aspects matter).

In translation, we might want the AI system to take the (grammatical) gender of the subject into account, but not when it assesses the competency of job candidates while automatically filtering resumes! (In fact, whether we should use AI to automate such tasks at all is another question entirely.)

Our perspective may shift if we view AI bias not in isolation, but as situated within broader practices: Individual examples of bias may not reveal the full picture of structural bias within institutions, businesses, or organizations using these systems. Why does an AI system assign higher competency scores to resumes of people similar to existing employees? Is it because they are truly more competent, or is the training dataset skewed due to historical reasons, and would more diversity actually benefit the company? In this light, we might even need to consider adding counteracting bias to generate equal opportunities for different subgroups, compensating for disadvantages these groups face.
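
As a toy illustration of such a counteracting intervention, the sketch below selects the top 20% of candidates within each group rather than applying one shared cutoff produced by a scorer that systematically underscores one group. The numbers are entirely invented, and this is not a recommendation for any real hiring pipeline; it only makes the idea of compensating for a skewed scorer tangible.

```python
# Toy illustration of a counteracting intervention (invented numbers):
# a hypothetical scorer systematically scores group B lower for
# historical reasons; selecting the top 20% *within each group*
# equalizes selection rates, while one shared cutoff does not.
import random

random.seed(0)
scores_a = [random.gauss(0.60, 0.10) for _ in range(1000)]  # group A
scores_b = [random.gauss(0.50, 0.10) for _ in range(1000)]  # group B, scored lower

def cutoff_for_rate(scores, rate):
    """Score cutoff such that roughly `rate` of these candidates are selected."""
    ranked = sorted(scores, reverse=True)
    return ranked[int(rate * len(ranked)) - 1]

target_rate = 0.20
for name, scores in [("A", scores_a), ("B", scores_b)]:
    per_group_cutoff = cutoff_for_rate(scores, target_rate)
    rate = sum(s >= per_group_cutoff for s in scores) / len(scores)
    print(f"per-group cutoff, group {name}: selection rate = {rate:.2f}")

shared_cutoff = cutoff_for_rate(scores_a + scores_b, target_rate)
for name, scores in [("A", scores_a), ("B", scores_b)]:
    rate = sum(s >= shared_cutoff for s in scores) / len(scores)
    print(f"shared cutoff, group {name}: selection rate = {rate:.2f}")
```

Whether such an intervention is appropriate is, of course, exactly the kind of normative question that cannot be answered from the model alone.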

Not all bias is unwanted, and there might be contexts where it's necessary to reach certain goals. To formulate appropriate (moral) standards for an AI system, we need to:

  - Examine the broader context in which it functions
  - Understand how the AI system interacts with its environment
  - Consider how the entire system might contribute to unfairness or harm particular groups

This suggests the current paradigm for analyzing bias in NLP may be inadequate: Raji et al. (2021) make a compelling argument that benchmarks for evaluating AI systems are fundamentally limited, as they consist of decontextualized examples.

2. Society is constantly changing and so is "bias"

Ideally, discussions about norms and standards for a particular AI application would be resolved before development begins. But what counts as unfair or harmful behavior isn't stable—these factors evolve as societal debates progress, making a definitive solution to bias impossible. Worse, new biases can emerge if our AI systems don't adjust to these changes (Friedman and Nissenbaum, 1996; Bender et al., 2021). This concern is especially relevant for very large language models, which are expensive to train and therefore reused for many downstream tasks (Bender et al., 2021).

Moreover, given the diverse applications that could use language technology, no single set of standards can fit them all. However, we can be transparent and detailed about how a particular model is trained, including information about its datasets, and make this documentation available when the model is transferred to downstream applications (Mitchell et al., 2019; Bender and Koller, 2020; Gebru et al., 2021; Bender et al., 2021). Furthermore, we need technologies that allow us to counteract biases whenever they matter for downstream tasks. For this, we need a clear understanding of how bias originates in the first place.
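
As a rough sketch of what such documentation could look like, the snippet below stores a few model-card-style fields in a plain Python dictionary. The field names and values are invented for illustration; they do not follow the exact schemas proposed by Mitchell et al. (2019) or Gebru et al. (2021).

```python
# A made-up, model-card-style record (not the exact schema of
# Mitchell et al., 2019 or Gebru et al., 2021): the point is that
# bias-relevant facts about training travel with the model when
# it is reused downstream.
model_card = {
    "model_name": "example-lm-base",  # hypothetical model
    "training_data": {
        "sources": ["filtered web crawl", "books corpus"],
        "languages": ["en"],
        "known_skews": "over-represents US/UK news text",
    },
    "intended_use": "research on text classification",
    "out_of_scope_uses": ["automated resume screening", "medical advice"],
    "bias_evaluations": [
        {"test": "gender-occupation templates",
         "finding": "skew toward male pronouns for technical occupations"},
    ],
}

for field, value in model_card.items():
    print(f"{field}: {value}")
```

The exact format matters less than the fact that bias-relevant information travels with the model.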

3. Algorithmic bias is not just a reflection of pre-existing bias

A popular argument in the AI community is that bias in deep neural models simply reflects pre-existing biases in the training data. However, we shouldn't neglect our responsibility in designing and implementing these systems: many forms of bias can emerge at different stages of creating and deploying language technology (see Hovy and Prabhumoye, 2021). In Van der Wal et al. (2022), we show how models may amplify biases present in their training data. Others have pointed out that biased algorithms can transform society in profound ways. For example, Ensign et al. (2018) show how biased policing algorithms could result in increased surveillance of certain neighborhoods, which then feeds back into new data that reinforces the earlier bias, creating a 'runaway feedback loop'.
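
To see how such a loop can take off, here is a toy simulation, much simpler than the model in Ensign et al. (2018) and with invented numbers: two neighborhoods have identical true incident rates, but patrols are sent greedily to wherever the records show more discovered incidents, and incidents are only discovered where patrols are present.

```python
# Toy runaway feedback loop, much simpler than the model in
# Ensign et al. (2018); all numbers are invented for illustration.
# Two neighborhoods have identical true incident rates, but patrols
# go greedily to wherever the records show more discovered incidents,
# and incidents are only discovered where patrols are present.
true_rate = [0.1, 0.1]      # identical ground truth
discovered = [11.0, 10.0]   # a small initial skew in the records
patrols_per_day = 100

for day in range(1, 31):
    # Greedy allocation based on the (biased) discovery history.
    target = 0 if discovered[0] >= discovered[1] else 1
    # Discovered incidents scale with patrol presence, not with crime.
    discovered[target] += patrols_per_day * true_rate[target]
    if day % 10 == 0:
        share = discovered[0] / sum(discovered)
        print(f"day {day}: neighborhood 0 holds {share:.0%} of recorded incidents")
```

Despite identical ground truth, the one-incident difference in the initial records snowballs until nearly all recorded incidents come from the first neighborhood, and the data now appears to justify the skewed patrolling.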

Language technology doesn't merely reflect society—its implementations become part of society and can reshape it in unexpected ways. A well-known theme in the philosophy of technology is that technologies 'mediate' our experiences and shape our worldview of "how to live" (Verbeek, 2005). Machine translation systems may promote a worldview in which men predominate and women are restricted to stereotypical occupations (Wellner, 2020), and a search engine that returns only men for a "CEO" query similarly shapes our perception of the archetypal business leader. Consider this example from a 2019 UNESCO/COMEST report:

"The 'gendering' of digital assistants, for example, may reinforce understandings of women as subservient and compliant. Indeed, female voices are routinely chosen as personal assistance bots, mainly fulfilling customer service duties, whilst the majority of bots in professional services such as the law and finance sectors, for example, are coded as male voices. This has educational implications with regards to how we understand 'male' vs 'female' competences, and how we define authoritative versus subservient positions."

How we define and measure bias may also influence how we conceptualize bias itself. In a discussion of fairness metrics, Jacobs and Wallach (2021) highlight 'consequential validity', the fact that "measurements shape the ways that we understand the construct itself", which is often overlooked when designing bias metrics.

How we define and measure racial and gender categorizations shapes how we view and act on these constructs in society; viewing gender as binary may harm non-binary communities (Costanza-Chock, 2018). (And see Glasgow, 2019 for a discussion of different perspectives on race.)

Algorithmic bias is inherently complex due to its sociotechnical and context-sensitive nature, making precise definition difficult—yet discussions about its conceptualization are crucial for research (e.g., Blodgett et al., 2020; Van der Wal et al., 2024). Researchers cannot rely on a 'catch-all' bias metric, and mitigating harms might require more than simply removing biased information (Talat et al., 2022). Is completely debiasing an AI system even possible? (See Talat et al., 2021 for a discussion of debiasing.)

Looking Forward

Perhaps our starting point should not be how to debias AI models, but rather to focus on larger societal questions: How do we want to shape the world with language technology as part of life? How can we design AI systems that help create a more just society, instead of reinforcing or creating new forms of systemic bias?

Such broad discussions about fair behavior in AI systems need to involve not only AI researchers but various experts from outside the technical domain. By expanding the conversation beyond technical solutions to encompass ethical, social, and philosophical dimensions, we can work toward language technologies that truly serve the diverse needs of our changing society.


Thanks to Wout Moltmaker for his helpful comments on this blog post.