I am a PhD candidate at the ILLC, University of Amsterdam, advised by Katrin Schulz and Willem Zuidema. You can also find me on the EleutherAI Discord, where I do interpretability research. Previously, I was an active participant in the BigScience initiative as part of the evaluation working group.

My research broadly focuses on understanding why and how language models exhibit social biases. I study how we can reliably measure bias in NLP and try to ground the discussion of bias in a broader societal context. I am particularly interested in using interpretability tools to answer questions like "How can we reliably measure and mitigate bias?" and "How do LMs learn these biases in the first place?"

As part of the Bias Barometer research group, I also aim to develop NLP techniques for investigating (hidden) biases in Dutch digital media, as part of a broader multidisciplinary effort to understand the media's role in shaping public perception of politics, events, and social groups.

  • Research interests: Language Models, AI Ethics, Social Biases, Interpretability