I am a PhD candidate at the ILLC, University of Amsterdam, advised by Katrin Schulz and Willem Zuidema. At the time of writing, I'm doing a research internship at EleutherAI. I have also been an active participant in the BigScience initiative.

My research broadly focuses on understanding why and how language models exhibit social biases. I study how we can reliably measure bias in NLP and try to ground the discussion of bias in a broader societal perspective. I'm particularly interested in using interpretability tools to answer questions like "How can we reliably measure and mitigate bias?" and "How do LMs learn these biases in the first place?"

As part of the Bias Barometer research group, I also aim to develop NLP techniques for investigating (hidden) biases in Dutch digital media, as part of a broader multidisciplinary effort to understand the media's role in shaping public perception of politics, events, and social groups.

  • Research interests: Language Models, AI Ethics, Social Biases, Interpretability