I am a PhD candidate at the ILLC, University of Amsterdam, advised by Katrin Schulz and Willem Zuidema. At the time of writing, I'm doing a research internship at EleutherAI. I have also been an active participant in the BigScience initiative.
My research broadly focuses on understanding why and how language models exhibit social biases. I study how we can reliably measure bias in NLP and try to ground discussions of bias in a broader societal perspective. I'm particularly interested in using interpretability tools to answer questions like "How can we reliably measure and mitigate bias?" and "How do LMs learn these biases in the first place?"
As part of the Bias Barometer research group, I also aim to develop NLP techniques for investigating the (hidden) biases in Dutch digital media, as part of a broader multidisciplinary effort to understand the media's role in shaping public perception of politics, events, and social groups.
Before starting my PhD, I was a Master's student in Artificial Intelligence at the University of Amsterdam, where I wrote my thesis on language emergence in referential games. As an undergraduate, I studied at University College Twente—a selective Bachelor's program focusing on both technology and its role in society—where I developed an interest in computer science, ethics, and philosophy of technology.
In my free time I like to sketch and read. I'm also more proficient in Emacs than in Vim.