Hi!

I currently serve as a technology specialist focusing on AI Safety at the European Commission's AI Office, where I help shape and enforce policies for the responsible development of artificial intelligence systems.

Previously, I was a PhD candidate at the Institute for Logic, Language and Computation (ILLC) at the University of Amsterdam. My doctoral research investigated the mechanisms behind social biases in language models: how these biases manifest, how they can be measured reliably, and how these technical questions connect to broader societal contexts.

I'm particularly passionate about using interpretability tools to address critical questions such as "How can we reliably measure and mitigate bias?" and "How do language models acquire biases during training?" Through this work, I aim to make AI systems more equitable, transparent, and aligned with human values.

During my doctoral studies, I also collaborated with EleutherAI and the BigScience initiative on research into bias and interpretability, contributing to the broader field of responsible AI.