I am interested in adapting large (vision-)language models to downstream tasks, mostly using modular and parameter-efficient transfer learning methods. Such methods update only a fraction of a model's parameters, add a small number of trainable parameters, or reuse existing parameters. I also enjoy interpreting and editing model-internal representations, e.g., by tracing information flow or analyzing individual model components. As part of the Hessian AI Service Center, besides my research I contribute to the AI infrastructure and help disseminate AI knowledge.