Patricia Kahr receives her Doctorate!
The Dynamics of Trust and Reliance in Human-AI Interactions
Patricia successfully defended her doctoral thesis on trust and reliance in human-AI interactions at TU Eindhoven. Here is a short synopsis of her work, which can be found here.
Trust in AI is not built in a moment. It evolves, shifts, and sometimes goes wrong in ways we don't fully understand yet. Patricia's dissertation takes a close look at how trust and reliance develop across repeated and long-term interactions with AI systems. Through lab experiments and real-world fieldwork, it reveals a striking pattern: people don't always adapt to AI the way we'd hope. Early experiences may anchor behavior, errors affect trust and reliance (though not always in the same way), and familiarity over time may help trust recover but also risks uncritical compliance with AI.
These findings point to a clear design challenge: technical reliability alone is not enough. AI systems need to be built with an understanding of how people actually interpret, accept, and adapt to algorithmic advice — taking into account the context they're used in, the responsibilities they involve, and the social environments they're embedded in. Even a perfectly performing system can fall short if users don't grasp its role or limitations. Calibrating trust, it turns out, is as much a design problem as it is a human one.
Congratulations Patricia!