Investigating organizational AI bias tolerance
Description
Many AI systems contain biases that are built into their training data or into the models themselves. These biases can be reduced but are often impossible to remove completely. In practice, organizations may still deploy such systems, sometimes consciously accepting a certain level of bias and sometimes without fully recognizing it. This thesis will explore this bias tolerance, i.e. how much inherent AI bias organizations are prepared, willingly or unwillingly, to accept, and will examine the factors influencing these decisions and their implications for governance and trust. Interested?
Contacts
For more information, please contact Liudmila Zavolokina.