Screened as part of TOAFF20
When MIT Media Lab researcher Joy Buolamwini discovers that most facial recognition software fails to identify darker-skinned faces or women’s faces, she sets out to investigate widespread bias in algorithms. She proves what many suspected: Artificial intelligence is not neutral. From facial scanning used for policing and surveillance to automated HR and mortgage application systems, the building blocks of these omnipresent technologies are far from impartial. Rather, they mirror and magnify the toxic racism and sexism that are already so deep-rooted in society.
Coded Bias illuminates mass misconceptions about AI and emphasises the urgent need for legislative protection. It celebrates the women who, with humour, intelligence and determination, are leading the collective fight for justice in the age of automation.
Content notes:
Contains references to racism (including antisemitism), misogyny, and some depictions of racially motivated violence.