Does technology covertly hold the biases of its creators? Alexis Stevens in Cluster Magazine writes about an unintended dimension of facial-recognition-based surveillance software:
“HP Computers are Racist” is a 2009 YouTube video in which two electronics store employees demonstrate how face recognition and video tracking technology on Hewlett-Packard computers works more accurately for people of whiter skin tones. “I think,” one of the employees remarks with biting accuracy, “my blackness is interfering with the computer’s ability to—to follow me.”
The company issued an apology after the clip went viral, suggesting that face-detection algorithms have more difficulty identifying the contrast that helps discern facial structure in low lighting. An ironic outcome of this corporate oversight is that while black people are more likely to be eyed as suspicious and tracked in real life (e.g. stop-and-frisk), the engineering of webcams for a presumptively white target audience renders people of color more invisible to technology.
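The contrast explanation can be made concrete with a toy sketch. This is not HP's actual software, just a hypothetical Haar-like feature detector of the kind classically used for face detection: it compares the summed brightness of adjacent image regions (say, eye sockets versus cheeks) against a fixed threshold. If that threshold is tuned on lighter-skinned faces, a darker face with the same *relative* contrast produces a smaller *absolute* difference and can fall below it:

```python
# Toy illustration (hypothetical numbers, not HP's algorithm):
# Haar-like features measure the absolute brightness difference
# between adjacent regions of a face.

def haar_feature_response(bright_region_mean, dark_region_mean):
    """Absolute brightness difference a Haar-like feature measures."""
    return abs(bright_region_mean - dark_region_mean)

THRESHOLD = 20  # hypothetical fixed detection threshold from training

# Lighter skin under the same lighting: cheek ~180, eye region ~140
light_response = haar_feature_response(180, 140)  # difference of 40

# Darker skin, same lighting, same relative contrast: ~90 vs ~70
dark_response = haar_feature_response(90, 70)     # difference of 20

print(light_response > THRESHOLD)  # feature fires: face detected
print(dark_response > THRESHOLD)   # feature fails: face missed
```

The point of the sketch is that nothing in the arithmetic "intends" anything; the bias enters through a threshold calibrated for a presumptively white audience, which is exactly the corporate oversight the passage describes.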
Whether it’s via Photo Booth, surveillance cameras, drones, or Instagram [...] we are now beginning to see ourselves through the screen and think of ourselves as perpetually watched beings.
The chicken-vs.-egg question [...] is: Are computers racist or are the humans who advance computer technology racist? The "Object-Oriented Ontology" approach to technology favored by New Aesthetic theorists suggests that the answer is both. OOO is about our gaining empathy with our digital tech, and privileging the relationship that we have with these objects. By imparting agency to them, we can begin to imagine their inner lives and how they might relate to our condition. As Meg Jayanth writes, "phones know their location, algorithms read the news, the camera-mounted car is an organ of sight for the diffuse Google Street View body." Some (arguably Ellisonian) computers just don't see black people.