“Engineering is about universalizable things like effectiveness, rationality, and algorithms, while culture is about subjective and particular things, like taste, creativity, and artistic expression” (Seaver 2014)
What happens, however, when culture and engineering are conflated without regard to potential subjective biases?
In line with the view that the natural sciences constitute an objective discipline, we are conditioned to believe that programming, coding, and algorithms must also be objective. Poovey notes that the British Association for the Advancement of Science treated such knowledge as “value-free and objective, and, above all, impervious to political controversy” (Poovey 1993: 256). Surely a sequence of numbers, symbols, and letters typed into a black screen could not index bias, could it?
MIT graduate student Joy Buolamwini examines the nature of algorithmic bias within cameras. I use Star’s description of infrastructure to frame cameras as part of infrastructural networks. In my opinion, this framing is appropriate because a camera is embedded within other infrastructural networks and reliant upon their “social arrangements and technologies” (Star 1999: 382).
Cameras should, objectively, be tools for capturing any moment; in practice, however, experiments show that cameras can be selective in discerning what constitutes a face. At MIT, Buolamwini is working on a project called the ‘Aspire Mirror’. From my understanding, this technology is similar to the filters available on Snapchat that allow users to project a virtual image onto their face; in Buolamwini’s project, the mirror lets users reflect a digital persona of different characters. Unfortunately, the project revealed that the underlying facial recognition software carried a bias: it did not identify human faces with brown skin. Buolamwini was forced to wear a white plastic mask in order for her face to be recognised … (The symbolism of this could inspire a whole essay premised on Fanon’s ‘Black Skin, White Masks’, but I will save that for another day) … This prompted me to view cameras as gatekeepers: entities deciding who can and cannot be included (Bozdag 2013: 213).
I would argue that cameras are similar to other infrastructures, such as roads or passports. In the same way that these infrastructures have the power to dictate belonging, a camera’s capacity to discriminate is evident in facial recognition technology. Cameras should objectively capture everything within a given frame; in practice, though, for those with non-white skin, cameras can actively erase people. I therefore disagree with the view Seaver outlines, that engineering is rational and objective: that view is reductionist and deflects accountability. This example of facial recognition bias demonstrates that a corporate culture of hiring coders who “fit” (typically white men) produces an intersection of culture and technology that culminates in racial bias. A homogeneous group of coders is more likely to inscribe a narrow, less racially diverse set of examples into their technologies, thereby rendering faces that fall outside that set as ‘aliens’. Ultimately, these technologies make minoritarian persons vulnerable to discrimination (Blas 2016).
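The causal mechanism above, that a homogeneous training sample yields a classifier which erases out-of-sample faces, can be sketched with a deliberately simplified toy model. All names and numbers here are hypothetical, and real face detectors use learned visual features rather than raw brightness; the sketch only illustrates how calibration on unrepresentative data produces discriminatory errors:

```python
# Toy model (hypothetical, not any real system): a "face detector" whose
# acceptance window is calibrated only on light-skinned training samples.

# Mean pixel brightness (0-255) of face crops in a homogeneous training set.
training_faces = [195, 201, 188, 210, 199, 192]

# Calibrate an acceptance window from the training data alone.
mean = sum(training_faces) / len(training_faces)
spread = max(abs(b - mean) for b in training_faces)
low, high = mean - 2 * spread, mean + 2 * spread

def detects_face(brightness: float) -> bool:
    """Return True if the brightness falls inside the calibrated window."""
    return low <= brightness <= high

print(detects_face(200))  # a face resembling the training set: detected (True)
print(detects_face(90))   # a darker face outside the training set: missed (False)
```

Nothing in the code is malicious; the erasure follows mechanically from what the training data did, and did not, contain, which is precisely the point about homogeneous teams inscribing narrow examples.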
Blas, Zach (2016). ‘A Cage of Information’. In Documentary Across Disciplines, eds. Erika Balsom and Hila Peleg. Cambridge, MA: The MIT Press and Haus der Kulturen der Welt.
Bozdag, Engin (2013). ‘Bias in Algorithmic Filtering and Personalization’. Ethics and Information Technology 15 (3): 209-227.
Poovey, Mary (1993). ‘Figures of Arithmetic, Figures of Speech: The Discourse of Statistics in the 1830s’. Critical Inquiry 19 (2): 256-276.
Seaver, Nick (2014). ‘On Reverse Engineering’. Medium: https://medium.com/anthropology-and-algorithms/on-reverse-engineering-d9f5bae87812
Star, Susan Leigh (1999). ‘The Ethnography of Infrastructure’. American Behavioral Scientist 43 (3): 377-391.