Alexa (above) is a web analytics service that ranks websites based on a combination of average daily visitors and daily page views. It’s striking that three of the top five websites are search engines. Of the top fifty, at least twenty are search engines. Billions of people rely on these websites to direct them to up-to-date news, entertainment, medical information, and even help with work tasks.
Search engines are important enough, in fact, that the percentage of traffic on a website arriving via search is considered a vital metric in and of itself. Websites compete for coveted positions at the top of search results, trying to stay ahead of arcane ranking algorithms that change constantly.
These algorithms prioritise content users are likely to agree with or want to engage with, prompting them to remain on the website for longer, where they can be targeted by monetisation efforts. Search engines also adapt to their users over time. This is particularly true of Google, where search histories are associated with user accounts. Popular websites are ranked more highly, but so are the websites an individual user visits most frequently, and as usage data accumulates, results are tweaked to display what Google thinks you will be most interested in, while other users searching the same terms see different results (Wilf 2013).
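The personalisation described above can be sketched in miniature. The following Python is purely illustrative: the signals (a global popularity score plus a boost for previously visited sites) and the weights are invented for the example, and bear no relation to Google's actual, proprietary ranking algorithm.

```python
def personalised_rank(candidates, user_history, popularity):
    """Order candidate sites by a hypothetical personalised score.

    popularity   -- global score per site (stand-in for link-based rank)
    user_history -- how often this user has visited each site
    The 2.0 weight on personal history is an arbitrary illustration.
    """
    def score(site):
        base = popularity.get(site, 0.0)
        personal_boost = 2.0 * user_history.get(site, 0)
        return base + personal_boost
    return sorted(candidates, key=score, reverse=True)

# Two users issue the same query but see different orderings.
popularity = {"siteA": 5.0, "siteB": 4.0}
alice = personalised_rank(["siteA", "siteB"], {"siteB": 3}, popularity)
bob = personalised_rank(["siteA", "siteB"], {}, popularity)
```

Here Alice, who visits siteB often, sees it promoted above the globally more popular siteA, while Bob, with no history, sees the default popularity ordering. The divergence is invisible to both users, which is the point Wilf (2013) draws attention to.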
For website owners, search engines act like switchboards or traffic controllers, connecting users to websites so long as the websites comply with the search engine's requirements. For the user, they act like sieves, filtering out irrelevant sites (Kockelman 2013). This puts search engines in a powerful position to control the information users are able to find. On one level, that is a fundamental part of a search engine's functionality, because the internet is too vast to easily traverse without filtration. On the other hand, it opens up the possibility of systemic censorship, or the kinds of recursive bubble effects we see on social media, where opinion groups are increasingly isolated from each other, because the work the algorithms do is obscured in favour of presenting a set of results as a fait accompli. There is no way for users to see the results they weren't shown.
Some of these effects were not foreseen by developers and lie beyond their control, but much is deliberate. Tristan Harris, who was hired by Google as a 'Design Ethicist', criticises tech and advertising companies for exploiting psychological quirks to influence our browsing and purchasing behaviours. Harris's concerns are echoed by Pietsch (2013), who highlights the sheer scale and utility of the data sets available, and the increasing predictability, and hence malleability, of internet users, who may not even realise that they are being sieved and shaped in this way.
Search engines are an indispensable infrastructure in the information age, providing access to information that would otherwise be lost in a deluge of data, but they are not magic. Work, both human and algorithmic, goes into producing search results. It’s important to ask what effects that work produces.
- Kockelman, P. (2013) "The anthropology of an equation: Sieves, spam filters, agentive algorithms, and ontologies of transformation" HAU: Journal of Ethnographic Theory 3:3, pp.33-61
- Pietsch, W. (2013) “Data and Control — a Digital Manifesto” Public Culture 25:2, pp.307-310
- Wilf, E. (2013) “Toward an Anthropology of Computer-Mediated, Algorithmic Forms of Sociality” Current Anthropology 54:6, pp.716-739