The Sieve and the Search Engine: Internet information infrastructures and their algorithms

[Screenshot: Alexa top sites ranking]

Alexa (above) is a web analytics service that ranks websites by a combination of average daily visitors and daily page views. It’s striking that three of the top five sites are search engines, and of the top fifty, at least twenty are. Billions of people rely on these sites to direct them to up-to-date news, entertainment, medical information, and even help with work tasks.
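To make that ranking mechanic concrete, here is a minimal sketch of how a combined traffic rank could be computed from the two signals Alexa describes. The geometric-mean combination and the numbers are assumptions for illustration, not Alexa’s actual (proprietary) methodology.

```python
# Toy traffic ranking in the spirit of Alexa's combined metric.
# Assumption: sites are ordered by the geometric mean of daily visitors
# and daily page views; the figures below are invented for illustration.
import math

sites = {
    "google.com":  {"daily_visitors": 1_500_000_000, "daily_pageviews": 12_000_000_000},
    "youtube.com": {"daily_visitors": 1_200_000_000, "daily_pageviews":  9_000_000_000},
    "example.com": {"daily_visitors":        50_000, "daily_pageviews":       120_000},
}

def traffic_score(stats):
    """Geometric mean of the two traffic signals (illustrative only)."""
    return math.sqrt(stats["daily_visitors"] * stats["daily_pageviews"])

ranking = sorted(sites, key=lambda s: traffic_score(sites[s]), reverse=True)
for rank, site in enumerate(ranking, start=1):
    print(rank, site)
```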

Search engines are important enough, in fact, that the percentage of traffic on a website arriving via search is considered a vital metric in and of itself. Websites compete for coveted positions at the top of search results, trying to stay ahead of arcane ranking algorithms that change constantly.
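As a rough illustration of that metric, the sketch below estimates the share of visits whose referrer is a known search engine from a simplified access log. The list of search domains and the log format are assumptions made for the example.

```python
# Estimate the share of visits arriving via search from (url, referrer) pairs.
# The search-domain list and the log format are simplified assumptions.
from urllib.parse import urlparse

SEARCH_DOMAINS = {"google.com", "bing.com", "duckduckgo.com", "baidu.com", "yandex.ru"}

visits = [
    ("/article", "https://www.google.com/search?q=search+engines"),
    ("/article", "https://twitter.com/somebody/status/1"),
    ("/about",   ""),  # direct visit, no referrer
    ("/article", "https://www.bing.com/search?q=sieve"),
]

def is_search_referral(referrer: str) -> bool:
    host = urlparse(referrer).netloc.lower().removeprefix("www.")
    return host in SEARCH_DOMAINS

search_share = sum(is_search_referral(r) for _, r in visits) / len(visits)
print(f"Search traffic share: {search_share:.0%}")  # 50% in this toy log
```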

These algorithms prioritise showing users content they’re likely to agree with or want to engage with, prompting them to stay on the site longer, where they can be targeted by monetisation efforts. Search engines also adapt to their users over time. This is particularly true of Google, where search histories are tied to user accounts: popular websites are ranked more highly, but so are the sites an individual user visits most often, and as usage data accumulates, results are tweaked to show what Google thinks you’ll be most interested in, while other users see different results (Wilf 2013).
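A highly simplified sketch of how such personalisation might work is below: a base relevance score is boosted for domains the user visits often, so two users issuing the same query see different orderings. The scoring function, weights, and data are invented for illustration and bear no relation to Google’s actual ranking system.

```python
# Toy personalised re-ranking: boost results from domains this user visits often.
# Scores, weights, and the boost formula are invented for illustration only.

base_results = [
    ("bbc.co.uk/news",       0.82),  # (url, base relevance score for the query)
    ("obscureblog.net/post", 0.80),
    ("wikipedia.org/wiki/X", 0.78),
]

user_history = {"obscureblog.net": 45, "wikipedia.org": 3}  # domain -> past visits

def personalised_score(url, relevance, history, weight=0.01):
    domain = url.split("/")[0]
    return relevance + weight * history.get(domain, 0)

reranked = sorted(base_results,
                  key=lambda r: personalised_score(r[0], r[1], user_history),
                  reverse=True)
print([url for url, _ in reranked])
# A frequent visitor of obscureblog.net sees it promoted above bbc.co.uk;
# a user with a different history sees a different order for the same query.
```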

For website owners, search engines act like switchboards or traffic controllers, connecting users to websites provided the engine’s requirements are met. For the user, they act like a sieve, filtering out irrelevant sites (Kockelman 2013). This puts search engines in a powerful position to control the information users are able to find. On one level, that is a fundamental part of a search engine’s functionality, because the internet is too vast to traverse without filtration. On the other hand, it opens up the possibility of systemic censorship, or the kinds of recursive bubble effects we see on social media, where opinion groups are increasingly isolated from each other, because the work the algorithms do is obscured in favour of presenting a set of results as a fait accompli. There is no way for users to see the results they weren’t shown.
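To make the sieve metaphor concrete, here is a deliberately trivial sketch: a filter that silently drops results failing a relevance test, so the user only ever sees what passed through. The matching rule and the candidate list are invented for illustration.

```python
# A toy "sieve": results that fail the relevance test are silently dropped.
# The query-matching rule and the candidate list are invented for illustration.

candidates = [
    "Search engines explained",
    "Cheap holiday deals",
    "How search ranking works",
    "Celebrity gossip roundup",
]

def sieve(query, results):
    """Return only results sharing a word with the query; the rest vanish."""
    query_words = set(query.lower().split())
    return [r for r in results if query_words & set(r.lower().split())]

shown = sieve("search ranking", candidates)
print(shown)  # what the user sees
# The two filtered-out results never appear, and the user has no way to know
# they existed -- which is the point about results we aren't shown.
```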

Some of these are effects that developers didn’t foresee and aren’t in control of, but much is deliberate. Tristan Harris, who was hired by Google as a ‘Design Ethicist’, criticises tech and advertising companies for exploiting psychological quirks to influence our browsing and purchasing behaviours. His concerns are echoed by Pietsch (2013), who highlights the sheer scale and utility of the data sets now available, and the increasing predictability, and hence malleability, of internet users, who may not even realise that they are being sieved and shaped in this way.

Search engines are an indispensable infrastructure in the information age, providing access to information that would otherwise be lost in a deluge of data, but they are not magic. Work, both human and algorithmic, goes into producing search results. It’s important to ask what effects that work produces.

References

  1. https://design.google.com/articles/evolving-the-google-identity/
  2. https://medium.com/swlh/how-technology-hijacks-peoples-minds-from-a-magician-and-google-s-design-ethicist-56d62ef5edf3
  3. http://www.alexa.com/topsites
  4. Kockelman, P. (2013) “The anthropology of an equation: Sieves, spam filters, agentive algorithms, and ontologies of transformation”, HAU: Journal of Ethnographic Theory 3(3), pp. 33-61.
  5. Pietsch, W. (2013) “Data and Control — a Digital Manifesto”, Public Culture 25(2), pp. 307-310.
  6. Wilf, E. (2013) “Toward an Anthropology of Computer-Mediated, Algorithmic Forms of Sociality”, Current Anthropology 54(6), pp. 716-739.

 


One thought on “The Sieve and the Search Engine: Internet information infrastructures and their algorithms”

  1. I think this concept of search engines filtering what users read and have access to is very interesting in today’s context. With the recent political turmoil going on in the U.S., many people are actively trying to stay informed, and for some, this includes a desire to read completely unbiased articles, or the least biased news they can find. However, what your blog post implies is that these search engines can skew the kind of information offered, often without the user noticing. Since Trump’s election, the U.S. has seen a rather impressive effort by citizens attempting to be well informed and encouraging each other to look at multiple news sources with the intention of getting only the facts and properly informed opinions. It also relates closely to the notion of ‘fake news’, as media outlets are accused of relaying factually incorrect news. Seeing as the internet is the main source of information, it is clearly very important to take into consideration the various algorithms that affect what information is offered to you.

