
When you look something up on the internet, you likely do not put much thought into which search engine you use or why. For most of us, it is simply the one we came upon first, as with Apple users who use Google because it is what Safari defaults to. Yet the various search engines available to us (e.g. Google, Bing, Ecosia, DuckDuckGo) must differ enough from one another to warrant being distinct search engines. Indeed, this is the case, and one difference of particular interest is how their algorithms determine what users are most likely looking for, through their autocomplete functionality, and which website to recommend first as most likely to answer a query. Yet the code that determines these aspects of search engines has become so complex that the programmers maintaining it no longer fully understand how it works. Regardless, an engine’s search results and autosuggestions are important because they give the user an overview of the information deemed most relevant to their query, including the latest news on the topic and the general public’s opinions of it. Whether or not search engines claim to do this does not matter; in Algorithms of Oppression, Safiya Umoja Noble explains,
“Search does not merely present pages but structures knowledge, and the results retrieved in a commercial search engine create their own particular material reality. Ranking is itself information that also reflects the political, social, and cultural values of the society that search engine companies operate within, a notion that is often obscured in traditional information science studies.”
(Noble 2018, 148)
Thus, even without knowing exactly how, we know that different search engines each provide their own representation of how modern society views or understands a given topic, to the point where two engines may yield different results for the same search. These differences could be attributed to fundamental differences in the engines’ algorithms, or to additional filtering done by programmers to remove results that users may find offensive or culturally insensitive, as Noble shows in Algorithms of Oppression.
One can study how search engines portray a given subject by entering a question or statement relating to that subject into the search bar and analyzing how the search results and autosuggestions differ between engines. I chose the term “are karens…” because, until recently, Karen was just a common name; it has since become a slang term with its own distinct meaning. According to BBC News’ Ashitha Nagesh, a Karen is “a specific type of middle-class white woman, who exhibits behaviours that stem from privilege.” There is a discussion to be had about the controversies surrounding this new meaning, but for the purposes of studying search engines, the term’s recency uniquely allows us to study how current algorithms adjusted to Karen becoming more than just a name, in addition to how they represent the public’s perception of it.

The first pages of results on Google and Bing show no remnants of a life before Karen took on its new meaning, suggesting that their algorithms adjusted well to this sudden shift and “understand” what people looking this up are likely referring to. Interestingly, Ecosia’s results page does link to a Wikipedia page on the Karen people – speakers of Sino-Tibetan languages originating from southern Myanmar – which suggests that its adjustment, though noticeable, allowed it to maintain a view of Karen that transcends the US or whatever is trending in the news at a given time. Moreover, all three search engines suggest articles from news sources that are credible and/or informative in nature, though Ecosia’s sources deviate slightly with the title, “Are Karens even real? Yes. Here’s where they live,” which depicts Karens as creatures to be spotted. Apart from this implication, all three search engines maintain a relatively neutral and informative view of what Karens are, the only negative associations being those inherent to the definition itself.

A search engine’s autosuggestions provide a more concise view of what its users think about a given topic: their brevity leaves less room for interpretation. As I looked through the autosuggestions, they appeared to tell a different story than the search results. Google’s suggestions, shown above, indicate that its users believe Karens to be old, narcissistic, and delusional, which is far more specific than the description of being white given in the search results. Surprisingly, Bing’s and Ecosia’s autosuggestions differ significantly from Google’s but are identical to each other, the first five completing terms, from top to bottom, being real, evil, nice, narcissists, and liberals. The users of Bing and Ecosia therefore appear to view Karens in the same light: as people one can hardly believe exist, and as evil, mean, and narcissistic. It is also notable that Bing and Ecosia users seem interested in Karens’ political beliefs, as opposed to Google users’ interest in their state of mind, though it is hard to say whether this reflects a genuine difference or Google’s filtering of results. The fact that Google’s autosuggestions are inconsistent with Bing’s and Ecosia’s, which are consistent with each other, suggests that Google’s algorithm performs some additional filtering on this search.
Overall, though, the three search engines do overlap in representing Karens as unbelievably narcissistic people. Whether this depiction is representative of the wider population’s views cannot be deduced, however, because we know that autosuggestions are filtered both within and beyond the programs that generate them, and that they are personalized to each user. Furthermore, could we claim that the views of the people making these searches are representative of the population, or are they just a vocal minority? Despite how accessible the internet is nowadays, it may even be a stretch to say that the users of a particular search engine are representative of the entire population. As such, we can gain a great deal of insight into many people’s views by looking at search engines’ recommendations for our queries, but we cannot pinpoint how accurate that picture is. A question we may ask ourselves is whether the filtering search engines perform to remove inappropriate, insensitive, or controversial suggestions makes their results and autosuggestions more or less representative of the wider population, and if so, by how much.
Bibliography:
Noble, Safiya Umoja. “Searching for Black Girls.” In Algorithms of Oppression: How Search Engines Reinforce Racism, 148. New York: New York University Press, 2018.
Nagesh, Ashitha. “What Exactly Is a ‘Karen’?” BBC News, July 31, 2020. https://www.bbc.com/news/world-53588201.
Wikipedia Contributors. “Karen People.” Wikipedia, Wikimedia Foundation, January 12, 2020. https://en.wikipedia.org/wiki/Karen_people.