Representation: Why are Conservatives…
The internet is a deeply divided space. It is hardly surprising that when the internet is available to everyone regardless of social, political, and economic status, pages will pop up catering to the wants of every kind of potential user. The internet has also provided a remarkably effective means of disseminating news as quickly as possible: live coverage through rapidly updating articles, or watching your favorite reporter live-tweet from the most recent protest, are things that classic print media and broadcast news could never do as effectively. Because of this, politics has cemented an important niche online, and many news outlets have risen to satisfy the news needs of people across the political spectrum.
While they are not the only two political positions, the conflict between conservative and liberal ideologies pervades online political discourse. In online political forums it can be very difficult to have a rational discussion without it devolving into name-calling and ad hominem attacks; personally, I have certainly been called plenty of variations of “snowflake,” “liberal [derogatory],” “soyboy,” etc. Aware of the divisiveness of this topic, I wanted to examine how autosuggestions may contribute to reinforcing this division, if at all. I examined the query stem “why are conservatives…” across three search engine platforms: Yahoo!, Bing, and Ecosia. I will note that I intentionally omitted Google, the most popular search engine, because, according to its support page, it actively removes suggestions that could be problematic; for this post I wanted to examine search engines that may not regulate their suggestions as tightly. I figured this was a relatively neutral question stem. However, I would find the autosuggestions to be very divisive and representative of the “us versus them” narrative that is so common in political discourse online.
These search suggestions represented three main types of perspectives. I will refer to these as “stance clarifications” (questions about conservative political opinions), “anti-conservative” (questions which attach a negative connotation to conservatives), and “pro-conservative” (questions which attach a positive connotation). I began by examining stance clarification questions, which I believe represent the subject of “conservatives” most neutrally of the three. I found a handful of these scattered through the suggestions of all three engines I examined, with no engine apparently offering more than the others. Results such as “why are conservatives pro life,” “why are conservatives anti mask,” and “why are conservatives boycotting coca-cola [sic]” are all questions addressing stances that may be held by conservatives. Questions like these impart neither a negative nor a positive connotation to conservatives or the stances being asked about, avoiding inflammatory vocabulary and contributing to a neutral representation of the stance. This is arguably the best type of response a search engine could give. Safiya Umoja Noble, in Algorithms of Oppression, cites Alexander Halavais, a sociology professor at Arizona State University, who writes, “Search engines have come to play a central role in corralling and controlling the ever-growing sea of information that is available to us, and yet they are trusted more readily than they ought to be. They freely provide, it seems, a sorting of the wheat from the chaff, and answer our most profound and most trivial questions. They have become an object of faith.” To the average internet user, search engines offer results that are represented as objective facts presented in response to objective questions. That is how the ideal search engine may work in theory, but in reality a whole slew of factors contributes to representation in search results: number of clicks, citations, keywords, etc.
Autosuggestions shape what information we view in our search results, especially when colored by bias toward or against the subject of a search. Ideally, an autosuggestion would be free of any such bias and would guide the user toward a straight answer; of the three categories, these stance clarifications come closest to that ideal.
Autosuggestions such as “why are conservatives so afraid of change,” “why are conservatives anti-science,” and “why are conservatives so hateful” all display an implicit bias against conservatives, clear from the negative connotations of the words used: hateful, anti-science, afraid. Conversely, “why are conservatives happier,” “why are conservatives being silenced,” and “why are conservatives being censored” all present a bias in favor of conservatives. I noticed in particular that autosuggestions about conservatives being “silenced” or censored in some capacity appeared across all three platforms, indicating that this is a more popular search (regardless of the veracity of the claim). Polarized autosuggestions were the most prevalent on all three search engines’ autosuggestion pages. These questions all have depictions of conservatives, positive or negative, implicitly baked into them, and this can steer the engine toward fetching content conforming to one narrative or another. Somebody looking up “why are conservatives being censored” may be directed to a Reddit or Quora thread (as I was) with absolutely no real backing for any of the claims made. Radical suggestions beget radical recommendations that lean toward the polarized ends of the political spectrum, because these are what generate clicks. Noble writes, “In the context of commercial search, they [search engines] signal what advertisers think we want, influenced by the kinds of information algorithms programmed to lead to popular and profitable web spaces. They galvanize attention, no matter the potential real-life cost, and they feign impartiality and objectivity in the process of displaying results” (Noble, 116). Honesty and impartiality are not the deciding factors for what is pushed to the top of a search algorithm’s suggested list of queries; honesty does not get clicks.
These divisive framings are represented even in just the autosuggestions, further contributing to a narrative of division in politics. This representation of extremes only serves to further radicalize people, pushing them away from calm discussion and cooperation on the basis of political identity. The media people consume shapes their identities and beliefs, and the overrepresentation of division likely does not have a positive impact. Division nets clicks and clicks net profit, so despite the toll it takes on users, we see such a strong representation of divisive subjects by the search engines that propagate this industry.
Something of note I encountered through my research is that the first page of results is nearly identical across all three engines when searching simply for “why are conservatives.” Most of the pages listed at the top of the search are some permutation of “why are conservatives bad/wrong.” One article titled “Why Are Conservatives Happier than Liberals?” also appears in the midst of these apparently anti-conservative headlines. From this we might conclude that search engines have a bias toward anti-conservative representation in search results, consistent with the apparent online censorship of conservatives, since so many more apparently liberal articles are presented to the user; this is certainly a narrative presented by conservative commentators online (hence the autosuggestion “why are conservatives being censored” ranking so high). However, a correspondence experiment performed by Hans J. G. Hassell et al. found no statistically significant bias against conservatives (or liberals) in what news reporters choose to cover, despite 64% of Americans believing media representation favors the Democratic Party (I do not wish to conflate “Democrat” with “liberal,” but for the purposes of this examination I will align liberals with Democrats). Hassell et al. suggest, “it is also possible that the public perceives ideological bias in what journalists choose to cover because they are psychologically motivated to see bias in the news.” The clear representation of media that antagonizes conservatives makes it much easier for conservatives to feel that they are being antagonized for their identity even if that is not the case. This is a good example of presentation bias, where a person draws conclusions from the limited information they are presented (in this case, by search engines).
These results express only a fraction of the media available online, and for people to base their perception of people or ideologies on the limited representation of a search engine’s first page of results is not conducive to an in-depth and well-rounded understanding.
Overall, the internet holds a massive wealth of information, some of it even accurate. Search engines provide an invaluable service by trawling through this information and directing us to the most useful parts of it. Despite this, it is still too common for engines to suggest content that is not the most helpful, or even accurate, when truth is not the driving factor of the search. Representation of problematic ideas and practices that sow disunity among internet users is a problem, and it needs to be addressed. This may happen through more thorough regulation and moderation of content posted online, tighter content filters on search engines, or even (perhaps unfeasibly at scale) human curation of search results. Regulation is tightly bound to representation online and in the circuit of culture because regulations determine what can and cannot be shown to an audience, such as a search engine user. Even setting these problems aside, autosuggestions and the first page of results are never going to present a full picture of a subject. A single Quora thread about how conservatives are being silenced by the liberal media is not going to give anyone a good understanding of the nature of bias in politics. People need to learn not to believe everything they read online and in search results, and to accept that the engine is fallible. Skepticism and questioning of the media we engage with daily can foster intelligent conversation and lead to a deeper understanding of virtually any subject.
Hassell, Hans J., John B. Holbein, and Matthew R. Miles. “There Is No Liberal Media Bias in Which News Stories Political Journalists Choose to Cover.” Science Advances 6, no. 14 (2020). https://doi.org/10.1126/sciadv.aay9344.
Noble, Safiya Umoja. “Searching for People and Communities.” In Algorithms of Oppression: How Search Engines Reinforce Racism, 116. New York: New York University Press, 2018.