The concept of fake news has been at the forefront of American politics since the 2016 election. One of the reasons Trump was able to win that year’s election was the massive spread of false information and propaganda by alt-right groups and foreign nations like Russia. Facebook was the platform of choice for many of these groups because of how easy it was to spread information on the site, real or not. Facebook has come under fire for these issues in the past few years, and it is only now starting to make progress in snuffing out fake stories before they are able to spread across the platform. This is a very complicated issue because of how quickly things can spread on the internet. If one person shares an article, ten of their friends might share it, and then ten of each of those friends might share it. The number of people being exposed to the article grows exponentially, making it crucial to block a fake news article as quickly as possible.
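To put rough numbers on that, here is a toy Python sketch of the ten-friends scenario above; the branching factor and round counts are hypothetical illustrations, not real Facebook data:

```python
# Toy model of exponential sharing. A branching factor of 10 means each
# sharer's post gets re-shared by ten of their friends (hypothetical numbers).
def people_reached(branching_factor: int, rounds: int) -> int:
    """Total people exposed after the given number of re-share rounds."""
    total = 0
    sharers = 1  # the original poster
    for _ in range(rounds):
        sharers *= branching_factor  # each sharer reaches ten new people
        total += sharers
    return total

for rounds in range(1, 6):
    print(rounds, people_reached(10, rounds))
# 1 -> 10, 2 -> 110, 3 -> 1,110, 4 -> 11,110, 5 -> 111,110
```

After just five rounds of re-sharing, over a hundred thousand people have already seen the article, which is why even a short delay in flagging a fake story matters so much.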
Facebook has recently created a page on its website titled “Tips to Spot False News.” This page shows how Facebook is trying to equip its users to identify and report fake news on the platform. It first explains what Facebook itself is doing, such as “remov[ing] fake accounts and disrupt[ing] economic incentives for people that share misinformation.” It then lists 10 tips for users to determine for themselves what is fake:
1. Be skeptical of headlines.
2. Look closely at the link.
3. Investigate the source.
4. Watch for unusual formatting.
5. Consider the photos.
6. Inspect the dates.
7. Check the evidence.
8. Look at other reports.
9. Is the story a joke?
10. Some stories are intentionally false.
These tips can pretty much all be summed up as: use common sense and check other sources to confirm the information you see. One interesting tip is to make sure the story is not a joke. Parody news websites on both sides of the political spectrum, like The Onion and The Babylon Bee, have been getting harder and harder to separate from reality as of late. President Trump even retweeted an article from The Babylon Bee last week. While it is nice to see Facebook trying to educate its users, in my opinion this list does not tell people anything they do not already know. It also distracts from the real problem at hand: preventing fake news from being spread in the first place.
Facebook goes into more detail about what it is doing to identify false information on a page called “Working to Stop Misinformation and False News.” Here they talk more about restricting ad revenue for untrustworthy news sources and about technological innovations to prevent fake news. These innovations include making it easier to report articles, improving the feed algorithm to show fewer articles that people are not sharing, and working with third-party fact-checking organizations to determine the validity of articles before they are shown in people’s feeds. They also talk about two projects, the Facebook Journalism Project and the News Integrity Initiative, that work to inform users about which news is trustworthy and which is not. These projects are big steps in the right direction because many different organizations and companies are part of them. The more collaboration there is in tackling this issue, the better the results will be.
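To make the ranking idea concrete, here is a purely hypothetical Python sketch of how user reports and a third-party fact-checker verdict could demote an article in the feed; the Article fields, weights, and verdict labels are my own illustration, not Facebook’s actual system:

```python
from dataclasses import dataclass

@dataclass
class Article:
    base_score: float   # the article's normal feed-ranking score
    user_reports: int   # how many users flagged it as false
    fact_check: str     # "unrated", "disputed", or "false" (illustrative labels)

def ranked_score(article: Article) -> float:
    """Demote an article's feed score based on misinformation signals."""
    score = article.base_score
    # Each user report nudges the article down a little.
    score *= 0.95 ** article.user_reports
    # A third-party fact-checker verdict demotes it far more sharply.
    if article.fact_check == "disputed":
        score *= 0.5
    elif article.fact_check == "false":
        score *= 0.1
    return score
```

The point of a scheme like this is that an article does not have to be removed outright; it simply gets buried deep enough in feeds that it stops spreading.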
While fact-checking and warning users helps the problem, it does not do enough. With how quickly fake news can spread, these methods are like putting a band-aid on a broken arm. What is really needed is a way to catch the problem before it can spread. A new tool that Facebook is starting to use is being referred to as a “virality circuit breaker.” This article from Fortune describes how the circuit breakers kick in when a post reaches a certain number of views and shares. When that limit is reached, the algorithm slows the spread by showing the post to fewer people and further down in feeds. This creates a grace period to determine whether false information is present and, if necessary, to stop showing the post before too many users see it. The article notes that while this tool could be highly effective, Facebook has been hesitant to use it for two reasons: accusations of political bias and lost ad revenue. Facebook has attempted to remain impartial since it began, and the fact that the vast majority of fake news comes from the right has put the company in the difficult position of angering a large portion of its user base whenever it removes false information. Showing articles to users, no matter where they come from, makes Facebook money, which means the company would be sacrificing a lot by refusing to show untrustworthy articles. Since money rules everything in this country, it is not likely they will employ their circuit breakers as much as they could unless government regulations tell them to do so.
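Going only by the Fortune article’s description, a virality circuit breaker might look something like the following sketch; the thresholds, the damping factor, and all of the names here are assumptions I made for illustration:

```python
# Illustrative circuit breaker: once a post goes viral, throttle its
# distribution until fact-checkers have had a chance to review it.
VIEW_LIMIT = 100_000   # hypothetical trip thresholds
SHARE_LIMIT = 10_000
DAMPING = 0.2          # show the post to far fewer people while under review

def distribution_multiplier(views: int, shares: int,
                            reviewed: bool, is_false: bool) -> float:
    """How strongly the feed should keep promoting a post."""
    if reviewed and is_false:
        return 0.0  # confirmed false: stop showing the post entirely
    if not reviewed and (views >= VIEW_LIMIT or shares >= SHARE_LIMIT):
        return DAMPING  # breaker trips: slow the spread during the grace period
    return 1.0  # normal distribution
```

The appeal of this design is that it is content-neutral: the breaker trips on raw engagement numbers, not on who posted the article, which in theory should blunt the accusations of political bias.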
With the next presidential election less than a month away, Facebook is making it known that it is more prepared to handle the spread of false information than it was in 2016. They posted an article on their website earlier this month titled “Preparing for Election Day.” In it they boast some results of the tactics I mentioned earlier: 120,000 pieces of content removed, 2.2 million ad submissions rejected, and 150 million warnings displayed on content this year alone. These are massive numbers, and it is crazy to think how much has still slipped through the cracks. They have also announced a “Voting Information Center” on Facebook and Instagram that is meant to show voters how to register, how to vote, and to display real-time election results. That last item is especially important because of how many people will be mailing in their votes this year. There is bound to be a ton of false information about the winner and the validity of the results, so it is comforting to see Facebook putting the correct information front and center in its apps. In conclusion, Facebook has begun to crack down on fake news on its platform since the 2016 election, but it is unclear how far the company will decide to go and how much it is willing to sacrifice to ensure people are seeing trustworthy news.
