Fake News on Facebook

With more than 2 billion active users, Facebook plays a unique role in spreading both information and disinformation because it curates and customizes what users see and read on the platform.

Facebook’s algorithm and filter bubbles  

The Frontline episode “The Facebook Dilemma” blamed the proliferation of fake news on the company’s proprietary algorithm, which ranks and displays content in a user’s news feed.  These under-the-hood calculations push “engaging content” to the top of users’ home pages (Jacoby, Bourg, & Priest, 2018).  Stories may rank higher in the feed if they attract more likes, comments, or shares.  Because people tend to like posts that align with their worldview and confirm their existing beliefs, Facebook has effectively created filter bubbles in which some people are only ever exposed to one political orientation (Pariser, 2011).  For example, if liberal-leaning voters only see posts that attack Trump, this echo chamber isolates them from other points of view and amplifies the negative posts they share, some of which may be untrue.  Filter bubbles allow falsehoods to flourish because algorithms decide what they think we want to see, not necessarily what we need to see.  The phenomenon is not exclusive to Facebook; Google and other search engines also curate their results based on our perceived tastes, likes, and interests.
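
To make the dynamic concrete, the short Python sketch below ranks a hypothetical feed purely by engagement.  The post fields, weighting, and scores are invented for illustration only; Facebook’s actual algorithm is proprietary and far more complex.

    from dataclasses import dataclass

    @dataclass
    class Post:
        headline: str
        likes: int
        comments: int
        shares: int

    def engagement_score(post):
        # Hypothetical weighting: comments and shares count more than likes.
        return post.likes + 2 * post.comments + 3 * post.shares

    def rank_feed(posts):
        # The most "engaging" content rises to the top, regardless of accuracy.
        return sorted(posts, key=engagement_score, reverse=True)

    feed = rank_feed([
        Post("Fact-checked policy analysis", likes=40, comments=5, shares=2),
        Post("Outrage-bait partisan rumor", likes=300, comments=120, shares=90),
    ])
    print([p.headline for p in feed])  # the rumor outranks the analysis

In this toy example, the emotionally charged rumor outranks the sober analysis simply because it draws more reactions, which is the mechanism critics point to when they say engagement-based ranking rewards falsehoods.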

Benkler et al. (2018) argued that Facebook’s algorithm rewards what they called “hyperpartisan bullshit” and false content (p. 10).  Facebook has also been accused of acting as an “amplification service for websites that otherwise receive little attention online, allowing them to spread propaganda during the 2016 election” (Dreyfuss & Lapowsky, 2019, para. 6).  Because of the social network’s reach, little-known sites have found a platform for spreading their content to the masses.

Facebook’s fact-checkers

The spread of fake news online has ushered in a new cottage industry: fact-checkers.  Alan Duke, a co-founder of Lead Stories, a company aimed at fighting misinformation, scours the Internet for trending fake stories.  Lead Stories is one of dozens of companies currently under contract with Facebook as a third-party fact-checker (A. Duke, personal communication, October 4, 2019).  Duke’s company uses a software tracking tool called the Trendolizer to monitor the spread of disinformation.  Trendolizer “scrapes data from social media and across the internet to surface hidden content that is on the cusp of going viral” (Lizza, 2019, para. 9).  According to Duke, fake news creators now proactively label their fake posts as “satire,” even though that moniker may not be apparent to people who only read the headline and share the story.
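
Trendolizer’s internals are not public, but the general idea of surfacing content “on the cusp of going viral” can be sketched as follows.  The growth threshold and the hourly share counts are hypothetical assumptions, not a description of the actual tool.

    def is_trending(hourly_shares, growth_threshold=2.0):
        # Flag a story whose share count in the latest hour grew sharply
        # compared with the hour before it.
        if len(hourly_shares) < 2 or hourly_shares[-2] == 0:
            return False
        return hourly_shares[-1] / hourly_shares[-2] >= growth_threshold

    shares_per_hour = [12, 15, 40, 110]  # hypothetical counts for one story
    print(is_trending(shares_per_hour))  # True: shares are accelerating

Flagging acceleration rather than raw volume is what lets fact-checkers catch a story while it is still spreading, before it reaches a mass audience.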

In addition to Lead Stories, Facebook relies on The Associated Press, FactCheck.org, Poynter, and other outlets certified through the non-partisan International Fact-Checking Network to help review false news online (Lizza, 2019).  Once the fact-checkers flag suspicious content, Facebook says it deprioritizes those posts to prevent them from spreading further in the news feed (Facebook, n.d.).  In many cases, however, flagged content is not removed from the site.  Instead, it may carry a warning label stating that third-party fact-checkers rated it false or misleading, and users who try to share the post are shown additional reporting on the topic (Facebook, n.d.).
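
The moderation flow described above (flag, deprioritize, label, and surface additional reporting at the moment of sharing) can be sketched roughly as follows.  The field names and the down-ranking factor are assumptions made for illustration, not Facebook’s documented implementation.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class FlaggedPost:
        text: str
        rank_score: float
        rating: Optional[str] = None              # e.g., "false" or "misleading"
        related_articles: List[str] = field(default_factory=list)

    def apply_fact_check(post, rating, reporting):
        post.rating = rating                      # warning label instead of removal
        post.rank_score *= 0.2                    # deprioritized in the feed (assumed factor)
        post.related_articles = reporting         # shown if a user tries to share the post

    post = FlaggedPost("Viral false claim", rank_score=0.9)
    apply_fact_check(post, "false", ["Fact-checker's debunking article"])
    print(post.rating, post.rank_score, post.related_articles)

The sketch captures the key editorial choice critics debate: flagged content is demoted and annotated rather than deleted, so it remains visible on the platform.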

Facebook typically pays the media partners that participate in its fact-checking program.  However, some critics say this arrangement creates a conflict of interest because the media companies are taking funding from Facebook while also reporting critically on the company (Levin, 2017).  Others have accused Facebook of exploiting the work of third-party fact-checkers as a PR campaign to improve its image (Levin, 2017).  In 2018, Snopes.com disclosed that it had received $406,000 from Facebook for participating in its fact-checking program, but the site has since voluntarily ended the partnership (Green & Mikkelson, 2019).

Facebook’s pledges  

Facebook has conceded that it was “too slow to recognize” the extent to which disinformation spread on its platform.  In a blog post, Chakrabarti (2018) wrote about Russian interference: “It’s abhorrent to us that a nation-state used our platform to wage a cyberwar intended to divide society.  This was a new kind of threat that we couldn’t easily predict, but we should have done better” (para. 10).  The company outlined several measures it pledged to undertake to stop misinformation and fake news.  These included: (a) “disrupting economic incentives because most false news is financially motivated,” (b) “building new products to curb the spread of false news,” and (c) “helping people make more informed decisions when they encounter false news” (Mosseri, 2017, para. 3).  Mosseri announced that the company was testing ways to let the Facebook community better police itself and report false news stories.  He also said stories debunked by third-party fact-checking organizations would appear lower in the news feed.

For his part, Facebook CEO Mark Zuckerberg acknowledged that he was responsible for the company’s failure to quickly address the spread of fake news in 2016.  In testimony before Congress in 2018, he said:

It’s clear now that we didn’t do enough to prevent these tools from being used for harm as well.  That goes for fake news, foreign interference in elections, and hate speech, as well as developers and data privacy.  We didn’t take a broad enough view of our responsibility, and that was a big mistake.  It was my mistake, and I’m sorry.  I started Facebook, I run it, and I’m responsible for what happens here. (Burch, 2018, para. 3)

Previously, in 2016, Zuckerberg had downplayed concerns that fake news on Facebook influenced the election, calling it “a pretty crazy idea” (Newton, 2016, para. 1).

Despite the pledges to do more, a recent study suggested disinformation remained pervasive on Facebook in the lead-up to the 2020 election.  In 2019, Avaaz, a non-profit activist organization fighting disinformation on social media, found “86 million estimated views of disinformation in the last 3 months, which is more than 3 times as many as during the preceding 3 months (27 million)” (Avaaz, 2019, p. 4).  The study focused on posts made in the first ten months of 2019.  According to the analysis, fake reports attacking U.S. political candidates had been viewed 158 million times on Facebook since the beginning of 2019.  To put that number in context, that is enough for every registered voter in the U.S. to have seen such posts at least once.  The study determined that most of the negative disinformation (62%) targeted Democrats or liberals, findings similar to those of studies conducted after the 2016 election.  By contrast, 29% of the disinformation in the sample attacked Republicans or conservatives (Avaaz, 2019).
