User:RESphil/sandbox

"Fake News and Partisan Epistemology" is the title of a philosophical paper by Regina Rini published in June 2017 in the Kennedy Institute of Ethics Journal.

In this paper, Rini analyzes the concept of "fake news", specifically in relation to the United States and modern politics. Rini presents four main goals, one in each section of the paper. The respective goals are (1) to analyze fake news, (2) to identify unusual epistemic features of social media testimony as a means of transmitting news, (3) to argue that partisanship can be individually reasonable and epistemically virtuous, and (4) to argue that the harms caused by partisanship should be addressed at a systemic and institutional level rather than an individual one. As part of her final goal, Rini offers a possible solution in the form of "reputation scores," which would be calculated from the reliability of the information a user has shared in the past. She clarifies that this would not be censorship but rather an awareness mechanism, letting others see how often the information a particular user shares is disputed, false, or misleading.

Drawing on all four goals of her paper, Rini submits as her thesis that people believe fake news because it is transmitted on social media through a bent but epistemically justifiable form of testimony, one that gains credibility from partisan affiliation and from the lack of institutional arrangements governing social media testimony.

= Background =

On Regina Rini
As of 2021, Regina Rini is an assistant professor in the Philosophy Department at York University, where she holds the Canada Research Chair in Philosophy of Moral and Social Cognition. She is also an affiliated fellow at the University of Southern California’s Philosophy Academy.

Prior to these positions, she received her undergraduate degree from Georgetown University, graduating with a Bachelor of Arts in Philosophy. She received her Master of Arts and Doctor of Philosophy from New York University. After receiving her PhD in 2011, she was a postdoctoral research fellow in Moral Cognition at the University of Oxford until 2014. From 2014 until 2017, she held an assistant professor and faculty fellow position at the NYU Center for Bioethics. In 2017, she joined the faculty at York University, where she has remained since.

Rini is a prolific writer and presenter, and she focuses on topics including moral agency, the psychology of moral judgment, partisanship in political epistemology, and the moral status of artificial intelligence.

Additional information on Rini can be found on her own website, which features a copy of her 2018 CV, as well as on her most recent faculty profile from York University.

On Fake News
Fake news is recognized as false or misleading information presented as news that contains few or no verifiable facts, sources, or quotes. Fake news consists of misinformation and/or disinformation and can take many forms. According to Claire Wardle of First Draft News, the types of misinformation or disinformation include:

 * 1) Satire or Parody
 * 2) Misleading Content
 * 3) Imposter Content
 * 4) Fabricated Content
 * 5) False Connection
 * 6) False Context
 * 7) Manipulated Content

While fake news has existed for centuries in different mediums, its prevalence has increased especially in the 21st century with the rise of social media and the ability for information to be rapidly disseminated across platforms. Stories, fake or otherwise, can be published, consumed, and shared by any Internet user quickly and with little verification. In 2016, the Pew Research Center found that over 60% of Americans access news through social media, and an MIT Initiative on the Digital Economy research study found that false political information tends to spread 3 times faster than other false news.
Fake news was created and shared at particularly substantial rates during the 2016 presidential election in the United States. Following the election and public concern about fake news, Facebook began using independent fact-checkers to label false news. During the 2020 presidential election in the United States, Twitter labeled fake news and, in some cases, suspended accounts sharing fake news.

Other social media platforms took similar initiatives against fake news in the wake of the COVID-19 pandemic. In November 2020, YouTube temporarily suspended One America News Network for promoting a false cure for the virus, and Twitter continued to label fake or misleading posts.

Despite these efforts, many remain concerned about fake news, and the growing concern has prompted news recipients to scrutinize media outlets, lawmakers to vow action, and researchers and philosophers like Rini to analyze fake news.

= Paper Goals and Section Summaries  =

Section 1: Analysis of Fake News
In Section 1, Rini begins her paper by elaborating on her understanding of "fake news." Rini asserts that fake news is not "merely false information conveyed by reportage" but also involves a specific type of deception in which the transmission of fake news has two goals. One goal is for the fake news to be widely re-transmitted, that is, passed on many times so that it reaches a larger group than the one it was initially shared with. The other goal is for the fake news to deceive at least some portion of its audience. According to Rini, intentional deception is an important distinction from instances in which journalists make honest errors. Rini clarifies that the creators of fake news may not be motivated purely by deception; for instance, one could have monetary motivations such as gaining advertising revenue from ad clicks on a website. A distinction can be made based on the underlying purpose of the fake news: "pure fake news" is produced primarily to deceive as many readers as possible, while "impure fake news" uses deception to spread the story for some other motive, often financial. However, for fake news to spread effectively, some proportion of those who interact with it must be deceived in either case. Therefore, there must be some intent to deceive, since deception is a necessary condition for the spreading of fake news.

From these conditions, Rini provides her ultimate definition of fake news: "A fake news story is one that purports to describe events in the real world, typically by mimicking the conventions of traditional media reportage, yet is known by its creators to be significantly false, and is transmitted with the two goals of being widely re-transmitted and of deceiving at least some of its audience."

Section 2: Unusual Epistemic Features of Social Media Testimony
In Section 2, Rini argues that fake news transmitted through social media is a bent form of testimony. She begins her argument by explaining the epistemology of testimony. She points out that a person believes a proposition on the basis of testimony when that person believes it because it is presented to them by another person. Since we often rely on others for knowledge, this is typically considered an epistemically virtuous practice. Sensible use of testimony tempers default acceptance with norms for rejecting suspect testimony. However, Rini later claims that such norms seem routinely violated by social media users “who accept extraordinary stories about political enemies [such as Pizzagate] on mere say-so.”

After discussing the epistemology of testimony, Rini contends that social media transmission of fake news is a form of testimony. She provides the example of a Facebook post claiming that former President Donald Trump threatened to deport Lin-Manuel Miranda despite Miranda having American citizenship. This is untrue, but someone may believe the story because it was presented as the truth by another person, namely whoever posted or shared the Facebook story. However, Rini notes that the relationship between the poster, or testifier, and the content of the story, or the testimony, is hard to categorize. Unlike older forms of communication with established norms, social media has disputed norms. Questions like “Is a retweet an endorsement?” become contentious since the norms of accountability are undefined. Rini provides the example of Donald Trump retweeting an infographic with false statistics and a made-up claim that African-Americans are responsible for 81% of homicides in which the victims are white. Rini posits that people are less inclined to subject fake news stories, however ridiculous some may appear, to scrutiny because testimonial norms on social media are unstable.

Rini summarizes that social media sharing of fake news is a bent form of testimony for two reasons:


 * 1) The epistemic relationship between testifier and testimony is ambiguous since there are no established norms.
 * 2) The suspension of default acceptance of testimony or use of norms for rejecting suspect testimony is suppressed in social media sharing.

Utilizing the above, Rini suggests that it is plausible to conclude that the bent aspects of social media testimony play a role in the transmission of fake news.

Section 3: Epistemic Virtue of Partisanship
In Section 3, Rini takes a more normative approach, arguing that the testimonial practices that propel fake news are actually consistent with individual epistemic virtue, despite the harmful nature of the fake news itself. She further argues that it is partisanship specifically that makes these practices reasonable and consistent with epistemic virtue. Rini emphasizes that although she is arguing that partisanship and the circulation of fake news can rest on solid epistemic grounds, she is not claiming that partisanship or the circulation of fake news are inherently good things.

Introducing the idea of partisanship in testimony, Rini argues that people are more likely to accept the observations and ideas of those with whom they share political beliefs. She suggests that politically normative claims, morally normative claims, character judgments of political candidates, and descriptive political claims all fall within the domain of politically relevant claims, and she argues that partisanship in testimony reception is present in the evaluation of all of these types of claims.

Rini asserts that partisan affiliation indicates to others a person’s value commitments. She argues that two people who share a common partisan affiliation may deem each other better judges of normative questions than two people who do not because of their perception of one another’s value commitments. She also mentions Adam Elga’s argument that a person will only see another as an epistemic peer if they agree on a wide range of issues. If two people disagree on many issues, Elga suggests that they will see one another as often mistaken and hold the other’s claims in a lower regard.

When fake news is shared, Rini argues, the audience can evaluate the source's value judgments based on the normative decisions reflected in what the source chooses to share. This is especially the case, according to Rini, when the information concerns character judgments of political figures, because selective presentation can paint almost any desired picture. People often rely on whether the source shares their partisan affiliation to determine whether its portrayal of a political figure should be taken as accurate.

While Rini does not argue that people should believe anyone who shares their partisan affiliation, she does believe that people can give greater weight to the testimony of those who do share their affiliation. The common value judgments and commitments this shared affiliation indicates can allow a person to reasonably believe that another person has evaluated information in a compatible way with their own worldview.

Section 4: Institutional Solutions
In Section 4, Rini considers potential solutions to reduce the bentness of social media testimony. Rini provides an example of a social media platform that has tried to address the problem of fake news. Facebook, a social media platform used by roughly 2.8 billion people monthly, created an option for users to report stories on their timelines that have the potential to be false. Based on the number of reports a story received, Facebook management had the option to flag or label the story as false information. This action was taken in 2016, but the problem of fake news persists.

To rectify this issue, Rini first considers the implementation of a norm of accountability on social media. In theory, if social media platforms held users to a higher standard of accountability, users would unambiguously understand that they lend their testimonial endorsement to what they share and are accountable to the people who believe their social media testimony, even if it is fake news. However, Rini recognizes that addressing partisanship or individual epistemic virtue is unlikely to work, so she instead suggests that social media platforms implement institutional solutions rather than individual ones. She recommends that platforms provide infrastructure to track each user's testimonial reputation. If a user continuously posts information that is accurate, their testimonial reputation would remain stable or perhaps increase. Conversely, if a user continuously posts information that is deceptive, their testimonial reputation would be diminished. Users could then consult other users' testimonial reputations and evaluate their claims accordingly. In theory, according to Rini, such institutional changes would resolve the ambiguous norms that make social media testimony bent.
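Rini's paper describes the reputation-score mechanism only in outline. A minimal sketch of one way such tracking could work, assuming a hypothetical scoring rule (the fraction of a user's shares that were never flagged as disputed; neither the rule nor the names below come from Rini's paper), might look like:

```python
from dataclasses import dataclass

@dataclass
class UserReputation:
    """Tracks one user's testimonial reputation. The scoring rule
    below (fraction of shares never flagged) is a hypothetical
    illustration; Rini's paper does not specify an algorithm."""
    shares: int = 0    # total stories the user has shared
    disputed: int = 0  # shares later flagged as disputed, false, or misleading

    def record_share(self, was_disputed: bool) -> None:
        self.shares += 1
        if was_disputed:
            self.disputed += 1

    def score(self) -> float:
        """Reputation in [0, 1]: the fraction of shares not disputed.
        A user with no track record defaults to 1.0."""
        if self.shares == 0:
            return 1.0
        return 1 - self.disputed / self.shares

# A user who shared 10 stories, 4 of which were later flagged
rep = UserReputation()
for flagged in [True, False, False, True, False, True, False, False, True, False]:
    rep.record_share(flagged)
print(round(rep.score(), 2))  # prints 0.6
```

Consistent with Rini's clarification that the proposal is not censorship, such a score would merely be displayed alongside a user's posts so that others could weigh their testimony accordingly.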

= Examples of Fake News =

Rini explicitly mentions multiple examples of fake news throughout her paper. One example is a retweet from former United States President Donald Trump. The content of his November 2015 retweet was a set of false statistics claiming that 81% of white homicide victims are killed by African Americans. In her paper, Rini raises the question of whether a retweet is an endorsement of the content's truth. Many who retweet fake news claim that they do not endorse its content but rather intend to elicit discussion. However, many argue that it should be generally understood that someone who retweets information believes its content to be true. Since the norms of testimony on social media are unstable, it is not universally agreed or acknowledged that sharing is asserting one's own beliefs.

Another example that Rini provides is the set of 2016 social media posts claiming that a Satanic cult frequently met in the basement of a pizza parlor to sexually abuse children. This claim included the allegation that Hillary Clinton was a member of the cult. A YouGov survey showed that 46% of Trump voters believed that a Satanic cult sexually abused children in the basement of this pizza parlor and that Hillary Clinton was a member of the cult. On December 4, 2016, Edgar Welch drove from North Carolina to Washington, D.C. with an AR-15 rifle in his possession, stating that he planned to self-investigate the pizza parlor. Welch found no children or Satanic cult, and he was ultimately arrested. In her paper, Rini explains that people ordinarily hold logical beliefs about the world; however, sharing on social media leads people to suspend those beliefs and accept information as truth, even when it is discernibly false. Rini calls this phenomenon "bent" testimony. Sharing information on social media creates an ambiguous relationship between the person sharing and the information they share: it is unclear whether the sharer truly believes the information they pass along to be true. Ultimately, bent testimony contributes to how fake news is spread.

Additional examples of viral fake news not explicitly mentioned in Rini's paper include:


 * A story that former United States President Barack Obama signed an Executive Order to ban the Pledge of Allegiance in schools. The story gained over 2 million interactions on Facebook but was false.
 * A story that Pope Francis endorsed Donald Trump for President of the United States. The story was created by a fake news site and gained over 960,000 interactions on Facebook, but it was false.
 * A story that Donald Trump sent his own plane to transport 200 stranded marines. The story was shared on multiple major media talk shows but was false.

= Comparisons =

Similar Views and Support
Throughout her paper, “Fake News and Partisan Epistemology,” Rini argues that fake news poses an epistemic threat to the environment in which it circulates. As a result of her work, she has attracted the attention of many philosophers who continue to build on her ideas in order to expand society’s understanding of fake news and to propose potential solutions to it.

For example, Christopher Blake-Turner agrees with some of Rini's key positions. In his paper, “Fake News, Relevant Alternatives, and the Degradation of Our Epistemic Environment,” Blake-Turner says that he wishes Rini had given more attention to holding people accountable for the information they spread, since he thinks doing so would also slow the spread of negative and false beliefs. However, Blake-Turner credits Rini for her approach to and definition of fake news, specifically her clarification that fake news does not consist of “erroneous but good-faith reporting.” Rini's definition allows him to build his own definition of fake news, which requires that reporters not believe a story to be significantly true. This means that fake news could originate solely as a lie, which Rini believes, but it could also originate as bullshit, since both spread false reports. Blake-Turner argues that bullshit, like lying, is a departure from the truth; bullshit simply involves indifference to the truth rather than outright rejection of it.

Christopher Blake-Turner also agrees with Rini that partisanship between two people allows them to form an understanding of how each evaluates certain situations, as well as of each other's ideologies. With such partisanship, people believe that they have increased knowledge and credibility on a subject, such as politics, that they might not have had if they had not conversed and affirmed each other. Both Rini and Blake-Turner believe that when partisanship develops, it can amplify the spread of misinformation, especially when the information is presented and shared via social media. They likewise agree that solutions should be proposed at an institutional level rather than an individual level, since the former is a more effective means of hindering the spread of misinformation.

Contrary Views and Criticism
Throughout her paper, “Fake News and Partisan Epistemology,” Rini argues that partisan affiliation plays a major role in the sharing and creation of fake news. As a result of her work, Rini has also attracted the attention of other philosophers who contest her ideas in their own pursuit of expanding society’s understanding of fake news and proposing potential solutions to it. For example, while Rini argues that fake news recipients often act in an epistemically virtuous manner, C. Thi Nguyen argues that fake news recipients are often epistemically blameless for believing fake news if they were raised in echo chambers. Additionally, Quassim Cassam argues that fake news recipients often act in an epistemically vicious manner if they fail to comply with the norms of responsible inquiry, which dictate that one should maintain a sense that one is in danger of being deceived.

In her paper, Rini also offers a proposal on how to combat fake news: displaying reputational scores based on how frequently social media users share false information. However, not all scholars share Rini’s enthusiasm for her solution. For example, Josef Huber argues that reputation scores following Rini’s model may backfire because her interpretation of users’ motivations is too narrow: displaying the scores could enhance the appeal of “partisans’ testimony qua its contrast to mainstream reportage.” Nyhan and Reifler have similarly argued that such backfire effects could reinforce misinformation in “ideological subgroups.” Huber suggests that reputation scores should instead be used to diversify feeds without displaying the scores to other users. As stated by Huber, “Feeding reputational scores to social media algorithms without displaying them to users would curb the spread of fake news through exposure to a diverse pool of sources and opinions while, at the same time, preventing backfire effects.” The backfire effects Huber highlights include further distrust of mainstream media sources and no effect on the spread of fake news. Huber argues that a grading platform on social media would push readers to further entrench their own positions on fake news.
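Huber's alternative, feeding reputation scores to the ranking algorithm without displaying them, can be sketched as weighted sampling over candidate stories. The function below is a hypothetical illustration only: the names, the weighting rule, and the default score for unknown sharers are all assumptions, not part of Huber's proposal.

```python
import random

def build_feed(candidates, reputation, k=5, seed=0):
    """Hypothetical sketch of Huber's suggestion: reputation scores
    are consumed by the ranking algorithm and never shown to users.
    Stories from low-reputation sharers stay in the pool (preserving
    diversity) but are sampled with proportionally lower probability.

    candidates: list of (story_id, sharer_id) pairs
    reputation: dict mapping sharer_id -> score in (0, 1]
    """
    rng = random.Random(seed)  # fixed seed for a reproducible example
    pool = [(story, reputation.get(sharer, 0.5)) for story, sharer in candidates]
    feed = []
    for _ in range(min(k, len(pool))):
        total = sum(weight for _, weight in pool)
        r = rng.uniform(0, total)
        acc = 0.0
        for i, (story, weight) in enumerate(pool):
            acc += weight
            if acc >= r:
                feed.append(story)
                pool.pop(i)  # sample without replacement
                break
    return feed

stories = [("s1", "alice"), ("s2", "bob"), ("s3", "carol"), ("s4", "dave")]
scores = {"alice": 0.9, "bob": 0.2, "carol": 0.7, "dave": 0.4}
print(build_feed(stories, scores, k=3))
```

Because low-reputation stories are down-weighted rather than removed, this design avoids the visible labeling that Huber argues triggers backfire effects.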

Rini also continuously places paramount importance on how partisan affiliation reflects one’s own moral values and, therefore, on how people can and do use it as a metric for when to trust someone and to what degree. However, Rini's paper does little to acknowledge other, broader issues at hand, such as social media regulation, mass media culture, and the digital constructs of modern society, and how such issues may affect fake news. According to the Center for Information Technology and Society at UC Santa Barbara, fake news spreads primarily through Facebook. One example of this can be found in the 2016 elections, where the primary way fake news spread was through fake websites routed through the Facebook interface. One way this was done was through bot accounts that randomly added friends and then promoted their fake news account to gain impressions, clicks, and subsequent ad revenue. These accounts often steal other users' information, including their pictures, names, and occupational data, and use it to target certain audiences with the goal of gaining followers and influence. In her account of fake news, Rini focuses on how sharing political views with another person creates a bond that promotes the creation and sharing of fake news. However, she neglects the potential spread and impact of fake news from fake accounts, which, by their nature, cannot genuinely share partisanship with users. A further criticism of Rini's account is that, although she asserts that the only way to fix the issues brought on by fake news on social media is through institutional change, she does not detail what these changes would look like. This is especially noteworthy since one can argue that her proposed solution is already being employed by some social media platforms, which have fake news monitoring systems that remove fake news and bot accounts.

= References =