Talk:Online hate speech

Wiki Education Foundation-supported course assignment

This article was the subject of a Wiki Education Foundation-supported course assignment between 24 August 2020 and 14 December 2020. Further details are available on the course page. Student editor(s): Iluvcats34.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 01:52, 18 January 2022 (UTC)[reply]

Article needs incorporation of criticism and controversy

This article could use a section (or could incorporate criticism into the article) covering critiques of how countering online hate speech is implemented, which is a controversial issue. What constitutes hate speech is certainly not a black-and-white question, and some jurisdictions have much looser definitions than others, which muddies the water. Also relevant are the issues of freedom of expression and censorship. Someone with a little better writing skill than me could elaborate on the problems with countering online hate speech in this article. Xanikk999 (talk) 18:30, 15 September 2019 (UTC)[reply]

Agree. Zezen (talk) 17:04, 15 September 2020 (UTC)[reply]

Ethiopia

Let us add

https://www.vice.com/en_us/article/xg897a/hate-speech-on-facebook-is-pushing-ethiopia-dangerously-close-to-a-genocide Zezen (talk) 17:04, 15 September 2020 (UTC)[reply]

Article should be updated

What I am changing:

Intro:
  • adding a 2020-relevant sentence to the introduction
  • fixing grammatical and writing errors

Definition:
  • fixing grammatical and writing errors

Characteristics:
  • adding more information on trolling
  • adding information on real-name policies in China and South Korea
  • adding information about AI systems
  • fixing grammatical and writing errors
  • moving "stormfront precedent" to the Frameworks section

Frameworks:
  • moving "stormfront precedent" to here
  • fixing grammatical and writing errors

Social responses (Case studies):
  • adding case study on online gaming
  • adding case study on Anti-Asian hate & COVID-19
  • adding case study on social media's impact on opinion of social justice
  • adding case study on the alt-right's impact on social media & the 2016 election
  • fixing grammatical and writing errors

Social responses (Private companies):
  • adding information about Facebook and including Instagram
  • adding information about YouTube
  • adding a section for TikTok
  • fixing grammatical and writing errors

Throughout:
  • adding citations to information that lacks them or does not have a full citation
  • fixing grammatical and writing errors

I will also add a section about Ethiopia, as another user suggested. — Preceding unsigned comment added by Iluvcats34 (talkcontribs) 19:31, 6 December 2020 (UTC)[reply]

Merger proposal

The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
To not merge on the grounds that racism is a distinct subtopic of hate speech that warrants separate coverage. Klbrain (talk) 12:08, 27 March 2022 (UTC)[reply]

I think that racism on the Internet should be merged into this article, as it covers the same subject. PhotographyEdits (talk) 15:57, 2 November 2021 (UTC)[reply]

Oppose - Online hate speech is only one aspect of racism found on the internet. There are aspects such as facial recognition software used for online image processing being biased towards or against images of people of different races, as well as algorithms used by various internet organisations demonstrating racial biases. Then again, some news organizations are, or have been, racially biased in their reporting, but that is not hate speech. Similarly, racism is only one aspect of online hate speech, with several other possibilities, including age, sex, religion, health status, employment status, nationality and various belief systems, all coming to mind. Yes, the two subjects intersect, but one is not a subset of the other, so they should be kept as separate articles. - Cameron Dewe (talk) 08:59, 17 February 2022 (UTC)[reply]
Oppose - I agree with the explanation provided by user Cameron Dewe: online hate speech is not merely an example of racism on the internet, but rather a full set of anti-social and offensive behaviors, akin to internet trolling, whose purpose is to humiliate and dehumanize other people through online communication and electronic devices by attacking or denigrating their religion, ethnic origin, sexual orientation, disability, and/or gender,[1][2][3] and it is most frequently and notably perpetrated against people who identify with non-normative gender identities and sexual orientations (primarily LGBTQIA+ people).[3] As you can see, racism is not even the main form of online hate speech,[3] so I don't see the point of merging these two articles, since they are concerned with two different topics. GenoV84 (talk) 18:40, 14 March 2022 (UTC)[reply]

References

  1. ^ Johnson, N. F.; Leahy, R.; Johnson Restrepo, N.; Velasquez, N.; Zheng, M.; Manrique, P.; Devkota, P.; Wuchty, S. (21 August 2019). "Hidden resilience and adaptive dynamics of the global online hate ecology". Nature. 573 (7773). Nature Research: 261–265. Bibcode:2019Natur.573..261J. doi:10.1038/s41586-019-1494-7. ISSN 1476-4687. PMID 31435010. S2CID 201118236.
  2. ^ Gagliardone, Iginio; Gal, Danit; Alves, Thiago; Martinez, Gabriela (2015). Countering Online Hate Speech (PDF). Paris: UNESCO Publishing. pp. 7–15. ISBN 978-92-3-100105-5. Archived from the original on 13 March 2022. Retrieved 13 March 2022.
  3. ^ a b c Powell, Anastasia; Scott, Adrian J.; Henry, Nicola (March 2020). Treiber, Kyle (ed.). "Digital harassment and abuse: Experiences of sexuality and gender minority adults". European Journal of Criminology. 17 (2). Los Angeles and London: SAGE Publications on behalf of the European Society of Criminology: 199–223. doi:10.1177/1477370818788006. ISSN 1741-2609. S2CID 149537486. A key feature of contemporary digital society is the integration of communications and other digital technologies into everyday life, such that many of us are 'constantly connected'. Yet the entangling of the social and the digital has particular implications for interpersonal relationships. Digital harassment and abuse refers to a range of harmful interpersonal behaviours experienced via the internet, as well as via mobile phone and other electronic communication devices. These online behaviours include: offensive comments and name-calling, targeted harassment, verbal abuse and threats, as well as sexual, sexuality and gender based harassment and abuse. Sexual, sexuality and gender-based harassment and abuse refers to harmful and unwanted behaviours either of a sexual nature, or directed at a person on the basis of their sexuality or gender identity.
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Significant re-writing of "Stormfront precedent" section needed

The section currently reads as follows (as of 5 July 2022):

In the aftermath of 2014's dramatic incidents, calls for more restrictive or intrusive measures to contain the Internet's potential to spread hate and violence are common, as if the links between online and offline violence were well known. On the contrary, as the following example indicates, appearances may often be deceiving. Stormfront is considered the first "hate website." Launched in March 1995 by a former Ku Klux Klan leader, it quickly became a popular space for discussing ideas related to Neo-Nazism, White nationalism and White separatism, first in the United States of America and then globally. The forum hosts calls for a racial holy war and incitement to use violence to resist immigration. and is considered a space for recruiting activists and possibly coordinating violent acts. The few studies that have explored the identities of Stormfront actually depict a more complex picture. Rather than seeing it as a space for coordinating actions. Well-known extreme right activists have accused the forum to be just a gathering for "keyboard warriors." One of them for example, as reported by De Koster and Houtman, stated, "I have read quite a few pieces around the forum, and it strikes me that a great fuss is made, whereas little happens. The section activism/politics itself is plainly ridiculous. [...] Not to mention the assemblies where just four people turn up." Even more revealing are some of the responses to these accusations provided by regular members of the website. As one of them argued, "Surely, I am entitled to have an opinion without actively carrying it out. [...] I do not attend demonstrations and I neither join a political party. If this makes me a keyboard warrior, that is all right. I feel good this way. [...] I am not ashamed of it." De Koster and Houtman surveyed only one national chapter of Stormfront and a non-representative sample of users, but answers like those above should at least invite to caution towards hypotheses connecting expressions and actions, even in spaces whose main function is to host extremist views. The Southern Poverty Law Center published a study in 2014 that found users of the site "were allegedly responsible for the murders of nearly 100 people in the preceding five years."


There are many problems with this paragraph.

  • The first two wikilinks redirect to ISIL and Islamic terrorism respectively. That seems to be evidence of anti-Islamic bias on the original writer's part, as I am unsure to which "dramatic incidents" they are referring. In any case, Islamist terrorism has nothing to do with this section.
  • The argument of this paragraph seems to be that the Stormfront website is not worthy of being considered a threat to anybody offline, and thus, more generally, that the link between online hate speech and real-life violence should be questioned. However, as the last sentence shows, Stormfront was directly linked to the deaths of nearly 100 people.
  • What is the "Stormfront precedent"? Nowhere else online does anybody refer to this phrase, apart from wiki clones.
  • This paragraph thus reads like white nationalist apologism, or at the very least Stormfront apologism.

I would suggest moving the bulk of this paragraph to the Case Studies section, with significant revision. Due to the genuine and documented real-world effects of Stormfront, as mentioned at the bottom of the paragraph and further at https://www.thedrum.com/news/2017/08/28/stormfront-one-internets-oldest-white-supremacist-sites-knocked-offline, Stormfront was taken offline briefly in 2017. This article should include the nuance that some neo-Nazis view the site as a gathering place for less extreme individuals to vent rather than an impetus for action, but it should conclude that, on balance, there is in this case a genuine link between online and offline violence.

A draft re-write of my own is as follows:

Stormfront is considered the first "hate website." Launched in March 1995 by a former Ku Klux Klan leader, it quickly became a popular space for discussing ideas related to Neo-Nazism, White nationalism and White separatism, first in the United States of America and then globally. The forum hosts calls for a racial holy war and incitement to use violence to resist immigration,[original citation] and is considered a space for recruiting white nationalists and possibly coordinating violent acts. Within the white nationalist activism sphere, though, some accuse the forum of being merely a space for "keyboard warriors". As reported by De Koster and Houtman,[original citation] one white nationalist stated, "I have read quite a few pieces around the forum, and it strikes me that a great fuss is made, whereas little happens. The section activism/politics itself is plainly ridiculous. [...] Not to mention the assemblies where just four people turn up." Some regular members of the forum argue that this is a positive attribute: "Surely, I am entitled to have an opinion without actively carrying it out. [...] I do not attend demonstrations and I neither join a political party. If this makes me a keyboard warrior, that is all right. I feel good this way. [...] I am not ashamed of it." However, neither opinion seems to align with the facts. In 2014, the Southern Poverty Law Center published a study that found users of the site "were allegedly responsible for the murders of nearly 100 people in the preceding five years."[original citation] Stormfront.org was taken offline by its domain registrar in 2017 due to pressure from a campaign by the Lawyers' Committee for Civil Rights Under Law, following the takedown of the far-right news website the Daily Stormer under similar circumstances. These takedowns followed the 2017 Unite the Right rally in Charlottesville, Virginia, in which a white supremacist killed a peaceful counterprotester and the general public saw the extent of the rising neo-Nazi and white supremacist movement in the United States, which primarily organised through online forums like Stormfront.[citation=https://www.theguardian.com/technology/2017/aug/29/stormfront-neo-nazi-hate-site-murder-internet-pulled-offline-web-com-civil-rights-action] The takedown lasted only a few months, though, and as of July 2022, the site remains active. In this case, the argument for freedom of speech and expression has won out over the argument for protecting the safety of minorities against hate movements. Zed.eds (talk) 21:46, 5 July 2022 (UTC)[reply]

No hate speech

No hate speech! 2600:6C5A:557F:E0DD:8C86:E9BC:F83F:881F (talk) 20:52, 19 July 2023 (UTC)[reply]