Talk:Algorithmic bias

Wiki Education Foundation-supported course assignment
This article was the subject of a Wiki Education Foundation-supported course assignment, between 14 January 2020 and 28 April 2020. Further details are available on the course page. Student editor(s): Botchedway. Peer reviewers: Zachpiroh23.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 17:06, 17 January 2022 (UTC)

Wiki Education Foundation-supported course assignment
This article was the subject of a Wiki Education Foundation-supported course assignment, between 7 July 2020 and 14 August 2020. Further details are available on the course page. Student editor(s): Yaluys.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 17:06, 17 January 2022 (UTC)

Wiki Education Foundation-supported course assignment
This article was the subject of a Wiki Education Foundation-supported course assignment, between 24 August 2020 and 2 December 2020. Further details are available on the course page. Student editor(s): Morgan such.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 17:06, 17 January 2022 (UTC)

Stub for now, plenty more to come
Please give me some time to flesh this stub out. There are plenty of sources available on the topic, which has been widely covered in the pop press (see here, here, and here), but there are also massive reading lists compiled from a variety of high-quality academic resources. See examples here, here, and here. Thanks! -- Owlsmcgee (talk) 06:58, 17 November 2017 (UTC)

Facial recognition
There's a sentence in this section that states "Software was assessed at identifying men more frequently than women..." - should this not read as "Software was assessed as identifying men more frequently than women..."? If the first version is correct, do we not need to know the results? PaleCloudedWhite (talk) 11:10, 9 December 2017 (UTC)
 * Good catch, PaleCloudedWhite. It's been corrected. -- Owlsmcgee (talk) 22:21, 9 December 2017 (UTC)
 * Thanks. PaleCloudedWhite (talk) 12:15, 10 December 2017 (UTC)

Moved section Detection
I have removed this content because it needs to be integrated as prose into the article. Simply stated, writing a Wikipedia article is not preparing a PowerPoint presentation: you need to write, not just list things. Please read MOS:EMBED first, and while you are at it, study WP:RS as well. --Farang Rak Tham (Talk) 21:02, 8 June 2018 (UTC)

There have been several attempts to create methods and tools to detect and observe interacting biases in training data (a minimal sketch of the kind of check these toolkits automate follows the list):
 * Google:
 * Microsoft: bias-dashboard
 * Facebook: fairness-flow
 * Pymetrics: audit-ai (open source)
 * Aequitas: Bias and Fairness Audit Toolkit (open source)
 * IBM:
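
For readers unfamiliar with what these toolkits actually check, below is a minimal sketch in Python of one common test they automate, the "four-fifths rule" for disparate impact. The function names, data, and threshold are illustrative assumptions, not the API of any tool listed above.

# Illustrative sketch: the "four-fifths rule" disparate-impact check
# that audit toolkits like those above commonly automate. Names and
# data are hypothetical, not any listed tool's actual API.

def selection_rates(outcomes):
    # outcomes: dict mapping group name -> list of 0/1 decisions
    return {group: sum(d) / len(d) for group, d in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    # Flag each group by whether its selection rate is at least
    # `threshold` times the highest group's rate.
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best >= threshold for group, rate in rates.items()}

decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # selection rate 0.750
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # selection rate 0.375
}
print(four_fifths_check(decisions))
# {'group_a': True, 'group_b': False} -- group_b fails the 80% test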

Second GA nomination
Currently, the GA nomination has already been failed. However, I see you have been working very hard to improve the article. If you would like to continue pursuing GA status, you could consider nominating the article again, and I can review it, if you'd like me to. --Farang Rak Tham (Talk) 05:58, 5 July 2018 (UTC)
 * I have re-submitted for GA. If you're happy to review it based on the feedback and responses here, I'd be grateful for your efforts! Thanks. --Owlsmcgee (talk) 21:11, 6 July 2018 (UTC)

Recent deletion by user:Owlsmcgee: Lack of consensus about goals and criteria
@Owlsmcgee: The two paragraphs deleted in this modification illustrate new ways people are discriminated against: in one case because of internal company politics (should loans be granted based on short-term profit or on the company's public image, i.e. long-term goodwill?), and in the other case because the interests of a quasi-monopolist conflict with those of society at large (users who have Facebook as their news source are discriminated against in the news they receive, being offered news that generates more clicks rather than news that better reflects society; users who have YouTube as their opinion-forming content are discriminated against by being offered ever more radical videos in an attempt to keep them watching).

Maybe these paragraphs should, instead of being deleted, be relocated in the article as emergent or unintended impacts?

I believe it is always more challenging to do impartial scientific research in a politically charged environment, but I agree it should not be. The issue is rather the challenge of taking any corrective action on the bias when the cause is a conflict between the actors' short-term intentions and their own long-term interests, or the welfare of society at large. The article seems to lack a section on "obstacles to corrective action". Cobanyastigi (talk) 08:57, 29 July 2018 (UTC)

FAC
I saw a TED talk on this subject! It was absolutely fascinating! It began by asking the audience, 'how many believe computer data is dependably neutral and always fair?' Then he went on to spend a good half an hour talking about what amounted to a version of 'garbage in, garbage out'. His two primary examples were about a serial killer in Sweden, if I remember correctly, and Michelle Obama--who knew she had raised such emotional responses? His company had to prevent any mention of the two names for a few weeks, because the programmers were upset enough that they were entering data in a manner that created all kinds of false outcomes. I wish I felt qualified to review this for you, because it seems timely and important. I just wanted to wish you well and tell you I really hope you are successful! Good luck! Jenhawk777 21:09, 19 August 2018 (UTC)

Definition
The article body rightly says "algorithmic bias describes systematic and repeatable errors that create unfair outcomes, such as privileging one arbitrary group of users over others". But the lead described algorithmic bias as due to the values of the designers. One very important thing we have learned is that algorithms can easily be biased without *any* intention on the part of the designers to introduce bias, and indeed in ways that are wholly antithetical to the values of these designers. NBeale (talk) 22:53, 27 March 2019 (UTC)
 * Been on a Wikibreak but came back and saw this and wanted to say: that was a great edit to a highly visible portion of the text. Thank you! Owlsmcgee (talk)

Re-adding content that I removed
Lauraham8, I had removed a portion of content under the Google Search sub-heading which listed all the Google search results for which biased content comes up. I deleted that portion because I feel that the paragraph immediately following the list sums up and gets the point across to the reader that Google's algorithm is prone to influence by systemic biases. I feel that the list will become a dumping ground for every vulgar or derogatory cliché term that can be googled, and thus will degrade the overall quality of the article.

However, after my deletion, you re-added the same content without any indication of why you thought my line of reasoning was wrong. I'd like to ask you to explain here why you re-added the content, so that I can understand your views on the topic. Sohom Datta (talk) 15:43, 11 November 2019 (UTC)

Clarification needed
I added a clarify tag next to the following text: "However, FAT has come under serious criticism itself due to dubious funding." I think this sentence is really vague and ominous-sounding, and we ought to expand it using details from the source text. But I don't have time to make the required edits right now, so I'm flagging this here for someone to address. Qzekrom (she/her • talk) 05:38, 8 September 2020 (UTC)

 * Thanks for the suggestion, Qzekrom. While I didn't add the original line, I hope the sentence now reflects the referenced article better. -- Owlsmcgee (talk)

Anti-LGBT Bias
I've transferred this text from Machine learning to be incorporated in the article:

In addition to racial bias in machine learning, its usage in airport security has been found to target those who do not identify within the gender binary. In 2010, the Transportation Security Administration (TSA) started to use machine learning for bomb detection scanning. However, design justice advocates have found that the algorithm for TSA machine learning technology also detects bodies that may not align with the constructs of cisnormativity. When using ML bomb detection scanning technology, TSA agents are trained to search a person based on whether they are male or female. This has been shown to be harmful towards people who identify as transgender, non-binary, gender-fluid, gender-nonconforming, or intersex. In 2018, nonbinary trans-feminine author and design justice advocate Sasha Costanza-Chock wrote on airport security's transition to AI: "I worry that the current path of A.I. development will produce systems that erase those of us on the margins, whether intentionally or not, whether in a spectacular moment of Singularity or (far more likely) through the mundane and relentless repetition of reduction in a thousand daily interactions with A.I. systems that, increasingly, will touch every domain of our lives." Many other people have shared their stories online about coded bias resulting from the use of machine learning in airport security procedures. For example, transgender women will almost always be flagged because their genitalia do not match the scanning guidelines set by the Advanced Imaging Technology (AIT) scanner. Since the implementation of AI-based protocols in 2010, the TSA has faced backlash from queer and transgender advocacy groups, such as the National Center for Lesbian Rights, the National Center for Transgender Equality, and the Transgender Law Center. All argue that despite the TSA's commitment to unbiased security measures, AIT and machine learning are built on biased data sets that enforce a system of oppression against people who do not identify as cisgender.

A separate "Algorithm (social media)" article?
There's a proposal over at Talk:Algorithm to create a section, or perhaps a separate article, treating "algorithm" as a general-audience topic, as opposed to a CS/math article, since it's been talked about in the media so much. I took a shot at this at User:Hirsutism/Algorithm (social media). Is this useful? Should the scope be limited to social media algorithms, or should it cover all AI algorithms?

Wikipedia already has a wealth of such articles — this article, filter bubble, algorithmic radicalization, algorithmic transparency, regulation of algorithms, right to explanation, etc. But would a central article trying to define algorithm for a general audience be useful? --Hirsutism (talk) 15:40, 5 September 2021 (UTC)

Article Review
I thought the article was very well written, but it could benefit from talking more about A.I. in other countries, expanding beyond just Europe, the United States, and India. Another country to consider would be China, as it has invested a lot in A.I. as well.

67.220.29.31 (talk) 00:06, 6 October 2021 (UTC)

Merger proposal
It has been proposed that Fairness (machine learning) be merged into this page. Qzekrom (she/her • talk) 06:18, 16 February 2022 (UTC)

 * My own opinion is weak oppose: it's natural to have separate articles for related concepts. Currently, Algorithmic bias serves as a general, interdisciplinary overview of the subject, whereas Fairness (machine learning) covers the technical details of fairness measures used in machine learning. In my opinion, Algorithmic bias should do more to discuss technical solutions at a high level, whereas Fairness (machine learning) should explain them in greater depth.
 * Instead, I suggest merging Fairness (machine learning) with Fairness measure, an article on the fairness measures used in network engineering, as there are now papers that propose adapting those measures (as well as ones from welfare economics) to the artificial intelligence context (e.g. Chen and Hooker 2021); a minimal sketch of one such measure appears after this list. Qzekrom (she/her • talk) 04:51, 21 February 2022 (UTC)
 * My opinion is also a weak oppose. The Algorithmic bias page is focused on bias in general, while Fairness (machine learning) is focused on fairness definitions, a heavily debated topic in the literature, and on techniques for mitigating bias in machine learning, another very active area of scientific research nowadays. I think it deserves its own page, referenced here (as it is now). I am also not sure about the merge-with-Fairness measure proposal above, since Fairness (machine learning) is not entirely dedicated to fairness measures and assessment, but also to mitigation of bias. Deltasun (talk) 10:23, 3 April 2022 (UTC)
 * Strong oppose. The politics of machine learning needs to be even further separated from phrases like "algorithmic bias". If anything, the present article on "algorithmic bias" should be merged under "algorithmic fairness". The field of machine learning is, via "algorithmic information", just starting to recover from several decades of technically biased criteria for model selection, due to the description languages of those information criteria being statistical rather than algorithmic. Algorithmic information clarifies what is and is not "algorithmic bias" in a rigorous manner -- but it is completely, and by definition, utterly devoid of any notion of "ought", just as is science. A scientifically true statement may be fair or unfair, just like reality. Jim Bowery (talk) 20:13, 16 April 2022 (UTC)
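
For illustration of the kind of network-engineering measure mentioned in the merge suggestion above, here is Jain's fairness index in a few lines of Python. The function name and example allocations are my own; this is a sketch, not anything taken from the Fairness measure article itself.

# Jain's fairness index: J(x) = (sum x_i)^2 / (n * sum x_i^2).
# It ranges from 1/n (one party receives everything) to 1.0
# (a perfectly even allocation). Illustrative sketch only.

def jains_index(allocations):
    n = len(allocations)
    total = sum(allocations)
    squares = sum(x * x for x in allocations)
    return (total * total) / (n * squares)

print(jains_index([1, 1, 1, 1]))  # 1.0  -> perfectly even
print(jains_index([4, 0, 0, 0]))  # 0.25 -> maximally uneven (1/n)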

Vampire image
I suspect this example might be vaguely disturbing to some readers. I added a WP:OR tag because it needs a source; I would be in favor of removing it, though, as I am not sure how it helps illustrate the points in the article. Caleb Stanford (talk) 03:51, 11 July 2022 (UTC)