User:Penguinblueberry/sandbox

Draft
DeepFace

DeepFace[edit]
DeepFace is a deep learning facial recognition system created by a research group at Facebook. It identifies human faces in digital images published on the website. The program employs a nine-layer neural network with over 120 million connection weights and was trained on four million images uploaded by Facebook users. DeepFace shows human-level performance: the Facebook Research team has stated that the DeepFace method reaches an accuracy of 97.35% ± 0.25% on the Labeled Faces in the Wild (LFW) data set, where human beings score 97.53%. This means that DeepFace is sometimes more successful than human beings.

Commercial Rollout[edit]
Origin

DeepFace was produced by a collection of Facebook’s artificial intelligence scientists, including Yaniv Taigman and Facebook research scientist Ming Yang. They were also joined by Lior Wolf, a faculty member at Tel Aviv University. Taigman came to Facebook when Facebook acquired Face.com in 2012.

Facebook started rolling out the technology to its users in early 2015, continuously expanding DeepFace's use. According to the director of Facebook’s artificial intelligence research, DeepFace is not intended to invade individual privacy. Instead, it alerts individuals when their face appears in any photo posted on Facebook. When they receive this notification, they have the option of removing their face from the photo.

European Union

When the technology was initially deployed, users had the option to turn DeepFace off but were not notified that it was on. DeepFace was not released in the European Union due to data privacy laws there. Local technology regulators in Europe argued that Facebook’s facial recognition did not comply with EU data protection laws because users had not consented to all uses of their biometric data.

Efficacy in Comparison

DeepFace can identify faces with 97% accuracy, almost the same accuracy as a human in the same position. Facebook’s facial recognition is more effective than the FBI’s technology, which has 85% accuracy. Google’s technology, FaceNet, is more successful than DeepFace on the same data sets, setting a record for accuracy of 99.63%. Google’s FaceNet incorporates data from Google Photos.

Current Uses[edit]
Following the release of DeepFace in 2015, its uses have remained fairly stable. However, as more individuals have uploaded images to Facebook, the algorithm has become more accurate and capable of identifying more and more faces. Facebook’s DeepFace draws on the largest facial recognition dataset that currently exists. Some individuals argue that as Facebook’s facial ID database expands, it could potentially be distributed to government agencies and used in ways that individuals have not allowed. In response to privacy concerns, Facebook removed its automatic facial recognition feature in 2019, allowing individuals to opt in to tagging through DeepFace.

Facebook uses individual facial recognition templates to find photos that an individual appears in so that they can review, engage with, or share the content. Facebook also claims to use facial recognition to help protect individuals from impersonation or identity misuse. Take, for example, an instance where an individual used someone’s profile photo as their own. Through DeepFace, Facebook can identify and alert the person whose information is being misused. To ensure that individuals have control over their facial recognition, Facebook does not share facial templates. Additionally, Facebook will remove images from facial recognition templates if someone has deleted their account or untagged themself from a photo. Individuals also have the ability to turn off facial recognition on Facebook. If they indicate that they do not want Facebook to be able to recognize them in photos and videos, Facebook will cease facial recognition for that individual.

Method[edit]
DeepFace begins by using aligned versions of several existing databases to improve the algorithms and produce a normalized output. However, these models are insufficient to produce effective facial recognition in all instances, so DeepFace uses fiducial point detectors based on existing databases to direct the alignment of faces. The facial alignment begins with a 2D alignment, and then continues with 3D alignment and frontalization. That is, DeepFace’s process has two steps. First, it corrects the angles of an image so that the face in the photo is looking forward, using a 3D model of a face. Then, deep learning produces a numerical description of the face. If DeepFace produces similar enough descriptions for two images, it assumes that the two images share a face.
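The final matching step described above can be sketched as a similarity comparison between two descriptor vectors. This is an illustrative sketch only, not Facebook's actual code: the `same_face` function, the choice of cosine similarity, and the threshold value are all assumptions made for demonstration.

```python
import numpy as np

def same_face(desc_a, desc_b, threshold=0.75):
    """Decide whether two face descriptors likely depict the same person.

    desc_a, desc_b: 1-D feature vectors produced by the network.
    threshold: hypothetical cutoff; a real system tunes this on held-out data.
    """
    a = desc_a / np.linalg.norm(desc_a)
    b = desc_b / np.linalg.norm(desc_b)
    similarity = float(a @ b)  # cosine similarity, in [-1, 1]
    return similarity >= threshold
```

Identical descriptors give a similarity of 1 and are accepted; orthogonal descriptors give 0 and are rejected under any positive threshold.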

2D Alignment

The DeepFace process begins by detecting six fiducial points on the detected face: the centers of the eyes, the tip of the nose, and the location of the mouth. These points are translated onto a warped image to help detect the face. However, 2D transformation fails to compensate for out-of-plane rotations.
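The 2D alignment described above can be illustrated as fitting a similarity transformation (scale, rotation, and translation) that maps the detected fiducial points onto a fixed template via least squares. This is a hypothetical sketch of the general technique, not DeepFace's implementation; the function name and the exact parameterization are invented for illustration.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares 2-D similarity transform (scale + rotation + translation)
    mapping src points onto dst points.

    src, dst: (N, 2) arrays of corresponding fiducial points.
    Returns a 2x3 warp matrix M such that dst ~ M @ [x, y, 1]^T.
    """
    n = src.shape[0]
    # Parameterize the transform as x' = a*x - b*y + tx, y' = b*x + a*y + ty
    # and solve for (a, b, tx, ty) with linear least squares.
    A = np.zeros((2 * n, 4))
    A[0::2, 0] = src[:, 0]; A[0::2, 1] = -src[:, 1]; A[0::2, 2] = 1.0
    A[1::2, 0] = src[:, 1]; A[1::2, 1] = src[:, 0];  A[1::2, 3] = 1.0
    rhs = dst.reshape(-1)
    a, b, tx, ty = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return np.array([[a, -b, tx], [b, a, ty]])
```

Because the fit is restricted to scale, rotation, and translation in the image plane, it cannot account for out-of-plane head rotation, which motivates the 3D step that follows.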

3D Alignment

In order to align faces, DeepFace uses a generic 3D model in which 2D images are cropped as 3D versions. The 3D image has 67 fiducial points. After the image has been warped, 67 anchor points are manually placed on the image to match the 67 fiducial points. A 3D-to-2D camera is then fitted to minimize losses. This step is important because 3D-detected points on the contour of the face can be inaccurate.
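The camera-fitting step can be sketched as a least-squares fit of an affine 3D-to-2D camera matrix P that projects the reference 3D points onto their detected 2D positions. Note that this simplified sketch uses ordinary least squares and a plain affine camera; the simplification and the `fit_affine_camera` name are assumptions made for illustration, not the published method.

```python
import numpy as np

def fit_affine_camera(points_3d, points_2d):
    """Least-squares affine camera P (2x4) mapping 3-D reference points
    to detected 2-D image points: [x, y]^T ~ P @ [X, Y, Z, 1]^T.

    points_3d: (N, 3) reference points on the generic face model.
    points_2d: (N, 2) corresponding detected points in the image.
    """
    n = points_3d.shape[0]
    X = np.hstack([points_3d, np.ones((n, 1))])  # homogeneous (N, 4)
    # Solve X @ P.T ~ points_2d; each column of P.T is fit independently.
    P_T = np.linalg.lstsq(X, points_2d, rcond=None)[0]
    return P_T.T
```

With at least four non-degenerate point correspondences the eight camera parameters are determined, and the fitted P can then be used to re-project the model for frontalization.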

Frontalization

Because full perspective projections are not modeled, the fitted camera is only an approximation of the individual’s actual face. To reduce errors, DeepFace aims to warp the 2D images with smaller distortions. The camera P is also capable of replacing parts of the image and blending them with their symmetrical counterparts.

Reactions[edit]
Industry Reaction

AI researcher Ben Goertzel said Facebook had "pretty convincingly solved face recognition" with the project, but said it would be incorrect to conclude that deep learning is the entire solution to AI.

Neeraj Kumar, a researcher at the University of Washington, said that Facebook’s DeepFace shows how large sets of outside data can result in a “higher capacity” model. Because of Facebook’s wide access to images of individuals, its facial recognition software can perform comparatively better than other software trained on much smaller data sets.

Media Reaction

A Huffington Post piece called the technology "creepy" and, citing data privacy concerns, noted that some European governments had already required Facebook to delete facial-recognition data. According to Broadcasting & Cable, both Facebook and Google had been invited by the Center for Digital Democracy to attend a 2014 National Telecommunications and Information Administration "stakeholder meeting" to help develop a consumer privacy Bill of Rights, but they both declined. Broadcasting & Cable also noted that Facebook had not released any press announcements concerning DeepFace, although its research paper had been published earlier in the month. Slate said the lack of publicity from Facebook was "probably because it's wary of another round of headlines decrying its creepiness."

User Reaction

Many individuals fear facial recognition technology. The technology’s nearly perfect accuracy allows social media companies and the government to create digital identities for millions of Americans. However, individuals' fear of facial recognition and other privacy concerns has not corresponded to a decrease in social media use. Instead, attitudes towards privacy and privacy settings do not have a large impact on an individual’s intention to use Facebook apps. Because Facebook is a social media site, individual fears about privacy are overruled by a desire to participate in social media.

Privacy Concerns[edit]
BIPA Lawsuit

Facebook users filed a class action lawsuit against Facebook under the Illinois Biometric Information Privacy Act (BIPA). Illinois has the most comprehensive biometric privacy legislation, regulating the collection of biometric information by commercial entities. Illinois’ BIPA requires a corporation that obtains a person’s biometric information to obtain a written release, provide notice that the information is being collected, and state how long the information will be stored. The lawsuit alleged that Facebook’s collection of facial identification information for its tag suggestion tool violated BIPA because Facebook did not give notice to or obtain consent from individuals when using the tool. The Ninth Circuit denied Facebook’s motion to dismiss the case and ultimately certified the class. Facebook sought to appeal the Ninth Circuit's class certification, and the appeal was granted. Facebook claimed that the class should not have been certified because the plaintiffs had not alleged any harm beyond Facebook’s violation of BIPA. In response to the concerns raised in the lawsuit, Facebook removed its automatic facial recognition tagging feature in 2019. Facebook proposed a $550 million settlement, which was rejected; when Facebook increased the settlement to $650 million, the court accepted it. Facebook was ordered to pay the $650 million settlement in early March 2021, under which 1.6 million Illinois residents will each receive at least $345.

Racism in Facial Identification Technology

Facial recognition algorithms are not universally successful. While the algorithms are capable of classifying faces with over 90% accuracy in some cases, accuracy is lower when the algorithms are applied to women, Black individuals, and young people. The systems falsely identify Black and Asian faces 10 to 100 times more often than white faces. Because the algorithms are primarily trained on white men, systems like DeepFace have a more difficult time identifying other groups. Scientists believe that once facial recognition databases are trained to identify people of color, by exposing them to more diverse faces, they will be more successful at identification.

In July 2020, Facebook announced that it was building teams to look into racism in its algorithms. These teams will work with Facebook’s Responsible AI team to study bias in its systems. The implementation of these programs is recent, and it is still unclear what reforms will be made.

10 Year Challenge

In 2019, a Facebook challenge went viral asking users to post a photo of themselves from 10 years ago alongside one from 2019. The challenge was coined the “10 Year Challenge.” More than 5 million people participated, including many celebrities. Worry arose that the challenge was designed to train Facebook’s facial recognition database. Kate O’Neill, a writer for Wired, wrote an op-ed echoing this possibility. Facebook denied playing a role in generating the challenge, arguing instead that it was a user-generated challenge that allowed individuals to have fun online. However, individuals have argued that the concerns underlying theories about the 10 Year Challenge are echoed by broader concerns about Facebook and the right to privacy.

Week 4 Questions
One important addition to the DeepFace entry that I plan to add is more recent information. The article's newest citations are from 2014.

I also want to add relevant information about the impacts that DeepFace has had. I will seek to answer the following questions: How extensively has DeepFace been implemented? Have other software companies begun using DeepFace? What privacy impacts has it had? Have there been any lawsuits against DeepFace?

1


 * Is everything in the article relevant to the article topic? Is there anything that distracted you?
 * It seems like the United States Harbor program and passenger name record issues have a lot devoted to them. They seem less relevant to general information privacy and more related to legislation to protect informational privacy in the US.


 * Is any information out of date? Is anything missing that could be added?
 * Some information about the EU is outdated. I wonder if more recent information could be found about their positions on the US' legislation. The information about healthcare is outdated. I wonder if Covid/vaccination would influence this.
 * Can you identify any notable equity gaps? Does the article underrepresent or misrepresent historically marginalized populations?
 * The entry seems equitable.
 * What else could be improved?
 * Top level analysis regarding informational privacy should be prioritized.
 * Is the article equitable? Are there any claims that appear heavily biased toward a particular position?
 * In the medical section, the explanation could focus more on why patients prioritize doctor-patient confidentiality.
 * Are there viewpoints that are overrepresented, or underrepresented?
 * Not really.
 * Evaluating sources -
 * The links work
 * The sources could be updated to be more recent to support the claims in the article. Especially regarding the importance of legislation.
 * Every fact referenced has a reliable reference.
 * There is diversity to the authors
 * Checking the talk page
 * There's quite a lot of discussion occurring on the talk page. Individuals have added new points of view, added in-text citations, and added information about legislation.
 * The article is part of WikiProject
 * The article has a C-class score.

2

Information privacy law:

 * Is everything in the article relevant to the article topic? Is there anything that distracted you?
 * "I think it would be helpful to divide the country divisions even further. Because the entry is about legislation, it would be helpful to divide it into 'medical privacy legislation', 'online privacy legislation' etc."
 * Is any information out of date? Is anything missing that could be added?
 * "The information about credit privacy is quite outdated. It's from 2002. Maybe some newer information about the California Consumer Privacy Act could be added."
 * Can you identify any notable equity gaps? Does the article underrepresent or misrepresent historically marginalized populations?
 * "The article seems equitable in most ways. There is not a lot of information regarding legislation in South America. I wonder if this is because of less legislation or less incentive to write about it."
 * Is the article equitable? Are there any claims that appear heavily biased toward a particular position? Are there viewpoints that are overrepresented, or underrepresented?
 * "There is no bias in the article. Although legislation is discussed, it is treated in a fairly nonpartisan way."


 * Evaluating sources -
 * The links work
 * The sources could be updated to be more recent to support the claims in the article. Especially regarding the relevance/successes of the legislation.
 * Every fact referenced has a reliable reference.
 * There is diversity to the authors
 * Checking the talk page
 * There's quite a lot of discussion occurring on the talk page. Individuals have added new points of view, added in-text citations, and added information about legislation.
 * The article is part of WikiProject Law and WikiProject Human Rights
 * The article has a C-class score in both categories.