User:Mahailem/sandbox

Gaming
Black gamers occupy a unique position in gaming spaces: they are frequently misrepresented while also facing a constant risk of harassment. When Black characters do appear, which is less often than their real-world presence would suggest, they are typically stereotyped into one of two roles, the athlete or the criminal, or a combination of both. Players who call out these issues often face heavy backlash. One example comes from The Sims community: Black players who raised concerns about the representation of various hair textures, entered community spaces, or encountered storylines about Black Sims frequently faced racial attacks and microaggressions, and saw characters who looked like them written around prevalent stereotypes of Black people. The solution did not come from the game's creators; instead, groups of Black Sims players formed their own spaces so they would have somewhere to go.

Black content creators face a related bind: they must present a version of Blackness that keeps audiences comfortable watching their content, yet in building those public personas they open themselves to racialized comments that fill their comment sections. When they do ask for larger changes, companies often take a race-blind approach that ignores the problems in the communities they host. And when Black people are included, it is often because the games being played are already embedded in African American culture, with such events sometimes treated as "diversity nights" for Black creators.

Artificial Intelligence
The biases latent in the training data of large language models such as ChatGPT can be seen in how these systems treat Black people. Former Google AI ethicist Timnit Gebru's time at Google ended amid a dispute over a paper that outlined several concerns shared by AI ethicists: the carbon impact of these models could soon create serious problems, ever-larger datasets sweep in insensitive vocabulary from the early days of the internet, and retraining a model after a failure demands enormous effort. There is already clear evidence of latent bias in AI models, such as outputs claiming that white men are the best scientists. When this was discovered, OpenAI quickly blocked questions that pertained directly to race rather than fixing the underlying issue. The idea of beauty offers another example: Beauty.AI, presented as an unbiased judge for a beauty contest, solicited submissions from around the world, yet of its 44 winners, 38 were white and only one finalist had a visibly darker skin tone. The submissions were also used to glean information about health factors affecting the users, and ranking "healthy" people toward the front effectively taught the model that people with darker skin tones are generally less healthy. In both cases, the models were trained on data that carried biases against people of color. A lack of representation among the people developing these models compounds the problem, because fewer perspectives are considered during development, and if initial testing is done only on coworkers, the models may go untested against the full range of scenarios from the beginning.

Surveillance
Black and Latinx communities have frequently been the targets of new surveillance and risk-assessment technologies that have brought more arrests to those communities. Police have used tools to target communities of color for decades. One of the earliest examples within the borders of the United States itself came directly after the attacks on the Twin Towers, when the New York Police Department used community leaders, taxi drivers, and extensive databases to connect people to one another in search of potential terrorists living in the country. Much of this work has been done through a program called CompStat, and many precincts have been encouraged to adopt it because it can identify high-crime areas and concentrate police where they believe crime will happen, leading to still more arrests. Over time, entire states have attempted to build gang databases based on such risk assessments, producing situations in which children less than a year old were labeled "self-identified gang members." This breeds both confusion and distrust within these communities, which in turn can lead to even more violence and arrests. These programs have been used throughout the United States, including in Boston, Massachusetts; Salinas, California; and, most notably, Camden, New Jersey. Outside of Boston, most of these places have not provided social services to those caught in these cycles of violence, preferring instead to send them to prison. For the computer systems, this cycle is a positive feedback loop, and it does not help these communities.

Social Media
People of African descent throughout the world face a much higher risk of harassment on the internet:

 * The two countries with the highest levels of reported cyberbullying are Kenya and Nigeria, where around 70% of users report having received hate during their time online.
 * Tweets containing discriminatory content are linked to rates of hate crimes in the areas where the tweets were made.
 * Black people are more likely to report that the attacks they receive online are based primarily on their race.

Being Black on the internet is inherently tied to receiving racially charged hatred. Moreover, because of the lax moderation on many popular social media sites (such as Twitter), there exist many ways in which white nationalists can come together to spread hatred through large hate waves that target people of color, and most especially Black women.