User talk:Mcoop23/sandbox

Peer Review
Your contributions seem very well thought out and constructed. They fit into the already existing article, and it seems like all your claims are backed up by citations. I didn't pick up on any spelling/grammar mistakes.

In the first paragraph, you mention "significant implications". This seems a little ambiguous. It might be helpful to the reader for you to specify what these implications are. The second sentence in your first paragraph seems to repeat what the current article already explains; therefore, this sentence may not be necessary. I think it might be beneficial to expand a little on the ethical standpoints you mention in the second paragraph, mainly because that is what the section you are adding is about. Overall, these are very good and interesting additions! Aimende (talk) 17:56, 29 March 2017 (UTC)


 * Thank you for your suggestions! You are right - "significant implications" was very ambiguous, so I modified the first paragraph to improve this area.  I will research more about the ethical standpoints mentioned in the second paragraph and incorporate this as well!  --Mcoop23 (talk) 01:45, 1 April 2017 (UTC)

Peer Review
I think you did a great job finding compelling evidence and giving an overview of the ethical implications of the filter bubble. Your sources are varied, and you do a good job with the amount of detail you go into and what you cover. My only concern is that some of your claims sound like hypotheses, but I think you can easily reword a lot of them to make them sound more encyclopedic, since you have the sources to back up your claims. For example, you say “As the popularity of cloud services increases, personalized algorithms used to construct filter bubbles will unavoidably become more widespread,” but I think if you say “are expected to become more widespread (with citation),” that would add some authority. You also said “it is important to consider the acceptability of these practices from an ethical standpoint,” which is an opinion, but you could re-word it to sound more like an encyclopedia by saying something along the lines of “news outlets and academic scholars (with citations) have begun exploring the ethical implications of….” The places where you use phrases like “could result,” “could arise,” and “users… must understand” are other areas that could use mild rework to fit in more with the tone of Wikipedia. I think your section would be enhanced by adding a sentence about how advances in technology have allowed these algorithms to become more sophisticated, suggesting the problem is getting worse. Finally, I’d move the info in your last paragraph about the majority of social media users using these sites as their primary news source, and how many of them are unaware of the filter bubble, up to the first paragraph in your section, since I think that’s really compelling and a good way to introduce the ethical implications. I think your closing sentence is a more compelling intro than the one you have now, since you currently introduce your section by saying there are “security, ethics, and personal freedom” implications without diving very far into these three issues.
Llevan2 (talk) 01:47, 30 March 2017 (UTC)


 * Thanks for your help! I did not even realize that some of my claims sounded more like opinions and less like factual representations.  As you mentioned, everything I wrote was backed by a source, so I have made the effort to correct the instances in which I deviated from the neutral tone exemplified in Wikipedia articles.  I agree that my former introduction was significantly lacking.  I have made the suggested edits and helped to start my edition much stronger.  Again, thank you for the help! --Mcoop23 (talk) 01:45, 1 April 2017 (UTC)

Instructor Comments
Nice job Mitchell. This is well on its way. I agree with the peer reviewers above -- you may have already made some changes based on these reviews, but I would suggest you take Liza's specific sentence-level suggestions: "..will unavoidably become more widespread," and "it is important to consider the acceptability..." in particular. I also agree with Adriana that there seems to be some information in the opening paragraph that is repetitive with the already existing article. I think some more organization would help with this: Rearrange the opening so that you have one or two sentences about the potential rise of filter bubbles and the ethical implications for "personal freedom, security, and information bias" (since these are really the three topics you cover here). Give each of these its own short paragraph, in the order in which you list it. I have re-arranged your sentences below to give a more concrete sense of what I mean (though the sentences themselves will still need some work):

As the popularity of cloud services increases, personalized algorithms used to construct filter bubbles will unavoidably become more widespread (2). The development and emergence of new technologies during the twenty-first century and the ways in which these new technologies are regulated have significant implications concerning security, ethics, and personal freedom (1).

The materialization of filter bubbles in popular social media and personalized search sites dictates the particular content seen by users, often without their direct consent or cognizance (2). Filter bubbles, then, could result in individuals losing their autonomy over their own social media platforms and individuals’ identities being socially constructed without their awareness (2).

Filter bubbles should also be considered from the perspective of technologists, social media engineers, or computer specialists (5). When evaluating this issue, users of personalized search engines and social media platforms must understand that their information is not private (5). This raises the question as to whether or not it is moral for these information technologists to take users’ online activity and manipulate future exposure to information (5).

Social media platforms, such as Facebook, personalize users’ news feeds based on recent activity, such as recent comments and “likes” (3). Consequently, individual users are not exposed to differing points of view, enhancing users’ sense of confirmation bias (3). Additionally, social sorting and other unintentional discriminatory practices could arise as a result of personalized filtering (4). Inevitably, a majority of social media users utilize these platforms as their primary source of news (6). Accordingly, the ethical and moral considerations surrounding filter bubbles are important because of the potential for biased, misleading information exposure (7). Though it is difficult to quantify concern, it is important to consider the acceptability of these practices from an ethical standpoint (4).

--Jmstew2 (talk) 11:39, 4 April 2017 (UTC)