User talk:Longinuos

Ethical and Privacy Implications of AI-Supported Business Decision Making

Introduction

Project Background
Artificial intelligence raises privacy implications when applied to decision-making. The problem cuts across all industries, but it is most pronounced in healthcare, where privacy is critical. The growth of technology in the healthcare industry has accelerated artificial intelligence applications. The technology holds incredible potential for favorable, precise, and practical curative and preventive actions. The main categories of application cover methods for treatment and diagnosis, treatment adherence, and hospital operations management. AI use in health facilities plays a critical role in reducing the workload of medical staff, is economical, and ultimately raises the standard of treatment. Healthcare professionals increasingly organize the available medical data through AI: converting everyday semantics into medical descriptions, evaluating patient data, spotting similarities among medical diagnoses, and supporting medical hypotheses. Nonetheless, AI use in healthcare elicits global ethical, legal, cultural, and commercial issues. Software developers, politicians, and healthcare professionals all face new challenges arising from the use of digital software and technology in healthcare facilities and businesses. AI raises ethical issues that hamper the development of its applications in the healthcare industry (Lee & Yoon, 2021). An extraordinary amount of health data and computing power is utilized when AI supports decision-making. This opens the door to new moral conundrums involving the privacy of acquired data, its secrecy, the openness of its use, responsibility for it, and potential disparities in AI implementation, as most AI algorithms operate as a "black box" whose analysis process is opaque.
Artificial intelligence is subject to important ethical implications regarding respect for knowledge, non-maleficence, justice, beneficence, and autonomy. The healthcare system must ensure the right to privacy as enshrined in patient self-governance or autonomy, personal identity, and wellbeing. It is essential from an ethical standpoint to protect people's privacy and uphold confidentiality (Sunarti et al., 2021). Although sex or race may not be the primary contributing variables to an illness, artificial intelligence systems can exhibit computational bias that predicts the likelihood of a disease diagnosis based on these criteria (Davenport & Kalakota, 2019). One characteristic that sets the medical sector apart from business and other service industries is patients' unwavering faith in medical experts, furthered by the placebo effect. Patients in AI-assisted healthcare must establish a relationship with a machine rather than a human, which can negatively impact treatment outcomes. Scientific and technological developments and the ubiquitous presence of digital technology have fostered broad trust in the digital space. Digitalized legal regulation of AI systems in medical treatment primarily aims to prevent public health concerns and to preserve patient confidentiality and personal information. According to European legislation adopted in February 2017 for use in healthcare, a robot must not injure its users, must obey human directions, and must be safe (Laptev et al., 2022). The body also proposed a law requiring AI technology documentation and data sheets containing information on training methods and application procedures, including their range and attributes. In 2015, all UN members adopted the Sustainable Development Goals, a framework that aims to reduce inequalities and promote good health and wellbeing (Cepal, 2018).
Though the goals are well-founded in the moral standards of fairness, collaboration, and inclusivity, the growth of AI can aggravate current health disparities. Given the rapid spread of AI technology in business and the medical industry, it is imperative to fully understand and handle the ethical considerations in light of AI's potential benefits and to mitigate its potential dangers. It is necessary to take a broad view of all possible ethical matters associated with AI use in healthcare institutions.

Stakeholder Analysis
The implementation of AI for decision-making poses privacy and ethical issues for various stakeholders, including patients, clients, hospital management, the general public, government, and the legal system. Issues related to data breaches, whether on the side of patients, healthcare clients, or the public, are likely to lead to litigation to seek legal redress. The parties that suffer a data breach feel vulnerable and exposed, which negatively affects their wellbeing. Leaked private information may lead to job loss, destabilized marital relationships, and other varied negative outcomes. The parties that breach privacy face legal consequences when the aggrieved parties take legal action. The issues around AI implementation in business decision-making therefore bind all these parties. This essay identifies the risk factors for such data breaches, their consequences, and the measures necessary to address them.

Project Aims and Objectives
The aim of this research project is to ensure that AI is used in a responsible and ethical manner in the business decision-making process, and to provide insights and recommendations for organizations looking to maximize the benefits of these systems while minimizing any negative impact on individuals and society.

Research Questions
•	What are the ethical and privacy implications of AI-supported business decision-making in health care?
This question will be addressed by reviewing existing literature on the ethical and privacy implications of AI in the business decision-making process, with a focus on medical journals and the healthcare industry as a case study, to gain a comprehensive understanding of the current state of knowledge in this area. There will also be surveys and interviews with organizations that use AI in the business decision-making process, to understand their experiences and challenges and to identify best practices for responsible deployment of these systems.
•	How does AI implementation potentiate privacy violations in health care?
This question will be answered by analyzing case studies of organizations that have used AI in the business decision-making process, such as in the healthcare industry, to understand the potential consequences of these systems for privacy and ethics.
•	What legal and ethical framework can be used to address the issue?
This question will be addressed by developing a set of guidelines and recommendations for organizations using, or considering using, AI in the business decision-making process, with a focus on ensuring that these systems are aligned with ethical and privacy principles.

Report Contents
The report uses the IMRaD format. It is organized as follows: introduction, literature review, methodology, research analysis, discussion, conclusion, references, and appendices.

Literature Review

Introduction
This research concentrates on the "Ethical and Privacy Implications of AI-Supported Business Decision Making", and the healthcare/medical sector is used as a case study because many of the privacy and ethical problems found in AI use in healthcare can be linked back to businesses in general. The aim is to evaluate the ethical implications associated with using AI in decision-making in the healthcare industry and to recommend guidelines for ethical and legal frameworks to address the issue.
Methodology
The first step in the literature review process is to develop a search strategy to classify relevant literature for a systematic literature appraisal. The second step is to derive key search terms from the research question. These terms include ethical implications, privacy implications, artificial intelligence, business decision-making, healthcare sector, case study, guidelines, ethical framework, and legal framework. Google Scholar and EBSCO will be used in the literature search, along with healthcare-related databases such as CINAHL, PubMed, Cochrane, and EMBASE. The search will be restricted to materials released in the past five years, from 2018 to 2023, and to peer-reviewed journal articles, because peer review affects an article's credibility.

Ethical/Privacy Issues of AI Decision-Making in Health Care
Artificial intelligence (AI) is a phrase used to describe the ability of a machine or software to simulate intelligent human actions, perform computational functions instantly, solve problems, and evaluate user input from previously evaluated data (Amisha et al., 2019). This technology is increasingly applied in medical systems, health care, sports analytics, fashion, autonomous vehicles, manufacturing and production, agriculture, and farming. These are just a few of the numerous industries and fields on which AI has a significant impact. Despite the technology's potential to influence both the future of industry and of people, it is associated with numerous privacy and ethical issues. Medicine and health care are critical industries when considering AI and its associated issues, which include sympathy, empathy, medical consultation, social gaps, informed consent, privacy and data protection, and various ethical dilemmas.
The use of AI applications in healthcare decision-making has changed the healthcare and medical landscape, including access for health organizations, data storage, faster processes, extensive biological data analysis, preventive and precision medicine, new drug discovery, augmented physician intelligence, treatment, lab diagnosis, electronic medical records, and imaging. Despite the tremendous gains, the technology has created inequalities in healthcare provision because many people in low-income and developing nations lack access to it. This has made it necessary for healthcare professionals and policymakers to consider all four medical ethical principles (justice, non-maleficence, beneficence, and autonomy) before integrating AI with the healthcare system. The European Union enacted the General Data Protection Regulation, and other nations, including Canada and the United States, amended their privacy legislation. To ensure that the information of natural persons is adequately protected, these regulations require that all personal data, as well as the operations of foreign communities and businesses, be handled by a union-based data processor or administrator. Employers are prohibited from making discriminatory judgments based on an individual's genetic health information. Despite these efforts, ethical and privacy issues continue to characterize the healthcare landscape; these laws are therefore insufficient to protect patient or client health information. During the COVID-19 pandemic, robots were widely deployed in the healthcare industry to lower the risk of virus spread and to supplement the acute shortage of healthcare professionals. The technology, however, presented privacy and ethical issues, as the robots could be hacked and used for malicious purposes. Social media was also a critical tool during the pandemic, as it allowed easy and fast information sharing.
The government and healthcare professionals utilized the tool to share critical information about the virus and debunk associated myths. People also used the platforms to ease the anxiety associated with the uncertainties of the pandemic. Unfortunately, some social networks began gathering and storing large quantities of user data without consent. Such data would be sold to other companies for advertising, marketing, and sales. Artificial intelligence has raised numerous issues associated with informed consent and patient/client autonomy (Farhud & Zokaei, 2021). The principle entails ethical disclosures, documenting informed consent, and decision capacity as well as competency. Informed consent and patient autonomy require healthcare professionals to share medical information with patients, such as health insurance, health status, cost of care, and diagnosis, and to seek consent from patients before sharing the information with third parties. The principle allows patients and clients seeking healthcare services to access information related to their health, including asking questions before treatments and procedures. The information includes the privacy of data and access control, programming errors, data capture anomalies, the risks of screening and imaging, and the general treatment process. If patients feel uncomfortable with any procedure, they have the right to withdraw from the treatment or negotiate an alternative. Implementing AI decision-making in health care threatens all these provisions because machine learning and the other algorithms used do not provide for such arrangements. For example, a patient admitted to a COVID-19 ward may not have the opportunity to question Tommy, the robotic nurse shown in the image below, about their safety or their intention to withdraw from treatment.
Fig 1: Tommy the robot nurse; Circolo Hospital in Varese, Italy

The use of AI in healthcare decision-making has been associated with ethical issues related to social gaps and justice. Ethical principles in health care require the judicious provision of care regardless of social, economic, or political demographic differences. Many advancements, inventions, and discoveries increase social inequality and reduce social justice for people everywhere in the world. Even if AI makes it easier to acquire information on technology and science, current affairs, climate change, and global politics, it worsens social inequality (Nordling, 2019). This is because advanced economies and automation widen the gap between developed and developing nations. As robots advance, more individuals lose their employment. When automated systems proliferate, administrators and accountants in various areas may lose their jobs, and wages will drop significantly. The emergence of robotic surgery and robotic nursing in the healthcare industry, operating in place of surgeons and nurses, affects their future employment prospects. Medical sympathy, empathy, and consultation are also critical ethical principles. The increasing use of AI technology in health care threatens these aspects, creating an ethical issue. AI integration across the board in the healthcare industry seems challenging or impossible. Humans and therapeutic robots would not evolve together quickly because of distinct human emotions. It is recommended that doctors and other healthcare professionals consult with their peers, which is not possible with autonomous (robotic) systems. It also appears doubtful that patients will prefer "machine-human" medical relationships over "human-human" ones. Patients' ability to recuperate is greatly affected by the empathic and compassionate treatment environment that healthcare professionals must establish.
Robotic doctors and nurses will not be able to accomplish this. Patients interacting with robotic doctors and nurses will no longer experience compassion, kindness, and considerate behavior, because these machines lack human qualities. The examples below show how AI integration in health care creates ethical issues. Any clinical examination in gynecology and obstetrics involves empathy and compassion, which cannot be found in robot doctors.

Children frequently feel scared or anxious when interacting with healthcare environments and professionals. Insufficient cooperation, withdrawal, and violence are some of their behavioral symptoms, which a new robotic medical system may be unable to handle. Patients who suffer from serious psychiatric problems may also be negatively affected by the use of robotic systems in psychiatric hospitals.

How AI Potentiates Privacy Violations
Compared to conventional health technology, AI has several distinctive qualities. Notably, AI systems cannot always be easily, or even reasonably, overseen by actual health professionals, and they can be vulnerable to specific kinds of biases and errors (Char et al., 2018). The potential for bias is due to the "black box" issue, in which the techniques and "rationalization" employed by learning algorithms to arrive at their inferences may be partly or completely invisible to human eyes (Hashimoto et al., 2018). If appropriate controls are unavailable, this invisibility might also extend to how medical and private information is utilized and deployed. Commercial AI's significant data usage raises serious privacy concerns due to the external threat posed by highly complex algorithmic systems. Numerous nations around the globe, including the US, Canada, and those in Europe, have seen an increase in medical data breaches. Furthermore, even though malicious hackers may not be using them frequently right now, AI techniques are making it harder and harder to protect patient information (Kolata, 2021). Recent research has shown how newly developed computational techniques can locate specific persons in health data repositories run by public or private organizations, even when the data has been deidentified and anonymized. For instance, a study by Na et al. discovered that "despite data aggregation and excision of confidential medical data, an algorithm could be utilized to re-identify 85.6% of adults and 69.8% of children in a cohort study" (Na et al., 2018).
According to a 2018 study, data gathered by ancestry firms could be used to identify about 60% of Americans with European ancestry, a number projected to rise sharply in the coming years (Erlich et al., 2018). Additionally, a 2019 study demonstrated "the susceptibility of current web health data" by using a "linkage attack framework", an algorithm designed to re-identify anonymized health data (Ji et al., 2020). These are only a few instances of evolving AI capabilities that have sparked concerns about the security of medical data that is portrayed as secret. It has been argued that current re-identification strategies negate scrubbing and violate privacy. Even when "homomorphic encryption" is in place, this reality might heighten the privacy dangers of giving private AI corporations power over patient health information. Furthermore, it raises worries about protection, obligation, and other concrete matters distinct from circumstances in which administrative bodies have direct possession of patient data. Although AI technology developments are mainly driven by academic research and the healthcare industry, the technology is also relevant to business decision-making and the public sector. These entities use AI for different purposes and thereby increase the risk of privacy breaches through data manipulation, identification and tracking, and speech and facial recognition. Smart home apps and computer software have specific characteristics that make them susceptible to data manipulation by AI. Things only get worse when consumers continue to interconnect more gadgets without understanding how their software and electronics exchange, analyze, and generate data. Furthermore, as we depend more on digital technology, the possibility of data manipulation continues to grow. AI is transforming business by resolving several problems, and businesses must manage complex data volumes to enable accurate, quick, granular, and practical analysis.
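The re-identification ("linkage") attacks discussed above can be illustrated with a minimal sketch. The records, names, and quasi-identifier fields below are entirely invented for illustration; real attacks described in the cited studies are far more sophisticated, but the core idea is the same: joining a "de-identified" dataset with a public one on shared attributes.

```python
# Toy illustration of a linkage attack: matching a de-identified health
# dataset against a public record list on shared quasi-identifiers
# (ZIP code, birth date, sex). All data here is fabricated.

deidentified_health_records = [
    {"zip": "02139", "birth_date": "1964-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "birth_date": "1990-01-15", "sex": "M", "diagnosis": "asthma"},
]

# A stand-in for an openly available list such as a voter roll.
public_records = [
    {"name": "A. Example", "zip": "02139", "birth_date": "1964-07-31", "sex": "F"},
    {"name": "B. Example", "zip": "02140", "birth_date": "1990-01-15", "sex": "M"},
]

def link_records(health_rows, public_rows):
    """Re-identify rows whose quasi-identifiers match exactly."""
    matches = []
    for h in health_rows:
        for p in public_rows:
            if all(h[k] == p[k] for k in ("zip", "birth_date", "sex")):
                matches.append({"name": p["name"], "diagnosis": h["diagnosis"]})
    return matches

print(link_records(deidentified_health_records, public_records))
```

Even though the health records contain no names, one patient is re-identified because her quasi-identifiers are unique in the public list; this is why removing direct identifiers alone ("scrubbing") is not sufficient anonymization.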
AI techniques like facial recognition provide several advantages in various areas of human existence. Due to the widespread use of digital photography via websites, social media, and security cameras, face recognition has advanced from a hazy method of identifying cats to one far more precise at identifying people. These developments in AI have a downside, however. When organizations feed enormous volumes of data into AI-driven systems, privacy violations also happen. Artificial intelligence may disclose personally identifiable data without the subject's consent. Similarly, the use of facial recognition technology violates people's privacy, and as a result many jurisdictions are demanding that it be banned. In Oregon, California, and New Hampshire, laws restricting facial recognition technology have already been approved.

Ethical and Privacy Issues Case Study
Corporations currently employ big data analytics in ways that violate their customers' privacy, and this privacy invasion can place customers in unpleasant situations with their family members. Almost ten years ago, Target developed an AI algorithm to ascertain from female customers' purchase patterns whether they were pregnant. Vouchers would then be delivered to their homes by the company. This anticipatory behavior proved problematic, especially for a woman who was reluctant to tell her parents she was pregnant: the mailed voucher revealed her personal information. The healthcare sector faces increasing difficulties in protecting vast amounts of sensitive and personal data as health information is digitized. Protected health information has increased in value recently, and patients do not want their health data to be disclosed without their consent. In 2019, an NHS Foundation Trust was reported to have disclosed 1.6 million patients' data to Alphabet's DeepMind without patients' consent (Lomas, 2019).
This is one example of a privacy breach in health care. Many people are concerned about their personal information. In 2017, Genpact surveyed more than five thousand people from various countries and found that 63% of respondents valued privacy more than a positive customer experience and wanted businesses to avoid using AI if it invaded their privacy, regardless of how good a customer experience it provided (Genpact Ltd, 2020). Over 71% of respondents expressed concern that AI might make their most important decisions without their awareness or agreement. In another case, a woman who was going through a divorce was hospitalized in February of last year for major surgery. She was very worried when she was admitted because she knew that her ex-husband, who was not a nurse, and his fiancée, an RN, worked at the hospital where she was receiving treatment. The woman explained to the facility's admissions staff, nurses, and doctors that she did not want her ex-husband or his fiancée to be aware of her hospitalization or to have access to her private medical records. As an extra precaution, she was registered under her pre-marriage name when she was checked into the facility. However, after discharge, she learned that her estranged husband knew she had been hospitalized. She felt offended, perceiving an invasion of privacy, and after some time she filed a formal complaint with the hospital stating that the husband and fiancée had accessed her medical data without her consent. In reaction to the allegation, the facility's chief privacy officer promptly flagged the patient's electronic medical record and reviewed all access to it. This meant that every time the record was accessed, a report would be submitted to the facility's chief privacy officer, and everyone who accessed the file would be made aware that it was being carefully monitored.
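The record-flagging control described above can be sketched minimally. The class, names, and data layout below are hypothetical, invented only to illustrate the mechanism: once a chart is flagged, every access is logged and an automatic report is generated for the privacy officer, including whether the accessor was on the care team.

```python
# Hypothetical sketch of flagging an electronic medical record so that
# every access generates a report for the privacy officer.
# All identifiers and structure are invented for illustration.

class AuditedRecordStore:
    def __init__(self):
        self.flagged = set()       # record IDs under active monitoring
        self.access_log = []       # every access: (record_id, user, on_care_team)
        self.privacy_reports = []  # reports generated for the privacy officer

    def flag(self, record_id):
        """Place a record under monitoring after a complaint."""
        self.flagged.add(record_id)

    def access(self, record_id, user, on_care_team):
        """Record an access; if the record is flagged, file a report."""
        self.access_log.append((record_id, user, on_care_team))
        if record_id in self.flagged:
            self.privacy_reports.append(
                {"record": record_id, "user": user, "authorized": on_care_team}
            )

store = AuditedRecordStore()
store.flag("patient-123")
store.access("patient-123", "staff_member", on_care_team=False)
unauthorized = [r for r in store.privacy_reports if not r["authorized"]]
```

In practice such auditing is built into EMR systems rather than application code, but the sketch shows the key design choice: accesses are always permitted and logged, and authorization is evaluated after the fact, which is how the inquiry in the case study could count exactly how many times each person opened the chart.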
The facility also launched an inquiry, which discovered that the patient's former husband and his fiancée had each illegally accessed her electronic hospital record ten times. Her ex-husband was not a nurse, and although his fiancée worked as a nurse, she was not part of the patient's care team and did not have permission to access the information. As a result, the estranged spouse, who had worked at the hospital for twenty-one years, was placed on leave for ten days without pay. The RN fiancée, who had a spotless twenty-four-year career at the facility, received a four-week suspension without pay. Both employees were informed that their behavior would be continuously monitored by their managers and were obligated to undertake privacy classes. The patient was told of the inquiry's findings and conclusion by the hospital's chief privacy officer. She resorted to hiring a lawyer to help her understand her legal alternatives in this circumstance. The case study shows how AI and technology jeopardize private information, and the repercussions of such breaches.

Conclusion
AI is creating privacy issues for users and clients. While it is certainly a big boon, it poses a genuine risk of violating human rights. Consumers have always been concerned with protecting their data. Many groups view the rapid development of artificial intelligence in the medical and clinical domains as a brilliant strategy that may support healthcare workers. Yet this breakthrough has set new constraints in the realm of medical ethics, notwithstanding the immense promise and growth of AI in medicine and health care. As a result, we should be mindful that its drawbacks may outweigh its advantages. Experts must take ethics and humanity into account in order to solve this issue.

Speedy deletion nomination of User:Longinuos


Hello, and welcome to Wikipedia. A tag has been placed on User:Longinuos requesting that it be speedily deleted from Wikipedia. This has been done under section U5 of the criteria for speedy deletion, because the page appears to consist of writings, information, discussions, and/or activities not closely related to Wikipedia's goals. Please note that Wikipedia is not a free web hosting service. Under the criteria for speedy deletion, such pages may be deleted at any time.

If you think this page should not be deleted for this reason, you may contest the nomination by visiting the page and clicking the button labelled "Contest this speedy deletion". This will give you the opportunity to explain why you believe the page should not be deleted. However, be aware that once a page is tagged for speedy deletion, it may be deleted without delay. Please do not remove the speedy deletion tag from the page yourself, but do not hesitate to add information in line with Wikipedia's policies and guidelines. If the page is deleted, and you wish to retrieve the deleted material for future reference or improvement, then please contact the deleting administrator, or if you have already done so, you can place a request here. ― Blaze Wolf (talk) 17:09, 27 January 2023 (UTC)

January 2023
Hello, Longinuos. This is your user talk page; the purpose of this page is for notification and communication with other Wikipedia editors. It is not a workspace for articles in progress or self-promotion. Please use your user sandbox or the draft article space to practice editing or create new articles. Thank you. Drm310 🍁 (talk) 18:01, 27 January 2023 (UTC)

 You have been blocked indefinitely from editing, because it is clear that you are  not here to contribute to building the encyclopedia. If you think there are good reasons why you should be unblocked, you may request an unblock by adding the text, but you should read the guide to appealing blocks first. JBW (talk) 19:56, 27 January 2023 (UTC)