Trustworthy AI is a term used to describe artificial intelligence (AI) that fulfills certain conditions: it is lawful, ethically adherent, and technically robust. It is based on the idea that AI will reach its full potential only when trust can be established at each stage of its lifecycle, from design to development, deployment and use.

Brief History
The term was introduced by the European Commission in its Ethics Guidelines for Trustworthy Artificial Intelligence (AI).

To maximize the opportunities while managing the risks, Europe must focus on human-centered Trustworthy AI based on strong collaboration among key stakeholders. According to the High-Level Expert Group on AI, Trustworthy AI should be lawful, adhere to ethical principles, and be robustly implemented. Europe is taking important steps towards becoming the worldwide centre for Trustworthy AI. Trustworthiness, however, still requires significant foundational research, and it is becoming clear that it can only be achieved through the integration of learning, optimisation and reasoning, as none of these approaches is sufficient on its own.

The term is used with minor differences by NGOs, commercial enterprises (e.g., IBM or Microsoft) and academia, both in Europe and in other areas of the world, such as the United States.

High-Level Expert Group Ethics Guidelines
The European Commission guidelines are very influential...

According to the guidelines, Trustworthy AI has three components, which should be met throughout the AI system's entire life cycle: (1) it should be lawful, complying with all applicable laws and regulations; (2) it should be ethical, ensuring adherence to ethical principles and values; and (3) it should be robust, both from a technical and a social perspective, since, even with good intentions, AI systems can cause unintentional harm. Each component in itself is necessary but not sufficient for the achievement of Trustworthy AI. Ideally, all three components work in harmony and overlap in their operation; if, in practice, tensions arise between them, society should endeavour to align them.

The guidelines set out a framework for achieving Trustworthy AI but do not explicitly deal with its first component (lawful AI), which is left to the official legislative channels (e.g., the EU Parliament). Guidance on the other two components is provided in three layers of abstraction, from the most abstract in Chapter I to the most concrete in Chapter III, closing with examples of opportunities and critical concerns raised by AI systems. In particular:
 * Chapter I identifies the ethical principles and their correlated values that must be respected in the development, deployment and use of AI systems, and encourages stakeholders to acknowledge and address the potential tensions between these principles. The ethical principles are:
   * respect for human autonomy,
   * prevention of harm,
   * fairness, and
   * explicability.
 * Chapter II provides guidance on how Trustworthy AI can be implemented, listing seven requirements that AI systems should meet; both technical and non-technical methods can be used for their implementation:
   1) human agency and oversight,
   2) technical robustness and safety,
   3) privacy and data governance,
   4) transparency,
   5) diversity, non-discrimination and fairness,
   6) environmental and societal well-being, and
   7) accountability.
 * Chapter III provides a concrete and non-exhaustive Trustworthy AI assessment list aimed at operationalising the key requirements set out in Chapter II. This assessment list needs to be tailored to the specific use case of the AI system; more information on it can be found on the European Commission's website.
