Artificial Intelligence (AI) has become essential to our everyday lives, powering everything from search engines and healthcare to manufacturing and banking. It promises to make life easier by creating more efficient, cheaper processes for organizing information and making it accessible. AI is the technology underlying Internet search engines, and it appears in common email and text-messaging software that predicts and offers the next word you are likely to need. AI also underpins computer functions that sort images by category (e.g., people, nature) and translate webpages from one language to another.
In the early days, it was thought that AI would be restricted to simple, repetitive tasks, but the technology has developed far beyond that and is now capable of supporting complex strategic decisions. Until recently, there has been very little government oversight of AI development. As AI grows in sophistication, so do concerns about the dangers that may lurk in these technologies. An article in the Harvard Gazette highlights three major ethical questions regarding AI: surveillance and privacy, the role of human discernment, and discrimination and bias.
The HHS Artificial Intelligence in Healthcare Playbook
In response to a recent initiative from the US White House, the Department of Health and Human Services (HHS) published its Trustworthy AI (TAI) Playbook, providing guidelines for trustworthy AI in healthcare. The document lays out the principles needed to ensure TAI and seeks to meet the requirements of Executive Order 13960, which encourages AI development underpinned by ethical standards.
The playbook highlights six major principles that should form the basis of TAI. These are:
- Fairness and impartiality
- Transparency and explainability
- Robustness and reliability
- Responsibility and accountability
- Guarding privacy
- Safety and security
The playbook goes on to explain how healthcare professionals can incorporate the principles of TAI into their projects when building and deploying AI systems.
The Principles of Trustworthy Artificial Intelligence in Healthcare Explained
The TAI playbook explains each of the principles:
- Fairness and Impartiality. The first principle revolves around impartiality: all stakeholders should ensure that participants receive equal treatment. It is recommended that the AI be trained on data that reflects the geographic realities of the area where it will operate, and that it treat native and non-native English speakers with the same level of respect.
- Transparent and Explainable. Participants must know how their data is used and how the AI makes decisions. The algorithms should be transparent and open for examination.
- Robust and Reliable. The AI must produce reliable, accurate output according to its design specifications on an ongoing basis. It should also learn from humans and other systems so that it improves continuously.
- Responsible and Accountable. Policies must carefully outline who is responsible for all aspects of AI and hold those people accountable for policy breaches.
- Guarding Privacy. The privacy of all entities, including individuals, groups, and organizations, must be protected. The organization may only use the data for its intended purpose as approved by the owner or guardian of the data.
- Safety and Security. Organizations must carefully protect AI systems against risks such as cyber-attacks to prevent harm to participants.
The Role of AI Ethics in Healthcare
All of these principles align with the federal guidelines in Executive Order 13960 for promoting trustworthy AI. The White House has since released the Blueprint for an AI Bill of Rights, which provides principles to guide the design, use, and deployment of automated systems in ways that protect the public.