responsible AI
What is responsible AI?
Responsible AI is an approach to developing and deploying artificial intelligence (AI) from both an ethical and legal point of view. The goal of responsible AI is to employ AI in a safe, trustworthy and ethical fashion. Using AI responsibly should increase transparency and help reduce issues such as AI bias.
Proponents of responsible AI hope that a widely adopted governance framework of AI best practices will make it easier for organizations around the globe to ensure their AI programming is human-centered, interpretable and explainable. Having a responsible AI system in place helps ensure fairness, reliability and transparency.
Trustworthy AI standards, however, are currently left to the discretion of the data scientists and software developers who write and deploy an organization's AI models. This means the steps required to prevent discrimination and ensure transparency vary from company to company.
Implementation can also differ from company to company. For example, the chief analytics officer or other dedicated AI officers and teams might be responsible for developing, implementing and monitoring the organization's responsible AI framework. The framework should be documented on the organization's website, explaining how it addresses accountability and ensures its use of AI is nondiscriminatory.
Why responsible AI is important
Responsible AI is a still-emerging area of AI governance. The word responsible serves as an umbrella term covering both ethics and AI democratization.
Often, the data sets used to train machine learning (ML) models introduce bias into AI. This is caused either by incomplete or faulty data or by the biases of those training the ML model. When an AI program is biased, it can end up negatively affecting or hurting humans -- such as unjustly declining applications for financial loans or, in healthcare, inaccurately diagnosing a patient.
Now that software programs with AI features are becoming more common, it's increasingly apparent that there's a need for standards in AI beyond those established by science fiction writer Isaac Asimov in his "Three Laws of Robotics."
The implementation of responsible AI can help reduce AI bias, create more transparent AI systems and increase end-user trust in those systems.
What are the principles of responsible AI?
AI and machine learning models should be developed according to a set of principles, which might differ from organization to organization.
For example, Microsoft and Google each follow their own list of principles, and the National Institute of Standards and Technology (NIST) has published version 1.0 of its AI Risk Management Framework, which follows many of the same principles found in Microsoft's and Google's lists. NIST's list of seven principles includes the following:
- Accountable and transparent. Increased transparency is meant to build trust in the AI system while making it easier to fix problems associated with AI model outputs. It also makes developers more accountable for their AI systems.
- Explainable and interpretable. Explainability and interpretability are meant to provide deeper insight into the functionality and trustworthiness of an AI system. Explainable AI, for example, is meant to give users an explanation of why and how the model arrived at its output.
- Fair with harmful bias managed. Fairness is meant to address issues concerning AI bias and discrimination. This principle focuses on providing equality and equity, which is difficult because values around fairness differ across organizations and cultures.
- Privacy-enhanced. Privacy practices are meant to help safeguard end-user autonomy, identity and dignity. Responsible AI systems must be developed and deployed with values such as anonymity, confidentiality and control in mind.
- Secure and resilient. Responsible AI systems should be secure and resilient against potential threats such as adversarial attacks. They should be built to avoid, protect against and respond to attacks, and be able to recover from one.
- Valid and reliable. Responsible AI systems should be able to maintain their performance under unexpected circumstances without failing.
- Safe. Responsible AI shouldn't endanger human life, property or the environment.
How do you design responsible AI?
Ongoing scrutiny is crucial to ensure an organization remains committed to unbiased, trustworthy AI. That's why it's important for an organization to follow a maturity model while designing and implementing an AI system.
At a base level, responsible AI should be built around development standards that focus on the principles for responsible AI design. As these principles differ per organization, each one should be carefully considered. AI should be built with resources according to a company-wide development standard that mandates the use of the following:
- Shared code repositories.
- Approved model architectures.
- Sanctioned variables.
- Established bias testing methodologies to help validate AI systems (a minimal example appears in the sketch after this list).
- Stability standards for active machine learning models to ensure AI programming works as intended.
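As an illustration of the bias testing item above, here is a minimal Python sketch of one widely used check, the disparate impact ratio. The record format, group labels and the four-fifths threshold are illustrative assumptions, not a prescribed standard.

```python
# A minimal sketch of one common bias test: the disparate impact ratio.
# The group labels and the 0.8 threshold (the "four-fifths rule") are
# illustrative assumptions, not any specific organization's standard.

from collections import defaultdict

def disparate_impact_ratio(records, protected_group, reference_group):
    """Ratio of favorable-outcome rates between two groups."""
    counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
    for group, approved in records:
        counts[group][0] += int(approved)
        counts[group][1] += 1

    protected_rate = counts[protected_group][0] / counts[protected_group][1]
    reference_rate = counts[reference_group][0] / counts[reference_group][1]
    return protected_rate / reference_rate

# Hypothetical loan decisions: (group label, approved?)
decisions = [("A", 1), ("A", 0), ("A", 1), ("B", 1), ("B", 1), ("B", 1), ("B", 0)]

ratio = disparate_impact_ratio(decisions, protected_group="A", reference_group="B")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # commonly cited four-fifths guideline
    print("Potential adverse impact -- flag the model for review")
```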
AI models should be built with concrete goals that focus on building the model in a safe, trustworthy and ethical way. For example, an organization could construct its responsible AI program around the goals and principles it has defined.
Implementation and how it works
An organization can implement responsible AI and demonstrate that it has created a responsible AI system in the following ways:
- Ensure data is explainable in a way that a human can interpret.
- Document design and decision-making processes to the point where if a mistake occurs, it can be reverse-engineered to determine what transpired.
- Build a diverse work culture and promote constructive discussions to help mitigate bias.
- Use interpretable features to help keep the data feeding a model human-understandable.
- Create a rigorous development process that values visibility into each application's latent features.
- Avoid typical black box AI model development methods in favor of a white box, or explainable, AI system that provides an explanation for each decision the AI makes; a minimal sketch follows this list.
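To illustrate the white box idea above, the following is a minimal sketch using an inherently interpretable model -- a scikit-learn logistic regression -- whose individual decisions can be traced back to per-feature contributions. The feature names and data are invented for the example.

```python
# A minimal sketch of a "white box" approach: an inherently interpretable
# model whose individual decisions can be explained from its coefficients.
# Feature names and training data are made up for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "years_employed"]
X = np.array([[55, 0.30, 4], [22, 0.55, 1], [70, 0.20, 9],
              [30, 0.45, 2], [48, 0.35, 5], [25, 0.60, 1]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = loan approved in historical data

model = LogisticRegression(max_iter=1000).fit(X, y)

applicant = np.array([[40, 0.40, 3]])
probability = model.predict_proba(applicant)[0, 1]

# Per-feature contribution to the decision (coefficient * feature value),
# the kind of per-decision explanation a white box system can surface.
contributions = model.coef_[0] * applicant[0]
print(f"Approval probability: {probability:.2f}")
for name, contribution in zip(feature_names, contributions):
    print(f"  {name}: {contribution:+.2f}")
```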
Best practices for responsible AI principles
When designing responsible AI, governance processes need to be systematic and repeatable. Some best practices include the following:
- Implement machine learning best practices.
- Create a diverse, supportive culture. This includes building gender- and racially diverse teams that work on responsible AI standards, and enabling them to speak freely about ethical concepts around AI and bias.
- Promote transparency to create an explainable AI model so that any decisions made by AI are visible and easily fixable.
- Make the work as measurable as possible. Responsibility is subjective, so ensure there are measurable processes in place, such as visibility and explainability, along with auditable technical and ethical frameworks.
- Use responsible AI tools to inspect AI models. Options such as TensorFlow's Responsible AI Toolkit are available.
- Identify metrics for training and monitoring to help keep errors, false positives and biases at a minimum.
- Perform tests such as bias testing or predictive maintenance to help produce verifiable results and increase end-user trust.
- Continue to monitor after deployment. This helps ensure the AI model continues to function in a responsible, unbiased way; a minimal monitoring sketch follows this list.
- Stay mindful and learn from the process. An organization learns more about responsible AI through implementation -- from fairness practices to technical references and materials surrounding technical ethics.
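As a rough illustration of post-deployment monitoring, the sketch below compares the live false positive rate for each group against a baseline and flags drift. The tolerance, group labels and data are illustrative assumptions; a production system would feed these checks from real prediction logs.

```python
# A minimal sketch of post-deployment monitoring: compare the live false
# positive rate per group against a baseline and flag drift. The tolerance,
# group labels and batch data are illustrative assumptions.

def false_positive_rate(y_true, y_pred):
    false_pos = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    negatives = sum(1 for t in y_true if t == 0)
    return false_pos / negatives if negatives else 0.0

def check_drift(baseline_fpr, live_batches, tolerance=0.10):
    """Return every group whose live FPR drifts beyond tolerance from baseline."""
    alerts = []
    for group, (y_true, y_pred) in live_batches.items():
        live_fpr = false_positive_rate(y_true, y_pred)
        if abs(live_fpr - baseline_fpr[group]) > tolerance:
            alerts.append((group, baseline_fpr[group], live_fpr))
    return alerts

baseline = {"A": 0.10, "B": 0.11}
live = {  # hypothetical (actual outcomes, model predictions) per group
    "A": ([0, 0, 1, 0, 0, 1, 0, 0], [1, 0, 1, 0, 0, 1, 0, 0]),
    "B": ([0, 0, 0, 1, 0, 0, 0, 0], [1, 0, 1, 1, 0, 1, 0, 0]),
}

for group, old, new in check_drift(baseline, live):
    print(f"Group {group}: FPR moved from {old:.2f} to {new:.2f} -- review the model")
```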
Examples of companies embracing responsible AI
Microsoft has created its own responsible AI governance framework with help from its AI, Ethics and Effects in Engineering and Research Committee and Office of Responsible AI (ORA) groups. These two groups work together within Microsoft to spread and uphold their defined responsible AI values. ORA is specifically responsible for setting company-wide rules for responsible AI through the implementation of governance and public policy work. Microsoft has implemented several responsible AI guidelines, checklists and templates, including the following:
- Human-AI interaction guidelines.
- Conversational AI guidelines.
- Inclusive design guidelines.
- AI fairness checklists.
- Templates for data sheets.
- AI security engineering guidance.
Credit scoring organization FICO has created responsible AI governance policies to help its employees and customers understand how the ML models the company uses work and the programming's limitations. FICO's data scientists are tasked with considering the entire lifecycle of its machine learning models and are constantly testing their effectiveness and fairness. FICO has developed the following methodologies and processes for bias detection:
- Building, executing and monitoring explainable models for AI.
- Using blockchain as a governance tool for documenting how an AI model works.
- Sharing an explainable AI toolkit with employees and clients.
- Comprehensive testing for bias.
IBM has its own ethics board dedicated to the issues surrounding artificial intelligence. The IBM AI Ethics Board is a central body that supports the creation of ethical and responsible AI throughout IBM. Some guidelines and resources IBM focuses on include the following:
- AI trust and transparency.
- Everyday ethics for AI.
- Open source community resources.
- Research into trusted AI.
Responsible AI use in blockchain
Besides being useful for transactional data, a distributed ledger can be a valuable tool for creating a tamper-proof record that documents why a machine learning model made a particular prediction. That's why some companies are using blockchain, the popular distributed ledger used for the cryptocurrency bitcoin, to document their use of responsible AI.
With blockchain, each step in the development process -- including who made, tested and approved each decision -- is recorded in a human-readable format that can't be altered.
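The sketch below illustrates only the tamper-evidence idea behind this use of blockchain: each audit entry embeds the hash of the previous one, so altering any record breaks the chain. A real deployment would use a distributed ledger platform; the record fields here are illustrative assumptions.

```python
# A minimal sketch of a hash-chained audit log: each entry stores the hash
# of the previous entry, so changing any record invalidates everything
# recorded after it. Record fields are illustrative assumptions.

import hashlib
import json

def append_entry(chain, record):
    previous_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "previous_hash": previous_hash}
    entry_hash = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": entry_hash})

def verify(chain):
    for i, entry in enumerate(chain):
        body = {"record": entry["record"], "previous_hash": entry["previous_hash"]}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if recomputed != entry["hash"]:
            return False
        if i > 0 and entry["previous_hash"] != chain[i - 1]["hash"]:
            return False
    return True

audit_log = []
append_entry(audit_log, {"step": "model approved", "approver": "review board"})
append_entry(audit_log, {"step": "prediction", "input_id": "12345", "output": "declined"})

print(verify(audit_log))   # True
audit_log[0]["record"]["approver"] = "someone else"
print(verify(audit_log))   # False -- tampering is detectable
```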
Responsible AI standardization
The heads of large corporations such as IBM have publicly called for AI regulations, but no standards have yet been established. Even with the recent boom in generative AI models such as ChatGPT, the adoption of AI legislation has lagged. The U.S., for example, has yet to pass federal legislation governing AI, and opinions conflict on whether AI regulation is on the horizon. However, both NIST and the Biden administration have published broad guidelines for the use of AI.
For example, in addition to NIST's Artificial Intelligence Risk Management Framework, the Biden administration has published blueprints for an AI Bill of Rights, an AI Risk Management Framework and a roadmap for creating a National AI Research Resource.