
Role of explainable AI in addressing bias and discrimination in AI systems

The recent excitement around Generative AI is much more than hype, as is evident from the emergence of a myriad of use cases and the continued research and development of powerful foundation models. At Infosys, we are witnessing this firsthand as we work on several state-of-the-art implementations in AI-augmented software engineering, semantic search and information summarization systems, and similar initiatives for leading enterprises. A report by Fortune Business Insights estimates that the global AI market will reach USD 2025.12 billion by 2030, at a CAGR of 21.6 percent. These growth predictions are promising, but concerns about the potential risks of AI are being voiced from all quarters.

One of these risk factors is the presence of different forms of bias that percolate into AI systems through the unconscious prejudices of the humans who built them.

AI bias can adversely affect business reputation, erode brand value and harm society
There have been several widely publicized instances of AI-based discrimination. An AI-powered recruitment system at a major ecommerce player exhibited gender bias, and an AI system used by some US hospitals to identify healthcare needs underestimated the needs of certain ethnic groups. The advertisement engine of a social media platform reproduced gender stereotypes when recommending jobs to different genders and ethnicities. Certain language models have also been found to display similar gender-based stereotypes in their responses (doctors are referred to as "he" and nurses as "she"), and facial recognition systems are often unable to correctly identify the faces of ethnic minorities.

Role of explainable AI (XAI) in bias detection and mitigation
AI models become increasingly complex as they are pushed toward higher accuracy. Large language models (LLMs) have billions of parameters and are essentially black-box systems: they are so opaque that it is difficult even for AI scientists to explain every aspect of their output. This lack of transparency allows biases to persist in AI.

Explainable AI (XAI) plays a critical role in combating these biases by shedding light on the decision-making process and unravelling the sources of bias. Recent reports identify XAI as a critical pillar of the responsible-AI-by-design construct, which in turn is one of the building blocks of an AI-first enterprise.

XAI can analyze the underlying data, feature importance, and behavior of the model. XAI techniques can identify representation bias caused by the underrepresentation of certain groups in the training data. They can also surface bias baked into the design of the model itself; for example, a model may assign different weights to different classes in the training data, thereby underplaying certain crucial characteristics. XAI can determine whether model selection has perpetuated bias, since certain model types are unsuitable for certain scenarios, and it acts as a strong safeguard during model validation and testing to identify any residual biases. This knowledge enables teams to re-evaluate and refine their models, minimizing or eliminating discriminatory outcomes.
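
To make this concrete, here is a minimal sketch of one such technique: permutation feature importance (via scikit-learn), used to check how heavily a trained model relies on a sensitive attribute or an obvious proxy for it. The dataset, column names, and model choice below are hypothetical placeholders rather than a prescribed implementation; in practice, libraries such as SHAP or LIME are often used for the same purpose.

```python
# Minimal sketch: permutation importance as a bias probe.
# The CSV file and column names ("gender", "zip_code", "hired") are
# hypothetical; features are assumed to be already numerically encoded.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

df = pd.read_csv("applicants.csv")
X, y = df.drop(columns=["hired"]), df["hired"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy.
# A large drop for "gender" (or a proxy such as "zip_code") indicates the
# model leans heavily on that attribute and warrants review.
result = permutation_importance(black_box, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda item: item[1], reverse=True):
    print(f"{name}: {score:.3f}")
```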

Human reviewers also often override AI predictions based on their own biases and stereotypes, and these overrides can be picked up by AI systems that learn continuously from human feedback. XAI can pinpoint such occurrences and help with model auditing. Because bias can creep in at any phase of the model lifecycle, XAI should be integrated into every stage, from data preparation through training to final inferencing. This will help us adopt the right intervention strategies to make the model fairer and more inclusive.
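
As a hypothetical illustration (not a specific product feature), the sketch below shows the kind of validation-stage audit this enables: selection rates are compared across demographic groups before and after human review, and reviewer overrides are broken down by group so that biased feedback can be caught before it is fed back into retraining. The column names and the 0.8 threshold (the common "four-fifths" rule of thumb) are illustrative assumptions.

```python
# Hypothetical decision log with one row per case:
#   group           - demographic group of the applicant
#   model_approved  - the model's decision (0/1)
#   final_approved  - the decision after human review (0/1)
import pandas as pd

audit = pd.read_csv("decision_log.csv")

# Selection rates per group, before and after human review.
model_rate = audit.groupby("group")["model_approved"].mean()
final_rate = audit.groupby("group")["final_approved"].mean()
print("Model selection rate by group:\n", model_rate)
print("Final selection rate by group:\n", final_rate)

# Disparate-impact ratio: lowest group rate divided by highest group rate.
ratio = final_rate.min() / final_rate.max()
if ratio < 0.8:
    print(f"Warning: disparate impact ratio {ratio:.2f} is below 0.8")

# Cases where reviewers reversed the model, broken down by group, can
# reveal reviewer bias that would otherwise leak into future training data.
overrides = audit[audit["model_approved"] != audit["final_approved"]]
print(overrides.groupby("group").size())
```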

Explainable AI ensures accountability for biases by providing evidence-based explanations for automated decisions across all stages of an AI workflow. This transparency also enables regulators, auditors, and internal governing bodies to identify unfair and discriminatory practices and process gaps, and to ensure that these are not perpetuated. Provisions in the EU AI Act as well as the GDPR set minimum requirements for the explainability of AI systems.

Explainable AI tools, techniques, and platforms
XAI focuses either on explaining the model as a whole (global explanations) or on explaining the reasoning behind individual predictions (local explanations). It relies on techniques such as training simpler proxy models (for example, a decision tree surrogate), generating visual explanations, and several others.
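
For example, a global explanation can be produced with a surrogate model: a shallow, human-readable decision tree trained to mimic the predictions of the opaque model, so that its rules approximate the model's overall behaviour. The sketch below reuses the hypothetical `black_box` model and `X_train` data from the earlier example and is an illustration of the technique rather than a complete implementation; local techniques such as LIME or SHAP would be used to explain individual predictions.

```python
# Global surrogate sketch: fit an interpretable tree on the black-box
# model's *predictions* (not the true labels) so its rules approximate
# the black box's overall behaviour. `black_box` and `X_train` are the
# hypothetical objects from the earlier permutation-importance sketch.
from sklearn.tree import DecisionTreeClassifier, export_text

surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how closely the surrogate mimics the black box (not accuracy
# against ground truth). Low fidelity means the rules below are unreliable.
fidelity = surrogate.score(X_train, black_box.predict(X_train))
print(f"Surrogate fidelity: {fidelity:.2f}")

# The surrogate's rules serve as a global, human-readable explanation.
print(export_text(surrogate, feature_names=list(X_train.columns)))
```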

It is recommended to use algorithms and inputs in AI products and processes in such a way that their decisions and outputs can be checked and understood by humans. There are several open-source tools for explainability as well as paid offerings from leading vendors; these provide various explainability functionalities for different AI models, fine-tuned for specific use cases and applications. The Gartner Market Guide offers a detailed analysis of several vendors.

While progress has been made in XAI, challenges remain. Striking a balance between transparency and the protection of proprietary algorithms, addressing interpretability-performance trade-offs, developing policy-driven guardrails, and conducting white-box testing of complex language models are ongoing problems. Continued research and interdisciplinary collaboration among domain experts, technical experts, and regulators is necessary to overcome these challenges and develop robust XAI frameworks.

As AI continues to shape our society and gets embedded in the fabric of all industries, the integration of explainable AI is vital for creating fair, inclusive, and ethical AI systems that align with the basic principles of responsible AI.
