The Most Comprehensive Guide On Explainable AI
This article was published as a part of the Data Science Blogathon.
Introduction to Explainable AI
I love artificial intelligence, I like to explore all of its aspects, and I follow the field daily to see what is new. On one of these check-ins I came across the technology in front of you today, one of the newer areas of active work in AI: explainable artificial intelligence. Its goal is to communicate information better to the ordinary person, making a model's results more accessible than before by presenting them through plans and visualizations that are easy to understand, so that the average person can grasp those results clearly and accurately.
What is Explainable AI?
Explainable Artificial Intelligence (XAI) is AI that is built to describe its purpose, rationale, and decision-making process in a way that can be understood by the average human. XAI is often discussed in relation to deep learning, and it plays an important role in the FAT ML model of machine learning: fairness, accountability, and transparency. XAI provides information about how an AI program makes a particular decision, including the strengths and weaknesses of the program used.
Understanding Explainable AI in Depth
First of all, we must understand what XAI is and why this technology is needed. AI algorithms often act as "black boxes": they take input and produce output with no way to understand their inner workings. The goal of XAI is to make the rationale behind an algorithm's output understandable to an ordinary person who is not familiar with the subject. Many AI algorithms rely on deep learning, where the algorithm learns to identify patterns from large amounts of training data. Deep learning is a neural network approach that simulates the way our own human brains operate, and, as with human thought processes, it can be difficult or impossible to determine how a deep learning algorithm arrived at a prediction or decision.
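To make the contrast concrete, here is a minimal sketch (an illustration added here, not from the original article): a small decision tree is a classically interpretable model whose entire rationale can be printed as human-readable rules, which is exactly what a deep network does not offer out of the box.

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# An interpretable model: its full decision logic can be printed as rules
data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)
print(export_text(tree, feature_names=list(data.feature_names)))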
How Does Explainable AI Work?
First, we defined and developed an understanding of what explainable AI is. That definition helps determine the expected output from XAI, but it does not provide any guidance on how to reach this desired outcome. It can be helpful to divide XAI into three categories, each built around a different question that clarifies the model's behavior.
What are the Different Types of XAI?
What are the Features of the XAI Interface?
XAI interfaces visualize the output of different data points to explain the relationships between specific features and model predictions. Users can observe the x and y values of different data points and, from a color code, understand their effect on the model's absolute error. This makes models easier and clearer for ordinary people, so they can understand exactly how a given feature interacts with the prediction.
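A minimal, hypothetical sketch of the kind of view such an interface provides: each point is one observation, placed by a feature value (x) and the model's prediction (y), and colored by its absolute error. All names and data here are illustrative, not from the original article.

import numpy as np
import matplotlib.pyplot as plt

# Synthetic data standing in for a real feature, target, and model output
rng = np.random.default_rng(0)
feature = rng.uniform(20, 80, 200)                  # e.g. a patient's age
target = 0.01 * feature + rng.normal(0, 0.1, 200)   # true values
prediction = 0.01 * feature                         # a stand-in model's predictions
abs_error = np.abs(target - prediction)

# Color-code each point by its absolute error, as XAI interfaces do
sc = plt.scatter(feature, prediction, c=abs_error, cmap='coolwarm')
plt.colorbar(sc, label='absolute error')
plt.xlabel('feature value')
plt.ylabel('model prediction')
plt.show()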
How Does XAI Serve AI Ethics?
As artificial intelligence becomes more widely used in our daily lives, an important point arises: the ethics of artificial intelligence. The increasing complexity of advanced AI models, and their lack of explainability, raise doubts about these models. Without understanding them, humans cannot decide whether these AI models are socially useful, trustworthy, safe, and fair. AI models therefore need to follow specific ethical guidelines. Gartner condenses the ethics of artificial intelligence into five main components:
- Explainable and transparent
- Human-centered and socially beneficial
- Fair
- Safe and secure
- Accountable
What are the Advantages of Explainable AI?
- It improves explainability and transparency: Companies can better understand their models, follow how they develop, and see why they behave in certain ways under certain conditions. Even for a black-box model, humans can use an interpretation interface to understand how the AI model reaches certain conclusions (see the sketch after this list).
- Faster adoption: As companies come to understand AI models better, they can trust them with more important decisions.
- Better debugging: When the system behaves unexpectedly, XAI can be used to identify the problem and help developers debug it.
- It enables auditing for regulatory requirements.
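As one concrete example of such an interpretation interface, here is a minimal sketch using SHAP, a popular explanation library (it is an illustration added here, and is not the xai package used in the implementation below), on synthetic data: it surfaces how much each feature contributes to a black-box model's predictions.

import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# A synthetic "black-box" classifier
rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = (X[:, 0] + X[:, 1] > 1).astype(int)
clf = RandomForestClassifier(random_state=0).fit(X, y)

# Explain the model's predictions and plot mean |SHAP value| per feature
explainer = shap.Explainer(clf.predict, X)
shap_values = explainer(X[:50])
shap.plots.bar(shap_values)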
Implementation
Import libraries:
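The commands below rely on pandas, NumPy, Matplotlib, seaborn, Plotly, and SciPy. A minimal sketch of the imports and data loading, assuming the Kaggle Heart Attack Analysis & Prediction dataset (its age feature and output target match the columns used below); the file name heart.csv is an assumption:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.figure_factory as ff
from plotly.offline import iplot
from scipy import stats

# Load the dataset (the path is an assumption)
df = pd.read_csv('heart.csv')
df.head()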
Information about the data:
# Summary statistics, styled with a color gradient for easier scanning
df.describe().style.background_gradient(cmap='copper')
# Count missing values per column (.sum(), not .count(), gives the number of missing entries)
df.isna().sum()
fig = ff.create_distplot([df.age], ['age'], bin_size=5)
iplot(fig, filename='Basic Distplot')

# Get also the QQ-plot
fig = plt.figure()
res = stats.probplot(df['age'], plot=plt)
plt.show()
print('Heatmap')
plt.figure(figsize=(15, 10))
sns.heatmap(df.corr(), annot=True, cmap='coolwarm')
Using XAI:
!pip install xai
!pip install xai_data

import sys, os
import pandas as pd
import numpy as np
from collections import defaultdict
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.pipeline import make_pipeline

# Use below for charts in dark jupyter theme
THEME_DARK = False
if THEME_DARK:
    # This is used if Jupyter Theme dark is enabled.
    # The theme chosen can be activated with jupyter theme as follows:
    # >>> jt -t oceans16 -T -nfs 115 -cellw 98% -N -kl -ofs 11 -altmd
    font_size = '20.0'
    dark_theme_config = {
        "ytick.color": "w",
        "xtick.color": "w",
        "text.color": "white",
        'font.size': font_size,
        'axes.titlesize': font_size,
        'axes.labelsize': font_size,
        'xtick.labelsize': font_size,
        'ytick.labelsize': font_size,
        'legend.fontsize': font_size,
        'figure.titlesize': font_size,
        'figure.figsize': [20, 7],
        'figure.facecolor': "#384151",
        'legend.facecolor': "#384151",
        "axes.labelcolor": "w",
        "axes.edgecolor": "w"
    }
    plt.rcParams.update(dark_theme_config)

sys.path.append("..")
import xai
import xai.data
# Assumption: categorical_cols lists the categorical columns of the dataset;
# for the heart dataset these might be, for example:
categorical_cols = ["sex", "cp", "fbs", "restecg", "exng"]

df_groups = xai.imbalance_plot(df, 'age', categorical_cols=categorical_cols)
# Assumption: bal_df is the (re)balanced DataFrame from the step above;
# if no rebalancing is applied, the original frame can be used directly
bal_df = df

proc_df = xai.normalize_numeric(bal_df)
proc_df = xai.convert_categories(proc_df)

x = df.drop("output", axis=1)
y = df["output"]
x_train, y_train, x_test, y_test, train_idx, test_idx = xai.balanced_train_test_split(
    x, y, "age",
    min_per_group=1,
    max_per_group=1,
    categorical_cols=categorical_cols)
import sklearn
from sklearn.metrics import classification_report, mean_squared_error, roc_curve, auc
from keras.layers import Input, Dense, Flatten, Concatenate, concatenate, Dropout, Lambda, Embedding
from keras.models import Model, Sequential

def build_model(X):
    input_els = []
    encoded_els = []
    dtypes = list(zip(X.dtypes.index, map(str, X.dtypes)))
    for k, dtype in dtypes:
        # One scalar input per column
        input_els.append(Input(shape=(1,)))
        if dtype == "int8":
            # Categorical (int8) columns get a learned 1-dimensional embedding
            e = Flatten()(Embedding(X[k].max() + 1, 1)(input_els[-1]))
        else:
            e = input_els[-1]
        encoded_els.append(e)
    encoded_els = concatenate(encoded_els)
    layer1 = Dropout(0.5)(Dense(100, activation="relu")(encoded_els))
    out = Dense(1, activation='sigmoid')(layer1)
    # Build and compile the binary classifier
    model = Model(inputs=input_els, outputs=[out])
    model.compile(optimizer="adam", loss='binary_crossentropy', metrics=['accuracy'])
    return model

def f_in(X, m=None):
    """Split a DataFrame into the per-column list of inputs the model expects"""
    if m:
        return [X.iloc[:m, i] for i in range(X.shape[1])]
    else:
        return [X.iloc[:, i] for i in range(X.shape[1])]

def f_out(probs, threshold=0.5):
    """Convert probabilities into classes"""
    return list((probs >= threshold).astype(int).T[0])
model = build_model(x_train)
model.fit(f_in(x_train), y_train, epochs=1000, batch_size=512)
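The sklearn metrics imported earlier (classification_report, roc_curve, auc) are never used above; as an added sketch, here is one way the trained model could be evaluated on the held-out split with the helper functions f_in and f_out defined above:

# Evaluate on the test split using the helpers defined above
probs = model.predict(f_in(x_test))   # predicted probabilities, shape (n, 1)
predictions = f_out(probs)            # thresholded class labels
print(classification_report(y_test, predictions))
fpr, tpr, _ = roc_curve(y_test, probs.ravel())
print("AUC:", auc(fpr, tpr))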
Conclusion on Explainable AI
In this article, we worked through several things related to XAI, a tool that has recently drawn the interest of many researchers, data scientists, and analysts. We started from the beginning, which is where the benefit of this technology comes from: we defined the technology, which, as we mentioned, is new in this field; we got acquainted with the history of its emergence and the motivation behind its development; we saw how to use it and what its advantages are; and in the end we applied the code, fetching the data, exploring it with some tools, and then implementing XAI, all as shown in the code above.
I hope you enjoyed this article. To recap the main points: we clarified the overall concept behind the technology explained in this article; we covered understanding the technology, how it works, its different components, and the features it consists of and how they serve it; and in the end, we implemented the code that this technology works on.
The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.