Published by Manning
Distributed by Simon & Schuster
About The Book
AI doesn’t have to be a black box. These practical techniques help shine a light on your model’s mysterious inner workings. Make your AI more transparent, and you’ll improve trust in your results, combat data leakage and bias, and ensure compliance with legal requirements.
In Interpretable AI, you will learn:
Why AI models are hard to interpret
Interpreting white box models such as linear regression, decision trees, and generalized additive models
Partial dependence plots, LIME, SHAP, Anchors, and other techniques such as saliency mapping, network dissection, and representation learning
What fairness is and how to mitigate bias in AI systems
Implementing robust, GDPR-compliant AI systems
Interpretable AI opens up the black box of your AI models. It teaches cutting-edge techniques and best practices that can make even complex AI systems interpretable. Each method is easy to implement with just Python and open source libraries. You’ll learn to identify when you can utilize models that are inherently transparent, and how to mitigate opacity when your problem demands the power of a hard-to-interpret deep learning model.
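To give a flavor of what "just Python and open source libraries" means in practice, here is a minimal sketch (not taken from the book's own listings) that explains an ordinary scikit-learn model with the SHAP library mentioned above; the toy dataset and model are placeholders:

```python
# Illustrative sketch only: explaining a black-box model with SHAP.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a black-box model on a small toy dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values: per-feature contributions
# to each individual prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Summarize which features drive the model's predictions overall.
shap.summary_plot(shap_values, X)
```

The resulting summary plot ranks features by their average contribution to the model's predictions, turning an opaque ensemble into something a reviewer can reason about.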
About the technology
It’s often difficult to explain how deep learning models work, even for the data scientists who create them. Improving transparency and interpretability in machine learning models minimizes errors, reduces unintended bias, and increases trust in the outcomes. This unique book contains techniques for looking inside “black box” models, designing accountable algorithms, and understanding the factors that cause skewed results.
About the book
Interpretable AI teaches you to identify the patterns your model has learned and why it produces its results. As you read, you’ll pick up algorithm-specific approaches, like interpreting regression and generalized additive models, along with tips to improve performance during training. You’ll also explore methods for interpreting complex deep learning models where some processes are not easily observable. AI transparency is a fast-moving field, and this book simplifies cutting-edge research into practical methods you can implement with Python.
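As a taste of the white-box approaches referred to above (again a minimal illustrative sketch, not code from the book), a linear regression can be interpreted simply by reading off its fitted coefficients:

```python
# Illustrative sketch only: white-box interpretability for linear regression.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = LinearRegression().fit(X, y)

# Each coefficient is the change in the prediction per unit change in
# that feature, holding the other features fixed.
for name, coef in zip(X.columns, model.coef_):
    print(f"{name:>6}: {coef:+.2f}")
```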
What's inside
Techniques for interpreting AI models
Counteracting errors from bias, data leakage, and concept drift
Measuring fairness and mitigating bias
Building GDPR-compliant AI systems
About the reader
For data scientists and engineers familiar with Python and machine learning.
About the author
Ajay Thampi is a machine learning engineer focused on responsible AI and fairness.
Table of Contents
PART 1 INTERPRETABILITY BASICS
1 Introduction
2 White-box models
PART 2 INTERPRETING MODEL PROCESSING
3 Model-agnostic methods: Global interpretability
4 Model-agnostic methods: Local interpretability
5 Saliency mapping
PART 3 INTERPRETING MODEL REPRESENTATIONS
6 Understanding layers and units
7 Understanding semantic similarity
PART 4 FAIRNESS AND BIAS
8 Fairness and mitigating bias
9 Path to explainable AI
Product Details
- Publisher: Manning (July 26, 2022)
- Length: 328 pages
- ISBN-13: 9781638350422