Most intermediate-level machine learning books focus on how to optimize models by increasing accuracy or decreasing prediction error. But this approach often overlooks the importance of understanding why and how your ML model makes the predictions that it does.
Explainability methods provide an essential toolkit for better understanding model behavior, and this practical guide brings together best-in-class techniques for model explainability. Experienced machine learning engineers and data scientists will learn hands-on how these techniques work so that you can apply them more easily in your daily workflow.
This essential book provides:
Michael Munn is a research software engineer at Google. His work focuses on better understanding the mathematical foundations of machine learning and how those insights can be used to improve machine learning models at Google. Previously, he worked in the Google Cloud Advanced Solutions Lab, helping customers design, implement, and deploy machine learning models at scale. Michael has a PhD in mathematics from the City University of New York. Before joining Google, he worked as a research professor.

David Pitman is a staff engineer working in Google Cloud on the AI Platform, where he leads the Explainable AI team. He's also a co-organizer of PuPPy, the largest Python group in the Pacific Northwest. David has a Master of Engineering degree and a BS in computer science from MIT, where he previously served as a research scientist.