Interpretable or Accurate? Why Not Both?

Building interpretable boosting models with InterpretML. Originally published here. As summed up by Miller, interpretability refers to the degree to which a human can understand the cause of a decision. A common notion in the machine learning community is that a trade-off exists between accuracy and interpretability, meaning that the learning methods that are more …
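The trade-off is not absolute: InterpretML's Explainable Boosting Machine (EBM) is a glassbox boosting model that often rivals blackbox accuracy. A minimal sketch, assuming a scikit-learn-style workflow (the breast-cancer dataset here is only a placeholder):

```python
# Minimal sketch: training an Explainable Boosting Machine (EBM),
# InterpretML's glassbox boosting model, on a toy dataset.
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier()  # interpretable by design
ebm.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, ebm.predict(X_test)))

# Global explanation: per-feature shape functions and importances.
show(ebm.explain_global())
```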

Shapley summary plots: the latest addition to H2O.ai’s Explainability arsenal

Originally published at https://www.h2o.ai on April 21, 2021. It is impossible to deploy successful AI models without analyzing the risks involved. Model overfitting, perpetuation of historical human bias, and data drift are some of the concerns that must be addressed before putting models into production. At H2O.ai, Machine Learning …
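A Shapley summary plot shows, for every observation and feature, the feature's signed impact on the model's prediction. H2O's implementation lives in its own Explainability toolkit; as a stand-in, here is a minimal sketch of the same plot type using the open-source shap package with an XGBoost model (model and dataset are placeholders):

```python
# Illustrative sketch of a Shapley summary plot using the open-source
# `shap` package (H2O's implementation differs, but the plot type is the same).
import shap
import xgboost
from sklearn.datasets import load_diabetes

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor(n_estimators=100).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Each point is one observation: x-position = impact on the prediction,
# color = feature value, rows ordered by overall importance.
shap.summary_plot(shap_values, X)
```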

TCAV: Interpretability Beyond Feature Attribution

An overview of Google AI’s model interpretability technique, which explains predictions in terms of human-friendly concepts: how convolutional neural networks see the world. As Sundar Pichai put it, “It’s not enough to know if a model works; we need to know how it works.” The emphasis today is slowly shifting from model predictions alone toward model interpretability. The real essence of interpretability, however, …
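At its core, TCAV trains a linear classifier to separate concept activations from random activations at a chosen layer; the classifier's normal vector is the Concept Activation Vector (CAV), and the TCAV score is the fraction of class examples whose class-logit gradient points along it. A schematic numpy/scikit-learn sketch (not Google's tcav library; all arrays here are random placeholders):

```python
# Schematic sketch of the core TCAV computation (not Google's tcav library).
# Assumes activations and logit gradients at one layer have already been
# extracted from the network; arrays here are random placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 64  # activation dimensionality at the chosen layer

concept_acts = rng.normal(0.5, 1.0, size=(100, d))  # e.g. "striped" images
random_acts = rng.normal(0.0, 1.0, size=(100, d))   # random counterexamples

# 1. The CAV is the normal to a linear classifier separating the two sets.
clf = LogisticRegression(max_iter=1000).fit(
    np.vstack([concept_acts, random_acts]),
    np.array([1] * 100 + [0] * 100),
)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# 2. TCAV score: fraction of class examples whose class-logit gradient
#    has a positive directional derivative along the CAV.
grads = rng.normal(size=(200, d))  # d(logit_class)/d(activations), per example
tcav_score = np.mean(grads @ cav > 0)
print(f"TCAV score: {tcav_score:.2f}")
```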

Interpretable Machine Learning

Extracting human-understandable insights from any Machine Learning model. Originally published here. It’s time to get rid of the black boxes and cultivate trust in Machine Learning. In his book “Interpretable Machine Learning”, Christoph Molnar beautifully encapsulates the essence of ML interpretability through this example: imagine you are a Data Scientist and in your free time …
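One model-agnostic way to extract such insights from any fitted model is permutation importance: shuffle one feature at a time and measure the drop in held-out performance. A minimal sketch with scikit-learn (model and dataset are placeholders):

```python
# Minimal sketch: permutation importance, a model-agnostic way to rank
# features for any fitted estimator (model and dataset are placeholders).
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_wine(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure the drop in test accuracy.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```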