Interpretable or Accurate? Why Not Both?
Building Interpretable Boosting Models with InterpretML

As summed up by Miller, interpretability refers to the degree to which a human can understand the cause of a decision. A common notion in the machine learning community is that a trade-off exists between accuracy and interpretability: the learning methods that are more accurate tend to be less interpretable, and vice versa.
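As a quick taste of what the post covers, here is a minimal sketch of fitting an Explainable Boosting Machine with InterpretML; the dataset and parameter choices below are illustrative, not taken from the original post.

```python
# Minimal sketch (assumptions: scikit-learn's breast cancer data stands in
# for whatever tabular dataset you actually care about).
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

# Load an example tabular classification dataset as a DataFrame.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An EBM is a boosted generalized additive model: one shape function per
# feature, so each prediction decomposes into per-feature contributions.
ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X_train, y_train)

print("held-out accuracy:", ebm.score(X_test, y_test))

# Global explanation: per-feature contribution curves and importances.
show(ebm.explain_global())
```

Because the model stays additive in its features, the accuracy of boosting is retained while every prediction remains traceable to individual feature effects, which is the point of the title.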