Computer Vision News - April 2020

Code With Us

A closer step towards explainability in deep learning heart disease prediction
by Ioannis Valasakis

Just back with the next part of the series on model explainability! Meanwhile, a lot has changed in the world. COVID-19, a pandemic caused by a new coronavirus, is spreading very fast. Many countries are in isolation, conferences for the year are cancelled, and this has a great effect on people's lives, on families and, of course, on science. Wishing everybody well, and let's hope we come out of this stronger, keep some positive remote-working habits, and use our knowledge to explain problems and our humanity to care for others.

Last month, we looked at using a random forest classifier on heart data and the possible correlations between its features. Using the ML explainability tools, we can now start interpreting the model's results.

One very important tool which aids the understanding of a machine learning model is permutation importance. Feature values are randomly shuffled, one column at a time, and the performance of the model is measured before and after. Performance can be measured with a choice of standard metrics, and the scores that permutation importance returns represent the change in the performance of the trained model after permutation. Important features are usually more sensitive to the shuffling process and will thus result in higher importance scores.

import eli5
from eli5.sklearn import PermutationImportance

perm = PermutationImportance(model, random_state=1).fit(X_test, y_test)
eli5.show_weights(perm, feature_names=X_test.columns.tolist())

This will return a table listing each feature next to its weight, with the most important features at the top. If you are interested in more details and want to explore this module further, have a look at the ELI5 documentation.
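The snippet above assumes a trained model and a held-out test set. For completeness, here is a minimal sketch of that setup; the file name heart.csv and the target column name are assumptions for illustration, not taken from the article:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load the heart data; the file and column names are assumptions.
data = pd.read_csv("heart.csv")
X = data.drop(columns=["target"])
y = data["target"]

# Hold out a test set so importances are measured on unseen data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=1
)

# Last month's random forest classifier, refit here for completeness.
model = RandomForestClassifier(n_estimators=100, random_state=1)
model.fit(X_train, y_train)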

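To make the shuffling idea concrete, here is a small, self-contained sketch of the computation permutation importance performs, written directly against scikit-learn rather than ELI5; the helper name manual_permutation_importance is ours, and accuracy stands in for whichever metric you prefer:

import numpy as np
from sklearn.metrics import accuracy_score

def manual_permutation_importance(model, X_test, y_test, n_repeats=5, seed=1):
    """Mean drop in accuracy per feature when that column alone is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = accuracy_score(y_test, model.predict(X_test))
    importances = {}
    for col in X_test.columns:
        drops = []
        for _ in range(n_repeats):
            X_shuffled = X_test.copy()
            # Shuffle only this column, leaving every other feature intact.
            X_shuffled[col] = rng.permutation(X_shuffled[col].to_numpy())
            drops.append(baseline - accuracy_score(y_test, model.predict(X_shuffled)))
        importances[col] = float(np.mean(drops))
    # Features whose shuffling hurts the score most come out on top.
    return dict(sorted(importances.items(), key=lambda kv: -kv[1]))

Calling manual_permutation_importance(model, X_test, y_test) should rank roughly the same features at the top as the ELI5 table above, since both measure the same quantity: the change in performance after permutation.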