A machine learning application typically outputs the probability of an event without explaining why. Because some models, such as neural networks, tend to be overconfident on data that differs from what they were trained on, their probability estimates can be completely wrong. Such severe errors can erode your end-users' trust, especially during the early phases of the model life cycle, when they occur more frequently.
To improve end-users' trust, Ixor uses the intermediate results (feature maps) of the model to give more context to the prediction. The key idea is that similar inputs generate similar feature maps. The similarity between feature maps can be computed and stored, yielding a database of how similar inputs are to each other.
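The idea above can be sketched as follows. This is a minimal illustration, not Ixor's actual implementation: the feature maps are stand-in random vectors, whereas in practice they would be flattened intermediate activations extracted from a trained model, and cosine similarity is one reasonable choice of similarity measure.

```python
import numpy as np

def cosine_similarity(query: np.ndarray, database: np.ndarray) -> np.ndarray:
    """Cosine similarity between a query vector and each row of a matrix."""
    query = query / np.linalg.norm(query)
    database = database / np.linalg.norm(database, axis=1, keepdims=True)
    return database @ query

# Stand-in feature maps: one row per training example. In practice these
# would be intermediate activations taken from the model, flattened.
rng = np.random.default_rng(0)
train_features = rng.normal(size=(100, 64))

# A new input whose feature map closely resembles training example 42.
query_features = train_features[42] + 0.01 * rng.normal(size=64)

sims = cosine_similarity(query_features, train_features)
top_k = np.argsort(sims)[::-1][:5]  # indices of the most similar training inputs
```

Retrieving the `top_k` training examples (and showing them to the end-user alongside the prediction) is what provides the extra context.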
A machine learning model is a representation of the data it has been trained on, so it can only be as good as that data. If the model is applied to data unlike anything in the training set, no similar training examples will be retrieved for that input. The absence of similar examples signals to end-users that they have hit the limitations of the model and therefore should not trust its output.
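This "no similarities" signal can be turned into a simple trust flag. The sketch below assumes a threshold on the best similarity score; the threshold value and the function name are illustrative and would need tuning per model and dataset.

```python
import numpy as np

# Assumed cutoff: below this, even the best training-set match is too weak.
SIMILARITY_THRESHOLD = 0.5

def is_trustworthy(similarities: np.ndarray,
                   threshold: float = SIMILARITY_THRESHOLD) -> bool:
    """True when at least one stored training input resembles the query."""
    return bool(np.max(similarities) >= threshold)

print(is_trustworthy(np.array([0.10, 0.05, 0.20])))  # far from training data
print(is_trustworthy(np.array([0.10, 0.92, 0.20])))  # a close match exists
```

When the flag is `False`, the application can warn the end-user instead of silently presenting an overconfident probability.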
The method also works in the other direction: when the model is given a difficult input, the retrieved similar inputs can argue in favor of the model's decision.
More specific use cases of this method are described on our Medium blog.