Paper Accepted at the International Conference on Case-Based Reasoning
The automatic acquisition of similarity measures for a CBR system is appealing as it frees the system designer from the tedious task of defining them manually. However, acquiring similarity measures with a machine learning approach typically results in a black-box representation of similarity whose seemingly magical combination of high precision and low explainability may decrease a human user's trust in the system. In our contribution to this year's ICCBR conference (ICCBR 2015), we target this problem by suggesting a method to induce a human-readable, easily understandable, and thus potentially trustworthy representation of similarity from a previously learned black-box similarity measure. Our experimental evaluations support the claim that, given a highly precise learned similarity measure, we can induce a less powerful but human-understandable representation of it whose accuracy is only marginally lower. For details see Publications.
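To illustrate the general idea, and not the exact method of the paper, the sketch below distills a black-box similarity model into a shallow decision tree whose splits can be read directly. The synthetic case pairs, feature names, and model choices are all illustrative assumptions.

```python
# A minimal sketch of the general idea (not the paper's algorithm):
# distill a learned black-box similarity measure into a small, readable
# decision tree and compare how much accuracy is lost.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor, export_text
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic case pairs: each row holds the attribute-wise differences
# between two cases; the target is a ground-truth similarity in (0, 1].
X = rng.uniform(-1.0, 1.0, size=(2000, 4))
y = np.exp(-np.abs(X) @ np.array([0.6, 0.25, 0.1, 0.05]))  # hidden weights

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: learn a precise but opaque ("black box") similarity measure.
black_box = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Step 2: induce a human-readable surrogate by fitting a shallow tree
# to the black box's *predictions* rather than the original targets.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

print("black box R^2:", round(black_box.score(X_test, y_test), 3))
print("surrogate R^2:", round(surrogate.score(X_test, y_test), 3))
print(export_text(surrogate, feature_names=[f"d_attr{i}" for i in range(4)]))
```

Printing the surrogate yields a handful of threshold rules over attribute differences, the kind of representation a user can actually inspect, at the cost of a somewhat lower test score than the black box, which mirrors the trade-off evaluated in the paper.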