InterpretME: A Tool for Interpretations of Machine Learning Models over Knowledge Graphs
Machine learning models are often complex and commonly referred to as black boxes. Understanding their decision-making process is crucial in domains such as healthcare, where a model's outcome must be trustworthy. The effectiveness of existing XAI frameworks, especially for algorithms that work with knowledge graphs as opposed to tabular data, remains an open research question. Integrating an interpretability layer can make these models trustworthy and help decision-makers better understand the interpretations of the decisions that led to a model's output.
Our tool: we propose an analytical tool, named InterpretME, for tracing and explaining predictive models built over data collected from both knowledge graphs and conventional datasets.