| Name: | Description: | Size: | Format: |
|---|---|---|---|
| | | 7.66 MB | Adobe PDF |
Abstract(s)
Current AI solutions, especially black-box models, exhibit limitations that include bias and a lack of transparency. Particularly in high-risk applications, such as medical diagnosis or infrastructure management, the need to understand the reasoning of such models gives rise to post-hoc explanation methods. Knowledge Graphs represent real-world entities and their relations in a graph structure with explicit semantics and support applications ranging from recommendation systems to question answering and data mining. Knowledge Graph Embeddings have emerged as a way of representing entities for downstream tasks, such as link prediction or node classification, by producing vector representations of entities in an embedding space that aim to preserve semantic and structural properties while remaining amenable to further computation. However, they do so at the cost of the inherent explainability of knowledge graphs: most approaches yield vectors whose dimensions carry no explicit meaning.
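As a purely illustrative aside, the sketch below shows the kind of vector representation the abstract refers to, using a minimal TransE-style scoring function. The entities, relation names, and dimensionality are hypothetical examples and are not taken from the thesis.

```python
# Minimal TransE-style sketch: entities and relations are embedded as vectors,
# and a triple (head, relation, tail) is scored by how close head + relation
# lands to tail in the embedding space. All names and values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
dim = 8  # hypothetical embedding dimensionality

# Hypothetical entity and relation embeddings (normally learned from the KG).
entities = {e: rng.normal(size=dim) for e in ["Lisbon", "Portugal", "Porto"]}
relations = {r: rng.normal(size=dim) for r in ["capitalOf", "locatedIn"]}

def transe_score(head: str, relation: str, tail: str) -> float:
    """Lower score = more plausible triple under the TransE assumption h + r ≈ t."""
    h, r, t = entities[head], relations[relation], entities[tail]
    return float(np.linalg.norm(h + r - t))

print(transe_score("Lisbon", "capitalOf", "Portugal"))
```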
This work builds on current approaches to explain node classification models based on Knowledge Graph Embeddings, a problem not tackled by existing methods. It applies Explainable AI paradigms to design, implement, and test two novel methods, LOFI and C-KEE, which are perturbation-based and use counterfactuals, in the form of necessary and sufficient explanations, to identify the facts or entities in the Knowledge Graph that most profoundly affect an entity's classification.
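To make the perturbation-based, counterfactual idea concrete, the following is a minimal sketch of the general scheme: drop facts involving a target entity and observe whether the classifier's prediction changes. The classifier interface and triple format are assumptions for illustration; this is not the LOFI or C-KEE algorithm itself.

```python
# Hedged sketch of perturbation-based counterfactual explanation for node
# classification. A fact whose individual removal flips the prediction acts as
# a "necessary" explanation; a subset of facts that alone preserves the
# original prediction acts as a "sufficient" one. The `predict` callable and
# the triples it consumes are hypothetical.
from typing import Callable, List, Tuple

Triple = Tuple[str, str, str]  # (head, relation, tail)

def necessary_facts(entity: str,
                    facts: List[Triple],
                    predict: Callable[[str, List[Triple]], str]) -> List[Triple]:
    """Facts whose individual removal changes the entity's predicted class."""
    original = predict(entity, facts)
    necessary = []
    for fact in facts:
        reduced = [f for f in facts if f != fact]
        if predict(entity, reduced) != original:
            necessary.append(fact)
    return necessary

def is_sufficient(entity: str,
                  subset: List[Triple],
                  facts: List[Triple],
                  predict: Callable[[str, List[Triple]], str]) -> bool:
    """A subset is sufficient if, on its own, it preserves the original prediction."""
    return predict(entity, subset) == predict(entity, facts)
```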
Extensive experiments were conducted on several benchmark datasets. LoFI showed mixed results, successfully generating only sufficient explanations. C-KEE proved a superior solution, significantly outperforming the baselines in almost all scenarios and across several representative Knowledge Graph Embedding methods, while producing empirically sound and intuitive explanations with minimal computation time.
Description
Master's thesis in Data Science, 2024, Universidade de Lisboa, Faculdade de Ciências
Keywords
knowledge graphs; knowledge graph embeddings; graph neural networks; machine learning; explainable artificial intelligence; Master's theses - 2024
