 
Publication

Explaining Predictions from Node Classification with Knowledge Graph Embeddings

datacite.subject.fos: Departamento de Informática
dc.contributor.advisor: Pesquita, Cátia, 1980-
dc.contributor.author: Paulino, Filipe José Bastos Benvindo
dc.date.accessioned: 2024-11-20T11:28:47Z
dc.date.available: 2024-11-20T11:28:47Z
dc.date.issued: 2024
dc.date.submitted: 2024
dc.description: Master's thesis, Data Science, 2024, Universidade de Lisboa, Faculdade de Ciências
dc.description.abstract: Current AI solutions, especially black-box models, exhibit limitations that may include bias and a lack of transparency. Especially in high-risk applications, such as medical diagnosis or infrastructure management, the need to understand the reasoning of such models gives rise to post-hoc explanation methods. Knowledge Graphs represent real-world entities and their relations in a graph structure with explicit semantics, and they support several applications, from recommendation systems to question answering and data mining. Knowledge Graph Embeddings have emerged as a solution for representing entities for downstream tasks, such as link prediction or node classification, by producing vector representations of entities in an embedding space that aims to preserve syntactic and structural properties while being amenable to further computation. However, they do so at the cost of the inherent explainability of knowledge graphs: most approaches produce vectors with no directly interpretable meaning. This work builds on existing approaches to explain node classification models based on Knowledge Graph Embeddings, a problem not tackled by current methods. It applies Explainable AI paradigms to design, implement and test two novel perturbation-based methods, LoFI and C-KEE, which use counterfactuals, in the form of necessary and sufficient explanations, to identify the facts or entities in the Knowledge Graph that most profoundly affect an entity's classification. Extensive experiments were conducted on different benchmark datasets. LoFI showed mixed results, successfully generating only sufficient explanations. C-KEE proved a superior solution: it significantly outperformed the baselines in almost all scenarios, across several representative Knowledge Graph embedding methods, while producing empirically sound and intuitive explanations in minimal computation time.
dc.identifier.tid: 203739892
dc.identifier.uri: http://hdl.handle.net/10400.5/95461
dc.language.iso: eng
dc.subject: knowledge graphs
dc.subject: knowledge graph embeddings
dc.subject: graph neural networks
dc.subject: machine learning
dc.subject: explainable artificial intelligence
dc.subject: Master's theses - 2024
dc.title: Explaining Predictions from Node Classification with Knowledge Graph Embeddings
dc.type: master thesis
dspace.entity.type: Publication
rcaap.rights: openAccess
rcaap.type: masterThesis
thesis.degree.name: Master's thesis in Data Science
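
The abstract above describes perturbation-based, counterfactual explanations built from necessary and sufficient facts. As an illustration only, the sketch below shows the general perturbation idea under toy assumptions: the random entity vectors, the mean-of-triples embedding, the fixed logistic scorer, and the entity names are all hypothetical stand-ins, and the code is a minimal sketch of the paradigm, not a reproduction of LoFI or C-KEE as implemented in the thesis.

# Illustrative perturbation-based explanation sketch (toy stand-ins only;
# not the thesis's LoFI or C-KEE implementations).
import numpy as np

rng = np.random.default_rng(0)

# Toy Knowledge Graph facts that mention the target entity (hypothetical names).
target = "e42"
triples = [(target, "locatedIn", "e7"),
           (target, "hasType", "e3"),
           ("e9", "citedBy", target)]

dim = 8
vectors = {}

def lookup(name):
    # Assign a fixed random vector per entity/relation: a toy stand-in for a
    # trained Knowledge Graph Embedding model.
    if name not in vectors:
        vectors[name] = rng.normal(size=dim)
    return vectors[name]

def embed_target(kept_triples):
    # Toy entity embedding: mean of head+relation+tail vectors of the kept facts.
    parts = [lookup(h) + lookup(r) + lookup(t) for h, r, t in kept_triples]
    return np.mean(parts, axis=0) if parts else np.zeros(dim)

# Hypothetical frozen node classifier: probability of the predicted class.
w = rng.normal(size=dim)
def predict_proba(embedding):
    return 1.0 / (1.0 + np.exp(-embedding @ w))

base = predict_proba(embed_target(triples))

# Perturbation scores per fact:
#  - "necessary"-style: drop in confidence when the fact is removed
#  - "sufficient"-style: confidence retained when only this fact is kept
for fact in triples:
    without = predict_proba(embed_target([t for t in triples if t != fact]))
    alone = predict_proba(embed_target([fact]))
    print(fact, "necessary:", round(base - without, 3),
          "sufficient:", round(alone - base, 3))

Ranking the facts by these two scores yields a rough ordering of candidate explanations under these toy assumptions; it illustrates the perturbation paradigm only, without the counterfactual search strategies of the actual methods.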

Ficheiros

Principais
A mostrar 1 - 1 de 1
A carregar...
Miniatura
Nome:
TM_Filipe_Paulino.pdf
Tamanho:
7.66 MB
Formato:
Adobe Portable Document Format
Licença
A mostrar 1 - 1 de 1
Miniatura indisponível
Nome:
license.txt
Tamanho:
1.2 KB
Formato:
Item-specific license agreed upon to submission
Descrição: