Publication

Investigating Large Language Models ability to evaluate heading-related accessibility barriers

datacite.subject.fos: Engineering and Technology::Electrical, Electronic and Informatics Engineering
dc.contributor.advisor: Duarte, Carlos Alberto Pacheco dos Anjos
dc.contributor.advisor: Pereira, Letícia Seixas
dc.contributor.author: Costa, Miguel Santos Pedro Soares da
dc.date.accessioned: 2025-01-10T15:55:57Z
dc.date.available: 2025-01-10T15:55:57Z
dc.date.issued: 2024
dc.date.submitted: 2024
dc.description: Master's Thesis, Informatics Engineering, 2024, Universidade de Lisboa, Faculdade de Ciências
dc.description.abstract: This thesis investigates the ability of Large Language Models (LLMs) to identify and evaluate heading-related accessibility barriers in HTML documents, with a focus on improving web accessibility. Headings play a crucial role in ensuring web content is accessible, yet they are often misused, leading to barriers that impact users, particularly those relying on assistive technologies. The study aims to evaluate how effectively LLMs, such as GPT-4o and Llama 3.1, can detect specific heading-related issues. To achieve this, I identified and categorized common heading-related accessibility issues, such as "Excessive Use of Headings", "Misleading or Confusing Headings", and "Empty Accessible Name". A set of test webpages was created to simulate these issues, and specific questions were designed to prompt the LLMs to analyze and evaluate these accessibility barriers. The thesis explores the accuracy and consistency of LLMs in responding to these prompts, highlighting their ability to address certain issues while encountering limitations in others. The study further examines the impact of context, session, and queue structures (Singular and Connected Queues) on LLM performance. Results show that including contextual information significantly improves LLM accuracy, while session data and queue structures have a lesser impact. Overall, the thesis presents a system for improving LLMs' ability to evaluate webpages' accessibility and suggests future work on refining prompts and leveraging LLMs for real-time accessibility repairs. This research contributes to the ongoing exploration of how LLMs can enhance web accessibility, specifically in addressing heading-related issues, offering a foundation for further advancements in LLM-driven evaluation and remediation systems.
dc.identifier.tid: 203875362
dc.identifier.uri: http://hdl.handle.net/10400.5/97081
dc.language.iso: eng
dc.subject: Web Accessibility
dc.subject: Large Language Models
dc.subject: Headings
dc.subject: Chat-GPT
dc.subject: Llama
dc.subject: Master's theses - 2024
dc.title: Investigating Large Language Models ability to evaluate heading-related accessibility barriers
dc.type: master thesis
dspace.entity.type: Publication
rcaap.rights: openAccess
rcaap.type: masterThesis
thesis.degree.name: Mestrado em Engenharia Informática (Master's in Informatics Engineering)
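The abstract above describes building test webpages that reproduce heading-related barriers and prompting LLMs such as GPT-4o to detect them. Below is a minimal sketch of that kind of check, assuming the OpenAI Python client; the HTML fragment and prompt wording are illustrative only and are not taken from the thesis's test pages or prompt set.

```python
from openai import OpenAI

# Hypothetical HTML fragment reproducing two of the barrier categories named
# in the abstract: an "Empty Accessible Name" heading and a skipped heading
# level. It is not one of the thesis's actual test webpages.
TEST_HTML = """
<main>
  <h1>Product catalogue</h1>
  <h2></h2>                  <!-- empty accessible name -->
  <h4>Shipping options</h4>  <!-- skips the h3 level -->
  <p>Delivery takes 3 to 5 business days.</p>
</main>
"""

# Illustrative question; the thesis's actual prompts, context, and
# session/queue (Singular and Connected) structures are not reproduced here.
PROMPT = (
    "Audit the HTML below for heading-related accessibility barriers "
    "(e.g. empty accessible names, skipped heading levels, misleading "
    "headings). List each barrier and the offending element.\n\n" + TEST_HTML
)

client = OpenAI()  # expects OPENAI_API_KEY in the environment
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": PROMPT}],
)
print(response.choices[0].message.content)
```

Per the abstract's finding that contextual information significantly improves accuracy, a fuller version of such a prompt would also include surrounding page context; that is omitted here for brevity.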

Files

Original bundle
Name: TM_Miguel_Costa.pdf
Size: 4.06 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 1.2 KB
Format: Item-specific license agreed upon to submission