Publication

Leveraging Large Language Models to Support Web Accessibility Evaluation

dc.contributor.author: Costa, Joao Miguel Marques da
dc.contributor.institution: Faculty of Sciences
dc.contributor.institution: Department of Informatics
dc.contributor.supervisor: Duarte, Carlos
dc.contributor.supervisor: Pereira, Letícia Seixas
dc.date.accessioned: 2026-01-17T14:40:01Z
dc.date.available: 2026-01-17T14:40:01Z
dc.date.issued: 2025
dc.description: Master's Thesis, Informatics Engineering, 2025, Universidade de Lisboa, Faculdade de Ciências
dc.description.abstract: Web accessibility remains a critical challenge, as automated tools identify only a subset of issues and conformance checking still requires extensive manual verification. Large Language Models (LLMs), with their ability to process natural language and reason over structured inputs, offer an opportunity to address evaluation tasks that involve contextual or interaction-dependent understanding. This thesis investigates how LLMs can support the assessment of form-related accessibility criteria that are not reliably captured by existing rule-based tools. To this end, an evaluation pipeline was developed using DSPy, combining automated form interaction, DOM mutation observation, retrieval grounding, and fine-tuning trainsets. A small annotated dataset was constructed to guide and stabilize model outputs. In addition, experiments compared different LLMs against manual benchmarks and traditional accessibility checkers. The pipeline was applied to four representative criteria that automated evaluation tools traditionally do not address: correct use of the required attribute, failure to identify errors (or invalid identification of them), ambiguous or generic error messages, and error states indicated only by color. Results indicate that LLMs can provide meaningful and actionable judgments in these contexts, particularly when supplied with structured inputs and retrieval-based grounding. They showed strengths in detecting missing feedback and in distinguishing vague from precise error messages, while also revealing limitations related to dataset scale, model variability, and extraction fragility. The findings suggest that, rather than replacing rule-based engines, LLMs are best positioned as a complementary layer, extending coverage to nuanced cases and reducing the burden of manual auditing.
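
To make the abstract's approach concrete, the sketch below shows how one of the four criteria (distinguishing vague from precise error messages) might be framed as a DSPy signature and module. It is a minimal illustration only: the field names, model choice, and example inputs are assumptions for this sketch, not the thesis's actual implementation.

```python
# Illustrative sketch of a DSPy-based judgment step like the one the
# abstract describes; names and model choice are assumptions, not the
# thesis's actual code.
import dspy

class ErrorMessageJudgment(dspy.Signature):
    """Decide whether a form's error message is precise or vague/generic."""
    form_html: str = dspy.InputField(desc="serialized DOM of the form after submission")
    error_message: str = dspy.InputField(desc="error text shown to the user")
    verdict: str = dspy.OutputField(desc="'precise' or 'vague'")
    rationale: str = dspy.OutputField(desc="one-sentence justification")

# Configure any LLM backend DSPy supports (model name is a placeholder).
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

judge = dspy.ChainOfThought(ErrorMessageJudgment)
result = judge(
    form_html='<form><label>Email</label><input type="email" aria-invalid="true"></form>',
    error_message="Invalid input.",
)
print(result.verdict, "-", result.rationale)
```

A module like this could then be compiled against a small annotated trainset (e.g., with dspy.BootstrapFewShot) to stabilize outputs, mirroring the abstract's use of annotated data and fine-tuning trainsets to guide model behavior.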
dc.format: application/pdf
dc.identifier.tid: 204177359
dc.identifier.uri: http://hdl.handle.net/10400.5/116678
dc.language.iso: eng
dc.subject: Web Accessibility
dc.subject: Large Language Models
dc.subject: HTML Forms
dc.subject: Automated Accessibility Evaluation
dc.subject: Human-Computer Interaction
dc.title: Leveraging Large Language Models to Support Web Accessibility Evaluation
dc.type: master thesis
dspace.entity.type: Publication
rcaap.rights: openAccess

Files

Name: TM_Joao_Costa.pdf
Size: 579.6 KB
Format: Adobe Portable Document Format