| Name: | Description: | Size: | Format: |
|---|---|---|---|
| | | 579.6 KB | Adobe PDF |
Authors
Advisor(s)
Abstract(s)
Web accessibility remains a critical challenge: automated tools identify only a subset of issues, and conformance checking still requires extensive manual verification. Large Language Models (LLMs), with their ability to process natural language and reason over structured inputs, offer an opportunity to address evaluation tasks that involve contextual or interaction-dependent understanding. This thesis investigates how LLMs can support the assessment of form-related accessibility criteria that are not reliably captured by existing rule-based tools. To this end, an evaluation pipeline was developed using DSPy, combining automated form interaction, DOM mutation observation, retrieval grounding, and fine-tuning training sets. A small annotated dataset was constructed to guide and stabilize model outputs. In addition, experiments compared different LLMs against manual benchmarks and traditional accessibility checkers. The pipeline was applied to four representative criteria that are traditionally not addressed by automated evaluation tools: correct use of the `required` attribute, failure to identify errors or invalid error identification, ambiguous or generic error messages, and error states indicated only by color. Results indicate that LLMs can provide meaningful and actionable judgments in these contexts, particularly when supplied with structured inputs and retrieval-based grounding. They showed strengths in detecting missing feedback and in distinguishing vague from precise error messages, while also revealing limitations related to dataset scale, model variability, and extraction fragility. The findings suggest that, rather than replacing rule-based engines, LLMs are best positioned as a complementary layer, extending coverage to nuanced cases and reducing the burden of manual auditing.
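The abstract contrasts the LLM pipeline with rule-based checks such as verifying the `required` attribute on form controls. As a minimal illustrative sketch (not code from the thesis), the snippet below shows the kind of static, rule-based check that existing tools automate: flagging inputs announced as required via `aria-required` but lacking the native HTML `required` attribute. The class name and heuristic are hypothetical, chosen only to illustrate the category of check.

```python
from html.parser import HTMLParser

class RequiredAttrChecker(HTMLParser):
    """Illustrative rule-based check (hypothetical, not from the thesis):
    flag inputs that declare aria-required="true" without also carrying
    the native HTML `required` attribute."""
    def __init__(self):
        super().__init__()
        self.flagged = []  # names of inputs with the mismatch

    def handle_starttag(self, tag, attrs):
        if tag == "input":
            a = dict(attrs)  # boolean attrs like `required` map to None
            if a.get("aria-required") == "true" and "required" not in a:
                self.flagged.append(a.get("name", "?"))

checker = RequiredAttrChecker()
checker.feed('<form><input name="email" aria-required="true">'
             '<input name="age" required></form>')
print(checker.flagged)  # -> ['email']
```

Checks like this are purely syntactic, which is precisely why the criteria the thesis targets (ambiguous error messages, color-only error states) fall outside their reach and motivate an LLM-based complementary layer.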
Description
Master's thesis, Engenharia Informática, 2025, Universidade de Lisboa, Faculdade de Ciências
Keywords
Web Accessibility; Large Language Models; HTML Forms; Automated Accessibility Evaluation; Human-Computer Interaction
