Publication record

Emovideo : automatic prediction of emotional responses to videos using its cinematic features

datacite.subject.fos: Departamento de Informática [pt_PT]
dc.contributor.advisor: Fonseca, Manuel João Caneira Monteiro da, 1968-
dc.contributor.author: Graça, Silvana Moreira
dc.date.accessioned: 2022-07-21T12:58:40Z
dc.date.available: 2022-07-21T12:58:40Z
dc.date.issued: 2022
dc.date.submitted: 2021
dc.description: Master's thesis, Informatics Engineering (Software Engineering), Universidade de Lisboa, Faculdade de Ciências, 2022 [pt_PT]
dc.description.abstract: Movies can evoke intense emotions in their audiences and are often used in psychology to study emotion. Predicting how video content affects viewers' emotional responses has become a popular area of research in recent years due to its many applications. To provoke an emotional response, filmmakers use a variety of techniques while filming and editing a movie. However, there are few studies on how these techniques affect viewers' emotional responses and whether they alone can be used to predict those responses. In this work we developed a solution that predicts emotional responses to videos using cinematic features. To accomplish this goal, we started by studying cinematic techniques and their effects on viewers' emotions. Based on this study, we decided to focus on shot length, key lighting, shot type and camera movement. We then experimented with different methods to extract these features from videos. First, we used a trained model to segment videos into shots, and then segmented these shots into frames, key frames, dense-flow frames and frames with the subject and background isolated. Key lighting was computed from the contrast of each key frame. We trained and tested models to classify shot type and camera movement, with different Convolutional Neural Networks, parameters, types of frames and labels. The best accuracies achieved were 81% for shot type and 89% for camera movement. Lastly, we created models to predict valence and arousal values with classification and regression algorithms using all the extracted cinematic features. Overall, using only cinematic features, our method achieved results close to previous works, especially on error metrics. This shows that cinematic features affect viewers' valence and arousal and can be a tool for predicting exact values, while providing interpretability. [pt_PT]
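The abstract states that key lighting was computed from the contrast of each key frame. A minimal sketch of that step, assuming contrast is measured as normalized RMS contrast of grayscale intensities (the exact formula used in the thesis is not given here, so this metric is an assumption):

```python
import numpy as np

def key_lighting(frame: np.ndarray) -> float:
    """Estimate key lighting of a key frame as RMS contrast of its
    grayscale intensities, scaled to [0, 1].

    Hypothetical metric: the thesis derives key lighting from
    key-frame contrast, but its exact formula is assumed here.
    """
    # Collapse RGB to grayscale by averaging channels if needed.
    gray = frame.mean(axis=2) if frame.ndim == 3 else frame.astype(float)
    # Standard deviation of pixel intensities, normalized by the 8-bit range.
    return float(gray.std() / 255.0)

# Synthetic key frames: a flat mid-gray frame has zero contrast,
# a half-black / half-white frame has high contrast.
flat = np.full((4, 4, 3), 128, dtype=np.uint8)
split = np.zeros((4, 4, 3), dtype=np.uint8)
split[:, 2:] = 255

print(key_lighting(flat))   # → 0.0
print(key_lighting(split))  # → 0.5
```

In practice each key frame would come from a shot-segmentation step (e.g. decoded with a video library such as OpenCV) before this per-frame measure is taken.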
dc.identifier.tid: 202994880
dc.identifier.uri: http://hdl.handle.net/10451/53904
dc.language.iso: eng [pt_PT]
dc.subject: cinematic features [pt_PT]
dc.subject: emotion classification [pt_PT]
dc.subject: emotion estimation [pt_PT]
dc.subject: convolutional neural network [pt_PT]
dc.subject: affective video content analysis [pt_PT]
dc.subject: Master's theses - 2022 [pt_PT]
dc.title: Emovideo : automatic prediction of emotional responses to videos using its cinematic features [pt_PT]
dc.type: master thesis
dspace.entity.type: Publication
rcaap.rights: openAccess [pt_PT]
rcaap.type: masterThesis [pt_PT]
thesis.degree.name: Master's thesis in Informatics Engineering (Software Engineering) [pt_PT]
