Research Project
I predict, therefore I do not hallucinate: a longitudinal study testing the neurophysiological underpinnings of auditory verbal hallucinations
Publications
Attention to voices is increased in non-clinical auditory verbal hallucinations irrespective of salience
Publication . Castiajo, P.; Pinheiro, Ana P.
Emotional authenticity modulates affective and social trait inferences from voices
Publication . Pinheiro, Ana P.; Anikin, Andrey; Conde, Tatiana Magro; Sarzedas, João; Chen, Sinead; Scott, Sophie K.; Lima, César F.
The human voice is a primary tool for verbal and nonverbal communication.
Studies on laughter emphasize a distinction between spontaneous laughter,
which reflects a genuinely felt emotion, and volitional laughter, associated
with more intentional communicative acts. Listeners can reliably differentiate
the two. It remains unclear, however, whether they can detect authenticity in other
vocalizations, and whether authenticity determines the affective and social
impressions that we form about others. Here, 137 participants listened to
laughs and cries that could be spontaneous or volitional and rated them on
authenticity, valence, arousal, trustworthiness and dominance. Bayesian
mixed models indicated that listeners detect authenticity similarly well in
laughter and crying. Speakers were also perceived to be more trustworthy,
and in a higher arousal state, when their laughs and cries were spontaneous.
Moreover, spontaneous laughs were evaluated as more positive than volitional
ones, and we found that the same acoustic features predicted perceived authenticity
and trustworthiness in laughter: higher pitch, greater spectral variability, and
less voicing. For crying, associations between acoustic features and ratings
were less reliable. These findings indicate that emotional authenticity shapes
affective and social trait inferences from voices, and that the ability to detect
authenticity in vocalizations is not limited to laughter.
This article is part of the theme issue ‘Voice modulation: from origin and
mechanism to social impact (Part I)’.
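
The Bayesian mixed models mentioned in the abstract can be made concrete with a short sketch. The formula, the column names, and the software choice (bambi/PyMC) are assumptions for illustration only; the paper's actual model specification is not reproduced here.

```python
# A minimal sketch of a Bayesian mixed model for authenticity ratings,
# assuming a long-format data frame with one row per trial. The column
# names (rating, production, vocal_type, listener, stimulus) and the
# file name are hypothetical.
import bambi as bmb
import arviz as az
import pandas as pd

df = pd.read_csv("ratings_long.csv")  # hypothetical data file

# Fixed effects: production (spontaneous vs. volitional) crossed with
# vocalisation type (laugh vs. cry); random intercepts for listeners
# and stimuli.
model = bmb.Model(
    "rating ~ production * vocal_type + (1|listener) + (1|stimulus)",
    data=df,
)
idata = model.fit(draws=2000, chains=4)

# An interaction posterior concentrated near zero would indicate that
# authenticity is detected similarly well in laughter and crying.
print(az.summary(idata, var_names=["production:vocal_type"]))
```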
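
As an illustration of the acoustic measures named above, the sketch below computes rough proxies for the three reported predictors (pitch, spectral variability, voicing) from a mono audio file with librosa. The file name, frame parameters, and the exact operationalisations are assumptions; the study's own extraction pipeline may differ (e.g., Praat-based measures).

```python
# A minimal sketch of extracting proxies for the three acoustic
# predictors named in the abstract: pitch, spectral variability,
# and voicing. Assumes librosa is installed.
import numpy as np
import librosa

def describe_vocalisation(path: str) -> dict:
    y, sr = librosa.load(path, sr=None, mono=True)

    # Fundamental frequency via probabilistic YIN; unvoiced frames are NaN.
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )

    # Dispersion of the spectral centroid serves as a crude
    # "spectral variability" proxy.
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]

    return {
        "mean_pitch_hz": float(np.nanmean(f0)),          # higher in authentic laughs
        "spectral_variability": float(np.std(centroid)), # centroid dispersion
        "voiced_fraction": float(np.mean(voiced_flag)),  # less voicing was predictive
    }

# Hypothetical usage:
# print(describe_vocalisation("spontaneous_laugh_01.wav"))
```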
Expectancy changes the self-monitoring of voice identity
Publication . Johnson, Joseph; Belyk, Michel; Schwartze, Michael; Pinheiro, Ana P.; Kotz, Sonja
Self-voice attribution can become difficult when voice characteristics are ambiguous,
but functional magnetic resonance imaging (fMRI) investigations of such ambiguity are sparse. We utilized voice-morphing (self-other) to manipulate (un-)certainty
in self-voice attribution in a button-press paradigm. This allowed us to investigate
how levels of self-voice certainty alter activation in brain regions that monitor voice
identity and unexpected changes in voice playback quality. fMRI results confirmed
a self-voice suppression effect in the right anterior superior temporal gyrus (aSTG)
when self-voice attribution was unambiguous. Although the right inferior frontal
gyrus (IFG) was more active during a self-generated compared to a passively heard
voice, the putative role of this region in detecting unexpected self-voice changes
during the action was demonstrated only when hearing the voice of another speaker
and not when attribution was uncertain. Further research on the link between the right
aSTG and IFG is required and may establish a threshold for monitoring voice identity in
action. The current results have implications for a better understanding of the altered
experience of self-voice feedback in auditory verbal hallucinations.
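
Self-other voice morphing of the kind described above is typically done with dedicated morphing software; as a rough illustration of the idea, the sketch below linearly interpolates the magnitude spectrograms of a "self" and an "other" recording and resynthesises the result with Griffin-Lim. The file names and the 50% morph level are assumptions, and this is a simplified stand-in for the study's actual morphing procedure, not a reproduction of it.

```python
# A simplified illustration of creating an ambiguous self-other voice
# morph by interpolating magnitude spectrograms. Treat this only as a
# sketch; real studies use dedicated voice-morphing tools.
import numpy as np
import librosa
import soundfile as sf

def morph_voices(self_path: str, other_path: str, alpha: float = 0.5,
                 sr: int = 22050) -> np.ndarray:
    """alpha = 0.0 returns the self voice, 1.0 the other voice."""
    y_self, _ = librosa.load(self_path, sr=sr)
    y_other, _ = librosa.load(other_path, sr=sr)

    # Trim to a common length so the spectrograms align.
    n = min(len(y_self), len(y_other))
    S_self = np.abs(librosa.stft(y_self[:n]))
    S_other = np.abs(librosa.stft(y_other[:n]))

    # Linear interpolation in the magnitude domain, then phase
    # reconstruction via Griffin-Lim.
    S_morph = (1.0 - alpha) * S_self + alpha * S_other
    return librosa.griffinlim(S_morph)

# Hypothetical usage: a maximally ambiguous 50/50 morph.
# y = morph_voices("self_voice.wav", "other_voice.wav", alpha=0.5)
# sf.write("morph_50.wav", y, 22050)
```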
The perceived salience of vocal emotions is dampened in non-clinical auditory verbal hallucinations
Publication . Amorim, Maria; Roberto, Magda Sofia; Kotz, Sonja; Pinheiro, Ana P.
Introduction: Auditory verbal hallucinations (AVH) are a cardinal
symptom of schizophrenia but are also reported in the general
population without need for psychiatric care. Previous evidence
suggests that AVH may reflect an imbalance of prior
expectation and sensory information, and that altered salience
processing is characteristic of both psychotic and non-clinical
voice hearers. However, it remains to be shown how such an
imbalance affects the categorisation of vocal emotions under
perceptual ambiguity.
Methods: Neutral and emotional nonverbal vocalisations were
morphed along two continua differing in valence (anger;
pleasure), each including 11 morphing steps at intervals of
10%. College students (N = 234) differing in AVH proneness
(measured with the Launay-Slade Hallucination Scale) evaluated
the emotional quality of the vocalisations.
Results: Increased AVH proneness was associated with more
frequent categorisation of ambiguous vocalisations as ‘neutral’,
irrespective of valence. Similarly, the perceptual boundary for
emotional classification was shifted by AVH proneness:
participants needed more emotional information to categorise
a voice as emotional.
Conclusions: These findings suggest that emotional salience in
vocalisations is dampened as a function of increased AVH
proneness. This could be related to changes in the acoustic
representations of emotions or reflect top-down expectations
of less salient information in the social environment.
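
The boundary shift reported in the Results can be made concrete with a psychometric-function sketch: fit a logistic curve to the proportion of "emotional" responses across the 11 morph steps and read off the 50% point. The synthetic response data and the scipy-based fit below are illustrative assumptions, not the paper's analysis.

```python
# A minimal sketch of estimating the perceptual boundary along an
# 11-step morph continuum (0% to 100% emotional, in 10% steps) by
# fitting a logistic psychometric function. Response proportions
# here are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, boundary, slope):
    """Probability of an 'emotional' response at morph level x."""
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

morph_levels = np.arange(0.0, 1.01, 0.1)  # 11 steps at 10% intervals

# Hypothetical proportions of 'emotional' categorisations per step.
p_emotional = np.array([0.02, 0.05, 0.08, 0.15, 0.30, 0.55,
                        0.75, 0.88, 0.94, 0.97, 0.99])

(boundary, slope), _ = curve_fit(logistic, morph_levels, p_emotional,
                                 p0=[0.5, 10.0])

# A boundary above 0.5 means more emotional information is needed to
# categorise a voice as emotional -- the shift the abstract associates
# with higher AVH proneness.
print(f"perceptual boundary at {boundary:.0%} morphing")
```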
From Sound Perception to Automatic Detection of Schizophrenia: An EEG-Based Deep Learning Approach
Publication . Barros, Carla; Roach, Brian; Ford, Judith M.; Pinheiro, Ana P.; Silva, Carlos
Deep learning techniques have been applied to electroencephalogram (EEG) signals,
with promising applications in the field of psychiatry. Schizophrenia is one of the most
disabling neuropsychiatric disorders, often characterized by the presence of auditory
hallucinations. Auditory processing impairments have been studied using EEG-derived
event-related potentials and have been associated with clinical symptoms and cognitive
dysfunction in schizophrenia. Due to consistent changes in the amplitude of ERP
components, such as the auditory N100, some components have been proposed as
biomarkers of schizophrenia. In this paper, we examine altered patterns of electrical
brain activity during auditory processing and their potential to discriminate between
patients with schizophrenia and healthy subjects. Using deep convolutional neural
networks, we propose an architecture to perform this classification based on
multi-channel auditory-related EEG single trials,
recorded during a passive listening task. We analyzed the effect of the number of
electrodes used, as well as the laterality and distribution of the electrical activity over
the scalp. Results show that the proposed model is able to classify patients with
schizophrenia and healthy subjects with an average accuracy of 78% using only 5 midline channels
(Fz, FCz, Cz, CPz, and Pz). The present study shows the potential of deep learning
methods in the study of impaired auditory processing in schizophrenia with implications
for diagnosis. The proposed design can provide a base model for future developments
in schizophrenia research.
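
To give a concrete picture of the kind of architecture described, the PyTorch sketch below defines a small convolutional classifier over 5-channel single-trial EEG epochs. The layer sizes, the 512-sample epoch length, and all other details are assumptions for illustration; this is not the architecture from the paper.

```python
# A minimal PyTorch sketch of a CNN classifying single-trial EEG epochs
# from 5 midline channels (Fz, FCz, Cz, CPz, Pz) into schizophrenia vs.
# healthy control. Layer sizes and epoch length are assumptions.
import torch
import torch.nn as nn

class EEGConvNet(nn.Module):
    def __init__(self, n_channels: int = 5, n_samples: int = 512):
        super().__init__()
        self.features = nn.Sequential(
            # Temporal convolutions over the multi-channel epoch.
            nn.Conv1d(n_channels, 16, kernel_size=11, padding=5),
            nn.BatchNorm1d(16),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (n_samples // 16), 64),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(64, 2),  # schizophrenia vs. healthy control
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, n_samples) single-trial epochs.
        return self.classifier(self.features(x))

# Hypothetical usage on a batch of 8 single trials:
model = EEGConvNet()
logits = model(torch.randn(8, 5, 512))
print(logits.shape)  # torch.Size([8, 2])
```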
Funders
Funding agency
Fundação para a Ciência e a Tecnologia
Funding programme
Projetos de Investigação Científica e Desenvolvimento Tecnológico - 2014 (P2020)
Funding Award Number
PTDC/MHC-PCN/0101/2014
