
Corpus search results (sorted by the word one position after the keyword)

Click a hit's serial number to open the corresponding PubMed page.
1 mes) to long-duration cues (e.g., syllables, prosody).
2 tions based upon vocal inflection (affective prosody).
3 states recognized cross-culturally in speech prosody.
4 istic comprehension to alterations in speech prosody.
5 cessing of pitch changes in speech affecting prosody.
6 produced in either a happy, angry or neutral prosody.
7 ctor contributing to right lateralization of prosody.
8 usly associated with perception of emotional prosody.
9 brain showed particular sensitivity to happy prosody.
10 o words spoken with neutral, happy, or angry prosody.
11 nd the generation/comprehension of emotional prosody.
12 ng between acoustic features and emotions in prosody.
13 onous stimuli or failed to fully control for prosody.
14 and intonation-features often referred to as prosody.
15 could not accurately map visual and auditory prosodies.
16                               Whether or not prosody--a reportedly right-hemispheric faculty--involve
17  RH regions change to support both emotional prosody and also typically left-lateralized language fun
18  pitch, which is an important cue for speech prosody and for tonal languages such as Mandarin Chinese
19 capture the nuances of human speech, such as prosody and immediately hearing one's own voice.
20 sive overlap in neural resources involved in prosody and music processing, music perception seems to
21 itry disambiguate uncertainties in affective prosody and propositional linguistic content of language
22 be due to misaligned processing of emotional prosody and semantics.
23 rative online processing of fear and disgust prosody and semantics.
24 ctive aspects, including phonology, grammar, prosody and semantics.
25 aryngeal control of pitch-related aspects of prosody and song are coordinated by a hierarchically org
26  there is not clear brain lateralization for prosody and there may be bilateral representation for th
27 ech; through the voice in the form of speech prosody and through the face in the form of facial expre
28 al aspects of communication like pragmatics, prosody, and eye contact.
29 eaker's body movements) complements auditory prosody, and it is unclear how the brain temporally anal
30 y on tasks involving expression of affective prosody, and on tests of linguistic prosody, compared wi
31 cally driven combinatorial processing beyond prosody, and that linguistic content (i.e., structure an
32 e pSTS, a region known for processing speech prosody, and the orbitofrontal cortex and amygdala, brai
33 ilities-sentence processing versus emotional prosody-are supported by the left (LH) versus the right
34 vian syntax is more prevalent in spontaneous prosody, as compared to scripted speech.
35 we introduce a data-driven model for English prosody, based on large-scale analysis of spontaneous co
36 that when participants prepare for emotional prosody, bilateral ventral striatum is specifically acti
37 tences and phrases): alignment of syntax and prosody boosted EEG responses, whereas their misalignmen
38  also impaired in comprehension of affective prosody, but many patients with impairments in prosodic
39 gyrus (STG) is considered a critical hub for prosody, but the role of earlier auditory regions like H
40  statistical inference methods, we find that prosody can communicate at least 12 distinct kinds of em
41 tream decreased participants' performance in prosody categorization, arguing for a motor involvement
42  comparisons indicated that infants detected prosody changes across a broader range of conditions dur
43 y, infants demonstrated greater detection of prosody changes from IDS speech to ADS speech in native
44                                   Syntax and prosody coding occurred in the same frequency bands, whi
45 t impairments in identifying clear emotional prosody compared to healthy individuals.
46 ffective prosody, and on tests of linguistic prosody, compared with controls.
47 ently for noise-vocoded than clear emotional prosody comprehension.
48               While both the happy and angry prosody conditions exhibited right lateralized increases
49    This was true for both overt and imagined prosody conditions.
50                          Our vocal tone--the prosody--contributes a lot to the meaning of speech beyo
51 ralization of cortical regions mobilized for prosody control could point to efficient processing of s
52 p children to tap into the rhythm of speech (prosody), cueing them to prosodic markers of grammatical
53 sis is that sensitivity to emotion in speech prosody derives from the capacity to process music.
54 ase of the temporal analysis of multisensory prosody fluctuations in speech.
55 fway through each trial, the speaker changed prosody from infant-directed speech (IDS) to adult-direc
56 ctive from sensorimotor aspects of emotional prosody generation.
57 ging from multiple constraints (e.g., words, prosody, gesture) underlie structural priming.
58                                We found that prosody, gestures, and informative mouth movements each
59 -face communication, multimodal cues such as prosody, gestures, and mouth movements can play a crucia
60                                              Prosody has a vital function in speech, structuring a sp
61 or the neural processing stages of emotional prosody has suggested that auditory emotion is perceived
62                              However, visual prosody (i.e., a speaker's body movements) complements a
63 We examined sensitivity to emotion in speech prosody in a sample of individuals with congenital amusi
64 re (RH) appears to be dominant for affective prosody in adults, while the left hemisphere (LH) mediat
65 isual attention and detection of a change in prosody in audiovisual speech.
66 d the perception and adaptation of receptive prosody in autistic adolescents and two groups of non-au
67     The interactive time course of emotional prosody in the context of emotional semantics was invest
68  reported difficulty understanding emotional prosody in their daily lives, suggesting some awareness
69 cry vocalisations and for building blocks of prosody (intonation) over the first 6 months of life, mo
70 nce of a mapping between emotions and speech prosody is commonly assumed.
71                                       Speech prosody is essential for verbal communication.
72 ude that processing of both overt and covert prosody is reflected in the frequency-tagged neural resp
73 an integrative overview, arguing that speech prosody is subserved by the same anatomical and neuroche
74                                              Prosody is that quality of speech that imparts meaning b
75  asynchrony detection of visual and auditory prosodies leads to increased delta power in left motor c
76           Functional connectivity studies on prosody leave no doubt about the existence of such strea
77                           Specific emotional prosodies may therefore differentially influence emotion
78 icipants judged emotions in emotional speech prosody, nonverbal vocalizations (e.g., crying), and (si
79  shed light on the influence of language and prosody on infant attention and highlight the complexity
80 eted automatically (e.g., various aspects of prosody), or more deliberatively (e.g., description of r
81 c perception, speech recognition, and speech prosody perception in CI users.
82 previous research has investigated emotional prosody perception in these diseases under non-ideal lis
83    Previous research suggests that emotional prosody perception is impaired in neurodegenerative dise
84 dal neuroimaging and brain stimulation, that prosody perception takes dual routes along dorsal and ve
85 rization, arguing for a motor involvement in prosody perception.
86  to cognitive and motor aspects of emotional prosody preparation and production and is more strongly
87 cisely how sentence processing and emotional prosody processing are both organized in the intact RH o
88 nd non-human primate models, we investigated prosody processing in narrative speech, focusing on pitc
89 d auditory sentence processing and emotional prosody processing, comparing the overlap for these two
90 otemporal regions and overlap with emotional prosody processing?
91 while a second model proposes that emotional prosody relies heavily on basal ganglia.
92 ases involved in the generation of affective prosody remain unclear and debated.
93                            Hearing emotional prosody resulted in increased responses in a voice-sensi
94 peech amplitude envelope, resounding speaker prosody, temporally align with articulatory and body ges
95                                              Prosody, the musical facet of speech, is pivotal in huma
96  regarding how the brain perceives emotional prosody, the neural bases involved in the generation of
97  the underlying mechanisms of conversational prosody: They empirically inform and refine existing the
98  specifically employ rhythmic modulations of prosody to estimate the duration of upcoming sentences,
99 miliaris), specifically, by exploiting vocal prosody to signal elevation in space.
100 l attention, face recognition, and emotional prosody to the right hemisphere.
101 nces lacking overt prosodic cues (instructed prosody trial) and were instructed to imagine the prosod
102                         Following each overt prosody trial, participants were presented with a second
103 sodic contour present in the previous, overt prosody trial.
104 tic phrasing in the sentences (initial overt prosody trials).
105 esses by which people recognize emotion from prosody, US and Indian participants were asked to judge
106 n and expression of affective and linguistic prosody were tested in subjects with documented unilater
107 showed attenuated adaptivity in categorizing prosody, whereas they were equivalent to controls in ter
108 sia have some deficits recognizing emotional prosody, whereas those with the semantic variant show mo
109 o be mostly driven by changes in emotion and prosody, which are mainly captured by acoustic features.
110 ns production and comprehension of emotional prosody, while a second model proposes that emotional pr
111  than matched controls at decoding emotional prosody, with decoding rates for some emotions up to 20%
112 wocky controls with morphemes and sentential prosody, word lists with lexical content but no phrase s
113  to a large collection of intended emotional prosody, yielding more than 3,000 minutes of recordings.
