Corpus search results (sorted by the word one position after the keyword)
Click a serial number to open the corresponding PubMed page
1 domains, especially using images, text, and speech.
2 tical representation of concomitant auditory speech.
3 rehend, unlike auditory-only and audiovisual speech.
4 on appear to manifest in multiple domains of speech.
5 the face to recover vocal tract shape during speech.
6 ter listeners acquired knowledge of incoming speech.
7 e is feasible to capture vital properties of speech.
8 s of auditory information provided by visual speech.
9 of behavioral effects of ignored background speech.
10 nto syntactic structure to produce connected speech.
11 ixture, even if they occurred in the ignored speech.
12 from the changing shape of the mouth during speech.
13 underlying the construction of intelligible speech.
14 riants in nine patients with ASD and lack of speech.
15 presence of multi-talker babble or competing speech.
16 nonymous UPF3B variant in a male with absent speech.
17 y and speed remain far below that of natural speech.
18 ability in the correlates of expert backward speech.
19 nd larger, phasic responses to auditory-only speech.
20 iled spectrotemporal information from visual speech.
21 angular variations due to the complexity of speech; 2) a longer distance, ~1 m, where directed trans
22 o' was uttered either in IDS or adult-direct speech (ADS) followed by an upright or inverted face.
24 havioral assessments of backward and forward speech alongside neuroimaging measures of voxel-based mo
25 g verbal quantity, verbal quality, and motor speech, alongside four core language and cognitive compo
27 isensory speech perception since audiovisual speech and auditory-only speech are easily intelligible
29 h masks at blocking expiratory particles for speech and coughing at varied intensity and to assess wh
31 ocabularies in participant-generated natural speech and examined their relationships to individual di
33 naturally embedding primes within a person's speech and gestures effectively influenced people's deci
34 radically improved our understanding of how speech and language abilities map to the brain in normal
35 h a spectrum of disorders, ranging from mild speech and language delay to intractable neurodevelopmen
36 s UPF3B mutation in a patient with prominent speech and language disabilities and identify plausible
39 sustained positive responses to visual-only speech and larger, phasic responses to auditory-only spe
40 ENT Harmonic complex tones are ubiquitous in speech and music and produce strong pitch percepts when
41 tex.SIGNIFICANCE STATEMENT Our perception of speech and music depends strongly on temporal context, i
42 c complex tones (HCTs) commonly occurring in speech and music evoke a strong pitch at their fundament
45 gulate cortex is involved in the analysis of speech and nonspeech vocal feedback driving adaptation o
46 ognitive selection of orofacial, as well as, speech and nonspeech vocal responses; and the midcingula
47 Although the brain areas indispensable for speech and song learning are known, the neural circuits
49 entrained cortical EEG responses to attended speech and to simple tones modulated at speech rates (4
50 al envelope modulations (TEMs) to understand speech, and clinical outcomes depend on the accuracy wit
51 n since audiovisual speech and auditory-only speech are easily intelligible but visual-only speech is
52 onstrate that spectrotemporal modulations in speech are more strongly represented in neural responses
53 tress testing using a series of standardized speech/arithmetic stressors and simultaneous brain imagi
54 spered sounds, and in congruence with normal speech articulation, as accounted for by the Bayesian cl
55 e contributing to the first author's keynote speech at the conference, its influencers, and its influ
57 E STATEMENT Lip-reading consists in decoding speech based on visual information derived from observat
59 ifficult to study neural responses to visual speech because visual-only speech is difficult or imposs
60 d in the HI listeners, both for the attended speech, but also for tone sequences modulated at slow ra
61 li always contained both auditory and visual speech, but jittering the onset asynchrony between modal
62 ttering observations have revealed that loud speech can emit thousands of oral fluid droplets per sec
63 ning networks have been trained to recognize speech, caption photographs, and translate text between
65 We propose that working representations of speech categories are driven both by their current envir
66 dditional resources to disambiguate degraded speech codes, resources mediated by nAChRs may be compro
67 research, three unique orthogonal connected speech components were extracted in a unified model, ref
68 llations track linguistic information during speech comprehension (Ding et al., 2016; Keitel et al.,
69 avioural dissociation of acoustic and visual speech comprehension and suggest that cerebral represent
70 mains unclear in how far visual and acoustic speech comprehension are mediated by the same brain regi
71 non-native language proficiency, reading and speech comprehension displayed substantial changes in he
72 lasticity of three language systems-reading, speech comprehension, and verbal production-in cross-sec
74 er this type of correspondence could improve speech comprehension, we selectively degraded the spectr
77 rve as a temporal map for listeners to group speech contents and to predict incoming speech signals.
78 evant representations of auditory and visual speech converged only in anterior angular and inferior f
81 ith delayed motor milestones and significant speech delay (50% non-verbal); intellectual disability i
82 d by intellectual disability (ID), motor and speech delay, autistic features, hypotonia, feeding diff
83 uding hearing loss, developmental delay, and speech delay, but excluding death), and were assessed at
84 rum of intellectual disability, motor delay, speech delay, seizures, hypotonia, and behavioral proble
85 were weaker than responses to auditory-only speech, demonstrating a subadditive multisensory neural
86 aggerated intonation, has been documented in speech directed toward young children in many countries.
87 he asymmetry of motor influences on auditory speech discrimination ability [indexed by mismatch negat
89 ental language disorder, dyslexia, and motor-speech disorders such as articulation disorder and stutt
90 patients, which persisted in 2 at 4 months; speech disturbance in 15 patients, which persisted in 3
93 siveness, supporting a model in which visual speech enhances the efficiency of auditory speech proces
94 However, despite robust attention-modulated speech entrainment, the HI listeners rated the competing
95 umans (17 females) entrained to the auditory speech envelope and lip movements (mouth opening) when l
99 tence spectrograms to assess how well visual speech facilitated comprehension under each degradation
101 evel visual (oral deformations) and auditory speech features (frequency modulations) to extract detai
106 2266; p = 8.9 x 10(-6)), STXBP1 with "absent speech" (HP: 0001344; p = 1.3 x 10(-11)), and SLC6A1 wit
107 es to achieve optimal tremor control without speech impairment in essential tremor patients with thal
109 ntent and structure of spontaneous connected speech in 52 speakers during the acute stage of a left h
110 uced discriminability of neural responses to speech in background noise at high sound intensities, wi
113 the noise.SIGNIFICANCE STATEMENT Recognizing speech in noise is challenging but can be facilitated by
114 s patients report difficulties understanding speech in noise or competing talkers, despite having "no
116 selective attention to suppress distracting speech in situations when the distractor is well segrega
117 s (presbycusis) often struggle to understand speech in such situations, even when wearing a hearing a
118 we investigated the neural representation of speech in the auditory midbrain of gerbils with "hidden
121 ognition thresholds were measured for target speech in the presence of multi-talker babble or competi
122 The findings also show motor theories (of speech) in a different light, placing new mechanistic co
123 th caregivers, compared with overheard adult speech, in the function of language networks in infancy.
124 ion performance (words-in-noise (WIN), quick speech-in-noise (QuickSIN), gaps-in-noise) and auditory
126 n temporal, spectral, intensive, masking and speech-in-noise perception tasks between 45 human listen
129 ia, we found preserved integration of visual speech information to optimize processing of syntactic i
130 he presence of congruent auditory and visual speech inputs.SIGNIFICANCE STATEMENT Watching the speake
131 While we have a good understanding of where speech integration occurs in the brain, it is unclear ho
132 wn to be modulated by acoustic landmarks and speech intelligibility (Doelling et al., 2014; Zoefel an
137 sponses to visual speech because visual-only speech is difficult or impossible to comprehend, unlike
138 e a descriptive norm against the use of hate speech is evidently in place to contexts in which the no
141 l tasks.SIGNIFICANCE STATEMENT Understanding speech is one of the most important human abilities.
144 of animal behavior-from locomotion to human speech-is thought to consist of different hierarchical l
146 earing loss can cause detrimental effects on speech, language, developmental, educational, and cognit
147 tradiol (E2), which is associated with human speech-language development, and is abundant in both NCM
148 boys in the first year of life produced more speech-like vocalizations than girls and that the effect
150 ore central in origin) produced by competing speech may further illuminate central interference due t
152 s the discourses, we submitted the connected speech metrics to principal component analysis alongside
154 The data invite the hypothesis that the speech motor cortex is best modelled as a neural oscilla
155 speakers, the effects of disruption of left speech motor cortex on responses to tone changes were in
156 me changes, disruption of left but not right speech motor cortex suppressed responses in both languag
157 een language groups: disruption of the right speech motor cortex suppressed responses to tone changes
158 age speakers, whereas disruption of the left speech motor cortex suppressed responses to tone changes
159 that the contributions of the right and left speech motor cortex to auditory speech processing are de
160 We temporarily disrupted the right or left speech motor cortex using transcranial magnetic stimulat
164 and Redesign Model (STORM) and Nucleic-Acid Speech (NuSpeak), two orthogonal and synergistic deep le
167 might help combine foreground elements, like speech, over seconds to aid their separation from the ba
169 auditory cortices, most likely facilitating speech parsing.SIGNIFICANCE STATEMENT Lip-reading consis
170 maneuvers with assistance and guidance from speech pathologists to help improve HNC complications an
171 cordingly, they downweight pitch cues during speech perception and instead rely on other dimensions s
172 oken language, grounding cognitive models of speech perception and production in human neurobiology.
173 ed specification of a computational model of speech perception based on predictive coding frameworks.
175 lation associated with tinnitus and impaired speech perception cause cochlear synaptopathy, character
178 up and top-down markers of poor multi-talker speech perception identified here could inform the desig
181 anterior middle temporal and angular gyri; a speech perception network involving superior temporal an
183 is procedure is problematic for multisensory speech perception since audiovisual speech and auditory-
184 nduced by this disorder may actually improve speech perception under narrow conditions within an over
188 ation of Heschl's gyrus selectively disrupts speech perception, while stimulation of planum temporale
191 ght and left speech motor cortex to auditory speech processing are determined by the functional roles
193 dynamics that potentially shape auditory and speech processing at different levels of the cortical hi
194 asymmetry of motor contributions to auditory speech processing in male and female speakers of tonal a
202 tion on the multidimensionality of connected speech production at both behavioural and neural levels.
203 and vocal tract movements are linked during speech production by comparing videos of the face and fa
205 the quantity and quality of fluent connected speech production while controlling for other co-factors
210 atric) document the feasibility of capturing speech properties within the electrocochleography (ECoch
212 nded speech and to simple tones modulated at speech rates (4 Hz) in listeners with age-related hearin
216 dynamic behaviors (motifs), as it is done in speech recognition and other data mining applications.
218 est that tinnitus negatively affected masked speech recognition even in individuals with no measurabl
219 o-haptic" stimulation substantially improved speech recognition in multi-talker noise when the speech
220 er, results suggest that noise adaptation in speech recognition is probably mediated by neural dynami
222 for ameliorating hearing loss and improving speech recognition technology in the presence of backgro
227 l to significantly improve music perception, speech recognition, and speech prosody perception in CI
229 arious contexts, such as computer vision and speech recognition, multiview learning has not yet been
233 and for infant-directed over adult-directed speech, reflects early sensitivity to social communicati
235 ollect day-long audio recordings, and infant speech-related and adult vocalisation onsets and offsets
239 esented in neural responses than alternative speech representations (e.g. spectrogram or articulatory
240 volves the transformation of stimulus-locked speech representations in sensorimotor and premotor cort
241 emes/visemes) or amodal (e.g., articulatory) speech representations, but require lossy remapping of s
242 in most nonhuman primates, the evolution of speech required the addition of vocalization onto this s
244 Content analyses conducted on all connected speech samples indicated that performance differed acros
247 ic words in Jueju negatively correlated with speech segmentation, which provides an alternative persp
248 henomenon indicating predictive processes of speech segmentation-the neural phase advanced faster aft
254 elling we show that recalibration of natural speech sound categories is better described by represent
255 vements, mapping them onto the corresponding speech sound features; this information is fed to audito
256 account to embrace phonetic and phonological speech sound representations and their neural bases.
257 power (iHGP) across cortex in humans during speech-sound working memory in individuals with schizoph
258 r proposal by modeling fast recalibration of speech sounds after experiencing the McGurk effect.
261 m that tests auditory working memory for non-speech sounds that vary in frequency and amplitude modul
270 Even when the only task is listening to speech stimuli, participants should be asked to place th
271 significantly increased neural responses to speech stimuli, with a more pronounced increase at moder
272 enre and two naturalistic forms of connected speech (storytelling narrative, and procedural discourse
274 tures present in attended as well as ignored speech, suggests an active cortical stream segregation p
277 stinct types of nonlinear transformations of speech that varied considerably from primary to nonprima
278 sional signatures of two experts in backward speech, that is, the capacity to produce utterances by r
280 reliable and temporally precise responses to speech; these patterns transformed to distinct sentence-
281 f neurosurgical patients as they listened to speech, this approach significantly improves the predict
282 ities and the ability to benefit from visual speech to represent the syllabic content of SiN account
283 ent process was characterized by an auditory-speech-to-brain delay of ~70 ms in the left hemisphere,
284 the effect of the terrorist attacks in hate speech toward refugees in contexts where a descriptive n
289 Lip-reading is known to improve auditory speech understanding, especially when speech is degraded
290 track the temporal dynamics of purely visual speech using the phase of their slow oscillations and ph
291 ts, suggesting that the detected patterns of speech variability are associated with drug consumption.
295 e articulators shape the spectral content of speech, we hypothesized that the perceptual system might
296 lts from scalp EEG, responses to audiovisual speech were weaker than responses to auditory-only speec
297 y tracks linguistic structure during natural speech, where linguistic structure does not follow such
298 this question using naturalistic audiovisual speech with intracranial recordings in humans of both se