
Corpus search results (sorted by the word one position after the keyword)

Clicking a result's serial number displays the corresponding PubMed page
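The sort order named in the header — keyword-in-context (KWIC) lines ordered by the token one position to the right of the keyword — can be sketched as follows. This is a minimal illustration, not the tool's actual implementation; the function name `kwic`, the tokenization by whitespace, and the punctuation handling are all assumptions.

```python
def kwic(sentences, keyword, width=40):
    """Minimal keyword-in-context (KWIC) builder: one context line per hit,
    sorted by the token one position to the right of the keyword."""
    hits = []
    for sent in sentences:
        tokens = sent.split()
        for i, tok in enumerate(tokens):
            # crude match: ignore case and trailing punctuation (an assumption)
            if tok.lower().strip(".,;:!?\"'") == keyword.lower():
                left = " ".join(tokens[:i])[-width:]
                right = " ".join(tokens[i + 1:])[:width]
                follower = tokens[i + 1].lower() if i + 1 < len(tokens) else ""
                hits.append((follower, f"{left:>{width}} {tok} {right}"))
    # sentence-final hits have an empty follower and therefore sort first,
    # which matches the listing below (lines ending in "speech." come first)
    hits.sort(key=lambda h: h[0])
    return [line for _, line in hits]
```

Right-aligning the left context at a fixed width is what produces the characteristic KWIC column alignment seen in the results below.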
1  domains, especially using images, text, and speech.
2 tical representation of concomitant auditory speech.
3 rehend, unlike auditory-only and audiovisual speech.
4 on appear to manifest in multiple domains of speech.
5 the face to recover vocal tract shape during speech.
6 ter listeners acquired knowledge of incoming speech.
7 e is feasible to capture vital properties of speech.
8 s of auditory information provided by visual speech.
9  of behavioral effects of ignored background speech.
10 nto syntactic structure to produce connected speech.
11 ixture, even if they occurred in the ignored speech.
12  from the changing shape of the mouth during speech.
13  underlying the construction of intelligible speech.
14 riants in nine patients with ASD and lack of speech.
15 presence of multi-talker babble or competing speech.
16 nonymous UPF3B variant in a male with absent speech.
17 y and speed remain far below that of natural speech.
18 ability in the correlates of expert backward speech.
19 nd larger, phasic responses to auditory-only speech.
20 iled spectrotemporal information from visual speech.
21  angular variations due to the complexity of speech; 2) a longer distance, ~1 m, where directed trans
22 o' was uttered either in IDS or adult-direct speech (ADS) followed by an upright or inverted face.
23 ecialized for processing of pitch changes in speech affecting prosody.
24 havioral assessments of backward and forward speech alongside neuroimaging measures of voxel-based mo
25 g verbal quantity, verbal quality, and motor speech, alongside four core language and cognitive compo
26                                    Automated speech analysis may represent a novel method to detect o
27 isensory speech perception since audiovisual speech and auditory-only speech are easily intelligible
28                                        Human speech and bird song are acoustically complex communicat
29 h masks at blocking expiratory particles for speech and coughing at varied intensity and to assess wh
30                        Audio analysis of the speech and coughing intensity confirmed that people spea
31 ocabularies in participant-generated natural speech and examined their relationships to individual di
32                      Adverse events included speech and gait disturbances, weakness on the treated si
33 naturally embedding primes within a person's speech and gestures effectively influenced people's deci
34  radically improved our understanding of how speech and language abilities map to the brain in normal
35 h a spectrum of disorders, ranging from mild speech and language delay to intractable neurodevelopmen
36 s UPF3B mutation in a patient with prominent speech and language disabilities and identify plausible
37 anscription factor causes a severe monogenic speech and language disorder.
38 sed using anonymous surveys sent to UK-based Speech and Language Therapists (SLTs).
39  sustained positive responses to visual-only speech and larger, phasic responses to auditory-only spe
40 ENT Harmonic complex tones are ubiquitous in speech and music and produce strong pitch percepts when
41 tex.SIGNIFICANCE STATEMENT Our perception of speech and music depends strongly on temporal context, i
42 c complex tones (HCTs) commonly occurring in speech and music evoke a strong pitch at their fundament
43 to the slow rates of FM that are crucial for speech and music.
44 h recognition in multi-talker noise when the speech and noise came from different locations.
45 gulate cortex is involved in the analysis of speech and nonspeech vocal feedback driving adaptation o
46 ognitive selection of orofacial, as well as, speech and nonspeech vocal responses; and the midcingula
47   Although the brain areas indispensable for speech and song learning are known, the neural circuits
48                          We provided missing speech and spatial hearing cues through haptic stimulati
49 entrained cortical EEG responses to attended speech and to simple tones modulated at speech rates (4
50 al envelope modulations (TEMs) to understand speech, and clinical outcomes depend on the accuracy wit
51 n since audiovisual speech and auditory-only speech are easily intelligible but visual-only speech is
52 onstrate that spectrotemporal modulations in speech are more strongly represented in neural responses
53 tress testing using a series of standardized speech/arithmetic stressors and simultaneous brain imagi
54 spered sounds, and in congruence with normal speech articulation, as accounted for by the Bayesian cl
55 e contributing to the first author's keynote speech at the conference, its influencers, and its influ
56                    The temporal structure of speech at this scale is remarkably stable across languag
57 E STATEMENT Lip-reading consists in decoding speech based on visual information derived from observat
58             Nonhuman primates might not have speech because they lack this ability.
59 ifficult to study neural responses to visual speech because visual-only speech is difficult or imposs
60 d in the HI listeners, both for the attended speech, but also for tone sequences modulated at slow ra
61 li always contained both auditory and visual speech, but jittering the onset asynchrony between modal
62 ttering observations have revealed that loud speech can emit thousands of oral fluid droplets per sec
63 ning networks have been trained to recognize speech, caption photographs, and translate text between
64                                       Visual speech carried by lip movements is an integral part of c
65   We propose that working representations of speech categories are driven both by their current envir
66 dditional resources to disambiguate degraded speech codes, resources mediated by nAChRs may be compro
67  research, three unique orthogonal connected speech components were extracted in a unified model, ref
68 llations track linguistic information during speech comprehension (Ding et al., 2016; Keitel et al.,
69 avioural dissociation of acoustic and visual speech comprehension and suggest that cerebral represent
70 mains unclear in how far visual and acoustic speech comprehension are mediated by the same brain regi
71 non-native language proficiency, reading and speech comprehension displayed substantial changes in he
72 lasticity of three language systems-reading, speech comprehension, and verbal production-in cross-sec
73 e following three language systems: reading, speech comprehension, and verbal production.
74 er this type of correspondence could improve speech comprehension, we selectively degraded the spectr
75 l substrate of statistical learning and even speech comprehension.
76 nd localization, ototoxicity prevention, and speech comprehension.
77 rve as a temporal map for listeners to group speech contents and to predict incoming speech signals.
78 evant representations of auditory and visual speech converged only in anterior angular and inferior f
79  investigate the neural encoding of temporal speech cues with a VOT continuum from /ba/ to /pa/.
80 representation of both spectral and temporal speech cues.
81 ith delayed motor milestones and significant speech delay (50% non-verbal); intellectual disability i
82 d by intellectual disability (ID), motor and speech delay, autistic features, hypotonia, feeding diff
83 uding hearing loss, developmental delay, and speech delay, but excluding death), and were assessed at
84 rum of intellectual disability, motor delay, speech delay, seizures, hypotonia, and behavioral proble
85  were weaker than responses to auditory-only speech, demonstrating a subadditive multisensory neural
86 aggerated intonation, has been documented in speech directed toward young children in many countries.
87 he asymmetry of motor influences on auditory speech discrimination ability [indexed by mismatch negat
88 crease in hearing sensitivity, threshold and speech discrimination.
89 ental language disorder, dyslexia, and motor-speech disorders such as articulation disorder and stutt
90  patients, which persisted in 2 at 4 months; speech disturbance in 15 patients, which persisted in 3
91                                              Speech droplets generated by asymptomatic carriers of se
92 tion and tracking of foreground objects like speech during natural listening.
93 siveness, supporting a model in which visual speech enhances the efficiency of auditory speech proces
94  However, despite robust attention-modulated speech entrainment, the HI listeners rated the competing
95 umans (17 females) entrained to the auditory speech envelope and lip movements (mouth opening) when l
96 effects of hearing loss on EEG correlates of speech envelope synchronization in cortex.
97 sitivity and decreased ability to understand speech, especially in a noisy environment.
98                         We found that visual speech evokes a positive response in the human posterior
99 tence spectrograms to assess how well visual speech facilitated comprehension under each degradation
100                                       Visual speech facilitates auditory speech perception, but the v
101 evel visual (oral deformations) and auditory speech features (frequency modulations) to extract detai
102 uman brain, has been adapted to serve higher speech function.
103                              Shifts in robot speech have the power not only to affect how people inte
104                 A training method to improve speech hearing in noise has proven elusive, with most me
105 sible utility of using such games to improve speech hearing in noise.
106 2266; p = 8.9 x 10(-6)), STXBP1 with "absent speech" (HP: 0001344; p = 1.3 x 10(-11)), and SLC6A1 wit
107 es to achieve optimal tremor control without speech impairment in essential tremor patients with thal
108 ld-to-severe developmental delay, hypotonia, speech impairment, and seizures.
109 ntent and structure of spontaneous connected speech in 52 speakers during the acute stage of a left h
110 uced discriminability of neural responses to speech in background noise at high sound intensities, wi
111 the brain, so this ability is not special to speech in humans.
112 factors, including the ability to understand speech in noise (SiN).
113 the noise.SIGNIFICANCE STATEMENT Recognizing speech in noise is challenging but can be facilitated by
114 s patients report difficulties understanding speech in noise or competing talkers, despite having "no
115 mal hearing abilities struggle to understand speech in noisy backgrounds.
116  selective attention to suppress distracting speech in situations when the distractor is well segrega
117 s (presbycusis) often struggle to understand speech in such situations, even when wearing a hearing a
118 we investigated the neural representation of speech in the auditory midbrain of gerbils with "hidden
119 ch this verbal content is processed as overt speech in the brain.
120 neural hearing loss often struggle to follow speech in the presence of competing talkers.
121 ognition thresholds were measured for target speech in the presence of multi-talker babble or competi
122    The findings also show motor theories (of speech) in a different light, placing new mechanistic co
123 th caregivers, compared with overheard adult speech, in the function of language networks in infancy.
124 ion performance (words-in-noise (WIN), quick speech-in-noise (QuickSIN), gaps-in-noise) and auditory
125                                              Speech-in-noise (SiN) perception is a critical aspect of
126 n temporal, spectral, intensive, masking and speech-in-noise perception tasks between 45 human listen
127                   These results suggest that speech-in-noise problems experienced by older HI listene
128  tested using auditory reaction time and two speech-in-noise tasks.
129 ia, we found preserved integration of visual speech information to optimize processing of syntactic i
130 he presence of congruent auditory and visual speech inputs.SIGNIFICANCE STATEMENT Watching the speake
131  While we have a good understanding of where speech integration occurs in the brain, it is unclear ho
132 wn to be modulated by acoustic landmarks and speech intelligibility (Doelling et al., 2014; Zoefel an
133 d for 78% of the variability in multi-talker speech intelligibility.
134 y which the brain merges auditory and visual speech into a unitary perception.
135              The mental rehearsal of natural speech involves the transformation of stimulus-locked sp
136 ditory speech understanding, especially when speech is degraded.
137 sponses to visual speech because visual-only speech is difficult or impossible to comprehend, unlike
138 e a descriptive norm against the use of hate speech is evidently in place to contexts in which the no
139 ith hearing their voice, especially when the speech is noisy.
140 eech are easily intelligible but visual-only speech is not.
141 l tasks.SIGNIFICANCE STATEMENT Understanding speech is one of the most important human abilities.
142  brain extracts meaning from, silent, visual speech is still under debate.
143                  Understanding language from speech is the human benchmark for this.
144  of animal behavior-from locomotion to human speech-is thought to consist of different hierarchical l
145 000 swallows manually labeled by experienced speech language pathologists.
146 earing loss can cause detrimental effects on speech, language, developmental, educational, and cognit
147 tradiol (E2), which is associated with human speech-language development, and is abundant in both NCM
148 boys in the first year of life produced more speech-like vocalizations than girls and that the effect
149                                           In speech, listeners extract continuously-varying spectrote
150 ore central in origin) produced by competing speech may further illuminate central interference due t
151                                The origin of speech may have required the evolution of a "command app
152 s the discourses, we submitted the connected speech metrics to principal component analysis alongside
153                   Our results support that a speech motor code is used for the recognition of infrequ
154      The data invite the hypothesis that the speech motor cortex is best modelled as a neural oscilla
155  speakers, the effects of disruption of left speech motor cortex on responses to tone changes were in
156 me changes, disruption of left but not right speech motor cortex suppressed responses in both languag
157 een language groups: disruption of the right speech motor cortex suppressed responses to tone changes
158 age speakers, whereas disruption of the left speech motor cortex suppressed responses to tone changes
159 that the contributions of the right and left speech motor cortex to auditory speech processing are de
160   We temporarily disrupted the right or left speech motor cortex using transcranial magnetic stimulat
161  a bidirectional interaction of auditory and speech motor cortices.
162                 During speech production and speech motor learning, speakers' experience matched audi
163 aviors include perception and performance of speech, music, driving, and many sports.
164  and Redesign Model (STORM) and Nucleic-Acid Speech (NuSpeak), two orthogonal and synergistic deep le
165                   Coordinated skills such as speech or dance involve sequences of actions that follow
166 lus rather than mental imagery of unrelated, speech or non-speech, sounds.
167 might help combine foreground elements, like speech, over seconds to aid their separation from the ba
168  N95 mask in the "no-talking" (P > .99) and "speech" (P = .831) scenarios.
169  auditory cortices, most likely facilitating speech parsing.SIGNIFICANCE STATEMENT Lip-reading consis
170  maneuvers with assistance and guidance from speech pathologists to help improve HNC complications an
171 cordingly, they downweight pitch cues during speech perception and instead rely on other dimensions s
172 oken language, grounding cognitive models of speech perception and production in human neurobiology.
173 ed specification of a computational model of speech perception based on predictive coding frameworks.
174                                        Human speech perception can be described as Bayesian perceptua
175 lation associated with tinnitus and impaired speech perception cause cochlear synaptopathy, character
176             Hidden hearing loss manifests as speech perception difficulties with normal hearing thres
177                                     Auditory speech perception enables listeners to access phonologic
178 up and top-down markers of poor multi-talker speech perception identified here could inform the desig
179                  It is well established that speech perception is improved when we are able to see th
180                                              Speech perception is mediated by both left and right aud
181 anterior middle temporal and angular gyri; a speech perception network involving superior temporal an
182                                              Speech perception presumably arises from internal models
183 is procedure is problematic for multisensory speech perception since audiovisual speech and auditory-
184 nduced by this disorder may actually improve speech perception under narrow conditions within an over
185                                              Speech perception uses information from both the auditor
186           Visual speech facilitates auditory speech perception, but the visual cues responsible for t
187                                  However, in speech perception, we lack evidence of perception being
188 ation of Heschl's gyrus selectively disrupts speech perception, while stimulation of planum temporale
189 al and auditory cues are combined to improve speech perception.
190 STG), a brain area known to be important for speech perception.
191 ght and left speech motor cortex to auditory speech processing are determined by the functional roles
192 istic function and language experience shape speech processing asymmetries.
193 dynamics that potentially shape auditory and speech processing at different levels of the cortical hi
194 asymmetry of motor contributions to auditory speech processing in male and female speakers of tonal a
195 l speech enhances the efficiency of auditory speech processing in pSTG.
196 own motor influences can affect asymmetry of speech processing in the auditory system.
197                                              Speech processing relies on interactions between auditor
198 derlying hemispheric asymmetries of auditory speech processing remain debated.
199  gyrus, enhancing the efficiency of auditory speech processing.
200                                       Visual speech produced drastically larger enhancements during s
201                                       During speech production and speech motor learning, speakers' e
202 tion on the multidimensionality of connected speech production at both behavioural and neural levels.
203  and vocal tract movements are linked during speech production by comparing videos of the face and fa
204                                        Human speech production requires the ability to couple motor a
205 the quantity and quality of fluent connected speech production while controlling for other co-factors
206                                   Similar to speech production, language produced with the hands by f
207 ion of planum temporale selectively disrupts speech production.
208 c stimuli that is uniquely suppressed during speech production.
209 s network across primate evolution to enable speech production.
210 atric) document the feasibility of capturing speech properties within the electrocochleography (ECoch
211 ve music perception, speech recognition, and speech prosody perception in CI users.
212 nded speech and to simple tones modulated at speech rates (4 Hz) in listeners with age-related hearin
213 orticogram with high accuracy and at natural-speech rates.
214                                    Models of speech recalibration classically ignore this volatility.
215                                    Automated speech recognition (ASR) systems, which use sophisticate
216 dynamic behaviors (motifs), as it is done in speech recognition and other data mining applications.
217 aks down in hearing-impaired individuals and speech recognition devices.
218 est that tinnitus negatively affected masked speech recognition even in individuals with no measurabl
219 o-haptic" stimulation substantially improved speech recognition in multi-talker noise when the speech
220 er, results suggest that noise adaptation in speech recognition is probably mediated by neural dynami
221 mations and the neuromorphic hardware to the speech recognition success rate.
222  for ameliorating hearing loss and improving speech recognition technology in the presence of backgro
223 uce these performance differences and ensure speech recognition technology is inclusive.
224                        For competing speech, speech recognition thresholds were measured for differen
225                                              Speech recognition thresholds were measured for target s
226                            Eye tracking with speech recognition was 92% accurate in labeling lesion l
227 l to significantly improve music perception, speech recognition, and speech prosody perception in CI
228                                        Using speech recognition, gaze points corresponding to each le
229 arious contexts, such as computer vision and speech recognition, multiview learning has not yet been
230  for bench-marking such devices is automatic speech recognition.
231 kthroughs in natural language processing and speech recognition.
232 specific tasks, such as image processing and speech recognition.
233  and for infant-directed over adult-directed speech, reflects early sensitivity to social communicati
234 watched news clips, campaign ads, and public speeches related to immigration policy.
235 ollect day-long audio recordings, and infant speech-related and adult vocalisation onsets and offsets
236 nfluence of visual cues on the processing of speech remain incompletely understood.
237 nd how it may have evolved to enable complex speech remain unknown.
238 gnal to synthesize a coarse-grained auditory speech representation in early auditory cortices.
239 esented in neural responses than alternative speech representations (e.g. spectrogram or articulatory
240 volves the transformation of stimulus-locked speech representations in sensorimotor and premotor cort
241 emes/visemes) or amodal (e.g., articulatory) speech representations, but require lossy remapping of s
242  in most nonhuman primates, the evolution of speech required the addition of vocalization onto this s
243                                    Connected speech samples across descriptive, narrative, and proced
244  Content analyses conducted on all connected speech samples indicated that performance differed acros
245 t were isolated in 8 of 75 (11%) cultures in speech scenarios (P = .02).
246                                   During the speech scenarios, subjects wearing a tight-fitting surgi
247 ic words in Jueju negatively correlated with speech segmentation, which provides an alternative persp
248 henomenon indicating predictive processes of speech segmentation-the neural phase advanced faster aft
249 rspective on how statistical cues facilitate speech segmentation.
250 ecific acoustic information contained in the speech signal.
251 those that are used to generate the acoustic speech signal.
252 resentations, but require lossy remapping of speech signals onto abstracted representations.
253 roup speech contents and to predict incoming speech signals.
254 elling we show that recalibration of natural speech sound categories is better described by represent
255 vements, mapping them onto the corresponding speech sound features; this information is fed to audito
256 account to embrace phonetic and phonological speech sound representations and their neural bases.
257  power (iHGP) across cortex in humans during speech-sound working memory in individuals with schizoph
258 r proposal by modeling fast recalibration of speech sounds after experiencing the McGurk effect.
259 ulators, which are key for the production of speech sounds in humans.
260               The asymmetry of processing of speech sounds is affected by low-level acoustic cues, bu
261 m that tests auditory working memory for non-speech sounds that vary in frequency and amplitude modul
262                           During encoding of speech sounds, SZ lacked the correlation of iHGP with ta
263 pecific sensory features are associated with speech sounds.
264 eners to access phonological categories from speech sounds.
265 n mental imagery of unrelated, speech or non-speech, sounds.
266 peaker out of an acoustic mixture of several speech sources.
267 t driven by cross-modal recovery of auditory speech spectra.
268 ns, which can improve the neural encoding of speech spectral and temporal cues.
269                                For competing speech, speech recognition thresholds were measured for
270      Even when the only task is listening to speech stimuli, participants should be asked to place th
271  significantly increased neural responses to speech stimuli, with a more pronounced increase at moder
272 enre and two naturalistic forms of connected speech (storytelling narrative, and procedural discourse
273                          Listeners can parse speech streams by using not only grammatical and statist
274 tures present in attended as well as ignored speech, suggests an active cortical stream segregation p
275 inment, the HI listeners rated the competing speech task to be more difficult.
276             Participants completed two 5-min speech tasks during peak drug effects.
277 stinct types of nonlinear transformations of speech that varied considerably from primary to nonprima
278 sional signatures of two experts in backward speech, that is, the capacity to produce utterances by r
279  as exercise and physical, occupational, and speech therapies).
280 reliable and temporally precise responses to speech; these patterns transformed to distinct sentence-
281 f neurosurgical patients as they listened to speech, this approach significantly improves the predict
282 ities and the ability to benefit from visual speech to represent the syllabic content of SiN account
283 ent process was characterized by an auditory-speech-to-brain delay of ~70 ms in the left hemisphere,
284  the effect of the terrorist attacks in hate speech toward refugees in contexts where a descriptive n
285 rades the attentional modulation of cortical speech tracking.
286                         Pitch is critical to speech understanding (particularly in noise), to separat
287 g paradigms that are also good predictors of speech understanding in humans.
288                 In the present study, masked speech understanding was measured in normal hearing list
289     Lip-reading is known to improve auditory speech understanding, especially when speech is degraded
290 track the temporal dynamics of purely visual speech using the phase of their slow oscillations and ph
291 ts, suggesting that the detected patterns of speech variability are associated with drug consumption.
292 ons: manual, orofacial, nonspeech vocal, and speech vocal actions.
293                               A decade after speech was first decoded from human brain signals, accur
294                          In social settings, speech waveforms from nearby speakers mix together in ou
295 e articulators shape the spectral content of speech, we hypothesized that the perceptual system might
296 lts from scalp EEG, responses to audiovisual speech were weaker than responses to auditory-only speec
297 y tracks linguistic structure during natural speech, where linguistic structure does not follow such
298 this question using naturalistic audiovisual speech with intracranial recordings in humans of both se
299 iled spectrotemporal information from visual speech without employing high-level abstractions.
300               Droplet emission occurs during speech, yet few studies document the flow to provide the

 