Corpus search results (sorted by the first word after the keyword)
Clicking a serial number displays the corresponding page on PubMed.
1 -talker masker for both natural and monotone speech.
2 y ultimately reflect a special type of overt speech.
3 ption by people with physical limitations of speech.
4 stened to several hours of natural narrative speech.
5 y between produced gestures and co-occurring speech.
6 d (in tone languages) lexical information in speech.
7 l communication resembles that of gesture in speech.
8 p gradually at further angles, more than for speech.
9 each an articulatory target for intelligible speech.
10 n for the recognition of words in continuous speech.
11 aerodigestive theory and the development of speech.
12 ct perception for either form of incongruent speech.
13 ated communication system, as do gesture and speech.
14 xperiment while subjects listened to natural speech.
15 me informational versus energetic masking of speech.
16 rbitrary combinations of auditory and visual speech.
17 al, articulatory, and semantic properties of speech.
18 responses to phoneme instances in continuous speech.
19 med the low levels observed for intelligible speech.
20 erarchy of language structures in continuous speech.
23 ll have intellectual disability with delayed speech, a history of febrile and/or non-febrile seizures
24 llary expansion techniques to develop normal speech, achieve functional occlusion for nutrition intak
25 nt 1 tested feedforward control by examining speech adaptation across trials in response to a consist
26 uggest that sign should not be compared with speech alone but should be compared with speech-plus-ges
28 ronal oscillations in processing accelerated speech also relates to their scale-free amplitude modula
29 temporal modulations in a range relevant for speech analysis (approximately 2-4 Hz) were reconstruct
30 ts to standard approaches that use segmented speech and block designs, which report more laterality i
31 needed to account for differences between co-speech and co-sign gesture (e.g., different degrees of o
32 more integrative view embraces not only sign/speech and co-sign/speech gesture, but also indicative g
41 and comprehensive dysphagia assessments by a speech and language therapist (SALT) were associated wit
42 pants to either 3 weeks or more of intensive speech and language therapy (≥10 h per week) or 3 week
43 t guidelines for aphasia recommend intensive speech and language therapy for chronic (≥6 months) ap
45 imed to examine whether 3 weeks of intensive speech and language therapy under routine clinical condi
47 ly improved from baseline to after intensive speech and language treatment (mean difference 2.61 poin
48 signed, is dominant over laughter, and that speech and manual signing involve similar mechanisms.
50 ain gateway to communication with others via speech and music, and it also plays an important role in
52 er the distinct pitch processing pattern for speech and nonspeech stimuli in autism was due to a spee
55 terature shows that power energizes thought, speech, and action and orients individuals toward salien
57 ant factor in the perceptual organization of speech, and reveal a widely distributed neural network s
58 prehensible and incomprehensible accelerated speech, and show that neural phase patterns in the theta
62 l part of the feedforward control system for speech but is not essential for online, feedback control
63 maintaining accurate feedforward control of speech, but relatively uninvolved in feedback control. SI
64 ectroencephalography responses to continuous speech by obtaining the time-locked responses to phoneme
69 urage sustained directional movement between speech communities, then languages should be channeled a
71 ral processing of the formant frequencies of speech, compared to non-native nonmusicians, suggesting
74 dressed the dynamics of auditory decoding in speech comprehension by challenging syllable tracking an
75 ons in 20 patients with chronic aphasia with speech comprehension impairment following left hemispher
77 ork for a comprehensive bottom-up account of speech comprehension in the human brain. SIGNIFICANCE STA
78 sistent with previous studies, we found that speech comprehension involves hierarchical representatio
83 CANCE STATEMENT We know that, during natural speech comprehension, a broad network of perisylvian cor
84 udiovisual (AV) integration is essential for speech comprehension, especially in adverse listening si
86 ed in a small but significant improvement in speech comprehension, whereas donepezil had a negative e
89 er in the high informational masking natural speech condition, where the musician advantage was appro
90 we studied Bengalese finch song, which, like speech, consists of variable sequences of "syllables." W
95 hension by challenging syllable tracking and speech decoding using comprehensible and incomprehensibl
96 ental delay/intellectual disability (10/10), speech delay (10/10), postnatal microcephaly (7/9), and
99 tational changes in learning, attention, and speech disorders. SIGNIFICANCE STATEMENT We characterized
100 the musicians were better able to understand speech either in noise or in a two-talker competing spee
103 Across both groups, intelligible sine-wave speech engaged a typical left-lateralized speech process
105 g severe intellectual disability with absent speech, epilepsy, and hypotonia was observed in all affe
106 a dynamic neural transformation of low-level speech features as they propagate along the auditory pat
108 e used models based on a hierarchical set of speech features to predict BOLD responses of individual
110 auditory cortex disrupts the segregation of speech from background noise, leading to deficits in spe
111 tivation of auditory brain regions by visual speech from before to after implantation and its relatio
112 tivation of auditory brain regions by visual speech from before to after implantation is associated w
113 f modulated sounds disrupt the separation of speech from modulated background noise in auditory corte
114 ropagation of low-level acoustic features of speech from posterior superior temporal gyrus toward ant
116 ew embraces not only sign/speech and co-sign/speech gesture, but also indicative gestures irrespectiv
119 tion is deciding whether auditory and visual speech have the same source, a process known as causal i
120 o severe intellectual disability with absent speech, hypotonia, brachycephaly, congenital heart defec
122 n left posteromedial auditory cortex predict speech identification in modulated background noise.
123 en magnified envelope coding and deficits in speech identification in modulated noise has been absent
129 st time the cortical processing of ambiguous speech in people without psychosis who regularly hear vo
132 the ability of SNHL listeners to understand speech in the presence of modulated background noise.
133 hearers reported recognizing the presence of speech in the stimuli before controls, and before being
134 we show that the natural statistics of human speech, in which voices co-occur with mouth movements, a
135 ssociation between cognitive performance and speech-in-noise (SiN) perception examine different aspec
138 musicians outperform nonmusicians on a given speech-in-noise task may well depend on the type of nois
141 , the periodic cues of TFS are essential for speech intelligibility and are encoded in auditory neuro
143 Although the VGHA has been shown to enhance speech intelligibility for fixed-location, frontal targe
146 esis that musical training leads to improved speech intelligibility in complex speech or noise backgr
149 h is limited by the brain's ability to parse speech into syllabic units using delta/theta oscillation
151 it that perceptual resilience to accelerated speech is limited by the brain's ability to parse speech
152 es most of the information conveyed by human speech, is not principally determined by basilar membran
153 ormalized ratio measured, HbA1c measurement, speech language pathology consultation, anticoagulation
156 These findings provide evidence that visual speech modifies phonetic encoding at the auditory cortex
157 d in feedback control. SIGNIFICANCE STATEMENT Speech motor control is a complex activity that is thoug
158 cerebellum has been shown to be part of the speech motor control network, its functional contributio
160 ss both anticipatory and reactive aspects of speech motor control, comparing the performance of patie
161 opamine release into the dorsal striatum and speech motor cortex exerts direct modulation of neuronal
162 entary examines this claim in the context of speech motor learning and biomechanics, proposing that s
164 ould provide a basis for both swallowing and speech movements, and provides biomechanical simulation
166 brain synchrony was unrelated to episodes of speech/no-speech or general content of conversation.
167 ces in the MMRs to categorical perception of speech/nonspeech stimuli or lack thereof, neural oscilla
169 the relationship between gesture, sign, and speech offers a valuable tool for investigating how lang
171 (CI) processors, the temporal information in speech or environmental sounds is delivered through modu
174 hen we hear an auditory stream like music or speech or scan a texture with our fingertip, physical fe
175 f theta rhythm to follow syllabic rhythms in speech, or constrained by a more endogenous top-down mec
178 ed model of causal inference in multisensory speech perception (CIMS) that predicts the perception of
182 rom background noise, leading to deficits in speech perception in modulated background noise. SIGNIFIC
185 me measures were audibility, scores from the speech perception tests, and scores from a questionnaire
186 imately .3 between cognitive performance and speech perception, although some variability in associat
187 S), a brain region known to be important for speech perception, is complex, with some regions respond
188 ate or frequency, plays an important role in speech perception, music perception, and listening in co
192 ignals: EEQ1, which operated on the wideband speech plus noise signal, and EEQ4, which operated indep
196 ock designs, which report more laterality in speech processing and associated semantic processing to
202 ively by the involvement of visual cortex in speech processing, and negatively by the cross-modal rec
203 eported comparable expectations for improved speech processing, thereby controlling for placebo effec
205 hat each instance of a phoneme in continuous speech produces multiple distinguishable neural response
206 hat each instance of a phoneme in continuous speech produces several observable neural responses at d
209 ic lateralization of neural processes during speech production has been known since the times of Broc
210 neural activation during natural, connected speech production in children who stutter demonstrates t
212 promise for the use of fNIRS during natural speech production in future research with typical and at
213 onses over neural regions integral to fluent speech production including inferior frontal gyrus, prem
214 tering, atypical functional organization for speech production is present and suggests promise for th
216 ted to language production (sentential overt speech production-Speech task) and activation related to
221 ocial cognition and communication (affective speech recognition (ASR), reading the mind in the eyes,
225 onsiderable overlap in the audiograms and in speech recognition performance in the unimplanted ear be
226 This research aims to bridge the gap between speech recognition processes in humans and machines, usi
228 o and audio processing, computer vision, and speech recognition, their applications to three-dimensio
231 opulations, we collected genetic markers and speech recordings in the admixed creole-speaking populat
232 -0.90, p < 0.05 corrected), suggesting that speech recovery is related to structural plasticity of l
233 selective neurodegeneration of human frontal speech regions results in delayed reconciliation of pred
236 ying network mechanisms by quantifying local speech representations and directed connectivity in MEG
237 e of auditory-frontal interactions in visual speech representations and suggest that functional conne
238 ally contribute to the emergence of "coarse" speech representations in inferior frontal gyrus typical
241 the effects of modifying the TFS in natural speech sentences on both speech recognition and neural c
243 continuous and various types of fluctuating speech-shaped Gaussian noise including those with both r
245 ing block designs and segmented or synthetic speech. SIGNIFICANCE STATEMENT To investigate the process
246 g scan while passively listening to degraded speech ('sine-wave' speech), that was either potentially
247 rmed by the human brain to transform natural speech sound into meaningful language, we used models ba
248 pants judged whether a given consonant-vowel speech sound was large or small, round or angular, using
256 omplex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical s
257 omplex acoustic scene consisting of multiple speech sources is represented in separate hierarchical s
259 and nonspeech stimuli in autism was due to a speech-specific deficit in categorical perception of lex
260 Increasingly coarse temporal features of speech spreading from posterior superior temporal cortex
261 ions in the sensory tracking of the attended speech stream and frontoparietal activity during selecti
262 al areas, by contrast, represent an attended speech stream separately from, and with significantly hi
263 y sensory areas is dominated by the attended speech stream, whereas competing input is suppressed.
267 ory scene, with both attended and unattended speech streams represented with almost equal fidelity.
269 sm with the addition of progressive balance, speech, swallowing, eye movement and cognitive impairmen
270 auditory speech (talker's voice) and visual speech (talker's mouth movements) to improve perceptual
271 tegration combines information from auditory speech (talker's voice) and visual speech (talker's mout
273 oduction (sentential overt speech production-Speech task) and activation related to cognitive process
274 and sensitivity to interaural phase; and (c) speech tests including filtered phoneme recognition and
275 ity in humans relied on the periodic cues of speech TFS in both quiet and noisy listening conditions.
276 ase locking patterns to the periodic cues of speech TFS that disappear when reconstructed sounds do n
277 TD would laugh less in response to their own speech than other dementia groups or controls, while tho
278 for left-hemispheric lateralization of human speech that is due to left-lateralized dopaminergic modu
279 ntence recognition (primary) task containing speech that was either unprocessed or noise-band vocoded
280 ly listening to degraded speech ('sine-wave' speech), that was either potentially intelligible or uni
281 y employed a novel design to show that inner speech - the silent production of words in one's mind -
282 ants with chronic aphasia received intensive speech therapy for 3 weeks, with standardized naming tes
284 uditory evoked potential (CAEP) responses to speech tokens was introduced into the audiology manageme
285 omputerized CL auditory training can enhance speech understanding in levels of background noise that
287 c dysfunction at the level of MGB may affect speech understanding negatively in the elderly populatio
290 s compared with original clinician or family speech using the qualitative research methods of directe
292 nes capable of recognizing patterns (images, speech, video) and interacting with the external world i
296 ccipital and parietal cortex, in contrast to speech, where coherence is strongest over the auditory c
299 Patients' ability to understand auditory speech with their CI was also measured following 6 mo of
300 ific hypotheses about the representations of speech without using block designs and segmented or synt