
Corpus search results (sorted by the first word after the keyword)

Click a serial number to display the corresponding PubMed page.
1 ystem, and (6) correct identification of the speech sound.
2  a P1-N1-P2 response) was different for each speech sound.
3 pecific sensory features are associated with speech sounds.
4 the orthographic system codes explicitly for speech sounds.
5 tex of rhesus monkeys while they categorized speech sounds.
6 t the difference between predicted and heard speech sounds.
7 ve model neurons needed to represent typical speech sounds.
8 ex signals, such as familiar faces or native speech sounds.
9 ory, irrespective of whether the objects are speech sounds.
10  processing human nonverbal vocalizations or speech sounds.
11 tors contribute to categorical perception of speech sounds.
12  affects listeners' perception of subsequent speech sounds.
13 detect, discriminate, localize and order non-speech sounds.
14 ively to monosyllables and produced the same speech sounds.
15 ex, dampening its response to self-generated speech sounds.
16 aring individuals in the absence of auditory speech sounds.
17 tatistical relationships between neighboring speech sounds.
18 n can enhance selective spatial attention to speech sounds.
19 ability to perceive and discriminate between speech sounds.
20 t, which defines the limited set of possible speech sounds.
21 ansitions between and relative timing across speech sounds.
22 n basic auditory processing and manipulating speech sounds.
23 nknown deficits in coordination of gaze with speech sounds.
24 uch as humans can discriminate renditions of speech sounds.
25 l critical information about the identity of speech sounds.
26 tect new correspondences between letters and speech sounds.
27 al network supporting perceptual grouping of speech sounds.
28 in human subjects as they manipulated stored speech sounds.
29  right-frontal regions during recognition of speech sounds.
30 icted by their relative similarity to voiced speech sounds.
31 eners to access phonological categories from speech sounds.
32 istinguish different musical instruments and speech sounds.
33 istinctly affected by acoustic properties of speech sounds.
34 stimulus onset across electrodes that encode speech sounds.
35  electrodes that most strongly discriminated speech sounds.
36 rs shaping the human subcortical response to speech sounds.
37 it is due to abnormal auditory processing of speech sounds.
38 responses can be highly variable to auditory speech sounds.
39 erformance during discrimination of isolated speech sounds.
40 ngs while participants spoke and listened to speech sounds.
41 l probability of both preceding and upcoming speech sounds.
42 tongue tip while the infants listened to the speech sounds.
43 ecifically discrimination of lip-articulated speech sounds.
44 ditory and motor cortex during processing of speech sounds.
45  changes to the perceptual classification of speech sounds.
46 ants to demonstrate statistical learning for speech sounds.
47 ditory ventral stream for temporally complex speech sounds.
48 n mental imagery of unrelated, speech or non-speech, sounds.
49                            From sequences of speech sounds(1,2) or letters(3), humans can extract ric
50 ible vocal output(12-15), including mimicked speech sounds(16).
51 e the filters learned from the statistics of speech sounds; (2) the decoding model captures the dynam
52 reparing motor plans corresponding to target speech sounds, a process known as speech-motor sequencin
53 r proposal by modeling fast recalibration of speech sounds after experiencing the McGurk effect.
54 visual representation even when listening to speech sounds alone.
55 egory boundaries from modal distributions of speech sounds along acoustic continua.
56 t modest musical training as they classified speech sounds along an acoustic-phonetic continuum.
57 e of propagation of the temporal features of speech sounds along the ventral pathway of language proc
58 hat motor circuits controlling production of speech sounds also contribute to their perception.
59 andom sequence of equiprobable loud and soft speech sounds and bright and dim checkerboard patterns o
60 uage, e.g., through subvocal reproduction of speech sounds and by covert verbal labeling of environme
61 ap, we derived slowly varying AM and FM from speech sounds and conducted listening tests using stimul
62 isplay deficient left hemisphere response to speech sounds and have abnormally right-lateralized temp
63 erves the processing of specific features of speech sounds and is behaviorally relevant for speech re
64 nsory inputs affect the neural processing of speech sounds and shows the involvement of the somatosen
65  infants' sensitivity to the distribution of speech sounds and that infant-directed speech contains t
66 tion between the ability to perceive foreign speech sounds and the volume of Heschl's gyrus (HG), the
67 gs that exist between phonetic properties of speech sounds and their meaning.
68 of the human auditory cortex in representing speech sounds and transforming them to meaning is not ye
69 ition involves comparing heard and predicted speech sounds and using prediction error to update lexic
70 e intrinsic relationship between meaningless speech sounds and visual shapes, as exemplified by the f
71  difference occurred both for speech and non-speech sounds and was unaffected by a concurrent demandi
72 y be essential for laying down the memory of speech sounds and, indeed, that speech and auditory memo
73 rn classification of corresponding phonemes (speech sounds) and visemes (lip movements).
74 ze in processing fast temporal components of speech sounds, and the right ACx slower components.
75 y stimuli (environmental sounds, meaningless speech sounds, and words) were presented either binaural
76 ain the sensorimotor maps to reproduce heard speech sounds; and a "pedagogical" learning environment
77  is that linguistic context affects both how speech sounds are categorized into phonemes, and how dif
78 so the timbre, intonation, and stress of how speech sounds are delivered (often referred to as "paral
79 cripts, such as Mandarin Chinese, individual speech sounds are not orthographically represented, rais
80               The neural substrates by which speech sounds are perceptually segregated into distinct
81 controversially interpreted as evidence that speech sounds are processed as articulatory gestures.
82  rapidly varying spectrotemporal features of speech sounds are processed, as compared to processing s
83                                        Human speech sounds are produced through a coordinated movemen
84  the Bouba/Kiki effect, in which meaningless speech sounds are systematically mapped onto rounded or
85                                              Speech sounds are traditionally divided into consonants
86              Animal sounds, as well as human speech sounds, are characterized by multiple parameters
87 matic mapping between round/spiky shapes and speech sounds ("Bouba"/"Kiki").
88 fic speech sounds from a continuous train of speech sounds but did not impair performance during disc
89 enced not only by the acoustic properties of speech sounds, but also by higher-level processes involv
90 us to determine what acoustic information in speech sounds can be reconstructed from population neura
91 elling we show that recalibration of natural speech sound categories is better described by represent
92 tion, and that their apparent selectivity to speech sound categories may reflect a more general prefe
93 magnetic resonance imaging (fMRI) studies of speech sound categorization often compare conditions in
94 of two assumptions common to fMRI studies of speech sound categorization: they suggest that temporopa
95 he processing and integration of letters and speech sounds changes with LSS learning.
96 ation resulted in a selective enhancement of speech sounds compared with the background noises.
97       Single neurons encoded a wide range of speech sound cues, including features of consonants and
98 gate brainstem and cortical responses to the speech sound /da/.
99 as the ability to discriminate among foreign speech sounds declines.
100 e same ability to discriminate among foreign speech sounds decreases.
101  8 rats (3 DC-) were trained on a variety of speech sound discrimination tasks, and auditory cortex r
102 ich included skilled forelimb motor control, speech sound discrimination, and paired-associates learn
103 mal cortical responses to sound and impaired speech sound discrimination.
104  development, oral-motor movements influence speech sound discrimination.
105 s of unknown utility in children with SLI or speech sound disorder (SSD) who do not have epilepsy.
106  among three developmental disorders, namely speech sound disorder (SSD), language impairment (LI), a
107 tro-encephalographic (EEG) abnormalities and speech sound disorder in rolandic epilepsy families - an
108                                              Speech-sound disorder (SSD) is a complex behavioral diso
109  in the genetic investigation of stuttering, speech-sound disorder (SSD), specific language impairmen
110 ity locus and a linkage region for dyslexia, speech-sound disorder and reading.
111 nnative, and hence never-before-experienced, speech sound distinction.
112 FC following auditory cortex stimulation and speech sounds drove VLPFC, consistent with prior evidenc
113 o the timing and spectral characteristics of speech sounds during speech perception.
114  potentials are synchronized to the onset of speech sounds during the Talk and Listen conditions.
115 t of the event-related potential elicited by speech sounds during vocalization (talk) and passive pla
116 e to hallucinate a phoneme replaced by a non-speech sound (e.g., a tone) in a word.
117 reflect phonetic distinctive features of the speech sounds encountered, thus providing direct neuroim
118     A consistent increase in grammatical and speech sound errors and a simplification of spoken synta
119                                              Speech sound errors occur when this STN-cortical interac
120 grammar on the one hand, and grammatical and speech sound errors on the other.
121 ated reductions in speech perception because speech sounds, especially consonants, become inaudible.
122 tex can contribute to auditory processing of speech sounds even in the absence of behavioral tasks an
123 erior temporal regions in processing complex speech sounds, evidence suggests that the motor system m
124 acoustic-phonetic processing of foundational speech sound features(2,3), such as vowels and consonant
125 vements, mapping them onto the corresponding speech sound features; this information is fed to audito
126 nses and mismatch responses to nonspeech and speech sounds for children with MMHL.
127 ted the ability of rats to identify specific speech sounds from a continuous train of speech sounds b
128 igm, we presented human subjects with paired speech sounds from a phonetic continuum but diverted the
129   Attempts to teach mammals to produce human speech sounds have largely been unsuccessful, most notab
130 factors regulating this modulation regarding speech sounds have not been disclosed.
131  characterized by difficulties in processing speech sounds (i.e., phonemes).
132 that lexical knowledge can affect reports of speech sound identity [4, 5], suggests that higher-level
133  nonprimary auditory cortex indeed processes speech-sound identity and location in parallel anterior
134  understanding speech in noise, when cues to speech-sound identity are less redundant.
135 nfant's ability to discriminate among native speech sounds improves, whereas the ability to discrimin
136 nfant's ability to discriminate among native speech sounds improves, whereas the same ability to disc
137 vely using the CI in response to a simulated speech sound in seven adult participants and compared it
138                      The neural responses to speech sounds in A1 were not degraded as a function of p
139 ts were delayed to present lip movements and speech sounds in antiphase specifically with respect to
140 g. Wernicke's area responded specifically to speech sounds in controls but was not specialized in pat
141 c-to-higher order phonetic level encoding of speech sounds in human language receptive cortex.
142 ulators, which are key for the production of speech sounds in humans.
143 e of being able to hear, but not understand, speech sounds in noisy environments.
144 uously encodes the three most recently heard speech sounds in parallel, and maintains this informatio
145 utton press was required in response to soft speech sounds in the auditory attention task and to dim
146 lling new evidence for dynamic processing of speech sounds in the auditory pathway.
147                     The accurate encoding of speech sounds in the subcortical auditory nervous system
148 ponse, a biomarker of the neural tracking of speech sounds in the subcortical auditory pathway, and c
149  of evidence has highlighted the encoding of speech sounds in the subcortical auditory system as bein
150 back, namely the self-perception of produced speech sounds, in the online control of spatial and temp
151 ical plane, responded to both non-speech and speech sounds, including the sound of the speaker's own
152 eech movements but their ability to perceive speech sounds, including their own errors, is unaffected
153 rmed by the human brain to transform natural speech sound into meaningful language, we used models ba
154 lex series of processing stages to translate speech sounds into meaning.
155               The asymmetry of processing of speech sounds is affected by low-level acoustic cues, bu
156   We found that the neural representation of speech sounds is categorically organized in the human po
157 lish associations between visual objects and speech sounds is essential for human reading.
158 irst evidence that formant perception in non-speech sounds is improved by fundamental frequency modul
159                   However, the rhythm of non-speech sounds is tracked by cortical activity as well.
160 cies underpins the acquisition of new voiced speech sounds, is not uniquely human among great apes.
161 er, these results demonstrate that nonnative speech sound learning involves a wide array of changes i
162     The acoustic dimensions that distinguish speech sounds (like the vowel differences in "boot" and
163 ions between letters and their corresponding speech sounds (LSS) is pivotal in the early stages of re
164 is experiment, human participants identified speech sounds masked by varying levels of noise while bl
165 iate the position of their articulators with speech sounds may impair the development of phonological
166 ry brainstem predicts cerebral asymmetry for speech sounds measured in a group of children spanning a
167  beyond the principal frequency range of the speech sound modulated in opposite fashion.
168 etition) or (2) are used to predict upcoming speech sounds more accurately (segment prediction error
169 ve-Coding): by comparing heard and predicted speech sounds, neural computations of prediction error c
170 ct was generalized to other types of similar speech sounds not included in the training material.
171 ng associations with specific kinds of human speech sounds, occurring persistently across continents
172  distribution of information among different speech sounds of words is governed by a critical computa
173 nalogous to phonological-level processing of speech sounds) of the gestures.
174 n (n = 11) demonstrated abnormal encoding of speech sounds on both individual measures of brainstem a
175 on, the N1 event-related brain potentials to speech sound onset during talking and listening were com
176         Mapping acoustically highly variable speech sounds onto less variable motor representations m
177 med a delayed-match-to-sample task on either speech sound or speaker identity.
178 categorizing dynamic sensory stimuli such as speech sounds or visual objects.
179 ring development, language experience alters speech sound (phoneme) categorization.
180                    Language testing included speech sound (phoneme) discrimination, single word and p
181 vocal tract movements to generate individual speech sounds (phonemes) which, in turn, are rapidly org
182 ow to convert letters (graphemes) into these speech sounds (phonemes).
183 ness that words are comprised of a system of speech sounds (phonological awareness) and the knowledge
184 st, effects of musical ability on non-native speech-sound processing and of inhibitory control on vow
185 st that individual differences in non-native speech-sound processing are to some extent determined by
186 e in the face of acoustic variability (among speech sounds produced by different speakers at differen
187 mprehension and expression and a disorder of speech sound production.
188 onsonants Correct-Revised test, a measure of speech-sound production (85+/-7 vs. 86+/-7); the General
189 onsonants Correct-Revised test, a measure of speech-sound production (96+/-2 vs. 96+/-3); the SCAN te
190 complex behavioral disorder characterized by speech-sound production errors associated with deficits
191 scores measured several processes underlying speech-sound production, including phonological memory,
192 ases, to facilitate category judgments about speech sounds (rather than speech perception, which invo
193               Although specialized voice and speech-sound regions have been proposed, it is unclear h
194                                         Each speech sound representation evolves over time, jointly e
195                        Here, we describe how speech sound representation in the STG relies on fundame
196 account to embrace phonetic and phonological speech sound representations and their neural bases.
197                                     Although speech-sound responses were distributed, spatially discr
198 ecision on the order of 1-10 ms to represent speech sounds shifted into the rat hearing range.
199 rn their combination, but when they hear non-speech sounds such as sine-wave tones, they fail to do s
200                           During encoding of speech sounds, SZ lacked the correlation of iHGP with ta
201 he earliest stages of cortical processing of speech sounds take place in the auditory cortex.
202 MEG) was used to investigate the response to speech sounds that differ in onset dynamics, parameteriz
203 ficial acoustic continua ranging between two speech sounds that differed in place of articulation, in
204 rical perception of continua ranging between speech sounds that do not involve the lips in their arti
205 ior STG is tuned for temporally slow varying speech sounds that have a high degree of spectral variat
206 ior STG is tuned for temporally fast varying speech sounds that have relatively constant energy acros
207 m that tests auditory working memory for non-speech sounds that vary in frequency and amplitude modul
208                      Participants identified speech sounds that were preceded by phrases from two dif
209                               An analysis of speech sounds, the principal source of periodic sound st
210                            When infants hear speech sounds, they can learn rules that govern their co
211 e experience affect the neural processing of speech sounds throughout the auditory system.
212                           The ability to map speech sounds to corresponding letters is critical for e
213 al processing to deficits in the matching of speech sounds to their appropriate visual representation
214 ollected while subjects listened to the same speech sounds (vowels /a/, /i/, and /u/) spoken by diffe
215 pants judged whether a given consonant-vowel speech sound was large or small, round or angular, using
216                     Phonological decoding of speech sounds was assessed by auditory syllable discrimi
217 uditory cortical responses to self-generated speech sounds, we demonstrated that predictive coding du
218                                    Examining speech sounds, we show that activation associated with t
219                                         When speech sounds were ignored, the effect of this motor dis
220   Do speakers of all languages use segmental speech sounds when they produce words?
221 auditory-cortex responses to lip-articulated speech sounds when they were attended.
222 pling content (acoustically similar to human speech sounds), which may represent some of the signal a
223 eners experience it as sequences of discrete speech sounds, which are used to recognise discrete word
224 iovisual correspondences between letters and speech sounds, which can be detected within the first 40
225 diovisual speech between mouth movements and speech sounds, which last 80 ms longer for /ga/ than for
226 ecific sets of acoustic cues, extracted from speech sounds, which vary across judgment dimensions.
227 ilarly, the strength of neural tracking of a speech sound with a dynamic pitch trajectory was not rel
228 daptable sensorimotor maps that couple heard speech sounds with motor programs for speech production;
229 ated processing of sound amplitude rises and speech sounds with posterior and middle superior tempora
230       Adults can learn to identify nonnative speech sounds with training, albeit with substantial var
231    Typically, stuttering is characterized by speech sounds, words or syllables which may be repeated
232  power (iHGP) across cortex in humans during speech-sound working memory in individuals with schizoph

 