Corpus search results (sorted by the word 1 position after the keyword)
Click a serial number to open the corresponding PubMed page
1 ystem, and (6) correct identification of the speech sound.
2 a P1-N1-P2 response) was different for each speech sound.
3 pecific sensory features are associated with speech sounds.
4 the orthographic system codes explicitly for speech sounds.
5 tex of rhesus monkeys while they categorized speech sounds.
6 t the difference between predicted and heard speech sounds.
7 ve model neurons needed to represent typical speech sounds.
8 ex signals, such as familiar faces or native speech sounds.
9 ory, irrespective of whether the objects are speech sounds.
10 processing human nonverbal vocalizations or speech sounds.
11 tors contribute to categorical perception of speech sounds.
12 affects listeners' perception of subsequent speech sounds.
13 detect, discriminate, localize and order non-speech sounds.
14 ively to monosyllables and produced the same speech sounds.
15 ex, dampening its response to self-generated speech sounds.
16 aring individuals in the absence of auditory speech sounds.
17 tatistical relationships between neighboring speech sounds.
18 n can enhance selective spatial attention to speech sounds.
19 ability to perceive and discriminate between speech sounds.
20 t, which defines the limited set of possible speech sounds.
21 ansitions between and relative timing across speech sounds.
22 n basic auditory processing and manipulating speech sounds.
23 nknown deficits in coordination of gaze with speech sounds.
24 uch as humans can discriminate renditions of speech sounds.
25 l critical information about the identity of speech sounds.
26 tect new correspondences between letters and speech sounds.
27 al network supporting perceptual grouping of speech sounds.
28 in human subjects as they manipulated stored speech sounds.
29 right-frontal regions during recognition of speech sounds.
30 icted by their relative similarity to voiced speech sounds.
31 eners to access phonological categories from speech sounds.
32 istinguish different musical instruments and speech sounds.
33 istinctly affected by acoustic properties of speech sounds.
34 stimulus onset across electrodes that encode speech sounds.
35 electrodes that most strongly discriminated speech sounds.
36 rs shaping the human subcortical response to speech sounds.
37 it is due to abnormal auditory processing of speech sounds.
38 responses can be highly variable to auditory speech sounds.
39 erformance during discrimination of isolated speech sounds.
40 ngs while participants spoke and listened to speech sounds.
41 l probability of both preceding and upcoming speech sounds.
42 tongue tip while the infants listened to the speech sounds.
43 ecifically discrimination of lip-articulated speech sounds.
44 ditory and motor cortex during processing of speech sounds.
45 changes to the perceptual classification of speech sounds.
46 ants to demonstrate statistical learning for speech sounds.
47 ditory ventral stream for temporally complex speech sounds.
48 n mental imagery of unrelated, speech or non-speech, sounds.
51 e the filters learned from the statistics of speech sounds; (2) the decoding model captures the dynam
52 reparing motor plans corresponding to target speech sounds, a process known as speech-motor sequencin
57 e of propagation of the temporal features of speech sounds along the ventral pathway of language proc
59 andom sequence of equiprobable loud and soft speech sounds and bright and dim checkerboard patterns o
60 uage, e.g., through subvocal reproduction of speech sounds and by covert verbal labeling of environme
61 ap, we derived slowly varying AM and FM from speech sounds and conducted listening tests using stimul
62 isplay deficient left hemisphere response to speech sounds and have abnormally right-lateralized temp
63 erves the processing of specific features of speech sounds and is behaviorally relevant for speech re
64 nsory inputs affect the neural processing of speech sounds and shows the involvement of the somatosen
65 infants' sensitivity to the distribution of speech sounds and that infant-directed speech contains t
66 tion between the ability to perceive foreign speech sounds and the volume of Heschl's gyrus (HG), the
68 of the human auditory cortex in representing speech sounds and transforming them to meaning is not ye
69 ition involves comparing heard and predicted speech sounds and using prediction error to update lexic
70 e intrinsic relationship between meaningless speech sounds and visual shapes, as exemplified by the f
71 difference occurred both for speech and non-speech sounds and was unaffected by a concurrent demandi
72 y be essential for laying down the memory of speech sounds and, indeed, that speech and auditory memo
75 y stimuli (environmental sounds, meaningless speech sounds, and words) were presented either binaural
76 ain the sensorimotor maps to reproduce heard speech sounds; and a "pedagogical" learning environment
77 is that linguistic context affects both how speech sounds are categorized into phonemes, and how dif
78 so the timbre, intonation, and stress of how speech sounds are delivered (often referred to as "paral
79 cripts, such as Mandarin Chinese, individual speech sounds are not orthographically represented, rais
81 controversially interpreted as evidence that speech sounds are processed as articulatory gestures.
82 rapidly varying spectrotemporal features of speech sounds are processed, as compared to processing s
84 the Bouba/Kiki effect, in which meaningless speech sounds are systematically mapped onto rounded or
88 fic speech sounds from a continuous train of speech sounds but did not impair performance during disc
89 enced not only by the acoustic properties of speech sounds, but also by higher-level processes involv
90 us to determine what acoustic information in speech sounds can be reconstructed from population neura
91 elling we show that recalibration of natural speech sound categories is better described by represent
92 tion, and that their apparent selectivity to speech sound categories may reflect a more general prefe
93 magnetic resonance imaging (fMRI) studies of speech sound categorization often compare conditions in
94 of two assumptions common to fMRI studies of speech sound categorization: they suggest that temporopa
101 8 rats (3 DC-) were trained on a variety of speech sound discrimination tasks, and auditory cortex r
102 ich included skilled forelimb motor control, speech sound discrimination, and paired-associates learn
105 s of unknown utility in children with SLI or speech sound disorder (SSD) who do not have epilepsy.
106 among three developmental disorders, namely speech sound disorder (SSD), language impairment (LI), a
107 tro-encephalographic (EEG) abnormalities and speech sound disorder in rolandic epilepsy families - an
109 in the genetic investigation of stuttering, speech-sound disorder (SSD), specific language impairmen
112 FC following auditory cortex stimulation and speech sounds drove VLPFC, consistent with prior evidenc
114 potentials are synchronized to the onset of speech sounds during the Talk and Listen conditions.
115 t of the event-related potential elicited by speech sounds during vocalization (talk) and passive pla
117 reflect phonetic distinctive features of the speech sounds encountered, thus providing direct neuroim
118 A consistent increase in grammatical and speech sound errors and a simplification of spoken synta
121 ated reductions in speech perception because speech sounds, especially consonants, become inaudible.
122 tex can contribute to auditory processing of speech sounds even in the absence of behavioral tasks an
123 erior temporal regions in processing complex speech sounds, evidence suggests that the motor system m
124 acoustic-phonetic processing of foundational speech sound features(2,3), such as vowels and consonant
125 vements, mapping them onto the corresponding speech sound features; this information is fed to audito
127 ted the ability of rats to identify specific speech sounds from a continuous train of speech sounds b
128 igm, we presented human subjects with paired speech sounds from a phonetic continuum but diverted the
129 Attempts to teach mammals to produce human speech sounds have largely been unsuccessful, most notab
132 that lexical knowledge can affect reports of speech sound identity [4, 5], suggests that higher-level
133 nonprimary auditory cortex indeed processes speech-sound identity and location in parallel anterior
135 nfant's ability to discriminate among native speech sounds improves, whereas the ability to discrimin
136 nfant's ability to discriminate among native speech sounds improves, whereas the same ability to disc
137 vely using the CI in response to a simulated speech sound in seven adult participants and compared it
139 ts were delayed to present lip movements and speech sounds in antiphase specifically with respect to
140 g. Wernicke's area responded specifically to speech sounds in controls but was not specialized in pat
144 uously encodes the three most recently heard speech sounds in parallel, and maintains this informatio
145 utton press was required in response to soft speech sounds in the auditory attention task and to dim
148 ponse, a biomarker of the neural tracking of speech sounds in the subcortical auditory pathway, and c
149 of evidence has highlighted the encoding of speech sounds in the subcortical auditory system as bein
150 back, namely the self-perception of produced speech sounds, in the online control of spatial and temp
151 ical plane, responded to both non-speech and speech sounds, including the sound of the speaker's own
152 eech movements but their ability to perceive speech sounds, including their own errors, is unaffected
153 rmed by the human brain to transform natural speech sound into meaningful language, we used models ba
156 We found that the neural representation of speech sounds is categorically organized in the human po
158 irst evidence that formant perception in non-speech sounds is improved by fundamental frequency modul
160 cies underpins the acquisition of new voiced speech sounds, is not uniquely human among great apes.
161 er, these results demonstrate that nonnative speech sound learning involves a wide array of changes i
162 The acoustic dimensions that distinguish speech sounds (like the vowel differences in "boot" and
163 ions between letters and their corresponding speech sounds (LSS) is pivotal in the early stages of re
164 is experiment, human participants identified speech sounds masked by varying levels of noise while bl
165 iate the position of their articulators with speech sounds may impair the development of phonological
166 ry brainstem predicts cerebral asymmetry for speech sounds measured in a group of children spanning a
168 etition) or (2) are used to predict upcoming speech sounds more accurately (segment prediction error
169 ve-Coding): by comparing heard and predicted speech sounds, neural computations of prediction error c
170 ct was generalized to other types of similar speech sounds not included in the training material.
171 ng associations with specific kinds of human speech sounds, occurring persistently across continents
172 distribution of information among different speech sounds of words is governed by a critical computa
174 n (n = 11) demonstrated abnormal encoding of speech sounds on both individual measures of brainstem a
175 on, the N1 event-related brain potentials to speech sound onset during talking and listening were com
181 vocal tract movements to generate individual speech sounds (phonemes) which, in turn, are rapidly org
183 ness that words are comprised of a system of speech sounds (phonological awareness) and the knowledge
184 st, effects of musical ability on non-native speech-sound processing and of inhibitory control on vow
185 st that individual differences in non-native speech-sound processing are to some extent determined by
186 e in the face of acoustic variability (among speech sounds produced by different speakers at differen
188 onsonants Correct-Revised test, a measure of speech-sound production (85+/-7 vs. 86+/-7); the General
189 onsonants Correct-Revised test, a measure of speech-sound production (96+/-2 vs. 96+/-3); the SCAN te
190 complex behavioral disorder characterized by speech-sound production errors associated with deficits
191 scores measured several processes underlying speech-sound production, including phonological memory,
192 ases, to facilitate category judgments about speech sounds (rather than speech perception, which invo
196 account to embrace phonetic and phonological speech sound representations and their neural bases.
199 rn their combination, but when they hear non-speech sounds such as sine-wave tones, they fail to do s
202 MEG) was used to investigate the response to speech sounds that differ in onset dynamics, parameteriz
203 ficial acoustic continua ranging between two speech sounds that differed in place of articulation, in
204 rical perception of continua ranging between speech sounds that do not involve the lips in their arti
205 ior STG is tuned for temporally slow varying speech sounds that have a high degree of spectral variat
206 ior STG is tuned for temporally fast varying speech sounds that have relatively constant energy acros
207 m that tests auditory working memory for non-speech sounds that vary in frequency and amplitude modul
213 al processing to deficits in the matching of speech sounds to their appropriate visual representation
214 ollected while subjects listened to the same speech sounds (vowels /a/, /i/, and /u/) spoken by diffe
215 pants judged whether a given consonant-vowel speech sound was large or small, round or angular, using
217 uditory cortical responses to self-generated speech sounds, we demonstrated that predictive coding du
222 pling content (acoustically similar to human speech sounds), which may represent some of the signal a
223 eners experience it as sequences of discrete speech sounds, which are used to recognise discrete word
224 iovisual correspondences between letters and speech sounds, which can be detected within the first 40
225 diovisual speech between mouth movements and speech sounds, which last 80 ms longer for /ga/ than for
226 ecific sets of acoustic cues, extracted from speech sounds, which vary across judgment dimensions.
227 ilarly, the strength of neural tracking of a speech sound with a dynamic pitch trajectory was not rel
228 daptable sensorimotor maps that couple heard speech sounds with motor programs for speech production;
229 ated processing of sound amplitude rises and speech sounds with posterior and middle superior tempora
231 Typically, stuttering is characterized by speech sounds, words or syllables which may be repeated
232 power (iHGP) across cortex in humans during speech-sound working memory in individuals with schizoph