Corpus search results (sorted by the first word following the keyword)
Serial numbers link to the corresponding PubMed pages.
1 ystem, and (6) correct identification of the speech sound.
2 a P1-N1-P2 response) was different for each speech sound.
3 al network supporting perceptual grouping of speech sounds.
4 processing human nonverbal vocalizations or speech sounds.
5 tors contribute to categorical perception of speech sounds.
6 detect, discriminate, localize and order non-speech sounds.
7 istinguish different musical instruments and speech sounds.
8 istinctly affected by acoustic properties of speech sounds.
9 ively to monosyllables and produced the same speech sounds.
10 ex, dampening its response to self-generated speech sounds.
11 aring individuals in the absence of auditory speech sounds.
12 tatistical relationships between neighboring speech sounds.
13 n can enhance selective spatial attention to speech sounds.
14 stimulus onset across electrodes that encode speech sounds.
15 electrodes that most strongly discriminated speech sounds.
16 rs shaping the human subcortical response to speech sounds.
17 it is due to abnormal auditory processing of speech sounds.
18 in human subjects as they manipulated stored speech sounds.
19 responses can be highly variable to auditory speech sounds.
20 erformance during discrimination of isolated speech sounds.
21 ngs while participants spoke and listened to speech sounds.
22 right-frontal regions during recognition of speech sounds.
23 l probability of both preceding and upcoming speech sounds.
24 tongue tip while the infants listened to the speech sounds.
25 ecifically discrimination of lip-articulated speech sounds.
26 ditory and motor cortex during processing of speech sounds.
27 icted by their relative similarity to voiced speech sounds.
28 changes to the perceptual classification of speech sounds.
29 ants to demonstrate statistical learning for speech sounds.
30 ditory ventral stream for temporally complex speech sounds.
31 the orthographic system codes explicitly for speech sounds.
32 tex of rhesus monkeys while they categorized speech sounds.
33 t the difference between predicted and heard speech sounds.
34 ve model neurons needed to represent typical speech sounds.
35 ex signals, such as familiar faces or native speech sounds.
36 e the filters learned from the statistics of speech sounds; (2) the decoding model captures the dynam
39 e of propagation of the temporal features of speech sounds along the ventral pathway of language proc
41 andom sequence of equiprobable loud and soft speech sounds and bright and dim checkerboard patterns o
42 uage, e.g., through subvocal reproduction of speech sounds and by covert verbal labeling of environme
43 ap, we derived slowly varying AM and FM from speech sounds and conducted listening tests using stimul
44 isplay deficient left hemisphere response to speech sounds and have abnormally right-lateralized temp
45 erves the processing of specific features of speech sounds and is behaviorally relevant for speech re
46 nsory inputs affect the neural processing of speech sounds and shows the involvement of the somatosen
47 infants' sensitivity to the distribution of speech sounds and that infant-directed speech contains t
48 tion between the ability to perceive foreign speech sounds and the volume of Heschl's gyrus (HG), the
50 y be essential for laying down the memory of speech sounds and, indeed, that speech and auditory memo
51 y stimuli (environmental sounds, meaningless speech sounds, and words) were presented either binaural
52 ain the sensorimotor maps to reproduce heard speech sounds; and a "pedagogical" learning environment
53 is that linguistic context affects both how speech sounds are categorized into phonemes, and how dif
54 cripts, such as Mandarin Chinese, individual speech sounds are not orthographically represented, rais
56 controversially interpreted as evidence that speech sounds are processed as articulatory gestures.
57 rapidly varying spectrotemporal features of speech sounds are processed, as compared to processing s
59 the Bouba/Kiki effect, in which meaningless speech sounds are systematically mapped onto rounded or
62 fic speech sounds from a continuous train of speech sounds but did not impair performance during disc
63 enced not only by the acoustic properties of speech sounds, but also by higher-level processes involv
64 us to determine what acoustic information in speech sounds can be reconstructed from population neura
65 tion, and that their apparent selectivity to speech sound categories may reflect a more general prefe
66 magnetic resonance imaging (fMRI) studies of speech sound categorization often compare conditions in
67 of two assumptions common to fMRI studies of speech sound categorization: they suggest that temporopa
71 8 rats (3 DC-) were trained on a variety of speech sound discrimination tasks, and auditory cortex r
74 s of unknown utility in children with SLI or speech sound disorder (SSD) who do not have epilepsy.
75 among three developmental disorders, namely speech sound disorder (SSD), language impairment (LI), a
76 tro-encephalographic (EEG) abnormalities and speech sound disorder in rolandic epilepsy families - an
78 in the genetic investigation of stuttering, speech-sound disorder (SSD), specific language impairmen
82 t of the event-related potential elicited by speech sounds during vocalization (talk) and passive pla
84 reflect phonetic distinctive features of the speech sounds encountered, thus providing direct neuroim
85 A consistent increase in grammatical and speech sound errors and a simplification of spoken synta
87 ated reductions in speech perception because speech sounds, especially consonants, become inaudible.
88 tex can contribute to auditory processing of speech sounds even in the absence of behavioral tasks an
89 erior temporal regions in processing complex speech sounds, evidence suggests that the motor system m
90 ted the ability of rats to identify specific speech sounds from a continuous train of speech sounds b
91 igm, we presented human subjects with paired speech sounds from a phonetic continuum but diverted the
94 that lexical knowledge can affect reports of speech sound identity [4, 5], suggests that higher-level
95 nonprimary auditory cortex indeed processes speech-sound identity and location in parallel anterior
96 nfant's ability to discriminate among native speech sounds improves, whereas the ability to discrimin
97 nfant's ability to discriminate among native speech sounds improves, whereas the same ability to disc
99 g. Wernicke's area responded specifically to speech sounds in controls but was not specialized in pat
102 utton press was required in response to soft speech sounds in the auditory attention task and to dim
105 ponse, a biomarker of the neural tracking of speech sounds in the subcortical auditory pathway, and c
106 of evidence has highlighted the encoding of speech sounds in the subcortical auditory system as bein
107 back, namely the self-perception of produced speech sounds, in the online control of spatial and temp
108 ical plane, responded to both non-speech and speech sounds, including the sound of the speaker's own
109 eech movements but their ability to perceive speech sounds, including their own errors, is unaffected
110 rmed by the human brain to transform natural speech sound into meaningful language, we used models ba
112 We found that the neural representation of speech sounds is categorically organized in the human po
113 irst evidence that formant perception in non-speech sounds is improved by fundamental frequency modul
114 is experiment, human participants identified speech sounds masked by varying levels of noise while bl
115 iate the position of their articulators with speech sounds may impair the development of phonological
116 ry brainstem predicts cerebral asymmetry for speech sounds measured in a group of children spanning a
118 etition) or (2) are used to predict upcoming speech sounds more accurately (segment prediction error
119 ct was generalized to other types of similar speech sounds not included in the training material.
120 ng associations with specific kinds of human speech sounds, occurring persistently across continents
122 n (n = 11) demonstrated abnormal encoding of speech sounds on both individual measures of brainstem a
123 on, the N1 event-related brain potentials to speech sound onset during talking and listening were com
128 vocal tract movements to generate individual speech sounds (phonemes) which, in turn, are rapidly org
130 ness that words are comprised of a system of speech sounds (phonological awareness) and the knowledge
131 st, effects of musical ability on non-native speech-sound processing and of inhibitory control on vow
132 st that individual differences in non-native speech-sound processing are to some extent determined by
133 e in the face of acoustic variability (among speech sounds produced by different speakers at differen
135 onsonants Correct-Revised test, a measure of speech-sound production (85+/-7 vs. 86+/-7); the General
136 onsonants Correct-Revised test, a measure of speech-sound production (96+/-2 vs. 96+/-3); the SCAN te
137 complex behavioral disorder characterized by speech-sound production errors associated with deficits
138 scores measured several processes underlying speech-sound production, including phonological memory,
139 ases, to facilitate category judgments about speech sounds (rather than speech perception, which invo
143 rn their combination, but when they hear non-speech sounds such as sine-wave tones, they fail to do s
145 MEG) was used to investigate the response to speech sounds that differ in onset dynamics, parameteriz
146 ficial acoustic continua ranging between two speech sounds that differed in place of articulation, in
147 rical perception of continua ranging between speech sounds that do not involve the lips in their arti
148 ior STG is tuned for temporally slow varying speech sounds that have a high degree of spectral variat
149 ior STG is tuned for temporally fast varying speech sounds that have relatively constant energy acros
153 al processing to deficits in the matching of speech sounds to their appropriate visual representation
154 ollected while subjects listened to the same speech sounds (vowels /a/, /i/, and /u/) spoken by diffe
155 pants judged whether a given consonant-vowel speech sound was large or small, round or angular, using
157 uditory cortical responses to self-generated speech sounds, we demonstrated that predictive coding du
162 pling content (acoustically similar to human speech sounds), which may represent some of the signal a
163 diovisual speech between mouth movements and speech sounds, which last 80 ms longer for /ga/ than for
164 ecific sets of acoustic cues, extracted from speech sounds, which vary across judgment dimensions.
165 daptable sensorimotor maps that couple heard speech sounds with motor programs for speech production;