Corpus search results (sorted by the word one position after the keyword)

Click a serial number to open the corresponding PubMed page.
1 ystem, and (6) correct identification of the speech sound.
2  a P1-N1-P2 response) was different for each speech sound.
3 al network supporting perceptual grouping of speech sounds.
4  processing human nonverbal vocalizations or speech sounds.
5 tors contribute to categorical perception of speech sounds.
6 detect, discriminate, localize and order non-speech sounds.
7 istinguish different musical instruments and speech sounds.
8 istinctly affected by acoustic properties of speech sounds.
9 ively to monosyllables and produced the same speech sounds.
10 ex, dampening its response to self-generated speech sounds.
11 aring individuals in the absence of auditory speech sounds.
12 tatistical relationships between neighboring speech sounds.
13 n can enhance selective spatial attention to speech sounds.
14 stimulus onset across electrodes that encode speech sounds.
15  electrodes that most strongly discriminated speech sounds.
16 rs shaping the human subcortical response to speech sounds.
17 it is due to abnormal auditory processing of speech sounds.
18 in human subjects as they manipulated stored speech sounds.
19 responses can be highly variable to auditory speech sounds.
20 erformance during discrimination of isolated speech sounds.
21 ngs while participants spoke and listened to speech sounds.
22  right-frontal regions during recognition of speech sounds.
23 l probability of both preceding and upcoming speech sounds.
24 tongue tip while the infants listened to the speech sounds.
25 ecifically discrimination of lip-articulated speech sounds.
26 ditory and motor cortex during processing of speech sounds.
27 icted by their relative similarity to voiced speech sounds.
28  changes to the perceptual classification of speech sounds.
29 ants to demonstrate statistical learning for speech sounds.
30 ditory ventral stream for temporally complex speech sounds.
31 the orthographic system codes explicitly for speech sounds.
32 tex of rhesus monkeys while they categorized speech sounds.
33 t the difference between predicted and heard speech sounds.
34 ve model neurons needed to represent typical speech sounds.
35 ex signals, such as familiar faces or native speech sounds.
36 e the filters learned from the statistics of speech sounds; (2) the decoding model captures the dynam
37 egory boundaries from modal distributions of speech sounds along acoustic continua.
38 t modest musical training as they classified speech sounds along an acoustic-phonetic continuum.
39 e of propagation of the temporal features of speech sounds along the ventral pathway of language proc
40 hat motor circuits controlling production of speech sounds also contribute to their perception.
41 andom sequence of equiprobable loud and soft speech sounds and bright and dim checkerboard patterns o
42 uage, e.g., through subvocal reproduction of speech sounds and by covert verbal labeling of environme
43 ap, we derived slowly varying AM and FM from speech sounds and conducted listening tests using stimul
44 isplay deficient left hemisphere response to speech sounds and have abnormally right-lateralized temp
45 erves the processing of specific features of speech sounds and is behaviorally relevant for speech re
46 nsory inputs affect the neural processing of speech sounds and shows the involvement of the somatosen
47  infants' sensitivity to the distribution of speech sounds and that infant-directed speech contains t
48 tion between the ability to perceive foreign speech sounds and the volume of Heschl's gyrus (HG), the
49 gs that exist between phonetic properties of speech sounds and their meaning.
50 y be essential for laying down the memory of speech sounds and, indeed, that speech and auditory memo
51 y stimuli (environmental sounds, meaningless speech sounds, and words) were presented either binaural
52 ain the sensorimotor maps to reproduce heard speech sounds; and a "pedagogical" learning environment
53  is that linguistic context affects both how speech sounds are categorized into phonemes, and how dif
54 cripts, such as Mandarin Chinese, individual speech sounds are not orthographically represented, rais
55               The neural substrates by which speech sounds are perceptually segregated into distinct
56 controversially interpreted as evidence that speech sounds are processed as articulatory gestures.
57  rapidly varying spectrotemporal features of speech sounds are processed, as compared to processing s
58                                        Human speech sounds are produced through a coordinated movemen
59  the Bouba/Kiki effect, in which meaningless speech sounds are systematically mapped onto rounded or
60                                              Speech sounds are traditionally divided into consonants
61              Animal sounds, as well as human speech sounds, are characterized by multiple parameters
62 fic speech sounds from a continuous train of speech sounds but did not impair performance during disc
63 enced not only by the acoustic properties of speech sounds, but also by higher-level processes involv
64 us to determine what acoustic information in speech sounds can be reconstructed from population neura
65 tion, and that their apparent selectivity to speech sound categories may reflect a more general prefe
66 magnetic resonance imaging (fMRI) studies of speech sound categorization often compare conditions in
67 of two assumptions common to fMRI studies of speech sound categorization: they suggest that temporopa
68 gate brainstem and cortical responses to the speech sound /da/.
69 as the ability to discriminate among foreign speech sounds declines.
70 e same ability to discriminate among foreign speech sounds decreases.
71  8 rats (3 DC-) were trained on a variety of speech sound discrimination tasks, and auditory cortex r
72 mal cortical responses to sound and impaired speech sound discrimination.
73  development, oral-motor movements influence speech sound discrimination.
74 s of unknown utility in children with SLI or speech sound disorder (SSD) who do not have epilepsy.
75  among three developmental disorders, namely speech sound disorder (SSD), language impairment (LI), a
76 tro-encephalographic (EEG) abnormalities and speech sound disorder in rolandic epilepsy families - an
77                                              Speech-sound disorder (SSD) is a complex behavioral diso
78  in the genetic investigation of stuttering, speech-sound disorder (SSD), specific language impairmen
79 ity locus and a linkage region for dyslexia, speech-sound disorder and reading.
80 nnative, and hence never-before-experienced, speech sound distinction.
81  potentials are synchronized to the onset of speech sounds during the Talk and Listen conditions.
82 t of the event-related potential elicited by speech sounds during vocalization (talk) and passive pla
83 e to hallucinate a phoneme replaced by a non-speech sound (e.g., a tone) in a word.
84 reflect phonetic distinctive features of the speech sounds encountered, thus providing direct neuroim
85     A consistent increase in grammatical and speech sound errors and a simplification of spoken synta
86 grammar on the one hand, and grammatical and speech sound errors on the other.
87 ated reductions in speech perception because speech sounds, especially consonants, become inaudible.
88 tex can contribute to auditory processing of speech sounds even in the absence of behavioral tasks an
89 erior temporal regions in processing complex speech sounds, evidence suggests that the motor system m
90 ted the ability of rats to identify specific speech sounds from a continuous train of speech sounds b
91 igm, we presented human subjects with paired speech sounds from a phonetic continuum but diverted the
92 factors regulating this modulation regarding speech sounds have not been disclosed.
93  characterized by difficulties in processing speech sounds (i.e., phonemes).
94 that lexical knowledge can affect reports of speech sound identity [4, 5], suggests that higher-level
95  nonprimary auditory cortex indeed processes speech-sound identity and location in parallel anterior
96 nfant's ability to discriminate among native speech sounds improves, whereas the ability to discrimin
97 nfant's ability to discriminate among native speech sounds improves, whereas the same ability to disc
98                      The neural responses to speech sounds in A1 were not degraded as a function of p
99 g. Wernicke's area responded specifically to speech sounds in controls but was not specialized in pat
100 c-to-higher order phonetic level encoding of speech sounds in human language receptive cortex.
101 e of being able to hear, but not understand, speech sounds in noisy environments.
102 utton press was required in response to soft speech sounds in the auditory attention task and to dim
103 lling new evidence for dynamic processing of speech sounds in the auditory pathway.
104                     The accurate encoding of speech sounds in the subcortical auditory nervous system
105 ponse, a biomarker of the neural tracking of speech sounds in the subcortical auditory pathway, and c
106  of evidence has highlighted the encoding of speech sounds in the subcortical auditory system as bein
107 back, namely the self-perception of produced speech sounds, in the online control of spatial and temp
108 ical plane, responded to both non-speech and speech sounds, including the sound of the speaker's own
109 eech movements but their ability to perceive speech sounds, including their own errors, is unaffected
110 rmed by the human brain to transform natural speech sound into meaningful language, we used models ba
111 lex series of processing stages to translate speech sounds into meaning.
112   We found that the neural representation of speech sounds is categorically organized in the human po
113 irst evidence that formant perception in non-speech sounds is improved by fundamental frequency modul
114 is experiment, human participants identified speech sounds masked by varying levels of noise while bl
115 iate the position of their articulators with speech sounds may impair the development of phonological
116 ry brainstem predicts cerebral asymmetry for speech sounds measured in a group of children spanning a
117  beyond the principal frequency range of the speech sound modulated in opposite fashion.
118 etition) or (2) are used to predict upcoming speech sounds more accurately (segment prediction error
119 ct was generalized to other types of similar speech sounds not included in the training material.
120 ng associations with specific kinds of human speech sounds, occurring persistently across continents
121 nalogous to phonological-level processing of speech sounds) of the gestures.
122 n (n = 11) demonstrated abnormal encoding of speech sounds on both individual measures of brainstem a
123 on, the N1 event-related brain potentials to speech sound onset during talking and listening were com
124         Mapping acoustically highly variable speech sounds onto less variable motor representations m
125 med a delayed-match-to-sample task on either speech sound or speaker identity.
126 ring development, language experience alters speech sound (phoneme) categorization.
127                    Language testing included speech sound (phoneme) discrimination, single word and p
128 vocal tract movements to generate individual speech sounds (phonemes) which, in turn, are rapidly org
129 ow to convert letters (graphemes) into these speech sounds (phonemes).
130 ness that words are comprised of a system of speech sounds (phonological awareness) and the knowledge
131 st, effects of musical ability on non-native speech-sound processing and of inhibitory control on vow
132 st that individual differences in non-native speech-sound processing are to some extent determined by
133 e in the face of acoustic variability (among speech sounds produced by different speakers at differen
134 mprehension and expression and a disorder of speech sound production.
135 onsonants Correct-Revised test, a measure of speech-sound production (85+/-7 vs. 86+/-7); the General
136 onsonants Correct-Revised test, a measure of speech-sound production (96+/-2 vs. 96+/-3); the SCAN te
137 complex behavioral disorder characterized by speech-sound production errors associated with deficits
138 scores measured several processes underlying speech-sound production, including phonological memory,
139 ases, to facilitate category judgments about speech sounds (rather than speech perception, which invo
140               Although specialized voice and speech-sound regions have been proposed, it is unclear h
141                                     Although speech-sound responses were distributed, spatially discr
142 ecision on the order of 1-10 ms to represent speech sounds shifted into the rat hearing range.
143 rn their combination, but when they hear non-speech sounds such as sine-wave tones, they fail to do s
144 he earliest stages of cortical processing of speech sounds take place in the auditory cortex.
145 MEG) was used to investigate the response to speech sounds that differ in onset dynamics, parameteriz
146 ficial acoustic continua ranging between two speech sounds that differed in place of articulation, in
147 rical perception of continua ranging between speech sounds that do not involve the lips in their arti
148 ior STG is tuned for temporally slow varying speech sounds that have a high degree of spectral variat
149 ior STG is tuned for temporally fast varying speech sounds that have relatively constant energy acros
150                               An analysis of speech sounds, the principal source of periodic sound st
151                            When infants hear speech sounds, they can learn rules that govern their co
152 e experience affect the neural processing of speech sounds throughout the auditory system.
153 al processing to deficits in the matching of speech sounds to their appropriate visual representation
154 ollected while subjects listened to the same speech sounds (vowels /a/, /i/, and /u/) spoken by diffe
155 pants judged whether a given consonant-vowel speech sound was large or small, round or angular, using
156                     Phonological decoding of speech sounds was assessed by auditory syllable discrimi
157 uditory cortical responses to self-generated speech sounds, we demonstrated that predictive coding du
158                                    Examining speech sounds, we show that activation associated with t
159                                         When speech sounds were ignored, the effect of this motor dis
160   Do speakers of all languages use segmental speech sounds when they produce words?
161 auditory-cortex responses to lip-articulated speech sounds when they were attended.
162 pling content (acoustically similar to human speech sounds), which may represent some of the signal a
163 diovisual speech between mouth movements and speech sounds, which last 80 ms longer for /ga/ than for
164 ecific sets of acoustic cues, extracted from speech sounds, which vary across judgment dimensions.
165 daptable sensorimotor maps that couple heard speech sounds with motor programs for speech production;

Technical terms (or usages) not yet included in WebLSD can be submitted via "新規対訳" (new translation-pair entry).