Corpus search results (sorted by the word one position after the keyword)
Click a serial number to open the corresponding PubMed page
1 tegories (faces, objects, printed words, and spoken words).
2 itten text before a degraded (noise-vocoded) spoken word.
3 y mapping the alphabetic characters onto the spoken word.
4 ing, and integration of a heard sound with a spoken word.
5 pictures, indicating their understanding of spoken words.
6 uals report color experiences when they hear spoken words.
7 r the acoustic-phonetic cues at the onset of spoken words.
8 only, but not for objects, printed words, or spoken words.
9 and that this impacts the way they memorize spoken words.
10 g the processing of pantomimes, emblems, and spoken words.
11 EG data while human participants listened to spoken words.
12 about listeners' brain activity as they hear spoken words.
13 behaviorally relevant oscillatory tuning for spoken language.
14 ocalize very much like human infants acquire spoken language.
15 explain the evolutionary advantage of human spoken language.
16 into neural mechanisms contributing to human spoken language.
17 hen it comes to an important human attribute-spoken language.
18 y integration in sign language compared with spoken language.
19 tational primitive for the representation of spoken language.
20 erstanding brain mechanisms and disorders of spoken language.
21 but no other animal, make meaningful use of spoken language.
22 eme difficulties producing and understanding spoken language.
23 ing brain and the acquisition of hearing and spoken language.
24 difficulties in expressing and understanding spoken language.
25 r experimental items appeared in written and spoken language.
26 akers while learning new information through spoken language.
27 se changes affect the processing of everyday spoken language.
28 ins may shed light on the emergence of human spoken language.
29 g is thus vicarious: Writing borrows it from spoken language.
30 ssil hominins was driven by the emergence of spoken language.
31 cular convergence between birdsong and human spoken language.
32 er than human ones in the frequency range of spoken language.
33 n and sensory input in the interpretation of spoken language.
34 tion: At their core, they are notations of a spoken language.
35 an animal studies that inform us about human spoken language.
36 setting, we blocked the possibility of using spoken language.
37 luable vocal learning animal model for human spoken language.
38 namics adjusts to the temporal properties of spoken language.
39 unique in their ability to communicate using spoken language.
40 ationships among sign language, gesture, and spoken language.
41 ind from birth responds to touch, sound, and spoken language.
42 "visual") brain regions respond to sound and spoken language.
43 scripts which encode the sound properties of spoken language.
44 ral code by which the human brain represents spoken language?
45 amental difference versus human gestures and spoken language [1, 5] that suggests these features have
46 e phenotypes were (1) phonemic awareness (of spoken words); (2) phonological decoding (of printed non
47 lack-box" along the evolutionary timeline of spoken language; a vocal hominid went in and, millions o
49 tic children experience atypical patterns of spoken language acquisition, yet the mechanisms underlyi
52 characterized by developmental regression of spoken language and hand use that, with hand stereotypie
53 loped by deaf individuals who cannot acquire spoken language and have not been exposed to sign langua
55 the environment, is a key component of human spoken language and learned song in three independently
56 ic recordings of brain responses to degraded spoken words and experimentally manipulated signal quali
57 ations during the identification of familiar spoken words and perception of unfamiliar pseudowords.
62 nson), language (Comprehensive Assessment of Spoken Language), and quality of life (Pediatric Quality
63 with both the perception of visual words and spoken language, and it examines how such functional cha
64 e neural parallel between birdsong and human spoken language, and they have important consequences fo
65 , in hearing spoken language users, text and spoken language are co-dependent [4, 5], and pictures ar
66 he key concepts drawn are that components of spoken language are continuous between species, and that
68 utational content of the processes evoked as spoken words are heard in context, and to evaluate the r
69 work of neural structures, regardless of how spoken words are represented orthographically in a writi
71 panzee (Pan troglodytes) that recognizes 128 spoken words, asking whether she could understand such s
72 he potential role of these mechanisms in the spoken language atypicalities seen in autism are under-r
73 iduals who could not acquire the surrounding spoken language because they could not hear it, and who
74 Functional flexibility is a sine qua non in spoken language, because all words or sentences can be p
77 ille words and occipital cortex responded to spoken words but not differentially with "old"/"new" rec
79 y tool humans use to exchange information is spoken language, but the exact speed of the neuronal mec
80 tailed time-varying spectral content) of the spoken words, but not other sounds, were very successful
81 ests that the semantic representation of the spoken words can be activated automatically in the late
82 Learning to read requires an awareness that spoken words can be decomposed into the phonologic const
84 nologically related signs, just as hearing a spoken word coactivates phonologically related words.
85 onclude that in the absence of visual input, spoken language colonizes the visual system during brain
86 ngs support the dual neurocognitive model of spoken language comprehension and emphasize the importan
88 icipants with chronic poststroke aphasia and spoken language comprehension impairments completed cons
89 issue within a dual neurocognitive model of spoken language comprehension in which core linguistic f
90 ures analyses of variance compared change in spoken language comprehension on two co-primary outcomes
93 rocessing extralinguistic information during spoken language comprehension which indicates existence
94 ating children's syntactic processing during spoken language comprehension, and a wealth of research
95 of hearing loss on neural systems supporting spoken language comprehension, beginning with age-relate
96 plore the brain regions that are involved in spoken language comprehension, fractionating this system
97 al-semantic information to show that, during spoken language comprehension, oscillatory modulations r
98 ha and beta oscillations during naturalistic spoken language comprehension, providing evidence for th
107 al coordinate data for lip shape during four spoken words decomposed into seven visemes (which includ
109 ded woman with left-hemisphere dominance for spoken language, demonstrated a dissociation between spo
110 n V4/V8 when imagining colors in response to spoken words, despite overtraining on word-color associa
112 ification (hearing aids) that can facilitate spoken language development in young children with sever
113 itudinal, and multidimensional assessment of spoken language development over a 3-year period in chil
114 day (datalogs) and because their hearing and spoken language development was particularly vulnerable
115 arning and social attention, suggesting that spoken language differences in this population might be
117 Two groups of participants learned novel spoken words (e.g., cathedruke) that overlapped phonolog
120 s. This concept implies vocal continuity of spoken language evolution at the motor level, elucidatin
121 e compared with hearing speakers with infant spoken language experience, showing that the effects of
122 ver, it has not been clear whether it is the spoken word forms or the meanings (or both) of nouns and
124 cabulary and learning the sound structure of spoken language go hand in hand as language acquisition
125 structure-both architecture and function-for spoken language, grounding cognitive models of speech pe
128 ocal learning, a critical component of human spoken language, has been assumed to be associated with
129 ess is particularly critical when learning a spoken language, helping in the identification of discre
130 s on one of the first steps in comprehending spoken language: How do listeners extract the most funda
131 mbine speech signals with prior knowledge of spoken words (i.e., Bayesian perceptual inference).
132 ic evolution of this crucial prerequisite of spoken language: (i) monosynaptic refinement of the proj
133 elation to performance on a standard test of spoken language in 16 chronic aphasic patients both befo
136 Cortical networks for the production of spoken language in humans are organized by phonetic feat
139 stimuli. SIGNIFICANCE STATEMENT Understanding spoken language in natural environments requires listene
141 se in the MTG to video clips of gestures and spoken words in 17 healthy human adults (male and female
143 IFICANCE STATEMENT Human listeners recognize spoken words in natural speech contexts with remarkable
145 tory domains (faces, objects, printed words, spoken words) in autistic and neurotypical individuals.
146 t dogs do comprehend the meaning of familiar spoken words, in that a word can evoke the mental repres
147 ch hearing loss influences the processing of spoken language, including higher-level processing such
148 n predominantly based on written text or the spoken word increasing numbers are now drawing on visual
149 f a comprehensive theory of the evolution of spoken language" indicated in their conclusion by Ackerm
151 word recognition propose that the onset of a spoken word initiates a continuous process of activation
155 rning to comprehend and express oneself with spoken language is impaired, but the reason for this rem
156 ndings suggest that occipital plasticity for spoken language is independent of plasticity for Braille
157 Usage-based linguistic theory suggests that spoken language is prosodically structured in intonation
158 dies have shown that semantic information in spoken language is represented in multiple regions in th
159 ed that one of the fundamental properties of spoken language is the arbitrary relation between sound
160 in young children was associated with better spoken language learning than would be predicted from th
161 sleep in the consolidation of a naturalistic spoken-language learning task that produces generalizati
162 poral regions in which symbolic gestures and spoken words may be mapped onto common, corresponding co
163 15 autistic preschoolers with minimal or no spoken language (mean chronological age = 30.20, SD = 7.
164 ttention in 13 autistic preschoolers who use spoken language (mean chronological age = 34.38, SD = 8.
165 old control was critical to the evolution of spoken language, much as it today allows us to learn vow
167 nts evidence that audiovisual integration in spoken language occurs when one modality (vision) acts o
172 grated in cognitive and/or motor theories on spoken language origins and with more analogous nonhuman
173 rge and significant improvements for trained spoken words over therapy versus standard care (11%, Coh
174 emands for musical rhythm discrimination and spoken language paradigms are another possible source of
175 f the roles assigned to the basal ganglia in spoken language parallel very well their contribution to
176 he impact of adverse listening conditions on spoken language perception is well established, but the
177 r implantation showed greater improvement in spoken language performance (10.4; 95% confidence interv
180 g ability on the neural processes supporting spoken language processing in humans, we used functional
181 activate orthographic representations during spoken language processing, while those with reading dif
186 eferred processing rate-that is, the rate of spoken language production and perception-onto the oculo
187 esearch on the neuroanatomical correlates of spoken language production in aphasia using constrained
191 urocomputational, bilateral pathway model of spoken language production, designed to provide a unifie
196 tions of people with impaired development of spoken language provide windows into key aspects of huma
197 ns: clinical diagnosis, language impairment (spoken language quotient <85) and reading discrepancy (n
199 and semantic processes in Chinese disyllabic spoken word recognition are modulated by top-down mechan
200 specifies the neural mechanisms that support spoken word recognition by testing two distinct implemen
203 e still required to achieve early and robust spoken word recognition in context. SIGNIFICANCE STATEMEN
204 ese findings are consistent with accounts of spoken word recognition in which neural computations of
205 We propose a predictive coding model of spoken word recognition in which STG neurons represent t
206 ounts (e.g., Predictive-Coding) propose that spoken word recognition involves comparing heard and pre
209 urolinguistic and psycholinguistic models of spoken word recognition that preserve the acoustic varia
211 Using a Bayesian framework for modelling spoken word recognition, we find that computational mode
214 ws one to project the internal processing of spoken-word recognition onto a two-dimensional layout of
215 his idea, we proposed that AV integration in spoken language reflects visually induced weighting of p
223 ear implantation, some deaf children develop spoken language skills approaching those of their hearin
224 oup analyses revealed no association between spoken language skills, measured via the Mullen Scales o
226 continuous goal-directed hand movement in a spoken-language task, online accrual of acoustic-phoneti
229 ological awareness, the auditory analysis of spoken language that relates the sounds of language to p
231 possibly contributing to the development of spoken language through differential RNA regulation duri
232 doxically, the increased complexity of human spoken language thus followed simplification of our lary
233 opriate behavior, they have difficulty using spoken language to explain why it is inappropriate.
234 cated machine-learning algorithms to convert spoken language to text, have become increasingly widesp
236 subjects, we compared semantic processing of spoken words to equivalent processing of environmental s
238 dy, we assessed the brain regions supporting spoken word understanding in adult listeners with right
242 aged 10-15 years, the cortical activation to spoken words was best modeled as time-locked to the unfo
245 orded the neural response patterns to single-spoken words with multi-channel electrodes from the guin
247 s have argued that sign is no different from spoken language, with all of the same linguistic structu
249 fMRI response patterns that enable decoding spoken words within languages (within-language discrimin
250 tantiation of written language processes and spoken language, working memory and other cognitive skil
251 with dyslexia for a wide variety of stimuli, spoken words, written words, visual objects, and faces.