
Corpus search results (sorted by the first word after the keyword)

Clicking a serial number opens the corresponding PubMed page.
1 tegories (faces, objects, printed words, and spoken words).
2 itten text before a degraded (noise-vocoded) spoken word.
3 y mapping the alphabetic characters onto the spoken word.
4 ing, and integration of a heard sound with a spoken word.
5  pictures, indicating their understanding of spoken words.
6 uals report color experiences when they hear spoken words.
7 r the acoustic-phonetic cues at the onset of spoken words.
8 only, but not for objects, printed words, or spoken words.
9  and that this impacts the way they memorize spoken words.
10 g the processing of pantomimes, emblems, and spoken words.
11 EG data while human participants listened to spoken words.
12 about listeners' brain activity as they hear spoken words.
13 behaviorally relevant oscillatory tuning for spoken language.
14 ocalize very much like human infants acquire spoken language.
15  explain the evolutionary advantage of human spoken language.
16 into neural mechanisms contributing to human spoken language.
17 hen it comes to an important human attribute-spoken language.
18 y integration in sign language compared with spoken language.
19 tational primitive for the representation of spoken language.
20 erstanding brain mechanisms and disorders of spoken language.
21  but no other animal, make meaningful use of spoken language.
22 eme difficulties producing and understanding spoken language.
23 ing brain and the acquisition of hearing and spoken language.
24 difficulties in expressing and understanding spoken language.
25 r experimental items appeared in written and spoken language.
26 akers while learning new information through spoken language.
27 se changes affect the processing of everyday spoken language.
28 ins may shed light on the emergence of human spoken language.
29 g is thus vicarious: Writing borrows it from spoken language.
30 ssil hominins was driven by the emergence of spoken language.
31 cular convergence between birdsong and human spoken language.
32 er than human ones in the frequency range of spoken language.
33 n and sensory input in the interpretation of spoken language.
34 tion: At their core, they are notations of a spoken language.
35 an animal studies that inform us about human spoken language.
36 setting, we blocked the possibility of using spoken language.
37 luable vocal learning animal model for human spoken language.
38 namics adjusts to the temporal properties of spoken language.
39 unique in their ability to communicate using spoken language.
40 ationships among sign language, gesture, and spoken language.
41 ind from birth responds to touch, sound, and spoken language.
42 "visual") brain regions respond to sound and spoken language.
43 scripts which encode the sound properties of spoken language.
44 ral code by which the human brain represents spoken language?
45 amental difference versus human gestures and spoken language [1, 5] that suggests these features have
46 e phenotypes were (1) phonemic awareness (of spoken words); (2) phonological decoding (of printed non
47 lack-box" along the evolutionary timeline of spoken language; a vocal hominid went in and, millions o
48                      In typical development, spoken language acquisition is gated by statistical lear
49 tic children experience atypical patterns of spoken language acquisition, yet the mechanisms underlyi
50          Response patterns discriminative of spoken words across language were limited to localized c
51                                              Spoken word activation follows the temporal structure of
52 characterized by developmental regression of spoken language and hand use that, with hand stereotypie
53 loped by deaf individuals who cannot acquire spoken language and have not been exposed to sign langua
54 tional blueprint for syntax and phonology in spoken language and human song.
55 the environment, is a key component of human spoken language and learned song in three independently
56 ic recordings of brain responses to degraded spoken words and experimentally manipulated signal quali
57 ations during the identification of familiar spoken words and perception of unfamiliar pseudowords.
58 pts presented in different modalities (e.g., spoken words and pictures or text) [1-3].
59 primary outcome (Comprehensive Aphasia Test: Spoken Words and Sentences).
60  for sign and speech, but not for individual spoken words and signs.
61                                              Spoken words and syllables could be decoded from single
62 nson), language (Comprehensive Assessment of Spoken Language), and quality of life (Pediatric Quality
63 with both the perception of visual words and spoken language, and it examines how such functional cha
64 e neural parallel between birdsong and human spoken language, and they have important consequences fo
65 , in hearing spoken language users, text and spoken language are co-dependent [4, 5], and pictures ar
66 he key concepts drawn are that components of spoken language are continuous between species, and that
67                    By contrast, responses to spoken language are present by 4 years of age and are no
68 utational content of the processes evoked as spoken words are heard in context, and to evaluate the r
69 work of neural structures, regardless of how spoken words are represented orthographically in a writi
70                                    We used a spoken word as a repeating "standard" and periodically i
71 panzee (Pan troglodytes) that recognizes 128 spoken words, asking whether she could understand such s
72 he potential role of these mechanisms in the spoken language atypicalities seen in autism are under-r
73 iduals who could not acquire the surrounding spoken language because they could not hear it, and who
74  Functional flexibility is a sine qua non in spoken language, because all words or sentences can be p
75 dent." However, this has only been tested in spoken language bilinguals.
76                                              Spoken language, both perception and production, is thou
77 ille words and occipital cortex responded to spoken words but not differentially with "old"/"new" rec
78 specialized cognitive resources for learning spoken language, but lack them for graphic codes.
79 y tool humans use to exchange information is spoken language, but the exact speed of the neuronal mec
80 tailed time-varying spectral content) of the spoken words, but not other sounds, were very successful
81 ests that the semantic representation of the spoken words can be activated automatically in the late
82  Learning to read requires an awareness that spoken words can be decomposed into the phonologic const
83 - is iconic, highly variable, and similar to spoken language co-speech gesture.
84 nologically related signs, just as hearing a spoken word coactivates phonologically related words.
85 onclude that in the absence of visual input, spoken language colonizes the visual system during brain
86 ngs support the dual neurocognitive model of spoken language comprehension and emphasize the importan
87              Electrophysiological studies of spoken language comprehension have identified an event-r
88 icipants with chronic poststroke aphasia and spoken language comprehension impairments completed cons
89  issue within a dual neurocognitive model of spoken language comprehension in which core linguistic f
90 ures analyses of variance compared change in spoken language comprehension on two co-primary outcomes
91                                              Spoken language comprehension requires abstraction of li
92                              The efficacy of spoken language comprehension therapies for persons with
93 rocessing extralinguistic information during spoken language comprehension which indicates existence
94 ating children's syntactic processing during spoken language comprehension, and a wealth of research
95 of hearing loss on neural systems supporting spoken language comprehension, beginning with age-relate
96 plore the brain regions that are involved in spoken language comprehension, fractionating this system
97 al-semantic information to show that, during spoken language comprehension, oscillatory modulations r
98 ha and beta oscillations during naturalistic spoken language comprehension, providing evidence for th
99 ed from processes of semantic integration in spoken language comprehension.
100 dress this question focusing on naturalistic spoken language comprehension.
101 to higher-level cognitive processes, such as spoken language comprehension.
102 brain oscillations as "building blocks" with spoken language comprehension.
103 duals with chronic aphasia can improve their spoken word comprehension many years after stroke.
104                                    Toddlers' spoken word comprehension was examined in the context of
105 age in this posterior perisylvian region and spoken word comprehension.
106                     Participants listened to spoken word cues (e.g., "lilac") and determined whether
107 al coordinate data for lip shape during four spoken words decomposed into seven visemes (which includ
108              We explored the neural basis of spoken language deficits in children with reading diffic
109 ded woman with left-hemisphere dominance for spoken language, demonstrated a dissociation between spo
110 n V4/V8 when imagining colors in response to spoken words, despite overtraining on word-color associa
111  to produce songs in a manner reminiscent of spoken language development in humans.
112 ification (hearing aids) that can facilitate spoken language development in young children with sever
113 itudinal, and multidimensional assessment of spoken language development over a 3-year period in chil
114 day (datalogs) and because their hearing and spoken language development was particularly vulnerable
115 arning and social attention, suggesting that spoken language differences in this population might be
116                                To identify a spoken word (e.g., dog), people must categorize the spee
117     Two groups of participants learned novel spoken words (e.g., cathedruke) that overlapped phonolog
118 y to do so depends on the structure of their spoken language (English vs. Hebrew).
119       Poor hearing acuity reduces memory for spoken words, even when the words are presented with eno
120 s. This concept implies vocal continuity of spoken language evolution at the motor level, elucidatin
121 e compared with hearing speakers with infant spoken language experience, showing that the effects of
122 ver, it has not been clear whether it is the spoken word forms or the meanings (or both) of nouns and
123 ths by number of phonemes and graphemes, and spoken-word frequencies.
124 cabulary and learning the sound structure of spoken language go hand in hand as language acquisition
125 structure-both architecture and function-for spoken language, grounding cognitive models of speech pe
126                            The processing of spoken language has been attributed to areas in the supe
127                           The recognition of spoken language has typically been studied by focusing o
128 ocal learning, a critical component of human spoken language, has been assumed to be associated with
129 ess is particularly critical when learning a spoken language, helping in the identification of discre
130 s on one of the first steps in comprehending spoken language: How do listeners extract the most funda
131 mbine speech signals with prior knowledge of spoken words (i.e., Bayesian perceptual inference).
132 ic evolution of this crucial prerequisite of spoken language: (i) monosynaptic refinement of the proj
133 elation to performance on a standard test of spoken language in 16 chronic aphasic patients both befo
134 ce of "visual" cortex responses to sound and spoken language in blind children and adolescents.
135 earing impaired and allow the acquisition of spoken language in children born deaf.
136      Cortical networks for the production of spoken language in humans are organized by phonetic feat
137 e of auditory feedback in the development of spoken language in humans is striking.
138                     Luganda, the most widely spoken language in Kampala was used to conduct the FGDs
139 stimuli.SIGNIFICANCE STATEMENT Understanding spoken language in natural environments requires listene
140 arriers when assessing children with minimal spoken language in this population.
141 se in the MTG to video clips of gestures and spoken words in 17 healthy human adults (male and female
142                      Participants recognized spoken words in a visual world task while their brains w
143 IFICANCE STATEMENT Human listeners recognize spoken words in natural speech contexts with remarkable
144             We found that while listening to spoken words in quiet, listeners with cochlear implants
145 tory domains (faces, objects, printed words, spoken words) in autistic and neurotypical individuals.
146 t dogs do comprehend the meaning of familiar spoken words, in that a word can evoke the mental repres
147 ch hearing loss influences the processing of spoken language, including higher-level processing such
148 n predominantly based on written text or the spoken word increasing numbers are now drawing on visual
149 f a comprehensive theory of the evolution of spoken language" indicated in their conclusion by Ackerm
150 nt to position and are transformed to convey spoken language information.
151 word recognition propose that the onset of a spoken word initiates a continuous process of activation
152        SIGNIFICANCE STATEMENT: Understanding spoken words involves complex processes that transform t
153                                Understanding spoken words involves complex processes that transform t
154                                              Spoken language is a central part of our everyday lives,
155 rning to comprehend and express oneself with spoken language is impaired, but the reason for this rem
156 ndings suggest that occipital plasticity for spoken language is independent of plasticity for Braille
157  Usage-based linguistic theory suggests that spoken language is prosodically structured in intonation
158 dies have shown that semantic information in spoken language is represented in multiple regions in th
159 ed that one of the fundamental properties of spoken language is the arbitrary relation between sound
160 in young children was associated with better spoken language learning than would be predicted from th
161 sleep in the consolidation of a naturalistic spoken-language learning task that produces generalizati
162 poral regions in which symbolic gestures and spoken words may be mapped onto common, corresponding co
163  15 autistic preschoolers with minimal or no spoken language (mean chronological age = 30.20, SD = 7.
164 ttention in 13 autistic preschoolers who use spoken language (mean chronological age = 34.38, SD = 8.
165 old control was critical to the evolution of spoken language, much as it today allows us to learn vow
166                           Yet, in evolution, spoken language must have emerged from neural mechanisms
167 nts evidence that audiovisual integration in spoken language occurs when one modality (vision) acts o
168 e compare with gesture, on the one hand, and spoken language on the other?
169             Although language, and therefore spoken language or speech, is often considered unique to
170 ; and that the "language of thought" maps to spoken language or symbol systems.
171              The answer may take the form of spoken words or a nonverbal signal such as a hand moveme
172 grated in cognitive and/or motor theories on spoken language origins and with more analogous nonhuman
173 rge and significant improvements for trained spoken words over therapy versus standard care (11%, Coh
174 emands for musical rhythm discrimination and spoken language paradigms are another possible source of
175 f the roles assigned to the basal ganglia in spoken language parallel very well their contribution to
176 he impact of adverse listening conditions on spoken language perception is well established, but the
177 r implantation showed greater improvement in spoken language performance (10.4; 95% confidence interv
178           Our observers identify printed and spoken words presented concurrently or separately.
179                                   Learning a spoken language presupposes efficient auditory functions
180 g ability on the neural processes supporting spoken language processing in humans, we used functional
181 activate orthographic representations during spoken language processing, while those with reading dif
182 y focusing on the role of orthography during spoken language processing.
183                            Certain models of spoken-language processing, like those for many other pe
184 ons from this subregion to areas involved in spoken-language processing.
185                                  We compared spoken language production (Speech) with multiple baseli
186 eferred processing rate-that is, the rate of spoken language production and perception-onto the oculo
187 esearch on the neuroanatomical correlates of spoken language production in aphasia using constrained
188 identify spatiotemporal networks involved in spoken language production in humans.
189                                              Spoken language production involves selecting and assemb
190                                              Spoken language production is a complex brain function t
191 urocomputational, bilateral pathway model of spoken language production, designed to provide a unifie
192 contemporary accounts of the neurobiology of spoken language production.
193 res combine to form phonological segments in spoken language production.
194  in cognitive control specific to sentential spoken language production.
195 ss large-scale cortical networks involved in spoken word production.
196 tions of people with impaired development of spoken language provide windows into key aspects of huma
197 ns: clinical diagnosis, language impairment (spoken language quotient <85) and reading discrepancy (n
198                   In toddlers, as in adults, spoken words rapidly evoke their referents.
199 and semantic processes in Chinese disyllabic spoken word recognition are modulated by top-down mechan
200 specifies the neural mechanisms that support spoken word recognition by testing two distinct implemen
201         Although it is well established that spoken word recognition engages the superior, middle, an
202                                              Spoken word recognition in context is remarkably fast an
203 e still required to achieve early and robust spoken word recognition in context.SIGNIFICANCE STATEMEN
204 ese findings are consistent with accounts of spoken word recognition in which neural computations of
205      We propose a predictive coding model of spoken word recognition in which STG neurons represent t
206 ounts (e.g., Predictive-Coding) propose that spoken word recognition involves comparing heard and pre
207              Influential cognitive models of spoken word recognition propose that the onset of a spok
208                                              Spoken word recognition requires complex, invariant repr
209 urolinguistic and psycholinguistic models of spoken word recognition that preserve the acoustic varia
210       Here, we show such a frequency bias in spoken word recognition using psychophysics, magnetoence
211     Using a Bayesian framework for modelling spoken word recognition, we find that computational mode
212 ates for higher cognitive functions, such as spoken word recognition.
213 mics of lexical activations during real-time spoken-word recognition in a visual context.
214 ws one to project the internal processing of spoken-word recognition onto a two-dimensional layout of
215 his idea, we proposed that AV integration in spoken language reflects visually induced weighting of p
216  The relationship between these gestures and spoken language remains unclear.
217  the similarity reflects retrieval of common spoken language representations.
218                                Understanding spoken language requires a complex series of processing
219                                Understanding spoken language requires the rapid integration of inform
220                                Understanding spoken language requires transforming ambiguous acoustic
221                                              Spoken language samples were obtained using the Cookie T
222                                              Spoken language samples were obtained using the Cookie T
223 ear implantation, some deaf children develop spoken language skills approaching those of their hearin
224 oup analyses revealed no association between spoken language skills, measured via the Mullen Scales o
225                       Studies of written and spoken language suggest that nonidentical brain networks
226  continuous goal-directed hand movement in a spoken-language task, online accrual of acoustic-phoneti
227                       By coarse-graining the spoken word testimony into synonym sets and dividing the
228 rofound impact on the emergence and shape of spoken language than previously recognized.
229 ological awareness, the auditory analysis of spoken language that relates the sounds of language to p
230 fit: it enhances people's ability to predict spoken language thereby aiding comprehension.
231  possibly contributing to the development of spoken language through differential RNA regulation duri
232 doxically, the increased complexity of human spoken language thus followed simplification of our lary
233 opriate behavior, they have difficulty using spoken language to explain why it is inappropriate.
234 cated machine-learning algorithms to convert spoken language to text, have become increasingly widesp
235                   The ability of written and spoken words to access the same semantic meaning provide
236 subjects, we compared semantic processing of spoken words to equivalent processing of environmental s
237 n begin acquiring their first words, linking spoken words to their visual counterparts.
238 dy, we assessed the brain regions supporting spoken word understanding in adult listeners with right
239                                         As a spoken word unfolds over time, it is temporarily consist
240  In auditory masking, background sound makes spoken words unrecognizable.
241                           Indeed, in hearing spoken language users, text and spoken language are co-d
242 aged 10-15 years, the cortical activation to spoken words was best modeled as time-locked to the unfo
243          By contrast, occipital responses to spoken language were maximal by age 4 and were not relat
244          Vocal learning is a key property of spoken language, which might also be present in nonhuman
245 orded the neural response patterns to single-spoken words with multi-channel electrodes from the guin
246                         Humans can recognize spoken words with unmatched speed and accuracy.
247 s have argued that sign is no different from spoken language, with all of the same linguistic structu
248          Response patterns discriminative of spoken words within language were distributed in multipl
249  fMRI response patterns that enable decoding spoken words within languages (within-language discrimin
250 tantiation of written language processes and spoken language, working memory and other cognitive skil
251 with dyslexia for a wide variety of stimuli, spoken words, written words, visual objects, and faces.
252                            Using a number of spoken word-written word matching paradigms, her compreh
