
Corpus search results (sorted by the word one position after the keyword)

Click a serial number to open the corresponding PubMed page.
1 nce of relative sound source motion ("active listening").
2 ulation, or EAS) and/or across ears (bimodal listening).
3  participants (n = 27) were engaged in music listening.
4  in the context of natural, narrative speech listening.
5 oreground objects like speech during natural listening.
6 ediction that are common to both reading and listening.
7 on-amusic patients during instrumental music listening.
8 spatial cues are modulated by active spatial listening.
9 activate the currently heard language during listening.
10 sound making compared with the initial naive listening.
11 nterpretation of the sound during subsequent listening.
12 auditory functional topography during active listening.
13 otopic mismatch for EAS, but not for bimodal listening.
14 to simulations of unimodal, EAS, and bimodal listening.
15 versational contexts both while speaking and listening.
16 te factors that affect IE in EAS and bimodal listening.
17  frontoparietal brain areas during selective listening.
18 nd the to-be-ignored stream during selective listening.
19 own about their interaction during selective listening.
20 n they actively vocalize than during passive listening.
21 auditory-visual cues modulate this selective listening.
22 mporal integration that occur during passive listening.
23 in speech production encountered in everyday listening.
24 cy between imagined speech, overt speech and listening.
25 onset expectations during naturalistic music listening.
26 erated within auditory cortex during passive listening.
27 odulated at slow rates (4 Hz) during passive listening.
28 ubject correlation of brain responses during listening.
29 ubject correlation of brain responses during listening.
30 ctly from EEG dynamics measured during music listening.
31           In contrast, P2 became larger when listening after sound making compared with the initial n
32 ed in controlling attention during selective listening, allowing for a better cortical tracking of th
33 d with the other groups, in their ability to listen and talk to their children, who as a group report
34                            Spatial selective listening and auditory choice underlie important process
35 ed the spectrum of the neural activity while listening and compared it to the modulation spectrum of
36  classification accuracy was obtained in the listening and overt speech conditions (mean = 89% and 86
37          We find that semantic tuning during listening and reading are highly correlated in most sema
38 ditory and sensorimotor brain regions during listening and speaking.
39 tation time, in-depth specialised knowledge, listening and understanding to patients' needs, and a ho
40 form human-like activities including seeing, listening, and speaking.
41 used to map brain activity while the animals listened attentively to the sound sequences they had lea
42 cs plays an important role in the changes in listening behavior that occur with age.
43 n terms of musical structure and in terms of listening behavior, yet little is known about how engage
44 e of the midbrain SC of the awake, passively listening big brown bat, Eptesicus fuscus.
45 rimotor cortices, was stronger in the second listening block.
46 eviously acquired BOLD activity during music listening, but not for a control monetary reward task in
47 uential fMRI-based study of sentence reading/listening by Pinel et al. (2012), who reported that comm
48 nship offsets age-related declines in speech listening by refining the hierarchical interplay between
49 eners so that central effects in multisource listening can be examined.
50        Successful behavioral adaptation to a listening challenge often requires stronger engagement o
51            However, users who prefer DNR for listening comfort are not likely to jeopardize their abi
52 both perspectives are related to "Auditory" (listening, communicating, and speaking), "Social" (relat
53 ntence recognition test under the best aided listening condition may be considered as candidates for
54 However, the requirement for 'the best aided listening condition' needs significant time and clinical
55  responses persisted even during the passive listening condition, demonstrating memory for task categ
56                       We investigate passive listening conditions (very short duration stimulus not p
57 ther that information is used in challenging listening conditions remains unknown.
58 significantly greater in noisy than in quiet listening conditions, consistent with the principle of i
59                             Under nonspatial listening conditions, cortical sensitivity to ITD and IL
60 c cues of speech TFS in both quiet and noisy listening conditions.
61  hearing aids for speech processing in noisy listening conditions.
62 ssary feature for successful hearing in many listening conditions.
63  short- and long-term adaptations to varying listening conditions.
64     We do this under both active and passive listening conditions.
65                Listeners were tested in four listening conditions: unprocessed, vocoder, low-pass, an
66 task demands, but not during passive reading/listening, conditions that strongly activate the frontot
67 (3) sham mindfulness meditation, or (4) book-listening control intervention.
68 ) perception is a critical aspect of natural listening, deficits in which are a major contributor to
69 ive sensing, we present here a single-sensor listening device that separates simultaneous overlapping
70 amage and central compensation could explain listening difficulties despite normal hearing thresholds
71  Critically, despite increased self-reported listening difficulties, cortical synchronization to spee
72 f these two types of information in everyday listening (e.g., conversing in a noisy social situation;
73 ers of cochlear implants (CIs) self-reported listening effort during a speech-in-noise task that was
74 prehension, and memory, leading to increased listening effort during speech comprehension.
75   Semantic context led to rapid reduction of listening effort for people with normal hearing; the red
76 ot be attributed to epiphenomenal changes in listening effort that accompany enhanced perception.
77 ral fine structure processing, pupil-indexed listening effort, and behavioral FM thresholds accounted
78  both speech perception ability in noise and listening effort.
79 drivers engaged in a driving simulator while listening either to global positioning system instructio
80 ing speech despite the fact that our natural listening environment is often filled with interference.
81  of cognitive reserve due to an impoverished listening environment, and the occupation of cognitive r
82 ifficulty understanding speech in real-world listening environments (e.g., restaurants), even with am
83 se sound processing, particularly in complex listening environments that place high demands on brain
84 may support speech processing in challenging listening environments, and that this infrastructure is
85 tion is especially pronounced in challenging listening environments.
86 ety-four unilateral CI patients with bimodal listening experience (CI plus HA in contralateral ear) c
87 rty nightmare for echolocating bats: as bats listen for the faint returning echoes of their loud call
88 to measure brain activity while participants listened for short silences that interrupted ongoing noi
89 respectful care for the deceased and family, listening for and addressing family concerns, and an att
90 rved in polar angle judgement for the active listening group.
91 anguage lateralization measured via dichotic listening, handedness and footedness were assessed.
92 ignificantly better for EAS than for bimodal listening; IE was sensitive to tonotopic mismatch for EA
93                                              Listening in a noisy environment is challenging for indi
94                                              Listening in challenging situations, or when the auditor
95  Figure-ground segregation is fundamental to listening in complex acoustic environments.
96  in speech perception, music perception, and listening in complex acoustic environments.
97 d the occupation of cognitive resources when listening in difficult conditions.
98 vious studies have investigated speaking and listening in isolation, this study focuses on the behavi
99 (SNHL) often experience more difficulty with listening in multisource environments than do normal-hea
100 on constitutes one of the building blocks of listening in natural environments, its neural bases rema
101 dition that combined natural and beamforming listening in order to preserve localization for broadban
102  locate and segregate sounds, which can make listening in schools, cafes, and busy workplaces extreme
103 y, non-speech orofacial movements and speech listening, in a cohort of 27 patients implanted with pen
104 pleted, on average, 85 hours (IQR=70-100) of Listen-In (therapy first, n=18).
105 prehension impairments completed consecutive Listen-In and standard care blocks (both 12 weeks with o
106                                              Listen-In will soon be available on GooglePlay.
107 ted the efficacy of a self-led therapy app, 'Listen-In', and examined the relation between brain stru
108 re and focus before greeting a patient); (2) listen intently and completely (sit down, lean forward,
109 ay practice controlling two languages during listening is likely to explain previously observed bilin
110  processing levels interact during selective listening is not understood.
111  echolocation.SIGNIFICANCE STATEMENT Passive listening is the predominant method for examining brain
112 and frontoparietal activity during selective listening is, however, not understood.
113              This formulation suggests that 'listening' is a more active process than traditionally c
114 y weak positive correlation between dichotic listening lateralization quotients (LQs) and handedness
115                               Active spatial listening (location tasks) enhanced both contralateral a
116  of colaughter between friends and strangers listened longer to colaughter between friends.
117 emales with males exhibiting higher dichotic listening LQs indicating more left-hemispheric language
118 edness LQs were not correlated with dichotic listening LQs, but individuals with atypical language la
119 T) and auditory thalamus of awake, passively-listening marmosets.
120  users' speech and music perception, bimodal listening may partially compensate for these deficits.
121 ast temporal processing and cognitive active listening mechanisms.
122 understand the lessons from these trials and listen more carefully to what truly matters to our patie
123 ctional connectivity between the driving and listening networks.
124 e benefits of the VGHA over natural binaural listening observed in the fixed condition were reduced i
125 evious work has shown that, during selective listening, ongoing neural activity in auditory sensory a
126 ctly from the human brain typically consider listening or speaking tasks in isolation.
127                              Using a passive listening paradigm and multivariate decoding of single-t
128                                In a dichotic listening paradigm, participants attended to a narrative
129                                         When listening passively to speech, high synchronizers show i
130  were focused on the scenes relative to when listening passively, consistent with the notion that att
131 st that music perception is an active act of listening, providing an irresistible epistemic offering.
132 showed more face orienting during periods of listening relative to speaking, and during the introduct
133  on the efficacy of the training, but active listening resulted in a significantly greater improvemen
134 nversations in an ongoing, spatially dynamic listening scenario.
135 erent voices and locations to create dynamic listening scenarios.
136                                              Listening selectively to one out of several competing sp
137 ttended speech stream.SIGNIFICANCE STATEMENT Listening selectively to one out of several simultaneous
138 ts' self-reports collected during an initial listening session.
139                  In contrast, during nontask listening sessions, cortical improvements were weak and
140 of auditory spatial attention in challenging listening situations.
141  speech comprehension, especially in adverse listening situations.
142 erve as an effective means to bolster speech listening skills that decline across the lifespan.
143                                  In everyday listening, sound reaches our ears directly from a source
144 ent of the supratemporal plane during rhythm listening, speech perception, and speech production.
145 f a large commercial vessel, <10 km from the listening station, the communication space of both speci
146  recorded by two land-based passive acoustic listening stations (PALS) deployed in Sarasota Bay, Flor
147 d immediately, while in previous learning-by-listening studies P2 increases occurred on a later day.
148 ative to resting state predicts individuals' listening success in states of divided and selective att
149 ree to which they adapt behaviorally and can listen successfully under such circumstances.
150     Event-related potentials during the talk/listen task were obtained before infusion and during inf
151 guration of brain networks for a challenging listening task (i.e., a linguistic variant of the Posner
152  decision process, in this spatial selective listening task AC neural activity represents the sensory
153  correlates of an auditory spatial selective listening task by recording single-neuron activity in be
154 re significantly different during a language listening task compared to during sleep, HR infants' mov
155                                          The listening task featured predictable or unpredictable sen
156  and twenty adults (18-28 years) completed a listening task to determine auditory discrimination abil
157 oncurrent EEG-fMRI and a sustained selective listening task, in which one out of two competing speech
158                     Afterward, in a dichotic listening task, participants were cued to direct attenti
159                          During the dichotic listening task, presentation of fear-conditioned stimuli
160                           Using a continuous listening task, we evaluated the coupling between the li
161 ex (A1) while mice were engaged in an active listening task.
162 different hearing devices, test stimuli, and listening tasks may interact and obscure bimodal benefit
163 stency in activation across both reading and listening tasks revealed a mostly left-hemispheric corti
164 red using fMRI during spatial and nonspatial listening tasks.
165  function has been investigated with various listening tasks.
166 ot only what to do, but which inner voice to listen to - our 'automatic' response system, which some
167 ce of anticipatory reinstatement as subjects listen to a familiar narrative.
168                 During fMRI, we had subjects listen to a real-life auditory narrative and to temporal
169                                     After we listen to a series of words, we can silently replay them
170 cted attention we instructed participants to listen to a short story coming from one of these speaker
171          By 5 months, infants preferentially listen to colaughter between friends and detect when col
172 nerator with block sizes of four and six, to listen to either the Informed Health Choices podcast (in
173 appointments to receive linguistic feedback, listen to language input in their own recordings, and di
174 o are habitually active or wakeful at night) listen to less-intense music.
175 ere differences in baselines: younger people listen to more intense music; compared with other region
176                                  Individuals listen to more relaxing music late at night and more ene
177                                  When people listen to one person in a "cocktail party," their audito
178                                  When people listen to speech, electrophysiological oscillations in a
179                                      When we listen to speech, we have to make sense of a waveform of
180                                 You can also listen to the associated interview with Debbie Sweet, Ed
181 indings suggest that humans might literally 'listen to their heart' to guide their altruistic behavio
182                          When juvenile birds listen to their tutor, NIf neurons are also activated at
183 ptively pick the subgroup of neighbors they 'listen to' to determine their own behavior.
184                               Human subjects listened to 'scenes' comprised of concurrent tone-pip st
185  1,591) and Chinese (n = 1,258) participants listened to 2,168 music samples and reported on the spec
186 EG recordings as female and male individuals listened to 30 s sequences of complex syncopated drumbea
187 of a dataset of 24 young, healthy humans who listened to a 1 h narrative while their magnetoencephalo
188                                 Participants listened to a stream of sounds and pressed a button ever
189                           Then, participants listened to an auditory narrative describing the crime i
190 we recorded cortical signals as participants listened to Bach melodies.
191 rom 20 right-handed healthy adult humans who listened to five different recorded stories (attended sp
192 nal MRI (fMRI) data collected while subjects listened to hours of narrative stories.
193 frontal cortical (PFC) activities while they listened to infant and adult vocalizations in two condit
194 y activities around 20 Hz while participants listened to metronome beats and imagined musical meters
195 primary AC in neurosurgical patients as they listened to multi-talker speech.
196 phalography data recorded while participants listened to music of varying note rates.
197 corded simultaneously while the participants listened to music that had been specifically generated t
198 ield potential recordings while participants listened to natural continuous speech.
199 ecorded in an fMRI experiment while subjects listened to natural speech.
200 e 29 adult native speakers (22 women, 7 men) listened to naturally spoken Dutch sentences, jabberwock
201  two separate experiments while participants listened to or read several hours of the same narrative
202   Ten human subjects (four female, six male) listened to pairs of tone triplets varying in pitch, tim
203                     Here, human participants listened to questions and responded aloud with answers w
204 investigate neuronal activity while subjects listened to radio news played faster and faster until be
205 ded MEG while 24 human subjects (12 females) listened to radio news uttered at different comprehensib
206  this study, EEG was recorded while subjects listened to rhythmic sequences.
207 ly from the brain surface while participants listened to sentences that varied in intonational pitch
208 l's gyrus, and were suppressed when subjects listened to sentences.
209 ht neurosurgical patients (5 male; 3 female) listened to sequences of repeated triplets where tones w
210 nt in which five men and two women passively listened to several hours of natural narrative speech.
211              Participants (N = 10) passively listened to snippets (750 ms) of a familiar, personally
212 etwork (DMN) of the brain while participants listened to sounds from artificial and natural environme
213 n MEG data obtained while human participants listened to speech of varying acoustic SNR and visual co
214 ical recordings while participants spoke and listened to speech sounds.
215 tex (superior temporal gyrus) while subjects listened to speech syllables.
216 uditory cortex of six human subjects as they listened to speech with abruptly changing background noi
217 ory cortex of neurosurgical patients as they listened to speech, this approach significantly improves
218 stening version in which participants simply listened to spoken sentences and an explicit task versio
219 collecting MEG data while human participants listened to spoken words.
220                In Experiment 2, participants listened to spontaneous and posed laughs, and either inf
221 ed children and adolescents (4-17 years old) listened to stories and two auditory control conditions
222 regions to these areas emerged when newborns listened to the familiar word in the test phase.
223  during speaking compared with when subjects listened to the playback of their own voice.
224 ve human participants (both sexes) passively listened to these signals while performing a visual atte
225 d blindfolded sighted participants passively listened to three audio-movie clips, an auditory narrati
226                                     Patients listened to trains of task-irrelevant tones in two condi
227 stress intensity while participants (n = 66) listened to true biographies describing human suffering.
228 troencephalograms were recorded while humans listened to two spoken digits against a distracting talk
229 e male and female human subjects watched and listened to videos of a speaker uttering consonant vowel
230 ts with unilateral amygdala resection either listened to voices and nonvocal sounds or heard binaural
231 efulness and during sleep in normal subjects listening to a hierarchical auditory paradigm including
232  neurons recorded in zebra finches that were listening to a large set of call stimuli sampled from th
233                                        After listening to a radio story in the scanner, participants
234 elope and lip movements (mouth opening) when listening to a spoken story without visual input (audio-
235                          This may be because listening to a story, rather than watching a video, is a
236                  Humans excel at selectively listening to a target speaker in background noise such a
237                    We can learn new tasks by listening to a teacher, but we can also learn by trial-a
238 with rhythmic brain activity in participants listening to and seeing the speaker.
239 s have shown that watching visual motion and listening to auditory motion influence each other, but r
240 r involvement for watching video relative to listening to auditory scenes, stronger physiological res
241 c resonance imaging were used during passive listening to brief, 95-dB sound pressure level, white no
242                           In contrast, those listening to Cantonese, a language that differs consider
243                   In normal-hearing subjects listening to CI simulated audio, we showed that particip
244                        We found that, during listening to connected speech, cortical activity of diff
245 om human participants (both male and female) listening to continuous natural speech and find that the
246 netic resonance imaging scan while passively listening to degraded speech ('sine-wave' speech), that
247 nferior frontal gyrus (IFG) was reduced when listening to excerpts with alterations in both domains c
248                                      Infants listening to German, a nonnative language that shares ke
249 ersus scrambled biological motion, and while listening to happy versus angry voices.
250 al differences in speech reception solely by listening to individuals' brain activity.
251 ing patterns found previously during passive listening to isochronous beats.
252 mance, changed beta power modulations during listening to isochronous beats.
253               Acoustic overexposure, such as listening to loud music too often, results in noise-indu
254  popular music suggests that the pleasure of listening to music is linked to two characteristic inter
255                                              Listening to music often evokes intense emotions [1, 2].
256                                         When listening to music we constantly generate plausible hypo
257  multiple ineffective countermeasures (e.g., listening to music) and effective countermeasures (e.g.,
258                                         When listening to music, humans can easily identify and move
259 ent fasting conditions; (4) warming-up while listening to music; or (5) prolonged periods of training
260 c resonance imaging data from human subjects listening to natural stories.
261                                 We find that listening to naturalistic audio movies and narrative dri
262  and functional coupling within the DMN when listening to naturalistic sounds.
263             Humans are remarkably skilled at listening to one speaker out of an acoustic mixture of s
264 e culture in which they are being raised, by listening to other people.
265 ce to switch between two contexts: passively listening to pure tones and performing a recognition tas
266 halographic recordings while first passively listening to recorded sounds of a bell ringing, then act
267 iking the bell with a mallet, and then again listening to recorded sounds.
268 human subjects (102 males) either reading or listening to sentences.
269 tion was measured in normal-hearing subjects listening to simulations of unimodal, EAS, and bimodal l
270 the brain activity of male and female humans listening to sounds moving left, right, up, and down as
271               Humans are good at selectively listening to specific target conversations, even in the
272  but after the onset of articulation, during listening to speech and during production of non-speech
273                                   In humans, listening to speech evokes neural responses in the motor
274                   Even when the only task is listening to speech stimuli, participants should be aske
275 and print for sighted participants), and (2) listening to spoken sentences of different grammatical c
276 namics of this representation during passive listening to task-relevant stimuli and during active ret
277          The primary outcome, measured after listening to the entire podcast, was the mean score and
278 on detection task and while they sat quietly listening to the identical stimuli.
279                              INTERPRETATION: Listening to the Informed Health Choices podcast led to
280 mal modeling, relating the pleasure of music listening to the intrinsic reward of learning.SIGNIFICAN
281 ities, such as driving on a quiet road while listening to the radio.
282 with those elicited by the subjects directly listening to the same music.
283 oss different people (n = 45, men and women) listening to the same story.
284  activated auditory midbrain and cortex, but listening to the sequences that were learned by self-pro
285 e interspersed to ascertain the monkeys were listening to the sounds they produced.
286                   When subjects are actively listening to the stimuli, these responses are larger and
287                                        While listening to this music, participants also continuously
288 his link is not exclusive to human language: listening to vocalizations of nonhuman primates also sup
289 sess the brainstem's activity when a subject listens to one of two competing speakers, and show that
290  articulatory representations during passive listening using carefully controlled stimuli (spoken syl
291 ing two versions of an experiment: a natural listening version in which participants simply listened
292 plex, the semantic representations evoked by listening versus reading are almost identical.
293  representations of information perceived by listening versus reading is unclear.
294 nterventions were diverse and included music listening, visual arts, reading and creative writing, an
295 tructure of responses in motor cortex during listening was organized along acoustic features similar
296  activity of human subjects engaged in music listening, we measured the dynamics of information proce
297                                       During listening, we observed neural activity in the superior a
298                               During passive listening, we recorded neural activity in primary audito
299          Motor cortex neural patterns during listening were substantially different than during artic
300 ng vocalization (talk) and passive playback (listen) were compared to assess the degree of N1 suppres
