Corpus search results (sorted by the word one position after the keyword)
Click a serial number to open the corresponding PubMed page.
1 nce of relative sound source motion ("active listening").
2 ulation, or EAS) and/or across ears (bimodal listening).
3 participants (n = 27) were engaged in music listening.
4 in the context of natural, narrative speech listening.
5 oreground objects like speech during natural listening.
6 ediction that are common to both reading and listening.
7 on-amusic patients during instrumental music listening.
8 spatial cues are modulated by active spatial listening.
9 activate the currently heard language during listening.
10 sound making compared with the initial naive listening.
11 nterpretation of the sound during subsequent listening.
12 auditory functional topography during active listening.
13 otopic mismatch for EAS, but not for bimodal listening.
14 to simulations of unimodal, EAS, and bimodal listening.
15 versational contexts both while speaking and listening.
16 te factors that affect IE in EAS and bimodal listening.
17 frontoparietal brain areas during selective listening.
18 nd the to-be-ignored stream during selective listening.
19 own about their interaction during selective listening.
20 n they actively vocalize than during passive listening.
21 auditory-visual cues modulate this selective listening.
22 mporal integration that occur during passive listening.
23 in speech production encountered in everyday listening.
24 cy between imagined speech, overt speech and listening.
25 onset expectations during naturalistic music listening.
26 erated within auditory cortex during passive listening.
27 odulated at slow rates (4 Hz) during passive listening.
28 ubject correlation of brain responses during listening.
29 ubject correlation of brain responses during listening.
30 ctly from EEG dynamics measured during music listening.
32 ed in controlling attention during selective listening, allowing for a better cortical tracking of th
33 d with the other groups, in their ability to listen and talk to their children, who as a group report
35 ed the spectrum of the neural activity while listening and compared it to the modulation spectrum of
36 classification accuracy was obtained in the listening and overt speech conditions (mean = 89% and 86
39 tation time, in-depth specialised knowledge, listening and understanding to patients' needs, and a ho
41 used to map brain activity while the animals listened attentively to the sound sequences they had lea
43 n terms of musical structure and in terms of listening behavior, yet little is known about how engage
46 eviously acquired BOLD activity during music listening, but not for a control monetary reward task in
47 uential fMRI-based study of sentence reading/listening by Pinel et al. (2012), who reported that comm
48 nship offsets age-related declines in speech listening by refining the hierarchical interplay between
52 both perspectives are related to "Auditory" (listening, communicating, and speaking), "Social" (relat
53 ntence recognition test under the best aided listening condition may be considered as candidates for
54 However, the requirement for 'the best aided listening condition' needs significant time and clinical
55 responses persisted even during the passive listening condition, demonstrating memory for task categ
58 significantly greater in noisy than in quiet listening conditions, consistent with the principle of i
66 task demands, but not during passive reading/listening, conditions that strongly activate the frontot
68 ) perception is a critical aspect of natural listening, deficits in which are a major contributor to
69 ive sensing, we present here a single-sensor listening device that separates simultaneous overlapping
70 amage and central compensation could explain listening difficulties despite normal hearing thresholds
71 Critically, despite increased self-reported listening difficulties, cortical synchronization to spee
72 f these two types of information in everyday listening (e.g., conversing in a noisy social situation;
73 ers of cochlear implants (CIs) self-reported listening effort during a speech-in-noise task that was
75 Semantic context led to rapid reduction of listening effort for people with normal hearing; the red
76 ot be attributed to epiphenomenal changes in listening effort that accompany enhanced perception.
77 ral fine structure processing, pupil-indexed listening effort, and behavioral FM thresholds accounted
79 drivers engaged in a driving simulator while listening either to global positioning system instructio
80 ing speech despite the fact that our natural listening environment is often filled with interference.
81 of cognitive reserve due to an impoverished listening environment, and the occupation of cognitive r
82 ifficulty understanding speech in real-world listening environments (e.g., restaurants), even with am
83 se sound processing, particularly in complex listening environments that place high demands on brain
84 may support speech processing in challenging listening environments, and that this infrastructure is
86 ety-four unilateral CI patients with bimodal listening experience (CI plus HA in contralateral ear) c
87 rty nightmare for echolocating bats: as bats listen for the faint returning echoes of their loud call
88 to measure brain activity while participants listened for short silences that interrupted ongoing noi
89 respectful care for the deceased and family, listening for and addressing family concerns, and an att
92 ignificantly better for EAS than for bimodal listening; IE was sensitive to tonotopic mismatch for EA
98 vious studies have investigated speaking and listening in isolation, this study focuses on the behavi
99 (SNHL) often experience more difficulty with listening in multisource environments than do normal-hea
100 on constitutes one of the building blocks of listening in natural environments, its neural bases rema
101 dition that combined natural and beamforming listening in order to preserve localization for broadban
102 locate and segregate sounds, which can make listening in schools, cafes, and busy workplaces extreme
103 y, non-speech orofacial movements and speech listening, in a cohort of 27 patients implanted with pen
105 prehension impairments completed consecutive Listen-In and standard care blocks (both 12 weeks with o
107 ted the efficacy of a self-led therapy app, 'Listen-In', and examined the relation between brain stru
108 re and focus before greeting a patient); (2) listen intently and completely (sit down, lean forward,
109 ay practice controlling two languages during listening is likely to explain previously observed bilin
111 echolocation.SIGNIFICANCE STATEMENT Passive listening is the predominant method for examining brain
114 y weak positive correlation between dichotic listening lateralization quotients (LQs) and handedness
117 emales with males exhibiting higher dichotic listening LQs indicating more left-hemispheric language
118 edness LQs were not correlated with dichotic listening LQs, but individuals with atypical language la
120 users' speech and music perception, bimodal listening may partially compensate for these deficits.
122 understand the lessons from these trials and listen more carefully to what truly matters to our patie
124 e benefits of the VGHA over natural binaural listening observed in the fixed condition were reduced i
125 evious work has shown that, during selective listening, ongoing neural activity in auditory sensory a
130 were focused on the scenes relative to when listening passively, consistent with the notion that att
131 st that music perception is an active act of listening, providing an irresistible epistemic offering.
132 showed more face orienting during periods of listening relative to speaking, and during the introduct
133 on the efficacy of the training, but active listening resulted in a significantly greater improvemen
137 ttended speech stream.SIGNIFICANCE STATEMENT Listening selectively to one out of several simultaneous
144 ent of the supratemporal plane during rhythm listening, speech perception, and speech production.
145 f a large commercial vessel, <10 km from the listening station, the communication space of both speci
146 recorded by two land-based passive acoustic listening stations (PALS) deployed in Sarasota Bay, Flor
147 d immediately, while in previous learning-by-listening studies P2 increases occurred on a later day.
148 ative to resting state predicts individuals' listening success in states of divided and selective att
150 Event-related potentials during the talk/listen task were obtained before infusion and during inf
151 guration of brain networks for a challenging listening task (i.e., a linguistic variant of the Posner
152 decision process, in this spatial selective listening task AC neural activity represents the sensory
153 correlates of an auditory spatial selective listening task by recording single-neuron activity in be
154 re significantly different during a language listening task compared to during sleep, HR infants' mov
156 and twenty adults (18-28 years) completed a listening task to determine auditory discrimination abil
157 oncurrent EEG-fMRI and a sustained selective listening task, in which one out of two competing speech
162 different hearing devices, test stimuli, and listening tasks may interact and obscure bimodal benefit
163 stency in activation across both reading and listening tasks revealed a mostly left-hemispheric corti
166 ot only what to do, but which inner voice to listen to - our 'automatic' response system, which some
170 cted attention we instructed participants to listen to a short story coming from one of these speaker
172 nerator with block sizes of four and six, to listen to either the Informed Health Choices podcast (in
173 appointments to receive linguistic feedback, listen to language input in their own recordings, and di
175 ere differences in baselines: younger people listen to more intense music; compared with other region
181 indings suggest that humans might literally 'listen to their heart' to guide their altruistic behavio
185 1,591) and Chinese (n = 1,258) participants listened to 2,168 music samples and reported on the spec
186 EG recordings as female and male individuals listened to 30 s sequences of complex syncopated drumbea
187 of a dataset of 24 young, healthy humans who listened to a 1 h narrative while their magnetoencephalo
191 rom 20 right-handed healthy adult humans who listened to five different recorded stories (attended sp
193 frontal cortical (PFC) activities while they listened to infant and adult vocalizations in two condit
194 y activities around 20 Hz while participants listened to metronome beats and imagined musical meters
197 corded simultaneously while the participants listened to music that had been specifically generated t
200 e 29 adult native speakers (22 women, 7 men) listened to naturally spoken Dutch sentences, jabberwock
201 two separate experiments while participants listened to or read several hours of the same narrative
202 Ten human subjects (four female, six male) listened to pairs of tone triplets varying in pitch, tim
204 investigate neuronal activity while subjects listened to radio news played faster and faster until be
205 ded MEG while 24 human subjects (12 females) listened to radio news uttered at different comprehensib
207 ly from the brain surface while participants listened to sentences that varied in intonational pitch
209 ht neurosurgical patients (5 male; 3 female) listened to sequences of repeated triplets where tones w
210 nt in which five men and two women passively listened to several hours of natural narrative speech.
212 etwork (DMN) of the brain while participants listened to sounds from artificial and natural environme
213 n MEG data obtained while human participants listened to speech of varying acoustic SNR and visual co
216 uditory cortex of six human subjects as they listened to speech with abruptly changing background noi
217 ory cortex of neurosurgical patients as they listened to speech, this approach significantly improves
218 stening version in which participants simply listened to spoken sentences and an explicit task versio
221 ed children and adolescents (4-17 years old) listened to stories and two auditory control conditions
224 ve human participants (both sexes) passively listened to these signals while performing a visual atte
225 d blindfolded sighted participants passively listened to three audio-movie clips, an auditory narrati
227 stress intensity while participants (n = 66) listened to true biographies describing human suffering.
228 troencephalograms were recorded while humans listened to two spoken digits against a distracting talk
229 e male and female human subjects watched and listened to videos of a speaker uttering consonant vowel
230 ts with unilateral amygdala resection either listened to voices and nonvocal sounds or heard binaural
231 efulness and during sleep in normal subjects listening to a hierarchical auditory paradigm including
232 neurons recorded in zebra finches that were listening to a large set of call stimuli sampled from th
234 elope and lip movements (mouth opening) when listening to a spoken story without visual input (audio-
239 s have shown that watching visual motion and listening to auditory motion influence each other, but r
240 r involvement for watching video relative to listening to auditory scenes, stronger physiological res
241 c resonance imaging were used during passive listening to brief, 95-dB sound pressure level, white no
245 om human participants (both male and female) listening to continuous natural speech and find that the
246 netic resonance imaging scan while passively listening to degraded speech ('sine-wave' speech), that
247 nferior frontal gyrus (IFG) was reduced when listening to excerpts with alterations in both domains c
254 popular music suggests that the pleasure of listening to music is linked to two characteristic inter
257 multiple ineffective countermeasures (e.g., listening to music) and effective countermeasures (e.g.,
259 ent fasting conditions; (4) warming-up while listening to music; or (5) prolonged periods of training
265 ce to switch between two contexts: passively listening to pure tones and performing a recognition tas
266 halographic recordings while first passively listening to recorded sounds of a bell ringing, then act
269 tion was measured in normal-hearing subjects listening to simulations of unimodal, EAS, and bimodal l
270 the brain activity of male and female humans listening to sounds moving left, right, up, and down as
272 but after the onset of articulation, during listening to speech and during production of non-speech
275 and print for sighted participants), and (2) listening to spoken sentences of different grammatical c
276 namics of this representation during passive listening to task-relevant stimuli and during active ret
280 mal modeling, relating the pleasure of music listening to the intrinsic reward of learning.SIGNIFICAN
284 activated auditory midbrain and cortex, but listening to the sequences that were learned by self-pro
288 his link is not exclusive to human language: listening to vocalizations of nonhuman primates also sup
289 sess the brainstem's activity when a subject listens to one of two competing speakers, and show that
290 articulatory representations during passive listening using carefully controlled stimuli (spoken syl
291 ing two versions of an experiment: a natural listening version in which participants simply listened
294 nterventions were diverse and included music listening, visual arts, reading and creative writing, an
295 tructure of responses in motor cortex during listening was organized along acoustic features similar
296 activity of human subjects engaged in music listening, we measured the dynamics of information proce
300 ng vocalization (talk) and passive playback (listen) were compared to assess the degree of N1 suppres