Corpus search results (sorted by the word one position after the keyword)

Click a line's serial number to open the corresponding PubMed page.
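The header above describes the KWIC (keyword-in-context) convention used on this page: hits are ordered alphabetically by the word one position after the keyword. Purely as an illustration of that sorting rule, here is a minimal Python sketch; it is not WebLSD's actual implementation, and the sample lines (abbreviated from the results below) and the helper name sort_key are illustrative only.

    # Minimal sketch, not WebLSD's code: order KWIC hits by the first
    # word following the keyword, matching the "sorted by the word one
    # position after" convention named in the page header.

    def sort_key(line: str, keyword: str = "listening") -> str:
        """Return the word immediately after the keyword, lowercased."""
        _, _, right = line.lower().partition(keyword)
        words = right.split()
        return words[0] if words else ""

    hits = [
        "Recognizing speech in difficult listening conditions requires focus",
        "Assistive listening devices can improve the neural representation",
        "greater attentional selection during active listening.",
    ]

    # Prints the hits ordered by the word after "listening":
    # ".", then "conditions", then "devices".
    for hit in sorted(hits, key=sort_key):
        print(hit)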
1 only during articulation (not during passive listening).
2 ulation, or EAS) and/or across ears (bimodal listening).
3 ng a predictable time window during which to listen.
4 in speech production encountered in everyday listening.
5 cy between imagined speech, overt speech and listening.
6 activate the currently heard language during listening.
7  greater attentional selection during active listening.
8 g vocalization, compared with during passive listening.
9 er auditory perceptual performance for music listening.
10 ry stimuli while participants were passively listening.
11 n of resting-state connectivity during music-listening.
12 sound making compared with the initial naive listening.
13 on-amusic patients during instrumental music listening.
14 nterpretation of the sound during subsequent listening.
15 auditory functional topography during active listening.
16 otopic mismatch for EAS, but not for bimodal listening.
17 to simulations of unimodal, EAS, and bimodal listening.
18 versational contexts both while speaking and listening.
19 te factors that affect IE in EAS and bimodal listening.
20  frontoparietal brain areas during selective listening.
21 nd the to-be-ignored stream during selective listening.
22 spatial cues are modulated by active spatial listening.
23 own about their interaction during selective listening.
24 n they actively vocalize than during passive listening.
25 auditory-visual cues modulate this selective listening.
26  quiet conditions, both CI alone and bimodal listening achieved the largest benefits when telephone s
27           In contrast, P2 became larger when listening after sound making compared with the initial n
28 ed in controlling attention during selective listening, allowing for a better cortical tracking of th
29 d with the other groups, in their ability to listen and talk to their children, who as a group report
30 ed the spectrum of the neural activity while listening and compared it to the modulation spectrum of
31  classification accuracy was obtained in the listening and overt speech conditions (mean = 89% and 86
32 tation time, in-depth specialised knowledge, listening and understanding to patients' needs, and a ho
33                                          The listening application notified physicians in real time w
34 eech recognition is remarkably robust to the listening background, even when the energy of background
35 cs plays an important role in the changes in listening behavior that occur with age.
36 e of the midbrain SC of the awake, passively listening big brown bat, Eptesicus fuscus.
37 rimotor cortices, was stronger in the second listening block.
38 nship offsets age-related declines in speech listening by refining the hierarchical interplay between
39 eners so that central effects in multisource listening can be examined.
40               These factors include off-site listening, charge interactions between masker and probe
41            However, users who prefer DNR for listening comfort are not likely to jeopardize their abi
42 both perspectives are related to "Auditory" (listening, communicating, and speaking), "Social" (relat
43  same recordings of their auditory feedback (listen condition).
44 ntence recognition test under the best aided listening condition may be considered as candidates for
45 However, the requirement for 'the best aided listening condition' needs significant time and clinical
46 ilized to different extents depending on the listening conditions (e.g., full spectrum vs. spectrally
47 r difficulty understanding speech in adverse listening conditions and exhibit degraded temporal resol
48 ligibility performance in a range of adverse listening conditions and hearing impairments, including
49 n SRTs on the order of 6.5 to 11.0 dB across listening conditions compared with the omnidirectional r
50 anisms that subserve coping with challenging listening conditions for speech and non-speech.
51 e behaviorally relevant signal in a range of listening conditions have yet to be discovered.
52 nate Response Measure for six CI users in 27 listening conditions including each combination of rever
53  under such acoustically complex and adverse listening conditions is not known, and, indeed, it is no
54 dies have looked at the effects of different listening conditions on the intelligibility of speech, t
55              Recognizing speech in difficult listening conditions requires considerable focus of atte
56 slope changes across studies and to identify listening conditions that affect the slope of the psycho
57 ts, bimodal users were presented under quiet listening conditions with wideband speech (WB), bandpass
58                                Under adverse listening conditions, better discriminative activity in
59 significantly greater in noisy than in quiet listening conditions, consistent with the principle of i
60                             Under nonspatial listening conditions, cortical sensitivity to ITD and IL
61  short- and long-term adaptations to varying listening conditions.
62     We do this under both active and passive listening conditions.
63 erstand continuous speech even under adverse listening conditions.
64  categorical speech perception under adverse listening conditions.
65  also supports word recognition in difficult listening conditions.
66 c cues of speech TFS in both quiet and noisy listening conditions.
67  hearing aids for speech processing in noisy listening conditions.
68 ssary feature for successful hearing in many listening conditions.
69                Listeners were tested in four listening conditions: unprocessed, vocoder, low-pass, an
70 (3) sham mindfulness meditation, or (4) book-listening control intervention.
71 ive sensing, we present here a single-sensor listening device that separates simultaneous overlapping
72                                    Assistive listening devices (classroom FM systems) may reduce audi
73                                    Assistive listening devices can improve the neural representation
74         To understand the neural dynamics of listening difficulties and according listening strategie
75 f these two types of information in everyday listening (e.g., conversing in a noisy social situation;
76 prehension, and memory, leading to increased listening effort during speech comprehension.
77   Semantic context led to rapid reduction of listening effort for people with normal hearing; the red
78 ot be attributed to epiphenomenal changes in listening effort that accompany enhanced perception.
79 erall alpha power were related to subjective listening effort.
80 drivers engaged in a driving simulator while listening either to global positioning system instructio
81 ing speech despite the fact that our natural listening environment is often filled with interference.
82 er, these results suggest that, in a complex listening environment, auditory cortex can selectively e
83  in signal-to-noise ratio depends greatly on listening environment.
84 ifficulty understanding speech in real-world listening environments (e.g., restaurants), even with am
85 se sound processing, particularly in complex listening environments that place high demands on brain
86 may support speech processing in challenging listening environments, and that this infrastructure is
87 tion is especially pronounced in challenging listening environments.
88  that mimics the challenges of many everyday listening environments.
89 t mechanism that promotes hearing in complex listening environments.
90 erstanding speech, especially in challenging listening environments.
91 ety-four unilateral CI patients with bimodal listening experience (CI plus HA in contralateral ear) c
92 speak their first words and without specific listening experience, sensorimotor information from the
93 to measure brain activity while participants listened for short silences that interrupted ongoing noi
94 respectful care for the deceased and family, listening for and addressing family concerns, and an att
95 visual temporal coherence alone on selective listening, free of linguistic confounds.
96                                        Music listening has been suggested to beneficially impact heal
97 eech stimuli to investigate 'cocktail-party' listening have focused on entrainment of cortical activi
98 ignificantly better for EAS than for bimodal listening; IE was sensitive to tonotopic mismatch for EA
99             Our findings indicate that music listening impacted the psychobiological stress system.
100  in speech perception, music perception, and listening in complex acoustic environments.
101 (SNHL) often experience more difficulty with listening in multisource environments than do normal-hea
102 on constitutes one of the building blocks of listening in natural environments, its neural bases rema
103 uency spatial cues in a manner that benefits listening in noisy or reverberant environments.
104 dition that combined natural and beamforming listening in order to preserve localization for broadban
105                       We conclude that music-listening is a valid condition under which the DMN can b
106                           In noisy settings, listening is aided by correlated dynamic visual cues gle
107 ay practice controlling two languages during listening is likely to explain previously observed bilin
108  processing levels interact during selective listening is not understood.
109  echolocation.SIGNIFICANCE STATEMENT Passive listening is the predominant method for examining brain
110 and frontoparietal activity during selective listening is, however, not understood.
111  an early effect of coherence during passive listening, lasting from approximately 115 to 185 ms post
112 tive forced-choice increment detection task, listening level was varied whilst contrast was held cons
113                               Active spatial listening (location tasks) enhanced both contralateral a
114  users' speech and music perception, bimodal listening may partially compensate for these deficits.
115 ctional connectivity between the driving and listening networks.
116 e benefits of the VGHA over natural binaural listening observed in the fixed condition were reduced i
117        No evidence was obtained for off-site listening of the type observed in acoustic hearing.
118 egression to investigate the effect of music-listening on RSNs and the DMN in particular.
119 evious work has shown that, during selective listening, ongoing neural activity in auditory sensory a
120 quential activity with regions involved with listening or reading words.
121                              Using a passive listening paradigm and multivariate decoding of single-t
122                             Using a dichotic listening paradigm in human participants, where we provi
123 netoencephalography, 12 young healthy adults listened passively to an isochronous auditory rhythm wit
124 Unexpectedly, STG sites in monkeys that were listening passively responded to tones with magnitudes c
125  were focused on the scenes relative to when listening passively, consistent with the notion that att
126  primates; within 3 mo, this initially broad listening preference is tuned specifically to human voca
127 l point, human vocalizations evoke more than listening preferences alone: they engender in infants a
128   Listening to sentences in the context of a listen-repeat task was expected to activate regions invo
129 te, and dolphins exhibit handedness in their listening response.
130 nversations in an ongoing, spatially dynamic listening scenario.
131 erent voices and locations to create dynamic listening scenarios.
132                                Animals often listen selectively for particular sounds, a strategy tha
133                                              Listening selectively to one out of several competing sp
134 ttended speech stream.SIGNIFICANCE STATEMENT Listening selectively to one out of several simultaneous
135                  In contrast, during nontask listening sessions, cortical improvements were weak and
136  when successfully adapting to a challenging listening situation.
137                              In many natural listening situations, meaningful sounds (e.g., speech) f
138  speech comprehension, especially in adverse listening situations.
139 t sound properties is essential for everyday listening situations.
140 erve as an effective means to bolster speech listening skills that decline across the lifespan.
141                                  In everyday listening, sound reaches our ears directly from a source
142 f a large commercial vessel, <10 km from the listening station, the communication space of both speci
143                 To simulate cochlear implant listening, stimuli were vocoded with two unique features
144 mics of listening difficulties and according listening strategies, we contrasted neural responses in
145 d immediately, while in previous learning-by-listening studies P2 increases occurred on a later day.
146 oped a new, easy-to-use test of attention in listening (TAIL) based on reaction time.
147     Event-related potentials during the talk/listen task were obtained before infusion and during inf
148 re significantly different during a language listening task compared to during sleep, HR infants' mov
149                                          The listening task featured predictable or unpredictable sen
150 ngs from the cortex of subjects engaged in a listening task with two simultaneous speakers, we demons
151 oncurrent EEG-fMRI and a sustained selective listening task, in which one out of two competing speech
152                           Using a continuous listening task, we evaluated the coupling between the li
153  healthy listeners performing a naturalistic listening task.
154 honeme category using a challenging dichotic listening task.
155 different hearing devices, test stimuli, and listening tasks may interact and obscure bimodal benefit
156 red using fMRI during spatial and nonspatial listening tasks.
157  function has been investigated with various listening tasks.
158                     Data are obtained from a listening test (N = 10) using linearly ramped increment-
159  features of auditory stimuli during passive listening; this preference for speech features was dimin
160 ce of anticipatory reinstatement as subjects listen to a familiar narrative.
161                 During fMRI, we had subjects listen to a real-life auditory narrative and to temporal
162 nerator with block sizes of four and six, to listen to either the Informed Health Choices podcast (in
163                                  When people listen to one person in a "cocktail party," their audito
164 al magnetic stimulation (TMS) while subjects listen to Reference/Probe sound pairs and perform either
165 ing echolocation, dolphin produce clicks and listen to returning echoes to determine the location and
166  regions in both hemispheres, while subjects listen to sentences, and show that information travels i
167                                  When people listen to speech, electrophysiological oscillations in a
168                                 You can also listen to the associated interview with Debbie Sweet, Ed
169                    You snap your fingers and listen to the room's response.
170 indings suggest that humans might literally 'listen to their heart' to guide their altruistic behavio
171                               Human subjects listened to 'scenes' comprised of concurrent tone-pip st
172 ds changed (p=0.009), and carers not feeling listened to (p=0.006).
173 ve English speakers) were scanned while they listened to 10 consonant-vowel syllables along the /ba/-
174 EG recordings as female and male individuals listened to 30 s sequences of complex syncopated drumbea
175 ral responses from human subjects who either listened to a 7 min spoken narrative or read a time-lock
176  and adults' eye gaze while they watched and listened to a female reciting a monologue either in thei
177  in veterinary surgeons' beliefs before they listened to a review of the evidence.
178 irectly from the cortex while human subjects listened to a spoken story.
179 eural signature of recognition when newborns listened to a test word that had the same vowel of a pre
180     Participants in the control arm (n = 80) listened to a verbal narrative describing CPR and the li
181 re less likely to opt for CPR than those who listened to a verbal narrative.
182                                 Participants listened to all versions and provided ratings based on a
183 halography (EEG) was recorded while subjects listened to auditory click trains presented at 20, 30, a
184 an infants from monolingual English settings listened to English and Spanish syllable contrasts.
185 rom 20 right-handed healthy adult humans who listened to five different recorded stories (attended sp
186                           Human participants listened to frequency-modulated sounds that varied over
187                                 Participants listened to frequent standard stimuli, which were inters
188 nal MRI (fMRI) data collected while subjects listened to hours of narrative stories.
189 y activities around 20 Hz while participants listened to metronome beats and imagined musical meters
190 oduced vocalizations on command or passively listened to monkey calls.
191                    Patients in the PDM group listened to music for a mean (SD) of 79.8 (126) (median
192 ield potential recordings while participants listened to natural continuous speech.
193 ecorded in an fMRI experiment while subjects listened to natural speech.
194 ical surface recordings in humans while they listened to natural, continuous speech to reveal the STG
195           Patients in the intervention group listened to nature-based sounds through headphones; the
196 n to measure brain activity while volunteers listened to non-speech-affective vocalizations morphed o
197 ctions for two such positions, men and women listened to pairs of male and female voices that differe
198 oG was also recorded when subjects passively listened to playback of their own pitch-shifted vocaliza
199 investigate neuronal activity while subjects listened to radio news played faster and faster until be
200 ded MEG while 24 human subjects (12 females) listened to radio news uttered at different comprehensib
201                                Nine subjects listened to recordings of a speaker describing visual sc
202  (EEG) was recorded while human participants listened to rhythms consisting of short sounds alternati
203 ly from the brain surface while participants listened to sentences that varied in intonational pitch
204 nt in which five men and two women passively listened to several hours of natural narrative speech.
205 lthy controls (n = 22) and patients (n = 22) listened to short stories in which we manipulated global
206 etwork (DMN) of the brain while participants listened to sounds from artificial and natural environme
207  were investigated using fMRI while patients listened to speech and speech-like sounds.
208 n MEG data obtained while human participants listened to speech of varying acoustic SNR and visual co
209 ical recordings while participants spoke and listened to speech sounds.
210 tex (superior temporal gyrus) while subjects listened to speech syllables.
211 stening version in which participants simply listened to spoken sentences and an explicit task versio
212 collecting MEG data while human participants listened to spoken words.
213                In Experiment 2, participants listened to spontaneous and posed laughs, and either inf
214 as activated in the aphasic patients as they listened to standard (undistorted) sentences.
215 ed children and adolescents (4-17 years old) listened to stories and two auditory control conditions
216                                 Participants listened to temporally regular (periodic) or temporally
217 regions to these areas emerged when newborns listened to the familiar word in the test phase.
218 articipants in the intervention arm (n = 70) listened to the identical narrative and viewed a 3-minut
219  during speaking compared with when subjects listened to the playback of their own voice.
220  cortical responses collected while subjects listened to the same speech sounds (vowels /a/, /i/, and
221 movement of the tongue tip while the infants listened to the speech sounds.
222                                     Patients listened to trains of task-irrelevant tones in two condi
223 stress intensity while participants (n = 66) listened to true biographies describing human suffering.
224 troencephalograms were recorded while humans listened to two spoken digits against a distracting talk
225 e male and female human subjects watched and listened to videos of a speaker uttering consonant vowel
226 ts with unilateral amygdala resection either listened to voices and nonvocal sounds or heard binaural
227 lateral superior temporal cortex as subjects listened to words and nonwords with varying transition p
228 efulness and during sleep in normal subjects listening to a hierarchical auditory paradigm including
229  MEG, the neural responses of human subjects listening to a narrated story.
230                                        After listening to a radio story in the scanner, participants
231                      Whether seeing a movie, listening to a song, or feeling a breeze on the skin, we
232 stributed activation patterns during passive listening to a sound continuum before and after category
233                  Humans excel at selectively listening to a target speaker in background noise such a
234                    We can learn new tasks by listening to a teacher, but we can also learn by trial-a
235 with rhythmic brain activity in participants listening to and seeing the speaker.
236                                         When listening to auditory definitions and covertly retrievin
237            Auditory training involves active listening to auditory stimuli and aims to improve perfor
238 y on a psychophysical task simulating active listening to beats within frequency windows that is base
239 t depends on the following: first, selective listening to beats within frequency windows, and, second
240 y found their hearing aids to be helpful for listening to both live and reproduced music, although le
241 c resonance imaging were used during passive listening to brief, 95-dB sound pressure level, white no
242                        We found that, during listening to connected speech, cortical activity of diff
243 netic resonance imaging scan while passively listening to degraded speech ('sine-wave' speech), that
244 ng of auditory regularities in awake monkeys listening to first- and second-order sequence violations
245 ersus scrambled biological motion, and while listening to happy versus angry voices.
246 of self-generated sounds relative to passive listening to identical sounds.
247 al differences in speech reception solely by listening to individuals' brain activity.
248 rocessing in march and waltz contexts during listening to isochronous beats were reflected in neuroma
249 e and prevalence of problems associated with listening to live and reproduced music with hearing aids
250             Employment sector and time spent listening to MP3 players and stereos and participating i
251 rns of DMN connectivity in subjects who were listening to music compared with those who were not, wit
252 ers to the perception of periodicities while listening to music occurring within the frequency range
253                                              Listening to music prior to a standardized stressor pred
254   The results indicate that the enjoyment of listening to music with hearing aids could be improved b
255            This finding indicates that, when listening to music, humans apply cognitive processes tha
256 aid to the effectiveness of hearing aids for listening to music.
257 e not satisfied with their hearing aids when listening to music.
258  and functional coupling within the DMN when listening to naturalistic sounds.
259 systems, the sequence of words in texts, and listening to new songs in online music catalogues.
260 stion by recording from subjects selectively listening to one of two competing speakers, either of di
261 e culture in which they are being raised, by listening to other people.
262 ormal experimentation, and from watching and listening to others.
263 ticipants performed an incidental task while listening to phonemes in the MRI scanner.
264 ce to switch between two contexts: passively listening to pure tones and performing a recognition tas
265 halographic recordings while first passively listening to recorded sounds of a bell ringing, then act
266 iking the bell with a mallet, and then again listening to recorded sounds.
267  than that present in a control condition of listening to reversed speech.
268                         We hypothesized that listening to RM prior to the stress test, compared to SW
269                                              Listening to sentences in the context of a listen-repeat
270 tion was measured in normal-hearing subjects listening to simulations of unimodal, EAS, and bimodal l
271               Humans are good at selectively listening to specific target conversations, even in the
272                                   In humans, listening to speech evokes neural responses in the motor
273 and print for sighted participants), and (2) listening to spoken sentences of different grammatical c
274 d the American Thoracic Society meeting) and listening to state of the art presentations, viewing res
275 of the attended speaker, as if subjects were listening to that speaker alone.
276          The primary outcome, measured after listening to the entire podcast, was the mean score and
277 on detection task and while they sat quietly listening to the identical stimuli.
278                              INTERPRETATION: Listening to the Informed Health Choices podcast led to
279 ities, such as driving on a quiet road while listening to the radio.
280 with those elicited by the subjects directly listening to the same music.
281 oss different people (n = 45, men and women) listening to the same story.
282         We found that adolescent bilinguals, listening to the speech syllable [da], encoded the stimu
283                   When subjects are actively listening to the stimuli, these responses are larger and
284                  Newborn infants (2-5 d old) listening to these three types of syllables displayed di
285                     At birth, infants prefer listening to vocalizations of human and nonhuman primate
286 ound localization performance in NH subjects listening to vocoder processed and nonvocoded virtual ac
287 sess the brainstem's activity when a subject listens to one of two competing speakers, and show that
288          The pallid bat (Antrozous pallidus) listens to prey-generated noise to localize and hunt ter
289 uses echolocation for obstacle avoidance and listens to prey-generated noise to localize prey.
290 rticipants heard simple sentences, with each listening trial followed immediately by a trial in which
291  articulatory representations during passive listening using carefully controlled stimuli (spoken syl
292 ing two versions of an experiment: a natural listening version in which participants simply listened
293            For hearing-impaired participants listening via a simulated five-channel compression heari
294 nterventions were diverse and included music listening, visual arts, reading and creative writing, an
295 tructure of responses in motor cortex during listening was organized along acoustic features similar
296                                       During listening, we observed neural activity in the superior a
297          Motor cortex neural patterns during listening were substantially different than during artic
298 ng vocalization (talk) and passive playback (listen) were compared to assess the degree of N1 suppres
299 environments, for all four types of stimuli, listening with both hearing aid (HA) and cochlear implan
300 r implant (CI) was significantly better than listening with CI alone.

Technical terms (or usages) not yet included in WebLSD can be submitted via "新規対訳" (new translation pair).
 