Corpus search results (sorted by the first word to the right of the keyword)
  Click an entry's serial number to open the corresponding PubMed page
  
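Note on the sort order: each hit is keyed on the first token to the right of the keyword, which is why sentence-final hits ("... listening.") group together ahead of hits such as "listening conditions". The sketch below illustrates such a sort; it is an assumed reconstruction for illustration only, not WebLSD's actual code, and its tokenization and punctuation handling are simplified.

    # Illustrative sketch of sorting KWIC (keyword-in-context) lines by the
    # first token to the right of the keyword. This is an assumed
    # reconstruction of the page's "sorted one word after" behavior, not
    # WebLSD's actual implementation; real tokenization may differ.
    def sort_kwic(lines, keyword="listening"):
        def right_key(line):
            # Take the text after the first occurrence of the keyword and
            # use its first whitespace-delimited token as the sort key.
            # Sentence-final hits yield ".", which sorts before letters.
            _, _, right = line.lower().partition(keyword)
            tokens = right.split()
            return tokens[0] if tokens else ""
        return sorted(lines, key=right_key)

    hits = [
        "r difficulty understanding speech in adverse listening conditions",
        "g vocalization, compared with during passive listening.",
    ]
    print(sort_kwic(hits))
    # -> the sentence-final hit sorts first, as in the listing below
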
     1 only during articulation (not during passive listening).
     2 ulation, or EAS) and/or across ears (bimodal listening).                                             
     3 ng a predictable time window during which to listen.                                                 
     4 in speech production encountered in everyday listening.                                              
     5 cy between imagined speech, overt speech and listening.                                              
     6 activate the currently heard language during listening.                                              
     7  greater attentional selection during active listening.                                              
     8 g vocalization, compared with during passive listening.                                              
     9 er auditory perceptual performance for music listening.                                              
    10 ry stimuli while participants were passively listening.                                              
    11 n of resting-state connectivity during music-listening.                                              
    12 sound making compared with the initial naive listening.                                              
    13 on-amusic patients during instrumental music listening.                                              
    14 nterpretation of the sound during subsequent listening.                                              
    15 auditory functional topography during active listening.                                              
    16 otopic mismatch for EAS, but not for bimodal listening.                                              
    17 to simulations of unimodal, EAS, and bimodal listening.                                              
    18 versational contexts both while speaking and listening.                                              
    19 te factors that affect IE in EAS and bimodal listening.                                              
    20  frontoparietal brain areas during selective listening.                                              
    21 nd the to-be-ignored stream during selective listening.                                              
    22 spatial cues are modulated by active spatial listening.                                              
    23 own about their interaction during selective listening.                                              
    24 n they actively vocalize than during passive listening.                                              
    25 auditory-visual cues modulate this selective listening.                                              
    26  quiet conditions, both CI alone and bimodal listening achieved the largest benefits when telephone s
    28 ed in controlling attention during selective listening, allowing for a better cortical tracking of th
    29 d with the other groups, in their ability to listen and talk to their children, who as a group report
    30 ed the spectrum of the neural activity while listening and compared it to the modulation spectrum of 
    31  classification accuracy was obtained in the listening and overt speech conditions (mean = 89% and 86
    32 tation time, in-depth specialised knowledge, listening and understanding to patients' needs, and a ho
    34 eech recognition is remarkably robust to the listening background, even when the energy of background
    38 nship offsets age-related declines in speech listening by refining the hierarchical interplay between
    42 both perspectives are related to "Auditory" (listening, communicating, and speaking), "Social" (relat
    44 ntence recognition test under the best aided listening condition may be considered as candidates for 
    45 However, the requirement for 'the best aided listening condition' needs significant time and clinical
    46 ilized to different extents depending on the listening conditions (e.g., full spectrum vs. spectrally
    47 r difficulty understanding speech in adverse listening conditions and exhibit degraded temporal resol
    48 ligibility performance in a range of adverse listening conditions and hearing impairments, including 
    49 n SRTs on the order of 6.5 to 11.0 dB across listening conditions compared with the omnidirectional r
    52 nate Response Measure for six CI users in 27 listening conditions including each combination of rever
    53  under such acoustically complex and adverse listening conditions is not known, and, indeed, it is no
    54 dies have looked at the effects of different listening conditions on the intelligibility of speech, t
    56 slope changes across studies and to identify listening conditions that affect the slope of the psycho
    57 ts, bimodal users were presented under quiet listening conditions with wideband speech (WB), bandpass
    59 significantly greater in noisy than in quiet listening conditions, consistent with the principle of i
    71 ive sensing, we present here a single-sensor listening device that separates simultaneous overlapping
    75 f these two types of information in everyday listening (e.g., conversing in a noisy social situation;
    77   Semantic context led to rapid reduction of listening effort for people with normal hearing; the red
    78 ot be attributed to epiphenomenal changes in listening effort that accompany enhanced perception.    
    80 drivers engaged in a driving simulator while listening either to global positioning system instructio
    81 ing speech despite the fact that our natural listening environment is often filled with interference.
    82 er, these results suggest that, in a complex listening environment, auditory cortex can selectively e
    84 ifficulty understanding speech in real-world listening environments (e.g., restaurants), even with am
    85 se sound processing, particularly in complex listening environments that place high demands on brain 
    86 may support speech processing in challenging listening environments, and that this infrastructure is 
    91 ety-four unilateral CI patients with bimodal listening experience (CI plus HA in contralateral ear) c
    92 speak their first words and without specific listening experience, sensorimotor information from the 
    93 to measure brain activity while participants listened for short silences that interrupted ongoing noi
    94 respectful care for the deceased and family, listening for and addressing family concerns, and an att
    97 eech stimuli to investigate 'cocktail-party' listening have focused on entrainment of cortical activi
    98 ignificantly better for EAS than for bimodal listening; IE was sensitive to tonotopic mismatch for EA
   101 (SNHL) often experience more difficulty with listening in multisource environments than do normal-hea
   102 on constitutes one of the building blocks of listening in natural environments, its neural bases rema
   104 dition that combined natural and beamforming listening in order to preserve localization for broadban
   107 ay practice controlling two languages during listening is likely to explain previously observed bilin
   109  echolocation.SIGNIFICANCE STATEMENT Passive listening is the predominant method for examining brain 
   111  an early effect of coherence during passive listening, lasting from approximately 115 to 185 ms post
   112 tive forced-choice increment detection task, listening level was varied whilst contrast was held cons
   114  users' speech and music perception, bimodal listening may partially compensate for these deficits.  
   116 e benefits of the VGHA over natural binaural listening observed in the fixed condition were reduced i
   119 evious work has shown that, during selective listening, ongoing neural activity in auditory sensory a
   123 netoencephalography, 12 young healthy adults listened passively to an isochronous auditory rhythm wit
   124 Unexpectedly, STG sites in monkeys that were listening passively responded to tones with magnitudes c
   125  were focused on the scenes relative to when listening passively, consistent with the notion that att
   126  primates; within 3 mo, this initially broad listening preference is tuned specifically to human voca
   127 l point, human vocalizations evoke more than listening preferences alone: they engender in infants a 
   128   Listening to sentences in the context of a listen-repeat task was expected to activate regions invo
   134 ttended speech stream.SIGNIFICANCE STATEMENT Listening selectively to one out of several simultaneous
   142 f a large commercial vessel, <10 km from the listening station, the communication space of both speci
   144 mics of listening difficulties and according listening strategies, we contrasted neural responses in 
   145 d immediately, while in previous learning-by-listening studies P2 increases occurred on a later day. 
   147     Event-related potentials during the talk/listen task were obtained before infusion and during inf
   148 re significantly different during a language listening task compared to during sleep, HR infants' mov
   150 ngs from the cortex of subjects engaged in a listening task with two simultaneous speakers, we demons
   151 oncurrent EEG-fMRI and a sustained selective listening task, in which one out of two competing speech
   155 different hearing devices, test stimuli, and listening tasks may interact and obscure bimodal benefit
   159  features of auditory stimuli during passive listening; this preference for speech features was dimin
   162 nerator with block sizes of four and six, to listen to either the Informed Health Choices podcast (in
   164 al magnetic stimulation (TMS) while subjects listen to Reference/Probe sound pairs and perform either
   165 ng echolocation, dolphins produce clicks and listen to returning echoes to determine the location and
   166  regions in both hemispheres, while subjects listen to sentences, and show that information travels i
   170 indings suggest that humans might literally 'listen to their heart' to guide their altruistic behavio
   173 ve English speakers) were scanned while they listened to 10 consonant-vowel syllables along the /ba/-
   174 EG recordings as female and male individuals listened to 30 s sequences of complex syncopated drumbea
   175 ral responses from human subjects who either listened to a 7 min spoken narrative or read a time-lock
   176  and adults' eye gaze while they watched and listened to a female reciting a monologue either in thei
   179 eural signature of recognition when newborns listened to a test word that had the same vowel of a pre
   180     Participants in the control arm (n = 80) listened to a verbal narrative describing CPR and the li
   183 halography (EEG) was recorded while subjects listened to auditory click trains presented at 20, 30, a
   184 an infants from monolingual English settings listened to English and Spanish syllable contrasts.     
   185 rom 20 right-handed healthy adult humans who listened to five different recorded stories (attended sp
   189 y activities around 20 Hz while participants listened to metronome beats and imagined musical meters 
   194 ical surface recordings in humans while they listened to natural, continuous speech to reveal the STG
   196 n to measure brain activity while volunteers listened to non-speech-affective vocalizations morphed o
   197 ctions for two such positions, men and women listened to pairs of male and female voices that differe
   198 oG was also recorded when subjects passively listened to playback of their own pitch-shifted vocaliza
   199 investigate neuronal activity while subjects listened to radio news played faster and faster until be
   200 ded MEG while 24 human subjects (12 females) listened to radio news uttered at different comprehensib
   202  (EEG) was recorded while human participants listened to rhythms consisting of short sounds alternati
   203 ly from the brain surface while participants listened to sentences that varied in intonational pitch 
   204 nt in which five men and two women passively listened to several hours of natural narrative speech.  
   205 lthy controls (n = 22) and patients (n = 22) listened to short stories in which we manipulated global
   206 etwork (DMN) of the brain while participants listened to sounds from artificial and natural environme
   208 n MEG data obtained while human participants listened to speech of varying acoustic SNR and visual co
   211 stening version in which participants simply listened to spoken sentences and an explicit task versio
   215 ed children and adolescents (4-17 years old) listened to stories and two auditory control conditions 
   218 articipants in the intervention arm (n = 70) listened to the identical narrative and viewed a 3-minut
   220  cortical responses collected while subjects listened to the same speech sounds (vowels /a/, /i/, and
   223 stress intensity while participants (n = 66) listened to true biographies describing human suffering.
   224 troencephalograms were recorded while humans listened to two spoken digits against a distracting talk
   225 e male and female human subjects watched and listened to videos of a speaker uttering consonant vowel
   226 ts with unilateral amygdala resection either listened to voices and nonvocal sounds or heard binaural
   227 lateral superior temporal cortex as subjects listened to words and nonwords with varying transition p
   228 efulness and during sleep in normal subjects listening to a hierarchical auditory paradigm including 
   232 stributed activation patterns during passive listening to a sound continuum before and after category
   238 y on a psychophysical task simulating active listening to beats within frequency windows that is base
   239 t depends on the following: first, selective listening to beats within frequency windows, and, second
   240 y found their hearing aids to be helpful for listening to both live and reproduced music, although le
   241 c resonance imaging were used during passive listening to brief, 95-dB sound pressure level, white no
   243 netic resonance imaging scan while passively listening to degraded speech ('sine-wave' speech), that 
   244 ng of auditory regularities in awake monkeys listening to first- and second-order sequence violations
   248 rocessing in march and waltz contexts during listening to isochronous beats were reflected in neuroma
   249 e and prevalence of problems associated with listening to live and reproduced music with hearing aids
   251 rns of DMN connectivity in subjects who were listening to music compared with those who were not, wit
   252 ers to the perception of periodicities while listening to music occurring within the frequency range 
   254   The results indicate that the enjoyment of listening to music with hearing aids could be improved b
   260 stion by recording from subjects selectively listening to one of two competing speakers, either of di
   264 ce to switch between two contexts: passively listening to pure tones and performing a recognition tas
   265 halographic recordings while first passively listening to recorded sounds of a bell ringing, then act
   270 tion was measured in normal-hearing subjects listening to simulations of unimodal, EAS, and bimodal l
   273 and print for sighted participants), and (2) listening to spoken sentences of different grammatical c
   274 d the American Thoracic Society meeting) and listening to state of the art presentations, viewing res
   286 ound localization performance in NH subjects listening to vocoder processed and nonvocoded virtual ac
   287 sess the brainstem's activity when a subject listens to one of two competing speakers, and show that 
   290 rticipants heard simple sentences, with each listening trial followed immediately by a trial in which
   291  articulatory representations during passive listening using carefully controlled stimuli (spoken syl
   292 ing two versions of an experiment: a natural listening version in which participants simply listened 
   294 nterventions were diverse and included music listening, visual arts, reading and creative writing, an
   295 tructure of responses in motor cortex during listening was organized along acoustic features similar 
   298 ng vocalization (talk) and passive playback (listen) were compared to assess the degree of N1 suppres
   299 environments, for all four types of stimuli, listening with both hearing aid (HA) and cochlear implan
  
Technical terms (and usages) not yet covered in WebLSD can be submitted via "新規対訳" (new translation pair).