Corpus search results (sorted by the word one position after the keyword)
Click a serial number to display the corresponding PubMed page.
1 e descriptions of reception of the resultant sound.
2 these spaces due to anthropogenic underwater sound.
3 detect a mistuned harmonic within a complex sound.
4 lea are mechanosensors for the perception of sound.
5 typical correspondences between spelling and sound.
6 signal timing changes to that of the nearby sound.
7 alisation processing across infant and adult sounds.
8 ymptom of FXS is extreme sensitivity to loud sounds.
9 tuations in intensity in amplitude-modulated sounds.
10 e, knocking at a door results in predictable sounds.
11 n subjects as they manipulated stored speech sounds.
12 daptive filter for cancelling self-generated sounds.
13 t neurons can also encode the probability of sounds.
14 frontal regions during recognition of speech sounds.
15 lus in a sequence with similar or dissimilar sounds.
16 se zinc to influence how the brain processes sounds.
17 ion, the auditory analysis of self-generated sounds.
18 preference for vocal compared with nonvocal sounds.
19 be detected and converted into controllable sounds.
20 ork supporting perceptual grouping of speech sounds.
21 cending pathway in the perception of complex sounds.
22 mallet, and then again listening to recorded sounds.
32 ntacts, type II afferents are insensitive to sound and only weakly depolarized by glutamate release f
33 s selective immediately after the onset of a sound and then become highly selective in the following
34 eview of the literature data on the speed of sound and ultrasound absorption in pure ionic liquids (I
38 to avoid presentation of uncomfortably loud sounds, and (c) to ensure that subjects have control ove
39 00 ms) between actions and action-associated sounds, and we recorded magnetoencephalography (MEG) dat
40 uminance for vision, pitch and intensity for sound-and assemble a stimulus set that systematically va
42 ferent orofacial movement patterns and these sounds are used in communicatively relevant contexts.
43 he songs where both songs contained "similar sounds arranged in a similar pattern." Songs appear to b
48 show that auditory cortex neurons respond to sound at very young ages, even before the opening of the
51 TMS increased the disadvantage for spelling-sound atypical words more for the individuals with stron
52 ditory system can identify the category of a sound based on the global features of the acoustic conte
53 bnormal salience is attributed to particular sounds based on the abnormal activation and functional c
54 s: a phonological input buffer that captures sound-based information and an articulatory rehearsal sy
55 uring sleep biased AC activity patterns, and sound-biased AC patterns predicted subsequent hippocampa
57 and suppressed by alternating (asynchronous) sounds, but only when the animals engaged in task perfor
59 cant individual differences in the use of AG sounds by chimpanzees and, here, we examined whether cha
60 e shown that some can learn to produce novel sounds by configuring different orofacial movement patte
64 4 days, beginning 2 days before a calibrated sound challenge (4 h of pre-recorded music delivered by
65 All participants who received the calibrated sound challenge and at least one dose of study drug were
66 ntinued from the study before the calibrated sound challenge because they no longer met the inclusion
67 t 4 kHz measured 15 min after the calibrated sound challenge by pure tone audiometry; a reduction of
68 listeners are more sensitive to approaching sounds compared with receding sounds, reflecting an evol
77 showed that in misophonic subjects, trigger sounds elicit greatly exaggerated blood-oxygen-level-dep
80 e the mammalian phono-receptors, transducing sound energy into graded changes in membrane potentials,
84 he future thalamocortical input layer 4, and sound-evoked spike latencies were longer in layer 4 than
85 itory brainstem of cats, spatial patterns of sound-evoked Ve can resemble, strikingly, Ve generated b
88 supported more accurate decoding of temporal sound features in the inferior colliculus and auditory c
92 wever, there remains a lack of statistically sound frameworks to model the underlying transmission dy
93 erves are tuned to respond best to different sound frequencies because basilar membrane vibration is
99 ate and specific reconstruction of real-life sounds from high-resolution functional magnetic resonanc
100 particularly challenging in audition, where sounds from various sources and localizations, degraded
101 with animals' impairments in detecting brief sound gaps, which is often considered a sign of tinnitus
105 mportantly, a critical analysis of speeds of sound in ILs vs those in classical molecular solvents is
106 ure and pressure dependences on the speed of sound in ILs, as well as the impact of impurities in ILs
107 ility to generate, amplify, mix and modulate sound in one simple electronic device would open up a ne
108 ctive experience of a rhythmically modulated sound in real time, even when the perceptual experience
110 each syllable to the most spectrally similar sound in the target, regardless of its temporal position
117 ocal sounds than a range of nonvocal control sounds, including scrambled voices, environmental noises
119 sition, and the illumination patterns of the sound-indication devices allow us to discriminate multip
120 echnique that utilizes multiple, distributed sound-indication devices and a miniature LED backpack to
122 ical limitations and non-invasively measured sound-induced vibrations at four locations distributed o
127 trol group using a key press to generate the sounds instead of learning to play the musical instrumen
128 low-latency encoding of onset and offset of sound intensity in the cochlea's base and submillisecond
132 ever, these studies only manipulated overall sound intensity; therefore, it is unclear whether loomin
133 duced transparency based on a coherent light-sound interaction, with the coupling originating from th
135 elation between ions structure and speeds of sound is presented by highlighting existing correlation
136 integrated: cortical ILD tuning to broadband sounds is a composite of separate, frequency-specific, b
137 ause they would affect how information about sounds is conveyed to higher-order areas for further pro
139 sion of auditory responses to self-generated sounds is well known, it is not clear whether the learne
140 he ability to perceive and memorize rhythmic sounds is widely shared among humans [6] but seems rare
141 ied to explain the "mystery" of Stradivari's sound, it is only recently that studies have addressed t
142 ormatics searches for enzyme candidates with sound kinetic measurements, evolutionary considerations
145 source position that is robust to changes in sound level.SIGNIFICANCE STATEMENT Sensory neurons' resp
146 protocol compared with those without maximum sound levels 81 dB (95% CI, 79-83) versus 77 dB (95% CI,
156 l superior olive (MSO) play a unique role in sound localization because of their ability to compare t
159 e this method to probe the representation of sound localization in auditory neurons of chinchillas an
161 e auditory brainstem and participates in the sound localization process with fast and well-timed inhi
166 ice tasks to examine perceptual judgments of sound location as a function of digit magnitude (1-9).
169 d good performance in distinguishing between sound maize and undesirable materials, with cross-valida
170 trast, P2 became larger when listening after sound making compared with the initial naive listening.
172 changes in the coding of amplitude-modulated sounds may have a detrimental effect on the ability of S
175 alities include the ability to project their sound more effectively in a concert hall-despite seeming
177 onductance, and pupil area responses to loud sounds (multivariate p = .007) compared with trauma-expo
178 opose that P2 characterizes familiarity with sound objects, whereas beta-band oscillation signifies i
179 ests as popping, hissing, and faint rustling sounds occurring simultaneously with the arrival of ligh
180 w that newborns are capable of retaining the sound of specific words despite hearing other stimuli du
182 while first passively listening to recorded sounds of a bell ringing, then actively striking the bel
183 ft hand while they were presented with brief sounds of rising, falling or constant pitches, and in th
184 ndividually tailored spectral cues to create sounds of similar intensity but different naturalness.
185 ucing color sensations was the name, not the sound, of the note; behavioral experiments corroborated
187 at related to the latency of the response to sound onset, which is found in left auditory cortex.
191 w nonlinear facilitation to harmonic complex sounds over inharmonic sounds, selectivity for particula
192 s in that responses were more consistent for sounds perceived as approaching than for sounds perceive
195 a indicate that the non-linear processing of sound performed by the guinea pig cochlea varies substan
198 re, research opportunities and barriers, and sound practices to guide providers, patients, and famili
200 alcium sensor for exocytosis and encoding of sound preferentially over the neuronal calcium sensor sy
202 the device's performance and applicability, sound pressure level is characterized in both space and
203 ect operates by continuously integrating the sound pressure level of background noise through tempora
204 tial translation across samples are based on sound principles, but require users to choose between ac
205 of any behavioral effects in suprathreshold sound processing indicated that, even if cochlear synapt
209 presentations by the manipulation of natural sounds produced when one's body impacts on surfaces have
213 Here we show that models of measured fish sound production versus independently measured fish dens
216 ions are, in part, based on methodologically sound randomized controlled trials (RCTs), demonstrating
217 e different systems: a music-playing flag, a sound recording film and a flexible microphone for secur
219 icit memory also characterized the impact of sound regularities in benefitting dyslexics' oral readin
220 of clinical development strategies to enable sound regulatory assessment, with a goal toward licensur
222 r sensory processing by dynamically changing sound representation and by controlling the pattern of s
224 oduces an approach to embed models of neural sound representations in the analysis of fMRI response p
225 results suggest that learning to produce AG sounds resulted in region-specific cortical reorganizati
226 gnal assignment is fundamental for obtaining sound results when interpreting statistical data from me
230 tion of human exposure to chemicals in food, sound risk assessments, and more focused risk abatement
231 ntial preclinical evidence needed to build a sound scientific basis for increased medicinal use of CB
233 n to harmonic complex sounds over inharmonic sounds, selectivity for particular harmonic structures b
234 ted behaviours such as locomotion, touch and sound sensation across different species including Caeno
235 itory stream segregation-the organization of sound sequences into perceptual streams reflecting diffe
236 impanzees that have learned to produce these sounds show significant differences in central sulcus (C
238 map the seabed using intense, low-frequency sound signals that penetrate kilometers into the Earth's
243 f discriminating the individual identity and sound source distance in conspecific communication calls
244 nd source location in the face of changes in sound source level by neurons of the auditory midbrain.
245 thod to the problem of the representation of sound source location in the face of changes in sound so
246 properties contribute to a representation of sound source position that is robust to changes in sound
251 ing, as well as the acoustical properties of sound sources in the natural environment, thereby provid
252 on devices allow us to discriminate multiple sound sources including loudspeakers broadcasting calls
253 can be captured by an array of tongue-driven sound sources located along the side of the mouth, and t
254 Here we show tracheophones possess three sound sources, two oscine-like labial pairs and the uniq
257 allenging for airborne acoustics because the sound speed (inversely proportional to the refractive in
258 we present longitudinal (c L) and transverse sound speeds (c T) versus pressure from higher than room
261 s may be active during consecutive cycles of sound stimuli, somatic EPSP normalization renders spike
263 es additional high-quality, methodologically sound studies to clearly elucidate the role of palliativ
265 (anger and anxiety) in response to everyday sounds, such as those generated by other people eating,
269 gyrus and sulcus that respond more to vocal sounds than a range of nonvocal control sounds, includin
271 n (PS) model of auditory stream segregation, sounds that activate the same or separate neural populat
272 Listeners were presented with sequences of sounds that varied in either fundamental frequency (elic
273 ficits, highlighting the potential for using sound therapy soon after cochlear damage to prevent the
275 h sufficient intensity can create concurrent sounds through radiative heating of common dielectric ma
278 tory cortex switches its input modality from sound to vision but preserves its task-specific activati
281 rophysiology has mapped acoustic features of sounds to the response properties of neurons; however, g
282 ina and integrates it with information about sound, touch, and state of the animal that is relayed fr
283 ensory hypersensitivity (aversion to certain sounds, touch, etc., or increased ability to make sensor
285 complications may be accomplished through a sound understanding of the hemodynamic and physiological
287 nly utilising the two parameters velocity of sound (VOS) and broadband ultrasound attenuation (BUA),
289 udged whether a given consonant-vowel speech sound was large or small, round or angular, using a size
293 .The control and manipulation of propagating sound waves on a surface has applications in on-chip sig
295 as remarkable for slightly asymmetric breath sounds, which appeared to be diminished on the right sid
296 Thus, the neural encoding of low-frequency sounds, which includes most of the information conveyed
297 M100 component over time for self-generated sounds, which indicates cortical adaptation to the intro
298 tual-room size with either active or passive sounds while measuring their brain activity with fMRI.
299 ons are largely unaffected by self-generated sounds while remaining sensitive to external acoustic st
300 tex may play an important role in processing sounds with harmonic structures, such as animal vocaliza
Technical terms (or usages) not yet included in WebLSD can be submitted via "新規対訳" (new translation entry).