Corpus search results (sorted by the word following the keyword)
Click a serial number to open the corresponding PubMed page.
1 re most relevant for behavior (i.e., speech, voice).
2 and the auditory modality (especially human voices).
3 ties, including the face of a person and her voice.
4 bjects listened to the playback of their own voice.
5 s, as is low self-efficacy related to safety voice.
6 was negatively related to moral distress via voice.
7 n their eyes spontaneously or in response to voice.
8 werful positive or negative affect on safety voice.
9 ice and asking whether it came from the cued voice.
10 ntially activated only for the natural human voice.
11 mouth-preferring regions respond strongly to voices.
12 peaker in background noise such as competing voices.
13 n, and while listening to happy versus angry voices.
14 nly the temporal or the spectral features of voices.
15 ed to allocate less spontaneous attention to voices.
16 1500 ms exposures to groups of simultaneous voices.
17 were cued beforehand to attend to one of the voices.
18 ras preserving the exact spectral profile of voices.
19 people without psychosis who regularly hear voices.
20 Most participants described hearing multiple voices (124 [81%] of 153 individuals) with characterful
21 and power dynamics negatively affect safety voice, 2) open communication is unsafe and ineffective,
22 endent variables on moral distress and moral voice: (a) frequency of ethical dilemmas and conflicts;
23 ars) were randomly assigned with a telephone voice-activated or web-based system in a 1:1 ratio to tr
24 gauge another person's body size from their voice alone may serve multiple functions ranging from th
25 quently presenting the ending portion of one voice and asking whether it came from the cued voice.
26 of maternal sounds (including their mother's voice and heartbeat) or routine exposure to hospital env
28 ndergo a pretreatment baseline assessment of voice and swallowing function and receive counseling wit
30 make use of auditory cues from the talker's voice and visual cues from the talker's mouth to underst
31 rated randomisation sequence and interactive voice and web response system to assign patients aged 18
32 signed participants (2:1) via an interactive voice and web response system to raltegravir 1200 mg (tw
33 sation (1:1) using a centralised interactive voice and web response system to receive 100 mg/kg opici
34 randomly assigned (2:1) with an interactive voice and web response system to receive either 10 mg ev
35 y assigned (1:1) centrally by an interactive voice and web response system to receive intravenous bre
36 kal score at diagnosis) using an interactive voice and web response system to receive oral ponatinib
37 randomly assigned (1:1), via an interactive voice and web response system with a block size of four,
38 isation in a 1:1:1 ratio with an interactive voice and web response system with a block size of six,
39 randomly assigned (1:1), via an interactive voice and web response system with a permuted block desi
40 e randomly assigned (1:1) via an interactive voice and web response system with no stratification to
41 re randomly assigned (2:1) by an interactive voice and web response system with permuted block random
42 core >/=12) were randomised (via interactive voice and web response system) to tildrakizumab 200 mg,
46 nd randomisation system with an interactive (voice and web) response system and stratification by num
47 ) were randomly assigned, via an interactive voice and web-response system with computer-generated se
48 hen implemented centrally via an interactive voice and web-response system, to receive 1 year of oral
49 randomly assigned (1:1), via an interactive voice and web-response system, to receive a single intra
50 andomly assigned (1:1:1), via an interactive voice and web-response system, to receive once-weekly ex
52 eir biological mother and two female control voices and explored relationships between speech-evoked
53 nswer) that may be associated with different voices and locations to create dynamic listening scenari
54 ateral amygdala resection either listened to voices and nonvocal sounds or heard binaural vocalizatio
55 uth-preferring regions responded strongly to voices and showed a significant preference for vocal com
56 pleasantness of musical chords and affective voices and that, for listeners with clinically normal he
57 the unique tempo and timbre of their rivals' voices and use this rhythmic information to individually
58 revealed varied patterns of lecture (single voice) and nonlecture activity (multiple and no voice) u
59 s information from auditory speech (talker's voice) and visual speech (talker's mouth movements) to i
60 expectations of nurse behavior affect safety voice, and 4) nurse managers have a powerful positive or
61 erance, weight gain, constipation, change in voice, and dry skin, but clinical presentation can diffe
64 rse, qualitative, speak up, silence, safety, voice, and safety voice identified 372 articles with 11
65 d remains today, his was only one among many voices, and attention to them would be well repaid by a
66 reported bodily sensations while they heard voices, and these sensations were significantly associat
71 ability to infer emotion based upon tone of voice [auditory emotion recognition (AER)] that drive im
72 via a centralised interactive web-based and voice-based randomisation system to receive oral palboci
74 tures (place and manner of articulation, and voicing) beyond their acoustic (surface) form in adult h
78 ation (avatar) of their presumed persecutor, voiced by the therapist so that the avatar responds by b
81 Diarrhoea, neutropenia, hypertension, and voice changes were significantly more common, during che
83 natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in t
84 ology with high precision 3D actuation (e.g. voice coil, 1microm encoder resolution; stepper motors,
87 certainty, which further supports previously voiced concerns about the usability and efficiency of th
88 , and it will open practical applications in voice control, wearable electronics and many other areas
89 Ugandan not-for-profit organisation Raising Voices-could reduce physical violence from school staff
90 activity that encodes pitch in natural human voice, distinguishes between self-generated and passivel
92 owel-consonant duration ratio, and consonant voicing duration) were systematically varied in synthesi
94 predict the quantity of time spent on single voice (e.g., lecture), multiple voice (e.g., pair discus
95 nt on single voice (e.g., lecture), multiple voice (e.g., pair discussion), and no voice (e.g., click
96 Compared to female control voices, mother's voice elicited greater activity in primary auditory regi
99 nd include intentional leadership; increased voice, especially of women; implementation of integrated
100 the right TVA remains selective to the human voice even when accounting for a variety of acoustical c
101 mbodiment illusion depends on the child-like voice feedback being congruent or incongruent with the a
105 rticulated speech, the potential function of voice frequency modulation in human nonverbal communicat
106 e randomly modulated in pitch, adjusting his voice frequency up or down when the human demonstrator d
107 s voice: Infants discriminate their mother's voice from the first days of life, and this stimulus is
108 measured their ability to extract this cued voice from the mixture by subsequently presenting the en
111 d frequency following responses that tracked voice fundamental frequency (F0) and were significantly
112 s applied to the index finger that converted voice fundamental frequency into tactile vibrations.
113 ) is a new approach in which people who hear voices have a dialogue with a digital representation (av
114 from within the agricultural sector, outside voices have become an important influence in broadening
116 s that differentiated voice-hearers from non-voice-hearers and treatment-seekers from non-treatment-s
117 we identified processes that differentiated voice-hearers from non-voice-hearers and treatment-seeke
121 speak up, silence, safety, voice, and safety voice identified 372 articles with 11 retained after a r
124 nication intervention for the restoration of voice in ventilated tracheostomy patients in the ICU.
127 e previously observed attentional deficit to voices in ASD individuals could be due to a failure to c
128 tle statistical differences about people and voices in order to direct their attention toward the mos
130 s for identification of word-final fricative voicing in "loss" versus "laws", and possible changes th
131 lege of Physicians (ACP) is attentive to all voices, including those who speak of the desire to contr
132 rength, decreased fat mass, deepening of the voice, increased sexual desire, cessation of menstruatio
133 salient voices in a child's life is mother's voice: Infants discriminate their mother's voice from th
134 study sponsor, implemented by a computerised voice interactive system, and stratified by LDL choleste
141 ethics support moderated the moral efficacy-voice-moral distress relationship such that when organiz
144 follow the fundamental frequency (FF) of the voice of their VB, indicating a new motor plan for speak
147 has been increasing interest in taking the "voice" of the patient into account during the developmen
149 associated with linguistic features such as voice onset time, duration of the formant transitions, a
152 d patients (1:1:1) via a central interactive voice or web response system to either placebo every 2 w
157 h a block size of four) using an interactive voice or web system to receive intravenous atezolizumab
158 ral vs peritoneal), by use of an interactive voice or web system, to receive intravenous tremelimumab
159 ation was done centrally with an interactive voice or web-based response system and stratified by eth
160 rating system implemented via an interactive voice or web-based response system with a block size of
161 isation schedule accessed via an interactive voice or web-based response system, patients were random
163 y were randomly assigned with an interactive voice or web-response system (1:1:1) to receive adalimum
164 ation was done centrally with an interactive voice or web-response system with patients stratified to
165 biologically salient voices such as mother's voice or whether this brain activity is related to child
166 s fast as possible to target stimuli (either voices or strings) while ignoring distracting stimuli.
169 the McGurk effect (in which incongruent lip-voice pairs evoke illusory phonemes), and also identific
171 ome brain areas concerned with olfaction and voice perception consistent with sexual identification,
173 and face-processing regions during mother's voice perception predicted social communication skills.
174 eech, due to the lack of lexically important voice pitch cues and perhaps other qualities associated
175 ker, speaking rate, amplitude, duration, and voice pitch information may be quite variable, depending
177 e talking simultaneously may differ in their voice pitch, perceiving the harmonic structure of sounds
178 tion afforded by the CI limits perception of voice pitch, which is an important cue for speech prosod
180 e that regions that typically specialize for voice processing in the hearing brain preferentially reo
181 ds to globally reduced ipsilesional cortical voice processing, but only left amygdala lesions are suf
184 registered each patient using an interactive voice recognition system into one of the three treatment
185 ntral computerised system and an interactive voice recognition system, we randomly assigned (1:1) pat
187 be for self-powered anti-interference throat voice recording and recognition, as well as high-accurac
189 ter a period of eight years, indicating that voice representations or interest could be limited in ti
193 e did the randomisation using an interactive voice response system and a centralised, computer-genera
194 he investigators using an interactive web or voice response system and a computer-generated randomisa
195 s centrally implemented using an interactive voice response system and integrated web response system
196 e randomly assigned (1:1) via an interactive voice response system and integrated web response system
197 tion occurred centrally using an interactive voice response system and integrated web response system
198 andomisation was done through an interactive voice response system and no stratification factors were
199 ation was done centrally with an interactive voice response system and patients were stratified by re
200 generated randomisation list and interactive voice response system and stratified by geographical reg
201 randomly assigned (1:1) with an interactive voice response system by the permuted block method using
203 Patients were randomised via an interactive voice response system in a 1:1:1 ratio to either lenvati
204 ze of four) by a telephone-based interactive voice response system or interactive web response system
205 y were randomly assigned with an interactive voice response system to an age-based and weight-based b
206 assigned (1:1:1) by an interactive web-based voice response system to benralizumab 30 mg either every
207 ndomly assigned (1:1:1) using an interactive voice response system to dapagliflozin 5 mg or 10 mg onc
208 assigned (1:1:1) centrally by an interactive voice response system to dulanermin (8 mg/kg for a maxim
209 domly assigned in a 1:1 ratio by interactive voice response system to receive concomitant oral acetyl
210 re randomly assigned (1:1) by an interactive voice response system to receive either a combination of
211 e randomly assigned (1:1) via an interactive voice response system to receive enzalutamide 160 mg/day
212 ly assigned in a 2:1 ratio by an interactive voice response system to receive everolimus 10 mg per da
213 e randomly assigned (1:1) via an interactive voice response system to receive inotuzumab ozogamicin (
214 randomly assigned (1:1:1) by an interactive voice response system to receive nivolumab 1 mg/kg every
215 tion (block size of three) by an interactive voice response system to receive oral everolimus (10 mg
216 e randomly assigned (1:1) via a computerised voice response system to receive rilotumumab 15 mg/kg in
217 of the world]) with an interactive web-based voice response system to receive subcutaneous placebo or
218 enerated random sequence with an interactive voice response system to receive subcutaneous placebo, e
219 e randomly assigned (1:1) via an interactive voice response system to receive subcutaneous romosozuma
220 domly assigned (2:1) through the interactive voice response system to receive weekly romiplostim or p
221 amic randomisation scheme and an interactive voice response system to trastuzumab emtansine (3.6 mg/k
222 omly assigned (1:1) by use of an interactive voice response system with a block size of four to eithe
223 ation was done by an independent interactive voice response system with a permuted block schedule (bl
224 Randomisation was done via an interactive voice response system with a permuted block schedule (bl
225 ators randomly assigned (with an interactive voice response system) patients 2:1 to receive an intrav
226 s were randomly assigned (via an interactive voice response system) to oral metformin 1000 mg twice d
227 y assigned (1:1) centrally by an interactive voice response system, to receive either ipilimumab 10 m
228 assigned (1:1), centrally by an interactive voice response system, to receive intravenous infusions
234 re randomly assigned (1:1) using interactive voice response technology (block size of 6) on day 15 of
235 er-generated random-sequence and interactive voice-response and web-response system, stratified by Hb
236 randomly assigned (1:1), via an interactive voice-response or web-response system, to one of two dos
237 on (block size of four) using an interactive voice-response or web-response system, to receive atalur
238 ned (2:1), via a telephone-based interactive voice-response system (GlaxoSmithKline Registration and
239 e of four) via a telephone-based interactive voice-response system or interactive web-response system
240 on score (<1% vs >/=1%) using an interactive voice-response system to 4 cycles of pembrolizumab 200 m
241 allocated patients 2:1 using an interactive voice-response system to eltrombopag or placebo, stratif
242 domised (1:1) via a centralised, interactive voice-response system to receive 8 mg/kg intravenous ram
243 re randomly assigned (1:1) by an interactive voice-response system to receive either oral lenalidomid
244 locks of six per stratum with an interactive voice-response system to receive pembrolizumab 2 mg/kg,
245 a were randomly assigned, via an interactive voice-response system with a permuted block randomisatio
246 d randomisation sequence with an interactive voice-response system, to receive once-weekly dulaglutid
251 The strength of brain connectivity between voice-selective STS and reward, affective, salience, mem
252 auditory regions in the midbrain and cortex; voice-selective superior temporal sulcus (STS); the amyg
254 sponses from functional MRI (fMRI)-localized voice-sensitive cortex in the anterior temporal lobe of
255 scillations and neuronal excitability in the voice-sensitive cortex of macaques, a suggested animal m
256 cortex, downstream auditory regions, such as voice-sensitive cortex, appear to functionally engage pr
258 listeners for a variety of speaking styles: voiced speech produced at slow, normal, and fast speakin
260 te nearly identical responses to picture and voice stimuli of famous U.S. politicians during a naming
261 ge the relative heights of women from paired voice stimuli, and importantly, whether errors in size e
262 We found several areas supporting face or voice stimulus classification based on fMRI responses, c
263 as significantly more use of multiple and no voice strategies in courses for STEM majors compared wit
267 engaged in children by biologically salient voices such as mother's voice or whether this brain acti
269 lus cytarabine through a central interactive voice system with a permuted block procedure stratified
271 isingly, ASD adults had even shorter RTs for voices than the NT adults, suggesting a faster voice rec
272 ubjects responded more intensely to familiar voices than to calls from unknown individuals - the firs
276 (preventing synchrony), and also whether the voice the participant heard was "live" (allowing rich re
277 ICU, bereaved families need opportunities to voice their feelings about their experience in the ICU a
282 e hardly novel, they gave a new and powerful voice to the cancer survivorship movement that demanded
286 t to safe, open cultures, may improve safety voice utilization among nurses and other healthcare work
289 y a computerised system using an interactive voice-web response system with a block size of three.
290 ndomised in a 1:1 ratio using an interactive voice-web response system, stratified by geographical re
292 andomly assigned (1:1:1) with an interactive voice-web-based response system to receive lebrikizumab
293 (q2w), or placebo via a central interactive voice/web response system, stratified by severity and gl
294 randomly assigned (1:1), via an interactive voice/web response system, to receive oral macitentan (1
296 Our statistical analysis showed that mixed voices were more likely to have changed over time (p=0.0
298 yed by pitch changes in the highest-register voice, whereas meter or rhythm is often carried by instr
299 zed: discourse-related cues, such as passive voice, which effect a higher predictability of remention
300 outh movements at the same time as they hear voices, while there is no auditory accompaniment to visu