Corpus search results (sorted by the word one position after the keyword)
Click a serial number to open the corresponding PubMed page
1 re most relevant for behavior (i.e., speech, voice).
2 and the auditory modality (especially human voices).
3 ng view of an optimistic, limitless future a voice.
4 ever before makes it a formidable force and voice.
5 is formed through variable exposure to that voice.
6 efore auditory information from the talker's voice.
7 ntially activated only for the natural human voice.
8 electively attending to one of two competing voices.
9 1500 ms exposures to groups of simultaneous voices.
10 ras preserving the exact spectral profile of voices.
11 people without psychosis who regularly hear voices.
12 mouth-preferring regions respond strongly to voices.
13 peaker in background noise such as competing voices.
14 ling depressed, social, and calm and hearing voices.
15 and controls identified a similar number of voices.
16 ot" and "boat") also differentiate speakers' voices.
17 nto the instrument, hence, confirming active voicing.
20 s elicited by unfamiliar voices and mother's voice, a biologically salient voice for social learning,
22 gauge another person's body size from their voice alone may serve multiple functions ranging from th
24 assigned (1:1) through a central interactive voice and integrated web response system to receive epac
26 ndergo a pretreatment baseline assessment of voice and swallowing function and receive counseling wit
28 played limited tracking of both the attended voice and the global acoustic input at the 4-8 Hz syllab
29 make use of auditory cues from the talker's voice and visual cues from the talker's mouth to underst
30 rated randomisation sequence and interactive voice and web response system to assign patients aged 18
31 o assign patients (2:1:1) via an interactive voice and web response system to atezolizumab (840 mg in
32 mised in a 1:2:2:2:2 ratio by an interactive voice and web response system to opicinumab 3 mg/kg, 10
33 signed participants (2:1) via an interactive voice and web response system to raltegravir 1200 mg (tw
34 sation (1:1) using a centralised interactive voice and web response system to receive 100 mg/kg opici
35 y assigned (1:1) centrally by an interactive voice and web response system to receive intravenous bre
36 andomly allocated (1:1) using an interactive voice and web response system to riociguat (0.5-2.5 mg t
37 Randomisation was done via an interactive voice and web response system using a permuted block sch
38 isation in a 1:1:1 ratio with an interactive voice and web response system with a block size of six,
39 randomly assigned (1:1), via an interactive voice and web response system with a permuted block desi
40 re randomly assigned (2:1) by an interactive voice and web response system with permuted block random
41 core >/=12) were randomised (via interactive voice and web response system) to tildrakizumab 200 mg,
42 randomly assigned (1:1) using an interactive voice and web response system, stratified by baseline HI
44 nd randomisation system with an interactive (voice and web) response system and stratification by num
45 hen implemented centrally via an interactive voice and web-response system, to receive 1 year of oral
46 discriminated repetition of morphed faces or voices and either directed their attention to stimulus i
47 ined neural responses elicited by unfamiliar voices and mother's voice, a biologically salient voice
49 k for research and practice that centers the voices and perspectives of historically marginalized pop
50 uth-preferring regions responded strongly to voices and showed a significant preference for vocal com
51 tween intrinsic characteristics of faces and voices and the demands of everyday life, showing how the
52 the unique tempo and timbre of their rivals' voices and use this rhythmic information to individually
53 dard narrative of asylums by considering the voices and views of those who were in them at different
54 revealed varied patterns of lecture (single voice) and nonlecture activity (multiple and no voice) u
55 s information from auditory speech (talker's voice) and visual speech (talker's mouth movements) to i
56 erance, weight gain, constipation, change in voice, and dry skin, but clinical presentation can diffe
58 d remains today, his was only one among many voices, and attention to them would be well repaid by a
66 d hypermethylation in a network of face- and voice-associated genes (SOX9, ACAN, COL2A1, NFIX and XYL
67 r more were randomly assigned (1:1), using a voice-based or web-based response system, to receive int
68 O) recommends that communities should have a voice, be informed and engaged, and participate in this
69 lity impressions irrespective of whether the voices belong to the native or the foreign language of t
71 volving many cue modalities, including face, voice, body, touch, and interpersonal space; different l
74 axillary hair growth, and age at menarche or voice break and first ejaculation-every 6 months from 11
76 participants (69%) exhibited a reduction of voice breaks and/or a meaningful increase in smoothed ce
79 ation (avatar) of their presumed persecutor, voiced by the therapist so that the avatar responds by b
80 consonant- and vowel-like calls, but active voicing by our closest relatives has historically been t
81 ttery is used to perform for three purposes; voice calling, music playing and LED strip lighting.
84 natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in t
85 ology with high precision 3D actuation (e.g. voice coil, 1microm encoder resolution; stepper motors,
86 Physicians and policymakers, however, have voiced concern that value-based payment programs may pen
91 Children's ability to distinguish speakers' voices continues to develop throughout childhood, yet it
92 membranophone, further demonstrating plastic voice control as a result of experience with the instrum
93 lizing real-time object detection, tracking, voice control, obstacle avoidance and balance control.
94 , and it will open practical applications in voice control, wearable electronics and many other areas
96 emains unclear how children's sensitivity to voice cues, such as differences in speakers' gender, dev
97 by phrases from two different speakers whose voices differed along the same acoustic dimension as tar
102 ctivity in receptive emotion areas and angry voices displaying activity in anterior expressive emotio
103 site strategies to significantly alter their voice duration and frequency to better activate the memb
105 ltiple voice (e.g., pair discussion), and no voice (e.g., clicker question thinking) activities.
106 predict the quantity of time spent on single voice (e.g., lecture), multiple voice (e.g., pair discus
107 nt on single voice (e.g., lecture), multiple voice (e.g., pair discussion), and no voice (e.g., click
108 mplies that the acoustical properties of the voice (e.g., pitch) are very powerful cues when forming
114 the right TVA remains selective to the human voice even when accounting for a variety of acoustical c
115 lly influence emotion perception, with happy voices exhibiting posterior activity in receptive emotio
117 mbodiment illusion depends on the child-like voice feedback being congruent or incongruent with the a
120 phone: a musical instrument where a player's voice flares a membrane's vibration through oscillating
121 s and mother's voice, a biologically salient voice for social learning, and identified a striking rel
122 er, doubts about these assignments have been voiced, fueled especially by studies counting the number
123 s applied to the index finger that converted voice fundamental frequency into tactile vibrations.
124 velopment of discrimination and weighting of voice gender cues are dissociated, i.e., adult-like perf
128 interquartile range, 0-0.80; p < 0.001; and Voice Handicap Index-10: median, 0; interquartile range,
129 artile range, 0.48-2.10) and vocal symptoms (Voice Handicap Index-10: median, 2; interquartile range,
132 ) is a new approach in which people who hear voices have a dialogue with a digital representation (av
134 s that differentiated voice-hearers from non-voice-hearers and treatment-seekers from non-treatment-s
135 we identified processes that differentiated voice-hearers from non-voice-hearers and treatment-seeke
139 orm abstracted representations of individual voice identities based on averages, despite having never
141 ut how the representation of an individual's voice identity is formed through variable exposure to th
147 tle statistical differences about people and voices in order to direct their attention toward the mos
150 lege of Physicians (ACP) is attentive to all voices, including those who speak of the desire to contr
151 rength, decreased fat mass, deepening of the voice, increased sexual desire, cessation of menstruatio
152 es recent evidence suggesting that the human voice is constrained by bodily tensioning affecting the
155 is controversy, a diagnostic test for active voicing is reached here through the use of a membranopho
156 ding when learning to discriminate different voices, little is known about how the representation of
159 follow the fundamental frequency (FF) of the voice of their VB, indicating a new motor plan for speak
163 associated with linguistic features such as voice onset time, duration of the formant transitions, a
164 logical mechanisms encode temporal cues like voice-onset time (VOT), which distinguishes sounds like
166 d patients (1:1:1) via a central interactive voice or web response system to either placebo every 2 w
167 ssigned (1:1:1) centrally via an interactive voice or web response system to receive acalabrutinib an
168 lock [block size of six] with an interactive voice or web response system) to receive atezolizumab (1
169 procedure, done centrally via an interactive voice or web response system, with stratification by Eas
174 ral vs peritoneal), by use of an interactive voice or web system, to receive intravenous tremelimumab
175 isation schedule accessed via an interactive voice or web-based response system, patients were random
176 lacebo using a central validated interactive voice or web-based response system, stratified by concom
177 ithm (block size of four) via an interactive voice or web-based response system, to receive letrozole
178 y were randomly assigned with an interactive voice or web-response system (1:1:1) to receive adalimum
179 ation was done centrally with an interactive voice or web-response system with patients stratified to
182 ded that we were not able to include migrant voices or those professionals not already interested in
185 the McGurk effect (in which incongruent lip-voice pairs evoke illusory phonemes), and also identific
186 ome brain areas concerned with olfaction and voice perception consistent with sexual identification,
187 These results provide evidence that social voice perception contains certain elements invariant acr
190 nctional and neural organisation of face and voice perception, while locating these in the context of
193 ing out noise and fast temporal cues such as voicing periodicity, that are not directly relevant to t
196 ll malleable, meaning that their encoding of voice pitch information might not receive as much neural
197 e talking simultaneously may differ in their voice pitch, perceiving the harmonic structure of sounds
198 known regarding the neurobiological basis of voice processing and its link to social impairments in A
201 e that regions that typically specialize for voice processing in the hearing brain preferentially reo
202 en voice-selective and reward regions during voice processing predicted social communication in child
206 preventing HSV-2 acquisition among women in VOICE, randomized, double-blinded, placebo-controlled tr
207 lts selectively track the attended speaker's voice rather than the global acoustic input at phrasal a
209 Randomisation was done using an interactive voice recognition system after stratification for previo
210 registered each patient using an interactive voice recognition system into one of the three treatment
211 d randomly assigned (1:1) via an interactive voice-recognition system to receive 400 mg amikacin (Ami
215 assigned patients (2:1) using an interactive voice response system and a blocked design (block size=3
216 e did the randomisation using an interactive voice response system and a centralised, computer-genera
217 he investigators using an interactive web or voice response system and a computer-generated randomisa
218 tion occurred centrally using an interactive voice response system and integrated web response system
219 s centrally implemented using an interactive voice response system and integrated web response system
220 e randomly assigned (1:1) via an interactive voice response system and integrated web response system
221 andomisation was done through an interactive voice response system and no stratification factors were
222 e nivolumab or ipilimumab via an interactive voice response system and stratified according to diseas
223 generated randomisation list and interactive voice response system and stratified by geographical reg
224 randomly assigned (1:1) with an interactive voice response system by the permuted block method using
225 Randomisation was done using an interactive voice response system or integrated web response system,
226 izumab or placebo with a central interactive voice response system or interactive web response system
227 ators at each site telephoned an interactive voice response system to centrally randomly assign patie
228 randomly assigned (1:1:1) via an interactive voice response system to daily oral ozanimod 1.0 mg or 0
229 ndomly assigned (1:1:1) using an interactive voice response system to dapagliflozin 5 mg or 10 mg onc
230 e randomly assigned (1:1) via an interactive voice response system to receive inotuzumab ozogamicin (
231 randomly assigned (1:1:1) by an interactive voice response system to receive nivolumab 1 mg/kg every
232 tion (block size of three) by an interactive voice response system to receive oral everolimus (10 mg
233 e randomly assigned (1:1) via a computerised voice response system to receive rilotumumab 15 mg/kg in
234 of the world]) with an interactive web-based voice response system to receive subcutaneous placebo or
235 e randomly assigned (1:1) via an interactive voice response system to receive subcutaneous romosozuma
236 amic randomisation scheme and an interactive voice response system to trastuzumab emtansine (3.6 mg/k
237 s were randomly assigned (via an interactive voice response system) to oral metformin 1000 mg twice d
238 e randomly assigned (1:1) via an interactive voice response system, stratified by mutation type and d
239 o the two treatment groups by an interactive voice response system, stratified by mutation type and d
240 y assigned (1:1) centrally by an interactive voice response system, to receive either ipilimumab 10 m
241 andomisation was done through an interactive voice response system, with a block size of four and str
242 e randomly assigned by either an interactive voice response telephone system or an internet-based app
243 ocks of four per stratum with an interactive voice-response and integrated web-response system to rec
244 puter generated, accessed via an interactive voice-response and integrated web-response system, and s
245 er-generated random-sequence and interactive voice-response and web-response system, stratified by Hb
246 ed by planned platinum, using an interactive voice-response or web-response system to receive intrave
247 on (block size of four) using an interactive voice-response or web-response system, to receive atalur
250 block size six) was done with an interactive voice-response system and stratified by PD-L1 expression
251 tion (1:1) was done by use of an interactive voice-response system and was stratified by geographical
253 allocated patients 2:1 using an interactive voice-response system to eltrombopag or placebo, stratif
254 re randomly assigned (1:1) by an interactive voice-response system to receive either oral lenalidomid
255 ethod (block size of six) and an interactive voice-response system with integrated web-response to pe
263 ocated around the perimeter of within-person voice spaces - crucially, these distributions were missi
264 our species underpins the acquisition of new voiced speech sounds, is not uniquely human among great
267 as significantly more use of multiple and no voice strategies in courses for STEM majors compared wit
269 term VTS and its optimal dosage for treating voice symptoms in SD are still unknown and require furth
274 itates open sharing of information and gives voice to diverse viewpoints from SC experts across indus
275 choice: not only what to do, but which inner voice to listen to - our 'automatic' response system, wh
276 e hardly novel, they gave a new and powerful voice to the cancer survivorship movement that demanded
278 point, we try to pull the reader up, giving voice to the opposing view of an optimistic, limitless f
285 e randomly assigned (2:1) via an interactive voice web recognition system to receive oral enzalutamid
286 ly assigned (1:1) by means of an interactive voice-web response system to receive cabazitaxel (25 mg/
287 3 weeks or atezolizumab alone by interactive voice-web response system using permuted block randomisa
288 y a computerised system using an interactive voice-web response system with a block size of three.
289 randomly assigned (1:1:1; via an interactive voice-web response system) to receive dapagliflozin (10
290 andomisation was done through an interactive voice-web response system, with stratification by cispla
292 (q2w), or placebo via a central interactive voice/web response system, stratified by severity and gl
293 randomly assigned (1:1), via an interactive voice/web response system, to receive oral macitentan (1
294 domly assigned (4:4:1:1) with an interactive voice/web-response system to receive BUP-XR 300 mg/300 m
298 yed by pitch changes in the highest-register voice, whereas meter or rhythm is often carried by instr
300 outh movements at the same time as they hear voices, while there is no auditory accompaniment to visu