Corpus search results (sorted by the word one position after the keyword)

Clicking a serial number opens the corresponding PubMed page.
1  raters), and to a subset of 80 scans (three raters).
2  .04) was achieved with the more experienced rater.
3 ntensity, diagnosis, and gender of poser and rater.
4 ent, the final opinion was made by the third rater.
5  Method for the ICU and 47% by the reference rater.
6 Cs ranging from 0.96 to 0.99 with 5 separate raters.
7 y an expert neuroradiologist and 3 clinician raters.
8 e Examination were performed by 76 different raters.
9 ensus was defined as agreement by >/= 75% of raters.
10 ich were independently evaluated by pairs of raters.
11 ), as assessed by blinded central AIMS video raters.
12 rrelations, was extracted independently by 2 raters.
13 es and direct observations of city blocks by raters.
14 D revealed by fundus photography and trained raters.
15 ficantly more consistent scoring than novice raters.
16 rved and rated by patient actors and faculty raters.
17 ion pre, intra, and postoperatively by blind raters.
18 (N = 6) and an expert/novice (N = 6) pair of raters.
19 chosen for accuracy spot-checks by reference raters.
20 eaching skills, as judged by medical student raters.
21 ic influences that are not shared with other raters.
22 orrelated (r = 0.79) overall measures across raters.
23 scored on the criteria by 2 of 3 independent raters.
24 esonance imaging features by two independent raters.
25 nario was evaluated by two interprofessional raters.
26 ities were analyzed by 2 independent blinded raters.
27 g it against the performance of expert human raters.
28 ility was described as the agreement between raters.
29 ohen kappa statistics for pairs of different raters.
30 e randomly selected patients (15 patients by rater 1, and 16 patients by rater 2), followed by USG.
31  = 0.69, P < .001; and r = 0.54, P < .01 for raters 1, 2, and 3, respectively).
32  (15 patients by rater 1, and 16 patients by rater 2), followed by USG.
33              Patients were assessed by blind raters 6, 12, 18, and 24 months after treatment.
34 he degree of agreement among 100 independent raters about the likely location to contain a target obj
35 c analysis of participants with a consistent rater across the trial revealed greater improvement in t
36                                              Raters agreed on 272 of 288 subscore ratings (94.4%).
37                               A plurality of raters agreed on operative performance ratings (OPRs) fo
38                                          Two raters agreed on subject eligibility on the basis of DSM
39                                    The inter-rater agreement between 2 experts and between 1 expert a
40                     There was moderate inter-rater agreement between the ophthalmologist and the firs
41                                        Inter-rater agreement between the ophthalmologist and the seco
42                                        Inter-rater agreement for identifying stacked breaths was high
43                                        Inter-rater agreement for ischemia improved from fair to moder
44                                    The intra-rater agreement for overall retinal pathologies, retinal
45 ll commonly used statistics assess the inter-rater agreement for the PASI.
46                                The physician rater agreement in choosing "yes" was highest for "routi
47         In patients with septic shock, inter-rater agreement of electrocardiogram interpretation for
48 h septic shock, we assessed intra- and inter-rater agreement of electrocardiogram interpretation, and
49 relation coefficient overestimated the inter-rater agreement of PASI as compared with the intra-class
50 is study aims to assess the intra- and inter-rater agreement of retinal pathologies observed on fundu
51                                        Inter-rater agreement on adult cases was: 0.84 for fibrosis, 0
52 nresponsiveness score showed excellent inter-rater agreement overall and at each of the five hospital
53  Manchester Triage System shows a wide inter-rater agreement range with a prevalence of good and very
54 atistics (r and rho) for assessing the inter-rater agreement reliability of asymmetrically distribute
55  analysis to patients with a PASI <20, inter-rater agreement severely decreased (r=0.38, rho=0.41, IC
56                             A moderate inter-rater agreement was achieved with Krippendorff's alpha o
57                                        Inter-rater agreement was fair for ischemia (kappa 0.29), mode
58  study of internal consistency, reliability, rater agreement, and the relationship with measures of m
59                                        Inter-rater agreement, measured using intraclass correlations,
60 tistic were used to analyze intra- and inter-rater agreement.
61                        Intra-rater and inter-rater agreements were assessed for the ophthalmologist a
62                                        Intra-rater and inter-rater agreements were assessed for the o
63 ts (three with the trait scored by an expert rater and one with the trait self-reported).
64 e (UPDRS) or Hoehn&Yahr (H&Y) staging is its rater and time-of-assessment dependency.
65                                      For 118 raters and 80 stimuli models, we used a 3D scanner to ex
66 ificant correlation between counts by visual raters and automated detection of ePVSs in the same sect
67 The classification reliability between human raters and between human raters and the neural network c
68 e conducted standardized interviews with the raters and made slight changes to the instrument.
69 owed substantial or better agreement between raters and modalities (kappa or ICC > 0.6).
70 rements exhibited strong correlation between raters and modalities, although not universally.
71 association between attractive VHI in female raters and preferences for attractive (low) WCR in male
72        Outcomes were assessed by independent raters and self-report at baseline, at weeks 8 and 16, a
73 ility between human raters and between human raters and the neural network classifier was better than
74                    The agreement between the raters and the USG was 0.37 using Spearman's rho.
75 ase in BMI increased the discrepancy between raters and USG by 0.26 cm (p = 0.012).
76 and were delineated on T2w MRI by two expert raters and were used to construct statistical shape atla
77 ronto-insula) were applied to 257 scans (two raters), and to a subset of 80 scans (three raters).
78 e of a well-defined glossary and training of raters are essential to ensure the optimal performance o
79       New lesion counts were compared across raters, as was classification of patients as MRI active
80                                  Independent raters assessed effect sizes, study quality, and allegia
81                                          Two raters assessed leptomeningeal collaterals on baseline C
82 onth intervals during a 1-year period, blind raters assessed the domains of suicidal behavior, aggres
83                              Two independent raters assessed the simulations and subjects completed a
84                              Two independent raters assessed the total thickness, morphology, and the
85 cale, and 10 were assessed twice by the same rater at different times to assess intrarater reliabilit
86          Assessments were completed by blind raters at baseline and 5 (midpoint), 10 (end of treatmen
87  is considerable variation in the ability of raters at different levels of training to identify inapp
88 he interobserver reliability between the two raters (attorneys) in interpretation of six items charac
89 only relatively simple, isolated measures of rater attractiveness.
90  and additional training was provided to the raters before the second exercise (E2).
91      Transcribed interviews were scored by a rater blind to group membership, and the morbid risks fo
92        Patients' symptoms were assessed by a rater blind to treatment group, and they underwent funct
93                Child behaviors were coded by raters blind to child diagnosis and regression history.
94 milton Depression Rating Scale (completed by raters blind to diagnosis and randomization status), sel
95 he Liebowitz Social Anxiety Scale applied by raters blind to group assignment.
96                In a multicenter, randomized, rater-blind clinical trial involving 239 outpatients wit
97  patient resuscitations and coded by trained raters blinded to condition assignment and study hypothe
98 nces were video recorded and assessed by two raters blinded to group allocation using the modified Ad
99 ; scores were obtained over the telephone by raters blinded to treatment assignment.
100 ession Rating Scale (HAM-D), administered by raters blinded to treatment.
101 less at both weeks 10 and 12, as assessed by raters blinded to treatment.
102   A multicenter randomized, sham-controlled, rater-blinded and patient-blinded trial was conducted fr
103                                 In a 3-year, rater-blinded phase 2 study (the CAMMS223 study) in pati
104                    OCD youth-in a randomized rater-blinded trial-were re-scanned after 12-14 weeks of
105                We performed a single-center, rater-blinded, balanced (1:1), split-face, placebo-contr
106           This was a pragmatic, patient- and rater-blinded, noninferiority trial of patients with maj
107                   We performed a split-body, rater-blinded, parallel-group, balanced (1:1), placebo-c
108  DESIGN Prospective, randomized, controlled, rater-blinded, split-scar trial.
109                             Four experienced raters blindly viewed videotapes of two patients and two
110 re observed in either sex between aspects of rater body shape and strength of preferences for attract
111 ttractive opposite-sex body shapes, and that rater body traits -with the exception of VHI in female r
112 ttractive traits in opposite-sex bodies, and raters' body shape, self-perceived attractiveness, and s
113 visual analyses were highly variable between raters but were superior to automated analyses.
114                                  Most visual raters classified 1 control (10%) and 8 AD (80%) and 2 F
115                                          The raters classified PET scans nearly equivalently using K(
116 ome measure was based on ADHD assessments by raters closest to the therapeutic setting, all dietary (
117  was classified differently by the clinician raters compared with the neuroradiologist or computer pr
118 chs-Carpenter Quality of Life Scale, which a rater completes on the basis of the patient's self-repor
119                                   Subjective rater confidence was analyzed by using a logistic regres
120 he revised COMFORT behavioral scale; and (2) Rater-context problems caused by (a) unfamiliarity with
121                                  Experienced raters could not distinguish actors and patients better
122                                   For pooled rater data, performance of computed radiography was comp
123                                    When most raters determined that demineralization was present at b
124                     When the majority of the raters determined that rachitic changes were absent at b
125 g, assessment of interrater reliability, and rater drift were systematically summarized.
126 g, assessment of interrater reliability, and rater drift.
127  Only three (5%) of the 63 articles reported rater drift.
128 ects of genes and not attributable to shared rater effects, clinical referral biases, or covariation
129 ity disorder in adults are best explained by rater effects.
130          This translates into 37% and 54% of raters' estimates falling within 2 and 3 cm of USG estim
131                                          Two raters examined the homogeneity of fat saturation to det
132 n Vicodin," and "OxyContin." Two independent raters examined the links generated in each search and r
133                              Two independent raters experienced in analyzing OCT images evaluated the
134 orangutans) whose well-being was assessed by raters familiar with the individual apes.
135               Variability among the clinical raters for estimates of new T2 lesions was affected most
136                    Sensitivity of individual raters for identifying inappropriate tests ranged from 4
137 nd sensitivity and specificity of individual raters for identifying inappropriate tests were calculat
138 -kappa between bedside nurses and references raters for the Richmond Agitation-Sedation Scale were 0.
139 dian redness score reported by the 2 blinded raters for the treatment and control sides was 2.0 (inte
140     The percentage agreement among physician raters for treatment decisions in 28 stroke patients was
141                             The two attorney raters found the description of acceptable outcome inade
142                               Expert surgeon raters from multiple institutions, blinded to resident c
143  Phase 2: OSAD was feasible, reliable [inter-rater ICC (intraclass correlation coefficient) = 0.88, t
144                 Using 10 judges adjusted for rater idiosyncrasies.
145  frame-by-frame by an experimentally blinded rater; (ii) automatic retrieval of proxies by TracMouse
146 s able to account for 58% of the variance in raters' impressions of previously unseen faces, and fact
147    Studies were independently reviewed by 12 raters in 3 groups using a systematic and blinded proces
148 ined objectively on coded images by multiple raters in a standardized fashion.
149 tment decisions was remarkably similar among raters in presence or absence of advanced healthcare dir
150 o-recorded, anonymized, and presented to the raters in random order.
151 stematic differences existed between the two raters in the following Short-Form 36 domains: physical
152                                       Masked raters, including an independent suicide monitoring boar
153         Visual rating was performed by three raters, including one neuroradiologist, after establishe
154 s more accurate in final infarct prediction, rater independent, and provided exclusive information on
155                            The CCFS scale is rater-independent and could be used in a multicentre con
156                                    It allows rater-independent quantification of bone metastasis in m
157                         Two trained, blinded raters independently assessed each station (inter-rater
158                                          Two raters independently assessed risk of bias.
159       For the reliability study, two to four raters independently diagnosed 18 patients on the basis
160                                        The 4 raters independently evaluated all websites belonging to
161 a prospective multicenter study, two blinded raters independently examined cervical spine magnetic re
162     In the second part of the study, the two raters independently performed the scratch test on separ
163                               Three pairs of raters independently reviewed 1,681 abstracts, with a sy
164                                          Two raters independently screened 3679 abstracts (which yiel
165 ed and the scratch test was performed by two raters independently, followed by ultrasound (USG) as th
166 0.99 across studies and follow-up) and inter-rater (intraclass correlation, 0.97) reliability were fo
167 are used less frequently, including multiple-rater kappa, are referenced and described briefly.
168 re, cluster-randomised controlled trial with raters masked to an online computer-generated randomisat
169 ing, first, visual assessment by independent raters masked to clinical information and, second, semia
170  Static PET images were evaluated by trained raters masked to clinical status and regional analysis.
171 a live, two-way video by remote, centralized raters masked to study design and treatment.
172                               In our 2 year, rater-masked, randomised controlled phase 3 trial, we en
173                               In our 2 year, rater-masked, randomised controlled phase 3 trial, we en
174  traits -with the exception of VHI in female raters- may not be good predictors of these preferences
175                                      Trained raters observed General Surgery residents performing lap
176 nt was defined as a score difference between raters of A versus C, D, or E or B versus D or E.
177 and the videotapes were rated by two blinded raters on a scale of 0=normal, 1=mild convergence spasm
178  reliability and intrarater reliability by 5 raters on a set of 80 total patient scars, 20 of which w
179                            Agreement between raters on the scratch test was very high, with an intra-
180 bility of criteria during retesting, between raters, over time, and across settings), 2) content vali
181 ts' symptom ratings indicate that some proxy raters overestimate whereas others underestimate patient
182 d for both scenarios and for each individual rater (p = .0061 to p < .0001).
183 an kappa statistic for all variables and all rater pairs for which a kappa could be calculated was 0.
184  = 0.82) and good to excellent for physician rater pairs.
185 mantic content analysis of the interviews, a rater panel trained in this method independently coded a
186  to examine associations between strength of rater preferences for attractive traits in opposite-sex
187               This effect was independent of raters' prior sexual experience and variation in female
188  scoring by expert versus novices-ie, expert raters produce significantly more consistent scoring tha
189 reliability, despite a median number of five raters (range=2-20).
190                              Two independent raters rated each videotape at 31 predetermined time poi
191        These results suggest that among male raters, rater self-perceived attractiveness and sociosex
192  disorders document adequately the number of raters, rater training, assessment of interrater reliabi
193                        There were high inter-rater reliabilities and correlations between the BCoS ap
194                                        Inter-rater reliability (0.85) and reliability of the pass/fai
195                All scales demonstrated inter-rater reliability (alpha = 0.58-0.76), though only the g
196                                        Inter-rater reliability (Cohen's kappa) was acceptable for bot
197 ach patient with a high intrarater and inter-rater reliability (intraclass correlation coefficients 0
198                                        Inter-rater reliability (kappa) of diagnosis among evaluators
199 o clinical assessment and had the best inter-rater reliability (mean kappa = 0.78) and diagnostic acc
200  were always assessable with excellent inter-rater reliability (numeric rating scale intraclass corre
201                                 Robust inter-rater reliability (r = 0.922-0.983) (kappa = 0.64-0.82)
202 tra-rater reliability was 0.89 and for inter-rater reliability 0.80.
203  95% confidence interval = 0.71, 0.75) inter-rater reliability among five investigators (two physicia
204                                        Inter-rater reliability analysis of the presence (rating 1 or
205                                        Inter-rater reliability and the reliability of the pass/fail d
206 ation coefficient was used to evaluate inter-rater reliability between the nurse and the physician fo
207                                        Inter-rater reliability confirmed the level of evidence.
208  method surveys; (ii) determination of inter-rater reliability for each type of pathology in each reg
209                                        Inter-rater reliability for noncardiologist raters was modest
210                                        Inter-rater reliability for the 2009 Appropriate Use Criteria
211            The Spearman's rho (Sp) for intra-rater reliability for the activity score was 0.96 (95% C
212 strating feasibility, we tested it for inter-rater reliability in 40 subjects.
213 d nine patients to estimate inter- and intra-rater reliability in two sessions.
214                                        Inter-rater reliability is good, with intra-class correlation
215 lass correlation coefficient (ICC) for inter-rater reliability of 0.76 (95% confidence interval (CI)
216 lass correlation coefficient (ICC) for inter-rater reliability of 0.86 for the activity score (95% co
217                 We sought to determine inter-rater reliability of the 2009 Appropriate Use Criteria f
218                                        Inter-rater reliability of the Appropriate Use Criteria was as
219 s of experts and provided evidence for inter-rater reliability of the formula.
220 t reports on a study that examined the inter-rater reliability of the Full Outline of Unresponsivenes
221 h pemphigus to estimate the inter- and intra-rater reliability of the PDAI and the recently described
222  the psychometric properties including inter-rater reliability of the prototype SPLINTS behavioural r
223         Future studies should evaluate inter-rater reliability of the SIOP scale.
224 m response modelling (Rasch analysis), inter-rater reliability testing, construct analysis and correl
225 he studies investigated the inter- and intra-rater reliability using the "kappa" statistic; the valid
226 lass correlation coefficient for MUNIX intra-rater reliability was 0.89 and for inter-rater reliabili
227                                        Inter-rater reliability was assessed using intraclass correlat
228       The tool was feasible to use and inter-rater reliability was excellent (r = 0.96, P < 0.001).
229                                        Inter-rater reliability was excellent.
230                                        Inter-rater reliability was good for all frameworks.
231                           In phase II, inter-rater reliability was measured using intra-class correla
232 s independently assessed each station (inter-rater reliability, 0.75).
233                        Test-retest and inter-rater reliability, construct validity (convergent, discr
234 content, construct validity, intra- or inter-rater reliability, or consistency (28.5%).
235 dies to develop and test: feasibility, inter-rater reliability, repeatability and external validity.
236                             Inter- and intra-rater reliability, validity, responsiveness, and complet
237 d for delirium features with excellent inter-rater reliability.
238 st content validity, acceptability and inter-rater reliability.
239  was found to have a higher inter- and intra-rater reliability.
240 ed by a second psychologist to measure inter-rater reliability.
241 o timepoints 3 months apart to confirm intra-rater reliability.
242 as 0.824, indicating excellent overall inter-rater reliability.
243 NAS for NAFLD and NASH with reasonable inter-rater reproducibility that should be useful for studies
244  Assessment Method for the ICU and reference rater, respectively.
245                 E1 and E2 involved 12 and 14 raters, respectively.
246                                  Experienced raters reviewed all scans for cortical infarctions, lacu
247            Meta-analysis of the three expert-rater-scored cohorts revealed six associated loci harbor
248 counted for 45% of the observed variation in raters' scores for the borderline videos (P < .001).
249 hese results suggest that among male raters, rater self-perceived attractiveness and sociosexuality a
250 index [VHI] in both sexes) and also measured rater self-perceived attractiveness and sociosexuality.
251                               Moreover, male rater self-perceived attractiveness was positively assoc
252                                Two physician raters separately assessed the patients' disease activit
253                  Results indicated that male rater sociosexuality scores were positively associated w
254                                        Intra-rater test-retest reliability demonstrated an ICC of 0.9
255 Unit also contained sufficient data on inter-rater/test-retest reliability, responsiveness, and feasi
256 ver agreement (median of 30 CT scans and six raters), the prevalence of all early infarction signs wa
257                                              Raters then assessed accuracy of imitation by reconstruc
258                                        Three raters then evaluated resident performance using edited
259                                     We asked raters to assess free-ranging rhesus macaques at two tim
260 ract and full-text review by two independent raters to identify suitable citations.
261 n=11) support the ability of two independent raters to obtain similar results when calculating total
262   The HAM-D was administered by telephone by raters to whom treatment was masked.
263                                              Rater training was described for 26 tools.
264 rs document adequately the number of raters, rater training, assessment of interrater reliability, an
265                          Reported methods of rater training, assessment of interrater reliability, an
266 ate their fellows more highly than the ROSCE raters, typically there was agreement between the progra
267 s selected a priori were estimated by expert raters unaware of case status.
268  no more than 1 hair whorl was present) by 2 raters unaware of sexual orientation.
269 and -Altman plot indicated that, on average, raters underestimated the distance from the right costal
270 of the remaining 312 articles, eight trained raters used a coding system to record standardized nursi
271 performance was assessed by a blinded expert rater using Global Operative Assessment of Laparoscopic
272  room (OR) assessed by 3 independent, masked raters using a previously validated task-specific assess
273 nd secondary outcomes from treatment-blinded raters using an intention-to-treat analysis.
274 (kappa) between bedside nurses and reference raters using the Confusion Assessment Method for the ICU
275 s to establish the interrater reliability of raters using the Hamilton Depression Rating Scale (HDRS)
276              We assessed concordance between raters using the kappa statistic and resolved disagreeme
277 -up period as assessed by blinded diagnostic raters using the telephone-administered Structured Clini
278                                      Between-rater variability in new T2 lesion counts may be reduced
279  noise distribution from published mRS inter-rater variability to generate an error percentage for "s
280 rey matter at C2/C3 level was close to inter-rater variability, reaching an accuracy (DSC) of 0.826 f
281 ements with accuracy equivalent to the inter-rater variability, with a Dice score (DSC) of 0.967 at C
282 ales may be affected by intrarater and inter-rater variability.
283 from bedside nurses and a reference-standard rater was very high for both the sedation scale and the
284         We conclude that the agreement among raters was good to excellent.
285  Inter-rater reliability for noncardiologist raters was modest (unweighted Cohen kappa, 0.51, 95% con
286                          The agreement among raters was similar with the GCS (kappa(w) = 0.82).
287 class correlation coefficients (ICC) between raters were 0.99 (95% confidence intervals (CI) 0.98-1.0
288                                     Clinical raters were blinded to treatment assignment.
289                      Patients, treaters, and raters were effectively masked.
290  excellent (kappa > 0.6), and scores between raters were highly correlated (r = 0.957).
291 All patients, investigators, and independent raters were masked to study treatment.
292                                     Only the raters were masked to treatment assignment.
293             It was particularly helpful when raters were uncertain in their clinical diagnosis.
294 lue (PPV and NPV) for the two Arabic CAM-ICU raters, where calculations were based on considering the
295      Face validity was assessed by surveying raters who used both the Adapted Cognitive Exam and Mini
296 clinical symptoms was evaluated by 3 blinded raters with a standardized video protocol and clinical r
297                                              Raters with different levels of training (including card
298  ADHD were comprehensively assessed by blind raters with structured diagnostic interviews.
299  benefit as assessed by central, independent raters with the Parkinson's disease-adapted scale for as
300 the overall SCAR score predicted whether the rater would consider the scar undesirable, with an odds

Technical terms (and usages) not yet included in WebLSD can be submitted via the "新規対訳" (new translation pair) form.
 