
Corpus search results (sorted by the first word after the keyword)

Click a serial number to open the corresponding PubMed entry.
1 ropathologists from both universities (n = 5 raters).
2  raters), and to a subset of 80 scans (three raters).
3  Method for the ICU and 47% by the reference rater.
4  .04) was achieved with the more experienced rater.
5 ent, the final opinion was made by the third rater.
6 nario was evaluated by two interprofessional raters.
7 ities were analyzed by 2 independent blinded raters.
8 g it against the performance of expert human raters.
9 ility was described as the agreement between raters.
10 ohen kappa statistics for pairs of different raters.
11 pa) statistic to determine consistency among raters.
12 y an expert neuroradiologist and 3 clinician raters.
13 e Examination were performed by 76 different raters.
14 ensus was defined as agreement by ≥ 75% of raters.
15 mmunity social workers and assessed by blind raters.
16 ich were independently evaluated by pairs of raters.
17 rrelations, was extracted independently by 2 raters.
18 es and direct observations of city blocks by raters.
19 ficantly more consistent scoring than novice raters.
20 rved and rated by patient actors and faculty raters.
21 ion pre, intra, and postoperatively by blind raters.
22 (N = 6) and an expert/novice (N = 6) pair of raters.
23 rds as well as musical sophistication of the raters.
24 ), as assessed by blinded central AIMS video raters.
25 Cs ranging from 0.96 to 0.99 with 5 separate raters.
26 D revealed by fundus photography and trained raters.
27 scored on the criteria by 2 of 3 independent raters.
28 esonance imaging features by two independent raters.
29 e randomly selected patients (15 patients by rater 1, and 16 patients by rater 2), followed by USG.
30  = 0.69, P < .001; and r = 0.54, P < .01 for raters 1, 2, and 3, respectively).
31  (15 patients by rater 1, and 16 patients by rater 2), followed by USG.
32              Patients were assessed by blind raters 6, 12, 18, and 24 months after treatment.
33 he degree of agreement among 100 independent raters about the likely location to contain a target obj
34 c analysis of participants with a consistent rater across the trial revealed greater improvement in t
35                                              Raters agreed on 272 of 288 subscore ratings (94.4%).
36                               A plurality of raters agreed on operative performance ratings (OPRs) fo
37                                              Raters agreed on the appropriateness of 94% of e-consult
38    Positive and negative agreement and inter-rater agreement (kappa) were calculated.
39 hat the revised MBT-r has an excellent inter-rater agreement and has the ability to identify a subgro
40                                    The inter-rater agreement between 2 experts and between 1 expert a
41                    There was very good inter-rater agreement between the imaging-based diagnosis and
42                     There was moderate inter-rater agreement between the ophthalmologist and the firs
43                                        Inter-rater agreement between the ophthalmologist and the seco
44                                        Inter-rater agreement for identifying stacked breaths was high
45                                        Inter-rater agreement for ischemia improved from fair to moder
46                                    The intra-rater agreement for overall retinal pathologies, retinal
47                                    The inter-rater agreement for PHOMS was 97.9% (kappa = 0.951).
48 ll commonly used statistics assess the inter-rater agreement for the PASI.
49                                The physician rater agreement in choosing "yes" was highest for "routi
50         In patients with septic shock, inter-rater agreement of electrocardiogram interpretation for
51 h septic shock, we assessed intra- and inter-rater agreement of electrocardiogram interpretation, and
52 relation coefficient overestimated the inter-rater agreement of PASI as compared with the intra-class
53 is study aims to assess the intra- and inter-rater agreement of retinal pathologies observed on fundu
54 nresponsiveness score showed excellent inter-rater agreement overall and at each of the five hospital
55  Manchester Triage System shows a wide inter-rater agreement range with a prevalence of good and very
56 atistics (r and rho) for assessing the inter-rater agreement reliability of asymmetrically distribute
57  analysis to patients with a PASI <20, inter-rater agreement severely decreased (r=0.38, rho=0.41, IC
58                             A moderate inter-rater agreement was achieved with Krippendorff's alpha o
59                                        Inter-rater agreement was excellent for FBB and FMM.
60                                        Inter-rater agreement was fair for ischemia (kappa 0.29), mode
61                                        Inter-rater agreement, measured using intraclass correlations,
62 tistic were used to analyze intra- and inter-rater agreement.
63                        Intra-rater and inter-rater agreements were assessed for the ophthalmologist a
64                                              Raters also judged each article on indicators of good re
65                                        Intra-rater and inter-rater agreements were assessed for the o
66                                        Inter-rater and interitem reliability were 0.883 and 0.986.
67 ts (three with the trait scored by an expert rater and one with the trait self-reported).
68 3, IQR 2-5, range 0-15) with excellent inter-rater and test-retest reliability (kappa=0.86, 95% CI 0.
69 e (UPDRS) or Hoehn&Yahr (H&Y) staging is its rater and time-of-assessment dependency.
70                                      For 118 raters and 80 stimuli models, we used a 3D scanner to ex
71 ificant correlation between counts by visual raters and automated detection of ePVSs in the same sect
72 The classification reliability between human raters and between human raters and the neural network c
73 owed substantial or better agreement between raters and modalities (kappa or ICC > 0.6).
74 rements exhibited strong correlation between raters and modalities, although not universally.
75 association between attractive VHI in female raters and preferences for attractive (low) WCR in male
76 een carefully validated against expert human raters and previous methods, and can easily be extended
77        Outcomes were assessed by independent raters and self-report at baseline, at weeks 8 and 16, a
78 ility between human raters and between human raters and the neural network classifier was better than
79                    The agreement between the raters and the USG was 0.37 using Spearman's rho.
80 ase in BMI increased the discrepancy between raters and USG by 0.26 cm (p = 0.012).
81 and were delineated on T2w MRI by two expert raters and were used to construct statistical shape atla
82 ronto-insula) were applied to 257 scans (two raters), and to a subset of 80 scans (three raters).
83       New lesion counts were compared across raters, as was classification of patients as MRI active
84                                  Independent raters assessed effect sizes, study quality, and allegia
85                                          Two raters assessed leptomeningeal collaterals on baseline C
86                              Two independent raters assessed the simulations and subjects completed a
87                              Two independent raters assessed the total thickness, morphology, and the
88 m the reanalysis of images (intra- and inter-rater assessment), by calculating the coefficient of var
89 cale, and 10 were assessed twice by the same rater at different times to assess intrarater reliabilit
90          Assessments were completed by blind raters at baseline and 5 (midpoint), 10 (end of treatmen
91  is considerable variation in the ability of raters at different levels of training to identify inapp
92 he interobserver reliability between the two raters (attorneys) in interpretation of six items charac
93 only relatively simple, isolated measures of rater attractiveness.
94 ge analysis of both scans was performed by a rater blind to the participant group.
95        Patients' symptoms were assessed by a rater blind to treatment group, and they underwent funct
96 milton Depression Rating Scale (completed by raters blind to diagnosis and randomization status), sel
97 he Liebowitz Social Anxiety Scale applied by raters blind to group assignment.
98                              Two independent raters blind to the clinical data determined the presenc
99                In a multicenter, randomized, rater-blind clinical trial involving 239 outpatients wit
100  patient resuscitations and coded by trained raters blinded to condition assignment and study hypothe
101 nces were video recorded and assessed by two raters blinded to group allocation using the modified Ad
102 less at both weeks 10 and 12, as assessed by raters blinded to treatment.
103 ession Rating Scale (HAM-D), administered by raters blinded to treatment.
104                                     We did a rater-blinded 2-year extension study at 36 centres in 15
105   A multicenter randomized, sham-controlled, rater-blinded and patient-blinded trial was conducted fr
106                                 In a 3-year, rater-blinded phase 2 study (the CAMMS223 study) in pati
107                    OCD youth-in a randomized rater-blinded trial-were re-scanned after 12-14 weeks of
108                We performed a single-center, rater-blinded, balanced (1:1), split-face, placebo-contr
109           This was a pragmatic, patient- and rater-blinded, noninferiority trial of patients with maj
110                   We performed a split-body, rater-blinded, parallel-group, balanced (1:1), placebo-c
111  DESIGN Prospective, randomized, controlled, rater-blinded, split-scar trial.
112 re observed in either sex between aspects of rater body shape and strength of preferences for attract
113 ttractive opposite-sex body shapes, and that rater body traits -with the exception of VHI in female r
114 ttractive traits in opposite-sex bodies, and raters' body shape, self-perceived attractiveness, and s
115 visual analyses were highly variable between raters but were superior to automated analyses.
116                                  Two surgeon raters categorized comments relating to operative skills
117 troencephalography reviews were performed by raters certified in standardized continuous electroencep
118                                  Most visual raters classified 1 control (10%) and 8 AD (80%) and 2 F
119 ome measure was based on ADHD assessments by raters closest to the therapeutic setting, all dietary (
120  was classified differently by the clinician raters compared with the neuroradiologist or computer pr
121            Our first study explores an inter-rater comparison, showing that smaller lesions cannot be
122 chs-Carpenter Quality of Life Scale, which a rater completes on the basis of the patient's self-repor
123                                   Subjective rater confidence was analyzed by using a logistic regres
124 he revised COMFORT behavioral scale; and (2) Rater-context problems caused by (a) unfamiliarity with
125                Scan-rescan, intra- and inter-rater COV values were 3.2%, 4.4% and 5.3%, respectively.
126                                   For pooled rater data, performance of computed radiography was comp
127 al spine conditions using time-consuming and rater-dependent manual techniques.
128                                    When most raters determined that demineralization was present at b
129                     When the majority of the raters determined that rachitic changes were absent at b
130                         The neuroradiologist raters did not achieve comparable results to the softwar
131 ity disorder in adults are best explained by rater effects.
132          This translates into 37% and 54% of raters' estimates falling within 2 and 3 cm of USG estim
133                                          Two raters examined the homogeneity of fat saturation to det
134                              Two independent raters experienced in analyzing OCT images evaluated the
135 orangutans) whose well-being was assessed by raters familiar with the individual apes.
136               Variability among the clinical raters for estimates of new T2 lesions was affected most
137                    Sensitivity of individual raters for identifying inappropriate tests ranged from 4
138 nd sensitivity and specificity of individual raters for identifying inappropriate tests were calculat
139 dian redness score reported by the 2 blinded raters for the treatment and control sides was 2.0 (inte
140     The percentage agreement among physician raters for treatment decisions in 28 stroke patients was
141                             The two attorney raters found the description of acceptable outcome inade
142                               Expert surgeon raters from multiple institutions, blinded to resident c
143  Phase 2: OSAD was feasible, reliable [inter-rater ICC (intraclass correlation coefficient) = 0.88, t
144                 Using 10 judges adjusted for rater idiosyncrasies.
145  frame-by-frame by an experimentally blinded rater; (ii) automatic retrieval of proxies by TracMouse
146 s able to account for 58% of the variance in raters' impressions of previously unseen faces, and fact
147    Studies were independently reviewed by 12 raters in 3 groups using a systematic and blinded proces
148 ined objectively on coded images by multiple raters in a standardized fashion.
149  should also be paid to the mix of cases and raters in order to assure fair judgments about operative
150 tment decisions was remarkably similar among raters in presence or absence of advanced healthcare dir
151 o-recorded, anonymized, and presented to the raters in random order.
152 tour and from manual contours provided by 10 raters in regard to four intensity discretization scheme
153         Visual rating was performed by three raters, including one neuroradiologist, after establishe
154 s more accurate in final infarct prediction, rater independent, and provided exclusive information on
155                            The CCFS scale is rater-independent and could be used in a multicentre con
156                                    It allows rater-independent quantification of bone metastasis in m
157                                          Two raters independently assessed risk of bias.
158                                        The 4 raters independently evaluated all websites belonging to
159 a prospective multicenter study, two blinded raters independently examined cervical spine magnetic re
160     In the second part of the study, the two raters independently performed the scratch test on separ
161                               Three pairs of raters independently reviewed 1,681 abstracts, with a sy
162                                          Two raters independently screened 3679 abstracts (which yiel
163 ed and the scratch test was performed by two raters independently, followed by ultrasound (USG) as th
164 0.99 across studies and follow-up) and inter-rater (intraclass correlation, 0.97) reliability were fo
165                                        Inter-rater k agreement was used to determine the strength of
166 ng-RADS classification was assessed using bi-rater kappa.
167 verall observed agreement of 93.4% among the raters (kappa = 0.84, P < 0.0001).
168           The central review committee, EDSS raters, laboratory personnel, and radiologists were mask
169 re, cluster-randomised controlled trial with raters masked to an online computer-generated randomisat
170 ing, first, visual assessment by independent raters masked to clinical information and, second, semia
171  Static PET images were evaluated by trained raters masked to clinical status and regional analysis.
172 a live, two-way video by remote, centralized raters masked to study design and treatment.
173                               In our 2 year, rater-masked, randomised controlled phase 3 trial, we en
174                               In our 2 year, rater-masked, randomised controlled phase 3 trial, we en
175  traits -with the exception of VHI in female raters- may not be good predictors of these preferences
176 eristics as determined by separate groups of raters (n = 2,751) across 14 nations.
177     We examined the extent to which American raters' (n = 515) perceptions of the benefit-generation
178                                      Trained raters observed General Surgery residents performing lap
179 and the videotapes were rated by two blinded raters on a scale of 0=normal, 1=mild convergence spasm
180  reliability and intrarater reliability by 5 raters on a set of 80 total patient scars, 20 of which w
181 tumor delineation upon the agreement between raters on radiomics features was examined via interclass
182                            Agreement between raters on the scratch test was very high, with an intra-
183 or each dilemma was rated by two independent raters on wisdom criteria, i.e., metacognitive humility,
184 y be helpful to arbitrate disagreement among raters or in borderline cases.
185 bility of criteria during retesting, between raters, over time, and across settings), 2) content vali
186 ts' symptom ratings indicate that some proxy raters overestimate whereas others underestimate patient
187 liability showed significant agreement among raters (P < 0.001).
188 mantic content analysis of the interviews, a rater panel trained in this method independently coded a
189                                          One rater performed region-of-interest analysis in the corti
190  to examine associations between strength of rater preferences for attractive traits in opposite-sex
191               This effect was independent of raters' prior sexual experience and variation in female
192  scoring by expert versus novices-ie, expert raters produce significantly more consistent scoring tha
193                              Two independent raters rated each videotape at 31 predetermined time poi
194        These results suggest that among male raters, rater self-perceived attractiveness and sociosex
195                        There were high inter-rater reliabilities and correlations between the BCoS ap
196                                        Inter-rater reliabilities for each parameter using this system
197                                        Inter-rater reliability (0.85) and reliability of the pass/fai
198                All scales demonstrated inter-rater reliability (alpha = 0.58-0.76), though only the g
199                                        Inter-rater reliability (Cohen's kappa) was acceptable for bot
200 ach patient with a high intrarater and inter-rater reliability (intraclass correlation coefficients 0
201 ecision of data entry was confirmed by inter-rater reliability (IRR).
202                                        Inter-rater reliability (kappa) of diagnosis among evaluators
203  were always assessable with excellent inter-rater reliability (numeric rating scale intraclass corre
204 tra-rater reliability was 0.89 and for inter-rater reliability 0.80.
205                                        Inter-rater reliability among the participants' ratings was go
206                                        Inter-rater reliability analysis of the presence (rating 1 or
207                              Means for inter-rater reliability and accuracy were all the same (p = 0.
208 dity, internal consistency, inter- and intra-rater reliability and sensitivity to change.
209                                        Inter-rater reliability and the reliability of the pass/fail d
210 ation coefficient was used to evaluate inter-rater reliability between the nurse and the physician fo
211                                        Inter-rater reliability confirmed the level of evidence.
212 g a pre-determined extraction form and inter-rater reliability evaluations were conducted.
213                                        Inter-rater reliability for blinded diagnosis was kappa of 0.8
214  method surveys; (ii) determination of inter-rater reliability for each type of pathology in each reg
215                                        Inter-rater reliability for noncardiologist raters was modest
216                                        Inter-rater reliability for the 2009 Appropriate Use Criteria
217 strating feasibility, we tested it for inter-rater reliability in 40 subjects.
lass correlation coefficient (ICC) for inter-rater reliability of 0.76 (95% confidence interval (CI)
219                         There was poor inter-rater reliability of Alpha/Beta classification (mean kap
220  to assess system sensitivity; examine inter-rater reliability of ratings; investigate concurrent con
221                 We sought to determine inter-rater reliability of the 2009 Appropriate Use Criteria f
222                                        Inter-rater reliability of the Appropriate Use Criteria was as
223 s of experts and provided evidence for inter-rater reliability of the formula.
224 t reports on a study that examined the inter-rater reliability of the Full Outline of Unresponsivenes
225 h pemphigus to estimate the inter- and intra-rater reliability of the PDAI and the recently described
226  the psychometric properties including inter-rater reliability of the prototype SPLINTS behavioural r
227         Future studies should evaluate inter-rater reliability of the SIOP scale.
228                                        Inter-rater reliability showed significant agreement among rat
229 m response modelling (Rasch analysis), inter-rater reliability testing, construct analysis and correl
230 he studies investigated the inter- and intra-rater reliability using the "kappa" statistic; the valid
231 lass correlation coefficient for MUNIX intra-rater reliability was 0.89 and for inter-rater reliabili
232                                        Inter-rater reliability was assessed by calculating intraclass
233                                        Inter-rater reliability was assessed using intraclass correlat
234                                    The inter-rater reliability was excellent (kappa = 0.815, P < 0.00
235       The tool was feasible to use and inter-rater reliability was excellent (r = 0.96, P < 0.001).
236                                        Inter-rater reliability was excellent [0.87 (95%-confidence in
237                                        Inter-rater reliability was excellent.
238                                        Inter-rater reliability was good for all frameworks.
239                                        Inter-rater reliability was high for the checklist scores (0.8
240                           In phase II, inter-rater reliability was measured using intra-class correla
241                           The level of inter-rater reliability was similar when rating teams and atte
242                              Intra and inter-rater reliability were largely "excellent" (intraclass c
243 .86), and 0.81 (95% CI, 0.71-0.86) for intra-rater reliability, and 0.74 (95% CI, 0.63-0.80), 0.67 (9
244                        Test-retest and inter-rater reliability, construct validity (convergent, discr
245 content, construct validity, intra- or inter-rater reliability, or consistency (28.5%).
246 dies to develop and test: feasibility, inter-rater reliability, repeatability and external validity.
247                             Inter- and intra-rater reliability, validity, responsiveness, and complet
248 d for delirium features with excellent inter-rater reliability.
249 st content validity, acceptability and inter-rater reliability.
250  was found to have a higher inter- and intra-rater reliability.
251 ed by a second psychologist to measure inter-rater reliability.
252 ly assigned these categories to assess inter-rater reliability.
253 I, 0.57-0.91) representing substantial inter-rater reliability.
254 0.75) and 0.68 (95% CI, 0.56-0.77) for inter-rater reliability.
255 o timepoints 3 months apart to confirm intra-rater reliability.
256  Assessment Method for the ICU and reference rater, respectively.
257                                  Experienced raters reviewed all scans for cortical infarctions, lacu
258            Meta-analysis of the three expert-rater-scored cohorts revealed six associated loci harbor
259 counted for 45% of the observed variation in raters' scores for the borderline videos (P < .001).
260 hese results suggest that among male raters, rater self-perceived attractiveness and sociosexuality a
261 index [VHI] in both sexes) and also measured rater self-perceived attractiveness and sociosexuality.
262                               Moreover, male rater self-perceived attractiveness was positively assoc
263                  Results indicated that male rater sociosexuality scores were positively associated w
264                                        Intra-rater test-retest reliability demonstrated an ICC of 0.9
265 Unit also contained sufficient data on inter-rater/test-retest reliability, responsiveness, and feasi
266                             Across all three raters, the frequency of moderate to severe dyspnea was
267                                              Raters then assessed accuracy of imitation by reconstruc
268                                     We asked raters to assess free-ranging rhesus macaques at two tim
269 ract and full-text review by two independent raters to identify suitable citations.
270 n=11) support the ability of two independent raters to obtain similar results when calculating total
271                                              Rater training was described for 26 tools.
272 ate their fellows more highly than the ROSCE raters, typically there was agreement between the progra
273 s selected a priori were estimated by expert raters unaware of case status.
274  no more than 1 hair whorl was present) by 2 raters unaware of sexual orientation.
275 and -Altman plot indicated that, on average, raters underestimated the distance from the right costal
276 of the remaining 312 articles, eight trained raters used a coding system to record standardized nursi
277 performance was assessed by a blinded expert rater using Global Operative Assessment of Laparoscopic
278  room (OR) assessed by 3 independent, masked raters using a previously validated task-specific assess
279 nd secondary outcomes from treatment-blinded raters using an intention-to-treat analysis.
280              We assessed concordance between raters using the kappa statistic and resolved disagreeme
281 -up period as assessed by blinded diagnostic raters using the telephone-administered Structured Clini
282                                      Between-rater variability in new T2 lesion counts may be reduced
283  noise distribution from published mRS inter-rater variability to generate an error percentage for "s
284 rey matter at C2/C3 level was close to inter-rater variability, reaching an accuracy (DSC) of 0.826 f
285 ements with accuracy equivalent to the inter-rater variability, with a Dice score (DSC) of 0.967 at C
286 ales may be affected by intrarater and inter-rater variability.
287  Inter-rater reliability for noncardiologist raters was modest (unweighted Cohen kappa, 0.51, 95% con
288 class correlation coefficients (ICC) between raters were 0.99 (95% confidence intervals (CI) 0.98-1.0
289                                     Clinical raters were blinded to treatment assignment.
290                      Patients, treaters, and raters were effectively masked.
291  excellent (kappa > 0.6), and scores between raters were highly correlated (r = 0.957).
292 s features vary in response to the number of raters were largely feature-dependent.
293 All patients, investigators, and independent raters were masked to study treatment.
294                                     Only the raters were masked to treatment assignment.
295 lue (PPV and NPV) for the two Arabic CAM-ICU raters, where calculations were based on considering the
296      Face validity was assessed by surveying raters who used both the Adapted Cognitive Exam and Mini
297 clinical symptoms was evaluated by 3 blinded raters with a standardized video protocol and clinical r
298                                              Raters with different levels of training (including card
299  benefit as assessed by central, independent raters with the Parkinson's disease-adapted scale for as
300 the overall SCAR score predicted whether the rater would consider the scar undesirable, with an odds

 