1. When compared with the intra- and interrater 95% limits of agreement (0.7% and 0.8%), acce…
2. …tifact (median score, 1; P = .17), with good interrater agreement (image quality, noise, and artifact…
3. …an completion time per study was 20 minutes; interrater agreement (kappa statistic) reported by 9 rev…
4. …ication of RCM descriptors with fair to good interrater agreement (kappa statistic, ≥0.3) and indep…
5. …sts and referrals per participant, with fair interrater agreement about the suitability of WGS findin…
6. …purpose in this study was to investigate the interrater agreement among psychiatrists in psychiatric…
7. Interrater agreement and intrarater agreement were asses…
8. The interrater agreement between allergists was substantial…
9. …riables, the kappa statistics used to assess interrater agreement between readers were fair (0.45, 0.…
10. …common cases, there was strong (≥0.70) interrater agreement for 30 of 34 elements.
11. There was also a better interrater agreement for ADC map analysis than for DWI a…
12. …ement among the dentists is described by the interrater agreement kappa for several standard clinical…
13. …and pharmacogenomic findings, and burden and interrater agreement of proposed clinical follow-up.
14. …nalyzed content, with kappa coefficients for interrater agreement ranging from 0.82 to 0.93.
15. Interrater agreement revealed a kappa value of 0.95 with…
16. …s quantified using Cohen kappa, a measure of interrater agreement that takes into account the possibi… (see the kappa sketch after this list)
17. Interrater agreement values were 0.65 for fibrosis, 0.86…
18. …; p < .001) and the weighted kappa score for interrater agreement was 0.92 (p < .001).
19. Interrater agreement was 65% for rachitic changes (kappa…
20. Interrater agreement was also excellent (kappa > 0.6), a…
21. …le detection rate between MR techniques, and interrater agreement was assessed by using Bland-Altman…
22. Interrater agreement was assessed by using kappa statist…
23. …ators independently rated study quality, and interrater agreement was calculated.
24. …nonenhanced (TNE) images was determined, and interrater agreement was evaluated by using the Cohen k…
25. Interrater agreement was good (kappa = 0.78).
26. The agreement in margin distance and interrater agreement was good (kappa = 0.81 and 0.912, r…
27. As a result, interrater agreement was low for most adverse effects, r…
28. Interrater agreement was similar for procedure-specific…
29. The level of interrater agreement was very strong (kappa = 0.77-1).
30. SAS is both reliable (high interrater agreement) and valid (high correlation with t…
31. Standard indices of interrater agreement, expressed as a kappa statistic, we…
32. The CAINS structure, interrater agreement, test-retest reliability, and conve…
33. …rnal consistency, test-retest stability, and interrater agreement.
34. Interrater agreements were analyzed by using the Krippen…
35. Interrater analysis showed significant agreement in term…
36. The interrater and intrarater intraclass correlation coeffic…
37. The interrater and intrarater reliabilities of the multiple-…
38. The interrater and intrarater reliabilities were good (0.95…
39. Main Outcomes and Measures: Interrater and intrarater reliability and convergent val…
40. Primary outcomes included interrater and intrarater reliability and convergent val…
41. …car rating assessments, and to determine the interrater and intrarater reliability of the SCAR scale.
42. …ic regression for categorical variables, and interrater and intrarater reliability was assessed by us…
43. …ntraclass correlation coefficient ranges for interrater and intrarater reliability were 0.72 to 0.98…
44. Internal consistency, interrater and intrarater reliability, and criterion val…
45. …as a reliability study to assess clinicians' interrater and intrarater reliability, as well as the re…
46. …the remaining 60 of which were analyzed for interrater and intrarater reliability.
47. Interrater and test-retest consistency were determined.
48. Interrater and test-retest correlations were good or ver…
49. Interrater and test-retest reliability for the total sco…
50. On the standardized cases, interrater consensus was achieved on 82% of scores with…
51. ANTS and NOTSS had the highest intertool and interrater consistency, respectively.
52. There was an excellent interrater correlation in aortoseptal angle and aortic a…
53. Interrater correlation of map scoring ranged from weak t…
54. Interrater correlation was high for SAS (r2 = .83; p < .…
55. …on the contralateral side in three patients (interrater kappa value, 0.80).
56. …positive agreement, negative agreement, and interrater kappa values ranging from 17.9% to 42.9%, 91.…
57. …positive agreement, negative agreement, and interrater kappa values ranging from 87.5% to 93.1%, 95.…
58. …er (proSPI-s, saSPI-s, SPI-p, and SPI-i) and interrater (proSPI-s) reliability was demonstrated (all…
59. Interrater reliabilities for intern and team technical s…
60. Internal consistencies and interrater reliabilities of factors were stable across a…
61. …this study was to determine test-retest and interrater reliabilities of RUCAM in retrospectively-ide…
62. Interrater reliabilities were .82 or greater for all MRI…
63. The interrater reliabilities were highest for the PDAI, foll…
64. Analyses of interrater reliabilities, convergent validities accordin…
65. …physicians using structured implicit review (interrater reliability >0.90).
66. …dapted Cognitive Exam demonstrated excellent interrater reliability (intraclass correlation coefficie…
67. …l records review studies, information on the interrater reliability (IRR) of the data is seldom repor…
68. …ccuracy of 94% (95% CI 88% to 97%), and high interrater reliability (kappa = 0.94; 95% CI 0.83-1.0).
69. …93%, specificities of 98% and 100%, and high interrater reliability (kappa = 0.96; 95% confidence int…
70. …5% confidence interval, 95-100%), and a high interrater reliability (kappa = 0.96; 95% confidence int…
71. Interrater reliability (Kendall's coefficient of concord…
72. Interrater reliability (reported as intraclass correlati…
73. …n atypical characteristics yielded very high interrater reliability (weighted kappa = 0.80; bootstrap…
74. …both the RASS and RS demonstrated excellent interrater reliability (weighted kappa, 0.91 and 0.94, r…
75. Average neurologic soft sign scores (interrater reliability = 0.74) of women with PTSD owing…
76. …ested the Sedation-Agitation Scale (SAS) for interrater reliability and compared it with the Ramsay s…
77. …ease (ILD), relatively little is known about interrater reliability and construct validity of HRCT-re…
78. This study demonstrates that HRCT has good interrater reliability and correlates with indices of th…
79. The RASS demonstrated excellent interrater reliability and criterion, construct, and fac…
80. The RCT-PQRS had good interrater reliability and internal consistency.
81. The CPM (a) demonstrated satisfactory interrater reliability and internal consistency; (b) exh…
82. …sted for photographic equivalency as well as interrater reliability and intrarater reliability by 5 r…
83. This study was conducted to determine the interrater reliability and predictive validity of a set…
84. …iew for Prodromal Syndromes showed promising interrater reliability and predictive validity.
85. There was improvement in the interrater reliability and the level of agreement from E…
86. We tested interrater reliability and validity in determining the N…
87. …isease activity and damage demonstrated high interrater reliability and were shown to be comprehensiv…
88. …ndently scored by 3 dermatopathologists with interrater reliability assessed.
89. …ssments of performance were recorded with an interrater reliability between reviewers of 0.99.
90. For a subsample, interrater reliability data were available.
91. …tion coefficient scores were used to measure interrater reliability for both scenarios.
92. Good interrater reliability for BPII can be achieved when the…
93. Further, the interrater reliability for diagnosing schizoaffective di…
94. By contrast, the interrater reliability for erythema was higher during in…
95. Interrater reliability for multiphase CT angiography is…
96. Interrater reliability for OSAD was excellent (ICC = 0.9…
97. The interrater reliability for radiographs was dependent on…
98. The interrater reliability for specific locations was also e…
99. Interrater reliability for the Arabic CAM-ICU, overall a…
100. There was excellent interrater reliability for the identification of localiz…
101. Rheumatologists and patients had low interrater reliability for the presence of hypercholeste…
102. Live scoring showed an excellent interrater reliability for the VES (intraclass correlati…
103. …udy nurses and intensivist demonstrated high interrater reliability for their CAM-ICU ratings with ka…
104. …ation over time was observed because of high interrater reliability from the outset (ie, a ceiling ef…
105. There was an improvement in interrater reliability in the second phase of the study.
106. …atric rheumatologists demonstrated excellent interrater reliability in their global assessments of ju…
107. …9.0 minutes per patient) and more objective (interrater reliability kappa 0.79 vs 0.45) than the conv…
108. Interrater reliability measures across subgroup comparis…
109. …predefined errors for each procedure minute (interrater reliability of error assessment r > 0.80).
110. Interrater reliability of handgrip dynamometry was very…
111. Interrater reliability of handheld dynamometry was compa…
112. The interrater reliability of many of the key concepts in ps…
113. Criterion, construct, face validity, and interrater reliability of NICS over time and comparison…
114. Interrater reliability of nodule detection with MR imagi…
115. Interrater reliability of proSPI-s was assessed in 12 pa…
116. …portray depressed patients to establish the interrater reliability of raters using the Hamilton Depr…
117. A scoring cut point of 9 demonstrated good interrater reliability of the Cornell Assessment of Pedi…
118. …e To evaluate the diagnostic performance and interrater reliability of the Liver Imaging Reporting an…
119. Interrater reliability of the Medical Research Council s…
120. Interrater reliability of the Medical Research Council-s…
121. The interrater reliability of the modified Advocacy-Inquiry…
122. The interrater reliability of the NDJ was excellent, with an…
123. The interrater reliability of the overall scale showed an IC…
124. The kappa coefficient for interrater reliability ranged from 0.41 (95% CI, 0.31 to…
125. …perienced PET researchers participated in an interrater reliability study using both (11)C-DTBZ K(1)…
126. The poor interrater reliability suggests that if digital ulcerati…
127. Outcome measures had higher interrater reliability than process measures.
128. Interrater reliability was (k = 0.79).
129. ….54 (upper 95% confidence limit = 0.77); the interrater reliability was 0.45 (upper 95% confidence li…
130. Interrater reliability was 0.536 (95% confidence interva…
131. Interrater reliability was 0.91 (intraclass correlation…
132. Excellent interrater reliability was achieved in all assessments (…
133. Interrater reliability was assessed by using a set of te…
134. The interrater reliability was assessed using intraclass cor…
135. Interrater reliability was assessed using kappa statisti…
136. Interrater reliability was assessed, using the five scal…
137. Interrater reliability was determined by using a two-way…
138. Interrater reliability was estimated using a multirater…
139. MR images were assessed; interrater reliability was evaluated.
140. VTI interrater reliability was excellent (intraclass correla…
141. Interrater reliability was excellent for CSAMI Activity…
142. We found that the interrater reliability was excellent with the FOUR score…
143. …ants in whom visual and SUVR data disagreed, interrater reliability was moderate (kappa = 0.44), but…
144. Interrater reliability was poorer (weighted kappa = 0.46…
145. Excellent interrater reliability was present (correlation coeffici…
146. …was scored on a six-point ordinal scale, and interrater reliability was tested.
147. Interrater reliability was then explored.
148. …ty including sensitivity and specificity and interrater reliability were determined using daily delir…
149. Adequate levels of interrater reliability were found for 24 of 26 items.
150. …ata also indicate the presence of acceptable interrater reliability when using the Ottawa GRS.
151. …hypodensities at baseline (kappa = 0.87 for interrater reliability).
152. Internal consistency, interrater reliability, and concurrent (criterion) valid…
153. Secondary outcomes included feasibility, interrater reliability, and efficiency to complete bedsi…
154. …ted methods of rater training, assessment of interrater reliability, and rater drift in clinical tria…
155. …ted methods of rater training, assessment of interrater reliability, and rater drift were systematica…
156. …ber of raters, rater training, assessment of interrater reliability, and rater drift.
157. …was found to have good internal consistency, interrater reliability, concurrent validity, high sensit…
158. …ree (14%) of the multicenter trials reported interrater reliability, despite a median number of five…
159. …ead to the diagnosis of a syndrome with high interrater reliability, good face validity, and high pre…
160. …Severity Scale was associated with excellent interrater reliability, moderate internal consistency, a…
161. …lass correlation coefficient as a measure of interrater reliability, NICS scored as high, or higher t…
162. Interrater reliability, validity, and dimensionality of…
163. …ypes IV, VI, and VI demonstrated a sustained interrater reliability, with an ICC of 0.93 (95% CI, 0.8…
164. There was high interrater reliability, with an intraclass correlation c…
165. …ing concerns regarding testing confounds and interrater reliability.
166. Fourteen studies evaluated interrater reliability.
167. …ment Scale all exhibited very high levels of interrater reliability.
168. …ently scored by the other raters to evaluate interrater reliability.
169. …ent two independent assessments to establish interrater reliability.
170. …of care, but a major drawback has been poor interrater reliability.
171. …were performed in blinded fashion to assess interrater reliability.
172. …category, and management for test-retest and interrater reliability.
173. …independently by a second researcher to test interrater reliability.
174. …arater reliability and from 0.44 to 1.00 for interrater reliability.
175. …face-to-face interviews was contrasted with interrater values, which were obtained by having a secon…
176. …raphs and CT scans (both by McNemar's test), interrater variability (by logistic regression), and the…
177. Median interrater variability was 3.3% and 5.9% for THGr(Ce) an…
178. Interrater variability was estimated with the kappa stat…
179. The scores were tabulated, and interrater variability was measured for the common cases…
180. Kappa statistics were used to evaluate interrater variability.
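
Kappa sketch (for entry 16): that snippet describes Cohen kappa as an interrater agreement measure that corrects for agreement expected by chance. As a minimal illustration of that correction, the following Python sketch computes observed agreement, chance agreement, and kappa for two raters; the function name and the rating data are hypothetical and are not taken from any of the studies excerpted above.

    # Minimal sketch of Cohen's kappa for two raters.
    # The labels below are made-up example data, not from any study listed above.
    from collections import Counter

    def cohen_kappa(rater_a, rater_b):
        """Cohen's kappa for two equal-length lists of categorical ratings."""
        assert len(rater_a) == len(rater_b)
        n = len(rater_a)
        # Observed agreement: proportion of items both raters label identically.
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Chance agreement: sum over categories of the product of marginal frequencies.
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(rater_a) | set(rater_b))
        return (p_o - p_e) / (1 - p_e)

    # Example: two raters scoring 10 images as "normal" or "abnormal".
    a = ["normal", "abnormal", "normal", "normal", "abnormal",
         "normal", "abnormal", "abnormal", "normal", "normal"]
    b = ["normal", "abnormal", "normal", "abnormal", "abnormal",
         "normal", "abnormal", "normal", "normal", "normal"]
    print(round(cohen_kappa(a, b), 2))  # 0.58: 80% raw agreement, 52% expected by chance

In practice, library implementations such as scikit-learn's sklearn.metrics.cohen_kappa_score give the same unweighted result and also support the linearly and quadratically weighted variants reported in several of the snippets above.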