1 ates demonstrated intrarater (0.80-0.85) and interrater (0.60-0.72) reliability.
2 When compared with the intra- and interrater 95% limits of agreement (0.7% and 0.8%), acce
3 aseline imaging, visually confirmed with 86% interrater agreement (Cohen kappa = 0.69).
4 Detection recall (in percentage), interrater agreement (Gwet k), sensitivity, and specific
5 tifact (median score, 1; P = .17), with good interrater agreement (image quality, noise, and artifact
6 Excellent intra- and interrater agreement (intraclass correlation coefficient
7 an completion time per study was 20 minutes; interrater agreement (kappa statistic) reported by 9 rev
8 ication of RCM descriptors with fair to good interrater agreement (kappa statistic, >/=0.3) and indep
9 Interrater agreement (rwg) was moderate to very strong (
10 sts and referrals per participant, with fair interrater agreement about the suitability of WGS findin
11 Interrater agreement among experts was calculated using
12 purpose in this study was to investigate the interrater agreement among psychiatrists in psychiatric
13 Interrater agreement among the blinded physician readers
14 For interrater agreement analysis and interrater reliability
15 fficients of 0.844 (95% CI, 0.681-0.942) for interrater agreement and 0.856 (95% CI, 0.791-0.901) for
16 three breast imaging radiologists determined interrater agreement and inclusion into the study.
17 Interrater agreement and intrarater agreement were asses
18 Interrater agreement and the agreement between pathologi
19 e outcomes, but their predictive ability and interrater agreement are unclear in comprehensive clinic
20 The interrater agreement between allergists was substantial
21 riables, the kappa statistics used to assess interrater agreement between readers were fair (0.45, 0.
22 Interrater agreement between the 2 graders was moderate,
23 Interrater agreement between the 2 graders was moderate,
24 thin 1 cm in most cases and showed excellent interrater agreement compared with radiologists.
25 Conclusion: We observed high interrater agreement despite applying different visual r
26 common cases, there was strong (> or = 0.70) interrater agreement for 30 of 34 elements.
27 Interrater agreement for a given image's diagnostic cate
28 There was also a better interrater agreement for ADC map analysis than for DWI a
29 h 95% CI was calculated to assess intra- and interrater agreement for miTNM stages.
30 Interrater agreement for overall accuracy was moderate (
31 Interrater agreement for technologists was fair (Fleiss
32 Interrater agreement for the presence or absence of GSE
33 Interrater agreement for the primary endpoint was low, e
34 Interrater agreement in assessments of e-consult appropr
35 Interrater agreement in CrAgSQ reading was excellent (98
36 ement among the dentists is described by the interrater agreement kappa for several standard clinical
37 Interrater agreement of image quality was excellent (kap
38 Interrater agreement of MRI variables was assessed by us
39 rpose To compare the diagnostic accuracy and interrater agreement of multiparametric MRI and (68)Ga-P
40 and pharmacogenomic findings, and burden and interrater agreement of proposed clinical follow-up.
41 Interrater agreement of radiologic measurements on 4D CT
42 nalyzed content, with kappa coefficients for interrater agreement ranging from 0.82 to 0.93.
43 Interrater agreement rates were high with both systems (
44 Interrater agreement revealed a kappa value of 0.95 with
45 s quantified using Cohen kappa, a measure of interrater agreement that takes into account the possibi
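Entry 45 describes Cohen kappa as agreement corrected for the possibility of chance agreement. As a minimal sketch of that idea (assuming scikit-learn is available; the two rating vectors are invented for illustration):

```python
# Chance-corrected two-rater agreement (Cohen kappa), as in entry 45.
# kappa = (p_observed - p_chance) / (1 - p_chance); 1.0 is perfect
# agreement, 0.0 is agreement no better than chance.
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # hypothetical present/absent calls
rater_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

print(cohen_kappa_score(rater_a, rater_b))
```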
46 able and multivariable feature analysis, and interrater agreement using Light kappa were determined.
47 Interrater agreement values were 0.65 for fibrosis, 0.86
48 ; p < .001) and the weighted kappa score for interrater agreement was 0.92 (p < .001).
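Entry 48 reports a weighted kappa, the variant used for ordinal scales where near-misses should count less than large disagreements. A minimal sketch, again assuming scikit-learn; the scores are invented:

```python
# Weighted kappa for hypothetical ordinal ratings (e.g., a 0-4 severity
# scale); the weights penalize large disagreements more than near-misses.
from sklearn.metrics import cohen_kappa_score

rater_a = [0, 1, 2, 2, 3, 4, 1, 2, 3, 0]
rater_b = [0, 1, 2, 3, 3, 4, 2, 2, 4, 0]

print(cohen_kappa_score(rater_a, rater_b, weights="linear"))
print(cohen_kappa_score(rater_a, rater_b, weights="quadratic"))
```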
49 Cohen kappa for interrater agreement was 0.938.
50 Interrater agreement was 65% for rachitic changes (kappa
51 Interrater agreement was 75% and Fleiss k was 0.12 (P <
52 The interrater agreement was 81% and Fleiss k was 0.17 (P <
53 Interrater agreement was also excellent (kappa > 0.6), a
54 Interrater agreement was analyzed.
55 le detection rate between MR techniques, and interrater agreement was assessed by using Bland-Altman
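Entries 2 and 55 refer to Bland-Altman limits of agreement, which summarize interrater differences on a continuous measurement as bias +/- 1.96 SD of the differences. A sketch with invented data:

```python
# Bland-Altman 95% limits of agreement for two raters' continuous
# measurements (values are hypothetical, for illustration only).
import numpy as np

a = np.array([5.1, 4.8, 6.0, 5.5, 4.9, 6.2, 5.7, 5.0])
b = np.array([5.3, 4.7, 5.8, 5.6, 5.1, 6.1, 5.9, 4.9])

diff = a - b
bias = diff.mean()                       # systematic difference
sd = diff.std(ddof=1)
lower, upper = bias - 1.96 * sd, bias + 1.96 * sd  # 95% limits of agreement
print(f"bias={bias:.3f}, LoA=({lower:.3f}, {upper:.3f})")
```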
56 Interrater agreement was assessed by using kappa statist
57 Interrater agreement was assessed with the Fleiss multir
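Entries 51, 52, and 57 use the Fleiss multirater kappa, which extends chance-corrected agreement to more than two raters. A sketch assuming statsmodels; the subjects-by-raters matrix is hypothetical:

```python
# Fleiss kappa for three raters assigning categorical labels.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

ratings = np.array([  # rows = subjects, columns = raters, values = category
    [0, 0, 1],
    [1, 1, 1],
    [0, 1, 0],
    [2, 2, 2],
    [1, 1, 2],
])
table, _ = aggregate_raters(ratings)  # per-subject counts for each category
print(fleiss_kappa(table))
```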
58 ators independently rated study quality, and interrater agreement was calculated.
59 Interrater agreement was characterized by percent agreem
60 Interrater agreement was comparable between two versions
61 nonenhanced (TNE) images was determined, and interrater agreement was evaluated by using the Cohen k
62 Interrater agreement was evaluated using the intraclass
63 Interrater agreement was excellent. Sharp kernels benefit
64 Interrater agreement was good (kappa = 0.78).
65 The agreement in margin distance and interrater agreement was good (kappa = 0.81 and 0.912, r
66 Interrater agreement was high for all three measures, wi
67 Interrater agreement was high for aortic segmentation (D
68 Interrater agreement was high for aortic segmentation (D
69 As a result, interrater agreement was low for most adverse effects, r
70 Interrater agreement was moderate (ICC, 0.62) for MUL am
71 Almost perfect interrater agreement was observed (P > .91).
72 Conclusion Overall interrater agreement was similar between Bosniak version
73 Interrater agreement was similar for procedure-specific
74 Interrater agreement was substantial (kappa = 0.74; 95%
75 Interrater agreement was substantial compared with the p
76 The level of interrater agreement was very strong (kappa = 0.77-1).
77 SAS is both reliable (high interrater agreement) and valid (high correlation with t
78 Light kappa was used to estimate interrater agreement, and bootstrapped t statistics were
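Entries 46 and 78 use Light kappa, conventionally computed as the mean of Cohen kappa over all rater pairs. A sketch with a hypothetical subjects-by-raters matrix:

```python
# Light kappa: average pairwise Cohen kappa across all rater pairs.
from itertools import combinations
import numpy as np
from sklearn.metrics import cohen_kappa_score

ratings = np.array([  # rows = subjects, columns = raters
    [1, 1, 0],
    [0, 0, 0],
    [1, 1, 1],
    [2, 1, 2],
    [0, 0, 1],
])
pairwise = [cohen_kappa_score(ratings[:, i], ratings[:, j])
            for i, j in combinations(range(ratings.shape[1]), 2)]
print(np.mean(pairwise))
```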
79 eived classification was almost perfect (95% interrater agreement, Cohen kappa = 0.92; 95% CI, 0.86-0
80 Standard indices of interrater agreement, expressed as a kappa statistic, we
81 The CAINS structure, interrater agreement, test-retest reliability, and conve
82 rnal consistency, test-retest stability, and interrater agreement.
83 Interrater agreements were analyzed by using the Krippen
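Entry 83 analyzes agreement with Krippendorff alpha, which accommodates multiple raters, missing ratings, and several measurement levels. A sketch assuming the third-party krippendorff package (pip install krippendorff); the data are invented:

```python
# Krippendorff alpha for nominal ratings; np.nan marks a missing rating.
import numpy as np
import krippendorff

reliability_data = np.array([  # rows = raters, columns = units
    [1, 1, 0, 2, np.nan],
    [1, 0, 0, 2, 1],
    [1, 1, 0, 2, 1],
])
print(krippendorff.alpha(reliability_data=reliability_data,
                         level_of_measurement="nominal"))
```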
84 Interrater agreements were substantial (kappa = 0.65-0.7
85 Interrater analysis showed significant agreement in term
86 Interrater and inter-method agreements for collateral pe
87 ight-sided cardiovascular system, assess its interrater and intraobserver reproducibility, and examin
88 d twice in 2023, 1 month apart, to allow for interrater and intrarater agreement assessments.
89 Interrater and intrarater agreement for MAM total scores
90 TING, AND PARTICIPANTS: This cross-sectional interrater and intrarater agreement study was conducted
91 During the validation phase, reliability (interrater and intrarater agreement using intraclass cor
92 The interrater and intrarater intraclass correlation coeffic
93 The interrater and intrarater reliabilities of the multiple-
94 The interrater and intrarater reliabilities were good (0.95
95 Primary outcomes included interrater and intrarater reliability and convergent val
96 Main Outcomes and Measures: Interrater and intrarater reliability and convergent val
97 ement study, APPRAISE-AI demonstrated strong interrater and intrarater reliability and correlated wel
98 car rating assessments, and to determine the interrater and intrarater reliability of the SCAR scale.
99 ic regression for categorical variables, and interrater and intrarater reliability was assessed by us
100 ntraclass correlation coefficient ranges for interrater and intrarater reliability were 0.72 to 0.98
101 ntraclass correlation coefficient ranges for interrater and intrarater reliability were 0.74 to 1.00
102 Internal consistency, interrater and intrarater reliability, and criterion val
103 as a reliability study to assess clinicians' interrater and intrarater reliability, as well as the re
104 the remaining 60 of which were analyzed for interrater and intrarater reliability.
105 Interrater and test-retest consistency were determined.
106 Interrater and test-retest correlations were good or ver
107 Interrater and test-retest reliability for the total sco
108 Cohen kappa was computed for intrarater, interrater, and intermodality reliability.
109 On the standardized cases, interrater consensus was achieved on 82% of scores with
110 ANTS and NOTSS had the highest intertool and interrater consistency, respectively.
111 Interrater correlation coefficients for continuous NIHSS
112 There was an excellent interrater correlation in aortoseptal angle and aortic a
113 A very high interrater correlation of 0.95 was found.
114 Interrater correlation of map scoring ranged from weak t
115 Interrater correlation was high for SAS (r2 = .83; p < .
116 ed at least moderate interrater reliability (interrater ICC range, 0.42 [95% CI: 0.25, 0.57] to 0.80
117 .88]), and maximal stricture wall thickness (interrater ICC, 0.50 [95% CI: 0.34, 0.62])-were independ
118 ee continuous measurements-stricture length (interrater ICC, 0.64 [95% CI: 0.42, 0.81]), maximal asso
119 ]), maximal associated small bowel dilation (interrater ICC, 0.80 [95% CI: 0.67, 0.88]), and maximal
120 elop the expanded NAS (intrarater ICC, 0.90; interrater ICC, 0.80).
121 Hepatocyte ballooning items had similar interrater ICCs (0.68-0.79), including those extending s
122 Features with an interrater intraclass correlation coefficient (ICC) of 0
123 on the contralateral side in three patients (interrater kappa value, 0.80).
124 positive agreement, negative agreement, and interrater kappa values ranging from 17.9% to 42.9%, 91.
125 positive agreement, negative agreement, and interrater kappa values ranging from 87.5% to 93.1%, 95.
126 a 0.68), test-retest (Mak's rho = 0.76), and interrater (Mak's rho = 0.91) reliability were substanti
127 er (proSPI-s, saSPI-s, SPI-p, and SPI-i) and interrater (proSPI-s) reliability was demonstrated (all
128 Interrater reliabilities for intern and team technical s
129 Most body sites exhibited moderate to good interrater reliabilities for scale and erythema.
130 Internal consistencies and interrater reliabilities of factors were stable across a
131 this study was to determine test-retest and interrater reliabilities of RUCAM in retrospectively-ide
132 Interrater reliabilities were .82 or greater for all MRI
133 The interrater reliabilities were highest for the PDAI, foll
134 Analyses of interrater reliabilities, convergent validities accordin
135 physicians using structured implicit review (interrater reliability >0.90).
136 s identified features with at least moderate interrater reliability (ICC >=0.41) that were independen
137 observations demonstrated at least moderate interrater reliability (interrater ICC range, 0.42 [95%
138 dapted Cognitive Exam demonstrated excellent interrater reliability (intraclass correlation coefficie
139 ement among laboratories, calculated through interrater reliability (IRR) measures for the PCR test t
140 l records review studies, information on the interrater reliability (IRR) of the data is seldom repor
141 0 cohort year to assess standardized patient interrater reliability (IRR).
142 es also favored progression with substantial interrater reliability (kappa = 0.80 [95% CI, 0.61-0.99]
143 n = 97, respectively; P < .001), with higher interrater reliability (kappa = 0.91-0.95 for EPI-FLAIR
144 ccuracy of 94% (95% CI 88% to 97%), and high interrater reliability (kappa = 0.94; 95% CI 0.83-1.0).
145 5% confidence interval, 95-100%), and a high interrater reliability (kappa = 0.96; 95% confidence int
146 93%, specificities of 98% and 100%, and high interrater reliability (kappa = 0.96; 95% confidence int
147 Interrater reliability (Kendall's coefficient of concord
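Entry 147 uses Kendall's coefficient of concordance (W), which measures how consistently m raters rank n subjects. A sketch computed directly from the definition W = 12S / (m^2(n^3 - n)), with invented scores and no tied ranks; W near 1 indicates strong concordance:

```python
# Kendall's W from the rank sums of each subject across raters.
import numpy as np
from scipy.stats import rankdata

scores = np.array([  # m raters x n subjects (hypothetical)
    [7.0, 5.0, 9.0, 3.0],
    [6.5, 5.5, 8.0, 7.0],
    [7.5, 4.5, 9.5, 2.5],
])
ranks = np.vstack([rankdata(row) for row in scores])  # rank within each rater
m, n = ranks.shape
R = ranks.sum(axis=0)                 # rank sum per subject
S = ((R - R.mean()) ** 2).sum()       # spread of rank sums
W = 12 * S / (m**2 * (n**3 - n))      # 1 = complete concordance
print(W)
```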
148 Interrater reliability (reported as intraclass correlati
149 n atypical characteristics yielded very high interrater reliability (weighted kappa = 0.80; bootstrap
150 both the RASS and RS demonstrated excellent interrater reliability (weighted kappa, 0.91 and 0.94, r
151 Average neurologic soft sign scores (interrater reliability = 0.74) of women with PTSD owing
152 validity among guideline developers and good interrater reliability across trained reviewers.
153 An interrater reliability analysis was performed using the
154 For interrater agreement analysis and interrater reliability analysis, multirater Fleiss kappa
155 visual rating protocol achieved the highest interrater reliability and accuracy especially under low
156 ested the Sedation-Agitation Scale (SAS) for interrater reliability and compared it with the Ramsay s
157 ease (ILD), relatively little is known about interrater reliability and construct validity of HRCT-re
158 This study demonstrates that HRCT has good interrater reliability and correlates with indices of th
159 The RASS demonstrated excellent interrater reliability and criterion, construct, and fac
160 The RCT-PQRS had good interrater reliability and internal consistency.
161 The CPM (a) demonstrated satisfactory interrater reliability and internal consistency; (b) exh
162 sted for photographic equivalency as well as interrater reliability and intrarater reliability by 5 r
163 This study was conducted to determine the interrater reliability and predictive validity of a set
164 iew for Prodromal Syndromes showed promising interrater reliability and predictive validity.
165 Interrater reliability and responsiveness were each asse
166 There was improvement in the interrater reliability and the level of agreement from E
167 We tested interrater reliability and validity in determining the N
168 isease activity and damage demonstrated high interrater reliability and were shown to be comprehensiv
169 ndently scored by 3 dermatopathologists with interrater reliability assessed.
170 ssments of performance were recorded with an interrater reliability between reviewers of 0.99.
171 For a subsample, interrater reliability data were available.
172 tion coefficient scores were used to measure interrater reliability for both scenarios.
173 Good interrater reliability for BPII can be achieved when the
174 Interrater reliability for care-received classification
175 Interrater reliability for classification of care receiv
176 Further, the interrater reliability for diagnosing schizoaffective di
177 By contrast, the interrater reliability for erythema was higher during in
178 Expert interrater reliability for gamma spikes (percentage agre
179 Interrater reliability for infarct size between the core
180 and perform detection at the level of human interrater reliability for metastases larger than 6 mm. K
181 Interrater reliability for multiparametric MRI versus PE
182 Interrater reliability for multiphase CT angiography is
183 Interrater reliability for OSAD was excellent (ICC = 0.9
184 The interrater reliability for radiographs was dependent on
185 The interrater reliability for specific locations was also e
186 Interrater reliability for the Arabic CAM-ICU, overall a
187 Interrater reliability for the final NEATS instrument ha
188 There was excellent interrater reliability for the identification of localiz
189 Rheumatologists and patients had low interrater reliability for the presence of hypercholeste
190 Live scoring showed an excellent interrater reliability for the VES (intraclass correlati
191 udy nurses and intensivist demonstrated high interrater reliability for their CAM-ICU ratings with ka
192 ation over time was observed because of high interrater reliability from the outset (ie, a ceiling ef
193 ined raters achieved moderate to substantial interrater reliability in coding cases using 5 types of
194 There was an improvement in interrater reliability in the second phase of the study.
195 atric rheumatologists demonstrated excellent interrater reliability in their global assessments of ju
196 9.0 minutes per patient) and more objective (interrater reliability kappa 0.79 vs 0.45) than the conv
197 Interrater reliability measures across subgroup comparis
198 The objective was to assess the interrater reliability of ABSIS and PDAI scores and thei
199 predefined errors for each procedure minute (interrater reliability of error assessment r > 0.80).
200 Interrater reliability of handgrip dynamometry was very
201 Interrater reliability of handheld dynamometry was compa
202 The interrater reliability of many of the key concepts in ps
203 Criterion, construct, face validity, and interrater reliability of NICS over time and comparison
204 Interrater reliability of nodule detection with MR imagi
205 Interrater reliability of proSPI-s was assessed in 12 pa
206 How and to what extent interrater reliability of radiomics features vary in res
207 portray depressed patients to establish the interrater reliability of raters using the Hamilton Depr
208 A scoring cut point of 9 demonstrated good interrater reliability of the Cornell Assessment of Pedi
209 eline development, the external validity and interrater reliability of the instrument were evaluated.
210 Interrater reliability of the lesion assessment was high
211 e To evaluate the diagnostic performance and interrater reliability of the Liver Imaging Reporting an
212 Interrater reliability of the Medical Research Council s
213 Interrater reliability of the Medical Research Council-s
214 The interrater reliability of the modified Advocacy-Inquiry
215 The interrater reliability of the NDJ was excellent, with an
216 The interrater reliability of the overall scale showed an IC
217 The kappa coefficient for interrater reliability ranged from 0.41 (95% CI, 0.31 to
218 perienced PET researchers participated in an interrater reliability study using both (11)C-DTBZ K(1)
219 The poor interrater reliability suggests that if digital ulcerati
220 Outcome measures had higher interrater reliability than process measures.
221 Interrater reliability was (k = 0.79).
222 .54 (upper 95% confidence limit = 0.77); the interrater reliability was 0.45 (upper 95% confidence li
223 Interrater reliability was 0.536 (95% confidence interva
224 The Fleiss kappa for interrater reliability was 0.78 (95% CI: 0.77, 0.78), an
225 Interrater reliability was 0.91 (intraclass correlation
226 Excellent interrater reliability was achieved in all assessments (
227 Interrater reliability was assessed by percent concordan
228 Interrater reliability was assessed by using a set of te
229 The interrater reliability was assessed using intraclass cor
230 Interrater reliability was assessed using kappa statisti
231 Interrater reliability was assessed, using the five scal
232 Interrater reliability was determined by using a two-way
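Entry 232 determines reliability with a two-way ICC model. A sketch assuming the pingouin package; the ICC2 row corresponds to a two-way random-effects, absolute-agreement, single-rater model, and the data are invented:

```python
# Two-way ICC for two raters scoring four subjects (hypothetical data).
import pandas as pd
import pingouin as pg

df = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4],
    "rater":   ["A", "B"] * 4,
    "score":   [8.1, 7.9, 5.2, 5.6, 9.0, 8.7, 6.3, 6.1],
})
icc = pg.intraclass_corr(data=df, targets="subject",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])  # ICC2 = two-way random, single rater
```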
233 Interrater reliability was determined.
234 Interrater reliability was estimated using a multirater
235 MR images were assessed; interrater reliability was evaluated.
236 VTI interrater reliability was excellent (intraclass correla
237 Interrater reliability was excellent for all ancillary t
238 Interrater reliability was excellent for CSAMI Activity
239 Interrater reliability was excellent for methods requiri
240 We found that the interrater reliability was excellent with the FOUR score
241 Interrater reliability was fair (weighted kappa 0.47 and
242 ants in whom visual and SUVR data disagreed, interrater reliability was moderate (kappa = 0.44), but
243 For infarct size, interrater reliability was moderate (kappa = 0.675; 95%
244 Interrater reliability was moderate (kappa = 0.68) among
245 Fair-to-moderate interrater reliability was observed between the resident
246 Light generalization of Cohen k for interrater reliability was performed.
247 Interrater reliability was poorer (weighted kappa = 0.46
248 Excellent interrater reliability was present (correlation coeffici
249 was scored on a six-point ordinal scale, and interrater reliability was tested.
250 Interrater reliability was then explored.
251 ty including sensitivity and specificity and interrater reliability were determined using daily delir
252 Code frequency and interrater reliability were determined using NVIVO softw
253 Adequate levels of interrater reliability were found for 24 of 26 items.
254 ata also indicate the presence of acceptable interrater reliability when using the Ottawa GRS.
255 Indices had low-to-fair interrater reliability within institutions (kappa range,
256 hypodensities at baseline (kappa = 0.87 for interrater reliability).
257 Internal consistency, interrater reliability, and concurrent (criterion) valid
258 scores and dilutional CrAg titers, assessed interrater reliability, and determined the clinical corr
259 Secondary outcomes included feasibility, interrater reliability, and efficiency to complete bedsi
260 ted methods of rater training, assessment of interrater reliability, and rater drift in clinical tria
261 ted methods of rater training, assessment of interrater reliability, and rater drift were systematica
262 ber of raters, rater training, assessment of interrater reliability, and rater drift.
263 was found to have good internal consistency, interrater reliability, concurrent validity, high sensit
264 ree (14%) of the multicenter trials reported interrater reliability, despite a median number of five
265 ead to the diagnosis of a syndrome with high interrater reliability, good face validity, and high pre
266 Severity Scale was associated with excellent interrater reliability, moderate internal consistency, a
267 lass correlation coefficient as a measure of interrater reliability, NICS scored as high, or higher t
268 Interrater reliability, validity, and dimensionality of
269 ypes IV, VI, and VI demonstrated a sustained interrater reliability, with an ICC of 0.93 (95% CI, 0.8
270 There was high interrater reliability, with an intraclass correlation c
271 independently by a second researcher to test interrater reliability.
272 arater reliability and from 0.44 to 1.00 for interrater reliability.
273 ing concerns regarding testing confounds and interrater reliability.
274 Fourteen studies evaluated interrater reliability.
275 ment Scale all exhibited very high levels of interrater reliability.
276 ently scored by the other raters to evaluate interrater reliability.
277 ent two independent assessments to establish interrater reliability.
278 of care, but a major drawback has been poor interrater reliability.
279 up visits was created to test the intra- and interrater reliability.
280 terpretations of chest radiographs have poor interrater reliability.
281 tion, kappa coefficients were calculated for interrater reliability.
282 t group, nine features had at least moderate interrater reliability.
283 coefficients (ICCs) were computed to compare interrater reliability.
284 were performed in blinded fashion to assess interrater reliability.
285 category, and management for test-retest and interrater reliability.
286 Interrater reproducibility in visual scores was higher f
287 [mean age, 71.0 years +/- 6.1; 22 men]), the interrater reproducibility of the 4D flow MRI measures w
288 Intrarater and interrater reproducibility was >0.60 for 12 out of 12 an
289 Interrater reproducibility was assessed by two independe
290 lear medicine specialists showed substantial interrater reproducibility, exceeding that of PI-RADS ap
291 re performed by independent raters to assess interrater reproducibility.
292 face-to-face interviews was contrasted with interrater values, which were obtained by having a secon
293 raphs and CT scans (both by McNemar's test), interrater variability (by logistic regression), and the
294 The impact of interrater variability in tumor delineation upon the agr
295 Median interrater variability was 3.3% and 5.9% for THGr(Ce) an
296 Interrater variability was estimated with the kappa stat
297 The scores were tabulated, and interrater variability was measured for the common cases
298 suggesting adequate generalizability and low interrater variability.
299 Kappa statistics were used to evaluate interrater variability.
300 interest is a potential source of error and interrater variability.