1 as = 0 cm intraobserver versus bias = 0.3 cm
interobserver).
2 lowest for T2 mapping (intraobserver, 0.05;
interobserver, 0.09; interimage, 0.1) followed by EGE (i
3 , 0.1) followed by EGE (intraobserver, 0.03;
interobserver, 0.14; interimage, 0.14), with improved de
4 y of IRA versus ACUT2E (intraobserver, 0.11;
interobserver, 0.22; interimage, 0.12) and T2-weighted S
5 2) and T2-weighted STIR (intraobserver, 0.1;
interobserver, 0.32; interimage, 0.1).
6 n efficiency(r): intraobserver: 0.984-0.991;
interobserver: 0.969-0.971; all P < 0.001).
7 server, interobserver, interacquisition, and
interobserver-acquisition (different observers and diffe
8 Overall,
interobserver-acquisition percent differences were signi
9 , interacquisition (for both observers), and
interobserver-acquisition reproducibilities (for both ob
10 was comparable for both observers, with good
interobserver agreement ( TAB temporal artery biopsy sub
11 l interpretation of tracer binding gave good
interobserver agreement (0.80 +/- 0.045), this was impro
12 and 0.69-0.74, respectively, with excellent
interobserver agreement (intraclass correlation coeffici
13 ed technique has excellent intraobserver and
interobserver agreement (intraclass correlation coeffici
14 Interobserver agreement (intraclass correlation coeffici
15 dds ratio of 8 (95% CI: 3, 18) and only fair
interobserver agreement (kappa = 0.32; 95% CI: 0.16, 0.4
16 0.27 vs. 1.3 +/- 0.45, P < 0.01) and better
interobserver agreement (kappa = 0.5 vs. 0.2) than SPECT
17 experienced observers showed only a moderate
interobserver agreement (kappa = 0.51).
18 curve (AUC) of 0.78 and 0.80-0.88, with good
interobserver agreement (kappa = 0.70).
19 There was almost perfect
interobserver agreement (kappa = 0.82; 95% CI: 0.72, 0.9
20 Intraobserver agreement (kappa = 1) and
interobserver agreement (kappa = 0.932) were excellent.
21 Interobserver agreement (kappa) was 0.619 (range, 0.469-
22 Fair to good
interobserver agreement (kappa, 0.72) was observed for d
23 d pulmonary radiologists with almost perfect
interobserver agreement (kappa=0.83).
24 SWI yielded higher
interobserver agreement (R(2) = 0.99, P < .001; 95% CI:
25 validation (R(2) = 0.84-0.92) and excellent
interobserver agreement (R(2) = 0.9928).
26 dependently analyzed by readers 1 and 2, and
interobserver agreement (weighted kappa) was calculated.
27 There was high
interobserver agreement [(Equation is included in full-t
28 There was excellent intra- and
interobserver agreement according to intraclass correlat
29 tantial intraobserver agreement but moderate
interobserver agreement among glaucoma specialists using
30 Interobserver agreement among the 5 glaucoma specialists
31 elation coefficient (ICC) was used to assess
interobserver agreement among three readers evaluating 2
32 Intra- and
interobserver agreement and agreement between observers
33 Visual
interobserver agreement and correlations with quantitati
34 The
interobserver agreement and diagnostic performance of ea
35 Interobserver agreement and differences in measurements
36 diagnostic precision was calculated based on
interobserver agreement and kappa scores.
37 Interobserver agreement between blinded and nonblinded i
38 The
interobserver agreement between CellaVision and microsco
39 Additionally,
interobserver agreement between PGA and PtGA scores was
40 ique agreement between MR imaging and US and
interobserver agreement between the two primary MR imagi
41 Interobserver agreement coefficients did not reach the s
42 Intra- and
interobserver agreement coefficients for dimension, volu
43 Both intra- and
interobserver agreement differed by lesion size, margin
44 Moreover, the
interobserver agreement for BMI in this study proved exc
45 Interobserver agreement for classifying sources of infec
46 Interobserver agreement for clinical grading of Fuchs' d
47 Interobserver agreement for CT features was assessed, as
48 ccuracy (area under the ROC curve, 0.97) and
interobserver agreement for detecting postoperative chol
49 and both diagnostic accuracy and intra- and
interobserver agreement for diagnosis of PD with 7-T MR
50 xpert reviewer and measurement of intra- and
interobserver agreement for each technique.
51 Interobserver agreement for Hvisu was moderate (kappa =
52 Intra- and
interobserver agreement for OMRs ranged from moderate to
53 tch on source of infection was obtained, the
interobserver agreement for plausibility of infection wa
54 Interobserver agreement for scoring RVI was substantial
55 Results
Interobserver agreement for some features was strong (eg
56 The
interobserver agreement for their depiction was excellen
57 There was
interobserver agreement for TIRM score grading (kappa =
58 ient, 0.76) for image quality score and good
interobserver agreement for vasculature measurements (in
59 There was slight to fair
interobserver agreement in assessment of most signs and
60 are packages are insufficient to obtain high
interobserver agreement in both devices except in patien
61 Kappa value for
interobserver agreement in detecting CC fractures was 0.
62 ned-rank test were used to assess intra- and
interobserver agreement in image quality, alignment, and
63 The overall
interobserver agreement in IPF diagnosis was similar for
64 ficantly different PFS, and showed very good
interobserver agreement in patients with metastatic RCC
65 Interobserver agreement in quantifying contact between t
66 Flicker chronoscopy demonstrated acceptable
interobserver agreement in structural progression detect
67 n with ischemic stroke, and to determine the
interobserver agreement in the assessment of carotid web
68 The
interobserver agreement in the automated BSI interpretat
69 Interobserver agreement in the detection of carotid webs
70 The
interobserver agreement in the image quality score was g
71 ible in 99% (198 of 200) of examinations and
interobserver agreement in the visual grading of splenic
72 Interobserver agreement is strong for some features, but
73 he accuracy, reproducibility, and intra- and
interobserver agreement of a computer-based quantitative
74 Intra- and
interobserver agreement of aortic volume calculation was
75 Intra- and
interobserver agreement of aortic volume were calculated
76 SK-like melanomas, patient demographics, and
interobserver agreement of criteria were evaluated.
77 odest levels of diagnostic accuracy, and the
interobserver agreement of most individual criteria was
78 the sensitivity, diagnostic confidence, and
interobserver agreement of the diagnosis of ischemia, a
79 stology-derived tumor volumes and intra- and
interobserver agreement of the PET-derived volumes were
80 th manual delineation, and intraobserver and
interobserver agreement of using the program were evalua
81 across different evaluators, and only a fair
interobserver agreement rate could be detected.
82 47, 0.966) and 0.945 (95% CI: 0.933, 0.955);
interobserver agreement rates were 0.954 (95% CI: 0.943,
83 nt scores that assess nutritional status and
interobserver agreement regarding nursing diagnoses will
84 Results Accuracy and
interobserver agreement regarding the nine CT signs of I
85 f 123] vs 94.3% [116 of 123], P = .002), and
interobserver agreement significantly increased, from mo
86 hnically reliable than VCTE and had a higher
interobserver agreement than liver biopsy.
87 t is important to evaluate intraobserver and
interobserver agreement using visual field (VF) testing
88 Interobserver agreement was 0.76.
89 The pretraining
interobserver agreement was 72% (kappa = 0.58), and the
90 The mean percentage
interobserver agreement was 96% for PET/CT and 99% for P
91 was 72% (kappa = 0.58), and the posttraining
interobserver agreement was 98% (kappa = 0.97) (P = .04)
92 Excellent
interobserver agreement was achieved (95% confidence int
93 Interobserver agreement was almost perfect (0.99; 95% co
94 Interobserver agreement was assessed and receiver operat
95 Intra- and
interobserver agreement was assessed by intraclass corre
96 Interobserver agreement was assessed by two separate obs
97 Interobserver agreement was assessed by using an intracl
98 Interobserver agreement was assessed by using kappa stat
99 Interobserver agreement was assessed using kappa statist
100 Interobserver agreement was assessed with kappa statisti
101 Interobserver agreement was calculated by using Cohen ka
102 Interobserver agreement was calculated.
103 Interobserver agreement was checked, and diagnostic accu
104 ax was slightly inferior, but the intra- and
interobserver agreement was clearly superior.
105 Interobserver agreement was determined between 3 patholo
106 Interobserver agreement was determined between 3 patholo
107 Interobserver agreement was determined by the Cohen kapp
108 Interobserver agreement was determined; imaging findings
109 Intraobserver and
interobserver agreement was estimated using kappa statis
110 Interobserver agreement was estimated using the kappa st
111 Interobserver agreement was evaluated by using kappa sta
112 Interobserver agreement was evaluated by weighted kappa
113 Interobserver agreement was evaluated.
114 Interobserver agreement was excellent ( ICC intraclass c
115 The
interobserver agreement was excellent (kappa = 0.85).
116 Interobserver agreement was excellent (kappa = 0.98).
117 Interobserver agreement was excellent for detecting micr
118 Interobserver agreement was excellent for tumor staging
119 Interobserver agreement was excellent for whole-tumor vo
120 Interobserver agreement was expressed as a concordant pe
121 For planar imaging,
interobserver agreement was fair after 48 h (kappa = 0.3
122 Interobserver agreement was fair regarding questions abo
123 Overall
interobserver agreement was good (kappa = 0.76; 95% conf
124 Interobserver agreement was good for T2-weighted MR chol
125 Interobserver agreement was high for all superficial FAZ
126 The intra- and
interobserver agreement was high using this method.
127 Interobserver agreement was higher with MR elastography
128 Interobserver agreement was kappa = 0.88 for NLM and kap
129 g a 2-level scale across 18 centers, but the
interobserver agreement was low for the (18)F-FMISO and
130 Interobserver agreement was moderate for diagnostic SPEC
131 Interobserver agreement was moderate for Nakanuma stage
132 Results No substantial difference in
interobserver agreement was observed between sessions, a
133 Good
interobserver agreement was observed for the Likert scal
134 Interobserver agreement was substantial (k = 0.76).
135 Interobserver agreement was substantial for staining (ka
136 Interobserver agreement was substantial in images classi
137 Interobserver agreement was substantial or excellent for
138 Interobserver agreement was substantial to almost perfec
139 Interobserver agreement was substantial with respect to
140 Interobserver agreement was substantial, and the median
141 In a patient-level analysis,
interobserver agreement was very good for assessing perc
142 For AVM characterization,
interobserver agreement was very good to excellent, and
143 The kappa values for
interobserver agreement were 0.84 for focal uptake and 0
144 Diagnostic accuracy and
interobserver agreement were calculated, and multivariat
145 Sensitivity, specificity, and intra- and
interobserver agreement were calculated.
146 on correlation and Bland-Altman analysis for
interobserver agreement were used.
147 Examination success rate,
interobserver agreement, and diagnostic accuracy for fib
148 sis included diagnostic accuracy parameters,
interobserver agreement, and receiver operating characte
149 Intraobserver agreement,
interobserver agreement, and repeatability of MRI-PDFF a
150 Intraobserver agreement,
interobserver agreement, and repeatability showed a sign
151 correlation coefficients were used to assess
interobserver agreement, as appropriate.
152 arly- and late-response assessment with good
interobserver agreement, is becoming widely used both in
153 With overlapping phenotypes and modest
interobserver agreement, OSSN and benign conjunctival le
154 and patient-by-patient validation, with good
interobserver agreement.
155 ndently scored by six liver pathologists for
interobserver agreement.
156 ns showed lower accuracy and/or poor to fair
interobserver agreement.
157 , and the kappa statistic was used to assess
interobserver agreement.
158 alculated as a measure of the reliability of
interobserver agreement.
159 nt further classification and result in poor
interobserver agreement.
160 analysis, logistic regression analysis, and
interobserver agreement.
161 There was substantial
interobserver agreement.
162 nosed in female patients with a fair to good
interobserver agreement.
163 The k coefficients were calculated for
interobserver agreement.
164 ontributed to significantly higher levels of
interobserver agreement.
165 ortic repair, with excellent correlation and
interobserver agreement.
166 en's kappa was used to assess reliability of
interobserver agreement.
167 ckground regions showed excellent intra- and
interobserver agreement.
168 Most dermoscopic criteria had poor to fair
interobserver agreement.
169 eceiver operating characteristic curves, and
interobserver agreement/variability.
170 d follow-up imaging showed better intra- and
interobserver agreements (k = 0.77 and 0.60, respectivel
171 Interobserver agreements for identifying baseline photog
172 There were better intra- than
interobserver agreements in the measurement of single lo
173 Intra- and
interobserver agreements that used nonenhanced thick CT
174 Intra- and
interobserver agreements were good and comparable for re
175 Repeatability (intra- and
interobserver agreements) and reproducibility (intersoft
176 t differences were significantly higher than
interobserver and interacquisition percent differences (
177 giomyolipoma, hypovascularity-which has high
interobserver and intermachine agreement-of solid small
178 nd Fleiss methodology were used to determine
interobserver and intermachine agreement.
179 appa coefficients were computed to determine
interobserver and intermodality agreement.
180 Interobserver and interprotocol agreement was assessed b
181 Interobserver and interprotocol agreement was good to ve
182 Interobserver and intraobserver agreement based on the 1
183 Interobserver and intraobserver agreement in the grading
184 Interobserver and intraobserver reliabilities were almos
185 cy differs in index versus revision TKA, and
interobserver and intraobserver reliability for assessme
186 d nonspecific synovitis, with almost perfect
interobserver and intraobserver reliability.
187 Interobserver and intraobserver reproducibility were des
188 nt imaging and reimaging reproducibility and
interobserver and intraobserver variability.
189 T2 mapping and EGE had best agreement (
interobserver bias: T2-weighted STIR, -0.9 [mean differe
190 WSI/TM diagnoses were compared, followed by
interobserver comparison with GTC.
191 signs on video clips was high (>/=89%), with
interobserver concordance being substantial to high (AC1
192 Because
interobserver concordance between independent pathologis
193 Interobserver concordance between the diagnoses made by
194 Mean
interobserver concordance between WSI, TM, and GTC was 9
195 Intra- and
interobserver concordance for cytopathology was similarl
196 Mean
interobserver concordance was 94% for WSI and GTC and 94
197 Interobserver consistency for the subarachnoid space mea
198 g aneurysms (P < .002), with high intra- and
interobserver correlation coefficients for size, volume,
199 Mean
interobserver correlation was 0.9 for image perception a
200 A strong positive
interobserver correlation was obtained for choroidal thi
201 Intra- and
interobserver correlations were greater than 0.95 for al
202 The
interobserver COV ranged from 2.23% to 5.18%, and the CO
203 Additionally, the
interobserver diagnosis agreement increased from 74% to
204 hydatidiform moles continues to suffer from
interobserver diagnostic variability, emphasizing the ne
205 Average
interobserver difference for diameters and volumes was 2
206 For AR, the Bland-Altman mean
interobserver difference in RVol was -0.7 mL (95% confid
207 dependent of tumor size, with no significant
interobserver differences (P > .10).
208 Interobserver differences in endoscopic assessments cont
209 ADC values were tested for
interobserver differences, as well as for differences re
210 Statistical analysis was used to correlate
interobserver findings and compare choroidal thickness a
211 (ROI) and asking two different radiologists (
interobserver) for their opinion.
212 Three dermatopathologists established
interobserver ground truth consensus (GTC) diagnosis for
213 er ICC 0.75; density intraobserver ICC 0.86,
interobserver ICC 0.73.
214 erver ICC 0.71; shape intraobserver ICC 0.88
interobserver ICC 0.75; density intraobserver ICC 0.86,
215 ass correlation coefficient (ICC) 0.96-0.97,
interobserver ICC 0.88; modified ABC/2 intraobserver ICC
216 modified ABC/2 intraobserver ICC 0.95-0.97,
interobserver ICC 0.91; SAS intraobserver ICC 0.95-0.99,
217 r ICC 0.91; SAS intraobserver ICC 0.95-0.99,
interobserver ICC 0.93; largest diameter: (visual) inter
218 Interobserver IMA-IHE reproducibility was good for cross
219 statistically significant difference between
interobservers in SI values.
220 Intraobserver,
interobserver, interacquisition, and interobserver-acqui
221 Interobserver, intraobserver, and interimage variability
222 , good intraobserver (k = 0.70) and moderate
interobserver (k = 0.56) agreements were noted.
223 Interobserver kappa value range for individual features
224 = 0.92; 95% CI: 0.83, 1.00) and substantial
interobserver (kappa = 0.72; 95% CI: 0.58, 0.87) agreeme
225 nique (kappa = 0.77; 95% CI: 0.63, 0.90) and
interobserver (kappa = 0.76; 95% CI: 0.61, 0.91) agreeme
226 the RG-ROI method showed highest intra- and
interobserver levels of agreement compared with Elip-ROI
227 Interobserver luminal measurements were reliable (intrac
228 technical side, BPIVOL and BPISUV showed an
interobserver maximum difference of 3.5%, and their comp
229 ificant difference between SI of HC types of
interobservers (O1-O2) and ROI sizes (4-8 mm) (p>0.05 fo
230 We also determined the
interobserver reliability between the two raters (attorn
231 intraobserver reliability, and intermediate
interobserver reliability but unclear interpretability a
232 Interobserver reliability for DT imaging measurements wa
233 There was moderate
interobserver reliability for the diagnosis of glaucoma
234 Interobserver reliability in determining hernia recurren
235 The sensitivity, specificity, and
interobserver reliability of MRDTI were determined.
236 Additionally, the intra- and
interobserver reliability of the wireless EPT device was
237 (EPT) devices and to evaluate the intra- and
interobserver reliability of the wireless EPT device.
238 bone and good-to-excellent in type IV bone;
interobserver reliability was evaluated as fair-to-good
239 highly correlated between both methods, and
interobserver reliability was excellent.
240 Intraobserver and
interobserver reliability were determined for these meas
241 ORAD) has adequate validity, responsiveness,
interobserver reliability, and interpretability and uncl
242 to interpretation and require validation of
interobserver reliability.
243 exhibited a high degree of intraobserver and
interobserver repeatability.
244 's exact test were used to assess intra- and
interobserver reproducibilities and to compare response
245 ; it also demonstrates acceptable intra- and
interobserver reproducibilities for HCC lesions treated
246 Image interpretation yielded high
interobserver reproducibility (kappa >/= .80).
247 G) AHEP-0731 trial in an attempt to validate
interobserver reproducibility and ability to monitor res
248 The initial system showed moderate
interobserver reproducibility and prognostic stratificat
249 trating high intraobserver repeatability and
interobserver reproducibility for all the examined data.
250 Interobserver reproducibility for both acquisitions was
251 We tested
interobserver reproducibility in recognition of tissue a
252 Interobserver reproducibility of (68)Ga-DOTATATE PET/CT
253 is to assess the diagnostic performance and
interobserver reproducibility of FFRangio in patients wi
254 Intravisit and
interobserver reproducibility of SFCT measurements were
255 ervers using the developed criteria, and the
interobserver reproducibility of the measurements was re
256 Purpose To determine the
interobserver reproducibility of the Prostate Imaging Re
257 Intra- and
interobserver reproducibility was calculated by using th
258 Volumetric analysis demonstrated better
interobserver reproducibility when compared with single-
259 asurements of TLF10 and FTV10 exhibited high
interobserver reproducibility, within +/-0.77% and +/-3.
260 5% CI confidence interval : 0.78, 0.96), and
interobserver values were 0.93 for FMBV fractional movin
261 ed with TTE, CMR has lower intraobserver and
interobserver variabilities for RVol(AR), suggesting CMR
262 Intraobserver and
interobserver variabilities were similar.
263 t the two ROIs demonstrated good to moderate
interobserver variability (for the two ROIs, 0.46 and 0.
264 Practice Advice 2: Given the significant
interobserver variability among pathologists, the diagno
265 ncer patients were analyzed to determine the
interobserver variability between the automated BSIs and
266 roach can provide a significant reduction in
interobserver variability for DCE MR imaging measurement
267 considered clinically insignificant because
interobserver variability for echocardiographic measurem
268 (100% versus 47%; P<0.0001) and with better
interobserver variability for RT-ungated (coefficient of
269 The uncertainty is compounded by
interobserver variability in histologic diagnosis.
270 imaging can have may be in the reduction of
interobserver variability in target volume delineation a
271 doses, reducing the toxicity issues and the
interobserver variability in tumor detection.
272 rdance with current guidelines to assess the
interobserver variability of FCT measurement by intracla
273 es and calcification contributed to the high
interobserver variability of FCT measurement.
274 The overall
interobserver variability of K(trans) with manual ROI pl
275 Overall intra- and
interobserver variability rates were similar; in clinica
276 Interobserver variability was analyzed by calculating in
277 Interobserver variability was analyzed by using three di
278 Interobserver variability was analyzed by using weighted
279 Intra- and
interobserver variability was assessed in a subset of 18
280 EF than for manual EF or manual LS, whereas
interobserver variability was higher for both visual and
281 Whole-lesion measurement showed the lowest
interobserver variability with both measurement methods
282 e index (diagnostic accuracy range, 50%-87%;
interobserver variability, +/-7%).
283 e heterogeneity quantification, with reduced
interobserver variability, and independent prognostic va
284 n interclass correlation were used to define
interobserver variability, and receiver operating charac
285 ppropriate testing, improve accuracy, reduce
interobserver variability, and reduce diagnostic and rep
286 ations still exist including sampling error,
interobserver variability, bleeding, arteriovenous fistu
287 Owing to the high
interobserver variability, CT scan was not associated wi
288 p vascular network may be subject to greater
interobserver variability.
289 access to both SBR and CPR data to minimize
interobserver variability.
290 -Altman plots were used to assess intra- and
interobserver variability.
291 orrelation coefficient was used to determine
interobserver variability.
292 ch optimization method was evaluated through
interobserver variability.
293 hology, which is associated with substantial
interobserver variability.
294 between surgeon and radiologist may decrease
interobserver variability.
295 FI vascularization flow index for intra- and
interobserver variability; intraobserver values were 0.9
296 of diagnoses between WSI and TM methods and
interobserver variance from GTC, following College of Am
297 Interobserver variation can be partially resolved by dev
298 ate (kappa = 0.565 and 0.592, respectively);
interobserver variation led to different potential treat
299 rithm based on the SAF score should decrease
interobserver variations among pathologists and are like
300 he SAF score and FLIP algorithm can decrease
interobserver variations among pathologists.