1 as = 0 cm intraobserver versus bias = 0.3 cm
interobserver).
2 3 +/- 0.10 or higher compared with the human
interobserver (0.44 +/- 0.09; P < .01) and intraobserver
3 lowest for T2 mapping (intraobserver, 0.05;
interobserver, 0.09; interimage, 0.1) followed by EGE (i
4 , 0.1) followed by EGE (intraobserver, 0.03;
interobserver, 0.14; interimage, 0.14), with improved de
5 y of IRA versus ACUT2E (intraobserver, 0.11;
interobserver, 0.22; interimage, 0.12) and T2-weighted S
6 2) and T2-weighted STIR (intraobserver, 0.1;
interobserver, 0.32; interimage, 0.1).
7 n efficiency(r): intraobserver: 0.984-0.991;
interobserver: 0.969-0.971; all P < 0.001).
8 server, interobserver, interacquisition, and
interobserver-acquisition (different observers and diffe
9 Overall,
interobserver-acquisition percent differences were signi
10 , interacquisition (for both observers), and
interobserver-acquisition reproducibilities (for both ob
11 nced reconstruction methods did not increase
interobserver agreement (80.6%-84.7%), compared with the
12 Interobserver agreement (intraclass correlation coeffici
13 and 0.69-0.74, respectively, with excellent
interobserver agreement (intraclass correlation coeffici
14 dds ratio of 8 (95% CI: 3, 18) and only fair
interobserver agreement (kappa = 0.32; 95% CI: 0.16, 0.4
15 experienced observers showed only a moderate
interobserver agreement (kappa = 0.51).
16 curve (AUC) of 0.78 and 0.80-0.88, with good
interobserver agreement (kappa = 0.70).
17 Intraobserver agreement (kappa = 1) and
interobserver agreement (kappa = 0.932) were excellent.
18 Interobserver agreement (kappa) was 0.619 (range, 0.469-
19 Fair to good
interobserver agreement (kappa, 0.72) was observed for d
20 d pulmonary radiologists with almost perfect
interobserver agreement (kappa=0.83).
21 SWI yielded higher
interobserver agreement (R(2) = 0.99, P < .001; 95% CI:
22 validation (R(2) = 0.84-0.92) and excellent
interobserver agreement (R(2) = 0.9928).
23 dependently analyzed by readers 1 and 2, and
interobserver agreement (weighted kappa) was calculated.
24 There was high
interobserver agreement [(Equation is included in full-t
25 There was excellent intra- and
interobserver agreement according to intraclass correlat
26 tantial intraobserver agreement but moderate
interobserver agreement among glaucoma specialists using
27 The
interobserver agreement among radiologists and the major
28 Interobserver agreement among the 5 glaucoma specialists
29 elation coefficient (ICC) was used to assess
interobserver agreement among three readers evaluating 2
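Many entries in this list report interobserver agreement for continuous measurements as an intraclass correlation coefficient (ICC). For orientation only, here is a minimal sketch of ICC(2,1), the two-way random-effects, absolute-agreement, single-rater form of Shrout and Fleiss, computed by hand with NumPy; the measurements are invented and do not come from any cited study.

```python
# Sketch only: ICC(2,1) from two-way ANOVA mean squares (invented data).
import numpy as np

x = np.array([        # rows = cases, columns = readers
    [9.1, 9.3, 9.0],
    [6.2, 6.0, 6.4],
    [7.8, 8.1, 8.0],
    [5.0, 5.2, 4.9],
    [8.4, 8.6, 8.3],
])
n, k = x.shape
grand = x.mean()
msr = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # between cases
msc = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # between readers
sst = ((x - grand) ** 2).sum()
mse = (sst - msr * (n - 1) - msc * (k - 1)) / ((n - 1) * (k - 1))
icc = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
print(f"ICC(2,1) = {icc:.3f}")
```

Note that reported ICC variants differ (consistency versus absolute agreement, single versus average raters), so the exact form depends on each study.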
30 by human-machine correlation and intra- and
interobserver agreement and (b) that the IQ-DCNN algorit
31 The
interobserver agreement and diagnostic performance of ea
32 Interobserver agreement and differences in measurements
33 CNN was within the range of human intra- and
interobserver agreement and in very good agreement with
34 diagnostic precision was calculated based on
interobserver agreement and kappa scores.
35 ed chest CT images and to report its initial
interobserver agreement and performance.
36 ent detection rate; secondary endpoints were
interobserver agreement and predictors of PET positivity
37 ve of OS, but visual criteria showed greater
interobserver agreement and stronger discrimination betw
38 Interobserver agreement between blinded and nonblinded i
39 The
interobserver agreement between CellaVision and microsco
40 Additionally,
interobserver agreement between PGA and PtGA scores was
41 ique agreement between MR imaging and US and
interobserver agreement between the two primary MR imagi
42 There was almost perfect
interobserver agreement between two reviewers for detect
43 Interobserver agreement coefficients did not reach the s
44 The purpose of this study was to investigate
interobserver agreement during magnetic resonance cholan
45 Intra- and
interobserver agreement for automated tracking was excel
46 Moreover, the
interobserver agreement for BMI in this study proved exc
47 Interobserver agreement for CT features was assessed, as
48 Cohen's kappa (k) was used to test
interobserver agreement for each imaging modality.
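For categorical ratings, the entries above and below most often use Cohen's kappa, with a weighted variant for ordinal grading scales. A minimal sketch, assuming scikit-learn is installed and using invented ratings:

```python
# Sketch only: Cohen's kappa for two readers rating the same cases
# (invented labels, e.g., finding present/absent).
from sklearn.metrics import cohen_kappa_score

reader_1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
reader_2 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

kappa = cohen_kappa_score(reader_1, reader_2)            # unweighted
weighted = cohen_kappa_score(reader_1, reader_2,
                             weights="quadratic")        # for ordinal grades
print(f"kappa = {kappa:.2f}, weighted kappa = {weighted:.2f}")
```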
49 Interobserver agreement for LVMI and MWT was higher for
50 Intra- and
interobserver agreement for OMRs ranged from moderate to
51 Interobserver agreement for scoring RVI was substantial
52 Results
Interobserver agreement for some features was strong (eg
53 The
interobserver agreement for their depiction was excellen
54 ient, 0.76) for image quality score and good
interobserver agreement for vasculature measurements (in
55 Our aim was to evaluate the
interobserver agreement in (18)F-sodium fluoride (NaF) P
56 There was slight to fair
interobserver agreement in assessment of most signs and
57 are packages are insufficient to obtain high
interobserver agreement in both devices except in patien
58 Kappa value for
interobserver agreement in detecting CC fractures was 0.
59 The
interobserver agreement in distinction between the low-
60 In this study we found a high level of
interobserver agreement in evaluating MRCP.
61 ned-rank test were used to assess intra- and
interobserver agreement in image quality, alignment, and
62 The overall
interobserver agreement in IPF diagnosis was similar for
63 ss correlation and Bland-Altman indexes, and
interobserver agreement in Lung-RADS classification was
64 ficantly different PFS, and showed very good
interobserver agreement in patients with metastatic RCC
65 Interobserver agreement in quantifying contact between t
66 Interobserver agreement in raw measurements was assessed
67 n with ischemic stroke, and to determine the
interobserver agreement in the assessment of carotid web
68 The
interobserver agreement in the automated BSI interpretat
69 Interobserver agreement in the detection of carotid webs
70 Purpose To assess
interobserver agreement in the measurements and American
71 ible in 99% (198 of 200) of examinations and
interobserver agreement in the visual grading of splenic
72 Information about
interobserver agreement is limited.
73 However,
interobserver agreement is only moderate.
74 Interobserver agreement is strong for some features, but
75 he accuracy, reproducibility, and intra- and
interobserver agreement of a computer-based quantitative
76 SK-like melanomas, patient demographics, and
interobserver agreement of criteria were evaluated.
77 odest levels of diagnostic accuracy, and the
interobserver agreement of most individual criteria was
78 the sensitivity, diagnostic confidence, and
interobserver agreement of the diagnosis of ischemia, a
79 stology-derived tumor volumes and intra- and
interobserver agreement of the PET-derived volumes were
80 The
interobserver agreement of the semiautomated workflow wa
81 the sensitivity, specificity, accuracy, and
interobserver agreement of the two most commonly used cl
82 Conclusion: The
interobserver agreement on (18)F-NaF PET/CT for the dete
83 The secondary outcome was the
interobserver agreement on the MRDTI readings.
84 nt scores that assess nutritional status and
interobserver agreement regarding nursing diagnoses will
85 Interobserver agreement regarding the American Joint Com
86 Results Accuracy and
interobserver agreement regarding the nine CT signs of I
87 f 123] vs 94.3% [116 of 123], P = .002), and
interobserver agreement significantly increased, from mo
88 hnically reliable than VCTE and had a higher
interobserver agreement than liver biopsy.
89 Diagnostic performance and
interobserver agreement using pCLE to identify PVC were
90 t is important to evaluate intraobserver and
interobserver agreement using visual field (VF) testing
91 rver agreement was >=93% (kappa >= 0.83) and
interobserver agreement was >=93% (kappa >= 0.66); compl
92 Interobserver agreement was 0.76.
93 The pretraining
interobserver agreement was 72% (kappa = 0.58), and the
94 was 72% (kappa = 0.58), and the posttraining
interobserver agreement was 98% (kappa = 0.97) (P = .04)
95 Interobserver agreement was almost perfect (0.99; 95% co
96 Interobserver agreement was almost perfect, with kappa v
97 Intra- and
interobserver agreement was also assessed.
98 Interobserver agreement was assessed and receiver operat
99 Interobserver agreement was assessed by calculating intr
100 Intra- and
interobserver agreement was assessed by intraclass corre
101 Interobserver agreement was assessed by two separate obs
102 Interobserver agreement was assessed by using kappa stat
103 Interobserver agreement was assessed using Fleiss kappa.
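When more than two readers rate each case, as in the entry above, Fleiss' kappa generalizes Cohen's kappa. A minimal sketch, assuming statsmodels is installed and using invented labels:

```python
# Sketch only: Fleiss' kappa for three readers per case (invented data).
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

ratings = np.array([   # rows = cases, columns = readers
    [0, 0, 0],
    [1, 1, 0],
    [1, 1, 1],
    [0, 1, 0],
    [1, 1, 1],
    [0, 0, 1],
])
table, _ = aggregate_raters(ratings)  # per-case counts for each category
print(f"Fleiss kappa = {fleiss_kappa(table):.2f}")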
104 Interobserver agreement was assessed with kappa statisti
105 Interobserver agreement was calculated by using Cohen ka
106 Interobserver agreement was calculated.
107 Interobserver agreement was checked, and diagnostic accu
108 ax was slightly inferior, but the intra- and
interobserver agreement was clearly superior.
109 Interobserver agreement was determined between 3 patholo
110 Interobserver agreement was determined by the Cohen kapp
111 Interobserver agreement was determined; imaging findings
112 Intraobserver and
interobserver agreement was estimated using kappa statis
113 Interobserver agreement was estimated using the kappa st
114 Interobserver agreement was evaluated by calculating wei
115 Interobserver agreement was evaluated by using kappa sta
116 Interobserver agreement was evaluated.
117 The
interobserver agreement was excellent (kappa = 0.85).
118 Interobserver agreement was excellent (kappa = 0.98).
119 Interobserver agreement was excellent for Rvol(FLOW) (r
120 Interobserver agreement was excellent for tumor staging
121 Interobserver agreement was excellent for whole-tumor vo
122 For planar imaging,
interobserver agreement was fair after 48 h (kappa = 0.3
123 Interobserver agreement was fair regarding questions abo
124 Overall
interobserver agreement was good (kappa = 0.76; 95% conf
125 Interobserver agreement was good for the "typical" and "
126 Interobserver agreement was high for all superficial FAZ
127 The intra- and
interobserver agreement was high using this method.
128 Conclusion
Interobserver agreement was high with manual diameter me
129 Interobserver agreement was higher with MR elastography
130 Interobserver agreement was measured with Cohen kappa co
131 Interobserver agreement was moderate for diagnostic SPEC
132 Interobserver agreement was moderate for Nakanuma stage
133 Results No substantial difference in
interobserver agreement was observed between sessions, a
134 Good
interobserver agreement was observed for the Likert scal
135 No improved
interobserver agreement was observed with advanced recon
136 Overall, PSMA PET/CT
interobserver agreement was substantial by Landis and Ko
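The verbal labels used throughout this list ("slight", "moderate", "substantial", "almost perfect") generally follow the Landis and Koch (1977) benchmarks named in the entry above. A small sketch of that mapping:

```python
# Sketch only: Landis & Koch (1977) verbal benchmarks for kappa values.
def landis_koch(kappa: float) -> str:
    if kappa < 0.00:
        return "poor"
    if kappa <= 0.20:
        return "slight"
    if kappa <= 0.40:
        return "fair"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "substantial"
    return "almost perfect"

print(landis_koch(0.72))  # -> substantial
```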
137 Interobserver agreement was substantial for staining (ka
138 Interobserver agreement was substantial in images classi
139 Interobserver agreement was substantial or excellent for
140 Interobserver agreement was substantial to almost perfec
141 Interobserver agreement was substantial with respect to
142 Interobserver agreement was substantial, and the median
143 In a patient-level analysis,
interobserver agreement was very good for assessing perc
144 The kappa values for
interobserver agreement were 0.84 for focal uptake and 0
145 Marginal differences and
interobserver agreement were assessed.
146 Diagnostic accuracy and
interobserver agreement were calculated, and multivariat
147 Sensitivity, specificity, and intra- and
interobserver agreement were calculated.
148 on correlation and Bland-Altman analysis for
interobserver agreement were used.
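For continuous measurements, Bland-Altman analysis (as in the entry above, and in entry 1's bias values) summarizes interobserver agreement as a systematic bias plus 95% limits of agreement. A minimal sketch with invented paired measurements:

```python
# Sketch only: Bland-Altman bias and 95% limits of agreement between two
# readers (invented values, e.g., diameters in cm).
import numpy as np

reader_1 = np.array([4.1, 5.0, 3.8, 6.2, 4.9, 5.5])
reader_2 = np.array([4.3, 4.8, 3.9, 6.5, 5.1, 5.4])
diff = reader_1 - reader_2
bias = diff.mean()                      # systematic interobserver offset
loa = 1.96 * diff.std(ddof=1)           # half-width of limits of agreement
print(f"bias = {bias:.2f} cm, LoA = [{bias - loa:.2f}, {bias + loa:.2f}] cm")
```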
149 construction methods on BCR localization and
interobserver agreement with (18)F-DCFPyL PET/CT scans i
150 Examination success rate,
interobserver agreement, and diagnostic accuracy for fib
151 Intraobserver agreement,
interobserver agreement, and interaction time were recor
152 sis included diagnostic accuracy parameters,
interobserver agreement, and receiver operating characte
153 Intraobserver agreement,
interobserver agreement, and repeatability of MRI-PDFF a
154 Intraobserver agreement,
interobserver agreement, and repeatability showed a sign
155 erate to severe symptoms and has substantial
interobserver agreement, especially for categories 1 and
156 re is considerable variation in the reported
interobserver agreement, malignancy rate, and prevalence
157 With overlapping phenotypes and modest
interobserver agreement, OSSN and benign conjunctival le
158 ns showed lower accuracy and/or poor to fair
interobserver agreement.
159 nosed in female patients with a fair to good
interobserver agreement.
160 ontributed to significantly higher levels of
interobserver agreement.
161 en's kappa was used to assess reliability of
interobserver agreement.
162 ckground regions showed excellent intra- and
interobserver agreement.
163 Most dermoscopic criteria had poor to fair
interobserver agreement.
164 and patient-by-patient validation, with good
interobserver agreement.
165 VID-19 pneumonia has moderate-to-substantial
interobserver agreement.
166 ndently scored by six liver pathologists for
interobserver agreement.
167 ortic repair, with excellent correlation and
interobserver agreement.
168 d follow-up imaging showed better intra- and
interobserver agreements (k = 0.77 and 0.60, respectivel
169 Interobserver agreements for EORTC, PERCIST, Peter Mac,
170 There were better intra- than
interobserver agreements in the measurement of single lo
171 Intra- and
interobserver agreements that used nonenhanced thick CT
172 mutual agreement of 85% in Dice seen in the
interobserver analysis of operators.
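Segmentation studies such as the entry above quantify interobserver agreement as the Dice overlap between two observers' delineations. A minimal sketch on tiny invented binary masks:

```python
# Sketch only: Dice coefficient between two observers' binary masks.
import numpy as np

mask_1 = np.array([[0, 1, 1], [0, 1, 0]], dtype=bool)
mask_2 = np.array([[0, 1, 0], [0, 1, 1]], dtype=bool)
dice = 2 * (mask_1 & mask_2).sum() / (mask_1.sum() + mask_2.sum())
print(f"Dice = {dice:.2f}")  # 0.67 for these masks
```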
173 t differences were significantly higher than
interobserver and interacquisition percent differences (
174 giomyolipoma, hypovascularity-which has high
interobserver and intermachine agreement-of solid small
175 nd Fleiss methodology were used to determine
interobserver and intermachine agreement.
176 appa coefficients were computed to determine
interobserver and intermodality agreement.
177 Interobserver and interprotocol agreement was assessed b
178 Interobserver and interprotocol agreement was good to ve
179 Interobserver and intraobserver agreement based on the 1
180 Interobserver and intraobserver agreement in the grading
181 Interobserver and intraobserver reliabilities were almos
182 cy differs in index versus revision TKA, and
interobserver and intraobserver reliability for assessme
183 d nonspecific synovitis, with almost perfect
interobserver and intraobserver reliability.
184 s determined across the different observers (
interobserver) and within each observer's own data sets
185 Intraobserver,
interobserver, and scan-rescan variability was calculate
186 T2 mapping and EGE had best agreement (
interobserver bias: T2-weighted STIR, -0.9 [mean differe
187 Interobserver comparison (intraclass correlation coeffic
188 WSI/TM diagnoses were compared, followed by
interobserver comparison with GTC.
189 two observers in order to achieve intra- and
interobserver compliance.
190 The
interobserver concordance (kappa value) for Evans', CAP,
191 signs on video clips was high (>/=89%), with
interobserver concordance being substantial to high (AC1
192 Mean
interobserver concordance between WSI, TM, and GTC was 9
193 Mean
interobserver concordance was 94% for WSI and GTC and 94
194 The
interobserver concordance was calculated for the two rea
195 vans', JPS, MDA and ART grading systems, and
interobserver concordance was compared between the five
196 Interobserver consistency for the subarachnoid space mea
197 The
interobserver correlation and the correlation between MR
198 KE, typically on the order of 30%, with poor
interobserver correlation between measurements.
199 The
interobserver correlation using IVC was excellent (0.97)
200 A strong positive
interobserver correlation was obtained for choroidal thi
201 s were excellent and better than or equal to
interobserver correlations for all 3 thresholds: 0.94 ve
202 Intra- and
interobserver correlations were greater than 0.95 for al
203 ial flow reserve was 33% to 38%, whereas the
interobserver COV was 13% to 22%.
204 The
interobserver COV was between 11% and 15%.
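Entries 203-204 express interobserver variability as a coefficient of variation (COV). Definitions vary between studies; one common form, sketched below with invented values, derives the within-pair SD from duplicate measurements by two observers:

```python
# Sketch only: one common interobserver COV definition (invented data);
# the exact formula differs across the studies cited here.
import numpy as np

obs_1 = np.array([2.1, 3.4, 2.8, 4.0, 3.2])
obs_2 = np.array([2.3, 3.1, 2.9, 4.4, 3.0])
d = obs_1 - obs_2
within_sd = np.sqrt((d ** 2).mean() / 2)   # SD from duplicate measurements
cov = 100 * within_sd / np.mean([obs_1.mean(), obs_2.mean()])
print(f"interobserver COV = {cov:.1f}%")
```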
205 hydatidiform moles continues to suffer from
interobserver diagnostic variability, emphasizing the ne
206 Average
interobserver difference for diameters and volumes was 2
207 dependent of tumor size, with no significant
interobserver differences (P > .10).
208 Additionally, an
interobserver evaluation of the semiautomated approach w
209 Statistical analysis was used to correlate
interobserver findings and compare choroidal thickness a
210 (ROI) and asking two different radiologists (
interobserver) for their opinion.
211 Three dermatopathologists established
interobserver ground truth consensus (GTC) diagnosis for
212 er ICC 0.75; density intraobserver ICC 0.86,
interobserver ICC 0.73.
213 erver ICC 0.71; shape intraobserver ICC 0.88
interobserver ICC 0.75; density intraobserver ICC 0.86,
214 ass correlation coefficient (ICC) 0.96-0.97,
interobserver ICC 0.88; modified ABC/2 intraobserver ICC
215 modified ABC/2 intraobserver ICC 0.95-0.97,
interobserver ICC 0.91; SAS intraobserver ICC 0.95-0.99,
216 r ICC 0.91; SAS intraobserver ICC 0.95-0.99,
interobserver ICC 0.93; largest diameter: (visual) inter
217 tion assessment (intraobserver, ICC >= 0.94;
interobserver, ICC >= 0.89).
218 was excellent for NFV assessment (intra- and
interobserver, ICC >= 0.99) and strong to excellent for
219 Interobserver IMA-IHE reproducibility was good for cross
220 statistically significant difference between
interobservers in SI values.
221 Intraobserver,
interobserver, interacquisition, and interobserver-acqui
222 Interobserver, intraobserver, and interimage variability
223 , good intraobserver (k = 0.70) and moderate
interobserver (k = 0.56) agreements were noted.
224 98; 95% confidence interval: 0.97, 0.99) and
interobserver (kappa = 0.93; 95% confidence interval: 0.
225 the RG-ROI method showed highest intra- and
interobserver levels of agreement compared with Elip-ROI
226 Interobserver luminal measurements were reliable (intrac
227 technical side, BPIVOL and BPISUV showed an
interobserver maximum difference of 3.5%, and their comp
228 ment intraclass correlation coefficients for
interobserver measurements were 0.984, 0.990, and 0.988,
229 relation (0.92) compared with the intra- and
interobserver measures (0.74 and 0.39, respectively; bot
230 ificant difference between SI of HC types of
interobservers (O1-O2) and ROI sizes (4-8 mm) (p>0.05 fo
231 There was moderate
interobserver reliability for the diagnosis of glaucoma
232 Interobserver reliability in determining hernia recurren
233 (EPT) devices and to evaluate the intra- and
interobserver reliability of the wireless EPT device.
234 Interobserver reliability was assessed with kappa statis
235 Intraobserver and
interobserver reliability were determined for these meas
236 to interpretation and require validation of
interobserver reliability.
237 's exact test were used to assess intra- and
interobserver reproducibilities and to compare response
238 ; it also demonstrates acceptable intra- and
interobserver reproducibilities for HCC lesions treated
239 nce interval [CI]: 0.94, 1.00) and excellent
interobserver reproducibility (intraclass correlation co
240 The IHC algorithm classification showed high
interobserver reproducibility among pathologists and was
241 We assessed the
interobserver reproducibility and interocular symmetry o
242 Assessment of
interobserver reproducibility and interocular symmetry u
243 The initial system showed moderate
interobserver reproducibility and prognostic stratificat
244 trating high intraobserver repeatability and
interobserver reproducibility for all the examined data.
245 Interobserver reproducibility for both acquisitions was
246 We tested
interobserver reproducibility in recognition of tissue a
247 Interobserver reproducibility of (68)Ga-DOTATATE PET/CT
248 is to assess the diagnostic performance and
interobserver reproducibility of FFRangio in patients wi
249 The intra- and
interobserver reproducibility of MRI were good (intracla
250 Intravisit and
interobserver reproducibility of SFCT measurements were
251 ervers using the developed criteria, and the
interobserver reproducibility of the measurements was re
252 Purpose To determine the
interobserver reproducibility of the Prostate Imaging Re
253 Interobserver reproducibility was excellent (intraclass
254 Volumetric analysis demonstrated better
interobserver reproducibility when compared with single-
255 asurements of TLF10 and FTV10 exhibited high
interobserver reproducibility, within +/-0.77% and +/-3.
256 ain malignant potential, classified based on
interobserver review by dermatopathologists.
5% CI confidence interval: 0.78, 0.96), and
interobserver values were 0.93 for FMBV fractional movin
258 Practice Advice 2: Given the significant
interobserver variability among pathologists, the diagno
259 Intra- and
interobserver variability analyses showed high agreement
260 assessment, quantitative assessment has low
interobserver variability and could yield a tumor size c
261 to routine practice because it is limited by
interobserver variability and generally only meets accep
262 n tumour histology) resulted in considerable
interobserver variability and substantial variation in p
263 ncer patients were analyzed to determine the
interobserver variability between the automated BSIs and
264 cinoma is of major importance; however, high
interobserver variability exists.
265 considered clinically insignificant because
interobserver variability for echocardiographic measurem
266 Interobserver variability for individual CT findings was
267 This AI system overcomes substantial
interobserver variability in expert predictions, perform
268 The uncertainty is compounded by
interobserver variability in histologic diagnosis.
269 Interobserver variability in reporting between a senior
270 imaging can have may be in the reduction of
interobserver variability in target volume delineation a
271 rdance with current guidelines to assess the
interobserver variability of FCT measurement by intracla
272 es and calcification contributed to the high
interobserver variability of FCT measurement.
273 normal values, and determine the intra- and
interobserver variability of measurements.
274 idated by comparing its accuracy against the
interobserver variability of six trained graders from th
275 Our study showed minimal
interobserver variability using CAM based quantification
276 Interobserver variability was analyzed by calculating in
277 Interobserver variability was analyzed by using weighted
278 Intra- and
interobserver variability was assessed in a subset of 18
279 EF than for manual EF or manual LS, whereas
interobserver variability was higher for both visual and
280 Interobserver variability was not statistically signific
281 Interobserver variability was reported using multirater
282 Intra- and
interobserver variability was tested by using intraclass
283 e index (diagnostic accuracy range, 50%-87%;
interobserver variability, +/-7%).
284 ssification with a high accuracy and without
interobserver variability, along with the molecular reso
285 1.6% for intraobserver variability, 4.0% for
interobserver variability, and 10.3% for scan-rescan var
286 .6% for intraobserver variability, 10.7% for
interobserver variability, and 19.8% for scan-rescan var
287 0.7% for intraobserver variability, 1.5% for
interobserver variability, and 8.1% for scan-rescan vari
288 ppropriate testing, improve accuracy, reduce
interobserver variability, and reduce diagnostic and rep
289 Owing to the high
interobserver variability, CT scan was not associated wi
290 vity determination, assessment of intra- and
interobserver variability, validation of data from qPSMA
291 ch optimization method was evaluated through
interobserver variability.
292 hology, which is associated with substantial
interobserver variability.
293 between surgeon and radiologist may decrease
interobserver variability.
294 to assess the deep learning model as well as
interobserver variability.
295 p vascular network may be subject to greater
interobserver variability.
296 art, and we calculated the intraobserver and
interobserver variability.
297 FI vascularization flow index for intra- and
interobserver variability; intraobserver values were 0.9
298 of diagnoses between WSI and TM methods and
interobserver variance from GTC, following College of Am
299 Interobserver variation can be partially resolved by dev
300 rithm based on the SAF score should decrease
interobserver variations among pathologists and are like