Corpus search results (sorted by the word one position after the keyword)

Click a serial number to open the corresponding page in PubMed.
1 icient of variation was calculated to assess interobserver variability.
2 images had the highest specificity and least interobserver variability.
3 MR images and is an important contributor to interobserver variability.
4 ch optimization method was evaluated through interobserver variability.
5  access to both SBR and CPR data to minimize interobserver variability.
6 -Altman plots were used to assess intra- and interobserver variability.
7 orrelation coefficient was used to determine interobserver variability.
8 ilcoxon signed-rank test were used to assess interobserver variability.
9 hology, which is associated with substantial interobserver variability.
10  scans from patients with nAMD is subject to interobserver variability.
11  reprocessed for determination of intra- and interobserver variability.
12 used in rheumatoid arthritis (RA), have high interobserver variability.
13 nature of the procedure, sampling error, and interobserver variability.
14 r-intensive analyses and potential intra- or interobserver variability.
15 between surgeon and radiologist may decrease interobserver variability.
16  there has been little attempt to quantitate interobserver variability.
17 due to PE, but with low sensitivity and high interobserver variability.
18  thickness were assessed, as were intra- and interobserver variability.
19 were used to evaluate both intraobserver and interobserver variability.
20 ere performed to determine intraobserver and interobserver variability.
21 -to-muscle contrast and demonstrated minimal interobserver variability.
22 p vascular network may be subject to greater interobserver variability.
23 opulation characteristics, CT technique, and interobserver variability.
24 pa analysis was also performed to assess for interobserver variability.
25 e and mass with good accuracy and acceptable interobserver variability.
26 sible for significantly increased intra- and interobserver variabilities.
27 Additionally, the RT3D technique reduced the interobserver variability (37% to 7%) and intraobserver
28 e index (diagnostic accuracy range, 50%-87%; interobserver variability, +/-7%).
29 out contrast injection for intraobserver and interobserver variabilities (all p < 0.001).
30     Practice Advice 2: Given the significant interobserver variability among pathologists, the diagno
31                                     However, interobserver variability and image quality influence ob
32 ved a more guarded reception lately owing to interobserver variability and lack of standardized proto
33 stological features, generating considerable interobserver variability and limited diagnostic reprodu
34 ctional MR examination significantly reduces interobserver variability and offers reliable and reprod
35                                              Interobserver variability and practice guidelines remain
36                                              Interobserver variability and the correlation between au
37 to improve endoluminal visualization, reduce interobserver variability, and improve patient acceptanc
38 e heterogeneity quantification, with reduced interobserver variability, and independent prognostic va
39 ompliance is more often identified, has less interobserver variability, and poses less risk to the pa
40 er biopsy is associated with sampling error, interobserver variability, and potential complications.
41 n interclass correlation were used to define interobserver variability, and receiver operating charac
42 ppropriate testing, improve accuracy, reduce interobserver variability, and reduce diagnostic and rep
43 n PET measures (22%-44%) was attributable to interobserver variability as measured by the reader stud
44                    The addition also reduced interobserver variability (Az = 0.86 vs Az = 0.75).
45 ncer patients were analyzed to determine the interobserver variability between the automated BSIs and
46 ations still exist including sampling error, interobserver variability, bleeding, arteriovenous fistu
47       3D-Gd-MRA revealed a slightly improved interobserver variability but incorrectly graded 6 of 34
48  evaluation of renal artery stenosis with an interobserver variability comparable with that of conven
49      LV-METRIC had reduced intraobserver and interobserver variability compared with other methods.
50                            Owing to the high interobserver variability, CT scan was not associated wi
51  g +/- 9, kappa = 0.49 [P < .0001]) and less interobserver variability (difference, 5.4 g +/- 18, kap
52                                              Interobserver variability expressed as 1 SD was 3.6 mm f
53  (F = 6.9, P = 0.011; trained observers) and interobserver variability (F = 33.7, P = 0.004; group of
54 ed with TTE, CMR has lower intraobserver and interobserver variabilities for RVol(AR), suggesting CMR
55 pectively compare diagnostic performance and interobserver variability for computed tomography (CT) a
56                                              Interobserver variability for conventional angiograms wa
57 roach can provide a significant reduction in interobserver variability for DCE MR imaging measurement
58  considered clinically insignificant because interobserver variability for echocardiographic measurem
59                                              Interobserver variability for interpretation of the lesi
60  (100% versus 47%; P<0.0001) and with better interobserver variability for RT-ungated (coefficient of
61                            Intraobserver and interobserver variability for score assessment were 6% a
62                                              Interobserver variability for the degree of renal artery
63                        DSA had a substantial interobserver variability for the grading of stenosis (m
64               kappa Values for assessment of interobserver variability for the T2, single-voxel, mult
65 t the two ROIs demonstrated good to moderate interobserver variability (for the two ROIs, 0.46 and 0.
66 ial for improving specificity and decreasing interobserver variability in biopsy recommendations.
67  causality, but there was still considerable interobserver variability in both.
68             The uncertainty is compounded by interobserver variability in histologic diagnosis.
69 ot quite as good, and there is slightly more interobserver variability in interpretation.
70  a significant difference, there was greater interobserver variability in lesion descriptions among r
71                        There was significant interobserver variability in Pflex, with a maximum diffe
72     A change of >32 mum was likely to exceed interobserver variability in SFCT.
73  imaging can have may be in the reduction of interobserver variability in target volume delineation a
74 acy for less experienced readers and reduces interobserver variability in the diagnosis of ECE of pro
75                         There is significant interobserver variability in the diagnosis of LGD even a
76 as evaluated in the 2 trained observers, and interobserver variability in the group of 15 observers.
77 chnique also minimized right-left kidney and interobserver variability in the measurement of EF.
78                                        Large interobserver variability in the measurement of vascular
79 s investigations have identified significant interobserver variability in the measurements of central
80 eatures, and radiology residents had greater interobserver variability in their selection of five of
81  doses, reducing the toxicity issues and the interobserver variability in tumor detection.
82                                              Interobserver variability, interobserver correlation, an
83 FI vascularization flow index for intra- and interobserver variability; intraobserver values were 0.9
84                                              Interobserver variability (kappa statistic or intraclass
85                 The MRA measurements had low interobserver variability (< or =5%) and good correlatio
86                                              Interobserver variability may be reduced in the future b
87 A and PC-flow revealed the best (P = 0.0003) interobserver variability (median kappa = 0.75) and almo
88 aobserver variations were small, with a mean interobserver variability of -0.1 g +/- 2.3 and a mean i
89 been reported evaluating the performance and interobserver variability of computerized tomographic co
90 rdance with current guidelines to assess the interobserver variability of FCT measurement by intracla
91 es and calcification contributed to the high interobserver variability of FCT measurement.
92                                  The overall interobserver variability of K(trans) with manual ROI pl
93 s to determine preliminary intraobserver and interobserver variability of measurements in a subset of
94                                          The interobserver variability of MRI and the relative import
95                             Knowledge of the interobserver variability of quantitative parameters is
96 orrections that in turn resulted in a higher interobserver variability of SUVmean (CCCs for follow-up
97         These results were compared with the interobserver variability of the same radiologists obtai
98                 Our results show substantial interobserver variability, particularly for overall diag
99 SPECT/CT demonstrated both a high intra- and interobserver variability (R(2) = 0.997) and an accuracy
100                           Overall intra- and interobserver variability rates were similar; in clinica
101 al studies are required to further establish interobserver variability, to assess intraobserver varia
102 ment of regional wall motion, and intra- and interobserver variability values are low.
103 e interstudy reproducibility, and intra- and interobserver variability values were analyzed.
104 of myocardial velocity with small intra- and interobserver variability values.
105                                         High interobserver variability warrants further investigation
106                     The mean kappa value for interobserver variability was 0.62 (95% confidence inter
107                                              Interobserver variability was analyzed by calculating in
108                                              Interobserver variability was analyzed by using three di
109                                              Interobserver variability was analyzed by using weighed
110                                              Interobserver variability was analyzed.
111                                              Interobserver variability was assessed by placing cases
112                                              Interobserver variability was assessed by using the Pear
113                                              Interobserver variability was assessed for the grading o
114                                   Intra- and interobserver variability was assessed in a subset of 18
115                                              Interobserver variability was assessed with the Cohen ka
116                                 Interlot and interobserver variability was assessed.
117 reast Imaging Reporting and Data System, and interobserver variability was calculated with the Cohen
118                                              Interobserver variability was calculated.
119                                              Interobserver variability was determined (kappa analysis
120 d for assessment by the senior observer, and interobserver variability was determined.
121 by expert readers (r = 0.96; p < 0.001), but interobserver variability was greater (3.4 +/- 2.9% vs.
122                                              Interobserver variability was high for CT colonography w
123  EF than for manual EF or manual LS, whereas interobserver variability was higher for both visual and
124                                  Significant interobserver variability was identified during these as
125                                              Interobserver variability was negligible.
126                                              Interobserver variability was not explained by positive
127                                  Significant interobserver variability was observed (P < .001).
128                                              Interobserver variability was only fair (kappa = 0.54) f
129                            Intraobserver and interobserver variability was small, with intraclass cor
130  sum test and two-sample Student t test, and interobserver variability was tested with kappa coeffici
131                                              Interobserver variability was tested with the kappa coef
132                            Intraobserver and interobserver variabilities were determined.
133                            Intraobserver and interobserver variabilities were excellent (4+/-4% and 4
134                            Intraobserver and interobserver variabilities were similar.
135 an square percent error (accuracy), bias and interobserver variability were 0.992, 11.9 g, 4.8%, -4.9
136                                   Intra- and interobserver variability were analyzed in image categor
137                           Accuracy, bias and interobserver variability were calculated.
138   Whole-lesion measurement showed the lowest interobserver variability with both measurement methods
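The heading above indicates that hits are sorted by the word one position after the keyword. Purely as an illustration (not part of WebLSD), here is a minimal Python sketch of such a KWIC-style sort; the kwic function, the example sentences, and the keyword are assumptions for demonstration only.

    # Illustrative sketch: build keyword-in-context lines and sort them by the
    # first word that follows the keyword, as in the listing above.
    # Function name, sentences, and keyword are assumptions, not WebLSD internals.
    import re

    def kwic(sentences, keyword, width=45):
        hits = []
        pattern = re.compile(re.escape(keyword), re.IGNORECASE)
        for s in sentences:
            for m in pattern.finditer(s):
                left = s[:m.start()][-width:]      # truncated left context
                right = s[m.end():]                # right context
                first_after = (right.split() or [""])[0].lower()
                hits.append((first_after, f"{left:>{width}}{m.group(0)}{right}"))
        # Sort by the word immediately after the keyword ("1語後でソート").
        return [line for _, line in sorted(hits)]

    example = [
        "Interobserver variability was assessed with the Cohen kappa.",
        "High interobserver variability warrants further investigation.",
    ]
    for line in kwic(example, "interobserver variability"):
        print(line)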

Technical terms (or usages) not yet included in WebLSD can be submitted via the "新規対訳" (new translation pair) form.
 