Corpus search results (sorted by the first word to the right)
Click a serial number to open the corresponding PubMed page
1 hypodensities at baseline (kappa = 0.87 for interrater reliability).
2 ment Scale all exhibited very high levels of interrater reliability.
3 ently scored by the other raters to evaluate interrater reliability.
4 ent two independent assessments to establish interrater reliability.
5 of care, but a major drawback has been poor interrater reliability.
6 up visits was created to test the intra- and interrater reliability.
7 terpretations of chest radiographs have poor interrater reliability.
8 tion, kappa coefficients were calculated for interrater reliability.
9 t group, nine features had at least moderate interrater reliability.
10 coefficients (ICCs) were computed to compare interrater reliability.
11 idation group, respectively) had the highest interrater reliability.
12 The reviewers had moderate to excellent interrater reliability.
13 were performed in blinded fashion to assess interrater reliability.
14 category, and management for test-retest and interrater reliability.
15 independently by a second researcher to test interrater reliability.
16 arater reliability and from 0.44 to 1.00 for interrater reliability.
17 ing concerns regarding testing confounds and interrater reliability.
18 Fourteen studies evaluated interrater reliability.
24 visual rating protocol achieved the highest interrater reliability and accuracy especially under low
25 ested the Sedation-Agitation Scale (SAS) for interrater reliability and compared it with the Ramsay s
26 ease (ILD), relatively little is known about interrater reliability and construct validity of HRCT-re
27 This study demonstrates that HRCT has good interrater reliability and correlates with indices of th
31 sted for photographic equivalency as well as interrater reliability and intrarater reliability by 5 r
32 This study was conducted to determine the interrater reliability and predictive validity of a set
37 isease activity and damage demonstrated high interrater reliability and were shown to be comprehensiv
40 scores and dilutional CrAg titers, assessed interrater reliability, and determined the clinical corr
41 Secondary outcomes included feasibility, interrater reliability, and efficiency to complete bedsi
42 ted methods of rater training, assessment of interrater reliability, and rater drift in clinical tria
43 ted methods of rater training, assessment of interrater reliability, and rater drift were systematica
47 There was a high degree of agreement and interrater reliability between CEC and RHD outcome deter
49 whole-brain CT perfusion and CT angiography, interrater reliability (Cohen kappa), and adverse events
50 was found to have good internal consistency, interrater reliability, concurrent validity, high sensit
53 ree (14%) of the multicenter trials reported interrater reliability, despite a median number of five
66 and perform detection at the level of human interrater reliability for metastases larger than 6 mm.K
77 udy nurses and intensivist demonstrated high interrater reliability for their CAM-ICU ratings with ka
78 ation over time was observed because of high interrater reliability from the outset (ie, a ceiling ef
79 ead to the diagnosis of a syndrome with high interrater reliability, good face validity, and high pre
81 s identified features with at least moderate interrater reliability (ICC >=0.41) that were independen
83 ined raters achieved moderate to substantial interrater reliability in coding cases using 5 types of
85 atric rheumatologists demonstrated excellent interrater reliability in their global assessments of ju
86 observations demonstrated at least moderate interrater reliability (interrater ICC range, 0.42 [95%
87 echniques and demonstrated good to excellent interrater reliability (intraclass correlation coefficie
88 dapted Cognitive Exam demonstrated excellent interrater reliability (intraclass correlation coefficie
89 ement among laboratories, calculated through interrater reliability (IRR) measures for the PCR test t
90 l records review studies, information on the interrater reliability (IRR) of the data is seldom repor
91 was reviewed by 2 independent investigators, interrater reliability (IRR) was calculated, and the WPV
93 9.0 minutes per patient) and more objective (interrater reliability kappa 0.79 vs 0.45) than the conv
95 es also favored progression with substantial interrater reliability (kappa = 0.80 [95% CI, 0.61-0.99]
96 n = 97, respectively; P < .001), with higher interrater reliability (kappa = 0.91-0.95 for EPI-FLAIR
97 ccuracy of 94% (95% CI 88% to 97%), and high interrater reliability (kappa = 0.94; 95% CI 0.83-1.0).
98 93%, specificities of 98% and 100%, and high interrater reliability (kappa = 0.96; 95% confidence int
99 5% confidence interval, 95-100%), and a high interrater reliability (kappa = 0.96; 95% confidence int
102 Severity Scale was associated with excellent interrater reliability, moderate internal consistency, a
103 lass correlation coefficient as a measure of Interrater reliability, NICS scored as high, or higher t
105 this study was to determine test-retest and interrater reliabilities of RUCAM in retrospectively-ide
108 predefined errors for each procedure minute (interrater reliability of error assessment r > 0.80).
112 Criterion, construct, face validity, and interrater reliability of NICS over time and comparison
116 portray depressed patients to establish the interrater reliability of raters using the Hamilton Depr
118 A scoring cut point of 9 demonstrated good interrater reliability of the Cornell Assessment of Pedi
119 eline development, the external validity and interrater reliability of the instrument were evaluated.
121 e To evaluate the diagnostic performance and interrater reliability of the Liver Imaging Reporting an
127 This study also assessed the intra- and interrater reliability of ultrasound as a measurement to
128 ual reliability of the optimal feature using interrater reliability, percentage agreement (standard d
133 perienced PET researchers participated in an interrater reliability study using both (11)C-DTBZ K(1)
138 .54 (upper 95% confidence limit = 0.77); the interrater reliability was 0.45 (upper 95% confidence li
159 45 studies representing the 3 study designs, interrater reliability was high (Cohen's kappa: 0.73; 95
161 ants in whom visual and SUVR data disagreed, interrater reliability was moderate (kappa = 0.44), but
171 n atypical characteristics yielded very high interrater reliability (weighted kappa = 0.80; bootstrap
172 both the RASS and RS demonstrated excellent interrater reliability (weighted kappa, 0.91 and 0.94, r
175 ty including sensitivity and specificity and interrater reliability were determined using daily delir
179 ctive accuracy of our model approaches human interrater reliability, which simulations suggest would
180 ypes IV, VI, and VI demonstrated a sustained interrater reliability, with an ICC of 0.93 (95% CI, 0.8
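The kappa and ICC values quoted throughout these snippets are chance-corrected agreement statistics. As an illustrative aside (not drawn from any of the cited studies), Cohen's kappa for two raters can be sketched in plain Python using the standard formula kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e is the agreement expected by chance from each rater's label frequencies:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items.

    rater_a, rater_b: equal-length sequences of category labels.
    """
    assert len(rater_a) == len(rater_b) and rater_a, "need paired ratings"
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[label] * counts_b[label] for label in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two raters, four cases, one disagreement.
kappa = cohens_kappa(["pos", "pos", "neg", "neg"],
                     ["pos", "neg", "neg", "neg"])
print(round(kappa, 2))  # 0.5
```

Interpretation conventions such as "moderate" (0.41-0.60) or "substantial" (0.61-0.80), which several snippets above echo, follow the commonly used Landis and Koch benchmarks.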