Corpus search results (sorted by the first word following the keyword)
Click a serial number to open the corresponding PubMed entry.
1 hypodensities at baseline (kappa = 0.87 for interrater reliability).
2 ently scored by the other raters to evaluate interrater reliability.
3 ent two independent assessments to establish interrater reliability.
4 of care, but a major drawback has been poor interrater reliability.
5 were performed in blinded fashion to assess interrater reliability.
6 category, and management for test-retest and interrater reliability.
7 independently by a second researcher to test interrater reliability.
8 arater reliability and from 0.44 to 1.00 for interrater reliability.
9 ing concerns regarding testing confounds and interrater reliability.
10 Fourteen studies evaluated interrater reliability.
11 ment Scale all exhibited very high levels of interrater reliability.
13 ested the Sedation-Agitation Scale (SAS) for interrater reliability and compared it with the Ramsay s
14 ease (ILD), relatively little is known about interrater reliability and construct validity of HRCT-re
15 This study demonstrates that HRCT has good interrater reliability and correlates with indices of th
19 sted for photographic equivalency as well as interrater reliability and intrarater reliability by 5 r
20 This study was conducted to determine the interrater reliability and predictive validity of a set
24 isease activity and damage demonstrated high interrater reliability and were shown to be comprehensiv
26 Secondary outcomes included feasibility, interrater reliability, and efficiency to complete bedsi
27 ted methods of rater training, assessment of interrater reliability, and rater drift in clinical tria
28 ted methods of rater training, assessment of interrater reliability, and rater drift were systematica
32 was found to have good internal consistency, interrater reliability, concurrent validity, high sensit
35 ree (14%) of the multicenter trials reported interrater reliability, despite a median number of five
49 udy nurses and intensivist demonstrated high interrater reliability for their CAM-ICU ratings with ka
50 ation over time was observed because of high interrater reliability from the outset (ie, a ceiling ef
51 ead to the diagnosis of a syndrome with high interrater reliability, good face validity, and high pre
54 atric rheumatologists demonstrated excellent interrater reliability in their global assessments of ju
55 dapted Cognitive Exam demonstrated excellent interrater reliability (intraclass correlation coefficie
56 l records review studies, information on the interrater reliability (IRR) of the data is seldom repor
57 9.0 minutes per patient) and more objective (interrater reliability kappa 0.79 vs 0.45) than the conv
58 ccuracy of 94% (95% CI 88% to 97%), and high interrater reliability (kappa = 0.94; 95% CI 0.83-1.0).
59 5% confidence interval, 95-100%), and a high interrater reliability (kappa = 0.96; 95% confidence int
60 93%, specificities of 98% and 100%, and high interrater reliability (kappa = 0.96; 95% confidence int
63 Severity Scale was associated with excellent interrater reliability, moderate internal consistency, a
64 lass correlation coefficient as a measure of interrater reliability, NICS scored as high, or higher t
66 this study was to determine test-retest and interrater reliabilities of RUCAM in retrospectively-ide
67 predefined errors for each procedure minute (interrater reliability of error assessment r > 0.80).
74 portray depressed patients to establish the interrater reliability of raters using the Hamilton Depr
75 A scoring cut point of 9 demonstrated good interrater reliability of the Cornell Assessment of Pedi
76 e To evaluate the diagnostic performance and interrater reliability of the Liver Imaging Reporting an
84 perienced PET researchers participated in an interrater reliability study using both (11)C-DTBZ K(1)
89 .54 (upper 95% confidence limit = 0.77); the interrater reliability was 0.45 (upper 95% confidence li
103 ants in whom visual and SUVR data disagreed, interrater reliability was moderate (kappa = 0.44), but
108 n atypical characteristics yielded very high interrater reliability (weighted kappa = 0.80; bootstrap
109 both the RASS and RS demonstrated excellent interrater reliability (weighted kappa, 0.91 and 0.94, r
112 ty including sensitivity and specificity and interrater reliability were determined using daily delir
115 ypes IV, VI, and VI demonstrated a sustained interrater reliability, with an ICC of 0.93 (95% CI, 0.8
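
Nearly all of the hits above report agreement as Cohen's kappa or an intraclass correlation coefficient. As a minimal illustrative sketch (not part of the corpus output; the function name and sample labels are invented for illustration), the following Python snippet computes unweighted Cohen's kappa for two raters labeling the same items:

    from collections import Counter

    def cohen_kappa(rater_a, rater_b):
        # Unweighted Cohen's kappa: chance-corrected agreement between two raters.
        n = len(rater_a)
        # Observed agreement: fraction of items the raters labeled identically.
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Expected chance agreement from each rater's marginal label frequencies.
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical example: two raters classify 10 cases as positive/negative.
    a = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "pos", "neg", "neg"]
    b = ["pos", "pos", "neg", "pos", "pos", "neg", "pos", "neg", "neg", "neg"]
    print(round(cohen_kappa(a, b), 2))  # 0.6

With these toy labels the raters agree on 8 of 10 items (p_o = 0.8) and chance agreement from the marginals is p_e = 0.5, giving kappa = (0.8 - 0.5) / (1 - 0.5) = 0.6.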