Corpus search results (sorted by the word one position after the keyword)

Click a serial number to open the corresponding PubMed page.
1  hypodensities at baseline (kappa = 0.87 for interrater reliability).
2 ently scored by the other raters to evaluate interrater reliability.
3 ent two independent assessments to establish interrater reliability.
4  of care, but a major drawback has been poor interrater reliability.
5  were performed in blinded fashion to assess interrater reliability.
6 category, and management for test-retest and interrater reliability.
7 independently by a second researcher to test interrater reliability.
8 arater reliability and from 0.44 to 1.00 for interrater reliability.
9 ing concerns regarding testing confounds and interrater reliability.
10                   Fourteen studies evaluated interrater reliability.
11 ment Scale all exhibited very high levels of interrater reliability.
12         Average neurologic soft sign scores (interrater reliability = 0.74) of women with PTSD owing
13 ested the Sedation-Agitation Scale (SAS) for interrater reliability and compared it with the Ramsay s
14 ease (ILD), relatively little is known about interrater reliability and construct validity of HRCT-re
15   This study demonstrates that HRCT has good interrater reliability and correlates with indices of th
16              The RASS demonstrated excellent interrater reliability and criterion, construct, and fac
17                        The RCT-PQRS had good interrater reliability and internal consistency.
18        The CPM (a) demonstrated satisfactory interrater reliability and internal consistency; (b) exh
19 sted for photographic equivalency as well as interrater reliability and intrarater reliability by 5 r
20    This study was conducted to determine the interrater reliability and predictive validity of a set
21 iew for Prodromal Syndromes showed promising interrater reliability and predictive validity.
22                 There was improvement in the interrater reliability and the level of agreement from E
23                                    We tested interrater reliability and validity in determining the N
24 isease activity and damage demonstrated high interrater reliability and were shown to be comprehensiv
25                        Internal consistency, interrater reliability, and concurrent (criterion) valid
26     Secondary outcomes included feasibility, interrater reliability, and efficiency to complete bedsi
27 ted methods of rater training, assessment of interrater reliability, and rater drift in clinical tria
28 ted methods of rater training, assessment of interrater reliability, and rater drift were systematica
29 ber of raters, rater training, assessment of interrater reliability, and rater drift.
30 ndently scored by 3 dermatopathologists with interrater reliability assessed.
31 ssments of performance were recorded with an interrater reliability between reviewers of 0.99.
32 was found to have good internal consistency, interrater reliability, concurrent validity, high sensit
33                                  Analyses of interrater reliabilities, convergent validities accordin
34                             For a subsample, interrater reliability data were available.
35 ree (14%) of the multicenter trials reported interrater reliability, despite a median number of five
36                                              Interrater reliabilities for intern and team technical s
37 tion coefficient scores were used to measure interrater reliability for both scenarios.
38                                         Good interrater reliability for BPII can be achieved when the
39                                 Further, the interrater reliability for diagnosing schizoaffective di
40                             By contrast, the interrater reliability for erythema was higher during in
41                                              Interrater reliability for multiphase CT angiography is
42                                              Interrater reliability for OSAD was excellent (ICC = 0.9
43                                          The interrater reliability for radiographs was dependent on
44                                          The interrater reliability for specific locations was also e
45                                              Interrater reliability for the Arabic CAM-ICU, overall a
46                          There was excellent interrater reliability for the identification of localiz
47         Rheumatologists and patients had low interrater reliability for the presence of hypercholeste
48             Live scoring showed an excellent interrater reliability for the VES (intraclass correlati
49 udy nurses and intensivist demonstrated high interrater reliability for their CAM-ICU ratings with ka
50 ation over time was observed because of high interrater reliability from the outset (ie, a ceiling ef
51 ead to the diagnosis of a syndrome with high interrater reliability, good face validity, and high pre
52 physicians using structured implicit review (interrater reliability >0.90).
53                  There was an improvement in interrater reliability in the second phase of the study.
54 atric rheumatologists demonstrated excellent interrater reliability in their global assessments of ju
55 dapted Cognitive Exam demonstrated excellent interrater reliability (intraclass correlation coefficie
56 l records review studies, information on the interrater reliability (IRR) of the data is seldom repor
57 9.0 minutes per patient) and more objective (interrater reliability kappa 0.79 vs 0.45) than the conv
58 ccuracy of 94% (95% CI 88% to 97%), and high interrater reliability (kappa = 0.94; 95% CI 0.83-1.0).
59 5% confidence interval, 95-100%), and a high interrater reliability (kappa = 0.96; 95% confidence int
60 93%, specificities of 98% and 100%, and high interrater reliability (kappa = 0.96; 95% confidence int
61                                              Interrater reliability (Kendall's coefficient of concord
62                                              Interrater reliability measures across subgroup comparis
63 Severity Scale was associated with excellent interrater reliability, moderate internal consistency, a
64 lass correlation coefficient as a measure of Interrater reliability, NICS scored as high, or higher t
65                   Internal consistencies and interrater reliabilities of factors were stable across a
66  this study was to determine test-retest and interrater reliabilities of RUCAM in retrospectively-ide
67 predefined errors for each procedure minute (interrater reliability of error assessment r > 0.80).
68                                              Interrater reliability of handgrip dynamometry was very
69                                              Interrater reliability of handheld dynamometry was compa
70                                          The interrater reliability of many of the key concepts in ps
71     Criterion, construct, face validity, and interrater reliability of NICS over time and comparison
72                                              Interrater reliability of nodule detection with MR imagi
73                                              Interrater reliability of proSPI-s was assessed in 12 pa
74  portray depressed patients to establish the interrater reliability of raters using the Hamilton Depr
75   A scoring cut point of 9 demonstrated good interrater reliability of the Cornell Assessment of Pedi
76 e To evaluate the diagnostic performance and interrater reliability of the Liver Imaging Reporting an
77                                              Interrater reliability of the Medical Research Council s
78                                              Interrater reliability of the Medical Research Council-s
79                                          The interrater reliability of the modified Advocacy-Inquiry
80                                          The interrater reliability of the NDJ was excellent, with an
81                                          The interrater reliability of the overall scale showed an IC
82                    The kappa coefficient for interrater reliability ranged from 0.41 (95% CI, 0.31 to
83                                              Interrater reliability (reported as intraclass correlati
84 perienced PET researchers participated in an interrater reliability study using both (11)C-DTBZ K(1)
85                                     The poor interrater reliability suggests that if digital ulcerati
86                  Outcome measures had higher interrater reliability than process measures.
87                                              Interrater reliability, validity, and dimensionality of
88                                              Interrater reliability was (k = 0.79).
89 .54 (upper 95% confidence limit = 0.77); the interrater reliability was 0.45 (upper 95% confidence li
90                                              Interrater reliability was 0.536 (95% confidence interva
91                                              Interrater reliability was 0.91 (intraclass correlation
92                                    Excellent interrater reliability was achieved in all assessments (
93                                              Interrater reliability was assessed by using a set of te
94                                          The interrater reliability was assessed using intraclass cor
95                                              Interrater reliability was assessed using kappa statisti
96                                              Interrater reliability was assessed, using the five scal
97                                              Interrater reliability was determined by using a two-way
98                                              Interrater reliability was estimated using a multirater
99                     MR images were assessed; interrater reliability was evaluated.
100                                          VTI interrater reliability was excellent (intraclass correla
101                                              Interrater reliability was excellent for CSAMI Activity
102                            We found that the interrater reliability was excellent with the FOUR score
103 ants in whom visual and SUVR data disagreed, interrater reliability was moderate (kappa = 0.44), but
104                                              Interrater reliability was poorer (weighted kappa = 0.46
105                                    Excellent interrater reliability was present (correlation coeffici
106 was scored on a six-point ordinal scale, and interrater reliability was tested.
107                                              Interrater reliability was then explored.
108 n atypical characteristics yielded very high interrater reliability (weighted kappa = 0.80; bootstrap
109  both the RASS and RS demonstrated excellent interrater reliability (weighted kappa, 0.91 and 0.94, r
110                                              Interrater reliabilities were .82 or greater for all MRI
111                                          The interrater reliabilities were highest for the PDAI, foll
112 ty including sensitivity and specificity and interrater reliability were determined using daily delir
113                           Adequate levels of interrater reliability were found for 24 of 26 items.
114 ata also indicate the presence of acceptable interrater reliability when using the Ottawa GRS.
115 ypes IV, VI, and VI demonstrated a sustained interrater reliability, with an ICC of 0.93 (95% CI, 0.8
116                               There was high interrater reliability, with an intraclass correlation c
