
Corpus search results (sorted by the first word to the right of the keyword)

Click a hit's serial number to open the corresponding PubMed page
1  hypodensities at baseline (kappa = 0.87 for interrater reliability).
2 ment Scale all exhibited very high levels of interrater reliability.
3 ently scored by the other raters to evaluate interrater reliability.
4 ent two independent assessments to establish interrater reliability.
5  of care, but a major drawback has been poor interrater reliability.
6 up visits was created to test the intra- and interrater reliability.
7 terpretations of chest radiographs have poor interrater reliability.
8 tion, kappa coefficients were calculated for interrater reliability.
9 t group, nine features had at least moderate interrater reliability.
10 coefficients (ICCs) were computed to compare interrater reliability.
11 idation group, respectively) had the highest interrater reliability.
12      The reviewers had moderate to excellent interrater reliability.
13  were performed in blinded fashion to assess interrater reliability.
14 category, and management for test-retest and interrater reliability.
15 independently by a second researcher to test interrater reliability.
16 arater reliability and from 0.44 to 1.00 for interrater reliability.
17 ing concerns regarding testing confounds and interrater reliability.
18                   Fourteen studies evaluated interrater reliability.
19         Average neurologic soft sign scores (interrater reliability = 0.74) of women with PTSD owing
20 validity among guideline developers and good interrater reliability across trained reviewers.
21                      Purpose To evaluate the interrater reliability among radiologists examining post
22                                           An interrater reliability analysis was performed using the
23        For interrater agreement analysis and interrater reliability analysis, multirater Fleiss kappa
24  visual rating protocol achieved the highest interrater reliability and accuracy especially under low
25 ested the Sedation-Agitation Scale (SAS) for interrater reliability and compared it with the Ramsay s
26 ease (ILD), relatively little is known about interrater reliability and construct validity of HRCT-re
27   This study demonstrates that HRCT has good interrater reliability and correlates with indices of th
28              The RASS demonstrated excellent interrater reliability and criterion, construct, and fac
29                        The RCT-PQRS had good interrater reliability and internal consistency.
30        The CPM (a) demonstrated satisfactory interrater reliability and internal consistency; (b) exh
31 sted for photographic equivalency as well as interrater reliability and intrarater reliability by 5 r
32    This study was conducted to determine the interrater reliability and predictive validity of a set
33 iew for Prodromal Syndromes showed promising interrater reliability and predictive validity.
34                                              Interrater reliability and responsiveness were each asse
35                 There was improvement in the interrater reliability and the level of agreement from E
36                                    We tested interrater reliability and validity in determining the N
37 isease activity and damage demonstrated high interrater reliability and were shown to be comprehensiv
38       Percentage of visual reader agreement, interrater reliability, and agreement of each visual rea
39                        Internal consistency, interrater reliability, and concurrent (criterion) valid
40  scores and dilutional CrAg titers, assessed interrater reliability, and determined the clinical corr
41     Secondary outcomes included feasibility, interrater reliability, and efficiency to complete bedsi
42 ted methods of rater training, assessment of interrater reliability, and rater drift in clinical tria
43 ted methods of rater training, assessment of interrater reliability, and rater drift were systematica
44 ber of raters, rater training, assessment of interrater reliability, and rater drift.
45 ndently scored by 3 dermatopathologists with interrater reliability assessed.
46 with high intrarater (average ICC, 0.91) and interrater reliability (average ICC, 0.78).
47     There was a high degree of agreement and interrater reliability between CEC and RHD outcome deter
48 ssments of performance were recorded with an interrater reliability between reviewers of 0.99.
49 whole-brain CT perfusion and CT angiography, interrater reliability (Cohen kappa), and adverse events
50 was found to have good internal consistency, interrater reliability, concurrent validity, high sensit
51                                  Analyses of interrater reliabilities, convergent validities accordin
52                             For a subsample, interrater reliability data were available.
53 ree (14%) of the multicenter trials reported interrater reliability, despite a median number of five
54 y included 95 DBT examinations with moderate interrater reliability (Fleiss k = 0.45).
55 y included 95 DBT examinations with moderate interrater reliability (Fleiss kappa = 0.45).
56                                              Interrater reliabilities for intern and team technical s
57   Most body sites exhibited moderate to good interrater reliabilities for scale and erythema.
58 tion coefficient scores were used to measure interrater reliability for both scenarios.
59                                         Good interrater reliability for BPII can be achieved when the
60                                              Interrater reliability for care-received classification
61                                              Interrater reliability for classification of care receiv
62                                 Further, the interrater reliability for diagnosing schizoaffective di
63                             By contrast, the interrater reliability for erythema was higher during in
64                                       Expert interrater reliability for gamma spikes (percentage agre
65                                              Interrater reliability for infarct size between the core
66  and perform detection at the level of human interrater reliability for metastases larger than 6 mm.K
67                                              Interrater reliability for multiparametric MRI versus PE
68                                              Interrater reliability for multiphase CT angiography is
69                                              Interrater reliability for OSAD was excellent (ICC = 0.9
70                                          The interrater reliability for radiographs was dependent on
71                                          The interrater reliability for specific locations was also e
72                                              Interrater reliability for the Arabic CAM-ICU, overall a
73                                              Interrater reliability for the final NEATS instrument ha
74                          There was excellent interrater reliability for the identification of localiz
75         Rheumatologists and patients had low interrater reliability for the presence of hypercholeste
76             Live scoring showed an excellent interrater reliability for the VES (intraclass correlati
77 udy nurses and intensivist demonstrated high interrater reliability for their CAM-ICU ratings with ka
78 ation over time was observed because of high interrater reliability from the outset (ie, a ceiling ef
79 ead to the diagnosis of a syndrome with high interrater reliability, good face validity, and high pre
80 physicians using structured implicit review (interrater reliability >0.90).
81 s identified features with at least moderate interrater reliability (ICC >=0.41) that were independen
82                                  We assessed interrater reliability in a subgroup of 180 stroke event
83 ined raters achieved moderate to substantial interrater reliability in coding cases using 5 types of
84                  There was an improvement in interrater reliability in the second phase of the study.
85 atric rheumatologists demonstrated excellent interrater reliability in their global assessments of ju
86  observations demonstrated at least moderate interrater reliability (interrater ICC range, 0.42 [95%
87 echniques and demonstrated good to excellent interrater reliability (intraclass correlation coefficie
88 dapted Cognitive Exam demonstrated excellent interrater reliability (intraclass correlation coefficie
89 ement among laboratories, calculated through interrater reliability (IRR) measures for the PCR test t
90 l records review studies, information on the interrater reliability (IRR) of the data is seldom repor
91 was reviewed by 2 independent investigators, interrater reliability (IRR) was calculated, and the WPV
92 0 cohort year to assess standardized patient interrater reliability (IRR).
93 9.0 minutes per patient) and more objective (interrater reliability kappa 0.79 vs 0.45) than the conv
94                               There was poor interrater reliability (kappa = 0.36 [range, 0.06-0.64];
95 es also favored progression with substantial interrater reliability (kappa = 0.80 [95% CI, 0.61-0.99]
96 n = 97, respectively; P < .001), with higher interrater reliability (kappa = 0.91-0.95 for EPI-FLAIR
97 ccuracy of 94% (95% CI 88% to 97%), and high interrater reliability (kappa = 0.94; 95% CI 0.83-1.0).
98 93%, specificities of 98% and 100%, and high interrater reliability (kappa = 0.96; 95% confidence int
99 5% confidence interval, 95-100%), and a high interrater reliability (kappa = 0.96; 95% confidence int
100                                              Interrater reliability (Kendall's coefficient of concord
101                                              Interrater reliability measures across subgroup comparis
102 Severity Scale was associated with excellent interrater reliability, moderate internal consistency, a
103 lass correlation coefficient as a measure of Interrater reliability, NICS scored as high, or higher t
104                   Internal consistencies and interrater reliabilities of factors were stable across a
105  this study was to determine test-retest and interrater reliabilities of RUCAM in retrospectively-ide
106 condary outcomes included the intrarater and interrater reliabilities of the BF-VR.
107              The objective was to assess the interrater reliability of ABSIS and PDAI scores and thei
108 predefined errors for each procedure minute (interrater reliability of error assessment r > 0.80).
109                                              Interrater reliability of handgrip dynamometry was very
110                                              Interrater reliability of handheld dynamometry was compa
111                                          The interrater reliability of many of the key concepts in ps
112     Criterion, construct, face validity, and interrater reliability of NICS over time and comparison
113                                              Interrater reliability of nodule detection with MR imagi
114                                              Interrater reliability of proSPI-s was assessed in 12 pa
115                       How and to what extent interrater reliability of radiomics features vary in res
116  portray depressed patients to establish the interrater reliability of raters using the Hamilton Depr
117                                          The interrater reliability of the BF-VR algorithm was excell
118   A scoring cut point of 9 demonstrated good interrater reliability of the Cornell Assessment of Pedi
119 eline development, the external validity and interrater reliability of the instrument were evaluated.
120                                              Interrater reliability of the lesion assessment was high
121 e To evaluate the diagnostic performance and interrater reliability of the Liver Imaging Reporting an
122                                              Interrater reliability of the Medical Research Council s
123                                              Interrater reliability of the Medical Research Council-s
124                                          The interrater reliability of the modified Advocacy-Inquiry
125                                          The interrater reliability of the NDJ was excellent, with an
126                                          The interrater reliability of the overall scale showed an IC
127      This study also assessed the intra- and interrater reliability of ultrasound as a measurement to
128 ual reliability of the optimal feature using interrater reliability, percentage agreement (standard d
129                    The kappa coefficient for interrater reliability ranged from 0.41 (95% CI, 0.31 to
130                                          The interrater reliability ranged from kappa = 0.86 to kappa
131                                              Interrater reliability (reported as intraclass correlati
132 index distribution, which was assessed using interrater reliability scores.
133 perienced PET researchers participated in an interrater reliability study using both (11)C-DTBZ K(1)
134                                     The poor interrater reliability suggests that if digital ulcerati
135                  Outcome measures had higher interrater reliability than process measures.
136                                              Interrater reliability, validity, and dimensionality of
137                                              Interrater reliability was (k = 0.79).
138 .54 (upper 95% confidence limit = 0.77); the interrater reliability was 0.45 (upper 95% confidence li
139                                              Interrater reliability was 0.536 (95% confidence interva
140                         The Fleiss kappa for interrater reliability was 0.78 (95% CI: 0.77, 0.78), an
141                                              Interrater reliability was 0.91 (intraclass correlation
142                                    Excellent interrater reliability was achieved in all assessments (
143                                              Interrater reliability was assessed by percent concordan
144                                              Interrater reliability was assessed by using a set of te
145                                          The interrater reliability was assessed using intraclass cor
146                                              Interrater reliability was assessed using kappa statisti
147                                          The interrater reliability was assessed using weighted kappa
148                                              Interrater reliability was assessed, using the five scal
149                                              Interrater reliability was determined by using a two-way
150                                              Interrater reliability was determined.
151                                              Interrater reliability was estimated using a multirater
152                     MR images were assessed; interrater reliability was evaluated.
153                                          VTI interrater reliability was excellent (intraclass correla
154                                              Interrater reliability was excellent for all ancillary t
155                                              Interrater reliability was excellent for CSAMI Activity
156                                              Interrater reliability was excellent for methods requiri
157                            We found that the interrater reliability was excellent with the FOUR score
158                                              Interrater reliability was fair (weighted kappa 0.47 and
159 45 studies representing the 3 study designs, interrater reliability was high (Cohen's kappa: 0.73; 95
160                                      Overall interrater reliability was higher for the PRIMARY scale
161 ants in whom visual and SUVR data disagreed, interrater reliability was moderate (kappa = 0.44), but
162                            For infarct size, interrater reliability was moderate (kappa = 0.675; 95%
163                                              Interrater reliability was moderate (kappa = 0.68) among
164                             Fair-to-moderate interrater reliability was observed between the resident
165          Light generalization of Cohen k for interrater reliability was performed.
166 ates, and a comparison with neuroradiologist interrater reliability was performed.
167                                              Interrater reliability was poorer (weighted kappa = 0.46
168                                    Excellent interrater reliability was present (correlation coeffici
169 was scored on a six-point ordinal scale, and interrater reliability was tested.
170                                              Interrater reliability was then explored.
171 n atypical characteristics yielded very high interrater reliability (weighted kappa = 0.80; bootstrap
172  both the RASS and RS demonstrated excellent interrater reliability (weighted kappa, 0.91 and 0.94, r
173                                              Interrater reliabilities were .82 or greater for all MRI
174                                          The interrater reliabilities were highest for the PDAI, foll
175 ty including sensitivity and specificity and interrater reliability were determined using daily delir
176                           Code frequency and interrater reliability were determined using NVIVO softw
177                           Adequate levels of interrater reliability were found for 24 of 26 items.
178 ata also indicate the presence of acceptable interrater reliability when using the Ottawa GRS.
179 ctive accuracy of our model approaches human interrater reliability, which simulations suggest would
180 ypes IV, VI, and VI demonstrated a sustained interrater reliability, with an ICC of 0.93 (95% CI, 0.8
181                               There was high interrater reliability, with an intraclass correlation c
182                      Indices had low-to-fair interrater reliability within institutions (kappa range,
