Corpus search results (sorted by the word one position after the keyword)
Click a serial number to open the corresponding PubMed page.
1 raters), and to a subset of 80 scans (three raters).
2 .04) was achieved with the more experienced rater.
3 ntensity, diagnosis, and gender of poser and rater.
4 ent, the final opinion was made by the third rater.
5 Method for the ICU and 47% by the reference rater.
6 Cs ranging from 0.96 to 0.99 with 5 separate raters.
7 y an expert neuroradiologist and 3 clinician raters.
8 e Examination were performed by 76 different raters.
9 ensus was defined as agreement by >/= 75% of raters.
10 ich were independently evaluated by pairs of raters.
11 ), as assessed by blinded central AIMS video raters.
12 rrelations, was extracted independently by 2 raters.
13 es and direct observations of city blocks by raters.
14 D revealed by fundus photography and trained raters.
15 ficantly more consistent scoring than novice raters.
16 rved and rated by patient actors and faculty raters.
17 ion pre, intra, and postoperatively by blind raters.
18 (N = 6) and an expert/novice (N = 6) pair of raters.
19 chosen for accuracy spot-checks by reference raters.
20 eaching skills, as judged by medical student raters.
21 ic influences that are not shared with other raters.
22 orrelated (r = 0.79) overall measures across raters.
23 scored on the criteria by 2 of 3 independent raters.
24 esonance imaging features by two independent raters.
25 nario was evaluated by two interprofessional raters.
26 ities were analyzed by 2 independent blinded raters.
27 g it against the performance of expert human raters.
28 ility was described as the agreement between raters.
29 ohen kappa statistics for pairs of different raters.
30 e randomly selected patients (15 patients by rater 1, and 16 patients by rater 2), followed by USG.
34 he degree of agreement among 100 independent raters about the likely location to contain a target obj
35 c analysis of participants with a consistent rater across the trial revealed greater improvement in t
48 h septic shock, we assessed intra- and inter-rater agreement of electrocardiogram interpretation, and
49 relation coefficient overestimated the inter-rater agreement of PASI as compared with the intra-class
50 is study aims to assess the intra- and inter-rater agreement of retinal pathologies observed on fundu
52 nresponsiveness score showed excellent inter-rater agreement overall and at each of the five hospital
53 Manchester Triage System shows a wide inter-rater agreement range with a prevalence of good and very
54 atistics (r and rho) for assessing the inter-rater agreement reliability of asymmetrically distribute
55 analysis to patients with a PASI <20, inter-rater agreement severely decreased (r=0.38, rho=0.41, IC
58 study of internal consistency, reliability, rater agreement, and the relationship with measures of m
66 ificant correlation between counts by visual raters and automated detection of ePVSs in the same sect
67 The classification reliability between human raters and between human raters and the neural network c
71 association between attractive VHI in female raters and preferences for attractive (low) WCR in male
73 ility between human raters and between human raters and the neural network classifier was better than
76 and were delineated on T2w MRI by two expert raters and were used to construct statistical shape atla
77 ronto-insula) were applied to 257 scans (two raters), and to a subset of 80 scans (three raters).
78 e of a well-defined glossary and training of raters are essential to ensure the optimal performance o
82 onth intervals during a 1-year period, blind raters assessed the domains of suicidal behavior, aggres
85 cale, and 10 were assessed twice by the same rater at different times to assess intrarater reliabilit
87 is considerable variation in the ability of raters at different levels of training to identify inapp
88 he interobserver reliability between the two raters (attorneys) in interpretation of six items charac
94 milton Depression Rating Scale (completed by raters blind to diagnosis and randomization status), sel
97 patient resuscitations and coded by trained raters blinded to condition assignment and study hypothe
98 nces were video recorded and assessed by two raters blinded to group allocation using the modified Ad
102 A multicenter randomized, sham-controlled, rater-blinded and patient-blinded trial was conducted fr
110 re observed in either sex between aspects of rater body shape and strength of preferences for attract
111 ttractive opposite-sex body shapes, and that rater body traits -with the exception of VHI in female r
112 ttractive traits in opposite-sex bodies, and raters' body shape, self-perceived attractiveness, and s
116 ome measure was based on ADHD assessments by raters closest to the therapeutic setting, all dietary (
117 was classified differently by the clinician raters compared with the neuroradiologist or computer pr
118 chs-Carpenter Quality of Life Scale, which a rater completes on the basis of the patient's self-repor
120 he revised COMFORT behavioral scale; and (2) Rater-context problems caused by (a) unfamiliarity with
128 ects of genes and not attributable to shared rater effects, clinical referral biases, or covariation
132 n Vicodin," and "OxyContin." Two independent raters examined the links generated in each search and r
137 nd sensitivity and specificity of individual raters for identifying inappropriate tests were calculat
138 -kappa between bedside nurses and reference raters for the Richmond Agitation-Sedation Scale were 0.
139 dian redness score reported by the 2 blinded raters for the treatment and control sides was 2.0 (inte
140 The percentage agreement among physician raters for treatment decisions in 28 stroke patients was
143 Phase 2: OSAD was feasible, reliable [inter-rater ICC (intraclass correlation coefficient) = 0.88, t
145 frame-by-frame by an experimentally blinded rater; (ii) automatic retrieval of proxies by TracMouse
146 s able to account for 58% of the variance in raters' impressions of previously unseen faces, and fact
147 Studies were independently reviewed by 12 raters in 3 groups using a systematic and blinded proces
149 tment decisions was remarkably similar among raters in presence or absence of advanced healthcare dir
151 stematic differences existed between the two raters in the following Short-Form 36 domains: physical
154 s more accurate in final infarct prediction, rater independent, and provided exclusive information on
161 a prospective multicenter study, two blinded raters independently examined cervical spine magnetic re
162 In the second part of the study, the two raters independently performed the scratch test on separ
165 ed and the scratch test was performed by two raters independently, followed by ultrasound (USG) as th
166 0.99 across studies and follow-up) and inter-rater (intraclass correlation, 0.97) reliability were fo
168 re, cluster-randomised controlled trial with raters masked to an online computer-generated randomisat
169 ing, first, visual assessment by independent raters masked to clinical information and, second, semia
170 Static PET images were evaluated by trained raters masked to clinical status and regional analysis.
174 traits -with the exception of VHI in female raters- may not be good predictors of these preferences
177 and the videotapes were rated by two blinded raters on a scale of 0=normal, 1=mild convergence spasm
178 reliability and intrarater reliability by 5 raters on a set of 80 total patient scars, 20 of which w
180 bility of criteria during retesting, between raters, over time, and across settings), 2) content vali
181 ts' symptom ratings indicate that some proxy raters overestimate whereas others underestimate patient
183 an kappa statistic for all variables and all rater pairs for which a kappa could be calculated was 0.
185 mantic content analysis of the interviews, a rater panel trained in this method independently coded a
186 to examine associations between strength of rater preferences for attractive traits in opposite-sex
188 scoring by expert versus novices-ie, expert raters produce significantly more consistent scoring tha
192 disorders document adequately the number of raters, rater training, assessment of interrater reliabi
197 ach patient with a high intrarater and inter-rater reliability (intraclass correlation coefficients 0
199 o clinical assessment and had the best inter-rater reliability (mean kappa = 0.78) and diagnostic acc
200 were always assessable with excellent inter-rater reliability (numeric rating scale intraclass corre
203 95% confidence interval = 0.71, 0.75) inter-rater reliability among five investigators (two physicia
206 ation coefficient was used to evaluate inter-rater reliability between the nurse and the physician fo
208 method surveys; (ii) determination of inter-rater reliability for each type of pathology in each reg
215 lass correlation coefficient (ICC) for inter-rater reliability of 0.76 (95% confidence interval (CI)
216 lass correlation coefficient (ICC) for inter-rater reliability of 0.86 for the activity score (95% co
220 t reports on a study that examined the inter-rater reliability of the Full Outline of Unresponsivenes
221 h pemphigus to estimate the inter- and intra-rater reliability of the PDAI and the recently described
222 the psychometric properties including inter-rater reliability of the prototype SPLINTS behavioural r
224 m response modelling (Rasch analysis), inter-rater reliability testing, construct analysis and correl
225 he studies investigated the inter- and intra-rater reliability using the "kappa" statistic; the valid
226 lass correlation coefficient for MUNIX intra-rater reliability was 0.89 and for inter-rater reliabili
235 dies to develop and test: feasibility, inter-rater reliability, repeatability and external validity.
243 NAS for NAFLD and NASH with reasonable inter-rater reproducibility that should be useful for studies
248 counted for 45% of the observed variation in raters' scores for the borderline videos (P < .001).
249 hese results suggest that among male raters, rater self-perceived attractiveness and sociosexuality a
250 index [VHI] in both sexes) and also measured rater self-perceived attractiveness and sociosexuality.
255 Unit also contained sufficient data on inter-rater/test-retest reliability, responsiveness, and feasi
256 ver agreement (median of 30 CT scans and six raters), the prevalence of all early infarction signs wa
261 n=11) support the ability of two independent raters to obtain similar results when calculating total
264 rs document adequately the number of raters, rater training, assessment of interrater reliability, an
266 ate their fellows more highly than the ROSCE raters, typically there was agreement between the progra
269 and -Altman plot indicated that, on average, raters underestimated the distance from the right costal
270 of the remaining 312 articles, eight trained raters used a coding system to record standardized nursi
271 performance was assessed by a blinded expert rater using Global Operative Assessment of Laparoscopic
272 room (OR) assessed by 3 independent, masked raters using a previously validated task-specific assess
274 (kappa) between bedside nurses and reference raters using the Confusion Assessment Method for the ICU
275 s to establish the interrater reliability of raters using the Hamilton Depression Rating Scale (HDRS)
277 -up period as assessed by blinded diagnostic raters using the telephone-administered Structured Clini
279 noise distribution from published mRS inter-rater variability to generate an error percentage for "s
280 rey matter at C2/C3 level was close to inter-rater variability, reaching an accuracy (DSC) of 0.826 f
281 ements with accuracy equivalent to the inter-rater variability, with a Dice score (DSC) of 0.967 at C
283 from bedside nurses and a reference-standard rater was very high for both the sedation scale and the
285 Inter-rater reliability for noncardiologist raters was modest (unweighted Cohen kappa, 0.51, 95% con
287 class correlation coefficients (ICC) between raters were 0.99 (95% confidence intervals (CI) 0.98-1.0
294 lue (PPV and NPV) for the two Arabic CAM-ICU raters, where calculations were based on considering the
295 Face validity was assessed by surveying raters who used both the Adapted Cognitive Exam and Mini
296 clinical symptoms was evaluated by 3 blinded raters with a standardized video protocol and clinical r
299 benefit as assessed by central, independent raters with the Parkinson's disease-adapted scale for as
300 the overall SCAR score predicted whether the rater would consider the scar undesirable, with an odds
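Many of the concordance lines above cite Cohen's kappa and the intraclass correlation coefficient as measures of inter-rater agreement. As a minimal illustrative sketch (not part of the corpus output; the function name and example data are hypothetical), Cohen's kappa for two raters over categorical labels can be computed as:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters' categorical labels of the same items."""
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    # Observed agreement: fraction of items labeled identically.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum((c1[k] / n) * (c2[k] / n) for k in set(c1) | set(c2))
    return (p_o - p_e) / (1 - p_e)

# Example: two raters classify six items, disagreeing on one.
kappa = cohens_kappa(["yes", "yes", "no", "no", "yes", "no"],
                     ["yes", "no", "no", "no", "yes", "no"])
```

Kappa corrects raw percent agreement for agreement expected by chance, which is why several of the lines above contrast it with simple correlation when marginal distributions are skewed.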
Technical terms (or usages) not yet included in WebLSD can be submitted via "新規対訳" (new translation pair).