1 tion of errors of prediction (SDEP) of 0.51 (leave-one-out).
2 Leave-one-out analyses and pleiotropy-robust methods did
3 Leave-one-out analyses confirmed the stability of these
4 Leave-one-out analyses generated highly significant poly
5 manual segmentations in the healthy control leave-one-out analyses in two of the three atlas databas
6 odel was built and validated by the standard leave one out analysis.
7 Subgroup analyses included a leave-one-out analysis and the omission of "high" ROB st
8 o [RR], 1.16; 95% CI, 1.00-1.34), although a leave-one-out analysis demonstrated significance in this
9 Leave-one-out analysis demonstrated that the gray matter
10 A leave-one-out analysis reveals that these models predict
11 Furthermore, the leave-one-out analysis successfully cross-validated the
12 ation-induced change in intelligibility in a leave-one-out analysis.
13 ominantly returned consistent results in the leave-one-out analysis.
14 We validate these results with a leave-one-out analysis.
15 70 SNPs with accuracy up to 99% according to leave-one-out and 10-fold cross-validation.
16 The leave-one-out and energy partition models both performed
17 illustrate the theory and performance of the leave-one-out and energy partition models for estimating
18 f 16000 and 18522 simulated subjects for the leave-one-out and independent datasets, the model was ab
19 on performance of TPCR was evaluated by both leave-one-out and leave-half-out cross-validation using
20 ntiating the two case groups was assessed by leave-one-out and Monte Carlo cross-validations.
21 cluded MR-Egger, weighted-median estimator, 'leave-one-out', and multivariable MR analyses.
22 The model was then cross-validated with the leave-one-out approach revealing only 2% residual standa
23 Sensitivity analysis was conducted using a leave-one-out approach to assess the robustness of the r
24 We used a leave-one-out approach to evaluate methylation scores co
25 diction algorithm based on these genes and a leave-one-out approach, we assigned sample class to thes
26 l operating condition (NOC) samples, using a leave-one-out approach.
27 Sensitivity analysis was performed using a leave-one-out approach.
28 characteristic analysis and validated with a leave-one-out approach.
29 two principal components was assessed by the leave-one-out approach.
30 Using a 'leave-one-out' approach we find average success rates be
31 ine-learning classification system that uses leave-one-out bias optimization and discriminates among
32 been used as a proof of concept for a novel "leave-one-out" biosensor design in which a protein that
33 n) or large bias (such as resubstitution and leave-one-out bootstrap).
34 The technique was cross (leave-one-out), both internally and externally, validate
35 ies from benign lesions was evaluated with a leave-one-out-by-case analysis.
36 e classifier, and then computes whether this leave-one-out classifier correctly classifies the delete
37 A leave-one-out classifier successfully distinguished auti
38 to: (i) calculate steady-state abundances of leave-one-out communities and use these results to infer
39 We achieve a low leave one out cross validation error of <10% for the can
40 characterized by the values of the internal leave one out cross-validated R2 (q2) for the training s
41 rtial least squares regression model using a leave one out cross-validation approach.
42 tica spectrum disorder/multiple sclerosis; a leave one out cross-validation procedure assessed the pe
43 sex-stratified, age-matched 50:50 split, and leave one out cross-validation were performed.
44 ME model of groundwater (222)Rn results in a leave-one out cross-validation r(2) of 0.46 (Pearson cor
45 using elastic net regression analysis with a leave-one-out cross validation (CV) and 100 CV runs.
46 The leave-one-out cross validation (LOOCV) method was implem
47 Average model performances based on leave-one-out cross validation (loocv) over ten differen
48 The model was supported by (i) leave-one-out cross validation and (ii) division into th
49 Leave-one-out cross validation and case studies have sho
50 Through leave-one-out cross validation and cross-classification
51 The model was validated by leave-one-out cross validation and showed good recogniti
52 and 20 long-term survival patients) using a leave-one-out cross validation approach for performance
53 assifier evaluated the feature matrices in a leave-one-out cross validation design across patients.
54 built using logistic regression models with leave-one-out cross validation in the training set.
55 The model was validated with a leave-one-out cross validation procedure.
56 A PNC LUR model (R(2) = 0.48, leave-one-out cross validation R(2) = 0.32) including tr
57 Additionally, standard leave-one-out cross validation tests show how our approa
58 We use the leave-one-out cross validation to compare the performanc
59 tive Mendelian relationship in families, and leave-one-out cross validation to verify our results.
60 This approach was evaluated using leave-one-out cross validation using actual human data.
61 Leave-one-out cross validation was performed to evaluate
62 A leave-one-out cross validation was used to assess the ac
63 Leave-one-out cross validation was used to generate rece
64 Assessed with leave-one-out cross validation, the model identifies kno
65 Through leave-one-out cross validation, the overall prediction e
66 By using leave-one-out cross validation, two quantitative US mult
67 individual sequence pairs, and is tested by leave-one-out cross validation.
68 tic curves and was further assessed by using leave-one-out cross validation.
69 nd were assessed for internal validity using leave-one-out cross validation.
70 The resulting model was validated by leave-one-out cross validation.
71 ity and specificity were calculated by using leave-one-out cross validation.
72 d by using logistic regression analyses with leave-one-out cross validation.
73 bination of atlas and method, we conducted a leave-one-out cross-comparison to estimate the segmentat
74 The best leave-one-out cross-validated 2 parameter classifier con
75 The leave-one-out cross-validated coefficients q(2) for the
76 The best models were characterized by the leave-one-out cross-validated correlation coefficient q(
77 ctive of improvement in PTSD symptoms, using leave-one-out cross-validated elastic-net regression mod
78 Leave-one-out cross-validated partial least-squares give
79 model that was evaluated on the basis of its leave-one-out cross-validated partial least-squares valu
80 characterized by high internal accuracy with leave-one-out cross-validated R(2) (q(2)) values ranging
81 high internal accuracy were generated, with leave-one-out cross-validated R(2) (q(2)) values ranging
82 s with ulcerative colitis from controls with leave-one-out cross-validation (area under the curve = 0
83 rmance of algorithms was assessed using both leave-one-out cross-validation (LOOCV) and external vali
84 The coefficient of determination (R(2)) and leave-one-out cross-validation (LOOCV) demonstrate good
85 The two-group classifier obtained a leave-one-out cross-validation (LOOCV) F1-score of 87.6
86 SEA) to evaluate the clustering results; (2) Leave-one-out cross-validation (LOOCV) to ensure that th
87 Leave-one-out cross-validation (LOOCV) was carried out t
88 Leave-one-out cross-validation (LOOCV) was obtained (R(2
89 In these small samples, leave-one-out cross-validation (LOOCV), 10-fold cross-va
90 to a distinct brain circuit, validated with leave-one-out cross-validation (p = 0.0005).
91 rker genes while offering the same or better leave-one-out cross-validation accuracy compared with ap
92 fier for this training set that achieves 87% leave-one-out cross-validation accuracy.
93 highly disordered, and displayed comparable leave-one-out cross-validation accuracy.
94 tcomes of the respective other sample, and a leave-one-out cross-validation across the whole group fu
95 both an out-of-sample cross-validation and a leave-one-out cross-validation across the whole group.
96 We used split-half and leave-one-out cross-validation analyses in large MRI dat
97 blished the robustness of these results in a leave-one-out cross-validation analysis and by reproduci
98 t sizes were sufficiently large to survive a leave-one-out cross-validation analysis of predictive va
99 The leave-one-out cross-validation analysis of the data from
100 r approach was evaluated in both de novo and leave-one-out cross-validation analysis using known DTIs
101 A leave-one-out cross-validation analysis was used identif
102 In leave-one-out cross-validation analysis, ten of 11 sensi
103 y; r(2) = 0.93 and a q(2) = 0.91 utilizing a leave-one-out cross-validation analysis.
104 of 88.0%, a finding that was confirmed using leave-one-out cross-validation analysis.
105 Leave-one-out cross-validation and an independent data s
106 s estimated and subsequently validated using leave-one-out cross-validation and data from the Multice
107 In the leave-one-out cross-validation and de novo gene predicti
108 performance of sNebula on this dataset using leave-one-out cross-validation and five-fold cross-valid
109 Leave-one-out cross-validation and gene pairing analysis
110 gorous benchmarking of CCN-BLPred using both leave-one-out cross-validation and independent test sets
111 Leave-one-out cross-validation applied to the prediction
112 A leave-one-out cross-validation approach is used to deter
113 est classification algorithm combined with a leave-one-out cross-validation approach was implemented
114 ome for 85/102 (83%) NB patients through the leave-one-out cross-validation approach.
115 redict HCC in the training cohort, using the leave-one-out cross-validation approach.
116 ining all these metabolites and applying the leave-one-out cross-validation approach.
117 01-1.09; P = .01) and predictive of relapse (leave-one-out cross-validation balanced accuracy, 67%; 9
118 Repeat analysis by using leave-one-out cross-validation decreased the apparent di
119 Leave-one-out cross-validation demonstrated that data as
120 Leave-one-out cross-validation demonstrated that this ef
121 Leave-one-out cross-validation experiments show that pre
122 hold-out testing methods: a nearly unbiased leave-one-out cross-validation for the 60 training compo
123 Leave-one-out cross-validation indicated high predictive
124 A leave-one-out cross-validation is used for the independe
125 (PLS) discriminant analyses, validated by a leave-one-out cross-validation method.
126 f the classifications was performed with the leave-one-out cross-validation method.
127 accuracy of these models was assessed by the leave-one-out cross-validation method.
128 ittsburgh cohort as a training set using the leave-one-out cross-validation method.
129 Various statistical methods, including leave-one-out cross-validation methods, were applied to
130 The leave-one-out cross-validation on experimentally charact
131 We evaluated the rcNet algorithms with leave-one-out cross-validation on Online Mendelian Inher
132 r neurological disease, with 87% accuracy by leave-one-out cross-validation on training data (N = 23)
133 lites, we tested 148 metabolites following a leave-one-out cross-validation procedure or by using MS/
134 A leave-one-out cross-validation procedure using the top 2
135 Using a leave-one-out cross-validation procedure, we were able t
136 een produced with excellent accuracy using a leave-one-out cross-validation process.
137 -redundant polytopic proteins using a strict leave-one-out cross-validation protocol, MemBrain achiev
138 nalysis model built using these markers with leave-one-out cross-validation provided a sensitivity of
139 LUR-BME results in a leave-one-out cross-validation r2 of 0.74 and 0.33 for m
140 The model's prediction performance using leave-one-out cross-validation reached 85.3 %, with the
141 Leave-one-out cross-validation revealed a significant re
142 e corresponding carcinomatous lesions, and a leave-one-out cross-validation showed a 98% correct pred
143 By applying a leave-one-out cross-validation strategy, we could show t
144 ill achieved comparable success rates to the leave-one-out cross-validation suggesting that sufficien
145 With use of a leave-one-out cross-validation technique, this method wa
146 Of these, 54 genes were selected by leave-one-out cross-validation to construct a transcript
147 ta from individuals of known origin, it uses leave-one-out cross-validation to determine population a
148 ted plasma metabolomics and elastic net with leave-one-out cross-validation to identify metabolite si
149 We used logistic regression with leave-one-out cross-validation to predict outcomes, and
150 dimensionality reduction algorithm, and used leave-one-out cross-validation to predict underlying pat
151 Training models include nested leave-one-out cross-validation to select features, train
152 s removal and surgical outcome and performed leave-one-out cross-validation to support their prognost
153 e "test" set of tumors, we used a supervised leave-one-out cross-validation to test how well we could
154 This was used in a leave-one-out cross-validation to train weights that opt
155 y with an area under the curve of 0.96 using leave-one-out cross-validation training in a logistic re
156 In leave-one-out cross-validation using support vector mach
157 Leave-one-out cross-validation was applied to each tumor
158 CPM with leave-one-out cross-validation was conducted to identify
159 ve for the random forest classification with leave-one-out cross-validation was found to be 0.86 usin
160 Leave-one-out cross-validation was performed.
161 Leave-one-out cross-validation was used for validation.
162 Leave-one-out cross-validation was used to establish con
163 Leave-one-out cross-validation was used to show internal
164 Leave-one-out cross-validation was used to validate the
165 gaussian process classifiers using a nested leave-one-out cross-validation were used to predict the
166 tly classified 18 of the 21 classic cases in leave-one-out cross-validation when compared with pathol
167 functions in the classifier can be chosen by leave-one-out cross-validation with the aim of minimizin
168 redictors and initial severity combined with leave-one-out cross-validation yielded a categorical pre
169 Machine-learning analyses (with leave-one-out cross-validation) assessed whether speech
170 they either have large variability (such as leave-one-out cross-validation) or large bias (such as r
171 ive predictive values >=55%, validated using leave-one-out cross-validation) outperforming dipoles on
172 A local regression model (selected by leave-one-out cross-validation) was used to explore clim
173 NA binding and 6761 non-RNA binding domains (leave-one-out cross-validation).
174 Using leave-one-out cross-validation, all the models incorpora
175 Finally, using a Random Forest model with leave-one-out cross-validation, an exploratory BAL genom
176 lysis, discriminant function analysis (DFA), leave-one-out cross-validation, and Kendall coefficient
177 Logistic regression, leave-one-out cross-validation, and receiver operating c
178 cells was determined by logistic regression, leave-one-out cross-validation, and receiver operating c
179 Logistic regression analysis, leave-one-out cross-validation, and receiver operating c
180 Logistic regression analysis, leave-one-out cross-validation, and receiver operating c
181 In logistic regression, leave-one-out cross-validation, and receiver-operating c
182 In this study, hold-out, leave-one-out cross-validation, and ten-fold cross-valid
183 The method performs well on leave-one-out cross-validation, and we further validated
184 Following leave-one-out cross-validation, the addition of T2WI-der
185 In a leave-one-out cross-validation, the average rank was top
186 In leave-one-out cross-validation, we correctly predict MOA
187 With patient-wise leave-one-out cross-validation, we have been able to ach
188 on operator logistic regression, followed by leave-one-out cross-validation, were used to determine t
189 ted by combining data set bootstrapping with leave-one-out cross-validation, with random sampling of
190 "
Leave-one-out cross-validation," in which each data inst
191 of 60 samples not used for discovery, using
leave-one-out cross-validation.
192 elated patterns with logistic regression and
leave-one-out cross-validation.
193 c curves, adjusting for overconfidence using
leave-one-out cross-validation.
194 ing their respective benchmark datasets, and
leave-one-out cross-validation.
195 sion profiles was rigorously evaluated using
leave-one-out cross-validation.
196 estimated 100% predictive accuracy based on
leave-one-out cross-validation.
197 the genotypes within each age group using a
leave-one-out cross-validation.
198 (AUC) value of 0.878 (0.852-0.904) following
leave-one-out cross-validation.
199 odel building and evaluation utilized nested
leave-one-out cross-validation.
200 Statistical validation was achieved through
leave-one-out cross-validation.
201 53 readers to each image was assessed using
leave-one-out cross-validation.
202 trained on the data set and evaluated using
leave-one-out cross-validation.
203 le logistic regression analysis, followed by
leave-one-out cross-validation.
204 nd binary resistance or susceptibility using
leave-one-out cross-validation.
205 %-confidence interval (CI): 0.65-0.94) using
leave-one-out cross-validation.
206 anges in connectivity after TMS, followed by
leave-one-out cross-validation.
207 mis and Bacillus subtilis, was confirmed via
leave-one-out cross-validation.
208 eatures alone), with 84% accuracy in 5-fold,
leave-one-out cross-validation.
209 ng 89 bacterial species in our library using
leave-one-out cross-validation.
210 3 with approximately 91% accuracy, based on
leave-one-out cross-validation.
211 tors of overall survival were developed from
leave-one-out cross-validation.
212 eralization of the findings was supported by
leave-one-out cross-validation.
213 noma , and results were validated by using a
leave-one-out cross-validation.
214 tures but have similar levels of accuracy in
leave-one-out cross-validations (LOOCV).
215 pifarnib with the greatest accuracy using a "
leave one out"
cross validation (LOOCV; 96%).
216 e data set out", similar to the traditional "
leave one out"
cross-validation procedure employed in pa
217 tion and 80% in prediction ability by using "
leave-one-out"
cross-validation procedure.
218 t independent training and testing sets, or '
leave-one-out'
cross-validation analysis with all tumors
219 etinal sensitivity could be inferred with a (
leave-one-out)
cross-validated mean absolute error (MAE)
220 mean absolute error as 40.3 +/- 2.9 HU from leave-one-out-cross-validation (LOOCV) across all six pa
221 ained satisfactory prediction results by the leave-one-out-cross-validation (LOOCV) compared with exi
222 cal reaction was optimized for each pig, and leave-one-out-cross-validation (LOOCV) method was used t
223 d use regression models (LUR) frequently use leave-one-out-cross-validation (LOOCV) to assess model f
224 UR models were evaluated using (1) internal "leave-one-out-cross-validation (LOOCV)" within the train
225 tures and 97.35% for dynamic gestures from a leave-one-out-cross-validation approach.
226 Leave-one-out-cross-validation is applied to verify the
227 cation of 37 clinically relevant bacteria in Leave-One-Out-Cross-Validation.
228 of the ProtPair for IPS study as measured by leave-one-out CV is 69.1%, which can be very beneficial
229 -level 4-fold CV, and 0.73 for patient-level leave-one-out CV.
230 ased atlas structure was then validated in a leave-one-out design.
231 es Phase 3 haplotype reference panel using a leave-one-out design.
232 es in the step-1 ego network, as well as the leave-one-out differences in average redundancy, average
233 The leave-one-out estimates of the probability of test error
234 First, a leave-one-out experiment is used to optimize our method
235 Cross-validation (in leave-one-out form) removes each observation in turn, co
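The procedure described in example 235, removing each observation in turn, fitting on the remainder, and scoring the held-out point, can be sketched in a few lines of plain Python. This is a minimal illustrative sketch (the helper names `nearest_neighbor_predict` and `loocv_accuracy` are hypothetical, not from any of the cited studies), using a 1-nearest-neighbor classifier so no external libraries are needed:

```python
# Minimal leave-one-out cross-validation (LOOCV) sketch:
# each observation is held out in turn, the classifier is fit on the
# remaining n-1 points, and the held-out point is predicted.

def nearest_neighbor_predict(train, query):
    """Predict the label of `query` from its single nearest training point."""
    _, label = min(train, key=lambda pair: abs(pair[0] - query))
    return label

def loocv_accuracy(data):
    """Fraction of points whose leave-one-out prediction matches their label."""
    hits = 0
    for i, (x, y) in enumerate(data):
        held_out_train = data[:i] + data[i + 1:]   # drop observation i
        if nearest_neighbor_predict(held_out_train, x) == y:
            hits += 1
    return hits / len(data)

# Two well-separated clusters: after each point is removed, its nearest
# remaining neighbor still carries the correct label.
data = [(0.0, "a"), (0.1, "a"), (0.2, "a"), (5.0, "b"), (5.1, "b"), (5.2, "b")]
print(loocv_accuracy(data))  # 1.0
```

The same loop structure applies whatever the underlying model is; LOOCV is simply n-fold cross-validation with fold size one, which is why the snippets above use it mainly on small samples where larger held-out folds would be too costly in data.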
236 provement of 28.60%, 21.55% and 8.22% in the leave-one-out generalization test.
237 supervised machine learning with batch-wise leave-one-out implementation.
238 n models was evaluated comparing the classic leave-one-out internal validation with a more challengin
239 The method was validated using the leave-one-out Jackknife method.
240 their predictive power was assessed using a "leave one out" jackknife cross-validation strategy.
241 In this study Leave One Out (LOO) cross validation is used for validat
242 fied CV, bootstrapping (500 x repeated), and leave one out (LOO) validation.
243 ee receptors involving 202 complexes, with a leave-one out (LOO) cross-validated Q(2) of 0.689, was o
244 cies in 10-fold cross validation (10xCV) and leave-one-out (LOO) approaches, respectively.
245 -specific transcripts for EDMD, we applied a leave-one-out (LOO) cross-validation approach using LMNA
246 chemical HC5 values for each chemical using leave-one-out (LOO) variance estimation and compared the
247 seizures (from 25 different subjects) using "leave-one-out" (LOO) cross validation.
248 s of these two groups by the cross-validated leave-one-out machine-learning algorithms revealed a mol
249 twork (BP-ANN) was trained in a round-robin (leave-one-out) manner to predict biopsy outcome from mam
250 g phase, which was cross-validated using the leave-one out method.
251 The new models were validated by the leave-one-out method and were cross-validated in a separ
252 We used the leave-one-out method in sensitivity analyses and studied
253 with radial basis function (RBF) kernel and leave-one-out method to classify time-series for emotion
254 s for predicting checkpoint function using a leave-one-out method.
255 -mass substitutions were performed using the leave-one-out method.
256 discriminant analysis classifier by using a leave-one-out method.
257 e performance of these classifiers using the leave-one-out method.
258 trate the theory and performance of both the leave-one-out model and energy partition model, by consi
259 1 of 2 statistical modeling approaches: the leave-one-out model and the energy partition model.
260 Leave-one-out models that examined foods in mass while a
261 A leave-one-out Monte Carlo analysis examined the predicti
262 Models were evaluated using leave-one-out (n - 1) (LOOCV) and grouped (n - 25%) cros
263 A leave-one-out nested cross-validation was implemented.
264 e, machine learning, algorithm of choice and leave-one-out nested cross-validation was used to optimi
265 d partial least squares training with either leave-one-out or batch-to-batch testing.
266 lly, preserved anticorrelations improved the leave-one-out outcome prediction accuracy of an establis
267 The experiments presented herein utilize leave-one-out partial least-squares (LOO-PLS) analysis t
268 Leave-one-out polygenic risk score analyses showed signi
269 sequencing data via iterative random forest leave one out prediction, an explainable artificial inte
270 (ANN) analysis of the combined data set in a leave-one-out prediction strategy correctly predicted th
271 Leveraging the leave-one-out principle, we introduce LOO-map, a framewo
272 odel predictive capability performance via a leave-one-out procedure (21 subjects) and an independent
273 ~120,000 subjects) and MDD (using a 10-fold leave-one-out procedure in the current sample), (ii) biv
274 in terms of the error rate obtained from the leave-one-out procedure, and all of the forests are far
275 DeltaHB(total), and DeltaSASA, the r(2) and leave-one-out q(2) are 0.69 and 0.67.
276 (2) of 0.72, an adjusted R(2) of 0.65, and a leave-one-out Q(2) of 0.56.
277 ocked alignment) showed the best statistics: leave-one-out q(2) of 0.616, r(2) of 0.949, and r(2)pred
278 During the leave-one-out sensitivity analysis, the inclusion of the
279 ive internal cross-validation methods (e.g., leave-one out, split-half) are also warranted.
280 mum norm solution optimizes cross-validation leave-one-out stability and thereby the expected error.
281 initialization procedure using an efficient leave-one-out strategy to compare among candidate models
282 ning dynamics from each treatment cycle in a leave-one-out study, model simulations predict patient-s
283 Cross validation (leave-one-out technique) was applied to the data.
284 ation accuracy in a cross validation using a leave-one-out technique.
285 The concordance of the leave-one-out test is over 99.5% and is 99.9% higher for
286 In leave-one-out tests, an average of 67% of drugs were cor
287 In a leave-one-out three-way classification analysis, the mod
288 and specificity values of 100% employing the leave-one-out validation method.
289 For biological process, the leave-one-out validation procedure shows 52% precision a
290 f our prediction is measured by applying the leave-one-out validation procedure to a functional path
291 Leave-one-out validation showed classification accuracy
292 ine achieved high classification scores in a leave-one-out validation test reaching >90% in some case
293 Subsequent evaluation of the model using leave-one-out validation yielded a classification accura
294 dologies (2-fold, repeat random subsampling, leave one out) were utilized to determine the performanc