Corpus search results (sorted by the word one position after the keyword)
Click a serial number to open the corresponding PubMed page
1 (baseline performance measured with 10-fold cross-validation).
2 oefficient of 0.9046 +/- 0.0058 for six-fold cross-validation).
3 -DA) and internally validated (leave 10%-out cross-validation).
4 clusters were identified and evaluated using cross validation.
5 by training with the MD set using a tenfold cross validation.
6 ession and internally validated with 10-fold cross validation.
7 fidence interval [0.85, 1.00]) via five-fold cross validation.
8 t 70% (range 65-76%) using stratified 5-fold cross validation.
9 n was performed with model-external fivefold cross validation.
10 ne classifiers were trained through fivefold cross validation.
11 ievement was trained and tested using 5-fold cross validation.
12 evaluate the segmentation model using 5-fold cross-validation.
13 in half-life predictions) and were robust in cross-validation.
14 average accuracy of 90.8% based on five-fold cross-validation.
15 ural language processing model using 10-fold cross-validation.
16 (P = 0.034), but the model was not robust to cross-validation.
17 through different data partition schemes in cross-validation.
18 sting for overconfidence using leave-one-out cross-validation.
19 tperforms traditional empirical models under cross-validation.
20 nd 0.700 for a more stringent position-based cross-validation.
21 er operating characteristic curve (AUC) with cross-validation.
22 ed and models underwent spatially stratified cross-validation.
23 95 white 17-y-old participants using 10-fold cross-validation.
24 rt vector classification and repeated nested cross-validation.
25 sted a random forest classifier using 5-fold cross-validation.
26 d within 21 months (0.97 +/- 0.02 AUCROC) on cross-validation.
27 ing 10 types of selected features and 5-fold cross-validation.
28 0.7562, and 0.5065, respectively for 10-fold cross-validation.
29 ting characteristic curve (AUC) using nested cross-validation.
30 8,939 hr of data) using a stratified 20-fold cross-validation.
31 haracteristic (ROC) curve of 0.93 on 10-fold cross-validation.
32 of the variance in treatment response after cross-validation.
33 gorithm was first evaluated by using 10-fold cross-validation.
34 odels were developed and tested using nested cross-validation.
35 tric of 0.39 with ten repetitions of 10-fold cross-validation.
36 R(2) of 0.966 and RMSE of 0.013 in a 10-fold cross-validation.
37 receiver operating characteristic of 0.93 in cross-validation.
38 t the genetic risk score (GRS) with ten-fold cross-validation.
39 nterval (CI): 0.65-0.94) using leave-one-out cross-validation.
40 prediction rates range from 62-100%) through cross-validation.
41 These models performed well in cross-validation.
42 Testing was performed based on ten-fold cross-validation.
43 rate of 93.3%, and performed similarly upon cross-validation.
44 which cannot be obtained in the conventional cross-validation.
45 ining all selected predictors was 0.85 after cross-validation.
46 tion pathway CNNs was trained using fivefold cross-validation.
47 ResNet34 architecture with sevenfold nested cross-validation.
48 ovement scores (P = 0.003) and was robust to cross-validation.
49 model validated and confirmed with fivefold cross-validation.
50 hen validated using split-sample and 10-fold cross-validation.
51 %-25% training-validation split and fourfold cross-validation.
52 f GR binders in a 25% test set using holdout cross-validation.
53 k performance was assessed by using fivefold cross-validation.
54 ovement scores (P = 0.012) and was robust to cross-validation.
55 treated for aSAH (2016-2017) using five-fold-cross-validation.
56 s with an AUC of 0.89, as assessed by nested cross-validation.
57 on with best performing models determined by cross-validation.
58 boosting, and tuned with 10 repeated 4-fold cross-validations.
59 e of 2.03%-101.09% accuracy gain in internal cross-validations.
60 igh accuracy (92%) when evaluated via strict cross-validations.
61 with other competing methods via the 5-fold cross validation, 10-cross validation and de novo drug v
62 rt vector machine (SVM) combined with nested cross-validation, a prediction accuracy of 91.2% was ach
63 group was used to develop two algorithms via cross-validation: a classifier to diagnose NAFLD (MRI PD
64 rvised learning approaches, achieving better cross-validation accuracy across different sets of gold-
67 69.39-73.59), followed by leave-one-site-out cross-validation (accuracy = 58.67%, 95% CI = 56.70-60.6
69 respective other sample, and a leave-one-out cross-validation across the whole group further demonstr
72 AI predictions were evaluated using 10-fold cross-validation against annotations by expert surgeons.
76 evaluated in both de novo and leave-one-out cross-validation analysis using known DTIs as the gold s
81 or Autoregression, Estimation Stability with Cross Validation and a nonparametric change point detect
82 eved correlations of up to 0.72 and 0.67 (on cross validation and blind tests, respectively), while o
84 ion, and accuracy) was tested using ten-fold cross validation and externally validated on data from t
85 achieved consistent results between ten-fold cross validation and independent test for predicting hum
86 n the training set (80% of the data) through cross validation and subsequently externally validated o
87 G, HPRD, and TRRUST databases by the 10-fold cross validation and test verification, and to identify
88 ogen combinations) internally through nested cross validation and were also validated using external
89 to predict outcome in both an out-of-sample cross-validation and a leave-one-out cross-validation ac
91 assification accuracy of 83.7% in the 5-fold cross-validation and an accuracy of 92.6% for the test s
92 (AUC) of 0.831 with 77% accuracy on 10-fold cross-validation and an AUC of 0.883 with 83% accuracy i
93 ethod's predictive capacity was tested using cross-validation and assessed for robustness to varying
94 matically evaluated TargetPredict in de novo cross-validation and compared it to a state-of-the-art p
96 ed regional wall motion abnormalities in the cross-validation and external validation datasets with a
97 provided the highest prediction abilities in cross-validation and external validation with mean value
99 tion, and accuracy) was tested using 10-fold cross-validation and externally validated on data from n
106 assifier performance was evaluated by 5-fold cross-validation and on an independent test dataset.
107 ovariation, which were robust as assessed by cross-validation and permutation testing, taking into ac
108 e explained with OSM features alone, and use cross-validation and recursive feature elimination to ev
110 orous framework comprised of classification, cross-validation and statistical analyses that was devel
113 ch outperformed other available tools across cross-validation and two independent blind tests, achiev
115 on performance (average accuracy of 99.7% in cross-validation) and outperformed four baseline methods
116 ristic curve (ROC-AUC) of 0.799 using random cross-validation, and 0.700 for a more stringent positio
118 erm survival patients) using a leave-one-out cross validation approach for performance evaluation.
119 d multi-omics factor analysis (MOFA) using a cross-validation approach to assess overfitting and cons
120 imate models using a novel and comprehensive cross-validation approach, running a series of h-block c
123 and tested their generalizability via nested cross-validation as well as via external validation.
124 , showed robust performance based on tenfold cross-validations as well as candidate prioritization wi
125 other methods by obtaining higher five-fold cross-validation AUC values (CV-AUC) and Leave-One-Chrom
126 oning algorithm, yielding strong results via cross-validation, averaging 0.95 AUROC on test-set indic
129 ication accuracy of 83 percent under 10-fold cross-validation, but its performance could be severely
130 iocarbon dating of 23 individuals, including cross-validation by compound-specific analysis, that E.
131 and holdout sets.We develop consensus nested cross-validation (cnCV) that combines the idea of featur
132 rating of ResNet-50 iterations from fivefold cross-validation, consensus technologists' rating, and c
134 sed prediction model with an improved random cross-validation (CV) R(2) of 0.86, an improved spatial
135 e quantification of Ca content in INF (R(2) (cross-validation (CV))-0.99, RMSECV-0.29 mg/g; R(2) (pre
137 95% confidence interval, 0.925-0.941) on the cross-validation dataset and 0.928 on the test dataset.
138 peting CNN classifiers were developed with a cross-validation dataset of 3396 images (1632 open, 1764
139 and Queensland cohorts formed a training and cross-validation dataset used to identify structural con
145 with standard nCV, Elastic Net optimized by cross-validation, differential privacy and private evapo
146 model hyperparameters optimised to minimise cross validation error, ten methods of automated variabl
147 erator (LASSO) and Estimation Stability with Cross Validation (ES-CV), we were able to, without any p
152 est classification accuracy (average 10-fold cross-validation F(1)-Score of 0.992) using an external
153 predictor composition is less stable across cross-validation folds and estimation takes 40 times as
155 to empower practical applications by using a cross validation framework that assesses the predictive
166 lection of a minimum norm solution optimizes cross-validation leave-one-out stability and thereby the
169 ge model performances based on leave-one-out cross validation (loocv) over ten different negative sam
170 error as 40.3 +/- 2.9 HU from leave-one-out-cross-validation (LOOCV) across all six patients, which
171 tory prediction results by the leave-one-out-cross-validation (LOOCV) compared with existing methods.
173 stic regression with a nested leave-pair out cross validation (LPOCV) scheme and recursive feature el
176 y, and net benefit (decision curves)-using a cross-validation method, with that of the reference mode
177 dies are still rare and alternative internal cross-validation methods (e.g., leave-one out, split-hal
178 leave-one-out cross-validation, and ten-fold cross-validation methods are implemented on three public
180 ated the calibration performance using three cross-validation methods, which consistently indicated t
182 machine learning-based method incorporating cross-validation, multiple regression, grid search, and
183 om these slides; performance was assessed by cross-validation (N = 6406 specimens) and validated in a
188 prediction capabilities were observed in the cross-validation of PLS and PCR analysis for the adulter
193 e determined its accuracy to be 98 +/- 2% by cross-validation on analyzing 277 perspiration samples.
196 es that we split into eleven subsets (10 for cross-validation, one for testing) using a novel cluster
197 e methods were all validated in simulations, cross-validations or independent retrospective data sets
198 n algorithm, but in contrast to conventional cross-validation, our approach makes it possible to crea
200 ps were calibrated by matching sextant-based cross-validation performance to clinical performance of
202 output and their performance in three tasks: cross validation, prediction of drug targets, and behavi
203 ting characteristic curve of 0.81 in 10-fold cross-validation, prevailing over using any single featu
204 acting drug pairs, their use of conventional cross-validation prevents them from achieving generaliza
206 tive disease in a novel leave-one-center-out cross-validation procedure equivalent to external valida
207 istinct distributed functional networks in a cross-validation procedure, identifying neurotraits.
208 against several existing algorithms using a cross-validation procedure, SArKS identified larger moti
211 By challenging our methodology with rigorous cross-validation procedures and prognostic analyses, we
214 (reference, normal scale, plug-in and biased cross-validation) produced comparable estimates of niche
215 meteorology in California for the year 2016 (cross-validation R(2) = 0.73 (site-based) and 0.81 (obse
216 determination for calibration (R(c)(2)) and cross-validation (R(cv)(2)), with values of 0.96 and 0.8
220 erating characteristic analyses with 10-fold cross validation. Results: In participants with acute heart
222 reover, a systematic evaluation using nested cross-validation revealed that the RILP algorithm select
224 using a penalized matrix decomposition with cross-validation; risk scores of 50 and 400 SNPs were id
225 gnostic categories using a 5 x 5-fold nested cross-validation scheme and demonstrated their generaliz
226 n a partially labeled dataset, and develop a cross-validation scheme to enable supervised prediction.
227 ates major benefits of this variable h-block cross-validation scheme, as the effect of spatial autoco
229 th promising results consistently on various cross-validation schemes and outperforms other state of
232 mplexes that were not a part of the training/cross-validation set; deviations of the predicted mobili
233 Comparative analysis of 10 randomly drawn cross-validation sets verified the stability of the resu
238 ive profiles within the sample, and hold-out cross-validation showed that these profiles were signifi
240 ignificant long-term agreement is found with cross-validation sites over North America (R(2) = 0.57-0
241 sion criteria, through randomised seven-fold cross-validation (six-fold training set: n = 433; test s
260 patterns and validate, using left-trial-out cross-validation, the predictive performance of the mode
261 cent of these data was used for training and cross-validation, the remaining 20% for independent inte
262 tree, and Random Forest (RF), using a k-fold cross validation to assess the model's generalization ca
266 a machine learning (ML) pipeline with nested cross-validation to avoid overfitting, the stacked model
267 optimized, and evaluated with 10-fold nested cross-validation to predict the probability of AF recurr
268 Training models include nested leave-one-out cross-validation to select features, train the model, an
270 algorithm (random forest - RF, with 10-fold cross validation) to predict individuals' fall risk grou
277 ith logistic regression models and five-fold cross validation, using area under the ROC curve (AUC) a
287 ogistic regression analysis with leave-1-out cross validation, we developed a model, including a vira
290 egression and XGBoosting methods followed by cross-validation were applied to predict individual dise
291 test set and mean accuracies from threefold cross-validation were used to compare the performance of
292 gistic regression, followed by leave-one-out cross-validation, were used to determine the performance
294 ieved an average of 0.91 F1-score on tenfold cross validation with an average area under the curve (A
296 assessed the resulting models using 10-fold cross-validation with 100 repetitions for statistical co
297 o random chance) by leave-one-chromosome-out cross-validation with stratified linkage disequilibrium
298 sk difference between the two strategies) by cross-validation with the SYNTAX trial (n=1800 participa
299 tial autocorrelation is minimized, while the cross-validations with increasing h values can reveal in
300 hine learning applied to multisite data with cross-validation yielded a factorization generalizable a
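The hits above span many cross-validation variants (k-fold, stratified, nested, leave-one-out). As a minimal illustrative sketch, not drawn from any of the cited studies, the basic k-fold index split they all build on can be written in plain Python; the function name `k_fold_indices` is hypothetical:

```python
# Minimal k-fold cross-validation index generator (illustrative sketch only).
# Setting k equal to the sample count gives leave-one-out cross-validation.
from typing import Iterator, List, Tuple

def k_fold_indices(n_samples: int, k: int) -> Iterator[Tuple[List[int], List[int]]]:
    """Yield (train_indices, test_indices) pairs for k-fold CV."""
    indices = list(range(n_samples))
    # Distribute any remainder so fold sizes differ by at most one.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, test
        start += size

# Example: 10 samples, 5 folds -> each test fold holds 2 disjoint samples.
folds = list(k_fold_indices(10, 5))
```

Stratified, nested, or leave-one-site-out schemes mentioned in the hits layer extra constraints (class balance, inner tuning loops, site grouping) on top of this same partitioning idea.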