
Corpus search results (sorted by the word one position after the keyword)

Click a serial number to display the corresponding PubMed page.
1  (baseline performance measured with 10-fold cross-validation).
2 oefficient of 0.9046 +/- 0.0058 for six-fold cross-validation).
3 -DA) and internally validated (leave 10%-out cross-validation).
4 clusters were identified and evaluated using cross validation.
5  by training with the MD set using a tenfold cross validation.
6 ession and internally validated with 10-fold cross validation.
7 fidence interval [0.85, 1.00]) via five-fold cross validation.
8 t 70% (range 65-76%) using stratified 5-fold cross validation.
9 n was performed with model-external fivefold cross validation.
10 ne classifiers were trained through fivefold cross validation.
11 ievement was trained and tested using 5-fold cross validation.
12 evaluate the segmentation model using 5-fold cross-validation.
13 in half-life predictions) and were robust in cross-validation.
14 average accuracy of 90.8% based on five-fold cross-validation.
15 ural language processing model using 10-fold cross-validation.
16 (P = 0.034), but the model was not robust to cross-validation.
17  through different data partition schemes in cross-validation.
18 sting for overconfidence using leave-one-out cross-validation.
19 tperforms traditional empirical models under cross-validation.
20 nd 0.700 for a more stringent position-based cross-validation.
21 er operating characteristic curve (AUC) with cross-validation.
22 ed and models underwent spatially stratified cross-validation.
23 95 white 17-y-old participants using 10-fold cross-validation.
24 rt vector classification and repeated nested cross-validation.
25 sted a random forest classifier using 5-fold cross-validation.
26 d within 21 months (0.97 +/- 0.02 AUCROC) on cross-validation.
27 ing 10 types of selected features and 5-fold cross-validation.
28 0.7562, and 0.5065, respectively for 10-fold cross-validation.
29 ting characteristic curve (AUC) using nested cross-validation.
30 8,939 hr of data) using a stratified 20-fold cross-validation.
31 haracteristic (ROC) curve of 0.93 on 10-fold cross-validation.
32  of the variance in treatment response after cross-validation.
33 gorithm was first evaluated by using 10-fold cross-validation.
34 odels were developed and tested using nested cross-validation.
35 tric of 0.39 with ten repetitions of 10-fold cross-validation.
36 R(2) of 0.966 and RMSE of 0.013 in a 10-fold cross-validation.
37 receiver operating characteristic of 0.93 in cross-validation.
38 t the genetic risk score (GRS) with ten-fold cross-validation.
39 nterval (CI): 0.65-0.94) using leave-one-out cross-validation.
40 prediction rates range from 62-100%) through cross-validation.
41               These models performed well in cross-validation.
42      Testing was performed based on ten-fold cross-validation.
43  rate of 93.3%, and performed similarly upon cross-validation.
44 which cannot be obtained in the conventional cross-validation.
45 ining all selected predictors was 0.85 after cross-validation.
46 tion pathway CNNs was trained using fivefold cross-validation.
47  ResNet34 architecture with sevenfold nested cross-validation.
48 ovement scores (P = 0.003) and was robust to cross-validation.
49  model validated confirmed with and fivefold cross-validation.
50 hen validated using split-sample and 10-fold cross-validation.
51 %-25% training-validation split and fourfold cross-validation.
52 f GR binders in a 25% test set using holdout cross-validation.
53 k performance was assessed by using fivefold cross-validation.
54 ovement scores (P = 0.012) and was robust to cross-validation.
55 treated for aSAH (2016-2017) using five-fold-cross-validation.
56 s with an AUC of 0.89, as assessed by nested cross-validation.
57 on with best performing models determined by cross-validation.
58  boosting, and tuned with 10 repeated 4-fold cross-validations.
59 e of 2.03%-101.09% accuracy gain in internal cross-validations.
60 igh accuracy (92%) when evaluated via strict cross-validations.
61  with other competing methods via the 5-fold cross validation, 10-cross validation and de novo drug v
62 rt vector machine (SVM) combined with nested cross-validation, a prediction accuracy of 91.2% was ach
63 group was used to develop two algorithms via cross-validation: a classifier to diagnose NAFLD (MRI PD
64 rvised learning approaches, achieving better cross-validation accuracy across different sets of gold-
65                                  The average cross-validation accuracy was much higher for models usi
66 y the end of 80 weeks of follow-up with 100% cross-validation accuracy.
67 69.39-73.59), followed by leave-one-site-out cross-validation (accuracy = 58.67%, 95% CI = 56.70-60.6
68                                        Using cross-validation across independent fMRI studies, we dem
69 respective other sample, and a leave-one-out cross-validation across the whole group further demonstr
70 -sample cross-validation and a leave-one-out cross-validation across the whole group.
71        Classifiers were evaluated by 10-fold cross-validation across three combined data sets or by t
72  AI predictions were evaluated using 10-fold cross-validation against annotations by expert surgeons.
73                        It is based on k-fold cross-validation algorithm, but in contrast to conventio
74                          Using leave-one-out cross-validation, all the models incorporating clinical
75                                              Cross-validation analysis is employed to optimize model
76  evaluated in both de novo and leave-one-out cross-validation analysis using known DTIs as the gold s
77                                   In 10-fold cross-validation analysis with 100 replications, Das-1 i
78 0.23 versus 0.19, P-value<1e-4) in a de novo cross-validation analysis.
79 rophilic asthma, and this was confirmed in a cross-validation analysis.
80                                    In 5-fold cross validation and 10-cross validation, DDRGIP method
81 or Autoregression, Estimation Stability with Cross Validation and a nonparametric change point detect
82 eved correlations of up to 0.72 and 0.67 (on cross validation and blind tests, respectively), while o
83  methods via the 5-fold cross validation, 10-cross validation and de novo drug validation.
84 ion, and accuracy) was tested using ten-fold cross validation and externally validated on data from t
85 achieved consistent results between ten-fold cross validation and independent test for predicting hum
86 n the training set (80% of the data) through cross validation and subsequently externally validated o
87 G, HPRD, and TRRUST databases by the 10-fold cross validation and test verification, and to identify
88 ogen combinations) internally through nested cross validation and were also validated using external
89  to predict outcome in both an out-of-sample cross-validation and a leave-one-out cross-validation ac
90                             Tests using both cross-validation and a separate replication cohort of 27
91 assification accuracy of 83.7% in the 5-fold cross-validation and an accuracy of 92.6% for the test s
92  (AUC) of 0.831 with 77% accuracy on 10-fold cross-validation and an AUC of 0.883 with 83% accuracy i
93 ethod's predictive capacity was tested using cross-validation and assessed for robustness to varying
94 matically evaluated TargetPredict in de novo cross-validation and compared it to a state-of-the-art p
95  yield higher prediction accuracy in 10-fold cross-validation and de novo experiments.
96 ed regional wall motion abnormalities in the cross-validation and external validation datasets with a
97 provided the highest prediction abilities in cross-validation and external validation with mean value
98 valuate model stability, we performed 5-fold cross-validation and external validation.
99 tion, and accuracy) was tested using 10-fold cross-validation and externally validated on data from n
100 to each half, each analyzed separately using cross-validation and hold-out validation.
101 ) from visceral (rectal) stimulation in both cross-validation and independent cohorts.
102               Benchmarking experiments using cross-validation and independent tests showed that iProt
103 ve a better prediction performance on 5-fold cross-validation and jackknife tests.
104                                Nested 5-fold cross-validation and leave-one-pathogen-out validation w
105                                        Using cross-validation and model selection, we identify a mode
106 assifier performance was evaluated by 5-fold cross-validation and on an independent test dataset.
107 ovariation, which were robust as assessed by cross-validation and permutation testing, taking into ac
108 e explained with OSM features alone, and use cross-validation and recursive feature elimination to ev
109                                      Through cross-validation and resampling the probability of gener
110 orous framework comprised of classification, cross-validation and statistical analyses that was devel
111            We used nested leave-one-pair-out cross-validation and supervised principal components ana
112        The multiple modes of enquiry enabled cross-validation and triangulation of the findings.
113 ch outperformed other available tools across cross-validation and two independent blind tests, achiev
114  unbiased prediction model in two scenarios- cross-validations and independent predictions.
115 on performance (average accuracy of 99.7% in cross-validation) and outperformed four baseline methods
116 ristic curve (ROC-AUC) of 0.799 using random cross-validation, and 0.700 for a more stringent positio
117       In this study, hold-out, leave-one-out cross-validation, and ten-fold cross-validation methods
118 erm survival patients) using a leave-one-out cross validation approach for performance evaluation.
119 d multi-omics factor analysis (MOFA) using a cross-validation approach to assess overfitting and cons
120 imate models using a novel and comprehensive cross-validation approach, running a series of h-block c
121                                      Using a cross-validation approach, we first explored in subjects
122 e metabolites and applying the leave-one-out cross-validation approach.
123 and tested their generalizability via nested cross-validation as well as via external validation.
124 , showed robust performance based on tenfold cross-validations as well as candidate prioritization wi
125  other methods by obtaining higher five-fold cross-validation AUC values (CV-AUC) and Leave-One-Chrom
126 oning algorithm, yielding strong results via cross-validation, averaging 0.95 AUROC on test-set indic
127      The model predictions were tested using cross validation based on empirical data collected from
128 o prediction of probable FA, with a combined cross-validation-based AUC of 0.73.
129 ication accuracy of 83 percent under 10-fold cross-validation, but its performance could be severely
130 iocarbon dating of 23 individuals, including cross-validation by compound-specific analysis, that E.
131 and holdout sets.We develop consensus nested cross-validation (cnCV) that combines the idea of featur
132 rating of ResNet-50 iterations from fivefold cross-validation, consensus technologists' rating, and c
133 net regression analysis with a leave-one-out cross validation (CV) and 100 CV runs.
134 sed prediction model with an improved random cross-validation (CV) R(2) of 0.86, an improved spatial
135 e quantification of Ca content in INF (R(2) (cross-validation (CV))-0.99, RMSECV-0.29 mg/g; R(2) (pre
136 ion to diabetes (AUC 0.92) based on standard cross-validation (CV).
137 95% confidence interval, 0.925-0.941) on the cross-validation dataset and 0.928 on the test dataset.
138 peting CNN classifiers were developed with a cross-validation dataset of 3396 images (1632 open, 1764
139 and Queensland cohorts formed a training and cross-validation dataset used to identify structural con
140 tput on a publicly-available match algorithm cross-validation dataset.
141            In 5-fold cross validation and 10-cross validation, DDRGIP method achieves the area under
142                                     Hold-out cross-validation demonstrated that these were significan
143 ated the feature matrices in a leave-one-out cross validation design across patients.
144  67% specificity and 95% sensitivity, in the cross-validation development cohort.
145  with standard nCV, Elastic Net optimized by cross-validation, differential privacy and private evapo
146  model hyperparameters optimised to minimise cross validation error, ten methods of automated variabl
147 erator (LASSO) and Estimation Stability with Cross Validation (ES-CV), we were able to, without any p
148                                   The 5-fold cross-validation experiment of GCNCDA achieved 91.2% acc
149                                       In the cross-validation experiment, LMTRDA obtained 90.51% pred
150                              In the ten-fold cross-validation experiment, the logistic classifier ide
151 version of the yeast PPI network in rigorous cross validation experiments.
152 est classification accuracy (average 10-fold cross-validation F(1)-Score of 0.992) using an external
153  predictor composition is less stable across cross-validation folds and estimation takes 40 times as
154                                              Cross-validation for feature selection in these high-dim
155 to empower practical applications by using a cross validation framework that assesses the predictive
156 -operative MRI using a leave-one-patient-out cross validation framework.
157 learning techniques within a repeated nested cross-validation framework.
158 ogistic regression models with leave-one-out cross validation in the training set.
159 -19 related mortality (AUC = 0.86) following cross-validation in a training set.
160                       We performed threefold cross-validation in the training dataset and generated t
161                                 Our multiple cross-validations indicate the promising accuracy and ro
162                                              Cross-validation indicates that FM-UE endpoints and FM-U
163                                Leave-one-out-cross-validation is applied to verify the predictive ski
164                A machine learning model with cross-validation is then applied for classification.
165                              A leave-one-out cross-validation is used for the independent assessment
166 lection of a minimum norm solution optimizes cross-validation leave-one-out stability and thereby the
167                                     Standard cross-validation led to over-optimistic performance esti
168                              Using five-fold cross-validation, logistic regression and Cox regression
169 ge model performances based on leave-one-out cross validation (loocv) over ten different negative sam
170  error as 40.3 +/- 2.9 HU from leave-one-out-cross-validation (LOOCV) across all six patients, which
171 tory prediction results by the leave-one-out-cross-validation (LOOCV) compared with existing methods.
172 rough the leave-one-object (tissue site)-out cross-validation (LOOCV) method.
173 stic regression with a nested leave-pair out cross validation (LPOCV) scheme and recursive feature el
174 tratified into 100 iterations of Monte-Carlo cross validation (MCCV).
175            Using the 200-repeats Monte-Carlo cross-validation method, these models provided a multicl
176 y, and net benefit (decision curves)-using a cross-validation method, with that of the reference mode
177 dies are still rare and alternative internal cross-validation methods (e.g., leave-one out, split-hal
178 leave-one-out cross-validation, and ten-fold cross-validation methods are implemented on three public
179         Using volume and cortical thickness, cross-validation methods indicated 2 highly stable subty
180 ated the calibration performance using three cross-validation methods, which consistently indicated t
181                                     A k-fold cross-validation model-comparison selected a model where
182  machine learning-based method incorporating cross-validation, multiple regression, grid search, and
183 om these slides; performance was assessed by cross-validation (N = 6406 specimens) and validated in a
184                                       Nested cross-validation (nCV) is a common approach that chooses
185                                              Cross-validation of human- and mouse-derived protocols i
186                                              Cross-validation of loci reaching genome-wide significan
187        Future avenues of research focused on cross-validation of plant hydraulics methods are discuss
188 prediction capabilities were observed in the cross-validation of PLS and PCR analysis for the adulter
189                                Following the cross-validation of previously published estimates of th
190               Four datasets are employed for cross-validation of spatial gene expression prediction a
191                                    A tenfold cross-validation of the classifier resulted in 84% accur
192                            Internal-external cross-validation of the model demonstrated a random effe
193 e determined its accuracy to be 98 +/- 2% by cross-validation on analyzing 277 perspiration samples.
194                            The leave-one-out cross-validation on experimentally characterized Acr-Aca
195                             Using inter-site cross-validation on functional magnetic resonance images
196 es that we split into eleven subsets (10 for cross-validation, one for testing) using a novel cluster
197 e methods were all validated in simulations, cross-validations or independent retrospective data sets
198 n algorithm, but in contrast to conventional cross-validation, our approach makes it possible to crea
199                      Using a patient-holdout cross-validation, our method achieved classification acc
200 ps were calibrated by matching sextant-based cross-validation performance to clinical performance of
201 tic variant and Multiple Phenotypes based on cross-validation Prediction Error (MultP-PE).
202 output and their performance in three tasks: cross validation, prediction of drug targets, and behavi
203 ting characteristic curve of 0.81 in 10-fold cross-validation, prevailing over using any single featu
204 acting drug pairs, their use of conventional cross-validation prevents them from achieving generaliza
205                    Based on the leave-on-out cross validation procedure of 4 independent data sets we
206 tive disease in a novel leave-one-center-out cross-validation procedure equivalent to external valida
207 istinct distributed functional networks in a cross-validation procedure, identifying neurotraits.
208  against several existing algorithms using a cross-validation procedure, SArKS identified larger moti
209 arries sampling uncertainty estimated by the cross-validation procedure.
210 sed increase in grain yield accuracy under a cross-validation procedure.
211 By challenging our methodology with rigorous cross-validation procedures and prognostic analyses, we
212                                The five-fold cross-validation process produced an average AUC of 0.83
213 te oral toxicity using read-across through a cross-validation process.
214 (reference, normal scale, plug-in and biased cross-validation) produced comparable estimates of niche
215 meteorology in California for the year 2016 (cross-validation R(2) = 0.73 (site-based) and 0.81 (obse
216  determination for calibration (R(c)(2)) and cross-validation (R(cv)(2)), with values of 0.96 and 0.8
217                                 Based on the cross validation results, our model gave F1 score and AU
218                                    Moreover, cross-validation results indicate that 58% of DO variabi
219                                              Cross-validation results reach accuracies between 85% an
220 erating characteristic analyses with 10-fold cross validation.ResultsIn participants with acute heart
221                                              Cross-validation revealed that EAGLE outperformed other
222 reover, a systematic evaluation using nested cross-validation revealed that the RILP algorithm select
223                                              Cross-validation revealed the generalizability of these
224  using a penalized matrix decomposition with cross-validation; risk scores of 50 and 400 SNPs were id
225 gnostic categories using a 5 x 5-fold nested cross-validation scheme and demonstrated their generaliz
226 n a partially labeled dataset, and develop a cross-validation scheme to enable supervised prediction.
227 ates major benefits of this variable h-block cross-validation scheme, as the effect of spatial autoco
228 mission models following a stratified 5-fold cross-validation scheme.
229 th promising results consistently on various cross-validation schemes and outperforms other state of
230          Our proposed biologically-motivated cross-validation schemes provide insight into the robust
231 e introduced two novel protein complex-aware cross-validation schemes.
232 mplexes that were not a part of the training/cross-validation set; deviations of the predicted mobili
233    Comparative analysis of 10 randomly drawn cross-validation sets verified the stability of the resu
234 ion for training and validation in a tenfold cross validation setting.
235 tual and predicted phenotypes in a five-fold cross-validation setting.
236                                              Cross validation showed that 32% of the model-predicted
237                                              Cross-validation showed 95.2% sensitivity and 94.6% spec
238 ive profiles within the sample, and hold-out cross-validation showed that these profiles were signifi
239                                     Ten-fold cross-validation shows that these networks are internall
240 ignificant long-term agreement is found with cross-validation sites over North America (R(2) = 0.57-0
241 sion criteria, through randomised seven-fold cross-validation (six-fold training set: n = 433; test s
242 as evaluated by comparing internal five-fold cross-validation statistics of the training data.
243 algorithms, was evaluated by internal 5-fold cross-validation statistics.
244 rank normalized scores of internal five-fold cross-validation statistics.
245 t-boosted tree regression model with 10-fold cross-validation strategy.
246                                    Five-fold cross-validation summaries out to 1000 single-nucleotide
247                                 With 10-fold cross-validation, tensor factorization achieved AUROC =
248          Accuracy of jackknife test, 10-fold cross-validation test and independent test for these PPI
249                                              Cross-validation testing reveals that ligand-binding pos
250         Additionally, standard leave-one-out cross validation tests show how our approach outperforms
251                                              Cross-validation tests on exactly the same experiment-co
252 piens, and compared using randomized 10-fold cross-validation tests.
253 uggest experimental controls and measures of cross-validation that improve data interpretation.
254                        Through leave-one-out cross validation, the overall prediction error in the on
255                                 In a de novo cross-validation, the area under the receiver operating
256                                  In internal cross-validation, the average C statistic was 0.74.
257                           In a leave-one-out cross-validation, the average rank was top 3.2% for know
258                                       In the cross-validation, the GBM showed better performance, wit
259                                           At cross-validation, the newly developed SYNTAX score II, t
260  patterns and validate, using left-trial-out cross-validation, the predictive performance of the mode
261 cent of these data was used for training and cross-validation, the remaining 20% for independent inte
262 tree, and Random Forest (RF), using a k-fold cross validation to assess the model's generalization ca
263                    AUC analysis using K-fold cross validation to predict eGFR loss of >= 3 ml/min/1.7
264                              We used 10-fold cross validation to validate the exposure models and the
265                                  After using cross-validation to assess robustness, we applied the La
266 a machine learning (ML) pipeline with nested cross-validation to avoid overfitting, the stacked model
267 optimized, and evaluated with 10-fold nested cross-validation to predict the probability of AF recurr
268 Training models include nested leave-one-out cross-validation to select features, train the model, an
269                                      We used cross-validation to select the best model.
270  algorithm (random forest - RF, with 10-fold cross validation) to predict individuals' fall risk grou
271 nternal validation techniques (bootstrap and cross-validation) to account for model training.
272 in a manner that avoids 'leakage' during the cross-validation training procedure.
273                       By using leave-one-out cross validation, two quantitative US multivariable mode
274                           We applied 10-fold cross-validation using independent data not used to deve
275                                      Tenfold cross-validations using a linear discriminant classifier
276 dation approach, running a series of h-block cross-validations using h values of 100-1500 km.
277 ith logistic regression models and five-fold cross validation, using area under the ROC curve (AUC) a
278                                          For cross-validation, using epifluorescence microscopy, we d
279                  For comparison and anatomic cross-validation, volunteers were also scanned with clin
280                                    A tenfold cross validation was used to train two 3-class (each for
281         A support vector machine with k-fold cross-validation was built using the graph metrics featu
282                       CPM with leave-one-out cross-validation was conducted to identify pretreatment
283                                              Cross-validation was employed to test a Linear Discrimin
284 ning (80%) and test (20%) sets, and fivefold cross-validation was performed.
285             For training and testing, 5-fold cross-validation was used.
286                                 Accuracy for cross-validations was assessed using a diverse panel und
287 ogistic regression analysis with leave-1-out cross validation, we developed a model, including a vira
288                           Applying extensive cross-validation, we benchmarked the imputation using th
289              With patient-wise leave-one-out cross-validation, we have been able to achieve predictio
290 egression and XGBoosting methods followed by cross-validation were applied to predict individual dise
291  test set and mean accuracies from threefold cross-validation were used to compare the performance of
292 gistic regression, followed by leave-one-out cross-validation, were used to determine the performance
293 teristic (ROC) curve was 0.97 in the de novo cross-validation when evaluated using 910 drugs.
294 ieved an average of 0.91 F1-score on tenfold cross validation with an average area under the curve (A
295              Its performance was compared in cross validation with that of standard supervised method
296  assessed the resulting models using 10-fold cross-validation with 100 repetitions for statistical co
297 o random chance) by leave-one-chromosome-out cross-validation with stratified linkage disequilibrium
298 sk difference between the two strategies) by cross-validation with the SYNTAX trial (n=1800 participa
299 tial autocorrelation is minimized, while the cross-validations with increasing h values can reveal in
300 hine learning applied to multisite data with cross-validation yielded a factorization generalizable a
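For reference, the k-fold procedure named throughout the hits above can be sketched in a few lines of plain Python. This is a minimal illustration only: the mean predictor used as the "model" is a hypothetical stand-in, not a method from any of the cited abstracts.

```python
# Minimal sketch of k-fold cross-validation (pure Python, no external
# libraries). The "model" is a trivial mean predictor -- a stand-in used
# only to show the fold/train/test mechanics.

def kfold_indices(n, k):
    """Yield (train, test) index lists for k roughly equal, contiguous folds."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

def cross_validate(y, k=5):
    """Mean squared error of a mean predictor, averaged over k folds."""
    errors = []
    for train, test in kfold_indices(len(y), k):
        prediction = sum(y[i] for i in train) / len(train)  # "fit" on train
        mse = sum((y[i] - prediction) ** 2 for i in test) / len(test)
        errors.append(mse)
    return sum(errors) / k

score = cross_validate([1.0, 2.0, 3.0, 4.0, 5.0], k=5)
```

With `k` equal to the number of samples, this reduces to the leave-one-out variant that many of the hits above mention.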
