Corpus search results (sorted by the word one position after the keyword)

Click a serial number to open the corresponding PubMed entry.
1 tion of errors of prediction (SDEP) of 0.51 (leave-one-out).
2 odel was built and validated by the standard leave one out analysis.
3                                              Leave-one-out analysis demonstrated that the gray matter
4                                            A leave-one-out analysis reveals that these models predict
5 on performance of TPCR was evaluated by both leave-one-out and leave-half-out cross-validation using
6 ntiating the two case groups was assessed by leave-one-out and Monte Carlo cross-validations.
7 diction algorithm based on these genes and a leave-one-out approach, we assigned sample class to thes
8 two principal components was assessed by the leave-one-out approach.
9 l operating condition (NOC) samples, using a leave-one-out approach.
10                                     Using a 'leave-one-out' approach we find average success rates be
11 ine-learning classification system that uses leave-one-out bias optimization and discriminates among
12 been used as a proof of concept for a novel "leave-one-out" biosensor design in which a protein that
13 n) or large bias (such as resubstitution and leave-one-out bootstrap).
14                     The technique was cross (leave-one-out), both internally and externally, validate
15 ies from benign lesions was evaluated with a leave-one-out-by-case analysis.
16 e classifier, and then computes whether this leave-one-out classifier correctly classifies the delete
17                                            A leave-one-out classifier successfully distinguished auti
18                             We achieve a low leave one out cross validation error of <10% for the can
19  characterized by the values of the internal leave one out cross-validated R2 (q2) for the training s
20 ME model of groundwater (222)Rn results in a leave-one out cross-validation r(2) of 0.46 (Pearson cor
21                                          The leave-one-out cross validation (LOOCV) method was implem
22               The model was supported by (i) leave-one-out cross validation and (ii) division into th
23                                              Leave-one-out cross validation and case studies have sho
24                                      Through leave-one-out cross validation and cross-classification
25                   The model was validated by leave-one-out cross validation and showed good recogniti
26               The model was validated with a leave-one-out cross validation procedure.
27                A PNC LUR model (R(2) = 0.48, leave-one-out cross validation R(2) = 0.32) including tr
28                                   We use the leave-one-out cross validation to compare the performanc
29 tive Mendelian relationship in families, and leave-one-out cross validation to verify our results.
30                                            A leave-one-out cross validation was used to assess the ac
31         The resulting model was validated by leave-one-out cross validation.
32 ity and specificity were calculated by using leave-one-out cross validation.
33 d by using logistic regression analyses with leave-one-out cross validation.
34  individual sequence pairs, and is tested by leave-one-out cross validation.
35 tic curves and was further assessed by using leave-one-out cross validation.
36                                          The leave-one-out cross-validated coefficients q(2) for the
37    The best models were characterized by the leave-one-out cross-validated correlation coefficient q(
38                                              Leave-one-out cross-validated partial least-squares give
39 model that was evaluated on the basis of its leave-one-out cross-validated partial least-squares valu
40 characterized by high internal accuracy with leave-one-out cross-validated R(2) (q(2)) values ranging
41  high internal accuracy were generated, with leave-one-out cross-validated R(2) (q(2)) values ranging
42  The coefficient of determination (R(2)) and leave-one-out cross-validation (LOOCV) demonstrate good
43 SEA) to evaluate the clustering results; (2) Leave-one-out cross-validation (LOOCV) to ensure that th
44                      In these small samples, leave-one-out cross-validation (LOOCV), 10-fold cross-va
45 rker genes while offering the same or better leave-one-out cross-validation accuracy compared with ap
46  highly disordered, and displayed comparable leave-one-out cross-validation accuracy.
47 fier for this training set that achieves 87% leave-one-out cross-validation accuracy.
48                       We used split-half and leave-one-out cross-validation analyses in large MRI dat
49                                          The leave-one-out cross-validation analysis of the data from
50                                            A leave-one-out cross-validation analysis was used identif
51                                           In leave-one-out cross-validation analysis, ten of 11 sensi
52 y; r(2) = 0.93 and a q(2) = 0.91 utilizing a leave-one-out cross-validation analysis.
53 of 88.0%, a finding that was confirmed using leave-one-out cross-validation analysis.
54                                              Leave-one-out cross-validation and an independent data s
55 s estimated and subsequently validated using leave-one-out cross-validation and data from the Multice
56                                       In the leave-one-out cross-validation and de novo gene predicti
57 performance of sNebula on this dataset using leave-one-out cross-validation and five-fold cross-valid
58                                              Leave-one-out cross-validation and gene pairing analysis
59 gorous benchmarking of CCN-BLPred using both leave-one-out cross-validation and independent test sets
60                                            A leave-one-out cross-validation approach is used to deter
61 est classification algorithm combined with a leave-one-out cross-validation approach was implemented
62 ome for 85/102 (83%) NB patients through the leave-one-out cross-validation approach.
63                     Repeat analysis by using leave-one-out cross-validation decreased the apparent di
64                                              Leave-one-out cross-validation experiments show that pre
65  hold-out testing methods: a nearly unbiased leave-one-out cross-validation for the 60 training compo
66                                              Leave-one-out cross-validation indicated high predictive
67 accuracy of these models was assessed by the leave-one-out cross-validation method.
68  (PLS) discriminant analyses, validated by a leave-one-out cross-validation method.
69 f the classifications was performed with the leave-one-out cross-validation method.
70       Various statistical methods, including leave-one-out cross-validation methods, were applied to
71       We evaluated the rcNet algorithms with leave-one-out cross-validation on Online Mendelian Inher
72 r neurological disease, with 87% accuracy by leave-one-out cross-validation on training data (N = 23)
73 lites, we tested 148 metabolites following a leave-one-out cross-validation procedure or by using MS/
74                                            A leave-one-out cross-validation procedure using the top 2
75                                      Using a leave-one-out cross-validation procedure, we were able t
76 een produced with excellent accuracy using a leave-one-out cross-validation process.
77 -redundant polytopic proteins using a strict leave-one-out cross-validation protocol, MemBrain achiev
78 nalysis model built using these markers with leave-one-out cross-validation provided a sensitivity of
79                         LUR-BME results in a leave-one-out cross-validation r2 of 0.74 and 0.33 for m
80 e corresponding carcinomatous lesions, and a leave-one-out cross-validation showed a 98% correct pred
81                                By applying a leave-one-out cross-validation strategy, we could show t
82 ill achieved comparable success rates to the leave-one-out cross-validation suggesting that sufficien
83                                With use of a leave-one-out cross-validation technique, this method wa
84             We used logistic regression with leave-one-out cross-validation to predict outcomes, and
85 dimensionality reduction algorithm, and used leave-one-out cross-validation to predict underlying pat
86 e "test" set of tumors, we used a supervised leave-one-out cross-validation to test how well we could
87                           This was used in a leave-one-out cross-validation to train weights that opt
88                                           In leave-one-out cross-validation using support vector mach
89                                              Leave-one-out cross-validation was applied to each tumor
90                                              Leave-one-out cross-validation was performed.
91                                              Leave-one-out cross-validation was used for validation.
92  gaussian process classifiers using a nested leave-one-out cross-validation were used to predict the
93 tly classified 18 of the 21 classic cases in leave-one-out cross-validation when compared with pathol
94 functions in the classifier can be chosen by leave-one-out cross-validation with the aim of minimizin
95 redictors and initial severity combined with leave-one-out cross-validation yielded a categorical pre
96              Machine-learning analyses (with leave-one-out cross-validation) assessed whether speech
97  they either have large variability (such as leave-one-out cross-validation) or large bias (such as r
98 NA binding and 6761 non-RNA binding domains (leave-one-out cross-validation).
99 lysis, discriminant function analysis (DFA), leave-one-out cross-validation, and Kendall coefficient
100                         Logistic regression, leave-one-out cross-validation, and receiver operating c
101 cells was determined by logistic regression, leave-one-out cross-validation, and receiver operating c
102                Logistic regression analysis, leave-one-out cross-validation, and receiver operating c
103                Logistic regression analysis, leave-one-out cross-validation, and receiver operating c
104                      In logistic regression, leave-one-out cross-validation, and receiver-operating c
105 ted by combining data set bootstrapping with leave-one-out cross-validation, with random sampling of
106 ing their respective benchmark datasets, and leave-one-out cross-validation.
107  3 with approximately 91% accuracy, based on leave-one-out cross-validation.
108 sion profiles was rigorously evaluated using leave-one-out cross-validation.
109  estimated 100% predictive accuracy based on leave-one-out cross-validation.
110 tors of overall survival were developed from leave-one-out cross-validation.
111 anges in connectivity after TMS, followed by leave-one-out cross-validation.
112 eralization of the findings was supported by leave-one-out cross-validation.
113 mis and Bacillus subtilis, was confirmed via leave-one-out cross-validation.
114 noma , and results were validated by using a leave-one-out cross-validation.
115 eatures alone), with 84% accuracy in 5-fold, leave-one-out cross-validation.
116 ng 89 bacterial species in our library using leave-one-out cross-validation.
117  of 60 samples not used for discovery, using leave-one-out cross-validation.
118 elated patterns with logistic regression and leave-one-out cross-validation.
119 tures but have similar levels of accuracy in leave-one-out cross-validations (LOOCV).
120 pifarnib with the greatest accuracy using a "leave one out" cross validation (LOOCV; 96%).
121 e data set out", similar to the traditional "leave one out" cross-validation procedure employed in pa
122 tion and 80% in prediction ability by using "leave-one-out" cross-validation procedure.
123 t independent training and testing sets, or 'leave-one-out' cross-validation analysis with all tumors
124 d use regression models (LUR) frequently use leave-one-out-cross-validation (LOOCV) to assess model f
125 UR models were evaluated using (1) internal "leave-one-out-cross-validation (LOOCV)" within the train
126 cation of 37 clinically relevant bacteria in Leave-One-Out-Cross-Validation.
127 of the ProtPair for IPS study as measured by leave-one-out CV is 69.1%, which can be very beneficial
128                                          The leave-one-out estimates of the probability of test error
129                                     First, a leave-one-out experiment is used to optimize our method
130                         Cross-validation (in leave-one-out form) removes each observation in turn, co
131 n models was evaluated comparing the classic leave-one-out internal validation with a more challengin
132 their predictive power was assessed using a "leave one out" jackknife cross-validation strategy.
133                                In this study Leave One Out (LOO) cross validation is used for validat
134 ee receptors involving 202 complexes, with a leave-one out (LOO) cross-validated Q(2) of 0.689, was o
135 cies in 10-fold cross validation (10xCV) and leave-one-out (LOO) approaches, respectively.
136 -specific transcripts for EDMD, we applied a leave-one-out (LOO) cross-validation approach using LMNA
137 s of these two groups by the cross-validated leave-one-out machine-learning algorithms revealed a mol
138 twork (BP-ANN) was trained in a round-robin (leave-one-out) manner to predict biopsy outcome from mam
139 g phase, which was cross-validated using the leave-one out method.
140         The new models were validated by the leave-one-out method and were cross-validated in a separ
141 s for predicting checkpoint function using a leave-one-out method.
142  discriminant analysis classifier by using a leave-one-out method.
143 e performance of these classifiers using the leave-one-out method.
144                  Models were evaluated using leave-one-out (n - 1) (LOOCV) and grouped (n - 25%) cros
145 d partial least squares training with either leave-one-out or batch-to-batch testing.
146     The experiments presented herein utilize leave-one-out partial least-squares (LOO-PLS) analysis t
147 (ANN) analysis of the combined data set in a leave-one-out prediction strategy correctly predicted th
148  ~120,000 subjects) and MDD (using a 10-fold leave-one-out procedure in the current sample), (ii) biv
149 in terms of the error rate obtained from the leave-one-out procedure, and all of the forests are far
150  DeltaHB(total), and DeltaSASA, the r(2) and leave-one-out q(2) are 0.69 and 0.67.
151 (2) of 0.72, an adjusted R(2) of 0.65, and a leave-one-out Q(2) of 0.56.
152 ocked alignment) showed the best statistics: leave-one-out q(2) of 0.616, r(2) of 0.949, and r(2)pred
153                            Cross validation (leave-one-out technique) was applied to the data.
154 ation accuracy in a cross validation using a leave-one-out technique.
155                       The concordance of the leave-one-out test is over 99.5% and is 99.9% higher for
156                                           In leave-one-out tests, an average of 67% of drugs were cor
157                                         In a leave-one-out three-way classification analysis, the mod
158 and specificity values of 100% employing the leave-one-out validation method.
159                  For biological process, the leave-one-out validation procedure shows 52% precision a
160 f our prediction is measured by applying the leave-one-out validation procedure to a functional path
161                                              Leave-one-out validation showed classification accuracy
162 ine achieved high classification scores in a leave-one-out validation test reaching >90% in some case
163     Subsequent evaluation of the model using leave-one-out validation yielded a classification accura
164 dologies (2-fold, repeat random subsampling, leave one out) were utilized to determine the performanc
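The examples above all concern leave-one-out (cross-)validation: each of the n samples is held out once as the test case, the model is fitted to the remaining n-1 samples, and the n held-out predictions are pooled to estimate performance. A minimal illustrative sketch, assuming Python with scikit-learn is available (the iris dataset and logistic-regression classifier are arbitrary choices, not drawn from the corpus):

    # Minimal LOOCV sketch: train on n-1 samples, test on the held-out one, repeat n times.
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    X, y = load_iris(return_X_y=True)

    # LeaveOneOut yields n splits, each leaving out exactly one sample as the test set.
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=LeaveOneOut())
    print(f"LOOCV accuracy: {scores.mean():.3f} over {len(scores)} held-out samples")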

Technical terms (or usages) not yet included in WebLSD can be submitted via the "新規対訳" (new bilingual entry) link.
 