
Corpus search results (sorted by the word one position to the right)

Click a serial number to open the corresponding PubMed page.
1 r shape features as input to a random forest classifier.
2 g baseline method) and a Logistic Regression classifier.
3  Forest by Penalizing Attributes (Forest PA) classifier.
4 ticulation, as accounted for by the Bayesian classifier.
5 ically significant improvement over a random classifier.
6 47) predictive analysis of microarrays (PAM) classifier.
7 to a pretrained convolutional neural network classifier.
8  scan and used to train a Random Forest (RF) classifier.
9 loying discriminant analysis and a one-class classifier.
10 ts by using this spectroscopic imaging-based classifier.
11  the most strongly predictive microbe in the classifier.
12 th a pretrained convolutional neural network classifier.
13 g attention as a useful feature selector and classifier.
14  resistance was predicted by a random forest classifier.
15 itatively with the Gene Oracle deep learning classifier.
16  recursive feature elimination random forest classifier.
17 window of tuning parameters is used for each classifier.
18 creening assays and the selection of optimal classifiers.
19 ntal (BAC = 65.1%) and genetic (BAC = 55.5%) classifiers.
20 o classify object size in different types of classifiers.
21  features or current state-of-the-art single classifiers.
22 ning and the performance of machine learning classifiers.
23 pproach to improving accuracy of model-based classifiers.
24 he-art CMOS and memristor based mixed-signal classifiers.
25 s used for the independent assessment of the classifiers.
26 inst three state-of-the-art sample barcoding classifiers.
27 gularization structures as well as different classifiers.
28  independent of currently used clinical risk classifiers.
29 class (movement type) Support Vector Machine classifiers.
30 performance was compared with that of the DL classifiers.
31 cation and test the performance of benchmark classifiers.
32 ue classification as compared to traditional classifiers.
33 diographs were used to train and test the DL classifiers.
34 dden performance bias that effected previous classifiers.
35 essed by linear regularized machine learning classifiers.
36 an or competitive to the best of eight other classifiers.
37 yielded superior accuracy than commonly used classifiers (~ 75 vs. ~ 64% accuracy) and had superior p
38 dard DNN approaches, a Gradient Boosted Tree classifier (a strong baseline method) and a Logistic Reg
39  best-performing model incorporated a binary classifier, a nonlinear scale, and additive effects for
40                        Instead of optimizing classifiers, a window of tuning parameters is used for e
41 inding surface, are sufficient to generate a classifier able to identify polyreactive antibodies with
42 ectrometers is crucial to develop functional classifiers able to discriminate rapidly the commodity c
43                     A support vector machine classifier accurately classified rats according to their
44                                              Classifiers achieve ~87% human performance in detecting
45 s of 6 different random point processes, the classifier achieved 96.8% accuracy, vastly outperforming
46 methods combined with Gradient Boosting Tree classifier achieved an F(1)-Score of 0.97 on patients wi
47                   In the database A, the SVM classifier achieved an F1 score of 0.74, and AUC of 0.77
48                          The well-trained RF classifier achieved up to 83% precision for the porosity
49  valleys, and Extreme Gradient Boosting Tree classifier, achieved an F(1)-Score of 0.988 on patients
50                                          Our classifier achieves an accuracy of 91% on held-out tumor
51                             We show that our classifier achieves robust performance and is able to pr
52 ear Discriminant Analysis and Naive Bayesian classifiers across cortical depths in V1.
53 fusion process that can combine nonoptimized classifiers across multiple instruments, preprocessing m
54 aining to operate on all image datasets in a classifier-agnostic manner but is adaptable and scalable
55 on of CT scans combined with a deep learning classifier aided in the diagnosis of morphologic and fun
56 arket experience were used as features for a classifier algorithm.
57                                       The S5 classifier alone or in combination with HPV16/18/31/33 g
58 information about spatial position; a linear classifier also decoded position.
59 s in the explanation space of our diagnostic classifier amplifies the different reasons for belonging
60                                   Microarray classifier analysis has shown promise in the toxicogenom
61 Our findings suggest that using a microarray classifier analysis, not only can we create diagnostic c
62  in biological sciences, as both an accurate classifier and a feature selection tool.
63 PHARM approach with a support-vector-machine classifier and compare their classification accuracies.
64                             With both the S5 classifier and cytology set at a specificity of 38.6% (9
65  vs progression with a combination of the S5 classifier and cytology, whereas HPV genotyping did not
66 ed feature importance using a random forests classifier and performed feature selection based on meas
67 rated resulting TOA scores into a rule-based classifier and validated the tissue assignments through
68 ascaded-CNN is a semantic segmentation image classifier and was trained using thousands of simulated
69 ster than the existing Markovian metagenomic classifiers and can therefore be used as a standalone cl
70                      The GW dataset, trained classifiers and evaluation metrics will be made publicly
71                                      Both ML classifiers and radiologists had difficulty recognizing
72 white, non-Hispanic [AUC, 0.76] in a 5-class classifier), and a network trained only in non-Hispanic
73 ls) were included to build a melancholic MDD classifier, and 10 FCs were selected by our sparse machi
74       We compared different machine learning classifiers applied to the task of drug target classific
75 ning process configurations (including multi-classifier approaches, cost-sensitive learning, and feat
76               Automated signal detectors and classifiers are needed to identify events within these d
77                       However, deep learning classifiers are susceptible to adversarial examples, whi
78 In computational biology, random forest (RF) classifiers are widely used due to their flexibility, po
79                              Despite using a classifier as the decoder, arbitrary hand postures are p
80 redict subtype-classification (Random Forest classifier, average accuracy >=90%).
81          We trained a support vector machine classifier based on MPRA data to predict candidate silen
82  in normal speech production with a Bayesian classifier based on the tongue postures recorded from th
83                             A random forests classifier based on these taxa had an AUROC of 0.90 to p
84                              Gene expression classifiers based on airway brushes outperformed those u
85                      The poor performance of classifiers based on these measurements highlights poten
86 such a context, generating fair and unbiased classifiers becomes of paramount importance.
87 airway brushes and compared machine-learning classifiers between the two tissue types.
88       Third, we train another cell-level SVM classifier by using human-expert assessment of cell abno
89 in a peak-level Support Vector Machine (SVM) classifier by using human-expert assessment of peak abno
90                          This cell-level SVM classifier can be used to assess additional Ca(2+) trans
91 copy, and LIBS coupled with machine learning classifiers can be used to identify both consumer and en
92 s and transformative potential as our 'miRNA classifier' can be used as a molecular tool to stratify
93 AN-generated chest radiographs as inputs, ML classifiers categorized the fake chest radiographs as be
94        (NC3) Up to rescaling, the last-layer classifiers collapse to the class means or in other word
95                                    Moreover, classifier comparisons reveal intra-slide spatial simila
96 lity voting of 1000 LSVC, the final ensemble classifier confidently classified all but 17 TCGA glioma
97                        With machine learning classifiers, consumer plastic types were identified with
98                      We demonstrate that our classifier correctly identifies the risk categories of 2
99 ectrometry data and tools like the Aristotle Classifier could ameliorate the ambiguities associated w
100  these extracted features only, a supervised classifier, DeepC, can effectively distinguish tumors fr
101                                          The classifier described here discriminates DLBCL tumors bas
102                                          The classifier detected morphologic and functional worsening
103          The performance of machine learning classifiers developed using sectoral parameter measureme
104                                The cognitive classifier discriminated SCZs from HCs with a balanced a
105 81 controls and identify a microbial species classifier distinguishing patients from controls with an
106                  In fact, the main substrate classifier distinguishing selectivity is the magnitude o
107                            A gene expression classifier, featuring 26 gene expression scores, was der
108 p a generalizable host-gene-expression-based classifier for acute bacterial and viral infections.
109  for variable selection, and a random forest classifier for BD vs. MDD classification.
110 n units to develop a secondary random-forest classifier for directly predicting asthma severity.
111                   We propose a random forest classifier for identifying adequacy of liver MR images u
112 e, we propose CScape-somatic, an integrative classifier for predictively discriminating between recur
113 opological data analysis (TDA), we present a classifier for repeated measurements which samples from
114      The test group was used to evaluate the classifier for sensitivity, specificity, positive predic
115 ing this dataset, we train a general-purpose classifier for virtual screening (vScreenML) that is bui
116  system across pain types, and provide a new classifier for visceral versus somatic pain.
117 s, and we develop effective machine-learning classifiers for cell age.
118 es the predictions from the individual error classifiers for estimating the quality of a protein stru
119 -derived xenografts we constructed dedicated classifiers for experimental models.
120 earn-by-example training of machine learning classifiers for histologic patterns in whole-slide image
121 er or in conjunction with existing taxonomic classifiers for more robust classification of metagenomi
122  analysis, not only can we create diagnostic classifiers for predicting an exact metal contaminant fr
123 ples for labeling and can be used to develop classifiers for prospective application or as a rapid an
124 hat ensemble methods outperform whole series classifiers for this task and are in some cases able to
125 e them to select the "best" machine learning classifier from a range of seven main models.
126 100 asthma-associated methylation markers as classifiers from each dataset, we found that both AEC- a
127 nt levels (image fusion, feature fusion, and classifier fusion) were investigated.
128                                          The classifier genes enable rational selection of patients w
129                             Importantly, the classifier genes predicted pathological complete respons
130      Our study demonstrates the potential of classifier genes to predict risk for disease relapse and
131                                              Classifier genes were validated by Immunohistochemistry
132 patients, our models outperformed comparable classifiers (>0.10 AUC) and our interpretation methods w
133 udies of emotion, researchers use supervised classifiers, guided by emotion labels, to attempt to dis
134 ontal cortex-posteromedial cortex multimodal classifier had a significant predictive value (area unde
135                                          The classifier had an AUROC of 0.82 for advanced fibrosis in
136 expression profile was used, the statistical classifiers had greater predictive accuracy for determin
137                                    The three classifiers had similar performances to identify the cul
138                     Current machine learning classifiers have successfully been applied to whole-geno
139 ing the original data, we train a diagnostic classifier (healthy vs. diseased) and extract instance-w
140      The feature importance metric from this classifier identified a signature based on 50 key genes,
141 ed by the challenge of optimizing the chosen classifier (identifying the best tuning parameter value(
142 ilized two training datasets and applied the classifier in 15 separate datasets.
143 , and replicate the microbiome-based disease classifier in 45 patients and 45 controls (AUC = 0.765).
144  discrimination performance of the cognitive classifier in iterative subsamples.
145                                     A simple classifier in TensorFlow (version 2) is developed and ho
146 pport using the D-COID strategy for training classifiers in other computational biology tasks, and fo
147 rter run times because it does not construct classifiers in the inner folds.
148 uated well-established machine learning (ML) classifiers including random forests (RFs), elastic net
149    The feature importance recorded by the RF classifier indicates that the intensities of spectra at
150                           We then apply this classifier, inflammatix-bacterial-viral-noninfected-vers
151                                    One class classifier is a powerful and devoted for non-targeted an
152                                          The classifier is designed in SPICE with feature size of 15
153               Unfortunately, developing such classifiers is hindered by the limited availability of t
154 ns of AF induction were used to train the ML classifier, its performance remained similar (validation
155 on, and then employed a new machine learning classifier known as Rhapsody.
156                                            A classifier leveraging stool metatranscriptomes resulted
157 k of validation, and restriction to a single classifier (logistic regression).
158 comes the greatest challenge with status quo classifiers: low sensitivity, especially when dealing wi
159 he development of 1000 linear support vector classifiers (LSVC).
160  learning classifier, named Metabolic Allele Classifier (MAC), that uses flux balance analysis to est
161 rentially abundant microbes, a random forest classifier model was created to distinguish advanced fib
162 interactions were observed with the clinical classifier model-assigned phenotypes in both ALVEOLI (P
163 oosted machine algorithm was used to develop classifier models using 24 variables (demographics, vita
164                 In comparison with different classifier models, feature extraction models and other s
165 ntation with a multi-spectral neural network classifier (MSNN).
166 ent a metabolic model-based machine learning classifier, named Metabolic Allele Classifier (MAC), tha
167 44% and 6.2 months vs 19% and 1.6 months for classifier-negative patients (hazard ratio, 0.49; 95% co
168 ORR of 19%, mPFS of 1.5 months, and 5% CR in classifier-negative patients (P = .0096).
169 e public dataset to discriminate subgroup A (classifier-negative, immune-low) and subgroup B (classif
170                     In the database B the RF classifier obtained a F1 score of 0.71, and AUC of 0.75.
171 uracy than IL17A in a support vector machine classifier of psoriasis and healthy transcriptomes.
172 arning, allows us to construct more accurate classifiers of several brain diseases, compared to direc
173                                 We validated classifiers on independent datasets using within-diagnos
174 rain diseases, compared to directly training classifiers on patient versus healthy control datasets o
175  the 12-lead ECG, we train and test multiple classifiers on two independent prospective patient cohor
176 namics can also be analyzed by deploying the classifiers on variant MD simulations and quantifying ho
177 TPOT-generated ML pipelines with selected ML classifiers, optimized with a grid search approach, appl
178 rs and can therefore be used as a standalone classifier or in conjunction with existing taxonomic cla
179 and, with enough training data, the combined classifier outperforms the models trained with HC featur
180 e-level correlation of 0.45 +/- 0.16 between classifier pairs.
181   This tool improves training efficiency and classifier performance by guiding users to the most info
182                                              Classifier performance is stable across a wide range of
183                                              Classifier performance was assessed using an independent
184 teps, then investigated the breakdown of the classifiers' performances.
185                                          The classifiers performed significantly better than a group
186                                              Classifier-positive DLBCL patients (de novo) had an ORR
187                                              Classifier-positive patients exhibited an enrichment in
188 sifier-negative, immune-low) and subgroup B (classifier-positive, immune-high) patients.
189                                          Our classifier predicted beta-lactone synthetases in unchara
190                        In our cohort, the ML classifier predicted probability of AF recurrence with a
191                                        These classifiers predicted high and low interpersonal attract
192        It was evaluated using five different classifiers: probabilistic bayesian, Support Vector Mach
193                       In the test group, the classifier provided 96% (95% confidence interval [CI]: 9
194                          Fusing nonoptimized classifiers provides reliable classifications relative t
195 aussian radial basis function support vector classifier (RBF-SVC) that achieves classification accura
196 for the best performing previously developed classifier ("Reese Score") were 88% and 72% for Raine, 8
197 re developed using multiple machine learning classifiers: regularised logistic regression, decision t
198 ffective in others (AUC 0.88 +/- 0.11), with classifier relationships also recapitulating known adeno
199              The visceral versus the somatic classifier reliably distinguishes somatic (thermal) from
200         Transcriptome-based machine-learning classifiers revealed that half of the mTORC2-deficient N
201 ously identified 111-gene outcome prediction-classifier, revealing FEN1 as the strongest determining
202 curacy of 0.91, while the custom-constructed classifier reveals an accuracy of 0.89.
203            (NC4) For a given activation, the classifier's decision collapses to simply choosing which
204 d extract instance-wise explanations for the classifier's decisions.
205 can be used for the development of molecular classifier scores, which could improve our diagnostic an
206               A support vector machine (SVM) classifier separating 56 healthy controls (HC) from 35 R
207             In retrospective benchmarks, our classifier shows outstanding performance relative to oth
208  clinical classification, identify a 16-gene classifier signature associated with the development of
209 collected have been handled by a multi-block classifier (SO-PLS-LDA) in order to predict the origin o
210  which are created from raw data to fool the classifier such that it assigns the example to the wrong
211 visualize and transform results from various classifiers-such as Kraken, Centrifuge and MethaPhlAn-us
212  faces, and quantify performance of a binary classifier tasked with distinguishing perpetrator from i
213 based approach appears to produce a reliable classifier that additionally allows one to describe how
214 number of cells for training a random forest classifier that can accurately predict the metastatic po
215 Gaussian process classification, we create a classifier that stratifies drugs into safe and arrhythmi
216 ctors of autism association into an ensemble classifier that yields a single score indexing evidence
217                             Machine learning classifiers that attempt to differentiate between early
218                           We designed cancer classifiers that can identify 21 types of cancers and no
219 achine learning to build 27-, 10- and 3-gene classifiers that differentiate COVID-19 from other ARIs
220 how that DEEPLYESSENTIAL outperform existing classifiers that either employ down-sampling to balance
221  We created a system consisting of different classifiers that is feed with novel morphometric feature
222 thms to sequencing data, we trained a 'miRNA classifier' that could robustly classify 'CRPC-NE' from
223 icial vision and machine learning (and other classifiers) that predicts pregnancy using the beta huma
224 lance analysis to identify accurate sequence classifiers thus contributes mechanistic insights to GWA
225  abstraction but that still allowed a linear classifier to decode a large number of other variables (
226 were given as input to a deep learning-based classifier to depict morphologic and functional worsenin
227 mbined with clinical data in a random forest classifier to develop the system, whose results were com
228 velop two algorithms via cross-validation: a classifier to diagnose NAFLD (MRI PDFF >= 5%) and a fat
229 ve US multivariable models were evaluated: a classifier to differentiate participants with NAFLD vers
230                                 We trained a classifier to differentiate progressors and nonprogresso
231              Then, we develop a multivariate classifier to distinguish visceral from somatic pain.
232 ariant callers and developed a random forest classifier to filter called SNPs.
233 oising autoencoder and a supervised learning classifier to identify gene signatures related to asthma
234  develop, train, test, and validate a robust classifier to identify medulloblastoma molecular subtype
235         Then, we build a cascade deep forest classifier to infer new DTIs.
236              We developed a machine-learning classifier to integrate this multi-omic framework and pr
237            iSAIL includes a machine learning classifier to map and interpret interactions, a curated
238 (PCAWG) Consortium, we train a deep learning classifier to predict cancer type based on patterns of s
239  = 1,214) to develop the ColoType scores and classifier to predict CMS1-4 based on expression of 40 g
240 ary sequences, we trained a machine learning classifier to predict donor specificity with nearly 90%
241  protein target and develops a random forest classifier to predict the effect of an input molecule ba
242 ng convolutional neural network with machine classifier to predict the prognosis of stage III colon c
243 sults of these analyses were used to train a classifier to predict wean outcome.
244 ion strategy, we applied the gene expression classifier to pretreatment biopsies from relapsed/refrac
245 lass modeling is determining which one-class classifier to use followed by the challenge of optimizin
246 ined and tested Support Vector Machine, SVM, classifiers to compare the predictive capacity of each o
247 et, with the potential to include additional classifiers to describe different subtypes of clusters.
248         The development and validation of DL classifiers to distinguish between adequate and inadequa
249 ations based on aging; and trained FCD-based classifiers to distinguish fast- from slow-progressing i
250                                     We train classifiers to distinguish MEG field patterns during pre
251 , which were employed to build various novel classifiers to distinguish patients that lived for over
252 e ability of a DNA methylation panel (the S5 classifier) to discriminate between outcomes among young
253 veloped machine learning tool, the Aristotle Classifier, to bacterial classification of MALDI-TOF MS
254 fication methods, including machine learning classifiers, to determine accuracy for identifying type
255                                              Classifiers trained on data with approximate labels have
256                                   Supervised classifiers trained on histologic rejection showed less
257           Additional experiments showed that classifiers trained on responses to color words could de
258  specimens were analyzed by machine learning classifiers trained to identify relevant cytological fea
259                                     Notably, classifiers trained with prospective beliefs of success
260                              We found that a classifier, trained to discriminate the direction of vis
261 on two previously published autism detection classifiers, trained on standard-of-care instrument scor
262  that provides web-based user interfaces for classifier training, validation, exporting inference res
263 ependent of the sampling methodology and the classifier used for their inference.
264  transient data, we trained a cell-level SVM classifier using 200 cells as training data, then tested
265        We trained and tested a random forest classifier using 5-fold cross-validation.
266                                          The classifier utilizes a large composite microarray dataset
267 ss during mapping and designed a graph-based classifier, VAPOR, for selecting mapping references, ass
268 , a strong metabolite-based machine-learning classifier was able to successfully predict unique OAT1
269                The framework from which this classifier was built is generalizable, and represents a
270 ediction accuracy- a machine learning binary classifier was integrated with the device as a proof-of-
271                                The cognitive classifier was relatively specific to schizophrenia (HC-
272                                     The best classifier was ROPN1L, a gene known to be expressed in t
273  [CI], 28.4-49.6), the sensitivity of the S5 classifier was significantly higher (83.6%; 95% CI, 71.9
274 two outcomes in neuron spiking data, the TDA classifier was similarly accurate to the SVM in one case
275 classifier were compared to outcomes, the S5 classifier was the strongest biomarker associated with r
276                                 The selected classifier was trained on a "training cohort" and tested
277                                          The classifier was used to assess the classification power f
278                           The performance of classifier was validated in an additional cohort of mCRP
279                                          The classifier was validated using a gene expression microar
280                         A series of 3 binary classifiers was employed, and the prediction model exhib
281              Using standard machine learning classifiers we found that the current prospective decisi
282 the computed accuracies with that of a naive classifier, we can identify the experimental conditions
283 mages to the predictive capability of the ML classifier, we found that when only features from simula
284 PPV, and NPV at the optimal cutoffs for each classifier were 94.2%, 96.9%, 97%, and 94% for the logis
285 18 and HPV16/18/31/33 genotyping, and the S5 classifier were compared to outcomes, the S5 classifier
286  The AD probability scores calculated by the classifier were correlated with brain tau deposition in
287                       Support vector machine classifiers were applied to UV-Visible spectra of liquid
288 istic regression and nonlinear Random Forest classifiers were benchmarked and evaluated for predictin
289                             Machine learning classifiers were then trained to decode different tones
290 nd T2w sequences, and support vector machine classifiers were trained on the CNN features to distingu
291 d genetic, environmental, and neurocognitive classifiers were trained to separate 337 HCs from 103 SC
292        Additionally, a novel gene expression classifier, which identifies tumors with a high immune c
293 nput to a quadratic discriminant analysis ML classifier, which was trained, optimized, and evaluated
294 ts maintained fixation, were used to train a classifier, whose performance was then tested on saccade
295 lected host mRNAs, we train a neural-network classifier with a bacterial-vs-other area under the rece
296 rming hand-optimized pipeline was a Bayesian classifier with Fischer Score feature selection, achievi
297                ELNET was the top stand-alone classifier with the best calibration profiles.
298                          The best performing classifier (XGBoost) predicted 45% (95% CI: 43%, 46%) of
299                               A two-variable classifier yielded a cross-validated area under the rece
300                          The single-sequence classifiers yielded areas under the ROC curves (AUCs) [9
