
Corpus search results (sorted by the word one position after the keyword)

Click a serial number to open the corresponding PubMed entry
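A keyword-in-context (KWIC) listing sorted by the word following the keyword, as below, can be sketched in Python. This is an illustrative reconstruction, not the search tool's actual implementation; the function name `kwic`, the context width, and the sort key are assumptions:

```python
import re

def kwic(texts, keyword, width=45):
    """Build keyword-in-context lines, sorted by the word
    immediately after the keyword (the '1 word after' sort)."""
    rows = []
    # Match the keyword plus any suffix, e.g. "regularizations".
    pat = re.compile(r'\b(%s\w*)\b' % re.escape(keyword), re.IGNORECASE)
    for text in texts:
        for m in pat.finditer(text):
            left = text[max(0, m.start() - width):m.start()]
            right = text[m.end():m.end() + width]
            # Sort key: the first word of the right context.
            follower = right.strip().split()[0].lower() if right.strip() else ''
            rows.append((follower, left.rjust(width), m.group(1), right))
    rows.sort(key=lambda r: r[0])
    return ['%s%s%s' % (l, k, r) for _, l, k, r in rows]
```

Right-justifying the left context aligns every occurrence of the keyword in one column, which is what makes the truncated fragments in the listing line up.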
1 y selected by Bayesian information criterion regularization.
2 ystems of linear equations by using sparsity regularization.
3 g techniques like Group lasso with l(1)/l(2) regularization.
4 scales for the Euler fluid equations without regularization.
5 oduced for variance stabilization as well as regularization.
6 imization problem subject to Shannon entropy regularization.
7 s in TLE, suggesting a shift towards network regularization.
8 ng procedure while preventing overfitting by regularization.
9  parameter to weight between ridge and lasso regularization.
10 of transcription factor motifs as a manifold regularization.
11 ore motifs by logistic regression with LASSO regularization.
12 ute shrinkage and selection operator (LASSO) regularization.
13 uential logistic regression with Elastic Net regularization.
14 ) of features can be expanded or reduced via regularization.
15 mbines boosting, bagging, and strong dropout regularization.
16 d clustering and soft clustering by sparsity regularization.
17 r mixed regressions with Lp (0 < p < 1) norm regularization.
18 parameterization and the absence of explicit regularization.
19 al methods are based on mixture modeling and regularization.
20 inverse procedure and abolishes the need for regularization.
21 r effects, and children's overextensions and regularizations.
22 o generalized linear models with elastic net regularization (14VF and 14GT), based on the 14 genes, s
23 ated samples, which are analyzed by Tikhonov regularization, a method that has not yet been applied t
24 CAPS) project conducted extensive review and regularization across studies of all schizophrenia linka
25                                     Bayesian Regularization Algorithm (BRA), Scaled Conjugate Gradien
26    In addition, we propose a domain-specific regularization algorithm for training the proposed multi
27 range, temperature increment, hold time, and regularization algorithm, all play a role in showing ana
28 nucleus neurons revealed gait-related firing regularization and a drop of beta oscillations during th
29 ddings), a framework unifying spectral graph regularization and imbalance-aware learning.
30 layer Perceptron (MLP) optimized by Bayesian Regularization and Levenberg-Marquardt were applied for
31         Applied to COVID-19 data with proper regularization and model-selection criteria, the approac
32                 RA was performed by using L1 regularization and principal component analysis.
33 lassification pipeline combining consistency regularization and pseudo-labelling, and adapts it for t
34 nforcement may be pivotal to achieve pattern regularization and restore the neural activity in the nu
35 using an iterative regression algorithm with regularization and subsequently fit with a zero-memory n
36 egularization constants, i.e., lambda for L2 regularization and t for L1 regularization, is developed
37 field (STRF) with two hold-back datasets for regularization and validation.
38                         In addition, we used regularization and variable selection via the elastic ne
39 We derive optimal profiles under 2 different regularizations and uncover the efficiency of N-shaped b
40 t IDH mutations (logistic regression with L2 regularization) and the 1p/19q codeletion (support vecto
41 ly better than linear regression (with lasso regularization), and the gradient boosting model perform
42           Age at menarche, time to menstrual regularization, and duration or intensity of menstrual f
43                         The influence of the regularization appears solely on the right-hand side, wh
44  model fitting is then completed with proper regularizations applied.
45 models, random effects models were used as a regularization approach to reduce overfitting.
46 on the negative binomial distribution, use a regularization approach to select a few transcripts coll
47 rning technique we develop uses a task-based regularization approach.
48 ssion calls lends itself perfectly to modern regularization approaches that thrive in high-dimensiona
49                            Randomization and regularization are also applied in BiXGBoost to address
50  as feature-selection, cross-validation, and regularization are employed to combat overfitting, but r
51 uently, models fitted using derivative-based regularization are often biased toward underestimating t
52 ), but there was no difference for RA and L1 regularization (AUC, 0.81 vs 0.80, respectively; P = .76
53 erval: 0.86, 0.89) was better than RA and L1 regularization (AUC, 0.81; 95% confidence interval: 0.79
54 method substantially outperforms traditional regularization-based inverse ECG approaches and previous
55 odel ultilizes maximum entropy modeling with regularization-based structure learning to statistically
56 , we highlight two approaches of statistical regularization, Bayesian least absolute shrinkage and se
57 le, we propose to use a bounded nuclear norm regularization (BNNR) method to complete the drug-diseas
58 gorithms: Levenberg-Marquardt (LM), bayesian-regularization (BR), and scaled conjugate gradient (SCG)
59  of a constant second-order coefficient, our regularization by a singular function is straightforward
60 mplectomorphic registration with phase space regularization by entropy spectrum pathways (SYMREG), th
61                                   Reweighted regularization can enhance sparsity and obtain better sp
62 ntly developed NCA-r algorithm with Tikhonov regularization can help solve the first issue but cannot
63  Bayesian parameter estimation combined with regularization can isolate and reveal core motifs suffic
64 esent an ensemble approach using elastic net regularization combined with mRNA expression profiling a
65           For IFBP with 15 iterations and no regularization compared to 3DRP, both using a ramp filte
66 ctor elastic net that self-optimizes two key regularization constants, i.e., lambda for L2 regulariza
67  program CONTIN, the parameter governing the regularization constraint is adjusted by variance analys
68 ions, and the various physical and numerical regularizations contributing in different proportions in
69 multivariate calibration method, constrained regularization (CR), and demonstrate its utility via num
70 pose a clustering threshold gradient descent regularization (CTGDR) method, for simultaneous cluster
71                     We study how the rate of regularization depends on the frequency of word usage.
72 mploys a stand-alone CNN-based model with L2 regularization, dropout, and early stopping, achieving 7
73 tion, and whole-brain decoding with L1 or L2 regularization-each have critical and complementary blin
74 we solve this puzzle by showing an effective regularization effect of gradient descent in terms of th
75 ichardson-Lucy algorithm with Total Variance regularization, enabling submicrometer image fidelity, d
76 ng positive relationship between the rate of regularization errors and damage to the posterior half o
77 ption words, especially on trials where over-regularization errors occurred.
78 ency exception words, and made frequent over-regularization errors.
79 eficits were not correlated with the rate of regularization errors.
80 istinct from the lesion site associated with regularization errors.
81  with this deficit also make characteristic 'regularization' errors, in which an irregularly spelled
82            Our results show that the optimal regularization factor can be determined well with the L-
83                      In both ridge and lasso regularization, feature shrinkage is controlled by a pen
84 graph' that implements the graph-constrained regularization for both sparse linear regression and spa
85     The clustering also acts as an effective regularization for data imputation on unassayed CpG site
86 low as 0.52 eV/atom with proper selection of regularization for physical consistency.
87 reproducibility of the model, we incorporate regularization for simultaneous shrinkage of gene sets b
88 ans of interaction matrix weighting and dual regularization from both chemicals and proteins.
89 lution method that combines an entropy-based regularization function with kernels that can exploit ge
90 torization with a new biologically motivated regularization function.
91 s and uses L2 and Hessian Frebonius norms as regularization functions to improve performance.
92 new computational method, called graph-based regularization (GBR), for expressing a pairwise prior th
93  In the other two regimes, the two different regularizations give rise to the same generalized flows
94 ermediate compressibility, the two different regularizations give rise to two different scaling behav
95 y and traveling waves could lead to synaptic regularization, giving unique insights into the role and
96 t that promotes sparse variable selection by regularization governed by the covariance and inverse co
97 ssion analysis, our model uses a new form of regularization, group l(2,1)-norm (G(2,1)-norm), to inco
98 e of the nature of sparsity in DOT, sparsity regularization has been utilized to achieve high-quality
99 end critically on the heuristic selection of regularization (hyper-) parameters affecting the balance
100                Estimation can be improved by regularization, i.e. by imposing a structure on the esti
101 , the transaxial resolution for IFBP without regularization improved 35% compared to 3DRP; with regul
102                                  Statistical regularization improves predictions and provides a gener
103 ctors, underscoring the importance of strong regularization in noisy datasets with far more features
104 yoSPARC followed by 3D refinement with Blush regularization in RELION constitutes an effective strate
105     In this paper we develop a novel type of regularization in support vector machines (SVMs) to iden
106                       Thus there is implicit regularization in training deep networks under exponenti
107 ages this trait network to encode structured regularizations in a multivariate regression model over
108 dependence of the RT mixing rate on nonideal regularizations; in other words, indeterminacy when mode
109 The model for CVD chosen through elastic net regularization included interaction terms suggesting tha
110 Hanning roll-off, the noise for IFBP without regularization increased by a factor of 6 compared to 3D
111      Epithelial healing time, DeltaKmax, and regularization index (RI) were significantly better in t
112        Efficient approaches that incorporate regularization into multi-trait linear models (no random
113                              We incorporated regularization into the model fitting procedure to addre
114 (i) introducing combined structured sparsity regularizations into multimodal multitask learning to in
115 er analysis we demonstrate that the observed regularization is associated with memory effects in the
116 t screening to reduce dimension first before regularization is more efficient and stable than applyin
117 often ill-conditioned, and data smoothing or regularization is required to avoid overfitting the nois
118                  In addition, an l(2,1)-norm regularization is utilized to couple feature selection a
119 ., lambda for L2 regularization and t for L1 regularization, is developed and referred to as self-opt
120 ystems-based modeling approach called kinome regularization (KiR) to identify multitargeted kinase in
121         Our approach modifies the Tikhonov's regularization, known previously in CONTIN and Maximum E
122 n by sparsity inducing regression (l(1) norm regularization) leads to a stable set of features: i.e.
123 rrors in substrate deformation to adjust the regularization level accordingly.
124 cluded cross-correlation, displacement field regularization, lobar segmentation overlap, and the Jaco
125 lation using reconstruction and latent space regularization losses, we propose various histogram-base
126         Nevertheless, we show that our fused regularization matrix factorization provides a novel inc
127 sic particle filter (PF) with resampling and regularization, maximum likelihood estimation via iterat
128 ification and the threshold gradient descent regularization method for estimation and biomarker selec
129                                 The Bayesian regularization method for high-throughput differential a
130                                         This regularization method increases the probability of findi
131                           Here, we present a regularization method that improves convergence towards
132 mentation (DA), a simple but effective model regularization method to improve resilience to zero infl
133                  Second, we use a reweighted regularization method to obtain the sparse representatio
134             We propose a network constrained regularization method to select important SNPs by taking
135 TGDR (clustering threshold gradient directed regularization) method to genetic association studies us
136 n is more efficient and stable than applying regularization methods alone.
137                    The integrated group-wise regularization methods increases the interpretability of
138 d procedure outperforms existing main-stream regularization methods such as lasso and elastic-net whe
139 e two spike-timing-based sparse-firing (SSR) regularization methods to further reduce the firing freq
140 comparison burden, and direct application of regularization methods to select regulator-gene pairs is
141                         The effects of these regularization methods were investigated on the MNIST, F
142 compared to commonly used sparsity-promoting regularization methods while maintaining predictive perf
143      Compared to the standard TGDR and other regularization methods, the CTGDR takes into account the
144 verfitting, which can be avoided by standard regularization methods.
145 der, Tikhonov first-order and L1 first-order regularization methods.
146 e STRE model significantly outperforms other regularization models that are widely used in current pr
147 recent years, focusing on those that utilize regularization, modularity, memory, and meta-learning, a
148 evaluations indicate that the most efficient regularization norm is the identity matrix.
149      Morphologic results showed an analogous regularization of corneal shape with a significant reduc
150 ed Epistatic Net (EN), a method for spectral regularization of DNNs that exploits evidence that epist
151 namics of language evolution, we studied the regularization of English verbs over the past 1,200 year
152 ges were generated by voxelwise rank-shaping regularization of exponential spectral analysis (RS-ESA)
153 e variation in the cultural transmission and regularization of musical systems.
154  statistical techniques for dealing with the regularization of noisy data.
155           We also discuss techniques for the regularization of our parameter extraction method.
156 ll-known grammatical changes in English: the regularization of past-tense verbs, the introduction of
157  that some such models show will not survive regularization of the constitutive description, by inclu
158 sional and low-sample-sized data: one is the regularization of the covariance matrix, and the other i
159  method to estimate the TR parameter ( ) for regularization of the Fredholm integral equations of fir
160                                          The regularization of the parameter space achieved by the ME
161 r with other tuning parameters, controls the regularization of the random forests model.
162     In this work, we propose mobility-driven regularization of the SEIR transmission rate trajectory.
163 n ideal observer paradigm, we illustrate how regularization of the spike train can significantly impr
164 or many types of statistical analyses, while regularization of variances (pooling of information) can
165 cial anatomical obstacle induces slowing and regularization of VF, impairs the persistence of VF as j
166           Results reveal pathways for robust regularizations of stochastic responses of metamaterials
167                             Two most natural regularizations of this problem, namely the regularizati
168 e furthermore evaluate the interplay between regularization (often used to estimate model fits to the
169                       We propose a method of regularization on protein concentration estimation at th
170 d multinomial logistic regression with lasso regularization on the training set to determine a parsim
171 or distribution that imposes graph Laplacian regularization on transmission parameters.
172  points, and meanwhile impose the trace-norm regularization onto the unfolded coefficient tensor to a
173                        Through wavelet-basis regularization, our method sharpens signal without sharp
174                           DIRAS uses a fixed regularization parameter (lambda), which performs robust
175  to avoid the ad hoc selection of a specific regularization parameter and to capture valuable brain c
176         The sensitivity to the values of the regularization parameter and to the shape parameters is
177 sian approach can be taken to eliminate this regularization parameter entirely, by integrating it out
178                                          The regularization parameter is automatically set using Baye
179  An efficient method to optimally select the regularization parameter is proposed for obtaining a hig
180  a simple Bayesian approach to integrate the regularization parameter out analytically using a new pr
181 ct of this baseline correction method is the regularization parameter that prevents overfitting that
182 em as the solution depends, in general, on a regularization parameter with optimal value that is not
183   The number of components in the basis, the regularization parameter, and the mass spectral range fr
184 ity obtained is determined by the value of a regularization parameter, which must be carefully tuned
185 nted effects of label noise when setting the regularization parameter.
186 y obtained is determined by the value of the regularization parameter.
187 d have no selection criteria to optimize the regularization parameter.
188                                          The regularization parameters are chosen by a new, data-adap
189                            Estimation of the regularization parameters are performed effectively from
190 iding a means for self-consistently choosing regularization parameters from data, we derive posterior
191 truction methods with adaptive adjustment of regularization parameters.
192  R package presented here- produces complete regularization paths.
193 of a newly developed pure component Tikhonov regularization (PCTR) process that does not require labo
194 ts on a method named pure component Tikhonov regularization (PCTR) that does not require laboratory p
195        Moreover, the classical approaches to regularization-penalizing the derivative of the transmis
196                                          Our regularizations (penalties) render features learned in h
197 ved and reconstructed data, while imposing a regularization penalty on the reconstructed data.
198  behavior in Rayleigh-Taylor (RT) mixing are regularizations (physical and numerical), which produce
199 rajectory prediction problem into a standard regularization problem; the solution becomes solving thi
200 onclusions: The proposed network-constrained regularization procedure efficiently utilizes the known
201  article, we introduce a network-constrained regularization procedure for linear regression analysis
202                         We establish a clear regularization procedure for the use of LDA in ultrafast
203                                          Our regularization procedure is based on a combination of th
204 , and find a stable solution by exploiting a regularization procedure to cope with large matrices.
205 proximal in the structure; and a statistical regularization procedure to prevent overfitting.
206 tudy provides a quantitative analysis of the regularization process by which ancestral forms graduall
207 ch as averaging, and establish crowding as a regularization process that simplifies the peripheral fi
208 Along the path of our analysis, we present a regularization process via complexification and explore
209              We introduce a novel stochastic regularization, ReVeaL, that empowers ML to discriminate
210 illation phenomenon could result from L(2/3) regularization's nonsmoothness.
211                           We present a novel regularization scheme called The Generalized Elastic Net
212                           We propose a novel regularization scheme over multitask regression called j
213 d populations, and it employs a novel spline regularization scheme that greatly reduces estimation er
214 st absolute shrinkage and selection operator regularization selected the clinical parameters: histolo
215                                 As expected, regularization significantly improved the determination
216 ly applied pairwise GC model (PGC) and other regularization strategies can lead to a significant numb
217 he recovery coefficient, compared with other regularization strategies such as the premature terminat
218               First, we establish a residual regularization strategy that applies constraints on the
219 al measured by Kevin probe and the iterative regularization strategy.
220 esents a novel physics-driven spatiotemporal regularization (STRE) method for high-dimensional predic
221 t re-balance preprocessing, different sparse regularization structures as well as different classifie
222 us adverse events chosen through elastic net regularization suggested that male sex, current smoking,
223                                  The network regularization takes advantage of prior knowledge of gen
224                               Furthermore, a regularization technique is employed to increase the num
225 stical model which uses a mutual information regularization technique to explicitly disentangle facto
226 atogram was reconstructed using the Tikhonov regularization technique with minimal instrumental disto
227 , such noise injection is commonly used as a regularization technique.
228 g relevant variables, penalization and other regularization techniques are routinely adopted.
229                                              Regularization techniques are used to perform the variab
230               Here, we investigate different regularization techniques for high-dimensional data deri
231                                              Regularization techniques help prevent overfitting train
232 d with Tikhonov-Phillips and maximum entropy regularization techniques that provide the simplest dist
233 dely applicable, and can incorporate various regularization techniques to maintain confidence in the
234 he robustness of the model architecture, the regularization techniques used, the loss function used d
235 rained using different optimization methods, regularization techniques, data augmentation techniques,
236                                        Using regularization techniques, we estimated the linear compo
237  Hellinger distance (HD) coupled with sparse regularization techniques.
238  define least-squares fitting, we motivate a regularization term in the objective function that penal
239             One of the key improvements is a regularization term in the objective function to enforce
240             PFC damage was correlated with a regularization term that limited updates to attention af
241 ease-disease similarities by incorporating a regularization term to balance the approximation error a
242        Importantly, we incorporate a dynamic regularization term to learn a latent space that is robu
243  extends the ML model's loss function with a regularization term to penalize high correlations betwee
244 iii) adversarial training, and (iv) adding a regularization term to the loss function.
245            Sparsity is achieved by adding an regularization term to the variational principle, which
246 control during training, such as an explicit regularization term.
247 nstructed image from the measured data and a regularization-term based upon the sum of the moduli of
248 Bayesian approach that incorporates data and regularization terms directly into a path integral.
249 es these prior structures by introducing new regularization terms to encourage weight similarity betw
250 ential Tilt Model (pETM) using network-based regularization that captures both mean and variance sign
251 s of model quality provides estimates of the regularization that will be required when these models a
252 rization improved 35% compared to 3DRP; with regularization the improvement dropped to 19%.
253 ased by a factor of 6 compared to 3DRP; with regularization the noise was increased only by a factor
254 kernel learning techniques and network-based regularization, the proposed method not only enhances th
255                                With a strong regularization, the transaxial and axial resolution impr
256 not require cardiac chamber segmentation and regularization, thus eliminating these issues.
257 ered classes of metabolites, (2) elastic net regularization to define metabolites that most strongly
258 tered classes of metabolites; 2) elastic net regularization to define metabolites that most strongly
259 esses atomic model information as structural regularization to elucidate such heterogeneity.
260  for sample sharing between studies and uses regularization to estimate a data-driven number of inter
261  the known labels of the protein, and uses a regularization to exploit the interactions between prote
262 ure representation together with group lasso regularization to extract a collection of sequence signa
263  that combines Parallel Tempering with Lasso regularization to identify minimal subsets of reactions
264 chographic imaging, by using total variation regularization to mitigate noise and artifacts in the re
265                         (2) sigLASSO uses L1 regularization to parsimoniously assign signatures, lead
266 We perform model selection using elastic net regularization to prevent overfitting.
267 , and that the mediodorsal thalamus provides regularization to promote efficient reuse.
268  Quantitative susceptibility mapping employs regularization to reduce artifacts, yet many recent deno
269 user-specified background signature, employs regularization to reduce noise in non-background signatu
270 ng species are combined with maximum entropy regularization to represent a continuous size-distributi
271 ute shrinkage and selection operator (LASSO) regularization to select covariates among 9 or 21 candid
272     Its logits provided rich information and regularization to students, mitigating the risk of overf
273 cal contact interaction model with numerical regularization to study the transition from partial to f
274 bling method is the introduction of sparsity regularization to the solution of the inverse problem, w
275  but also addresses the spatial and temporal regularizations to improve the prediction performance.
276 z., CGC-2SPR (CGC using two-step prior Ridge regularization) to resolve the problem by incorporating
277                 A 2-norm variant of Tikhonov regularization (TR) has been used with spectroscopic dat
278 le regime for all the systems using Tikhonov regularization (TR) method.
279           Besides, by introducing trajectory regularization (TR), we effectively alleviate the proble
280 9.994), closely followed by the ANN Bayesian Regularization (trainbr) model with a 14-72-1 architectu
281                     In both cases, Laplacian regularization turned out to be crucial for achieving a
282  regularizations of this problem, namely the regularization via adding small molecular diffusion and
283 wo-stage strategy by first performing sparse regularization via cross-validated elastic net, and then
284 via adding small molecular diffusion and the regularization via smoothing out the velocity field, are
285 ere developed using logistic regression with regularization via the least absolute shrinkage and sele
286                                        Lasso regularization was used to identify the components of th
287         Logistic regression with elastic net regularization was used to select textural features asso
288                                  Elastic net regularization was used to select variables to be includ
289                     Logistic regression with regularization was used.
290                              Using L1 and L2 regularizations, we developed a new variable selection m
291         Boundary element methods and numeric regularization were used to compute electrograms at 194
292 st Absolute Shrinkage and Selection Operator regularization were used to examine candidate predictors
293 arameter is automatically set using Bayesian regularization, which not only saves the computation tim
294        We compared the method of elastic net regularization, which uses repeated internal cross-valid
295                   Our method uses trace norm regularization with a highly efficient ADMM (alternating
296                                              Regularization with L(1) that based methods controls the
297 ability of solutions with greater degrees of regularization with the resolution of those that are les
298 est if the statistical method of elastic net regularization would improve the estimation of risk mode
299  Parkinsonism, it is unclear how the pattern regularization would originate from HFS.
300                                     Tikhonov regularization yields distance distributions that can be
