
Corpus search results (sorted by the first word after the keyword)

Clicking a serial number opens the corresponding PubMed entry.
1 ion tested, however, they showed a high root mean squared error.
2 specific PPVs in order to reduce the overall mean squared error.
3 ely used methods in terms of normalized root mean squared error.
4 han those of CLS and PLS methods in terms of mean squared error.
5 round truth were quantified through the root mean squared error.
6 res is able to predict tumor growth with 16% mean squared error.
7 aclass correlation coefficient, and the root mean squared error.
8 eas compressed sensing yields 20% normalized mean squared error.
9 n terms of the goodness of fit measure total mean squared error.
10 were reconstructed with less than 0.137 root mean squared error.
11 ve rate (TPR), true negative rate (TNR), and mean squared error.
12 hood estimator in terms of matrix and scalar mean squared error.
13 ower law functions, comparing model fit with mean squared error.
14 chieved, bias, standard error, coverage, and mean squared error.
15 ction function with the best cross-validated mean squared error.
16 ed the effect estimate, percentage bias, and mean squared error.
17  and an accuracy of [Formula: see text] root-mean-squared error.
18 r rate for the estimated mixing proportions (mean squared error 0.012) and high accuracy for the DR p
19  than the PTDM approach (Cox: bias = -0.002, mean squared error = 0.025; PTDM: bias = -1.411, mean sq
20 metabolites (cross-validated R(2) = 0.82 and mean squared error = 0.14).
21 3DO body fat test-retest precision was: root mean squared error = 0.81 kg male, 0.66 kg female.
22 rated superior performance (log10-scale root mean squared error = 0.88, R-squared = 0.66) than previo
23 .892, and 0.855 for binary classifiers; root mean squared error- 0.027 for DNN regression).
24 on sample (predicted-versus-observed r=0.47, mean squared error=0.019).
25 nal age at scan in preterm infants with root mean squared error 1.41 weeks.
26 ement with a reduced number of markers (root mean squared error 1.55-8.27( degrees ) real data, 1.27-
27 ce compared to a constant linear regression (mean squared error = 1.10 vs. 1.59, p < 0.001).
28 VPG had a Pearson correlation of 0.977 (root mean squared error, 1.57 mm Hg; P < .0001).
29 d with taxonomic resolution (normalised root mean squared error, 10.1-24%; coefficient of determinati
30 moderate levels of accuracy (normalised root mean squared error, 12.5-18.6%; coefficient of determina
31                                   The lowest mean squared error (14,354) corresponded to a linear reg
32 clearances were in very good agreement (root mean squared error 16.8 mL h(-1) g(-1)) across three lev
33 ificant accuracy (correlation=0.32, P=0.006; mean squared error=176.88, P=0.001).
34 n local genetic correlation estimation (with mean squared errors 2.23-4.13 times lower) and enhanced
35  squared error = 0.025; PTDM: bias = -1.411, mean squared error = 2.011).
36 oefficient = 0.95 (95% CI: 0.94, 0.95); root mean squared error = 8.24 days) and performed better tha
37 ccuracy varied among traits (normalized root mean squared error, 9.1-19.4%; coefficient of determinat
38 d to other robust methods, it has the lowest mean squared error across a range of realistic scenarios
39 ing, improving on existing methods by 17% in mean squared error and 26% in F1 score for chromatin loo
40 9 predictor variables had the lowest CV root mean squared error and a CV-R2 of 0.803.
41 performance between fitted models using root mean squared error and Alpaydin's paired F test.
42 of true positive rate, false discovery rate, mean squared error and effect size estimation error.
43 4% with respect to estimated cross-validated mean squared error and had an R2 value of 0.201.
44 timator, both in theoretical computations of mean squared error and in data analysis.
45 sonication and without it, furthermore, root mean squared error and standard error values were obtain
46                                         Root mean squared errors and mean residual errors were used t
47                                      We used mean squared errors and Pearson's correlation coefficien
48 models having reasonably low normalized root-mean-squared errors and high correlations for both the f
49  but outperform them in terms of efficiency (mean squared error) and computational speed (up to >870
50 tive fidelity (accuracy, precision, and root mean squared error) and locally with relative error in v
51 (R-squared), MAE (Mean Absolute Error), MSE (Mean Squared Error), and MAPE (Mean Absolute Percentage
52 er metrics such as mean absolute error, root mean squared error, and contact precision.
53                                  Mean error, mean squared error, and mean absolute error were calcula
54 -statistic-based sensitivity, group-specific mean squared error, and several gene-specific diagnostic
55 (1) calculating the mean average error, root mean squared error, and Spearman's correlation coefficie
56  by standard error, bias, square root of the mean-squared error, and 95% confidence interval coverage
57 for the prediction error bias, variance, and mean-squared error are given under general measurement e
58 find that global comparison methods, such as mean squared error, are suitable for initial screening;
59 an inference with Gaussian weight priors and mean squared error as a negative log-likelihood.
60 es, we were able to train ML regressors with mean squared error as low as 0.07 +/- 0.02.
61                         For spiral scans and mean squared error based pre-training, this enables elec
62     Within year, hold-out validation yielded mean-squared-error-based R(2) (MSE-R(2)) (i.e., fit arou
63                                          The mean squared error between filtered and true images was
64 mulation with desired movements yielded root mean squared errors between approximately 18 and 26%.
65                      Meanwhile, average root mean squared errors between surface coordinates of 3DE a
66  models, decreasing the cross-validated root mean squared error by 1-42%, depending on the model.
67 fficients to the estimated channel's minimum mean-squared error by considering the access point (AP)
68 vorably to existing alternatives in terms of mean squared error, coverage, and CI size.
69 ecision was estimated from the combined root mean squared error (CRMSE) and the coefficient of determ
70 edure, our approach reduces the bias and the mean squared error, especially for modest effect sizes.
71 ed inverse probability weighting in terms of mean-squared error even under misspecification of one of
72 servations tends to lead to reduced bias and mean squared error for all estimated parameters in the r
73                                         Root mean squared error for BGB from testing data was 313 g m
74 d Senegal, we calculated bias, variance, and mean squared error for estimates of the prevalence ratio
75                   The difference between the mean squared error for groups A and B was not significan
76 ues, the average absolute difference and the mean squared error for MD forecasts with linear extrapol
77 p learning model has less than 8% normalized mean squared error for quantitation of L-COSY spectra.
78                                          The mean squared error for the automated method was signific
79                The authors compared bias and mean squared error for various PS implementations under
80 oth milk and meat spoilage, and typical root mean squared errors for prediction in test spectra were
81 s in the range approximately 1.1-2.2 A (root mean squared errors for the predicted bond distances of
82 erence panels, we observed a 28% decrease in mean-squared error for imputation and a 73.7% decrease i
83 error for imputation and a 73.7% decrease in mean-squared error for joint-testing.
84 uch lower MAE (Mean Absolute Error) and MSE (Mean Squared Error) for predictions made within differen
85 her found a performance benefit (i.e., lower mean squared error) for sequentially passing a propensit
86 structural similarity and < 0.003 normalized mean squared error from sampling only 12.5% of the k-spa
87  mean absolute error from 18 to 13% and root-mean-squared error from 22 to 17% on a newly generated v
88 erns is computed using four scoring metrics: mean squared error, Haar wavelet distance, mutual inform
89 ons give considerably different results from mean squared error highlighting the necessity to define
90 dependent data showed improvement, with root mean squared error improvement of 6% compared with the l
91 t-of-sample data, significantly reducing the mean squared error in a cross-validation study.
92 strategy's ability to minimize both bias and mean squared error in comparison with other common strat
93 ected variables were limited to 200 and root mean squared error in cross validation (RMSE(CV)) was us
94 duced bias due to unmeasured confounders and mean squared error in most scenarios assessed.
95  Overall, the updated model reduces the root mean squared error in the predicted concentration by 66%
96 ess function based on the combination of the mean-squared error in the calibration set data, the mean
97 uared error in the calibration set data, the mean-squared error in the monitoring set data, and the n
98 ic training data and percentage of increased mean-squared error (%IncMSE) as measurement of feature i
99 ody systems modeling achieve a task-averaged mean squared error less than 0.01, more than 15% better
100  categories with cross-validation-based root-mean-squared errors less than an order of magnitude.
101 ompared with the conventional linear minimum mean squared error (LMMSE) detector.
102                             Trained with the Mean Squared Error loss and Adam optimizer, the model de
103 ned to map scans to embeddings by minimizing mean squared error loss.
endent verification datasets with low error (mean squared error < 0.1 and mean Spearman's rank 0.7).
 validated CC model (average normalized root mean squared error </=11.3%) was then used to evaluate t
C ensemble averages have low normalized root mean squared errors (< 0.3 for 81% of comparisons) and h
107  this scheme achieves the asymptotic minimax mean-squared error M(rho;beta) = lim(M,N --> infinity)in
108 nce was measured using error metrics such as mean squared error, mean absolute error, mean absolute p
109                              First, the root mean squared error measures the difference between the t
110   Weight variability was modeled with a root mean squared error method to reflect fluctuations in wei
111 fferent normal limits (maximum entropy [ME], mean-squared-error minimization [MSEM], and global minim
112       We also introduce the Bayesian minimum mean squared error (MMSE) conditional error estimator an
113 uation metrics, including concordance index, mean squared error, modified squared correlation coeffic
114       Predictive accuracies were assessed by mean squared error (MSE) and correlation coefficients (r
115 f each model was assessed by calculating the mean squared error (MSE) and mean absolute error (MAE).
116  show that, using Mean Absolute Error (MAE), Mean Squared Error (MSE) and Pearson Correlation Coeffic
117                                  The average Mean Squared Error (MSE) and Signal Reconstruction Error
118 all mice through image metrics, namely, root mean squared error (MSE) and structural similarity index
119 A within 1-4 weeks of the gold standard, the mean squared error (MSE) and the R-squared.
120 y of the generated images was evaluated with mean squared error (MSE) and the structural similarity i
121 sical models with up to 14.8% improvement in mean squared error (MSE) and up to 26.1% improvement in
122 the proposed method achieves up to 30% lower mean squared error (MSE) compared to LS estimation and 1
123 rating processes, MEGB achieved 35-76% lower mean squared error (MSE) compared to state-of-the-art al
124 are the estimators' performance by using the mean squared error (MSE) criterion.
125 esearch demonstrates high accuracy, with the Mean Squared Error (MSE) for regression models (Multiple
126 xperiments show that we can achieve a better mean squared error (MSE) for small rates (bits per quali
127 ve rate (TPR), True Negative rate (TNR), and Mean squared error (MSE) observed as 95.8%, 95.0%, 95.2%
128 with a mean absolute error (MAE) of 0.11864, mean squared error (MSE) of 0.18796, and root mean squar
129                                          Its Mean Squared Error (MSE) of 1.06e(6) and Signal-to-Noise
130 n predicting water uptake, achieving testing Mean Squared Error (MSE) scores of 24.55 ( R2 score 0.63
131 g the expected diameter and estimates of the mean squared error (MSE) were generated.
132 s in a multistate framework, comparing mean, mean squared error (MSE), and bias of regression coeffic
133 ssed through correlation coefficient (R(2)), mean squared error (MSE), confusion matrices, receiver o
134 udy its sampling properties, mainly bias and mean squared error (MSE), for an approximation of order
135         Model performance was assessed using mean squared error (MSE), Pearson correlation coefficien
136 ing the coefficient of determination (R(2)), mean squared error (MSE), root mean squared error (RMSE)
137 d metrics such as mean absolute error (MAE), mean squared error (MSE), root mean squared logarithmic
138 a variety of quantitative metrics, including mean squared error (MSE), structural similarity index (S
139 valuated using Mean Absolute Error (MAE) and Mean Squared Error (MSE).
140  the coefficient of determination (R(2)) and mean squared error (MSE).
141 s approaches in terms of bias, variance, and mean squared error (MSE).
142 at such an approach does not always minimize mean squared error (MSE).
143 function demonstrates a 46% improvement over mean-squared error (MSE) loss across 125 patients.
144 ing these curves for CO(2) and CH(4) reduced mean-squared errors (MSE) by approximately 18%.
145 lute percentage error (MAPE), and normalized mean squared error (NRMSE) are [Formula: see text] mg/dL
146  is only negligible when the normalized root-mean-squared error (NRMSE) in the calibration falls belo
147  similarity index (SSIM) and normalized root-mean-squared error (nRMSE), alongside specialized struct
148  in the two-item regression (normalized root mean squared error [nRMSE] 0.164; normalized mean absolu
149 ile the discrimination network resulted in a mean squared error of < 0.004 cm and < 0.02 cm, for the
150 ller bias and smaller variance, often with a mean squared error of 0, in estimating the number of pri
151 ainst sleep diaries, the algorithm yielded a mean squared error of 0.04-0.06 and a total sleep time (
152 is 0.04 units of pH, 0.09 of accuracy, and a mean squared error of 0.167.
153 es C, which achieved a model fit with a root mean squared error of 0.20 log units.
154            Key performance metrics include a Mean Squared Error of 0.2269, R-Squared (R(2)) of 0.9624
155 eries predictions with a low normalized root mean squared error of 0.28.
156 ation ratios from the literature with a root mean squared error of 0.32-0.53 log units, without any a
157  on the determined descriptors (e.g., a root mean squared error of 0.39 for log Kow).
158 r of 5.7%, and iodine concentration had root mean squared error of 0.5 mg/cm(3), similar to previousl
159 veals a Pearson correlation of 0.86 and root mean squared error of 1.18 kcal/mol, an unprecedented le
160 ance on test data achieves a normalized root mean squared error of 8%, a figure of merit in space of
161  PS model can affect the bias, variance, and mean squared error of an estimated exposure effect.
162 the CFA, all 4 models were supported by Root Mean Squared Error of Approximation (RMSEA); Incremental
163 comparative fit index range: .929-.968; root mean squared error of approximation range: .032-.052).
164 reater than 0.97 were obtained with the root mean squared error of calibration (<2.01) and prediction
165  obtained with corresponding values for root mean squared error of calibration and prediction (<0.57
166 ne-cystatin C equation as quantified by root mean squared error of difference scores (differences bet
167 ts and the wild type strain, we decrease the mean squared error of predicted central metabolic fluxes
168 ry predictive abilities, with values of root mean squared error of prediction (RMSEP) of 0.50, 0.788,
169 edicted CAC scores with the smallest average mean squared error of regression and with a classificati
170 hat integrating TimesGAN reduces the average mean squared error of the ConvLSTM prediction by approxi
171 thods was evaluated with respect to bias and mean squared error of the estimated effects of a binary
172  available information, leading to a smaller mean squared error of the genotype risk ratio estimators
173    Our strategy was designed to optimize the mean squared error of the parameter estimate.
174  a smaller sample size; the cost in terms of mean squared error of treatment effects for our preferre
175 oups (Positive Mode-R2 = 97%; Q2 = 93%; root mean squared error of validation (RMSEV) = 13%; Negative
176 is aeruginosa strains was possible with Root Mean Squared Error of Validation (RMSEV) lower or very c
177 brated pp-LFER and COSMO-RS models with root mean squared errors of 0.047 and 0.050, respectively.
178 he evaluated model had fair skill, with root-mean-squared error of 6%.
179 dex (CFI; >0.95 indicates good fit) and root-mean-squared error of approximation (RMSEA; <0.06 indica
180                                     The root-mean-squared error of prediction (RMSEP) is 92.17 mg/dL
181                                     The root-mean-squared error of prediction (RMSEP) of 1.8 mM (33.1
182 ents, and ground reaction forces showed root-mean-squared errors of less than 1.6 standard deviations
183 r the expectation method in terms of smaller mean-squared errors of the estimated QTL effects.
184 ation coefficient of -0.60 +/- 0.04 and root-mean-squared-error of 14.6 +/- 1.5 mmHg (p < 0.05).
185 ent of -0.80 +/- 0.02 (mean +/- SE) and root-mean-squared-error of 7.6 +/- 0.5 mmHg after a best-case
186                                         Root-mean-squared-error of predictions were 0.6-1.0 and 0.6-2
187 d a mean error in its weekly estimates (root mean squared error) of 60.3 overdose deaths for 2018 (co
188 ionally, we explored the accuracy (regarding mean squared error) of CTIS in estimating successive dif
189 ncertainty (interquartile range and the root-mean-squared error) of load estimates a modeling exercis
190 ll type-specific proportion variability, and mean squared error on sensitivity of cell type-specific
191 ods on experimental data, as measured by the mean squared error or Pearson's correlation coefficient.
192                  Popular choices such as the mean squared error or the mean absolute error are based
193  of these IDP-based models, in terms of root-mean-squared-error or area-under-the-curve for continuou
194 deep learning, with a 10x improvement in the mean-squared error over common Raman filtering methods.
195 ion, with performance assessed based on root mean squared error, Pearson correlation, and agreement w
196 tion differed for each gene as a function of mean squared error, per group sample sizes, and variabil
197 lly relevant variables, while the prediction mean squared error (PMSE) is comparable or even reduced.
198 Mean Absolute Error, Mean Absolute Relative, Mean Squared Error, R-squared, and Root Mean Square Erro
199 ents were above 0.92 and the normalized root mean squared error ranged between 4.2 and 12.7% with res
200                                     The root mean squared errors ranged between 10 and 20% of the max
201 greement with reference solutions, with root mean squared errors ranging from 0.02 to 0.10 Pa.
202 redicted partition coefficients exhibit root-mean-squared errors ranging from 0.19 to 0.48 log unit,
203 invertebrate phototransduction using minimum mean squared error reconstruction techniques based on Ba
204                         This method showed a mean squared error reduction of over 21.89% oversimple k
205 ith the average mean absolute error and root mean squared error, respectively, 4.5 and 5.3 times high
206 e analysis using the mean absolute error and mean squared error revealed that the TCN-SENet++ model o
207 rgy relationships (LSERs) regarding the root-mean-squared error (rms) and the maximum negative and po
208                        According to the root-mean-squared error (rms) and the maximum negative and po
209          The R(2) was found to be 0.92, Root Mean Squared Error (RMSE) 0.936, and Mean Absolute Error
210 termination coefficient of R(2) = 0.83 [root mean squared error (RMSE) = 3.01] for DE24, R(2) = 0.56
211 odel (SFNN) consistently achieves lower root mean squared error (RMSE) across other forecasting windo
212 mpared between outcome groups using the root mean squared error (RMSE) and maximum trajectory deviati
213 predicting rainfall, as measured by the root mean squared error (RMSE) and mean absolute error (MAE),
214 coefficient of determination (R(2)) and root mean squared error (RMSE) as statistical measures of goo
215 IRECT and Powell's method to reduce the root mean squared error (RMSE) between simulations and the re
216 round the WES TMB as reflected in lower root mean squared error (RMSE) for 26/29 (90%) of the clinica
217 rediction accuracies (92-98%) and lower root mean squared error (RMSE) for rust and senescence scores
218 ML models can decrease yield prediction root mean squared error (RMSE) from 7 to 20%.
219 gs of imputation methods based on three root-mean squared error (RMSE) measures and the rankings base
220 e to reconstruct dose up to 4.5 Gy with root mean squared error (RMSE) of +/- 0.35 Gy on a test datas
221 it achieves a testing R(2) of 0.916 and root mean squared error (RMSE) of 0.014 for MPS of the whole
222 ean squared error (MSE) of 0.18796, and root mean squared error (RMSE) of 0.18632.
223  of determination (R(2)) of 0.789 and a root mean squared error (RMSE) of 0.526 were obtained for the
224 lute error (MAE) of 8.29 and 12.89, and root mean squared error (RMSE) of 11.63 and 16.48 for (1)I an
225           The results showed an average root mean squared error (RMSE) of 19.7 mg/dL, with 76.6% accu
226  of 75.87 and 79.10% for UFP and BC and root mean squared error (RMSE) of 21,800 part/cm(3) and 1300
227  an average R(2) of 0.95 and an average root mean squared error (RMSE) of 24 psi for the two datasets
228 ive fidelity with an R2 of 0.9011 and a Root Mean Squared Error (RMSE) of 5.09 Megapascals (MPa) on t
229 olute error (MAE) of 4.03 degrees C and root mean squared error (RMSE) of 5.66 degrees C, thus offeri
230                  It achieved the lowest root mean squared error (RMSE) of 523.50, indicating superior
231 on log and location data features had a root mean squared error (rmse) of 6.80.
232 e ANSI/HPS N13.30-2011 standard for the root mean squared error (RMSE) of relative bias (Br) and rela
233                        All the R(2) and root mean squared error (RMSE) results of the test datasets w
234 ormed the other models, achieving lower root mean squared error (RMSE) values of 0.121, 0.155, and 0.
235 hange Sensitivity (SCS) and decrease in Root Mean Squared Error (RMSE) was observed when predicting t
236                                         Root mean squared error (RMSE) was used to evaluate goodness
237 n absolute percentage error (MAPE), and root mean squared error (RMSE) were calculated by comparing p
238 MEASURES: Mean absolute error (MAE) and root mean squared error (RMSE) were used to assess refractive
239       The mean absolute error (MAE) and root mean squared error (RMSE) were used to select the best f
240  We assess method performance using the root mean squared error (RMSE) when modelling the outcome var
241 ation (R(2)), mean squared error (MSE), root mean squared error (RMSE), and mean absolute error (MAE)
242 tion coefficient ([Formula: see text]), root mean squared error (RMSE), and mean absolute error (MAE)
243 R2), Average Absolute Difference (AAD), Root Mean Squared Error (RMSE), and Mean Absolute Percentage
244 rror-based performance metrics, such as root mean squared error (RMSE), cannot directly assess a key
245 Union (IoU), Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), Huber loss, and Intraclass Co
246 (PH) of 30 mins, the average values for root mean squared error (RMSE), mean absolute error (MAE), me
247 ated by the criteria percent bias (PB), root mean squared error (RMSE), normalized RMSE (NRMSE), and
248 rror (MAPE), mean absolute error (MAE), root mean squared error (RMSE), Pearson correlation coefficie
249 odel performance was assessed using the root mean squared error (RMSE), R2, and slope and intercept o
250 ity of the estimates as measured by the Root Mean Squared Error (RMSE), which is a function of both b
251  I and Castrop V1) were optimized using root mean squared error (RMSE).
252 physical activity relation by using the root mean squared error (RMSE).
253 han a single measurement [difference in root mean squared error (RMSE): 1.3 ng/mL; P< 0.001].
254 i.e. SD, coefficient of variation (CV), root mean squared error (RMSE)] and 4 patterns (stable, loss,
255 of determination (R(2)) = 0.87], lowest root mean squared error (RMSE; 0.87 kg), and fewest outliers
256 pecificity (classification models), and root mean squared error (RMSE; regression models).
257 a posteriori inverse squared model, the root mean squared errors (RMSE) for systolic and diastolic bl
258 of determination (R(2)) of 0.998-0.999, root mean squared errors (RMSE) of 0.16-0.80, and RPD values
259 he set of parameters that minimized the root-mean-squared error (RMSE) between the model and the expe
260      The system has an average distance root-mean-squared error (RMSE) error of 3 cm compared to the
261  from ideal synthetic data (growth rate root-mean-squared error (RMSE) of 0.0192; epidemic direction
262 imated with an r(2) value of 0.98 and a root-mean-squared error (RMSE) of 8.19 mumol m(-2) s(-1) .
263 ating volume throughout compaction with root-mean-squared error (RMSE) values less than 6.8 and 4.3%,
264                                     The root-mean-squared error (RMSE) was used to compare the MR-bas
265      Model performance was evaluated by root-mean-squared error (RMSE).
266 and our new model, using the statistics root-mean-squared-error (RMSE) and median residual and on an
267 ed partition ratios of cVMS accurately (root-mean-squared-error (RMSE) for logKOC 0.76 and for logKDO
268 ated (r > 0.9) between modalities, with root-mean-squared-errors (RMSE) of 0.04 m/s, 2.3 steps/min, 0
269 ed with predictive errors quantified by root-mean-squared-errors (RMSE) of 6.49 and 5.83, respectivel
270  correlation at initial visit r = 0.83, root mean squared error [RMSE] = 1.26, follow-up visit r = 0.
271 ients of determination R(2)>/= 0.82 and root mean squared errors (RMSEs) </= 0.47 log(10)CFUg(-1).
272 -Vienna RNP-DeltaDeltaG method achieves root-mean-squared errors (RMSEs) of 1.3 kcal/mol on high-thro
273  = 0.77 and a relatively low Normalized Root Mean Squared Error score of 0.52 between actual and pred
274 tcome can be detrimental to an estimate in a mean squared error sense.
275 lihood estimator yields smaller variance and mean squared error than other estimators; and the struct
276 ods were associated with less bias and lower mean squared error than those obtained with complete-cas
277 to be optimistic but with lower absolute and mean squared errors than both methods of cross-validatio
278 ng yielded lower average standard errors and mean squared errors than did matching on age and sex.
279                          The best CNN gave a mean squared error that was on average 68% and 20% bette
280 hows constrained least squares achieves root-mean-squared errors that are comparable with those of th
281 estimates, with comparable residual and root mean squared error to SER estimates.
282                The developed ANN has minimum mean squared error values of 0.83, 40.92, 29.01, and 8.9
283                                     The root mean squared error values of prediction were 0.172 and 0
284                              For comparison, mean squared error values were calculated between the me
285                                              Mean-squared-error values were calculated between the me
286 ts was 0.869 (range 0.761 to 0.919) and root mean squared error was 0.239 (0.195 to 0.351) Gy.
287 s used (<10% in 35 of 45 scenarios), and the mean squared error was usually minimized with recall per
288                                      Minimum mean squared errors were 1.63 x 10-6, 3.11 x 10-5, and 4
289                      The R(2) and RMSE (Root Mean Squared Error) were statistically similar across mo
290 tion, our method has less than 5% normalized mean squared error, whereas compressed sensing yields 20
291 ent algorithms are typically optimized using mean squared error, which is widely criticized for its p
292  ground truth peak signal-to-noise ratio and mean squared error with coefficients of determination gr
293 tance, and vector angle restraints, the root-mean squared error with respect to existing X-ray struct
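The snippets above report mean squared error (MSE), root mean squared error (RMSE), and normalized RMSE under many conventions. For reference, a minimal sketch of these three metrics; the normalization by the range of observed values is one common convention among several, and the function names and sample data here are illustrative:

```python
import math

def mean_squared_error(y_true, y_pred):
    """MSE: the average of squared differences between observed and predicted values."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def root_mean_squared_error(y_true, y_pred):
    """RMSE: the square root of MSE, expressed in the same units as the data."""
    return math.sqrt(mean_squared_error(y_true, y_pred))

def normalized_rmse(y_true, y_pred):
    """NRMSE: RMSE divided by the range of observed values (one common convention;
    dividing by the mean or standard deviation is also seen in the literature)."""
    return root_mean_squared_error(y_true, y_pred) / (max(y_true) - min(y_true))

# Illustrative data, not taken from any of the cited abstracts.
observed = [1.0, 2.0, 3.0, 4.0]
predicted = [1.1, 1.9, 3.2, 3.8]
print(round(mean_squared_error(observed, predicted), 4))       # 0.025
print(round(root_mean_squared_error(observed, predicted), 4))  # 0.1581
```

Because RMSE keeps the units of the underlying quantity, the corpus lines quote it in mm Hg, kg, mg/dL, and so on, whereas normalized RMSE is typically reported as a percentage.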
