
Corpus Search Results (left1)

Clicking a serial number displays the corresponding PubMed page.
1                                              GPU parallelism improves speed and makes iteration a tra
2 e gas conditions with CO2 permeance of 1,020 GPU and CO2/N2 selectivity as high as 680, demonstrating
3  3.5x10(-7) mol m(-2) s(-1) Pa(-1) (ca. 1000 GPU).
4   We demonstrate the new software using 1024 GPUs in parallel on one of the world's fastest supercomp
5 from 4 to 10 (resp. With 2 CPU threads and 2 GPUs, H-BLAST can be faster than 16-threaded NCBI-BLASTX
6 howing a 23.5 CO(2)/N(2) selectivity at 2350 GPU of CO(2).
7 electivity of 663 with H(2) permeance of 240 GPU was achieved for promising green energy resource-H(2
8 aged membranes had a CO(2) permeance of 2504 GPU and ideal selectivity values of 37.2 and 23.8 for CO
9 lectivity of 58.8 and CO(2) permeance of 310 GPU at 35 degrees C.
10 ncreased propylene permeance (reaching 186.5 GPU) while maintaining an appealing propylene/propane se
11 -score-GPU is 68 times faster on an ATI 5870 GPU, on average, than the original CPU single-threaded i
12 smission rate for (LDH/FAS)(25)-PDMS of 7748 GPU together with CO(2) selectivity factors (SF) for SF(
13 atasets respectively on a single GPU node (8 GPUs) of the Cori Supercomputer.
14 speed of 7 frames per second using A100-80GB GPU.
15 n of Scrooge, KSW2, Edlib, Darwin-GPU, and a GPU implementation of GenASM, respectively.
16                                     Arioc, a GPU-accelerated short-read aligner, can compute WGS (who
17 a user-friendly interface between cupSODA, a GPU-powered kinetic simulator, and PySB, a Python-based
18 For numerical implementation, we developed a GPU-accelerated version of differential evolution to nav
19 onvenient, the second module may be run in a GPU-free environment.
20 n achieve performance that exceeds that of a GPU-based coprocessor.
21 ate how the cost of divergent branching on a GPU can be amortized across algorithms like HCP in order
22 racy within a single day of computation on a GPU cluster.
23 es (approximately 13 slices) per second on a GPU with 12 GB RAM compared with 6-8 minutes per slice f
24                                 Running on a GPU, the training time is sped up 21x and 30x (over CPU)
25 te a human/mouse alignment in under 6 h on a GPU-containing node without pre-partitioning, maintainin
26 nd oxRNA molecular dynamics simulations on a GPU-enabled high performance computing server.
27                   In this work, we present a GPU-accelerated cosine similarity implementation for Tan
28                            Here we propose a GPU-accelerated biclustering algorithm, based on searchi
29 y in a matter of minutes without requiring a GPU.
30 E will be effectively inaccessible without a GPU, and some assume CUDA.
31 hout requiring any technical knowledge about GPUs, C++ or GeNN.
32   Finally, we developed a freely accessible, GPU and CPU-powered dashboard that combines interactive
33 allelization strategy and leverages advanced GPU features.
34 'Anton' machine and large ensembles of AMBER GPU simulations.
35  an automated workflow tool to perform AMBER GPU MD simulations.
36  and user-friendly environment and the AMBER GPU code for a robust and high-performance simulation en
37 dings by using a fully force-field-based and GPU-accelerated approach, which allows the simulations t
38 s and 69 times shorter for the CPU-based and GPU-based CNN pipelines (216.6 seconds +/- 40.5 and 204.
39         By leveraging parallel computing and GPU acceleration, our package achieves significant impro
40 umba for just-in-time compilation on CPU and GPU achieves hundred-fold speed improvements.
41 act of using hardware with different CPU and GPU features on the power consumption and latency for th
42 ory efficiency, easy access to multi-CPU and GPU hardware, and to distributed and cloud-based paralle
43  In our experiments with a four core CPU and GPU, SWIFTLINK achieves a 8.5x speed-up over the single-
44 00x on average), respectively, with FPGA and GPU acceleration.
45 00x on average), respectively, with FPGA and GPU acceleration.
46 raphy, deep learning, physical modeling, and GPU-accelerated robust optimization, with automatic anal
47 nd limitations of recent parallelization and GPU-based efforts are highlighted.
48 s free, open-source, platformindependent and GPU-vendor independent.
49 ke advantage of both multiple processors and GPU-acceleration to perform the numerically-demanding co
50  It benefits from using multiple threads and GPU calculations.
51 for better efficiency among various CPUs and GPUs combinations.
52 rted hardware types (including both CPUs and GPUs) and perform well on all of them.
53 y the observation that, compared to CPUs and GPUs, cutting-edge FPGAs demonstrate-in certain cases-su
54 heterogeneous computer that couples CPUs and GPUs, to accelerate BLASTX and BLASTP-basic tools of NCB
55  coprocessor architectures such as FPGAs and GPUs.
56 average 1.81 x speedup over the state-of-art GPU acceleration with only 7.2% of the energy, and a spe
57                By employing state-of-the-art GPU simulations and nano-particle resolved colloidal exp
58 esigned to run on any commercially available GPU and on any operating system.
59 ware executes concurrently on each available GPU device.
60 with the number of chains, and the available GPU memory limits the size of protein complexes which ca
61                        We present a C-based, GPU-accelerated implementation of nested sampling that i
62 nstrate dramatic speed differentials between GPU assisted performance and CPU executions as the compu
63 differences in hardware architecture between GPUs and CPUs complicate the porting of existing code.
64 terface to handle the communications between GPUs.
65 uential NCBI-BLAST, the speedups achieved by GPU-BLAST range mostly between 3 and 4.
66 rty seconds when using an NVIDIA Tesla C2050 GPU.
67                                    Combining GPU and HCP, resulted in a speedup of at most 1,860-fold
68 sing a standard VR headset with a compatible GPU.
69 k, called GiCOPS, for efficient and complete GPU-acceleration of the modern database peptide search a
70        A detailed energy breakdown confirmed GPU usage as the dominant source of power consumption, u
71                 SCENA is equipped with CPU + GPU (Central Processing Units + Graphics Processing Unit
72  federate both computational resources (CPU, GPU, FPGA, etc.) and datastores to support popular bioin
73 paper, we design and implement a new-age CPU-GPU HPC framework, called GiCOPS, for efficient and comp
74       We implement co-parallelization of CPU-GPU processing, which leads to a significant reduction o
75                             Finally, the CPU-GPU methods and optimizations proposed in our work for c
76 mplementation utilizing multicore hybrid CPU/GPU computing resources, which can process terabytes of
77 ination and sample jitter in addition to CPU/GPU accelerated reconstruction for large datasets.
78  SneakySnake efficient to implement on CPUs, GPUs and FPGAs.
79  SneakySnake efficient to implement on CPUs, GPUs, and FPGAs.
80 of biochemical network models on NVIDIA CUDA GPUs.
81  CPU version of Scrooge, KSW2, Edlib, Darwin-GPU, and a GPU implementation of GenASM, respectively.
82 ics processing unit (GPU), we have developed GPU-BLAST, an accelerated version of the popular NCBI-BL
83 ource, hybrid multithreaded and distributed, GPU accelerated simulator of universal quantum circuits.
84 based architectures, a need for an efficient GPU accelerated strategy has emerged.
85 up in computation time on a cluster of eight GPUs compared to running the method on a single GPU.
86 st alternative method by 2.4-fold with eight GPUs.
87 mework (CUDAMPF) implemented on CUDA-enabled GPUs presented here, offers a finer-grained parallelism
88 the Smith-Waterman algorithm on CUDA-enabled GPUs.
89 anks to a PyTorch-based backend that enables GPU acceleration, pyaging is capable of rapid inference,
90            Cytokit offers (i) an end-to-end, GPU-accelerated image processing pipeline; (ii) efficien
91                                     Existing GPU based strategies have either been optimized for a sp
92  see text] improvement over several existing GPU-based database search algorithms for sufficiently la
93 is either comparable or better than existing GPU strategies.
94 se of sub-optimal algorithms in the existing GPU-accelerated methods resulting in significantly ineff
95 sive, thereby demanding the use of expensive GPUs.
96 sion of the code able to efficiently exploit GPU-accelerated systems for both the genetic relationshi
97  structures and memory allocation to exploit GPU parallel processing capabilities.
98  no method is currently available to exploit GPU technology for the reverse engineering of mechanisti
99                                     Extended GPU-based computational studies of a ternary complex con
100 g model, called Stacked DenseNet, and a fast GPU-based peak-detection algorithm for estimating keypoi
101                             Thanks to faster GPU-based algorithms, atomistic and coarse-grained simul
102 d in microarray gene expression analysis for GPU-equipped computers.
103  parallelized CCS algorithm using CUDA C for GPU computing.
104 ADEPT, a new sequence alignment strategy for GPU architectures that is domain independent, supporting
105                The implementation of MDR for GPUs (MDRGPU) includes core features of the widely used
106 n short-read alignments per second in a four-GPU system; it can align the reads from a human WGS sequ
107 tion approaches going all the way up to full GPU-accelerated ray tracing, they do not provide size-sp
108 o systems so that users can make use of GeNN GPU acceleration when developing their models in Brian,
109          In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to
110  training reports and a single desktop-grade GPU.
111  neural nets are implemented on power-hungry GPUs, beyond the reach of low-power edge-devices.
112      Our theoretical model is implemented in GPU (Graphics Processing Unit) accelerated software whic
113                                 HIP includes GPU-accelerated support for most common image processing
114  Massively Parallel Architectures, Including GPU Nodes (CAMPAIGN), a central resource for data cluste
115 was not sufficient to overcome the increased GPU cost.
116  out using the Amber software on inexpensive GPUs, providing approximately 1 mus/day per GPU, and >2.
117 curacy while achieving outstanding intranode GPU scalability.
118 very well with increasing CPU cores, and its GPU version, implemented in OpenCL, can have up to 158x
119  CPU strategies as well as the fastest known GPU methods for each domain.
120 fically, we characterize the effect of known GPU optimization techniques like use of shared memory.
121 showed localization of fluorescently labeled GPUs (~7% of total injected dose per gram tissue) in the
122 rabricks used here, these workflows are less GPU optimized and require benchmarking on the platform o
123 ing >10 times faster and requiring much less GPU memory than them.
124                          Further, leveraging GPU instances has significantly improved the run-time of
125                                       Mendel-GPU, our OpenCL software, runs on Linux platforms and is
126 sentation learning workflow with minibatched GPU-accelerated clustering algorithms allows us to scale
127                                      MMseqs2-GPU is available under an open-source license at .
128  a stochastic simulation algorithm on modern GPUs, we simulated the dynamics of over one million neur
129 roviding close-to-peak performance on modern GPUs.
130                             When two or more GPUs are available, Arioc's speed increases proportionat
131                                Blocked multi-GPU parallel inference has been integrated into the main
132 (BLASTP, SWIPE, SWIMM2.0), can exploit multi-GPU nodes with linear scaling, and features an impressiv
133 per, we provide an optimized intranode multi-GPU implementation that can efficiently solve large-scal
134  source packages to run in a single or multi-GPU environment.
135        We have adapted Arioc to recent multi-GPU hardware architectures that support high-bandwidth p
136  provide 40 or more CPU threads and multiple GPU (general-purpose graphics processing unit) devices.
137 s driver enables it to scale across multiple GPUs and allows easy integration into software pipelines
138 ions compute tasks optimally across multiple GPUs.
139  peer-to-peer memory accesses among multiple GPUs.
140 al works have considered leveraging multiple GPUs to accelerate the ptychographic reconstruction.
141  Monte Carlo simulations running on multiple GPUs; we find a rounded transition and localization phys
142  can run in parallel in a single or multiple GPUs and each kernel simulates and scores the error of a
143 (GSH)-responsive polyurethane nanoparticles (GPUs) using a GSH-cleavable disulfide bond containing po
144                                          New GPU code further increases the speed of RMSD and TM-scor
145 the better performance provided by an Nvidia GPU.
146 ithub.com/aresio/cupSODA (requires an Nvidia GPU; developer.nvidia.com/cuda-gpus).
147  PDB file for pocket detection to our NVIDIA GPU-equipped servers through a WebGL graphical interface
148 d subsequently can be run on suitable NVIDIA GPU accelerators.
149 s, and OpenCL kernels support AMD and Nvidia GPUs.
150 tforms and is portable across AMD and nVidia GPUs.
151 stem consisting of x86 processors and NVIDIA GPUs.
152 nd tools, written in 'C for CUDA' for Nvidia GPUs.
153 to harness the computational power of NVIDIA GPUs (Graphics Processing Units) to greatly reduce proce
154 e execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which
155 ementations negate the apparent advantage of GPU implementations.
156 comparisons inflate the actual advantages of GPU technology.
157 t of structures, WE organizes an ensemble of GPU-accelerated MD trajectory segments via intermittent
158                              No knowledge of GPU computing is required from the user.
159                                   The use of GPU does not deteriorate the accuracy of our results.
160 raries will lead to a widespread adoption of GPUs in computational genomics.
161     Embracing the multicore functionality of GPUs represents a major avenue of local accelerated comp
162 riant callers scaled well with the number of GPUs across platforms, whereas somatic variant callers e
163 hibited more variation between the number of GPUs and computing platforms.
164 rs exhibited more variation in the number of GPUs with the fastest runtimes, suggesting that, at leas
165          CUDASW++4.0 changes the standing of GPUs in protein sequence database search with Smith-Wate
166  as well as the state-of-art acceleration on GPU and FPGA platforms.
167 borative review, and that can be deployed on GPU servers or cloud platforms.
168  offers improved computational efficiency on GPU-enabled systems over popular RNN architectures.
169  products to be most efficiently executed on GPU.
170 an open-source software package that runs on GPU and TPU accelerators through the JAX machine learnin
171  dynamics simulations in implicit solvent on GPU processors were used to generate ensembles of trajec
172 ng capabilities to facilitate computation on GPUs with limited memory.
173 and 7x faster than PointNovo and DeepNovo on GPUs, respectively, thus being more suitable for the ana
174 xtension algorithm for better performance on GPUs, and offers a performance tuning mechanism for bett
175 orithms optimized for parallel processing on GPUs and CPUs.
176                       Memory requirements on GPUs have been reduced to fit widely available hardware,
177 en an impetus to accelerate MC simulation on GPUs whereas thread divergence remains a major issue for
178                        Here we benchmark one GPU-accelerated software suite called NVIDIA Parabricks
179      Here, we develop KegAlign, an optimized GPU-enabled pairwise aligner.
180       With these enhancements, the optimized GPU implementation was on average an estimated 12,144 ti
181 MorphOT, which allows the use of both CPU or GPU resources.
182  GPUs, providing approximately 1 mus/day per GPU, and >2.5 ms data presented here.
183 nts, the author developed a high-performance GPU-based framework for the material point method, prior
184 tients undergoing esophagectomy with planned GPU reconstruction.
185                          On cloud platforms, GPU-accelerated germline callers resulted in cost saving
186 remains a challenge, as only highly powerful GPUs, such as the A40 and A100, can (partially) compete
187 itude performance improvements over previous GPU-based approaches (CUDASW++3.0, ADEPT, SW#DB).
188 simulations, run using graphical processors (GPUs), were used to investigate the effect of conformati
189 sing software that uses graphics processors (GPUs) to address the most computationally intensive step
190 veloped a tool, called PtyGer (Ptychographic GPU(multiple)-based reconstruction), implementing our hy
191 ual Xeon 5520 and 3X speedup versus BEAGLE's GPU implementation on a Tesla T10 GPU for very large dat
192                                     TM-score-GPU is 68 times faster on an ATI 5870 GPU, on average, t
193                                     TM-score-GPU was applied to six sets of models from Nutritious Ri
194 ore for Graphical Processing Units (TM-score-GPU), using a new and novel hybrid Kabsch/quaternion met
195 nhanced snapshot compressive imaging (SeSCI)(GPU) showed the best performance of the CS algorithms st
196                                        Since GPUs are not always available on the same computing envi
197  search process can be performed on a single GPU in a massively parallel fashion.
198  large-scale ptychography datasets, a single GPU is typically insufficient for analysis and reconstru
199  DNA based datasets respectively on a single GPU node (8 GPUs) of the Cori Supercomputer.
200 ent simulations with performance on a single GPU that rivals the speed achieved by hundreds of CPUs.
201 ands of simultaneous simulations on a single GPU, HDRL-FP enables rapid convergence to the optimal re
202 a 10 mm path in under 15 minutes on a single GPU, which is 5-8 orders of magnitude faster than compet
203 ses of extensive genomic regions on a single GPU.
204 s compared to running the method on a single GPU.
205 xploiting hierarchical parallelism on single GPU and takes full advantage of limited resources based
206 ightweight architecture to train on standard GPUs.
207  benchmark dataset of 71 protein structures, GPU-I-TASSER achieves on average a 10x speedup with comp
208 reparation using any computer and a suitable GPU.
209                             RABiTPy supports GPU and CPU processing as well as cloud computing.
210 s BEAGLE's GPU implementation on a Tesla T10 GPU for very large data sizes.
211 thermore, H-BLAST is 1.5-4 times faster than GPU-BLAST.
212           In this study, we demonstrate that GPU-accelerated high-speed FLIM can effectively separate
213                  Our study demonstrates that GPUs can be used to greatly accelerate genomic workflows
214                                          The GPU implementation substantially decreases computational
215                                          The GPU version of Scrooge achieves a 4.0x, 80.4x, 6.8x, 12.
216                                          The GPU-based parallel implementation of the Gillespie stoch
217 uses the NVIDIA CUDA interface to access the GPU.
218  parallel Graphics Processing Unit, i.e. the GPU computing technology.
219 ve, single-threaded CPU implementations, the GPU yields a large improvement in the execution time.
220 t enables high-performance simulation in the GPU-resident NAMD3 molecular dynamics engine.
221  a model using the thread parallelism of the GPU.
222  by 25-fold solely by parallelization on the GPU.
223 g of biomolecules into a grid/lattice on the GPU.
224 urn, better facilitates its mapping onto the GPU.
225 obs-MPI using eight nodes is faster than the GPU-accelerated QuickProbs running on a Tesla K20.
226 -TASSER by extending its capabilities to the GPU architecture.
227                                    Using the GPU to store and render the image sequence data enables
228 es interactively on the system CPU while the GPU handles 3-D visualization tasks.
229 e more expensive than CPU runs because their GPU acceleration was not sufficient to overcome the incr
230                                         This GPU implementation allows for large-scale analysis of ep
231 fying Arioc's implementation to exploit this GPU memory architecture we obtained a further 1.8x-2.9x
232            We also improve its speed through GPU parallelism.
233 petitive processes and facilitates access to GPU clusters, whose high-performance processing power ma
234 mitation by using graphical processing unit (GPU) base-calling.
235 recent advances in graphics processing unit (GPU) computing have added a promising new option.
236                    Graphics processing unit (GPU) computing is an affordable alternative for accelera
237 oit the power of a graphics-processing unit (GPU) from a browser without any third-party plugin.
238                  A Graphics Processing Unit (GPU) implementation and downsampling strategies enable t
239 aster than current graphics processing unit (GPU) implementations.
240 s pipeline with a graphical processing unit (GPU) in 1-12 h (depending on frame size).
241 data analysis in a graphics processing unit (GPU) infrastructure.
242 ver and distributed Graphic Processing Unit (GPU) parallel computations.
243 cessor cores and a graphics processing unit (GPU) simultaneously.
244 ork, relative to a graphics processing unit (GPU) that uses a comparable 12-nanometer technology proc
245 IA Volta 100 (V100) Graphic Processing Unit (GPU) to 46.71 seconds for our largest dataset of 11.3 mi
246  that can use the graphical processing unit (GPU) to reduce runtime.
247 rallel manner on a graphics processing unit (GPU) using CUDA platform.
248  a general-purpose graphics processing unit (GPU), we have developed GPU-BLAST, an accelerated versio
249  a modern consumer graphics processing unit (GPU), we report a 92x increase in the speed of an exhaus
250 rn traditional and graphics processing unit (GPU)-accelerated machines, from workstations to supercom
251           Here the graphics processing unit (GPU)-accelerated MMseqs2 delivers 6x faster single-prote
252               With graphics processing unit (GPU)-based CUDA C/C++ implementation, this new attenuati
253 ifts, and extended graphics processing unit (GPU)-based quantum mechanics/molecular mechanics (QM/MM)
254 its mapping onto a graphics processing unit (GPU).
255 rallel computing on graphic processing unit (GPU).
256  on an NVIDIA V100 graphics processing unit (GPU).
257  implemented in a graphical processing unit (GPU).
258 eature that allows graphics-processing-unit (GPU)-based processing for interactive data analysis, suc
259                  Graphical Processing Units (GPU) computing is now ubiquitous in the current-generati
260 ms accelerated on Graphics Processing Units (GPU), scaled magnetic versions of p-bit IMs could lead t
261 take advantage of graphics processing units (GPUs) and result in many fold higher speedups over previ
262                   Graphics processing units (GPUs) are capable of efficiently running highly parallel
263 on ATI and nVidia graphics processing units (GPUs) for maximal speed.
264 GPU that exploits graphics processing units (GPUs) for parallel stochastic simulations of biological/
265  infrastructures, Graphics Processing Units (GPUs) offer opportunities to accelerate genomic workflow
266                   Graphics processing units (GPUs) provide an inexpensive and computationally powerfu
267          Although graphics processing units (GPUs) provide high performance for such large-scale ptyc
268 port for multiple graphics processing units (GPUs) support, which makes it possible to run efficientl
269          By using graphics processing units (GPUs) the time needed to build these models is decreased
270 , we use multiple graphics processing units (GPUs) to compute the real frequency spectrum of the boso
271 s in parallel, on graphics processing units (GPUs) using NVidia Compute Unified Device Architecture (
272 uding inexpensive graphics processing units (GPUs), it has remained infeasible to simulate the foldin
273 tionary models on graphics processing units (GPUs), making use of the large number of processing core
274 r computing using graphics processing units (GPUs), such as PyTorch and TensorFlow.
275                   Graphics processing units (GPUs), the hardware responsible for rendering computer g
276  intensive use of graphics processing units (GPUs), we have been able to run the latter model for mor
277 performance grade graphics processing units (GPUs).
278 so one or several graphics processing units (GPUs).
279  with 4 cores of graphical processing units (GPUs; GK210, Nvidia Tesla K80, USA).
280 ty after esophagectomy with gastric pull-up (GPU).
281                                  MEDUSA uses GPU (graphics processing unit) accelerated hardware and
282                   Our proposed strategy uses GPU specific optimizations that do not rely on the natur
283 to parallelize evolutionary algorithms using GPU computing for the inference of mechanistic GRNs that
284 ctive data mining of spectral features using GPU-based manipulation of the spectral distribution.
285  past aortic stenosis (AS) are studied using GPU-accelerated patient-specific computational fluid dyn
286                                        Using GPUs and multiple cores, ASTRAL-MP is able to analyze da
287                                        Using GPUs to run MDR on a genome-wide dataset allows for stat
288 able to perform chromosome simulations using GPUs via OpenMM Python API, called Open-MiChroM.
289 on for batch effect removal, and can utilize GPU when available.
290                                 By utilizing GPUs, we achieve around a 30-fold speedup in protein and
291   Capturing metadata (e.g. package versions, GPU model) currently requires repetitious code and is di
292 ython, Matlab and Java as well as CPU versus GPU implementations.
293  temporal and computational (e.g. CPU versus GPU) scales, and then to visualize and interpret integra
294 d ensemble (WE) strategy in combination with GPU computing.
295 esults show that the GP model, combined with GPU-enabled PPLs, effectively retrieves true emission sc
296          It requires a windows computer with GPU that supports OpenCL 1.2 and OpenGL 4.0.
297 get genes leveraging parallel computing with GPU devices, (iii) all-in-one analytics with novel featu
298       Using DOCTORS, parallel computing with GPU enables the cone-beam CT dose estimation nearly in r
299 f our former DASC algorithm implemented with GPU.
300  and is accompanied by a Python version with GPU acceleration.

 