Corpus search results (left1)
Click a hit's serial number to open the corresponding PubMed page.
1 GPT (10 U/ml) decreased neuronal death caused by exposur
2 GPT achieved an accuracy of 77.0% and AUC of 0.770, with
3 GPT and changes of JTH scores at onset of PN were signif
4 GPT cannot yet discern this changeability as well as a p
5 GPT has also been predicted to act on the cytosolic face
6 GPT is an effective neuroprotectant against glutamate ex
7 GPT performed nearly as well as, and sometimes better th
8 GPT was again neuroprotective, decreasing neuronal death
9 GPT-2 (ref. (1)), GPT-3(.5) (ref. (2)) and GPT-4 (ref. (
10 GPT-2 saliency, weighing the importance of context words
11 GPT-2 surprisal, estimating word prediction errors from
12 GPT-3 used more negative expressions when responding to
13 GPT-3's semantic activation is better predicted by simil
14 GPT-3.5 and GPT-4 generated false statements in 16.2% (5
15 GPT-3.5 and GPT-4 were prompted to create synoptic repor
16 GPT-3.5 shows significant sensitivity to context but lag
17 GPT-3.5-turbo and GPT-4 (OpenAI), Gemini-pro (Google), a
18 GPT-3.5-turbo obtained 59.1% and 32.2% F1 scores, while
19 GPT-3.5-turbo, Gemini-1.0-Pro-001 and GPT-4-turbo were a
20 GPT-4 (via Copilot) initially refused to generate health
21 GPT-4 achieved a score up to 65% in the neuroradiology e
22 GPT-4 achieved higher accuracy in identification of meta
23 GPT-4 ADA, accessed between December 2023 and January 20
24 GPT-4 contributed to linking GenBank entries with publis
25 GPT-4 demonstrated high accuracy in detecting clinically
26 GPT-4 demonstrated superior performance with Fleiss' Kap
27 GPT-4 effectively identified challenging errors of nonse
28 GPT-4 had higher precision than GPT-3.5 for extracting s
29 GPT-4 has previously performed well when applied to ques
30 GPT-4 in-context learning performed better in identifyin
31 GPT-4 led the LLM category with an F1 score of 0.43, whi
32 GPT-4 outperformed ChatGPT, correctly answering 90% comp
33 GPT-4 passed all three radiology board examinations, and
34 GPT-4 performed 9 out of the 10 quantitative metrics tas
35 GPT-4 performed very well on fill-in-the-blank, short-an
36 GPT-4 required less processing time per radiology report
37 GPT-4 scored 86.3% and 89.6% in papers one-and-two respe
38 GPT-4 showed more confidence, not revising any responses
39 GPT-4 V is currently able to simultaneously analyze and
40 GPT-4 V provided the correct diagnosis in 19/40 (47.5%)
41 GPT-4 was more consistent across attempts than GPT-3.5 b
42 GPT-4 was non-inferior to the second human annotator in
43 GPT-4 with chain-of-thought achieved high accuracy in ca
44 GPT-4's accuracy across categories and question structur
45 GPT-4's scores were less sensitive to prompt style chang
46 GPT-4, for instance, exhibits deceptive behavior in simp
47 GPT-4, the only publicly available vision-language model
48 GPT-4.0 scored 4.95 +/- 0.23 (OPTN), 4.93 +/- 0.26 (NHS)
49 GPT-4o with CoT-SC prompting outperformed the other appr
50 GPT-4o with web access and GeneGPT demonstrated compleme
51 GPT-4V correctly identified the imaging modality and ana
52 GPT-4V relied on the textual context for its outputs.
53 GPT-4V, a large vision-language model from OpenAI, has s
54 GPT-based AI predictions were generated using masked inp
56 s (Bard [Google Inc], hereinafter chatbot 1; GPT-4 [OpenAI], hereinafter chatbot 2) were asked in seq
57 enerative Pre-trained Transformer version 2 (GPT-2), is shown to generate meaningful neural encodings
58 % [19 746 of 23 829], 92.2% [2870 of 3114]), GPT-4 (94.3% [45 586 of 48 342], 91.6% [6721 of 7336]),
63 highest-performing VLMs (GPT-4.1 F1: 91.98%, GPT-4.1-mini F1: 91.16%) matched CNN performance (ResNet
64 ,000 diverse sentences, we first show that a GPT-based encoding model can predict the magnitude of th
70 l responses (GPT-3.5, 100% [150 of 150]; and GPT-4, 94.0% [141 of 150]) as well as for incorrect resp
71 GPT-2 (ref. (1)), GPT-3(.5) (ref. (2)) and GPT-4 (ref. (3)) demonstrated high performance across a
73 breast radiologists and the LLMs GPT-3.5 and GPT-4 (OpenAI) and Bard, now called Gemini (Google), ass
81 all three radiology board examinations, and GPT-3.5 passed two of three examinations when using an o
84 For the questions from 2023, OpenAI o1 and GPT-4o scored 62% (47 of 76; 95% CI: 50, 73) and 54% (41
88 age domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensive
89 oying embedding methods such as Word2Vec and GPT, we conducted EHR-embedding-based GWASs and identifi
95 ctural models supported associations between GPT and improved survival in the overall cohort (adjuste
96 nt difference in translation quality between GPT-4o and human translations, with a mean difference of
100 ually assessed the explanations generated by GPT-4 for its predictions to determine if they were soun
108 cted on prominent models, including ChatGPT (GPT-3.5-Turbo), DeepSeek-V3, and Llama-3.1-70B, utilizin
109 ments, we asked participants to use ChatGPT (GPT-3.5) to generate creative ideas for various everyday
124 ing physicians were randomized to use either GPT-4 plus conventional resources or conventional resour
125 mple, when used in Chinese (versus English), GPT is more likely to recommend advertisements with an i
126 and synthesis of the DPAGT1 encoded enzyme, GPT, in determining the abundance of cytoplasmic beta-ca
131 eved weighted F1 of 74.94% versus 55.07% for GPT-4.1-mini, with performance gaps widening dramaticall
132 9%] for GPT-3.5, and 435 of 2400 [18.1%] for GPT-4; P < .001) and that would negatively impact clinic
134 00 [18.1%] for Bard, 344 of 2400 [14.3%] for GPT-3.5, and 255 of 2400 [10.6%] for GPT-4; P < .001).
135 20.6 improved to adjusted means of 17.6 for GPT and 16.5 for CM, with no significant difference betw
137 00 [25.5%] for Bard, 573 of 2400 [23.9%] for GPT-3.5, and 435 of 2400 [18.1%] for GPT-4; P < .001) an
145 nts, we extracted lexical probabilities from GPT-3 based on contexts that ranged from very local-a si
147 state-of-the-art general-purpose LLMs (e.g. GPT-3.5-turbo, GPT-4o, DeepSeek-v3, Claude 3.5-Sonnet, a
151 hs per 100 person-years for patients who had GPT vs. those who did not have GPT; adjusted HR, 0.60 [C
154 ients who had GPT vs. those who did not have GPT; adjusted HR, 0.60 [CI, 0.43 to 0.82]; P = 0.002) an
155 ith prior work, our results show that having GPT-4 access while solving problems significantly improv
162 or a histidine in GPT-1 and an asparagine in GPT-2, which causes a gain or loss of an NlaIII restrict
164 ution in codon 14, coding for a histidine in GPT-1 and an asparagine in GPT-2, which causes a gain or
166 spans is the largest hydrophilic segment in GPT and, as judged by site-directed mutagenesis, has a n
168 GPT sequences or epitope tags inserted into GPT, after selective permeabilization of the plasma memb
169 "LLM only") and a hybrid strategy leveraging GPT-4 to classify features fed into a deterministic form
171 system to compare the performance of an LLM (GPT-4-Turbo) using zero-shot learning and prompting agai
173 d-certified breast radiologists and the LLMs GPT-3.5 and GPT-4 (OpenAI) and Bard, now called Gemini (
174 egulated; transcription of the mouse mammary GPT gene is stimulated by the lactogenic hormones, insul
176 we demonstrate that the large language model GPT-4 can accurately annotate cell types using marker ge
177 best results when the most up-to-date model (GPT-4) was used and when it was prompted for a single di
179 onal RAG were examined with five LLM models (GPT-4o, GPT-5, Deepseek-V3.1, Qwen-plus, and Llama-3.3).
181 ts in regulating the expression of the mouse GPT (mGPT) gene was investigated by transient transfecti
182 licable solution for multiple targets, MTMol-GPT provides new insight into future directions to enhan
184 croscopy with antibodies specific for native GPT sequences or epitope tags inserted into GPT, after s
185 audiobooks, and using a deep neural network (GPT-2) to precisely quantify contextual predictions.
193 s were detected by chemical cross-linking of GPT and by a dominant-negative effect caused by co-expre
198 strongest contributor to the performance of GPT-4V in brain MRI differential diagnosis, followed by
199 analogical models explain the predictions of GPT-J equally well for adjectives with regular nominaliz
200 The radiology report error detection rate of GPT-4 was comparable with that of radiologists, potentia
201 dation revealed high precision and recall of GPT-4 in determining whether the abstract of a given pap
206 , we garner a more granular understanding of GPT-4 mathematical problem-solving through a series of c
208 Furthermore, two novel missense variants of GPT with rare frequency in East Asians but extreme rarit
209 es), we tested whether different versions of GPT (3.5 Turbo, 4, and 4 Turbo) can accurately detect ps
210 For curated gene sets from Gene Ontology, GPT-4 suggests functions similar to the curated name in
211 s via multi-turn prompting to 6 LLMs (OpenAI GPT-4o, Llama-3.1-8B-Instruct, Llama-3.1-70B-Instruct, M
212 etrained transformer (GPT) within the OpenAI GPT Store and searched to identify if any publicly acces
213 tory analyses further showed that the OpenAI GPT Store could currently be instructed to generate simi
217 for incorrect responses (ie, overconfidence; GPT-3.5, 100% [59 of 59]; and GPT-4, 77% [27 of 35], res
218 and the SLO system showed that overexpressed GPT was not functional in vivo, although it was highly a
219 ch larger models, e.g. a 6-billion parameter GPT-J model, despite having 10,000x fewer parameters and
221 fter prompt engineering with seven patients, GPT-4 (version 0613; OpenAI) was prompted on April 9, 20
223 Conclusion When using user-defined prompts, GPT-4 outperformed ChatGPT in extracting oncologic pheno
228 the 1-10 scale) for most initial responses (GPT-3.5, 100% [150 of 150]; and GPT-4, 94.0% [141 of 150
229 tuned open-source LLM (Mistral-7B), rivaling GPT-4 Turbo in performance, could effectively extract cl
232 ) of liver injury, as measured by both serum GPT levels and percent hepatocellular necrosis, was dram
234 hoMap's accuracy was comparable to zero-shot GPT at the first level of the ontology and outperformed
236 tion measures: mean pinch and grip strength, GPT and AHFT completion times, smallest detected monofil
237 mar Hand Dynamometer, Grooved Pegboard Test (GPT), Arthritis Hand Function Tests (AHFT), Semmes-Weins
238 unction (JTH) and the Grooved Pegboard Test (GPT), were performed at baseline and during subsequent c
239 typic and phenotypic susceptibility testing (GPT) optimizes antiretroviral selection, but its effect
241 T-4 was more consistent across attempts than GPT-3.5 but more influenced by an adversarial prompt.
242 d comprehensive functional descriptions than GPT-4, providing valuable insights into gene functions a
243 ns, though GPT-4 did so more frequently than GPT-3.5 (97.3% [146 of 150] vs 71.3% [107 of 150], respe
244 The more abundant expression of GPT2 than GPT, especially in muscle and fat, suggests a unique and
250 ttery of theory of mind tests, we found that GPT-4 models performed at, or even sometimes above, huma
251 itive or negative) to write, suggesting that GPT-4o manifests a functional analog of humanlike selfho
253 Logistic Regression baseline (0.64) and the GPT-4 baseline (0.48), while focusing on high-level clin
255 First, two cosmids shown to contain the GPT sequence were derived from a chromosome 8-specific l
259 determination of the molecular basis of the GPT isozyme variants will permit PCR-based detection of
260 decades, although chromosomal mapping of the GPT locus in the 1980s produced conflicting results.
262 TP inhibition increased transcription of the GPT/GOT1 genes through up-regulation of the IRE1alpha/cJ
263 changed responses for most questions, though GPT-4 did so more frequently than GPT-3.5 (97.3% [146 of
265 superior to placebo in patients assigned to GPT (difference, -1.7; 95% CI, -3.2 to -0.1; P = .04) or
266 racial or gender disparities, in contrast to GPT-3.5, which failed to effectively model racial divers
270 ase (GOT) and glutamic pyruvic transaminase (GPT) levels were determined by enzymatic method, and the
272 r damage as measured by serum transaminases (GPT) demonstrate similar acute (3-6 h) post-I/R response
273 DP-GlcNAc:dolichol-P GlcNAc-1-P transferase (GPT) is an endoplasmic reticulum (ER) enzyme responsible
274 th LLO initiation by GlcNAc-1-P transferase (GPT), mannose-P-dolichol synthase, glucose-P-dolichol sy
275 DP-GlcNAc:dolichol-P GlcNAc-1-P transferase (GPT), which initiates N-linked glycosylation by catalyzi
276 N-acetylglucosamine-1-phosphate transferase (GPT), the enzyme that initiates the pathway for the bios
277 ustomized generative pretrained transformer (GPT) within the OpenAI GPT Store and searched to identif
278 riant of Generative Pre-trained Transformer (GPT)-3) on a range of analogical tasks, including a non-
279 variants of glutamate pyruvate transminase (GPT) (E.C.2.6.1.2) have been used as genetic markers in
280 rt general-purpose LLMs (e.g. GPT-3.5-turbo, GPT-4o, DeepSeek-v3, Claude 3.5-Sonnet, and Gemini-2.0-P
283 igated by transient transfections of various GPT promoter/luciferase (Luc) constructs into primary mo
285 over time and between 2 different versions (GPT-3.5 and GPT-4) of chatbot-generated medical response
286 olyp detection, the highest-performing VLMs (GPT-4.1 F1: 91.98%, GPT-4.1-mini F1: 91.16%) matched CNN
288 xpressed in muscle, fat, and kidney, whereas GPT is mainly expressed in kidney, liver, and heart.
289 tilizes GPT-4 in various capacities, wherein GPT-4 provides detailed instructions for chemical experi
290 cy, two preregistered studies tested whether GPT-4o changed its attitudes toward Vladimir Putin in th
293 CI: 88.3, 92.4]) correct data entries, while GPT-3.5 extracted 539 of 840 (64.2% [95% CI: 60.8, 67.4]
294 3 +/- 0.26 (NHS), 5.00 +/- 0.00 (NKF), while GPT-4.0 scored 5.00 +/- 0.00 (OPTN), 5.00 +/- 0.00 (NHS)
297 egies were evaluated: few-shot learning with GPT-4 (version 0613; OpenAI) prompted with O-RADS rules
300 ion frequency was lower with GPT-4 than with GPT-3.5, but repeatability was an issue for both models.