
Corpus search results (left1)

Click a serial number to display the corresponding PubMed page.
1                                              GPT (10 U/ml) decreased neuronal death caused by exposur
2                                              GPT achieved an accuracy of 77.0% and AUC of 0.770, with
3                                              GPT and changes of JTH scores at onset of PN were signif
4                                              GPT cannot yet discern this changeability as well as a p
5                                              GPT has also been predicted to act on the cytosolic face
6                                              GPT is an effective neuroprotectant against glutamate ex
7                                              GPT performed nearly as well as, and sometimes better th
8                                              GPT was again neuroprotective, decreasing neuronal death
9                                              GPT-2 (ref. (1)), GPT-3(.5) (ref. (2)) and GPT-4 (ref. (
10                                              GPT-2 saliency, weighing the importance of context words
11                                              GPT-2 surprisal, estimating word prediction errors from
12                                              GPT-3 used more negative expressions when responding to
13                                              GPT-3's semantic activation is better predicted by simil
14                                              GPT-3.5 and GPT-4 generated false statements in 16.2% (5
15                                              GPT-3.5 and GPT-4 were prompted to create synoptic repor
16                                              GPT-3.5 shows significant sensitivity to context but lag
17                                              GPT-3.5-turbo and GPT-4 (OpenAI), Gemini-pro (Google), a
18                                              GPT-3.5-turbo obtained 59.1% and 32.2% F1 scores, while
19                                              GPT-3.5-turbo, Gemini-1.0-Pro-001 and GPT-4-turbo were a
20                                              GPT-4 (via Copilot) initially refused to generate health
21                                              GPT-4 achieved a score up to 65% in the neuroradiology e
22                                              GPT-4 achieved higher accuracy in identification of meta
23                                              GPT-4 ADA, accessed between December 2023 and January 20
24                                              GPT-4 contributed to linking GenBank entries with publis
25                                              GPT-4 demonstrated high accuracy in detecting clinically
26                                              GPT-4 demonstrated superior performance with Fleiss' Kap
27                                              GPT-4 effectively identified challenging errors of nonse
28                                              GPT-4 had higher precision than GPT-3.5 for extracting s
29                                              GPT-4 has previously performed well when applied to ques
30                                              GPT-4 in-context learning performed better in identifyin
31                                              GPT-4 led the LLM category with an F1 score of 0.43, whi
32                                              GPT-4 outperformed ChatGPT, correctly answering 90% comp
33                                              GPT-4 passed all three radiology board examinations, and
34                                              GPT-4 performed 9 out of the 10 quantitative metrics tas
35                                              GPT-4 performed very well on fill-in-the-blank, short-an
36                                              GPT-4 required less processing time per radiology report
37                                              GPT-4 scored 86.3% and 89.6% in papers one-and-two respe
38                                              GPT-4 showed more confidence, not revising any responses
39                                              GPT-4 V is currently able to simultaneously analyze and
40                                              GPT-4 V provided the correct diagnosis in 19/40 (47.5%)
41                                              GPT-4 was more consistent across attempts than GPT-3.5 b
42                                              GPT-4 was non-inferior to the second human annotator in
43                                              GPT-4 with chain-of-thought achieved high accuracy in ca
44                                              GPT-4's accuracy across categories and question structur
45                                              GPT-4's scores were less sensitive to prompt style chang
46                                              GPT-4, for instance, exhibits deceptive behavior in simp
47                                              GPT-4, the only publicly available vision-language model
48                                              GPT-4.0 scored 4.95 +/- 0.23 (OPTN), 4.93 +/- 0.26 (NHS)
49                                              GPT-4o with CoT-SC prompting outperformed the other appr
50                                              GPT-4o with web access and GeneGPT demonstrated compleme
51                                              GPT-4V correctly identified the imaging modality and ana
52                                              GPT-4V relied on the textual context for its outputs.
53                                              GPT-4V, a large vision-language model from OpenAI, has s
54                                              GPT-based AI predictions were generated using masked inp
55                            GPT-2 (ref. (1)), GPT-3(.5) (ref. (2)) and GPT-4 (ref. (3)) demonstrated h
56 s (Bard [Google Inc], hereinafter chatbot 1; GPT-4 [OpenAI], hereinafter chatbot 2) were asked in seq
57 enerative Pre-trained Transformer version 2 (GPT-2), is shown to generate meaningful neural encodings
58 % [19 746 of 23 829], 92.2% [2870 of 3114]), GPT-4 (94.3% [45 586 of 48 342], 91.6% [6721 of 7336]),
59  were examined with five LLM models (GPT-4o, GPT-5, Deepseek-V3.1, Qwen-plus, and Llama-3.3).
60                           Background GPT-4V (GPT-4 with vision, ChatGPT; OpenAI) has shown impressive
61                       Compared with GPT-3.5, GPT-4 achieved equal or higher F1 scores for all 14 extr
62            Conclusion Compared with GPT-3.5, GPT-4 more frequently extracted correct procedural data
63 highest-performing VLMs (GPT-4.1 F1: 91.98%, GPT-4.1-mini F1: 91.16%) matched CNN performance (ResNet
64 ,000 diverse sentences, we first show that a GPT-based encoding model can predict the magnitude of th
65                          Conclusion Although GPT-4 was superior to open-source models in zero-shot re
66                          Conclusion Although GPT-4V has shown promise in understanding natural images
67                            First, we analyse GPT's responses to a large set of measures in both Chine
68                        In subgroup analyses, GPT was associated with improved survival for the 2107 h
69        GPT-3.5-turbo, Gemini-1.0-Pro-001 and GPT-4-turbo were also unable to make binary discriminati
70 l responses (GPT-3.5, 100% [150 of 150]; and GPT-4, 94.0% [141 of 150]) as well as for incorrect resp
71   GPT-2 (ref. (1)), GPT-3(.5) (ref. (2)) and GPT-4 (ref. (3)) demonstrated high performance across a
72 rts were successfully processed by GPT-4 and GPT-3.5.
73 breast radiologists and the LLMs GPT-3.5 and GPT-4 (OpenAI) and Bard, now called Gemini (Google), ass
74                                  GPT-3.5 and GPT-4 generated false statements in 16.2% (55 of 339) an
75                                  GPT-3.5 and GPT-4 were prompted to create synoptic reports from orig
76               Conclusion Default GPT-3.5 and GPT-4 were reliably accurate across three attempts, but
77 nd between 2 different versions (GPT-3.5 and GPT-4) of chatbot-generated medical responses.
78  proprietary GPT models, such as GPT-3.5 and GPT-4.
79 verconfidence; GPT-3.5, 100% [59 of 59]; and GPT-4, 77% [27 of 35], respectively; P = .89).
80                          Both Mistral-7B and GPT-4 Turbo showed substantial overall agreement with ra
81  all three radiology board examinations, and GPT-3.5 passed two of three examinations when using an o
82                                     GPT2 and GPT differ in mRNA expression in that GPT2 is highly exp
83                        NFTs, such as JTH and GPT, may have utility for predicting PN, but further tes
84   For the questions from 2023, OpenAI o1 and GPT-4o scored 62% (47 of 76; 95% CI: 50, 73) and 54% (41
85 bability for both the human participants and GPT-4 were little affected by context.
86                       BioGPT fine-tuning and GPT-4 in-context learning exhibited suboptimal results.
87                            GPT-3.5-turbo and GPT-4 (OpenAI), Gemini-pro (Google), and Llama-2-70B-cha
88 age domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensive
89 oying embedding methods such as Word2Vec and GPT, we conducted EHR-embedding-based GWASs and identifi
90 elines including much larger models, such as GPT-3-sized cpt-text-XL.
91 rformance of proprietary GPT models, such as GPT-3.5 and GPT-4.
92                      Earlier models, such as GPT-3.5, demonstrated high accuracy for top-three differ
93 ioMedLM (0.08) and BioGPT (0.04), as well as GPT-3 (0.16) and ChatGPT (0.12).
94                                   Background GPT-4V (GPT-4 with vision, ChatGPT; OpenAI) has shown im
95 ctural models supported associations between GPT and improved survival in the overall cohort (adjuste
96 nt difference in translation quality between GPT-4o and human translations, with a mean difference of
97                                     The BIPM GPT facility uses state-of-the-art flow measurement, che
98                             Conversely, both GPT-4 and LLaMa-2 demonstrate a more balanced sensitivit
99                             Answers for both GPT-4 V and the non-ophthalmologists were evaluated by t
100 ually assessed the explanations generated by GPT-4 for its predictions to determine if they were soun
101                 First, messages generated by GPT-4 were broadly persuasive, in some cases increasing
102 ously released glutamate, neuroprotection by GPT is not dependent on added pyruvate.
103  large load of glutamate, neuroprotection by GPT was enhanced by adding pyruvate to the medium.
104 sed search engine (Perplexity AI, powered by GPT-4V).
105   All reports were successfully processed by GPT-4 and GPT-3.5.
106                                         Chat GPT-4o-mini was tested on the 50-article CONSORT-Text Cl
107                      Four of the 5 chatbots (GPT-4o, Gemini 1.5 Pro, Llama 3.2-90B Vision, and Grok B
108 cted on prominent models, including ChatGPT (GPT-3.5-Turbo), DeepSeek-V3, and Llama-3.1-70B, utilizin
109 ments, we asked participants to use ChatGPT (GPT-3.5) to generate creative ideas for various everyday
110                                     ChatGPT, GPT-4, and Bard are highly advanced natural language pro
111 t the protein level to the previously cloned GPT.
112                                   The cloned GPT gene and associated polymorphisms will be useful for
113 xist, but only one ALT gene has been cloned, GPT.
114                                   Conclusion GPT-4 created near-perfect PDAC synoptic reports from or
115                                   Conclusion GPT-4V, in its earliest version, recognized medical imag
116                                 In contrast, GPT was superior to CM for all visits in the Clinical Gl
117                                  Conversely, GPT-4 performed poorly on questions with figures contain
118                                       CRISPR-GPT assists users in selecting CRISPR systems, experimen
119                                       CRISPR-GPT enables fully AI-guided gene-editing experiment desi
120                                       CRISPR-GPT leverages the reasoning capabilities of LLMs for com
121                                     A custom GPT-4o configuration integrated with National Center for
122                    A companion OpenAI custom GPT, Genomics Fetcher-Analyzer, connects ChatGPT with Na
123                           Conclusion Default GPT-3.5 and GPT-4 were reliably accurate across three at
124 ing physicians were randomized to use either GPT-4 plus conventional resources or conventional resour
125 mple, when used in Chinese (versus English), GPT is more likely to recommend advertisements with an i
126  and synthesis of the DPAGT1 encoded enzyme, GPT, in determining the abundance of cytoplasmic beta-ca
127                                 For example, GPT-4 emitted between 5 and 19 times more [Formula: see
128 enzymatically inactive (but properly folded) GPT mutants.
129                                          For GPT-4, chain-of-thought prompting was most accurate, out
130                                          For GPT-4o and Claude 3.5, CoT improved the accuracy, and Co
131 eved weighted F1 of 74.94% versus 55.07% for GPT-4.1-mini, with performance gaps widening dramaticall
132 9%] for GPT-3.5, and 435 of 2400 [18.1%] for GPT-4; P < .001) and that would negatively impact clinic
133 rovement in grades for GPT Base and 127% for GPT Tutor).
134 00 [18.1%] for Bard, 344 of 2400 [14.3%] for GPT-3.5, and 255 of 2400 [10.6%] for GPT-4; P < .001).
135  20.6 improved to adjusted means of 17.6 for GPT and 16.5 for CM, with no significant difference betw
136 3%] for GPT-3.5, and 255 of 2400 [10.6%] for GPT-4; P < .001).
137 00 [25.5%] for Bard, 573 of 2400 [23.9%] for GPT-3.5, and 435 of 2400 [18.1%] for GPT-4; P < .001) an
138                                     Even for GPT-4o with Web Search, approximately 30% of individual
139 eloped an R software package GPTCelltype for GPT-4's automated cell type annotation.
140 s performance (48% improvement in grades for GPT Base and 127% for GPT Tutor).
141 5.8% +/- 5.0) and the highest error rate for GPT-4 in patients 60-80 years of age (8.3%).
142                                 Answers from GPT-4V were obtained between November 26 and December 10
143                              Assistance from GPT-4V did not help human raters.
144 blic figures' name embeddings extracted from GPT-3.
145 nts, we extracted lexical probabilities from GPT-3 based on contexts that ranged from very local-a si
146                                 Furthermore, GPT-J's behavior is sensitive to the individual word fre
147  state-of-the-art general-purpose LLMs (e.g. GPT-3.5-turbo, GPT-4o, DeepSeek-v3, Claude 3.5-Sonnet, a
148                                  Plasma GOT, GPT, and hepatic MMP-9 activity increased 2.5-fold, and
149 wed for a median of 3.3 years; 915 (34%) had GPT.
150                             Patients who had GPT had lower mortality rates than those who did not (2.
151 hs per 100 person-years for patients who had GPT vs. those who did not have GPT; adjusted HR, 0.60 [C
152                      The sequence of hamster GPT predicted multiple transmembrane segments.
153                                Thus, hamster GPT must cross the ER membrane at least three times, con
154 ients who had GPT vs. those who did not have GPT; adjusted HR, 0.60 [CI, 0.43 to 0.82]; P = 0.002) an
155 ith prior work, our results show that having GPT-4 access while solving problems significantly improv
156                         Here we explored how GPT-4 might be able to perform rudimentary structural bi
157            We have definitively mapped human GPT to the terminus of 8q using several methods.
158 reviously reported protein sequence of human GPT-1.
159 n, we mapped the cosmid containing the human GPT gene to chromosome band 8q24.3.
                                   The human GPT genomic sequence spans 2.7 kb and consists of 11 exo
161       Finally, PCR primers specific to human GPT amplify sequences contained within a "half-YAC" from
162 or a histidine in GPT-1 and an asparagine in GPT-2, which causes a gain or loss of an NlaIII restrict
163                                   Changes in GPT and JTH scores over the first two cycles were often
164 ution in codon 14, coding for a histidine in GPT-1 and an asparagine in GPT-2, which causes a gain or
165                         GenAI implemented in GPT-4o was unable to provide a thematic analysis that is
166  spans is the largest hydrophilic segment in GPT and, as judged by site-directed mutagenesis, has a n
167                                For instance, GPT-4's accuracy at decoding a simple cipher is 51% when
168  GPT sequences or epitope tags inserted into GPT, after selective permeabilization of the plasma memb
169 "LLM only") and a hybrid strategy leveraging GPT-4 to classify features fed into a deterministic form
170 mance, outperforming much larger models like GPT-4V and Med-PaLM M (84B).
171 system to compare the performance of an LLM (GPT-4-Turbo) using zero-shot learning and prompting agai
172 than random chance (p = 0.002) with one LLM (GPT-4o) obtaining an accuracy of 37.34%.
173 d-certified breast radiologists and the LLMs GPT-3.5 and GPT-4 (OpenAI) and Bard, now called Gemini (
174 egulated; transcription of the mouse mammary GPT gene is stimulated by the lactogenic hormones, insul
175     Mixing experiments indicated that mature GPT was competent for oligomerization.
176 we demonstrate that the large language model GPT-4 can accurately annotate cell types using marker ge
177 best results when the most up-to-date model (GPT-4) was used and when it was prompted for a single di
178 n the prompt and studied a subsequent model, GPT-4.
179 onal RAG were examined with five LLM models (GPT-4o, GPT-5, Deepseek-V3.1, Qwen-plus, and Llama-3.3).
180                      In standard Cox models, GPT was associated with improved survival (adjusted HR,
181 ts in regulating the expression of the mouse GPT (mGPT) gene was investigated by transient transfecti
182 licable solution for multiple targets, MTMol-GPT provides new insight into future directions to enhan
183            Extensive results show that MTMol-GPT generates various valid, novel, and effective multi-
184 croscopy with antibodies specific for native GPT sequences or epitope tags inserted into GPT, after s
185 audiobooks, and using a deep neural network (GPT-2) to precisely quantify contextual predictions.
186                      Both AI models, notably GPT-4, showed capacity for empathy, indicating AI's pote
187           This study examined the ability of GPT to protect neurons of the hippocampal slice preparat
188 her than promote, nonspecific aggregation of GPT.
189 and characterized cDNA and genomic clones of GPT.
190 on, GPT2 seems to be the predominant form of GPT at the mRNA level in these tissues.
191 ll as activating a nonfunctional fraction of GPT.
192        In this study, we cloned a homolog of GPT and named it GPT2, and the corresponding protein ALT
193 s were detected by chemical cross-linking of GPT and by a dominant-negative effect caused by co-expre
194  residents were provided with the outputs of GPT-4V.
195                            Overexpression of GPT was unable to reverse the effects of translation arr
196         By contrast, the poor performance of GPT originated from a hyperconservative approach towards
197 e reports closely matched the performance of GPT-4.
198  strongest contributor to the performance of GPT-4V in brain MRI differential diagnosis, followed by
199 analogical models explain the predictions of GPT-J equally well for adjectives with regular nominaliz
200 The radiology report error detection rate of GPT-4 was comparable with that of radiologists, potentia
201 dation revealed high precision and recall of GPT-4 in determining whether the abstract of a given pap
202       Thus, transcriptional up-regulation of GPT/GOT1 genes is a major mechanism, in response to ER s
203                                  Sessions of GPT and CM were held weekly for the first 12 weeks and m
204 th and 10th predicted transmembrane spans of GPT were found to be cytosolic.
205 leavage sites; (iii) in vitro translation of GPT; and (iv) site-directed mutagenesis.
206 , we garner a more granular understanding of GPT-4 mathematical problem-solving through a series of c
207                                       Use of GPT was independently associated with improved survival
208  Furthermore, two novel missense variants of GPT with rare frequency in East Asians but extreme rarit
209 es), we tested whether different versions of GPT (3.5 Turbo, 4, and 4 Turbo) can accurately detect ps
210    For curated gene sets from Gene Ontology, GPT-4 suggests functions similar to the curated name in
211 s via multi-turn prompting to 6 LLMs (OpenAI GPT-4o, Llama-3.1-8B-Instruct, Llama-3.1-70B-Instruct, M
212 etrained transformer (GPT) within the OpenAI GPT Store and searched to identify if any publicly acces
213 tory analyses further showed that the OpenAI GPT Store could currently be instructed to generate simi
214 d residents did not significantly outperform GPT-4V.
215 tegorizing resectability, GPT-4 outperformed GPT-3.5 for each prompting strategy.
216 first level of the ontology and outperformed GPT at second and third levels.
217 for incorrect responses (ie, overconfidence; GPT-3.5, 100% [59 of 59]; and GPT-4, 77% [27 of 35], res
218 and the SLO system showed that overexpressed GPT was not functional in vivo, although it was highly a
219 ch larger models, e.g. a 6-billion parameter GPT-J model, despite having 10,000x fewer parameters and
220 tion-tuned models: Path-llama3.1-8B and Path-GPT-4o-mini-FT.
221 fter prompt engineering with seven patients, GPT-4 (version 0613; OpenAI) was prompted on April 9, 20
222                          The two polymorphic GPT isozymes are the results of a nucleotide substitutio
223  Conclusion When using user-defined prompts, GPT-4 outperformed ChatGPT in extracting oncologic pheno
224 -RAG enhanced the performance of proprietary GPT models, such as GPT-3.5 and GPT-4.
225                    For the external reports, GPT-4 extracted 760 of 840 (90.5% [95% CI: 88.3, 92.4])
226 at these motifs functions in vivo to repress GPT gene expression.
227              For categorizing resectability, GPT-4 outperformed GPT-3.5 for each prompting strategy.
228  the 1-10 scale) for most initial responses (GPT-3.5, 100% [150 of 150]; and GPT-4, 94.0% [141 of 150
229 tuned open-source LLM (Mistral-7B), rivaling GPT-4 Turbo in performance, could effectively extract cl
230              Five foundational LLMs-OpenAI's GPT-4o, Google's Gemini 1.5 Pro, Anthropic's Claude 3.5
231                    For cultural sensitivity, GPT-3.5 scored 4.95 +/- 0.23 (OPTN), 4.93 +/- 0.26 (NHS)
232 ) of liver injury, as measured by both serum GPT levels and percent hepatocellular necrosis, was dram
233 injury and inflammation as measured by serum GPT levels and neutrophil infiltration.
234 hoMap's accuracy was comparable to zero-shot GPT at the first level of the ontology and outperformed
235                                   By source, GPT-3.5 linguistic accuracy was 4.84 +/- 0.37 (OPTN), 4.
236 tion measures: mean pinch and grip strength, GPT and AHFT completion times, smallest detected monofil
237 mar Hand Dynamometer, Grooved Pegboard Test (GPT), Arthritis Hand Function Tests (AHFT), Semmes-Weins
238 unction (JTH) and the Grooved Pegboard Test (GPT), were performed at baseline and during subsequent c
239 typic and phenotypic susceptibility testing (GPT) optimizes antiretroviral selection, but its effect
240 GeneAgent is consistently more accurate than GPT-4 by a significant margin.
241 T-4 was more consistent across attempts than GPT-3.5 but more influenced by an adversarial prompt.
242 d comprehensive functional descriptions than GPT-4, providing valuable insights into gene functions a
243 ns, though GPT-4 did so more frequently than GPT-3.5 (97.3% [146 of 150] vs 71.3% [107 of 150], respe
244    The more abundant expression of GPT2 than GPT, especially in muscle and fat, suggests a unique and
245              GPT-4 had higher precision than GPT-3.5 for extracting superior mesenteric artery involv
246         From these results, we conclude that GPT is one of a very small number of multitransmembrane
247          In this report, we demonstrate that GPT forms functional oligomers, probably dimers.
248               These results demonstrate that GPT subunits can physically interact and influence each
249                   Bias assessment found that GPT-4 exhibited no racial or gender disparities, in cont
250 ttery of theory of mind tests, we found that GPT-4 models performed at, or even sometimes above, huma
251 itive or negative) to write, suggesting that GPT-4o manifests a functional analog of humanlike selfho
252                                          The GPT mutants had no effect on two other dolichol-P-depend
253  Logistic Regression baseline (0.64) and the GPT-4 baseline (0.48), while focusing on high-level clin
254 ability depending on the type of cereal, the GPT and the antibody used.
255      First, two cosmids shown to contain the GPT sequence were derived from a chromosome 8-specific l
256         In addition, a cosmid containing the GPT sequence also contains a previously unmapped, polymo
257                 A DL language model like the GPT-2 may feature useful data about neural processes sub
258                           Here we modify the GPT(6) (generative pretrained transformer) architecture
259  determination of the molecular basis of the GPT isozyme variants will permit PCR-based detection of
260 decades, although chromosomal mapping of the GPT locus in the 1980s produced conflicting results.
261              A series of 5'-deletions of the GPT promoter identified a distal negative regulatory reg
262 TP inhibition increased transcription of the GPT/GOT1 genes through up-regulation of the IRE1alpha/cJ
263 changed responses for most questions, though GPT-4 did so more frequently than GPT-3.5 (97.3% [146 of
264 his wavelength based on gas phase titration (GPT) measurements.
265  superior to placebo in patients assigned to GPT (difference, -1.7; 95% CI, -3.2 to -0.1; P = .04) or
266 racial or gender disparities, in contrast to GPT-3.5, which failed to effectively model racial divers
267          Here we consider what may happen to GPT-{n} once LLMs contribute much of the text found onli
268 tudies revealed that the initial pre-trained GPT-3.5 model benefits from fine-tuning.
269             Glutamate pyruvate transaminase (GPT) is a highly active glutamate degrading enzyme that
270 ase (GOT) and glutamic pyruvic transaminase (GPT) levels were determined by enzymatic method, and the
271  and 4) serum glutamic pyruvic transaminase (GPT) levels.
272 r damage as measured by serum transaminases (GPT) demonstrate similar acute (3-6 h) post-I/R response
273 DP-GlcNAc:dolichol-P GlcNAc-1-P transferase (GPT) is an endoplasmic reticulum (ER) enzyme responsible
274 th LLO initiation by GlcNAc-1-P transferase (GPT), mannose-P-dolichol synthase, glucose-P-dolichol sy
275 DP-GlcNAc:dolichol-P GlcNAc-1-P transferase (GPT), which initiates N-linked glycosylation by catalyzi
276 N-acetylglucosamine-1-phosphate transferase (GPT), the enzyme that initiates the pathway for the bios
277 ustomized generative pretrained transformer (GPT) within the OpenAI GPT Store and searched to identif
278 riant of Generative Pre-trained Transformer (GPT)-3) on a range of analogical tasks, including a non-
279  variants of glutamate pyruvate transminase (GPT) (E.C.2.6.1.2) have been used as genetic markers in
280 rt general-purpose LLMs (e.g. GPT-3.5-turbo, GPT-4o, DeepSeek-v3, Claude 3.5-Sonnet, and Gemini-2.0-P
281 bodies to well-defined gluten protein types (GPT) isolated from wheat, rye and barley flours.
282                       Each of these utilizes GPT-4 in various capacities, wherein GPT-4 provides deta
283 igated by transient transfections of various GPT promoter/luciferase (Luc) constructs into primary mo
284 he same format was given to ChatGPT (version GPT-4) without prior training.
285  over time and between 2 different versions (GPT-3.5 and GPT-4) of chatbot-generated medical response
286 olyp detection, the highest-performing VLMs (GPT-4.1 F1: 91.98%, GPT-4.1-mini F1: 91.16%) matched CNN
287 s LLMs being competitive with closed-weights GPT-4o.
288 xpressed in muscle, fat, and kidney, whereas GPT is mainly expressed in kidney, liver, and heart.
289 tilizes GPT-4 in various capacities, wherein GPT-4 provides detailed instructions for chemical experi
290 cy, two preregistered studies tested whether GPT-4o changed its attitudes toward Vladimir Putin in th
291                 These results revealed which GPT each antibody is most sensitive to and provided nove
292                             Conclusion While GPT-4V demonstrated a level of competence in text-based
293 CI: 88.3, 92.4]) correct data entries, while GPT-3.5 extracted 539 of 840 (64.2% [95% CI: 60.8, 67.4]
294 3 +/- 0.26 (NHS), 5.00 +/- 0.00 (NKF), while GPT-4.0 scored 5.00 +/- 0.00 (OPTN), 5.00 +/- 0.00 (NHS)
295                                Compared with GPT-3.5, GPT-4 achieved equal or higher F1 scores for al
296                     Conclusion Compared with GPT-3.5, GPT-4 more frequently extracted correct procedu
297 egies were evaluated: few-shot learning with GPT-4 (version 0613; OpenAI) prompted with O-RADS rules
298       Hallucination frequency was lower with GPT-4 than with GPT-3.5, but repeatability was an issue
299  parameter class, and performing on par with GPT-4o.
300 ion frequency was lower with GPT-4 than with GPT-3.5, but repeatability was an issue for both models.
