Corpus search results (sorted by the word following the keyword)
Click a serial number to open the corresponding PubMed page.
1 te population models) and learning dynamics (reinforcement learning).
2 e control based on the outcome of retrieval (reinforcement learning).
3 "model-free" and "model-based" strategies in reinforcement learning.
4 ant consequences for computational models of reinforcement learning.
5 ntribution of these regions to probabilistic reinforcement learning.
6 neural control of instrumental behaviors by reinforcement learning.
7 d opposing effects on locomotor behavior and reinforcement learning.
8 hoice outcomes are incrementally updated via reinforcement learning.
9 acquire long-term reward predictions through reinforcement learning.
10 spatial attention that occurs during spatial reinforcement learning.
11 with an important role for these neurons in reinforcement learning.
12 -dimensional sensory inputs using end-to-end reinforcement learning.
13 considered to carry reward-related signal in reinforcement learning.
14 ia to solve the "curse of dimensionality" in reinforcement learning.
15 arallel contribution of MB and MF systems in reinforcement learning.
16 e D1A did not affect perceptual inference or reinforcement learning.
17 fic algorithms, such as prediction errors in reinforcement learning.
18 h a behavioral measure of opponent-selective reinforcement learning.
19 encode the full range of RPEs necessary for reinforcement learning.
20 engthening action-reward associations during reinforcement learning.
21 ode in circuits underpinning both affect and reinforcement learning.
22 physiological and neurochemical correlate of reinforcement learning.
23 ors to appetitively and aversively motivated reinforcement learning.
24 ncies between expected and actual rewards in reinforcement learning.
25 credit assignment problems in the context of reinforcement learning.
26 were used as electrophysiological indices of reinforcement learning.
27 as correlating with quantities derived from reinforcement learning.
28 cated in motivational outcome evaluation and reinforcement learning.
29 rcuits are important for action selection or reinforcement learning.
30 is important for optimal decision making and reinforcement learning.
31 roduce effects of dopaminergic medication on reinforcement learning.
32 CC behavioral patterns could be explained by reinforcement learning.
33 mine (DA) and regulates appetitive drive and reinforcement learning.
34 r to deviate sharply from the predictions of reinforcement learning.
35 udy failed to replicate previous findings on reinforcement learning.
36 ntial hypothesis posits that dopamine biases reinforcement learning.
37 sure may be a critical cellular component of reinforcement learning.
38 licit influence of movement error signals on reinforcement learning.
39 responses consistent with such "hierarchical reinforcement learning."
40 e neural dynamics of choice processes during reinforcement learning?
44 cent evidence indicates that, beyond classic reinforcement learning adaptations, individuals may also
46 s' behaviour was better explained by a basic reinforcement learning algorithm, adults' behaviour inte
49 bine them with model-free, experience-based, reinforcement learning algorithms to train the gliders.
54 MRI, we found that adolescents showed better reinforcement learning and a stronger link between reinf
56 nformation-processing system responsible for reinforcement learning and appropriate decision making.
57 triatal DA D2 receptors (D2Rs) also regulate reinforcement learning and are implicated in glucose-rel
58 ons relative to outcomes are both central to reinforcement learning and are thought to underlie finan
61 the representation of "state," in studies of reinforcement learning and decision making, and also in
63 rcement learning and a stronger link between reinforcement learning and episodic memory for rewarding
64 pamine is thought to play a critical role in reinforcement learning and goal-directed behavior, but i
65 problem through a harmonious combination of reinforcement learning and hierarchical sensory processi
66 ghly implicated both as a teaching signal in reinforcement learning and in motivating actions to obta
67 ptive function in this context, and also how reinforcement learning and incentive salience models may
68 ventral tegmental area (VTA) helps regulate reinforcement learning and motivated behavior, in part b
69 ion of corticostriatal circuitry involved in reinforcement learning and motivation, although the inte
73 sms that underlie these processes, including reinforcement learning and spike-timing-dependent plasti
74 d role for D3 receptors in select aspects of reinforcement learning and suggest that individual varia
75 the engagement of these two brain regions in reinforcement learning and their respective roles are fa
76 trated that the core brain areas involved in reinforcement learning and valuation, such as the ventra
78 s role in positively motivated behaviors and reinforcement learning, and its dysfunction in addiction
79 ge current reward prediction error models of reinforcement learning, and suggest that classical anima
80 ed with behavioral and neural dysfunction in reinforcement learning, and whether such dysfunction is
81 uences for brain and computational models of reinforcement learning are discussed.SIGNIFICANCE STATEM
82 by which different computational elements of reinforcement learning are dynamically coordinated acros
83 test whether the neural correlates of human reinforcement learning are sensitive to experienced risk
85 is research has established the viability of reinforcement learning as a model of behavioral adaptati
87 ynaptic transmission that in turn drives the reinforcement learning associated with the first alcohol
88 The resulting architecture includes striatal reinforcement learning based on egocentric representatio
89 ipheral glucose levels and glucose-dependent reinforcement learning behaviors and highlight the notio
90 based fMRI to the learning rate parameter in reinforcement learning, both in theory and in two previo
91 subtraction, a computation that is ideal for reinforcement learning but rarely observed in the brain.
92 at sensory processing, sequence learning and reinforcement learning, but are limited in their ability
93 the relative influence of the two systems in reinforcement learning, but few studies have manipulated
95 strengthen action-reward associations during reinforcement learning, but their role in human learning
96 ence suggests that the basal ganglia support reinforcement learning by adjusting action values accord
97 st that cerebellar damage indirectly impairs reinforcement learning by increasing motor noise, but do
98 e if response conflict acts as a cost during reinforcement learning by modulating experienced reward
99 error signal proposed to support model-free reinforcement learning, cached-value errors are typicall
100 n which the players learn based on a type of reinforcement learning called experience-weighted attrac
101 showing that individuals adopting a type of reinforcement learning, called aspiration learning, phen
102 arise from two computational strategies for reinforcement learning, called model-free and model-base
103 In particular, learning from reward, or reinforcement learning, can be driven by two distinct co
107 principle forms the basis for computational reinforcement learning controllers, which have been frui
108 n several time-dependent functions including reinforcement learning, decision making, and interval ti
110 stigated how opponent identity affects human reinforcement learning during a simulated competitive ga
111 al dopamine signaling, which is critical for reinforcement learning, efficient mobilization of effort
116 odel that augments the standard reward-based reinforcement learning formulation by associating a valu
117 s formal description of avoidance within the reinforcement learning framework provides a new means of
120 escribe inferences based on a combination of reinforcement learning from past feedback and participan
122 an instrumental loss-avoidance and win-gain reinforcement learning functional magnetic resonance ima
123 Computational modeling of trial-by-trial reinforcement learning further indicated that lower OFC
128 years, ideas from the computational field of reinforcement learning have revolutionized the study of
129 yield choice patterns similar to model-free reinforcement learning; however, samples can vary from t
130 behavior may relate to those in hierarchical reinforcement learning (HRL), a machine-learning framewo
131 ze two apparently distinct functions, one in reinforcement learning (i.e., prediction error) and anot
133 the DA D2 receptor in a behavioral study of reinforcement learning in a sample of 78 healthy male vo
134 eal an important role for the hippocampus in reinforcement learning in adolescence and suggest that r
138 implementations) suggests that the study of reinforcement learning in organisms will be best served
139 l link between IL-6 and striatal RPEs during reinforcement learning in the context of acute psycholog
141 of these tasks ignores important aspects of reinforcement learning in the real world: (a) State spac
142 iting advances in the theory and practice of reinforcement learning, including developments in fundam
146 FICANCE STATEMENT In aversive and appetitive reinforcement learning, learned effects show extinction
147 phrenia (SZ) show deficits on tasks of rapid reinforcement learning, like probabilistic reversal lear
148 h tasks is very diverse, but that a class of reinforcement learning-like models that use a mixture of
149 c DA D2 receptors in motivational aspects of reinforcement learning may apply to humans as well.
150 al studies, the present results suggest that reinforcement learning may be a major proximate mechanis
152 ing is not accounted for by varying a single reinforcement learning mechanism, but by changing the se
153 end-guided choice can be modelled by using a reinforcement-learning mechanism that computes a longer-
154 egulation as resulting from a self-adjusting reinforcement-learning mechanism that infers latent stat
155 engagement are captured by a self-adjusting reinforcement-learning mechanism that tracks changing en
156 These associations can be attained with reinforcement learning mechanisms using a reward predict
157 iatal plasticity can be induced by classical reinforcement learning mechanisms, and might be central
160 ), a machine-learning framework that extends reinforcement-learning mechanisms into hierarchical doma
161 more simplistic learning as reflected by the reinforcement learning model (exceedance probability, Px
162 hierarchical Bayesian model with an advanced reinforcement learning model and by comparing the model
163 i delivered at random times and formulated a reinforcement learning model based on belief states.
164 all learning curves could be obtained with a reinforcement learning model endowed with a multiplicati
165 s behavioral and neural data compared with a reinforcement learning model inspired by rating systems
166 mood and anxiety group on a parameter of our reinforcement learning model that characterizes a prepot
167 n a multiple choice foraging task and used a reinforcement learning model to quantify explore-exploit
168 al responses, we designed a similarity-based reinforcement learning model wherein prediction errors g
169 To cope with uncertainty, we extended a reinforcement learning model with a belief state about t
170 ious BG components may be described within a reinforcement learning model, in which a broad repertoir
172 onally distinct forms of predictive model: a reinforcement-learning model of the environment obtained
174 their inferences over time, we pitted simple reinforcement learning models against more specific "com
175 ic reinforcement learning task combined with reinforcement learning models and fMRI, we found that ad
177 tal midline theta, and have implications for reinforcement learning models of cognitive control.
178 stimuli, not actions.SIGNIFICANCE STATEMENT Reinforcement learning models of the ventral striatum (V
180 Despite these robust behavioral effects, reinforcement learning models reliant on reward predicti
181 ly, these responses were sufficient to train reinforcement learning models to predict the behaviorall
182 We found that, across different conditions, reinforcement learning models were approximately as accu
184 iction errors underlie learning of values in reinforcement learning models, are represented by phasic
186 nly to the effect of learning rate in simple reinforcement learning models, we provide a template for
189 only used and neurophysiologically motivated reinforcement-learning models to simulated behavioral da
190 ity can further improve learning by having a reinforcement learning module separate from sensory proc
192 ional mechanisms, model-based and model-free reinforcement learning, neuronally implemented in fronto
194 ions may relate to abnormal decision making, reinforcement learning or somatic processing in TS.
195 Adolescents and adults carried out a novel reinforcement learning paradigm in which participants le
199 icated that in such an unstable environment, reinforcement learning parameters are downregulated depe
200 rediction error and incentive value, two key reinforcement learning parameters, before and after meth
202 mechanism between model-based and model-free reinforcement learning, placing such a mechanism within
205 warranted to determine if disrupted positive reinforcement learning predicts high-risk behavior follo
206 ion could then be used in collaboration with reinforcement learning principles toward an autonomous b
209 ts that two components are involved in these reinforcement learning processes: a critic, which estima
211 d drug use, may be because of disruptions in reinforcement-learning processes that enable behavior to
214 vioral task affects the neural mechanisms of reinforcement learning remains incompletely understood.
215 s compatible with model-free and model-based reinforcement learning, reports the subjective rather th
216 o learn and retain novel skills, but optimal reinforcement learning requires a balance between explor
218 ans trained on a wide range of tasks support reinforcement learning (RL) algorithms as accounting for
221 parate literatures have examined dynamics of reinforcement learning (RL) as a function of experience
222 of the current task state, which is used for reinforcement learning (RL) elsewhere in the brain.
223 hoices can be characterized by extending the reinforcement learning (RL) framework to incorporate age
228 ed fMRI analysis revealed a fractionation of reinforcement learning (RL) signals in the ventral stria
229 g a novel oculomotor paradigm, combined with reinforcement learning (RL) simulations, we show that mo
230 ng the basal ganglia- and dopamine-dependent reinforcement learning (RL) system, but also prefrontal
232 eract during learning.SIGNIFICANCE STATEMENT Reinforcement learning (RL) theory has been remarkably p
235 ble systems, such as working memory (WM) and reinforcement learning (RL), contribute simultaneously t
236 We review the psychology and neuroscience of reinforcement learning (RL), which has experienced signi
238 ectories were well characterized by a simple reinforcement-learning (RL) model that maintained and co
239 or signals consistent with formal models of "reinforcement learning" (RL) have repeatedly been found
240 asic activity of dopamine neurons represents reinforcement learning's temporal difference prediction
242 reased ventral striatal RPE signaling during reinforcement learning (session 2), though there was no
244 uring the effective delivery of dopaminergic reinforcement learning signals broadcast to the striatum
245 w that dopamine-dependent mechanisms enhance reinforcement learning signals in the striatum and sharp
246 ve evaluation bias may be driven by aberrant reinforcement learning signals, which fail to update fut
248 sing a combination of evolutionary analysis, reinforcement learning simulations, and behavioral exper
249 The striatum is known to play a key role in reinforcement learning, specifically in the encoding of
254 ipiprazole on more cognitive facets of human reinforcement learning, such as learning from the forgon
257 n counterfactual learning, we administered a reinforcement learning task that involves both direct le
258 retrospectively predicted performance on the reinforcement learning task, demonstrating that the bias
261 event-related potentials in humans during a reinforcement learning task, we show strong evidence in
264 performance of two groups of participants on reinforcement learning tasks using a computational model
269 ion approach in humans with two well-studied reinforcement learning tasks: one isolating model-based
271 and norepinephrine neurotransmission support reinforcement learning, the role of dopamine has been em
273 between expected and actual outcomes), with reinforcement learning theories being based on predictio
275 cur without external feedback, yet normative reinforcement learning theories have difficulties explai
276 unifying framework encompassing Bayesian and reinforcement learning theories of associative learning.
277 Does learning in human observers comply with reinforcement learning theories, which describe how subj
283 tance of action exploration, a key idea from reinforcement learning theory, showing that motor variab
287 ncode reward prediction errors and can drive reinforcement learning through their projections to stri
288 ate their expectations after playing a DG by reinforcement learning to construct a model that explain
289 the DG and also to the wide applicability of reinforcement learning to explain many strategic interac
291 while also shedding light onto mechanisms of reinforcement learning using realistic biological assump
292 tal plasticity and individual differences in reinforcement learning, was found to predict the effect
293 l neuroimaging and computational modeling of reinforcement learning, we demonstrate positive habenula
295 to the responses of dopamine neurons during reinforcement learning, which have been shown to encode
296 to differentiate model-based from model-free reinforcement learning, while generating neurophysiologi
297 strate that bidding is well characterized by reinforcement learning with biased reward representation
299 stem is prominently implicated in model-free reinforcement learning, with fMRI BOLD signals in ventra
300 re we introduce an algorithm based solely on reinforcement learning, without human data, guidance or