Corpus search results (sorted by the first word after the keyword)

Click a serial number to open the corresponding PubMed page.
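For reference, the ordering used below ("sorted by the first word after the keyword") can be reproduced outside WebLSD with a short script. The sketch that follows is only illustrative and is not part of the WebLSD site; the sample sentences, the keyword "reward", and the helper kwic_sort_key are assumptions made for the example.

```python
import re

def kwic_sort_key(line: str, keyword: str = "reward") -> str:
    """Return the word immediately following the keyword (lowercased), or "" if absent."""
    # Allow inflected forms such as "rewards" or "rewarding", then capture the next word.
    m = re.search(rf"\b{keyword}\w*\b\W*(\w*)", line, flags=re.IGNORECASE)
    return m.group(1).lower() if m else ""

# Illustrative corpus fragments (assumed, not taken from the live corpus).
corpus_lines = [
    "Midbrain dopamine neurons signal reward prediction error (RPE).",
    "Both WT and KO mice discriminated between reward and no-reward levers.",
    "The present study measured neural reward activation in a validated task.",
]

# Sorting by the word after the keyword groups "reward activation", "reward and ...",
# "reward prediction ..." in alphabetical order, as in the listing below.
for n, line in enumerate(sorted(corpus_lines, key=kwic_sort_key), start=1):
    print(n, line)
```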
1 umbens DA, locomotion, and brain-stimulation reward.
2 , this could potentially result in increased reward.
3 iction error (RPE), or actual minus expected reward.
4 tory control, and thus a hypersensitivity to reward.
5 ction or stimulus was more likely to lead to reward.
6 those representations associated with higher reward.
7 en decision thresholds and the net harvested reward.
8 rphine dependence without affecting morphine reward.
9  task to gain points for alcohol, food or no reward.
10 ain regions involved in food intake and food reward.
11 on scales proportionally to the value of the reward.
12 tap or a hold to earn the corresponding food reward.
13 s underlying alcohol self-administration and reward.
14  bias behavioral choices away from immediate rewards.
15 ge in cognitive or physical effort to obtain rewards.
16 e accurately and precisely than intermediate rewards.
17 d conflicting selection pressures to explain rewards.
18 ptations and restore motivation for non-drug rewards.
19 f mice and a human anticipating conventional rewards.
20 ibute to learning about stimuli that predict rewards.
21 e long-term dependencies between actions and rewards.
22 ber photometry signals after the delivery of rewards.
23 itize courses of action that lead to greater rewards.
24 lent success at obtaining stimuli they found rewarding.
25 es select against the evolution of prosocial rewarding.
26 tense, more interesting, and ultimately more rewarding.
27 d to discriminate based on relative quantity-reward (1 vs 3 food pellets) or effort (3 vs 9 lever pre
28  We also demonstrate preference for the high-reward (3 pellet) lever was selectively reestablished wh
29 hanged substantially toward risk aversion as reward accumulated within a context, and blood oxygen le
30 erefore, we propose an incentive scheme that rewards accurate minority predictions and show that this
31                          Morphine exerts its rewarding actions, at least in part, by inhibiting GABAe
32  learning to select rewarding images but not rewarding actions.
33            The present study measured neural reward activation in the context of a validated reward t
34 g procedure to explore the diurnal rhythm of reward activation.
35                           Here, we show that reward acts on lingering representations of environmenta
36 g process important for executive control of reward and addiction.
37 uroimaging studies have identified the brain-reward and decision-making systems that are involved in
38 lucose metabolism in regions associated with reward and drug dependence.
39    Both WT and KO mice discriminated between reward and no-reward levers; however, KO mice failed to
40 OT) has been implicated in mediating natural reward and OT-synthesizing neurons project to the ventra
41 viously, we showed in young individuals that reward and punishment feedback have dissociable effects
42                              Remarkably, the reward and punishment groups adapted to similar degree a
43  (STN) and globus pallidus internus (GPi) in reward and punishment processing, and deep brain stimula
44 mporal relationship between a stimulus and a reward and reported their response with anticipatory lic
45 urther research into circadian modulation of reward and underscores the methodological importance of
46 Crucially, cross-prediction showed that mean reward and variability representations are distinct and
47 ying mechanism whereby marijuana could exert rewarding and addictive/withdrawal effects.
48 accumulated to show that the Hb encodes both rewarding and aversive aspects of external stimuli, thus
49 Ethanol, like other drugs of abuse, has both rewarding and aversive properties.
50 ting uncertainty and the relative balance of rewards and losses.
51 arkinson's disease (PD) to maximize monetary rewards and minimize physical efforts in a probabilistic
52 -neurons signal prediction information about rewards and punishments by displaying excitation to both
53 ti-category valuation task that incorporates rewards and punishments of different nature, we identify
54 ent response options and the actual received rewards and punishments.
55 ice learn associations between cues and food rewards and then use those associations to make choices.
56 ors (MORs) are central to pain control, drug reward, and addictive behaviors, but underlying circuit
57 ut how stress response, neural processing of reward, and depression are related in very young childre
58 ing a stressor, functional brain activity to reward, and depression severity in children 4 to 6 years
59 ty to cigarette rewards relative to monetary rewards, and by applying excitatory or inhibitory repeti
60 y relate dopamine to learning about variable rewards, and the neural encoding of associated teaching
61 s pattern, showing enhanced encoding of both reward- and loss-associated stimuli.
62 s) in reward-related brain activation during reward anticipation and outcome using fMRI (planned befo
63 d the monetary incentive delay task, probing reward anticipation, and a go/no-go task, probing respon
64 r inhibitory control and sensitivity to drug reward are two significant risk factors for drug abuse.
65 ipulated by changing the delay to or size of reward associated with a response direction across a ser
66  MFC rapidly evolved to encode the amount of reward associated with each stimulus.
67 tal factors contribute to adolescent-typical reward-associated behaviors with a particular focus on s
68 brain, resulting in long-term alterations in reward-associated behaviors.
69 contributes to the ability to learn stimulus-reward associations rapidly by shaping encoding within O
70 ntain or deviate from previously learned cue-reward associations.SIGNIFICANCE STATEMENT Dopaminergic
71 ntiousness and steeper discounting of future rewards at age 14 also predicts problematic drug use at
72 t networks, guided solely by delayed, phasic rewards at the end of each trial.
73 flect a prediction error of the brain, where rewards at unexpected times (10.00 h and 19.00 h) elicit
74 tation to several types of stimuli including rewarding, aversive, and neutral stimuli whereas VS dopa
75 sentangle these models, we design a two-step reward-based decision paradigm and implement it in a rea
76                                        While reward-based effects required long stimulus presentation
77 ion of the action, or their performance in a reward-based learning task.
78 uses on the mesolimbic nuclei as the core of reward behavior regulation.
79 ctively in the LH can profoundly affect food reward behavior, ultimately leading to obesity.
80 uvenile-adolescents, by potentially altering reward behavioral outcomes.SIGNIFICANCE STATEMENT The pr
81 ecific brain nucleus postulated to influence rewarding behaviour) with respect to wheel running and s
82 0 h and 19.00 h) elicit higher activation in reward brain regions than at expected (14.00 h) times.
83 gning a task where actions were consistently rewarded but probabilistically punished.
84 nds facilitates the maintenance of prosocial rewarding but prevents its invasion, and that spatial st
85 al correlates of seeking information about a reward, but it remains unknown whether, and how, neurons
86 d cell type that is critical for the brain's reward circuit, and how Delta(9)-tetrahydrocannabinol oc
87 demonstrate that specific projections of the reward circuitry are uniquely susceptible to the effects
88 t for DNA hydroxymethylation, in the brain's reward circuitry in modulating stress responses in mice.
89 re associated with altered brain function in reward circuitry in neurotypical adults and may increase
90  matrix (GM), a key primordium for the brain reward circuitry, is unique among brain regions for its
91 e have demonstrated that neuroadaptations in reward circuits following cocaine self-administration (S
92 feeding circuits within the hypothalamus and reward circuits within the ventral tegmental area (VTA).
93 itory input from the periphery to mesolimbic reward circuits.
94 lopentadienyl and carbene derivatives) and a rewarding collaboration between synthetic and theoretica
95 ally sustained activity during the period of reward consumption.
96 sted of one high- and one low-quality patch, reward contagion produced by higher leaf litter levels r
97  the nucleus accumbens (NAc) reduced cocaine reward-context associations and relapse-like behaviors i
98                     Moreover, the effects of reward contingency on choice were similar for patients w
99 sed, rats were slower to adapt to changes in reward contingency, and OFC encoding of response informa
100  the lateral hypothalamus (LH) is also a key reward-control locus in the brain.
101 attracts pollinators [3], but toxins in this reward could disrupt the mutualism and reduce plant fitn
102                                    Accepting reward coupled to a nociceptive stimulus resulted in dec
103 e environment, occurring even when potential reward cues have long disappeared.
104 are prone to attribute incentive salience to reward cues, which can manifest as a propensity to appro
105 is related to a transdiagnostic continuum of reward deficits in specific neural networks.
106 sk, with a temporal gap of 2 s added between reward deliveries, found that the rhythmic signals persi
107 n prefrontal sensorimotor control and rapid, reward-dependent reorganization of control dynamics.
108 ic good but do reward themselves (antisocial rewarding) deters cooperation in the absence of addition
109                                  Critically, reward devaluation by both cognitive and physical effort
110                      Objects associated with reward draw attention and evoke enhanced activity in vis
111 ng was driven by positive reinforcement (ie, reward drinkers) would have a better treatment response
112 rease (ghrelin) food intake and learned food reward-driven responding, thereby highlighting endocrine
113 lear HDAC5 mediate the behavioral effects of rewarding drugs via regulation of cocaine-associated sti
114 will occur and less able to forego immediate rewards due to higher financial need; they may thus appe
115  affects individuals' valuation of potential rewards during decision-making, independent from reward
116 e release by progressive ratio responding to reward, during which animals were allowed to effortlessl
117 velop strong associations between the drug's rewarding effects and environmental cues, creating power
118 ct of tryptophan metabolism) counteracts the rewarding effects of cannabinoids by acting as a negativ
119 ting behavioral tendencies toward danger and reward, enabling adaptive responding under this basic se
120 sory and frontal sources and later than mean reward encoding.
121 on, with punishment improving adaptation and reward enhancing retention.
122 cally adapts to the statistics of the recent reward environment, introducing an intrinsic temporal co
123 cision, one must anticipate potential future rewarding events, even when they are not readily observa
124    Rodents sniff in response to novel odors, reward expectation, and as part of social interactions [
125 rds during decision-making, independent from reward experience.
126 position of an encoded value on the scale of rewards experienced during learning.
127  via dynamic adjustment of learning based on reward feedback, while changes in its activity signal un
128 llinators in the recognition and learning of rewarding flowers.
129                          Foregoing immediate rewards for larger, later rewards requires that decision
130  based on projected future states, while the reward function assigns value to each state, together ca
131  the form of a state transition function and reward function, can be converted on-line into a decisio
132                      We show that increasing reward funds facilitates the maintenance of prosocial re
133 cocaine exposure as rats performed a complex reward-guided decision-making task in which predicted re
134 gs of abuse, during performance of a complex reward-guided decision-making task.
135                           To investigate how reward history modulates OFC activity, we recorded OFC e
136                     The mechanisms that link reward history to OFC computations remain obscure.
137 urons were more strongly modulated by recent reward history.
138 d toward the choice representing the maximal reward (i.e., initial dip).
139  found that fMRI pattern-based signatures of reward identity in lateral posterior OFC were modulated
140 S lesions had deficits in learning to select rewarding images but not rewarding actions.
141 uation modulates a cognitive map of expected reward in OFC and thereby alters general value signals i
142 radrenaline, mediate arousal, attention, and reward in the CNS.
143  with the neural response to anticipation of reward in the nucleus accumbens.
144 s based on the likelihood that they would be rewarded in a sustainable manner.
145 that optogenetic activation of NPF neuron is rewarding in olfactory conditioning experiments and that
146 ts that might explain why cannabinoid is not rewarding in rodents and might also account for individu
147 posite hemisphere eliminated the encoding of reward information.
148 ts suggest that altered neural processing of reward is already related to increased cortisol output a
149             Appropriate choice about delayed reward is fundamental to the survival of animals.
150                            Predicting future reward is paramount to performing an optimal action.
151 ns that are more likely or less likely to be rewarded is a critical aspect of goal-directed decision
152  contingent relationships between choice and reward, is reduced in lOFC patients compared with Contro
153 ns regarding the ability to realize deferred rewards, is associated with loss and risk aversion.
154 ults provide insights into how attention and reward jointly determine how we learn.
155  which people can expect to realise deferred rewards, leading to more present-oriented behaviour in a
156  brain imaging, the authors tested how brain reward learning in adolescent anorexia nervosa changes w
157  findings suggest that model-free aspects of reward learning in humans can be explained algorithmical
158 amined this by administering a probabilistic reward learning task to younger and older adults, and co
159  cortex (MFC), and amygdala mediate stimulus-reward learning, but the mechanisms through which they i
160 a comparable contribution of both signals to reward learning.
161 eral psychiatric conditions, many related to reward learning.
162 sociated with improved performance in a food-rewarded learning task.
163  KO mice discriminated between reward and no-reward levers; however, KO mice failed to discriminate b
164  either immediately before movement toward a reward location or just after arrival at a reward locati
165 a reward location or just after arrival at a reward location preferentially involved cells consistent
166 cts on feeding motivation and sensitivity to reward loss/gain consistent with human depression.
167                                              Reward magnitude also influenced learning, an effect tha
168 ward trials are manipulated by variations in reward magnitude and probability to win.
169 sociated alterations in dopamine signals for reward magnitude failed to subsequently discriminate bet
170 einforcement-related signals used to sustain reward-maximization.
171                    Alternatively, effects of reward may be part of a more general mechanism that prio
172 o locations in an arena, one associated with reward (mealworms) and one with punishment (air puff).
173 nergic neurons to reconsolidate the original reward memory.
174                    The network utilized both reward modulated and non-reward modulated STDP and imple
175 twork utilized both reward modulated and non-reward modulated STDP and implemented multiple mechanism
176                They contribute to cognition, reward, mood, and nociception and are implicated in a ra
177                                              Reward motivation has been demonstrated to enhance decla
178  area (VTA) dopamine system is important for reward, motivation, emotion, learning, and memory.
179 functional connectivity within a distributed reward network, assessed using resting-state functional
180  emphasizes the need to understand how brain reward networks contribute to youth depression.
181 A person's apparent confidence in the likely reward of an action, for instance, makes qualities of th
182 r and posterior insula, and 3) to unexpected reward omission in the caudate body.
183 rticipant predicted the behavioral impact of reward on search performance in subsequent trials.
184 on-making in which choices of high-cost/high-reward options are sharply increased.
185  failed to subsequently discriminate between reward options.
186 ly switched between exploring and exploiting rewarding options.
187 vated by the potential for either a monetary reward or a monetary loss.
188 ic connectivity patterns linked to increased reward or cognitive load.
189 hereas VS dopamine showed excitation only to reward or reward-predicting cues.
190  either the value of the currently available reward or the vigor with which rats act to consume it.
191 within the VS in the execution of successful reward-oriented behavior.
192  within the VS in terms of the regulation of reward-oriented behavior.
193 nd (ii) are not forced to take the immediate reward out of financial need.
194                                       During reward outcome, individuals with substance addiction sho
195 timuli that positively or negatively predict rewarding outcomes influence choice between actions that
196 tion to a subject-specific frontal-cingulate reward pathway, this pattern of results was reversed.
197 pressed in dorsomedial prefrontal cortex and reward PEs in ventral striatum.
198 vity that are time locked to the delivery of rewards, phasic activation of these projections does not
199 hat abstained smokers exhibited a heightened reward positivity to cigarette rewards relative to monet
200 he informational value of each trial and the reward potential were separately manipulated.
201 e strength: striosomal neurons fired more to reward-predicting cues and encoded more information abou
202 dopamine showed excitation only to reward or reward-predicting cues.
203             Midbrain dopamine neurons signal reward prediction error (RPE), or actual minus expected
204 are thought to encode novelty in addition to reward prediction error (the discrepancy between actual
205 loping view that the LHb promotes a negative reward prediction error in Pavlovian conditioning.SIGNIF
206  with interval duration, and doesn't reflect reward prediction error, timing, or value as single fact
207      We attribute this finding to a positive reward prediction error, whereby the animal perceives th
208 MRI, we show parallel encoding of effort and reward prediction errors (PEs) within distinct brain reg
209 lts show that the same signal that codes for reward prediction errors also codes the animal's certain
210 tral tegmental area (VTA) to striatum encode reward prediction errors and reinforce specific actions;
211 digm optimized to dissociate the subtypes of reward-prediction errors that function as the key comput
212 velop new, primarily inhibitory responses to reward-predictive cues across learning.
213  ensembles that are activated selectively by reward-predictive stimuli.
214  related to regions of the brain involved in reward processing and interoception.
215                        Dopamine function and reward processing are highly interrelated and involve co
216 cocaine on dependence-related variability in reward processing in cocaine-dependent individuals (CD)
217 rontostriatal functional connectivity during reward processing is predictive of response to a psychot
218 uli, designed to separate distinct phases of reward processing.
219  Furthermore, connectivity attenuation among reward-processing regions may be a particularly powerful
220 erior and posterior insula, 2) to unexpected reward receipt in the anterior and posterior insula, and
221 dren were not influenced by the value of the rewards received per se, rather selection by a human age
222 e nucleus accumbens (NAc) is a primary brain reward region composed predominantly of medium spiny neu
223 on, acting in the nucleus accumbens, a brain reward region, is capable of increasing both addiction-
224 ime the ventral tegmental area (VTA)-a brain reward region-to be in a depression-like state.
225 ions in the nucleus accumbens (NAc), a brain reward region.
226 egmental area (VTA) activity is critical for reward/reinforcement and is tightly modulated by the lat
227  to menthol-induced enhancements of nicotine reward-related behavior and may help explain how smokers
228 ls with addiction vs control individuals) in reward-related brain activation during reward anticipati
229 ciated with lower DAT expression and greater reward-related brain activation.
230                The LHb is thought to provide reward-related contextual information to the mesolimbic
231 l tegmental area regulate behaviours such as reward-related learning, and motor control.
232           It has been hypothesized that such reward-related modulation of stimulus salience is concep
233  a heightened reward positivity to cigarette rewards relative to monetary rewards, and by applying ex
234 C projections were also required when a cued reward representation was used to modify Pavlovian condi
235 ective motivating influence of cue-triggered reward representations over reward-seeking decisions as
236                Learning to optimally predict rewards requires agents to account for fluctuations in r
237 oregoing immediate rewards for larger, later rewards requires that decision makers (i) believe future
238 ed with significantly greater proportions of reward responsive neurons (p<0.01) and significantly low
239 nal goal-approach responses according to the reward's current value.
240 s of specific available rewards to influence reward seeking and decision making.
241 ng has been a canonical setting for studying reward seeking and information gathering, from bacteria
242     To test the role of the RMTg in punished reward seeking, adult male Sprague Dawley rats were test
243 nishment probability, suggesting that during reward-seeking actions, risk of punishment diminishes VT
244 red by high-effort cost behavior, but not by reward-seeking behavior.
245 te subcortical structures that contribute to reward-seeking behaviours, such as the ventral striatum
246 of cue-triggered reward representations over reward-seeking decisions as assayed by Pavlovian-to-inst
247 estigators' Research Award (MIRA) program to reward senior PIs with research time in exchange for les
248 (S1) cortical areas via the projections from reward-sensitive dopaminergic neurons of the midbrain ve
249 ision-making processes, resulting in a novel reward-sensitive hyperflexible phenotype, which might re
250 re tested for on MRI contrasts that captured reward sensitivity and cognitive flexibility.
251 ween poor inhibitory control and amphetamine reward sensitivity at both behavioral and neural levels
252                     Incentive motivation and reward sensitivity were measured using the Effort Expend
253                      Previous studies of MFC reward signaling have inferred value coding upon tempora
254   Two other major findings were that the MFC reward signals persist beyond the period of fluid delive
255                               In both tasks, reward signals were widespread throughout multiple cereb
256 rences, the medial prefrontal cortex, social reward, social isolation, and drug use.
257          OXT increased excitatory drive onto reward-specific VTA dopamine (DA) neurons.
258  related to the default mode (DMS), salience/reward (SRS), and frontoparietal (FPS) subnetworks in rM
259  output balancing, activity normalization of rewarded STDP and hard limits on synaptic strength.
260 rtance of the CeA in regulating responses to rewarding stimuli, shedding light on the broader neurobi
261 n-like behavior and increased preference for rewarding stimuli.
262 ence dopamine transmission in the mesolimbic reward system and can reduce drug-induced motor behavior
263 g in psychopathy, supporting the notion that reward system dysfunction comprises an important neurobi
264 ted contextual information to the mesolimbic reward system known to be involved in social behavior.
265 nterventions that target decision-making and reward systems differently to moderate overeating.
266 ng how social interactions can recruit brain reward systems to drive changes in affiliative behaviour
267 ard activation in the context of a validated reward task at 10.00 h, 14.00 h, and 19.00 h in healthy
268                       Using a novel monetary reward task during functional magnetic resonance brain i
269 re measured using the Effort Expenditure for Rewards Task (EEfRT), in which motivation for high-effor
270 , rats exhibit negative affect to a normally rewarding taste cue when it predicts impending but delay
271 and reward, with a numerically larger FRN to reward than punishment (with similar results on these tr
272  the delay between a visual stimulus and the reward that it portends.
273 roups showed a larger FRN to punishment than reward, the PE group showed similar FRNs to punishment a
274  do not contribute to the public good but do reward themselves (antisocial rewarding) deters cooperat
275 he efficacy of a novel gaze-contingent music reward therapy for social anxiety disorder designed to r
276                        Gaze-contingent music reward therapy, but not the control condition, also redu
277 ght sessions of either gaze-contingent music reward therapy, designed to divert patients' gaze toward
278 sed perceived intensity, while rejecting the reward to avoid pain resulted in increased perceived int
279 ral treatment that uses alternative non-drug reward to maintain abstinence.
280       Against this background we regarded it rewarding to implement similar magnetic centers into a c
281 uired for expectations of specific available rewards to influence reward seeking and decision making.
282 more concerned with assignment of credit for rewards to particular choices during value-guided learni
283 -effort/high-reward trials vs low-effort/low-reward trials are manipulated by variations in reward ma
284 T), in which motivation for high-effort/high-reward trials vs low-effort/low-reward trials are manipu
285 inations that differed only in the number of rewarded trials between goal reversals.
286                                     When <10 rewarded trials separated goal switches, OFC population
287 sform this correlation into a mechanism that rewards truth telling.
288 ignal change during anticipation of monetary reward using the monetary incentive delay task following
289 ided decision-making task in which predicted reward value was independently manipulated by changing t
290 ecifically sensitive to transient changes in reward value.
291 quires agents to account for fluctuations in reward value.
292 es are thought to be the result of increased reward-value coupled with an underdeveloped inhibitory c
293 bias in the representation of value: extreme reward values, both low and high, are stored significant
294                   Strikingly, an encoding of reward variability emerges simultaneously in parietal/se
295 to periodic visual stimuli when an immediate reward was given for every predictive movement.
296 arned to lick persistently when higher-value rewards were available and to suppress licking when lowe
297 ble and to suppress licking when lower-value rewards were available.
298                                    Mice were rewarded when the activity of the positive target neuron
299  group showed similar FRNs to punishment and reward, with a numerically larger FRN to reward than pun
300 truncation diminished morphine tolerance and reward without altering physical dependence, whereas the
