Temporal difference learning

Temporal difference (TD) learning refers to a class of model-free reinforcement learning methods which learn by bootstrapping from the current estimate of the value function. These methods sample from the environment, like Monte Carlo methods, and perform updates based on current estimates, like dynamic programming methods.[1]

While Monte Carlo methods only adjust their estimates once the final outcome is known, TD methods adjust predictions to match later, more accurate predictions about the future before the final outcome is known.[2] This is a form of bootstrapping, as illustrated with the following example:

Suppose you wish to predict the weather for Saturday, and you have some model that predicts Saturday's weather, given the weather of each day in the week. In the standard case, you would wait until Saturday and then adjust all your models. However, when it is, for example, Friday, you should have a pretty good idea of what the weather would be on Saturday – and thus be able to change, say, Saturday's model before Saturday arrives.[2]

Temporal difference methods are related to the temporal difference model of animal learning.[3][4][5][6][7]

Mathematical formulation

The tabular TD(0) method is one of the simplest TD methods. It is a special case of more general stochastic approximation methods. It estimates the state value function of a finite-state Markov decision process (MDP) under a policy $\pi$. Let $V^\pi$ denote the state value function of the MDP with states $(s_t)_{t \in \mathbb{N}}$, rewards $(r_t)_{t \in \mathbb{N}}$ and discount rate $\gamma$[8] under the policy $\pi$:[9]

$V^\pi(s) = \operatorname{E}_{a \sim \pi}\left\{ \sum_{t=0}^{\infty} \gamma^{t} r_{t+1} \,\middle|\, s_0 = s \right\}.$

We drop the action from the notation for convenience. $V^\pi$ satisfies the Hamilton–Jacobi–Bellman equation:

$V^\pi(s) = \operatorname{E}_{\pi}\left\{ r_1 + \gamma V^\pi(s_1) \mid s_0 = s \right\},$

so $r_1 + \gamma V^\pi(s_1)$ is an unbiased estimate for $V^\pi(s)$. This observation motivates the following algorithm for estimating $V^\pi$.

The algorithm starts by initializing a table $V(s)$ arbitrarily, with one value for each state of the MDP. A positive learning rate $\alpha$ is chosen.

We then repeatedly evaluate the policy $\pi$, obtain a reward $r$, and update the value function for the current state using the rule:[10]

$V(s) \leftarrow V(s) + \alpha \left[ r + \gamma V(s') - V(s) \right],$

where $s$ and $s'$ are the current and next states, respectively. The value $r + \gamma V(s')$ is known as the TD target, and $r + \gamma V(s') - V(s)$ is known as the TD error.
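The update rule above translates directly into a short program. The following Python sketch is a minimal illustration of tabular TD(0) policy evaluation under these definitions; the environment interface (env.reset() returning an initial state, env.step(a) returning a next state, a reward and a terminal flag) and the policy function are assumptions made for the sake of the example, not part of the algorithm itself.

from collections import defaultdict

def td0_evaluation(env, policy, episodes=1000, alpha=0.1, gamma=0.9):
    """Tabular TD(0) policy evaluation: V(s) <- V(s) + alpha * [r + gamma * V(s') - V(s)]."""
    V = defaultdict(float)                    # value table, initialized arbitrarily (here: 0)
    for _ in range(episodes):
        s = env.reset()                       # assumed interface: returns the initial state
        done = False
        while not done:
            a = policy(s)                     # evaluate the policy in the current state
            s_next, r, done = env.step(a)     # assumed interface: (next state, reward, terminal?)
            td_target = r + gamma * V[s_next] * (not done)
            td_error = td_target - V[s]       # the TD error
            V[s] += alpha * td_error          # the TD(0) update rule
            s = s_next
    return V

Because each update uses the current estimate $V(s')$ as part of its target, the table improves by bootstrapping rather than by waiting for the complete return of an episode.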

TD-Lambda

TD-Lambda is a learning algorithm invented by Richard S. Sutton based on earlier work on temporal difference learning by Arthur Samuel.[11] This algorithm was famously applied by Gerald Tesauro to create TD-Gammon, a program that learned to play the game of backgammon at the level of expert human players.[12]

The lambda ($\lambda$) parameter refers to the trace decay parameter, with $0 \le \lambda \le 1$. Higher settings lead to longer-lasting traces; that is, a larger proportion of credit from a reward can be given to more distant states and actions when $\lambda$ is higher, with $\lambda = 1$ producing learning parallel to Monte Carlo RL algorithms.[13]
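As a rough sketch of how the trace decay parameter enters the computation, the following Python fragment implements the tabular, accumulating-trace form of TD($\lambda$), reusing the hypothetical environment interface from the TD(0) example above; the specific trace variant and parameter values are illustrative assumptions.

from collections import defaultdict

def td_lambda_evaluation(env, policy, episodes=1000, alpha=0.1, gamma=0.9, lam=0.8):
    """Tabular TD(lambda) policy evaluation with accumulating eligibility traces."""
    V = defaultdict(float)
    for _ in range(episodes):
        e = defaultdict(float)                # eligibility traces, reset at the start of each episode
        s = env.reset()
        done = False
        while not done:
            a = policy(s)
            s_next, r, done = env.step(a)
            td_error = r + gamma * V[s_next] * (not done) - V[s]
            e[s] += 1.0                       # accumulate the trace of the visited state
            for state in list(e):             # credit from the TD error reaches all recently visited states
                V[state] += alpha * td_error * e[state]
                e[state] *= gamma * lam       # traces decay by gamma * lambda every step
            s = s_next
    return V

With lam=0 the traces vanish immediately and only the current state is updated, recovering TD(0); with lam=1 credit flows back over the whole episode, mirroring the Monte Carlo behaviour described above.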

In neuroscience

The TD algorithm has also received attention in the field of neuroscience. Researchers discovered that the firing rate of dopamine neurons in the ventral tegmental area (VTA) and substantia nigra (SNc) appears to mimic the error function in the algorithm.[3][4][5][6][7] The error function reports back the difference between the estimated reward at any given state or time step and the actual reward received. The larger the error function, the larger the difference between the expected and actual reward. When this is paired with a stimulus that accurately reflects a future reward, the error can be used to associate the stimulus with the future reward.

Dopamine cells appear to behave in a similar manner. In one experiment, measurements of dopamine cells were made while training a monkey to associate a stimulus with the reward of juice.[14] Initially the dopamine cells increased firing rates when the monkey received juice, indicating a difference between expected and actual rewards. Over time this increase in firing propagated back to the earliest reliable stimulus for the reward. Once the monkey was fully trained, there was no increase in firing rate upon presentation of the predicted reward. Subsequently, the firing rate for the dopamine cells decreased below normal activation when the expected reward was not produced. This mimics closely how the error function in TD is used for reinforcement learning.
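The shift of the response from the time of the reward to the earliest predictive stimulus can be reproduced qualitatively with a toy TD(0) simulation. In the sketch below the trial structure (a pre-cue baseline step, a cue, a fixed delay, then a "juice" reward) and the choice to leave the baseline step unlearned, standing in for the unpredictable timing of the cue, are illustrative assumptions rather than a model fitted to the recordings.

def simulate_conditioning(trials=200, steps=6, alpha=0.2, gamma=1.0):
    """Toy TD(0) simulation of a conditioning trial.
    Step 0 is an uninformative pre-cue baseline, step 1 is cue onset,
    and a reward of 1 ('juice') arrives on the last step of the trial."""
    V = [0.0] * (steps + 1)                   # value of each step; V[steps] is terminal and stays 0
    for trial in range(trials):
        errors = []
        for t in range(steps):
            r = 1.0 if t == steps - 1 else 0.0
            delta = r + gamma * V[t + 1] - V[t]   # TD error (the putative dopamine-like signal)
            errors.append(delta)
            if t > 0:                         # baseline step is not updated: the cue is assumed
                V[t] += alpha * delta         # to arrive unpredictably, so it carries no prediction
        if trial in (0, trials - 1):
            print(f"trial {trial:3d}: TD errors = {[round(d, 2) for d in errors]}")

simulate_conditioning()

On the first trial the only nonzero TD error occurs at the time of the reward; after training the error at the reward has vanished and a positive error appears at cue onset instead, matching the qualitative pattern reported for the dopamine recordings.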

The relationship between the model and potential neurological function has produced research attempting to use TD to explain many aspects of behavioral research.[15][16] It has also been used to study conditions such as schizophrenia or the consequences of pharmacological manipulations of dopamine on learning.[17]

Notes

  1. Sutton & Barto (2018), p. 133.
  2. Sutton, Richard S. (1 August 1988). "Learning to predict by the methods of temporal differences". Machine Learning. 3 (1): 9–44. doi:10.1007/BF00115009. ISSN 1573-0565. S2CID 207771194.
  3. Schultz, W.; Dayan, P.; Montague, P. R. (1997). "A neural substrate of prediction and reward". Science. 275 (5306): 1593–1599. CiteSeerX 10.1.1.133.6176. doi:10.1126/science.275.5306.1593. PMID 9054347. S2CID 220093382.
  4. Montague, P. R.; Dayan, P.; Sejnowski, T. J. (1996-03-01). "A framework for mesencephalic dopamine systems based on predictive Hebbian learning" (PDF). The Journal of Neuroscience. 16 (5): 1936–1947. doi:10.1523/JNEUROSCI.16-05-01936.1996. ISSN 0270-6474. PMC 6578666. PMID 8774460.
  5. Montague, P. R.; Dayan, P.; Nowlan, S. J.; Pouget, A.; Sejnowski, T. J. (1993). "Using aperiodic reinforcement for directed self-organization" (PDF). Advances in Neural Information Processing Systems. 5: 969–976.
  6. Montague, P. R.; Sejnowski, T. J. (1994). "The predictive brain: temporal coincidence and temporal order in synaptic learning mechanisms". Learning & Memory. 1 (1): 1–33. doi:10.1101/lm.1.1.1. ISSN 1072-0502. PMID 10467583. S2CID 44560099.
  7. Sejnowski, T. J.; Dayan, P.; Montague, P. R. (1995). "Predictive Hebbian learning". Proceedings of the Eighth Annual Conference on Computational Learning Theory – COLT '95. pp. 15–18. doi:10.1145/225298.225300. ISBN 0897917235. S2CID 1709691.
  8. The discount rate parameter allows for a time preference toward more immediate rewards and away from distant future rewards.
  9. Sutton & Barto (2018), p. 134.
  10. Sutton & Barto (2018), p. 135.
  11. Sutton & Barto (2018), p. 130?.
  12. Tesauro (1995).
  13. Sutton & Barto (2018), p. 175.
  14. Schultz, W. (1998). "Predictive reward signal of dopamine neurons". Journal of Neurophysiology. 80 (1): 1–27. CiteSeerX 10.1.1.408.5994. doi:10.1152/jn.1998.80.1.1. PMID 9658025. S2CID 52857162.
  15. Dayan, P. (2001). "Motivated reinforcement learning" (PDF). Advances in Neural Information Processing Systems. 14. MIT Press: 11–18.
  16. Tobia, M. J.; et al. (2016). "Altered behavioral and neural responsiveness to counterfactual gains in the elderly". Cognitive, Affective, & Behavioral Neuroscience. 16 (3): 457–472. doi:10.3758/s13415-016-0406-7. PMID 26864879. S2CID 11299945.
  17. Smith, A.; Li, M.; Becker, S.; Kapur, S. (2006). "Dopamine, prediction error, and associative learning: a model-based account". Network: Computation in Neural Systems. 17 (1): 61–84. doi:10.1080/09548980500361624. PMID 16613795. S2CID 991839.

Works cited

Sutton, Richard S.; Barto, Andrew G. (2018). Reinforcement Learning: An Introduction (2nd ed.). Cambridge, MA: MIT Press.
Tesauro, Gerald (March 1995). "Temporal Difference Learning and TD-Gammon". Communications of the ACM. 38 (3): 58–68.