Learning latent structure: carving nature at its joints
- PMID: 20227271
- PMCID: PMC2862793
- DOI: 10.1016/j.conb.2010.02.008
Abstract
Reinforcement learning (RL) algorithms provide powerful explanations for simple learning and decision-making behaviors and the functions of their underlying neural substrates. Unfortunately, in real-world situations that involve many stimuli and actions, these algorithms learn pitifully slowly, exposing their inferiority in comparison to animal and human learning. Here we suggest that one reason for this discrepancy is that humans and animals take advantage of structure that is inherent in real-world tasks to simplify the learning problem. We survey an emerging literature on 'structure learning' (using experience to infer the structure of a task) and how this can be of service to RL, with an emphasis on structure in perception and action.
(c) 2010 Elsevier Ltd. All rights reserved.
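The abstract's scaling argument can be made concrete with a toy simulation: when a fixed budget of experience is spread over more and more states, an unstructured tabular learner that treats every state independently learns each state's correct action less reliably. The sketch below is illustrative only and is not from the paper; the task, function names, and parameter values are all hypothetical, and it uses plain epsilon-greedy tabular Q-value updates on a contextual-bandit task (no state transitions) to keep the point isolated.

```python
import random

def greedy_accuracy(n_states, n_actions, trials, alpha=0.1, epsilon=0.1, seed=0):
    """Tabular value learning on a toy task: each state has exactly one
    rewarded action. Returns the fraction of states whose greedy action
    is correct after a fixed budget of trials. (Hypothetical example.)"""
    rng = random.Random(seed)
    rewarded = [rng.randrange(n_actions) for _ in range(n_states)]  # hidden task structure
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(trials):
        s = rng.randrange(n_states)                        # observe a random state
        if rng.random() < epsilon:                         # epsilon-greedy choice
            a = rng.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda i: Q[s][i])
        r = 1.0 if a == rewarded[s] else 0.0
        Q[s][a] += alpha * (r - Q[s][a])                   # incremental value update
    return sum(max(range(n_actions), key=lambda i: Q[s][i]) == rewarded[s]
               for s in range(n_states)) / n_states

# Same experience budget, very different state-space sizes:
small = greedy_accuracy(n_states=5, n_actions=4, trials=3000)
large = greedy_accuracy(n_states=500, n_actions=4, trials=3000)
```

With only 5 states the learner visits each one hundreds of times and masters the task; with 500 states each is seen a handful of times and accuracy stays far lower. Exploiting shared structure across states (the paper's theme) is one way to escape this per-state sample cost.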