January 8, 1999

Chris Drummond
School of Information Technology and Engineering
University of Ottawa

Symbols, Systematicity and Synergisms

When solving complex tasks, it is important to exploit the results of prior learning. If transfer occurs at the level of the whole task, the likelihood of previous learning being relevant is small. If the complex task can be broken down into smaller parts, this likelihood is considerably increased. It is this structure sensitivity that is the critical property of Fodor's Language of Thought hypothesis. The main argument in support of this view is that thought is strongly systematic: being able to think some thoughts is intrinsically linked to being able to think other thoughts. This work explores the idea that being able to solve some tasks is likewise intrinsically linked to being able to solve other tasks.

This talk discusses an approach that realises this systematicity property by combining symbolic and associative processes. An associative learning algorithm generates solutions to subtasks. These are then composed, by a process much like symbolic planning, into a solution to the complex task. This solution is further refined by the associative learning algorithm so that it becomes more synergistic, the subtask solutions growing more interdependent. The talk will contrast this approach with others from the Artificial Intelligence community and related fields, and will demonstrate its viability by showing how it is implemented and used to solve a series of robot navigation tasks.
