Abstract
We present a method that allows an agent, through active exploration, to autonomously build a useful representation of its environment. The agent builds the representation iteratively, learning distinctions and then learning predictive rules that use those distinctions. We build on earlier work in which we showed that, through motor babbling, an agent could learn a representation and predictive rules that appeared reasonable on inspection. In this paper we add active learning and show that the agent can build a representation that allows it to learn predictive rules to reliably control its hand and to achieve a simple goal.