Date of Original Version

7-2011

Type

Conference Proceeding

Rights Management

All Rights Reserved

Abstract or Description

Philosophers and linguists have suggested that the meaning of a concept can be represented by a rule or function that picks out examples of the concept across all possible worlds. We turn this idea into a computational model of concept learning and demonstrate that this model helps to account for two aspects of human learning. Our first experiment explores how humans learn relational concepts such as “taller” that are defined with respect to a context set. Our second experiment explores modal inferences, or inferences about whether states of affairs are possible or impossible. Our model accounts for the results of both experiments and suggests that possible worlds semantics can help to explain how humans learn and use concepts.

Included in

Psychology Commons

Published In

Proceedings of the 33rd Annual Conference of the Cognitive Science Society.