Abstract
We describe an ACT-R model of sentence memory that extracts both a parsed surface representation and a propositional representation. In addition, where possible, a pointer is added for each sentence to a long-term memory referent that reflects past experience with the situation the sentence describes. This system accounts for basic results in sentence memory without assuming different retention functions for surface, propositional, or situational information. Gist is retained better than surface information because the surface representation is more complex and because the sentence's referent has received more practice. The model's only inference during sentence comprehension is to insert a pointer to an existing referent. Nonetheless, by this means it is capable of modeling many effects attributed to inferential processing. The ACT-R architecture also provides a mechanism for mixing the various memory strategies that participants bring to bear in these experiments.
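The retention claim rests on ACT-R's standard base-level learning equation, B = ln(Σ_j t_j^{-d}), where each t_j is the time since a past use of a memory chunk and d is the decay rate. A minimal sketch (the practice schedules below are hypothetical, chosen only to illustrate that a well-practiced referent retains more activation than a once-encoded surface trace at the same retention interval) might be:

```python
import math

def base_level_activation(use_times, now, decay=0.5):
    """ACT-R base-level learning: B = ln(sum_j (now - t_j)^-d).

    use_times: times of past uses of the chunk (same units as `now`);
    decay: ACT-R's conventional decay parameter d = 0.5.
    """
    return math.log(sum((now - t) ** -decay for t in use_times))

# Hypothetical schedule: a referent chunk practiced repeatedly over a
# lifetime of experience vs. a surface trace encoded once at study.
referent_activation = base_level_activation([1, 20, 50, 80], now=100)
surface_activation = base_level_activation([80], now=100)

# Both decay under the same law, but the referent's extra practice
# leaves it more active, hence better retained, at test.
assert referent_activation > surface_activation
```

Note that no separate retention function is assumed for gist: the same decay law applies to every chunk, and the retention difference falls out of practice history and representational complexity alone.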