

Source Memory

How can we discriminate between memories that are an accurate portrayal of a prior experience and those that are not? This problem is a central one in understanding the memory system from a theoretical perspective and one that has applied value. According to the Source Monitoring Framework (Johnson et al., 1993), source judgments involve weighing the type and amount of featural information in a retrieved memory trace according to certain decision processes. These decisions can be based on a comparison of these retrieved features to features expected of a given source (heuristic processes), or more systematic processes can be used to attempt to retrieve supporting information or to evaluate the consistency or plausibility of the memory. Finally, the decision criteria (the type and amount of evidence) for assigning source can vary according to task instructions, one’s goals, or other aspects of the task context. This theoretical approach motivates our research in the area of memory, and most of the research discussed in the sections below utilizes tests that require participants to indicate the source of their memories.

Improving Memory Decisions

This line of research was motivated by the repeated finding in the false memory literature that although false memories are often vividly remembered, they are usually less vivid and detailed on average than accurate memories (e.g., Lane & Zaragoza, 1995). For us, this brought up the question of how people set the criteria for their source decisions and whether the accuracy of such decisions could be improved by selecting more diagnostic features. In a series of studies (Lane, Roussel, Villa & Morita, 2007; Lane, Roussel, Villa, Starns & Alonzo, 2008; Starns, Lane, Alonzo & Roussel, 2007), we examined the ability of participants to avoid false memories at the time of retrieval by manipulating whether they received task information (i.e., warnings of various types) prior to the test or received feedback about their decisions during the test. We see such provided task information as a form of metamnemonic knowledge which has the potential to change participants’ expectations about the upcoming or ongoing retrieval task. If the information allows participants to more accurately assess the parameters of the task (e.g., which features are most diagnostic), then they may be able to translate that knowledge into a more effective retrieval strategy. Our findings across these studies confirmed this hypothesis. This work also revealed several additional findings of interest. First, we found that retrieval warnings appear to improve recognition in the DRM (Roediger & McDermott, 1995) paradigm via a distributional shift rather than a change in decision criteria (Starns, et al., 2007). Under the assumption that the amount of evidence retrieved from memory will differ when different types of evidence are considered, we argued that warnings may lead participants to focus more on item-specific features in memory and less on relational information (e.g., semantic features). 
Finally, we found that feedback about source judgments on an initial training test reduced source errors on the final test relative to a no-feedback control condition. Thus, our results suggest that participants can make on-line adjustments in the types of evidence used to make source judgments.

Two findings from our initial research on the impact of feedback at test were particularly striking: 1) source misattribution errors to postevent information were reduced dramatically and 2) participants did not seem to know how or even if the feedback was helpful (in contrast to previous research using retrieval warnings). We have now conducted a number of studies following up on these findings. As one example, we have begun to investigate how people learn from feedback. One possibility is that feedback changes decisions through a resource-intensive process that involves identifying and maintaining corrections to decision criteria after making an error. Another possibility is that participants learn to change their criteria in a less resource-intensive, relatively implicit process. To address this question, we (Groft & Lane, under review) had high and low working memory capacity (WMC) participants view an eyewitness event and complete a postevent questionnaire that included misinformation. Half of each group received feedback on the training source memory test. Performance on the subsequent no-feedback assessment test revealed that high and low WMC participants benefited equally from feedback. These findings suggest that feedback may improve source memory accuracy without requiring substantial executive resources (e.g., through incremental reinforcement learning; see Han & Dobbins, 2009). Overall, our research has found that feedback effects can be found under a number of conditions, including 1) different source-monitoring paradigms, 2) tests delayed up to 48 hours after encoding, 3) encoding conditions that greatly increase the overlap between sources (divided attention at encoding), and 4) even old-new recognition tests that involve a difficult discrimination between item classes.

Impact of Intervening Retrieval on Memory

Memories are, by their very nature, dynamic. Once encoded, a memory representation can be shaped by many factors, including the very act of remembering. Although retrieving a memory may strengthen it and increase the likelihood of later retrieval, the manner in which it is retrieved can have a profound influence on what is subsequently remembered about the original experience. In earlier work on this topic, we (Lane, Mather, Villa & Morita, 2001) examined the impact of recalling an eyewitness event at different levels of detail on participants’ ability to remember the source of items on a subsequent test. Following the postevent questionnaire, participants were asked to think back to the original event and review it in very specific detail or review just the main points (a summary). Detailed review participants recalled more event and postevent details than summary review participants. Thus, they recalled more information overall. However, on the subsequent source test (conditions that should encourage close scrutiny of memories), detailed review participants were more likely to report having seen misleading items in the original event, even for items not mentioned in their review. A subsequent experiment ruled out the possibility that this effect was induced by a criterion shift. We argued that the detailed review participants may have been more likely to form an image of the suggested item and/or associate the item with contextual details from the events in the video. This process would make the characteristics of these suggested items more “event-like” and consequently more likely to be attributed to the video during the source test.

We have continued to explore similar issues in more recent work carried out in collaboration with Linda Henkel (Lane, Henkel, Roussel, & Groft, in preparation). In this research, participants saw or imagined seeing pictures of common objects that shared physical (i.e., similar looking objects), conceptual (i.e., objects from the same category), or no similarity, and subsequently tried to recall the items three times. The critical manipulation concerned the recall task. Our studies were motivated by a previous finding that participants who repeatedly recalled items without respect to source (i.e., free recall) were significantly more likely to falsely claim that they had seen the imagined items presented as pictures on a subsequent source test than were participants who had earlier indicated the source of each item as they recalled it (Henkel, 2004; Exp. 3). We had two major questions. First, what factors are involved in reducing the negative impact of repeated retrieval on subsequent source misattributions? To rule out the possibility that source recall only helps later source monitoring because participants are simply remembering their previous responses at recall, we compared free recall participants to participants who recalled only items from a single source (only the picture or only the imagined items). In Experiment 1, both source-constrained recall conditions committed fewer source misattributions on the final test than the free recall condition. Second, we sought to understand the effect of specificity of recall on the features of misattributed memories (errors). To address this question, we borrowed techniques used to study “feature importation” (e.g., Lyle & Johnson, 2006). One of the reasons people appear to distinctly “remember” false memories is because those memories can be accompanied by details from items that were actually perceived. 
In research on this topic, feature importation is indexed by the degree to which participants attribute to false memories features (e.g., location) that are true of a related perceived item. In Experiments 2 & 3, we manipulated the location of the pictures on the screen, and at test, asked participants to identify the location of any item they claimed to have seen as a picture. We used this procedure to try to identify why source-constrained recall reduces errors. For example, source-constrained recall might reduce errors because it reduces feature importation (and thus errors are less compelling) or because it strengthens features that are diagnostic of source without affecting the level of importation. Our results are most consistent with the latter: although source-constrained recall reduces errors, the errors that are committed are attributed to a congruent location at the same rate as errors committed after free recall.

Eyewitness Suggestibility

Following a crime, witnesses can be exposed to misleading postevent information, for instance, during contact with other witnesses, while reading media accounts of the event, or during interactions with law enforcement personnel or attorneys. One concern is whether witnesses incorporate information from these sources into their accounts of what they perceived at the time of the crime (eyewitness suggestibility). This concern is well-founded as research has consistently found that participants will report misleading postevent items as having been in the witnessed event (the misinformation effect; Loftus & Palmer, 1974; Loftus, Miller & Burns, 1978). When these errors occur on a source-monitoring test (an indication participants believe they actually saw the postevent items in the witnessed event), we call them source misattribution errors.

Among other findings, our prior research in this area has revealed that source misattribution errors tend to increase as a function of 1) the extent to which postevent misinformation is reflectively and elaboratively processed (Zaragoza & Lane, 1994) and 2) the attentional resources available during encoding of the witnessed event, the encoding of misinformation, and at the time of retrieval (Lane, 2006; Zaragoza & Lane, 1998). In more recently published work, we examined the impact of generating elaborated descriptions of misinformation (Lane & Zaragoza, 2007). The role of imagery in helping create false memories (including those resulting from suggestion) is well-documented, but one issue that has not been addressed by previous research is what role, if any, the act of generating a perceptually detailed representation might play in promoting false memory creation. We manipulated how participants processed postevent items by varying whether they were required to generate or read details describing the physical appearance of the items (or in one experiment, simply read an unelaborated item). Our results revealed that generation increased both source misattribution errors and accurate memory for the real source of the items (the postevent questionnaire). Generation also increased claims of having a (false) vivid recollection of the suggested items in the event. We argue that participants may be more likely to construct and encode a richer, more elaborate representation of what the item looked like when they generate a description than when the item is read in a narrative, presupposed in a question, or when perceptual details are simply described. False memories created by generated descriptions are thus likely to be misattributed to the witnessed event because they contain characteristics that would be expected of items that were actually witnessed.

Eyewitness Identification

The issue of how we decide whether we have previously seen a person in a particular context is an important one with respect to the task of eyewitness identification. In this area, we are exploring issues similar to those we have examined in other memory paradigms. For example, in a series of experiments, we have examined the relationship between face identification and memory for associated details (Lane, Groft, Roussel, & Calamia, in preparation). Participants studied faces and associated details, and were later tested in a series of target-present and target-absent lineups (a multiple lineup paradigm). For each face they claimed to have seen, they were asked to recognize associated contextual details. Results revealed that accurate memory for some contextual details was associated with a higher likelihood of a correct lineup decision. Furthermore, there was evidence of feature importation for false identifications (i.e., claiming to have seen details associated with the real target). Thus, the issue of item-context binding appears important when considering eyewitness identification. In other experiments, we have attempted to understand participants’ identification or lineup rejection (“not present”) decisions in terms of strategies that have been intensively studied in the basic memory literature (e.g., the distinctiveness heuristic, recall-to-reject strategy). Overall, our results have shown that these strategies vary in effectiveness, and in some cases can be quite diagnostic of accuracy. Besides our empirical research, Chris Meissner and I have recently suggested that research in eyewitness identification may benefit from greater utilization of theories and methods of basic memory research (Lane & Meissner, 2008).

Beliefs about Eyewitness Memory

In real-world cases, it is often jurors who must evaluate eyewitness testimony. One key factor affecting jurors’ decisions is the beliefs they hold about eyewitness memory, and there is substantial survey research documenting that the beliefs of laypeople often differ from those of experts. Although the results of survey studies can provide useful information about people’s explicit beliefs about eyewitness memory issues, one understudied question is whether such beliefs tell us what people will actually do when faced with a situation where these beliefs are potentially relevant. This is a reasonable question, because psychological research has documented that people do not always act in accordance with their beliefs (attitude-behavior consistency), do not always apply what they have learned in one situation to another, and can learn and perform tasks correctly with little explicit knowledge of the features they are relying upon (implicit learning). Participants in this research (Alonzo & Lane, 2010) evaluated the accuracy of eyewitnesses depicted in brief trial transcripts and answered survey questions to assess their beliefs regarding complementary eyewitness memory issues. Although participants were sensitive to a number of factors in their evaluation of eyewitnesses, their performance on the transcripts did not correlate with the survey responses for most issues. These findings highlight the potential strengths and weaknesses of survey measures, and suggest the need for more diverse research investigating the understanding and use of knowledge about eyewitness memory by jurors.


Human beings appear to be quite flexible in the way they acquire knowledge about the world. For example, people go to school to deliberately study facts about various topics and also acquire knowledge directly from experience, such as when someone learns a route to a new restaurant without intending to do so (e.g., simply by being a passenger in a vehicle). A variety of behavioral and neuroscientific evidence has been used to argue that complex mental skills are learned and deployed through the use of two different and complementary types of processes (e.g., Reber, 1989; although the issue has also been debated). Although a variety of theoretical terms have been used to describe these two types of processes (most commonly explicit and implicit), we refer to these two categories as experience- and model-based processing (Sallas, Mathews, Lane & Sun, 2007). On this view, experience-based knowledge is acquired by abstracting over multiple encounters with members of a category, is often difficult to articulate, and important features of stimuli may be learned without the intention to do so. In contrast, model-based knowledge involves using a mental model or other explicit task representation to guide behavior, is easier to articulate, and is often more accurate, yet slower to access than experience-based knowledge (e.g., Domangue, Mathews, Sun, Roussel & Guidry, 2004). We use these terms rather than implicit/explicit because we argue that people can sometimes become explicitly aware of knowledge that they nevertheless learned in an experiential manner, and to allow for interactions between these two types of processing.

Interaction of Experience- and Model-Based Processing

Most prior research on implicit/explicit learning has tried to isolate a given process within a particular experimental task, under the assumption that individuals rely exclusively on one process rather than another during learning. Because of this, the question of whether and how these processes interact has largely been neglected. Recent work explored this issue using two different tasks. In the first (Sallas, et al., 2007), we used an artificial grammar task in which participants learn multi-consonant strings (artificial “words”) that follow a set of rules (grammar). In a series of experiments, we manipulated training conditions to emphasize different levels of knowledge acquisition, in accordance with predictions from different theories of artificial grammar learning (i.e., theories which are purely bottom-up vs. theories that allow for top-down and bottom-up interactions). Performance on separate production and grammar tests was most consistent with theories that allow for interaction between processes. Furthermore, our results suggested important training conditions for observing highly accurate and fluent grammar knowledge use. In other research (Lane, Mathews, Sallas, Prattini, & Sun, 2008), we utilized a dynamic system control task (Berry & Broadbent, 1984) that required participants to control a nuclear reactor by varying the number of pellets that were input. Although the task has a single input and output, participants rarely discover the formula underlying the system. However, they often learn to control the system before they can articulate how they are performing the task (i.e., it can be learned experientially). Even so, participants can improve performance when given task hints (model-based knowledge) before practice. This study was designed to examine how participants benefit from these instructions and potential impacts on the resulting representation. 
Our results showed that participants who were provided model-based knowledge improved their accuracy dramatically, even for states that were not explicitly given in task hints. However, results also revealed that learning in this manner can lead to “costs” such as slowed retrieval, and that this knowledge may not always transfer to new task situations as well as experientially-acquired knowledge (in contrast to prior theoretical claims). Our findings also questioned the theoretical assumption that people learn the dynamic control task by acquiring a highly specific “lookup” table representation.
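The opacity of this task comes from a very simple hidden rule: each output depends jointly on the current input and the previous output. As a minimal sketch, here are the canonical Berry & Broadbent (1984) dynamics in Python — the noise term, the 1-12 scale, and the function names are illustrative assumptions, not the exact parameters of our reactor variant:

```python
import random

def step(pellets, prev_output):
    """One round of a Berry & Broadbent-style control task.
    Output depends jointly on the current input and the previous
    output:  output = 2 * input - previous_output, plus occasional
    random perturbation, clamped to the task's 1-12 scale."""
    out = 2 * pellets - prev_output + random.choice([-1, 0, 1])
    return max(1, min(12, out))

def formula_input(target, prev_output):
    """The control rule participants rarely discover explicitly:
    to drive the system toward `target`, set
    input = (target + previous_output) / 2."""
    return round((target + prev_output) / 2)

# Applying the formula stabilizes output near the target despite
# noise; experiential learners approximate this state-action mapping
# without being able to state it.
random.seed(1)
output = 5
for _ in range(10):
    output = step(formula_input(9, output), output)
```

Because each output feeds back into the next round, the same input can produce different results at different times, which is why participants typically learn to control the system (an experience-based mapping from states to actions) long before they can articulate the underlying rule.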


We have had an interest in the application of our work to education for many years. Previous work included a project developing a computer-based tutor for teaching basic mathematics to highway workers (Mathews, 1999), one evaluating the effectiveness of a handheld microscope (Scope-on-a-Rope) for teaching science concepts to elementary school children, and another looking at the study habits of ADHD and non-ADHD college students (Advokat, Lane, & Luo, in press). However, our most recent research in this area is much broader in its theoretical scope and implications. We are a part of a multi-disciplinary team that was recently awarded an NSF grant. Our specific research project investigates the components of teacher expertise and explores a possible avenue for increasing teacher expertise in the context of a master’s-level teacher program at LSU.

Expertise has been defined as the ability to consistently produce outstanding performance in a domain (e.g., Ericsson, 2009). Furthermore, research has repeatedly documented that the amount of a person’s “experience” in a domain is not a reliable predictor of his or her level of performance (e.g., Bereiter & Scardamalia, 1993; Krampe & Charness, 2006). In other words, one does not simply get better the longer one has been in a profession. Instead, achieving expertise depends on the amount of deliberate practice one has performed (e.g., Ericsson, Charness, Feltovich, & Hoffman, 2006; Ericsson, Krampe & Tesch-Romer, 1993). Deliberate practice has been defined as systematic, effortful activity with the goal of improving performance on a focused skill, with immediate, detailed feedback, often from a coach or teacher. Despite the importance of these attributes, they are often missing from the environment of many professionals, including teachers (e.g., Dunn & Shriner, 1999). One aspect of our research involves helping middle- and high-school mathematics teachers gain expertise more efficiently by teaching them how to guide their own deliberate practice. The training involves sections on 1) effectively setting and pursuing goals, 2) principles of learning, 3) collaborative teamwork skills, and 4) components of deliberate practice. Although the content of the training is derived from the research in psychology and cognitive science, the focus is on translating this information in ways that can support teaching practice. Furthermore, the training provides support for planning and decision-making, including how to anticipate and handle obstacles to applying deliberate practice in their classrooms. Initial results are encouraging, but our long-term focus is on relating the amount and type of deliberate practice teachers engage in to their classroom performance. A second aspect of our research concerns the components of teacher expertise. 
We are exploring this issue in two different ways. First, we are conducting a series of in-depth interviews on the practices of expert teachers and experienced but non-expert teachers. Second, we are using an individual differences approach. Specifically, we are obtaining laboratory and survey-based measures of basic cognition, knowledge, motivational beliefs, social cognition, and personality on a set of teachers, and relating these to measures of teaching performance. Both studies aim to provide a deeper understanding of the features of teacher expertise, with the ultimate goal of developing a theoretical model.


We also have a long-standing interest in the cognitive processes underlying medical practice, and how these processes might be supported. In prior work, we collaborated on an NSF-funded grant with an interdisciplinary team of researchers that included Sonja Wiley-Patton and Andrea Houston from LSU’s Information Systems and Decision Sciences (ISDS) department. This research investigated how medical technology implementation affects the practices of nurses and doctors.

Our more recent work examines how people learn in a laboratory task that has many features in common with those faced by doctors treating patients. Given the complexity of the world we face, it is almost surprising that people can learn how and in what ways their actions (e.g., prescribing a medication) affect others. In many situations, professionals have multiple options for action and multiple ways that they could measure the impact of their actions. Furthermore, there is often “noise” in such environments (e.g., the same action may have different effects at different times) and feedback can often be delayed or absent. For example, imagine that you are a family doctor and are responsible for the health of a number of patients. For any given patient, you could try different interventions (e.g., prescribe a medication or a surgical procedure) as a means of affecting their illness. Such interventions might positively affect some aspects of their well-being (e.g., blood pressure) and have a negative or no effect on others (e.g., insomnia). Furthermore, the effects of any intervention are likely not immediately obvious, assuming one gets adequate feedback at all. Despite the complexity and potential ambiguity, professionals who face similar situations are often quite confident that they acquire specific and accurate knowledge about the impact of their interventions as a result of experience (e.g., doctors treating patients; managers supervising employees).

In this research, participants see “patients” suffering from the same disorder multiple times and receive information about their health on a number of parameters, for example, blood pressure (some experiments use a managerial version of this task). Their goal is to keep a key health measure in the “excellent” range while keeping other measures at least in the “acceptable” range (i.e., avoid negative side effects), and also to learn about the effects of different drugs. After a number of rounds with the patients, they are asked to prescribe the best drug for each patient and to indicate what they know about the effects of each drug. Across a number of studies, we have obtained consistent findings. Although participants appear to be learning which drugs are relatively most effective overall, they lack specific knowledge about the impact of such drugs. Specifically, participants avoid prescribing an intervention that has a positive effect on a key (primary) measure and a negative “side effect” on another (secondary) measure, yet when asked explicitly about the impact of the intervention they respond by reducing their judgments of its positive impact and indicate little knowledge of the negative side effect. Thus, participants appear unaware they are integrating across the effects of the drug on different health measures. These effects appear quite robust, as they occur in situations where participants have the ability to make a prescription (e.g., Tall, Mathews, Lane & Sun, under review), as well as situations where participants do not (and thus everyone gets to see the same information about each patient, e.g., Tall, Mathews, & Lane, 2009, Psychonomics Society). In addition to the goal of understanding the mechanisms underlying these effects, we are also exploring potential avenues for supporting good decision-making in this task.
For example, we have examined the effects of providing decision strategy support (Tall, et al., 2009) and using color coding to emphasize the impact of drugs upon health measures (Tall, Mathews, & Lane, submitted). While the former is beneficial under conditions where different patients have different reactions to specific drugs, color coding appears to be a particularly effective general means of enhancing attention to side effects.


National Science Foundation. Louisiana Math and Science Teacher Institute. PI - James Madden (Mathematics). 5 years (2009-2014). Sub-project - Enhancing Teacher Expertise: Deliberate Practice and Social Networks, Co-PIs Sean Lane, Robert Mathews and Tracey Rizzuto.

National Science Foundation. Physicians’ Adoption of Information Technology for Clinical Purposes. Sonja Wiley-Patton, Andrea Houston, Sean Lane, Robert Mathews, and Stephanie Mills (2004-2007).

Army Research Institute. The Integration of Implicit and Explicit Learning in Skill Acquisition. Ron Sun and Robert Mathews (2001-2008).

Spencer Foundation. Why Don’t the Medications Used to Treat Attention-Deficit/Hyperactivity Disorder Improve Academic Achievement? Co-PIs Claire Advokat and Sean Lane (2008-2009).

Louisiana Board of Regents Research Competitiveness Sub-program. Decision processes and false memories. PI -Sean Lane (2004-2007).

