The research study by Nash and Wade (2009) explores the ability of fabricated evidence to produce false confessions. Several previous studies, including those by Kassin and Kiechel (1996), Horselenberg, Merckelbach, and Josephs (2003), and Redlich and Goodman (2003), showed that fake evidence can make most people confess to something they have not done (as cited in Nash & Wade, 2009). For example, the participants in Kassin and Kiechel’s experiment (as cited in Nash & Wade, 2009) were asked to complete a computer-based task and warned not to press the Alt key, as it would cause the computer to crash. The computer was programmed to crash anyway. Immediately afterward, the subjects were confronted with false eyewitness testimony that they had pressed the Alt key. As a result, all the participants confessed to doing it, and 65% of them internalized this non-existent act, explaining that they must have pressed the key accidentally and forgotten about it. However, previous findings disagree on whether seeing evidence or simply being told of it has a stronger impact. While studies by Kassin and Dunn (1997) and King, Dent, and Miles (1991) demonstrated the greater persuasive power of visual evidence, in Garry and Wade’s (2005) experiment people were more easily misled by verbal descriptions than by doctored photographs (as cited in Nash & Wade, 2009).
The purpose of Nash and Wade’s (2009) research study was to find out whether more people would believe false accusations about themselves if they saw a doctored video or if they were simply told that such a video existed.
The variables examined in this research study include compliance, internalization, and confabulation. Compliance was measured as the number of subjects who signed the confession form. Internalization and confabulation were determined through expert analysis of the statements the participants made when speaking to a confederate. Two trained observers applied the trichotomous coding scheme of Redlich and Goodman (2003) (as cited in Nash & Wade, 2009) to the transcripts of these discussions to determine whether the subjects truly believed that they had committed the non-existent fraud (internalization) and whether they invented explanations of how it could have happened (confabulation). The absence of false beliefs was labeled “no internalization.” The subjects who assumed that they might have done it were considered to show “partial internalization,” and those who were sure that they had done it were categorized as showing “full internalization.” Confabulation was assessed in a similar way. The participants who did not attempt to explain how the event could have occurred were labeled “no confabulation”; those who speculated about possible reasons but did not insist this was the case were labeled “hypothesizing”; and those who described the act as though they had actually committed it were labeled “full confabulation.”
The hypothesis for Nash and Wade’s first experiment was that more participants would admit to a non-existent violation if they were shown a fake video of themselves committing it than if they were simply told that such a video existed. The hypothesis for the second experiment in the same study was that fewer people would be provoked into false confessions if the alleged act seemed less probable (Nash & Wade, 2009).
The participants in the first experiment were thirty university students who were randomly assigned to “see-video” and “told-video” groups. At the first stage, the experimenter asked them to complete a computerized gambling task. Each subject worked on the task individually. He or she received a sum of fake money, while another sum represented the “bank.” The computer program asked questions, and the participant had to choose the correct answer. If the response was correct, the student could take money from the “bank”; if it was wrong, he or she had to put money into it. The participants knew that the session was being video-recorded and believed that their aim was to win as much money as possible.
At the second stage, the experimenter told each subject that they had taken money from the “bank” when they should have put it there and that, consequently, they would not receive the promised reward for participating in the experiment. The students in the “see-video” group also watched a doctored video that clearly depicted them doing so: with the help of video-editing software, a green tick on the screen (the computer’s feedback for a correct answer) had been replaced by a red cross (its feedback for a wrong answer). The students in the “told-video” group were simply informed by the experimenter that video evidence existed. Immediately afterward, each participant was offered a confession form. If they refused to sign, the experimenter remarked that they could take the matter to the professor in charge, but that the professor would probably also find them guilty because of the video proof. The subject was then asked to sign the form again. Whether participants signed at the first or the second request served as the measure of compliance.
Next, the participant was asked to wait for some time, during which a confederate posing as another subject questioned him or her about the experiment. The conversation was secretly recorded and used as a measure of internalization and confabulation.
Finally, the experimenter returned and asked the participant how he or she could have come to take the money in violation of the rules. If the student offered any guesses, the experimenter asked him or her to write them on the other side of the form. These notes were also used to measure confabulation.
The second experiment followed the same pattern with a new group of subjects. The only difference lay in the nature of the false claim: the students were accused of having wrongfully taken the money three times rather than once. The hypothesis was that in this case fewer people would be misled by the fake evidence, as the event seemed less probable.
The results of this research study demonstrate that fake video evidence has the power to produce false confessions and beliefs in the majority of cases. Not a single subject had actually taken money from the “bank” when they should not have, yet all the participants in the first experiment and 93% of those in the second signed the form. Furthermore, 60% to 90% of people in each group developed at least a partial belief that they might have committed the fraud, and 13% to 73% became fully convinced that they had done it. This is in line with previous findings that when people are presented with evidence that contradicts their memories of an event, they prefer the version they see to the one they remember.
The first research hypothesis was supported: the participants in the “see-video” group in both experiments demonstrated higher levels of full internalization and confabulation than their counterparts in the “told-video” group. In addition, compliance was lower in the “told-video” groups, as more of their participants signed the confession form only at the second request. The demonstration of video evidence was thus shown to have a stronger effect than a verbal suggestion that such evidence exists.
The second research hypothesis was partly supported, as compliance was lower in both groups of the second experiment. The levels of internalization and confabulation were significantly lower in the “told-video” group but slightly higher in the “see-video” group. It may therefore be concluded that people tend to believe even a highly improbable event when it is backed by a demonstration of video evidence. A mere claim that such a video exists does not have comparable power: only 13% of the “told-video” group showed internalization and none showed confabulation, whereas in the “see-video” group 73% of the participants fully internalized the belief and 33% hypothesized about or fully confabulated the act.
These research findings highlight the danger that manipulation with doctored video poses to people’s awareness of their own actions. For example, blackmailers may use an incriminating doctored video to persuade their victims to pay. It is highly probable that the victims will come to believe that they actually committed the non-existent act, e.g., while drunk, and lost all memory of it. However, doctored video can also be used for therapeutic purposes. Nash and Wade (2009, pp. 633-634) report the successful correction of disruptive behavior in children by means of doctored video, which effectively altered not only the children’s perception of themselves but also their memories.
This research study expands upon the concepts of retroactive interference and the serial position effect. The new information that the subjects received from the doctored video acted as strong retroactive interference: it made it more difficult for the participants to recall the preceding events. The serial position effect explains why the students could not recall their exact answer to one of the test questions and were easily misled into believing it was wrong. The question was one of fifteen, and probably one in the middle, whereas in any long sequence of similar items the human mind tends to retain only the first and the last ones (Revlin, 2012, p. 130).
The research question that remains unanswered is whether sufficient knowledge of video-editing techniques would have reduced the participants’ belief in the doctored evidence. Further research is needed to determine whether subjects who are told about video-editing software immediately before the experiment would be more difficult to mislead. Another promising direction is to conduct a similar study with a different method of assessing internalization.