Fingerprint identification and its aura of infallibility

March 2006 Issue. By Andrea R. Barter, Esq.

A burglary went wrong in 1910. The result was not only the murder of the homeowner, but the first American arrest, conviction and death sentence based on the analysis of fingerprints, ushering in a new era of law enforcement. Since then, the legal system has treated fingerprint comparisons as invaluable and essentially infallible.

But critics of fingerprint analysis say that the courts have gotten it wrong for nearly 100 years — fingerprint analysis does not deserve its aura of invincibility, as demonstrated by several recent high-profile cases.

In 2004, senior FBI fingerprint examiners misidentified Oregon attorney Brandon Mayfield as the source of fingerprints on evidence from the terrorist bombing in Madrid, Spain. The FBI initially blamed the mistake on the quality of the digital image supplied by Spain, then blamed the error on the suggestive effect of the computer database match, the inherent pressure of working on an extremely high-profile case, and verifying examiners’ knowledge of the previous examiner’s qualifications and conclusions.

In 2004, a convicted murderer in Boston, Stephan Cowans, was exonerated by DNA; upon review, his fingerprint had been misidentified by police examiners. Had the true perpetrator not left recoverable DNA (widely considered the most powerful form of forensic evidence) at the scene, it is extremely unlikely Cowans would have been able to prove his innocence. His case also proved that what had been presumed a correct latent print individualization was in fact erroneous. A private consultant concluded that poor training of the department’s fingerprint analysts was to blame.

And this December, evidence used to convict Terry L. Patterson of murdering a Boston police detective was ruled inadmissible. His conviction was based in large part on the expert testimony of a member of the Boston Police latent fingerprint section, who opined that four fingerprint impressions could be analyzed collectively because he believed them to be simultaneous impressions — a leap in logic not accepted in the scientific community.

If fingerprint identification is considered so reliable, why do so many questions remain?

According to internationally recognized latent print examiner Pat Wertheim of the Arizona Department of Public Safety, “There have always been misidentifications, but the defense community has not questioned it as strenuously nor sought outside experts, so they’ve gone undetected. This is not a recent proliferation, we are just more aware of them because now we are looking for them.”

The methodology

The basic principle of fingerprint identification is that the patterns of friction ridges on fingertips, palms, toes and soles are unique and permanent to each individual; no two fingers or toes, even of the same person, share the same print.

The idea that each fingerprint is unique and can be used for identification was first proposed by English administrators in 1858. It was not until 1892 that a method for taking “rolled” fingerprints and a classification system based on three ridge shapes – arches, loops and whorls – was developed. The modern fingerprint indexing system is based on this system with some refinements.

According to Lisa Steele of Steele and Associates in Bolton and author of the amicus brief for several defense attorney groups in Commonwealth v. Patterson, since its early days, fingerprint theory has been expanded by police departments and examiners in police crime labs. Fingerprint theory has rarely been examined by scientists outside this field using blind and double-blind studies. In fact, until 1999, fingerprint theory was never skeptically examined by courts under either the Frye or Daubert standards.

Most examiners are now taught the ACE-V (“Analysis, Comparison, Evaluation – Verification”) method of fingerprint identification. Level 1 starts with comparing the general pattern to see if there is broad agreement. For example, the same finger could not have made prints from different classes of patterns, such as an arch and a loop.

Next, in Level 2, the examiner looks for ridge characteristics of both prints that are of the same type and shape. Ridge endings, bifurcations, enclosures and so forth must be the same.

Next, the examiner makes a qualitative comparison of points of similarity. The exact number of points required for a match varies in the United States, but 10 to 12 points is considered standard.

In ACE-V, the final step is verification. The general rule is that all positive identification opinions must be verified by a second qualified expert. According to Steele, the second expert may repeat the entire process, but the comparison need not be blind. That is, the second expert may know from the outset that another examiner has already made the positive identification.
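The ACE-V sequence described above can be sketched as a simple decision pipeline. Everything here is an illustrative simplification for exposition: the function names, the data structures, and the fixed 12-point threshold (the article notes the U.S. standard varies between 10 and 12) are assumptions, not an actual examiner protocol.

```python
# Illustrative sketch of the ACE-V decision flow (Analysis, Comparison,
# Evaluation - Verification). All names and thresholds are simplifications.

POINT_THRESHOLD = 12  # 10-12 points is described as the U.S. standard


def level1_pattern_agrees(latent, known):
    """Level 1: the general pattern class (arch, loop, whorl) must agree."""
    return latent["pattern"] == known["pattern"]


def level2_matching_points(latent, known):
    """Level 2: count ridge characteristics (endings, bifurcations,
    enclosures) of the same type and location present in both prints."""
    return len(latent["minutiae"] & known["minutiae"])


def evaluate(latent, known):
    """A-C-E: analysis, comparison, evaluation -> tentative conclusion."""
    if not level1_pattern_agrees(latent, known):
        return "exclusion"
    points = level2_matching_points(latent, known)
    return "identification" if points >= POINT_THRESHOLD else "inconclusive"


def ace_v(latent, known, verifier=evaluate):
    """Full ACE-V: a positive identification must be confirmed by a second
    qualified examiner. Note the verification need not be blind -- the
    second call here simply repeats the same (possibly biased) process."""
    if evaluate(latent, known) != "identification":
        return evaluate(latent, known)
    verified = verifier(latent, known) == "identification"
    return "identification" if verified else "inconclusive"
```

Modeling the verifier as a repeat of the same evaluation makes the article's later point concrete: if the first examination is biased, a non-blind verification step can simply reproduce the bias rather than catch it.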

Examiners are taught this method, but after nearly a century of practice, no properly designed, controlled and conducted study of the accuracy of latent print individualizations exists, according to Lyn Haber and Ralph Norman Haber in their article, Error Rates for Human Fingerprint Examiners.
Which leaves the question: When there is an error, is it the method or the examiner?

“I can’t see any distinction between the two,” said Simon A. Cole, a frequently consulted fingerprint expert and assistant professor of Criminology, Law and Society at the University of California, Irvine.

“No one has shown me a latent print identification method that does not involve an examiner. In other words, ‘the method’ is to have an examiner make a determination. There is no method without an examiner. Talking about the error rate of a ‘method’ without an examiner is like talking about the crash rate of an automobile without a driver. Therefore, when an error does occur, there is no way of distinguishing between one attributed to ‘method’ or ‘examiner,’” explained Cole.

Problems with Mayfield, Cowans and Patterson

Riddled with errors, Brandon Mayfield’s misidentification as a terrorist bomber illustrates just how many problems can arise in only one investigation.

In May 2004, the Federal Bureau of Investigation arrested Mayfield, an Oregon attorney, as a material witness in an investigation of the terrorist attacks on commuter trains in Madrid. Mayfield had been identified by the FBI Laboratory as the source of a fingerprint, labeled LFP 17, found on a bag of detonators connected to the attacks. Two weeks after Mayfield was arrested, the Spanish National Police informed the FBI that it had identified an Algerian national as the source of the fingerprint. After the FBI Laboratory examined the fingerprints of the Algerian, it withdrew its identification of Mayfield and he was released.

According to the Office of the Inspector General (OIG), the unusual similarity between LFP 17 and Mayfield’s fingerprint was a major factor in the misidentification that confused three experienced FBI examiners and a court-appointed expert. The OIG concluded that the examiners committed errors in the examination, and that the misidentification could have been prevented through a more rigorous application of several principles of latent fingerprint identification. Their concerns included:

• The enormous size of the Integrated Automated Fingerprint Identification System (IAFIS) database and the power of its search program make it likely to return a confusingly similar candidate print, thereby elevating the danger of encountering a close non-match.

• Bias from the known prints of Mayfield — The examiners’ interpretation of some features in LFP 17 was adjusted or influenced by reasoning “backward” from features that were visible in Mayfield’s known prints.

• The examiners’ reliance on extremely tiny, “Level 3,” details, including shapes interpreted as individual pores, incipient dots between ridges and ridge edges. Because Level 3 details are so small, the appearance of such details in fingerprints is highly variable, even between different fingerprints made by the same finger. As a result, the reliability of Level 3 details is controversial within the latent fingerprint community.

• Failure to appropriately apply the “one discrepancy rule,” in which a single difference in appearance between a latent print and a known fingerprint must preclude an identification unless the examiner has a valid explanation for the difference. Implicit in this standard is the requirement that the examiner have equivalent certainty in the validity of each explanation for each difference in appearance between prints.

• The FBI Laboratory’s overconfidence in the skill and superiority of its examiners prevented it from taking the Spanish lab’s report as seriously as it should have.

• The FBI’s verification procedures require that every identification be verified by a second examiner. Under procedures in place at the time of the Mayfield identification, the verifier was aware that an identification had already been made, possibly introducing a bias that may prevent or discourage a verifier from challenging an identification.

According to Steele, the Mayfield case demonstrated that three senior officers in the FBI can make a ghastly mistake, and an outside examiner, who is very well regarded, can follow along and make the same mistake.

“But for the Spanish authorities, Mayfield might still be in jail. If this had been a U.S. case and we didn’t have the Spanish saying, ‘Stop: We have a better suspect,’ I’m not as certain that a U.S. department would have been as confident going to the FBI and saying ‘you are wrong,’” said Steele.

Cole believes the Mayfield case demonstrates how unlikely it is that misattributions will be exposed. “In this case, a media leak in Europe forced the FBI to arrest Mayfield before they wanted to. Had that not occurred, the Spanish police might have convinced the FBI of the error before the suspicion of Mayfield was made public, and the FBI might still be claiming, as they were pre-Mayfield, to have never made a misattribution,” said Cole.

Cowans’ lucky break

In a high-profile case that concluded on Feb. 2, 2004, prosecutors agreed to vacate the conviction of Stephan Cowans based upon DNA test results showing that he was not the person who left the fingerprint on a glass at the scene of a violent shooting incident.

Cowans was charged with shooting Sgt. Gregory Gallagher during a fight in 1997. Gallagher and another eyewitness identified Cowans as the assailant at trial. Two Boston fingerprint lab examiners testified that a fingerprint from a drinking glass used by the assailant matched Cowans’. He was sentenced to 35 to 50 years in prison.

The New England Innocence Project arranged for independent DNA testing of the glass, a sweatshirt and a baseball cap that were also found at the scene of Gallagher’s shooting. Tests determined that saliva on the glass and sweat on the clothing came from the same person, but that person was not Cowans. Cowans was eventually freed.

An investigation of the misidentification by a private consultant concluded that poor training of the department’s fingerprint analysts was to blame for the blunder. Finding that the examining officers were not prepared to do complex fingerprint analysis, the report said the unit did not train its officers to keep up with standard practices in fingerprint analysis and had low performance standards. Even the commander of the police department’s forensic technology division admitted the unit had little or no protocol or standardization of procedures. Wertheim, stressing that his comments are only his opinion and not the position of his agency, attributes the results to dishonesty on the part of one or more of the fingerprint experts.

The Cowans case demonstrates how difficult it is, and how much extraordinary evidence it takes, to convince criminal justice system personnel that a latent fingerprint individualization is erroneous.

Neither the verifying examiner nor Cowans’ own experts detected the error. Revealing a practitioner error required: the sheer luck of the true perpetrator leaving recoverable DNA at the crime scene; the good fortune of that DNA being preserved after his conviction; Cowans’ acceptance of biohazard duty in prison to pay for post-conviction DNA testing; the court’s responsibility in approving such testing; and the commonwealth’s upstanding behavior in ordering his immediate release.

“This suggests to me that Cowans is far from the only person wrongly convicted on fingerprint evidence; he was just the lucky one who was able to prove it,” said Cole.

A novel approach

In 1995, opponents of fingerprint identification had reason to fear the camel’s nose was under the tent when a novel application of ACE-V was employed to convict Terry L. Patterson of murdering a Boston police detective.

[Image caption: Some fingerprint evidence involved with the Cowans case (this image has been adjusted by Lawyers Journal for printing purposes).]
A member of the Boston police latent fingerprint section testified that four latent impressions found on the victim’s vehicle were left by Patterson. Although no single impression, on its own, could reliably be matched to its allegedly corresponding finger, the fingerprint examiner based his testimony on the cumulative similarities observed between the impressions and their corresponding fingers. The examiner testified that the four impressions could be analyzed collectively because he believed them to be simultaneous impressions — impressions of multiple fingers made by the same hand at the same time.

The commonwealth argued that ACE-V is reliable when applied to simultaneous fingerprints, or impressions of multiple fingers left by the same hand at the same time.

Although a resourceful application of ACE-V, it had no verifiable basis in science. The Supreme Judicial Court ultimately recognized this and in December vacated the lower court’s order that allowed ACE-V to be applied to simultaneous fingerprints.

The case reaffirms the proposition that scientific reliability requires not just an established and verified method, but also a scientifically validated application of that method.

Wertheim said the lesson is that “sometimes the courts may demand that fingerprint examiners be more conservative than we really believe we ought to be.”

The psychology of fingerprinting

Print examiners generally assert their methodology is perfect: Errors are attributable only to the examiner. But the defense bar argues that because part of the methodology relies on subjective judgments by the examiner, human error is built into the methodology.

Basic fingerprint identification theory suggests that fingerprint analysis and matching are based solely on the information provided by the print. But motivation, peer pressure, emotional state and context are just a few examples of the many possible psychological factors involved in fingerprint identification.

“There are fundamental things that go on with how the brain takes shortcuts to reach results,” said Steele. “They are not bad people or poorly trained, but the human brain will play all sorts of tricks on you… If your brain is so focused on one thing, you can miss a six-foot gorilla walking across a video screen. Part of the problem is human nature; we hit the same problem in witness identifications. They are motivated to pick the right person but often get it wrong. Psychology tells us factors about what makes it more likely an examiner will get it wrong, and one is biasing information.”

She points to the psychological phenomenon known as “confirmation bias.” If the examiner has a prior belief or expectation that two fingerprints will, or will not, match, then two potential psychological biases arise. “Cognitive confirmation bias” is a tendency to seek out and interpret evidence in ways that fit existing beliefs. “Behavioral confirmation bias,” commonly referred to as the self-fulfilling prophecy, is a tendency for people to unwittingly procure support for their beliefs. The danger of confirmation bias affecting an examiner’s subjective opinion is rarely discussed in the fingerprint examination literature or in the court cases upholding admissibility of the technique.

Confirmation bias can cause examiners to overestimate the quality of a latent print or attribute a discrepancy to an explainable distortion when they have external reasons to expect a match. It can also cause them to underestimate the quality of an image or regard an explainable distortion as a discrepancy when they have external reasons to expect a non-match.
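The symmetric effect described above can be reduced to a toy model: a borderline print is called a match or a non-match depending on expectation alone. The similarity scale, threshold, and size of the bias shift below are invented purely for illustration; real examiner judgments are not numeric scores.

```python
# Toy model of confirmation bias in a subjective similarity judgment.
# All numbers are invented for illustration only.

def perceived_similarity(true_similarity, expects_match, bias=0.1):
    """Shift the subjective reading toward the examiner's prior expectation:
    upward when a match is expected, downward when a non-match is expected."""
    return true_similarity + (bias if expects_match else -bias)


def judgment(true_similarity, expects_match, threshold=0.5):
    """Declare a match when the (biased) perceived similarity clears the bar."""
    return perceived_similarity(true_similarity, expects_match) >= threshold


# The same borderline print (true similarity 0.45) flips with expectation:
# judgment(0.45, expects_match=True)  -> True  (called a match)
# judgment(0.45, expects_match=False) -> False (called a non-match)
```

The point of the sketch is that the evidence itself (the 0.45) never changes; only the external context does, which is why a clearly unambiguous print (say, 0.9 or 0.1) is unaffected while borderline cases are decided by the bias.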

Steele argues that confirmation bias can play a significant role in distorting test results regardless of the validity of the underlying theory. Evidentiary matter may be presented to forensic scientists in a suggestive manner. The examiner may be given crime scene evidence, autopsy evidence and a fingerprint exemplar clearly labeled as the suspect’s. This may be accompanied by a written or oral synopsis of the reasons the investigator believes the suspect is guilty or a description of a suspect’s prior record for similar offenses. In high-profile cases, there may be immense public pressure to validate eyewitness statements or an admission with “neutral” scientific evidence. This suggestiveness, coupled with the understandable prosecution sympathies of many examiners, may skew subjective judgments.

“Whatever psychological bias it is that leads examiners to corroborate one another’s identifications must be pretty powerful because it impelled two FBI examiners to corroborate the Mayfield misattribution,” said Cole. “It also impelled Mayfield’s own expert to corroborate the misattribution, when his bias, if anything, might be presumed to tend the other way, toward the interest of his client.”

According to Steele, it is easy for an examiner to interpret the observations to fit his or her expectations.

“At rock bottom, we want the right bad guy put away on the right evidence. We have to care about psychology because we need to know how the decision was made,” said Steele. “If part of the process takes place in your head, we need to know how that works. Most of this is first-year psychology. But we care because if you get the wrong answer, not only has a lot of money and time been wasted prosecuting the wrong guy, but the right guy stays out there, possibly committing crimes,” she added.

Arts and science

“One of the things we lack in fingerprints is an accurate and reliable way of determining error rate,” said Wertheim. But part of the problem with the way critics attack fingerprinting is that “error rate” is a broad category that doesn’t separate the error rate inherent in the methodology from the error rate of the individual practitioner.

“Even if you could establish an error rate in general, I believe our most vocal critics would concede that some experts make more mistakes than other experts. So even if we could calculate an overall error rate, it would be inappropriate to imply to the court that the error rate could be used as a predictor of a mistake in a specific case at trial,” said Wertheim.

In defending fingerprint identification methods, he noted the confusion caused by referring to fingerprinting as a science. He explained that it is an applied science, with an artistic component to it.

“A lot of fingerprint critics try to define science in the narrow way that you might define mathematics or astronomy. There is a large area that we refer to as ‘applied science’ and in that, there is a subset of forensic sciences,” said Wertheim. “You can’t say it is not scientific simply because it doesn’t follow the same rules as mathematics. Fingerprinting is an applied science, not an exact science, but it does have a scientific foundation and an identification or exclusion is a scientific conclusion.”

Indeed, to date, there have been 42 Daubert challenges to fingerprints. None have been successful.


The International Association for Identification will hold its 2006 Annual Educational Conference in Boston, July 2-7.