Forensic Science

Each week, the television show “CSI” portrays forensic evidence as glitzy, high-tech, and virtually infallible. Unfortunately, this depiction is often a far cry from reality. A significant report issued by the President’s Council of Advisors on Science and Technology (PCAST) persuasively explains that expert evidence based on a number of forensic methods, such as bite mark analysis, firearms identification, footwear analysis, and microscopic hair comparison, lacks adequate scientific validation. Quite simply, these techniques have not yet been proved to be reliable forms of legal proof. There are several kinds of problems with forensic science in criminal cases, including:

  • Unreliable or invalid forensic disciplines. Studies have demonstrated that some forensic methods used in criminal investigations cannot consistently produce accurate results. Bite mark comparison, in which an examiner identifies a biter from a mark left on skin, is an example of an analysis that is both unreliable and inaccurate.
  • Insufficient validation of a method. Some forensic disciplines in use may be capable of consistently producing accurate results, but there has not been sufficient research to establish their validity. A method’s accuracy should be established through large, well-designed studies; without such studies, the results of an analysis cannot be meaningfully interpreted (a short numerical sketch of why study size matters follows this list). Analysis of shoeprints as a basis for identifying the unique source of a print is an example of a method that has not been sufficiently validated.
  • Misleading testimony.
    • Sometimes forensic testimony overstates or exaggerates the significance of similarities between evidence from a crime scene and evidence from an individual (a “suspect” or “person of interest”), or oversimplifies the data. Examples include testimony suggesting that a collection of features is unique, or overstating how rare or unusual those features are, thereby implying that the suspect is quite likely the source of the evidence; and testimony that fails to convey all possible conclusions, as can arise with masking in serology testing.
    • Sometimes forensic testimony understates, downplays, or omits the significance of an analysis establishing that an individual should be excluded as a possible suspect. An example is testimony that an analysis is “inconclusive” when, in fact, the analysis excluded the suspect.
    • Sometimes forensic testimony fails to include information on the limitations of the methods used in the analysis, such as the method’s error rates and situations in which the method has, and has not, been shown to be valid.
  • Mistakes. Like everyone else, forensic practitioners can make mistakes, including mixing up samples or contaminating specimens. Such errors can occur in any kind of scientific or laboratory testing, even in well-developed and well-validated fields.
  • Misconduct. In some cases, forensic analysts have fabricated results, hidden exculpatory evidence, or reported results for tests that were never conducted.
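
To make the validation point above concrete, here is a minimal numerical sketch, in Python. The study sizes are invented for illustration, and the “rule of three” and the exact zero-failure bound are standard statistical results rather than anything taken from the PCAST report. The point is that a validation study observing zero false positives does not show the error rate is zero; it only caps how large the rate could plausibly be, and the cap shrinks only as the study grows.

    # Sketch: how large an error rate is still consistent with a validation
    # study that observed ZERO false positives in n comparisons.
    # All study sizes below are hypothetical.

    def rule_of_three(n: int) -> float:
        """Quick approximation of the one-sided 95% upper bound on an
        error rate after observing zero errors in n trials."""
        return 3.0 / n

    def exact_upper_bound(n: int, confidence: float = 0.95) -> float:
        """Exact version: the rate p solving (1 - p)**n = 1 - confidence,
        i.e. the largest rate that would still produce zero errors 5% of the time."""
        return 1.0 - (1.0 - confidence) ** (1.0 / n)

    for n in (50, 500, 5000):
        print(f"{n:>5} comparisons, 0 errors: true rate could still be "
              f"~{exact_upper_bound(n):.2%} (rule of three: {rule_of_three(n):.2%})")

On these invented numbers, even a flawless fifty-comparison study leaves room for an error rate near six percent, which is why small studies cannot establish validity.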

So where does a judge start? We surely do not need to become judicial forensic science troglodytes. Professor David H. Kaye (Pennsylvania State University, Penn State Law) has posted How Daubert and Its Progeny Have Failed Criminalistics Evidence and a Few Things the Judiciary Could Do About It (Fordham Law Review, Vol. 86, No. 4, 2018, Forthcoming) on SSRN. Here is the abstract:

A recent report of the President’s Council of Advisors on Science and Technology questioned the validity of several types of criminalistics identification evidence and recommended “a best practices manual and an Advisory Committee note, providing guidance to Federal judges concerning the admissibility under Rule 702 of expert testimony based on forensic feature-comparison methods.”

This article supplies information on why and how judicial bodies concerned with possible rules changes—and courts applying the current rules—can improve their regulation of criminalistics identification evidence. First, it describes how courts have failed to faithfully apply Daubert v. Merrell Dow Pharmaceuticals’ criteria for scientific validity to this type of evidence. It shows how ambiguities and flaws in the terminology adopted in Daubert have been exploited to shield some test methods from critical judicial analysis.

Second, it notes how part of the Supreme Court’s opinion in Kumho Tire Co. v. Carmichael has enabled courts to lower the bar for what is presented as scientific evidence by maintaining that there is no difference between that evidence and other expert testimony (that need not be scientifically validated). It suggests that if the theory of admissibility is that the evidence is nonscientific expert knowledge, then only a “de-scientized” version of evidence should be admitted.

Third, it sketches various meanings of the terms “reliability” and “validity” in science and statistics on the one hand, and in the rules and opinions on the admissibility of expert evidence, on the other.

Finally, it articulates two distinct approaches to informing judges or jurors of the import of similarities in features—the traditional one in which examiners opine on the truth and falsity of source hypotheses—and a more finely grained one in which criminalists report only on the strength of the evidence. It contends that courts should encourage the latter, likelihood based testimony when it has a satisfactory, empirically established basis.
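
To make the abstract’s final contrast concrete, here is a minimal sketch of likelihood-based reporting, again in Python. Every number is hypothetical; the sensitivity, false-positive rate, and prior odds below are invented for illustration and are not drawn from Kaye’s article. Rather than opining that the suspect is the source, the examiner reports only how strongly the observed “match” favors that hypothesis.

    # Sketch of likelihood-ratio ("strength of evidence") reporting.
    # Both performance figures are hypothetical, standing in for the results
    # of a black-box validation study of some feature-comparison method.

    sensitivity = 0.95          # P(examiner reports "match" | same source)
    false_positive_rate = 0.02  # P(examiner reports "match" | different sources)

    # The likelihood ratio weighs the reported "match" under the two competing
    # source hypotheses. It expresses evidential strength only; it is not the
    # probability that the suspect is the source.
    likelihood_ratio = sensitivity / false_positive_rate  # 47.5 on these numbers

    print(f"The reported 'match' is about {likelihood_ratio:.0f} times more likely "
          "if the suspect is the source than if someone else is.")

    # Prior odds are for the fact-finder, not the examiner, to supply.
    prior_odds = 1 / 100  # hypothetical prior odds that the suspect is the source
    posterior_odds = prior_odds * likelihood_ratio
    print(f"Prior odds of 1:100 become posterior odds of roughly {posterior_odds:.2f}:1.")

Under the traditional approach, by contrast, the examiner would testify categorically to an “identification” or an “exclusion,” which hides both of the underlying performance figures from the fact-finder.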
