Category Archives: NAS Report

Judging Forensics

Federal district judge for the Southern District of New York Jed S. Rakoff delivered the keynote address, “Judging Forensics,” at the Forensics, Statistics and Law conference at the University of Virginia School of Law on March 26, 2018. The address can be viewed online here.

Judge Rakoff’s presentation commemorated the 25th anniversary of the U.S. Supreme Court’s decision in Daubert v. Merrell Dow Pharmaceuticals Inc., which reshaped how judges evaluate scientific and expert evidence. The presentation examined how courts have considered the admissibility of testimony about scientific evidence, and specifically forensic evidence. Judge Rakoff cited a study which found that in Daubert challenges between 1993 and 2001, defense proffers of expert testimony were rejected 92 percent of the time, whereas where the prosecution was the proponent of the evidence, expert testimony was admitted 95 percent of the time. Judge Rakoff examined some reasons for that disparity.

He addressed the NAS Report and PCAST Report and several examples of unreliable forensic science and statistical evidence, including a hair comparison case and a case in which a mathematics professor improperly calculated the likelihood of two suspects driving a specific car and was allowed to testify to that evidence. The question and answer session offered important insights into how these issues can be addressed.
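The kind of probability error described above typically comes from multiplying per-characteristic frequencies as though the characteristics were independent, then presenting the product as the chance the suspects are innocent. A minimal sketch of that fallacy, using entirely hypothetical trait frequencies (not figures from any actual case):

```python
# Hypothetical per-trait population frequencies. Multiplying them
# assumes the traits occur independently, which is rarely true.
freqs = {
    "trait a": 1 / 10,
    "trait b": 1 / 4,
    "trait c": 1 / 10,
}

naive_match_prob = 1.0
for f in freqs.values():
    naive_match_prob *= f

print(f"naive 'random match' probability: 1 in {round(1 / naive_match_prob)}")
# Two problems: (1) correlated traits make the true joint frequency
# higher than this product, and (2) even a correct joint frequency is
# the rarity of the profile, not the probability of innocence — the
# so-called prosecutor's fallacy.
```

With these hypothetical numbers the product is 1 in 400; the point is that the arithmetic looks rigorous while both the independence assumption and the interpretation are unsound.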

Recordings of additional presentations and panels are available on the UVA Law YouTube channel (scroll down to “uploads”) or here. Attorneys may be interested in viewing Dr. Peter Stout’s presentation on the use of blinds at the Houston Forensic Science Center (at 27:50), Henry Swofford’s presentation on the use of statistical software in fingerprint comparisons at the Defense Forensic Science Center (at 1:12), and Dr. Alicia Carriquiry’s presentation on statistics and the evaluation of forensic evidence.


Filed under Experts, Meetings/Events, NAS Report, PCAST Report, Resources

ABA Resolution concerning forensic evidence

The ABA House of Delegates approved a Resolution in 2012 urging judges to consider several factors when determining the manner in which expert testimony is presented in criminal trials. The Resolution and its accompanying report urge attorneys and judges to seek “innovative solutions” to help jurors understand the significance and limitations of scientific evidence, such as altering trial structure to allow expert witnesses for both parties to testify consecutively and avoiding declaring a witness to be an expert in front of the jury.

The ABA Resolution and report draw heavily from other ABA standards and from the 2009 NAS Report on the state of forensic science in the United States. More information on the landmark NAS Report can be found here.

The ABA report also critiques trial attorneys’ lack of substantive knowledge regarding scientific evidence and their ability to effectively challenge misleading forensic testimony. “Until an elevation in the knowledge base of trial attorneys is achieved,” the ABA report warns, “the adversarial system will continue to falter with respect to the proper presentation of forensic scientific evidence.”

The ABA Resolution lists several areas of concern for testimony by forensic experts. Highlights include:

Use of Clear and Consistent Terminology

The Resolution urges judges to consider “whether expert witnesses use clear and consistent terminology in presenting their opinions.” The report warns that terms such as “match,” “consistent with,” “similar in all respects tested,” and “cannot be excluded as the source of” have no accepted definition or standardized meaning in the scientific community.

Limitations of Forensic Techniques

The Resolution urges judges to consider whether experts present testimony in a way that accurately conveys any limitations in the forensic techniques they employ. The report points out that experts in disciplines such as microscopic hair analysis sometimes exaggerate the reliability of subjective techniques with misleading phrases like “zero error rate,” claiming that these methods are error-free when performed “correctly.” The report also criticizes the use of phrases with no accepted scientific meaning, such as “reasonable scientific certainty.”

Avoiding Claims of Uniqueness

The Resolution also advocates precluding experts from offering explicit or implied claims of uniqueness unless their findings are supported by empirical research. The report notes that fields such as firearms comparison and handwriting analysis often rely on subjective comparison by analysts with no empirical research to validate their techniques. Testimony by these experts can give jurors the impression that such “matches” represent absolute identification. In particular, the report recommends prohibiting experts from testifying that a match has been made “to the exclusion of all others” unless the expert’s methodology has been validated by empirical statistical research.

Although most judges are unlikely to exclude evidence solely on the basis of the ABA Resolution, attorneys may attempt to use the Resolution to limit the scope and impact of expert testimony in their cases. It isn’t clear how much weight individual judges will give the ABA Resolution, or whether they will interpret the Resolution as placing a higher burden on parties seeking to use expert testimony than already required under North Carolina law. Nevertheless, the Resolution provides strong support for attorneys trying to preclude experts from offering misleading testimony about the significance of their findings, and it calls on judges to monitor the presentation of forensic evidence more closely.


Filed under Experts, NAS Report, Resources

The (Sorry) State of Forensics in the US [and perhaps the world]

Reposted from The Wrongful Convictions Blog

by Phil Locke, Science and Technology Advisor, Ohio Innocence Project

In 2009, the National Academy of Sciences published its congressionally commissioned report, “Strengthening Forensic Science in the United States: A Path Forward.” Chapter 5 of the report reviews a number of forensic disciplines and their shortcomings. A qualitative summary (by this author) of the Chapter 5 findings is presented in the following chart:

I have “graded” each of the forensic disciplines on the following attributes:

1)  Statistical reliability for “class inclusion” of a suspect.

2)  Statistical reliability for “individual identification” of a suspect.

3)  Statistical reliability for “class and individual exclusion” of a suspect.

4)  Verified scientific validity with documented statistics.

5)  Clear non-ambiguous terminology related to statistical validity of results.

6)  Does not rely on the competence, training, experience, or judgment of individual examiners.

Of great concern is all the red, pink, and yellow on the chart.  Here is a link to a downloadable (and legible) copy of the chart:

NAS Ch5 Summary_Rev_4 copy

The National Academy of Sciences was given its congressional “charge” in the fall of 2005 to investigate and report on the state of forensics in the US. By the fall of 2006, a panel of 52 scientists, academics, and experts had been assembled and had started work. Two and a half years later, after exhaustive review of its findings, the report was published. What it had to say about forensics in the US (and, by extension, the world) was not very good. In summary, the panel found that forensic disciplines (with the exception of DNA) lack scientific rigor and statistical validation.

It can be said that every forensic discipline (with the exception of DNA) fails the test of “show me the data from which I can compute a probability of occurrence.” This is, of course, echoed in the Daubert factor concerning a technique’s known or potential error rate.
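A discipline can only meet that challenge with empirical validation data. As an illustrative sketch (the counts below are hypothetical, not drawn from any actual study), this is the kind of error-rate arithmetic that a black-box validation study makes possible:

```python
import math

# Hypothetical black-box study counts: how often examiners reported a
# "match" on comparison pairs known to come from different sources.
false_positives = 12
different_source_trials = 2000

# Point estimate of the false-positive rate.
rate = false_positives / different_source_trials

# Rough 95% confidence half-width using the normal approximation.
half_width = 1.96 * math.sqrt(rate * (1 - rate) / different_source_trials)

print(f"estimated false-positive rate: {rate:.2%} (95% CI ± {half_width:.2%})")
```

Without data of this kind, an examiner's claim of reliability rests on experience and judgment alone, which is precisely the gap the NAS Report identifies.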

Has this lack of scientific rigor and statistical validity led to wrongful convictions?  ABSOLUTELY.

But … more about the validity of forensics in future posts.


Filed under NAS Report, Resources