Monday, April 11, 2016

March 28–April 1: Convicting the Innocent

Hello Readers,

This week I’ve been working on a case in the federal habeas process. I’ve been reading old briefs, familiarizing myself with news articles and other media on the case, and researching the experts who testified at trial and on appeal. I also sat in on a phone conversation between Colleen and the client. Those interactions are important to me personally because they’re a reminder that false convictions are not just statistics; they’re real people with real lives facing dire consequences for crimes they did not commit.

I’ve also been reading some of Brandon L. Garrett’s Convicting the Innocent. This book explores the first 250 DNA exonerations in the United States and asks the question “What went wrong?” So far, I’ve only read the chapters relevant to my project, but it’s so interesting that I might just have to get my own copy and read the whole thing. The chapters I’ve read so far are the introduction and the chapter titled “Flawed Forensics.” In the introduction, Garrett gives an overview of the mistakes made in the 250 exonerations. The causes he delves into are false confessions, wrongful identification by victims or eyewitnesses, flawed forensics, criminal informants, and ineffective defense counsel.

His chapter “Flawed Forensics” specifically explores what went wrong with the forensics in the relevant exonerations. One thing he says that particularly resonates with me, given the cases I’ve been working on with AIP, is that he expected the analysts to have done the best they could with the science and technology they had. He expected the flawed results to merely be unavoidable mistakes due to unsophisticated science or limited technology. After all, these exoneration cases are decades old. But he was shocked to learn that “even by the standards of the 1980s...forensics analysts should have known the evidence they presented was unsound.” This is also what I’ve found working at AIP. In the case I spent all of February working on, the analyst who testified to the damning “scientific” practice that convicted AIP’s client had published work showing that very practice to be unreliable. He knowingly hid this information at trial and testified to evidence he knew to be scientifically baseless. In the case I’m currently familiarizing myself with, the prosecution’s trial expert has also published work that contradicts the testimony he gave at trial against AIP’s client. Garrett recounts stories about forensic evidence that was “simply false.” It’s infuriating to think that some analysts taking the stand are knowingly distorting evidence, because by doing so they are not only putting the wrong people in jail, they are often protecting the guilty party.

Furthermore, many of these forensic practices have no scientific definition of what counts as “consistent,” meaning there is no agreed-upon way to test or examine them; they are completely subjective, resting on the opinion of the individual analyst rather than scientifically sound evidence. An example of this is bite mark comparison, a method formerly used to compare bite marks found on victims to teeth molds of suspects. Two years ago, I went to see Ray Krone speak, a man who spent ten years on death row for a murder that DNA later showed he did not commit. Bite mark comparison was the forensic evidence used to connect him to the crime. The problem with this practice is that whether the bite marks look alike, and how alike they appear to be, is completely up to the individual analyst. In fact, the prosecutors met with an analyst who said that the teeth molds from Ray Krone were distinctly different from the bite mark on the victim. This analyst was not called to testify at trial, and his opinion was not disclosed to the defense. Because of cases like Ray Krone’s, bite mark comparison can now only be used to exclude suspects (for example, if a bite mark on a victim showed more teeth than a suspect has, that information could be used to show the suspect definitely is not the perpetrator). It’s great that this progress is being made, but it came at the cost of Ray Krone and others spending years or decades in prison. And practices that are equally subjective are still being used today.

Even scientifically sound evidence is never completely infallible. The TED Talk I posted a few weeks ago goes into the mistakes that can lead to flawed DNA evidence (linked here: https://www.youtube.com/watch?v=Lw-zyoYlIsA). In his book, Garrett says that in 3 of the 250 cases, DNA wrongfully indicated guilt. In one of these cases, the analyst did not finish the testing (which, when completed, showed the exoneree’s innocence); in another, the analyst misrepresented the statistics, wrongly claiming the DNA found uniquely identified the defendant; and the third involved a laboratory error. That is not to say that DNA is not a highly sophisticated science; DNA is very important, and it is far more likely to reveal the truth than to cause a false conviction. In every single one of the exonerations discussed in this book, DNA ultimately freed these inmates. This is just a reminder that no evidence is perfect.

I think false convictions are often viewed as extremely rare, or as the sacrifice we need to make to put the “bad guys” away. But this book offers insight into systemic problems that are certainly not uncommon and have most definitely affected more people than the 250 exonerees this book covers. This is a complex topic that I will need more than one blog post to discuss. If you want to know more, please continue to follow my blog and consider checking out this book. Until next time.

Bibliography

Garrett, Brandon. Convicting the Innocent: Where Criminal Prosecutions Go Wrong. Cambridge, MA: Harvard UP, 2011. Print.

2 comments:

  1. Great summary. Glad bite marks can only be used to rule out a suspect, and really hope there is progress in "standardizing" the use of other types of evidence.

    1. Yes! Inclusion v. exclusion is very important! Many of these practices (especially comparative ones like bite mark or hair comparison) can be soundly used to rule out suspects, but are too varied in results to uniquely identify one person.