Sunday, April 24, 2016

Forensics and The Law

Hello Readers,

In my last post I discussed Convicting the Innocent, which examines how false or misleading forensic evidence contributed to the first 250 DNA exoneration cases. It covered several types of forensic practices and how they can be twisted, or how they can simply be flat-out bad “science.” This week I wanted to talk about some of the ways the law regulates these possibly unreliable practices and how we can keep lies from coming into our courtrooms.

So, first of all, almost all experts are paid for their testimony by the party who called them, whether for the defense or the plaintiff. There may be a few good souls who do pro bono (free) work, but this is rare. This should not affect their testimony nor bias them in any way, and it is illegal to pay an expert to give a certain opinion. That being said, we can’t always be sure whether an expert is biased, and furthermore most parties will choose experts who they expect to give a favorable opinion. It is the job of the opposing counsel to obtain testimony about whether the expert is being paid, whether that payment is biasing their testimony, or whether any other factor is biasing their testimony. If this information isn’t brought out on the stand, the jury will never hear it. For the prosecution’s witnesses in particular, many of the analysts called work for law enforcement, which also has the potential to affect their testimony. Analysts do not typically work blind, so if, for example, a suspect confesses, they may know that while they’re working, giving them an idea of what the “right” conclusion is supposed to be. Again, if this information is not brought out on the stand, the jury will never know whether the information an analyst had about the crime affected their judgment.

The two big court cases that regulate expert testimony are Frye v. United States (1923) and Daubert v. Merrell Dow Pharmaceuticals (1993).
In Frye, the court considered the admissibility of a practice called the systolic blood pressure test, an early precursor to the polygraph. As a result of this case, the court imposed a new burden on expert testimony: it must be “generally accepted” by the relevant scientific community. The big problem with this standard is its vagueness. What counts as generally accepted? Does a majority of the community have to accept it? A large percentage? Does it have to be unchallenged? How much research has to be done on a theory before it counts? Then again, a lot of standards in law are kept vague in order to apply to a variety of cases. Something can sound nice and specific, and then we have to start making a million exceptions because unique situations come up where the standard just doesn’t work. But vagueness also leaves a lot of room for interpretation. The Frye standard also seemed to contradict Rule 702 of the Federal Rules of Evidence, which states that if scientific or technical knowledge will help the trier of fact (the judge or the jury) understand the evidence or determine a fact in issue, a qualified expert may testify. It’s possible Rule 702 could allow a witness to testify while the Frye standard says the evidence isn’t generally accepted. So which one should a judge prioritize?

Luckily, Daubert created a bit more clarity. First, it held that Rule 702, not the Frye standard, governs the admissibility of expert testimony in federal court. It also made judges the “gatekeepers” of expert testimony relying on scientific evidence: they must decide whether that evidence is scientifically reliable before admitting it. But this left judges confused as to which practices Daubert applied to. That confusion led to the amending of Rule 702, which laid out more specific guidelines for applying Daubert and for deciding whether to admit expert testimony. These guidelines aren’t meant for the judge to decide whether an expert’s conclusions are correct, but rather whether they were reached using scientifically valid evidence and methods. Unfortunately, judges are not scientists and may not have the knowledge to properly decide when to admit evidence. Therefore, faulty forensic testimony is often still allowed.

This is not meant to be a comprehensive list of every law regarding expert testimony, but these are the major cases governing the issue. I’ve also been working hard on my final presentation and product, and I’m very excited to talk about what I’ve learned.

Monday, April 11, 2016

March 28-April1: Convicting the Innocent

Hello Readers,

This week I’ve been working on a case in the federal habeas process. I’ve been reading old briefs, familiarizing myself with news articles and other media on the case, and researching information on the experts who testified in the trial and appeals process. I also sat in on a phone conversation between Colleen and the client. Those interactions are important to me personally because it’s a reminder that false convictions are not just statistics; they’re real people with real lives facing dire consequences for crimes they did not commit.

I’ve also been reading some of Brandon L. Garrett’s Convicting the Innocent. This book explores the first 250 DNA exonerations in the United States and asks the question “What went wrong?” So far, I’ve only read the chapters relevant to my project, but it’s so interesting that I might just have to get my own copy and read the whole thing. The chapters I’ve read so far are his Introduction and his chapter titled Flawed Forensics. In the Introduction, Garrett gives an overview of all the mistakes made in the 250 exonerations. The mistakes he delves into are false confessions, wrongful identifications by victims or eyewitnesses, flawed forensics, criminal informants, and ineffective defense counsel.

His chapter Flawed Forensics specifically explores what went wrong with the forensics in the relevant exonerations. One thing he says that particularly resonated with me, given the cases I’ve been working on with AIP, was that he expected the analysts to have done the best they could with the science and technology that they had. He expected the flawed results to merely be unavoidable mistakes due to unsophisticated science or limited technology. After all, these exoneration cases are decades old. But he was shocked to learn that “even by the standards of the 1980’s...forensics analysts should have known the evidence they presented was unsound.” This is also what I’ve found working at AIP. In the case I spent all of February working on, the analyst who testified to the damning “scientific” practice that convicted AIP’s client had published work showing the practice to be unreliable. He knowingly hid this information at trial and testified to evidence he knew to be scientifically baseless. In the case I’m currently familiarizing myself with, the prosecution’s trial expert also has published work that contradicts the testimony he gave at trial against AIP’s client. Garrett goes into stories about forensic evidence that was “simply false.” It’s infuriating to think that some analysts taking the stand are knowingly distorting evidence, because by doing so not only are they putting the wrong people in jail, they are often protecting the guilty party.

Furthermore, many of these forensic practices have no scientific definition of “consistent,” meaning there is no agreed-upon way to test or compare samples; the conclusions are completely subjective, based on the opinion of the individual analyst rather than scientifically sound methods. An example of this is bite mark comparison, a method formerly used to compare bite marks found on victims to teeth molds of suspects. Two years ago, I went to see Ray Krone speak, a man who spent ten years on death row for a murder that DNA later showed he did not commit. Bite mark comparison was the forensic evidence used to connect him to the crime. The problem with this practice is that whether the bite marks look alike, and how alike they appear to be, is completely up to the individual analyst. In fact, the prosecutors met with an analyst who said that the teeth molds from Ray Krone were distinctly different from the bite mark on the victim. This analyst was not called to testify at trial, and his opinion was not disclosed to the defense. Because of cases like Ray Krone’s, bite mark comparison can now only be used to exclude suspects (for example, if a bite mark on a victim showed more teeth than a suspect has, that information could be used to show the suspect definitely is not the perpetrator). It’s great that this progress is being made, but it came at the cost of Ray Krone and others spending decades in prison. And there are practices that are equally subjective still being used today.

Even scientifically sound evidence is never completely infallible. The TED Talk I posted a few weeks ago goes into the mistakes that can lead to flawed DNA evidence (linked here: https://www.youtube.com/watch?v=Lw-zyoYlIsA). In his book, Garrett says that in 3 of the 250 cases, DNA evidence wrongfully indicated guilt. In one case, the analyst did not finish the testing (which, once completed, showed the exoneree’s innocence); in another, the analyst misrepresented the statistics, wrongfully claiming the DNA found uniquely identified the defendant; and the third involved a laboratory error. That is not to say that DNA analysis is not a highly sophisticated science; DNA is very important, and it is much more likely to reveal the truth than to cause a false conviction. In every single one of the exonerations discussed in this book, DNA ultimately freed these inmates. This is just a reminder that no evidence is perfect.

I think false convictions are often viewed as highly rare and the sacrifice we need to make to put the “bad guys” away. But this book offers insight into systemic problems that are certainly not uncommon and have most definitely affected more people than the 250 exonerees that this book covers. This is a complex topic that I will need more than one blog post to discuss. If you want to know more, please continue to follow my blog and consider checking out this book. Until next time.

Bibliography

Garrett, Brandon. Convicting the Innocent: Where Criminal Prosecutions Go Wrong. Cambridge, MA: Harvard UP, 2011. Print.