What Do We Do When We Set Bail?

posted by Judge_Burke @ 16:07
April 7, 2020

Paul S. Heaton (University of Pennsylvania Law School) has posted The Expansive Reach of Pretrial Detention (North Carolina Law Review, Vol. 98, Pg. 369, 2020) on SSRN. Here is the abstract:

Today we know much more about the effects of pretrial detention than we did even five years ago. Multiple empirical studies have emerged that shed new light on the far-reaching impacts of bail decisions made at the earliest stages of the criminal adjudication process. The takeaway from this new generation of studies is that pretrial detention has substantial downstream effects on both the operation of the criminal justice system and on defendants themselves, causally increasing the likelihood of a conviction, the severity of the sentence, and, in some jurisdictions, defendants’ likelihood of future contact with the criminal justice system. Detention also reduces future employment and access to social safety nets. This growing evidence of pretrial detention’s high costs should give impetus to reform efforts that increase due process protections to ensure detention is limited to only those situations where it is truly necessary and identify alternatives to detention that can better promote court appearance and public safety.


(How Much) Do Mandatory Minimums Matter?

posted by Judge_Burke @ 1:30 AM
April 7, 2020

Stephanie Holmes Didwania (Temple University – James E. Beasley School of Law) has posted (How Much) Do Mandatory Minimums Matter? on SSRN. Here is the abstract:

Understanding the relationship between mandatory minimums and sentencing outcomes is often difficult due to the endogeneity of mandatory minimum charging. This paper examines the prosecutorial and judicial response to what was arguably the most sweeping change to mandatory minimum charging policy in the last several decades: an August 2013 memo promulgated by then-Attorney General Eric Holder instructing all federal prosecutors to stop charging mandatory minimums in drug cases involving certain low-level offenders. This paper shows that the charging policy did not work as intended. Although prosecutors appear to have complied with the Memo’s charging directive, the policy change at most modestly reduced sentences for eligible defendants. I suggest that the Memo’s failure to meaningfully reduce sentence length in the eligible population can be explained by two facts. First, prior to the policy change, many defendants who would have been eligible to benefit from the Memo already received sentences below the mandatory minimum through two statutory exceptions. Second, the U.S. Sentencing Guidelines, which are tied to the mandatory minimum provisions, were not affected by the Memo.
One might also expect that the Memo would have reduced racial disparity in sentencing in light of prior work showing that black defendants are more likely to be charged with mandatory minimums than similarly situated white defendants, and that this charging disparity contributes to sentencing disparity. I do not find evidence that the Memo affected sentencing disparity. I conclude that advocates seeking to reduce sentences for federal drug defendants should expand their efforts to promote interventions that reach more serious offenders and to reform the U.S. Sentencing Guidelines.


How Are Courts Reacting?

posted by Judge_Burke @ 19:13
April 3, 2020
Courts across the country are quickly adapting formal policies to address the Covid-19 public health crisis. The Brennan Center put together this resource compiling court orders and official announcements from federal and state courts across the country.
As of April 3, the 94 federal district courts and 13 circuit courts of appeals have responded individually to the crisis, with policies ranging from carrying on business as usual, to restricting courthouse access for those with possible Covid-19 exposure, to suspending jury trials and other in-person proceedings. At the state level, courts have similarly responded to the crisis, with 34 states restricting or suspending most in-person proceedings, and 16 states giving local jurisdictions the option to implement such measures, according to the National Center for State Courts.
Members of the legal community have raised concerns that the changes being implemented as a result of the pandemic could have long-lasting consequences for the country’s court system. “We’re going to have to completely rethink how much has to be done in person, how much can be done using technology … Our operations will never be the same,” Texas Supreme Court Chief Justice Nathan Hecht told ABC News.

How Good Are We At Predicting Recidivism?

posted by Judge_Burke @ 14:43
April 3, 2020

Scientific American includes an article: “Will Past Criminals Reoffend? Humans Are Terrible at Guessing, and Computers Aren’t Much Better; A new study finds algorithms’ predictions are slightly superior but not under all circumstances” by Sophie Bushwick.

Here are some excerpts:

For decades, many researchers thought that statistics were better than humans were at predicting whether a released criminal would end up back in jail. Today commercial risk-assessment algorithms help courts all over the country with this type of forecasting. Their results can inform how legal officials decide on sentencing, bail and the offer of parole.


The widespread adoption of semi-automated justice continues despite the fact that, over the past few years, experts have raised concerns over the accuracy and fairness of these tools.


Most recently, a new Science Advances paper, published on Friday, found that algorithms performed better than people at predicting if a released criminal would be rearrested within two years. Researchers who worked on a previous study have contested these results, however. The one thing current analyses agree on is that nobody is close to perfect—both human and algorithmic predictions can be inaccurate and biased.


The new research is a direct response to a 2018 Science Advances paper that found untrained humans performed as well as a popular risk-assessment software called Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) at forecasting recidivism, or whether a convicted criminal would reoffend.


That study drew a great deal of attention, in part because it contradicted perceived wisdom. Clinical psychologist “Paul Meehl stated, in a famous book in 1954, that actuarial, or statistical, prediction was almost always better than unguided human judgment,” says John Monahan, a psychologist at the University of Virginia School of Law, who was not involved in the most recent study but has worked with one of its authors.


“And over the past six decades, scores of studies have proven him right.”


When the 2018 paper came out, COMPAS’s distributor, the criminal justice software company Equivant (formerly called Northpointe), posted an official response on its Web site saying the study mischaracterized the risk-assessment program and questioning the testing method used.


To test the conclusions of the 2018 paper, researchers at Stanford University and University of California, Berkeley, initially followed a similar method. Both studies used a data set of risk assessments performed by COMPAS. The data set covered about 7,000 defendants in Broward County in Florida and included each individual’s “risk factors”—salient information such as sex, age, the crime with which that person was charged and the number of his or her previous offenses. It also contained COMPAS’s prediction for whether the defendant would be rearrested within two years of release and confirmation of whether that prediction came true. From that information, the researchers could gauge COMPAS’s accuracy.


Additionally, the researchers used the data to create profiles, or vignettes, based on each defendant’s risk factors, which they showed to several hundred untrained humans recruited through the Amazon Mechanical Turk platform. They then asked the participants whether they thought a person in a vignette would commit another crime within two years.


The study from 2018 found that COMPAS displayed about 65 percent accuracy. Individual humans were slightly less correct, and the combined human estimate was slightly more so.


Following the same procedure as that paper, the more recent study confirmed these results.


“The first interesting thing we notice is that we could, in fact, replicate their experiment,” says Sharad Goel, a co-author of the new study and a computational social scientist at Stanford.


“But then we altered the experiment in various ways, and we extended it to several other data sets.” Over the course of these additional tests, he says, algorithms displayed more accuracy than humans.


First, Goel and his team expanded the scope of the original experiment. For example, they tested whether accuracy changed when predicting rearrest for any offense versus a violent crime.


They also analyzed evaluations from multiple programs: COMPAS, a different risk-assessment algorithm called the Level of Service Inventory-Revised (LSI-R) and a model that the researchers built themselves.


Second, the team tweaked the parameters of its experiment in several ways. For example, the previous study gave the human subjects feedback after they made each prediction, allowing people to learn more as they worked.


Goel argues that this approach is not true to real-life scenarios. “This type of immediate feedback is not feasible in the real world—judges, correctional officers, they don’t know outcomes for weeks or months after they made the decision,” he says. So the new study gave some subjects feedback while others received none.


“What we found there is that if we didn’t provide immediate feedback, then the performance dropped dramatically for humans,” Goel says.


The researchers behind the original study disagree with the idea that feedback renders their experiment unrealistic. Julia Dressel was an undergraduate computer science student at Dartmouth College when she worked on that paper and is currently a software engineer for Recidiviz, a nonprofit organization that builds data analytics tools for criminal justice reform. She notes that the people on Mechanical Turk may have no experience with the criminal justice system, whereas individuals predicting criminal behavior in the real world do. Her co-author Hany Farid, a computer scientist who worked at Dartmouth in 2018 and who is currently at U.C. Berkeley, agrees the people who use tools such as COMPAS in real life have more expertise than those who received feedback in the 2018 study. “I think they took that feedback a little too literally, because surely judges and prosecutors and parole boards and probation officers have a lot of information about people that they accumulate over years. And they use that information in making decisions,” he says.


The new paper also tested whether revealing more information about each potential backslider changed the accuracy of predictions. The original experiment provided only five risk factors about each defendant to the predictors. Goel and his colleagues tested this condition and compared it with the results when they provided 10 additional risk factors. The higher-information situation was more akin to a real court scenario, when judges have access to more than five pieces of information about each defendant. Goel suspected this scenario might trip up humans because the additional data could be distracting. “It’s hard to incorporate all of these things in a reasonable way,” he says.


Despite his reservations, the researchers found that the humans’ accuracy remained the same, although the extra information could improve an algorithm’s performance.


Based on the wider variety of experimental conditions, the new study concluded that algorithms such as COMPAS and LSI-R are indeed better than humans at predicting risk. This finding makes sense to Monahan, who emphasizes how difficult it is for people to make educated guesses about recidivism.


“It’s not clear to me how, in real life situations—when actual judges are confronted with many, many things that could be risk factors and when they’re not given feedback—how the human judges could be as good as the statistical algorithms,” he says.


But Goel cautions that his conclusion does not mean algorithms should be adopted unreservedly.


“There are lots of open questions about the proper use of risk assessment in the criminal justice system,” he says. “I would hate for people to come away thinking, ‘Algorithms are better than humans. And so now we can all go home.’”


Goel points out that researchers are still studying how risk-assessment algorithms can encode racial biases.


For instance, COMPAS can say whether a person might be arrested again—but one can be arrested without having committed an offense.


“Rearrest for low-level crime is going to be dictated by where policing is occurring,” Goel says, “which itself is intensely concentrated in minority neighborhoods.”


Researchers have been exploring the extent of bias in algorithms for years. Dressel and Farid also examined such issues in their 2018 paper.


“Part of the problem with this idea that you’re going to take the human out of [the] loop and remove the bias is: it’s ignoring the big, fat, whopping problem, which is the historical data is riddled with bias—against women, against people of color, against LGBTQ,” Farid says.


Dressel also notes that even when they outperform humans, the risk assessment tools tested in the new study do not have very high accuracy.


“The COMPAS tool is around 65 percent, and the LSI-R is around 70 percent accuracy. And when you’re thinking about how these tools are being used in a courtroom context, where they have very profound significance—and can very highly impact somebody’s life if they are held in jail for weeks before their trial—I think that we should be holding them to a higher standard than 65 to 70 percent accuracy—and barely better than human predictions.”


Although all of the researchers agreed that algorithms should be applied cautiously and not blindly trusted, tools such as COMPAS and LSI-R are already widely used in the criminal justice system. “I call it techno utopia, this idea that technology just solves our problems,” Farid says.


“If the past 20 years have taught us anything, it should have [been] that that is simply not true.”



A View of State Courts

posted by Judge_Burke @ 23:02
April 1, 2020

Michael Pollack (Benjamin N. Cardozo School of Law) has posted Courts Beyond Judging (Brigham Young University Law Review, 2021 Forthcoming) on SSRN. Here is the abstract:

Across all fifty states, a woefully understudied institution of government is responsible for a broad range of administrative, legislative, law enforcement, and judicial functions. That important institution is the state courts. While the literature has examined the federal courts and federal judges from innumerable angles, study of the state courts as institutions of state government — and not merely as sources of doctrine and resolvers of disputes — has languished. This Article remedies that oversight by drawing attention for the first time to the wide array of roles state courts serve, and by evaluating the suitability of both the allocation of these tasks and the various procedures by which they are carried out across the country.

In every state, on top of the ordinary adversarial dispute-resolution function that we expect judges to serve, it is state court judges who are charged with administrative functions like approving applications to change one’s name, to enter the legal profession, or to exercise constitutional rights like accessing abortion care without parental knowledge or consent. And it is often state court judges who are charged with or who have taken on a range of legislative and policymaking functions like redistricting and establishing specialized criminal courts for veterans, persons in need of drug treatment, and others. And in some states, it is state court judges who have the law enforcement power to decide whether a prosecutor’s charging choice was a wise exercise of her discretion. These are not mere odds and ends of governing either; weighty interests hang in the balance across the board.

In addition to developing this more complete portrait of the state courts — and of important variation in how these roles are structured across the states — this Article examines whether the interests at stake in each context are appropriately served when state court judges handle them. In some arenas, they are, and this Article places these facets of state court practice on firmer theoretical footing. In others, however, there is cause for concern. With respect to these tasks, this Article argues that state court judges need to be better guided by statute and subject to reason-giving and record-developing requirements that would channel their discretion, improve their decisionmaking, and enable more rigorous appellate review. But most important of all, this Article calls for states to make more conscious choices about structuring the roles they assign to state courts, and for scholars to devote more careful attention to these powerful and nuanced institutions.


Colorado Bans ICE Arrests In Courthouses

posted by Judge_Burke @ 13:52
March 31, 2020

For the time being, the pandemic may well prohibit much of the ordinary business conducted in state courthouses, but hopefully we will return to normal. In Colorado there will be a change when that happens. “Immigration and Customs Enforcement officers will no longer be allowed to arrest people for civil immigration violations in or around courthouses in Colorado.

Gov. Jared Polis signed Senate Bill 83 into law Monday. It prohibits ICE from making civil arrests while a person is in the courthouse or on its property or if the person is going to or from a court proceeding.

The bill excludes civil arrests related to a judge’s contempt-of-court order or other judicially issued process. A violation of the law could lead to a judge finding the agent in contempt of court, or the person could be subject to civil penalties from the attorney general.” For more: A new Colorado law bars ICE agents from making civil arrests at courthouses in the state. THE DENVER POST


The Complications Of A Gender Change

posted by Judge_Burke @ 13:37
March 30, 2020

Courts fairly routinely change a person’s name. Many state court judges also change the birth certificate, because many problems arise when the birth certificate does not match the gender and name on, for example, the driver’s license. And matters get complicated when the person was born in another state. “Wyoming Supreme Court to decide on birth certificate gender changes”: Isabella Alves of The Wyoming Tribune Eagle has this report.


Transparency in Plea Bargaining

posted by Judge_Burke @ 14:49
March 27, 2020

Jenia Iontcheva Turner (Southern Methodist University – Dedman School of Law) has posted Transparency in Plea Bargaining (Notre Dame Law Review, Vol. 96, No. 1, Forthcoming) on SSRN. Here is the abstract:


Plea bargaining is the dominant method by which our criminal justice system resolves cases. More than 95% of state and federal convictions today are the product of guilty pleas. Yet the practice continues to draw widespread criticism. Critics charge that it is too coercive and leads innocent defendants to plead guilty, that it obscures the true facts in criminal cases and produces overly lenient sentences, and that it enables disparate treatment of similarly situated defendants.

Another feature of plea bargaining — its lack of transparency — has received less attention, but is also concerning. In contrast to the trials it replaces, plea bargaining occurs privately and off-the-record. Victims and the public are excluded, and the defendant is typically absent. While the Sixth and First Amendments rights of public access extend to a range of pretrial criminal proceedings, they do not apply to plea negotiations. For the most part, rules and statutes also fail to require transparency in the process. As a result, plea bargaining is largely shielded from outside scrutiny, and critical plea-related data are missing.

There are some valid reasons for protecting aspects of plea negotiations from public scrutiny. Confidentiality fosters candor in the discussions and may encourage prosecutors to use their discretion more leniently. It can help protect cooperating defendants from retaliation. And it may expedite cases and conserve resources.

Yet the secrecy of the process also raises concerns. It prevents adequate oversight of coercive plea bargains, untruthful guilty pleas, and unequal treatment of defendants. It can hinder defense attorneys from providing fully informed advice to their clients. It can also potentially impair victims’ rights and interests. Finally, the absence of transparency leaves judges with few guideposts by which to evaluate plea bargains and inhibits informed public debate about criminal justice reform.

This Article reviews plea bargaining laws and practices across the United States and argues that we can do more to enhance the documentation and transparency of plea bargaining. It then proposes concrete areas in which transparency can be improved without significant costs to the criminal justice system.


What Do Attorneys Think About Risk Assessment Tools?

posted by Judge_Burke @ 21:20
March 24, 2020

Anne Metz, John Monahan, Luke Siebert, and Brandon L. Garrett (University of Lynchburg; University of Virginia School of Law; Masters in Public Health Candidate, University of Virginia School of Medicine, and Research Assistant, University of Virginia School of Law; and Duke University School of Law) have posted Valid or Voodoo: A Qualitative Study of Attorney Attitudes Towards Risk Assessment in Sentencing and Plea Bargaining on SSRN. Here is the abstract:

Prior research largely has explored judicial attitudes toward risk assessment in sentencing. Little is known about how other court actors, specifically, prosecutors and defense attorneys, make use of risk information at sentencing hearings and during plea negotiations. Here, we report a qualitative study on the use of risk assessment by prosecutors and defense attorneys in Virginia. A prior quantitative study (n=70) pointed to a statistically significant difference in how prosecutors and defense attorneys regard the role of recidivism risk in sentencing hearings and in plea bargaining. Based on the results of the quantitative study, we collected follow-up qualitative data via interview (n=30) to explain this unexpected difference. Three themes emerged from the interviews: Who is the lawyer’s identified client? (With prosecutors choosing the general public and defense attorneys choosing the particular defendant); Does past behavior strongly predict future behavior? (With prosecutors being more likely than defense attorneys to believe it does); and Is the Nonviolent Risk Assessment a statistically valid tool for assessing recidivism risk? (With prosecutors and defense attorneys equally likely to believe that the tool was no more valid than their own intuitive professional experience.) Virginia is regarded as one of the leading innovators in the use of risk assessment. Thus, as more states and the federal government adopt a risk-based approach to sentencing, studies on Virginia can provide useful guidance on the implementation process.


Procedural Fairness

posted by Judge_Burke @ 19:00
March 17, 2020
Procedural Justice During a Pandemic

by Steve Leben

Wow. It seems that the world has changed around us in a heartbeat. The changes are disruptive and unsettling. And that’s true for just about everybody—inside and outside the courthouse.

As judges and others connected to the justice system work through this, we are making orders and changes to how we handle cases that will have profound effects on people. The stakes are high, and the amount of time we can spend on individual cases will usually be—understandably at this moment—quite limited. Even so, we need to keep procedural-justice principles in mind; they represent the public’s expectations of us.

One of the core principles is that we need to be transparent and explain our decisions. Even in making orders on our own motion that change hearing dates and keep some orders in effect pending a postponed hearing, we can explain why we’re doing that. Some may respond that it’s obvious why we’re doing these things. But it may not be completely clear to all who are affected. We can at least provide some explanation for the decisions made, including the key considerations we took into account.

For example, in civil-protection-order cases, we may well be leaving a temporary ex parte order of protection in place for an extended period. Perhaps the order was unfair from the outset, having been based on a one-sided understanding of the situation. Even if the order is fair, the party on the receiving end—who has not yet been heard—may perceive its fairness differently. And now we’re leaving it in place without hearing from that party. We should at least provide an explanation of why we did that. And if possible, we should also provide some mechanism for written motions for relief in truly unjust circumstances. Doing that would meet two of the key procedural-justice principles—both providing an explanation and some forum in which we will listen to other viewpoints.

Another important principle of procedural justice is showing respect for those who are coming through or working in our court system. Let’s keep that one in mind too; there are creative ways to show respect for others. One is by recognizing that the demands on all of us may be quite different for a while. Many will be faced with the need to take care of children or other family members while still interacting with the courts. Texas trial judge Emily Miskel (@emilymiskel on Twitter) came up with a creative but respectful solution: an order suspending the normal business dress code for both in-person and remote appearances.

For practical and comprehensive information about handling court cases during this pandemic, check out the National Center for State Courts website, http://www.ncsc.org.

One more thing: take care of yourself. You can’t do a good job making decisions for others unless you take care of yourself.

There’s a book I reviewed a few years ago by law professors Nancy Levit and Doug Linder called The Happy Lawyer: Making a Good Life in the Law. My review focused on how judges could use the research found there to be better judges. Levit and Linder reported that the two biggest factors in improving happiness were control and social connections. Judges usually have control of lots of the things we do, and trial judges often have ample opportunity for social interactions. This pandemic is quickly turning all of that on its head. We seem to lose control hour by hour, day by day, of more and more of what’s going on in our daily activities. And we also are losing our social connections.

Yet as judges, we still must make decisions that will have significant effects on other people’s lives. We need to be sure we remain in the mental and emotional shape to do that well.

Social psychologist Pam Casey, Kevin Burke, and I put together an article about how judges generally can be at the top of their mental game. Give some consideration to what you may need to do right now to keep yourself in the right mental frame to be your best as a judge.

I only realized this morning that part of what was both distracting and annoying me was the loss of control. I realized that when I found myself ironing the no-iron shirts that come out of the dryer in almost-good-enough shape. Some of them could use just a touch of the iron, but usually I don’t go there. Today I did—with starch. I realized that this was just something I could control. It was a little thing, but I needed it today. And I’m grateful that the experience helped me to step back and think more about what’s going on and how I can best deal with it.

With a quick check back at what Levit and Linder had taught me, I saw how this fit into a bigger picture. I’ll think more now about how to keep a sense of control and some social connections as I work through the next weeks or months. I hope you will think about what you need to do for you too; we need our judges at the best they can be right now.

These are some of my thoughts. I, like you, have little training for a moment like this. I welcome your thoughts and suggestions in the comments.

Good luck to all of us as we work through these times, day by day, courthouse by courthouse.—Steve Leben

Steve Leben | March 16, 2020 at 3:09 pm | Categories: Uncategorized | URL: https://wp.me/p1T7De-cq