About kevinburkeaja

Judge Burke is a Senior District Court Judge in Minnesota. He is a past president of the American Judges Association.

ABA Speaks On The Judicial Ethics of Imposing Fines & Fees

Formal Opinion 490 March 24, 2020

Ethical Obligations of Judges in Collecting Legal Financial Obligations and Other Debts. Here is the summary:

This opinion addresses the ethical requirement of judges under the Model Code of Judicial Conduct, Rules 1.1 and 2.6, to undertake a meaningful inquiry into a litigant’s ability to pay court fines, fees, restitution, other charges, bail, or civil debt before using incarceration as punishment for failure to pay, as inducement to pay or appear, or as a method of purging a financial obligation whenever state or federal law so provides. Meaningful inquiry is also required by Rules 1.2, 2.2, and 2.5 as a fundamental element of procedural justice necessary to maintain the integrity, impartiality, and fairness of the administration of justice and the public’s faith in it. According to the same Rules, a judge may not set, impose, or collect legal financial obligations under circumstances that give the judge an improper incentive either to multiply legal financial obligations or to fail to inquire into a litigant’s ability to pay. The opinion also discusses innovative guidance on best practices for making ability to pay inquiries, including model bench cards, methods of notice, and techniques for efficiently eliciting relevant financial information from litigants.

What To Do About ICE Arrests In State Courthouses?

Legislators in Colorado and Washington are considering bills that would prevent U.S. Immigration and Customs Enforcement (ICE) from making arrests in and around courthouses in those states.
Colorado’s proposal, SB 83, follows the state’s enactment of a new law last year that bars local jails from keeping people in custody for ICE beyond their release dates. If passed, the bill “would be one of the — if not the — strongest statewide limitations yet on the agency’s ability to carry out immigration enforcement in Colorado,” according to the Colorado Sun. Washington’s proposal, SB 6522, follows a lawsuit filed last year by the state’s attorney general against the federal government to stop such arrests.
Over the past year, the Brennan Center has documented efforts in states across the country, including in California, Oregon, New York, and New Jersey, to prohibit or limit ICE’s ability to make courthouse arrests.

What Do We Do When We Set Bail?

Paul S. Heaton (University of Pennsylvania Law School) has posted The Expansive Reach of Pretrial Detention (North Carolina Law Review, Vol. 98, Pg. 369, 2020) on SSRN. Here is the abstract:

Today we know much more about the effects of pretrial detention than we did even five years ago. Multiple empirical studies have emerged that shed new light on the far-reaching impacts of bail decisions made at the earliest stages of the criminal adjudication process. The takeaway from this new generation of studies is that pretrial detention has substantial downstream effects on both the operation of the criminal justice system and on defendants themselves, causally increasing the likelihood of a conviction, the severity of the sentence, and, in some jurisdictions, defendants’ likelihood of future contact with the criminal justice system. Detention also reduces future employment and access to social safety nets. This growing evidence of pretrial detention’s high costs should give impetus to reform efforts that increase due process protections to ensure detention is limited to only those situations where it is truly necessary and identify alternatives to detention that can better promote court appearance and public safety.

(How Much) Do Mandatory Minimums Matter?

Stephanie Holmes Didwania (Temple University – James E. Beasley School of Law) has posted (How Much) Do Mandatory Minimums Matter? on SSRN. Here is the abstract:

Understanding the relationship between mandatory minimums and sentencing outcomes is often difficult due to the endogeneity of mandatory minimum charging. This paper examines the prosecutorial and judicial response to what was arguably the most sweeping change to mandatory minimum charging policy in the last several decades: an August 2013 memo promulgated by then-Attorney General Eric Holder instructing all federal prosecutors to stop charging mandatory minimums in drug cases involving certain low-level offenders. This paper shows that the charging policy did not work as intended. Although prosecutors appear to have complied with the Memo’s charging directive, the policy change at most modestly reduced sentences for eligible defendants. I suggest that the Memo’s failure to meaningfully reduce sentence length in the eligible population can be explained by two facts. First, prior to the policy change, many defendants who would have been eligible to benefit from the Memo already received sentences below the mandatory minimum through two statutory exceptions. Second, the U.S. Sentencing Guidelines, which are tied to the mandatory minimum provisions, were not affected by the Memo.
One might also expect that the Memo would have reduced racial disparity in sentencing in light of prior work showing that black defendants are more likely to be charged with mandatory minimums than similarly-situated white defendants, and that this charging disparity contributes to sentencing disparity. I do not find evidence that the Memo affected sentencing disparity. I conclude that advocates seeking to reduce sentences for federal drug defendants should focus their efforts on promoting interventions that reach more serious offenders and on reforming the U.S. Sentencing Guidelines.

How Are Courts Reacting?

Courts across the country are quickly adapting formal policies to address the Covid-19 public health crisis. The Brennan Center put together this resource compiling court orders and official announcements from federal and state courts across the country.
 
As of April 3, the 94 federal district courts and 13 circuit courts of appeals have responded individually to the crisis, with policies ranging from carrying on business as usual to restricting courthouse access for those with possible Covid-19 exposure to suspending jury trials and other in-person proceedings. At the state level, courts have similarly responded to the crisis, with 34 states restricting or suspending most in-person proceedings, and 16 states giving local jurisdictions the option to implement such measures, according to the National Center for State Courts.
 
Members of the legal community have raised concerns that the changes being implemented as a result of the pandemic could have long-standing consequences for the country’s court system. “We’re going to have to completely rethink how much has to be done in person, how much can be done using technology … Our operations will never be the same,” Texas Supreme Court Chief Justice Nathan Hecht told ABC News.

How Good Are We At Predicting Recidivism?

Scientific American has an article by Sophie Bushwick: “Will Past Criminals Reoffend? Humans Are Terrible at Guessing, and Computers Aren’t Much Better; A new study finds algorithms’ predictions are slightly superior but not under all circumstances.”

Here are some excerpts:

For decades, many researchers thought that statistics were better than humans were at predicting whether a released criminal would end up back in jail. Today commercial risk-assessment algorithms help courts all over the country with this type of forecasting. Their results can inform how legal officials decide on sentencing, bail and the offer of parole.

 

The widespread adoption of semi-automated justice continues despite the fact that, over the past few years, experts have raised concerns over the accuracy and fairness of these tools.

 

Most recently, a new Science Advances paper, published on Friday, found that algorithms performed better than people at predicting if a released criminal would be rearrested within two years. Researchers who worked on a previous study have contested these results, however. The one thing current analyses agree on is that nobody is close to perfect—both human and algorithmic predictions can be inaccurate and biased.

 

The new research is a direct response to a 2018 Science Advances paper that found untrained humans performed as well as a popular risk-assessment software called Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) at forecasting recidivism, or whether a convicted criminal would reoffend.

 

That study drew a great deal of attention, in part because it contradicted perceived wisdom. Clinical psychologist “Paul Meehl stated, in a famous book in 1954, that actuarial, or statistical, prediction was almost always better than unguided human judgment,” says John Monahan, a psychologist at the University of Virginia School of Law, who was not involved in the most recent study but has worked with one of its authors.

 

“And over the past six decades, scores of studies have proven him right.”

 

When the 2018 paper came out, COMPAS’s distributor, the criminal justice software company Equivant (formerly called Northpointe), posted an official response on its Web site saying the study mischaracterized the risk-assessment program and questioning the testing method used.

 

To test the conclusions of the 2018 paper, researchers at Stanford University and University of California, Berkeley, initially followed a similar method. Both studies used a data set of risk assessments performed by COMPAS. The data set covered about 7,000 defendants in Broward County in Florida and included each individual’s “risk factors”—salient information such as sex, age, the crime with which that person was charged and the number of his or her previous offenses. It also contained COMPAS’s prediction for whether the defendant would be rearrested within two years of release and confirmation of whether that prediction came true. From that information, the researchers could gauge COMPAS’s accuracy.

 

Additionally, the researchers used the data to create profiles, or vignettes, based on each defendant’s risk factors, which they showed to several hundred untrained humans recruited through the Amazon Mechanical Turk platform. They then asked the participants whether they thought a person in a vignette would commit another crime within two years.

 

The study from 2018 found that COMPAS displayed about 65 percent accuracy. Individual humans were slightly less correct, and the combined human estimate was slightly more so.

 

Following the same procedure as that paper, the more recent study confirmed these results.

 

“The first interesting thing we notice is that we could, in fact, replicate their experiment,” says Sharad Goel, a co-author of the new study and a computational social scientist at Stanford.

 

“But then we altered the experiment in various ways, and we extended it to several other data sets.” Over the course of these additional tests, he says, algorithms displayed more accuracy than humans.

 

First, Goel and his team expanded the scope of the original experiment. For example, they tested whether accuracy changed when predicting rearrest for any offense versus a violent crime.

 

They also analyzed evaluations from multiple programs: COMPAS, a different risk-assessment algorithm called the Level of Service Inventory-Revised (LSI-R) and a model that the researchers built themselves.

 

Second, the team tweaked the parameters of its experiment in several ways. For example, the previous study gave the human subjects feedback after they made each prediction, allowing people to learn more as they worked.

 

Goel argues that this approach is not true to real-life scenarios. “This type of immediate feedback is not feasible in the real world—judges, correctional officers, they don’t know outcomes for weeks or months after they made the decision,” he says. So the new study gave some subjects feedback while others received none.

 

“What we found there is that if we didn’t provide immediate feedback, then the performance dropped dramatically for humans,” Goel says.

 

The researchers behind the original study disagree with the idea that feedback renders their experiment unrealistic. Julia Dressel was an undergraduate computer science student at Dartmouth College when she worked on that paper and is currently a software engineer for Recidiviz, a nonprofit organization that builds data analytics tools for criminal justice reform. She notes that the people on Mechanical Turk may have no experience with the criminal justice system, whereas individuals predicting criminal behavior in the real world do. Her co-author Hany Farid, a computer scientist who worked at Dartmouth in 2018 and who is currently at U.C. Berkeley, agrees the people who use tools such as COMPAS in real life have more expertise than those who received feedback in the 2018 study. “I think they took that feedback a little too literally, because surely judges and prosecutors and parole boards and probation officers have a lot of information about people that they accumulate over years. And they use that information in making decisions,” he says.

 

The new paper also tested whether revealing more information about each potential backslider changed the accuracy of predictions. The original experiment provided only five risk factors about each defendant to the predictors. Goel and his colleagues tested this condition and compared it with the results when they provided 10 additional risk factors. The higher-information situation was more akin to a real court scenario, when judges have access to more than five pieces of information about each defendant. Goel suspected this scenario might trip up humans because the additional data could be distracting. “It’s hard to incorporate all of these things in a reasonable way,” he says.

 

Despite these reservations, the researchers found that the humans’ accuracy remained the same, although the extra information could improve an algorithm’s performance.

 

Based on the wider variety of experimental conditions, the new study concluded that algorithms such as COMPAS and LSI-R are indeed better than humans at predicting risk. This finding makes sense to Monahan, who emphasizes how difficult it is for people to make educated guesses about recidivism.

 

“It’s not clear to me how, in real life situations—when actual judges are confronted with many, many things that could be risk factors and when they’re not given feedback—how the human judges could be as good as the statistical algorithms,” he says.

 

But Goel cautions that his conclusion does not mean algorithms should be adopted unreservedly.

 

“There are lots of open questions about the proper use of risk assessment in the criminal justice system,” he says. “I would hate for people to come away thinking, ‘Algorithms are better than humans. And so now we can all go home.’”

 

Goel points out that researchers are still studying how risk-assessment algorithms can encode racial biases.

 

For instance, COMPAS can say whether a person might be arrested again—but one can be arrested without having committed an offense.

 

“Rearrest for low-level crime is going to be dictated by where policing is occurring,” Goel says, “which itself is intensely concentrated in minority neighborhoods.”

 

Researchers have been exploring the extent of bias in algorithms for years. Dressel and Farid also examined such issues in their 2018 paper.

 

“Part of the problem with this idea that you’re going to take the human out of [the] loop and remove the bias is: it’s ignoring the big, fat, whopping problem, which is the historical data is riddled with bias—against women, against people of color, against LGBTQ,” Farid says.

 

Dressel also notes that even when they outperform humans, the risk assessment tools tested in the new study do not have very high accuracy.

 

“The COMPAS tool is around 65 percent, and the LSI-R is around 70 percent accuracy. And when you’re thinking about how these tools are being used in a courtroom context, where they have very profound significance—and can very highly impact somebody’s life if they are held in jail for weeks before their trial—I think that we should be holding them to a higher standard than 65 to 70 percent accuracy—and barely better than human predictions.”
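The accuracy figures Dressel cites refer to a simple metric: the share of two-year rearrest predictions that match what actually happened. As a rough illustration of how that number is computed, here is a minimal sketch with entirely made-up data; none of it comes from the COMPAS or LSI-R datasets, and the function is only a toy stand-in for the studies’ evaluation code:

```python
def accuracy(predictions, outcomes):
    """Fraction of binary predictions that match the observed outcomes."""
    assert len(predictions) == len(outcomes)
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(predictions)

# Hypothetical data: 1 = rearrest predicted/observed within two years, 0 = none.
algorithm_preds = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
human_preds     = [1, 1, 1, 0, 0, 0, 1, 1, 1, 0]
outcomes        = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

print(f"algorithm: {accuracy(algorithm_preds, outcomes):.0%}")  # algorithm: 80%
print(f"human:     {accuracy(human_preds, outcomes):.0%}")      # human:     50%
```

Note that a single accuracy percentage hides exactly the issues the article raises: it says nothing about whether errors fall more heavily on one group, or about the cost of a false positive (detention) versus a false negative.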

 

Although all of the researchers agreed that algorithms should be applied cautiously and not blindly trusted, tools such as COMPAS and LSI-R are already widely used in the criminal justice system. “I call it techno utopia, this idea that technology just solves our problems,” Farid says.

 

“If the past 20 years have taught us anything, it should have [been] that that is simply not true.”

 

A View of State Courts

Michael Pollack (Benjamin N. Cardozo School of Law) has posted Courts Beyond Judging (Brigham Young University Law Review, 2021 Forthcoming) on SSRN. Here is the abstract:

Across all fifty states, a woefully understudied institution of government is responsible for a broad range of administrative, legislative, law enforcement, and judicial functions. That important institution is the state courts. While the literature has examined the federal courts and federal judges from innumerable angles, study of the state courts as institutions of state government — and not merely as sources of doctrine and resolvers of disputes — has languished. This Article remedies that oversight by drawing attention for the first time to the wide array of roles state courts serve, and by evaluating the suitability of both the allocation of these tasks and the various procedures by which they are carried out across the country.

In every state, on top of the ordinary adversarial dispute-resolution function that we expect judges to serve, it is state court judges who are charged with administrative functions like approving applications to change one’s name, to enter the legal profession, or to exercise constitutional rights like accessing abortion care without parental knowledge or consent. And it is often state court judges who are charged with or who have taken on a range of legislative and policymaking functions like redistricting and establishing specialized criminal courts for veterans, persons in need of drug treatment, and others. And in some states, it is state court judges who have the law enforcement power to decide whether a prosecutor’s charging choice was a wise exercise of her discretion. These are not mere odds and ends of governing either; weighty interests hang in the balance across the board.

In addition to developing this more complete portrait of the state courts — and of important variation in how these roles are structured across the states — this Article examines whether the interests at stake in each context are appropriately served when state court judges handle them. In some arenas, they are, and this Article places these facets of state court practice on firmer theoretical footing. In others, however, there is cause for concern. With respect to these tasks, this Article argues that state court judges need to be better guided by statute and subject to reason-giving and record-developing requirements that would channel their discretion, improve their decisionmaking, and enable more rigorous appellate review. But most important of all, this Article calls for states to make more conscious choices about structuring the roles they assign to state courts, and for scholars to devote more careful attention to these powerful and nuanced institutions.

Colorado Bans ICE Arrests In Courthouses

For the time being, the pandemic may well curtail much of the ordinary business conducted in state courthouses, but hopefully we will return to normal. In Colorado there will be a change when that happens. “Immigration and Customs Enforcement officers will no longer be allowed to arrest people for civil immigration violations in or around courthouses in Colorado.

Gov. Jared Polis signed Senate Bill 83 into law Monday. It prohibits ICE from making civil arrests while a person is in the courthouse or on its property or if the person is going to or from a court proceeding.

The bill excludes civil arrests related to a judge’s contempt-of-court order or other judicially issued process. A violation of the law could lead to a judge finding the agent in contempt of court, or the person could be subject to civil penalties from the attorney general.” For more: A new Colorado law bars ICE agents from making civil arrests at courthouses in the state, The Denver Post.

The Complications Of A Gender Change

Courts fairly routinely change a person’s name. Many state court judges also amend the birth certificate, because problems arise when the birth certificate does not match the gender and name on, for example, a driver’s license. Matters get more complicated when the person was born in another state. “Wyoming Supreme Court to decide on birth certificate gender changes”: Isabella Alves of The Wyoming Tribune Eagle has this report.

Transparency in Plea Bargaining

Jenia Iontcheva Turner (Southern Methodist University – Dedman School of Law) has posted Transparency in Plea Bargaining (Notre Dame Law Review, Vol. 96, No. 1, Forthcoming) on SSRN. Here is the abstract:

 

Plea bargaining is the dominant method by which our criminal justice system resolves cases. More than 95% of state and federal convictions today are the product of guilty pleas. Yet the practice continues to draw widespread criticism. Critics charge that it is too coercive and leads innocent defendants to plead guilty, that it obscures the true facts in criminal cases and produces overly lenient sentences, and that it enables disparate treatment of similarly situated defendants.

Another feature of plea bargaining — its lack of transparency — has received less attention, but is also concerning. In contrast to the trials it replaces, plea bargaining occurs privately and off the record. Victims and the public are excluded, and the defendant is typically absent. While the Sixth and First Amendment rights of public access extend to a range of pretrial criminal proceedings, they do not apply to plea negotiations. For the most part, rules and statutes also fail to require transparency in the process. As a result, plea bargaining is largely shielded from outside scrutiny, and critical plea-related data are missing.

There are some valid reasons for protecting aspects of plea negotiations from public scrutiny. Confidentiality fosters candor in the discussions and may encourage prosecutors to use their discretion more leniently. It can help protect cooperating defendants from retaliation. And it may expedite cases and conserve resources.

Yet the secrecy of the process also raises concerns. It prevents adequate oversight of coercive plea bargains, untruthful guilty pleas, and unequal treatment of defendants. It can hinder defense attorneys from providing fully informed advice to their clients. It can also potentially impair victims’ rights and interests. Finally, the absence of transparency leaves judges with few guideposts by which to evaluate plea bargains and inhibits informed public debate about criminal justice reform.

This Article reviews plea bargaining laws and practices across the United States and argues that we can do more to enhance the documentation and transparency of plea bargaining. It then proposes concrete areas in which transparency can be improved without significant costs to the criminal justice system.