
Data is not the plural of anecdote: Legal Scholarship Edition

After a long hiatus due to the weekend and Independence Day, I finally found something that piqued my interest enough to blog about: a debate about the usefulness of legal scholarship to judging and legal practice. Chief Justice Roberts, at the Fourth Circuit Judicial Conference, remarked that law review articles are often useless to judges because they focus on arcane philosophical points rather than resolving difficulties in doctrine or exploring important areas of law. Law professor Sherrilyn Ifill penned a response, posted by Danielle Citron at Concurring Opinions, arguing that law review articles do in fact do doctrinal lifting, if only judges would read them. For support, she points to a 2007 article used in a recent Court of Appeals case involving Fourth Amendment searches and GPS technology. She then lists four more examples of law review articles dealing with recent doctrinal issues, rounding out the list with this statement:

“Hutchins’ article is hardly an anomaly. A recent review of articles posted on the Social Science Research Network revealed a treasure trove of excellent articles that would greatly assist judges in their work.”

Despite these examples, however, Professor Ifill fails to make the argument she thinks she has made: that law review articles are generally useful to judges, if only judges would read them. She makes three mistakes in her argument:

Mistake One: A few data points do not make a general trend. Her argument supposes that these articles are representative of the general population of articles. However, five examples are nowhere near the number necessary to even begin making descriptions using statistical methods. To gauge the gravity of the argument, we must first know the entire population of articles. The Washington and Lee law journal rankings list 986 U.S.-based English-language law journals. Already, our five examples are adrift in a sea of journals, constituting less than one percent of them. Some may contend that certain journals are of better quality than others, but such arguments necessarily delve into what “quality” means, and whether it is related to the concept of “useful to judges.” Taking our numbers a step further, we need to know how many articles each journal publishes. Using the George Mason Law Review as a measuring stick, let's assume that a law review publishes four articles per issue (not including notes, as I want to look at professorial output, not student output) and four issues per year (some journals publish more per issue but fewer issues, and so on, so it balances out). This leaves us with a total of 15,776 articles per year (4*4*986). So, five articles out of 15,776 comes out to roughly 0.0003, or about 0.03 percent, which is definitely not representative of the population of law review articles. In this instance, what little support Professor Ifill presents is plainly insufficient to make her case.
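For anyone who wants to check the arithmetic, here it is as a short Python snippet (the journal count comes from the Washington and Lee rankings; the per-issue and per-year figures are my assumptions from above):

    # Back-of-the-envelope estimate of annual law review output.
    journals = 986          # U.S.-based English-language law journals (W&L rankings)
    articles_per_issue = 4  # assumed, using George Mason Law Review as a yardstick
    issues_per_year = 4     # assumed; heavier issues elsewhere roughly balance out

    total_articles = journals * articles_per_issue * issues_per_year
    print(total_articles)                # 15776
    print(round(5 / total_articles, 6))  # 0.000317, about 0.03% of a year's output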

Mistake Two: SSRN != Published != SSRN. (For those who are not familiar, != means “does not equal” in Java.) Four of the examples provided by Professor Ifill are from SSRN, which is a self-maintained listing by professors of their work, including everything from bare abstracts to published pieces. Although I am a huge fan of SSRN (a link to my author page is on the About page), SSRN is not a sufficient basis for conclusions about the state of legal scholarship. First, many of the articles listed on SSRN are works in progress, or attempts by authors to draw attention to recent scholarship that has yet to be chosen for publication. Second, SSRN does not include every work published in a law review, so it tends to undercount some scholarship while overcounting other scholarship. Some academics are not on SSRN at all, others update their pages sparingly, and some list only their major works in order to draw attention to them. This double disconnect means we cannot draw reliable inferences about legal academia from SSRN, because the analogy fails in both directions.

Mistake Three: Academia as a category is nearly nonsensical. By attempting to argue that the majority of scholarship is useful, Professor Ifill glosses over the individual characteristics and motivations behind particular research. Some pieces are not even intended for judicial consumption, and are instead targeted toward legislators and other policymakers considering a certain course of action. Although I am unfamiliar with the pieces she references, I do have personal experience with a piece recently cited by the Seventh Circuit Court of Appeals that was intended for judicial consumption, to resolve doctrinal issues. Before I became the analyst for FantasySCOTUS, Josh gave all of his potential interns an assignment to work on a segment of Pandora's Box. From that vantage point, I knew that the main purpose of the article was to discuss the impact of a potential victory in McDonald v. Chicago, and to chart the course of the law after that decision. Like Pandora's Box, many of the articles referenced by Professor Ifill are, I am certain, the product of authors consciously deciding to solve a particularly thorny problem, and some may come from practically minded authors (neither Josh nor his coauthor Ilya Shapiro is a tenure-track professor). The idiosyncratic nature of both articles and authors only raises the threshold amount of data necessary to argue that legal academia in general is useful to judges on doctrinal issues.

In conclusion, Professor Ifill fails to make a persuasive case that legal academia is useful to judges. However, I would also argue that not all legal scholarship SHOULD be useful in that sense. We need big ideas, programmatic comparisons, and doctrinal guidance to make any system work, and legal academia is no different. Perhaps part of my interest stems from the fact that my own first article is mainly an ideas piece about a novel approach to Supreme Court predictions. Still, I feel strongly that not enough scholars make the effort to connect their ideas pieces to doctrine or legal practice, and that is what judges are picking up on. My first piece is only the beginning of many, and as I move through my career, I am looking for opportunities to apply its ideas to different situations, such as litigation and cert grants. An alternative way of making that connection is to engage in blogging and other forms of communication. By taking the time to show how “the effect of Kant on the evidentiary rules of Bulgaria” relates to something occurring in our legal system right now, and might provide an answer to a question plaguing courts and litigants, we advance scholarship into a happy medium of ideas and action.


The Problem of Prognostication

Reason.com today has an article titled “It’s Hard To Make Predictions, Especially About The Future.” The thesis of the article is that “experts” have a particularly bad track record of making predictions about the future, so much so that a random process (here, a monkey throwing darts at a board) could actually outperform the experts (meaning an expert could be MORE accurate by keeping a secret dart-board monkey). The examples used in the article include 20-plus-year predictions about the fate of countries, the economy, the environment, and so on: all big events with a huge effect on the entire globe. The author of a book on the subject then uses behavioral economics to explain why everyone tends to flock to these often erroneous experts.

As I feel that “experts” tend to be puffed-up prognosticators who make either heavily hedged non-predictions or predictions that are extremely difficult to counter or measure, I am inclined to agree with the article. However, a Bloomberg article today about the partisan predictability of the Supreme Court shows that there are areas where we can make predictions, and often do. If we are so bad at predictions, shouldn't we get these small things wrong as well? I think we can make predictions like Supreme Court outcomes while still getting the big things wrong. One major difference between the two is that Supreme Court predictions concern the outcome of a case this term, not a case 20 years from now. The further the event is from the prediction in time, the less likely we are to get it right, because much can happen in between. Another factor is that the big predictions tend to be about chaotic systems, where small changes early on have a significant impact much later. A third difference, and the major one between big predictions and predicting the outcome of a case: a Supreme Court case has at most 9 variables (the justices, or 11 if you count the parties), whereas the big predictions have a near-limitless number of variables. In this sense, the complex equations of the economic experts are useless for prediction, while the unquantifiable shiftiness of human nature and bias, confined to nine people, makes case outcomes easier to predict. Finally, Black Swan Theory neatly explains why the end-of-the-world predictions are wrong, while Supreme Court opinions are rather mundane and common by contrast.

However, we can learn more about predictions of big events by looking at how predictions of small events work. My work with FantasySCOTUS gives me two major insights into predictive models and markets: the more we embrace and account for uncertainty, the more accurate we are; and what we call “experts” are not necessarily the knowledgeable ones. In the article I coauthored with Josh Blackman and Adam Aft, we account for uncertainty by computing confidence intervals for each of our predictions, which tell us how much to discount them. By holding back on weak predictions and putting forward strong ones, we have more or less detected where and when predictions are going to be accurate (although sometimes our users get things significantly wrong). With respect to experts, we tend to find that factors such as credentials (degrees or publications) or working directly in the field are not indicators of strong predictive ability. Sometimes the exceptionally smart non-lawyer does a better job of predicting the outcome of a case than the constitutional litigator. One possible explanation is that credentials do not select for predictive capacity, so we are trying to fit experts into a role they are not suited for (although some experts are more than willing to oblige).
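I won't rehash the article's actual methodology here, but a minimal sketch of one standard way to put a confidence interval on a crowd prediction (a normal-approximation interval for a proportion, with entirely made-up numbers) conveys the idea:

    import math

    def prediction_interval(votes_for, total, z=1.96):
        """95% normal-approximation confidence interval for the share
        of users predicting a given outcome (e.g., reverse) in a case."""
        p = votes_for / total
        margin = z * math.sqrt(p * (1 - p) / total)
        return max(0.0, p - margin), min(1.0, p + margin)

    # Hypothetical case: 180 of 250 users predict a reversal.
    low, high = prediction_interval(180, 250)
    print(f"reverse share: {180/250:.0%} (95% CI: {low:.1%} to {high:.1%})")
    # A wide interval flags a weak prediction worth holding back on.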

As for predictive modeling and mathematics, I have to be on guard not to become one of those occultist economists who use their crystal balls to gaze into the future. Generally speaking, I stick to basic descriptive statistics. By limiting myself to those tools, I know better than to try to make the world fit my visions. Instead, I apply statistics to give expression to the traits of the predictions our many users have made. So far, this has worked very well for understanding how people view the Court, and how cases are likely to come out. In the old crowds-versus-experts debate, I don't see why we can't crowdsource to find the real experts, not merely those who earned a degree and took some classes.
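By “basic descriptive statistics” I mean nothing fancier than counts, averages, and spread across user predictions. A minimal sketch, again with hypothetical data:

    from collections import Counter
    from statistics import mean, stdev

    # Hypothetical user predictions for one case: votes to reverse (0-9).
    predictions = [5, 6, 5, 9, 5, 6, 4, 5, 7, 5]

    print("mean votes to reverse:", mean(predictions))                     # 5.7
    print("spread (std dev):", round(stdev(predictions), 2))               # 1.42
    print("modal prediction:", Counter(predictions).most_common(1)[0][0])  # 5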

McDonald v. Chicago Re-Redux

Last night, I made a post about the similarities between McDonald v. Chicago and the ACA lawsuits. Earlier today, Randy Barnett, who I mentioned is playing the role of Alan Gura in this drama, posted his thoughts on the recent Sixth Circuit decision. He opens with this paragraph:

“Volokh readers will remember when two widely-respected conservative Court of Appeals judges, Judge Easterbrook and Judge Posner, were on a unanimous Seventh Circuit panel denying both the Due Process and Privileges or Immunities challenge to Chicago’s hand-gun ban. One year later, the Due Process challenge was upheld 5–4 in McDonald v. Chicago. My friend and current adversary, Walter Dellinger, said yesterday that the opinion by Judge Jeff Sutton to uphold the individual mandate “is a complete vindication of the constitutionality of the Affordable Care Act.” Not so fast. Sutton’s opinion was no surprise to anyone who was in the courtroom in Cincinnati. Nor would a contrary opinion have been surprising. Sutton was scrupulously critical of both sides that day. Indeed, his opinion shares the “on the one hand” and “on the other hand” character of his questioning. And it also bears some resemblance to Judge Easterbrook’s opinion in McDonald.”

I’m not the only one who sees that the cases are walking down the well-trod yet confusing path of McDonald. Seems like I can now upgrade from navel-gazing to armchair analysis…