Friday, July 06, 2018

Take a coin, any coin

It is good to see an article on Bayesian reasoning with conditional probabilities in the current issue of the Times Literary Supplement: “Thomas Bayes and the crisis in science” by David Papineau (June 28, 2018).

As Professor Papineau points out, Bayesian analysis is used in many fields, including law.

One of the difficulties in discussing Bayesian reasoning, or indeed any complex subject, is that clear and simple points can become obscured by technical terms.

It took me a while to get to grips with Professor Papineau’s coin-tossing illustration. What it is designed to illustrate is an error of reasoning that is, apparently, found in too many published scientific studies. Essentially, the error involves drawing a conclusion from too little information.

If you take a coin – any coin – and toss it five times, and if you get five heads, how likely is it that the coin is biased? Pretend that you do not have special coin-tossing skills that allow you to determine the result of a toss. Also pretend that it doesn't occur to you to just keep tossing the coin to see what proportion of the sequences of five-tosses give results of five-heads.

After only a little reflection you realise that an unbiased coin will, on average, produce five-heads once every 32 times the five-toss sequence is carried out. One in 32 gives a probability of 0.03, approximately. The probability of getting five-heads from an unbiased coin looks very low, and you might be tempted to conclude that, therefore, there is a probability of 0.97 that the coin is biased.
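For readers who like to check the arithmetic, both figures can be reproduced in a few lines of Python (my own sketch, not anything from Professor Papineau's article):

```python
# Probability of five heads in five tosses of an unbiased coin.
p_five_heads_fair = 0.5 ** 5
print(p_five_heads_fair)                 # 0.03125, about 1 in 32

# The tempting (and fallacious) step: treating 1 - 0.03 as the
# probability that the coin is biased.
print(round(1 - p_five_heads_fair, 2))   # 0.97
```

The rest of the post explains why that last step is an error: it transposes the conditional.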

Apparently, a significant number of scientific studies have been published in peer-reviewed journals, reporting conclusions arrived at through that sort of reasoning.

Bayesian analysis, if you are able to do it, will quickly show you that such conclusions are ridiculous, or, as Professor Papineau says, “silly” or “nonsense on stilts”.

If you are a lawyer, you might have to convince a judge or jury that an apparently obvious conclusion, reached by a respected expert, is wrong. It is far from easy to do this, and that may be why Bayesian analysis is taking so long to be routinely applied in courtrooms.

Fundamentally, the probability of getting five-heads if the coin is not biased, is not the same as the probability of the coin not being biased if it produced five-heads. The probability of A, given B, is not the same as the probability of B, given A.

My favourite way of illustrating this is to say: the probability of an animal having four legs, given that it is a sheep, is not the same as the probability of it being a sheep, given that it has four legs. The first tells you something about sheep, the second something about quadrupeds.

We know something about an unbiased coin: about three per cent of the times it is tossed five times it will produce a sequence of five-heads. But what do we know about a coin that has produced a five-head sequence? Is it biased or unbiased? If it is biased, how biased is it? Does it always produce five-heads or only some proportion of the times it is tossed five times? Is a biased coin commonly found or is it rare? Those things need to be known in calculating the probability that the tossed coin which produces a five-head sequence is biased.

At the risk of over-explaining this, let’s ignore - just for a moment - the rarity of biased coins and consider possible results of 100 five-toss sequences for a biased, and an unbiased, coin:

                          Biased    Unbiased
    Five-heads              25          3
    Other                   75         97

In this table, the unbiased coin shows five-heads in three per cent of the sequences, as expected. The biased coin was, in this example, biased in such a way that it showed five-heads 25 per cent of the time and some other result 75 per cent of the time. Of the 28 five-heads results, three came from the unbiased coin and 25 from the biased coin, so the percentage of five-heads results that came from the biased coin is 25/28 times 100, or 89.3 per cent. The probability of the tossed coin being biased, given the five-heads result, is therefore approximately 0.89 – which would not conventionally be regarded as scientifically significant proof of bias.
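That table arithmetic can be checked in a couple of lines (Python, my own illustration; it assumes, as the table does for the moment, that the two coins are equally common):

```python
# Counts of five-heads results from 100 five-toss sequences per coin.
biased_five_heads = 25     # biased coin shows five-heads 25% of the time
unbiased_five_heads = 3    # unbiased coin: about 3% (1 in 32)

# Of all five-heads results, the proportion that came from the biased coin.
p_biased_given_five_heads = biased_five_heads / (biased_five_heads + unbiased_five_heads)
print(round(p_biased_given_five_heads, 3))   # 0.893
```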

This is not to say that the result is of no use. It does tend to prove the coin is biased. The strength of its tendency to prove bias is the likelihood ratio: the ratio of the probability of five-heads, given the coin is biased (from the above table this is 0.25) to the probability of five-heads, given the coin is unbiased (0.03), a ratio of 8.3 to 1. On the issue of bias, the result should be reported as: whatever the other evidence of bias may be, this result is 8.3 times more likely if the coin is biased than if it is not biased. The other evidence may be from a survey of coins which measured how often we can expect to find biased coins.
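The likelihood ratio itself is a one-line calculation (Python again, purely as a sketch):

```python
# Likelihood ratio for the five-heads result, using the figures above.
p_5h_given_biased = 0.25
p_5h_given_unbiased = 0.03   # about 1 in 32

likelihood_ratio = p_5h_given_biased / p_5h_given_unbiased
print(round(likelihood_ratio, 1))   # 8.3
```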

Now suppose that such a biased coin is only found once in every ten thousand coins, and that all biased coins have the same bias. The probability of the coin you have tossed being biased is, when you do the calculation using the Bayesian formula, 0.0008. Eight occurrences in ten thousand. Much lower than the 0.97 probability (97 occurrences in 100) of the coin being biased that might have been reported in a peer-reviewed journal.
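The full Bayesian calculation can be sketched in a few lines (Python; my own illustration, using the one-in-ten-thousand prior and the one-in-four bias figure assumed above):

```python
# Prior: one coin in ten thousand is biased.
prior_biased = 1 / 10_000
p_5h_given_biased = 0.25       # biased coin: five-heads 25% of the time
p_5h_given_fair = 0.5 ** 5     # unbiased coin: about 0.03

# Bayes' theorem: P(biased | five-heads).
numerator = p_5h_given_biased * prior_biased
denominator = numerator + p_5h_given_fair * (1 - prior_biased)
posterior = numerator / denominator
print(round(posterior, 4))     # 0.0008 - eight in ten thousand
```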

Again, this is not as surprising as it may seem at first glance. There may be only one biased coin in 10,000 coins, and one occurrence of five-heads from a biased coin in 40,000 coins (using the one-in-four frequency in the table), but, in round figures, there will also be 1200 occurrences (three per cent) of five-heads from unbiased coins in those 40,000 coins. This is why, at this frequency of biased coins, a five-heads result is much more likely (1200 times more likely) to be from an unbiased coin than from a biased one.

Only a very brave judge or juror would bet a significant sum that a coin which when tossed produced a five-head sequence was not biased. The bets would go the other way and those significant sums would most probably be lost.

And, as an afterthought: if you feel estimating prior probabilities is a bit haphazard, the Bayesian formula can be turned around to tell you what priors you would need in order to get, in the above example, P(the coin is biased) = 0.95. You would, before doing the experiment, need to be convinced to a probability of about 0.70 that the coin was biased. This sort of approach is discussed in a paper by David Colquhoun (available courtesy of The Royal Society Publishing). If, as a lawyer, you want an easy introduction to Bayesian reasoning, see my draft paper on propensity evidence.
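That reverse calculation is easiest in odds form – posterior odds equal the likelihood ratio times the prior odds – and can be sketched like this (Python, my own illustration):

```python
# What prior probability of bias would be needed to reach a 0.95 posterior?
target_posterior = 0.95
likelihood_ratio = 0.25 / (0.5 ** 5)   # = 8, from the figures above

posterior_odds = target_posterior / (1 - target_posterior)   # 19 to 1
prior_odds = posterior_odds / likelihood_ratio               # about 2.4 to 1
prior = prior_odds / (1 + prior_odds)
print(round(prior, 2))   # 0.7
```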

Friday, June 22, 2018

Lane v The Queen: error classification and a nudge for Weiss

Good to see Weiss v The Queen (2005) 224 CLR 300 getting another nudge into the obscurity it so richly deserves, in Lane v The Queen [2018] HCA 28 (20 June 2018).

Lane raises, for reflective readers, the difficulty of distinguishing trial errors that go to what Australians call the presuppositions, and errors that are less fundamental but which nevertheless require the quashing of a conviction.

The point of trying to distinguish these types of errors from each other is that when the former occur there is no need for an appellate court to ask whether the verdict could have been affected by the error, whereas when the latter occur the appellate court asks itself whether there is a real risk that the verdict would have been more favourable to the defendant (appellant) if the error had not happened.

Presuppositional errors require quashing of convictions, whereas other errors (beyond the trivial or irrelevant) raise the “real risk” question.

It is probably not inaccurate to think of presuppositional errors as those which undermine the fairness of trials. In Lane, the joint judgment of Kiefel CJ, Bell, Keane and Edelman JJ treats the error as presuppositional: the jury had not been told that unanimity on a particular factual issue was, in the circumstances of the case, required. While recognising the limited utility of classifications of errors, the joint judgment says that classification does put the focus on the effects of the errors (at [46]), and that here the misdirection was apt to prevent the performance by the jury of its function of reaching a unanimous verdict. This required, without further inquiry, the quashing of the conviction.

We could say that a trial resulting in a verdict that did not comply with the law was not a fair trial.

The other view of the error in Lane was taken by Gageler J, who agreed with the orders made in the joint judgment. Here, the question was simply whether the possibility of lack of unanimity was more than theoretical (at [58]). In the circumstances, it could not be said that without the error the jury would have returned the same verdict (at [63]).

The joint judgment does not engage with Gageler J’s approach, so, in the absence of an explanation of why that approach is wrong, it carries more weight than it otherwise would. Even so, Lane is authority for the proposition that where the circumstances of a case are such that a jury may not have been unanimous on an issue where unanimity was required, a resulting conviction will have to be quashed.

My opening and scornful remark about Weiss is addressed to its endorsement of the appeal-judges-as-jurors view of what an appeal court can do. I am one of those who think that appellate judges should never make determinations of guilt. Their function is to assess whether there is a real risk that a verdict more favourable to the defendant (appellant) would have been returned if the error had not occurred, or whether the trial was unfair or was a nullity.

There are some comments in Lane which reject the notion of appeal judges as triers of fact, but those comments need to be read in the context of Lane. So, in that context, Weiss has received its nudge.


Update: The day after Lane was delivered, the New Zealand Supreme Court decided that an error at trial resulting in the jury being instructed incorrectly on mens rea elements required the appellate court to apply the “real risk” analysis, and made no reference to the more fundamental trial fairness ground. Readers are not, therefore, assisted in discovering why this was not a fairness issue. The decision is currently subject to suppression orders, so is only available to people who have access to the databases: [2018] NZSC 56.

Wednesday, May 16, 2018

Reviewing the Evidence Act 2006

Well jurists, it’s only a month to go before your submissions on the New Zealand Law Commission’s Second Review of the Evidence Act 2006, Issues Paper 42, are due in.

You don’t have to answer all questions, so you can focus on your favourite topics.

Mine are done, as you can see.

Saturday, May 05, 2018

An admirable dissent

On rare occasions you read a dissenting judgment that is reasoned with such brilliant clarity that you may bruise your hands in applauding.

So it is with S (CA377/2017) v R [2018] NZCA 101 (19 April 2018).

Counsel had not told the defendant that there was the option of having a judge alone trial (JAT) and, without consulting the client on the matter, elected jury trial on his behalf.

After being convicted at trial the client became aware that he could have had a JAT, and deposed that he would have chosen that mode of trial if the matter had been discussed with him.

What was the status of the error? Under s 232 of the Criminal Procedure Act 2011, if it rendered the trial unfair it would be unnecessary to show that it had affected the outcome of the trial.

The two majority judges of the Court of Appeal held that the error did not render the trial unfair, and this was the point on which the third judge dissented.

In the absence of local case law, the majority were guided by the Supreme Court of Canada in R v Turpin [1989] 1 SCR 1296, the Supreme Court of the United States in Singer v United States 380 US 24 (1965), and the High Court of Australia in Brown v R (1986) 160 CLR 171.

This led to the position that, as there was no “right” to a JAT, but only a right to elect jury trial (with JAT being the default position – what one might think of as the factory setting), the trial was not unfair in terms of s 232(4)(b). Patience with subtlety is necessary to follow the reasoning.

Nor, said the majority, was the error fundamental because it had not been included in a list of fundamental errors compiled in an earlier decision of the Court. (But, as the dissenter observed, neither had it been specifically excluded.)

And there was nothing to indicate that the error had affected the outcome of the trial.

It would be wrong for counsel to rely on the majority judgment as permission to avoid taking instructions on election of jury trial whenever there is a choice to be made, pending resolution of the issue in the Supreme Court (in this or a similar case). The Court certainly did not intend to give permission to make errors.

The dissent essentially takes the position that, just as it would be a fundamental error to fail to inform a defendant of the right to elect jury trial, so too is it a fundamental error to fail to inform a client of the option of judge alone trial. It fits with other fundamental errors identified in Hall v R [2015] NZCA 403 at [65]: decisions as to plea, giving evidence, and presenting a defence, and with the duty referred to at [71].

There was no doubt that the jury trial that happened in this case was in its substance fair. What s 232(4) relevantly requires, to amount to a miscarriage of justice, is an error in relation to the trial that resulted in an unfair trial. An “unfair trial” is not defined, but there could be two types of unfairness: substantive and procedural. Is a trial procedurally fair if it proceeds in a mode that was not, when there was a choice, chosen by the defendant?

Thursday, April 12, 2018

Coming to law from science


“Chief Justice French’s background in science has been useful in expressing ideas. He has suggested that identifying elements of administrative justice is “a little like the identification of ‘fundamental’ particles in physics. When pressed, they can transform one into another or cascade into one or more of the traditional grounds of review developed at common law”. [Robert French “The Rule of Law as a Many Coloured Dream Coat” (Singapore Academy of Law 20th Annual Lecture, Singapore, 18 September 2013) at 18.] It has also come in handy when cases before the Court have dealt with scientific concerns, such as D’Arcy v Myriad Genetics Inc, [[2015] HCA 35, (2015) 325 ALR 100] a case about the patentability of DNA. But I wonder whether the real insight to be obtained from what his scientific background has brought to the Chief Justice’s work is to be picked up from his reference to his gratitude that he was exposed to a “culture” of science. That may give some insight into a style of leadership that, to an outside view, seems more collaborative and cooperative, less competitive than is sometime encountered in appellate courts, perhaps because their members are often drawn from a section of the profession with a very different, more competitive culture.” (footnotes from original, inserted in square brackets)


Science is about finding out what happens, theorising about why it happens, and using that to predict what will happen. Observations usually involve measurement, and consequently mathematics. From observations, theories can be formulated; again, they are usually mathematical. The mathematics should suggest what future observations will be. Predicting observations using mathematics is not always accurate, in which case refinements of the theory are needed. Refinements are prompted by unexpected observations.

For example, looking at magnets and wires, inconsistencies between the predictions of classical mechanics and Maxwell’s equations about the forces impelling a current in a conductor, depending on whether the conductor or the magnet is moved, prompted Einstein – at least according to the way he wrote his paper – to develop what later came to be known as the special theory of relativity. The paper announcing this was called (in English translation) On the Electrodynamics of Moving Bodies. Measurements of an event made from different frames of reference (here, in the special case of reference frames moving in straight lines at constant velocities) depend on the point of view, and this in turn has implications for measurements within a single frame of reference. Using observations on the constancy of the speed of light in a vacuum, and theorising that the laws of physics are the same everywhere, Einstein borrowed mathematical techniques developed by Lorentz and showed that some refinements – albeit extremely small ones for the events we normally observe – must be made to Newton’s laws of motion. In a later addendum he showed that the same mathematics he had used also predicted that the energy of matter is proportional to its mass.

While that sort of mathematics has proved to have great predictive value where observations are made at the macroscopic level, it is not so useful at the sub-atomic level. It seems that the smaller something is, the greater the need for a mathematics incorporating probability. At the sub-atomic level, mathematics is a less accurate predictive tool than it is for events at a larger scale. To compensate for the reduced usefulness of basic mathematics at the sub-atomic level, new forms of mathematics are devised, starting with quantum mechanics. Specialists develop new forms of mathematics to meet the needs of inquiry; Descartes combined algebra and geometry, Newton and Leibniz independently developed calculus, and today there are many forms of specialised mathematics, taking their topics far beyond a lay-person’s understanding.

Unless a mathematical refinement has predictive value for those who must use it, it is worthless to science. The same need for predictive value applies to theories that are not mathematical. But having predictive value is not the same as identifying what is real. The correct interpretation of reality using quantum mechanics has yet to be achieved. A theory may predict observations while not necessarily saying what is real.

Law is like science in that, in considering a legal problem, a lawyer will try to predict what a court would decide the answer should be. The facts of the legal problem are like measurements in science. But facts also claim to speak of reality. Deciding what should be the legal consequence of the forensically decided reality can be like using a scientific theory to predict the result of an experiment. Where a judge has a discretion, or where judgment must be exercised by a court, there is room for a predictive theory to be developed. Those areas of law, where there are discretions to be exercised and evaluations to be made, are different from other areas where the answer to a legal problem can simply be looked up. Discretion and judicial evaluation invite analysis and development of predictive theory.

Two areas of judicial decision-making that have particularly interested me both involve evaluative judgments: deciding whether improperly obtained evidence should be ruled inadmissible, and deciding whether the evidence in a case is sufficient proof of guilt.

My study of the decision whether a court should rule improperly obtained evidence inadmissible is available at https://www.tinyurl.com/dbmadmissibility. There is a method behind my theory which has mathematical analogues: the Cartesian plane, a diagrammatic representation of the results of cases, and a boundary curve reflecting the rationality of the decision process. It provides a pictorial representation of results, and a method for identifying wrong decisions. Wrong decisions are like inaccurate scientific observations; they do not require rejection of an inconsistent theory unless they build up in number and have consistency among themselves to the point where it is no longer useful to call them wrong.

The sufficiency of evidence as proof of guilt is an inherently probabilistic question. Reasoning with conditional probabilities is something we all do instinctively, but mathematical analysis can reveal fallacies in intuitive thinking. Analogies from mathematical theory can indicate the probative value of items of evidence and the effect of those on the probability that a defendant is guilty. Law does not require mathematical precision, but mathematical method can be a useful tool. I illustrate this in my draft paper (draft because I like to have the opportunity to keep these papers up to date) available at https://tinyurl.com/dbmpropensity.

Those are illustrations of some of the ways in which a background in science can be of assistance to a lawyer.

Saturday, April 07, 2018

Kalbasi v Western Australia: analysing conviction appeals without Weiss

In Kalbasi v Western Australia [2018] HCA 7 the Court split 4-3 on whether Mr Kalbasi’s conviction was a substantial miscarriage of justice.

In trying to answer this question the judges used a notoriously difficult decision of the Court, Weiss v The Queen (2005) 224 CLR 300; [2005] HCA 81. The differences in the conclusions reached by the judges suggest that Weiss doesn’t work.

In New Zealand we no longer struggle to decide whether a miscarriage of justice is “substantial”. The reformed law is in s 232 of the Criminal Procedure Act 2011.

True to say, Weiss has some lingering influence here, by way of applying Matenga v R [2009] NZSC 18, as can be seen in Wiley v R [2016] NZCA 28 at [18], [49], [51], but that may be only a clinging-to-the-wreckage instinct which the Supreme Court could well correct when it decides the appeals in Z v R (the leave decision was [2017] NZSC 172, 17 November 2017, not available online.)

How would Kalbasi have been decided under s 232?

Kalbasi is a wonderful example of a plethora of appeal issues arising from relatively straightforward facts. Jeremy Gans discusses these at the HCA blog.

I think that, applying s 232 here, we would agree with the conclusion reached by the majority in Kalbasi.

Was the trial unfair (s 232(4)(b))? At common law a trial is fair if the law was accurately applied to facts that had been determined impartially. Impartiality includes the absence of bias, actual and apparent, and requires that the fact-finder has given appropriate weight to the various items of evidence and has reasoned correctly.

There was an error of law in Kalbasi: everyone thought the presumption of purpose of supply applied, but it didn’t, because the charge was only one of attempting to have possession (of methamphetamine) for the purpose of supply. It was an attempt because the police had substituted salt for the drug in the package. The error was immaterial for two reasons: the defence that was relied on (absence of proof of possession) made the subsequent issue of purpose irrelevant, and the quantity of the drug had been about 2000 times that at which the presumption is triggered, so even without a presumption there would have been a strong factual inference for the defendant to raise a doubt about, had that purpose been contested.

So as a practical matter, the error of law didn’t matter. In some trials it is necessary for all defences to be considered, even those on which the defendant has not relied, but in this case the facts made a contest on the issue of purpose hopeless for the defendant. The error of law was inconsequential on these facts.

Were the facts determined impartially? The issue on possession was whether the defendant had exercised a power of control over what he thought was the drug. Control was properly explained to the jury. The defence was that the defendant did not have control because he was just present to take a small quantity of the drug for his own use. Usually, this would be a defence offered to negate the allegation of purpose of supply. But in the circumstances here the tactical decision to challenge possession rather than purpose was not unreasonable.

The defendant did not give evidence, and there was no criticism of that choice. It left the issue of possession, and more precisely of control, as a matter of inference. There were circumstances that supported the conclusion that Mr Kalbasi had a greater interest than merely obtaining a small quantity of the drug for his own use.

Given that the trial was fair, was there a real risk that the outcome of the trial had been affected by any error, irregularity or occurrence (s 232(4)(a))?

The judge had used a library book analogy to explain the difference between ownership and possession. The same analogy could have more pertinently illustrated the difference between custody and control. If you are the only visitor in a small library, and the librarian leaves the room briefly, you may be said to have custody of the books, but you would only have control of a book you had taken from the shelves. Control may be temporary and conditional on return, and it may be shared: the evidence was that Mr Kalbasi had worn a latex glove and assisted with cutting or inspecting what he thought was the drug. So even if the library book analogy had not been used in the most apposite way, the jury would not have been misled about what control is.


There was no real risk that the outcome of the trial had been affected by an error, and the conviction was not a miscarriage of justice.

Saturday, March 24, 2018

When judges get nasty

It’s good to see the Chief Justice taking an interest in judicial bullying of counsel.

I imagine there have been judicial bullies as long as there have been courts. Bullies can usually be quite nice people, but under pressure the character flaw is revealed.

My own method for dealing with bullying judges is rather unsubtle, as this example illustrates.


I am pleased to report the whole thing was settled amicably, the judge saying that we both seemed to be having a bad day at the office.