Who thinks probability is just a number? A plea.

Many people think – perhaps they were taught it – that it is meaningful to talk about the unconditional probability of ‘Heads’ (i.e. P(Heads)) for a real coin, and even that there are logical or mathematical arguments to this effect. I have been collecting and commenting on works which have been – too widely – interpreted in this way, and quoting their authors in contradiction. De Finetti seemed to be the only respected figure who thought he had provided such an argument. But a friendly economist has just forwarded a link to a recent work that debunks this notion, based on a wider reading of his work.

So, am I done? Does anyone have any seeming mathematical sources for the view that ‘probability is just a number’ for me to consider?

I have already covered:

There are some more modern authors who make strong claims about probability, but – unless you know different – they rely on the above, and hence do not need to be addressed separately. I do also opine on a few less well-known sources: you can search my blog to check.

Dave Marsay

Is ‘probability’ a systemic issue?

My previous post was in part motivated by the current inquiry into the UK’s Post Office scandal. I note that one investigator said that the computer problems were not ‘systemic’ because there was no specific evidence that they had directly affected that many branches. Oh dear!


The ‘absolute certainty’ uncertainty principle

Are claims to ‘absolute certainty’ evidence of a belief that ought to be challenged?


AI Safety: Uncertain consequences

The UK government is hosting a summit on the safety of ‘artificial intelligence’. Here I summarise their concerns, followed by some additional concerns of my own.


What is ‘evidence’?

As a mathematician, I tend to worry about mathematical theories, such as those concerning ‘evidence’, and how they are applied, particularly how they are misunderstood and misapplied.


Applications of Statistics

Lars Syll has commented on a book by David Salsburg, criticising workaday applications of statistics. Lars has this quote:

Kolmogorov established the mathematical meaning of probability: Probability is a measure of sets in an abstract space of events.

This is not quite right.

  • Kolmogorov established a possible meaning, not ‘the’ meaning. (Actually Wittgenstein anticipated him.)
  • Even taking this theory, it is not clear why the space should be ‘measurable’. More generally one has ‘upper’ and ‘lower’ measures, which need not be equal. One can extend the more familiar notions of probability, entropy, information and statistics to such measures. Such extended notions seem more credible.
  • In practice one often has some ‘given data’ which is at least slightly distant from the ‘real’ ‘events’ of interest. The data space is typically a rather tame ‘space’, so that a careful use of statistics is appropriate. But one still has the problem of ‘lifting’ the results to the ‘real events’.

These remarks seem to cover the critiques of Syll and Salsburg, but are more nuanced. Statistical results, like any mathematics, need to be interpreted with care. But, depending on which of the above remarks apply, the results may be more or less easy to interpret: not all naive statistics are equally dubious!

Dave Marsay

AI pros and cons

Henry A. Kissinger, Eric Schmidt and Daniel Huttenlocher, The Metamorphosis, The Atlantic, August 2019.

AI will bring many wonders. It may also destabilize everything from nuclear détente to human friendships. We need to think much harder about how to adapt.

The authors are looking for comments. My initial reaction is here. I hope to say more. Meanwhile, I’d appreciate your reactions.


Dave Marsay

The limits of pragmatism

This is a personal attempt to identify and articulate a fruitful form of pragmatism, as distinct from what seems to me the many dangerous forms. My starting point is Wikipedia and my notion that the differences it notes can sometimes matter.


Which pragmatism as a guide to life?

Much debate on practical matters ends up in distracting metaphysics. If only we could all agree on what was ‘pragmatic’. My blog is mostly negative, in so far as it rubbishes various suggestions, but ‘the best is the enemy of the good’, and we do need to do something.

Unfortunately, different ‘schools’ start from a huge variety of different places, so it is difficult to compare and contrast approaches. But it is about time I had a go. (In part inspired by a recent public engagement talk on mathematics).


What logical term or concept ought to be more widely known?

Various, What scientific term or concept ought to be more widely known?, Edge, 2017.

Various public scientists suggest ideas that ought to be more widely known. I add a few.


Heuristics or Algorithms: Confused?

The Editor of the New Scientist (Vol. 3176, 5 May 2018, Letters, p54) opined in response to Adrian Bowyer’s wish to distinguish between ‘heuristics’ and ‘algorithms’ in AI that:

This distinction is no longer widely made by practitioners of the craft, and we have to follow language as it is used, even when it loses precision.


Probability as a guide to life

‘Probability is the very guide to life.’

Cicero may have been right, but ‘probability’ means something quite different nowadays to what it did millennia ago. So what kind of probability is a suitable guide to life, and when?

Suppose that we are told that ‘P(X) = p’. Often there is some implied real or virtual population, P, a proportion ‘p’ of which has the property ‘X’. To interpret such a probability statement we need to know what the relevant population is. Such statements are then normally reliable. More controversial are conditional probabilities, such as ‘P(X|Y) = p’. If you satisfy Y, does P(X)=p ‘for you’?

Suppose that:

  1. All the properties of interest (such as X and Y) can be expressed as a union of some disjoint basis, B.
  2. For all such basis properties, B, P(X|B) is known.
  3. That the conditional probabilities of interest are derived from the basis properties in the usual way. (E.g. P(X|B1∪B2) = (P(B1)·P(X|B1) + P(B2)·P(X|B2))/P(B1∪B2).)
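As a toy check of the combination rule in step 3, with invented numbers for two disjoint basis properties:

```python
# Hypothetical numbers: two disjoint basis properties B1, B2.
p_b1, p_b2 = 0.3, 0.2                  # P(B1), P(B2)
p_x_given_b1, p_x_given_b2 = 0.9, 0.1  # P(X|B1), P(X|B2)

# P(X|B1 u B2) = (P(B1)·P(X|B1) + P(B2)·P(X|B2)) / P(B1 u B2)
p_union = p_b1 + p_b2  # B1 and B2 are disjoint, so probabilities add
p_x_given_union = (p_b1 * p_x_given_b1 + p_b2 * p_x_given_b2) / p_union
print(round(p_x_given_union, 2))  # 0.58
```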

The conditional probabilities constructed in this way are meaningful, but if we are interested in some other set, Z, the conditional probability P(X|Z) could take a range of values. But then we need to reconsider decision making. Instead of maximising a probability (or utility), the following heuristics may apply:

  • If the range makes significant difference, try to get more precise data. This may be by taking more samples, or by refining the properties considered.
  • Consider the best outcome for the worst-case probabilities.
  • If the above is not acceptable, make some reasonable assumptions until an acceptable result is possible.

For example, suppose that we have some urns, each containing a mix of balls, some of which are white. We can choose an urn and then pick a ball at random. We want white balls. What should we do? The conventional rule consists of assessing the proportion of white balls in each, and picking an urn with the most. This is uncontroversial if our assessments are reliable. But suppose we are faced with an urn with an unknown mix? Conventionally our assessment should not depend on whether we want to obtain or avoid a white ball. But if we want white balls the worst-case proportion is no white balls, and we avoid this urn, whereas if we want to avoid white balls the worst-case proportion is all white balls, and we again avoid this urn.

If our assessments are not biased then we would expect to do better with the conventional rule most of the time and in the long-run. For example, if the non-white balls are black, and urns are equally likely to be filled with black as white balls, then assessing that an urn with unknown contents has half white balls is justified. But in other cases we just don’t know, and choosing this urn we could do consistently badly. There is a difference between an urn whose contents are unknown, but for which you have good grounds for estimating proportion, and an urn where you have no grounds for assessing proportion.
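The urn choice can be sketched as a maximin rule over interval-valued proportions. Everything here (the urn names, intervals and helper `best_urn`) is hypothetical, but it shows how the unknown urn gets avoided whichever outcome we want:

```python
# Each urn's proportion of white balls is an interval [lo, hi].
# Known urns have lo == hi; the unknown urn is [0.0, 1.0].
urns = {"A": (0.6, 0.6), "B": (0.4, 0.4), "unknown": (0.0, 1.0)}

def best_urn(urns, want_white=True):
    """Maximin choice: pick the urn whose worst-case chance of the
    desired outcome is greatest."""
    def worst_case(interval):
        lo, hi = interval
        return lo if want_white else 1.0 - hi  # worst case for our goal
    return max(urns, key=lambda name: worst_case(urns[name]))

print(best_urn(urns, want_white=True))   # 'A': the unknown urn is avoided
print(best_urn(urns, want_white=False))  # 'B': the unknown urn is avoided again
```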

If precise probabilities are to be the very guide to life, it had better be a dull life. For more interesting lives imprecise probabilities can be used to reduce the possibilities. It is often informative to identify worst-case options, but one can be left with genuine choices. Conventional rationality is the only way to reduce living to a formula: but is it such a good idea?

Dave Marsay

How can economics be a science?

This note is prompted by Thaler’s Nobel prize, the reaction to it, and attempts by mathematicians to explain both what they do do and what they could do.


Why do people hate maths?

New Scientist 3141 (2 Sept 2017) has the cover splash ‘Your mathematical mind: Why do our brains speak the language of reality?’. The article (p 31) is titled ‘The origin of mathematics’.

I have made pedantic comments on previous articles on similar topics, to be told that the author’s intentions have been slightly skewed in the editing process. Maybe it has again. But some interesting (to me) points still arise.

Firstly, we are told that brain scans show that:

a network of brain regions involved in mathematical thought that was activated when mathematicians reflected on problems in algebra, geometry and topology, but not when they were thinking about non-mathsy things. No such distinction was visible in other academics. Crucially, this “maths network” does not overlap with brain regions involved in language.

It seems reasonable to suppose that many people do not develop such a maths capability from experience in ordinary life or non-mathsy subjects, and perhaps don’t really appreciate its significance. Such people would certainly find maths stressful, which may explain their ‘hate’. At least we can say – contradicting the cover splash – that most people lack a mathematical mind, which may explain the difficulties mathematicians have in communicating.

In addition, I have come across a few seemingly sensible people who may seem to hate maths, although I would rather say that they hate ‘pseudo-maths’. For example, it may be true that we have a better grasp on reality if we can think mathematically – as scientists and technologists routinely do – but it seems a huge jump – and misleading – to claim that mathematics is ‘the language of reality’ in any more objective sense. By pseudo-maths I mean something that appears to be maths (at least to the non-mathematician) but which uses ordinary reasoning to make bold claims (such as ‘is the language of reality’).

But there is a more fundamental problem. The article cites Ashby to the effect that ‘effective control’ relies on adequate models. Such models are of course computational and as such we rely on mathematics to reason about them. Thus we might say that mathematics is the language of effective control. If – as some seem to – we make a dichotomy between controllable and not controllable systems then mathematics is the pragmatic language of reality. Here we enter murky waters. For example, if reality is socially constructed then presumably pragmatic social sciences (such as economics) are necessarily concerned with control, as in their models. But one point of my blog is that the kind of maths that applies to control is only a small portion. There is at least the possibility that almost all things of interest to us as humans are better considered using different maths. In this sense it seems to me that some people justifiably hate control and hence related pseudo-maths. It would be interesting to give them a brain scan to see if their thinking appeared mathematical, or if they had some other characteristic networks of brain regions. Either way, I suspect that many problems would benefit from collaborations between mathematicians and those who hate pseudo-mathematics without necessarily being professional mathematicians. This seems to match my own experience.

Dave Marsay

Mathematical Modelling

Mathematics and modelling in particular is very powerful, and hence can be very risky if you get it wrong, as in mainstream economics. But is modelling inappropriate – as has been claimed – or is it just that it has not been done well enough?


Can polls be reliable?

Election polls in many countries have seemed unusually unreliable recently. Why? And can they be fixed?

The most basic observation is that if one has a random sample of a population in which x% has some attribute then it is reasonable to estimate that x% of the whole population has that attribute, and that this estimate will tend to be more accurate the larger the sample is. In some polls sample size can be an issue, but not in the main political polls.
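A minimal sketch of that basic estimate and its (normal-approximation) sampling error; the function name and all figures are invented, not any pollster’s method:

```python
import math

def proportion_estimate(successes, n, z=1.96):
    """Point estimate and approximate 95% interval for a population
    proportion from a simple random sample (normal approximation)."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error shrinks as n grows
    return p, (p - z * se, p + z * se)

p, (lo, hi) = proportion_estimate(420, 1000)
print(f"{p:.2f} [{lo:.3f}, {hi:.3f}]")  # the interval narrows as n grows
```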

A fundamental problem with most polls is that the ‘random’ sample may not be representative of the population, with some sub-groups over- or under-represented. Political polls have some additional issues that are sometimes blamed:

  • People with certain opinions may be reluctant to express them, or may even mislead.
  • There may be a shift in opinions with time, due to campaigns or events.
  • Different groups may differ in whether they actually vote, for example depending on the weather.

I also think that in the UK the trend to postal voting may have confused things, as postal voters will have missed out on the later stages of campaigns, and on later events. (Which were significant in the UK 2017 general election.)

Pollsters have a lot of experience at compensating for these distortions, and are increasingly using ‘sophisticated mathematical tools’. How is this possible, and is there any residual uncertainty?

Back to mathematics, suppose that we have a science-like situation in which we know which factors (e.g. gender, age, social class ..) are relevant. With a large enough sample we can partition the results by combination of factors, measure the proportions for each combination, and then combine these proportions, weighting by the prevalence of the combinations in the whole population. (More sophisticated approaches are used for smaller samples, but they only reduce the statistical reliability.)
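The weighting scheme just described can be sketched with a single factor; the cells and all figures are purely illustrative:

```python
# Hypothetical poll: responses partitioned by one factor (age group).
# Each cell: (sample proportion supporting X, share of cell in population).
cells = {
    "18-34": (0.60, 0.30),
    "35-54": (0.45, 0.35),
    "55+":   (0.30, 0.35),
}

# Weight each cell's measured proportion by its population prevalence.
estimate = sum(p * w for p, w in cells.values())
print(round(estimate, 4))  # 0.4425
```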

Systematic errors can creep in in two ways:

  1. Instead of using just the poll data, one may rely on ‘laws of politics’ (such as the effect of rain) or other heuristics (such as that the swing among postal votes will be similar to that for votes in person), which may be wrong.
  2. An important factor is missed. (For example, people with teenage children or grandchildren may vote differently from their peers when student fees are an issue.)

These issues have analogues in the science lab. In the first place one is using the wrong theory to interpret the data, and so the results are corrupted. In the second case one has some unnoticed ‘uncontrolled variable’ that can really confuse things.

A polling method using fixed factors and laws will only be reliable when they reasonably accurately capture the attributes of interest, and not when ‘the nature of politics’ is changing, as it often does and as it seems to be right now in North America and Europe. (According to game theory one should expect such changes when coalitions change or are under threat, as they are.) To do better, the polling organisation would need to understand the factors that the parties were bringing into play at least as well as the parties themselves, and possibly better. This seems unlikely, at least in the UK.

What can be done?

It seems to me that polls used to be relatively easy to interpret, possibly because they were simpler. Our more sophisticated contemporary methods make more detailed assumptions. To interpret them we would need to know what these assumptions were. We could then ‘aim off’, based on our own judgment. But this would involve pollsters in publishing some details of their methods, which they are naturally loth to do. So what could be done? Maybe we could have some agreed simple methods and publish findings as ‘extrapolations’ to inform debate, rather than predictions. We could then factor in our own assumptions. (For example, our assumptions about student turnout.)

So, I don’t think that we can expect reliable poll findings that are predictions, but possibly we could have useful poll findings that would inform debate and allow us to take our own views. (A bit like any ‘big data’.)

Dave Marsay


The search for MH370: uncertainty

There is an interesting podcast about the search for MH370 by a former colleague. I think it illustrates in a relatively accessible form some aspects of uncertainty.

According to the familiar theory, if one has an initial probability distribution over the globe for the location of MH370’s flight recorder, say, then one can update it using Bayes’ rule to get a refined distribution. Conventionally, one should search where there is a higher probability density (all else being equal). But in this case it is fairly obvious that there is no principled way of deriving an initial distribution, and even Bayes’ rule is problematic. Conventionally, one should do the best one can, and search accordingly.
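To make the familiar machinery concrete, here is a minimal sketch (cells, detection probability and priors all invented) of updating a search distribution by Bayes’ rule after one unsuccessful search:

```python
# Prior probability that the flight recorder is in each grid cell.
prior = {"cell_A": 0.5, "cell_B": 0.3, "cell_C": 0.2}
p_detect = 0.8  # assumed chance of finding it if we search the right cell

# We searched cell_A and found nothing: P(no find | in A) = 1 - p_detect,
# while an unsearched cell yields 'no find' with certainty.
likelihood = {c: (1 - p_detect if c == "cell_A" else 1.0) for c in prior}
evidence = sum(prior[c] * likelihood[c] for c in prior)
posterior = {c: prior[c] * likelihood[c] / evidence for c in prior}
print({c: round(p, 3) for c, p in posterior.items()})
# Probability mass shifts away from the searched cell.
```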

The podcaster (Simon) gives examples of some hypotheses (such as the pilot being well, well-motivated and unhindered throughout) for which the probabilistic approach is more reasonable. One can then split one’s effort over such credible hypotheses, not ruled out by evidence.

A conventional probabilist would note that any ‘rational’ search would be equivalent to some initial probability distribution over hypotheses, and hence some overall distribution. This may be so, but it is clear from Simon’s account that this would hardly be helpful.

I have been involved in similar situations, and have found it easier to explain the issues to non-mathematicians when there is some severe resource constraint, such as time. For example, we are looking for a person. The conventional approach is to maximise our estimated probability of finding them based on our estimated probabilities of them having acted in various ways (e.g., run for it, hunkered down). An alternative is to consider the ways they may ‘reasonably’ be thought to have acted and then to seek to maximise the worst-case probability of finding them. Then again, we may have a ranking of ways that they may have acted, and seek to maximise the number of ways for which the probability of our success exceeds some acceptable amount (e.g. 90%). The key point here is that there are many reasonable objectives one might have, for only one of which the conventional assumptions are valid. The relevant mathematics does still apply, though!
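The point that different objectives select different searches can be sketched as follows; the plans, hypotheses and every number are invented for illustration:

```python
# Each plan gives P(find | the subject acted in way h), for h1..h3.
plans = {
    "plan_1": {"h1": 0.99, "h2": 0.10, "h3": 0.10},
    "plan_2": {"h1": 0.60, "h2": 0.60, "h3": 0.50},
    "plan_3": {"h1": 0.92, "h2": 0.91, "h3": 0.05},
}
weights = {"h1": 0.9, "h2": 0.05, "h3": 0.05}  # assumed hypothesis weights

# Three reasonable objectives, three different 'best' plans:
expected = max(plans, key=lambda p: sum(weights[h] * plans[p][h] for h in weights))
maximin  = max(plans, key=lambda p: min(plans[p].values()))
coverage = max(plans, key=lambda p: sum(v >= 0.9 for v in plans[p].values()))

print(expected, maximin, coverage)  # plan_1 plan_2 plan_3
```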

Dave Marsay
