Bayes’ Broadsword

Yesterday I gave a talk at the joint Bloomberg-London Futurist meeting “The state of the future” about the future of decisionmaking. Parts were updates on my policymaking 2.0 talk (turned into this chapter), but I added a bit more about individual decisionmaking, rationality and forecasting.

The big idea of the talk: ensemble methods really work in a lot of cases. Not always, not perfectly, but they should be among the first tools to consider when trying to make a robust forecast or decision. They are Bayes’ broadsword:


Forecasting

One of my favourite experts on forecasting is J. Scott Armstrong. He has stressed the importance of evidence-based forecasting, including checking how well different methods actually work. The general answer is: not very well, yet people keep on using them. He has been pointing this out since the 70s. It also turns out that expertise only gets you so far: expert forecasts are not very reliable either, and accuracy levels out quickly with increasing expertise. One implication is that one should at least get cheap experts, since they are about as good as the pricey ones. It is also known that simple forecasting models tend to be more accurate than complex ones, especially in complex and uncertain situations (see also Haldane’s “The Dog and the Frisbee”). Another important insight is that it is often better to combine different methods than to try to select the single best method.

Another classic look at prediction accuracy is Philip Tetlock’s Expert Political Judgment (2005), where he examined policy experts’ predictions. They were only slightly more accurate than chance, worse than basic extrapolation algorithms, and there was a negative link to fame: high-profile experts have an incentive to be interesting and dramatic, but not to be right. However, he noticed a difference between “hedgehogs” (people with One Big Theory) and “foxes” (people using multiple theories), with the foxes outperforming the hedgehogs.

OK, so in forecasting it looks like using multiple methods, theories and data sources (including experts) is a way to get better results.

Statistical machine learning

A standard problem in machine learning is to classify something into the right category from data, given a set of training examples. For example, given medical data such as age, sex, and blood test results, diagnose which disease a patient might be suffering from. The key problem is that it is non-trivial to construct a classifier that works well on data different from the training data. It can work badly on new data, even if it works perfectly on the training examples. Two classifiers that perform equally well during training may perform very differently in real life, or even on different data.

The obvious solution is to combine several classifiers and average (or vote about) their decisions: ensemble based systems. This reduces the risk of making a poor choice, and can in fact improve overall performance if they can specialize for different parts of the data. This also has other advantages: very large datasets can be split into manageable chunks that are used to train different components of the ensemble, tiny datasets can be “stretched” by random resampling to make an ensemble trained on subsets, outliers can be managed by “specialists”, in data fusion different types of data can be combined, and so on. Multiple weak classifiers can be combined into a strong classifier this way.

The method benefits from the combined classifiers being diverse: if they are too similar in their judgements, there is no advantage. Estimating the right weights to give them is also important, since otherwise a truly bad classifier may unduly influence the output.

Iris data classified using an ensemble of classification methods (LDA, NBC, various kernels, decision tree). Note how the combination of classifiers also roughly indicates the overall reliability of classifications in a region.

The iconic demonstration of the power of this approach was the Netflix Prize, where different teams competed to make algorithms that predicted user ratings of films from previous ratings. As part of the rules the algorithms were made public, spurring innovation. When the competition concluded in 2009, the leading teams all used ensembles whose component algorithms came from past teams. The two big lessons were (1) that a combination of not just the best algorithms but also less accurate ones was the key to winning, and (2) that organic organization allows the emergence of far better performance than strictly isolated teams can achieve.

Group cognition

Condorcet’s jury theorem is perhaps the classic result in group problem solving: if a group of people hold a majority vote, and each has a probability p>1/2 of voting for the correct choice, then the probability that the group votes correctly is higher than p and tends towards 1 as the size of the group increases. This presupposes that votes are independent, although stronger forms of the theorem have been proven. (In reality people may have different preferences, so there is no clear “right answer”.)

Probability that groups of different sizes will reach the correct decision as a function of the individual probability of voting right.
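
Curves like these are easy to reproduce. Here is a minimal Matlab sketch (the particular group sizes are arbitrary examples of mine, and binocdf needs the Statistics Toolbox): for an odd-sized group of n independent voters who are each right with probability p, the majority is right exactly when more than half of the n votes are correct.

```matlab
% Probability that a majority of n independent voters is right, as a function
% of the individual probability p of voting right (Condorcet's jury theorem).
p = 0:0.01:1;
hold on
for n = [1 5 21 101]                            % odd group sizes, picked for illustration
    Pmajority = 1 - binocdf((n - 1)/2, n, p);   % P(more than half of the n votes are right)
    plot(p, Pmajority)
end
xlabel('individual probability of voting right')
ylabel('probability the majority is right')
legend('n = 1', 'n = 5', 'n = 21', 'n = 101', 'Location', 'northwest')
```

With a single voter the curve is just the diagonal; as n grows it approaches a step at p = 1/2. This also means the theorem cuts the other way: if individual accuracy drops below one half, larger groups become more reliably wrong.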

By now the pattern is likely pretty obvious. Weak decision-makers (the voters) are combined through a simple procedure (the vote) into better decision-makers.

Group problem solving is known to be pretty good at smoothing out individual biases and errors. In The Wisdom of Crowds, Surowiecki suggests that the ideal crowd for answering a question in a distributed fashion has diversity of opinion, independence (each member has an opinion not determined by the others’), decentralization (members can draw conclusions based on local knowledge), and a good aggregation process turning private judgements into a collective decision or answer.

Perhaps the grandest example of group problem solving is the scientific process, where peer review, replication, cumulative arguments, and other tools make error-prone and biased scientists produce a body of findings that over time robustly (if sometimes slowly) tends towards truth. This is anything but independent: sometimes a clever structure can improve performance. However, structure can also induce all sorts of nontrivial pathologies – just consider the detrimental effects status games have on accuracy, or on keeping science focused on the important topics.

Small-group problem solving, on the other hand, is known to be great for verifiable solutions (everybody can see that a proposal solves the problem), but unfortunately suffers when dealing with “wicked problems” that lack a good problem or solution formulation. Groups also have scaling issues: a team of N people needs to transmit information between all N(N-1)/2 pairs, which quickly becomes cumbersome.

One way of fixing these problems is using software and formal methods.

The Good Judgement Project (partially run by Tetlock, and with Armstrong on the board of advisers) participated in the IARPA ACE program to try to improve intelligence forecasts. It used volunteers and checked their forecast accuracy (not just whether they got things right, but whether claims that something was 75% likely actually came true 75% of the time). This led to a plethora of fascinating results. First, accuracy scores based on the first 25 questions in the tournament predicted subsequent accuracy well: some people were consistently better than others, and this tended to remain stable. Training (such as debiasing techniques) and forming teams also improved performance. Most impressively, putting the top 2% of forecasters – the “superforecasters” – into teams clearly outperformed the other variants. The superforecasters were a diverse group, smart but by no means geniuses, updating their beliefs frequently but in small steps.
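
A minimal sketch of what such a calibration check amounts to (the forecasts and outcomes below are simulated stand-ins of mine, not data from the project): bin the stated probabilities, compare each bin with how often the forecasted events actually happened, and compute a Brier score as a single summary number.

```matlab
% Calibration check: do events claimed to be X% likely happen about X% of the time?
forecasts = rand(1000, 1);                  % stated probabilities in [0, 1] (simulated)
outcomes  = rand(1000, 1) < forecasts;      % simulated outcomes of a calibrated forecaster
edges = 0:0.1:1;
bin = discretize(forecasts, edges);         % group forecasts into ten probability bins
for b = 1:numel(edges) - 1
    in = (bin == b);
    fprintf('stated ~%.2f: came true %.2f of the time (n = %d)\n', ...
        mean(forecasts(in)), mean(outcomes(in)), nnz(in))
end
brier = mean((forecasts - outcomes).^2)     % single-number summary (lower is better)
```

A proper scoring rule like the Brier score is what makes it possible to rank forecasters by accuracy rather than by confidence or drama.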

The key to this success was that a computer- and statistics-aided process found the good forecasters and harnessed them properly. (It also helped that the forecasts were on a shorter time horizon than the policy questions Tetlock analysed in his previous book: this both enables better forecasting and provides the all-important feedback on whether the forecasts worked.)

Another good example is the Galaxy Zoo, an early crowd-sourcing project in galaxy classification (which in turn led to the Zooniverse citizen science project). It is not just that participants can act as weak classifiers and be combined through a majority vote into reliable classifiers of galaxy type. Since the type of some galaxies is agreed on by domain experts, those galaxies can be used to test the reliability of participants, producing better weightings. But it is possible to go further and classify the biases of participants to create combinations that maximize the benefit, for example by using overly “trigger-happy” participants to find possible rare things of interest, and then checking those finds with more conservative and neutral participants to become certain. Even better, this can be done dynamically as people slowly gain skill or change preferences.
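
A hedged sketch of the weighting idea (all the data and sizes below are invented stand-ins; the real Galaxy Zoo pipeline is more sophisticated): estimate each participant’s hit rate on the expert-labelled galaxies, then let that hit rate weight their votes on everything else.

```matlab
% Weighted crowd classification: participants are scored against expert-labelled
% galaxies, and their votes on the remaining galaxies are weighted by that score.
nGalaxies = 500; nPeople = 40; nClasses = 3; nGold = 100;
votes = randi(nClasses, nGalaxies, nPeople);   % each participant's class label per galaxy (stand-in)
gold  = randi(nClasses, nGold, 1);             % expert labels for the first nGold galaxies (stand-in)

% Per-participant reliability: fraction of gold-standard galaxies they got right.
weights = mean(votes(1:nGold, :) == gold, 1);

% Weighted plurality vote: sum the weights behind each class and pick the heaviest.
consensus = zeros(nGalaxies, 1);
for g = 1:nGalaxies
    score = accumarray(votes(g, :)', weights', [nClasses 1]);
    [~, consensus(g)] = max(score);
end
```

The same scheme extends to the bias-aware combinations described above: estimate a per-class confusion matrix for each participant instead of a single accuracy, and weight their votes per class.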

The right kind of software and on-line “institutions” can shape people’s behavior so that they form more effective joint cognition than they ever could individually.

Conclusions

The big idea here is that it does not matter that individual experts, forecasting methods, classifiers or team members are fallible or biased, if their contributions can be combined in such a way that the overall output is robust and less biased. Ensemble methods are examples of this.

While just voting or weighting everybody equally is a decent start, performance can be significantly improved by linking the weights to how well the participants actually perform. Humans can easily be motivated by scoring (but look out for misaligned incentives: the score must accurately reflect real performance and must not be gameable).

In any case, actual performance must be measured. If we cannot tell whether some method is more accurate than another, then either accuracy does not matter (because it cannot be distinguished or we do not really care), or we will not get the feedback necessary to improve it. It is known from the expertise literature that feedback is one of the key factors that make it possible to become an expert at a task.

Having a flexible structure that can change is a good approach to handling a changing world. If people have disincentives to change their mind or change teams, they will not update beliefs accurately.

I got a good question after the talk: if we are supposed to keep our models simple, how can we use these complicated ensembles? The answer is of course that there is a difference between a complex and a complicated approach. The methods that tend to be fragile are the ones with too many free parameters and too much theoretical burden: they are the complex “hedgehogs”. But stringing together a lot of methods and weighting them appropriately merely produces a complicated model, a “fox”. Component hedgehogs are fine as long as they are weighted according to how well they actually perform.

(In fact, adding together many complex things can make the whole simpler. My favourite example is the fact that the Kolmogorov complexity of integers grows without bound on average, yet the complexity of the set of all integers is small – and actually smaller than that of some integers we can easily name. The whole can be simpler than its parts.)

In the end, we are trading Occam’s razor for a more robust tool: Bayes’ Broadsword. It might require far more strength (computing power/human interaction) to wield, but it has longer reach. And it hits hard.

Appendix: individual classifiers

I used Matlab to make the illustration of the ensemble classification. Here are some of the component classifiers; they are all based on the examples in the Matlab documentation. My ensemble classifier is merely a maximum vote among the component classifiers, each of which assigns a class to every point.
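
A minimal sketch of that pipeline (it assumes the Statistics and Machine Learning Toolbox; the particular component classifiers, the 200×200 grid and the restriction to the two petal measurements are illustrative choices, not necessarily what the original figures used):

```matlab
% Majority-vote ensemble over the Fisher iris data.
load fisheriris                               % meas (150x4) and species (150x1 cellstr)
X = meas(:, 3:4);                             % petal length and width, for a 2-D picture
classes = unique(species);

models = {fitcnb(X, species), ...                                 % naive Bayes, Gaussian
          fitcnb(X, species, 'DistributionNames', 'kernel'), ...  % naive Bayes, kernel densities
          fitctree(X, species), ...                               % decision tree
          fitcdiscr(X, species)};                                 % linear discriminant analysis

% Let every component classify each point of a grid covering the data.
[x1, x2] = meshgrid(linspace(min(X(:,1)), max(X(:,1)), 200), ...
                    linspace(min(X(:,2)), max(X(:,2)), 200));
pts = [x1(:) x2(:)];
votes = zeros(size(pts, 1), numel(models));
for i = 1:numel(models)
    [~, votes(:, i)] = ismember(predict(models{i}, pts), classes);
end

% The ensemble decision is simply the most common vote at each grid point.
ensemble = mode(votes, 2);
gscatter(pts(:,1), pts(:,2), classes(ensemble))   % ensemble decision regions
hold on
gscatter(X(:,1), X(:,2), species, 'krb', 'o')     % training data on top
```

Each component produces its own decision regions (the figures below); the row-wise mode of the vote matrix gives the ensemble regions shown above.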

Iris data classified using a naive Bayesian classifier assuming Gaussian distributions.
Iris data classified using a decision tree.
Iris data classified using Gaussian kernels.
Iris data classified using linear discriminant analysis.

 

Strategies for not losing things

A dear family member has an annoying tendency to lose things – sometimes causing a delaying “But where did I put the keys?” situation when leaving home, sometimes brief panics when wallets go missing, and sometimes causing losses of valuable gadgets. I rarely lose things. This got me thinking about the difference in our approaches. Here are some strategies I seem to follow to avoid losing things.

This is intended more as an exploration of the practical philosophy and logistics of everyday life than an ultimate manual for never losing anything ever.

Since we spend so much of our time in everyday life, the returns of some time spent considering and improving it are large, even if the improvement is about small things.

Concentric layers

I think one of my core principles is to keep important stuff on me. I always keep my phone in my breast pocket, my glasses on my nose, my wallet and keys in my pocket. On travel, my passport is there too. My laptop, travel/backup drive, business cards, umbrella, USB connectors etc. are in the backpack I carry around or have in the same room. If I had a car, I would have tools, outdoor equipment and some non-perishable snacks in the trunk. Books I care about are on my own bookshelf, other books are distributed across my office or social environment.

The principle is to ensure that the most important, irreplaceable things are under your direct personal control. The probability of losing stuff goes up as it moves away from our body.

Someone once said: “You do not own the stuff you cannot carry at a dead run.” I think there is a great deal of truth to that. If things turn pear-shaped I should in principle be able to bail out with what I have on me.

A corollary is that one should reduce the number of essential things one has to carry around: fewer things to keep track of. I was delighted when my clock and camera merged with my phone. The more I travel, the less I pack. But fewer, more essential things also increase the cost of losing any one of them: there is a balance to be struck between resilience and efficiency.

Layering also applies to our software possessions. Having files in the cloud is nice as long as the cloud is up, the owner of the service behaves nicely to you, and you can access it. Having local copies on a hard drive means that you have access regardless. This is extra important for those core software possessions like passwords, one time pads, legal documents or proofs of identity – ideally they should be on a USB drive or other offline medium we carry at all times, making access hard for outsiders.

For information, redundant remote backup copies also work great (a friend lost 20 years of files to a burglar – her backup hard drives were next to the computer, so they were stolen too). But backups are very rarely accessed: they form a very remote layer. Make sure the backup system actually works before trusting it: as a general rule you want to have ways to notice when you have lost something, but remote possessions can often quietly slip away.

Minimax

Another useful principle, foreshadowed above, is minimax: minimize the max loss. Important stuff should be less likely to be lost than less important stuff. The amount of effort I put into thinking up what could go wrong and what to do about it should be proportional to the importance of the thing.

Hence, think about what the worst possible consequence of a loss would be. A lost pen: annoying if there isn’t another nearby. A lost book: even more annoying. A lost key: lost time, frustration and quite possibly locksmith costs. A lost credit card: the hassle of blocking and replacing it, and no way to buy things in the meantime. Identity theft: major hassle, long-term problems. Lost master passwords: loss of online identity and perhaps reputation. Loss of my picture archive: loss of part of my memory.

The rational level of concern should be below the probability of loss times the consequences. We can convert consequences into time: consider how long it would take to get a new copy of a book, get a new credit card, or handle somebody hijacking your Facebook account (plus lost time due to worry and annoyance). The prior probability of losing books may be about 1%, while identity theft has an incidence of 0.2% per year. So if identity theft would cause a month of work for you, it is probably worth spending a dedicated hour each year to minimize the risk.
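
To make the arithmetic explicit (the hour figures here are illustrative assumptions, not exact data): expected yearly loss ≈ probability per year × cost of the event. With the 0.2% incidence and a cost of roughly a month of work – say on the order of 160 hours, plus the worry and hassle – that gives about 0.002 × 160 ≈ 0.3 hours per year before the intangibles, so an hour a year of prevention is about the right order of magnitude if it meaningfully reduces the risk.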

Remember XKCD’s nice analysis of how long it is rational to spend optimizing daily tasks.

Things you have experience of losing a few times obviously require more thought. Are there better ways of carrying them, could you purchase suitable fasteners – or is their loss actually acceptable? Conversely, can the damage from the loss be mitigated? Spare keys or email accounts are useful to have.

There is of course a fuzzy border between conscientiousness, rationality and worry.

Scenarios

I have the habit of running through scenarios about possible futures whenever I do things. “If I leave this thing here, will I find it again?” “When I come to the airport security check, how do I minimize the number of actions I will need to take to put my stuff in the trays?” The trick is to use these scenarios to detect possible mistakes or risks before they happen, especially in the light of the minimax principle.

Sometimes they lead to interesting realizations: a bank ID device was stored right next to a card with a bank ID code in my wallet. While not enough to give a thief access to my bank account, together they would get past two of the three security steps (the remaining one being a not-too-strong password). I decided to move the device to another location near my person, making the loss of both the code and the device in a robbery or lost wallet significantly less probable.

The point is not to plan for everything, but to patch holes in your everyday habits over time as you notice them. Again, there is a fine line between forethought and worrying. I think the defining feature is emotional valence: if the thought makes you upset rather than “OK, let’s not do that”, then you are worrying and should stop. The same goes for scenarios you cannot actually do anything about.

When something has gone wrong, we should think through how to avoid ending up there again. But it also helps to notice when something nearly went wrong, and to treat that as seriously as if it had gone wrong – there are many more teachable instances of that kind than actual mistakes, although they are often less visible.

Poka-yoke

I love the idea of mistake-proofing my life. The trick is to set things up so my behaviour will be shaped to avoid the mistake: the standard example is putting your keys in your shoes or on the door handle, so that it is nearly impossible to leave home without them.

Often a bit of forethought can help construct poka-yokes. When washing clothes, the sound of the machine reminds me that it is ongoing, but when it ends there is no longer a reminder that I should hang the clothes – so I place coat hangers on the front door handle (for a morning wash) or in my bed (for an evening wash) to make it impossible to leave/go to bed without noticing the extra task.

Another mini-strategy is gestalt: put things together on a tray, so that they all get picked up or the lack of a key item is easy to notice. Here the tray acts as a frame that groups the objects. Seeing it can also act as a trigger (see below). For travel, I have ziploc bags with the currency, travel plugs, and bus cards relevant for different destinations.

Habits

One of the main causes of loss is attention/working memory lapses: you put the thing down somewhere for a moment, intending to put it back where it belongs, but something interferes and you forget where you placed it.

The solution is not really to try to pay more attention since it is very hard to do all the time (although training mindfulness and actually noticing what you do is perhaps healthy for other reasons). The trick is to ensure that other unconscious processes – habits – help fix the situation. If you always put stuff where it should be by habit, it does not matter that your attention lapses.

The basic approach is to have a proper spot where one habitually puts the particular thing. First decide on the spot, and start putting it there. Then continue doing this. Occasional misses are OK, the point is to make this an automatic habit.

Many things have two natural homes: their active home when you bring them with you, and  a passive home when they are not on you. Glasses on your nose or on your nightstand, cellphone in your pocket or in the charger. As long as you have a habit of putting them in the right home when you arrive at it there is no problem. Even if you miss doing that, you have a smaller search space to go through when trying to find them.

One can also use triggers – concrete cues – to start the action. When going to bed, put the wedding ring on the nightstand. When leaving the car, turn and lock the door once you are one pace beyond it. The trick here is that the cue can be visualized beforehand as leading to the action: imagine it vividly, ensuring that the two are linked. Every time you follow the trigger with the action, the link gets strengthened.

Another cause of lost items is variability: habits are all about doing the same thing again and again, typically at the same time and place. But I have a fairly variable life where I travel, change my sleep times and do new things at a fairly high rate. Trigger habits can still handle this, if the trigger is tied to some reliable action like waking up in the morning, shaving or going to bed – look out for habits that only make sense when you are at home or doing your normal routine.

One interesting option is negative habits: things you never do. The superstition that it is bad luck to put the keys on the table serves as a useful reminder not to leave them in a spot where they are more likely to be forgotten. It might be worth cultivating a few similar personal superstitions to inhibit actions like leaving wallets on restaurant counters (visualize how the money will flee to the proprietor).

Checklists might seem like overkill, but they can be very powerful. They can be habits, or literal rituals with prescribed steps. The habit could just be a check, triggered whenever you leave a location, that the everyday objects on your list are with you. I am reminded of the old joke about the man who always made the sign of the cross when leaving a brothel. A curious neighbour eventually asks him why he, such an obviously religious man, regularly visits such a place. The man responds: “Just checking: glasses, testicles, wallet and watch.”

Personality

I suspect a lot just hinges on personality. I typically do run scenarios of every big and small possibility through my head, I like minimizing the number of things I need to carry, and as I age I become more conscientious (a common change in personality, perhaps due to learning, perhaps due to biological changes). Others have other priorities with their brainpower.

But we should be aware of who we are and what our quirks are, and take steps based on this knowledge.

The goal is to maximize utility and minimize hassle, not to be perfect. If losing things doesn’t actually bother you or prevent you from living a good life, this essay is fairly irrelevant. If you spend too much time and effort preventing possible disasters, then a better time investment is to recognize this and start living a bit more.