How To Make Better Decisions
Making tough decisions can paralyze even the most pragmatic people. Faced with too many options and no idea what the future holds, we often feel it's best just to ignore the situation. On top of that, humans are full of blind spots and biases, and future outcomes can be too bewilderingly complex to anticipate.
Thankfully, there are plenty of techniques that can help us work around this hard-wired hindrance. They can be applied to everyday situations as well as to monumental decisions that need a lot of time to process. Depending on your disposition, better decision-making can be as precise as a mathematical model or as simple as mulling it over.
We all fall prey to our blind spots while making decisions, even George Washington.
In the summer of 1776, the Revolutionary War in North America was in full swing. The Americans, led by George Washington, sought to break free from the shackles of British rule. But the British refused to let go. As they massed their navy with New York in their sights, Washington was left in a quandary. Although it was clear that an attack was coming, it was less clear how the British would launch it.
This lesson from history demonstrates just how complex decision-making in real-life situations can be.
Washington was faced with what’s known as a full-spectrum decision. That means numerous factors had to be taken into account for the right decision to be made.
In the battle for New York, Washington had a lot to think about. Where along the New York coast could British ships land? What effect would the strong currents of the East River have on moving his own troops from New York to Brooklyn?
Washington also had to consider the damage British cannons could do to New York’s fortifications and the potential loss of life among his own soldiers in pitched battle. He even had to consider internal American politics in the Continental Congress, which demanded that he stand firm against the British.
Needless to say, Washington had a tough time deciding what to do, and in the end he made the wrong call. In fact, he erred in the very first decision he made: he shouldn’t have tried to defend New York at all. Since the British outnumbered his forces, it would have been far wiser to retreat inland. But this mistake is not unique to Washington – we are all prone to overlooking our blind spots when making decisions.
There’s a name for this common error in human reasoning. It’s known as loss aversion. Studies repeatedly show it to be a characteristic innate to humans: we would rather avoid losses than seek gains, even when it’d be better in the long run to do the opposite. Washington, however, was smart enough not to stick it out until his troops were completely crushed. Once his forces began to lose, he quickly signaled the retreat. He was still a born leader, and the Revolutionary War would eventually be won, despite the many difficult decisions along the way.
Good decisions arise from considering diverse points of view from a diverse range of people.
As a general rule, governments and corporations function as hierarchies; the bosses make the important decisions. Unfortunately, that’s often not how the best decisions are made. In reality, complex decisions need to be supported by many points of view.
The water department for the Greater Vancouver area is a good example here. Faced with population growth, they needed to expand the freshwater resources available. This called for some complex decision-making. Resource options included using three existing reservoirs, building a pipeline to far-off lakes or drilling well fields alongside a nearby river.
To make the right decision, the department took numerous perspectives into consideration. That meant asking people living near the potential sources, indigenous tribes with sacred connections to the waters, environmental organizations, as well as health and water-security specialists.
A solution was found that satisfied all – a mile-long, earthquake-secure pipeline built to draw water from a dam on the Coquitlam River. This sort of broad approach to problem-solving leads to better decisions, because the possible advantages and disadvantages of each solution are clarified as part of the process. In short, a diversity of perspectives ensures better decision-making.
This is also backed up by a series of mock-trial studies conducted by psychologist Samuel Sommers in the mid-2000s to test juries’ decision-making processes. The results showed that racially mixed juries were overwhelmingly better at doing their job than white-only juries. Diverse juries spotted more interpretations of the evidence submitted, were more accurate in recalling the facts of the case and had longer and more forensic deliberations. On the other hand, ethnically homogenous groups made decisions too hastily and did so without questioning biased assumptions. Scientists have extrapolated from this that the same is probably true of homogeneity in general, whether that be of gender or political orientation. However, further studies are needed to substantiate that view.
The average human can't predict the future – and experts are even worse at it.
Decision-making would be pretty easy if we already knew what the future holds. If you were aware of where real estate prices were going to skyrocket in twenty years, it would be a no-brainer to buy property there. Unfortunately, humans are dreadful at guessing the future.
The political scientist Philip Tetlock demonstrated this more than 20 years ago in his “forecasting tournaments.” In these, participants competed against others to predict what the future held for subjects like the environment or gender relations. The questions posed looked at long-term political and economic developments – would a member of the European Union leave it within the decade, or would the US experience an economic downturn in the next five years?
Tetlock collected 28,000 predictions from these forecasting tournaments and then waited to find out their accuracy. At the same time, he compared those predictions with two very simple algorithmic predictions. One algorithm forecasted no change, while the other indicated that change would continue at the current rate. With grim inevitability, human predictions were almost always less accurate than the standard forecast predicting the continuation of current trends.
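To make the comparison concrete, here is a minimal sketch of what two such baseline forecasts might look like. The details of Tetlock’s actual algorithms aren’t given here, so the function names and the toy data below are purely illustrative assumptions.

```python
# Two toy baseline forecasts in the spirit of the simple algorithms
# described above. The data and function names are invented for
# illustration only.

def forecast_no_change(history):
    """Predict that the next value will equal the most recent one."""
    return history[-1]

def forecast_same_rate(history):
    """Predict that change will continue at the most recent rate."""
    return history[-1] + (history[-1] - history[-2])

gdp_growth = [2.0, 2.5, 3.0]           # a made-up series of yearly figures
print(forecast_no_change(gdp_growth))   # 3.0
print(forecast_same_rate(gdp_growth))   # 3.5
```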
So much for the average Joe. But what about experts? Well, it turned out that they were particularly poor at predicting the future. Incredibly, experts in economics and politics did worse in Tetlock’s experiment than people with no specialized knowledge!
That might seem surprising, but non-experts did better because they took a broad view and weighed a wide range of factors. This is a characteristic found among the best forecasters of the future. When asked about the health of the economy in five years, generalists considered market trends, but also technological innovation, education, farming practices, population growth and more. Experts, on the other hand, just couldn’t break out of their own fields. Their narrow specialist convictions led them to wildly inaccurate predictions. Economists, for instance, were either convinced that capitalism would crumble or that growth would reach unprecedented levels.
Future events depend on converging factors that can’t always be predicted from current trends.
When it comes to envisaging how current trends will develop, humans tend to believe that things will just go on as they are indefinitely. Unfortunately, that assumption often doesn’t hold – chaos and unpredictability cannot be dismissed when anticipating the future.
George Orwell fell into the same trap when writing his famous near-future dystopia, 1984. He conceived the novel in 1944, when Nazi power still held much of Europe in its grip. He had witnessed the rise of fascism in the preceding decades as well, so it made sense that he foresaw dictatorial systems of government continuing their spread. But – thankfully – that didn’t turn out to be the case.
In reality, future events occur because various factors converge, and these, by their nature, are frequently unforeseeable. Consider the largely unanticipated rise of the personal computer. Computers as we know them came to exist because breakthroughs in numerous fields converged at the same time: mathematics and robotics made great strides forward, alongside advances in microwave signal processing and silicon circuits.
If you’d wanted to predict the rise of computers, it would have been necessary to know that developments in mathematics would massively expand the potential of computer programming languages. Equally, you’d have to have surmised that silicon circuits would be better than the vacuum tubes used in old-fashioned computers. Finally, you’d have to have known that older technologies, like radio waves, would be repurposed to transmit binary information. Needless to say, almost no one was in a position to predict what would later become not only possible, but commonplace.
Using red teams assists in planning and prediction, even in covert operations.
As we have seen, predicting the future is no easy task. With that said, there are still techniques that can help in prediction and decision-making. One increasingly popular method is the use of red teams.
A red team is a group created within an organization to act as the ‘enemy’ while the larger organization makes strategic decisions. For instance, an army unit might be weighing its options for launching an effective attack. Once it has formulated its plans, it passes them on to a red team. The red team then enters fully into the mind of the enemy and maps out all the different ways the enemy could counter the potential attack.
Using red teams has proven highly effective in a variety of spheres. Famously, they were instrumental in the decision-making behind the operation that led to the death of Osama Bin Laden in May 2011. Early in 2011, the US National Counterterrorism Center was drawing up plans to storm a mysterious compound in Abbottabad, Pakistan, where it suspected Osama Bin Laden was hiding. The assembled red team reckoned there was only a 50/50 chance Bin Laden would be found in the compound. But, more critically, their input meant the US government was better prepared for the unexpected.
It was actually the red team that spotted a potential problem: an unauthorized American military aircraft crossing Pakistani airspace could provoke a serious incident. Consequently, in the months before the attack, the government worked on diplomatic ties with Russia and on the shipping network in the Baltic Sea – all so that the covert squad would have an additional route in and out of Pakistan. There was a chance they would need it if the Pakistani government reacted aggressively to the covert operation. Of course, they found Bin Laden in the compound. The red team’s preparation helped make the mission more robust.
Governments use cost-benefit analysis for decision-making, even for environmental protection.
Ronald Reagan’s legacy as a US president is largely defined by his conservative agenda. Just think of his professed desire to cut taxes and government spending, as well as bolster the military. What’s less well known, though, is that some of his ideas found bipartisan support, most notably his advocacy of cost-benefit analysis in decision-making.
In February 1981, Reagan signed Executive Order 12291, which required a cost-benefit analysis to be conducted for every new regulation under governmental consideration. The analysis had to list the potential benefits and costs of the proposed regulation, and, critically, that remit stretched beyond purely monetary terms. Proposed measures had to demonstrate a net overall benefit. Finally, alternatives to the regulation had to be examined to ensure the best solution was the one being adopted.
Cost-benefit analysis has demonstrably done good, and nowhere is this clearer than in environmental protection. Under the Obama administration, it was used to support the argument for stronger environmental protection. Put simply, the social cost of carbon dioxide emissions was calculated.
Experts from the Office of Energy and Climate Change, the Council on Environmental Quality and many other governmental departments and agencies sat down together to work out the long-term effects and costs of the carbon that was being released into the atmosphere. Envisaged social costs included reduced agricultural yields, catastrophic weather events and forced population migration due to rising sea levels. The experts concluded that the social cost was $36 per ton of carbon dioxide released. Even though many external parties thought that this was still too conservative a calculation, it was the first time environmental costs had been monetized by the US government. It was the first step to ensuring the issue would be taken more seriously in governmental decision-making in the future.
Linear value modeling supports decision-making, no matter whether you’re a human or a machine.
Sometimes decision-making requires making some tough calls. Fortunately, there are tools that can help. Linear value modeling, a method borrowed from statistics, can assist in complex decision-making: it maps out the possible options and weighs them according to the values you assign.
Imagine you're deciding whether or not to get married. Value-based considerations might include finances, the possibility of having children, the value of free time alone and the desire for a life companion.
For each consideration, you estimate the likelihood of a satisfying outcome under each option – marrying or staying single. So, your chances of having a child might be 30 percent if you stay single, but 70 percent if you marry. You then weight each consideration. The weighting scale goes from 0 to 1, where 0 is ‘unimportant’ and 1 is ‘very important.’ So, if the idea of having children appeals to you, it might get a weight of 0.75, whereas you might give your freedom a weight of 0.25. You’d then multiply each weighting by the relevant likelihood, sum the results for each option, and compare the totals: the option with the higher score is the one to opt for.
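For the mathematically inclined, a linear value model like this is easy to sketch in code. The snippet below is a minimal sketch of the marriage example: the 0.75 and 0.25 weights come from the text, while the other weights and all the likelihoods are invented purely for illustration.

```python
# A minimal sketch of the linear value model described above. Only the
# 0.75 and 0.25 weights come from the example; every other number is an
# invented, illustrative assumption.

weights = {
    "children": 0.75,      # how much having children matters to you
    "free_time": 0.25,     # how much free time alone matters to you
    "finances": 0.5,       # invented weight
    "companionship": 0.9,  # invented weight
}

# Estimated likelihood (0-1) of a satisfying outcome for each
# consideration, under each option.
likelihoods = {
    "marry":       {"children": 0.7, "free_time": 0.3, "finances": 0.6, "companionship": 0.9},
    "stay_single": {"children": 0.3, "free_time": 0.9, "finances": 0.5, "companionship": 0.2},
}

def score(option):
    """Weighted sum of likelihoods for one option."""
    return sum(weights[c] * p for c, p in likelihoods[option].items())

for option in likelihoods:
    print(option, round(score(option), 2))
# The option with the higher total is the one the model recommends.
```

With these made-up numbers the model favors marrying, but the point of the exercise is the structure, not the verdict: changing the weights to reflect your own priorities can flip the result.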
And the uses of linear value modeling don’t stop with humans. The process can also help machines make decisions. Most likely, as self-driving cars become more common, they’ll start relying on linear value modeling. They’ll need it to work out the merits of outcomes based on given maneuvers, as well as the probability of those outcomes actually occurring.
Imagine a pedestrian stepping out into a two-way road with right-hand traffic as a self-driving car approaches. The car must assess the situation and make a choice. If it swerves to the right, there’s less chance of colliding with another vehicle but a greater risk of hitting the pedestrian. That outcome could easily be fatal, so avoiding it likely carries a large weight in the car’s value system. If the car isn’t travelling too quickly, it may prefer to swerve left: it’s quite likely to hit another vehicle, but since that collision would be unlikely to cause fatalities, the car assigns a lower weight to avoiding it.
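Here is a rough sketch of how that same weighted calculation might look from the car’s point of view. The maneuvers, probabilities and weights below are invented for illustration and don’t come from any real driving system.

```python
# A rough sketch of a linear value model applied to the swerving scenario.
# All probabilities and weights are invented, illustrative assumptions.

maneuvers = {
    # maneuver: (probability of hitting the pedestrian, probability of hitting a vehicle)
    "swerve_right":   (0.6, 0.1),
    "swerve_left":    (0.1, 0.7),
    "brake_straight": (0.4, 0.0),
}

WEIGHT_PEDESTRIAN = 1.0  # hitting the pedestrian is likely fatal, so weighted heavily
WEIGHT_VEHICLE = 0.2     # a vehicle collision is unlikely to be fatal, so weighted lightly

def expected_cost(p_pedestrian, p_vehicle):
    """Probability-weighted cost of a maneuver under the two weights."""
    return WEIGHT_PEDESTRIAN * p_pedestrian + WEIGHT_VEHICLE * p_vehicle

best = min(maneuvers, key=lambda m: expected_cost(*maneuvers[m]))
print(best)  # the maneuver with the lowest expected cost: swerve_left here
```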
Mathematical decision-making has its limits, but mulling things over will still get you a long way.
If mathematical decision-making is such a great idea, then you might think that only those who are able to do the sums are capable of making good decisions. But thankfully, that’s not the case. Those who are less mathematically minded can also make great decisions, simply by doing a little ruminating.
If you think about it, making a decision functions much like mathematical analysis. By mulling something over, you’re considering different options available to you over an extended period of time. The key is to allow yourself enough time to do this. That way, you won't forget important variables or overlook an option that might resolve the current set of conflicting advantages and disadvantages.
Once you've taken all possibilities into account, it’s important to give yourself a rest. Take some time out. Go for a long walk or get creative, just make sure you allow your mind to wander. This gives the brain’s default system – the part that processes stuff in the background while you’re doing everyday tasks – the time to filter through all that information and shape an informed intuitive decision.
The other advantage of mulling things over is that mathematical decision-making has its limits. Let’s return to the strike on Osama bin Laden’s compound. Irrespective of the work done by the red team, there was no mathematically certain way to calculate whether Bin Laden was actually there or not. It was the slow process of careful consideration and examining the situation from all sides that allowed the Obama administration to make assumptions about Bin Laden’s location. Mathematically, they knew there was only a 50 percent chance of Bin Laden really being there, so it was intuition that saved the day.
We’ve learnt a lot. Making decisions is never easy, and even math can't always get you to the finish line. But if you do a few sums and take the time to mull the variables and outcomes over, you’ll be well on your way to making a sound and informed decision.
Decision-making is difficult for each and every one of us. That’s because humans have a hard time predicting what the future outcome of any given decision will be and whether we will be happy with that outcome. That’s why it’s important to take the time to make decisions. Both the traditional approach of simply chewing things over and the more technical, mathematical approach of mapping out all options and variables can be useful.