While researching cognitive biases, their examples, and the reasons behind their origin, we at Selfmastered have noticed that most of the content out there focuses merely on the surface and misses the true definition and origins of cognitive biases.
We could write an article, list a few of the most common cognitive biases, add in the basic definition, and we would have exactly the same superficial article that is already popular.
Knowing the different kinds and examples of cognitive biases is important; we are not saying otherwise. However, when examples of cognitive biases are presented with only a brief introduction and an incorrect definition, important aspects such as their usefulness, origin, and place in our cognitive toolbox are overlooked and never really explored.
In fact, the study and understanding of cognitive biases are important because of their practical implications in areas like entrepreneurship, peak performance, finance, management, uncertainty, and decision making: topics which are our bread and butter.
To avoid remaining at the superficial level, we have read and reviewed the main papers that gave origin to the study of cognitive biases: mainly the ones written by Amos Tversky and Daniel Kahneman, but also others written by lesser-known authors (all sources used are linked at the end).
Let’s start from the beginning, which is to give a proper definition of cognitive biases.
Cognitive biases are defined as systematic patterns of deviation from norm or rationality in judgement (1). Whereas most content on cognitive biases accepts the view that they are merely systematic errors, cognitive biases are born as solutions to decision making problems.
This is an important mental shift to undertake when studying cognitive biases. As we will learn later in the article, cognitive biases are the result of the continuous decisions we make throughout our decision making process. They only become an issue when: A) we do not know about our cognitive biases, and B) we view them under an incorrect lens.
What do we mean when we say we need to view them under a different lens?
Take, for example, the mechanism behind fever. Viewed under the lens of comfort, fevers make people feel miserable, which can lead us to believe that fevers are design flaws. When we realize that fevers are the body’s mechanism to raise its temperature as a natural defence against pathogens, fevers cease to be considered a flaw and become a feature of the human body.
This failure to view the same mechanism under a different lens is exactly why cognitive biases are seen as design flaws. In reality, cognitive biases are the result of decisions made via shortcuts and heuristics; we only shine a light on them when they go wrong, ignoring that most of the time they work.
In fact, Haselton argues in his paper (1) that cognitive biases are design features, not design flaws. The reason is that “some design features that appear to be flaws when viewed in one way are revealed to be adaptations when viewed differently”, as exemplified above with the analogy of fever.
While we have given some indication as to why cognitive biases occur, let’s dive deeper into the reasons behind their origin.
Cognitive biases, simply put, are the result of how we make decisions every day.
According to Kahneman and Tversky (4), “people rely on a limited number of heuristic principles which reduce the complex tasks of assessing probabilities and predicting values to simpler judgemental operations.”
We rely on heuristics to do most of the heavy lifting in decision making because they are effective and highly economical (in terms of time and energy). However, this over-reliance on a small number of heuristics leads to systematic errors (the common view of cognitive biases).
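To make this trade-off concrete, here is a toy sketch of our own (an illustration, not taken from the papers): a shortcut that only averages the first few readily available observations is far cheaper than averaging everything, is usually close to the truth when what is available is representative, and errs systematically when it is not.

```python
import random

random.seed(1)

def full_estimate(xs):
    """'Effortful' estimate: average every observation."""
    return sum(xs) / len(xs)

def heuristic_estimate(xs, k=10):
    """Shortcut: average only the first k readily 'available' observations."""
    sample = xs[:k]
    return sum(sample) / len(sample)

# When observations arrive in a representative (random) order,
# the shortcut is cheap and usually lands close to the truth.
data = [random.gauss(50, 10) for _ in range(10_000)]
print(full_estimate(data), heuristic_estimate(data))

# But when the readily available observations are unrepresentative
# (here: the data arrives sorted), the same shortcut errs systematically,
# always in the same direction.
biased_view = sorted(data)
print(full_estimate(biased_view), heuristic_estimate(biased_view))
```

The point of the sketch is the pattern, not the numbers: the same cheap rule is reliable in its usual environment and predictably wrong outside it, which is what “systematic error” means.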
This is the main point we at Selfmastered argue: most who write about cognitive biases do not understand that we rely on heuristics and rules of thumb for most of our decisions because they work most of the time. The idea is not to eliminate them from our decision making toolbox, but to know when to rely on them and when not to.
Only when they go wrong do we see the flaw in their use. Reality, however, indicates that they do work most of the time.
According to Haselton: “Much experimental work has focused on the cases where these rules [the heuristics] lead to illogicality or error, but the assumption is that over a broad range of fitness-relevant past scenarios, they were highly effective.”
For this exact reason, that the few heuristics and rules of thumb we offload most of our decision making onto work most of the time, they have been “passed on” through evolution, shaping how people of the past and present make decisions.
However, while heuristics are mentioned throughout the article, they are not the only origin of cognitive biases, only the most popular one.
Treating all forms of cognitive biases as if they originated for the same reasons is another flaw in most bias-related content we have read. Cognitive biases are not born equal; they can originate from heuristics, error management effects, or research artifacts.
The above classification is mainly taken from Haselton’s work, which we think is the closest approximation to reality when it comes to the different origins of cognitive biases.
Heuristics, according to Haselton, are the result of evolutionary or information processing constraints. They are mechanisms that work well in most circumstances but are prone to break down in systematic ways (the main argument we have made throughout the article).
But heuristics are not the only reason for the appearance of cognitive biases.
Error management biases, according to Haselton, arise because “selection favoured biases toward the less costly error. Error rates are increased, but net costs are reduced.” The idea is that it is better to make small, frequent, low-cost errors than rare but costly ones.
As we have mentioned above, we rely on a small number of rules of thumb for most of our decision making because the cost of error is small, especially when compared with the number of times they prove a success.
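The arithmetic behind error management is easy to check with a toy model (all numbers below are hypothetical, chosen only to illustrate the asymmetry): a “paranoid” rule that alarms at the slightest sign makes many more errors than an “accurate” rule, yet its expected cost per event is lower, because the errors it makes are the cheap kind.

```python
# Toy model of the error management idea: a "threat" occurs with some
# probability; a false alarm costs little, a miss costs a lot.
# The rule that minimises the error RATE is not the rule that
# minimises the expected COST.

P_THREAT = 0.05        # how often a real threat occurs
COST_FALSE_ALARM = 1   # cheap error: fleeing a rustle that was just wind
COST_MISS = 100        # costly error: ignoring a rustle that was a predator

def error_rate(p_alarm_if_threat, p_alarm_if_safe):
    """Fraction of events on which the rule makes the wrong call."""
    return P_THREAT * (1 - p_alarm_if_threat) + (1 - P_THREAT) * p_alarm_if_safe

def expected_cost(p_alarm_if_threat, p_alarm_if_safe):
    """Average cost per event: misses plus false alarms, weighted by cost."""
    misses = P_THREAT * (1 - p_alarm_if_threat) * COST_MISS
    false_alarms = (1 - P_THREAT) * p_alarm_if_safe * COST_FALSE_ALARM
    return misses + false_alarms

# An "accurate" rule: alarms only when fairly sure (few false alarms, some misses).
accurate = (0.70, 0.02)
# A "paranoid" (biased) rule: alarms at the slightest sign.
paranoid = (0.99, 0.30)

print(error_rate(*accurate), expected_cost(*accurate))    # fewer errors...
print(error_rate(*paranoid), expected_cost(*paranoid))    # ...but higher cost
```

With these numbers the paranoid rule is wrong roughly eight times as often, yet its expected cost is several times lower: exactly the trade selection is claimed to favour.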
Artifacts are apparent biases that are really errors of research strategy. According to Haselton, they result from the application of inappropriate normative standards or from placing humans in unnatural settings.
While artifacts belong more to the realm of statistics and study design, they are still worth mentioning. Let’s go back to heuristics.
Why do we use this small number of heuristics for most of our decision making? Well, as mentioned above, the main reason is that they work most of the time. As we wrote in How To Algorithmize Peak Performance, decisions can be made from the basal ganglia or the prefrontal cortex.
The former concerns more automatic, auto-pilot actions such as brushing your teeth before bed, taking a daily shower, or putting your shoes on, while the latter concerns deliberate decisions such as which task to perform or where to go to eat.
The idea behind algorithmizing peak performance is that some tasks and decisions, the more repetitive ones handled by the basal ganglia, are made at almost no cost in cognitive processing power, whereas other tasks demand more cognitive processing power to be dealt with.
Heuristics are part of the decisions the basal ganglia performs. They are “cheap” in terms of cognitive cost, they are reliable (most of the time), and even when they fail, the cost of failure is minute.
In fact, Haselton argues that: “Decision-making adaptations can be simple but still as effective as complex strategies on real-world tasks. […] These simple strategies, and others like them, form the armamentarium that natural selection has tended to use in creating decision-making adaptations. Combinations of these strategies are used by an array of distinct, domain-specific, evolved mental mechanisms.”
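Haselton’s claim that simple strategies can rival complex ones can be illustrated with a small simulation of our own (a sketch, not from his paper), using a “take-the-best”-style rule from the fast-and-frugal heuristics literature: decide using only the single most important cue on which two options differ, and compare it against a full weighted evaluation of every cue.

```python
import random

random.seed(0)

# Hypothetical setup: choose the better of two options, each described by
# three binary cues of decreasing importance. The weights are compensatory
# (3 < 2 + 2), so the shortcut CAN disagree with the full evaluation.
WEIGHTS = [3, 2, 2]

def full_evaluation(a, b):
    """'Complex' strategy: weigh every cue, pick the higher total (ties -> a)."""
    score = lambda opt: sum(w * c for w, c in zip(WEIGHTS, opt))
    return 0 if score(a) >= score(b) else 1

def take_the_best(a, b):
    """Simple heuristic: decide on the first cue that discriminates."""
    for ca, cb in zip(a, b):
        if ca != cb:
            return 0 if ca > cb else 1
    return 0  # no cue discriminates: default to the first option

trials = 10_000
agree = sum(
    full_evaluation(a, b) == take_the_best(a, b)
    for a, b in (
        ([random.randint(0, 1) for _ in range(3)],
         [random.randint(0, 1) for _ in range(3)])
        for _ in range(trials)
    )
)
print(f"heuristic agrees with full evaluation in {agree / trials:.0%} of cases")
```

Under these (made-up) weights the one-reason rule matches the full computation in the large majority of random cases while inspecting at most one cue: cheap, mostly right, and wrong only in a small, systematic minority of configurations.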
And this is the real reason our brains, throughout millions of years of evolution, came to prefer heuristics and rules of thumb for most of our decision making:
From a strategic point of view, it makes sense to rely on heuristics and rules of thumb that translate into the right decision most of the time and, when they get it wrong, do not truly affect you too much.
We do not argue that cognitive biases carry no danger or cost, far from it. You will not die from falling for the decoy effect or for anchoring, but they can lead to poor decisions on the occasions they do not work.
At Selfmastered, while we know that cognitive biases rarely threaten survival when they go wrong, we do not seek to merely survive. No, we seek for the reader to accompany us on a journey towards self-mastery, which means excelling, which means managing oneself. Not merely surviving.
The original focus of the study of cognitive biases by Tversky and Kahneman was the difficulty of judging probability or frequency. These studies led to the identification of heuristics like representativeness and availability, fallacies like the gambler’s fallacy, and biases such as anchoring and adjustment.
In fact, according to Kahneman and Tversky (3): “The study of biases has practical implications in decision making, and they can illuminate the psychological processes that underlie perception and judgment.”
By learning the real definition of cognitive biases and their origins, seeing them as the result and consequence of the small repertoire of heuristics and rules of thumb we rely on, we can get to truly know our cognitive toolbox.
It is our mission at Selfmastered to help you upgrade your cognitive toolbox so you have the best possible chance of achieving self-mastery. This starts by first doing some myth-busting on popular topics that are incorrectly taught, to then offer a better understanding of the world. One closer to reality.