4 Misconceptions in Psychology and How They Were Debunked

How many times have you heard that we only use ten percent of our brains, or that whether the left or right side of your brain is dominant affects your personality and talents? These are only two of the vast number of misconceptions that some people still hold about psychology, some of which relate to theories that have been debunked for decades. Like any other science, psychology makes progress through theories being formulated, tested and then either affirmed or superseded – but what happens among scientists can take a while to filter through to the general population. If you’re interested in psychology and the progress that has been made, here are some of the most commonly cited mistaken theories, misconceptions and outdated ideas that you might still encounter, and how they came to be corrected.

1. Phrenology: the shape of someone’s head tells you about their personality

You might still sometimes see phrenology models for sale, but it’s usually as quirky decoration these days.

At an instinctive level, a lot of us believe that the way someone looks tells us something about their personality, even their ethics – such as seeing a photo of a criminal in the news and thinking that yes, they do have a shifty look about them. Disturbingly, AI researchers have even tried to create an algorithm on this basis, identifying people as potential criminals based on facial recognition.
It’s true that some things that correlate with criminality are evident in someone’s face. Most obviously there are the effects of poverty, such as bad skin, the impacts on the body of lead poisoning, poor dentistry and so on – and poverty, of course, is a key factor in driving people to crime. Another example is foetal alcohol syndrome, which affects the face and can also have neurological symptoms that may lead those affected to get into trouble with the law at higher rates than the general population.
But phrenology, a discipline that arose in the late 18th century, went much further than this, holding that the shape of someone’s skull revealed things about their personality, such as how cautious they were, how truthful or how kind. By the 1840s, phrenology had mostly been dismissed by the scientific community. In particular, Jean Pierre Flourens carried out experiments on the brains of rabbits and pigeons, observing the effects on the animals’ behaviour, and thereby demonstrating that the effects of changes to parts of the brain were quite different from the ones that phrenologists associated with those parts of the brain. Disagreements between phrenologists about what each part of the skull represented also helped to expose the flaws in their scientific reasoning.
Unfortunately that wasn’t the end of phrenology or its sister disciplines, physiognomy and craniometry, which similarly used physical features to determine personality and other traits. Though largely abandoned by legitimate scientists, phrenology was picked up in popular science, and used internationally to justify racism. The argument was that the difference between the phrenological attributes of different races demonstrated the inherent difference in their skills and morality, and therefore justified discrimination on racial grounds – in particular, in the argument that phrenology showed some races were naturally more inclined to criminality. This was a belief that fed into the Rwandan genocide and Nazi racial theories, and is still popular among some white nationalist groups. A theory that scientists have doubted for over 170 years is still having a negative impact on people’s lives today.

2. The Stanford Prison Experiment: anyone can be evil in the wrong situation

The Stanford Prison Experiment aimed to explain whether our environment or our personality leads to abusive and authoritarian behaviour.

The Stanford Prison Experiment is surely one of the most famous experiments in the history of psychology. In 1971, psychology professor Philip Zimbardo of Stanford University wanted to investigate the causes of abusive behaviour in prisons – in particular, whether this related to the personalities of guards and prisoners, or whether the environment of a prison led to abuse. Twenty-four students were paid to take part (they are typically described as volunteers, but were paid the equivalent of nearly $100 per day), with half assigned the role of prisoners and half the role of guards. They had all been screened for psychological well-being and criminal records, and were chosen on the basis that they ought to have been less likely to exhibit abusive behaviour.
Within a couple of days, these apparently respectable, upstanding young men had started to go off the rails. About a third of the “guards” started acting sadistically towards the “prisoners”, including taking their mattresses away, refusing to empty their sanitation buckets, placing a prisoner in solitary confinement in a closet and then beating the door, and other humiliating acts. After just six days, the experiment was called off at the urging of Zimbardo’s girlfriend, Christina Maslach, a psychology graduate student who found the conditions the prisoners were forced to endure inhumane.
Zimbardo concluded that the experiment demonstrated that in the wrong circumstances, anyone can do evil things. Motivated by that belief, in 2004 he even testified in the defence of one of the prison guards from Abu Ghraib, who was charged with and pleaded guilty to torturing prisoners there. Zimbardo’s conclusion has been taken at face value in popular psychology, with the idea repeated that the Stanford Prison Experiment shows that certain circumstances are all it takes to make good people turn evil.
But in the past couple of decades, psychologists have increasingly cast doubt on Zimbardo’s conclusions. First, there’s the obvious problem with using the Stanford Prison Experiment to judge human nature in general, rather than just the nature of the very narrow spectrum of humanity that the volunteers represented: mostly white, mostly middle-class men of college age from the USA and Canada, all from a very similar cultural background, who were inclined to respond to an advert about a “psychological study of prison life”. They couldn’t in any sense be said to represent a diverse group.
Yet it’s doubtful whether the experiment even reveals much about the nature of young, white, middle-class men in the Stanford area, let alone anyone else. The experiment couldn’t be properly repeated in later years as ethical standards became stricter, but partial replications didn’t get the same results; instead, a BBC version of the experiment had rebellious prisoners and guards who were uncomfortable exerting their authority. One key difference was that Zimbardo had encouraged the guards to see the prisoners as lesser, while the BBC experiment allowed whatever dynamic emerged to develop naturally. This points to a key flaw in the original experiment: Zimbardo was far too personally involved to observe it from a neutral position. Ultimately, the Stanford Prison Experiment was so flawed that there’s very little we can reliably conclude from it, except that a bad situation can encourage four young men to behave sadistically when egged on by the researcher in charge.

3. The Rorschach test: what you see in inkblots can indicate mental health problems

What do you see in the inkblot?

The Rorschach test, developed by Swiss psychiatrist Hermann Rorschach, is possibly the psychological test that’s best known in popular culture, from crime dramas to the character of Rorschach in the comic and film Watchmen.
The concept is simple: the subject is shown a series of inkblots and asked what they can see. Psychologists administering the test have a list of ‘normal’ and ‘abnormal’ answers. Depending on the answers given, the psychologist assesses whether the subject has assorted mental illnesses, or – in children – whether they have been abused. The idea underlying the Rorschach test, that our interpretation of an ambiguous image reveals something about our mental state, is an ancient one; Hermann Rorschach simply attempted to formalise it. For instance, he suggested that people with schizophrenic depression would see fewer animals in the inkblots than a neurotypical subject. Subsequent scientists have tried to standardise the test further, to ensure that psychologists interpret their patients’ responses consistently.
The problem is that the test doesn’t work. Some researchers have suggested that it can be used to diagnose schizophrenia, but little else; and indeed, though Rorschach supported the use of the test to distinguish between multiple different mental illnesses, he opposed it being used as a general-purpose personality test. Other researchers suggest that the test is entirely useless as a diagnostic tool.
There are several reasons for casting doubt on the test. The first, and most damning, is that the scientific evidence that a Rorschach test is a functional diagnostic tool is very thin. There are plenty of anecdotes in which a subject has unintentionally revealed themselves through a Rorschach test, but very few trials where Rorschach tests have successfully held their own against more modern diagnostic tools.
Beyond this, so much depends on the interpretation of the test that psychologists can very easily see results that confirm their own preconceptions about the subject, rather than the test providing them with independent data. The test also doesn’t accommodate differences in culture very effectively; what is an ‘abnormal’ response among white Europeans can be more common – for instance – among African-Americans, leading to greater rates of false positives among minority groups. In this way, the test identifies less whether the subject has a mental illness, and more whether they are unusual by subjective cultural standards.
Finally, if the Rorschach test ever worked, the internet is in the process of killing it off. Pro-Rorschach psychologists agree that subjects need to come to the images ‘fresh’; if you’ve seen them before, the test doesn’t work. For most of the 20th century the images were kept secret among psychologists, but that’s no longer possible. The inkblots are featured on TV, in films, and all over the internet – there’s even a full list with the most common responses on Wikipedia. Chances are, most people have seen one or two of the Rorschach inkblots before they ever encounter them in psychological testing; and for people with an interest in getting a particular diagnosis, there are websites that will coach them in what to say when given the test. You might still encounter an inkblot or two in future, but it’s likely to be as a prompt for conversation, not for diagnosis.

4. Primal therapy: screaming, hysterics and violence help to deal with pain

Primal therapy encouraged patients to let out all their emotions, even by screaming.

How to help patients cope with trauma, especially childhood trauma, has long been a challenge for psychologists. It’s been clear for some time that the traditional ‘stiff upper lip’ approach of ignoring and repressing trauma only makes matters worse. Primal therapy, developed by psychologist Arthur Janov, takes the opposite approach: patients are required to relive their traumatic early childhood experiences, fully feeling and re-experiencing all of the pain of that time. They can then react in all the ways that they might not have been able to at the time, such as screaming, shouting, crying, and even hitting something like a punching bag or a pillow. This supposedly allows them to process the trauma in a way that they couldn’t originally, and after an average of eight months of therapy sessions of this kind, the trauma is resolved and the patient cured.
But guess what? It doesn’t work. Undoubtedly shouting and screaming can offer temporary relief – it’s natural to feel better after a good yell or cry – but there’s no evidence that primal therapy does more than provide this sort of short-term catharsis. In the long term, some psychologists have suggested it may do more harm than good, not least because it stops people from attending other therapies that may actually be beneficial for them. Primal therapy was fashionable in the late 60s and early 70s, and had many celebrity fans at the time, but as the 70s turned into the 80s, the shortage of evidence that it actually worked was increasingly hard to ignore, and it was largely abandoned as a fad.
If there’s one group for whom primal therapy can be helpful, it’s what doctors refer to as the ‘worried well’. That’s a term for people who don’t have anything seriously wrong with them, but who could use a bit of attention and reassurance to feel better, or failing that, a good placebo – and that’s exactly what primal therapy can be.

Image credit: inkblot 1; phrenology; prison; inkblot 2; the Scream.