This is the first post in a three-part series on bias in research, reductionism, and bias in behavior.
Bias is an inherent part of human nature and can significantly influence psychological research. It occurs when researchers or participants let personal beliefs, assumptions, or perspectives shape how they collect, analyze, or interpret data.
Biases can emerge at various stages: in how a study is designed and conducted, in how findings are interpreted, and in the behavior of both researchers and participants.
In this post, we’ll explore different types of biases in research and their impact on the integrity of findings. Read on to uncover answers to questions like: What is researcher bias? How does participant bias affect results? All illustrated with real-world examples!
Types: Researcher | Participant | Sampling | Publication | Gender | Cultural | Cognitive
Types of Biases in Research
1. Researcher Bias
This occurs when a researcher’s personal beliefs, expectations, or preferences unintentionally influence the design, conduct, or interpretation of a study. For example, if a researcher strongly believes in the effectiveness of a particular therapy, they might unconsciously design the study in a way that favors that belief or misinterpret ambiguous data to support it.
Example: A researcher studying the effectiveness of Cognitive Behavioral Therapy (CBT) for depression might focus on the positive outcomes and dismiss negative or neutral results, creating a biased conclusion.
Psychologist Philip Zimbardo’s Stanford Prison Experiment is a famous example of researcher bias. He unintentionally influenced the study’s outcomes by becoming too involved in his own role as the “prison superintendent”. His personal beliefs about authority and power dynamics led him to overlook ethical concerns and the escalating abuse by participants.
Zimbardo later admitted that his bias and involvement likely influenced both the “guards” and “prisoners,” contributing to the study’s dramatic and harmful outcomes.
2. Participant Bias (Hawthorne Effect)
Participants may alter their behavior simply because they know they are being observed. This is known as the Hawthorne Effect. If participants act differently in a study to please the researcher or present themselves in a better light, the data may not reflect their true behavior or attitudes.
The term “Hawthorne Effect” originates from a series of studies conducted at the Western Electric Hawthorne Works in the 1920s and 1930s. Researchers initially wanted to study how different lighting conditions affected worker productivity. However, they found that productivity increased not only when the lighting was improved, but also when it was reduced. This was simply because the workers knew they were being observed.
The participants, aware they were part of an experiment, worked harder to meet the expectations of the researchers, which led to biased conclusions about the factors influencing productivity.
3. Sampling Bias
Sampling bias occurs when the participants in a study are not representative of the wider population, often due to the method of selection. If certain groups are over- or under-represented, the results of the study may not generalize to the broader population.
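To make this concrete, here is a minimal Python sketch (the population, scores, and group labels are all invented for illustration): it compares a random sample against a convenience sample drawn from only one subgroup of a simulated population.

```python
import random
import statistics

random.seed(0)

# Simulated population of 10,000 people (values are illustrative):
# 70% belong to group X (mean score 40), 30% to group Y (mean score 60).
group_x = [random.gauss(40, 10) for _ in range(7000)]
group_y = [random.gauss(60, 10) for _ in range(3000)]
population = group_x + group_y

random_sample = random.sample(population, 200)    # every member equally likely
convenience_sample = random.sample(group_y, 200)  # only group Y gets recruited

print(f"True population mean:        {statistics.mean(population):.1f}")          # ~46
print(f"Random sample estimate:      {statistics.mean(random_sample):.1f}")       # close to 46
print(f"Convenience sample estimate: {statistics.mean(convenience_sample):.1f}")  # ~60, biased
```

The random sample tracks the true mean, while the convenience sample systematically overestimates it; recruiting more participants from group Y would not fix that, because the error comes from the selection method, not the sample size.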
The Framingham Heart Study, which began in 1948, focused primarily on white, middle-class residents of Framingham, Massachusetts. Such a limited sample raised concerns about its generalizability to the broader population.
However, while the original study was not fully representative of all demographics, subsequent research built on its findings and became more inclusive, addressing the sampling bias.
Read here for more on sampling types, strengths and weaknesses
4. Publication Bias
This occurs when studies with positive, statistically significant results are more likely to be published than those with null or negative results. This bias is sometimes driven by factors like funding pressures, the desire for researchers to secure future grants, or the academic community’s tendency to prioritize novel or exciting findings. As a result, the academic literature can become skewed, leading to a distorted view of the effectiveness or importance of a phenomenon.
Example: A study that finds a drug is ineffective may not get published, while one that shows positive results will. Over time, this gives the false impression that the drug is highly effective.
One well-known example of publication bias involves research on the effectiveness of antidepressants. Many studies with null or negative results (where the drugs didn’t show significant improvement) were never published, leading to an inflated perception of the drugs’ effectiveness. Psychologist Robert Rosenthal termed this the “file drawer problem”: the many “negative” studies left unpublished, stored away in file drawers.
Selective publication can mislead clinicians and the public into overestimating the benefits of these medications.
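A minimal simulation sketch (in Python; the effect size, trial size, and “publication” cutoff are illustrative assumptions, not values from any real trial) shows how the file drawer problem inflates apparent effectiveness: even when a drug truly does nothing, averaging only the studies that clear a significance-style threshold suggests a solid benefit.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.0  # the drug genuinely does nothing
N_STUDIES = 1000   # many small hypothetical trials
N_PATIENTS = 30    # participants per trial

all_estimates, published = [], []
for _ in range(N_STUDIES):
    # Each trial estimates the drug's effect from noisy patient outcomes.
    outcomes = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PATIENTS)]
    estimate = statistics.mean(outcomes)
    all_estimates.append(estimate)
    # Crude stand-in for "statistically significant":
    # only strong positive results make it out of the file drawer.
    if estimate > 0.3:
        published.append(estimate)

print(f"Mean effect across all {N_STUDIES} studies: {statistics.mean(all_estimates):+.3f}")
print(f"Mean effect across {len(published)} published studies: {statistics.mean(published):+.3f}")
```

Anyone reading only the published studies would conclude the drug works, even though the full set of trials shows no effect at all.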
5. Gender Bias
Gender bias occurs when gender influences the research process, from the choice of research topics and participant selection to data interpretation. Historically, both psychological researchers and their subjects have been predominantly male.
We criticise such data as “androcentric” (male-centered), leading to a lack of understanding of female behavior and experiences.
6. Cultural Bias
This occurs when research fails to account for cultural differences or researchers unintentionally impose their own cultural norms onto participants from different backgrounds.
Early versions of the Stanford-Binet IQ test were culturally biased toward Western, middle-class norms, demanding knowledge of Western cultural references and educational experiences. This meant that non-Western individuals often scored lower, not because they were less intelligent, but because the test did not accurately measure their cognitive abilities.
Cognitive Biases
Moving on to cognitive biases. These occur when people make decisions based on limited information or recent experiences rather than on a full range of data. We often rely on heuristics, mental shortcuts that help us make snap decisions in a time-starved world (the central theme of Daniel Kahneman’s Thinking, Fast and Slow).
7. Framing
Framing bias occurs when people make decisions or form opinions based on how information is presented, rather than on the content itself. The way a problem or issue is framed—whether positively or negatively—can significantly affect people’s perceptions and choices. This bias can lead to different conclusions based on the same underlying information, simply because it was worded differently.
The Asian disease problem, a classic thought experiment in psychology and behavioral economics created by Tversky and Kahneman, demonstrates how framing the same information positively (lives saved) or negatively (lives lost) can significantly influence people’s choices. When the outcome was framed positively (i.e. “200 people will be saved”), participants were more likely to choose the certain option that saved 200 people. When the same outcome was framed negatively (i.e. “400 people will die”), participants switched to the riskier treatment option, even though both options had identical expected outcomes.
8. Confirmation Bias
This is the tendency to search for, interpret, and remember information that confirms one’s pre-existing beliefs or theories while ignoring or dismissing information that contradicts them.
Example: If someone is convinced that social media negatively impacts mental health, they might only look for studies that support this view, ignoring research that shows positive effects.
Freud’s psychoanalytic theories offer a classic case: too often, he focused on interpreting evidence that supported his ideas about the unconscious mind, repression, and sexuality, while disregarding or minimizing evidence that contradicted them. This confirmation bias led him to see unconscious motivations even in cases where they might not have existed.
9. Anchoring Bias
Anchoring bias occurs when individuals rely too heavily on the first piece of information (the “anchor”) they receive when making decisions. Even if subsequent information contradicts the initial anchor, people tend to give disproportionate weight to that first piece of information, leading to biased judgments.
In a study by Tversky and Kahneman (1974), participants were asked to estimate the percentage of African countries in the United Nations after being shown a random number (either 10 or 65) on a spinning wheel. Those who saw the number 10 estimated a much lower percentage of African countries in the UN than those who saw the number 65. Even though the number was irrelevant to the question, it served as an anchor, influencing their estimation.
How to Minimize Bias in Research
Psychologists use several methods to try to minimize bias and ensure the integrity of their research:
- Random Sampling: Ensuring that participants are chosen randomly from a population to increase the representativeness of the sample
- Blinding: This involves keeping participants and/or researchers unaware of key aspects of the study (such as which group they are in) to prevent expectations from influencing the results
- Inter-Rater Reliability: Using multiple observers and checking that they agree on the interpretation of data can reduce bias from subjective judgment (see the sketch below)
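As a sketch of that last check, here is one common way to quantify inter-rater agreement, Cohen’s kappa, in Python (the two raters and their behavior codes below are hypothetical):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: probability both raters independently pick each category.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Two hypothetical observers coding the same 10 observed behaviors.
rater_a = ["aggressive", "neutral", "neutral", "aggressive", "neutral",
           "aggressive", "neutral", "neutral", "aggressive", "neutral"]
rater_b = ["aggressive", "neutral", "aggressive", "aggressive", "neutral",
           "aggressive", "neutral", "neutral", "neutral", "neutral"]
print(f"Cohen's kappa: {cohens_kappa(rater_a, rater_b):.2f}")  # 0.58: moderate agreement
```

A kappa near 1 means the observers agree far beyond chance; a value near 0 means their agreement is no better than guessing, a warning sign that the coding scheme is too subjective to trust.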
Researchers must also be reflexive, which means being aware of their own biases and how these might influence their research. By consciously reflecting on their beliefs and preconceptions, researchers can work to reduce the potential for bias in their work.
Why Is Bias Important?
Bias distorts research findings, undermining their validity and reliability, and can lead to false conclusions that mislead theory development. As an IB student, it’s crucial to develop a sharp awareness of these biases so you can critically evaluate both published research and your own findings.
In your studies, always ask:
- Are the methods free from bias?
- Is the sample truly representative?
- Did the researchers acknowledge their biases?
Read here for more on Bias, Reductionism
Read here for more on Bias in Behavior