
6.1 Experiment Basics

Learning Objectives

  • Explain what an experiment is and recognize examples of studies that are experiments and studies that are not experiments.
  • Explain what internal validity is and why experiments are considered to be high in internal validity.
  • Explain what external validity is and evaluate studies in terms of their external validity.
  • Distinguish between the manipulation of the independent variable and control of extraneous variables and explain the importance of each.
  • Recognize examples of confounding variables and explain how they affect the internal validity of a study.

What Is an Experiment?

As we saw earlier in the book, an experiment is a type of study designed specifically to answer the question of whether there is a causal relationship between two variables. Do changes in an independent variable cause changes in a dependent variable? Experiments have two fundamental features. The first is that the researchers manipulate, or systematically vary, the level of the independent variable. The different levels of the independent variable are called conditions. For example, in Darley and Latané’s experiment, the independent variable was the number of witnesses that participants believed to be present. The researchers manipulated this independent variable by telling participants that there were either one, two, or five other students involved in the discussion, thereby creating three conditions.

The second fundamental feature of an experiment is that the researcher controls, or minimizes the variability in, variables other than the independent and dependent variable. These other variables are called extraneous variables. Darley and Latané tested all their participants in the same room, exposed them to the same emergency situation, and so on. They also randomly assigned their participants to conditions so that the three groups would be similar to each other to begin with. Notice that although the words manipulation and control have similar meanings in everyday language, researchers make a clear distinction between them. They manipulate the independent variable by systematically changing its levels and control other variables by holding them constant.

Internal and External Validity

Internal Validity

Recall that the fact that two variables are statistically related does not necessarily mean that one causes the other. “Correlation does not imply causation.” For example, if it were the case that people who exercise regularly are happier than people who do not exercise regularly, this would not necessarily mean that exercising increases people’s happiness. It could mean instead that greater happiness causes people to exercise (the directionality problem) or that something like better physical health causes people to exercise and be happier (the third-variable problem).

The purpose of an experiment, however, is to show that two variables are statistically related and to do so in a way that supports the conclusion that the independent variable caused any observed differences in the dependent variable. The basic logic is this: If the researcher creates two or more highly similar conditions and then manipulates the independent variable to produce just one difference between them, then any later difference between the conditions must have been caused by the independent variable. For example, because the only difference between Darley and Latané’s conditions was the number of students that participants believed to be involved in the discussion, this must have been responsible for differences in helping between the conditions.

An empirical study is said to be high in internal validity if the way it was conducted supports the conclusion that the independent variable caused any observed differences in the dependent variable. Thus experiments are high in internal validity because the way they are conducted—with the manipulation of the independent variable and the control of extraneous variables—provides strong support for causal conclusions.

External Validity

At the same time, the way that experiments are conducted sometimes leads to a different kind of criticism. Specifically, the need to manipulate the independent variable and control extraneous variables means that experiments are often conducted under conditions that seem artificial or unlike “real life” (Stanovich, 2010). In many psychology experiments, the participants are all college undergraduates and come to a classroom or laboratory to fill out a series of paper-and-pencil questionnaires or to perform a carefully designed computerized task. Consider, for example, an experiment in which researcher Barbara Fredrickson and her colleagues had college students come to a laboratory on campus and complete a math test while wearing a swimsuit (Fredrickson, Roberts, Noll, Quinn, & Twenge, 1998). At first, this might seem silly. When will college students ever have to complete math tests in their swimsuits outside of this experiment?

The issue we are confronting is that of external validity. An empirical study is high in external validity if the way it was conducted supports generalizing the results to people and situations beyond those actually studied. As a general rule, studies are higher in external validity when the participants and the situation studied are similar to those that the researchers want to generalize to. Imagine, for example, that a group of researchers is interested in how shoppers in large grocery stores are affected by whether breakfast cereal is packaged in yellow or purple boxes. Their study would be high in external validity if they studied the decisions of ordinary people doing their weekly shopping in a real grocery store. If the shoppers bought much more cereal in purple boxes, the researchers would be fairly confident that this would be true for other shoppers in other stores. Their study would be relatively low in external validity, however, if they studied a sample of college students in a laboratory at a selective college who merely judged the appeal of various colors presented on a computer screen. If the students judged purple to be more appealing than yellow, the researchers would not be very confident that this is relevant to grocery shoppers’ cereal-buying decisions.

We should be careful, however, not to draw the blanket conclusion that experiments are low in external validity. One reason is that experiments need not seem artificial. Consider that Darley and Latané’s experiment provided a reasonably good simulation of a real emergency situation. Or consider field experiments that are conducted entirely outside the laboratory. In one such experiment, Robert Cialdini and his colleagues studied whether hotel guests choose to reuse their towels for a second day as opposed to having them washed as a way of conserving water and energy (Cialdini, 2005). These researchers manipulated the message on a card left in a large sample of hotel rooms. One version of the message emphasized showing respect for the environment, another emphasized that the hotel would donate a portion of their savings to an environmental cause, and a third emphasized that most hotel guests choose to reuse their towels. The result was that guests who received the message that most hotel guests choose to reuse their towels reused their own towels substantially more often than guests receiving either of the other two messages. Given the way they conducted their study, it seems very likely that their result would hold true for other guests in other hotels.

A second reason not to draw the blanket conclusion that experiments are low in external validity is that they are often conducted to learn about psychological processes that are likely to operate in a variety of people and situations. Let us return to the experiment by Fredrickson and colleagues. They found that the women in their study, but not the men, performed worse on the math test when they were wearing swimsuits. They argued that this was due to women’s greater tendency to objectify themselves—to think about themselves from the perspective of an outside observer—which diverts their attention away from other tasks. They argued, furthermore, that this process of self-objectification and its effect on attention is likely to operate in a variety of women and situations—even if none of them ever finds herself taking a math test in her swimsuit.

Manipulation of the Independent Variable

Again, to manipulate an independent variable means to change its level systematically so that different groups of participants are exposed to different levels of that variable, or the same group of participants is exposed to different levels at different times. For example, to see whether expressive writing affects people’s health, a researcher might instruct some participants to write about traumatic experiences and others to write about neutral experiences. The different levels of the independent variable are referred to as conditions, and researchers often give the conditions short descriptive names to make it easy to talk and write about them. In this case, the conditions might be called the “traumatic condition” and the “neutral condition.”

Notice that the manipulation of an independent variable must involve the active intervention of the researcher. Comparing groups of people who differ on the independent variable before the study begins is not the same as manipulating that variable. For example, a researcher who compares the health of people who already keep a journal with the health of people who do not keep a journal has not manipulated this variable and therefore not conducted an experiment. This is important because groups that already differ in one way at the beginning of a study are likely to differ in other ways too. For example, people who choose to keep journals might also be more conscientious, more introverted, or less stressed than people who do not. Therefore, any observed difference between the two groups in terms of their health might have been caused by whether or not they keep a journal, or it might have been caused by any of the other differences between people who do and do not keep journals. Thus the active manipulation of the independent variable is crucial for eliminating the third-variable problem.

Of course, there are many situations in which the independent variable cannot be manipulated for practical or ethical reasons and therefore an experiment is not possible. For example, whether or not people have a significant early illness experience cannot be manipulated, making it impossible to do an experiment on the effect of early illness experiences on the development of hypochondriasis. This does not mean it is impossible to study the relationship between early illness experiences and hypochondriasis—only that it must be done using nonexperimental approaches. We will discuss this in detail later in the book.

In many experiments, the independent variable is a construct that can only be manipulated indirectly. For example, a researcher might try to manipulate participants’ stress levels indirectly by telling some of them that they have five minutes to prepare a short speech that they will then have to give to an audience of other participants. In such situations, researchers often include a manipulation check in their procedure. A manipulation check is a separate measure of the construct the researcher is trying to manipulate. For example, researchers trying to manipulate participants’ stress levels might give them a paper-and-pencil stress questionnaire or take their blood pressure—perhaps right after the manipulation or at the end of the procedure—to verify that they successfully manipulated this variable.

Control of Extraneous Variables

An extraneous variable is anything that varies in the context of a study other than the independent and dependent variables. In an experiment on the effect of expressive writing on health, for example, extraneous variables would include participant variables (individual differences) such as their writing ability, their diet, and their shoe size. They would also include situation or task variables such as the time of day when participants write, whether they write by hand or on a computer, and the weather. Extraneous variables pose a problem because many of them are likely to have some effect on the dependent variable. For example, participants’ health will be affected by many things other than whether or not they engage in expressive writing. This can make it difficult to separate the effect of the independent variable from the effects of the extraneous variables, which is why it is important to control extraneous variables by holding them constant.

Extraneous Variables as “Noise”

Extraneous variables make it difficult to detect the effect of the independent variable in two ways. One is by adding variability or “noise” to the data. Imagine a simple experiment on the effect of mood (happy vs. sad) on the number of happy childhood events people are able to recall. Participants are put into a positive or negative mood (by showing them a happy or sad video clip) and then asked to recall as many happy childhood events as they can. The two leftmost columns of Table 6.1 “Hypothetical Noiseless Data and Realistic Noisy Data” show what the data might look like if there were no extraneous variables and the number of happy childhood events participants recalled was affected only by their moods. Every participant in the happy mood condition recalled exactly four happy childhood events, and every participant in the sad mood condition recalled exactly three. The effect of mood here is quite obvious. In reality, however, the data would probably look more like those in the two rightmost columns of Table 6.1. Even in the happy mood condition, some participants would recall fewer happy memories because they have fewer to draw on, use less effective strategies, or are less motivated. And even in the sad mood condition, some participants would recall more happy childhood memories because they have more happy memories to draw on, they use more effective recall strategies, or they are more motivated. Although the mean difference between the two groups is the same as in the idealized data, this difference is much less obvious in the context of the greater variability in the data. Thus one reason researchers try to control extraneous variables is so their data look more like the idealized data in Table 6.1, which makes the effect of the independent variable easier to detect (although real data never look quite that good).

Table 6.1 Hypothetical Noiseless Data and Realistic Noisy Data

Idealized “noiseless” data    Realistic “noisy” data
Happy mood   Sad mood         Happy mood   Sad mood
4            3                3            1
4            3                6            3
4            3                2            4
4            3                4            0
4            3                5            5
4            3                2            7
4            3                3            2
4            3                1            5
4            3                6            1
4            3                8            2
M = 4        M = 3            M = 4        M = 3
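The pattern in Table 6.1 is easy to verify directly. The following Python sketch uses the table’s own numbers (the code itself is illustrative and not part of the original text) to show that the mean difference between conditions is identical in both data sets, while only the noisy data have substantial spread:

```python
import statistics

# Data from Table 6.1: number of happy childhood events recalled.
noiseless_happy = [4] * 10
noiseless_sad = [3] * 10
noisy_happy = [3, 6, 2, 4, 5, 2, 3, 1, 6, 8]
noisy_sad = [1, 3, 4, 0, 5, 7, 2, 5, 1, 2]

for label, happy, sad in [
    ("noiseless", noiseless_happy, noiseless_sad),
    ("noisy", noisy_happy, noisy_sad),
]:
    # The effect of mood: difference between the condition means.
    diff = statistics.mean(happy) - statistics.mean(sad)
    # The "noise": overall variability across all participants.
    spread = statistics.pstdev(happy + sad)
    print(f"{label}: mean difference = {diff}, overall spread = {spread:.2f}")
```

In both data sets the mean difference is exactly one recalled event; what changes is the spread, which is what makes the same effect so much harder to see in the noisy data.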

One way to control extraneous variables is to hold them constant. This can mean holding situation or task variables constant by testing all participants in the same location, giving them identical instructions, treating them in the same way, and so on. It can also mean holding participant variables constant. For example, many studies of language limit participants to right-handed people, who generally have their language areas isolated in their left cerebral hemispheres. Left-handed people are more likely to have their language areas isolated in their right cerebral hemispheres or distributed across both hemispheres, which can change the way they process language and thereby add noise to the data.

In principle, researchers can control extraneous variables by limiting participants to one very specific category of person, such as 20-year-old, straight, female, right-handed, sophomore psychology majors. The obvious downside to this approach is that it would lower the external validity of the study—in particular, the extent to which the results can be generalized beyond the people actually studied. For example, it might be unclear whether results obtained with a sample of younger straight women would apply to older gay men. In many situations, the advantages of a diverse sample outweigh the reduction in noise achieved by a homogeneous one.

Extraneous Variables as Confounding Variables

The second way that extraneous variables can make it difficult to detect the effect of the independent variable is by becoming confounding variables. A confounding variable is an extraneous variable that differs on average across levels of the independent variable. For example, in almost all experiments, participants’ intelligence quotients (IQs) will be an extraneous variable. But as long as there are participants with lower and higher IQs at each level of the independent variable so that the average IQ is roughly equal, then this variation is probably acceptable (and may even be desirable). What would be bad, however, would be for participants at one level of the independent variable to have substantially lower IQs on average and participants at another level to have substantially higher IQs on average. In this case, IQ would be a confounding variable.

To confound means to confuse, and this is exactly what confounding variables do. Because they differ across conditions—just like the independent variable—they provide an alternative explanation for any observed difference in the dependent variable. Figure 6.1 “Hypothetical Results From a Study on the Effect of Mood on Memory” shows the results of a hypothetical study, in which participants in a positive mood condition scored higher on a memory task than participants in a negative mood condition. But if IQ is a confounding variable—with participants in the positive mood condition having higher IQs on average than participants in the negative mood condition—then it is unclear whether it was the positive moods or the higher IQs that caused participants in the first condition to score higher. One way to avoid confounding variables is by holding extraneous variables constant. For example, one could prevent IQ from becoming a confounding variable by limiting participants only to those with IQs of exactly 100. But this approach is not always desirable for reasons we have already discussed. A second and much more general approach—random assignment to conditions—will be discussed in detail shortly.
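Random assignment works because chance tends to balance participant variables such as IQ across conditions. A minimal Python simulation (the IQ scores are hypothetical, generated for illustration) sketches the idea:

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical participant pool: 200 IQ scores (mean 100, SD 15).
iqs = [random.gauss(100, 15) for _ in range(200)]

# Random assignment: shuffle the pool, then split into two conditions.
random.shuffle(iqs)
positive_mood, negative_mood = iqs[:100], iqs[100:]

gap = abs(statistics.mean(positive_mood) - statistics.mean(negative_mood))
print(f"Difference in mean IQ between conditions: {gap:.2f}")
```

With 100 participants per condition, the difference in mean IQ between the groups is typically only a point or two, so IQ is very unlikely to become a confounding variable even though it was never held constant.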

Figure 6.1 Hypothetical Results From a Study on the Effect of Mood on Memory


Because IQ also differs across conditions, it is a confounding variable.

Key Takeaways

  • An experiment is a type of empirical study that features the manipulation of an independent variable, the measurement of a dependent variable, and control of extraneous variables.
  • Studies are high in internal validity to the extent that the way they are conducted supports the conclusion that the independent variable caused any observed differences in the dependent variable. Experiments are generally high in internal validity because of the manipulation of the independent variable and control of extraneous variables.
  • Studies are high in external validity to the extent that the result can be generalized to people and situations beyond those actually studied. Although experiments can seem “artificial”—and low in external validity—it is important to consider whether the psychological processes under study are likely to operate in other people and situations.
  • Practice: List five variables that can be manipulated by the researcher in an experiment. List five variables that cannot be manipulated by the researcher in an experiment.

Practice: For each of the following topics, decide whether that topic could be studied using an experimental research design and explain why or why not.

  • Effect of parietal lobe damage on people’s ability to do basic arithmetic.
  • Effect of being clinically depressed on the number of close friendships people have.
  • Effect of group training on the social skills of teenagers with Asperger’s syndrome.
  • Effect of paying people to take an IQ test on their performance on that test.

Cialdini, R. (2005, April). Don’t throw in the towel: Use social influence research. APS Observer. Retrieved from http://www.psychologicalscience.org/observer/getArticle.cfm?id=1762

Fredrickson, B. L., Roberts, T.-A., Noll, S. M., Quinn, D. M., & Twenge, J. M. (1998). The swimsuit becomes you: Sex differences in self-objectification, restrained eating, and math performance. Journal of Personality and Social Psychology, 75, 269–284.

Stanovich, K. E. (2010). How to think straight about psychology (9th ed.). Boston, MA: Allyn & Bacon.

Research Methods in Psychology Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Chapter 2: Psychological Research

The scientific method.


Scientists are engaged in explaining and understanding how the world around them works, and they are able to do so by coming up with theories that generate hypotheses that are testable and falsifiable. Theories that stand up to their tests are retained and refined, while those that do not are discarded or modified. In this way, research enables scientists to separate fact from simple opinion. Having good information generated from research aids in making wise decisions both in public policy and in our personal lives. In this section, you’ll see how psychologists use the scientific method to study and understand behavior.

Scientific research is a critical tool for successfully navigating our complex world. Without it, we would be forced to rely solely on intuition, other people’s authority, and blind luck. While many of us feel confident in our abilities to decipher and interact with the world around us, history is filled with examples of how very wrong we can be when we fail to recognize the need for evidence in supporting claims. At various times in history, we would have been certain that the sun revolved around a flat earth, that the earth’s continents did not move, and that mental illness was caused by possession (Figure 1). It is through systematic scientific research that we divest ourselves of our preconceived notions and superstitions and gain an objective understanding of ourselves and our world.


Figure 1. Some of our ancestors believed that trephination—the practice of making a hole in the skull—allowed evil spirits to leave the body, thus curing mental illness.

The goal of all scientists is to better understand the world around them. Psychologists focus their attention on understanding behavior, as well as the cognitive (mental) and physiological (body) processes that underlie behavior. In contrast to other methods that people use to understand the behavior of others, such as intuition and personal experience, the hallmark of scientific research is that there is evidence to support a claim. Scientific knowledge is empirical: It is grounded in objective, tangible evidence that can be observed time and time again, regardless of who is observing.

While behavior is observable, the mind is not. If someone is crying, we can see behavior. However, the reason for the behavior is more difficult to determine. Is the person crying due to being sad, in pain, or happy? Sometimes we can learn the reason for someone’s behavior by simply asking a question, like “Why are you crying?” However, there are situations in which an individual is either uncomfortable or unwilling to answer the question honestly, or is incapable of answering. For example, infants would not be able to explain why they are crying. In such circumstances, the psychologist must be creative in finding ways to better understand behavior. This module explores how scientific knowledge is generated, and how important that knowledge is in informing decisions in our personal lives and in the public domain.

The Process of Scientific Research


Figure 2. The scientific method is a process for gathering data and processing information. It provides well-defined steps to standardize how scientific knowledge is gathered through a logical, rational problem-solving method.

Scientific knowledge is advanced through a process known as the scientific method. Basically, ideas (in the form of theories and hypotheses) are tested against the real world (in the form of empirical observations), and those empirical observations lead to more ideas that are tested against the real world, and so on.

The basic steps in the scientific method are:

  • Observe a natural phenomenon and define a question about it
  • Make a hypothesis, or potential solution to the question
  • Test the hypothesis
  • If the data support the hypothesis, seek additional evidence or look for counter-evidence
  • If the data do not support the hypothesis, revise it or form a new one
  • Draw conclusions and repeat; the scientific method is never-ending, and no result is ever considered perfect

In order to ask an important question that may improve our understanding of the world, a researcher must first observe natural phenomena. By making observations, a researcher can define a useful question. After finding a question to answer, the researcher can then make a prediction (a hypothesis) about what he or she thinks the answer will be. This prediction is usually a statement about the relationship between two or more variables. After making a hypothesis, the researcher will then design an experiment to test his or her hypothesis and evaluate the data gathered. These data will either support or refute the hypothesis. Based on the conclusions drawn from the data, the researcher will then find more evidence to support the hypothesis, look for counter-evidence to further strengthen the hypothesis, revise the hypothesis and create a new experiment, or continue to incorporate the information gathered to answer the research question.


The Basic Principles of the Scientific Method

Two key concepts in the scientific approach are theory and hypothesis. A theory is a well-developed set of ideas that propose an explanation for observed phenomena that can be used to make predictions about future observations. A hypothesis is a testable prediction that is arrived at logically from a theory. It is often worded as an if-then statement (e.g., if I study all night, I will get a passing grade on the test). The hypothesis is extremely important because it bridges the gap between the realm of ideas and the real world. As specific hypotheses are tested, theories are modified and refined to reflect and incorporate the result of these tests (Figure 3).


Figure 3. The scientific method of research includes proposing hypotheses, conducting research, and creating or modifying theories based on results.

Other key components in following the scientific method include verifiability, predictability, falsifiability, and fairness. Verifiability means that an experiment must be replicable by another researcher. To achieve verifiability, researchers must make sure to document their methods and clearly explain how their experiment is structured and why it produces certain results.

Predictability in a scientific theory implies that the theory should enable us to make predictions about future events. The precision of these predictions is a measure of the strength of the theory.

Falsifiability refers to whether a hypothesis can be disproved. For a hypothesis to be falsifiable, it must be logically possible to make an observation or do a physical experiment that would show that there is no support for the hypothesis. Even when a hypothesis cannot be shown to be false, that does not necessarily mean it is not valid. Future testing may disprove the hypothesis. This does not mean that a hypothesis has to be shown to be false, just that it can be tested.

To determine whether a hypothesis is supported or not supported, psychological researchers must conduct hypothesis testing using statistics. Hypothesis testing is a set of statistical procedures for assessing how likely it is that the observed results occurred by chance alone. If hypothesis testing reveals that the results were “statistically significant,” the researchers can be reasonably confident that their result was not due to random chance, and the hypothesis is supported. If the results are not statistically significant, the researchers’ hypothesis was not supported.
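As a rough sketch of what such a test involves, the following Python code computes an independent-samples t statistic for two hypothetical groups of scores (the data are invented for illustration; this is one common test among many) and compares it to the critical value for this sample size at the conventional .05 level:

```python
import math
import statistics

# Hypothetical scores from two conditions (illustrative data only).
group_a = [12, 15, 14, 16, 13, 17, 15, 14, 16, 18]
group_b = [11, 12, 13, 10, 12, 14, 11, 13, 12, 12]

mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
n_a, n_b = len(group_a), len(group_b)

# Independent-samples t statistic with a pooled variance estimate.
pooled_var = ((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)
t = (mean_a - mean_b) / math.sqrt(pooled_var * (1 / n_a + 1 / n_b))

# Critical value of t for df = 18, two-tailed, at the .05 level.
t_critical = 2.101
print(f"t = {t:.2f}; statistically significant: {abs(t) > t_critical}")
```

Here the t statistic (about 4.39) exceeds the critical value of 2.101, so the difference between the groups would be declared statistically significant. In practice researchers use statistical software rather than hand computation, but the logic is the same.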

Fairness implies that all data must be considered when evaluating a hypothesis. A researcher cannot pick and choose what data to keep and what to discard or focus specifically on data that support or do not support a particular hypothesis. All data must be accounted for, even if they invalidate the hypothesis.

Applying the Scientific Method

To see how this process works, let’s consider a specific theory and a hypothesis that might be generated from that theory. As you’ll learn in a later module, the James-Lange theory of emotion asserts that emotional experience relies on the physiological arousal associated with the emotional state. If you walked out of your home and discovered a very aggressive snake waiting on your doorstep, your heart would begin to race and your stomach would churn. According to the James-Lange theory, these physiological changes would result in your feeling of fear. A hypothesis that could be derived from this theory might be that a person who is unaware of the physiological arousal that the sight of the snake elicits will not feel fear.

Remember that a good scientific hypothesis is falsifiable, or capable of being shown to be incorrect. Recall from the introductory module that Sigmund Freud had many interesting ideas to explain various human behaviors (Figure 4). However, a major criticism of Freud’s theories is that many of his ideas are not falsifiable; for example, it is impossible to imagine empirical observations that would disprove the existence of the id, the ego, and the superego—the three elements of personality described in Freud’s theories. Despite this, Freud’s theories are widely taught in introductory psychology texts because of their historical significance for personality psychology and psychotherapy, and his ideas continue to influence many modern forms of therapy.


Figure 4. Many of the specifics of (a) Freud’s theories, such as (b) his division of the mind into id, ego, and superego, have fallen out of favor in recent decades because they are not falsifiable. In broader strokes, his views set the stage for much of psychological thinking today, such as the unconscious nature of the majority of psychological processes.

In contrast, the James-Lange theory does generate falsifiable hypotheses, such as the one described above. Some individuals who suffer significant injuries to their spinal columns are unable to feel the bodily changes that often accompany emotional experiences. Therefore, we could test the hypothesis by determining how emotional experiences differ between individuals who have the ability to detect these changes in their physiological arousal and those who do not. In fact, this research has been conducted and while the emotional experiences of people deprived of an awareness of their physiological arousal may be less intense, they still experience emotion (Chwalisz, Diener, & Gallagher, 1988).

Link to Learning

Want to participate in a study? Visit this Psychological Research on the Net website and click on a link that sounds interesting to you in order to participate in online research.

Why the Scientific Method Is Important for Psychology

The use of the scientific method is one of the main features that separates modern psychology from earlier philosophical inquiries about the mind. Compared to chemistry, physics, and other “natural sciences,” psychology has long been considered one of the “social sciences” because of the subjective nature of the things it seeks to study. Many of the concepts that psychologists are interested in—such as aspects of the human mind, behavior, and emotions—are subjective and cannot be directly measured. Psychologists often rely instead on behavioral observations and self-reported data, which are considered by some to be illegitimate or lacking in methodological rigor. Applying the scientific method to psychology, therefore, helps to standardize the way psychologists approach these very different types of information.

The scientific method allows psychological data to be replicated and confirmed in many instances, under different circumstances, and by a variety of researchers. Through replication of experiments, new generations of psychologists can reduce errors and broaden the applicability of theories. It also allows theories to be tested and validated instead of simply being conjectures that could never be verified or falsified. All of this allows psychologists to gain a stronger understanding of how the human mind works.

Scientific articles published in journals and psychology papers written in the style of the American Psychological Association (i.e., in “APA style”) are structured around the scientific method. These papers include an Introduction, which presents the background information and outlines the hypotheses; a Methods section, which describes how the experiment was conducted to test the hypothesis; a Results section, which reports the statistics used to test the hypothesis and states whether it was supported or not supported; and a Discussion and Conclusion, which consider the implications of finding support, or no support, for the hypothesis. Writing articles and papers that adhere to the scientific method makes it easy for future researchers to repeat the study and attempt to replicate the results.

Today, scientists agree that good research is ethical in nature and is guided by a basic respect for human dignity and safety. However, as you will read in the Tuskegee Syphilis Study, this has not always been the case. Modern researchers must demonstrate that the research they perform is ethically sound. This section presents how ethical considerations affect the design and implementation of research conducted today.

Research Involving Human Participants

Any experiment involving the participation of human subjects is governed by extensive, strict guidelines designed to ensure that the experiment does not result in harm. Any research institution that receives federal support for research involving human participants must have access to an institutional review board (IRB). The IRB is a committee typically made up of members of the institution’s administration, scientists, and community members (Figure 5). The purpose of the IRB is to review proposals for research that involves human participants. The IRB reviews these proposals with the principles mentioned above in mind, and generally, approval from the IRB is required in order for the experiment to proceed.


Figure 5. An institution’s IRB meets regularly to review experimental proposals that involve human participants. (credit: modification of work by Lowndes Area Knowledge Exchange (LAKE)/Flickr)

An institution’s IRB requires several components in any experiment it approves. For one, each participant must sign an informed consent form before they can participate in the experiment. An informed consent form provides a written description of what participants can expect during the experiment, including potential risks and implications of the research. It also lets participants know that their involvement is completely voluntary and can be discontinued without penalty at any time. Furthermore, informed consent guarantees that any data collected in the experiment will remain completely confidential. In cases where research participants are under the age of 18, the parents or legal guardians are required to sign the informed consent form.

While the informed consent form should be as honest as possible in describing exactly what participants will be doing, sometimes deception is necessary to prevent participants’ knowledge of the exact research question from affecting the results of the study. Deception involves purposely misleading experiment participants in order to maintain the integrity of the experiment, but not to the point where the deception could be considered harmful. For example, if we are interested in how our opinion of someone is affected by their attire, we might use deception in describing the experiment to prevent that knowledge from affecting participants’ responses. In cases where deception is involved, participants must receive a full debriefing upon conclusion of the study—complete, honest information about the purpose of the experiment, how the data collected will be used, the reasons why deception was necessary, and information about how to obtain additional information about the study.

Dig Deeper: Ethics and the Tuskegee Syphilis Study

Unfortunately, the ethical guidelines that exist for research today were not always applied in the past. In 1932, poor, rural, black, male sharecroppers from Tuskegee, Alabama, were recruited to participate in an experiment conducted by the U.S. Public Health Service, with the aim of studying syphilis in black men (Figure 6). In exchange for free medical care, meals, and burial insurance, 600 men agreed to participate in the study. A little more than half of the men tested positive for syphilis, and they served as the experimental group (given that the researchers could not randomly assign participants to groups, this represents a quasi-experiment). The remaining syphilis-free individuals served as the control group. However, the individuals who tested positive for syphilis were never informed that they had the disease.

While there was no treatment for syphilis when the study began, by 1947 penicillin was recognized as an effective treatment for the disease. Despite this, no penicillin was administered to the participants in this study, and the participants were not allowed to seek treatment at any other facilities if they continued in the study. Over the course of 40 years, many of the participants unknowingly spread syphilis to their wives (and, subsequently, the children born to them) and eventually died because they never received treatment for the disease. This study was discontinued in 1972 when the experiment was discovered by the national press (Tuskegee University, n.d.). The resulting outrage over the experiment led directly to the National Research Act of 1974 and the strict ethical guidelines for research on humans described in this chapter. Why is this study unethical? How were the men who participated and their families harmed as a result of this research?


Figure 6. A participant in the Tuskegee Syphilis Study receives an injection.

Visit this CDC website to learn more about the Tuskegee Syphilis Study.

Research Involving Animal Subjects


Figure 7. Rats, like the one shown here, often serve as the subjects of animal research.

Research involving animal subjects is likewise subject to ethical scrutiny. Indeed, the humane and ethical treatment of animal research subjects is a critical aspect of this type of research. Researchers must design their experiments to minimize any pain or distress experienced by animals serving as research subjects.

Whereas IRBs review research proposals that involve human participants, animal experimental proposals are reviewed by an Institutional Animal Care and Use Committee (IACUC). An IACUC consists of institutional administrators, scientists, veterinarians, and community members. This committee is charged with ensuring that all experimental protocols provide for the humane treatment of animal research subjects. It also conducts semi-annual inspections of all animal facilities to ensure that the research protocols are being followed. No animal research project can proceed without the committee’s approval.

  • Modification and adaptation. Provided by: Lumen Learning. License: CC BY-SA: Attribution-ShareAlike
  • Psychology and the Scientific Method: From Theory to Conclusion, content on the scientific method principles. Provided by: Boundless. Located at: https://courses.lumenlearning.com/boundless-psychology/. License: CC BY-SA: Attribution-ShareAlike
  • Introduction to Psychological Research, Why is Research Important?, Ethics. Authored by: OpenStax College. Located at: http://cnx.org/contents/[email protected]:Hp5zMFYB@9/Why-Is-Research-Important. License: CC BY: Attribution. License Terms: Download for free at http://cnx.org/contents/[email protected]
  • Research picture. Authored by: Mediterranean Center of Medical Sciences. Provided by: Flickr. Located at: https://www.flickr.com/photos/mcmscience/17664002728. License: CC BY: Attribution

Front Psychol

The Practice of Experimental Psychology: An Inevitably Postmodern Endeavor

The aim of psychology is to understand the human mind and behavior. In contemporary psychology, the method of choice for this incredibly complex endeavor is the experiment. This dominance has shaped the whole discipline, from its self-concept as an empirical science and its epistemological and theoretical foundations, through research practice and scientific discourse, to teaching. Experimental psychology is grounded in the scientific method and positivism, and these principles, which are characteristic of modern thinking, are still upheld. Despite this apparently stalwart adherence to modern principles, experimental psychology exhibits a number of aspects which can best be described as facets of postmodern thinking, although they are hardly acknowledged as such. Many psychologists take pride in being “real natural scientists” because they conduct experiments, but it is particularly difficult for psychologists to evade certain elements of postmodern thinking in view of the specific nature of their subject matter. Postmodernism as a philosophy emerged in the 20th century as a response to the perceived inadequacy of the modern approach and as a means to understand the complexities, ambiguities, and contradictions of the times. Therefore, postmodernism offers both valuable insights into the very nature of experimental psychology and fruitful ideas for improving experimental practice to better reflect the complexities and ambiguities of the human mind and behavior. Analyzing experimental psychology along postmodern lines begins by discussing the implications of transferring the scientific method from fields with rather narrowly defined phenomena—the natural sciences—to a much broader and more heterogeneous class of complex phenomena, namely the human mind and behavior.
This ostensibly modern experimental approach is, however, riddled with postmodern elements: (re-)creating phenomena in an experimental setting, including the hermeneutic processes of generating hypotheses and interpreting results, is no carbon copy of “reality” but rather an active construction which irrevocably reflects the pre-existing ideas of the investigator. These aspects, analyzed using postmodern concepts like hyperreality and simulacra, did not seep in gradually but have been present since the very inception of experimental psychology, and they are necessarily inherent in its philosophy of science. We illustrate this theoretical analysis with two examples, namely experiments on free will and visual working memory. The postmodern perspective reveals some pitfalls in the practice of experimental psychology. Furthermore, we suggest that accepting the inherently fuzzy nature of theoretical constructs in psychology and thinking more along postmodern lines would actually clarify many theoretical problems in experimental psychology.

Introduction

Postmodernism is, in essence, an attempt to achieve greater clarity in our perception, thinking, and behavior by scrutinizing their larger contexts and preconditions, based on the inextricably intertwined levels of both the individual and the society. Psychology also studies the human mind and behavior, which indicates that psychology should dovetail with postmodern approaches. In the 1990s and early 2000s, several attempts were made to introduce postmodern thought as potentially very fruitful ideas into general academic psychology ( Jager, 1991 ; Kvale, 1992 ; Holzman and Morss, 2000 ; Holzman, 2006 ). However, overall they were met with little response.

Postmodern thoughts have been taken up by several fringe areas of academic psychology, e.g., psychoanalysis ( Leffert, 2007 ; Jiménez, 2015 ; but see Holt, 2005 ), some forms of therapy and counseling ( Ramey and Grubb, 2009 ; Hansen, 2015 ), humanistic ( Krippner, 2001 ), feminist and gender ( Hare-Mustin and Marecek, 1988 ; Sinacore and Enns, 2005 ), or cultural psychology ( Gemignani and Peña, 2007 ).

However, there is resistance against suggestions to incorporate postmodern ideas into the methodology and the self-perception of psychology as an academic—and scientific!—discipline. In fact, postmodern approaches are often rejected vehemently, sometimes even very vocally. For instance, Gergen (2001) argued that the “core tenets” of postmodernism are not at odds with those of scientific psychology but rather that they can enrich the discipline by opening up new possibilities. His suggestions were met with reservation and even outright rejection on the following grounds: postmodernism, “like anthrax of the intellect, if allowed [our italics] into mainstream psychology, […] will poison the field” (Locke, 2002, 458); it “wishes to return psychology to a prescientific subset of philosophy” (Kruger, 2002, 456); and psychology “needs fewer theoretical and philosophical orientations, not more” (Hofmann, 2002, 462; see also Gergen’s, 2001, replies to the less biased and more informed commentaries on his article).

In the following years, and continuing the so-called science wars of the 1990s (Segerstråle, 2000), several other attacks were launched against a perceived rise or even dominance of postmodern thought in psychology. Held (2007; see also the rebuttal by Martin and Sugarman, 2009) argued that anything postmodern would undermine rationality and destroy academic psychology. Similarly, postmodernism was identified—together with “radical environmentalism” and “pseudoscience,” among other things—as a “key threat to scientific psychology” (Lilienfeld, 2010, 282), or as “inimical to progress in the psychology of science” (Capaldi and Proctor, 2013, 331). The following advice was given to psychologists: “We [psychologists] should also push back against the pernicious creep of these untested concepts into our field” (Tarescavage, 2020, 4). Furthermore, the term “postmodern” is even employed as an all-purpose invective in a popular science book by psychologist Steven Pinker (2018).

Therefore, it seems that science and experimental psychology on the one hand and postmodern thinking on the other are irreconcilable opposites. However, following Gergen (2001) and Holtz (2020) , we argue that this dichotomy is only superficial because postmodernism is often misunderstood. A closer look reveals that experimental psychology contains many postmodern elements. Even more, there is reason to assume that a postmodern perspective may be beneficial for academic psychology: First, the practice of experimental psychology would be improved by integrating postmodern thinking because it reveals a side of the human psyche for which experimental psychology is mostly blind. Second, the postmodern perspective can tell us much about the epistemological and social background of experimental psychology and how this affects our understanding of the human psyche.

A Postmodern Perspective on Experimental Psychology

Experimental psychology and the modern scientific worldview.

It lies within the nature of humans to try to find out more about themselves and their world, but the so-called Scientific Revolution of the early modern period marks the beginning of a new era in this search for knowledge. The Scientific Revolution, which has led to impressive achievements in the natural sciences and the explanation of the physical world (e.g., Olby et al., 1991 ; Henry, 1997 ; Cohen, 2015 ; Osterlind, 2019 ), is based on the following principle: to “measure what can be measured and make measurable what cannot be measured.” This famous appeal—falsely attributed to Galileo Galilei but actually from the 19th century ( Kleinert, 2009 )—illustrates the two fundamental principles of modern science: First, the concept of “measurement” encompasses the idea that phenomena can be quantified, i.e., expressed numerically. Second, the concept of “causal connections” pertains to the idea that consistent, non-random relationships can be established between measurable phenomena. Quantification allows that relationships between phenomena can be expressed, calculated, and predicted in precise mathematical and numerical terms.

However, there are two important issues to be aware of. First, while it is not difficult to measure “evident” aspects, such as mass and distance, more complex phenomena cannot be measured easily. In such cases, it is therefore necessary to find ways of making these “elusive” phenomena measurable. This can often only be achieved by reducing complex phenomena to their simpler—and measurable!—elements. For instance, in order to measure memory ability precisely, possible effects of individual preexisting knowledge, which introduce random variance and thus imprecision, have to be eliminated. For this reason, many memory experiments use meaningless syllables as study material.

Second, it is not difficult to scientifically prove a causal relationship between a factor and an outcome if the relationship is simple, that is, if there is only one single factor directly influencing the outcome. In such a case, showing that a manipulation of the factor causes a change in the outcome is clear evidence for a causal relationship because there are no other factors which may influence the outcome as well. However, in situations where many factors influence an outcome in a complex, interactive way, proving a causal relationship is much more difficult. To prove the causal effect of one factor in such a situation the effects of all other factors—called confounding factors from the perspective of the factor of interest—have to be eliminated so that a change in the outcome can be truly attributed to a causal effect of the factor of interest. However, this has an important implication: The investigator has to divide the factors present in a given situation into interesting versus non-interesting factors with respect to the current context of the experiment. Consequently, while experiments reveal something about local causal relationships, they do not necessarily provide hints about the net effect of all causal factors present in the given situation.
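The logic of eliminating confounding factors can be sketched in a small simulation. This is a hypothetical illustration, not any study described in the text: the variable names, the 5-point "motivation" confound, and the true effect of 2.0 are all invented for the example. It contrasts a self-selected design, in which the confound inflates the estimated effect, with a randomized design, in which random assignment breaks the link between the confound and the condition:

```python
import random

random.seed(42)

TRUE_EFFECT = 2.0  # the causal effect we want to estimate


def outcome(treated, motivation):
    """Toy data-generating process: the outcome depends on the treatment
    and on a confounding factor ("motivation"), plus random noise."""
    noise = random.gauss(0, 1)
    return TRUE_EFFECT * treated + 5.0 * motivation + noise


def mean(xs):
    return sum(xs) / len(xs)


n = 10_000
motivations = [random.random() for _ in range(n)]  # uniform in [0, 1]

# Self-selected design: highly motivated people opt into the treatment,
# so motivation is confounded with treatment status.
treated = [outcome(1, m) for m in motivations if m > 0.5]
control = [outcome(0, m) for m in motivations if m <= 0.5]
biased_estimate = mean(treated) - mean(control)  # inflated by the confound

# Randomized design: a coin flip assigns the condition, severing the
# link between motivation and treatment status.
assignments = [(random.random() < 0.5, m) for m in motivations]
treated_r = [outcome(1, m) for coin, m in assignments if coin]
control_r = [outcome(0, m) for coin, m in assignments if not coin]
unbiased_estimate = mean(treated_r) - mean(control_r)

print(f"self-selected estimate: {biased_estimate:.2f}")   # well above 2.0
print(f"randomized estimate:    {unbiased_estimate:.2f}")  # close to 2.0
```

The randomized estimate recovers the true local effect, but, as the paragraph above notes, only because the design deliberately silences every factor other than the one of interest.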

The adoption of the principles of modern science has also changed psychology. Although the beginnings of psychology—as the study of the psyche —date back to antiquity, psychology as an academic discipline was established in the mid to late 19th century. This enterprise was also inspired by the success of the natural sciences, and psychology was explicitly modeled after this example by Wilhelm Wundt—the “father of experimental psychology”—although he emphasized the close ties to the humanities as well. The experiment quickly became the method of choice. There were other, more hermeneutic approaches during this formative phase of modern psychology, such as psychoanalysis or introspection according to the Würzburg School, but their impact on academic psychology was limited. Behaviorism emerged as a direct reaction against these perceived unscientific approaches, and its proponents emphasized the scientific character of their “new philosophy of psychology.” It is crucial to note that in doing so they also emphasized the importance of the experiment and the necessity of quantifying directly observable behavior in psychological research. Behaviorism quickly became a very influential paradigm which shaped academic psychology. Gestalt psychologists, whose worldview is radically different from behaviorism, also relied on experiments in their research. Cognitive psychology, which followed, complemented, and partly superseded behaviorism, relies heavily on the experiment as a means to gain insight into mental processes, although other methods such as modeling are employed as well. Interestingly, there is a fundamental difference between psychoanalysis and humanistic psychology, which do not rely on the experiment, and the other above-mentioned approaches as the former focus on the psychic functioning of individuals, whereas the latter focus more on global laws of psychic functioning across individuals. 
This is reflected in the fact that psychological laws in experimental psychology are established on the arithmetic means across examined participants—a difference we will elaborate on later in more detail. Today, psychology is the scientific —in the sense of empirical-quantitative—study of the human mind and behavior, and the experiment is often considered the gold standard in psychological research (e.g., Mandler, 2007 ; Goodwin, 2015 ; Leahey, 2017 ).
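The point about laws established on arithmetic means can be made concrete with a toy simulation (the all-or-none learning model and all numbers here are assumptions for illustration, not data from any real study): if every individual learns in a sudden, step-like jump, the group average nevertheless traces a smooth, gradual curve that describes no single participant.

```python
import random

random.seed(0)

n_participants, n_trials = 200, 20

# Toy model: each participant learns in an all-or-none fashion --
# performance jumps from 0 to 1 at a random "insight" trial.
# No individual ever shows gradual improvement.
curves = []
for _ in range(n_participants):
    insight = random.randint(1, n_trials)
    curves.append([0.0 if trial < insight else 1.0
                   for trial in range(1, n_trials + 1)])

# The group mean, however, rises smoothly -- a "law of gradual
# learning" that holds for the average but for no one in the sample.
group_mean = [sum(c[t] for c in curves) / n_participants
              for t in range(n_trials)]

print([round(m, 2) for m in group_mean])
```

This is exactly the gap between laws of psychic functioning across individuals and the functioning of any particular individual that the paragraph above describes.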

The experiment is closely associated with the so-called scientific method (Haig, 2014; Nola and Sankey, 2014) and the epistemological tenets of the philosophy of positivism—in the sense that Martin (2003), Michell (2003), and Teo (2018) explain—which sometimes exhibit characteristics of naïve empiricism. Roughly speaking, the former consists of observing, formulating hypotheses, and testing these hypotheses in experiments. The latter postulates that knowledge is based on sensory experience, that it is testable, and that it is independent of the investigator and therefore objective, as it accurately depicts the world as it is. This means that in principle all of reality can not only be measured but eventually be entirely explained by science. This worldview is attacked by postmodern thinkers, who contend that the world is far more complex and that the modern scientific approach cannot explain all of reality and its phenomena.

The Postmodern Worldview

Postmodern thinking (e.g., Bertens, 1995 ; Sim, 2011 ; Aylesworth, 2015 ) has gained momentum since the 1980s, and although neither the term “postmodernism” nor associated approaches can be defined in a unanimous or precise way, they are characterized by several intertwined concepts, attitudes, and aims. The most basic trait is a general skepticism and the willingness to question literally everything from the ground up—even going so far as to question not only the foundation of any idea, but also the question itself. This includes the own context, the chosen premises, thinking, and the use of language. Postmodernism therefore has a lot in common with science’s curiosity to understand the world: the skeptical attitude paired with the desire to discover how things really are.

Postmodern investigations often start by looking at the language and the broader context of certain phenomena due to the fact that language is the medium in which many of our mental activities—which subsequently influence our behavior—take place. Thus, the way we talk reveals something about how and why we think and act. Additionally, we communicate about phenomena using language, which in turn means that this discourse influences the way we think about or see those phenomena. Moreover, this discourse is embedded in a larger social and historical context, which also reflects back on the use of language and therefore on our perception and interpretation of certain phenomena.

Generally speaking, postmodern investigations aim at detecting and explaining how the individual is affected by societal influences and their underlying, often hidden ideas, structures, or mechanisms. As these influences are often fuzzy, contradictory, and dependent on their context, the individual is subject to a multitude of different causalities, and this already complex interplay is further complicated by the personal history, motivations, aims, or ways of thinking of the individual. Postmodernism attempts to understand all of this complexity as it is in its entirety.

The postmodern approaches have revealed three major general tendencies which characterize the contemporary world: First, societies and the human experience since the 20th century have displayed less coherence and conversely a greater diversity than the centuries before in virtually all areas, e.g., worldviews, modes of thinking, societal structures, or individual behavior. Second, this observation leads postmodern thinkers to the conclusion that the grand narratives which dominated the preceding centuries and shaped whole societies by providing frames of references have lost—at least partially—their supremacy and validity. Examples are religious dogmas, nationalism, industrialization, the notion of linear progress—and modern science because it works according to certain fundamental principles. Third, the fact that different but equally valid perspectives, especially on social phenomena or even whole worldviews, are possible and can coexist obviously affects the concepts of “truth,” “reality,” and “reason” in such a way that these concepts lose their immutable, absolute, and universal or global character, simply because they are expressions and reflections of a certain era, society, or worldview.

At this point, however, it is necessary to clear up a common misconception: Interpreting truth, reality, or reason as relative, subjective, and context-dependent—as opposed to absolute, objective, and context-independent—naturally means neither that anything can be arbitrarily labeled as true, real, or reasonable, nor, conversely, that nothing can be true, real, or reasonable. For example, the often-quoted assumption that postmodernism denies the existence of gravity or its effects because everything can be interpreted arbitrarily, or that it claims we cannot elucidate such phenomena with adequate accuracy because everything is open to any interpretation (Sokal, 1996), completely misses the point.

First, postmodernism is usually not concerned with the laws of physics and the inanimate world as such but rather focuses on the world of human experience. However, the phenomenon itself, e.g., gravity, is not the same as our scientific knowledge of phenomena—our chosen areas of research, methodological paradigms, data, theories, and explanations—or our perception of phenomena, which are both the results of human activities. Therefore, the social context influences our scientific knowledge, and in that sense scientific knowledge is a social construction ( Hodge, 1999 ).

Second, phenomena from human experience, although probably more dependent on the social context than physical phenomena, cannot be interpreted arbitrarily either. The individual context—such as the personal history, motivations, aims, or worldviews—determines whether a certain behavior makes sense for a certain individual in a certain situation. As there are almost unlimited possible backgrounds, this might seem completely random or arbitrary from an overall perspective. But from the perspective of an individual the phenomenon in question may be explained entirely by a theory for a specific—and not universal—context.

As described above, the postmodern meta-perspective directly deals with human experience and is therefore especially relevant for psychology. Moreover, any discipline—including the knowledge it generates—will certainly benefit from understanding its own (social) mechanisms and implications. We will show below that postmodern thinking not only elucidates the broader context of psychology as an academic discipline but rather that experimental psychology exhibits a number of aspects which can best be described as facets of postmodern thinking although they are not acknowledged as such.

The Postmodern Context of Experimental Psychology

Paradoxically, postmodern elements have been present since the very beginning of experimental psychology although postmodernism gained momentum only decades later. One of the characteristics of postmodernism is the transplantation of certain elements from their original context to new contexts, e.g., the popularity of “Eastern” philosophies and practices in contemporary “Western” societies. These different elements are often juxtaposed and combined to create something new, e.g., new “westernized” forms of yoga ( Shearer, 2020 ).

Similarly, the founders of modern academic psychology took up the scientific method, which was originally developed in the context of the natural sciences, and transplanted it to the study of the human psyche in the hope of repeating the success of the natural sciences. By contrast, methods developed specifically in the context of psychology, such as psychoanalysis (Wax, 1995) or introspection according to the Würzburg School (Hackert and Weger, 2018), have gained much less ground in academic psychology. The way we understand both the psyche and psychology has been shaped to a great extent by the transfer of the principles of modern science, namely quantitative measurement and experimental methods, although it is not evident per se that this is the best approach to elucidating mental and behavioral phenomena. Applying the methods of the natural sciences to a new and different context, namely to phenomena pertaining to the human psyche, is a truly postmodern endeavor because it juxtaposes two quite distinct areas and merges them into something new—experimental psychology.

The postmodern character of experimental psychology becomes evident on two levels: First, the subject matter—the human psyche —exhibits a postmodern character since mental and behavioral phenomena are highly dependent on the idiosyncratic contexts of the involved individuals, which makes it impossible to establish unambiguous general laws to describe them. Second, experimental psychology itself displays substantial postmodern traits because both its method and the knowledge it produces—although seemingly objective and rooted in the modern scientific worldview—inevitably contain postmodern elements, as will be shown below.

The Experiment as Simulacrum

The term “simulacrum” basically means “copy,” often in the sense of “inferior copy” or “phantasm/illusion.” In postmodern usage, however, “simulacrum” has acquired a more nuanced and concrete meaning. It is a key term in the work of postmodern philosopher Jean Baudrillard, who arguably presented the most elaborate theory of simulacra (1981/1994). According to Baudrillard, a simulacrum “is the reflection of a profound [‘real’] reality” (16/6). Simulacra, however, are more than identical carbon copies because they gain a life of their own and become “real” in the sense of becoming entities in their own right. For example, the personality a pop star shows on stage is not “real” in the sense of being their “normal,” off-stage personality, but it is certainly “real” in the sense that it is perceived by the audience, even if they are aware that it might be an “artificial” personality. Two identical cars can also be “different,” for one might be used as a means of transportation while the other might be a status symbol. Even an honest video documentation of a certain event is not simply a copy of the events that took place, because it is inherent in the medium of video that only certain sections can be recorded from a certain perspective. Additionally, the playback happens in contexts other than that of the original event, which may also alter the perception of the viewer.

The post-structuralist—an approach closely associated with postmodernism—philosopher Roland Barthes pointed out another important aspect of simulacra. He contended that in order to understand something—an “object” in Barthes’ terminology—we necessarily create simulacra because we “ reconstruct [our italics] an ‘object’ in such a way as to manifest thereby the rules of functioning [⋯] of this object” ( Barthes, 1963 , 213/214). In other words, when we investigate an object—any phenomenon, either material, mental, or social—we have to perceive it first. This means that we must have some kind of mental representation of the phenomenon/object—and it is crucial to note that this representation is not the same thing as the “real” object itself. All our mental operations are therefore not performed on the “real” object but on mental representations of the object. We decompose a phenomenon in order to understand it, that is, we try to identify its components. In doing so, we effect a change in the object because our phenomenon is no longer the original phenomenon “as it is” for we are performing a mental operation on it, thereby transforming the original phenomenon. Identifying components may be simple, e.g., dividing a tree into roots, trunk, branches, and leaves may seem obvious or even “natural” but it is nevertheless us as investigators who create this structure—the tree itself is probably not aware of it. Now that we have established this structure, we are able to say that the tree consists of several components and name these components. Thus, we have introduced “new” elements into our understanding of the tree. This is the important point, even though the elements, i.e., the branches and leaves themselves “as they are,” have naturally always been “present.” Our understanding of “tree” has therefore changed completely because a tree is now something which is composed of several elements. 
In that sense, we have changed the original phenomenon by adding something—and this has all happened in our thinking and not in the tree itself. It is also possible to find different structures and different components for the tree, e.g., the brown and the green, which shows that we construct this knowledge.

Next, we can investigate the components to see how they interact with and relate to each other and to the whole system. Also, we can work out their functions and determine the conditions under which a certain event will occur. We can even expand the scope of our investigation and examine the tree in the context of its ecosystem. But no matter what we do or how sophisticated our investigation becomes, everything said above remains true here, too, because neither these actions nor the knowledge we gain from them is the object itself. Rather, we have added something to the object, and the more we know about our object, the more knowledge we have constructed. This addition is what science—gaining knowledge—is all about. Or in the words of Roland Barthes: “the simulacrum is intellect added to object, and this addition has an anthropological value, in that it is man himself, his history, his situation, his freedom and the very resistance which nature offers to his mind” (1963/1972, 214/215).

In principle, this holds true for all scientific investigations. But the more complex the phenomena are, the more effort and personal contribution is required on the part of the investigator to come up with structures, theories, or explanations. Paraphrasing Barthes: When dealing with complex phenomena, more intellect must be added to the object, which means in turn that there are more possibilities for different approaches and perspectives, that is, the constructive element becomes larger. As discussed previously, this does not mean that investigative and interpretative processes are arbitrary. But it is clear from this train of thought that “objectivity” or “truth” in a “positivist,” naïve empiricist “realist,” or absolute sense are not attainable. Nevertheless, we argue here that this is not a drawback, as many critics of postmodernism contend (see above), but rather an advantage because it allows more accurate scientific investigation of true-to-life phenomena, which are typically complex in the case of psychology.

The concepts of simulacra by Baudrillard and Barthes can be combined to provide a description of the experiment in psychology. Accordingly, our understanding of the concept of the “simulacrum” entails that scientific processes—indeed all investigative processes—necessarily need to duplicate the object of their investigation in order to understand it. In doing so, constructive elements are necessarily introduced. These elements are of a varying nature, which means that investigations of one and the same phenomenon may differ from each other and different investigations may find out different things about the phenomenon in question. These investigations then become entities on their own—in the Baudrillardian sense—and therefore simulacra.

In a groundbreaking article on “the meaning and limits of exact science” physicist Max Planck stated that “[a]n experiment is a question which science poses to nature, and a measurement is the recording of nature’s answer” ( Planck, 1949 , 325). The act of “asking a question” implies that the person asking the question has at least a general idea of what the answer might look like ( Heidegger, 1953 , §2). For example: When asking someone for their name, we obviously do not know what they are called, but we assume that they have a name and we also have an idea of how the concept “name” works. Otherwise we could not even conceive, let alone formulate, and pose our question. This highlights how a certain degree of knowledge and understanding of a concept is necessary so that we are able to ask questions about it. Likewise, we need to have a principal idea or assumption of possible mechanisms if we want to find out how more complex phenomena function. It is—at least at the beginning—irrelevant whether these ideas are factually correct or entirely wrong, for without them we would be unable to approach our subject matter in the first place.

The context of the investigator—their general worldview, their previous knowledge and understanding, and their social situation—obviously plays an important part in the process of forming a question which can be asked in the current research context. Although this context may be analyzed along postmodern lines in order to find out how it affects research, production of knowledge, and—when the knowledge is applied—possible (social) consequences, there is a much more profound implication pertaining to the very nature of the experiment as a means to gain knowledge.

Irrespective of whether it is a simple experiment in physics such as Galileo Galilei’s or an experiment on a complex phenomenon from social or cognitive psychology, the experiment is a situation which is specifically designed to answer a certain type of question, usually about a causal relationship, such as: “Does A causally affect B?” Setting aside the extremely complex discussion on the nature of causality and causation (e.g., Armstrong, 1997 ; Pearl, 2009 ; Paul and Hall, 2013 ), it is crucial to note that we need the experiment as a tool to answer this question. Although we may theorize about a phenomenon and infer causal relationships simply by observing, we cannot—at least according to the prevailing understanding of causality in the sciences—prove causal relationships without the experiment.

The basic idea of the experiment is to create conditions which differ in only one single factor, which is suspected of being a causal factor for an effect. The influence of all other potential causal factors is kept identical because they are considered confounding factors that are irrelevant from the perspective of the research question of the current experiment. Then, if a difference is found in the outcome between the experimental conditions, this is considered proof that the factor in question does indeed exert a causal effect. This procedure and the logic behind it are not difficult to understand. However, a closer look reveals that the matter is actually far from simple or obvious.
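The single-factor logic described above can be sketched as a toy simulation (a minimal illustration under assumed numbers, not the design of any actual study; the function name and effect size are hypothetical):

```python
import random
import statistics

random.seed(42)

def run_experiment(n_per_group=1000, true_effect=0.5):
    """Toy model of the experimental logic: two conditions differ in exactly
    one factor; all other influences are modeled as identical noise."""
    control, treatment = [], []
    for _ in range(n_per_group):
        noise = random.gauss(0, 1)             # extraneous influences, held identical
        control.append(noise)                  # factor absent
        treatment.append(noise + true_effect)  # factor present: the single difference
    # A difference between the group means is then attributed to the factor.
    return statistics.mean(treatment) - statistics.mean(control)

print(round(run_experiment(), 2))  # 0.5, recovering the single manipulated factor
```

Because every other influence is held literally identical here, the observed difference equals the manipulated effect; real experiments only approximate this ideal, e.g., through random assignment.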

To begin with, an experiment is not something which occurs “naturally” but a situation created for a specific purpose, i.e., an “artificial” situation, because other causal factors exerting influence in “real” life outside the laboratory are deliberately excluded and treated as “confounding” factors. This in itself shows that the experiment contains a substantial postmodern element because instead of creating something it rather re-creates it. This re-creation is of course based on phenomena from the “profound” reality—in the Baudrillardian sense—since the explicit aim is to find out something about this profound reality and not to create something new or something else. However, as stated above, this re-creation must contain constructive elements reflecting the presuppositions, conceptual-theoretical assumptions, and aims of the investigator. By focusing on one factor and by reducing the complexity of the profound reality, the practical operationalization and realization thus reflect both the underlying conceptual structure and the anticipated outcome, as they are specifically designed to test for the suspected but hidden or obscured causal relationships.

At this point, another element becomes relevant, namely the all-important role of language, which is emphasized in postmodern thinking (e.g., Harris, 2005 ). Without going into the intricacies of semiotics, there is an explanatory gap ( Chalmers, 2005 )—to borrow a phrase from philosophy of mind—between the phenomenon on the one hand and the linguistic and/or mental representation of it on the other. This relationship is far from clear and it is therefore problematic to assume that our linguistic or mental representations—our words and the concepts they designate—are identical with the phenomena themselves. Although we cannot, at least according to our present knowledge and understanding, fully bridge this gap, it is essential to be aware of it in order to avoid some pitfalls, as will be shown in the examples below.

Even a seemingly simple word like “tree”—to take up once more our previous example—refers to a tangible phenomenon because there are trees “out there.” However, they come in all shapes and sizes, there are different kinds of trees, and every single one of them may be labeled as “tree.” Furthermore, trees are composed of different parts, and the leaf—although part of the tree—has its own word, i.e., linguistic and mental representation. Although the leaf is part of the tree—at least according to our concepts—it is unclear whether “tree” also somehow encompasses “leaf.” The same holds true for the molecular, atomic, or even subatomic levels, where there “is” no tree. Excluding the extremely complex ontological implications of this problem, it has become clear that we are referring to a certain level of granularity when using the word “tree.” The level of granularity reflects the context, aims, and concepts of the investigator, e.g., an investigation of the rain forest as an ecosystem will ignore the subatomic level.

How does this concern experimental psychology? Psychology studies intangible phenomena, namely mental and behavioral processes, such as cognition, memory, learning, motivation, emotion, perception, consciousness, etc. It is important to note that these terms designate theoretical constructs, as, for example, memory cannot be observed directly. We may provide the subjects of an experiment with a set of words to learn and observe later how many words they reproduce correctly. A theoretical construct therefore describes such relationships between stimulus and behavior, and we may draw conclusions from this observable data about memory. But neither the observable behavior of the subjects, the resulting data, nor our conclusions are identical with memory itself.
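The word-list operationalization just described can be made concrete in a few lines (a hypothetical scoring sketch; the function name and words are invented for illustration):

```python
def recall_score(studied, reproduced):
    """Operationalize 'memory' as the proportion of studied words reproduced
    correctly: an observable stand-in for the construct, not memory itself."""
    correct = len(set(reproduced) & set(studied))
    return correct / len(set(studied))

studied = ["tree", "leaf", "root", "branch"]
print(recall_score(studied, ["tree", "root", "cloud"]))  # 0.5
```

The score is all the experiment ever yields; the construct “memory” is inferred from such numbers, never observed directly.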

This train of thought demonstrates the postmodern character of experimental psychology because we construct our knowledge. But there is more to it than that: Even by trying to define a theoretical construct as exactly as possible—e.g., memory as “the process of maintaining information over time” ( Matlin, 2012 , 505) or “the means by which we retain and draw on our past experiences to use this information in the present” ( Sternberg and Sternberg, 2011 , 187)—the explanatory gap between representation and phenomenon cannot be bridged. Rather, it becomes even more complicated because theoretical constructs are composed of other theoretical constructs, which results in some kind of self-referential circularity where constructs are defined by other constructs which refer to further constructs. In the definitions above, for instance, hardly any key term is self-evident and unambiguous for there are different interpretations of the constructs “process,” “maintaining,” “information,” “means,” “retain,” “draw on,” “experiences,” and “use” according to their respective contexts. Only the temporal expressions “over time,” “past,” and “present” are probably less ambiguous here because they are employed as non-technical, everyday terms. However, the definitions above are certainly not entirely incomprehensible—in fact, they are rather easy to understand in everyday language—and it is quite clear what the authors intend to express . The italics indicate constructive elements, which demonstrates that attempts to give a precise definition in the language of science result in fuzziness and self-reference.

Based on a story by Jorge Luis Borges, Baudrillard (1981) offered an illustrative allegory: a map so precise that it portrays everything in perfect detail—but therefore inevitably so large that it shrouds the entire territory it depicts. Similarly, Taleb (2007) coined the term “ludic fallacy” for mistaking the model/map—in our context: experiments in psychology—for the reality/territory, that is, a mental or behavioral phenomenon. Just as a seemingly “imprecise” map that contains only the relevant landmarks still allows the user to find their way, the fuzziness of language poses no problem in everyday communication. So why is it a problem in experimental psychology? Since the nature of theoretical constructs in psychology lies precisely in their very fuzziness, the aim of reaching a high degree of granularity and precision in experimental psychology seems to be unattainable (see the various failed attempts to create “perfect” languages which might depict literally everything “perfectly,” e.g., Carapezza and D’Agostino, 2010 ).

Without speculating about ontic or epistemic implications, it is necessary to be aware of the explanatory gap and to refrain from identifying the experiment and the underlying operationalization with the theoretical construct. Otherwise, this gap is “filled” unintentionally and uncontrollably if the results of an experiment are taken as valid proof for a certain theoretical construct, which is actually fuzzy and potentially operationalizable in a variety of ways. If this is not acknowledged, words, such as “memory,” become merely symbols devoid of concrete meaning, much like a glass bead game—or in postmodern terminology: a hyperreality.

Experiments and Hyperreality

“Hyperreality” is another key term in the work of Jean Baudrillard (1981) and it denotes a concept closely related to the simulacrum. Accordingly, in modern society the simulacra are ubiquitous and they form a system of interconnected simulacra which refer to each other rather than to the real, thereby possibly hiding or replacing the real. Consequently, the simulacra become real in their own right and form a “more real” reality, namely the hyperreality. One may or may not accept Baudrillard’s conception, especially the all-embracing social and societal implications, but the core concept of “hyperreality” is nevertheless a fruitful tool to analyze experimental psychology. We have already seen that the experiment displays many characteristics of a simulacrum, so it is not surprising that the concept of hyperreality is applicable here as well, although in a slightly different interpretation than Baudrillard’s.

The hyperreal character of the experiment can be discussed on two levels: the experiment itself and the discourse wherein it is embedded.

On the level of the experiment itself, two curious observations must be taken into account. First, and in contrast to the natural sciences where the investigator is human and the subject matter (mostly) non-human and usually inanimate, in psychology both the investigator and the subject matter are human. This means that the subjects of the experiment, being autonomous persons, are not malleable or completely controllable by the investigator because they bring their own background, history, worldview, expectations, and motivations. They interpret the situation—the experiment—and act accordingly, but not necessarily in the way the investigator had planned or anticipated ( Smedslund, 2016 ). Therefore, the subjects create their own versions of the experiment, or, in postmodern terminology, a variety of simulacra, which may be more or less compatible with the framework of the investigator. This holds true for all subjects of an experiment, which means that the experiment as a whole may also be interpreted as an aggregation of interconnected simulacra—a hyperreality.

The hyperreal character becomes even more evident because what contributes in the end to the interpretation of the results of the experiment are not the actual performances and results of the individual subjects as they were intended by them but rather how their performances and results are handled, seen, and interpreted by the investigator. Even if the investigator tries to be as faithful as possible and aims at an exact and unbiased measurement—i.e., an exact copy—there are inevitably constructive elements which introduce uncertainty into the experiment. Investigators can never be certain what the subjects were actually doing and thinking so they must necessarily work with interpretations. Or in postmodern terms: Because the actual performances and results of the subjects are not directly available the investigators must deal with simulacra. These simulacra become the investigators’ reality and thus any further treatment—statistical analyses, interpretations, or discussions—becomes a hyperreality, that is, a set of interconnected simulacra which have become “real.”

On the level of the discourse wherein the experiment is embedded, another curious aspect also demonstrates the hyperreal character of experimental psychology. Psychology is, according to the standard definition, the scientific study of mental and behavioral processes of the individual (e.g., Gerrig, 2012 ). This definition contains two elements that are actually contradictory. On the one hand, the focus is on processes of the individual. On the other hand, the—scientific—method to elucidate these processes does not look at individuals per se but aggregates their individual experiences and transforms them into a “standard” experience. The results from experiments, our knowledge of the human psyche, reflect psychological functioning at the level of the mean across individuals. And even if we assume that the mean is only an estimator and not an exact description or prediction, the question remains how such de-individualized observations relate to the experience of an individual. A general mechanism, a law—which was discovered by abstracting from a multitude of individual experiences—is then ( re -)imposed in the opposite direction back onto the individual. In other words, a simulacrum—namely, the result of an experiment—is viewed and treated as reality, thus becoming hyperreal. Additionally, and simply because it is considered universally true, this postulated law thereby acquires a certain validity and “truth” of its own—often irrespective of its actual, factual, or “profound” truth. Therefore, it can become impossible to distinguish between “profound” and “simulacral” truth, which is the hallmark of hyperreality.

Measuring the Capacity of the Visual Working Memory

Vision is an important sensory modality and there is extensive research on this area ( Hutmacher, 2019 ). Much of our daily experience is shaped by seeing a rich and complex world around us, and it is therefore an interesting question how much visual information we can store and process. Based on the development of a seminal experimental paradigm, Luck and Vogel (1997) have shown that visual working memory has a storage capacity of about four items. This finding is reported in many textbooks (e.g., Baddeley, 2007 ; Parkin, 2013 ; Goldstein, 2015 ) and has almost become a truism in cognitive psychology.

The experimental paradigm developed by Luck and Vogel (1997) is a prime example of an experiment which closely adheres to the scientific principles outlined above. In order to make a very broad and fuzzy phenomenon measurable, simple abstract forms are employed as visual stimuli—such as colored squares, triangles, or lines, usually on a “neutral,” e.g., gray, background—which can be counted in order to measure the capacity of visual working memory. Reducing the exuberant diversity of the “outside visual world” to a few abstract geometric forms is an extremely artificial situation. The obvious contrast between simple geometrical forms and the rich panorama of the “real” visual world illustrates the pitfalls of controlling supposed confounding variables, namely the uncontrollable variety of the “real” world and how we see it. Precisely by abstracting and by excluding potential confounding variables it is possible to count the items and thus to make the capacity of visual working memory measurable. But in doing so the original phenomenon—seeing the whole world—is lost. In other words: A simulacrum has been created.
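To make the notion of “counting items” concrete: one widely used estimator in the change-detection literature (commonly attributed to Cowan) derives a capacity K from hit and false-alarm rates. The numbers below are purely illustrative, not Luck and Vogel’s data:

```python
def cowan_k(set_size, hit_rate, false_alarm_rate):
    """Estimate visual working-memory capacity K from single-probe
    change-detection performance: K = N * (H - F), where N is the
    number of items displayed."""
    return set_size * (hit_rate - false_alarm_rate)

# E.g., 8 colored squares, 75% hits, 25% false alarms:
print(cowan_k(8, 0.75, 0.25))  # 4.0
```

Note that K is defined only within the artificial paradigm: it estimates capacity for the abstract stimuli displayed, not for “seeing the whole world.”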

The establishment of the experimental paradigm by Luck and Vogel has led to much research and sparked an extensive discussion of how the limitation to only four items might be explained (see the summaries by Brady et al., 2011 ; Luck and Vogel, 2013 ; Ma et al., 2014 ; Schurgin, 2018 ). Critically, however, several studies have shown that the situation is different when real-world objects are used as visual stimuli rather than simple abstract forms, revealing that the capacity of visual working memory is higher for real-world objects ( Endress and Potter, 2014 ; Brady et al., 2016 ; Schurgin et al., 2018 ; Robinson et al., 2020 ; also Schurgin and Brady, 2019 ). Such findings show that the discourse about the mechanisms behind the limitations of visual working memory is mostly about an artificial phenomenon which has no counterpart in “reality”—the perfect example of a hyperreality.

This hyperreal character does not mean that the findings of Luck and Vogel (1997) or similar experiments employing artificial stimuli are irrelevant or not “true.” The results are true—but it is a local truth, only valid for the specific context of specific experiments, and not a global truth which applies to the visual working memory in general . That is, speaking about “visual working memory” based on the paradigm of Luck and Vogel is a mistake because it is actually about “visual working memory for simple abstract geometrical forms in front of a gray background.”

Free Will and Experimental Psychology

The term “free will” expresses the idea of having “a significant kind of control [italics in the original] over one’s actions” ( O’Connor and Franklin, 2018 , n.p.). This concept has occupied a central position in Western philosophy since antiquity because it has far-reaching consequences for our self-conception as humans and our position in the world, including questions of morality, responsibility, and the nature of legal systems (e.g., Beebee, 2013 ; McKenna and Pereboom, 2016 ; O’Connor and Franklin, 2018 ). Being a topic of general interest, it is not surprising that experimental psychologists have tried to investigate free will as well.

The most famous study was conducted by Libet et al. (1983) , and this experiment quickly became a focal point in the extensive discourse on free will because it provides empirical data and a scientific investigation. Libet et al.’s experiment seems to show that the subjective impression of consciously deciding to act is in fact preceded by objectively measurable but unconscious physical processes. This purportedly proves that our seemingly voluntary actions are actually predetermined by physical processes: the brain has unconsciously reached a decision before the person becomes aware of it, and our conscious intentions are simply grafted onto it. Therefore, we do not have free will, and consequently much of our social fabric is based on an illusion. Or so the story goes.

This description, although phrased somewhat pointedly, represents a typical line of thought in the discourse on free will (e.g., the prominent psychologists Gazzaniga, 2011 ; Wegner, 2017 ; see Kihlstrom, 2017 , for further examples).

Libet’s experiment sparked an extensive and highly controversial discussion: For some authors, it is a refutation or at least threat to various concepts of free will, or, conversely, an indicator or even proof for some kind of material determinism. By contrast, other authors deny that the experiment refutes or counts against free will. Furthermore, a third group—whose position we adopt for our further argumentation—denies that Libet’s findings are even relevant for this question at all (for summaries of this complex and extensive discussion and various positions including further references see Nahmias, 2010 ; Radder and Meynen, 2013 ; Schlosser, 2014 ; Fischborn, 2016 ; Lavazza, 2016 ; Schurger, 2017 ). Libet’s own position, although not entirely consistent, opposes most notions of free will ( Roskies, 2011 ; Seifert, 2011 ). Given this background, it is not surprising that there are also numerous further experimental studies on various aspects of this subject area (see the summaries by Saigle et al., 2018 ; Shepard, 2018 ; Brass et al., 2019 ).

However, we argue that this entire discourse is best understood along postmodern lines as hyperreality and that Libet’s experiment itself is a perfect example of a simulacrum. A closer look at the concrete procedure of the experiment shows that Libet actually asked his participants to move their hand or finger “at will” while their brain activity was monitored with an EEG. They were instructed to keep watch in an introspective manner for the moment when they felt the “urge” to move their hand and to record this moment by indicating the clock-position of a pointer. This is obviously a highly artificial situation where the broad and fuzzy concept of “free will” is abstracted and reduced to the movement of the finger, the only degree of freedom being the moment of the movement. The question whether this is an adequate operationalization of free will is of paramount importance, and there are many objections that Libet’s setup fails to measure free will at all (e.g., Mele, 2007 ; Roskies, 2011 ; Kihlstrom, 2017 ; Brass et al., 2019 ).

Before Libet, there was no indication that the decision when to move a finger might be relevant for the concept of free will and the associated discourse. The question whether we have control over our actions referred to completely different levels of granularity. Free will was discussed with respect to questions such as whether we are free to live our lives according to our wishes or whether we are responsible for our actions in social contexts (e.g., Beebee, 2013 ; McKenna and Pereboom, 2016 ; O’Connor and Franklin, 2018 ), and not whether we lift a finger now or two seconds later. Libet’s and others’ jumping from very specific situations to far-reaching conclusions about a very broad and fuzzy theoretical construct illustrates that an extremely wide chasm between two phenomena, namely moving the finger and free will, is bridged in one fell swoop.

In other words, Libet’s experiment is a simulacrum, as it duplicates a phenomenon from our day-to-day experience—namely free will—but in doing so the operationalization alters and reduces the theoretical construct. The outcome is a questionable procedure whose relationship to the phenomenon is highly controversial. Furthermore, the fact that, despite its tenuous connection to free will, Libet’s experiment sparked an extensive discussion on this subject reveals the hyperreal nature of the entire discourse because what is being discussed is not the actual question—namely free will—but rather a simulacrum. Everything else—the arguments, counter-arguments, follow-up experiments, and their interpretations—built upon Libet’s experiment is basically commentary on a simulacrum and not on the real phenomenon. Therefore, a hyperreality is created where the discourse revolves around entirely artificial phenomena, but where the arguments in this discussion refer back to and affect the real, as suggestions are made to alter the legal system and our ideas of responsibility—which, incidentally, is not a question of empirical science but of law, ethics, and philosophy.

All of the above is not meant to say that this whole discourse is meaningless or even gratuitous—on the contrary, our understanding of the subject matter has greatly increased. Although our knowledge of free will has hardly increased, we have gained much insight into the hermeneutics and methodology—and pitfalls!—of investigations of free will, possible consequences on the individual and societal level, and the workings of scientific discourses. And this is exactly what postmodernism is about.

As shown above, there are a number of postmodern elements in the practice of experimental psychology: the prominent role of language; the gap between the linguistic or mental representation and the phenomenon; the “addition of intellect to the object”; the simulacral character of the experiment itself, whose attempt to re-create phenomena necessarily transforms the “real” phenomenon due to the requirements of the experimental method; and finally the creation of a hyperreality when experiments are taken for the “real” phenomenon and the scientific discourse becomes an exchange of symbolic expressions referring to the simulacra created in experiments, replacing the real. All these aspects did not seep gradually into experimental psychology in the wake of postmodernism but have been present since its very inception, because they are necessarily inherent in its philosophy of science.

Given these inherent postmodern traits of experimental psychology, it is puzzling that there is so much resistance against postmodernism as a perceived “threat” to psychology’s status as a science. Although a detailed investigation of the reasons lies outside the scope of this analysis, we suspect two main causes: First, insufficient knowledge of the history of science and of the philosophy of science may result in an idealized concept of a “pure” natural science. Second, a lack of familiarity with the basic tenets of postmodern approaches may lead to the assumption that postmodernism is just an idle game of arbitrary words. However, “science” and “postmodernism” and their respective epistemological concepts are not opposites ( Gergen, 2001 ; Holtz, 2020 ). This is especially true for psychology, which necessarily contains a social dimension, because not only the investigators but also the very subject matter itself are human.

The (over-)reliance on quantitative-experimental methods in psychology, often paired with a superficial understanding of the philosophy of science behind it, has been criticized, either from the theoretical point of view (e.g., Bergmann and Spence, 1941 ; Hearnshaw, 1941 ; Petrie, 1971 ; Law, 2004 ; Smedslund, 2016 ) or because the experimental approach has failed to produce reliable, valid, and relevant applicable knowledge in educational psychology ( Slavin, 2002 ). It is perhaps symptomatic that a textbook teaching the principles of science for psychologists does not contain even one example from experimental psychology but employs only examples from physics, plus Darwin’s theory of evolution ( Wilton and Harley, 2017 ).

On the other hand, the postmodern perspective on experimental psychology provides insight into some pitfalls, as illustrated by the examples above. On the level of the experiment, the methodological requirements imply the creation of an artificial situation, which opens up a gap between the phenomenon as it is in reality and as it is concretely operationalized in the experimental situation. This is not a problem per se as long as it is clear—and clearly communicated!—that the results of the experiment are only valid in a certain context. The problems begin if the movement of a finger is mistaken for free will. Similarly, being aware that local causalities do not explain complex phenomena such as mental and behavioral processes in their entirety also prevents (over-)generalization, especially if communicated appropriately. These limitations make it clear that the experiment should not be made into an absolute or seen as the only valid way of understanding the psyche and the world.

On the level of psychology as an academic discipline, any investigation must select the appropriate level of granularity and strike a balance between the methodological requirements and the general meaning of the theoretical concept in question in order to find out something about the “real” world. If the level of granularity is so fine that results can no longer be tied back to broader theoretical constructs, and thus fail to provide a helpful understanding of our psychological functioning, academic psychology is in danger of becoming a self-referential hyperreality.

The postmodern character of experimental psychology also allows for a different view on the so-called replication crisis in psychology. Authors contending that there is no replication crisis often employ arguments with postmodern elements, such as the emphasis on specific local conditions in experiments, which may explain the different outcomes of replication studies ( Stroebe and Strack, 2014 ; Baumeister, 2019 ). In other words, they invoke the simulacral character of experiments. Whether or not this explanation is valid, the replication crisis has shown the limits of a predominantly experimental approach in psychology.

Acknowledging the postmodern nature of experimental psychology and incorporating postmodern thinking explicitly into our research may offer a way out of this situation. Our subject matter—the psyche—is extremely complex, ambiguous, and often contradictory. And postmodern thinking has proven capable of successfully explaining such phenomena (e.g., Bertens, 1995 ; Sim, 2011 ; Aylesworth, 2015 ). Paradoxically, theoretical constructs often become much clearer once their inherently fuzzy nature is accepted and taken into account ( Ronzitti, 2011 ). Therefore, thinking more along postmodern lines would actually sharpen the theoretical and conceptual basis of experimental psychology—all the more as experimental psychology has inevitably been a postmodern endeavor since its very beginning.

Author Contributions

RM, CK, and CL developed the idea for this article. RM drafted the manuscript. CK and CL provided feedback and suggestions. All authors approved the manuscript for submission.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

  • Armstrong D. M. (1997). A World of States of Affairs. Cambridge: CUP.
  • Aylesworth G. (2015). “Postmodernism,” in The Stanford Encyclopedia of Philosophy, ed. Zalta E. N. Available online at: https://plato.stanford.edu/entries/postmodernism/
  • Baddeley A. (2007). Working Memory, Thought, and Action. Oxford: OUP.
  • Barthes R. (1963). “L’activité structuraliste,” in Essais Critiques (pp. 215–218). Paris: Éditions du Seuil. [“Structuralist activity.” Translated by R. Howard (1972). In Critical Essays, ed. Barthes R. (Evanston: Northern University Press), 213–220].
  • Baudrillard J. (1981). Simulacres et Simulation. Paris: Galilée. [Simulacra and Simulation. Translated by S. F. Glaser (1994). Ann Arbor: The University of Michigan Press.]
  • Baumeister R. F. (2019). “Self-control, ego depletion, and social psychology’s replication crisis,” in Surrounding Self-control (Appendix to chap. 2), ed. Mele A. (New York, NY: OUP). 10.31234/osf.io/uf3cn
  • Beebee H. (2013). Free Will: An Introduction. New York, NY: Palgrave Macmillan.
  • Bergmann G., Spence K. W. (1941). Operationism and theory in psychology. Psychol. Rev. 48, 1–14. 10.1037/h0054874
  • Bertens H. (1995). The Idea of the Postmodern. A History. London: Routledge.
  • Brady T. F., Konkle T., Alvarez G. A. (2011). A review of visual memory capacity: beyond individual items and toward structured representations. J. Vis. 11, 1–34. 10.1167/11.5.4
  • Brady T. F., Störmer V. S., Alvarez G. A. (2016). Working memory is not fixed-capacity: more active storage capacity for real-world objects than for simple stimuli. Proc. Natl. Acad. Sci. U.S.A. 113, 7459–7464. 10.1073/pnas.1520027113
  • Brass M., Furstenberg A., Mele A. R. (2019). Why neuroscience does not disprove free will. Neurosci. Biobehav. Rev. 102, 251–263. 10.1016/j.neubiorev.2019.04.024
  • Capaldi E. J., Proctor R. W. (2013). “Postmodernism and the development of the psychology of science,” in Handbook of the Psychology of Science, eds Feist G. J., Gorman M. E. (New York, NY: Springer), 331–352.
  • Carapezza M., D’Agostino M. (2010). Logic and the myth of the perfect language. Logic Philos. Sci. 8, 1–29. 10.1093/oso/9780190869816.003.0001
  • Chalmers D. (2005). “Phenomenal concepts and the explanatory gap,” in Phenomenal Concepts and Phenomenal Knowledge. New Essays on Consciousness and Physicalism, eds Alter T., Walter S. (Oxford: OUP), 167–194. 10.1093/acprof:oso/9780195171655.003.0009
  • Cohen H. F. (2015). The Rise of Modern Science Explained: A Comparative History. Cambridge: CUP.
  • Endress A. D., Potter M. C. (2014). Large capacity temporary visual memory. J. Exp. Psychol. Gen. 143, 548–565. 10.1037/a0033934
  • Fischborn M. (2016). Libet-style experiments, neuroscience, and libertarian free will. Philos. Psychol. 29, 494–502. 10.1080/09515089.2016.1141399
  • Gazzaniga M. S. (2011). Who’s in Charge? Free Will and the Science of the Brain. New York, NY: Ecco.
  • Gemignani M., Peña E. (2007). Postmodern conceptualizations of culture in social constructionism and cultural studies. J. Theor. Philos. Psychol. 27–28, 276–300. 10.1037/h0091297
  • Gergen K. J. (2001). Psychological science in a postmodern context. Am. Psychol. 56, 803–813. 10.1037/0003-066X.56.10.803
  • Gergen K. J. (2002). Psychological science: to conserve or create? Am. Psychol. 57, 463–464. 10.1037/0003-066X.57.6-7.463
  • Gerrig R. J. (2012). Psychology and Life, 20th Edn. Boston: Pearson.
  • Goldstein E. B. (2015). Cognitive Psychology: Connecting Mind, Research and Everyday Experience. Stamford: Cengage Learning.
  • Goodwin C. J. (2015). A History of Modern Psychology, 5th Edn. Hoboken, NJ: Wiley.
  • Hackert B., Weger U. (2018). Introspection and the Würzburg school: implications for experimental psychology today. Eur. Psychol. 23, 217–232. 10.1027/1016-9040/a000329
  • Haig B. D. (2014). Investigating the Psychological World: Scientific Method in the Behavioral Sciences. Cambridge, MA: MIT Press.
  • Hansen J. T. (2015). The relevance of postmodernism to counselors and counseling practice. J. Ment. Health Counsel. 37, 355–363. 10.17744/MEHC.37.4.06
  • Hare-Mustin R. T., Marecek J. (1988). The meaning of difference: gender theory, postmodernism, and psychology. Am. Psychol. 43, 455–464. 10.1037//0003-066X.43.6.455
  • Harris R. (2005). The Semantics of Science. London: Continuum.
  • Hearnshaw L. S. (1941). Psychology and operationism. Aust. J. Psychol. Philos. 19, 44–57. 10.1080/00048404108541506
  • Heidegger M. (1953). Sein und Zeit (7. Aufl.). Tübingen: Niemeyer. [Being and Time. Translated by J. Stambaugh, revised by D. J. Schmidt (2010). Albany: SUNY Press.]
  • Held B. S. (2007). Psychology’s Interpretive Turn: The Search for Truth and Agency in Theoretical and Philosophical Psychology. Washington, DC: APA.
  • Henry J. (1997). The Scientific Revolution and the Origins of Modern Science. Basingstoke: Macmillan.
  • Hodge B. (1999). The Sokal ‘Hoax’: some implications for science and postmodernism. Continuum J. Media Cult. Stud. 13, 255–269. 10.1080/10304319909365797
  • Hofmann S. G. (2002). More science, not less. Am. Psychol. 57:462. 10.1037//0003-066X.57.6-7.462a
  • Holt R. R. (2005). “The menace of postmodernism to a psychoanalytic psychology,” in Relatedness, Self-definition and Mental Representation: Essays in Honor of Sidney J. Blatt, eds Auerbach J. S., Levy K. N., Schaffer C. E. (London: Routledge), 288–302. 10.4324/9780203337318_chapter_18
  • Holtz P. (2020). Does postmodernism really entail a disregard for the truth? Similarities and differences in postmodern and critical rationalist conceptualizations of truth, progress, and empirical research methods. Front. Psychol. 11:545959. 10.3389/fpsyg.2020.545959
  • Holzman L. (2006). Activating postmodernism. Theory Psychol. 16, 109–123. 10.1177/0959354306060110
  • Holzman L., Morss J. (eds). (2000). Postmodern Psychologies, Societal Practice, and Political Life. New York, NY: Routledge.
  • Hutmacher F. (2019). Why is there so much more research on vision than on any other sensory modality? Front. Psychol. 10:2246. 10.3389/fpsyg.2019.02246
  • Jager B. (1991). Psychology in a postmodern era. J. Phenomenol. Psychol. 22, 60–71. 10.1163/156916291X00046
  • Jiménez J. P. (2015). Psychoanalysis in postmodern times: some questions and challenges. Psychoanal. Inquiry 35, 609–624. 10.1080/07351690.2015.1055221
  • Kihlstrom J. F. (2017). Time to lay the Libet experiment to rest: commentary on Papanicolaou (2017). Psychol. Conscious. Theory Res. Pract. 4, 324–329. 10.1037/cns0000124
  • Kleinert A. (2009). Der messende Luchs. NTM Z. Gesch. Wiss. Tech. Med. 17, 199–206. 10.1007/s00048-009-0335-4
  • Krippner S. (2001). “Research methodology in humanistic psychology in the light of postmodernity,” in The Handbook of Humanistic Psychology: Leading Edges in Theory, Research, and Practice, eds Schneider K. J., Bugental J. F., Pierson J. F. (Thousand Oaks, CA: SAGE Publications), 290–304. 10.4135/9781412976268.n22
  • Kruger D. J. (2002). The deconstruction of constructivism. Am. Psychol. 57, 456–457. 10.1037/0003-066X.57.6-7.456
  • Kvale S. (ed.) (1992). Psychology and Postmodernism. London: SAGE.
  • Lavazza A. (2016). Free will and neuroscience: from explaining freedom away to new ways of operationalizing and measuring it. Front. Hum. Neurosci. 10:262. 10.3389/fnhum.2016.00262
  • Law J. (2004). After Method: Mess in Social Science Research. London: Routledge.
  • Leahey T. H. (2017). A History of Psychology: From Antiquity to Modernity, 8th Edn. New York, NY: Routledge.
  • Leffert M. (2007). A contemporary integration of modern and postmodern trends in psychoanalysis. J. Am. Psychoanal. Assoc. 55, 177–197. 10.1177/00030651070550011001
  • Libet B., Gleason C. A., Wright E. W., Pearl D. K. (1983). Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential). The unconscious initiation of a freely voluntary act. Brain 106, 623–642. 10.1093/brain/106.3.623
  • Lilienfeld S. O. (2010). Can psychology become a science? Pers. Individ. Differ. 49, 281–288. 10.1016/j.paid.2010.01.024
  • Locke E. A. (2002). The dead end of postmodernism. Am. Psychol. 57:458. 10.1037/0003-066X.57.6-7.458a
  • Luck S. J., Vogel E. K. (1997). The capacity of visual working memory for features and conjunctions. Nature 390, 279–281. 10.1038/36846
  • Luck S. J., Vogel E. K. (2013). Visual working memory capacity: from psychophysics and neurobiology to individual differences. Trends Cogn. Sci. 17, 391–400. 10.1016/j.tics.2013.06.006
  • Ma W. J., Husain M., Bays P. M. (2014). Changing concepts of working memory. Nature Neurosci. 17, 347–356. 10.1038/nn.3655
  • Mandler G. (2007). A History of Modern Experimental Psychology: From James and Wundt to Cognitive Science. Cambridge, MA: MIT Press.
  • Martin J. (2003). Positivism, quantification and the phenomena of psychology. Theory Psychol. 13, 33–38. 10.1177/0959354303013001760
  • Martin J., Sugarman J. (2009). Middle-ground theorizing, realism, and objectivity in psychology: a commentary on Held (2007). Theory Psychol. 19, 115–122. 10.1177/0959354308101422
  • Matlin M. W. (2012). Cognition, 8th Edn. Hoboken: Wiley.
  • McKenna M., Pereboom D. (2016). Free Will: A Contemporary Introduction. New York, NY: Routledge.
  • Mele A. R. (2007). “Decisions, intentions, urges, and free will: why Libet has not shown what he says he has,” in Causation and Explanation, eds Campbell J. K., O’Rourke M., Silverstein H. S. (Cambridge, MA: MIT Press), 241–263.
  • Michell J. (2003). The quantitative imperative: positivism, naïve realism and the place of qualitative methods in psychology. Theory Psychol. 13, 5–31. 10.1177/0959354303013001758
  • Nahmias E. (2010). “Scientific challenges to free will,” in A Companion to the Philosophy of Action, eds Sandis C., O’Connor T. (Malden: Wiley-Blackwell), 345–310. 10.1002/9781444323528.ch44
  • Nola R., Sankey H. (2014). Theories of Scientific Method. An Introduction. Stocksfield: Acumen.
  • O’Connor T., Franklin C. (2018). “Free will,” in The Stanford Encyclopedia of Philosophy, ed. Zalta E. N. Available online at: https://plato.stanford.edu/entries/freewill/
  • Olby R. C., Cantor G. N., Christie J. R. R., Hodge M. J. S. (eds). (1991). Companion to the History of Modern Science. London: Routledge.
  • Osterlind S. J. (2019). The Error of Truth: How History and Mathematics Came Together to Form Our Character and Shape Our Worldview. Oxford: OUP.
  • Parkin A. J. (2013). Essential Cognitive Psychology (classic edition). London: Psychology Press.
  • Paul L. A., Hall N. (2013). Causation: A User’s Guide. Oxford: OUP.
  • Pearl J. (2009). Causality. Models, Reasoning, and Inference, 2nd Edn. Cambridge: CUP.
  • Petrie H. G. (1971). A dogma of operationalism in the social sciences. Philos. Soc. Sci. 1, 145–160. 10.1177/004839317100100109
  • Pinker S. (2018). Enlightenment Now. The Case for Reason, Science, Humanism, and Progress. New York, NY: Viking.
  • Planck M. (1949). The meaning and limits of exact science. Science 110, 319–327. 10.1126/science.110.2857.319
  • Radder H., Meynen G. (2013). Does the brain “initiate” freely willed processes? A philosophy of science critique of Libet-type experiments and their interpretation. Theory Psychol. 23, 3–21. 10.1177/0959354312460926
  • Ramey H. L., Grubb S. (2009). Modernism, postmodernism and (evidence-based) practice. Contemp. Fam. Ther. 31, 75–86. 10.1007/s10591-009-9086-6
  • Robinson M. M., Benjamin A. S., Irwin D. E. (2020). Is there a K in capacity? Assessing the structure of visual short-term memory. Cogn. Psychol. 121:101305. 10.1016/j.cogpsych.2020.101305
  • Ronzitti G. (ed.) (2011). Vagueness: A Guide. Dordrecht: Springer.
  • Roskies A. L. (2011). “Why Libet’s studies don’t pose a threat to free will,” in Conscious Will and Responsibility, eds Sinnott-Armstrong W., Nadel L. (Oxford: OUP), 11–22. 10.1093/acprof:oso/9780195381641.003.0003
  • Saigle V., Dubljević V., Racine E. (2018). The impact of a landmark neuroscience study on free will: a qualitative analysis of articles using Libet and colleagues’ methods. AJOB Neurosci. 9, 29–41. 10.1080/21507740.2018.1425756
  • Schlosser M. E. (2014). The neuroscientific study of free will: a diagnosis of the controversy. Synthese 191, 245–262. 10.1007/s11229-013-0312-2
  • Schurger A. (2017). “The neuropsychology of conscious volition,” in The Blackwell Companion to Consciousness, eds Schneider S., Velmans M. (Malden: Wiley Blackwell), 695–710. 10.1002/9781119132363.ch49
  • Schurgin M. W. (2018). Visual memory, the long and the short of it: a review of visual working memory and long-term memory. Attention Percept. Psychophys. 80, 1035–1056. 10.3758/s13414-018-1522-y
  • Schurgin M. W., Brady T. F. (2019). When “capacity” changes with set size: ensemble representations support the detection of across-category changes in visual working memory. J. Vis. 19, 1–13. 10.1167/19.5.3
  • Schurgin M. W., Cunningham C. A., Egeth H. E., Brady T. F. (2018). Visual long-term memory can replace active maintenance in visual working memory. bioRxiv [Preprint]. 10.1101/381848
  • Segerstråle U. C. O. (ed.) (2000). Beyond the Science Wars: The Missing Discourse about Science and Society. Albany: SUNY Press.
  • Seifert J. (2011). In defense of free will: a critique of Benjamin Libet. Rev. Metaphys. 65, 377–407.
  • Shearer A. (2020). The Story of Yoga: From Ancient India to the Modern West. London: Hurst & Company.
  • Shepard J. (2018). How Libet-style experiments may (or may not) challenge lay theories of free will. AJOB Neurosci. 9, 45–47. 10.1080/21507740.2018.1425766
  • Sim S. (2011). The Routledge Companion to Postmodernism, 3rd Edn. London: Routledge.
  • Sinacore A. L., Enns C. Z. (2005). “Diversity feminisms: postmodern, women-of-color, antiracist, lesbian, third-wave, and global perspectives,” in Teaching and Social Justice: Integrating Multicultural and Feminist Theories in the Classroom, eds Enns C. Z., Sinacore A. L. (Washington, DC: APA), 41–68. 10.1037/10929-003
  • Slavin R. E. (2002). Evidence-based education policies: transforming educational practice and research. Educ. Res. 31, 15–21. 10.3102/0013189X031007015
  • Smedslund J. (2016). Why psychology cannot be an empirical science. Integrative Psychol. Behav. Sci. 50, 185–195. 10.1007/s12124-015-9339-x
  • Sokal A. D. (1996). A physicist experiments with cultural studies. Lingua Franca 6, 62–64.
  • Sternberg R. J., Sternberg K. (2011). Cognitive Psychology, 6th Edn. Wadsworth: Cengage Learning.
  • Stroebe W., Strack F. (2014). The alleged crisis and the illusion of exact replication. Perspect. Psychol. Sci. 9, 59–71. 10.1177/1745691613514450
  • Taleb N. N. (2007). The Black Swan: The Impact of the Highly Improbable. New York, NY: Random House.
  • Tarescavage A. M. (2020). Science Wars II: the insidious influence of postmodern ideology on clinical psychology (commentary on “Implications of ideological bias in social psychology on clinical practice”). Clin. Psychol. Sci. Pract. 27:e12319. 10.1111/cpsp.12319
  • Teo T. (2018). Outline of Theoretical Psychology. London: Palgrave Macmillan.
  • Wax M. L. (1995). Method as madness: science, hermeneutics, and art in psychoanalysis. J. Am. Acad. Psychoanal. 23, 525–543. 10.1521/jaap.1.1995.23.4.525
  • Wegner D. M. (2017). The Illusion of Conscious Will, 2nd Edn. Cambridge, MA: MIT Press.
  • Wilton R., Harley T. (2017). Science and Psychology. London: Routledge.


Psychological Research

The Scientific Process

Learning objectives.

  • Explain the steps of the scientific method
  • Differentiate between theories and hypotheses

A skull has a large hole bored through the forehead.

The goal of all scientists is to better understand the world around them. Psychologists focus their attention on understanding behavior, as well as the cognitive (mental) and physiological (body) processes that underlie behavior. In contrast to other methods that people use to understand the behavior of others, such as intuition and personal experience, the hallmark of scientific research is that there is evidence to support a claim. Scientific knowledge is empirical : It is grounded in objective, tangible evidence that can be observed time and time again, regardless of who is observing.

While behavior is observable, the mind is not. If someone is crying, we can see the behavior. However, the reason for the behavior is more difficult to determine. Is the person crying due to being sad, in pain, or happy? Sometimes we can learn the reason for someone’s behavior by simply asking a question, like “Why are you crying?” However, there are situations in which an individual is either uncomfortable or unwilling to answer the question honestly, or is incapable of answering. For example, infants would not be able to explain why they are crying. In such circumstances, the psychologist must be creative in finding ways to better understand behavior. This module explores how scientific knowledge is generated, and how important that knowledge is in forming decisions in our personal lives and in the public domain.

Process of Scientific Research

Flowchart of the scientific method. It begins with make an observation, then ask a question, form a hypothesis that answers the question, make a prediction based on the hypothesis, do an experiment to test the prediction, analyze the results, prove the hypothesis correct or incorrect, then report the results.

Scientific knowledge is advanced through a process known as the scientific method. Basically, ideas (in the form of theories and hypotheses) are tested against the real world (in the form of empirical observations), and those empirical observations lead to more ideas that are tested against the real world, and so on.

The basic steps in the scientific method are:

  • Observe a natural phenomenon and define a question about it
  • Make a hypothesis, or potential solution to the question
  • Test the hypothesis
  • If the hypothesis is true, find more evidence or find counter-evidence
  • If the hypothesis is false, create a new hypothesis or try again
  • Draw conclusions and repeat; the scientific method is never-ending, and no result is ever considered perfect

In order to ask an important question that may improve our understanding of the world, a researcher must first observe natural phenomena. By making observations, a researcher can define a useful question. After finding a question to answer, the researcher can then make a prediction (a hypothesis) about what he or she thinks the answer will be. This prediction is usually a statement about the relationship between two or more variables. After making a hypothesis, the researcher will then design an experiment to test his or her hypothesis and evaluate the data gathered. These data will either support or refute the hypothesis. Based on the conclusions drawn from the data, the researcher will then find more evidence to support the hypothesis, look for counter-evidence to further strengthen the hypothesis, revise the hypothesis and create a new experiment, or continue to incorporate the information gathered to answer the research question.

Basic Principles of the Scientific Method

Two key concepts in the scientific approach are theory and hypothesis. A theory is a well-developed set of ideas that propose an explanation for observed phenomena that can be used to make predictions about future observations. A hypothesis is a testable prediction that is arrived at logically from a theory. It is often worded as an if-then statement (e.g., if I study all night, I will get a passing grade on the test). The hypothesis is extremely important because it bridges the gap between the realm of ideas and the real world. As specific hypotheses are tested, theories are modified and refined to reflect and incorporate the result of these tests.

A diagram has four boxes: the top is labeled “theory,” the right is labeled “hypothesis,” the bottom is labeled “research,” and the left is labeled “observation.” Arrows flow in the direction from top to right to bottom to left and back to the top, clockwise. The top right arrow is labeled “use the hypothesis to form a theory,” the bottom right arrow is labeled “design a study to test the hypothesis,” the bottom left arrow is labeled “perform the research,” and the top left arrow is labeled “create or modify the theory.”

Other key components in following the scientific method include verifiability, predictability, falsifiability, and fairness. Verifiability means that an experiment must be replicable by another researcher. To achieve verifiability, researchers must make sure to document their methods and clearly explain how their experiment is structured and why it produces certain results.

Predictability in a scientific theory implies that the theory should enable us to make predictions about future events. The precision of these predictions is a measure of the strength of the theory.

Falsifiability refers to whether a hypothesis can, in principle, be disproved. For a hypothesis to be falsifiable, it must be logically possible to make an observation or perform an experiment that would show the hypothesis to be unsupported. A hypothesis that has not yet been shown to be false is not thereby proven valid; future testing may still disprove it. Falsifiability does not mean that a hypothesis must be shown to be false, only that it can be tested.

To determine whether a hypothesis is supported or not supported, psychological researchers must conduct hypothesis testing using statistics. Hypothesis testing assesses how likely the observed results would be if chance alone were operating. If hypothesis testing reveals that the results were “statistically significant,” this means that the hypothesis was supported and that the researchers can be reasonably confident that their result was not due to random chance. If the results are not statistically significant, the researchers’ hypothesis was not supported.
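To make the logic of statistical significance concrete, here is a minimal sketch of one common test, an independent-samples t-test, computed by hand. The two groups, their scores, and the cutoff value are illustrative assumptions invented for this example, not data from any real study.

```python
# A toy hypothesis test: do two (hypothetical) groups differ in mean test score?
import math
import statistics

group_a = [85, 88, 90, 86, 89, 91, 87, 90]  # hypothetical scores, group A
group_b = [78, 75, 80, 74, 79, 77, 76, 81]  # hypothetical scores, group B

mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
n_a, n_b = len(group_a), len(group_b)

# Pooled-variance t statistic for two independent samples
pooled = ((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)
t = (mean_a - mean_b) / math.sqrt(pooled * (1 / n_a + 1 / n_b))

# Critical value of t for df = 14 at the conventional alpha = .05 (two-tailed).
# If |t| exceeds it, a difference this large would be unlikely by chance alone.
t_critical = 2.145
significant = abs(t) > t_critical
print(f"t = {t:.2f}, statistically significant: {significant}")
```

Here the group difference is large relative to the within-group variability, so the test comes out significant; with noisier or more overlapping scores, the same code would report a non-significant result, and the hypothesis would not be supported.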

Fairness implies that all data must be considered when evaluating a hypothesis. A researcher cannot pick and choose what data to keep and what to discard or focus specifically on data that support or do not support a particular hypothesis. All data must be accounted for, even if they invalidate the hypothesis.

Applying the Scientific Method

To see how this process works, let’s consider a specific theory and a hypothesis that might be generated from that theory. As you’ll learn in a later module, the James-Lange theory of emotion asserts that emotional experience relies on the physiological arousal associated with the emotional state. If you walked out of your home and discovered a very aggressive snake waiting on your doorstep, your heart would begin to race and your stomach churn. According to the James-Lange theory, these physiological changes would result in your feeling of fear. A hypothesis that could be derived from this theory might be that a person who is unaware of the physiological arousal that the sight of the snake elicits will not feel fear.

Remember that a good scientific hypothesis is falsifiable, or capable of being shown to be incorrect. Recall from the introductory module that Sigmund Freud had many interesting ideas to explain various human behaviors (Figure 3). However, a major criticism of Freud’s theories is that many of his ideas are not falsifiable; for example, it is impossible to imagine empirical observations that would disprove the existence of the id, the ego, and the superego—the three elements of personality described in Freud’s theories. Despite this, Freud’s theories are widely taught in introductory psychology texts because of their historical significance for personality psychology and psychotherapy, and psychoanalytic ideas remain influential in some modern forms of therapy.

(a)A photograph shows Freud holding a cigar. (b) The mind’s conscious and unconscious states are illustrated as an iceberg floating in water. Beneath the water’s surface in the “unconscious” area are the id, ego, and superego. The area just below the water’s surface is labeled “preconscious.” The area above the water’s surface is labeled “conscious.”

In contrast, the James-Lange theory does generate falsifiable hypotheses, such as the one described above. Some individuals who suffer significant injuries to their spinal columns are unable to feel the bodily changes that often accompany emotional experiences. Therefore, we could test the hypothesis by determining how emotional experiences differ between individuals who have the ability to detect these changes in their physiological arousal and those who do not. In fact, this research has been conducted and while the emotional experiences of people deprived of an awareness of their physiological arousal may be less intense, they still experience emotion (Chwalisz, Diener, & Gallagher, 1988).

Link to Learning

Want to participate in a study? Visit this Psychological Research on the Net website and click on a link that sounds interesting to you in order to participate in online research.

Why the Scientific Method Is Important for Psychology

The use of the scientific method is one of the main features that separates modern psychology from earlier philosophical inquiries about the mind. Compared to chemistry, physics, and other “natural sciences,” psychology has long been considered one of the “social sciences” because of the subjective nature of the things it seeks to study. Many of the concepts that psychologists are interested in—such as aspects of the human mind, behavior, and emotions—are subjective and cannot be directly measured. Psychologists often rely instead on behavioral observations and self-reported data, which are considered by some to be illegitimate or lacking in methodological rigor. Applying the scientific method to psychology, therefore, helps to standardize the approach to understanding its very different types of information.

The scientific method allows psychological data to be replicated and confirmed in many instances, under different circumstances, and by a variety of researchers. Through replication of experiments, new generations of psychologists can reduce errors and broaden the applicability of theories. It also allows theories to be tested and validated instead of simply being conjectures that could never be verified or falsified. All of this allows psychologists to gain a stronger understanding of how the human mind works.

Scientific articles published in journals and psychology papers written in the style of the American Psychological Association (i.e., in “APA style”) are structured around the scientific method. These papers include an Introduction, which introduces the background information and outlines the hypotheses; a Methods section, which outlines the specifics of how the experiment was conducted to test the hypothesis; a Results section, which includes the statistics that tested the hypothesis and state whether it was supported or not supported, and a Discussion and Conclusion, which state the implications of finding support for, or no support for, the hypothesis. Writing articles and papers that adhere to the scientific method makes it easy for future researchers to repeat the study and attempt to replicate the results.


General Psychology Copyright © by OpenStax and Lumen Learning is licensed under a Creative Commons Attribution 4.0 International License , except where otherwise noted.


Ch 2: Psychological Research Methods

Children sit in front of a bank of television screens. A sign on the wall says, “Some content may not be suitable for children.”

Have you ever wondered whether the violence you see on television affects your behavior? Are you more likely to behave aggressively in real life after watching people behave violently in dramatic situations on the screen? Or, could seeing fictional violence actually get aggression out of your system, causing you to be more peaceful? How are children influenced by the media they are exposed to? A psychologist interested in the relationship between behavior and exposure to violent images might ask these very questions.

The topic of violence in the media today is contentious. Since ancient times, humans have been concerned about the effects of new technologies on our behaviors and thinking processes. The Greek philosopher Socrates, for example, worried that writing—a new technology at that time—would diminish people’s ability to remember because they could rely on written records rather than committing information to memory. In our world of quickly changing technologies, questions about the effects of media continue to emerge. Is it okay to talk on a cell phone while driving? Are headphones good to use in a car? What impact does text messaging have on reaction time while driving? These are types of questions that psychologist David Strayer asks in his lab.

Watch this short video to see how Strayer utilizes the scientific method to reach important conclusions regarding technology and driving safety.

You can view the transcript for “Understanding driver distraction” here (opens in new window) .

How can we go about finding answers that are supported not by mere opinion, but by evidence that we can all agree on? The findings of psychological research can help us navigate issues like this.

Introduction to the Scientific Method

Learning objectives.

  • Explain the steps of the scientific method
  • Describe why the scientific method is important to psychology
  • Summarize the processes of informed consent and debriefing
  • Explain how research involving humans or animals is regulated

photograph of the word "research" from a dictionary with a pen pointing at the word.

Scientists are engaged in explaining and understanding how the world around them works, and they are able to do so by coming up with theories that generate hypotheses that are testable and falsifiable. Theories that stand up to their tests are retained and refined, while those that do not are discarded or modified. In this way, research enables scientists to separate fact from simple opinion. Having good information generated from research aids in making wise decisions both in public policy and in our personal lives. In this section, you’ll see how psychologists use the scientific method to study and understand behavior.

The Scientific Process

A skull has a large hole bored through the forehead.

The goal of all scientists is to better understand the world around them. Psychologists focus their attention on understanding behavior, as well as the cognitive (mental) and physiological (body) processes that underlie behavior. In contrast to other methods that people use to understand the behavior of others, such as intuition and personal experience, the hallmark of scientific research is that there is evidence to support a claim. Scientific knowledge is empirical: It is grounded in objective, tangible evidence that can be observed time and time again, regardless of who is observing.

While behavior is observable, the mind is not. If someone is crying, we can see the behavior. However, the reason for the behavior is more difficult to determine. Is the person crying due to being sad, in pain, or happy? Sometimes we can learn the reason for someone’s behavior by simply asking a question, like “Why are you crying?” However, there are situations in which an individual is either uncomfortable or unwilling to answer the question honestly, or is incapable of answering. For example, infants would not be able to explain why they are crying. In such circumstances, the psychologist must be creative in finding ways to better understand behavior. This module explores how scientific knowledge is generated, and how important that knowledge is in forming decisions in our personal lives and in the public domain.

Process of Scientific Research

Flowchart of the scientific method. It begins with make an observation, then ask a question, form a hypothesis that answers the question, make a prediction based on the hypothesis, do an experiment to test the prediction, analyze the results, determine whether the results support the hypothesis, then report the results.

Scientific knowledge is advanced through a process known as the scientific method. Basically, ideas (in the form of theories and hypotheses) are tested against the real world (in the form of empirical observations), and those empirical observations lead to more ideas that are tested against the real world, and so on.

The basic steps in the scientific method are:

  • Observe a natural phenomenon and define a question about it
  • Make a hypothesis, or potential solution to the question
  • Test the hypothesis
  • If the results support the hypothesis, gather more evidence or look for counter-evidence
  • If the results do not support the hypothesis, revise it or form a new hypothesis and try again
  • Draw conclusions and repeat–the scientific method is never-ending, and no result is ever considered perfect

In order to ask an important question that may improve our understanding of the world, a researcher must first observe natural phenomena. By making observations, a researcher can define a useful question. After finding a question to answer, the researcher can then make a prediction (a hypothesis) about what he or she thinks the answer will be. This prediction is usually a statement about the relationship between two or more variables. After making a hypothesis, the researcher will then design an experiment to test his or her hypothesis and evaluate the data gathered. These data will either support or refute the hypothesis. Based on the conclusions drawn from the data, the researcher will then find more evidence to support the hypothesis, look for counter-evidence to further strengthen the hypothesis, revise the hypothesis and create a new experiment, or continue to incorporate the information gathered to answer the research question.

Basic Principles of the Scientific Method

Two key concepts in the scientific approach are theory and hypothesis. A theory is a well-developed set of ideas that propose an explanation for observed phenomena that can be used to make predictions about future observations. A hypothesis is a testable prediction that is arrived at logically from a theory. It is often worded as an if-then statement (e.g., if I study all night, I will get a passing grade on the test). The hypothesis is extremely important because it bridges the gap between the realm of ideas and the real world. As specific hypotheses are tested, theories are modified and refined to reflect and incorporate the result of these tests.

A diagram has four boxes: the top is labeled “theory,” the right is labeled “hypothesis,” the bottom is labeled “research,” and the left is labeled “observation.” Arrows flow in the direction from top to right to bottom to left and back to the top, clockwise. The top right arrow is labeled “use the theory to form a hypothesis,” the bottom right arrow is labeled “design a study to test the hypothesis,” the bottom left arrow is labeled “perform the research,” and the top left arrow is labeled “create or modify the theory.”

Other key components in following the scientific method include verifiability, predictability, falsifiability, and fairness. Verifiability means that an experiment must be replicable by another researcher. To achieve verifiability, researchers must make sure to document their methods and clearly explain how their experiment is structured and why it produces certain results.

Predictability in a scientific theory implies that the theory should enable us to make predictions about future events. The precision of these predictions is a measure of the strength of the theory.

Falsifiability refers to whether a hypothesis can, in principle, be disproved. For a hypothesis to be falsifiable, it must be logically possible to make an observation or conduct an experiment that would show the hypothesis to be unsupported. A falsifiable hypothesis does not have to be shown to be false; it only has to be testable, so that future observations could disprove it.

To determine whether a hypothesis is supported or not supported, psychological researchers conduct hypothesis testing using statistics. Hypothesis testing is a set of statistical procedures for estimating how likely the observed results would be if chance alone were at work. If hypothesis testing reveals that the results were “statistically significant,” the hypothesis was supported, and the researchers can be reasonably confident that their result was not due to random chance. If the results are not statistically significant, the researchers’ hypothesis was not supported.
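The logic of a significance test can be sketched with a simple permutation test. Everything below is invented for illustration (the two sets of scores and the scenario of a “study group” versus a “study alone” condition are hypothetical); the 0.05 cutoff is the conventional significance threshold.

```python
import random
import statistics

# Invented scores for two hypothetical conditions (not real data).
group_scores = [78, 85, 90, 72, 88, 81, 84, 79]
alone_scores = [70, 75, 68, 74, 73, 77, 69, 72]

observed = statistics.mean(group_scores) - statistics.mean(alone_scores)

# Permutation test: if the condition labels did not matter (chance alone),
# how often would randomly shuffled labels produce a difference at least
# as large as the one we observed?
random.seed(0)                      # reproducible shuffles
pooled = group_scores + alone_scores
n = len(group_scores)
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
    if diff >= observed:
        extreme += 1

p_value = extreme / trials
print(f"observed difference: {observed:.2f}, p = {p_value:.4f}")
# A p-value below the conventional 0.05 threshold would be reported as
# "statistically significant": the result is unlikely under chance alone.
```

In practice psychologists usually rely on standard tests (such as a t-test) rather than writing their own, but the interpretation of the resulting p-value is the same.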

Fairness implies that all data must be considered when evaluating a hypothesis. A researcher cannot pick and choose what data to keep and what to discard or focus specifically on data that support or do not support a particular hypothesis. All data must be accounted for, even if they invalidate the hypothesis.

Applying the Scientific Method

To see how this process works, let’s consider a specific theory and a hypothesis that might be generated from that theory. As you’ll learn in a later module, the James-Lange theory of emotion asserts that emotional experience relies on the physiological arousal associated with the emotional state. If you walked out of your home and discovered a very aggressive snake waiting on your doorstep, your heart would begin to race and your stomach churn. According to the James-Lange theory, these physiological changes would result in your feeling of fear. A hypothesis that could be derived from this theory might be that a person who is unaware of the physiological arousal that the sight of the snake elicits will not feel fear.

Remember that a good scientific hypothesis is falsifiable, or capable of being shown to be incorrect. Recall from the introductory module that Sigmund Freud had lots of interesting ideas to explain various human behaviors (Figure 5). However, a major criticism of Freud’s theories is that many of his ideas are not falsifiable; for example, it is impossible to imagine empirical observations that would disprove the existence of the id, the ego, and the superego—the three elements of personality described in Freud’s theories. Despite this, Freud’s theories are widely taught in introductory psychology texts because of their historical significance for personality psychology and psychotherapy, and they remain influential in many modern forms of therapy.

(a) A photograph shows Freud holding a cigar. (b) The mind’s conscious and unconscious states are illustrated as an iceberg floating in water. Beneath the water’s surface in the “unconscious” area are the id, ego, and superego. The area just below the water’s surface is labeled “preconscious.” The area above the water’s surface is labeled “conscious.”

In contrast, the James-Lange theory does generate falsifiable hypotheses, such as the one described above. Some individuals who suffer significant injuries to their spinal columns are unable to feel the bodily changes that often accompany emotional experiences. Therefore, we could test the hypothesis by determining how emotional experiences differ between individuals who have the ability to detect these changes in their physiological arousal and those who do not. In fact, this research has been conducted and while the emotional experiences of people deprived of an awareness of their physiological arousal may be less intense, they still experience emotion (Chwalisz, Diener, & Gallagher, 1988).

Link to Learning

Want to participate in a study? Visit this Psychological Research on the Net website and click on a link that sounds interesting to you in order to participate in online research.

Why the Scientific Method Is Important for Psychology

The use of the scientific method is one of the main features that separates modern psychology from earlier philosophical inquiries about the mind. Compared to chemistry, physics, and other “natural sciences,” psychology has long been considered one of the “social sciences” because of the subjective nature of the things it seeks to study. Many of the concepts that psychologists are interested in—such as aspects of the human mind, behavior, and emotions—are subjective and cannot be directly measured. Psychologists often rely instead on behavioral observations and self-reported data, which are considered by some to be illegitimate or lacking in methodological rigor. Applying the scientific method to psychology, therefore, helps to standardize the approach to understanding its very different types of information.

The scientific method allows psychological data to be replicated and confirmed in many instances, under different circumstances, and by a variety of researchers. Through replication of experiments, new generations of psychologists can reduce errors and broaden the applicability of theories. It also allows theories to be tested and validated instead of simply being conjectures that could never be verified or falsified. All of this allows psychologists to gain a stronger understanding of how the human mind works.

Scientific articles published in journals and psychology papers written in the style of the American Psychological Association (i.e., in “APA style”) are structured around the scientific method. These papers include an Introduction, which presents the background information and outlines the hypotheses; a Methods section, which details how the experiment was conducted to test the hypothesis; a Results section, which reports the statistics used to test the hypothesis and states whether it was supported or not supported; and a Discussion and Conclusion, which consider the implications of finding support, or no support, for the hypothesis. Writing articles and papers that adhere to the scientific method makes it easy for future researchers to repeat the study and attempt to replicate the results.

Ethics in Research

Today, scientists agree that good research is ethical in nature and is guided by a basic respect for human dignity and safety. However, as you will read about in the Dig Deeper feature on the Tuskegee Syphilis Study, this has not always been the case. Modern researchers must demonstrate that the research they perform is ethically sound. This section presents how ethical considerations affect the design and implementation of research conducted today.

Research Involving Human Participants

Any experiment involving the participation of human subjects is governed by extensive, strict guidelines designed to ensure that the experiment does not result in harm. Any research institution that receives federal support for research involving human participants must have access to an institutional review board (IRB). The IRB is a committee of individuals often made up of members of the institution’s administration, scientists, and community members (Figure 6). The purpose of the IRB is to review proposals for research that involves human participants. The IRB reviews these proposals with basic ethical principles in mind, and generally, approval from the IRB is required in order for the experiment to proceed.

A photograph shows a group of people seated around tables in a meeting room.

An institution’s IRB requires several components in any experiment it approves. For one, each participant must sign an informed consent form before they can participate in the experiment. An informed consent form provides a written description of what participants can expect during the experiment, including potential risks and implications of the research. It also lets participants know that their involvement is completely voluntary and can be discontinued without penalty at any time. Furthermore, the informed consent guarantees that any data collected in the experiment will remain completely confidential. In cases where research participants are under the age of 18, the parents or legal guardians are required to sign the informed consent form.

While the informed consent form should be as honest as possible in describing exactly what participants will be doing, sometimes deception is necessary to prevent participants’ knowledge of the exact research question from affecting the results of the study. Deception involves purposely misleading experiment participants in order to maintain the integrity of the experiment, but not to the point where the deception could be considered harmful. For example, if we are interested in how our opinion of someone is affected by their attire, we might use deception in describing the experiment to prevent that knowledge from affecting participants’ responses. In cases where deception is involved, participants must receive a full debriefing upon conclusion of the study—complete, honest information about the purpose of the experiment, how the data collected will be used, the reasons why deception was necessary, and information about how to obtain additional information about the study.

Dig Deeper: Ethics and the Tuskegee Syphilis Study

Unfortunately, the ethical guidelines that exist for research today were not always applied in the past. In 1932, poor, rural, black, male sharecroppers from Tuskegee, Alabama, were recruited to participate in an experiment conducted by the U.S. Public Health Service, with the aim of studying syphilis in black men (Figure 7). In exchange for free medical care, meals, and burial insurance, 600 men agreed to participate in the study. A little more than half of the men tested positive for syphilis, and they served as the experimental group (given that the researchers could not randomly assign participants to groups, this represents a quasi-experiment). The remaining syphilis-free individuals served as the control group. However, those individuals that tested positive for syphilis were never informed that they had the disease.

While there was no treatment for syphilis when the study began, by 1947 penicillin was recognized as an effective treatment for the disease. Despite this, no penicillin was administered to the participants in this study, and the participants were not allowed to seek treatment at any other facilities if they continued in the study. Over the course of 40 years, many of the participants unknowingly spread syphilis to their wives (and subsequently their children born from their wives) and eventually died because they never received treatment for the disease. This study was discontinued in 1972 when the experiment was discovered by the national press (Tuskegee University, n.d.). The resulting outrage over the experiment led directly to the National Research Act of 1974 and the strict ethical guidelines for research on humans described in this chapter. Why is this study unethical? How were the men who participated and their families harmed as a function of this research?

A photograph shows a person administering an injection.

Learn more about the Tuskegee Syphilis Study on the CDC website .

Research Involving Animal Subjects

A photograph shows a rat.

The use of animals in research does not mean that animal researchers are immune to ethical concerns. Indeed, the humane and ethical treatment of animal research subjects is a critical aspect of this type of research. Researchers must design their experiments to minimize any pain or distress experienced by the animals serving as research subjects.

Whereas IRBs review research proposals that involve human participants, animal experimental proposals are reviewed by an Institutional Animal Care and Use Committee (IACUC). An IACUC consists of institutional administrators, scientists, veterinarians, and community members. This committee is charged with ensuring that all experimental proposals require the humane treatment of animal research subjects. It also conducts semi-annual inspections of all animal facilities to ensure that the research protocols are being followed. No animal research project can proceed without the committee’s approval.

Introduction to Approaches to Research

Learning objectives.

  • Differentiate between descriptive, correlational, and experimental research
  • Explain the strengths and weaknesses of case studies, naturalistic observation, and surveys
  • Describe the strengths and weaknesses of archival research
  • Compare longitudinal and cross-sectional approaches to research
  • Explain what a correlation coefficient tells us about the relationship between variables
  • Describe why correlation does not mean causation
  • Describe the experimental process, including ways to control for bias
  • Identify and differentiate between independent and dependent variables

Three researchers review data while talking around a microscope.

Psychologists use descriptive, experimental, and correlational methods to conduct research. Descriptive, or qualitative, methods include the case study, naturalistic observation, surveys, archival research, longitudinal research, and cross-sectional research.

Experiments are conducted in order to determine cause-and-effect relationships. In ideal experimental design, the only difference between the experimental and control groups is whether participants are exposed to the experimental manipulation. Each group goes through all phases of the experiment, but each group will experience a different level of the independent variable: the experimental group is exposed to the experimental manipulation, and the control group is not exposed to the experimental manipulation. The researcher then measures the changes that are produced in the dependent variable in each group. Once data are collected from both groups, they are analyzed statistically to determine if there are meaningful differences between the groups.
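The logic of random assignment and group comparison can be sketched as a toy model. Everything here is invented for illustration: the twenty participant IDs, the scoring function, and the assumed 8-point effect of the manipulation are hypothetical, so treat this as a sketch of the design, not a real study.

```python
import random
import statistics

random.seed(42)                       # reproducible assignment

# Twenty hypothetical participants, randomly assigned to conditions so
# the only systematic difference between groups is the manipulation.
participants = list(range(20))
random.shuffle(participants)
experimental = participants[:10]      # exposed to the manipulation
control = participants[10:]           # not exposed

def measure(pid, exposed):
    """Hypothetical dependent-variable score for one participant."""
    baseline = 50 + (pid % 5)         # invented individual differences
    effect = 8 if exposed else 0      # assumed effect of the manipulation
    return baseline + effect

exp_scores = [measure(p, True) for p in experimental]
ctl_scores = [measure(p, False) for p in control]

# The researcher compares the dependent variable across groups; in a
# real study this difference would then be tested statistically.
print("experimental mean:", statistics.mean(exp_scores))
print("control mean:", statistics.mean(ctl_scores))
```

Because assignment is random, individual differences tend to average out across the two groups, so a reliable difference in the group means can be attributed to the manipulation.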

When scientists passively observe and measure phenomena, it is called correlational research. Here, psychologists do not intervene and change behavior, as they do in experiments. In correlational research, they identify patterns of relationships, but usually cannot infer what causes what. Importantly, each correlation describes the relationship between exactly two variables at a time.
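Because a correlation quantifies the relationship between two variables, it can be computed directly. The sleep and quiz-score numbers below are invented for illustration; the function implements the standard Pearson correlation coefficient.

```python
import statistics

# Invented data: hours of sleep and quiz score for ten hypothetical students.
sleep = [5, 6, 6, 7, 7, 8, 8, 8, 9, 9]
score = [65, 70, 68, 74, 75, 80, 78, 82, 85, 88]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between exactly two variables.

    Ranges from -1 (perfect negative) through 0 (no linear
    relationship) to +1 (perfect positive).
    """
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r(sleep, score)
print(f"r = {r:.2f}")
# Even a strong positive r here would NOT show that more sleep causes
# higher scores; correlation alone cannot establish causation.
```

Python 3.10+ also ships `statistics.correlation`, which computes the same quantity without a hand-written function.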

Watch It: More on Research

If you enjoy learning through lectures and want an interesting and comprehensive summary of this section, then click on the YouTube link to watch a lecture given by MIT Professor John Gabrieli. Start at the 30:45 mark and watch through the end to hear examples of actual psychological studies and how they were analyzed. Listen for references to independent and dependent variables, experimenter bias, and double-blind studies. In the lecture, you’ll learn about breaking social norms, “WEIRD” research, why expectations matter, how a warm cup of coffee might make you nicer, why you should change your answer on a multiple choice test, and why praise for intelligence won’t make you any smarter.

You can view the transcript for “Lec 2 | MIT 9.00SC Introduction to Psychology, Spring 2011” here (opens in new window) .

Descriptive Research

There are many research methods available to psychologists in their efforts to understand, describe, and explain behavior and the cognitive and biological processes that underlie it. Some methods rely on observational techniques. Others involve interactions between the researcher and the individuals being studied, ranging from a series of simple questions to extensive, in-depth interviews and well-controlled experiments.

The three main categories of psychological research are descriptive, correlational, and experimental research. Research studies that do not test specific relationships between variables are called descriptive, or qualitative, studies. These studies are used to describe general or specific behaviors and attributes that are observed and measured. In the early stages of research it might be difficult to form a hypothesis, especially when there is not any existing literature in the area. In these situations designing an experiment would be premature, as the question of interest is not yet clearly defined as a hypothesis. Often a researcher will begin with a non-experimental approach, such as a descriptive study, to gather more information about the topic before designing an experiment or correlational study to address a specific hypothesis. Descriptive research is distinct from correlational research, in which psychologists formally test whether a relationship exists between two or more variables. Experimental research goes a step beyond descriptive and correlational research by randomly assigning people to different conditions and using hypothesis testing to make inferences about how those conditions affect behavior. It aims to determine whether one variable directly impacts and causes another. Correlational and experimental research both typically use hypothesis testing, whereas descriptive research does not.

Each of these research methods has unique strengths and weaknesses, and each method may only be appropriate for certain types of research questions. For example, studies that rely primarily on observation produce incredible amounts of information, but the ability to apply this information to the larger population is somewhat limited because of small sample sizes. Survey research, on the other hand, allows researchers to easily collect data from relatively large samples. While this allows for results to be generalized to the larger population more easily, the information that can be collected on any given survey is somewhat limited and subject to problems associated with any type of self-reported data. Some researchers conduct archival research by using existing records. While this can be a fairly inexpensive way to collect data that can provide insight into a number of research questions, researchers using this approach have no control over how or what kind of data were collected.

Correlational research can find a relationship between two variables, but the only way a researcher can claim that the relationship between the variables is cause and effect is to perform an experiment. In experimental research, which will be discussed later in the text, there is a tremendous amount of control over variables of interest. While this is a powerful approach, experiments are often conducted in very artificial settings. This calls into question the validity of experimental findings with regard to how they would apply in real-world settings. In addition, many of the questions that psychologists would like to answer cannot be pursued through experimental research because of ethical concerns.

The three main types of descriptive studies are naturalistic observation, case studies, and surveys.

Naturalistic Observation

If you want to understand how behavior occurs, one of the best ways to gain information is to simply observe the behavior in its natural context. However, people might change their behavior in unexpected ways if they know they are being observed. How do researchers obtain accurate information when people tend to hide their natural behavior? As an example, imagine that your professor asks everyone in your class to raise their hand if they always wash their hands after using the restroom. Chances are that almost everyone in the classroom will raise their hand, but do you think hand washing after every trip to the restroom is really that universal?

This is very similar to the phenomenon mentioned earlier in this module: many individuals do not feel comfortable answering a question honestly. But if we are committed to finding out the facts about hand washing, we have other options available to us.

Suppose we send a classmate into the restroom to actually watch whether everyone washes their hands after using the restroom. Will our observer blend into the restroom environment by wearing a white lab coat, sitting with a clipboard, and staring at the sinks? We want our researcher to be inconspicuous—perhaps standing at one of the sinks pretending to put in contact lenses while secretly recording the relevant information. This type of observational study is called naturalistic observation: observing behavior in its natural setting. To better understand peer exclusion, Suzanne Fanger collaborated with colleagues at the University of Texas to observe the behavior of preschool children on a playground. How did the observers remain inconspicuous over the duration of the study? They equipped a few of the children with wireless microphones (which the children quickly forgot about) and observed while taking notes from a distance. Also, the children in that particular preschool (a “laboratory preschool”) were accustomed to having observers on the playground (Fanger, Frankel, & Hazen, 2012).

Figure 9. A photograph shows two police cars driving, one with its lights flashing.

It is critical that the observer be as unobtrusive and as inconspicuous as possible: when people know they are being watched, they are less likely to behave naturally. If you have any doubt about this, ask yourself how your driving behavior might differ in two situations: In the first situation, you are driving down a deserted highway during the middle of the day; in the second situation, you are being followed by a police car down the same deserted highway (Figure 9).

It should be pointed out that naturalistic observation is not limited to research involving humans. Indeed, some of the best-known examples of naturalistic observation involve researchers going into the field to observe various kinds of animals in their own environments. As with human studies, the researchers maintain their distance and avoid interfering with the animal subjects so as not to influence their natural behaviors. Scientists have used this technique to study social hierarchies and interactions among animals ranging from ground squirrels to gorillas. The information provided by these studies is invaluable in understanding how those animals organize socially and communicate with one another. The primatologist Jane Goodall, for example, spent nearly five decades observing the behavior of chimpanzees in Africa (Figure 10). As an illustration of the types of concerns that a researcher might encounter in naturalistic observation, some scientists criticized Goodall for giving the chimps names instead of referring to them by numbers—using names was thought to undermine the emotional detachment required for the objectivity of the study (McKie, 2010).

Figure 10. (a) A photograph shows Jane Goodall speaking from a lectern. (b) A photograph shows a chimpanzee’s face.

The greatest benefit of naturalistic observation is the validity, or accuracy, of information collected unobtrusively in a natural setting. Having individuals behave as they normally would in a given situation means that we have a higher degree of ecological validity, or realism, than we might achieve with other research approaches. Therefore, our ability to generalize the findings of the research to real-world situations is enhanced. If done correctly, we need not worry about people or animals modifying their behavior simply because they are being observed. Sometimes, people may assume that reality programs give us a glimpse into authentic human behavior. However, the principle of inconspicuous observation is violated as reality stars are followed by camera crews and are interviewed on camera for personal confessionals. Given that environment, we must doubt how natural and realistic their behaviors are.

The major downside of naturalistic observation is that such studies are often difficult to set up and control. In our restroom study, what if you stood in the restroom all day prepared to record people’s hand washing behavior and no one came in? Or, what if you have been closely observing a troop of gorillas for weeks only to find that they migrated to a new place while you were sleeping in your tent? The benefit of realistic data comes at a cost. As a researcher you have no control over when (or if) you have behavior to observe. In addition, this type of observational research often requires significant investments of time, money, and a good dose of luck.

Sometimes studies involve structured observation. In these cases, people are observed while engaging in set, specific tasks. An excellent example of structured observation comes from the Strange Situation, a procedure developed by Mary Ainsworth (you will read more about this in the module on lifespan development). The Strange Situation is a procedure used to evaluate attachment styles that exist between an infant and caregiver. In this scenario, caregivers bring their infants into a room filled with toys. The Strange Situation involves a number of phases, including a stranger coming into the room, the caregiver leaving the room, and the caregiver’s return to the room. The infant’s behavior is closely monitored at each phase, but it is the behavior of the infant upon being reunited with the caregiver that is most telling in terms of characterizing the infant’s attachment style with the caregiver.

Another potential problem in observational research is observer bias. Generally, people who act as observers are closely involved in the research project and may unconsciously skew their observations to fit their research goals or expectations. To protect against this type of bias, researchers should have clear criteria established for the types of behaviors recorded and how those behaviors should be classified. In addition, researchers often compare observations of the same event by multiple observers, in order to test inter-rater reliability: a measure of reliability that assesses the consistency of observations by different observers.
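
The consistency check described above can be sketched in a few lines of Python. This is a toy illustration: the ratings below are invented, and percent agreement is only the simplest index of inter-rater reliability (published studies often use statistics such as Cohen's kappa, which correct for chance agreement).

```python
# Hypothetical data: two observers independently classify the same ten
# playground incidents as aggressive (1) or not aggressive (0).
observer_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
observer_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

# Percent agreement: the fraction of incidents both observers coded the same.
agreement = sum(a == b for a, b in zip(observer_a, observer_b)) / len(observer_a)
print(f"Percent agreement: {agreement:.0%}")  # prints "Percent agreement: 90%"
```

Here the two observers disagree on only one of ten incidents, giving 90% agreement; in practice, researchers set an agreement threshold in advance and retrain observers who fall below it.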

Case Studies

In 2011, the New York Times published a feature story on Krista and Tatiana Hogan, Canadian twin girls. These particular twins are unique because Krista and Tatiana are conjoined twins, connected at the head. There is evidence that the two girls are connected in a part of the brain called the thalamus, which is a major sensory relay center. Most incoming sensory information is sent through the thalamus before reaching higher regions of the cerebral cortex for processing.

The implications of this potential connection mean that it might be possible for one twin to experience the sensations of the other twin. For instance, if Krista is watching a particularly funny television program, Tatiana might smile or laugh even if she is not watching the program. This particular possibility has piqued the interest of many neuroscientists who seek to understand how the brain uses sensory information.

These twins represent an enormous resource in the study of the brain, and since their condition is very rare, it is likely that as long as their family agrees, scientists will follow these girls very closely throughout their lives to gain as much information as possible (Dominus, 2011).

In observational research, scientists are conducting a clinical or case study when they focus on one person or just a few individuals. Indeed, some scientists spend their entire careers studying just 10–20 individuals. Why would they do this? Obviously, when they focus their attention on a very small number of people, they can gain a tremendous amount of insight into those cases. The richness of information that is collected in clinical or case studies is unmatched by any other single research method. This allows the researcher to have a very deep understanding of the individuals and the particular phenomenon being studied.

If clinical or case studies provide so much information, why are they not more frequent among researchers? As it turns out, the major benefit of this particular approach is also a weakness. As mentioned earlier, this approach is often used when studying individuals who are interesting to researchers because they have a rare characteristic. Therefore, the individuals who serve as the focus of case studies are not like most other people. If scientists ultimately want to explain all behavior, focusing attention on such a special group of people can make it difficult to generalize any observations to the larger population as a whole. Generalizing refers to the ability to apply the findings of a particular research project to larger segments of society. Again, case studies provide enormous amounts of information, but since the cases are so specific, the potential to apply what’s learned to the average person may be very limited.

Surveys

Often, psychologists develop surveys as a means of gathering data. Surveys are lists of questions to be answered by research participants, and can be delivered as paper-and-pencil questionnaires, administered electronically, or conducted verbally (Figure 11). Generally, the survey itself can be completed in a short time, and the ease of administering a survey makes it easy to collect data from a large number of people.

Surveys allow researchers to gather data from larger samples than may be afforded by other research methods. A sample is a subset of individuals selected from a population, which is the overall group of individuals that the researchers are interested in. Researchers study the sample and seek to generalize their findings to the population.
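
The logic of sampling can be illustrated with a short simulation. The population values below are generated at random purely for illustration; the point is that estimates from larger random samples tend to fall closer to the true population value.

```python
import random

random.seed(1)  # seeded only so the example is reproducible

# Simulated population of 100,000 values (say, hours of news consumption
# per week), centered near 10. All numbers are invented for illustration.
population = [random.gauss(10, 3) for _ in range(100_000)]
pop_mean = sum(population) / len(population)

# Random samples of increasing size: each sample mean estimates the
# population mean, and the error generally shrinks as the sample grows.
for n in (10, 100, 10_000):
    sample = random.sample(population, n)
    error = abs(sum(sample) / n - pop_mean)
    print(f"n={n:>6}: sampling error = {error:.3f}")
```

Any single small sample can happen to land close to the population mean, but across repeated draws the large samples are reliably the better estimators, which is why survey researchers prize large, representative samples.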

Figure 11. A sample online survey reads, “Dear visitor, your opinion is important to us. We would like to invite you to participate in a short survey to gather your opinions and feedback on your news consumption habits. The survey will take approximately 10-15 minutes. Simply click the “Yes” button below to launch the survey. Would you like to participate?” Two buttons are labeled “yes” and “no.”

There are both strengths and weaknesses of the survey in comparison to case studies. By using surveys, we can collect information from a larger sample of people. A larger sample is better able to reflect the actual diversity of the population, thus allowing better generalizability. Therefore, if our sample is sufficiently large and diverse, we can assume that the data we collect from the survey can be generalized to the larger population with more certainty than the information collected through a case study. However, given the greater number of people involved, we are not able to collect the same depth of information on each person that would be collected in a case study.

Another potential weakness of surveys is something we touched on earlier in this chapter: people don’t always give accurate responses. They may lie, misremember, or answer questions in a way that they think makes them look good. For example, people may report drinking less alcohol than is actually the case.

Any number of research questions can be answered through the use of surveys. One real-world example is the research conducted by Jenkins, Ruppel, Kizer, Yehl, and Griffin (2012) about the backlash against the US Arab-American community following the terrorist attacks of September 11, 2001. Jenkins and colleagues wanted to determine to what extent these negative attitudes toward Arab-Americans still existed nearly a decade after the attacks occurred. In one study, 140 research participants filled out a survey with 10 questions, including questions asking directly about the participant’s overt prejudicial attitudes toward people of various ethnicities. The survey also asked indirect questions about how likely the participant would be to interact with a person of a given ethnicity in a variety of settings (such as, “How likely do you think it is that you would introduce yourself to a person of Arab-American descent?”). The results of the research suggested that participants were unwilling to report prejudicial attitudes toward any ethnic group. However, there were significant differences between their pattern of responses to questions about social interaction with Arab-Americans compared to other ethnic groups: they indicated less willingness for social interaction with Arab-Americans compared to the other ethnic groups. This suggested that the participants harbored subtle forms of prejudice against Arab-Americans, despite their assertions that this was not the case (Jenkins et al., 2012).


Archival Research

(a) A photograph shows stacks of paper files on shelves. (b) A photograph shows a computer.

In comparing archival research to other research methods, there are several important distinctions. For one, the researcher employing archival research never directly interacts with research participants. Therefore, the investment of time and money to collect data is considerably less with archival research. Additionally, researchers have no control over what information was originally collected. Therefore, research questions have to be tailored so they can be answered within the structure of the existing data sets. There is also no guarantee of consistency between the records from one source to another, which might make comparing and contrasting different data sets problematic.

Longitudinal and Cross-Sectional Research

Sometimes we want to see how people change over time, as in studies of human development and lifespan. When we test the same group of individuals repeatedly over an extended period of time, we are conducting longitudinal research. Longitudinal research is a research design in which data are gathered from the same participants repeatedly over an extended period of time. For example, we may survey a group of individuals about their dietary habits at age 20, retest them a decade later at age 30, and then again at age 40.

Another approach is cross-sectional research. In cross-sectional research, a researcher compares multiple segments of the population at the same time. Using the dietary habits example above, the researcher might directly compare different groups of people by age. Instead of observing a group of people for 20 years to see how their dietary habits changed from decade to decade, the researcher would study a group of 20-year-old individuals and compare them to a group of 30-year-old individuals and a group of 40-year-old individuals. While cross-sectional research requires a shorter-term investment, it is also limited by differences that exist between the different generations (or cohorts) that have nothing to do with age per se, but rather reflect the social and cultural experiences that make different generations of individuals different from one another.

To illustrate this concept, consider the following survey findings. In recent years there has been significant growth in the popular support of same-sex marriage. Many studies on this topic break down survey participants into different age groups. In general, younger people are more supportive of same-sex marriage than are those who are older (Jones, 2013). Does this mean that as we age we become less open to the idea of same-sex marriage, or does this mean that older individuals have different perspectives because of the social climates in which they grew up? Longitudinal research is a powerful approach because the same individuals are involved in the research project over time, which means that the researchers need to be less concerned with differences among cohorts affecting the results of their study.

Often longitudinal studies are employed when researching various diseases in an effort to understand particular risk factors. Such studies often involve tens of thousands of individuals who are followed for several decades. Given the enormous number of people involved in these studies, researchers can feel confident that their findings can be generalized to the larger population. The Cancer Prevention Study-3 (CPS-3) is one of a series of longitudinal studies sponsored by the American Cancer Society aimed at determining predictive risk factors associated with cancer. When participants enter the study, they complete a survey about their lives and family histories, providing information on factors that might cause or prevent the development of cancer. Then every few years the participants receive additional surveys to complete. In the end, hundreds of thousands of participants will be tracked over 20 years to determine which of them develop cancer and which do not.

Clearly, this type of research is important and potentially very informative. For instance, earlier longitudinal studies sponsored by the American Cancer Society provided some of the first scientific demonstrations of the now well-established links between increased rates of cancer and smoking (American Cancer Society, n.d.) (Figure 13).

Figure 13. A photograph shows a pack of cigarettes and cigarettes in an ashtray. The pack of cigarettes reads, “Surgeon general’s warning: smoking causes lung cancer, heart disease, emphysema, and may complicate pregnancy.”

As with any research strategy, longitudinal research is not without limitations. For one, these studies require an incredible time investment by the researcher and research participants. Given that some longitudinal studies take years, if not decades, to complete, the results will not be known for a considerable period of time. In addition to the time demands, these studies also require a substantial financial investment. Many researchers are unable to commit the resources necessary to see a longitudinal project through to the end.

Research participants must also be willing to continue their participation for an extended period of time, and this can be problematic. People move, get married and take new names, get ill, and eventually die. Even without significant life changes, some people may simply choose to discontinue their participation in the project. As a result, the attrition rates, or reduction in the number of research participants due to dropouts, in longitudinal studies are quite high and increase over the course of a project. For this reason, researchers using this approach typically recruit many participants, fully expecting that a substantial number will drop out before the end. As the study progresses, they continually check whether the sample still represents the larger population, and make adjustments as necessary.

Correlational Research

Did you know that as sales in ice cream increase, so does the overall rate of crime? Is it possible that indulging in your favorite flavor of ice cream could send you on a crime spree? Or, after committing a crime, do you think you might decide to treat yourself to a cone? There is no question that a relationship exists between ice cream and crime (e.g., Harper, 2013), but it would be pretty foolish to decide that one thing actually caused the other to occur.

It is much more likely that both ice cream sales and crime rates are related to the temperature outside. When the temperature is warm, there are lots of people out of their houses, interacting with each other, getting annoyed with one another, and sometimes committing crimes. Also, when it is warm outside, we are more likely to seek a cool treat like ice cream. How do we determine if there is indeed a relationship between two things? And when there is a relationship, how can we discern whether it is attributable to coincidence or causation?

Three scatterplots are shown. Scatterplot (a) is labeled “positive correlation” and shows scattered dots forming a rough line from the bottom left to the top right; the x-axis is labeled “weight” and the y-axis is labeled “height.” Scatterplot (b) is labeled “negative correlation” and shows scattered dots forming a rough line from the top left to the bottom right; the x-axis is labeled “tiredness” and the y-axis is labeled “hours of sleep.” Scatterplot (c) is labeled “no correlation” and shows scattered dots having no pattern; the x-axis is labeled “shoe size” and the y-axis is labeled “hours of sleep.”

Correlation Does Not Indicate Causation

Correlational research is useful because it allows us to discover the strength and direction of relationships that exist between two variables. However, correlation is limited because establishing the existence of a relationship tells us little about cause and effect. While variables are sometimes correlated because one does cause the other, it could also be that some other factor, a confounding variable, is actually causing the systematic movement in our variables of interest. In the ice cream/crime rate example mentioned earlier, temperature is a confounding variable that could account for the relationship between the two variables.
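
A small simulation can make the confound concrete. All numbers below are invented: simulated temperature independently drives both simulated ice cream sales and simulated crime, and the two outcomes end up strongly correlated even though neither causes the other. Pearson's r, the usual index of the strength and direction of a linear relationship, is computed from scratch here:

```python
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed by hand."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

random.seed(0)  # seeded only so the example is reproducible

# The confound: daily temperature independently drives both outcomes.
temps = [random.uniform(0, 35) for _ in range(365)]
ice_cream_sales = [2.0 * t + random.gauss(0, 10) for t in temps]
crime_rate = [0.5 * t + random.gauss(0, 3) for t in temps]

# A strong positive correlation appears, despite no causal link between
# ice cream sales and crime.
print(round(pearson_r(ice_cream_sales, crime_rate), 2))
```

The correlation between the two downstream variables is an artifact of their shared dependence on temperature, which is exactly what "a confounding variable is causing the systematic movement" means.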

Even when we cannot point to clear confounding variables, we should not assume that a correlation between two variables implies that one variable causes changes in another. This can be frustrating when a cause-and-effect relationship seems clear and intuitive. Think back to our discussion of the research done by the American Cancer Society and how their research projects were some of the first demonstrations of the link between smoking and cancer. It seems reasonable to assume that smoking causes cancer, but if we were limited to correlational research, we would be overstepping our bounds by making this assumption.

Figure 15. A photograph shows a bowl of cereal.

Unfortunately, people mistakenly make claims of causation as a function of correlations all the time. Such claims are especially common in advertisements and news stories. For example, recent research found that people who eat cereal on a regular basis achieve healthier weights than those who rarely eat cereal (Frantzen, Treviño, Echon, Garcia-Dominic, & DiMarco, 2013; Barton et al., 2005). Guess how the cereal companies report this finding. Does eating cereal really cause an individual to maintain a healthy weight, or are there other possible explanations, such as that someone at a healthy weight is more likely to regularly eat a healthy breakfast than someone who is obese or someone who avoids meals in an attempt to diet (Figure 15)? While correlational research is invaluable in identifying relationships among variables, a major limitation is the inability to establish causality. Psychologists want to make statements about cause and effect, but the only way to do that is to conduct an experiment to answer a research question. The next section describes how scientific experiments incorporate methods that eliminate, or control for, alternative explanations, which allow researchers to explore how changes in one variable cause changes in another variable.

Watch this clip from Freakonomics for an example of how correlation does  not  indicate causation.


Illusory Correlations

The temptation to make erroneous cause-and-effect statements based on correlational research is not the only way we tend to misinterpret data. We also tend to make the mistake of illusory correlations, especially with unsystematic observations. Illusory correlations, or false correlations, occur when people believe that relationships exist between two things when no such relationship exists. One well-known illusory correlation is the supposed effect that the moon’s phases have on human behavior. Many people passionately assert that human behavior is affected by the phase of the moon, and specifically, that people act strangely when the moon is full (Figure 16).

Figure 16. A photograph shows the moon.

There is no denying that the moon exerts a powerful influence on our planet. The ebb and flow of the ocean’s tides are tightly tied to the gravitational forces of the moon. Many people believe, therefore, that it is logical that we are affected by the moon as well. After all, our bodies are largely made up of water. A meta-analysis of nearly 40 studies consistently demonstrated, however, that the relationship between the moon and our behavior does not exist (Rotton & Kelly, 1985). While we may pay more attention to odd behavior during the full phase of the moon, the rates of odd behavior remain constant throughout the lunar cycle.

Why are we so apt to believe in illusory correlations like this? Often we read or hear about them and simply accept the information as valid. Or, we have a hunch about how something works and then look for evidence to support that hunch, ignoring evidence that would tell us our hunch is false; this is known as confirmation bias. Other times, we find illusory correlations based on the information that comes most easily to mind, even if that information is severely limited. And while we may feel confident that we can use these relationships to better understand and predict the world around us, illusory correlations can have significant drawbacks. For example, research suggests that illusory correlations—in which certain behaviors are inaccurately attributed to certain groups—are involved in the formation of prejudicial attitudes that can ultimately lead to discriminatory behavior (Fiedler, 2004).

We all have a tendency to make illusory correlations from time to time. Try to think of an illusory correlation that is held by you, a family member, or a close friend. How do you think this illusory correlation came about, and what can be done in the future to combat it?

Experiments


In order to conduct an experiment, a researcher must have a specific hypothesis to be tested. As you’ve learned, hypotheses can be formulated either through direct observation of the real world or after careful review of previous research. For example, if you think that children should not be allowed to watch violent programming on television because doing so would cause them to behave more violently, then you have basically formulated a hypothesis—namely, that watching violent television programs causes children to behave more violently. How might you have arrived at this particular hypothesis? You may have younger relatives who watch cartoons featuring characters using martial arts to save the world from evildoers, with an impressive array of punching, kicking, and defensive postures. You notice that after watching these programs for a while, your young relatives mimic the fighting behavior of the characters portrayed in the cartoon (Figure 17).

Figure 17. A photograph shows a child pointing a toy gun.

These sorts of personal observations are what often lead us to formulate a specific hypothesis, but we cannot use limited personal observations and anecdotal evidence to rigorously test our hypothesis. Instead, to find out if real-world data supports our hypothesis, we have to conduct an experiment.

Designing an Experiment

The most basic experimental design involves two groups: the experimental group and the control group. The two groups are designed to be the same except for one difference—the experimental manipulation. The experimental group gets the experimental manipulation—that is, the treatment or variable being tested (in this case, violent TV images)—and the control group does not. Since the experimental manipulation is the only difference between the experimental and control groups, we can be confident that any differences between the two are due to the experimental manipulation rather than chance.

In our example of how violent television programming might affect violent behavior in children, we have the experimental group view violent television programming for a specified time and then measure their violent behavior. It is important for the control group to be treated similarly to the experimental group, with the exception that the control group does not receive the experimental manipulation. Therefore, we have the control group watch nonviolent television programming for the same amount of time as the experimental group, and then measure their violent behavior as well.

We also need to precisely define, or operationalize, what is considered violent and nonviolent. An operational definition is a description of how we will measure our variables, and it is important in allowing others to understand exactly how and what a researcher measures in a particular experiment. In operationalizing violent behavior, we might choose to count only physical acts like kicking or punching as instances of this behavior, or we may also choose to include angry verbal exchanges. Whatever we determine, it is important that we operationalize violent behavior in such a way that anyone who hears about our study for the first time knows exactly what we mean by violence. This aids people’s ability to interpret our data as well as their capacity to repeat our experiment should they choose to do so.

Once we have operationalized what is considered violent television programming and what is considered violent behavior from our experiment participants, we need to establish how we will run our experiment. In this case, we might have participants watch a 30-minute television program (either violent or nonviolent, depending on their group membership) before sending them out to a playground for an hour where their behavior is observed and the number and type of violent acts is recorded.
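
Random assignment of participants to the two conditions described above can be sketched in a few lines of Python (the participant labels are invented purely for illustration):

```python
import random

random.seed(42)  # seeded only so the example is reproducible

# Hypothetical pool of 20 participants (labels invented for illustration).
participants = [f"child_{i:02d}" for i in range(20)]

# Shuffling and splitting gives every participant an equal chance of landing
# in either condition, so the two groups start out comparable on average.
random.shuffle(participants)
experimental = participants[:10]  # will watch the violent program
control = participants[10:]       # will watch the nonviolent program

print(len(experimental), len(control))  # prints "10 10"
```

Because assignment depends only on the shuffle, any pre-existing differences among the children (temperament, home environment, and so on) are spread across both groups by chance rather than concentrated in one of them.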

Ideally, the people who observe and record the children’s behavior are unaware of who was assigned to the experimental or control group, in order to control for experimenter bias. Experimenter bias refers to the possibility that a researcher’s expectations might skew the results of the study. Remember, conducting an experiment requires a lot of planning, and the people involved in the research project have a vested interest in supporting their hypotheses. If the observers knew which child was in which group, it might influence how much attention they paid to each child’s behavior as well as how they interpreted that behavior. By being blind to which child is in which group, we protect against those biases. This describes a single-blind study, in which one party (here, the participants) is unaware of group assignments (experimental or control group) while the researcher who developed the experiment knows which participants are in each group.

Figure 18. A photograph shows three glass bottles of pills labeled as placebos.

In a double-blind study, both the researchers and the participants are blind to group assignments. Why would a researcher want to run a study where no one knows who is in which group? Because by doing so, we can control for both experimenter and participant expectations. If you are familiar with the phrase placebo effect, you already have some idea as to why this is an important consideration. The placebo effect occurs when people’s expectations or beliefs influence or determine their experience in a given situation. In other words, simply expecting something to happen can actually make it happen.

The placebo effect is commonly described in terms of testing the effectiveness of a new medication. Imagine that you work in a pharmaceutical company, and you think you have a new drug that is effective in treating depression. To demonstrate that your medication is effective, you run an experiment with two groups: The experimental group receives the medication, and the control group does not. But you don’t want participants to know whether they received the drug or not.

Why is that? Imagine that you are a participant in this study, and you have just taken a pill that you think will improve your mood. Because you expect the pill to have an effect, you might feel better simply because you took the pill and not because of any drug actually contained in the pill—this is the placebo effect.

To make sure that any effects on mood are due to the drug and not due to expectations, the control group receives a placebo (in this case a sugar pill). Now everyone gets a pill, and once again neither the researcher nor the experimental participants know who got the drug and who got the sugar pill. Any differences in mood between the experimental and control groups can now be attributed to the drug itself rather than to experimenter bias or participant expectations (Figure 18).

Independent and Dependent Variables

In a research experiment, we strive to study whether changes in one thing cause changes in another. To achieve this, we must pay attention to two important variables, or things that can be changed, in any experimental study: the independent variable and the dependent variable. An independent variable is manipulated or controlled by the experimenter. In a well-designed experimental study, the independent variable is the only important difference between the experimental and control groups. In our example of how violent television programs affect children’s display of violent behavior, the independent variable is the type of program—violent or nonviolent—viewed by participants in the study (Figure 19). A dependent variable is what the researcher measures to see how much effect the independent variable had. In our example, the dependent variable is the number of violent acts displayed by the experimental participants.

A box labeled “independent variable: type of television programming viewed” contains a photograph of a person shooting an automatic weapon. An arrow labeled “influences change in the…” leads to a second box. The second box is labeled “dependent variable: violent behavior displayed” and has a photograph of a child pointing a toy gun.

We expect that the dependent variable will change as a function of the independent variable. In other words, the dependent variable depends on the independent variable. A good way to think about the relationship between the independent and dependent variables is with this question: What effect does the independent variable have on the dependent variable? Returning to our example, what effect does watching a half hour of violent television programming or nonviolent television programming have on the number of incidents of physical aggression displayed on the playground?

Selecting and Assigning Experimental Participants

Now that our study is designed, we need to obtain a sample of individuals to include in our experiment. Our study involves human participants, so we need to determine whom to include. Participants are the subjects of psychological research, and as the name implies, individuals who are involved in psychological research actively participate in the process. Often, psychological research projects rely on college students to serve as participants. In fact, the vast majority of research in psychology subfields has historically involved students as research participants (Sears, 1986; Arnett, 2008). But are college students truly representative of the general population? College students tend to be younger, more educated, more liberal, and less diverse than the general population. Although using students as test subjects is an accepted practice, relying on such a limited pool of research participants can be problematic because it is difficult to generalize findings to the larger population.

Our hypothetical experiment involves children, and we must first generate a sample of child participants. Samples are used because populations are usually too large to reasonably involve every member in our particular experiment (Figure 20). If possible, we should use a random sample (there are other types of samples, but for the purposes of this section, we will focus on random samples). A random sample is a subset of a larger population in which every member of the population has an equal chance of being selected. Random samples are preferred because if the sample is large enough we can be reasonably sure that the participating individuals are representative of the larger population. This means that the percentages of characteristics in the sample—sex, ethnicity, socioeconomic level, and any other characteristics that might affect the results—are close to those percentages in the larger population.
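The logic of simple random sampling can be sketched in a few lines of Python. The roster size below is an invented stand-in for a real school-district list; the sample size of 200 matches the example in the text:

```python
import random

# Hypothetical roster: every fourth grader in the city, numbered 1..5000.
# (The roster size of 5000 is an illustrative assumption.)
population = list(range(1, 5001))

# random.sample gives every member an equal chance of selection,
# with no member chosen twice.
sample = random.sample(population, k=200)

print(len(sample))       # 200 participants
print(len(set(sample)))  # 200 distinct members -- no repeats
```

In practice, the hard part is not the selection step but obtaining a complete, accurate roster of the population in the first place.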

In our example, let’s say we decide our population of interest is fourth graders. But all fourth graders is a very large population, so we need to be more specific; instead we might say our population of interest is all fourth graders in a particular city. We should include students from various income brackets, family situations, races, ethnicities, religions, and geographic areas of town. With this more manageable population, we can work with the local schools in selecting a random sample of around 200 fourth graders who we want to participate in our experiment.

In summary, because we cannot test all of the fourth graders in a city, we want to find a group of about 200 that reflects the composition of that city. With a representative group, we can generalize our findings to the larger population without fear of our sample being biased in some way.

(a) A photograph shows an aerial view of crowds on a street. (b) A photograph shows a small group of children.

Now that we have a sample, the next step of the experimental process is to split the participants into experimental and control groups through random assignment. With random assignment , all participants have an equal chance of being assigned to either group. There is statistical software that will randomly assign each of the fourth graders in the sample to either the experimental or the control group.

Random assignment is critical for sound experimental design. With sufficiently large samples, random assignment makes it unlikely that there are systematic differences between the groups. So, for instance, it would be very unlikely that we would get one group composed entirely of males, a given ethnic identity, or a given religious ideology. This is important because if the groups were systematically different before the experiment began, we would not know the origin of any differences we find between the groups: Were the differences preexisting, or were they caused by manipulation of the independent variable? Random assignment allows us to assume that any differences observed between experimental and control groups result from the manipulation of the independent variable.
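Random assignment itself is mechanically simple. A minimal Python sketch, with invented participant IDs standing in for our 200 sampled fourth graders, might look like this:

```python
import random

# Hypothetical list of the 200 sampled participants (IDs are illustrative).
participants = [f"child_{i:03d}" for i in range(200)]

# Shuffle, then split in half: each child is equally likely to land in
# either group, which is the essence of random assignment.
random.shuffle(participants)
experimental_group = participants[:100]  # will watch the violent program
control_group = participants[100:]       # will watch the nonviolent program

print(len(experimental_group), len(control_group))  # 100 100
```

Because the assignment depends only on the shuffle, any characteristic of the children (sex, income, temperament) is spread across the two groups by chance rather than by any systematic rule.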

Issues to Consider

While experiments allow scientists to make cause-and-effect claims, they are not without problems. True experiments require the experimenter to manipulate an independent variable, and that can complicate many questions that psychologists might want to address. For instance, imagine that you want to know what effect sex (the independent variable) has on spatial memory (the dependent variable). Although you can certainly look for differences between males and females on a task that taps into spatial memory, you cannot directly control a person’s sex. We categorize this type of research approach as quasi-experimental and recognize that we cannot make cause-and-effect claims in these circumstances.

Experimenters are also limited by ethical constraints. For instance, you would not be able to conduct an experiment designed to determine if experiencing abuse as a child leads to lower levels of self-esteem among adults. To conduct such an experiment, you would need to randomly assign some experimental participants to a group that receives abuse, and that experiment would be unethical.

Introduction to Statistical Thinking

Psychologists use statistics to assist them in analyzing data, and also to give more precise measurements to describe whether something is statistically significant. Analyzing data using statistics enables researchers to find patterns, make claims, and share their results with others. In this section, you’ll learn about some of the tools that psychologists use in statistical analysis.

  • Define reliability and validity
  • Describe the importance of distributional thinking and the role of p-values in statistical inference
  • Describe the role of random sampling and random assignment in drawing cause-and-effect conclusions
  • Describe the basic structure of a psychological research article

Interpreting Experimental Findings

Once data are collected from both the experimental and the control groups, a statistical analysis is conducted to find out if there are meaningful differences between the two groups. A statistical analysis determines how likely it is that any difference found is due to chance (and thus not meaningful). In psychology, group differences are considered meaningful, or significant, if the odds that these differences occurred by chance alone are 5 percent or less. Stated another way, if chance alone were at work, a difference this large would be expected in fewer than 5 out of 100 repetitions of the experiment.

The greatest strength of experiments is the ability to assert that any significant differences in the findings are caused by the independent variable. This occurs because random selection, random assignment, and a design that limits the effects of both experimenter bias and participant expectancy should create groups that are similar in composition and treatment. Therefore, any difference between the groups is attributable to the independent variable, and now we can finally make a causal statement. If we find that watching a violent television program results in more violent behavior than watching a nonviolent program, we can safely say that watching violent television programs causes an increase in the display of violent behavior.

Reporting Research

When psychologists complete a research project, they generally want to share their findings with other scientists. The American Psychological Association (APA) publishes a manual detailing how to write a paper for submission to scientific journals. Unlike an article that might be published in a magazine like Psychology Today, which targets a general audience with an interest in psychology, scientific journals generally publish peer-reviewed journal articles aimed at an audience of professionals and scholars who are actively involved in research themselves.

A peer-reviewed journal article is read by several other scientists (generally anonymously) with expertise in the subject matter. These peer reviewers provide feedback—to both the author and the journal editor—regarding the quality of the draft. Peer reviewers look for a strong rationale for the research being described, a clear description of how the research was conducted, and evidence that the research was conducted in an ethical manner. They also look for flaws in the study’s design, methods, and statistical analyses. They check that the conclusions drawn by the authors seem reasonable given the observations made during the research. Peer reviewers also comment on how valuable the research is in advancing the discipline’s knowledge. This helps prevent unnecessary duplication of research findings in the scientific literature and, to some extent, ensures that each research article provides new information. Ultimately, the journal editor will compile all of the peer reviewer feedback and determine whether the article will be published in its current state (a rare occurrence), published with revisions, or not accepted for publication.

Peer review provides some degree of quality control for psychological research. Poorly conceived or executed studies can be weeded out, and even well-designed research can be improved by the revisions suggested. Peer review also ensures that the research is described clearly enough to allow other scientists to replicate it, meaning they can repeat the experiment using different samples to determine reliability. Sometimes replications involve additional measures that expand on the original finding. In any case, each replication serves to provide more evidence to support the original research findings. Successful replications of published research make scientists more apt to adopt those findings, while repeated failures tend to cast doubt on the legitimacy of the original article and lead scientists to look elsewhere. For example, it would be a major advancement in the medical field if a published study indicated that taking a new drug helped individuals achieve a healthy weight without changing their diet. But if other scientists could not replicate the results, the original study’s claims would be questioned.

Dig Deeper: The Vaccine-Autism Myth and the Retraction of Published Studies

Some scientists have claimed that routine childhood vaccines cause some children to develop autism, and, in fact, several peer-reviewed publications published research making these claims. Since the initial reports, large-scale epidemiological research has suggested that vaccinations are not responsible for causing autism and that it is much safer to have your child vaccinated than not. Furthermore, several of the original studies making this claim have since been retracted.

A published piece of work can be retracted when data are called into question because of falsification, fabrication, or serious research design problems. Once a work is retracted, the scientific community is informed that there are serious problems with the original publication. Retractions can be initiated by the researcher who led the study, by research collaborators, by the institution that employed the researcher, or by the editorial board of the journal in which the article was originally published. In the vaccine-autism case, the retraction was made because of a significant conflict of interest in which the leading researcher had a financial interest in establishing a link between childhood vaccines and autism (Offit, 2008). Unfortunately, the initial studies received so much media attention that many parents around the world became hesitant to have their children vaccinated (Figure 21). For more information about how the vaccine/autism story unfolded, as well as the repercussions of this story, take a look at Paul Offit’s book, Autism’s False Prophets: Bad Science, Risky Medicine, and the Search for a Cure.

A photograph shows a child being given an oral vaccine.

Reliability and Validity

Dig Deeper: Everyday Connection: How Valid Is the SAT?

Standardized tests like the SAT are supposed to measure an individual’s aptitude for a college education, but how reliable and valid are such tests? Research conducted by the College Board suggests that scores on the SAT have high predictive validity for first-year college students’ GPA (Kobrin, Patterson, Shaw, Mattern, & Barbuti, 2008). In this context, predictive validity refers to the test’s ability to effectively predict the GPA of college freshmen. Given that many institutions of higher education require the SAT for admission, this high degree of predictive validity might be comforting.

However, the emphasis placed on SAT scores in college admissions has generated some controversy on a number of fronts. For one, some researchers assert that the SAT is a biased test that places minority students at a disadvantage and unfairly reduces the likelihood of being admitted into a college (Santelices & Wilson, 2010). Additionally, some research has suggested that the predictive validity of the SAT is grossly exaggerated in how well it is able to predict the GPA of first-year college students. In fact, it has been suggested that the SAT’s predictive validity may be overestimated by as much as 150% (Rothstein, 2004). Many institutions of higher education are beginning to consider de-emphasizing the significance of SAT scores in making admission decisions (Rimer, 2008).

In 2014, College Board president David Coleman expressed his awareness of these problems, recognizing that college success is more accurately predicted by high school grades than by SAT scores. To address these concerns, he has called for significant changes to the SAT exam (Lewin, 2014).

Statistical Significance

Coffee cup with heart shaped cream inside.

Does drinking coffee actually increase your life expectancy? A recent study (Freedman, Park, Abnet, Hollenbeck, & Sinha, 2012) found that men who drank at least six cups of coffee a day had a 10% lower chance of dying (women’s chances were 15% lower) than those who drank none. Does this mean you should pick up or increase your own coffee habit? We will explore these results in more depth in the next section about drawing conclusions from statistics. Modern society has become awash in studies such as this; you can read about several such studies in the news every day.

Conducting such a study well, and interpreting its results, requires understanding the basic ideas of statistics, the science of gaining insight from data. Key components of a statistical investigation are:

  • Planning the study: Start by asking a testable research question and deciding how to collect data. For example, how long was the study period of the coffee study? How many people were recruited for the study, how were they recruited, and from where? How old were they? What other variables were recorded about the individuals? Were changes made to the participants’ coffee habits during the course of the study?
  • Examining the data: What are appropriate ways to examine the data? What graphs are relevant, and what do they reveal? What descriptive statistics can be calculated to summarize relevant aspects of the data, and what do they reveal? What patterns do you see in the data? Are there any individual observations that deviate from the overall pattern, and what do they reveal? For example, in the coffee study, did the proportions differ when we compared the smokers to the non-smokers?
  • Inferring from the data: What are valid statistical methods for drawing inferences “beyond” the data you collected? In the coffee study, is the 10%–15% reduction in risk of death something that could have happened just by chance?
  • Drawing conclusions: Based on what you learned from your data, what conclusions can you draw? Who do you think these conclusions apply to? (Were the people in the coffee study older? Healthy? Living in cities?) Can you draw a cause-and-effect conclusion about your treatments? (Are scientists now saying that the coffee drinking is the cause of the decreased risk of death?)

Notice that the numerical analysis (“crunching numbers” on the computer) comprises only a small part of overall statistical investigation. In this section, you will see how we can answer some of these questions and what questions you should be asking about any statistical investigation you read about.

Distributional Thinking

When data are collected to address a particular question, an important first step is to think of meaningful ways to organize and examine the data. Let’s take a look at an example.

Example 1 : Researchers investigated whether cancer pamphlets are written at an appropriate level to be read and understood by cancer patients (Short, Moriarty, & Cooley, 1995). Tests of reading ability were given to 63 patients. In addition, readability level was determined for a sample of 30 pamphlets, based on characteristics such as the lengths of words and sentences in the pamphlet. The results, reported in terms of grade levels, are displayed in Figure 23.

Table showing patients’ reading levels and pamphlets’ reading levels.

This example illustrates two fundamental ideas of statistical investigations:

  • Data vary. More specifically, values of a variable (such as the reading level of a cancer patient or the readability level of a cancer pamphlet) vary.
  • Analyzing the pattern of variation, called the distribution of the variable, often reveals insights.

Addressing the research question of whether the cancer pamphlets are written at appropriate levels for the cancer patients requires comparing the two distributions. A naïve comparison might focus only on the centers of the distributions. Both medians turn out to be ninth grade, but considering only medians ignores the variability and the overall distributions of these data. A more illuminating approach is to compare the entire distributions, for example with a graph, as in Figure 24.

Bar graph showing that the reading level of pamphlets is typically higher than the reading level of the patients.

Figure 24 makes clear that the two distributions are not well aligned at all. The most glaring discrepancy is that many patients (17/63, or 27%, to be precise) have a reading level below that of the most readable pamphlet. These patients will need help to understand the information provided in the cancer pamphlets. Notice that this conclusion follows from considering the distributions as a whole, not simply measures of center or variability, and that the graph contrasts those distributions more immediately than the frequency tables.
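To make the comparison concrete, here is a small Python sketch using made-up grade-level data (the real frequency tables are summarized in Figures 23 and 24). It shows how two distributions can share a median while still being poorly aligned:

```python
from statistics import median

# Illustrative (made-up) grade-level readings standing in for the study's
# data; the actual counts are in the original figures.
patient_levels = [3, 4, 5, 5, 6, 7, 8, 9, 9, 9, 10, 11, 12, 12, 13]
pamphlet_levels = [6, 7, 8, 9, 9, 9, 10, 11, 12, 13, 14]

# Both medians can match even when the distributions do not align.
print(median(patient_levels), median(pamphlet_levels))  # 9 9

# The telling comparison: patients who read below the *most readable* pamphlet.
below_easiest = sum(level < min(pamphlet_levels) for level in patient_levels)
print(f"{below_easiest}/{len(patient_levels)} patients below the easiest pamphlet")
```

Looking only at the two medians (both ninth grade) would hide exactly the patients the pamphlets fail to reach, which is the point the graph in Figure 24 makes visually.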

Finding Significance in Data

Even when we find patterns in data, often there is still uncertainty in various aspects of the data. For example, there may be potential for measurement errors (even your own body temperature can fluctuate by almost 1°F over the course of the day). Or we may only have a “snapshot” of observations from a more long-term process or only a small subset of individuals from the population of interest. In such cases, how can we determine whether the patterns we see in our small set of data are convincing evidence of a systematic phenomenon in the larger process or population? Let’s take a look at another example.

Example 2 : In a study reported in the November 2007 issue of Nature , researchers investigated whether pre-verbal infants take into account an individual’s actions toward others in evaluating that individual as appealing or aversive (Hamlin, Wynn, & Bloom, 2007). In one component of the study, 10-month-old infants were shown a “climber” character (a piece of wood with “googly” eyes glued onto it) that could not make it up a hill in two tries. Then the infants were shown two scenarios for the climber’s next try, one where the climber was pushed to the top of the hill by another character (“helper”), and one where the climber was pushed back down the hill by another character (“hinderer”). The infant was alternately shown these two scenarios several times. Then the infant was presented with two pieces of wood (representing the helper and the hinderer characters) and asked to pick one to play with.

The researchers found that of the 16 infants who made a clear choice, 14 chose to play with the helper toy. One possible explanation for this clear majority result is that the helping behavior of the one toy increases the infants’ likelihood of choosing that toy. But are there other possible explanations? What about the color of the toy? Well, prior to collecting the data, the researchers arranged so that each color and shape (red square and blue circle) would be seen by the same number of infants. Or maybe the infants had right-handed tendencies and so picked whichever toy was closer to their right hand?

Well, prior to collecting the data, the researchers arranged it so half the infants saw the helper toy on the right and half on the left. Or, maybe the shapes of these wooden characters (square, triangle, circle) had an effect? Perhaps, but again, the researchers controlled for this by rotating which shape was the helper toy, the hinderer toy, and the climber. When designing experiments, it is important to control for as many variables as might affect the responses as possible. It is beginning to appear that the researchers accounted for all the other plausible explanations. But there is one more important consideration that cannot be controlled—if we did the study again with these 16 infants, they might not make the same choices. In other words, there is some randomness inherent in their selection process.

Maybe each infant had no genuine preference at all, and it was simply “random luck” that led to 14 infants picking the helper toy. Although this random component cannot be controlled, we can apply a probability model to investigate the pattern of results that would occur in the long run if random chance were the only factor.

If the infants were equally likely to pick between the two toys, then each infant had a 50% chance of picking the helper toy. It’s like each infant tossed a coin, and if it landed heads, the infant picked the helper toy. So if we tossed a coin 16 times, could it land heads 14 times? Sure, it’s possible, but it turns out to be very unlikely. Getting 14 (or more) heads in 16 tosses is about as likely as tossing a coin and getting 9 heads in a row. This probability is referred to as a p-value. The p-value is the probability of obtaining results at least as extreme as those observed, assuming that chance alone is at work. Within psychology, the most common standard for p-values is “p < .05”. What this means is that if chance alone were operating, results this extreme would occur less than 5% of the time, so chance becomes an implausible explanation and the results are taken to reflect a meaningful pattern. We call this statistical significance.

So, in the study above, if we assume that each infant was choosing equally, then the probability that 14 or more out of 16 infants would choose the helper toy is found to be 0.0021. We have only two logical possibilities: either the infants have a genuine preference for the helper toy, or the infants have no preference (50/50) and an outcome that would occur only 2 times in 1,000 iterations happened in this study. Because this p-value of 0.0021 is quite small, we conclude that the study provides very strong evidence that these infants have a genuine preference for the helper toy.
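The 0.0021 figure follows directly from the binomial model described above, and computing it takes only a few lines of Python:

```python
from math import comb

# Probability that 14 or more of 16 infants choose the helper toy
# if each choice were an independent 50/50 coin flip.
n, k = 16, 14
p_value = sum(comb(n, x) for x in range(k, n + 1)) / 2**n
print(round(p_value, 4))  # 0.0021
```

The sum counts the 137 favorable outcomes (14, 15, or 16 heads) out of the 2^16 = 65,536 equally likely sequences of coin flips.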

If we compare the p-value to a cut-off value, like 0.05, we see that the p-value is smaller. Because the p-value is smaller than that cut-off value, we reject the hypothesis that only random chance was at play here. In this case, these researchers would conclude that significantly more than half of the infants in the study chose the helper toy, giving strong evidence of a genuine preference for the toy with the helping behavior.

Drawing Conclusions from Statistics

Generalizability.

Photo of a diverse group of college-aged students.

One limitation to the study mentioned previously about the babies choosing the “helper” toy is that the conclusion only applies to the 16 infants in the study. We don’t know much about how those 16 infants were selected. Suppose we want to select a subset of individuals (a sample ) from a much larger group of individuals (the population ) in such a way that conclusions from the sample can be generalized to the larger population. This is the question faced by pollsters every day.

Example 3: The General Social Survey (GSS) is a survey on societal trends conducted every other year in the United States. Based on a sample of about 2,000 adult Americans, researchers make claims about what percentage of the U.S. population consider themselves to be “liberal,” what percentage consider themselves “happy,” what percentage feel “rushed” in their daily lives, and many other issues. The key to making these claims about the larger population of all American adults lies in how the sample is selected. The goal is to select a sample that is representative of the population, and a common way to achieve this goal is to select a random sample that gives every member of the population an equal chance of being selected for the sample. In its simplest form, random sampling involves numbering every member of the population and then using a computer to randomly select the subset to be surveyed. Most polls don’t operate exactly like this, but they do use probability-based sampling methods to select individuals from nationally representative panels.

In 2004, the GSS reported that 817 of 977 respondents (or 83.6%) indicated that they always or sometimes feel rushed. This is a clear majority, but we again need to consider variation due to random sampling. Fortunately, we can use the same probability model we did in the previous example to investigate the probable size of this error. (Note, we can use the coin-tossing model when the actual population size is much, much larger than the sample size, as then we can still consider the probability to be the same for every individual in the sample.) This probability model predicts that the sample result will be within 3 percentage points of the population value (roughly 1 over the square root of the sample size); this distance is called the margin of error. A statistician would conclude, with 95% confidence, that between 80.6% and 86.6% of all adult Americans in 2004 would have responded that they sometimes or always feel rushed.
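Using the numbers reported above, the rough margin-of-error calculation can be reproduced in Python; the 1/√n shortcut is the approximation the text describes, not an exact formula:

```python
from math import sqrt

# GSS 2004: 817 of 977 respondents said they always or sometimes feel rushed.
n = 977
p_hat = 817 / n       # sample proportion, about 0.836

# Rough 95% margin of error: 1 / sqrt(n), about 0.032 (~3 percentage points).
margin = 1 / sqrt(n)

lower, upper = p_hat - margin, p_hat + margin
print(f"{p_hat:.1%} plus/minus {margin:.1%} -> ({lower:.1%}, {upper:.1%})")
```

Because the exact value of 1/√977 is slightly more than 0.03, the interval computed this way is a shade wider than the rounded 80.6%–86.6% range quoted in the text.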

The key to the margin of error is that when we use a probability sampling method, we can make claims about how often (in the long run, with repeated random sampling) the sample result would fall within a certain distance from the unknown population value by chance (meaning by random sampling variation) alone. Conversely, non-random samples are often susceptible to bias, meaning the sampling method systematically over-represents some segments of the population and under-represents others. We also still need to consider other sources of bias, such as individuals not responding honestly. These sources of error are not measured by the margin of error.

Cause and Effect

In many research studies, the primary question of interest concerns differences between groups. Then the question becomes how were the groups formed (e.g., selecting people who already drink coffee vs. those who don’t). In some studies, the researchers actively form the groups themselves. But then we have a similar question—could any differences we observe in the groups be an artifact of that group-formation process? Or maybe the difference we observe in the groups is so large that we can discount a “fluke” in the group-formation process as a reasonable explanation for what we find?

Example 4 : A psychology study investigated whether people tend to display more creativity when they are thinking about intrinsic (internal) or extrinsic (external) motivations (Ramsey & Schafer, 2002, based on a study by Amabile, 1985). The subjects were 47 people with extensive experience with creative writing. Subjects began by answering survey questions about either intrinsic motivations for writing (such as the pleasure of self-expression) or extrinsic motivations (such as public recognition). Then all subjects were instructed to write a haiku, and those poems were evaluated for creativity by a panel of judges. The researchers conjectured beforehand that subjects who were thinking about intrinsic motivations would display more creativity than subjects who were thinking about extrinsic motivations. The creativity scores from the 47 subjects in this study are displayed in Figure 26, where higher scores indicate more creativity.

Image showing a dot for creativity scores, which vary between 5 and 27, and the types of motivation each person was given as a motivator, either extrinsic or intrinsic.

In this example, the key question is whether the type of motivation affects creativity scores. In particular, do subjects who were asked about intrinsic motivations tend to have higher creativity scores than subjects who were asked about extrinsic motivations?

Figure 26 reveals that both motivation groups saw considerable variability in creativity scores, and the scores overlap considerably between the groups. In other words, it’s certainly not always the case that those with intrinsic motivations have higher creativity scores than those with extrinsic motivations, but there may still be a statistical tendency in this direction. (Psychologist Keith Stanovich (2013) refers to people’s difficulties with thinking about such probabilistic tendencies as “the Achilles heel of human cognition.”)

The mean creativity score is 19.88 for the intrinsic group, compared to 15.74 for the extrinsic group, which supports the researchers’ conjecture. Yet comparing only the means of the two groups fails to consider the variability of creativity scores within the groups. We can measure that variability with, for instance, the standard deviation: 5.25 for the extrinsic group and 4.40 for the intrinsic group. The standard deviations tell us that most of the creativity scores are within about 5 points of the mean score in each group. We also see that the mean score for the intrinsic group lies within one standard deviation of the mean score for the extrinsic group. So, although there is a tendency for the creativity scores to be higher in the intrinsic group on average, the difference is not extremely large.
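These group summaries are simple to compute. The scores below are made-up placeholders (the raw data from the study are not reproduced in this text), so the point is only the form of the comparison, mean and standard deviation per group, not the exact values reported above.

```python
import statistics

# Hypothetical placeholder scores -- NOT the actual study data, which are
# not reproduced in the text. They merely illustrate the summaries used
# to compare the groups: the mean and the standard deviation.
intrinsic = [13, 15, 16, 18, 19, 20, 21, 22, 23, 24, 26]
extrinsic = [8, 10, 12, 13, 14, 15, 16, 17, 19, 21, 24]

for name, scores in [("intrinsic", intrinsic), ("extrinsic", extrinsic)]:
    print(f"{name}: mean = {statistics.mean(scores):.2f}, "
          f"sd = {statistics.stdev(scores):.2f}")
```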

We again want to consider possible explanations for this difference. The study involved only individuals with extensive creative writing experience. Although this limits the population to which we can generalize, it does not explain why the mean creativity score was a bit larger for the intrinsic group than for the extrinsic group. Maybe women tend to receive higher creativity scores? Here is where we need to focus on how the individuals were assigned to the motivation groups. If only women were in the intrinsic motivation group and only men in the extrinsic group, then this would present a problem because we wouldn’t know whether the intrinsic group did better because of the different type of motivation or because they were women.

However, the researchers guarded against such a problem by randomly assigning the individuals to the motivation groups. Like flipping a coin, each individual was just as likely to be assigned to either type of motivation. Why is this helpful? Because this random assignment tends to balance out, between the two groups, all the variables related to creativity we can think of, and even those we don’t think of in advance. So we should have a similar male/female split between the two groups; we should have a similar age distribution; we should have a similar distribution of educational background; and so on. Random assignment should produce groups that are as similar as possible except for the type of motivation, which presumably eliminates all those other variables as possible explanations for the observed tendency toward higher scores in the intrinsic group.

But does this always work? No. By “luck of the draw,” the groups may be a little different prior to answering the motivation survey. The question, then, is whether an unlucky random assignment could be responsible for the observed difference in creativity scores between the groups. In other words, suppose each individual’s poem would have received the same creativity score no matter which group they were assigned to; that is, suppose the type of motivation in no way impacted their score. Then how often would the random-assignment process alone lead to a difference in mean creativity scores as large as (or larger than) 19.88 – 15.74 = 4.14 points?

We again want to apply a probability model to approximate a p-value, but this time the model will be a bit different. Think of writing each person’s creativity score on an index card, shuffling the index cards, dealing out 23 to the extrinsic motivation group and 24 to the intrinsic motivation group, and then finding the difference in the group means. We (better yet, the computer) can repeat this process over and over to see how often, when the scores themselves don’t change, random assignment leads to a difference in means at least as large as 4.14. Figure 27 shows the results from 1,000 such hypothetical random assignments for these scores.
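This shuffle-and-deal process can be sketched in a few lines of code. Because the actual 47 creativity scores are not reproduced in the text, the scores below are randomly generated placeholders, so the resulting p-value will not match the one reported for the real data; the sketch only illustrates the mechanics of the random-assignment simulation.

```python
import random

# Sketch of the "index card" simulation: shuffle all 47 scores, deal 23 to
# the extrinsic group and 24 to the intrinsic group, and record the
# difference in group means. The scores are hypothetical placeholders.
random.seed(42)
scores = [random.randint(5, 27) for _ in range(47)]   # 47 stand-in scores
observed_diff = 4.14   # observed difference in means reported in the text

REPS = 1000
count = 0
for _ in range(REPS):
    random.shuffle(scores)                            # shuffle the "cards"
    extrinsic, intrinsic = scores[:23], scores[23:]   # deal 23 and 24
    diff = sum(intrinsic) / 24 - sum(extrinsic) / 23
    if diff >= observed_diff:
        count += 1

p_value = count / REPS   # approximate p-value for the placeholder scores
print(f"approximate p-value: {p_value:.3f}")
```

The proportion of shuffles producing a difference at least as large as the observed one is exactly the approximate p-value discussed next.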

Figure 27: Distribution of the differences in group means across 1,000 hypothetical random assignments, which follows a roughly bell-shaped curve.

Only 2 of the 1,000 simulated random assignments produced a difference in group means of 4.14 or larger. In other words, the approximate p-value is 2/1000 = 0.002. This small p-value indicates that it would be very surprising for the random-assignment process alone to produce such a large difference in group means. Therefore, as with Example 2, we have strong evidence that focusing on intrinsic motivations tends to increase creativity scores, as compared to thinking about extrinsic motivations.

Notice that the previous statement implies a cause-and-effect relationship between motivation and creativity score; is such a strong conclusion justified? Yes, because of the random assignment used in the study. That should have balanced out any other variables between the two groups, so now that the small p-value convinces us that the higher mean in the intrinsic group wasn’t just a coincidence, the only reasonable explanation left is the difference in the type of motivation. Can we generalize this conclusion to everyone? Not necessarily. We could cautiously generalize this conclusion to individuals with extensive experience in creative writing similar to the individuals in this study, but we would still want to know more about how these individuals were selected to participate.


Statistical thinking involves the careful design of a study to collect meaningful data to answer a focused research question, detailed analysis of patterns in the data, and drawing conclusions that go beyond the observed data. Random sampling is paramount to generalizing results from our sample to a larger population, and random assignment is key to drawing cause-and-effect conclusions. With both kinds of randomness, probability models help us assess how much random variation we can expect in our results, in order to determine whether our results could happen by chance alone and to estimate a margin of error.

So where does this leave us with regard to the coffee study mentioned previously? (Freedman, Park, Abnet, Hollenbeck, and Sinha (2012) found that men who drank at least six cups of coffee a day had a 10% lower chance of dying, and women a 15% lower chance, than those who drank none.) We can answer many of the questions:

  • This was a 14-year study conducted by researchers at the National Cancer Institute.
  • The results were published in the June issue of the New England Journal of Medicine, a respected, peer-reviewed journal.
  • The study reviewed coffee habits of more than 402,000 people ages 50 to 71 from six states and two metropolitan areas. Those with cancer, heart disease, and stroke were excluded at the start of the study. Coffee consumption was assessed once at the start of the study.
  • About 52,000 people died during the course of the study.
  • People who drank between two and five cups of coffee daily showed a lower risk as well, but the amount of reduction increased for those drinking six or more cups.
  • The sample sizes were fairly large and so the p-values are quite small, even though percent reduction in risk was not extremely large (dropping from a 12% chance to about 10%–11%).
  • Whether coffee was caffeinated or decaffeinated did not appear to affect the results.
  • This was an observational study, so no cause-and-effect conclusions can be drawn between coffee drinking and increased longevity, contrary to the impression conveyed by many news headlines about this study. In particular, it’s possible that those with chronic diseases don’t tend to drink coffee.

This study needs to be reviewed in the larger context of similar studies and the consistency of results across studies, with the constant caution that this was not a randomized experiment. Although a statistical analysis can still “adjust” for other potential confounding variables, we are not yet convinced that researchers have identified them all or completely isolated why this decrease in death risk is evident. Researchers can now take the findings of this study and develop more focused studies that address new questions.

Explore these outside resources to learn more about applied statistics:

  • Video about p-values:  P-Value Extravaganza
  • Interactive web applets for teaching and learning statistics
  • Inter-university Consortium for Political and Social Research  where you can find and analyze data.
  • The Consortium for the Advancement of Undergraduate Statistics
  • Find a recent research article in your field and answer the following: What was the primary research question? How were individuals selected to participate in the study? Were summary results provided? How strong is the evidence presented in favor or against the research question? Was random assignment used? Summarize the main conclusions from the study, addressing the issues of statistical significance, statistical confidence, generalizability, and cause and effect. Do you agree with the conclusions drawn from this study, based on the study design and the results presented?
  • Is it reasonable to use a random sample of 1,000 individuals to draw conclusions about all U.S. adults? Explain why or why not.

How to Read Research

In this course and throughout your academic career, you’ll be reading journal articles (articles published by experts in a peer-reviewed journal) and reports that explain psychological research. It’s important to understand the format of these articles so that you can read them strategically and understand the information presented. Scientific articles vary in content and structure depending on the type of journal to which they will be submitted. Psychological articles and many papers in the social sciences follow the writing guidelines and format dictated by the American Psychological Association (APA). In general, the structure follows: abstract, introduction, methods, results, discussion, and references.

  • Abstract: the abstract is a concise summary of the article. It summarizes the most important features of the manuscript, providing the reader with a global first impression of the article. It is generally just one paragraph that explains the experiment and gives a short synopsis of the results.
  • Introduction: this section provides background information about the origin and purpose of the experiment or study. It reviews previous research and presents existing theories on the topic.
  • Method: this section covers the methodologies used to investigate the research question, including the identification of participants and materials as well as a description of the actual procedure. It should be sufficiently detailed to allow for replication.
  • Results: the results section presents the key findings of the research, including reference to indicators of statistical significance.
  • Discussion: this section provides an interpretation of the findings, states their significance for current research, and derives implications for theory and practice. Alternative interpretations of the findings are also provided, particularly when it is not possible to establish the directionality of the effects. In the discussion, authors also acknowledge the strengths and limitations/weaknesses of the study and offer concrete directions for future research.

Watch this 3-minute video for an explanation of how to read scholarly articles. Look closely at the example article shared just before the two-minute mark.

https://digitalcommons.coastal.edu/kimbel-library-instructional-videos/9/

Practice identifying these key components in the following experiment: Food-Induced Emotional Resonance Improves Emotion Recognition.

In this chapter, you learned to

  • define and apply the scientific method to psychology
  • describe the strengths and weaknesses of descriptive, experimental, and correlational research
  • define the basic elements of a statistical investigation

Putting It Together: Psychological Research

Psychologists use the scientific method to examine human behavior and mental processes. Some of the methods you learned about include descriptive, experimental, and correlational research designs.

Watch the CrashCourse video to review the material you learned, then read through the following examples and see if you can come up with your own design for each type of study.

You can view the transcript for “Psychological Research: Crash Course Psychology #2” here (opens in new window).

Case Study: a detailed analysis of a particular person, group, business, event, etc. This approach is commonly used to learn more about rare examples with the goal of describing that particular thing.

  • Ted Bundy was one of America’s most notorious serial killers; he murdered at least 30 women and was executed in 1989. Dr. Al Carlisle evaluated Bundy when he was first arrested and conducted a psychological analysis of how Bundy’s sexual fantasies developed and merged into reality (Ramsland, 2012). Carlisle believes that there was a gradual evolution of three processes that guided his actions: fantasy, dissociation, and compartmentalization (Ramsland, 2012). Read Imagining Ted Bundy (http://goo.gl/rGqcUv) for more information on this case study.

Naturalistic Observation: a researcher unobtrusively collects information without the participant’s awareness.

  • Drain and Engelhardt (2013) observed the evoked and spontaneous communicative acts of six nonverbal children with autism. Each of the children attended a school for children with autism and was in a different class. They were observed for 30 minutes of each school day. By observing these children without their knowledge, the researchers were able to see true communicative acts without any external influences.

Survey: participants are asked to provide information or responses to questions on a survey or structured assessment.

  • Educational psychologists can ask students to report their grade point average and what, if anything, they eat for breakfast on an average day. A healthy breakfast has been associated with better academic performance (Digangi, 1999).
  • Anderson (1987) examined the relationship between uncomfortably hot temperatures and aggressive behavior in two studies of violent and nonviolent crime. Based on previous research by Anderson and Anderson (1984), it was predicted that violent crimes would be more prevalent during the hotter times of year and in years with hotter weather in general. The study confirmed this prediction.

Longitudinal Study: researchers recruit a sample of participants and track them for an extended period of time.

  • In a study of a representative sample of 856 children, Eron and his colleagues (1972) found that a boy’s exposure to media violence at age eight was significantly related to his aggressive behavior ten years later, after he graduated from high school.

Cross-Sectional Study: researchers gather participants from different groups (commonly different ages) and look for differences between the groups.

  • In 1996, Russell surveyed people of varying age groups and found that people in their 20s tend to report being lonelier than people in their 70s.

Correlational Design: two different variables are measured to determine whether there is a relationship between them.

  • Thornhill et al. (2003) had people rate how physically attractive they found other people to be. They then had them separately smell t-shirts those people had worn (without knowing which clothes belonged to whom) and rate how good or bad their body odor was. They found that the more attractive someone was rated, the more pleasant their body odor was rated to be.
  • Clinical psychologists can test a new pharmaceutical treatment for depression by giving some patients the new pill and others an already-tested one to see which is the more effective treatment.


CC licensed content, Original

  • Psychological Research Methods. Provided by : Karenna Malavanti. License : CC BY-SA: Attribution ShareAlike

CC licensed content, Shared previously

  • Psychological Research. Provided by : OpenStax College. License : CC BY: Attribution . License Terms : Download for free at https://openstax.org/books/psychology-2e/pages/1-introduction. Located at : https://openstax.org/books/psychology-2e/pages/2-introduction .
  • Why It Matters: Psychological Research. Provided by : Lumen Learning. License : CC BY: Attribution   Located at: https://pressbooks.online.ucf.edu/lumenpsychology/chapter/introduction-15/
  • Introduction to The Scientific Method. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:   https://pressbooks.online.ucf.edu/lumenpsychology/chapter/outcome-the-scientific-method/
  • Research picture. Authored by : Mediterranean Center of Medical Sciences. Provided by : Flickr. License : CC BY: Attribution   Located at : https://www.flickr.com/photos/mcmscience/17664002728 .
  • The Scientific Process. Provided by : Lumen Learning. License : CC BY-SA: Attribution ShareAlike   Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/reading-the-scientific-process/
  • Ethics in Research. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/ethics/
  • Ethics. Authored by : OpenStax College. Located at : https://openstax.org/books/psychology-2e/pages/2-4-ethics . License : CC BY: Attribution . License Terms : Download for free at https://openstax.org/books/psychology-2e/pages/1-introduction .
  • Introduction to Approaches to Research. Provided by : Lumen Learning. License : CC BY-NC-SA: Attribution NonCommercial ShareAlike   Located at:   https://pressbooks.online.ucf.edu/lumenpsychology/chapter/outcome-approaches-to-research/
  • Lec 2 | MIT 9.00SC Introduction to Psychology, Spring 2011. Authored by : John Gabrieli. Provided by : MIT OpenCourseWare. License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike Located at : https://www.youtube.com/watch?v=syXplPKQb_o .
  • Paragraph on correlation. Authored by : Christie Napa Scollon. Provided by : Singapore Management University. License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike Located at : http://nobaproject.com/modules/research-designs?r=MTc0ODYsMjMzNjQ%3D . Project : The Noba Project.
  • Descriptive Research. Provided by : Lumen Learning. License : CC BY-SA: Attribution ShareAlike   Located at: https://pressbooks.online.ucf.edu/lumenpsychology/chapter/reading-clinical-or-case-studies/
  • Approaches to Research. Authored by : OpenStax College.  License : CC BY: Attribution . License Terms : Download for free at https://openstax.org/books/psychology-2e/pages/1-introduction. Located at : https://openstax.org/books/psychology-2e/pages/2-2-approaches-to-research
  • Analyzing Findings. Authored by : OpenStax College. Located at : https://openstax.org/books/psychology-2e/pages/2-3-analyzing-findings . License : CC BY: Attribution . License Terms : Download for free at https://openstax.org/books/psychology-2e/pages/1-introduction.
  • Experiments. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/reading-conducting-experiments/
  • Research Review. Authored by : Jessica Traylor for Lumen Learning. License : CC BY: Attribution Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/reading-conducting-experiments/
  • Introduction to Statistics. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/outcome-statistical-thinking/
  • histogram. Authored by : Fisher’s Iris flower data set. Provided by : Wikipedia. License : CC BY-SA: Attribution-ShareAlike. Located at : https://en.wikipedia.org/wiki/Wikipedia:Meetup/DC/Statistics_Edit-a-thon#/media/File:Fisher_iris_versicolor_sepalwidth.svg .
  • Statistical Thinking. Authored by : Beth Chance and Allan Rossman. Provided by : California Polytechnic State University, San Luis Obispo. License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike. License Terms : http://nobaproject.com/license-agreement. Located at : http://nobaproject.com/modules/statistical-thinking . Project : The Noba Project.
  • Drawing Conclusions from Statistics. Authored by: Pat Carroll and Lumen Learning. Provided by : Lumen Learning. License : CC BY: Attribution   Located at: https://pressbooks.online.ucf.edu/lumenpsychology/chapter/reading-drawing-conclusions-from-statistics/
  • The Replication Crisis. Authored by : Colin Thomas William. Provided by : Ivy Tech Community College. License: CC BY: Attribution
  • How to Read Research. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/how-to-read-research/
  • What is a Scholarly Article? Kimbel Library First Year Experience Instructional Videos. 9. Authored by:  Joshua Vossler, John Watts, and Tim Hodge.  Provided by : Coastal Carolina University  License :  CC BY NC ND:  Attribution-NonCommercial-NoDerivatives Located at :  https://digitalcommons.coastal.edu/kimbel-library-instructional-videos/9/
  • Putting It Together: Psychological Research. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/putting-it-together-psychological-research/
  • Research. Provided by : Lumen Learning. License : CC BY: Attribution

All rights reserved content

  • Understanding Driver Distraction. Provided by : American Psychological Association. License : Other. License Terms: Standard YouTube License Located at : https://www.youtube.com/watch?v=XToWVxS_9lA&list=PLxf85IzktYWJ9MrXwt5GGX3W-16XgrwPW&index=9 .
  • Correlation vs. Causality: Freakonomics Movie. License : Other. License Terms : Standard YouTube License Located at : https://www.youtube.com/watch?v=lbODqslc4Tg.
  • Psychological Research – Crash Course Psychology #2. Authored by : Hank Green. Provided by : Crash Course. License : Other. License Terms : Standard YouTube License Located at : https://www.youtube.com/watch?v=hFV71QPvX2I .

Public domain content

  • Researchers review documents. Authored by : National Cancer Institute. Provided by : Wikimedia. Located at : https://commons.wikimedia.org/wiki/File:Researchers_review_documents.jpg . License : Public Domain: No Known Copyright

grounded in objective, tangible evidence that can be observed time and time again, regardless of who is observing

well-developed set of ideas that propose an explanation for observed phenomena

(plural: hypotheses) tentative and testable statement about the relationship between two or more variables

an experiment must be replicable by another researcher

implies that a theory should enable us to make predictions about future events

able to be disproven by experimental results

implies that all data must be considered when evaluating a hypothesis

committee of administrators, scientists, and community members that reviews proposals for research involving human participants

process of informing a research participant about what to expect during an experiment, any risks involved, and the implications of the research, and then obtaining the person’s consent to participate

purposely misleading experiment participants in order to maintain the integrity of the experiment

when an experiment involved deception, participants are told complete and truthful information about the experiment at its conclusion

committee of administrators, scientists, veterinarians, and community members that reviews proposals for research involving non-human animals

research studies that do not test specific relationships between variables

research investigating the relationship between two or more variables

research method that uses hypothesis testing to make inferences about how one variable impacts and causes another

observation of behavior in its natural setting

inferring that the results for a sample apply to the larger population

when observations may be skewed to align with observer expectations

measure of agreement among observers on how they record and classify a particular event

observational research study focusing on one or a few people

list of questions to be answered by research participants—given as paper-and-pencil questionnaires, administered electronically, or conducted verbally—allowing researchers to collect data from a large number of people

subset of individuals selected from the larger population

overall group of individuals that the researchers are interested in

method of research using past records or data sets to answer various research questions, or to search for interesting patterns or relationships

studies in which the same group of individuals is surveyed or measured repeatedly over an extended period of time

compares multiple segments of a population at a single time

reduction in number of research participants as some drop out of the study over time

relationship between two or more variables; when two variables are correlated, one variable changes as the other does

number from -1 to +1, indicating the strength and direction of the relationship between variables, and usually represented by r

two variables change in the same direction, both becoming either larger or smaller

two variables change in different directions, with one becoming larger as the other becomes smaller; a negative correlation is not the same thing as no correlation

cause-and-effect relationship: changes in one variable cause the changes in the other variable; can be determined only through an experimental research design

confounding variable: unanticipated outside factor that affects both variables of interest, often giving the false impression that changes in one variable cause changes in the other variable, when, in actuality, the outside factor causes changes in both variables

illusory correlation: seeing relationships between two things when in reality no such relationship exists

confirmation bias: tendency to ignore evidence that disproves ideas or beliefs

experimental group: group designed to answer the research question; experimental manipulation is the only difference between the experimental and control groups, so any differences between the two are due to experimental manipulation rather than chance

control group: serves as a basis for comparison and controls for chance factors that might influence the results of the study—by holding such factors constant across groups so that the experimental manipulation is the only difference between groups

operational definition: description of what actions and operations will be used to measure the dependent variables and manipulate the independent variables

experimenter bias: researcher expectations skew the results of the study

single-blind study: experiment in which the researcher knows which participants are in the experimental group and which are in the control group

double-blind study: experiment in which both the researchers and the participants are blind to group assignments

placebo effect: people's expectations or beliefs influencing or determining their experience in a given situation

independent variable: variable that is influenced or controlled by the experimenter; in a sound experimental study, the independent variable is the only important difference between the experimental and control group

dependent variable: variable that the researcher measures to see how much effect the independent variable had

participants: subjects of psychological research

random sample: subset of a larger population in which every member of the population has an equal chance of being selected

random assignment: method of experimental group assignment in which all participants have an equal chance of being assigned to either group
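The random assignment described above can be sketched in a few lines of code; this is a minimal illustration, and the participant IDs ("P1" through "P6") are hypothetical.

```python
import random

def randomly_assign(participants, seed=None):
    """Shuffle the pool so every participant has an equal chance of
    landing in either group, then split the pool in half."""
    pool = list(participants)
    random.Random(seed).shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]  # (experimental group, control group)

experimental, control = randomly_assign(["P1", "P2", "P3", "P4", "P5", "P6"])
```

Because assignment depends only on the shuffle, any pre-existing differences among participants are spread across both groups by chance rather than by the researcher's choices.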

reliability: consistency and reproducibility of a given result

validity: accuracy of a given result in measuring what it is designed to measure

statistical analysis: determines how likely any difference between experimental groups is due to chance

p-value: statistical probability that represents the likelihood that experimental results happened by chance

Psychological Science is the scientific study of mind, brain, and behavior. In this class we will explore what it means to be human. It has never been more important for us to understand what makes people tick, how to evaluate information critically, and why history matters. Psychology can also help you in your future career; indeed, there are very few jobs out there with no human interaction!

Because psychology is a science, we analyze human behavior through the scientific method. There are several ways to investigate human phenomena, such as observation, experiments, and more. We will discuss the basics, pros, and cons of each! We will also dig deeper into the important ethical guidelines that psychologists must follow in order to do research. Lastly, we will briefly introduce ourselves to statistics, the language of scientific research. While reading the content in these chapters, try to find examples of material that fit with the themes of the course.

To get us started:

  • The study of the mind moved away from introspection toward reaction-time studies as the field embraced empiricism
  • Psychologists work in careers outside of the typical "clinician" role. We advise in human factors, education, policy, and more!
  • While completing an observation study, psychologists will work to aggregate common themes to explain the behavior of the group (sample) as a whole. In doing so, we still allow for normal variation from the group!
  • The IRB and IACUC are important in ensuring ethics are maintained for both human and animal subjects

Psychological Science: Understanding Human Behavior Copyright © by Karenna Malavanti is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

The 25 Most Influential Psychological Experiments in History

While each year thousands and thousands of studies are completed in the many specialty areas of psychology, there are a handful that, over the years, have had a lasting impact in the psychological community as a whole. Some of these were dutifully conducted, keeping within the confines of ethical and practical guidelines. Others pushed the boundaries of human behavior during their psychological experiments and created controversies that still linger to this day. And still others were not designed to be true psychological experiments, but ended up as beacons to the psychological community in proving or disproving theories.

This is a list of the 25 most influential psychological experiments still being taught to psychology students of today.

1. A Class Divided

Study Conducted by: Jane Elliott

Study Conducted in 1968 in an Iowa classroom


Experiment Details: Jane Elliott’s famous experiment was inspired by the assassination of Dr. Martin Luther King Jr. and the inspirational life that he led. The third grade teacher developed an exercise, or better yet, a psychological experiment, to help her Caucasian students understand the effects of racism and prejudice.

Elliott divided her class into two separate groups: blue-eyed students and brown-eyed students. On the first day, she labeled the blue-eyed group as the superior group and from that point forward they had extra privileges, leaving the brown-eyed children to represent the minority group. She discouraged the groups from interacting and singled out individual students to stress the negative characteristics of the children in the minority group. What this exercise showed was that the children’s behavior changed almost instantaneously. The group of blue-eyed students performed better academically and even began bullying their brown-eyed classmates. The brown-eyed group experienced lower self-confidence and worse academic performance. The next day, she reversed the roles of the two groups and the blue-eyed students became the minority group.

At the end of the experiment, the children were so relieved that they were reported to have embraced one another and agreed that people should not be judged based on outward appearances. This exercise has since been repeated many times with similar outcomes.


2. Asch Conformity Study

Study Conducted by: Dr. Solomon Asch

Study Conducted in 1951 at Swarthmore College


Experiment Details: Dr. Solomon Asch conducted a groundbreaking study that was designed to evaluate a person’s likelihood to conform to a standard when there is pressure to do so.

A group of participants were shown pictures with lines of various lengths and were then asked a simple question: Which line is longest? The tricky part of this study was that in each group only one person was a true participant. The others were actors with a script, and most of the actors were instructed to give the wrong answer. Surprisingly, the true participant frequently agreed with the majority, even when they knew they were giving the wrong answer.

The results of this study are important when we study social interactions among individuals in groups. This study is a famous example of the temptation many of us experience to conform to a standard during group situations and it showed that people often care more about being the same as others than they do about being right. It is still recognized as one of the most influential psychological experiments for understanding human behavior.

3. Bobo Doll Experiment

Study Conducted by: Dr. Albert Bandura

Study Conducted between 1961 and 1963 at Stanford University


Experiment Details: In his groundbreaking study, Bandura separated participants into three groups:

  • one was exposed to a video of an adult showing aggressive behavior towards a Bobo doll
  • another was exposed to video of a passive adult playing with the Bobo doll
  • the third formed a control group

Children watched their assigned video and then were sent to a room with the same doll they had seen in the video (with the exception of those in the control group). What the researcher found was that children exposed to the aggressive model were more likely to exhibit aggressive behavior towards the doll themselves, while the other groups showed little imitative aggressive behavior. Among children exposed to the aggressive model, boys averaged 38.2 imitative physical aggressions, compared with 12.7 for girls.

The study also showed that boys exhibited more aggression when exposed to aggressive male models than boys exposed to aggressive female models. When exposed to aggressive male models, the number of aggressive instances exhibited by boys averaged 104. This is compared to 48.4 aggressive instances exhibited by boys who were exposed to aggressive female models.

The results for the girls showed similar findings, though less drastic. When exposed to aggressive female models, the number of aggressive instances exhibited by girls averaged 57.7. This is compared to 36.3 aggressive instances exhibited by girls who were exposed to aggressive male models. The results concerning gender differences strongly supported Bandura’s secondary prediction that children will be more strongly influenced by same-sex models. The Bobo Doll Experiment demonstrated a groundbreaking way to study human behavior and its influences.

4. Car Crash Experiment

Study Conducted by: Elizabeth Loftus and John Palmer

Study Conducted in 1974 at the University of California, Irvine


Experiment Details: The participants watched film clips of a car accident and were asked to describe what had happened as if they were eyewitnesses to the scene. The participants were put into two groups, and each group was questioned using different wording, such as “how fast was the car driving at the time of impact?” versus “how fast was the car going when it smashed into the other car?” The experimenters found that the use of different verbs affected the participants’ memories of the accident, showing that memory can be easily distorted.

This research suggests that memory can be easily manipulated by questioning technique. This means that information gathered after the event can merge with original memory causing incorrect recall or reconstructive memory. The addition of false details to a memory of an event is now referred to as confabulation. This concept has very important implications for the questions used in police interviews of eyewitnesses.

5. Cognitive Dissonance Experiment

Study Conducted by: Leon Festinger and James Carlsmith

Study Conducted in 1957 at Stanford University

Experiment Details: The concept of cognitive dissonance refers to a situation involving conflicting attitudes, beliefs, or behaviors. This conflict produces an inherent feeling of discomfort leading to a change in one of the attitudes, beliefs, or behaviors to minimize or eliminate the discomfort and restore balance.

Cognitive dissonance was first investigated by Leon Festinger after an observational study of a cult that believed that the earth was going to be destroyed by a flood. Out of this study was born an intriguing experiment conducted by Festinger and Carlsmith in which participants were asked to perform a series of dull tasks (such as turning pegs in a peg board for an hour). Participants’ initial attitudes toward this task were highly negative.

They were then paid either $1 or $20 to tell a participant waiting in the lobby that the tasks were really interesting. Almost all of the participants agreed to walk into the waiting room and persuade the next participant that the boring experiment would be fun. When the participants were later asked to evaluate the experiment, the participants who were paid only $1 rated the tedious task as more fun and enjoyable than the participants who were paid $20 to lie.

Being paid only $1 was not sufficient incentive for lying, and so those who were paid $1 experienced dissonance. They could only overcome that cognitive dissonance by coming to believe that the tasks really were interesting and enjoyable. Being paid $20, by contrast, provided an external justification for turning the pegs and lying about it, and therefore produced no dissonance.

6. Fantz’s Looking Chamber

Study Conducted by: Robert L. Fantz

Study Conducted in 1961 at the University of Illinois

Experiment Details: The study conducted by Robert L. Fantz is among the simplest, yet most important, in the field of infant development and vision. In 1961, when this experiment was conducted, there were very few ways to study what was going on in the mind of an infant. Fantz realized that the best way was to simply watch the actions and reactions of infants. He reasoned that if there is something of interest near humans, they generally look at it.

To test this concept, Fantz set up a display board with two pictures attached. On one was a bulls-eye. On the other was the sketch of a human face. This board was hung in a chamber where a baby could lie safely underneath and see both images. Then, from behind the board, invisible to the baby, he peeked through a hole to watch what the baby looked at. The study showed that a two-month-old baby looked twice as much at the human face as it did at the bulls-eye. This suggests that human babies have some powers of pattern and form selection. Before this experiment, it was thought that babies looked out onto a chaotic world of which they could make little sense.

7. Hawthorne Effect

Study Conducted by: Henry A. Landsberger

Study Conducted in 1955 at Hawthorne Works in Chicago, Illinois


Experiment Details: Landsberger performed the study by analyzing data from experiments conducted between 1924 and 1932 by Elton Mayo at the Hawthorne Works near Chicago. The company had commissioned studies to evaluate whether the level of light in a building changed the productivity of the workers. What Mayo found was that the level of light made no difference in productivity: the workers increased their output whenever the amount of light was switched from a low level to a high level, or vice versa.

The researchers noticed a tendency that the workers’ level of efficiency increased when any variable was manipulated. The study showed that the output changed simply because the workers were aware that they were under observation. The conclusion was that the workers felt important because they were pleased to be singled out. They increased productivity as a result. Being singled out was the factor dictating increased productivity, not the changing lighting levels, or any of the other factors that they experimented upon.

The Hawthorne Effect has become one of the hardest inbuilt biases to eliminate or factor into the design of any experiment in psychology and beyond.

8. Kitty Genovese Case

Study Conducted by: New York Police Force

Study Conducted in 1964 in New York City

Experiment Details: The murder case of Kitty Genovese was never intended to be a psychological experiment; however, it ended up having serious implications for the field.

According to a New York Times article, almost 40 neighbors witnessed Kitty Genovese being savagely attacked and murdered in Queens, New York in 1964. Not one neighbor called the police for help. Some reports state that the attacker briefly left the scene and later returned to “finish off” his victim. It was later uncovered that many of these facts were exaggerated. (There were more likely only a dozen witnesses and records show that some calls to police were made).

What this case later became famous for is the “Bystander Effect,” which states that the more bystanders present in a social situation, the less likely it is that anyone will step in and help. This effect has led to changes in medicine, psychology, and many other areas. One famous example is the way CPR is taught to new learners. All students in CPR courses learn that they must assign one bystander the job of alerting authorities, which minimizes the chances of no one calling for assistance.

9. Learned Helplessness Experiment

Study Conducted by: Martin Seligman

Study Conducted in 1967 at the University of Pennsylvania


Experiment Details: Seligman’s experiment involved the ringing of a bell and then the administration of a light shock to a dog. After a number of pairings, the dog reacted to the shock even before it happened: as soon as the dog heard the bell, he reacted as though he’d already been shocked.

During the course of this study something unexpected happened. Each dog was placed in a large crate that was divided down the middle with a low fence. The dog could see and jump over the fence easily. The floor on one side of the fence was electrified, but not on the other side. Seligman placed each dog on the electrified side and administered a light shock. He expected the dog to jump to the non-shocking side of the fence. In an unexpected turn, the dogs simply lay down.

The hypothesis was that as the dogs learned from the first part of the experiment that there was nothing they could do to avoid the shocks, they gave up in the second part of the experiment. To prove this hypothesis the experimenters brought in a new set of animals and found that dogs with no history in the experiment would jump over the fence.

This condition was described as learned helplessness. A human or animal does not attempt to get out of a negative situation because the past has taught them that they are helpless.

10. Little Albert Experiment

Study Conducted by: John B. Watson and Rosalie Rayner

Study Conducted in 1920 at Johns Hopkins University


Experiment Details: The experiment began by placing a white rat in front of the infant, who initially had no fear of the animal. Watson then produced a loud sound by striking a steel bar with a hammer every time little Albert was presented with the rat. After several pairings of the noise and the presentation of the white rat, the boy began to cry and exhibit signs of fear every time the rat appeared in the room. Watson also created similar conditioned reflexes with other common animals and objects (rabbits, a Santa Claus beard, etc.) until Albert feared them all.

This study demonstrated that classical conditioning works on humans. One of its most important implications is that adult fears are often connected to early childhood experiences.

11. Magical Number Seven

Study Conducted by: George A. Miller

Study Conducted in 1956 at Princeton University

Experiment Details: Frequently referred to as “Miller’s Law,” the Magical Number Seven experiment suggests that the number of objects an average human can hold in working memory is 7 ± 2. This means that human memory capacity typically includes strings of words or concepts ranging from five to nine. This information on the limits to the capacity for processing information became one of the most highly cited papers in psychology.

The Magical Number Seven experiment was published in 1956 by cognitive psychologist George A. Miller of Princeton University’s Department of Psychology in Psychological Review. In the article, Miller discussed a concurrence between the limits of one-dimensional absolute judgment and the limits of short-term memory.

In a one-dimensional absolute-judgment task, a person is presented with a number of stimuli that vary on one dimension (such as 10 different tones varying only in pitch). The person responds to each stimulus with a corresponding response (learned before).

Performance is almost perfect up to five or six different stimuli but declines as the number of different stimuli increases. This means that a human’s maximum performance on one-dimensional absolute judgment can be described as an information store with a maximum capacity of approximately 2 to 3 bits of information, which corresponds to the ability to distinguish between four and eight alternatives.
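The arithmetic behind the "2 to 3 bits" figure is just a base-2 logarithm: n equally distinguishable alternatives carry log2(n) bits of information. A quick sketch:

```python
import math

def capacity_in_bits(n_alternatives):
    """Information needed to distinguish n equally likely alternatives."""
    return math.log2(n_alternatives)

print(capacity_in_bits(4))  # 2.0 bits -> four alternatives
print(capacity_in_bits(8))  # 3.0 bits -> eight alternatives
```

So a judgment capacity of 2 to 3 bits maps directly onto distinguishing roughly four to eight stimuli, consistent with the near-perfect performance observed up to about five or six.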

12. Pavlov’s Dog Experiment

Study Conducted by: Ivan Pavlov

Study Conducted in the 1890s at the Military Medical Academy in St. Petersburg, Russia


Experiment Details: Pavlov began with the simple idea that there are some things that a dog does not need to learn. He observed that dogs do not learn to salivate when they see food. This reflex is “hard wired” into the dog. This is an unconditioned response (a stimulus-response connection that requires no learning).

Pavlov outlined that there are unconditioned responses in the animal by presenting a dog with a bowl of food and then measuring its salivary secretions. In the experiment, Pavlov used a bell as his neutral stimulus. Whenever he gave food to his dogs, he also rang a bell. After a number of repeats of this procedure, he tried the bell on its own. What he found was that the bell on its own now caused an increase in salivation. The dog had learned to associate the bell and the food. This learning created a new behavior. The dog salivated when he heard the bell. Because this response was learned (or conditioned), it is called a conditioned response. The neutral stimulus has become a conditioned stimulus.

This theory came to be known as classical conditioning.

13. Robbers Cave Experiment

Study Conducted by: Muzafer and Carolyn Sherif

Study Conducted in 1954 at the University of Oklahoma

Experiment Details: This experiment, which studied group conflict, is considered by most to fall outside the bounds of ethically sound research.

In 1954 researchers at the University of Oklahoma assigned 22 eleven- and twelve-year-old boys from similar backgrounds into two groups. The two groups were taken to separate areas of a summer camp facility where they were able to bond as social units. The groups were housed in separate cabins and neither group knew of the other’s existence for an entire week. The boys bonded with their cabin mates during that time.

Once the two groups were allowed to have contact, they showed definite signs of prejudice and hostility toward each other even though they had only been given a very short time to develop their social group. To increase the conflict between the groups, the experimenters had them compete against each other in a series of activities. This created even more hostility and eventually the groups refused to eat in the same room.

The final phase of the experiment involved turning the rival groups into friends. The fun activities the experimenters had planned, like shooting firecrackers and watching movies, did not initially work, so they created teamwork exercises where the two groups were forced to collaborate. At the end of the experiment, the boys decided to ride the same bus home, demonstrating that conflict can be resolved and prejudice overcome through cooperation.

Many critics have compared this study to Golding’s Lord of the Flies novel as a classic example of prejudice and conflict resolution.

14. Ross’ False Consensus Effect Study

Study Conducted by: Lee Ross

Study Conducted in 1977 at Stanford University

Experiment Details: In 1977, a social psychology professor at Stanford University named Lee Ross conducted an experiment that, in lay terms, focuses on how people can incorrectly conclude that others think the same way they do, or form a “false consensus” about the beliefs and preferences of others. Ross conducted the study in order to outline how the “false consensus effect” functions in humans.


In the first part of the study, participants were asked to read about situations in which a conflict occurred and then were told two alternative ways of responding to the situation. They were asked to do three things:

  • Guess which option other people would choose
  • Say which option they themselves would choose
  • Describe the attributes of the person who would likely choose each of the two options

What the study showed was that most of the subjects believed that other people would make the same choice they did, regardless of which of the two responses they actually chose themselves. This phenomenon is referred to as the false consensus effect, where an individual thinks that other people think the same way they do when they may not. A second observation from this important study is that when participants were asked to describe the attributes of the people who would likely make the choice opposite their own, they made bold and sometimes negative predictions about the personalities of those who did not share their choice.

15. The Schacter and Singer Experiment on Emotion

Study Conducted by: Stanley Schachter and Jerome E. Singer

Study Conducted in 1962 at Columbia University

Experiment Details: In 1962 Schachter and Singer conducted a groundbreaking experiment to prove their theory of emotion.

In the study, a group of 184 male participants were injected with epinephrine, a hormone that induces arousal, including increased heartbeat, trembling, and rapid breathing. The research participants were told that they were being injected with a new medication to test their eyesight. The first group of participants was informed of the possible side effects that the injection might cause, while the second group was not. The participants were then placed in a room with someone they thought was another participant, but who was actually a confederate in the experiment. The confederate acted in one of two ways: euphoric or angry. Participants who had not been informed about the effects of the injection were more likely to feel either happier or angrier than those who had been informed.

What Schachter and Singer were trying to understand was the ways in which cognition, or thoughts, influence human emotion. Their study illustrates the importance of how people interpret their physiological states, which form an important component of their emotions. Though their cognitive theory of emotional arousal dominated the field for two decades, it has been criticized for two main reasons: the size of the effect seen in the experiment was not that significant, and other researchers had difficulty repeating the experiment.

16. Selective Attention / Invisible Gorilla Experiment

Study Conducted by: Daniel Simons and Christopher Chabris

Study Conducted in 1999 at Harvard University

Experiment Details: In 1999 Simons and Chabris conducted their famous awareness test at Harvard University.

Participants in the study were asked to watch a video and count how many passes occurred between basketball players on the white team. The video moved at a moderate pace and keeping track of the passes was a relatively easy task. What most people failed to notice amidst their counting was that in the middle of the test, a man in a gorilla suit walked onto the court and stood in the center before walking off-screen.

The study found that the majority of the subjects did not notice the gorilla at all, suggesting that humans often overestimate their ability to multitask effectively. What the study set out to demonstrate is that when people are asked to attend to one task, they focus so strongly on that element that they may miss other important details.

17. Stanford Prison Study

Study Conducted by: Philip Zimbardo

Study Conducted in 1971 at Stanford University


Experiment Details: The Stanford Prison Experiment was designed to study the behavior of “normal” individuals when assigned a role of prisoner or guard. College students were recruited to participate and were assigned roles of “guard” or “inmate.” Zimbardo played the role of the warden. The basement of the psychology building served as the prison, and great care was taken to make it look and feel as realistic as possible.

The prison guards were told to run the prison for two weeks and not to physically harm any of the inmates during the study. After a few days, the prison guards became very abusive verbally towards the inmates, and many of the prisoners became submissive to those in authority roles. The Stanford Prison Experiment had to be ended early because some of the participants displayed troubling signs of breaking down mentally.

Although the experiment was conducted very unethically, many psychologists believe that the findings showed how much human behavior is situational. People will conform to certain roles if the conditions are right. The Stanford Prison Experiment remains one of the most famous psychology experiments of all time.

18. Stanley Milgram Experiment

Study Conducted by: Stanley Milgram

Study Conducted in 1961 at Yale University

Experiment Details: This 1961 study was conducted by Yale University psychologist Stanley Milgram. It was designed to measure people’s willingness to obey authority figures when instructed to perform acts that conflicted with their morals. The study was based on the premise that humans will inherently take direction from authority figures from very early in life.

Participants were told they were participating in a study on memory. They were asked to watch another person (an actor) do a memory test. They were instructed to press a button that gave an electric shock each time the person got a wrong answer. (The actor did not actually receive the shocks, but pretended they did).

Participants were told to play the role of “teacher” and administer electric shocks to “the learner” every time they answered a question incorrectly. The experimenters asked the participants to keep increasing the shocks, and most of them obeyed even though the individual completing the memory test appeared to be in great pain. Despite the learner’s protests, many participants continued the experiment when the authority figure urged them to, increasing the voltage after each wrong answer until some eventually administered what would be lethal electric shocks.

This experiment showed that humans are conditioned to obey authority and will usually do so even if it goes against their natural morals or common sense.

19. Surrogate Mother Experiment

Study Conducted by: Harry Harlow

Study Conducted from 1957 to 1963 at the University of Wisconsin

Experiment Details: In a series of controversial experiments during the late 1950s and early 1960s, Harry Harlow studied the importance of a mother’s love for healthy childhood development.

In order to do this, he separated infant rhesus monkeys from their mothers a few hours after birth and left them to be raised by two “surrogate mothers.” One of the surrogates was made of wire with an attached bottle for food. The other was made of soft terrycloth but lacked food. The researcher found that the baby monkeys spent much more time with the cloth mother than the wire mother, suggesting that affection plays a greater role than sustenance when it comes to childhood development. They also found that the monkeys that spent more time cuddling the soft mother grew up to be healthier.

This experiment showed that love, as demonstrated by physical body contact, is a more important aspect of the parent-child bond than the provision of basic needs. These findings also had implications in the attachment between fathers and their infants when the mother is the source of nourishment.

20. The Good Samaritan Experiment

Study Conducted by: John Darley and Daniel Batson

Study Conducted in 1973 at The Princeton Theological Seminary (Researchers were from Princeton University)

Experiment Details: In 1973, an experiment was created by John Darley and Daniel Batson, to investigate the potential causes that underlie altruistic behavior. The researchers set out three hypotheses they wanted to test:

  • People thinking about religion and higher principles would be no more inclined to show helping behavior than laymen.
  • People in a rush would be much less likely to show helping behavior.
  • People who are religious for personal gain would be less likely to help than people who are religious because they want to gain some spiritual and personal insights into the meaning of life.

Student participants were given some religious teaching and instruction. They were then told to travel from one building to the next. Between the two buildings was a man lying injured and appearing to be in dire need of assistance. The first variable being tested was the degree of urgency impressed upon the subjects, with some being told not to rush and others being informed that speed was of the essence.

The results of the experiment were intriguing, with the haste of the subject proving to be the overriding factor. When the subject was in no hurry, nearly two-thirds of people stopped to lend assistance. When the subject was in a rush, this dropped to one in ten.

People who were on the way to deliver a speech about helping others were nearly twice as likely to help as those delivering other sermons. This showed that the thoughts of the individual were a factor in determining helping behavior. Religious beliefs did not appear to make much difference to the results; being religious for personal gain, or as part of a spiritual quest, did not appear to have much of an impact on the amount of helping behavior shown.

21. The Halo Effect Experiment

Study Conducted by: Richard E. Nisbett and Timothy DeCamp Wilson

Study Conducted in 1977 at the University of Michigan

Experiment Details: The Halo Effect is the general tendency to assume that people who are physically attractive are also more likely to:

  • be intelligent
  • be friendly
  • display good judgment

To test their theory, Nisbett and Wilson designed a study to show that people have little awareness of the nature of the Halo Effect. They’re not aware that it influences:

  • their personal judgments
  • the production of a more complex social behavior

In the experiment, college students served as the research participants. They were asked to evaluate a psychology instructor as they viewed him in a videotaped interview. The students were randomly assigned to one of two groups, and each group was shown one of two different interviews with the same instructor, a native French-speaking Belgian who spoke English with a noticeable accent. In the first video, the instructor presented himself as someone:

  • respectful of his students’ intelligence and motives
  • flexible in his approach to teaching
  • enthusiastic about his subject matter

In the second interview, he presented himself as much more unlikable. He was cold and distrustful toward the students and was quite rigid in his teaching style.

After watching the videos, the subjects were asked to rate the lecturer on:

  • physical appearance
  • mannerisms
  • accent

These attributes were kept the same in both versions of the video. The subjects rated the professor on an 8-point scale ranging from “like extremely” to “dislike extremely.” Some subjects were told that the researchers were interested in knowing “how much their liking for the teacher influenced the ratings they just made.” Other subjects were asked to identify how much the characteristics they had just rated influenced their liking of the teacher.

After responding to the questionnaire, the respondents were puzzled about their reactions to the videotapes and to the questionnaire items. The students had no idea why they had given one lecturer higher ratings; most said that how much they liked the lecturer had not affected their evaluation of his individual characteristics at all.

The interesting thing about this study is that people can understand the phenomenon yet be unaware when it is occurring: humans make such judgments without realizing it. Even when the effect is pointed out, they may still deny that their ratings are a product of the halo effect.

22. The Marshmallow Test

Study Conducted by: Walter Mischel

Study Conducted in 1972 at Stanford University


Experiment Details: In his 1972 Marshmallow Experiment, children ages four to six were taken into a room where a marshmallow was placed in front of them on a table. Before leaving each child alone in the room, the experimenter informed them that they would receive a second marshmallow if the first one was still on the table when the experimenter returned 15 minutes later. The examiner recorded how long each child resisted eating the marshmallow, and follow-up work examined whether that delay correlated with the child’s success in adulthood. A small number of the 600 children ate the marshmallow immediately, and one-third delayed gratification long enough to receive the second marshmallow.

In follow-up studies, Mischel found that those who deferred gratification were significantly more competent and received higher SAT scores than their peers. This characteristic likely remains with a person for life. While this study seems simplistic, the findings outline some of the foundational differences in individual traits that can predict success.

23. The Monster Study

Study Conducted by: Wendell Johnson

Study Conducted in 1939 at the University of Iowa

Experiment Details: The Monster Study received this negative title due to the unethical methods that were used to determine the effects of positive and negative speech therapy on children.

Wendell Johnson of the University of Iowa selected 22 orphaned children, some with stutters and some without. The children were divided into two groups. The group of children with stutters was placed in positive speech therapy, where they were praised for their fluency. The non-stutterers were placed in negative speech therapy, where they were disparaged for every imperfection in their speech.

As a result of the experiment, some of the children who received negative speech therapy suffered psychological effects and retained speech problems for the rest of their lives. Their experiences stand as an example of the significance of positive reinforcement in education.

The initial goal of the study was to investigate positive and negative speech therapy. However, the implication spanned much further into methods of teaching for young children.

24. Violinist at the Metro Experiment

Study Conducted by: Staff at The Washington Post

Study Conducted in 2007 at a Washington D.C. Metro Train Station


Experiment Details: During the study, pedestrians rushed by without realizing that the musician playing at the entrance to the metro stop was Grammy-winning violinist Joshua Bell. Two days before playing in the subway, he had sold out a theater in Boston where the seats averaged $100. He played one of the most intricate pieces ever written on a violin worth 3.5 million dollars. In the 45 minutes Bell played, only six people stopped and stayed for a while. Around 20 gave him money but continued to walk at their normal pace. He collected $32.

The study and the subsequent article, organized by The Washington Post, were part of a social experiment looking at the priorities of people.

Gene Weingarten wrote of the social experiment: “In a banal setting at an inconvenient time, would beauty transcend?” He later won a Pulitzer Prize for his story. Some of the questions the article addresses are:

  • Do we perceive beauty?
  • Do we stop to appreciate it?
  • Do we recognize the talent in an unexpected context?

As it turns out, many of us are not nearly as perceptive to our environment as we might like to think.

25. Visual Cliff Experiment

Study Conducted by: Eleanor Gibson and Richard Walk

Study Conducted in 1959 at Cornell University

Experiment Details: In 1959, psychologists Eleanor Gibson and Richard Walk set out to study depth perception in infants. They wanted to know if depth perception is a learned behavior or if it is something that we are born with. To study this, Gibson and Walk conducted the visual cliff experiment.

They studied 36 infants between the ages of six and 14 months, all of whom could crawl. The infants were placed one at a time on a visual cliff. A visual cliff was created using a large glass table that was raised about a foot off the floor. Half of the glass table had a checker pattern underneath in order to create the appearance of a ‘shallow side.’

To create a ‘deep side,’ the same checker pattern was placed on the floor beneath the other half of the table; this side is the visual cliff, because the distance of the pattern below the glass creates the illusion of a sudden drop-off. Researchers placed a foot-wide centerboard between the shallow side and the deep side. Gibson and Walk found the following:

  • Nine of the infants did not move off the centerboard.
  • All of the 27 infants who did move crossed into the shallow side when their mothers called them from the shallow side.
  • Three of the infants crawled off the visual cliff toward their mother when called from the deep side.
  • When called from the deep side, the remaining 24 children either crawled to the shallow side or cried because they could not cross the visual cliff and make it to their mother.

What this study helped demonstrate is that depth perception is likely an inborn trait in humans.

Among these experiments and psychological tests, we see boundaries pushed and theories taking on a life of their own. It is through the endless stream of psychological experimentation that we can see simple hypotheses become guiding theories for those in this field. The greater field of psychology became a formal field of experimental study in 1879, when Wilhelm Wundt established the first laboratory dedicated solely to psychological research in Leipzig, Germany. Wundt was the first person to refer to himself as a psychologist. Since 1879, psychology has grown into a massive collection of theories, concepts, and methods of practice, as well as a specialty area in the field of healthcare. None of this would have been possible without these and many other important psychological experiments that have stood the test of time.


About the Author

After earning a Bachelor of Arts in Psychology from Rutgers University and then a Master of Science in Clinical and Forensic Psychology from Drexel University, Kristen began a career as a therapist at two prisons in Philadelphia. At the same time she volunteered as a rape crisis counselor, also in Philadelphia. After a few years in the field she accepted a teaching position at a local college where she currently teaches online psychology courses. Kristen began writing in college and still enjoys her work as a writer, editor, professor and mother.


The 11 Most Influential Psychological Experiments in History

The history of psychology is marked by groundbreaking experiments that transformed our understanding of the human mind. These 11 Most Influential Psychological Experiments in History stand out as pivotal, offering profound insights into behaviour, cognition, and the complexities of human nature.

In this PsychologyOrg article, we’ll explain these key experiments, exploring their impact on our understanding of human behaviour and the intricate workings of the mind.


Experimental Psychology

Experimental psychology is a branch of psychology that uses scientific methods to study human behaviour and mental processes. Researchers in this field design experiments to test hypotheses about topics such as perception, learning, memory, emotion, and motivation.

They use a variety of techniques to measure and analyze behaviour and mental processes, including behavioural observations, self-report measures, physiological recordings, and computer simulations. The findings of experimental psychology studies can have important implications for a wide range of fields, including education, healthcare, and public policy.

Psychologists have long tried to gain insight into how we perceive the world and to understand what motivates our behaviour. They have made great strides in lifting that veil of mystery. In addition to providing us with food for stimulating party conversations, some of the most famous psychological experiments of the last century reveal surprising and universal truths about human nature.

11 Most Influential Psychological Experiments in History

Throughout the history of psychology, revolutionary experiments have reshaped our comprehension of the human mind. These 11 experiments are pivotal, providing deep insights into human behaviour, cognition, and the intricate facets of human nature.

1. Kohler and the Chimpanzee experiment

Wolfgang Kohler studied the insight process by observing the behaviour of chimpanzees in a problem situation. In the experimental situation, the animals were placed in a cage outside of which food, for example, a banana, was stored. There were other objects in the cage, such as sticks or boxes. The animals participating in the experiment were hungry, so they needed to get to the food. At first, the chimpanzee used sticks mainly for playful activities; but suddenly, in the mind of the hungry chimpanzee, a relationship between sticks and food developed.

The stick, from an object to play with, became an instrument through which it was possible to reach the banana placed outside the cage. There had been a restructuring of the perceptual field: Kohler stressed that the appearance of the new behaviour was not the result of random attempts in a process of trial and error. It was one of the first experiments on the intelligence of chimpanzees.

2. Harlow’s experiment on attachment with monkeys

In a scientific paper (1959), Harry F. Harlow described how he had separated baby rhesus monkeys from their mothers at birth and raised them with the help of “surrogate mothers.” In a series of experiments he compared the behaviour of monkeys in two situations:

  • Infant monkeys with a surrogate mother that had no bottle but was covered in a soft, fluffy, furry fabric.
  • Infant monkeys with a surrogate mother that supplied food but was covered in wire.

The infant monkeys showed a clear preference for the “furry” mother, spending an average of fifteen hours a day attached to her, even though they were fed exclusively by the wire mother. Harlow’s conclusion: across all the experiments, the pleasure of contact elicited attachment behaviours, but food did not.

3. The Strange Situation by Mary Ainsworth

Building on Bowlby’s attachment theory, Mary Ainsworth and colleagues (1978) have developed an experimental method called the Strange Situation, to assess individual differences in attachment security. The Strange Situation includes a series of short laboratory episodes in a comfortable environment and the child’s behaviors are observed.

Ainsworth and colleagues paid special attention to the child’s behaviour at the time of reunion with the caregiver after a brief separation, identifying three different attachment patterns, or styles, so called from that moment on. Kinds of attachment according to Mary Ainsworth:

  • Secure attachment (63% of the dyads examined)
  • Anxious-resistant or ambivalent attachment (16%)
  • Avoidant attachment (21%)

The Stanford Prison Experiment by Philip Zimbardo

In a famous 1971 experiment, known as the Stanford Prison Experiment, Zimbardo and a team of collaborators reproduced a prison in the basement of a Stanford University building to study the behaviour of subjects within a context of very particular and complex dynamics. The participants (24 students) were randomly divided into two groups:

  • “Prisoners.” These students were locked up in three cells in the basement for six days; they were required to wear a white gown with an identification number and a chain on the right ankle.
  • “Guards.” The students in the role of prison guards had to watch the basement, choose the most appropriate methods of maintaining order, and make the “prisoners” perform various tasks; they were asked to wear dark glasses and uniforms, and never to be violent towards participants in the opposite role.

However, the situation deteriorated dramatically: the mock guards very soon began to seriously mistreat and humiliate the “detainees,” so it was decided to discontinue the experiment.

4. Jane Elliot’s Blue Eyes Experiment

On April 5, 1968, in a small school in Riceville, Iowa, Professor Jane Elliot decided to give a practical lesson on racism to 28 children of about eight years of age through the blue eyes brown eyes experiment.

“Children with brown eyes are the best,” the instructor began. “They are more beautiful and intelligent.” She wrote the word “melanin” on the board and explained that it was a substance that made people intelligent: dark-eyed children have more of it, so they are more intelligent, while blue-eyed children lag behind.

In a very short time, the brown-eyed children began to treat their blue-eyed classmates with superiority, and the latter in turn lost their self-confidence. A girl who had been a very good student started making mistakes during arithmetic class, and at recess she was approached by three brown-eyed friends. “You have to apologize because you get in their way and because we are the best,” said one of them. The girl hastened to apologize. This is one of the psychosocial experiments demonstrating the role that beliefs and prejudices play.

5. Bandura’s Bobo Doll Experiment

Albert Bandura gained great fame for the Bobo doll experiment on children’s imitation of aggression, in which:

One group of children watched adults in a room hit the Bobo doll, without the adults’ behaviour being commented on. Other children of the same age instead saw the adults sitting quietly, in absolute silence, next to Bobo.

Finally, all these children were brought into a room full of toys, including a doll like Bobo. Of the 10 children who hit the doll, 8 were those who had previously seen an adult do it. This shows how, if a model we follow performs a certain action, we are tempted to imitate it; this happens especially in children, who do not yet have the experience to judge for themselves whether that behaviour is correct.

6. Milgram’s experiment

The Milgram experiment was first carried out in 1961 by psychologist Stanley Milgram as an investigation into the degree of our deference to authority. A subject is invited to give an electric shock to an individual playing the role of the student, positioned behind a screen, whenever he does not answer a question correctly. An authority figure then tells the subject to gradually increase the intensity of the shock until the student screams in pain and begs him to stop.

No justification is given, except that the authority figure tells the subject to obey. In reality it was staged: no electric shock was actually given, but two-thirds of the subjects went on to administer what they believed was a 450-volt shock, simply because a person in authority told them they would not be held responsible for it.

7. Little Albert

Little Albert’s experiment on conditioned fear is perhaps the most famous psychological study of all. John Watson and Rosalie Rayner showed a white laboratory rat to a nine-month-old boy, little Albert. At first the boy showed no fear, but then Watson, from behind, made him flinch with a sudden noise by hitting a metal bar with a hammer. Of course, the noise frightened little Albert, who began to cry.

Every time the rat was brought out, Watson and Rayner would strike the bar with the hammer to frighten the boy. Soon the mere sight of the rat was enough to reduce little Albert to a trembling bundle of nerves: he had learned to fear the sight of a rat, and soon afterwards began to fear a series of similar objects shown to him.

8. Pavlov’s dog

Ivan Pavlov’s dogs became famous through the experiments that led him to discover what we call “classical conditioning,” or the “Pavlovian reflex,” still among the most cited psychological experiments today. Expounding his findings in 1905, the Russian physiologist reported having been struck by the fact that his dogs began to drool not at the sight of food, but when they heard the laboratory employees who brought it.

He investigated this and arranged for a buzzer to sound every time it was mealtime. Very soon the sound of the buzzer alone was enough to make the dogs start drooling: they had connected the signal to the arrival of food.

9. Asch’s experiment

This is a social psychology experiment carried out in 1951 by the Polish psychologist Solomon Asch on the influence of the majority and social conformity.

The experiment is based on the idea that being part of a group is a sufficient condition to change a person’s actions, judgments, and visual perceptions. The very simple task consisted of asking the subjects involved to match a line drawn on a white sheet with the corresponding one among three lines, A, B, and C, drawn on another sheet. Only one was identical to the first, while the other two were longer or shorter.

The experiment was carried out in three phases. As soon as one member of the group, an accomplice of Asch, gave a wrong answer by matching the line with the wrong one, the other accomplices made the same deliberate mistake, and many of the real participants then conformed, even though the correct answer was more than obvious. When questioned about the reason for this choice, the participants responded that, although aware of the correct answer, they had decided to conform to the group, adapting to those who had preceded them.


10. Rosenhan’s Experiment

Among the most interesting investigations in this field is an experiment carried out by David Rosenhan (1973) to document the low validity of psychiatric diagnoses. Rosenhan had eight associates admitted to various psychiatric hospitals claiming psychotic symptoms, but once they entered the hospital they behaved as usual.

Despite this, they were held for 19 days on average, with all but one being diagnosed as “psychotic.” One of the reasons the staff were not aware of the “normality” of the subjects was, according to Rosenhan, the very limited contact between staff and patients.

11. Bystander Effect (1968)

The Bystander Effect, studied in 1968 after the tragic case of Kitty Genovese, explores how individuals are less likely to intervene in emergencies when others are present. The original research by John Darley and Bibb Latané involved staged scenarios in which participants believed they were part of a discussion via intercom.

In the experiment, participants were led to believe they were communicating with others about personal problems. Unknown to them, the discussions were staged, and at a certain point, a participant (confederate) pretended to have a seizure or needed help.

The results were startling. When participants believed they were the sole witness to the emergency, they responded quickly and sought help. However, when they thought others were also present (but were confederates instructed to not intervene), the likelihood of any individual offering help significantly decreased. This phenomenon became known as the Bystander Effect.

The presence of others creates a diffusion of responsibility among bystanders: each individual assumes someone else will take action, which decreases the likelihood of any single person intervening.

This experiment highlighted the social and psychological factors influencing intervention during emergencies and emphasized the importance of understanding bystander behaviour in critical situations.


The journey through the “11 Most Influential Psychological Experiments in History” illuminates the profound impact these studies have had on our understanding of human behaviour, cognition, and social dynamics.

Each experiment stands as a testament to the dedication of pioneering psychologists who dared to delve into the complexities of the human mind. From Milgram’s obedience studies to Zimbardo’s Stanford Prison Experiment, these trials have shaped not only the field of psychology but also our societal perceptions and ethical considerations in research.

They serve as timeless benchmarks, reminding us of the ethical responsibilities and the far-reaching implications of delving into the human psyche. The enduring legacy of these experiments lies not only in their scientific contributions but also in the ethical reflections they provoke, urging us to navigate the boundaries of knowledge with caution, empathy, and an unwavering commitment to understanding the intricacies of our humanity.

What is the most famous experiment in the history of psychology?

One of the most famous experiments is the Milgram Experiment, conducted by Stanley Milgram in the 1960s. It investigated obedience to authority figures and remains influential in understanding human behaviour.

Who wrote the 25 most influential psychological experiments in history?

The book “The 25 Most Influential Psychological Experiments in History” was written by Michael Shermer, a science writer and historian of science.

What is the history of experimental psychology?

Experimental psychology traces back to Wilhelm Wundt, often considered the father of experimental psychology. He established the first psychology laboratory in 1879 at the University of Leipzig, marking the formal beginning of experimental psychology as a distinct field.

What was the psychological experiment in the 1960s?

Many significant psychological experiments were conducted in the 1960s. One notable example is the Stanford Prison Experiment led by Philip Zimbardo, which examined the effects of situational roles on behaviour.

Who was the first experimental psychologist?

Wilhelm Wundt is often regarded as the first experimental psychologist due to his establishment of the first psychology laboratory and his emphasis on empirical research methods in psychology.


About the Author: Waqar is a psychologist and content writer who shares insights into human behavior on this blog.


Research Methods In Psychology

By Saul McLeod, PhD (Editor-in-Chief, Simply Psychology) and Olivia Guy-Evans, MSc (Associate Editor, Simply Psychology)

Research methods in psychology are systematic procedures used to observe, describe, predict, and explain behavior and mental processes. They include experiments, surveys, case studies, and naturalistic observations, ensuring data collection is objective and reliable to understand and explain psychological phenomena.


Hypotheses are statements predicting the results of an investigation, which can be supported or refuted by that investigation.

There are four types of hypotheses:
  • Null hypotheses (H0) – these predict that no difference will be found in the results between the conditions. Typically these are written ‘There will be no difference…’
  • Alternative hypotheses (Ha or H1) – these predict that there will be a significant difference in the results between the two conditions. This is also known as the experimental hypothesis.
  • One-tailed (directional) hypotheses – these state the specific direction the researcher expects the results to move in, e.g. higher, lower, more, less. In a correlation study, the predicted direction of the correlation can be either positive or negative.
  • Two-tailed (non-directional) hypotheses – these state that a difference will be found between the conditions of the independent variable but do not state the direction of the difference or relationship. Typically these are written ‘There will be a difference…’

All research has an alternative hypothesis (either one-tailed or two-tailed) and a corresponding null hypothesis.

Once the research is conducted and results are found, psychologists must accept one hypothesis and reject the other. 

So, if a difference is found, the Psychologist would accept the alternative hypothesis and reject the null.  The opposite applies if no difference is found.
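The accept/reject step above can be sketched as a simple decision rule in Python. This is an illustration only: the 0.05 significance level and the example p-values are conventional assumptions, not something specified in the text.

```python
# Conventional decision rule: a result counts as a significant
# difference when its p-value falls below a chosen alpha level.
ALPHA = 0.05  # assumed significance level, not specified in the article

def choose_hypothesis(p_value: float) -> str:
    """Return which hypothesis the researcher would accept."""
    if p_value < ALPHA:
        # A difference was found: accept the alternative, reject the null.
        return "alternative"
    # No difference was found: retain the null, reject the alternative.
    return "null"

print(choose_hypothesis(0.03))  # difference found -> "alternative"
print(choose_hypothesis(0.40))  # no difference -> "null"
```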

Sampling techniques

Sampling is the process of selecting a representative group from the population under study.


A sample is the participants you select from a target population (the group you are interested in) to make generalizations about.

Representativeness means the extent to which a sample mirrors the researcher's target population and reflects its characteristics.

Generalisability means the extent to which findings from a sample can be applied to the larger population from which the sample was drawn.

  • Volunteer sampling: participants select themselves, e.g. by responding to newspaper adverts, noticeboards, or online posts.
  • Opportunity sampling: also known as convenience sampling, uses people who are available and willing to take part at the time the study is carried out. It is based on convenience.
  • Random sampling: every person in the target population has an equal chance of being selected. An example of random sampling would be picking names out of a hat.
  • Systematic sampling: a system is used to select participants, such as picking every Nth person from all possible participants, where N = the number of people in the research population / the number of people needed for the sample.
  • Stratified sampling: the researcher identifies subgroups and selects participants in proportion to their occurrence in the population.
  • Snowball sampling: researchers find a few participants, then ask those participants to recruit further participants, and so on.
  • Quota sampling: researchers are told to ensure the sample fits certain quotas; for example, they might be told to find 90 participants, 30 of them unemployed.
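Several of these techniques can be sketched with Python's standard random module. This is a minimal illustration, not from any real study: the population of 90 people, the employed/unemployed stratum, and the sample size of 30 are all hypothetical.

```python
import random

def systematic_sample(population, sample_size):
    """Systematic sampling: pick every Nth person,
    where N = population size / sample size."""
    n = len(population) // sample_size
    return population[::n][:sample_size]

def stratified_sample(population, strata_key, sample_size):
    """Stratified sampling: draw from each subgroup
    in proportion to its occurrence in the population."""
    strata = {}
    for person in population:
        strata.setdefault(strata_key(person), []).append(person)
    sample = []
    for members in strata.values():
        k = round(sample_size * len(members) / len(population))
        sample.extend(random.sample(members, k))
    return sample

# Hypothetical population of 90 people, two thirds employed
people = [{"id": i, "employed": i % 3 != 0} for i in range(90)]

simple = random.sample(people, 30)         # random sampling: names out of a hat
every_nth = systematic_sample(people, 30)  # systematic sampling
strat = stratified_sample(people, lambda p: p["employed"], 30)
```

With 60 employed and 30 unemployed people, the stratified sample of 30 contains 20 employed and 10 unemployed participants, mirroring the 2:1 proportion in the population.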

Experiments always have an independent and a dependent variable.

  • The independent variable is the one the experimenter manipulates (the thing that changes between the conditions the participants are placed into). It is assumed to have a direct effect on the dependent variable.
  • The dependent variable is the thing being measured, or the results of the experiment.


Operationalization of variables means making them measurable/quantifiable. We must use operationalization to ensure that variables are in a form that can be easily tested.

For instance, we can’t really measure ‘happiness’, but we can measure how many times a person smiles within a two-hour period. 

By operationalizing variables, we make it easy for someone else to replicate our research. Remember, this is important because we can check if our findings are reliable.

Extraneous variables are all variables other than the independent variable that could affect the results of the experiment.

An extraneous variable can be a natural characteristic of the participant, such as intelligence, gender, or age, or a situational feature of the environment, such as lighting or noise.

Demand characteristics are a type of extraneous variable that arises when participants work out the aims of the research study and begin to behave in a certain way as a result.

For example, in Milgram's research, critics argued that participants worked out that the shocks were not real and administered them because they thought this was what was required of them.

Extraneous variables must be controlled so that they do not affect (confound) the results.

Randomly allocating participants to their conditions or using a matched pairs experimental design can help to reduce participant variables. 

Situational variables are controlled by using standardized procedures, ensuring every participant in a given condition is treated in the same way.

Experimental Design

Experimental design refers to how participants are allocated to each condition of the independent variable, such as a control or experimental group.
  • Independent design (between-groups design): each participant is selected for only one group. The most common way of deciding which participants go into which group is randomization.
  • Matched participants design: each participant is selected for only one group, but the participants in the two groups are matched on some relevant factor or factors (e.g. ability, sex, age).
  • Repeated measures design (within-groups design): each participant appears in both groups, so exactly the same participants are in each condition.
  • The main problem with the repeated measures design is that there may be order effects: participants' experiences during the experiment may change them in various ways.
  • They may perform better in the second condition because they have gained useful information about the experiment or the task; on the other hand, they may perform worse on the second occasion because of tiredness or boredom.
  • Counterbalancing is the best way of preventing order effects from disrupting the findings of an experiment; it involves ensuring that each condition is equally likely to be used first and second by the participants.
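The counterbalancing idea above can be sketched in a few lines of Python. The condition labels "A" and "B" and the participant IDs are hypothetical placeholders.

```python
def counterbalance(participants):
    """Alternate the condition order (AB / BA) across participants
    so that each order is used equally often."""
    orders = [("A", "B"), ("B", "A")]
    return {p: orders[i % 2] for i, p in enumerate(participants)}

schedule = counterbalance(["p1", "p2", "p3", "p4"])
# p1 and p3 do condition A first; p2 and p4 do condition B first
```

With an even number of participants, half experience A then B and half experience B then A, so any order effect is spread equally across the two conditions.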

If we wish to compare two groups with respect to a given independent variable, it is essential to make sure that the two groups do not differ in any other important way. 

Experimental Methods

All experimental methods involve an IV (independent variable) and a DV (dependent variable).

In a laboratory experiment, the researcher decides where the experiment will take place, at what time, with which participants, and in what circumstances, using a standardized procedure.

  • Field experiments are conducted in the everyday (natural) environment of the participants. The experimenter still manipulates the IV, but in a real-life setting. It may be possible to control extraneous variables, though such control is more difficult than in a lab experiment.
  • Natural experiments investigate a naturally occurring IV that is not deliberately manipulated; it exists anyway. Participants are not randomly allocated, and the natural event may occur only rarely.

A case study is an in-depth investigation of a person, group, event, or community. It uses information from a range of sources, such as the person concerned and also their family and friends.

Many techniques may be used such as interviews, psychological tests, observations and experiments. Case studies are generally longitudinal: in other words, they follow the individual or group over an extended period of time. 

Case studies are widely used in psychology and among the best-known ones carried out were by Sigmund Freud . He conducted very detailed investigations into the private lives of his patients in an attempt to both understand and help them overcome their illnesses.

Case studies provide rich qualitative data and have high levels of ecological validity. However, it is difficult to generalize from individual cases as each one has unique characteristics.

Correlational Studies

Correlation means association; it is a measure of the extent to which two variables are related. One of the variables can be regarded as the predictor variable with the other one as the outcome variable.

Correlational studies typically involve obtaining two different measures from a group of participants, and then assessing the degree of association between the measures. 

The predictor variable can be seen as occurring before the outcome variable in some sense. It is called the predictor variable, because it forms the basis for predicting the value of the outcome variable.

Relationships between variables can be displayed on a graph or as a numerical score called a correlation coefficient.


  • If an increase in one variable tends to be associated with an increase in the other, this is known as a positive correlation.
  • If an increase in one variable tends to be associated with a decrease in the other, this is known as a negative correlation.
  • A zero correlation occurs when there is no relationship between the variables.

After looking at the scattergraph, if we want to be sure that a significant relationship does exist between the two variables, a statistical test of correlation can be conducted, such as Spearman’s rho.

The test gives us a score called a correlation coefficient. This is a value between -1 and +1, and the closer its absolute value is to 1, the stronger the relationship between the variables. The coefficient can be positive, e.g. +0.63, or negative, e.g. -0.63.
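As a rough illustration, a correlation coefficient (Pearson's r here, a close relative of Spearman's rho) can be computed in plain Python. The revision-hours and exam-score data are made up for the example.

```python
from statistics import mean

def correlation(xs, ys):
    """Pearson correlation coefficient: a value between -1 and +1."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

hours_revised = [1, 2, 3, 4, 5]
exam_score = [52, 55, 61, 70, 74]  # rises with revision hours

print(correlation(hours_revised, exam_score))  # strong positive correlation, close to +1
```

Reversing one of the lists would flip the sign, giving a correlation close to -1.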


A correlation between variables, however, does not automatically mean that the change in one variable is the cause of the change in the values of the other variable. A correlation only shows if there is a relationship between variables.

Correlation does not prove causation, as a third variable may be involved.


Interview Methods

Interviews are commonly divided into two types: structured and unstructured.

In a structured interview, a fixed, predetermined set of questions is put to every participant in the same order and in the same way.

Responses are recorded on a questionnaire, and the researcher presets the order and wording of the questions, and sometimes the range of alternative answers.

The interviewer stays within their role and maintains social distance from the interviewee.

In an unstructured interview, there are no set questions; the participant can raise whatever topics they feel are relevant and discuss them in their own way, and the interviewer poses follow-up questions in response to the participant's answers.

Unstructured interviews are most useful in qualitative research to analyze attitudes and values.

Though they rarely provide a valid basis for generalization, their main advantage is that they enable the researcher to probe social actors’ subjective point of view. 

Questionnaire Method

Questionnaires can be thought of as a kind of written interview. They can be carried out face to face, by telephone, or post.

The choice of questions is important because of the need to avoid bias or ambiguity in the questions, ‘leading’ the respondent or causing offense.

  • Open questions are designed to encourage a full, meaningful answer using the subject’s own knowledge and feelings. They provide insights into feelings, opinions, and understanding. Example: “How do you feel about that situation?”
  • Closed questions can be answered with a simple “yes” or “no” or specific information, limiting the depth of response. They are useful for gathering specific facts or confirming details. Example: “Do you feel anxious in crowds?”

Other practical advantages of questionnaires are that they are cheaper than face-to-face interviews and can be used to contact many respondents scattered over a wide area relatively quickly.

Observations

There are different types of observation methods:
  • Covert observation is where the researcher does not tell the participants they are being observed until after the study is complete. This method raises ethical problems of deception and consent.
  • Overt observation is where the researcher tells the participants they are being observed and what they are being observed for.
  • Controlled: behavior is observed under controlled laboratory conditions (e.g., Bandura's Bobo doll study).
  • Natural: spontaneous behavior is recorded in a natural setting.
  • Participant: the observer has direct contact with the group of people they are observing; the researcher becomes a member of the group they are researching.
  • Non-participant (aka "fly on the wall"): the researcher does not have direct contact with the people being observed; the observation of participants' behavior is from a distance.

Pilot Study

A pilot study is a small-scale preliminary study conducted in order to evaluate the feasibility of the key steps in a future, full-scale project.

A pilot study is an initial run-through of the procedures to be used in an investigation; it involves selecting a few people and trying out the study on them. It is possible to save time, and in some cases, money, by identifying any flaws in the procedures designed by the researcher.

A pilot study can help the researcher spot any ambiguities (i.e. unclear wording) or confusion in the information given to participants, or problems with the task devised.

Sometimes the task is too hard, and the researcher may get a floor effect, where none of the participants can score well or complete the task; all performances are low.

The opposite effect is a ceiling effect, when the task is so easy that all achieve virtually full marks or top performances and are “hitting the ceiling”.

Research Design

In cross-sectional research, a researcher compares multiple segments of the population at the same time.

Sometimes, we want to see how people change over time, as in studies of human development and lifespan. Longitudinal research is a research design in which data-gathering is administered repeatedly over an extended period of time.

In cohort studies , the participants must share a common factor or characteristic such as age, demographic, or occupation. A cohort study is a type of longitudinal study in which researchers monitor and observe a chosen population over an extended period.

Triangulation means using more than one research method to improve the study’s validity.

Reliability

Reliability is a measure of consistency: if a particular measurement is repeated and the same result is obtained, it is described as reliable.

  • Test-retest reliability: assessing the same person on two different occasions, which shows the extent to which the test produces the same answers.
  • Inter-observer reliability: the extent to which there is agreement between two or more observers.
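One common way to quantify inter-observer reliability is simple percent agreement between two observers' codings. A minimal sketch with hypothetical data (more sophisticated indices, such as Cohen's kappa, correct for chance agreement):

```python
def percent_agreement(obs_a, obs_b):
    """Inter-observer reliability as the proportion of matching codings."""
    matches = sum(a == b for a, b in zip(obs_a, obs_b))
    return matches / len(obs_a)

# Two observers coding the same five trials
a = ["hit", "miss", "hit", "hit", "miss"]
b = ["hit", "miss", "hit", "miss", "miss"]

print(percent_agreement(a, b))  # → 0.8 (the observers agree on 4 of 5 trials)
```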

Meta-Analysis

Meta-analysis is a statistical procedure used to combine and synthesize findings from multiple independent studies to estimate the average effect size for a particular research question.

Meta-analysis goes beyond traditional narrative reviews by using statistical methods to integrate the results of several studies, leading to a more objective appraisal of the evidence.

Studies are located by searching various databases, and decisions are then made about which studies to include or exclude.

  • Strengths: increases the validity of the conclusions, as they are based on a wider range of studies.
  • Weaknesses: research designs can vary between studies, so they are not truly comparable.
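The averaging step at the heart of a meta-analysis can be sketched as follows. This toy version weights each study's effect size by its sample size; real meta-analyses typically weight by inverse variance instead, and the three studies here are hypothetical.

```python
def pooled_effect(studies):
    """Average effect size across studies, weighting each
    study's effect size (d) by its sample size (n)."""
    total_n = sum(n for _, n in studies)
    return sum(d * n for d, n in studies) / total_n

# (effect size d, sample size n) for three hypothetical studies
studies = [(0.40, 50), (0.25, 200), (0.60, 30)]

print(round(pooled_effect(studies), 3))  # → 0.314
```

Note how the large study (n = 200) pulls the pooled estimate toward its smaller effect size: the result sits much closer to 0.25 than an unweighted mean of the three effects would.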

Peer Review

A researcher submits an article to a journal. The choice of the journal may be determined by the journal’s audience or prestige.

The journal selects two or more appropriate experts (psychologists working in a similar field) to peer review the article without payment. The peer reviewers assess: the methods and designs used, originality of the findings, the validity of the original research findings and its content, structure and language.

Feedback from the reviewers determines whether the article is accepted. The article may be: accepted as it is, accepted with revisions, sent back to the author to revise and re-submit, or rejected without the possibility of resubmission.

The editor makes the final decision whether to accept or reject the research report based on the reviewers' comments and recommendations.

Peer review is important because it prevents faulty data from entering the public domain, provides a way of checking the validity of findings and the quality of the methodology, and is used to assess the research rating of university departments.

Peer review may be an ideal; in practice, there are many problems. For example, it slows publication down and may prevent unusual, new work from being published. Some reviewers might use it as an opportunity to prevent competing researchers from publishing their work.

Some people doubt whether peer review can really prevent the publication of fraudulent research.

The advent of the internet means that more research and academic comment is being published without official peer review than before, though systems are evolving online where everyone has a chance to offer their opinions and police the quality of research.

Types of Data

  • Quantitative data is numerical data e.g. reaction time or number of mistakes. It represents how much or how long, how many there are of something. A tally of behavioral categories and closed questions in a questionnaire collect quantitative data.
  • Qualitative data is virtually any type of information that can be observed and recorded that is not numerical in nature and can be in the form of written or verbal communication. Open questions in questionnaires and accounts from observational studies collect qualitative data.
  • Primary data is first-hand data collected for the purpose of the investigation.
  • Secondary data is information that has been collected by someone other than the person who is conducting the research e.g. taken from journals, books or articles.

Validity means how well a piece of research actually measures what it sets out to measure, or how well it reflects the reality it claims to represent; in other words, whether the observed effect is genuine and represents what is actually out there in the world.

  • Concurrent validity is the extent to which a psychological measure relates to an existing similar measure and obtains close results. For example, a new intelligence test compared to an established test.
  • Face validity: whether the test appears to measure what it is supposed to measure 'on the face of it'. This is checked by 'eyeballing' the measure or by passing it to an expert.
  • Ecological validity is the extent to which findings from a research study can be generalized to other settings / real life.
  • Temporal validity is the extent to which findings from a research study can be generalized to other historical times.

Features of Science

  • Paradigm – A set of shared assumptions and agreed methods within a scientific discipline.
  • Paradigm shift – The result of the scientific revolution: a significant change in the dominant unifying theory within a scientific discipline.
  • Objectivity – When all sources of personal bias are minimised so as not to distort or influence the research process.
  • Empirical method – Scientific approaches that are based on the gathering of evidence through direct observation and experience.
  • Replicability – The extent to which scientific procedures and findings can be repeated by other researchers.
  • Falsifiability – The principle that a theory cannot be considered scientific unless it admits the possibility of being proved untrue.

Statistical Testing

A significant result is one where there is a low probability that chance factors were responsible for any observed difference, correlation, or association in the variables tested.

If our test is significant, we can reject our null hypothesis and accept our alternative hypothesis.

If our test is not significant, we retain (fail to reject) our null hypothesis and reject our alternative hypothesis. A null hypothesis is a statement of no effect.

In psychology, we use p < 0.05 (as it strikes a balance between making a Type I and a Type II error), but p < 0.01 is used in research where errors could cause harm, such as trials of a new drug.

A Type I error is when the null hypothesis is rejected when it should have been retained (more likely when a lenient significance level is used; an error of optimism).

A Type II error is when the null hypothesis is retained when it should have been rejected (more likely when a stringent significance level is used; an error of pessimism).
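The decision rule described above can be sketched as a tiny Python function; the p-values in the usage lines are hypothetical.

```python
def decide(p_value, alpha=0.05):
    """Reject the null hypothesis when p < alpha, otherwise retain it."""
    if p_value < alpha:
        return "reject null (significant)"
    return "retain null (not significant)"

print(decide(0.03))              # significant at the conventional p < 0.05 level
print(decide(0.03, alpha=0.01))  # the same result is not significant at the stricter 0.01 level
```

The same p-value can lead to different decisions under different alpha levels, which is exactly the trade-off between Type I errors (alpha too lenient) and Type II errors (alpha too stringent).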

Ethical Issues

  • Informed consent means participants are able to make an informed judgment about whether to take part. However, obtaining it may lead them to guess the aims of the study and change their behavior.
  • To deal with this, researchers can gain presumptive consent or ask participants to formally indicate their agreement to participate, but this may invalidate the purpose of the study, and it is not guaranteed that the participants would understand.
  • Deception should only be used when it is approved by an ethics committee, as it involves deliberately misleading or withholding information. Participants should be fully debriefed after the study, but debriefing cannot turn the clock back.
  • All participants should be informed at the beginning that they have the right to withdraw if they ever feel distressed or uncomfortable.
  • Withdrawal can cause bias, as the participants who stay are the more obedient ones, and some may not withdraw because they have been given incentives or feel they would be spoiling the study. Researchers can offer the right to withdraw data after participation.
  • All participants should have protection from harm. The researcher should avoid risks greater than those experienced in everyday life and should stop the study if any harm is suspected. However, harm may not be apparent at the time of the study.
  • Confidentiality concerns the communication of personal information. Researchers should not record names but use numbers or false names, though full anonymity is not always possible, as it is sometimes possible to work out who the participants were.



Science of Psychology


The Go-To Science

Curiosity is part of human nature. One of the first questions children learn to ask is “why?” As adults, we continue to wonder. Using empirical methods, psychologists apply that universal curiosity to collect and interpret research data to better understand and solve some of society’s most challenging problems.

It’s difficult, if not impossible, to think of a facet of life where psychology is not involved. Psychologists employ the scientific method — stating the question, offering a theory and then constructing rigorous laboratory or field experiments to test the hypothesis. Psychologists apply the understanding gleaned through research to create evidence-based strategies that solve problems and improve lives.

The result is that psychological science unveils new and better ways for people to exist and thrive in a complex world.

Psychologists in Action

Jack Stark, PhD, Performance Psychologist

Helping Businesses

Dr. Jack Stark uses psychological science to help NASCAR drivers achieve optimal performance  and keep their team in the winner’s circle.

Dr. Strayer helps place an electroencephalogram (EEG) cap on a study participant.

Improving Lives

Dr. David Strayer uses psychological science to study distracted driving by putting people through rigorous concentration tests during driving simulations.

Dr. Tate gives a study participant an armband to monitor activity levels.

Promoting Health

Dr. Deborah Tate uses psychological science to identify strategies for improving weight loss . Her research brings the proven benefits of face-to-face weight loss programs to more people through technology.

Dr. Salas sits in a helicopter with pilots.

Helping Organizations

As an organizational psychologist, Dr. Eduardo Salas studies people where they work — examining what they do and how they make decisions.

Kathleen Kremer, PhD, Research Psychologist

Working in Schools

Dr. Kathleen Kremer knows a thing or two about fun. Using psychological science, she studies user attitudes, behaviors and emotions to learn what makes a child love a toy.

Science in Action

Psychology is a varied field. Psychologists conduct basic and applied research, serve as consultants to communities and organizations, diagnose and treat people, and teach future psychologists and those who will pursue other disciplines. They test intelligence and personality.

Many psychologists work as health care providers. They assess behavioral and mental function and well-being. Other psychologists study how human beings relate to each other and to machines, and work to improve these relationships.

The application of psychological research can decrease the economic burden of disease on government and society as people learn how to make choices that improve their health and well-being. The strides made in educational assessments are helping students with learning disabilities. Psychological science helps educators understand how children think, process and remember — helping to design effective teaching methods. Psychological science contributes to justice by helping the courts understand the minds of criminals, evidence and the limits of certain types of evidence or testimony.

The science of psychology is pervasive. Psychologists work in some of the nation’s most prominent companies and organizations. From Google, Boeing and NASA to the federal government, national health care organizations and research groups to Cirque du Soleil, Disney and NASCAR — psychologists are there, playing important roles.

Subfields of psychology include:

  • Brain science and cognitive psychology
  • Climate and environmental psychology
  • Clinical psychology
  • Counseling psychology
  • Developmental psychology – developmental psychologists focus on human growth and changes across the lifespan, including physical, cognitive, social, intellectual, perceptual, personality and emotional growth.
  • Experimental psychology – experimental psychologists use science to explore the processes behind human and animal behavior.
  • Forensic and public service psychology
  • Health psychology
  • Human factors and engineering psychology
  • Industrial and organizational psychology
  • Psychology of teaching and learning
  • Quantitative psychology – designs research methods to test complex issues.
  • Rehabilitation psychology – rehabilitation psychologists study and work with individuals with disabilities and chronic health conditions to help them overcome challenges and improve their quality of life.
  • Social psychology – examines the influence of interpersonal and group relationships.
  • Sport and performance psychology


John G. Cottone Ph.D.


What Do You Know? The Pros and Cons of a Scientific Approach, Part 2: Is the Scientific Method the Best Way to Establish Knowledge?

Updated September 14, 2023 | Reviewed by Gary Drevitch

In Part 1 of this series, I discussed the illusion of knowledge in the context of COVID-19 and explained how much of what we think we know is actually belief, taken as truth, because it came from a source in which we have faith.

Philosophers as far back as Plato have defined knowledge as "justified true belief (JTB)." However, skeptics and Pyrrhonists have challenged this notion for centuries, reasoning that the same evidence one person considers valid in justifying a belief as "true," another may consider biased or incomplete (David McClean, personal communication, 2020).


Against this backdrop, thinkers from René Descartes to Charles Sanders Peirce have promoted the scientific method as the best means of acquiring justifiable evidence to establish beliefs as true knowledge. In his influential 1877 essay, The Fixation of Belief, Peirce refined Aristotle's approach and extolled the virtues of the scientific method over other means of knowing, including blindly accepting facts from authority figures and relying on pure reasoning to establish knowledge without testing one's conclusions in the real world.

Indeed, the scientific method, with its insistence on direct observation and the objective testing of hypotheses, has been a major advance for our civilization, allowing us to catapult over superstitions and other belief systems that were either invalid or unreliable. Furthermore, it is still the best system for helping our species progressively advance toward truth. However, the scientific method, if not the entire scientific process, is not without its limitations in its ability to yield justifiable evidence to establish knowledge.

Over the past decade, we have learned that many of the scientific findings we have taken as fact have been retracted, either due to error or fraud ( Brainard & You, 2018 ), and as microbiologist Dr. Elisabeth Bik notes in a New York Times op-ed, advancing technology is only making things worse. In the field of psychology, specifically, we have been coming to terms with our own reckoning, known as the "replication crisis," since 2011 ( Pashler & Wagenmakers, 2012 ). Though slightly less publicized, a replication crisis in the field of neuroradiology may end up having more serious consequences. In 2016, researchers from Sweden ( Eklund et al., 2016 ) discovered a statistical anomaly that likely invalidated some 40,000 fMRI studies conducted over a 15-year period.


Part of the problem with science is that as we try to study more sophisticated phenomena, we need more sophisticated equipment, which removes us further and further from direct observation and requires that we place our scientific faith in machines and other people's work. As Bec Crew (2016) points out in a summary of Eklund et al.'s findings, "when scientists are interpreting data from an fMRI machine, they’re not looking at the actual brain... what they're looking at is an image of the brain divided into tiny 'voxels', then interpreted by a computer program."

Indeed, even mathematics, the purest of the STEM fields, seems to be suffering from a crisis of confidence, as the validity of countless proofs that form the foundation of modern mathematics has recently been called into question. Mathematician Kevin Buzzard told attendees at a 2019 conference that "the greatest proofs have become so complex that practically no human on earth can understand all of their details, let alone verify them," and he fears that many proofs widely considered to be true are wrong ( Mordechai Rorvig, 2019 ). In paraphrasing Buzzard, journalist Mordechai Rorvig explains that "new proofs by professional mathematicians tend to rely on a whole host of prior results that have already been published and understood... but there are many cases where the prior proofs used to build new proofs are clearly not understood." In philosophy, this is called the problem of infinite regress: each current article of knowledge is dependent on some previous article of knowledge that is blindly taken as true or cannot be proven true, ad infinitum.

Getting back to science, let's assume a medical researcher, Dr. Feelbetter, wants to run an experiment to determine whether a lower dose of the antibiotic azithromycin would be as effective as the standard 500 mg dose for treating acute sinusitis (i.e., sinus infections), but with fewer side effects. So she designs a randomized, double-blind study comparing a 250 mg dose to the 500 mg dose and to a placebo control.

In every experiment, the principal investigator has control over all of the parameters of the study and is forced to make subjective decisions about every single aspect of the investigation. In this particular experiment, Dr. Feelbetter has an endless list of decisions to make, including:

a) Who will serve as research participants? (Adults? Children? Men? Women? Members of a specific ethnic group susceptible to sinus infections? etc.)

b) How will "acute sinusitis" be defined, measured and diagnosed?

c) How will the drug, azithromycin, be administered? (Tablet? Liquid suspension? IV?)

d) How will the effectiveness of the drug be assessed? (X-ray scans of the nose? Physician exam? Patient self-report?)

e) Which statistical procedures will be used to analyze the data, and which variables, from an infinite set, will be statistically controlled in the analyses?


This is but a small fraction of the types of decisions that researchers need to make in scientific experiments, and the reality is that tremendous subjectivity goes into each of these decisions. How do we know whether Dr. Feelbetter, or any of the scientists we trust to conduct the research our lives depend on, made the right decisions in each of these areas, or resisted the temptation to engage in fraudulent practices?
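Even the most mechanical of these decisions, randomly assigning participants to the three conditions, involves choices (balanced vs. simple randomization, how to document the allocation). A minimal sketch of balanced random assignment in Python follows; the function name, condition labels, and participant IDs are all hypothetical, not from any real study protocol:

```python
import random

def randomize(participant_ids, conditions=("250mg", "500mg", "placebo"), seed=42):
    """Assign each participant a condition so that group sizes stay balanced."""
    rng = random.Random(seed)  # a fixed seed makes the allocation reproducible/auditable
    # Repeat the condition labels enough times to cover every participant,
    # truncate to the sample size, then shuffle the resulting pool.
    reps = -(-len(participant_ids) // len(conditions))  # ceiling division
    pool = (list(conditions) * reps)[:len(participant_ids)]
    rng.shuffle(pool)
    return dict(zip(participant_ids, pool))

# 30 hypothetical participants -> 10 per condition
allocation = randomize([f"P{i:03d}" for i in range(30)])
```

In a double-blind trial, the allocation key produced here would be held by a third party and withheld from both patients and clinicians until the data are analyzed.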

One of the benefits of getting a Ph.D. in psychology is that we are trained not only as clinicians but as scientists; and as scientists, we are required to regularly present our research to peers in the scientific community at academic conferences and weekly brown bag meetings. Presenting one's research at such venues can be a terrifying experience because it is commonplace for other scientists to tear your research apart when they disagree with your methodology or your statistics. This is how things have always been in science and this reality led Max Planck, pioneer of quantum physics, to famously say:


"A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it."

Contrary to what many people think, most issues in science are not settled, a point made by NYU physicist Steven Koonin in his book about climate change, Unsettled. If climate change is too abstract for you, consider instead the uncertainty in the scientific community about whether masks are effective in stopping the spread of COVID.

I mention these things not because I seek to attack science but because, in order to defend it against pseudoscience and conspiracy theories, it is first necessary to create a context of realistic expectations for scientific inquiry. I personally have faith in the majority of scientists and the majority of research findings published in peer-reviewed journals because, in my own work, this practice has served me well. But I must concede that while I have faith in the majority of peer-reviewed research findings, I don't know about them in the same way I know about the effects of gravity on my body when I jump in the air and come crashing back down to Earth (a point made a bit more comically by the slackers on It's Always Sunny in Philadelphia).

Furthermore, I must also confess that my experiences in science have made me aware of the limits of our ability to know things, even when using the scientific method, and these experiences have bolstered my faith in many other things that cannot be proven by science. In the end, I believe that we cannot hope to attain knowledge, we can only approach it, and our best efforts in knowing are supported by our direct experiences (i.e., what William James called "radical empiricism"), validated by the experiences of others, with investigations from multiple perspectives.

I invite you to read Part 3 of my What Do You Know? series, which focuses on how postmodernist thinking has eroded our confidence in what we know, and how that erosion has been exploited by those with a range of intentions.

Brainard, J., & You, J. (2018). What a massive database of retracted papers reveals about science publishing's 'death penalty.' Science, October 18, 2018.

Pashler, H., & Wagenmakers, E. J. (2012). Editors' introduction to the special section on replicability in psychological science: A crisis of confidence? Perspectives on Psychological Science, 7(6), 528–530.

Eklund, A., Nichols, T. E., & Knutsson, H. (2016). Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates. PNAS, 113(28), 7900–7905.

Rorvig, M. (2019). Number theorist fears all published math is wrong. Vice News, September 26, 2019. https://www.vice.com/en_us/article/8xwm54/number-theorist-fears-all-published-math-is-wrong-actually

Crew, B. (2016). A bug in fMRI software could invalidate 15 years of brain research. Science Alert, July 6, 2016. https://www.sciencealert.com/a-bug-in-fmri-software-could-invalidate-decades-of-brain-research-scientists-discover

John G. Cottone Ph.D.

John G. Cottone, Ph.D., is a psychologist in private practice, a clinical assistant professor of psychiatry at the Renaissance School of Medicine at Stony Brook University, and the author of Who Are You?


7 Famous Psychology Experiments


Many famous experiments studying human behavior have impacted our fundamental understanding of psychology. Though some could not be repeated today due to breaches in ethical boundaries, that does not diminish the significance of those psychological studies. Some of these important findings include a greater awareness of depression and its symptoms, how people learn behaviors through the process of association and how individuals conform to a group.

Below, we take a look at seven famous psychological experiments that greatly influenced the field of psychology and our understanding of human behavior.

The Little Albert Experiment, 1920

A Johns Hopkins University professor, Dr. John B. Watson, and a graduate student wanted to test a learning process called classical conditioning. Classical conditioning involves learning involuntary or automatic behaviors by association, and Dr. Watson thought it formed the bedrock of human psychology.

A nine-month-old infant, dubbed “Albert B,” was volunteered for Dr. Watson and Rosalie Rayner’s experiment. Albert played with white furry objects, and at first, the infant displayed joy and affection. Over time, as he played with the objects, Dr. Watson would make a loud noise behind the child’s head to frighten him. After numerous trials, Albert was conditioned to be afraid when he saw white furry objects.

The study demonstrated that humans could be conditioned to enjoy or fear something, which many psychologists believe could explain why people have irrational fears and how those fears may develop early in life.

Stanford Prison Experiment, 1971

Stanford professor Philip Zimbardo wanted to learn how individuals conformed to societal roles. He wondered, for example, whether the tense relationship between prison guards and inmates in jails had more to do with the personalities of each or the environment.

During Zimbardo’s experiment, 24 male college students were assigned to be either prisoners or guards. The prisoners were held in a makeshift prison inside the basement of Stanford’s psychology department. They went through a standard booking process designed to take away their individuality and make them feel anonymous. Guards were given eight-hour shifts and tasked with treating the prisoners as guards would in a real prison.

Zimbardo found rather quickly that both the guards and prisoners fully adapted to their roles; in fact, he had to shut down the experiment after six days because it became too dangerous. Zimbardo even admitted he began thinking of himself as a prison superintendent rather than a psychologist. The study suggested that people will conform to the social roles they’re expected to play, especially overly stereotyped ones such as prison guards.

“We realized how ordinary people could be readily transformed from the good Dr. Jekyll to the evil Mr. Hyde,” Zimbardo wrote.

The Asch Conformity Study, 1951

Solomon Asch, a Polish-American social psychologist, was determined to see whether an individual would conform to a group’s decision, even if the individual knew it was incorrect. Conformity is defined by the American Psychological Association as the adjustment of a person’s opinions or thoughts so that they fall closer in line with those of other people or the normative standards of a social group or situation.

In his experiment, Asch selected 50 male college students to participate in a “vision test.” Individuals had to determine which line on a card was longer. However, the individuals at the center of the experiment did not know that the other people taking the test were actors following scripts, who at times selected the wrong answer on purpose. Asch found that, on average over 12 trials, nearly one-third of the naive participants conformed with the incorrect majority, and only 25 percent never conformed to the incorrect majority. In the control group, which featured only the participants and no actors, less than one percent of participants ever chose the wrong answer.

Asch’s experiment showed that people will conform to groups either to fit in (normative influence) or because they believe the group is better informed than they are (informational influence). This explains why some people change behaviors or beliefs when in a new group or social setting, even when doing so goes against their past behaviors or beliefs.

The Bobo Doll Experiment, 1961, 1963

Stanford University professor Albert Bandura wanted to put social learning theory into action. Social learning theory suggests that people can acquire new behaviors “through direct experience or by observing the behavior of others.” Using a Bobo doll, a blow-up toy in the shape of a life-size bowling pin, Bandura and his team tested whether children who witnessed acts of aggression would copy them.

Bandura and two colleagues selected 36 boys and 36 girls between the ages of 3 and 6 from the Stanford University nursery and split them into three groups of 24. One group watched adults behaving aggressively toward the Bobo doll. In some cases, the adult subjects hit the doll with a hammer or threw it in the air. Another group was shown an adult playing with the Bobo doll in a non-aggressive manner, and the last group was not shown a model at all, just the Bobo doll.

After each session, children were taken to a room with toys and studied to see how their play patterns changed. In a room with aggressive toys (a mallet, dart guns, and a Bobo doll) and non-aggressive toys (a tea set, crayons, and plastic farm animals), Bandura and his colleagues observed that children who watched the aggressive adults were more likely to imitate the aggressive responses.

Unexpectedly, Bandura found that female children acted more physically aggressively after watching a male model and more verbally aggressively after watching a female model. The results of the study highlight how children learn behaviors from observing others.

The Learned Helplessness Experiment, 1965

Martin Seligman wanted to research a different angle related to Dr. Watson’s study of classical conditioning. In studying conditioning with dogs, Seligman made an astute observation: the subjects, which had already been conditioned to expect a light electric shock when they heard a bell, would sometimes give up after another negative outcome rather than searching for the positive outcome.

Under normal circumstances, animals will always try to get away from negative outcomes. When Seligman ran his experiment on animals that hadn’t been previously conditioned, the animals attempted to find a positive outcome. By contrast, the dogs that had already been conditioned to expect a negative response assumed there would be another negative response waiting for them, even in a different situation.

The conditioned dogs’ behavior became known as learned helplessness, the idea that some subjects won’t try to get out of a negative situation because past experiences have forced them to believe they are helpless. The study’s findings shed light on depression and its symptoms in humans.


The Milgram Experiment, 1963

In the wake of the horrific atrocities carried out by Nazi Germany during World War II, Stanley Milgram wanted to test levels of obedience to authority. The Yale University professor wanted to study whether people would obey commands even when those commands conflicted with their conscience.

Participants in the condensed study, 40 males between the ages of 20 and 50, were split into learners and teachers. Though the assignment seemed random, actors were always chosen as the learners, and unsuspecting participants were always the teachers. A learner was strapped to a chair with electrodes in one room while the experimenter, another actor, and a teacher went into another.

The teacher and learner went over a list of word pairs that the learner was told to memorize. When the learner incorrectly paired a set of words together, the teacher would shock the learner. The teacher believed the shocks ranged from mild all the way to life-threatening. In reality, the learner, who intentionally made mistakes, was not being shocked.

As the voltage of the shocks increased and the teachers became aware of the believed pain caused by them, some refused to continue the experiment. After prodding by the experimenter, 65 percent resumed. From the study, Milgram devised the agency theory, which suggests that people allow others to direct their actions because they believe the authority figure is qualified and will accept responsibility for the outcomes. Milgram’s findings help explain how people can make decisions against their own conscience, such as when participating in a war or genocide.

The Halo Effect Experiment, 1977

University of Michigan professors Richard Nisbett and Timothy Wilson were interested in following up a study from 50 years earlier on a concept known as the halo effect . In the 1920s, American psychologist Edward Thorndike researched a phenomenon in the U.S. military that showed cognitive bias. This is an error in how we think that affects how we perceive people and make judgements and decisions based on those perceptions.

In 1977, Nisbett and Wilson tested the halo effect using 118 college students (62 males, 56 females). Students were divided into two groups and were asked to evaluate a male Belgian teacher who spoke English with a heavy accent. Participants were shown one of two videotaped interviews with the teacher on a television monitor. The first interview showed the teacher interacting cordially with students, and the second interview showed the teacher behaving inhospitably. The subjects were then asked to rate the teacher’s physical appearance, mannerisms, and accent on an eight-point scale from appealing to irritating.

Nisbett and Wilson found that on physical appearance alone, 70 percent of the subjects rated the teacher as appealing when he was being respectful and irritating when he was cold. When the teacher was rude, 80 percent of the subjects rated his accent as irritating, as compared to nearly 50 percent when he was being kind.

The updated study on the halo effect shows that cognitive bias isn’t exclusive to a military environment. Cognitive bias can get in the way of making the correct decision, whether it’s during a job interview or deciding whether to buy a product that’s been endorsed by a celebrity we admire.

How Experiments Have Impacted Psychology Today

Contemporary psychologists have built on the findings of these studies to better understand human behaviors, mental illnesses, and the link between the mind and body. For their contributions to psychology, Watson, Bandura, Nisbett, and Zimbardo were all awarded Gold Medals for Life Achievement from the American Psychological Foundation.


Though Some Were Unethical, These 4 Social Experiments Helped Explain Human Behavior

How have we learned about human behavior? Some studies caused a baby to fear animals, and other experiments helped us explore human nature.


From the CIA’s secret mind control program, MK Ultra, to the stuttering “Monster” study, American researchers have a long history of engaging in human experiments. The studies have helped us better understand ourselves and why we do certain things.

These four experiments did just this and helped us better understand human behavior. However, some of them would be considered unethical today due to either lack of informed consent or the mental and/or emotional damage they caused.

1. Cognitive Dissonance Experiment

After proposing the concept of cognitive dissonance, psychologist Leon Festinger created an experiment, sometimes known as the "boring experiment," to test his theory.

Participants were paid either $1 or $20 to engage in mundane tasks, including turning pegs on a board and moving spools on and off a tray. Despite the boring nature of the activities, they were asked to tell the next participant that it was interesting and fun.

The people who were paid $20 felt more justified lying to others because they were better compensated — and they experienced less cognitive dissonance . Participants who were paid $1 felt greater cognitive dissonance due to their inability to rationalize lying.

In an attempt to reconcile their dissonance, the $1 participants convinced themselves that the tasks were actually enjoyable.

2. The Little Albert Experiment  

In 1920, psychologist John B. Watson and graduate student (and future wife) Rosalie Rayner wanted to see if they could produce a response in humans using classical conditioning, the way Pavlov did with dogs.

They decided to expose a 9-month-old baby, whom they called Albert, to a white rat. At first, the baby displayed no fear and played with the rat. To startle Albert, Watson and Rayner would then make a loud noise by hitting a steel bar with a hammer. 

Each time they made the loud sound while Albert was playing with the rat, he became frightened, started crying, and crawled away from the rat. He had become classically conditioned to fear the rat because he associated it with something negative. He then developed stimulus generalization, where he feared other furry white objects — including a rabbit, white coat, and a Santa mask. 

3. Stanford Prison Experiment

In 1971, Stanford psychologist Philip Zimbardo designed a study to examine societal roles and situational power — through an experiment that recreated prison conditions. 

Zimbardo created a mock prison in a building on Stanford’s campus. He assigned study participants to be either guards or prisoners. Prisoners were given numbers instead of names, had a chain attached to one leg, and were dressed in smocks and stocking caps.

Those assigned to the role of a guard quickly conformed to their new position of power. They became hostile and aggressive toward the prisoners, subjecting them to psychological and verbal abuse — despite never having previously demonstrated such attitudes or behavior. The experiment was slated to last two weeks but needed to be ended after only six days. 

4. The Facial Expression Experiment

In 1924, psychology graduate student Carney Landis wanted to study how people’s emotions were reflected in their facial expressions, exploring whether certain emotions caused the same facial expressions in everyone.

Landis marked participants’ faces with black lines to study the movement of their facial muscles as they reacted. At first, he had them do innocuous tasks, such as listening to jazz music or smelling ammonia. 

As Landis grew frustrated that their responses weren’t strong enough, he had participants engage in increasingly shocking acts, such as sticking their hands into a bucket with live frogs in it. Eventually, Landis instructed participants to decapitate a live mouse. If they refused, he decapitated the mouse himself to elicit a strong reaction from them.



Allison Futterman is a Charlotte, N.C.-based writer whose science, history, and medical/health writing has appeared on a variety of platforms and in regional and national publications, including Charlotte, People, Our State, and Philanthropy magazines. She has a BA in communications and an MS in criminal justice.


8 Famous Social Experiments

A social experiment is a type of research performed in psychology to investigate how people respond in certain social situations.

Many of these experiments include confederates: people who pose as regular participants but are actually part of the research team. Such experiments are often used to gain insight into social psychology phenomena.

Do people really stop to appreciate the beauty of the world? How can society encourage people to engage in healthy behaviors? Is there anything that can be done to bring peace to rival groups?

Social psychologists have been tackling questions like these for decades, and some of the results of their experiments just might surprise you.

Robbers Cave Social Experiment

Why do conflicts tend to occur between different groups? According to psychologist Muzafer Sherif, intergroup conflicts tend to arise from competition for resources, stereotypes, and prejudices. In a controversial experiment, researchers placed 22 boys between the ages of 11 and 12 into two groups at a camp in Robbers Cave State Park in Oklahoma.

The boys were separated into two groups and spent the first week of the experiment bonding with their other group members. It wasn't until the second phase of the experiment that the children learned that there was another group, at which point the experimenters placed the two groups in direct competition with each other.

This led to considerable discord, as the boys clearly favored their own group members while they disparaged the members of the other group. In the final phase, the researchers staged tasks that required the two groups to work together. These shared tasks helped the boys get to know members of the other group and eventually led to a truce between the rivals.  

The 'Violinist in the Metro' Social Experiment

In 2007, acclaimed violinist Joshua Bell posed as a street musician at a busy Washington, D.C., subway station. Bell had just sold out a concert with an average ticket price of $100.

He is one of the most renowned musicians in the world and was playing on a handcrafted violin worth more than $3.5 million. Yet most people scurried on their way without stopping to listen to the music. When children would occasionally stop to listen, their parents would grab them and quickly usher them on their way.

The experiment raised some interesting questions about how we not only value beauty but whether we truly stop to appreciate the remarkable works of beauty that are around us.

The Piano Stairs Social Experiment

How can you get people to change their daily behavior and make healthier choices? One social experiment, sponsored by Volkswagen as part of its Fun Theory initiative, showed that making even the most mundane activities fun can inspire people to change their behavior.

In the experiment, a set of stairs was transformed into a giant working keyboard. Right next to the stairs was an escalator, so people were able to choose between taking the stairs or taking the escalator. The results revealed that 66% more people took the stairs instead of the escalator.  

Adding an element of fun can inspire people to change their behavior and choose the healthier alternative.

The Marshmallow Test Social Experiment

During the late 1960s and early 1970s, a psychologist named Walter Mischel led a series of experiments on delayed gratification. Mischel was interested in learning whether the ability to delay gratification might be a predictor of future life success.

In the experiments, children between the ages of 3 and 5 were placed in a room with a treat (often a marshmallow or cookie). Before leaving the room, the experimenter told each child that they would receive a second treat if the first treat was still on the table after 15 minutes.  

Follow-up studies conducted years later found that the children who were able to delay gratification did better in a variety of areas, including academically. Those who had been able to wait the 15 minutes for the second treat tended to have higher SAT scores and more academic success (according to parent surveys).  

The results suggest that this ability to wait for gratification is not only an essential skill for success but also something that forms early on and lasts throughout life.

The Smoky Room Social Experiment

If you saw someone in trouble, do you think you would try to help? Psychologists have found that the answer to this question is highly dependent on the number of other people present. We are much more likely to help when we are the only witness but much less likely to lend a hand when we are part of a crowd.

The phenomenon came to the public's attention after the gruesome murder of a young woman named Kitty Genovese. According to the classic tale, while multiple people may have witnessed her attack, no one called for help until it was much too late.

This behavior was identified as an example of the bystander effect , or the failure of people to take action when there are other people present. (In reality, several witnesses did immediately call 911, so the real Genovese case was not a perfect example of the bystander effect.)  

In one classic experiment, researchers had participants sit in a room to fill out questionnaires. Suddenly, the room began to fill with smoke. In some cases the participant was alone, in some there were three unsuspecting participants in the room, and in the final condition, there was one participant and two confederates.

In the situation involving the two confederates who were in on the experiment, these actors ignored the smoke and went on filling out their questionnaires. When the participants were alone, about three-quarters of the participants left the room calmly to report the smoke to the researchers.

In the condition with three real participants, only 38% reported the smoke. In the final condition, where the two confederates ignored the smoke, a mere 10% of participants left to report the smoke. The experiment is a great example of how much people rely on the responses of others to guide their actions.

When something is happening, but no one seems to be responding, people tend to take their cues from the group and assume that a response is not required.

Carlsberg Social Experiment

Have you ever felt like people have judged you unfairly based on your appearance? Or have you ever gotten the wrong first impression of someone based on how they looked? Unfortunately, people are all too quick to base their decisions on snap judgments made when they first meet people.

These impressions based on what's on the outside sometimes cause people to overlook the characteristics and qualities that lie on the inside. In one rather amusing social experiment, which actually started out as an advertisement, unsuspecting couples walked into a crowded movie theater.

All but two of the 150 seats were already full. The twist is that the 148 already-filled seats were taken by a bunch of rather rugged and scary-looking male bikers. What would you do in this situation? Would you take one of the available seats and enjoy the movie, or would you feel intimidated and leave?

In the informal experiment, not all of the couples ended up taking a seat, but those who eventually did were rewarded with cheers from the crowd and a round of free Carlsberg beers.

The exercise served as a great example of why people shouldn't always judge a book by its cover.

Halo Effect Social Experiment

In an experiment described in a paper published in 1920, psychologist Edward Thorndike asked commanding officers in the military to give ratings of various characteristics of their subordinates.

Thorndike was interested in learning how impressions of one quality, such as intelligence, bled over into perceptions of other personal characteristics, such as leadership, loyalty, and professional skill. Thorndike discovered that when people hold a good impression of one characteristic, those good feelings tend to affect perceptions of other qualities.

For example, thinking someone is attractive can create a halo effect that leads people also to believe that the person is kind, smart, and funny. The opposite effect is also true: negative feelings about one characteristic lead to negative impressions of an individual's other features.


False Consensus Social Experiment

During the late 1970s, researcher Lee Ross and his colleagues performed some eye-opening experiments. In one experiment, the researchers had participants choose a way to respond to an imagined conflict and then estimate how many people would also select the same resolution.

They found that no matter which option the respondents chose, they tended to believe that the vast majority of other people would also choose the same option. In another study, the experimenters asked students on campus to walk around carrying a large advertisement that read "Eat at Joe's."

The researchers then asked the students to estimate how many other people would agree to wear the advertisement. They found that those who agreed to carry the sign believed that the majority of people would also agree to carry the sign. Those who refused felt that the majority of people would refuse as well.

The results of these experiments demonstrate what is known in psychology as the false consensus effect.

No matter what our beliefs, opinions, or behaviors, we tend to believe that the majority of other people agree with us and act the same way we do.

A Word From Verywell

Social psychology is a rich and varied field that offers fascinating insights into how people behave in groups and how behavior is influenced by social pressures. Exploring some of these classic social psychology experiments can provide a glimpse at some of the fascinating research that has emerged from this field of study.

Frequently Asked Questions

An example of a social experiment might be one that investigates the halo effect, a phenomenon in which people make global evaluations of other people based on single traits. An experimenter might have participants interact with people who are either average looking or very beautiful, and then ask the respondents to rate the individuals on unrelated qualities such as intelligence, skill, and kindness. The purpose of this social experiment would be to see whether more attractive people are also perceived as smarter, more capable, and nicer.

The Milgram obedience experiment is one of the most famous social experiments ever performed. In the experiment, researchers instructed participants to deliver what they believed was a painful or even dangerous electrical shock to another person. In reality, the person pretending to be shocked was an actor, and no actual shocks were delivered. Milgram's results suggested that as many as 65% of participants would deliver a dangerous electrical shock because they were ordered to do so by an authority figure.

A social experiment is defined by its purpose and methods. Such experiments are designed to study human behavior in a social context. They often involve placing participants in a controlled situation in order to observe how they respond to certain situations or events.

A few ideas for simple social experiments might include:

  • Stand in a crowd and stare at a random spot on the ground to see if other people will stop to also look
  • Copy someone's body language and see how they respond
  • Stand next to someone in an elevator even if there is plenty of space to stand elsewhere
  • Smile at people in public and see how many smile back
  • Give random strangers a small prize and see how they respond

Sherif M. Superordinate goals in the reduction of intergroup conflict . American Journal of Sociology . 1958;63(4):349-356. doi:10.1086/222258

Peeters M, Megens C, van den Hoven E, Hummels C, Brombacher A. Social Stairs: Taking the Piano Staircase towards long-term behavioral change . In: Berkovsky S, Freyne J, eds. Lecture Notes in Computer Science . Vol 7822. Springer, Berlin, Heidelberg; 2013. doi:10.1007/978-3-642-37157-8_21

Mischel W, Ebbeson EB, Zeiss A. Cognitive and attentional mechanisms in delay of gratification . Journal of Personality and Social Psychology. 1972;21(2):204–218. doi:10.1037/h0032198

Mischel W, Shoda Y, Peake PK. Predicting adolescent cognitive and self-regulatory competencies from preschool delay of gratification: Identifying diagnostic conditions . Developmental Psychology. 1990;26(6):978-986. doi:10.1037/0012-1649.26.6.978

Benderly BL. Psychology's tall tales. gradPSYCH Magazine. 2012;9:20.

Latane B, Darley JM. Group inhibition of bystander intervention in emergencies . Journal of Personality and Social Psychology. 1968;10(3):215-221. doi:10.1037/h0026570

Thorndike EL. A constant error in psychological ratings . Journal of Applied Psychology. 1920;4(1):25-29. doi:10.1037/h0071663

Talamas SN, Mayor KI, Perrett DI. Blinded by beauty: Attractiveness bias and accurate perceptions of academic performance. PLoS One. 2016;11(2):e0148284. doi:10.1371/journal.pone.0148284

Ross L, Greene D, House P. The "false consensus effect": An egocentric bias in social perception and attribution processes. Journal of Experimental Social Psychology. 1977;13(3):279-301. doi:10.1016/0022-1031(77)90049-X

By Kendra Cherry, MSEd. Kendra Cherry is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."
