
Department of Experimental Psychology


New therapies developed by Oxford experts offer online support for anxiety and post-traumatic stress disorders

Four internet-based therapies developed by our experts are proving helpful for patients with social anxiety disorder and post-traumatic stress disorders and for children with anxiety disorders.

‘Coming of Age: How Adolescence Shapes Us’ - A Q&A with Lucy Foulkes

Many congratulations to Dr Lucy Foulkes, who has recently published her latest book ‘Coming of Age: How Adolescence Shapes Us’ (Penguin, July 2024).

Professor Asifa Majid honoured with a Fellowship of the British Academy

We are proud to announce that Professor Asifa Majid has been made a Fellow of the British Academy in 2024 in recognition of her distinguished contributions to cognitive sciences.

Our department was ranked first nationwide in the 2014 Research Excellence Framework.  We have over 20 groups performing ground-breaking basic and translational research.

As one of the leading psychology departments worldwide, our taught and research-based programmes provide exceptional educational opportunities for undergraduate and graduate students.

Our Community

Find out what we are doing to support the well-being of our staff and students, create a welcoming and inclusive environment, and celebrate the successes of our alumni.

From our Head of Department

At the Oxford Department of Experimental Psychology, our mission is to conduct world-leading experimental research to understand the psychological and neural mechanisms relevant to human behaviour. Wherever appropriate, we translate our findings into evidence-based public benefits in mental health and wellbeing, education, industry, and policy. We aim to provide our students with an inspiring and immersive scientific education, and to train the next generation of outstanding researchers with theoretical rigour and cutting-edge methodologies in an inclusive, diverse, and international environment. We are committed to helping individuals thrive in a community that is free from discrimination, harassment, and bullying; and in which people treat one another with respect and dignity in a mutually supportive manner.

Professor Matthew Rushworth

Recent News

  • Experimental psychology in government (31 July 2024)
  • Dr Lucy Foulkes (29 July 2024)
  • Professor Asifa Majid honoured with a Fellowship of the British Academy (18 July 2024)
  • Study outlines feasibility of multiple disease risk prediction model for primary care (31 May 2024). A team led by a researcher in RDM, which also includes researchers in NDCN, Population Health, and Experimental Psychology, has found that a single, integrated health check carried out in a primary care setting can accurately predict risks for diseases across multiple organs.
  • Sleep research: Can sleep improve our memory? (18 March 2022). For World Sleep Day 2022, Bernhard Staresina discusses research into the effects of sleep on memory consolidation.
  • Stress Awareness Week 2021 – Tips to better manage our stress (1 November 2021)
  • Graduating in a global pandemic: A personal account (14 September 2021)
  • Brain health: The importance of keeping mentally active for reducing the risk of Alzheimer’s disease (21 May 2021)

Encyclopedia Britannica


experimental psychology


experimental psychology, a method of studying psychological phenomena and processes. The experimental method in psychology attempts to account for the activities of animals (including humans) and the functional organization of mental processes by manipulating variables that may give rise to behaviour; it is primarily concerned with discovering laws that describe manipulable relationships. The term generally connotes all areas of psychology that use the experimental method.

These areas include the study of sensation and perception, learning and memory, motivation, and biological psychology. There are experimental branches in many other areas, however, including child psychology, clinical psychology, educational psychology, and social psychology. Usually the experimental psychologist deals with normal, intact organisms; in biological psychology, however, studies are often conducted with organisms modified by surgery, radiation, drug treatment, or long-standing deprivations of various kinds, or with organisms that naturally present organic abnormalities or emotional disorders. See also psychophysics.


The psychology of experimental psychologists: Overcoming cognitive constraints to improve research: The 47th Sir Frederic Bartlett Lecture

Like many other areas of science, experimental psychology is affected by a “replication crisis” that is causing concern in many fields of research. Approaches to tackling this crisis include better training in statistical methods, greater transparency and openness, and changes to the incentives created by funding agencies, journals, and institutions. Here, I argue that if proposed solutions are to be effective, we also need to take into account human cognitive constraints that can distort all stages of the research process, including design and execution of experiments, analysis of data, and writing up findings for publication. I focus specifically on cognitive schemata in perception and memory, confirmation bias, systematic misunderstanding of statistics, and asymmetry in moral judgements of errors of commission and omission. Finally, I consider methods that may help mitigate the effect of cognitive constraints: better training, including use of simulations to overcome statistical misunderstanding; specific programmes directed at inoculating against cognitive biases; adoption of Registered Reports to encourage more critical reflection in planning studies; and using methods such as triangulation and “pre mortem” evaluation of study design to foster a culture of dialogue and criticism.


Introduction

The past decade has been a bruising one for experimental psychology. The publication of a paper by Simmons, Nelson, and Simonsohn (2011) entitled “False-positive psychology” drew attention to problems with the way in which research was often conducted in our field, which meant that many results could not be trusted. Simmons et al. focused on “undisclosed flexibility in data collection and analysis,” which is now variously referred to as p-hacking, data dredging, noise mining, or asterisk hunting: exploring datasets with different selections of variables and different analyses to attain a p-value lower than .05 and, subsequently, reporting only the significant findings. Hard on the heels of their demonstration came a wealth of empirical evidence from the Open Science Collaboration (2015). This showed that less than half the results reported in reputable psychological journals could be replicated in a new experiment.

The points made by Simmons et al. (2011) were not new: indeed, they were anticipated in 1830 by Charles Babbage, who described “cooking” of data:

This is an art of various forms, the object of which is to give ordinary observations the appearance and character of those of the highest degree of accuracy. One of its numerous processes is to make multitudes of observations, and out of these to select only those which agree, or very nearly agree. If a hundred observations are made, the cook must be very unhappy if he cannot pick out fifteen or twenty which will do for serving up. (p. 178–179)

P-hacking refers to biased selection of data or analyses from within an experiment. Bias also affects which studies get published in the form of publication bias—the tendency for positive results to be overrepresented in the published literature. This is problematic because it gives an impression that findings are more consistent than is the case, which means that false theories can attain a state of “canonisation,” where they are widely accepted as true (Nissen, Magidson, Gross, & Bergstrom, 2016). Figure 1 illustrates this with a toy simulation of a set of studies testing a difference between means from two conditions. If we have results from a series of experiments, three of which found a statistically significant difference and three of which did not, this provides fairly strong evidence that the difference is real (panel a). However, if we add a further four experiments that were not reported because results were null, the evidence cumulates in the opposite direction. Thus, omission of null studies can drastically alter our impression of the overall support for a hypothesis.

Figure 1. The impact of publication bias demonstrated with plots of cumulative log odds in favour of true versus null effect over a series of experiments. The log odds for each experiment can be computed with knowledge of alpha (.05) and power (.8); 1 denotes an experiment with significant difference between means, and 0, a null result. The starting point is zero, indicating that we assume a 50:50 chance of a true effect. For each significant result, the log odds of it coming from a true effect versus a null effect is log(.8/.05) = 2.77. For a null result, the log odds is log(.2/.95) = −1.55. The selected set of studies in panel (a) concludes with a log odds greater than 3, indicating that the likelihood of a true effect is 20 times greater than a null effect. However, panel (b), which includes additional null results (labelled in grey), leads to the opposite conclusion.
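To make the calculation in the figure legend concrete, the cumulative log odds can be computed in a few lines of code. The following is a minimal sketch in Python; the particular ordering of significant and null results is illustrative, and only the counts match the example in the text.

```python
import math

ALPHA, POWER = 0.05, 0.80

def cumulative_log_odds(outcomes):
    """Running log odds of a true vs. null effect over a series of studies.
    outcomes: 1 = significant difference between means, 0 = null result."""
    log_odds = 0.0  # start at 50:50, i.e. log odds of zero
    trajectory = []
    for significant in outcomes:
        if significant:
            log_odds += math.log(POWER / ALPHA)              # log(.8/.05) ≈ 2.77
        else:
            log_odds += math.log((1 - POWER) / (1 - ALPHA))  # log(.2/.95) ≈ -1.55
        trajectory.append(round(log_odds, 2))
    return trajectory

# Panel (a): the six reported studies (three significant, three null)
print(cumulative_log_odds([1, 0, 1, 0, 1, 0]))
# Panel (b): the same six studies plus four unreported null results
print(cumulative_log_odds([1, 0, 1, 0, 1, 0, 0, 0, 0, 0]))
```

With these values, the six reported studies end with cumulative log odds of about 3.6, whereas adding the four unreported null results brings the total to about −2.6, reversing the conclusion.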

Since the paper by Simmons et al. (2011), there has been a dramatic increase in replication studies. As a result, a number of well-established phenomena in psychology have come into question. Often it is difficult to be certain whether the original reports were false positives, whether the replication was flawed, or whether the effect of interest is only evident under specific conditions—see, for example, Hobson and Bishop (2016) on mu suppression in response to observed actions; Sripada, Kessler, and Jonides (2016) on ego depletion; Lehtonen et al. (2018) on an advantage in cognitive control for bilinguals; O’Donnell et al. (2018) on the professor-priming effect; and Oostenbroek et al. (2016) on neonatal imitation. What is clear is that the size, robustness, and generalisability of many classic effects are lower than previously thought.

Selective reporting, through p-hacking and publication bias, is not the only blight on our science. A related problem is that many editors place emphasis on reporting results in a way that “tells a good story,” even if that means retrofitting our hypothesis to the data, i.e., HARKing or “hypothesising after the results are known” (Kerr, 1998). Oberauer and Lewandowsky (2019) drew parallels between HARKing and p-hacking: in HARKing, there is post hoc selection of hypotheses, rather than selection of results or an analytic method. They proposed that HARKing is most widely used in fields where theories are so underspecified that they can accommodate many hypotheses and where there is a lack of “disconfirmatory diagnosticity,” i.e., failure to support a prediction is uninformative.

A lack of statistical power is a further problem for psychology—one that has been recognised since 1969, when Jacob Cohen exhorted psychologists not to waste time and effort doing experiments that had too few observations to show an effect of interest. In other fields, notably clinical trials and genetics, after a period where non-replicable results proliferated, underpowered studies died out quite rapidly when journals adopted stringent criteria for publication (e.g., Johnston, Lahey, & Matthys, 2013), and funders began to require power analysis in grant proposals. Psychology, however, has been slow to catch up.

It is not just experimental psychology that has these problems—studies attempting to link psychological traits and disorders to genetic and/or neurobiological variables are, if anything, subject to greater challenges. A striking example comes from a meta-analysis of links between the serotonin transporter gene, 5-HTTLPR, and depression. This postulated association has attracted huge research interest over the past 20 years, and the meta-analysis included 450 studies. Contrary to expectation, it concluded that there was no evidence of association. In a blog post summarising findings, Alexander (2019) wrote,

. . . what bothers me isn’t just that people said 5-HTTLPR mattered and it didn’t. It’s that we built whole imaginary edifices, whole castles in the air on top of this idea of 5-HTTLPR mattering. We “figured out” how 5-HTTLPR exerted its effects, what parts of the brain it was active in, what sorts of things it interacted with, how its effects were enhanced or suppressed by the effects of other imaginary depression genes. This isn’t just an explorer coming back from the Orient and claiming there are unicorns there. It’s the explorer describing the life cycle of unicorns, what unicorns eat, all the different subspecies of unicorn, which cuts of unicorn meat are tastiest, and a blow-by-blow account of a wrestling match between unicorns and Bigfoot.

It is no exaggeration to say that our field is at a crossroads (Pashler & Wagenmakers, 2012), and the 5-HTTLPR story is just a warning sign that practices that lead to bad science are widespread. If we continue to take the well-trodden path, using traditional methods for cooking data and asterisk hunting, we are in danger of losing attention, respect, and funding.

Much has been written about how we might tackle the so-called “replication crisis.” There have been four lines of attack. First, there have been calls for greater openness and transparency (Nosek et al., 2015). Second, a case has been made for better training in methods (e.g., Rousselet, Pernet, & Wilcox, 2017). Third, it has been argued that we need to change the way research is conducted to incorporate pre-registration of research protocols, preferably in the format of Registered Reports, which are peer-reviewed prior to data collection (Chambers, 2019). Fourth, it is recognised that for too long, the incentive structure of research has prioritised innovative, groundbreaking results over methodological quality. Indeed, Smaldino and McElreath (2016) suggested that one can model the success of scientists in a field as an evolutionary process, where prestigious publications lead to survival, leaving those whose work is less exciting to wither away and leave science. The common thread to these efforts is that they locate the mechanisms of bad science at the systemic level, in ways in which cultures and institutions reinforce norms and distribute resources. The solutions are, therefore, aimed at correcting these shortcomings by creating systems that make good behaviour easier and more rewarding and make poor behaviour more costly.

My view, however, is that institutional shortcomings are only part of the story: to improve scientific research, we also need to understand the mechanisms that maintain bad practices in individual humans. Bad science is usually done because somebody mistook it for good science. Understanding why individual scientists mistake bad science for good, and helping them to resist these errors, is a necessary component of the movement to improve psychology. I will argue that we need to understand how cognitive constraints lead to faulty reasoning if we are to get science back on course and persuade those who set the incentives to reform. Fortunately, as psychologists, we are uniquely well positioned to tackle this issue.

Experimental psychology has a rich tradition of studying human reasoning and decision-making, documenting the flaws and foibles that lead us to selectively process some types of information, make judgements on the basis of incomplete evidence, and sometimes behave in ways that seem frankly irrational. This line of work has had significant application to economics, politics, business studies, and law, but, with some notable exceptions (e.g., Hossenfelder, 2018; Mahoney, 1976), it has seldom been considered when studying the behaviour of research scientists. In what follows, I consider how our knowledge of human cognition can make sense of problematic scientific practices, and I propose ways we might use this information to find solutions.

Cognitive constraints that affect how psychological science is done

Table 1 lists four characteristics of human cognition that I focus on: I refer to these as “constraints” because they limit how we process, understand, or remember information, but it is important to note that they include some biases that can be beneficial in many contexts. The first constraint is confirmation bias. As Hahn and Harris (2014) noted, a range of definitions of “confirmation bias” exist—here, I will define it as the tendency to seek out evidence that supports our position. A further set of constraints has to do with understanding of probability. A lack of an intuitive grasp of probability contributes to both neglect of statistical power in study design and p-hacking in data analysis. Third, there is an asymmetry in moral reasoning that can lead us to treat errors of omission as less culpable than errors of commission, even when their consequences are equally serious (Haidt & Baron, 1996). The final constraint featured in Bartlett’s (1932) work: reliance on cognitive schemata to fill in unstated information, leading to “reconstructive remembering,” which imbues memories with meaning while filtering out details that do not fit preconceptions.

Table 1. Different types of cognitive constraints.

  • Confirmation bias: tendency to seek out and remember evidence that supports a preferred viewpoint.
  • Misunderstanding of probability: (a) failure to understand how estimation scales with sample size; (b) failure to understand that probability depends on context.
  • Asymmetric moral reasoning: errors of omission judged less seriously than errors of commission.
  • Reliance on schemata: perceiving and/or remembering in line with pre-existing knowledge, leading to omission or distortion of irrelevant information.

In what follows, I illustrate how these constraints assume particular importance at different stages of the research process, as shown in Table 2.

Table 2. Cognitive constraints that operate at different stages of the research process.

  • Experimental design: confirmation bias (looking for evidence consistent with theory); statistical misunderstanding (power).
  • Data analysis: statistical misunderstanding (p-hacking); moral asymmetry (omission and “paltering” deemed acceptable).
  • Scientific reporting: confirmation bias in reviewing the literature; moral asymmetry (omission and “paltering” deemed acceptable); cognitive schemata (need for narrative, HARKing).

HARKing: hypothesising after the results are known.

Bias in experimental design

Confirmation bias and the failure to consider alternative explanations.

Scientific discovery involves several phases: the researcher needs to (a) assemble evidence, (b) look for meaningful patterns and regularities in the data, (c) formulate a hypothesis, and (d) test it empirically by gathering informative new data. Steps (a)–(c) may be designated as exploratory and step (d) as hypothesis testing or confirmatory (Wagenmakers, Wetzels, Borsboom, van der Maas, & Kievit, 2012). Importantly, the same experiment cannot be used to both formulate and confirm a hypothesis. In practice, however, the distinction between the two types of experiment is often blurred.

Our ability to see patterns in data is vital at the exploratory stage of research: indeed, seeing something that nobody else has observed is a pinnacle of scientific achievement. Nevertheless, new ideas are often slow to be accepted, precisely because they do not fit the views of the time. One such example is described by Zilles and Amunts (2010): Brodmann’s cytoarchitectonic map of the brain, described in 1909. This has stood the test of time and is still used over 100 years later, but for several decades, it was questioned by those who could not see the fine distinctions made by Brodmann. Indeed, criticisms of poor reproducibility and lack of objectivity were levelled against him.

Brodmann’s case illustrates that we need to be cautious about dismissing findings that depend on special expertise or unique insight of the observer. However, there are plenty of other instances in the history of science where invalid ideas persisted, especially if proposed by an influential or charismatic figure. Entire edifices of pseudoscience have endured because we are very bad at discarding theories that do not work; as Bartlett (1932) would predict, new information that is consistent with the theory will strengthen its representation in our minds, but inconsistent information will be ignored. Examples from the history of science include the rete mirabile, a mass of intertwined arteries that is found in sheep but wrongly included in anatomical drawings of humans for over 1,000 years because of the significance attributed to this structure by Galen (Bataille et al., 2007); the planet Vulcan, predicted by Newton’s laws and seen by many astronomers until its existence was disproved by Einstein’s discoveries (Levenson, 2015); and N-rays, non-existent rays seen by at least 40 people and analysed in 3,090 papers by 100 scientists between 1903 and 1906 (Nye, 1980).

Popper’s (1934/1959) goal was to find ways to distinguish science from pseudoscience, and his contribution to philosophy of science was to emphasise that we should be bold in developing ideas but ruthless in attempts to falsify them. In an early attempt to test scientists’ grasp of Popperian logic, Mahoney (1976) administered a classic task developed by Wason (1960) to 84 scientists (physicists, biologists, psychologists, and sociologists). In this deceptively simple task, people are shown four cards and told that each card has a number on one side and a patch of colour on the other side. The cards are placed to show number 3, number 8, red, and blue, respectively (see Figure 2). The task is to identify which cards need to be turned over to test the hypothesis that if an even number appears on one side, then the opposite side is red. The subject can pick any number of cards. The correct response is to name the two cards that could disconfirm the hypothesis—the number 8 and the blue card. Fewer than 10% of the scientists tested by Mahoney identified both critical cards, more often selecting the number 8 and the red card.

Figure 2. Wason’s (1960) task: The subject is told, “Each card has a number on one side and a patch of colour on the other. You are asked to test the hypothesis that—for these 4 cards—if an even number appears on one side, then the opposite side is red. Which card(s) would you turn over to test the hypothesis?”

Although this study was taken as evidence of unscientific reasoning by scientists, that conclusion has since been challenged by those who have criticised both Popperian logic, in general, and the Wason selection task, in particular, as providing an unrealistic test of human rationality. For a start, the Wason task uses a deterministic hypothesis that can be disproved by a single piece of evidence. This is not a realistic model of biological or behavioural sciences, where we seldom deal with deterministic phenomena. Consider the claim that smoking causes lung cancer. Most of us accept that this is so, even though we know there are people who smoke and who do not get lung cancer and people who get lung cancer but never smoked. When dealing with probabilistic phenomena, a Bayesian approach makes more sense, whereby we consider the accumulated evidence to determine the relative likelihood of one hypothesis over another (as illustrated in Figure 1). Theories are judged as more or less probable, rather than true or false. Oaksford and Chater (1994) showed that, from a Bayesian perspective, typical selections made on the Wason task would be rational in contexts where the antecedent and consequent of the hypothesis (an even number and red colour) were both rare. Subsequently, Perfors and Navarro (2009) concluded that in situations where rules are relevant only for a minority of entities, confirmation bias is an efficient strategy.

This kind of analysis has shifted the focus to discussions about how far, and under what circumstances, people are rational decision-makers. However, it misses a key point about scientific reasoning, which is that it involves an active process of deciding which evidence to gather, rather than merely a passive evaluation of existing evidence. It seems reasonable to conclude that, when presented with a particular set of evidence, people generally make decisions that are rational when evaluated against Bayesian standards. However, history suggests that we are less good at identifying which new evidence needs to be gathered to evaluate a theory. In particular, people appear to have a tendency to accept a hypothesis on the basis of “good enough” evidence, rather than actively seeking evidence for alternative explanations. Indeed, an early study by Doherty, Mynatt, Tweney, and Schiavo (1979) found that, when given an opportunity to select evidence to help decide which of two hypotheses was true (in a task where a fictitious pot had to be assigned as originating from one of the two islands that differed in characteristic features), people seemed unable to identify which information would be diagnostic and tended, instead, to select information that could neither confirm nor disconfirm their hypothesis.

Perhaps the strongest evidence for our poor ability to consider alternative explanations comes from the history of the development of clinical trials. Although James Lind is credited with doing the first trials for treatment of scurvy in 1747, it was only in 1948 that the randomised controlled trial became the gold standard for evaluating medical interventions (Vallier & Timmerman, 2008). The need for controls is not obvious, and people who are not trained in this methodology will often judge whether a treatment is effective on the basis of a comparison on an outcome measure between a pre-treatment baseline and a post-treatment evaluation. The logic is that if a group of patients given the treatment does not improve, the treatment did not work. If they do show meaningful gains, then it did work. And we can even embellish this comparison with a test of statistical significance. This reasoning can be seen as entirely rational, and this can explain why so many people are willing to accept that alternative medicine is effective.

The problem with this approach is that the pre–post intervention comparison allows important confounds to creep in. For instance, early years practitioners argue that we should identify language problems in toddlers so that we can intervene early. They find that if 18-month-old late talkers are given intervention, only a minority still have language problems at 2 years and, therefore, conclude the intervention was effective. However, if an untreated control group is studied over the same period, we find very similar rates of improvement (Wake et al., 2011)—presumably due to factors such as spontaneous resolution of problems or regression to the mean, which will lead to systematic bias in outcomes. Researchers need training to recognise causes of bias and to take steps to overcome them: thinking about possible alternative explanations of an observed phenomenon does not come naturally, especially when the preliminary evidence looks strong.
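The force of this confound is easy to demonstrate with a simulation of regression to the mean. The sketch below (in Python; the sample size, cut-off, and noise level are arbitrary illustrative values, not estimates from Wake et al., 2011) selects “late talkers” purely because they score poorly on a noisy baseline measure and then retests them with no intervention at all.

```python
import random

random.seed(1)
N = 10_000        # simulated toddlers
CUTOFF = -1.0     # "late talker" = observed score more than 1 SD below the mean

def observed(true_ability):
    """An observed score is stable ability plus independent occasion/measurement noise."""
    return true_ability + random.gauss(0, 1)

ability = [random.gauss(0, 1) for _ in range(N)]
late_talkers = [a for a in ability if observed(a) < CUTOFF]   # selected on a poor baseline score

follow_up = [observed(a) for a in late_talkers]               # retest, with no intervention
still_low = sum(s < CUTOFF for s in follow_up) / len(late_talkers)

print(f"{len(late_talkers)} children selected as late talkers at baseline")
print(f"proportion still below the cut-off at follow-up: {still_low:.2f}")
```

Because an extreme baseline score partly reflects noise, a substantial proportion of the selected children score above the cut-off at retest, mimicking a treatment effect where none exists.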

Intervention studies provide the clearest evidence of what I term “premature entrenchment” of a theory: some other examples are summarised in Table 3. Note that these examples do not involve poor replicability, quite the opposite. They are all cases where an effect, typically an association between variables, is reliably observed, and researchers then converge on accepting the most obvious causal explanation, without considering lines of evidence that might point to alternative possibilities.

Table 3. Premature entrenchment: examples where the most obvious explanation for an observed association is accepted for many years, without considering alternative explanations that could be tested with different evidence.

  • Observation: home literacy environment predicts reading outcomes in children. Favoured explanation: access to books at home affects children’s learning to read. Alternative explanation: parents and children share genetic risk for reading problems. Evidence for the alternative: children who are poor readers tend to have parents who are poor readers.
  • Observation: speech sounds (phonemes) do not have consistent auditory correlates but can be identified by knowledge of the articulatory configurations used to produce them. Favoured explanation: the motor theory of speech perception, i.e., we learn to recognise speech by mapping input to articulatory gestures. Alternative explanation: correlations between perception and production reflect co-occurrence rather than causation. Evidence for the alternative: children who are congenitally unable to speak can develop good speech perception, despite having no articulatory experience.
  • Observation: dyslexics have atypical brain responses to speech when assessed using fMRI. Favoured explanation: atypical brain organisation provides evidence that dyslexia is a “real disorder” with a neurobiological basis. Alternative explanation: atypical responses to speech in the brain are a consequence of being a poor reader. Evidence for the alternative: adults who had never been taught to read have atypical brain organisation for spoken language.

fMRI: functional magnetic resonance imaging.

Premature entrenchment may be regarded as evidence that humans adopt Bayesian reasoning: we form a prior belief about what is the case and then require considerably more evidence to overturn that belief than to support it. This would explain why, when presented with virtually identical studies that either provided support for or evidence against astrology, psychologists were more critical of the latter (Goodstein & Brazis, 1970). The authors of that study expressed concern about the “double standard” shown by biased psychologists who made unusually harsh demands of research in borderline areas, but from a Bayesian perspective, it is reasonable to use prior knowledge so that extraordinary claims require extraordinary evidence. Bayesian reasoning is useful in many situations: it allows us to act decisively on the basis of our long-term experience, rather than being swayed by each new incoming piece of data. However, it can be disastrous if we converge on a solution too readily on the basis of incomplete or inaccurate information. This will be exacerbated by publication bias, which distorts the evidential landscape.

For many years, the only methods available to counteract the tendency for premature entrenchment were exhortations to be self-critical (e.g., Feynman, 1974) and peer review. The problem with peer review is that it typically comes too late to be useful, after research is completed. In the final section of this article, I will consider some alternative approaches that bring in external appraisal of experimental designs at an earlier stage in the research process.

Misunderstanding of probability leading to underpowered studies

Some 17 years after Cohen’s seminal work on statistical power, Newcombe (1987) wrote,

Small studies continue to be carried out with little more than a blind hope of showing the desired effect. Nevertheless, papers based on such work are submitted for publication, especially if the results turn out to be statistically significant. (p. 657)

In clinical medicine, things have changed, and the importance of adequate statistical power is widely recognised among those conducting clinical trials. But in psychology, the “blind hope” has persisted, and we have to ask ourselves why this is.

My evidence here is anecdotal, but the impression is that many psychologists simply do not believe advice about statistical power, perhaps because there are so many underpowered studies published in the literature. When a statistician is consulted about sample size for a study, he or she will ask the researcher to estimate the anticipated effect size. This usually leads to a sample size estimate that is far higher than the researcher anticipated or finds feasible, leading to a series of responses not unlike the first four of the five stages of grief: denial, anger, bargaining, and depression. The final stage, acceptance, may, however, not be reached.

Of course, there are situations where small sample sizes are perfectly adequate: the key issue is how large the effect of interest is in relation to the variance. In some fields, such as psychophysics, you may not even need statistics—the famous “interocular trauma” test (referring to a result so obvious and clear-cut that it hits you between the eyes) may suffice. Indeed, in such cases, recruitment of a large sample would just be wasteful.

There are, however, numerous instances in psychology where people have habitually used sample sizes that are too small to reliably detect an effect of interest: see, for instance, the analysis by Poldrack et al. (2017) of well-known effects in functional magnetic resonance imaging (fMRI) or Oakes (2017) on looking-time experiments in infants. Quite often, a line of research is started when a large effect is seen in a small sample, but over time, it becomes clear that this is a case of “winner’s curse,” a false positive that is published precisely because it looks impressive but then fails to replicate when much larger sample sizes are used. There are some recent examples from studies looking at neurobiological or genetic correlates of individual differences, where large-scale studies have failed to support previously published associations that had appeared to be solid (e.g., De Kovel & Francks, 2019, on genetics of handedness; Traut et al., 2018, on cerebellar volume in autism; Uddén et al., 2019, on genetic correlates of fMRI language-based activation).

A clue to the persistence of underpowered psychology studies comes from early work by Tversky and Kahneman (1971, 1974). They studied a phenomenon that they termed “belief in the law of small numbers,” an exaggerated confidence in the validity of conclusions based on small samples, and showed that even those with science training tended to have strong intuitions about random sampling that were simply wrong. They illustrated this with the following problem:

A certain town is served by two hospitals. In the larger hospital about 45 babies are born each day, and in the smaller hospital about 15 babies are born each day. As you know, about 50% of all babies are boys. However, the exact percentage varies from day to day. Sometimes it may be higher than 50%, sometimes lower. For a period of 1 year, each hospital recorded the days on which more than 60% of the babies born were boys. Which hospital do you think recorded more such days?

1. The large hospital
2. The small hospital
3. About the same (that is, within 5% of each other)

Most people selected Option 3, whereas, as illustrated in Figure 3, Option 2 is the correct answer—with only 15 births per day, the day-to-day variation in the proportion of boys will be much higher than with 45 births per day, and hence, more days will have more than 60% boys. One reason why our intuitions deceive us is that the sample size does not affect the average percentage of male births in the long run: this will be 50%, regardless of the hospital size. But sample size has a dramatic impact on the variability in the proportion of male births from day to day. More formally, if you have a big and small sample drawn from the same population, the expected estimate of the mean will be the same, but the standard error of that estimate will be greater for the small sample.

Figure 3. Simulated data showing proportions of males born in a small hospital with 15 births per day versus a large hospital with 45 births per day. The small hospital has more days where more than 60% of births are boys (points above the red line).
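The hospital example is easily verified by simulation. Here is a minimal sketch in Python, using 365 days, a fair 50:50 chance of a boy at each birth, and the 60% threshold from the problem.

```python
import random

random.seed(42)
DAYS = 365

def days_above_60_percent(births_per_day):
    """Count days on which more than 60% of births are boys (each birth is a fair coin flip)."""
    count = 0
    for _ in range(DAYS):
        boys = sum(random.random() < 0.5 for _ in range(births_per_day))
        if boys / births_per_day > 0.60:
            count += 1
    return count

print("small hospital (15 births/day):", days_above_60_percent(15), "days")
print("large hospital (45 births/day):", days_above_60_percent(45), "days")
```

Typical runs give roughly twice as many such days for the small hospital as for the large one, because day-to-day sampling variability is greater when each day’s sample is smaller.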

Statistical power depends on the effect size, which, for a simple comparison of two means, can be computed as the difference in means divided by the pooled standard deviation. It follows that power is crucially dependent on the proportion of variance in observations that is associated with an effect of interest, relative to background noise. Where variance is high, it is much harder to detect the effect, and hence, small samples are often underpowered. Increasing the sample size is not the only way to improve power: other options include improving the precision of measurement, using more effective manipulations, or adopting statistical approaches to control noise (Lazic, 2018). But in many situations, increasing the sample size is the preferred approach to enhance statistical power to detect an effect.
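As a worked illustration of the relation between effect size, sample size, and power, the sketch below uses a normal approximation to the two-tailed, two-sample t test; the “medium” effect of d = 0.5 is my example, not a value from the article.

```python
import math
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-tailed, two-sample t test (normal approximation).
    d is Cohen's d: the difference in means divided by the pooled standard deviation."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    noncentrality = d * math.sqrt(n_per_group / 2)
    return 1 - NormalDist().cdf(z_crit - noncentrality)

for n in (20, 64):
    print(f"d = 0.5, n = {n} per group: power ≈ {approx_power(0.5, n):.2f}")
```

With d = 0.5, 20 participants per group gives power of only about .35, whereas roughly 64 per group are needed to reach the conventional .80.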

Bias in data analysis: p-hacking

P-hacking can take various forms, but they all involve a process of selective analysis. Suppose some researchers hypothesise that there is an association between executive function and implicit learning in a serial reaction time task, and they test this in a study using four measures of executive function. Even if there is only one established way of scoring each task, they have four correlations; this means that the probability that none of the correlations is significant at the .05 level is .95^4—i.e., .815—and conversely, the probability that at least one is significant is .185. This probability can be massaged to even higher levels if the experimenters look at the data and then select an analytic approach that maximises the association: maybe by dropping outliers, by creating a new scoring method, combining measures in composites, and so on. Alternatively, the experimenters may notice that the strength of the correlation varies with the age or sex of participants and so subdivide the sample to coax at least a subset of data into significance. The key thing about p-hacking is that at the end of the process, the researchers selectively report the result that “worked,” with the implication that the p-value can be interpreted at face value. But it cannot: probability is meaningless if not defined in terms of a particular analytic context. P-hacking appears to be common in psychology (John, Loewenstein, & Prelec, 2012). I argue here that this is because it arises from a conjunction of two cognitive constraints: failure to understand probability, coupled with a view that omission of information when reporting results is not a serious misdemeanour.
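The familywise figure quoted here follows from the complement rule, assuming the four tests are independent; a two-line check:

```python
# Probability of at least one "significant" correlation among four independent tests
# when every null hypothesis is true (alpha = .05).
alpha, k = 0.05, 4
p_none = (1 - alpha) ** k                       # .95**4 ≈ .815
print(round(p_none, 3), round(1 - p_none, 3))   # 0.815 0.185
```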

Failure to understand probability

In an influential career guide, published by the American Psychological Association, Bem (2004) explicitly recommended going against the “conventional view” of the research process, as this might lead us to miss exciting new findings. Instead, readers were encouraged to

become intimately familiar with . . . the data. Examine them from every angle. Analyze the sexes separately. Make up new composite indexes. If a datum suggests a new hypothesis, try to find additional evidence for it elsewhere in the data. If you see dim traces of interesting patterns, try to reorganize the data to bring them into bolder relief. If there are participants you don’t like, or trials, observers, or interviewers who gave you anomalous results, drop them (temporarily). Go on a fishing expedition for something—anything—interesting. (p. 2)

For those who were concerned this might be inappropriate, Bem offered reassurance. Everything is fine because what you are doing is exploring your data. Indeed, he implied that anyone who follows the “conventional view” would be destined to do boring research that nobody will want to publish.

Of course, Bem (2004) was correct to say that we need exploratory research. The problem comes when exploratory research is repackaged as if it were hypothesis testing, with the hypothesis invented after observing the data (HARKing), and the paper embellished with p-values that are bound to be misleading because they were p-hacked from numerous possible values, rather than derived from testing an a priori hypothesis. If results from exploratory studies were routinely replicated, prior to publication, we would not have a problem, but they are not. So why did the American Psychological Association think it appropriate to publish Bem’s views as advice to young researchers? We can find some clues in the book overview, which explains that there is a distinction between the “formal” rules that students are taught and the “implicit” rules that are applied in everyday life, concluding that “This book provides invaluable guidance that will help new academics plan, play, and ultimately win the academic career game.” Note that the stated goal is not to do excellent research: it is to have “a lasting and vibrant career.” It seems, then, that there is recognition here that if you do things in the “conventional” way, your career will suffer. It is clear from Bem’s framing of his argument that he was aware that his advice was not “conventional,” but he did not think it was unethical—indeed, he implied it would be unfair on young researchers to do things conventionally as that will prevent them making exciting discoveries that will enable them to get published and rise up the academic hierarchy. While it is tempting to lament the corruption of a system that treats an academic career as a game, it is more important to consider why so many people genuinely believe that p-hacking is a valid, and indeed creative, approach to doing research.

The use of null-hypothesis significance testing has attracted a lot of criticism, with repeated suggestions over the years that p-values be banned. I favour the more nuanced view expressed by Lakens (2019), who suggests that p-values have a place in science, provided they are correctly understood and used to address specific questions. There is no doubt, however, that many people do misunderstand the p-value. There are many varieties of misunderstanding, but perhaps the most common is to interpret the p-value as a measure of strength of evidence that can be attached to a given result, regardless of the context. It is easy to see how this misunderstanding arises: if we hold the sample size constant, then for a single comparison, there will be a linear relationship between the p-value and the effect size. However, whereas an effect size remains the same, regardless of the analytic context, a p-value is crucially context-dependent.

Suppose in the fictitious study of executive function described above, the researchers have 20 participants and four measures of executive function (A–D) that correlate with implicit learning with r values of .21, .47, .07, and −.01. The statistics package tells us that the corresponding two-tailed p-values are .374, .037, .769, and .966. A naive researcher may rejoice at having achieved significance with the second correlation. However, as noted above, the probability that at least one correlation of the four will have an associated p-value of less than .05 is 18%, not 5%. If we want to identify correlations that are unlikely under the null hypothesis, then we need to correct the alpha level (e.g., by doing a Bonferroni correction to adjust by the number of tests, i.e., .05/4 = .0125). At this point, the researchers see their significant result snatched from their grasp. This creates a strong temptation to just drop the three non-significant tests and not report them. Alternatively, one sometimes sees papers that report the original p-value but then state that it “did not survive” Bonferroni correction, but they, nevertheless, exhume it and interpret the uncorrected value. Researchers acting this way may not think that they are doing anything inappropriate, other than going against advice of pedantic statisticians, especially given Bem’s (2004) advice to follow the “implicit” rather than “formal” rules of research. However, this is simply wrong: as illustrated above, a p-value can only be interpreted in relation to the context in which it is computed.
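A sketch of the Bonferroni logic for this hypothetical example (the p-values are those given in the text; the adjusted threshold is simply .05 divided by the number of tests):

```python
# Bonferroni adjustment for the four hypothetical correlations described in the text.
p_values = [0.374, 0.037, 0.769, 0.966]
adjusted_alpha = 0.05 / len(p_values)            # .05 / 4 = .0125
for p in p_values:
    verdict = "significant" if p < adjusted_alpha else "not significant"
    print(f"p = {p:.3f}: {verdict} at the corrected threshold of {adjusted_alpha}")
```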

One way of explaining the notion of p-hacking is to use the old-fashioned method of games of chance. I find this scenario helpful: we have a magician who claims he can use supernatural powers to deal a poker hand of “three of a kind” from an unbiased deck of cards. This type of hand will occur in around 1 of 50 draws from an unbiased deck. He points you to a man who, to his amazement, finds that his hand contains three of a kind. However, you then discover he actually tried his stunt with 50 people, and this man was the only one who got three of a kind. You are rightly disgruntled. This is analogous to p-hacking. The three-of-a-kind hand is real enough, but its unusualness, and hence its value as evidence of the supernatural, depends on the context of how many tests were done. The probability that needs to be computed here is not the probability of one specific result but rather the probability that this result would come up at least once in 50 trials.
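For completeness, the card probabilities can be checked directly; this is a sketch, where the “1 in 50” figure in the text is the rounded value of about .021.

```python
from math import comb

# Probability that a 5-card poker hand is exactly three of a kind,
# and the chance of seeing it at least once across 50 attempts.
three_of_a_kind = 13 * comb(4, 3) * comb(12, 2) * 4 * 4 / comb(52, 5)
print(round(three_of_a_kind, 4))                  # ≈ 0.0211, about 1 hand in 47
print(round(1 - (1 - three_of_a_kind) ** 50, 2))  # ≈ 0.66
```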

Asymmetry of sins of omission and commission

According to Greenwald (1975), “[I]t is a truly gross ethical violation for a researcher to suppress reporting of difficult-to-explain or embarrassing data to present a neat and attractive package to a journal editor” (p. 19).

However, this view is not universal.

Greenwald’s focus was on publication bias, i.e., failure to report an entire study, but the point he made about “prejudice” against null results also applies to cases of p-hacking where only “significant” results are reported, whereas others go unmentioned. It is easy to see why scientists might play down the inappropriateness of p-hacking, when it is so important to generate “significant” findings in a world with a strong prejudice against null results. But I suspect another reason why people tend to underrate the seriousness of p-hacking is because it involves an error of omission (failing to report the full context of a p-value), rather than an error of commission (making up data).

In studies of morality judgement, errors of omission are generally regarded as less culpable than errors of commission (see, e.g., Haidt & Baron, 1996). Furthermore, p-hacking may be seen to involve a particularly subtle kind of dishonesty because the statistics and their associated p-values are provided by the output of statistics software. They are mathematically correct when testing a specific, prespecified hypothesis: the problem is that, without the appropriate context, they imply stronger evidence than is justified. This is akin to what Rogers, Zeckhauser, Gino, Norton, and Schweitzer (2017) have termed “paltering,” i.e., the use of truthful statements to mislead, a topic they studied in the context of negotiations. An example was given of a person trying to sell a car that had twice needed a mechanic to fix it. Suppose the potential purchaser directly asks “Has the car ever had problems?” An error of commission is to deny the problems, but a paltering answer would be “This car drives very smoothly and is very responsive. Just last week it started up with no problems when the temperature was −5 degrees Fahrenheit.” Rogers et al. showed that negotiators were more willing to palter than to lie, although potential purchasers regarded paltering as only marginally less immoral than lying.

Regardless of the habitual behaviour of researchers, the general public does not find p-hacking acceptable. Pickett and Roche (2018) did an M-Turk experiment in which a community sample was asked to judge the morality of various scenarios, including this one:

A medical researcher is writing an article testing a new drug for high blood pressure. When she analyzes the data with either method A or B, the drug has zero effect on blood pressure, but when she uses method C, the drug seems to reduce blood pressure. She only reports the results of method C, which are the results that she wants to see.

Seventy-one percent of respondents thought this behaviour was immoral, 73% thought the researcher should receive a funding ban, and 63% thought the researcher should be fired.

Nevertheless, although selective reporting was generally deemed immoral, data fabrication was judged more harshly, confirming that in this context, as in those studied by Haidt and Baron (1996) , sins of commission are taken more seriously than errors of omission.

If we look at the consequences of a specific act of p-hacking, it can potentially be more serious than an act of data fabrication: this is most obvious in medical contexts, where suppression of trial results, either by omitting findings from within a study or by failing to publish studies with null results, can provide a badly distorted basis for clinical decision-making. In their simulations of evidence cumulation, Nissen et al. (2016) showed how p-hacking could compound the impact of publication bias and accelerate the premature “canonization” of theories; the alpha level that researchers assume applies to experimental results is distorted by p-hacking, and the expected rate of false positives is actually much higher. Furthermore, p-hacking is virtually undetectable because the data that are presented are real, but the necessary context for their interpretation is missing. This makes it harder to correct the scientific record.

Bias in writing up a study

Most writing on the “replication crisis” focuses on aspects of experimental design and observations, data analysis, and scientific reporting. The resumé of literature that is found in the introduction to empirical papers, as well as in literature review articles, is given less scrutiny. I will make the case that biased literature reviews are universal and have a major role in sustaining poor reproducibility because they lead to entrenchment of false theories, which are then used as the basis for further research.

It is common to see biased literature reviews that put a disproportionate focus on findings that are consistent with the author’s position. Researchers who know an area well may be aware of this, especially if their own work is omitted, but in general, cherry-picking of evidence is hard to detect. I will use a specific paper published in 2013 to illustrate my point, but I will not name the authors, as it would be invidious to single them out when the kinds of bias in their literature review are ubiquitous. In their paper, my attention was drawn to the following statement in the introduction:

Regardless of etiology, cerebellar neuropathology commonly occurs in autistic individuals. Cerebellar hypoplasia and reduced cerebellar Purkinje cell numbers are the most consistent neuropathologies linked to autism. … MRI studies report that autistic children have smaller cerebellar vermal volume in comparison to typically developing children.

I was surprised to read this because a few years ago, I had attended a meeting on neuroanatomical studies of autism and had come away with the impression that there were few consistent findings. I did a quick search for an up-to-date review, which turned up a meta-analysis (Traut et al., 2018) that included 16 MRI studies published between 1997 and 2010, five of which reported larger cerebellar size in autism and one of which found smaller cerebellar size. In the article I was reading, one paper had been cited to support the MRI statement, but it referred to a study where the absolute size of the vermis did not differ from typically developing children but was relatively small in the autistic participants, after the overall (larger) size of the cerebellum had been controlled for.

Other papers cited to support the claims of cerebellar neuropathology included a couple of early post mortem neuroanatomical studies, as well as two reviews. The first of these (DiCicco-Bloom et al., 2006) summarised presentations from a conference and supported the claims made by the authors. The other one, however (Palmen, van Engeland, Hof, & Schmitz, 2004), expressed more uncertainty and noted a lack of correspondence between early neuroanatomical studies and subsequent MRI findings, concluding,

Although some consistent results emerge, the majority of the neuropathological data remain equivocal. This may be due to lack of statistical power, resulting from small sample sizes and from the heterogeneity of the disorder itself, to the inability to control for potential confounding variables such as gender, mental retardation, epilepsy and medication status, and, importantly, to the lack of consistent design in histopathological quantitative studies of autism published to date.

In sum, a confident statement “cerebellar neuropathology commonly occurs in autistic individuals,” accompanied by a set of references, converged to give the impression that there is consensus that the cerebellum is involved in autism. However, when we drill down, we find that the evidence is uncertain, with discrepancies between neuropathological studies and MRI and methodological concerns about the former. Meanwhile, this study forms part of a large body of research in which genetically modified mice with cerebellar dysfunction are used as an animal model of autism. My impression is that few of the researchers using these mouse models appreciate that the claim of cerebellar abnormality in autism is controversial among those working with humans because each paper builds on the prior literature. There is entrenchment of error, for two reasons. First, many researchers will take at face value the summary of previous work in a peer-reviewed paper, without going back to original cited sources. Second, even if a researcher is careful and scholarly and does read the cited work, they are unlikely to find relevant studies that were not included in the literature review.

It is easy to take an example like this and bemoan the lack of rigour in scientific writing, but this is to discount cognitive biases that make it inevitable that, unless we adopt specific safeguards against this, cherry-picking of evidence will be the norm. Three biases lead us in this direction: confirmation bias, moral asymmetry, and reliance on schemata.

Confirmation bias: cherry-picking prior literature

A personal example may serve to illustrate the way confirmation bias can operate subconsciously. I am interested in genetic effects on children's language problems, and I was in the habit of citing three relevant twin studies when I gave talks on this topic. All of these obtained similar results, namely that there was a strong genetic component to developmental language disorders, as evidenced by much higher concordance for disorder in pairs of monozygotic versus dizygotic twins. In 2005, however, Hayiou-Thomas, Oliver, and Plomin published a twin study with very different findings: low twin/co-twin concordance, regardless of zygosity. It was only when I came to write a review of this area and checked the literature that I realised I had failed to mention the 2005 study in talks for a year or two, even though I had collaborated with the authors and was well aware of the findings. I had formed a clear view on the heritability of language disorders, and so I had difficulty remembering results that did not agree. Subsequently, I realised we should try to understand why this study obtained different results and found a plausible explanation (Bishop & Hayiou-Thomas, 2008). But I only went back for a further critical look at the study because I needed to make sense of the conflicting results. It is inevitable that we behave this way as we try to find generalisable results from a body of work, but it creates an asymmetry of attention and focus between work that we readily accept, because it fits, and work that is either forgotten or looked at more critically, because it does not.

A particularly rich analysis of citation bias comes from a case study by Greenberg (2009), who took as his starting point papers concerned with claims that a protein, β amyloid, was involved in causing a specific form of muscle disease. Greenberg classified papers according to whether they were positive, negative, or neutral about this claim and carried out a network analysis to identify influential papers (those with many citations). He found that papers that were critical of the claim received far fewer citations than those that supported it, and this was not explained by lower quality. Animal model studies were almost exclusively justified by selective citation of positive studies. Consistent with the idea of "reconstructive remembering," he also found instances where cited content was distorted, as well as cases where influential review papers amplified citation bias by focusing attention only on positive work. The net result was an information (perhaps better termed a disinformation) cascade in which critical data never gain recognition. In effect, when agents who reason in a Bayesian fashion are presented with distorted information, a positive feedback loop is created that leads to increasing bias in the prior. Viewed this way, we can start to see how omission of relevant citations is not a minor peccadillo but a serious contributor to entrenchment of error. Further evidence of the cumulative impact of citation bias is shown in Figure 4, which uses studies of intervention for depression. Because studies in this area are registered, it is possible to track the fate of unpublished as well as published studies. The researchers showed that studies with null results are far less likely to be published than those with positive findings, but even when the former are published, there is a bias against citing them.

Figure 4. The cumulative impact of reporting and citation biases on the evidence base for antidepressants. Panel (a) displays the initial, complete cohort of trials recorded in a registry, while (b) through (e) show the cumulative effect of biases. Each circle indicates a trial, and the colour indicates whether results were positive, negative, or reported so as to give a misleadingly positive impression (spin). Circles connected by a grey line indicate trials from the same publication. The progression from (a) to (b) shows that nearly all the positive trials, but only half of those with null results, were published, and reporting of null studies showed (c) bias or (d) spin in what was reported. In (e), the size of the circle indicates the (relative) number of citations received by that category of studies.

Source: Reprinted with permission from De Vries et al. (2018).

While describing such cases of citation bias, it is worth pausing to consider one of the best-known examples of distorted thinking: experimenter bias. This is similar to confirmation bias, but rather than involving selective attention to specific aspects of a situation that fit with our preconceptions, it has a more active character, whereby the experimenter can unwittingly influence the outcome of a study. The best-known research on this topic was the original Rosenthal and Fode (1963) study, where students were informed that the rats they were studying were "maze-bright" or "maze-dull," when in fact they did not differ. Nevertheless, the "maze-bright" group learned better, suggesting that the experimenter would try harder to train an animal thought to have potential. A related study by Rosenthal and Jacobson (1963) claimed that if teachers were told that a test had revealed that specific pupils were "ready to bloom," those pupils would do better on an IQ test administered at the end of the year, even though the children so designated had been selected at random.

Both these studies are widely cited. It is less well known that work on experimenter bias was subjected to a scathing critique by Barber and Silver (1968) , entitled “Fact, fiction and the experimenter bias effect,” in which it was noted that work in this area suffered from poor methodological quality, in particular p -hacking. Barber and Silver did not deny that experimenter bias could affect results, but they concluded that these effects were far less common and smaller in magnitude than those implied by Rosenthal’s early work. Subsequently, Barber (1976) developed this critique further in his book Pitfalls in Human Research. Yet Rosenthal’s work is more highly cited and better remembered than that of Barber.

Rosenthal’s work provides a cautionary tale: although confirmation bias helps explain distorted patterns of citation, the evidence for maladaptive cognitive biases has been exaggerated. Furthermore, studies on confirmation bias often use artificial experiments, divorced from real life, and the criteria for deciding that reasoning is erroneous are often poorly justified ( Hahn & Harris, 2014 ). In future, it would be worthwhile doing more naturalistic explorations of people’s memory for studies that do and do not support a position when summarising scientific literature.

On a related point, in using confirmation bias as an explanation for persistence of weak theories, there is a danger that I am falling into exactly the trap that I am describing. For instance, I was delighted to find Greenberg’s (2009) paper, as it chimed very well with my experiences when reading papers about cerebellar deficits in autism. But would I have described and cited it here if it had shown no difference between citations for papers that did and did not support the β amyloid claim? Almost certainly not. Am I going to read all literature on citation bias to find out how common it is? That strategy would soon become impossible if I tried to do it for every idea I touch upon in this article.

Moral asymmetry between errors of omission and commission

The second bias that fortifies the distortions in a literature review is the asymmetry of moral judgement that I referred to when discussing p -hacking. To my knowledge, paltering has not been studied in the context of literature reviews, but my impression is that selective presentation of results that fit, while failing to mention important contextual factors (e.g., the vermis in those with autism is smaller but only when you have covaried for the total cerebellar size), is common. How far this is deliberate or due to reconstructive remembering, however, is impossible to establish.

It would also be of interest to conduct studies on people’s attitudes to the acceptability of cherry-picking of literature versus paltering (misleadingly selective reporting) or invention of a study. I would anticipate that most would regard cherry-picking as fairly innocuous, for several reasons: first, it could be an unintended omission; second, the consequences of omitting material from a review may be seen as less severe than introducing misinformation; and third, selective citation of papers that fit a narrative can have a positive benefit in terms of readability. There are also pragmatic concerns: some journals limit the word count for an introduction or reference list so that full citation of all relevant work is not possible and, finally, sanctioning people for harmful omissions would create apparently unlimited obligations ( Haidt & Baron, 1996 ). Quite simply, there is far too much literature for even the most diligent scholar to read.

Nevertheless, consequences of omission can be severe. The above examples of research on the serotonin transporter gene in depression, or cerebellar abnormality in autism, emphasise how failure to cite earlier null results can lead to a misplaced sense of confidence in a phenomenon, which is wasteful in time and money when others attempt to build on it. And the more we encounter a claim, the more likely it is to be judged as true, regardless of actual accuracy (see Pennycook, Cannon, & Rand, 2018 , for a topical example). As Ingelfinger (1976) put it, “faulty or inappropriate references . . . like weeds, tend to reproduce themselves and so permit even the weakest of allegations to acquire, with repeated citation, the guise of factuality” (p. 1076).

Reliance on schemata

Our brains cannot conceivably process all the information around us: we have to find a way to select what is important to function and survive. This involves a search for meaningful patterns, which once established, allow us to focus on what is relevant and ignore the rest. Scientific discovery may be seen as an elevated version of pattern discovery: we see the height of scientific achievement as discovering regularities in nature that allow us to make better predictions about how the world behaves and to create new technologies and interventions from the basic principles we have discovered.

Scientific progress is not a simple process of weighing up competing pieces of evidence in relation to a theory. Rather than simply choosing between one hypothesis and another, we try to understand a problem in terms of a schema. Bartlett (1932) was one of the first psychologists to study how our preconceptions, or schemata, create distortions in perception and memory. He introduced the idea of “reconstructive remembering,” demonstrating how people’s memory of a narrative changed over time in specific ways, to become more coherent and aligned with pre-existing schemata.

Bartlett’s (1932) work on reconstructive remembering can explain why we not only tend to ignore inconsistent evidence ( Duyx, Urlings, Swaen, Bouter, & Zeegers, 2017 ) but also are prone to distort the evidence that we do include ( Vicente & Brewer, 1993 ). If we put together the combined influence of confirmation bias and reconstructive remembering, it suggests that narrative literature reviews have a high probability of being inaccurate: both types of bias will lead to a picture of research converging on a compelling story, when the reality may be far less tidy ( Katz, 2013 ).

I have focused so far on bias in citing prior literature, but schemata also influence how researchers go about writing up results. If we were simply to present a set of facts that did not cohere, our work would be difficult to understand and remember. As Chalmers, Hedges, and Cooper (2002) noted, this point was made in 1885 by Lord Rayleigh in a presidential address to the British Association for the Advancement of Science:

If, as is sometimes supposed, science consisted in nothing but the laborious accumulation of facts, it would soon come to a standstill, crushed, as it were, under its own weight. The suggestion of a new idea, or the detection of a law, supersedes much that has previously been a burden on the memory, and by introducing order and coherence facilitates the retention of the remainder in an available form. ( Rayleigh, 1885 , p. 20)

Indeed, when we write up our research, we are exhorted to “tell a story,” which achieves the “order and coherence” that Rayleigh described. Since his time, ample literature on narrative comprehension has confirmed that people fill in gaps in unstated information and find texts easier to comprehend and memorise when they fit a familiar narrative structure ( Bower & Morrow, 1990 ; Van den Broek, 1994 ).

This resonates with Dawkins' (1976) criteria for a meme, i.e., an idea that persists by being transmitted from person to person. Memes need to be easy to remember, understand, and communicate, and so narrative accounts make far better memes than dry lists of facts. From this perspective, narrative serves a useful function in providing a scaffolding that facilitates communication. However, while this is generally a useful, and indeed essential, aspect of human cognition, in scientific communication it can lead to the propagation of false information. Bartlett (1932) noted that remembering is hardly ever really exact, "and it is not at all important that it should be so." He was thinking of the beneficial aspects of schemata, in allowing us to avoid information overload and to focus on what is meaningful. Yet, as Dawkins emphasised, survival of a meme does not depend on it being useful or true. An idea such as the claim that vaccination causes autism is a very effective meme, but it has led to a resurgence of diseases that were close to being eradicated.

In communicating scientific results, we need to strike a fine balance between presenting a precis of findings that is easily communicated and moving towards an increase in knowledge. I would argue the pendulum may have swung too far in the direction of encouraging researchers to tell good narratives. Not just media outlets, but also many journal editors and reviewers, encourage authors to tell simple stories that are easy to understand, and those who can produce these may be rewarded with funding and promotion.

The clearest illustration of narrative supplanting accurate reporting comes from the widespread use of HARKing, which was encouraged by Bem (2004) when he wrote,

There are two possible articles you can write: (a) the article you planned to write when you designed your study or (b) the article that makes the most sense now that you have seen the results. They are rarely the same, and the correct answer is (b).

Of course, formulating a hypothesis on the basis of observed data is a key part of the scientific process. However, as noted above, it is not acceptable to use the same data to both formulate and test the hypothesis—replication in a new sample is needed to avoid being misled by the play of chance and littering literature with false positives ( Lazic, 2016 ; Wagenmakers et al., 2012 ).

Kerr (1998) considered why HARKing is a successful strategy and pointed out that it allowed the researcher to construct an account of an experiment that fits a good story script:

Positing a theory serves as an effective “initiating event.” It gives certain events significance and justifies the investigators’ subsequent purposeful activities directed at the goal of testing the hypotheses. And, when one HARKs, a “happy ending” (i.e., confirmation) is guaranteed. (p. 203)

In this regard, Bem’s advice makes perfect sense: “A journal article tells a straightforward tale of a circumscribed problem in search of a solution. It is not a novel with subplots, flashbacks, and literary allusions, but a short story with a single linear narrative line.”

We have, then, a serious tension in scientific writing. We are expected to be scholarly and honest, to report all our data and analyses and not to hide inconvenient truths. At the same time, if we want people to understand and remember our work, we should tell a coherent story from which unnecessary details have been expunged and where we cut out any part of the narrative that distracts from the main conclusions.

Kerr (1998) was clear that HARKing has serious costs. As well as translating type I errors into hard-to-eradicate theory, he noted that it presents a distorted view of science as a process which is far less difficult and unpredictable than is really the case. We never learn what did not work because inconvenient results are suppressed. For early career researchers, it can lead to cynicism when they learn that the rosy picture portrayed in the literature was achieved only by misrepresentation.

Overcoming cognitive constraints to do better science

One thing that is clear from this overview is that we have known about cognitive constraints for decades, yet they continue to affect scientific research. Finding ways to mitigate the impact of these constraints should be a high priority for experimental psychologists. Here, I draw together some general approaches that might be used to devise an agenda for research improvement. Many of these ideas have been suggested before but without much consideration of the cognitive constraints that may affect their implementation. Some methods, such as training, attempt to overcome the constraints directly in individuals; others involve making structural changes to how science is done to counteract our human tendency towards unscientific thinking. None of these provides a total solution; rather, the goal is to tweak the dials that dictate how people think and behave, to move us closer to better scientific practices.

It is often suggested that better training is needed to improve replicability of scientific results, yet the focus tends to be on formal instruction in experimental design and statistics. Less attention has been given to engendering a more intuitive understanding of probability, or counteracting cognitive biases, though there are exceptions, such as the course by Steel, Liermann, and Guttorp (2018), which starts with a consideration of "How the wiring of the human brain leads to incorrect conclusions from data." One way of inducing a more intuitive sense of statistics and p-values is by using data simulations. Simulation is not routinely incorporated in statistics training, but free statistical software now puts this within the grasp of all (Tintle et al., 2015). This is a powerful way to experience how easy it is to get a "significant" p-value when running multiple tests. Students are often surprised when they generate repeated runs of a correlation matrix of random numbers with, say, five variables and find at least one "significant" correlation in roughly two out of five runs (with 10 pairwise tests each at α = .05, the chance of at least one false positive is 1 − 0.95^10 ≈ .40). Once you understand that there is a difference between the probability associated with getting a specific result on a single test, predicted in advance, versus the probability of that result coming up at least once in a multitude of tests, then the dangers of p-hacking become easier to grasp.
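
To make this concrete, here is a minimal simulation sketch in Python (my illustration, not code from the article); numpy and scipy are assumed to be available, and the number of observations and runs are arbitrary choices:

```python
# Illustrative only: repeatedly generate five columns of pure noise, test all ten
# pairwise correlations, and count how often at least one is "significant" at p < .05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_runs, n_obs, n_vars = 2000, 30, 5

runs_with_false_positive = 0
for _ in range(n_runs):
    data = rng.normal(size=(n_obs, n_vars))      # no true correlations exist
    p_values = []
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            r, p = stats.pearsonr(data[:, i], data[:, j])
            p_values.append(p)
    if min(p_values) < 0.05:                     # any of the 10 tests "significant"?
        runs_with_false_positive += 1

print(f"Proportion of runs with at least one 'significant' correlation: "
      f"{runs_with_false_positive / n_runs:.2f}")
```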

Data simulation could also help overcome the misplaced "belief in the law of small numbers" (Tversky & Kahneman, 1974). By generating datasets with a known effect size, then drawing samples from them and subjecting each to a statistical test, the student can learn to appreciate just how easy it is to miss a true effect (a type II error) if the study is underpowered.
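
A companion sketch (again mine, not from the article) shows the type II error side: a genuine effect of half a standard deviation is simulated, but with only 20 participants per group a two-sample t-test detects it in only a minority of runs. The effect size and sample size are assumptions chosen to illustrate an underpowered design:

```python
# Illustrative only: simulate a true group difference of d = 0.5 and estimate how often
# a t-test with n = 20 per group actually detects it (i.e., the study's power).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_sims, n_per_group, true_d = 2000, 20, 0.5

detections = 0
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_d, 1.0, n_per_group)   # real effect of 0.5 SD
    t, p = stats.ttest_ind(treated, control)
    if p < 0.05:
        detections += 1

print(f"Estimated power with n = {n_per_group} per group: {detections / n_sims:.2f}")
# With these settings the effect is detected in only about a third of runs,
# so most studies of this size would 'miss' a real effect.
```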

There is a small literature evaluating attempts to specifically inoculate people against certain types of cognitive bias. For instance, Morewedge et al. (2015) developed instructional videos and computer games designed to reduce a series of cognitive biases, including confirmation bias, and found these to be effective over the longer term. Typically, however, such interventions focus on hypothetical scenarios outside the scope of experimental psychology. They might improve the scientific quality of research projects if adapted to make them relevant to conducting and appraising experiments.

Triangulation of methods in study design

I noted above that for science to progress, we need to overcome a tendency to settle on the first theory that seems “good enough” to account for observations. Any method that forces the researcher to actively search for alternative explanations is, therefore, likely to stimulate better research.

The notion of triangulation ( Munafò & Davey Smith, 2018 ) was developed in the field of epidemiology, where reliance is primarily on observational data, and experimental manipulation is not feasible. Inferring causality from correlational data is hazardous, but it is possible to adopt a strategic approach of combining complementary approaches to analysis, each of which has different assumptions, strengths, and weaknesses. Epidemiology progresses when different explanations for correlational data are explicitly identified and evaluated, and converging evidence is obtained ( Lawlor, Tilling, & Davey Smith, 2016 ). This approach could be extended to other disciplines, by explicitly requiring researchers to use at least two different methods with different potential biases when evaluating a specific hypothesis.

A “culture of criticism”

Smith (2006) described peer review as “a flawed process, full of easily identified defects with little evidence that it works” (p. 182). Yet peer review provides one way of forcing researchers to recognise when they are so focused on a favoured theory that they are unable to break away. Hossenfelder (2018) has argued that the field of particle physics has stagnated because of a reluctance to abandon theories that are deemed “beautiful.” We are accustomed to regarding physicists as superior to psychologists in terms of theoretical and methodological sophistication. In general, they place far less emphasis than we do on statistical criteria for evidence, and where they do use statistics, they understand probability theory and adopt very stringent levels of significance. Nevertheless, according to Hossenfelder, they are subject to cognitive and social biases that make them reluctant to discard theories. She concludes her book with an Appendix on “What you can do to help,” and as well as advocating better understanding of cognitive biases, she recommends some cultural changes to address these. These include building “a culture of criticism.” In principle, we already have this—talks and seminars should provide a forum for research to be challenged—but in practice, critiquing another’s work is often seen as clashing with social conventions of being supportive to others, especially when it is conducted in public.

Recently, two other approaches have been developed, with the potential to make a “culture of criticism” more useful and more socially acceptable. Registered Reports ( Chambers, 2019 ) is an approach that was devised to prevent publication bias, p -hacking, and HARKing. This format moves the peer review process to a point before data collection so that results cannot influence editorial decisions. An unexpected positive consequence is that peer review comes at a point when it can be acted upon to improve the experimental design. Where reviewers of Registered Reports ask “how could we disprove the hypothesis?” and “what other explanations should we consider?” this can generate more informative experiments.

A related idea is borrowed from business practices and is known as the “pre mortem” approach ( Klein, 2007 ). Project developers gather together and are asked to imagine that a proposed project has gone ahead and failed. They are then encouraged to write down reasons why this has happened, allowing people to voice misgivings that they may have been reluctant to state openly, so they can be addressed before the project has begun. It would be worth evaluating the effectiveness of pre-mortems for scientific projects. We could strengthen this approach by incorporating ideas from Bang and Frith (2017) , who noted that group decision-making is most likely to be effective when the group is diverse and people can express their views anonymously. With both Registered Reports and the study pre-mortem, reviewers can have a role as critical friends who can encourage researchers to identify ways to improve a project before it is conducted. This can be a more positive experience for the reviewer, who may otherwise have no option but to recommend rejection of a study with flawed methodology.

Counteracting cherry-picking of literature

Turning to cherry-picking of prior literature, the established solution is the systematic review, where clear criteria are laid out in advance so that a comprehensive search can be made of all relevant studies (Siddaway, Wood, & Hedges, 2019). The systematic review is only as good as the data that go into it, however, and if a field suffers from substantial publication bias and/or p-hacking, then, rather than tackling error entrenchment, it may add to it. Even with the most scrupulous search strategy, relevant papers with null results can be missed because positive results are mentioned in the titles and abstracts of papers, whereas null results are not (Lazic, 2016, p. 15). This can mean that, if a study is looking at many possible associations (e.g., with brain regions or with genes), studies that considered a specific association but failed to find support for it will be systematically disregarded. This may explain why it seems to take 30 or 40 years for some erroneous entrenched theories to be abandoned. The situation may improve with increasing availability of open data. Provided data are adequately documented and accessible, the problem of missing relevant studies may be reduced.

Ultimately, the problem of biased reviews may not be soluble just by changing people’s citation habits. Journal editors and reviewers could insist that abstracts follow a structured format and report all variables that were tested, not just those that gave significant results. A more radical approach by funders may be needed to disrupt this wasteful cycle. When a research team applies to test a new idea, they could first be required to (a) conduct a systematic review (unless one has been recently done) and (b) replicate the original findings on which the work is based: this is the opposite to what happens currently, where novelty and originality are major criteria for funding. In addition, it could be made mandatory for any newly funded research idea to be investigated by at least two independent laboratories and using at least two different approaches (triangulation). All these measures would drastically slow down science and may be unfeasible where research needs highly specialised equipment, facilities, or skills that are specific to one laboratory. Nevertheless, slower science may be preferable to the current system where there are so many examples of false leads being pursued for decades, with consequent waste of resources.

Reconciling storytelling with honesty

Perhaps the hardest problem is how to reconcile our need for narrative with a “warts and all” account of research. Consider this advice from Bem (2004) —which I suspect many journal editors would endorse:

Contrary to the conventional wisdom, science does not care how clever or clairvoyant you were at guessing your results ahead of time. Scientific integrity does not require you to lead your readers through all your wrongheaded hunches only to show—voila!—they were wrongheaded. A journal article should not be a personal history of your stillborn thoughts . . . Your overriding purpose is to tell the world what you have learned from your study. If your results suggest a compelling framework for their presentation, adopt it and make the most instructive findings your centerpiece . . . Think of your dataset as a jewel. Your task is to cut and polish it, to select the facets to highlight, and to craft the best setting for it.

As Kerr (1998) pointed out, HARKing gives a misleading impression of what was found, which can be particularly damaging for students, who on reading literature may form the impression that it is normal for scientists to have their predictions confirmed and think of themselves as incompetent when their own experiments do not work out that way. One of the goals of pre-registration is to ensure that researchers do not omit inconvenient facts when writing up a study—or if they do, at least make it possible to see that this has been done. In the field of clinical medicine, impressive progress has been made in methodology, with registration now a requirement for clinical trials ( International Committee of Medical Journal Editors, 2019 ). Yet, Goldacre et al. (2019) found that even when a trial was registered, it was common for researchers to change the primary outcome measure without explanation, and it has been similarly noted that pre-registrations in psychology are often too ambiguous to preclude p -hacking ( Veldkamp et al., 2018 ). Registered Reports ( Chambers, 2019 ) adopt stricter standards that should prevent HARKing, but the author may struggle to maintain a strong narrative because messy reality makes a less compelling story than a set of results subjected to Bem’s (2004) cutting and polishing process.

Rewarding credible research practices

A final set of recommendations has to do with changing the culture so that incentives are aligned with efforts to counteract unhelpful cognitive constraints, and researchers are rewarded for doing reproducible, replicable research, rather than for grant income or publications in high-impact journals ( Forstmeier, Wagenmakers, & Parker, 2016 ; Pulverer, 2015 ). There is already evidence that funders are concerned to address problems with credibility of biomedical research ( Academy of Medical Sciences, 2015 ), and rigour and reproducibility are increasingly mentioned in grant guidelines (e.g., https://grants.nih.gov/policy/reproducibility/index.htm ). One funder, Cancer Research UK, is innovating by incorporating Registered Reports in a two-stage funding model ( Munafò, 2017 ). We now need publishers and institutions to follow suit and ensure that researchers are not disadvantaged by adopting a self-critical mind-set and engaging in practices of open and reproducible science ( Poldrack, 2019 ).

Acknowledgments

My thanks to Kate Nation, Matt Jaquiery, Joe Chislett, Laura Fortunato, Uta Frith, Stefan Lewandowsky, and Karalyn Patterson for invaluable comments on an early draft of this manuscript.

Declaration of conflicting interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The author is supported by a Principal Research Fellowship from the Wellcome Trust (programme grant no. 082498) and European Research Council advanced grant no. 694189.

What Can You Do With An Experimental Psychology Degree [2024 Guide]

An experimental psychology degree program teaches you how to use scientific research methods to study thought processes and behaviors. In other words, it teaches you how to analyze the human mind through a variety of research methods (experiments, case studies, surveys, observations, interviews, and questionnaires).

This type of training program also teaches you how to investigate genetic and environmental factors that can affect the way a person thinks and behaves.

While in college, it is important that you enroll in a variety of research and psychology courses. It is also important that you understand statistics, because you will need this skill to analyze your research results.

It can be confusing to decide what to do with your experimental psychology degree, but thankfully there are several different career paths you can choose from.

What Type of Degree Do Experimental Psychologists Need?

Experimental psychology training programs are designed to help you develop strong research skills, learn ethical research principles and supervise research studies. Although you are required to have at least a master’s degree in experimental psychology to seek employment in the field, some employers prefer that you have a doctorate.

For instance, if you are interested in becoming a college professor or a university researcher, you will need a doctorate degree in experimental psychology or a related field, but if you want to work at a business in the human resources department, you may be hired simply with a master’s degree in the field.

It is important to note that you do not have to have a degree in experimental psychology to work in the field, but you must have a degree in a related field like industrial-organizational psychology, health psychology, or clinical psychology.

If you decide to further your education and enroll in a post-graduate experimental psychology program, you will spend the majority of your time increasing your research knowledge and strengthening your research skills.

Courses that you may take include research design, research methodology (quantitative and qualitative research studies), statistics, experimentation, and research tools. You may also take courses in human development and life cycle development.

It will take you approximately four years to obtain a bachelor’s degree in psychology (any psychology program is acceptable), two and a half years to obtain a master’s degree in experimental psychology or a related field and up to seven years to obtain a doctorate in experimental psychology or a related field.

Careers With a Degree in Experimental Psychology

Human Factors Psychologist

You can become a human factors psychologist with a doctorate in experimental psychology. Many government agencies and organizations hire psychologists with an experimental psychology degree to help increase employee morale, productivity, quality, and satisfaction. Your main goal will be to help employees have a better job experience.

Educational Psychologist

Another career path you can pursue with an experimental psychology doctoral degree is educational psychology. You will be able to use the skills you learned through your post-graduate degree program to develop more effective educational assessments (standardized tests). You will also consult with teachers, parents, and school administrators to improve the learning process for all students (healthy, ill, delayed, and disabled).

Psychological Consultant

With a master’s degree and/or doctorate in experimental psychology, you can provide psychological services to companies, agencies, corporations, colleges, etc.

Your main responsibilities will be to help companies hire and promote qualified candidates and employees, organize and train new employees, develop “refresher” training modules for established employees, resolve workplace conflicts and provide advice and guidance on how to improve company practices, procedures and policies.

If you decide to use your degree for consultation work, you will design and run studies aimed at improving individual employees' experience of the workplace.

Product Development Specialist

A growing career path that you can pursue if you have a bachelor's degree in experimental psychology is product development. If you decide to pursue this career path, your main responsibilities will be to help companies improve their products so that they are more user-friendly, functional, and efficient. You will work with business executives to create products that will appeal to consumers.

Medical Researcher

If you have a master's degree in experimental psychology, you may be able to enter the medical field as a medical researcher. You may work with other medical professionals (psychologists, psychiatrists, physicians, and nurses) to help develop psychotropic medications (e.g., Prozac, Haldol, Celexa, Adderall) that treat a variety of psychological disorders and mental illnesses, such as clinical depression, anxiety disorders, schizophrenia, bipolar disorder, and phobias.

Experimental Researcher

If you have a bachelor's or master's degree in experimental psychology, you may want to pursue a career in research. You can perform research studies and publish your results in scientific and research journals, books, and periodicals. Your main goal will be to study the human mind (learning and memory, behaviors, and thought processes).

In other words, you will use scientific methods to assess and analyze why people think the way they do and behave the way they do. You will spend the majority of your workday testing humans and animals in a controlled setting (a laboratory) or in the field (observing in a natural habitat). You may work for a college or university, research laboratory or government agency.

Counseling Psychologist

Counseling psychology encompasses a broad range of practices that help clients of all ages alleviate stress, improve their well-being, resolve crises and increase their ability to function in a healthful manner.

Counseling psychologists specialize in counseling patients whose issues are related to social, vocational, emotional, health, developmental or organizational concerns. Counseling psychologists normally focus upon patients who have life issues, like adjusting to changes in career or marital status. These professionals often help people deal with everyday problems.

Military Psychologist

Military psychologists help soldiers and their families manage a variety of adjustment and psychological issues such as: depression, anxiety, generalized stress, stress-related combat issues and post-traumatic stress disorder (PTSD).

Military psychologists  provide specialized care to soldiers and their families. Their main goal is to help these individuals heal from stressful and traumatic experiences. Soldiers returning from overseas are especially vulnerable to stress-related psychological disorders.

Military psychologists treat a large number of soldiers with PTSD, so specialized training in psychological disorders is required. In addition, military psychologists often work on a military base, so one needs to be mentally prepared to see injured soldiers and comfort grieving families.

Military psychologists need a strong background in psychological disorders and in therapy approaches, techniques, and methods. They are expected to treat an array of psychological issues such as substance abuse, PTSD, anxiety, depression, stress, family-related issues, and job-related issues. They may also be required to administer and analyze personality, psychological, and career assessments.

College Professor

If you have a doctoral degree, you may want to consider becoming a college professor. If you decide to pursue this career, you may be required to teach courses in research methods, statistics and/or research studies (quantitative and/or qualitative research studies).

Your classroom will more than likely be set up like a lab and your students will perform experiments under your guidance. In addition, you will more than likely also supervise research studies for the college or university. Moreover, you may be required to publish research results in peer-reviewed scientific journals once or twice a year.

Related Reading

  • Top Careers in Psychology
  • How to Become a Research Psychologist
  • How to Become an Experimental Psychologist
  • What is the Difference Between a MA and MS in Psychology?
  • What is a Counseling Psychologist? | Careers in Counseling

How Does Experimental Psychology Study Behavior?

Purpose, methods, and history

What factors influence people's behaviors and thoughts? Experimental psychology utilizes scientific methods to answer these questions by researching the mind and behavior. Experimental psychologists conduct experiments to learn more about why people do certain things.

Overview of Experimental Psychology

Why do people do the things they do? What factors influence how personality develops? And how do our behaviors and experiences shape our character?

These are just a few of the questions that psychologists explore, and experimental methods allow researchers to create and empirically test hypotheses. By studying such questions, researchers can also develop theories that enable them to describe, explain, predict, and even change human behaviors.

For example, researchers might utilize experimental methods to investigate why people engage in unhealthy behaviors. By learning more about the underlying reasons why these behaviors occur, researchers can then search for effective ways to help people avoid such actions or replace unhealthy choices with more beneficial ones.

Why Experimental Psychology Matters

While students are often required to take experimental psychology courses during undergraduate and graduate school , think about this subject as a methodology rather than a singular area within psychology. People in many subfields of psychology use these techniques to conduct research on everything from childhood development to social issues.

Experimental psychology is important because the findings play a vital role in our understanding of the human mind and behavior.

By better understanding exactly what makes people tick, psychologists and other mental health professionals can explore new approaches to treating psychological distress and mental illness. These are often topics of experimental psychology research.

Experimental Psychology Methods

So how exactly do researchers investigate the human mind and behavior? Because the mind is so complex, it seems like a challenging task to explore the many factors that contribute to how we think, act, and feel.

Experimental psychologists use a variety of different research methods and tools to investigate human behavior. Methods in the experimental psychology category include experiments, case studies, correlational research, and naturalistic observations.

Experiments

Experimentation remains the primary standard in psychological research. In some cases, psychologists can perform experiments to determine if there is a cause-and-effect relationship between different variables.

The basics of conducting a psychology experiment involve:

  • Randomly assigning participants to groups
  • Operationally defining variables
  • Developing a hypothesis
  • Manipulating independent variables
  • Measuring dependent variables

One example of experimental psychology research would be a study examining whether sleep deprivation impairs performance on a driving test. The experimenter would vary the amount of sleep participants get the night before while controlling other variables that might influence the outcome.

All of the participants would then take the same driving test via a simulator or on a controlled course. By analyzing the results, researchers can determine if changes in the independent variable (amount of sleep) led to differences in the dependent variable (performance on a driving test).
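
As a rough sketch of how such data might be analysed (an illustration with simulated numbers, not part of the original article), the independent variable is sleep condition and the dependent variable is the driving-test score:

```python
# Illustrative only: compare simulated driving-test scores for a rested group and a
# sleep-deprived group (randomly assigned in a real study) with an independent-samples t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
rested_scores = rng.normal(75, 12, size=20)      # hypothetical scores, higher = better
deprived_scores = rng.normal(65, 12, size=20)

t, p = stats.ttest_ind(rested_scores, deprived_scores)
print(f"Rested mean = {rested_scores.mean():.1f}, deprived mean = {deprived_scores.mean():.1f}")
print(f"t = {t:.2f}, p = {p:.3f}")
```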

Case Studies

Case studies allow researchers to study an individual or group of people in great depth. When performing a case study, the researcher collects every piece of data possible, often observing the person or group over a period of time and in a variety of situations. Detailed information about the subject's background, including family history, education, work, and social life, is also collected.

Such studies are often performed in instances where experimentation is not possible. For example, a scientist might conduct a case study when the person of interest has had a unique or rare experience that could not be replicated in a lab.

Correlational Research

Correlational studies are an experimental psychology method that makes it possible for researchers to look at relationships between different variables. For example, a psychologist might note that as one variable increases, another tends to decrease.

While such studies can look at relationships, they cannot be used to imply causal relationships. The golden rule is that correlation does not equal causation.
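
A short simulation (an illustration of the point above, not from the original article; the variable names and coefficients are invented) shows why: two variables that never influence each other can still correlate strongly when both are driven by a third, unmeasured variable:

```python
# Illustrative only: ice-cream sales and sunburn cases do not cause one another,
# yet both depend on temperature, so they end up strongly correlated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 500

temperature = rng.normal(size=n)                       # hidden common cause
ice_cream_sales = 2.0 * temperature + rng.normal(size=n)
sunburn_cases = 1.5 * temperature + rng.normal(size=n)

r, p = stats.pearsonr(ice_cream_sales, sunburn_cases)
print(f"r = {r:.2f}, p = {p:.3g}")                     # strong correlation, no causal link
```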

Naturalistic Observations

Naturalistic observation gives researchers the opportunity to watch people in their natural environments. This experimental psychology method can be particularly useful in cases where the investigators believe that a lab setting might have an undue influence on participant behaviors.

What Experimental Psychologists Do

Experimental psychologists work in a wide variety of settings, including colleges, universities, research centers, government, and private businesses. Some of these professionals teach experimental methods to students while others conduct research on cognitive processes, animal behavior, neuroscience, personality, and other subject areas.

Those who work in academic settings often teach psychology courses in addition to performing research and publishing their findings in professional journals. Other experimental psychologists work with businesses to discover ways to make employees more productive or to create a safer workplace—a specialty area known as human factors psychology .

Experimental Psychology Research Examples

Some topics that might be explored in experimental psychology research include how music affects motivation, the impact social media has on mental health , and whether a certain color changes one's thoughts or perceptions.

History of Experimental Psychology

To understand how experimental psychology got where it is today, it can be helpful to look at how it originated. Psychology is a relatively young discipline, emerging in the late 1800s. While it started as part of philosophy and biology, it officially became its own field of study when early psychologist Wilhelm Wundt founded the first laboratory devoted to the study of experimental psychology.

Some of the important events that helped shape the field of experimental psychology include:

  • 1874 - Wilhelm Wundt published the first experimental psychology textbook, "Grundzüge der physiologischen Psychologie" ("Principles of Physiological Psychology").
  • 1875 - William James opened a psychology lab in the United States. The lab was created for the purpose of class demonstrations rather than to perform original experimental research.
  • 1879 - Wilhelm Wundt founded the first laboratory dedicated to experimental psychology in Leipzig, Germany, an event widely regarded as the start of modern experimental psychology.
  • 1883 - G. Stanley Hall opened the first experimental psychology lab in the United States at Johns Hopkins University.
  • 1885 - Hermann Ebbinghaus published his famous "Über das Gedächtnis" ("On Memory"), which was later translated to English as "Memory: A Contribution to Experimental Psychology." In the work, Ebbinghaus described learning and memory experiments that he conducted on himself.
  • 1887 - George Trumbull Ladd published his textbook "Elements of Physiological Psychology," the first American book to include a significant amount of information on experimental psychology.
  • 1887 - James McKeen Cattell established the world's third experimental psychology lab at the University of Pennsylvania.
  • 1890 - William James published his classic textbook, "The Principles of Psychology."
  • 1891 - Mary Whiton Calkins established an experimental psychology lab at Wellesley College, becoming the first woman to form a psychology lab.
  • 1893 - G. Stanley Hall founded the American Psychological Association , the largest professional and scientific organization of psychologists in the United States.
  • 1920 - John B. Watson and Rosalie Rayner conducted their now-famous Little Albert Experiment , in which they demonstrated that emotional reactions could be classically conditioned in people.
  • 1929 - Edwin Boring's book "A History of Experimental Psychology" was published. Boring was an influential experimental psychologist who was devoted to the use of experimental methods in psychology research.
  • 1955 - Lee Cronbach published "Construct Validity in Psychological Tests," which popularized the use of construct validity in psychological studies.
  • 1958 - Harry Harlow published "The Nature of Love," which described his experiments with rhesus monkeys on attachment and love.
  • 1961 - Albert Bandura conducted his famous Bobo doll experiment, which demonstrated the effects of observation on aggressive behavior.

Experimental Psychology Uses

While experimental psychology is sometimes thought of as a separate branch or subfield of psychology, experimental methods are widely used throughout all areas of psychology.

  • Developmental psychologists use experimental methods to study how people grow through childhood and over the course of a lifetime.
  • Social psychologists use experimental techniques to study how people are influenced by groups.
  • Health psychologists rely on experimentation and research to better understand the factors that contribute to wellness and disease.

A Word From Verywell

The experimental method in psychology helps us learn more about how people think and why they behave the way they do. Experimental psychologists can research a variety of topics using many different experimental methods. Each one contributes to what we know about the mind and human behavior.

By Kendra Cherry, MSEd, a psychosocial rehabilitation specialist, psychology educator, and author of "The Everything Psychology Book."

Society for Experimental Psychology and Cognitive Science

Division 3 awards

The division recognizes excellence in the field through its lifetime achievement award, early career award, and student poster award, among others.

Division 3 news and events

Learn more about division news and events and stay up-to-date with the latest in experimental psychology and cognitive science.

The mission of Division 3 is as follows:

  • Promote research and teaching in the general field of experimental psychology and its many subdisciplines.
  • Stimulate the exchange of information among its members and with other sciences.
  • Support experimental psychology — and the science of psychology more broadly — through research, advocacy, education and training, service, policy, leadership in APA governance and collaboration with APA's directorates.

Our members do basic and applied research in varied settings on topics covered by the five flagship  Journals of Experimental Psychology . We enthusiastically welcome members who do experimental work in any area of psychology.

Making the job market less scary

A presentation from Division 3 (Society for Experimental Psychology and Cognitive Science) and the Spark Society. Faculty who have served on search committees at research-focused and teaching-focused institutions share their experiences and advice.

How psychologists and societies can participate in science policy and advocacy

Hear cognitive psychologist Andy DeSoto, who has advocated for the social and behavioral sciences first from the scientific-society side and now from within the U.S. federal government at the National Science Foundation, discuss how individuals and societies can engage with the federal government and policy.

Division Newsletter

The Experimental Psychology Bulletin

Experimental Psychologist: Role, Responsibilities & Education

Written by Sarah Walsh, Clinical PsyD (Rutgers University), Clinical Psychologist

Experimental psychologists conduct rigorous research studies to explore and uncover the underlying mechanisms of human behavior. Their systematic investigations provide invaluable insights into various aspects of human cognition, perception, emotion, and motivation. Experimental psychologists employ scientific methods and carefully designed experiments to gather data, analyze results, and draw meaningful conclusions. Their contributions have profoundly impacted diverse areas such as cognitive psychology, developmental psychology, social psychology, and neuroscience.

This article aims to provide a comprehensive understanding of the role, responsibilities, and education required to become an experimental psychologist in the United States.

Understanding Experimental Psychology

Experimental psychology is a branch of psychology that emphasizes the scientific study of behavior and mental processes. It seeks to uncover human behavior’s underlying mechanisms and causes through systematic observation, measurement, and experimentation. By employing rigorous research methodologies and statistical analyses, experimental psychologists aim to establish causal relationships and make evidence-based conclusions.

The roots of experimental psychology can be traced back to the late 19th century when psychologists such as Wilhelm Wundt and William James pioneered the use of laboratory experiments to study human behavior. They advocated for the scientific approach in psychology and emphasized the importance of systematic observation and measurement. 

Over the years, experimental psychology has evolved and diversified, incorporating advancements in technology, statistics, and interdisciplinary collaborations. Today, experimental psychologists utilize a range of research methods, including controlled experiments, surveys, observations, and neuroimaging techniques, to investigate many facets of human cognition and behavior.

Role of an Experimental Psychologist

1. Conducting Research Studies

Central to the role of experimental psychologists is the design and execution of research studies. They formulate research questions, develop hypotheses, and design experiments that allow them to collect relevant data. This may involve selecting appropriate participant samples, designing experimental conditions, and employing measurement tools to assess behavior, cognition, or physiological responses. Experimental psychologists carefully control variables and employ statistical analyses to derive meaningful insights from their data, contributing to the scientific knowledge base.

2. Designing and Implementing Experiments

Experimental psychologists are responsible for designing and implementing scientifically rigorous and ethically sound experiments. They carefully plan every aspect of the experiment, including selecting appropriate research designs, manipulating independent variables, and controlling confounding factors. They also consider ethical guidelines to ensure participants’ well-being and informed consent throughout the study. By employing systematic and controlled experimental designs, experimental psychologists can draw reliable and valid conclusions from their research.
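As a concrete illustration of the random-assignment step described above, here is a minimal Python sketch (not taken from this article; the participant IDs, seed, and condition names are hypothetical) that assigns participants to an experimental or a control condition in equal numbers, with a fixed seed so the allocation can be reproduced and audited.

    # Minimal sketch: reproducible random assignment to conditions (illustrative only)
    import random

    participants = [f"P{i:02d}" for i in range(1, 21)]   # 20 hypothetical participant IDs
    conditions = ["experimental", "control"]

    rng = random.Random(42)        # fixed seed so the allocation can be reproduced
    shuffled = participants[:]
    rng.shuffle(shuffled)

    # Alternating assignment down the shuffled list yields equal group sizes
    assignment = {pid: conditions[i % 2] for i, pid in enumerate(shuffled)}

    for pid in sorted(assignment):
        print(pid, assignment[pid])

In practice researchers often use dedicated randomization tools or stratified schemes, but the principle is the same: condition membership is determined by chance rather than by the experimenter.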

3. Analyzing Data and Drawing Conclusions

Once the data is collected, experimental psychologists utilize various statistical methods and data analysis techniques to make sense of the information gathered. They employ statistical software to analyze the data and interpret the results objectively. This involves running statistical tests, examining effect sizes, and assessing the significance of findings. Experimental psychologists critically evaluate the data to determine the implications and draw meaningful conclusions based on the evidence. They consider the study’s limitations and discuss the implications of their findings within the context of existing research and theories.
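To make the analysis step concrete, here is a minimal Python sketch (illustrative only, not drawn from this article) of the kind of test described above: an independent-samples t-test comparing two hypothetical groups, followed by Cohen's d as a simple effect-size estimate. The data values are invented for the example.

    # Minimal sketch: t-test plus effect size on hypothetical data (illustrative only)
    import numpy as np
    from scipy import stats

    control = np.array([512, 498, 530, 487, 505, 521, 499, 510])      # e.g., reaction times in ms
    treatment = np.array([471, 466, 489, 452, 480, 475, 460, 468])

    # Welch's independent-samples t-test (does not assume equal variances)
    t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

    # Cohen's d using the pooled standard deviation
    pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
    cohens_d = (treatment.mean() - control.mean()) / pooled_sd

    print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")

A real analysis would also check assumptions, report confidence intervals, and weigh the study's limitations, as the paragraph above notes.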

4. Reporting Findings and Publishing Research

An essential responsibility of experimental psychologists is to communicate their findings to the scientific community and the broader public. They prepare research reports, academic papers, and presentations that effectively communicate their study design, methodology, results, and conclusions. Experimental psychologists often publish their research in scientific journals, which contributes to advancing knowledge in the field. By disseminating their findings, they foster collaboration and encourage further exploration and replication of their work. Additionally, experimental psychologists may present their research at conferences, seminars, and workshops, promoting dialogue and knowledge exchange among professionals in the field.

Responsibilities of an Experimental Psychologist

1. Academic Research and Teaching

Experimental psychologists often engage in academic research, conducting studies to contribute to the scientific understanding of human behavior. They may secure research grants, collaborate with colleagues, and publish their findings in scholarly journals. Additionally, many experimental psychologists are involved in teaching at universities and colleges, sharing their expertise with students and mentoring aspiring psychologists.

2. Collaborating with Other Professionals

Experimental psychologists frequently collaborate with professionals from various disciplines, including other psychologists, neuroscientists, statisticians, and social scientists. Such collaborations allow for a multidisciplinary research approach, facilitating a deeper understanding of complex psychological phenomena and their underlying mechanisms. By working in interdisciplinary teams, experimental psychologists can integrate diverse perspectives and methodologies into their research, leading to comprehensive and impactful findings.

3. Ethical Considerations

Experimental psychologists are committed to upholding ethical standards in their research practices. They must obtain informed consent from participants, protect their privacy and confidentiality, and ensure the well-being and safety of individuals involved in their studies. Experimental psychologists adhere to professional ethical guidelines and institutional review board protocols to ensure the ethical conduct of their research. They are responsible for addressing any potential ethical concerns that may arise during the research process.

4. Professional Development and Continuing Education

As the field of psychology constantly evolves, experimental psychologists engage in ongoing professional development and continuing education. They stay updated on the latest research, methodologies, and advancements in their area of specialization. This may involve attending conferences, workshops, and seminars and actively participating in professional organizations and networks. Experimental psychologists also engage in professional supervision and seek opportunities for collaboration and mentorship to enhance their skills and expand their knowledge base.

How to Become an Experimental Psychologist?

To become an experimental psychologist in the United States, aspiring individuals typically begin their journey by obtaining a bachelor’s degree in psychology or a related field. Undergraduate programs provide a foundation in core psychological principles, research methods, statistics, and critical thinking skills. Students may be able to participate in research projects or gain practical experience through internships.

After completing an undergraduate degree, aspiring experimental psychologists pursue advanced education at the graduate level. This typically involves earning a Master’s in Experimental Psychology and then proceeding to a doctoral program in experimental psychology or a related discipline. Doctoral programs offer specialized coursework, research training, and opportunities for independent research under the guidance of experienced faculty mentors. Graduates may choose to specialize in areas such as cognitive psychology, social psychology, developmental psychology, or neuroscience.

While licensing requirements vary by state, many experimental psychologists pursue licensure to practice independently or in applied settings. Licensing typically involves meeting specific educational and experiential requirements, passing a licensing exam, and fulfilling ongoing continuing education obligations. Additionally, some experimental psychologists may pursue certification from professional organizations, which can demonstrate their expertise and commitment to ethical and professional standards.

Experimental psychologists benefit from joining professional associations that cater to their specific interests and areas of specialization. Organizations such as the American Psychological Association (APA) and the Society for Experimental Psychology and Cognitive Science (SEPCS) provide valuable resources, networking opportunities, and access to the latest research in the field. Membership in these associations can enhance professional development, offer mentorship opportunities, and facilitate collaboration with colleagues.

Subfields of Experimental Psychology

1. Cognitive Psychology

Cognitive psychology focuses on the study of mental processes, including attention, perception, memory, language, and problem-solving. Experimental psychologists in this subfield investigate how individuals acquire, process, store, and retrieve information, contributing to our understanding of human cognition and its underlying mechanisms.

2. Developmental Psychology

Developmental psychology explores the changes in human behavior and cognitive processes across the lifespan. Experimental psychologists in this subfield study various aspects of development, including social, cognitive, emotional, and physiological changes, shedding light on the factors that influence human growth and maturation.

3. Social Psychology

Social psychology examines how social interactions and the social environment influence individuals’ thoughts, feelings, and behaviors. Experimental psychologists in this subfield investigate social cognition, group dynamics, attitudes, persuasion, and intergroup relations, contributing to our understanding of the complexities of human social behavior.

4. Psychobiology and Neuroscience

Psychobiology and neuroscience involve studying the relationship between the brain, behavior, and mental processes. Experimental psychologists in this subfield employ neuroimaging techniques, physiological measures, and other research methods to investigate the neural underpinnings of psychological phenomena, providing insights into the biological basis of human behavior.

5. Other Specializations

Experimental psychology encompasses various other specialized areas of study, such as sensation and perception, emotion and motivation, personality, and psychopharmacology. Experimental psychologists may choose to specialize in these subfields or pursue interdisciplinary research that spans multiple areas, contributing to the richness and diversity of the field.

Career Opportunities for Experimental Psychologists

  • Academic Positions : Experimental psychologists often pursue careers in academia, where they can engage in research, teaching, and mentoring. They may secure faculty positions at universities or colleges, conduct research studies, publish scholarly articles, and educate the next generation of psychologists.
  • Research Institutions and Laboratories : Experimental psychologists can find opportunities in research institutions and laboratories in academic and non-academic settings. These positions involve conducting research studies, collaborating with interdisciplinary teams, and contributing to scientific advancements in various domains of experimental psychology.
  • Private Sector Opportunities : The private sector also offers employment opportunities for experimental psychologists. They may work in research and development departments of corporations, consulting firms, or technology companies, where they apply their research expertise to areas such as user experience, product design, market research, and human factors.
  • Government Agencies and Non-Profit Organizations : Experimental psychologists may contribute their expertise to government agencies and non-profit organizations. They can work in research divisions of governmental bodies, such as the National Institutes of Health (NIH) or the Centers for Disease Control and Prevention (CDC). Non-profit organizations may employ experimental psychologists to research social issues, mental health, or program evaluation.

Key Takeaways

  • Experimental psychologists play a crucial role in advancing our understanding of human behavior through rigorous research and experimentation.
  • They design and implement experiments, analyze data, and draw meaningful conclusions that contribute to the scientific knowledge base in psychology.
  • The responsibilities of experimental psychologists include conducting research studies, collaborating with other professionals, addressing ethical considerations, and engaging in ongoing professional development.
  • Education and training pathways for aspiring experimental psychologists typically involve obtaining an undergraduate degree in psychology, pursuing graduate education, and potentially obtaining licensure or certification.
  • Career opportunities for experimental psychologists exist in academia, research institutions, the private sector, government agencies, and non-profit organizations, providing diverse avenues for applying their expertise.


PSY 4150 - Experimental Psychology/Senior Seminar: Government Resources

  • Journal Articles
  • APA Style - 7th Edition
  • Book Sections
  • Business Sources
  • Web Resources
  • Personal Interview
  • Government Resources
  • APA 7 - Direct Quotations
  • APA Formatting Guidelines
  • Research Management Systems

Citing Government Resources

Identify the color-coded elements to cite government resources on your Reference List:

Government resources elements

If no date is specified, put (n.d.) in the year field.

Do not put a period after the URL.

Citations in your paper should be formatted in a black or grayscale font.

Reference List

See the APA Publication Manual, 6th Edition, pages 180-192, for more on the Reference List.

Basic Government Citation

Government template citation
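The template image itself is not reproduced here. As a rough sketch of the element order only (a generic APA-style pattern, not copied from this guide), a reference for a government document generally runs:

    Name of Government Agency. (Year). Title of the document (Report or publication number, if any). Publisher, if different from the agency. URL

Check the guide's color-coded template and your edition of the APA manual for the exact punctuation and ordering expected in your course.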

Electronic Government Resource

Cover Image of Managing Asthma: A guide for schools

Print Government Resource

Cover for Agricultural Statistics

  • Last Updated: Jul 31, 2024 3:24 PM
  • URL: https://libguides.grace.edu/ExperimentalPsychology


Experimental Psychologist Career (Salary + Duties + Interviews)


Every theory and application in psychology can be traced back to the work of a psychologist. However, the roles and focuses of psychologists can vary widely. While some primarily apply established theories and work directly with patients, others are more involved in the genesis and testing of new theories, relying on intuition and empirical research to validate their insights. That said, many psychologists often blend both roles, applying theories in practical settings while simultaneously questioning and refining them based on their observations and experiences.

For those with a keen interest in the investigative aspects of the discipline, a career in experimental psychology may be the ideal path. Though it may not resemble the "typical" psychology career, experimental psychology is pivotal in ensuring the field remains dynamic, ever-evolving, and rooted in the latest insights and perspectives. Dive deeper to understand what it entails to become an experimental psychologist, the prerequisites for entering this niche, and how you can embark on this intriguing journey.

What does an Experimental Psychologist Do?

Experimental psychologists use the scientific method to test out different theories or questions they or others have developed in psychology. An experimental psychologist may spend their entire career attempting to answer one question, as one set of data or one study may not be enough to answer psychology’s larger questions.

Experimental psychologists may use existing data, surveys, focus groups, or various other experimental methods to seek out the answers that shape their careers. Many factors must be considered when these experiments are conducted, especially in psychology. Motives, background, perception, and the diversity of subjects all come into play and shape the results of a study, survey, or experiment.

Job Requirements

You must know what you’re talking about to earn grant money for research. Experimental psychologists start by earning their doctorate in experimental psychology or another approach to psychology that might shape their experiments. Through this work, an experimental psychologist will build up their resume by working under other experimental psychologists and contributing to research that may be published. With enough credentials and by answering the questions that spark interest, an experimental psychologist may find work at a college or university and conduct their experiments using grant money or on the school’s dime.

Salary (How Much Do Experimental Psychologists Make?) 

A well-respected experimental psychologist can live a comfortable life while answering the world’s biggest questions, but this salary is not guaranteed to all. Remember, your doctorate is usually the minimum requirement to conduct research at a university or in an esteemed research center. These salaries reflect experimental psychologists who have completed these degrees and are currently working to answer the questions they or others in their field have. 

  • Economic Research Institute: $96,610
  • ZipRecruiter: $17,500 (low), $62,493 (average), $138,500 (high)
  • Salary.com: $73,459 (low), $97,711 (average), $122,148 (high)
  • VeryWellMind: $92,000

Schools for Experimental Psychology Degrees

Choosing the right educational institution becomes paramount when diving into experimental psychology. The reputation and quality of the program you attend can influence recognition from leaders in the field and potential research funding opportunities. Here are some top universities renowned for their experimental psychology programs, accompanied by a brief description of what makes them stand out:

  • The University of Michigan - Ann Arbor (Ann Arbor, MI): Renowned for its extensive research facilities and faculty of leading researchers in the field.
  • Harvard University (Cambridge, MA): Harvard's storied history in psychology research and its vast resources offer students unparalleled research opportunities.
  • Yale University (New Haven, CT): Boasts a collaborative environment where students frequently work across disciplines to push the boundaries of psychological research.
  • Stanford University (Stanford, CA): Known for fostering innovation and emphasizing combining theoretical and applied research in psychology.
  • The University of South Carolina - Columbia (Columbia, SC): Recognized for its commitment to exploring diverse psychological phenomena and its strong community of researchers.
  • Purdue University (West Lafayette, IN): Offers state-of-the-art labs and facilities and a curriculum rooted in traditional and emerging psychological research areas.
  • University of Rochester (Rochester, NY): Celebrated for its research-intensive approach and a close-knit academic community that promotes collaborative studies.
  • University of Chicago (Chicago, IL): Holds a legacy of groundbreaking research in psychology and offers a rich environment for interdisciplinary studies.
  • University of Rhode Island (Kingston, RI): Prides itself on its hands-on research opportunities and a curriculum that emphasizes real-world application of experimental findings.
  • Texas State University (San Marcos, TX): Offers a comprehensive program that blends rigorous academic training with practical research experiences, preparing students for diverse careers in psychology.

When choosing a school, always consider factors such as faculty expertise, research facilities, and the specific areas of experimental psychology the program emphasizes.

Companies That Hire Experimental Psychologists

Where do experimental psychologists work? There are a lot of options! Many organizations have questions that experimental psychologists can attempt to answer through data collection and research. Any of the following organizations could put out a job listing for an experimental psychologist to work in-house or with various clients: 

  • Research centers 
  • Colleges and universities
  • Government agencies
  • Private businesses 

Research Opportunities in Experimental Psychology: From Assistant to Director

Experimental psychology isn't just limited to conducting experiments; it's also deeply intertwined with the broader world of research in psychology. Whether just starting in the field or aiming for leadership positions, there are many opportunities to delve into research. Here's a closer look at the potential roles within this realm:

Research Psychologist: At the heart of experimental psychology is the Research Psychologist. They are responsible for designing, executing, and interpreting experiments that answer vital questions within the field of psychology. Their work often involves:

  • Formulating research questions or hypotheses.
  • Designing experimental studies or surveys.
  • Collecting and analyzing data using statistical tools.
  • Publishing their findings in reputable journals.
  • Collaborating with other psychologists and professionals from various disciplines.

Research Assistant: This role is often an entry-level position, ideal for those new to the field or currently undergoing their graduate studies. Research Assistants play a crucial role in supporting the execution of experiments. Their tasks often include:

  • Assisting in data collection, which may involve conducting interviews, administering tests, or managing focus groups.
  • Data entry and preliminary analysis.
  • Literature reviews to support the groundwork for experiments.
  • Assisting in the preparation of research papers or presentations.

Research Director: This senior role is often found within larger research institutions, universities, or corporations. A Research Director oversees multiple research projects and ensures that they align with the organization's broader objectives. Their responsibilities often encompass:

  • Setting the direction and priorities for research initiatives.
  • Securing funding and grants for research projects.
  • Collaborating with stakeholders, including policymakers, corporate leaders, or academic heads, to ensure the research meets necessary standards and serves broader goals.
  • Mentoring and guiding younger researchers, helping them shape their career paths.
  • Ensuring ethical guidelines are adhered to in all research activities.

The field of experimental psychology is vast, and its research opportunities are diverse. Whether starting as a research assistant and learning the ropes or leading groundbreaking research initiatives as a director, there's a pathway for every aspiring experimental psychologist. As the field continues to evolve, the demand for dedicated researchers who can provide insights into human behavior and cognition will only grow, making this a rewarding career choice for many.

Interviews with an Experimental Psychologist

Want to learn more about specific graduate programs in experimental psychology? Watch this video from Seton Hall University. You may also take a path to a Ph.D. in experimental psychology by studying other fields of psychology, like YouTuber You Can Do STEM did!

Learn how experimental psychology differs from applied psychology with this video from Psy vs. Psy. 

Famous Experimental Psychologists

The most famous experiments in psychology are often the most controversial, but they have also influenced how we think about the human mind, personality, and behavior. 

For example, Stanley Milgram’s obedience experiments showed the world what people would do if they felt they had to obey a researcher or authority figure.

Philip Zimbardo’s Stanford Prison Experiment took a terrifying look into what people could do if given a certain role in society.

Albert Bandura’s Bobo Doll Experiment showed how children pick up certain behaviors and traits through observation, including violent ones. 

Martin Seligman’s Learned Helplessness Experiments showed how run-down we can feel and how helpless we can become if we do not believe that we are in control of what happens to us. 

Finally, Jane Elliott’s Blue Eyes Brown Eyes experiment showed how easily children (and adults) can develop prejudiced behavior just because they are told they are in one group or another.

Experimental Psychology Examples

Experimental psychologists focus on one task within psychology: conducting experiments to answer the field’s largest questions. The average day of an experimental psychologist may include:

  • Sorting through participants in a study to ensure they are working with a diverse group
  • Administering tests to participants 
  • Collecting and organizing survey data
  • Looking at trends in data to come to conclusions
  • Writing about their experiences and how they influenced their conclusions
  • Sharing their work in a journal
  • Applying for grants or funding to continue conducting their experiments



Jenny Grant Rankin Ph.D.

High School Student Explores Politically Charged Views

Authentic exploratory research builds students' awareness of psychological bias.

Posted August 1, 2024 | Reviewed by Margaret Foley


Source: Yoav Aziz/Unsplash

This is the fifth in a series.

Through a series of studies, Yale University social psychologist Geoffrey Cohen ( 2003 ) showed that Democrats who identified as extremely liberal would embrace an exceedingly stringent welfare proposal and Republicans who identified as extremely conservative would embrace an exceedingly generous welfare proposal if each study participant was told that the plan was proposed by the participant’s own party. People’s biases are ever-present, and those conducting research must consider such biases to appropriately phrase questions and other study components.

When we prepare students to become researchers, we need to help them understand and plan for psychological factors so their findings can be as reliable and valuable as possible. When interviewing my latest researcher, I felt great joy recognizing his awareness of participants’ psychological biases on the politically charged topic of COVID-19 responses. This awareness was especially impressive considering that the researcher was not a university academic; instead, Nicholai Grombchevsky is a Laguna Beach High School (LBHS) student.

In Part I of this series I interviewed Jun Shen, the passionate teacher and edtech coordinator who runs LBHS’s Authentic Exploratory Research (AER) Program. AER is an independent research course inspired by Palo Alto Unified School District’s Advanced Authentic Research program. The program pairs students with adult mentors (such as LBUSD staff, industry experts, and academics) who assist the teens in researching their own big questions in fields of their choice. Shen’s explanation of how the AER program works, combined with students’ input through the rest of this interview series (from Aryana Mohajerian, Carter McKinzie, and Carter Ghere), lets us glimpse some of the different ways students can use the program to pursue individual interests, as well as how other educators can implement such a program.

LBHS student Nicholai Grombchevsky was the fourth student to give us an account of his experience in AER and the findings that his AER research produced. The way Grombchevsky integrated psychology with his study of pandemic views illustrates how complex research topics can successfully be tackled by a high school program.

Jenny Grant Rankin: In short, what was your research study about?

Nicholai Grombchevsky: With the rising importance of government-sponsored public health initiatives in modern society, I hoped—with this research—to create a model that helps balance health and safety with public trust, one that could be used to guide decisions related to research regulations and government action.

JGR: What were your most important findings?

NG: My most important finding was that there was no significant difference between people’s trust in the government before and after the COVID-19 pandemic. From this, it can reasonably be inferred that people's views of the government did not change significantly following government regulation. There were very few differences between state responses to the pandemic; many states, and the federal government, followed Centers for Disease Control and Prevention (CDC) guidelines. The only major differences in ordinances were between counties.

JGR: What was the biggest thing you learned about conducting research?

NG: I learned the importance of taking people’s biases into account. When collecting survey data, I found that the questions' phrasing changed people’s responses. I asked the same questions phrased in negative, positive, and neutral ways, and I found slight variations in the average political leaning of responses, with negatively and positively phrased questions drawing slightly more polarized responses.

JGR: What was the biggest thing you learned about communicating research?

NG: I learned that when communicating research, being, and presenting myself as, an unbiased presenter was vital. With my topic being political, I had to communicate what the data showed and remove my bias from the data analysis. I also learned that people want to discuss opinions on research, and that when I gave my opinion of the data, I had to make it clear that it was an opinion based on my data, not a neutral analysis.

JGR: What was your favorite part about AER?

NG: My favorite part of AER was the mentorship. Being paired with an industry mentor was invaluable. My mentor, Sydney Colitti, works in epidemiology data analysis. From her, I learned so much about data research and data science that I would not have learned without taking this class.


JGR: What was the most difficult part of presenting the research?

NG: The most difficult part of presenting the research was condensing over 120 responses into a presentation of less than 10 minutes. Presenting everything I wanted was impossible, so deciding which of the most important data to present was a struggle.

Grombchevsky not only considered study participants’ biases, but also his own when analyzing data. Teaching students to become researchers offers an excellent opportunity to also teach them to apply bias awareness to their own thinking, as Grombchevsky did. If all students can master this skill, we should see a brighter, less polarized future.

Cohen, G. L. (2003). Party over policy: The dominating impact of group influence on political beliefs. Journal of Personality and Social Psychology, 85(5), 808–822. https://doi.org/10.1037/0022-3514.85.5.808

Jenny Grant Rankin Ph.D.

Jenny Grant Rankin, Ph.D., is a Fulbright Specialist for the U.S. Department of State.


Photograph of Oxford University skyline

Image credit: University of Oxford Images / John Cairns Photography

New therapies developed by Oxford experts offer online support for anxiety and post-traumatic stress disorders

Four internet-based therapies developed by experts at the University of Oxford’s Department of Experimental Psychology and Department of Psychiatry are proving helpful for patients with social anxiety disorder and post-traumatic stress disorders and for children with anxiety disorders.

Urgent treatment solutions are needed for children, adolescents and adults with mental health conditions. Despite the government committing to spending 8.9% of all NHS funding on mental health treatment last year, the pipeline to build new facilities and train new staff will take years and, on its own, is insufficient to meet demand.

A suite of online therapies, developed and clinically validated by expert teams at the University of Oxford’s Experimental Psychology and Psychiatry Departments, is now available to help close this gap in care, and tackle anxiety disorders and mental health conditions across all age groups from children through to adolescents and adults. Patients work through a series of online modules with the brief support of a therapist through short phone or video calls and messages.

Randomised clinical trials by the University of Oxford team have demonstrated the impact of all four of the online platforms. Excellent results led to a new commercial licence partnership negotiated between Oxford University Innovation and Koa Health, a company well placed to leverage this cutting-edge technology and research. Koa Health looks forward to making the programmes available to patients across many NHS services, beginning in West Sussex, Oxfordshire, Buckinghamshire, Leicestershire, Bradford, North Tyneside, and London. Dr. Simon Warner, Head of Licensing & Ventures, Oxford University Innovation, said, “These four mental health digital therapies are a fantastic example of the world class expertise within the University of Oxford which has enabled us to launch cutting edge therapies with our industry partner Koa Health. The therapies are tried and tested and now readily available to help change the lives of people suffering from mental health conditions.”

The National Institute for Health and Care Excellence (NICE) early value assessment recommended 9 online therapies for use across the NHS. The therapies developed by the University of Oxford team, with funding from Wellcome and the National Institute for Health and Care Research (NIHR), represent 4 of the 9 selected therapies and will now be made widely available across NHS Trusts, mental health facilities, schools and colleges.

One in five children and young people in England aged eight to 25 have a probable mental disorder and one in four adults in England experiences at least one diagnosable mental health problem in any given year.

Professor Cathy Creswell, a psychologist at the University of Oxford, whose team developed the childhood anxiety programme, explains: 'Recent surveys suggest ongoing increases in the number of children and young people that are experiencing anxiety problems. Our online platforms, which were developed with support from the National Institute for Health and Care Research (NIHR) Oxford Health Biomedical Research Centre (OH BRC), provide practical tools with guidance and support to help tackle issues from home.'

Professor David Clark, University of Oxford, whose team developed the social anxiety disorder programme, adds: 'Social anxiety disorder starts in childhood and is remarkably persistent in the absence of treatment. Internet programmes that deliver optimal treatment for both adolescents and adults have the potential to transform lives and enable people to realise their true potential at school, in the workplace and in society.'

Professor Anke Ehlers, a psychologist at the University of Oxford, and OH BRC Co Theme Lead for Psychological Treatments who led the work on post-traumatic stress disorder (PTSD), says: 'We’ve tested the digital therapy with patients who have PTSD from a broad range of traumas. Recovery rates and improvements in quality of life are excellent. Our clients value being able to work on the treatment from home at a time convenient to them.'

The team at the University of Oxford, Koa Health and Oxford University Innovation will work together to maximise the adoption of all four therapies across NHS Trusts and schools over the coming year.

Oliver Harrison, CEO at Koa Health said: 'Koa Health is committed to delivering scalable, evidence-based interventions for mental health. The programmes developed by the Oxford teams can lower the barriers to care, deliver excellent outcomes, and reduce the cost to health services. In short, this means that our NHS is able to treat more people and improve mental health across the population. With an impeccable evidence base and approval by NICE, we see great potential to expand these programmes worldwide, helping children and adults.'

Dr John Pimm, Clinical and Professional Lead for Buckinghamshire Talking Therapies, said: 'People using our Talking Therapies services had been successfully using internet-based cognitive therapy for social anxiety disorders and post-traumatic stress disorder as part of the research trial and we are now pleased that our therapists will be able to offer this innovative treatment to more people using the Koa platform.'

Dr. Jon Wheatley, Clinical Lead, City and Hackney, NHS North East London, said: 'City and Hackney Talking Therapies are looking forward to embracing digital technology in response to increasing patient demand. We are proud to be working with Koa Health as an early adopter of these innovative solutions that enable therapists to deliver gold standard evidence-based treatments through internet programmes that are engaging and empowering for patients.'

Professor Miranda Wolpert, Director of Mental Health at Wellcome, said: 'These important online therapies have arisen from more than three decades of thorough science. Digital therapies have the potential to transform millions of people’s lives around the world. We look forward to supporting more digital innovation in the years to come.'



Fall 2024 Career and Internship Fairs

Renowned for our talented student body, we are excited to connect top employers with exceptional CU Boulder students through a series of upcoming fairs this fall 2024. This is your opportunity to engage with motivated students across various disciplines.  

Register now to ensure your spot at these events. All times are MDT. 


Fall 2024 Career Services Fair Dates 

These career fairs are hosted by CU Boulder Career Services.    

Virtual Career & Internship Fair | Tuesday, Oct. 1, 10 a.m. - 3 p.m.

This virtual fair will be open to all industries and all majors including science, technology, engineering, math, art, design, humanities, anthropology, social sciences, sociology, economics, government, consulting, sales, communications, education, human services, human resource management, finance, sales, business, non-profit, education, environmental studies, ethnic studies, philosophy, healthcare, real estate, law, biology, international affairs, linguistics, English, language and foreign language, political science, psychology and behavioral sciences, public administration, social work and more.  

Register for Virtual Career Fair

STEM Career & Internship Fair Day 1: Spotlight on Computing and Software | Wednesday, Oct. 2, 11 a.m. - 4 p.m.

This fair will serve industries looking to hire students in computer science, computing software engineering, information science, computer engineering, computer programming, computer systems networking and telecommunications, user experience and social computing, information systems management, software design, math and applied math, data mining, statistics and data science.

Register for STEM Day 1: Spotlight on Computing and Software

STEM Career & Internship Fair Day 2: Spotlight on Aerospace, Defense and More | Thursday, Oct. 3, 11 a.m. - 4 p.m.

This fair will serve industries looking to hire students in aerospace engineering, astronomy, astronomy and space exploration, atmospheric and oceanic sciences, mechanical engineering, chemical engineering, chemistry, biochemistry, electrical engineering, information systems management, cyber security, physics, biology, technology, government, international affairs, political science, defense and math.

Register for STEM Day 2: Spotlight on Aerospace, Defense, and more

Roam Anywhere Arts, Humanities, Environmental & Social Impact Career & Internship Fair | Tuesday, Oct. 8, 11 a.m. - 4 p.m.

This fair will serve industries looking to hire Arts & Sciences students in art, design, humanities, anthropology, social sciences, sociology, economics, government, consulting, sales, communications, education, human services, human resource management, finance, sales, business, non-profit, education, environmental studies, ethnic studies, philosophy, healthcare, real estate, law, biology, ecology, geography, geology, international affairs, linguistics, English, language and foreign language, political science, psychology and behavioral sciences, public administration, social work and more. 

Register for Roam Anywhere Career Fair

Career Services Fair Pricing 

In-person Fairs*

*Please note, these prices reflect Career Services-specific fairs. Our partner fairs may differ in prices. 

Platinum Sponsor - $5,000.00 (1 spot)  

Includes: Registration with prime table location in the Glenn Miller Ballroom. This sponsorship also includes two tables in the ballroom (6'x30" with tablecloth), up to 10 reps, 10 parking passes and 10 lunch tickets for each. Marketing benefits include highlighted presence on the Career Services social media and printed promotion on the career fair guide map.  

Gold Sponsor - $3,000.00 (2 spots)  

Includes: Two tables (6'x30" with tablecloth) with prime location in the Glenn Miller Ballroom, eight reps, eight parking passes and eight lunch tickets. Marketing benefits include highlighted presence on Career Services social platforms and printed logo in the student event guide.  

Silver Sponsor - $2,000.00 (5 spots)  

Includes: One table (6'x30" with tablecloth) with perimeter location in the Glenn Miller Ballroom, six reps, six parking passes and six lunch tickets. Marketing benefits include organizational logo on printed material.

For-Profit Employers

For-Profit Premium Level Location - $595.00 (23 spots) 

Includes: Four reps, four parking passes, four lunch tickets, one 6’x30” table with tablecloth and premium level location in the Glenn Miller Ballroom. Premium spots are closer to power sources. 

For-Profit Mid-level Location - $545.00 (48 spots) 

Includes: Four reps, four parking passes, four lunch tickets, and one 6’x30” table with tablecloth in the Glenn Miller Ballroom.  

For-Profit Basic Level Location - $495.00 (22 spots)  

Includes: Booth spot located in adjacent room 235, four reps, four parking passes, four lunch tickets, and one 6’x30” table with tablecloth. 

Non-Profit Employers

Non-Profit Premium Level Location - $345.00 

Includes: Four reps, four parking passes, four lunch tickets, one 6’x30” table with tablecloth and premium location in the Glenn Miller Ballroom. Premium spots are closer to power sources.  

Non-Profit Mid-Level Location - $320.00   

Includes: Four reps, four parking passes, four lunch tickets, one 6’x30” table with tablecloth and mid-level location in the Glenn Miller Ballroom.  

Non-Profit Basic Level Location - $270.00 

Includes: Booth spot located in adjacent room 235, four reps, four parking passes, four lunch tickets and one 6’x30” table with tablecloth. 

Virtual Fairs*

*Please note, these prices reflect Career Services-specific fairs. Our partner fairs may differ in prices.   

Virtual Fair Sponsor - $500.00 (2 spots) 

Includes highlight on social media and marketing to students  

For-Profit Registration - $270.00  

Non-Profit Registration - $170.00

Fall 2024 Campus Partner Fair Dates

These fairs are not hosted directly by the Career Services office.   

CU Boulder Athletics Buffs to Biz | Tuesday, Sept. 10, 4-7 p.m.

This fair supports student athletes in their career exploration and advancement. 

Register for CU Boulder Athletics Buffs to Biz

LEEDS School of Business Meet the Firms | Wednesday, Sept. 11, 5:45-7:45 p.m.

Looking to hire accounting students? We invite you to Fall Meet the Firms hosted by Beta Alpha Psi. Any questions can be directed to [email protected] .

Civil, Environmental, Architectural, Engineering Career & Internship Fair | Wednesday, Sept. 25, 12 p.m. - 4 p.m.

This fair serves industries looking to hire students in civil engineering, environmental engineering, architectural engineering, mechanical engineering and construction. Register for the Civil, Environmental, Architectural, Engineering Career & Internship Fair .   

LEEDS School of Business Finance and Real Estate Industry Fair | Wednesday, Sept. 25, 5-7 p.m.

Please note this is an invite-only event, and you must complete the Handshake survey for consideration to attend. This networking and hiring event promotes your company and any internship or full-time opportunities. It offers an intimate networking opportunity with students interested in finance and real estate. 

LEEDS School of Business Management, Marketing, Sales and Business Analytics Industry Fair | Thursday, Sept. 26, 5 - 7 p.m.

Please note this is an invite-only event, and you must complete the Handshake survey for consideration to attend. This networking and hiring event promotes your company and any internship or full-time opportunities. This event offers an intimate networking opportunity with students interested in management, marketing, sales, and business analytics. 

Biomedical Engineering Society Symposium | Monday, Oct. 7, 4-7 p.m.

This symposium allows companies, organizations and schools to showcase their company mission, values, products, career opportunities and academic programs, as well as build long-lasting relationships with the CU Boulder community. The Biomedical Engineering (BME) program has an interdisciplinary curriculum that provides a balanced education in the fundamentals of engineering, biology and medicine. The coursework, experience and skills prepare students to become researchers, consultants, medical doctors and engineers in the med-tech space and beyond.

Register for Biomedical Engineering Society Symposium

Environmental Design Graduate School Fair | Wednesday, Oct. 9, 12-2 p.m.

This fair is for graduate schools looking to connect with students in the Environmental Design program.  

Register for Environmental Design Grad Fair

CMCI Career & Internship Fair | Wednesday, Oct. 16, 3-5 p.m.

This fair serves employers looking to hire students from the College of Media, Communication and Information (CMCI) with majors in advertising, public relations, media design, communication, journalism and information science. This fair has a free registration for employers. 

LEEDS School of Business Virtual Career Fair | Wednesday, Oct. 16, 3-6 p.m.

Join us for the Leeds Virtual Career Fair to connect with students. Host info sessions, 1:1 networking or a mix of both. Stay as long as you like and bring as many reps as you wish. 

Physics & Quantum Career and Internship Fair | Thursday, Oct. 29, 3:30-6 p.m.

This event will feature employers across all areas of theoretical, experimental and computational physics. The fair will connect undergraduate and graduate physics students and recent alumni with laboratory and industry leaders to learn about internships and employment opportunities.

Visit our employers page for more resources and learn more about our inclusive hiring and retention resources . 

COMMENTS

  1. The Practice of Experimental Psychology: An Inevitably Postmodern

    The aim of psychology is to understand the human mind and behavior. In contemporary psychology, the method of choice to accomplish this incredibly complex endeavor is the experiment. This dominance has shaped the whole discipline from the self-concept as an empirical science and its very epistemological and theoretical foundations, via research ...

  2. Experimental Psychology Studies Humans and Animals

    Experimental psychologists are interested in exploring theoretical questions, often by creating a hypothesis and then setting out to prove or disprove it through experimentation. They study a wide range of behavioral topics among humans and animals, including sensation, perception, attention, memory, cognition and emotion.

  3. Pursuing a Career in Experimental Psychology

    According to APA's 2009 salary survey, annual salaries for doctoral-level experimental psychologists ranged from $76,090 to $116,343 depending on the psychologist's position. The survey captured salary data for experimental psychologists working in faculty positions, research positions, research administration and applied psychology.

  4. What are Experimental Psychology Degree Careers? [2024]

    Experimental psychology is the subfield of psychology that uses scientific methods to collect data regarding psychological and social issues. The data is used to help social scientists learn more about human and animal behavior. Research is conducted using controlled experiments and the results are often presented to outside organizations ...

  5. How to Become an Experimental Psychologist

    The minimum education requirement is usually a master's degree in general or experimental psychology. A doctorate-level degree in psychology is usually required to work at a university. However, you do not have to get a degree in experimental psychology to work as an experimental psychologist. Doctorate programs in psychology also provide ...

  6. Experimental Psychology

    At the Oxford Department of Experimental Psychology, our mission is to conduct world-leading experimental research to understand the psychological and neural mechanisms relevant to human behaviour. Wherever appropriate, we translate our findings into evidence-based public benefits in mental health and wellbeing, education, industry, and policy ...

  7. Experimental Psychology

    This document provides readers with a printout of a conference, slide presentation given by the American Psychological Association's Division 3 on the field of experimental psychology. Opening with a general discussion of what experimental psychology is and the role of experimental psychologists, the presentation offers a brief overview of the history of the field, how it is practiced today ...

  8. Journal of Experimental Psychology: General

    The Journal of Experimental Psychology: General ® publishes articles describing empirical work that is of broad interest or bridges the traditional interests of two or more communities of psychology. The work may touch on issues dealt with in JEP: Learning, Memory, and Cognition, JEP: Human Perception and Performance, JEP: Animal Behavior Processes, or JEP: Applied, but may also concern ...

  9. Experimental psychology

    experimental psychology, a method of studying psychological phenomena and processes.The experimental method in psychology attempts to account for the activities of animals (including humans) and the functional organization of mental processes by manipulating variables that may give rise to behaviour; it is primarily concerned with discovering laws that describe manipulable relationships.

  10. The psychology of experimental psychologists: Overcoming cognitive

    Introduction. The past decade has been a bruising one for experimental psychology. The publication of a paper by Simmons, Nelson, and Simonsohn (2011) entitled "False-positive psychology" drew attention to problems with the way in which research was often conducted in our field, which meant that many results could not be trusted. Simmons et al. focused on "undisclosed flexibility in data ...

  11. Psychological Sciences, Experimental Psychology Concentration, M.A

    It also provides a foundation for employment in settings such as in the federal government or private sector. Overview. Experimental psychology is the area of psychology that utilizes experimental methodology in the science of behavior and mental processes. It is an umbrella term that encompasses the efforts of researchers in several areas of ...

  12. Careers in Experimental Psychology

    Career With a Degree in Experimental Psychology Human Factor Psychologist. You can become a human factor psychologist with a doctorate in experimental psychology. Many government agencies and organizations hire psychologists with an experimental psychology degree to help increase employee morale, productivity, quality and satisfaction.

  13. How Does Experimental Psychology Study Behavior?

    The experimental method in psychology helps us learn more about how people think and why they behave the way they do. Experimental psychologists can research a variety of topics using many different experimental methods. Each one contributes to what we know about the mind and human behavior. 4 Sources.

  14. The Society for Experimental Psychology and Cognitive Science

    The mission of Division 3 is as follows: Promote research and teaching in the general field of experimental psychology and its many subdisciplines. Stimulate the exchange of information among its members and with other sciences. Support experimental psychology — and the science of psychology more broadly — through research, advocacy ...

  15. Experimental Psychologist: Role, Responsibilities & Education

    Experimental psychology is a branch of psychology that emphasizes the scientific study of behavior and mental processes. It seeks to uncover human behavior's underlying mechanisms and causes through systematic observation, measurement, and experimentation. ... Government Agencies and Non-Profit Organizations: Experimental psychologists may ...

  16. Experimental Psychology

    Experimental Psychology ~ Psychological Sciences ~ Graduate Psychology ~ CHBS ... It also provides a foundation for work in applied settings such as in the federal government. Overview Experimental psychology is the area of psychology that utilizes experimental methodology in the science of behavior and mental processes. It is an umbrella term ...

  17. Some Practical Applications of Psychology in Government

    PRACTICAL PSYCHOLOGY IN GOVERNMENT 739 to be a brilliant success.' Tests for prospective telegraphers, signalmen, and look-out men were also worked out and verified by the experimental method. Another important contribution was the development of the army trade tests. While the subject-matter which made up these tests was gathered from many occupa-

  18. PDF Science in Action

    settings, including universities, research centers, the government and private businesses. The exact type of research an experimental psychologist performs may depend on a number of factors, including his or her educational background, interests and area of employment. Often, psychologists with training in experimental psychology contribute across

  19. Experimental Psychology

    Local Government Law. Military and Defence Law. Parliamentary and Legislative Practice. Social Law. Construction Law. ... concrete and practical ideas and activities that might fit and work in your course to engage students in learning experimental psychology research skills across topic areas in the psychology curriculum, whether you are a ...

  20. Government Resources

    Research Guides: PSY 4150 - Experimental Psychology/Senior Seminar: Government Resources

  21. Experimental Psychologist Career (Salary + Duties + Interviews)

    Government agencies; Private businesses ; Research Opportunities in Experimental Psychology: From Assistant to Director. Experimental psychology isn't just limited to conducting experiments; it's also deeply intertwined with the broader world of research in psychology. Whether just starting in the field or aiming for leadership positions, there ...

  22. U.S. Government Mind Control Experiments

    A stream of other movies shortly after the 1977 Senate hearings touched on many citizen's fears of government psychological abuse (e.g., The Secret of NIMH in 1982 and Project X in 1987).

  23. What is Experimental Psychology?

    Experimental psychology is an interesting subdiscipline of psychology. On the one hand, it refers to an approach to studying human behavior - the standardized methods and techniques used to collect and analyze data. On the other hand, experimental psychology is a unique branch, an applied field of psychology that explores theoretical ...

  24. High School Student Explores Politically Charged Views

    Nicholai Grombchevsky: With the rising importance of government-sponsored public health initiatives in modern society, I hoped—with this research—to create a model that helps balance health ...

  25. New therapies developed by Oxford experts offer online support for

    Four internet-based therapies developed by experts at the University of Oxford's Department of Experimental Psychology and Department of Psychiatry are proving helpful for patients with social anxiety disorder ... adolescents and adults with mental health conditions. Despite the government committing to spending 8.9% of all NHS funding on ...


  26. Fall 2024 Career and Internship Fairs

    STEM Career & Internship Fair Day 1: Spotlight on Computing and Software | Wednesday, Oct. 2, 11 a.m. - 4 p.m. This fair will serve industries looking to hire students in computer science, computing software engineering, information science, computer engineering, computer programming, computer systems networking and telecommunications, user experience and social computing, information systems ...