The Implicit Association Test (IAT)

Can implicit associations be measured? How do they compare to self-reported attitudes and beliefs? Take the IAT to find out!


"Romeo and ______." "Salt and ______." "Thunder and _________." You probably never tried to memorize these phrases consciously. And yet, your mind effortlessly fills in the blanks, thanks to its tremendous capacity to build associations — the building blocks of learning.

Most of these associations are useful (or at least harmless). But through years of exposure to stereotypes and other cultural pairings, our minds also develop implicit biases: culturally learned associations that may even conflict with our consciously held beliefs. (Mahzarin Banaji, a co-developer of the test, disavows all biases, and yet she too shows bias—a learned association—on many tests.)

What implicit biases might exist in your mind? How do they compare with your explicit beliefs? Take one of our many tests to find out!


(Note: this test requires a computer keyboard. If you are using a mobile/touchscreen device, take the Touchscreen IAT.)

Test yourself

A note before we begin

The Implicit Association Test (IAT) has been taken millions of times by people all over the world. We, the creators of these specific tests, have taken these and many other tests and have found them to be revealing and beneficial in understanding ourselves.

Please note that taking this test is an entirely voluntary act. Nobody can require you to take this test or reveal your test results. It’s best not to take this test if you are concerned about receiving information that may cause you discomfort.

The test results are for your information only; no data will be collected.

Of course, in making these tests available, we hope you will learn something from this experience. Please proceed if you remain interested in learning about your implicit bias.

Choose your test

Explicit attitudes.

All information you provide here is completely anonymous. It will not be saved after the test is complete.


And now, to the test itself!


Locate the "E" and "I" computer keys on your keyboard. You will use these two keys (and space bar) for this test by placing your left index finger on the "E" key and your right index finger on the "I" key.

You will see the following appear on the screen, one at a time. Take a second to skim them.


There are five parts, and the instructions change for each part. Pay attention!


The Implicit Association Test (IAT) measures attitudes and beliefs that people may be unwilling or unable to report. The IAT may be especially interesting if it shows that you have an implicit attitude that you did not know about. 


The Implicit Association Test


Among the general public and behavioral scientists alike, the Implicit Association Test (IAT) is the best known and most widely used tool for demonstrating implicit bias: the unintentional impact of social group information on behavior. More than forty million IATs have been completed at the Project Implicit research website. These public datasets are the most comprehensive documentation of IAT and self-reported bias scores in existence. In this essay, we describe the IAT procedure, summarize key findings using the IAT to document the pervasiveness and correlates of implicit bias, and discuss various ways to interpret IAT scores. We also highlight the most common uses of the IAT. Finally, we discuss unanswered questions and future directions for the IAT specifically, and implicit bias research more generally.

Kate A. Ratliff is Associate Professor of Psychology at the University of Florida and past Executive Director at Project Implicit. She has published in such journals as Journal of Personality and Social Psychology, Psychological Science, and Journal of Experimental Psychology: Applied .

Colin Tucker Smith is Associate Professor of Psychology at the University of Florida. He serves on the Scientific Advisory Board at Project Implicit and has published in such journals as Journal of Experimental Social Psychology and Personality and Social Psychology Bulletin .

MON___, PAN___, SHE___: fill in the blanks to complete these word stems. What did you come up with? Imagine that before responding to these word stems, you were casually exposed to a list of animal names. Research shows that, in that case, you would be more likely to complete the stems with Monkey, Panda, and Sheep than Monday, Pancake, and Sheet. This residual effect of prior learning can occur even if you are unable to recall the animal word list when asked. This example illustrates implicit memory. 1 Although never directly instructed to use previous information, people’s responses indicate a residual effect of what they have learned previously.

In 1995, psychologists Anthony G. Greenwald and Mahzarin R. Banaji introduced the idea of implicit attitudes, arguing that the processes underlying implicit memory effects can also apply in the social world. 2 In the same way that traces of experience with word lists can influence word stem completions, traces of experiences can also influence evaluations of social groups—even when we are unable to verbally report on those evaluations. Shortly after Greenwald and Banaji first wrote on implicit attitudes, Greenwald published the Implicit Association Test (IAT) as a performance-based measure of these implicit social cognitions, including implicit attitudes (evaluations of groups), implicit self-esteem (attitudes toward oneself), and implicit stereotypes (beliefs about traits that are characteristic of a group). 3

In this essay, we describe the IAT procedure, summarize key findings using the IAT, and discuss various ways to interpret IAT scores. We also highlight the most common uses of the IAT. Finally, we discuss unanswered questions and future directions for the IAT specifically, and implicit bias research more generally.

The idea behind the IAT is quite simple: people perform tasks better when a response relies on stronger mental links than when it relies on weaker ones. Because the IAT is a procedure, not a discrete measure, and researchers vary the features of the task depending on their preferences, there is no single IAT. However, most IATs follow the same general format; let us walk through the age-attitudes version of the task.

Participants in the IAT are tasked with sorting words or pictures into categories as quickly and accurately as possible. There are two key blocks of trials within the IAT in which two categories share the same response (such as a key on a computer keyboard or a button on a touchscreen device). In the block of trials pictured in Figure 1, if an elderly face or a positive word appears, you would press the “E” key. If a young-adult face or a negative word appears, you would press the “I” key. You would first complete a set of trials sorting words and pictures in this way. Then the categories switch so that young-adult faces and positive words share the same response key, and older-adult faces and negative words share the same response key, and you would go through the process again with the updated pairings.

All the while, the computer is recording how long it takes for you to make a correct response on each trial. An IAT score reflects the standardized difference in average response time between the two sorting conditions. If someone completes the task faster when young people and positive words share the same response key, and old people and negative words share the same response key—as in the bottom picture in Figure 1—their IAT score would reflect an implicit bias favoring young people over old people. If they complete the task faster when old people and positive words share the same response key and young people and negative words share the same response key—as in the top picture—their IAT score would reflect an implicit bias favoring old people over young people. 4

Figure 1: Two stacked screens from the Implicit Association Test, each showing a face in the center with category labels in the upper corners. In the top screen (an older face), the labels read “Press E for old people or good words” and “Press I for young people or bad words.” In the bottom screen (a younger face), they read “Press E for young people or good words” and “Press I for old people or bad words.”
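The scoring logic described above can be sketched in a few lines. This is a deliberately simplified illustration, not the full published scoring algorithm (which also handles error trials, latency trimming, and practice blocks); the function name and the latencies are hypothetical.

```python
import statistics

def d_score(compatible_rts, incompatible_rts):
    """Simplified IAT score: standardized difference in mean
    response latency (ms) between the two sorting conditions.
    Positive values mean faster responses when the 'compatible'
    pairing (e.g., young + good) shares a response key."""
    pooled_sd = statistics.stdev(compatible_rts + incompatible_rts)
    return (statistics.mean(incompatible_rts)
            - statistics.mean(compatible_rts)) / pooled_sd

# Hypothetical latencies: faster when young + good share a key
young_good_block = [610, 650, 590, 700, 640]  # compatible block
old_good_block = [820, 760, 900, 810, 780]    # incompatible block
score = d_score(young_good_block, old_good_block)
# score > 0 here, reflecting an implicit preference for young people
```

Standardizing by the pooled spread, rather than using the raw millisecond difference, is what lets scores be compared across people who respond at very different overall speeds.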

In 2003, Greenwald and Banaji, together with psychologist Brian Nosek, incorporated Project Implicit, a nonprofit organization with a public education mission and an international research collaboration between behavioral scientists interested in implicit social cognition. The core feature of Project Implicit is a demonstration website, set up in the model of an interactive exhibit at a science museum, where visitors can complete an IAT on a topic of their choice. As of late 2023, more than eighty million study sessions have been launched and more than forty million IATs completed at the Project Implicit website—an IAT every twenty-one seconds. 5 In addition, there is an uncounted multitude of people who have interacted with the IAT in classroom settings or as part of an educational session at their place of work.

Over the past twenty-five years, we have learned a lot about implicit bias as measured by the IAT. Greenwald and colleagues’ paper introducing the IAT has been cited more than sixteen thousand times since 1998. Across the forty million IATs completed at the Project Implicit website, IAT scores reflect a moderate to strong bias for systemically advantaged groups over systemically disadvantaged or minoritized groups. As seen in Figure 2, there is a clear pattern in favor of straight people (relative to gay people), thin people (relative to fat people), abled people (relative to disabled people), White people (relative to Black people), cisgender people (relative to transgender people), and young people (relative to old people). Notably, people self-report these same biases, but the strength of these self-reported biases is considerably weaker.

Figure 2: Six pie charts of scores from the Implicit Association Test, showing biases favoring straight people (relative to gay people), thin people (relative to fat people), abled people (relative to disabled people), White people (relative to Black people), cisgender people (relative to transgender people), and young people (relative to old people).

A notable limitation of the IAT, like most other implicit measures, is that it assesses evaluations based on only one clear identity or social group at a time. In real life, of course, people have multiple identities and these identities intersect. In other words, people belong to age and racial and gender groups, and these identities intersect to produce different patterns of experiences, both for the target and perceiver. People’s identities in real life are often also far more ambiguous than the stimuli used in implicit measures of bias.

In addition to the direction and strength of an IAT score (that is, which group it favors and whether we describe it as slight, moderate, or strong), we can also think about the pervasiveness of IAT-measured implicit bias by looking at the percentages of respondents on each task whose IAT score indicates a bias favoring one group over another. For example, approximately 67 percent of visitors to the Project Implicit website have an IAT score indicating some degree of implicit bias toward White people (relative to Black people). And we see similar patterns of IAT scores on tasks indicating an implicit bias toward thin people (relative to fat people), abled people (relative to disabled people), straight people (relative to gay people), young people (relative to old people), and cisgender people (relative to transgender people).
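The pervasiveness statistic described here is simply the share of scores past a cutoff in each direction. A minimal sketch, assuming scores where positive values favor the advantaged group; the 0.15 cutoff echoes the commonly reported "slight bias" threshold, and the sample scores are invented:

```python
def pervasiveness(scores, cutoff=0.15):
    """Percent of IAT scores favoring the advantaged group
    (>= cutoff), favoring the disadvantaged group (<= -cutoff),
    and near-neutral (the remainder)."""
    n = len(scores)
    favor = 100 * sum(s >= cutoff for s in scores) / n
    against = 100 * sum(s <= -cutoff for s in scores) / n
    return favor, against, 100 - favor - against

# Invented scores for a small sample of respondents
sample = [0.6, 0.3, -0.1, 0.2, 0.05, 0.8, -0.4, 0.5, 0.1, 0.35]
favor, against, neutral = pervasiveness(sample)
# For this invented sample: 60% favor, 10% against, 30% neutral
```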

Overall, there are few individual variables that consistently relate to IAT scores. Meta-analytically across all the tasks at the Project Implicit site that are about social groups, we see essentially no relationship between IAT scores and education, religiosity, or age, and we see small relationships between IAT scores and prior IATs completed, political orientation, and gender. There are two factors that correlate fairly substantially with IAT scores. One is self-reported attitudes. People who report having more bias also have more biased performance on the IAT. The other factor that matters consistently across almost every task is relevant group membership.

A much higher percentage of heterosexual participants than gay, lesbian, and bisexual participants have an IAT score that reflects bias in favor of straight people: 62 percent compared to 27 percent. Similarly, a higher percentage of White participants than Black participants have an IAT score reflecting an implicit bias toward White people relative to Black people: 73 percent compared to 41 percent. That said, it is not trivial that 41 percent of Black participants have an IAT score reflecting an implicit bias in favor of White people (Figure 3).

Figure 3: Two pie charts of race IAT scores by participant group. Among White participants, 66.7 percent showed a preference for White people and 14 percent a preference for Black people; among Black participants, 32.5 percent showed a preference for White people and 39.8 percent a preference for Black people.

Another opportunity that this accumulated data set of IAT scores affords researchers is the ability to track whether levels of implicit bias have changed over time. Banaji and psychologist Tessa Charlesworth summarized patterns of change among 7.1 million data points collected between 2007 and 2020. 6 They found that IAT scores evidencing preferences for young people (relative to old people), abled people (relative to disabled people), and thin people (relative to fat people) have remained fairly stable over time, but preferences for lighter skin (relative to darker skin), White people (relative to Black people), and straight people (relative to gay people) have all decreased in magnitude (that is, shifted toward neutrality over time). This rate of reduction is particularly remarkable for the latter task. Bias favoring straight people (relative to gay people) was reduced by 65 percent across the thirteen-year period sampled. It is also worth noting that these rates of change are happening more quickly for some people than for others. For example, younger people and political liberals showed a larger decrease in implicit anti-gay bias and implicit anti-Black bias than did older people and political conservatives. To be clear, those decreases are evident in all groups, but they are happening faster among some people than others. 7

Another approach to looking at the influence of time on IAT scores is to compare average IAT scores in some time frame before and after a particular event. For example, the IAT-measured preference for White people (relative to Black people) in the United States is greater when the economy is worse, and the preference for thin people (relative to fat people) was higher shortly after twenty different highly publicized fat-shaming statements made by celebrities. 8 In addition, the bias on the IAT favoring straight people (relative to gay people) decreased at the state level with implementation of same-sex marriage legalization. 9 In sum, it is clear that IAT scores change slowly over time and also respond to temporary fluctuations in current events.
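A percent reduction like the 65 percent figure above is just the shift of the average score toward zero between the first and last points of a series. A sketch on invented yearly means (not the actual Project Implicit values):

```python
def percent_reduction(yearly_means):
    """Percent shift toward neutrality (zero) between the first
    and last values in a series of average IAT scores."""
    first, last = yearly_means[0], yearly_means[-1]
    return 100 * (first - last) / first

# Invented yearly means shrinking toward zero, shaped to echo
# the 65 percent reduction reported for the sexuality task
means_by_year = [0.40, 0.37, 0.33, 0.28, 0.22, 0.17, 0.14]
print(round(percent_reduction(means_by_year)))  # → 65
```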

When drawing so many conclusions based on one data source, it is important to point out that visitors to the Project Implicit website are certainly not representative of the population from which they are drawn. That said, in terms of sheer numbers, the number of data points in the Project Implicit sample is bigger than the total combined population of eighteen U.S. states. It is certainly the largest database of IAT scores in existence and probably the largest for self-reported biases as well. There is also growing evidence that data from Project Implicit samples perform similarly to those collected from nationally representative samples. 10 Thus, because of the scale of IAT data available, it can provide a reasonably good inference about societal-level trends that can complement traditional self-report surveys such as those collected by Gallup or Pew Research Center that rely on random—though generally still not representative—sampling.

You may have noticed that, so far, we have described and discussed IAT scores. The data make clear that IAT scores suggest strong and pervasive biases favoring dominant, societally privileged groups over those that are marginalized and minoritized. But how should we think about what IAT scores are, and what implicit bias is?

One of the central tasks of the behavioral sciences is developing procedures and measures to serve as a proxy for psychological constructs. With traditional self-report measures of psychological constructs, this can be straightforward. For example, the ten-item Rosenberg Self-Esteem Scale asks people the extent to which they agree with items like “On the whole, I am satisfied with myself” and “I have a positive attitude toward myself.” 11 This type of instrument is high in face validity; in other words, the measurement procedure makes logical sense as a way to assess the construct of interest. The IAT, however, is not as high in face validity. There is quite a leap between the procedure—sorting words and pictures into categories—and what the test purports to measure—evaluations of social groups. Thus, to demonstrate that the IAT can in fact measure evaluations of social groups, we need to look to other kinds of validity. For example, the IAT relates to other measures of evaluations (convergent validity), it does not relate to measures it should be different from (discriminant validity), and it varies based on one’s own group memberships, as discussed previously, in ways that make sense (known groups validity). 12 This could be a lengthy discussion, but in sum, the majority of researchers agree that enough validity evidence has accrued to conclude that the IAT does, in fact, serve as a valid and reliable way to assess individual differences in evaluations of and stereotypes about social groups, though perhaps with a bit more noise than self-report measures. 13

But let us return to our original questions in this section: what are IAT scores and what is implicit bias? Even after twenty-five years of research, these are still under vigorous debate, with some arguing that the implicitness construct should be done away with altogether due to its ambiguity and lack of precision, or because it offers little above and beyond self-report measures. 14 While we disagree with this conclusion, the value of the implicitness construct is one of the most important questions in this line of research, and it is worth summarizing a few of the different ways that scholars think about implicit bias. 15

The earliest and probably still most common idea is that implicit biases reflect some kind of latent mental construct—a hidden force inside of people’s minds—that cannot be directly observed. In this view, implicit biases are something people “have,” as in 60 percent of U.S. participants have an implicit bias favoring cisgender people over transgender people. In their 1995 paper introducing implicit cognition, Greenwald and Banaji defined implicit attitudes as “introspectively unidentified (or inaccurately identified) traces of past experience that mediate responses.” 16 The interpretation of this definition (though perhaps not the intention) is that implicit biases are outside of conscious awareness and inaccessible to introspection. The field’s reliance on this definition for more than a decade is likely how unconscious bias and implicit bias came to be used synonymously. In line with this interpretation, the Project Implicit website defined implicit attitudes and stereotypes for many years as those that people are “unwilling or unable to report.”

It has become clear, however, that people do have at least some awareness of their biases, as evidenced by stronger correlations between IAT scores and self-report under particular conditions and by the fact that people are at least somewhat able to predict their IAT scores. 17 It is increasingly obvious that defining implicit bias as an evaluation that is entirely outside of conscious awareness would functionally eradicate the construct, as we currently have no measures that can meet the burden of proof of producing effects that are entirely outside of conscious awareness. 18

We have argued that if we must distinguish between whether an effect is implicit or explicit bias, (un)consciousness is not the best factor by which to do so, because awareness: 1) is complex and multifaceted, 2) is nearly impossible to prove, and 3) ignores the importance of an actor’s intentions. 19 Instead, we argue that the key feature of the IAT that distinguishes it from the biases that people self-report is automaticity. Psychologists Agnes Moors and Jan De Houwer conceptualize automaticity as a process that influences task performance (that is, behavior) in a way that has one or more of the following features: unintentional, goal-independent, autonomous, unconscious, efficient, and/or fast. 20 Of the particular features of automaticity, intentionality (whether or not one has control over the startup of a process) and control (whether or not one can override a process once started) are highly relevant to distinguishing between implicit and explicit bias. 21

A vexing problem for the latent mental construct approach to implicit bias is that scores on the IAT and other implicit measures demonstrate group-based preferences that are quite large but are also somewhat unstable. In other words, the same person’s score is likely to differ over time, which is not consistent with the idea of deeply ingrained, overlearned unconscious preferences. In response, recent models propose that intergroup attitudes are better understood as group-level constructs. For example, the prejudice-in-places model posits that places can be characterized as biased to the extent that they create predictable, systematic inequalities through formal (for example, laws) and informal (for example, norms) mechanisms that disadvantage some groups relative to others. 22 Variations in these regional inequalities then differentially inform individual-level intergroup attitudes. While the prejudice-in-places model does not distinguish between implicit and explicit intergroup attitudes, the “bias of crowds” model takes a similar approach, but focuses on implicit attitudes. It proposes that implicit attitudes across a group of people reflect rather than cause systemic biases. This perspective also assumes that implicit bias reflects what comes to mind most easily at the time, and that measures like the IAT reflect situations more than people. Biases appear stable to the extent that they reflect systemically biased social structures, but they can fluctuate depending on one’s current context. The interpretation of this approach is that IAT scores are much better measures of biases held by places than biases held within minds. 23 Or, less radically, that the biases that exist within minds are critically impacted by physical environments.

Support for geographic, intergroup bias comes primarily through research using publicly available data from Project Implicit that aggregate individual IAT scores at some geographic unit (for example, county-level race bias) and then correlate those scores with another indicator that is also aggregated within the same unit, like racial disparities in school discipline, test scores, and police stops. 24 Notably, these county-level differences are not random. History casts a long shadow. For example, IAT scores demonstrating anti-Black bias among White people are higher today in counties and states that were more dependent on the labor of enslaved Black people in 1860, suggesting that historical factors create structural inequalities that are transmitted generationally and that lead to implicit biases favoring White people. 25
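The aggregation-and-correlation design described above can be sketched as follows. The records, the disparity indicator, and every value below are invented for illustration; real studies use millions of Project Implicit scores and official county-level statistics.

```python
from collections import defaultdict
from statistics import mean

def county_means(records):
    """Aggregate individual (county, IAT score) records
    into county-level mean scores."""
    by_county = defaultdict(list)
    for county, score in records:
        by_county[county].append(score)
    return {c: mean(v) for c, v in by_county.items()}

def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Invented individual scores and an invented county-level
# disparity indicator (e.g., a school-discipline gap index)
records = [("A", 0.5), ("A", 0.4), ("B", 0.2), ("B", 0.1),
           ("C", 0.6), ("C", 0.7), ("D", 0.0), ("D", 0.1)]
disparity = {"A": 1.8, "B": 0.9, "C": 2.2, "D": 0.4}

means = county_means(records)
counties = sorted(means)
r = pearson([means[c] for c in counties],
            [disparity[c] for c in counties])
# r near +1 here: counties with higher mean IAT bias also show
# larger disparities in this invented data
```

Note that the correlation lives entirely at the county level; nothing in this design claims that any individual respondent's score predicts their own behavior.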

The idea that something as important as racial bias exists in places more so than in people can be disorienting for many of us born and raised within cultures that predominantly treat places and spaces as neutral and passive while prioritizing the importance of individual actors and their internal states and motivations. In general, when most of us think about a concept like sexism, we think about people (like misogynists). We are unlikely to think about spaces causing people to be sexist. Most researchers have a similar bent. Relatedly, the idea that IAT scores reflect context and history is a radical departure from earlier conceptualizations of implicit bias in two ways: 1) it considers inequality and discrimination as a cause, rather than a consequence, of implicit bias, and 2) it implies that countering implicit bias may be accomplished more effectively by changing the environments in which we live rather than the individuals who live within those environments.

De Houwer provides a compelling argument that rejects the framing of IAT scores as necessarily reflecting implicit, hidden mental biases that reside inside of minds, and instead conceptualizes performance on measures like the IAT as instances of implicitly biased behavior. 26 The IAT provides an example of how a behavior—the ability to categorize words and pictures—can be influenced by social group cues even when people do not have the intention to be influenced by those cues. Biased responses on more real-world kinds of tasks, like hiring behavior or performance evaluation, can evidence implicit bias even without measures like the IAT that are supposed to assess some kind of mediating attitude or belief. There are two key benefits to this approach. First, a functional approach allows researchers to circumvent the perplexing situation of using the same name (“implicit”) for both construct and measure. Second, given that the problem of bias is a behavioral problem, it makes sense to define bias in behavioral terms.

Defining IAT performance as an instance of implicitly biased behavior does not render the results described previously about the pervasiveness of IAT scores favoring privileged groups any less meaningful, nor does it invalidate the idea that performance on the IAT may reflect situations, history, and context more than personal attitudes. Instead, this view positions the IAT as an observable form of bias. This framing requires researchers to explain observable biases rather than engaging in interminable (and potentially intractable) debates about unobservable, theorized mental constructs. For example, it is an observable phenomenon that most participants find it easier to pair bad words with faces of old people than with faces of young people. From there, without mention of underlying processes, we can ask questions such as: Why might they do that? What might that mean? Might some people do that more than others? Can we make people stop doing that?

Before concluding, it is worth discussing the promises and pitfalls of using the IAT as a pre-post measure (testing individuals at different points in time to show change) to test the efficacy of interventions. For example, imagine an organization assesses the biases of its human resources (HR) team using a gender stereotyping IAT, provides its employees with some kind of training program, and then administers the IAT again, finding a reduction in the IAT score. Success, right? Not necessarily. While it may be reasonable and desirable in some situations to examine bias reduction in this way, there are two important caveats to note. First, research shows that IAT scores tend to move toward zero from one test session to the next, without anything in particular happening in between. Thus, it is critical that anyone using the IAT to assess bias reduction includes a control condition to ensure that the intervention has decreased IAT-measured bias more than it would have decreased anyway. Second, when assessing bias reduction using the IAT (or any measure of group-based bias), it is important to clarify that the bias itself is the construct of interest. Returning to the example of the HR team training, we would encourage this team to consider what the training itself was about and then assess that. For example, if the training was about fair interviewing practices, the organization could assess the extent to which HR teams implemented such practices. If the training was about ways to decrease disparities in salary, the organization could assess disparities after a year.
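The control-condition logic described here amounts to a difference-in-differences: the intervention's effect is the change in the trained group minus the drift seen in an untrained control group. A sketch with hypothetical scores (the groups, values, and function name are all invented):

```python
from statistics import mean

def intervention_effect(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences: change in mean IAT score in the
    trained group beyond the change in an untrained control group
    (scores tend to regress toward zero on retest even without
    any intervention)."""
    treat_change = mean(treat_post) - mean(treat_pre)
    ctrl_change = mean(ctrl_post) - mean(ctrl_pre)
    return treat_change - ctrl_change

# Hypothetical HR-team scores before and after a training
trained_pre, trained_post = [0.50, 0.60, 0.40], [0.20, 0.30, 0.10]
control_pre, control_post = [0.55, 0.45, 0.50], [0.45, 0.35, 0.40]
effect = intervention_effect(trained_pre, trained_post,
                             control_pre, control_post)
# Negative effect: reduction beyond test-retest drift alone
```

Without the control arm, the trained group's drop of 0.30 would overstate the training's impact, since the untrained group also dropped by 0.10 on its own.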

It is difficult to predict what the future holds for the IAT. Citation counts continue to increase year over year, and use of the measure continues to expand into increasingly diverse areas of scholarship. It has been evaluated as rigorously as any psychological measure, and it has largely stood up to scrutiny. Further, the concept of “implicit bias” has leapt the walls of academic journals and taken on a life of its own. But ideas ebb and flow, and the way behavioral scientists conceptualize implicit bias has changed dramatically over the last decade, with bias no longer being seen exclusively as a product of individual minds but potentially as a product of places. Further, the way that racism and biases exert their power evolves across time, and it is unclear how central implicit forms of bias will be in the future. We continue to argue about the best ways to define implicit bias, as evidenced by a recent issue of Psychological Inquiry dedicated to the topic. 27 And, as mentioned previously, still others argue that researchers should do away with the term “implicit” altogether. 28 But in doing so, we would lose something important: a language to talk about the indisputable fact that, regardless of where they come from, people have ingrained prejudices and stereotypes that influence how they see and interpret the world. In our view, implicit bias is ordinary, it is rooted in culture, and it is pervasive, and we will continue to need measures like the IAT to document and quantify these biases.

  • 1 Mary S. Weldon, Henry L. Roediger, and Bradford H. Challis, “ The Properties of Retrieval Cues Constrain the Picture Superiority Effect, ” Memory & Cognition 17 (1) (1989): 95–105; and Daniel L. Schacter, “ Implicit Memory: History and Current Status, ” Journal of Experimental Psychology: Learning, Memory, and Cognition 13 (3) (1987): 501–518.
  • 2 Anthony G. Greenwald and Mahzarin R. Banaji, “ Implicit Social Cognition: Attitudes, Self-Esteem, and Stereotypes, ” Psychological Review 102 (1) (1995): 4–27.
  • 3 Anthony G. Greenwald, Debbie E. McGhee, and Jordan L. K. Schwartz, “ Measuring Individual Differences in Implicit Cognition: The Implicit Association Test, ” Journal of Personality and Social Psychology 74 (6) (1998): 1464–1480.
  • 4 For more details regarding IAT procedure and scoring, see Anthony G. Greenwald, Brian A. Nosek, and Mahzarin R. Banaji, “ Understanding and Using the Implicit Association Test: I. An Improved Scoring Algorithm, ” Journal of Personality and Social Psychology 85 (2) (2003): 197–216.
  • 5 For more information about Project Implicit, see Kate A. Ratliff and Colin Tucker Smith, “Lessons from Two Decades with Project Implicit,” in The Cambridge Handbook of Implicit Bias and Racism , ed. Jon A. Krosnick, Tobias H. Stark, and Amanda L. Scott (Cambridge: Cambridge University Press, 2023).
  • 6 Tessa E. S. Charlesworth and Mahzarin R. Banaji, “ Patterns of Implicit and Explicit Stereotypes III: Long-Term Change in Gender Stereotypes, ” Social Psychological and Personality Science 13 (1) (2022): 14–26.
  • 7 Tessa E. S. Charlesworth and Mahzarin R. Banaji, “ Patterns of Implicit and Explicit Attitudes: I. Long-Term Change and Stability from 2007 to 2016, ” Psychological Science 30 (2) (2019): 174–192; and Charlesworth and Banaji, “Patterns of Implicit and Explicit Stereotypes III.”
  • 8 Emily C. Bianchi, Erika V. Hall, and Sarah Lee, “ Reexamining the Link Between Economic Downturns and Racial Antipathy: Evidence that Prejudice Against Blacks Rises During Recessions, ” Psychological Science 29 (10) (2018): 1584–1597; and Amanda Ravary, Mark W. Baldwin, and Jennifer A. Bartz, “ Shaping the Body Politic: Mass Media Fat-Shaming Affects Implicit Anti-Fat Attitudes, ” Personality and Social Psychology Bulletin 45 (11) (2019): 1580–1589.
  • 9 Eugene K. Ofosu, Michelle K. Chambers, Jacqueline M. Chen, and Eric Hehman, “ Same-Sex Marriage Legalization Associated with Reduced Implicit and Explicit Antigay Bias, ” Proceedings of the National Academy of Sciences 116 (18) (2019): 8846–8851.
  • 10 Ibid.; and Eric Hehman, Jimmy Calanchini, Jessica K. Flake, and Jordan B. Leitner, “ Establishing Construct Validity Evidence for Regional Measures of Explicit and Implicit Racial Bias, ” Journal of Experimental Psychology: General 148 (6) (2019): 1022–1040.
  • 11 Morris Rosenberg, Society and the Adolescent Self-Image (Princeton, N.J.: Princeton University Press, 1965).
  • 12 Anthony G. Greenwald and Calvin K. Lai, “ Implicit Social Cognition, ” Annual Review of Psychology 71 (2020): 419–445; and Anthony G. Greenwald, Miguel Brendl, Huajian Cai, et al., “ Best Research Practices for Using the Implicit Association Test, ” Behavior Research Methods 54 (3) (2022): 1161–1180.
  • 13 Paul Connor and Ellen R. K. Evers, “ The Bias of Individuals (in Crowds): Why Implicit Bias is Probably a Noisily Measured Individual-Level Construct, ” Perspectives on Psychological Science 15 (6) (2020): 1329–1345.
  • 14 Olivier Corneille and Mandy Hütter, “ Implicit? What Do You Mean? A Comprehensive Review of the Delusive Implicitness Construct in Attitude Research, ” Personality and Social Psychology Review 24 (3) (2020): 212–232; and Ulrich Schimmack, “ The Implicit Association Test: A Method in Search of a Construct, ” Perspectives on Psychological Science 16 (2) (2021): 396–414. See also Benedek Kurdi, Kate A. Ratliff, and William A. Cunningham, “Can the Implicit Association Test Serve as a Valid Measure of Automatic Cognition? A Response to Schimmack (2021),” Perspectives on Psychological Science 16 (2) (2021): 422–434.
  • 15 Kate A. Ratliff and Colin Tucker Smith, “ Implicit Bias as Automatic Behavior, ” Psychological Inquiry 33 (3) (2022): 213–218; and Bertram Gawronski, Alison Ledgerwood, and Paul W. Eastwick, “ Reflections on the Difference Between Implicit Bias and Bias on Implicit Measures, ” Psychological Inquiry 33 (3) (2022): 219–231.
  • 16 Greenwald and Banaji, “Implicit Social Cognition,” 5.
  • 17 Kate A. Ranganath, Colin Tucker Smith, and Brian A. Nosek, “ Distinguishing Automatic and Controlled Components of Attitudes from Direct and Indirect Measurement Method, ” Journal of Experimental Social Psychology 44 (11) (2008): 386–396; Adam Hahn, Charles M. Judd, Holen K. Hirsh, and Irene V. Blair, “ Awareness of Implicit Attitudes, ” Journal of Experimental Psychology: General 143 (3) (2014): 1369–1392; and Adam Morris and Benedek Kurdi, “ Awareness of Implicit Attitudes: Large-Scale Investigations of Mechanism and Scope, ” Journal of Experimental Psychology: General 152 (12) (2023): 3311–3343.
  • 18 To be clear, the idea that people harbor internalized prejudices and stereotypes that influence how they see and interpret the world is not dependent on the existence or fitness of any particular measure. Put another way, implicit bias will still exist without the IAT .
  • 19 Ratliff and Smith, “Implicit Bias”; John A. Bargh, “The Four Horsemen of Automaticity: Awareness, Intention, Efficiency, and Control,” in Handbook of Social Cognition: Basic Processes; Applications , ed. Robert S. Wyer Jr. and Thomas K. Srull (Mahwah, N.J.: Lawrence Erlbaum Associates, Inc., 1994), 1–40; Adam Hahn and Alexandra Goedderz, “ Trait-Unconsciousness, State-Unconsciousness, Preconsciousness, and Social Miscalibration in the Context of Implicit Evaluation, ” Social Cognition 38 ( S ) (2020): 115–134; and Richard E. Nisbett and Timothy D. Wilson, “ Telling More Than We Can Know: Verbal Reports on Mental Processes, ” Psychological Review 84 (3) (1977): 231–259.
  • 20 Agnes Moors and Jan De Houwer, “ Automaticity: A Theoretical and Conceptual Analysis, ” Psychological Bulletin 132 (2) (2006): 297–326; and Jan De Houwer and Agnes Moors, “How to Define and Examine the Implicitness of Implicit Measures,” in Implicit Measures and Attitudes , ed. Bernd Wittenbrink and Norbert Schwarz (New York: The Guilford Press, 2007), 179–194.
  • 21 Bargh, “The Four Horsemen of Automaticity.”
  • 22 Mary C. Murphy, Kathryn M. Kroeper, and Elise M. Ozier, “ Prejudiced Places: How Contexts Shape Inequality and How Policy Can Change Them, ” Policy Insights from the Behavioral and Brain Sciences 5 (1) (2018): 66–74.
  • 23 B. Keith Payne, Heidi A. Vuletich, and Kristjen B. Lundberg, “ The Bias of Crowds: How Implicit Bias Bridges Personal and Systemic Prejudice, ” Psychological Inquiry 28 (4) (2017): 233–248; and Manuel J. Galvan and B. Keith Payne, “ Implicit Bias as a Cognitive Manifestation of Systemic Racism, ” Dædalus 153 (1) (Winter 2024): 106–122.
  • 24 Travis Riddle and Stacey Sinclair, “ Racial Disparities in School-Based Disciplinary Actions Are Associated with County-Level Rates of Racial Bias, ” Proceedings of the National Academy of Sciences 116 (17) (2019): 8255–8260; Mark J. Chin, David M. Quinn, Tasminda K. Dhaliwal, et al., “ Bias in the Air: A Nationwide Exploration of Teachers’ Implicit Racial Attitudes, Aggregate Bias, and Student Outcomes, ” Educational Researcher 49 (8) (2020): 566–578; and Marleen Stelter, Iniobong Essien, Carsten Sander, and Juliane Degner, “ Racial Bias in Police Traffic Stops: White Residents’ County-Level Prejudice and Stereotypes Are Related to Disproportionate Stopping of Black Drivers, ” Psychological Science 33 (4) (2022): 483–496.
  • 25 B. Keith Payne, Heidi A. Vuletich, and Jazmin L. Brown-Iannuzzi, “ Historical Roots of Implicit Bias in Slavery, ” Proceedings of the National Academy of Sciences 116 (24) (2019): 11693–11698.
  • 26 Jan De Houwer, “ Implicit Bias Is Behavior: A Functional-Cognitive Perspective on Implicit Bias, ” Perspectives on Psychological Science 14 (5) (2019): 835–840.
  • 27 Bertram Gawronski, Alison Ledgerwood, and Paul W. Eastwick, “ Implicit Bias ≠ Bias on Implicit Measures, ” Psychological Inquiry 33 (3) (2022): 139–155.
  • 28 For comparison, see Corneille and Hütter, “Implicit? What Do You Mean?”; and Greenwald and Lai, “Implicit Social Cognition.”


Turning a light on our implicit biases

Brett Milano

Harvard Correspondent

Social psychologist details research at University-wide faculty seminar

Few people would readily admit that they’re biased when it comes to race, gender, age, class, or nationality. But virtually all of us have such biases, even if we aren’t consciously aware of them, according to Mahzarin Banaji, Cabot Professor of Social Ethics in the Department of Psychology, who studies implicit biases. The trick is figuring out what they are so that we can interfere with their influence on our behavior.

Banaji was the featured speaker at an online seminar Tuesday, “Blindspot: Hidden Biases of Good People,” which was also the title of Banaji’s 2013 book, written with Anthony Greenwald. The presentation was part of Harvard’s first-ever University-wide faculty seminar.

“Precipitated in part by the national reckoning over race, in the wake of George Floyd, Breonna Taylor and others, the phrase ‘implicit bias’ has almost become a household word,” said moderator Judith Singer, Harvard’s senior vice provost for faculty development and diversity. Owing to the high interest on campus, Banaji was slated to present her talk on three different occasions, with the final one at 9 a.m. Thursday.

Banaji opened on Tuesday by recounting the “implicit association” experiments she had done at Yale and at Harvard. The assumptions underlying the research on implicit bias derive from well-established theories of learning and memory, and the empirical results come from tasks rooted in experimental psychology and neuroscience. Banaji’s first experiments found, not surprisingly, that New Englanders associated good things with the Red Sox and bad things with the Yankees.

She then went further by replacing the sports teams with gay and straight, thin and fat, and Black and white. The responses were sometimes surprising: Shown a group of white and Asian faces, a test group at Yale associated the white faces more strongly with American symbols, though all the images were of U.S. citizens. In a further study, the faces of American-born celebrities of Asian descent were judged less American than those of white celebrities who were in fact European. “This shows how discrepant our implicit bias is from even factual information,” she said.

How can an institution that is almost 400 years old not reveal a history of biases? Banaji posed the question while citing President Charles Eliot’s words on Dexter Gate, “Depart to serve better thy country and thy kind,” and asking the audience to think about what he may have meant by the last two words.

She cited Harvard’s current admissions strategy of seeking geographic and economic diversity as an example of clear progress — if, as she said, “we are truly interested in bringing the best to Harvard.” She added, “We take these actions consciously, not because they are easy but because they are in our interest and in the interest of society.”

Moving beyond racial issues, Banaji suggested that we sometimes see only what we believe we should see. To illustrate she showed a video clip of a basketball game and asked the audience to count the number of passes between players. Then the psychologist pointed out that something else had occurred in the video — a woman with an umbrella had walked through — but most watchers failed to register it. “You watch the video with a set of expectations, one of which is that a woman with an umbrella will not walk through a basketball game. When the data contradicts an expectation, the data doesn’t always win.”

Expectations based on experience may create associations, such as equating “Valley Girl uptalk” with “not too bright.” But when a quirky way of speaking spreads to a large number of young people from certain generations, it stops being a useful guide. And yet, Banaji said, she has been caught in her own dismissal of a great idea presented in uptalk. She stressed that the appropriate course of action is not to ask the person to change the way she speaks but rather for her and other decision-makers to recognize that using language and accents to judge ideas is something people do at their own peril.

Banaji closed the talk with a personal story that showed how subtler biases work: She’d once turned down an interview because she had issues with the magazine for which the journalist worked.

The writer accepted this and mentioned she’d been at Yale when Banaji taught there. The professor then surprised herself by agreeing to the interview based on this fragment of shared history that ought not to have influenced her. She urged her colleagues to think about how positive actions, such as helping, can perpetuate the status quo.

“You and I don’t discriminate the way our ancestors did,” she said. “We don’t go around hurting people who are not members of our own group. We do it in a very civilized way: We discriminate by who we help. The question we should be asking is, ‘Where is my help landing? Is it landing on the most deserved, or just on the one I shared a ZIP code with for four years?’”

To subscribe to short educational modules that help to combat implicit biases, visit outsmartinghumanminds.org .


APS Observer Cover Story

The Bias Beneath: Two Decades of Measuring Implicit Associations


Over the last 20 years, millions of people have used an online test to probe attitudes they didn’t know they had.

Since its online debut in 1998, the Implicit Association Test (IAT) has allowed people to discover potential prejudices that lurk beneath their awareness — and that researchers therefore wouldn’t find through participant self-reports.

Basically, the IAT asks participants to categorize words or images that appear onscreen by pressing specific keys on a keyboard. The time it takes for participants to respond to different combinations of stimuli is thought to shed light on the mental associations they make, even when they aren’t aware of them.

The IAT is the brainchild of APS William James Fellow Anthony Greenwald (University of Washington), and he began working collaboratively on it with APS Past President Mahzarin Banaji (Harvard University) and APS Fellow Brian Nosek (University of Virginia) in the mid-1990s. Over time, the tool has led to the examination of unconscious and automatic thought processes among people in different contexts, including employers, police officers, jurors, and voters.

Perhaps the most salient examples of implicit bias involve race and gender across a variety of scientific perspectives. APS Past President Elizabeth Phelps has collaborated considerably with Banaji on IAT investigations using functional MRI (fMRI) to explore the brain’s role in the unconscious evaluation of racial groups. Developmental researchers have modified the IAT for use with children to discover some intergroup associations that form in the earliest years of life. And data from Project Implicit reveal that 75% of people who have taken the IAT have associated men more strongly with work roles and women more strongly with family roles. A recent study showed that hiring managers whose scores on the IAT indicated gender bias tended to favor men over women in their hiring decisions.

But the IAT has also inspired a wealth of research on implicit biases related to age, weight, political leanings, disability, and much more.

Opinions on the IAT are mixed. Controversy about the test was evident in a 2013 meta-analysis by APS Fellows Fred Oswald and Phillip E. Tetlock and colleagues. They found weaker correlations between IAT scores and discriminatory behavior compared with what Greenwald, Banaji, and their colleagues found in a 2009 meta-analysis.

As researchers continue to explore how to use and interpret IAT findings (a new, larger meta-analysis is being prepared for publication), there’s no question that the test has shaped public discussions about race and discrimination. Hillary Clinton discussed implicit bias during one of the debates in the 2016 presidential election campaign. The US Department of Justice (DOJ) has integrated findings about implicit bias into training curricula for more than 28,000 DOJ employees as a way of combating implicit bias among law enforcement agents and prosecutors. And in a historic 2015 decision involving fair housing, the US Supreme Court referenced implicit bias in a ruling allowing federal action against housing policies that have a disparate impact as well as those that are overtly discriminatory.

“The research of Mahzarin, Tony, and their collaborators has changed national and even international conversations about racism, sexism, classism, and other forms of bias, very much for the better,” says APS Fellow John Jost, Codirector of the Center for Social and Political Behavior at New York University and a former student of Banaji’s.

In this issue of the Observer , we mark the 20th anniversary of the IAT’s debut with examples of the studies it has spawned across numerous areas of psychological study.

Studies have used the IAT to investigate how weight stereotypes affect people who are overweight or obese. In a 2011 psychological field experiment, for example, scientists at Linnaeus University in Sweden found evidence of hiring discrimination against heavier individuals. Experimenters sent out fictitious applications for a large number of actual advertised job openings. The applicants all included their photographs and had the same credentials, but some of the photos showed the job-seekers as obese and others as normal weight. The researchers then compared the number of callbacks received by the normal-weight applicants and the obese applicants. Later, the hiring managers who received the applications were invited to take an obesity IAT as well as measures of their explicit hiring preferences. The researchers found that recruiters who showed the most implicit versus explicit negative associations with obesity were the least likely to have invited an overweight applicant for an interview.

These biases about weight may also play a role in the way medical doctors view their patients, according to findings from a multidisciplinary research team that included UVA’s Nosek. The scientists tested nearly 400,000 participants, including more than 2,000 MDs. They found that doctors are just as biased against obesity as is the general public. Specifically, the MDs reported a strong preference for thin people over overweight people on measures of both explicit and implicit attitudes. But IAT results revealed that male MDs had a considerably stronger implicit bias against overweight individuals compared with their female counterparts. The scientists said the results called for further exploration into any link between provider biases about weight and patient reports of weight discrimination in their health care.

Suicide Risk

Even experienced clinical judgment often fails to detect suicidal thinking. As a result, suicide experts have long hoped and searched for a behavioral marker of suicide risk. With Banaji, Harvard psychological scientist Matthew Nock and other clinical researchers decided to adapt the IAT to examine whether the test might reveal implicit signs of suicide risk.

Nock and colleagues tested 157 psychiatric patients, including those who were brought to the hospital following a suicide attempt. The scientists wanted to see if the IAT could distinguish those who had tried to kill themselves from those who had not.

While in the emergency room, the patients rapidly classified words related to “me” (e.g., I, me) and “not me” (e.g., they, them) as well as “life” (e.g., survive, live) and “death” (e.g., dead, dying). The researchers examined how quickly patients connected identity-related words to life-or-death words. They found that patients who had attempted suicide prior to admission responded more quickly to word pairs linking the self and death than they did to other word pairs, suggesting that the unconscious association between self and death was stronger for these patients.

Nock followed the patients for 6 months and found that those who showed a relatively strong self–death association in the hospital were significantly more likely to attempt suicide later compared with those who showed a weaker self–death association. The responses on the IAT predicted suicide attempts above and beyond the effects of commonly used predictors such as a depression diagnosis, previous suicide attempts, or the attending clinician’s intuition.

Romantic Attachment

Much of the research on relationship success has relied on self-reports, but some scientists have developed IAT-like tools to assess implicit appraisals of romantic partners. In a study reported in 2010, for example, University of Rochester researchers, including APS Fellow Harry Reis, recruited 222 volunteers involved in romantic relationships. Each volunteer supplied their partner’s first name and two other words, such as a pet name or a distinctive characteristic, that related to the partner. Then they watched a monitor as three types of words were presented one at a time: “good” words (such as peace, vacation, or sharing), “bad” words (such as death, tragedy, and criticizing), and partner-related words (e.g., names or traits).

In one kind of test, volunteers pressed the space bar whenever they saw either good words or partner-related words. In the other, they responded when they saw bad words paired with partner words. The expectation was that participants who had generally positive associations with their partners should be able to complete the first task more easily than the second.

The results showed that volunteers who were relatively quick to respond to bad word–partner pairings and relatively slow to respond to good word–partner pairings were more likely to separate from their partner over the next year. Furthermore, the test results were a stronger predictor of later breakup than were the volunteers’ own evaluations of their relationship quality.

In a typical IAT, a person sits at a computer screen and views a series of words and images. She’s told to press the I key on the keyboard when she sees an upbeat word such as happy or pleasant and the E key for negative words such as dangerous or tragic. The person then is told to press I when she sees the face of a Black man and E when she sees a White man’s face. Next she presses I when she sees a positive word or a Black face and E when she sees a negative word or a White face. The process then reverses to Black face/negative word versus White face/positive word. All the while, the computer records the person’s response times to each stimulus and, at the test’s conclusion, calculates an IAT score based on these data.
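The score the computer calculates from those response times can be sketched in a few lines. The following is a simplified version of the D-score idea behind the IAT's improved scoring algorithm (the difference in mean latencies between the two pairing conditions, divided by a pooled standard deviation); it omits the published algorithm's error-trial penalties and block-pairing details, and all latencies are invented for illustration.

```python
from statistics import mean, stdev

def iat_d_score(pairing_a_rts, pairing_b_rts):
    """Simplified IAT D score: (mean latency under pairing B minus mean
    latency under pairing A) divided by the pooled standard deviation
    of all retained latencies."""
    # Drop extremely slow trials (over 10,000 ms), as the published
    # scoring algorithm does.
    a = [rt for rt in pairing_a_rts if rt < 10_000]
    b = [rt for rt in pairing_b_rts if rt < 10_000]
    return (mean(b) - mean(a)) / stdev(a + b)

# Invented latencies (in ms) for one participant: faster under pairing A.
pairing_a = [612, 580, 655, 601, 644, 590]
pairing_b = [745, 802, 760, 733, 781, 770]
d = iat_d_score(pairing_a, pairing_b)
# A positive d means faster responding under pairing A, i.e., a stronger
# association between the categories combined in that pairing.
```

Dividing by the standard deviation is what makes scores comparable across people who respond quickly or slowly overall; the raw latency difference alone would confound association strength with general speed.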

Attitudes About Sexuality

Researchers have also been able to use IAT data to track shifts in implicit intergroup attitudes over time, including attitudes toward homosexuality. Public opinion polls have indicated that acceptance of gay men and women has increased as they have gained more legal rights and protections, but those polls only capture explicit attitudes. IAT cocreator Nosek and psychological scientist Erin Westgate of UVA, along with Rachel Riskind of Guilford College in North Carolina, investigated how implicit biases toward gay people have shifted.

The scientists examined test data from nearly 684,000 visitors to the Project Implicit site between February 2006 and August 2013. Eighty percent of the participants identified as heterosexual.

When taking the IAT, participants had to sort positive words (e.g., beautiful, good) into the “good” category and negative words (e.g., bad, terrible) into the “bad” category. They then did the same kind of sorting for words and images related to gay people (e.g., pictures of same-sex wedding cake toppers or the word homosexual) and straight people (e.g., the word heterosexual). Participants who had negative implicit associations with gay people reacted more slowly when positive words were paired with words related to gay people than did those who had positive implicit associations with gay people.

The researchers found that not only did explicit preferences for straight people over gay people decline by 26% over the 7.5-year period, but implicit preferences also fell by more than 13% during the same time. That change was largest among people who were younger, White or Hispanic, and liberal. But nearly every demographic group in the sample showed signs of an attitude shift.

Political Preferences

Voters have increasingly eschewed the Democratic and Republican labels and have opted to identify themselves as Independents. But Nosek and UVA psychological scientist Carlee Beth Hawkins decided to use the IAT to explore the associations that churn inside the Independent mind.

In one study, a random sample of more than 1,800 volunteers participated on the Project Implicit website, where they read a mock newspaper article comparing two competing welfare proposals. One plan was generous in its benefits, the other much more stringent. Some of the volunteers read an article that said the Democrats were supporting the generous plan; Republicans, the stringent plan. Other participants read the same article but with the parties switched around.

The researchers then asked the volunteers to record which proposal they preferred and describe their political ideology and party identification; those who selected Independent were asked if they leaned toward either of the two major parties. Next, the volunteers took a version of the IAT designed to measure partisan identities and policy preferences.

The participants who identified as Independents varied greatly in the implicit associations they showed, and they made political judgments in line with these implicit associations. Those Independents who implicitly identified with Democrats preferred the liberal welfare plan, while those who implicitly identified with Republicans preferred the stringent plan. Furthermore, the Independents who showed implicit associations that favored Republican politics preferred whatever plan was proposed by Republicans — regardless of the values underlying the plan — more than they favored any plan proposed by Democrats. The same was true for those who showed an implicit preference for Democrats.

The findings suggest that self-identified Independents appeared to be influenced both by ideology and by partisanship, the researchers concluded.

Agerström, J., & Rooth, D. O. (2011). The role of automatic obesity stereotypes in real hiring discrimination. Journal of Applied Psychology, 96 , 790–805. doi:10.1037/a0021594.

Banaji, M. R., & Greenwald, A. G. (2013). Blindspot: Hidden biases of good people . New York, NY: Delacorte Press.

Greenwald, A. G., & Banaji, M. R. (1995). Implicit social cognition: Attitudes, self-esteem, and stereotypes. Psychological Review, 102 , 4–27.

Greenwald, A. G., Banaji, M. R., & Nosek, B. A. (2015). Statistically small effects of the Implicit Association Test can have societally large effects. Journal of Personality and Social Psychology , 108 , 553–561.

Greenwald, A. G., McGhee, D. E., & Schwartz, J. L. (1998). Measuring individual differences in implicit cognition: The implicit association test. Journal of Personality and Social Psychology, 74 , 1464–1480.

Greenwald, A. G., Poehlman, T. A., Uhlmann, E. L., & Banaji, M. R. (2009). Understanding and using the implicit association test: III. Meta-analysis of predictive validity. Journal of Personality and Social Psychology, 97 , 17–41. doi:10.1037/a0015575.

Hawkins, C.B., & Nosek, B.A. (2012). Motivated independence? Implicit party identity predicts political judgments among self-proclaimed independents. Personality and Social Psychology Bulletin, 38 , 1441-1455, doi: 10.1177/0146167212452313.

Lee, S., Rogge, R. D., & Reis, H. T. (2010). Assessing the seeds of relationship decay: Using implicit evaluations to detect the early stages of disillusionment. Psychological Science, 21 , 857–864. doi:10.1177/0956797610371342

Nock, M. K., Park, J. M., Finn, C. T., Deliberto, T. L., Dour, H. J., & Banaji, M. R. (2010). Measuring the suicidal mind: Implicit cognition predicts suicidal behavior. Psychological Science, 21 (4), 511–517. doi:10.1177/0956797610364762

Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., & Tetlock, P. E. (2013). Predicting ethnic and racial discrimination: A meta-analysis of IAT criterion studies. Journal of Personality and Social Psychology, 105 , 171–92. doi:10.1037/a0032734.

Sabin, J.A., Marini, M., & Nosek, B.A. (2012). Implicit and explicit anti-fat bias among a large sample of medical doctors by BMI, race/ethnicity and gender. PLoS ONE, 7 (11), e48448. doi: 10.1371/journal.pone.0048448.

Reuben, E., Sapienza, P., & Zingales, L. (2014). How stereotypes impair women’s careers in science. Proceedings of the National Academy of Sciences, 111 , 4403-4408. doi:10.1073/pnas.1314788111

Westgate, E. C., Riskind, R. G., & Nosek, B. A. (2015). Implicit preferences for straight people over lesbian women and gay men weakened from 2006 to 2013. Collabra, 1 . doi:10.1525/collabra.18


Interesting that all of the research demonstrating the poor reliability and poor validity of the measure is missing from this article. Any research using the IAT must be interpreted with these things in mind: a) IAT is poorly correlated with other measures of prejudice including other implicit measures, b) test-retest reliability is low, c) IAT doesn’t predict behavior


I think the meta-analysis by Oswald, Mitchell, Blanton, Jaccard, & Tetlock (2013) refutes (a), depending on what you consider “poorly” correlated. It also refutes (c), although the prediction of behavior is not large. I think what we can confidently say about IAT/Explicit/Discrimination is that there is a lot of error variance and unexplained variance that we need to understand better.


Is there a reference for the Banaji & Nock work on suicidal intent, please?


Nock, M. K., Park, J. M., Finn, C. T., Deliberto, T. L., Dour, H. J., & Banaji, M. R. (2010). Measuring the suicidal mind: Implicit cognition predicts suicidal behavior. Psychological Science, 21(4), 511–517. https://doi.org/10.1177/0956797610364762


I actually did the word association thing with the black and white faces for a class once. I totally forgot about it till I read this. What a small world.


Implicit Association Test (IAT)

Project Implicit Research, Harvard University

The Implicit Association Test (IAT) measures the strength of associations between concepts and evaluations or stereotypes to reveal an individual’s hidden or subconscious biases. This test was first published in 1998 by Project Implicit, and has since been continuously updated and enhanced. Project Implicit was founded by Tony Greenwald of the University of Washington, Mahzarin Banaji of Harvard University, and Brian Nosek of the University of Virginia. Project Implicit is a non-profit organization aimed at educating the public about hidden biases and providing a “virtual laboratory” for collecting data on the Internet.

Take the test here

Source: implicit.harvard.edu


Test Yourself for Hidden Bias

Psychologists at Harvard, the University of Virginia and the University of Washington created "Project Implicit" to develop Hidden Bias Tests—called Implicit Association Tests, or IATs, in the academic world—to measure unconscious bias .


About Stereotypes and Prejudices 

Hidden Bias Tests measure unconscious, or automatic, biases. Your willingness to examine your own possible biases is an important step in understanding the roots of stereotypes and prejudice in our society.

The ability to distinguish friend from foe helped early humans survive, and the ability to quickly and automatically categorize people is a fundamental quality of the human mind. Categories give order to life, and every day, we group other people into categories based on social and other characteristics.

This is the foundation of stereotypes, prejudice and, ultimately, discrimination.

Definition of Terms

A stereotype is an exaggerated belief, image or distorted truth about a person or group—a generalization that allows for little or no individual differences or social variation. Stereotypes are based on images in mass media, or reputations passed on by parents, peers and other members of society. Stereotypes can be positive or negative.

A prejudice is an opinion, prejudgment or attitude about a group or its individual members. A prejudice can be positive, but in our usage refers to a negative attitude.

Prejudices are often accompanied by ignorance, fear or hatred. Prejudices are formed by a complex psychological process that begins with attachment to a close circle of acquaintances or an "in-group" such as a family. Prejudice is often aimed at "out-groups."

Discrimination is behavior that treats people unequally because of their group memberships. Discriminatory behavior, ranging from slights to hate crimes, often begins with negative stereotypes and prejudices.

How do we learn prejudice?

Social scientists believe children begin to acquire prejudices and stereotypes as toddlers. Many studies have shown that as early as age 3, children pick up terms of racial prejudice without really understanding their significance.

Soon, they begin to form attachments to their own group and develop negative attitudes about other racial or ethnic groups, or the "out-group". Early in life, most children acquire a full set of biases that can be observed in verbal slurs, ethnic jokes and acts of discrimination.

How are our biases reinforced?

Once learned, stereotypes and prejudices resist change, even when evidence fails to support them or points to the contrary.

People will embrace anecdotes that reinforce their biases, but disregard experience that contradicts them. The statement "Some of my best friends are _____" captures this tendency to allow some exceptions without changing our bias.

How do we perpetuate bias?

Bias is perpetuated by conformity with in-group attitudes and socialization by the culture at large. The fact that white culture is dominant in America may explain why people of color often do not show a strong bias favoring their own ethnic group.

Mass media routinely take advantage of stereotypes as shorthand to paint a mood, scene or character. The elderly, for example, are routinely portrayed as being frail and forgetful, while younger people are often shown as vibrant and able.

Stereotypes can also be conveyed by omission in popular culture, as when TV shows present an all-white world. Psychologists theorize bias conveyed by the media helps to explain why children can adopt hidden prejudices even when their family environments explicitly oppose them.  

About Hidden Bias

Scientific research has demonstrated that biases thought to be absent or extinguished remain as "mental residue" in most of us. Studies show people can be consciously committed to egalitarianism, and deliberately work to behave without prejudice, yet still possess hidden negative prejudices or stereotypes.

"Implicit Association Tests" (IATs) can tap those hidden, or automatic, stereotypes and prejudices that circumvent conscious control. Project Implicit —a collaborative research effort between researchers at Harvard University, the University of Virginia, and University of Washington—offers dozens of such tests.

We believe the IAT procedure may be useful beyond the research purposes for which it was originally developed. It may be a tool that can jumpstart our thinking about hidden biases: Where do they come from? How do they influence our actions? What can we do about them?

Biases and behavior

A growing number of studies show a link between hidden biases and actual behavior. In other words, hidden biases can reveal themselves in action, especially when a person's conscious efforts to control behavior flag under stress, distraction, relaxation or competition.

Unconscious beliefs and attitudes have been found to be associated with language and certain behaviors such as eye contact, blinking rates and smiles.

Studies have found, for example, that school teachers clearly telegraph prejudices, so much so that some researchers believe children of color and white children in the same classroom effectively receive different educations.

A now classic experiment showed that white interviewers sat farther away from Black applicants than from white applicants, made more speech errors and ended the interviews 25% sooner. Such discrimination has been shown to diminish the performance of anyone treated that way, whether Black or white.

Experiments are being conducted to determine whether a strong hidden bias in someone results in more discriminatory behavior. But we can learn something from even the first studies:

  • Those who showed greater levels of implicit prejudice toward, or stereotypes of, Black or gay people were more unfriendly toward them.
  • Subjects who had a stronger hidden race bias had more activity in a part of the brain known to be responsible for emotional learning when shown Black faces than when shown white faces.

Leading to discrimination?

Whether laboratory studies adequately reflect real-life situations is not firmly established. But there is growing evidence, according to social scientists, that hidden biases are related to discriminatory behavior in a wide range of human interactions, from hiring and promotions to choices of housing and schools.

In the case of police, bias may affect split-second, life-or-death decisions. Shootings of Black men incorrectly thought to be holding guns—an immigrant in New York, a cop in Rhode Island—brought this issue into the public debate.

It is possible unconscious prejudices and stereotypes may also affect court jury deliberations and other daily tasks requiring judgments of human character.

People who argue that prejudice is not a big problem today are, ironically, demonstrating the problem of unconscious prejudice. Because these prejudices are outside our awareness, they can indeed be denied.  

The Effects of Prejudice and Stereotypes

Hidden bias has emerged as an important clue to the disparity between public opinion, as expressed by America's creed and social goals, and the amount of discrimination that still exists.

Despite 30 years of equal-rights legislation, levels of poverty, education and success vary widely across races. Discrimination continues in housing and real estate sales, and racial profiling is a common practice, even among ordinary citizens.

Members of minorities continue to report humiliating treatment by store clerks, co-workers and police. While an African American man may dine in a fine restaurant anywhere in America, it can be embarrassing for him to attempt to flag down a taxi after that dinner.

A person who carries the stigma of group membership must be prepared for its debilitating effects.

Studies indicate that African American teenagers are aware they are stigmatized as being intellectually inferior and that they go to school bearing what psychologist Claude Steele has called a "burden of suspicion." Such a burden can affect their attitudes and achievement.

Similarly, studies found that when college women are reminded their group is considered bad at math, their performance may fulfill this prophecy.

These shadows hang over stigmatized people no matter their status or accomplishments. They must remain on guard and bear an additional burden that may affect their self-confidence, performance and aspirations. These stigmas have the potential to rob them of their individuality and debilitate their attempts to break out of stereotypical roles.  

What You Can Do About Unconscious Stereotypes and Prejudices

Conscious attitudes and beliefs can change.

The negative stereotypes associated with many immigrant groups, for example, have largely disappeared over time. For African-Americans, civil rights laws forced integration and nondiscrimination, which, in turn, helped to change public opinion.

But psychologists have no ready roadmap for undoing such overt and especially hidden stereotypes and prejudices.

Learned at an early age

The first step may be to admit biases are learned early and are counter to our commitment to just treatment. Parents, teachers, faith leaders and other community leaders can help children question their values and beliefs and point out subtle stereotypes used by peers and in the media. Children should also be surrounded by cues that equality matters.

In his classic book, The Nature of Prejudice , the psychologist Gordon Allport observed children are more likely to grow up tolerant if they live in a home that is supportive and loving. "They feel welcome, accepted, loved, no matter what they do."

In such an environment, different views are welcomed, punishment is not harsh or capricious, and these children generally think of people positively and carry a sense of goodwill and even affection.

Community matters

Integration, by itself, has not been shown to produce dramatic changes in attitudes and behavior. But many studies show when people work together in a structured environment to solve shared problems through community service, their attitudes about diversity can change dramatically.

By including members of other groups in a task, children begin to think of themselves as part of a larger community in which everyone has skills and can contribute. Such experiences have been shown to improve attitudes across racial lines and between people old and young.

There also is preliminary evidence that unconscious attitudes, contrary to initial expectations, may be malleable. For example, imagining strong women leaders or seeing positive role models of African Americans has been shown to, at least temporarily, change unconscious biases.

'Feeling' unconscious bias

But there is another aspect of the very experience of taking a test of hidden bias that may be helpful. Many test takers can "feel" their hidden prejudices as they perform the tests.

They can feel themselves responding less rapidly to (for example) old + good pairings than to young + good pairings. The very act of taking the tests can force hidden biases into the conscious part of the mind.
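That slower responding is what the test quantifies. As a rough illustration of the idea (not Project Implicit's actual scoring code), a D-score-style measure divides the mean latency difference between the two pairing blocks by the pooled standard deviation of all latencies, in the spirit of the Greenwald, Nosek, and Banaji (2003) scoring algorithm. The function name and latency values below are hypothetical:

```python
from statistics import mean, stdev

def iat_d_score(congruent_ms, incongruent_ms):
    """Illustrative IAT-style D score: mean latency difference between
    the two pairing blocks, divided by the pooled standard deviation
    of all latencies. Positive values indicate slower responses in
    the 'incongruent' block (the hypothesized biased pairing)."""
    diff = mean(incongruent_ms) - mean(congruent_ms)
    pooled_sd = stdev(congruent_ms + incongruent_ms)
    return diff / pooled_sd

# Hypothetical response latencies in milliseconds: the test taker
# sorts words faster under one pairing than under the other.
congruent = [620, 650, 640, 600, 630, 610]
incongruent = [780, 820, 790, 760, 800, 810]
print(round(iat_d_score(congruent, incongruent), 2))  # a positive score
```

Dividing by the standard deviation, rather than reporting the raw millisecond gap, makes scores comparable across people who respond at different overall speeds.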

We would like to believe that, for a person with a conscious commitment to change, the very act of discovering hidden biases can propel efforts to correct for them. It may not be possible to avoid the automatic stereotype or prejudice, but it is certainly possible to consciously counteract it.

Committing to change

If people are aware of their hidden biases, they can monitor and attempt to ameliorate hidden attitudes before they are expressed through behavior. This compensation can include attention to language, body language and to the stigmatization felt by target groups.

Common sense and research evidence also suggest that a change in behavior can modify beliefs and attitudes. It would seem logical that a conscious decision to be egalitarian might lead one to widen one's circle of friends and knowledge of other groups. Such efforts may, over time, reduce the strength of unconscious biases.

It can be easy to reject the results of the tests as "not me" when you first encounter them. But that's the easy path. To ask where these biases come from, what they mean, and what we can do about them is the harder task.

Recognizing that the problem is in many others—as well as in ourselves—should motivate us all to try both to understand and to act.


Project Implicit

Welcome to Project Implicit

We are a non-profit organization and international, collaborative network of researchers investigating implicit social cognition, or thoughts and feelings that are largely outside of conscious awareness and control. Project Implicit is the product of a team of scientists whose research produced new ways of understanding attitudes, stereotypes, and other hidden biases that influence perception, judgment, and behavior. Our researchers and collaborators translate that academic research into practical applications for addressing diversity, improving decision-making, and increasing the likelihood that practices are aligned with personal and organizational values. To support the organization’s research and educational mission, we also offer research support, education sessions, and (coming very soon!!) a membership and cohort learning program.

If you’d like to stay in touch with the Project Implicit Team and its affiliated researchers, we invite you to sign up for our email list.

What is implicit bias?

Implicit bias is an automatic reaction we have towards other people. These attitudes and stereotypes can negatively impact our understanding, actions, and decision-making. The idea that we can hold prejudices we don’t want or believe was quite radical when it was first introduced, and the fact that people may discriminate unintentionally continues to have implications for understanding disparities in so many aspects of society, including but not limited to health care, policing, and education, as well as organizational practices like hiring and promotion.


  • 26 million Implicit Association Tests completed
  • 3,300+ research studies launched
  • 150+ peer-reviewed papers published
  • 600+ education sessions facilitated

Project Implicit Research

Register or log into the research website  as a volunteer participant, and you will be provided information about a novel study. Most studies take less than 15 minutes to complete. Your participation helps researchers around the world learn about biases, attitudes, and stereotypes. NOTE: Participants do not pay a fee or receive compensation for participating in the research studies.

Project Implicit offers virtual and on-site education sessions and other learning programs that explore evidence-based strategies to mitigate the impact of negative or unwanted biases, attitudes, and stereotypes on decision-making. Sessions are offered in three formats, and content can be tailored to a specific audience, industry, learning objective, or topic of interest.

Researchers can work with a Project Implicit programmer to build an online research study incorporating implicit measures, like the Implicit Association Test, as well as other behavioral science tools, demographic questionnaires, and explicit measures. Researchers are provided a private link that can be privately or publicly distributed to participants for data collection.

Bayeté Ross Smith Collaboration

Project Implicit is pleased to announce a collaboration with interdisciplinary artist and activist Bayeté Ross Smith . We have collaborated with Bayeté to create two new Implicit Association Tests based on the Race Attitudes IAT and Race Weapons IAT using images from his “Our Kind of People" photo collection.


The Perception Institute Collaboration

Project Implicit is excited to introduce a collaboration with The Perception Institute on its Hair Implicit Association Test (IAT), focusing on Black women’s hair. This IAT aims to measure the strength of associations between Black women’s hair and potential stereotypes.

By taking this test, you can gain insights into your implicit attitudes toward Black women’s hair and the associations you may hold. We encourage you to participate, explore potential implicit biases, and develop a deeper understanding of societal perceptions.


Special Olympics Collaboration

Project Implicit is excited to introduce a collaboration with the Special Olympics on a new Implicit Association Test, the Intellectual and Developmental Disabilities IAT. This IAT is intended to measure the strength of associations between concepts related to intellectual and developmental disabilities (e.g., “Developmental Delay,” “Typical Development”) and positive or negative evaluations (e.g., “Worthy,” “Incompetent”).

We encourage you to take a test to learn more about what associations you may have and to use this test as a springboard for conversations around biases and Intellectual and Developmental Disabilities.


Implicit Bias (Unconscious Bias): Definition & Examples

Charlotte Ruhl

Research Assistant & Psychology Graduate

BA (Hons) Psychology, Harvard University

Charlotte Ruhl, a psychology graduate from Harvard College, boasts over six years of research experience in clinical and social psychology. During her tenure at Harvard, she contributed to the Decision Science Lab, administering numerous studies in behavioral economics and social psychology.


Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Implicit bias refers to the beliefs and attitudes that affect our understanding, actions and decisions in an unconscious way.

Take-home Messages

  • Implicit biases are unconscious attitudes and stereotypes that can manifest in the criminal justice system, workplace, school setting, and in the healthcare system.
  • Implicit bias is also known as unconscious bias or implicit social cognition.
  • There are many different examples of implicit biases, ranging from categories of race, gender, and sexuality.
  • These biases often arise from trying to find patterns and navigate the overwhelming stimuli in this complicated world. Culture, media, and upbringing can also contribute to the development of such biases.
  • Removing these biases is a challenge, especially because we often don’t even know they exist, but research reveals potential interventions and provides hope that levels of implicit biases in the United States are decreasing.


The term implicit bias was first coined in 1995 by psychologists Mahzarin Banaji and Anthony Greenwald, who argued that social behavior is largely influenced by unconscious associations and judgments (Greenwald & Banaji, 1995).

So, what is implicit bias?

Specifically, implicit bias refers to attitudes or stereotypes that affect our understanding, actions, and decisions in an unconscious way, making them difficult to control.

Since the mid-90s, psychologists have extensively researched implicit biases, revealing that, without even knowing it, we all possess our own implicit biases.

System 1 and System 2 Thinking

Kahneman (2011) distinguishes between two types of thinking: system 1 and system 2.
  • System 1 is the brain’s fast, emotional, unconscious thinking mode. This type of thinking requires little effort, but it is often error-prone. Most everyday activities (like driving, talking, and cleaning) rely heavily on system 1.
  • System 2 is slow, logical, effortful, conscious thought, where reason dominates.


Implicit Bias vs. Explicit Bias

  • Definition: Implicit bias is unconscious attitudes or stereotypes that affect our understanding, actions, and decisions. Explicit bias is conscious beliefs and attitudes about a person or group.
  • Expression: Implicit bias can influence decisions and behavior subconsciously. Explicit bias is usually apparent in a person’s language and behavior.
  • Example: A hiring manager unknowingly favors candidates who went to the same university as them (implicit); a person makes a conscious decision not to hire someone based on their ethnicity (explicit).
  • Impact: Implicit bias can lead to unintentional discrimination in many areas like hiring, law enforcement, and healthcare. Explicit bias can lead to deliberate, overt discrimination.
  • Measurement: Implicit bias is measured using implicit association tests and other indirect methods. Explicit bias can be assessed directly through surveys, interviews, etc.
  • Prevalence: Implicit bias is very common, as everyone holds unconscious biases to some degree. Explicit bias is less common, as societal norms have shifted to view it as unacceptable.
  • Mitigation: For implicit bias, improve self-awareness, undergo bias training, and diversify your experiences and interactions. For explicit bias, education, awareness, and promoting inclusivity and diversity.

What is meant by implicit bias?

Implicit bias (unconscious bias) refers to attitudes and beliefs outside our conscious awareness and control. Implicit biases are an example of system 1 thinking, so we are unaware they exist (Greenwald & Krieger, 2006).

An implicit bias may counter a person’s conscious beliefs without realizing it. For example, it is possible to express explicit liking of a certain social group or approval of a certain action while simultaneously being biased against that group or action on an unconscious level.

Therefore, implicit and explicit biases might differ for the same person.

It is important to understand that implicit biases can become explicit biases. This occurs when you become consciously aware of your prejudices and beliefs. They surface in your mind, leading you to choose whether to act on or against them.

What is meant by explicit bias?

Explicit biases are biases we are aware of on a conscious level (for example, feeling threatened by another group and delivering hate speech as a result). They are an example of system 2 thinking.

It is also possible that your implicit and explicit biases differ from your neighbor, friend, or family member. Many factors can control how such biases are developed.

What Are the Implications of Unconscious Bias?

Implicit biases become evident in many different domains of society. On an interpersonal level, they can manifest in simple daily interactions.

This occurs when certain actions (or microaggressions) make others feel uncomfortable or aware of the specific prejudices you may hold against them.

Implicit Prejudice

Implicit prejudice is the automatic, unconscious attitudes or stereotypes that influence our understanding, actions, and decisions. Unlike explicit prejudice, which is consciously controlled, implicit prejudice can occur even in individuals who consciously reject prejudice and strive for impartiality.

Unconscious racial stereotypes are a major example of implicit prejudice. In other words, having an automatic preference for one race over another without being aware of this bias.

This bias can manifest in small interpersonal interactions and has broader implications in society’s legal system and many other important sectors.

Examples may include holding an implicit stereotype that associates Black individuals with violence. As a result, you may cross the street at night when you see a Black man walking in your direction, without even realizing why.

The action taken here is an example of a microaggression. A microaggression is a subtle, automatic, and often nonverbal exchange that communicates hostile, derogatory, or negative prejudicial slights and insults toward any group (Pierce, 1970). Crossing the street communicates an implicit prejudice, even if you are not aware of it.

Another example of an implicit racial bias is if a Latino student is complimented by a teacher for speaking perfect English, but he is a native English speaker. Here, the teacher assumed that English would not be his first language simply because he is Latino.

Gender Stereotypes

Gender biases are another common form of implicit bias. Gender biases are the ways in which we judge men and women based on traits traditionally assigned as feminine or masculine.

For example, a greater assignment of fame to male than female names (Banaji & Greenwald, 1995) reveals a subconscious bias that holds men at a higher level than their female counterparts. Whether you voice the opinion that men are more famous than women is independent of this implicit gender bias.

Another common implicit gender bias regards women in STEM (science, technology, engineering, and mathematics).

In school, girls are more likely to be associated with language than with math, whereas boys are more likely to be associated with math than with language (Steffens & Jelenec, 2011), revealing clear gender-related implicit biases that can ultimately go so far as to dictate future career paths.

Even if you outwardly say men and women are equally good at math, it is possible you subconsciously associate math more strongly with men without even being aware of this association.

Health Care

Healthcare is another setting where implicit biases are very present. Racial and ethnic minorities and women are subject to less accurate diagnoses, curtailed treatment options, less pain management, and worse clinical outcomes (Chapman, Kaatz, & Carnes, 2013).

Additionally, Black children are often not treated as children or given the same compassion or level of care provided for White children (Johnson et al., 2017).

It becomes evident that implicit biases infiltrate the most common sectors of society, making it all the more important to question how we can remove these biases.

LGBTQ+ Community Bias

Similar to implicit racial and gender biases, individuals may hold implicit biases against members of the LGBTQ+ community. Again, that does not necessarily mean that these opinions are voiced outwardly or even consciously recognized by the beholder, for that matter.

Rather, these biases are unconscious. A really simple example could be asking a female friend if she has a boyfriend, assuming her sexuality and that heterosexuality is the norm or default.

Instead, you could ask your friend if she is seeing someone in this specific situation. Several other forms of implicit biases fall into categories ranging from weight to ethnicity to ability that come into play in our everyday lives.

Legal System

Both law enforcement and the legal system shed light on implicit biases. An example of implicit bias functioning in law enforcement is the shooter bias – the tendency among the police to shoot Black civilians more often than White civilians, even when they are unarmed (Mekawi & Bresin, 2015).

This bias has been repeatedly demonstrated in laboratory settings, revealing an implicit bias against Black individuals. Black people are also disproportionately arrested and given harsher sentences, and Black juveniles are tried as adults more often than their White peers.

Black boys are also seen as less childlike, less innocent, more culpable, more responsible for their actions, and as being more appropriate targets for police violence (Goff et al., 2014).

Together, these unconscious stereotypes, which are not rooted in truth, form an array of implicit biases that are extremely dangerous and utterly unjust.

Implicit biases are also visible in the workplace. One experiment that tracked the success of White and Black job applicants found that stereotypically White names received 50% more callbacks than stereotypically Black names, regardless of the industry or occupation (Bertrand & Mullainathan, 2004).

This reveals another form of implicit bias: the hiring bias – Anglicized‐named applicants receiving more favorable pre‐interview impressions than other ethnic‐named applicants (Watson, Appiah, & Thornton, 2011).

We’re susceptible to bias because of these tendencies:

We tend to seek out patterns

A key reason we develop such biases is that our brains have a natural tendency to look for patterns and associations to make sense of a very complicated world.

Research shows that even before kindergarten, children already use their group membership (e.g., racial group, gender group, age group, etc.) to guide inferences about psychological and behavioral traits.

At such a young age, they have already begun seeking patterns and recognizing what distinguishes them from other groups (Baron, Dunham, Banaji, & Carey, 2014).

And not only do children recognize what sets them apart from other groups, they believe “what is similar to me is good, and what is different from me is bad” (Cameron, Alvarez, Ruble, & Fuligni, 2001).

Children aren’t just noticing how similar or dissimilar they are to others; dissimilar people are actively disliked (Aboud, 1988).

Recognizing what sets you apart from others and then forming negative opinions about those outgroups (a social group with which an individual does not identify) contributes to the development of implicit biases.

We like to take shortcuts

Another explanation is that the development of these biases is a result of the brain’s tendency to try to simplify the world.

Mental shortcuts make it faster and easier for the brain to sort through all of the overwhelming data and stimuli we are met with every second of the day. And we take mental shortcuts all the time. Rules of thumb, educated guesses, and using “common sense” are all forms of mental shortcuts.

Implicit bias is a result of taking one of these cognitive shortcuts inaccurately (Rynders, 2019). As a result, we incorrectly rely on these unconscious stereotypes to provide guidance in a very complex world.

And especially when we are under high levels of stress, we are more likely to rely on these biases than to examine all of the relevant, surrounding information (Wigboldus, Sherman, Franzese, & Knippenberg, 2004).

Social and Cultural influences

Influences from media, culture, and your individual upbringing can also contribute to the rise of implicit associations that people form about the members of social outgroups. Media has become increasingly accessible, and while that has many benefits, it can also lead to implicit biases.

The way TV portrays individuals or the language journal articles use can ingrain specific biases in our minds.

For example, they can lead us to associate Black people with criminals or females as nurses or teachers. The way you are raised can also play a huge role. One research study found that parental racial attitudes can influence children’s implicit prejudice (Sinclair, Dunn, & Lowery, 2005).

And parents are not the only figures who can influence such attitudes. Siblings, the school setting, and the culture in which you grow up can also shape your explicit beliefs and implicit biases.

Implicit Association Test (IAT)

What sets implicit biases apart from other forms of bias is that they are subconscious: we do not know that we hold them.

However, researchers have developed the Implicit Association Test (IAT) tool to help reveal such biases.

The Implicit Association Test (IAT) is a psychological assessment that measures an individual’s unconscious biases and associations. The test measures how quickly a person associates concepts or groups (such as race or gender) with positive or negative attributes, revealing biases that may not be consciously acknowledged.

The IAT requires participants to categorize negative and positive words together with either images or words (Greenwald, McGhee, & Schwartz, 1998).

Tests are taken online and must be completed as quickly as possible; the faster you categorize certain words or faces together, the stronger the association (and thus the bias) you hold between them.

For example, the Race IAT requires participants to categorize White faces and Black faces alongside negative and positive words. The relative speed with which Black faces are associated with negative words is used as an indication of the level of anti-Black bias.
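The logic behind scoring can be sketched in code. The following is a simplified illustration only, not the published IAT scoring algorithm (the improved D-score procedure of Greenwald and colleagues also trims extreme latencies and penalizes errors); the function name and the latency values are hypothetical.

```python
from statistics import mean, stdev

def iat_d_score(congruent_ms, incongruent_ms):
    """Simplified IAT effect: the difference in mean response latency
    between the incongruent and congruent pairing blocks, scaled by
    the pooled standard deviation of all latencies. A larger positive
    value indicates a stronger implicit association favoring the
    congruent pairing."""
    pooled_sd = stdev(congruent_ms + incongruent_ms)
    return (mean(incongruent_ms) - mean(congruent_ms)) / pooled_sd

# Hypothetical response latencies in milliseconds. Slower responses
# in the block pairing Black faces with positive words would yield
# a positive score, suggesting an implicit anti-Black association.
congruent = [650, 700, 620, 680, 640]      # e.g., stereotype-consistent block
incongruent = [820, 790, 860, 810, 845]    # e.g., stereotype-inconsistent block
print(iat_d_score(congruent, incongruent))
```

In the real test, each participant completes many trials per block, and scores near zero indicate little or no measured association.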

Professor Brian Nosek and colleagues tested more than 700,000 subjects. They found that more than 70% of White subjects more easily associated White faces with positive words and Black faces with negative words, concluding that this was evidence of implicit racial bias (Nosek, Greenwald, & Banaji, 2007).

Outside of lab testing, it is very difficult to know if we do, in fact, possess these biases. The fact that they are so hard to detect is in the very nature of this form of bias, making them very dangerous in various real-world settings.

How to Reduce Implicit Bias

Because of the harmful nature of implicit biases, it is critical to examine how we can begin to remove them.

Practicing mindfulness is one potential way, as it reduces the stress and cognitive load that otherwise leads to relying on such biases.

A 2016 study found that brief meditation decreased unconscious bias against Black people and elderly people (Lueke & Gibson, 2016), providing initial insight into the usefulness of this approach and paving the way for future research on this intervention.

Adjust your perspective

Another method is perspective-taking – looking beyond your own point of view so that you can consider how someone else may think or feel about something.

Researcher Belinda Gutierrez implemented a videogame called “Fair Play,” in which players assume the role of a Black graduate student named Jamal Davis.

As Jamal, players experience subtle race bias while completing “quests” to obtain a science degree.

Gutierrez hypothesized that participants randomly assigned to play the game would show greater empathy for Jamal and lower implicit race bias than participants randomized to read a narrative text (without perspective-taking) describing Jamal’s experience (Gutierrez et al., 2014). Her hypothesis was supported, illustrating the benefits of perspective-taking in increasing empathy toward outgroup members.

Specific implicit bias training has been incorporated into different educational and law enforcement settings. Research has found that diversity training designed to overcome biases against women in STEM improved attitudes among men (Jackson, Hillard, & Schneider, 2014).

Training programs designed to target and help overcome implicit biases may also be beneficial for police officers (Plant & Peruche, 2005), but there is not enough conclusive evidence to completely support this claim. One pitfall of such training is a potential rebound effect.

Actively trying to inhibit stereotyping can cause the bias to eventually increase beyond the level it would have reached had it never been suppressed (Macrae, Bodenhausen, Milne, & Jetten, 1994). This is very similar to the white bear problem discussed in many psychology curricula.

This concept refers to the psychological process whereby deliberate attempts to suppress certain thoughts make them more likely to surface (Wegner & Schneider, 2003).

Education is crucial. Understanding what implicit biases are, how they arise, and how to recognize them in yourself and others is incredibly important in working toward overcoming such biases.

Learning about other cultures or outgroups and what language and behaviors may come off as offensive is critical as well. Education is a powerful tool that can extend beyond the classroom through books, media, and conversations.

On the bright side, implicit biases in the United States have been improving.

From 2007 to 2016, implicit biases have changed towards neutrality for sexual orientation, race, and skin-tone attitudes (Charlesworth & Banaji, 2019), demonstrating that it is possible to overcome these biases.

Books for further reading

As mentioned, education is extremely important. Here are a few places to get started in learning more about implicit biases:

  • Biased: Uncovering the Hidden Prejudice That Shapes What We See, Think, and Do by Jennifer Eberhardt
  • Blindspot by Anthony Greenwald and Mahzarin Banaji
  • Implicit Racial Bias Across the Law by Justin Levinson and Robert Smith

Keywords and Terminology

To find materials on implicit bias and related topics, search databases and other tools using the following keywords:

  • “implicit bias”
  • “implicit gender bias”
  • “unconscious bias”
  • “implicit prejudices”
  • “hidden bias”
  • “implicit racial bias”
  • “cognitive bias”
  • “Implicit Association Test” or IAT
  • “implicit association”
  • “implicit social cognition”
  • bias
  • prejudices
  • “prejudice psychological aspects”
  • stereotypes

Is unconscious bias the same as implicit bias?

Yes, unconscious bias is the same as implicit bias. Both terms refer to the biases we carry without awareness or conscious control, which can affect our attitudes and actions toward others.

In what ways can implicit bias impact our interactions with others?

Implicit bias can impact our interactions with others by unconsciously influencing our attitudes, behaviors, and decisions. This can lead to stereotyping, prejudice, and discrimination, even when we consciously believe in equality and fairness.

It can affect various domains of life, including workplace dynamics, healthcare provision, law enforcement, and everyday social interactions.

What are some implicit bias examples?

Some examples of implicit biases include assuming a woman is less competent than a man in a leadership role, associating certain ethnicities with criminal behavior, or believing that older people are not technologically savvy.

Other examples include perceiving individuals with disabilities as less capable or assuming that someone who is overweight is lazy or unmotivated.

Aboud, F. E. (1988). Children and prejudice . B. Blackwell.

Banaji, M. R., & Greenwald, A. G. (1995). Implicit gender stereotyping in judgments of fame. Journal of Personality and Social Psychology , 68 (2), 181.

Baron, A. S., Dunham, Y., Banaji, M., & Carey, S. (2014). Constraints on the acquisition of social category concepts. Journal of Cognition and Development , 15 (2), 238-268.

Bertrand, M., & Mullainathan, S. (2004). Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination. American economic review , 94 (4), 991-1013.

Cameron, J. A., Alvarez, J. M., Ruble, D. N., & Fuligni, A. J. (2001). Children’s lay theories about ingroups and outgroups: Reconceptualizing research on prejudice. Personality and Social Psychology Review , 5 (2), 118-128.

Chapman, E. N., Kaatz, A., & Carnes, M. (2013). Physicians and implicit bias: how doctors may unwittingly perpetuate health care disparities. Journal of general internal medicine , 28 (11), 1504-1510.

Charlesworth, T. E., & Banaji, M. R. (2019). Patterns of implicit and explicit attitudes: I. Long-term change and stability from 2007 to 2016. Psychological science , 30(2), 174-192.

Goff, P. A., Jackson, M. C., Di Leone, B. A. L., Culotta, C. M., & DiTomasso, N. A. (2014). The essence of innocence: Consequences of dehumanizing Black children. Journal of Personality and Social Psychology, 106(4), 526.

Greenwald, A. G., & Banaji, M. R. (1995). Implicit social cognition: attitudes, self-esteem, and stereotypes. Psychological review, 102(1), 4.

Greenwald, A. G., McGhee, D. E., & Schwartz, J. L. (1998). Measuring individual differences in implicit cognition: the implicit association test. Journal of personality and social psychology , 74(6), 1464.

Greenwald, A. G., & Krieger, L. H. (2006). Implicit bias: Scientific foundations. California Law Review , 94 (4), 945-967.

Gutierrez, B., Kaatz, A., Chu, S., Ramirez, D., Samson-Samuel, C., & Carnes, M. (2014). “Fair Play”: a videogame designed to address implicit race bias through active perspective taking. Games for health journal , 3 (6), 371-378.

Jackson, S. M., Hillard, A. L., & Schneider, T. R. (2014). Using implicit bias training to improve attitudes toward women in STEM. Social Psychology of Education , 17 (3), 419-438.

Johnson, T. J., Winger, D. G., Hickey, R. W., Switzer, G. E., Miller, E., Nguyen, M. B., … & Hausmann, L. R. (2017). Comparison of physician implicit racial bias toward adults versus children. Academic pediatrics , 17 (2), 120-126.

Kahneman, D. (2011). Thinking, fast and slow . Macmillan.

Lueke, A., & Gibson, B. (2016). Brief mindfulness meditation reduces discrimination. Psychology of Consciousness: Theory, Research, and Practice , 3 (1), 34.

Macrae, C. N., Bodenhausen, G. V., Milne, A. B., & Jetten, J. (1994). Out of mind but back in sight: Stereotypes on the rebound. Journal of personality and social psychology , 67 (5), 808.

Mekawi, Y., & Bresin, K. (2015). Is the evidence from racial bias shooting task studies a smoking gun? Results from a meta-analysis. Journal of Experimental Social Psychology , 61 , 120-130.

Nosek, B. A., Greenwald, A. G., & Banaji, M. R. (2007). The Implicit Association Test at age 7: A methodological and conceptual review. Automatic processes in social thinking and behavior , 4 , 265-292.

Pierce, C. (1970). Offensive mechanisms. The black seventies , 265-282.

Plant, E. A., & Peruche, B. M. (2005). The consequences of race for police officers’ responses to criminal suspects. Psychological Science , 16 (3), 180-183.

Rynders, D. (2019). Battling Implicit Bias in the IDEA to Advocate for African American Students with Disabilities. Touro L. Rev. , 35 , 461.

Sinclair, S., Dunn, E., & Lowery, B. (2005). The relationship between parental racial attitudes and children’s implicit prejudice. Journal of Experimental Social Psychology , 41 (3), 283-289.

Steffens, M. C., & Jelenec, P. (2011). Separating implicit gender stereotypes regarding math and language: Implicit ability stereotypes are self-serving for boys and men, but not for girls and women. Sex Roles , 64(5-6), 324-335.

Watson, S., Appiah, O., & Thornton, C. G. (2011). The effect of name on pre‐interview impressions and occupational stereotypes: the case of black sales job applicants. Journal of Applied Social Psychology , 41 (10), 2405-2420.

Wegner, D. M., & Schneider, D. J. (2003). The white bear story. Psychological Inquiry , 14 (3-4), 326-329.

Wigboldus, D. H., Sherman, J. W., Franzese, H. L., & Knippenberg, A. V. (2004). Capacity and comprehension: Spontaneous stereotyping under cognitive load. Social Cognition , 22 (3), 292-309.

Further Information

Test yourself for bias.

  • Project Implicit (IAT Test) From Harvard University
  • Implicit Association Test From the Social Psychology Network
  • Test Yourself for Hidden Bias From Teaching Tolerance
  • How The Concept Of Implicit Bias Came Into Being. With Dr. Mahzarin Banaji, Harvard University, author of Blindspot: Hidden Biases of Good People (5:28 minutes; includes a transcript)
  • Understanding Your Racial Biases. With John Dovidio, PhD, Yale University; from the American Psychological Association (11:09 minutes; includes a transcript)
  • Talking Implicit Bias in Policing. With Jack Glaser, Goldman School of Public Policy, University of California, Berkeley (21:59 minutes)
  • Implicit Bias: A Factor in Health Communication. With Dr. Winston Wong, Kaiser Permanente (19:58 minutes)
  • Bias, Black Lives and Academic Medicine. Dr. David Ansell on Your Health Radio, August 1, 2015 (21:42 minutes)
  • Uncovering Hidden Biases. Google talk with Dr. Mahzarin Banaji, Harvard University
  • Impact of Implicit Bias on the Justice System (9:14 minutes)
  • Students Speak Up: What Bias Means to Them (2:17 minutes)
  • Weight Bias in Health Care. From Yale University (16:56 minutes)
  • Gender and Racial Bias In Facial Recognition Technology (4:43 minutes)

Journal Articles

  • An implicit bias primer Mitchell, G. (2018). An implicit bias primer. Virginia Journal of Social Policy & the Law , 25, 27–59.
  • Implicit Association Test at age 7: A methodological and conceptual review Nosek, B. A., Greenwald, A. G., & Banaji, M. R. (2007). The Implicit Association Test at age 7: A methodological and conceptual review. Automatic processes in social thinking and behavior, 4 , 265-292.
  • Implicit Racial/Ethnic Bias Among Health Care Professionals and Its Influence on Health Care Outcomes: A Systematic Review Hall, W. J., Chapman, M. V., Lee, K. M., Merino, Y. M., Thomas, T. W., Payne, B. K., … & Coyne-Beasley, T. (2015). Implicit racial/ethnic bias among health care professionals and its influence on health care outcomes: a systematic review. American Journal of public health, 105 (12), e60-e76.
  • Reducing Racial Bias Among Health Care Providers: Lessons from Social-Cognitive Psychology Burgess, D., Van Ryn, M., Dovidio, J., & Saha, S. (2007). Reducing racial bias among health care providers: lessons from social-cognitive psychology. Journal of general internal medicine, 22 (6), 882-887.
  • Integrating implicit bias into counselor education Boysen, G. A. (2010). Integrating Implicit Bias Into Counselor Education. Counselor Education & Supervision, 49 (4), 210–227.
  • Cognitive Biases and Errors as Cause—and Journalistic Best Practices as Effect Christian, S. (2013). Cognitive Biases and Errors as Cause—and Journalistic Best Practices as Effect. Journal of Mass Media Ethics, 28 (3), 160–174.
  • Empathy intervention to reduce implicit bias in pre-service teachers Whitford, D. K., & Emerson, A. M. (2019). Empathy Intervention to Reduce Implicit Bias in Pre-Service Teachers. Psychological Reports, 122 (2), 670–688.

An infographic titled '6 ways to combat implicit bias' with elaborations on each point and an associated image for each.

Implicit bias

Implicit bias, also known as implicit prejudice or implicit attitude, is a negative attitude, of which one is not consciously aware, against a specific social group.

Implicit bias is thought to be shaped by experience and based on learned associations between particular qualities and social categories, including race and/or gender. Individuals’ perceptions and behaviors can be influenced by the implicit biases they hold, even if they are unaware they hold such biases. Implicit bias is an aspect of implicit social cognition: the phenomenon that perceptions, attitudes, and stereotypes can operate prior to conscious intention or endorsement.

Adapted from the APA Dictionary of Psychology and Wikipedia

Resources from APA

Is justice blind if we say it is?

A U.S. Supreme Court case raises the issue of whether jurors can determine how influenced they are by potentially biasing characteristics of a defendant.

Speaking of Psychology: Can we unlearn implicit biases? With Mahzarin Banaji, PhD

Can we unlearn implicit biases?

Mahzarin Banaji, PhD, talks about how implicit bias differs from prejudice and racism and how we can overcome our own biases

Scientist Spotlight: Q&A with Mahzarin R. Banaji, PhD

Banaji discusses her research on implicit bias and directions she’d like her discipline to go.

Despite the ADA, equity is still out of reach

Psychologists are intensifying efforts to improve health care, justice, employment, and more for people with disabilities.

More resources about implicit bias

The Psychology of Prejudice, 2nd Ed.

The Myth of Racial Color Blindness

Perspectives on Hate

Journal special issues

Prejudice and Discrimination

NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2024 Jan-.

Implicit bias.

Harini S. Shah ; Julie Bohlen .

Last Update: March 4, 2023 .

  • Continuing Education Activity

Implicit bias is the attitude or internalized stereotypes that unconsciously affect our perceptions, actions, and decisions. These unconscious biases often affect behavior that leads to unequal treatment of people based on race, ethnicity, gender identity, sexual orientation, age, disability, health status, and other characteristics. In medicine, unconscious bias-based discriminatory practices negatively impact patient care and medical training programs, hinder effective communication, limit workforce diversity, lead to inequitable distribution of research funding, impede career advancement, and result in barriers and disparities in the access to and delivery of healthcare services. This activity will address strategies to reduce the harm of implicit bias, clinician self-awareness and self-assessment of personal biases, and the role of the interprofessional team in increasing awareness and reducing bias-based discriminatory behavior.

  • Recognize how implicit bias affects the perceptions and treatment decisions of clinicians leading to disparities in healthcare delivery and health outcomes.
  • Identify stigmatized groups and strategies to eliminate discriminatory behavior in healthcare delivery for these groups.
  • Describe strategies to increase awareness of personal unconscious biases in daily interactions and change behavior accordingly.
  • Discuss how interdisciplinary teams can reduce the harmful effects of implicit bias in medicine.
  • Introduction

Implicit biases are subconscious associations between two disparate attributes that can result in inequitable decisions. They operationalize throughout the healthcare ecosystem, impacting patients, clinicians, administrators, faculty, and staff. No individual is immune from the harmful effects of implicit biases. Unconscious bias-based discriminatory practices negatively impact patient care, medical training programs, hiring decisions, and financial award decisions and also limit workforce diversity, lead to inequitable distribution of research funding, and can impede career advancement. [1]

When implicit biases are ignored, they jeopardize the delivery of high-quality healthcare services. [2] A simple scenario can exemplify implicit bias in healthcare in action. Several physicians are reviewing the chest x-ray of a Black man with a productive cough to determine a possible diagnosis. Another physician, not privy to the patient's demographics, joins the discussion later and quickly states that his condition is most likely cystic fibrosis. The clinicians were initially influenced by the patient's demographics and only then realized the chest x-ray findings were diagnostic for late-stage cystic fibrosis, a condition more common in White populations than in other races.

Explicit versus Implicit Bias

With explicit bias, individuals are aware of their negative attitudes or prejudices toward groups of people and may allow those attitudes to affect their behavior. The preference for a particular group is conscious. For example, a hospital CEO may seek a male physician to head a department due to his explicit belief that men make better leaders than women. This type of bias is fully conscious.

Implicit bias includes the subconscious feelings, attitudes, prejudices, and stereotypes an individual has developed due to prior influences and imprints throughout their lives. Individuals are unaware that subconscious perceptions, instead of facts and observations, affect their decision-making. Implicit bias and explicit bias are both problematic because they lead to discriminatory behavior and potentially suboptimal healthcare delivery.

We all hold implicit biases. Implicit bias is challenging to recognize in oneself; awareness of bias is one step toward changing one's behavior. [1] Cultural safety refers to the need for healthcare professionals to examine themselves and the potential impact of their culture, power, privilege, and personal biases on clinical interactions and healthcare delivery. This requires health providers to question their own attitudes, assumptions, stereotypes, and prejudices that may contribute to a lower quality of healthcare for some patients. Cultural safety compels healthcare professionals and organizations to engage in ongoing self-reflection and self-awareness and hold themselves accountable to provide culturally safe care, which the patients and their communities define. [3] Healthcare professionals and their healthcare organizations should work together to develop strategies to mitigate the harmful effects of bias and reduce bias-based decisions that contribute to barriers to healthcare access, healthcare disparities in patient care delivery, and lack of workforce diversity.

Stigmatized Groups and the Implicit Association Test (IAT)

Although we may consciously reject negative associations with stigmatized groups, it is virtually impossible to dissociate from a culture impregnated with such stereotypes. Patients from stigmatized groups may have one or more of these characteristics or conditions: advanced age, non-White race, HIV, disabilities, and substance or alcohol use disorders. [4] [5] [6]  Other factors include low socioeconomic status, mental illness, non-English speaking, non-heterosexual, and obesity. [7] [8] [9] [10] Implicit biases, by definition, occur in the absence of salient understanding or conscious awareness. [11] [12]  However, we can apply harm mitigation strategies to avoid the destructive implications of implicit bias. To this end, recognition is the first step.

Implicit biases in healthcare are well-characterized by studies that use Implicit Association Tests (IAT) to evaluate medical decision-making toward stigmatized groups. The IAT measures the strength of associations between concepts and evaluations or stereotypes to reveal an individual's hidden or subconscious biases (Project Implicit - implicit.harvard.edu). The IAT is a highly validated measure for implicit biases; although vulnerable to voluntary control, the tool remains a gold standard in implicit bias research. [13] [14] Studies have shown that strong implicit biases hinder communication. [15] Effective patient-healthcare provider (HCP) communication is associated with reduced patient morbidity and mortality, lower healthcare costs, and decreased rates of HCP burnout. [16] [17] [18]

Implicit biases become destructive when they translate into microaggressions, defined as verbal or nonverbal cues that communicate hostile attitudes towards those from stigmatized groups. [19] [20] Although often unintentional, microaggressions maintain power structures and threaten the psychological safety of patients, resulting in adverse public health implications. [21] Reducing microaggressions has been shown to reduce HCP burnout and depression. [22] [23]

Implicit Bias Awareness and Training

Comprehensive implicit bias training enhances the healthcare workforce's financial value, productivity, and longevity. The recognition of implicit bias is the first step in mitigating its effects. Many states in the US require implicit bias training for employment and licensure in the healthcare profession. The ongoing engagement of implicit biases among HCPs promotes cultural safety in healthcare organizations, representing a critical consciousness that welcomes accountability in the collaborative effort to provide culturally safe healthcare as defined by patients and their communities. HCPs should be aware of their implicit biases but not blame themselves when situations out of their control arise—respect for themselves, peers, and patients is the utmost priority. Progress toward reducing implicit bias is limited without personal discomfort and vulnerability.

Currently, very limited knowledge exists on how to conduct effective implicit bias training. However, studies show that incorporating mindfulness, coalition-building, and personal retrospection alongside broader structural changes is integral in reducing the harmful effects of implicit bias in the clinical environment. [2] [24] [25] This article provides strategies to mitigate the impact of implicit biases among physicians, residents, physician assistants, pharmacists, registered nurses, nurse practitioners, medical assistants, medical scribes, certified registered nurse anesthetists, physical and occupational therapists, chiropractors, dentists, hygienists, licensed nutritionists, dieticians, social workers, counselors, psychologists, other allied health professionals, and healthcare trainees. Implicit bias in continuing education is required in many states.

Implicit Bias Training: State Legislation and Requirements for Healthcare Providers

  • California - AB241
  • Illinois - Sec. 2105-15.7
  • Michigan - R 338.7001
  • Maryland - HB28, Sec. 1-225
  • Minnesota - Sec. 144.1461
  • Washington - Sec. 43.70.613
  • Massachusetts - 243 CMR 2.06(a)3
  • New York - S3077
  • Pennsylvania - HB 2110, Title 63, Sec. 2102a
  • Indiana - HB 1178
  • Oklahoma - HB 2730
  • South Carolina - H 4712, Session 123
  • Tennessee - SB0956 and HB0642

  • Issues of Concern

Harm-Reduction Strategies for Stigmatized Groups

Studies show that implicit bias training has little to no benefit without disaggregating the experiences of stigmatized groups and providing actionable recommendations. Here, we outline harm-reduction strategies disaggregated by stigmatized group (advanced age, nonwhite race, HIV-positive status, disability, substance use disorder, alcohol use disorder, low socioeconomic status, mental illness, limited English proficiency, non-heterosexual identity, and obesity). Patients often belong to more than one group, given the intersectionality of historically disadvantaged populations in the US (e.g., being black and having low socioeconomic status).

Persons of Advanced Age

Older adults are often associated with a cultural fear of death and dying. [26] [27] This fear is so pervasive that older adults may even internalize the belief that they are a burden to others. [28] [21] HCPs may perceive older adults as less independent (regardless of decision-making capacity), attention-seeking, unrewarding to care for, and visually unpleasant. [29] From a mental health standpoint, physicians are less willing to treat older adults with suicidal ideation than young adults with suicidal ideation, and healthcare trainees are less comfortable interacting with older adults (compared to younger adults) with suicidal ideation. [30] Nurses with negative perceptions of older adults provide less health education and have shorter interactions with older patients. [30]

Implicit bias can result in undertreatment of mental illness in individuals of advanced age. Harm-reduction strategies therefore aim to educate clinicians that older adults deserve mental health treatment and should not be overlooked because of unconscious prejudicial feelings clinicians may hold. HCPs should aim to schedule multiple health appointments in the same location and allot extra time for the care of older adults. The healthcare team must ask for written permission before speaking with family members and caregivers, and should talk directly to patients even when a caregiver is present. Studies have shown that peer mentor support among older adults, and support from those who have experienced illness, facilitates patient empowerment. [31] Providing multiple forms of accessible communication ensures a complete understanding of care. [32]

Persons of Nonwhite Race

In 2021, the Centers for Disease Control and Prevention (CDC) declared racism a serious public health threat. Indeed, numerous studies have documented race-based discrimination in healthcare settings. For example, implicit racial biases impact clinical decision-making for pain management, noninvasive cardiac testing, thrombolysis, cardiac catheterization, and cancer screening. [33] [21] Pediatric nonwhite patients also face implicit racial biases from HCPs. [34] [35] [36] Black, Latinx, and indigenous patients are frequently met with verbal dominance from HCPs and negative experiences in the medical setting, compromising trust in HCPs and patient care quality. [37] [38] [39] HCPs who score highly on the IAT for black-white implicit bias are often rated poorly by black patients regarding patient-centered care. [37] Implicit biases against those of nonwhite race are particularly salient when the clinician perceives increased time pressure and ambiguity, such as in acute care or emergencies. [40] [41] [42] The COVID-19 pandemic also exacerbated discriminatory attitudes towards HCPs of Asian and Pacific Islander descent. [43] [44]

Strategies to reduce harm from implicit racial bias include finding common ground, such as a shared group membership, which has been associated with a decrease in implicit racial bias. [25] Exposure to counter-stereotypical examples, such as a 36-year-old black male CEO of a Fortune 500 company, may also reduce unconscious prejudice and stereotyping. [45] Expanding one's network and forming friendships with people of different healthcare professions further reduces the effect of implicit bias in the healthcare setting. [45] One may also learn to recognize personal changes in nonverbal (e.g., gestures, eye contact, body distance) and paraverbal (e.g., tone, pitch, volume) communication behaviors. [46] [45] [21] Racialized experiences are valuable in a patient's health history; rather than ignoring these experiences, one can recognize their impact on health outcomes. HCPs may ask clinical questions to ascertain a patient's experiences with racism. [47] Examples of such questions include the following:

  • "Many of our patients face racism in healthcare; is this something you've experienced before?"
  • "Are there any important life events that you've experienced or are currently experiencing that affect your health?"

Finally, it is essential to thank patients for sharing their stories, validate them, and acknowledge the trauma that those experiences may have caused. Knowledge of these experiences gives context for patients who have lost trust in the healthcare system or who may appear "non-compliant." Incorporating this practice into healthcare workflows enhances value-based care. [48]

Persons with Limited English Proficiency

The nature of implicit bias toward those with limited English proficiency stems from inherent miscommunication in healthcare. English-speaking HCPs are comfortable speaking English in the work setting; when displaced from that comfort zone, study findings reveal that healthcare quality declines. The widespread use of medical interpreters has reduced many patient barriers, but interpreters are usually available only in large healthcare systems and are often not used during outpatient care. As a result, HCPs often translate to the best of their own ability when communicating with a patient with limited English proficiency. Although faster, this approach leaves wide gaps in the exchange of health information and in treatment adherence. [49] [50] [51] As mentioned previously, patient unfamiliarity and HCP time constraints are two competing factors that widen disparities in healthcare delivery. [15]

Strategies to reduce harm from implicit bias against those with limited English proficiency include consistently using professional medical interpreters in outpatient and inpatient settings. Before patient care visits, it is most effective for HCPs and staff to ensure that a professional interpreter is available for the entire appointment. [15] Caution must be taken when caregivers or family members offer to translate for the patient, as studies show this approach compromises patient autonomy.

Persons Living with HIV (PLWH)

The nature of implicit bias against persons living with HIV (PLWH) has deep roots in AIDS exceptionalism, a Western response to a lethal virus that initially disproportionately affected men who have sex with men (MSM). Fear and stigma in the early 1980s drove a public health response that worsened the alienation of PLWH. The long-term impact of this response is a deeply held, false narrative that PLWH are dangerous. This narrative continues to dampen opportunities for well-studied public health measures to expand prophylaxis, diagnosis, and treatment of HIV. [52]

Implicit biases and the stigma associated with HIV are independent barriers to testing, adherence, and retention in care. [53] [54] HCPs are responsible for understanding their implicit biases against PLWH and reducing their influence on the provision of equitable, timely HIV treatment. Unlike with many other stigmatized groups, greater exposure to PLWH and training to reduce HIV-associated stigma are associated with more positive experiences among both patients and HCPs. [55] Examples of implicit biases or perceptions held by HCPs include the following: PLWH are poor, have many sexual partners, could have avoided HIV if they had wanted to, or are affected because of risky or irresponsible behavior. [56] [57] Some studies have shown that HCPs would themselves feel ashamed if they were infected with HIV, contributing to a fear of occupational exposure to HIV. [55] [58] [59]

Strategies to reduce harm due to implicit biases against PLWH include actively countering the belief that HIV can be avoided simply by avoiding irresponsible behavior. Decades of studies have shown that PLWH are not the problem; a nationwide response that fails to protect its vulnerable populations has a more catastrophic outcome than the actions of any individual group. [59] Furthermore, one must actively avoid the assumption that HIV runs in specific circles or neighborhoods; public health efforts to reach at-risk groups do not equate to labeling entire communities as high-risk.

Persons of the LGBTQIA+ Community

The stigma surrounding PLWH and its misconstrued association with the lesbian, gay, bisexual, transgender, or queer/questioning (LGBTQ) community is exacerbated by heteronormative microaggressions when receiving healthcare, conveying the message that non-heterosexual identities are abnormal, different, or inferior to the heterosexual majority. [60] Unsurprisingly, HCPs identifying as heterosexual tend to harbor these implicit associations. [61] [62] Among HCPs, mental health providers are least likely, and nurses are most likely to hold implicit preferences for heterosexual patients. [61] When caring for sexual minority patients, HCPs with implicit biases express discomfort while taking patient sexual histories and advising about safe sex behaviors, compromising the quality of care for sexual minority patients. [61]

To reduce harm from implicit biases against those identifying with the LGBTQ community, it is essential to do one's due diligence in understanding the terminology and how patients define themselves. [63] For example, a person whose gender differs from that assigned at birth may refer to themselves as transgender in formal settings but may also use self-descriptors such as trans, gender non-conforming, they/them/theirs, or nonbinary. HCPs should discuss and use patient self-descriptors in both communication and medical documentation. The more deliberately HCPs create safe spaces for patients of the LGBTQ community, the easier it becomes to incorporate patient self-descriptors into HCP workflows. [64]

Although not enough to produce culturally competent care, small changes such as supporting the observance of LGBTQ Pride Day or encouraging employees to use their descriptor pronouns can have a positive impact. [65] Lastly, HCPs should be aware of this population's relevant social and health needs and provide appropriate screenings and treatment without isolating patients. [66] Examples of these needs include violence prevention, comprehensive mental health treatment, discussions on substance and alcohol use, HPV screening, food insecurity, transgender transitional care, and hormonal therapy. [67] [68] [69] [70] [71]

Persons with Substance Use Disorder, Alcohol Use Disorder, History of Incarceration, or Exposure to Police Violence

Individuals with substance and alcohol use disorders, a history of incarceration, or exposure to police violence represent a population with significant unmet social and health needs. These unmet needs are exacerbated when HCPs hold negative implicit attitudes that individuals belonging to these groups are poorly motivated, manipulative, noncompliant, and violent. [72] [73] [74] HCPs have been shown to unfairly judge patient "treatability" before admission to rehabilitation programs, provide lower-quality palliative care for patients with late-stage cancer and substance use disorders, and display microaggressions towards pregnant patients with substance use disorders during prenatal visits. [75] [76] [77]

Study findings reveal that medical, nursing, and pharmacy trainees rarely receive training in healthcare delivery for persons with histories of criminal legal system exposure (frequent police stops, arrests, or incarceration), despite this group representing 57% of men and 31% of women in the US population. [78] [79] [80] As more individuals are released from jails and prisons into the community, HCPs unaware of their prejudicial negative feelings toward persons with criminal legal system involvement may threaten the psychological safety of an already vulnerable, community-dwelling population. [81]

One goal of implicit bias awareness and training is to reduce the harmful effects of implicit biases toward community-dwelling persons with a history of criminal legal system involvement. To do this, we must first dismantle the idea that a person with a history of incarceration must be a bad person; some groups are more likely to be incarcerated due to race alone. [82] [83] Nearly 1 in 3 black men will be imprisoned in the US. [83] Furthermore, sentence length, police brutality, and delayed parole are all influenced by implicit bias. [84] [85] [86] [87] Prevalent comorbidities such as severe mental illness make it virtually impossible to reintegrate into one's community without the assistance of a strong family network, healing environments, and financial resources. [88] [89] [90] Trauma-informed healthcare is messy, difficult, and time-consuming, but it is essential given the complex health needs of this population. Individuals with a history of incarceration may present anywhere in the healthcare system. HCPs, when able, must carefully document these experiences in a protected health record to inform other HCPs and avoid re-traumatizing patients.

Persons with Low Socioeconomic Status or a History of Homelessness

It is well documented that some HCPs working in safety-net hospitals and emergency departments express disdain towards individuals with low socioeconomic status or a history of homelessness, whose repeated acute care utilization is colloquially known as the "revolving door." [91] [92] HCPs may perceive hospital admissions of patients with a history of homelessness as an unnecessary use of resources that could otherwise go to those who need them. [93] [94] [95] Discriminatory behavior towards those experiencing homelessness is associated with suboptimal healthcare delivery and increased hospitalizations, exacerbating the "revolving door" problem. [54] [96] [97] [98] [99] [100] One explanation for discriminatory behavior among HCPs is relative exhaustion from large patient loads, administrative pushback, and competing demands in acute care environments, which tend to amplify implicit biases. [42] [41]

Strategies to reduce harm from implicit biases towards individuals from this group are twofold: 1) countering burnout with mindfulness and positive coping mechanisms and 2) eliminating the belief that low socioeconomic status and/or homelessness is earned. [101] [102] [103] [104] On the contrary, decades of research suggest that homelessness is neither incidental nor self-directed. Adverse childhood experiences and "poverty traps"—systems designed to siphon wealth from the poor to the wealthy—make it virtually impossible for those in poverty to gain enough social capital to access outpatient preventive healthcare. [105] [106] [107] Indeed, it may be easy to blame patients experiencing homelessness for their unmet health needs, but the habit of doing so perpetuates negative behaviors, worsens burnout, and decreases job satisfaction among HCPs. [108] [109] [110] [111]

Persons with a Disability

Evidence exists for implicit bias toward persons with a disability (PWD) among OT/PT specialists, [112] genetic counselors, [113] healthcare researchers, and other HCPs. [114] [115] In the US, PWD receive suboptimal preventive care and have poorer overall health status compared to those without a disability, partly due to negative implicit attitudes from HCPs. [116] [117] [118] [119] When asked about their willingness to treat PWD, HCPs report feeling largely unprepared to care for PWD and prefer not to treat them due to limited education on PWD's unique health needs. [120] Interestingly, studies show that current healthcare education paradoxically promotes ableist viewpoints. [121] [120] [122]

To reduce harm from HCP implicit biases toward PWD, HCPs should involve PWD in redesigning clinic spaces to improve accessibility. Many US outpatient clinics have incorporated features such as wheelchair-accessible doors, touchscreens, height-adjustable exam tables, and scales with handrails, but the lack of national standardization remains a limitation. [123] [124] [125] Additionally, not including PWD in clinic redesign has led to mediocre improvements in accessibility. [125] [123] To address this issue, focus groups with PWD as team members could develop patient-centered questions to identify patients needing healthcare accommodations. [126] Long-term changes include increasing the representation of HCPs with disabilities. [127]

Persons with Mental Illness

The prevalence of diagnosed mental illness is rising due to increased recognition and treatment (National Institute of Mental Health, 2022). Unfortunately, the negative stigma of having a mental illness prevents many from seeking treatment. [21] [128] The stigma surrounding mental illness has deep roots in US history; in the 19th and early 20th centuries, those with severe mental illness were held in asylums with limited access to the outside world. Deinstitutionalization, the release of patients with serious mental illness into the community, began in the 1950s and was largely driven by the financial burden that maintaining asylums placed on the rising welfare state. [129] [130] Unfortunately, the closing of asylums was not accompanied by increased community-based mental health services, leading to the systematic stigmatization and criminalization of patients with serious mental illness. [129] This history reflects a broader message that forms implicit biases among HCPs today: that having a mental illness is shameful. [128] [131] [132] [21]

Strategies to reduce harm due to implicit biases toward those with mental illness include speaking up when HCP colleagues dismiss a patient's mental illness or use it as a reason for lower-quality medical treatment. [133] [134] [135] HCPs should avoid the assumption that patients with mental illness seek to take advantage of the healthcare system. [135] Indeed, numerous studies suggest that those with mental illness are quickly labeled as "frequent flyers" in acute care settings and are more likely to be dismissed when complaining of pain, despite having more complex health needs. [136]

Persons with Obesity

Those with obesity are too often misrepresented as lazy, irresponsible, and lacking self-discipline; however, ample evidence suggests that genetic factors, socioeconomic status, and environment influence a person's obesity risk. [137] The idea that individuals with obesity are inferior is perpetuated in social media, colleges, and healthcare. [138]

Strategies to reduce harm to those with obesity start with using appropriate terminology. For example, HCPs should use obesity as a noun describing an illness, not obese as an adjective describing a patient: the proper phrasing is "a patient with obesity," not "an obese patient." This convention also applies to electronic health documentation; for example, the HCP should record "31-year-old patient with obesity" rather than "31-year-old obese patient." [139] While HCPs must provide optimal health recommendations for patients, they must also recognize the genetic, environmental, and ethnic factors influencing body fat distribution. The best outcomes for weight management occur in collaboration with an interdisciplinary team of dieticians, primary care providers, and bariatric services. [140]

  • Clinical Significance

The US healthcare system poses many challenges to HCPs: administrative burden, high patient load, and inefficiencies. Acknowledging and reducing implicit biases may seem an insurmountable task given these challenges. After all, how can you be emotionally available to recognize your own biases when you are barely managing to keep the ship afloat? This concern is partly valid; it is impossible to eliminate one's own implicit biases and treat everyone equally all the time. However, studies have shown that practicing mindfulness, attentional control, and emotional regulation, in addition to showing compassion when able, positively impacts the culture of healthcare. [54]

At the health systems level, providing implicit bias training courses for employees is not enough. Healthcare systems must 1) create stress-free spaces for HCPs to debrief and reflect on their experiences with implicit bias, 2) stop pressuring HCPs to constantly make major decisions during intense cognitive stress, and 3) provide opportunities for role-playing encounters with patients when implicit bias is perceived or acknowledged, as studies show the more HCPs practice these discussions, the more likely implicit biases are acknowledged and reflected upon in patient rooms. [9]

  • Enhancing Healthcare Team Outcomes

Although the relationship between implicit bias and interdisciplinary teams is relatively unexplored, it is evident that no single member is responsible for molding a healthcare team's culture. A culture that values open discussion of biases and protects psychological safety promotes team productivity, whereas rudeness and negative behaviors in healthcare teams may adversely affect team performance. [141] [Level 1] The "butterfly effect," the idea that small changes within a team can significantly impact other parts of the process or system, occurs in systems where implicit biases can be openly recognized without repercussions. [142] [Level 3]

Tools for self-reflection of implicit biases among healthcare teams have been shown to improve patient trust in the quality of care. [Level 1] Clear communication of expectations and responsibilities minimizes the impact of bias on choosing team roles. [143]  [Level 1] Implicit bias training can provide new team knowledge when additional learning is needed. Graduate medical education that includes implicit bias training has been shown to improve leadership qualities in trainees, which may foster an equitable team culture. [144]  [Level 1] However, isolated training does not result in equitable care without team members applying knowledge acquired in daily interactions. [1]  [Level 3] Therefore, regular check-ins and debriefs are essential to ensuring that team members feel prepared to engage in self-improvement. [143]  [Level 1]

Interprofessional Education Collaborative and Core Competencies

Interprofessional teams share their values, perspectives, and strategies for planning interventions, and each member of the team plays a role in delivering patient care. Team members share their expertise and skills to provide effective patient care and achieve optimal outcomes. Teams function optimally when the members effectively communicate and have mutual respect for each other and their individual roles. Four core competencies have been established for interprofessional collaborative practice (see IPEC Core Competencies for Interprofessional Education Collaborative):

  • Work with individuals of other professions to maintain a climate of mutual respect and shared values. (Values/Ethics). When team members place a high value on treating patients and team members equally and respectfully and operate ethically, interventions to reduce the harmful effects of implicit bias that result in health disparities can be created in a culturally safe and accepting environment.
  • Use the knowledge of one's own role and those of other professions to appropriately assess and address the health care needs of patients to promote and advance the health of populations. (Roles/Responsibilities)  Each interprofessional team member is responsible for identifying how implicit bias affects perceptions and clinicians' treatment decisions, leading to disparities in healthcare delivery and health outcomes.
  • Communicate with patients, families, communities, and professionals in health and other fields in a responsive and responsible manner that supports a team approach to promoting and maintaining health and preventing and treating disease. (Interprofessional Communication) Discussions regarding cultural safety and the continued need for clinicians to engage in ongoing self-reflection and self-awareness and hold themselves accountable to provide culturally safe care should be a priority. Open discussions focused on accepting that everyone has implicit biases and that everyone has the ability to recognize them and change their behavior through interventions, such as counter-stereotyping, are helpful. Strategies to improve patient-clinician communication are beneficial, especially with patients in stigmatized groups.
  • Apply relationship-building values and the principles of team dynamics to perform effectively in different team roles to plan, deliver, and evaluate patient/population-centered care and population health programs and policies that are safe, timely, efficient, effective, and equitable (Teams and Teamwork)  Teams should work together to develop strategies to eliminate discriminatory practices that result in disparities in healthcare delivery, limited access, and suboptimal patient outcomes. Time should be given to interventions that embrace and increase diversity in the workforce.

  • Nursing, Allied Health, and Interprofessional Team Interventions

If members of an interprofessional health team do not acknowledge their individual implicit biases, a large gap remains in the potential to address bias in healthcare. The entire interprofessional team, including clinicians, nurses, pharmacists, therapists, and other ancillary and administrative personnel, is responsible for openly discussing implicit biases that influence the care provided and for holding one another accountable.


Disclosure: Harini Shah declares no relevant financial relationships with ineligible companies.

Disclosure: Julie Bohlen declares no relevant financial relationships with ineligible companies.

This book is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) ( http://creativecommons.org/licenses/by-nc-nd/4.0/ ), which permits others to distribute the work, provided that the article is not altered or used commercially. You are not required to obtain permission to distribute this article, provided that you credit the author and journal.

  • Cite this Page Shah HS, Bohlen J. Implicit Bias. [Updated 2023 Mar 4]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2024 Jan-.


ScholarWorks@UTEP


DPT Capstones

Implicit bias regarding race in physical therapist students: a mixed method study.

Brianna Dawkins, The University of Texas at El Paso; Joseph Kennerly, The University of Texas at El Paso

Degree Type

DPT Project

Degree Name

Doctor of Physical Therapy (DPT)

Celia Pechak

Introduction: Bias is defined as a tendency to show favor for or against something or someone; bias can be implicit or explicit. When their implicit biases are left unchecked, healthcare practitioners are more likely to negatively affect patient health outcomes. The purposes of this study were to examine implicit racial bias among physical therapist students, and explore their perspectives about how cultural competency content regarding race and ethnicity could be improved in their current education program.

Methods: Thirty-three students from the Doctor of Physical Therapy (DPT) Program at The University of Texas at El Paso (UTEP) completed a pre-Implicit Association Test (IAT) survey and a post-IAT survey. Nine participants completed a semi-structured interview to explore their feelings about their IAT results and their assessment of the cultural competency content in their current education program. Survey data were analyzed using descriptive statistics, and interview data were analyzed using thematic qualitative analysis.

Results: The majority of participants predicted having no automatic preference or only a slight automatic preference for White/European Americans. On the contrary, the actual IAT revealed that the majority of participants had a slight to moderate automatic preference for White/European Americans. Of the 33 participants, 9 correctly predicted their IAT results. Qualitative analysis identified 4 themes: the juxtaposition of self-awareness and IAT results, allyship, dangers of implicit bias, and curriculum improvement.

Discussion and Conclusion: Though participants were not aware of the extent of their implicit bias, they possess awareness of having a degree of implicit racial bias and acknowledge the potential for bias to affect the treatment and health outcomes of patients. Current literature suggests a positive relationship between cultural competency interventions and improvement in individual client/patient health outcomes, but not health disparities. Programs must specifically talk about race and the historical systems of marginalization that affect the health of diverse populations of people to impact health disparities.


Recommended Citation

Dawkins, Brianna and Kennerly, Joseph, "Implicit Bias Regarding Race in Physical Therapist Students: A Mixed Method Study" (2022). DPT Capstones . 3. https://scholarworks.utep.edu/dpt_cap/3

Included in

Physical Therapy Commons


In the book Blindspot, the authors reveal hidden biases based on their experience with the Implicit Association Test. Project Implicit is graciously hosting electronic versions of Blindspot's IATs. These should work properly on any desktop computer and on several touch-screen devices, including iPads, Android tablets, Nook tablets, and the Kindle Fire.

These tests are not currently supported on the regular Kindle, Kindle Paperwhite, standard Nooks, or smartphones. Please use a computer or one of the supported tablets.

For best results with keyboards: use your left hand for the "e" key and your right hand for the "i" key. For touch-screens: lay the device on a flat surface, use landscape orientation, and use your left hand for the left tap area and your right hand for the right tap area.

If you encounter technical difficulties, please send a brief description to and we will do our best to respond.

COMMENTS

  1. Take a Test

    On the next page, you'll be asked to select an Implicit Association Test (IAT) from a list of possible topics. We'll also ask you (optionally) to report your attitudes or beliefs about these topics and give you some information about yourself. We ask these questions because the IAT can be more valuable if you also describe your own self ...

  2. Project Implicit

    Educational resource and research site for investigations in implicit social cognition. Includes online tests for implicit preferences for racial groups, age groups, political candidates, and associations between gender and academic domains.

  3. Outsmarting Implicit Bias: A Project at Harvard University

    Some implicit biases are changing towards neutrality, but others aren't. Explore models predicting the future and learn why there's reason to be hopeful. 15 - 20 min. Explore the mind's blindspots with Outsmarting Implicit Bias, an educational media series founded by Harvard psychologist Mahzarin Banaji.

  4. Implicit Association Test (IAT)

    The Implicit Association Test (IAT) measures attitudes and beliefs that people may be unwilling or unable to report. The IAT may be especially interesting if it shows that you have an implicit attitude that you did not know about. Learn More. Office for Equity, Diversity, Inclusion, and Belonging (OEDIB)

  5. The Implicit Association Test

    Among the general public and behavioral scientists alike, the Implicit Association Test (IAT) is the best known and most widely used tool for demonstrating implicit bias: the unintentional impact of social group information on behavior. More than forty million IATs have been completed at the Project Implicit research website. These public datasets are the most comprehensive documentation of ...

  6. PDF About the Implicit Association Test: Do you have hidden biases?

    The IAT is a test to measure unconscious bias, developed by psychologists at Harvard, the University of Virginia and the University of Washington. Unlike explicit bias (which reflects the attitudes or beliefs that you endorse at a conscious level), implicit bias is the bias in judgment and/or behavior

  7. Taking a hard look at our implicit biases

    The assumptions underlying the research on implicit bias derive from well-established theories of learning and memory and the empirical results are derived from tasks that have their roots in experimental psychology and neuroscience. Banaji's first experiments found, not surprisingly, that New Englanders associated good things with the Red ...

  8. The Bias Beneath: Two Decades of Measuring Implicit Associations

    But the IAT has also inspired a wealth of research on implicit biases related to age, weight, political leanings, disability, and much more. Opinions on the IAT are mixed. Controversy about the test was evident in a 2013 meta-analysis by APS Fellows Fred Oswald and Phillip E. Tetlock and colleagues. They found weaker correlations between IAT ...

  9. About the IAT

    The Implicit Association Test (IAT) measures the strength of associations between concepts (e.g., black people, gay people) and evaluations (e.g., good words, bad words) or stereotypes (e.g., athletic, clumsy). The main idea is that making a response is easier when closely related items share the same response key.

  10. Test Your Implicit Bias

    The Implicit Association Test (IAT) measures the strength of associations between concepts and evaluations or stereotypes to reveal an individual's hidden or subconscious biases. This test was first published in 1998 by Project Implicit, and has since been continuously updated and enhanced. Project Implicit was founded by Tony Greenwald of ...

  11. Implicit bias in healthcare: clinical practice, research and decision

    The Implicit Association Test (IAT) is the commonest measure of bias within research literature. It was developed from review work which identified that much of social behaviour was unconscious or implicit and may contribute to unintended discrimination. 16,17 The test involves users sorting words into groups as quickly and accurately as possible and comes in different categories from ...

  12. Test Yourself for Hidden Bias

    Experiments are being conducted to determine whether a strong hidden bias in someone results in more discriminatory behavior. But we can learn something from even the first studies: Those who showed greater levels of implicit prejudice toward, or stereotypes of, Black or gay people were more unfriendly toward them.

  13. Project Implicit

    Project Implicit is the product of a team of scientists whose research produced new ways of understanding attitudes, stereotypes, and other hidden biases that influence perception, judgment, and behavior. Our researchers and collaborators translate that academic research into practical applications for addressing diversity, improving decision ...

  14. Implicit Bias: What It Is, Examples, & Ways to Reduce It

    The Implicit Association Test (IAT) is a psychological assessment to measure an individual's unconscious biases and associations. The test measures how quickly a person associates concepts or groups (such as race or gender) with positive or negative attributes, revealing biases that may not be consciously acknowledged.

  15. Implicit bias

    Implicit bias, also known as implicit prejudice or implicit attitude, is a negative attitude, of which one is not consciously aware, against a specific social group. Implicit bias is thought to be shaped by experience and based on learned associations between particular qualities and social categories, including race and/or gender.

  16. The good, the bad, and the ugly of implicit bias

    The concept of implicit bias, also termed unconscious bias, and the related Implicit Association Test (IAT) rests on the belief that people act on the basis of internalised schemas of which they are unaware and thus can, and often do, engage in discriminatory behaviours without conscious intent.1 This idea increasingly features in public discourse and scholarly inquiry with regard to ...

  17. Implicit Bias

    Implicit bias is the attitude or internalized stereotypes that unconsciously affect our perceptions, actions, and decisions. These unconscious biases often affect behavior that leads to unequal treatment of people based on race, ethnicity, gender identity, sexual orientation, age, disability, health status, and other characteristics. In ...

  18. Frequently Asked Questions

    The link between implicit bias and behavior is fairly small on average but can vary quite greatly. The same is true for the link between explicit, or self-reported, bias and behavior. However, we do know that the relationship between implicit bias and behavior is larger in some domains than in others. Moreover, even small effects can be important.

  19. Black Women Are 25% More Likely to Have Unnecessary C-Sections

    A new report shows that Black women are 25% more likely to undergo a C-section than white women. The researchers suggest that implicit racial bias among providers may play a role and that there ...
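The response-key idea described above (sorting is easier when closely related items share a key) can be illustrated with a toy comparison. This is only a simplified sketch with made-up response times; the real IAT is scored with the D-score algorithm of Greenwald and colleagues, not a raw mean difference.

```python
# Toy sketch of the IAT's core comparison (NOT the official scoring method).
# The numbers below are made-up illustrative response times in milliseconds.
from statistics import mean

# Block where closely associated categories share a response key
congruent_rts = [650, 700, 620, 680, 710]
# Block where the category pairing is reversed
incongruent_rts = [820, 790, 860, 900, 840]

# Slower responses in the incongruent block suggest a stronger learned
# association between the categories that were paired in the congruent block.
iat_effect_ms = mean(incongruent_rts) - mean(congruent_rts)
print(f"Incongruent block is {iat_effect_ms:.0f} ms slower on average")
```

In practice the published scoring also filters implausibly fast or slow trials and divides the latency difference by its standard deviation, which is what makes scores comparable across people who respond at different overall speeds.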