1.1 The Science of Biology

Learning Objectives

By the end of this section, you will be able to do the following:

  • Identify the shared characteristics of the natural sciences
  • Summarize the steps of the scientific method
  • Compare inductive reasoning with deductive reasoning
  • Describe the goals of basic science and applied science

What is biology? In simple terms, biology is the study of life. This is a very broad definition because the scope of biology is vast. Biologists may study anything from the microscopic or submicroscopic view of a cell to ecosystems and the whole living planet (Figure 1.2). Listening to the daily news, you will quickly realize how many aspects of biology we discuss every day. For example, recent news topics include Escherichia coli (Figure 1.3) outbreaks in spinach and Salmonella contamination in peanut butter. Other subjects include efforts toward finding a cure for AIDS, Alzheimer’s disease, and cancer. On a global scale, many researchers are committed to finding ways to protect the planet, solve environmental issues, and reduce the effects of climate change. All of these diverse endeavors are related to different facets of the discipline of biology.

The Process of Science

Biology is a science, but what exactly is science? What does the study of biology share with other scientific disciplines? We can define science (from the Latin scientia, meaning “knowledge”) as knowledge that covers general truths or the operation of general laws, especially when acquired and tested by the scientific method. It becomes clear from this definition that applying the scientific method plays a major role in science. The scientific method is a method of research with defined steps that include experiments and careful observation.

We will examine the steps of the scientific method in detail later, but one of the most important aspects of this method is the testing of hypotheses by means of repeatable experiments. A hypothesis is a suggested explanation for an event, which one can test. Although using the scientific method is inherent to science, it is inadequate on its own to determine what science is. This is because it is relatively easy to apply the scientific method to disciplines such as physics and chemistry, but when it comes to disciplines like archaeology, psychology, and geology, the scientific method becomes less applicable as repeating experiments becomes more difficult.

These areas of study are still sciences, however. Consider archaeology—even though one cannot perform repeatable experiments, hypotheses may still be supported. For instance, archaeologists can hypothesize that an ancient culture existed based on finding a piece of pottery. They could make further hypotheses about various characteristics of this culture, which continued support or contradiction from other findings could show to be correct or false. A hypothesis may become a verified theory. A theory is a tested and confirmed explanation for observations or phenomena. Therefore, we may be better off defining science as fields of study that attempt to comprehend the nature of the universe.

Natural Sciences

What would you expect to see in a museum of natural sciences? Frogs? Plants? Dinosaur skeletons? Exhibits about how the brain functions? A planetarium? Gems and minerals? Maybe all of the above? Science includes such diverse fields as astronomy, biology, computer sciences, geology, logic, physics, chemistry, and mathematics (Figure 1.4). However, scientists consider those fields related to the physical world and its phenomena and processes to be natural sciences. Thus, a museum of natural sciences might contain any of the items listed above.

There is no complete agreement when it comes to defining what the natural sciences include, however. For some experts, the natural sciences are astronomy, biology, chemistry, earth science, and physics. Other scholars choose to divide natural sciences into life sciences, which study living things and include biology, and physical sciences, which study nonliving matter and include astronomy, geology, physics, and chemistry. Some disciplines, such as biophysics and biochemistry, build on both life and physical sciences and are interdisciplinary. Some refer to natural sciences as “hard science” because they rely on the use of quantitative data. Social sciences, which study society and human behavior, are more likely to use qualitative assessments to drive investigations and findings.

Not surprisingly, the natural science of biology has many branches or subdisciplines. Cell biologists study cell structure and function, while biologists who study anatomy investigate the structure of an entire organism. Those biologists studying physiology, however, focus on the internal functioning of an organism. Some areas of biology focus on only particular types of living things. For example, botanists explore plants, while zoologists specialize in animals.

Scientific Reasoning

One thing is common to all forms of science: an ultimate goal “to know.” Curiosity and inquiry are the driving forces for the development of science. Scientists seek to understand the world and the way it operates. To do this, they use two methods of logical thinking: inductive reasoning and deductive reasoning.

Inductive reasoning is a form of logical thinking that uses related observations to arrive at a general conclusion. This type of reasoning is common in descriptive science. A life scientist such as a biologist makes observations and records them. These data can be qualitative or quantitative, and one can supplement the raw data with drawings, pictures, photos, or videos. From many observations, the scientist can infer conclusions (inductions) based on evidence. Inductive reasoning involves formulating generalizations inferred from careful observation and analyzing a large amount of data. Brain studies provide an example. In this type of research, scientists observe many live brains while people are engaged in a specific activity, such as viewing images of food. The scientist then predicts that the part of the brain that “lights up” during this activity is the part controlling the response to the selected stimulus, in this case, images of food. Excess absorption of radioactive sugar derivatives by active areas of the brain causes the various areas to “light up,” and scientists use a scanner to observe the resultant increase in radioactivity. Then, researchers can stimulate that part of the brain to see if similar responses result.

Deductive reasoning or deduction is the type of logic used in hypothesis-based science. In deductive reasoning, the pattern of thinking moves in the opposite direction as compared to inductive reasoning. Deductive reasoning is a form of logical thinking that uses a general principle or law to predict specific results. From those general principles, a scientist can deduce and predict the specific results that would be valid as long as the general principles are valid. Studies in climate change can illustrate this type of reasoning. For example, scientists may predict that if the climate becomes warmer in a particular region, then the distribution of plants and animals should change.

Both types of logical thinking are related to the two main pathways of scientific study: descriptive science and hypothesis-based science. Descriptive (or discovery) science , which is usually inductive, aims to observe, explore, and discover, while hypothesis-based science , which is usually deductive, begins with a specific question or problem and a potential answer or solution that one can test. The boundary between these two forms of study is often blurred, and most scientific endeavors combine both approaches. The fuzzy boundary becomes apparent when thinking about how easily observation can lead to specific questions. For example, a gentleman in the 1940s observed that the burr seeds that stuck to his clothes and his dog’s fur had a tiny hook structure. On closer inspection, he discovered that the burrs’ gripping device was more reliable than a zipper. He eventually experimented to find the best material that acted similarly, and produced the hook-and-loop fastener popularly known today as Velcro. Descriptive science and hypothesis-based science are in continuous dialogue.

The Scientific Method

Biologists study the living world by posing questions about it and seeking science-based responses. Known as the scientific method, this approach is common to other sciences as well. The scientific method was used even in ancient times, but England’s Sir Francis Bacon (1561–1626) first documented it (Figure 1.5). He set up inductive methods for scientific inquiry. The scientific method is not used only by biologists; researchers from almost all fields of study can apply it as a logical, rational problem-solving method.

The scientific process typically starts with an observation (often a problem to solve) that leads to a question. Let’s think about a simple problem that starts with an observation and apply the scientific method to solve the problem. One Monday morning, a student arrives at class and quickly discovers that the classroom is too warm. That is an observation that also describes a problem: the classroom is too warm. The student then asks a question: “Why is the classroom so warm?”

Proposing a Hypothesis

Recall that a hypothesis is a suggested explanation that one can test. To solve a problem, one can propose several hypotheses. For example, one hypothesis might be, “The classroom is warm because no one turned on the air conditioning.” However, there could be other responses to the question, and therefore one may propose other hypotheses. A second hypothesis might be, “The classroom is warm because there is a power failure, and so the air conditioning doesn’t work.”

Once a hypothesis has been selected, the student can make a prediction. A prediction is similar to a hypothesis, but it typically has the format “If . . . then . . . .” For example, the prediction for the first hypothesis might be, “If the student turns on the air conditioning, then the classroom will no longer be too warm.”

Testing a Hypothesis

A valid hypothesis must be testable. It should also be falsifiable, meaning that experimental results can disprove it. Importantly, science does not claim to “prove” anything, because scientific understandings are always subject to revision in light of further information. This openness to disproving ideas is what distinguishes sciences from non-sciences. The existence of the supernatural, for instance, is neither testable nor falsifiable.

To test a hypothesis, a researcher conducts one or more experiments designed to eliminate one or more of the hypotheses. Each experiment will have one or more variables and one or more controls. A variable is any part of the experiment that can vary or change during the experiment. The control group contains every feature of the experimental group except that it is not given the manipulation the researcher hypothesizes is responsible for the effect. Therefore, if the experimental group's results differ from the control group's, the difference must be due to the hypothesized manipulation rather than some outside factor. Look for the variables and controls in the examples that follow.

To test the first hypothesis, the student would find out whether the air conditioning is on. If the air conditioning is turned on but does not work, there must be another reason, and the student should reject this hypothesis. To test the second hypothesis, the student could check whether the lights in the classroom are functional. If so, there is no power failure, and the student should reject this hypothesis. The student should test each hypothesis by carrying out appropriate experiments. Be aware that rejecting one hypothesis does not determine whether or not one can accept the other hypotheses; it simply eliminates one hypothesis that is not valid (Figure 1.6). Using the scientific method, the student rejects the hypotheses that are inconsistent with experimental data.
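The elimination workflow just described is, at heart, a simple algorithm: pair each hypothesis with a testable prediction, check the prediction against observation, and reject any hypothesis whose prediction fails. Here is a minimal sketch in Python, with the observations and test functions invented purely for this warm-classroom illustration:

    # Illustrative sketch of hypothesis elimination for the warm-classroom
    # example. The observed facts below are invented for demonstration.
    observations = {
        "air_conditioning_on": True,  # the student finds the AC switch is on
        "lights_work": True,          # the lights work, so there is no power failure
    }

    # Each hypothesis survives only if the observations are consistent with it.
    hypotheses = {
        "No one turned on the air conditioning":
            lambda obs: not obs["air_conditioning_on"],
        "A power failure disabled the air conditioning":
            lambda obs: not obs["lights_work"],
    }

    for hypothesis, consistent_with in hypotheses.items():
        verdict = "retain for further testing" if consistent_with(observations) else "reject"
        print(f"{verdict}: {hypothesis}")

As in the text, rejecting both hypotheses here confirms nothing else; it only eliminates explanations that conflict with the observations.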

While this “warm classroom” example is based on observational results, other hypotheses and experiments might have clearer controls. For instance, a student might attend class on Monday and realize she had difficulty concentrating on the lecture. One hypothesis to explain this occurrence might be, “When I eat breakfast before class, I am better able to pay attention.” The student could then design an experiment with a control to test this hypothesis.

In hypothesis-based science, researchers predict specific results from a general premise. We call this type of reasoning deductive reasoning: deduction proceeds from the general to the particular. However, the reverse of the process is also possible: sometimes, scientists reach a general conclusion from a number of specific observations. We call this type of reasoning inductive reasoning, and it proceeds from the particular to the general. Researchers often use inductive and deductive reasoning in tandem to advance scientific knowledge (Figure 1.7). In recent years, a new approach to testing hypotheses has developed as a result of the exponential growth of data deposited in various databases. Using computer algorithms and statistical analyses of these data, the new field of so-called “data research” (also referred to as “in silico” research) provides new methods of data analysis and interpretation. This will increase the demand for specialists in both biology and computer science, a promising career opportunity.
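As a toy illustration of such in-silico work, the sketch below uses the Python library pandas to check a simple expectation against a small, invented table of expression records (the gene names are real, but the table and its values are made up for this example):

    import pandas as pd

    # Invented records standing in for data pulled from a public database.
    records = pd.DataFrame({
        "gene":       ["BRCA1", "BRCA1", "TP53", "TP53"],
        "tissue":     ["tumor", "normal", "tumor", "normal"],
        "expression": [8.2, 3.1, 9.4, 4.0],
    })

    # Expectation to check: average expression is higher in tumor tissue.
    by_tissue = records.groupby("tissue")["expression"].mean()
    print(by_tissue)
    print("Higher in tumor:", by_tissue["tumor"] > by_tissue["normal"])

A real analysis would query thousands of deposited records and apply formal statistics, but the workflow, retrieving stored data, summarizing it, and comparing the summary against a hypothesis, is the same.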

Visual Connection

In the example below, the scientific method is used to solve an everyday problem. Match the scientific method steps (numbered items) with the process of solving the everyday problem (lettered items). Based on the results of the experiment, is the hypothesis correct? If it is incorrect, propose some alternative hypotheses.

Scientific method steps:

1. Observation
2. Question
3. Hypothesis (answer)
4. Prediction
5. Experiment
6. Result

Everyday problem:

a. There is something wrong with the electrical outlet.
b. If something is wrong with the outlet, my coffeemaker also won’t work when plugged into it.
c. My toaster doesn’t toast my bread.
d. I plug my coffee maker into the outlet.
e. My coffeemaker works.
f. Why doesn’t my toaster work?

Decide if each of the following is an example of inductive or deductive reasoning.

  • All flying birds and insects have wings. Birds and insects flap their wings as they move through the air. Therefore, wings enable flight.
  • Insects generally survive mild winters better than harsh ones. Therefore, insect pests will become more problematic if global temperatures increase.
  • Chromosomes, the carriers of DNA, are distributed evenly between the daughter cells during cell division. Therefore, each daughter cell will have the same chromosome set as the mother cell.
  • Animals as diverse as humans, insects, and wolves all exhibit social behavior. Therefore, social behavior must have an evolutionary advantage.

The scientific method may seem too rigid and structured. It is important to keep in mind that, although scientists often follow this sequence, there is flexibility. Sometimes an experiment leads to conclusions that favor a change in approach. Often, an experiment brings entirely new scientific questions to the puzzle. Many times, science does not operate in a linear fashion. Instead, scientists continually draw inferences and make generalizations, finding patterns as their research proceeds. Scientific reasoning is more complex than the scientific method alone suggests. Notice, too, that we can apply the scientific method to solving problems that aren’t necessarily scientific in nature.

Two Types of Science: Basic Science and Applied Science

The scientific community has been debating for the last few decades about the value of different types of science. Is it valuable to pursue science for the sake of simply gaining knowledge, or does scientific knowledge only have worth if we can apply it to solving a specific problem or to bettering our lives? This question focuses on the differences between two types of science: basic science and applied science.

Basic science or “pure” science seeks to expand knowledge regardless of the short-term application of that knowledge. It is not focused on developing a product or a service of immediate public or commercial value. The immediate goal of basic science is knowledge for knowledge’s sake, although this does not mean that it will never result in a practical application.

In contrast, applied science, or “technology,” aims to use science to solve real-world problems, making it possible, for example, to improve a crop yield, find a cure for a particular disease, or save animals threatened by a natural disaster (Figure 1.8). In applied science, the problem is usually defined for the researcher.

Some individuals may perceive applied science as “useful” and basic science as “useless.” A question these people might pose to a scientist advocating knowledge acquisition would be, “What for?” However, a careful look at the history of science reveals that basic knowledge has resulted in many remarkable applications of great value. Many scientists think that a basic understanding of science is necessary before researchers develop an application; therefore, applied science relies on the results that researchers generate through basic science. Other scientists think that it is time to move on from basic science in order to find solutions to actual problems. Both approaches are valid. It is true that there are problems that demand immediate attention; however, scientists would find few solutions without the help of the wide knowledge foundation that basic science generates.

One example of how basic and applied science can work together to solve practical problems occurred after the discovery of DNA structure led to an understanding of the molecular mechanisms governing DNA replication. DNA strands, unique in every human, are in our cells, where they provide the instructions necessary for life. When DNA replicates, it produces new copies of itself, shortly before a cell divides. Understanding DNA replication mechanisms enabled scientists to develop laboratory techniques that researchers now use to identify genetic diseases, pinpoint individuals who were at a crime scene, and determine paternity. Without basic science, it is unlikely that applied science could exist.

Another example of the link between basic and applied research is the Human Genome Project, a study in which researchers analyzed and mapped each human chromosome to determine the precise sequence of DNA subunits and each gene’s exact location. (The gene is the basic unit of heredity represented by a specific DNA segment that codes for a functional molecule. An individual’s complete collection of genes is their genome.) Researchers have studied other, less complex organisms as part of this project in order to gain a better understanding of human chromosomes. The Human Genome Project (Figure 1.9) relied on basic research with simple organisms and, later, with the human genome. An important end goal eventually became using the data for applied research, seeking cures and early diagnoses for genetically related diseases.

While scientists usually carefully plan research efforts in both basic science and applied science, note that some discoveries are made by serendipity, that is, by means of a fortunate accident or a lucky surprise. Scottish biologist Alexander Fleming discovered penicillin when he accidentally left a petri dish of Staphylococcus bacteria open. An unwanted mold grew on the dish, killing the bacteria. Fleming’s curiosity to investigate the reason behind the bacterial death, followed by his experiments, led to the discovery of the antibiotic penicillin, which is produced by the fungus Penicillium. Even in the highly organized world of science, luck—when combined with an observant, curious mind—can lead to unexpected breakthroughs.

Reporting Scientific Work

Whether scientific research is basic science or applied science, scientists must share their findings in order for other researchers to expand and build upon their discoveries. Collaboration with other scientists—when planning, conducting, and analyzing results—is important for scientific research. For this reason, communicating with peers and disseminating results are important aspects of a scientist’s work. Scientists can share results by presenting them at a scientific meeting or conference, but this approach can reach only the select few who are present. Instead, most scientists present their results in peer-reviewed manuscripts that are published in scientific journals. Peer-reviewed manuscripts are scientific papers that a scientist’s colleagues or peers review. These colleagues are qualified individuals, often experts in the same research area, who judge whether or not the scientist’s work is suitable for publication. The process of peer review helps to ensure that the research in a scientific paper or grant proposal is original, significant, logical, and thorough. Grant proposals, which are requests for research funding, are also subject to peer review. Scientists publish their work so other scientists can reproduce their experiments under similar or different conditions to expand on the findings.

A scientific paper is very different from creative writing. Although creativity is required to design experiments, there are fixed guidelines when it comes to presenting scientific results. First, scientific writing must be brief, concise, and accurate. A scientific paper needs to be succinct but detailed enough to allow peers to reproduce the experiments.

The scientific paper consists of several specific sections—introduction, materials and methods, results, and discussion. This structure is sometimes called the “IMRaD” format. There are usually acknowledgment and reference sections as well as an abstract (a concise summary) at the beginning of the paper. There might be additional sections depending on the type of paper and the journal where it will be published. For example, some review papers require an outline.

The introduction starts with brief, but broad, background information about what is known in the field. A good introduction also gives the rationale for the work: it justifies the work carried out and, at its end, briefly presents the hypothesis or research question driving the research. The introduction refers to the published scientific work of others and therefore requires citations following the style of the journal. Using the work or ideas of others without proper citation is plagiarism.

The materials and methods section includes a complete and accurate description of the substances the researchers use, and the methods and techniques they use to gather data. The description should be thorough enough to allow another researcher to repeat the experiment and obtain similar results, but it does not have to be verbose. This section will also include information on how the researchers made measurements and the types of calculations and statistical analyses they used to examine raw data. Although the materials and methods section gives an accurate description of the experiments, it does not discuss them.

Some journals require a results section followed by a discussion section, but it is more common to combine both. If the journal does not allow combining both sections, the results section simply narrates the findings without any further interpretation. The researchers present results with tables or graphs, but they do not present duplicate information. In the discussion section, the researchers will interpret the results, describe how variables may be related, and attempt to explain the observations. It is indispensable to conduct an extensive literature search to put the results in the context of previously published scientific research. Therefore, researchers include proper citations in this section as well.

Finally, the conclusion section summarizes the importance of the experimental findings. While the scientific paper almost certainly answers one or more scientific questions that the researchers stated, any good research should lead to more questions. Therefore, a well-done scientific paper allows the researchers and others to continue and expand on the findings.

Review articles do not follow the IMRaD format because they do not present original scientific findings, or primary literature. Instead, they summarize and comment on findings that were published as primary literature and typically include extensive reference sections.

Scientific Ethics

Scientists must ensure that their efforts do not cause undue damage to humans, animals, or the environment. They must also ensure that their research and communications are free of bias and that they properly balance financial, legal, safety, replicability, and other considerations. All scientists, and many people in other fields, have these ethical obligations, but those in the life sciences have a particular obligation because their research may involve people or other living things. Bioethics is thus an important and continually evolving field, in which researchers collaborate with other thinkers and organizations. They work to define guidelines for current practice and also continually consider new developments and emerging technologies in order to form answers for the years and decades to come.

For example, bioethicists may examine the implications of gene-editing technologies, including the ability to create organisms that may displace others in the environment, as well as the ability to “design” human beings. In that effort, ethicists will likely seek to balance positive outcomes, such as improved therapies or prevention of certain illnesses, against negative ones.

Unfortunately, the emergence of bioethics as a field came after a number of clearly unethical practices, in which biologists did not treat research subjects with dignity and in some cases did them harm. In the Tuskegee syphilis study, begun in 1932, 399 African American men were diagnosed with syphilis but were never informed that they had the disease, leaving them to live with and pass on the illness to others. Doctors even withheld proven medications because the goal of the study was to understand the impact of untreated syphilis on Black men.

While the decisions made in the Tuskegee study are unjustifiable, some decisions are genuinely difficult to make. Bioethicists work to establish moral and dignifying approaches before such decisions come to pass. For example, doctors rely on artificial intelligence and robotics for medical diagnosis and treatment; in the near future, even more responsibility will lie with machines. Who will be responsible for medical decisions? Who will explain to families if a procedure doesn’t go as planned? And, since such treatments will likely be expensive, who will decide who has access to them and who does not? These are all questions bioethicists seek to answer, and are the types of considerations that all scientific researchers take into account when designing and conducting studies.

Bioethics is not simple, and it often leaves scientists balancing benefits with harm. In this text and course, you will discuss medical discoveries, vaccines, and research that, at their core, have an ethical complexity or, in the view of many, an ethical lapse. In 1951, Henrietta Lacks, a 30-year-old African American woman, was diagnosed with cervical cancer at Johns Hopkins Hospital. Unique characteristics of her illness gave her cells the ability to divide continuously, essentially making them “immortal.” Without her knowledge or permission, researchers took samples of her cells and with them created the immortal HeLa cell line. These cells have contributed to major medical discoveries, including the polio vaccine. Many researchers mentioned in subsequent sections of the text relied on HeLa cells for at least a component of their work on cancer, AIDS, cell aging, and, very recently, COVID-19.

Today, harvesting tissue or organs from a dying patient without consent is not only considered unethical but illegal, regardless of whether such an act could save other patients’ lives. Is it ethical, then, for scientists to continue to use Lacks’s tissues for research, even though they were obtained illegally by today’s standards? Should Lacks be mentioned as a contributor to the research based on her cells, and should she be acknowledged in the several Nobel Prizes awarded for work that relied on them? Finally, should medical companies be obligated to pay Lacks’s family (which had financial difficulties) a portion of the billions of dollars in revenue earned through medicines that benefited from HeLa cell research? How would Henrietta Lacks feel about this? Because she was never asked, we will never know.

To avoid such situations, the role of ethics in scientific research is to ask such questions before, during, and after research or practice takes place, as well as to adhere to established professional principles and consider the dignity and safety of all organisms involved or affected by the work.

Module 1: Introduction to Biology

Experiments and Hypotheses

Learning Outcomes

  • Form a hypothesis and use it to design a scientific experiment

Now we’ll focus on the methods of scientific inquiry. Science often involves making observations and developing hypotheses. Experiments and further observations are often used to test the hypotheses.

A scientific experiment is a carefully organized procedure in which the scientist intervenes in a system to change something, then observes the result of the change. Scientific inquiry often involves doing experiments, though not always. For example, a scientist studying the mating behaviors of ladybugs might begin with detailed observations of ladybugs mating in their natural habitats. While this research may not be experimental, it is scientific: it involves careful and verifiable observation of the natural world. The same scientist might then treat some of the ladybugs with a hormone hypothesized to trigger mating and observe whether these ladybugs mated sooner or more often than untreated ones. This would qualify as an experiment because the scientist is now making a change in the system and observing the effects.

Forming a Hypothesis

When conducting scientific experiments, researchers develop hypotheses to guide experimental design. A hypothesis is a suggested explanation that is both testable and falsifiable. You must be able to test your hypothesis through observations and research, and it must be possible to prove your hypothesis false.

For example, Michael observes that maple trees lose their leaves in the fall. He might then propose a possible explanation for this observation: “cold weather causes maple trees to lose their leaves in the fall.” This statement is testable. He could grow maple trees in a warm enclosed environment such as a greenhouse and see if their leaves still dropped in the fall. The hypothesis is also falsifiable. If the leaves still dropped in the warm environment, then clearly temperature was not the main factor in causing maple leaves to drop in autumn.

In the practice questions below, you can practice recognizing scientific hypotheses. As you consider each statement, try to think as a scientist would: can I test this hypothesis with observations or experiments? Is the statement falsifiable? If the answer to either of these questions is “no,” the statement is not a valid scientific hypothesis.

Practice Questions

Determine whether each of the following statements is a scientific hypothesis.

Air pollution from automobile exhaust can trigger symptoms in people with asthma.

  • No. This statement is not testable or falsifiable.
  • No. This statement is not testable.
  • No. This statement is not falsifiable.
  • Yes. This statement is testable and falsifiable.

Natural disasters, such as tornadoes, are punishments for bad thoughts and behaviors.

a: No. This statement is not testable or falsifiable. “Bad thoughts and behaviors” are excessively vague and subjective variables that would be impossible to measure or agree upon in a reliable way. The statement might be “falsifiable” if you came up with a counterexample: a “wicked” place that was not punished by a natural disaster. But some would question whether the people in that place were really wicked, and others would continue to predict that a natural disaster was bound to strike that place at some point. There is no reason to suspect that people’s immoral behavior affects the weather unless you bring up the intervention of a supernatural being, making this idea even harder to test.

Testing a Vaccine

Let’s examine the scientific process by discussing an actual scientific experiment conducted by researchers at the University of Washington. These researchers investigated whether a vaccine could reduce the incidence of human papillomavirus (HPV) infection. The experimental process and results were published in an article titled “A controlled trial of a human papillomavirus type 16 vaccine.”

Preliminary observations made by the researchers who conducted the HPV experiment are listed below:

  • Human papillomavirus (HPV) is the most common sexually transmitted virus in the United States.
  • There are about 40 different types of HPV. A significant number of people who have HPV are unaware of it because many of these viruses cause no symptoms.
  • Some types of HPV can cause cervical cancer.
  • About 4,000 women a year die of cervical cancer in the United States.

Practice Question

Researchers have developed a potential vaccine against HPV and want to test it. What is the first testable hypothesis that the researchers should study?

  • HPV causes cervical cancer.
  • People should not have unprotected sex with many partners.
  • People who get the vaccine will not get HPV.
  • The HPV vaccine will protect people against cancer.

Experimental Design

You’ve successfully identified a hypothesis for the University of Washington’s study on HPV: People who get the HPV vaccine will not get HPV.

The next step is to design an experiment that will test this hypothesis. There are several important factors to consider when designing a scientific experiment. First, scientific experiments must have an experimental group. This is the group that receives the experimental treatment necessary to address the hypothesis.

The experimental group receives the vaccine, but how can we know if the vaccine made a difference? Many things may change HPV infection rates in a group of people over time. To clearly show that the vaccine was effective in helping the experimental group, we need to include in our study an otherwise similar control group that does not get the treatment. We can then compare the two groups and determine if the vaccine made a difference. The control group shows us what happens in the absence of the factor under study.

However, the control group cannot get “nothing.” Instead, the control group often receives a placebo. A placebo is a procedure that has no expected therapeutic effect—such as giving a person a sugar pill or a shot containing only plain saline solution with no drug. Scientific studies have shown that the “placebo effect” can alter experimental results because when individuals are told that they are or are not being treated, this knowledge can alter their actions or their emotions, which can then alter the results of the experiment.

Moreover, if the doctor knows which group a patient is in, this can also influence the results of the experiment. Without saying so directly, the doctor may show—through body language or other subtle cues—their views about whether the patient is likely to get well. These errors can then alter the patient’s experience and change the results of the experiment. Therefore, many clinical studies are “double blind.” In these studies, neither the doctor nor the patient knows which group the patient is in until all experimental results have been collected.

Both placebo treatments and double-blind procedures are designed to prevent bias. Bias is any systematic error that makes a particular experimental outcome more or less likely. Errors can happen in any experiment: people make mistakes in measurement, instruments fail, computer glitches can alter data. But most such errors are random and don’t favor one outcome over another. Bias, by contrast, is systematic: patients’ belief in a treatment, for example, can make it more likely to appear to “work.” Placebos and double-blind procedures level the playing field so that both groups of study subjects are treated equally and share similar beliefs about their treatment.
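To make the blinding concrete, here is a minimal, hypothetical sketch of double-blind assignment in Python: subjects are randomized into two coded groups, and the key linking codes to treatments stays sealed until the data are collected. The subject IDs and codes are invented; only the cohort size of 2,392 is taken from the study described below.

    import random

    subjects = [f"subject_{i:04d}" for i in range(2392)]  # cohort size from the study below

    random.seed(42)          # fixed seed so this illustration is reproducible
    random.shuffle(subjects)

    half = len(subjects) // 2
    # Only the coded label ("A" or "B") travels with each subject; neither
    # patients nor doctors see what the codes mean during the trial.
    allocation = {"A": subjects[:half], "B": subjects[half:]}
    sealed_key = {"A": "placebo", "B": "HPV vaccine"}  # opened only after data collection

    print(len(allocation["A"]), len(allocation["B"]))  # two equal groups of 1,196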

The scientists who are researching the effectiveness of the HPV vaccine will test their hypothesis by separating 2,392 young women into two groups: the control group and the experimental group. Answer the following questions about these two groups.

  • This group is given a placebo.
  • This group is deliberately infected with HPV.
  • This group is given nothing.
  • This group is given the HPV vaccine.
  • a: This group is given a placebo. A placebo will be a shot, just like the HPV vaccine, but it will have no active ingredient. It may change people’s thinking or behavior to have such a shot given to them, but it will not stimulate the immune systems of the subjects in the same way as predicted for the vaccine itself.
  • d: This group is given the HPV vaccine. The experimental group will receive the HPV vaccine and researchers will then be able to see if it works, when compared to the control group.

Experimental Variables

A variable is a characteristic of a subject (in this case, of a person in the study) that can vary over time or among individuals. Sometimes a variable takes the form of a category, such as male or female; often a variable can be measured precisely, such as body height. Ideally, only one variable is different between the control group and the experimental group in a scientific experiment. Otherwise, the researchers will not be able to determine which variable caused any differences seen in the results. For example, imagine that the people in the control group were, on average, much more sexually active than the people in the experimental group. If, at the end of the experiment, the control group had a higher rate of HPV infection, could you confidently determine why? Maybe the experimental subjects were protected by the vaccine, but maybe they were protected by their low level of sexual contact.

To avoid this situation, experimenters make sure that their subject groups are as similar as possible in all variables except for the variable that is being tested in the experiment. This variable, or factor, will be deliberately changed in the experimental group. The one variable that is different between the two groups is called the independent variable. An independent variable is known or hypothesized to cause some outcome. Imagine an educational researcher investigating the effectiveness of a new teaching strategy in a classroom. The experimental group receives the new teaching strategy, while the control group receives the traditional strategy. It is the teaching strategy that is the independent variable in this scenario. In an experiment, the independent variable is the variable that the scientist deliberately changes or imposes on the subjects.

Dependent variables are known or hypothesized consequences; they are the effects that result from changes or differences in an independent variable. In an experiment, the dependent variables are those that the scientist measures before, during, and particularly at the end of the experiment to see if they have changed as expected. The dependent variable must be stated so that it is clear how it will be observed or measured. Rather than comparing “learning” among students (a vague and difficult-to-measure concept), an educational researcher might choose to compare test scores, which are very specific and easy to measure.

In any real-world example, many, many variables might affect the outcome of an experiment, yet only one or a few independent variables can be tested. Other variables must be kept as similar as possible between the study groups; these are called control variables. For our educational research example, if the control group consisted only of people between the ages of 18 and 20 and the experimental group contained people between the ages of 30 and 35, we would not know whether it was the teaching strategy or the students’ ages that played the larger role in the results. To avoid this problem, a good study will be set up so that each group contains students with a similar age profile. In a well-designed educational research study, student age will be a control variable, along with other possibly important factors like gender, past educational achievement, and pre-existing knowledge of the subject area.
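One simple habit that keeps these three roles straight is to record them explicitly when planning a study. Here is a minimal Python sketch for the hypothetical teaching-strategy study above (every entry is illustrative, not data from a real study):

    # Planning record for the hypothetical teaching-strategy study above.
    experiment = {
        "independent_variable": "teaching strategy (new vs. traditional)",
        "dependent_variable": "test scores (specific and easy to measure)",
        "control_variables": [
            "student age profile",
            "gender balance",
            "past educational achievement",
            "pre-existing knowledge of the subject area",
        ],
    }

    for role, value in experiment.items():
        print(f"{role}: {value}")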

What is the independent variable in this experiment?

  • Sex (all of the subjects will be female)
  • Presence or absence of the HPV vaccine
  • Presence or absence of HPV (the virus)

List three control variables other than age.

What is the dependent variable in this experiment?

  • Sex (male or female)
  • Rates of HPV infection
  • Age (years)


Controlled Experiment


Controlled Experiment Definition

A controlled experiment is a scientific test in which the scientist directly manipulates a single variable at a time. The variable being tested is the independent variable, and it is adjusted to see its effects on the system being studied. The controlled variables are held constant to minimize or stabilize their effects on the subject. In biology, a controlled experiment often includes restricting the environment of the organism being studied. This is necessary to minimize the random effects of the environment and the many variables that exist in the wild.

In a controlled experiment, the study population is often divided into two groups. One group receives a change in a certain variable, while the other group receives a standard environment and conditions. The latter is referred to as the control group, and it allows comparison with the other group, known as the experimental group. Many types of controls exist in various experiments; they are designed to ensure that the experiment worked and to provide a basis for comparison. In science, results are accepted only if it can be shown that they are statistically significant. Statisticians can compare the observed difference between the control and experimental groups with the difference expected by chance to determine whether the experiment supports the hypothesis or whether the data arose by chance alone.
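That comparison is routinely made with a standard statistical test. Here is a minimal sketch using SciPy’s chi-square test of independence on invented counts from a hypothetical two-group experiment (the numbers are made up purely for illustration):

    from scipy.stats import chi2_contingency

    # Invented counts from a hypothetical controlled experiment:
    # rows are groups, columns are outcomes (improved, not improved).
    table = [
        [30, 20],   # experimental group
        [15, 35],   # control group
    ]

    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")

    # A p-value below a conventional threshold (commonly 0.05) is taken as
    # evidence that the observed difference is unlikely to be chance alone.
    if p_value < 0.05:
        print("The difference between groups is statistically significant.")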

Examples of Controlled Experiment

Music Preference in Dogs

Do dogs have a taste in music? You might have considered this, and science has too. Believe it or not, researchers have actually tested dogs’ reactions to various music genres. To set up a controlled experiment like this, scientists had to consider the many variables that affect each dog during testing. The environment the dog is in when listening to music, the volume of the music, the presence of humans, and even the temperature were all variables that the researchers had to consider.

In this case, the genre of the music was the independent variable. In other words, to see if dogs change their behavior in response to different kinds of music, a controlled experiment had to limit the influence of the other variables on the dogs. Usually, an experiment like this is carried out in the same location, with the same lighting, furniture, and conditions every time. This ensures that the dogs are not changing their behavior in response to the room. To make sure the dogs don’t react to humans or simply to the noise of the music, no one else can be in the room and the music must be played at the same volume for each genre. Scientists develop protocols for their experiments to ensure that many other variables are controlled.

This experiment could also split the dogs into two groups, testing music on only one group. The control group would be used to set a baseline behavior and to see how dogs behaved without music. The other group could then be observed, and the differences between the groups’ behavior could be analyzed. By rating behaviors on a quantitative scale, statistics can be used to analyze the difference in behavior and to see if it was large enough to be considered significant. This basic experiment was carried out on a large number of dogs, analyzing their behavior with a variety of different music genres. It was found that dogs show more relaxed and calm behaviors when a specific type of music plays. Come to find out, dogs enjoy reggae the most.

Scurvy in Sailors

In the early 1700s, the world was a rapidly expanding place. Ships were being built and sent all over the world, carrying thousands and thousands of sailors. These sailors were mostly fed the cheapest diets possible, not only because it decreased the costs of goods, but also because fresh food is very hard to keep at sea. Today, we understand that lack of essential vitamins and nutrients can lead to severe deficiencies that manifest as disease. One of these diseases is scurvy.

Scurvy is caused by a simple vitamin C deficiency, but its effects can be brutal. Although early symptoms include just a general feeling of weakness, continued lack of vitamin C leads to a breakdown of the blood cells and of the vessels that carry the blood. This results in blood leaking from the vessels, and eventually people bleed internally and die. Before controlled experiments were commonplace, a naval physician decided to tackle the problem of scurvy. James Lind, of the Royal Navy, came up with a simple controlled experiment to find the best cure for scurvy.

He separated sailors with scurvy into various groups. He subjected them to the same controlled conditions and gave them the same diet, except for one item. Each group was subjected to a different treatment or remedy, taken with their food. Some of these remedies included barley water, cider, and a regimen of oranges and lemons. This created the first clinical trial, or test of the effectiveness of certain treatments in a controlled experiment. Lind found that the oranges and lemons helped the sailors recover quickly, and the Royal Navy eventually made citrus a standard part of its sailors’ provisions.

Related Biology Terms

  • Field Experiment – An experiment conducted in nature, outside the bounds of total control.
  • Independent Variable – The thing in an experiment being changed or manipulated by the experimenter to see effects on the subject.
  • Controlled Variable – A thing that is normalized or standardized across an experiment, to remove it from having an effect on the subject being studied.
  • Control Group – A group of subjects in an experiment that receive no independent variable, or a normalized amount, to provide comparison.



Developmental Biology

Developmental biology is the science that investigates how a variety of interacting processes generate an organism’s heterogeneous shapes, size, and structural features that arise on the trajectory from embryo to adult, or more generally throughout a life cycle. It represents an exemplary area of contemporary experimental biology that focuses on phenomena that have puzzled natural philosophers and scientists for more than two millennia. Philosophers of biology have shown interest in developmental biology due to the potential relevance of development for understanding evolution, the theme of reductionism in genetic explanations, and via increased attention to the details of particular research programs, such as stem cell biology. Developmental biology displays a rich array of material and conceptual practices that can be analyzed to better understand the scientific reasoning exhibited in experimental life science. This entry briefly reviews some central phenomena of ontogeny and then explores four domains that represent some of the import and promise of conceptual reflection on the epistemology of developmental biology.

1. Overview

Developmental biology is the science that investigates how a variety of interacting processes generate an organism’s heterogeneous shapes, size, and structural features that arise on the trajectory from embryo to adult, or more generally throughout a life cycle (Love 2008; Minelli 2011a). It represents an exemplary area of contemporary experimental biology that focuses on phenomena that have puzzled natural philosophers and scientists for more than two millennia. How do the dynamic relations among seemingly homogeneous components in the early stages of an embryo produce a unified whole organism containing heterogeneous parts in the appropriate arrangement and with correct interconnections? More succinctly, how do we explain ontogeny (or, more archaically, generation)? In Generation of Animals, Aristotle provided the first systematic investigation of developmental phenomena and recognized key issues about the emergence of and relationships between hierarchically organized parts (e.g., bone and anatomical features containing bone), as well as the explanatory difficulty of determining how a morphological form is achieved reliably in offspring (e.g., the typical shape and structure of appendages in a particular species). Generation remained a poignant question throughout the early modern period and was explored by many key figures writing at the time, including William Harvey, René Descartes, Robert Boyle, Pierre Gassendi, Nicolas Malebranche, Gottfried Wilhelm Leibniz, Anne Conway, Immanuel Kant, and others (Smith 2006). Observations of life cycle transitions, such as metamorphosis, fed into these endeavors and led to striking conclusions, such as Leibniz’s denial of generation sensu stricto.

Animals and all other organized substances have no beginning … their apparent generation is only a development, a kind of augmentation … a transformation like any other, for instance like that of a caterpillar into a butterfly. (Smith 2011: 186–187)

A major theme that crystallized in this history of investigation is the distinction between epigenesis and preformation (see the entry on theories of biological development). Proponents of epigenesis claimed that heterogeneous, complex features of form emerge from homogeneous, less complex embryonic structures through interactive processes. Thus, an explanation of the ontogeny of these form features requires accounting for how the interactions occur. Proponents of preformation claimed that complex form preexists in the embryo and “unfolds” via ordinary growth processes. An adequate explanation involves detailing how growth occurs. Although preformation has a lighter explanatory burden in accounting for how form emerges during ontogeny (on the assumption that growth is easier to explain than process interactions), it also must address how the starting point of the next generation is formed with the requisite heterogeneous complex features. This was sometimes accomplished by embedding smaller and smaller miniatures ad infinitum inside the organism (Figure 1). Epigenetic perspectives were often dependent on forms of teleological reasoning (see the entry on teleological notions in biology) to account for why interactions among homogeneous components eventually result in a complex, integrated whole organism. Though nothing prevents mixing features of these two outlooks in explaining different aspects of development, polarization into dichotomous positions has occurred frequently (Rose 1981; Smith 2006).

Figure 1: An early modern depiction of a tiny person inside of a sperm exemplifying preformationist views.

In the late 19th and early 20th century, the topic of development was salient in controversies surrounding vitalism, such as the disagreement between Wilhelm Roux and Hans Driesch over how to explain ontogeny (Maienschein 1991). Roux thought that a fertilized egg contains inherited elements that represent different organismal characteristics. During the process of cellular division, these elements become unequally distributed among daughter cells leading to distinct cell fates. Driesch, in contrast, held that each cell retained its full potential through division even though differentiation occurred. Although this issue is often understood in terms of the metaphysics of life (vitalism versus materialism), Driesch’s interpretation of development and the autonomy of an organism had epistemological dimensions (Maienschein 2000). The explanatory disagreement involved different experimental approaches and divergent views on the nature of differentiation in early ontogeny (e.g., to what degree cells are pre-specified). A familiar philosophical theme running through these discussions, both epistemological and metaphysical, is the status of reductionism in biology. Through the middle of the 20th century, embryology—the scientific discipline studying development—slowly transformed into developmental biology with a variety of reworked and recalcitrant elements (Berrill 1961). In conjunction with the issue of reductionism, a key aspect of this history is the molecularization of experimental (as opposed to comparative) embryology (Fraser and Harland 2000), with a concomitant emphasis on the explanatory power of genes (see the entry on gene and Section 3.1). This complex and fascinating history, including interrelations with medicine and reproductive technology, has been detailed elsewhere (see, e.g., Oppenheimer 1967; Horder et al. 1986; Hamburger 1988; Hopwood 2019; Maienschein 2014; Maienschein et al. 2005; Gilbert 1991; Embryo Project in Other Internet Resources).

Developmental biology has increasingly become an area of exploration for philosophy of biology due to the potential relevance of development for understanding evolution (Love 2015; Section 5), the theme of reductionism in biology and explanations from molecular genetics (Robert 2004; Rosenberg 2006; Section 3), and increased attention to the details of particular research programs, such as stem cell biology (Fagan 2013; Laplane 2016). However, it should not be forgotten that ontogeny was on the radar of philosophical scholars in the 20th century, as seen in Ernest Nagel’s treatment of hierarchical organization and reduction in the development of living systems (Nagel 1961: 432ff). For contemporary philosophy of science, developmental biology displays a rich array of material and conceptual practices that can be analyzed to better understand the scientific reasoning exhibited in experimental life science (see the entry on experiment in biology). After a brief review of some central phenomena of ontogeny, this entry explores four domains that represent some of the import and promise of conceptual reflection on the epistemology of developmental biology.

Developmental biology is the science that seeks to explain how the structure of organisms changes with time. Structure, which may also be called morphology or anatomy, encompasses the arrangement of parts, the number of parts, and the different types of parts. (Slack 2006: 6)

Most of the properties that developmental biologists attempt to explain are structural rather than functional. For example, a developmental biologist concentrates more on how tissue layers fold or how shape is generated than on what the folded tissue layers do or how the shape functions. The ontogeny of function, at all levels of organization, is an element of developmental biology, but it is often bracketed because of the predominance (both past and present) of questions surrounding the ontogeny of form or structure (Love 2008).

Textbooks (e.g., Gilbert 2010; Slack 2013; Wolpert et al. 2010) typically describe a canonical set of events surrounding the changing structures displayed during animal development. [1] The first of these is fertilization (in sexually reproducing species), where an already semi-organized egg merges with a sperm cell, followed by the fusion of their nuclei to achieve the appropriate complement of genetic material. Second, the fertilized egg undergoes several rounds of cleavage, which are mitotic divisions without cell growth that subdivide the zygote into many distinct cells (Figure 2). After many rounds of cleavage, this spherical conglomerate of cells (now called a blastula) begins to exhibit some specification of germ layers (endoderm, mesoderm, and ectoderm) and then proceeds to invaginate at one end, a complex process referred to as gastrulation that eventually yields a through-gut. All three germ layers, from which specific types of cells are derived (e.g., neural cells from ectoderm), become established during gastrulation or shortly after it completes. [2] Organogenesis refers to the production of tissues and organs through the interaction and rearrangement of cell groups. Events confined to distinct taxonomic groups include neurulation in chordates, whereas other events correlate with mode of development (metamorphosis from a larval to an adult stage) or individual trauma (regeneration of a limb).

[Image: 6-panel schematic (each panel shown in top and side view) of spiral cleavage in a marine snail. Panels A–E trace successive divisions from four macromeres (A–D) through the sequential formation of micromere quartets (1a–1d, 2a–2d, 3a–3d). Panel F pairs a lineage-colored embryo with a lineage diagram running from the zygote to the 32-cell stage along an axis labeled “developmental time”.]

Figure 2: An example of embryonic cleavage in marine snail embryos showing the fate of different cell lineages through developmental time.

Several key processes underlie these distinct developmental events and the resulting features of form that emerge (e.g., the through-gut formed subsequent to gastrulation or the heart formed during organogenesis). These are critical to the ontogeny of form and link directly to major research questions in developmental biology (Section 2). First, cellular properties, such as shape, change during ontogeny. This is a function of differentiation, whereby cells adopt specific fates that include shape transformations (Figure 3). Second, regions of cells in the embryo are designated through arrangement and composition alterations that correspond to different axes in different parts of the embryo (e.g., dorsal-ventral, anterior-posterior, left-right, and proximal-distal). The successive establishment of these regions is referred to as pattern formation. Third, cells translocate and aggregate into layers (e.g., endoderm and ectoderm, followed by the mesoderm in many lineages) and later tissues (aggregations of differentiated cell types). Fourth, cells and tissues migrate and interact to produce new arrangements and shapes composed of multiple tissue layers with novel functions (i.e., organs). These last two sets of processes are usually termed morphogenesis (Davies 2005) and occur via many distinct mechanisms (Section 1.3). Fifth, there is growth in the size of different form features in the individual, remarkably obvious when comparing zygote to adult, although proportional change between different features (allometry) is also striking.
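
In quantitative terms, allometric growth is standardly captured by Huxley’s power law; this formulation is supplied here as familiar background rather than drawn from the entry itself:

\[ y = b\,x^{a} \quad\Longleftrightarrow\quad \log y = \log b + a \log x \]

where \(x\) and \(y\) are the sizes of two features (or of one feature and the whole body), \(b\) is a constant, and the exponent \(a\) measures relative growth rate: \(a = 1\) corresponds to isometric growth that preserves proportions, while \(a \neq 1\) produces the proportional change between features just described.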


Figure 3: A simple illustration of the kinds of differentiation related to the cellular components found in blood.

None of these processes occur in isolation and explanations of particular form features usually draw on several of them simultaneously, presuming other features that originated earlier in ontogeny by different instantiations and combinations of the processes. This sets a broad agenda for investigation: how do various iterations and combinations of these processes generate form features during ontogeny? Consider the concrete example of vertebrate cardiogenesis. How does the vertebrate heart, with its internal and external shape and structure, originate during ontogeny (Harvey 2002)? How does the heart come to exhibit left/right asymmetry in the body cavity? What causes cells to adopt a muscle cell fate or certain tissues to interact in the prospective region of the heart? How do muscle cells migrate to, aggregate in, and differentiate at the correct location? How does the interior of the heart adopt a particular tubular structure with various chambers (which differs among vertebrate species)? How does the heart grow at a particular rate and achieve a specific size? Solutions relevant to explaining the ontogeny of form characterize causal factors that account for how different processes occur and yield various outcomes (Section 3).

A developmental mechanism is a mechanism or process that operates during ontogeny (see McManus 2012 for discussion). At least two different types of developmental mechanisms can be distinguished (Love 2017a): molecular genetic mechanisms (signaling or gene regulatory networks; Section 3.1) and cellular-physical mechanisms (cell migration or epithelial invagination; Section 3.2). Philosophical explorations of mechanisms in science and mechanistic explanation have grown dramatically over the past two decades (Craver and Darden 2013; Glennan and Illari 2017; Illari and Williamson 2012). Among different accounts of scientific mechanisms, four shared elements are discernible: (1) what a mechanism is for, (2) its constituents, (3) its organization, and (4) the spatiotemporal context of its operation. Developmental explanations seek to characterize these four elements through various experimental interventions. Together these elements provide a template for characterizing the two different types of developmental mechanisms.

A well-established molecular genetic mechanism is the initial formation of segments in Drosophila due to the segment polarity network of gene expression (Wolpert et al. 2010: 70–81; Damen 2007). By Stage 8 of development (~3 hours post-fertilization), Drosophila embryos have 14 parasegment units that were defined by pair-rule gene expression in earlier stages. The transcription factor Engrailed accumulates in the anterior portion of each parasegment, which corresponds to the posterior boundary of each nascent segment. This initiates a cascade of gene activity that defines the boundaries of each compartment of cells that will eventually become a segment. One element of this activity is the expression of hedgehog, which encodes a secreted signaling protein, in the same cells where Engrailed has accumulated. Hedgehog signaling, in turn, activates the expression of wingless, another secreted signaling protein, in the adjacent anterior cells; Wingless signaling back across the boundary maintains the expression of both engrailed and hedgehog in a feedback loop so that segment boundaries persist (Figure 4). The segment polarity network exhibits all four of the shared elements of a mechanism. It is constituted by a number of parts (e.g., Engrailed, Wingless, Hedgehog) and activities or component operations (e.g., signaling proteins bind receptors, transcription factors bind to DNA and initiate gene expression), which are organized into patterns of interacting relationships (feedback loops, signaling pathways) within a spatiotemporal context (in parasegments of the Drosophila embryo, ~3 hours post-fertilization) so as to produce a specific behavior or phenomenon (a set of distinct segments with well-defined boundaries).
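
The feedback structure just described can be made concrete with a minimal sketch. The following toy boolean model is illustrative only (a one-dimensional row of cells with synchronous updates and drastically simplified rules), not the published gene regulatory network:

    # Toy boolean sketch of the segment polarity feedback loop (illustrative
    # rules only): en cells co-express hh; Hh from a cell maintains wg in its
    # anterior neighbor; Wg from a cell maintains en (and hence hh) in its
    # posterior neighbor.
    N = 8  # a row of cells spanning two nascent segment boundaries

    # initial stripes set up by pair-rule gene expression in earlier stages
    wg = [i % 4 == 2 for i in range(N)]
    en = [i % 4 == 3 for i in range(N)]
    hh = list(en)  # engrailed activates hedgehog in the same cells

    def step(wg, en, hh):
        # Wg secreted by cell i-1 maintains en in cell i;
        # Hh secreted by cell i+1 maintains wg in cell i.
        new_en = [i > 0 and wg[i - 1] for i in range(N)]
        new_hh = list(new_en)
        new_wg = [i < N - 1 and hh[i + 1] for i in range(N)]
        return new_wg, new_en, new_hh

    for _ in range(5):
        wg, en, hh = step(wg, en, hh)

    print("wg:", "".join("+" if x else "." for x in wg))  # ..+...+.
    print("en:", "".join("+" if x else "." for x in en))  # ...+...+

Iterating the update rule leaves the initial wg and en stripes unchanged: each stripe maintains the other across the boundary, which is the persistence property the mechanism description highlights.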


Figure 4: Wingless and Hedgehog reciprocal signaling during segmentation of Drosophila embryos.

Next, consider the cellular-physical mechanism of branching morphogenesis, which refers to combinations of cellular proliferation and movement that yield branch-like structures in kidneys, lungs, glands, or blood vessels. There are many types of branching morphogenesis, but one primary mechanism is epithelial folding, which involves cells invaginating at different locations on a structure to yield branches (Davies 2013: ch. 20). Different cellular-physical mechanisms can produce invaginations that lead to branching structures (Varner and Nelson 2014): the constriction of one end of a subset of columnar cells in an epithelium (“apical constriction”); increased cell proliferation of one epithelial sheet in relation to another (“differential growth”); and compression of an epithelium leading to periodic invaginations (“mechanical buckling”). Because different mechanisms can lead to the same morphological outcome, it can be difficult to discern which mechanism is operating in a given embryonic context. Branching morphogenesis also exhibits all four of the shared elements of a mechanism. The parts are cells and tissues, with activities or component operations (e.g., apical constriction, differential growth, mechanical buckling) organized into patterns of interacting relationships (apical constriction leading to epithelial invagination) within a spatiotemporal context (in tracheal precursors within the Drosophila embryo around Stages 7 and 8). This organization produces a specific behavior or phenomenon (a set of branching structures—the trachea).

Once these types of developmental mechanisms have been distinguished, several conceptual issues become salient. The first pertains to how the two types of mechanisms are interrelated during ontogeny, and how different investigative approaches do or do not successfully provide integrated accounts of them (Section 3.3). A second is their distinct patterns of generality. Molecular genetic mechanisms are widely conserved across phylogenetically disparate taxa as a consequence of evolutionary descent, whereas cellular-physical mechanisms are widely instantiated as a consequence of shared physical organization but not due to evolutionary descent (Love 2017a). The divergence of these patterns has prompted explicit epistemological reflection by developmental biologists. [3]

2. The Epistemological Organization of Developmental Biology

One recurring theme in the long history of investigations into development is that explaining the ontogeny of form consists of many interrelated questions about diverse phenomena (Section 1.2). Sometimes philosophers have attempted to compress these questions into one broad problem.

The real question concerning metazoan ontogeny is just how a single cell gives rise to the requisite number of differentiated cell lineages with all the right inductive developmental interactions required to reproduce the form of the mature organism. (Moss 2002: 97)

The central problem of developmental biology is to understand how a relatively simple and homogeneous cellular mass can differentiate into a relatively complex and heterogeneous organism closely resembling its progenitor(s) in relevant aspects. (Robert 2004: 1)

This language is not necessarily incorrect but can lead to skewed interpretations. For example, Philip Kitcher has argued that:

In contemporary developmental biology, there is … uncertainty about how to focus the big, vague question, How do organisms develop? (Kitcher 1993: 115)

This is simply false. While it is true that these questions have been manifested with differing frequency and vigor through history, and that the ability to answer them (as well as the nature of the questions themselves) has been contingent on different research strategies and methods, the discipline has not been saddled with a single, unfocused central problem. But scrutinizing the structure of developmental biology’s questions is not merely an exercise in clarification. It is crucial for understanding how the science of developmental biology is organized.

Although it is common in philosophy to associate sciences with theories, such that the individuation of a science is dependent on a constitutive theory or group of models, it is uncommon to find presentations of developmental biology that make reference to a theory of development (see discussion in Minelli and Pradeu 2014). Instead, we find references to families of approaches (developmental genetics, experimental embryology, cell biology, and molecular biology) or catalogues of “key molecular components” (transcription factor families, inducing factor families, cytoskeleton or cell adhesion molecules, and extracellular matrix components). No standard theory or group of models provides theoretical scaffolding in the major textbooks (e.g., Slack 2013; Wolpert et al. 2010; Gilbert 2010). The absence of any reference to a constitutive theory of development or some set of core explanatory models is prima facie puzzling. Three interpretations of this situation are possible: (a) despite the lack of reference to theories, one can reconstruct a theory (or theories) of developmental biology out of the relevant discourse (e.g., multiple allied molecular models); (b) the lack of reference to theories indicates an immaturity in developmental biology because mature sciences always have systematic theories; and (c) the lack of reference to theories should be taken at face value.

Developmental biology is not an immature science, groping about for some way to explain its phenomena: “some of the basic processes and mechanisms of embryonic development are now quite well understood” (Slack 2013: 7). The impetus for this type of interpretation arises out of commitments to a conception of mature science that presumes theories are abstract systems with a small set of laws or core principles (see the entry on the structure of scientific theories). On the other hand, holding that developmental biology already has a theory costumed in a different guise—not referred to as such by developmental biologists—is a possible interpretation. It arises out of a view that sciences must have theories, which has been expanded to allow for different understandings of theory structure, such as constellations of models without laws, even though the assumption is that theory still plays a similar organizing role in guiding research. However, this assumption should be challenged and rejected on methodological grounds in the case of developmental biology. An analysis of the reasoning in a science should exhibit epistemic transparency and not postulate “hidden” reasoning structure (Love 2012). This criterion is based on the premise that the basis of successes in scientific inquiry must be available to those engaged in its practice (i.e., scientists). If we postulate hidden structure not present in scientific discourse to account for inductive inference, explanation, or other forms of reasoning, then we risk obscuring how scientists themselves access this structure to evaluate it (Woodward 2003: ch. 4). The successes of developmental biology would become mysterious when viewed from the vantage point of its participants.

Epistemic transparency demands a descriptive correspondence between philosophical accounts of science and scientific practice. This does not mean that every claim made by any scientist should be taken with the same credence. The ruling concern is with pervasive features of practice. The problem with assuming laws are required for explanation is their relative absence from a variety of successful sciences that routinely offer explanations, not that no scientist ever appeals to laws as explanatory. Pervasive features of scientific practice should be prominent in philosophical accounts of sciences. Thus, it is not surprising that the desire for a theory can be found among some developmental biologists: “Developing a theory is of utmost importance for any discipline” (Sommer 2009: 417). But the fact that these calls are rare means we should not assume theories are actually needed to govern and organize inquiry within the domain. [4]

It was once thought that each science must have laws in order to offer explanations (see the entry on scientific explanations), but now this is seen as unnecessary (Giere 1999; Woodward 2003). The expectation that a science have a theory to accomplish the task of organizing and guiding inquiry is of similar vintage. It derives from an intuitive expectation of what counts as a mature science in the first place. On this view, if we find empirically successful and coherent traditions of research without a systematic theoretical framework providing guidance, then the science cannot yet be mature. One might shrug off these quasi-positivist appeals to maturity by invoking more flexible conceptions of theory and theory structure. But why retain the expectation that theories should accomplish the same epistemic tasks? It is a preconception about knowledge structure that is not plausible in light of the diversity of research practices found across the sciences. The few scientists who favor this philosophical response have different motivations. Instead of maturity, other reasons are salient, such as guidance in the face of a welter of biochemical detail or the need to forge a synthesis between evolution and development. [5]

Developmental biologists recognize that the “curse of detail” is one of the costs of developmental biology’s meteoric success over the past three decades: “The principal challenge today is that of exponentially increasing detail” (Slack 2013: ix). While something must provide organization and guidance to developmental biology, it need not be theories that accomplish the task. Calls for a synthesis of evolution and development often assume that having a developmental theory is a precondition for synthesis (Sommer 2009): “Our troubles … derive from our standing lack of an explicit theory of development” (Minelli 2011a: 4). However, this line of argument relies on the degree to which evolutionary theory exhibits the supposed structure to which developmental biologists should aspire. The actual practice associated with evolutionary theory indicates a more flexible framework with chameleon qualities that is responsively adjusted to the diverse investigative aims of evolutionary researchers (Love 2013). Therefore, it is not clear that evolutionary theory supplies the preferred template. A productive way forward is to relinquish the prior expectation that sciences must have theories of a certain kind to govern and guide their activity. Instead, sciences that display empirical success and fecundity should be studied to discover what features are responsible, without assuming that those features will be the same for all sciences: “Science need not be understood in these terms and, indeed, may be better understood in other terms” (Giere 1999: 4).

The criterion of epistemic transparency (Section 2.1) encourages an exploration of our third interpretive option—the lack of reference to theories should be taken at face value. Developmental biology is organized primarily by stable, broad domains of problems that correspond to abstract representations of major ontogenetic processes (differentiation, pattern formation, growth, and morphogenesis; Section 1.2). Yet how do we interpret the “theoretical” aspects of developmental biology (e.g., positional information models of pattern formation) and the utilization of theories from other domains (e.g., biochemistry)? One way is to distinguish between theory-informed science—using theoretical knowledge—and theory-directed science—having a theory that directs inquiry and organizes knowledge (Waters 2007b); developmental biology is theory-informed but not theory-directed. Theories need not be wholly absent from developmental biology but—when present—they play roles very different from standard philosophical expectations. Developmental biology uses theoretical knowledge from biochemistry when appealing to morphogen gradients to explain how segments are established, or from chemical thermodynamics when invoking reaction–diffusion mechanisms to explain pigmentation patterns. It also uses theoretical knowledge derived from within developmental biology, such as positional information models. Different kinds of theory inform developmental biology, but these do not organize research—they are not necessary to structure the knowledge and direct investigative activities. Developmental biologists are not focused on confirming and extending the theory of reaction–diffusion mechanisms, nor are they typically organizing their research around positional information. [6] This theoretical knowledge is used in building explanations but does not provide rails of guidance for how to proceed in a research program. All sciences may use theoretical knowledge, but this is not the same as all sciences having a theory providing direction and organization.
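
As a gloss on the kind of theoretical knowledge being borrowed, the reaction–diffusion mechanisms mentioned above are standardly written as a pair of coupled partial differential equations (the generic Turing form, given as background rather than quoted from this entry):

\[ \frac{\partial u}{\partial t} = f(u,v) + D_u \nabla^{2} u, \qquad \frac{\partial v}{\partial t} = g(u,v) + D_v \nabla^{2} v \]

where \(u\) and \(v\) are concentrations of an activator and an inhibitor, \(f\) and \(g\) their local reaction kinetics, and \(D_u\), \(D_v\) their diffusion constants; spatially periodic patterns such as pigmentation can arise when the inhibitor diffuses sufficiently faster than the activator (\(D_v \gg D_u\)). The point in the text stands: such equations are used to build explanations without serving as a theory that organizes the research program.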

Why think that problems provide organizational architecture for the epistemology of developmental biology? They are a pervasive feature of its reasoning practices, illustrated in textbooks that capture substantial community consensus about standards of explanation, experimental methods, essential concepts, and empirical content. Unlike evolutionary biology textbooks, which discuss the theory of natural selection, or economics textbooks, which present microeconomic theory, major developmental biology textbooks across several editions invoke no comparable kind of theory.

Jonathan Slack’s Essential Developmental Biology (Slack 2006, 2013) is organized around four main types of processes, also described as clustered groups of problems, which occur during embryonic development: regional specification (pattern formation), cell differentiation, morphogenesis, and growth. These broad clusters are then fleshed out along a standard timeline of early development, highlighting gametogenesis, fertilization, cleavage, gastrulation, and axis specification (see Section 1.2). Different experimental approaches (cell and molecular biology, developmental genetics, and experimental embryology) are utilized in a specific set of model organisms (see below, Section 4) to dissect the workings of these developmental phenomena. Subsequent chapters cover later aspects of development (e.g., organogenesis), with different systems treated in depth by tissue layer, differentiation and growth, or in relation to evolutionary questions (see below, Section 5). Throughout this presentation, no specific theory, set of hypotheses, or dominant model is invoked to organize these different domains of investigation. Instead, broad clusters of questions that reflect generally delineated processes (differentiation, specification, morphogenesis, and growth) set the agenda of research.

Scott Gilbert’s Developmental Biology exhibits a similar pattern (Gilbert 2000 [2003, 2006, 2010]). Developmental biology is constituted by two broad questions (“How does the fertilized egg give rise to the adult body? And how does that adult body produce yet another body?”), which can then be subdivided into further categories, such as differentiation, morphogenesis, growth, reproduction, regeneration, evolution, and environmental regulation. These questions can be parsed more analytically in terms of five variables: abstraction, variety, connectivity, temporality, and spatial composition. The values given to these variables structure the constellation of research questions within the broad problem agendas corresponding to generally delineated processes. For example, research questions oriented around events in zebrafish gastrulation are structured in a way that differs from the research questions oriented around vertebrate neural crest cell migration because they involve different values for the five variables: abstraction (zebrafish vs. vertebrates), temporality (earlier vs. later), spatial composition (tissue layer interactions vs. a distinctive population of cells), variety (epiboly vs. epithelium to mesenchyme transition), and connectivity (gut formation and endoderm vs. organogenesis and ectoderm/mesoderm). These configurations can be adjusted readily in response to shifts in the values for different variables (Love 2014). [7]

This anatomy of problems, with explicit epistemological structure derived from different values for these variables, operates to organize the science of development. Investigators from different disciplines can be working on the same problem but asking different questions that require distinct but complementary methodological resources. Knowledge and inquiry in developmental biology are intricately organized, just not by a central theory or group of models, and this erotetic organization is epistemologically accessible to the participating scientists. While theoretical knowledge, especially that drawn from molecular biological mechanisms (see the entry on molecular biology) and mathematical models (e.g., reaction–diffusion models), is ubiquitous (theory-informed), the clusters of problems that reappear across the textbooks and correspond to different types of processes provide the governing architecture (not theory-directed), which can be characterized explicitly according to the variables described. Further analysis of this problem anatomy is possible, including how it is displayed in regular research articles and not just textbooks, as well as in other areas of biology (see, e.g., Brigandt and Love 2012).

3. Explanatory Approaches to Development

Explanations in developmental biology are usually causal, though unlike standard mechanistic explanation there is a constant acquisition of new causal capacities (in terms of constituent entities, activities, and their organization) through development (McManus 2012; Parkkinen 2014). Although much work remains in characterizing different aspects of explanation in developmental biology, there is no doubt that a difference-making or manipulability conception of causation (see the entry on causation and manipulability) provides a core element of the reasoning (Woodward 2003; Strevens 2009; Waters 2007a). Genetic explanations of development (Section 3.1), similar to what is seen in molecular genetics, work by identifying changes in the expression of genes and interactions among their RNA and protein products that lead to changes in the properties of morphological features during ontogeny (e.g., shape or size), while holding a variety of contextual variables fixed. More recently, there has been growing interest in physical explanations of development (Section 3.2) that involve appeals to mechanical forces due to geometrical arrangements of mesoscale materials, such as fluid flow (Forgacs and Newman 2005). Researchers agree on the phenomena that need to be explained (Section 1.2 and Section 2.2), but differ on whether physical rules or genetic factors are more or less explanatory (Keller 2002). [8] The existence of two different types of causal explanations for developmental phenomena poses an additional question about how they might be combined into a more integrated explanatory framework (Section 3.3).
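
The difference-making schema lends itself to a compact illustration. The toy sketch below mimics the logic of an interventionist experiment, setting a putative cause to different values while holding contextual variables fixed and checking whether the outcome changes. All variable names and the response function are invented for the example:

    # Toy illustration of difference-making under intervention (hypothetical
    # variables): g = gene activity, c = background context, p = phenotype.
    import random

    def phenotype(g, c, noise=0.05):
        # invented response function standing in for a developmental outcome
        return 1.0 + 2.0 * g + 0.5 * c + random.gauss(0.0, noise)

    random.seed(0)
    c_fixed = 1.0  # contextual variables held fixed across interventions

    # intervene: set g "off" (0.0) and "on" (1.0), replicate, compare means
    p_off = sum(phenotype(0.0, c_fixed) for _ in range(100)) / 100
    p_on = sum(phenotype(1.0, c_fixed) for _ in range(100)) / 100
    print(f"mean phenotype with g off: {p_off:.2f}; with g on: {p_on:.2f}")
    # a stable difference between the means is the experimental signature
    # of g being a difference maker for p in this fixed context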

Many philosophers have turned to explanations of development over the past two decades in an effort to bolster or deflate claims about the causal power of genes (Keller 2002; Neumann-Held and Rehmann-Sutter 2006; Rosenberg 2006; Robert 2004; Waters 2007a). [9] Genetic explanations touch on the philosophical theme of reductionism and appear to constitute the bulk of the empirical success accruing to developmental biology over the past several decades. [10] Statements from developmental biologists reinforce this perspective:

Developmental biology … deals with the process by which the genes in the fertilized egg control cell behavior in the embryo and so determine its pattern, its form, and much of its behavior … differential gene activity controls development. (Wolpert et al. 1998: v, 15)

These types of statements are sometimes amplified in appeals to a genetic program for development.

[Elements of the genome] contain the sequence-specific code for development; and they determine the particular outcome of developmental processes, and thus the form of the animal produced by every embryo. … Development is the execution of the genetic program for construction of a given species of organism (Davidson 2006: 2, 16). [11]

At other times, statements concentrate on genetics as the primary locus of causation in ontogeny: “Developmental complexity is the direct output of the spatially specific expression of particular gene sets and it is at this level that we can address causality in development” (Davidson and Peter 2015: 2). Whether or not these statements can be substantiated has been the subject of intense debate. [12] The strongest claims about genetic programs or the genetic control of development have empirical and conceptual drawbacks that include an inattention to plasticity and the role of the environment, an ambiguity about the locus of causal agency, and a reliance on metaphors drawn from computer science (Gilbert and Epel 2009; Keller 2002; Moss 2002; Robert 2004). However, this leaves intact the difference-making principle of genetic explanation exhibited in molecular genetics (Waters 2007a), which yields narrower and more precise causal claims under controlled experimental conditions, and is applicable to diverse molecular entities that play causal roles during development, such as regulatory RNAs, proteins, and environmental signals. We can observe this briefly by reconsidering the example of vertebrate cardiogenesis (Section 1.2).

Are there problems with claiming that genes contain all of the information (see the entry on biological information) to form vertebrate hearts? Is there a genetic program in the DNA controlling heart development? Are genes the primary supplier and organizer of material resources for heart development, largely determining the phenotypic outcome? Existing studies of heart development have identified a role for fluid forces in specifying the internal form of the heart (Hove et al. 2003) and its left/right asymmetry (Nonaka et al. 2002). Biochemical gradients of extracellular calcium are responsible for activating the asymmetric expression of the regulatory gene Nodal (Raya et al. 2004) and inhibition of voltage gradients scrambles normal asymmetry establishment (Levin et al. 2002). Mechanical cues such as microenvironmental stiffness are crucial for key transitions from migratory cells into organized sheets during heart formation (Jackson et al. 2017). A number of genes are clearly difference makers in these processes (Asp et al. 2019; Srivastava 2006; Brand 2003; Olson 2006), but the conclusion that genes carry all the information needed to generate form features of the heart seems unwarranted. While it may be warranted empirically in some cases to privilege DNA sequence differences as causal factors in specific processes of ontogeny (Waters 2007a), such as hierarchically organized networks of genetic difference makers explaining tissue specification (Peter and Davidson 2011), the diversity of entities appealed to in molecular genetics and the extent of their individual and joint roles in specifying developmental outcomes implies that debates about the meaning, scope, and power of genetic explanations will continue (Griffiths and Stotz 2013). However, a shift away from genetic programs and genetic determinism to DNA, RNA, and proteins as difference makers that operate conjointly suggests that we conceptualize other causal factors in a similar way.

Fluid flow, as a physical force, is also a difference maker during the development of the heart, and ontogeny more generally, and developmental biologists appeal to physical difference makers, which are understood as factors in producing the morphological properties of developmental phenomena (Forgacs and Newman 2005). A physical causation approach was on display in the late 19th century work of Wilhelm His (Hopwood 1999, 2000; Pearson 2018) and especially visible in the early 20th century work of D’Arcy Thompson and others (Thompson 1992 [1942]; Keller 2002: ch. 2; Olby 1986). This occurred in the milieu of increasing attention to the chromosomal theory of inheritance and attempts to explore developmental phenomena via classical genetic methods (Morgan 1923, 1926, 1934). Thompson appealed to differential rates of growth and the constraints of geometrical relationships to explain how organismal morphology originates. Visual representations of abiotic, mechanical analogues provided the plausibility, such as the shape of liquid splashes or hanging drops for the cup and bell configurations of the free-swimming sexual stage of jellyfish. If physical forces generated specific morphologies in viscoelastic materials, then analogous morphologies in living species should be explained in terms of physical forces operating on the viscoelastic materials of the developing embryo. Yet morphogenetic processes that produce the shape and structure of morphology have been seen primarily, if not exclusively, in terms of genetics for the last half-century. Physical approaches moved into the background as molecular genetics approaches went from strength to strength (Fraser and Harland 2000).

The molecularization of experimental embryology is one of the most striking success stories of contemporary biology as genes and genetic interactions (e.g., in transcriptional networks and signaling pathways; see Section 1.3 ) were discovered to underlie specific details of differentiation, morphogenesis, pattern formation, and growth when structure originates during development. Genetic approaches predominate in contemporary developmental biology and physical modes of causation are often neglected. The frustration among researchers interested in physical causation during embryogenesis has been palpable.

To the molecular types, a cause is a molecule or a gene. To explain a phenomenon is to identify genes and characterize proteins without which the phenomenon will fail or be abnormal. A molecule is an explanation: a force is a description; to argue otherwise brings pity, at best (Albert Harris to John Trinkaus, 12 March 1996; Source: Marine Biological Laboratory Library Archives).

Despite this predominance of genetic explanatory approaches and the frustration among researchers utilizing other approaches, a groundswell of interest has been building around physical explanations of development, especially in terms of their integration with genetic explanations (Miller and Davidson 2013; Newman 2015). Some philosophers have argued that the biomechanical modeling of physical causal factors constitutes a rejection of certain forms of reductive explanation in biology (Green and Batterman 2017).

Thompson held that physical forces were explanatory but inadequate in isolation to account for the developmental origin of morphology; heredity (genetics) was also a necessary causal factor. [13] Yet Thompson was quick to highlight that mechanical modes of causation might be neglected in the midst of growing attention to heredity (genetics):

it is no less of an exaggeration if we tend to neglect these direct physical and mechanical modes of causation altogether, and to see in the characters of a bone merely the results of variation and of heredity. (Thompson 1992 [1942]: 1023)

Despite this latter form of exaggeration manifesting itself through much of the 20th century, an agenda to combine or integrate the two approaches is now explicit. [14]

There is no controversy about whether genetic and physical modes of causation are at work simultaneously:

both the physics and biochemical signaling pathways of the embryo contribute to the form of the organism. (Von Dassow et al. 2010: 1)

They are not competing causal explanations of the same phenomenon. Explanations should capture how their productive interactions yield developmental outcomes:

an increasing number of examples point to the existence of a reciprocal interplay between expression of some developmental genes and the mechanical forces that are associated with morphogenetic movements. (Brouzés and Farge 2004: 372)

Genetic causes can lead to physical causation and vice versa. Physical causation brings about genetic causation through mechanotransduction. Stretching, contraction, compression, fluid shear stress, and other physical dynamics are sensed by different molecular components inside and outside of cells that translate these environmental changes into biochemical signals (Hoffman et al. 2011; Wozniak and Chen 2009). Genetic causation brings about physical causation by creating different physical properties of cells and tissues through the presence, absence, or change in frequency of particular proteins. For example, different patterns of expression for cell adhesion molecules (e.g., cadherins) can lead to differential adhesion across epithelial sheets of tissue and thereby generate phase separations or compartments via surface tension variation (Newman and Bhat 2008). If these modes of causation are not competing, then how might one combine genetic and physical difference makers into an integrated causal explanation? How much explanatory unity can be achieved for this “reciprocal interplay”?
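
The differential adhesion example admits a compact quantitative gloss. On Steinberg’s differential adhesion hypothesis (supplied here as standard background, not as a claim of this entry), tissues behave like immiscible fluids: two cell populations A and B demix into separate compartments when the work of adhesion between unlike cells falls below the average of the self-adhesion terms,

\[ w_{AB} < \tfrac{1}{2}\,(w_{AA} + w_{BB}), \]

which corresponds to a positive interfacial tension between the two tissues. Differential cadherin expression is one molecular route to such inequalities.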

Finding philosophical models for the explanatory integration of genetics and physics remains an open question (Love 2017b). Apportioning causal responsibility in the sense of determining relative contributions (e.g., the composition of causal magnitudes among different physical forces in Newtonian mechanics) is problematic because this requires commensurability with respect to how causes produce their effects (Sober 1988). In the context of causation understood in terms of difference makers, the difficulty of integration is a variation on a problem in causal reasoning identified by John Stuart Mill and labeled the “intermixture of effects,” which involves multiple causes contributing in a blended fashion to yield an outcome.

This difficulty is most of all conspicuous in the case of physiological phenomena; it being seldom possible to separate the different agencies which collectively compose an organized body, without destroying the very phenomena which it is our object to investigate. (Mill 1843 [1974]: 456 [book 3, chapter 11, section 1, paragraph 7])

Careful statistical methodology in experiments can answer whether one type of difference maker accounts for more of the variation in the effect variable for a particular population. But a ranking of causal factors with respect to how much of a difference they made is not the same as combining two modes of causation into an integrated account. Another response is to dissolve the integration problem by reducing all of the causal interactions to one of the two distinct modes, thereby achieving a kind of explanatory unity (Rosenberg 2006). However, this approach is eschewed by working biologists who take both genetic and physical modes of causation as significant and not reducible one to the other.

A different strategy is integrative pluralism (Mitchell 2002). This involves a two-step procedure for explaining complex phenomena whose features are the result of multiple causes: (a) formulate idealized models where particular causal factors operate in isolation (“theoretical modeling”); and (b) integrate idealized models to explain how particular, concrete phenomena originate from these causes in combination. This model is suggestive but has key drawbacks: genetic causal reasoning in developmental biology does not typically involve theoretical modeling, and the precise nature of the integration is underspecified. Integration of genetic and physical difference makers in a single mechanism offers a further possibility (Darden 2006; Craver 2007). Although this valuably highlights the productive continuity between difference makers through stages in a sequence (i.e., their reciprocal interplay), it also has handicaps. These include:

Divergent approaches to measuring time. Instead of time “in the mechanism,” time is measured with external standardized stages (see below, Section 5.2). Stages facilitate the study of different kinds of developmental mechanisms, with different characteristic rates and durations for their stages, within a common framework for a model organism (e.g., Drosophila), while also permitting conserved molecular mechanisms to be studied in different species because the corresponding mechanism description is not anchored to the temporal sequence of the model organism.

An expectation that mechanism descriptions “bottom out” in lowest-level activities of molecular entities (Darden 2006). In the case of combining genetic and physical difference makers, the reciprocal interplay means that there is a studious avoidance of bottoming out in one or the other mode of causation.

The requirement of stable, compositional organization for mechanisms:

Mechanistic explanations are constitutive or componential explanations: they explain the behavior of the mechanism as a whole in terms of the organized activities and interaction of its components. (Craver 2007: 128)

But these mechanism descriptions are often embedded in different developmental contexts (at different times in ontogeny) with distinct compositional relations (within and between species). The reciprocal interplay between genetic and physical difference makers is not maintained precisely because these compositional differences alter relationships of physical causation (fluid flow, tension, etc.; see Section 1.3). Developmental biologists have been able to generalize relationships of genetic causation (in terms of genetic mechanisms; see Section 1.3) across species quite widely, but the attempt to combine these with physical causation has necessitated narrowing the scope of the causal claims.

Adequate philosophical models of the systematic dependence between genetic and physical difference makers in ontogeny need to account for how the temporal relations necessary for making causal claims are anchored in an external periodization used by developmental biologists. The imposition of different temporal scales can lead to distinct factors being significant or salient, which matters for ascertaining how different kinds of causes can be combined into integrated explanations. One possibility is to juxtapose these difference makers at distinct stages via experimental verification such that they exhibit productive continuity within the constraints of the external periodization (Love 2017b). This facilitates representing symmetry between causal factors because genetic difference makers can be placed before or after physical difference makers (and vice versa). Although this does not provide a way to combine causal magnitudes (as in vector addition from Newtonian mechanics), it offers an explicit strategy for assigning responsibility among different kinds of causes through the vehicle of temporal organization that goes beyond ranking difference makers. The periodization serves as a template from the practices of developmental biologists for providing wholeness or unity to the different modes of causation to yield a kind of integrated explanation of the morphology that results from a sequence of developmental processes.

Not all types of causal explanation involve an external periodization and there are other ways to combine causes in order to produce more integrated explanatory frameworks. One area where combined explanations for developmental phenomena are being analyzed pertains to mechanism descriptions and mathematical modeling in systems biology (Brigandt 2013; Fagan 2013). For example, Fagan (2013: ch. 9) shows how an integrated explanation emerges from a step-wise procedure that starts with a detailed description of a molecular mechanism followed by the formulation of an abstracted wiring diagram of component interactions, which is then translated into a system of equations that can account for changes in component interactions over time. Solutions to these systems of equations, together with a mapping of those solutions onto the behavior of the overall system within a shared landscape representation, then explain cellular differentiation more systematically.
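
A minimal sketch of this step-wise procedure, under strong simplifying assumptions, is the familiar two-gene mutual-repression switch: a wiring diagram (gene x represses gene y, and y represses x) is translated into a pair of ordinary differential equations whose alternative stable steady states play the role of attractors on a landscape, standing in for alternative cell fates. The equations and parameters below are illustrative and are not drawn from Fagan’s case study:

    # Toy version of the wiring-diagram-to-equations step: two mutually
    # repressing genes x and y with Hill-type repression and linear decay.
    from scipy.integrate import solve_ivp

    a, n, k = 3.0, 4, 1.0  # synthesis rate, Hill coefficient, decay rate

    def toggle(t, s):
        x, y = s
        dx = a / (1.0 + y**n) - k * x  # y represses x
        dy = a / (1.0 + x**n) - k * y  # x represses y
        return [dx, dy]

    # initial conditions on opposite sides of the separatrix settle into
    # different attractors, i.e., different "differentiated" states
    for x0, y0 in [(2.0, 0.1), (0.1, 2.0)]:
        sol = solve_ivp(toggle, (0.0, 50.0), [x0, y0], rtol=1e-8)
        x_end, y_end = sol.y[:, -1]
        print(f"start ({x0}, {y0}) -> steady state ({x_end:.2f}, {y_end:.2f})")

Mapping such solutions onto a landscape picture (which initial conditions flow to which attractor) is the final, integrative step in the procedure.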

4. Model Organisms

Model organisms are central to contemporary biology and studies of embryogenesis (Ankeny and Leonelli 2011; Steel 2008; Bier and McGinnis 2003; Davis 2004). Biologists utilize only a small number of species to experimentally elucidate various properties of ontogeny (e.g., C. elegans, Drosophila, and Danio [zebrafish]; see Figure 5). These experimental models permit researchers to investigate development in great depth and facilitate a precise dissection of causal relationships. Critics have questioned whether these models are good representatives of other species because of inherent biases involved in their selection, such as rapid development and short generation time (Bolker 1995), and problematic presumptions about the conservation of gene functions and regulatory networks (Lynch 2009). For example, C. elegans embryogenesis is not representative of nematodes in terms of pattern formation and cell specification (Schulze and Schierenberg 2011) and zebrafish appendage formation is a poor proxy for the development of appendages in tetrapods (Metscher and Ahlberg 1999).

[Image: side view of a male Drosophila melanogaster, about 2.5 mm long, with red eyes.]

Figure 5: Drosophila melanogaster (the common fruit fly) is one of the standard model organisms used in developmental biology.

One response to this criticism is to emphasize the conserved genetic mechanisms shared by all animals despite differences in developmental phenomena (Gerhart and Kirschner 2007; Ankeny and Leonelli 2011; Weber 2005). Fruit flies may be unrepresentative in exhibiting syncytial development, but they use the collinear expression of Hox genes to specify their anterior-posterior body axis. This response indicates that asking whether an entire model organism is representative per se is too coarse-grained a criterion to capture the rationale behind the use of such organisms. We have to ask about representation with respect to what, and some accounts have moved in this direction. Jessica Bolker has distinguished exemplary and surrogate modes of representation (Bolker 2009), where the former serve basic research by exemplifying a larger group and the latter correspond to models designed to provide indirect experimental access to otherwise inaccessible phenomena, such as mouse models of human psychological disorders (e.g., depression). Surrogate models are adopted in biomedical contexts where the phenomena of interest are manifested in humans. Most developmental biologists consider model organisms as exemplars, not surrogates. [15] Thus, in order to respond to a criticism of non-representativeness, the criterion of representation must be explored in more detail. [16]

A basic presumption about model organisms is that they bear appropriate similarity relationships to larger groups of animals. This presumption instantiates a general issue about how models in science represent phenomena. Model organisms represent developmental phenomena in species that are either studied little or never studied at all: “we study flies and frogs as examples for the development of animals in general” (Nüsslein-Volhard 2006: 87). One source of confidence in treating them as exemplars derives from an inductive inference over discovered patterns of evolutionary conservation with respect to developmental phenomena (e.g., gastrulation or somite formation). If all or most model organisms share a developmental feature, then all or most animals will share the feature. This inference can be circumscribed more or less narrowly (e.g., if all or most vertebrate model organisms share somite formation, then all or most vertebrates will share it).

As a consequence of this confidence, the model organism (“source”) can represent these other unstudied species (“targets”). This basic distinction between the model or source and the phenomena or target it is supposed to represent is ubiquitous in reasoning with model organisms (Ankeny and Leonelli 2011). Zebrafish is a model or representation of vertebrate development, the target phenomena, because we expect to learn about vertebrate development generally by studying ontogeny in zebrafish specifically. We do not invest time and resources into zebrafish as a model organism only because we are interested in zebrafish. Researchers plan to make claims about somite formation from observations in zebrafish that will apply to somite formation in other vertebrates that we will never have the time or money to investigate.

Developmental biologists often speak of investigating mechanisms that account for phenomena in ontogeny (see Section 1.3), and focus on conserved genetic and cellular mechanisms in model organisms (Gerhart and Kirschner 2007; Ankeny and Leonelli 2011; Weber 2005). This suggests a distinction between representation with respect to developmental phenomena and representation with respect to genetic and cellular mechanisms operating in development. If we are interested in explaining how hearts (phenomena) develop, then we might investigate the molecular or cellular mechanisms occurring in the heart field during zebrafish ontogeny. Some of these mechanisms could be conserved even though the phenomena are not. Drosophila has only one cardiac cell type, no neural crest cells, and a heart with no atrial or ventricular chamber morphology (Kirby 1999). However, cardiogenesis in all invertebrates and vertebrates investigated thus far depends essentially on the expression of the homeobox gene Nkx2-5/tinman (Gajewski and Schulz 2002). The reverse situation can also hold: similar phenomena may be manifested but genetic and cellular mechanisms might differ. Amphibians form a neural tube (neurulation) through a process of invagination (the folding of an epithelial sheet), whereas teleost fishes form a neural tube via cavitation (the hollowing out of a block of tissue via cell death). The neural tube is homologous across vertebrates (i.e., a conserved phenomenon), but the cellular and genetic mechanisms involved in invagination versus cavitation are distinct (Davies 2013: ch. 4).

The distinction between phenomena and mechanisms assumes specificity; i.e., there are specific phenomena (somite formation in vertebrates) or mechanisms (collinear Hox gene expression) in view when judging the relationship between source (model) and target. But animal development consists of a multitude of different processes that involve a host of different mechanisms. Therefore, another distinction operating in the representational criterion pertains to questions of specificity versus variety when selecting and using model organisms. A model might represent one type of target phenomena (differentiation or growth) or mechanism (cell signaling or cell cycling) but not others—specificity—or may do so better or worse with respect to particular types of phenomena or mechanisms. A model might represent several types of target phenomena and mechanisms simultaneously—variety—with variability in how each type is represented. Trade-offs exist with respect to how well different phenomena or mechanisms are co-instantiated in a model organism. Note that experimental organisms may be selected with respect to variety and specificity simultaneously, such as when a biologist working on a specific phenomenon intends to work on others using the same model in the future. They also may be selected with one or the other of these two aspects predominant. A model might be desirable if it has representational variety in both mechanisms and phenomena even if it is not the best representative for every specific mechanism or phenomenon. Conversely, a model organism might be desirable if it is the best representative for a specific mechanism despite being a very poor model for other phenomena or mechanisms. Variety is indicative of the “whole organism” being the model. [17] A further distinction can be introduced between “model organisms” and “experimental organisms” (Ankeny and Leonelli 2011) or “general model organisms” and “Krogh-principle model organisms” (Love 2010). General model organisms are selected and used with the variety aspect of the representational criterion preeminent; experimental or Krogh-principle model organisms are selected and used with specificity preeminent.

Other issues relevant to the representation criterion include how individual cells or cell types serve as developmental models (Fagan 2016), how developmental mechanisms in different model organisms are compared and evaluated (Yoshida forthcoming), how the use of model organisms constitutes an example of case-based reasoning (Ankeny 2012), and how model organisms involve idealizations or known departures from features present in the model’s target as the result of laboratory cultivation (Ankeny 2009; Section 5.2). Additionally, the question of representation is not the only one germane to understanding model organism use. Because model organisms are utilized for experimental intervention, questions of representation must be juxtaposed with questions of manipulation (see the supplement on Model Organisms and Manipulation).

5. Development and Evolution

The relationships that obtain between development and evolution are complicated and under ongoing investigation (for a review, see Love 2015). Two main axes dominate within a loose conglomeration of research programs (Raff 2000; Müller 2007): (a) the evolution of development, or inquiry into the pattern and processes of how ontogeny varies and changes over time; and, (b) the developmental basis of evolution, or inquiry into the causal impact of ontogenetic processes on evolutionary trajectories—both in terms of constraint and facilitation. Two examples where the concepts and practices of developmental and evolutionary biology intersect are treated here: the problematic appeal to functional homology in developmental genetics that is meant to underwrite evolutionary generalizations about ontogeny ( Section 5.1 ) and the tension between using normal stages for developmental investigation and determining the evolutionary significance of phenotypic plasticity ( Section 5.2 ). These cases expose some of the philosophical issues inherent in how development and evolution can be related to one another.

The conserved role of Hox genes in axial patterning is referred to as functionally homologous across animals (Manak and Scott 1994), over and above the relation of structural homology that obtains between DNA sequences. And yet “functional homology” is a contradiction in terms (Abouheif et al. 1997) because the definition of a homologue is “the same organ in different animals under every variety of form and function” (Owen 1843: 379)—the later evolutionary distinction between homology (structure) and analogy (function) descends from this recognition. Therefore, the idea of functional homology appears theoretically confused, and there is a conceptual tension in its use by molecular developmental biologists.

[three skeletons, each with the left wing outstretched and the outline of the wing shaded in. The skeleton labeled 1 is of a pterodactyl; that labeled 2 is of a bat; and that labeled 3 is of a bird.]

Figure 6: Vertebrate wings are homologous as forelimbs; they are derived by common descent from the same structure. The function of vertebrate wings (i.e., flight) is analogous; although the wings fulfill similar functions, their role in flight has evolved separately.

The reference to “organ” in Owen’s definition is indicative of a structure (an entity) found in an organism that may vary in its shape and composition (form) or what it is for (function) in the species where it occurs. Translated into an evolutionary context, sameness is cashed out by reference to common ancestry. Since structures also can be similar by virtue of natural selection operating in similar environments, homology is contrasted with analogy. Homologous structures are the same by virtue of descent from a common ancestor, regardless of what functions these structures are involved in, whereas analogous structures are similar by virtue of selection processes favoring comparable functional outcomes, regardless of common descent ( Figure 6 ).

This is what makes similarity of function an especially problematic criterion of homology (Abouheif et al. 1997). Because functional similarity is the appropriate relation for analogy, it is not necessary for analogues to have the same function as a consequence of common ancestry—similarity despite different origins suffices (Ghiselin 2005). Classic cases of analogy involve taxa that do not share a recent common ancestor that exhibits the structure, such as the external body morphology of dolphins and tuna (Pabst 2000). Thus, functional homology seems to be a category error because what a structure does should not enter into an evaluation of homologue correspondence and similarity of function is often the result of adaptation via natural selection to common environmental demands, not common ancestry.

Although we might be inclined simply to prohibit the terminology of functional homology, its widespread use in molecular and developmental biology should at least make us pause. [ 18 ] While it is important to recognize this pervasive practice, some occurrences may be illicit. Swapping structurally homologous genes between species to rescue mutant or null phenotypes is not a genuine criterion of functional homology, especially when little or no attention is paid to establishing a phylogenetic context. This makes a number of claims of functional homology suspect. To avoid running afoul of the conceptual tension, explicit attention must be given to the meaning of “function.” Biological practice harbors at least four separate meanings of function (Wouters 2003, 2005): activity (what something does), causal role (contribution to a capacity), fitness advantage or viability (value of having something), and selected effect or etiology (origination and maintenance via natural selection). Debate has raged about which of them (if any) is most appropriate for different aspects of biological and psychological reasoning, or which is most general in scope (i.e., what makes them all function concepts?) (see discussion in Garson 2016). Here the issue is whether we can identify a legitimate concept of homology of function.

If we are to avoid mixing homology and analogy, then the appropriate notion of function cannot be based on selection history, which is allied with the concept of analogy and concerns a particular variety of function. Similarly, viability interpretations concentrate on features where the variety of function is critical because of conferred survival advantages. Any interpretation of function that relies on a particular variety of function (because it was selected or because it confers viability) clashes with the demand that homology concern something “under every variety of form and function.” A causal role interpretation emphasizes a systemic capacity to which a function makes a contribution. It too focuses on a particular variety of function, though in a way different from either selected effect or viability interpretations. Only an activity interpretation (“what something does”) accents the function itself, apart from its specific contribution to a systemic capacity and position in a larger context. Therefore, the most appropriate meaning to incorporate into homology of function is “activity-function” because it is at least possible for activity-functions to remain constant under every variety. An evaluation of sameness due to common ancestry is made separately from the role the function plays (or its use), whether understood in terms of a causal role, a fitness advantage, or a history of selection. [ 19 ] Activity-functions can be put to different uses while being shared via common descent (i.e., homologous). More precisely, homology of function can be defined as the same activity-function in different animals under every variety of form and use-function (Love 2007). This unambiguously removes the tension that plagued functional homology.

Careful discussions of regulatory gene function in development and evolution recognize something akin to the distinction between activity- and use-function (i.e., between what a gene does and what it is for in some process within the organism).

When studying the molecular evolution of regulatory genes, their biochemical and developmental function must be considered separately. The biochemical function of PAX-6 and eyeless are as general transcription factors (which bind and activate downstream genes), but their developmental function is their specific involvement in eye morphogenesis (Abouheif 1997: 407).

The biochemical function is the activity-function and the developmental function is the use-function. This distinction helps to discriminate between divergent evolutionary trajectories. Biochemical activity-functions of genes are often conserved (i.e., homologous), while simultaneously being available for co-option to make causal role contributions (use-functions) to distinct developmental processes. The same regulatory genes are evolutionarily stable in terms of activity-function and evolutionarily labile in terms of use-function. [ 20 ] By implication, claims about use-function homology for genes qua developmental function are suspect compared to those concerning activity-function homology for genes qua biochemical function, because developmental functions are more likely to have changed as phylogenetic distance increases.

The distinction between biochemical (activity) function and developmental (use) function is reinforced by the hierarchical aspects of homology (Hall 1994). A capacity defining the use-function of a regulatory gene at one level of organization, such as axial patterning, must be considered as an activity-function itself at another level of organization, such as the differentiation of serially repeated elements along a body axis. (Note that “ level of organization ” need not be compositional and thus the language of “higher” and “lower” levels may be inappropriate.) The developmental roles of Hox genes in axial patterning may be conserved by virtue of their biochemical activity-function homologies but Hox genes are not use-function homologues because of these developmental roles. Instead of focusing on the activity of a gene component and its causal role in axial patterning, we shift to the activity of axial patterning and its causal role elsewhere (or elsewhen) in embryonic development.

Introducing a conceptually legitimate idea of homology of activity-function is not about keeping the ideas of developmental biology tidy. It assists in the interpretation of evidence and circumscribes the inferences drawn. For example, NK-2 genes are involved in mesoderm specification, which underlies muscle morphogenesis. In Drosophila , the expression of a particular NK-2 gene ( tinman ) is critical for both cardiac and visceral mesoderm development. If tinman is knocked out and transgenically replaced with its vertebrate orthologue, Nkx2-5 , only visceral mesoderm specification is rescued; the regulation of cardiac mesoderm is not (Ranganayakulu et al. 1998). A region of the vertebrate protein (encoded near the 5′ end of the gene) differs enough to prevent appropriate regulation in cardiac morphogenesis. The homeodomains (stretches of sequence that confer DNA binding) of vertebrate Nkx2-5 and Drosophila tinman are interchangeable, so the inability of Nkx2-5 to rescue cardiac mesoderm specification is not related to the activity-function of differential DNA binding. One component of the orthologous (homologous) proteins in both species retains an activity-function homology related to visceral mesoderm specification, but another component (not the homeodomain) has diverged. This homeobox gene not only lacks a single use-function (as expected); it also lacks a single activity-function. Any adequate evaluation of these cases must recognize a more fine-grained decomposition of genes into working units to capture genuine activity-function conservation. We can link activity-function homologues directly to structural motifs within a gene, but there is not necessarily a single activity-function for an entire open reading frame.
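
The inferential point about decomposing genes into working units can be rendered schematically. The sketch below (Python) is an illustration only: the two “working units” and the assignment of processes to units are simplified stand-ins for the Ranganayakulu et al. (1998) result, not a biological model.

```python
# A schematic rendering (not a biological model) of the rescue logic:
# decompose each orthologous gene product into working units and ask
# which units retain conserved activity-functions. Unit names and the
# process-to-unit requirements are simplified for illustration.

tinman = {"homeodomain": "DNA_binding", "other_region": "cardiac_regulation"}
nkx2_5 = {"homeodomain": "DNA_binding", "other_region": "diverged"}

def rescues(donor, host, required_units):
    """A swapped-in orthologue rescues a process only if every working
    unit the process depends on has the same activity-function in both."""
    return all(donor[unit] == host[unit] for unit in required_units)

# Suppose (for illustration) that visceral mesoderm specification depends
# only on units conserved between the two genes, while cardiac regulation
# also depends on the diverged region.
print(rescues(nkx2_5, tinman, ["homeodomain"]))                  # True
print(rescues(nkx2_5, tinman, ["homeodomain", "other_region"]))  # False
```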

Defusing the conceptual tensions between developmental and evolutionary biology with respect to homology of function has a direct impact on the causal generalizations and inferences made from model organisms ( Section 4 ). Activity-function homology directs our attention to the stability or conservation of activities. This conservation is indicative of when the study of mechanisms in model organisms will produce robust and stable generalizations ( Section 1.3 ). The widespread use of functional homology in developmental biology is aimed at exactly this kind of question, which explains its persistence in experimental biology despite conceptual ambiguities. Generalizations concerning molecular signaling cascades are underwritten by the coordinated biochemical activities in view, not the developmental roles (though sometimes they may coincide). Thus, activity-function details about a signaling cascade gleaned from a model organism can be generalized via homology to other unstudied organisms even if the developmental role varies for the activity-function in other species.

All reasoning strategies combine distinctive strengths with latent weaknesses. For example, decomposing a system into its constituents to understand the features manifested by the system promotes a dissection of the causal interactions of the localized constituents, while downplaying interactions with elements external to the system (Wimsatt 1980; Bechtel and Richardson 1993). Sometimes the descriptive and explanatory practices of the sciences are successful precisely because they intentionally ignore aspects of natural phenomena or use a variety of approximation techniques. Idealization is a reasoning strategy for describing, modeling, and explaining that purposefully departs from features known to be present in nature. For example, the interior space of a cell is often depicted as relatively empty even though intracellular space is known to be crowded (Ellis 2001); the variable of cellular volume takes on a value that is known to be false (i.e., relatively empty). Idealizations involve knowingly ignoring variations in properties or excluding particular values for variables, in a variety of different ways, for descriptive and explanatory purposes (Jones 2005; Weisberg 2007).

“Normal development” is conceptualized through strategies of abstraction that manage variation inherent within and across developing organisms (Lowe 2015, 2016). The study of ontogeny in model organisms ( Section 4 ) is usually executed by establishing a set of normal stages for embryonic development (see Other Internet Resources). A developmental trajectory from fertilized zygote to fully formed adult is broken down into distinct temporal periods by reference to the occurrence of major events, such as fertilization, gastrulation, or metamorphosis (Minelli 2003: ch. 4; see Section 1.2 ). This enables researchers in different laboratory contexts to make standardized comparisons of experimental results (Hopwood 2005, 2007). Normal stages are critical to large communities of developmental biologists working on well-established models, such as chick (Hamburger and Hamilton 1951) or zebrafish (Kimmel et al. 1995): “Embryological research is now unimaginable without such standard series” (Hopwood 2005: 239). These normal stages are a form of idealization because they intentionally ignore kinds of variation in development, including variation associated with environmental variables. While facilitating the study of particular causal relationships, this means that specific kinds of variation in developmental features that might be relevant to evolution are minimized in the process of rendering ontogeny experimentally tractable (Love 2010).
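
As a toy illustration of how staging idealizes, consider the following sketch (Python). The character, thresholds, and measurements are invented; real staging relies on many indicators and on judgments of typicality rather than a single numerical cut-off.

```python
# A toy illustration of staging as idealization: embryos varying in a
# character are binned into discrete stages, and within-stage variation
# drops out of view. Character, thresholds, and values are invented.

stage_boundaries = [(0, 10, "stage 1"), (10, 20, "stage 2"), (20, 35, "stage 3")]

def assign_stage(somite_count):
    for low, high, label in stage_boundaries:
        if low <= somite_count < high:
            return label
    return "unstaged"

embryos = [4, 7, 12, 14, 19, 22]            # variable raw measurements
print([assign_stage(n) for n in embryos])   # within-stage variation is gone
```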

Phenotypic plasticity is a ubiquitous biological phenomenon. It involves the capacity of a particular genotype to generate phenotypic variation, often in the guise of qualitatively distinct phenotypes, in response to differential environmental cues (Pigliucci 2001; DeWitt and Scheiner 2004; Kaplan 2008; Gilbert and Epel 2009). One familiar example is seasonal caterpillar morphs that depend on different nutritional sources (Greene 1989). Some of the relevant environmental variables include temperature, nutrition, pressure/gravity, light, predators or stressful conditions, and population density (Gilbert and Epel 2009). The reaction norm is a summary of the range of phenotypes, whether quantitatively or qualitatively varying, exhibited by organisms of a given genotype for different environmental conditions. When the reaction norm exhibits discontinuous variation or bivalent phenotypes (rather than quantitative, continuous variation), it is often labeled a polyphenism ( Figure 7 ).

[two color photos of leafed twigs, each with a well-camouflaged caterpillar (Biston betularia) looking like a branch, one green (on willow, right) and one brown (on birch, left).]

Figure 7: A color polyphenism in American Peppered Moth caterpillars, an example of phenotypic plasticity.
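
A reaction norm can be thought of as a mapping from environmental values to phenotypes for a fixed genotype. The sketch below (Python) loosely follows Greene’s (1989) diet-induced caterpillar morphs, but the cue, threshold, and morph labels are invented for illustration; the point is only that a discontinuous mapping is what gets labeled a polyphenism.

```python
# A toy reaction norm: one genotype, a range of environmental values, and
# the phenotypes produced. The cue, threshold, and morph labels are
# invented; a discontinuous (bivalent) mapping illustrates a polyphenism.

def caterpillar_morph(spring_diet_fraction):
    """Maps an environmental cue onto a qualitative morph; the mapping is
    discontinuous rather than graded, hence a polyphenism."""
    return "catkin morph" if spring_diet_fraction > 0.5 else "twig morph"

reaction_norm = {e / 10: caterpillar_morph(e / 10) for e in range(11)}
print(reaction_norm)  # two discrete phenotypes rather than a continuum
```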

Phenotypic plasticity has been of recurring interest to biological researchers and controversial in evolutionary theory. Extensive study of phenotypic plasticity has occurred in the context of quantitative genetic methods and phenotypic selection analyses, where the extent of plasticity in natural populations has been demonstrated and operational measures delineated for its detection (Scheiner 1993; Pigliucci 2001). Other aspects of plasticity require different investigative methods to ascertain the sources of plasticity during ontogeny, the molecular genetic mechanisms that encourage plasticity, and the kinds of mapping functions that exist between the genotype and phenotype (Pigliucci 2001; Kirschner and Gerhart 2005: ch. 5). These latter aspects, the origin of phenotypic variation during and after ontogeny, are in view at the intersection of development and evolution: How do molecular genetic mechanisms produce (or reduce) plasticity? What genotype-phenotype mapping functions are prevalent or rare? Does plasticity contribute to the origination of evolutionary novelties (Moczek et al. 2011; West-Eberhard 2003)?

In order to evaluate these questions experimentally, researchers need to alter development through the manipulation of environmental variables and observe how a novel phenotype can be established within the existing plasticity of an organism (Kirschner and Gerhart 2005: ch. 5). This manipulation could allow for the identification of patterns of variation through the reliable replication of particular experimental alterations within different environmental regimes. However, phenotypic plasticity cannot be observed without measuring variation across different environmental regimes. These measurements are required to document the degree of plasticity and its patterns for a particular trait, such as qualitatively distinct morphs. An evaluation of the significance of phenotypic plasticity for evolution requires answers to questions about where plasticity emerges, how molecular genetic mechanisms are involved in the plasticity, and what genotype-phenotype relations obtain.

Developmental stages intentionally ignore variation associated with phenotypic plasticity. Animals and plants are raised under stable environmental conditions so that stages can be reproduced in different laboratory settings; variation is often viewed as noise that must be reduced or eliminated if one is to understand how development works (Frankino and Raff 2004). This practice also encourages the selection of model organisms that exhibit less plasticity (Bolker 1995). The laboratory domestication of a model organism may also reduce the amount or type of observable phenotypic variation (Gu et al. 2005), though laboratory domestication can also increase variation (e.g., via inbreeding). Despite attempts to reduce variation by controlling environmental factors, some of it always remains (Lowe 2015), as evidenced by the fact that absolute chronology is not a reliable measure of time in ontogeny, and neither is the initiation or completion of its different parts (Mabee et al. 2000; Sheil and Greenbaum 2005). Developmental stages allow this recalcitrant variation to be effectively ignored through judgments of embryonic typicality. Normal stages also involve assumptions about the causal connections between different processes across sequences of stages (Minelli 2003: ch. 4). Once these stages have been constructed, it is possible to use them as a visual standard against which to recognize and describe variation as a deviation from the norm (DiTeresi 2010; Lowe 2016). But, more typically, variation ignored in the construction of these stages is also ignored in the routine consultation of the stages in day-to-day research contexts (Frankino and Raff 2004).

Normal stages fulfill a number of goals related to the descriptive and explanatory endeavors that developmental biologists engage in (Kimmel et al. 1995). They yield a way to measure experimental replication; enable consistent and unambiguous communication among researchers, especially if stages are founded on commonly observable morphological features; facilitate accurate predictions of developmental phenomena; and aid in making comparisons or generalizations across species. As idealizations of ontogeny, normal stages allow for a classification of developmental events that is comprehensive, with suitably sized and relatively homogeneous stages, reasonably sharp boundaries between stages, and stability under different investigative conditions (Dupré 2001), which encourages more precise explanations within particular disciplinary approaches (Griesemer 1996). Idealizations also can facilitate abstraction and generalization, both of which are a part of extrapolating findings from the investigative context of a model organism to other domains (Steel 2008; see Sections 4 and 5.1 ).

There are various weaknesses associated with normal stages that accompany the fulfillment of these investigative and explanatory goals. Key morphological indicators sometimes overlap stages, terminology that is useful for one purpose may be misleading for another, particular terms can be misleading in cross-species comparisons, and manipulation of the embryo for continued observation can have a causal impact on ontogeny. Avoiding variability in stage indicators can encourage overlooking the significance of this variation, or at least provide a reason to favor its minimization.

Thus, there are good reasons for adopting normal stages to periodize model organism ontogeny, and these reasons help to explain why their continued use yields empirical success. However, similar to other standard (successful) practices in science, normal stages are often taken for granted, which means their biasing effects are neglected (Wimsatt 1980), some of which are relevant to evolutionary questions (e.g., systematically underestimating the extent of variation in a population). This is critical to recognize because the success of a periodization is not a function of the eventual ability to relax the idealizations; periodizations are not slowly corrected so that they become less idealized. Instead, new periodizations are constructed and used alongside the existing ones because different idealizations involve different judgments of typicality that serve diverse descriptive and explanatory aims. In addition to the systematic biases involved in developmental staging, most model organisms are poorly suited to inform us about how environmental effects modulate or combine with genetic or other factors in development—they make it difficult to discover details about mechanisms underlying reaction norms. Short generation times and rapid development are tightly correlated with insensitivity to environmental conditions through various mechanisms such as prepatterning (Bolker 1995).

The tension between the specific practice of developmental staging in model organisms and uncovering the relevance of variation due to phenotypic plasticity for evolution can be reconstructed as an argument.

  1. Variation due to phenotypic plasticity is a normal feature of ontogeny.
  2. The developmental staging of model organisms intentionally downplays variation in ontogeny associated with the effects of environmental variables (e.g., phenotypic plasticity) by strictly limiting the range of values for environmental variables and by removing variation in characters utilized to establish the comprehensive periodization.
  3. Therefore, using model organisms with specified developmental stages will make it difficult, if not impossible, to observe patterns of variation due to phenotypic plasticity.

Although this tension obtains even if the focus is not on evolutionary questions, sometimes encouraging developmental biologists to interpret absence of evidence as evidence of the developmental insignificance of phenotypic plasticity, it is exacerbated for evolutionary researchers. The documentation of patterns of variation is precisely what is required to gauge the evolutionary significance of phenotypic plasticity. Practices of developmental staging in model organisms can retard our ability to make either a positive or negative assessment. Developmental staging, in conjunction with the properties of model organisms, tends to encourage a negative assessment of the evolutionary importance of phenotypic plasticity because the variation is not manifested and documented, and therefore is unlikely to be reckoned as substantive. Idealizations involving normal stages discourage a robust experimental probing of phenotypic plasticity, which is an obstacle to determining its evolutionary significance.

The consequences of this tension for the intersection of development and evolution are two-fold. First, the most powerful experimental systems for studying development are set up to minimize variation that may be critical to comprehending how evolutionary processes occur in nature. Second, if evolutionary investigations revolve around a character that was assessed for typicality to underwrite the temporal partitions that we call stages, then much of the variation in this character was conceptually removed as a part of rendering the model organism experimentally tractable. [ 21 ]

The identification of drawbacks that accompany strategies of idealization used to study development invites consideration of ways to address the liabilities identified (Love 2006). We can construct a principled perspective on how to do so by adding three further premises:

  4. Reasoning strategies involving idealization, such as (2), are necessary to the successful prosecution of biological investigations of ontogeny.
  5. Therefore, compensatory tactics should be chosen in such a way as to specifically redress the blind spots arising from the kind of idealizations utilized.
  6. Given (1)–(3), compensatory tactics must be related to the effects of ignoring variation due to phenotypic plasticity that result from the developmental staging of model organisms.

At least two compensatory tactics can promote observations of variation due to phenotypic plasticity that is ignored when developmental stages are constructed for model organisms: the employment of diverse model organisms and the adoption of alternate periodizations.

Variation often will be observable in non-standard model organisms because experimental organisms that do not have large communities built around them are less likely to have had their embryonic development formally staged, and thus the effects of idealization on phenotypic plasticity are not operative. In turn, researchers are sensitized to the ways in which these kinds of variation are being muted in the study of standard models. Stages can then be used as visual standards to identify variation as deviations from a norm and thereby characterize patterns of variability. [ 22 ]

A second compensatory tactic is the adoption of alternative periodizations. This involves choosing different characters to construct new temporal partitions, thereby facilitating the observation of variation with respect to characteristics previously stabilized in the normal stage periodization. These alternative periodizations often divide a subset of developmental events according to processes or landmarks that differ from those used to construct the normal stages, and they may not map one-one onto the existing normal stages, especially if they encompass events beyond the trajectory from fertilization to a sexually mature adult. This lack of isomorphism between periodizations also will be manifested if different measures of time are utilized, whether sequence (event ordering) or duration (succession of defined intervals), and whether sequences or durations are measured relative to one another or against an external standard, such as absolute chronology (Reiss 2003; Colbert and Rowe 2008). These incompatibilities prevent assimilating the alternative periodizations into a single, overarching staging scheme. In all of these cases, idealization is involved and therefore each new periodization is subject to the liabilities of ignoring kinds of variation. However, alternative periodizations require choosing different characters to stabilize and typify when defining their temporal partitions, which means different kinds of variation will be exposed than were previously observable. [ 23 ]
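
The lack of isomorphism between periodizations can be illustrated with a small sketch (Python). The event sequence and both partitions are invented; the point is only that partitions constructed from different characters need not map one-one onto each other.

```python
# A sketch of non-isomorphic periodizations: the same (invented) event
# sequence partitioned by two different characters yields partitions
# with no stage-to-period bijection.

events = ["fertilization", "cleavage", "gastrulation", "neurulation",
          "organogenesis", "hatching", "metamorphosis"]

normal_stages = {                  # partition by overall morphology
    "stage A": events[0:2],
    "stage B": events[2:4],
    "stage C": events[4:7],
}
limb_periodization = {             # partition by one organ's landmarks
    "pre-limb": events[0:4],
    "limb bud": events[4:5],
    "limb growth": events[5:7],
}

# "stage C" overlaps two limb periods, so no one-one mapping exists.
for stage, evs in normal_stages.items():
    overlap = [p for p, pes in limb_periodization.items() if set(evs) & set(pes)]
    print(stage, "->", overlap)
```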

The compensatory tactics of employing a diversity of model organisms and adopting alternative periodizations may be conceptually appropriate for addressing how the practice of developmental staging has an impact on the detection of phenotypic plasticity, but this does not remove associated costs (human, financial, and otherwise) or controversy. The advantages of a single, comprehensive periodization for a general model organism (e.g., zebrafish normal stages) must be weighed in light of the advantages of alternative, process-specific periodizations. However, by openly scrutinizing these practices in relation to the phenomenon of interest and recognizing both advantages and drawbacks involved in the idealizations utilized, developmental and evolutionary biologists are better positioned to offer systematic descriptions and comprehensive explanations of biological phenomena.

This entry has only sampled a small portion of work relevant to the import and promise of conceptual reflection on the epistemology of developmental biology. Much more could be said about each of the above domains, such as a more fine-grained analysis of how normal stages operate as types in developmental biology (DiTeresi 2010; Lowe 2016). Additionally, little has been said about how evidence works in developmental biological experimentation or about differences between confirmatory and exploratory experimentation (Hall 2005; O’Malley 2007; Waters 2007b), nor have I treated the role of metaphors and models that characterize key practices in developmental biology (Fagan 2013; Keller 2002). The latter have been perspicuously analyzed via increased attention to the details of particular research programs. Finally, nothing has been said about the metaphysical implications of developmental phenomena (a key input for Aristotle’s metaphysics ). Concepts of potentiality are very natural in descriptions of embryological phenomena (e.g., the pluripotency of stem cells or the potential of a germ layer to yield different kinds of tissue lineages), and some have argued that empirical advances in developmental biology support a new form of essentialism about biological natural kinds (Austin 2019). This bears on how we understand dispositions (see the entry on dispositions ) because the triggering conditions are often complex and multiply realized (including manifestations without a trigger), and because cells exhibit dispositions with multiple possible manifestations (cell types) in specific sequential orderings (Hüttemann and Kaiser 2018; Laplane 2016). Metaphysical issues also arise in the context of human developmental biology, such as how to understand the ontology of pregnancy (Kingma 2018; Sidzinska 2017). Thus, developmental biology displays not only a rich array of material and conceptual practices that can be analyzed to better understand the scientific reasoning exhibited in experimental life science, but also points in the direction of new ideas for metaphysics, especially when that endeavor explicitly considers the input of empirically successful sciences.

  • Abouheif, E., 1997, “Developmental genetics and homology: a hierarchical approach”, Trends in Ecology and Evolution , 12: 405–408.
  • Abouheif, E., M. Akam, W.J. Dickinson, P.W.H. Holland, A. Meyer, N.H. Patel, R.A. Raff, V.L. Roth, and G.A. Wray, 1997, “Homology and developmental genes”, Trends in Genetics , 13: 432–433.
  • Ankeny, R.A., 2009, “Model organisms as fictions”, in Fictions in Science: Philosophical Essays on Modeling and Idealization , M. Suárez (ed.), 193–204. New York and London: Routledge, Taylor & Francis Group.
  • –––, 2012, “Detecting themes and variations: the use of cases in developmental biology”, Philosophy of Science , 79: 644–654.
  • Ankeny, R.A., and S. Leonelli, 2011, “What’s so special about model organisms?”, Studies in History and Philosophy of Science , 42: 313–323.
  • Asp, M., S. Giacomello, L. Larsson, C. Wu, D. Fürth, X. Qian, E. Wärdell, J. Custodio, J. Reimegård, F. Salmén, C. Österholm, P. L. Ståhl, E. Sundström, E. Åkesson, O. Bergmann, M. Bienko, A. Månsson-Broberg, M. Nilsson, C. Sylvén and J. Lundeberg, 2019, “A spatiotemporal organ-wide gene expression and cell atlas of the developing human heart”, Cell , 179: 1647–1660.
  • Austin, C.J., 2019, Essence in the Age of Evolution: A New Theory of Natural Kinds , New York: Routledge.
  • Bechtel, W, and R. Richardson, 1993, Discovering Complexity: Decomposition and Localization as Strategies in Scientific Research , Princeton: Princeton University Press.
  • Berrill, N.J., 1961, Growth, Development, and Pattern , San Francisco: W.H. Freeman and Company.
  • Bier, E., and W. McGinnis, 2003, “Model organisms in the study of development and disease”, in Molecular Basis of Inborn Errors of Development , C.J. Epstein, R.P. Erickson, and A. Wynshaw-Boris (eds.), 25–45. New York: Oxford University Press.
  • Bolker, J.A., 1995, “Model systems in developmental biology”, BioEssays , 17: 451–455.
  • –––, 2009, “Exemplary and surrogate models: Two modes of representation in biology”, Perspectives in Biology and Medicine , 52: 485–499.
  • Brand, T., 2003, “Heart development: molecular insights into cardiac specification”, Developmental Biology , 258: 1–19.
  • Brigandt, I., 2013, “Systems biology and the integration of mechanistic explanation and mathematical explanation”, Studies in History and Philosophy of Biological and Biomedical Sciences , 44: 477–492.
  • Brigandt, I., and A.C. Love, 2012, “Conceptualizing evolutionary novelty: Moving beyond definitional debates”, Journal of Experimental Zoology (Mol Dev Evol) , 318B: 417–427.
  • Brouzés, E., and E. Farge, 2004, “Interplay of mechanical deformation and patterned gene expression in developing embryos”, Current Opinion in Genetics & Development , 14: 367–374.
  • Colbert, M.W., and T. Rowe, 2008, “Ontogenetic Sequence Analysis: using parsimony to characterize developmental sequences and sequence polymorphism”, Journal of Experimental Zoology (Mol Dev Evol) , 310B: 398–416.
  • Craver, C.F., 2007, Explaining the Brain: Mechanisms and the Mosaic Unity of Neuroscience , New York: Oxford University Press.
  • Crotty, D.A., and A. Gann, 2009, Emerging Model Organisms: A Laboratory Manual, Volume 1 , Cold Spring Harbor, NY: Cold Spring Harbor Laboratory Press.
  • Damen, W.G.M., 2007, “Evolutionary conservation and divergence of the segmentation process in arthropods”, Developmental Dynamics , 236: 1379–1391.
  • Darden, L., 2006, Reasoning in Biological Discoveries: Essays on Mechanisms, Interfield Relations, and Anomaly Resolution , New York: Cambridge University Press.
  • Davidson, E.H., 2006, The Regulatory Genome: Gene Regulatory Networks in Development and Evolution , San Diego: Academic Press.
  • Davidson, E.H. and I.S. Peter, 2015, Genomic Control Process: Development and Evolution , San Diego, CA: Academic Press.
  • Davies, J.A., 2013, Mechanisms of Morphogenesis: The Creation of Biological Form , 2nd edition, San Diego, CA: Elsevier Academic Press.
  • Davis, R.H., 2004, “The age of model organisms”, Nature Reviews Genetics , 5: 69–76.
  • DeWitt, T.J., and S.M. Scheiner, 2004, Phenotypic Plasticity: Functional and Conceptual Approaches , New York: Oxford University Press.
  • DiTeresi, C.A., 2010, “Taming Variation: Typological Thinking and Scientific Practice in Developmental Biology”, PhD thesis, University of Chicago.
  • Dubuis, J.O., G. Tkačik, E.F. Wieschaus, T. Gregor and W. Bialek, 2013, “Positional information, in bits”, Proceedings of the National Academy of Sciences , 110: 16301–16308.
  • Dupré, J., 2001, “In defence of classification”, Studies in History and Philosophy of Biological and Biomedical Sciences , 32: 203–219.
  • Ellis, R.J., 2001, “Macromolecular crowding: obvious but underappreciated”, Trends in Biochemical Sciences , 26: 597–604.
  • Emlen, D.J., 2000, “Integrating development with evolution: A case study with beetle horns”, BioScience , 50: 403–418.
  • Facchin, S., R. Lopreiato, M. Ruzzene, O. Marin, G. Sartori, C. Götz, M. Montenarh, G. Carignani, and L.A. Pinna, 2003, “Functional homology between yeast piD261/Bud32 and human PRPK: both phosphorylate p53 and PRPK partially complements piD261/Bud32 deficiency”, FEBS Letters , 549: 63–66.
  • Fagan, M.B., 2013, Philosophy of Stem Cell Biology: Knowledge in Flesh and Blood , London: Palgrave Macmillan.
  • –––, 2016, “Generative models: human embryonic stem cells and multiple modeling relations”, Studies in History and Philosophy of Science , 56: 122–134.
  • Forgacs, G., and S.A. Newman, 2005, Biological Physics of the Developing Embryo , New York: Cambridge University Press.
  • Frankino, W.A., and R.A. Raff, 2004, “Evolutionary importance and pattern of phenotypic plasticity”, in DeWitt and Scheiner 2004: 64–81.
  • Fraser, S.E., and R.M. Harland, 2000, “The molecular metamorphosis of experimental embryology”, Cell , 100: 41–55.
  • Furley, D., and J.S. Wilkie, 1984, Galen: On Respiration and the Arteries , Princeton: Princeton University Press.
  • Gajewski, K., and R.A. Schulz, 2002, “Comparative genetics of heart development: conserved cardiogenic factors in Drosophila and vertebrates”, in Cardiac Development , B. Ostadal, M. Nagano, and N.S. Dhalla (eds.), 1–23. Boston: Kluwer Academic Publishers.
  • Garson, J., 2016, A Critical Overview of Biological Functions , Dordrecht: Springer.
  • Gerhart, J., and M. Kirschner, 2007, “The theory of facilitated variation”, Proceedings of the National Academy of Sciences USA , 104: 8582–8589.
  • Ghiselin, M.T., 2005, “Homology as a relation of correspondence between parts of individuals”, Theory in Biosciences , 124: 91–103.
  • Giere, R.N., 1999, Science Without Laws , Chicago: University of Chicago Press.
  • Gilbert, S.F. (ed.), 1991, A Conceptual History of Modern Embryology (Volume 7: Developmental Biology: A Comprehensive Synthesis ), New York: Plenum Press.
  • –––, 2000 [2003, 2006, 2010], Developmental Biology , 6th edition, Sunderland, MA: Sinauer Associates, Inc., 2000; 7th edition, 2003; 8th edition, 2006; 9th edition, 2010.
  • Gilbert, S.F. and D. Epel, 2009, Ecological Developmental Biology: Integrating Epigenetics, Medicine, and Evolution , Sunderland, MA: Sinauer.
  • Glennan, S. and P. Illari (eds.), 2017, The Routledge Handbook of the Philosophy of Mechanisms and Mechanical Philosophy , New York: Routledge.
  • Green, S. and R. Batterman, 2017, “Biology meets physics: reductionism and multi-scale modeling of morphogenesis”, Studies in History and Philosophy of Biological and Biomedical Sciences , 61: 20–34.
  • Greene, E., 1989, “A diet-induced developmental polymorphism in a caterpillar”, Science , 243: 643–646.
  • Griesemer, J.R., 1996, “Periodization and models in historical biology”, in New Perspectives on the History of Life , M.T. Ghiselin, and G. Pinna (eds.), 19–30. San Francisco: California Academy of Sciences.
  • Griffiths, P. and K. Stotz, 2013, Genetics and Philosophy: An Introduction , New York: Cambridge University Press.
  • Gu, Z., L. David, D. Petrov, T. Jones, R.W. Davis, and L.M. Steinmetz, 2005, “Elevated evolutionary rates in the laboratory strain of Saccharomyces cerevisiae ”, Proceedings of the National Academy of Sciences USA , 102: 1092–1097.
  • Gulledge, A.T., and Y. Kawaguchi, 2007, “Phasic cholinergic signaling in the hippocampus: functional homology with the neocortex?” Hippocampus , 17: 327–332.
  • Hall, B.K. (ed.), 1994, Homology: The Hierarchical Basis of Comparative Biology , San Diego: Academic Press.
  • Hall, L.R., 2005, “Exploratory experiments”, Philosophy of Science , 72: 888–899.
  • Hamburger, V., 1988, The Heritage of Experimental Embryology: Hans Spemann and the Organizer , New York: Oxford University Press.
  • Hamburger, V., and H.L. Hamilton, 1951, “A series of normal stages in the development of the chick embryo”, Journal of Morphology , 88: 49–92.
  • Harvey, R.P., 2002, “Patterning the vertebrate heart”, Nature Reviews Genetics , 3: 544–556.
  • Hoffman, B.D., C. Grashoff, and M.A. Schwartz, 2011, “Dynamic molecular processes mediate cellular mechanotransduction”, Nature , 475: 316–323.
  • Hopwood, N., 1999, “‘Giving body’ to embryos: modeling, mechanism, and the microtome in late nineteenth-century anatomy”, Isis , 90: 462–496.
  • –––, 2000, “Producing development: the anatomy of human embryos and the norms of Wilhelm His”, Bulletin of the History of Medicine , 74: 29–79.
  • –––, 2005, “Visual standards and disciplinary change: normal plates, tables and stages in embryology”, History of Science , 43: 239–303.
  • –––, 2007, “A history of normal plates, tables and stages in vertebrate embryology”, International Journal of Developmental Biology , 51: 1–26.
  • –––, 2019, “Inclusion and exclusion in the history of developmental biology”, Development , 146(7): dev175448. doi:10.1242/dev.175448
  • Horder, T.J., J.A. Witkowski, and C.C. Wylie (eds), 1986, A History of Embryology , Cambridge: Cambridge University Press.
  • Hove, J.R., R.W. Köster, A.S. Forouhar, G. Acevedo-Bolton, S.E. Fraser, and M. Gharib, 2003, “Intracardiac fluid forces are an essential epigenetic factor for embryonic cardiogenesis”, Nature , 421: 172–177.
  • Huang, A., C.A. Scougall, J.W. Lowenthal, A.R. Jilbert, and I. Kotlarski, 2001, “Structural and functional homology between duck and chicken interferon-gamma”, Developmental and Comparative Immunology , 25: 55–68.
  • Hüttemann, A. and M.I. Kaiser, 2018, “Potentiality in biology”, in Handbook of Potentiality , K. Engelhardt and M. Quante (eds.), 401–428. Dordrecht: Springer.
  • Illari, P., and J. Williamson, 2012, “What is a mechanism? Thinking about mechanisms across the sciences”, European Journal of the Philosophy of Science , 2: 119–135.
  • Jackson, T.R., H.Y. Kim, U.L. Balakrishnan, C. Stuckenholz and L.A. Davidson, 2017, “Spatiotemporally controlled mechanical cues drive progenitor mesenchymal-to-epithelial transition enabling proper heart formation and function”, Current Biology , 27: 1326–1335.
  • Jones, M.R., 2005, “Idealization and abstraction: a framework”, in Idealization XII: Correcting the Model. Idealization and Abstraction in the Sciences (Poznan Studies in the Philosophy of the Sciences and the Humanities, vol. 86) , M.R. Jones and N. Cartwright (eds.), 173–217. Amsterdam/New York: Rodopi.
  • Kaplan, J.M., 2008, “Phenotypic plasticity and reaction norms”, in Sarkar and Plutynski 2008: 205–222.
  • Keller, E.F., 2002, Making Sense of Life: Explaining Biological Development with Models, Metaphors, and Machines , Cambridge, MA: Harvard University Press.
  • Kimmel, C.B., W.W. Ballard, S.R. Kimmel, B. Ullmann, and T.F. Schilling, 1995, “Stages of embryonic development of the zebrafish”, Developmental dynamics , 203: 253–310.
  • Kingma, E., 2018, “Lady parts: the metaphysics of pregnancy”, Royal Institute of Philosophy Supplement , 82: 165–187.
  • Kirby, M.L., 1999, “Contribution of neural crest to heart and vessel morphology”, in Heart Development , R.P. Harvey and N. Rosenthal (eds.), 179–193. San Diego: Academic Publishers.
  • Kirschner, M.W., and J.C. Gerhart, 2005, The Plausibility of Life: Resolving Darwin’s Dilemma , New Haven and London: Yale University Press.
  • Kitcher, P., 1993, The Advancement of Science: Science Without Legend, Objectivity Without Illusions , New York: Oxford University Press.
  • Laplane, L., 2016, Cancer Stem Cells: Philosophy and Therapies , Cambridge, MA: Harvard University Press.
  • Levin, M., T. Thorlin, K.R. Robinson, T. Nogi, and M. Mercola, 2002, “Asymmetries in H + /K + -ATPase and cell membrane potentials comprise a very early step in left-right patterning”, Cell , 111: 77–89.
  • Love, A.C., 2007, “Functional homology and homology of function: biological concepts and philosophical consequences”, Biology & Philosophy , 22: 691–708.
  • –––, 2008, “Explaining the ontogeny of form: philosophical issues”, in Sarkar and Plutynski 2008: 223–247.
  • –––, 2010, “Idealization in evolutionary developmental investigation: a tension between phenotypic plasticity and normal stages”, Philosophical Transactions of the Royal Society B: Biological Sciences , 365: 679–690.
  • –––, 2012, “Formal and material theories in philosophy of science: A methodological interpretation”, in EPSA Philosophy of Science: Amsterdam 2009 (The European Philosophy of Science Association Proceedings, Vol. 1) , H.W. de Regt, S. Okasha, and S. Hartmann (eds.), 175–185. Berlin: Springer.
  • –––, 2013, “Theory is as theory does: Scientific practice and theory structure in biology”, Biological Theory , 7: 325–337.
  • –––, 2014, “The erotetic organization of development”, in Towards a Theory of Development , A. Minelli, and T. Pradeu (eds.), 33–55. Oxford: Oxford University Press.
  • –––, 2015, “Evolutionary developmental biology: philosophical issues”, in Handbook of Evolutionary Thinking in the Sciences , T. Heams, P. Huneman, L. Lecointre, and M. Silberstein (eds.), 265–283, Berlin: Springer.
  • –––, 2017a, “Developmental mechanisms”, in The Routledge Handbook of the Philosophy of Mechanisms and Mechanical Philosophy , S. Glennan and P. Illari (eds.), 332–347, New York: Routledge.
  • –––, 2017b, “Building integrated explanatory models of complex biological phenomena: from Mill’s methods to a causal mosaic”, in EPSA15 Selected Papers: The 5th conference of the European Philosophy of Science Association , M. Massimi, J.-W. Romeijn and G. Schurz (eds.), 221–232, Cham: Springer International Publishing.
  • Love, A.C., and M. Travisano, 2013, “Microbes modeling ontogeny”, Biology & Philosophy , 28: 161–188.
  • Lowe, J.W.E., 2015, “Managing variation in the investigation of organismal development: problems and opportunities”, History and Philosophy of the Life Sciences , 37: 449–473.
  • –––, 2016, “Normal development and experimental embryology: Edmund Beecher Wilson and Amphioxus ”, Studies in History and Philosophy of Biological and Biomedical Sciences , 57: 44–59.
  • Lynch, V.J., 2009, “Use with caution: developmental systems divergence and potential pitfalls of animal models”, Yale Journal of Biology and Medicine , 82: 53–66.
  • Mabee, P.M., K.L. Olmstead, and C.C. Cubbage, 2000, “An experimental study of intraspecific variation, developmental timing, and heterochrony in fishes”, Evolution , 54: 2091–2106.
  • Maienschein, J., 1991, Transforming Traditions in American Biology, 1880–1915 , Baltimore, MD: The Johns Hopkins University Press.
  • –––, 2000, “Competing epistemologies and developmental biology”, in Biology and Epistemology , R. Creath and J. Maienschein (eds.), 122–137. Cambridge: Cambridge University Press.
  • –––, 2014, Embryos under the Microscope: The Diverging Meanings of Life , Cambridge, MA: Harvard University Press.
  • Maienschein, J., M. Glitz, and G.E. Allen, 2005, Centennial History of the Carnegie Institution of Washington: Volume 5, The Department of Embryology , New York: Cambridge University Press.
  • Manak, J.R., and M.P. Scott, 1994, “A class act: conservation of homeodomain protein functions”, Development , (Supplement): 61–71.
  • McManus, F., 2012, “Development and mechanistic explanation”, Studies in History and Philosophy of Biological and Biomedical Sciences , 43: 532–541.
  • Metscher, B.D., and P.E. Ahlberg, 1999, “Zebrafish in context: uses of a laboratory model in comparative studies”, Developmental Biology , 210: 1–14.
  • Mill, J.S., 1843 [1974], A System of Logic Ratiocinative and Inductive, Being a Connected View of the Principles of Evidence and the Methods of Scientific Investigation (Books I–III) , in The Collected Works of John Stuart Mill, Volume VII , John M. Robson (ed.), Toronto: University of Toronto Press, London: Routledge and Kegan Paul. [ Mill 1843 [1974] available online ]
  • Miller, C.J., and L.A. Davidson, 2013, “The interplay between cell signalling and mechanics in developmental processes”, Nature Reviews Genetics , 14: 733–744.
  • Minelli, A., 2003, The Development of Animal Form: Ontogeny, Morphology, and Evolution , Cambridge: Cambridge University Press.
  • –––, 2011a, “Animal development, an open-ended segment of life”, Biological Theory , 6: 4–15.
  • –––, 2011b, “A principle of developmental inertia”, in Epigenetics: Linking Genotype and Phenotype in Development and Evolution , B. Halgrímsson and B.K. Hall (eds.), 116–133. San Francisco: University of California Press.
  • Minelli, A., C. Brena, G. Deflorian, D. Maruzzo, and G. Fusco, 2006, “From embryo to adult-beyond the conventional periodization of arthropod development”, Development Genes and Evolution , 216: 373–383.
  • Minelli, A. and T. Pradeu (eds.), 2014, Towards a Theory of Development , Oxford: Oxford University Press.
  • Mitchell, S.D., 2002, “Integrative pluralism”, Biology & Philosophy , 17: 55–70.
  • Moczek, A., 2008, “On the origins of novelty in development and evolution”, BioEssays , 30: 432–447.
  • Moczek, A.P., and L.M. Nagy, 2005, “Diverse developmental mechanisms contribute to different levels of diversity in horned beetles”, Evolution & Development , 7: 175–185.
  • Moczek, A.P., S. Sultan, S. Foster, C. Ledón-Rettig, I. Dworkin, H.F. Nijhout, E. Abouheif and D.W. Pfennig, 2011, “The role of developmental plasticity in evolutionary innovation”, Proceedings of the Royal Society of London B: Biological Sciences , 278: 2705–2713.
  • Morgan, T.H., 1923, “The modern theory of genetics and the problem of embryonic development”, Physiological Review , 3: 603–627.
  • –––, 1926, “Genetics and the physiology of development”, American Naturalist , 60: 489–515.
  • –––, 1934, Embryology and Genetics , New York: Columbia University Press.
  • Moss, L., 2002, What Genes Can’t Do , Cambridge, MA: MIT Press, A Bradford Book.
  • Müller, G.B., 2007, “Evo-devo: extending the evolutionary synthesis”, Nature Reviews Genetics , 8: 943–949.
  • Nagel, E., 1961, The Structure of Science: Problems in the Logic of Scientific Explanation , New York: Harcourt, Brace & World, Inc.
  • Neumann-Held, E.M., and C. Rehmann-Sutter (eds.), 2006, Genes in Development: Re-reading the Molecular Paradigm , Durham and London: Duke University Press.
  • Newman, S.A., 2015, “Development and evolution: The physics connection”, in Conceptual Change in Biology: Scientific and Philosophical Perspectives on Evolution and Development , A.C. Love (ed.), 421–440, Berlin: Springer.
  • Newman, S.A., and R. Bhat, 2008, “Dynamical patterning modules: physico-genetic determinants of morphological development and evolution”, Physical Biology , 5: 1–14.
  • Nonaka, S., H. Shiratori, Y. Saijoh, and H. Hamada, 2002, “Determination of left-right patterning of the mouse embryo by artificial nodal flow”, Nature , 418: 96–99.
  • Nüsslein-Volhard, C., 2006, Coming to Life: How Genes Drive Development , Carlsbad, CA: Kales Press.
  • Olby, R.C., 1986, “Structural and dynamical explanations in the world of neglected dimensions”, in Horder, Witkowski, and Wylie 1986: 275–308.
  • Olson, E.N., 2006, “Gene regulatory networks in the evolution and development of the heart”, Science , 313: 1922–1927.
  • O’Malley, M.A., 2007, “Exploratory experimentation and scientific practice: metagenomics and the proteorhodopsin case”, History and Philosophy of the Life Sciences , 29: 337–60.
  • Oppenheimer, J.M., 1967, Essays in the History of Embryology and Biology , Cambridge, MA: MIT Press.
  • Overton, J.A., 2013, “‘Explain’ in scientific discourse”, Synthese , 190: 1383–1405.
  • Owen, R., 1843, Lectures on the Comparative Anatomy and Physiology of the Invertebrate Animals , London: Longman, Brown, Green, and Longmans.
  • Pabst, D.A., 2000, “To bend a dolphin: convergence of force transmission designs in Cetaceans and Scombrid fishes”, American Zoologist , 40: 146–155.
  • Parkkinen, V-P., 2014, “Developmental explanations”, in New Directions in the Philosophy of Science: The Philosophy of Science in a European Perspective, Vol. 5 , M.C. Galavotti, D. Dieks, W.J. Gonzalez, S. Hartmann, T. Uebel, and M. Weber (eds.), Berlin: Springer.
  • Pearson, C., 2018, “How-possibly explanation in biology: lessons from Wilhelm His’s ‘simple experiments’ models”, Philosophy, Theory, and Practice in Biology , 10(4), doi:10.3998/ptpbio.16039257.0010.004
  • Peter, I.S., and E.H. Davidson, 2011, “A gene regulatory network controlling the embryonic specification of endoderm”, Nature , 474: 635–639.
  • Pigliucci, M., 2001, Phenotypic Plasticity: Beyond Nature and Nurture , Baltimore and London: The Johns Hopkins University Press.
  • –––, 2002, “Touchy and bushy: phenotypic plasticity and integration in response to wind stimulation in Arabidopsis thaliana ”, International Journal of Plant Sciences , 163: 399–408.
  • Raff, R.A., 2000, “Evo-Devo: the evolution of a new discipline”, Nature Reviews Genetics , 1: 74–79.
  • Ranganayakulu, G., D.A. Elliott, R.P. Harvey, and E.N. Olson, 1998, “Divergent roles for NK-2 class homeobox genes in cardiogenesis in flies and mice”, Development , 125: 3037–3048.
  • Raya, Á., Y. Kawakami, C. Rodríguez-Esteban, M. Ibañes, D. Rasskin-Gutman, J. Rodríguez-León, D. Büscher, J.A. Feijó, and J.C.I. Belmonte, 2004, “Notch activity acts as a sensor for extracellular calcium during vertebrate left-right determination”, Nature , 427: 121–128.
  • Reed, R.D., P-H. Chen, and H.F Nijhout, 2007, “Cryptic variation in butterfly eyespot development: the importance of sample size in gene expression studies”, Evolution & Development , 9: 2–9.
  • Reiss, J.O., 2003, “Time”, in Keywords and Concepts in Evolutionary Developmental Biology , B.K. Hall, and W.M. Olson (eds.), 359–368. Cambridge, MA: Harvard University Press.
  • Robert, J.S., 2004, Embryology, Epigenesis, and Evolution: Taking Development Seriously , New York: Cambridge University Press.
  • Roe, S.A., 1981, Matter, Life, and Generation: 18th Century Embryology and the Haller-Wolff Debate , Cambridge: Cambridge University Press.
  • Rosenberg, A., 2006, Darwinian Reductionism: Or, How to Stop Worrying and Love Molecular Biology , Chicago: University of Chicago Press.
  • Rubin, G.M., 1988, “ Drosophila melanogaster as an experimental organism”, Science , 240: 1453–1459.
  • Sarkar, S. and A. Plutynski (eds.), 2008, A Companion to the Philosophy of Biology , (Blackwell Companions to Philosophy), Malden, MA: Blackwell Publishers.
  • Savin, T., N.A. Kurpios, A.E. Shyer, P. Florescu, H. Liang, L. Mahadevan, and C. Tabin, 2011, “On the growth and form of the gut”, Nature , 476: 57–62.
  • Scheiner, S.M., 1993, “Genetics and evolution of phenotypic plasticity”, Annual Review of Ecology and Systematics , 24: 35–68.
  • Schulze, J., and E. Schierenberg, 2011, “Evolution of embryonic development in nematodes”, EvoDevo , 2: 18.
  • Sheil, C.A., and E. Greenbaum, 2005, “Reconsideration of skeletal development of Chelydra serpentina (Reptilia: Testudinata: Chelydridae): evidence for intraspecific variation”, Journal of Zoology , 265: 235–267.
  • Sidzinska, M., 2017, “Not one, not two: toward an ontology of pregnancy”, Feminist Philosophy Quarterly , 3(4), Article 2, doi:10.5206/fpq/2017.4.2
  • Slack, J.M.W., 2006, Essential Developmental Biology , 2nd edition, Malden, MA: Blackwell Publishing.
  • –––, 2009, “Emerging market organisms”, Science , 323: 1674–1675.
  • –––, 2013, Essential Developmental Biology , 3rd edition, Chichester: Wiley-Blackwell.
  • Smith, J.E.H. (ed.), 2006, The Problem of Animal Generation in Early Modern Philosophy , New York: Cambridge University Press.
  • –––, 2011, Divine Machines: Leibniz and the Sciences of Life , Princeton, NJ: Princeton University Press.
  • Sober, E., 1988, “Apportioning causal responsibility”, Journal of Philosophy , 85: 303–318.
  • Sommer, R.J., 2009, “The future of evo-devo: model systems and evolutionary theory”, Nature Reviews Genetics , 10: 416–422.
  • Srivastava, D., 2006, “Making or breaking the heart: from lineage determination to morphogenesis”, Cell , 126: 1037–1048.
  • Steel, D.P., 2008, Across the Boundaries: Extrapolation in Biology and Social Science , New York: Oxford University Press.
  • Strevens, M., 2009, Depth: An Account of Scientific Explanation , Cambridge, MA: Harvard University Press.
  • Thompson, D’A.W. 1992 [1942]. On Growth and Form , Complete Revised Edition. New York: Dover Publications, Inc.
  • Varner, V.D., and C.M. Nelson, 2014, “Cellular and physical mechanisms of branching morphogenesis”, Development , 141: 2750–2759.
  • Vázquez-Novelle, M.D., V. Esteban, A. Bueno, and M.P. Sacristán., 2005, “Functional homology among human and fission yeast Cdc14 phosphatases”, Journal of Biological Chemistry , 280: 29144–29150.
  • Von Dassow, M., J. Strother, and L.A. Davidson, 2010, “Surprisingly simple mechanical behavior of a complex embryonic tissue”, PLoS ONE , 5:e15359.
  • Waters, C.K., 2007a, “Causes that make a difference”, Journal of Philosophy , 104: 551–579.
  • –––, 2007b, “The nature and context of exploratory experimentation”, History and Philosophy of the Life Sciences , 29: 275–284.
  • Weber, M., 2005, Philosophy of Experimental Biology , New York: Cambridge University Press.
  • Weisberg, M., 2007, “Three kinds of idealization”, Journal of Philosophy , 104: 639–659.
  • West-Eberhard, M.J., 2003, Developmental Plasticity and Evolution , New York: Oxford University Press.
  • Wimsatt, W.C., 1980, “Reductionistic research strategies and their biases in the units of selection controversy”, in Scientific Discovery: Case Studies , T. Nickles (ed.), 213–259. Dordrecht: D. Reidel Publishing Company.
  • Wolpert, L., R. Beddington, J. Brockes, T. Jessell, P.A. Lawrence, and E.M. Meyerowitz, 1998, Principles of Development , New York: Oxford University Press.
  • Wolpert, L., C. Tickle, T. Jessell, P. Lawrence, E. Meyerowitz, E. Robertson, and J. Smith, 2010, Principles of Development , 4th ed. New York and Oxford: Oxford University Press.
  • Woodward, J., 2003, Making Things Happen: A Theory of Causal Explanation , New York: Oxford University Press.
  • Wouters, A., 2003, “Four notions of biological function”, Studies in the History and Philosophy of Biological and Biomedical Sciences , 34: 633–668.
  • –––, 2005, “The function debate in philosophy”, Acta Biotheoretica , 53: 123–151.
  • Wozniak, M., and C.S. Chen, 2009, “Mechanotransduction in development: a growing role for contractility”, Nature Reviews Molecular Cell Biology , 10: 34–43.
  • Yoshida, Y., forthcoming, “Multiple-models juxtaposition and trade-offs among modeling desiderata”, Philosophy of Science .
  • Figure 1 : “Preformation”, drawn by Nicolaas Hartsoeker ( Essai de Dioptrique , 1694). Licensed under Public domain via Wikimedia Commons. http://commons.wikimedia.org/wiki/File:Preformation.GIF
  • Figure 2 : “Spiral cleavage in gastropod Trochus ” by Morgan Q. Goulding, 2009, “Cell Lineage of the Ilyanassa Embryo: Evolutionary Acceleration of Regional Differentiation during Early Development“, PLoS ONE , 4(5): e5506, doi:10.1371/journal.pone.0005506, Figure 1 TIFF. Licensed under Creative Commons Attribution 2.5 via Wikimedia Commons. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0005506 OR http://commons.wikimedia.org/wiki/File:Spiral_cleavage_in_Trochus.png
  • Figure 3 : “Hematopoiesis simple” by Mikael Häggström (no attribution required), from original by A. Rad (requires attribution) - Image:Hematopoiesis_(human)_diagram.png by A. Rad. Licensed under Creative Commons Attribution-Share Alike 3.0 via Wikimedia Commons. http://commons.wikimedia.org/wiki/File:Hematopoiesis_simple.svg
  • Figure 4 : “Wingless and Hedgehog reciprocal signaling during segmentation of Drosophila embryos” by Fred the Oyster (requires attribution). Licensed under Creative Commons Attribution-Share Alike 4.0 via Wikimedia Commons. https://upload.wikimedia.org/wikipedia/commons/7/7c/Wingless_and_Hedgehog_reciprocal_signaling_during_segmentation_of_Drosophila_embryos.svg
  • Figure 5 : “ Drosophila melanogaster —side (aka)” by André Karwath aka Aka - Own work. Licensed under Creative Commons Attribution-Share Alike 2.5 via Wikimedia Commons: http://commons.wikimedia.org/wiki/File:Drosophila_melanogaster_-_side_(aka).jpg
  • Figure 6 : “Homology” from George John Romanes, 1892 [1910], Darwin and after Darwin , (fourth edition), Chicago: The Open Court Publishing Company, Figure 5, Chapter 3, p. 56, “Wings of Reptile, Mammal, and Bird. Drawn from nature (Brit. Mus.)”. http://www.talkorigins.org/faqs/precursors/images/homology.jpg . Licensed under Public domain via Wikimedia Commons - http://commons.wikimedia.org/wiki/File:Homology.jpg
  • Figure 7 : “ Biston betularia caterpillars on birch (left) and willow (right)” by Mohamed A.F. Noor, Robin S. Parnell, Bruce S. Grant, 2008, “A Reversible Color Polyphenism in American Peppered Moth ( Biston betularia cognataria ) Caterpillars”, PLoS ONE , 3(9): e3142. doi:10.1371/journal.pone.0003142. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0003142 Licensed under Creative Commons Attribution 2.5 via Wikimedia Commons - http://commons.wikimedia.org/wiki/File:Biston_betularia.png

Experimental evolution articles from across Nature Portfolio

Experimental evolution is the use of laboratory or controlled field manipulations to investigate evolutionary processes. It usually relies on organisms with rapid generation times and small physical size, often microbes, so that evolutionary phenomena that would unfold too slowly to observe in large multicellular organisms can be followed directly.
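
To make the generation-time point concrete, here is a minimal sketch assuming a simple Wright-Fisher model of a beneficial mutant spreading through an asexual microbial population. The population size, fitness advantage, and starting frequency are illustrative assumptions, not values from any particular experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

def wright_fisher(pop_size, fitness_advantage, start_freq, generations):
    """Track a beneficial mutant's frequency under selection plus drift."""
    freq = start_freq
    trajectory = [freq]
    for _ in range(generations):
        # Selection: relative fitness 1 + s for the mutant, 1 for the wild type.
        expected = freq * (1 + fitness_advantage) / (1 + freq * fitness_advantage)
        # Drift: binomial resampling of a finite population.
        freq = rng.binomial(pop_size, expected) / pop_size
        trajectory.append(freq)
    return trajectory

# Illustrative parameters: a 5% fitness advantage starting at 1% frequency.
traj = wright_fisher(pop_size=10_000, fitness_advantage=0.05,
                     start_freq=0.01, generations=500)
print(f"mutant frequency after 500 generations: {traj[-1]:.2f}")
```

Five hundred generations take a few months in a daily serial-transfer microbial experiment but would span millennia in large, slow-reproducing organisms, which is the core rationale for using microbes.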

Latest Research and Reviews


Enhanced metabolic entanglement emerges during the evolution of an interkingdom microbial community

Here, the authors investigate the mechanisms behind mutualism with an engineered microbial community, finding that repeated indirect selection for enhanced metabolic dependencies may be a common factor in the evolution of mutualistic communities.

  • Giovanni Scarinci
  • Jan-Luca Ariens
  • Victor Sourjik


Continuously fluctuating selection reveals fine granularity of adaptation

Natural environmental and ecological shifts impose sufficiently strong selection to drive exceptionally rapid, parallel and fluctuating adaptive tracking in a Drosophila melanogaster mesocosm.

  • M. C. Bitter
  • D. A. Petrov


Measuring the burden of hundreds of BioBricks defines an evolutionary limit on constructability in synthetic biology

Engineered DNA will slow the growth of a host cell if it redirects limiting resources or otherwise interferes with homeostasis. Here the authors measured how 301 BioBrick plasmids affected Escherichia coli growth and found that 19.6% were burdensome, primarily because they depleted the limited gene expression resources of host cells.

  • Genevieve A. Mortensen
  • Jeffrey E. Barrick


Environmental memory alters the fitness effects of adaptive mutations in fluctuating environments

Experimental evolution of barcoded yeast lineages in static and fluctuating conditions, combined with mathematical modelling, shows that environmental fluctuations have non-additive effects on fitness.

  • Clare I. Abreu
  • Shaili Mathur
  • Dmitri A. Petrov


Introducing carbon assimilation in yeasts using photosynthetic directed endosymbiosis

Transforming model heterotrophs into autotrophs is usually accomplished by engineering one carbon assimilation pathway and/or employing laboratory evolution. Here, the authors report the engineering of cyanobacterial endosymbionts in yeasts to achieve photosynthetic growth, carbon assimilation and natural products production.

  • Yang-le Gao
  • Jason E. Cournoyer
  • Angad P. Mehta


Cellular adaptation to cancer therapy along a resistance continuum

Tumour cells adapt to anticancer drug treatments by a series of cellular state transitions, each inducing distinct gene expression programmes and leading to increased drug resistance.

  • Gustavo S. França
  • Maayan Baron


News and Comment


Autotrophic yeast

Yeast is a widely used cell factory for the conversion of sugar into fuels, chemicals and pharmaceuticals. Engineering yeast to be autotrophic would enable it to grow solely on CO2 and light, making it a broader platform for the transition to a sustainable society.

  • Jens Nielsen


Multicellularity drives ecological diversity in a long-term evolution experiment

Long-term experimental evolution in brewer’s yeast reveals how the transition to simple multicellularity can drive ecological divergence and maintain diversity.


Fitness effects of mutations throughout evolution

A study in Science uses bacteria from the Long-Term Evolution Experiment to report on how fitness effects of mutations change through evolution.


Toxin rescue by a random sequence

A random sequence variant in an experimental screen can rescue Escherichia coli from the deleterious effects of an RNase toxin by interacting with chaperones.

  • Klara Hlouchova


Spatial sorting creates winners and losers

A catastrophic flooding event offered an unusual chance to demonstrate that spatial sorting — the differential dispersal of phenotypes — occurs in soapberry bugs as they recolonize after a major disturbance. But this process does not always prove to be adaptive.

  • Swanne P. Gordon
  • Caleb J. Axelrod


Smooth functional landscapes in microcosms

Inspired by systems biology, a statistical model now shows that low-order ecological interactions — which are inferable from relatively limited species-presence datasets — can successfully predict functional performance across synthetic microcosms.

  • Daniel R. Amor




What is biology?

Biology is a branch of science that deals with living organisms and their vital processes. Biology encompasses diverse fields, including botany, conservation, ecology, evolution, genetics, marine biology, medicine, microbiology, molecular biology, physiology, and zoology.

Why is biology important?

As a field of science, biology helps us understand the living world and the ways its many species (including humans) function, evolve, and interact. Advances in medicine, agriculture, biotechnology, and many other areas of biology have brought improvements in the quality of life. Fields such as genetics and evolution give insight into the past and can help shape the future, and research in ecology and conservation informs how we can protect this planet’s precious biodiversity.

Where do biology graduates work?

Biology graduates can hold a wide range of jobs, some of which may require additional education. A person with a degree in biology could work in agriculture, health care, biotechnology, education, environmental conservation, research, forensic science, policy, science communication, and many other areas.


biology, study of living things and their vital processes. The field deals with all the physicochemical aspects of life. The modern tendency toward cross-disciplinary research and the unification of scientific knowledge and investigation from different fields has resulted in significant overlap of the field of biology with other scientific disciplines. Modern principles of other fields—chemistry, medicine, and physics, for example—are integrated with those of biology in areas such as biochemistry, biomedicine, and biophysics.

Biology is subdivided into separate branches for convenience of study, though all the subdivisions are interrelated by basic principles. Thus, while it is customary to separate the study of plants (botany) from that of animals (zoology), and the study of the structure of organisms (morphology) from that of function (physiology), all living things share certain biological phenomena—for example, various means of reproduction, cell division, and the transmission of genetic material.

Biology is often approached on the basis of levels that deal with fundamental units of life. At the level of molecular biology, for example, life is regarded as a manifestation of chemical and energy transformations that occur among the many chemical constituents that compose an organism. As a result of the development of increasingly powerful and precise laboratory instruments and techniques, it is possible to understand and define with high precision and accuracy not only the ultimate physicochemical organization (ultrastructure) of the molecules in living matter but also the way living matter reproduces at the molecular level. Especially crucial to those advances was the rise of genomics in the late 20th and early 21st centuries.

Cell biology is the study of cells—the fundamental units of structure and function in living organisms. Cells were first observed in the 17th century, when the compound microscope was invented. Before that time, the individual organism was studied as a whole in a field known as organismic biology; that area of research remains an important component of the biological sciences. Population biology deals with groups or populations of organisms that inhabit a given area or region. Included at that level are studies of the roles that specific kinds of plants and animals play in the complex and self-perpetuating interrelationships that exist between the living and the nonliving world, as well as studies of the built-in controls that maintain those relationships naturally. Those broadly based levels—molecules, cells, whole organisms, and populations—may be further subdivided for study, giving rise to specializations such as morphology, taxonomy, biophysics, biochemistry, genetics, epigenetics, and ecology. A field of biology may be especially concerned with the investigation of one kind of living thing—for example, the study of birds in ornithology, the study of fishes in ichthyology, or the study of microorganisms in microbiology.

Basic concepts of biology

Biological principles.


The concept of homeostasis—that living things maintain a constant internal environment—was first suggested in the 19th century by French physiologist Claude Bernard, who stated that “all the vital mechanisms, varied as they are, have only one object: that of preserving constant the conditions of life.”

As originally conceived by Bernard, homeostasis applied to the struggle of a single organism to survive. The concept was later extended to include any biological system from the cell to the entire biosphere, all the areas of Earth inhabited by living things.


All living organisms, regardless of their uniqueness, have certain biological, chemical, and physical characteristics in common. All, for example, are composed of basic units known as cells and of the same chemical substances, which, when analyzed, exhibit noteworthy similarities, even in such disparate organisms as bacteria and humans. Furthermore, since the action of any organism is determined by the manner in which its cells interact and since all cells interact in much the same way, the basic functioning of all organisms is also similar.

There is not only unity of basic living substance and functioning but also unity of origin of all living things. According to a theory proposed in 1855 by German pathologist Rudolf Virchow, “all living cells arise from pre-existing living cells.” That theory appears to be true for all living things at the present time under existing environmental conditions. If, however, life originated on Earth more than once in the past, the fact that all organisms have a sameness of basic structure, composition, and function would seem to indicate that only one original type succeeded.

A common origin of life would explain why in humans or bacteria—and in all forms of life in between—the same chemical substance, deoxyribonucleic acid (DNA), in the form of genes accounts for the ability of all living matter to replicate itself exactly and to transmit genetic information from parent to offspring. Furthermore, the mechanisms for that transmittal follow a pattern that is the same in all organisms.

Whenever a change in a gene (a mutation) occurs, there is a change of some kind in the organism that contains the gene. It is this universal phenomenon that gives rise to the differences (variations) in populations of organisms from which nature selects for survival those that are best able to cope with changing conditions in the environment.

Exp Biol Med (Maywood), 243(3), February 2018

Biomarker definitions and their applications

Robert M. Califf

1 School of Medicine, Duke University, Durham, NC 27710, USA

2 Verily Life Sciences (Alphabet), South San Francisco, CA 94043, USA

3 Department of Medicine, Stanford University, Stanford, CA 94305, USA

Short abstract

Biomarkers are critical to the rational development of medical therapeutics, but significant confusion persists regarding fundamental definitions and concepts involved in their use in research and clinical practice, particularly in the fields of chronic disease and nutrition. Clarification of the definitions of different biomarkers and a better understanding of their appropriate application could result in substantial benefits. This review examines biomarker definitions recently established by the U.S. Food and Drug Administration and the National Institutes of Health as part of their joint Biomarkers, EndpointS, and other Tools (BEST) resource. These definitions are placed in the context of their respective uses in patient care, clinical research, or therapeutic development. We explore the distinctions between biomarkers and clinical outcome assessments and discuss the specific definitions and applications of diagnostic, monitoring, pharmacodynamic/response, predictive, prognostic, safety, and susceptibility/risk biomarkers. We also explore the implications of current biomarker development trends, including complex composite biomarkers and digital biomarkers derived from sensors and mobile technologies. Finally, we discuss the challenges and potential benefits of biomarker-driven predictive toxicology and systems pharmacology, the need to ensure quality and reproducibility of the science underlying biomarker development, and the importance of fostering collaboration across the entire ecosystem of medical product development.

Impact statement

Biomarkers are critical to the rational development of medical diagnostics and therapeutics, but significant confusion persists regarding fundamental definitions and concepts involved in their use in research and clinical practice. Clarification of the definitions of different biomarker classes and a better understanding of their appropriate application could yield substantial benefits. Biomarker definitions recently established in a joint FDA-NIH resource place different classes of biomarkers in the context of their respective uses in patient care, clinical research, or therapeutic development. Complex composite biomarkers and digital biomarkers derived from sensors and mobile technologies, together with biomarker-driven predictive toxicology and systems pharmacology, are reshaping development of diagnostic and therapeutic technologies. An approach to biomarker development that prioritizes the quality and reproducibility of the science underlying biomarker development and incorporates collaborative regulatory science involving multiple disciplines will lead to rational, evidence-based biomarker development that keeps pace with scientific and clinical need.

Introduction

Biomarkers are critical to the rational development of drugs and medical devices. 1 But despite their tremendous value, there is significant confusion about the fundamental definitions and concepts involved in their use in research and clinical practice. Further, the complexity of biomarkers has been identified as a limitation to understanding chronic disease and nutrition. 2

Several years ago, this issue came to a head. At a joint leadership conference of the U.S. Food and Drug Administration (FDA) and the National Institutes of Health (NIH), it became apparent that leaders from each federal agency had differing impressions about the appropriate definitions of biomarkers in different contexts of use. A joint task force was therefore formed to forge common definitions and to make them publicly available through a continuously updated online document: the “Biomarkers, EndpointS, and other Tools” (BEST) resource. 3

The importance of well-understood definitions and a shared understanding of how to apply them should not be underestimated. Science has produced a surfeit of associations between biological measurements and models of disease at the subcellular, cellular, organ, biological system, and intact organism levels. This steadily increasing ability to make measurements in model systems, animals, and humans has led to an avalanche of potential biomarkers for states of disease and wellness, extending beyond pure research into medical product development, clinical practice, nutrition, and environmental policy development. But at the same time, the potential for much more acute biological measurement has been blunted by confusion about definitions that is slowing or even stalling progress toward development of useful diagnostic and therapeutic technologies.

The concept behind BEST is that improving our collective ability to match a biomarker with its appropriate purpose will enable greater speed, efficiency, and precision in the development of useful diagnostic and therapeutic technologies and strategies, as well as benefitting the development and implementation of public health policies. When scientific resources are devoted to developing a biomarker application that does not meet criteria for regulatory approval, reimbursement, or clinical use, the financial and human investments are wasted. Even in early translational research, mistaken concepts about future use can lead to an unfortunate diversion of funding and scientific effort toward biomarker development programs that are destined to yield inaccurate estimates of effects on animal or human health.

In this review, these definitions will be reviewed and placed into context. Examples from the field of cardiovascular disease will be used because of the author’s specific experience in this field, although the concepts are applicable to all areas of human and veterinary medicine. The review does not go into detail about the validation process, which is covered elsewhere. However, it is worth noting that the process of validation requires the specific and interdependent steps of analytical validation, qualification using an evidentiary assessment, and utilization (Figure 1). 2 These steps are specific to each condition of use for the biomarker.

Figure 1. Steps in the evaluation framework for biomarkers. Adapted from: Institute of Medicine. Evaluation of biomarkers and surrogate endpoints in chronic disease. Summary. Washington, DC: National Academies Press, 2010.

Biomarkers, clinical outcome assessments, and endpoints

The basic definition of a biomarker is deceptively simple: “A defined characteristic that is measured as an indicator of normal biological processes, pathogenic processes or responses to an exposure or intervention.” 3 This broad definition encompasses responses to therapeutic interventions, and a biomarker may be derived from molecular, histologic, radiographic, or physiologic characteristics. For the sake of clarity, biomarkers should be distinguished from direct measures of how a person feels, functions, or survives—a category of measure known as a clinical outcome assessment (COA). This difference between biomarkers and COAs is important, because COAs measure outcomes that are directly important to the patients and can be used to meet standards for regulatory approval of therapeutics, whereas biomarkers serve a variety of purposes, one of which is to link a measurement to a prediction of COAs. Only when a biomarker is validated can it serve as the primary basis for regulatory approval for marketing, except in circumstances where no effective therapy is available. In such situations, the biomarker may be used to support approval under one of several accelerated approval pathways 4 as deemed appropriate by FDA reviewers.

Biomarkers and COAs take on additional complexity—and corresponding need for scientific rigor—when used as endpoints in clinical studies. An endpoint is a precisely defined variable intended to reflect an outcome of interest that is analyzed using statistics to address a particular research question. 3 Although a biomarker or COA may be discussed in a more general sense, when either is used as an endpoint, a degree of rigor that includes multiple dimensions is required. What is the precise definition, and what are the steps that will be used to measure the endpoint of interest? When will the measurement(s) occur, and how will multiple measurements in the same individual be handled in the analysis? Thus, the investigation of a biomarker can posit a less specific construct for the general development of scientific and technological concepts, but clinical study endpoints must be precisely defined to yield reliable and reproducible results.
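
The dimensions of rigor listed above can be made concrete as a data structure. The sketch below is purely illustrative; the class and field names are hypothetical, not part of the BEST resource, and the example endpoint is invented.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EndpointSpec:
    """Dimensions that must be pinned down before a biomarker or COA
    can serve as a clinical study endpoint (hypothetical schema)."""
    variable: str            # precise definition of what is measured
    measurement_method: str  # assay or instrument, including units
    timing: str              # when the measurement(s) occur
    repeated_measures: str   # how multiple measurements per person are handled
    analysis: str            # pre-specified statistical comparison

# A hypothetical LDL cholesterol endpoint specification:
ldl_endpoint = EndpointSpec(
    variable="LDL cholesterol (mg/dL)",
    measurement_method="direct assay, central laboratory",
    timing="baseline and week 12",
    repeated_measures="single week-12 value per participant",
    analysis="ANCOVA on week-12 LDL adjusted for baseline, two-sided alpha 0.05",
)
print(ldl_endpoint.variable)
```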

Biomarker definitions

A number of biomarker subtypes have been defined according to their putative applications. A single biomarker may meet criteria for several different uses, but evidence must be developed for each one. Thus, while definitions may overlap, they also have clear distinguishing features that specify particular uses.

Diagnostic biomarkers

A diagnostic biomarker detects or confirms the presence of a disease or condition of interest, or identifies an individual with a subtype of the disease. 3 As we move into the era of precision medicine, this type of biomarker will evolve considerably. Such biomarkers may be used not only to identify people with a disease, but to redefine the classification of the disease. For example, the detection of cancer is moving rapidly toward a molecular and imaging-based classification rather than a largely organ-based classification scheme.

Given a diagnostic biomarker that can be measured with sufficient precision and reliability with a delineated context of use, the assessment of that biomarker remains complex. One goal is to define a method for validation that assures that the biomarker can be measured reliably, precisely, and repeatably at a low cost. All too often, assays are not validated, engendering misleading assumptions about the biomarker’s value. The complexity of validation can be seen in the use of troponin, clearly an important biomarker for the diagnosis of acute myocardial infarction. The operating characteristics of the many assays for troponin vary considerably, especially at the lower limit threshold, where misclassification can lead to a major difference in medical care. Furthermore, while the advent of high-sensitivity troponin assays has opened many avenues for sophisticated diagnosis of small episodes of myocardial necrosis, it has created further confusion in the field. When small elevations of troponin occur at previously undetectable levels, the clinical consequences are unclear. We can expect that as measurement methods continue to improve, the understanding of the value of individual diagnostic biomarkers will likewise evolve.
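
The practical consequence of assay imprecision near a decision threshold can be illustrated with a small simulation. All numbers below (the cutoff, the concentration distribution, and the assay coefficients of variation) are made-up illustrations, not the operating characteristics of any real troponin assay.

```python
import numpy as np

rng = np.random.default_rng(7)

cutoff = 14.0  # ng/L, an illustrative decision threshold
true_values = rng.lognormal(mean=np.log(12.0), sigma=0.4, size=100_000)

for cv in (0.10, 0.30):  # assay coefficient of variation at the low end
    # Multiplicative measurement noise around the true concentration.
    measured = true_values * rng.normal(1.0, cv, true_values.size)
    misclassified = (measured >= cutoff) != (true_values >= cutoff)
    print(f"assay CV {cv:.0%}: {misclassified.mean():.1%} of results fall "
          f"on the wrong side of the {cutoff} ng/L cutoff")
```

The less precise assay flips many more results across the cutoff, which is exactly where medical care diverges.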

If a diagnostic biomarker moves beyond a general application, such as advancing scientific concepts, to specific use in prospective research or clinical practice, close attention must be paid to the context of use. A diagnostic biomarker may be useful in one set of clinical circumstances but completely misleading in another context. For example: in low-prevalence diseases such as pancreatic or ovarian cancer for which a new diagnosis is psychologically devastating or would require invasive evaluation, a biomarker must have a very low false-positive rate. On the other hand, in screening for common diseases such as hypertension or hyperlipidemia for which repeated assessments can be done with little risk, higher false-positive rates are tolerable and the focus of concern may be on false-negative rates.

The use of receiver-operating characteristic curves has enabled a rational process of diagnostic biomarker evaluation to proceed. 5 A common problem, however, is the absence of a historical standard for defining the presence or absence of the disease or condition. Furthermore, decision thresholds and clinical utility are becoming important measures for assessing the value of biomarkers for clinical application. In the future, proof that a biomarker adds information about diagnosis may be necessary but not sufficient. Rather, the key question will be whether the additional information is substantial enough to lead to a change in clinical decision-making. Statistics for evaluating this issue, such as the net reclassification index, are evolving. 6 Researchers involved in early preclinical biomarker research would be well served to understand how the biomarker will eventually be evaluated, just as those doing early drug development should have the ultimate use in humans in view. 7
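
As a concrete illustration of receiver-operating characteristic evaluation, the sketch below simulates a hypothetical diagnostic biomarker that is shifted upward in disease, then computes sensitivity and specificity at candidate thresholds and a rank-based AUC (the probability that a randomly chosen case scores above a randomly chosen control). The data and distributions are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic biomarker: shifted upward in 500 diseased vs 500 healthy people.
healthy = rng.normal(1.0, 0.5, 500)
diseased = rng.normal(1.6, 0.5, 500)
marker = np.concatenate([healthy, diseased])
status = np.concatenate([np.zeros(500), np.ones(500)])

def auc(marker, status):
    """Rank-based AUC: P(randomly chosen case scores above a random control)."""
    cases, controls = marker[status == 1], marker[status == 0]
    diff = cases[:, None] - controls[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

print(f"AUC = {auc(marker, status):.3f}")
for t in (1.0, 1.3, 1.6):  # candidate decision thresholds
    sens = (diseased >= t).mean()
    spec = (healthy < t).mean()
    print(f"threshold {t:.1f}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```

Note that sensitivity and specificity alone do not settle the context-of-use question: at a disease prevalence of 1%, even a test with 90% sensitivity and 90% specificity yields roughly ten false positives for every true positive.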

Monitoring biomarkers

When a biomarker is measured serially to assess the status of a disease or medical condition, to detect evidence of exposure to a medical product or environmental agent, or to detect an effect of a medical product or biological agent, it is a monitoring biomarker. Monitoring is a broad concept, so there is overlap with other categories of biomarkers, as described below.

Monitoring biomarkers have important applications in clinical care. When blood pressure is treated or low-density lipoprotein (LDL) cholesterol-lowering drugs are used, blood pressure or LDL cholesterol levels are monitored. Similarly, when HIV infection is treated, CD4 counts are monitored. But while the general concept of monitoring for clinical purposes is intuitive, arriving at a more refined understanding of what changes in the biomarker should signal a particular change in clinical course and decision-making (e.g. more testing or intervention) is complex and often less precise than is desirable.

For example, target measurements for hemoglobin (Hb)A1C, 8 blood pressure, 9 and LDL cholesterol 10 remain controversial despite these being among our most well-studied and accepted biomarkers. Similarly, we often lack sufficient empirical confirmation of the most helpful interval between measurements or the duration of the clinical course during which measurements should be made. Many biomarkers routinely used in clinical practice have very imprecise operating characteristics, so that they are used in a clinical “gestalt” along with the phrase “clinical judgment is needed.” Yet the specifics of clinical parameters that should go into a good clinical judgment are unspecified.

When medical products are developed, changes in biomarkers are routinely used to make decisions about whether key thresholds have been reached, allowing developers to conclude that the therapy affected a biological target enough to merit continued development of the product. Most initial biomarkers used for this purpose measure effect on the assumed target of the intervention, so that changes in the biomarker indicate target engagement and related activity. As discussed below, the ability to measure off-target effects on biological systems will increasingly bring panels of biomarkers and systems measurement into play to evaluate intermediate findings in medical product development.

Monitoring biomarkers are also important in ensuring the safety of human research participants. For example, the safety threshold for drugs with possible liver toxicity is monitored through serial measurement of liver function tests, and cardiovascular events are measured through the use of serial troponins.

Monitoring biomarkers are also useful for measuring pharmacodynamic effects, to detect early evidence of a therapeutic response, and to detect complications of a disease or therapy. International normalized ratio (INR) is a classical pharmacodynamic measure used to titrate the dose of warfarin anticoagulation. Similarly, when blood pressure is treated, a reduction in the measure of blood pressure provides evidence that the therapy is working.

One of the more interesting aspects of monitoring biomarkers is the almost unalterable belief held by many researchers and clinicians that changes in biomarker measurements give the best measure of the likely outcome for a patient or population. However, in many circumstances the actual measure, not the change, is the best predictor of outcome, even if the change is the best way to monitor whether the therapy itself is having an effect. For example, an angiotensin-converting enzyme (ACE) inhibitor may cause an elevation of serum creatinine and/or potassium, and this provides a measure of drug effect. However, the risk to the patient or research participant is primarily determined by the actual creatinine or potassium level, not the change in levels.
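
A small simulation can make the level-versus-change point concrete. In this hypothetical ACE-inhibitor scenario (all parameters invented), the drug raises creatinine, so the change confirms a pharmacodynamic effect, but adverse-event risk is driven entirely by the achieved level.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5_000

baseline = rng.normal(1.0, 0.25, n).clip(min=0.4)  # mg/dL, illustrative
change = rng.normal(0.2, 0.15, n)                  # drug-induced rise
level = baseline + change
risk = 1 / (1 + np.exp(-4 * (level - 1.6)))        # risk depends on level only
events = rng.random(n) < risk

def auc(x, y):
    """Rank-based AUC of predictor x for binary outcome y."""
    cases, controls = x[y == 1], x[y == 0]
    d = cases[:, None] - controls[None, :]
    return (d > 0).mean() + 0.5 * (d == 0).mean()

print(f"AUC of achieved level:       {auc(level, events):.2f}")
print(f"AUC of change from baseline: {auc(change, events):.2f}")
```

Both predictors beat chance because the change contributes to the level, but the achieved level discriminates far better, echoing the creatinine example above.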

Pharmacodynamic/response biomarkers

When the level of a biomarker changes in response to exposure to a medical product or an environmental agent, it can be called a pharmacodynamic/response biomarker. This type of biomarker is extraordinarily useful in both clinical practice and early therapeutic development. If one is treating hypertension or diabetes and no reduction in blood pressure or glucose occurs with a therapy, there is good reason to eschew that intervention and pursue another. Similarly, a candidate drug that does not alter the key biomarker for its target condition in phase 1 trials would hardly be worth pursuing. A special circumstance is phase 1 studies of normal individuals. It would be unexpected for a disease-related biomarker (for example, blood pressure) to show a major change in persons with normal baseline values. In this circumstance, the main focus is on developing preliminary evidence that the drug will be safe to use in individuals with the target disease. For many drugs, dosing is determined by the measured change in a pharmacodynamic/response biomarker when a therapy is given.

However, the interpretation of pharmacodynamic/response biomarkers is not always simple or straightforward. In the case of ACE inhibitors, the initial view was that acute titration of dose in the intensive care unit could guide dosing in heart failure patients. And indeed, it was possible to see major differences in the responsiveness of different patients to the same dose. But unfortunately these acute responses did not adequately predict long-term responses. It is therefore critically important to validate that the measured change in the pharmacodynamic/response biomarker provides a reliable signal for the expected therapeutic response.

Another complex problem arises when easily measurable biomarkers do not reflect true pharmacodynamic responses. With intravenous fibrinolytic agents, serum pharmacokinetics do not reflect the activity of the agent in the thrombus. Similarly, amiodarone is heavily deposited in fat and therefore has a much longer duration of activity than simple measurement of serum levels would predict.

Predictive biomarkers

A predictive biomarker is defined by the finding that the presence of or a change in the biomarker predicts that an individual or group of individuals is more likely to experience a favorable or unfavorable effect from the exposure to a medical product or environmental agent. 3 Proving that a biomarker is useful for this purpose requires a rigorous approach to clinical studies. Ideally, patients with or without the biomarker are randomized to one of two or more treatments (or a placebo comparator) and differences in outcome as a function of treatment are significantly related to the difference in presence, absence, or level of the biomarker. Proof of a reliable predictive biomarker thus represents a “high hurdle” to clear.

Predictive biomarkers are important for enrichment strategies 11 , 12 in the design and conduct of clinical trials. Especially in the pre-registration phase of development, focusing enrollment on participants with elevated levels of a predictive biomarker enables a clearer signal that the treatment actually has an effect by enrolling people in whom the treatment is likely to “work.” Using predictive biomarkers for enrichment is a more targeted approach than using prognostic biomarkers, which can be used to increase event rates but not to select specific patients who are more likely to respond or not respond to therapy.
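
The enrichment logic can be seen in a toy simulation. Assume, purely for illustration, a treatment that halves a 30% event risk but only in biomarker-positive participants, who make up a quarter of the unselected population.

```python
import numpy as np

rng = np.random.default_rng(2)

def trial_risk_difference(n, frac_positive, enrich):
    """One simulated 1:1 trial; returns the treated-minus-control event rate."""
    positive = rng.random(n) < (1.0 if enrich else frac_positive)
    treated = rng.random(n) < 0.5
    # Treatment halves risk only in biomarker-positive participants.
    risk = np.where(treated & positive, 0.15, 0.30)
    events = rng.random(n) < risk
    return events[treated].mean() - events[~treated].mean()

for enrich in (False, True):
    diffs = [trial_risk_difference(1000, 0.25, enrich) for _ in range(500)]
    label = "biomarker-positive only" if enrich else "unselected population"
    print(f"{label}: mean risk difference {np.mean(diffs):+.3f}")
```

Enrolling only biomarker-positive participants roughly quadruples the apparent risk difference in this toy setup, so a much smaller trial can detect the effect; a prognostic biomarker, by contrast, would raise event rates in both arms without sharpening the treatment contrast.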

The same thinking underlies much of the current consensus about treatment choice in clinical practice. Antihypertensive medications are prescribed for patients with elevated blood pressure; blood transfusion is used in people with anemia measured by low Hb levels; acute reperfusion is indicated in patients with ST-segment elevation on an electrocardiogram—all of these are examples of biomarkers that differentially select patients likely to respond to therapy. Similarly, populations at increased risk due to high levels of predictive biomarkers are identified as needing additional intervention in population health strategies. For example, patients with high levels of HbA1C have the most to gain from aggressive therapies to treat diabetes. In addition, a major growth area in predictive biomarkers is the development of genetic and genomic markers for precision medicine, as in the case of cancer patients whose tumors test positive for the HER2 receptor and who are therefore more likely to respond to treatment with trastuzumab (Herceptin).

The biomarker-guided use of LDL cholesterol-lowering drugs offers an excellent example of the complexity of these issues. LDL cholesterol is clearly a susceptibility/risk biomarker and a prognostic biomarker. Patients with elevated LDL cholesterol are at increased risk both of developing atherosclerosis and of experiencing an event such as death, stroke, or myocardial infarction once disease has been diagnosed. Statins, the selective cholesterol absorption inhibitor ezetimibe, and PCSK9 inhibitors all lower LDL cholesterol levels and reduce mortality and critical clinical events such as stroke. However, in multiple clinical trials cumulatively enrolling more than 100,000 patients, the relative effect on event reduction is similar across all levels of LDL cholesterol, including levels well within the normal range. 13 Therefore, in clinical trials, the event reduction is a function of the overall relative risk reduction and the absolute risk of an event, which is determined not only by LDL cholesterol levels, but also by multiple factors, including age, smoking status, diabetes, and blood pressure. Environmental exposures have similar characteristics. Individuals and subpopulations may have particular risks associated with specific biomarkers such that preventive measures are most likely to be useful in people with elevated levels of those biomarkers.

Prognostic biomarkers

A prognostic biomarker is used to identify the likelihood of a clinical event, disease recurrence, or disease progression in patients with a disease or medical condition of interest. Although this distinction is not uniformly accepted, the BEST working groups concluded that prognostic biomarkers should be differentiated from susceptibility/risk biomarkers, which concern the transition from a healthy state to disease. Furthermore, they are distinguished from predictive biomarkers, which identify factors associated with the effect of an intervention or exposure.

In clinical trials, prognostic biomarkers are routinely used to set trial entry and exclusion criteria to identify higher-risk populations. The key issue is that the statistical power of a trial is determined by the number of events rather than the sample size. When trials are enriched in this manner, the event rates are increased; if the treatment is effective, the differences in outcomes as a function of treatment are magnified quantitatively but not qualitatively. In addition, prognostic biomarkers are especially important for predicting the risk of an event or poor outcome in an individual. This information is key to decisions about length of stay in hospital and/or in intensive care units. Yet another major use of prognostic biomarkers is for resource allocation in population health: by stratifying the risk for both negative clinical and financial outcomes, a healthcare organization can distinguish which patients could benefit from more intensive evaluation while allowing others to avoid unnecessary additional diagnostic tests or medical interventions.
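
The events-not-sample-size point can be quantified with Schoenfeld's classic back-of-the-envelope approximation for a 1:1 randomized trial analyzed with a log-rank test; the hazard ratios below are chosen for illustration.

```python
from math import ceil, log
from statistics import NormalDist

def required_events(hazard_ratio, alpha=0.05, power=0.80):
    """Schoenfeld approximation: number of *events* needed for a two-sided
    log-rank test in a 1:1 randomized trial (independent of enrollment)."""
    z = NormalDist().inv_cdf
    return ceil(4 * (z(1 - alpha / 2) + z(power)) ** 2 / log(hazard_ratio) ** 2)

for hr in (0.75, 0.60, 0.50):
    print(f"HR {hr}: about {required_events(hr)} events needed")
```

Because the requirement is on events (roughly 380 for a hazard ratio of 0.75 at 80% power under this approximation), enriching enrollment with a prognostic biomarker that raises the event rate lets a trial reach that count with fewer participants or shorter follow-up.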

Safety biomarkers

A safety biomarker is measured before or after an exposure to a medical intervention or environmental agent to indicate the likelihood, presence, or extent of a toxicity as an adverse event. For many therapies, monitoring for hepatic, renal, or cardiovascular toxicity is critical to assuring that a given therapy can be safely sustained.

Safety biomarkers are useful for identifying patients who are experiencing adverse effects from a treatment. When antiarrhythmic drugs are prescribed, prolongation of the QT interval on the electrocardiogram is used as a safety biomarker because it predicts the risk of developing the lethal arrhythmia torsades de pointes and can be used to identify patients in need of countermeasures for effective therapy. Similarly, safety biomarkers can be used to monitor a population for exposure to an environmental risk or to monitor a population after an exposure.

An interesting aspect of developing safety biomarkers is the balance that should be sought between safety and the potential benefits of therapy. Returning to the example of QT interval monitoring: the effect such monitoring has had on drug development has been a topic of frequent discussion and controversy. It is possible that a drug whose benefits outweighed its risks has been missed because development was stopped when QT interval prolongation was detected. The Cardiac Safety Research Consortium, which includes representatives from the FDA, industry, and academia, is working on strategies for establishing an optimal balance between the ability to measure risk through early biomarker detection and the potential for benefit. 14

Susceptibility/risk biomarkers

A biomarker that indicates the potential for developing a disease or medical condition in an individual who does not currently have clinically apparent disease or the medical condition is classified as a susceptibility/risk biomarker. The concept is similar to prognostic biomarkers, except that the key issue is the association with the development of a disease rather than prognosis after one already has the diagnosis. These types of biomarkers are foundational for the conduct of epidemiological studies about risk of disease.

Prognostic versus predictive biomarkers

The distinction between prognostic and predictive biomarkers is critically important when assessing likely disease outcomes with treatment. Prognostic biomarkers are associated with differential disease outcomes, whereas predictive biomarkers discriminate those who will respond to a therapy from those who will not. For example, ST-segment deviation on the electrocardiogram is a prognostic biomarker, but the direction of the ST-segment change is a crucial predictive biomarker: ST-segment elevation predicts response to fibrinolytic therapy, whereas ST-segment depression predicts a lack of response. The issue is easiest to visualize in an “all-or-nothing” response scenario in which the treatment effect is clearly different depending on the level of the biomarker. However, in many cases the response is graded (a spectrum of responses), probabilistic (the treatment is effective in only some patients, with the biomarker shifting the probability of response), or both.

Surrogate endpoints

The single most common and serious error in the evaluation of biomarkers is the assumption that a correlation between the measured level of a biomarker and a clinical outcome means that the biomarker constitutes a valid surrogate. In fact, for a biomarker to qualify as a surrogate, it must not only be correlated with the outcome; the change in the biomarker must “explain” the change in the clinical outcome. The term “explain” invokes statistical inference, which can be made with confidence only if the observation holds across multiple therapies that all change the biomarker. This high bar means that the overwhelming majority of biomarkers are not valid surrogates; further, even when a surrogate is validated, that validation pertains only to a specific context of use.

The classic work of Fleming and DeMets 15 and Prentice 16 clearly delineates the reasons that “a correlation does not a surrogate make” ( Figure 2 ). Biological pathways and therapeutic effects are multifaceted and redundant. This means that a therapy can change an outcome without affecting the putative surrogate, it can change the putative surrogate without changing the clinical outcome, or it can change both to a variable degree. Among many excellent examples: high-density lipoprotein (HDL) cholesterol is a notably excellent prognostic and susceptibility biomarker, but when employed as a surrogate, it has failed multiple times across many classes of drugs. People with low levels of HDL are susceptible to developing atherosclerosis and thus are more likely to have poor outcomes, but drugs that raise levels of HDL cholesterol have had either no effect or detrimental effects on clinical outcomes.

Figure 2. Reasons for failure of surrogate endpoints. (a) The disease affects the putative surrogate endpoint and the true clinical outcome via different mechanisms, so that any correlation between the two is not causal. (b) The intervention affects the putative surrogate endpoint, which has some impact on the true clinical outcome; unfortunately, the disease affects the true clinical outcome by other mechanisms, which make the change in the putative surrogate an unreliable measure of change in the true clinical outcome. (c) The intervention affects the putative surrogate endpoint through mechanisms independent of its effect on the true clinical outcome, so the change in the surrogate endpoint is not a reliable measure of the change in the true clinical outcome. (d) All of the above issues are in play. Adapted from: Fleming TR, DeMets DL. Surrogate end points in clinical trials: are we being misled? Ann Intern Med 1996;125:605–13.
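
Scenario (a) in Figure 2 is easy to reproduce in silico. In the hypothetical simulation below, an unmeasured disease severity drives both the biomarker and the clinical outcome, while the treatment acts only on the biomarker pathway; every parameter is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000

# Unmeasured severity drives BOTH the biomarker and the outcome;
# the treatment shifts only the biomarker.
severity = rng.normal(size=n)
treated = rng.random(n) < 0.5

biomarker = severity - 1.0 * treated + rng.normal(0, 0.5, n)
risk = 1 / (1 + np.exp(-(severity - 1.0)))  # outcome depends on severity only
events = rng.random(n) < risk

corr = np.corrcoef(biomarker[~treated], events[~treated])[0, 1]
print(f"biomarker-outcome correlation (control arm): {corr:.2f}")
print(f"mean biomarker, treated vs control: "
      f"{biomarker[treated].mean():+.2f} vs {biomarker[~treated].mean():+.2f}")
print(f"event rate,     treated vs control: "
      f"{events[treated].mean():.3f} vs {events[~treated].mean():.3f}")
```

The treatment moves the biomarker by a full standard deviation yet leaves the event rate untouched: a correlation does not a surrogate make.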

The reason it is so important to get this concept right is that surrogates substitute for clinical outcomes and thus can be used to draw inferences about whether a treatment is clinically beneficial. The FDA has the statutory authority to approve medical products for marketing based on validated biomarkers, but the amount of work required to validate a biomarker is substantial. For each biomarker and endpoint, this means that multiple clinical trials measuring both the outcome and the biomarker must be done to demonstrate that the relationship between the change in biomarker and the change in outcome is generalizable across therapies.

The application of these definitions would require substantial discipline even if the underlying scientific fields were static. However, we are currently witnessing tremendous developments in systems biology. At the same time, continuous progress in our capacity to store, collate, and compute massive amounts of information is fundamentally changing our understanding of both biology and clinical outcomes. Taken together, these developments augur a period of explosive growth and rapid change in the field of biomarkers that will occur in tandem with a blossoming in the fields of clinical pharmacology and toxicology. Some examples of critical trends are given below.

Complex biomarkers

The field of biomarkers has been built on critical measures with profound associations with disease that can be understood in a straightforward paradigm. For instance: LDL cholesterol is associated with the risk of cardiovascular disease and lower LDL cholesterol is better; higher systolic blood pressure is associated with stroke and lower systolic blood pressure is better. However, biological systems are complex and multidimensional. As increasingly sophisticated biological models are developed, it is clear that evaluating one biomarker in the absence of an understanding of others can lead to erroneous conclusions. In addition, measurement of complex, composite biomarkers may enable better predictions because multiple biomarkers each play a small role in the summative outcome of interest.

The effort required to understand a single biomarker becomes many times more complex when the interrelationships of multiple biomarkers are considered. Fortunately, changes in computing and measurements are making such an approach increasingly feasible. Ongoing investigations such as Verily/Alphabet’s Project Baseline 17 and the NIH’s All of Us 18 will produce a vast array of complex biological data, as well as context for how these data relate to more traditional clinical outcomes such as survival, major clinical events, and quality of life. Figure 3 provides a visual representation of why these relationships are so complex and intertwined. 2

Figure 3. Multiple components, biological pathways, and outcomes all contribute to the complexity of using biomarkers and surrogate endpoints in the context of chronic disease. Adapted from: Institute of Medicine. Evaluation of biomarkers and surrogate endpoints in chronic disease. Summary. Washington, DC: National Academies Press, 2010.

Digital biomarkers

One rapidly developing frontier is the field of digital biomarkers. 19 Sensors and personal devices now enable rapid and continuous assimilation of information about a person that provides insight into complex measures such as psychological state, exercise level, cognitive abilities, eating patterns, motion, and tremor. Because these data are in large part derived from new sources including smartphones and wearable electronic devices and facilitated by novel technologies that allow for the streaming and storage of complex data, standards for evaluating these biomarkers are just now developing. Although the Clinical Trials Transformation Initiative has recently published recommendations on standards for quality in the field, 20 a great deal of additional study is needed to link digital phenotypes and endpoints to traditional outcome measures. For example, the 6-min walk test has become a standard method for assessing exercise tolerance, and seated resting systolic blood pressure has become the standard measure for blood pressure assessment. But the relationship between the patient’s activity status and measurements derived from wearable accelerometers, including ones embedded in wristwatches or cellphones, is a work in progress, 21 while sensors and smartphone apps for blood pressure measurement are likewise undergoing evolution. 22 Dealing with missing data, outlier values, and reduction of massive volumes of data into measures that can inform decisions will entail considerable work.
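
A sketch of the data-reduction problem for a wearable stream is below; the wear-time threshold, outlier rule, and synthetic accelerometer counts are all illustrative assumptions rather than any published standard.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)

# Synthetic minute-level accelerometer counts for one week of wear,
# with dropped samples (device off) and occasional artifactual spikes.
idx = pd.date_range("2024-01-01", periods=7 * 24 * 60, freq="min")
counts = pd.Series(rng.gamma(2.0, 15.0, len(idx)), index=idx)
counts.iloc[rng.choice(len(idx), 2000, replace=False)] = np.nan  # non-wear gaps
counts.iloc[rng.choice(len(idx), 20, replace=False)] *= 50       # artifacts

def daily_summary(counts, wear_threshold=0.7):
    """Reduce a raw stream to daily measures, handling gaps and outliers."""
    clipped = counts.clip(upper=counts.quantile(0.999))  # tame artifact spikes
    daily = clipped.resample("D").agg(["mean", "max", "count"])
    # Discard days with too little wear time to be trustworthy.
    daily["wear_fraction"] = daily["count"] / (24 * 60)
    return daily[daily["wear_fraction"] >= wear_threshold][["mean", "max"]]

print(daily_summary(counts))
```

Days with too little wear time are dropped rather than imputed, and extreme spikes are clipped; both choices would need validation against ground truth before such summaries could serve as endpoints.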

Ultimately, digital biomarkers are likely to open up entirely new measures of phenomena that are already assessed in practice. For example, total activity over the course of the day, or some composite of peak and continuous activity, may prove a better measure for predicting the onset of new disease (risk/susceptibility biomarker), the prognosis of those who already have a disease (prognostic biomarker), or the response to treatment (response biomarker). Similarly, once very frequent blood pressure measurements are possible, derivative measures from the array of blood pressures and activities will likely be a better indicator of response to therapy for hypertension than seated resting blood pressure measurement.
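
As a rough illustration of the data-reduction problem described above, the following sketch condenses simulated minute-level accelerometer counts, with missing wear time and sensor artifacts, into candidate daily measures such as total and peak activity. All data, thresholds, and measure names here are assumptions for illustration, not validated digital endpoints.

```python
# A hypothetical sketch of reducing raw wearable data into candidate
# digital biomarkers. The data, thresholds, and measure names are
# assumptions for illustration, not validated endpoints.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Simulate one week of minute-level accelerometer counts,
# with missing wear time and occasional sensor artifacts.
idx = pd.date_range("2024-01-01", periods=7 * 24 * 60, freq="min")
counts = pd.Series(rng.poisson(30, size=len(idx)).astype(float), index=idx)
counts[rng.random(len(idx)) < 0.05] = np.nan          # non-wear gaps
counts.iloc[rng.integers(0, len(idx), 20)] = 5000.0   # outlier spikes

# Handle outliers and missing data before summarizing.
cleaned = counts.clip(upper=counts.quantile(0.999))   # cap extreme values
cleaned = cleaned.interpolate(limit=10)               # fill only short gaps

# Reduce to daily candidate measures, e.g. total and peak activity.
daily = cleaned.resample("D").agg(["sum", "max", "count"])
daily.columns = ["total_activity", "peak_activity", "minutes_observed"]
print(daily)
```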

Predictive toxicology and systems pharmacology

For all the reasons described in this section, individual biomarkers cannot be considered the primary goal of biological discovery or therapeutic development. In particular, understanding the effect of an intervention or exposure will follow directly from understanding its complex ramifications in biological systems. 23 For the most part, early evaluation of therapies has involved a relatively static set of assays, yet it is well known that extrapolations from animal models to human biology have often been unreliable. 24 However, dramatic and continuing reductions in the cost of genetic, genomic, and integrated biological measurement 25 and the expansion of computing and analytical power are increasingly conferring the ability to look beyond the specific mechanism of action of a technology. A tremendous amount of validation and confirmation of increasingly complex models will be needed, but ultimately this work should enable much more effective prediction of an intervention’s impact on integrative biology and clinical outcomes.

Quality and reproducibility of the underlying science

The benefit of using a biomarker for a specific purpose is directly related to the quality of the research supporting it. All too often, the basic research underlying the assessment of a biomarker for a specific context of use cannot be reproduced. This lack of reproducibility recently presented a major problem in the regulatory evaluation of a new treatment for Duchenne muscular dystrophy, when assessment of the key biomarker was not performed in a rigorous, blinded fashion. 26 New policies implemented at the NIH are strengthening the commitment to rigorous methodology, transparency, and reproducibility.

The importance of working together across the ecosystem

If we are to make the needed progress in the proper application of these definitions so that medical product development and environmental policy can improve, academics, industry, and trial sponsors must expand their horizons to encompass new methods and approaches. Despite the best efforts of all involved, there is a tendency to keep using the same methods in regulated studies because of a shared comfort with well-worn measures. The discipline of regulatory science is the common ground on which all elements of the ecosystem can come together to advance the field. 27 By continuing to evolve our thinking about biomarkers, endpoints, and other tools in medical product development, we will accelerate our understanding of biological science and improve the efficiency and pace at which effective technologies are developed for the prevention, diagnosis, and treatment of disease.

Biomarkers are critical to the fabric of discovery science, medical product development, and healthcare for the individual and the population. Recent and ongoing explosive growth in measurement, computation, and analysis is producing rapid change in the field. The NIH and FDA have worked together to create a set of definitions that should guide researchers in developing needed evidence and practitioners in applying biomarkers in health care, while other organizations such as the Clinical Trials Transformation Initiative and the Foundation for the National Institutes of Health Biomarkers Consortium are following suit in extending this work. An approach to biomarker development that incorporates collaborative regulatory science involving multiple disciplines is needed to ensure that rational, evidence-based biomarker development keeps pace with scientific and clinical need.

Declaration of conflicting interests

Dr. Califf was the Commissioner of Food and Drugs, US Food and Drug Administration from February 2016 to January 2017. He currently receives consulting payments from Merck and is employed as a scientific advisor by Verily Life Sciences (Alphabet).

The following statements relate to relationships that ended in February 2015, when Dr. Califf was appointed to the FDA as Deputy Commissioner for Medical Products and Tobacco. The current disclosures of note are those listed in the above declaration of conflicting interests. Dr. Califf received research grant funding from the Patient-Centered Outcomes Research Institute, the National Institutes of Health, the US Food and Drug Administration, Amylin, and Eli Lilly and Company; research grants and consulting payments from Bristol-Myers Squibb, Janssen Research and Development, Merck, and Novartis; and consulting payments from Amgen, Bayer Healthcare, BMEB Services, Genentech, GlaxoSmithKline, Heart.org – Daiichi Sankyo, Kowa, Les Laboratoires Servier, Medscape/Heart.org, Regado, and Roche; he also held equity in N30 Pharma and Portola.

What Is a Control Variable? Definition and Examples

A control variable is any factor that is controlled or held constant during an experiment. For this reason, it’s also known as a controlled variable or a constant variable. A single experiment may contain many control variables. Unlike the independent and dependent variables, control variables aren’t a part of the experiment, but they are important because they could affect the outcome. Take a look at the difference between a control variable and a control group, and see examples of control variables.

Importance of Control Variables

Remember, the independent variable is the one you change, the dependent variable is the one you measure in response to this change, and the control variables are any other factors you control or hold constant so that they can’t influence the experiment. Control variables are important because:

  • They make it easier to reproduce the experiment.
  • They increase confidence in the outcome of the experiment.

For example, if you conducted an experiment examining the effect of the color of light on plant growth but didn’t control temperature, temperature might affect the outcome: one light source might be hotter than the other, affecting plant growth. This could lead you to incorrectly accept or reject your hypothesis. As another example, say you did control the temperature. If you did not report this temperature in your “methods” section, another researcher might have trouble reproducing your results. What if you conducted your experiment at 15 °C? Would you expect the same results at 5 °C or 35 °C? Sometimes the potential effect of a control variable can lead to a new experiment!

Sometimes you think you have controlled everything except the independent variable but still get strange results. This could be due to what is called a “confounding variable.” Examples of confounding variables include humidity, magnetism, and vibration. Sometimes you can identify a confounding variable and turn it into a control variable. Other times, confounding variables cannot be detected or controlled. A small simulation makes the problem concrete, as shown below.
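
In the hypothetical sketch below, lamp color has no causal effect on plant growth, yet the two groups differ because color is confounded with temperature. All numbers are invented for illustration.

```python
# A hypothetical simulation of the plant-growth example above:
# lamp color has no causal effect here, but because color is
# confounded with temperature, the groups still differ.
import numpy as np

rng = np.random.default_rng(2)
n = 200

color = np.where(np.arange(n) < n // 2, "red", "blue")
temperature = np.where(color == "red", 25.0, 15.0)  # red lamps run hotter

# Growth truly depends on temperature only.
growth = 0.5 * temperature + rng.normal(scale=1.0, size=n)

print("mean growth under red: ", growth[color == "red"].mean())
print("mean growth under blue:", growth[color == "blue"].mean())
# Holding temperature constant (a control variable) would remove
# this spurious "color effect."
```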

Control Variable vs Control Group

A control group is different from a control variable. You expose a control group to all the same conditions as the experimental group, except you change the independent variable in the experimental group. Both the control group and experimental group should have the same control variables.

Control Variable Examples

Anything you can measure or control that is not the independent variable or dependent variable has the potential to be a control variable. Examples of common control variables include:

  • Duration of the experiment
  • Size and composition of containers
  • Temperature
  • Sample volume
  • Experimental technique
  • Chemical purity or manufacturer
  • Species (in biological experiments)

For example, consider an experiment testing whether a certain supplement affects cattle weight gain. The independent variable is the supplement, while the dependent variable is cattle weight. A typical control group would consist of cattle not given the supplement, while the cattle in the experimental group would receive it. Examples of control variables in this experiment include the age of the cattle, their breed, their sex, the amount of supplement, the way the supplement is administered, how often the supplement is administered, the type of feed given to the cattle, the temperature, the water supply, the time of year, and the method used to record weight. There may be other control variables, too. Sometimes you can’t actually control a control variable, but conditions should be the same for both the control and experimental groups. For example, if the cattle are free-range, weather might change from day to day, but both groups have the same experience. When you take data, be sure to record control variables along with the independent and dependent variables, as sketched below.
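
As a minimal sketch of this record-keeping advice, one might log each observation with its control variables alongside the independent and dependent variables. All field names and values here are hypothetical.

```python
# A minimal sketch of recording control variables alongside the
# independent and dependent variables for the hypothetical cattle
# experiment above. All field names and values are illustrative.
import csv

FIELDS = [
    "animal_id",
    "group",        # independent variable: "supplement" or "control"
    "weight_kg",    # dependent variable
    # control variables, recorded for both groups:
    "breed", "age_months", "sex", "feed_type", "water_source", "date",
]

observations = [
    {"animal_id": 1, "group": "supplement", "weight_kg": 512.0,
     "breed": "Angus", "age_months": 18, "sex": "F",
     "feed_type": "hay", "water_source": "well", "date": "2024-06-01"},
    {"animal_id": 2, "group": "control", "weight_kg": 498.5,
     "breed": "Angus", "age_months": 18, "sex": "F",
     "feed_type": "hay", "water_source": "well", "date": "2024-06-01"},
]

with open("cattle_trial.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(observations)
```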

