Econometrics: Definition, Models, and Methods

Adam Hayes, Ph.D., CFA, is a financial writer with 15+ years of Wall Street experience as a derivatives trader. Besides his extensive derivative trading expertise, Adam is an expert in economics and behavioral finance. Adam received his master's in economics from The New School for Social Research and his Ph.D. in sociology from the University of Wisconsin-Madison. He is a CFA charterholder and holds FINRA Series 7, 55 & 63 licenses. He currently researches and teaches economic sociology and the social studies of finance at the Hebrew University in Jerusalem.


Econometrics is the use of statistical and mathematical models to develop theories or test existing hypotheses in economics and to forecast future trends from historical data. It subjects real-world data to statistical trials and then compares the results against the theory being tested.

Depending on whether you are interested in testing an existing theory or in using existing data to develop a new hypothesis, econometrics can be subdivided into two major categories: theoretical and applied. Those who routinely engage in this practice are commonly known as econometricians.

Key Takeaways

  • Econometrics is the use of statistical methods to develop theories or test existing hypotheses in economics or finance.
  • Econometrics relies on techniques such as regression models and null hypothesis testing.
  • Econometrics can also be used to try to forecast future economic or financial trends.
  • As with other statistical tools, econometricians should be careful not to infer a causal relationship from statistical correlation.
  • Some economists have criticized the field of econometrics for prioritizing statistical models over economic reasoning.


Econometrics analyzes data using statistical methods in order to test or develop economic theory. These methods rely on statistical inference to quantify and analyze economic theories by leveraging tools such as frequency distributions, probability and probability distributions, correlation analysis, simple and multiple regression analysis, simultaneous equations models, and time-series methods.

Econometrics was pioneered by Lawrence Klein, Ragnar Frisch, and Simon Kuznets. All three won the Nobel Prize in economics for their contributions. Today, it is used regularly among academics as well as practitioners such as Wall Street traders and analysts.

An example of the application of econometrics is to study the income effect using observable data. An economist may hypothesize that as a person increases their income, their spending will also increase.

If the data show that such an association is present, a regression analysis can then be conducted to understand the strength of the relationship between income and consumption and whether or not that relationship is statistically significant—that is, it appears to be unlikely that it is due to chance alone.
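The income-consumption example above can be sketched in a few lines of code. Everything here is hypothetical: the data are simulated households with a made-up marginal propensity to consume of 0.6, not real survey data.

```python
import numpy as np

rng = np.random.default_rng(0)
income = rng.uniform(20_000, 120_000, size=200)              # simulated annual incomes
spending = 5_000 + 0.6 * income + rng.normal(0, 4_000, 200)  # "true" slope 0.6, plus noise

# Simple OLS regression: spending = a + b * income
b, a = np.polyfit(income, spending, 1)

# R-squared: the share of spending variation explained by income
pred = a + b * income
r2 = 1 - np.sum((spending - pred) ** 2) / np.sum((spending - spending.mean()) ** 2)
print(f"estimated slope = {b:.3f}, R^2 = {r2:.2f}")
```

An estimated slope near 0.6 simply recovers the relationship built into the simulated data; with real survey data, that slope would be the quantity of interest.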

Methods of Econometrics

The first step in econometric methodology is to obtain and analyze a set of data and define a specific hypothesis that explains its nature and shape. This data may be, for example, the historical prices for a stock index, observations collected from a survey of consumer finances, or unemployment and inflation rates in different countries.

If you are interested in the relationship between the annual price change of the S&P 500 and the unemployment rate, you'd collect both sets of data. Then, you might test the idea that higher unemployment leads to lower stock market prices. In this example, the stock market price would be the dependent variable and the unemployment rate would be the independent, or explanatory, variable.

The most common relationship examined is linear, meaning that any change in the explanatory variable is associated with a proportional change in the dependent variable. This relationship could be explored with a simple regression model, which amounts to generating a best-fit line between the two sets of data and then testing to see how far each data point is, on average, from that line.

Note that you can have several explanatory variables in your analysis—for example, changes to GDP and inflation in addition to unemployment in explaining stock market prices. When more than one explanatory variable is used, it is referred to as multiple linear regression. This is the most commonly used tool in econometrics.
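A sketch of a multiple linear regression along these lines, with simulated data; the variables, coefficients, and magnitudes are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
unemployment = rng.normal(5.0, 1.0, n)   # hypothetical macro series
gdp_growth = rng.normal(2.0, 0.5, n)
inflation = rng.normal(3.0, 1.0, n)
# Simulated "true" relationship for stock market returns, plus noise
returns = 4.0 - 1.5 * unemployment + 2.0 * gdp_growth - 0.5 * inflation + rng.normal(0, 1.0, n)

# Design matrix with an intercept column; solve by least squares
X = np.column_stack([np.ones(n), unemployment, gdp_growth, inflation])
beta, *_ = np.linalg.lstsq(X, returns, rcond=None)
print("estimated coefficients (intercept, unemployment, GDP, inflation):", beta.round(2))
```

Because the data were simulated from known coefficients, the estimates land close to -1.5, 2.0, and -0.5; with real data, those estimates are what the analysis is after.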

Some economists, including John Maynard Keynes, have criticized econometricians for their over-reliance on statistical correlations in lieu of economic thinking.

Different Regression Models

There are several different regression models that are optimized depending on the nature of the data being analyzed and the type of question being asked. The most common example is the ordinary least squares (OLS) regression, which can be conducted on several types of cross-sectional or time-series data. If you're interested in a binary (yes-no) outcome—for instance, how likely you are to be fired from a job based on your productivity—you might use a logistic regression or a probit model. Today, econometricians have hundreds of models at their disposal.
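For the binary-outcome case, a logistic regression can be fit by maximizing the log-likelihood. The sketch below does this with plain gradient ascent on simulated data; the productivity variable, the coefficient of -2, and the sample size are all hypothetical, and real work would use a statistics package rather than a hand-rolled optimizer.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
productivity = rng.normal(0, 1, n)
# Hypothetical "true" model: lower productivity raises the odds of being fired
p_true = 1 / (1 + np.exp(-(-0.5 - 2.0 * productivity)))
fired = rng.binomial(1, p_true)

# Logistic regression via gradient ascent on the average log-likelihood
X = np.column_stack([np.ones(n), productivity])
w = np.zeros(2)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))      # predicted probabilities
    w += 0.1 * X.T @ (fired - p) / n  # gradient step
print("intercept, slope:", w.round(2))
```

The recovered slope is strongly negative, matching the relationship the simulation built in: higher productivity, lower probability of being fired.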

Econometrics is now conducted using statistical analysis software packages designed for these purposes, such as Stata, SPSS, or R. These software packages can also easily test for statistical significance to determine the likelihood that correlations might arise by chance. R-squared, t-tests, p-values, and null-hypothesis testing are all methods used by econometricians to evaluate the validity of their model results.
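These packages report such statistics automatically, but the arithmetic behind a t-test on a regression slope can be sketched by hand. The data below are simulated, and a normal approximation stands in for the exact t distribution the packages use (a reasonable shortcut at this sample size):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(3)
n = 100
x = rng.normal(0, 1, n)
y = 1.0 + 0.8 * x + rng.normal(0, 1, n)  # true slope 0.8, hypothetical

slope, intercept = np.polyfit(x, y, 1)
resid = y - (intercept + slope * x)
s2 = resid @ resid / (n - 2)                          # residual variance
se_slope = np.sqrt(s2 / np.sum((x - x.mean()) ** 2))  # standard error of the slope
t_stat = slope / se_slope                             # test of H0: slope = 0
p_value = 2 * (1 - NormalDist().cdf(abs(t_stat)))     # two-sided p-value
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A tiny p-value here says the fitted slope would be very unlikely if the true slope were zero, i.e., the relationship is statistically significant.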

Limitations of Econometrics

Econometrics is sometimes criticized for relying too heavily on the interpretation of raw data without linking it to established economic theory or looking for causal mechanisms. It is crucial that the findings revealed in the data are able to be adequately explained by a theory, even if that means developing your own theory of the underlying processes.

Regression analysis also does not prove causation, and just because two data sets show an association, it may be spurious. For example, drowning deaths in swimming pools increase with GDP. Does a growing economy cause people to drown? This is unlikely, but perhaps more people buy pools when the economy is booming. Econometrics is largely concerned with correlation analysis, and it is important to remember that correlation does not equal causation.
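This kind of spurious association is easy to reproduce: two series generated completely independently, but both trending upward, will show a high correlation. The series below are pure simulations, not actual GDP or drowning data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
gdp = np.cumsum(rng.normal(0.5, 1.0, n))        # upward-drifting random walk
drownings = np.cumsum(rng.normal(0.5, 1.0, n))  # independently generated, same drift

r = np.corrcoef(gdp, drownings)[0, 1]
print(f"correlation between the two unrelated series = {r:.2f}")
```

The high correlation comes entirely from the shared trend; neither series has any influence on the other.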

What Are Estimators in Econometrics?

An estimator is a statistic that is used to estimate some fact or measurement about a larger population. Estimators are frequently used in situations where it is not practical to measure the entire population. For example, it is not possible to measure the exact unemployment rate at any specific time, but it is possible to estimate it based on a randomly chosen sample of the population.
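A sketch of the idea, with made-up numbers: simulate a population of one million people with a 6% "true" unemployment rate, then estimate that rate from a random sample of 10,000.

```python
import random

random.seed(0)
true_rate = 0.06  # hypothetical population rate
population = [1 if random.random() < true_rate else 0 for _ in range(1_000_000)]

sample = random.sample(population, 10_000)  # survey a random sample instead of everyone
estimate = sum(sample) / len(sample)        # the sample proportion is the estimator
print(f"estimated unemployment rate = {estimate:.3f}")
```

The estimate lands within a fraction of a percentage point of the true rate, despite measuring only 1% of the population.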

What Is Autocorrelation in Econometrics?

Autocorrelation measures the relationships between a single variable at different time periods. For this reason, it is sometimes called lagged correlation or serial correlation, since it is used to measure how the past value of a certain variable might predict future values of the same variable. Autocorrelation is a useful tool for traders, especially in technical analysis.
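A sketch of lag-1 autocorrelation on a simulated AR(1) series (the 0.7 persistence coefficient is arbitrary): the series is correlated with itself shifted back one period.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.7 * x[t - 1] + rng.normal()  # today depends partly on yesterday

# Lag-1 (serial) autocorrelation: correlate x_t with x_{t-1}
acf1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(f"lag-1 autocorrelation = {acf1:.2f}")
```

The estimated autocorrelation sits near the 0.7 persistence built into the simulation, which is exactly the signal a trader would look for in past values predicting future ones.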

What Is Endogeneity in Econometrics?

An endogenous variable is a variable that is influenced by changes in another variable. Due to the complexity of economic systems, it is difficult to determine all of the subtle relationships between different factors, and some variables may be partially endogenous and partially exogenous. In econometric studies, the researchers must be careful to account for the possibility that the error term may be partially correlated with other variables.

The Bottom Line

Econometrics is a popular discipline that integrates statistical tools and modeling for economic data, and it is frequently used by policymakers to forecast the results of policy changes. As with other statistical tools, there are many possibilities for error when econometric tools are used carelessly. Econometricians must be careful to justify their conclusions with sound reasoning as well as statistical inference.

The Nobel Prize. "Simon Kuznets."

The Nobel Prize. "Ragnar Frisch."

The Nobel Prize. "Lawrence R. Klein."



1.3 The Economists’ Tool Kit

Learning Objectives

  • Explain how economists test hypotheses, develop economic theories, and use models in their analyses.
  • Explain how the all-other-things unchanged (ceteris paribus) problem and the fallacy of false cause affect the testing of economic hypotheses and how economists try to overcome these problems.
  • Distinguish between normative and positive statements.

Economics differs from other social sciences because of its emphasis on opportunity cost, the assumption of maximization in terms of one’s own self-interest, and the analysis of choices at the margin. But certainly much of the basic methodology of economics and many of its difficulties are common to every social science—indeed, to every science. This section explores the application of the scientific method to economics.

Researchers often examine relationships between variables. A variable is something whose value can change. By contrast, a constant is something whose value does not change. The speed at which a car is traveling is an example of a variable. The number of minutes in an hour is an example of a constant.

Research is generally conducted within a framework called the scientific method, a systematic set of procedures through which knowledge is created. In the scientific method, hypotheses are suggested and then tested. A hypothesis is an assertion of a relationship between two or more variables that could be proven to be false. A statement is not a hypothesis if no conceivable test could show it to be false. The statement "Plants like sunshine" is not a hypothesis; there is no way to test whether plants like sunshine or not, so it is impossible to prove the statement false. The statement "Increased solar radiation increases the rate of plant growth" is a hypothesis; experiments could be done to show the relationship between solar radiation and plant growth. If solar radiation were shown to be unrelated to plant growth or to retard plant growth, then the hypothesis would be demonstrated to be false.

If a test reveals that a particular hypothesis is false, then the hypothesis is rejected or modified. In the case of the hypothesis about solar radiation and plant growth, we would probably find that more sunlight increases plant growth over some range but that too much can actually retard plant growth. Such results would lead us to modify our hypothesis about the relationship between solar radiation and plant growth.

If the tests of a hypothesis yield results consistent with it, then further tests are conducted. A hypothesis that has not been rejected after widespread testing and that wins general acceptance is commonly called a theory. A theory that has been subjected to even more testing and that has won virtually universal acceptance becomes a law. We will examine two economic laws in the next two chapters.

Even a hypothesis that has achieved the status of a law cannot be proven true. There is always a possibility that someone may find a case that invalidates the hypothesis. That possibility means that nothing in economics, or in any other social science, or in any science, can ever be proven true. We can have great confidence in a particular proposition, but it is always a mistake to assert that it is “proven.”

Models in Economics

All scientific thought involves simplifications of reality. The real world is far too complex for the human mind—or the most powerful computer—to consider. Scientists use models instead. A model is a set of simplifying assumptions about some aspect of the real world. Models are always based on assumed conditions that are simpler than those of the real world, assumptions that are necessarily false. A model of the real world cannot be the real world.

We will encounter our first economic model in Chapter 35 "Appendix A: Graphs in Economics". For that model, we will assume that an economy can produce only two goods. Then we will explore the model of demand and supply. One of the assumptions we will make there is that all the goods produced by firms in a particular market are identical. Of course, real economies and real markets are not that simple. Reality is never as simple as a model; one point of a model is to simplify the world to improve our understanding of it.

Economists often use graphs to represent economic models. The appendix to this chapter provides a quick refresher course, if you think you need one, on understanding, building, and using graphs.

Models in economics also help us to generate hypotheses about the real world. In the next section, we will examine some of the problems we encounter in testing those hypotheses.

Testing Hypotheses in Economics

Here is a hypothesis suggested by the model of demand and supply: an increase in the price of gasoline will reduce the quantity of gasoline consumers demand. How might we test such a hypothesis?

Economists try to test hypotheses such as this one by observing actual behavior and using empirical (that is, real-world) data. The average retail price of gasoline in the United States rose from $2.12 per gallon on May 22, 2005, to $2.88 per gallon on May 22, 2006. The number of gallons of gasoline consumed by U.S. motorists rose 0.3% during that period.

The small increase in the quantity of gasoline consumed by motorists as its price rose is inconsistent with the hypothesis that an increased price will lead to a reduction in the quantity demanded. Does that mean that we should dismiss the original hypothesis? On the contrary, we must be cautious in assessing this evidence. Several problems exist in interpreting any set of economic data. One problem is that several things may be changing at once; another is that the initial event may be unrelated to the event that follows. The next two sections examine these problems in detail.

The All-Other-Things-Unchanged Problem

The hypothesis that an increase in the price of gasoline produces a reduction in the quantity demanded by consumers carries with it the assumption that there are no other changes that might also affect consumer demand. A better statement of the hypothesis would be: An increase in the price of gasoline will reduce the quantity consumers demand, ceteris paribus. Ceteris paribus is a Latin phrase that means “all other things unchanged.”

But things changed between May 2005 and May 2006. Economic activity and incomes rose both in the United States and in many other countries, particularly China, and people with higher incomes are likely to buy more gasoline. Employment rose as well, and people with jobs use more gasoline as they drive to work. Population in the United States grew during the period. In short, many things happened during the period, all of which tended to increase the quantity of gasoline people purchased.

Our observation of the gasoline market between May 2005 and May 2006 did not offer a conclusive test of the hypothesis that an increase in the price of gasoline would lead to a reduction in the quantity demanded by consumers. Other things changed and affected gasoline consumption. Such problems are likely to affect any analysis of economic events. We cannot ask the world to stand still while we conduct experiments in economic phenomena. Economists employ a variety of statistical methods to allow them to isolate the impact of single events such as price changes, but they can never be certain that they have accurately isolated the impact of a single event in a world in which virtually everything is changing all the time.

In laboratory sciences such as chemistry and biology, it is relatively easy to conduct experiments in which only selected things change and all other factors are held constant. The economists’ laboratory is the real world; thus, economists do not generally have the luxury of conducting controlled experiments.

The Fallacy of False Cause

Hypotheses in economics typically specify a relationship in which a change in one variable causes another to change. We call the variable that responds to the change the dependent variable; the variable that induces a change is called the independent variable. Sometimes the fact that two variables move together can suggest the false conclusion that one of the variables has acted as an independent variable that has caused the change we observe in the dependent variable.

Consider the following hypothesis: People wearing shorts cause warm weather. Certainly, we observe that more people wear shorts when the weather is warm. Presumably, though, it is the warm weather that causes people to wear shorts rather than the wearing of shorts that causes warm weather; it would be incorrect to infer from this that people cause warm weather by wearing shorts.

Reaching the incorrect conclusion that one event causes another because the two events tend to occur together is called the fallacy of false cause. The accompanying essay on baldness and heart disease suggests an example of this fallacy.

Because of the danger of the fallacy of false cause, economists use special statistical tests that are designed to determine whether changes in one thing actually do cause changes observed in another. Given the inability to perform controlled experiments, however, these tests do not always offer convincing evidence that persuades all economists that one thing does, in fact, cause changes in another.

In the case of gasoline prices and consumption between May 2005 and May 2006, there is good theoretical reason to believe the price increase should lead to a reduction in the quantity consumers demand. And economists have tested the hypothesis about price and the quantity demanded quite extensively. They have developed elaborate statistical tests aimed at ruling out problems of the fallacy of false cause. While we cannot prove that an increase in price will, ceteris paribus, lead to a reduction in the quantity consumers demand, we can have considerable confidence in the proposition.

Normative and Positive Statements

Two kinds of assertions in economics can be subjected to testing. We have already examined one, the hypothesis. Another testable assertion is a statement of fact, such as "It is raining outside" or "Microsoft is the largest producer of operating systems for personal computers in the world." Like hypotheses, such assertions can be demonstrated to be false. Unlike hypotheses, they can also be shown to be correct. A statement of fact or a hypothesis is a positive statement.

Although people often disagree about positive statements, such disagreements can ultimately be resolved through investigation. There is another category of assertions, however, for which investigation can never resolve differences. A normative statement is one that makes a value judgment. Such a judgment is the opinion of the speaker; no one can “prove” that the statement is or is not correct. Here are some examples of normative statements in economics: “We ought to do more to help the poor.” “People in the United States should save more.” “Corporate profits are too high.” The statements are based on the values of the person who makes them. They cannot be proven false.

Because people have different values, normative statements often provoke disagreement. An economist whose values lead him or her to conclude that we should provide more help for the poor will disagree with one whose values lead to a conclusion that we should not. Because no test exists for these values, these two economists will continue to disagree, unless one persuades the other to adopt a different set of values. Many of the disagreements among economists are based on such differences in values and therefore are unlikely to be resolved.

Key Takeaways

  • Economists try to employ the scientific method in their research.
  • Scientists cannot prove a hypothesis to be true; they can only fail to prove it false.
  • Economists, like other social scientists and scientists, use models to assist them in their analyses.
  • Two problems inherent in tests of hypotheses in economics are the all-other-things-unchanged problem and the fallacy of false cause.
  • Positive statements are factual and can be tested. Normative statements are value judgments that cannot be tested. Many of the disagreements among economists stem from differences in values.

Try It!

Look again at the data in Table 1.1 "LSAT Scores and Undergraduate Majors". Now consider the hypothesis: "Majoring in economics will result in a higher LSAT score." Are the data given consistent with this hypothesis? Do the data prove that this hypothesis is correct? What fallacy might be involved in accepting the hypothesis?

Case in Point: Does Baldness Cause Heart Disease?


A website called embarrassingproblems.com received the following email:

What did Dr. Margaret answer? Most importantly, she did not recommend that the questioner take drugs to treat his baldness, because doctors do not think that the baldness causes the heart disease. A more likely explanation for the association between baldness and heart disease is that both conditions are affected by an underlying factor. While noting that more research needs to be done, one hypothesis that Dr. Margaret offers is that higher testosterone levels might be triggering both the hair loss and the heart disease. The good news for people with early balding (which is really where the association with increased risk of heart disease has been observed) is that they have a signal that might lead them to be checked early on for heart disease.

Source: http://www.embarrassingproblems.com/problems/problempage230701.htm .

Answer to Try It! Problem

The data are consistent with the hypothesis, but it is never possible to prove that a hypothesis is correct. Accepting the hypothesis could involve the fallacy of false cause; students who major in economics may already have the analytical skills needed to do well on the exam.

Principles of Economics Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Annual Review of Economics

Volume 2, 2010. Review Article: Hypothesis Testing in Econometrics

  • Joseph P. Romano 1, Azeem M. Shaikh 2, and Michael Wolf 3
  • Affiliations: 1 Departments of Economics and Statistics, Stanford University, Stanford, California 94305; email: [email protected]; 2 Department of Economics, University of Chicago, Chicago, Illinois 60637; 3 Institute for Empirical Research in Economics, University of Zürich, CH-8006 Zürich, Switzerland
  • Vol. 2:75-104 (Volume publication date September 2010) https://doi.org/10.1146/annurev.economics.102308.124342
  • First published as a Review in Advance on February 9, 2010
  • © Annual Reviews

This article reviews important concepts and methods that are useful for hypothesis testing. First, we discuss the Neyman-Pearson framework. Various approaches to optimality are presented, including finite-sample and large-sample optimality. Then, we summarize some of the most important methods, as well as resampling methodology, which is useful to set critical values. Finally, we consider the problem of multiple testing, which has witnessed a burgeoning literature in recent years. Along the way, we incorporate some examples that are current in the econometrics literature. While many problems with well-known successful solutions are included, we also address open problems that are not easily handled with current technology, stemming from such issues as lack of optimality or poor asymptotic approximations.
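As a concrete illustration of the multiple-testing problem the article surveys, here is the Holm step-down procedure, one standard way to control the familywise error rate across several hypotheses. The p-values are invented for the example.

```python
# Holm step-down procedure: compare the sorted p-values to
# alpha/m, alpha/(m-1), ... and stop at the first failure.
pvalues = [0.001, 0.008, 0.039, 0.041, 0.20]  # hypothetical, already sorted
alpha = 0.05
m = len(pvalues)

rejected = []
for i, p in enumerate(sorted(pvalues)):
    if p <= alpha / (m - i):
        rejected.append(p)
    else:
        break  # step-down: no further rejections
print("rejected p-values:", rejected)
```

Under a naive per-test threshold of 0.05, four of the five hypotheses would be rejected; Holm's correction rejects only the first two, guarding against false discoveries from testing many hypotheses at once.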


ECONOMETRICS HANDBOOK: BASIC DEFINITION OF CONCEPTS, PRINCIPLES AND METHODS

  • November 2023
  • Felix Ijeh, Igbinedion University



Queen Mary University of London

Hypotheses Testing in Econometrics

This course is part of Econometrics for Economists and Finance Practitioners Specialization

Taught in English



Instructor: Dr Leone Leonida


Recommended experience

Intermediate level

Learners must understand basic statistics (mean, variance, skewness, kurtosis). Learners should first complete the Classical Linear Regression Model course.

What you'll learn

How to perform hypothesis testing

How to check that the estimated model is empirically adequate

How to use hypothesis testing for decision making

Skills you'll gain

  • Calculate and perform the t-test
  • Calculate and perform the various diagnostic tests
  • Calculate and perform the F-test
  • Prove the concept of unbiasedness
  • Prove the concept of efficiency


There are 4 modules in this course

In this course, you will learn why it is rational to use the parameters recovered under the Classical Linear Regression Model for hypothesis testing in uncertain contexts. You will:

  • Develop your knowledge of the statistical properties of the OLS estimator as you see whether key assumptions hold.
  • Learn that the OLS estimator has some desirable statistical properties, which are the basis of an approach to hypothesis testing that aids rational decision making.
  • Examine the concepts of the null hypothesis and the alternative hypothesis, before exploring a test statistic, its distribution under the null hypothesis, and a rule for deciding which hypothesis is more likely to hold true.
  • Discover what happens to the decision-making framework if some assumptions of the CLRM are violated, as you explore diagnostic testing.
  • Learn the steps involved in detecting violations, their consequences for the OLS estimator, and the techniques that must be adopted to address these problems.

Before starting this course, it is expected that you have an understanding of some basic statistics, including the mean, variance, skewness, and kurtosis. It is also recommended that you have completed and understood the previous course in this Specialisation: The Classical Linear Regression Model.

By the end of this course, you will be able to:

  • Explain what hypothesis testing is
  • Explain why OLS is a rational approach to hypothesis testing
  • Perform hypothesis testing for single and multiple hypotheses
  • Explain the idea of diagnostic testing
  • Perform hypothesis testing for single and multiple hypotheses with R
  • Identify and resolve problems raised by identification of parameters

Properties of the OLS Approach

This week we are going to look at the properties of the OLS approach as a basis for the hypothesis testing, focussing on linearity, unbiasedness, efficiency and consistency.
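Unbiasedness, one of the properties listed above, can be illustrated with a simulation (all values below are arbitrary): across many repeated samples, the OLS slope estimates average out to the true slope.

```python
import numpy as np

rng = np.random.default_rng(6)
true_slope, n_sims, n = 2.0, 2000, 50
estimates = []
for _ in range(n_sims):
    x = rng.normal(0, 1, n)
    y = 1.0 + true_slope * x + rng.normal(0, 1, n)  # a fresh sample each repetition
    slope, _ = np.polyfit(x, y, 1)                  # OLS slope for this sample
    estimates.append(slope)

mean_estimate = float(np.mean(estimates))
print(f"average of {n_sims} OLS slope estimates: {mean_estimate:.3f} (true slope: {true_slope})")
```

Individual estimates scatter around 2.0 from sample to sample, but their average is very close to it, which is what unbiasedness means.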

What's included

6 videos 4 readings 5 quizzes 3 discussion prompts

6 videos • Total 10 minutes

  • Welcome to Hypotheses Testing in Econometrics • 1 minute • Preview module
  • Properties of the OLS Estimator • 2 minutes
  • Presentation of Linearity • 1 minute
  • Unbiasedness • 1 minute
  • Efficiency • 1 minute
  • Consistency • 1 minute

4 readings • Total 40 minutes

  • Understanding Linearity of the OLS Estimator • 10 minutes
  • Understanding Unbiasedness • 10 minutes
  • Understanding Efficiency • 10 minutes
  • Understanding Consistency • 10 minutes

5 quizzes • Total 140 minutes

  • Knowledge Check: Properties of the OLS Approach • 20 minutes
  • Linearity • 30 minutes
  • Check Your Understanding of Unbiasedness • 30 minutes
  • Check Your Understanding of Efficiency • 30 minutes
  • Check Your Understanding of Consistency • 30 minutes

3 discussion prompts • Total 30 minutes

  • The Importance of Unbiasedness • 10 minutes
  • The Importance of Efficiency • 10 minutes
  • Exploring Consistency • 10 minutes

Hypothesis Testing

This week we shall be exploring hypothesis testing, looking at the t-test and the F-test, and considering the problems raised by hypothesis testing.
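The F-test for a joint hypothesis can be sketched by comparing the restricted and unrestricted residual sums of squares. The data below are simulated and the null (both slopes are zero) is false by construction; the course labs do this in R, but the arithmetic is the same in any language.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.5 * x1 + 0.5 * x2 + rng.normal(size=n)  # hypothetical true model

# Unrestricted model: intercept + both regressors
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
ssr_u = np.sum((y - X @ beta) ** 2)

# Restricted model under H0 (both slopes zero): intercept only
ssr_r = np.sum((y - y.mean()) ** 2)

q, k = 2, 3  # number of restrictions; parameters in the full model
F = ((ssr_r - ssr_u) / q) / (ssr_u / (n - k))
print(f"F statistic = {F:.1f}")  # compare against an F(q, n - k) critical value
```

A large F statistic says the regressors jointly explain far more variation than chance would allow, so the joint null is rejected.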

4 videos 6 readings 7 quizzes 1 discussion prompt 2 ungraded labs

4 videos • Total 19 minutes

  • Hypothesis Testing • 4 minutes • Preview module
  • The t-Test • 4 minutes
  • The F-Test • 5 minutes
  • Type I and Type II Errors • 4 minutes

6 readings • Total 60 minutes

  • Using Hypothesis Testing • 10 minutes
  • Exploring the Test of Significance • 10 minutes
  • Example of the t-Test • 10 minutes
  • Test Joint Hypothesis • 10 minutes
  • An Example • 10 minutes
  • Types of Errors • 10 minutes

7 quizzes • Total 180 minutes

  • Knowledge Check: Hypothesis Testing • 20 minutes
  • Building a Hypothesis • 10 minutes
  • Interpreting t-Tests • 30 minutes
  • Conditions for the F-Test • 30 minutes
  • Differences between t and F-Tests • 30 minutes
  • Non-Nested Models • 30 minutes
  • Check Your Understanding of Hypothesis Testing • 30 minutes

1 discussion prompt • Total 10 minutes

  • The Importance of Hypothesis Testing • 10 minutes

2 ungraded labs • Total 120 minutes

  • Example t-Test with R • 60 minutes
  • Example F-Test with R • 60 minutes

Diagnostic Testing I

This week we shall be discussing diagnostic testing as we look at non-linearity, violation of full rank, and errors correlated with regressors.

4 videos 5 readings 5 quizzes 4 discussion prompts 2 ungraded labs

4 videos • Total 17 minutes

  • Diagnostic Testing • 4 minutes • Preview module
  • Violation of Linearity • 3 minutes
  • Violation of Full Rank • 2 minutes
  • Violation of Regression Model • 5 minutes

5 readings • Total 50 minutes

  • Test for the Violations • 10 minutes
  • Test for the Violation of Linearity • 10 minutes
  • Test for the Violation • 10 minutes
  • Consequences of the Violation • 10 minutes
5 quizzes • Total 140 minutes

  • Knowledge Check: Diagnostic Testing I • 20 minutes
  • Check Understanding of Diagnostic Testing • 30 minutes
  • Solving Violations of Linearity • 30 minutes
  • Solving Collinearity • 30 minutes
  • Solving Endogeneity • 30 minutes

4 discussion prompts • Total 40 minutes

  • The Importance of Studying Violations of Assumptions • 10 minutes
  • How Do You Solve Linearity? • 10 minutes
  • How Do You Solve Collinearity? • 10 minutes
  • How Do You Solve Endogeneity? • 10 minutes

2 ungraded labs • Total 120 minutes

  • Example of a Violation of Linearity with R • 60 minutes
  • Example of Collinearity with R • 60 minutes

Diagnostic Testing II

This week we will continue to look at diagnostic testing as we consider spherical errors, heteroscedasticity, autocorrelation, stochastic regressors, and the non-normality of errors.

2 videos 7 readings 6 quizzes 1 peer review 4 discussion prompts 3 ungraded labs

2 videos • Total 5 minutes

  • Stochastic Regressors • 2 minutes • Preview module
  • Non-Normal Errors • 3 minutes

7 readings • Total 70 minutes

  • Consequences of the Violations • 10 minutes
  • Congratulations • 10 minutes

6 quizzes • Total 170 minutes

  • Understanding Hypothesis Testing • 30 minutes
  • Knowledge Check: Diagnostic Testing II • 20 minutes
  • Understanding Heteroscedasticity • 30 minutes
  • Understanding Autocorrelation • 30 minutes
  • Understanding Stochastic Regressors • 30 minutes
  • Understanding Non-Normal Errors • 30 minutes

1 peer review • Total 120 minutes

  • Hypothesis and Diagnostic Testing • 120 minutes

4 discussion prompts • Total 40 minutes

  • Importance of Testing for Heteroscedasticity • 10 minutes
  • How Do You Solve Autocorrelation? • 10 minutes
  • How Do You Solve Stochastic Regressors? • 10 minutes
  • How Do You Solve Non-Normal Errors? • 10 minutes

3 ungraded labs • Total 240 minutes

  • Example of Heteroscedasticity with R • 60 minutes
  • Example of Autocorrelation with R • 60 minutes
  • Example of Normality with R • 120 minutes

Instructor ratings

We asked all learners to give feedback on our instructors based on the quality of their teaching style.


Queen Mary University of London is a leading research-intensive university with a difference – one that opens the doors of opportunity to anyone with the potential to succeed. Ranked 117th in the world, the University has over 28,000 students and 4,400 members of staff. We are a truly global university: over 160 nationalities are represented on our 5 campuses in London, and we also have a presence in Malta, Paris, Athens, Singapore and China. The reach of our education is extended still further through our online provision.

Recommended if you're interested in Economics


Queen Mary University of London

Topics in Applied Econometrics


The Econometrics of Time Series Data


The Classical Linear Regression Model


Econometrics for Economists and Finance Practitioners

Specialization


Frequently asked questions

When will I have access to the lectures and assignments?

Access to lectures and assignments depends on your type of enrollment. If you take a course in audit mode, you will be able to see most course materials for free. To access graded assignments and to earn a Certificate, you will need to purchase the Certificate experience, during or after your audit. If you don't see the audit option:

The course may not offer an audit option. You can try a Free Trial instead, or apply for Financial Aid.

The course may offer 'Full Course, No Certificate' instead. This option lets you see all course materials, submit required assessments, and get a final grade. This also means that you will not be able to purchase a Certificate experience.

What will I get if I subscribe to this Specialization?

When you enroll in the course, you get access to all of the courses in the Specialization, and you earn a certificate when you complete the work. Your electronic Certificate will be added to your Accomplishments page - from there, you can print your Certificate or add it to your LinkedIn profile. If you only want to read and view the course content, you can audit the course for free.

What is the refund policy?

If you subscribed, you get a 7-day free trial during which you can cancel at no penalty. After that, we don’t give refunds, but you can cancel your subscription at any time. See our full refund policy Opens in a new tab .

Is financial aid available?

Yes. In select learning programs, you can apply for financial aid or a scholarship if you can’t afford the enrollment fee. If financial aid or a scholarship is available for your learning program selection, you’ll find a link to apply on the description page.

More questions


Econometrics: Making Theory Count

Finance & Development

Sam Ouliaris

If economic theory is to be a useful tool for policymaking, it must be quantifiable


Numbers racket (photo: Tom Grill/Corbis)

Economists develop economic models to explain consistently recurring relationships. Their models link one or more economic variables to other economic variables. For example, economists connect the amount individuals spend on consumer goods to disposable income and wealth, and expect consumption to increase as disposable income and wealth increase (that is, the relationship is positive).

There are often competing models capable of explaining the same recurring relationship, called an empirical regularity, but few models provide useful clues to the magnitude of the association. Yet this is what matters most to policymakers. When setting monetary policy , for example, central bankers need to know the likely impact of changes in official interest rates on inflation and the growth rate of the economy. It is in cases like this that economists turn to econometrics.

Econometrics uses economic theory, mathematics, and statistical inference to quantify economic phenomena. In other words, it turns theoretical economic models into useful tools for economic policymaking. The objective of econometrics is to convert qualitative statements (such as “the relationship between two or more variables is positive”) into quantitative statements (such as “consumption expenditure increases by 95 cents for every one dollar increase in disposable income”). Econometricians—practitioners of econometrics—transform models developed by economic theorists into versions that can be estimated. As Stock and Watson (2007) put it, “econometric methods are used in many branches of economics, including finance, labor economics, macroeconomics, microeconomics, and economic policy.” Economic policy decisions are rarely made without econometric analysis to assess their impact.


A daunting task

Certain features of economic data make it challenging for economists to quantify economic models. Unlike researchers in the physical sciences, econometricians are rarely able to conduct controlled experiments in which only one variable is changed and the response of the subject to that change is measured. Instead, econometricians estimate economic relationships using data generated by a complex system of related equations, in which all variables may change at the same time. That raises the question of whether there is even enough information in the data to identify the unknowns in the model.

Econometrics can be divided into theoretical and applied components.

Theoretical econometricians investigate the properties of existing statistical tests and procedures for estimating unknowns in the model. They also seek to develop new statistical procedures that are valid (or robust) despite the peculiarities of economic data—such as their tendency to change simultaneously. Theoretical econometrics relies heavily on mathematics, theoretical statistics, and numerical methods to prove that the new procedures have the ability to draw correct inferences.

Applied econometricians, by contrast, use econometric techniques developed by the theorists to translate qualitative economic statements into quantitative ones. Because applied econometricians are closer to the data, they often run into—and alert their theoretical counterparts to—data attributes that lead to problems with existing estimation techniques. For example, the econometrician might discover that the variance of the data (how much individual values in a series differ from the overall average) is changing over time.

The main tool of econometrics is the linear multiple regression model, which provides a formal approach to estimating how a change in one economic variable, the explanatory variable, affects the variable being explained, the dependent variable—taking into account the impact of all the other determinants of the dependent variable. This qualification is important because a regression seeks to estimate the marginal impact of a particular explanatory variable after taking into account the impact of the other explanatory variables in the model. For example, the model may try to isolate the effect of a 1 percentage point increase in taxes on average household consumption expenditure, holding constant other determinants of consumption, such as pretax income, wealth, and interest rates.
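As an illustrative sketch of such a regression (in Python with simulated data; the variables and coefficients below are hypothetical, not estimates from real households), ordinary least squares recovers the marginal effect of each explanatory variable while holding the others constant:

```python
import numpy as np

# Simulated, hypothetical data: consumption driven by income, wealth,
# and an interest rate, plus unexplained noise.
rng = np.random.default_rng(0)
n = 200
income = rng.normal(50, 10, n)
wealth = rng.normal(100, 25, n)
rate = rng.normal(3, 1, n)
consumption = (5 + 0.8 * income + 0.05 * wealth
               - 0.4 * rate + rng.normal(0, 2, n))

# OLS: regress consumption on an intercept and the three explanatory variables.
X = np.column_stack([np.ones(n), income, wealth, rate])
beta_hat, *_ = np.linalg.lstsq(X, consumption, rcond=None)

# beta_hat[1] estimates the marginal effect of income on consumption,
# holding wealth and the interest rate constant.
print(beta_hat)
```

Because the design matrix includes all three regressors at once, each estimated coefficient is the partial effect described in the text, not a simple bivariate correlation.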

Stages of development

The methodology of econometrics is fairly straightforward.

The first step is to suggest a theory or hypothesis to explain the data being examined. The explanatory variables in the model are specified, and the sign and/or magnitude of the relationship between each explanatory variable and the dependent variable are clearly stated. At this stage of the analysis, applied econometricians rely heavily on economic theory to formulate the hypothesis. For example, a tenet of international economics is that prices across open borders move together after allowing for nominal exchange rate movements (purchasing power parity). The empirical relationship between domestic prices and foreign prices (adjusted for nominal exchange rate movements) should be positive, and they should move together approximately one for one.

The second step is the specification of a statistical model that captures the essence of the theory the economist is testing. The model proposes a specific mathematical relationship between the dependent variable and the explanatory variables—on which, unfortunately, economic theory is usually silent. By far the most common approach is to assume linearity—meaning that any change in an explanatory variable will always produce the same change in the dependent variable (that is, a straight-line relationship).

Because it is impossible to account for every influence on the dependent variable, a catchall variable is added to the statistical model to complete its specification. The role of the catchall is to represent all the determinants of the dependent variable that cannot be accounted for—because of either the complexity of the data or its absence. Economists usually assume that this “error” term averages to zero and is unpredictable, simply to be consistent with the premise that the statistical model accounts for all the important explanatory variables.

The third step involves using an appropriate statistical procedure and an econometric software package to estimate the unknown parameters (coefficients) of the model using economic data. This is often the easiest part of the analysis thanks to readily available economic data and excellent econometric software. Still, the famous GIGO (garbage in, garbage out) principle of computing also applies to econometrics. Just because something can be computed doesn’t mean it makes economic sense to do so.

The fourth step is by far the most important: administering the smell test. Does the estimated model make economic sense—that is, yield meaningful economic predictions? For example, are the signs of the estimated parameters that connect the dependent variable to the explanatory variables consistent with the predictions of the underlying economic theory? (In the household consumption example, for instance, the validity of the statistical model would be in question if it predicted a decline in consumer spending when income increased). If the estimated parameters do not make sense, how should the econometrician change the statistical model to yield sensible estimates? And does a more sensible estimate imply an economically significant effect? This step, in particular, calls on and tests the applied econometrician’s skill and experience.

Testing the hypothesis

The main tool of the fourth stage is hypothesis testing, a formal statistical procedure during which the researcher makes a specific statement about the true value of an economic parameter, and a statistical test determines whether the estimated parameter is consistent with that hypothesis. If it is not, the researcher must either reject the hypothesis or make new specifications in the statistical model and start over.
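A minimal sketch of this procedure in Python (simulated data; the model and numbers are illustrative): state the null hypothesis that a slope equals zero, estimate the slope, and compare the resulting t-statistic with a large-sample critical value:

```python
import numpy as np

# Simulated data with a true slope of 0.5; the null hypothesis says it is 0.
rng = np.random.default_rng(1)
n = 400
x = rng.normal(0, 1, n)
y = 2.0 + 0.5 * x + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
residuals = y - X @ beta_hat

s2 = residuals @ residuals / (n - 2)        # estimated error variance
var_beta = s2 * np.linalg.inv(X.T @ X)      # covariance matrix of estimates
t_stat = beta_hat[1] / np.sqrt(var_beta[1, 1])

# |t| above roughly 1.96 rejects the null at the 5% level in large samples;
# here the true slope is nonzero, so the test should reject.
print(t_stat, abs(t_stat) > 1.96)
```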

If all four stages proceed well, the result is a tool that can be used to assess the empirical validity of an abstract economic model. The empirical model may also be used to construct a way to forecast the dependent variable, potentially helping policymakers make decisions about changes in monetary and/or fiscal policy to keep the economy on an even keel.

Students of econometrics are often fascinated by the ability of linear multiple regression to estimate economic relationships. Three fundamentals of econometrics are worth remembering.

• First, the quality of the parameter estimates depends on the validity of the underlying economic model.

• Second, if a relevant explanatory variable is excluded, the most likely outcome is poor parameter estimates.

• Third, even if the econometrician identifies the process that actually generated the data, the parameter estimates have only a slim chance of being exactly equal to the actual parameter values that generated the data. Nevertheless, the estimates are still used because, statistically speaking, they become more precise as more data become available.
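The third point can be seen in a small simulation (illustrative, simulated data): the estimate almost never equals the true parameter, but it converges toward it as the sample grows:

```python
import numpy as np

# The true slope is 0.9. Each estimate misses it, but by less and less
# as the sample size n increases (consistency).
rng = np.random.default_rng(2)

def slope_estimate(n):
    x = rng.normal(0, 1, n)
    y = 0.9 * x + rng.normal(0, 1, n)
    return (x @ y) / (x @ x)   # OLS slope in a no-intercept model

for n in (10, 1000, 100000):
    print(n, slope_estimate(n))
```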

Econometrics, by design, can yield correct predictions on average, but only with the help of sound economics to guide the specification of the empirical model. Even though it is a science, with well-established rules and procedures for fitting models to economic data, in practice econometrics is an art that requires considerable judgment to obtain estimates useful for policymaking.

Sam Ouliaris is a Senior Economist in the IMF Institute.


What Is Econometrics?

An area of economics where statistical and mathematical methods are used to analyze economic data

Econometrics is an area of economics where statistical and mathematical methods are used to analyze economic data. Individuals who are involved with econometrics are referred to as econometricians.

Econometricians test economic theories and hypotheses by using statistical tools such as probability, statistical inference, regression analysis , frequency distributions, and more. After testing economic theories, econometricians can compare the results with real data and observations, which can be helpful in forecasting future economic trends.


The purpose of econometrics is to use statistical modeling and analysis to transform qualitative economic concepts into quantitative information that individuals can use. For example, policymakers can use the information to create new fiscal and monetary policies to stimulate the economy.

Suppose that policymakers are creating a new policy to increase the number of jobs in order to improve the unemployment rate and boost the economy. Econometricians use statistical models to test whether this hypothesis holds.

The following steps are the methodology of econometrics:

  • Econometricians who are examining a dataset will suggest a theory or hypothesis to explain the data. At this stage, econometricians would define variables found in the economic model and the relationship between different variables. In order to come up with a hypothesis to explain the relationships, econometricians would look at existing economic theories.
  • The second stage is to define a statistical model to quantify the economic theory that is being analyzed in the first step.
  • In the third stage, statistical procedures are used to estimate the unknown parameters of the statistical model. Econometricians typically use econometric software to assist with this step.
  • Hypothesis testing is done to determine whether the hypothesis should be rejected. If it is rejected, the econometrician should come up with new specifications in the statistical model. The purpose of doing so is to assess the validity of the economic model.

There are various approaches to econometrics, and it is not limited to the methodology described above. Other methodologies include the vector autoregression approach and the Cowles Commission approach.

In the past, econometricians have studied patterns and relationships between different economic concepts, including:

  • Income and expenditure
  • Production, supply, and cost
  • Labor and capital
  • Salary and productivity

Econometrics can be separated into two main categories: applied and theoretical. The main goal of an applied econometrician is to turn qualitative economic statements into quantitative ones.

Applied econometrics refers to the idea of how economic data and theories are used to draw conclusions to improve decision-making and assist in solving economic issues. Its purpose is to enable the government, policymakers, businesses, and financial institutions to gain insight into possible solutions that can be used to solve economic problems. In order to do so, applied econometricians would analyze economic metrics, try to find out if there are any statistical trends, and predict what the outcome would be for an economic issue.

For example, suppose an applied econometrician is comparing household income with inflation rates and concludes that there is a relationship between the two. As a result, the government can use the research from econometricians to impose changes to policies that can increase household income during times of inflation .

Theoretical econometrics is about analyzing existing statistical procedures used to estimate unknown parameters in economic data and to detect anomalies. Besides analyzing current statistical procedures, theoretical econometricians also develop new statistical procedures and methodologies in order to explain anomalies found in economic data.

As a result, theoretical econometricians depend on mathematical techniques and statistical theories to ensure that the new procedures that they develop can successfully generate correct economic conclusions.

CFI is the official provider of the Capital Markets & Securities Analyst (CMSA®)  certification program, designed to help anyone become a world-class financial analyst. To keep advancing your career, the additional CFI resources below will be useful:

  • Demographics
  • Economic Indicators
  • Keynesian Economic Theory
  • Quantitative Analysis
  • See all economics resources


Introductory Econometrics

Chapter 17: Joint Hypothesis Testing

Chapter 16 shows how to test a hypothesis about a single slope parameter in a regression equation. This chapter explains how to test hypotheses about more than one of the parameters in a multiple regression model. Simultaneous multiple parameter hypothesis testing generally requires constructing a test statistic that measures the difference in fit between two versions of the same model.

An Example of a Test Involving More than One Parameter

One of the central tasks in economics is explaining savings behavior. National savings rates vary considerably across countries, and the United States has been at the low end in recent decades. Most studies of savings behavior by economists look at strictly economic determinants of savings. Differences in national savings rates, however, seem to reflect more than just differences in the economic environment. In a study of individual savings behavior, Carroll et al. (1999) examined the hypothesis that cultural factors play a role. Specifically, they asked: Does national origin help to explain differences in savings rates across a group of immigrants to the United States? Using 1980 and 1990 U.S. Census data on immigrants from 16 countries and on native-born Americans, Carroll et al. estimated a model similar to the following:(1)

For reasons that will become obvious, we call this the unrestricted model. The dependent variable is the household savings rate. Age and Education measure, respectively, the age and education of the household head (both in years). The error term reflects omitted variables that affect savings rates as well as the influence of luck. The subscript h indexes households. A series of 16 dummy variables indicate the national origin of the immigrants; for example, China_h = 1 if both husband and wife in household h were Chinese immigrants.(2) Suppose that the value for the coefficient multiplying China_h is 0.12. This would indicate that, with other factors controlled, immigrants of Chinese origin have a savings rate 12 percentage points higher than the base case (which in this regression consists of people who were born in the United States).

If there are no cultural effects on savings, then all the coefficients multiplying the dummy variables for national origin ought to be equal to each other. In other words, if culture does not matter, national origin ought not to affect savings rates ceteris paribus. This is a null hypothesis involving 16 parameters and 16 equal signs:

The alternative hypothesis simply negates the null hypothesis, meaning that immigrants from at least one country have different savings rates than immigrants from other countries:

Now, if the null hypothesis is true, then an alternative, simpler model describes the data generation process:
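The displayed equations from this passage did not survive conversion to text. A plausible reconstruction from the surrounding description (the variable names and the common-value form of the null are assumptions, not the authors' exact notation) is:

```latex
% Unrestricted model: savings rate on age, education, and 16 national-origin dummies
S_h = \beta_0 + \beta_1\,\mathrm{Age}_h + \beta_2\,\mathrm{Education}_h
      + \sum_{j=1}^{16} \gamma_j D_{jh} + \varepsilon_h

% Null: no cultural effects, so all origin coefficients share one value
H_0:\ \gamma_1 = \gamma_2 = \cdots = \gamma_{16}
\qquad
H_A:\ \text{not all } \gamma_j \text{ are equal}

% Restricted model implied by the null: one common immigrant coefficient
S_h = \beta_0 + \beta_1\,\mathrm{Age}_h + \beta_2\,\mathrm{Education}_h
      + \gamma\,\mathrm{Immigrant}_h + \varepsilon_h
```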

Relative to the original model, the one above is a restricted model. We can test the null hypothesis with a new test statistic, the F-statistic, which essentially measures the difference between the fit of the original and restricted models above. The test is known as an F-test. The F-statistic will not have a normal distribution. Under the often-made assumption that the error terms are normally distributed, when the null is true, the test statistic follows an F distribution, which accounts for the name of the statistic. We will need to learn about the F- and the related chi-square distributions in order to calculate the P-value for the F-test.

F-Test Basics

The F-distribution is named after Ronald A. Fisher, a leading statistician of the first half of the twentieth century. This chapter demonstrates that the F distribution is a ratio of two chi-square random variables and that, as the number of observations increases, the F-distribution comes to resemble the chi-square distribution. Karl Pearson popularized the chi-square distribution beginning in 1900.

The Whole Model F-Test (discussed in Section 17.2) is commonly used as a test of the overall significance of the included independent variables in a regression model. In fact, it is so often used that Excel’s LINEST function and most other statistical software report this statistic. We will show that there are many other F-tests that facilitate tests of a variety of competing models. The idea that there are competing models opens the door to a difficult question: How do we decide which model is the right one? One way to answer this question is with an F-test. At first glance, one might consider measures of fit such as R2 or the sum of squared residuals (SSR) as a guide. But these statistics have a serious weakness – as you include additional independent variables, the R2 and SSR are guaranteed (practically speaking) to improve. Thus, naive reliance on these measures of fit leads to kitchen sink regression – that is, we throw in as many variables as we can find (the proverbial kitchen sink) in an effort to optimize the fit.

The problem with kitchen sink regression is that, for a particular sample, it will yield a higher R2 or lower SSR than a regression with fewer X variables, but the true model may be the one with the smaller number of X variables. This will be shown via a concrete example in Section 17.5. The F-test provides a way to discriminate between alternative models. It recognizes that there will be differences in measures of fit when one model is compared with another, but it requires that the loss of fit be substantial enough to reject the reduced model.
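A minimal sketch of this comparison in Python (simulated data; the names and numbers are illustrative, not taken from the chapter's workbooks): the true model uses only x1, so restricting away x2 should cost little fit and produce a small F-statistic:

```python
import numpy as np

# Simulated data: only x1 matters; x2 is an irrelevant regressor.
rng = np.random.default_rng(3)
n = 120
x1 = rng.normal(0, 1, n)
x2 = rng.normal(0, 1, n)
y = 1.0 + 2.0 * x1 + rng.normal(0, 1, n)

def ssr(X, y):
    """Sum of squared residuals from an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

ones = np.ones(n)
ssr_u = ssr(np.column_stack([ones, x1, x2]), y)   # unrestricted model
ssr_r = ssr(np.column_stack([ones, x1]), y)       # restricted model (x2 dropped)

q, k = 1, 3   # number of restrictions; parameters in the unrestricted model
F = ((ssr_r - ssr_u) / q) / (ssr_u / (n - k))
print(F)      # a small F: dropping x2 costs little fit
```

With a single restriction, as here, this F equals the square of the t-statistic on the dropped coefficient, which is the t-test/F-test equivalence discussed later in the chapter.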

Organization

In general, the F-test can be used to test any restriction on the parameters in the equation. The idea of a restricted regression is fundamental to the logic of the F-test, and thus it is discussed in detail in the next section. Because the F-distribution is actually the ratio of two chi-square (χ2) distributed random variables (divided by their respective degrees of freedom), Section 17.3 explains the chi-square distribution and points out that, when the errors are normally distributed, the sum of squared residuals is a random variable with a chi-square distribution. Section 17.4 demonstrates that the ratio of two chi-square distributed random variables is an F-distributed random variable. The remaining sections of this chapter put the F-statistic into practice. Section 17.5 does so in the context of Galileo’s model of acceleration, whereas Section 17.6 considers an example involving food stamps. We use the food stamp example to show that, when the restriction involves a single equals sign, one can rewrite the original model to make it possible to employ a t-test instead of an F-test. The t- and F-tests yield equivalent results in such cases. We apply the F-test to a real-world example in Section 17.7. Finally, Section 17.8 discusses multicollinearity and the distinction between confidence intervals for a single parameter and confidence regions for multiple parameters.

(1) Their actual model is, not surprisingly, substantially more complicated.

(2) There were 17 countries of origin in the study, including 900 households selected at random from the United States. Only married couples from the same country of origin were included in the sample. Other restrictions were that the household head must have been older than 35 and younger than 50 in 1980.

Excel Workbooks

ChiSquareDist.xls CorrelatedEstimates.xls FDist.xls FDistEarningsFn.xls FDistFoodStamps.xls FDistGalileo.xls MyMonteCarlo.xls NoInterceptBug.xls

Aaron Smith

Chapter 6 - Hypothesis Testing and Confidence Intervals

Click here to read the chapter (link works only for UC affiliates)

Lecture Slides:       Powerpoint     PDF

Learning Objectives

  • Test a hypothesis about a regression coefficient 
  • Form a confidence interval around a regression coefficient
  • Show how the central limit theorem allows econometricians to ignore assumption CR4 in large samples 
  • Present results from a regression model
  • Central Limit Theorem in Action

What We Learned

  • Our result is the same whether we drop CR4 and invoke the central limit theorem (valid in large samples) or whether we impose CR4 (necessary in small samples).
  • Confidence intervals are narrow when the sum of squared errors is small, the sample is large, or there’s a lot of variation in X .
  • How to present results from a regression model.
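An illustrative sketch of the second bullet (simulated data, hypothetical numbers): the 95% confidence interval for a regression slope narrows sharply as the sample grows:

```python
import numpy as np

# Simulated data: y depends on x with a true slope of 0.5. The function
# returns a large-sample 95% confidence interval for the estimated slope.
rng = np.random.default_rng(4)

def slope_ci(n):
    x = rng.normal(0, 2, n)
    y = 1.0 + 0.5 * x + rng.normal(0, 1, n)
    X = np.column_stack([np.ones(n), x])
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] - 1.96 * se, beta[1] + 1.96 * se

for n in (25, 2500):
    lo, hi = slope_ci(n)
    print(n, round(hi - lo, 4))  # interval width shrinks with sample size
```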

Article Categories

Book categories, collections.

  • Business, Careers, & Money Articles
  • Business Articles
  • Economics Articles

Econometrics For Dummies Cheat Sheet


Explaining outcomes and forecasting future events accurately requires econometric model-building skills, quality data, and appropriate estimation strategies. Both economic and statistical assumptions are important when using econometrics to estimate models.

Econometric estimation and the CLRM assumptions

Econometric techniques are used to estimate economic models, which ultimately allow you to explain how various factors affect some outcome of interest or to forecast future events. The ordinary least squares (OLS) technique is the most popular method of performing regression analysis and estimating econometric models, because in standard situations (meaning the model satisfies a series of statistical assumptions) it produces optimal (the best possible) results.

The proof that OLS generates the best results is known as the Gauss-Markov theorem, but the proof requires several assumptions. These assumptions, known as the classical linear regression model (CLRM) assumptions, are the following:

  • The model parameters are linear, meaning the regression coefficients don't enter the function being estimated as exponents (although the variables can have exponents).

  • The values for the independent variables are derived from a random sample of the population, and they contain variability.

  • The explanatory variables don't have perfect collinearity (that is, no independent variable can be expressed as a linear function of the other independent variables).

  • The error term has zero conditional mean, meaning that the average error is zero at any specific value of the independent variable(s).

  • The model has no heteroskedasticity (the variance of the error is the same regardless of the independent variables' values).

  • The model has no autocorrelation (the error term doesn't exhibit a systematic relationship over time).

If one (or more) of the CLRM assumptions isn't met (which econometricians call failing), then OLS may not be the best estimation technique. Fortunately, econometric tools allow you to modify the OLS technique or use a completely different estimation method if the CLRM assumptions don't hold.

Useful formulas in econometrics

After you acquire data and choose the best econometric model for the question you want to answer, use formulas to produce the estimated output.

In some cases, you have to perform these calculations by hand (sorry). However, even if your problem allows you to use econometric software such as Stata to generate results, it's nice to know what the computer is doing.

Here’s a look at the most common estimators from an econometric model along with the formulas used to produce them.

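The formula table itself was an image in the original article. As a hedged stand-in, the standard simple-regression OLS estimators are b1 = Σ(x − x̄)(y − ȳ) / Σ(x − x̄)² and b0 = ȳ − b1·x̄, which can be computed directly:

```python
# Sketch of the standard simple-regression OLS estimators (a hedged stand-in
# for the article's formula table, which did not survive the web conversion):
#   b1 = sum((x - xbar) * (y - ybar)) / sum((x - xbar)**2),  b0 = ybar - b1*xbar

def ols(x, y):
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b1 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
          / sum((xi - xbar) ** 2 for xi in x))
    b0 = ybar - b1 * xbar
    return b0, b1

# Data generated exactly by y = 1 + 2x is recovered exactly.
b0, b1 = ols([0, 1, 2, 3], [1, 3, 5, 7])
print(b0, b1)  # 1.0 2.0
```

This is exactly what packages such as Stata compute (plus standard errors) when you run a simple regression.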

Econometric analysis: Looking at flexibility in models

You may want to allow your econometric model to have some flexibility, because economic relationships are rarely linear. Many situations are subject to the “law” of diminishing marginal benefits and/or increasing marginal costs, which implies that the impact of the independent variables won’t be constant (linear).

The precise functional form depends on your specific application, but the most common are as follows:

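The table of functional forms was an image in the original article. As a hedged reconstruction (a standard textbook list, not necessarily the book's exact table), the most commonly used forms include:

```latex
% Common flexible functional forms (standard list; illustrative, not the book's exact table)
\begin{aligned}
\text{Linear:}     \quad & Y = \beta_0 + \beta_1 X               && \text{constant marginal effect} \\
\text{Quadratic:}  \quad & Y = \beta_0 + \beta_1 X + \beta_2 X^2 && \text{rising or diminishing marginal effect} \\
\text{Log-log:}    \quad & \ln Y = \beta_0 + \beta_1 \ln X       && \beta_1 \text{ is an elasticity} \\
\text{Log-linear:} \quad & \ln Y = \beta_0 + \beta_1 X           && 100\,\beta_1 \approx \%\Delta Y \text{ per unit } \Delta X \\
\text{Linear-log:} \quad & Y = \beta_0 + \beta_1 \ln X           && \beta_1/100 \approx \Delta Y \text{ per } 1\%\ \Delta X
\end{aligned}
```

The quadratic and logarithmic forms are the usual ways of capturing diminishing marginal benefits or increasing marginal costs while keeping the model linear in the parameters (CLRM assumption 1).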

Typical problems estimating econometric models

If the CLRM doesn’t work for your data because one of its assumptions doesn’t hold, then you have to address the problem before you can finalize your analysis.

Fortunately, one of the primary contributions of econometrics is the development of techniques to address such problems or other complications with the data that make standard model estimation difficult or unreliable.

The following table lists the names of the most common estimation issues, a brief definition of each one, their consequences, typical tools used to detect them, and commonly accepted methods for resolving each problem.

High multicollinearity
  • Definition: Two or more independent variables in a regression model exhibit a close linear relationship.
  • Consequences: Large standard errors and insignificant t-statistics; coefficient estimates sensitive to minor changes in model specification; nonsensical coefficient signs and magnitudes.
  • Detection: Pairwise correlation coefficients; variance inflation factor (VIF).
  • Solutions: 1. Collect additional data. 2. Re-specify the model. 3. Drop redundant variables.

Heteroskedasticity
  • Definition: The variance of the error term changes in response to a change in the value of the independent variables.
  • Consequences: Inefficient coefficient estimates; biased standard errors; unreliable hypothesis tests.
  • Detection: Park test; Goldfeld-Quandt test; Breusch-Pagan test; White test.
  • Solutions: 1. Weighted least squares (WLS). 2. Robust standard errors.

Autocorrelation
  • Definition: An identifiable relationship (positive or negative) exists between the values of the error in one period and the values of the error in another period.
  • Consequences: Inefficient coefficient estimates; biased standard errors; unreliable hypothesis tests.
  • Detection: Geary (runs) test; Durbin-Watson test; Breusch-Godfrey test.
  • Solutions: 1. Cochrane-Orcutt transformation. 2. Prais-Winsten transformation. 3. Newey-West robust standard errors.
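As a hedged sketch (not from the book), the Durbin-Watson detection statistic listed above is simple to compute from regression residuals: d = Σ(e_t − e_{t−1})² / Σe_t², with values near 2 suggesting no first-order autocorrelation.

```python
# Sketch of the Durbin-Watson statistic for detecting first-order autocorrelation:
#   d = sum((e[t] - e[t-1])**2) / sum(e[t]**2)
# d near 2: no autocorrelation; d well below 2: positive autocorrelation;
# d well above 2: negative autocorrelation.

def durbin_watson(residuals):
    num = sum((residuals[t] - residuals[t - 1]) ** 2
              for t in range(1, len(residuals)))
    den = sum(e ** 2 for e in residuals)
    return num / den

drifting = [1.0, 0.9, 0.8, 0.7, 0.6, 0.5]        # residuals drift slowly (positive)
alternating = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0]  # residuals flip sign (negative)
print(durbin_watson(drifting) < 2 < durbin_watson(alternating))  # True
```

Slowly drifting residuals push d toward 0, while sign-flipping residuals push it toward 4, which is why the rule of thumb centers on 2.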

About This Article

This article is from the book:

  • Econometrics For Dummies

About the book author:

Roberto Pedace , PhD, is an associate professor in the Department of Economics at Scripps College. His published work has appeared in Economic Inquiry, Industrial Relations, the Southern Economic Journal , Contemporary Economic Policy , the Journal of Sports Economics , and other outlets.



Top 4 Types of Hypothesis in Consumption (With Diagram)


The following points highlight the top four types of Hypothesis in Consumption. The types of Hypothesis are: 1. The Post-Keynesian Developments 2. The Relative Income Hypothesis 3. The Life-Cycle Hypothesis 4. The Permanent Income Hypothesis.

Hypothesis Type #1. The Post-Keynesian Developments:

Data collected and examined in the post-Second World War period (1945-) confirmed the Keynesian consumption function.

Time series data collected over long periods showed that the relation between income and consumption was different from what cross-section data revealed.

In the short run, there was a non-proportional relation between income and consumption. But in the long run the relation was proportional. By constructing new aggregate data on consumption and income from 1869 and examining the same, Simon Kuznets discovered that the ratio of consumption to income was fairly stable from decade to decade, despite large increases in income over the period he studied.


This contradicted Keynes’ conjecture that the average propensity to consume would fall with increases in income. Kuznets’ findings indicated that the APC is fairly constant over long periods of time. This fact presented a puzzle which is illustrated in Fig. 17.10.

Consumption Puzzle

Studies of cross-section (household) data and short time series confirmed the Keynesian hypothesis — the relationship between consumption and income indicated by the short-run consumption function C_S in Fig. 17.10.

But studies of long time series found that the APC did not vary systematically with income, as shown by the long-run consumption function C_L. The short-run consumption function has a falling APC, whereas the long-run consumption function has a constant APC.

Subsequent research on consumption attempted to explain how these two consumption functions could be consistent with each other.

Various attempts have been made to reconcile this conflicting evidence. In this context, mention must be made of James Duesenberry (who developed the relative income hypothesis), Ando, Brumberg and Modigliani (who developed the life-cycle hypothesis of saving behaviour) and Milton Friedman (who developed the permanent income hypothesis of consumption behaviour).

All these economists proposed explanations of these seemingly contradictory findings. These hypotheses may now be discussed one by one.

Hypothesis Type #2. The Relative Income Hypothesis:

In 1949, James Duesenberry presented the relative income hypothesis. According to this hypothesis, saving (consumption) depends on relative income. The saving function is expressed as S_t = f(Y_t / Y_p), where Y_t / Y_p is the ratio of current income to some previous peak income. This ratio is called relative income. Thus current consumption or saving is a function not of current income but of relative income.

Duesenberry pointed out that during a depression, when income falls, consumption does not fall much. People try to protect their living standards either by drawing down their past savings (accumulated wealth) or by borrowing.

However, as the economy gradually moves first into the recovery and then into the prosperity phase of the business cycle, consumption does not rise proportionately even as income increases. People use a portion of their income either to restore the old saving rate or to repay their old debt.

Thus there is a lack of symmetry in people's consumption behaviour: people find it more difficult to reduce their consumption level than to raise it. This asymmetrical behaviour of consumers is known as the ratchet effect.

Thus if we observe a consumer's short-run behaviour, we find a non-proportional relation between income and consumption: MPC is less than APC in the short run, as Keynes's absolute income hypothesis postulated. But if we study a consumer's behaviour in the long run, i.e., over the entire business cycle, we find a proportional relation between income and consumption. This means that in the long run MPC = APC.

Hypothesis Type #3. The Life-Cycle Hypothesis:

In the late 1950s and early 1960s Franco Modigliani and his co-workers Albert Ando and Richard Brumberg related consumption expenditure to demography. Modigliani, in particular, emphasised that income varies systematically over peoples’ lives and that saving allows consumers to move income from early years of earning (when income is high) to later years after retirement when income is low.

This interpretation of household consumption behaviour forms the basis of his life-cycle hypothesis.

The life-cycle hypothesis (henceforth LCH) represents an attempt to deal with the way in which consumers dispose of their income over time. In this hypothesis, wealth is assigned a crucial role in consumption decisions. Wealth includes not only property (houses, stocks, bonds, savings accounts, etc.) but also the value of future earnings.

Thus consumers visualise themselves as having a stock of initial wealth, a flow of income generated by that wealth over their lifetime and a target (which may be zero) as their end-of-life wealth. Consumption decisions are made with the whole series of financial flows in mind.

Thus, changes in wealth as reflected by unexpected changes in flow of earnings or unexpected movements in asset prices would have an impact on consumers’ spending decisions because they would enhance future earnings from property, labour or both. The theory has empirically testable implications for the relation between saving and age of a person as also for the role of wealth in influencing aggregate consumer spending.

The Hypothesis:

The main reason that an individual’s income varies is retirement. Since most people do not want their current living standard (as measured by consumption) to fall after retirement they save a portion of their income every year (over their entire service period). This motive for saving has an important implication for an individual’s consumption behaviour.

Suppose a representative consumer expects to live another T years, has wealth of W, and expects to earn income Y per year until he (she) retires R years from now. What should be the optimal level of consumption of the individual if he wishes to maintain a smooth level of consumption over his entire life?

The consumer’s lifetime endowments consist of initial wealth W and lifetime earnings RY. If we assume that the consumer divides his total wealth W + RY equally among the T years and wishes to consume smoothly over his lifetime then his annual consumption will be:

C = (W + RY)/T … (5)

This person’s consumption function can now be expressed as

C = (1/T)W + (R/T)Y

If all individuals plan their consumption in the same way then the aggregate consumption function is a replica of our representative consumer’s consumption function. To be more specific, aggregate consumption depends on both wealth and income. That is, the aggregate consumption function is

C = αW + βY …(6)

where the parameter α is the MPC out of wealth, and the parameter β is the MPC out of income.
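A quick numerical sketch of equation (5), using illustrative figures rather than anything from the article: a consumer with initial wealth W = 100, R = 30 working years at Y = 50 per year, and T = 50 remaining years consumes (100 + 30·50)/50 = 32 per year.

```python
# Sketch of the life-cycle rule C = (W + R*Y) / T: spread initial wealth W plus
# lifetime earnings R*Y evenly over the T years the consumer expects to live.
# The numbers below are illustrative assumptions, not from the article.

def lifecycle_consumption(W, R, Y, T):
    return (W + R * Y) / T

# W = 100, R = 30 working years at Y = 50 per year, T = 50 remaining years:
print(lifecycle_consumption(W=100, R=30, Y=50, T=50))  # 32.0
```

Consumption (32) sits below income (50) during the working years, with the gap saved to finance the 20 retirement years.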

Implications:

Fig. 17.11 shows the relationship between consumption and income in terms of the life-cycle hypothesis. For any initial level of wealth W, the consumption function looks like the Keynesian function.

But the intercept αW, which shows what would happen to consumption if income ever fell to zero, is not a constant like the term a in the Keynesian consumption function. Instead the intercept αW depends on the level of wealth: if W increases, the consumption line shifts upward in parallel.

Life Cycle Consumption Function

So one main prediction of the LCH is that consumption depends on wealth as well as income, as is shown by the intercept of the consumption function.

Solving the consumption puzzle:

The LCH can solve the consumption puzzle in a simple way.

According to this hypothesis, the APC is:

C/Y = α(W/Y) + β … (7)

Since wealth does not vary proportionately with income from person to person or from year to year, cross-section data (which show inter-individual differences in income and consumption over short periods) reveal that high income corresponds to a low APC. But in the long run, wealth and income grow together, resulting in a constant W/Y and a constant APC (as time-series show).
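A hedged numerical illustration of equation (7), with made-up parameter values: across households, wealth rises less than proportionately with income, so the measured APC falls with income; over time, W and Y grow together, holding W/Y and therefore the APC constant.

```python
# Sketch of APC = alpha*(W/Y) + beta from equation (7). The parameter values
# are illustrative assumptions, not estimates from the article.
ALPHA, BETA = 0.05, 0.6

def apc(W, Y):
    return ALPHA * (W / Y) + BETA

# Cross-section: income doubles but wealth rises less than proportionately,
# so the measured APC is lower for the higher-income household.
print(apc(W=100, Y=20) > apc(W=120, Y=40))   # True

# Long run: wealth and income grow together, so W/Y and the APC stay constant.
print(apc(W=100, Y=20) == apc(W=200, Y=40))  # True
```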

If wealth remains constant, as in the short run, the life-cycle consumption function looks like the Keynesian consumption function. But as wealth increases over time, the consumption function shifts upward, as shown in Fig. 17.12. This prevents the APC from falling as income increases.

This means that the short-run consumption income relation (which takes wealth as constant) will not continue to hold in the long run when wealth increases. This is how the life cycle hypothesis (LCH) solves the consumption puzzle posed by Kuznets’ studies.

Shift in Consumption Function

Other Predictions:

Another important prediction made by the LCH is that saving varies over a person's lifetime. The LCH helps to link consumption and saving with demographic considerations, especially with the age distribution of the population.

The MPC out of life-time income changes with age. If a person has no wealth at the beginning of his service life, then he will accumulate wealth over his working years and then run down his wealth after his retirement. Fig. 17.13 shows the consumer’s income, consumption and wealth over his adult life.

Consumption, Income and Wealth Over the Life Cycle

If a consumer smooths consumption over his life (as indicated by the horizontal consumption line), he will save and accumulate wealth during his working years and then dissave and run down his wealth after retirement. In other words, since people want to smooth consumption over their lives, the young — who are working — save, while the old — who have retired — dissave.

In the long run the consumption-income ratio is very stable, but in the short run it fluctuates. The life cycle approach explains this by pointing out that people seek to maintain a smooth profile of consumption even if their lifetime income flow is uneven, and thus emphasises the role of wealth in the consumption function.

Theory and Evidence: Do Old People Dissave?

Some recent findings present a genuine problem for the LCH. Old people are found not to dissave as much as the hypothesis predicts. This means that the elderly do not reduce their wealth as fast as one would expect, if they were trying to smooth their consumption over their remaining years of life.

Two reasons explain why the old people do not dissave as much as the LCH predicts:

(i) Precautionary saving:

The old people are very much concerned about unpredictable expenses. So there is some precautionary motive for saving which originates from uncertainty. This uncertainty arises from the fact that old people often live longer than they expect. So they have to save more than what an average span of retirement would warrant.

Moreover, uncertainty arises because the medical expenses of old people tend to rise faster than their age. Some sort of Malthusian spectre operates here: while an old person's age increases in arithmetic progression, his medical expenses can increase in geometric progression, owing to the accelerating depreciation of the human body and the growing likelihood of illness.

The old people are likely to respond to this uncertainty by saving more in order to be able to overcome these contingencies.

Of course, there is an offsetting consideration here. Due to the spread of health and medical insurance in recent years old people can protect themselves against uncertainties about medical expenses at a low cost (i.e., just by paying a small premium).

Nowadays various insurance plans are offered by both government and private agencies (such as Medisave, Mediclaim, Medicare, etc.). Of course, the premium rate increases with age. As a result, old people are required to increase their saving rate to fulfill their contractual obligations.

However, to protect against uncertainty regarding lifespan, old people can buy annuities from insurance companies. For a fixed fee, annuities offer a stream of income over the entire life span of the recipient.

(ii) Leaving bequests:

Old people also do not dissave because they want to leave bequests to their children, whom they care about. But altruism may not really be the reason that parents leave bequests: parents often use the implicit threat of disinheritance to induce a desirable pattern of behaviour, so that children and grandchildren take more care of them or are more attentive.

Thus the LCH cannot fully explain consumption behaviour in the long run. No doubt providing for retirement is an important motive for saving, but other motives, such as precautionary saving and bequests, are no less important in determining people's saving behaviour.

Another explanation, which differs in details but entirely shares the spirit of the life cycle approach is the permanent income hypothesis of consumption. The hypothesis, which is the brainchild of Milton Friedman, argues that people gear their consumption behaviour to their permanent or long term consumption opportunities, not to their current level of income.

An individual does not plan consumption within a period solely on the basis of income within that period; rather, consumption is planned in relation to income over a longer horizon. We may now turn to Friedman's permanent income hypothesis, which suggests an alternative explanation of the long-run income-consumption relationship.

Hypothesis Type #4. The Permanent Income Hypothesis:

Milton Friedman's permanent income hypothesis (henceforth PIH), presented in 1957, complements Modigliani's LCH. Both hypotheses argue that consumption should not depend on current income alone.

But there is a difference of insight between the two hypotheses: while the LCH emphasises that income follows a regular pattern over a person's lifetime, the PIH emphasises that people experience random and temporary changes in their incomes from year to year.

The PIH, Friedman himself claims, "seems potentially more fruitful and in some measure more general" than the relative income hypothesis or the life-cycle hypothesis.

The idea of consumption spending that is geared to long-term average or permanent income is essentially the same as the life-cycle theory. It raises two further questions. The first concerns the precise relationship between current consumption and permanent income. The second is how to make the concept of permanent income operational, that is, how to measure it.

The Basic Hypothesis:

According to Friedman, the total measured income of an individual, Y_m, has two components: permanent income Y_p and transitory income Y_t. That is, Y_m = Y_p + Y_t.

Permanent income is that part of income which people expect to earn over their working life. Transitory income is that part of income which people do not expect to persist. In other words, while permanent income is average income, transitory income is the random deviation from that average.

Different forms of income have different degrees of persistence. While adequate investment in human capital (expenditure on training and education) provides a permanently higher income, good weather provides only transitorily higher income.

The PIH states that current consumption is not dependent solely on current disposable income but also on whether or not that income is expected to be permanent or transitory. The PIH argues that both income and consumption are split into two parts — permanent and transitory.

A person’s permanent income consists of such things as his long term earnings from employment (wages and salaries), retirement pensions and income derived from possessions of capital assets (interest and dividends).

The amount of a person’s permanent income will determine his permanent consumption plan, e.g., the size and quality of house he buys and, thus, his long term expenditure on mortgage repayments, etc.

Transitory income consists of short-term (temporary) overtime payments, bonuses and windfall gains from lotteries or stock appreciation and inheritances. Negative transitory income consists of short-term reduction in income arising from temporary unemployment and illness.

Transitory consumption such as additional holidays, clothes, etc. will depend upon his entire income. Long term consumption may also be related to changes in a person’s wealth, in particular the value of house over time. The economic significance of the PIH is that the short run level of consumption will be higher or lower than that indicated by the level of current disposable income.

According to Friedman consumption depends primarily on permanent income, because consumers use saving and borrowing to smooth consumption in response to transitory changes in income. The reason is that consumers spend their permanent income, but they save rather than spend most of their transitory income.

Since permanent income should be related to long run average income, this feature of the consumption function is clearly in line with the observed long run constancy of the consumption income ratio.

Let Y represent a consumer unit's measured income for some time period, say, a year. This, according to Friedman, is the sum of two components: a permanent component (Y_p) and a transitory component (Y_t), or

Y = Y_p + Y_t …(8)

The permanent component reflects the effect of those factors that the unit regards as determining its capital value or wealth: the non-human wealth it owns; the personal attributes of the earners in the unit, such as their training, ability, and personality; and the attributes of the earners' economic activity, such as the occupation followed, the location of the economic activity, and so on.

The transitory component is to be interpreted as reflecting all "other" factors, that is, factors likely to be treated by the unit affected as "accidental" or "chance" occurrences: for example, illness, a bad guess about when to buy or sell, windfall or chance gains from races or lotteries, and so on. Permanent income is some sort of average.

Transitory income is a random variable. The difference between the two depends on how long the income persists. In other words, the distinction between the two is based on the degree of persistence. For example education gives an individual permanent income but luck — such as good weather — gives a farmer transitory income.

It may also be noted that permanent income cannot be zero or negative but transitory income can be.

Suppose a daily wage earner falls sick for a day or two and earns nothing; his transitory income is zero. Similarly, if an individual sells a share on the stock exchange at a loss, his transitory income is negative. Finally, permanent income shows a steady trend, but transitory income fluctuates widely.

Similarly, let C represent a consumer unit's expenditures for some time period. It is also the sum of a permanent component (C_p) and a transitory component (C_t), so that

C = C_p + C_t …(9)

Some factors producing transitory components of consumption are: unusual sickness, a specifically favourable opportunity to purchase and the like. Permanent consumption is assumed to be the flow of utility services consumed by a group over a specific period.

The permanent income hypothesis is given by three simple equations (8), (9) and (10):

Y = Y_p + Y_t …(8)

C = C_p + C_t …(9)

C_p = kY_p, where k = f(r, W, u) …(10)

Here equation (10) defines a relation between permanent income and permanent consumption. Friedman specifies that the ratio between them is independent of the size of permanent income but depends on other variables, in particular: (i) the rate of interest (r), or sets of rates of interest, at which the consumer unit can borrow or lend; (ii) the relative importance of property and non-property income, symbolised by the ratio of non-human wealth to income (W); and (iii) the factors, symbolised by the random variable u, determining the consumer unit's tastes and preferences for consumption versus additions to wealth. Equations (8) and (9) define the connection between the permanent components and the measured magnitudes.

Friedman assumes that the transitory components of income and consumption are uncorrelated with one another and with the corresponding permanent components, or

ρ(Y_t, Y_p) = ρ(C_t, C_p) = ρ(Y_t, C_t) = 0 …(11)

where ρ stands for the correlation coefficient between the variables designated in parentheses. The assumption that the third correlation in equation (11) — between the transitory components of income and consumption — is zero is indeed a strong assumption.

As Friedman says:

“The common notion that savings,…, are a ‘residue’ speaks strongly for the plausibility of the assumption. For this notion implies that consumption is determined by rather long-run considerations, so that any transitory changes in income lead primarily to additions to assets or to the use of previously accumulated balances rather than to corresponding changes in consumption.”

In Fig. 17.14 we consider the consumer units with a particular measured income, say Y_0, which is above the mean measured income for the group as a whole. Given zero correlation between the permanent and transitory components of income, the average permanent income of those units is less than Y_0; that is, the average transitory component is positive.

The average consumption of units with a measured income Y_0 is, therefore, equal to their average permanent consumption. In Friedman's hypothesis this is k times their average permanent income.

If Y_0 were not only the measured income of these units but also their permanent income, their mean consumption would be kY_0, shown by the distance Y_0E. Since their mean permanent income is less than their measured income (i.e., the transitory component of income is positive), their average consumption, Y_0F, is less than Y_0E.

Permanent Income Hypothesis

By the same logic, for consumer units with an income equal to the mean of the group as a whole, the average transitory component of income as well as of consumption is zero, so the ordinate of the regression line equals the ordinate of the line 0E, which gives the relation between Y_p and C_p.

For units with an income below the mean, the average transitory component of income is negative, so average measured consumption (CC”) is greater than the ordinate of 0E (BC’). The regression line (C = a + bY), therefore, intersects 0E at D, is above it to the left of D, and below it to the right of D.

If k is less than unity, permanent consumption is always less than permanent income. But measured consumption is not necessarily less than measured income. The line OH is a 45° line along which C = Y.

The vertical distance between this line and IF is average measured savings. Point J is called the ‘break-even’ point at which average measured savings are zero. To the left of J, average measured savings are negative, to the right, positive; as measured income increases so does the ratio of average measured savings to measured income.

Friedman’s hypothesis thus yields a relation between measured consumption and measured income that reproduces the broadest features of the corresponding regressions that have been computed from observed data. The point is that consumption expenditures seem to be proportional to disposable income in the long run.

In the short run, on the other hand, the consumption-income ratio fluctuates considerably. In sum, current consumption is related to some long-run measure of income (e.g., permanent income) while short-run fluctuations in income tend primarily to affect the level of saving.

Estimating Permanent Income:

Dornbusch and Fischer have defined permanent income as “the steady rate of consumption a person could maintain for the rest of his or her life, given the present level of wealth and income earned now and in the future.”

One might estimate permanent income as being equal to last year’s income plus some fraction of the change in income from last year to this year:

Y_p = Y_{-1} + θ(Y − Y_{-1}) = θY + (1 − θ)Y_{-1}, where θ is a fraction between 0 and 1 and Y_{-1} is last year's income.
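As a hedged sketch, this adaptive estimate (permanent income equals last year's income plus a fraction θ of the change in income) is straightforward to compute; the value θ = 0.25 below is an illustrative assumption:

```python
# Sketch of the adaptive estimate of permanent income:
#   Yp = Y_prev + theta*(Y - Y_prev) = theta*Y + (1 - theta)*Y_prev,
# so only a fraction theta of any income change is treated as permanent.
# theta = 0.25 is an illustrative assumption, not a value from the text.

def permanent_income(y_prev, y_now, theta=0.25):
    return y_prev + theta * (y_now - y_prev)

# A windfall year (income jumps from 100 to 140) moves estimated permanent
# income only a quarter of the way toward the new measured income.
print(permanent_income(100.0, 140.0))  # 110.0
```

Because consumption is geared to Y_p rather than Y, a transitory windfall raises consumption by only a fraction of the income change, which is exactly the short-run/long-run asymmetry the PIH is built to explain.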

Econometrics 105 - On Demand Course Material

Join us for Econometrics 105, where we'll dive deep into analyzing data and making predictions like a pro!


1. Introduction and our Guest Speaker

2. Omitted Variable Bias (OVB)

3. Multivariate Model

4. Ordinary Least Squares (OLS)

5. Confidence Interval and Hypothesis Tests

6. F-Statistic and Joint Significance

7. Special Cases

9. Quiz questions to test your knowledge

About this event

  • Event lasts 23 hours 30 minutes

Dive into the World of Econometrics

Learn about the statistical methods that govern data analysis in economics to ensure accurate and insightful results in your studies and work! Understand why econometrics matters and how it impacts economic research and policy decisions. Upon purchasing, you will get access to a 1-hour video lecture, comprehensive slides, a quiz, and an audio transcript.

This course is your chance to:

  • Gain practical insights into econometric modeling and hypothesis testing.
  • Ask burning questions and get them answered by an expert in the field.
  • Network with individuals who share your interest in this critical area of economics.

Don't miss this opportunity to:

  • Empower yourself with knowledge about the statistical tools shaping economic analysis.
  • Gain a competitive edge by understanding econometric techniques.
  • Become an informed analyst engaged in discussions about the role of econometrics in economic decision-making.

Relevant topics include:

  • Multivariate linear regression
  • Hypothesis testing
  • Ordinary Least Squares (OLS)
  • Model selection
  • Addressing omitted variable bias

This course is for:

  • Aspiring economists and data analysts
  • Anyone new to econometric methods
  • Students and professionals in economics and related fields
  • Decision-makers in data-driven industries
  • Policy analysts and enthusiasts

Register today! Small class size to maximize engaging and meaningful discussions!


  • Open access
  • Published: 10 August 2024

Plant economics spectrum governs leaf nitrogen and phosphorus resorption in subtropical transitional forests

  • Boyu Ma 1 ,
  • Jielin Ge 1 ,
  • Changming Zhao 1 ,
  • Wenting Xu 1 ,
  • Kai Xu 1 &
  • Zongqiang Xie 1 , 2  

BMC Plant Biology volume  24 , Article number:  764 ( 2024 ) Cite this article


Leaf nitrogen (N) and phosphorus (P) resorption is a fundamental adaptation strategy for plant nutrient conservation. However, the relative roles that environmental factors and plant functional traits play in regulating N and P resorption remain largely unclear, and little is known about the underlying mechanism of plant functional traits affecting nutrient resorption. Here, we measured leaf N and P resorption and 13 plant functional traits of leaf, petiole, and twig for 101 representative broad-leaved tree species in our target subtropical transitional forests. We integrated these multiple functional traits into the plant economics spectrum (PES). We further explored whether and how elevation-related environmental factors and these functional traits collectively control leaf N and P resorption.

Results

We found that deciduous and evergreen trees exhibited highly diversified PES strategies, tending to be acquisitive and conservative, respectively. The effects of PES, rather than of environmental factors, dominated leaf N and P resorption patterns along the elevational gradient. Specifically, the photosynthesis and nutrient resource utilization axis positively affected N and P resorption for both deciduous and evergreen trees, whereas the structural and functional investment axis positively affected leaf N and P resorption for evergreen species only. Specific leaf area and green leaf nutrient concentrations were the most influential traits driving leaf N and P resorption.

Conclusions

Our study simultaneously elucidated the relative contributions of environmental factors and plant functional traits to leaf N and P resorption by including more representative tree species than previous studies, expanding our understanding beyond the relatively well-studied tropical and temperate forests. We highlight that prioritizing the fundamental role of traits related to leaf resource capture and defense contributes to the monitoring and modeling of leaf nutrient resorption. Therefore, we need to integrate PES effects on leaf nutrient resorption into the current nutrient cycling model framework to better advance our general understanding of the consequences of shifting tree species composition for nutrient cycles across diverse forests.


Introduction

Nutrient resorption is the process by which plants withdraw nutrients from senescing tissues before litterfall, and it is thus recognized as an adaptation strategy for plant nutrient conservation [ 1 , 2 , 3 , 4 ]. This process prolongs the mean residence time of nutrients, enabling plants to depend less on current nutrient uptake capacity to mitigate the nutrient limitation of plant production, thereby enhancing overall nutrient use efficiency and improving plant fitness in natural ecosystems [ 5 , 6 , 7 ]. Nutrient resorption represents a fundamental pathway of nutrient recycling in forest ecosystems, especially for nitrogen (N) and phosphorus (P), which are heavily involved in plant physiological processes and are integral components of many metabolic compounds, such as proteins and ribonucleic acid (RNA) [ 8 , 9 ]. Plant N and P resorption is estimated to provide, on average, 31% and 40% of the annual plant N and P requirements, respectively, for forests globally [ 10 ]. Nutrient resorption efficiency (RE) and nutrient resorption proficiency (RP) are well recognized as complementary metrics of leaf nutrient resorption [ 3 , 5 , 11 ]. For leaves, RE indicates the proportion of nutrients resorbed before the abscission of green leaves, and RP represents the lowest nutrient concentration in the senescent leaves, which is considered the biochemical limit of resorption [ 5 ]. Therefore, quantitative analysis of leaf N and P resorption is necessary for better understanding the nutrient utilization and adaptation strategies of tree species and for simulating the nutrient cycling process of forest ecosystems.

Decades of research have illustrated environmental conditions (including climate and edaphic properties) and plant functional traits as predominant controls over leaf N and P resorption [ 2 , 4 , 12 ]. However, their relative roles in the leaf N and P resorption process are still elusive. Previous studies have revealed that N resorption efficiency (NRE) decreased as mean annual temperature (MAT) and mean annual precipitation (MAP) increased, while P resorption efficiency (PRE) followed opposite trends [ 12 ]. However, other studies have suggested that both NRE and PRE decreased as MAT and MAP increased [ 2 , 3 ]. Because our current understanding of such associations between leaf N and P resorption and climate is primarily derived from data-integration studies at the global and regional scales, these inconsistent conclusions may be attributed to the diverse species compositions in the respective study areas.

Soil nutrient availability is another major environmental driver of leaf N and P resorption [ 2 , 6 ]. It is recognized as a key factor that mediates the impact of climate on leaf N and P resorption [ 7 , 13 ]. However, there is still an ongoing debate as to whether and how N and P resorption relates directly to soil nutrient availability. One possible reason is that the relative costs of resorbing N and P from senescing tissues back into live ones, rather than taking up new N and P from the soil, remain largely uncertain. In particular, nutrient constraints on plant growth and productivity transition from P limitation at low latitudes to N limitation at high latitudes [ 4 , 14 , 15 ], leading plants to trade off different nutrient acquisition strategies depending on soil nutrient availability in tropical versus temperate regions. Furthermore, nutrient availability and utilization in subtropical transitional forests remain poorly understood, seriously hindering a more general understanding of N and P resorption.

In addition to the above-mentioned environmental controls, plant functional traits are also the intrinsic biotic factors affecting N and P resorption. Current studies on leaf N and P resorption of different tree species mainly focus on leaf habits (i.e., deciduous and evergreen). Previous studies have demonstrated that evergreen tree species in tropical forests had higher NRE and PRE than deciduous counterparts [ 2 , 16 ], but the NRE of evergreen trees was seemingly lower, and the PRE showed no significant difference in temperate forests [ 17 , 18 ]. However, patterns and controls of nutrient resorption in subtropical transitional forests between tropical and temperate forests remain relatively unclear. These unexplained mechanisms call for the urgent need to link functional traits closely related to leaf habit to leaf N and P resorption, such as specific leaf area (SLA), leaf dry matter content (LDMC), and green leaf N and P concentrations [ 19 , 20 ].

The analysis of individual functional traits can help identify specific traits that influence N and P resorption. In some cases, however, a given plant trait may have reached its physiological threshold, rendering it insufficient for mitigating increasing environmental stress [ 21 , 22 , 23 ]. Under such circumstances, the different combinations of intercorrelated physiological and morphological traits have motivated the construction of trait coordination networks [ 24 , 25 ]. Quantifying trait network patterns across leaf habits is essential for understanding the diversity of plant form and function, as well as for identifying the main drivers of ecological processes such as leaf nutrient resorption [ 25 , 26 ]. The key traits related to leaf N and P economics can reflect trait-based adaptive strategies [ 27 ]. The differences between the trait networks of deciduous and evergreen trees can characterize their respective manners of acquiring N and P resources [ 26 , 28 ].

In recent decades, the plant economics spectrum (PES) has been proposed as a way to capture the pervasive trade-offs and coordination in plant nutrient utilization, using the components and coordination relationships of trait networks to form a unified concept across plant taxonomy and growth forms [ 19 , 29 ]. By integrating multivariate functional traits into the PES, ecological trade-offs may become apparent, thus facilitating predictions of important ecosystem processes such as nutrient resorption [ 21 , 30 ]. Nonetheless, current studies examining the association between N and P resorption and the PES have produced inconsistent patterns, or even uncoupled relationships [ 6 , 27 , 31 ]. This may be mainly attributed to particular adaptation strategies for each PES trait associated with N and P economics under environmental selection, forming multiple counteracting pathways affecting resorption processes [ 31 ]. Species with similar or opposite strategies converge or diverge along the axis of trait variation according to the PES. Deciduous trees tend to adopt an acquisitive strategy, whereas evergreen trees tend to exhibit a conservative strategy [ 19 , 28 ]. Furthermore, although there have been significant advances in our understanding of the PES for plant functional traits related to N and P economics at different spatial and biological scales [ 20 , 31 ], quantitative studies on how the PES drives leaf N and P resorption are still lacking [ 20 , 27 ]. Therefore, we expected that the PES would well describe a trait syndrome of leaf habits that is closely associated with N and P resorption.

To date, although the individual effects of environmental factors and plant functional traits on leaf N and P resorption have been well documented, their relative roles at different scales remain largely unclear [ 2 , 16 , 32 ]. Global-scale studies have revealed that MAT and MAP significantly affect leaf N and P resorption; regional-scale studies have demonstrated that soil nutrient availability and leaf life span are dominant controls; and local-scale studies have indicated that leaf N and P resorption are tightly linked with tree species composition and plant functional traits [ 6 , 12 , 16 , 32 ]. More critically, different spatial scales are usually accompanied by multiple differences in environmental conditions and clearly distinct tree species compositions, which complicates the identification of dominant factors for nutrient resorption and further hinders a robust understanding of the regulatory mechanisms of plant nutrient resorption strategies in these diverse environments [ 23 , 32 , 33 ]. This substantial complexity and uncertainty calls for studies that examine plant functional traits simultaneously under continuous, wide-ranging environmental conditions. An elevational gradient provides precisely such a continuously changing environment, bridging extensive gaps in tree species composition and environmental isolation through its gradual change, and is thus an ideal platform for simultaneously exploring plant functional trait and environmental controls over nutrient resorption [ 34 , 35 ].

In this study, we have investigated and quantified relationships between plant functional traits and leaf N and P resorption of broad-leaved tree species along an elevational gradient to get a more nuanced understanding of the drivers of nutrient resorption in subtropical China. To this end, we attempted to test the following hypotheses: (a) leaf habits directly explain more variations in leaf N and P resorption of broad-leaved tree species than elevation, meaning that N and P resorption should be more strongly controlled by morphological and physiological trait combinations than by climate and soil conditions associated with elevational gradients; (b) deciduous and evergreen trees differentiate along an integrated PES, arraying on the acquisitive side and the conservative side, respectively; (c) PES is strongly coupled with N and P resorption, and these mediating associations can be largely attributed to one or more crucial traits.

Materials and methods

Study area

This study was carried out on the southern slope of Shennongjia Mountain (31°19′4″ N, 110°29′44″ E), in the northwest of Hubei Province, central China. The region lies in the transition zone between the (sub-)tropical and temperate climates and is an important biodiversity hotspot in China and globally. The mean annual temperature of this area is 10.6 °C, and annual precipitation ranges from 1306 to 1722 mm. The dominant soil classes are mountain yellow–brown soil and mountain brown soil, with typical subtropical forest vegetation of deciduous and evergreen broad-leaved mixed forest [ 36 ].

Experimental design and field sampling

We conducted a field forest inventory on the southern slope of Shennongjia Mountain in July 2019 to identify common broad-leaved tree species that occurred with high frequency. We set up six sampling sites at intervals of 200 m from 800 to 1800 m above sea level (Fig.  1 ). At the six sites (± 20 m elevation), we sequentially sampled all of the common representative broad-leaved tree species identified through the pre-inventory and previous studies [ 37 , 38 ]. At each site, we followed similar previous studies for sampling [ 26 , 31 ]. Specifically, we treated an individual mature tree as a sampling unit, with three replicate units for each tree species and at least 15 m between any two sampling units. Overall, we sampled 101 broad-leaved species belonging to 33 families and 60 genera, forming 205 elevation–species combinations. Tree species were identified based on taxonomic expertise, with nomenclature following the Flora of China [ 39 ], and all selected tree species were classified as evergreen ( n  = 43) or deciduous ( n  = 58) according to their leaf habit. Detailed sampling information is shown in Table S 1 .

figure 1

Sampling diagram of broad-leaved tree species on the southern slope of Shennongjia Mountain. Asterisks represent the six sampling transects along the elevational gradient. The size of the pie chart represents the total number of tree species as shown in the left-most concentric circle, and the labels in the pie charts represent the numbers of deciduous and evergreen tree species sampled at each elevation. The background of the mountain was designed by pch.vector, Freepik ( http://www.freepik.com/ ). Tree illustrations are from Tracey Saxby, Dylan Taillie, Kim Kraeer, and Lucy Van Essen-Fishman, Integration and Application Network, University of Maryland Center for Environmental Science ( http://ian.umces.edu/imagelibrary/ )

We took tree height as the selection criterion to identify mature and healthy individual trees as sampling units, referring to the Flora of China [ 39 ]. All sampling units were labeled to ensure that the source of each mature leaf sample corresponded to that of the senescent leaf samples. In July and August of 2019 and 2020, we collected 30–50 undamaged, relatively large, and fully expanded fresh mature leaf samples from different canopy positions of each sampling unit, and we cut representative twigs from each sample. We wore gloves during sampling to minimize sample contamination. Based on previous field investigation experience in our target subtropical forests and literature records [ 40 , 41 ], the peak period of leaf abscission for deciduous tree species is October and November in autumn, and that for evergreen tree species is October and November in autumn or April and May in spring. Therefore, in October and November of 2019 and 2020, we collected senescent leaves of all deciduous trees and of some evergreen trees, and the remaining senescent leaves of evergreen trees were collected in April and May 2020, ensuring that all senescent leaf samples came from the peak period of leaf abscission. We collected yellowed senescent leaves by shaking trees or picking up newly fallen leaves from the ground to avoid nutrient leaching from partly decomposed leaf litter on the forest floor, which could lead to an underestimation of leaf N and P concentrations [ 22 , 42 ].

Measurement of plant functional traits

Here, we selected and measured 13 fundamental traits of fresh leaves according to previous standard protocols [ 43 , 44 , 45 ] (Table S 2 ). Mature fresh leaf samples were stored in sealed plastic bags immediately after collection in the field and kept cool until being brought to the laboratory. On the day of sampling, we selected three intact fresh leaves from each sampling unit, separating the petioles for the subsequent physical trait measurements [ 42 , 46 ]. We then dried the remaining leaves in an oven at 65 °C for 48 h for subsequent chemical analysis [ 36 , 46 ]. For each leaf subsample, we measured the relative chlorophyll content (SPAD) using a hand-held portable chlorophyll meter (SPAD-502), repeating the measurement at least three times for each leaf to avoid possible impacts of leaf thickness and hair coats on SPAD. Leaf thickness (LT, mm) was quantified with a vernier caliper, taking care to avoid the main veins. Leaf samples were scanned with a leaf area meter (CI-203, USA) to determine leaf area (LA, cm 2 ). The leaf subsamples were then weighed to determine leaf fresh mass and dried in an oven at 65 °C for 48 h to constant mass to determine leaf dry mass [ 36 , 46 ]. These data were then used to calculate specific leaf area (SLA, leaf area/leaf dry mass, cm 2 g −1 ) and leaf dry matter content (LDMC, leaf dry mass/leaf fresh mass). The length (PL, mm) and width (PW, mm) of each leaf sample petiole were measured with a vernier caliper. The twig dry matter content (TDMC) of each tree was calculated in the same way as for leaves, and twig wood density (TWD, g cm −3 ) was determined using the drainage method [ 47 ].
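The SLA and LDMC definitions above are simple ratios of the measured masses and areas. A minimal sketch in Python (the measurement values below are hypothetical, for illustration only):

```python
def specific_leaf_area(leaf_area_cm2, leaf_dry_mass_g):
    """SLA = leaf area / leaf dry mass (cm^2 g^-1)."""
    return leaf_area_cm2 / leaf_dry_mass_g

def leaf_dry_matter_content(leaf_dry_mass_g, leaf_fresh_mass_g):
    """LDMC = leaf dry mass / leaf fresh mass (dimensionless)."""
    return leaf_dry_mass_g / leaf_fresh_mass_g

# Hypothetical measurements for one leaf subsample
sla = specific_leaf_area(45.0, 0.30)         # 45 cm^2 leaf of 0.30 g dry mass
ldmc = leaf_dry_matter_content(0.30, 0.90)   # 0.30 g dry mass, 0.90 g fresh mass
```

High SLA corresponds to the acquisitive end of the spectrum (thin, cheap leaf area per unit mass), while high LDMC corresponds to the conservative end.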

On the day of sampling, fresh and senescent leaves were dried in the oven at 65 °C for 48 h to constant mass [ 36 , 46 ], and then they were ground with a laboratory mill for analysis of chemical elements. The leaf carbon (C) and N concentrations per unit mass for fresh green leaves (Cgr, Ngr, mg g −1 ) and senescent leaves (Csen, Nsen, mg g −1 ) were determined by an element analyzer (vario EL cube CHNOS Elemental Analyzer, Elementar Analysensysteme GmbH, Hanau, Germany). We also calculated the ratio of leaf C to N concentrations (CNgr) to serve as an indicator trait. The leaf P concentration per unit mass for fresh leaves (Pgr, mg g −1 ) and senescent leaves (Psen, mg g −1 ) were determined using inductively-coupled plasma spectrometry after digestion of samples in HNO 3 (iCAP 6300 ICP-OES Spectrometer, Thermo Fisher, USA). More details are provided in Table S 2 .

Measurement of environmental factors

We measured a total of 11 environmental factors in this study (Table S 3 ). At each elevation site, we used HOBO Onset microclimatic recorders (Onset Computer Corporation, USA) to measure actual microclimatic variables. These included air temperature (AT, °C), soil temperature (ST, °C), and soil moisture (SM, %). At each level, we selected five 2 m × 2 m plots with high frequencies of sampled trees and collected soil samples at a depth of 0–10 cm [ 18 ]. Visible plant materials and stones were removed from all soil samples. Next, the soil samples were air-dried at room temperature and either passed through a 100-mesh sieve for subsequent chemical analysis or frozen at − 20 °C prior to microbiological assay. Soil total N concentration (SN, mg g −1 ) was measured using the Kjeldahl method after digesting samples with H 2 SO 4 , and available N concentration (SNA, mg g −1 ) was determined using the diffusion method after alkaline hydrolysis. Soil total P concentration (SP, mg g −1 ) was quantified after wet digestion with concentrated HF and HClO 4 , and soil available P concentration (SPA, mg g −1 ) was determined in NaHCO 3 extraction, using the molybdenum antimony resistance colorimetry method. Soil organic C concentration (SOC, mg g −1 ) was measured by titrimetry after oxidation with a mixture of potassium dichromate and sulfuric acid. Soil pH was measured using a pH electrode (PB-10, Sartorius, Germany) in a soil–water suspension (soil: water = 1:2.5 [v/w]). Soil microbial biomass C and N concentration (MBC, MBN, mg g −1 ) were quantified using chloroform fumigation (multi N/C 3100, TOC/TNb analyzer, Analytik-Jena AG, Germany). The specific and detailed experimental procedures can be found in Protocols for Standard Biological Observation and Measurement in Terrestrial Ecosystems [ 46 ]. More data details are provided in Table S 3 .

Nutrient resorption calculation

We used two fundamental and complementary metrics to quantify nutrient resorption: resorption efficiency (RE, %) and resorption proficiency (RP, mg g −1 ) [ 2 , 6 ]. RE was calculated as the ratio of the difference in nutrient concentrations between green and senescent leaves to green leaf nutrient concentrations. Considering the mass loss that occurs during leaf senescence, we used a mass loss correction factor (MLCF, 0.78 for evergreen broad-leaved tree species and 0.784 for deciduous broad-leaved tree species) to correct nutrient concentrations in senescent leaves [ 3 ]. RP was measured as the nutrient concentration in senescent leaves, with low litter nutrient concentrations corresponding to high RP and vice versa [ 5 ]. At each elevational site, we calculated the average value of each trait and nutrient resorption for each species.
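The RE calculation with the mass loss correction can be sketched as follows (the MLCF values are those quoted above from [ 3 ]; the leaf concentrations are hypothetical):

```python
# Mass loss correction factors from Vergutz et al. [3], as quoted in the text
MLCF = {"evergreen": 0.78, "deciduous": 0.784}

def resorption_efficiency(green_conc, senescent_conc, leaf_habit):
    """RE (%) = (green - MLCF * senescent) / green * 100.
    Senescent-leaf concentrations are corrected for mass loss during senescence."""
    corrected = senescent_conc * MLCF[leaf_habit]
    return (green_conc - corrected) / green_conc * 100.0

def resorption_proficiency(senescent_conc):
    """RP is the senescent-leaf nutrient concentration itself (mg g^-1);
    lower values indicate higher proficiency."""
    return senescent_conc

# Hypothetical deciduous leaf: Ngr = 18.0 mg g^-1, Nsen = 13.0 mg g^-1
nre = resorption_efficiency(18.0, 13.0, "deciduous")  # ~43.4 %
nrp = resorption_proficiency(13.0)                    # 13.0 mg g^-1
```

Note that without the MLCF correction, RE would be overestimated whenever leaves lose mass during senescence.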

Data analysis

We checked the data for normality and, where necessary, log-transformed them to improve the linearity of relationships before analysis. To explore the effects of leaf habit and elevation on RE and RP, we used two-way analysis of variance (ANOVA) followed by  t -tests and Tukey’s post-hoc test. To assess the relative importance of environmental factors and plant functional traits for leaf N and P resorption, we performed variance partitioning analysis. Based on Pearson correlations, we further examined the intercorrelations among environmental factors and the relationships between individual environmental factors and nutrient resorption using Mantel’s test in the VEGAN package [ 48 ]. We then performed redundancy analysis (RDA), also in the VEGAN package, to test overall associations between environmental factors and nutrient resorption [ 48 ].

Since tree species phylogeny may influence the process of nutrient resorption [ 5 ], we used the V.PHYLOMAKER package [ 49 ], with published phylogenetic trees as backbone trees, to extract the common target nodes by species name and generate our target phylogenetic trees. The PLANTLIST package [ 50 ] was used to match the same canonical family, genus, and species information as the V.PHYLOMAKER package. We used Blomberg’s K and Pagel’s λ to evaluate the phylogenetic signal of each plant trait using the PICANTE and PHYTOOLS packages [ 51 , 52 ]; large K and λ values with p  < 0.05 indicate trait conservatism. We described plant functional trait coordination networks based on statistically significant Pearson correlations among traits and illustrated them using the IGRAPH package [ 53 ]. We adopted two indicators to characterize network centrality: D (degree, the number of adjacent edges for each vertex) and D W (the sum of the edge weights of the adjacent edges for each vertex) [ 54 ].

To integrate these interrelated traits into the PES, we performed a principal component analysis (PCA) using the base function ‘prcomp’, with corresponding illustrations made in the GGBIPLOT package [ 55 ]. To evaluate differences in functional traits across leaf habits, we compared the trait scores of the first two axes (PC1 and PC2) between deciduous and evergreen trees using t -tests. Furthermore, to assess and compare the directions and strengths of the principal components’ effects on N and P resorption across leaf habits, we fitted linear mixed effect models (LMMs) with tree species as a random factor and the PC1 and PC2 scores as fixed effects using the LME4 package [ 56 ]; the LMMs accounted for elevation-species pseudo-replication. To identify and quantify which traits or trait combinations varied in relation to N and P resorption, we refitted similar LMMs with tree species as a random factor and the 13 functional traits mentioned above as fixed effects to evaluate the role of each single trait. We removed variables with a variance inflation factor (VIF) > 10 to avoid potential multicollinearity in the models. Results were visualized using the GGPLOT2 package [ 57 ]. All analyses and illustrations were performed in R 4.0.3 [ 58 ].
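The PCA step on a standardized trait matrix can be sketched in Python via SVD, analogous to R’s ‘prcomp’ with centering and scaling (the study’s analyses were done in R; the trait matrix below is random placeholder data, not the study’s measurements):

```python
import numpy as np

# Placeholder trait matrix: 101 species x 13 traits (random stand-in data)
rng = np.random.default_rng(42)
traits = rng.normal(size=(101, 13))

# Standardize each trait, equivalent to prcomp(..., center = TRUE, scale. = TRUE)
X = (traits - traits.mean(axis=0)) / traits.std(axis=0, ddof=1)

# PCA via singular value decomposition of the standardized matrix
U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = U * s                     # species scores on each principal component
explained = s**2 / (s**2).sum()    # proportion of trait variance per axis

pc1_pc2 = scores[:, :2]            # the two axes used as LMM fixed effects
```

The rows of `Vt` are the trait loadings, whose signs determine which end of each axis reads as acquisitive versus conservative.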

Results

Patterns of leaf N and P resorption across elevations and leaf habits

The overall NRE and PRE of broad-leaved trees were 39.65% and 50.76%, respectively. NRE was significantly higher in deciduous trees than in evergreen trees ( p  < 0.05, Fig.  2 ), but PRE displayed no significant difference across leaf habits ( p  > 0.05, Fig.  2 ). Mean leaf litter N and P concentrations of broad-leaved trees were 13.08 mg g −1 and 0.74 mg g −1 , respectively, generally higher than the thresholds of incomplete resorption proposed by Killingbeck [ 5 ], indicating low N and P resorption proficiency (NRP and PRP). Moreover, deciduous trees exhibited higher N and P concentrations in senescent leaves than did evergreen trees, indicating lower resorption proficiency ( p  < 0.05, Fig.  2 ). By contrast, none of NRE, NRP, PRE, and PRP was significantly associated with elevation ( p  > 0.05, Table  1 , Fig. S 1 ). Furthermore, when considering the interaction of elevation and leaf habit, we found no significant interaction effect on NRE and NRP, whereas PRE and PRP were significantly affected by the interaction (Table  1 ).

figure 2

N and P resorption efficiency (RE) and proficiency (RP) across leaf habits. Asterisks indicate significant differences between evergreen and deciduous tree species (** p  < 0.01, *** p  < 0.001), and NS indicates no significant difference. The dotted lines represent the thresholds for incomplete (N > 10 mg g −1 ; P > 0.5 mg g −1 for evergreen tree species, P > 0.8 mg g −1 for deciduous tree species) and complete (N < 7 mg g −1 ; P < 0.4 mg g −1 for evergreen tree species, P < 0.5 mg g −1 for deciduous tree species) resorption, as defined by Killingbeck (1996)
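The Killingbeck (1996) cutoffs quoted in the caption can be expressed as a small classifier (a sketch using only those cutoffs; senescent-leaf concentrations are in mg g −1):

```python
# Killingbeck (1996) senescent-leaf thresholds (mg g^-1), as quoted in the caption:
# (complete_below, incomplete_above) per nutrient and leaf habit
THRESHOLDS = {
    "N": {"evergreen": (7.0, 10.0), "deciduous": (7.0, 10.0)},
    "P": {"evergreen": (0.4, 0.5), "deciduous": (0.5, 0.8)},
}

def classify_resorption(nutrient, leaf_habit, senescent_conc):
    """Classify resorption as complete, intermediate, or incomplete."""
    complete_below, incomplete_above = THRESHOLDS[nutrient][leaf_habit]
    if senescent_conc < complete_below:
        return "complete"
    if senescent_conc > incomplete_above:
        return "incomplete"
    return "intermediate"

# The mean litter N of 13.08 mg g^-1 reported above falls in the incomplete range
status = classify_resorption("N", "deciduous", 13.08)  # "incomplete"
```

This makes explicit why the reported mean litter concentrations indicate low resorption proficiency: both lie above the incomplete-resorption cutoffs.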

Environmental effects on leaf N and P resorption

We found that environmental factors explained only 0.9% of the variation in leaf N and P resorption, while plant functional traits explained 13.8%, indicating that the relative effects of environmental factors are much weaker than those of plant functional traits (Fig. S 2 ). In addition, the RDA results revealed that environmental factors played a minor role in N and P resorption along the elevational gradient, contributing merely 4.19% of the variation ( R 2  = 4.19%, Fig. S 3 ). Furthermore, Mantel’s test suggested that individual environmental factors do not significantly control N and P resorption ( p  > 0.05, r  < 0.2, Fig.  3 ). Given these minor effects, we mainly focus on the roles of plant functional traits in N and P resorption in the analyses below.

figure 3

Environmental factors play a minor role in nutrient resorption. Correlation between environmental factors and their effects on N and P resorption. Asterisks indicate the significance of effects among environmental factors (* p  < 0.05, ** p  < 0.01, *** p  < 0.001). The edge color (r-size) corresponds to the coefficients between resorption characteristics and environmental variables. Edge width denotes the statistical significance ( p -value) between each trait and environmental variable. AT, Air temperature; ST, Soil temperature; SM, Soil moisture; MBC, Soil microbial biomass carbon content; MBN, Soil microbial biomass nitrogen content; pH, Soil pH; SOC, Soil organic carbon content; SNA, Soil available nitrogen content; SP, Soil total phosphorus content; SPA, Soil available phosphorus content

Associations between traits and leaf N and P resorption

We first constructed a phylogenetic tree of our target tree species to assess phylogenetic affiliation (Fig. S 4 ) and found that plant functional traits did not exhibit relatively high degrees of phylogenetic signal, with Blomberg’s K ranging from 0.142 (SPAD) to 0.374 (TDMC) and Pagel’s λ ranging from 0.082 (LA) to 0.787 (Cgr) (Table S 4 ). This implies largely random trait evolution, with the evolutionary patterns of traits likely influenced by factors other than phylogeny. Most of the plant functional traits correlated significantly with one another ( p  < 0.05, Fig.  4 ). Across all trait networks, SLA exhibited the highest connectedness, followed by PL, TDMC, and green leaf nutrient concentrations. The trait networks of deciduous trees displayed higher transitivity than those of evergreen trees (Table S 5 ).

figure 4

Trait correlation networks of broad-leaved trees for ( a ) all, ( b ) deciduous, and ( c ) evergreen tree species. Traits in the network were represented as vertices and their correlations as the edges linking them. Different node colors represent different types of plant organs or tissues. Node size denotes the degree of connectedness. Red and blue edges represent positive and negative correlations ( p  < 0.05), respectively. SPAD, Chlorophyll relative content; LDMC, Leaf dry matter content; LA, Leaf area; SLA, Specific leaf area; LT, Leaf thickness; PL, Petiole length; PW, Petiole width; TDMC, Twig dry matter content; TWD, Twig wood density; Cgr, Carbon concentration in green leaves; Ngr, Nitrogen concentration in green leaves; Pgr, Phosphorus concentration in green leaves; CNgr, The ratio of carbon to nitrogen in green leaves

Given the close associations among traits, we performed PCA and found that the PES predicted variation in N and P resorption well (Figs.  5 and 6 ). PC1 and PC2 explained 34.2% and 17.2% of trait variation, respectively. The 10 plant functional traits other than PW, TDMC, and Cgr all contributed significantly to PC1 ( p  < 0.05, Table S 6 ). Along the PC1 axis, traits representing a conservative resource utilization strategy (SPAD, LDMC, LT, TWD, and CNgr) occupied the right end, while traits representing an acquisitive strategy (LA, SLA, PL, Ngr, and Pgr) were on the left. PC2 correlated significantly and positively with LA, LT, and PW, which represent a conservative strategy of tree structure and defense, and negatively with LDMC, TDMC, and TWD, which relate to an acquisitive strategy of photosynthesis and nutrient supply. Most evergreen trees fell on the conservative side, while most deciduous species clustered on the acquisitive side ( p  < 0.05, Fig.  5 ).

figure 5

Principal component analysis (PCA) of 13 plant functional traits. The confidence ellipses are drawn at the 90% level. The boxplots at the upper and right margins illustrate the differences in the distributions of tree species across leaf habits along the first and second PCA axes. Asterisks indicate significant differences between evergreen and deciduous trees (*** p  < 0.001). Abbreviations for plant functional traits are provided in Fig.  4

figure 6

The effect size of the first (PC1: photosynthesis and nutrient resource utilization) and second (PC2: structural and functional investment) principal components of 13 plant functional traits of evergreen and deciduous trees on leaf N and P ( a ) resorption efficiency (RE) and ( b ) resorption proficiency (RP) in linear mixed effect models. The horizontal error bars show the 95% confidence intervals of the fixed effect size from linear mixed effect models. Asterisks indicate the significance of the effects of PC1 and PC2 on RE and RP (* p  < 0.05, ** p  < 0.01, *** p  < 0.001). Yellow and green represent deciduous and evergreen trees, respectively

The PC1 axis exerted significant negative effects on NRE, NRP, PRE, and PRP for both deciduous and evergreen trees ( p  < 0.05, Fig.  6 ), except that NRE of evergreen trees was not significantly affected. The PC2 axis showed no obvious impact on deciduous trees, while it positively affected NRE and negatively affected NRP and PRP of evergreen species ( p  < 0.05, Fig.  6 ). For both deciduous and evergreen trees, SLA, N, and P concentrations in green leaves explained N and P resorption the most (Fig.  7 ).

figure 7

Effects of plant functional traits of deciduous (D) and evergreen (E) trees on leaf N and P ( a ) resorption efficiency (RE) and ( b ) resorption proficiency (RP) in linear mixed effect models. The color and size shown indicate the direction and strength of the coefficients from the linear mixed effect models. Symbols indicate the significance of the effects of each trait on RE and RP (· p  < 0.1, * p  < 0.05, ** p  < 0.01, *** p  < 0.001). Abbreviations for plant functional traits are provided in Fig.  4

Here, we focused on leaf N and P resorption in a more representative set of species along an elevational gradient than in past studies [ 22 , 23 , 35 ], considering environmental factors and plant functional traits simultaneously to disentangle their roles. Our field observations confirmed previous modeling predictions that our target subtropical forest is potentially less nutrient-limited [ 14 ]. We found that large differences in leaf N and P resorption arose from the trait combinations associated with the leaf habits of tree species, rather than from the diverse environmental conditions along the elevational gradient. The highly diversified nutrient use and conservation strategies of deciduous and evergreen trees (acquisitive and conservative, respectively) led to pronounced divergence in leaf N and P resorption. These findings highlight that distinct controls on nutrient cycling dominate at different scales and suggest that biogeochemical cycle models should incorporate multiple traits and their coordination rather than individual traits alone.

Deciduous tree species tend to resorb N and P more efficiently but less proficiently than their evergreen counterparts

We found that deciduous trees resorbed N more efficiently than their evergreen counterparts, with only a slight difference in P resorption (Fig.  2 ); this is similar to the findings of Aerts [ 11 ] and Tang et al. [ 59 ], but contrary to the global-scale findings of Yuan and Chen [ 12 ] and Vergutz et al. [ 3 ]. In contrast to reports of insignificant differences in NRP and PRP between deciduous and evergreen trees [ 59 , 60 ], our study demonstrated that deciduous trees were less proficient than evergreen trees in resorbing both N and P (Fig.  2 ), which concurs with Aerts [ 11 ]. On one hand, this may derive from the much higher mature-leaf N and P concentrations in deciduous trees (Ngr = 18.67 mg g −1 , Pgr = 1.39 mg g −1 ) than in evergreen trees (Ngr = 14.16 mg g −1 , Pgr = 1.01 mg g −1 ) in our study. Consequently, leaf litter N and P concentrations, which reflect the nutrient status of mature fresh leaves after resorption, are likely to differ significantly as well. Moreover, the leaves of deciduous tree species are typically composed of more soluble compounds, such as proteins and carbohydrates [ 61 , 62 ]; these compounds break down more easily, enabling more efficient nutrient resorption. In contrast, the leaves of evergreen tree species typically contain more insoluble structural compounds, such as cellulose and lignin, which hinder leaf nutrient resorption [ 61 , 63 ].
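Resorption efficiency and proficiency are standard derived quantities: RE is the percentage of a nutrient withdrawn from leaves before abscission, while RP is simply the nutrient concentration remaining in senesced leaves (lower concentration = more proficient). A minimal sketch using the green-leaf N means reported above; the senesced-leaf values here are illustrative placeholders, not measurements from this study, and no mass-loss correction is applied.

```python
def resorption_efficiency(green, senesced):
    """RE (%): fraction of a nutrient withdrawn from leaves before abscission."""
    return (green - senesced) / green * 100.0

# Green-leaf N means from this study; senesced-leaf values are
# illustrative placeholders, not measured values from the paper.
n_green_deciduous, n_sen_deciduous = 18.67, 9.0   # mg g-1
n_green_evergreen, n_sen_evergreen = 14.16, 8.0   # mg g-1

re_dec = resorption_efficiency(n_green_deciduous, n_sen_deciduous)
re_eve = resorption_efficiency(n_green_evergreen, n_sen_evergreen)
# RP is simply the senesced-leaf concentration: lower means more proficient
print(round(re_dec, 1), round(re_eve, 1))  # → 51.8 43.5
```

Under these placeholder senesced values, the deciduous tree has the higher RE but, because its litter retains more N (9.0 vs. 8.0 mg g−1), the lower proficiency, mirroring the pattern described above.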

On the other hand, leaf N and P resorption in deciduous and evergreen trees varies with leaf longevity and the duration of leaf abscission [ 2 , 6 , 64 ]. Deciduous trees, characterized by frequent leaf abscission and annual regeneration of new leaves, tend to allocate more nutrients to support faster growth and higher leaf production within their comparatively short growing seasons [ 65 , 66 ]. Additionally, owing to their shorter abscission period and the lower temperatures during the abscission season, the reaction time and activity of leaf-decomposing enzymes may be insufficient, reducing soil nutrient availability and necessitating enhanced nutrient resorption to sustain growth and productivity [ 22 ]. In contrast, evergreen trees grow slowly and have long leaf lifespans; they tolerate disadvantageous climatic and nutritional conditions through adaptation strategies with low nutrient requirements, maximizing the growth period while minimizing nutrient loss [ 6 , 17 ]. Overall, because of these differing functional traits and nutrient strategies, N and P resorption varies across leaf habits [ 28 , 67 ].

More notably, a strict classification of tree species into deciduous and evergreen can obscure important ecological variation in traits within each leaf habit, and the prominent distinction between leaf habits is essentially the outcome of many plant functional traits acting in combination [ 64 , 67 , 68 ]. Leaf habit alone is therefore an insufficient classification indicator, and further analyses of the relationships between plant functional traits and N and P resorption are necessary.

Dominant effects of PES on leaf N and P resorption

We found no clear phylogenetic signal in plant functional traits, which aligns with the view that the role of phylogeny is negligible once specific traits are considered [ 31 , 69 ]. Other studies have likewise found that N and P resorption are unaffected by phylogeny [ 70 ]. We therefore posit that the associations between plant functional traits and nutrient resorption in this study were phylogenetically independent. The strong associations among plant functional traits suggest that the evolution of traits is intimately linked [ 19 , 29 ]. The trade-off between resource acquisition and investment is expressed as a combination of traits, that is, the PES. One major dimension of our PES (PC1) represented photosynthesis and nutrient resource utilization, running from an acquisitive strategy on the left to a conservative strategy on the right. The other dimension (PC2) reflected structural and functional investment, grading from a preference for photosynthesis and nutrient acquisition at the bottom to a preference for tree growth and structural conservation at the top (Fig.  5 ). Evergreen and deciduous trees were mainly distributed in the upper-right and lower-left parts of the PCA, respectively, indicating that each leaf habit has clear preferences for trait combinations and the associated nutrient strategies [ 67 ]. These findings support our hypothesis that deciduous and evergreen trees differentiate along an integrated PES, arraying on the acquisitive and conservative sides, respectively.
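The PES axes described above come from a PCA of standardized trait values. The paper performed this in R, but the same decomposition can be sketched in Python with a hypothetical species-by-trait matrix; all data and dimensions here are synthetic stand-ins for the real 13-trait dataset.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
traits = rng.normal(size=(30, 13))            # 30 species x 13 traits (synthetic)
z = StandardScaler().fit_transform(traits)    # standardize: PCA on the correlation structure
pca = PCA(n_components=2).fit(z)
scores = pca.transform(z)                     # per-species positions: PC1 ~ resource
                                              # utilization, PC2 ~ structural investment
loadings = pca.components_                    # trait contributions ("loadings") per axis
print(scores.shape, round(pca.explained_variance_ratio_.sum(), 3))
```

Standardizing before PCA matters here because the 13 traits are measured on very different scales (e.g., SLA vs. LDMC); without it, high-variance traits would dominate both axes.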

Plant functional trait networks and PES approaches can be particularly useful for complex, dynamic processes such as nutrient resorption [ 20 , 31 , 45 ]. We indeed found significant correlations between the PES and resorption of both N and P in our subtropical forest, supporting our hypothesis that trait coordination, captured as multi-trait variation along the PCA axes, collectively determines trade-offs in nutrient resource utilization. Remarkably, photosynthetic and nutrient resource utilization capacity positively affected N and P resorption in both deciduous and evergreen trees, whereas structural and defense investments positively affected N and P resorption only in evergreen trees ( p  < 0.05, Fig.  6 ). The difference in strategies between deciduous and evergreen trees is understandable, as they differ in phenology, leaf lifespan, and nutrient use efficiency [ 17 , 68 ]. Deciduous trees are fast-growing species with cheap tissue investments and rapid resource returns, whereas evergreen trees are slow-growing species with expensive tissue investments and slower returns on that investment [ 6 , 66 ]. Deciduous species follow an acquisitive strategy and accordingly show stronger nutrient resorption efficiency but weaker resorption proficiency. They use N and P exploitatively and have higher carbon assimilation and transpiration rates, which improve photosynthetic efficiency and fix nutrients quickly to enable rapid growth. Deciduous trees are thus characterized by high SLA, high green-leaf N and P concentrations, and cheap but plentiful tissue expenditures (i.e., higher PL and LA) [ 26 , 66 ]. Conversely, evergreen species adopt a conservative strategy, so their N and P resorption is relatively weak. Evergreen trees consume N and P economically but allocate expensive resources to building tissue with thick laminas and high tissue densities (i.e., high LT, LDMC, and TWD) to withstand physical and mechanical damage. Meanwhile, they slow their growth to preserve nutrients and have higher SPAD and CNgr to reduce metabolic costs [ 26 , 71 , 72 ].

While the fundamental role of SLA and green-leaf nutrients in mediating N and P resorption has been increasingly recognized, their explicit roles remain to be explored [ 21 , 45 , 73 ]. Here, we found that SLA and green-leaf nutrient concentrations were the traits most strongly affecting leaf N and P resorption, matching previous reports that SLA underlies the influence of functional traits on leaf N and P utilization [ 21 , 45 , 74 ]. On one hand, although trait correlation networks varied with leaf habit, these traits had markedly higher centrality values for both deciduous and evergreen trees; on the other hand, they were the most important components of the PC1 and PC2 axes. Additionally, in the multivariate linear mixed effects models, SLA, Ngr, and Pgr had significant effects on N and P resorption (Fig.  6 ). Collectively, these findings suggest that plant functional traits related to leaf resource capture and defense play dominant roles in mediating the trait coordination of N and P economics. Specifically, SLA, Ngr, and Pgr correlated positively with N and P resorption efficiency and with senescent-leaf N and P concentrations, in agreement with previous studies [ 3 , 45 , 74 ]. We infer that in our target subtropical forests, evergreen trees show an adaptive response to low resource availability, whereas deciduous trees are adapted to reduce water loss during winter. Evergreen trees produce long-lived leaves with slow metabolism, low SLA, and low nutrient concentrations; deciduous trees, by contrast, produce short-lived leaves with traits tied to rapid metabolism, i.e., high SLA and high nutrient concentrations. This further accounts for the higher nutrient resorption efficiency of deciduous trees.
Finally, we found the notable phenomenon that individual traits displayed weaker relationships with nutrient resorption than the PES did, which suggests that plant functional traits operate in a coordinated manner to produce divergent mechanisms of nutrient utilization under different nutrient conditions. This underscores the importance of considering multiple trait dimensions rather than individual traits one at a time.

Subordinate role of environmental factors in leaf N and P resorption

Neither the efficiency nor the proficiency of N and P resorption was significantly associated with elevation, and only PRE and PRP showed marginally significant differences when the interaction of elevation and leaf habit was considered (Table  1 , Fig. S 1 ). Variance partitioning likewise showed that environmental factors were less important than plant functional traits in driving variation in leaf N and P resorption (Fig. S 2 ). These findings support our hypothesis that, in our target subtropical forests, elevation-related environmental factors directly explain less of the variation in leaf N and P resorption of broad-leaved tree species than do leaf-habit-related plant functional traits. Mantel tests and RDA further verified the negligible effects of environmental factors, including microclimate and soil nutrients, on leaf N and P resorption (Fig.  3 , Fig. S 3 ). Broad-scale studies have indicated that N and P resorption is closely related to soil nutrient availability [ 6 , 18 , 75 ]. At first sight, this seems to imply that tree species from nutrient-poor environments resorb nutrients more efficiently, with the important caveat that the study species came from habitats differing widely in climate and soil. These species have undergone different environmental selection and genetic adaptation processes, so species adaptation and acclimation proceed together [ 35 , 76 ]. There is compelling evidence that plant functional traits more directly reflect plant nutrient utilization strategies and thus explain more of the variation in leaf N and P resorption than do climate and soil along elevational gradients [ 22 , 59 , 77 ].
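The Mantel test referenced above compares two distance matrices with a permutation null. The paper presumably ran it with the vegan package in R; the minimal Python sketch below (synthetic data, Pearson correlation on condensed distances, all variable names assumptions) only illustrates the mechanics.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def mantel(x, y, n_perm=999, seed=0):
    """Permutation Mantel test: correlation between two square distance matrices."""
    rng = np.random.default_rng(seed)
    dx, dy = squareform(x), squareform(y)   # square -> condensed vectors
    r_obs = np.corrcoef(dx, dy)[0, 1]
    n = x.shape[0]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)              # permute rows/cols of one matrix together
        dy_p = squareform(y[np.ix_(p, p)])
        if abs(np.corrcoef(dx, dy_p)[0, 1]) >= abs(r_obs):
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

rng = np.random.default_rng(2)
env = rng.normal(size=(15, 3))        # synthetic plot-level environmental variables
resorb = rng.normal(size=(15, 2))     # synthetic N/P resorption, unrelated to env
d_env = squareform(pdist(env))
d_res = squareform(pdist(resorb))
r, p = mantel(d_env, d_res)
print(round(r, 3), p)
```

Because the synthetic environment and resorption matrices are generated independently, the test should usually return a small correlation with a non-significant p-value, the same qualitative outcome the study reports for its real environmental data.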

Therefore, we confirm our first hypothesis and argue that environmental factors play a minor direct role in N and P resorption along elevational gradients. In other words, trees alter their N and P resorption along elevational gradients through changes in tree species composition, essentially by changing plant trait combinations. Notably, like previous studies, we sampled only the top 0–10 cm of soil, which, although sufficient in most cases [ 18 ], may underestimate the importance of soil properties for leaf nutrient resorption. Sampling deeper soil in future studies could therefore improve our understanding of how soil properties affect nutrient resorption. Moreover, previous studies have indicated that the abundance of trees with conservative strategies, which have lower nutrient resorption efficiency, increases at high altitudes in response to the low temperatures that hamper nutrient mineralization by slowing soil microbial activity [ 20 , 62 , 73 ]. This highlights that nutrient resorption is not merely a physiological process but a nutrient utilization strategy for adapting to diverse environments. Consequently, future research should pay more attention to the trade-offs between above- and below-ground nutrient acquisition pathways when exploring plant trait–resorption linkages, to advance our understanding of nutrient cycling in forest ecosystems.

By sampling a greater number of representative tree species than previous studies, we have simultaneously elucidated the relative contributions of environmental factors and plant functional traits to N and P resorption along an elevational gradient in the subtropical region. We found lower leaf N and P resorption in this subtropical transitional forest than previously assumed, extending current knowledge of leaf N and P resorption beyond the relatively well-studied tropical and temperate forests. Deciduous species tend to resorb both N and P more efficiently but less proficiently than their evergreen counterparts. This implies highly diversified nutrient conservation strategies across leaf habits, with evergreen trees preferring conservative strategies that invest in growth and structure, and deciduous trees preferring acquisitive strategies favoring photosynthetic and nutrient resources. Our work further highlights the overriding role of plant functional traits in this transitional subtropical forest: incorporating the PES, especially SLA and green-leaf nutrient concentrations, into the monitoring and modeling of leaf nutrient resorption could notably improve our understanding of species-driven nutrient cycling. Because nutrient resorption features in many biogeochemical models, these PES effects on leaf nutrient resorption should be more explicitly incorporated into current nutrient cycling model frameworks to better forecast the consequences of shifting tree species composition for biogeochemical cycles in forest ecosystems.

Availability of data and materials

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Liu B, Fan X, Meng D, Liu Z, Gao D, Chang Q, Bai E. Ectomycorrhizal trees rely on nitrogen resorption less than arbuscular mycorrhizal trees globally. Ecol Lett. 2023;00:1–11.

Brant AN, Chen HYH. Patterns and mechanisms of nutrient resorption in plants. Crit Rev Plant Sci. 2015;34(5):471–86.

Vergutz L, Manzoni S, Porporato A, Novais RF, Jackson RB. Global resorption efficiencies and concentrations of carbon and nutrients in leaves of terrestrial plants. Ecol Monogr. 2012;82(2):205–20.

Augusto L, Achat DL, Jonard M, Vidal D, Ringeval B. Soil parent material-A major driver of plant nutrient limitations in terrestrial ecosystems. Glob Change Biol. 2017;23(9):3808–24.

Killingbeck KT. Nutrients in senesced leaves: Keys to the search for potential resorption and resorption proficiency. Ecology. 1996;77(6):1716–27.

Achat DL, Pousse N, Nicolas M, Augusto L. Nutrient remobilization in tree foliage as affected by soil nutrients and leaf life span. Ecol Monogr. 2018;88(3):408–28.

Sun X, Li D, Lü X, Fang Y, Ma Z, Wang Z, Chu C, Li M, Chen H. Widespread controls of leaf nutrient resorption by nutrient limitation and stoichiometry. Funct Ecol. 2023;37(6):1653–62.

Prieto I, Querejeta JI. Simulated climate change decreases nutrient resorption from senescing leaves. Glob Change Biol. 2019;26:1795–807.

Suriyagoda LDB, Ryan MH, Gille CE, Dayrell RLC, Finnegan PM, Ranathunge K, Nicol D, Lambers H. Phosphorus fractions in leaves. New Phytol. 2023;237(4):1122–35.

Cleveland CC, Houlton BZ, Smith WK, Marklein AR, Reed SC, Parton W, Del Grosso SJ, Running SW. Patterns of new versus recycled primary production in the terrestrial biosphere. Proc Natl Acad Sci. 2013;110(31):12733–7.

Aerts R. Nutrient resorption from senescing leaves of perennials: are there general patterns? J Ecol. 1996;84(4):597–608.

Yuan ZY, Chen HYH. Global-scale patterns of nutrient resorption associated with latitude, temperature and precipitation. Glob Ecol Biogeogr. 2009;18(1):11–8.

Suseela V, Tharayil N, Xing B, Dukes JS. Warming and drought differentially influence the production and resorption of elemental and metabolic nitrogen pools in Quercus rubra . Glob Change Biol. 2015;21(11):4177–95.

Du E, Terrer C, Pellegrini AF, Ahlström A, van Lissa CJ, Zhao X, Xia N, Wu X, Jackson RB. Global patterns of terrestrial nitrogen and phosphorus limitation. Nat Geosci. 2020;13(3):221–6.

Deng M, Liu L, Jiang L, Liu W, Wang X, Li S, Yang S, Wang B. Ecosystem scale trade-off in nitrogen acquisition pathways. Nature Ecology & Evolution. 2018;2(11):1724–34.

Xu M, Zhu Y, Zhang S, Feng Y, Zhang W, Han X. Global scaling the leaf nitrogen and phosphorus resorption of woody species: Revisiting some commonly held views. Sci Total Environ. 2021;788: 147807.

Aerts R, Chapin FS. The mineral nutrition of wild plants revisited: A re-evaluation of processes and patterns. Adv Ecol Res. 1999;30:1–67.

Yan T, Zhu J, Yang K. Leaf nitrogen and phosphorus resorption of woody species in response to climatic conditions and soil nutrients: a meta-analysis. Journal of Forestry Research. 2018;29(4):905–13.

Díaz S, Kattge J, Cornelissen JH, Wright IJ, Lavorel S, Dray S, Reu B, Kleyer M, Wirth C, Prentice IC. The global spectrum of plant form and function. Nature. 2016;529(7585):167–71.

Sartori K, Violle C, Vile D, Vasseur F, Villemereuil P, Bresson J, Gillespie L, Fletcher LR, Sack L, Kazakou E. Do leaf nitrogen resorption dynamics align with the slow-fast continuum? A test at the intraspecific level. Funct Ecol. 2022;36(5):1315–28.

Wood TE, Lawrence D, Wells JA. Inter-specific variation in foliar nutrients and resorption of nine canopy-tree species in a secondary neotropical rain forest. Biotropica. 2011;43(5):544–51.

Liu B, Gao D, Chang Q, Liu Z, Fan X, Meng D, Bai E. Leaf enzyme plays a more important role in leaf nitrogen resorption efficiency than soil properties along an elevation gradient. J Ecol. 2022;110(11):2603–14.

Hättenschwiler S, Aeschlimann B, Coûteaux MM, Roy J, Bonal D. High variation in foliage and leaf litter chemistry among 45 tree species of a neotropical rainforest community. New Phytol. 2008;179(1):165–75.

Poorter H, Lambers H, Evans JR. Trait correlation networks: a whole-plant perspective on the recently criticized leaf economic spectrum. New Phytol. 2014;201(2):378–82.

Messier J, Lechowicz MJ, McGill BJ, Violle C, Enquist BJ. Interspecific integration of trait dimensions at local scales: the plant phenotype as an integrated network. J Ecol. 2017;105(6):1775–90.

Li J, Chen X, Niklas KJ, Sun J, Wang Z, Zhong Q, Hu D, Cheng D. A whole-plant economics spectrum including bark functional traits for 59 subtropical woody plant species. J Ecol. 2022;110(1):248–61.

Yu L, Huang Z, Li Z, Korpelainen H, Li C. Sex-specific strategies of nutrient resorption associated with leaf economics in Populus euphratica . J Ecol. 2022;110(9):2062–73.

Zhao YT, Ali A, Yan ER. The plant economics spectrum is structured by leaf habits and growth forms across subtropical species. Tree Physiol. 2017;37(2):173–85.

Weigelt A, Mommer L, Andraczek K, Iversen CM, Bergmann J, Bruelheide H, Fan Y, Freschet GT, Guerrero-Ramírez NR, Kattge J, et al. An integrated framework of plant form and function: the belowground perspective. New Phytol. 2021;232(1):42–59.

Joswig JS, Wirth C, Schuman MC, Kattge J, Reu B, Wright IJ, Sippel SD, Rüger N, Richter R, Schaepman ME. Climatic and soil factors explain the two-dimensional spectrum of global plant trait variation. Nature Ecology & Evolution. 2022;6(1):36–50.

Freschet GT, Cornelissen JHC, van Logtestijn RSP, Aerts R. Substantial nutrient resorption from leaves, stems and roots in a subarctic flora: what is the link with other resource economics traits? New Phytol. 2010;186(4):879–89.

Reed SC, Townsend AR, Davidson EA, Cleveland CC. Stoichiometric patterns in foliar nutrient resorption across multiple scales. New Phytol. 2012;196(1):173–80.

Drenovsky RE, Pietrasiak N, Short TH. Global temporal patterns in plant nutrient resorption plasticity. Glob Ecol Biogeogr. 2019;28(6):728–43.

Pepin NC, Arnone E, Gobiet A, Haslinger K, Kotlarski S, Notarnicola C, Palazzi E, Seibert P, Serafin S, Schöner W, et al. Climate changes and their elevational patterns in the mountains of the world. Reviews of Geophysics. 2022;60:e2020RG000730.

Gerdol R, Iacumin P, Brancaleoni L, Wang F. Differential effects of soil chemistry on the foliar resorption of nitrogen and phosphorus across altitudinal gradients. Funct Ecol. 2019;33(7):1351–61.

Ge J, Ma B, Xu W, Zhao C, Xie Z. Temporal shifts in the relative importance of climate and leaf litter traits in driving litter decomposition dynamics in a Chinese transitional mixed forest. Plant Soil. 2022;477(1–2):679–92.

Xie Z, Shen G. The outstanding universal value and conservation of Hubei Shennongjia. Beijing: Science Press; 2021.

Luo L, Shen G, Xie Z, Yu J. Leaf functional traits of four typical forests along the altitudinal gradients in Mt. Shennongjia. Acta Ecologica Sinica. 2011;31(21):6420–8.

The Editorial Board of Flora of China: Flora of China. Beijing: Science Press; 2004.

Zhang L, Luo T, Zhu H, Daly C, Deng K. Leaf life span as a simple predictor of evergreen forest zonation in China. J Biogeogr. 2010;37(1):27–36.

Ma B, Zhao C, Ge J, Xu W, Xiong G, Shen G, Xie Z. A dataset of 17 dominant plants phenological observation in Shennongjia (2009–2018). China Scientific Data. 2020;5(1):20–30.

Halbritter AH, De Boeck HJ, Eycott AE, Reinsch S, Robinson DA, Vicca S, Berauer B, Christiansen CT, Estiarte M, Grunzweig JM, et al. The handbook for standardized field and laboratory measurements in terrestrial climate change experiments and observational studies (ClimEx). Methods Ecol Evol. 2020;11(1):22–37.

Funk JL, Larson JE, Ames GM, Butterfield BJ, Cavender-Bares J, Firn J, Laughlin DC, Sutton-Grier AE, Williams L, Wright J. Revisiting the Holy Grail: using plant functional traits to understand ecological processes. Biol Rev. 2017;92(2):1156–73.

Pérez-Harguindeguy N, Díaz S, Garnier E, Lavorel S, Poorter H, Jaureguiberry P, Bret-Harte M, Cornwell W, Craine J, Gurvich D. New handbook for standardised measurement of plant functional traits worldwide. Aust J Bot. 2013;61(3):167–234.

Zhang JL, Zhang SB, Chen YJ, Zhang YP, Poorter L, Bonser S. Nutrient resorption is associated with leaf vein density and growth performance of dipterocarp tree species. J Ecol. 2015;103(3):541–9.

China Ecosystem Research Network Science Committee. Protocols for standard biological observation and measurement in terrestrial ecosystems. Beijing: China Environmental Science Press; 2007.

Pérez-Harguindeguy N, Díaz S, Garnier E, Lavorel S, Poorter H, Jaureguiberry P, Bret-Harte MS, Cornwell WK, Craine JM, Gurvich DE, et al. New handbook for standardised measurement of plant functional traits worldwide. Aust J Bot. 2013;61(3):167–234.

Oksanen J, Blanchet FG, Friendly M, Kindt R, Legendre P, McGlinn D, Minchin P, O’Hara R, Simpson G, Solymos P. vegan: Community ecology package. R package version 2.6–4. 2022.

Jin Y, Qian H. V.PhyloMaker: an R package that can generate very large phylogenies for vascular plants. Ecography. 2019;42(8):1353–9.

Zhang J, Liu B, Liu S, Feng Z, Jiang K. plantlist: Looking up the status of plant scientific names based on the Plant List Database, searching the Chinese names and making checklists of plants. R package version 0.8.0. 2022.

Kembel SW, Cowan PD, Helmus MR, Cornwell WK, Morlon H, Ackerly DD, Blomberg SP, Webb CO. Picante: R tools for integrating phylogenies and ecology. Bioinformatics. 2010;26(11):1463–4.

Revell LJ. phytools 2.0: an updated R ecosystem for phylogenetic comparative methods (and other things). PeerJ. 2024;12:e16505.

Csardi G, Nepusz T. The igraph software package for complex network research. InterJournal, Complex Systems. 2006;1695(5):1–9.

Xie J, Wang Z, Li Y. Stomatal opening ratio mediates trait coordinating network adaptation to environmental gradients. New Phytol. 2022;235(3):907–22.

Vu VQ, Friendly M. ggbiplot: A grammar of graphics implementation of biplots. R package version 0.6.2. 2024.

Bates D, Mächler M, Bolker B, Walker S. Fitting Linear Mixed-Effects Models Using lme4. J Stat Softw. 2015;67(1):1–48.

Wickham H. Package ‘ggplot2’: elegant graphics for data analysis. New York: Springer-Verlag; 2016.

R Core Team: R: A language and environment for statistical computing. Vienna:  R Foundation for Statistical Computing; 2023.

Tang L, Han W, Chen Y, Fang J. Resorption proficiency and efficiency of leaf nutrients in woody plants in eastern China. Journal of Plant Ecology. 2013;6(5):408–17.

Kang H, Xin Z, Berg B, Burgess PJ, Liu Q, Liu Z, Li Z, Liu C. Global pattern of leaf litter nitrogen and phosphorus in woody plants. Ann For Sci. 2010;67(8):811.

Estiarte M, Campiolli M, Mayol M, Penuelas J. Variability and limits in resorption of nitrogen and phosphorus during foliar senescence. Plant Commun. 2023;4(2):100503.

Tsujii Y, Onoda Y, Kitayama K. Phosphorus and nitrogen resorption from different chemical fractions in senescing leaves of tropical tree species on Mount Kinabalu, Borneo. Oecologia. 2017;185(2):171–80.

McGroddy ME, Daufresne T, Hedin LO. Scaling of C: N: P stoichiometry in forests worldwide: Implications of terrestrial redfield-type ratios. Ecology. 2004;85(9):2390–401.

Craine JM, Mack MC. Nutrients in senesced leaves: comment. Ecology. 1998;79(5):1818–20.

Estiarte M, Peñuelas J. Alteration of the phenology of leaf senescence and fall in winter deciduous species by climate change: effects on nutrient proficiency. Glob Change Biol. 2015;21(3):1005–17.

Reich PB. The world-wide “fast-slow” plant economics spectrum: a traits manifesto. J Ecol. 2014;102(2):275–301.

Powers JS, Tiffin P. Plant functional type classifications in tropical dry forests in Costa Rica: leaf habit versus taxonomic approaches. Funct Ecol. 2010;24(4):927–36.

van Ommen KA, Douma J, Ordonez JC, Reich PB, Van Bodegom P. Global quantification of contrasting leaf life span strategies for deciduous and evergreen species in response to environmental conditions. Glob Ecol Biogeogr. 2012;21(2):224–35.

Keller AB, Phillips RP. Leaf litter decay rates differ between mycorrhizal groups in temperate, but not tropical, forests. New Phytol. 2019;222(1):556–64.

Huang X, Lu Z, Xu X, Wan F, Liao J, Wang J. Global distributions of foliar nitrogen and phosphorus resorption in forest ecosystems. Sci Total Environ. 2023;871: 162075.

Ouédraogo D-Y, Fayolle A, Gourlet-Fleury S, Mortier F, Freycon V, Fauvet N, Rabaud S, Cornu G, Bénédet F, Gillet J-F, et al. The determinants of tropical forest deciduousness: disentangling the effects of rainfall and geology in central Africa. J Ecol. 2016;104(4):924–35.

Kaproth MA, Fredericksen BW, González-Rodríguez A, Hipp AL, Cavender-Bares J. Drought response strategies are coupled with leaf habit in 35 evergreen and deciduous oak ( Quercus ) species across a climatic gradient in the Americas. New Phytol. 2023;239(3):888–904.

González-Zurdo P, Escudero A, Mediavilla S. N resorption efficiency and proficiency in response to winter cold in three evergreen species. Plant Soil. 2015;394(1):87–98.

Wright IJ, Westoby M. Nutrient concentration, resorption and lifespan: leaf traits of Australian sclerophyll species. Funct Ecol. 2003;17(1):10–9.

Yuan Z, Chen HY. Negative effects of fertilization on plant nutrient resorption. Ecology. 2015;96(2):373–80.

Kobe RK, Lepczyk CA, Iyer M. Resorption efficiency decreases with increasing green leaf nutrients in a global data set. Ecology. 2005;86(10):2780–92.

Sigdel SR, Liang E, Rokaya MB, Rai S, Dyola N, Sun J, Zhang L, Zhu H, Chettri N, Chaudhary RP, et al. Functional traits of a plant species fingerprint ecosystem productivity along broad elevational gradients in the Himalayas. Funct Ecol. 2022;37(2):383–94.

Acknowledgements

We would like to thank Professor Björn Berg at the University of Helsinki for his insightful comments on this manuscript. We are grateful to Dr. Savannah Grace at the University of Florida for her assistance with the English language and grammatical editing of the manuscript.

This study was financed by the National Natural Science Foundation of China (Grants No. 32271641 and No. 31600360).

Author information

Authors and Affiliations

State Key Laboratory of Vegetation and Environment Change, Institute of Botany, Chinese Academy of Sciences, No.20 Nanxincun, Xiangshan, Beijing, 100093, China

Boyu Ma, Jielin Ge, Changming Zhao, Wenting Xu, Kai Xu & Zongqiang Xie

University of Chinese Academy of Sciences, Beijing, 100049, China

Zongqiang Xie

Contributions

BM, JG, and ZX conceived and designed the study. BM and JG conducted the fieldwork and lab experiments with contributions from CZ, WX, KX, and ZX. BM conducted the data analyses. BM and JG led the writing of the manuscript. All authors contributed substantially to the final writing.

Corresponding author

Correspondence to Jielin Ge .

Ethics declarations

Ethics approval and consent to participate

The collection of leaf samples was permitted by Xingshan County Forestry Bureau, Yichang, Hubei Province, China. We declare that this study complies with the guidelines and legislation of the People’s Republic of China, the IUCN Policy Statement on Research Involving Species at Risk of Extinction, and the Convention on the Trade in Endangered Species of Wild Fauna and Flora.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .

Reprints and permissions

About this article

Cite this article.

Ma, B., Ge, J., Zhao, C. et al. Plant economics spectrum governs leaf nitrogen and phosphorus resorption in subtropical transitional forests. BMC Plant Biol 24, 764 (2024). https://doi.org/10.1186/s12870-024-05484-9

Download citation

Received : 04 June 2024

Accepted : 05 August 2024

Published : 10 August 2024

DOI : https://doi.org/10.1186/s12870-024-05484-9


Keywords

  • Elevational gradient
  • Nutrient cycling
  • Nutrient resorption
  • Plant functional trait
  • Trait coordination

BMC Plant Biology

ISSN: 1471-2229


State-of-the-Art Brain Recordings Reveal How Neurons Resonate

Findings shed light on how the human brain turns words into thoughts

Topics covered:

  • Neuroscience


For decades, scientists have focused on how the brain processes information in a hierarchical manner, with different brain areas specialized for different tasks. However, how these areas communicate and integrate information to form a coherent whole has remained a mystery. Now, researchers at the University of California San Diego School of Medicine have brought us closer to solving it by observing how neurons synchronize across the human brain during reading. The findings are published in Nature Human Behaviour and are also the basis of a thesis by UC San Diego School of Medicine doctoral candidate Jacob Garrett.

“How the activity of the brain relates to the subjective experience of consciousness is one of the fundamental unanswered questions in modern neuroscience,” said study senior author Eric Halgren, Ph.D., professor in the Departments of Neurosciences and Radiology at UC San Diego School of Medicine. “If you think about what happens when you read text, something in the brain has to turn that series of lines into a word and then associate it with an idea or an object. Our findings support the theory that this is accomplished by many different areas of the brain activating in sync.”

This synchronization of different brain areas, called “co-rippling” is thought to be essential for binding different pieces of information together to form a coherent whole. In rodents, co-rippling has been observed in the hippocampus, the part of the brain that encodes memories. In humans, Halgren and his colleagues previously observed that co-rippling also occurs across the entire cerebral cortex.

To examine co-rippling at the mechanistic level, Ilya Verzhbinsky, an M.D./Ph.D. candidate in UC San Diego School of Medicine’s Medical Scientist Training Program completing his research in Halgren’s lab, led a study published in the Proceedings of the National Academy of Sciences that looked at what happens to single neurons firing in different cortical areas during ripples. The present study looks at the phenomenon with a wider lens, asking how the many billions of neurons in the cortex are able to coordinate this firing to process information.

“There are 16 billion neurons in the cortex – double the number of people on Earth,” said Halgren. “In the same way a large chorus needs to be organized to sound as a single entity, our brain neurons need to be coordinated to produce a single thought or action. Co-rippling is like neurons singing on pitch and in rhythm, allowing us to integrate information and make sense of the world. Unless they’re co-rippling, these neurons have virtually no effect on each other, but once ripples are present, about two-thirds of neuron pairs in the cortex become synchronized. We were surprised by how powerful the effect was.”

The lines on this diagram of the brain represent connections between various areas of the cerebral cortex involved in language processing. When we read, the neurons in these areas fire in precise synchronicity, a phenomenon known as “co-rippling.” Photo credit: UC San Diego Health Sciences

Co-rippling in the cortex has been difficult to observe in humans due to the limitations of noninvasive brain scanning. To work around this problem, the researchers used intracranial electroencephalography (EEG), which measures the electrical activity of the brain from inside the skull. The team studied a group of 13 patients with drug-resistant epilepsy who were already undergoing intracranial EEG monitoring as part of their care, which provided an opportunity to study brain activity in far more depth than typical noninvasive scans allow.

Participants were shown a series of animal names interspersed with strings of random consonants or nonsense fonts and then asked to press a button to indicate the animal whose name they saw. The researchers observed three stages of cognition during these tests: an initial hierarchical phase in visual areas of the cortex in which the participant could see the word without conscious understanding of it; a second stage in which this information was “seeded” with co-ripples into other areas of the cortex involved in more complex cognitive functions; and a final phase, again with co-ripples, where the information across the cortex is integrated into conscious knowledge and a behavioral response – pressing the button.

The researchers found that throughout the exercise, co-rippling occurred between the various parts of the brain engaged in these cognitive stages, but the rippling was stronger when the participants were reading real words.

The study's findings have potential long-term implications for the treatment of neurological and psychiatric disorders, such as schizophrenia, which are characterized by disruptions in these information integration processes.

"It will be easier to find ways to reintegrate the mind in people with these disorders if we can better understand how minds are integrated in typical, healthy cases,” added Halgren.

More broadly, the study's findings have significant implications for our understanding of the link between brain function and human experience.

"This is a fundamental question of human existence and gets at the heart of the relationship between mind and brain,” said Halgren. “By understanding how our brain's neurons work together, we can gain new insights into the nature of consciousness itself."

Additional co-authors on the study include Erik Kaestner at UC San Diego School of Medicine, Chad Carlson at Medical College of Wisconsin, Werner K. Doyle and Orrin Devinsky at New York University Langone School of Medicine, and Thomas Thesen at Geisel School of Medicine.

The study was funded, in part, by National Institutes of Health (grants MH117155, T32MH020002) and the Office of Naval Research (grant N00014-16-1-2829).

Disclosures: The authors declare no competing interests.



COMMENTS

  1. Econometrics: Definition, Models, and Methods

    Econometrics is the application of statistical and mathematical theories in economics for the purpose of testing hypotheses and forecasting future trends. It takes economic models, tests them ...

  2. PDF Hypothesis Testing in Econometrics

    A general hypothesis about the underlying model can be specified by a subset of Ω. In the classical Neyman-Pearson setup that we consider, the problem is to test the null hypothesis H₀: θ ∈ Ω₀ against the alternative hypothesis H₁: θ ∈ Ω₁. Here, Ω₀ and Ω₁ are disjoint subsets of Ω with union Ω. A hypothesis is called simple if it completely ...

  3. PDF Notes on Econometrics I

    Notes on Econometrics I. Grace McCormack, April 28, 2019. Contents ... hypothesis test: we can use our data to see if we can reject various hypotheses about our data (for example, a hypothesis may be that the mean of a distribution is 7 or that education has no effect on income); estimator: our "best guess" of what the population parameter ...
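The mean-of-7 example in the snippet above can be sketched as a one-sample t-test. This is a minimal illustration with synthetic data, not code from the cited notes; `scipy` is assumed to be available:

```python
import numpy as np
from scipy import stats

# Synthetic sample standing in for observed data (e.g. incomes, test scores).
rng = np.random.default_rng(0)
sample = rng.normal(loc=7.5, scale=2.0, size=100)

# H0: the population mean is 7; H1: it is not.
t_stat, p_value = stats.ttest_1samp(sample, popmean=7.0)

# Reject H0 at the 5% significance level when p < 0.05.
reject_h0 = p_value < 0.05
```

Here `reject_h0` implements the usual 5% decision rule; the hypothesized mean of 7 comes straight from the snippet's example.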

  4. 1.3 The Economists' Tool Kit

    Economics differs from other social sciences because of its emphasis on opportunity cost, the assumption of maximization in terms of one's own self-interest, and the analysis of choices at the margin. ... Testing Hypotheses in Economics. Here is a hypothesis suggested by the model of demand and supply: an increase in the price of gasoline ...

  5. Hypothesis Testing in Econometrics

    Hypothesis Testing in Econometrics. This article reviews important concepts and methods that are useful for hypothesis testing. First, we discuss the Neyman-Pearson framework. Various approaches to optimality are presented, including finite-sample and large-sample optimality. Then, we summarize some of the most important methods, as well as ...

  6. (Pdf) Econometrics Handbook: Basic Definition of Concepts, Principles

    In econometrics, hypothesis testing is commonly used to evaluate the significance of the estimated coefficients and to test specific economic hypotheses. The process involves the following ...

  7. PDF Econometrics

    • Hypothesis Testing • Confidence Intervals • Heteroskedasticity • Nonlinear Regression Models: Polynomials, Logs, and Interaction Terms 2. Panel Data: • Fixed Effects • Clustered HAC SE 3. Internal Validity and External Validity 4. Binary Dependent Variables: LPM, Probit and Logit Model 5.

  8. Hypotheses Testing in Econometrics

    Hypotheses Testing in Econometrics. This course is part of Econometrics for Economists and Finance Practitioners Specialization. Taught in English. 21 languages available. Some content may not be translated. Instructor: Dr Leone Leonida. Enroll for Free. Starts Aug 9. Financial aid available.

  9. PDF Hypothesis Testing in Econometrics

    This paper highlights many of the current approaches to hypothesis testing in the econometrics literature. We consider the general problem of testing in the classical Neyman-Pearson framework, reviewing the key concepts in Section 2. As such, optimality is defined via the power function. Section 3 briefly addresses control of the size of a test.

  10. What Is Econometrics? Back to Basics: Finance & Development ...

    The methodology of econometrics is fairly straightforward. The first step is to suggest a theory or hypothesis to explain the data being examined. The explanatory variables in the model are specified, and the sign and/or magnitude of the relationship between each explanatory variable and the dependent variable are clearly stated.

  11. Econometrics

    Econometrics is an application of statistical methods to economic data in order to give empirical content to economic relationships. [1] More precisely, it is "the quantitative analysis of actual economic phenomena based on the concurrent development of theory and observation, ...

  12. PDF Large Sample Estimation and Hypothesis Testing*

    Ch. 36: Large Sample Estimation and Hypothesis Testing. An objective function Q̂n(θ) such that θ̂ maximizes Q̂n(θ) subject to θ ∈ Θ, (1.1) where Θ is the set of possible parameter values. In the notation, dependence of θ̂ on n and of θ̂ and Q̂n(θ) on the data is suppressed for convenience. This estimator

  13. PDF The Nature and Scope of Econometrics

    It is fair to say that econometrics has become an integral part of training in economics and business. 1.3 THE METHODOLOGY OF ECONOMETRICS. How does one actually do an econometric study? Broadly speaking, econometric analysis proceeds along the following lines. 1. Creating a statement of theory or hypothesis. 2. Collecting data. 3.

  14. Econometrics: Making Theory Count

    The methodology of econometrics is fairly straightforward. The first step is to suggest a theory or hypothesis to explain the data being examined. The explanatory variables in the model are specified, and the sign and/or magnitude of the relationship between each explanatory variable and the dependent variable are clearly stated.

  15. Econometrics

    Econometrics is an area of economics where statistical and mathematical methods are used to analyze economic data. Individuals who are involved with econometrics are referred to as econometricians. Econometricians test economic theories and hypotheses by using statistical tools such as probability, statistical inference, regression analysis ...

  16. Introductory Econometrics Chapter 17: F Tests

    Now, if the null hypothesis is true, then an alternative, simpler model describes the data generation process: Relative to the original model, the one above is a restricted model. We can test the null hypothesis with a new test statistic, the F-statistic, which essentially measures the difference between the fit of the original and restricted ...
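The restricted-versus-unrestricted comparison described above can be sketched directly: fit both models by ordinary least squares and compare their residual sums of squares. This is a minimal numpy sketch with synthetic data, not the chapter's own code; the null hypothesis here is that the coefficient on `x2` is zero:

```python
import numpy as np

# Synthetic data: y depends on x1 but, by construction, not on x2.
rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 + rng.normal(size=n)

def rss(X, y):
    """Residual sum of squares from an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

X_full = np.column_stack([np.ones(n), x1, x2])   # unrestricted model
X_restr = np.column_stack([np.ones(n), x1])      # restricted model under H0

q = 1                  # number of restrictions imposed by H0
k = X_full.shape[1]    # parameters in the unrestricted model
F = ((rss(X_restr, y) - rss(X_full, y)) / q) / (rss(X_full, y) / (n - k))
# Compare F to the F(q, n - k) critical value; a large F rejects H0.
```

Because the restricted fit can never have a smaller residual sum of squares than the unrestricted one, the F-statistic is always nonnegative; it is large only when the restriction noticeably worsens the fit.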

  17. PDF LECTURE 5 Introduction to Econometrics Hypothesis testing

    ON TODAY'S LECTURE: we are going to discuss how hypotheses about coefficients can be tested in regression models; we will explain what significance of coefficients means; and we will learn how to read regression output. Readings for this week: Studenmund, Chapters 5.1-5.4; Wooldridge, Chapter 4.

  18. Econometrics

    Step 1: Make the hypothesis. The first step in econometrics consists of making a hypothesis. In statistics, analysts start with an assumption or potential explanation of a certain phenomenon to ...

  19. Chapter 6

    Chapter 6 - Hypothesis Testing and Confidence Intervals. We reject the null hypothesis of zero relationship between free lunch eligibility (FLE) and academic performance. Our result is the same whether we drop CR4 and invoke the central limit theorem (valid in large samples) or whether we impose CR4 (necessary in small samples).
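A hedged sketch of the kind of coefficient test the chapter describes, using synthetic data in place of the free-lunch-eligibility example and classical homoskedastic (CR4-style) standard errors:

```python
import numpy as np

# Synthetic stand-in for the FLE/performance regression.
rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)
y = 3.0 - 1.5 * x + rng.normal(size=n)

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])   # error variance estimate
cov = sigma2 * np.linalg.inv(X.T @ X)       # classical OLS covariance matrix
se_slope = np.sqrt(cov[1, 1])

# t-statistic for H0: slope = 0; |t| well above 2 rejects at roughly the 5% level.
t_slope = beta[1] / se_slope
```

With heteroskedasticity-robust standard errors (the large-sample route the chapter mentions), only the `cov` line would change; the decision rule on the t-statistic stays the same.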

  20. Hypothesis testing

    This video provides some insight into hypothesis testing in econometrics and statistics. Check out https://ben-lambert.com/econometrics-course-problem-sets-a...

  21. Econometrics For Dummies Cheat Sheet

    Econometrics For Dummies. You can use the statistical tools of econometrics along with economic theory to test hypotheses of economic theories, explain economic phenomena, and derive precise quantitative estimates of the relationship between economic variables. To accurately perform these tasks, you need econometric model-building skills ...

  22. Forming Hypotheses & Questions About Economic Issues

    One of the most practical aspects of economics is the development of questions and hypotheses. A hypothesis is an educated guess or a guess based on evidence and research. We formulate an economic ...

  23. Top 4 Types of Hypothesis in Consumption (With Diagram)

    The following points highlight the top four types of Hypothesis in Consumption. The types of Hypothesis are: 1. The Post-Keynesian Developments 2. The Relative Income Hypothesis 3. The Life-Cycle Hypothesis 4. The Permanent Income Hypothesis. Hypothesis Type # 1. The Post-Keynesian Developments: Data collected and examined in the post-Second World War period (1945-) confirmed the Keynesian ...

  24. Econometrics 105

    Understand why econometrics matters and how it impacts economic research and policy decisions. Upon purchasing, you will get access to a 1-hour video lecture, comprehensive slides, a quiz, and an audio transcript. This course is your chance to: Gain practical insights into econometric modeling and hypothesis testing.

  25. Impacts of the Russia-Ukraine war on energy prices: evidence from OECD

    An important prerequisite for the use of the DID model is that it satisfies the parallel trend hypothesis: the treatment group and the control group must follow the same trend before the war, and changes driven by other factors must be the same. ... Chien-Chiang Lee, Professor at the School of Economics and Management at Nanchang University ...

  26. Plant economics spectrum governs leaf nitrogen and phosphorus

    Leaf nitrogen (N) and phosphorus (P) resorption is a fundamental adaptation strategy for plant nutrient conservation. However, the relative roles that environmental factors and plant functional traits play in regulating N and P resorption remain largely unclear, and little is known about the underlying mechanism of plant functional traits affecting nutrient resorption.

  27. State-of-the-Art Brain Recordings Reveal How Neurons Resonate

    For many years the brain was thought to organize information in a hierarchical manner, but researchers at UC San Diego have a different hypothesis: that the brain organizes information by synchronizing the firing of neurons in different parts of the brain, a phenomenon called "co-rippling." Photo credit: Image_Jungle/iStock