
To What Extent Do Study Habits Relate to Performance?

  • Elise M. Walck-Shannon
  • Shaina F. Rowell
  • Regina F. Frey

*Address correspondence to: Elise M. Walck-Shannon ([email protected]).

Biology Department, Washington University in St. Louis, St. Louis, MO 63130


Center for Integrative Research on Cognition, Learning, and Education (CIRCLE), Washington University in St. Louis, St. Louis, MO 63130

Department of Chemistry, University of Utah, Salt Lake City, UT 84112

Students’ study sessions outside class are important learning opportunities in college courses. However, we often depend on students to study effectively without explicit instruction. In this study, we described students’ self-reported study habits and related those habits to their performance on exams. Notably, in these analyses, we controlled for potential confounds, such as academic preparation, self-reported class absences, and self-reported total study time. First, we found that, on average, students used approximately four active strategies to study and that they spent about half of their study time using active strategies. In addition, both the number of active strategies and the proportion of their study time using active strategies positively predicted exam performance. Second, on average, students started studying 6 days before an exam, but how early a student started studying was not related to performance on in-term (immediate) or cumulative (delayed) exams. Third, on average, students reported being distracted about 20% of their study time, and distraction while studying negatively predicted exam performance. These results add nuance to lab findings and help instructors prioritize study habits to target for change.

INTRODUCTION

One of our goals in college courses is to help students develop into independent, self-regulated learners. This requires students to perform several metacognitive tasks on their own, including setting goals, choosing strategies, monitoring and reflecting on performance, and modifying those steps over time ( Zimmerman, 2002 ). There are many challenges that learners encounter in developing self-regulation. One such challenge is that students often misjudge their learning during the monitoring and reflection phases ( Kornell and Bjork, 2007 ). Often, students feel that they learn more from cognitively superficial tasks than from cognitively effortful tasks. As one example, students may feel that they have learned more if they reread a text passage multiple times than if they are quizzed on that same material ( Karpicke and Roediger, 2008 ). In contrast to students’ judgments, many effortful tasks are highly effective for learning. R. A. Bjork defines these effective, effortful tasks as desirable difficulties ( Bjork, 1994 ). In the present study, we investigated the frequency with which students reported carrying out effortful (active) or superficial (passive) study habits in a large introductory biology course. Additionally, we examined the relationship between study habits and performance on exams while controlling for prior academic preparation and total study time.

THEORETICAL FRAMEWORK

Why Would Difficulties Be Desirable?

During learning, the goal is to generate knowledge or skills that are robustly integrated with related knowledge and easily accessible. Desirable difficulties promote cognitive processes that either aid forming robust, interconnected knowledge or skills or retrieving that knowledge or skill ( Bjork, 1994 ; also see Marsh and Butler, 2013 , for a chapter written for educators). Learners employing desirable difficulties may feel that they put in more effort and make more mistakes, but they are actually realizing larger gains toward long-term learning than learners using cognitively superficial tasks.

Which Study Habits Are Difficult in a Desirable Way?

Study habits can include a wide variety of behaviors, from the amount of time that students study, to the strategies that they use while studying, to the environment in which they study. The desirable difficulties framework ( Bjork and Bjork, 2011 ) describes two main kinds of effective habits that apply to our study: 1) using effortful study strategies or techniques that prompt students to generate something or test themselves during studying and 2) distributing study time into multiple sessions to avoid “cramming” near the exam. In the following two paragraphs, we expand upon these study habits of interest.

The desirable difficulties framework suggests that study strategies whereby students actively generate a product or test themselves promote greater long-term learning than study strategies whereby students passively consume presentations. This is supported by strong evidence for the “generation effect,” in which new knowledge or skills are more robustly encoded and retrieved if you generate a solution, explanation, or summary, rather than looking it up ( Jacoby, 1978 ). A few generative strategies that are commonly reported among students—summarization, self-explanation, and practice testing—are compared below. Summarization is a learning strategy in which students identify key points and combine them into a succinct explanation in their own words. As predicted by the generation effect, evidence suggests that summarization is more effective than rewriting notes (e.g., laboratory study by Bretzing and Kulhavy, 1979 ) or reviewing notes (e.g., classroom study by King, 1992 ). Self-explanation is a learning strategy wherein students ask “how” and “why” questions for material as they are being exposed to the material or shortly after ( Berry, 1983 ). This is one form of elaborative interrogation, a robust memory technique in which learners generate more expansive details for new knowledge to help them remember that information ( Pressley et al. , 1987 ). Self-explanation requires little instruction and seems to be helpful for a broad array of tasks, including recall, comprehension, and transfer. Further, it is more effective than summarization (e.g., classroom study by King, 1992 ), perhaps because it prompts students to make additional connections between new and existing knowledge. Practice testing is supported by evidence of the “testing effect,” whereby the act of retrieving information itself promotes learning ( Karpicke and Roediger, 2008 ). The memory benefits of the “testing effect” can be achieved with any strategy in which students complete problems or practice retrieval without relying on external materials (quizzing, practice testing, flashcards, etc.). In this study, we refer to these strategies together as “self-quizzing.” Self-quizzing is especially effective at improving performance on delayed tests, even as long as 9–11 months after initial learning ( Carpenter, 2009 ). Additionally, in the laboratory, self-quizzing has been shown to be effective on a range of tasks from recall to inference ( Karpicke and Blunt, 2011 ). Overall, research suggests that active, more effortful strategies—such as self-quizzing, summarization, and self-explanation—are more effective for learning than passive strategies—such as rereading and rewriting notes. In this study, we asked whether these laboratory findings would extend to students’ self-directed study time, focusing especially on the effectiveness of effortful (herein, “active”) study strategies.

The second effective habit described by the desirable difficulties framework is to avoid cramming study time near exam time. The “spacing effect” describes the phenomenon wherein, when given equal study time, spacing study out into multiple sessions promotes greater long-term learning than massing (i.e., cramming) study into one study session. Like the “testing effect,” the “spacing effect” is especially pronounced for longer-term tests in the laboratory ( Rawson and Kintsch, 2005 ). Based on laboratory studies, we would expect that, in a course context, cramming study time into fewer sessions close to an exam would be less desirable for long-term learning than distributing study time over multiple sessions, especially if that learning is measured on a delay.

However, estimating spacing in practice is more complicated. Classroom studies have used two main methodologies to estimate spacing, either asking students to report their study schedules directly ( Susser and McCabe, 2013 ) or asking students to choose whether they describe their pattern of study as spaced out or occurring in one session ( Hartwig and Dunlosky, 2012 ; Rodriguez et al. , 2018 ). The results from these analyses have been mixed; in some cases, spacing has been a significant, positive predictor of performance ( Rodriguez et al. , 2018 ; Susser and McCabe, 2013 ), but in other cases it has not ( Hartwig and Dunlosky, 2012 ).

In the present study, we do not claim to measure spacing directly. Lab definitions of spacing are based on studying the same topic over multiple sessions. But, because our exams have multiple topics, some students who start studying early may not revisit the same topic in multiple sessions. Rather, in this study, we measure what we refer to as “spacing potential.” For example, if students study only on the day before the exam, there is little potential for spacing. If, instead, they are studying across 7 days, there is more potential for spacing. We collected two spacing potential measurements: (1) cramming, or the number of days in advance that a student began studying for the exam; and (2) consistency, or the number of days in the week leading up to an exam that a student studied. Based on our measurements, students with a higher spacing potential would exhibit less cramming and study more consistently than students with lower spacing potential. Because not every student with a high spacing potential may actually space out the studying of a single topic into multiple sessions, spacing potential is likely to underestimate the spacing effect; however, it is a practical way to indirectly estimate spacing in practice.

Importantly, not all difficult, or effortful, study tasks are desirable ( Bjork and Bjork, 2011 ). For example, in the present study, we examined students’ level of distraction while studying. Distraction can come in many forms, commonly “multitasking,” or splitting one’s attention among multiple tasks (e.g., watching lectures while also scrolling through social media). However, multitasking has been shown to decrease working memory for the study tasks at hand ( May and Elder, 2018 ). Thus, it may make a task more difficult, but in a way that interferes with learning rather than contributing to it.

In summary, available research suggests that active, effortful study strategies are more effective than passive ones; that cramming is less effective than distributing studying over time; and that focused study is more effective than distracted study. Whether students choose to use these more effective practices during their independent study time is a separate question.

How Do Students Actually Study for Their Courses?

There have been several studies surveying students’ general study habits. When asked free-response questions about their study strategies in general, students listed an average of 2.9 total strategies ( Karpicke et al. , 2009 ). In addition, few students listed active strategies, such as self-quizzing, but many students listed more passive strategies, such as rereading.

There have also been studies asking whether what students actually do while they are studying is related to their achievement. Hartwig and Dunlosky (2012) surveyed 324 college students about their general study habits and found that self-quizzing and rereading were positively correlated with grade point average (GPA). Other studies have shown that using Facebook or texting during study sessions was negatively associated with college GPA ( Junco, 2012 ; Junco and Cotten, 2012 ). While these findings are suggestive, we suspect that the use of study strategies and the relationship between study strategies and achievement may differ from discipline to discipline. The research we have reviewed thus far has been conducted for students’ “general” study habits, rather than for specific courses. To learn about how study habits relate to learning biology, it is necessary to look at study habits within the context of biology courses.

How Do Students Study for Biology Courses?

Several prior qualitative studies carried out within the context of specific biology courses have shown that students often report ineffective habits, such as favoring passive strategies or cramming. Hora and Oleson ( 2017 ) found that, when asked about study habits in focus groups, students in science, technology, engineering, and mathematics (STEM) courses (including biology) used predominantly passive strategies such as reviewing notes or texts, practices that in some cases were unchanged from high school. Tomanek and Montplaisir (2004) found that the majority of 13 interviewed students answered questions on old exams (100% of students) and reread lecture slides (92.3% of students) or the textbook (61.5% of students) to study for a biology exam, but only a small minority participated in deeper tasks such as explaining concepts to a peer (7.7% of students) or generating flashcards for retrieval practice (7.7% of students). We can also learn indirectly about students’ study habits by analyzing what they would change upon reflection. For example, in another study within an introductory biology classroom, Stanton and colleagues ( 2015 ) asked students what they would change about their studying for the next exam. In this context, 13.5% of students said that using active strategies would be more effective for learning, and 55.5% said that they wanted to spend more time studying, many of whom reported following through by studying earlier for the next exam ( Stanton et al. , 2015 ). In the current study, we extended prior research by exploring the prevalence of multiple study habits simultaneously, including the use of active study strategies and study timing, in a large sample of introductory biology students.

In addition to characterizing students’ study habits, we also aimed to show how those study habits were related to performance in a biology classroom. In one existing study, there were positive associations between exam performance and some (but not all) active strategies—such as completing practice exams and taking notes—but no significant associations between performance and some more passive strategies—such as reviewing notes/screencasts or reviewing the textbook ( Sebesta and Bray Speth, 2017 ). In another study, both self-reported study patterns (e.g., spacing studies into multiple sessions or one single session) and self-quizzing were positively related to overall course grade in a molecular biology course ( Rodriguez et al. , 2018 ). We build on this previous work by asking whether associations between performance and a wide variety of study habits still hold when controlling for confounding variables, such as student preparation and total study time.

In this study, we asked whether students actually use cognitive psychologists’ recommendations from the desirable difficulty framework in a specific biology course, and we investigated whether students who reported using those recommendations during studying performed differently on exams than those who did not. We wanted to focus on how students spend their study time, rather than the amount of time that they study, their level of preparation, or engagement. Therefore, we used regression analyses to hold preparation (i.e., ACT math and the course pretest scores), self-reported class absences, and overall study time equal. In this way, we estimated the relationship between particular study habit variables—including the strategies that students use, their timing of using those strategies, and their level of distraction while studying—and exam performance.

Specifically, we tested the following three hypotheses:

1. Students would use a combination of active and passive strategies, but those who used more active study strategies or who devoted more of their study time to active strategies would perform better on their exams than those who used fewer active strategies or devoted less time to active strategies.

2. Students would vary in their study timing, but those with less spacing potential (e.g., those who crammed their study time or studied less consistently) would perform worse, especially on long-term tests (the final exam and course posttest), than students with more spacing potential.

3. Students would report at least some distraction during their studying, but those who reported being distracted for a smaller percentage of their study time would score higher on exams than students who reported being distracted for a larger percentage of their study time.

METHODS

Context and Participants

Data for this study were gathered from a large-enrollment introductory biology course (total class size was 623) during the Spring 2019 semester at a selective, private institution in the Midwest. This course covers basic biochemistry and molecular genetics. It is the first semester of a two-semester sequence. Students who take this course are generally interested in life science majors and/or have pre-health intentions. The data for this study came from an on-campus repository; both the repository and this study have been approved by our institutional review board (IRB ID: 201810007 for this study; IRB ID: 201408004 for the repository). There were no exclusion criteria for the study. Anyone who gave consent and for whom all variables were available was considered for the analyses. However, because the variables were different in each analysis, the sample differed slightly from analysis to analysis. When we compared students who were included in the first hypothesis’s analyses to students who gave consent but were not included, we found no significant differences between participants and nonparticipants for ACT math score, pretest score, year in school, sex, or race (Supplemental Table 1). This suggested that our sample did not dramatically differ from the class as a whole.

Other than those analyses labeled “post hoc,” analyses were preplanned before data were retrieved.

Timeline of Assignments Used in This Study

Figure 1 shows a timeline of the assignments analyzed in this study, which included the exam 1 and 2 reflections (both online), exams 1 and 2 (both in person), the course pre and post knowledge tests (both online), and a cumulative final exam (in person). As shown in the text boxes within Figure 1 , the majority (85.7% [430/502] or greater) of students completed each of the assignments that were used in this study.

FIGURE 1. Timeline of assignments used in this study organized by mode of submission (online vs. in person) and grading (completion vs. accuracy). Exam days are indicated by thick lines. There were other course assignments (including a third exam), but they are not depicted here, because they were not analyzed in this study. Exam return is indicated by dotted lines. Light gray boxes represent weeks that class was in session. The number of consenting students who completed each assignment is indicated in the corresponding assignment box; the total number of consenting students was 502.

Exam Reflections

Students’ responses to exam 1 and 2 study habits reflections were central to all of our hypotheses. In these reflection assignments, students were asked to indicate their study habits leading up to the exam (see Supplemental Item 1 for prompts), including the timing of studying and type of study strategies. The list of strategies for students to choose from came from preliminary analysis of open-response questions in previous years. To increase the likelihood that students accurately remembered their study habits, we made the exercise available online immediately after each exam for 5 days. The reflection assignment was completed before exam grades were returned to students so that their performance did not bias their memory of studying. Students received 0.20% of the total course points for completion of each reflection.

Exams in this course contained both structured-response (multiple-choice, matching, etc.) and free-response questions. The exams were given in person and contained a mixture of lower-order cognitive level (i.e., recall and comprehension) and higher-order cognitive level (i.e., application, analysis, synthesis, or evaluation) questions. Two independent raters (A.B. and G.Y.) qualitatively coded exam questions by cognitive level using a rubric slightly modified from Crowe et al. (2008) to bin lower-order and higher-order level questions. This revealed that 38% of exam points were derived from higher-order questions. Each in-term exam was worth 22.5% of the course grade, and the cumulative final exam was worth 25% of the course grade. To prepare for the exams, students were assigned weekly quizzes and were given opportunities for optional practice quizzing and in-class clicker questions as formative assessment. Students were also provided with weekly learning objectives and access to the previous year’s exams. None of the exam questions were identical to questions presented previously in problem sets, old exams, or quizzes. Additionally, in the first week of class, students were given a handout about effective study strategies that included a list of active study techniques along with content-specific examples. Further, on the first quiz, students were asked to determine the most active way to use a particular resource from a list of options. The mean and SD of these exams, and all other variables used in this analysis, can be found in Supplemental Table 2. Pairwise correlations for all variables can be found in Supplemental Table 3.

Pre and Post Knowledge Test

As described previously ( Walck-Shannon et al. , 2019 ), the pre/posttest is a multiple-choice test that had been developed by the instructor team. The test contained 38 questions, but the percentage of questions correct is reported here for ease of interpretation. The same test was given online in the first week of classes and after class sessions had ended. One percent extra credit was given to students who completed both tests. To encourage students to participate fully, we presented the pre and posttests as learning opportunities in the course to foreshadow topics for the course (pretest) or review topics for the final (posttest). Additionally, we told students that “reasonable effort” was required for credit. Expressing this rationale seemed to be effective for participation rates. While others have found that participation is low when extra credit is offered as an incentive (38%, Padilla-Walker et al. , 2005 ), we found participation rates for the pre- and posttests to be high; 97.4% of students completed the pretest and 85.9% of students completed the posttest.

Statistical Analyses

To test our three hypotheses, we used hierarchical regression. We controlled for potential confounding variables in step 1 and factored in the study variable of interest at step 2 for each model. We performed the following steps to check that the assumptions of linear regression were met for each model: first, we made scatter plots and found that the relationship was roughly linear, rather than curved; second, we plotted the histogram of residuals and found that they were normally distributed and centered around zero; and finally, we checked for multicollinearity by verifying that no two variables in the model were highly correlated (greater than 0.8). All statistical analyses were performed in JMP Pro (SAS Institute).
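To make this two-step procedure concrete, here is a minimal sketch of an equivalent hierarchical regression in Python with pandas and statsmodels (the published analyses were run in JMP Pro). The file name, the column names, and the `prop_active_time` habit variable are hypothetical placeholders, not the study's actual data files.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical analysis table: one row per consenting student.
df = pd.read_csv("analysis_data.csv")

base_vars = ["act_math", "pretest_pct", "classes_missed", "study_hours"]
habit_var = "prop_active_time"  # the study habit variable entered at step 2
df = df.dropna(subset=base_vars + [habit_var, "exam1_pct"])
y = df["exam1_pct"]

# Step 1: base model with potential confounds only.
step1 = sm.OLS(y, sm.add_constant(df[base_vars])).fit()

# Step 2: base model plus the study habit variable of interest.
step2 = sm.OLS(y, sm.add_constant(df[base_vars + [habit_var]])).fit()

# Change in R^2 and the nested-model F-test for the added predictor.
delta_r2 = step2.rsquared - step1.rsquared
f_stat, p_value, df_diff = step2.compare_f_test(step1)
print(f"Delta R^2 = {delta_r2:.3f}, F = {f_stat:.3f}, p = {p_value:.4f}")

# Assumption checks analogous to those described above: residuals roughly
# normal and centered on zero, and no predictor pair correlated above 0.8.
print(step2.resid.describe())
corr = df[base_vars + [habit_var]].corr().abs()
print("Any predictor pair with |r| > 0.8:", ((corr > 0.8) & (corr < 1.0)).any().any())
```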

Base Model Selection

The purpose of the base model was to account for potential confounding variables. Thus, we included variables that we theoretically expected to explain some variance in exam performance based on previous studies. First, based on a meta-analysis ( Westrick et al. , 2015 ) and our own previous study with a different cohort in this same course ( Walck-Shannon et al. , 2019 ), we expected academic preparation to predict performance. Therefore, we included ACT math and biology pretest scores in our base model. Second, the negative relationship between self-reported class absences and exam or course performance is well documented ( Gump, 2005 ; Lin and Chen, 2006 ; Credé et al. , 2010 ). Therefore, we included the number of class sessions missed in our base model. Finally, our research questions focus on how students use their study time, rather than the relationship between study time itself and performance. Because others have found a small but significant relationship between total study time and performance ( Credé and Kuncel, 2008 ), we controlled for the total number of hours spent studying in our base model. In summary, theoretical considerations of confounds prompted us to include ACT math score, biology pretest score, self-reported class absences, and self-reported exam study time as the base for each model.

Calculated Indices

In the following sections we provide descriptions of variables that were calculated from the reported data. If variables were used directly as input by the student (e.g., class absences, percent of study time distracted) or directly as reported by the registrar (e.g., ACT score), they are not listed below.

Total Exam Study Time.

In students’ exam reflections, they were asked to report both the number of hours that they studied each day in the week leading up to the exam and any hours that they spent studying more than 1 week ahead of the exam. The total exam study time was the sum of these study hours.

Number of Active Strategies Used.

To determine the number of active strategies used, we first had to define which strategies were active. To do so, all authors reviewed literature about desirable difficulties and effective study strategies (also reviewed in Bjork and Bjork, 2011 , and Dunlosky et al. , 2013 , respectively). Then, each author categorized the strategies independently. Finally, we met to discuss until agreement was reached. The resulting categorizations are given in Table 1 . Students who selected “other” and wrote a text description were recoded into existing categories. After the coding was in place, we summed the number of active strategies that each student reported to yield the number of active strategies variable.
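As an illustration only, the sketch below computes these first two indices from a hypothetical wide-format export of the reflection responses; every column name is a placeholder, and the six active strategies follow the classification shown in Table 1 below.

```python
import pandas as pd

# Hypothetical reflection export: one row per student; names are placeholders.
reflect = pd.read_csv("exam2_reflection.csv")

# Hours reported for each day in the week before the exam
# (7 = a full week before the exam, 0 = exam day).
week_cols = [f"hours_{d}_days_out" for d in range(8)]
active_cols = ["completed_problem_sets", "completed_old_exams", "self_quizzed",
               "explained_concepts", "synthesized_notes", "made_diagrams"]

# Total exam study time: hours in the final week plus any earlier hours reported.
reflect["total_study_hours"] = (reflect[week_cols].sum(axis=1)
                                + reflect["hours_before_week"].fillna(0))

# Number of active strategies: how many of the six active strategies
# (stored as 0/1 flags) each student selected.
reflect["n_active_strategies"] = reflect[active_cols].sum(axis=1)
```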

TABLE 1. Specific study strategy prompts from exam reflections, listed in order of prevalence of use for exam 1

| Study strategy abbreviation | Study strategy prompt | Type | Exam 1 n | Exam 1 % | Exam 2 n | Exam 2 % |
| Read notes | Read lecture slides or class notes | Passive | 401 | 94.58 | 445 | 94.48 |
| Completed problem sets | Answered the problem set questions | Active | 365 | 86.08 | 393 | 83.44 |
| Completed old exams | Answered the old exam questions | Active | 348 | 82.08 | 341 | 72.4 |
| Self-quizzed | Quizzed myself using ungraded weekly quizzes, quizlets, flashcards | Active | 321 | 75.71 | 317 | 67.3 |
| Explained concepts | Explained concepts to myself or others | Active | 275 | 64.86 | 301 | 63.91 |
| Synthesized notes | Paraphrased or outlined class notes | Active | 257 | 60.61 | 326 | 69.21 |
| Made diagrams | Made my own diagrams or comparison tables from lecture notes | Active | 232 | 54.72 | 312 | 66.24 |
| Reviewed online outside content | Reviewed online content from sources outside of the course (e.g., videos) | Passive | 172 | 40.57 | 240 | 50.96 |
| Watched lecture | Watched lecture videos | Passive | 171 | 40.33 | 248 | 52.65 |
| Read textbook | Read textbook | Passive | 162 | 38.21 | 97 | 20.59 |
| Attended review session/help sessions/office hours | Attended the exam review session, study hall, office hours, etc. | Mixed | 101 | 23.82 | 59 | 12.53 |
| Rewrote notes | Rewrote my class notes word for word | Passive | 69 | 16.27 | 81 | 17.2 |

a The classification of the strategy into active and passive is stated in “type.” Prevalences for exam 1 ( n = 424) and exam 2 ( n = 471) are reported.

Proportion of Study Time Using Active Strategies.

In addition to asking students which strategies they used, we also asked them to estimate the percentage of their study time they spent using each strategy. To calculate the proportion of study time using active strategies, we summed the percentages of time using each of the active strategies, then divided by the sum of the percentages for all strategies. For most students (90.0% for exam 1 and 92.8% for exam 2), the sum of all percentages was 100%. However, there were some students whose reported percentages did not add to 100%. If the summed percentages added to between 90 and 110%, they were still included in analyses. If, for example, the sum of all percentages was 90%, and 40% of that was using active strategies, this would become 0.44 (40/90). If the summed percentages were lower than 90% or higher than 110%, students were excluded from the analyses involving the proportion of active study time index.
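Continuing the hypothetical `reflect` table from the sketch above (with per-strategy time percentages stored in placeholder `pct_*` columns), the normalization might look roughly like this:

```python
import numpy as np

pct_active = reflect[[f"pct_{s}" for s in active_cols]].sum(axis=1)
pct_total = reflect[[c for c in reflect.columns if c.startswith("pct_")]].sum(axis=1)

# Keep students whose reported percentages sum to roughly 100% (90-110%), then
# normalize; e.g., 40% active time out of a 90% total becomes 40/90 = 0.44.
valid = pct_total.between(90, 110)
reflect["prop_active_time"] = np.where(valid, pct_active / pct_total, np.nan)
```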

Number of Days in Advance Studying Began.

In the exam 2 reflection, we asked students to report: 1) their study hours in the week leading up to the exam; and 2) if they began before this time, the total number of hours and date that they began studying. If students did not report any study hours earlier than the week leading up to the exam, we used their first reported study hour as the first day of study. If students did report study time before the week before the exam, we used the reported date that studying began as the first day of study. To get the number of days in advance variable, we counted the number of days between the first day of study and the day of the exam. If a student began studying on exam day, this would be recorded as 0. All students reported some amount of studying.

Number of Days Studied in Week Leading Up to the Exam.

As a measure of studying consistency, we counted the number of days that each student reported studying in the week leading up to exam 2. More specifically, the number of days with nonzero reported study hours was counted to give the number of days studied.
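Both spacing potential indices can be derived from the same hypothetical reflection table; the sketch below is one rough way to do so, with the exam date and the `early_start_date` column as assumed placeholders.

```python
import pandas as pd

exam2_date = pd.Timestamp("2019-03-04")  # placeholder exam date

# Consistency: number of days in the final week with nonzero reported hours.
reflect["days_studied"] = reflect[week_cols].gt(0).sum(axis=1)

# Cramming: days in advance that studying began. Use the reported early start
# date when one was given; otherwise use the earliest nonzero study day in the
# final week (the largest "days out" value with hours > 0).
studied = reflect[week_cols].gt(0)
earliest_in_week = studied.apply(
    lambda row: max((d for d in range(8) if row.iloc[d]), default=0), axis=1
)
early_start = (exam2_date - pd.to_datetime(reflect["early_start_date"],
                                           errors="coerce")).dt.days
reflect["days_in_advance"] = early_start.fillna(earliest_in_week)
```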

RESULTS

The study strategies that students selected, the timing with which they implemented those strategies, and the level of distraction they reported while doing so are described below. We depict the frequencies with which certain study variables were reported and relate those variables to exam 1 and exam 2 scores. For all performance analyses described in this section, we first controlled for the base model described below.

We attempted to control for some confounding variables using a base model, which included preparation (ACT math and course pretest percentage), self-reported class absences, and self-reported total study hours. For each analysis, we included all consenting individuals who responded to the relevant reflection questions for the model. Thus, the sample size and values for the variables in the base model differed slightly from analysis to analysis. For brevity, only the first base model is reported in the main text; the other base models included the same variables and are reported in Supplemental Tables 5A, 7A, and 8A.

The base model significantly predicted exam 1 score and exam 2 score for all analyses. Table 2 shows these results for the first analysis; exam 1: R² = 0.327, F(4, 419) = 51.010, p < 0.0001; exam 2: R² = 0.219, F(4, 466) = 32.751, p < 0.0001. As expected, all individual predictor terms were significant for both exams, with preparation and study time variables positively associated and absences negatively associated. For means and SDs of all continuous variables in this study, see Supplemental Table 2. We found that the preparatory variables were the most predictive, with the course pretest being more predictive than ACT math score. Total study time and class absences were predictive of performance to a similar degree. In summary, our base model accounted for a substantial proportion (32.7%) of the variance due to preparation, class absences, and study time, which allowed us to interpret the relationship between particular study habits and performance more directly.

TABLE 2. Base model for hierarchical regression analyses for exam 1 (n = 424) and exam 2 (n = 471)

| Base model | Exam 1 β | Exam 1 b | Exam 1 SE | Exam 1 R² | Exam 2 β | Exam 2 b | Exam 2 SE | Exam 2 R² |
| Intercept | | 36.441*** | 5.502 | 0.327 | | 17.235* | 7.949 | 0.219 |
| ACT [0–36] | 0.240*** | 0.965*** | 0.240 | | 0.223*** | 1.248*** | 0.243 | |
| Bio. pretest % [0–100] | 0.419*** | 0.339*** | 0.035 | | 0.333*** | 0.364*** | 0.047 | |
| Number of classes missed before exam 1 (0–15) or between exams 1 and 2 (0–12) | −0.160*** | −0.630*** | 0.158 | | −0.101* | −0.450* | 0.183 | |
| Total exam 1 or 2 study hours (≥0) | 0.126** | 0.132** | 0.042 | | 0.154*** | 0.203*** | 0.055 | |

a Standardized β values, unstandardized b values, and standard errors (SE) are reported. Each model’s R² is also reported. Ranges of possible values for each variable are shown in brackets. The values for the base models corresponding to Tables 4–6 vary slightly depending on the students included in that analysis (see Supplemental Table 4). The following symbols indicate significance: * p ≤ 0.05; ** p ≤ 0.01; *** p ≤ 0.001.

Did Students Who Used More Active Study Strategies Perform Better on Exams?

We first investigated the specific study strategies listed in Table 1 . Then, we examined the total amount of time spent on active strategies to test our hypothesis that students who spent more time actively studying performed better on exams. Further, we counted the number of different types of active strategies that students used to test whether students who used a more diverse set of active strategies performed better on exams than those who used fewer active strategies.

Study Strategies Differed in Their Frequency of Use and Effectiveness.

The frequency with which specific study strategies were employed is reported in Table 1 . Almost all students reported reading notes. The next most prevalent strategies were active in nature, including that students (in order of prevalence) completed problem sets, completed old exams, self-quizzed, synthesized notes, explained concepts, and made diagrams. Surprisingly, each of these active strategies was used by the majority of students (54.7–86.1%) for both exams 1 and 2 ( Table 1 ). Less frequently used strategies included those more passive in nature, including that students (in order of prevalence) watched lectures, reviewed online content, read the textbook, and rewrote notes. A relatively infrequent strategy was attending review sessions, office hours, and help sessions. Because student engagement varied dramatically in these different venues, we classified this category as mixed. In summary, our results showed that, after reading notes, the most frequently used strategies were active strategies.

Next, we wondered whether the types of strategies that students reported using were related to exam performance. For these analyses, we added whether a student used a specific strategy (0 or 1) into the model, after controlling for the base model reported in Table 2 . When holding preparation, class absences, and total study time equal, we found that students who reported having completed problem sets, explained concepts, self-quizzed, or attended review sessions earned, on average, 4.0–7.7% higher on both exams 1 and 2 than students who did not report using the strategy (see b unstd. in Table 3 ). Notably, these strategies were active in nature, except for the category attending review session, which was mixed in nature. The remaining active strategies were positively related to performance for only one of the exams. Additionally, we observed that the strategies categorized as passive were either nonsignificant or negatively related to performance on at least one exam. Together, these results suggest that active strategies tended to be positively related to exam performance. In our sample, each of these active strategies was used by the majority (more than half) of the students.

TABLE 3. Relating specific study strategy use to performance on exam 1 (n = 424) and exam 2 (n = 471) when controlling for preparation, class absences, and total study hours (base model)

| Study strategy | Type | Exam 1 b | Exam 1 SE | Exam 1 ΔR² | Exam 2 b | Exam 2 SE | Exam 2 ΔR² |
| Read notes | Passive | −1.225 | 1.954 | 0.001 | −2.634 | 2.705 | 0.002 |
| Completed problem sets | Active | 4.320*** | 1.292 | 0.018*** | 7.678*** | 1.637 | 0.034*** |
| Completed old exams | Active | 1.444 | 1.177 | 0.003 | 5.847*** | 1.373 | 0.029*** |
| Self-quizzed | Active | 4.377*** | 1.042 | 0.028*** | 5.395*** | 1.317 | 0.027*** |
| Explained concepts | Active | 5.538*** | 0.917 | 0.054*** | 6.288*** | 1.280 | 0.038*** |
| Synthesized notes | Active | 1.402 | 0.957 | 0.004 | 2.813* | 1.375 | 0.007* |
| Made diagrams | Active | 1.855* | 0.926 | 0.007* | 2.102 | 1.341 | 0.004 |
| Reviewed online outside content | Passive | −1.839* | 0.910 | 0.007* | −1.731 | 1.256 | 0.003 |
| Watched lecture | Passive | 0.194 | 0.968 | 0.001 | −2.660* | 1.305 | 0.006* |
| Read textbook | Passive | −1.352 | 0.911 | 0.004 | −1.425 | 1.525 | 0.003 |
| Attended review session/help sessions/office hours | Mixed | 4.600*** | 1.028 | 0.031*** | 3.961* | 1.900 | 0.007* |
| Rewrote notes | Passive | −0.853 | 1.232 | 0.001 | −4.177* | 1.659 | 0.011* |
| All strategies | | | | 0.114*** | | | 0.115*** |

a This table summarizes 12 two-step hierarchical models per exam. The first step adjusted for the base model (see Table 2 ), and the second step included one specific study strategy. Unstandardized b values, standard errors (SE), and the change in R² relative to the base model are reported. Full model outputs are reported in Supplemental Table 4. Unstandardized b coefficients correspond to the average difference in exam score between the people who did and did not use the strategy. The following symbols indicate significance: * p ≤ 0.05; ** p ≤ 0.01; *** p ≤ 0.001.

b Relative to a base model with R² = 0.327 that included ACT score, bio. pretest %, number of classes missed before exam 1, and total exam 1 study hours. See Table 2 for base model details.

c Relative to a base model with R² = 0.219 that included ACT score, bio. pretest %, number of classes missed between exams 1 and 2, and total exam 2 study hours. See Table 2 for base model details.
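As a rough sketch of how the per-strategy rows of Table 3 could be generated, the loop below reuses the hypothetical statsmodels setup from the Statistical Analyses section and adds one 0/1 strategy indicator (placeholder column names) at a time on top of the base model. The unstandardized coefficient on the indicator is then the average score difference between students who did and did not use that strategy.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical 0/1 indicator columns, one per strategy prompt in Table 1.
strategy_cols = ["read_notes", "completed_problem_sets", "completed_old_exams",
                 "self_quizzed", "explained_concepts", "synthesized_notes",
                 "made_diagrams", "rewrote_notes"]

base = sm.OLS(y, sm.add_constant(df[base_vars])).fit()

rows = []
for strat in strategy_cols:
    full = sm.OLS(y, sm.add_constant(df[base_vars + [strat]])).fit()
    f_stat, p_value, _ = full.compare_f_test(base)
    rows.append({
        "strategy": strat,
        "b": full.params[strat],      # users vs. non-users score difference
        "SE": full.bse[strat],
        "delta_R2": full.rsquared - base.rsquared,
        "p": p_value,
    })

table3_like = pd.DataFrame(rows)
```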

The Proportion of Time Spent Using Active Strategies Positively Predicted Exam Score.

To further understand how active strategies related to performance, we investigated the proportion of study time that students spent using active strategies. On average, students spent about half of their study time using active strategies for exam 1 (M = 0.524, SD = 0.244) and exam 2 (M = 0.548, SD = 0.243), though values varied from 0 to 1 ( Figure 2 ). Importantly, students who spent a larger proportion of their study time on active strategies tended to perform better on exams 1 and 2. More specifically, after accounting for our base model (Supplemental Table 5A), the proportion of time students spent using active strategies added significant additional predictive value for exam 1, F(1, 416) = 8.770, p = 0.003, ΔR² = 0.014; and exam 2, F(1, 450) = 14.848, p = 0.0001, ΔR² = 0.024. When holding preparation, class absences, and total study time equal, we found that students who spent all of their study time on active strategies scored 5.5% higher and 10.0% higher on exams 1 and 2, respectively, than those who spent none of their study time on active strategies ( Table 4 ). Overall, these two results suggested that, on average, students spent about half of their study time using active strategies and students who devoted more study time to active strategies tended to perform better on exams.

FIGURE 2. Distribution of the proportion of time that students devoted to active study for exam 1 ( n = 422) and exam 2 ( n = 456). Percentages of students in each bin are indicated.

TABLE 4. Relating active study strategy use to performance on exam 1 (n = 422) and exam 2 (n = 456) when controlling for preparation, class absences, and total study hours (base model)

| Study strategy | Exam 1 b | Exam 1 SE | Exam 1 ΔR² | Exam 2 b | Exam 2 SE | Exam 2 ΔR² |
| Proportion of exam study time using active strategies (0–1) | 5.499** | 1.857 | 0.014** | 9.967*** | 2.587 | 0.024*** |
| Number of active strategies used for exam studying (0–6) | 1.858*** | 0.320 | 0.051*** | 2.759*** | 0.420 | 0.066*** |

a This table summarizes two two-step hierarchical models per exam. The first step adjusted for the base model, and the second step included either the proportion of active study time or the number of active strategies. Full model outputs are reported in Supplemental Table 5. Unstandardized b values, standard errors (SE), and the change in R² from the base model (see Supplemental Table 5A) are reported. Ranges of possible values for each variable are given in parentheses. The following symbols indicate significance: * p ≤ 0.05; ** p ≤ 0.01; *** p ≤ 0.001.

b Relative to a base model with R² = 0.322 that included ACT score, bio. pretest %, number of classes missed before exam 1, and total exam 1 study hours. See Supplemental Table 5A for base model details.

c Relative to a base model with R² = 0.238 that included ACT score, bio. pretest %, number of classes missed between exams 1 and 2, and total exam 2 study hours. See Supplemental Table 5A for base model details.

The Number of Active Strategies Used Positively Predicted Exam Score.

We next investigated the number of active strategies used by each student. On average, students used approximately four active strategies for exam 1 (M = 4.212, SD = 1.510) and exam 2 (M = 4.239, SD = 1.501). Very few students used no active strategies, and most students (73%) used four or more active strategies ( Figure 3 ). Further, those students who used more active strategies tended to perform higher on exams 1 and 2. More specifically, after accounting for our base model, the number of active strategies students used added significant additional predictive value for exam 1, F(1, 416) = 33.698, p < 0.0001, ΔR² = 0.024; and exam 2, F(1, 450) = 91.083, p < 0.0001, ΔR² = 0.066. When holding preparation, class absences, and total study time equal, we found that, for each additional active strategy used, students scored 1.9% and 2.8% higher on exams 1 and 2, respectively. Students who used all six active strategies scored 11.1% higher and 16.6% higher on exams 1 and 2, respectively, than those who used no active strategies ( Table 4 ; see Supplemental Table 5A for base model). In summary, students who used a greater diversity of active strategies tended to perform better on exams.

FIGURE 3. Distribution of the number of active strategies that each student used for exam 1 ( n = 422) and exam 2 ( n = 456). Percentages of students in each bin are indicated.

Post Hoc Analysis 1: Are Certain Active Strategies Uniquely Predictive of Performance?

Though it was not part of our planned analyses, the previous finding that the number of active strategies is predictive of performance made us question whether certain active strategies are uniquely predictive or whether they each have overlapping benefits. To test this, we added all six of the active strategies into the model as separate variables in the same step. When doing so, we found that the following active strategies were distinctly predictive for both exams 1 and 2: explaining concepts, self-quizzing, and completing problem sets (Supplemental Table 6). In other words, the portion of exam-score variance explained by certain active strategies was non-overlapping.

Did Study Timing Predict Performance on Immediate or Delayed Exams?

We next characterized students’ spacing potential using two indices: 1) the number of days in advance that studying began (cramming) and 2) the number of days in the week leading up to the exam that a student studied (consistency). Notably, in these results, we adjusted for our base model, which included total study time. In this way, we addressed the timing of studying while holding the total amount of studying equal. We examined outcomes at two different times: exam 2, which occurred shortly after studying; and the cumulative final exam and the posttest, which occurred after about a 5-week delay.

Cramming Was Not a Significant Predictor of Exam 2, the Final Exam, or the Posttest.

While there was variation in the degree of cramming among students, this was not predictive of exam score on either immediate or delayed tests. On average, students began studying 5.842 d in advance of exam 2 (SD = 4.377). About a third of students began studying 0–3 days before the exam, and another third began studying 4–6 days before the exam ( Figure 4 A). When holding preparation, class absences, and total study time equal, we found that the number of days in advance that studying began was not a significant predictor of in-term exam 2, the posttest, or the cumulative final ( Table 5 ; see Supplemental Table 7A for base model).

FIGURE 4. Distributions of spacing potential variables for exam 2 ( n = 450). (A) The distribution of the days in advance that exam 2 studying began (cramming); (B) the distribution of the number of days studied in the week before exam 2 (consistency). Percentages of students in each bin are indicated.

TABLE 5. Relating spacing potential to performance on in-term exam 2 (n = 447), the posttest (n = 392), and the cumulative final exam (n = 450) when controlling for preparation, class absences, and total study hours (base model)

| Study habit | Exam 2 b | Exam 2 SE | Exam 2 ΔR² | Posttest b | Posttest SE | Posttest ΔR² | Final exam b | Final exam SE | Final exam ΔR² |
| Days in advance exam 2 studying began (≥0) | 0.011 | 0.168 | 0.000 | 0.252 | 0.162 | 0.003 | −0.184 | 0.140 | 0.002 |
| Number of days studied in week before exam 2 (0–8) | −0.464 | 0.418 | 0.002 | −0.589 | 0.413 | 0.003 | −0.608 | 0.348 | 0.004 |

a This table summarizes two two-step hierarchical regression models per assessment. The first step adjusted for our base model, and the second step included either the days in advance that studying began or the number of days studied in the week before the exam. Full model outputs are reported in Supplemental Table 7. Unstandardized b values, standard errors (SE), and the change in R² from the base model (Supplemental Table 7A) are reported. Ranges of possible values for each variable are given in parentheses. The following symbols indicate significance: * p ≤ 0.05; ** p ≤ 0.01; *** p ≤ 0.001.

b Relative to a base model with R² = 0.231 that included ACT score, bio. pretest %, number of classes missed before exam 2, and total exam 2 study hours. See Supplemental Table 7A for base model details.

c Relative to a base model with R² = 0.396 that included ACT score, bio. pretest %, number of classes missed before exam 2, and total exam 2 study hours. See Supplemental Table 7A for base model details.

d Relative to a base model with R² = 0.273 that included ACT score, bio. pretest %, number of classes missed before exam 2, and total exam 2 study hours. See Supplemental Table 7A for base model details.

Studying Consistency Was Not a Significant Predictor of Exam 2, the Final Exam, or the Posttest.

While there was variation in how consistently students studied in the week leading up to exam 2, this consistency was not predictive of exam score either immediately or on delayed tests. On average, students studied 5 of the 8 days leading up to the exam (M = 5.082, SD = 1.810 ). Sixteen percent of students studied every day, and no students studied fewer than 2 days in the week leading up to the exam ( Figure 4 B). When holding preparation, class absences, and total study time equal, we found that the number of days studied in the week leading up to the exam was not a significant predictor of in-term exam 2, the posttest, or the cumulative final ( Table 5 ; see Supplemental Table 7A for base model).

In summary, our students varied in both the degree of cramming and the consistency of their studying. Even so, when holding preparation, class absences, and study time equal as part of our base model, neither of these spacing potential measures was predictive of performance on immediate or delayed tests.

Did Students Who Reported Being Less Distracted while Studying Perform Better on Exams?

In addition to the timing of studying, another factor that contextualizes the study strategies is how focused students are during study sessions. In the exam reflections, we asked students how distracted they were while studying. Here, we relate those estimates to exam scores while controlling for our base model of preparation, class absences, and total study time.

Distraction while Studying Was a Negative Predictor of Exam Score.

On average, students reported being distracted during 20% of their exam 1 and exam 2 study time (exam 1: M = 20.733, SD = 16.478; exam 2: M = 20.239, SD = 15.506). Sixty-one percent of students reported being distracted during more than 10% of their study time ( Figure 5 ). Further, students who were more distracted while studying tended to perform lower on exams 1 and 2. After accounting for our base model, the percent of study time that students reported being distracted added significant additional predictive value for exam 1 and exam 2; exam 1: F(1, 429) = 12.365, p < 0.001, ΔR² = 0.019; exam 2: F(1, 467) = 8.942, p = 0.003, ΔR² = 0.015. When holding preparation, class absences, and total study time equal, we found that students who reported being distracted 10% more than other students scored about 1% lower on exams 1 and 2 ( Table 6 ; see Supplemental Table 8A for base model). In summary, this suggests that not only was it common for students to be distracted while studying, but this was also negatively related to exam performance.

FIGURE 5. Distribution of the percent of time students reported being distracted while studying for exam 1 ( n = 435) and exam 2 ( n = 473). Percentages of students in each bin are indicated.

TABLE 6. Relating study distraction to performance on exam 1 (n = 435) and exam 2 (n = 473) when controlling for preparation, class absences, and total study hours (base model)

| Study habit | Exam 1 b | Exam 1 SE | Exam 1 ΔR² | Exam 2 b | Exam 2 SE | Exam 2 ΔR² |
| Percent of time distracted while studying (0–100) | −0.093*** | 0.027 | 0.019*** | −0.119** | 0.040 | 0.015** |

a This table summarizes one two-step hierarchical regression model per exam. The first step adjusted for our base model, and the second step included the percent of study time that students reported being distracted. Full model outputs are reported in Supplemental Table 8. Unstandardized b values, standard errors (SE), and the change in R² from the base model (see Supplemental Table 8A) are reported. The range of possible values is given in parentheses. The following symbols indicate significance: * p ≤ 0.05; ** p ≤ 0.01; *** p ≤ 0.001.

b Relative to a base model with R² = 0.328 that included ACT score, bio. pretest %, number of classes missed before exam 1, and total exam 1 study hours. See Supplemental Table 8A for base model details.

c Relative to a base model with R² = 0.219 that included ACT score, bio. pretest %, number of classes missed between exams 1 and 2, and total exam 2 study hours. See Supplemental Table 8A for base model details.

DISCUSSION

Students’ independent study behaviors are an important part of their learning in college courses. When holding preparation, class absences, and total study time equal, we found that students who spent more time on effortful, active study strategies and used a greater number of active strategies earned higher exam scores. Yet neither students who started studying earlier nor those who studied over more sessions scored differently from students who started later or studied over fewer sessions. Additionally, students who were more distracted while studying tended to perform worse than students who were less distracted. In other words, both the degree to which students employed desirably difficult strategies while studying and their level of focus when doing so were important for performance.

Specific Study Strategies (Hypothesis 1)

Our finding that more time and diversity of active study strategies were associated with higher exam grades was consistent with our hypothesis based on the desirable difficulties framework, laboratory, and classroom research studies ( Berry, 1983 ; King, 1992 ; Bjork, 1994 ; Karpicke and Roediger, 2008 ; Karpicke and Blunt, 2011 ; Hartwig and Dunlosky, 2012 ). Our study brought together lab research about effective strategies with what students did during self-directed study in an actual course. In doing so, we affirmed the lab findings that active strategies are generally effective, but also uncovered further nuances that highlight the value of investigating course-specific study strategies.

First, our study, when combined with other work, may have revealed that certain study strategies are more common than course-nonspecific surveys would predict. For example, compared with surveys of general study habits, our students reported relatively high use of active strategies. We found that the majority of students (73%) reported using four or more active strategies, which was more than the 2.9 average total strategies listed by students in a survey about general study habits at this same institution ( Karpicke et al. , 2009 ). In particular, we found that two-thirds of students reported the active study strategy of self-quizzing. This was considerably higher than what was found in a free-response survey about general habits not focused on a specific course at the same institution ( Karpicke et al. , 2009 ). In this survey, only 10.7% reported self-testing and 40.1% reported using flashcards. This higher frequency of self-quizzing behaviors may be due to a combination of factors in the course, the measures, and/or the students. In this course, we attempted to make self-quizzing easier by reopening the weekly quiz questions near exam time ( Walck-Shannon et al. , 2019 ). We also used a course-specific survey rather than the more general, course-nonspecific surveys used in the previous research. Additionally, it is possible that, in recent years, more students have become more aware of the benefits of self-testing and so are using this strategy with greater frequency. When we compared our frequencies of several categories to analogous categories from course-specific surveys of introductory biology students ( Sebesta and Bray Speth, 2017 ) and molecular biology students ( Rodriguez et al. , 2018 ), we saw similar results. Combined with our work, these studies suggest that when students focused on a particular course, they reported more active strategies than when prompted about studying in general.

Second, the opportunity to control for potential confounding variables in our study, including total study time, allowed us to better estimate the relationships between specific strategies and performance. This approach was important, given concerns raised by others that in classroom studies, benefits of certain strategies, such as explanation, could simply have been due to greater total study time ( Dunlosky et al. , 2013 ). Our results showed that, even when controlling for total study time, self-explanation and other strategies were still significant predictors of performance. This helped illustrate that the strategies themselves, and not just the time on task, were important considerations of students’ study habits.

Third, we were surprised by how predictive the diversity of active strategies was of performance. While we found that the proportion of active study time and the number of active strategies were both important predictors of performance, we found that the latter was a stronger predictor. This suggests that, if total study time were held equal, students who used a larger number of active strategies tended to perform better than those who used a smaller number of active strategies. This finding also deserves to be followed up in subsequent studies to determine whether any of the active strategies that students use tend to co-occur in a “suite,” and whether any of those suites are particularly predictive of performance. We suspect that there is some limit to the benefit of using diverse strategies, as some strategies take a considerable amount of time to master ( Bean and Steenwyk, 1984 ; Armbruster et al. , 1987 ; Wong et al. , 2002 ), and students need to devote enough time to each strategy to learn how to use it well.

Additionally, we found that particular active study strategies—explanation, self-quizzing, and answering problem sets—were uniquely predictive of higher performance in a biology course context. Undergraduate biology courses introduce a large amount of discipline-specific terminology, in addition to requiring the higher-order prediction and application skills found among STEM courses ( Wandersee, 1988 ; Zukswert et al. , 2019 ). This is true for the course studied here, which covers biochemistry and molecular genetics, and the assessments that we used as our outcomes reflect this combination of terminology, comprehension, prediction, and application skills. Our results support the finding that active, effortful strategies can be effective on a variety of cognitive levels ( Butler, 2010 ; Karpicke and Blunt, 2011 ; Smith and Karpicke, 2014 ); and this work extends support of the desirable difficulties framework into biology by finding unique value for distinct generative or testing strategies.

Study Timing (Hypothesis 2)

Inconsistent with our second hypothesis that students with less spacing potential would perform worse than students with more spacing potential, we found no relationship between study timing and performance on in-term or cumulative exams. Because we knew that spacing was difficult to estimate, we analyzed two spacing potential indices: the degree of cramming (i.e., the number of days in advance that students started studying) and the consistency of studying (i.e., the number of days studied in the week leading up to the exam). We controlled for total study time, because the spacing effect is defined as identical study time spread over multiple sessions rather than over fewer, massed sessions. When doing so, neither of these measures was significantly related to performance.
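
For concreteness, the sketch below shows how the two spacing-potential indices described above could be computed from self-reported study dates. It is a minimal sketch with made-up dates, not the authors' survey format or code.

```python
# Minimal sketch (illustrative dates only): deriving the two spacing-potential indices
# from a student's self-reported study dates for one exam.
from datetime import date

exam_date = date(2020, 3, 6)                          # illustrative exam date
study_dates = [date(2020, 2, 29), date(2020, 3, 2),
               date(2020, 3, 4), date(2020, 3, 5)]    # days the student reported studying

# Degree of cramming: how many days before the exam studying began.
days_started_in_advance = (exam_date - min(study_dates)).days               # 6

# Consistency: number of distinct days studied in the week before the exam.
days_studied_last_week = len({d for d in study_dates
                              if 0 < (exam_date - d).days <= 7})             # 4
```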

There are a few possible explanations for why we may not have observed a “spacing effect.” First, as explained in the Introduction, we measured spacing potential. Students with high spacing potential may have arranged their sessions to mass their studying of each topic rather than space it out, which would lead us to underestimate the spacing effect. Second, students likely studied again before our cumulative final. This delayed test is where we expected to see the largest effect, and restudying may have masked any spacing effect that did exist. Third, we asked students to directly report their study time, and some may have struggled to remember the exact dates that they studied. While this approach has the advantage of yielding more sensitive and direct measures of students' spacing potential than asking students to judge for themselves whether they spaced or crammed their studying ( Hartwig and Dunlosky, 2012 ; Rodriguez et al., 2018 ), students who did not remember their study schedules may have reported idealized schedules with greater spacing, rather than realistic schedules with more cramming ( Susser and McCabe, 2013 ), thus minimizing the expected spacing effect.

Despite the lack of a spacing effect in our data, we certainly do not advocate that students cram their studying, as students who started studying earlier likely also tended to study more. Those same students may also have felt less stressed and gotten more sleep. In other words, even though our estimate of spacing potential did not capture performance benefits, the benefits of spacing may be multifaceted, extending to well-being in ways that our study did not measure.

Distraction (Hypothesis 3)

Consistent with our third hypothesis, we found a negative relationship between distraction while studying and performance. This finding agrees with the few available studies that related distraction during self-directed, out-of-class studying to grades, but differs in that our students reported a lower level of distraction than in other published studies ( Junco, 2012 ; Junco and Cotten, 2012 ). One possible reason for our low distraction estimate is that students inadvertently underestimated their distraction, as has been reported previously ( Kraushaar and Novak, 2010 ). In addition, some students may not have counted multitasking as a type of distraction, and this habit of multitasking while studying will likely be difficult to change, as students tend to underestimate how negatively it affects performance ( Calderwood et al., 2016 ).

Implications for Instruction

To encourage students to use more active study strategies, try asking students to turn in the output of the strategy as a low-stakes assignment. For example, to encourage self-explanation, you could ask students to turn in a short video of themselves verbally explaining a concept for credit. To encourage practice quizzing, try to publish or reopen quizzes near exam time ( Walck-Shannon et al. , 2019 ) and ask students to complete them for credit.

To encourage students to use active study strategies effectively, model those strategies during class. For example, when doing a clicker question, explicitly state your approach to answering the question and self-explain your reasoning out loud. This also gives you an opportunity to add the rationale for why certain strategies are effective or to provide advice about carrying them out. In addition to modeling a strategy, remind students to use it often. Simply prompting students to explain their reasoning to their neighbors or themselves during a clicker question helps shift students' conversations toward explanation ( Knight et al., 2013 ).

To encourage students to stay focused during studying, provide voluntary, structured study sessions. These could include highly structured peer-led team-learning sessions during which students work through a packet of new questions ( Hockings et al. , 2008 ; Snyder et al. , 2015 ) or more relaxed sessions during which students work through problems that have already been provided ( Kudish et al. , 2016 ).

Limitations and Future Directions

There are multiple caveats to these analyses, which may be addressed in future studies. First, our data about study behaviors were self-reported. While we opened the reflection exercise immediately after the exam to mitigate students forgetting their behaviors, some may still have misremembered. Further, some students may not have forgotten but rather were unable to accurately self-report certain behaviors. As stated earlier, one behavior that is especially prone to this is distraction. Similarly, we suspect that some students had trouble estimating the percentage of study time they spent using each strategy, while their binary report of whether they used it or not may be more accurate. This may be one reason why the number of active strategies had more explanatory power than the percentage of time using active strategies. Separately, although students were told that we would not analyze their responses until after the semester had ended, some may have conformed their responses to what they thought was desirable. However, there is not strong evidence that students conform their study habit responses to their beliefs about what is effective; for example, Blasiman and colleagues found that, even though students believed rereading was an ineffective strategy, they still reported using it more than other strategies ( Blasiman et al., 2017 ). Another limitation of self-reporting is that we lack knowledge of the exact, nuanced behaviors that a student carried out. Thus, a student who chose a strategy that we defined as active—such as “completing problem sets”—may have actually performed more passive behaviors. Specifically, while we did use verbal reminders and delay the release of answer keys to encourage students to complete the problem sets and old exams before looking at the answers, some students may have looked up answers prematurely or read passively through portions of the key. These more passive behaviors would lead us to underestimate the importance of active strategies.

A second limitation is that these data were collected from a course at a selective, research-intensive institution and may not be applicable to all student populations. A third limitation is that our analyses are correlational. While we carefully selected potential confounds, there may be other important confounding variables that we did not account for. Finally, it was beyond the scope of this study to ask whether certain subgroups of students employed different strategies or whether strategies were more or less predictive of performance for different subgroups of students.

Despite these caveats, the main point is clear: students' course-specific study habits predict their performance. While many students in our sample reported using effective strategies, some still had room to improve, especially with their level of distraction. One open question that remains is how we can encourage these students to change their study habits over time.

ACKNOWLEDGMENTS

We would like to thank April Bednarski, Kathleen Weston-Hafer, and Barbara Kunkel for their flexibility and feedback on the exam reflection exercises. We would also like to acknowledge Grace Yuan and Ashton Barber for their assistance categorizing exam questions. This research was supported in part by an internal grant titled “Transformational Initiative for Educators in STEM,” which aimed to foster the adoption of evidence-based teaching practices in science classrooms at Washington University in St. Louis.

  • Armbruster, B. B., Anderson, T. H., & Ostertag, J. (1987). Does text structure/summarization instruction facilitate learning from expository text? Reading Research Quarterly, 22(3), 331.
  • Bean, T. W., & Steenwyk, F. L. (1984). The effect of three forms of summarization instruction on sixth graders' summary writing and comprehension. Journal of Reading Behavior, 16(4), 297–306.
  • Berry, D. C. (1983). Metacognitive experience and transfer of logical reasoning. Quarterly Journal of Experimental Psychology Section A, 35(1), 39–49.
  • Bjork, E. L., & Bjork, R. A. (2011). Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning. In Gernsbacher, M. A., Pew, R. W., Hough, L. M., & Pomerantz, J. R. (Eds.) & FABBS Foundation, Psychology and the real world: Essays illustrating fundamental contributions to society (pp. 56–64). New York, NY: Worth Publishers.
  • Bjork, R. A. (1994). Memory and metamemory considerations in the training of human beings. In Metcalfe, J., & Shimamura, A. P. (Eds.), Metacognition: Knowing about knowing (pp. 185–205). Cambridge, MA: MIT Press.
  • Blasiman, R. N., Dunlosky, J., & Rawson, K. A. (2017). The what, how much, and when of study strategies: Comparing intended versus actual study behaviour. Memory, 25(6), 784–792.
  • Bretzing, B. H., & Kulhavy, R. W. (1979). Notetaking and depth of processing. Contemporary Educational Psychology, 4(2), 145–153.
  • Butler, A. C. (2010). Repeated testing produces superior transfer of learning relative to repeated studying. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36(5), 1118–1133.
  • Calderwood, C., Green, J. D., Joy-Gaba, J. A., & Moloney, J. M. (2016). Forecasting errors in student media multitasking during homework completion. Computers and Education, 94, 37–48.
  • Carpenter, S. K. (2009). Cue strength as a moderator of the testing effect: The benefits of elaborative retrieval. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(6), 1563–1569.
  • Credé, M., & Kuncel, N. R. (2008). Study habits, skills, and attitudes: The third pillar supporting collegiate academic performance. Perspectives on Psychological Science, 3(6), 425–453.
  • Credé, M., Roch, S. G., & Kieszczynka, U. M. (2010). Class attendance in college. Review of Educational Research, 80(2), 272–295.
  • Crowe, A., Dirks, C., & Wenderoth, M. P. (2008). Biology in Bloom: Implementing Bloom's taxonomy to enhance student learning in biology. CBE—Life Sciences Education, 7(4), 368–381.
  • Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students' learning with effective learning techniques. Psychological Science in the Public Interest, 14(1), 4–58.
  • Gump, S. E. (2005). The cost of cutting class: Attendance as a predictor of success. College Teaching, 53(1), 21–26.
  • Hartwig, M. K., & Dunlosky, J. (2012). Study strategies of college students: Are self-testing and scheduling related to achievement? Psychonomic Bulletin & Review, 19(1), 126–134.
  • Hockings, S. C., DeAngelis, K. J., & Frey, R. F. (2008). Peer-led team learning in general chemistry: Implementation and evaluation. Journal of Chemical Education, 85(7), 990.
  • Hora, M. T., & Oleson, A. K. (2017). Examining study habits in undergraduate STEM courses from a situative perspective. International Journal of STEM Education, 4(1), 1.
  • Jacoby, L. L. (1978). On interpreting the effects of repetition: Solving a problem versus remembering a solution. Journal of Verbal Learning and Verbal Behavior, 17(6), 649–667.
  • Junco, R. (2012). Too much face and not enough books: The relationship between multiple indices of Facebook use and academic performance. Computers in Human Behavior, 28(1), 187–198.
  • Junco, R., & Cotten, S. R. (2012). No A 4 U: The relationship between multitasking and academic performance. Computers & Education, 59(2), 505–514.
  • Karpicke, J. D., & Blunt, J. R. (2011). Retrieval practice produces more learning than elaborative studying with concept mapping. Science, 331(6018), 772–775.
  • Karpicke, J. D., Butler, A. C., & Roediger, H. L. (2009). Metacognitive strategies in student learning: Do students practise retrieval when they study on their own? Memory, 17(4), 471–479.
  • Karpicke, J. D., & Roediger, H. L. (2008). The critical importance of retrieval for learning. Science, 319(5865), 966–968.
  • King, A. (1992). Comparison of self-questioning, summarizing, and notetaking-review as strategies for learning from lectures. American Educational Research Journal, 29(2), 303.
  • Knight, J. K., Wise, S. B., & Southard, K. M. (2013). Understanding clicker discussions: Student reasoning and the impact of instructional cues. CBE—Life Sciences Education, 12(4), 645–654.
  • Kornell, N., & Bjork, R. A. (2007). The promise and perils of self-regulated study. Psychonomic Bulletin & Review, 14(2), 219–224.
  • Kraushaar, J. M., & Novak, D. (2010). Examining the effects of student multitasking with laptops during the lecture. Journal of Information Systems Education, 21(2), 241–251.
  • Kudish, P., Shores, R., McClung, A., Smulyan, L., Vallen, E. A., & Siwicki, K. K. (2016). Active learning outside the classroom: Implementation and outcomes of peer-led team-learning workshops in introductory biology. CBE—Life Sciences Education, 15(3), ar31.
  • Lin, T. F., & Chen, J. (2006). Cumulative class attendance and exam performance. Applied Economics Letters, 13(14), 937–942.
  • Marsh, E. J., & Butler, A. C. (2013). Memory in educational settings. In Reisberg, D. (Ed.), Oxford library of psychology. The Oxford handbook of cognitive psychology (pp. 299–317). Washington, DC: Oxford University Press.
  • May, K. E., & Elder, A. D. (2018). Efficient, helpful, or distracting? A literature review of media multitasking in relation to academic performance. International Journal of Educational Technology in Higher Education, 15(1), 13.
  • Padilla-Walker, L. M., Thompson, R. A., Zamboanga, B. L., & Schmersal, L. A. (2005). Extra credit as incentive for voluntary research participation. Teaching of Psychology, 32(3), 150–153.
  • Pressley, M., McDaniel, M. A., Turnure, J. E., Wood, E., & Ahmad, M. (1987). Generation and precision of elaboration: Effects on intentional and incidental learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13(2), 291–300.
  • Rawson, K. A., & Kintsch, W. (2005). Rereading effects depend on time of test. Journal of Educational Psychology, 97(1), 70–80.
  • Rodriguez, F., Rivas, M. J., Matsumura, L. H., Warschauer, M., & Sato, B. K. (2018). How do students study in STEM courses? Findings from a light-touch intervention and its relevance for underrepresented students. PLoS ONE, 13(7), e0200767.
  • Sebesta, A. J., & Bray Speth, E. (2017). How should I study for the exam? Self-regulated learning strategies and achievement in introductory biology. CBE—Life Sciences Education, 16(2), ar30.
  • Smith, M. A., & Karpicke, J. D. (2014). Retrieval practice with short-answer, multiple-choice, and hybrid tests. Memory, 22(7), 784–802.
  • Snyder, J. J., Elijah Carter, B., & Wiles, J. R. (2015). Implementation of the peer-led team-learning instructional model as a stopgap measure improves student achievement for students opting out of laboratory. CBE—Life Sciences Education, 14(1), ar2.
  • Stanton, J. D., Neider, X. N., Gallegos, I. J., & Clark, N. C. (2015). Differences in metacognitive regulation in introductory biology students: When prompts are not enough. CBE—Life Sciences Education, 14(2), ar15.
  • Susser, J. A., & McCabe, J. (2013). From the lab to the dorm room: Metacognitive awareness and use of spaced study. Instructional Science, 41(2), 345–363.
  • Tomanek, D., & Montplaisir, L. (2004). Students' studying and approaches to learning in introductory biology. Cell Biology Education, 3(4), 253–262.
  • Walck-Shannon, E. M., Cahill, M. J., McDaniel, M. A., & Frey, R. F. (2019). Participation in voluntary re-quizzing is predictive of increased performance on cumulative assessments in introductory biology. CBE—Life Sciences Education, 18(2), ar15.
  • Wandersee, J. H. (1988). The terminology problem in biology education: A reconnaissance. American Biology Teacher, 50(2), 97–100.
  • Westrick, P. A., Le, H., Robbins, S. B., Radunzel, J. M. R., & Schmidt, F. L. (2015). College performance and retention: A meta-analysis of the predictive validities of ACT® scores, high school grades, and SES. Educational Assessment, 20(1), 23–45.
  • Wong, R. M. F., Lawson, M. J., & Keeves, J. (2002). The effects of self-explanation training on students' problem solving in high-school mathematics. Learning and Instruction, 12(2), 233–262.
  • Zimmerman, B. J. (2002). Becoming a self-regulated learner: An overview. Theory into Practice, 41(2), 64–70.
  • Zukswert, J. M., Barker, M. K., & McDonnell, L. (2019). Identifying troublesome jargon in biology: Discrepancies between student performance and perceived understanding. CBE—Life Sciences Education, 18(1), ar6.

Submitted: 19 May 2020 Revised: 19 October 2020 Accepted: 22 October 2020

© 2021 E. M. Walck-Shannon et al. CBE—Life Sciences Education © 2021 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).


The Effect of Sleep Quality on Students’ Academic Achievement

Rostam Jalali

1 Faculty of Nursing and Midwifery, Kermanshah University of Medical Sciences, Kermanshah, Iran

Habibollah Khazaei

2 Sleep Disorders Research Center, Kermanshah University of Medical Sciences, Kermanshah, Iran

Behnam Khaledi Paveh

Zinab Hayrani, Lida Menati

Sleep is an inseparable part of human health and life and is crucial for learning and practice as well as for physical and mental health. It affects individual learning capacity, academic performance, and neurobehavioral functions. This study aimed to determine the relationship between sleep quality and academic achievement among students at Kermanshah University of Medical Sciences.

In this cross-sectional study, 102 medical sciences students from different fields, selected by maximum variation sampling, completed the Pittsburgh Sleep Quality Index (PSQI). Data were analyzed in SPSS 19 using the Pearson correlation test, the Spearman test, and the t-test.

Based on the sleep quality questionnaire scores, the results indicated no significant difference between students with high grades and those with low grades. However, there were moderate, and sometimes severe, sleep disturbances in both groups.

The results showed no significant relationship between sleep quality and academic achievement. Nevertheless, longitudinal studies that control for confounding factors should be performed.

Sleep is an inseparable part of human health and life, and is pivotal to learning and practice as well as physical and mental health. 1 Studies have suggested that insufficient sleep, an increased frequency of short-term sleep, and going to sleep late and getting up early affect learning capacity, academic performance, and neurobehavioral functions. 2 , 3 Previous studies have indicated that self-reported sleep problems, such as delayed or inadequate sleep, waking up too late (especially at weekends), and daytime sleepiness, are associated with compromised academic performance in children and adults. 2 Some studies have emphasized the relationship between delayed class starting times and academic success. 4 Reduced overnight sleep and altered sleep patterns have been associated with severe drowsiness and failure to achieve academic success. 5 In one study, people who had enough sleep used innovative solutions twice as often as sleep-deprived individuals when confronted with complex mathematical problems. 6 Students with inadequate sleep have also been reported to have a greater chance of falling one or more years behind academically than those with proper sleep. 7 People who sleep less and who sleep during the day are more prone to vehicle and work accidents. 8 In some studies, sleep efficiency has been considered essential for recovery, cognitive processing, and memory integration. 9 On the other hand, lack of sleep has been associated with emotional instability and impaired concentration. 10 In this regard, students are particularly at risk of developing sleep disorders, and such disorders have a negative effect on their academic performance across different grades. 11 – 13 However, there is no consensus on this point, and not all studies report that sleep disorders have a negative effect on academic performance. Eliasson (2010) believes that the times of falling asleep and waking up affect academic performance more than the duration of sleep does. 14 Sweileh and colleagues (2011) also believe that there is no relationship between sleep quality and academic success. 15 Similarly, it has been claimed that there is no relationship between sleep on the night before an exam and test scores. 16

In another study, the author attributed poor school performance to stress from lack of sleep. 17 On the other hand, in a systematic review, the authors could not establish a cause-and-effect relationship between sleep quality and academic performance. 2 In their meta-analysis, Dewald and colleagues (2010) emphasized that, because of the methodological diversity of the available studies, it is impossible to definitively establish a relationship between sleep quality and academic performance, and thus more longitudinal intervention studies are warranted. 1 Given these differing conclusions, the researchers decided to determine the relationship between sleep quality and academic performance among students at Kermanshah University of Medical Sciences.

In this cross-sectional study, using maximum variation sampling, the three students with the highest scores and the three students with the lowest scores were selected, and the Pittsburgh Sleep Quality Index (PSQI) was completed for them.

The study population consisted of students of Kermanshah University of Medical Sciences. The sample comprised the students at each school with the highest GPAs (the top three scores) and the lowest GPAs (the bottom three scores). The sampling was purposeful, with maximum variation. The sample covered a number of disciplines in the third semester and above ( Figures 1 and 2 ). After the target students were identified, the questionnaire was given to them and returned to the researcher after completion.

Figure 1. Distribution of students by field of study.

Figure 2. Frequency distribution of students by semester.

The data collection instruments were a demographic form (including age, gender, place of residence, grade, rank in the class, and discipline) and the Pittsburgh Sleep Quality Index (PSQI). The PSQI is a self-report questionnaire that examines the quality of sleep. It has 18 questions, which are classified into seven components. The first component is subjective sleep quality, which is determined by Question 9. The second component relates to delays in falling asleep; its score is calculated from two items, the mean score of Question 2 and part of Question 5. The third component deals with sleep duration and is determined by Question 4. The fourth component relates to sleep efficiency; its score is calculated by dividing the total hours of sleep by the total hours in bed and multiplying by 100. The fifth component deals with sleep disturbances and is obtained by calculating the mean value of Question 5. The sixth component relates to the use of hypnotic drugs and is determined by Question 6. Finally, the seventh component captures inadequate daytime functioning and is determined by two items (the mean scores of Questions 7 and 8). Each component is rated between 0 and 3 points, so the maximum score for each component is 3. The seven component scores are summed to give a total score ranging from 0 to 21. Higher scores represent lower sleep quality, and a score above 6 indicates poor sleep quality. The reliability and validity of this inventory have been confirmed in Iran, where the Cronbach's alpha coefficient of the questionnaire was 0.78 to 0.82. 18 In another study, Cronbach's alpha for the Persian version was 0.77. At a cut-off point of 5, the sensitivity and specificity were 94% and 72%, and at a cut-off point of 6, they were 85% and 84%, respectively. 19
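
To make the scoring procedure concrete, the sketch below computes the sleep-efficiency component and a PSQI-style global score. It is a minimal illustration, not the validated instrument: the 85/75/65% efficiency cut-offs follow the standard PSQI convention and should be checked against the scoring manual, and the example component scores are made up.

```python
# Minimal sketch of PSQI-style scoring (illustrative only, not the validated instrument).

def sleep_efficiency_component(hours_asleep: float, hours_in_bed: float) -> int:
    """Component 4: habitual sleep efficiency, scored 0 (best) to 3 (worst)."""
    efficiency = 100 * hours_asleep / hours_in_bed
    if efficiency >= 85:
        return 0
    if efficiency >= 75:
        return 1
    if efficiency >= 65:
        return 2
    return 3

def psqi_global_score(components: list[int]) -> int:
    """Sum of the seven component scores (each 0-3); total ranges from 0 to 21."""
    assert len(components) == 7 and all(0 <= c <= 3 for c in components)
    return sum(components)

# Example: 6.5 h asleep out of 8 h in bed -> efficiency ~81% -> component score 1.
components = [1, 2, 1, sleep_efficiency_component(6.5, 8.0), 1, 0, 2]
total = psqi_global_score(components)   # 8
poor_sleep = total > 6                  # True, using the cut-off described above
```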

After the questionnaires were collected, students' demographic data were entered into SPSS (version 16), and the relationship between sleep quality scores and grade point average (high vs. low GPA) was calculated.

The results indicated that 34 (33.3%) of the subjects were male. The mean age of the sample was 23.10 ± 3.25 years; the mean age was 22.46 ± 2.44 for females and 24.38 ± 4.19 for males. The participants came from various disciplines, including laboratory science, medicine, pharmacology, emergency medicine, obstetrics, radiology, operating room, health technology, and nursing.

Half of the students (50%) lived in dormitories, 46.1% lived at home, and 3.9% lived in rental housing. The students' educational level ranged from the third to the twelfth semester.

Among the participants, 67 students (65.7%) consumed coffee, 90 (88.2%) drank tea, and 1 (1%) took a drug.

To compare students' mean scores with the sleep components, the Spearman test (for non-normal data) was employed; a significant correlation was observed between GPA and the time taken to fall asleep ( Table 1 ).

Table 1. The Relationship Between Sleep Components and GPA in KUMS Students

| Sleep component | p value | Correlation coefficient |
| Fall asleep | 0.001 | 0.35 |
| Minutes to fall asleep | 0.008 | −0.27 |
| Real sleep | 0.045 | 0.21 |
| Hypnotic | 0.008 | −0.26 |
| Place of life | 0.015 | 0.24 |
| Wake up time | 0.696 | −0.04 |
| Efficient sleep | 0.4 | |
| Sleep disorder | 0.44 | |
| Subjective sleep quality | 0.37 | |
| Inappropriate performance | 0.16 | |
| Coffee | 0.74 | |
| Tea | 0.43 | |
| Drugs | 0.38 | |
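
As a concrete illustration of the analysis reported in Table 1, the sketch below runs a Spearman correlation in Python with made-up values; it is not the authors' SPSS analysis or data.

```python
# Minimal sketch of a Spearman correlation like those in Table 1 (illustrative data only).
import numpy as np
from scipy import stats

gpa = np.array([18.2, 17.5, 16.9, 15.1, 14.8, 13.9, 12.5, 12.0])     # illustrative GPAs
minutes_to_fall_asleep = np.array([10, 15, 20, 25, 35, 30, 45, 60])  # illustrative values

rho, p_value = stats.spearmanr(gpa, minutes_to_fall_asleep)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")  # a negative rho parallels Table 1
```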

Similarly, the relationships between the sleep components and the use of tea, coffee, hypnotic drugs, and other drugs are shown in Table 2.

Table 2. The Relationship Between Sleep Components and Type of Drink or Drug in KUMS Students, Kermanshah

| Sleep component | Type of use | p value | Correlation coefficient |
| Sleep components | Tea | 0.81 | |
| | Coffee | 0.88 | |
| | Drugs | 0.64 | |
| Fall asleep | Tea | 0.14 | |
| | Coffee | 0.99 | |
| | Drugs | 1 | |
| Minutes to fall asleep | Tea | 0.1 | |
| | Coffee | 0.001 | |
| | Drugs | 0.69 | |
| Wake up time | Tea | 0.38 | 0.32 |
| | Coffee | 0.14 | |
| | Drugs | 0.33 | |
| Real sleep | Tea | 0.61 | |
| | Coffee | 0.31 | |
| | Drugs | 0.72 | |
| Subjective sleep | Tea | 0.13 | |
| | Coffee | 0.001 | |
| | Drugs | 0.16 | |
| Hypnotic | Tea | 0.46 | −0.36 |
| | Coffee | 0.9 | |
| | Drugs | 0.66 | |
| Efficient sleep | Tea | 0.47 | |
| | Coffee | 0.96 | |
| | Drugs | 0.71 | |
| Sleep disorder | Tea | 0.27 | |
| | Coffee | 0.75 | |
| | Drugs | 0.14 | |
| Appropriate performance | Tea | 0.81 | |
| | Coffee | 0.88 | |
| | Drugs | 0.64 | |

On the other hand, an independent t-test comparing Pittsburgh scores between the two groups did not show any significant difference. Nevertheless, impaired sleep quality was moderate to severe in both groups ( Table 3 ).

Table 3. The Difference Between the Mean Pittsburgh Scores in the Two Groups (Students with High and Low GPA)

| Group | Normal | Weak | Moderate | Severe | Total | Mean | Standard deviation | p value |
| Low GPA | 4 | 9 | 10 | 21 | 44 | 13.5 | 6.808 | 0.875 |
| High GPA | 0 | 9 | 22 | 27 | 58 | 13.28 | 4.319 | |
| Total | 4 | 18 | 32 | 48 | 102 | | | |

(Normal, Weak, Moderate, and Severe give the number of students in each total Pittsburgh score category.)
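
The group comparison in Table 3 was an independent t-test on total PSQI scores; the sketch below shows the equivalent computation in Python with made-up scores, not the study data.

```python
# Minimal sketch of the independent t-test comparing total PSQI scores between the
# low- and high-GPA groups (illustrative data only, not the study data).
import numpy as np
from scipy import stats

psqi_low_gpa = np.array([14, 9, 17, 12, 20, 8, 15, 11])    # illustrative total scores
psqi_high_gpa = np.array([13, 12, 16, 10, 14, 15, 11, 13])

t_stat, p_value = stats.ttest_ind(psqi_low_gpa, psqi_high_gpa)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # a p value above 0.05 mirrors the reported null result
```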

The results indicated that the difference in impaired sleep quality between the two groups was not statistically significant. Although the relationship between sleep and academic success has long been discussed in the medical literature, there is still no definitive answer. In a meta-analysis examining the impact of sleep quality, sleep duration, and sleepiness on adolescents' academic performance, all three variables were related to academic achievement (positive relationships for sleep quality and sleep duration and a negative association for sleepiness), but these relationships were very small. 1

On the other hand, another systematic review of descriptive studies concluded that sleep disturbance adversely affects different areas such as general health, social status, and academic performance, although longitudinal studies are required for a more accurate examination. 20 , 21 In another systematic review, the authors concluded that insufficient sleep affects the learning of some students and can have a detrimental effect on academic achievement. 22 Further, another review recommended that sleep habits be modified so that sleep can support academic success. 23

The present study was conducted to explore whether sleep disorders can influence academic achievement. Accordingly, a specific sample of high- and low-achieving students was selected to compare the quality and quantity of their sleep. However, no significant difference was found between the two groups. Other studies have reached similar conclusions.

Sweileh and colleagues, in a study of 400 Palestinian students, concluded that academic achievement was not correlated with sleep quality. 15 In another study of 189 medical students in Pakistan, there was no significant association between lack of sleep and test scores. 16 Sleep disorders have been suggested as a possible explanation for poor academic achievement in students, but this has not been clearly demonstrated. 11 In another study, daytime sleepiness (not the quality and quantity of sleep) was identified as an independent predictor of academic success. 5 In a similar study, the time taken to fall asleep and the wake-up time (not the total amount of sleep) were associated with academic success, 14 and the total amount of sleep in adolescents was not related to their academic achievement. 24 In contrast to such studies that emphasize a lack of association or a weak association, other studies have observed an inverse relationship between sleep disturbance and academic achievement. In a study of 491 first-, second-, and third-year medical students, there was a correlation between academic performance and both the amount of nighttime sleep and daytime sleepiness. 25 In a similar study of medical students, lack of sleep at night, going to bed late, and daytime sleepiness had a negative effect on the students' academic performance. 26 Notably, sleep disturbances are likely to have a negative impact on academic performance, thereby creating a vicious cycle. 25 , 27 Taken together, most studies report poor sleep quality among the majority of students. 3 , 26 , 27 Accordingly, conclusions about the relationship between common sleep disturbances and academic performance should be drawn with caution, because academic success can be affected by many factors, including family income, developmental factors, intake of supplements and vitamins, family size, dependency on social media, addiction to social networks, and other social issues. These extraneous factors are rarely controlled in studies, which underscores that the presence or absence of a correlation between sleep quality and academic performance should be judged cautiously and examined in longitudinal studies.

Limitations

The main limitation of this study was the small sample size, although a specific sampling method was chosen to mitigate this shortcoming. Another limitation was the lack of control for confounding factors. Based on the results of this study and similar studies, further research with a stronger design should be conducted.

The results indicated no significant difference in sleep quality between students with high and low academic performance. Nevertheless, to draw firmer conclusions, longitudinal studies that control for confounding factors should be performed.

Acknowledgments

The authors of this article appreciate the collaborations of the Sleep Disorders Research Center.

Funding Statement

Funding for this research was provided by the Kermanshah University of Medical Sciences, Sleep Disorders Research Center (93026).

Data Sharing Statement

The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.

Ethics Approval and Consent to Participate

Informed consent was obtained from all participants, and the study was conducted by the Sleep Disorders Research Center. A letter of introduction for data collection was obtained from the Deputy of Research and Technology. Ethics approval was received from the ethics committee of the Deputy of Research and Technology, Kermanshah University of Medical Sciences (number 93026, 6 April 2013).

The authors declare that they have no conflict of interest.

  • Open access
  • Published: 10 December 2020

Effect of internet use and electronic game-play on academic performance of Australian children

  • Md Irteja Islam 1 , 2 ,
  • Raaj Kishore Biswas 3 &
  • Rasheda Khanam 1  

Scientific Reports, volume 10, Article number: 21727 (2020)


  • Human behaviour
  • Risk factors

This study examined the associations of internet use and electronic game-play with academic performance, separately for weekdays and weekends, in Australian children. It also assessed whether an addiction tendency to the internet and game-play is associated with academic performance. Overall, 1704 children aged 11–17 years from Young Minds Matter (YMM), a cross-sectional nationwide survey, were analysed. Generalized linear regression models adjusted for survey weights were applied to investigate the associations of internet use and electronic gaming with academic performance (measured by NAPLAN national standard scores). About 70% of the sample spent more than 2 h/day using the internet, and nearly 30% played electronic games for more than 2 h/day. Internet users during weekdays (> 4 h/day) were less likely to get higher scores in reading and numeracy, whereas internet use on weekends (2–4 h/day) was positively associated with academic performance. In contrast, adolescents who played electronic games on weekdays were 16% more likely to get better reading scores than those who did not play. An addiction tendency to the internet and electronic gaming was found to be adversely associated with academic achievement. Further, the results indicated the need for parental monitoring and/or self-regulation to limit the timing and duration of internet use and electronic game-play in order to overcome their detrimental effects on academic achievement.


Introduction

Over the past two decades, with the proliferation of high-tech devices (e.g. smartphones, tablets and computers), both the internet and electronic games have become increasingly popular with people of all ages, but particularly with children and adolescents 1 , 2 , 3 . Recent estimates have shown that one in three under-18-year-olds across the world uses the Internet, and 75% of adolescents in developed countries play electronic games daily 4 , 5 , 6 . Studies in the United States reported that adolescents spend over 11 h a day with modern electronic media such as computers/the Internet and electronic games, which is more than they spend in school or with friends 7 , 8 . In Australia, it is reported that about 98% of children aged 15–17 years are Internet users and 98% of adolescents play electronic games, which is significantly higher than in the USA and Europe 9 , 10 , 11 , 12 .

In recent times, the Internet and electronic games have been regarded as important, not just for better results at school, but also for self-expression, sociability, creativity and entertainment for children and adolescents 13 , 14 . For instance, 88% of 12–17 year-olds in the USA considered the Internet as a useful mechanism for making progress in school 15 , and similarly, electronic gaming in children and adolescents may assist in developing skills such as decision-making, smart-thinking and coordination 3 , 15 .

On the other hand, evidence indicates that the use of the Internet and electronic games can have detrimental effects such as reduced sleeping time, behavioural problems (e.g. low self-esteem, anxiety, depression), attention problems and poor academic performance in adolescents 1 , 5 , 12 , 16 . In addition, excessive Internet usage and increased electronic gaming can be addictive and may cause serious functional impairment in the daily life of children and adolescents 1 , 12 , 13 , 16 . For example, the AU Kids Online survey 17 reported that Australian children were more likely to experience behavioural problems associated with Internet use (50%) than children from the 25 European countries (29%) surveyed in the EU Kids Online study 18 , which is alarming 12 . These mixed results point to an urgent need to understand the effects of Internet use and electronic gaming on the development of children and adolescents, particularly on their academic performance.

Despite many international studies and a smaller number in Australia 12 , several systematic limitations remain in the existing literature, particularly regarding the association of academic performance with the use of the Internet and electronic games in children and adolescents 13 , 16 , 19 . First, the majority of earlier studies have relied either on school grades or on children's self-assessments, which carry the assessor's innate subjectivity, rather than on standardized tests of academic performance 16 , 20 , 21 , 22 . Second, most previous studies have tested the hypothesis in school-based settings instead of canvassing the whole community, and therefore cannot adjust for sociodemographic confounders 9 , 16 . Third, most studies have typically been limited to small sample sizes, which might have reduced the reliability of the results 9 , 16 , 23 .

Considering these issues, this study aimed to investigate the association of internet usage and electronic gaming with a standardized test of academic performance—NAPLAN (The National Assessment Program—Literacy and Numeracy)—among Australian adolescents aged 11–17 years, using nationally representative data from the Second Australian Child and Adolescent Survey of Mental Health and Wellbeing—Young Minds Matter (YMM). The findings of this study are expected to provide a population-wide, contextual view of excessive Internet use and electronic games played separately on weekdays and weekends by Australian adolescents, which may be beneficial for evidence-based policies.

Subject demographics

Respondents who took NAPLAN in 2008 (N = 4) and 2009 (N = 29) were removed from the sample because of their small numbers; later years (2010–2015) each had over 100 respondents. In addition, NAPLAN scores from 2008 might not align with a survey conducted in 2013. Cases with missing data were then deleted under the assumption that data were missing at random, which gives unbiased estimates and is common practice for large-scale surveys 24 . From the initial survey of 2967 respondents, 1704 adolescents were included in this study.
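
The sketch below illustrates these filtering steps with pandas. Variable and file names (naplan_year, reading_score, and so on) are hypothetical, not the actual YMM field names.

```python
# Minimal sketch (hypothetical names) of the sample-filtering steps described above.
import pandas as pd

ymm = pd.read_csv("ymm_adolescents.csv")  # hypothetical extract of the YMM survey

# Keep respondents whose NAPLAN tests were sat between 2010 and 2015.
ymm = ymm[ymm["naplan_year"].between(2010, 2015)]

# Complete-case analysis: drop rows with missing values in the analysis variables,
# under the assumption that data are missing at random.
analysis_vars = ["reading_score", "writing_score", "numeracy_score",
                 "internet_hours_weekday", "gaming_hours_weekday", "survey_weight"]
sample = ymm.dropna(subset=analysis_vars)

print(len(sample))  # the paper's analogous sample comprised 1704 adolescents
```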

The sample characteristics are displayed in Table 1. For example, the distribution of daily average internet use shows that over 50% of the sampled adolescents spent 2–4 h per day on the internet (Table 1). Although all respondents in the survey used the internet, nearly 21% of them did not play any electronic games in a day, and almost one in three (33%) adolescents played electronic games beyond the recommended time of 2 h per day. Girls showed a stronger addiction tendency to internet/game-play than boys.

The mean scores for the three NAPLAN tests (reading, writing and numeracy) ranged from 520 to 600. A gradual decline in average NAPLAN test scores was observed for internet use over 4 h during weekdays and over 3 h during weekends (Table 2). Table 2 also shows that adolescents who played no electronic games at all had better writing scores than those who played electronic games. Moreover, Table 2 shows no particular pattern between time spent on gaming and NAPLAN reading and numeracy scores. Among the survey sample, 308 adolescents were below the national standard.
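
A descriptive comparison like Table 2 can be produced with a simple group-by, as in the sketch below; the column names and input file are hypothetical.

```python
# Minimal sketch (hypothetical names) of the descriptive comparison behind Table 2:
# mean NAPLAN scores by category of daily internet use.
import pandas as pd

sample = pd.read_csv("ymm_analysis_sample.csv")  # hypothetical analysis file

table2 = (
    sample
    .groupby("internet_hours_weekday")  # e.g. "<2 h", "2-4 h", ">4 h"
    [["reading_score", "writing_score", "numeracy_score"]]
    .mean()
    .round(1)
)
print(table2)
```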

Internet use and academic performance

Our results show that non-academic internet use during weekdays, especially for more than 4 h per day, is negatively associated with academic performance (Table 3). For internet use during weekdays, all three models showed a significant negative association between time spent on the internet and NAPLAN reading and numeracy scores. For example, in Model 1, adolescents who spent over 4 h on the internet during weekdays were 15% and 17% less likely to get higher reading and numeracy scores, respectively, compared to those who spent less than 2 h. Similar results were found in Models 2 and 3 (Table 3), where we adjusted for other confounders. The addiction tendency to internet variable was also negatively associated with NAPLAN results: adolescents with an internet addiction tendency were 17% and 14% less likely to score higher in reading and numeracy, respectively, than those without such problematic behaviour.

Internet use during weekends showed a positive association with academic performance (Table 4). For example, Model 1 in Table 4 shows that internet use during weekends was significant for reading, writing and national standard scores. Youths who spent around 2–4 h and over 4 h on the internet during weekends were 21% and 15% more likely, respectively, to get higher reading scores compared to those who spent less than 2 h (Model 1, Table 4). Similarly, in Model 3, which adjusted for adolescents' internet addiction, adolescents who spent 2–4 h on the internet were 1.59 times more likely to score above the national standard. All three models of Table 4 confirmed that adolescents who spent 2–4 h on the internet during weekends were more likely to achieve better reading and writing scores and to be at or above the national standard compared to those who used the internet for less than 2 h. Numeracy scores were unlikely to be affected by internet use. The results obtained from Model 3 should be treated as the most robust, as this is the most comprehensive model and accounts for unobserved characteristics. The addiction tendency to internet/game-play variable showed a negative association with academic performance, but this was only significant for numeracy scores.
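
For readers who want to see how such estimates are produced, the sketch below fits a survey-weighted logistic GLM for the binary outcome of scoring at or above the national standard and exponentiates the coefficients to obtain odds ratios. It is a minimal sketch with hypothetical variable names, not the authors' specification or code, and it assumes a complete-case dataset so the weights align with the model rows.

```python
# Minimal sketch (hypothetical names, not the authors' models): a survey-weighted
# logistic GLM for "at or above national standard", reported as odds ratios.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

sample = pd.read_csv("ymm_analysis_sample.csv")  # hypothetical complete-case file

model = smf.glm(
    "above_standard ~ C(internet_hours_weekend) + C(gaming_hours_weekend)"
    " + addiction_tendency + sex + age + parent_education",
    data=sample,
    family=sm.families.Binomial(),
    freq_weights=np.asarray(sample["survey_weight"]),
).fit()

# Exponentiated coefficients are odds ratios; an OR of about 1.59 for the 2-4 h
# weekend-internet category would correspond to the estimate quoted in the text.
print(np.exp(model.params))
print(np.exp(model.conf_int()))
```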

Electronic gaming and academic performance

Time spent on electronic gaming during weekdays had no effect on writing and numeracy performance but had a significant association with reading scores (Model 2, Table 5). Model 2 of Table 5 shows that adolescents who spent 1–2 h on gaming during weekdays were 13% more likely to get higher reading scores compared to those who did not play at all. It is an interesting result that, while electronic gaming during weekdays tended to show a positive effect on reading scores, internet use during weekdays showed a negative effect. Addiction tendency to internet/game-play had a negative effect: adolescents who were addicted to the internet were 14% less likely to score highly in reading than those without any such behaviour.

All three models from Table 6 confirm that spending more than 2 h on electronic gaming during weekends had a positive effect on reading scores. For example, the results of Model 3 (Table 6) showed that adolescents who spent more than 2 h on electronic gaming during weekends were 16% more likely to have better reading scores compared to adolescents who did not play games at all. Playing electronic games during weekends was not found to be statistically significant for writing, numeracy, or national standard scores, although the odds ratios were positive. The results from all tables confirm that addiction tendency to internet/gaming was negatively associated with academic performance, although the variable was not always statistically significant.

Building on past research on the effects of internet use and electronic gaming in adolescents, this study examined whether internet use and playing electronic games were associated with academic performance (i.e. reading, writing and numeracy) using a standardized test of academic performance (i.e. NAPLAN) in a nationally representative Australian dataset. The findings of this study question the conventional belief 9 , 25 that academic performance is negatively associated with internet use and electronic games, particularly when the internet is used for non-academic purposes.

In the current high-tech world, many developed countries (e.g. the USA, Canada and Australia) have recommended that 5–17 year-olds limit electronic media (e.g. internet, electronic games) to 2 h per day for entertainment purposes, owing to concerns about the possible negative consequences of excessive use of electronic media 14 , 26 . However, previous research has often reported that children and adolescents spend more than the recommended time 26 . The present study found similar results: about 70% of the sampled adolescents aged 11–17 spent more than 2 h per day on the Internet, and nearly 30% spent more than 2 h per day on electronic gaming. This could be attributed to the increased availability of computers, smartphones and the internet among under-18s 12 . For instance, 97% of Australian households with children aged less than 15 years accessed the internet at home in 2016–2017 10 ; as a result, policymakers recommended that parents restrict access to screens (e.g. Internet and electronic games) in children's bedrooms, monitor children's screen use, share screen hours with their children, and act as role models by reducing their own screen time 14 .

This research has drawn attention to the fact that average internet use of more than 4 h per day during weekdays tends to be negatively associated with academic performance, especially lower reading and numeracy scores, while internet use of more than 2 h during weekends is positively associated with academic performance, particularly better reading and writing scores and scores above the national standard. By dividing internet use and gaming into weekdays and weekends, this study helps explain the mixed evidence found in the previous literature 9 . The results clearly show that non-academic use of the internet during weekdays, particularly spending more than 4 h per day online, is harmful to academic performance, whereas internet use on weekends is likely to have a positive effect. This result is consistent with a USA study that reported that internet use is positively associated with improved reading skills and higher scores on standardized tests 13 , 27 . It has also been reported that academic performance is better among moderate users of the internet than among non-users or high-level users 13 , 27 , which is in line with the findings of this study. This may be because the internet is predominantly a text-based medium in which users need to type and read to access most websites effectively 13 . The results of this study indicate that internet use is not harmful to academic performance if it is used moderately, especially if use on weekdays is kept very limited. The results further confirm that the timing (weekdays or weekends) of internet use is a factor that needs to be considered.

Regarding electronic gaming, interestingly, the study found that the average time spent gaming, either on weekdays or weekends, is positively associated with academic performance, especially reading scores. These results contradict previous literature 1 , 13 , 19 , 27 that reported a negative correlation between electronic games and educational performance in high-school children. The results of this study were consistent with studies conducted in the USA, Europe and other countries that claimed a positive correlation between gaming and academic performance, especially in numeracy and reading skills 28 , 29 . This may be because the instructions for most electronic games are text-heavy and many electronic games require gamers to solve puzzles 9 , 30 . The literature has also found that playing electronic games develops cognitive skills (e.g. mental rotation abilities, dexterity), which may contribute to better academic achievement 31 , 32 .

Consistent with previous research findings 33 , 34 , 35 , 36 , the study also found that adolescents who had an addiction tendency to internet usage and/or electronic gaming were less likely to achieve higher scores in reading and numeracy than those without such problematic behaviour. Addiction tendency to Internet/gaming among adolescents was negatively associated with overall academic performance compared with adolescents without an addiction tendency, although the variables were not always statistically significant. This is mainly because adolescents skip school, miss classes and tuition, and put less effort into homework because of addictive internet usage and electronic gaming 19 , 35 . The results of this study indicate that parental monitoring and/or self-regulation (by the users) regarding the timing and intensity of internet use and gaming are essential to outweigh any negative effects of internet use and gaming on academic performance.

Although the present study uses a large nationally representative sample and advances prior research on academic performance among adolescents who use the internet and play electronic games, its findings have some limitations. First, the measures of internet use and electronic gaming relied on self-reported child data without screening tests or external validation, so the results may be over- or underestimated. Second, the study primarily treats internet use and electronic gaming as distinct behaviours: owing to resource and time constraints, the YMM survey gathered information only on the amount of time spent on internet use and electronic gaming and included only a few questions related to addiction, which is not sufficient to medically diagnose internet/gaming addiction. Finally, the cross-sectional design precluded evaluation of the causality and temporality of the observed associations of internet use and electronic gaming with academic performance in adolescents.

This study found that average time spent on the internet on weekends, and on electronic gaming on both weekdays and weekends, is positively associated with the academic performance (measured by NAPLAN) of Australian adolescents. However, it confirmed a negative association between addiction tendency (to internet use or electronic gaming) and academic performance; moreover, most adolescents used the internet and played electronic games for more than the recommended 2-h daily limit. The study also indicates that further research is required on the development and implementation of interventions aimed at improving parental monitoring and fostering users' self-regulation to limit daily use of the internet and/or electronic games.

Data description

Young Minds Matter (YMM) was an Australian nationwide cross-sectional survey of children aged 4–17 years conducted in 2013–2014 37 . Of the 76,606 households initially approached, 6,310 parents/caregivers of 4–17-year-old children (eligible household response rate 55%) completed a structured questionnaire via face-to-face interview, and 2,967 children aged 11–17 years (eligible child response rate 89%) completed a computer-based self-report questionnaire privately at home 37 .

Area-based sampling was used for the survey. A total of 225 Statistical Area 1 regions (defined by the Australian Bureau of Statistics) were selected based on the 2011 Census of Population and Housing and stratified by state/territory and by metropolitan versus non-metropolitan (rural/regional) location to ensure proportional representation of geographic areas across Australia 38 . A small number of cases were excluded: those in the most remote areas, homeless children, children in institutional care, and children living in households where interviews could not be conducted in English. Details of the survey and its methodology can be found in Lawrence et al. 37 .

Following informed consent (both written and verbal) from the primary carers (parents/caregivers), information on the children's and adolescents' National Assessment Program—Literacy and Numeracy (NAPLAN) results was added to the YMM dataset. The YMM survey was approved by the Human Research Ethics Committee of the University of Western Australia and by the Australian Government Department of Health. In addition, the authors of this study obtained written approval from the Australian Data Archive (ADA) Dataverse to access the YMM dataset. All research was conducted in accordance with the relevant ADA Dataverse guidelines, policies and regulations for using YMM data.

Outcome variables

The NAPLAN, conducted annually since 2008, is a nationwide standardized test of academic performance for all Australian students in Years 3, 5, 7 and 9, assessing their skills in reading, writing, numeracy, grammar and spelling 39 , 40 . NAPLAN scores from 2010 to 2015, as reported in YMM, were used as outcome variables in the models; NAPLAN data from 2008 (N = 4) and 2009 (N = 29) were excluded to reduce the time lag between the YMM survey and the NAPLAN test. NAPLAN provides point-in-time standardized scores, which allow children's academic performance to be compared over time 40 , 41 . The NAPLAN tests are one component of each school's evaluation and grading process and do not substitute for the comprehensive, ongoing evaluations that teachers provide of each student's performance 39 , 41 . All four domains, namely reading, writing, numeracy and language conventions (grammar and spelling), are recorded on continuous scales in the dataset. The scores are derived from a series of tests; details can be found in 42 . The current study uses only the reading, writing and numeracy scores to measure academic performance.

In this study, the national standard score combines three variables: whether the student meets the national standard in reading, writing and numeracy. From these, a binary outcome variable was generated: a child is classified as 'below standard' if they score at least one standard deviation below the national standard in reading, writing and numeracy, and as 'at/above standard' otherwise.
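As a rough illustration of how this binary outcome could be derived in R (the language used for the analysis), a minimal sketch follows. This is not the authors' code: the NAPLAN column names and the national means and standard deviations are placeholders, and the comment flags the assumed decision rule.

```r
# Illustrative sketch only (not the authors' code). Column names and the
# national means/SDs are placeholders; the real benchmarks come from the
# NAPLAN national reports.
national_mean <- c(reading = 500, writing = 500, numeracy = 500)  # assumed values
national_sd   <- c(reading = 70,  writing = 70,  numeracy = 70)   # assumed values

below <- function(score, domain) {
  # at least one SD below the national standard for that domain
  score < national_mean[domain] - national_sd[domain]
}

# assumed rule: flagged 'below standard' if the child falls short in any domain
ymm$national_standard <- ifelse(
  below(ymm$naplan_reading,  "reading") |
  below(ymm$naplan_writing,  "writing") |
  below(ymm$naplan_numeracy, "numeracy"),
  "below standard", "at/above standard"
)
```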

Independent variables

Internet use and electronic gaming.

Owing to the scope of the YMM survey itself, an extensive set of questions about internet use and electronic gaming could not be included. Internet use excluded time spent on academic purposes and related activities. Electronic gaming included playing games on a gaming console (e.g. PlayStation, Xbox or a similar console), online, or on a computer, mobile phone or handheld device 12 . The primary independent covariates were average internet use and average electronic game-play in hours per day, with weekday and weekend hours entered separately in the models. These variables were based on a self-assessed questionnaire in which the youths were asked about daily time spent on the internet and on electronic game-play, separately for weekdays and weekends. Because a maximum of 2 h/day of internet use/game-play is recommended for children and adolescents aged 5–17 years in many developed countries, including Australia 14 , 26 , both time variables were categorized in 2-h intervals to be consistent with this recommendation. Internet use was categorized into three groups: (a) ≤ 2 h, (b) 2–4 h, and (c) > 4 h. Similar questions were asked about hours of game-play; because the sample distribution for electronic game-play was skewed, this variable was categorized into three groups: (a) no game-play (0 h), (b) 1–2 h, and (c) > 2 h.
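A short sketch of this recoding (again not the authors' code; the hour variables internet_hours_wd and game_hours_wd are assumed names) using base R's cut():

```r
# Assumed variable names; the weekend variables would be recoded the same way.
ymm$internet_wd_cat <- cut(ymm$internet_hours_wd,
                           breaks = c(-Inf, 2, 4, Inf),
                           labels = c("<=2 h", "2-4 h", ">4 h"))

ymm$game_wd_cat <- cut(ymm$game_hours_wd,
                       breaks = c(-0.5, 0, 2, Inf),  # keep 0 h as its own group
                       labels = c("no game-play", "1-2 h", ">2 h"))
```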

Other covariates

Family structure and several sociodemographic variables were included in the models to adjust for differences in individual characteristics, parental inputs and tastes, household characteristics and place of residence. Individual characteristics included the child's age (continuous) and sex (boys, girls) and the adolescent's addiction tendency to internet use and/or game-play. Addiction tendency to internet/game-play was a binary independent variable constructed from five behavioural questions: whether the respondent avoided eating/sleeping because of internet use or game-play; felt bothered when unable to access the internet or play electronic games; kept using the internet or playing electronic games even when not really interested; spent less time with family/friends or on school work because of internet use or game-play; and unsuccessfully tried to spend less time on the internet or playing electronic games. Each question had four response options: never/almost never; not very often; fairly often; and very often. A binary covariate was constructed such that a respondent reporting any four of the five behaviours as fairly often or very often was considered to have an addiction tendency.
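A minimal sketch of this derivation, assuming hypothetical item names and the response wording quoted above (not the authors' code):

```r
# Hypothetical item names for the five behavioural questions
addiction_items <- c("skip_eat_sleep", "bothered_no_access",
                     "use_without_interest", "less_family_school_time",
                     "failed_cut_down")

often <- c("fairly often", "very often")

# count, for each respondent, how many of the five items were endorsed often
n_often <- rowSums(sapply(ymm[addiction_items], function(x) x %in% often))

# addiction tendency if at least four of the five behaviours were frequent
ymm$addiction_tendency <- ifelse(n_often >= 4, "yes", "no")
```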

Household characteristics included household income (low, medium, high), family type (original, step, blended, sole parent/primary carer, other) 43 and remoteness (major cities, inner regional, outer regional, remote/very remote). Parental inputs and tastes included the primary carer's education (bachelor, diploma, year 10/11), likelihood of serious mental illness (K6 score: likely, not likely), smoking status (no, yes) and risk of alcohol-related harm (risky, none).

Statistical analysis

Descriptive statistics of the sample and the distributions of the outcome variables were assessed first, and the outcome variables were categorized on the basis of these distributions, as described above. For the formal analysis, generalized linear models (GLMs) 44 were used, adjusted for the survey weights to allow generalization of the findings. Because the NAPLAN scores in the three domains of reading, writing and numeracy are continuous variables, linear models were fitted to daily average internet time and electronic game-play time. The scores were standardized (mean = 0, SD = 1) for model fitting. A binary logistic model was fitted for the dichotomized national-standard outcome. Separate models were estimated for internet use and electronic gaming on weekends and on weekdays.
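The survey-weighted fitting described here might look roughly as follows with the R survey package the authors cite; this is a sketch under assumptions, with a placeholder weight variable and covariates, and any stratification or clustering in the YMM design omitted.

```r
library(survey)

# standardize a NAPLAN score (mean = 0, SD = 1) and code the binary outcome
ymm$reading_z <- as.numeric(scale(ymm$naplan_reading))
ymm$below_std <- as.integer(ymm$national_standard == "below standard")

# survey design with the YMM person weights (placeholder weight variable)
des <- svydesign(ids = ~1, weights = ~survey_weight, data = ymm)

# linear GLM: standardized reading score on weekday internet-use category
fit_reading <- svyglm(reading_z ~ internet_wd_cat + age + sex, design = des)

# logistic GLM for the dichotomized national-standard outcome
# (quasibinomial is the usual family choice with survey weights)
fit_standard <- svyglm(below_std ~ internet_wd_cat + age + sex,
                       design = des, family = quasibinomial())

summary(fit_reading)
summary(fit_standard)
```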

We estimated three models that differed in the covariates used to adjust the GLMs. Model 1 was adjusted for common sociodemographic factors, including the child's age and sex, household income, the primary carer's education and family type 43 . However, this model did not account for some household characteristics (e.g. tastes and preferences) that are unobserved to the researcher and are arguably correlated with the potential outcomes. To reduce the influence of such unobserved characteristics, we used a comprehensive set of observable characteristics 45 , 46 available in the YMM data and estimated two additional models that add household characteristics (parental tastes, preferences and inputs) and child characteristics. In addition to the variables in Model 1, Model 2 included remoteness, the primary carer's mental health status, smoking status and risk of alcohol-related harm. Model 3 further added the adolescent's internet/game addiction tendency to all the covariates in Model 2; it was expected to account for child-level unobserved characteristics, because children with an addiction tendency differ from others, and it also shows how academic performance relates to internet/game addiction. The correlations between 'internet/game addiction' and 'internet use' and 'gaming' (during weekdays and weekends) were also assessed and were all below 0.5. Multicollinearity was assessed using the variance inflation factor (VIF), which was under 5 for all models, suggesting no multicollinearity 47 .
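The nested specifications could be written out as below; the covariate names are illustrative stand-ins for the YMM variables listed above, and the VIF check is shown on an ordinary unweighted fit of the full formula rather than on the weighted models themselves.

```r
# Model 1: common sociodemographic factors
m1 <- svyglm(reading_z ~ internet_wd_cat + age + sex + hh_income +
               carer_education + family_type, design = des)

# Model 2: adds remoteness and primary-carer health/behaviour covariates
m2 <- svyglm(reading_z ~ internet_wd_cat + age + sex + hh_income +
               carer_education + family_type + remoteness + carer_k6 +
               carer_smoking + carer_alcohol_risk, design = des)

# Model 3: further adds the adolescent's internet/game addiction tendency
m3 <- svyglm(reading_z ~ internet_wd_cat + age + sex + hh_income +
               carer_education + family_type + remoteness + carer_k6 +
               carer_smoking + carer_alcohol_risk + addiction_tendency,
             design = des)

# rough multicollinearity check (generalized VIFs for factor covariates)
library(car)
vif(lm(reading_z ~ internet_wd_cat + age + sex + hh_income + carer_education +
         family_type + remoteness + carer_k6 + carer_smoking +
         carer_alcohol_risk + addiction_tendency, data = ymm))
```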

A p value below 0.05 was considered statistically significant. All analyses were conducted in R (version 3.6.1), using the survey package (version 3.37), which is suited to complex survey samples 48 .

Data availability

The authors do not have permission to share the dataset. However, the Young Minds Matter (YMM) survey dataset is available on request from the Australian Data Archive (ADA) Dataverse ( https://doi.org/10.4225/87/LCVEU3 ).

Wang, C. -W., Chan, C. L., Mak, K. -K., Ho, S. -Y., Wong, P. W. & Ho, R. T. Prevalence and correlates of video and Internet gaming addiction among Hong Kong adolescents: a pilot study. Sci. World J . 2014 (2014).

Anderson, E. L., Steen, E. & Stavropoulos, V. Internet use and problematic internet use: a systematic review of longitudinal research trends in adolescence and emergent adulthood. Int. J. Adolesc. Youth 22 , 430–454 (2017).


Oliveira, M. P. MTd. et al. Use of internet and electronic games by adolescents at high social risk. Trends Psychol. 25 , 1167–1183 (2017).


UNICEF. Children in a digital world. United Nations Children's Fund (UNICEF) (2017)

King, D. L. et al. The impact of prolonged violent video-gaming on adolescent sleep: an experimental study. J. Sleep Res. 22 , 137–143 (2013).

Byrne, J. & Burton, P. Children as Internet users: how can evidence better inform policy debate?. J. Cyber Policy. 2 , 39–52 (2017).

Council on Communications and Media. Children, adolescents, and the media. Pediatrics 132 , 958 (2013).

Paulus, F. W., Ohmann, S., Von Gontard, A. & Popow, C. Internet gaming disorder in children and adolescents: a systematic review. Dev. Med. Child Neurol. 60 , 645–659 (2018).

Posso, A. Internet usage and educational outcomes among 15-year old Australian students. Int J Commun 10 , 26 (2016).

ABS. 8146.0—Household Use of Information Technology, Australia, 2016–2017 (2018).

Brand, J. E. Digital Australia 2018 (Interactive Games & Entertainment Association (IGEA), Eveleigh, 2017).

Rikkers, W., Lawrence, D., Hafekost, J. & Zubrick, S. R. Internet use and electronic gaming by children and adolescents with emotional and behavioural problems in Australia–results from the second Child and Adolescent Survey of Mental Health and Wellbeing. BMC Public Health 16 , 399 (2016).

Jackson, L. A., Von Eye, A., Witt, E. A., Zhao, Y. & Fitzgerald, H. E. A longitudinal study of the effects of Internet use and videogame playing on academic performance and the roles of gender, race and income in these relationships. Comput. Hum. Behav. 27 , 228–239 (2011).

Yu, M. & Baxter, J. Australian children’s screen time and participation in extracurricular activities. Ann. Stat. Rep. 2016 , 99 (2015).

Rainie, L. & Horrigan, J. A decade of adoption: How the Internet has woven itself into American life. Pew Internet and American Life Project . 25 (2005).

Drummond, A. & Sauer, J. D. Video-games do not negatively impact adolescent academic performance in science, mathematics or reading. PLoS ONE 9 , e87943 (2014).


Green, L., Olafsson, K., Brady, D. & Smahel, D. Excessive Internet use among Australian children (2012).

Livingstone, S. EU kids online. The international encyclopedia of media literacy . 1–17 (2019).

Wright, J. The effects of video game play on academic performance. Mod. Psychol. Stud. 17 , 6 (2011).

Gentile, D. A., Lynch, P. J., Linder, J. R. & Walsh, D. A. The effects of violent video game habits on adolescent hostility, aggressive behaviors, and school performance. J. Adolesc. 27 , 5–22 (2004).

Rosenthal, R. & Jacobson, L. Pygmalion in the classroom. Urban Rev. 3 , 16–20 (1968).

Willoughby, T. A short-term longitudinal study of Internet and computer game use by adolescent boys and girls: prevalence, frequency of use, and psychosocial predictors. Dev. Psychol. 44 , 195 (2008).

Weis, R. & Cerankosky, B. C. Effects of video-game ownership on young boys’ academic and behavioral functioning: a randomized, controlled study. Psychol. Sci. 21 , 463–470 (2010).

Howell, D. C. The treatment of missing data. The Sage handbook of social science methodology . 208–224 (2007).

Terry, M. & Malik, A. Video gaming as a factor that affects academic performance in grade nine. Online Submission (2018).

Houghton, S. et al. Virtually impossible: limiting Australian children and adolescents daily screen based media use. BMC Public Health. 15 , 5 (2015).

Jackson, L. A., Von Eye, A., Fitzgerald, H. E., Witt, E. A. & Zhao, Y. Internet use, videogame playing and cell phone use as predictors of children’s body mass index (BMI), body weight, academic performance, and social and overall self-esteem. Comput. Hum. Behav. 27 , 599–604 (2011).

Bowers, A. J. & Berland, M. Does recreational computer use affect high school achievement?. Educ. Technol. Res. Dev. 61 , 51–69 (2013).

Wittwer, J. & Senkbeil, M. Is students’ computer use at home related to their mathematical performance at school?. Comput. Educ. 50 , 1558–1571 (2008).

Jackson, L. A. et al. Does home internet use influence the academic performance of low-income children?. Dev. Psychol. 42 , 429 (2006).

Barlett, C. P., Anderson, C. A. & Swing, E. L. Video game effects—confirmed, suspected, and speculative: a review of the evidence. Simul. Gaming 40 , 377–403 (2009).

Suziedelyte, A. Can video games affect children's cognitive and non-cognitive skills? UNSW Australian School of Business Research Paper (2012).

Chiu, S.-I., Lee, J.-Z. & Huang, D.-H. Video game addiction in children and teenagers in Taiwan. CyberPsychol. Behav. 7 , 571–581 (2004).

Skoric, M. M., Teo, L. L. C. & Neo, R. L. Children and video games: addiction, engagement, and scholastic achievement. Cyberpsychol. Behav. 12 , 567–572 (2009).

Leung, L. & Lee, P. S. Impact of internet literacy, internet addiction symptoms, and internet activities on academic performance. Soc. Sci. Comput. Rev. 30 , 403–418 (2012).

Xin, M. et al. Online activities, prevalence of Internet addiction and risk factors related to family and school among adolescents in China. Addict. Behav. Rep. 7 , 14–18 (2018).


Lawrence, D., Johnson, S., Hafekost, J., et al. The mental health of children and adolescents: report on the second Australian child and adolescent survey of mental health and wellbeing (2015).

Hafekost, J. et al. Methodology of young minds matter: the second Australian child and adolescent survey of mental health and wellbeing. Aust. N. Z. J. Psychiatry 50 , 866–875 (2016).

Australian Curriculum, Assessment and Reporting Authority (ACARA). National Assessment Program Literacy and Numeracy: Achievement in Reading, Persuasive Writing, Language Conventions and Numeracy: National Report for 2011 (2011).

Daraganova, G., Edwards, B. & Sipthorp, M. Using National Assessment Program Literacy and Numeracy (NAPLAN) Data in the Longitudinal Study of Australian Children (LSAC) . Department of Families, Housing, Community Services and Indigenous Affairs (2013).

NAP. NAPLAN (2016).

Australian Curriculum, Assessment and Reporting Authority (ACARA). National report on schooling in Australia 2009. Ministerial Council for Education, Early Childhood Development and Youth… (2009).

Vu, X.-B.B., Biswas, R. K., Khanam, R. & Rahman, M. Mental health service use in Australia: the role of family structure and socio-economic status. Children Youth Serv. Rev. 93 , 378–389 (2018).

McCullagh, P. Generalized Linear Models (Routledge, Abingdon, 2019).


Gregg, P., Washbrook, E., Propper, C. & Burgess, S. The effects of a mother’s return to work decision on child development in the UK. Econ. J. 115 , F48–F80 (2005).

Khanam, R. & Nghiem, S. Family income and child cognitive and noncognitive development in Australia: does money matter?. Demography 53 , 597–621 (2016).

Kutner, M. H., Nachtsheim, C. J., Neter, J. & Li, W. Applied Linear Statistical Models (McGraw-Hill Irwin, New York, 2005).

Lumley T. Package ‘survey’. 3 , 30–33 (2015).


Acknowledgements

The authors would like to thank the University of Western Australia, Roy Morgan Research, the Australian Government Department of Health for conducting the survey, and the Australian Data Archive for giving access to the YMM survey dataset. The authors also would like to thank Dr Barbara Harmes for proofreading the manuscript.

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Author information

Authors and affiliations

Centre for Health Research and School of Commerce, University of Southern Queensland, Workstation 15, Room T450, Block T, Toowoomba, QLD, 4350, Australia

Md Irteja Islam & Rasheda Khanam

Maternal and Child Health Division, International Centre for Diarrhoeal Disease Research, Bangladesh (icddr,b), Mohakhali, Dhaka, 1212, Bangladesh

Md Irteja Islam

Transport and Road Safety (TARS) Research Centre, School of Aviation, University of New South Wales, Sydney, NSW, 2052, Australia

Raaj Kishore Biswas


Contributions

M.I.I.: Methodology, Validation, Visualization, Investigation, Writing—Original draft preparation, Writing—Reviewing and Editing. R.K.B.: Methodology, Software, Data curation, Formal Analysis, Writing—Original draft preparation. R.K.: Conceptualization, Methodology, Supervision, Writing- Reviewing and Editing.

Corresponding author

Correspondence to Md Irteja Islam .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Islam, M.I., Biswas, R.K. & Khanam, R. Effect of internet use and electronic game-play on academic performance of Australian children. Sci Rep 10 , 21727 (2020). https://doi.org/10.1038/s41598-020-78916-9

Download citation

Received : 28 August 2020

Accepted : 02 December 2020

Published : 10 December 2020

DOI : https://doi.org/10.1038/s41598-020-78916-9




July 31, 2024


Autonomy boosts college student attendance and performance

by Stacy Kish, Carnegie Mellon University


A new paper from Carnegie Mellon University indicates that giving students more autonomy leads to better attendance and improved performance. The research was published in the journal Science Advances .

In one experiment, students were given the choice to make their own attendance mandatory. Contradicting common faculty beliefs, 90% of students in the initial study chose to do so, committing themselves to attending class reliably or to having their final grades docked. Under this "optional-mandatory attendance" policy, students came to class more reliably than students whose attendance had been mandated.

Student choice in learning

The pattern has held true. In additional studies across five classes that included 60–200 students, 73%–95% opted for mandatory attendance, and at most 10% regretted their decision by the semester's end.

"Like Ulysses, students know they will face significant temptations. By making their attendance mandatory, they exercise self-control over their future behavior," said first author Simon Cullen, assistant professor in the Department of Philosophy and Dietrich College AI and Education Fellow.

"We are born curious, and we naturally enjoy mastering many challenging learning tasks, but controlling course policies like mandatory attendance can undermine that motivation."

The role of autonomy in academic success

According to Cullen, the findings challenge widely held beliefs about student behavior. Many educators worry that, given the choice, students would opt for the easiest path possible; this study paints a starkly different picture.

"Anytime in a class that you give freedom to choose, you give students the feeling of control over their education," said Danny Oppenheimer, professor in the Social and Decision Sciences and Psychology departments at CMU and co-author of the article. "It puts the learning in the students' hands and increases their motivation."

Preparing students for real-world challenges

A second experiment indicated that when given the option to switch to an easier homework stream at any time before midterms, 85%–90% of students chose to tackle the more challenging work. The "optional-mandatory homework" policy led students to spend more time on their assignments and to learn more over the semester compared to students who were compelled to complete the same work. Cullen gauged the improved understanding of the material by examining how well students did on the problem sets throughout the semester.

These findings suggest that the common practice of imposing strict rules on college students may be counterproductive. Cullen and Oppenheimer found that allowing students more autonomy could lead to better academic outcomes and prepare them more effectively for the real world.

"The thought was that giving them greater control over their own learning would prepare them for the real world," Cullen said. "Students can be driven to excel in our classes by the same sources of motivation that drive them to pursue countless projects and passions that require no external incentives. But only if we let them choose to learn."

Enhancing engagement and retention in higher education

The researchers note their findings also highlight a significant gap in current educational practices. Despite decades of research demonstrating the importance of autonomy to motivation, autonomy-promoting policies remain rare in higher education.

"It's as if we've been ignoring one of the most powerful tools in our educational toolkit," said Oppenheimer. "By harnessing students' intrinsic motivation to learn through increased autonomy, we achieve better results than through external pressure."

The researchers caution that their findings, while promising, have limitations. The study was conducted at a single university with a limited number of students, and more research is needed to determine whether the results will replicate across different types of institutions and student populations. The authors are collaborating with a diverse set of institutions to test the approach's broader applicability.

"We're super excited about these results, but we're also eager to see how our interventions work across a range of settings," Cullen said. "We're particularly interested in exploring how autonomy might benefit students from disadvantaged backgrounds and those with disabilities."

The study opens up new avenues for research and practical applications in higher education. The authors suggest that similar choice architectures could be applied to other aspects of college courses, such as deadlines, course materials and even exam formats.

"As colleges and universities grapple with issues of student engagement, retention and academic success, this research offers a fresh perspective," said Cullen. "By trusting students with more control over their education, institutions might not only improve academic outcomes but also foster a more positive and empowering learning environment."

Journal information: Science Advances

Provided by Carnegie Mellon University



COMMENTS

  1. Relationship between study habits and academic achievement in students

    Introduction. Academic performance of students is one of the main indicators used to evaluate the quality of education in universities. 1, 2 Academic performance is a complex process that is influenced by several factors, such as study habits. 2 Study habit is different individual behavior in relation to studying 3 and is a combination of study method and skill. 4 In other words, study habits ...

  2. The Learners' Study Habits and Its Relation on Their Academic Performance

It is an action, like reading, taking notes, or conducting study groups, that students perform frequently and regularly to accomplish learning goals. It can be defined as effective or ...

  3. (Pdf) the Influence of Study Habits in The Academic Performance of

However, study habits such as creating and following review schedules for exams (r² = 0.064), reading (r² = 0.057), and taking down notes (r² = 0.042) were the most influential study habits ...

  4. A Quantitative Analysis of Study Habits Among Lower- and Higher

Our recent small-scale qualitative study investigated behavioral differences between higher- and lower-performing students by conducting multiple interviews with a few students as the CS1 course progressed [18]. We found noticeable differences in the study habits of higher- and lower-performing students. As the findings resulted from interviews ...

  5. (PDF) The Influence of Study Habits in the Academic Performance of

    This quantitative study aims to explore the influence of study habits in the development of academic performance of 128 identified senior high school students at Cagraray island, Philippines.

  6. Study Habits and Procrastination: The Role of Academic Self-Efficacy

The conceptual model, shown in Figure 1, assumes that the influence of Study Skill Habits on academic procrastination is mediated by Study Self-Efficacy. The SSH construct is specified as a formative latent construct, whereas SSE and procrastination are specified as reflective latent constructs.

  7. To What Extent Do Study Habits Relate to Performance?

    Students' study sessions outside class are important learning opportunities in college courses. However, we often depend on students to study effectively without explicit instruction. In this study, we described students' self-reported study habits and related those habits to their performance on exams. Notably, in these analyses, we controlled for potential confounds, such as academic ...

  8. PDF Study of the relationship between study habits and academic ...

    the academic achievement and study habit of the student ... study habits. According to Sharma (2005, p.67)" academic performance is a necessary evil because one kind of ability is rewarded economically and socially more than ... Methods of research The study applied quantitative approach. Vermeulen (1993, p.15) Siahi and Maiyo 137

  9. Study Habits and Academic Performance among Students: A Systematic

    Study habits are the well-planned intended methods of study, the chain of approaches in the process of memorising, systematizing, regulating, retaining novel facts and ideas related to the learning materials, which has gained the shape of consistent endeavours on the part of students, towards comprehending academic subjects and qualifying examinations. The constant practices a person utilizes ...

  10. Effect of sleep and mood on academic performance—at ...

    Academic achievement and cognitive functions are influenced by sleep and mood/emotion. In addition, several other factors affect learning. A coherent overview of the resultant interrelationships ...

  11. PDF Study Habits and Academic Performance among Students: A Systematic Review

…academic performance. Better study habits lead to higher academic performance. Completion of homework and assignments at the proper time, proper time allocation, reading and note-taking, and teacher consultation significantly influenced the academic performance of university students. Kumar (2017) stated that study habits an…

  12. A Systematic Review : Correlation of Study Habits and Academic

    A good study pattern has a great contribution towards academic performance of student as it develops a sense of working more effectively, efficiently and ultimately leading to higher scores and experiencing lesser stress in the process. Background: Study habits can be defined as the normal routine act of studying which also has an influence on cognitive process. Many activities are included in ...

  13. Sleep quality, duration, and consistency are associated with better

    These findings provide quantitative, objective evidence that better quality, longer duration, and greater consistency of sleep are strongly associated with better academic performance in college ...

  14. The Effects of Student Reflection on Academic Performance and

    The results of the quantitative research indicated that there was a statistically insignificant correlation between student self-reflection and academic performance and motivation to complete assignments for underserved students in 11th- and 12th-grade English; however, analysis of the qualitative data indicated that students' levels of ...

  15. PDF STUDY HABITS AND ACADEMIC PERFORMANCE OF SECONDARY SCHOOL STUDENTS ...

    study habits which could lead to good academic performance in mathematics and a functional school library should be mounted in all the secondary schools in Uyo Local Government Area of AkwaIbom State, Nigeria. Key words: Mathematics, Students Academic Performance and Study Habits. 1. Introduction Mathematics is a methodical application of matter.

  16. The Effect of Sleep Quality on Students' Academic Achievement

    Sleep is an inseparable part of human health and life, which is crucial in learning, practice, as well as physical and mental health. It affects the capacity of individual learning, academic performance, and neural-behavioral functions. This study aimed to determine the relationship between sleep quality and students' academic achievement ...

  17. Effect of internet use and electronic game-play on academic performance

    The current study uses only reading, writing and numeracy scores to measure academic performance. In this study, the National standard score is a combination of three variables: whether the ...

  18. PDF Study Habits of Students: Keys to Good Academic Performance in Public

    International Journal of Quantitative and Qualitative Research Methods Vol.6, No.3, pp.10-23, August 2018 ___Published by European Centre for Research Training and Development UK (www.eajournals.org) 11 ... good study habits enhance academic performance whilst poor study habits stifles students

  19. (PDF) Learning styles, study habits and academic performance of

relationship among the learning styles, study habits and academic performance of the respondents. Further, this study tested the research hypotheses in null form at the 0.05 alpha level: 1) if there ...

  20. The Impact of Study Habits on the Academic Performance of Senior High

    found that there was a significance between the study habits and academic performance of the senior high school students, thus the study's null hypothesis was eventually rejected. Keywords: ... beneficial for quantitative research as it requires statistical methods to collect the data needed. With the given research design, the researchers ...

  21. Sleep Disorders' Prevalence and Impact on Academic Performance among

    A separate study revealed that a range of sleep issues, including snoring and excessive daytime fatigue, detrimentally affect the academic performance of college students (Khassawneh et al., 2018). A recent study by Belingheri et al. (2020) demonstrated a high prevalence of sleep disorders among nursing and medical students and an association ...

  22. A Study on Study Habits and Academic Performance of Students

    In this study, the association between study habits and academic performance of students is examined. Sample of 270 students were taken from two colleges Govt. Allama Iqbal College for Women ...

  23. Autonomy boosts college student attendance and performance

    A new paper from Carnegie Mellon University indicates that giving students more autonomy leads to better attendance and improved performance. The research was published in the journal Science ...

  24. Impact on Learning Habit and Academic Performance of Students

The impact of the students' good study habits, including doing their assignments, participating fully in class, managing their time, remaining focused, and working hard, has significantly ...

  25. Study Habits and Their Effects with the Academic Performance of

    The results of the analysis of variance of the regression of study habits on the academic performance of the students revealed an F ratio of 0.939 and 0.900 with an associate probability equal to ...

  26. Relationship between Social Media Exposure and Academic Performance

    This mixed-methods study examined the influence of social media exposure and addiction on the academic achievement of undergraduates at Prince Abubakar Audu University from 2019-2021.

  27. (PDF) Impact of ICT on the Academic Performance of Students at

    The study used a quantitative experimental research method employing a single-group pre-test and post-test design. A total of N=525 undergraduate computer science students partook in this research.