Center for Teaching Innovation

Resource Library: Teaching Students to Evaluate Each Other and Why to Use Peer Review

Peer assessment, or review, can improve overall learning by helping students become better readers, writers, and collaborators. A well-designed peer review program also develops students’ evaluation and assessment skills. The following are a few techniques that instructors have used to implement peer review.

Planning for peer review

  • Identify where you can incorporate peer review exercises into your course.
  • For peer review on written assignments, design guidelines that specify clearly defined tasks for the reviewer. Consider what feedback students can competently provide.
  • Determine whether peer review activities will be conducted as in-class or out-of-class assignments (or as a combination of both).
  • Plan for in-class peer reviews to last at least one class session. More time will be needed for longer papers and papers written in foreign languages.
  • Model appropriate constructive criticism and descriptive feedback through the comments you provide on papers and in class.
  • Explain the reasons for peer review, the benefits it provides, and how it supports course learning outcomes.
  • Set clear expectations: determine whether students will receive grades on their contributions to peer review sessions. If grades are given, be clear about what you are assessing, what criteria will be used for grading, and how the peer review score will be incorporated into their overall course grade.

Before the first peer review session

  • Give students a sample paper to review and comment on in class using the peer review guidelines. Ask students to share feedback and help them rephrase their comments to make them more specific and constructive, as needed.
  • Consider using the sample paper exercise to teach students how to think about, respond to, and use comments by peer reviewers to improve their writing.
  • Ask for input from students on the peer review worksheet or co-create a rubric in class.
  • Prevent overly harsh peer criticism by instructing students to provide feedback as if they were speaking to the writer or presenter directly.
  • Consider how you will assign students to groups. Do you want them to work together for the entire semester, or change for different assignments? Do you want peer reviewers to remain anonymous? How many reviews will each assignment receive?

During and after peer review sessions

  • Give clear directions and time limits for in-class peer review sessions and set defined deadlines for out-of-class peer review assignments.
  • Listen to group discussions and provide guidance and input when necessary.
  • Consider requiring students to write a plan for revision indicating the changes they intend to make on the paper and explaining why they have chosen to acknowledge or disregard specific comments and suggestions. For exams and presentations, have students write about how they would approach the task next time based on the peer comments.
  • Ask students to submit the peer feedback they received with their final papers. Make clear whether or not you will be taking this feedback into account when grading the paper, or when assigning a participation grade to the student reviewer.
  • Consider having students assess the quality of the feedback they received.
  • Discuss the process in class, addressing problems that were encountered and what was learned.

Examples of peer review activities

  • After collection, redistribute papers randomly along with a grading rubric. After students have evaluated the papers, ask them to exchange with a neighbor, evaluate the new paper, and then compare notes.
  • After completing an exam, have students compare and discuss answers with a partner. You may offer them the opportunity to submit a new answer, dividing points between the two.
  • In a small class, ask students to bring one copy of their paper with their name on it and one or two copies without a name. Collect the “name” copy and redistribute the others for peer review. Provide feedback on all student papers. Collect the peer reviews and return papers to their authors.
  • For group presentations, require the class to evaluate the group’s performance using a predetermined marking scheme.
  • When working on group projects, have students evaluate each group member’s contribution to the project on a scale of 1-10. Require students to provide rationale for how and why they awarded points.

Peer review technologies

Best used for providing feedback (formative assessment), PeerMark is a peer review program that encourages students to evaluate each other’s work. Students comment on assigned papers and answer scaled and free-form questions designed by the instructor. PeerMark does not allow you to assign point values or assign and export grades.

Contact the Center for a consultation on using peer assessment tools such as PeerMark.

Am J Pharm Educ. 2010;74(9).

A Standardized Rubric to Evaluate Student Presentations

Michael J. Peeters

a University of Toledo College of Pharmacy

Eric G. Sahloff

Gregory E. Stone

b University of Toledo College of Education

To design, implement, and assess a rubric to evaluate student presentations in a capstone doctor of pharmacy (PharmD) course.

A 20-item rubric was designed and used to evaluate student presentations in a capstone fourth-year course in 2007-2008, and then revised and expanded to 25 items and used to evaluate student presentations for the same course in 2008-2009. Two faculty members evaluated each presentation.

The Many-Facets Rasch Model (MFRM) was used to determine the rubric's reliability, quantify the contribution of evaluator harshness/leniency in scoring, and assess grading validity by comparing the current grading method with a criterion-referenced grading scheme. In 2007-2008, rubric reliability was 0.98, with a separation of 7.1 and 4 rating scale categories. In 2008-2009, MFRM analysis suggested 2 of 98 grades be adjusted to eliminate evaluator leniency, while a further criterion-referenced MFRM analysis suggested 10 of 98 grades should be adjusted.

The evaluation rubric was reliable and evaluator leniency appeared minimal. However, a criterion-referenced re-analysis suggested a need for further revisions to the rubric and evaluation process.

INTRODUCTION

Evaluations are important in the process of teaching and learning. In health professions education, performance-based evaluations are identified as having “an emphasis on testing complex, ‘higher-order’ knowledge and skills in the real-world context in which they are actually used.” 1 Objective structured clinical examinations (OSCEs) are a common, notable example. 2 On Miller's pyramid, a framework used in medical education for measuring learner outcomes, “knows” is placed at the base of the pyramid, followed by “knows how,” then “shows how,” and finally, “does” is placed at the top. 3 Based on Miller's pyramid, evaluation formats that use multiple-choice testing focus on “knows” while an OSCE focuses on “shows how.” Just as performance evaluations remain highly valued in medical education, 4 authentic task evaluations in pharmacy education may be better indicators of future pharmacist performance. 5 Much attention in medical education has been focused on reducing the unreliability of high-stakes evaluations. 6 Regardless of educational discipline, high-stakes performance-based evaluations should meet educational standards for reliability and validity. 7

PharmD students at the University of Toledo College of Pharmacy (UTCP) were required to complete a course on presentations during their final year of pharmacy school and then give a presentation that served as both a capstone experience and a performance-based evaluation for the course. Pharmacists attending the presentations were given Accreditation Council for Pharmacy Education (ACPE)-approved continuing education credits. An evaluation rubric for grading the presentations was designed to allow multiple faculty evaluators to objectively score student performances in the domains of presentation delivery and content. Given the pass/fail grading procedure used in advanced pharmacy practice experiences, passing this presentation-based course and subsequently graduating from pharmacy school were contingent upon this high-stakes evaluation. As a result, the reliability and validity of the rubric used and the evaluation process needed to be closely scrutinized.

Each year, about 100 students completed presentations and at least 40 faculty members served as evaluators. With the use of multiple evaluators, a question of evaluator leniency often arose (ie, whether evaluators used the same criteria for evaluating performances or whether some evaluators graded easier or more harshly than others). At UTCP, opinions among some faculty evaluators and many PharmD students implied that evaluator leniency in judging the students' presentations significantly affected specific students' grades and ultimately their graduation from pharmacy school. While it was plausible that evaluator leniency was occurring, the magnitude of the effect was unknown. Thus, this study was initiated partly to address this concern over grading consistency and scoring variability among evaluators.

Because both students' presentation style and content were deemed important, each item of the rubric was weighted the same across delivery and content. However, because there were more categories related to delivery than content, an additional faculty concern was that students feasibly could present poor content but have an effective presentation delivery and pass the course.

The objectives for this investigation were: (1) to describe and optimize the reliability of the evaluation rubric used in this high-stakes evaluation; (2) to identify the contribution and significance of evaluator leniency to evaluation reliability; and (3) to assess the validity of this evaluation rubric within a criterion-referenced grading paradigm focused on both presentation delivery and content.

DESIGN

The University of Toledo's Institutional Review Board approved this investigation. This study investigated performance evaluation data for an oral presentation course for final-year PharmD students from 2 consecutive academic years (2007-2008 and 2008-2009). The course was taken during the fourth year (P4) of the PharmD program and was a high-stakes, performance-based evaluation. The goal of the course was to serve as a capstone experience, enabling students to demonstrate advanced drug literature evaluation and verbal presentation skills through the development and delivery of a 1-hour presentation. These presentations were to be on a current pharmacy practice topic and of sufficient quality for ACPE-approved continuing education. This experience allowed students to demonstrate their competencies in literature searching, literature evaluation, and application of evidence-based medicine, as well as their oral presentation skills. Students worked closely with a faculty advisor to develop their presentation. Each class (2007-2008 and 2008-2009) was randomly divided, with half of the students taking the course and completing their presentation and evaluation in the fall semester and the other half in the spring semester. To accommodate such a large number of students presenting for 1 hour each, it was necessary to use multiple rooms with presentations taking place concurrently over 2.5 days for both the fall and spring sessions of the course. Two faculty members independently evaluated each student presentation using the provided evaluation rubric. The 2007-2008 presentations involved 104 PharmD students and 40 faculty evaluators, while the 2008-2009 presentations involved 98 students and 46 faculty evaluators.

After vetting by the pharmacy practice faculty, the initial rubric used in 2007-2008 focused on describing explicit, specific evaluation criteria such as amounts of eye contact, voice pitch/volume, and descriptions of study methods. The evaluation rubric used in 2008-2009 was similar to the initial rubric, but with 5 items added (Figure 1). The evaluators rated each item (eg, eye contact) based on their perception of the student's performance. The 25 rubric items had equal weight (ie, 4 points each), with each item receiving a rating from the evaluator of 1 to 4 points. Thus, only 4 rating categories were included, as has been recommended in the literature. 8 However, some evaluators created an additional 3 rating categories by marking lines in between the 4 ratings to signify half points (ie, 1.5, 2.5, and 3.5). For example, for the “notecards/notes” item in Figure 1, a student looked at her notes sporadically during her presentation, but not distractingly nor enough to warrant a score of 3 in the faculty evaluator's opinion, so a 3.5 was given. Thus, a 7-category rating scale (1, 1.5, 2, 2.5, 3, 3.5, and 4) was analyzed. Each independent evaluator's ratings for the 25 items were summed to form a score (0-100%). The 2 evaluators' scores were then averaged and a letter grade was assigned based on the following scale: ≥90% = A, 80%-89% = B, 70%-79% = C, <70% = F.
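As a rough illustration of this traditional scoring arithmetic (a sketch only, not the authors' code; the item ratings below are hypothetical), the calculation can be expressed as:

```python
# Sketch of the traditional scoring method described above (hypothetical ratings).
# Each evaluator rates 25 items on a 1-4 scale (half points were sometimes used in
# 2007-2008); ratings are summed to a percentage, the two evaluators' percentages
# are averaged, and a letter grade is assigned.

def percent_score(ratings, max_per_item=4):
    """Convert one evaluator's item ratings into a 0-100% score."""
    return 100 * sum(ratings) / (len(ratings) * max_per_item)

def letter_grade(percent):
    """Map an averaged percentage to a letter grade (>=90 A, 80-89 B, 70-79 C, <70 F)."""
    if percent >= 90:
        return "A"
    if percent >= 80:
        return "B"
    if percent >= 70:
        return "C"
    return "F"

# Two hypothetical evaluators, 25 ratings each
evaluator_1 = [4, 3.5, 3, 4, 4, 3, 3.5, 4, 3, 4, 4, 3, 3.5, 4, 4, 3, 3, 4, 4, 3.5, 4, 3, 4, 4, 3]
evaluator_2 = [3, 3, 3.5, 4, 3, 3, 4, 4, 3, 3.5, 4, 3, 3, 4, 4, 3, 3, 4, 3.5, 3, 4, 3, 4, 4, 3]

average = (percent_score(evaluator_1) + percent_score(evaluator_2)) / 2
print(f"Average score: {average:.1f}% -> grade {letter_grade(average)}")
```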

Figure 1. Rubric used to evaluate student presentations given in a 2008-2009 capstone PharmD course.

EVALUATION AND ASSESSMENT

Rubric Reliability

To measure rubric reliability, iterative analyses were performed on the evaluations using the Many-Facets Rasch Model (MFRM) following the 2007-2008 data collection period. While Cronbach's alpha is the most commonly reported coefficient of reliability, its single-number reporting without supplementary information can provide incomplete information about reliability. 9-11 Due to its formula, Cronbach's alpha can be increased by simply adding more repetitive rubric items or having more rating scale categories, even when no further useful information has been added. The MFRM reports separation, which is calculated differently from Cronbach's alpha and is another source of reliability information. Unlike Cronbach's alpha, separation does not appear enhanced by adding further redundant items. From a measurement perspective, a higher separation value is better than a lower one because students are being divided into meaningful groups after measurement error has been accounted for. Separation can be thought of as the number of units on a ruler: the more units the ruler has, the larger the range of performance levels that can be measured among students. For example, a separation of 4.0 suggests 4 gradations, such that a grade of A is distinctly different from a grade of B, which in turn is different from a grade of C or of F. In measuring performances, a separation of 9.0 is better than 5.5, just as a separation of 7.0 is better than a 6.5; a higher separation coefficient suggests that student performance potentially could be divided into a larger number of meaningfully separate groups.
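For readers unfamiliar with these statistics, separation is commonly defined in the Rasch literature roughly as follows (our notation, consistent with the footnotes to Table 1, not an equation reproduced from the article):

```latex
% Separation (G) and its relation to a reliability coefficient (R)
G = \frac{\sigma_{\mathrm{true}}}{\mathrm{RMSE}}, \qquad
\sigma_{\mathrm{true}} = \sqrt{\sigma_{\mathrm{observed}}^{2} - \mathrm{MSE}}, \qquad
R = \frac{G^{2}}{1 + G^{2}}
```

Here RMSE is the root mean square of the standard errors of the measures. Under this formulation, the separation of 7.1 reported below corresponds to a reliability of roughly 7.1²/(1 + 7.1²) ≈ 0.98, consistent with the reported coefficient.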

The rating scale can have substantial effects on reliability, 8 and description of how a rating scale functions is a unique aspect of the MFRM. With analysis iterations of the 2007-2008 data, the number of rating scale categories was collapsed successively until improvements in reliability and/or separation were no longer found. The last iteration that yielded an improvement in reliability or separation was deemed to give the optimal rating scale for this evaluation rubric.

In the 2007-2008 analysis, iterations of the data were run through the MFRM. While only 4 rating scale categories had been included on the rubric, 7 categories had to be included in the analysis because some faculty members inserted 3 in-between categories. This initial analysis based on a 7-category rubric provided a reliability coefficient (similar to Cronbach's alpha) of 0.98, while the separation coefficient was 6.31. The separation coefficient denoted 6 distinctly separate groups of students based on the items. Rating scale categories were then collapsed, with “in-between” categories included in adjacent full-point categories. Table 1 shows the reliability and separation for the iterations as the rating scale was collapsed. As shown, the optimal evaluation rubric maintained a reliability of 0.98, but separation improved to 7.10, or 7 distinctly separate groups of students based on the items. In other words, another distinctly separate group was gained by reducing the rating scale, while no change was seen in Cronbach's alpha even though the number of rating scale categories was reduced. Table 1 describes the stepwise, sequential pattern across the final 4 rating scale categories analyzed. Informed by the 2007-2008 results, the 2008-2009 evaluation rubric (Figure 1) used 4 rating scale categories, and reliability remained high.

Table 1. Evaluation Rubric Reliability and Separation With Iterations While Collapsing Rating Scale Categories

a Reliability coefficient of variance in rater response that is reproducible (ie, Cronbach's alpha).

b Separation is a coefficient of item standard deviation divided by average measurement error and is an additional reliability coefficient.

c Optimal number of rating scale categories based on the highest reliability (0.98) and separation (7.1) values.

Evaluator Leniency

Harsh raters (ie, hawks) and lenient raters (ie, doves), described by Fleming and colleagues over half a century ago, 6 have also been demonstrated to be an issue in more recent studies. 12-14 Shortly after the 2008-2009 data were collected, the evaluations by multiple faculty evaluators were collated and analyzed in the MFRM to identify possible inconsistent scoring. While traditional interrater reliability does not deal with this issue, the MFRM had been used previously to illustrate evaluator leniency on licensing examinations for medical students and medical residents in the United Kingdom. 13 Thus, accounting for evaluator leniency may prove important to grading consistency (and reliability) in a course using multiple evaluators. Along with identifying evaluator leniency, the MFRM also corrected for this variability. For comparison, course grades were calculated by summing the evaluators' actual ratings (as discussed in the Design section) and compared with the MFRM-adjusted grades to quantify the degree of evaluator leniency occurring in this evaluation.

Measures created from the data analysis in the MFRM were converted to percentages using a common linear test-equating procedure involving the mean and standard deviation of the dataset. 15 To these percentages, student letter grades were assigned using the same traditional method used in 2007-2008 (ie, ≥90% = A, 80%-89% = B, 70%-79% = C, <70% = F). Letter grades calculated using the revised rubric and the MFRM were then compared to letter grades calculated using the previous rubric and course grading method.
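The article does not give its exact equating constants; a generic mean-and-sigma linear conversion of this kind (our notation, offered only as an illustration) takes the form:

```latex
% Mean-and-sigma linear conversion of an MFRM measure \theta_n to a percentage
\mathrm{percent}_n = \bar{x}_{\%} + \left(\theta_n - \bar{\theta}\right)\frac{s_{\%}}{s_{\theta}}
```

where \(\bar{\theta}\) and \(s_{\theta}\) are the mean and standard deviation of the students' MFRM measures, and \(\bar{x}_{\%}\) and \(s_{\%}\) are the mean and standard deviation of the percentage scale being matched.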

In the analysis of the 2008-2009 data, the interrater reliability for the letter grades, comparing the 2 independent faculty evaluations for each presentation, was 0.98 by Cohen's kappa. However, the 3-facet MFRM revealed significant variation in grading. The interaction of evaluator leniency with student ability and item difficulty was significant (chi-square test, p < 0.01). As well, the MFRM showed a reliability of 0.77 for the evaluator facet, with a separation of 1.85 (ie, almost 2 distinct groups of evaluators). The MFRM student ability measures were scaled to letter grades and compared with course letter grades. As a result, 2 B's became A's, so evaluator leniency accounted for a 2% change in letter grades (ie, 2 of 98 grades).
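For reference, Cohen's kappa compares observed agreement with the agreement expected by chance (the standard definition, not a formula printed in the article):

```latex
\kappa = \frac{p_o - p_e}{1 - p_e}
```

where \(p_o\) is the observed proportion of presentations on which the 2 evaluators assigned the same letter grade and \(p_e\) is the agreement expected by chance from the evaluators' marginal grade distributions. A kappa of 0.98 indicates near-perfect agreement on letter grades, even though the MFRM still detected systematic differences in evaluator severity.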

Validity and Grading

Explicit criterion-referenced standards for grading are recommended for higher evaluation validity. 3, 16-18 The course coordinator completed 3 additional evaluations of a hypothetical student presentation, rating the minimal criteria expected for an A, B, or C letter-grade performance, respectively. These evaluations were placed with the other 196 evaluations (2 evaluators × 98 students) from 2008-2009 into the MFRM, with the resulting analysis report giving specific cutoff percentage scores for each letter grade. Unlike the traditional scoring method of assigning all items an equal weight, the MFRM ordered evaluation items from those more difficult for students (given more weight) to those less difficult for students (given less weight). These criterion-referenced letter grades were compared with the grades generated using the traditional grading process.

When the MFRM data were rerun with the criterion-referenced evaluations added into the dataset, a 10% change was seen in letter grades (ie, 10 of 98 grades). Of the 10 letter grades that were lowered, 1 fell below a C, the minimum standard, suggesting a failing performance. Qualitative feedback from faculty evaluators agreed with this suggested criterion-referenced performance failure.

Measurement Model

Within modern test theory, the Rasch measurement model maps examinee ability against evaluation item difficulty. Items are not arbitrarily given the same value (ie, 1 point) but vary based on how difficult or easy the items were for examinees. The Rasch measurement model has been used frequently in educational research, 19 by numerous high-stakes testing professional bodies such as the National Board of Medical Examiners, 20 and also by various state-level departments of education for standardized secondary education examinations. 21 The Rasch measurement model itself has rigorous construct validity and reliability. 22 A 3-facet MFRM allows an evaluator variable to be added to the student ability and item difficulty variables that are routine in other Rasch measurement analyses. Just as multiple regression accounts for additional variables in analysis compared with a simple bivariate regression, the MFRM is a multiple-variable variant of the Rasch measurement model and was applied in this study using the Facets software (Linacre, Chicago, IL). The MFRM is ideal for performance-based evaluations with the addition of independent evaluators/judges. 8, 23 From both yearly cohorts in this investigation, evaluation rubric data were collated and placed into the MFRM for separate, subsequent analyses. Within the MFRM output report, a chi-square test for a difference in evaluator leniency was reported with an alpha of 0.05.
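In Linacre's usual rating-scale formulation of the 3-facet model (our notation; the article itself does not print the equation), the log-odds of a student receiving rating category k rather than k-1 on an item is modeled as:

```latex
% Three-facet Many-Facets Rasch Model (rating scale form)
\ln\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k
```

where \(B_n\) is the ability of student n, \(D_i\) the difficulty of rubric item i, \(C_j\) the severity (harshness or leniency) of evaluator j, and \(F_k\) the threshold for category k relative to category k-1.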

DISCUSSION

The presentation rubric was reliable. Results from the 2007-2008 analysis illustrated that the number of rating scale categories impacted the reliability of this rubric and that use of only 4 rating scale categories appeared best for measurement. While a 10-point Likert-like scale may commonly be used in patient care settings, such as in quantifying pain, most people cannot process more than 7 points or categories reliably. 24 Presumably, when more than 7 categories are used, the categories beyond 7 either are not used or are collapsed by respondents into fewer than 7 categories. Five-point scales are commonly encountered, but use of an odd number of categories can be problematic for interpretation and is not recommended. 25 Responses using the middle category could denote a true perceived average or neutral response, responder indecisiveness, or even confusion over the question. Therefore, removing the middle category appears advantageous and is supported by our results.

With the 2008-2009 data, the MFRM identified evaluator leniency: some evaluators graded more harshly, while others graded more leniently. However, correcting for this leniency with the MFRM suggested only a couple of grade changes, so evaluator leniency did not appear to play a substantial role in the evaluation of this course at this time.

Performance evaluation instruments are either holistic or analytic rubrics. 26 The evaluation instrument used in this investigation exemplified an analytic rubric, which elicits specific observations and often demonstrates high reliability. However, Norman and colleagues point out a conundrum where drastically increasing the number of evaluation rubric items (creating something similar to a checklist) could augment a reliability coefficient though it appears to dissociate from that evaluation rubric's validity. 27 Validity may be more than the sum of behaviors on evaluation rubric items. 28 Having numerous, highly specific evaluation items appears to undermine the rubric's function. With this investigation's evaluation rubric and its numerous items for both presentation style and presentation content, equal numeric weighting of items can in fact allow student presentations to receive a passing score while falling short of the course objectives, as was shown in the present investigation. As opposed to analytic rubrics, holistic rubrics often demonstrate lower yet acceptable reliability, while offering a higher degree of explicit connection to course objectives. A summative, holistic evaluation of presentations may improve validity by allowing expert evaluators to provide their “gut feeling” as experts on whether a performance is “outstanding,” “sufficient,” “borderline,” or “subpar” for dimensions of presentation delivery and content. A holistic rubric that integrates with criteria of the analytic rubric (Figure 1) for evaluators to reflect on, but maintains a summary, overall evaluation for each dimension (delivery/content) of the performance, may allow the benefits of each type of rubric to be used advantageously. This finding has been demonstrated with OSCEs in medical education, where checklists for completed items (ie, yes/no) at an OSCE station have been successfully replaced with a few reliable global impression rating scales. 29-31

Alternatively, and because the MFRM model was used in the current study, an item-weighting approach could be used with the analytic rubric. That is, item weighting based on the difficulty of each rubric item could suggest how many points should be given for that item, eg, some items would be worth 0.25 points, while others would be worth 0.5 points or 1 point (Table 2). As could be expected, the more complex the rubric scoring becomes, the less feasible the rubric is to use. This was the main reason this revision approach was not chosen by the course coordinator following this study. As well, it does not address the conundrum that the performance may be more than the summation of the behavior items in the Figure 1 rubric. The current study cannot suggest which approach would be better, as each has its merits and pitfalls.

Table 2. Rubric Item Weightings Suggested by the 2008-2009 Many-Facet Rasch Measurement Analysis

Regardless of which approach is used, alignment of the evaluation rubric with the course objectives is imperative. Objectivity has been described as a general striving for value-free measurement (ie, free of the evaluator's interests, opinions, preferences, sentiments). 27 This is a laudable goal pursued through educational research. Strategies to reduce measurement error, termed objectification, may not necessarily lead to increased objectivity. 27 The current investigation suggested that a rubric could become too explicit if all the possible areas of an oral presentation that could be assessed (ie, objectification) were included. This appeared to dilute the effect of important items and lose validity. A holistic rubric that is more straightforward and easier to score quickly may be less likely to lose validity (ie, “lose the forest for the trees”), though operationalizing a revised rubric would need to be investigated further. Similarly, weighting items in an analytic rubric based on their importance and difficulty for students may alleviate this issue; however, adding up individual items might prove arduous. While the rubric in Figure 1, which has evolved over the years, is the subject of ongoing revisions, it appears a reliable rubric on which to build.

The major limitation of this study involves the observational method that was employed. Although the 2 cohorts were from a single institution, the investigators did use a completely separate class of PharmD students to verify initial instrument revisions. Optimizing the rubric's rating scale involved collapsing data from a 4-category rating scale, which a few evaluators had expanded to 7 categories through misuse, back into 4 independent categories without middle ratings. As a result of the study findings, no actual grading adjustments were made for students in the 2008-2009 presentation course; however, adjustments using the MFRM have been suggested by Roberts and colleagues. 13 Since 2008-2009, the course coordinator has made further small revisions to the rubric based on feedback from evaluators, but these have not yet been re-analyzed with the MFRM.

The evaluation rubric used in this study for student performance evaluations showed high reliability, and the analysis supported using 4 rating scale categories to optimize the rubric's reliability. While lenient and harsh faculty evaluators were found, variability in evaluator scoring affected grading in this course only minimally. Aside from reliability, issues of validity were raised using criterion-referenced grading. Future revisions to this evaluation rubric should reflect these criterion-referenced concerns. The rubric analyzed herein appears a suitable starting point for reliable evaluation of PharmD oral presentations, though it has limitations that could be addressed with further attention and revisions.

ACKNOWLEDGEMENT

Author contributions: MJP and EGS conceptualized the study, while MJP and GES designed it. MJP, EGS, and GES gave educational content foci for the rubric. As the study statistician, MJP analyzed and interpreted the study data. MJP reviewed the literature and drafted the manuscript. EGS and GES critically reviewed this manuscript and approved the final version for submission. MJP accepts overall responsibility for the accuracy of the data, its analysis, and this report.

Mastering Peer Evaluation with Effective Rubrics

Shreya Verma

Aug 1, 2023 • 5min read

Peer evaluation has emerged as a powerful process for fostering collaborative learning and providing valuable feedback to students. However, the process of peer evaluation can sometimes be ambiguous and subjective, leading to inconsistent outcomes. That's where rubrics come into play – these structured scoring guides bring clarity and objectivity to the peer evaluation process.

What are rubrics?

Rubrics are used to define the expectations of a particular assignment, providing clear guidelines for assessing different levels of effectiveness in meeting those expectations. 

Instructors should consider using rubrics when conducting peer evaluation for the following reasons:

  • Increase Transparency and Consistency in Grading: Rubrics promote transparency by outlining success criteria and ensuring consistent grading, fostering fairness in assessments.
  • Increase the Efficiency of Grading: Rubrics streamline grading with predefined criteria, enabling quicker evaluations.
  • Support Formative Assessment: Rubrics are useful for formative assessment, providing ongoing feedback for student improvement and progress over time.
  • Enhance the Quality of Self- and Peer-Evaluation: Rubrics empower students as active learners, fostering deeper understanding and critical thinking through self-assessment and peer evaluation.
  • Encourage Students to Think Critically: Rubrics link assignments to learning outcomes, stimulating critical thinking and encouraging students to reflect on their performance's alignment with intended outcomes.
  • Reduce Student Concerns about Subjectivity or Arbitrariness in Grading: Rubrics offer a clear framework for evaluation, minimizing subjectivity and ensuring that assessments are based on specific criteria rather than subjective judgment.

Components of a Rubric

A rubric comprises several essential components that collectively define the evaluation criteria for a given module. First, a clear task description is needed, outlining the expectations and requirements. This description serves as the foundation upon which students' work will be assessed. Second, a scale helps in gauging performance levels by offering ranges such as good-bad, always-never, or beginner-expert. The rubric then breaks down the evaluation into distinct dimensions, which are specific elements of the expectations that together shape the overall assessment. These dimensions can encompass aspects such as timeliness, contribution, preparation, and more. Lastly, a rubric requires definitions of the dimensions, outlining the performance levels for each dimension and providing a clear understanding of the expectations at each level. A simple illustration follows.
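As an illustration only (the structure, field names, and example text below are ours, not taken from InteDashboard or any particular published rubric), the four components can be pictured as a small data structure:

```python
# Illustrative sketch of the four rubric components described above
# (task description, scale, dimensions, and definitions of the dimensions).
# All names and example text are hypothetical.

rubric = {
    "task_description": "Contribute to your team's case analysis and presentation.",
    "scale": ["Beginner", "Developing", "Proficient", "Expert"],   # performance levels
    "dimensions": {                                                # criteria being rated
        "timeliness": {
            "Beginner": "Frequently misses team deadlines.",
            "Developing": "Meets most deadlines with reminders.",
            "Proficient": "Meets all deadlines.",
            "Expert": "Meets all deadlines and helps others stay on schedule.",
        },
        "contribution": {
            "Beginner": "Rarely contributes ideas or work.",
            "Developing": "Contributes when prompted.",
            "Proficient": "Contributes regularly and substantively.",
            "Expert": "Drives the team's work and elevates others' contributions.",
        },
    },
}

# Each dimension defines what performance looks like at every level of the scale.
for dimension, levels in rubric["dimensions"].items():
    print(dimension, "->", ", ".join(levels.keys()))
```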

Types of Rubrics: Holistic vs Analytic

When it comes to assessing student performance, two common types of rubrics are employed: holistic and analytic.

Holistic rubrics use rating scales that encompass multiple criteria, emphasizing what the learner can demonstrate or accomplish. These rubrics are easier to develop and use, providing consistent and reliable evaluations. However, they do not offer specific feedback for improvement and can be challenging to score accurately. 

In contrast, analytic rubrics use rating scales to evaluate separate criteria, usually presented in a grid format. Each criterion is associated with descriptive tags or numbers that define the required level of performance. Analytic rubrics provide detailed feedback on areas of strength and weakness, and each criterion can be weighted to reflect its relative importance. While they offer more comprehensive feedback, creating analytic rubrics can be more time-consuming, and maintaining consistency in scoring may pose a challenge. 

Holistic rubrics are preferable when grading work or measuring overall progress and general performance. On the other hand, analytic rubrics are more suitable for evaluating multiple areas or criteria separately, allowing for a more detailed assessment of proficiency in each area.

Rubrics in Formative vs Summative Assessments

When deciding between using a holistic or analytic rubric, several factors come into play. One crucial consideration is the purpose of the rubric and how it aligns with the assessment goals. 

For formative assessment, where the focus is on providing ongoing feedback and supporting student learning, a holistic rubric might be more suitable. It allows educators to assess progress comprehensively, giving students an overall understanding of their performance.

In contrast, for summative assessment, where the emphasis lies on making final evaluations, an analytic rubric could be more effective. It breaks down the evaluation into distinct criteria, offering specific feedback on each area of assessment, enabling a more detailed and precise evaluation.

Examples of Rubrics in Peer Evaluation for Team-based Learning

Analytic rubrics, like Koles' and Texas Tech's methods, evaluate individual criteria separately, providing detailed feedback on various aspects of performance. Additionally, UT Austin's method, which is useful for formative assessments, focuses on ongoing feedback to support student learning and growth throughout the evaluation process. In contrast, holistic rubrics, such as Michaelsen's and Fink's methods, create an overall assessment score, capturing a comprehensive view of a student's performance.

All in all, rubrics empower both educators and students to engage in meaningful and effective evaluations. The thoughtful application of rubrics ensures fair and transparent evaluations, ultimately contributing to enhanced learning outcomes.

Free Peer Evaluation Forms & Samples (Word | PDF)

Peer evaluation, or assessment, offers a structured learning process for learners to critique and offer feedback on each other's work. This helps students develop lifelong skills in evaluating and providing feedback to each other. Peer assessment also equips learners with self-assessment skills, leading to improved work.

What is Peer Evaluation? 

Peer evaluation is an assessment approach that allows students to assess each other's performance. It is extremely valuable in helping learners learn from each other by listening, analyzing, and offering solutions to problems. It also offers learners a chance to encounter diverse perspectives, and they learn to take responsibility for their own learning by making clear judgments.

Why use Peer Evaluation

Peer evaluation can:

  • Empower students to take responsibility for and manage their own learning.
  • Enable students to learn the right techniques for assessing others and giving constructive feedback, developing lifelong assessment skills.
  • Enhance students' learning through the exchange of ideas and the diffusion of knowledge.
  • Help motivate students to engage more genuinely with the course materials.

Consideration for using Peer Evaluation

  • Help students understand the rationale for doing peer review, including the benefits of taking part in a peer-review process.
  • Consider having students evaluate anonymized assignments to obtain more objective feedback.
  • Be prepared to give feedback on the feedback students give each other. This includes showing examples of quality feedback and discussing which comments are useful and why.
  • Provide clear directions and time limits for all in-class peer reviews, and set defined rules and deadlines for out-of-class peer evaluation assignments.
  • Take time to listen to group feedback and discussions, and provide input and guidance where needed.
  • Student ownership of, and familiarity with, the criteria tend to enhance the validity of peer evaluation. Involve students in discussing the criteria used, and involve them in developing the assessment rubric.

Getting Started with Peer Evaluation

Getting started with peer evaluation should follow certain guidelines so that the assessment meets its intended targets. Necessary steps include:

  • Identifying the activities and assignments for which students might benefit most from peer feedback.
  • Breaking larger assignments into smaller pieces and incorporating peer assessment opportunities at every stage: for example, the assignment outline, followed by the first draft, the second draft, and so on.
  • Designing guidelines or rubrics with clearly defined tasks for the reviewers.
  • Introducing the rubrics through practice exercises so that students learn to apply them consistently.
  • Determining whether peer review activities will be carried out as in-class or out-of-class assignments. Out-of-class peer evaluation can be facilitated through an online tool such as Turnitin.
  • Helping students carry out peer evaluation by modeling appropriate, constructive criticism. Descriptive feedback through your own comments on the work, together with a well-constructed rubric, also helps make the entire process fruitful.
  • Incorporating small feedback groups in which written comments on an assignment can be explained and discussed with different reviewers.

The Benefits of Peer Evaluation

  • It encourages learners to reflect critically on each other's work.
  • It encourages students to take an effective part in the assessment process.
  • It assists students in developing sound judgment skills as they go through the work of other group members.
  • Students generate more feedback than one or even two teachers could.
  • It reduces the teacher's marking workload and time.
  • It discourages free riders, since students tend to put in more effort to perform well in front of their peers.
  • It helps maintain fairness in assessment, since everyone has the opportunity to assess each other.
  • Students learn how to evaluate criticism and apply generic skills throughout the process.
  • Students get the chance to learn more from each other.

While many tutors may not see the need to put an effective peer evaluation approach in place, it remains an approach that helps learners bring out the best in themselves, and it has contributed to successful student outcomes where it has been implemented. Keep in mind, however, that friendship and peer pressure can influence the reliability of the grades students give, so pay close attention to whether the right outcomes are being achieved.

Assessment and Curriculum Support Center

Creating and Using Rubrics

Last Updated: 4 March 2024.

On this page:

  • What is a rubric?
  • Why use a rubric?
  • What are the parts of a rubric?
  • Developing a rubric
  • Sample rubrics
  • Scoring rubric group orientation and calibration
  • Suggestions for using rubrics in courses
  • Equity-minded considerations for rubric development
  • Tips for developing a rubric
  • Additional resources & sources consulted

Note: The information and resources contained here serve only as a primer to the exciting and diverse perspectives in the field today. This page will be continually updated to reflect shared understandings of equity-minded theory and practice in learning assessment.

1. What is a rubric?

A rubric is an assessment tool often shaped like a matrix, which describes levels of achievement in a specific area of performance, understanding, or behavior.

There are two main types of rubrics:

Analytic Rubric : An analytic rubric specifies at least two characteristics to be assessed at each performance level and provides a separate score for each characteristic (e.g., a score on “formatting” and a score on “content development”).

  • Advantages: provides more detailed feedback on student performance; promotes consistent scoring across students and between raters
  • Disadvantages: more time consuming than applying a holistic rubric
  • Use an analytic rubric when you want to see strengths and weaknesses, or when you want detailed feedback about student performance.

Holistic Rubric: A holistic rubric provides a single score based on an overall impression of a student’s performance on a task.

  • Advantages: quick scoring; provides an overview of student achievement; efficient for large group scoring
  • Disadvantages: does not provide detailed information; not diagnostic; may be difficult for scorers to decide on one overall score
  • Use a holistic rubric when you want a quick snapshot of achievement, or when a single dimension is adequate to define quality.

2. Why use a rubric?

  • A rubric creates a common framework and language for assessment.
  • Complex products or behaviors can be examined efficiently.
  • Well-trained reviewers apply the same criteria and standards.
  • Rubrics are criterion-referenced, rather than norm-referenced. Raters ask, “Did the student meet the criteria for level 5 of the rubric?” rather than “How well did this student do compared to other students?”
  • Using rubrics can lead to substantive conversations among faculty.
  • When faculty members collaborate to develop a rubric, it promotes shared expectations and grading practices.

Faculty members can use rubrics for program assessment. Examples:

The English Department collected essays from students in all sections of English 100. A random sample of essays was selected. A team of faculty members evaluated the essays by applying an analytic scoring rubric. Before applying the rubric, they “normed”–that is, they agreed on how to apply the rubric by scoring the same set of essays and discussing them until consensus was reached (see below: “6. Scoring rubric group orientation and calibration”).

Biology laboratory instructors agreed to use a “Biology Lab Report Rubric” to grade students’ lab reports in all Biology lab sections, from 100- to 400-level. At the beginning of each semester, instructors met and discussed sample lab reports. They agreed on how to apply the rubric and their expectations for an “A,” “B,” “C,” etc., report in 100-level, 200-level, and 300- and 400-level lab sections. Every other year, a random sample of students’ lab reports is selected from 300- and 400-level sections. Each of those reports is then scored by a Biology professor. The score given by the course instructor is compared to the score given by the Biology professor. In addition, the scores are reported as part of the program’s assessment report. In this way, the program determines how well it is meeting its outcome, “Students will be able to write biology laboratory reports.”

3. What are the parts of a rubric?

Rubrics are composed of four basic parts. In its simplest form, the rubric includes:

  • A task description. The outcome being assessed or the instructions students received for an assignment.
  • The characteristics to be rated (rows). The skills, knowledge, and/or behavior to be demonstrated.
  • The levels of mastery/scale (columns). Common scale labels include:
      • Beginning, approaching, meeting, exceeding
      • Emerging, developing, proficient, exemplary
      • Novice, intermediate, intermediate high, advanced
      • Beginning, striving, succeeding, soaring
  • A description of each level of mastery for each characteristic (cells). Also called a “performance description.” Explains what a student will have done to demonstrate they are at a given level of mastery for a given characteristic.

4. Developing a rubric

Step 1: Identify what you want to assess

Step 2: Identify the characteristics to be rated (rows). These are also called “dimensions.”

  • Specify the skills, knowledge, and/or behaviors that you will be looking for.
  • Limit the characteristics to those that are most important to the assessment.

Step 3: Identify the levels of mastery/scale (columns).

Tip: Aim for an even number (4 or 6) because when an odd number is used, the middle tends to become the “catch-all” category.

Step 4: Describe each level of mastery for each characteristic/dimension (cells).

  • Describe the best work you could expect using these characteristics. This describes the top category.
  • Describe an unacceptable product. This describes the lowest category.
  • Develop descriptions of intermediate-level products for intermediate categories.
Important: Each description and each characteristic should be mutually exclusive.

Step 5: Test rubric.

  • Apply the rubric to an assignment.
  • Share with colleagues.
Tip: Faculty members often find it useful to establish the minimum score needed for the student work to be deemed passable. For example, faculty members may decide that a “1” or “2” on a 4-point scale (4=exemplary, 3=proficient, 2=marginal, 1=unacceptable) does not meet the minimum quality expectations. We encourage a standard setting session to set the score needed to meet expectations (also called a “cutscore”). Monica has posted materials from standard setting workshops, one offered on campus and the other at a national conference (includes speaker notes with the presentation slides). Faculty members may then set their criterion for success, for example, that 90% of the students must score 3 or higher. If assessment study results fall short, action will need to be taken.
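A minimal sketch of checking such a criterion for success (the scores below are hypothetical; the cutscore of 3 and the 90% target follow the example above):

```python
# Check a criterion for success: at least 90% of students must score at or above
# the cutscore (3 on a 4-point scale). Scores below are hypothetical.

scores = [4, 3, 3, 2, 4, 3, 4, 3, 3, 4, 2, 3, 4, 4, 3, 3, 4, 3, 3, 4]
cutscore = 3
target = 0.90

proportion = sum(score >= cutscore for score in scores) / len(scores)
print(f"{proportion:.0%} of students met the cutscore of {cutscore}")

if proportion >= target:
    print("Criterion for success met.")
else:
    print("Criterion not met; action will need to be taken.")
```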

Step 6: Discuss with colleagues. Review feedback and revise.

Important: When developing a rubric for program assessment, enlist the help of colleagues. Rubrics promote shared expectations and consistent grading practices which benefit faculty members and students in the program.

5. Sample rubrics

Rubrics are on our Rubric Bank page and in our Rubric Repository (Graduate Degree Programs) . More are available at the Assessment and Curriculum Support Center in Crawford Hall (hard copy).

These open as Word documents and are examples from outside UH.

  • Group Participation (analytic rubric)
  • Participation (holistic rubric)
  • Design Project (analytic rubric)
  • Critical Thinking (analytic rubric)
  • Media and Design Elements (analytic rubric; portfolio)
  • Writing (holistic rubric; portfolio)

6. Scoring rubric group orientation and calibration

When using a rubric for program assessment purposes, faculty members apply the rubric to pieces of student work (e.g., reports, oral presentations, design projects). To produce dependable scores, each faculty member needs to interpret the rubric in the same way. The process of training faculty members to apply the rubric is called “norming.” It’s a way to calibrate the faculty members so that scores are accurate and consistent across the faculty. Below are directions for an assessment coordinator carrying out this process.

Suggested materials for a scoring session:

  • Copies of the rubric
  • Copies of the “anchors”: pieces of student work that illustrate each level of mastery. Suggestion: have 6 anchor pieces (2 low, 2 middle, 2 high)
  • Score sheets
  • Extra pens, tape, post-its, paper clips, stapler, rubber bands, etc.

Hold the scoring session in a room that:

  • Allows the scorers to spread out as they rate the student pieces
  • Has a chalk or white board, smart board, or flip chart

During the session:

  • Describe the purpose of the activity, stressing how it fits into program assessment plans. Explain that the purpose is to assess the program, not individual students or faculty, and describe ethical guidelines, including respect for confidentiality and privacy.
  • Describe the nature of the products that will be reviewed, briefly summarizing how they were obtained.
  • Describe the scoring rubric and its categories. Explain how it was developed.
  • Analytic: Explain that readers should rate each dimension of an analytic rubric separately, and they should apply the criteria without concern for how often each score (level of mastery) is used. Holistic: Explain that readers should assign the score or level of mastery that best describes the whole piece; some aspects of the piece may not appear in that score and that is okay. They should apply the criteria without concern for how often each score is used.
  • Give each scorer a copy of several student products that are exemplars of different levels of performance. Ask each scorer to independently apply the rubric to each of these products, writing their ratings on a scrap sheet of paper.
  • Once everyone is done, collect everyone’s ratings and display them so everyone can see the degree of agreement. This is often done on a blackboard, with each person in turn announcing his/her ratings as they are entered on the board. Alternatively, the facilitator could ask raters to raise their hands when their rating category is announced, making the extent of agreement very clear to everyone and making it very easy to identify raters who routinely give unusually high or low ratings.
  • Guide the group in a discussion of their ratings. There will be differences. This discussion is important to establish standards. Attempt to reach consensus on the most appropriate rating for each of the products being examined by inviting people who gave different ratings to explain their judgments. Raters should be encouraged to explain by making explicit references to the rubric. Usually consensus is possible, but sometimes a split decision is developed, e.g., the group may agree that a product is a “3-4” split because it has elements of both categories. This is usually not a problem. You might allow the group to revise the rubric to clarify its use but avoid allowing the group to drift away from the rubric and learning outcome(s) being assessed.
  • Once the group is comfortable with how the rubric is applied, the rating begins. Explain how to record ratings using the score sheet and explain the procedures. Reviewers begin scoring.

After scoring, debrief with questions such as the following (a simple agreement check is sketched after this list):

  • Are results sufficiently reliable?
  • What do the results mean? Are we satisfied with the extent of students’ learning?
  • Who needs to know the results?
  • What are the implications of the results for curriculum, pedagogy, or student support services?
  • How might the assessment process, itself, be improved?
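A minimal sketch of quantifying the degree of agreement reached during calibration (hypothetical ratings; this uses the simple exact-agreement rate rather than a formal interrater statistic):

```python
# During calibration, compare two raters' scores on the same anchor pieces and
# report the exact-agreement rate (hypothetical ratings on a 4-point rubric).

rater_a = [3, 2, 4, 3, 1, 4]
rater_b = [3, 2, 3, 3, 1, 4]

matches = sum(a == b for a, b in zip(rater_a, rater_b))
agreement = matches / len(rater_a)
print(f"Exact agreement: {agreement:.0%} ({matches} of {len(rater_a)} anchor pieces)")

# Disagreements are the pieces worth discussing against the rubric language.
for i, (a, b) in enumerate(zip(rater_a, rater_b), start=1):
    if a != b:
        print(f"Anchor {i}: rater A gave {a}, rater B gave {b} -- discuss")
```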

7. Suggestions for using rubrics in courses

  • Use the rubric to grade student work. Hand out the rubric with the assignment so students will know your expectations and how they’ll be graded. This should help students master your learning outcomes by guiding their work in appropriate directions.
  • Use a rubric for grading student work and return the rubric with the grading on it. Faculty save time writing extensive comments; they just circle or highlight relevant segments of the rubric. Some faculty members include room for additional comments on the rubric page, either within each section or at the end.
  • Develop a rubric with your students for an assignment or group project. Students can then monitor themselves and their peers using agreed-upon criteria that they helped develop. Many faculty members find that students will create higher standards for themselves than faculty members would impose on them.
  • Have students apply your rubric to sample products before they create their own. Faculty members report that students are quite accurate when doing this, and this process should help them evaluate their own projects as they are being developed. The ability to evaluate, edit, and improve draft documents is an important skill.
  • Have students exchange paper drafts and give peer feedback using the rubric. Then, give students a few days to revise before submitting the final draft to you. You might also require that they turn in the draft and peer-scored rubric with their final paper.
  • Have students self-assess their products using the rubric and hand in their self-assessment with the product; then, faculty members and students can compare self- and faculty-generated evaluations.
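
To make the self-assessment comparison in the last suggestion concrete, here is a minimal, hypothetical sketch that lines up one student’s self-ratings against the instructor’s ratings dimension by dimension; the dimension names, 1-4 scale, and scores are invented for illustration and are not drawn from any particular rubric.

```python
# Hypothetical 1-4 analytic rubric scores for a single paper.
self_scores = {"Thesis": 4, "Evidence": 3, "Organization": 4, "Mechanics": 3}
faculty_scores = {"Thesis": 3, "Evidence": 3, "Organization": 2, "Mechanics": 3}

# A positive gap means the student rated the work higher than the instructor did.
for dimension, self_score in self_scores.items():
    gap = self_score - faculty_scores[dimension]
    flag = "  <-- discuss with student" if abs(gap) >= 2 else ""
    print(f"{dimension}: self {self_score}, faculty {faculty_scores[dimension]}, gap {gap:+d}{flag}")
```

Dimensions with large gaps are natural prompts for a brief conference or a short reflective comment on the final draft.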

8. Equity-minded considerations for rubric development

Ensure transparency by making rubric criteria public, explicit, and accessible

Transparency is a core tenet of equity-minded assessment practice. Students should know and understand how they are being evaluated as early as possible.

  • Ensure the rubric is publicly available and easily accessible. We recommend publishing it on your program or department website.
  • Have course instructors introduce and use the program rubric in their own courses. Instructors should explain to students the connections between the rubric criteria and the course and program student learning outcomes (SLOs).
  • Write rubric criteria using student-focused and culturally relevant language to ensure students understand the rubric’s purpose, the expectations it sets, and how criteria will be applied in assessing their work.
  • For example, instructors can provide annotated examples of student work using the rubric language as a resource for students.

Meaningfully involve students and engage multiple perspectives

Rubrics created by faculty alone risk perpetuating unseen biases as the evaluation criteria used will inherently reflect faculty perspectives, values, and assumptions. Including students and other stakeholders in developing criteria helps to ensure performance expectations are aligned between faculty, students, and community members. Additional perspectives to be engaged might include community members, alumni, co-curricular faculty/staff, field supervisors, potential employers, or current professionals. Consider the following strategies to meaningfully involve students and engage multiple perspectives:

  • Have students read each evaluation criterion and talk out loud about what they think it means. This will allow you to identify which language is clear and where there is still confusion.
  • Ask students to use their own language to interpret the rubric and provide a student version of the rubric.
  • If you use this strategy, it is essential to create an inclusive environment where students and faculty have equal opportunity to provide input.
  • Be sure to incorporate feedback from faculty and instructors who teach different courses, levels, and sub-disciplinary topics. Faculty and instructors who teach introductory courses have valuable experiences and perspectives that may differ from those who teach higher-level courses.
  • Engage multiple perspectives including co-curricular faculty/staff, alumni, potential employers, and community members for feedback on evaluation criteria and rubric language. This will ensure evaluation criteria reflect what is important for all stakeholders.
  • Elevate historically silenced voices in discussions on rubric development. Ensure stakeholders from historically underrepresented communities have their voices heard and valued.

Honor students’ strengths in performance descriptions

When describing students’ performance at different levels of mastery, use language that describes what students can do rather than what they cannot do. For example:

  • Instead of: Students cannot make coherent arguments consistently.
  • Use: Students can make coherent arguments occasionally.

9. Tips for developing a rubric

  • Find and adapt an existing rubric! It is rare to find a rubric that is exactly right for your situation, but you can adapt an already existing rubric that has worked well for others and save a great deal of time. A faculty member in your program may already have a good one.
  • Evaluate the rubric. Ask yourself: A) Does the rubric relate to the outcome(s) being assessed? (If yes, success!) B) Does it address anything extraneous? (If yes, delete.) C) Is the rubric useful, feasible, manageable, and practical? (If yes, find multiple ways to use the rubric: program assessment, assignment grading, peer review, student self-assessment.)
  • Collect samples of student work that exemplify each point on the scale or level. A rubric will not be meaningful to students or colleagues until the anchors/benchmarks/exemplars are available.
  • Expect to revise.
  • When you have a good rubric, SHARE IT!
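
One practical way to act on the “adapt,” “revise,” and “share” tips above is to keep the rubric as simple structured data rather than only as a formatted table, so colleagues can copy, tweak, and redistribute it easily. The sketch below shows one hypothetical way to do that in Python; the outcome, criteria, and level names are placeholders, not a recommended rubric.

```python
import json

# Hypothetical analytic rubric stored as plain data so it can be adapted,
# versioned, and shared, and so blank score sheets can be generated from it.
rubric = {
    "outcome": "Written communication",
    "levels": ["Beginning", "Developing", "Proficient", "Exemplary"],
    "criteria": {
        "Thesis": "States a clear, arguable thesis appropriate to the assignment.",
        "Evidence": "Supports claims with relevant, credible, cited sources.",
        "Organization": "Sequences ideas logically with effective transitions.",
    },
}

# Save as JSON so the rubric is easy to share and revise over time.
with open("written_communication_rubric.json", "w") as handle:
    json.dump(rubric, handle, indent=2)

# Print a blank score sheet: one line per criterion, to be filled with a level.
for criterion, descriptor in rubric["criteria"].items():
    print(f"{criterion} [{' / '.join(rubric['levels'])}]: ______  ({descriptor})")
```

Keeping the rubric content separate from its presentation also makes it easy to render the same criteria as a handout, a grading sheet, or an online form.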

10. Additional resources & sources consulted

Rubric examples:

  • Rubrics primarily for undergraduate outcomes and programs
  • Rubric repository for graduate degree programs

Workshop presentation slides and handouts:

  • Workshop handout (Word document)
  • How to Use a Rubric for Program Assessment (2010)
  • Techniques for Using Rubrics in Program Assessment by guest speaker Dannelle Stevens (2010)
  • Rubrics: Save Grading Time & Engage Students in Learning by guest speaker Dannelle Stevens (2009)
  • Rubric Library, Institutional Research, Assessment & Planning, California State University-Fresno
  • The Basics of Rubrics [PDF], Schreyer Institute, Penn State
  • Creating Rubrics, Teaching Methods and Management, TeacherVision
  • Allen, Mary – University of Hawai’i at Manoa Spring 2008 Assessment Workshops, May 13-14, 2008 [available at the Assessment and Curriculum Support Center]
  • Mertler, Craig A. (2001). Designing scoring rubrics for your classroom. Practical Assessment, Research & Evaluation, 7(25).
  • NPEC Sourcebook on Assessment: Definitions and Assessment Methods for Communication, Leadership, Information Literacy, Quantitative Reasoning, and Quantitative Skills [PDF] (June 2005)

Contributors: Monica Stitt-Bergh, Ph.D., TJ Buckley, Yao Z. Hill Ph.D.
