Random vs. Systematic Error | Definition & Examples

Published on May 7, 2021 by Pritha Bhandari. Revised on June 22, 2023.

In scientific research, measurement error is the difference between an observed value and the true value of something. It’s also called observation error or experimental error.

There are two main types of measurement error:

  • Random error is a chance difference between the observed and true values of something (e.g., a researcher misreading a weighing scale records an incorrect measurement).

  • Systematic error is a consistent or proportional difference between the observed and true values of something (e.g., a miscalibrated scale consistently registers weights as higher than they actually are).

By recognizing the sources of error, you can reduce their impact and record accurate and precise measurements. If they go unnoticed, these errors can lead to research biases like omitted variable bias or information bias.

Table of contents

  • Are random or systematic errors worse?
  • Random error
  • Reducing random error
  • Systematic error
  • Reducing systematic error
  • Other interesting articles
  • Frequently asked questions about random and systematic error

Are random or systematic errors worse?

In research, systematic errors are generally a bigger problem than random errors.

Random error isn’t necessarily a mistake, but rather a natural part of measurement. There is always some variability in measurements, even when you measure the same thing repeatedly, because of fluctuations in the environment, the instrument, or your own interpretations.

But variability can be a problem when it affects your ability to draw valid conclusions about relationships between variables . This is more likely to occur as a result of systematic error.

Precision vs accuracy

Random error mainly affects precision , which is how reproducible the same measurement is under equivalent circumstances. In contrast, systematic error affects the accuracy of a measurement, or how close the observed value is to the true value.

Taking measurements is similar to hitting a central target on a dartboard. For accurate measurements, you aim to get your dart (your observations) as close to the target (the true values) as you possibly can. For precise measurements, you aim to get repeated observations as close to each other as possible.

Random error introduces variability between different measurements of the same thing, while systematic error skews your measurement away from the true value in a specific direction.

[Figure: Precision vs accuracy]

When you only have random error, if you measure the same thing multiple times, your measurements will tend to cluster or vary around the true value. Some values will be higher than the true score, while others will be lower. When you average out these measurements, you’ll get very close to the true score.

For this reason, random error isn’t considered a big problem when you’re collecting data from a large sample—the errors in different directions will cancel each other out when you calculate descriptive statistics . But it could affect the precision of your dataset when you have a small sample.
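To see how this cancellation works in practice, here is a minimal Python sketch; the true value, noise level, and sample sizes are arbitrary assumptions chosen only for illustration.

    import random

    TRUE_VALUE = 70.0   # hypothetical true weight in kilograms
    NOISE_SD = 0.5      # spread of the purely random (zero-mean) error

    def measure():
        # One measurement = true value + random error.
        return random.gauss(TRUE_VALUE, NOISE_SD)

    for n in (5, 50, 5000):
        sample = [measure() for _ in range(n)]
        mean = sum(sample) / n
        print(f"n = {n:5d}  sample mean = {mean:.3f}  off by {abs(mean - TRUE_VALUE):.3f}")

With only random (zero-mean) error, the sample mean drifts closer to the true value as the sample grows; a systematic error (for example, a constant bias added inside measure()) would not shrink this way.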

Systematic errors are much more problematic than random errors because they can skew your data to lead you to false conclusions. If you have systematic error, your measurements will be biased away from the true values. Ultimately, you might make a false positive or a false negative conclusion (a Type I or II error ) about the relationship between the variables you’re studying.

Random error

Random error affects your measurements in unpredictable ways: your measurements are equally likely to be higher or lower than the true values.

In the graph below, the black line represents a perfect match between the true scores and observed scores of a scale. In an ideal world, all of your data would fall on exactly that line. The green dots represent the actual observed scores for each measurement with random error added.

[Figure: Random error]

Random error is referred to as “noise”, because it blurs the true value (or the “signal”) of what’s being measured. Keeping random error low helps you collect precise data.

Sources of random errors

Some common sources of random error include:

  • natural variations in real world or experimental contexts.
  • imprecise or unreliable measurement instruments.
  • individual differences between participants or units.
  • poorly controlled experimental procedures.
Some examples of random error sources:

  • Natural variations in context: In a study about memory capacity, your participants are scheduled for memory tests at different times of day. However, some participants tend to perform better in the morning while others perform better later in the day, so your measurements do not reflect the true extent of memory capacity for each individual.
  • Imprecise instrument: You measure wrist circumference using a tape measure. But your tape measure is only accurate to the nearest half-centimeter, so you round each measurement up or down when you record data.
  • Individual differences: You ask participants to administer a safe electric shock to themselves and rate their pain level on a 7-point rating scale. Because pain is subjective, it's hard to reliably measure. Some participants overstate their levels of pain, while others understate their levels of pain.

Reducing random error

Random error is almost always present in research, even in highly controlled settings. While you can't eradicate it completely, you can reduce random error using the following methods.

Take repeated measurements

A simple way to increase precision is by taking repeated measurements and using their average. For example, you might measure the wrist circumference of a participant three times and get slightly different lengths each time. Taking the mean of the three measurements, instead of using just one, brings you much closer to the true value.

Increase your sample size

Large samples have less random error than small samples. That’s because the errors in different directions cancel each other out more efficiently when you have more data points. Collecting data from a large sample increases precision and statistical power .

Control variables

In controlled experiments , you should carefully control any extraneous variables that could impact your measurements. These should be controlled for all participants so that you remove key sources of random error across the board.

Systematic error

Systematic error means that your measurements of the same thing will vary in predictable ways: every measurement will differ from the true measurement in the same direction, and even by the same amount in some cases.

Systematic error is also referred to as bias because your data is skewed in standardized ways that hide the true values. This may lead to inaccurate conclusions.

Types of systematic errors

Offset errors and scale factor errors are two quantifiable types of systematic error.

An offset error occurs when a scale isn’t calibrated to a correct zero point. It’s also called an additive error or a zero-setting error.

A scale factor error is when measurements consistently differ from the true value proportionally (e.g., by 10%). It’s also referred to as a correlational systematic error or a multiplier error.

You can plot offset errors and scale factor errors in graphs to identify their differences. In the graphs below, the black line shows when your observed value is the exact true value, and there is no random error.

The blue line is an offset error: it shifts all of your observed values upwards or downwards by a fixed amount (here, it’s one additional unit).

The purple line is a scale factor error: all of your observed values are multiplied by a factor—all values are shifted in the same direction by the same proportion, but by different absolute amounts.

[Figure: Systematic error]
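The two error types can be written as simple measurement models. The Python sketch below is illustrative only; the offset of one unit and the 10% scale factor mirror the examples described above.

    def observe_with_offset_error(true_value, offset=1.0):
        # Offset (additive / zero-setting) error: every reading is shifted by a fixed amount.
        return true_value + offset

    def observe_with_scale_factor_error(true_value, factor=1.10):
        # Scale factor (multiplier) error: every reading is off by the same proportion.
        return true_value * factor

    for true_value in (2.0, 10.0, 50.0):
        print(true_value,
              observe_with_offset_error(true_value),
              observe_with_scale_factor_error(true_value))

The offset error shifts every reading by the same absolute amount, while the scale factor error shifts each reading by the same proportion and therefore by a different absolute amount.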

Sources of systematic errors

The sources of systematic error can range from your research materials to your data collection procedures and to your analysis techniques. This isn’t an exhaustive list of systematic error sources, because they can come from all aspects of research.

Response bias occurs when your research materials (e.g., questionnaires) prompt participants to answer or act in inauthentic ways through leading questions. For example, social desirability bias can lead participants to try to conform to societal norms, even if that's not how they truly feel.

A leading survey question might state: "Experts believe that only systematic actions can reduce the effects of climate change. Do you agree that individual actions are pointless?"

Experimenter drift occurs when observers become fatigued, bored, or less motivated after long periods of data collection or coding, and they slowly depart from using standardized procedures in identifiable ways.

Initially, you code all subtle and obvious behaviors that fit your criteria as cooperative. But after spending days on this task, you only code extremely obviously helpful actions as cooperative.

Sampling bias occurs when some members of a population are more likely to be included in your study than others. It reduces the generalizability of your findings, because your sample isn’t representative of the whole population.

Reducing systematic error

You can reduce systematic errors by implementing these methods in your study.

Triangulation

Triangulation means using multiple techniques to record observations so that you’re not relying on only one instrument or method.

For example, if you’re measuring stress levels, you can use survey responses, physiological recordings, and reaction times as indicators. You can check whether all three of these measurements converge or overlap to make sure that your results don’t depend on the exact instrument used.

Regular calibration

Calibrating an instrument means comparing what the instrument records with the true value of a known, standard quantity. Regularly calibrating your instrument with an accurate reference helps reduce the likelihood of systematic errors affecting your study.

You can also calibrate observers or researchers in terms of how they code or record data. Use standard protocols and routine checks to avoid experimenter drift.

Randomization

Probability sampling methods help ensure that your sample doesn’t systematically differ from the population.

In addition, if you’re doing an experiment, use random assignment to place participants into different treatment conditions. This helps counter bias by balancing participant characteristics across groups.

Wherever possible, you should hide the condition assignment from participants and researchers through masking (blinding) .

Participants’ behaviors or responses can be influenced by experimenter expectancies and demand characteristics in the environment, so controlling these will help you reduce systematic bias.

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Ecological validity

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

Frequently asked questions about random and systematic error

Random and systematic error are two types of measurement error.

Systematic error is a consistent or proportional difference between the observed and true values of something (e.g., a miscalibrated scale consistently records weights as higher than they actually are).

Systematic error is generally a bigger problem in research.

With random error, multiple measurements will tend to cluster around the true value. When you're collecting data from a large sample, the errors in different directions will cancel each other out.

Systematic errors are much more problematic because they can skew your data away from the true value. This can lead you to false conclusions (Type I and II errors) about the relationship between the variables you're studying.

Random error is almost always present in scientific studies, even in highly controlled settings. While you can't eradicate it completely, you can reduce random error by taking repeated measurements, using a large sample, and controlling extraneous variables.

You can avoid systematic error through careful design of your sampling, data collection, and analysis procedures. For example, use triangulation to measure your variables using multiple methods; regularly calibrate instruments or procedures; use random sampling and random assignment; and apply masking (blinding) where possible.


How to Calculate Experimental Error in Chemistry

Error is a measure of accuracy of the values in your experiment. It is important to be able to calculate experimental error, but there is more than one way to calculate and express it. Here are the most common ways to calculate experimental error:

Error Formula

In general, error is the difference between an accepted or theoretical value and an experimental value.

Error = Experimental Value - Known Value

Relative Error Formula

Relative Error = Error / Known Value

Percent Error Formula

% Error = Relative Error x 100%

Example Error Calculations

Let's say a researcher measures the mass of a sample to be 5.51 grams. The actual mass of the sample is known to be 5.80 grams. Calculate the error of the measurement.

Experimental Value = 5.51 grams Known Value = 5.80 grams

Error = Experimental Value - Known Value = 5.51 g - 5.80 g = -0.29 g

Relative Error = Error / Known Value = -0.29 g / 5.80 g = -0.050

% Error = Relative Error x 100% = -0.050 x 100% = -5.0%
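These three formulas translate directly into code. Here is a minimal Python sketch that reproduces the worked example; the function names are ours, not from the article.

    def error(experimental, known):
        return experimental - known

    def relative_error(experimental, known):
        return error(experimental, known) / known

    def percent_error(experimental, known):
        return relative_error(experimental, known) * 100

    experimental_mass = 5.51  # grams
    known_mass = 5.80         # grams

    print(f"Error:          {error(experimental_mass, known_mass):+.2f} g")      # -0.29 g
    print(f"Relative error: {relative_error(experimental_mass, known_mass):+.3f}")  # -0.050
    print(f"Percent error:  {percent_error(experimental_mass, known_mass):+.1f}%")  # -5.0%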


What is an Experimental Error?


Experimental error is the difference between a measured value and its actual value. In other words, inaccuracies stop us from seeing a correct measurement.

Experimental error is prevalent and is, to some degree, inherent in every measurement. However, it is not usually seen as a ‘mistake’ in the traditional sense because a degree of error is perceived as part and parcel of the scientific process.

However, by accepting and understanding how experimental error can impact every scientific procedure, scientists can reduce inaccuracy and acquire results closer to the truth.

There are several reasons why errors might occur in an experiment, and these can be divided into subcategories: systematic errors, random errors, and blunders.

Systematic errors

These errors tend to be caused by the process, and their reason can usually be identified. Here are four significant types of systematic errors:

  • Instrumental – When the tool you are measuring with provides incorrect results, e.g., the fluid in a thermometer does not correctly represent the water temperature.
  • Observational – When the measurement is consistently misread, e.g., a researcher records the water in a measuring cup from above, and the angle obscures the actual height of the water in the cup.
  • Environmental – When the lab's surroundings unintentionally influence the test results, e.g., the heat in the laboratory is always too high, causing water to evaporate from a Petri dish at a higher-than-normal rate.
  • Theoretical – When the model used to calculate data creates inaccurate results, e.g., a formula for working out gravity's influence on acceleration is used, but the procedure does not factor in the effect of air resistance on acceleration.

Random errors

These errors are caused by unforeseeable and unknown factors surrounding the experiment. They often result in random fluctuations in data sets but can be identified or estimated through statistical analysis.

  • Observational  – When a researcher randomly takes an inaccurate reading, e.g., the researcher notes the volume of liquid to the minor division but occasionally determines the wrong number of milliliters.
  • Environmental  – When there are unforeseeable conditions surrounding the experiment, e.g., it’s a very wet day, affecting the humidity in the lab where an investigation with organic materials is being conducted.

Blunders

These mistakes happen so infrequently that they are not considered random errors. However, a blunder will usually be quite evident in a data set because it will appear as a distinct anomaly.

  • A Blunder  – An outright mistake, e.g., a scientist not sealing the lid of a container properly and allowing gas to escape.

Chapter 3: Experimental Errors and Error Analysis

This chapter is largely a tutorial on handling experimental errors of measurement. Much of the material has been extensively tested with science undergraduates at a variety of levels at the University of Toronto.

Whole books can and have been written on this topic but here we distill the topic down to the essentials. Nonetheless, our experience is that for beginners an iterative approach to this material works best. This means that the users first scan the material in this chapter; then try to use the material on their own experiment; then go over the material again; then ...

The accompanying software package provides functions to ease the calculations required by propagation of errors, and those functions are introduced in Section 3.3. These error propagation functions are summarized in Section 3.5.

3.1 Introduction

3.1.1 The Purpose of Error Analysis

For students who only attend lectures and read textbooks in the sciences, it is easy to get the incorrect impression that the physical sciences are concerned with manipulating precise and perfect numbers. Lectures and textbooks often contain phrases like: "The acceleration due to gravity is 9.8 m/s²."

For an experimental scientist this specification is incomplete. Does it mean that the acceleration is closer to 9.8 than to 9.9 or 9.7? Does it mean that the acceleration is closer to 9.80000 than to 9.80001 or 9.79999? Often the answer depends on the context. If a carpenter says a length is "just 8 inches" that probably means the length is closer to 8 0/16 in. than to 8 1/16 in. or 7 15/16 in. If a machinist says a length is "just 200 millimeters" that probably means it is closer to 200.00 mm than to 200.05 mm or 199.95 mm.

We all know that the acceleration due to gravity varies from place to place on the earth's surface. It also varies with the height above the surface, and gravity meters capable of measuring the variation from the floor to a tabletop are readily available. Further, any physical quantity such as g can only be determined by means of an experiment, and since a perfect experimental apparatus does not exist, it is impossible even in principle to ever know g perfectly. Thus, the specification of g given above is useful only as a possible exercise for a student. In order to give it some meaning it must be changed to a measured value with a stated uncertainty, something like g = (9.80 ± 0.03) m/s².

Two questions arise about the measurement. First, is it "accurate," in other words, did the experiment work properly and were all the necessary factors taken into account? The answer to this depends on the skill of the experimenter in identifying and eliminating all systematic errors. These are discussed in Section 3.4.

The second question regards the "precision" of the experiment. In this case the precision of the result is given: the experimenter claims the precision of the result is within 0.03 m/s². The following points about quoted precision are worth keeping in mind:

1. The person who did the measurement probably had some "gut feeling" for the precision and "hung" an error on the result primarily to communicate this feeling to other people. Common sense should always take precedence over mathematical manipulations.

2. In complicated experiments, error analysis can identify dominant errors and hence provide a guide as to where more effort is needed to improve an experiment.

3. There is virtually no case in the experimental physical sciences where the correct error analysis is to compare the result with a number in some book. A correct experiment is one that is performed correctly, not one that gives a result in agreement with other measurements.

4. The best precision possible for a given experiment is always limited by the apparatus. Polarization measurements in high-energy physics require tens of thousands of person-hours and cost hundreds of thousands of dollars to perform, and a good measurement is within a factor of two. Electrodynamics experiments are considerably cheaper, and often give results to 8 or more significant figures. In both cases, the experimenter must struggle with the equipment to get the most precise and accurate measurement possible.

3.1.2 Different Types of Errors

As mentioned above, there are two types of errors associated with an experimental result: the "precision" and the "accuracy". One well-known text, E.M. Pugh and G.H. Winslow (p. 6), draws the same distinction between the two.

The object of a good experiment is to minimize both the errors of precision and the errors of accuracy.

Usually, a given experiment has one or the other type of error dominant, and the experimenter devotes the most effort toward reducing that one. For example, in measuring the height of a sample of geraniums to determine an average value, the random variations within the sample of plants are probably going to be much larger than any possible inaccuracy in the ruler being used. Similarly for many experiments in the biological and life sciences, the experimenter worries most about increasing the precision of his/her measurements. Of course, some experiments in the biological and life sciences are dominated by errors of accuracy.

On the other hand, in titrating a sample of HCl acid with NaOH base using a phenolphthalein indicator, the major error in the determination of the original concentration of the acid is likely to be one of the following: (1) the accuracy of the markings on the side of the burette; (2) the transition range of the phenolphthalein indicator; or (3) the skill of the experimenter in splitting the last drop of NaOH. Thus, the accuracy of the determination is likely to be much worse than the precision. This is often the case for experiments in chemistry, but certainly not all.

Question: Most experiments use theoretical formulas, and usually those formulas are approximations. Is the error of approximation one of precision or of accuracy?

3.1.3 References

There is extensive literature on the topics in this chapter. The following lists some well-known introductions.

D.C. Baird, Experimentation: An Introduction to Measurement Theory and Experiment Design (Prentice-Hall, 1962)

E.M. Pugh and G.H. Winslow, The Analysis of Physical Measurements (Addison-Wesley, 1966)

J.R. Taylor, An Introduction to Error Analysis (University Science Books, 1982)

In addition, there is a web document written by the author of this chapter that is used to teach this topic to first-year Physics undergraduates at the University of Toronto.

3.2 Determining the Precision

3.2.1 The Standard Deviation

In the nineteenth century, Gauss' assistants were doing astronomical measurements. However, they were never able to exactly repeat their results. Finally, Gauss got angry and stormed into the lab, claiming he would show these people how to do the measurements once and for all. The only problem was that Gauss wasn't able to repeat his measurements exactly either!

After he recovered his composure, Gauss made a histogram of the results of a particular measurement and discovered the famous Gaussian or bell-shaped curve.

Many people's first introduction to this shape is the grade distribution for a course. Here is a sample of such a distribution.

We use a standard package to generate the probability distribution function (PDF) of such a "Gaussian" or "normal" distribution. The mean is chosen to be 78 and the standard deviation is chosen to be 10; both the mean and standard deviation are defined below.

We then normalize the distribution so the maximum value is close to the maximum number in the histogram and plot the result.

Finally, we look at the histogram and the fitted curve plotted together.

We can see the functional form of the Gaussian distribution by giving the parameters symbolic values:

f(x) = (1 / (σ sqrt(2 π))) exp( -(x - μ)² / (2 σ²) )

In this formula, the quantity μ is the mean and σ is the standard deviation; σ² is called the variance. The definition of σ is as follows:

σ = sqrt( (1/N) Σ (x_i - μ)² )

Here N is the total number of measurements and x_i is the result of measurement number i.

The standard deviation is a measure of the width of the peak, meaning that a larger value of σ gives a wider peak.

If we look at the area under the curve from μ - σ to μ + σ, we find that this area is 68 percent of the total area. Thus, any result chosen at random has a 68% chance of being within one standard deviation of the mean. We can show this by evaluating the integral. For convenience, we choose the mean to be zero.

Now, we evaluate this numerically and multiply by 100 to find the percent.
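The 68% figure (and the 95% and 99.7% figures used later for two and three standard deviations) can also be checked with a few lines of Python, using the standard normal cumulative distribution function expressed through math.erf. This is an illustrative sketch, not part of the original chapter.

    import math

    def normal_cdf(z):
        # Cumulative distribution function of the standard normal distribution.
        return 0.5 * (1 + math.erf(z / math.sqrt(2)))

    for k in (1, 2, 3):
        prob = normal_cdf(k) - normal_cdf(-k)
        print(f"within {k} standard deviation(s): {100 * prob:.1f}%")
    # prints roughly 68.3%, 95.4%, 99.7%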

The only problem with the above is that the measurement must be repeated an infinite number of times before the standard deviation can be determined exactly. If N is less than infinity, one can only estimate σ. For a finite number of measurements, the best estimate is

s = sqrt( (1/(N - 1)) Σ (x_i - x̄)² )

The major difference between this estimate and the definition is the factor of N - 1 in place of N. This is reasonable since if N = 1 we know we can't determine the spread of the measurements at all.

Here is an example. Suppose we are to determine the diameter of a small cylinder using a micrometer. We repeat the measurement 10 times along various points on the cylinder and get the following results, in centimeters.

The number of measurements is the length of the list.

The average or mean is calculated to be 1.6514 cm.

Then the standard deviation is estimated to be 0.00185173 cm.

We repeat the calculation in a functional style.

Note that the standard statistics functionality that ships with the software includes functions to calculate all of these quantities and a great deal more.
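As an illustration of the estimate just described, the following Python sketch computes the mean and the N - 1 ("sample") standard deviation for a set of repeated readings. The readings below are hypothetical stand-ins, not the chapter's actual micrometer data.

    import statistics

    # Hypothetical micrometer readings in cm (illustrative only).
    diameters = [1.6515, 1.6512, 1.6516, 1.6518, 1.6513,
                 1.6514, 1.6511, 1.6517, 1.6515, 1.6513]

    n = len(diameters)
    mean = statistics.mean(diameters)
    sigma_estimate = statistics.stdev(diameters)   # uses the N - 1 (sample) definition

    print(f"N = {n}, mean = {mean:.5f} cm, estimated standard deviation = {sigma_estimate:.5f} cm")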

We close with two points:

1. The standard deviation has been associated with the error in each individual measurement. Section 3.3.2 discusses how to find the error in the estimate of the average.

2. This calculation of the standard deviation is only an estimate. In fact, we can find the expected error in the estimate itself; for normally distributed data it is approximately σ / sqrt(2(N - 1)).

As discussed in more detail in Section 3.3, this means that the true standard deviation probably lies within a range of values around the estimate.

Viewed in this way, it is clear that the last few digits in the standard deviation quoted above are not significant; adjusting the quoted figures based on the error in the estimate is discussed further in Section 3.3.1.

3.2.2 The Reading Error

There is another type of error associated with a directly measured quantity, called the "reading error". Referring again to the example of Section 3.2.1, the measurements of the diameter were performed with a micrometer. The particular micrometer used had scale divisions every 0.001 cm. However, it was possible to estimate the reading of the micrometer between the divisions, and this was done in this example. But, there is a reading error associated with this estimation. For example, the first data point is 1.6515 cm. Could it have been 1.6516 cm instead? How about 1.6519 cm? There is no fixed rule to answer the question: the person doing the measurement must guess how well he or she can read the instrument. A reasonable guess of the reading error of this micrometer might be 0.0002 cm on a good day. If the experimenter were up late the night before, the reading error might be 0.0005 cm.

An important and sometimes difficult question is whether the reading error of an instrument is "distributed randomly". Random reading errors are caused by the finite precision of the experiment. If an experimenter consistently reads the micrometer 1 cm lower than the actual value, then the reading error is not random.

For a digital instrument, the reading error is ± one-half of the last digit. Note that this assumes that the instrument has been properly engineered to round a reading correctly on the display.

3.2.3 "THE" Error

So far, we have found two different errors associated with a directly measured quantity: the standard deviation and the reading error. So, which one is the actual real error of precision in the quantity? The answer is both! However, fortunately it almost always turns out that one will be larger than the other, so the smaller of the two can be ignored.

In the diameter example being used in this section, the estimate of the standard deviation was found to be 0.00185 cm, while the reading error was only 0.0002 cm. Thus, we can use the standard deviation estimate to characterize the error in each measurement. Another way of saying the same thing is that the observed spread of values in this example is not accounted for by the reading error. If the observed spread were more or less accounted for by the reading error, it would not be necessary to estimate the standard deviation, since the reading error would be the error in each measurement.

Of course, everything in this section is related to the precision of the experiment. Discussion of the accuracy of the experiment is in Section 3.4.

3.2.4 Rejection of Measurements

Often when repeating measurements one value appears to be spurious and we would like to throw it out. Also, when taking a series of measurements, sometimes one value appears "out of line". Here we discuss some guidelines on rejection of measurements; further information appears in Chapter 7.

It is important to emphasize that the whole topic of rejection of measurements is awkward. Some scientists feel that the rejection of data is never justified unless there is independent evidence that the data in question are incorrect. Other scientists attempt to deal with this topic by using quasi-objective rules such as Chauvenet's Criterion. Still others, often incorrectly, throw out any data that appear to be incorrect. In this section, some principles and guidelines are presented; further information may be found in many references.

First, we note that it is incorrect to expect each and every measurement to overlap within errors. For example, if the error in a particular quantity is characterized by the standard deviation, we only expect 68% of the measurements from a normally distributed population to be within one standard deviation of the mean. Ninety-five percent of the measurements will be within two standard deviations, 99.7% within three standard deviations, etc., but we never expect 100% of the measurements to overlap within any finite-sized error for a truly Gaussian distribution.

Of course, for most experiments the assumption of a Gaussian distribution is only an approximation.

If the error in each measurement is taken to be the reading error, again we only expect most, not all, of the measurements to overlap within errors. In this case the meaning of "most", however, is vague and depends on the optimism/conservatism of the experimenter who assigned the error.

Thus, it is always dangerous to throw out a measurement. Maybe we are unlucky enough to make a valid measurement that lies ten standard deviations from the population mean. A valid measurement from the tails of the underlying distribution should not be thrown out. It is even more dangerous to throw out a suspect point indicative of an underlying physical process. Very little science would be known today if the experimenter always threw out measurements that didn't match preconceived expectations!

In general, there are two different types of experimental data taken in a laboratory and the question of rejecting measurements is handled in slightly different ways for each. The two types of data are the following:

1. A series of measurements taken with one or more variables changed for each data point. An example is the calibration of a thermocouple, in which the output voltage is measured when the thermocouple is at a number of different temperatures.

2. Repeated measurements of the same physical quantity, with all variables held as constant as experimentally possible. An example is the measurement of the height of a sample of geraniums grown under identical conditions from the same batch of seed stock.

For a series of measurements (case 1), when one of the data points is out of line the natural tendency is to throw it out. But, as already mentioned, this means you are assuming the result you are attempting to measure. As a rule of thumb, unless there is a physical explanation of why the suspect value is spurious and it is no more than three standard deviations away from the expected value, it should probably be kept. Chapter 7 deals further with this case.

For repeated measurements (case 2), the situation is a little different. Say you are measuring the time for a pendulum to undergo 20 oscillations and you repeat the measurement five times. Assume that four of these trials are within 0.1 seconds of each other, but the fifth trial differs from these by 1.4 seconds (i.e., more than three standard deviations away from the mean of the "good" values). There is no known reason why that one measurement differs from all the others. Nonetheless, you may be justified in throwing it out. Say that, unknown to you, just as that measurement was being taken, a gravity wave swept through your region of spacetime. However, if you are trying to measure the period of the pendulum when there are no gravity waves affecting the measurement, then throwing out that one result is reasonable. (Although trying to repeat the measurement to find the existence of gravity waves will certainly be more fun!) So whatever the reason for a suspect value, the rule of thumb is that it may be thrown out provided that fact is well documented and that the measurement is repeated a number of times more to convince the experimenter that he/she is not throwing out an important piece of data indicating a new physical process.

3.3 Propagation of Errors of Precision

3.3.1 Discussion and Examples

Usually, errors of precision are probabilistic. This means that the experimenter is saying that the actual value of some parameter is within a specified range. For example, if the half-width of the range equals one standard deviation, then the probability is about 68% that over repeated experimentation the true mean will fall within the range; if the half-width of the range is twice the standard deviation, the probability is 95%, etc.

If we have two variables, say and , and want to combine them to form a new variable, we want the error in the combination to preserve this probability.

The correct procedure to do this is to combine errors in quadrature, which is the square root of the sum of the squares; the accompanying package supplies functions that do this.

For simple combinations of data with random errors, the correct procedure can be summarized in three rules. Below, errx, erry, and errz will stand for the errors of precision in x, y, and z, respectively. We assume that x and y are independent of each other.

Note that all three rules assume that the error, say errx, is small compared to the value of x.

Rule 1. If

z = x * y

or

z = x / y

then

errz / z = sqrt( (errx / x)² + (erry / y)² )

In words, the fractional error in z is the quadrature of the fractional errors in x and y.

Rule 2. If

z = x + y

or

z = x - y

then

errz = sqrt( errx² + erry² )

In words, the error in z is the quadrature of the errors in x and y.

Rule 3. If

z = x^n

then

errz / z = |n| * errx / |x|

or equivalently

errz = |n * x^(n-1)| * errx

The accompanying package includes functions to combine data using the above rules.
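As an illustration of the three rules, here is a plain Python sketch (not the package's own functions) that represents each quantity as a (value, error) pair; the example pressure and volume pairs are hypothetical.

    import math

    def multiply(x, y):
        # Rule 1: fractional errors combine in quadrature (division is analogous).
        (xv, xe), (yv, ye) = x, y
        value = xv * yv
        return value, abs(value) * math.hypot(xe / xv, ye / yv)

    def add(x, y):
        # Rule 2: absolute errors combine in quadrature (subtraction is analogous).
        (xv, xe), (yv, ye) = x, y
        return xv + yv, math.hypot(xe, ye)

    def power(x, n):
        # Rule 3: the fractional error is |n| times the fractional error of x.
        xv, xe = x
        value = xv ** n
        return value, abs(value) * abs(n) * xe / abs(xv)

    pressure = (29.5, 0.1)     # hypothetical {value, error} pair, cm Hg
    pressure2 = (30.1, 0.1)    # hypothetical {value, error} pair, cm Hg
    volume = (11.28, 0.031)    # hypothetical {value, error} pair, arbitrary units

    print(multiply(pressure, volume))
    print(add(pressure, pressure2))
    print(power(volume, 2))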

Imagine we have pressure data, measured in centimeters of Hg, and volume data measured in arbitrary units. Each data point consists of a {value, error} pair.

We calculate the pressure times the volume.

In the above, the values of pressure and volume have been multiplied and the errors have been combined using Rule 1.

There is an equivalent form for this calculation.

Consider the first of the volume data: {11.28156820762763, 0.031}. The error means that the true value is claimed by the experimenter to probably lie between 11.25 and 11.31. Thus, all the significant figures presented to the right of 11.28 for that data point really aren't significant. The significant-figure adjustment function will adjust the volume data accordingly.

Notice that by default, the adjustment uses the two most significant digits in the error when adjusting the values. This can be controlled with an option.

For most cases, the default of two digits is reasonable. As discussed in Section 3.2.1, if we assume a normal distribution for the data, then the fractional error in the determination of the standard deviation depends only on the number of measurements N, and is approximately

1 / sqrt( 2 (N - 1) )

Thus, using this as a general rule of thumb for all errors of precision, the estimate of the error is only good to about 10% (i.e., one significant figure), unless N is greater than 51. Nonetheless, keeping two significant figures handles cases such as 0.035 vs. 0.030, where some significance may be attached to the final digit.

You should be aware that when a datum is adjusted in this way, the extra digits are dropped.

By default, the adjustment routines use a standard rounding function; this too can be controlled with an option.

The number of digits can be adjusted.
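As a quick check of the rule of thumb above, assuming the fractional error in the estimated standard deviation is about 1/sqrt(2(N - 1)), a short Python sketch:

    import math

    def fractional_error_in_sigma(n):
        # Approximate fractional uncertainty of an estimated standard deviation
        # based on n normally distributed measurements.
        return 1.0 / math.sqrt(2 * (n - 1))

    for n in (5, 10, 51, 200):
        print(n, round(100 * fractional_error_in_sigma(n), 1), "%")
    # At n = 51 the estimate is good to about 10%, i.e. roughly one significant figure.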

To form a power, say x², we might be tempted to just multiply the value by itself and combine the errors with Rule 1. However, the two factors would then be treated as independent when they are in fact perfectly correlated, so the error would be underestimated; the power rule (Rule 3) should be used instead, and the package provides a function for this.

Finally, imagine that for some reason we wish to form a more complicated combination of the data, one not covered by the three simple rules. In general, if z = f(x, y), then the error follows from the total derivative:

errz = sqrt( (∂f/∂x)² * errx² + (∂f/∂y)² * erry² )

Here is an example of solving such a combination. We shall use new symbol names below to avoid overwriting the original data symbols. First we calculate the total derivative.

Next we form the error.

Now we can evaluate this error expression using the pressure and volume data to get a list of errors.

Next we form the list of {value, error} pairs.

A convenience function combines these steps with default significant figure adjustment.

This function can be used in place of the step-by-step functions discussed above.

In this example, it will be somewhat faster.

There is a caveat in using this shortcut: the expression must contain only symbols, numerical constants, and arithmetic operations. Otherwise, the function will be unable to take the derivatives of the expression necessary to calculate the form of the error. The other functions have no such limitation.

3.3.1.1 Another Approach to Error Propagation: Data and Datum

Another approach is to wrap each {value, error} pair in a Data or Datum construct.

Data[{{789.7, 2.2}, {790.8, 2.3}, {791.2, 2.3}, {792.6, 2.4}, {791.8, 2.5},
      {792.2, 2.5}, {794.7, 2.6}, {794., 2.6}, {794.4, 2.7}, {795.3, 2.8},
      {796.4, 2.8}}]

The wrapper can be removed.

{{789.7, 2.2}, {790.8, 2.3}, {791.2, 2.3}, {792.6, 2.4}, {791.8, 2.5},
 {792.2, 2.5}, {794.7, 2.6}, {794., 2.6}, {794.4, 2.7}, {795.3, 2.8}, {796.4, 2.8}}

The reason why the output of the previous two commands has been formatted this way is that Data typesets the pairs using ± for output.

A similar construct can be used with individual data points.

Datum[{70, 0.04}]

Just as for Data, the typesetting of Datum uses ±.

The Data and Datum constructs provide "automatic" error propagation for multiplication, division, addition, subtraction, and raising to a power. Another advantage of these constructs is that the rules built into them know how to combine data with constants.

The rules also know how to propagate errors for many transcendental functions.

This rule assumes that the error is small relative to the value, so we can approximate.

For a transcendental function f applied to a value x with a small error errx, the propagated error in f(x) is approximately |f'(x)| * errx.

We have seen that the Data and Datum constructs are typeset using ±. The ± form can also be used directly, and provided its arguments are numeric, errors will be propagated.

One may typeset the ± into the input expression, and errors will again be propagated.

The ± input mechanism can combine terms by addition, subtraction, multiplication, division, raising to a power, and addition and multiplication by a constant number. The rules used for ± apply only to numeric arguments.

This makes the ± input form different from the Data and Datum constructs.

3.3.1.2 Why Quadrature?

Here we justify combining errors in quadrature. Although the arguments below are not proofs in the usual pristine mathematical sense, they are correct and can be made rigorous if desired.

First, you may already know about the "Random Walk" problem, in which a player starts at the point x = 0 and at each move steps either forward (toward +x) or backward (toward -x). The choice of direction is made randomly for each move by, say, flipping a coin. If each step covers a distance L, then after N steps the expected most probable distance of the player from the origin can be shown to be L * sqrt(N).

Thus, the distance goes up as the square root of the number of steps.

Now consider a situation where N measurements of a quantity x are performed, each with an identical random error errx. We find the sum of the measurements.

Each individual error is equally likely to be +errx as -errx, so the errors accumulate like the steps of a random walk that is essentially random. Thus, the expected most probable error in the sum goes up as the square root of the number of measurements: errx * sqrt(N).

This is exactly the result obtained by combining the errors in quadrature.

Another similar way of thinking about the errors is that in an abstract linear error space, the errors span the space. If the errors are probabilistic and uncorrelated, the errors in fact are linearly independent (orthogonal) and thus form a basis for the space. Thus, we would expect that to add these independent random errors, we would have to use Pythagoras' theorem, which is just combining them in quadrature.
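A short simulation makes the square-root behaviour visible. This is a sketch with arbitrary parameters, not part of the original chapter; it sums errors that are randomly +err or -err and reports the typical size of the summed error.

    import random
    import statistics

    def typical_error_of_sum(n_measurements, err=1.0, trials=20000):
        # Sum n identical errors, each randomly +err or -err, and return the
        # RMS (typical) size of that summed error over many trials.
        totals = [sum(random.choice((+err, -err)) for _ in range(n_measurements))
                  for _ in range(trials)]
        return statistics.pstdev(totals)

    for n in (1, 4, 16, 64):
        print(f"n = {n:3d}  typical error of sum ~ {typical_error_of_sum(n):.2f}  (sqrt(n) = {n ** 0.5:.2f})")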

3.3.2 Finding the Error in an Average

The rules for propagation of errors, discussed in Section 3.3.1, allow one to find the error in an average or mean of a number of repeated measurements. Recall that to compute the average, first the sum of all the measurements is found, and the rule for addition of quantities allows the computation of the error in the sum. Next, the sum is divided by the number of measurements, and the rule for division of quantities allows the calculation of the error in the result (i.e., the error of the mean).

In the case that the error in each measurement has the same value, the result of applying these rules for propagation of errors can be summarized as a theorem.

Theorem: If the measurement of a random variable x is repeated N times, and the random variable has standard deviation errx, then the standard deviation in the mean is errx / sqrt(N).

Proof: One makes N measurements, each with error errx.

{x1, errx}, {x2, errx}, ... , {xN, errx}

We calculate the sum.

sumx = x1 + x2 + ... + xN

We calculate the error in the sum using Rule 2 for addition:

errsum = sqrt( errx² + errx² + ... + errx² ) = sqrt(N) * errx

This last line is the key: by repeating the measurements N times, the error in the sum only goes up as sqrt(N).

The mean is

meanx = sumx / N

Applying the rule for division we get the following:

errmean = errsum / N = errx / sqrt(N)

This completes the proof.

The quantity errx / sqrt(N) is often called the standard error of the mean.

Here is an example. In Section 3.2.1, 10 measurements of the diameter of a small cylinder were discussed. The mean of the measurements was 1.6514 cm and the standard deviation was 0.00185 cm. Now we can calculate the mean and its error, adjusted for significant figures.

Note that presenting this result without significant figure adjustment makes no sense.

The above number implies that there is meaning in the one-hundred-millionth part of a centimeter.
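Using the numbers quoted above (ten measurements, mean 1.6514 cm, estimated standard deviation 0.00185 cm), the error in the mean follows directly from the theorem; a minimal check in Python:

    import math

    n = 10
    mean = 1.6514      # cm, mean of the ten diameter measurements
    sigma = 0.00185    # cm, estimated standard deviation of a single measurement

    error_of_mean = sigma / math.sqrt(n)
    print(f"{mean:.4f} +/- {error_of_mean:.4f} cm")   # about 1.6514 +/- 0.0006 cm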

Here is another example. Imagine you are weighing an object on a "dial balance" in which you turn a dial until the pointer balances, and then read the mass from the marking on the dial. You find m = 26.10 ± 0.01 g. The 0.01 g is the reading error of the balance, and is about as good as you can read that particular piece of equipment. You remove the mass from the balance, put it back on, weigh it again, and get m = 26.10 ± 0.01 g. You get a friend to try it and she gets the same result. You get another friend to weigh the mass and he also gets m = 26.10 ± 0.01 g. So you have four measurements of the mass of the body, each with an identical result. Do you think the theorem applies in this case? If yes, you would quote m = 26.100 ± 0.01/sqrt(4) g = 26.100 ± 0.005 g. How about if you went out on the street and started bringing strangers in to repeat the measurement, each and every one of whom got m = 26.10 ± 0.01 g. So after a few weeks, you have 10,000 identical measurements. Would the error in the mass, as measured on that $50 balance, really be 0.01/sqrt(10,000) = 0.0001 g?

The point is that these rules of statistics are only a rough guide and in a situation like this example where they probably don't apply, don't be afraid to ignore them and use your "uncommon sense". In this example, presenting your result as = 26.10 ± 0.01 g is probably the reasonable thing to do.

3.4 Calibration, Accuracy, and Systematic Errors

In Section 3.1.2, we made the distinction between errors of precision and accuracy by imagining that we had performed a timing measurement with a very precise pendulum clock, but had set its length wrong, leading to an inaccurate result. Here we discuss these types of errors of accuracy. To get some insight into how such a wrong length can arise, you may wish to try comparing the scales of two rulers made by different companies — discrepancies of 3 mm across 30 cm are common!

If we have access to a ruler we trust (i.e., a "calibration standard"), we can use it to calibrate another ruler. One reasonable way to use the calibration is that if our instrument reads xinst while the standard records xstandard, then we can multiply all subsequent readings of our instrument by xstandard / xinst. Since the correction is usually very small, it will practically never affect the error of precision, which is also small. Calibration standards are, almost by definition, too delicate and/or expensive to use for direct measurement.

Here is an example. We are measuring a voltage using an analog Philips multimeter, model PM2400/02. The result is 6.50 V, measured on the 10 V scale, and the reading error is decided on as 0.03 V, which is 0.5%. Repeating the measurement gives identical results. It is calculated by the experimenter that the effect of the voltmeter on the circuit being measured is less than 0.003% and hence negligible. However, the manufacturer of the instrument only claims an accuracy of 3% of full scale (10 V), which here corresponds to 0.3 V.

Now, what this claimed accuracy means is that the manufacturer of the instrument claims to control the tolerances of the components inside the box to the point where the value read on the meter will be within 3% times the scale of the actual value. Furthermore, this is not a random error; a given meter will supposedly always read too high or too low when measurements are repeated on the same scale. Thus, repeating measurements will not reduce this error.

A further problem with this accuracy is that while most good manufacturers (including Philips) tend to be quite conservative and give trustworthy specifications, there are some manufacturers who have the specifications written by the sales department instead of the engineering department. And even Philips cannot take into account that maybe the last person to use the meter dropped it.

Nonetheless, in this case it is probably reasonable to accept the manufacturer's claimed accuracy and take the measured voltage to be 6.5 ± 0.3 V. If you want or need to know the voltage better than that, there are two alternatives: use a better, more expensive voltmeter to take the measurement or calibrate the existing meter.

Using a better voltmeter, of course, gives a better result. Say you used a Fluke 8000A digital multimeter and measured the voltage to be 6.63 V. However, you're still in the same position of having to accept the manufacturer's claimed accuracy, in this case (0.1% of reading + 1 digit) = 0.02 V. To do better than this, you must use an even better voltmeter, which again requires accepting the accuracy of this even better instrument and so on, ad infinitum, until you run out of time, patience, or money.

Say we decide instead to calibrate the Philips meter using the Fluke meter as the calibration standard. Such a procedure is usually justified only if a large number of measurements were performed with the Philips meter. Why spend half an hour calibrating the Philips meter for just one measurement when you could use the Fluke meter directly?

We measure four voltages using both the Philips and the Fluke meter. For the Philips instrument we are not interested in its accuracy, which is why we are calibrating the instrument. So we will use the reading error of the Philips instrument as the error in its measurements and the accuracy of the Fluke instrument as the error in its measurements.

We form lists of the results of the measurements.

We can examine the differences between the readings either by dividing the Fluke results by the Philips or by subtracting the two values.

The second set of numbers is closer to the same value than the first set, so in this case adding a correction to the Philips measurement is perhaps more appropriate than multiplying by a correction.

We form a new data set of the format {Philips reading, correction}.

We can guess, then, that for a Philips measurement of 6.50 V the appropriate correction factor is 0.11 ± 0.04 V, where the estimated error is a guess based partly on a fear that the meter's inaccuracy may not be as smooth as the four data points indicate. Thus, the corrected Philips reading can be calculated.

(You may wish to know that all the numbers in this example are real data and that when the Philips meter read 6.50 V, the Fluke meter measured the voltage to be 6.63 ± 0.02 V.)
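The comparison between a multiplicative and an additive correction can be scripted in a few lines. The paired readings below are hypothetical placeholders (only the 6.50 V / 6.63 V pair is taken from the text), so this illustrates the procedure rather than the actual calibration data.

    # Hypothetical paired readings in volts: (Philips reading, Fluke reading).
    # Only the 6.50 / 6.63 pair comes from the example in the text.
    pairs = [(2.05, 2.16), (4.02, 4.13), (6.50, 6.63), (8.10, 8.21)]

    ratios = [fluke / philips for philips, fluke in pairs]
    differences = [fluke - philips for philips, fluke in pairs]

    print("ratios:     ", [round(r, 3) for r in ratios])
    print("differences:", [round(d, 3) for d in differences])

    # If the differences are closer to a common value than the ratios are, an additive
    # correction (a fixed offset added to each Philips reading) is the better model.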

Finally, a further subtlety: Ohm's law states that the resistance R is related to the voltage V across and the current I through the resistor according to the following equation.

V = IR

Imagine that we are trying to determine an unknown resistance using this law and are using the Philips meter to measure the voltage. Essentially the resistance is the slope of a graph of voltage versus current.

If the Philips meter is systematically reading all voltages too high by a roughly constant offset (as the calibration above suggests), that systematic error of accuracy will have no effect on the slope and therefore will have no effect on the determination of the resistance R. So in this case and for this measurement, we may be quite justified in ignoring the inaccuracy of the voltmeter entirely and using the reading error to determine the uncertainty in the determination of R.

3.5 Summary of the Error Propagation Routines


Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.

  • Open access
  • Published: 17 August 2024

α -Synuclein oligomers form by secondary nucleation

  • Catherine K. Xu   ORCID: orcid.org/0000-0003-4726-636X 1 , 2 ,
  • Georg Meisl   ORCID: orcid.org/0000-0002-6562-7715 1 ,
  • Ewa A. Andrzejewska   ORCID: orcid.org/0000-0002-1421-5569 1 ,
  • Georg Krainer   ORCID: orcid.org/0000-0002-9626-7636 1 , 3 ,
  • Alexander J. Dear 1 , 4 ,
  • Marta Castellana-Cruz 1 ,
  • Soma Turi 1 ,
  • Irina A. Edu   ORCID: orcid.org/0000-0002-8915-7375 1 ,
  • Giorgio Vivacqua 5 , 6 ,
  • Raphaël P. B. Jacquat   ORCID: orcid.org/0000-0002-8661-9722 1 ,
  • William E. Arter 1 ,
  • Maria Grazia Spillantini 6 ,
  • Michele Vendruscolo   ORCID: orcid.org/0000-0002-3616-1610 1 ,
  • Sara Linse   ORCID: orcid.org/0000-0001-9629-7109 4 &
  • Tuomas P. J. Knowles   ORCID: orcid.org/0000-0002-7879-0140 1 , 7  

Nature Communications volume 15, Article number: 7083 (2024)


  • Microfluidics
  • Protein aggregation

Oligomeric species arising during the aggregation of α -synuclein are implicated as a major source of toxicity in Parkinson’s disease, and thus a major potential drug target. However, both their mechanism of formation and role in aggregation are largely unresolved. Here we show that, at physiological pH and in the absence of lipid membranes, α -synuclein aggregates form by secondary nucleation, rather than simple primary nucleation, and that this process is enhanced by agitation. Moreover, using a combination of single molecule and bulk level techniques, we identify secondary nucleation on the surfaces of existing fibrils, rather than formation directly from monomers, as the dominant source of oligomers. Our results highlight secondary nucleation as not only the key source of oligomers, but also the main mechanism of aggregate formation, and show that these processes take place under conditions which recapitulate the neutral pH and ionic strength of the cytosol.


Introduction

The process of protein aggregation is associated with over 50 human disorders, including Alzheimer’s and Parkinson’s diseases (PD) 1 . In PD, aggregates of the 14 kDa protein α -synuclein are the major component of Lewy bodies and neurites, which have emerged as the pathological hallmarks of the disease 2 . In addition to its abundance in the characteristic amyloid deposits in PD, α -synuclein is further implicated as a causal agent in PD disease development by the finding that duplications and triplications of the WT α -synuclein gene, as well as a number of single-point mutations, are associated with familial cases of PD 3 , 4 , 5 .

While deposits of fibrillar protein are hallmarks of protein aggregation diseases, oligomeric intermediates are implicated as the major source of toxicity, as the high molecular weight fibrillar aggregates are typically relatively inert in a biological context 1,2,6,7,8. Moreover, determining oligomer dynamics is crucial for the elucidation of aggregation mechanisms 9,10. Oligomers are nevertheless relatively poorly characterized, due to challenges in their analysis that render them invisible to most conventional biophysical techniques, namely their low abundance, transient nature, and high degree of heterogeneity 8,9,10,11,12,13,14,15.

In the case of the Alzheimer’s disease-associated A β peptide, oligomer dynamics have been used to determine the molecular pathways of fibril formation 16 . For α -synuclein, several studies of oligomer dynamics have employed single molecule FRET experiments, which have observed the interconversion of oligomers with different FRET efficiencies 11 , 17 , 18 , 19 . However, such measurements to date have not uncovered the source of oligomers as the aggregation reaction progresses 20 .

Here, we exploit the power of single molecule microfluidics to study the dynamics of α -synuclein oligomers under native-like conditions 21 . Combined with bulk assays and chemical kinetics, this approach allows us to quantitatively determine the molecular mechanism of α -synuclein aggregation under conditions that reflect the neutral pH and ionic strength of the cytosol (Fig.  1 ).

Figure 1: Using chemical kinetics, we determined that secondary nucleation on fibril surfaces is the dominant mechanism of formation of both α-synuclein oligomers and new fibrils.

α -Synuclein aggregation occurs via secondary pathways

Although α -synuclein oligomers are implicated as toxic species in PD, the molecular mechanisms by which both they and high molecular weight aggregates form remain largely unknown, despite substantial efforts. However, in order to enable the rational design of drugs that target aggregation, for example, inhibiting the formation of toxic oligomeric species, this mechanistic information is required. Detailed investigations into the individual microscopic processes involved have often only been possible under reaction conditions that differ from cytosolic pH and/or ionic strength and often required the investigation of different mechanistic steps at disparate conditions; primary nucleation induced by synthetic lipids, secondary nucleation at moderate ionic strength/acidic pH or high ionic strength/neutral pH, and elongation at moderate ionic strength/neutral pH 22 , 23 , 24 , 25 , 26 . From such studies, we now qualitatively understand certain aspects of the aggregation mechanism but have so far not achieved a quantitative description that can account for the experimental data. In order to unify the reaction steps into one complete mechanistic description of fibril formation from α -synuclein, we established experimental conditions for the reproducible aggregation of α -synuclein at neutral pH (Fig.  2 and Supplementary Figs. S6 , S7 , and S8) 27 .

Figure 2: The aggregation kinetics of AlexaFluor-488-labelled α-synuclein were followed by aggregation-induced quenching of the AlexaFluor-488 dye (Fig. S7); the aggregation kinetics of labelled and unlabelled α-synuclein are the same (Fig. S6 and Table S1). Kinetics were measured in the absence and presence of varying concentrations of fibrillar seeds, and the data were fitted to kinetic models in the absence (a) and presence (b) of secondary processes. The data are only consistent with a model that includes the fibril-catalysed formation of new fibrils.

We investigated the fibril formation kinetics of α -synuclein in the absence and presence of varying concentrations of fibrillar seeds. Upon the addition of fibrillar seeds, the lag phase and aggregation half-time ( t 1/2 ) decreased in a seed concentration-dependent manner (Fig.  2 ). This behaviour is highly characteristic of the presence of secondary processes, whereby existing fibrils catalyse the formation of further fibrils 28 , 29 . By contrast, in the absence of secondary processes, the addition of seed fibrils would not significantly alter the aggregation kinetics. Moreover, fitting our data globally to kinetic models, we found that the aggregation kinetics are inconsistent with a mechanism which does not include secondary processes; α -synuclein therefore aggregates via secondary processes 30 (Supplementary Table  S1) .

Secondary nucleation is the dominant mechanism of fibril formation

The dependence of aggregation kinetics on protein concentration can be used to infer the molecular mechanisms of fibril formation, given careful consideration of the underlying reaction steps 28,30. In our case, the dependence of the unseeded aggregation rates on the total monomer concentration gives rise to a scaling exponent of −0.5, i.e., the slope of the relationship between the logarithm of the aggregation half-time and the logarithm of the monomer concentration. This concentration dependence is consistent with fibril formation mechanisms of both fragmentation and saturated secondary nucleation 24,28. In the former case, the number of fibrils increases by the fragmentation of existing fibrils. In the latter case of saturated secondary nucleation, monomers quickly bind to fibril surfaces, and their subsequent conversion to aggregates and release into solution, which is independent of the free monomer concentration, is the rate-limiting step 24,28.
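For readers who want to reproduce the scaling-exponent calculation, the sketch below shows the standard recipe: fit a straight line to log half-time versus log monomer concentration. The concentrations and half-times are placeholders chosen so the slope comes out near −0.5; they are not the measured values.

```python
import numpy as np

# Hypothetical unseeded aggregation half-times (h) at several total monomer
# concentrations (uM); illustrative numbers only.
m0     = np.array([25.0, 50.0, 100.0, 200.0])    # uM
t_half = np.array([40.0, 28.0, 20.0, 14.0])      # h

# Scaling exponent = slope of the log-log plot of half-time vs concentration
gamma = np.polyfit(np.log10(m0), np.log10(t_half), 1)[0]
print(f"scaling exponent ≈ {gamma:.2f}")          # ≈ -0.5
```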

In order to determine which of the two mechanisms is dominant, the fibril lengths in the plateau region can be measured to estimate the fragmentation rate. In the plateau region the monomer concentration, and therefore the rate of secondary nucleation, is minimal; fragmentation, however, continues at the same rate as during the aggregation reaction. Changes in the length distribution in the plateau phase can therefore be used to estimate the fragmentation rate directly. We obtained the fibril length distributions by transmission electron microscopy (TEM) imaging of aggregation mixtures throughout the plateau phase, finding a decrease in the mean fibril length over time (Fig. 3 and Supplementary Figs. S11 and S12). To incorporate our length distribution-derived fragmentation rates into our model of α-synuclein aggregation, samples were withdrawn from aggregation mixtures under the exact same conditions as the fitted kinetic data in Fig. 2. From fitting an analytical expression for the average length (for derivation see "Methods") to the fibril lengths measured by TEM (Fig. 3), we obtain an upper bound on the rate of fibril accumulation due to fragmentation of κ(frag) = 0.01 h⁻¹. Fitting the aggregation kinetics (Fig. 2), however, yields a very clear result: κ = 0.4 h⁻¹, i.e., 40-fold faster than would be expected from fragmentation alone. Similarly, fitting the aggregation kinetics with fragmentation as the only source of new aggregates completely fails to account for the observed kinetics (Fig. 3c). Our fibril length-based analysis therefore indicates that secondary nucleation, not fragmentation, is the main mechanism of fibril formation.

Figure 3: Fibrils were withdrawn from an aggregation reaction at the indicated timepoints in the plateau phase (a) and imaged by TEM (b, inset; scale bar = 1 μm). (b) The mean fibril lengths were determined from at least 8 TEM images containing a minimum of 650 fibrils in total, and fitted to kinetic models to determine the fragmentation rate, yielding a very low value of 0.01 h⁻¹. (c) The kinetic data were then fitted with fragmentation as the mechanism of fibril amplification, with the fragmentation rate constant fixed to the value determined in (b).

However, measurements of fibril length distributions may not always be fully representative of the true distribution. Surface-based measurements such as TEM may be limited by unequal capture of fibrils of different lengths, while solution-based methods such as dynamic light scattering (DLS) are often biased towards larger objects and assume spherical geometry. We thus employed a complementary approach to establish the dominance of secondary nucleation over fragmentation. The Brichos domain, a molecular chaperone, is a well-established inhibitor of secondary nucleation in the amyloid aggregation of both the Alzheimer's-associated Aβ peptide and, more recently, α-synuclein 31,32,33. We confirmed that Brichos inhibits α-synuclein aggregation under our experimental conditions (Supplementary Fig. S13). Since Brichos inhibits secondary nucleation but does not affect fragmentation, we conclude that fragmentation is only a minor contributor to the rate of fibril formation, and that secondary nucleation is the dominant mechanism of α-synuclein fibril formation.

Oligomers form by secondary nucleation on fibril surfaces

In order to elucidate further details of the secondary nucleation process, we investigated α -synuclein oligomer dynamics during aggregation. We previously demonstrated the ability of μ FFE to fractionate complex aggregation mixtures and resolve oligomeric subpopulations according to their electrophoretic mobilities, a function of radius and charge (Fig.  S14) 21 . We have further demonstrated its extension to single molecule spectroscopy to maximize information on fractionated species 34 . Here, we employ μ FFE at the single molecule level to monitor oligomer mass concentrations during α -synuclein aggregation to yield insights into oligomer dynamics.

A key feature of this approach in studying oligomers is its minimal perturbation of the reaction system. The sample under study is rapidly diluted and fractionated in solution just a few milliseconds prior to measurement, a timescale on which the sample composition is unlikely to change. This contrasts with more traditional single molecule approaches requiring the almost million-fold dilution of samples, or size exclusion chromatography, where samples interact differentially with a solid matrix on a timescale of minutes to hours 11 , 16 , 20 . Moreover, the method is agnostic to oligomer structure, as oligomers are detected by their intrinsic fluorescent label, in contrast to antibody-based methods such as ELISA 35 , 36 , 37 , 38 . The reaction mixture is therefore minimally perturbed for the measurement; the attachment of the AlexaFluor-488 fluorophore at position 122 did not affect the aggregation kinetics compared to the WT protein (Fig.  S6 and SupplementaryTable  S1) , likely due to its location outside of the fibril core 39 , 40 . Due to the high concentration of monomer and thus background noise, we used a simple photon count thresholding approach to estimate the oligomer concentration. However, the non-uniformity of the confocal spot laser intensity means that the estimated oligomer concentrations may be on the order of ten- or even thousand-fold smaller than the true concentrations (see SI for details).
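The full burst analysis is described in the paper's SI; purely to illustrate the thresholding idea, the sketch below applies a simple threshold to a simulated photon-count trace. The background level, burst brightness, and threshold rule are all chosen arbitrarily for the example and are not the values used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated photon-count time trace (counts per time bin): a high monomer
# background with occasional bright bursts standing in for oligomer events.
background = rng.poisson(lam=20, size=100_000)            # monomer background
trace = background.copy()
burst_bins = rng.choice(trace.size, size=50, replace=False)
trace[burst_bins] += rng.poisson(lam=60, size=50)         # oligomer bursts

# Threshold set relative to the overall trace statistics (assumed rule).
threshold = trace.mean() + 5 * trace.std()
oligomer_bins = np.flatnonzero(trace > threshold)
print(f"bins above threshold: {oligomer_bins.size}")

# Summed photon counts above background in those bins give a relative
# (not absolute) measure of oligomer mass concentration.
relative_mass = (trace[oligomer_bins] - trace.mean()).sum()
print(f"relative oligomer signal: {relative_mass:.0f} photons")
```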

Using our microfluidic approach, we observed the maximum in oligomer mass concentration slightly before the half-time. Crucially, when the reaction was seeded, this peak in oligomer mass concentration was shifted in time but remained located close to the half-time (Fig. 4). This dependence of oligomers on fibril seeds indicates that oligomers form predominantly via a fibril-catalysed mechanism, rather than directly from monomers. If oligomers formed directly from monomers, their formation rate would depend only on the monomer concentration, and the introduction of seeds would not be expected to simply shift the otherwise unchanged oligomer peak in time, as observed in Fig. 4. These observations therefore point to secondary nucleation as the dominant mechanism of oligomer formation (Supplementary Fig. S15).

Figure 4: Aggregation mixtures at various time points throughout the reaction were centrifuged (21,130 × g, 10 min, 20 °C) to remove large fibrillar aggregates, and the oligomer content of the resulting supernatant was studied by μFFE at the single molecule level (a). Kinetics of fibrillar α-synuclein in unseeded (blue) and seeded (red, 1% seeds) aggregation reactions, measured by fluorescence quenching, are shown alongside the fitted model (b). The relative oligomer mass concentrations were determined (c) and fitted to a model in which oligomers can form via both primary nucleation from monomers and secondary nucleation on fibril surfaces (model details in the SI). X-axis error bars represent the time range over which data were averaged, which corresponds to the standard deviation of the aggregation half-times. Where present, y-axis error bars represent the standard error of the mean oligomer mass concentrations from up to 8 biological replicates; points without y-axis error bars represent a single sample.

To further verify this mechanistic conclusion, we derived the rate laws describing aggregation and oligomer formation (see "Methods" for details of the kinetic model) and fitted the measured oligomer concentrations using the rate constants for fibril formation determined above (Supplementary Table S2). This model fits the data well and allows bounds to be placed on the rate constants of oligomer formation and dissociation: oligomers are formed at a rate greater than 4 × 10⁻⁵ s⁻¹ per mole of fibrils at a monomer concentration of 100 μM. By comparison, Aβ secondary oligomers are formed at a rate of 3 × 10⁻⁵ s⁻¹ at a monomer concentration of 5 μM 16. Additionally, the oligomer dissociation rate was fast on the timescale of the aggregation reaction, i.e., hours or faster, as shown by the fact that the decline in oligomer concentration closely tracks monomer consumption during the aggregation reaction. Such oligomers are therefore only detectable thanks to the fast timescale of our microfluidic approach. In summary, secondary nucleation is not only responsible for the formation of new fibrils, but is also the main source of oligomers.

Oligomers form under quiescent, native-like conditions

We next investigated the role of shaking in α-synuclein aggregation. Under fully quiescent conditions, aggregation proceeded at a much lower rate, demonstrating that shaking increases the rate of aggregation. However, the addition of seeds drastically reduced the t1/2, indicating that secondary processes still take place in the absence of agitation. In order to determine which microscopic process(es) are catalysed by shaking, we studied the oligomer content of quiescent, seeded aggregation reactions using μFFE at the single molecule level before and after moderate shaking (10 min, 200 rpm). Prior to shaking, very few oligomers were observed, but the concentration of oligomers increased by more than a factor of three following moderate shaking (Fig. 5). The post-shaking concentration was also higher than the oligomer concentration during the aggregation plateau phase under constant shaking at the same speed (200 rpm), demonstrating that the majority of these oligomers did not arise through fragmentation of fibrils induced by the shaking. Given that the conversion and dissociation steps are rate-limiting in this system, these must be the processes affected by agitation. From a mechanical perspective, the dissociation of oligomers from fibrils is more likely to be catalysed by shaking. We therefore conclude that α-synuclein oligomers form by secondary nucleation under quiescent conditions at neutral pH, and that shaking accelerates their formation, likely by facilitating their dissociation from fibril surfaces.

Figure 5: Secondary nucleation involves the formation of oligomers on fibril surfaces, followed by their release into solution (a). α-Synuclein was aggregated under both quiescent and shaking conditions (b, c), and the oligomer populations investigated. The relative oligomer mass concentrations of quiescent seeded (1%) aggregation reactions at the half-time (indicated by the dotted line in (c)), before and after shaking for 10 minutes at 200 rpm, are shown alongside the oligomer concentration during the plateau phase of the reaction under shaking (d). Example timetraces of photon count rates are shown for quiescent seeded (1%) aggregation reactions before (e) and after (f) shaking.

Parkinson’s disease patient samples catalyse oligomer formation through secondary nucleation

Having characterised the in vitro aggregation mechanism in detail, we next explored its links to Parkinson’s disease. Due to the low concentration of aggregates in CSF 41 , 42 , we chose quiescent aggregation conditions to minimize the primary nucleation rate, in order to maximize the observable seeding effect. While the aggregation of wells seeded with pooled CSF from healthy subjects appeared to be stochastic, the CSF from Parkinson’s disease patients consistently induced α -synuclein aggregation (Figs.  6 and Supplementary Fig. S16) . Given that CSF has been found to inhibit aggregation 43 , these data clearly indicate that aggregates in the disease CSF are sufficient to overcome this inhibition and seed α -synuclein aggregation. We additionally performed RT-QuIC analyses of brain homogenates from patients with several synucleinopathies, including PD, and our labelled α -synuclein (Supplementary Fig.  S17) , demonstrating that patient brain-derived aggregates are able to seed our labelled α -synuclein, thus supporting its use as a model system for the investigation of molecular mechanisms in disease. Moreover, using our microfluidic platform, we investigated the oligomer content of the Parkinson’s CSF-seeded aggregation reaction, detecting oligomers with the same biophysical properties, namely size and electrophoretic mobility, as our in vitro-generated oligomers (Fig.  6 ). These data thus provide evidence for secondary nucleation as an oligomer production mechanism in Parkinson’s disease.

Figure 6: (a) Aggregation kinetics of α-synuclein in the presence of 4% v/v pooled CSF from Parkinson's disease patients and from a healthy cohort. The extracted lag times (time taken to reach 25% aggregation) are shown alongside unseeded and seeded (1% mass concentration) reactions in the absence of CSF (inset). (b, c) Example photon count timetraces from aggregation reactions seeded by PD CSF (b) and by in vitro-generated seeds (c). Oligomers from the PD CSF-seeded reaction were investigated by μFFE at around 30% aggregation (b) and found to have similar biophysical properties to a corresponding sample from the in vitro-seeded reaction (c).

In this study, we have demonstrated that α-synuclein oligomers form by secondary nucleation under physiologically relevant conditions, namely neutral pH and a salt concentration mimicking the osmotic pressure of the cytosol. This same process is also the dominant mechanism of formation of new fibrils, allowing an exponential increase in aggregate mass concentration over time. Previous work has suggested that the contribution of secondary processes to α-synuclein aggregation is highly pH-dependent 22,24,44. The presence of secondary processes at neutral pH was hinted at by data from Buell et al., and with our detailed investigation we have determined that this process is secondary nucleation and not fragmentation 22. Moreover, several additional studies have reported seed concentration-dependent aggregation kinetics at neutral pH, suggesting that secondary nucleation is a general feature of α-synuclein aggregation 44,45,46,47,48.

Similarly, previous studies of α-synuclein oligomer dynamics have generally focused on the role of primary nucleation in oligomer formation 11,18,19,49. In light of our findings, a re-examination of these data shows that they are in fact consistent with the peak in oligomer concentration close to the aggregation half-time that we observe here, and thereby with a secondary nucleation formation mechanism. By investigating both fibril and oligomer dynamics in detail, we quantitatively elucidate the molecular mechanism of α-synuclein aggregation. This was made possible by our development both of aggregation conditions at cytosolic pH and ionic strength and of an oligomer detection method that is blind to oligomer structure and minimally perturbs the reaction mixture. Through this study, we established that, while agitation is known to be able to induce fibril fragmentation and primary nucleation, under these conditions it markedly increases the rate of secondary nucleation 50,51,52. Primary nucleation is likely a heterogeneous process, whose rate agitation is believed to accelerate by increasing turnover at the catalytic surface 53. Secondary nucleation is similarly a heterogeneous process, the only difference being that the catalytic surface is the fibril itself; it therefore stands to reason that a similar accelerating effect can also act on this process 54. This is analogous to crystallisation, for which mild agitation can increase the rate of secondary nucleation by facilitating detachment, a finding which forms the basis of the use of mild agitation in industrial crystallisation processes 55,56,57,58.

In conclusion, we identify secondary nucleation as the dominant process for the formation of both oligomers and fibrils in α -synuclein aggregation. α -Synuclein oligomers therefore provide both a potential source of toxicity and mechanism of aggregate spreading in PD, which is not only limited to early stages of the disease, given that oligomer formation is catalysed by fibrillar aggregates 8 , 59 . The detailed mechanistic framework we have elucidated can thus be used to better understand the role of α -synuclein in PD pathology and how to effectively develop therapeutic strategies. Our finding that secondary nucleation is not only a pathway to the formation of new fibrils but also the main source of oligomeric species highlights it as a promising therapeutic target with dual effectiveness: stopping secondary nucleation will stop oligomer production in the short term and slow fibril accumulation in the long term.

Ethical Statement

Brain samples from four patients were used in this paper for RT-QuIC analysis, consisting of three males (healthy control, PD, and MSA) and one female (DLB). They were provided by the UK Brain Bank, and ethical approval was obtained from the Cambridgeshire 2 Research Ethics Committee.

Purification of α -synuclein

α -Synuclein (WT or N122C variant) was overexpressed in Escherichia coli  BL21 cells. The cells were centrifuged (20 min, 3985 ×  g , 4 °C; JLA-8.1000 rotor, Beckman Avanti J25 centrifuge (Beckman Coulter)), and the pellet resuspended in buffer (10 mM tris, 1 mM EDTA, protease inhibitor) prior to lysis by sonication on ice. Debris was removed by centrifugation (20 min, 39,121 ×  g , 4 °C; JLA-25.5 rotor), and the supernatant incubated (20 min, 95 °C). Heat-sensitive proteins were removed by centrifugation (15 min, 39,121 ×  g , 4 °C; JLA-25.5 rotor). Subsequent incubation with streptomycin sulfate (10 mg/mL, 15 min, 4 °C) precipitated out DNA. α -Synuclein was extracted from the supernatant (15 min, 39,121 ×  g , 4 °C; JLA-25.5 rotor) by the gradual addition of ammonium sulfate (361 mg/mL). The α -synuclein-containing pellet was collected by centrifugation (15 min, 39,121 ×  g , 4 °C; JLA-25.5 rotor) and resuspended in buffer (25 mM tris, pH 7.4). Dialysis was used for complete buffer exchange, and the resulting mixture run on a HiLoad TM 26/10 Q Sepharose high performance column (GE Healthcare), at room temperature. Under a gradient of 0–1.5 M NaCl over 600 mL, α -Synuclein was eluted at ~350 mM. Selected fractions were fractionated at room temperature on a Superdex 75 26/600 (GE Healthcare) and eluted in PBS (pH 7.4). For the N122C variant, 3 mM dithiothreitol (DTT) was added to all buffers to prevent dimerization. The concentration of α -synuclein was determined by absorbance at 275 nm, using a molar extinction coefficient of 5600 M -1 cm -1 . Aliquots were then flash-frozen in liquid nitrogen and stored at −80 °C.
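As a small worked example of the final concentration-determination step, the Beer-Lambert relation A = εcl with the quoted extinction coefficient gives, for an assumed absorbance reading and a standard 1 cm path length (both illustrative assumptions, not values from the protocol):

```python
# Beer-Lambert estimate of the alpha-synuclein concentration from A275.
extinction  = 5600.0   # M^-1 cm^-1, molar extinction coefficient from the text
path_length = 1.0      # cm, assumed standard cuvette
A275        = 0.42     # example absorbance reading (illustrative)

concentration = A275 / (extinction * path_length)        # A = e * c * l
print(f"concentration ≈ {concentration * 1e6:.0f} uM")    # ≈ 75 uM
```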

Labelling of α -synuclein

The N122C variant of α -synuclein was fluorescently labeled with AlexaFluor-488 dye. DTT was removed from purified α -synuclein by buffer exchange into PBS using P10 desalting columns containing Sephadex G25 matrix (GE Healthcare). The DTT-free protein was incubated (overnight, 4 °C, rolling system) with a 1.5x molar excess of AlexaFluor-488 dye functionalized with a maleimide moiety. Excess unbound dye and α -synuclein dimers were removed by eluting the mixture over P10 desalting columns containing Sephadex G25 matrix (GE Healthcare). The resulting α -synuclein concentration was estimated by the dye absorbance at 495 nm, using a molar extinction coefficient of 72,000 M −1 cm −1 . Aliquots were flash-frozen in liquid nitrogen and stored at -80 °C for up to 3 weeks prior to experiments.

Aggregation of α -synuclein

Aggregation of α -synuclein was carried out in non-binding 96-well plates (Corning) at 37 °C in a FLUOstar Omega microplate reader (BMG Labtech). Each well contained 100  μ L of reaction mixture and a glass bead. The buffer used was Dulbecco’s PBS (pH 7.4) with 0.01% (w/v) sodium azide. Interwell areas and empty wells were filled with PBS prior to sealing the plate with a foil cover. For experiments under shaking conditions, plates were shaken for 355 s at 200 rpm between each reading cycle; quiescent reactions were read at the same rate, but in the absence of all shaking. WT reactions were followed by the addition of 50  μ M thioflavin T, whereas labelled N122C reactions (100% labelled protein) were monitored by AlexaFluor-488 fluorescence (Supplementary Fig.  S7) . Fibrils from unseeded reactions of 100  μ M monomer under the same buffer and shaking conditions were used directly as seeds for seeding reactions, with no sonication (Supplementary Figs.  S9 and Supplementary Fig. S10) . For CSF-seeded aggregation reactions, CSF was added to a total of 4% volume in 100  μ M α -synuclein under the same quiescent conditions as described above, and the sample subjected to shaking (10 min, 200 rpm) prior to measurement by FFE. CSF biospecimens used in the analyses presented in this article were obtained from The Michael J. Fox Foundation for Parkinson’s Research.

Brain samples from UK brain bank and with confirmed autoptic diagnosis of synucleinopathies were employed in the present study. The samples included: Parkinson’s Disease (PD) cingulate cortex, Multiple System Atrophy (MSA) occipital cortex, Dementia with Lewy bodies (DLB) occipital cortex and healthy control occipital cortex. Brain homogenates (BH; 10% w/v) were prepared by homogenizing the tissue in 40 mM PBS containing 1% protease inhibitor (Halt, Thermo Scientific - 1860932) and 0.1% phenylmethylsulfonyl fluoride (PMSF), using a Bead Beater (Biospec Products; 11079110z) for 2 minutes at maximum speed. The homogenate was then spun at 3000 ×  g for 5 minutes at 4 °C and the supernatant was transferred to a new 0.5 ml lowBind Eppendorf tube and stored at −80 °C until RT-QuIC analysis. RT-QuIC reactions were performed in black 96-well plates with a clear bottom (Nalgene Nunc International). Plates were preloaded with 3 silica beads (1 mm diameter, BioSpec Products) per well. For BH-seeded reactions, 4  μ L of the indicated BH was added to wells containing 95  μ L of the reaction buffer to give final concentrations of 40 mM phosphate buffer (pH 8.0), 170 mM NaCl, 100  μ M of monomeric AlexaFluor-488-labelled N122C α -synuclein (filtered through a 100 kDa MWCO filter immediately prior to use). The plate was then sealed with a plate sealer film (Nalgene Nunc Inter- national) and incubated at 37.5 °C in a BMG FLUOstar Omega plate reader with cycles of 1 min shaking (500 rpm double orbital) and 15 minutes rest throughout the indicated incubation time. Fluorescence measurements (490 ± 5-nm excitation and 520 ± 5-nm emission; bottom read) were taken every 15 min. Each sample was run in three technical replicates.

Analysis of bulk kinetic data

The signal obtained during aggregation kinetics (ThT fluorescence or AlexaFluor-488 fluorescence) was taken to be proportional to the fibril mass present in the sample. The data were then fitted using the AmyloFit Platform and following the protocol in Meisl et al. to a model including primary nucleation, elongation and secondary nucleation with reaction order 0 30 . This model was able to describe the data well across concentrations. To produce misfits, the same data were fitted with a model including only primary nucleation and elongation.
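The model named here (primary nucleation, elongation, and secondary nucleation with reaction order 0, i.e. monomer-independent secondary nucleation) can be written as a pair of coupled rate equations for the fibril number concentration P and mass concentration M. The sketch below integrates them numerically; the rate constants are placeholders rather than the fitted values, and the exact rate-constant conventions used by AmyloFit may differ, so this illustrates the model structure only.

```python
import numpy as np
from scipy.integrate import solve_ivp

m_tot = 100e-6          # total monomer (M)
k_n, n_c = 1e-4, 2      # primary nucleation rate constant and reaction order (placeholders)
k_plus   = 1e4          # elongation rate constant (M^-1 s^-1, placeholder)
k_2      = 1e-8         # monomer-independent secondary nucleation rate constant (s^-1, placeholder)

def rhs(t, y):
    P, M = y                          # fibril number and mass concentrations
    m = max(m_tot - M, 0.0)           # free monomer
    dP = k_n * m**n_c + k_2 * M       # primary + saturated secondary nucleation
    dM = 2 * k_plus * m * P           # growth by elongation at both fibril ends
    return [dP, dM]

t = np.linspace(0, 48 * 3600, 1000)
sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.0], t_eval=t,
                rtol=1e-6, atol=1e-12, max_step=600)
M = sol.y[1]

# Half-time: when the fibril mass reaches half the total monomer concentration
t_half_h = np.interp(0.5 * m_tot, M, t) / 3600
print(f"half-time ≈ {t_half_h:.1f} h")
```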

Measurement of fibril length distributions

At certain timepoints in the plateau phase of the aggregation reaction, 1  μ L of the reaction was withdrawn and the plate returned to the platereader. The reaction sample was mixed with 9  μ L PBS (pH 7.4), and applied to a transmission electron microscopy (TEM) grid (continuous carbon film on 300 mesh Cu). Following adsorption, the sample was washed with milliQ water (2 × 10  μ L). Samples were negatively stained with uranyl acetate (2% w/v, 10  μ L, 2 min) and washed with milliQ water (2 × 10  μ L). TEM grids were glow discharged using a Quorum Technologies GloQube instrument at a current of 25mA for 60s. TEM images were obtained using a Thermo Scientific (FEI Company, Hillsboro, OR) Talos F200X G2 microscope operated at 200 kV. TEM images were recorded using a Ceta 4k × 4k CMOS camera. The lengths of imaged fibrils were manually determined with ImageJ (example images in Figure S9). The lengths of between 650 and 1550 individual fibrils were measured for each sample.

Analysis of fibril length distributions

The fibril length in the plateau phase of the reaction, during which the fibril mass concentration is constant, can be modeled by the following equations. By definition, in the plateau phase the aggregate mass concentration is constant, i.e., M(t) = M∞. While nucleation processes become negligible when the monomer is depleted, fragmentation still takes place, thus the number concentration of fibrils, P(t), is given by

dP(t)/dt = k− M(t) = k− M∞,

which can be solved to yield

P(t) = P_plateau + k− M∞ t,

where k− is the fragmentation rate constant, t is the time since the plateau was first attained and P_plateau is the number concentration of fibrils at t = 0.

The mean length, L(t), at the plateau is thus given by:

L(t) = M∞ / P(t) = L_plateau / (1 + k− L_plateau t),

where L_plateau = M∞ / P_plateau is the average length when the plateau is first attained. Using the steady-state expression for the average length during an aggregation reaction derived e.g. in Cohen et al. 60,61,62 as an estimate for L_plateau, the rate of fibril formation due to fragmentation is approximately given by κ(frag) = L_plateau k− = 0.01 h⁻¹. This is thus an estimate of the rate of fibril formation based purely on measurements of fibril lengths, which can then be compared with kinetic measurements of the actual rate of fibril accumulation κ to see whether the data are consistent with a purely fragmentation-driven mechanism.
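In practice, under the expressions above, the estimate amounts to fitting L(t) = L_plateau / (1 + κ(frag)·t) to the measured mean lengths. A minimal sketch, with invented length data standing in for the TEM measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative mean fibril lengths (nm) at several times in the plateau phase;
# made-up numbers, not the TEM-derived means.
t_h = np.array([0.0, 10.0, 20.0, 40.0, 60.0])        # h since plateau onset
L   = np.array([620.0, 560.0, 520.0, 450.0, 400.0])  # mean length (nm)

# Under pure fragmentation the mean length decays as
#   L(t) = L_plateau / (1 + kappa_frag * t),
# where kappa_frag = k_minus * L_plateau is the fibril-formation rate due to
# fragmentation that is compared against the kinetically fitted kappa.
def mean_length(t, L_plateau, kappa_frag):
    return L_plateau / (1 + kappa_frag * t)

(L_p, kappa_frag), _ = curve_fit(mean_length, t_h, L, p0=(600.0, 0.01))
print(f"L_plateau ≈ {L_p:.0f} nm, kappa(frag) ≈ {kappa_frag:.3f} per hour")
```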

Fabrication of microfluidic free-flow electrophoresis devices

Microfluidic free-flow electrophoresis ( μ FFE) devices were designed and fabricated as follows. Briefly, acetate masks were used to produce SU-8 molds of devices by photolithography, the heights of which were measured with a profilometer (Dektak, Bruker, Billerica, MA). Polydimethylsiloxane (PDMS; 1:10 mixture of primer and base, Dow Corning) was applied to the mold and baked (65 °C, 1.5 h). μ FFE devices were then excised and biopsy punches used to create holes for tubing and electrode connections, with diameters of 0.75 mm and 1.5 mm, respectively. Following sonication in isopropanol (5 min), devices were bonded to glass coverslips (#1.5) by activation with oxygen plasma. Immediately prior to use, prolonged oxygen plasma treatment was used to hydrophilize device surfaces.

μ FFE device operation

The μ FFE device design used contains liquid electrodes (3 M KCl solution containing 1 nM Atto-488 dye) to connect the electrophoresis chamber to the external electric circuit 21 , 63 . These liquid electrodes were connected to the circuit via hollow metal electrodes made from bent syringe tips, which also constituted the outlets for the liquid electrodes. Samples were flowed into the device at controlled flow rates by the use of syringe pumps (Cetoni neMESYS, Korbussen, Germany), connected to polytetrafluoroethylene (PTFE) tubing (0.012" inner diameter × 0.030" outer diameter, Cole-Parmer, St. Neots, UK). The flow rates used were 1000, 200, 140, and 10  μ L h −1 for the auxiliary buffer (15× diluted PBS in milliQ water), electrolyte, desalting milliQ water, and sample, respectively. The electric field was applied by a benchtop power supply (Elektro-Automatik EA-PS 9500-06, Viersen, Germany) connected to the metal electrode outlets.

Acquisition of μ FFE data

μ FFE data were acquired using laser confocal fluorescence microscopy; a 488 nm wavelength laser beam (Cobolt 06-MLD 488 nm 200 mW diode laser, Cobolt, Stockholm, Sweden) was coupled into single-mode optical fibre (P3-488PM-FC01, Thorlabs, Newton, NJ) and collimated (60FC-L-4-M100S-26, Schäfter und Kirchhoff, Hamburg, Germany) before being directed into the back aperture of an inverted microscope body (Applied Scientific Instrumentation Imaging, Eugene, OR). The laser beam was then reflected by a dichroic mirror (Di03-R488/561, Semrock, Rochester, NY) and focused to a concentric diffraction-limited spot in the microfluidic channel through a high-numerical-aperture water-immersion objective (CFI Plan Apochromat WT 60x, NA 1.2, Nikon, Tokyo, Japan). Photons arising thorugh fluorescence emission were detected using the same objective. Fluorescence was then passed through the dichroic mirror and imaged onto a 30  μ m pinhole (Thorlabs), removing out of focus light. The signal was then filtered through a bandpass filter (FF01-520/35-25, Semrock), and focused onto a single-photon counting avalanche diode (APD, SPCM-14, PerkinElmer Optoelectronics, Waltham, MA). Photons were recorded using a time-correlated single photon counting (TCSPC) module (TimeHarp 260 PICO, PicoQuant, Berlin, Germany) with 25 ps time resolution. Single-photon counting recordings were obtained using custom-written Python code.

Aggregation samples (100  μ L) were withdrawn from the plate at various times during the aggregation reaction and centrifuged (21,130 ×  g , 10 min, 20 °C). The top 70  μ L was carefully withdrawn without disturbing the pellet containing large aggregates. An aliquot of the supernatant was then diluted in PBS to ~5  μ M total monomer mass concentration and injected into the device. An electric field of 300 V was applied and photon count timetraces obtained at 5–10 positions laterally distributed across the field direction for a total of at least 1 min per position.

Analysis of μ FFE data

A detailed account of the analysis of μ FFE data is provided in the SI.

Fitting of oligomer dynamics

The coarse-grained rate equations governing oligomers (concentration S(t)) formed by primary nucleation during an amyloid fibril formation reaction are:

dS(t)/dt = k_o1 m(t)^n_o1 − k_e1 S(t),     (4)

where m(t) is the concentration of monomeric protein, k_o1 is the rate constant for oligomer formation from primary nucleation with reaction order n_o1, and k_e1 is the rate constant for dissociation of oligomers to monomers. In a system dominated by secondary nucleation of oligomers, the rate equations are instead well-approximated by:

dS(t)/dt = k_o2 M(t) m(t)^n_o2 − k_e2 S(t),     (5)

where M(t) is the mass concentration of amyloid fibrils, k_o2 is the rate constant for oligomer formation from fibril-dependent processes, with reaction order n_o2, and k_e2 is the corresponding rate constant for oligomer dissociation. For simplicity, we modelled M(t) using the analytical expressions given in ref. 51 with rate parameters chosen as the values determined by fitting the bulk data on fibril formation in the main text (Fig. 2 and Table S2). Eqs. (4) and (5) were then fitted numerically to the experimental data on oligomer concentration using Python. Eq. (5) provided the superior fit, supporting the conclusion that oligomers are formed predominantly by secondary processes in this assay.
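As an illustration of how Eq. (5) behaves, the sketch below integrates it numerically using a simple sigmoidal stand-in for M(t) and placeholder rate constants (the study instead uses the analytical M(t) of ref. 51 and the fitted constants). The point is simply that, with fibril-catalysed formation and fast dissociation, the oligomer population peaks close to the aggregation half-time.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Eq. (5): dS/dt = k_o2 * M(t) * m(t)**n_o2 - k_e2 * S
m_tot  = 100e-6          # total monomer (M)
t_half = 10 * 3600.0     # s, sigmoid midpoint of the stand-in M(t) (illustrative)
tau    = 3600.0          # s, width of the growth phase (illustrative)

def M(t):                # fibril mass concentration, sigmoidal stand-in
    return m_tot / (1 + np.exp(-(t - t_half) / tau))

k_o2 = 4e-5 / 100e-6     # M^-1 s^-1, from the quoted 4e-5 s^-1 per mole of fibrils at 100 uM monomer
n_o2 = 1                 # assumed reaction order for illustration
k_e2 = 1e-3              # s^-1, placeholder fast dissociation rate

def dS_dt(t, S):
    m = m_tot - M(t)                              # free monomer
    return [k_o2 * M(t) * m**n_o2 - k_e2 * S[0]]

t = np.linspace(0, 24 * 3600, 2000)
sol = solve_ivp(dS_dt, (t[0], t[-1]), [0.0], t_eval=t, max_step=60)
S = sol.y[0]
print(f"oligomer peak at ≈ {t[np.argmax(S)] / 3600:.1f} h "
      f"(aggregation half-time = {t_half / 3600:.1f} h)")
```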

Reporting summary

Further information on research design is available in the  Nature Portfolio Reporting Summary linked to this article.

Data availability

The data generated in this study have been deposited in the Zenodo database under the accession code https://doi.org/10.5281/zenodo.11958462 .  Source data are provided in this paper.

Code availability

Analysis code is available at https://github.com/cx220/aS_kinetics 64 .

Chiti, F. & Dobson, C. M. Protein misfolding, amyloid formation, and human disease: a summary of progress over the last decade. Annu. Rev. Biochem. 86 , 27–68 (2017).


Soto, C. Unfolding the role of protein misfolding in neurodegenerative diseases. Nat. Rev. Neurosci. 4 , 49–60 (2003).

Spillantini, M. G. et al. Alpha-synuclein in lewy bodies. Nature 388 , 839–840 (1997).


Flagmeier, P. et al. Mutations associated with familial parkinson’s disease alter the initiation and amplification steps of α -synuclein aggregation. Proc. Natl Acad. Sci. 113 , 10328–10333 (2016).


Bell, R. et al. Effects of n-terminal acetylation on the aggregation of disease-related α -synuclein variants. J. Mol. Biol. 435 , 167825 (2023).

Campioni, S. et al. A causative link between the structure of aberrant protein oligomers and their toxicity. Nat. Chem. Biol. 6 , 140–147 (2010).

Benilova, I., Karran, E. & De Strooper, B. The toxic a β oligomer and alzheimer’s disease: an emperor in need of clothes. Nat. Neurosci. 15 , 349–357 (2012).

Emin, D. et al. Small soluble α -synuclein aggregates are the toxic species in parkinson’s disease. Nat. Commun. 13 , 5512 (2022).

Dear, A. J. et al. Kinetic diversity of amyloid oligomers. Proc. Natl Acad. Sci. 117 , 12087 (2020).

Dear, A. J. et al. Identification of on- and off-pathway oligomers in amyloid fibril formation. Chem. Sci. 11 , 6236–6247 (2020).


Cremades, N. et al. Direct observation of the interconversion of normal and toxic forms of alpha-synuclein. Cell 149 , 1048–1059 (2012).

Breydo, L. & Uversky, V. N. Structural, morphological, and functional diversity of amyloid oligomers. FEBS Lett. 589 , 2640–2648 (2015).

Chen, S. W. et al. Structural characterization of toxic oligomers that are kinetically trapped during α -synuclein fibril formation. Proc. Natl Acad. Sci. 112 , E1994–E2003 (2015).


Alam, P., Bousset, L., Melki, R. & Otzen, D. E. α -synuclein oligomers and fibrils: a spectrum of species, a spectrum of toxicities. J. Neurochemistry 150 , 522–534 (2019).


Kulenkampff, K., Wolf Perez, A.-M., Sormanni, P., Habchi, J. & Vendruscolo, M. Quantifying misfolded protein oligomers as drug targets and biomarkers in alzheimer and parkinson diseases. Nat. Rev. Chem. 5 , 277–294 (2021).

Michaels, T. C. T. et al. Dynamics of oligomer populations formed during the aggregation of alzheimer’s a β 42 peptide. Nat. Chem. 12 , 445–451 (2020).

Tosatto, L. et al. Single-molecule fret studies on alpha-synuclein oligomerization of parkinson’s disease genetically related mutants. Sci. Rep. 5 , 16696 (2015).

Iljina, M. et al. Kinetic model of the aggregation of alpha-synuclein provides insights into prion-like spreading. Proc. Natl Acad. Sci. 113 , E1206–E1215 (2016).

Iljina, M. et al. Nanobodies raised against monomeric α -synuclein inhibit fibril formation and destabilize toxic oligomeric species. BMC Biol. 15 , 57 (2017).

Horrocks, M. H. et al. Fast flow microfluidics and single-molecule fluorescence for the rapid characterization of α -synuclein oligomers. Anal. Chem. 87 , 8818–8826 (2015).

Arter, W. E. et al. Rapid structural, kinetic, and immunochemical analysis of alpha-synuclein oligomers in solution. Nano Lett. 20 , 8163–8169 (2020).

Buell, A. K. et al. Solution conditions determine the relative importance of nucleation and growth processes in α -synuclein aggregation. Proc. Natl Acad. Sci. 111 , 7671–7676 (2014).

Galvagnion, C. et al. Lipid vesicles trigger a-synuclein aggregation by stimulating primary nucleation. Nat. Chem. Biol. 11 , 229 – 234 (2015).

Gaspar, R.et al. Secondary nucleation of monomers on fibril surface dominates α -synuclein aggregation and provides autocatalytic amyloid amplification. Q. Rev. Biophys. 50, e6 (2017).

Peduzzo, A., Linse, S. & Buell, A. K. The properties of α -synuclein secondary nuclei are dominated by the solution conditions rather than the seed fibril strain. ACS Chem. Neurosci. 11 , 909–918 (2020).

Horne, R. I. et al. Secondary processes dominate the quiescent, spontaneous aggregation of α -synuclein at physiological ph with sodium salts. ACS Chem. Neurosci. 14 , 3125–3131 (2023).

Wennerström, H., Vallina Estrada, E., Danielsson, J. & Oliveberg, M. Colloidal stability of the living cell. Proc. Natl Acad. Sci. 117 , 10113–10121 (2020).

Meisl, G. et al. Scaling behaviour and rate-determining steps in filamentous self-assembly. Chem. Sci. 8 , 7087–7097 (2017).

Meisl, G. et al. Uncovering the universality of self-replication in protein aggregation and its link to disease. Sci. Adv. 8 , eabn6831 (2023).

Meisl, G. et al. Molecular mechanisms of protein aggregation from global fitting of kinetic models. Nat. Protoc. 11 , 252–272 (2016).

Cohen, S. I. A. et al. The molecular chaperone brichos breaks the catalytic cycle that generates toxic ab oligomers. Nat. Struct. Mol. Biol. 22 , 207 – 213 (2015).

Arosio, P. et al. Kinetic analysis reveals the diversity of microscopic mechanisms through which molecular chaperones suppress amyloid formation. Nat. Commun. 7 , 10948 (2016).

Adam, L. et al. Specific inhibition of α -synuclein oligomer generation and toxicity by the chaperone domain Bri2 BRICHOS. Prot. Sci. 33 , e5091 (2024).

Krainer, G. et al. Direct digital sensing of protein biomarkers in solution. Nat. Commun. 14 , 653 (2023).

Kayed, R. et al. Fibril specific, conformation dependent antibodies recognize a generic epitope common to amyloid fibrils and fibrillar oligomers that is absent in prefibrillar oligomers. Mol. Neurodegeneration 2 , 18–18 (2007).

Aprile, F. A. et al. Rational design of a conformation-specific antibody for the quantification of a β oligomers. Proc. Natl Acad. Sci. 117 , 13509–13518 (2020).

Kulenkampff, K. et al. An antibody scanning method for the detection of α -synuclein oligomers in the serum of parkinson’s disease patients. Chem. Sci. 13 , 13815–13828 (2022).

Chappard, A. et al. Single-molecule two-color coincidence detection of unlabeled alpha-synuclein aggregates. Angew. Chem. Int. Ed. 62 , e202216771 (2023).

Guerrero-Ferreira, R. et al. Cryo-em structure of alpha-synuclein fibrils. eLife 7 , e36402 (2018).

Frey, L. et al. On the ph-dependence of α -synuclein amyloid polymorphism and the role of secondary nucleation in seed-based amyloid propagation. eLife 12 , RP93562 (2023).

van Steenoven, I. et al. α -synuclein species as potential cerebrospinal fluid biomarkers for dementia with lewy bodies. Mov. Disord. 33 , 1724–1733 (2018).

Lobanova, E. et al. Imaging protein aggregates in the serum and cerebrospinal fluid in parkinson’s disease. Brain 145 , 632–643 (2022).

Frankel, R. et al. Autocatalytic amplification of alzheimer-associated a β 42 peptide aggregation in human cerebrospinal fluid. Commun. Biol. 2 , 365 (2019).

Kumari, P. et al. Structural insights into α -synuclein monomer–fibril interactions. Proc. Natl Acad. Sci. 118 , e2012171118 (2021).

Campioni, S. et al. The presence of an air–water interface affects formation and elongation of α -synuclein fibrils. J. Am. Chem. Soc. 136 , 2866–2875 (2014).

de Oliveira, G. A. P. & Silva, J. L. Alpha-synuclein stepwise aggregation reveals features of an early onset mutation in parkinson’s disease. Commun. Biol. 2 , 374 (2019).

Horvath, I., Kumar, R. & Wittung-Stafshede, P. Macromolecular crowding modulates α -synuclein amyloid fiber growth. Biophysical J. 120 , 3374–3381 (2021).


Ohgita, T., Namba, N., Kono, H., Shimanouchi, T. & Saito, H. Mechanisms of enhanced aggregation and fibril formation of parkinson’s disease-related variants of α -synuclein. Sci. Rep. 12 , 6770 (2022).

Zurlo, E. et al. In situ kinetic measurements of α -synuclein aggregation reveal large population of short-lived oligomers. PLOS ONE 16 , e0245548 (2021).

Lee, C. F., Bird, S., Shaw, M., Jean, L. & Vaux, D. J. Combined effects of agitation, macromolecular crowding, and interfaces on amyloidogenesis *. J. Biol. Chem. 287 , 38006–38019 (2012).

Cohen, S. I. A. et al. Proliferation of amyloid-beta42 aggregates occurs through a secondary nucleation mechanism. Proc. Natl Acad. Sci. 110 , 9758–9763 (2013).

Zhou, J. et al. Effects of sedimentation, microgravity, hydrodynamic mixing and air–water interface on α -synuclein amyloid formation. Chem. Sci. 11 , 3687–3693 (2020).

Grigolato, F., Colombo, C., Ferrari, R., Rezabkova, L. & Arosio, P. Mechanistic origin of the combined effect of surfaces and mechanical agitation on amyloid formation. ACS Nano 11 , 11358–11367 (2017).

Dear, A. J. et al. The catalytic nature of protein aggregation. J. Chem. Phys. 152 , 045101 (2020).

Törnquist, M. et al. Secondary nucleation in amyloid formation. Chem. Commun. 54 , 8667–8684 (2018).

Botsaris, G. D. Secondary Nucleation — A Review , 3–22 (Springer US, Boston, MA, 1976).

Cubillas, P. & Anderson, M. Synthesis Mechanism: Crystal Growth And Nucleation . Vol. 1 (eds. Avelino, C., Stacey, Z., Jiri, C.) Ch.1 (Wiley-VCH Verlag GmbH & Co. KGaA, 2010).

Garside, J. & Davey, R. J. Invited review secondary contact nucleation: kinetics, growth and scale-up. Chem. Eng. Commun. 4 , 393–424 (1980).

Hijaz, B. A. & Volpicelli-Daley, L. A. Initiation and propagation of α -synuclein aggregation in the nervous system. Mol. Neurodegeneration 15 , 19 (2020).

Cohen, S. I. A. et al. Nucleated polymerization with secondary pathways. i. time evolution of the principal moments. J. Chem. Phys. 135 , 065105 (2011).

Cohen, S. I. A., Vendruscolo, M., Dobson, C. M. & Knowles, T. P. J. Nucleated polymerization with secondary pathways. ii. determination of self-consistent solutions to growth processes described by non-linear master equations. J. Chem. Phys. 135 , 065106 (2011).

Cohen, S. I. A., Vendruscolo, M., Dobson, C. M. & Knowles, T. P. J. Nucleated polymerization with secondary pathways. iii. equilibrium behavior and oligomer populations. J. Chem. Phys. 135 , 065107 (2011).

Saar, K. L. et al. On-chip label-free protein analysis with downstream electrodes for direct removal of electrolysis products. Lab a Chip 18 , 162–170 (2018).

Xu, C. α -synuclein oligomers form by secondary nucleation as_kinetics https://doi.org/10.5281/zenodo.12508748 (2024).


Acknowledgements

We thank Dr Manuela R Zimmermann and Minghao Zhang for helpful discussions on data analysis. We additionally thank Dr Heather Greer for her help with the acquisition of TEM images and the EPSRC Underpinning Multi-User Equipment Call (EP/P030467/1) for funding the TEM. We would like to acknowledge funding from the European Research Council under the European Union’s Horizon 2020 research and innovation program through the ERC grant DiProPhys (agreement ID 101001615), and the following sources: Herchel Smith Research Studentship (C.K.X.), Herchel Smith Fellowship (G.K.), Wolfson College Junior Research Fellowship (G.K.), Marie Skłodowska-Curie grant MicroSPARK (agreement no. 841466; G.K.), Swedish Research Council (VR 2015-00143; S.L.), The Addenbrooke’s Charity Trust (M.G.S., G.V.), Parkinson’s UK (M.G.S.).

Author information

Authors and affiliations.

Centre for Misfolding Diseases, Yusuf Hamied Department of Chemistry, University of Cambridge, Cambridge, UK

Catherine K. Xu, Georg Meisl, Ewa A. Andrzejewska, Georg Krainer, Alexander J. Dear, Marta Castellana-Cruz, Soma Turi, Irina A. Edu, Raphaël P. B. Jacquat, William E. Arter, Michele Vendruscolo & Tuomas P. J. Knowles

Max Planck Institute for the Science of Light, Erlangen, Germany

Catherine K. Xu

Institute of Molecular Biosciences (IMB), University of Graz, Graz, Austria

Georg Krainer

Biochemistry and Structural Biology, Lund University, Lund, Sweden

Alexander J. Dear & Sara Linse

Integrated Research Center (PRAAB), Campus Biomedico University of Rome, Rome, Italy

Giorgio Vivacqua

Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK

Giorgio Vivacqua & Maria Grazia Spillantini

Cavendish Laboratory, University of Cambridge, Cambridge, UK

Tuomas P. J. Knowles


Contributions

C.K.X., G.K., W.E.A., M.V., S.L., and T.P.J.K. conceived the study. C.K.X. and M.C.C. developed the α -synuclein kinetics assay. C.K.X., E.A.A., and I.A.E. performed the kinetics experiments. C.K.X., E.A.A., and G.K. acquired μ FFE data. G.V. and M.G.S. performed RT-QuIC experiments. C.K.X., G.M., and S.T. developed a theory for data analysis. C.K.X., G.M., A.J.D., R.P.B.J., and W.E.A. contributed software. C.K.X., G.M., and A.J.D. analyzed data. C.K.X. and G.M. wrote the manuscript with input from all authors.

Corresponding author

Correspondence to Tuomas P. J. Knowles .

Ethics declarations

Competing interests.

At the time of initial submission, Georg Meisl and Alexander J Dear were employees of Wavebreak Therapeutics (formerly Wren Therapeutics). Michele Vendruscolo, Sara Linse, and Tuomas PJ Knowles are co-founders of Wavebreak Therapeutics (formerly Wren Therapeutics). The remaining Authors declare no competing interests.

Peer review

Peer review information.

Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. A peer review file is available.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Xu, C.K., Meisl, G., Andrzejewska, E.A. et al. α -Synuclein oligomers form by secondary nucleation. Nat Commun 15 , 7083 (2024). https://doi.org/10.1038/s41467-024-50692-4


Received : 27 August 2023

Accepted : 19 July 2024

Published : 17 August 2024

DOI : https://doi.org/10.1038/s41467-024-50692-4


  15. PDF EXPERIMENTAL ERRORS

    EXPERIMENTAL ERRORS 1. PREFACE ... If the errors are truly random and there are a large number of measurements and the ... it will always equal zero as prescribed by the definition of x. One might get around this problem by taking the absolute value of the deviations. This measure of scatter is known as the mean absolute

  16. PDF An Introduction to Experimental Uncertainties and Error Analysis

    Lynn 4 Uncertainties and Error Analysis What Is an Error Bar? In a laboratory setting, or in any original, quantitative research, we make our

  17. Types of Error

    There are four types of systematic error: observational, instrumental, environmental, and theoretical. Observational errors occur when you make an incorrect observation. For example, you might misread an instrument. Instrumental errors happen when an instrument gives the wrong reading.

  18. PDF ERROR ANALYSIS (UNCERTAINTY ANALYSIS)

    4 USES OF UNCERTAINTY ANALYSIS (I) • Assess experimental procedure including identification of potential difficulties - Definition of necessary steps - Gaps • Advise what procedures need to be put in place for measurement • Identify instruments and procedures that control accuracy and precision - Usually one, or at most a small number, out of the large set of

  19. Experimental Errors and Error Analysis

    Experimental Errors and ... For simple combinations of data with random errors, the correct procedure can be summarized in three rules. x, y, ... which is also small. Calibration standards are, almost by definition, too delicate and/or expensive to use for direct measurement.

  20. Experimental Error

    ©2006 Six Sigma eLearning, Inc. 1.800.297.8230 Six Sigma eLearning, Inc. 1.800.297.8230

  21. α -Synuclein oligomers form by secondary nucleation

    α-Synuclein aggregation occurs via secondary pathways. Although α-synuclein oligomers are implicated as toxic species in PD, the molecular mechanisms by which both they and high molecular weight ...