Measurement Good Practice Guide No. 11 (Issue 2). A Beginner’s Guide to Uncertainty of Measurement. Stephanie Bell. Centre for Basic, Thermal and Length Metrology, National Physical Laboratory, UK
There are some processes that might seem to be measurements, but are not. For example, a test or an analysis is not in itself a measurement
However, measurements may be part of the process of a test
Uncertainty of measurement is the doubt about the result of a measurement, due to the errors described below
Every time we repeat a measurement with a sensitive instrument, we obtain slightly different results
Systematic error, which always occurs with the same value when we use the instrument in the same way under the same conditions
Random error, which may vary from one observation to another
Do not confuse error and uncertainty
Error is the difference between the measured and the “true” value
Uncertainty is a quantification of the doubt about the result
Whenever possible we try to correct for any known errors
But any error whose value we do not know is a source of uncertainty
Flaws in the measurement can come from:
The measuring instrument – instruments can suffer from errors including wear, drift, poor readability, noise, etc.
The item being measured – which may not be stable (measure the size of an ice cube in a warm room)
The measurement process – the measurement itself may be difficult to make. Measuring the weight of small animals presents particular difficulties
‘Imported’ uncertainties – calibration of your instrument has an uncertainty
Operator skill – one person may be better than another at reading fine detail by eye. The use of a stopwatch depends on the reaction time of the operator
Sampling issues – the measurements you make must be representative. If you are choosing samples from a production line, don’t always take the first ten made on a Monday morning
The environment – temperature, air pressure, humidity and many other conditions can affect the measuring instrument or the item being measured
A reading is a single observation of the instrument
A measurement may require several readings
For example, to measure a length, we take two readings and calculate the difference
The uncertainties of the readings accumulate in the measurement
In this context, the following ideas have been defined
BS ISO 5725-1: “Accuracy (trueness and precision) of measurement methods and results - Part 1: General principles and definitions.”, p.1 (1994)
A result can have high trueness but low precision, or high precision but low trueness
In some older material, “accuracy” is used in place of trueness
Other authors speak of “bias”, which measures the lack of trueness
These words are still common in science and technology
Be aware of this discrepancy
For a single reading, the uncertainty depends at least on the instrument resolution
For example, my water heater shows temperature with 5°C resolution: 50, 55, 60,…
If it shows 55°C, the real temperature is somewhere between 52.5°C and 57.5°C
We write 55°C ± 2.5°C
For a single reading, \(Δx\) is half of the resolution
Last class we saw the easy rule for error propagation
\[ \begin{aligned} (x ± Δx) + (y ± Δy) & = (x+y) ± (Δx+Δy)\\ (x ± Δx) - (y ± Δy) & = (x-y) ± (Δx+Δy)\\ (x ± Δx\%) \times (y ± Δy\%)& =xy ± (Δx\% + Δy\%)\\ (x ± Δx\%) ÷ (y ± Δy\%)& =x/y ± (Δx\% + Δy\%) \end{aligned} \]
Here \(Δx\%\) represents the relative uncertainty, that is \(Δx/x\)
We use absolute uncertainty for + and -, and relative uncertainty for ⨉ and ÷
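As a quick illustration (a sketch, not from the guide; the function names are my own), the worst-case rules can be written directly in Python:

```python
def add_worst_case(x, dx, y, dy):
    """Sum of two quantities: absolute uncertainties add."""
    return x + y, dx + dy

def multiply_worst_case(x, dx, y, dy):
    """Product of two quantities: relative uncertainties add."""
    z = x * y
    relative = dx / abs(x) + dy / abs(y)   # Δx% + Δy%
    return z, abs(z) * relative

# Example: (10 ± 0.5) × (4 ± 0.2) = 40 ± 4  (5% + 5% = 10%)
print(multiply_worst_case(10, 0.5, 4, 0.2))
```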
It is easy to get confused with relative errors
Instead of \((x ± Δx\%)\) it is better to write \[x(1± Δx/x)\]
Mathematical notation was invented to make things clear, not confusing
Let’s verify the formulas of the previous slide
Remember that we assume that \(Δx/x\) is small
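For example, for the multiplication rule, taking both errors in the worst (same-sign) direction:
\[
\begin{aligned}
x\left(1 ± \frac{Δx}{x}\right) \times y\left(1 ± \frac{Δy}{y}\right)
&= xy\left(1 ± \frac{Δx}{x} ± \frac{Δy}{y} + \frac{Δx}{x}\frac{Δy}{y}\right)\\
&≈ xy\left(1 ± \left(\frac{Δx}{x} + \frac{Δy}{y}\right)\right)
\end{aligned}
\]
The cross term \(\frac{Δx}{x}\frac{Δy}{y}\) is second order, so it can be dropped when both relative uncertainties are small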
Assuming that the errors are small compared to the main value, we can find the error for any “reasonable” function
For any smooth function \(f,\) we have \[f(x±Δx) = f(x) ± \frac{df}{dx}(x)\cdot Δx + \frac{d^2f}{dx^2}(x+\varepsilon)\cdot \frac{Δx^2}{2}\] When \(Δx\) is small, we can ignore the last part, so
If \(f\) is smooth, there is a value \(c\) between \(a\) and \(b\) such that \[\frac{f(b)-f(a)}{b-a}=\frac{df}{dx}(c)\]
\[(x±Δx)^2\] \[\ln(x±Δx)\] \[\log_{10}(x±Δx)\] \[\exp(x±Δx)\]
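Applying the first-order rule above to these examples gives
\[
\begin{aligned}
(x±Δx)^2 &≈ x^2 ± 2x\,Δx\\
\ln(x±Δx) &≈ \ln(x) ± \frac{Δx}{x}\\
\log_{10}(x±Δx) &≈ \log_{10}(x) ± \frac{Δx}{x\ln 10}\\
\exp(x±Δx) &≈ \exp(x) ± \exp(x)\,Δx
\end{aligned}
\]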
The curve depends on the initial DNA concentration
We care only about the exponential phase
The signal doubles on every cycle
\[X(C) = X(0)⋅2^C\]
So we can find the initial concentration
\[X(0) = X(C)⋅2^{-C}\]
The DNA concentration crosses the 50% level at 13.73 cycles
It crosses the 5% level at 10 cycles
Start with a large concentration of template, and dilute it several times. Measure the CT of each dilution
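As a sketch of how this propagates (the function name and the value of \(ΔC\) are illustrative assumptions, and the uncertainty of \(X(C)\) itself is ignored here), note that \(\frac{d}{dC}2^{-C} = -\ln(2)\,2^{-C}\), so the relative uncertainty of \(X(0)\) is \(\ln(2)\,ΔC\):

```python
import math

def initial_concentration(x_c, c, dc):
    """Estimate X(0) = X(C) * 2**(-C) and propagate the uncertainty of C.

    x_c : signal at the threshold (treated as exact here)
    c   : measured cycle threshold
    dc  : uncertainty of the cycle threshold (assumed value)
    """
    x0 = x_c * 2 ** (-c)
    relative = math.log(2) * dc   # |d ln X(0) / dC| * ΔC
    return x0, x0 * relative

# Illustrative numbers: 50% threshold crossed at C = 13.73 ± 0.1 cycles
x0, dx0 = initial_concentration(0.5, 13.73, 0.1)
print(f"X(0) = {x0:.2e} ± {dx0:.2e}")
```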
These rules are “pessimistic”. They give the worst case
In general the “errors” can be positive or negative, and they tend to compensate
(This is valid only if the errors are independent)
In this case we can analyze the uncertainty using the rules of probability
In this case, the value \(Δx\) will represent the standard deviation of the measurement
The standard deviation is the square root of the variance
Then, we combine variances using the rule
“The variance of a sum is the sum of the variances”
(Again, this is valid only if the errors are independent)
\[ \begin{aligned} (x ± Δx) + (y ± Δy) & = (x+y) ± \sqrt{Δx^2+Δy^2}\\ (x ± Δx) - (y ± Δy) & = (x-y) ± \sqrt{Δx^2+Δy^2}\\ (x ± Δx\%) \times (y ± Δy\%)& =x y ± \sqrt{Δx\%^2+Δy\%^2}\\ \frac{x ± Δx\%}{y ± Δy\%} & =\frac{x}{y} ± \sqrt{Δx\%^2+Δy\%^2} \end{aligned} \]
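A minimal numerical comparison of the two sets of rules (a sketch, with arbitrary uncertainties):

```python
import math

dx, dy = 0.5, 0.2   # absolute uncertainties of x and y

worst_case = dx + dy                     # pessimistic rule
quadrature = math.sqrt(dx**2 + dy**2)    # probabilistic rule (independent errors)

print(worst_case)    # 0.7
print(quadrature)    # about 0.54
```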
When using probabilistic rules we need to multiply the standard deviation by a constant k, associated with the confidence level
In most cases (but not all), the uncertainty follows a Normal distribution. In that case, k = 2 gives a confidence level of approximately 95%
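This coverage can be checked numerically (a sketch assuming SciPy is available):

```python
from scipy.stats import norm

for k in (1, 2, 3):
    coverage = norm.cdf(k) - norm.cdf(-k)   # probability of falling within ±k standard deviations
    print(f"k = {k}: {coverage:.1%}")
# k = 1: 68.3%, k = 2: 95.4%, k = 3: 99.7%
```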
Previously we considered one kind of uncertainty: the instrument resolution
This kind of uncertainty is evaluated once, from what we know about the instrument, not from the statistics of repeated readings (the guide calls these Type B and Type A evaluations, respectively)
In most measurement situations, uncertainty evaluations of both types are needed
Measurement Good Practice Guide No. 11 (Issue 2). A Beginner’s Guide to Uncertainty of Measurement. Stephanie Bell. Centre for Basic, Thermal and Length Metrology, National Physical Laboratory, UK
Standard deviation of rectangular distribution is \[u=\frac{a}{\sqrt{3}}\] when the width of the rectangle is \(2a\)
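Applied to the water heater example above: the resolution is 5°C, so \(a = 2.5\)°C and \[u = \frac{2.5\,°\text{C}}{\sqrt{3}} ≈ 1.4\,°\text{C}\]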
Standard deviation of noise can be estimated from the data: \[s=\sqrt{\frac{1}{n-1}\sum_i (x_i - \bar x)^2}\]
If the readings are random, their average is also random
It has the same mean but a smaller variance
The standard error of the average of \(n\) readings is \[\frac{s}{\sqrt{n}}\]
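A minimal numerical sketch (the repeated readings are made-up values for illustration):

```python
import math

readings = [55.2, 54.8, 55.1, 54.9, 55.3, 54.7]   # made-up repeated readings

n = len(readings)
mean = sum(readings) / n
s = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))   # sample standard deviation
sem = s / math.sqrt(n)                                            # standard error of the mean

print(f"mean = {mean:.2f}, s = {s:.2f}, standard error = {sem:.2f}")
```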