Quantitative chromatography produces data whose quality can only be judged from repetitions.
Only repeated measurements provide quality numbers for quantitative data.
 >>>  with one exception, which we found only in 2007 - 2009 (see µ-PLC)  <<<

For all such data one can give a quality number:
The mean X is the first possible quality number of a series of N repeated measurements x1, x2, x3, ... xN.
The repeatability standard deviation “s” is the quality number for the mean X.
Up to now no quality number is known for the repeatability standard deviation itself besides one:
the “sf4” value - see under “sf4”.
Here is a very simple example to make these statements understandable.
Assume two laboratories each measured the same quantity four times with differing methods - say, the concentration of ethanol in wine - in strictly consecutive repetitions:
Lab 1 found: x1 = 10.9 vol%; x2 = 11.2 vol%; x3 = 10.2 vol%; x4 = 10.6 vol% ethanol in a wine sample;
Lab 2 found: x1 = 10.709 vol%; x2 = 10.72 vol%; x3 = 10.73 vol%; x4 = 10.74 vol%

      Data quality numbers:
      X found in Lab 1: 10.725;   s found in Lab 1: 0.427  at N = 4
      X found in Lab 2: 10.725;   s found in Lab 2: 0.0133 at N = 4
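These quality numbers can be reproduced with a few lines of Python using only the standard library (the variable names are ours; `stdev` uses the N-1 denominator of the repeatability standard deviation):

```python
from statistics import mean, stdev

lab1 = [10.9, 11.2, 10.2, 10.6]       # vol% ethanol, N = 4
lab2 = [10.709, 10.72, 10.73, 10.74]  # vol% ethanol, N = 4

for name, data in (("Lab 1", lab1), ("Lab 2", lab2)):
    x = mean(data)   # mean X
    s = stdev(data)  # repeatability standard deviation "s" (N - 1 denominator)
    print(f"{name}: X = {x:.3f}, s = {s:.4f}, N = {len(data)}")
```

Both labs arrive at the same mean, but their “s” values differ by more than a factor of thirty.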

Now, how good are the methods, instruments and procedures used in Lab 1 versus Lab 2, although both found the same result for ethanol in this wine? This is easily seen with the “sf4” method. In addition, the “sf4” procedure will show whether there are systematic errors in the procedure, in the instrument used, or simply in the way the samples were taken. One needs, however, more than N = 4 repetitions as the minimum of analytical effort to be invested: the minimum is N = 7; optimal for the “sf4” determination is N = 20. If the analytical procedure works fast, there is no problem in running N = 20 repetitions and calculating 20 - 3 = 17 “sf4” values.
By the way: Lab 2 above did not do its best. It has an excellent instrument, but there is a serious sampling problem; if that were solved, a triple measurement would be MUCH better than a slow, semi-precise but regulated and certified (expensive) official method. Can you see this in the given data? Probably not. But with the help of the “sf4” procedure one can see it clearly.

The worst situation in quantitative analysis is no repetition at all, which as a critical result means: nothing is known about the quality of the analytical result.
The optimal situation is a result based on N = four repetitions. Why this is true will be shown later.

    The structure of a final quantitative analytical result MUST be given in the standard format 
                                          X,  +- s,  N  and this is optimal for N = 4

    NOTE: there are other critical data aspects. Outliers (“runaways”) may exist but must be excluded from all quality management of quantitative data. And if the qualitative data are wrong, everything is wrong: if “FRU” is not “FRU” but “FRA”, we had better forget any corresponding quantitative result for the substance “FRU”. THIS type of error is chromatography-related and unfortunately still widely distributed in year 100 after the invention of chromatography.

    And: chromatography is everywhere. It may take substances away, even completely, or introduce substances into a sample which never existed in it. Both will certainly falsify the analytical result.

    Chromatography acts already at the moment samples are taken, although they have not yet seen any chromatograph. Chromatography happens on any solid or liquid surface. This “environmental” or “natural everywhere” chromatography can drastically alter an otherwise correctly taken sample or product flow. Thus “natural everywhere” chromatography is often the source of systematic errors in qualitative as well as quantitative analytical chromatography.

Note, as already mentioned in other parts of this site: older spreadsheet programs - Microsoft Excel 2003, Apple AppleWorks 6 and OpenOffice.org 1.1 under LINUX - calculate drastically wrong mean and standard deviation values when delimiters other than the COLON are used. Unfortunately, the possible use of semicolon, minus, double point and, in some program versions, the single point in commands like “calculate the MEAN (A6;A19)” causes arithmetically FALSE results with relative errors easily ranging from -20 to +80 %. There is no warning when wrong delimiters like  -   .   ;   ..  are used. Only the COLON as delimiter works correctly.

By natural law, quantitative data have errors. The question is only: how large are they?

All measured data have errors. Only counting is (or should be) free from errors: the number four is exactly and precisely four. However, if the content of ethanol in beer must be measured quantitatively and the analyst finds four volume-%, it is very probably 4.05 volume-% or 3.92 volume-%. At least it is somewhat uncertain. What matters is to find out HOW uncertain the found value is. By natural law, measurements always and under all conditions show a non-systematic statistical scatter around a mean value. As mentioned, the repeatability standard deviation “s” qualifies the found value (as far as it is a MEAN), and the Standard CERTAINTY stC tells quantitatively how certain the found value is. The size of stC depends strongly on the number N of repetitions done and correlates with “s”:

                                           stC = s * t(99, N-1) / sqrt (N)

The values [t(99, N-1) / sqrt(N)] can be combined into one “stC factor” fN, which makes the formula for the Standard Certainty very simple: stC = s * fN

    The table of t(99, N-1) values and of the factor fN shows the utmost importance of a very small “s” value - and how important N, the number of repetitions done, is:

    N     t(99, N-1)    fN

    1      unlimited    unlimited
    2      63.66         45.01       
    3       9.92            5.73
    4       5.84            2.92
    5       4.60            2.06

    10     3.26            1.03
    30     2.75            0.502
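The factor fN in the table is simply the tabulated t-value divided by sqrt(N). A short sketch, taking the t(99, N-1) values from the table above:

```python
from math import sqrt

# t(99, N-1): two-sided 99 % Student t-values, copied from the table above
t99 = {2: 63.66, 3: 9.92, 4: 5.84, 5: 4.60, 10: 3.26, 30: 2.75}

for n, t in t99.items():
    fN = t / sqrt(n)  # combined "stC factor", so that stC = s * fN
    print(f"N = {n:2d}: fN = {fN:.2f}")
```

For N = 1 no t-value exists (the degrees of freedom are zero), which is why the table shows “unlimited”: a single measurement carries no certainty information at all.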

With N = 1 we have no quality information at all. With N = 4 the Standard Certainty equals about 3 times s, and after ten repetitions it is still only about as small as s. With s = 2 % we have, at N = 4, a Standard Certainty of about
6 %; but with an analytical method whose repeatability standard deviation is 0.02 % and N = four repetitions, we get a Standard Certainty of about 0.06 %. Thus “s” cannot be small enough.
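The arithmetic above can be checked in a few lines. With fN = t(99, 3) / sqrt(4) ≈ 2.92 from the table, the Standard Certainty at N = 4 is roughly three times s:

```python
from math import sqrt

f4 = 5.84 / sqrt(4)  # stC factor fN for N = 4, from t(99, 3) = 5.84

for s in (2.0, 0.02):        # repeatability standard deviations, in %
    stC = s * f4             # Standard Certainty stC = s * fN
    print(f"s = {s} %  ->  stC = {stC:.2f} %")
```

With s = 2 % this gives stC ≈ 5.8 % (roughly the 6 % quoted above); with s = 0.02 % it gives stC ≈ 0.06 %.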

Some laboratories allow only a single measurement (N = 1: no certainty at all); others stick to double measurements (N = 2: 45 times s as Standard Certainty), which is very expensive if one has to work legally or to avoid problems when limits based on law are exceeded. Five and more repetitions cost more time and material than 4 repetitions but bring only a little more certainty. Only for method, instrument or materials testing are 20 to 30 repetitions done - and still only one mean and one standard deviation result from them, to check procedure quality or instrument qualification, as long as we accept “standard regulations” and see nothing with respect to systematic errors. There is simply nothing available to provide information about possible systematic data errors.

We found a way out. We take from N repetitions N - 3 special standard deviation values as additional quality numbers for the analysis procedure. This way we have time-related quality values: systematic errors may depend on time and may grow or shrink during the progress of the measurements. This error checking is done with the “sf4” data - see under “sf4”.
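This section does not spell out how the “sf4” values are computed, but the counts given (N repetitions yield N - 3 values, e.g. 20 repetitions yield 17) are consistent with a sliding standard deviation over windows of four consecutive measurements. A sketch under that assumption (the function name and the reading of “sf4” are ours, not an official definition):

```python
from statistics import stdev

def sf4_values(data):
    """Standard deviation of each window of 4 consecutive measurements.

    N measurements yield N - 3 values; a trend in these values over
    measurement time can reveal time-dependent systematic errors.
    NOTE: this is our inference of "sf4" from the counts in the text.
    """
    return [stdev(data[i:i + 4]) for i in range(len(data) - 3)]

# 7 repetitions (the stated minimum) give 7 - 3 = 4 sf4 values
readings = [10.70, 10.72, 10.71, 10.74, 10.78, 10.83, 10.90]
print(sf4_values(readings))
```

Plotting these values against the window index - as the text recommends doing graphically over N - makes a drifting or growing “s” immediately visible, where a single overall standard deviation would hide it.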

The Standard Certainty “stC” = s * t(99, N-1) / sqrt(N)                                         [1]

is a clear quantitative basis for legal contracts stating which amount or concentration of an important substance or material has to be delivered in order to fulfill a delivery contract legally.

Or which amount or concentration of substances of environmental concern must be kept how far below the legal or contracted level in order to avoid conflicts that are normally MUCH more expensive than qualified analytical work.

What is fundamental for the analyst: N MUST be larger than 1.
(Note the first exception, which we found with a new analytical concept using “integrated chromatography”: when N = 1 and two samples are compared and differ, no uncertainty exists - see µ-PLC.)
At best N should be 4.
More work does not help much.
The key to success is the standard deviation value “s”. As repetition is so important, we repeat:
“s” cannot be small enough. Belief in its power is underdeveloped. But only based on “s” can the analytical development be optimized. The “s”-controlling quality value “sf4” should always be used, and the sf4 data should be checked graphically over N - otherwise the “AHA” effect may be missing. These data concern only the quantitative analytical values; all is useless if the qualitative data are false.

NOTE: qualitative systematic errors MUST be excluded totally, and that means: in order to check for qualitative errors we MUST use more than one chromatographic system - at best multidimensional separation, combinations of differing chromatographic methods (µ-PLC behind HPLC, GC on-line with HPTLC, GC of HPLC fractions) - and more than one detector, or at least more than one specificity channel.
It pays off.

[Home] [We can help] [Systematic C-Errors] [Statistics] [Error Detector "sf4"] [Sampling/Calibration] [Qual.Error GC] [Quant.Error GC] [Qual.Error HPLC] [Quant.Error HPLC] [Qual.Error PLC] [Quant.Error PLC] [Integration] [Chrom. Combination] [µPLC Micro Planar LC] [Altern.Chrom.Theory] [Contact IfC] [About the Author]