are unfortunately often quite large. There are several reasons; some of them are discussed here. One of them is REGULATION and missing INFORMATION, but this varies widely around the world and cannot be discussed here.
NOTE: Regulated GC methods for quantitative analyses normally give no hint at all on how to check for systematic quantity errors.
The main error sources are:
1. Incorrect sample taking and sample introduction - see Sampling / Calibration.
2. Incorrect integration - especially a false data density, false integration start and stop caused by poor peak detection, and false base line positioning. Quite a few integration programs - even the latest commercial ones - contain further error sources, such as missing signal size analysis and missing or poor automatic switching of the data density with growing retention time; see under “integration”.
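A minimal numerical sketch of the data density problem in point 2, with an invented Gaussian test peak: if the sampling interval is too coarse relative to the peak width at half height, the sampled apex falls visibly below the true peak height. All numbers are illustrative assumptions, not values from any real instrument.

```python
import math

def gaussian(t, t0=100.0, sigma=1.0, h=1.0):
    """Model detector signal: a single Gaussian peak, apex height h at t0."""
    return h * math.exp(-0.5 * ((t - t0) / sigma) ** 2)

def sampled_height(dt, offset=0.43):
    """Maximum of samples taken every dt seconds, starting off-apex."""
    ts = [offset + i * dt for i in range(int(200 / dt))]
    return max(gaussian(t) for t in ts)

# Peak width at half height here is about 2.35 * sigma = 2.35 s.
fine   = sampled_height(0.05)   # ~50 data points across the peak
coarse = sampled_height(1.5)    # under 2 points per half-height width
print(f"fine sampling:   measured height = {fine:.4f}")
print(f"coarse sampling: measured height = {coarse:.4f}")  # well below true apex 1.0
```

The same undersampling also shifts the apparent retention time and distorts the measured half-height width, which is why automatic switching of the data density with retention time matters.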
3. Missing quantity control by statistics - see Statistics.
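As a sketch of the statistical control meant in point 3: repeated quantitative results for one compound can be condensed into a mean and a relative standard deviation, and a sudden jump of the RSD between calibration runs and analysis runs is the alarm signal. The run values below are invented for illustration.

```python
import statistics

# Hypothetical repeated quantitative results (weight-%) for one compound.
runs = [4.98, 5.02, 5.01, 4.97, 5.03, 5.00]

mean = statistics.mean(runs)
sd = statistics.stdev(runs)      # sample standard deviation
rsd = 100.0 * sd / mean          # relative standard deviation in %

print(f"mean = {mean:.3f} wt-%, RSD = {rsd:.2f} %")
```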
4. Missing quality control of sampling tools. If a syringe or a sampling valve carries strongly sorbing substances on its inner surface, the adsorbed material can easily be displaced by the next sample of differing chemical composition and thus falsifies this next sample. This is especially a problem in court cases about the source of impurities or about exceeded limits for such substances.
NOTE: If we use the same syringe or sampling valve for analysis as we used some runs before for calibration, we may find serious sample falsification due to displacement chromatography, which acts always and everywhere. Example: chemists / analysts who know the high polarity of ethylene glycols well will very probably clean a syringe or a sampling valve with water in order to remove the water soluble glycols. But their sorption strength on glass is so high that water fails to give a complete and immediate clean-up. If the next analytical sample consists of water plus ethanol plus organic acids, this mix sorbs more strongly than the ethylene glycols and displaces the still sorbed glycols. If in trace analyses the correct glycol content counts, well reproducible data falsification of several hundred relative percent may happen, falsifying court case decisions as well. Chromatography acts everywhere, not only in chromatography instruments.
5. Differing substances have differing specific detector sensitivities. Thus “area-% data” differ in many cases dramatically from weight-% values. This is even true for data based on the quantitatively wide ranging flame ionization detector and a sample consisting exclusively of hydrocarbons. Specific calibration is mandatory - see under Calibration. The same holds for the heat conductivity detector, although at least in gas analysis it behaves quite “democratically”, that is, it produces at least quite correct mole-% data without the use of substance specific correction factors. But its linearity is poor.
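A small sketch of why area-% and weight-% differ, under assumed conditions: raw areas are multiplied by substance-specific response factors before normalisation. The compounds, areas and factor values below are invented for illustration; real factors come from specific calibration.

```python
# Convert raw peak areas to weight-% using substance-specific response
# factors. Areas and factors are hypothetical illustration values.
areas = {"benzene": 52000.0, "toluene": 31000.0, "o-xylene": 17000.0}
rf = {"benzene": 1.00, "toluene": 1.05, "o-xylene": 1.10}

total_area = sum(areas.values())
area_pct = {s: 100.0 * a / total_area for s, a in areas.items()}

corrected = {s: areas[s] * rf[s] for s in areas}
total_corr = sum(corrected.values())
weight_pct = {s: 100.0 * corrected[s] / total_corr for s in corrected}

for s in areas:
    print(f"{s:8s}  area-% {area_pct[s]:6.2f}   weight-% {weight_pct[s]:6.2f}")
```

Even with factors this close to 1.0 the two percentage scales already disagree; with detectors less uniform than the FID the disagreement grows dramatically.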
6. Each detector has a limited quantitative working range, which at best remains under control when the peak height is measured in standard units like volt or ampere. Thus the (correctly taken) peak height acts as an alarm value, telling the chromatographer that there is no longer any way to correct the false data except by the combination of two quantitative methods.
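The peak-height alarm of point 6 can be sketched as a simple range check on the height in absolute units. The limits below are assumed illustration values, not the working range of any specific detector.

```python
# Flag peaks whose apex height (here in volt) leaves the detector's
# quantitative working range. Limits are assumed for illustration.
V_MIN, V_MAX = 1e-5, 1.0

def height_alarm(height_v):
    """Return None if inside the working range, otherwise an alarm text."""
    if height_v > V_MAX:
        return "OVERLOAD: outside working range, data cannot be corrected"
    if height_v < V_MIN:
        return "UNDERRANGE: below quantitation limit"
    return None

for h in (0.35, 2.1, 3e-6):
    print(f"{h:g} V -> {height_alarm(h) or 'ok'}")
```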
7. Each column has a limited working range regarding the nature of the substances: their molecular weight, their polarity, their temperature stability. As the majority of all quantitative GC analyses are done by temperature programming, we have working range limits for the low starting temperature, the upper end of the temperature program and the heating rate. For isothermal GC the selection of the column temperature is limited by working ranges as well. This is so important that this site shows a series of figures about the temperature working range of a GC column / capillary and how to get the values.
8. All quantity values are certainly false if the correct identification of one compound in the sample mix failed.
9. The quantity data may be wrong by thousands of percent if one main compound of the sample could not be quantified. For this we know several reasons: the main compound is not detected because the detector is specifically “blind” for this substance (example: water and the flame ionization detector); the main compound is not eluted; the chromatogram has been finished too early; the backflush (if used) remains blind; or the highest concentration of the sample (given in gram per second for an FID, or in gram per ml of carrier gas for a HCD) is outside the detector working range (see point 6 above).
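A worked illustration of point 9, with invented numbers: if a main compound is invisible to the detector, 100 %-normalisation inflates every remaining result by the same large factor.

```python
# Assumed true composition of a hypothetical sample, in weight-%.
true_wt = {"water": 80.0, "ethanol": 15.0, "acetone": 5.0}

# An FID is blind to water, so only ethanol and acetone give peaks;
# normalising the visible peaks to 100 % inflates both results.
seen = {s: w for s, w in true_wt.items() if s != "water"}
norm = {s: 100.0 * w / sum(seen.values()) for s, w in seen.items()}

for s, w in norm.items():
    err = 100.0 * (w - true_wt[s]) / true_wt[s]
    print(f"{s}: reported {w:.1f} %, true {true_wt[s]:.1f} %  ({err:+.0f} % rel.)")
```

Here every visible compound is reported four times too high, a relative error of +400 percent from a single undetected main compound.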
10. Data may be wrong because the chromatographer uses the equipment only 8 hours per day, switches off energy and gases, and restarts the equipment without the “early morning test”. The latter can simply be a well selected quantitative test mixture, injected together with a non sorbed but detectable inert gas, for example methane. The quantitative test values must correlate with the late evening values, which means: it is a good idea to bracket the whole daily working period between two such test run values.
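A sketch of the morning/evening bracketing in point 10, with invented test-mixture values and an assumed tolerance: each compound's drift between the two test runs is checked, and any value outside the tolerance raises an alarm for the day in between.

```python
# Compare the "early morning test" with the previous late-evening run.
# Tolerance and run values are assumed illustration numbers.
TOLERANCE_PCT = 2.0   # maximum accepted relative drift per compound

evening = {"methane": 1000.0, "n-octane": 845.0, "n-decane": 812.0}
morning = {"methane": 1003.0, "n-octane": 851.0, "n-decane": 779.0}

for s in evening:
    drift = 100.0 * (morning[s] - evening[s]) / evening[s]
    flag = "ALARM" if abs(drift) > TOLERANCE_PCT else "ok"
    print(f"{s:9s} drift {drift:+6.2f} %  {flag}")
```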
11. A critical source of systematic quantitative errors is the position of separation lines - lines which theoretically separate one peak from the next. The main reasons for wrong separation lines are an overloaded chromatogram, poor separation, the total overlap of two or more peaks which is visible only through a too large peak width in half height, or long tailings of a sample solvent peak. If substances to be quantified “sit” on such a tailing, their quantity value can be quite false. See base line and separation line problems - click “here”.
12. The mobile phase flow speed may be wrongly adjusted because of the use of some classical theoretical rules which are far away from practice - click “here”.
According to our results and those of our thousands of course colleagues, it is very helpful to use “reduced raw data” in STANDARD format, and corresponding graphics, to easily check for possible systematic quantitative errors: the chromatogram is run and then integrated. The integrated values are stored peak by peak as an EXPORT data file. The just mentioned reduced raw data consist of:
a) the retention time tms in seconds.
b) the peak width at half height b05 in seconds.
NOTE: because it is so important and contradicts a majority belief, it is repeated: the real peak width at half height is taken by measurement, NOT by the wrong assumption that the peak has Gaussian shape and the peak width value can be calculated. It can NOT. Together with Europe's largest mathematical center we checked many peak shape models; none of them followed practice.
c) the peak height in absolute physical units, taken either as ampere or as volt.
d) qualitative data as retention index values; if these are not yet known, zeros are written into the EXPORT reduced raw data file instead.
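The four fields a) to d) can be sketched as one record per peak. The field layout below is an assumption for illustration, not the actual STANDARD format; the half-height width is MEASURED from the data points themselves, as the NOTE demands, never calculated from a Gaussian assumption.

```python
def b05_measured(ts, ys):
    """Half-height width taken from the data points, with linear
    interpolation at the two half-height crossings (no peak model)."""
    h = max(ys)
    apex = ys.index(h)
    half = h / 2.0
    i = apex
    while i > 0 and ys[i] > half:            # walk left to the crossing
        i -= 1
    left = ts[i] + (ts[i + 1] - ts[i]) * (half - ys[i]) / (ys[i + 1] - ys[i])
    j = apex
    while j < len(ys) - 1 and ys[j] > half:  # walk right to the crossing
        j += 1
    right = ts[j - 1] + (ts[j] - ts[j - 1]) * (half - ys[j - 1]) / (ys[j] - ys[j - 1])
    return right - left

# A symmetric triangular test peak: apex 0.2 V at t = 312 s.
ts = [310.0, 311.0, 312.0, 313.0, 314.0]
ys = [0.0, 0.1, 0.2, 0.1, 0.0]
b05 = b05_measured(ts, ys)

# One reduced raw data record: tms (s), b05 (s), height (V), index (0).
print(f"{312.0:10.2f} {b05:8.2f} {0.2:12.6f} {0:6d}")
```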
At the end of the integration job the EXPORT file is automatically enlarged by all necessary key words and information in ASCII, which later identify the analytical job and allow for auto reporting.
When calling EXPORT data into one of the EXPORT programs, a first systematic error check is done:
Alarm colors tell whether a peak height value or a peak width value is out of scale. In this case the peak integral is shown in an alarm color and / or a warning is printed on screen. We have an elegant solution for such cases: click on “Chromatogram Combination”. Spikes are automatically removed and the corresponding data deleted. The false data are corrected. A report is shown on screen.
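As a sketch of the automatic spike removal, under an assumed rule: a single point standing far above both of its neighbours is treated as a spike and replaced by the neighbour mean. The threshold factor and the signal values are illustration assumptions, not the actual algorithm of any EXPORT program.

```python
def despike(ys, factor=5.0):
    """Replace single-point spikes by the mean of their neighbours.
    A point is a spike if it exceeds both neighbours by 'factor'."""
    out = list(ys)
    removed = []
    for i in range(1, len(ys) - 1):
        local = max(abs(ys[i - 1]), abs(ys[i + 1]), 1e-12)
        if abs(ys[i]) > factor * local:
            out[i] = 0.5 * (ys[i - 1] + ys[i + 1])
            removed.append(i)
    return out, removed

signal = [0.010, 0.012, 0.900, 0.011, 0.013]   # one obvious spike
clean, removed = despike(signal)
print("spikes removed at indices:", removed)
```

Real peaks are wider than one data point at a correct data density, which is what lets such a filter separate spikes from genuine signal.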
This does in no way manipulate the raw chromatogram data. The raw data (in binary form) remain untouched, in accordance with internationally accepted regulation. However, the ASCII EXPORT file of reduced raw data is qualified for either a complete or a strictly reduced analytical report, for auto statistics with a graphics report, or for any other job which analysts and the users of analytical information need.
This is a further important aspect of the EXPORT concept:
There are quite differing demands for a final result report.
Some users never need retention index data, although these are the key control values for checking qualitative systematic errors - see “Qualitative GC Errors”.
Others have already worked for a long time with an existing commercial integration software but suddenly need add-on calculations of physical values based on the results. Normally they must wait a long time until the instrument company can do the changed programming - or it turns out this is too expensive for just one customer.
Based on the EXPORT reduced raw data file we never had any problem to quickly offer software add-ons as a solution for any new demand.