30 CHAPTER 2: Immunoassay Platform and Designs
own tissues. Several mechanisms may trigger the production of autoantibodies; for example, an antigen formed during fetal development and then sequestered may be released as a result of infection, chemical exposure, or trauma, as occurs in autoimmune thyroiditis. An autoantibody may bind to the analyte-label conjugate in a competition-type immunoassay to produce a false positive or false negative result. Circulating cardiac troponin I autoantibodies may be present in patients suffering from acute myocardial infarction, where troponin I elevation is an indication of such an episode. Unfortunately, the presence of circulating cardiac troponin I autoantibodies may falsely lower the measured cardiac troponin I concentration (negative interference) in commercial immunoassays, thus complicating the diagnosis of acute myocardial infarction [11]. However, falsely elevated results due to the presence of autoantibodies are more common than false negative results.
Verhoye et al. found three patients with false positive thyrotropin results
that were caused by interference from an autoantibody against thyrotropin.
The interfering substance in the affected specimens was identified as an
autoantibody by gel-filtration chromatography and polyethylene glycol
precipitation [12].
Often the analyte can complex with immunoglobulins or other antibodies to generate macro-analytes, which can falsely elevate the measured value of the analyte. For example, macroamylasemia and macro-prolactinemia can produce falsely elevated results in amylase and prolactin assays, respectively. In macro-prolactinemia, the hormone prolactin complexes with itself and/or with its autoantibody to create macro-prolactin in the patient’s circulation. The macro-analyte is physiologically inactive, but often interferes with many prolactin immunoassays to generate false positive prolactin results [13]. Such interference can be removed by polyethylene glycol precipitation.
CASE REPORT
A 17-year-old girl was referred to a University hospital for a persistently elevated level of aspartate aminotransferase (AST). One year earlier, her AST level was 88 U/L as detected during her annual school health check, but she had no medical complaints. She was not on any medication and had a regular menstrual cycle. Her physical examination at the University hospital was unremarkable. All laboratory test results were normal, but her AST level was further elevated to 152 U/L. All serological tests for hepatitis were negative. On further follow-up her AST level was found to have increased to 259 U/L. At that point it was speculated that her elevated AST was due to interference, and further study by gel-filtration showed a species with a molecular weight of 250 kilodaltons. This was further characterized by immunoelectrophoresis and immunoprecipitation as an immunoglobulin (IgG kappa-lambda) complexed with AST, which was causing the elevated AST level in this girl. These complexes are benign [14].
2.11 PROZONE (OR “HOOK”) EFFECT
The prozone or hook effect is observed when a very high amount of an analyte is present in the sample but the observed value is falsely lowered. This type of interference is observed more commonly in sandwich assays. The mechanism of this significant negative interference is that a very high level of analyte (antigen) saturates the capture and detection antibodies separately, forming mostly single antibody:antigen complexes and reducing the concentration of the “sandwich” (antibody 1:antigen:antibody 2) complexes that are responsible for generating the signal. The hook effect has been reported with assays of a variety of analytes, such as β-hCG, prolactin, calcitonin, aldosterone, and cancer markers (CA 125, PSA).
The best way to detect and overcome the hook effect is serial dilution. For example, if the hook effect is present and the reported value of an analyte (e.g. prolactin) is 120 ng/mL, then a 1:1 dilution of the specimen should produce a value of 60 ng/mL; if the observed value is instead 90 ng/mL (significantly higher than the expected value), the hook effect should be suspected. To eliminate the hook effect, a 1:10, 1:100, or even a 1:1000 dilution may be necessary so that the true analyte concentration falls within the analytical measurement range (AMR) of the assay.
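The dilution-linearity check described above amounts to a simple recovery calculation. A minimal sketch in Python (the function name and the 100% reference point are illustrative, not part of any assay protocol):

```python
def dilution_recovery(neat_result, diluted_result, dilution_factor):
    """Percent recovery of the undiluted (neat) result predicted from a
    diluted measurement. For a linear assay, diluted_result multiplied by
    the dilution factor should reproduce the neat result (~100% recovery);
    recovery well above 100% suggests a hook effect (antigen excess)."""
    return 100.0 * diluted_result * dilution_factor / neat_result

# Example from the text: neat prolactin 120 ng/mL, 1:1 dilution reads 90 ng/mL
# (a 1:1 dilution of one part specimen plus one part diluent is a factor of 2)
print(f"{dilution_recovery(120.0, 90.0, 2):.0f}%")  # 150% -> hook effect suspected
```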
CASE REPORT
A 16-year-old girl presented to the emergency department with a 2-week history of nausea, vomiting, vaginal spotting, and lower leg edema. On physical examination, a palpable lower abdominal mass was found. The patient admitted sexual activity, but denied having any sexually transmitted disease. Molar pregnancy was suspected, and the quantitative β-subunit of human chorionic gonadotropin (β-hCG) concentration was 746.2 IU/L; however, the qualitative urine test was negative. Repeat of the urinalysis by a senior technologist also produced a negative result. At that point the authors suspected the hook effect, and a 1:1 dilution of the serum specimen produced a non-linear value (455.2 IU/L), which further confirmed the hook effect. After a 1:10 dilution, the urine test for β-hCG became positive, and finally, by using a 1:10,000 dilution of the specimen, the original serum β-hCG concentration was determined to be 3,835,000 IU/L. The hook effect is usually observed with serum β-hCG in molar pregnancy because very high amounts of β-hCG are produced [15].
KEY POINTS
Immunoassays can be competitive or immunometric (non-competitive, also known as sandwich). In competitive immunoassays only one antibody is used. This format is common for assays of small molecules such as therapeutic drugs or drugs of abuse. In the sandwich format two antibodies are used; this format is more commonly used for assays of relatively large molecules.
Homogeneous immunoassay format: After incubation, no separation between bound and free label is necessary.
Heterogeneous immunoassay format: The bound label must be separated from the free label before measuring the signal.
Commercially available immunoassays use various formats, including FPIA, EMIT, CEDIA, KIMS, and LOCI. In the fluorescence polarization immunoassay (FPIA), the labeled antigen (a relatively small molecule) undergoes different Brownian motion when free in solution than when it is complexed to a large antibody molecule (140,000 Da or more). FPIA is a homogeneous competitive assay in which the fluorescence polarization signal is measured after incubation; a high polarization signal is produced only when the labeled antigen is bound to the antibody molecule. Therefore, the intensity of the signal is inversely proportional to the analyte concentration.
EMIT (enzyme multiplied immunoassay technique) is a homogeneous competitive immunoassay in which the antigen is labeled with glucose-6-phosphate dehydrogenase, an enzyme that reduces nicotinamide adenine dinucleotide (NAD, no absorbance at 340 nm) to NADH (absorbs at 340 nm); the absorbance is monitored at 340 nm. When the labeled antigen binds to the antibody molecule, the enzyme label becomes inactive and no signal is generated. Analyte in the specimen competes with the labeled antigen for antibody binding, leaving more of the enzyme label active; therefore, signal intensity is proportional to the analyte concentration.
The cloned enzyme donor immunoassay (CEDIA) method is based on recombinant DNA technology in which the bacterial enzyme beta-galactosidase is genetically engineered into two inactive fragments, an enzyme donor and an enzyme acceptor. The enzyme donor fragment is conjugated to the analyte; antibody binding to this conjugate prevents the fragments from combining, while analyte in the specimen competes for the antibody and frees the conjugate. When both fragments combine, active enzyme is formed and a signal is produced that is proportional to the analyte concentration.
Kinetic interaction of microparticles in solution (KIMS): In the absence of antigen molecules, free antibodies bind to drug-microparticle conjugates to form particle aggregates, resulting in an increase in absorption that is optically measured at various visible wavelengths (500–650 nm). Drug present in the specimen binds the free antibodies and inhibits particle aggregation, so absorbance decreases with increasing drug concentration.
Luminescent oxygen channeling immunoassay (LOCI): The immunoassay mixture is irradiated with light to generate singlet oxygen in sensitizer microbeads (“Sensibeads”) coupled to one component of the immune complex. When the immune complex brings a Sensibead into close proximity with a second type of bead, the singlet oxygen channels into that bead and chemiluminescent signals are generated that are proportional to the concentration of the analyte-antibody complex.
Usually a total bilirubin concentration below 20 mg/dL does not cause interference, but concentrations over 20 mg/dL may cause problems. The interference of bilirubin is mainly caused by its absorbance at 454 or 461 nm.
Various structurally related drugs or drug metabolites can interfere with
immunoassays.
Heterophilic antibodies may arise in a patient in response to exposure to certain
animals or animal products, due to infection by bacterial or viral agents, or use of
murine monoclonal antibody products in therapy or imaging. Heterophilic
antibodies interfere most commonly with sandwich assays used for measuring
large molecules, but rarely with competitive assays, causing mostly false positive
results.
Heterophilic antibodies are absent in urine. Therefore, if a serum specimen is
positive for an analyte (e.g. human chorionic gonadotropin, hCG), but beta-hCG
cannot be detected in the urine specimen, it indicates interference from a
heterophilic antibody in the serum hCG measurement. Another way to investigate
heterophilic antibody interference is serial dilution of a specimen. If serial dilution
produces a non-linear result, it indicates interference in the assay. Interference
from heterophilic antibodies can also be blocked by adding commercially available
heterophilic antibody blocking agents to the specimen prior to analysis.
Autoantibodies are formed by a person’s immune system against an antigen on that person’s own tissues, and may interfere with an immunoassay to produce false positive results (and, less frequently, false negative results). Often the endogenous analyte of interest complexes with immunoglobulins or other antibodies to generate macro-analytes, which can falsely elevate a result. For example, macroamylasemia and macro-prolactinemia can produce falsely elevated results in amylase and prolactin assays, respectively. Such interference can be removed by polyethylene glycol precipitation.
Prozone (“hook”) effect: Very high levels of antigen can reduce the concentrations of “sandwich” (antibody 1:antigen:antibody 2) complexes responsible for generating the signal by forming mostly single antibody:antigen complexes. This antigen-excess effect mostly causes negative interference (falsely lowered results). The best way to eliminate the hook effect is serial dilution.
REFERENCES
[1] Jolley ME, Stroupe SD, Schwenzer KS, Wang CJ, et al. Fluorescence polarization immunoassay III. An automated system for therapeutic drug determination. Clin Chem 1981;27:1575–9.
[2] Jeon SI, Yang X, Andrade JD. Modeling of homogeneous cloned enzyme donor immunoassay. Anal Biochem 2004;333:136–47.
[3] Snyder JT, Benson CM, Briggs C, et al. Development of NT-proBNP, Troponin, TSH, and FT4 LOCI® assays on the new Dimension® EXL with LM clinical chemistry system. Clin Chem 2008;54:A92 [Abstract #B135].
[4] Dai JL, Sokoll LJ, Chan DW. Automated chemiluminescent immunoassay analyzers. J Clin Ligand Assay 1998;21:377–85.
[5] Forest J-C, Masse J, Lane A. Evaluation of the analytical performance of the Boehringer Mannheim Elecsys® 2010 Immunoanalyzer. Clin Biochem 1998;31:81–8.
[6] Babson AL, Olsen DR, Palmieri T, Ross AF, et al. The IMMULITE assay tube: a new approach to heterogeneous ligand assay. Clin Chem 1991;37:1521–2.
[7] Christenson RH, Apple FS, Morgan DL. Cardiac troponin I measurement with the ACCESS® immunoassay system: analytical and clinical performance characteristics. Clin Chem 1998;44:52–60.
[8] Montagne P, Varcin P, Cuilliere ML, Duheille J. Microparticle-enhanced nephelometric immunoassay with microsphere-antigen conjugate. Bioconjugate Chem 1992;3:187–93.
[9] Henry N, Sebe P, Cussenot O. Inappropriate treatment of prostate cancer caused by heterophilic antibody interference. Nat Clin Pract Urol 2009;6:164–7.
[10] Georges A, Charrie A, Raynaud S, Lombard C, et al. Thyroxin overdose due to rheumatoid factor interferences in thyroid-stimulating hormone assays. Clin Chem Lab Med 2011;49:873–5.
[11] Tang G, Wu Y, Zhao W, Shen Q. Multiple immunoassay systems are negatively interfered by circulating cardiac troponin I autoantibodies. Clin Exp Med 2012;12:47–53.
[12] Verhoye E, Bruel A, Delanghe JR, Debruyne E, et al. Spuriously high thyrotropin values due to anti-thyrotropin antibody in adult patients. Clin Chem Lab Med 2009;47:604–6.
[13] Kavanagh L, McKenna TJ, Fahie-Wilson MN, et al. Specificity and clinical utility of methods for determination of macro-prolactin. Clin Chem 2006;52:1366–72.
[14] Matama S, Ito H, Tanabe S, Shibuya A, et al. Immunoglobulin complexed aspartate aminotransferase. Intern Med 1993;32:156–9.
[15] Er TK, Jong YJ, Tsai EM, Huang CL, et al. False positive pregnancy in hydatidiform mole. Clin Chem 2006;52:1616–8.
CHAPTER 3
Pre-Analytical Variables
CONTENTS
3.1 Laboratory Errors in Pre-Analytical, Analytical, and Post-Analytical Stages .... 35
3.2 Order of Draw of Blood Collection Tubes .... 37
3.3 Errors with Patient Preparation .... 38
3.4 Errors with Patient Identification and Related Errors .... 38
3.5 Error of Collecting Blood in Wrong Tubes: Effect of Anticoagulants .... 40
3.6 Issues with Urine Specimen Collection .... 42
3.7 Issues with Specimen Processing and Transportation .... 42
3.8 Special Issues: Blood Gas and Ionized Calcium Analysis .... 43
Key Points .... 44
References .... 45

3.1 LABORATORY ERRORS IN PRE-ANALYTICAL, ANALYTICAL, AND POST-ANALYTICAL STAGES

Accurate clinical laboratory test results are important for proper diagnosis and treatment of patients. Factors that are important to obtaining accurate laboratory test results include:

Patient Identification: The right patient is identified prior to specimen collection by matching at least two criteria.
Collection Protocol: The correct technique and blood collection tube have been used for sample collection to avoid tissue damage, prolonged venous stasis, or hemolysis.
Labeling: After collection, the specimen was labeled properly with correct patient information; specimen misidentification is a major source of pre-analytical error.
Specimen Handling: Proper centrifugation (in the case of serum or plasma specimen analysis) and proper transportation of specimens to the laboratory.
Storage Protocol: Maintaining proper storage of specimens prior to analysis in order to avoid artifactual changes in analytes; for example, storing blood gas specimens on ice if the analysis cannot be completed within 30 min of specimen collection.
Interference Avoidance: Proper analytical steps to obtain the correct result and avoid interferences.
LIS Reports: Correctly reporting the result to the laboratory information system (LIS) if the analyzer is not interfaced with the LIS.
Clinician Reports: The report reaching the clinician must contain the right result, together with interpretative information, such as a reference range and other comments that aid clinicians in the decision-making process.

A. Dasgupta and A. Wahed: Clinical Chemistry, Immunology and Laboratory Quality Control
DOI: http://dx.doi.org/10.1016/B978-0-12-407821-5.00003-6
© 2014 Elsevier Inc. All rights reserved.

Table 3.1 Common Laboratory Errors

Pre-Analytical Errors:
Tube filling error
Patient identification error
Inappropriate container
Empty tube
Order not entered in laboratory information system
Specimen collected wrongly from an infusion line
Specimen stored improperly
Contamination of culture tube

Analytical Errors:
Inaccurate result due to interference
Random error caused by the instrument

Post-Analytical Errors:
Result communication error
Excessive turnaround time due to instrument downtime
Failure at any of these steps can result in an erroneous or misleading labora-
tory result, sometimes with adverse outcomes. The analytical part of the anal-
ysis involves measurement of the concentration of the analyte corresponding
to its “true” level (as compared to a “gold standard” measurement) within a
clinically acceptable margin of error (the total acceptable analytical error,
TAAE). Errors can occur at any stage of analysis (pre-analytical, analytical, and post-analytical). It has been estimated that pre-analytical errors account for approximately two-thirds of all laboratory errors, while errors in the analytical and post-analytical phases account for the remaining one-third.
Carraro and Plebani reported that, among 51,746 clinical laboratory analyses performed over a three-month period in the authors’ laboratory (7,615 laboratory orders, 17,514 blood collection tubes), clinicians contacted the laboratory regarding 393 questionable results, of which 160 were confirmed to be due to laboratory errors. Of the 160 confirmed laboratory errors, 61.9% were determined to be pre-analytical errors, 15% were analytical errors, and 23.1% were post-analytical errors [1]. Types of laboratory errors
(pre-analytical, analytical, and post-analytical) are summarized in Table 3.1.
In order to avoid pre-analytical errors, several approaches can be taken,
including:
The use of hand-held devices connected to the LIS that can objectively identify the patient by scanning a patient-attached barcode, typically on a wristband.
Retrieval of current laboratory orders from the LIS.
Barcoded labels are printed at the patient’s side, thus minimizing the
possibility of misplacing the labels on the wrong patient samples.
When classifying sources of error, it is important to distinguish between cognitive errors (mistakes), which are due to poor knowledge or judgment, and non-cognitive errors (commonly known as slips and lapses), which are due to interruptions in a process, even during routine analysis involving automated analyzers. Cognitive errors can be prevented by increased training, competency evaluation, and process aids (such as checklists); non-cognitive errors can be reduced by improving the work environment (e.g. re-engineering to minimize distractions and fatigue). The vast majority of errors are non-cognitive slips and lapses made by the personnel directly involved in the process; many of these can be easily avoided.
The worst pre-analytical error is incorrect patient identification where a phy-
sician may act on test results from the wrong patient. Another common error
is blood collection from an intravenous line that may falsely increase test
results for glucose, electrolytes, or a therapeutic drug due to contamination
with infusion fluid.
CASE REPORT
A 59-year-old woman was admitted to the hospital due to a transient ischemic attack. During the first day of hospitalization she experienced a generalized tonic-clonic seizure, and a 1000 mg intravenous phenytoin loading dose was administered, followed by an oral dose of 100 mg of phenytoin every three hours for a total of three doses. For the next five days, the patient received 100 mg phenytoin intravenously or orally every 8 hours. On the evening of Day 5 she received two additional 300 mg doses of phenytoin intravenously. Beginning with Day 7 the dose was 100 mg intravenously every 6 hours. On Day 5, the phenytoin concentration was 17.0 μg/mL, and on Day 7 it was 13.4 μg/mL. Surprisingly, on Day 8 the phenytoin concentration was at a life-threatening level of 80.7 μg/mL, although the patient did not show any symptoms of phenytoin toxicity. Another sample drawn 7 hours later showed a phenytoin level of 12.4 μg/mL. It was suspected that the falsely elevated serum phenytoin level was due to drawing of the specimen from the same line through which the intravenous phenytoin was administered [2].
3.2 ORDER OF DRAW OF BLOOD COLLECTION
TUBES
The correct order of draw for blood specimens is as follows:
Microbiological blood culture tubes (yellow top).
Royal blue tube (no additive); trace metal analysis if desired.
Citrate tube (light blue).
Serum tube (red top) or tube with gel separator/clot activator (gold top
or tiger top).
Heparin tube (green top).
EDTA tube (ethylenediamine tetraacetic acid; purple/lavender top).
Oxalate-fluoride tube (gray top).
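The prescribed order above can be treated as a rank ordering. A short sketch (the tube labels and the check itself are illustrative, not taken from any standard’s text) that flags a draw sequence violating the order:

```python
# Prescribed order of draw, first to last (from the list above; labels illustrative)
ORDER_OF_DRAW = [
    "blood culture",     # yellow top
    "royal blue",        # no additive; trace metal analysis
    "citrate",           # light blue top
    "serum",             # red, gold, or tiger top
    "heparin",           # green top
    "EDTA",              # purple/lavender top
    "oxalate-fluoride",  # gray top
]
RANK = {tube: i for i, tube in enumerate(ORDER_OF_DRAW)}

def draw_order_ok(sequence):
    """Return True if the tubes in `sequence` appear in non-decreasing rank."""
    ranks = [RANK[t] for t in sequence]
    return all(a <= b for a, b in zip(ranks, ranks[1:]))

print(draw_order_ok(["citrate", "serum", "EDTA"]))  # True
print(draw_order_ok(["EDTA", "citrate"]))           # False
```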
Tubes with additives must be thoroughly mixed by gentle inversion as per manufacturer-recommended protocols. Erroneous test results may be obtained when the blood is not thoroughly mixed with the additive. When trace metal testing on serum is ordered, it is advisable to use trace element tubes; royal blue Monoject Trace Element Blood Collection Tubes are available for this purpose. These tubes are free from trace and heavy metals.
3.3 ERRORS WITH PATIENT PREPARATION
There are certain important issues regarding patient preparation for obtaining meaningful clinical laboratory test results. For example, glucose testing and a lipid panel must be done after the patient fasts overnight. Although the cholesterol concentration is not affected significantly by meals, chylomicrons present in serum after a meal can significantly increase the triglyceride level.
Physiologically, blood distribution differs significantly in relation to body posture. Gravity pulls blood into various parts of the body when a person is recumbent, and the blood moves back into the circulation, away from the tissues, when the person is standing or ambulatory. The blood volume of an adult in an upright position is 600–700 mL less than when the person is lying on a bed, and this shift directly affects certain analytes due to dilution effects. Therefore, concentrations of proteins, enzymes, and protein-bound analytes (thyroid-stimulating hormone (TSH), cholesterol, T4, and medications like warfarin) are affected by posture; most affected are analytes involved in regulating blood pressure and volume, including renin, aldosterone, and catecholamines. It is vital for laboratory requisitions to specify the need for supine samples when these analytes are requested. Several analytes show diurnal variation, most importantly cortisol and TSH (Table 3.2). Therefore, the time of specimen collection may affect test results.
3.4 ERRORS WITH PATIENT IDENTIFICATION
AND RELATED ERRORS
Accurate patient and specimen identification is required for providing order-
ing clinicians with correct results. Regulatory agencies like The Joint
Commission (TJC) have made it a top priority in order to ensure patient
Table 3.2 Common Analytes that Show Diurnal Variation

Cortisol: Much higher concentration in the morning than in the afternoon
Renin: Maximum activity in the early morning, minimum in the afternoon
Iron: Higher levels in the morning than in the afternoon
TSH: Maximum level 2 AM–4 AM; minimum level 6 PM–10 PM
Insulin: Higher in the morning than later in the day
Phosphate: Lowest in the morning, highest in the early afternoon
ALT: Higher level in the afternoon than in the morning

Abbreviations: TSH, thyroid-stimulating hormone; ALT, alanine aminotransferase.
safety. Patient and specimen misidentification occurs mostly during the pre-
analytical phase:
Accurate identification of a patient requires verification of at least two
unique identifiers from the patient and ensuring that those match the
patient’s prior records.
If a patient is unable to provide identifiers (e.g. a neonate or a critically ill patient), a family member or nurse should verify the identity of the patient.
Information on laboratory requisitions or electronic orders must also
match patient information in their chart or electronic medical record.
Specimens should not be collected unless all identification discrepancies
have been resolved.
The specimens should be collected and labeled in front of the patient and
then sent to the laboratory with the test request. Non-barcoded specimens
should be accessioned, labeled with a barcode (or re-labeled, if necessary),
processed (either manually or on an automated line), and sent for analysis.
Identification of the specimen should be carefully maintained during centri-
fugation, aliquoting, and analysis. Most laboratories use barcoded labeling
systems to preserve sample identification. Patient misidentification can have
a serious adverse outcome on a patient, especially if the wrong blood is
transfused to a patient due to misidentification of the blood specimen sent
to the laboratory for cross-matching. In this case a patient could die from
receiving the wrong blood group.
Although errors in patient identification occur mostly in the pre-analytical
phase, errors can also occur during the analytical and even post-analytical
phases. Results from automated analyzers are electronically transferred to the
LIS through an interface, but if direct transfer of the result from a particular
instrument is not available, errors can occur during manual transfer of the
results. Dunn and Morga reported that, of 182 specimen misidentifications they studied, 132 occurred in the pre-analytical stage. These misidentifications were due to wrist bands labeled for the wrong patient, laboratory tests ordered for the wrong patient, selection of the wrong medical record from a menu of similar names and social security numbers, specimen mislabeling during collection associated with batching of specimens and printed labels, misinformation from manual entry of laboratory forms, failure of two-source patient identification for clinical laboratory specimens, and failure of two-person verification of patient identity for blood bank specimens. In addition, 37 misidentification errors during the analytical phase were associated with mislabeled specimen containers, tissue cassettes, or microscopic slides. Only 13 misidentification events occurred in the post-analytical stage; these were due to reporting of results into the wrong medical record and to incompatible blood transfusions resulting from failure of two-person verification of blood products [3].
CASE REPORT
A 68-year-old male presented to the hospital with sharp abdominal pain. The patient underwent an appendectomy and received one unit of type A blood. The patient developed disseminated intravascular coagulation and died 24 hours after receiving the transfusion. Postmortem analysis of the patient’s blood revealed that he was actually type O. The patient had been sharing a room with another patient whose blood was type A. The specimen sent to the blood bank had been inappropriately labeled [4].
Delta checks are a simple way to detect mislabels. A delta check is a process
of comparing a patient’s result to his or her previous result for any one
analyte over a specified period of time. The difference or “delta,” if outside
pre-established rules, may indicate a specimen mislabel or other pre-
analytical error.
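As a sketch, a delta check reduces to comparing the percentage change against a pre-established limit. The limit and analyte below are illustrative, since each laboratory sets its own rules:

```python
def delta_check_flag(current, previous, limit_pct):
    """Flag a result whose change from the patient's previous result exceeds
    a pre-established percentage limit. A real rule set would also restrict
    the comparison to a specified time window between the two results."""
    delta_pct = 100.0 * abs(current - previous) / previous
    return delta_pct > limit_pct

# A sodium result of 115 mmol/L after a prior 140 mmol/L, against a 10% limit
print(delta_check_flag(115, 140, 10))  # True -> investigate possible mislabel
```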
3.5 ERROR OF COLLECTING BLOOD IN WRONG
TUBES: EFFECT OF ANTICOAGULANTS
Blood specimens must be collected in the right tube in order to obtain accurate test results. It is important to have the correct anticoagulant in the tube (different anticoagulant tubes have different colored tops). Anticoagulants are used to prevent coagulation of blood in order to obtain plasma or whole blood specimens. The most routinely used anticoagulants are ethylenediamine tetraacetic acid (EDTA), heparin (sodium, ammonium, or lithium salts), and citrates (trisodium citrate and acid citrate dextrose). An optimal anticoagulant-to-blood ratio is essential to preserve analytes and to prevent clot or fibrin formation. Proper anticoagulants for various tests are as follows:
Potassium ethylenediamine tetraacetic acid (EDTA; purple top tube) is
the anticoagulant of choice for complete blood count (CBC).
EDTA is also used for blood bank pre-transfusion testing, flow cytometry,
hemoglobin A1C, and most common immunosuppressive drugs such as
cyclosporine, tacrolimus, sirolimus, and everolimus; another
immunosuppressant, mycophenolic acid, is measured in serum or
plasma instead of whole blood.
Heparin (green top tube) is the only anticoagulant recommended for the determination of pH, blood gases, electrolytes, and ionized calcium. Lithium heparin is commonly used instead of sodium heparin for general chemistry tests. Heparinized plasma is not recommended for protein electrophoresis and cryoglobulin testing because of the presence of fibrinogen, which migrates in the beta region and may be mistaken for a monoclonal protein.
For coagulation testing, citrate (light blue top) is the appropriate anticoagulant.
Potassium oxalate is used in combination with sodium fluoride and
sodium iodoacetate to inhibit enzymes involved in the glycolytic
pathway. Therefore, the oxalate/fluoride (gray top) tube should be used
for collecting specimens for measuring glucose levels.
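In a laboratory information system, tube requirements like those above are typically encoded as a lookup from test to required tube. A minimal sketch (the test names and function are illustrative; the tube assignments are taken from the list above):

```python
# Illustrative test -> tube mapping drawn from the list above
TUBE_FOR_TEST = {
    "CBC": "EDTA (purple top)",
    "hemoglobin A1c": "EDTA (purple top)",
    "blood gases": "heparin (green top)",
    "ionized calcium": "heparin (green top)",
    "coagulation": "citrate (light blue top)",
    "glucose": "oxalate/fluoride (gray top)",
}

def required_tube(test_name):
    """Return the required tube, or raise for a test with no rule defined."""
    try:
        return TUBE_FOR_TEST[test_name]
    except KeyError:
        raise ValueError(f"no tube rule defined for {test_name!r}")

print(required_tube("glucose"))  # oxalate/fluoride (gray top)
```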
Although lithium heparin tubes are widely used for blood collection for
analysis of many analytes in the chemistry section of a clinical laboratory, a
common mistake is to collect specimens for lithium analysis in a lithium
heparin tube. This can cause clinically significant falsely elevated lithium
values that may confuse the ordering physician.
CASE REPORT
A healthy 15-month-old female was brought in by her mother after ingesting an unknown amount of nortriptyline and lithium carbonate at an undetermined time. The mother reported that the patient had vomited after ingestion. Vital signs were normal. The patient was lethargic but easily aroused, and the physical examination was unremarkable. The initial ECG was also normal for age. The initial lithium level in the serum was 1.4 mEq/L, and a nortriptyline concentration of 36 ng/mL indicated that neither drug level was in the toxic range. The patient was treated with activated charcoal, but 13 hours after admission her serum lithium concentration was elevated to 3.1 mEq/L. The patient was given 1 mg/kg oral sodium polystyrene sulfonate, the rate of IV fluids was doubled, and the patient was started on an IV dopamine infusion. However, at 15 h her serum lithium level was 1.6 mEq/L. Review of her records revealed that the specimen had been collected in a lithium heparin tube. The 19-hour serum lithium concentration was 0.6 mEq/L, and the patient was discharged within 24 h of admission without further incident [5].
3.6 ISSUES WITH URINE SPECIMEN COLLECTION
Urinalysis remains one of the key diagnostic tests in the modern clinical lab-
oratory, and, as such, proper timing and collection techniques are important.
Urine is essentially an ultrafiltrate of plasma. Examination of urine may take several forms: microscopic, chemical (including immunochemical), and electrophoretic. Three different timings of collection are commonly encountered.
The most common is the random or “spot” urine collection. However, if it
would not unduly delay diagnosis, the first voided urine in the morning is
generally the best sample. This is because the first voided urine is generally
the most concentrated and contains the highest concentration of sediment.
The third timing of collection is the 12- or 24-hour collection. This is the pre-
ferred technique for quantitative measurements, such as for creatinine, elec-
trolytes, steroids, and total protein. The usefulness of these collections is
limited, however, by poor patient compliance.
For most urine testing, a clean catch specimen is optimal, with a goal of col-
lecting a “midstream” sample for testing. In situations where the patient can-
not provide a clean catch specimen, catheterization is another option, but
must be performed only by trained personnel. Urine collection from infants
and young children prior to toilet training can be facilitated through the use
of disposable plastic bags with adhesive surrounding the opening.
For point of care urinalysis (e.g. urine dipstick and pregnancy testing) any
clean and dry container is acceptable. Disposable sterile plastic cups and
even clean waxed paper cups are often employed. If the sample is to be sent
for culture, the specimen should be collected in a sterile container. For routine urinalysis and culture, the containers should not contain preservative, although for certain specific analyses some preservatives are acceptable. Timed collections are an exception: hydrochloric acid, boric acid, or glacial acetic acid is commonly used as a preservative.
Storage of urine specimens at room temperature is generally acceptable for
up to two hours. After this time the degradation of cellular and some chemi-
cal elements becomes a concern. Likewise bacterial overgrowth of both path-
ologic as well as contaminating bacteria may occur with prolonged storage at
room temperature. Therefore, if more than two hours will elapse between
collection and testing of the urine specimen, it must be refrigerated.
Refrigerated storage for up to 12 hours is acceptable for urine samples des-
tined for bacterial culture. Again, proper patient identification and specimen
labeling is important to avoid errors in reported results.
3.7 ISSUES WITH SPECIMEN PROCESSING AND
TRANSPORTATION
After collection, specimens require transportation to the clinical laboratory. If
specimens are collected in the outpatient clinic of the hospital and analyzed
in the hospital laboratory, transportation time may not be a factor. However,
if specimens are transported to the clinical laboratory or a reference laboratory, care must be taken in shipping specimens. Ice packs or cold packs are especially useful for preserving specimens because analytes are more stable at lower temperatures. Turbulence during transportation, such as occurs when transporting specimens in a van to the main laboratory, can even affect concentrations of certain analytes.
Many clinical laboratory tests are performed on either serum or plasma.
Due to the instability of certain analytes in unprocessed serum or plasma,
separation of serum or plasma from blood components must be performed
as soon as possible, and definitely within two hours of collection.
Appropriate preparation of specimens prior to centrifugation is required to
ensure accurate laboratory results. Serum specimens must be allowed ample
time to clot prior to centrifugation. Tubes with clot activators require sufficient mixing and at least 30 minutes of clotting time. Plasma specimens must be mixed gently according to the manufacturer's instructions to ensure efficient release of the additive/anticoagulant.
3.8 SPECIAL ISSUES: BLOOD GAS AND IONIZED
CALCIUM ANALYSIS
Specimens collected for blood gas determinations require special care, as the
analytes are very sensitive to time, temperature, and handling. In standing whole blood samples, pH falls at a rate of 0.04–0.08/hour at 37°C, 0.02–0.03/hour at 22°C, and <0.01/hour at 4°C. This drop in pH is concordant with decreased glucose and increased lactate. In addition, pCO2 increases around 5.0 mmHg/hour at 37°C, 1.0 mmHg/hour at 22°C, and 0.5 mmHg/hour at 4°C. At 37°C, pO2 decreases by 5–10 mmHg/hour, but only 2 mmHg/hour at 22°C. Ideally, all blood gas specimens should be measured immediately and never stored. A plastic syringe transported at room temperature is recommended if analysis will occur within 30 minutes of collection, but a glass syringe should be used, with the specimen stored in ice, if more than 30 minutes are needed prior to analysis. Bubbles must be completely expelled from the specimen prior to transport, as the pO2 will be significantly increased and pCO2 decreased within 2 minutes [6].
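As a rough illustration, these decay rates can be turned into a small drift estimate. The rate constants below are taken from the ranges quoted above (midpoints where a range is given) and are assumptions for illustration only, not instrument or assay specifications:

```python
# Illustrative sketch: project analyte drift in a standing whole-blood sample,
# using decay rates from the text (midpoints of quoted ranges; 0.01 stands in
# for the "<0.01" pH rate at 4 C). Rough assumptions, not specifications.
PH_FALL_PER_H = {37: 0.06, 22: 0.025, 4: 0.01}   # pH units/hour
PCO2_RISE_PER_H = {37: 5.0, 22: 1.0, 4: 0.5}     # mmHg/hour

def drift(ph0, pco2_0, temp_c, hours):
    """Project pH and pCO2 after `hours` of storage at `temp_c` (37, 22, or 4 C)."""
    ph = ph0 - PH_FALL_PER_H[temp_c] * hours
    pco2 = pco2_0 + PCO2_RISE_PER_H[temp_c] * hours
    return round(ph, 3), round(pco2, 1)

# A sample left for 2 hours at room temperature (22 C):
print(drift(7.40, 40.0, 22, 2))  # (7.35, 42.0)
```

This back-of-the-envelope model is linear in time, which is only a first approximation; the practical message stands: analyze blood gas specimens immediately.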
Blood gas analyzers warm samples to 37°C for analysis to recapitulate physiological temperature. However, for patients with abnormal body temperature, either hyperthermia due to fever or induced hypothermia in patients undergoing cardiopulmonary bypass, a temperature correction should be made to determine accurate pH, pO2, and pCO2 results.
Ionized calcium is often measured with ion-sensitive electrodes in blood gas
analyzers. Ionized calcium is inversely related to pH: decreasing pH decreases
albumin binding to calcium, thereby increasing free, ionized calcium.
Therefore, specimens sent to the lab for ionized calcium determinations
should be handled with the same caution as other blood gas samples since
pre-analytical errors in pH will impact ionized calcium results [7].
KEY POINTS

Errors in the clinical laboratory can occur in pre-analytical, analytical, or post-analytical steps. Most errors (almost two-thirds of all errors) occur in pre-analytical steps.

During specimen collection, a patient must be identified by matching at least two criteria. Blood should be collected in the correct tube following the correct order of draw.

Correct order of drawing blood: (1) microbiological blood culture tubes (yellow top), (2) royal blue tube (no additive) if trace metal analysis is desired, (3) citrate tube (light blue), (4) serum tube (red top) or tube with gel separator/clot activator (gold top or tiger top), (5) heparin tube (green top), (6) EDTA tube (purple/lavender top), and (7) oxalate-fluoride tube (gray top).

Proper centrifugation (in the case of analyzing serum or plasma specimens), proper transportation of the specimen to the laboratory, and proper storage of the specimen prior to analysis are all required in order to avoid artifactual changes in the analyte.

EDTA (purple top tube) is the anticoagulant of choice for the complete blood count (CBC). The EDTA tube is also used for blood bank pre-transfusion testing, flow cytometry, hemoglobin A1C, and the most common immunosuppressive drugs such as cyclosporine, tacrolimus, sirolimus, and everolimus; another immunosuppressant, mycophenolic acid, is measured in serum or plasma instead of whole blood.

Heparin (green top tube) is the only anticoagulant recommended for the determination of pH, blood gases, electrolytes, and ionized calcium. Lithium heparin is commonly used instead of sodium heparin for general chemistry tests. Heparinized plasma is not recommended for protein electrophoresis and cryoglobulin testing because of the presence of fibrinogen, which migrates in the beta-2 region and may be mistaken for a monoclonal protein.

For coagulation testing, citrate (light blue top) is the appropriate anticoagulant.

Potassium oxalate is used in combination with sodium fluoride and sodium iodoacetate to inhibit enzymes involved in the glycolytic pathway. Therefore the oxalate/fluoride (gray top) tube should be used for collecting specimens for measuring glucose level.

Ideally, all blood gas specimens should be measured immediately and never stored. A plastic syringe, transported at room temperature, is recommended if analysis will occur within 30 minutes of collection; otherwise, the specimen must be stored in ice. Glass syringes are recommended for delayed analysis because glass does not allow the diffusion of oxygen or carbon dioxide. Bubbles must be completely expelled from the specimen prior to transport, as the pO2 will be significantly increased and pCO2 decreased within 2 minutes.
REFERENCES

[1] Carraro P, Plebani M. Errors in a stat laboratory: types and frequency 10 years later. Clin Chem 2007;53:1338–42.
[2] Murphy JE, Ward ES. Elevated phenytoin concentration caused by sampling through the drug-administered line. Pharmacotherapy 1991;11:348–50.
[3] Dunn EJ, Morga PJ. Patient misidentification in laboratory medicine: a qualitative analysis of 227 root cause analysis reports in the Veterans Administration. Arch Pathol Lab Med 2010;134:244–55.
[4] Aleccia J. Patients still stuck with bill for medical errors. MSNBC; 2008 [cited 2012 Jun 28]. Available from: http://www.msnbc.msn.com/id/23341360/ns/health-health_care/t/patients-still-stuck-bill-medical-errors/
[5] Lee DC, Klachko MN. Falsely elevated lithium levels in plasma samples obtained in lithium containing tubes. J Toxicol Clin Toxicol 1996;34:467–9.
[6] Knowles TP, Mullin RA, Hunter JA, Douce FH. Effects of syringe material, sample storage time, and temperature on blood gases and oxygen saturation in arterialized human blood samples. Respir Care 2006;51:732–6.
[7] Toffaletti J, Blosser N, Kirvan K. Effects of storage temperature and time before centrifugation on ionized calcium in blood collected in plain vacutainer tubes and silicone-separator (SST) tubes. Clin Chem 1984;30(4):553–6.
CHAPTER 4
Laboratory Statistics and Quality Control
A. Dasgupta and A. Wahed: Clinical Chemistry, Immunology and Laboratory Quality Control. DOI: http://dx.doi.org/10.1016/B978-0-12-407821-5.00004-8. © 2014 Elsevier Inc. All rights reserved.

4.1 MEAN, STANDARD DEVIATION, AND COEFFICIENT OF VARIATION

In an ideal situation, when measuring a value of the analyte in a specimen, the same value should be produced over and over again. However, in reality, the same value is not produced by the instrument, but a similar value is observed. Therefore, the most basic statistical operation is to calculate the mean and standard deviation, and then to determine the coefficient of variation (CV). Mean value is defined as Equation 4.1:

Mean (X̄) = (X1 + X2 + X3 + ... + Xn) / n   (4.1)

Here, X1, X2, X3, etc., are individual values and “n” is the number of values. After calculation of the mean value, the standard deviation (SD) of the sample can be easily determined using the following formula (Equation 4.2):

SD = √[Σ(Xi − X̄)² / (n − 1)]   (4.2)

Here, Xi is an individual value from the sample and n is again the number of observations.

Standard deviation represents the average deviation of an individual value from the mean value. The smaller the standard deviation, the better the precision of the measurement. Standard deviation is the square root of variance, where variance (σ²) indicates the spread of sample observations around the mean of all values. Therefore (Equation 4.3):

SD = √σ²   (4.3)

Coefficient of variation is also a very important parameter because CV can be easily expressed as a percent value; the lower the CV, the better the precision of the measurement. The advantage of CV is that one number can be used to express precision instead of stating both mean value and standard deviation. CV can be easily calculated with Equation 4.4:

CV = (SD / Mean) × 100   (4.4)

Sometimes the standard error of the mean is also calculated (Equation 4.5):

Standard error of mean = SD / √n   (4.5)

Here, n is the number of data points in the set.
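Equations 4.1–4.5 can be sketched in a few lines of Python (a minimal illustration using only the standard library; the function names and the example values are my own):

```python
import math

def mean(values):                 # Equation 4.1
    return sum(values) / len(values)

def sd(values):                   # Equation 4.2: sample SD with n - 1 denominator
    m = mean(values)
    return math.sqrt(sum((x - m) ** 2 for x in values) / (len(values) - 1))

def cv_percent(values):           # Equation 4.4
    return sd(values) / mean(values) * 100

def sem(values):                  # Equation 4.5: standard error of the mean
    return sd(values) / math.sqrt(len(values))

# Five replicate glucose control measurements in mg/dL (illustrative values):
glucose = [84, 86, 83, 85, 82]
print(round(mean(glucose), 1))        # 84.0
print(round(sd(glucose), 2))          # 1.58
print(round(cv_percent(glucose), 2))  # 1.88
```

The standard library's `statistics.mean` and `statistics.stdev` give the same results and would normally be used instead of hand-rolled functions.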
4.2 PRECISION AND ACCURACY
Precision is a measure of how reproducible values are in a series of measurements, while accuracy indicates how close a determined value is to the target value. Accuracy can be determined for a particular test by analysis of an assayed control where the target value is known. This is typically provided by the manufacturer or made in-house by accurately
measuring a predetermined amount of analyte and then dissolving it in a
predetermined amount of a solvent matrix where the matrix is similar to
plasma. An ideal assay has both excellent precision and accuracy, but
good precision of an assay may not always guarantee good accuracy.
4.3 GAUSSIAN DISTRIBUTION AND
REFERENCE RANGE
Gaussian distribution (also known as normal distribution) is a bell-
shaped curve, and it is assumed that during any measurement values will
follow a normal distribution with an equal number of measurements
above and below the mean value. In order to understand normal distri-
bution, it is important to know the definitions of “mean,” “median,” and “mode.” The “mean” is the calculated average of all values, the “median”
is the value at the center point (mid-point) of the distribution, while the
“mode” is the value that was observed most frequently during the mea-
surement. If a distribution is normal, then the values of the mean,
median, and mode are the same. However, the value of the mean,
median, and mode may be different if the distribution is skewed (not
Gaussian distribution). Other characteristics of Gaussian distributions are
as follows:
Mean ± 1 SD contains 68.2% of all values.
Mean ± 2 SD contains 95.5% of all values.
Mean ± 3 SD contains 99.7% of all values.
A Gaussian distribution is shown in Figure 4.1. Usually, reference range is determined by measuring the value of an analyte in a large number of normal subjects (at least 100 normal healthy people, but preferably 200–300 healthy individuals). Then the mean and standard deviation are determined. The reference range is the mean value − 2 SD to the mean value + 2 SD. This incorporates 95% of all values. The rationale for the reference range to be the mean ± 2 SD is based on the fact that the lower end of abnormal values and the upper end of normal values may often overlap. Therefore, mean ± 2 SD is a conservative estimate of the reference range based on measurement of the analyte in a healthy population. Important points for reference range include:
Reference range may be the same between males and females for many
analytes, but reference range may differ significantly between males and
females for certain analytes such as sex hormones.
Reference range of an analyte in an adult population may be different
from infants or elderly patients.
Although less common, reference range of certain analytes may be
different between different ethnic populations.
For certain analytes such as glucose, cholesterol, triglycerides, high-
density and low-density cholesterol, etc., there is no reference range but
FIGURE 4.1
A Gaussian distribution showing percentage of values within a certain standard deviation from the mean.
(Courtesy of Andres Quesda, M.D., Department of Pathology and Laboratory Medicine, University of
Texas-Houston Medical School.)
there are desirable ranges which are based on the study of a large
population and risk factors associated with certain values of analytes
(e.g. various lipid parameters and risk of cardiovascular diseases).
Although many analytes, when measured in the normal population, follow a normal distribution, not all analytes follow that pattern (e.g. cholesterol and triglycerides). In such cases the distribution is skewed and, as expected, the mean, median, and mode values are different.
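As a sketch of how a reference range might be derived under the mean ± 2 SD approach, the following simulates a healthy reference population; the analyte, sample size, and distribution parameters are assumptions for illustration only:

```python
import random
import statistics

# Sketch: derive a reference range as mean +/- 2 SD from a healthy reference
# population. Here the population is simulated; a real study would measure at
# least 100 (preferably 200-300) healthy subjects, as noted above.
random.seed(1)
sodium = [random.gauss(140, 2.5) for _ in range(200)]  # simulated serum sodium, mmol/L

m = statistics.mean(sodium)
s = statistics.stdev(sodium)          # sample standard deviation
lower, upper = m - 2 * s, m + 2 * s   # interval capturing ~95% of values
print(f"Reference range: {lower:.1f}-{upper:.1f} mmol/L")
```

Note that this assumes a Gaussian distribution; for skewed analytes such as triglycerides, a non-parametric percentile method (2.5th to 97.5th percentile) is typically used instead.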
4.4 SENSITIVITY, SPECIFICITY, AND PREDICTIVE
VALUE
An assay cannot be 100% sensitive or specific because there is some overlap
between values of a particular biochemical parameter observed in normal
individuals and patients with a particular disease (Figure 4.2). Therefore, during measurement of any analyte there is a gray area where a few abnormal values are generated from analysis of specimens from healthy people (false positives) and a few normal results are generated from patients (false negatives).
The gray area depends on the width of normal distribution as well as the
reference range of the analyte.
FIGURE 4.2
Distribution of values in normal and diseased states where TN: true negative values; TP: true positive
values; FN: false negative values; and FP: false positive values. (Courtesy of Andres Quesda, M.D.,
Department of Pathology and Laboratory Medicine, University of Texas-Houston Medical School.)
False positive results may mislead the clinician and lead to unnecessary
investigation and diagnostic tests as well as increased anxiety of the patient.
A false negative result is more dangerous than a false positive result
because diagnosis of a disease may be missed or delayed, which can
cause serious problems.
For a test, as clinical sensitivity increases, specificity decreases. For
calculating clinical sensitivity, specificity, and predictive value of a test,
the following formulas can be used:
TP 5 True positive (result correctly identifies a disease)
FP 5 False positive (result falsely identifies a disease)
TN 5 True negative (result correctly excludes a disease when the
disease is not present in an individual)
FN 5 False negative (result incorrectly excludes a disease when the
disease is present in an individual).
Therefore, when assay results are positive, results are a combination of TP and FP, and when assay results are negative, results are a combination of TN and FN (Equations 4.6–4.8).

Sensitivity (individuals with disease who show positive test results)
= TP / (TP + FN) × 100   (4.6)

Specificity (individuals without disease who show negative test results)
= TN / (TN + FP) × 100   (4.7)

Positive predictive value = TP / (TP + FP) × 100   (4.8)

A positive predictive value is the proportion of individuals with a positive test result who truly have the disease. Let us consider an example where a particular analyte was measured in 100 normal individuals and 100 individuals with disease. The following observations were made: TP = 95, FP = 5, TN = 95, and FN = 5. Therefore, sensitivity = 95/(95 + 5) × 100 = 95%, and specificity = 95/(95 + 5) × 100 = 95%.
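The worked example above maps directly onto Equations 4.6–4.8; here is a minimal sketch (the function names are my own):

```python
def sensitivity(tp, fn):                  # Equation 4.6
    return 100 * tp / (tp + fn)

def specificity(tn, fp):                  # Equation 4.7
    return 100 * tn / (tn + fp)

def positive_predictive_value(tp, fp):    # Equation 4.8
    return 100 * tp / (tp + fp)

# Example from the text: 100 healthy and 100 diseased individuals,
# with TP = 95, FP = 5, TN = 95, FN = 5:
tp, fp, tn, fn = 95, 5, 95, 5
print(sensitivity(tp, fn))                # 95.0
print(specificity(tn, fp))                # 95.0
print(positive_predictive_value(tp, fp))  # 95.0
```

With equal numbers of diseased and healthy subjects the three values coincide here; in a population where disease prevalence is low, the positive predictive value falls even when sensitivity and specificity are unchanged.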
4.5 RANDOM AND SYSTEMATIC ERRORS
IN MEASUREMENTS
Random errors and systematic errors are important issues in the laboratory
quality control process. Random errors are unavoidable and occur due to
imprecision of an analytical method. On the other hand, systematic errors
have certain characteristics and are often due to errors in measurement using
a particular assay. Because random errors cannot be eliminated or controlled,
the goal of quality control in a clinical laboratory is to avoid or minimize
systematic errors. Usually recalibration of the assay is the first step taken by a
clinical laboratory technologist to correct systematic error, but more serious
problems such as instrument malfunction may also be responsible for sys-
tematic errors.
4.6 LABORATORY QUALITY CONTROL: INTERNAL
AND EXTERNAL
Good quality control is the heart of a good laboratory operation. Because the
value of an analyte in a patient’s specimen is unknown, clinical laboratory
professionals rely on producing accurate results using controls for an assay.
Controls can be purchased from a commercial source or can be made in-
house. A control is defined as a material that contains the analyte of interest
with a known concentration. It is important that the control material has a
similar matrix to serum or plasma. Different types of controls used in clinical
laboratories are listed below:
Assayed Control: The value of the analyte is predetermined. Most
commercially available controls have predetermined values of various
analytes. The target value must be verified before use.
Un-Assayed Control: The target value is not predetermined. This
control must be fully validated (run at least 20 times in a single run
and then run once a day for 20 consecutive days to establish a target
value).
Homemade Control: If the assayed control material is not easily
commercially available (e.g. for an esoteric test), the control material may
be prepared by the laboratory staff by dissolving correctly weighed pure
material in an aqueous-based solvent or in serum or whole blood (for an
analyte not present in humans, e.g. a drug).
Commercially available control materials may be obtained as a ready-to-use
liquid control or as a lyophilized powder. If control material is available in
the form of lyophilized powder, it must be reconstituted prior to use by
strictly following the manufacturer’s recommended protocol. Control materi-
als must be stored in a refrigerator following manufacturer’s recommenda-
tions and the expiration date of the control must be clearly visible so that an
expired control is not used by mistake. Usually low, medium, and high con-
trols of an analyte are used to indicate analyte concentrations both in a nor-
mal physiological state and a disease state. At least two controls must be
used for each analyte (high and low controls). Control materials must be run
along with patient samples or at least once in each shift (a minimum of
three times in a 24 h period) depending on the assay.
Quality control in the laboratory may be both internal and external. Internal
quality control is essential and results are plotted in a Levey Jennings chart
as discussed below. The most common example of external quality control is
analysis of CAP (College of American Pathologists) proficiency samples for
most tests offered by a clinical laboratory. Proficiency samples may not be
available for a few esoteric tests. CLIA '88 (Clinical Laboratory Improvement Amendments of 1988) requires all clinical laboratories to register with the government and to
disclose all tests these laboratories offer. The test may be “waived tests” or
“non-waived tests:”
“Waived tests” are those that laboratories can perform as long as they follow the manufacturer's protocol. Enrolling in an external proficiency-testing program such as a CAP survey is not required for waived tests.
“Non-waived tests” are moderately complex or complex tests.
Laboratories performing such tests are subjected to all CLIA regulations
and must be inspected by CLIA inspectors every two years or by
inspectors from non-government organizations such as CAP or the Joint Commission on Accreditation of Healthcare Organizations (JCAHO). In
addition, a laboratory must participate in an external proficiency program
(most commonly CAP proficiency surveys) and must successfully pass
proficiency testing in order to operate legally. A laboratory must produce
correct results for four of five external proficiency specimens for each
analyte, and must have at least an 80% score for three consecutive
challenges.
Since April 2003, clinical laboratories must perform method validation for each new test, even if such a test already has FDA approval.
Currently, most common external proficiency testing samples are offered by
CAP, and there are proficiency specimens for 580 analytes. The major
features of CAP external proficiency testing include:
CAP proficiency samples are mailed to participating laboratories three
times a year and there are at least five samples for each analyte during
this period.
CAP proficiency samples have a matrix similar to patient specimens, and such specimens must be analyzed just like a regular patient specimen. For
example, a CAP specimen cannot be analyzed in duplicate or only on the
day shift; such practice to pass CAP proficiency testing is a violation of
established practice guidelines.
CAP proficiency testing results must be reported to CAP and later graded
or ungraded results must arrive at the laboratory for evaluation by
laboratory professionals. A laboratory director or designee must sign
results of a CAP survey and must act if the laboratory fails a survey.
CAP proficiency test results are graded based on performance of all
participating laboratories. There are various criteria for acceptability of a
result. Results must be within ± 2 SD of the peer group mean (calculated by taking into account all values reported by participating laboratories), or within a fixed percentage of a target value (e.g. within 10% of the target value), or within a fixed deviation from the target value (e.g. within ± 4 mol/L of the target value).
The best way to evaluate CAP proficiency testing results of an individual
clinical laboratory is to use the e-lab solution available from the CAP for
downloading.
If CAP proficiency testing is not available, then the laboratory must
validate the test every six months by comparing values obtained by the
test with values obtained by a reference laboratory or another laboratory
offering the test (using split samples). Alternatively, if proficiency
samples are available from another source, for example, AACC (American
Association for Clinical Chemistry), passing such proficiency testing is
also acceptable.
In addition to the CAP external proficiency-testing program, a laboratory may participate in other proficiency testing programs. However, for laboratory accreditation by CAP, the laboratory must participate in a CAP proficiency survey, provided that the proficiency specimen is available from the CAP.
There are a number of publications indicating that participating in external proficiency surveys such as those offered by CAP is useful in improving the quality of a clinical laboratory operation [1–3].
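The grading criteria described above (within ± 2 SD of the peer-group mean, within a fixed percentage of the target, or within a fixed deviation of the target) can be sketched as a simple acceptability check; the function, thresholds, and numbers below are illustrative assumptions, not CAP's actual grading software:

```python
def acceptable(result, peer_mean=None, peer_sd=None,
               target=None, pct_limit=None, abs_limit=None):
    """Return True if a proficiency result meets any applicable criterion:
    within +/- 2 SD of the peer-group mean, within pct_limit percent of the
    target, or within abs_limit units of the target. (Illustrative sketch.)"""
    if peer_mean is not None and peer_sd is not None:
        if abs(result - peer_mean) <= 2 * peer_sd:
            return True
    if target is not None and pct_limit is not None:
        if abs(result - target) <= pct_limit / 100 * target:
            return True
    if target is not None and abs_limit is not None:
        if abs(result - target) <= abs_limit:
            return True
    return False

# Survey results graded two different ways (illustrative numbers):
print(acceptable(102, peer_mean=100, peer_sd=2))  # True: within 2 SD of peers
print(acceptable(115, target=100, pct_limit=10))  # False: more than 10% off target
```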
4.7 LEVEY JENNINGS CHART AND WESTGARD
RULES
In addition to participating in the CAP program, clinical laboratories must
run control specimens every shift, at least three times in a 24 h cycle. Also,
instruments must be calibrated as needed in order to maintain good labora-
tory practice. Calibration is needed for all assays that a clinical laboratory
offers. Calibration of immunoassays is discussed in Chapter 2. However,
other assays are calibrated using calibrators that are either commercially
available or homemade:
Calibrators are defined as materials that contain known amounts of the
analyte of interest. For a single assay, at least two calibrators are needed
for calibration, a zero calibrator (contains no analyte) and a high
calibrator containing the amount of the analyte that represents the upper
end of the analytical measurement range. However, five to six calibrators
are commonly used for calibration. One calibrator must be a zero
calibrator and the highest calibrator must contain a concentration of the
analyte at the upper end of the analytical measurement range. Other
calibrators usually have concentrations in between the zero calibrator and
the highest calibrator, and represent normal values of the analyte as well
as values expected in a disease state (for drugs, values below therapeutic
range, between therapeutic ranges, and then toxic range).
Controls are materials that contain a known amount of the analyte. The
matrix of the control must be similar to the matrix of the patient’s
sample; for example, matrix of the control must resemble serum for
assays conducted in serum or plasma.
A Levey Jennings chart is commonly used for recording observed values of
controls during daily operation of a clinical laboratory. A Levey Jennings
chart is a graphical representation of all control values for an assay during an
extended period of laboratory operation. In this graphical representation,
values are plotted with respect to the calculated mean and standard deviation, and if all controls are within the mean ± 2 SD, then all control values are within acceptable limits and all runs during that period will have acceptable performance (Figure 4.3). In this figure, all glucose low controls
were within acceptable limits for the entire month. The Levey Jennings chart
must be constructed for each control (low and high control or low, medium,
and high control) for each assay the laboratory offers. For example, if the laboratory runs two controls (low and high) for each test and offers 100 tests, then there will be 100 × 2, or 200, Levey Jennings charts each month.
Usually a Levey Jennings chart is constructed for one control for one month.
The laboratory director or designee must review all Levey Jennings charts
each month and sign them for compliance with an accrediting agency.
FIGURE 4.3
Levey Jennings chart with no violation.
Table 4.1 Westgard Rules

Rule | Violation | Accept/Reject Run | Error Type
1-2s | One control value is outside the ± 2 SD limit, but the other control is within the ± 2 SD limit | Accept run | Random
1-3s | One control exceeds the ± 3 SD limit | Reject run | Random
2-2s | Both controls are outside the ± 2 SD limit, or two consecutive controls are outside the limit | Reject run | Systematic
R-4s | One control exceeds +2 SD and another exceeds −2 SD | Reject run | Random
4-1s | Four consecutive controls exceed +1 SD or −1 SD | Reject run* | Systematic
10x | Ten consecutive control values fall on one side of the mean | Reject run* | Systematic

*Although these are rejection rules, a laboratory may consider these violations as warnings, accept the runs, and take steps to correct such systematic errors.
However, if technologists review results of the control during a run and accept the run if the value of the control is within an acceptable range established by the laboratory (usually mean ± 2 SD), then the laboratory supervisor does not need to review all control data on a daily basis; usually the supervisor reviews all control data weekly.
Usually Westgard rules are used for interpreting a Levey Jennings chart, and for certain violations a run must be rejected and the problem resolved prior to resuming testing of patient samples. Various errors can occur in Levey Jennings charts, including shift, trend, and other violations (Table 4.1). The basic principle is that control values must fall within ± 2 SD of the mean, but there are some situations when violation of Westgard rules occurs despite control values that are within the ± 2 SD limits of the mean. Usually 1-2s is a warning rule and occurs due to random error (Figure 4.4), and the other rules are rejection rules. In addition, shift (Figure 4.5) and trend (Figure 4.6) may be observed in Levey Jennings charts, indicating systematic errors where corrective actions must be taken. When 10 or more consecutive control values fall on one side of the mean, a shift is observed (10x rule). A 10x violation may also indicate a trend when control values move steadily upward or downward.
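A few of the rules in Table 4.1 can be sketched as checks over control z-scores, i.e. (value − mean)/SD. This is a simplified single-control illustration with invented data, not a full multi-rule implementation:

```python
def zscores(values, mean, sd):
    """Convert raw control values to z-scores: (value - mean) / SD."""
    return [(v - mean) / sd for v in values]

def rule_1_3s(z):   # any control beyond +/- 3 SD -> reject (random error)
    return any(abs(x) > 3 for x in z)

def rule_2_2s(z):   # two consecutive controls beyond the same 2 SD limit
    return any(z[i] > 2 and z[i + 1] > 2 or z[i] < -2 and z[i + 1] < -2
               for i in range(len(z) - 1))

def rule_10x(z):    # ten consecutive values on one side of the mean (shift)
    return any(all(x > 0 for x in z[i:i + 10]) or all(x < 0 for x in z[i:i + 10])
               for i in range(len(z) - 9))

# Glucose low control with mean 84 and SD 3.2 (values are illustrative);
# all ten values sit above the mean, so the 10x shift rule fires:
z = zscores([85, 86, 85.5, 87, 86.2, 85.8, 86.5, 85.1, 86.9, 85.4], 84, 3.2)
print(rule_1_3s(z), rule_2_2s(z), rule_10x(z))  # False False True
```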
4.8 DELTA CHECKS
Delta checks are an additional quality control measure adopted by the com-
puter of an automated analyzer or the laboratory information system (LIS)
where a value is flagged if the value deviates more than a predetermined
limit from the previous value in the same patient. The limit of deviation for
FIGURE 4.4
Levey Jennings chart showing certain violations (1-2s, 2-2s, 4-1s, and 1-3s).
FIGURE 4.5
Levey Jennings chart showing shift of control values.
FIGURE 4.6
Levey Jennings chart showing trend.
each analyte is set by laboratory professionals. The basis of the delta check is
that the value of an analyte in a patient should not deviate significantly from
the previous value unless some intervention has taken place; for example, a
high glucose value may decrease significantly following administration of
insulin. If a value fails a delta check, then a further investigation should be
made. A phone call to the nurse may uncover issues such as an erroneous
result due to collection of a specimen from an IV line or collection of the
wrong specimen. Quality control of the assay must also be reviewed to
ensure that the erroneous result is not due to instrument malfunction.
The value of a delta check is usually based on one of the following criteria:
Delta difference: current value − previous value should be within a
predetermined limit.
Delta percent change: delta difference/current value.
Rate difference: delta difference/delta interval × 100.
Rate percent change: delta percent change/delta interval.
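The four criteria can be expressed directly as code. This is a hypothetical helper following the formulas exactly as stated above; the function name, the units, and the 8-hour interval in the example are illustrative:

```python
# Illustrative implementation of the four delta-check criteria, following
# the formulas as given in the text.

def delta_checks(current, previous, interval_hours):
    delta = current - previous                      # delta difference
    return {
        "delta_difference": delta,
        "delta_percent_change": delta / current,
        "rate_difference": delta / interval_hours * 100,
        "rate_percent_change": (delta / current) / interval_hours,
    }

# Example: potassium rose from 4.0 to 6.0 mmol/L over 8 hours
result = delta_checks(current=6.0, previous=4.0, interval_hours=8)
print(result)
```

In practice each computed quantity would be compared against the predetermined limit set by the laboratory for that analyte.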
4.9 METHOD VALIDATION/EVALUATION
OF A NEW METHOD
Since April 2003, clinical laboratories must perform method validation for
each new test implemented in the laboratory even though such tests have
FDA approval. The following are steps for method validation as well as
implementation of a new method in the clinical laboratory:
Within-run assay precision must be validated by running low, medium,
and high controls (or low and high controls) 20 times each in a single
run. Then the mean, standard deviation, and CV must be calculated
individually for each control level.
Between-run assay precision must be established by running low,
medium, and high controls (or low and high controls) once daily for
20 consecutive days. Then the mean, standard deviation, and CV must be
calculated.
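The precision calculations above reduce to computing the mean, SD, and CV for each control level; a minimal sketch with invented replicate values:

```python
# Mean, SD, and CV for 20 replicates of one control level.
from statistics import mean, stdev

def precision_summary(replicates):
    m = mean(replicates)
    sd = stdev(replicates)            # sample standard deviation (n - 1)
    cv = sd / m * 100                 # coefficient of variation, percent
    return m, sd, cv

low_control = [3.1, 3.0, 3.2, 3.1, 3.0, 3.1, 3.2, 3.0, 3.1, 3.1,
               3.0, 3.2, 3.1, 3.1, 3.0, 3.2, 3.1, 3.0, 3.1, 3.1]
m, sd, cv = precision_summary(low_control)
print(f"mean={m:.2f}, SD={sd:.3f}, CV={cv:.1f}%")
```

The same function would be applied separately to the medium and high control replicates.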
Although assay linearity is provided by the manufacturer, it must be
validated in the clinical laboratory prior to running patient specimens.
Linearity is essentially the calibration range of the assay (also called the
“analytical measurement range”). To validate linearity, a high-end
calibrator or standard can be selected and then diluted to produce at
least four to five dilutions that cover the entire analytical measurement
range. If the observed values match the expected values, the assay can be
considered linear over the stated range.
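A hedged sketch of such a dilution-recovery check; the 10% acceptance tolerance and the example calibrator values are illustrative choices, not mandated limits:

```python
# Compare observed results with expected values at each dilution of a
# high calibrator; flag the assay as nonlinear if any recovery deviates
# from 100% by more than the chosen tolerance.

def linearity_ok(high_value, dilution_factors, observed, tolerance=0.10):
    """dilution_factors: e.g. 1.0, 0.5, 0.25 applied to the high calibrator."""
    for factor, obs in zip(dilution_factors, observed):
        expected = high_value * factor
        recovery = obs / expected
        if abs(recovery - 1.0) > tolerance:
            return False
    return True

# Example: 400 mg/dL glucose calibrator diluted across the range
print(linearity_ok(400, [1.0, 0.75, 0.5, 0.25, 0.125],
                   [396, 305, 198, 102, 49]))
```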
The detection limit is traditionally determined by running a zero
calibrator or blank specimen 20 times and then determining the mean
and standard deviation. The detection limit (also called the lower limit of
detection) is the mean + 2 SD value. However, the guidelines of the
Clinical and Laboratory Standards Institute (CLSI, EP17 protocol) advise
that a specimen with no analyte (blank specimen) should be run; then the
Limit of Blank (LoB) = Mean + 1.645 SD. This should be established by
running blank specimens 60 times, but if the manufacturer has already
established a guideline, then 20 runs are enough. The Limit of
Quantification is usually defined as the concentration at which the CV is
20% or less [4].
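The LoB calculation can be sketched as follows, using the one-sided 95th-percentile multiplier 1.645 of the CLSI approach described above; the blank replicate values are invented for illustration:

```python
# Limit of blank from replicate measurements of an analyte-free specimen.
from statistics import mean, stdev

def limit_of_blank(blank_values):
    return mean(blank_values) + 1.645 * stdev(blank_values)

blanks = [0.02, 0.01, 0.03, 0.02, 0.02, 0.01, 0.03, 0.02, 0.02, 0.01,
          0.03, 0.02, 0.02, 0.02, 0.01, 0.03, 0.02, 0.02, 0.01, 0.02]
print(f"LoB = {limit_of_blank(blanks):.4f}")
```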
Comparison of a new method with an existing method is a very
important step in method validation. For this purpose, at least 100
patient specimens must be run in the laboratory with both the existing
method and the new method. It is advisable to batch patient samples
and then run these specimens by both methods on the same day and, if
possible, at the same time (by splitting specimens). Then results obtained
by the existing (reference) method should be plotted on the x-axis and
the corresponding values obtained by the new method on the y-axis.
Linear regression is the simplest way of comparing results obtained by
the existing method in the laboratory and the new method. The linear
regression line is the line of best fit through all data points. A computer
can produce the linear regression line as well as the linear regression
equation, which is the equation representing a straight line (regression
line), Equation 4.9:

y = mx + b    (4.9)

Here, “m” is called the slope of the line and “b” is the intercept. The
computer calculates the equation of the regression line using a least
squares approach. The software also calculates “r,” the correlation
coefficient, using a complicated formula.
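The least squares calculation behind the regression line and "r" can be sketched directly; the paired method-comparison values below are fabricated for illustration:

```python
# Least-squares fit of new-method results (y) against reference-method
# results (x), returning slope, intercept, and correlation coefficient.
from statistics import mean

def linear_regression(x, y):
    mx, my = mean(x), mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    m = sxy / sxx                      # slope
    b = my - m * mx                    # intercept
    r = sxy / (sxx * syy) ** 0.5       # correlation coefficient
    return m, b, r

x = [2.0, 3.5, 5.0, 6.5, 8.0, 9.5]     # existing (reference) method
y = [2.2, 3.8, 5.4, 7.0, 8.6, 10.2]    # new method
m, b, r = linear_regression(x, y)
print(f"y = {m:.3f}x + {b:.3f}, r = {r:.3f}")
```

A real comparison would of course use at least 100 paired patient results rather than six.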
4.10 HOW TO INTERPRET THE REGRESSION
EQUATION?
The regression equation (y = mx + b) provides a lot of important informa-
tion regarding how the new method (y) compares with the reference method
(x). Interpretations of a linear regression equation include:
Ideal value: m = 1, b = 0, and y = x. In reality this never happens.
If the value of m is less than 1.0, then the new method shows negative
bias compared to the reference method. Bias can be calculated as 1 − m;
for example, if the value of “m” is 0.95, then the negative bias is
1 − 0.95 = 0.05, or 0.05 × 100 = 5%.
If the value of m is over 1.0, it indicates positive bias in the new method.
For example, if m is 1.07, then the positive bias in the new method is
1.07 − 1 = 0.07, or 0.07 × 100 = 7%.
The intercept “b” can be a positive or negative value and should be a
relatively small number.
An ideal value of “r” (correlation coefficient) is 1, but any value above
0.95 is considered good, and a value of 0.99 is considered excellent. The
correlation coefficient indicates how well the new method agrees with
the existing method, but it reveals nothing about any inherent bias in
the new method. Therefore, the slope must be taken into account to
determine bias.
In our laboratory, we evaluated a new immunoassay for mycophenolic acid,
an immunosuppressant, against an HPLC-UV method, the current method
in our laboratory, using specimens from 60 transplant recipients after
de-identifying specimens [5]. The regression equation was as follows
(Equation 4.10):

y = 1.1204x + 0.0881 (r = 0.98)    (4.10)

This equation indicated that there was an average 12.04% positive bias with
the new immunoassay method compared to the reference HPLC-UV method
in determining mycophenolic acid concentration. This was most likely due
to cross-reactivity of mycophenolic acid acyl glucuronide with the mycophe-
nolic acid assay antibody, because this metabolite does not interfere with
mycophenolic acid determination by HPLC-UV. However, the correlation
coefficient of 0.98 indicates good agreement between the two methods.
4.11 BLAND-ALTMAN PLOT
Although linear regression analysis is useful for method comparison, such
analysis is affected by extreme values (where one or a series of “x” values
differs widely from the corresponding “y” values) because equal weight is
given to all points. A Bland-Altman plot compares two methods by plotting
the difference between the two measurements on the y-axis and the average
of the two measurements on the x-axis. The difference between the two
methods can be expressed as a percentage difference or as a fixed difference
such as 1 SD, 2 SD, or a fixed number. It is easier to see bias between two
methods using a Bland-Altman plot.
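The quantities behind a Bland-Altman plot can be computed as follows; the mean ± 2 SD limits of agreement are a common convention, and the paired data are invented:

```python
# For each specimen: difference between methods (y-axis of the plot)
# against their average (x-axis), plus mean bias and limits of agreement.
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    diffs = [a - b for a, b in zip(method_a, method_b)]
    avgs = [(a + b) / 2 for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return avgs, diffs, bias, (bias - 2 * sd, bias + 2 * sd)

a = [5.1, 6.3, 7.8, 9.2, 10.5]   # new method
b = [5.0, 6.0, 7.5, 9.0, 10.0]   # existing method
avgs, diffs, bias, limits = bland_altman(a, b)
print(f"bias = {bias:.2f}, limits of agreement = {limits}")
```

Because each specimen contributes one difference, a single extreme pair is immediately visible as an outlying point rather than silently pulling the regression line.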
4.12 RECEIVER OPERATOR CURVE
A receiver operator curve (ROC) is often used to make an optimal decision
for a test. ROC plots the true positive rate of a test (sensitivity) either as a
FIGURE 4.7
Receiver operator curve (ROC) plotting true positive rate (sensitivity) against false positive rate (1-specificity), showing three decision points.
scale of 0-1 (1 is highest sensitivity) or as a percent on the y-axis versus the
false positive rate (1-specificity) on the x-axis. As sensitivity increases,
specificity decreases. A hypothetical ROC curve is given in Figure 4.7. If
decision point 1 is selected for the test value, then the sensitivity of the test
is 0.57 or 57% but the specificity is very high (99%; on the 1-specificity
scale, 0.01). On the other hand, if a higher value of the test is selected as the
decision point (decision point 3), the sensitivity increases to nearly 90% but
the specificity decreases to 42% (on the 1-specificity scale, 0.58) (Figure 4.7).
Therefore, a decision point can be chosen that best serves the clinical
purpose of the test. In general, the closer the decision point is to the y-axis,
the better the specificity.
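Sensitivity and 1-specificity at a candidate decision point can be computed as sketched below; the diseased and healthy test values are invented, and "positive" is taken to mean a value at or above the cutoff:

```python
# One ROC point per cutoff: (sensitivity, 1 - specificity).

def roc_point(diseased, healthy, cutoff):
    """Positive result = value at or above cutoff."""
    tp = sum(v >= cutoff for v in diseased)
    fn = len(diseased) - tp
    fp = sum(v >= cutoff for v in healthy)
    tn = len(healthy) - fp
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, 1 - specificity

diseased = [8, 9, 10, 11, 12, 13, 14, 15]
healthy = [2, 3, 4, 5, 6, 7, 8, 9]
for cutoff in (6, 9, 12):            # three candidate decision points
    sens, fpr = roc_point(diseased, healthy, cutoff)
    print(f"cutoff {cutoff}: sensitivity={sens:.2f}, 1-specificity={fpr:.2f}")
```

Sweeping the cutoff across all observed values and plotting the resulting pairs traces out the full ROC curve.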
4.13 WHAT IS SIX SIGMA?
Six sigma originated from Motorola Corporation’s approach to total quality
management, with the objective of reducing defects in manufacturing.
Although six sigma was originally developed for manufacturing processes,
the principles can be applied to total quality improvement of any operation,
including a clinical laboratory operation. The goal of six sigma is to achieve
an error rate of 3.4 out of one million for a process, or an error rate of only
0.00034%. An error rate of 0.001% is considered 5.8 sigma. The goal of a
clinical laboratory operation is to reduce the error rate to at least 0.1%
(4.6 sigma), but preferably 0.01% (5.2 sigma) or better. Improvement can be
made during any phase of the laboratory operation (pre-analytical,
analytical, or post-analytical) with the overall goal of reducing laboratory
errors.
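The sigma levels quoted above follow from the conventional 1.5-sigma-shift conversion between defect rate and sigma level (a convention of the six sigma methodology, not stated explicitly in the text); a sketch using only the standard library:

```python
# Convert an error rate (fraction of defective results) to a sigma level
# using the common 1.5-sigma-shift convention.
from statistics import NormalDist

def sigma_level(error_rate):
    """error_rate as a fraction, e.g. 0.001 for 0.1%."""
    return NormalDist().inv_cdf(1 - error_rate) + 1.5

for rate in (3.4e-6, 1e-5, 1e-3, 1e-4):
    print(f"error rate {rate:.6%} -> {sigma_level(rate):.1f} sigma")
```

This reproduces the figures in the text: 3.4 defects per million corresponds to 6.0 sigma, 0.001% to about 5.8 sigma, 0.1% to about 4.6 sigma, and 0.01% to about 5.2 sigma.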
4.14 ERRORS ASSOCIATED WITH REFERENCE RANGE
Reference ranges are provided with patients’ values to help clinicians interpret
laboratory test results. However, most reference ranges cover values in the
range of mean ± 2 SD as observed in the normal population. Therefore, a
reference range only accounts for 95% of the values observed in healthy
individuals for the particular test, and statistically 5% of the values of the
normal population should fall outside the reference range. If more than one
test is used, then a greater percentage of the values should fall outside the
reference range. The likelihood of “n” test results falling within the reference
range can be calculated with Equation 4.11:
% Results falling within normal range = 0.95^n × 100    (4.11)
The percent of results falling outside the reference range in normal people is
shown in Equation 4.12:

(1 − 0.95^n) × 100    (4.12)
For example, if five tests are ordered for health screening of a healthy person,
then Equation 4.13 holds true:

% Results falling outside normal range = (1 − 0.95^5) × 100
= (1 − 0.773) × 100 = 22.7%    (4.13)
In Table 4.2, examples of a number of tests falling within and outside the
reference range are given.
Table 4.2 Testing and Reference Range*

Number of Tests    Within Reference Range    Outside Reference Range
1                  95%                       5%
2                  90%                       10%
3                  85.7%                     14.3%
4                  81.4%                     18.6%
5                  77.3%                     22.7%
6                  73.5%                     26.5%
10                 59.8%                     40.2%

*For multiple tests ordered in a healthy subject, the chance that all results fall within the
reference range and the chance that at least one result falls outside the reference range.
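The values in Table 4.2 follow directly from Equations 4.11 and 4.12 (any small differences from the table are only rounding); a minimal sketch:

```python
# Percent of n independent test results that all fall within a
# mean +/- 2 SD reference range (each covering 95% of healthy values).

def within_reference(n):
    return 0.95 ** n * 100

for n in (1, 2, 5, 10):
    inside = within_reference(n)
    print(f"{n} tests: {inside:.2f}% within, {100 - inside:.2f}% outside")
```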
4.15 BASIC STATISTICAL ANALYSIS: STUDENT t-TEST AND RELATED TESTS
A new method can be validated against an existing method by using
regression analysis, as stated earlier in the chapter. Bias can be calculated
based on analysis of the slope or a Bland-Altman plot. However, in some
instances the bias between the two methods can be significant, and in this
case a laboratory professional needs to know whether values of an analyte
determined by the reference method are significantly different from the
values determined by the new method. This can be assessed from the means
of the two sets of values and their standard deviations using the Student
t-test:
The Student t-test is useful to determine if one set of values is different
from another set of values based on the difference between mean values
and standard deviations. This statistical test is also useful in clinical
research to see if values of an analyte in the normal state are significantly
different from the values observed in a disease state.
The Student t-test is only applicable if both distributions of values are
normal (Gaussian).
If the “t” value is significant based on the degrees of freedom (n1 + n2 − 2,
where n1 and n2 represent the number of values in the set 1 and set 2
distributions), then the null hypothesis (there is no difference between the
two sets of values) is rejected and it is assumed that values in the set 1
distribution are statistically different from values in the set 2 distribution.
Critical values of t can be easily obtained from published tables.
The F-test is a measure of differences in variances and can also be used to
see if one set of data is different from another set of data. The F-test can
be extended to the analysis of multiple sets of data, when it is called
ANOVA (analysis of variance).
If the distribution of data is non-Gaussian, then neither the t-test nor the
F-test can be used. In this case, the Wilcoxon rank sum test (also known
as the Mann-Whitney U test) should be used.
The formulas for the t-test and Mann-Whitney U test can be found in any
textbook on statistics. However, a detailed discussion of these statistical
methods is beyond the scope of this book.
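A pooled two-sample t statistic with n1 + n2 − 2 degrees of freedom can be sketched as follows; the resulting t would then be compared against a published table, and the data below are invented:

```python
# Pooled (equal-variance) Student t statistic for two sets of values.
from statistics import mean, stdev

def pooled_t(set1, set2):
    n1, n2 = len(set1), len(set2)
    s1, s2 = stdev(set1), stdev(set2)
    # pooled variance across both sets
    sp2 = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)
    t = (mean(set1) - mean(set2)) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5
    return t, n1 + n2 - 2            # t statistic, degrees of freedom

normal = [4.1, 4.3, 3.9, 4.2, 4.0, 4.1]
disease = [5.0, 5.4, 4.8, 5.2, 5.1, 4.9]
t, df = pooled_t(normal, disease)
print(f"t = {t:.2f}, df = {df}")
```

The pooled form assumes roughly equal variances in the two groups; when that assumption fails, a Welch-type t-test is preferred.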
KEY POINTS
The formula for the coefficient of variation (CV): CV = (SD/mean) × 100.
Standard error of the mean = SD/√n, where n is the number of data points in the set.
If a distribution is normal, the mean, median, and mode are the same.
However, the mean, median, and mode may differ if the distribution is
skewed (not a Gaussian distribution).
In Gaussian distributions, the mean ± 1 SD contains 68.2% of all values, the
mean ± 2 SD contains 95.5% of all values, and the mean ± 3 SD contains 99.7% of
all values in the distribution.
The reference range, when determined by measuring an analyte in at least 100
healthy people whose values follow a normal Gaussian distribution, is
calculated as mean ± 2 SD.
For calculating the sensitivity, specificity, and predictive value of a test, the following
formulas can be used, where TP = true positive, FP = false positive, TN = true
negative, and FN = false negative: (a) Sensitivity (individuals with disease who
show positive test results) = (TP/(TP + FN)) × 100; (b) Specificity (individuals
without disease who show negative test results) = (TN/(TN + FP)) × 100; and
(c) Positive predictive value = (TP/(TP + FP)) × 100.
In a clinical laboratory, three types of control materials are used: assayed control where
the value of the analyte is predetermined, un-assayed control where the target value is
not predetermined, and homemade control where the control material is not easily
commercially available (e.g. an esoteric test).
Quality control in the laboratory may be both internal and external. Internal quality
control is essential, and results are plotted on a Levey-Jennings chart; the most
common example of external quality control is analysis of CAP (College of
American Pathologists) proficiency samples.
“Waived tests” are not complex and laboratories can perform such tests as long as
they follow manufacturer’s protocol. Enrolling in an external proficiency-testing
program such as a CAP survey is not required for waived tests.
“Non-waived tests” are moderately complex or complex tests and laboratories
performing such tests are subjected to all CLIA regulations and must be inspected
by CLIA inspectors every two years, or by inspectors from non-government
organizations such as CAP or the Joint Commission on Accreditation of Healthcare
Organizations (JCAHO). In addition, for all non-waived tests laboratories must
participate in an external proficiency program, most commonly CAP proficiency
surveys, and must successfully pass proficiency testing in order to operate legally.
A laboratory must produce correct results for four of five external proficiency
specimens for each analyte, and must have at least an 80% score for three
consecutive challenges.
Since April 2003, clinical laboratories must perform method validation for each
new test, even if such test is already FDA approved.
A Levey-Jennings chart is a graphical representation of all control values for an
assay during an extended period of laboratory operation. In this graphical
representation, values are plotted with respect to the calculated mean and
standard deviation. If all controls are within ± 2 SD of the mean, then all
control values were within acceptable limits and all runs during that period had
acceptable performance. A Levey-Jennings chart must be constructed for each
control (low and high control, or low, medium, and high control) for each assay
the laboratory offers. The laboratory director or designee must review all
Levey-Jennings charts each month and sign them for compliance with the
accrediting agency.
Usually Westgard rules are used for interpreting Levey-Jennings charts, and for
certain violations, a run must be rejected and the problem must be resolved prior
to resuming testing of patients’ samples. Various errors can occur in
Levey-Jennings charts, including shift, trend, and other violations. Usually 1-2s is a
warning rule and occurs due to random error; other rules are rejection rules (see
Table 4.1).
A delta check is important to identify laboratory errors and can be based on any of
several criteria, including delta difference, delta percent change (delta difference/
current value), rate difference (delta difference/delta interval × 100), or rate percent
change (delta percent change/delta interval).
In method validation, within-run and between-run precision are usually expressed
as CV. Then linearity of the assay is revalidated. Detection limits should be
determined by running a zero calibrator or blank specimen 20 times and then
determining the mean and standard deviation. The detection limit (also called the
lower limit of detection) is considered to be the mean + 2 SD value, but more
sophisticated methods of calculating the limit of detection have also been
described.
Comparison of a new method with an existing method is a very important step in
method validation. For this purpose, at least 100 patient specimens must be run
with the existing method in the laboratory at the same time as the new method.
Then the values are plotted, and a linear regression equation determines the line of
best fit as expressed by the equation y = mx + b, where “m” is the slope of the
line and “b” is the intercept. The computer calculates the equation of the
regression line using a least squares approach. The software also calculates “r,”
the correlation coefficient, by using a complicated formula. The ideal value of
m is 1, while the ideal value of b is zero. In reality, if the slope is less than 1.0, it
indicates negative bias in the new method compared to the old method, and if
the slope is over 1.0, it indicates positive bias.
A receiver operator curve (ROC) is often used to select an optimal decision level
for a test. ROC plots the true positive rate of a test (sensitivity), either on a scale of
0-1 (1 is highest sensitivity) or as a percent, on the y-axis versus the false positive
rate (1-specificity).
The six sigma goal is achieved if the error rate is only 3.4 out of one million
processes, or an error rate of only 0.00034%.
The likelihood of “n” test results falling within the reference range can be
calculated from the formula: % results falling within normal range = 0.95^n × 100.
Therefore, the % of results falling outside the reference range in normal people is
(1 − 0.95^n) × 100.
The Student t-test is useful for determining if one set of values is different from
another set of values based on the difference between mean values and standard
deviations. This statistical test is also useful in clinical research to see if values of
an analyte in the normal state are significantly different from the values observed
in a disease state.
REFERENCES
[1] Jenny RW, Jackson KY. Proficiency test performance as a predictor of accuracy of routine
patient testing for theophylline. Clin Chem 1993;39:76-81.
[2] Tholen D, Lawson NS, Cohen T, Gilmore B. Proficiency test performance and experience
with College of American Pathologists’ programs. Arch Pathol Lab Med 1995;119:307-11.
[3] Boone DJ. Literature review of research related to the Clinical Laboratory Improvement
Amendments of 1988. Arch Pathol Lab Med 1992;116:681-93.
[4] Armbruster DA, Pry T. Limit of blank, limit of detection and limit of quantification. Clin
Biochem Rev 2008;29(Suppl. 1):S49-51.
[5] Dasgupta A, Tso G, Chow L. Comparison of mycophenolic acid concentrations determined
by a new PETINIA assay on the Dimension EXL analyzer and a HPLC-UV method. Clin
Biochem 2013;46:685-7.
CHAPTER 5
Water, Homeostasis, Electrolytes, and Acid-Base Balance
5.1 DISTRIBUTION OF WATER AND ELECTROLYTES IN THE HUMAN BODY
Water is a major constituent of the human body, representing approximately
60% of body weight in men and 55% of body weight in women. Two-thirds
of the water in the human body is associated with intracellular fluid and
one-third is found in extracellular fluid. Extracellular fluid is composed
mostly of plasma (containing 92% water) and interstitial fluid. A major
extracellular electrolyte is sodium. The human body contains approximately
4,000 mmol of sodium, of which 70% is present in an exchangeable form;
the rest is found in bone. The intracellular concentration of sodium is
4-10 mmol/L. The normal sodium level in human serum is 135-145 mmol/L.
Potassium is the major intracellular electrolyte, with an intracellular
concentration of approximately 150 mmol/L. The normal potassium level in
serum is usually considered to be 3.5-5.1 mmol/L. The balance between
intracellular and extracellular electrolytes is maintained by a
sodium-potassium ATPase pump present in cell membranes.
Along with sodium and potassium, the other major electrolytes of the human
body are chloride and bicarbonate. Electrolytes are classified either as
positively charged ions known as cations (sodium, potassium, calcium,
magnesium, etc.) or negatively charged ions known as anions (chloride,
bicarbonate, phosphate, sulfate, etc.). Four major electrolytes of the human
body (sodium, potassium, chloride, and bicarbonate) play important roles
in human physiology, including:
Maintaining water homeostasis of the body.
Maintaining proper pH of the body (7.35 to 7.45).
Maintaining optimal function of the heart.
Participating in various physiological reactions.
Acting as co-factors for some enzymes.
A. Dasgupta and A. Wahed: Clinical Chemistry, Immunology and Laboratory Quality Control
DOI: http://dx.doi.org/10.1016/B978-0-12-407821-5.00005-X
© 2014 Elsevier Inc. All rights reserved.
It is important to drink plenty of water and take in adequate salt on a daily basis
to maintain proper health. Healthy adults (age 19-50) should consume 1.5 g
of sodium and 2.3 g of chloride each day, or 3.8 g of salt each day, to replace lost
salt. The tolerable upper limit of daily salt intake is 5.8 g (5,800 mg), but many
Americans exceed this limit. The average daily sodium intake is 3.5-6 g
(3,500-6,000 mg) per day. Processed foods contain high amounts of sodium
because manufacturers add it for food preservation. For example, a can of
tomato juice may contain up to 1,000 mg of sodium. Adults should consume
4.7 g of potassium each day, but many Americans do not meet this recom-
mended potassium requirement. Potassium-rich foods include bananas, mush-
rooms, spinach, almonds, and a variety of other fruits and vegetables. High
sodium intake may cause hypertension. The Dietary Approaches to Stop
Hypertension (DASH) eating plan recommends a daily intake of no more than
1,600 mg (1.6 g) of sodium. In general, high sodium intake increases blood
pressure; replacing a high-sodium diet with a diet low in sodium and high in
potassium can decrease blood pressure. Sodium and potassium are freely
absorbed from the gastrointestinal tract, and excess sodium is excreted by the
kidneys. Potassium filtered through glomerular filtration in the kidneys is
almost completely reabsorbed in the proximal tubule and is secreted in the dis-
tal tubules in exchange for sodium under the influence of aldosterone.
Interestingly, African Americans excrete less urinary potassium than Caucasians
even while consuming similar diets in the DASH trial. However, consuming a
diet low in sodium may reduce this difference [1].
5.2 PLASMA AND URINE OSMOLALITY
Plasma osmolality is a way to measure the electrolyte balance of the body.
Osmolality (measured by an osmometer in a clinical laboratory) is
technically different from osmolarity, which can be calculated from the
measured sodium, urea, and glucose concentrations of the plasma.
Osmolality is a measure of osmoles of solute per kilogram of solvent,
whereas osmolarity is a measure of osmoles per liter of solution. Because
one kilogram of plasma is almost one liter in volume, the osmolality and
osmolarity of plasma can be considered the same for all practical purposes.
Normal plasma osmolality is 275-300 milliosmoles/kg (mOsm/kg) of water,
while urine osmolality is 50-1,200 mOsm/kg of water. Although plasma and
urine osmolality can be measured using an osmometer, plasma osmolality
can also be calculated by the following formula (Equation 5.1):

Plasma osmolality = 2 × Sodium + Glucose + Urea (all concentrations in mmol/L)    (5.1)
Although the sodium value is expressed in mmol/L, in clinical laboratories
the concentrations of glucose and urea are expressed in mg/dL. Therefore the
formula can be modified as follows to calculate osmolality (Equation 5.2):

Plasma osmolality = 2 × [Sodium in mmol/L] + [Glucose in mg/dL]/18 + [BUN in mg/dL]/2.8    (5.2)

Here, BUN stands for blood urea nitrogen.
Although this formula is commonly used, a stricter approach to calculating
plasma osmolality takes into account other osmotically active substances in
plasma, such as potassium, calcium, and proteins, by adding 9 mOsm/kg to
yield Equation 5.3:

Plasma osmolality = 1.86 × [Sodium in mmol/L] + [Glucose in mg/dL]/18 + [BUN in mg/dL]/2.8 + 9    (5.3)
Plasma osmolality increases with dehydration and decreases with
overhydration. Plasma osmolality regulates secretion of antidiuretic hormone (ADH).
Another important laboratory parameter is the osmolar gap, defined in
Equation 5.4:

Osmolar gap = Observed osmolality − Calculated osmolality    (5.4)
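The calculations of Equations 5.2 and 5.4 can be sketched as follows, with glucose and BUN in conventional US units (mg/dL); the example values are invented:

```python
# Calculated plasma osmolality (Equation 5.2) and osmolar gap (Equation 5.4).

def calculated_osmolality(sodium_mmol_l, glucose_mg_dl, bun_mg_dl):
    return 2 * sodium_mmol_l + glucose_mg_dl / 18 + bun_mg_dl / 2.8

def osmolar_gap(measured, sodium_mmol_l, glucose_mg_dl, bun_mg_dl):
    return measured - calculated_osmolality(sodium_mmol_l, glucose_mg_dl, bun_mg_dl)

# Example: Na 140 mmol/L, glucose 90 mg/dL, BUN 14 mg/dL
calc = calculated_osmolality(140, 90, 14)
print(f"calculated = {calc:.0f} mOsm/kg")
# A measured value of 320 mOsm/kg against this calculated value gives a
# large gap, suggesting an unmeasured osmotically active substance:
print(f"gap = {osmolar_gap(320, 140, 90, 14):.0f} mOsm/kg")
```

The divisors 18 and 2.8 convert glucose and BUN from mg/dL to mmol/L, which is why they appear in the modified formula.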
A measured osmolality higher than the calculated osmolality (an elevated
osmolar gap) can be due to the presence of abnormal osmotically active
substances, such as in overdose with ethanol, methanol, or ethylene glycol,
or to a reduced fractional water content of plasma, as in hyperlipidemia or
paraproteinemia. Although the osmolality of random urine is normally
relatively low, fluid restriction can raise urine osmolality to 850 mOsm/kg
or higher (although still within the normal range of urine osmolality).
Greater than normal urine osmolality may also be seen when there is:
Reduced renal perfusion (e.g. dehydration, shock, renal artery stenosis).
Excessive water retention without renal hypoperfusion (e.g. SIADH).
Osmotically active substances in urine (e.g. glycosuria).
5.3 HORMONES INVOLVED IN WATER AND
ELECTROLYTE BALANCE
Antidiuretic hormone (ADH) and aldosterone play important roles in the
water and electrolyte balance of the human body. ADH, along with oxytocin,
is produced in the supraoptic and paraventricular nuclei of the hypothala-
mus. These hormones are stored in the posterior pituitary and released in
response to appropriate stimuli. ADH secretion is regulated by plasma osmo-
lality. If plasma osmolality increases, it stimulates secretion of ADH, which
acts at the collecting duct of the nephron, where it causes reabsorption of
water only and produces concentrated urine. In this process water is con-
served in the body and, as a result, plasma osmolality is reduced. A low
plasma osmolality, on the other hand, reduces secretion of ADH, so more
water is excreted as urine (diluted urine) and plasma osmolality is corrected.
At high concentrations, ADH also causes vasoconstriction, thus raising
blood pressure. Increased water retention due to ADH can result in
the following conditions:
Concentrated urine
Increased plasma volume
Reduced plasma osmolality.
Therefore, it is logical to assume that ADH secretion is stimulated by low
plasma volume and increased plasma osmolality. In humans, urine produced
during sleep is more concentrated than urine produced during waking hours.
Usually urine in the morning (first void) is most concentrated. This may be
partly due to less or no fluid intake during sleeping hours, but plasma ADH
concentration is also higher during the night than during the day. It has been
postulated that rapid eye movement (REM) sleep or dreaming sleep induces
ADH secretion.
5.4 RENIN-ANGIOTENSIN-ALDOSTERONE SYSTEM
With low circulating blood volume, the juxtaglomerular apparatus of the kid-
ney secretes renin, a peptide hormone, into the blood stream. Renin converts
angiotensinogen, released by the liver, into angiotensin I, which is then con-
verted into angiotensin II in the lungs by angiotensin-converting enzyme
(ACE). Angiotensin II is a vasoconstrictor and also stimulates release of aldo-
sterone from the adrenal cortex. This is known as the renin-angiotensin-
aldosterone system. Aldosterone is a mineralocorticoid secreted from the
zona glomerulosa of the adrenal cortex. It acts on the distal tubules and
collecting ducts of the nephron and causes:
Retention of water
Retention of sodium
Loss of potassium and hydrogen ions.
Retention of water and sodium results in increased plasma volume and blood
pressure. An increase in plasma potassium is a strong stimulus for aldosterone
synthesis and release. Atrial natriuretic peptide (ANP) and brain natriuretic
peptide (BNP) are secreted by the right atrium and the ventricles, respectively.
The main stimulus for secretion of these peptides is volume overload.
5.5 DIABETES INSIPIDUS
Diabetes insipidus is an uncommon condition that occurs when the kidneys
are unable to concentrate urine properly. As a result, diluted urine is pro-
duced, affecting plasma osmolality. The cause of diabetes insipidus is either a
lack of secretion of ADH (cranial diabetes insipidus, also known as central
diabetes insipidus) or the inability of ADH to act at the collecting duct of
the kidney (nephrogenic diabetes insipidus). Cranial diabetes insipidus is
due to hypothalamic or pituitary damage. The major causes of such
damage include the following conditions:
Head injury
Stroke
Tumor
Infections affecting the central nervous system
Sarcoidosis
Surgery involving the hypothalamus or pituitary.
Diabetes insipidus due to viral infection is rarely reported, but one report illus-
trates diabetes insipidus due to type A (sub-type: H1N1, swine flu) influenza
virus infection in a 22-year-old man who produced up to 9 liters of urine per
day [2]. Neuroendocrine complication following meningitis in neonates may
also cause diabetes insipidus [3]. Pituitary abscess is a rare life-threatening
condition that may also cause central diabetes insipidus. Autoimmune diabetes
insipidus is a rare, inflammatory, non-infectious form of diabetes insipidus
that presents with antibodies against ADH-secreting cells.
CASE REPORT
A 48-year-old woman with diffuse large cell lymphoma and severe hepatic involvement presented with herpes zoster infection of the right eye and was treated with oral acyclovir. On the ninth day of chemotherapy she developed fever, weakness, hypotension, pancytopenia, renal failure, and a highly elevated C-reactive protein. A diagnosis of Gram-negative sepsis was made and she was treated with intravenous antibiotics along with acyclovir, catecholamine, and hydrocortisone. Three days later she developed hypotonic polyuria (12 liters of urine per day), and a diagnosis of diabetes insipidus was made based on a low urine osmolality of 153 mmol/kg and undetectable vasopressin (ADH) levels. A brain MRI showed no pituitary abnormality, but encephalitis was present, as evidenced by hyperintensities in the area of the left lateral ventricle of the cerebrum. Analysis of cerebrospinal fluid showed herpes zoster infection. The authors concluded that central diabetes insipidus was due to herpes encephalitis in this patient. The patient responded to desmopressin (a synthetic analog of vasopressin, also known as ADH) therapy [4].
72 CHAPTER 5: Water, Homeostasis, Electrolytes, and Acid Base Balance
Nephrogenic diabetes insipidus is due to the inability of the kidney to concentrate urine in the presence of ADH. The major causes of nephrogenic diabetes insipidus include:
Chronic renal failure
Polycystic kidney disease
Hypercalcemia, hypokalemia
Drugs such as amphotericin B, demeclocycline, lithium.
In both types of diabetes insipidus, patients usually present with dilute
urine of low osmolality, while plasma osmolality is higher than normal.
These patients also experience excessive thirst and drink large amounts of
fluid to compensate for the high urine output. Even if a patient is not
allowed to drink fluid, the urine remains dilute, with a risk of dehydration.
In contrast, in a normal healthy individual fluid deprivation results in
concentrated urine. This observation is the basis of the water deprivation
test to establish the presence of diabetes insipidus in a patient. In order to
differentiate cranial diabetes insipidus from nephrogenic diabetes insipidus,
intranasal vasopressin is administered. If urine osmolality increases, the
diagnosis is cranial diabetes insipidus; if the urine is still dilute with no
change in urine osmolality, the diagnosis is nephrogenic diabetes insipidus.
The congenital form of nephrogenic diabetes insipidus is a rare disease, most
commonly inherited in an X-linked manner with mutations of the arginine
vasopressin receptor type 2 (AVPR2) gene [5].
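The vasopressin-challenge logic described above can be sketched as a small decision helper. The fold-rise cutoff used here is a hypothetical illustration, since the text only states that urine osmolality either increases or remains unchanged; real protocols use specific absolute criteria.

```python
def classify_diabetes_insipidus(osm_before: float, osm_after: float,
                                rise_threshold: float = 1.5) -> str:
    """Classify diabetes insipidus from urine osmolality (mOsm/kg)
    measured before and after vasopressin administration during a
    water deprivation test.

    rise_threshold is an illustrative fold-rise cutoff, not a
    clinically validated criterion.
    """
    if osm_after >= rise_threshold * osm_before:
        # Kidneys respond to exogenous ADH: the defect is lack of ADH.
        return "cranial (central) diabetes insipidus"
    # Urine remains dilute despite ADH: kidneys cannot respond.
    return "nephrogenic diabetes insipidus"
```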
5.6 THE SYNDROME OF INAPPROPRIATE
ANTIDIURETIC HORMONE SECRETION (SIADH)
The syndrome of inappropriate antidiuretic hormone secretion (SIADH, also
known as Schwartz–Bartter syndrome) is due to excessive and inappropriate
release of antidiuretic hormone (ADH). Normally, a reduction of plasma
osmolality causes a reduction of ADH secretion, but in SIADH reduced plasma
osmolality does not inhibit ADH release from the pituitary gland, causing
water overload. The main clinical features of SIADH include:
Hyponatremia (plasma sodium <131 mmol/L)
Decreased plasma osmolality (<275 mOsm/kg)
Urine osmolality >100 mOsm/kg and high urinary sodium (>20 mmol/L)
No edema.
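The laboratory features listed above translate directly into a simple check; the cutoffs (sodium <131 mmol/L, plasma osmolality <275 mOsm/kg, urine osmolality >100 mOsm/kg, urine sodium >20 mmol/L) are taken from the list, and the function name itself is an illustrative invention.

```python
def meets_siadh_lab_criteria(plasma_na: float, plasma_osm: float,
                             urine_osm: float, urine_na: float,
                             edema: bool) -> bool:
    """Check the main SIADH features from the text.

    Units: sodium in mmol/L, osmolality in mOsm/kg.
    """
    return (plasma_na < 131          # hyponatremia
            and plasma_osm < 275     # decreased plasma osmolality
            and urine_osm > 100      # inappropriately concentrated urine
            and urine_na > 20        # high urinary sodium
            and not edema)           # no edema
```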
Various causes of SIADH are listed in Table 5.1.
Table 5.1 Causes of SIADH*

Pulmonary diseases: Pneumonia, pneumothorax, acute respiratory failure, bronchial asthma, atelectasis, tuberculosis.
Neurological: Meningitis, encephalitis, stroke, brain tumor, infection.
Malignancies: Lung cancer (especially small cell carcinoma), head and neck cancer, pancreatic cancer.
Hereditary: Two genetic variants, one affecting the renal vasopressin receptor and another affecting osmolality sensing in the hypothalamus, have been reported.
Hormone therapy: Use of desmopressin or oxytocin can cause SIADH.
Drugs: Cyclophosphamide, carbamazepine, valproic acid, amitriptyline, SSRIs, monoamine oxidase inhibitors, and certain chemotherapeutic agents may also cause SIADH.

*SIADH: Syndrome of Inappropriate Antidiuretic Hormone Secretion.
5.7 HYPONATREMIA, SICK CELL SYNDROME, AND
HYPERNATREMIA
Hyponatremia can be either absolute hyponatremia or dilutional hyponatre-
mia, although in a clinical setting, dilutional hyponatremia is encountered
more commonly than absolute hyponatremia. In absolute hyponatremia,
total sodium content of the body is low. The patient is hypovolemic, which
results in activation of the renin–angiotensin system, causing secondary
hyperaldosteronism and also increased levels of ADH. In dilutional hypona-
tremia total body sodium is not low, rather, total body sodium may be
increased. The patient is volume overloaded with resultant dilution of
sodium levels. Examples of such conditions include congestive heart failure,
renal failure, nephrotic syndrome, and cirrhosis of the liver. Although
hyponatremia is defined as any sodium value below the lower limit of the
reference range (135 mEq/L), clinical features such as confusion, restlessness
leading to drowsiness, myoclonic jerks, convulsions, and coma are usually
observed only at much lower sodium levels. Hyponatremia is common among
hospitalized patients, affecting up to 30% of all patients [6]. A sodium level
below 120 mEq/L is associated with poor prognosis and even a fatal outcome [7].
Major types of hyponatremia include:
Absolute hyponatremia (patient is hypovolemic), related to loss of
sodium through the gastrointestinal tract; renal loss due to kidney
diseases (pyelonephritis, polycystic disease, interstitial disease),
glycosuria, or diuretic therapy; or reduced renal retention of sodium
due to adrenocortical insufficiency.
Dilutional hyponatremia (patient hypervolemic). This condition is
related to SIADH or conditions like congestive heart failure, renal failure,
nephrotic syndrome, and cirrhosis of the liver.
Pseudohyponatremia as seen in patients with hyperlipidemia and
hypergammaglobulinemia (also known as factitious hyponatremia).
Sick cell syndrome is defined as hyponatremia seen in individuals with acute
or chronic illness where cell membranes leak, allowing solutes normally inside
the cell to escape into extracellular fluid. Therefore, leaking of osmotically
active solutes causes water to move from intracellular fluid to extracellular
fluid, causing dilution of plasma sodium and consequently hyponatremia.
Sick cell hyponatremia also produces a positive osmolar gap. Sick patients
also produce high levels of ADH, which causes water retention and further
dilution of plasma sodium.
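Since sick cell hyponatremia is flagged by a positive osmolar gap, the gap can be computed from measured and calculated osmolality. The calculated-osmolality formula below (2 × sodium + glucose + urea, all in mmol/L) is a commonly used approximation assumed here, since the chapter does not spell it out; exact formulas vary between laboratories.

```python
def calculated_osmolality(na_mmol_l: float, glucose_mmol_l: float,
                          urea_mmol_l: float) -> float:
    # A commonly used approximation (all analytes in mmol/L);
    # sodium is doubled to account for its accompanying anions.
    return 2 * na_mmol_l + glucose_mmol_l + urea_mmol_l

def osmolar_gap(measured_osm: float, na_mmol_l: float,
                glucose_mmol_l: float, urea_mmol_l: float) -> float:
    # A markedly positive gap suggests unmeasured osmotically active
    # solutes, as seen in sick cell syndrome.
    return measured_osm - calculated_osmolality(
        na_mmol_l, glucose_mmol_l, urea_mmol_l)
```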
CASE REPORT
A 36-year-old man was hospitalized with a 3-day history of malaise, drowsiness, and jaundice. He had a history of agoraphobia and alcohol abuse. On admission there was no meningismus, focal neurological signs, or liver failure. However, the patient later became unconscious, developed hypotension and a grand mal seizure, and was transferred to the ICU. His serum sodium level was 101 mEq/L and potassium was 3.6 mmol/L, but all liver function tests were abnormally high. His serum osmolality was 259 mOsm/kg, but calculated osmolality was 214 mOsm/kg with an osmolar gap of 135 mOsm/kg. His serum albumin was 2.8 g/dL. The patient deteriorated despite aggressive therapy and later died. The patient suffered from critical illness with multi-organ failure. Standard causes of hyponatremia were ruled out, and he showed a markedly positive osmolar gap with severe hyponatremia due to sick cell syndrome [8].
Hypernatremia is due to elevated serum sodium levels (above 150 mEq/L).
Symptoms of hypernatremia are usually neurological due to intraneuronal
loss of water to extracellular fluid. Patients exhibit features of lethargy,
drowsiness, and eventually become comatose. Hypernatremia may be hypovolemic
or hypervolemic. The most common cause of hypovolemic hypernatremia is
dehydration, which may be due to decreased water intake or excessive water
loss through the skin (heavy sweating), kidney, or gastrointestinal tract
(diarrhea). Patients usually present with concentrated urine (osmolality over
800 mOsm/kg) and low urinary sodium (<20 mmol/L). Hypervolemic
hypernatremia may be observed in hospitalized patients receiving sodium
bicarbonate or hypertonic saline. Hyperaldosteronism, Cushing’s syndrome,
and Conn’s syndrome may also cause hypervolemic hypernatremia.
5.8 HYPOKALEMIA AND HYPERKALEMIA
Hypokalemia is defined as a serum potassium concentration <3.5 mEq/L,
which may be caused by loss of potassium or redistribution of extracellular
potassium into the intracellular compartment. Hypokalemia may occur due
to the following:
Loss of potassium from the gastrointestinal tract due to vomiting,
diarrhea, and active secretion of potassium from villous adenoma of
rectum.
Loss of potassium from the kidneys due to diuretic therapy, and
glucocorticoid and mineralocorticoid excess. Increased levels of lysozyme
(seen in monocytic leukemia) may also cause renal loss of potassium.
Bartter, Liddle, and Gitelman syndromes are rare inherited disorders due
to mutations in ion transport proteins of the renal tubules that may
cause hypokalemia.
Intracellular shifts due to drug therapy with beta-2 agonists (salbutamol),
which drives potassium into the cell, or due to alkalosis (hydrogen ions
move out of the cell in exchange with potassium), or insulin therapy or
familial periodic paralysis and hypothermia.
Clinically, patients with hypokalemia present with muscle weakness, areflexia,
paralytic ileus, and cardiac arrhythmias. Electrocardiogram findings include
a prolonged PR interval, flattened T waves, and prominent U waves.
CASE REPORT
A 69-year-old white man with a history of high-grade prostate carcinoma and widely metastatic adenocarcinoma presented to the hospital with metabolic alkalosis (arterial blood pH of 7.61, pO2 of 45, and pCO2 of 48), hypokalemia (potassium 2.1 mEq/L), and hypertension secondary to ectopic ACTH (adrenocorticotropic hormone) and CRH (corticotropin-releasing hormone) secretion. His serum cortisol was also markedly elevated (135 μg/dL) along with ACTH (1,387 pg/dL) and CRH (69 pg/dL). As expected, his urinary cortisol was also elevated (16,267 μg/24 h). An abdominal CT scan and MRI study showed multiple small liver lesions and multiple thoracic and lumbar intensities consistent with diffuse metastatic disease. The severe metabolic alkalosis secondary to glucocorticoid-induced excessive mineralocorticoid activity and the hypokalemia were treated with potassium supplements, spironolactone, and ketoconazole. This patient had Cushing’s syndrome, most likely as a result of ectopic ACTH and CRH secretion from metastatic adenocarcinoma of the prostate gland [9].
Most of the body’s potassium resides intracellularly. Hyperkalemia presents
as an elevated serum or plasma potassium level; a common cause is hemolysis
of blood, where potassium leaks from red blood cells into serum, artificially
increasing measured potassium levels.
Causes of hyperkalemia include:
Lysis of cells: in vivo hemolysis, rhabdomyolysis, and tumor lysis.
Intracellular shift. In acidosis, intracellular potassium is exchanged with
extracellular hydrogen ions, causing hyperkalemia. Thus hyperkalemia
typically accompanies metabolic acidosis. An exception is renal tubular
acidosis (RTA) types I and II where acidosis without hyperkalemia is
observed. Acute digitalis toxicity (therapy with digoxin or digitoxin)
may cause hyperkalemia (please note digitalis toxicity is precipitated in
the hypokalemic state).
Renal failure.
Pseudohyperkalemia. Although pseudohyperkalemia or artifactual
hyperkalemia is most commonly seen secondary to red cell hemolysis,
it is also seen in patients with thrombocytosis and rarely in patients
with familial pseudohyperkalemia. Patients with highly elevated white
blood cell counts, such as patients with chronic lymphocytic leukemia
(CLL), may also show pseudohyperkalemia. A diagnosis of
pseudohyperkalemia can be made from observation of a higher serum
potassium than plasma potassium (serum potassium exceeds plasma
potassium by more than 0.4 mEq/L, provided both specimens are collected
carefully and analyzed within 1 h), or by measuring potassium in whole
blood (using a blood gas analyzer), where whole blood potassium is
within the normal range.
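The serum-versus-plasma criterion above can be written as a one-line check; treating a difference greater than 0.4 mEq/L as suggestive is the interpretation assumed here.

```python
def suggests_pseudohyperkalemia(serum_k: float, plasma_k: float) -> bool:
    """Return True if paired serum and plasma potassium (mEq/L)
    suggest pseudohyperkalemia.

    Assumes both specimens were collected carefully and analyzed
    within 1 hour, as the text requires.
    """
    # Serum potassium exceeding plasma potassium by more than
    # 0.4 mEq/L points to in vitro release of potassium during clotting.
    return (serum_k - plasma_k) > 0.4
```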
Clinical features of hyperkalemia include muscle weakness, cardiac arrhyth-
mias, and cardiac arrest. EKG findings include flattened P, prolonged PR
interval, wide QRS complex, and tall T waves. Drugs that may cause hyperka-
lemia are listed in Table 5.2.
Table 5.2 Drugs that may Cause Hyperkalemia
Potassium supplement and salt substitute
Beta-blockers
Digoxin and digitoxin (acute intoxication)
Potassium sparing diuretics (spironolactone and related drugs)
NSAIDs (non-steroidal antiinflammatory drugs)
ACE inhibitors
Angiotensin II-blockers
Trimethoprim/sulfamethoxazole combination (Bactrim)
Immunosuppressants (cyclosporine and tacrolimus)
Heparin
CASE REPORT
A 51-year-old male patient with CLL demonstrated a high plasma potassium of 6.8 mEq/L, but no abnormality was observed in his electrocardiogram. He showed normal creatinine (1.1 mg/dL), low hemoglobin (7.3 g/dL), and a high white blood cell count (273.9 k/microliter). He was treated in the emergency room with a presumed diagnosis of hyperkalemia with calcium gluconate, sodium bicarbonate, albuterol aerosol, glucose, insulin, and Kayexalate. His potassium remained high for the next two days (in the low 6s), but his whole blood potassium was normal (2.7 mEq/L). Based on these observations, a diagnosis of pseudohyperkalemia was established. Interestingly, his plasma potassium increased to 9.0 mEq/L, but his whole blood potassium was still 3.6 mEq/L [10].
5.9 INTRODUCTION TO ACID BASE BALANCE
In general, an acid is defined as a compound that can donate hydrogen ions,
and a base is a compound that can accept hydrogen ions. In order to deter-
mine if a solution is acidic or basic, the pH scale is used, which is the abbrevia-
tion of the power of hydrogen ions; pH is equal to the negative log of
hydrogen ion concentration in solution. Neutral pH is 7.0. A solution is
acidic if its pH is below 7.0 and basic if above 7.0; therefore, a
physiological pH of 7.4 is slightly basic. The concentration of hydrogen ions
present in both the extracellular and intracellular compartments of the human
body is tightly controlled. Although the normal human diet is almost at a
neutral pH and contains very low amounts of acid, the human body produces
about 50–100 mEq of acid per day, principally from the cellular metabolism of
proteins, carbohydrates, and fats; this generates sulfuric acid, phosphoric acid,
and other acids. Although excess base is excreted in feces, excess acid generated
in the body must be neutralized or excreted in order to tightly control the
near-normal pH of the blood (arterial blood 7.35–7.45 and venous blood
7.32–7.48). Carbonic acid (H2CO3) is generated in the human body due to
dissolution of carbon dioxide in the water present in blood (Equation 5.5):

CO2 + H2O ⇌ H2CO3 ⇌ H+ + HCO3−   (5.5)
The hydrogen ion concentration of human blood can be calculated from the
Henderson–Hasselbalch equation (Equation 5.6):

pH = pKa + log([salt]/[acid])   (5.6)
Here, salt is the concentration of bicarbonate [HCO3−], and acid is the
concentration of carbonic acid, which can be calculated from the measured
partial pressure of carbon dioxide. The value of pKa is 6.1, the dissociation
constant of carbonic acid at physiological temperature. The concentration of
carbonic acid can be calculated by multiplying the partial pressure of carbon
dioxide (pCO2) by 0.03. Therefore the Henderson–Hasselbalch equation can be
expressed as Equation 5.7:

pH = 6.1 + log([HCO3−] / (0.03 × pCO2))   (5.7)
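The Henderson–Hasselbalch relationship in Equation 5.7 can be implemented directly, for example:

```python
import math

def blood_ph(hco3_mmol_l: float, pco2_mmhg: float) -> float:
    """Blood pH from Equation 5.7:
    pH = 6.1 + log10([HCO3-] / (0.03 * pCO2)).

    Bicarbonate in mmol/L, pCO2 in mmHg; 0.03 converts pCO2 to the
    carbonic acid concentration in mmol/L.
    """
    return 6.1 + math.log10(hco3_mmol_l / (0.03 * pco2_mmhg))
```

For a typical arterial sample with bicarbonate of 24 mmol/L and pCO2 of 40 mmHg, this returns a pH of about 7.40.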
The body has three mechanisms to maintain acid base homeostasis:
A physiological buffer system present in the body that consists of the
bicarbonate–carbonic acid buffer, phosphate in the bone, and
intracellular proteins.
Respiratory compensation, where the lungs can excrete more or less
carbon dioxide depending on the acid–base status of the body.
The kidneys can also correct acid base balance of the human body if
other mechanisms are ineffective.
Respiratory compensation to correct acid base balance is the first compensa-
tory mechanism. It is effective immediately, but it may take a longer time for
initiation of the renal compensatory mechanism. At the collecting duct,
sodium is retained in exchange for either potassium or hydrogen ions, and if
excess acid is present, more hydrogen ions should be excreted by the kidney
to balance acid base homeostasis. In the presence of excess acid (acidosis),
kidneys excrete hydrogen ions and retain bicarbonate, while during alkalosis,
kidneys excrete bicarbonate and retain hydrogen ions. However, when there
is excess acid, hydrogen ions may also move into the cells in exchange for
potassium moving out of the cell. As a result, metabolic acidosis usually
causes hyperkalemia. Concurrently, the bicarbonate concentration is reduced
because hydrogen ions react with bicarbonate ions to produce carbonic acid.
The kidneys need to reabsorb more of the filtered bicarbonate, which takes
place at the proximal tubule.
5.10 DIAGNOSTIC APPROACH TO ACID BASE
DISTURBANCE
Major acid base disturbances can be divided into four categories: metabolic
acidosis, respiratory acidosis, metabolic alkalosis, and respiratory alkalosis.
In general, metabolic acidosis or alkalosis is related to abnormalities in regu-
lation of bicarbonate and other buffers in blood, while abnormal removal of
carbon dioxide may cause respiratory acidosis or alkalosis. Both states may
also co-exist. It is important to know the normal values of certain
parameters measured in blood for the diagnosis of acid–base disturbances:
Normal pH of arterial blood is 7.35–7.45.
Normal pCO2 is 35–45 mmHg.
Normal bicarbonate level is 23–25 mmol/L.
Normal chloride level is 95–105 mmol/L.
The first question is whether the pH value is higher or lower than normal. If the
pH is lower than normal, then it is acidosis, and if the pH is higher than normal,
the diagnosis of alkalosis can be made. If the diagnosis is acidosis, then the next
question to ask is whether the acidosis is metabolic or respiratory in nature.
Similarly, if the pH is above normal, the question is whether the alkalosis is
metabolic or respiratory in nature. In general, if pCO2 and bicarbonate change
in the same direction as the pH, the disturbance is metabolic; if pCO2 and
bicarbonate change in the direction opposite to the pH, the disturbance is
respiratory. Therefore four different scenarios are possible:
Metabolic acidosis, where the pH is decreased along with the values of
pCO2 and bicarbonate (both below the normal range).
Respiratory acidosis, where the pH is decreased but the values of both
pCO2 and bicarbonate are increased above normal.
Metabolic alkalosis, where the pH is increased along with the values of
both pCO2 and bicarbonate (both above the reference range).
Respiratory alkalosis, where the pH is increased but the values of both
pCO2 and bicarbonate are decreased.
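The four patterns above can be encoded as a simple classifier. This is a deliberately simplified sketch: mixed disturbances and compensation are not handled, and when pCO2 and bicarbonate do not both move with the pH the function defaults to a respiratory cause.

```python
def classify_acid_base(ph: float, pco2: float, hco3: float) -> str:
    """Classify a simple acid-base disturbance.

    Normal ranges from the text: pH 7.35-7.45, pCO2 35-45 mmHg,
    bicarbonate 23-25 mmol/L.
    """
    if 7.35 <= ph <= 7.45:
        return "normal pH (no simple disturbance, or fully compensated)"
    if ph < 7.35:
        # Acidosis: metabolic if pCO2 and HCO3 fall with the pH,
        # otherwise assume a respiratory cause.
        if pco2 < 35 and hco3 < 23:
            return "metabolic acidosis"
        return "respiratory acidosis"
    # Alkalosis: metabolic if pCO2 and HCO3 rise with the pH.
    if pco2 > 45 and hco3 > 25:
        return "metabolic alkalosis"
    return "respiratory alkalosis"
```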
5.10.1 Metabolic acidosis
Metabolic acidosis may occur with an increased anion gap (high) or normal
anion gap. The anion gap is defined as the difference between measured
cations (sodium and potassium) and anions (chloride and bicarbonate) in
serum. Sometimes the potassium concentration is omitted because it is low
compared to the sodium ion concentration in serum (Equation 5.8):

Anion gap = [sodium] − ([chloride] + [bicarbonate])   (5.8)

The normal value is 8–12 mmol/L (mEq/L).
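The anion gap of Equation 5.8 is straightforward to compute, for example:

```python
def anion_gap(na: float, cl: float, hco3: float) -> float:
    """Anion gap per Equation 5.8 (potassium omitted), all in mmol/L.

    The normal value is 8-12 mmol/L; a higher result suggests an
    increased anion gap metabolic acidosis.
    """
    return na - (cl + hco3)
```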
In metabolic acidosis the bicarbonate concentration decreases; if chloride
does not rise to compensate, the result is increased anion gap metabolic
acidosis. If the chloride level increases, then even with a decline in
bicarbonate the anion gap may remain normal. This is normal anion gap
metabolic acidosis. Thus, normal anion gap metabolic acidosis is
also referred to as hyperchloremic metabolic acidosis. Causes of normal anion
gap metabolic acidosis include loss of bicarbonate buffer from the gastrointes-
tinal tract (chronic diarrhea, pancreatic fistula, and sigmoidostomy), or renal
loss of bicarbonate due to kidney disorders such as renal tubular acidosis and
renal failure. Causes of increased anion gap metabolic acidosis can be remem-
bered by the mnemonic MUDPILES (M for methanol, U for uremia, D for