CHAPTER 13
Image Data Analysis

Daniel J. Schneberk, Lawrence Livermore National Laboratory, Livermore, California
Harry E. Martz, Lawrence Livermore National Laboratory, Livermore, California

PART 1. Fundamental Properties of Digital Images and Processing Schemes
Image analysis can be described as the process that relates the features in the image to the state of the object being tested. Not all features in an image carry the same significance for the test, nor are all features in an image necessarily related to the object. Operations applied to images can highlight, enhance and sometimes remove certain features in the image. Image analysis proceeds by developing a sensible rationale for the steps and procedures connecting features in an image to an interpretation about the state of the object. Depending on the situation, the amount of image analysis applied to a particular image can be minimal or very extensive.

The goal of the test drives the image data analysis for any image. Tests are always about measuring a property of an object, with some precision and repeatability. Operator based tests use image analysis techniques to extract the necessary information about the object from the image. Automated test systems embed an analysis of the image in the processing. The role of image analysis is to develop the techniques for manipulating the image data and at the same time assess the weaknesses of the procedure. It is best when an image analysis technique can be reduced to a set of image quality standards for the input images. These standards can also highlight some fundamental aspect of the radiographic method, for that radiographic source and that detector.

Consequently, evaluating an image analysis technique and its potential for conveying reliable information about the object involves a whole matrix of issues, from the fidelity of the image in space to the particular detector used and the procedures used for evaluating the digital image. Problem areas and limitations of image analysis schemes can result from detector choices as well as processing schemes. For an understanding of these issues, some recognition of the fundamentals of digital images and the landscape of acquisition and processing for digital images is useful.

Three properties of digital images are fundamental: (1) signal-to-noise ratio, (2) spatial resolution and (3) contrast sensitivity (defined below). Each identifiable feature corresponds to some change in measured intensity in the image. To decide whether this change in intensity derives from the state of the object or is some artifact of the scanning modality requires an assessment of these three properties of the imaging system.

The image performance achieved on any acquisition is the result of a number of factors. Digital radiographic images or computed tomographic images are the combined result of the radiographic technique used, the detection scheme and all the processing steps that result in the image used for the test.

The entire imaging process for digital radiography and computed tomography can be represented in sequential phases, with digital radiography requiring fewer steps than computed tomography (Fig. 1). Indeed, the properties of the computed tomographic reconstructed image depend strongly on the properties of the underlying digital radiographic images.

Components of X-Ray Transmission Images

The radiographic beam is the starting point for all radiographic scanners. The properties of the beam, its shape and energy content, and the geometry of the object with respect to the beam are crucial determinants of imaging quality. The beam generates an image in space and the detector samples the X-ray intensity of this beam. The sampling has both a spatial and an energy discriminating aspect for some position in space. Before the introduction of any detector, or detector and collimation scheme, a number of different sources of signal are present in the transmission measurements. For any detector position (some three-dimensional position on the other side of the object from the X-ray beam), the transmitted X-ray photons intersecting this solid angle divide into different types as follows:

(1) N_T[S(E), d] = N_P[S(E), d] + N_S[S(E), d]

(2) N_S[S(E), d] = N_Sbk[S(E), d] + N_Sobj[S(E), d]
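The decomposition in Eqs. 1 and 2 can be sketched numerically. The counts below are illustrative assumptions, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative mean counts for one detector position d (assumed values)
N_P = 1.0e4      # primary photons transmitted straight through the object
N_Sbk = 2.0e3    # background scatter from fixtures, cabinet and detector
N_Sobj = 1.5e3   # scatter generated within the object

# Eq. 2: total scatter is background scatter plus object scatter
N_S = N_Sbk + N_Sobj

# Eq. 1: total detected flux is primary plus scattered photons
N_T = N_P + N_S

# Photon emission is a Poisson process, so one simulated measurement is:
measured = rng.poisson(N_T)
print(N_T, measured)
```

Only N_P carries straight-line information about the object; the two scatter terms add counts to the measurement without adding object information.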
(3) N_T0[S(E), d] = N_P0[S(E), d] + N_Sbk[S(E), d]

The schema above decomposes the total photon flux from an X-ray source S(E), for the solid angle subtended by detector position d, into primary photons N_P and scattered photons N_S. Further, there are two types of scattered photons: background scatter N_Sbk and object scatter N_Sobj. Most X-ray measurements have a means for taking into account the radiation field independent of the object, N_T0. Typically this measurement contains some background scattered photons from the supporting fixtures in the radiographic scanner or in the detection hardware, as well as the primary photons launched by the X-ray source, N_P0.

The attenuation of electromagnetic radiation passing through matter (for example, X-rays) involves three mechanisms, each accompanied by secondary processes. The three mechanisms are the photoelectric effect, Compton scattering and pair production.
1. In the photoelectric effect, an X-ray dissipates its entire energy by knocking out an electron from an atom.
2. In Compton scattering, the X-ray imparts some energy to an electron but survives with a lower energy and a different direction.
3. In pair production, an X-ray is absorbed to create an electron-positron pair. For pair production to occur, the incident X-ray must have an energy equal to or greater than the rest energy of an electron-positron pair. This energy is 1.02 MeV because the rest energy of each particle is 0.511 MeV.

A more complete discussion of these processes can be found elsewhere in this volume and in books by Evans and Heitler.1,2

FIGURE 1. Imaging process for digital radiography and computed tomography: X-ray or gamma ray interactions with object; image at some detector plane in space; acquisition of digital image from detector; digital radiographic image processing for display and analysis; digital radiographic or computed tomographic processing of transmission image; computed tomographic image reconstruction; computed tomographic image display and analysis.

The different attenuation mechanisms combine in any scan to produce signal from a variety of secondary processes. For example, a photoelectric electron interacts with the material and creates scattered X-rays. Likewise, scattered X-rays at lesser energy than the initial flux undergo photoelectric attenuation before reaching an exit plane in the object. At lower energies the secondary processes are not prominent. However, at medium to higher energy regimes the secondary and ancillary processes can dominate imaging quality.

The performance of imaging techniques using penetrating radiation follows from the properties and proportions of the different classes of X-ray photons in the image. The primary X-ray photons account for the performance of the transmission image. These straight line projections through the materials best conform to the idealized ray path implicit in computed tomographic reconstruction algorithm textbooks.3,4 Spatial resolution of the primary photons is bounded by the spot size blur and, in the case of heavily collimated systems, by the effective width of the source and detector apertures. The contrastive properties of the primary photons have the distribution of an X-ray gage with standard deviation proportional to the inverse of the square root of N.

Primary photon flux N_P includes the total X-ray photons emitted by the X-ray source and transmitted straight through the object, undergoing the different attenuation processes. As shown elsewhere, photons emitted from the X-ray source are a Poisson random variable and total X-ray attenuation is represented as distributed binomial. The two variables taken together result in Eq. 4:

(4) N_P[S(E), d] = N_P0[S(E), d] × exp(−Σ µ_i(ρ,Z) x_i)

where µ_i(ρ,Z) is the X-ray attenuation coefficient of the ith material with thickness x_i in the direction subtended by the detector element looking back at the X-ray source through the object.
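Equation 4 can be evaluated directly for a hypothetical ray path through layered materials. The attenuation coefficients, thicknesses and source count below are made-up values for illustration:

```python
import math

# Hypothetical ray path: (attenuation coefficient in 1/cm, thickness in cm)
# for each material traversed between source and detector element
path = [(0.45, 2.0), (0.12, 5.0), (0.45, 1.0)]

N_P0 = 1.0e5   # assumed unattenuated primary count for this ray (air scan)

# Eq. 4: the surviving primary flux falls off exponentially with the
# line integral of attenuation, the sum of mu_i * x_i along the ray
line_integral = sum(mu * x for mu, x in path)
N_P = N_P0 * math.exp(-line_integral)

print(line_integral, N_P)
```

Each material contributes its own µ_i x_i term, so adding a dense section anywhere along the ray reduces the primary count for that detector position multiplicatively.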
Following this development, N_T is a Poisson random variable with mean N_T, variance also equal to N_T and standard deviation √N_T.3

Contrastive performance of a transmission image is the ability to detect intensity changes due to some change in the attenuation of the object along some ray path. All measurement systems involve some noise. One bound on the contrast in an image is the ratio of the change in intensity of the signal to the random intensity variations caused by the noise in the system. For some change in attenuation ∆ and by using the Poisson variable N, the signal-to-noise ratio can be written:

(5) S/N = ∆N_P / √N_P = ∆√N_P

Feature contrast in an object is directly related to the number of transmitted and detected photons. In the limit, too few photons (weak signal) will always result in poor contrast.

The photons scattered in the object are more difficult to interpret. The line or path of the scattered photons is energy dependent, usually forward peaked and always broader than the path of the primary photons.3 Object scatter can vary throughout the image because of the change in the lengths and types of material in the specimen. Highly attenuating sections of an object adjacent to low attenuating elements will generate substantial signal for the image of the low attenuating sections. In radiography the contrast in the low attenuating sections is simply compromised and this effect is independent of the detector. For computed tomographic reconstructed images, scatter results in streak artifacts along the directions of the longest chord lengths or highest attenuation.4 The spatially variant character of object scatter makes removal of this source of artifacts difficult.

Background scatter is the fluence that arises from the detector and its environment, for example, cabinet and room hardware, independent of the object. If there were no detector or cabinet hardware there would be no photons of this type, but then there would be no image. Consequently, this category is included as always part of any acquired image, even though the photons are present because of the detector itself. The detector hardware includes all the aspects of collimation, scatter from adjacent parts of the cabinet or detector enclosure, fluence from the fixturing for the object, as well as sources of scatter within the detector (blooming in the scintillator and other sources). Background scatter fills up the detector dynamic range with counts that do not carry information about the object.

As indicated above, contrast is also affected by the types of photons detected at that position in the image. A signal that arrives at the detector element from adjacent ray paths will blur the contents of the ray in straight lines through the object. Consequently, certain features in the object will be masked by adjacent features. By substituting Eqs. 1 to 3 into Eq. 5, this effect can be quantified:

(6) S/N = ∆N_P / √N_T = ∆√N_P / √(1 + N_Sobj/N_P + N_Sbk/N_P)

Contrast is reduced in two ways, from the presence of object scatter and from background scatter in the detector package. The two quantities in the denominator of Eq. 6 are usually referred to as scatter-to-primary ratios. Here the effects of the two different types of scatter are separated out because of their separate physical mechanisms. A couple of different techniques are available for measuring these quantities5 and these measurements are recommended as elements in any standard image calibration procedure.

Spatial resolution is nominally defined as the size of the smallest detectable feature. Notice the necessary involvement of contrastive performance: the feature must be detectable to be sized or be attributed some dimensionality. Image contrast must be at some minimal level in order for spatial resolution to be assessed at all. Alternatively, poor spatial resolution because of some source of uncontrolled blur (background scatter or object scatter) can compromise the best contrastive performance. The modulation transfer function (MTF)6 for a system provides a good view of the contrastive performance at different spatial frequencies. The point spread function (PSF), the Fourier transform pair of the modulation transfer function, also provides a useful measure of the spatial resolution of a digital radiographic or computed tomographic imaging system. Good estimates of these functions for an imaging system are invaluable for subsequent image analysis.6
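As a sketch of how such an estimate is obtained in practice, the MTF can be computed from an edge spread function by differentiating it into a line spread function and taking the Fourier transform. The pixel pitch and Gaussian blur width below are assumed values standing in for real edge data:

```python
import numpy as np
from math import erf, sqrt

# Synthetic edge spread function (ESF): an ideal edge blurred by an assumed
# Gaussian system PSF; real data would come from a measured edge image
pitch = 0.1    # detector sampling pitch, mm (assumed)
sigma = 0.2    # Gaussian blur width of the system, mm (assumed)
x = (np.arange(256) - 128) * pitch
esf = np.array([0.5 * (1.0 + erf(xi / (sigma * sqrt(2.0)))) for xi in x])

# Differentiate the ESF to obtain the line spread function (LSF)
lsf = np.gradient(esf, pitch)

# The MTF is the magnitude of the Fourier transform of the LSF,
# normalized to unity at zero frequency
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
freqs = np.fft.rfftfreq(lsf.size, d=pitch)   # spatial frequency, cycles/mm

# For a Gaussian blur the MTF falls off as exp(-2 * (pi * sigma * f)**2)
print(freqs[1], mtf[1])
```

The same differentiate-and-transform procedure applies to a measured edge, which is why the edge acquisition procedures below matter: any blur contributed by the object itself is indistinguishable from system blur in this estimate.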
Obtaining good estimates of these functions for a system requires careful procedures.
1. All procedures for digital radiography and computed tomography involve obtaining an image containing a material edge, where the edge spread is due only to the imaging system, not the object. For radiographic systems this requires a perpendicular orientation of the flat aspect of the object to the X-ray beam.
2. There should be some transmission through the object, or the edge sharpness is artificially created by the opaqueness of the object and masks the imaging of the edge by the system.
3. Data acquisition for the image should be typical of system operations. It is important to know and monitor the properties of the regular image being acquired every day.

The photons arising in the course of X-ray attenuation mechanisms can be organized into classes, and the different classes do not have the same properties for imaging the object. The contrastive performance and spatial resolution for the different classes of photons are different, and the system performance will reflect the proportions of the different types of photons. Better imaging performance will enable more reliable image data analysis and interpretations. Lastly, much of the image processing for digital radiographic and computed tomographic images is based on an idealized ray path model of X-ray attenuation. This is the kind of imaging one would expect if the received signal were composed exclusively of primary photons. Imaging artifacts are often the result of this mismatch between theory and actual measurement. This mismatch varies with each particular scanner. Whereas the actual properties of the photons in space before digitization can be somewhat elusive, it is important to know the general properties of the different photons. This knowledge can guide the analyst in deciding whether a particular image effect is just the radiation doing its job or whether some particular source of the effect can be remediated.

Digitizing Transmission Images

Acquiring digital radiographic and computed tomographic data involves some mechanism for digitizing the image in space, which raises its own set of issues. Whereas the performance of digital radiographic and computed tomographic systems can be accounted for by analyzing the properties of the different types of photons, images and tests are built on the measured intensities I[S(E), d] and I_0[S(E), d], the digitized versions of N_T and N_T0. Digitization of the photon flux has two aspects: (1) a change into a more measurable energy deposition (for example, X-rays converted by a scintillator into visible light, or X-rays converted by a high purity germanium crystal into current) and (2) some readout of the energy deposition into a digitized quantity (bit depth). The issues for converting the X-ray energy pivot about the treatment of the incoming X-ray energy spectrum.

Scintillators differ in their sensitivity to different energy ranges and in their ability to record much of the signal for high energy X-rays. On occasion, the energy windowing character of X-ray detectors is an advantage. For computed tomography the choice of scintillator directly affects the amount of beam hardening in the reconstructed image. This effect can be more significant for industrial computed tomography, where the energies are substantially higher than in medical computed tomography. The act of digitizing the signal adds noise to the signal and can be more significant for systems involving signal amplification as part of digitization (such as image intensifiers). Independent of other considerations, lower noise and higher bit depth will provide the most faithful acquisition of X-ray images in space.

The physical configuration of the detector fundamentally affects the proportions of the above quantities detected for X-ray imaging. Indeed, the detector performs a sampling of the X-ray image for both the intensity of the spectrum and the spatial landscape of the transmission through the object. Additionally, the internal configuration of the X-ray detector determines important practical performance properties. In general, there are three types of X-ray detectors: (1) highly collimated single-detector systems, (2) linear detector arrays and (3) area detector systems. By their physical arrangement, each of these detector modalities treats the classes of photons in the transmission differently.
1. For the single-detector system, both the source and detector are highly collimated. This is the closest physical realization of the idealized ray path used in computed tomographic theory texts.
2. Linear detector arrays use collimation on both sides of the short aspect of the array. Background scatter and object scatter are much reduced by the collimation.
3. Area detectors use less collimation and are susceptible to more effects of scatter. However, area detectors make good use of the radiation envelope generated by the X-ray source and have inherent speed advantages over single detectors or linear array detectors for the same source output. For three-dimensional applications, system speed is limited by the efficient use of source output.

The best detector for the application is the choice that best meets the specific goals, which involves all kinds of practical considerations. However, to clearly describe the strengths and weaknesses of specific modalities, imagers can be categorized according to five properties: (1) ratio NP·NS^-1 of primary to scattered photon flux, (2) detector blur, (3) number of internally scattered photons, (4) source efficiency and (5) quantum efficiency. Together these properties define important tradeoffs that result from the different detector designs (Table 1).

The greater range of choices for industrial scanners provides opportunities and tradeoffs for a particular application. For data quality, the best choice is the spectroscopy based digital radiographic and computed tomographic systems.7 This is the experimental analogue of imaging with the primary photons. Sources of scatter are removed by collimation or by the energy resolution in the detector. However, the impact on system speed is dramatic (a factor of a thousand).

Slit collimated linear detector array (LDA) systems offer the next best alternative for acquiring digital radiographic and computed tomographic data with the greatest proportion of primary photons.8 The difficulty is when the focus of the application is on obtaining higher spatial resolution or full three-dimensional test data. In this case area detectors can improve system speed by a factor of 20. However, all types of area detectors include varying amounts of scatter blur, which directly reduces the dynamic range of the system.9

Contrastive performance for any scanner is limited by the system dynamic range, which varies with the scanner and with the application. The dynamic range of the system is not simply the number of bits in the detector or the number of bits minus the readout noise. Rather, dynamic range in a particular area of the image is bit depth minus readout noise, minus the background scatter signal in the system. For certain medium energy systems the proportion of background scatter signal can be as high as 20 percent of the detected fluence, reducing the effective dynamic range greatly. These kinds of considerations are important for scanner selection and for interpreting scans of objects with high scatter fractions.

Depending on the system, different procedures have been developed to correct for some of the effects in the measured signals resulting from the particular scanner configuration. The goal of these procedures is to process the transmission data into the ray path model that is the basis for image reconstruction. Industrial computed tomographic scanners can be classified according to how many processing steps are applied to transmission data before reconstruction (see Table 1). Only minimal processing is needed for single-detector spectroscopy systems, because they are the closest physical realization of the ideal ray path. Linear detector arrays require a bit more processing but much less than area detector arrays.

It is typical for linear detector array scanners to involve some detector balancing or detector linearity corrections. Area arrays involve at least the application
TABLE 1. Performance characteristics of different detector designs.

Detector | Ratio NP·NS^-1 of Primary to Scattered Photons | Detector Blur | Quantity NS of Scattered Photons | Quantum Efficiency | Source Efficiency
Area array: film | low | small | low | low | medium
Area array: flat panel | low | small to medium | medium | high | high
Area array: camera or scintillator with small cone | low | medium to high | high | low | high
Area array: camera or image intensifier with large cone | medium | high | high | high | high
Linear array: slit collimated with septa | high | small | low | high | low
Linear array: slit without septa | medium | medium to small | low | medium | low
Single detector: single detector with spectroscopy | highest | smallest | smallest | medium | lowest
350 Radiographic Testing
of some detector balancing correction and in some cases a correction for spatial distortions. Each of these corrections can itself create scanning artifacts (ring artifacts or spatial distortions) that can mask features in the object, depending on the location of the feature.

Fundamental issues for applications of computed tomography to industrial objects involve this recognition of artifact content in different computed tomographic scanners. All of the different types of scanners can provide quality test data for a particular application (see below). At the same time, different applications are better suited for different types of scanners. As indicated above, some artifacts (for example, object scatter) are part of the transmission image in space, independent of the detection scheme. A fully optimized scanner still has some artifacts that sometimes compromise test performance; at best they are benign features in the image. Also, the detector configuration can contribute its own set of artifacts that mask the imaging of object features. The importance of scanner characterization through computed tomographic phantoms (well known three-dimensional objects) cannot be overestimated. Phantoms are the main means for distinguishing artifacts from features within the object.

In summary, the different industrial digital radiographic and computed tomographic scanners can be plotted on a four axis scale as in Fig. 2. At the four axes are spatial resolution, contrast resolution, energy resolution and system speed. Experience suggests that improvement in any two of the parameters results in a worsening of at least one of the other two. More collimation will improve image contrast, and with smaller apertures better spatial resolution can be obtained. However, the cost in system speed can be prohibitive for even the hottest X-ray source. Obtaining more scanning speed usually sacrifices image contrast and resolution. Reducing the energy spectrum for the scanning makes for slower scans for most conventional X-ray sources. Tradeoffs must be made: the test designer must understand the application well enough to make the best choice.

FIGURE 2. Tradeoffs in industrial digital radiographic versus computed tomographic detectors. Four axes about a central pivot: energy resolution, spatial resolution, contrast resolution and system speed.

Practical Considerations for Digital Imaging

There are a number of practical limits on the performance of digital imagery. For spatial resolution, the performance of all techniques applied to digital images is bounded in the limit by the pixel sizes and spacing.6 Features at the detector that are smaller than a detector element or on the order of the pixel spacing are difficult to image. If the spatial extent of a feature in the image is less than a digitized pixel, the contrast change for the feature will be averaged over the pixel and only some fraction of the change in intensity will be recorded in that pixel. The modulation transfer function arising from a particular size of rectangular discrete detector element has been described in detail elsewhere.6

Contrastive performance is limited by the effective dynamic range of the imaging system.10 As is implicit in the above decomposition of the photons in space into different types, determining the dynamic range of a system requires measurements with standards for the particular technique. The first bound for dynamic range for any image is bit depth, that is, the range of discrete digital counts possible in the detector system. Digitization generates some noise in the image, which subtracts from the total range of counts. Lastly, for the particular technique, the contrastive performance of the system is limited by the number of background scattered counts recorded by the detector:

(7)    EDR = BD − NR − NSbk

where EDR = effective dynamic range, BD = bit depth, NR = readout noise and NSbk = background scatter. Effective dynamic range defines the best an inspector can do with a particular system for a particular technique; actual results will likely fall short of it.

From the above it would seem the limits of digital X-ray detection would be advanced by smaller pixel sizes, lower readout noise and higher bit depth. This is certainly true, but there are a number of practical issues to consider. As mentioned earlier, smaller pixels will require more X-ray flux. Cutting the pixel size in half in both dimensions reduces the solid angle of each detector element to one fourth. For the same
detection scheme the X-ray flux needs to
be four times greater to obtain equivalent
signal in the smaller pixel in the same
amount of time. Likewise, if more bit
depth increases the readout time by some
factor (this is usually the case) or if the
detector cost increases by a factor, the
greater dynamic range may be no bargain.
On another level, if the background
scatter in the particular cabinet or
physical configuration is large, no
increase in bit depth can provide the
necessary dynamic range.
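The dynamic range bookkeeping of Equation 7 (bit depth minus readout noise minus background scatter) can be sketched in a few lines. The detector values here are hypothetical, chosen only to illustrate how a 20 percent scatter fraction erodes the budget:

```python
# Effective dynamic range (Equation 7): EDR = BD - NR - NSbk.
# All terms are expressed in detector counts rather than bits.
def effective_dynamic_range(bit_depth, readout_noise_counts, scatter_fraction):
    full_scale = 2 ** bit_depth                     # discrete counts available
    scatter_counts = scatter_fraction * full_scale  # background scatter signal
    return full_scale - readout_noise_counts - scatter_counts

# Hypothetical 12 bit detector with 50 counts of readout noise.
clean = effective_dynamic_range(12, 50, 0.0)       # no background scatter
scattered = effective_dynamic_range(12, 50, 0.20)  # 20 percent scatter
print(clean, scattered)   # 4046.0 3226.8: scatter costs about 20 percent
```

No increase in bit depth recovers the term lost to the scatter fraction, which is why cabinet and collimation design enter the dynamic range budget directly.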
One practical technique for obtaining
greater dynamic range on any image
frame based system is frame averaging or
summation. Readout noise is mostly
random, with some exceptions. For the
random components, image averaging
reduces the variation leaving more bit
depth for the changes in the transmission
image through the object. One of the
differences in systems is in the amount of
frame averaging required to achieve a
certain contrastive performance. These
kinds of measurements have to be
acquired on the particular system and the
performance measured with the appropriate
standards of the American Society for
Testing and Materials. Frame averaging
always requires more time but is a
straightforward technique for increasing
imaging performance. Averaging frames is
not to be confused with integration time,
the duration of time the sensor is exposed
to the X-ray flux. Effective dynamic range
is ultimately driven by the number of
photons arriving at a detector position in
space. In circumstances where only a
small number of photons are available,
frame averaging may be of marginal
utility.
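The noise reduction from frame averaging can be demonstrated with a small simulation. The frame size, signal level and readout noise below are hypothetical, and numpy stands in for the acquisition software:

```python
import numpy as np

rng = np.random.default_rng(0)

def acquire_frame(signal, readout_sigma=20.0):
    """Simulate one detector readout: the fixed transmission signal
    plus random readout noise (hypothetical noise level)."""
    return signal + rng.normal(0.0, readout_sigma, signal.shape)

signal = np.full((64, 64), 1000.0)      # flat transmission image

single = acquire_frame(signal)          # one frame
averaged = np.mean([acquire_frame(signal) for _ in range(64)], axis=0)

# Averaging 64 frames cuts the random noise component by about
# sqrt(64) = 8, leaving more bit depth for real changes in the image.
print(round(single.std(), 1), round(averaged.std(), 1))
```

The square root law applies only to the random noise components; fixed-pattern contributions and photon starvation are unaffected, consistent with the caveats above.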
PART 2. Image Analysis Techniques and Radiographic Tests
The following discussion supposes an in-hand digital image, which includes enough fidelity to convey something about the object. Analysis techniques divide into groups depending on the technique for final determination of the test: (1) visual evaluation of the raw image, (2) visual evaluation of a transformed image, (3) computer evaluation with a human visual checkpoint and (4) computerized automated inspection.

Also, analysis techniques get more or less complicated depending on the intrinsic characteristics of the test object: (1) discontinuity or dimensions of a feature in a single-thickness, single-material section of an object, (2) discontinuity or dimensions of a feature in a single-material, varying-thickness section of an object, (3) discontinuity or dimensions of a feature in a single-thickness but multiple-material section of an object and (4) discontinuity or dimensions of a feature in a multiple-material, multiple-thickness section of an object.

The simplest type of image analysis to configure is the visual evaluation for a discontinuity in a single-material, single-thickness section of an object. At the other end of the spectrum is automated inspection of a variety of features in a multiple-material, multiple-thickness section of an object. Image analysis techniques play a role in each of these different types of tests.

The following discussion pertains to the performance and oversight of the test. Suppliers of digital radiographic test equipment have access to an ever increasing array of image processing and image manipulation technologies. As well, the choices of modalities in digital imaging are changing rapidly. New and novel image transform and analysis algorithms are being developed equally fast for a wide variety of imaging applications. However, not all techniques developed for other imaging modalities apply well to radiographic transmission images. As described above, the content of radiometric images follows from the physical mechanisms of X-ray interaction. This results in images with some peculiar and subtle properties. These differences should be carefully noted.

Recent advances in detector technology have resulted in rather extensive calibration schemes for generating more radiometric images free of artifacts. Many tests are performed on calibrated images configured to resemble film or the output of an image intensifier. The process has become more explicit: specialized gain corrections that were the task of analog hardware have become digital transforms. However, this has introduced another variable into the whole job of system maintenance and optimization: maintain the source, maintain the detector and maintain a good set of calibration data. Because system calibration is crucial, it is best to develop quick measurements of degradation due to calibration drift. If planned for ahead of time, image quality indicators can be embedded in part holders, or procedures for salting the test can be inserted into the process.

Visual Enhancement of Digital Images

Image enhancement encompasses a wide variety of techniques for evaluating, manipulating and transforming images. Image enhancement for visualization is divided into two types: (1) interactive techniques and (2) transform based techniques. The first class includes the interactive operations to extract pixel information, colorize the content of the image and accentuate the features of an image to enable a better visual determination.

The second class of techniques transforms the image with the interest of amplifying a certain class of features, at the expense of other features in the image. In the present discussion the organizing principle for image enhancement is visualization, and quantitative properties of the enhanced image are not of central importance.

Interactive Tools

Interactive image enhancement techniques are usually provided in toolbars that can be accessed on the desktop display of the control information for the software package. There are three types of toolbars, of which two are important in this discussion: (1) point processing
toolbars, (2) color manipulation toolbars and (3) image modification, drawing or text annotating tool sets.

Point Processing. Point processing tool sets at a minimum should include pixel value query, lineout extraction, image zoom and region-of-interest extraction. With these tools, the values of the images can be directly examined.

1. The pixel query tool makes it possible to click on a position in the image and see the value or a neighborhood of values.

2. The lineout extraction tool makes it possible to draw a line across a position in the image, extract the values along that line and view the extracted signal as a vector plot.

3. Image zoom tools display resampled or pixel replicated versions of an image or region of the image.

The pixel query and lineout extraction tools permit the operator to view the values in the image and make it possible to change intensity in the region of interest. In images with a lot of detail these kinds of tools can verify the nature of the intensity change in the image.

Statistics. Related to these tools is some statistics capability, which can calculate means and standard deviations of small or selected areas.

Archiving. It is advantageous if the software allows for archiving some extracted part of an image for later comparison.

Access to these tools is crucial for evaluating the state of an inspection. It is important to know the fundamental difference in pixel value that accounts for the identification of a feature. Algorithms for visual or computerized testing can highlight very small changes in pixel intensity that indicate real changes in the object. Actually looking at the numbers in the image is the only way to assess the state of the imaging system. Experience shows that most test systems degrade in the field over time. The antidote for this possibility is to be able to accurately track image quality in a rigorous way, as discussed below and elsewhere.

Color Tools and Lookup Tables

Tools for manipulating the colorizing of an image fall into two categories: first, the type that manipulates the color lookup table (LUT) in the display for the computer (see Figs. 3 to 8); second, the type that rescales the intensities of the image to a particular display minimum and display maximum, selected by the user or by selecting a region of interest, thereby restricting the number of colors to a small range in the image.

The schema in Fig. 3 contains the different data structures and elements for colorizing a digital image in computer memory.

FIGURE 3. Different data structures and elements for colorizing of digital image in computer memory: digital image in computer memory for global access; image scaled to certain display minimum and display maximum for video display; display hardware (color lookup table, scroll bars); display monitor.

The process for viewing an image on a computer monitor involves two steps: scaling the image for copying to the display hardware and manipulating the lookup table that maps the pixel values of the image in the display hardware into colors displayed on the monitor. One added feature here is the particular state of the display as configured in the operating system. The simplest type of mapping is a linear lookup table over the minimum and maximum of the image. This kind of display is shown in Fig. 4, where the same image is displayed with three different color lookup tables. The image is a 0 to 255 linear ramp: the display minimum is set to 0, the display maximum is set to 255 (the total range of the image) and the 256 gray scale colors are mapped directly from the minimum to the maximum, one gradation of color for each value of the image.

FIGURE 4. Linear lookup table with gamma = 1.0 applied to image (valued 0 to 255) for three different color maps.

Most color tools allow for a way to change the slope of the transfer function for applying the color map to the image
being displayed. In Fig. 5 the display minimum has been increased, the display maximum reduced and the slope increased to 2. For this state of the color map the values in the higher ranges of the image have been set to one value, as have the values in the lower ranges. The colors are now distributed over a smaller range of the total intensity of the image, allowing a more detailed look at those pixel values in the middle of the intensity range of the image. The intensity values of the image have been bundled into a smaller set of colors. A segmentation of the image has been performed and features that differ along those segmentation boundaries are easier to view.

FIGURE 5. Linear lookup table with gamma = 2.0 applied to image (valued 0 to 255) for three different color maps.

To illustrate the significance of this type of image enhancement, Fig. 6 contains a radiograph of a synchronous dynamic random access memory (SDRAM) chip acquired with an amorphous silicon flat panel. The effective dynamic range of this particular system is about 2500 to 1, with spatial resolution on the order of four line pairs per millimeter. With the slope of the transfer function set to one and the full range of colors spread over the minimum and maximum of the image, a wide range of details of this electronic component can be viewed.

FIGURE 6. Digital image of synchronous dynamic random access memory chip with linear lookup table.

FIGURE 7. Image of synchronous dynamic random access memory chip with gamma = 2.0, linear lookup table.

FIGURE 8. Digital radiograph of steel fuel injector with two different lookup table settings: (a) high intensity, low contrast setting; (b) high contrast setting.

Choosing different lookup tables or different display minimums and maximums can make the difference between seeing a feature or missing the feature completely. It is often the case that an automated visual evaluation will have a number of lookup tables for inspecting the different features in an image. In Fig. 7, the same image is displayed with different lookup tables (color based segmentation) applied and the operator makes a judgement as to the presence or absence of a feature. Figure 8
shows a similar result for a different application.

Repeatability of visual evaluation depends on a variety of physical components, not to mention the alertness of the operator. Monitors can and do degrade over time. Perhaps more importantly, calibrating monitors to ensure operators in different places at different times are viewing the same image is not a simple task. As expected, small degradations in image quality (that is, larger noise content) can undo the visual content of any lookup table segmented image. Similarly, growth in the spot size can lower spatial resolution, thereby masking features. For any particular test that uses the colorization of an image it is a good idea to calculate the smallest difference that accounts for the visual identification. Next, it is a good idea to calculate a local standard deviation for the image values and compare this value to the difference in the pixel values being accentuated by the lookup table.

Image Transform Techniques

Digital images have the distinct advantage of residing in a memory location, enabling direct manipulation by a wide variety of algorithms. Image transforms directly manipulate the values of the image to emphasize certain features and suppress others. Some image transforms proceed by decomposing the image into a particular feature space where aspects of the image are sifted for relevant information about the object. The importance of image transforms follows from their availability, the ease with which commonplace computers can perform the operations and the way transforms can easily show discontinuities or image features in certain circumstances.

Moreover, transforms can be strung together in a sequence to perform sophisticated image enhancement. Transforms can be particularly useful for exploratory analyses of digital radiographs. This is especially true for high bit depth (14 to 16 bit) digital radiographic data, where everyday monitors and lookup table techniques are hard pressed to reflect the large dynamic range of the data.

In an important sense the interactive operations discussed above are simple thresholding transforms. The image viewed through the lookup table organizes the pixel intensities in such a way as to make all pixels lower than a certain value one color. Also, at the high end, all values greater than a certain pixel intensity are likewise one color. The value of this threshold is an estimate of some property of the object. Consequently, this value has some real significance in its mean, variance and error properties. These transforms are considered more explicitly below, evaluating the resultant image for information about the test object.

The present discussion classifies image transforms used for digital radiographic and computed tomographic data into five groups.

1. Low level transforms are used to calibrate or account for artifactual features in the raw image.

2. Spatially invariant transforms are applied through image kernel based processing.

3. Morphological transforms involve erosion and dilation operators.

4. Specialized numerical processing algorithms involve taking projections on a single axis of an image, shrinkwrap operations and Freeman chain code algorithms for finding curvilinear boundaries.

5. There are multiscale, multiresolution transforms.

Many good texts and papers are available on these different transform options. The following illustrative examples may help to focus the investigation for any particular test at issue. Table 2 compares the transform techniques discussed below.

Calibration and Low Level Transforms

Many properties of an X-ray image are not a result of some property of the object. The barrel distortion or veiling glare in image intensifier images5 is a result of the inner workings of the image intensifier and is not related to the object. Bad pixels in X-ray panel images are discontinuities in the detector. Saturated pixels in charge coupled device scintillator images are caused by stray X-rays that have come in contact with the charge coupled device chip (and this can occur from any direction). Readout lines or bands in some amorphous silicon detectors are part of the structure of the detector. Finally, intensity differences and edge distortion due to beam divergence and large cone angles are the result of the X-ray source-to-detector geometry. The artifacts in images from these sources can mask features of interest and make image interpretation difficult.

One approach to removing these types of artifacts from image data is to acquire additional data that include the source of these artifacts and divide out, subtract or calibrate out those aspects of the image. Figure 9 includes two images of nine balls of a ball grid array from a charge coupled
device camera scintillator system, one with no correction applied, one with an attenuation correction applied from an image of the radiation without the object in the field. This digital radiograph is from a system with a microfocal source and magnification of 20. Notice how the attenuation correction flattens the intensity of the beam divergence. Also notice how the transform to the logarithmic scale enables a clearer view of the higher attenuating features in the image. The attenuation correction proceeds by dividing the object image Iobj by the image of the radiation (no object) Irad and taking the natural logarithm of the result. In terms of the digitized quantities I[S(E),d] and Io[S(E),d], the attenuation image A can be defined as:

(8)    A[S(E),d] = −ln { I[S(E),d] / Io[S(E),d] }

where the expression [S(E),d] merely designates the digitized signal S of the energy E at detector element d.

It is common for flat panel based detector manufacturers to provide schemes for calibrating their detectors. The calibration scheme requires the acquisition of images at differing levels of detector saturation. From these images a much clearer and more linear image is generated. Bad pixels or errant pixels are those intensity values in the image that carry no information about the object or have received a hit from the radiation. A number of reliable bad pixel removal routines exist, most based on local neighborhood statistics for the pixels in the image. Figure 10 contains images of an aluminum casting before and after calibration and bad pixel correction.

FIGURE 9. Ball grid arrays: (a) raw image; (b) image after attenuation transform.

Properly acquired calibration data can produce much clearer images. Artifacts of
TABLE 2. Comparison of transform techniques.

Transform Technique | Resources Required for Use | Time Required for Application | Advantages for Application | Drawbacks
Spatially invariant kernel based transforms | hardware and software readily available | with specialized hardware, time required is almost insignificant | many proven choices with well known properties | will not handle spatially varying aspects of radiographs
Image calibration transforms | significant maintenance | can be substantial; must be applied correctly to every image | eliminates troublesome artifacts | errors or problems in calibration data have big impacts
Morphological operators | hardware and software readily available | more significant than standard kernels | can show important features independent of single-pixel noise | large changes to image content; will simply remove certain features from the image
Specialized numerical transforms | input requirements can be demanding | can be computer intensive and disk intensive | will address a specific analysis need in image data | algorithms are not well known and error properties are not readily available
Wavelet based transforms | software packages must be evaluated relative to application | usually more significant than standard kernels | different and novel transforms of images | difficult to assess the error properties of the different multiscale transforms
Multiscale multiresolution transforms | software packages must be evaluated relative to application | usually more significant than standard kernels | pyramidlike structure provides many good choices of decomposition of images into their component parts | difficult to assess the error properties of these transforms
the detector are removed using the characteristic patterns within the detector. Beam divergence artifacts can be reduced by acquiring background images. When combined with the natural logarithmic transform, artifact reduction can easily reveal detail on the more attenuating parts of the object. Bad pixel corrections are usually recommended and mask the effects of troublesome pixels, enabling a clearer view of the rest of the image.

Proper calibration is both critical and problematic for tests. In many cases the best image analysis routines are the simplest. For the single-material, single-thickness test image, thresholding can help identify discontinuities in a straightforward and reliable way. In this type of operation a pixel value is chosen that represents the nonanomalous intensity level and pixels on one side or the other of that value are identified, indicating the presence of a discontinuity. Figure 11 includes a calibrated image, background corrected image and processed image of the casting in Fig. 10. Without calibration the identification of a useful threshold is complicated by the beam divergence in this particular technique. The outer edges of the image would require a different threshold than the inner parts of the image because the pixel intensities differ with the beam divergence of the X-ray source. This is even more important for some image intensifiers where distortion can account for a 30 percent change in the pixel values independent of the object. Dividing out the background image sufficiently flattens the image to enable the subsequent application of a simple threshold technique.

FIGURE 10. Aluminum casting: (a) raw flat panel image; (b) image after calibration and bad pixel correction.

FIGURE 11. Images of casting in Fig. 10: (a) calibrated image; (b) attenuation image; (c) unsharp mask image.
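The background division, logarithmic transform and fixed threshold sequence described for the casting can be sketched as follows. The divergence profile, attenuation values and threshold below are hypothetical stand-ins illustrating the attenuation transform of Equation 8, not values from the casting example:

```python
import numpy as np

def attenuation_image(i_obj, i_rad):
    """Equation 8: A = -ln(Iobj / Irad), computed pixel by pixel.
    Dividing by the no-object image Irad flattens the beam divergence."""
    return -np.log(i_obj / i_rad)

# Hypothetical one dimensional lineout: the background falls off toward
# the edges (beam divergence); the object adds a more attenuating feature.
x = np.linspace(-1.0, 1.0, 100)
i_rad = 4000.0 * (1.0 - 0.3 * x**2)      # image of the radiation, no object
i_obj = i_rad * np.exp(-0.5)             # uniform object attenuation
i_obj[40:60] *= np.exp(-0.4)             # a flaw with extra attenuation

a = attenuation_image(i_obj, i_rad)      # 0.5 everywhere, 0.9 at the flaw

# With the divergence divided out, one fixed threshold works everywhere.
flaw = a > 0.7
print(flaw.sum())   # 20 pixels flagged, exactly the seeded feature
```

Because the divergence cancels in the division, the same threshold applies at the edges and the center of the image, which is the point of the background correction.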
Extensive calibration schemes bring emphasize the different aspects of this
their own set of issues. To acquire an object — the position of the grid array
image for testing it is necessary to acquire balls, the porosity in the balls and the
many different images. Tests take more array of bond wires to the left of the
time and resources. Both the images for image.
testing and the calibration images need to
be maintained and possibly archived to For digital radiographic imaging,
track system performance over time. More lookup tables and interactive visual
importantly, the image being used to test enhancement tools can be very time
the object now includes the error variance of all the calibration images. Poorly acquired calibration data can ruin a test. Consequently, it pays to take very good calibration images (that is, lots of frame averages). Also, there is a limit to bad pixel removal: clusters of pixels greater than 3 × 3 are problematic. It is best to specify a limit on the number of bad pixel clusters and bad lines in the detector before purchase.

Image thresholding techniques are used throughout image analysis in a wide variety of contexts, on raw, calibrated and transformed images. There are two basic types: (1) fixed thresholds and (2) histogram derived thresholds. Fixed thresholds are determined from a set of considerations, usually an analysis of this class of images, and are independent of the particular image being analyzed. The value of the threshold is known before the analysis of the image. For histogram derived thresholds, the value of the threshold is not known before analysis. Rather, the histogram of the image is calculated first and a threshold value is determined from an analysis of the histogram. Consequently, the threshold will vary with the particular image. There are a variety of rules and approaches for choosing the value of the threshold from the histogram.11 Histogram based techniques have the advantage of adapting correctly to small intensity changes due to X-ray source fluctuations but are more complicated to develop and apply. As indicated above, the viability of thresholding techniques strongly depends on consistent quality in the analyzed images. This implicates the raw and calibration data.

Spatially Invariant Kernel Based Transforms

This class of image transforms affords the most variety. It is the class of image transforms covered in most signal and image processing books.12–14 Within this class of transforms are the most commonly used routines for smoothing, edge detection and image sharpening. Figure 12 contains images of a section of a digital radiograph with transforms applied: original image, unsharp mask image, sharpened image and gradient transform image.

FIGURE 12. Four different kernel based transforms of digital radiograph of electronic component: (a) original image; (b) unsharp mask image; (c) sharpened image; (d) gradient transform image.

These different representations of this image can make feature identification less time consuming on multiple-material, multiple-thickness images. Field flattening transforms can be useful in quickly inspecting different regions of an image in a single view. This is especially true for high bit depth detectors (12 bits or
Image Data Analysis 359
greater). Figure 13 includes raw and field flattened (also called unsharp mask transform) images of objects with multiple thicknesses and multiple materials. The transformed image provides an easier identification of indications in different regions of the objects. The output of these field flattened transforms can then be thresholded for more detailed or automated identification of features (see Fig. 13).

FIGURE 13. Electronic circuit board: (a) raw image; (b) field flattened image.

A large variety of kernel based transforms have been developed for a number of different types of radiographic testing. In most cases, these different kernels were developed to sift a particular feature from the image. As such, these different transforms apply to a certain class of images where that feature is present. Specific to radiographic data, some analyses have led to kernel based transformations for removing the effects of veiling glare5 and in some cases the effect of background scatter. These types of transforms have been shown to result in an image that shows features difficult to visualize with more standard techniques.

Automated and semiautomated tests often involve one or more of these transforms as one step in the process. The laplacian gaussian transforms are commonly used for edge detection; in combination with some thresholding operation, discontinuities and the edges of an object can be identified. Once edges are identified, the object can then be segmented out of the image, or some background feature taken out, to enable clearer processing of the parts of the image of interest. Automated processing for many different visible light tests has been developed using these tools and the results are illustrated in different texts and articles.4 These transforms will be mentioned again in the discussion on automated processing.

Limitations for this whole class of transforms stem from the spatially invariant character of how the transform goes about its work. The effects of each of the transforms in this class can be explained from the fourier representation of the processed image. To the extent noise is substantial in the image, emphasizing the high frequency components of the image will increase the proportion of noise in the image. To the extent the high spatial frequency content is suppressed, noise is reduced but the edges of the object in the image will be somewhat compromised. This difficulty can usually be addressed by acknowledging the tradeoff in processing. The fundamental problem in radiographic images is the presence of artifacts in both the high frequency and low frequency content of the image.

The various kinds of noise present in radiographic images are mostly high frequency. However, background scatter,
beam hardening and object scatter are all
low frequency effects. Consequently,
image regions through long chords of
material contain more noise and more blur (object scatter).
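The spatially invariant character referred to above can be made concrete: convolving an image with a kernel is the same as multiplying its fourier transform by the kernel's transform, frequency by frequency. A minimal numpy sketch, in which the random test image and 3 × 3 smoothing kernel are illustrative assumptions and the convolution wraps around the image edges:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))

# 3 x 3 smoothing kernel embedded in a full size array, centered at the origin
kernel = np.ones((3, 3)) / 9.0
padded = np.zeros_like(image)
padded[:3, :3] = kernel
padded = np.roll(padded, (-1, -1), axis=(0, 1))

# Spatial domain: circular convolution as a sum of shifted, weighted copies
direct = sum(kernel[i, j] * np.roll(image, (i - 1, j - 1), axis=(0, 1))
             for i in range(3) for j in range(3))

# Frequency domain: transform, multiply, inverse transform
via_fft = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(padded)))

print(np.allclose(direct, via_fft))  # True
```

Because the kernel's transform multiplies every pixel's frequency content by the same factors, any artifact sharing those frequencies is amplified or suppressed along with the feature of interest.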
In spite of the nonspatially invariant
character of some effects in radiographic
images, kernel based transforms will continue to emerge as useful tools in
many circumstances. Many software
packages include highly developed
options for kernel based processing.
Hardware for performing the operations is
commonplace and will continue to
perform this kind of imaging operation
with more ease and accuracy.
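As an illustration of the kind of kernel based option such packages provide, the unsharp mask and sharpening transforms discussed above can be sketched in a few lines. This is a simplified sketch, not any particular package's implementation; the box_blur helper and its wraparound edge handling are assumptions made for brevity:

```python
import numpy as np

def box_blur(image, size=3):
    """Mean filter built from shifted copies of the image; edges wrap
    around (a simplification of real border handling)."""
    r = size // 2
    shifts = [(i, j) for i in range(-r, r + 1) for j in range(-r, r + 1)]
    return sum(np.roll(image, s, axis=(0, 1)) for s in shifts) / len(shifts)

def unsharp_mask(image, amount=1.0, size=3):
    """Sharpen by adding back the high frequency residual (image - low)."""
    low = box_blur(image, size)
    return image + amount * (image - low)

image = np.zeros((32, 32))
image[8:24, 8:24] = 1.0            # bright square on a dark field
sharp = unsharp_mask(image, amount=1.0)
print(sharp[0, 0], sharp[16, 16])  # flat regions unchanged: 0.0 1.0
```

Flat regions are untouched because the residual image minus its smoothed version is zero there; only the edges are amplified, which is also why this transform amplifies high frequency noise.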
Morphological Transforms
Image morphological operators are
important tools for the identification of a
variety of features of an image. Many
good texts are available on morphological
operators and image analysis.15
Single-pixel noise (also called quantum
noise) in computed tomographic images
can make the process of extracting
features from computed tomographic
reconstructed images difficult.3 For digital
360 Radiographic Testing
radiographic and computed tomographic data images, erosion and dilation are useful techniques for reducing the single-pixel noise, for labeling the different regions and for extracting component objects and segments of image regions from images. In a binary image, each pixel is assigned a value of one (1) or zero (0). Dilation of a binary image means that a pixel in the output image is a one if any one of its eight closest neighbors is a one in the input image. Erosion means that the pixel in the output image is a one only if all of its eight neighbors are ones in the input image.

Figure 14 contains a computed tomographic slice through a metal sample for a new casting process and an image processed with 3 × 3 kernel gray scale erosion and dilation operators. The pixel size for this particular computed tomographic slice is 0.038 mm (1.5 × 10⁻³ in.). This particular material is a mixture of low-Z and high-Z materials and a certain amount of porosity. In this case the erosion operator replaced each voxel with the minimum of the 3 × 3 neighborhood around that voxel. In this way the intensities in the regions with more porosity are leveled to a fewer number of values. Also, the voxels on the edge of the porous sections are replaced with values closer to those at the center of the porous regions. The high-Z regions in the image are likewise leveled in intensity and connectivity in the high-Z areas is easier to identify.

FIGURE 14. Metal sample: (a) computed tomography slice; (b) gray scale morphological transform; (c) difference image.

Morphological operators can be connected together to perform many tasks useful in the analysis of images. Calculating or estimating boundaries of objects in digital radiographic or computed tomographic images is useful in many different contexts: assembly verification, process qualification or tracking the movement of different components in an assembly. Subtracting an eroded image from the original, subtracting a dilated image from the original or subtracting a dilated image from an eroded image can provide a good estimate of the boundary of attenuating objects.

FIGURE 15. Radiographs of ball grid array: (a) attenuation radiograph; (b) boundary image.

Figure 15 contains an image of
the electronic component radiograph in Fig. 9 and an image of the boundaries of the different materials generated by subtracting the eroded image from an image of the digital radiograph.

Morphological operators are used in selected steps in automated image analysis contexts. As objects get more complicated (multiple-material, multiple-thickness), automated analysis routines use a number of processing steps to generate the identification of a discontinuity or to find some feature out of tolerance in an object. Erosion and dilation operations can generate an image by showing connectivity between pixels or by grouping a number of pixels together in a binary image.

Issues surrounding the application of morphological operators depend on the test. Erosion and dilation operators can significantly change the values of the pixels in the transformed image. For larger kernels (greater than 3 × 3), smaller features are completely removed. Although this can be crucial for an analysis of larger features in an image, it is important to be aware of just what image content was left behind. It is best when the motivation for the image analysis can be developed into a determination of a kernel size and a justification applied to the extracted image content. It is also useful if a residual image is calculated and saved to track the features excluded.

Specialized Numerical Transforms

A number of specialized image operations have been developed for particular tasks common in image analysis of digital radiographic data. Only a few of the possibilities are mentioned here but each addresses particular problems in radiographic tests. Also, each showcases some possibilities of digital data.

In many circumstances the region of the object to be tested sits on top of a structure in the material that is not of interest to the test. A number of approaches have been developed to subtract out the structure independent of the object to be tested. One class of these techniques involves curve fitting the structure with some function that leaves the important part of the test alone. In one example of this technique, third degree polynomials were used to subtract out the underlying structure for the automated testing of welds in railroad rails16 (see the discussion of automatic discontinuity recognition for an example of subtracting trends). A second approach involves using prior knowledge of the discontinuity types to generate synthetic profiles or templates of discontinuities for extraction or identification of discontinuity signatures within a structure of an object.17,18

Weld testing often involves features organized in a vertical or horizontal dimension of the digital radiograph. In some of these circumstances it can be an advantage to sum the contents of the image into a single vector by averaging the rows or columns of a region of the digital image. The average vector has better signal to noise properties than the individual rows (or columns) of the image and it can turn out that the signatures of the discontinuities are preserved in the single vector. This vector can be used for correlation to signatures of discontinuities or can be decomposed into suspect discontinuity sites.

A number of radiographic tests involve numbers of the same structures organized in some way in the image. Sometimes this is by design, where a large area detector is used to test large numbers of small objects (that is, film radiographs of spark plugs, small aluminum castings and other objects). In these circumstances it is an advantage if the computer can pick the objects out of the image and either present them for analysis or provide some analysis of the constituent parts.

One such example is the ball grid array images in Figs. 9, 12 and 15. In these images the constituent balls are a central component for the test. Blob analysis19 can be an important tool for the analysis of these images. This algorithm finds contiguous groups of pixels satisfying a pixel intensity threshold, then identifies and extracts those segments for further processing. For the automated analysis of ball grid array images this routine finds the balls in the circuit board and extracts the image segments from the rest of the image. These image segments can then be passed on to other analysis algorithms.

In some tests, neither segmentation nor pixel classification is the important feature for the analysis; rather the boundary of an object or object component is central to the test. This kind of analysis is applied more often to computed tomographic reconstructed images, where the boundaries of object components are not obscured by other structures in the path of the X-rays. As such these techniques can be applied to two-dimensional or three-dimensional data. A number of different techniques have been developed to find boundaries in complicated objects.

This operation is different than the above transforms. By using other transforms, edges of high attenuating or low attenuating materials can be highlighted. Likewise, with morphological transforms, images can be generated that
show the outline of material boundaries. In these transforms the output is not an image but a set of boundary points (sometimes referred to as a point cloud).

Shrinkwrap codes are one way to estimate material boundaries in computed tomographic scans. The routines work by expanding some kind of wave front through the pixels or voxels in the image; then at a particular threshold the expansion stops, resulting in a set of coordinates measuring where the wave front stopped. There are different means for expanding the wave front and different means for controlling the positions for stopping around curved surfaces.

A second type of technique is the freeman chain code, which finds a boundary point and then uses a circular sewing operation to find the next boundary point; this process continues until the original boundary point is found. This type of routine can find boundaries of many different types of objects. Figure 16 illustrates the operation of this code applied to the computed tomographic scan of an automotive part.

FIGURE 16. Computed tomographic images: (a) shrink wrap routine applied; (b) freeman chain code routine applied.

Advanced Transforms for Multiscale, Multiresolution Analysis

The need for a different kind of image analysis has arisen out of the different problems encountered by the above techniques. Just how to determine the diameter of the kernels used is a matter of some controversy in medical and industrial imaging. The question of optimal kernel size is difficult within a particular application and completely intractable across different tests. Digital images are getting deeper in bit depth, possess greater latitude, are lower in noise, are larger in extent (sometimes larger than any available monitor) and can include substantial spatial resolution. Viewing these images on devices having much less dynamic range is a challenge. Techniques covered above each glean some set of features from the image at the expense of others. Unsharp mask transforms highlight the high frequency content of the image but can amplify the noise to overwhelm the low contrast features. It is often the case that the choice of kernel, lookup table or imaging transform is a compromise between competing and legitimate testing requirements and the losses in any one choice can be significant.

The process in multiscale, multiresolution analysis is one way of dealing with the image content at different scales in a more explicit fashion.
The first step involves transforming the
original image into a family of images.
Each of the decomposed images contains
a subband of the scale and contrast space
of the image. In different terms, the image
is split into componentlike images; each component represents a class of features
of the image at a particular scale. The
component images can then be
manipulated, transformed or amplified
and then recombined to obtain a
transformed image with a contrastive
performance matching the viewing range
of the monitor or making the best of the
contrast in the image (taking into account
the noise). The low contrast features of
the image, mostly found in some of the
component images can be amplified
somewhat independently of the noise
content. Figure 17 is a pictorial
representation of this process used in a
commercial image enhancement
process.20
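The decomposition and recombination steps described above can be illustrated with a toy band pass split. This is not the commercial process cited here; the box smoothing, the number of levels and the gains are assumptions chosen so the split is exactly invertible:

```python
import numpy as np

def blur(image, r=1):
    # Box smoothing with wraparound edges (illustrative simplification)
    shifts = [(i, j) for i in range(-r, r + 1) for j in range(-r, r + 1)]
    return sum(np.roll(image, s, axis=(0, 1)) for s in shifts) / len(shifts)

def decompose(image, levels=3):
    """Split an image into band pass component images plus a coarse residual."""
    bands, current = [], image
    for _ in range(levels):
        low = blur(current)
        bands.append(current - low)   # detail retained at this scale
        current = low
    bands.append(current)             # coarsest layer
    return bands

def recombine(bands, gains):
    """Weight each component image, then sum to rebuild the result."""
    return sum(g * b for g, b in zip(gains, bands))

image = np.random.default_rng(1).random((64, 64))
bands = decompose(image, levels=3)
restored = recombine(bands, gains=[1.0, 1.0, 1.0, 1.0])
print(np.allclose(restored, image))   # True: unit gains are lossless
enhanced = recombine(bands, gains=[2.0, 1.0, 1.0, 1.0])  # amplify finest band
```

Because each band can be weighted independently before recombination, low contrast content in one subband can be amplified without uniformly boosting the noise carried in the others.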
Wavelet based transforms also feature
this concept of decomposing images into
different scales and resolutions. Although
this varies with the particulars of the
wavelet generating transform, different
numbers of subimages are obtained from
a particular transform.21 Depending on
the image each subimage contains
information at a different scale or in a different region. Matched filters can be applied to the sets of subimages, or filters can be applied that are derived from templates of an object of interest.22 The subimages are combined and the inverse transform results in a transformed image.

FIGURE 17. Multiresolution multiscale image enhancement: the original image is decomposed into a multiscale presentation (finest to coarsest layers), contrast equalization is applied to the layers and reconstruction yields the enhanced result; density plots along the image center line compare original and enhanced images.

Multiscale, multiresolution analysis provides the best context for explicit manipulation of both the spatial content and the contrastive information in a particular image. The transformed image is built up of the weighted and transformed subimages and this provides a means for enhancing images and taking into account the radiation contrast of a set of features. A number of different studies have shown the benefit of these transforms for object identification and optimal image contrast.

All of this image manipulation can be implemented in a compact and computationally efficient way. Once the initial transform is complete, the bulk of the computations are performed on smaller, less complicated pieces of the image. This aspect of the analysis can be a big advantage for images with large numbers of pixels (that is, 3000 × 2000), where even fast computers are hard pressed to repeatedly process the image at its full size.

The benefits of multiscale, multiresolution analysis notwithstanding, there are issues with this class of techniques. These techniques may provide the best means for managing the tradeoff inherent in showing high bit depth images on monitors with much less range or in providing a means for extracting low contrast features without losing edge definition. However, making the most of image contrast for display purposes is different from reliably indicating the size or even the presence of a discontinuity. Low signal-to-noise images make poor input data for qualitative work. Compromises like these exist at all levels of the image formation and analysis process (see Fig. 2).

Furthermore, at this juncture the statistical properties of these transforms are not well understood. Low contrast features wrestled out of the noise have a tendency to disappear with changes in the scanner. As indicated above, image formation is a statistical process. Radiographic scanners degrade in the field over time. Running a large number of samples through a scanner and an algorithm to establish the error properties of a routine may miss the point.

Subtleties of Radiographic Images

Transmission images from X-ray sources include a number of subtle features not found in other imaging modalities. As explained above, each pixel in a radiographic image includes content from the scatter in the detector and scatter in the object. Neither the scatter profile of the detector nor the scatter profile for the object at that particular orientation to the X-rays is necessarily spatially invariant. Perhaps more important, these effects occupy both high and low spatial frequencies in the image. Figure 18 includes an image of a step wedge from a camera scintillator system and a lineout through the step wedge. This particular radiograph was acquired with the step wedge at 1.17 m (46 in.) from the source whereas the detector was 1.22 m (48 in.) from the X-ray source. The wedge was shot at 200 kV peak to ensure good transmission through all the steps in this wedge. The step sizes are 3.175 mm (0.125 in.) and the image has been processed by dividing out the background image and taking the natural logarithm of the result (units of the image are in attenuation units).

Notice the sharpness of the edges for the different levels in the step wedge and the noise levels in the steps as indicated in the lineout. The crispness of the edges decreases and the noise over the flat portions of the steps increases with attenuation. The noise levels are easily explained by the fewer photons transmitted through the steps (and this is a circumstance in which there is ample transmission through each of the steps).
The more blurry edges and some of the
increased noise can be explained as
greater proportions of background and
object scatter in the pixels behind the
thicker sections of aluminum. Also, notice
the nonlinearity in the relative attenuation value of each step thickness.

FIGURE 18. Step wedge: (a) attenuation image; (b) lineout through wedge.

Figure 19 contains the image of the wedge and the histogram of the image. This is another way of showing the same result. Notice the spacing between the peaks and the width of each peak. As the attenuation increases, the noise in the flat sections of the wedge also increases, as shown by the larger width of the peaks. Also, the spacing of the peaks gets slightly closer together with larger attenuation values. This is the combined effect of small amounts of scatter in the object, scatter in the detector and beam hardening.

Kernel based processing techniques, applied to the entire image, operate on all the spatial frequencies in an image in a uniform fashion (spatial invariance). As shown in many texts,23 the effects of any particular kernel based transform can be evaluated from the fourier representation of the transform. From this representation it can be seen just what gets multiplied to what spatial components of an image: the sharp features of an image (high frequency) and the smoother, more constant intensity parts of the image (low frequency).

The problem with these techniques for radiometric transmission images is the artifactual image content at nearly all frequency components of the image. Techniques that apply processing kernels to the entire image to enhance a particular feature will invariably do the wrong thing to other components in the image.

The subtleties of digital radiographic images mentioned here do not make these transforms a uniformly bad choice for processing and analysis. Many successful processing schemes have been developed for digital radiographs and computed tomography images based on these techniques. The most important aspects of particular image transforms are their limitations. All transforms emphasize some features and suppress or eliminate others. Depending on the nature of the test this selectivity can be good or bad.

The point is to understand the nature of why some procedures do not perform and how this might relate to the nature of these images. A technique developed for reflected light images is not always a good
choice for transmission images. There is no substitute for thorough testing of image analysis operations. Thoroughness entails running a sufficient number of images of different types through a particular algorithm to enable some statistical evaluation of image analysis performance.

Although it is typical to perform a variety of tests for any algorithm, this strategy may have limitations. It is often the case that digital radiographic or computed tomographic systems degrade in the field. There are a number of reasons for this degradation. Scintillators become less sensitive over time and produce less light. Cameras, intensifier tubes and amorphous panels become increasingly noisy with extensive use.

The degradation is not fast but is persistent. Consequently, the images fed to an algorithm when the system is fresh may not be typical of the images fed to the system after it has been operating for a while.

FIGURE 19. Step wedge: (a) attenuation image; (b) histogram of image.

Simulated radiographic images can help the operator to assess the degradation. A set of simulated images like the features to be detected or measured can be produced and later degraded with computer generated noise and blur in a precise way without running the system. In this way the point at which the algorithm stops working can be determined. By increasing the amount of noise and blur in a particular image in a graduated and precise way, the effects likely to occur over time can be estimated and maintenance intervals adjusted.

Dimensional Measurements

There are a variety of techniques for obtaining dimensional measurements from digital radiographic or computerized tomographic images. Three types of techniques are common: (1) interactive point and click procedures, (2) lineout extraction techniques with some algorithm applied to the lineout after extraction from the image and (3) boundary identification techniques for contour estimation. The first two types of
techniques are common for obtaining wall thickness measurements, although the third set of techniques is used mostly in obtaining boundaries from three-dimensional computed tomographic data. All three of these different techniques produce a dimensional measurement of a component in an image, with more or less accuracy.

Application to Copper Pipe

To illustrate these techniques and compare accuracy, a flat panel computed tomographic data set for a 75 mm (3 in.) outside diameter copper pipe was evaluated. This type of data set includes both digital radiographic and computed tomographic data and more importantly allows easy access for a physical measurement of the wall thickness as a reference. The data consist of 360 digital radiographic images acquired with a 300 × 400 mm (12 × 16 in.) flat panel and a 9 MeV linear accelerator. The computed tomographic scan was performed with rotational scanning only. The detector was 6 m (20 ft) away from the source and the object 0.50 m (20 in.) away from the detector.

FIGURE 20. Digital radiograph and computed tomographic slices of copper pipe: (a) digital radiograph; (b) cross section; (c) cross section; (d) cross section.

Digital radiographic images were calibrated with the two-gain coefficient technique and then processed into attenuation radiographs by dividing out the background image in the logarithmic scale. For this scan the digital radiographic data generate a volume of computed tomographic data with width and height roughly corresponding to the width and height of the digital radiographs (the computed tomographic data set will be a little smaller because of magnification but this is small in this data set). Figure 20 contains digital radiographic and computed tomographic images from this scan; these images are adduced in subsequent discussion.
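The attenuation processing described above (divide out the open beam background, then take the logarithm) is compact in code. A sketch under stated assumptions: the uniform count values are hypothetical and a real calibration would first apply the two-gain coefficient images:

```python
import numpy as np

def attenuation_radiograph(raw, background):
    """Divide out the background (open beam) image and take the negative
    natural logarithm, giving a map in attenuation units."""
    return -np.log(raw / background)

background = np.full((4, 4), 2000.0)   # hypothetical open beam counts
raw = background * np.exp(-1.2)        # uniform object with attenuation 1.2
print(np.allclose(attenuation_radiograph(raw, background), 1.2))  # True
```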
Interactive point and click techniques
measure the number of pixels between
two clicks of a mouse or pointer. For each
click the closest x,y coordinate is returned
from the interaction and the cartesian
distance calculated between the two sets
of x,y pairs of points. After this operation,
the pixel distance is converted to object
distance by multiplying by the pixel size
at that magnification of the object. For
digital radiographic images orientation is
important: only walls 90 degrees to the
X-ray beam–to–detector envelope can be
measured. The goal for this operation is to
pick the position of maximum chord
length (the position of the inner wall) and
the position of the outer edge of the part
(the outer boundary). For computed
tomographic images the goal is simply to
pick the inner and outer edges.
Using these techniques a number of
measurements of wall thickness were
taken on the digital radiograph and
computed tomographic slices in Fig. 20.
The positions of these measurements are
indicated in Fig. 21 (these are not the
lines drawn for the measurements — just
indicators of the position). Because there
is substantial porosity in this particular
sample an explicit attempt was made to
avoid taking computed tomographic
measurements in locations of porosity or
corrosion on the inner wall. Six different
measurements of wall thickness were
made for the digital radiographic and
computed tomographic data. Results for the digital radiographic measurements were 4.53, 4.65, 4.76, 4.88, 4.76 and 4.65 mm (0.178, 0.183, 0.187, 0.192, 0.187 and 0.183 in.). Results for the computed tomographic measurements are 4.332, 4.55, 4.3, 4.13, 4.46 and 4.36 mm (0.1706, 0.179, 0.169, 0.163, 0.176 and 0.172 in.).

Error variance for measurements performed with point-and-click techniques depends strongly on the accuracy of the operator, the precision of the measurements of pixel size at the object and somewhat on pixel size itself. In the hands of a seasoned operator these techniques can produce accurate results. Alternatively, cavalier clicks on the image result in sloppy estimates and lack of precision. It is best to train operators with a standard set of images, with different levels of noise, contrastive performance and spatial resolution.

FIGURE 21. Digital radiographic image and computed tomographic slices of copper pipe with locations of measurements identified: (a) digital radiograph; (b) cross section; (c) cross section; (d) cross section.

Pixel size at the object is determined from a measurement of the pixel size at the detector and the X-ray magnification or from the radiograph of an object that includes a dimensional standard. It is best when both techniques are used to produce an estimate of pixel size at the object and used for comparison. For digital data, the operation of pointing-and-clicking will only be good to a pixel, because everything is in terms of pixels. Given these sources of error, it is
difficult to understand how this
measurement can be better than a couple
of pixels, with substantial variation from
operator to operator. At the same time, for
some tests, this accuracy may be good
enough.
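The point and click arithmetic described above reduces to a few lines. The clicked coordinates and detector pixel size here are hypothetical; the magnification of about 1.09 follows from the 6 m source-to-detector and 0.50 m object-to-detector geometry given earlier (6.0/5.5):

```python
import math

def click_distance_mm(p1, p2, detector_pixel_mm, magnification):
    """Cartesian distance between two clicked pixel positions, converted
    to a physical distance in the object plane."""
    d_pixels = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    return d_pixels * detector_pixel_mm / magnification

# Hypothetical clicks 40 pixels apart on a wall, 0.127 mm detector pixels
thickness = click_distance_mm((212, 340), (212, 380), 0.127, 6.0 / 5.5)
print(round(thickness, 2))  # 4.66
```

Because the clicks resolve only to whole pixels, the result inherits an uncertainty of a pixel or two at the object, consistent with the limits noted above.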
A second more rigorous technique
involves the extraction of a lineout across
the feature and then an analysis of that
lineout. The typical example of this type
of approach is for measuring a wall
thickness from a computed tomographic
slice image of a cylinder. There are three
steps.
1. The user draws a lineout across the
wall, taking care to orient the lineout
as perpendicular to the wall as
possible.
2. The extracted lineout is processed with
a code combining derivative
calculation and fitting.
3. The pixel distance between the selected positions in the lineout is calculated and the physical size of the pixel is applied to the pixel distance.
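The steps above can be sketched as follows. The sigmoid test profile, the pixel size and the three point parabolic peak refinement (standing in for the derivative and fitting code of step 2) are illustrative assumptions:

```python
import numpy as np

def edge_centers(lineout):
    """Step 2: differentiate the lineout, then refine the two strongest
    edge peaks to subpixel positions with a three point parabolic fit."""
    deriv = np.abs(np.gradient(lineout))
    peaks = sorted(np.argsort(deriv)[::-1][:2])
    centers = []
    for p in peaks:
        a, b, c = deriv[p - 1], deriv[p], deriv[p + 1]
        centers.append(p + 0.5 * (a - c) / (a - 2.0 * b + c))
    return centers

# Step 1 stand-in: a synthetic wall profile with edges at pixels 20 and 50
x = np.arange(80, dtype=float)
lineout = 1.0 / (1.0 + np.exp(-(x - 20.0))) - 1.0 / (1.0 + np.exp(-(x - 50.0)))

# Step 3: convert the subpixel pixel distance to a physical thickness
inner, outer = edge_centers(lineout)
pixel_size_mm = 0.145  # hypothetical pixel size at the object
thickness_mm = (outer - inner) * pixel_size_mm
print(round(thickness_mm, 2))  # 4.35
```

The parabolic refinement is what gives the subpixel accuracy; noise in the lineout shifts the derivative peaks and is the main source of variability.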
Knowing the physical pixel size is still
a source of measurement error in this
technique. Also, not orienting the lineout
perpendicular to the wall to be measured
will result in some variability from
measurement to measurement. Lastly, the
noise in the lineout can be problematic
for the processing applied to the lineout.
Most processing schemes apply a
derivative filter to the lineout, resulting in
two peaks, each measuring the line spread
function of the system. The two peaks are
then put through some kind of fitting
code to determine the center of the peak
and the distance between the two fitted
values is the pixel distance. This
technique was applied to the different
lineouts around the wall in the computed
tomographic scan and resulted in measurements with an average of 4.3 mm (0.17 in.). When properly extracted, these techniques can produce subpixel accuracy for wall thicknesses imaged clearly by computed tomographic scanners.

Figures 22 and 23 display one of the computed tomographic slices from the copper pipe scan and illustrate the results of transforming and fitting the vector drawn across the wall of the copper pipe.

The last technique — in this case, a class of techniques — builds on some of the transform techniques mentioned above. Any transform that can convert the inner and the outer edges of the computed tomographic data to boundary measurements qualifies as a dimensioning technique. For example, the shrinkwrap, freeman chain code algorithm was applied to one of the computed tomographic slices in the copper pipe data set. Figure 24 shows the results of the application of this algorithm to this data set. Using these boundary points and taking distances, the average wall thickness was measured at 4.45 mm (0.175 in.).

It is hard to assess the accuracy of the different techniques. A variety of measurements were acquired around the pipe with a micrometer. The physically measured thickness values ranged from 4.2 to 4.7 mm (0.165 to 0.185 in.). This object is a copper pipe with substantial pitting and corrosion on its inner wall. The corrosion can be seen in the computed tomographic scans. The measurements of the wall thickness from the digital radiographic images are hard pressed to account for the loss of material on the inner wall, and this explains why those measurements are a little high.

Indeed, the very nature of the integration over the tangent to the wall performed by the radiation is an averaging operation. The computed tomographic point-and-click measurements were a little higher than the fitted measurements because of the subpixel estimation done by the nonlinear fitting to the lineout. The average measurements from the boundary image are close to all the other computed tomographic measurements. On a point by point basis, the boundary image techniques used in this comparison overestimate the wall thickness because of the stopping at a pixel boundary inherent in the routine.

This class of techniques generates sets of boundary points from three-dimensional computed tomographic volumes. The sets can be used in a variety of applications. The boundary points can be used in the process of generating three-dimensional solid models and computer aided design and computer aided manufacture models, or the models can be input to finite element analysis mesh generating codes. It is important to emphasize here that these sets of boundary points (often referred to as point clouds) are performing a measurement operation and have an error variance possibly greater than that of the above techniques, and that the error variance is not constant throughout the computed tomographic volume data.

FIGURE 22. Computed tomographic slice with lineout indicated and line trace with derivative displayed.

Image Data Analysis 369

FIGURE 23. Derivative of line trace and Levenberg-Marquardt gaussian fit.

There is a special character to blurs in computed tomographic reconstructed images. It is usually the case that the inner edges of computed tomographic reconstructed images include more blur than the outside edges. The reason for this lies in the nature of X-ray imaging: the beam that images the inner edges undergoes more scatter than the beam that images the outer edges. This greater blur on the inside edges is less pronounced for highly collimated computed tomographic scanners and more pronounced for scanners with less collimation, that is, area detector based scanners. This property is present in the copper pipe data and can be seen in the skew of the lineouts on the inner edge. Consequently the thickness of walls and inner features can easily be overestimated.

FIGURE 24. Cross section images: (a) computed tomographic slice; (b) freeman chain code estimate of inner and outer boundaries.
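The derivative-and-fit edge location described above can be sketched in a few lines. This is a minimal illustration, not the code used for the copper pipe study: the synthetic lineout, the 0.1 mm pixel size and the three-point quadratic peak interpolation (a simple stand-in for a Levenberg-Marquardt gaussian fit) are all assumptions.

```python
import numpy as np

def subpixel_peak(y, i):
    """Quadratic (three point) subpixel estimate of an extremum near index i."""
    a, b, c = y[i - 1], y[i], y[i + 1]
    denom = a - 2.0 * b + c
    return i if denom == 0 else i + 0.5 * (a - c) / denom

def wall_thickness(lineout, pixel_mm):
    """Differentiate a lineout drawn across a wall, locate the two edge
    peaks of the derivative and return their separation in millimeters."""
    d = np.gradient(lineout.astype(float))
    rise = subpixel_peak(d, int(np.argmax(d)))   # entering the wall
    fall = subpixel_peak(d, int(np.argmin(d)))   # leaving the wall
    return abs(fall - rise) * pixel_mm

# Synthetic lineout: air, a 4.3 mm wall sampled at 0.1 mm per pixel, air.
x = np.arange(200)
profile = 1.0 / (1.0 + np.exp(-(x - 50))) - 1.0 / (1.0 + np.exp(-(x - 93)))
print(round(wall_thickness(profile, 0.1), 2))    # close to 4.3
```

Note that the noise mentioned in the text would perturb both the peak search and the fit; a real routine would smooth or fit over a wider window before taking the extremum.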
The problem with all of these
techniques is the absence of a real error
bar connected to any of these
measurements. All of the measurements
are interesting and suggestive. Some studies have also been performed comparing large numbers of computed tomographic wall thickness measurements to other modalities.7 In one case computed tomographic measurements were found to be lacking in measurement accuracy; in the other, the accuracy was equal to that of any of the other techniques used. Both of
these studies were conducted on real
objects, not standards blocks (although
standards blocks were used in the
calibration). In each case it was very
difficult to get a physical measurement of
a physically concealed feature. For computed tomographic dimensional measurements to evolve from an interesting technique into a metrology tool, more quantitative investigations are necessary. This problem gets worse for
automated three-dimensional tools that
find the boundaries of inner walls and
attribute a physical significance to an
estimate of unknown accuracy.
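The boundary point (point cloud) thickness estimate discussed above can be illustrated with a small sketch. The synthetic annulus, the pixel size and the nearest-neighbor distance rule are illustrative assumptions; a production boundary extractor (for example, a freeman chain code routine) would replace the simple 4-neighborhood test used here.

```python
import numpy as np

def boundary_points(mask):
    """Pixels of a binary mask that touch the background (4-neighborhood)."""
    m = mask.astype(bool)
    interior = m.copy()
    interior[1:-1, 1:-1] = (m[1:-1, 1:-1] & m[:-2, 1:-1] & m[2:, 1:-1]
                            & m[1:-1, :-2] & m[1:-1, 2:])
    interior[0, :] = interior[-1, :] = interior[:, 0] = interior[:, -1] = False
    return np.argwhere(m & ~interior)

def mean_thickness(mask, pixel_mm):
    """Average nearest distance between the two boundaries of an annulus."""
    pts = boundary_points(mask).astype(float)
    center = pts.mean(axis=0)
    r = np.linalg.norm(pts - center, axis=1)
    split = 0.5 * (r.min() + r.max())        # separate inner from outer ring
    inner, outer = pts[r < split], pts[r >= split]
    d = np.linalg.norm(inner[:, None, :] - outer[None, :, :], axis=2)
    return d.min(axis=1).mean() * pixel_mm

# Synthetic pipe cross section: annulus with radii of 30 and 40 pixels.
yy, xx = np.mgrid[0:101, 0:101]
rr = np.hypot(yy - 50, xx - 50)
pipe = (rr >= 30) & (rr <= 40)
print(round(mean_thickness(pipe, 0.1), 2))   # near 1.0 mm (10 pixels)
```

The slight underestimate produced by pixel quantization in this sketch mirrors the text's point that boundary techniques stop at pixel boundaries and so carry an error of roughly one pixel.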
PART 3. Automated Testing Techniques
Need for Automated Systems

The need for fast tests, the added cost of operators and the known issues with variability in operator scoring have emphasized the need for automated test systems. Improvements in computer speeds and increased detector sensitivity have made automated techniques a good possibility for many applications.

Despite the relatively large bibliography of articles on automated discontinuity recognition techniques, the number of successful automated discontinuity recognition systems remains small. There are a variety of reasons for this, many of which are not part of the image analysis for a test. Design changes and changes in criteria are increasingly common in manufacturing operations. The effective lifetime for parts and components is getting smaller. In the automotive industry, continuous quality improvement programs and weight reduction programs can change the properties of a component to the point where a new test is required. Developing robust test techniques can be expensive in time and resources. For nondestructive testing systems, it is difficult to be responsive to every new change in a part or component. Consequently, there is a disinclination to enter expensive development programs with short payoff lifetimes.

Automated X-ray nondestructive testing systems face a growing list of expectations. Fielded systems in manufacturing operations are increasingly more connected to the other systems on the factory floor. Statistical process control functions are part of this connection, with each component system reporting status and results. X-ray nondestructive testing systems are not always as connected as other systems.

Furthermore, reporting functions within a system are not common and maintenance intervals can be problematic. It is rare for radiographic nondestructive testing systems to include recognition of an error variance in their measurements even though these measurements include substantial variability. Radiographic nondestructive testing systems are often treated as a special case, and this is expensive in both factory floor time and part flow. Radiographic nondestructive testing systems will be a mature part of manufacturing operations when they more easily fit into factory operations and possess the same reliability, for both automated testing and connectivity, as other systems.

In spite of all of these issues, the demand for automatic discontinuity recognition systems is high and growing. The details of any particular automatic discontinuity recognition system are usually proprietary. What follows is an overview of the elements in most automatic discontinuity recognition routines, followed by an example and recommendations for fielding and maintenance.

Discontinuity Recognition

Automated testing techniques are intentionally application specific. Just as it is difficult to decide on one kernel diameter, no single automatic discontinuity recognition routine works for all tests. Tests involving penetrating radiation can be classified into four classes: (1) single material, single thickness; (2) single material, multiple thickness; (3) multiple material, single thickness; and (4) multiple material, multiple thickness. A second layering on this classification is whether the test involves digital radiography or computed tomography. Automatic discontinuity recognition routines are more common for the first two types of tests and much more common for digital radiography than for computed tomographic systems.

Automatic discontinuity recognition routines include a similar number of steps. Underneath each system is some definition of a suspect pixel within the object.

1. The first step usually involves subtracting out or extracting out the portions of the object that get in the way of clearly identifying the discontinuity. In digital radiography this is usually referred to as trend removal or subtracting out the object. Other algorithms compare indication signals to some template of an acceptable object at this point.
2. After trend removal the processed image is in the best scale to identify the discontinuity. Some segmentation routine is applied at this point, usually guided by a threshold, to pick out the pixels that are part of some segment of a discontinuity.

3. Next, a connectivity routine is applied to bundle pixels that are close together and have been identified as anomalous. At the end of this step a binary image of the anomalous and not anomalous pieces of the radiograph can be generated to show the results of the operation.

4. Lastly, the discontinuity criteria are applied to the different segments of discontinuity locations and decisions are made about whether the object is good or bad.

The different steps in automatic discontinuity recognition routines are built up from the transforms covered above. The same thresholding transforms are used throughout the process on the processed images. Connectivity analysis routines like the blob analysis mentioned above can be useful as well. In different applications trend removal is performed by a least squares fit to a simple function to take out the changing lengths in the part at that place. The automatic discontinuity recognition routine is the sum of all the steps combined, and the robustness of the entire routine is a function of the match between the data presented and the routines used.

Evaluation of Indications

Application to Pipe Welds

FIGURE 25. Tube weld: (a) 0 degree view; (b) 90 degree view.

FIGURE 26. Digital radiographic image of tube weld: (a) 0 degree view; (b) boundary image.

To illustrate this process, a technique similar to work presented by Doering and Basart16 is applied to digital radiographs of pipe welds. In this case the welds are in small tubes, 25 mm (1.0 in.) outside diameter. Figure 25 contains two digital
radiographs of the tubes taken at 0 and 90 degrees. Because evaluating the discontinuities on the long tangent chords of the tube is more difficult, the two views will be used and those sections of the object will be avoided. The first step is an automated extraction of the center portion of the tube. To extract from the center of the tube in an automated way, the boundary points are used to calculate an extracted inner rectangle.

FIGURE 27. Extracted portion of tube weld: (a) before processing; (b) results of fit; (c) result of 2 sigma fixed threshold.
Figure 26 includes one of the digital
radiographs and the boundary point
image. The discontinuities that sit on the
integration through the double walls
without the tangents can be identified
with a polynomial least squares fit applied
row wise to the extracted pieces. The result is an image that is otherwise flat except for the discontinuities.
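The row-wise polynomial trend removal described above can be sketched as follows. The polynomial order and the synthetic tube image are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

def remove_trend(image, order=2):
    """Fit a low order polynomial to each row by least squares and subtract
    it, leaving the discontinuities on an otherwise flat image."""
    x = np.arange(image.shape[1])
    out = np.empty_like(image, dtype=float)
    for i, row in enumerate(image.astype(float)):
        coeffs = np.polyfit(x, row, order)        # least squares fit
        out[i] = row - np.polyval(coeffs, x)      # residual = indications
    return out

# Synthetic tube: smooth quadratic thickness trend plus one low mass flaw.
x = np.arange(100)
image = np.tile(0.002 * (x - 50.0) ** 2 + 10.0, (20, 1))
image[8:12, 40:44] -= 1.0                         # the "discontinuity"
flat = remove_trend(image)
print(abs(flat[0]).max() < 1e-6, flat[10, 41] < -0.5)   # True True
```

A flaw large relative to the row slightly biases the fit itself; robust or masked fitting would reduce that bias in a fielded routine.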
Now that the discontinuities have been isolated from the tube shape, a fixed threshold is applied at minus two standard deviations of the noise, calculated from the center of the tube in an unwelded area. Pixels less than this value are
considered to be reliable indicators of less
mass in the weld of the object. After the
threshold is applied, only the discontinuities remain in the resulting binary image. Figure 27 contains the results of
this operation.
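The fixed two-sigma threshold step can be sketched as follows; the noise level, reference region and synthetic detrended image are illustrative assumptions.

```python
import numpy as np

def suspect_pixels(flat, noise_region):
    """Flag pixels more than two standard deviations below zero, with the
    noise level estimated from an unwelded reference region."""
    sigma = flat[noise_region].std()
    return flat < -2.0 * sigma

# Detrended image: zero mean noise plus one low mass (negative) indication.
rng = np.random.default_rng(0)
flat = rng.normal(0.0, 0.05, size=(50, 100))
flat[20:24, 60:64] -= 0.5
binary = suspect_pixels(flat, np.s_[0:10, 0:100])   # rows 0-9 are unwelded
print(binary[20:24, 60:64].all())                   # the seeded flaw is flagged
```

A one-sided two-sigma criterion also flags roughly 2 percent of clean pixels by chance, which is why the connectivity step that follows is needed to separate isolated noise pixels from true segments.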
At this point the image has been
processed to identify discontinuities with
a two-sigma fixed criterion. In this case a
number of different segmentation
schemes (that is, blob analysis) can be
applied to extract out the discontinuities.
From these extracted segments, a number
of different quantities can be calculated:
total discontinuity area, largest
discontinuity and others. If the images
have been calibrated into material length,
it is also possible to calculate the total
mass lost, in total and per discontinuity
area. In spite of all these calculations,
more information is needed to classify
these different image segments into
categories. Then still more information is
required to accumulate the different
discontinuity indications in the different
categories and make a determination of a
good or bad weldment.
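The segment quantities mentioned above (total discontinuity area, largest discontinuity) can be computed from the binary image with a connectivity labeling routine. This sketch uses scipy's connected component labeling as a stand-in for whatever blob analysis a fielded system would use; the pixel area is an assumed calibration.

```python
import numpy as np
from scipy import ndimage

def indication_statistics(binary, pixel_area_mm2):
    """Bundle anomalous pixels into connected segments and report the
    segment count, total indication area and largest single indication."""
    labels, count = ndimage.label(binary)       # 4-connectivity by default
    sizes = np.bincount(labels.ravel())[1:]     # pixels per segment
    return {
        "segments": int(count),
        "total_area_mm2": float(sizes.sum() * pixel_area_mm2) if count else 0.0,
        "largest_area_mm2": float(sizes.max() * pixel_area_mm2) if count else 0.0,
    }

# Two seeded indications in an otherwise clean binary image.
binary = np.zeros((40, 40), dtype=bool)
binary[5:8, 5:8] = True                          # 9 pixel segment
binary[20:22, 30:35] = True                      # 10 pixel segment
stats = indication_statistics(binary, pixel_area_mm2=0.01)
print(stats["segments"], stats["largest_area_mm2"])
```

If the image has been calibrated into material length, the same per-segment sums over the detrended (not binary) image give the mass lost per discontinuity, as described in the text.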
Discontinuity Detection Criteria
In the automatic discontinuity
recognition process there are a number of
important roles for product engineers.
1. It is important to specify the
discontinuity or discontinuities to be
detected. Full part drawings are useful
as well as any assistance in precise
mounting that can be offered.
2. Developing the criteria for finding
discontinuity indications should be a
cooperative effort.
3. It is most useful if the product engineer is involved in the selection or acquisition of realistic sample images for analysis work.

4. Standard parts need to be fabricated with known discontinuities. These parts are essential for system verification and are equally useful for salting the regular operation of the scanner.

Summary

In spite of the many issues with radiographic automatic discontinuity recognition systems, the nature of radiographic testing makes this modality particularly attractive. Radiographic
testing is a noncontact test method that
can adapt to design changes better than
contact based methods. Radiographic
sources can cover a wide range of
energies, letting a single system be used
on a variety of parts. Sometimes only
software needs to be changed to inspect a
new part. Advances in computer hardware make more extensive automatic
discontinuity recognition algorithms
more tractable within the strict time
requirements for factory floor cycle times.
Although some of the promise of
radiographic automatic discontinuity
recognition depends on more general
purpose tools for stitching together
automatic discontinuity recognition
routines, it is reasonable to expect
radiographic automatic discontinuity
recognition machines to be more
commonplace and achieve the reliability
and repeatability obtained in other
factory floor equipment.
References
1. Evans, R.D. The Atomic Nucleus. New York, NY: McGraw Hill (1955).

2. Heitler, W. Elementary Wave Mechanics with Applications to Quantum Chemistry, second edition. Oxford, United Kingdom: Clarendon Press (1955).

3. Barrett, H.H. and W. Swindell. Radiological Imaging. New York, NY: Academic Press (1981).

4. Kak, A.C. and M. Slaney. Principles of Computerized Tomographic Imaging. New York, NY: IEEE Press (1987).

5. Bushberg, J.T., J.A. Seibert, E.M. Leidholdt, Jr. and J.M. Boone. The Essential Physics of Medical Imaging, second edition. Philadelphia, PA: Lippincott Williams & Wilkins (1994).

6. Dainty, J.C. and R. Shaw. Image Science: Principles, Analysis and Evaluation of Photographic Type Imaging Processes. New York, NY: Academic Press (1974).

7. Martz, H., D. Schneberk, C. Logan and J. Haskins. “New Modalities in X-Ray Detection and Their Use for Industrial DR/CT.” Presented at International Symposium on Computerized Tomography for Industrial Applications and Image Processing in Radiology [Berlin, Germany, March 1999]. Berlin, Germany: Deutsche Gesellschaft für Zerstörungsfreie Prüfung (March 1999).

8. Macovski, A. Medical Imaging Systems. Upper Saddle River, NJ: Prentice-Hall (1983).

9. Martz, H.E., D.J. Schneberk, G.P. Roberson and S.G. Azevedo. Computed Tomography. UCRL-ID-112613. Livermore, CA: Lawrence Livermore National Laboratory (September 1992).

10. Johns, H.E. and J.R. Cunningham. The Physics of Radiology, fourth edition. Springfield, IL: Charles C. Thomas (1983).

11. Sahoo, P.K., S. Soltani and A.K.C. Wong. “A Survey of Thresholding Techniques.” Computer Vision, Graphics, and Image Processing. Vol. 41. San Diego, CA: Academic Press (1988): p 233-260.

12. Oppenheim, A.V. and R.W. Schafer. Discrete-Time Signal Processing. Upper Saddle River, NJ: Prentice Hall (1999).

13. Lim, J.S. Two-Dimensional Signal and Image Processing. Upper Saddle River, NJ: Prentice Hall (1990).

14. Bracewell, R.N. The Fourier Transform and Its Applications, third edition. Boston, MA: McGraw-Hill (2000).

15. Serra, J. and P. Soille, eds. Mathematical Morphology and Its Applications to Image Processing. Dordrecht, Netherlands: Kluwer Academic Publishers (1994).

16. Doering, E.R. and J.P. Basart. “Trend Removal in X-Ray Images.” Review of Progress in Quantitative Nondestructive Evaluation. Vol. 7. New York, NY: Plenum Press (1988): p 785-794.

17. Gayer, A., A. Saya and A. Shiloh. “Automatic Recognition of Welding Defects in Real-Time Radiography.” NDT International. Vol. 23, No. 3. Oxford, United Kingdom: Elsevier Science Limited (1990): p 131-136.

18. Kaftandjian, V., A. Joly, T. Odievre, M. Courbière and C. Hantrais. “Automatic Detection and Characterisation of Aluminium Weld Defects: Comparison between Radiography, Radioscopy and Human Interpretation.” 7th European Conference on Non-Destructive Testing [Copenhagen, Denmark]. Vol. 2. Copenhagen, Denmark: 7th ECNDT Copenhagen (May 1998): p 1179-1186.

19. Davis, K. and C. Lazinsky. “The Quest for Speed, Precision and High Yield.” Imaging Insight. Vol. 2, No. 3. Dorval, Quebec, Canada: Matrox Electronic Systems Limited (Fall 2000): p 4-5.

20. Vuylsteke, P. and E. Shoeters. “Image Processing in Computed Radiography.” International Symposium on Computerized Tomography for Industrial Applications and Image Processing in Radiology [Berlin, Germany, March 1999]. DGZfP Proceedings BB 67-CD. Berlin, Germany: Deutsche Gesellschaft für Zerstörungsfreie Prüfung (March 1999): p 87-101.

21. Jawerth, B.D., M.L. Hilton and T.L. Huntsberger. “Local Enhancement of Compressed Images.” Journal of Mathematical Imaging and Vision. Vol. 3, No. 1. Norwell, MA: Kluwer Academic Publishers (1993): p 39-49.
22. Strickland, R.N. and H.I. Hahn. “Wavelet Transform Methods for Object Detection and Recovery.” IEEE Transactions on Image Processing. Vol. 6, No. 5. New York, NY: Institute of Electrical and Electronics Engineers (May 1997).

23. Brigham, E.O. Fourier Transforms. Upper Saddle River, NJ: Prentice Hall (1974).

Bibliography

Azevedo, S.G., H.E. Martz, D.J. Schneberk and G.P. Roberson. “Quantitative Measurement Tools for Digital Radiography and Computed Tomography Imagery.” UCRL-53868-94. Livermore, CA: Lawrence Livermore National Laboratory (1994).

Berger, H. Neutron Radiography — Methods, Capabilities and Applications. Amsterdam, Netherlands: Elsevier (1965).

Berger, H., ed. Practical Applications of Neutron Radiography and Gaging. Special Technical Publication 586. West Conshohocken, PA: ASTM International (1976).

Bossart, P.-L., H.E. Martz, H.R. Brand and K. Hollerbach. “Application of 3D X-Ray CT Data Sets to Finite Element Analysis.” Review of Progress in Quantitative Nondestructive Evaluation. Vol. 15. New York, NY: Plenum Press (1996): p 489-496.

Brand, H.R., D.J. Schneberk, H.E. Martz, P.-L. Bossart and S.G. Azevedo. Progress in 3-D Quantitative DR/CT. UCRL-53868-95. Livermore, CA: Lawrence Livermore National Laboratory (1995).

Chinn, D., J. Haskins, C. Logan, D. Haupt, S. Groves, J. Kinney and A. Waters. “Micro-X-Ray Computed Tomography for PBX Characterization, Engineering Research, Development and Technology.” UCRL-53868-98. Center for Nondestructive Characterization, UCRL-ID 132770. Livermore, CA: Lawrence Livermore National Laboratory (February 1999).

Dolan, K.W., H.E. Martz, J.J. Haskins and D.E. Perkins. “Digital Radiography and Computed Tomography for Nondestructive Evaluation of Weapons.” Thrust Area Report. UCRL-53868-94. Livermore, CA: Lawrence Livermore National Laboratory, Engineering Research, Development and Technology (1994).

Dolan, K.W., J.J. Haskins, D.E. Perkins and R.D. Rikard. “X-Ray Imaging: Digital Radiography.” Thrust Area Report. UCRL 53868-93. Livermore, CA: Lawrence Livermore National Laboratory, Engineering Research, Development and Technology (1993).

Espinal, F., T. Huntsberger, B.D. Jawerth and T. Kubota. “Wavelet-Based Fractal Signature Analysis for Automatic Target Recognition.” Optical Engineering. Vol. 37, No. 1. Bellingham, WA: International Society for Optical Engineering (1998): p 166-174.

Feldkamp, L.A., L.C. Davis and J.W. Kress. “Practical Cone-Beam Algorithm.” Journal of the Optical Society of America. Vol. 1. Washington, DC: Optical Society of America (1984): p 612-619.

Freedman, M. and N.H. Strickland. “Digital Radiology and PACS: Image Processing in Computed Radiography.” Grainger & Allison’s Diagnostic Radiology: A Textbook of Medical Imaging. Edinburgh, United Kingdom: Churchill Livingstone (1997).

Goebbels, J., U. Zscherpel and W. Bock. Computertomographie und Bildverarbeitung [Proceedings, International Symposium on Computerized Tomography for Industrial Applications and Image Processing in Radiology, Berlin, Germany, 1999]. DGZfP Proceedings BB 67-CD. Berlin, Germany: Deutsche Gesellschaft für Zerstörungsfreie Prüfung (1999).

Goodman, D.M., E.M. Johansson and T.W. Lawrence. Chapter 11, “On Applying the Conjugate Gradient Algorithm to Image Processing Problems.” Multivariate Analysis: Future Directions. New York, NY: Elsevier Science Publishers (1993).

Grangeat, P. “Analyse d’un Système d’Imagerie 3D par Reconstruction à Partir de Radiographies X en Géométrie Conique.” Ph.D. Dissertation. Grenoble, France: L’Ecole Nationale Superieure des Telecommunications (1987).

Grodzins, L. “Optimum Energies for X-Ray Transmission Tomography of Small Samples.” Nuclear Instruments and Methods in Physics Research. Vol. 206. Amsterdam, Netherlands: North-Holland Publishing Company (1983): p 541.

Grodzins, L. “Critical Absorption Tomography of Small Samples.” Nuclear Instruments and Methods in Physics Research. Vol. 206. Amsterdam, Netherlands: North-Holland Publishing Company (1983): p 547.
Haddad, W.S., I. McNulty, J.E. Trebes, E.H. Anderson, R.A. Levesque and L. Yang. “Ultrahigh-Resolution X-Ray Tomography.” Science. Vol. 266. Washington, DC: American Association for the Advancement of Science (1994): p 1213.

Herman, G.T. Image Reconstruction from Projections: The Fundamentals of Computerized Tomography. New York, NY: Academic Press (1980).

Hollerbach, K. and A. Hollister. “Computerized Prosthetic Modeling.” BioMechanics. Albany, NY: CMP United Business Media (September 1996): p 31-38.

Horstemeyer, M., K. Gall, A. Gokhale, M. Dighe and K. Dolan. “Visualization, Quantification and Prediction of Damage Evolution in Cast A356 Aluminum using High Resolution Experimental and Numerical Techniques.” Manuscript (1999).

Jain, A.K. “Image Data Compression: A Review.” Proceedings IEEE. Vol. 69, No. 3. New York, NY: Institute of Electrical and Electronics Engineers (March 1981): p 349-389.

Jain, A.K. Fundamentals of Digital Image Processing. Upper Saddle River, NJ: Prentice Hall (1989).

Keswani, R., S. Gangotra, S. Muralidhar, P.M. Ouseph, K.C. Sahoo and D.S.C. Purushotham. “Computed Tomography of Irradiated Nuclear Fuel Elements.” 15th World Conference on Nondestructive Testing [Rome, Italy, October 2000]. Brescia, Italy: Associazione Italiana Prove non Distruttive e Monitoraggio Diagnostica (2000).

Kinney, J.H., S.R. Stock, M.C. Nichols, U. Bonse et al. “Nondestructive Investigation of Damage in Composites Using X-Ray Tomographic Microscopy (XTM).” Journal of Materials Research. Vol. 5, No. 5. Pittsburgh, PA: Materials Research Society (May 1990): p 1123-1129.

Kinney, J.H. and M.C. Nichols. “X-Ray Tomographic Microscopy Using Synchrotron Radiation.” Annual Reviews of Materials Science. Vol. 22. Palo Alto, CA: Annual Reviews (1992): p 121-152.

Kinney, J.H., D.L. Haupt, M.C. Nichols, T.M. Breunig, G.W. Marshall and S.J. Marshall. “The X-Ray Tomographic Microscope: 3-Dimensional Perspectives of Evolving Microstructure.” Nuclear Instruments and Methods in Physics Research: Section A, Accelerators, Spectrometers, Detectors and Associated Equipment. Vol. 347. Amsterdam, Netherlands: Elsevier Science Publishers (1994): p 480-486.

Kinney, J.H., T.M. Breunig, T.L. Starr, D. Haupt et al. “X-Ray Tomographic Study of Chemical Vapor Infiltration Processing of Ceramic Composites.” Science. Vol. 260, No. 5109. Washington, DC: American Association for the Advancement of Science (1993): p 789-792.

Knoll, G.F. Radiation Detection and Measurement. New York, NY: John Wiley and Sons (1989).

Logan, C., J. Haskins, K. Morales, E. Updike, D.J. Schneberk, K. Springer, K. Swartz, J. Fugina, T. Lavietes, G. Schmid and P. Soltani. “Evaluation of an Amorphous Selenium Array for Industrial X-Ray Imaging.” Engineering NDE Center Annual Report. UCRL-ID-132315. Livermore, CA: Lawrence Livermore National Laboratory (1998).

Martz, H.E. “Computed Tomography of Bridge Pins, Cast Aluminum Automotive Components and Human Joints.” Report No. UCRL-JC-133573. 6th Nondestructive Evaluation Topical Conference [San Antonio, TX, April 1999]. New York, NY: American Society of Mechanical Engineers (April 1999).

Martz, H.E. “The Role of Nondestructive Evaluation in Life Cycle Management.” Frontiers of Engineering: Reports on Leading Edge Engineering from the 1997 NAE Symposium on Frontiers of Engineering [Third Annual Symposium on Frontiers of Engineering]. Washington, DC: National Academy Press (1998): p 56-71.

Martz, H.E., S.G. Azevedo, D.J. Schneberk, M.F. Skeate, G.P. Roberson and D.E. Perkins. Computerized Tomography. UCRL-53868-90. Livermore, CA: Lawrence Livermore National Laboratory (October 1991).

Physics Today. Special issue celebrating the centenary of Röntgen’s discovery of X-rays. Vol. 48, No. 11. Melville, NY: American Institute of Physics (November 1995).

Pontau, A.E., A.J. Antolak, D.H. Morse, A.A. Ver Berkmoes, J.M. Brase, D.W. Heikkinen, H.E. Martz and I.D. Proctor. “Ion Microbeam Microtomography.” Nuclear Instruments and Methods in Physics Research: Section B, Beam Interactions with Materials and Atoms. Vol. B40/41. Amsterdam, Netherlands: Elsevier Science Publishers (1989): p 646.

Proceedings of ASNT Topical Conference on Industrial Computerized Tomography [Seattle, WA, July 1989]. Columbus, OH: American Society for Nondestructive Testing (1989).
Real-Time Radioscopy and Digital Imaging [Mashantucket, Connecticut, August 1998]. Topical Conference Paper Summaries Book. Columbus, OH: American Society for Nondestructive Testing (1998).

Rowlands, J. and S. Kasap. “Amorphous Semiconductors Usher in Digital X-Ray Imaging.” Physics Today. Vol. 50, No. 11. Melville, NY: American Institute of Physics (1997): p 24.

Savona, V., H.E. Martz, H.R. Brand, S.E. Groves and S.J. De Teresa. “Characterization of Static- and Fatigue-Loaded Carbon Composites by X-Ray CT.” Review of Progress in Quantitative Nondestructive Evaluation. Vol. 15. New York, NY: Plenum Press (1996): p 1223-1230.

Scott, V.D. and G. Love. Quantitative Electron-Probe Microanalysis. Chichester, United Kingdom: Ellis Horwood Limited (1983).

Seibert, J.A., O. Nalcioglu and W.W. Roeck. “Removal of Image Intensifier Veiling Glare by Mathematical Deconvolution Techniques.” Medical Physics. Vol. 12, No. 3. Melville, NY: American Institute of Physics, for the American Association of Physicists in Medicine (May/June 1985).

Sengupta, S.K. “IMAN-3D: A Software Tool-Kit for 3-D Image Analysis.” Engineering Research, Development and Technology, UCRL 53868-98. Livermore, CA: Lawrence Livermore National Laboratory (1998).

Smith, B.D. “Cone-Beam Tomography: Recent Advances and a Tutorial Review.” Optical Engineering. Vol. 29, No. 5. Bellingham, WA: International Society for Optical Engineering (1990): p 524-534.

Soltani, P.K., D. Wysnewski and K. Swartz. “Amorphous Selenium Direct Radiography for Industrial Imaging.” DGZfP Proceedings BB 67-CD, Computerized Tomography for Industrial Application and Image Processing in Radiology [Berlin, Germany, March 1999]. Berlin, Germany: Deutsche Gesellschaft für Zerstörungsfreie Prüfung (1999).

Strickland, R.N. and H.I. Hahn. “Wavelet Transforms for Detecting Microcalcifications in Mammograms.” IEEE Transactions on Medical Imaging. Vol. 15, No. 2. New York, NY: Institute of Electrical and Electronics Engineers (April 1996): p 422-425.

Tabb, M. and N. Ahuja. “Multiscale Image Segmentation by Integrated Edge and Region Detection.” IEEE Transactions on Image Processing. Vol. 6, No. 5. New York, NY: Institute of Electrical and Electronics Engineers (May 1997).

Tonner, P.D. and J.H. Stanley. “Supervoltage Computed Tomography for Large Aerospace Structures.” Materials Evaluation. Vol. 50, No. 12. Columbus, OH: American Society for Nondestructive Testing (December 1992): p 1434-1438, 1445.

Van der Meulen, M.C.H. “Mechanical Influences on Bone Development and Adaption.” Frontiers of Engineering: Reports on Leading Edge Engineering from the 1997 NAE Symposium on Frontiers of Engineering [Third Annual Symposium on Frontiers of Engineering]. Washington, DC: National Academy Press (1998): p 12-15.

Varian Corporation. Flashscan 30 Manual. Palo Alto, CA: Varian Corporation [no date].

Waters, A.M., H. Martz, K. Dolan, M. Horstemeyer, D. Rikard and R. Green. “Characterization of Damage Evolution in an AM60 Magnesium Alloy by Computed Tomography.” Proceedings of Nondestructive Characterization of Materials IX. AIP Conference Proceedings 497 [Sydney, Australia, June-July 1999]. Melville, NY: American Institute of Physics (1999): p 616-621.

Weisfield, R.L., M.A. Hartney, R.A. Street and R.B. Apte. “New Amorphous-Silicon Image Sensor for X-Ray Diagnostic Medical Imaging Applications.” SPIE Medical Imaging: Physics of Medical Imaging. Vol. 3336. Bellingham, WA: International Society for Optical Engineering (1998): p 444-452.
CHAPTER 14. Backscatter Imaging
Lawrence R. Lawson, Bradford, Pennsylvania
PART 1. Physical Principles
Backscatter imaging involves the single-sided collection of scattered radiation rather than the transmitted radiation to form an image. Although in a typical X-ray test configuration at least as many photons are scattered as are transmitted, imaging with them is much more difficult. Consequently, backscatter imaging is usually a digital technique. The development of backscatter imaging has followed the evolution of digital radiography.

The motivations for backscatter imaging have been several. Perhaps the foremost has been the desire to image from one side. Furthermore, the backscatter technique can image a volume rather than a plane. For these reasons, backscatter imaging has been found useful for such applications as aircraft pressure bulkhead inspections, where access to both sides of the bulkhead is impracticable.

Other motivations for using backscatter imaging include its ability to be configured for direct measurement of the electron density of the object being measured. This property has been used in medical measurements of bone density. It was soon recognized that this property could be exploited to detect the difference between filled and unfilled voids within steel casings. The case in point was that of artillery shells. The shells themselves are of heavy material while detonators are nearly transparent radiographically. Transmission radiography could not always detect whether or not ordnance shells were armed. Backscatter radiography could. In fact, inspection of baggage for explosives and contraband has been the major driving force in the development of backscatter radiography.

Another motivation for using backscatter radiography is that it can perform a certain amount of chemical analysis on the object being imaged. This faculty is most acute at very low (1 keV) and very high (>2 MeV) energies. At energies around 60 keV, the dual energy technique permits the estimation of the atomic number of the material being inspected through comparison of scattering and absorption coefficients.

Scattering takes place through the interaction of an X-ray or gamma ray photon's oscillating electromagnetic field with either the charge of an electron or of the nucleus. For imaging purposes the interaction with electrons is most important. In most interactions, there is a transfer of energy between the photon and the electron.

Types of Scattering

There are four scattering processes currently used for backscatter imaging in the broadest sense. These are elastic scattering, fluorescence, compton scattering and resonance fluorescence.

Elastic scattering involves no energy loss. It is also called rayleigh scattering or coherent scattering. It is significant when the photon wavelength is on the order of atomic dimensions. Because the scattering is coherent, the wave function of the scattered photon is predictably related to that of the incident photon; this gives rise to diffraction effects. The entire field of X-ray diffraction is based on elastic scattering. Elastic scattering from crystalline materials takes place only at certain angles. Hence it is possible to measure the amount of some crystalline constituent using elastic scattering at selected angles. In noncrystalline materials, statistical parameters describing the spatial distribution of neighboring atoms can be determined by analyzing the scattered X-rays with techniques such as X-ray absorption spectroscopy. By measuring small changes in the scattering angles, elastic strains and hence residual stresses can be measured. But because the energies involved are on the order of a couple thousand electronvolts, penetration is limited. Elastic scattering is used primarily for one-dimensional measurements at a point. An example is the measurement of thin coatings with low energy X-rays. In this case, the source and detector geometry can be configured to exclude reflection from the (steel) substrate, because that reflection will occur only at specific angles.

Similar to elastic scattering, compton scattering is also scattering by the electrons that surround the nuclei of atoms. In this case there is energy loss from the incident photon to the electron, which recoils in what amounts to a collision. Compton scattering is important in the range of tens to hundreds of thousands of electronvolts. It is the basis for most of the attenuation of
high energy photons, for example, gamma rays. It is also the basis of most backscatter imaging techniques.

There are definite relations between the amount of energy lost and the angle of scatter. Phase information is essentially lost in the scattering process, so, unlike elastic scattering, diffraction effects do not take place detectably. Compton scattering is not isotropic but varies with the energy. At high energies, forward scattering predominates. Compton scattering competes with the photoelectric effect as a means of consuming photon energy. Hence, in low Z materials, where the photoelectric effect for X-rays is weak, compton scattering is relatively intense compared with high Z materials, in which photons are actually lost because of photoelectric absorption. Thus compton scattering is particularly useful for detecting electronically dense low Z materials such as opiate drugs and explosives.

In the event that a photon is absorbed in a photoelectric process, a high energy electron is generated. This secondary electron may itself be a K or L shell electron. It may also interact with other atoms to eject K or L shell electrons. The cascade that results to refill the missing K or L shell electron may result in the emission of a photon of the energy corresponding to the difference between that shell and some higher shell. For high Z materials these photons can have energies in the thousands of electronvolts. This process is called fluorescence.

As with radiation used for elastic scattering, fluorescence radiation has limited penetration. Backscattered X-ray fluorescence imaging is the basis of several chemical analysis tools used by surface scientists. Most of these require placing the part to be examined in a vacuum chamber; they are therefore forms of destructive testing. X-ray and gamma ray fluorescence has also been used in probes to sort alloys, to detect and measure the lead in paint coatings and to perform similar tasks.

Another type of fluorescence occurs at very high energies and is called resonance fluorescence. At energies in the vicinity of 10 MeV, incident photons can cause changes in the energy of the nuclei of atoms. After absorption of a photon the nucleus relaxes, emitting other photons at lower energies but still in the millions of electronvolts range. These fluorescence photons, like their lower energy counterparts, are emitted at random angles, nearly isotropically. This isotropy means that nuclear resonance fluorescence makes backscatter imaging possible at energies above those where compton scattering would take place only in the forward direction. This technique has advantages for the examination of thick or dense structures and, like low energy fluorescence, facilitates chemical analysis from the backscatter energy spectrum.

The resonance fluorescence technique has drawbacks. Particle accelerators are required to generate the incident X-ray illumination. These are bulky and expensive. Another drawback is that above 500 keV, it is possible to generate residual radioactivity. This residual radioactivity problem is significant at 10 MeV for some materials.

Compton Scatter

Compton scattering is the type most often used in backscatter imaging. In this type of scattering the electron receives energy from the incident photon and recoils. Momentum is conserved and a lower energy scattered photon emerges. Because momentum is conserved, it is possible to relate the scattering angle to the amount of energy lost by the photon. This gives rise to the relationship:

(1) ν0/ν′ = 1 + (hν0/mc²)(1 − cos 2θ)

where ν0 is the incident photon frequency (reciprocal second), ν′ is the scattered photon frequency, hν0 is the incident photon energy, mc² is about 500 keV and 2θ is the scattering angle (in the sense used in diffractometry). Multiplying numerator and denominator by h, Planck's constant, shows that the expression is equal to the ratio of incident to scattered photon energies. The compton scattering process is described by the well known klein-nishina formula, which for ray pencils can be written, apart from specific geometry factors:

(2) Ψout = Ψin Z N Ωd (r0²/2)(ν′/ν0)²(ν0/ν′ + ν′/ν0 − sin² 2θ) t exp(−µl)

where l is the total path length (meter), N is the number of atoms per cubic meter, r0² is the square of the classical electron radius (r0² = 7.94 × 10⁻³⁰ m²), t is the thickness (meter) of the scattering element in the incident beam direction, Z is the average atomic number, µ is the linear attenuation coefficient (m⁻¹), θ is the beam angle (radian is preferred but degree is conventional, depending on choice of sin), Ψ is the photon flux (photons per second) and Ωd is the detector solid angle (steradian) into which the photon is scattered. Figure 1a
illustrates the scattering pattern given by Eq. 2. It gives the number scattering cross section per electron per unit of solid angle in units of 10⁻³⁰ m².

Because the solid angle per increment of scattering angle at the poles is zero, this graph may be confusing. Figure 1b replots the data for 50 keV in terms of increment of scattering angle, where the solid angle increment is expressed in terms of the scattering angle 2θ:

(3) dΩ = 2π sin(2θ) d(2θ)

The units are again 10⁻³⁰ m².

FIGURE 1. Compton scattering pattern: (a) polar plot of differential scattering cross section; (b) incremental scattering angle for 50 keV. Unit of angle is the degree.

Compton scatter also contributes to attenuation in that it scatters photons out of a beam and consumes energy at the same time. The total linear attenuation coefficient µT is composed of a compton component µc, an elastic scattering component µe (significant at low energies) and a photoelectric component µp:

(4) µT = µc + µp + µe

Compton scattering is, in fact, the main cause of attenuation at energies over about 55 keV. In Fig. 1a, a polar plot of the differential scattering cross section, scattering is very roughly isotropic at the energy of 50 keV and below. The amount of backscatter however becomes diminished at 200 keV. Further increases in energy result in further reductions in backscatter intensity by the compton mechanism.

Because the amount of scattering in a volume element depends on the incoming flux, proportional to the area of the element, times the thickness of the element, the intensity of the radiation scattered from a material volume element of volume ∆V, a voxel, inside an object can be expressed as:

(5) Isc = K Ne ∆V exp(−µ̄l)

where K is a constant including the strength of the incident beam, Ne is the electron density of the material within the voxel and µ̄ is an average linear attenuation coefficient over the total path of length l, starting where the to-be-scattered photon enters the material and ending where it leaves it. The attenuation coefficient not only is a function of the material through which the photons pass but also is affected by the compton shift, that is, by the energy change on scattering. In addition to the flux intensity Isc given above, there are also background and a component resulting from multiple scatter, photons scattered more than once. Multiple scatter within the voxel may augment the signal but a system would probably be poorly designed if that were significant, because it implies a great deal of fuzziness in the image. The balance of the multiple scatter appears as enhanced background or noise. In fact, most of the background in backscatter scanning appears to result from multiple scatter rather than natural radioactivity and X-ray leakage. Making the multiple scatter background negligible is a matter of good design, proper choice of energies (frequencies) and software corrections.

Scatter As Function of Material

During their passage through material, X-rays are both absorbed and scattered. Figures 2 and 3 illustrate the relative amounts of these two processes for lead and aluminum respectively. The curve
shown for scattering is composed of both elastic and inelastic scattering but the former is not significant at energies near 100 keV and above.

Lead has a high atomic number and shows considerable photoelectric absorption relative to compton scatter at energies of 1 MeV. Another absorption mechanism is also shown: pair production. This occurs in all materials but is only significant at energies greater than 1 MeV, above the usual range for backscatter imaging. Aluminum has a low atomic number and consequently photoelectric absorption exceeds compton scattering only at very low energies. This makes aluminum a suitable material for inspection by backscatter imaging.

FIGURE 2. Scattering and absorption for lead.

FIGURE 3. Scattering and absorption for aluminum.

Figure 4 compares the significance of compton scattering to photoelectric absorption by showing the compton fraction of total scattering. Notice that even near 120 keV, compton scattering remains the minority process in iron.

FIGURE 4. Compton fraction of total scattering compared to photoelectric absorption. Even near 120 keV, compton scattering remains the minority process in iron.

At low incident photon energies, the binding of the electron to the nucleus becomes significant. The klein-nishina formula given as Eq. 2 is no longer strictly accurate, because the electrons cannot all be regarded as being free.

At high incident energies, the binding energy of the electron is trivial compared with its recoil energy. When the incident energy is so low that the binding energy is close to the recoil energy, two types of effects are observed. Line broadening is one effect that can in principle be observed up to several tens of kiloelectronvolts, depending on the energy sensitivity of the detector and the monochromatic purity of the source.

Much chemical information can be gained about the bonding of electrons through studies of compton broadening. At still lower energies the scattering coefficients themselves vary. Figure 5 illustrates this effect by comparing the scattering coefficients for two ions having the same total number of electrons, 21, but different nuclear charges.1 The ions are titanium (Ti⁺) and vanadium (V⁺²). The compton scattering intensity (divided by the breit recoil factor) is plotted versus an energy loss parameter, sin(θ)·λ⁻¹, where λ is the wavelength in units of 0.1 nm. At large energy losses, the scattering intensities are nearly the same for both ions. But at low energies, they differ. Such differences can in principle be observed by comparing images made at two different energies. In practice, these effects are not easily observed in backscatter.
Other modalities such as fluorescence and coherent scattering are less often used for nondestructive testing. Coherent scattering, X-ray diffraction, is often a reflection technique. Because of the low energies involved, however, penetration is minimal.

X-ray diffraction also tends to be slow. Scans can be performed but they are even slower. For this reason X-ray diffraction is seldom used as an imaging technique except in the highly specialized task of inspecting large crystals for discontinuities in structure.

The main advantage of X-ray diffraction is that it measures composition by crystal structure. Hence, it is used commercially to detect diamonds in diamond mining. Low energy X-ray or gamma fluorescence has been used in commercial paint gages to detect and measure lead paint. Resonance fluorescence is a recently developed high energy technique.

Resonance fluorescence involves transitions between energy levels within the nucleus itself. It was predicted before it was discovered. Early attempts using isotope sources to detect it were not highly successful because the narrow energy ranges of these sources did not match the energies favorable for resonance transitions. The development of small accelerators providing broad band bremsstrahlung in the 10 MeV range has made resonance fluorescence available as an imaging tool. The techniques involved use flying spot scanning (to be discussed) in combination with broad area detectors. The primary advantage of resonance fluorescence lies in the large amount of backscatter obtained at high energies, where the compton process would allow only forward scatter. The second advantage is that the backscattered photons have energies characteristic of the atoms from which they originated. This allows chemical analysis to be performed, even of light elements, of materials within heavy absorptive enclosures.

FIGURE 5. Compton scattering intensity (divided by breit recoil factor) plotted versus energy loss parameter sin(θ)·λ⁻¹, where λ is wavelength in units of 10⁻¹⁰ m.

Single and Multiple Scattering

There are basically two scattering models useful for backscatter imaging. They will be discussed only for compton scatter but in the single scattering case, the same remarks apply qualitatively to elastic scatter as well. In the first, single scattering model, an incident photon travels with some probability of absorption to a region within the material. It is there scattered and returns along a straight path with some probability of absorption to a detector. The simplicity of this model allows closed form calculations and has proven extremely useful in the service of nondestructive testing. Computations are based on geometry and are relatively simple compared with the corresponding computations for visible light, because the wave nature of X-rays can generally be ignored because of their small wavelengths.

The second model is multiple scattering. Two entirely different approaches may be used but both involve solving the boltzmann transport equation. Historically, the first approach was to solve the boltzmann transport equation directly. Except in a very few special cases, this approach was fraught with errors and disappointments due to the simplifying assumptions that were needed.

What emerged as the most successful approach is solution by monte carlo simulation. Hypothetical photon paths are generated by random numbers and the overall energy transfer is estimated from averaging over many such paths. This computationally intensive approach is greatly facilitated by digital computers. Unlike early attempts at solving the boltzmann transport equation directly, monte carlo methods usually show excellent agreement with experiment. In either single or multiple scattering, the theoretical treatment depends somewhat on the detector model. Detectors may count photons or respond to photon energy. The following discussion of backscatter presumes photon counting. The energy of the counted photons is measured separately in modern detectors and provides supplementary information.

The single scattering model is applicable, at least as an initial estimate, to almost all backscatter imaging applications to date. The reason for this is that most imaging is done with photons
in the 30 to 100 keV energy range. In most materials, for example transition metals, there is significant absorption in this energy range. Consequently, multiple-scattered photons have lost more energy because of the compton process and are therefore better targets for photoelectric absorption. In addition, they are laterally displaced. In systems with both collimated sources and collimated detectors, multiple-scattered photons must follow very specific paths that undo the lateral displacement effect to reach the detector. Because these paths are specific, their associated probabilities are low.

Consider an X-ray pencil beam entering a uniform material (Fig. 6). A particular set of photons passes through the upper layer of material, is scattered in some volume element and returns back through the material to the detector. There is a round trip path that contributes to the attenuation of photons. This is the first element of the process. In the second element, the scattering itself, the reciprocal of the linear dimension of the interrogated area (scattering zone) defines the resolution. This dimension is related to the solid angle subtended by the pencil beam. The throughput, or fraction of available photons participating, is proportional to this solid angle. If uncollimated detectors are used, then by this reasoning the throughput is inversely proportional to the square of the resolution. Under such circumstances the resolution is in two dimensions only; the beam interrogates more or less the entire depth of the specimen at once and the resulting signal is an integral over the depth.

FIGURE 6. Scattering model.

As an example, consider a configuration where a narrow pencil enters a uniform material (see Fig. 7). Presume that the detector is configured to collect all photons on paths for which the cosine of the angle between them and the incident pencil is unity (implied is that the solid angle of the detector is not a factor); this would be about 180 degree scatter. The probability of scattering in a volume element of thickness t is σt, where σ is the linear scattering coefficient. If z is the distance from the scattering element to the surface, the amount of scattering in that element is I0σt·exp(−µz), taking into account attenuation on the way in. Including attenuation on the way out gives:

(6) Scattering = I0 σt exp(−2µz)

Taking the limit as t becomes infinitesimal and integrating over the entire specimen gives:

(7) Scattering = I0 (σ/2µ)[1 − exp(−2µz)]

where now z is the thickness of the specimen. This quantity is proportional to the strength of the signal at the detector. For an infinitely thick specimen, the signal is simply proportional to the ratio of the scattering coefficient and the linear attenuation coefficient.

A frequent type of measurement is to estimate the density of a material from its backscatter signal. But, given this expression, backscatter can be expected to reveal very little about the density of absorptive materials, because both the linear scattering coefficient σ and the linear attenuation coefficient µ are proportional to density. A way around this limitation is to use multiple detectors to provide different path lengths for the scattered photon. This allows µ to be decoupled from σ. More commonly in density measurements a different approach separates the source and detector; its application to concrete is discussed below. If the material is a weak absorber, then expanding in a series makes the signal instead proportional to the thickness:

(8) Scattering = I0 σx + O(µx²)

where x is the material's thickness.

When a collimated detector is used, its solid angle relative to the point of scattering becomes important. The throughput is then proportional to the solid angles of both the source pencil and that of the detector. This proportionality suggests that geometry could result in the signal's being inversely proportional to the fourth power of the resolution. Practical volume imaging systems make
one of the imaging elements a slit so that the throughput is inversely proportional to the cube of the resolution. This is still a severe limitation. In some systems, the resolution requirement is relaxed in less important dimensions to overcome this restriction. For example, in a commercially developed backscatter tomography system,2,3 resolution in the direction perpendicular to the plane of the tomograms has been somewhat reduced. In depth profiling backscatter schemes, the resolution in directions perpendicular to depth is considerably reduced.

This resolution problem is shown in Fig. 7. The scattering volume is assumed to be roughly spherical. Referring to the scattered rays approaching the detector, it should be apparent that the resolved dimension can be no smaller than the diameter of the aperture shown between the scattering volume and the detector; to visualize this, imagine the aperture at first located at the scattering volume itself and then withdraw it in steps toward the detector while adjusting the size of the detector. The solid angle Ω subtended by the detector aperture is:

(9) Ω = πd²/(16r²)

where d is both the aperture diameter and proportional to (if not identical to) the resolved dimension and r is the distance from the scattering volume to the detector. If the source is large in diameter compared to the resolved dimension, then the same construction can be applied to the source, resulting in the fourth power relationship mentioned. If the source is smaller than the resolved dimension, the power of aperture diameter d in the throughput may diminish but the overall efficiency would be improved by making the source larger. Were it possible to replace the detector aperture by an annular slit, the power would then be reduced to three. The single scatter geometry has been extensively modeled in the literature.

FIGURE 7. Resolved dimension can be no smaller than diameter of aperture shown between scattering volume and detector.

The multiple scattering model is of primary importance only in those techniques that use detectors collimated to exclude singly scattered photons, because single scattering usually dominates even when uncollimated detectors are used. The multiple scattering model can be described through the transport equation. This equation states that in any element or particular region in space, the rate of change in the number of photons within the region equals the fluxes integrated over that element's boundary.

These surface fluxes come from two sources. First are those arising from photons passing through the element unscattered (the convection term). Second are those arising from photons being scattered into the element. A third flux is proportional to the volume of the element and comes from photons created or absorbed in the element. Generally, the creation of photons within the element can be neglected. A monte carlo solution of the transport equation might divide the space into discrete elements and track hypothetical photons, scattered or not depending on generated random numbers, as they pass from element to element. Summing over many trial incident photons allows calculation of the expected fraction of them in any volume element at any time, going in any particular direction with any particular energy.

Apart from monte carlo and other numerical methods, many approaches have been tried to obtain solutions in closed form. The most common of these has been to assume that the scattering angles are small. These have resulted in a number of so called straight ahead approximations, useful when the energies of the incident photons are very high. An example is Yang's method, which allows calculation of the average lateral displacement of a photon passing through a thin layer because of scattering.4 These methods are of little use under imaging conditions. If the convection term is made very small, absorption negligible, scattering isotropic, the source term zero and the number of photons in any volume element independent of time, then it is possible to convert the transport equation into a form of Laplace's equation. This gives the diffusional approximation. Diffusional approximations could yield closed form estimates for some
flying spot scanning problems that might
be useful under fortuitous conditions. But
generally diffusional assumptions would
seldom be realistic. Retaining the
diffusional assumptions and reintroducing
the convection term would give rise to a
fokker-planck equation for which there is
an extensive literature of solutions.
However, the consistently good accuracy
reported for monte carlo methods has
largely obviated closed form
approximations.
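The monte carlo bookkeeping described above can be sketched compactly. The following toy one-dimensional Python model (the 1D geometry and all coefficient values are illustrative simplifications, not from any production transport code) tracks photons through a slab, drawing free path lengths and interaction outcomes from random numbers and tallying the fraction that returns through the entry surface:

```python
import random

def backscattered_fraction(mu_s, mu_a, thickness_m, n_photons=20000, seed=1):
    """Toy 1D monte carlo photon transport.

    mu_s, mu_a: linear scattering and absorption coefficients (1/m).
    Returns the fraction of photons re-emerging from the entry face (z < 0).
    """
    rng = random.Random(seed)
    mu_t = mu_s + mu_a  # total interaction coefficient
    back = 0
    for _ in range(n_photons):
        z, direction = 0.0, 1.0  # enter at z = 0 heading into the slab
        while True:
            # free path length is exponentially distributed with mean 1/mu_t
            z += direction * rng.expovariate(mu_t)
            if z < 0.0:
                back += 1   # escaped back out the entry face
                break
            if z > thickness_m:
                break       # transmitted out the far face
            if rng.random() < mu_a / mu_t:
                break       # photoelectric absorption consumes the photon
            direction = rng.choice((-1.0, 1.0))  # isotropic scatter in 1D
    return back / n_photons

# A weak absorber backscatters more than a strong absorber of equal scatter power:
print(backscattered_fraction(mu_s=50.0, mu_a=5.0, thickness_m=0.1))
print(backscattered_fraction(mu_s=50.0, mu_a=50.0, thickness_m=0.1))
```

Production codes track three-dimensional directions and energy dependent cross sections, but the scheme is the same: many trial photons, each followed interaction by interaction, with averages taken at the end.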
Multiple scattering may be significant
in the detection of low density regions or
regions of low atomic number in
materials. In the case of low density
regions, initially scattered photons may
travel the length of the region before
being rescattered out of the material. In
the case of low Z materials, the relative
absence of photoelectric absorption may
allow a single photon to survive through
many scattering events. It has been
reported that when nearly
monochromatic cobalt-60 radiation is
backscattered from steel containing an air
gap, a detectable energy peak varies in
energy as a function of the size of the air
gap. This effect is enhanced at small
scattering angles and probably results
from multiple scatter within the air gap.5
Cracks can be detected through a
combination of multiple scattering and
fluorescence. By placing a film in contact
with a metal surface and illuminating the
part obliquely, Hasenkamp was able to
visualize small cracks.6 Just as a deep
notch in a red hot part will usually appear
brighter than its surrounding surface, a
crack will appear brighter than its
surroundings when the part is illuminated
with X-rays in a direction such that only
scattered X-rays can reach the detector.
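The single scattering expressions given earlier (Eqs. 6 to 8) lend themselves to direct calculation. A brief Python sketch, with purely illustrative coefficient values, shows the thick specimen limit σ/(2µ) and the thin, weakly absorbing limit I0σx:

```python
import math

def backscatter_signal(i0, sigma, mu, thickness_m):
    """Eq. 7: integrated 180 degree single-scatter signal from a uniform slab.

    i0: incident intensity; sigma, mu: linear scattering and attenuation
    coefficients (1/m); thickness_m: slab thickness (m). Values are illustrative.
    """
    return i0 * (sigma / (2.0 * mu)) * (1.0 - math.exp(-2.0 * mu * thickness_m))

I0, SIGMA, MU = 1.0, 20.0, 30.0  # illustrative values, coefficients in 1/m

# Thick slab: the signal approaches I0 * sigma / (2 * mu)
print(backscatter_signal(I0, SIGMA, MU, 1.0))

# Thin, weakly absorbing slab: the signal is close to I0 * sigma * x (Eq. 8)
x = 1e-4
print(backscatter_signal(I0, SIGMA, MU, x), I0 * SIGMA * x)
```

Because both σ and µ scale with density, the thick slab signal σ/(2µ) is density independent, which is precisely the limitation on density measurement noted in the text.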
PART 2. Backscatter Imaging Techniques
Pinhole

One of the earliest examples of industrial scatter imaging was performed using a pinhole camera. Pinhole cameras have been used to image sources such as X-ray tube anode spots or isotopes. The pinhole camera technique gives a two-dimensional image on film. Its simplicity is offset by the very small throughput obtainable because of the small solid angle subtended by a pinhole. Illumination of the object to be examined by a fan beam confined to a plane parallel to the film plane can be used as a form of laminography.7

Closely related to the pinhole camera is the multiple aperture collimator. This is essentially a plurality of pinholes and masks designed to image a selected volume element repeatedly onto a detector. For that particular volume element, the efficiency improves by the number of times that there are apertures. Because typically only one volume element is imaged at a time, no actual image is formed unless the aperture assembly is scanned. This technique was first developed for radioisotope scans in nuclear medicine and later adapted to the checking of ordnance detonators, where only one volume element needed to be checked. Figure 8 illustrates one form of multiaperture detector. Conical holes are drilled radially in a shielding material such as lead. The specimen, shown in gray, is illuminated from the side. Only those photons scattered at the geometric center of the collimator are detected.

FIGURE 8. Multiaperture collimator. In one type of multiaperture collimator, conical holes are drilled radially in shielding material such as lead. Specimen is illuminated from side and only photons scattered at geometric center of collimator are detected.

Moving Slits

By elongating a pinhole into a slit, its solid angle may be significantly increased without sacrificing resolution in one direction. Imaging slits have long been used for X-ray diffraction to create one- or even two-dimensional images in reciprocal space. Reciprocal space, the fourier transform of ordinary space, is of great interest to crystallographers because it defines the periodicities of crystals. By selecting a certain region of reciprocal space with slits, two- and even three-dimensional images of a selected crystal periodicity can be obtained in ordinary space by scanning the part to be examined relative to the fixed configuration of source, slit and detector. A major application is the mapping of residual stresses in machined parts.

When applied to compton scattering, slit imaging has been used in several depth profiling schemes. In these, the slits are configured to give high resolution in a direction defined as the depth, at the sacrifice of resolution in other directions. This configuration allows a maximization of throughput. In one case the source, slit and detector form a rigid assembly that is physically moved relative to the surface of the object under study. This makes positional accuracy independent of X-ray parameters or construction accuracy. In a second configuration, the source and one slit are stationary while the detector and an imaging slit are moved across the surface to collect scatter from progressively deeper layers (Fig. 9). Both systems have their advantages. The former is easily calibrated for depth because it is inherently a 1:1 system. It is usually configured so that the angles of both the incident and scattered photon paths are about or less than 45 degrees with respect to the surface normal. The latter is usually configured so that the scattered photon makes an angle much larger than
45 degrees with respect to the surface normal.

In principle, three-dimensional images can be obtained through a combination of depth scanning and lateral displacement of the imaging apparatus. However, when high resolution is required in the depth direction, scanning in other directions as well is likely to be impracticable from the standpoint of the time required. Such methods have been used in the laboratory to examine organic composites.

Flying Spot

Flying spot scanning is by far the most popular backscatter modality because of the large throughput obtainable. The detector solid angle can reach nearly 2π steradians. Flying spot scanning was used in the very earliest forms of television, which antedated broadcast television. It was rediscovered as a means of reducing unwanted scatter in transmission X-rays in the early 1970s and soon thereafter was applied to backscatter imaging.8

The most common technique involves placing a chopper wheel in front of a long and often semicircular slot (Fig. 10).8 The slot in the chopper wheel and the fixed slot together form a moving mask that limits the incident X-ray beam to whatever will pass through the mask. Usually a raster is scanned. The test object moves on a conveyor or, in extreme cases, the entire scanning apparatus moves. Because no reciprocating motion is needed, scanning can be correspondingly rapid and free of vibration.

The backscattered photons are usually detected with broad area uncollimated detectors. This technique is used extensively in luggage scanners to test for bombs. As previously discussed, resolution is limited by the size of the hole formed by the moving mask while throughput increases with the square of its linear dimension.

Using a detector in conjunction with the beam catcher (safety shield), transmission and backscatter images are often collected simultaneously. These can be processed to enhance features associated with dense materials of low atomic number such as heterocyclic drugs and explosives.

Flying spot scanners have been scaled up to sizes that allow the inspection of entire trucks and freight cars. They have been adapted to mine detection by directing the beam into the ground. A recent adaptation has been the addition of collimated detectors to detect only multiply scattered photons as a means of further highlighting low Z materials having high densities. By making the hole in the mask quite large, for example 30 mm (1.2 in.), and by using very large area detectors, the technique has been adapted for scanning humans for concealed drugs and explosives at a low X-ray dosage. Collecting data at two or more different energies allows estimation of the ratio of the scattering cross section to the linear absorption coefficient. This may be used to estimate the effective atomic number of the material being scanned, to partially identify explosives, drugs or water in metal and other materials.
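The dual energy idea mentioned above can be sketched numerically. The model below is illustrative only and is not the handbook's algorithm: the attenuation coefficient is approximated as an energy independent Compton term plus a photoelectric term scaling roughly as Z⁴/E³, the backscatter signal from a thick target is taken as inversely proportional to attenuation, and all constants are arbitrary.

```python
# Illustrative sketch (assumed model, arbitrary constants): estimating
# an effective atomic number from backscatter signals at two energies.
# The Compton term is nearly Z independent, while photoelectric
# absorption grows rapidly with Z and falls with energy, so the
# two-energy signal ratio is sensitive to effective Z.

def attenuation(z: float, energy_kev: float) -> float:
    """Toy linear attenuation: constant Compton term plus a
    photoelectric term scaling as Z^4 / E^3 (textbook approximation)."""
    compton = 0.2                                      # arbitrary units
    photoelectric = 9.0e-6 * z**4 / (energy_kev / 100.0) ** 3
    return compton + photoelectric

def backscatter_ratio(z: float, e_low: float, e_high: float) -> float:
    """Ratio of thick-target backscatter signals at two energies,
    taking signal as proportional to 1 / attenuation coefficient."""
    return attenuation(z, e_high) / attenuation(z, e_low)

def effective_z(measured_ratio: float, e_low: float, e_high: float) -> float:
    """Recover the Z whose predicted two-energy ratio matches the
    measurement.  The ratio decreases monotonically with Z, so simple
    bisection over a plausible Z range suffices."""
    lo, hi = 1.0, 30.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if backscatter_ratio(mid, e_low, e_high) > measured_ratio:
            lo = mid          # predicted ratio too high: Z too small
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In practice the inversion would be calibrated against measured cross section data rather than this toy power law, but the monotone ratio-to-Z lookup is the essential step.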
FIGURE 9. Moving detector depth scanning. In slit imaging configuration, source and one slit are stationary while detector and imaging slit are moved across surface to collect scatter from progressively deeper layers.

FIGURE 10. Flying spot X-ray backscatter system.8

A significant variation on the flying spot scanner has been its combination with an imaging slit and segmented detectors. In one such configuration,2 a reciprocating aperture formed by a rotating slotted cylinder placed very close
to the anode of an X-ray tube scans the beam in one direction. Calling that direction X, the raster is completed by a drive that moves the detectors, tube and X direction scanner in the Y direction. A long imaging slit (actually two, one on each side of the scanner) parallel to the X direction images the Z direction onto segmented detectors. Thus a true three-dimensional imaging system is formed. To increase speed, three scanning slits and sets of detectors can be worked simultaneously off one tube. This system's resolution is best in the X and Y directions, about 2 line pairs per 1 mm (0.04 in.). The Z direction is divided into slices about 1 mm (0.04 in.) thick. The scanned area is about 50 × 100 mm (2 × 4 in.). It has been found useful for aircraft bulkhead inspections and is one of the many backscatter techniques used for testing aircraft honeycomb.

A different approach to flying spot scanning has been to use magnetic coils to move the electron beam back and forth along the anode in an X-ray tube. This technique has been tried with a fixed aperture to move the X-ray beam back and forth, or with a film anode tube to scan a raster in low energy X-rays for micro examination of thin objects. Scanning the beam would allow a higher spot intensity than would otherwise be possible in a fixed anode tube. Difficulties with this technique as a means of beam steering include keeping the electron beam in focus over a significant angular deflection and providing a tube, anode and window wide enough for significant deflection. The scanning beam, reversed geometry technique is discussed elsewhere in this volume.

Depth Profiling

Depth profiling has already been discussed under moving slit techniques, where the moving detector technique was described. Here the fixed detector technique will be treated. In either case, a major issue in depth profiling is the simultaneous optimization of flux and resolution in the depth direction.9 The exact solution depends on the source collimation, the linear absorption coefficient and the depth being scanned. For materials having small absorption coefficients, the angle between the source and the detector should exceed 90 degrees, permitting a very high throughput at a given resolution. For absorptive materials, a symmetrical configuration having about a 90 degree separation between source and detector is a good choice, although others, including asymmetrical configurations, may perform as well with compensating advantages.

FIGURE 11. Depth profile of boron fiber composite patch on cold bonded aluminum aircraft skin. Vertical axis is backscatter counts (10³) times a dimensionless factor equal to unity at the surface; horizontal axis is depth, mm (in.).

The fixed detector technique has the detector in a fixed position relative to the source, with both combined into a single assembly. Slits are used to control the degree of collimation and the source and detector angles. Lead screws move the assembly perpendicular to the surface of the material being probed. An array of optical gages measures the true location of the surface. Because the interrogated region is at a fixed depth with respect to the assembly, the gages return the exact depth being scanned. Depth accuracy of 13 µm (0.0005 in.) has been achieved under field conditions. Actual accuracy in measuring the thickness of a layer depends on the size of the interrogated zone, the roughness of the surfaces involved and the amount of noise. Typical accuracies are in the range of ±0.04 mm (±0.0015 in.).
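The flux-versus-depth tradeoff described above can be sketched with a single-scatter attenuation model. This is a minimal sketch under an assumed exponential attenuation model, not the handbook's analysis: the incident beam travels d/cos(θ_in) to reach depth d and the scattered beam travels d/cos(θ_out) back out, with both angles measured from the surface normal, so the source-to-detector separation is θ_in + θ_out.

```python
# Hedged sketch of backscatter signal versus depth for a given slit
# geometry, assuming simple exponential attenuation along the inbound
# and outbound photon paths (angles measured from the surface normal).

import math

def relative_signal(depth_mm: float, mu_per_mm: float,
                    theta_in_deg: float, theta_out_deg: float) -> float:
    """Relative signal from an interrogated zone at depth_mm for a
    material with linear attenuation mu_per_mm and the given incident
    and scattered beam angles."""
    path = depth_mm * (1.0 / math.cos(math.radians(theta_in_deg))
                       + 1.0 / math.cos(math.radians(theta_out_deg)))
    return math.exp(-mu_per_mm * path)

# For a weak absorber, a wide (>90 degree) source-to-detector
# separation still returns ample flux at 2 mm depth; for a strong
# absorber, the symmetric ~90 degree (45/45) geometry retains far
# more signal than the same wide separation.
weak_wide    = relative_signal(2.0, 0.02, 30.0, 75.0)   # 105 deg apart
strong_sym   = relative_signal(2.0, 0.5, 45.0, 45.0)    # 90 deg apart
strong_wide  = relative_signal(2.0, 0.5, 30.0, 75.0)
```

The numbers bear out the qualitative rule in the text: the wide separation costs little signal when absorption is small but becomes punishing in absorptive materials, where the symmetric configuration is the better starting point.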
Figure 11 shows a depth profile of a boron fiber composite patch applied to an aluminum aircraft skin. The scan was made on an aircraft in the field. The chart shows a parameter roughly proportional to the electron density times a factor that decreases with increasing photoelectric absorption. As expected, the organic material containing the low Z element, boron, gives a higher signal than the aluminum substrate. Each flat topped peak corresponds to a layer of material. When needed, layer thicknesses are measured at half the height of the peak using a least squares fit to the data. The simulated disbond is a thin layer of fluorocarbon polymer introduced to test ultrasonics. Commonly noise increases