
GEOPHYSICS, VOL. 74, NO. 6 (NOVEMBER-DECEMBER 2009); P. V123–V132, 12 FIGS., 1 TABLE.
10.1190/1.3245216

Five-dimensional interpolation: Recovering from acquisition constraints

Daniel Trad1

ABSTRACT

Although 3D seismic data are being acquired in larger volumes than ever before, the spatial sampling of these volumes is not always adequate for certain seismic processes. This is especially true of marine and land wide-azimuth acquisitions, leading to the development of multidimensional data interpolation techniques. Simultaneous interpolation in all five seismic data dimensions (inline, crossline, offset, azimuth, and frequency) has great utility in predicting missing data with correct amplitude and phase variations. Although there are many techniques that can be implemented in five dimensions, this study focused on sparse Fourier reconstruction. The success of Fourier interpolation methods depends largely on two factors: (1) having efficient Fourier transform operators that permit the use of large multidimensional data windows and (2) constraining the spatial spectrum along dimensions where seismic amplitudes change slowly so that the sparseness and band-limitation assumptions remain valid. Fourier reconstruction can be performed by enforcing a sparseness constraint on the 4D spatial spectrum obtained from frequency slices of five-dimensional windows. Binning spatial positions into a fine 4D grid facilitates the use of the FFT, which helps the convergence of the inversion algorithm. This improves the results and computational efficiency. The 5D interpolation can successfully interpolate sparse data, improve AVO analysis, and reduce migration artifacts. Target geometries for optimal interpolation and regularization of land data can be classified in terms of whether they preserve the original data and whether they are designed to achieve surface or subsurface consistency.

INTRODUCTION

All current 3D seismic acquisition geometries have poor sampling along at least one dimension. This affects migration quality, which is based on the principle of constructive and destructive interference of data and thus is sensitive to irregular and coarse sampling (Abma et al., 2007). Analysis of amplitude variations with offset and azimuth (AVO, AVAz), which we want to observe in the migrated domain, is also affected by the presence of gaps and undersampling.

There are many different approaches to tackling this problem. The only perfect solution is to acquire well-sampled data; all other approaches deal with the symptoms of the problem rather than the problem itself, and there is no guarantee that they can adequately solve it. However, given that, in the real world, we usually cannot go back to the field and fix the actual problem, we need to address this issue using the processing tools at our disposal.

Most seismic algorithms implicitly apply some sort of interpolation because they assume correctly sampled data. Typically, missing samples are assumed to be zero or similar to neighboring values. The advantage of using a separate interpolation algorithm is that more intelligent assumptions can be made by using a priori information. For example, sinc interpolation uses the constraint that there is no energy at frequencies above Nyquist. This is more reasonable than assuming that the unrecorded data are zeros. Interpolation algorithms can then be viewed as methods to precondition the data with intelligent constraints.

Interpolation of wide-azimuth land data presents many challenges, some quite different from those of interpolating narrow-azimuth marine data sets. The most familiar interpolation algorithms have been developed for marine streamer surveys. Marine data are usually well sampled in the inline direction and coarsely sampled in the crossline direction. Many algorithms based on Fourier interpolation are quite successful at infilling the crossline direction, even in the presence of aliasing and complex structure (Schonewille et al., 2003; Xu et al., 2005; Abma and Kabir, 2006; Poole and Herrmann, 2007; Zwartjes and Sacchi, 2007). Land data interpolation brings additional complications because of noise, topography, and the wide-azimuth nature of the data. In particular, the azimuth distribution requires interpolation to use information from all spatial dimensions at the same time because sampling along any particular subset of the four spatial dimensions is usually very poor.
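The band-limitation constraint behind sinc interpolation mentioned above can be demonstrated in a few lines. This is only an illustrative 1D sketch (the signal, sampling rate, and tolerances are invented for the example) and is not part of the 5D method described in this paper:

```python
import numpy as np

# A 3-Hz cosine sampled at dt = 0.1 s (Nyquist frequency 5 Hz), so the
# band-limitation assumption behind sinc interpolation holds exactly.
dt = 0.1
t_s = np.arange(64) * dt
x_s = np.cos(2 * np.pi * 3.0 * t_s)

def sinc_interp(samples, t_samples, t_query, dt):
    # Whittaker-Shannon reconstruction: x(t) = sum_n x[n] * sinc((t - n*dt)/dt)
    # np.sinc is the normalized sinc: sinc(u) = sin(pi*u)/(pi*u)
    return np.array([np.sum(samples * np.sinc((t - t_samples) / dt))
                     for t in t_query])

# Predict values halfway between samples (never recorded), away from the edges
t_mid = t_s[16:48] + dt / 2
x_true = np.cos(2 * np.pi * 3.0 * t_mid)

err_sinc = np.max(np.abs(sinc_interp(x_s, t_s, t_mid, dt) - x_true))
err_zero = np.max(np.abs(x_true))  # error of assuming missing samples are zero
print(err_sinc, err_zero)
```

The sinc estimate lands close to the true values, whereas the "missing samples are zeros" assumption is off by nearly the full signal amplitude; this is the sense in which an a priori constraint preconditions the data.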

Multidimensional interpolation algorithms have become feasible even for five dimensions (Trad et al., 2005). This capability raises

Manuscript received by the Editor 12 August 2008; revised manuscript received 29 May 2009; published online 25 November 2009.
1CGGVeritas, Calgary, Alberta, Canada. E-mail: [email protected].
© 2009 Society of Exploration Geophysicists. All rights reserved.


Downloaded 29 Jan 2010 to 80.194.194.190. Redistribution subject to SEG license or copyright; see Terms of Use at http://segdl.org/


new possibilities but also brings new challenges and questions. The general principle is the same: Missing data are assumed to have a similar nature to data recorded in their neighborhood, but the term "neighborhood" can have different meanings in multiple dimensions. An additional complication for wide-azimuth data interpolation in five dimensions is that these data are always very irregular and sparse in at least two of the four spatial dimensions because of acquisition and processing costs.

Interpolation implementations have two different aspects: the general interpolation strategy (choice of spatial dimensions, window size, and target geometry) and the mathematical engine used to predict the new traces from some kind of model. A discussion of these two aspects follows.

INTERPOLATION STRATEGIES

Interpolation methods differ in complexity, assumptions, and operator size. Local methods (e.g., short-length prediction filters) use simple models (usually linear events) to represent the data in small windows. Therefore, they tend to be robust, fast, adaptable, and easy to implement. Their shortcoming is an inability to interpolate large gaps because the local information they need does not exist (there are no data around the trace to interpolate).

Global methods use all of the data simultaneously (up to some aperture limit defined by the physics of the problem) and models with many degrees of freedom because they cannot assume simple data events at a large scale. They are slower, less adaptable, and harder to implement. However, they can, at least in theory, interpolate large gaps by using information supplied from distant data. Most practical methods fall between these two extremes, but the sparser the sampling, the larger the operator size needs to be. If the geology is complex, some methods with a large operator can smear geologic features and decrease resolution. A safe choice is to work with global interpolation methods that behave like local interpolators when local information is available.

A related distinction is the number of dimensions that the algorithm can handle simultaneously. Usually, the time dimension is well sampled, so only spatial dimensions need be interpolated. Although 3D seismic data have four spatial dimensions, many traditional methods use data along one spatial dimension only. If the method is cascaded through the different dimensions, the order of these operations becomes extremely important. However, interpolation of sparse wide-azimuth data is more likely to succeed in a full 5D space because often at every point there is at least one spatial direction along which seismic amplitudes change slowly. Information along this direction helps to constrain the problem along the other dimensions, where events are harder to predict.

Also, seismic amplitude variations are smoother in five dimensions than they are in any projection into a lower-dimensional space. To see why, consider an analogy: Imagine the shadow of an airplane flying over a mountain range. The shadow follows a complex path even if the airplane flies a simple trajectory. Interpolation of the airplane flight path is much more difficult on the 2D surface (shadow) than in the original 3D space. A similar argument can be made about seismic wavefield variations in the full 5D space.

My approach to interpolation is to work with large operators in 5D windows. In practice, the data window size is often constrained by the processing system capabilities, particularly when using clusters in busy computing networks. I normally apply windows of 30 x 30 lines, 1000-m offsets, and all azimuths. Larger windows are occasionally required to deal with very sparse data. The spatial dimensions in these windows are chosen so that the data look as simple as possible along each dimension. After extensive testing in different domains (shot, receiver, cross-spread, and common-offset vector domains), I have chosen the inline-crossline-azimuth-offset-frequency domain (i.e., midpoint, offset, and azimuth) with NMO-corrected data for the following reasons:

1) These are the dimensions where amplitude variations are most important (structure, AVO, and AVAz). Interpolation is always an approximation of the truth, and that approximation is better along the dimensions where the algorithm is applied.

2) AVO and AVAz variations are usually slow (after NMO); therefore, data have limited bandwidth in the Fourier spectra along these dimensions. The azimuth dimension also has the advantage of being cyclic in nature, making it particularly fit for discrete Fourier transform representation.

3) The interval between samples in the inline and crossline dimensions (i.e., midpoints) is on the order of the common-midpoint (CMP) bin size. In the shot or receiver domain, the sampling can be as coarse as the shot/receiver line sampling (several CMP bins).

Figure 1. Synthetic data: comparison of 5D versus 3D interpolation. (a) Synthetic data, one shot (window). (b) After removing every second line. (c) Interpolation in 5D (inline/crossline/offset/azimuth/frequency). (d) Interpolation in 3D (receiver x, receiver y, frequency). Traces are sorted according to receiver numbers.

Figure 1 shows a simple synthetic experiment to demonstrate the advantage of 5D interpolation over 3D interpolation. The original traces from an orthogonal survey were replaced by synthetic seismic events while preserving the original recording geometry of the traces. The distance between receiver lines was 500 m, with 12 receiver lines per shot. Every second receiver line of this synthetic data set was removed, simulating a 1000-m line interval, and then predicted with Fourier reconstruction.
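The idea of predicting missing traces from a sparse Fourier spectrum can be sketched in a toy 2D setting. The snippet below uses random gaps rather than the regular line decimation of Figure 1 (regular decimation creates an aliasing ambiguity, discussed later in this paper) and a simple POCS-style iteration with a decaying threshold; the grid size, events, and threshold schedule are invented for the illustration, and this is not the production algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
nx, ny = 32, 32
ix, iy = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")

# "Data" with a sparse 2D spectrum: a sum of two plane-wave components
data = (np.cos(2 * np.pi * (3 * ix + 5 * iy) / nx)
        + 0.5 * np.cos(2 * np.pi * (7 * ix - 2 * iy) / nx))

# Sampling operator T: keep roughly half of the traces (random gaps)
mask = rng.random((nx, ny)) < 0.5
observed = np.where(mask, data, 0.0)

# POCS iteration: threshold the 2D spectrum, then reinsert the recorded traces
model = observed.copy()
n_iter = 50
for it in range(n_iter):
    spec = np.fft.fft2(model)
    thresh = np.abs(spec).max() * (1.0 - (it + 1) / n_iter)  # decaying threshold
    spec[np.abs(spec) < thresh] = 0.0
    model = np.real(np.fft.ifft2(spec))
    model[mask] = data[mask]          # honor the original samples

err = np.linalg.norm(model - data) / np.linalg.norm(data)
err_zero = np.linalg.norm(observed - data) / np.linalg.norm(data)
print(err, err_zero)
```

Because the recorded traces are reinserted at every pass, the missing traces converge toward the plane-wave events, leaving a much smaller residual than simple zero-filling.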

In the first case, I interpolate on a shot-by-shot basis (three dimensions), and in the second case in the inline-crossline-offset-azimuth-frequency domain (five dimensions). It is evident in Figure 1 that the algorithm can reproduce all data complexity when using five interpolation dimensions, but it is unable to repeat this using only three dimensions. Because the algorithm is exactly the same, this example shows the importance of the additional information supplied by the extra dimensions for Fourier interpolation.

The actual location of the newly created traces is an important issue for interpolation. I can distinguish six cases, of which only four are used for land data wide-azimuth surveys:

1) Preserving original data (interpolation)

   a) Decrease shot and receiver interval (decrease bin size).
   b) Decrease shot and receiver line interval (increase offset and azimuth sampling).
   c) Make shot and receiver line interval equal to shot and receiver interval (fully sampled). This is a particular case of 1b.

2) Replacing data totally with predicted traces (regularization)

   a) Target geometry regular on shot and receiver locations (surface consistency).
   b) Target geometry regular on CMP, offset, and azimuth (subsurface consistency).
   c) Target geometry regular on surface and in subsurface.

Possibilities 1a, 1b, 2a, and 2b each have important applications (see Table 1). Adapting to the acquired data by adding new shots and receivers following the original design (types 1a and 1b) has the advantage that original data can be preserved and interpolation is well constrained. Preserving the original data is generally safer than replacing all of the acquisition with interpolated data, particularly for complex noisy data from structured areas in the presence of topography. This approach works well for Kirchhoff time and depth migration. By adding new shots and receivers, the subsurface sampling can be improved according to well-understood acquisition concepts (e.g., Cordsen et al., 2000).

Type 2a, surface-consistent interpolation with perfectly regular shot and receiver lines, is useful for wave-equation migration, interpolation of very irregular surveys, and time-lapse applications. Type 2b, subsurface-consistent uniform coverage of offsets and azimuths for each CMP, is desirable for migration in general. However, this design implies a large number of shots and receivers with nonuniform shot and receiver fold. This is a problem for ray-tracing methods and any kind of shot or receiver processing. Therefore, its application seems to be limited to time migration and, because of the large size of the resulting data sets, to small surveys. It can probably also be applied to common-offset Gaussian beam migration.

Finally, types 1c and 2c, complete coverage of shots and receivers, are desirable for all seismic processing, but the resulting large size of the data makes them impractical.

Any of these interpolation types can be used for infilling acquisition gaps. A modification of type 2b from polar to Cartesian coordinates can be used to produce common-offset vector gathers. Types

Table 1. Types of land data interpolation and main benefits. The size of the circle is proportional to the real use in production (based on use from 2005 to 2008). The font style in the bottom row reflects a positive (bold) or negative (italic) remark.

Legend: ● heavy use; ◐ occasional use; ○ possible, but never used.

Types (columns): 1a, increasing inline-crossline sampling; 1b, increasing offset-azimuth sampling; 2a, regularizing shot/receiver positions; 2b, regularizing CMP/offset/azimuth positions; 1c and 2c, full sampling.

Main applications (rows): interpolation of large gaps; time Kirchhoff migration; depth Kirchhoff migration; wave-equation migration; merging surveys with different bin size and/or design (2D and 3D, parallel and orthogonal, etc.); increasing resolution for steep dips (relaxing antialias filters during migration); improving CIGs (AVO, AVAz, velocity analysis); 4D applications (matching time-lapse surveys).

Main use: 1a, merging; 1b, time/depth Kirchhoff migration; 2a, wave-equation migration; 2b, time migration of small surveys; 1c and 2c, not used because of high cost.

Main advantage/disadvantage: 1a, reliable but sometimes produces time-slice artifacts; 1b, reliable; 2a, good sampling but difficult for topography and less reliable for processing; 2b, great sampling for any migration but expensive; 1c and 2c, best possible shot-receiver sampling but expensive for all processing.

[The per-application usage symbols of the original table could not be recovered from this copy.]
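The polar-to-Cartesian modification of type 2b mentioned above amounts to binning the offset vector instead of absolute offset and azimuth. A minimal sketch of the two binning conventions, with hypothetical coordinates and bin sizes (the 100-m, 30°, and 200-m values are only examples):

```python
import numpy as np

# Hypothetical shot/receiver coordinates (m) for a handful of traces
sx = np.array([0.0, 100.0, 250.0, 400.0])
sy = np.array([0.0, 50.0, 300.0, 100.0])
rx = np.array([800.0, 900.0, 150.0, 1200.0])
ry = np.array([600.0, 50.0, 900.0, 500.0])

# Polar sampling: absolute offset and azimuth (as in type 2b)
hx, hy = rx - sx, ry - sy
offset = np.hypot(hx, hy)
azimuth = np.degrees(np.arctan2(hy, hx)) % 360.0

# Polar bins: 100-m offset bins and 30-degree azimuth sectors
off_bin = (offset // 100.0).astype(int)
azi_bin = (azimuth // 30.0).astype(int)

# Cartesian alternative: bin the offset-vector components directly,
# which produces common-offset-vector (COV) gathers
covx_bin = np.floor(hx / 200.0).astype(int)
covy_bin = np.floor(hy / 200.0).astype(int)

print(off_bin, azi_bin, covx_bin, covy_bin)
```

The Cartesian convention keeps the sign of each offset component, so a trace and its reciprocal land in different COV bins, whereas the polar convention groups them by absolute offset and azimuth sector.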



1c, 2b, and 2c are fully implemented and have been used in internal tests but have not yet been used in production projects. Notice that one case missing in the table is to replace all data with predictions onto a given geometry. This is the situation in 4D time lapse, where it is usual to interpolate the monitor locations to match the baseline.

For typical wide-azimuth land data surveys in a complex environment, the safest choice seems to be surface-consistent interpolation (1a and 1b). This allows one to preserve the original data untouched and to apply careful quality control (QC) to the new traces. Some QC parameters can be added to the headers, making it possible to discard new traces with high risk or low confidence after the interpolation. There are several possible quality parameters. Two QC parameters that complement each other and are often useful are (1) the distance along the four spatial dimensions between the new and the original traces and (2) the ratio of original to interpolated traces.

For the 5D configuration discussed in this paper, a meaningful calculation of the first parameter requires a weighted average of the distance along inline, crossline, offset, and azimuth. The weights depend on the structural complexity, residual moveout, and anisotropy. The second parameter refers to the ratio of the number of original traces to the number of sampling points on the 4D spatial grid used in the numerical algorithm. This ratio is usually much smaller than the ratio of input to output traces for a given area.

INTERPOLATION ENGINE

The second major component of the interpolation problem is the choice of a mathematical algorithm to predict new information given a set of recorded traces. One method with the flexibility to adapt to the requirements of multidimensional global interpolation is minimum weighted norm interpolation (MWNI) (Liu and Sacchi, 2004), which extends the work of Sacchi and Ulrych (1996) to multiple dimensions. MWNI is a constrained inversion algorithm. The actual data d are the result of a sampling matrix T acting on an unknown fully sampled data set m (m and d are containers for multidimensional data, and T is a mapping between these two containers).

The unknown (interpolated) data are constrained to have the same multidimensional spectrum as the original data. Enforcing this constraint requires a multidimensional Fourier transform, which is the most expensive part of the algorithm. To solve for the unknown data, a cost function is defined for every frequency slice and is minimized using standard optimization techniques. The cost function J is defined, frequency by frequency, as

J = \|d - Tm\|_2^2 + \lambda \|m\|_W,   (1)

where \|\cdot\|_2 indicates an \ell_2-norm and \|\cdot\|_W indicates an \ell_2-weighted norm calculated as

\|m\|_W = m^H F_n^{-1} |p_k|^{-2} F_n m.   (2)

In equation 2, F_n is the multidimensional Fourier transform, with n indicating the number of spatial dimensions of the data, m^H the conjugate transpose of the model m, and p_k the multidimensional spectrum of the unknown data.

The multidimensional vector p_k contains weight factors that give the model freedom to be large where it needs to be large. They can be obtained by bootstrapping from the previous temporal frequency in a manner similar to that done for Radon transforms (Herrmann et al., 2000). These weights are defined in the ω-k domain, where ω is the temporal frequency and k is the wavenumber vector along each spatial dimension. They link the frequency slices, making the frequency axis behave as the fifth interpolation dimension, although frequencies are not really interpolated.

The model m is in the ω-x domain (x is a vector representing all spatial directions). If k_max is the maximum wavenumber on each dimension for the maximum dip of the data, then the case of p_k = 1 for k ≤ k_max and p_k = 0 for k > k_max corresponds to sinc interpolation. The variable λ in equation 1 is a hyperparameter that controls the balance between fitting the data and enforcing sparseness on the spectrum. This parameter is eliminated by changing the cost function in equation 1 to the standard form and using the residuals to define the number of iterations (Trad et al., 2003). The actual geophysical meaning of the spatial dimensions is irrelevant to the algorithm. However, for the method to work well, at least one of these dimensions must have a sparse or band-limited spectrum.

The multidimensional spectrum can be calculated using discrete Fourier transforms (DFTs), which exactly honor space locations, or fast Fourier transforms (FFTs), which require binning the data into a grid with exactly one trace per position. In practice, I define m to be a regular supersampled 4D grid that contains many more traces than the target geometry. This allows us to use FFTs but forces us to bin the data during the interpolation.

The bin intervals along the spatial dimensions are kept small to avoid smearing and data distortion. The binning errors along the inline/crossline directions can be made negligible by subdividing CMP bins into subbins if necessary, but the CMP grid bin size usually is adequate. The binning errors along the offset and azimuth dimensions are kept small by applying NMO and static corrections before interpolation. However, data with significant residual NMO and strong anisotropy require small bin intervals along offset and azimuth. Large bins reduce computation time and improve numerical stability but reduce precision. This trade-off between precision and numerical stability requires careful parameterization and implementation. A good rule of thumb for land surveys is to use, as the offset bin interval, a fraction of the receiver group interval (e.g., 1/2 or 1/4), decreasing from near to far offsets and with geologic complexity. Azimuth intervals are usually chosen in the range between 20° and 45°, decreasing with offset and anisotropy.

DFTs can also be used for the spectrum, with the advantage that they do not require binning. The problem with DFTs is computational cost. For N variables, a 1D FFT requires computation time proportional to N log N, but a DFT requires computation time proportional to N^2. This makes the DFT cost in two spatial dimensions proportional to N^4 and in four spatial dimensions proportional to N^8. Although numerical tricks such as nonuniform FFTs (Duijndam and Schonewille, 1999) can improve these numbers dramatically, a 4D DFT algorithm is quite expensive in terms of computer time and has been unfeasible for production demands until now. Very recently, this has become possible (Gordon Poole, personal communication, 2009), although it demands large computer resources.

There are many differences between working with FFTs and DFTs. On the negative side, working with FFTs has the potential to distort data because of the binning. However, binning spatial coordinates is often applied in seismic processing, even by methods that can use exact spatial coordinates. For example, when working on common-offset volumes, a binning along offset is applied. On the positive side, when working with FFTs, the results improve because the increased speed of the iterations permits us to obtain a solution close to the one that would have been obtained after full convergence.

Furthermore, the nature of the system of equations solved at every frequency changes, depending on whether we use regular sampling, irregular sampling, or regular sampling as a result of binning. To understand why, let us incorporate the sparseness constraint into the operator by transforming equation 1 from the general form to the standard form (Hansen, 1998). By defining a new model u_ωk,

u_{\omega k} = p_k^{-1} F_n m_{\omega x},   (3)

which is m after transforming to the ω-k domain and inverse weighting with p_k, equation 1 becomes

J = \|d - T F_n^{-1} p_k u_{\omega k}\|_2^2 + \lambda \|u_{\omega k}\|_2^2.   (4)

The weighted norm \|\cdot\|_W now becomes an \ell_2-norm, and the operator absorbs the spectral weights. This allows us to include the sparseness constraint into the operator, i.e., to modify the basis functions of the transformation to include the sparseness constraint (Trad et al., 2003). The mapping between d and the new model u is now performed by the operator

L = T F_n^{-1} p_k.   (5)

Solving this equation for u_ωk requires solving the following system of equations:

(p_k^H F_n T^H T F_n^{-1} p_k + \lambda I) u_{\omega k} = p_k^H F_n T^H d,   (6)

where I is the identity matrix and the superindex H means conjugate transpose.

Because of the large size of the system of equations in our problem, on the order of 10^5 equations, the final full solution u_ωk is never achieved. Instead, an approximate solution is obtained by using an iterative algorithm and running only a few iterations. Components of u_ωk that have a weak mapping through operator L (such as low-amplitude spectral components) can be resolved with this limited number of iterations only if the system of equation 6 has good convergence. This convergence improves as the operator L = T F_n^{-1} p_k becomes closer to orthogonal, i.e., as

L^H L \rightarrow I.   (7)

The operator p_k is usually a diagonal operator; therefore, convergence depends mainly on the two operators F_n^{-1} and T, which in turn depend on the spatial axes and the missing samples, respectively.

The operator F_n^{-1} maps the 4D spatial wavenumber k to the interpolated 4D spatial axis x. If x and k are perfectly regular, then F_n F_n^{-1} = I. If, in addition, there are no missing traces, the left side of equation 6 is diagonal and the system converges in one iteration. The wavenumber axes k (one axis per spatial dimension) can always be made regular, but the axes x depend on the input data. Binning the input data makes x regular.

The sampling operator T that connects interpolated data to acquired data depends on the missing traces. It is orthogonal when there are no missing traces. Binning the data without decreasing the sampling interval does not affect T but can introduce data distortion. Making bin intervals small to avoid data distortion introduces nonorthogonality into the system of equations 6, making convergence more difficult.

As we see from this analysis, there is a trade-off between nonorthogonality on F_n^{-1} and T. Moreover, there is a trade-off between binning (and data distortion) on one side and convergence of the system of equations on the other. Precisely honoring spatial coordinates slows down convergence because of the nonorthogonality of F_n^{-1}. Alternatively, F_n^{-1} can be made regular by fine binning of x, practically without loss of precision, but T becomes nonorthogonal. Increasing the bin interval does not affect F_n^{-1} and decreases nonorthogonality on T but introduces data distortion.

Figure 2 illustrates the effects of sampling on the matrix distribution for the left side of the system of equations 6. Let us consider two different cases: coarse regular sampling (left column) and irregular sampling (right column). The matrix distribution for these two cases is shown when applying three different methods: coarse binning, true locations, and fine binning.

The first row, Figure 2a and b, shows the structure of the system of equations when coarse binning is used (which allows the use of FFTs). The system of equations is quite sparse, with most elements along the main diagonal; therefore, the optimization converges very quickly. In the decimation case on the left (Figure 2a), the secondary peaks produced by operator aliasing are as strong as the nonaliased component. In practice, they can be taken care of by filters and by bootstrapping weights from low to high frequency.

The second row, Figure 2c and d, shows the same for irregular sampling (true spatial locations). The system of equations is fully populated because the irregularly sampled Fourier transform introduces cross-terms between the model elements (the basis functions are nonorthogonal), and convergence is slower (Figure 2c and d). Operator aliasing, on the other hand, becomes less strong (Figure 2c). The third row, Figure 2e and f, shows the same for fine binning. The system of equations becomes almost fully populated again, but in this case not because of F as before but because of T. The multidimensional case is more difficult to visualize, but the same ideas apply. In that case, the nonaliased directions help to constrain the solution and attenuate the effect of aliasing.

In my experience, if the bin size is not made too small, the large computational advantage of FFT algorithms over DFTs is more beneficial than the consequent increase in nonorthogonality on T. This is possible when working along spatial dimensions where the data look simple. In this case, the method can preserve localized amplitude variations better than inversion using irregularly sampled spatial locations because it is possible to iterate more and to obtain a so-

Figure 2. Matrix distributions for the left side of the system of equations 6 [the matrix A = (F T^H T F^{-1})] for irregularly sampled data in two different cases, coarse sampling and gaps (columns), using coarse binning, true locations, and fine binning (rows). (a) Coarse binning on decimated data. (b) Coarse binning on data with gaps. (c) True locations on decimated data. (d) True locations on data with gaps. (e) Fine binning on decimated data. (f) Fine binning on data with gaps. The color represents amplitudes in absolute numbers, with dark blue representing zeros.
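The structure that Figure 2 describes can be reproduced in a toy 1D setting. The sketch below builds the left-side matrix of equation 6 explicitly (with p_k = 1 and λ = 0, and a size and gap pattern invented for the illustration) and shows that full sampling gives a diagonal system while gaps introduce the off-diagonal cross-terms that slow convergence:

```python
import numpy as np

n = 16
idx = np.arange(n)
# Unitary DFT matrix, so F T^H T F^{-1} is Hermitian and F^{-1} = F^H
F = np.exp(-2j * np.pi * np.outer(idx, idx) / n) / np.sqrt(n)

def system_matrix(mask):
    # Left side of equation 6 with p_k = 1 and lambda = 0: A = F T^H T F^{-1}
    return F @ np.diag(mask.astype(float)) @ F.conj().T

A_full = system_matrix(np.ones(n))                 # no missing traces: A = I
gaps = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1])  # 0 = dead trace
A_gaps = system_matrix(gaps)

off_full = np.abs(A_full - np.diag(np.diag(A_full))).max()
off_gaps = np.abs(A_gaps - np.diag(np.diag(A_gaps))).max()
diag_gaps = np.real(np.diag(A_gaps))
print(off_full, off_gaps, diag_gaps[0])
```

With no gaps, the off-diagonal part vanishes and the system would converge in one iteration, as stated above; with gaps, every diagonal entry equals the live-trace fraction and the missing-trace energy spreads off the diagonal.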



lution closer to the one obtained with full convergence. This high fidelity for localized events makes the algorithm very useful for land data, where amplitude changes very quickly. At the same time, it makes the method less useful in removing noise.

Pseudorandom noise (noise that looks random but has a coherent source) can be propagated into the new traces, becoming locally coherent and therefore very difficult to remove. Although this is a disadvantage, it is important to realize that interpolation and noise attenuation have different and sometimes conflicting goals. Noise attenuation should predict only signal and should filter out noncoherent events. Interpolation should predict all of the data, even if the events are very weak or badly sampled. Undersampled events can look noncoherent; therefore, their preservation depends on the algorithm not being too selective in terms of coherence. Although simultaneous interpolation and noise attenuation is a very desirable goal, better chances of success are achieved by applying noise attenuation and interpolation iteratively in sequence rather than in a single pass.

On the other hand, there are many advantages in using exact positions to eliminate aliasing and the difficulties of binning for complex structure. The first aspect is balanced by working in the full 5D space of the data. The second can be addressed in most cases by using small binning intervals.

Some problems appear often, however, when binning long offsets in structured data because of rapid amplitude variations caused by anisotropy and residual moveout. This is a problem for land and ocean-bottom (OBC) data, where far offsets usually have poor sampling because of the rectangular shape of shot patches. Also, this issue makes binning more difficult for marine data, where residual moveout can be very significant at long offsets. A possible solution is to use larger bins along inline and crossline for far offsets, taking advantage of the fact that the Fresnel zone increases in size with offset. A practical combination would be a hybrid method where binning is applied for near and middle offsets and exact locations are applied on long offsets.

A complete discussion of the topic is beyond the scope of this paper. The comments above are intended to point out the effect of sampling in the system of equation 6 and the impact this has in predicting localized amplitude variations in the data.

APPLICATIONS AND DATA EXAMPLES

Applications for land data interpolation usually involve increasing inline and crossline sampling (decreasing bin size) and/or increasing offset and azimuth sampling (increasing fold). This classification is too broad, however, because there are many possible ways to increase the sampling, just as there are many possible geometry designs. Table 1 shows several applications classified according to the six types defined earlier. All of these cases have been used in practice, but only a few of them are often required in production projects. In this section, we review examples for the most common cases:

• increasing offset and azimuth sampling by decreasing the shot and receiver line interval to improve migration (type 1b)
• increasing offset and azimuth sampling for better velocity, AVO, and AVAz estimates after migration (type 1b)
• increasing inline and crossline sampling to improve imaging of steep reflectors by relaxing antialias filters in migration algorithms (type 1a)
• increasing inline and crossline sampling for changing the natural bin size, as in merging surveys acquired with different geometries (type 1a)
• infilling missing shots and receivers in acquisition gaps (type 1b in this example)
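The hybrid near/far-offset treatment described above can be sketched as a simple coordinate rule: snap near- and mid-offset traces to a fine bin grid and keep exact coordinates beyond some far-offset threshold. All numbers here (grid spacings, the 2000-m threshold) are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

def hybrid_coordinates(x, y, offset, dx=10.0, dy=10.0, far_offset=2000.0):
    # Near/mid offsets: snap to a fine regular grid (regular, fast operator).
    # Far offsets: keep exact coordinates, where residual moveout and
    # anisotropy make binned positions unreliable.
    near = offset < far_offset
    xb = np.where(near, np.round(x / dx) * dx, x)
    yb = np.where(near, np.round(y / dy) * dy, y)
    return xb, yb

x = np.array([103.7, 207.3])
y = np.array([48.2, 96.6])
off = np.array([500.0, 3000.0])
xb, yb = hybrid_coordinates(x, y, off)   # first trace snapped, second exact
```

A variant consistent with the Fresnel-zone argument would make dx and dy grow with offset instead of switching abruptly at a single threshold.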

Increasing offset and azimuth sampling for imaging

Figure 3. Foothills survey: orthogonal geometry. Shots are located along vertical lines; receivers are located along horizontal lines. Color represents fold. (a) Before interpolation. (b) After interpolation (shot and receiver line spacing decreased by a factor of two).

The first example shows the benefits of interpolation for anisotropic 3D depth migration in a Canadian Foothills area with significant structure, topography, and noise. These surveys often can benefit from interpolation because they usually have shot and receiver lines acquired quite far apart because of the high acquisition costs in topographic areas. Foothills acquisitions in structurally complex areas, however, are difficult to interpolate because small imperfections in static corrections affect coherence in the space domain. Also, these data often are severely affected by ground-roll noise, which makes interpolation difficult, particularly for shallow structures.

Figure 3a shows an orthogonal geometry (vertical lines are shots, horizontal lines are receivers). CMPs are overlain on the shot/receiver locations, with their color indicating the fold. Figure 3b shows the target geometry, which contains all of the same shots and receivers as in Figure 3a, along with the new interpolated shot and receiver lines. Notice that these new lines follow the geometry of the original lines, permitting us to preserve all original data because the original shots and receivers do not need to be moved.

By halving the shot and receiver line intervals, the CMP fold increases by a factor of four, giving a better sampling of offsets and azimuths. This can be seen in Figure 4a and b, which shows the offset/azimuth distribution for a group of CMPs before and after interpolation. The increased sampling benefits migration because imaging algorithms rely on interference to form the correct image and therefore require a proper sampling (at least two samples per cycle) to work correctly.
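As a rough guide, the two-samples-per-cycle requirement translates into a maximum unaliased frequency for a given bin size and reflector dip. A back-of-the-envelope check, using the standard zero-offset dip relation (the velocity and dip below are illustrative values, not from this survey):

```python
import math

def max_unaliased_freq(v, dx, dip_deg):
    # A reflector dipping at angle theta has a time dip of 2*sin(theta)/v on
    # a zero-offset section. Requiring two samples per cycle,
    # f * 2*sin(theta)/v <= 1/(2*dx), gives f_max = v / (4*dx*sin(theta)).
    return v / (4.0 * dx * math.sin(math.radians(dip_deg)))

# Halving the bin size doubles the frequency that can image a given dip:
f50 = max_unaliased_freq(v=3000.0, dx=50.0, dip_deg=45.0)
f25 = max_unaliased_freq(v=3000.0, dx=25.0, dip_deg=45.0)
```

This is why migration antialias filters must discard high frequencies on steep dips when the bin size is coarse, and why interpolating to a smaller bin lets the migration keep them.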

These benefits can be observed in the final stacked image, but they are more obvious in common-image gathers (CIGs). Figure 5 compares the CIGs with and without interpolation. The better continuity of the events will certainly bring improvements to the results of gather processing, especially for AVAz and AVO analysis (which are very sensitive to acquisition footprint) and automated processes such as tomography based on residual curvature analysis. Figure 6 shows the image stack from 0 to 1000-m offsets. The continuity of events has been improved over nearly all of the section.
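The factor-of-four fold increase quoted for this example can be checked with a toy orthogonal geometry that counts shot-receiver midpoints falling inside one CMP bin. The sketch assumes every shot records into every receiver (an idealized, unlimited patch), and all dimensions are illustrative; the finite survey edges make the measured ratio fall slightly short of the asymptotic value of 4:

```python
import numpy as np

def geometry(line_int, extent=2000.0, station_int=50.0):
    # Orthogonal layout: shot lines run north-south, receiver lines east-west.
    sx, sy = np.meshgrid(np.arange(0.0, extent + 1, line_int),
                         np.arange(0.0, extent + 1, station_int))
    rx, ry = np.meshgrid(np.arange(0.0, extent + 1, station_int),
                         np.arange(0.0, extent + 1, line_int))
    return (np.column_stack([sx.ravel(), sy.ravel()]),
            np.column_stack([rx.ravel(), ry.ravel()]))

def fold_in_bin(shots, recs, center=(1000.0, 1000.0), bin_size=25.0):
    # Midpoint of every shot-receiver pair, counted inside one CMP bin.
    mx = (shots[:, None, 0] + recs[None, :, 0]) / 2.0
    my = (shots[:, None, 1] + recs[None, :, 1]) / 2.0
    hit = (np.abs(mx - center[0]) <= bin_size / 2) & \
          (np.abs(my - center[1]) <= bin_size / 2)
    return int(np.count_nonzero(hit))

fold_coarse = fold_in_bin(*geometry(line_int=400.0))
fold_fine = fold_in_bin(*geometry(line_int=200.0))  # both line intervals halved
ratio = fold_fine / fold_coarse                     # -> 4 for large surveys
```

Doubling the shot lines doubles the shots and doubling the receiver lines doubles the receivers, so the number of shot-receiver pairs contributing midpoints to a fixed bin quadruples in the interior of the survey.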

Figure 4. Foothills survey: offset/azimuth distribution in an area of the (a) original and (b) interpolated surveys.

Increasing offset-azimuth sampling for AVO

Prestack migration of the seismic data prior to performing AVO inversion has been advocated for more than 10 years (Mosher et al., 1996). However, the poor sampling typical of land seismic acquisition makes practical implementation of this concept quite difficult. Downton et al. (2008) demonstrate that these problems can be addressed by performing interpolation prior to prestack migration, resulting in better AVO estimates. These workers performed a series of comparisons of processing flows for AVO, including the 5D interpolation method presented in this paper. By calculating the correlation between AVO estimates and well-log information for the Viking Formation in Alberta, they concluded that the combination of interpolation and prestack time migration provided the best AVO estimates, achieving a correlation increase from 0.39 for migration without interpolation to 0.57 for migration after interpolation. This improvement can be taken as evidence of amplitude preservation during interpolation.

Figure 7 shows CIGs from prestack time migration with and without interpolation before migration. The interpolation applied in this case was type 1b, decreasing the shot and receiver line intervals by half and increasing the fold by four times. Hunt et al. (2008) give a complete description of the experiment.

Figure 5. Foothills survey: migrated gathers (3D anisotropic depth migration) from (a) original data and (b) with interpolation before migration.

Figure 6. Foothills survey: migrated stack section, 0–1000 m, (a) without interpolation and (b) with interpolation before migration.

Increasing inline-crossline sampling for steep dips

This example, also described in Gray et al. (2006), shows the benefits of reducing the bin size (increasing inline-crossline sampling) before migration rather than afterward. The land data set in this example was acquired over a structured area in Thailand using an orthogonal layout. The objective of the interpolation was to obtain more information on steep dips by including moderate- to high-frequency energy that the migration antialias filter removed from the original, more coarsely sampled data. For this purpose, the shot spacing along lines was halved to reduce the bin size from 12.5 × 50 m to 12.5 × 25 m. Figure 8 shows the shot locations after interpolation. The red dots indicate the locations of the original shots, and the blue dots indicate the locations of the new shots.

As a comparison, a prestack time migration stack was produced using the original acquired data; then the stack was interpolated, as shown in Figure 9a. In Figure 9b, the prestack data were interpolated before migration using 5D interpolation. The prestack interpolation produced a better-sampled input data set for migration than the noninterpolated data set. This allowed the migration to operate with greater fidelity on the steep-dip events, in this case by applying fewer antialiasing constraints. The prestack interpolation did not add information to the data, but it did allow the migration to make better use of the information that was already in the data, allowing it to produce an image with greater structural detail.

Figure 7. Foothills II: CIGs from prestack time migration (a) without interpolation and (b) with interpolation before migration.

Figure 8. Thailand: shot locations after interpolation for an orthogonal geometry. Red dots are original shots; blue dots are new (interpolated) shots. The two large gaps are 1000–1500 m in diameter (before interpolation).

Figure 9. Thailand: prestack time migration stacks. (a) Interpolation performed after stacking the migrated images. (b) Interpolation performed before migration. The improved imaging of the steep-dip event in the center of the section is evident in (b). (Data courtesy of PTT Exploration and Production.)

Increasing inline-crossline sampling for survey merging

Often, surveys acquired with different natural bin sizes need to be merged into a common grid. If a survey is gridded into a bin size


smaller than it was designed for, the CMP coverage becomes very poor, affecting further interpretation even after migration. A solution is to use prestack interpolation to reduce the natural bin size to match the merge grid. This can be achieved by increasing sampling in the inline-crossline domain or, alternatively, by using the surface-consistent approach to decrease the distance between shots and receivers along lines.

Trad et al. (2008) show a case history from the Surmont bitumen project in northern Alberta. In this area, nine surveys had to be merged into a common 10 × 10-m CMP grid. Of the nine surveys in the project, one was acquired with a natural bin size of 15 × 30 m, giving poor coverage when binned in the 10 × 10-m CMP grid used for the merge. Furthermore, this survey was the only one in the project with a parallel design (the other surveys were acquired with an orthogonal geometry). By adding new shots and receivers using the method presented in this paper, the coarser survey was transformed from a parallel geometry with 10 × 30-m bin size to an orthogonal survey of 10 × 10-m bin size and twice the original fold. The original data were fully preserved, and the numbers of shots and receivers were each increased by a factor of three, so the final size was nine times the original size. The interpolation allowed this survey to merge with the other surveys in the Surmont area, avoiding the need for reshooting.

Figure 10a and b shows one CMP before and after interpolation. Figure 11a shows a time slice from the stack of the original data in the 10 × 10-m grid. Figure 11b shows the same time slice from the stack of the interpolated data.

Figure 10. Surmont: comparison of a CMP (a) before and (b) after interpolation. Empty traces have been added to the CMP before interpolation to match the traces obtained after interpolation. The CMP in (b) was created by using information from many other CMPs (not shown in the figure).

Figure 11. Surmont: time slice comparison from (a) the stack of the original data and (b) the stack of the interpolated data.

Infilling large gaps

It is common for 3D acquisitions to have large gaps with missing shots or receivers because of inaccessibility in some areas (lakes, hills, population, etc.). Although it usually is impossible to infill large gaps completely, decreasing their size has a large impact on migration results. The following example shows the infilling of a large gap produced by an open-pit coal mine in an area with structured geology. This obstacle prevented shots and receivers from being deployed at this location during the 3D acquisition. New shots and receivers were added on the border of the gap. Time migration of the


original seismic data produced the image in Figure 12a. After interpolation, time migration produced the image in Figure 12b. The interpolated image shows an anticline underneath the open-pit mine that was confirmed by 2D seismic and well logs acquired before the existence of the mine.

Figure 12. Coal mine: comparison of time migration images (a) without and (b) with interpolation for a 3D survey acquired on top of a large gap.

DISCUSSION

Interpolation fills missing traces by incorporating information in the data using a priori assumptions. This provides, for standard processes, information that is already in the data but is not accessible without these constraints. On the negative side, results from interpolated data sometimes can be worse than results without interpolation. This can happen because some processes, such as stacking, might work better with zero traces than with wrong traces. In addition, interpolation can add spurious information in a coherent manner, a problem that stacking is unable to fix. Interpolation must be applied very carefully to ensure this does not happen.

Several factors play against interpolation because it is by nature an ill-conditioned problem. Not only do unrecorded samples have to be estimated, but they also must be located in a manner consistent with the rest of the data, for example, at the proper elevations with proper static information. Usually, geometries that can benefit from interpolation do not lend themselves to good noise attenuation. Accurate interpolation becomes more difficult as the structure becomes more complex, as gaps become larger or sampling poorer, and as the signal-to-noise ratio gets lower. Therefore, careful QC is necessary to select interpolated data according to some quality criteria. A useful criterion is the minimum distance between a trace and its original neighbors, but many other QC parameters can be estimated and saved into headers. After interpolation, these parameters can be used together to decide whether a new trace is acceptable.

CONCLUSIONS

Wide-azimuth geometries often are undersampled along one or more dimensions, and interpolation is a very useful tool to precondition the data for prestack processes such as migration, AVO, and AVAz. I have discussed a 5D interpolation technique to create new shots and receivers for 3D land seismic data that honors amplitude variations along inline, crossline, offset, and azimuth. Although not intended to replace acquiring adequate data for processing, this tool is useful for overcoming acquisition constraints and for obtaining the benefits of tighter acquisition sampling patterns, higher fold, and/or smaller bin size.

By working in five dimensions, this interpolation method can increase sampling in surveys that are problematic for lower-dimensional interpolators. The technique might be applied to overcome acquisition constraints at a fraction of field acquisition costs, merge data sets with different bin sizes, and eliminate differences caused by acquisition, avoiding the need to reshoot surveys. Benefits include more reliable prestack processes: velocity analysis, prestack migration, AVO and AVAz analyses, reduction of migration artifacts, and improved imaging of steep dips.

ACKNOWLEDGMENTS

I would like to thank CGGVeritas for permission to publish this paper and CGGVeritas Library Canada, PTT Exploration and Production, PetroCanada, ConocoPhillips Canada Ltd., and Total E&P Canada Ltd. for data examples. Special thanks are owed to several colleagues who helped produce the interpolation examples and who provided useful ideas and discussions on interpolation over the years. In particular, my thanks to Bin Liu and Mauricio Sacchi, whose work constitutes the cornerstone of this method.

REFERENCES

Abma, R., and N. Kabir, 2006, 3D interpolation of irregular data with POCS algorithm: Geophysics, 71, no. 6, E91–E97.
Abma, R., C. Kelley, and J. Kaldy, 2007, Sources and treatments of migration-introduced artifacts and noise: 77th Annual International Meeting, SEG, Expanded Abstracts, 2349–2353.
Cordsen, A., M. Galbraith, and J. Peirce, 2000, Planning land 3-D seismic surveys: SEG.
Downton, J., B. Durrani, L. Hunt, S. Hadley, and M. Hadley, 2008, 5D interpolation, PSTM and AVO inversion for land seismic data: 70th Annual Conference and Technical Exhibition, EAGE, Extended Abstracts, G029.
Duijndam, A. J. W., and M. A. Schonewille, 1999, Nonuniform fast Fourier transform: Geophysics, 64, 539–551.
Gray, S., D. Trad, B. Biondi, and L. Lines, 2006, Towards wave-equation imaging and velocity estimation: CSEG Recorder, 31, 47–53.
Hansen, P., 1998, Rank-deficient and discrete ill-posed problems: Numerical aspects of linear inversion: Society for Industrial and Applied Mathematics.
Herrmann, P., T. Mojesky, M. Magesan, and P. Hugonnet, 2000, De-aliased, high-resolution Radon transforms: 70th Annual International Meeting, SEG, Expanded Abstracts, 1953–1957.
Hunt, L., J. Downton, S. Reynolds, S. Hadley, M. Hadley, D. Trad, and B. Durrani, 2008, Interpolation, PSTM, & AVO for Viking and Nisku targets in West Central Alberta: CSEG Recorder, 33, 7–19.
Liu, B., and M. D. Sacchi, 2004, Minimum weighted norm interpolation of seismic records: Geophysics, 69, 1560–1568.
Mosher, C. C., T. H. Keho, A. B. Weglein, and D. J. Foster, 1996, The impact of migration on AVO: Geophysics, 61, 1603–1615.
Poole, G., and P. Herrmann, 2007, Multidimensional data regularization for modern acquisition geometries: 77th Annual International Meeting, SEG, Expanded Abstracts, 2585–2589.
Sacchi, M. D., and T. J. Ulrych, 1996, Estimation of the discrete Fourier transform — A linear inversion approach: Geophysics, 61, 1128–1136.
Schonewille, M. A., R. Romijn, A. J. W. Duijndam, and L. Ongkiehong, 2003, A general reconstruction scheme for dominant azimuth 3D seismic data: Geophysics, 68, 2092–2105.
Trad, D., J. Deere, and S. Cheadle, 2005, Understanding land data interpolation: 75th Annual International Meeting, SEG, Expanded Abstracts, 2158–2161.
Trad, D., M. Hall, and M. Cotra, 2008, Merging surveys with multidimensional interpolation: CSPG CSEG CWLS Conference, Expanded Abstracts, 172–176.
Trad, D., T. Ulrych, and M. Sacchi, 2003, Latest views of the sparse Radon transform: Geophysics, 68, 386–399.
Xu, S., Y. Zhang, D. Pham, and G. Lambare, 2005, Antileakage Fourier transform for seismic data regularization: Geophysics, 70, no. 4, V87–V95.
Zwartjes, P. M., and M. D. Sacchi, 2007, Fourier reconstruction of nonuniformly sampled, aliased seismic data: Geophysics, 72, no. 1, V21–V32.
