3D Seismic Imaging

By Biondo L. Biondi

Abstract

In the past decade, 3D reflection seismology has replaced 2D seismology almost entirely in the seismic industry. Recording 3D surveys has become the norm instead of the exception. The application of 2D seismology is limited mostly to reconnaissance surveys or to locations where recording 3D data is still prohibitively expensive, such as in rugged mountains and remote forests. However, academic research and teaching have struggled to keep up with the 3D revolution. As a consequence of this lag, no books are available that introduce the theory of seismic imaging from the 3D perspective. This book aims to fill that gap.

Seismic processing of 3D data is inherently different from 2D processing. The differences begin with data acquisition: 3D data geometries are considerably more irregular than 2D geometries. Furthermore, 3D acquisition geometries are never complete, because sources and receivers are never laid out in dense areal arrays covering the surface above the target. These fundamental differences, along with the increased dimensionality of the problem, strongly influence the methods applied to process, visualize, and interpret the final images. Most 3D imaging methods and algorithms cannot be derived from their 2D equivalents by merely adding a couple of dimensions to the 2D equations. This book introduces seismic imaging from the 3D perspective, starting from a 3D earth model. However, because the reader is likely to be familiar with 2D processing methods, I discuss the connections between 3D algorithms and the corresponding 2D algorithms whenever useful.

The book covers all the important aspects of 3D imaging. It links the migration methods with data acquisition and velocity estimation, because they are inextricably intertwined in practice. Data geometries strongly influence the choice of 3D imaging methods. At the beginning of the book, I present the most common acquisition geometries, and I continue to discuss the relationships between imaging methods and acquisition geometries throughout the text. The imaging algorithms are introduced assuming regular and adequate sampling. However, Chapters 8 and 9 explicitly discuss the problems and solutions related to irregular and inadequate spatial sampling of the data.

Velocity estimation is an integral component of the imaging process. On one hand, we need to provide a good velocity function to the migration process to create a good image. On the other hand, velocity is estimated in complex areas by iterative migration and velocity updating. Migration methods are presented first in the book because they provide the basic understanding necessary to discuss the velocity updating process.

Seismic-imaging algorithms can be divided into two broad categories, integral methods (e.g., Kirchhoff methods) and wavefield-continuation methods. Integral methods can be described by simple geometric objects such as rays and summation surfaces. Thus, they are understood more easily by intuition than wavefield-continuation methods are. My introduction of the basic principles of 3D imaging exploits the didactic advantages of integral methods. However, wavefield-continuation methods can yield more accurate images of complex subsurface structures. This book introduces wavefield-continuation imaging methods by leveraging the intuitive understanding gained during the study of integral methods. Wavefield-continuation methods are the subject of my ongoing research and that of my graduate students. Therefore, the wavefield-continuation methods described are more advanced, although less well established, than the corresponding integral methods.

Seismic-imaging technology is data driven, and the book contains many examples of applications. The examples illustrate the rationale of the methods and expose their strengths and weaknesses. The data examples are drawn both from real data sets and from a realistic synthetic data set, the SEG-EAGE salt data set, which is distributed freely and used widely in the geophysical community. For the reader's convenience, a subset of this data set (known as C3 narrow-azimuth) is included on the DVD that accompanies this book. Appendix B briefly describes this data set.

The software needed to produce many of the examples will also be distributed freely over the Internet. A reader with the necessary computer equipment (a powerful Unix workstation) and the patience to wait for weeks-long runs could reproduce the images obtained from the SEG-EAGE salt data set. Appendix A describes the foundations of SEPlib3d, the main software package needed to generate most of the results shown in this text.

The book starts with an introduction to the basic concepts and methods of 3D seismic imaging. To follow the first part of the book, the reader is expected to have only an elementary understanding of 2D seismic methods. The book thus can be used for teaching a first-level graduate class as well as a short course for professionals. The second part of the book covers more complex topics and recent research advances. This material can be used in an advanced graduate class in seismic imaging. To facilitate the teaching of the material in this book, the attached DVD includes a document in PDF format that has been formatted specifically to be projected electronically during a lecture. All the figures in this electronic document can be animated by clicking a button in the figure caption. Several of these figures are movies that provide a more cogent illustration of the concepts described in the text. All figures are included on the attached DVD as GIF files.

  1. Chapter 1 (page 1)
    Abstract

    Because of practical and economic considerations, 3D surveys never are acquired with complete and regular sampling of the spatial axes. The design of 3D surveys presents many more degrees of freedom than does the design of 2D surveys, and it has no standard or unique solution. Design of 3D acquisition geometries is the result of many tradeoffs among data quality, logistics, and cost. Furthermore, nominal designs often must be modified to accommodate operational obstacles encountered in the field.

  2. Chapter 2 (page 9)
    Abstract

    The ultimate goal of recording seismic data is to recover an image of the geologic structure in the subsurface. Imaging is the most computationally demanding and data-intensive component of seismic processing. Therefore, researchers have spent considerable effort in devising effective imaging strategies that yield accurate images but are computationally affordable. The most general and most expensive types of migration are those that operate directly on the entire prestack data set; thus, they are called full prestack migrations. All other imaging methods are designed to approximate full prestack migration; that is, they aim to achieve the same image accuracy but at a fraction of the computational cost. In this chapter, we analyze the main characteristics and the computational costs of 3D prestack migration. An understanding of these features is important if we are to determine when full prestack migration is required and to evaluate its several available approximations.
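
    To make the cost argument concrete, a back-of-envelope operation count for full prestack Kirchhoff migration is sketched below. The survey size, image size, and aperture fraction are hypothetical numbers chosen only to illustrate the scaling; they are not figures taken from this chapter.

```python
# Back-of-envelope operation count for full 3D prestack Kirchhoff migration.
# All parameters are hypothetical and serve only to illustrate the scaling.

n_traces = 40_000_000          # recorded prestack traces in a 3D survey
nx, ny, nz = 1000, 1000, 1000  # points in the output image cube
aperture_fraction = 0.01       # fraction of image points touched by each trace,
                               # limited by the migration aperture

# Each trace contributes to every image point inside its aperture, so the
# total work grows as traces x image points x aperture fraction.
operations = n_traces * nx * ny * nz * aperture_fraction
print(f"~{operations:.1e} summation operations")  # ~4.0e+14 with these numbers
```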

  3. Chapter 3 (page 19)
    Abstract

    Ideally, every seismic data set should be imaged by use of 3D prestack migration. In practice, however, 3D prestack migration is applied to only a small but growing number of surveys. The high computational cost of prestack migration is the main reason for its limited use. Often, we can apply less expensive methods that yield satisfactory results in less time and with fewer resources. However, computational complexity is not the only consideration. Prestack migration is very sensitive to the choice of velocity function and to irregular sampling of the data. This chapter introduces approximation methods for imaging prestack data — methods that are less expensive and often more robust than full prestack migration.

  4. Chapter 4 (page 39)
    Abstract

    Wavefield-continuation migration methods can yield better images than Kirchhoff methods do for depth-migration problems. Wavefield-continuation methods provide an accurate solution of the wave equation over the whole range of seismic frequencies, whereas Kirchhoff methods are based on a high-frequency approximation of the wave equation. Furthermore, wavefield-continuation methods naturally handle multipathing of the reflected energy induced by complex velocity functions. In contrast, when multipathing occurs, Kirchhoff methods require summation of the data over complex multivalued surfaces. That process can be cumbersome and error prone.
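
    The contrast with Kirchhoff imaging is easier to appreciate with the core of the summation written out. The sketch below images a single point by summing each trace at its diffraction traveltime; the function and argument names are hypothetical, and amplitude weights, obliquity factors, and anti-aliasing protection are omitted. With multipathing, the single-valued traveltime tables assumed here would have to become multivalued, which is exactly the complication described above.

```python
import numpy as np

def kirchhoff_image_point(traces, dt, t_src, t_rcv):
    """Minimal sketch of a Kirchhoff summation for one image point.

    traces : (ntraces, nt) array of recorded prestack traces
    dt     : time-sampling interval (s)
    t_src  : (ntraces,) traveltimes from each source to the image point
    t_rcv  : (ntraces,) traveltimes from the image point to each receiver

    Assumes a single-valued traveltime per trace; amplitude weights and
    anti-aliasing filters are left out for clarity.
    """
    ntraces, nt = traces.shape
    total_t = t_src + t_rcv                    # two-way diffraction traveltime
    isamp = np.rint(total_t / dt).astype(int)  # nearest time sample per trace
    valid = (isamp >= 0) & (isamp < nt)
    return traces[np.flatnonzero(valid), isamp[valid]].sum()
```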

  5. Chapter 5 (page 51)
    Abstract

    The numerical solution of the one-way wave equation is the cornerstone of all downward-continuation migration methods. Over the years, geophysicists have proposed a variety of solutions based on approximations of the SSR operator introduced in Chapter 4. No absolute optimum exists among these methods, because the problem at hand determines the ideal balance of accuracy, computational cost, flexibility, and robustness in the selection of a downward-continuation method. This chapter provides a thorough overview of the numerical methods that have been developed for downward-continuing wavefields.
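
    As an illustration of the simplest member of this family, the sketch below applies one constant-velocity phase-shift step in the frequency-wavenumber domain, which is one way to realize a single-square-root extrapolation for a laterally invariant medium. The function name, argument names, and the Fourier-transform sign convention are assumptions of this sketch, not definitions taken from the book or from SEPlib3d.

```python
import numpy as np

def phase_shift_step(wavefield_fk, omega, kx, ky, velocity, dz):
    """One downward-continuation step by phase shift (constant velocity).

    wavefield_fk : (nkx, nky) complex wavefield at angular frequency `omega`,
                   already transformed to the horizontal-wavenumber domain
    kx, ky       : 1D arrays of horizontal wavenumbers (rad/m)
    velocity     : constant velocity for this depth step (m/s)
    dz           : depth step (m)

    Sign conventions depend on the Fourier-transform convention; this sketch
    simply attenuates the evanescent region rather than letting it grow.
    """
    k2 = (kx**2)[:, None] + (ky**2)[None, :]
    kz = np.sqrt(((omega / velocity) ** 2 - k2).astype(complex))
    # Propagating waves (real kz) get a phase shift; evanescent waves decay.
    return wavefield_fk * np.exp(1j * kz * dz)
```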

  6. Chapter 6 (page 65)
    Abstract

    All the migration methods presented in previous chapters produce a migrated cube that is a function of the three spatial coordinates. To analyze velocity and amplitude, one often needs to exploit the redundancy of seismic data and produce images with more dimensions than the three coordinates of physical space. For example, when one applies Kirchhoff migration, it is straightforward to image the data as a function of their recording offset and azimuth by subdividing the domain of integration during the Kirchhoff summation. The migrated cubes obtained by subdividing the domain of integration use only a subset of the data and thus can be referred to as partial, or prestack, images. The term prestack images can be confusing, because it may refer to images obtained by prestack migration; therefore, in the following, I will use the term prestack partial images. The whole image is a hypercube (usually five-dimensional) made by the ensemble of all partial images. When the partial images are created according to the data offset, the two additional dimensions are either the absolute offset and azimuth (h, θh), or the inline and crossline offsets (xh, yh). When the partial images are created according to the reflection angles at the reflection point, the two additional dimensions are the reflection opening angle (γ) and the reflection azimuth.
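
    A minimal sketch of the bookkeeping behind offset-domain partial images follows: each trace is assigned to an (absolute offset, azimuth) bin, and migrating each bin separately produces one prestack partial image. The function name and binning choices are illustrative only, not the scheme used in the book.

```python
import numpy as np

def offset_azimuth_bin(sx, sy, rx, ry, h_edges, n_azimuth):
    """Assign each trace to an (absolute offset, azimuth) bin.

    sx, sy, rx, ry : source and receiver coordinates of each trace
    h_edges        : edges of the absolute-offset bins (m)
    n_azimuth      : number of azimuth bins over [0, pi)

    Migrating each (h_bin, az_bin) class separately yields one partial image;
    stacking all partial images reproduces the full image.
    """
    hx, hy = rx - sx, ry - sy
    h = np.hypot(hx, hy)                        # absolute offset
    theta = np.mod(np.arctan2(hy, hx), np.pi)   # azimuth folded to [0, pi)
    h_bin = np.digitize(h, h_edges) - 1
    az_bin = np.minimum((theta / np.pi * n_azimuth).astype(int), n_azimuth - 1)
    return h_bin, az_bin
```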

  7. Chapter 7 (page 83)
    Abstract

    Wavefield-continuation methods can be more accurate for 3D prestack migration than methods based on the Kirchhoff integral are, but their computational cost discourages their use in most imaging projects. Wavefield-propagation algorithms are most efficient when the computational grid is regular and has a horizontal extent similar to the horizontal coverage of the recorded data being imaged. The first difficulty to overcome — although not the most challenging one — is that realistic 3D data-acquisition geometries are not regular (Chapter 1). Thus, we must regularize the data geometry before migration by applying data-regularization algorithms, such as those described in Chapter 9, or methods that achieve similar results.
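
    The crudest possible flavor of such a geometry regularization is sketched below: traces are pushed to the nearest node of a regular midpoint grid and averaged where several fall in the same bin. It is meant only to make the notion of regularization concrete; the algorithms of Chapter 9 are considerably more sophisticated, and the grid parameters here are hypothetical.

```python
import numpy as np

def bin_to_regular_grid(mx, my, traces, x0, y0, dx, dy, nx, ny):
    """Crude geometry regularization by nearest-bin averaging of midpoints.

    mx, my : midpoint coordinates of each trace (m)
    traces : (ntraces, nt) data array
    x0, y0 : origin of the regular grid; dx, dy: spacings; nx, ny: grid size
    """
    ix = np.clip(np.rint((mx - x0) / dx).astype(int), 0, nx - 1)
    iy = np.clip(np.rint((my - y0) / dy).astype(int), 0, ny - 1)
    grid = np.zeros((nx, ny, traces.shape[1]))
    fold = np.zeros((nx, ny))
    for k in range(traces.shape[0]):
        grid[ix[k], iy[k]] += traces[k]   # accumulate traces per bin
        fold[ix[k], iy[k]] += 1           # count the fold of each bin
    grid /= np.maximum(fold, 1)[..., None]  # average; empty bins stay zero
    return grid, fold
```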

  8. Chapter 8 (page 103)
    Abstract

    The quality of 3D images is influenced strongly by the spatial sampling of the data and by whether the imaging operators properly take into account the data sampling. Strong aliasing artifacts degrade the images when the data are sampled poorly and the imaging operators are not implemented carefully. The sampling problem is more acute in 3D imaging than in 2D imaging, because the spatial axes of 3D data often are sampled sparsely and irregularly. In this chapter, I analyze the problems caused by data grids that are sampled regularly but too coarsely. Chapter 9 discusses the issues related to irregularity of both data acquisition and reflector illumination.
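
    A common rule of thumb ties the acceptable trace spacing to frequency, velocity, and dip. The sketch below evaluates that criterion; the specific numbers are hypothetical, and the formula is quoted as a standard guideline rather than as a result derived in this chapter.

```python
import numpy as np

def max_unaliased_spacing(velocity, f_max, dip_deg):
    """Rule-of-thumb trace spacing (m) that keeps a dipping event unaliased.

    Based on the standard guideline dx <= v / (4 * f_max * sin(dip)).
    """
    return velocity / (4.0 * f_max * np.sin(np.radians(dip_deg)))

# Example with hypothetical values: 2000 m/s, 60 Hz, 30-degree dip -> ~16.7 m
print(max_unaliased_spacing(2000.0, 60.0, 30.0))
```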

  9. Chapter 9 (page 123)
    Abstract

    In Chapter 8, we analyzed how the spatial sampling rate influences image quality. If data sampling is not sufficiently dense, the seismic image may lose resolution and/or it may be affected by artifacts.

  10. Chapter 10 (page 143)
    Abstract

    In previous chapters, when we analyzed methods for imaging zero-offset and prestack data, we assumed that the velocity function, whether expressed as rms velocity or as interval velocity, was known. Of course, in reality that is not true, and we must estimate velocity from the seismic data.
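
    The distinction between rms and interval velocity can be made concrete with the classical Dix conversion, sketched below. This is a generic textbook formula shown only as background; it is not the estimation machinery developed in this chapter, and the example picks are made up.

```python
import numpy as np

def dix_interval_velocity(t, v_rms):
    """Convert rms velocities to interval velocities with the Dix equation.

    t     : zero-offset two-way times of the picked horizons (s), increasing
    v_rms : rms velocities down to each horizon (m/s)

    Implements v_int,n^2 = (v_rms,n^2 t_n - v_rms,n-1^2 t_n-1) / (t_n - t_n-1),
    which assumes flat layers and small offsets.
    """
    t = np.asarray(t, dtype=float)
    v = np.asarray(v_rms, dtype=float)
    return np.sqrt(np.diff(v**2 * t) / np.diff(t))

# Example with made-up picks: interval velocity of the layer between 1 s and 2 s
print(dix_interval_velocity([1.0, 2.0], [2000.0, 2200.0]))  # ~2383 m/s
```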

  11. Chapter 11 (page 159)
    Abstract

    In the presence of complex structures and/or strong lateral velocity variations, extracting velocity information in the data space is both inaccurate and time-consuming. In such situations, the image space is a more appropriate domain for extracting kinematic information because migration focuses and greatly simplifies the events. Even when the migration velocity is far from the true velocity, incomplete focusing of the reflections is a step in the right direction. Velocity-estimation methods that use the focusing capabilities of migration to extract kinematic information more reliably are known commonly as migration velocity analysis (MVA) methods.

  12. Chapter 12 (page 185)
    Abstract

    Every velocity-estimation method presented in Chapters 10 and 11 is based, explicitly or implicitly, on ray-tracing modeling of the kinematics of the reflections. When we estimate velocity, a ray approximation is convenient for several reasons. First, it is faster to trace rays than to model propagating wavefields, so an inversion method based on rays has a large computational advantage. An even more important advantage is the intuitive nature of the link that rays establish between the velocity function and the kinematics of the reflections.

  13. Appendix A (page 207)
    Abstract

    To process real 3D data sets, which typically are recorded with irregular geometries and are very large, we must use software tools designed specifically for 3D data. Older software packages designed for 2D data either are not flexible enough to handle irregular geometries or allow only sequential access to the data, which is too inefficient.

  14. Appendix B (page 217)
    Abstract

    Throughout this book, several imaging examples are based on the SEG-EAGE salt data set. Furthermore, a subset of this data set, also known as the C3 narrow-azimuth classic data set, is distributed with the print version of this book and is available to institutional subscribers of the e-book. The geologic model and the data have been described extensively in periodic updates by the leaders of the project that modeled the data (Aminzadeh et al., 1994; Aminzadeh et al., 1995; Aminzadeh et al., 1996). This appendix simply provides a handy reference to help readers better understand the examples presented in the book.
