Compressive Seismic Imaging
COP Proprietary Seismic Acquisition Technology
Compressed Sensing (CS) is a mathematical technique that creates high-resolution datasets from low-resolution samples. It is used in the medical field to shorten MRI acquisition times and improve results. CS exploits sparsity, a measure of an image's complexity (or lack of it), to iteratively reconstruct the high-resolution image from a small subset of the data. The iterations start with large shapes applied over the low-resolution dataset and progressively add smaller and smaller shapes, seeking the simplest, or sparsest, solution. The image with the highest degree of sparsity is almost always the correct solution, or very close to it. This means that a high-resolution image can be acquired by recording less, rather than more, data.
- COP proprietary technology: industry-leading algorithms and methodology.
- Compressive sensing provides a new sampling theory
- More/better data with less acquisition time and effort (cost) using reconstruction algorithms
- Broader bandwidth data due to Non-Uniform Optimal Sampling (NUOS) acquisition geometry
- Increases spatial resolution by a factor of 2 or more
- CSI surveys are ideal for AVO and 4D applications
- Careful acquisition and de-blending can improve data quality
- “Blended Sources” = simultaneous shooting
- “De-Blending” = recovering individual source recordings. Reduces shot noise.
- Acquisition efficiency can increase by an order of magnitude
- BETTER data, FASTER acquisition, CHEAPER cost
Examples in CSI
Examples of what Compressive Seismic Imaging is being used for.
Compressive Sensing in Medical Imaging
Comparison of MR images with and without compressed sensing. The image on the left shows a child's abdomen and chest, acquired at 7.2 times the regular acquisition speed and therefore undersampled compared to a traditional MRI. Compressed sensing techniques (right image) improve the image quality and reveal structures not previously visible (indicated by arrows). Photo credit: Shreyas Vasanawala, Stanford University.
Compressive Sensing for Seismic Data
In seismic acquisition, the traditionally applied sampling criterion is Nyquist sampling. Nyquist theory states that a minimum of two points per wavelength is required in all dimensions to adequately reconstruct a signal from discrete, uniformly spaced samples. CS theory (e.g., Baraniuk, 2007; Herrmann, 2010) relaxes that requirement, allowing a signal to be recovered from a smaller number of "advantageous" nonuniformly spaced samples. If the same signal can be recovered from fewer samples, the efficiency of seismic acquisition improves significantly. ConocoPhillips applies a proprietary optimization framework for CS-based acquisition design and processing, known as compressive seismic imaging (CSI) (e.g., Mosher et al., 2014a). CSI design concepts improve total sampling efficiency for a particular design by an order of magnitude or more. CSI principles enable more efficient use of acquisition resources, providing the ability to acquire more seismic data than a high-density conventional shoot while maintaining comparable data quality. The impact of CSI is unlocking development-grade seismic data at exploration-grade cost.
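The Nyquist criterion above translates directly into a maximum spatial sampling interval. The short sketch below works through the arithmetic for hypothetical survey parameters (the velocity and frequency values are illustrative assumptions, not parameters of any survey described here):

```python
# Nyquist: at least two samples per shortest wavelength in every dimension.
v_min = 1500.0                  # m/s, assumed slowest propagation velocity
f_max = 60.0                    # Hz, assumed highest usable frequency
lambda_min = v_min / f_max      # shortest wavelength: 25 m
dx_max = lambda_min / 2.0       # maximum unaliased sampling interval
print(dx_max)                   # 12.5 m
```

Any uniform spacing coarser than `dx_max` aliases the highest frequencies; CS-based designs instead sample nonuniformly and recover the fine scales in reconstruction.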
Different sub-sampling schemes and their imprint in the Fourier domain for a signal that is the superposition of three cosine functions (modified from Herrmann, 2010)
a-b: Conventional Nyquist sampling fully samples the waveform.
c-d: Regular undersampling (1/3) of the signal results in aliasing, which has a stronger signal strength than the background Gaussian noise and is difficult to remove.
e-f: Randomized undersampling (1/3) using CS and sparsity concepts captures the signal while driving subsampling artifacts below the random noise threshold. The signal is easily reconstructed using simple denoising algorithms.
Compressive sensing schemes aim to design acquisition that specifically creates Gaussian-noise-like subsampling artifacts (Donoho et al., 2006, 2009). As opposed to the coherent subsampling artifacts in (d), the noise-like artifacts in (f) can subsequently be removed by a sparse recovery procedure, during which the artifacts are separated from the signal and amplitudes are restored (Herrmann, 2010).
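The randomized-undersampling behavior described above can be reproduced with a small numerical sketch. This is a simplified stand-in for the actual CSI algorithms: the three-cosine signal, the 1/3 random sampling, and the iterative hard-thresholding recovery used here are illustrative assumptions in the spirit of the Herrmann (2010) example, not ConocoPhillips' proprietary method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Signal: superposition of three cosines (sparse in the Fourier domain).
n = 600
t = np.arange(n)
freqs = [12, 35, 78]                   # illustrative frequency indices
x = sum(np.cos(2 * np.pi * f * t / n) for f in freqs)

# Randomized 1/3 undersampling: keep a random third of the samples.
idx = np.sort(rng.choice(n, size=n // 3, replace=False))
y = x[idx]

# Sparse recovery by iterative hard thresholding: alternate between
# honoring the observed samples and keeping only the s largest-magnitude
# Fourier coefficients (s = 6 because 3 cosines occupy 6 FFT bins).
s = 6
xk = np.zeros(n)
for _ in range(200):
    xk[idx] = y                        # enforce the measured samples
    X = np.fft.fft(xk)
    keep = np.argsort(np.abs(X))[-s:]
    Xs = np.zeros_like(X)
    Xs[keep] = X[keep]                 # hard threshold: sparsest candidate
    xk = np.real(np.fft.ifft(Xs))

err = np.linalg.norm(xk - x) / np.linalg.norm(x)
print(f"relative reconstruction error: {err:.3f}")
```

With regular (every-third-sample) decimation the aliases land on coherent Fourier bins and the thresholding step cannot separate them from the true signal; with randomized sampling the artifacts behave like broadband noise and fall away after a few iterations.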
Impact on Acquisition Efficiency, Data Quality and Cost
Application of CSI to seismic acquisition methodologies has the potential to improve data quality and/or coverage, depending on the needs of the survey.
A. Conventional data acquisition, uniform shot and receiver distribution.
B. CSI Reconstruction - data quality/density is doubled for the same cost of acquisition.
C. CSI Reconstruction – same data quality but lower cost.
D. CSI Reconstruction – same data quality, same cost, larger area.
Actual Example – Onshore Alaska Vibroseis
Comparison of the 2015 CSI acquisition parameters with a planned high-density conventional survey that was not acquired, as well as with the legacy seismic data (1999). For the 2015 CSI survey, the source and receiver line spacings were held constant, while the receiver and source spacing along those lines were varied nonuniformly. The nominal receiver spacing was slightly larger than in the original conventional design, with fewer receiver stations overall. This CSI design allows the receivers to be reconstructed to a much finer spacing in processing.
Production CSI acquisition (blue) compared to projected conventional acquisition (green), showing the efficiency improvement from the CSI design. The CSI implementation of simultaneous sources with nonuniform optimally sampled source and receiver points (Mosher et al., 2014b) allowed for more efficient field production given the project resources. Compared to the conventionally designed high-density survey, trace density is an order of magnitude higher with smaller bin sizes. The Alaska CSI survey took fixed acquisition resources and produced a survey with greater signal-to-noise ratio (S/N) and higher spatial resolution.
Example Offshore CSI Streamer Survey
The nominal streamer spacing is 50 m with a minimum streamer separation of 25 m, and the average shot interval along each line was 37.5 m. Nonuniform shot and receiver placements generate the "advantageous sampling" required for CSI reconstruction. In this example, the design was optimized to allow reconstruction from a 37.5 m bin size down to 12.5 m. Operationally, the survey was conducted identically to a standard towed-streamer survey with a conventional towing configuration. The distribution of nonuniform samples has a significant impact on reconstruction fidelity. In the ConocoPhillips CSI design framework, we analyze the distribution of nonuniform sample spacings to ensure that it covers the targeted reconstruction bandwidth. In other words, to reconstruct to 12.5 m spacing, the distribution of sample spacings should include coverage at 12.5 m.
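The spacing-distribution check described above can be sketched numerically. The jittered-placement scheme below is an illustrative assumption, not the actual ConocoPhillips design framework; it simply shows one way nonuniform positions with a 37.5 m nominal interval can produce spacings that reach down to a 12.5 m target bin.

```python
import numpy as np

rng = np.random.default_rng(7)

nominal = 37.5     # m, nominal shot interval (from the example above)
target = 12.5      # m, desired reconstruction bin size
n_shots = 200

# Jittered placement: perturb each shot within its nominal cell, leaving
# enough margin that no two consecutive shots land closer than the target.
half_range = (nominal - target) / 2.0
jitter = rng.uniform(-half_range, half_range, n_shots)
positions = nominal * np.arange(n_shots) + jitter

# Design check: the spacing distribution should include coverage at the
# target bin size so the finest reconstructed scales are constrained by
# real samples.
spacings = np.diff(np.sort(positions))
print(f"min {spacings.min():.1f} m, mean {spacings.mean():.1f} m")
```

By construction the minimum possible spacing equals the target bin size, while the mean stays at the nominal interval, so total shot count (and cost) is unchanged relative to a uniform 37.5 m design.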
ConocoPhillips' proprietary deblending algorithm has also been applied to some CSI surveys to further improve operational efficiency. By recording continuously and using multiple source vessels, acquisition times have been significantly reduced in both the onshore and offshore examples. This is especially important in environments with a short weather window or other constraints on acquisition time. Deblending has the additional benefit of reducing noise in the gathers.
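The blended-acquisition workflow can be illustrated with a toy continuous-recording example. The sampling rate, shot times, and impulse amplitudes below are invented for illustration; it shows only the "pseudo-deblending" step (cutting shot windows out of a continuous record), after which a real deblending algorithm would separate the overlapping energy.

```python
import numpy as np

fs = 500                         # Hz, assumed sampling rate
record = np.zeros(60 * fs)       # one minute of continuous recording
record[5 * fs] = 1.0             # arrival from shot 1, fired at t = 5 s
record[9 * fs] = 0.5             # arrival from shot 2, fired at t = 9 s
listen = 6.0                     # s, listening time per shot

def pseudo_deblend(trace, t0, fs, listen):
    """Cut a shot record out of the continuous trace at firing time t0."""
    i0 = int(t0 * fs)
    return trace[i0:i0 + int(listen * fs)]

# Shot 2 fires before shot 1's listening window ends, so its energy
# appears 4 s into shot 1's window as blending noise; the deblending
# step must attribute that energy back to the correct source.
win1 = pseudo_deblend(record, 5.0, fs, listen)
```

Because the sources fire without waiting for each other's listening time, total acquisition time scales with the number of shots rather than with shots times listening time, which is where the efficiency gain comes from.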
A comparison of receiver gathers before and after deblending. (a) Raw receiver gather containing production shots and shots fired during vessel turns. (b) Receiver gather after deblending.
Example of the CSI deblending process. (a) A pseudo-deblended shot record with two simultaneous shots recorded. (b) A CSI deblended shot record. (c) The difference between panel (a) and (b), demonstrating the additional energy removed in CSI deblending.
Data Examples and Comparisons
After years of trials and testing, ConocoPhillips has applied CSI seismic acquisition in multiple production surveys, in both onshore and offshore environments, with great success. The improvements in data quality associated with CSI not only allow better resolution of structural elements but also improve the interpretability of amplitude and AVO analysis.
A comparison of image quality from legacy and CSI simultaneous-source surveys.
Comparison of image quality between legacy data prestack time migration (converted to depth) and CSI data PSDM. (a) Postmigration stack image comparison. (b) Above-reservoir coherence semblance comparison.
A comparison of image quality between legacy data PSDM and CSI data PSDM. (a) Postmigration stack image comparison. (b) Shallow coherence semblance comparison.
Reservoir far stack in (a) the 1999 time-migrated legacy dataset converted to depth and (b) the 2015 CSI dataset. In the CSI data, the slump blocks from 6400 to 7300 ft depth are better defined (green arrows), the reservoir structure at around 7800 ft is less distorted, the fault image is sharper (black arrow), the clinoforms from 8000–9000 ft are more coherent, and the structural deformation in the horizon at 9000 ft is better imaged (blue arrow).
Timeslice before and after CSI reconstruction.
Full-stack images from a preliminary prestack time migration of the Lookout data set, (a) before CSI reconstruction and (b) after. CSI reconstruction reduces the shadow effects of source coverage gaps in the image.
Overburden full stack in (a) the 1999 time-migrated legacy dataset converted to depth and (b) the 2015 CSI dataset. Compared to the legacy data, the reflectivity in the CSI data is coherent at a shallower depth (green arrow), and faulting (red arrows) and stratigraphic features (blue arrows) are clearer.
Map view of far-stack amplitude at reservoir level for (a) 1999 legacy data and (b) 2015 CSI data. Warm colors are more negative. The 2015 CSI data have amplitudes less distorted by the overburden. Although the high-amplitude regions are similar, the 2015 CSI data have higher S/N out to the reservoir edge where it is expected to be thinner.
Improved resolution and AVO analysis of hydrocarbon bearing intervals using CSI.
About ConocoPhillips CSI
- Baraniuk, R. G., 2007, Compressive sensing: IEEE Signal Processing Magazine, 24, no. 4, 118–121, http://dx.doi.org/10.1109/MSP.2007.4286571.
- Donoho, D., A. Maleki, and A. Montanari, 2009, Message-passing algorithms for compressed sensing: Proceedings of the National Academy of Sciences, 106, no. 45, 18914–18919.
- Donoho, D., and J. Tanner, 2009, Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing: Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 367, 4273–4293.
- Donoho, D. L., 2006, Compressed sensing: IEEE Transactions on Information Theory, 52, no. 4, 1289–1306, doi: 10.1109/TIT.2006.871582.
- Herrmann, F. J., 2010, Randomized sampling and sparsity: Getting more information from fewer samples: Geophysics, 75, no. 6, WB173–WB187, http://dx.doi.org/10.1190/1.3506147.
- Mosher, C., C. Li, L. Morley, Y. Ji, F. Janiszewski, R. Olson, and J. Brewer, 2014a, Increasing the efficiency of seismic data acquisition via compressive sensing: The Leading Edge, 33, no. 4, 386–391, http://dx.doi.org/10.1190/tle33040386.1.
- Mosher, C. C., C. Li, L. C. Morley, F. D. Janiszewski, Y. Ji, and J. Brewer, 2014b, Non-uniform optimal sampling for simultaneous source survey design: 84th Annual International Meeting, SEG, Expanded Abstracts, 105–109, https://doi.org/10.1190/segam2014-0885.1.
- Mosher, C. C., C. Li, F. D. Janiszewski, L. S. Williams, T. C. Carey, and Y. Ji, 2017, Operational deployment of compressive sensing systems for seismic data acquisition: The Leading Edge, 36, no. 8, 661–669, https://doi.org/10.1190/tle36080661.1.
- Special Section: Impact of Compressive Sensing on Seismic Data Acquisition and Processing: The Leading Edge, August 2017.
- Wired, March 2010, Fill in the Blanks: Using Math to Turn Lo-Res Datasets Into Hi-Res Samples, https://www.wired.com/2010/02/ff_algorithm/.