If you are interpreting seismic data, my view is that it is important to know where it comes from, and what the limitations of that dataset are.

Part 1: The CDP is a lie
The idea of a CDP is at the core of most seismic imaging concepts; more accurately, it's not a Common Depth Point but a Common Mid Point, although CMP and CDP tend to be used interchangeably.
In practice, gathering data into bins based on the midpoint of the source and receiver is only accurate for plane, horizontal layers of constant velocity. If this were actually the case we wouldn't need to shoot seismic at all, as we could accurately predict the sub-surface structure(!!)
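To make the binning idea concrete, here is a minimal sketch of midpoint binning in one dimension. The function name and parameters are my own illustration (real binning works on 2D coordinates and an inline/crossline bin grid), but the midpoint arithmetic is exactly what the text describes:

```python
from collections import defaultdict

def cmp_bins(src_rcv_pairs, bin_size):
    """Gather (source_x, receiver_x) pairs into bins keyed on their
    midpoint -- the CMP/CDP binning described above. Only strictly
    valid for flat, constant-velocity layers."""
    bins = defaultdict(list)
    for sx, rx in src_rcv_pairs:
        midpoint = 0.5 * (sx + rx)
        bins[int(midpoint // bin_size)].append((sx, rx))
    return dict(bins)
```

Two source-receiver pairs with very different offsets can share a bin because only the midpoint matters: (0, 100) and (25, 75) both have midpoint 50.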
While the CDP is flawed, it's very powerful. So in processing, we use the CDP approach, but apply corrections to the data so that it becomes "true." How important these corrections are depends on how complex your data is.

Part 2: Dip
Dip "breaks" the common midpoint model, essentially because of the Reflection Law - the angle of incidence is equal to the angle of reflection.
If you draw this up, the problem becomes apparent: draw a flat surface and a single dipping layer. Then construct the "normal" to the dipping layer (ie at right angles) and project two "rays" up to the flat surface, remembering that the angle between each ray and the "normal" is constant. Where one ray crosses the flat surface label a "receiver", and where the other ray crosses the flat surface label the "shot".
What you'll notice is that the point where the rays reflect is NOT below the midpoint between the source and receiver. The trivial case is the "normal ray", where the ray-path goes up-and-down along the normal, and the source and receiver are in the same location.

Part 3: The Normal Ray Section
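You can also check this numerically rather than with a pencil. The sketch below is my own illustration using the standard image-source construction (mirror the source across the reflector, then intersect the image-source-to-receiver line with the reflector); it shows the true reflection point sitting updip of the midpoint whenever the dip is non-zero:

```python
import math

def reflection_point(xs, xr, z0, dip_deg):
    """Reflection point on a planar dipping reflector z = z0 + x*tan(dip)
    (z positive downwards), for a source at (xs, 0) and a receiver at
    (xr, 0) on a flat surface. Mirrors the source across the reflector,
    then intersects the image-source -> receiver line with it."""
    a, b, c = math.tan(math.radians(dip_deg)), -1.0, z0  # line: a*x + b*z + c = 0
    # Mirror the source (xs, 0) across the reflector.
    d = (a * xs + c) / (a * a + b * b)
    sx, sz = xs - 2 * d * a, -2 * d * b
    # Intersect the image-source -> receiver line with the reflector.
    dx, dz = xr - sx, -sz
    t = -(a * sx + b * sz + c) / (a * dx + b * dz)
    return sx + t * dx, sz + t * dz
```

With zero dip the reflection point lands exactly under the midpoint; with 10 degrees of dip it shifts a couple of hundred metres updip for a 1 km offset over a 1 km deep reflector.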
A stacked section is a "normal ray, zero-offset section".
It is "zero offset" because we have applied the "normal moveout correction" to data gathered up by its midpoint (the CMP or CDP gather) and summed (stacked) these traces to improve the signal-to-noise ratio. The effect of this is to simulate what would happen if we had the source and receiver in the same place (ie at zero offset).
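As an illustration, the moveout correction for a single trace can be sketched as follows, assuming the simplest case of hyperbolic moveout in a constant-velocity medium, t(x) = sqrt(t0^2 + (x/v)^2). The function name is hypothetical; real NMO uses a velocity that varies with time and mutes the stretched shallow data:

```python
import numpy as np

def nmo_correct(trace, offset, velocity, dt):
    """Apply a normal moveout (NMO) correction to one trace.
    A reflection with zero-offset time t0 arrives on a trace at
    `offset` metres at t = sqrt(t0**2 + (offset/velocity)**2); we pull
    each sample back to its zero-offset time by interpolation."""
    t0 = np.arange(len(trace)) * dt
    t = np.sqrt(t0**2 + (offset / velocity) ** 2)
    return np.interp(t, t0, trace, left=0.0, right=0.0)
```

After correcting every trace in the gather with the right velocity, an event lines up flat across offsets and can be stacked.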
It is a "normal ray" section because if you traced the ray path (in depth) from the source/receiver (which are coincident!) to any given reflection interface, it would make a "normal" to that interface - the energy goes up and down along the same ray path, as you saw above.
You can think of this as the result you would get if you put a single planar wave into the earth; the sub-surface dip is going to scatter and diffract events. The results are only correctly positioned laterally if the layers are flat - if not, the energy is scattered and mis-positioned.

Part 4: Migration
"Migration" is a process designed to correct for this scattering caused by dip. It works by applying a mathematical model of the wave-equation to the normal ray section, reconstructing from the scattered data an image of the true sub-surface structure. There are many different mathematical models that are applied in migration - some fast, some slow, some accurate, some less so.
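To make the idea concrete, here is a deliberately toy constant-velocity, Kirchhoff-style migration of a zero-offset section - my own simplified sketch, not any production algorithm: each image point simply sums the input amplitudes along the diffraction hyperbola that a scatterer at that point would have produced, so scattered energy focuses back to its true position.

```python
import numpy as np

def kirchhoff_migrate(section, dt, dx, v):
    """Toy constant-velocity Kirchhoff migration of a zero-offset
    section (n_samples, n_traces). For each image point, sum the input
    along its diffraction hyperbola t = sqrt(t0**2 + (2*h/v)**2),
    where h is the horizontal distance to each trace."""
    nt, nx = section.shape
    image = np.zeros_like(section)
    x = np.arange(nx) * dx
    cols = np.arange(nx)
    for ix_out in range(nx):
        for it0 in range(nt):
            t = np.sqrt((it0 * dt) ** 2 + (2 * (x - x[ix_out]) / v) ** 2)
            it = np.round(t / dt).astype(int)
            ok = it < nt
            image[it0, ix_out] = section[it[ok], cols[ok]].sum()
    return image
```

Migrating a synthetic diffraction hyperbola collapses it back to a single bright point at the diffractor's position - exactly the "collapse of diffractions" the text mentions later.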
There are some sub-classes of migration to worry about too:

Post- or pre-stack migrations
Post-stack migrations are cheap and fast to run. The problem is that they might not fix all of your issues. As you (hopefully) saw when drawing out the different ray-paths above, the source/receiver midpoints for a real reflection point on a dipping surface vary with the source-receiver offset.
There's a diagram showing this here : http://seismicreflections.globeclaritas ... ation.html
So - if you want to get a better image, you really need to run a pre-stack migration. This means splitting the data into "offset planes" and migrating each "plane" (or cube in 3D) separately. If you have 120 fold (ie 120 traces per CDP bin), then it will take 120x as long as migrating one offset plane.

Time and depth migrations
Time migrations result in an "image ray" section. The ray-path between the zero-offset source-and-receiver makes a right angle at the reflecting surface, but if there is a velocity gradient, this ray-path is curved. We also have refraction effects bending the ray-path at each interface between rock layers of different velocities.
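The refraction effect is just Snell's law applied at each interface. The sketch below (my own illustration, flat layers only) traces a downgoing ray through a velocity model and shows how the bending pushes the ray laterally away from where a straight-ray assumption would put it:

```python
import math

def trace_ray(p, velocities, thicknesses):
    """Trace a downgoing ray with ray parameter p (s/m) through flat
    layers. Snell's law keeps p constant, so sin(theta) = p * v in each
    layer; the ray bends towards the horizontal as velocity increases.
    Returns the total lateral distance travelled."""
    x = 0.0
    for v, h in zip(velocities, thicknesses):
        sin_t = p * v
        if sin_t >= 1.0:        # post-critical: the ray cannot go deeper
            break
        x += h * math.tan(math.asin(sin_t))
    return x
```

With a typical velocity increase with depth, the refracted ray lands significantly further out than a straight ray at the takeoff angle - this lateral shift is what depth migration corrects and time migration ignores.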
Depth migrations take these effects into account - they correct for refraction - but they need complex and accurate models of the sub-surface that can be "ray traced" in order to do so. These calculations take much longer than time migrations, but give a true "vertical ray" section, where the data is correctly positioned laterally under the common mid point.

Part 5: FK and other tricks
After migration (or before it) the stacked section can be noisy, with lots of steep-dip energy that is not related to seismic reflections. If you are interpreting this, it's common to apply some kind of "coherency filter" to remove the noise and increase the "coherence" of the events you are after. These filters all use different mathematical estimates of "noise" - sometimes based on the properties of the data (frequency, wavenumber etc), sometimes based on weighted summation and smoothing (runmix), and sometimes based on statistical estimates.
FK filtering is a "dip filter" - that is to say we grab a block of traces, and reject all of the information that has an apparent dip greater than a given value. The problem is that this will produce a result that looks very "mixed" - the steep dip data has high frequencies (in time, and spatially) which give us resolution on small, subtle events, and of course we might actually have steep dip events that we are rejecting. The biggest impact is that an FK filter will "smooth out" faults, which of course tend to be steep dip.
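A crude FK dip filter can be sketched in a few lines - this is my own minimal illustration (real implementations taper the reject boundary to avoid ringing, rather than applying the hard mask used here). Dip appears in the FK domain as slowness |k/f|, so rejecting steep dips means keeping only a cone around the frequency axis:

```python
import numpy as np

def fk_dip_filter(data, dt, dx, max_dip):
    """Hard FK dip filter on a (n_samples, n_traces) block: transform
    to the frequency-wavenumber domain, zero everything whose apparent
    slowness |k/f| exceeds max_dip (s/m), and transform back."""
    nt, nx = data.shape
    F = np.fft.fft2(data)
    f = np.fft.fftfreq(nt, dt)[:, None]   # temporal frequency (Hz)
    k = np.fft.fftfreq(nx, dx)[None, :]   # spatial wavenumber (1/m)
    mask = np.abs(k) <= max_dip * np.abs(f)   # keep dips flatter than max_dip
    return np.real(np.fft.ifft2(F * mask))
```

A flat event passes through almost untouched, while a steep linear event loses nearly all of its energy - including, as the text warns, any genuinely steep reflections and fault-plane detail.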
The trade-off tends to be between keeping the noise and preserving high frequencies or steep dips.

Part 6: Raw and Final, Stacks and Migrations
We always used to supply interpreters with four sections - the "raw and final" "stacks and migrations". The "raws" had no filtering applied - they were the raw stack and raw migration, with no attempts to improve coherency or apply filters. The "finals" had scaling and noise-attenuation techniques applied.
These were (paper) copies, but also digital data. This allowed the interpreters to apply different filters (or even migrations) cheaply and easily, as well as to see and understand what the impact of these was. Looking at how the diffractions appeared on the seismic before migration could also show how well the migration had collapsed them, or whether it had mispositioned them.

Conclusion
I'd suggest that while you should never be interpreting just a stacked section, it's useful to be able to access the stack to understand what is going on. Similarly, post-stack time migrated results should really be considered the lowest grade of interpretable product you should use, with pre-stack time migration being the work-horse, and pre-stack depth migration being required where you have complex structure or steep (>45 degree) dips, especially if you are interested in the deeper part of the section.
Always be cautious of any multi-trace post-migration processes, and check to make sure you know what that process does and how it may have removed information from your data. Be very suspicious of any dip-removal techniques (FK, runmix, FX runmix, velocity filtering etc.) if you have either steep dip data or need to see subtle features/faulting.
Ideally ask for "raw" results (migrated) so that you can verify your interpretations on these, or use these in complex areas, with the "filtered" result being used to make autopicking easier in the simple bits.
Remember that the processor isn't an interpreter - if you have the "raw" dataset you can always go back and fine-tune the processing to preserve dips in key areas.