|Exploration & Production Geology
|Author:||Harold [ Wed Mar 16, 2011 1:28 pm ]|
|Post subject:||Subseismic Faults|
A review of current applications for exploration and production geologists
Here's an article I did on subseismic faults that may be useful for you all. Comments are welcome!
Subseismic faults are small faults that cannot be observed on seismic profiles because their fault heave is too small. In general, the smaller faults are, the higher their density, and large numbers of subseismic faults can greatly influence fluid flow. Predicting subseismic faults is becoming increasingly important for the oil industry as more structurally complex reservoirs are developed. This, together with new computer techniques, has caused increased interest in finding subseismic faults. A typical workflow starts by enhancing the available seismic data and computing seismic attributes such as coherence and curvature. Then, interpreted seismic horizons and faults are loaded into a mechanical model that uses volumetric backward modelling to find areas that have experienced strain, which can be linked to fault density. Fault orientation can also be found through this mechanical modelling. Finally, all data are fed into a stochastic model that uses a degree of randomness and the fractal power-law relationship between fault density and size to produce the final result. Usually, results are calibrated against well data, or by running flow simulations and comparing the outcome with observed reservoir characteristics.
In almost every hydrocarbon reservoir there are numerous faults that remain undetected by reservoir engineers and geologists. These faults lie below the seismic resolution: they do not displace enough strata to be visible on a seismic profile. A high-quality seismic survey may achieve a vertical resolution of 5 to 10 meters; layers thinner than this will not be visible on the seismic profile as they do not produce a distinct reflection. Seismic resolution depends on the wavelength used and the amount of noise that has to be filtered out. As a rule of thumb, layers only create resolvable reflections if their thickness exceeds about a quarter of the dominant wavelength, and lower frequencies (longer wavelengths) are necessary to penetrate deeper into the subsurface. It is thus not uncommon for deep, low-quality seismics to have a resolution of 20 to 30 meters. Faults with a heave smaller than this cannot be detected by a seismic interpreter and are called subseismic faults. In general, the smaller faults become, the more numerous they are, and subseismic faults can greatly increase or decrease the permeability of rocks. Even a small displacement along a fault can juxtapose clay and sand layers, provided these layers are thin as well. Clay smearing and diagenesis in fault zones can also create barriers to flow. On the other hand, fractures and subseismic faults can form conduits for fluids; in both cases they are of great importance to the oil industry.
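The resolution argument above can be sketched with the common quarter-wavelength rule of thumb. This is a minimal illustration; the velocity and frequency values are invented examples, not from any particular survey:

```python
def vertical_resolution(velocity_m_s: float, frequency_hz: float) -> float:
    """Approximate vertical seismic resolution (m) from the
    quarter-wavelength rule: resolution = wavelength / 4 = v / (4 * f)."""
    wavelength = velocity_m_s / frequency_hz
    return wavelength / 4.0

# Shallow, high-frequency survey: ~2000 m/s imaged at 50 Hz
print(vertical_resolution(2000.0, 50.0))  # 10.0 m
# Deep, low-frequency survey: ~3000 m/s imaged at 30 Hz
print(vertical_resolution(3000.0, 30.0))  # 25.0 m
```

The two cases roughly reproduce the 5-10 m and 20-30 m resolution ranges quoted above.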
Even for larger faults it is often very difficult to predict whether they are permeable or act as barriers to fluid flow. For subseismic faults this is not very different, but rather than knowing the properties of one particular fault it is often sufficient to predict the bulk properties of a block containing subseismic faults, given their size relative to the overall reservoir. This makes modelling easier, but one still has to predict where subseismic faults may occur, at what density, with what characteristics and in which orientation. Knowing all this obviously makes it easier to predict seal integrity and estimate migration paths, although for the latter more detail about the temporal evolution of the subseismic faults is needed. Currently, subseismic fault prediction is mainly applied to fluid flow in producing reservoirs, largely because of the availability of detailed data such as well logs, cores and fluid flow information, all of which can be used for calibrating models. In the future, the models and methods developed today may aid exploration and the development of new fields. And because most "easy oil" is already under development, and oil companies will increasingly need to incorporate more complex reservoirs in their portfolios, a better understanding of these smaller subseismic faults is very important. This development, and the fact that computers are becoming ever faster, has led many software packages to incorporate ways of modelling and finding subseismic faults (e.g. 3dMove, Dynel3D, Fracman and Havana). Computer models and filters are, for now, the only way of finding subseismic faults, as they allow us to accurately analyse and enhance seismic data, model strains and stresses in 3D, and quickly combine data using statistics and stochastic modelling.
This review will discuss the different computer techniques and methods for finding subseismic faults and will address possible future developments of this exciting new field for exploration and reservoir geology.
Enhancing seismic data
As stated before, subseismic faults are not visible on seismic profiles. That is, they cannot be seen by a seismic interpreter, but they often do leave a footprint in the seismic data. By using computers to analyse seismic attributes, the locations and density of faults and fractures can be found. A computer can quickly analyse large areas in great detail, so small artefacts in the seismic data caused by faults and fractures can be picked up.
Because faults and fractures can absorb (or channel) energy better than a continuous surface, a decrease in the amplitude of a seismic reflector is not uncommon above fractured intervals. If one knows that a particular layer has a homogeneous composition and that its thickness does not vary spatially, a decrease in the amplitude of its seismic reflector might indicate a higher fracture density in that area. If a 3D seismic volume is analysed using this property, data about the orientation of faults can also be gathered: a seismic profile shot parallel to the fractures in a layer will return a less disturbed seismic horizon than a profile shot perpendicular to them.
Also, the more dilated faults and fractures are, the more energy they can absorb, due to the presence of fluids in the fractures. A higher directional anisotropy is therefore often an indicator of more dilated, or open, faults.
Another technique often used to find fractured areas in seismic data is coherence. Coherence is a measure of the similarity between two neighbouring seismic traces: if two traces are very dissimilar, the coherence in that area is low; if they are identical, the coherence is high. Coherence is usually measured for one seismic horizon at a time to improve accuracy and detail. Low coherence indicates a local displacement of strata and thus often indicates a fault. The technique is very similar to what a geologist does when interpreting seismics, as they also look for displacements of seismic horizons, but a computer can do this in much greater detail and much faster.
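The trace-similarity idea behind coherence can be sketched as a zero-lag normalised cross-correlation. This is a toy illustration, not any vendor's coherence algorithm; the synthetic traces are invented:

```python
import numpy as np

def coherence(trace_a, trace_b):
    """Zero-lag normalised cross-correlation of two traces, in [-1, 1].
    High values mean the traces are similar; low values hint at a local
    displacement of strata, e.g. across a small fault."""
    a = np.asarray(trace_a, dtype=float)
    b = np.asarray(trace_b, dtype=float)
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# Hypothetical synthetic traces: a 10 Hz sinusoid and a copy shifted by a
# few samples, mimicking strata offset across a subseismic fault.
t = np.linspace(0.0, 1.0, 200)
trace = np.sin(2.0 * np.pi * 10.0 * t)
shifted = np.roll(trace, 3)

print(coherence(trace, trace))    # identical traces: coherence of 1
print(coherence(trace, shifted))  # offset traces: noticeably lower
```

Mapping this measure along a horizon highlights lineaments of low coherence that often coincide with faults.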
Similar to coherence, curvature is a seismic attribute that indicates that an originally horizontal surface has been disturbed. Curvature is a measure of the convexity or concavity of a seismic horizon and can quickly be calculated by a computer. In general, curved or disturbed surfaces can be caused by four processes: domes and sags (salt or shale diapirism), differential compaction (variations in overburden), diagenetic dissolution and collapse, or paleostress. If the first three can be ruled out from the nature of the rock and its surrounding lithologies and structures, the curved surfaces must have been caused by paleostress. Paleostress and the resulting strain are often directly linked to fracture density. Recent developments in attribute algorithms such as "Structural Entropy" and "Shaded Relief" (Trappe et al., 2007) have significantly improved the quality of these seismic attributes and show that research in this field is very valuable.
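As a sketch of how a computer derives such an attribute, the finite-difference Laplacian of a gridded horizon depth map serves as a crude curvature proxy. The synthetic dome below is purely illustrative:

```python
import numpy as np

def laplacian_curvature(horizon, dx=1.0):
    """Finite-difference Laplacian of a gridded horizon depth map: a crude
    curvature proxy that is near zero on flat surfaces and large over
    domes and sags. Edge cells are left at zero."""
    z = np.asarray(horizon, dtype=float)
    lap = np.zeros_like(z)
    lap[1:-1, 1:-1] = (
        z[2:, 1:-1] + z[:-2, 1:-1] + z[1:-1, 2:] + z[1:-1, :-2]
        - 4.0 * z[1:-1, 1:-1]
    ) / dx ** 2
    return lap

# Illustrative synthetic horizon: a flat surface at 1000 m depth with a
# small Gaussian dome (10 m of relief) in the middle.
x, y = np.meshgrid(np.linspace(-1, 1, 21), np.linspace(-1, 1, 21))
depth = 1000.0 - 10.0 * np.exp(-(x ** 2 + y ** 2) / 0.1)
curv = laplacian_curvature(depth, dx=0.1)
# curv is ~0 on the flat flanks and strongly non-zero over the dome crest
```

In practice curvature is computed with more sophisticated surface-fitting algorithms, but the principle of flagging departures from a flat horizon is the same.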
Analysing amplitude variations, coherence and curvature can thus produce 3D models that indicate areas of probably higher or lower fracture density directly from the seismic data. However, low-quality seismics may also produce these attribute anomalies, so the techniques should be used with caution. Increasing seismic quality and resolution (for example using well-driven seismics and vertical seismic profiling) is therefore very valuable, but one must accept that there will always be subseismic faults; they will only become smaller, and the data used to find them will simply become better.
Mechanical modelling
From sandbox modelling and field studies geologists have learned that while large faults relieve stress in one area, they usually cause stresses in others. These observations have led to many structural styles being categorized and extensively documented, and to computer models that can calculate where stress may have been localized in relation to large, observable structures. These mechanical computer models are under rapid development, as they can be used to find subseismic faults.
Finding fracture densities and orientations through mechanical modelling is done via backward modelling, in which the paleostress and resulting strain that formed the present-day subsurface structure are calculated. Usually the fault geometries and seismic horizons interpreted from the 3D seismics are loaded, and the direction and magnitude of movement are entered into the computer model. The computer renders a mesh consisting of cubes, tetrahedra or pyramids. Then, using manually added tie-points, the computer rejoins the seismic horizons displaced by the faults, creating continuous surfaces. These, usually curved, surfaces are then flattened, whereby the model computes how much stress and strain each volume of the mesh has experienced. This results in a 3D model displaying the areas that must have experienced compression or extension if the current structure were retro-deformed to its original morphology directly after deposition. Because this is backward modelling, the computed values of compression and extension must be inverted to find the orientation and magnitude of the stress on each mesh volume needed to achieve the current structure.
The second step is to use Andersonian models to calculate the likely orientation (strike) of a subseismic fault per mesh volume, and a coefficient of internal friction to estimate its dip.
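The dip estimate from a coefficient of internal friction can be sketched as follows. The friction value of 0.6 is a commonly quoted laboratory figure, used here only as an illustration:

```python
import math

def optimal_dip(mu, regime):
    """Andersonian optimal fault dip (degrees) for a coefficient of
    internal friction mu. The failure plane forms at 45 - phi/2 degrees
    to the maximum compressive stress, with phi = arctan(mu)."""
    phi = math.degrees(math.atan(mu))
    if regime == "normal":   # sigma1 vertical: steeply dipping faults
        return 45.0 + phi / 2.0
    if regime == "thrust":   # sigma1 horizontal: shallowly dipping faults
        return 45.0 - phi / 2.0
    raise ValueError("regime must be 'normal' or 'thrust'")

print(optimal_dip(0.6, "normal"))  # roughly 60 degrees
print(optimal_dip(0.6, "thrust"))  # roughly 30 degrees
```

These are the classic ~60 degree normal-fault and ~30 degree thrust-fault dips of Andersonian theory.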
The final result is a 3D model in which local fault density and orientation can be inferred from the amount of strain each mesh volume has experienced. The actual fault sizes, and thus connectivity, cannot yet be known: mechanical modelling only gives an insight into which areas have most probably experienced fracturing and faulting to relieve the stresses caused by the displacement of larger, observable faults and the formation of curved surfaces.
So far, mechanical models have mainly proved useful for simple structures, especially normal faults. Maerten et al. (2006) achieved good results for both fault densities and fault orientations by mechanically modelling the normally faulted Brent horizon in the Oseberg Sør region of the North Sea. For more complex structures, better models are still under development, mainly because in most cases the actual paleosurface or paleomorphology is difficult to recreate. Multiple deformation phases and inversion tectonics make it difficult for both a human and a computer to work out how layers were oriented during deformation. Syn-deformation deposits in the target area, which are very common in deltaic settings, also cause problems for recreating the paleosurface. In the latter case, separate backward models can be made for each deposited layer and then added up to produce a final result, but this requires great computational power.
However, in some cases where inversion tectonics have made it impossible to determine the displacement of the larger faults, mechanical modelling can still be used, because what matters then is not the absolute strain but the relative spatial variation of strain (Lohr et al., 2008). This is because in most subseismic fault prediction workflows fault size and density are calculated via stochastic modelling; the mechanical modelling only supplies the most probable fault locations and their orientations.
It is also difficult to know whether subseismic faults developed because of stress variations induced by the larger faults, or whether they existed before the larger, observable faults developed. If the latter is the case, mechanical modelling will probably produce inaccurate results. The smaller faults may relieve stresses as they develop, and most models do not take this into account. If they were dilated by an earlier deformation event, they may be able to accommodate large compressive forces and prevent other faults from forming. Diagenetic changes may create structures not related to tectonics. Thermal expansion and contraction, elasticity and fluid interaction are other factors usually not incorporated in mechanical models. All these influences can greatly change the nature and formation of subseismic faults, making mechanical modelling very difficult. However, some research has shown that different ways of backward modelling (e.g. assuming different deformation histories) may result in the same strain and stress regimes (Lewis et al., 2004). Thus, for finding the areas that suffered compression or extension, the way the structure formed is sometimes not as important as most structural geology textbooks suggest.
Stochastic modelling
Once the fracture density and probability maps from seismic enhancement and mechanical modelling have been made, all data are entered into a stochastic model. Lohr et al. (2008) showed that combining coherency data (enhanced seismics) and mechanical models produced the best results when the outcome of her study was compared with well data. So ideally both data sets should be used in the stochastic model to produce the final subseismic fault model.
A stochastic model relies on random variation of several parameters to calculate probability distributions and potential outcomes. While certain parameters are varied, the outcome of the model is recorded; if this is done thousands of times, a distribution of outcomes can be produced. Stochastic models are useful because they include a degree of randomness, as is often seen in nature, and they produce results with a numerical probability of being correct. These probability values are practical for budget calculations and risk assessments.
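The repeat-and-record idea can be sketched as a toy Monte Carlo run: vary uncertain inputs, repeat many times, and read off a probability. The parameter ranges, the connectivity proxy and the threshold below are all invented for illustration:

```python
import random

def simulate_connected_fraction(n_runs=10000, seed=42):
    """Toy stochastic model: draw two uncertain inputs (fault density and
    mean fault length, hypothetical units) and record whether each
    realisation exceeds a connectivity threshold. Repeating thousands of
    times yields a probability rather than a single deterministic answer."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_runs):
        density = rng.uniform(0.5, 2.0)       # faults per km^2 (assumed range)
        mean_length = rng.gauss(100.0, 30.0)  # metres (assumed distribution)
        connectivity = density * max(mean_length, 0.0)  # crude proxy
        if connectivity > 150.0:              # arbitrary threshold
            hits += 1
    return hits / n_runs

p = simulate_connected_fraction()
print(f"P(connected) ~ {p:.2f}")  # a probability usable in risk assessment
```

A real subseismic fault model varies many more parameters, but the output has the same form: a probability distribution over outcomes rather than one answer.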
Because only fracture density, probability and orientation are fed into the stochastic model, a method for estimating the size of faults and fractures was needed.
Childs et al. (1990) discovered that the size (both heave and length) of faults is related to fault density. Several other studies have also shown this fractal nature of faults to hold and that the relationship can be extended down to fault heaves of 1 m (Watterson et al., 1996; Yielding et al., 1992), although Nicol et al. (1996) suggest the power law may be unreliable at smaller fault sizes due to spatial systematics. In most cases using a single power law has proven reliable, but when implementing this principle one must be cautious about stratabound faults, fault genetics and spatial variation, as these may alter the trend. Usually the power law is determined by plotting fault density versus fault heave for the faults interpreted from the seismics on a log-log plot and calculating the slope. This value is then used to infer how large the subseismic faults must be on the density maps fed into the stochastic model. To get more accurate and robust power-law values, faults from a larger area than the one being modelled are used, although this sometimes causes problems with the spatial variation of fault systems.
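The log-log slope calculation described above can be sketched as a least-squares fit. The synthetic fault densities follow an assumed exponent of -1.5 purely for illustration:

```python
import math

def power_law_slope(heaves, densities):
    """Least-squares slope of log10(density) versus log10(heave), i.e. the
    exponent of the fractal fault-size relationship."""
    xs = [math.log10(h) for h in heaves]
    ys = [math.log10(d) for d in densities]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic seismically resolved faults obeying density = 1000 * heave^-1.5
heaves = [10.0, 20.0, 40.0, 80.0]
densities = [1000.0 * h ** -1.5 for h in heaves]
slope = power_law_slope(heaves, densities)
print(slope)  # recovers -1.5
```

Once the slope is known from the resolvable faults, the same line is extrapolated below seismic resolution to predict subseismic fault densities.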
Then, because fault connectivity is so important for fluid flow, and thus for reservoir characteristics, it needs to be accurately predicted too. Currently connectivity is modelled in two basic ways. The first, growth modelling, uses the density maps to create blocks of higher stress and treats faults as non-planar surfaces. Seeds are planted in random blocks; these seeds grow into faults, gradually relieving stress and following the orientation calculated for each block. Whenever a block reaches a stress level insufficient to maintain fault growth, faulting stops; whenever a growing fault hits another fault, its growth also stops. Usually a maximum fault size is also imposed, calculated from the fault-length distribution. After faulting stops, the throw of each fault is calculated from its length.
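A toy one-dimensional sketch of the growth-modelling idea: random seeds grow while a stress budget lasts and stop when a tip meets an existing fault. Every number here is invented for illustration; real growth models work on 3D stress blocks:

```python
import random

def grow_faults(n_seeds=5, budget=20.0, step=1.0, seed=7):
    """Toy 1-D growth model on a 0-100 line. Each seed grows symmetrically,
    consuming a 'stress budget'; growth stops when the budget is spent or
    a tip would land inside an existing fault. Each iteration adds `step`
    of total length (step/2 per tip)."""
    rng = random.Random(seed)
    faults = []  # list of [left_tip, right_tip]
    for _ in range(n_seeds):
        centre = rng.uniform(0.0, 100.0)
        left, right = centre, centre
        remaining = budget
        while remaining >= step:
            new_left, new_right = left - step / 2, right + step / 2
            # stop if either tip would land inside an existing fault
            if any(f[0] <= new_left <= f[1] or f[0] <= new_right <= f[1]
                   for f in faults):
                break
            left, right = new_left, new_right
            remaining -= step
        faults.append([left, right])
    return faults

for f in grow_faults():
    print(f"fault from {f[0]:.1f} to {f[1]:.1f}, length {f[1] - f[0]:.1f}")
```

The resulting lengths are capped by the budget, and faults seeded near an earlier fault stay short, qualitatively mimicking stress shadowing and fault-to-fault abutment.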
The second way of modelling fault connectivity is the marked point process (Stoyan et al., 1987). In this case faults can intersect, their maximum displacement is related to their maximum length by a power law, and their size distribution is generated randomly using the fractal power law described above. Because each fault displaces a subseismic horizon, each fault is constrained in how much displacement is possible, based on the seismic resolution or a user-defined tolerance. Damsleth et al. (1998) and Hollund et al. (2002) used this technique in the software package Havana, developed by the Norwegian Computing Center, and achieved good results. Maerten et al. (2006) tested and compared both techniques and concluded that growth modelling produced subseismic faults with accurate orientations, while the marked point process produced faults more in line with fault statistics and density.
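A minimal sketch of a marked point process for faults: random centres (the "points") carry "marks" (a heave drawn from a truncated power law, with length tied to heave by a simple ratio). All parameter values are illustrative and not taken from Havana:

```python
import random

def marked_point_faults(n, region=(0.0, 1000.0), seed=1,
                        h_min=1.0, h_max=10.0, exponent=1.5, ratio=100.0):
    """Toy marked point process: uniform random fault centres, heaves drawn
    from a truncated power law p(h) ~ h^-exponent via inverse-transform
    sampling, and length scaled to heave by an assumed ratio."""
    rng = random.Random(seed)
    faults = []
    for _ in range(n):
        x = rng.uniform(*region)
        y = rng.uniform(*region)
        u = rng.random()
        a = 1.0 - exponent
        h = (h_min ** a + u * (h_max ** a - h_min ** a)) ** (1.0 / a)
        faults.append({"x": x, "y": y, "heave": h, "length": ratio * h})
    return faults

population = marked_point_faults(500)
small = sum(1 for f in population if f["heave"] < 2.0)
print(small / len(population))  # small faults dominate, as the power law predicts
```

A production implementation would additionally honour the density maps (placing more points in high-strain blocks) and the displacement constraints mentioned above.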
Lastly, the model should be calibrated, as many assumptions still have to be made and many parameters estimated in this workflow. Calibration can be done by comparison with well data or cores; in this case fault density, strike, dip, permeability and displacement can be reliably measured and used to produce a model that is consistent with the fault structure of the subsurface. However, wells are very localized and spatial variation may be great. This, and the alteration caused by the removal of cores, makes calibration this way rather risky. Moreover, a fault model consistent with well data may be a good representation of reality, but it is not directly useful to production and reservoir engineers, as flow characteristics still need to be modelled.
This step again requires calibration and may produce an entirely different model. Thus in most cases the output of the stochastic model is fed directly into a flow simulator, whose results are then compared with actual reservoir characteristics. The 3D fault model is recalibrated until it is consistent with the actual flow within the reservoir and can be used to predict future flow. This way of calibrating is therefore mostly used for producing reservoirs, while well-log calibration is mainly used to develop models and for research purposes.
Conclusions
Through enhancing seismic data, detecting seismic attributes and mechanical modelling, and by combining these data in stochastic models, subseismic faults can be found. Subseismic faults greatly influence fluid flow: although they are relatively small, they are usually numerous and strongly affect reservoir characteristics. In most applications and studies the results are only approximate representations of the actual fault structure, but this is often enough, because in most cases the bulk properties of the faulted rock are sufficient. However, in the future, when more complex oil fields will have to be developed, subseismic faults will become more and more important. Predicting subseismic faults is a relatively new technique and interest in it is growing quickly. Today's models still use rather simple assumptions and relationships, but already produce usable results when tested on simple geological structures. In particular, reliable mechanical modelling of more complex structures needs improvement.
Increasing seismic quality and resolution will also prove useful, but subseismic faults will always exist; their size and importance will only decrease, and the data used to find them will become more accurate. Improving seismic quality will certainly help develop more complex fields, but the importance of mechanical and stochastic modelling for finding subseismic faults should not be underestimated.
References
Damsleth, E., Sangolt, V., Aamodt, G., 1998. Subseismic faults can seriously affect fluid flow in the Njord field off western Norway - a stochastic fault modelling case study. Society of Petroleum Engineers Annual Technical Conference and Exhibition, New Orleans, SPE Paper 49024, 10 p.
Hollund, K., Mostad, P., Nielsen, B.F., Holden, L., Gjerde, J., Contursi, M.G., McCann, J., Townsend, C., Sverdrup, E., 2002. Havana - a fault modeling tool. In: Koestler, A.G., Hunsdale, R. (eds.), Hydrocarbon seal quantification. Norwegian Petroleum Society (NPF) Special Publication 11, p. 157-171.
Lewis, H., Guest, J., Hammond, L., Hall, S.A., 2004. Kinematics to geomechanics - an exercise in predicting reservoir deformation and flow. American Association of Petroleum Geologists.
Lohr, T., Oncken, O., Krawczyk, C.M., 2008. Seismic and sub-seismic deformation on different scales in the NW German Basin. Fachbereich Geowissenschaften, Freie Universität Berlin.
Maerten, L., Gillespie, P., Daniel, J.M., 2006. Three-dimensional geomechanical modelling for constraint of subseismic fault simulation. AAPG Bulletin, v. 90, p. 1337-1358.
Nicol, A., Walsh, J.J., Watterson, J., Gillespie, P.A., 1996. Fault size distributions - are they really power-law? Journal of Structural Geology, v. 18, p. 191-197.
Stoyan, D., Kendall, W.S., Mecke, J., 1987. Stochastic geometry and its applications, 2nd ed. Chichester, Wiley, 456 p.
Watterson, J., Walsh, J.J., Gillespie, P.A., Easton, S., 1996. Scaling systematics of fault sizes on a large scale-range fault map. Journal of Structural Geology, v. 18, p. 199-214.
Yielding, G., Walsh, J.J., Watterson, J., 1992. The prediction of small-scale faulting in reservoirs. First Break, v. 10, p. 449-460.
|Author:||ciel071 [ Wed Sep 03, 2014 12:44 am ]|
|Post subject:||Re: Subseismic Faults|
I was reading your article and first of all I want to mention that it is very well written and interesting. I am also interested in citing your article. Is this article actually published? And if so, where could I find it? Thanks.