Browsing by Author "Zhu, Hejun"
Now showing 1 - 20 of 20
Item Ambient noise full waveform inversion for noise source distribution (2018-08) Zhang, Shuo; Zhu, Hejun
We use full waveform inversion of ambient noise to estimate noise source distributions and model parameters. We derive the sensitivity kernel with respect to a noise source distribution from ambient noise data. To evaluate the capability of the algorithm, we perform several numerical examples. Our approach is to simulate propagation of a diffusive wavefield and to calculate interstation correlograms. Waveform differences are used to measure the misfit between predicted correlograms and correlation functions computed from observed seismic recordings. This measurement is used as an adjoint source to propagate the adjoint wavefield and to calculate the associated gradients. Due to limited resolution and accuracy, we apply L1 model regularization in the Fourier domain to constrain the inversion result and remove artifacts. As with conventional full waveform inversion, inversion with ambient noise information suffers from multi-parameter trade-offs. We test the capability of ambient noise full waveform inversion with a complicated velocity model. The inversion results illustrate that the ambient noise full waveform inversion algorithm can give an accurate estimate of the noise source distribution with sparse data acquisition.

Item An Innovative Vibroseis Experiment to Detect the Moho Below the Valles Caldera, New Mexico (2022-12-01) Zhang, Yang; Lumley, David; Zhu, Hejun; Ferguson, John
In this study, we present a new 1-D velocity-structure model that reveals several geological structures beneath the Valles Caldera. The result is obtained by applying a sequence of processing techniques, mainly extended cross-correlation, NMO correction, and horizontal supergather stacking, to Vibroseis (P-wave reflection) data collected from Valles Grande.
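The processing sequence above applies an NMO correction before stacking. As background, the hyperbolic moveout relation t(x) = sqrt(t0^2 + x^2 / v^2) that the correction is based on can be sketched as follows (constant NMO velocity and illustrative values assumed, not parameters from the study):

```python
import numpy as np

def nmo_traveltime(t0, offset, v_nmo):
    """Hyperbolic normal-moveout traveltime: t(x) = sqrt(t0^2 + (x / v)^2)."""
    return np.sqrt(t0**2 + (offset / v_nmo) ** 2)

# Zero-offset two-way time of 2.0 s, 1500 m offset, 3000 m/s NMO velocity.
t = nmo_traveltime(2.0, 1500.0, 3000.0)

# The NMO correction shifts the sample recorded at t(x) back to t0,
# flattening the reflection hyperbola before stacking.
dt_nmo = t - 2.0
```

At zero offset the correction vanishes, and it grows with offset, which is what flattens a reflection event across a supergather.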
To validate and evaluate the robustness of the processed result, we test the model by iteratively forward modeling a suite of traveltimes observed in the processed data. This leads to a quantitatively constrained 1-D model that resolves three shallow caldera structures at depths of 1.2 km, 2.1 km, and 3 km; the possible upper and lower bounds of a mid-crustal magma reservoir at depths of ~5 km and ~10 km, respectively; and a possible Moho near a depth of 34 km. We also find a good match of multiples related to the detected geological features when comparing the modeled data with the observed data. In our result, the depth of the Moho beneath the Valles Caldera is shallower than in results obtained from teleseismic surface wave data, possibly because upwelling mantle material uplifts the Moho as part of the Rio Grande Rift zone. This implies that the Rio Grande rifting process is possibly driven by upwelling mantle flow beneath this region.

Item Crustal Wave Speed Structure of North Texas and Oklahoma Based on Ambient Noise Cross-Correlation Functions and Adjoint Tomography (Oxford University Press) Zhu, Hejun
Recently, seismologists have observed increasing seismicity in North Texas and Oklahoma. Based on seismic observations and other geophysical measurements, numerous studies have suggested links between the increasing seismicity and wastewater injection during unconventional oil and gas exploration. To better monitor seismic events and investigate their triggering mechanisms, we need an accurate 3-D crustal wave speed model for the study region. Considering the uneven distribution of earthquakes in this area, seismic tomography with local earthquake records has difficulty achieving even illumination.
To overcome this limitation, in this study, ambient noise cross-correlation functions are used to constrain subsurface variations in wave speeds. I use adjoint tomography to iteratively fit frequency-dependent phase differences between observed and predicted band-limited Green's functions. The spectral element method is used to numerically calculate the band-limited Green's functions, and the adjoint method is used to calculate misfit gradients with respect to wave speeds. A total of 25 preconditioned conjugate gradient iterations is used to update model parameters and minimize data misfits. Features in the new crustal model TO25 correlate well with geological provinces in the study region, including the Llano uplift, the Anadarko basin, and the Ouachita orogenic front. In addition, there are relatively good correlations between the seismic results and gravity and magnetic observations. This new crustal model can be used to better constrain earthquake source parameters in North Texas and Oklahoma, such as epicentre locations and moment tensor solutions, which are important for investigating triggering mechanisms between these induced earthquakes and unconventional oil and gas exploration activities. © The Author(s) 2018. Published by Oxford University Press on behalf of The Royal Astronomical Society.

Item Elastic Wavefield Separation in Anisotropic Media Based on Eigenform Analysis and Its Application in Reverse-Time Migration (Oxford University Press, 2019-02-13) Yang, Jidong; Zhang, H.; Zhao, Y.; Zhu, Hejun
Separating compressional and shear wavefields is an important step in elastic reverse-time migration, which can remove wave-mode crosstalk artefacts and improve imaging quality. In vertical (VTI) and tilted (TTI) transversely isotropic media, the state-of-the-art techniques for wavefield separation are based on either non-stationary filters or low-rank approximation.
Both require intensive Fourier transforms for models with strong heterogeneity. Based on eigenform analysis, we develop an efficient pseudo-Helmholtz decomposition method for VTI and TTI media, which produces vector P and S wavefields with the same amplitudes, phases, and physical units as the input elastic wavefields. Starting from the elastic VTI wave equations, we first derive the analytical eigenvalues and eigenvectors, then use a Taylor expansion to approximate the square-root term in the eigenvalues, and finally obtain a zero-order and a first-order pseudo-Helmholtz decomposition operator. Because the zero-order operator is the true solution for the case of ϵ = δ, it produces accurate wavefield separation results for elliptically anisotropic media. The first-order separation operator is more accurate for non-elliptical anisotropy. Since the proposed pseudo-Helmholtz decomposition requires solving an anisotropic Poisson's equation, we propose two fast numerical solvers. One is based on sparse lower-upper (LU) factorization, which can be applied repeatedly to the input elastic wavefields once the lower and upper triangular matrices are computed. The second solver assumes the model parameters are laterally homogeneous within a given migration aperture. This assumption allows us to efficiently solve the anisotropic Poisson's equation in the z-kx domain, where kx and z denote the horizontal wavenumber and depth, respectively. Using a coordinate transform, we extend the pseudo-Helmholtz decomposition to TTI media. The separated vector wavefields are used to produce PP and PS images by applying a dot-product imaging condition. Several numerical examples demonstrate the feasibility and applicability of the proposed methods. © The Author(s) 2019.
Published by Oxford University Press on behalf of The Royal Astronomical Society.

Item Enhancing Gravitational Wave Detection Using Machine Learning and Ambient Noise Suppression (August 2023) Murali, Chinthak; Lumley, David; Williams, Nathan; King, Lindsay J.; Zhu, Hejun; Kesden, Michael; Arrowsmith, Stephen
In this thesis, we present two separate deep learning pipelines for the detection and parameter estimation of astrophysical gravitational waves. In part one, we present a convolutional neural network, designed in an auto-encoder configuration, that can detect and denoise gravitational waves from merging black hole binaries orders of magnitude faster than the conventional matched-filtering based detection currently employed at advanced LIGO (aLIGO) and the LVK (LIGO-Virgo-KAGRA) network in general. The network learns from a sparse representation of the data in the time-frequency domain and constructs a non-linear mapping from this representation into two separate masks for signal and noise, facilitating the separation of the two from raw data. This approach is the first of its kind to apply machine learning based gravitational wave detection/denoising from binary mergers in the 2D representation of gravitational wave data. We applied our formalism to the first gravitational wave event detected, GW150914, successfully recovering the signal at all three phases of coalescence at both aLIGO detectors. The method is further tested on gravitational wave data from the second observing run (O2) of aLIGO, reproducing all binary black hole mergers detected in O2 at both detectors. The network also appears to have uncovered a pattern of 'ringing' after the ringdown phase of the coalescence, which is not a feature present in the conventional binary merger templates.
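The mask-based signal/noise separation described in the abstract above can be illustrated with a toy time-frequency masking step. Here the soft mask is a hand-made function of spectrogram magnitude; in the thesis the mapping is learned by the auto-encoder network, so this is only a sketch of the masking idea, not the actual model:

```python
import numpy as np

# Toy time-frequency magnitude "spectrogram" (rows: frequency, cols: time).
spec = np.array([[1.0, 4.0, 1.0],
                 [0.5, 6.0, 0.5],
                 [0.2, 0.3, 0.2]])

# A hand-made soft mask in [0, 1]: large magnitudes map closer to 1.
# In the thesis this mask is the learned output of the network.
signal_mask = spec / (spec + spec.mean())
noise_mask = 1.0 - signal_mask

signal_est = signal_mask * spec   # estimated signal component
noise_est = noise_mask * spec     # estimated noise component

# Because the two masks sum to one, the components reconstruct the input.
recon = signal_est + noise_est
```

The key property is that complementary masks partition the time-frequency energy, so denoising reduces to estimating one good mask.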
This method can also interpolate and extrapolate between modeled templates and explore gravitational waves that are unmodeled and hence not present in the template bank used by matched-filtering detection pipelines. Faster and more efficient detection schemes such as this will be instrumental as ground based detectors reach their design sensitivity, likely resulting in several hundred potential detections in a few months of observing runs. In part two, we present another deep learning based architecture using a convolutional neural network to estimate the intrinsic parameters of the black holes from observed raw gravitational wave data. This framework can estimate parameters of coalescing binaries orders of magnitude faster than the conventional Bayesian parameter inference employed at the LVK network. We also attempt to estimate the individual spin components of both black holes, which is often not possible using Bayesian inference. This machine learning based parameter inference scheme is, to the best of our knowledge, the first of its kind that can directly estimate individual spin components of both black holes from raw detector data. In part three, we present an overview of Ambient Seismic Noise (ASN) and the importance of its mitigation in enhancing ground based gravitational wave detection. ASN is one of the biggest noise contributors below 20 Hz in ground based gravitational wave detectors. Seismic Newtonian noise, arising from gravity gradients created by seismic waves, will become the limiting noise source at low frequencies for second generation gravitational wave detectors. Low frequency noise suppression will especially enhance gravitational wave detection from heavier coalescing astrophysical systems. In this part, we also present results from a series of seismic weight-drop experiments performed at the gravitational wave test detector site in Western Australia.
This analysis is one of many seismic experiments that can help characterize the detector location for better seismic isolation and noise suppression.

Item Ergodic Properties of Generalized Billiards (December 2023) Sechkin, Georgii; Arnold, Maxim; Zhu, Hejun; Dragovic, Vladimir; Ohsawa, Tomoki; Coskunuzer, Baris
We consider generalizations of the classical billiard map via different reflection laws. We study several dynamical properties of the resulting dynamical systems, with emphasis on their ergodic properties.

Item Extraction of Seismic Properties and Models From, and Full Waveform Inversion of, Dispersed Seismic Waves (2022-12-01) Ren, Li; Zhu, Hejun; Ferraris, John P.; McMechan, George A.; Ferguson, John F.; Stern, Robert J.
Surface waves, which propagate along boundaries between two different media, play an important role in resolving geological structures at scales ranging from global seismology and exploration seismology to geotechnical engineering and nondestructive testing. Over the past half century or so, different methods have been explored to process and invert surface waves for underground model properties, especially shear wave velocity. However, many problems remain to be solved. Conventional dispersion curve inversion (DCI) is limited by its 1-D model assumption and has increased uncertainty when the structure is complicated. It also requires picking dispersion curves from field data, which is often a labor intensive process. Although methods in the framework of full waveform inversion of surface waves yield models with good resolution, both laterally and vertically, when carefully implemented and applied, they are computationally intensive and can easily suffer from the cycle-skipping problem.
Wave-equation based dispersion curve inversion methods combine some of the advantages of conventional dispersion curve inversion and full waveform inversion, but still require picking dispersion curves from both field and synthetic data. This dissertation partially addresses some of the above issues and points to more work that can be done in the future. To automate the picking of dispersion curves from surface waves, which is required by many approaches to shallow-subsurface characterization, my first project presents a convolutional neural network (CNN) based machine learning approach to automatically pick the curves for the fundamental and higher modes along the two azimuths of any 2D seismic profile. Various attributes such as amplitudes, coherency, and local phase velocity, as well as the frequency and wavenumber of dispersion curves, are derived; different subsets of these are tested in the CNN training process to assess the best combinations. We use a U-net architecture that is modified to convert the conventional 2D image segmentation problem in the (f, k) domain into direct multi-mode curve fitting and a subsequent picking process. To make the automatic picking algorithm more practical, we (1) introduce a second loss function that combines conventional wavenumber residuals and curve slope residuals; (2) use a transfer learning strategy, in which the network is pre-trained with synthetic data and then with a relatively small portion of the field data, to improve the efficiency of the algorithm; (3) evaluate two categories of uncertainty: the epistemic uncertainty from the method itself and the input data, and the uncertainty from non-deterministic factors such as random initialization of model weights, random shuffling of training samples in the CNN, and GPU parallelism.
The epistemic uncertainty is an important indicator of picking quality and can be used to weight data in subsequent inversion; and (4) perform post-processing to determine the effective dispersive frequency range of the picked curves using different criteria, such as long/short moving average ratios (MAR) of squared picked wavenumbers, posterior uncertainty, etc. The effectiveness of the automatic picking process is demonstrated through applications to a field ocean-bottom node (OBN) dataset in which different modes of Scholte waves were recorded. To reduce cycle skipping in FWI and to increase the resolution of the estimated model, my second project develops and illustrates concurrent elastic full-waveform inversion (FWI) of P and S body waves and Rayleigh waves using interleaved envelope- and waveform-based misfit functions, in a gradually-increasing-frequency, multi-scale inversion strategy, to estimate both vertical and lateral variations of models, which breaks the 1D assumption of conventional DCI. Computing correlation coefficients between the observed and predicted data, and between the inverted and correct models, provides quantitative measures of the composite contributions of the starting model, the chosen data flow, and the depth extent of the solution space to the fits of the corresponding solutions. Treating the whole wavefield as a single data set means that it is not necessary to separate, or even to identify, different types of body and surface waves.

Item Full Wavefield Reconstruction and Full Waveform Inversion (2020-11-16) Wu, Yulang; Zhu, Hejun
Two-way reverse time extrapolation of the recorded seismic data is the essential step in seismic reverse time migration (RTM). Various RTM algorithms have been developed to produce accurate image locations rather than correct amplitude information, because of inadequate compensation of attenuation, dispersion, and transmission losses.
Thus, there is a need to evaluate the requirements, and determine the theoretical feasibility, of true amplitude recovery of recorded seismic data. We apply analytic Zoeppritz equations and numerical elastodynamic finite differencing to evaluate the validity and requirements of true amplitude recovery for incident plane and spherical waves, respectively. We demonstrate an application of true amplitude recovery that removes, from reconstructed incident wavefields, downgoing reflections produced at interfaces shallower than a horizontal well. Migration performance also relies on the accuracy of the velocity model, which determines both the kinematics (image locations) and the amplitude information. Conventional full waveform inversion (FWI) is a least-squares optimization method that updates a seismic velocity model by iteratively fitting the calculated data to the observed data. We propose three FWI algorithms that update a seismic velocity model in different ways: parametric convolutional neural-network-domain FWI (CNN-domain FWI), source-domain FWI, and CNN-boosted FWI. Parametric CNN-domain FWI implements a CNN within full waveform inversion to automatically capture the salient features (e.g., salt bodies) in a given initial training velocity model, and then concentrates the velocity inversion on the captured features. Source-domain FWI updates the velocity model by minimizing a source-domain misfit function, which contains the least-squares virtual source artifacts. Virtual source artifacts are created by replacing the propagating source wavefield with the forward-time observed data at the receiver positions, as a data-residual constraint. Like conventional FWI, source-domain FWI can be implemented in either the frequency or the time domain, unlike previous source-domain solutions, which must solve the normal equations in the time domain only.
CNN-boosted FWI applies a CNN as a weak learner at each iteration to approximate the model residuals by minimizing the data residuals. In addition to finding the optimal step length, as gradient-descent FWI does, CNN-boosted FWI fixes this optimal step length and optimizes the CNN, which is initially trained to approximate the negative gradients at each iteration, to update the velocity model. CNN-boosted FWI inverts for the velocity model with lower model and data errors than gradient-descent FWI does.

Item Full-Waveform Inversion Using Seislet Regularization (Society of Exploration Geophysicists, 2017-06-21) Xue, Z.; Zhu, Hejun; Fomel, S.
Because of inaccurate, incomplete, and inconsistent waveform records, full-waveform inversion (FWI) in the framework of a local optimization approach may not have a unique solution, and thus it remains an ill-posed inverse problem. To improve the robustness of FWI, we have developed a new model regularization approach that enforces the sparsity of solutions in the seislet domain. The construction of seislet basis functions requires structural information that can be estimated iteratively from migration images. We implement FWI with seislet regularization using nonlinear shaping regularization and impose sparseness by applying soft thresholding to the updated model in the seislet domain at each iteration of the data-fitting process. The main extra computational cost of the method relative to standard FWI is the cost of applying forward and inverse seislet transforms at each iteration. This cost is almost negligible compared with the cost of solving wave equations. Numerical tests using the synthetic Marmousi model demonstrate that seislet regularization can greatly improve the robustness of FWI by recovering high-resolution velocity models, particularly in the presence of strong crosstalk artifacts from simultaneous sources or strong random noise in the data. © The Authors.
Published by the Society of Exploration Geophysicists.

Item High-resolution Time-lapse Seismic Velocity Estimation Using Improved Envelope Full Waveform Inversion (December 2023) Xiong, Kai; Zhang, Michael Qiwei; Lumley, David; Minkoff, Susan E.; Zhu, Hejun; Sickmann, Zachary; Brikowski, Thomas H.
An inverse problem is a general framework used to convert observed measurements into information about a physical object or system. Seismic exploration serves as a typical example of an inverse problem, utilizing seismic energy to probe beneath the Earth's surface, often aiding the search for valuable deposits of oil, gas, or minerals. Man-made seismic waves are generated at source stations, radiating outward as three-dimensional waves that travel downward into the Earth. Upon encountering changes in the Earth's geological layering, they produce echoes that travel back up to the surface and are captured by geophones or hydrophones. These recorded seismic waves can be used to infer physical properties such as P-wave velocity (Vp) and subsurface density. Specifically, Full Waveform Inversion (FWI) is a high-resolution seismic inversion method based on fitting the entire content of seismic waveforms with an optimization objective function to extract physical parameters of the medium sampled by seismic waves. Furthermore, 4D seismic data (seismic data acquired repeatedly at multiple times) can be used to invert for changes in seismic response during the production phase. Likewise, time-lapse FWI using the entire 4D seismic waveform has the potential to reveal high-resolution velocity changes related to reservoir production. However, FWI is a highly nonlinear inverse problem that suffers from the well-known cycle-skipping issue and depends heavily on low frequencies, long-offset data, a good initial velocity model, and accurate source wavelet estimation.
Envelope inversion (EI) was proposed to help overcome the cycle-skipping issue and recover the long-wavelength background model by fitting the envelope of the seismic waveforms rather than the waveforms themselves. Nevertheless, issues such as instability of the adjoint source and gradient term make it challenging to use in practice for many seismology applications. In this study, we first examine EI methods with different envelope function exponent p-values from an adjoint source analysis perspective, and reveal the adverse effects of the adjoint source weighting term on the data residuals, which degrade or mute important inversion gradient contributions, especially from reflected or scattered phases in the seismic data. We develop the theory and implementation for an improved envelope inversion (IEI) method by adding an upshift constant (a zero-frequency DC bias term) to the seismic waveform data before calculating the envelope functions, which helps solve the instability issue of the EI adjoint source with power value p = 1. We test the performance of all three methods in detail using the highly realistic SEAM4D model and data, and show the advantages of our new IEI method compared to conventional EI and FWI methods. In addition, we implement IEI on marine seismic data and find that the amplitude mismatch between the modeled and observed data is a critical factor that prevents the successful application of IEI to real data. To better match amplitudes, we introduce an amplitude normalization strategy using the amplitude of the head waves as the normalization reference. We demonstrate the successful application of IEI on a 2D marine seismic data example. We estimate a reasonable velocity model with IEI and a high-resolution velocity model with IEI + FWI using data bandpassed to a maximum frequency content of 24 Hz.
In addition, we apply the traditional EI methods with different power values on the envelope and discuss their limitations on the real data example. Furthermore, we implement inversions using different initial velocity models to show the ability of IEI to overcome the cycle-skipping issue and recover low-wavenumber velocity components compared to FWI, including a low-velocity reservoir zone. Finally, time-lapse FWI can introduce inversion artifacts that might mask the true time-lapse signature within the reservoir zone, especially in inversions of real data. We implement sequential bootstrap time-lapse FWI on time-lapse marine seismic data from North Australia and observe noticeable inversion artifacts. To better understand these artifacts, we implement time-lapse FWI using seismic data with different frequency bands and observe the distribution of the potential true time-lapse signature and the inversion artifacts. In the inverted Vp change models obtained from data with different frequency bands, the distribution of inversion artifacts is inconsistent, while the inverted Vp change at the reservoir remains consistent. To understand these observations, we analyze the origin of these time-lapse artifacts from a theoretical perspective and find that a portion of them may arise from the inversion null space and from differences in data residuals between the baseline and monitoring inversions. Building on these insights, we develop a novel time-lapse FWI method to suppress these inversion artifacts. We use the energy of the Vp change model inverted from data with a lower dominant frequency band as a gradient-weighting term for time-lapse FWI using data with a higher dominant frequency band.
Testing the novel time-lapse FWI method on marine data demonstrates its capability to effectively suppress inversion artifacts.

Item Locating and Monitoring Microseismicity, Hydraulic Fracture and Earthquake Rupture Using Elastic Time-Reversal Imaging (Oxford Univ Press on behalf of The Royal Astronomical Society, 2019-01) Yang, Jidong; Zhu, Hejun
Locating and monitoring passive seismic sources provides important information for studying subsurface rock deformation, injected fluid migration, regional stress conditions, and fault rupture mechanisms. In this paper, we present a novel passive-source monitoring approach using vector-based elastic time-reversal imaging. By solving the elastic wave equation using observed multicomponent records as boundary conditions, we first compute back-propagated elastic wavefields in the subsurface. Then, we separate the extrapolated wavefields into compressional (P-wave) and shear (S-wave) modes using the vector Helmholtz decomposition. A zero-lag cross-correlation imaging condition is applied to the separated pure-mode vector wavefields to produce passive-source images. We compare imaging results from three implementations: dot-product, energy, and power. Numerical experiments demonstrate that the power imaging condition gives the highest resolution and is less sensitive to the presence of random noise. To capture the propagation of microseismic fractures and earthquake rupture, we modify the traditional zero-lag cross-correlation imaging condition by summing the multiplication of the separated P and S wavefields within local time windows, which enables us to capture the temporal and spatial evolution of earthquake rupture.
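The zero-lag cross-correlation imaging condition described above, I(x) = sum over t of P(x, t) * S(x, t), can be sketched on toy wavefields, with random noise plus a coherent pulse standing in for back-propagated P and S modes that focus at the source (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
nx, nt = 64, 200

# Toy separated back-propagated wavefields P(x, t) and S(x, t):
# background noise plus a coherent pulse that focuses at x = 20, t = 100,
# mimicking both modes converging on the source location simultaneously.
P = 0.01 * rng.standard_normal((nx, nt))
S = 0.01 * rng.standard_normal((nx, nt))
P[20, 100] += 1.0
S[20, 100] += 1.0

# Zero-lag cross-correlation imaging condition: I(x) = sum_t P(x, t) * S(x, t).
image = np.sum(P * S, axis=1)

# The image peaks where both wave modes focus, i.e. at the source.
src_loc = int(np.argmax(image))
```

Because uncorrelated noise averages toward zero in the time sum while the co-located P and S pulses multiply constructively, the source stands out sharply in the image.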
2-D and 3-D numerical examples demonstrate that the proposed method is capable of accurately locating point sources, as well as delineating the dynamic propagation of hydraulic fractures and earthquake rupture.

Item Micro-earthquake Source Characterization With Full-wavefield Imaging, Uncertainty Analysis, and Deep-learning (2022-12-01) Duan, Chenglong; Lumley, David; Li, Qiwei; Ferguson, John F.; Zhu, Hejun; Pirouz, Mortaza
Microseismic events are very weak earthquakes that occur at very small spatial scales, either naturally or induced by man-made changes to the in-situ stress conditions of the earth's interior. This induced seismicity can be used to monitor fluid and pressure fronts, characterize reservoirs (oil and gas, geothermal resources, CO2 storage, etc.), and help optimize production. However, if increasing fluid pressure pushes a fault or fracture closer to failure, it can trigger larger and sometimes felt earthquakes. In this sense, correctly locating micro-earthquake events in or near the reservoir helps improve reservoir characterization and production recovery, and locating events near faults, especially in basement rocks, can help avoid triggering large felt earthquakes that may cause damage to infrastructure or people. In practice, observed arrival times of P and S wave phases are often used to locate micro-earthquake sources and estimate the relative source origin times. For low signal-to-noise ratio microseismic data, or complex wave phenomena, phase picking can be inaccurate and ray-based methods may fail to focus seismic energy at the correct source location. How to reduce source location uncertainty and enhance resolution for a given location method, while incorporating as much of the information in the recorded waveforms as possible, remains an important research topic.
In addition, extracting useful information from continuously-recorded microseismic data and using these micro-earthquake events to characterize subsurface fracture growth helps us understand the mechanisms of induced seismicity. In this study, instead of using acoustic data or direct P wave arrivals only, I use elastic multicomponent data and present a new method that uses the full P and S adjoint wavefields to image micro-earthquake source locations. I separate the P and S waves in the data, and extrapolate the P and S wavefields of each receiver subarray by solving the P and S adjoint wave equations in parallel. I formulate three source imaging conditions by multiplying, over subarrays, the adjoint P wavefield, the adjoint S wavefield, and the cross-correlated P and S wavefields. I perform numerical experiments on the highly realistic SEG SEAM4D reservoir model using surface acquisition array geometries. I discuss the impacts of S-wave attenuation and frequency bandwidth on the source location images. I perform noise tests to mimic surface monitoring data contaminated by ambient noise. I test the source imaging results using smoothed velocity models and provide a 90% confidence ellipse of the source location under Gaussian-distributed velocity model errors. Furthermore, I have developed a P- and PS-wavefront imaging method to estimate the source origin time sequentially, which resolves the unknown source origin time in Kirchhoff-type imaging and helps reduce the location uncertainties originating from it. I also develop a 3D synthetic example by finite-difference modeling of realistic elastic 3C data using the SEG SEAM4D earth model, and jointly compare the reservoir and basement micro-earthquake source location uncertainties based on traveltime inversion, wavefront imaging, and full wavefield imaging methods. For each method, we also compare the source location uncertainties of the P- and PS-wave results.
We compare the source location uncertainties from data noise and velocity scaling errors, as well as the computational costs of the different imaging methods. Finally, since many observed induced seismicity events have characteristics similar to natural earthquakes dominated by shear slip across a fault plane, it is worth seeking evidence of whether a certain type of induced seismicity involves direct interaction between the injected fluid and the fracture system through which it flows. Based on a continuously recorded real data set from a fluid injection project in Texas, I use a 7-layer U-Net autoencoder neural network (UANN) to observe and classify two distinct types of long-duration microseismic events: frequency-drop long-duration (FDLD) events and low-frequency long-duration (LFLD) events. I perform Gaussian mixture model clustering to label the event signals based on the latent feature vectors from the UANN. I discuss the potential causes and applications of LFLD events using proppant injection histories, cumulative seismic moments, and the spatiotemporal evolution of microseismic event locations.Item Radial Anisotropy of the North American Upper Mantle Based on Adjoint Tomography with USArray(Oxford Univ Press, 2017-07-31) Zhu, Hejun; Komatitsch, Dimitri; Tromp, Jeroen; 0000-0003-3556-8096 (Zhu, H); Zhu, HejunWe use seismic data from USArray to image the upper mantle underneath the United States based on a so-called 'adjoint tomography', an iterative full waveform inversion technique. The inversion uses data from 180 regional earthquakes recorded by 4516 seismographic stations, resulting in 586,185 frequency-dependent measurements. Three-component short-period body waves and long-period surface waves are combined to simultaneously constrain deep and shallow structures. The transversely isotropic model US₂₂ is the result of 22 pre-conditioned conjugate-gradient iterations.
Approximate Hessian maps and point-spread function tests demonstrate good illumination of the study region and limited trade-offs among different model parameters. We observe a distinct wave-speed contrast between the stable eastern US and the tectonically active western US. This boundary is well correlated with the Rocky Mountain Front. Stable cratonic regions are characterized by fast anomalies down to 250-300 km, reflecting the thickness of the North American lithosphere. Several fast anomalies are observed beneath the North American lithosphere, suggesting the possibility of lithospheric delamination. Slow wave-speed channels are imaged beneath the lithosphere, which might indicate weak asthenosphere. Beneath the mantle transition zone of the central US, an elongated north-south fast anomaly is observed, which might be the ancient subducted Farallon slab. The tectonically active western US is dominated by prominent slow anomalies with magnitudes greater than -6 per cent down to approximately 250 km. No continuous lower to upper mantle upwellings are observed beneath Yellowstone. In addition, our results confirm previously observed differences between oceans and continents in the anisotropic parameter ξ = (βₕ/βᵥ)². A slow wave-speed channel with ξ > 1 is imaged beneath the eastern Pacific at depths from 100 to 200 km, reflecting horizontal shear within the asthenosphere. Underneath continental areas, regions with ξ > 1 are imaged at shallower depths around 100 km. They are characterized by fast shear wave speeds, suggesting different origins of anisotropy underneath oceans and continents. The wave speed and anisotropic signatures of the western Atlantic are similar to continental areas in comparison with the eastern Pacific. Furthermore, we observe regions with ξ < 1 beneath the tectonically active western US at depths between 300 and 400 km, which might reflect vertical flows induced by subduction of the Farallon and Juan de Fuca Plates.
Comparing US₂₂ with several previous tomographic models, we observe relatively good correlations for long-wavelength features. However, there are still large discrepancies for small-scale features.Item Reservoir Characterization of Non Gaussian Field Using Combined Ensemble Based Method(2020-08) Folarin, Samson Babatunde; Ramakrishna, Viswanath; Zhu, HejunProper reservoir characterization has been a decades-long challenge for both geoscientists and reservoir engineers because of complex subsurface structures. The ensemble Kalman filter has been studied rigorously as a promising method for quantifying reservoir uncertainty. Because of its computational efficiency and easy implementation on any simulator, petroleum engineers recommended the ensemble Kalman filter (EnKF) as a useful history-matching technique. However, it can fail because of the Gaussian assumption and the inconsistency between the updated static and dynamic parameters, which violates the field’s material balance. Wang et al. (2012) introduced an improved method called the half iterative ensemble Kalman filter (HIEnKF) to overcome the shortcomings of the EnKF. The new method (HIEnKF) appears to be promising in the field of history matching but has its drawbacks. Wheeler et al. (2013) introduced the ensemble smoother method to overcome the computational cost incurred by HIEnKF. The challenges of these existing methods motivated us to propose new techniques that can give promising results in petroleum engineering. We designed our first method by combining the advantages of the half iterative EnKF and the ensemble smoother. The combined half iterative EnKF and ensemble smoother (CoHIEnKFS) characterizes a Gaussian field better than the existing methods. Previous work has shown poor characterization of reservoirs with non-Gaussian permeability distributions, which fail to satisfy the HIEnKF Gaussian assumption. We apply a normal-score transformation within CoHIEnKFS to meet this assumption.
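For context, the analysis step of the standard stochastic (perturbed-observation) EnKF that these variants build on can be sketched as follows; the toy state size, ensemble size, observation operator, and noise levels are all hypothetical, and this is not the CoHIEnKFS algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: 3 state variables, 40 members, 1 observation.
n, ne = 3, 40
X = rng.normal(1.0, 0.5, size=(n, ne))   # forecast ensemble
H = np.array([[1.0, 0.0, 0.0]])          # observe the first state variable
d = np.array([2.0])                      # observed datum
r = 0.01                                 # observation error variance

# Ensemble anomalies and the Kalman gain K = C_xh (C_hh + R)^-1,
# with the covariances estimated from the ensemble itself.
A = X - X.mean(axis=1, keepdims=True)
HA = H @ A
K = (A @ HA.T / (ne - 1)) @ np.linalg.inv(HA @ HA.T / (ne - 1) + r * np.eye(1))

# Perturbed-observation update: each member sees a noisy copy of the datum.
D = d[:, None] + np.sqrt(r) * rng.standard_normal((1, ne))
Xa = X + K @ (D - H @ X)
```

Because the gain is built from ensemble-estimated covariances, the update pulls the observed component of each member toward the datum while propagating the correction to correlated state variables.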
The new method (normal-score combined half iterative EnKF and ES) produces good results in petroleum history matching.Item Seismic Data Reconstruction With Low-rank Tensor Optimization(2022-05-01T05:00:00.000Z) Popa, Jonathan; Minkoff, Susan E.; Lou, Yifei; Meloni, Gabriele; Zweck, John; Zhu, HejunSeismic data recorded in the field often has gaps due to missing or failed receivers or aperture restrictions and may be contaminated by external noise. Reconstruction is the process of completing missing data and removing noise. Multi-dimensional seismic data, for example in 3, 4, or 5D, can be efficiently stored in a tensor or multi-dimensional array. Low-rank tensor optimization is a model used to reconstruct tensor data under the assumption that the underlying data has low rank. Data has low rank when it has redundant rows or columns, causing the singular values to decay at a rapid rate. Because minimizing rank is NP-hard, a relaxation of rank can be used, such as the tensor nuclear norm (TNN), derived from the tensor singular value decomposition (tSVD). The alternating direction method of multipliers (ADMM) effectively solves the TNN model, in which the sum of singular values is minimized. ADMM splits the minimization problem into smaller subproblems. The combination of the ADMM method and the TNN model is referred to as TNN-ADMM and is useful for reconstruction of missing data. Exploiting the conjugate symmetry of the multi-dimensional Fourier transform (the most expensive part of the tSVD algorithm) reduces the runtime of the tSVD algorithm for real-valued order-p tensors by approximately 50%. The relation between the tSVD of a tensor and the SVD of a corresponding block-diagonal matrix reveals how the singular values of the tensor and matrix change as the orientation of the tensor changes, and provides evidence for the success of the most-square orientation when used for low-rank data reconstruction.
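The tSVD machinery behind the TNN can be illustrated by its proximal operator: singular-value soft-thresholding of every frontal slice in the Fourier domain. This is a minimal numpy sketch under the Kilmer-Martin tSVD construction; `tnn_prox` and the toy tensor are hypothetical names, and no conjugate-symmetry speedup is applied:

```python
import numpy as np

def tnn_prox(T, tau):
    """Proximal operator of the tensor nuclear norm: FFT along the third
    mode, soft-threshold the singular values of each frontal slice in the
    Fourier domain, then inverse FFT back."""
    F = np.fft.fft(T, axis=2)
    out = np.empty_like(F)
    for k in range(T.shape[2]):
        U, s, Vh = np.linalg.svd(F[:, :, k], full_matrices=False)
        out[:, :, k] = (U * np.maximum(s - tau, 0.0)) @ Vh
    return np.real(np.fft.ifft(out, axis=2))

# A tubal-rank-1 tensor (identical rank-1 frontal slices) plus noise:
# thresholding the Fourier-domain singular values suppresses the noise.
rng = np.random.default_rng(1)
a, b = rng.standard_normal(8), rng.standard_normal(8)
T = np.repeat(np.outer(a, b)[:, :, None], 4, axis=2)
noisy = T + 0.1 * rng.standard_normal(T.shape)
denoised = tnn_prox(noisy, tau=1.0)
```

In a full TNN-ADMM solver this operator would be one of the per-iteration subproblems, alternating with a data-consistency step on the observed entries.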
For seismic data, the most-square orientation has frontal faces formed over the spatial dimensions, so the tensor contains more redundancies than pairing a spatial dimension with time. On real data reconstruction examples, TNN-ADMM outperforms two other data completion methods, projection onto convex sets and multi-channel singular spectrum analysis (MSSA), with less error and 10-1000× faster runtime. In exploration seismology, an initial baseline survey informs decisions about producing a region, and monitor surveys conducted during production provide updated subsurface information. The time-lapse difference between the baseline and monitor surveys reveals changes in the Earth due to production. Non-repeatability issues, such as inconsistent receiver locations, negatively impact one’s ability to accurately identify time-lapse changes. If receiver locations are regularized onto a common grid, then the baseline and monitor surveys can be compared. The resulting tensors are incomplete due to the potential for grid blocks to not contain receivers, so we apply TNN-ADMM to each data tensor to successfully fill in missing data, and a time-lapse difference can be computed between the reconstructed tensors. The unconstrained formulation of TNN-ADMM simultaneously completes and denoises data. The convergence analysis of this method proves that the iterative solution converges to a local minimum, provided that the step size parameter is greater than one. The convergence proof requires new properties of the Frobenius norm for tensors, as well as Hadamard (entry-wise) product properties. On a synthetic problem, unconstrained TNN-ADMM outperforms MSSA with 18%-27% less error and 10× faster runtime.Item Seismic Modeling, Imaging and Inversion in Viscoacoustic Media(2020-05) Yang, Jidong; 0000-0002-6652-5366 (Yang, J); Zhu, HejunDuring wave propagation, seismic energy is dissipated by geometrical spreading, heterogeneity scattering and lattice internal friction.
The energy decay related to internal friction is known as intrinsic attenuation, which reflects the viscous (anelastic) properties of subsurface minerals and rocks. In particular, gas saturation gives rise to strong intrinsic attenuation. Incorporating attenuation into seismic modeling, imaging and inversion enables accurate detection of hydrocarbon reservoirs and characterization of fluid properties. To date, a number of wave equations have been developed to describe intrinsic attenuation effects. For example, in the frequency domain, the viscous behavior can be described using a complex-valued velocity. It explicitly incorporates the quality factor (Q) into the wave equation and is therefore easy to use to compensate for attenuation effects in reverse-time migration (RTM) and full-waveform inversion (FWI). However, solving the Helmholtz equation requires large computer memory, which is still challenging for large-scale 3D models. On the other hand, attenuation can be incorporated in the time-domain wave equation based upon the standard linear solid (SLS) theory. Since the dispersion and dissipation are coupled in the SLS, and the quality factor Q has to be transformed to stress and strain relaxation times, it is difficult to use in seismic imaging and inversion. Another popular time-domain wave equation is formulated based on constant-Q theory. Although this approach has the advantage that the dispersion and dissipation terms are decoupled, it is necessary to calculate a mixed-domain operator using complicated numerical solvers, such as the low-rank approximation. In this study, starting from the frequency-domain viscoacoustic wave equation, I first use a second-order polynomial to approximate the dispersion term, followed by a pseudo-differential operator to approximate the dissipation term.
These two approximations make it possible to transform the frequency-domain equation into the time domain, and to derive a new complex-valued viscoacoustic wave equation. The advantages of the new wave equation include: (1) the dispersion and dissipation effects are naturally separated, which can be used to compensate for amplitude loss in seismic migration by reversing the sign of the dissipation term; (2) Q is explicitly incorporated into the wave equation, which makes it easy to directly derive the misfit gradient with respect to Q and estimate subsurface attenuation models using Q-FWI; and (3) the new viscoacoustic wave equation can be numerically solved using finite-difference time marching and a Fourier transform, which does not require the mixed-domain solvers of the constant-Q method, and has lower memory cost than the frequency-domain approach. Based on the new complex-valued wave equation, I develop a viscoacoustic RTM workflow to correct the attenuation-associated dispersion and dissipation effects. A time-reversed wave equation is derived to extrapolate receiver-side wavefields, in which the sign of the dissipation term is reversed while the dispersion term remains unchanged. In wavefield extrapolation, both source and receiver wavefields are complex-valued and their real and imaginary parts are related by the Hilbert transform. This analytic property helps to explicitly decompose up- and down-going waves. Then, a causal imaging condition, which cross-correlates the down-going source-side wavefield and the up-going receiver-side wavefield, is utilized to suppress low-wavenumber artifacts in migrated images. Furthermore, with limited recording apertures, finite-frequency source functions, and irregular subsurface illumination, viscoacoustic RTM is still insufficient to produce satisfactory reflectivity images with high resolution and amplitude fidelity.
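The sign-reversal idea can be seen in the simplest 1-D constant-Q setting, where each frequency component decays as exp(-πft/Q) and compensation multiplies by the reciprocal factor. This toy sketch ignores dispersion and the pseudo-differential operator of the actual method, and all values are hypothetical:

```python
import math

def constant_q(spectrum, freqs, t, Q, sign=-1):
    """Scale each frequency component by exp(sign * pi * f * t / Q):
    sign=-1 models dissipation, sign=+1 compensates it (dispersion ignored)."""
    return [A * math.exp(sign * math.pi * f * t / Q) for A, f in zip(spectrum, freqs)]

freqs = [10.0, 20.0, 30.0]  # Hz (hypothetical)
spec = [1.0, 1.0, 1.0]      # flat source spectrum (hypothetical)
attenuated = constant_q(spec, freqs, t=1.0, Q=50.0)                # forward leg
recovered = constant_q(attenuated, freqs, t=1.0, Q=50.0, sign=+1)  # sign reversed
```

Higher frequencies lose more amplitude over the same propagation time, which is why attenuation lowers image resolution and why the reversed-sign extrapolation restores it (in practice with stabilization, since the exponential growth also amplifies noise).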
By incorporating the complex-valued wave equation into a linear waveform inversion scheme, I develop viscoacoustic least-squares reverse-time migration (LSRTM). Based on the Born approximation, I first linearize the wave equation and derive a viscoacoustic demigration operator. Then, using the Lagrange multiplier method, I derive the adjoint viscoacoustic wave equation and the corresponding sensitivity kernels. With the forward and adjoint operators, a linear inverse problem is formulated to estimate the subsurface reflectivity model. A total-variation regularization is introduced to enhance the robustness of the proposed viscoacoustic LSRTM, and a diagonal Hessian is used as a preconditioner to accelerate convergence. Traditional waveform inversion for attenuation is commonly based on the SLS wave equation, in which case the quality factor Q has to be converted to stress and strain relaxation times. When using multiple attenuation mechanisms in the SLS method, it is difficult to directly estimate these relaxation time parameters. Based on the new time-domain complex-valued viscoacoustic wave equation, I present an FWI framework for simultaneously estimating subsurface P-wave velocity and attenuation distributions. Since Q is explicitly incorporated into the wave equation, I directly derive sensitivity kernels for P-wave velocity and attenuation using the adjoint-state method, and simultaneously estimate their distributions. By analyzing the Gauss-Newton Hessian, I observe strong inter-parameter crosstalk artifacts, especially leakage from velocity to Q.
I approximate the Hessian inverse using a preconditioned L-BFGS method in FWI, which significantly reduces inter-parameter crosstalk and produces accurate velocity and attenuation models.Item Seismic Rock Physics Analysis of Organic-rich Shales Using Statistical and Machine Learning Methods(2022-08-01T05:00:00.000Z) Lee, Jaewook; Shi, Xiaoyan; Lumley, David; Zhu, Hejun; Pujana, Ignacio; Pirouz, MortazaShale is the most common sedimentary rock, accounting for approximately 70 percent of the rocks in the Earth's crust. In a conventional petroleum system, organic-rich shales have been regarded as source rocks generating hydrocarbon resources. Over the past decade, these shale rocks have become both the source rocks and unconventional reservoirs because of the combination of horizontal drilling and hydraulic fracturing. Although drilling and development technologies have been advancing rapidly, there is a lack of exploration involved with many shale resource operations. The current 'drilling on a grid' strategy causes inefficient hydrocarbon production and unnecessary expense because many shale reservoirs are more heterogeneous and complex than conventional sandstone reservoirs. To identify shale reservoir production 'sweet spots' from seismic data to help optimize recovery, we need to develop more effective seismic reservoir characterization methods. We also have to reduce the impact of greenhouse gas emissions and associated climate change while developing unconventional oil and gas. For this reason, we need a better rock physics model (RPM) to describe shale reservoirs and understand their heterogeneity and anisotropy. However, in contrast to conventional sandstone reservoirs, many important shale reservoir rock physics properties are not well understood. Moreover, the connections between seismic, elastic, and shale rock properties are complex and need more research to improve quantitative inversion and interpretation of seismic data.
There are two main geophysical problems for better shale characterization: (1) building a proper seismic RPM of organic-rich shales, and (2) estimating accurate shale-specific reservoir properties from seismic data. To help integrate these multi-disciplinary concepts, this dissertation proposes three topics as part of my PhD research: (1) improved shale RPM methods to estimate the total organic carbon (TOC) and mineralogical brittleness index (MBI) using statistical and machine learning methods, (2) seismic amplitude variation with offset (AVO) forward modeling and AVO attribute analysis to model accurate synthetic seismic data and predict important shale properties such as TOC and clay volume, and (3) enhanced seismic anisotropy characterization methods to estimate shale reservoir properties from anisotropic seismic data. Successful estimation of shale rock properties from seismic properties may allow us to better characterize organic-rich shales. These approaches can also help interpret the effect of variations in porosity, mineralogy, and organic carbon concentration on changes in seismic properties. In conclusion, these methods can potentially improve shale reservoir characterization and time-lapse monitoring for shale hydrocarbon production. They can also help to provide seismic rock physics frameworks for accurate interpretation and monitoring of physical property changes, for example during CO2 enhanced oil recovery (EOR), carbon capture and storage (CCS), and geothermal energy production.Item Seismogram Registration via Markov Chain Monte Carlo Optimization and Its Applications in Full Waveform Inversion(Oxford Univ Press, 2018-11-05) Zhu, Hejun; 0000-0003-3556-8096 (Zhu, H); Zhu, HejunCycle skipping is a serious issue in full waveform inversion (FWI) since it leads to local minima.
To date, most FWI algorithms depend on local gradient-based optimization approaches, which cannot guarantee convergence towards the global minimum if the misfit function involves local minima and the starting model is far from the true solution. In this study, I propose a misfit function based on non-stationary time warping functions, which can be calculated by solving a seismogram registration problem. Considering the inherent cycle skipping and local minima issues of the registration problem, I use a Markov chain Monte Carlo (MCMC) method to solve it. With this global optimization approach, I am able to directly sample the global minimum and measure non-stationary traveltime differences between observed and predicted seismograms. An a priori constraint on the sparsity of the local warping functions is incorporated to eliminate unreasonable solutions. No window selection is required in this procedure. In comparison to other approaches for measuring traveltime differences, the proposed method enables us to align signals with different numbers of events. This property is a direct consequence of the use of MCMC optimization and sparsity constraints. Several numerical examples demonstrate that the proposed misfit function allows us to tackle the cycle skipping problem and construct accurate long-wavelength velocity models even without low-frequency data and good starting models.Item Time-Domain Least-Squares Migration Using the Gaussian Beam Summation Method(Oxford University Press) Yang, Jidong; Zhu, Hejun; McMechan, George A.; Yue, Yubo; 0000-0002-7452-075X (Zhu, H); 103911551 (McMechan, GA); Yang, Jidong; Zhu, Hejun; McMechan, George A.With a finite recording aperture, a limited source spectrum and unbalanced illumination, traditional imaging methods are insufficient to generate satisfactory depth profiles with high resolution and high amplitude fidelity.
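A toy scalar example illustrates why MCMC sampling can escape the local minima of a cycle-skipped, oscillatory misfit, as exploited in the registration approach above; the misfit function, temperature, and step size below are all hypothetical, not the actual registration objective:

```python
import math
import random

def misfit(s):
    """Hypothetical cycle-skipped misfit in a scalar time shift s:
    oscillatory, with many local minima and a global minimum at s = 2.0."""
    return 1.0 - math.cos(6.0 * (s - 2.0)) + 0.05 * (s - 2.0) ** 2

random.seed(7)
s = best = 0.0   # start in the basin of a local minimum
T = 0.5          # sampling "temperature"
for _ in range(20000):
    cand = s + random.gauss(0.0, 0.3)
    # Metropolis rule: always accept downhill moves, sometimes accept uphill
    # ones, which lets the chain hop over the barriers between local minima.
    if random.random() < math.exp((misfit(s) - misfit(cand)) / T):
        s = cand
        if misfit(s) < misfit(best):
            best = s
```

A gradient descent started at s = 0 would stall in the nearest local basin; the occasional uphill acceptances are what allow the sampler to reach the global minimum.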
This is because traditional migration uses the adjoint operator of the forward modelling rather than the inverse operator. We propose a least-squares migration approach based on the time-domain Gaussian beam summation, which helps to balance subsurface illumination and improve image resolution. Based on the Born approximation for the isotropic acoustic wave equation, we derive a linear time-domain Gaussian beam modelling operator, which significantly reduces computational costs in comparison with the spectral method. Then, we formulate the corresponding adjoint Gaussian beam migration as the gradient of an L2-norm waveform misfit function. An L1-norm regularization is introduced to the inversion to enhance the robustness of least-squares migration, and an approximated diagonal Hessian is used as a pre-conditioner to speed up convergence. Synthetic and field data examples demonstrate that the proposed approach improves imaging resolution and amplitude fidelity in comparison with traditional Gaussian beam migration. © The Author(s) 2018. Published by Oxford University Press on behalf of The Royal Astronomical Society.Item Using Seismic Ambient Noise to Monitor Environmental Changes(August 2023) Luo, Bingxu 1995-; Dabkowski, Mieczyslaw K.; Zhu, Hejun; Minkoff, Susan E.; Lumley, David; Chen, Feng; Brikowski, Thomas H.Over the past decades, the Earth has suffered from a variety of environmental hazards, such as global warming, floods, drought, ice sheet melting and sea level rise. Scientists would like to monitor, interpret and predict these hazards using different techniques. As a seismologist, I would like to generalize seismological techniques and apply them to explore these severe hazards. One of my research directions is to measure near-surface seismic velocity changes (dv/v) from ambient noise records, and then correlate them with independent environmental variables.
The first topic is to measure decadal dv/v by auto-correlating noise records at each seismographic station deployed in Greenland. Our results demonstrate that dv/v at most stations lags the surface ice mass change by less than 3 months. These various lag times may provide constraints on the thickness of the subglacial till layer over different regions in Greenland. Moreover, we also observe a change in the long-term trend of dv/v in southwest Greenland, which is consistent with the mass change rate during the “2012-2013 warm-cold transition”. The second topic is to explore the potential of using regionally averaged dv/v records to predict incoming flood events in Yellowstone National Park (YNP). Over the past ten years, we find that the annual peaks of dv/v variations always lead the anomalous summer water discharges by one to two months. In particular, there was a short-term high velocity perturbation around 62 days ahead of the 2022 historic flood in the YNP. We therefore suggest that the annual peaks of dv/v in spring are mainly driven by snow accumulation during winter, which might be the major contributor to the flood events in the YNP. This study demonstrates the potential of using seismic observables to predict incoming flood events one to two months in advance. The third topic is to use machine learning (convolutional neural networks) to build a complete aftershock catalog for the 2020 MW 6.5 Stanley, Idaho earthquake. This new catalog has over seven times more events and a 0.9 lower magnitude of completeness than the current USGS-NEIC catalog. The distribution and expansion of these aftershocks improve the resolution of two north-northwest-trending faults with different dip angles, providing further support for a central stepover region that changed the earthquake rupture trajectory and induced sustained seismicity.
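Velocity-change measurements of this kind are commonly made with the stretching technique: grid-search a stretch factor ε = dv/v that best aligns a current correlation trace with a reference. A minimal sketch on a hypothetical synthetic waveform (assuming numpy; `stretch_dvv` is an illustrative helper, not code from this dissertation):

```python
import numpy as np

def stretch_dvv(ref, cur, t, eps_grid):
    """Grid search over stretch factors: the reference resampled at t*(1+eps)
    is compared with the current trace; the eps giving the highest correlation
    coefficient is the dv/v estimate (a velocity increase shortens lapse times)."""
    best_eps, best_cc = 0.0, -1.0
    for eps in eps_grid:
        warped = np.interp(t * (1.0 + eps), t, ref)
        cc = np.corrcoef(warped, cur)[0, 1]
        if cc > best_cc:
            best_eps, best_cc = eps, cc
    return best_eps

# Hypothetical decaying coda waveform; a 1% velocity increase compresses
# the lapse-time axis of the "current" trace relative to the reference.
t = np.linspace(0.0, 10.0, 2001)
ref = np.sin(2.0 * np.pi * t) * np.exp(-0.2 * t)
true_dvv = 0.01
cur = np.sin(2.0 * np.pi * t * (1.0 + true_dvv)) * np.exp(-0.2 * t)
est = stretch_dvv(ref, cur, t, np.linspace(-0.02, 0.02, 81))
```

Because the stretch is applied to the whole lapse-time axis, late coda windows accumulate the largest time shifts, which is what makes coda-based dv/v measurements sensitive to sub-percent velocity changes.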