# Imaging

## Introduction

At SLIM, we develop algorithms for wave-equation-based linearized inversion, also known as least-squares migration. Key features include:

- high-fidelity, on-the-fly source estimation,
- the ability to use surface-related multiples,
- fast yet robust inversion via source encoding combined with curvelet-domain sparsity promotion, borrowing ideas from compressive sensing (CS) and approximate message passing (AMP).

## A general formulation of the least-squares migration

Least-squares migration treats the acquired seismic data as an approximately linear response of the subsurface medium perturbations with respect to a smooth background-velocity model. By inverting the so-called “de-migration” operator that models the linearized wave propagation (i.e., performing the least-squares migration), we obtain these perturbations, which form a seismic image. Compared with conventional reverse-time migration (RTM), the least-squares approach offers several benefits: (i) higher spatial resolution, as the source wavelets are deconvolved as part of the inversion procedure (Tu et al., 2013b); (ii) a reduced acquisition footprint, as the inversion procedure also accounts for the source illumination (Nemeth et al., 1999); (iii) the ability to handle surface-related multiples, which conventional RTM cannot do (Tu and Herrmann, 2014).

Least-squares migration with a given source wavelet follows a general formulation: $$$\label{eq:1} \min_{\delta\mathbf{m}} \sum_{i=1}^{n_f}\sum_{j=1}^{n_s}\ \|\mathbf{d}_{i,j}-\nabla\mathbf{F}[\mathbf{m}_0,q_i\mathbf{e}_j]\delta\mathbf{m}\|_2^2,$$$ where $$i$$ indexes the frequencies with $$n_f$$ the total number of frequencies, $$j$$ indexes the sources with $$n_s$$ the number of source experiments, $$\nabla\mathbf{F}$$ is the linearized Born scattering operator, $$\mathbf{m}_0$$ is the background model, $$q_i$$ is the frequency spectrum of the given source wavelet at the $$i$$-th frequency, $$\mathbf{e}_j$$ is the $$j$$-th column of the identity matrix, which physically corresponds to the $$j$$-th sequential source excitation, and $$\delta\mathbf{m}$$ is the model perturbation.
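As a toy illustration of equation 1, the sketch below replaces the Born scattering operator $$\nabla\mathbf{F}$$ with a random matrix and compares the least-squares solution with a single adjoint application (the RTM analogue). All names and sizes here are made up for illustration; they are not part of any real modelling code.

```python
import numpy as np

# Toy version of the least-squares migration objective (eq. 1): a random
# matrix stands in for the Born scattering operator; sizes are illustrative.
rng = np.random.default_rng(0)
n_model, n_data = 50, 200
born = rng.standard_normal((n_data, n_model))   # stand-in for ∇F[m0, q_i e_j]

dm_true = np.zeros(n_model)
dm_true[[10, 25, 40]] = [1.0, -0.5, 0.8]        # sparse "reflectors"
d = born @ dm_true                               # noise-free linearized data

# Least-squares migration: min_dm ||d - born @ dm||_2^2
dm_ls = np.linalg.lstsq(born, d, rcond=None)[0]

# RTM analogue: a single application of the adjoint operator
dm_rtm = born.T @ d

print(np.linalg.norm(dm_ls - dm_true))  # essentially zero: consistent system
```

On a real problem the operator is never formed as a matrix; the inversion runs matrix-free, with wave-equation solves implementing $$\nabla\mathbf{F}$$ and its adjoint.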

There are at least three major challenges in solving this problem:

1. Excessive computational cost: each gradient update involves four PDE solves per monochromatic source experiment, and the total simulation cost grows with the number of frequencies and sources used in the inversion.

2. Coherent events in the data, such as surface-related multiples, can result in phantom reflectors. Used properly, however, multiples provide wider illumination coverage than primaries (Lu et al., 2011). We therefore want to use multiples in a way that maps them to the true subsurface reflectors and increases the illumination coverage of the sources.

3. An accurate source wavelet is usually unavailable in practice, and we have demonstrated that a wrong source wavelet can lead to erroneous results (Tu et al., 2013a). It is therefore desirable to invert for the source wavelet as part of the imaging procedure.

In the next sections, we explain how we address these challenges.

## Fast inversion with CS and AMP

As the simulation cost of solving wave equations is proportional to the number of monochromatic source experiments, we reduce this cost by forming a small number of randomized simultaneous sources and selecting a random subset of frequencies during the inversion. This speed-up comes at the expense of the source crosstalk introduced by the randomized source superposition. Following sparsity-promoting ideas from compressive sensing (CS) (E. J. Candès, Romberg, and Tao, 2006; Donoho, 2006), we remove these artifacts by solving the following optimization program: $$$\label{eq:2} \begin{aligned} &\min_{\mathbf{x}} \|\mathbf{x}\|_{1} \\ &\mathrm{subject\ to}\ \sum_{i=1}^{n_f^\prime}\sum_{j=1}^{n_s^\prime} \|\underline{\mathbf{d}}_{i,j}-\nabla\mathbf{F}[\mathbf{m}_0,q_i\mathbf{s}_j]\mathbf{C}^H\mathbf{x}\|_2^2 \leq\sigma. \end{aligned}$$$ The number of simultaneous sources is $$n_s^\prime$$ and the number of frequencies is $$n_f^\prime$$. The $$j$$-th simultaneous source, denoted by $$\mathbf{s}_j$$, is a randomized superposition of the sequential sources (Tu and Herrmann, 2014), and the corresponding simultaneous data are denoted by the underlined variable $$\underline{\mathbf{d}}_{i,j}$$. The tolerance $$\sigma$$ accounts for noise in the data and errors in the linearized modelling. Matrix $$\mathbf{C}^H$$ is the curvelet synthesis operator (E. J. Candès, Demanet, et al., 2006). We use the curvelet transform because curvelets are multiscale and multidirectional, and can sparsely represent the piecewise-smooth model perturbations. One difficulty in solving equation $$\ref{eq:2}$$ is that its objective function is non-differentiable.
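The dimensionality reduction itself is cheap to sketch. Below is a minimal example of forming simultaneous sources and simultaneous data, with made-up sizes and Gaussian weights as one possible choice of randomized superposition; none of the names correspond to actual codes.

```python
import numpy as np

# Randomized dimensionality reduction: n_s' simultaneous sources from n_s
# sequential ones, plus a random subset of n_f' out of n_f frequencies.
rng = np.random.default_rng(1)
n_s, n_f, n_rec = 128, 64, 400      # sequential sources, frequencies, receivers
n_s_sub, n_f_sub = 8, 16            # n_s', n_f'

# Each simultaneous source s_j is a Gaussian-weighted superposition of the
# sequential sources: the columns of W mix the columns of the identity.
W = rng.standard_normal((n_s, n_s_sub)) / np.sqrt(n_s_sub)
freq_idx = rng.choice(n_f, size=n_f_sub, replace=False)

# Applying the same mixing to the sequential data yields the simultaneous data.
D_seq = rng.standard_normal((n_f, n_rec, n_s))  # toy (freq, receiver, source) cube
D_sim = D_seq[freq_idx] @ W
print(D_sim.shape)  # (16, 400, 8)
```

In practice the simultaneous data are formed once per subproblem from the acquired sequential data, so the expensive wave-equation solves only ever see the $$n_f^\prime \times n_s^\prime$$ reduced experiments.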
Therefore, instead of solving equation $$\ref{eq:2}$$ directly, we solve a series of nonlinear LASSO subproblems for gradually relaxed $$\tau$$’s (Aravkin and Leeuwen, 2012; van den Berg and Friedlander, 2008): $$$\label{eq:3} \begin{aligned} &\min_{\mathbf{x}} \sum_{i=1}^{n_f^\prime}\sum_{j=1}^{n_s^\prime} \|\underline{\mathbf{d}}_{i,j}-\nabla\mathbf{F}[\mathbf{m}_0,q_i\mathbf{s}_j]\mathbf{C}^H\mathbf{x}\|_2^2 \\ &\mathrm{subject\ to}\ \|\mathbf{x}\|_{1} \leq\tau, \end{aligned}$$$ where $$\tau$$ is computed implicitly by applying Newton’s method to the Pareto trade-off curve (van den Berg and Friedlander, 2008), which relates the $$\ell_1$$ norm of the solution vector to the $$\ell_2$$ norm of the data misfit. We also propose to draw a new subsampling operator for each LASSO subproblem, borrowing ideas from approximate message passing (AMP) and the randomized block Kaczmarz method (Montanari, 2010; Needell and Tropp, 2014); this rerandomization has been shown to improve convergence and robustness to linearization errors (Tu and Herrmann, 2012; Tu et al., 2013b).

We demonstrate the effectiveness of this dimensionality-reduction technique on the SEG 2D salt model. The following two images are obtained by (i) reverse-time migration of the fully sampled data, and (ii) our fast linearized inversion with subsampling and rerandomization, at a computational cost comparable to a single RTM of the fully sampled data. We obtain an overall higher-resolution image by inverting the linearized modelling operator without a large increase in the computational budget. Read Tu et al. (2013b) for more details about this example.
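A LASSO subproblem of this form can be solved with projected-gradient iterations. The sketch below runs a plain projected-gradient loop on a random toy operator, with the $$\ell_1$$-ball radius $$\tau$$ set to its oracle value; in the actual SPGL1-style scheme $$\tau$$ is instead updated by Newton's method on the Pareto curve. Everything here is illustrative, not the production solver.

```python
import numpy as np

def project_l1(x, tau):
    """Euclidean projection onto the l1 ball {x : ||x||_1 <= tau}."""
    if np.abs(x).sum() <= tau:
        return x
    u = np.sort(np.abs(x))[::-1]                 # sorted magnitudes, descending
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, x.size + 1) > css - tau)[0][-1]
    theta = (css[k] - tau) / (k + 1)             # soft-threshold level
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

# Toy LASSO subproblem: A stands in for the subsampled operator ∇F C^H.
rng = np.random.default_rng(2)
A = rng.standard_normal((100, 256))
x_true = np.zeros(256)
x_true[[3, 77, 200]] = [1.0, -2.0, 0.5]          # sparse "curvelet" coefficients
b = A @ x_true
tau = np.abs(x_true).sum()                       # oracle l1 budget

x = np.zeros(256)
step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1 / Lipschitz const. of the gradient
for _ in range(3000):
    x = project_l1(x - step * A.T @ (A @ x - b), tau)
print(np.linalg.norm(x - x_true))                # small: sparse vector recovered
```

Rerandomization corresponds to redrawing `A` (i.e., a new set of simultaneous sources and frequencies) between subproblems while warm-starting from the current `x`.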

The linearized-inversion framework facilitates the inclusion of surface-related multiples by identifying the downgoing receiver wavefield as the areal source wavefield for the surface-related multiples, a relation that follows from the SRME formulation (Tu and Herrmann, 2014; Verschuur et al., 1992). Written explicitly, the data $$\mathbf{d}_{i,j}$$ in equation $$\ref{eq:3}$$ then contain both primaries and multiples, and accordingly the $$j$$-th point source $$q_i\mathbf{e}_j$$ is replaced by the areal source wavefield $$q_i\mathbf{e}_j-\mathbf{d}_{i,j}$$. You can find more examples on this page.
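At the level of a single frequency, this source substitution is just wavefield bookkeeping. The toy snippet below (made-up sizes, random complex numbers standing in for the total upgoing data) forms the areal sources $$q_i\mathbf{e}_j-\mathbf{d}_{i,j}$$ for all source experiments at once:

```python
import numpy as np

# Areal sources for imaging with surface-related multiples: replace each
# monochromatic point source q_i e_j by q_i e_j - d_{i,j}. Toy sizes only.
n_grid, n_src = 6, 4                         # source-grid points, experiments
rng = np.random.default_rng(3)
q_i = 2.0 + 1.0j                             # source spectrum at frequency i
D_i = (rng.standard_normal((n_grid, n_src))
       + 1j * rng.standard_normal((n_grid, n_src)))  # total upgoing data
E = np.eye(n_grid, n_src)                    # columns e_j: point-source excitations
areal = q_i * E - D_i                        # one areal source per column
print(areal.shape)  # (6, 4)
```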

## On-the-fly source estimation

For field-data processing, one difficulty is that the true source wavelet is unknown. Even direct arrivals measured in a towed-streamer survey suffer from free-surface ghosts and therefore should not be used as the true source signature. Meanwhile, using an inaccurate source wavelet can greatly degrade the image quality (Tu et al., 2013a). The following example compares images obtained with the true source wavelet and with a wrong one (here, simply an impulsive source at $$t=0$$).

We therefore propose to estimate the source wavelet as part of the optimization process, using an alternating approach: after each update of $$\mathbf{x}$$ in equation $$\ref{eq:3}$$, we estimate the source wavelet by wavefield matching, using multiples to alleviate the non-uniqueness of blind deconvolution: $$$\label{eq:4} \tilde{q}_i(\mathbf{x}) = \frac{\sum_{j=1}^{n_s^\prime}\langle \nabla\mathbf{F}[\mathbf{m}_0, \mathbf{s}_j]\mathbf{C}^H\mathbf{x},\ \underline{\mathbf{d}}_{i,j}+\nabla\mathbf{F}[\mathbf{m}_0, \underline{\mathbf{d}}_{i,j}]\mathbf{C}^H\mathbf{x}\rangle}{\sum_{j=1}^{n_s^\prime}\|\nabla\mathbf{F}[\mathbf{m}_0, \mathbf{s}_j]\mathbf{C}^H\mathbf{x}\|_2^2}.$$$ The result is shown here:
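In a noise-free toy case where the predicted and observed wavefields differ only by the unknown scalar spectrum, equation 4 reduces to a single complex least-squares fit per frequency. The sketch below (random complex wavefields, illustrative names, with the multiple-prediction term folded into the observed data) shows that the estimate then recovers the true spectrum:

```python
import numpy as np

# Variable-projection source estimate (eq. 4) at one frequency: the complex
# scalar q_i that best matches predicted to observed wavefields. Toy data.
rng = np.random.default_rng(5)
n_rec, n_sim = 50, 4                          # receivers, simultaneous sources
q_true = 1.5 - 0.3j
# columns: predicted impulse-response wavefields ∇F[m0, s_j] C^H x
P = (rng.standard_normal((n_rec, n_sim))
     + 1j * rng.standard_normal((n_rec, n_sim)))
D = q_true * P                                # observed wavefields (noise-free)

# eq. 4: q_i = sum_j <P_j, D_j> / sum_j ||P_j||_2^2  (vdot conjugates arg 1)
q_est = np.vdot(P, D) / np.vdot(P, P)
print(np.isclose(q_est, q_true))  # True
```

In the alternating scheme, this closed-form update is applied at every frequency after each model update, so the wavelet estimate improves together with the image.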

## References

Aravkin, A. Y., and Leeuwen, T. van, 2012, Estimating nuisance parameters in inverse problems: Inverse Problems, 28. Retrieved from http://arxiv.org/abs/1206.6532

Candès, E. J., Demanet, L., Donoho, D. L., and Ying, L., 2006, Fast discrete curvelet transforms: Multiscale Modeling and Simulation, 5, 861–899. doi:10.1137/05064182X

Candès, E. J., Romberg, J. K., and Tao, T., 2006, Stable signal recovery from incomplete and inaccurate measurements: Communications on Pure and Applied Mathematics, 59, 1207–1223. doi:10.1002/cpa.20124

Donoho, D., 2006, Compressed sensing: IEEE Transactions on Information Theory, 52, 1289–1306. doi:10.1109/TIT.2006.871582

Lu, S., Whitmore, N. D., and Valenciano, A. A., 2011, Imaging of primaries and multiples with 3D SEAM synthetic: SEG technical program expanded abstracts. Retrieved from http://link.aip.org/link/?SEGEAB/30/3217/1

Montanari, A., 2010, Graphical models concepts in compressed sensing: arXiv:1011.4328v3 [cs.IT].

Needell, D., and Tropp, J. A., 2014, Paved with good intentions: Analysis of a randomized block Kaczmarz method: Linear Algebra and Its Applications, 441, 199–221. doi:10.1016/j.laa.2012.12.022

Nemeth, T., Wu, C., and Schuster, G. T., 1999, Least-squares migration of incomplete reflection data: Geophysics, 64, 208–221.

Tu, N., and Herrmann, F. J., 2012, Imaging with multiples accelerated by message passing: SEG technical program expanded abstracts. SEG. Retrieved from https://www.slim.eos.ubc.ca/Publications/Public/Conferences/SEG/2012/tu2012SEGima/tu2012SEGima.pdf

Tu, N., and Herrmann, F. J., 2014, Fast imaging with surface-related multiples by sparse inversion: UBC. Retrieved from https://www.slim.eos.ubc.ca/Publications/Private/Submitted/2014/tu2014fis/tu2014fis.pdf

Tu, N., Aravkin, A. Y., Leeuwen, T. van, and Herrmann, F. J., 2013a, Fast least-squares migration with multiples and source estimation: EAGE. doi:10.3997/2214-4609.20130727

Tu, N., Li, X., and Herrmann, F. J., 2013b, Controlling linearization errors in $$\ell_1$$ regularized inversion by rerandomization: SEG technical program expanded abstracts. SEG. doi:10.1190/segam2013-1302.1

van den Berg, E., and Friedlander, M. P., 2008, Probing the pareto frontier for basis pursuit solutions: SIAM Journal on Scientific Computing, 31, 890–912. Retrieved from http://link.aip.org/link/?SCE/31/890