|Date||Speaker||Presentation Title & File Link|
|11/20/08||Yemi Arogunmati||Monitoring Sequestered CO2 with Sparse Seismic Data Using Data Evolution|
Y. Arogunmati, Stanford University
I am proposing an approach to quasi-continuous, geophysical monitoring of sequestered CO2 in geological reservoirs with sparse seismic data. This approach, called data evolution, takes advantage of the small change in the seismic properties of a geological reservoir that occurs over a small time interval during or after injection of CO2. The goal of this approach is to obtain high temporal and spatial resolution in reconstructed, time-lapse geophysical models using the same resources that would otherwise provide high spatial but low temporal resolution. This is done by acquiring sparse data at small time intervals. Here, a sparse dataset is one comprising only a small fraction (as little as 5%) of the data that would be required to reconstruct a high-spatial-resolution image of the subsurface. The proposed approach achieves high spatial resolution because unmeasured data are estimated from future and past data. With high temporal and spatial resolution, early detection of leaks from a sequestered CO2 reservoir is guaranteed. This approach comprises two key steps: a missing-data estimation step, which converts the sparse data to full data, and a geophysical data inversion step, which is used to reconstruct the geophysical model.
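The abstract does not specify the estimation algorithm; purely as an illustration of the missing-data estimation step, the sketch below fills unmeasured traces in each sparse survey by linear interpolation in time between the nearest past and future surveys in which those traces were acquired. The array layout and the `estimate_missing_traces` helper are hypothetical.

```python
import numpy as np

def estimate_missing_traces(surveys, times):
    """Fill unmeasured traces (marked with NaN) in each sparse survey by
    linear interpolation in time between the nearest past/future surveys
    in which that trace was measured.
    surveys: array of shape (n_surveys, n_traces, n_samples)."""
    filled = surveys.copy()
    n_surveys, n_traces, _ = surveys.shape
    for i in range(n_surveys):
        for j in range(n_traces):
            if not np.isnan(surveys[i, j]).any():
                continue  # this trace was acquired in survey i
            # surveys in which this trace was actually measured
            known = [k for k in range(n_surveys)
                     if not np.isnan(surveys[k, j]).any()]
            past = [k for k in known if times[k] < times[i]]
            future = [k for k in known if times[k] > times[i]]
            if past and future:
                k0, k1 = past[-1], future[0]
                w = (times[i] - times[k0]) / (times[k1] - times[k0])
                filled[i, j] = (1 - w) * surveys[k0, j] + w * surveys[k1, j]
            elif known:  # only past or only future data: copy the nearest
                k_near = min(known, key=lambda k: abs(times[k] - times[i]))
                filled[i, j] = surveys[k_near, j]
    return filled

# toy example: 4 surveys, 10 traces, 50 samples; most traces not acquired
rng = np.random.default_rng(0)
surveys = rng.standard_normal((4, 10, 50))
mask = rng.random((4, 10)) > 0.3          # True = trace not acquired
surveys[mask] = np.nan
full = estimate_missing_traces(surveys, times=np.array([0.0, 1.0, 2.0, 3.0]))
```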
|10/29/08||Victor Pereyra||Computation of Equispaced Pareto Fronts for Multiobjective Optimization|
V. Pereyra, Weidlinger Associates (Retired)
Many optimization problems in science and engineering have multiple objectives. Often these problems are solved by converting them to single objective constrained ones, which allows the use of familiar tools. However, this conversion can lead to sub-optimal results since it usually does not explore the whole space of solutions of the original problem.
In this presentation we will introduce the basic theory of optimality for multiobjective optimization problems and describe methods for adequately sampling the space of solutions, the so-called Pareto front.
An interesting application of these ideas to problems in Reservoir Optimization could be in the area of cooperative inversion, where multiple data sets (seismic, pressure histories, electromagnetic, etc.) are blended together to better image material properties. Although it is natural to think that the solutions of different inversion problems with common parameters will lead to consistent results, this is hardly the case, since ill-conditioning, insufficient data, and non-linearities usually make these problems very hard to solve.
Blending all the data into a single objective using some weights is one of the favorite methods. Unfortunately, this just changes the problem to having to select the weights, since different weights lead to different solutions (in fact, different Pareto optimal points!).
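A minimal sketch of the weight-selection issue, using two arbitrary toy objectives rather than anything from the talk: sweeping the weight in a weighted-sum scalarization produces a different Pareto-optimal point for every weight, so the choice of weights effectively chooses the solution.

```python
import numpy as np
from scipy.optimize import minimize

# two conflicting toy objectives of a single design vector x
f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2
f2 = lambda x: x[0] ** 2 + (x[1] - 1.0) ** 2

pareto_points = []
for w in np.linspace(0.0, 1.0, 11):
    # each weight defines a different single-objective problem ...
    res = minimize(lambda x: w * f1(x) + (1 - w) * f2(x), x0=[0.0, 0.0])
    # ... and, for these convex objectives, a different Pareto-optimal point
    pareto_points.append((f1(res.x), f2(res.x)))

for p in pareto_points:
    print("f1 = %.3f, f2 = %.3f" % p)
```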
|10/08/08||John Dennis||A Progressive Barrier Approach to Derivative-Free Nonlinear Programming|
M.A. Abramson (1), C. Audet (2), J. Dennis (3) and S. Le Digabel (2)
(1) The Boeing Company (2) Ecole Polytechnique de Montreal (3) Rice University and University of Washington
This is ongoing work towards our goal of providing effective algorithms for derivative-free blackbox nonlinear programming. The problems we target are evaluated by calling "blackbox" computer codes. Often, some constraints are much cheaper to evaluate than others, and some of the constraints only return "yes" if they are satisfied and "no" if they are not. Some constraints must be satisfied or the objective function and other constraints cannot be evaluated. There are even hidden constraints that show up unexpectedly as the solution process proceeds: the blackbox may fail to return a value for the objective function or constraints despite being called with an argument that is feasible with respect to the given constraints.
Our class of mesh adaptive direct search (MADS) algorithms has a satisfying convergence analysis for locally Lipschitz functions. We discuss those results, and we give a new example, orthoMADS. We give some numerical tests using our "progressive barrier" version orthoMADS-PB. The progressive barrier idea is related to the derivative-free filter approach, and it is available in our NOMAD codes.
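The following is only a schematic compass-search illustration of the progressive-barrier idea, not MADS, OrthoMADS, or the NOMAD implementation: infeasible trial points are tolerated as long as their aggregate constraint violation h(x) stays below a threshold that is progressively tightened.

```python
import numpy as np

def h(x, constraints):
    """Aggregate constraint violation: sum of squared violations of c_i(x) <= 0."""
    return sum(max(0.0, c(x)) ** 2 for c in constraints)

def progressive_barrier_search(f, constraints, x0, step=1.0, max_iter=200):
    x = np.asarray(x0, float)
    fx, hx = f(x), h(x, constraints)
    h_max = hx + 1.0                          # initial barrier threshold
    for _ in range(max_iter):
        improved = False
        # poll along +/- coordinate directions (a simple generating set)
        for d in np.vstack([np.eye(len(x)), -np.eye(len(x))]):
            y = x + step * d
            hy = h(y, constraints)
            if hy > h_max:
                continue                      # rejected by the barrier
            fy = f(y)
            # accept if the objective improves without worsening feasibility,
            # or if the constraint violation strictly decreases
            if (hy <= hx and fy < fx) or hy < hx:
                x, fx, hx = y, fy, hy
                improved = True
                break
        if not improved:
            step *= 0.5                       # refine the mesh
        h_max = max(hx, 0.5 * h_max)          # progressively tighten the barrier
        if step < 1e-8:
            break
    return x, fx, hx

# toy problem: minimize a quadratic subject to staying inside a disk
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2
constraints = [lambda x: x[0] ** 2 + x[1] ** 2 - 2.0]
print(progressive_barrier_search(f, constraints, x0=[0.0, 0.0]))
```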
|07/31/08||Mohsen Dadashpour||Estimation of Porosity and Permeability from 4D-Seismic and Production Data Using Principal Component
M. Dadashpour, D. Echeverria Ciaurri, T. Mukerji, J. Kleppe and M. Landrø
Norwegian University of Science and Technology (NTNU) and Stanford University
This study presents a method based on the Gauss-Newton optimization technique for continuous reservoir model updating (porosity and permeability) with respect to production history and 4D seismic data (in the form of zero-offset amplitudes and AVO gradients). Using only production data or zero-offset 4D seismic amplitudes as observations in the parameter estimation process cannot properly explore the solution space. Therefore, integration of production data with 4D seismic zero-offset amplitudes and AVO gradients, combined with empirical knowledge about rock types from laboratory measurements, is one way to further constrain the inversion process.
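As a schematic of the kind of update involved (not the authors' implementation), the sketch below runs Gauss-Newton iterations on a residual vector that stacks weighted production and seismic mismatches; the forward models, weights, and observations here are stand-ins.

```python
import numpy as np

def gauss_newton(residual, m0, n_iter=10, damping=1e-6):
    """Minimize ||r(m)||^2 by Gauss-Newton; the Jacobian is approximated by
    finite differences (a stand-in for the simulator's sensitivities)."""
    m = np.asarray(m0, float)
    for _ in range(n_iter):
        r = residual(m)
        J = np.empty((r.size, m.size))
        for j in range(m.size):
            dm = np.zeros_like(m)
            dm[j] = 1e-6 * max(1.0, abs(m[j]))
            J[:, j] = (residual(m + dm) - r) / dm[j]
        # damped normal equations: (J^T J + damping*I) step = -J^T r
        step = np.linalg.solve(J.T @ J + damping * np.eye(m.size), -J.T @ r)
        m = m + step
    return m

# stand-in forward models mapping (porosity, log-permeability) to data
def production_data(m):
    return np.array([2.0 * m[0] + m[1], m[0] - 0.5 * m[1]])

def seismic_amplitudes(m):
    return np.array([0.3 * m[0] ** 2, m[0] + m[1] ** 2])

# observations generated from a "true" model m = (0.3, 0.1)
d_prod_obs = np.array([0.70, 0.25])
d_seis_obs = np.array([0.027, 0.31])
w_prod, w_seis = 1.0, 1.0          # relative weights of the two data types

def residual(m):
    return np.concatenate([w_prod * (production_data(m) - d_prod_obs),
                           w_seis * (seismic_amplitudes(m) - d_seis_obs)])

print("estimated (porosity, log-perm):", gauss_newton(residual, m0=[0.2, 0.2]))
```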
In the first part of this talk we study the feasibility of integrating these data types as input to reservoir parameter estimation, and we present results for a semi-synthetic history-matching problem extracted from the Norne field in the Norwegian Sea.
|05/29/08||Bruno Kaelin||Next-Generation Seismic Imaging: High-Fidelity Algorithms and High-End Computing|
B. Kaelin, 3DGeo Inc.
Future development of hydrocarbon resources will include exploration in increasingly complex geological environments, necessitating continued advances in computationally intensive imaging technologies for both exploration and exploitation.
Among these technological advances, reverse time migration (RTM) yields the best possible images. RTM is based on the solution of the two-way acoustic wave-equation. This technique relies on the velocity model to image turning waves. These turning waves are particularly important to unravel subsalt reservoirs and delineate salt-flanks, a natural trap for oil and gas. Because it relies on an accurate velocity model, RTM opens new frontiers in designing better velocity estimation algorithms. The chief impediment to the large-scale, routine deployment of RTM has been a lack of sufficient computational power. RTM needs about thirty times the computing power used in exploration today to be commercially viable and widely used.
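A minimal 1D sketch of the two ingredients of RTM, two-way finite-difference wave propagation and a zero-lag cross-correlation imaging condition; all model sizes and positions are toy values, and production RTM is 3D, runs over many shots, and is vastly more elaborate.

```python
import numpy as np

def propagate(source_terms, v, dx, dt):
    """Second-order finite differences for the 1D two-way acoustic wave
    equation u_tt = v^2 u_xx, with a source term injected at every step.
    Returns all wavefield snapshots u[t, x]."""
    nt, nx = source_terms.shape
    u = np.zeros((nt, nx))
    for it in range(1, nt - 1):
        lap = np.zeros(nx)
        lap[1:-1] = (u[it, 2:] - 2 * u[it, 1:-1] + u[it, :-2]) / dx ** 2
        u[it + 1] = 2 * u[it] - u[it - 1] + (v * dt) ** 2 * lap + dt ** 2 * source_terms[it]
    return u

nx, nt, dx, dt = 201, 800, 5.0, 0.001
src_ix = 20
rec_ixs = np.arange(40, 190, 10)          # a line of receivers

# Ricker-like source wavelet injected at the source location
t = np.arange(nt) * dt
arg = (np.pi * 25.0 * (t - 0.04)) ** 2
src = np.zeros((nt, nx)); src[:, src_ix] = (1 - 2 * arg) * np.exp(-arg)

# "true" model with a reflector, and a smooth migration velocity model
v_true = np.full(nx, 2000.0); v_true[120:] = 2600.0
v_mig = np.full(nx, 2000.0)

recorded = propagate(src, v_true, dx, dt)[:, rec_ixs]   # stand-in "field" data

# RTM: forward-propagate the source, back-propagate the recorded data,
# then apply the zero-lag cross-correlation imaging condition
u_src = propagate(src, v_mig, dx, dt)
bwd = np.zeros((nt, nx)); bwd[::-1, rec_ixs] = recorded  # time-reversed injection
u_rec = propagate(bwd, v_mig, dx, dt)[::-1]
image = np.sum(u_src * u_rec, axis=0)
# a single shot yields a crude, artifact-laden image; real RTM stacks many shots
print("image computed on", nx, "grid points; the reflector was placed at index 120")
```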
To overcome these challenges, the Kaleidoscope Project, a partnership between Repsol YPF, the Barcelona Supercomputing Center, 3DGeo Inc. and IBM, brings together the necessary components of modeling, algorithms and the computing power of the MareNostrum supercomputer in Barcelona to realize the promise of RTM, incorporate it into daily processing flows, and solve exploration problems in a highly cost-effective way. Uniquely, the Kaleidoscope Project integrates software and hardware simultaneously, steps that are traditionally taken sequentially. This integration will accelerate seismic imaging by several orders of magnitude compared with conventional solutions on standard Linux clusters.
|05/22/08||Erick Delage||Distributionally Robust Optimization under Moment Uncertainty with Application to Data-Driven Problems|
E. Delage and Y. Ye, Stanford University
Stochastic programs can effectively describe the decision-making problem in an uncertain environment (e.g., optimal management of reservoir floods). Unfortunately, such programs are often computationally demanding to solve. In addition, their solutions can be misleading when there is ambiguity in the choice of a distribution for the random parameters.
In this talk, we propose a model describing uncertainty in both the distribution form (discrete, Gaussian, exponential, etc.) and the moments (mean and covariance). In fact, by deriving new confidence regions for the mean and covariance of a random vector, we provide probabilistic arguments for using our model in problems that rely heavily on historical observations of the parameters. We demonstrate that for a wide range of convex cost functions the associated distributionally robust stochastic program can be solved efficiently. We also consider our framework in the context of more general optimization problems, where our methods can lead to better decisions in terms of risk-adjusted performance in the uncertain environment.
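Not the model proposed in the talk, but a much simpler relative that conveys the flavor of moment-based robustness: for a linear cost whose coefficients are known only through their mean and covariance, a classical Chebyshev-type bound on the worst-case (1 - epsilon)-quantile of the cost is mu^T x + sqrt((1 - epsilon)/epsilon) * sqrt(x^T Sigma x). The decision vector and data below are invented for illustration.

```python
import numpy as np

def moment_robust_cost(x, samples, epsilon=0.1):
    """Worst-case (1 - epsilon)-quantile of the linear cost c^T x over all
    distributions of c sharing the sample mean and covariance.  This is a
    classical Chebyshev-type bound, much simpler than distributionally
    robust models that also account for moment *uncertainty*."""
    mu = samples.mean(axis=0)
    sigma = np.cov(samples, rowvar=False)
    kappa = np.sqrt((1 - epsilon) / epsilon)
    return mu @ x + kappa * np.sqrt(x @ sigma @ x)

rng = np.random.default_rng(1)
samples = rng.multivariate_normal([1.0, 2.0], [[0.5, 0.1], [0.1, 0.3]], size=200)
x = np.array([0.6, 0.4])              # a candidate decision (e.g. an allocation)
print("nominal cost :", samples.mean(axis=0) @ x)
print("robust bound :", moment_robust_cost(x, samples))
```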
|04/24/08||Jef Caers||Distance-based Random Field Models and Their Applications|
J. Caers, Stanford University
Traditional to geostatistics is the quantification of spatial variability through a variogram or covariance model, often within a multi-Gaussian random field description. Since most practical applications work on a grid, the input of many algorithms is essentially a covariance table: in the theoretical limit, a large N x N matrix, where N is the number of cells in the model grid. In most practical applications, N is larger than the number of realizations, N_R, generated. Covariance-based models are, however, limited in representing realistic spatial variability; hence, multiple-point geostatistical methods have recently been developed to better represent actual spatial variation. The multiple-point approach has mostly relied on the construction of efficient algorithms whose application has been successful, but whose further progress may be hampered by the lack of a theoretical framework. In this regard, Markov random fields have yet to be proven practical and robust as models in 3D.
In this paper, a new type of random field model is proposed that is based on a parameterization by means of the distance between any two outcome realizations of this random field model. The main idea is based on a simple duality between the covariance table calculated from a set of N_R realizations and the Euclidean distance between these realizations. Hence, instead of defining a random field in a very high-dimensional Cartesian space (dim = N), we define the random field in a much lower-dimensional and mathematically/computationally tractable metric space (dim max = N_R), since it is expected that the number of realizations is much smaller than the number of grid cells. The classical Karhunen-Loeve expansion of a Gaussian random field, based on the eigenvalue decomposition of an N x N covariance table, can now be formulated as a function of the N_R x N_R Euclidean distance table. To achieve this, we construct an N_R x N_R kernel matrix using the classical radial basis function, which is a function of the Euclidean distance, and perform an eigenvalue decomposition of this kernel matrix. The generalization to non-Gaussian random fields is easily achieved by using distances other than the Euclidean distance. In fact, the distance chosen can be tailored to the particular application of the random field model at hand.
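A bare-bones numerical sketch of the realization-space construction described above, with random stand-ins for the N_R realizations: build the N_R x N_R Euclidean distance table, turn it into a radial-basis-function kernel, and eigendecompose that small matrix instead of the N x N covariance table. The bandwidth choice is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
N, N_R = 10_000, 50                            # grid cells per model, realizations
realizations = rng.standard_normal((N_R, N))   # stand-ins for geostatistical models

# pairwise squared Euclidean distances between realizations (an N_R x N_R table)
sq_norms = np.sum(realizations ** 2, axis=1)
D2 = sq_norms[:, None] + sq_norms[None, :] - 2 * realizations @ realizations.T
D2 = np.maximum(D2, 0.0)

# radial basis function kernel built from the distance table
bandwidth = np.median(np.sqrt(D2))
K = np.exp(-D2 / (2 * bandwidth ** 2))

# eigendecomposition in the low-dimensional realization space (N_R x N_R),
# instead of the intractable N x N covariance table
eigval, eigvec = np.linalg.eigh(K)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]
print("fraction of 'energy' in the first 10 components:",
      eigval[:10].sum() / eigval.sum())
```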
It is shown how this modeling approach creates new avenues in spatial modeling, including the generation of a differentiable random field, a random field model for multiple-point simulations, the ability to easily update existing realizations with new data, a more realistic modeling of spatial uncertainty, and the fast and effective construction of prior model spaces for solving inverse problems.
|04/17/08||Gboyega Ayeni||Time-Lapse Monitoring of Reservoirs under Complex Overburden|
G. Ayeni and B. Biondi, Stanford University
Time-lapse (4D) imaging of conventional reservoirs is a well-established technology. However, there has been little success with 4D applications in complex geology (e.g. sub-salt reservoirs) or in areas where repeatability is difficult, expensive or impossible (e.g. due to the development of facilities between surveys). A regularized least-squares inversion of the linearized wave equation is proposed as a means of compensating for poor and uneven sub-surface illumination under complex overburden, as well as for image differences resulting from different acquisition geometries. This approach involves a joint deconvolution of migrated images from different vintages with explicitly computed filters derived from the Hessian of the linearized wave equation. By using such a formulation, we solve both the imaging and monitoring challenges as a single problem. A more accurate image of the sub-surface and its evolution through time would be obtained without the need (and hence cost) for perfectly repeatable survey geometries. The realistic numerical experiments we have conducted so far indicate that the joint inversion technique appears to yield more accurate time-lapse results, and to be more robust with respect to errors in the forward-modeling operator, than the direct differencing of independently inverted images.
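The Hessian-derived filters of the proposed method are beyond a short sketch; the toy example below only illustrates the joint-inversion idea, fitting two vintages with different linear operators while penalizing the time-lapse model difference, using made-up operators and a synthetic localized change.

```python
import numpy as np

rng = np.random.default_rng(2)
n_model, n_data = 60, 40
m1_true = np.sin(np.linspace(0, 3, n_model))
m2_true = m1_true.copy(); m2_true[25:35] -= 0.3        # localized 4D change

# different (imperfect) acquisition geometries -> different linear operators
A1 = rng.standard_normal((n_data, n_model))
A2 = rng.standard_normal((n_data, n_model))
d1 = A1 @ m1_true + 0.05 * rng.standard_normal(n_data)
d2 = A2 @ m2_true + 0.05 * rng.standard_normal(n_data)

# joint regularized least squares: fit both vintages while penalizing the
# time-lapse difference m2 - m1 (and lightly damping each model)
lam_diff, lam_damp = 1.0, 0.1
I = np.eye(n_model)
Z = np.zeros((n_data, n_model))
O = np.zeros((n_model, n_model))
G = np.block([[A1, Z],
              [Z, A2],
              [-lam_diff * I, lam_diff * I],
              [lam_damp * I, O],
              [O, lam_damp * I]])
rhs = np.concatenate([d1, d2, np.zeros(3 * n_model)])
m_joint, *_ = np.linalg.lstsq(G, rhs, rcond=None)
m1_est, m2_est = m_joint[:n_model], m_joint[n_model:]
print("recovered 4D change (mean over the changed zone):",
      (m2_est - m1_est)[25:35].mean())
```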
|04/03/08||Tamara G. Kolda||Generating Set Search Methods for Practical Optimization|
T.G. Kolda, Sandia National Laboratories
In this talk, I will describe Generating Set Search (GSS), a derivative-free optimization method that is an extension of pattern search. GSS is an iterative method that generates new trial points according to a search pattern at each iteration. These methods date back to Fermi and Metropolis and are often characterized as "slow but sure"; however, their robustness makes them worth a closer look - they are particularly well-suited to engineering optimization problems because they only require function evaluations and are largely immune to errors and break-downs in the simulations. Moreover, these methods are embarrassingly parallel, so it is possible to make them relatively fast by performing the function evaluations simultaneously.
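A minimal serial sketch of the core GSS iteration (poll a generating set of directions, accept an improving point, otherwise contract the step); APPSPACK's asynchronous parallel machinery, caching, and stopping tests are not shown, and the blackbox below is a toy.

```python
import numpy as np

def generating_set_search(f, x0, step=1.0, tol=1e-6, max_iter=500):
    """Minimal GSS: poll the 2n coordinate directions (a generating set for
    R^n); accept the best improving trial point, otherwise contract the step."""
    x = np.asarray(x0, float)
    fx = f(x)
    directions = np.vstack([np.eye(len(x)), -np.eye(len(x))])
    for _ in range(max_iter):
        # in APPSPACK these evaluations would run in parallel, asynchronously
        trials = [x + step * d for d in directions]
        values = [f(t) for t in trials]
        best = int(np.argmin(values))
        if values[best] < fx:                 # successful poll
            x, fx = trials[best], values[best]
        else:                                 # unsuccessful poll: refine the step
            step *= 0.5
            if step < tol:
                break
    return x, fx

# a noisy "blackbox" objective: GSS only needs function values, no derivatives
blackbox = lambda x: (x[0] - 3) ** 2 + 2 * (x[1] + 1) ** 2 + 1e-4 * np.sin(40 * x[0])
print(generating_set_search(blackbox, x0=[0.0, 0.0]))
```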
A major focus of the GSS work at Sandia has been the development of APPSPACK (Asynchronous Parallel Pattern Search package), our implementation of an asynchronous version of the algorithm. Removing the synchronization barrier in the standard GSS algorithm makes much better use of parallel resources and leads to reduced runtime, even though the number of function evaluations can sometimes increase. Moreover, APPSPACK is designed to be easy-to-use, robust, and efficient (in that order). I will compare it to other derivative-free software packages in features and performance.
I will also describe the theory that underlies GSS and how it has been adapted to handle constraints. Linear constraints require that the search pattern be modified to conform appropriately to the nearby boundary. Nonlinear constraints are handled by exact and smoothed-exact penalty functions. We have implemented these methods in APPSPACK, and I will present results on standard test problems.
I will conclude by summarizing the benefits of GSS and its implementation in APPSPACK for real-world problems in optimization. If time permits, I will discuss our current work on hybridizing APPSPACK to be suitable for global optimization.
|03/12/2008||Radu Serban||Adjoint-Based Methods for Analysis of Dynamical Systems|
R. Serban, Lawrence Livermore National Laboratory (LLNL)
Adjoint-based techniques and variational analysis are powerful mathematical methodologies for tackling some computational problems that are essential to many science and engineering applications. The capability of constructing and solving appropriate adjoint models provides (1) an efficient way to evaluate perturbations to data and (2) relatively cheap derivative information for quantities of interest. The applications of these include forward and inverse sensitivity analysis, parameter identification and dynamic optimization, error estimation, model evaluation, etc. We begin by introducing the underlying ideas for using adjoint models to evaluate derivatives of quantities of interest. We focus on problems described by ordinary differential (ODE) or differential-algebraic (DAE) equations and briefly discuss topics related to the derivation and analysis of the adjoint models as well as related implementation and software issues. We conclude with some applications of adjoint methods to (1) computation of derivatives in the context of optimization for robustness and (2) assessment of the quality of reduced-order models under perturbations.
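A minimal discrete-adjoint sketch, not LLNL's CVODES/IDAS solvers: for an explicit Euler discretization of a scalar ODE, one backward sweep of the adjoint recursion yields the derivative of a terminal quantity with respect to a parameter, and the result is checked against the analytic value.

```python
import numpy as np

# ODE: dy/dt = f(y, p) = -p * y, quantity of interest G = y(T)
f   = lambda y, p: -p * y
f_y = lambda y, p: -p          # partial derivative of f w.r.t. the state
f_p = lambda y, p: -y          # partial derivative of f w.r.t. the parameter

def forward(p, y0=1.0, T=2.0, K=2000):
    """Explicit Euler forward sweep; all states are stored for the adjoint."""
    dt = T / K
    ys = np.empty(K + 1)
    ys[0] = y0
    for k in range(K):
        ys[k + 1] = ys[k] + dt * f(ys[k], p)
    return ys, dt

def adjoint_gradient(p, ys, dt):
    """Discrete adjoint of the explicit Euler scheme: one backward sweep
    gives dG/dp for G = y(T), at a cost independent of the number of
    parameters."""
    K = len(ys) - 1
    lam = 1.0                                # dG/dy_K for G = y_K
    grad = 0.0
    for k in range(K - 1, -1, -1):
        grad += lam * dt * f_p(ys[k], p)     # accumulate explicit p-dependence
        lam  *= 1.0 + dt * f_y(ys[k], p)     # propagate the adjoint backwards
    return grad

p = 0.7
ys, dt = forward(p)
print("adjoint  dG/dp:", adjoint_gradient(p, ys, dt))
print("analytic dG/dp:", -2.0 * np.exp(-p * 2.0))   # G = exp(-p*T) with T = 2
```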
|03/06/2008||Sanghui Ahn||Deconvolution Optimization in Permanent Downhole Gauges|
S. Ahn, Stanford University
Pressure and flow-rate data monitored from permanent downhole gauges are complex in the sense that the two signals might not change in reciprocal directions (as required by reservoir physics). To study the interaction between these two (noisy) signals we will compute the optimal response function (pressure inferred from flow rate) by solving a sequence of convex optimization problems. We will then see how this methodology can be used to reduce the noise originally present in the acquired signals. The results of this procedure will be compared to those of an approach based on least squares. We will end the presentation by discussing ongoing research issues.
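The talk's convex formulation is not reproduced here; as a simpler stand-in, the sketch below recovers a response function from noisy pressure and flow-rate signals by regularized least squares, building the convolution matrix explicitly and adding a second-difference smoothness penalty. All signals and the regularization weight are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
t = np.arange(n)

# synthetic "true" response kernel and a stepwise flow-rate history
g_true = np.exp(-t / 30.0) / 30.0
q = np.zeros(n); q[20:] = 1.0; q[120:] = 0.4

# pressure response = convolution of the flow rate with the kernel (+ noise)
Q = np.array([[q[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)])
p_obs = Q @ g_true + 0.002 * rng.standard_normal(n)

# Tikhonov-regularized least squares with a second-difference smoothness penalty
lam = 5.0
L = np.diff(np.eye(n), n=2, axis=0)          # (n-2) x n second-difference operator
G = np.vstack([Q, lam * L])
rhs = np.concatenate([p_obs, np.zeros(n - 2)])
g_est, *_ = np.linalg.lstsq(G, rhs, rcond=None)
print("recovered response; mean abs deviation from truth:",
      np.mean(np.abs(g_est - g_true)))
```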
|02/21/2008||Rami Younis||A Smart Automatically Differentiable High Performance Vector Calculus Package|
R. Younis, Stanford University
Reservoir simulation equations are invariably nonlinear, and solving numerical approximations to them often requires the construction and evaluation of large sparse Jacobian matrices. Moreover, the possibility of phase transitions dictates that the precise physics governing the process at any given point is not known until runtime. That is, the sparse nonlinear systems may change in dimension, functional form, and degree, all depending on the current evaluation state.
Automatic Differentiation (AD) is an algorithmic technique to automatically encode the handful of rules defining differentiation. Numerous AD software packages are currently available, and they vary in the assumptions they make regarding the intended usage and the client's need for speed. I present a unique high-performance differentiable vector calculus framework and software library that can be used in numerical simulation where some aspects of the physics may be known at compile time, and others at runtime.
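The presented library targets sparse Jacobians and compile-time/runtime specialization in a high-performance setting; the sketch below only illustrates the basic AD idea it builds on, encoding the differentiation rules once in a forward-mode dual-number type so that any residual written against it yields derivatives automatically.

```python
import math

class Dual:
    """Minimal forward-mode AD value: carries f(x) and f'(x) together."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __sub__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val - other.val, self.der - other.der)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)  # product rule
    __rmul__ = __mul__

def exp(x):
    return Dual(math.exp(x.val), math.exp(x.val) * x.der)         # chain rule

# a residual whose exact functional form might only be fixed at runtime
def residual(p):
    return p * p - 3.0 * p + exp(p)

x = Dual(1.5, 1.0)                   # seed derivative d(p)/d(p) = 1
r = residual(x)
print("residual value     :", r.val)
print("residual derivative:", r.der)
```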
Of more specific relevance to optimization, I show how the library can be used to assemble adjoints with varying degrees of pre-computation and caching versus re-computation. Ultimately, adjoint-based codes developed within the framework would be smart enough to understand both the machine architecture and the current problem in order to evaluate adjoints with the minimal necessary runtime cost, automatically.
|01/31/2008||Knut Sund||Smart Fields: How Effective is the Current Relationship between Operating Companies and Suppliers?|
K. Sund, University of Stavanger
Over the past few years there has been an unprecedented wave of capital spending in the exploration and production industry. Still, the expectations for improved capital efficiency from Smart Fields (sometimes called Integrated Operations) and its promise of "faster and better decisions" have not materialized. Industry headlines are filled with notable examples of multi-year, multi-billion-dollar overruns. There are indications that leaders of oil and gas companies may be less satisfied with their overall performance than at any time in the history of the industry.
In this presentation, the main focus is on inter-organizational relationships between operators and suppliers in the context of Smart Fields. We have surveyed one large operator, three large suppliers, and some small suppliers operating on the Norwegian Continental Shelf (NCS). The survey covered a broad range of technical professionals at different management and business levels and included questions related to the collaborative relationship between operators and suppliers. In this work we present and discuss some of the results from the survey. We will discuss the disconnect between operators and suppliers related to contractual and incentive-based arrangements. Further, the end results of using incentive-based contracts will be illustrated, and possible improvements will be discussed.
Improving collaboration between operators and suppliers offers perhaps the greatest challenge and, we believe, the greatest potential in achieving the much anticipated value creation from Smart Fields/Integrated Operations. This paper contributes to this by identifying the key disconnects between operating companies and suppliers.
|01/16/2008||Marco Cardoso||Reduced-Order Modeling Applied for 3-D Reservoir Flow Simulation|
M. Cardoso, Stanford University
In this presentation I will talk about the current state of Reduced Order Modeling (ROM) applied to reservoir flow simulation. After a brief introduction I will focus on the implementation of ROM in Stanford's General Purpose Research Simulator (GPRS) and on the application of ROM to a 3-D reservoir with 60,000 grid blocks.
The most important points of my presentation are: a) flow scenario selection: just one simulation was needed to generate the reduced-order basis; b) examples showing the predictive capability of the reduced-order basis for different scenarios; c) three different options for the use of ROM: 1) Proper Orthogonal Decomposition (POD), 2) POD + clustering, and 3) POD + clustering + Missing Point Estimation (MPE); d) the number of unknowns of the model can be reduced from 120,000 (pressure and saturation for all grid blocks) to 39; e) significant speed-up of the ROM-based model when compared with the original one (GPRS).
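A minimal sketch of the POD step mentioned above (clustering and Missing Point Estimation are not shown): stack state snapshots from one training simulation, take an SVD, and keep the leading left singular vectors as the reduced basis onto which full states are projected. The snapshot data here are random stand-ins, not GPRS output.

```python
import numpy as np

rng = np.random.default_rng(4)
n_cells, n_snapshots = 60_000, 200       # grid blocks, states saved during one run

# stand-in snapshot matrix: each column is a saved pressure (or saturation) field
low_rank = rng.standard_normal((n_cells, 12)) @ rng.standard_normal((12, n_snapshots))
snapshots = low_rank + 0.01 * rng.standard_normal((n_cells, n_snapshots))

# POD: SVD of the snapshot matrix; keep enough modes to capture most "energy"
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
k = int(np.searchsorted(energy, 0.9999)) + 1
basis = U[:, :k]                         # n_cells x k reduced-order basis

# a full state is then represented by k coefficients instead of n_cells unknowns
state = snapshots[:, 0]
coeffs = basis.T @ state
print("modes kept:", k, " relative reconstruction error:",
      np.linalg.norm(basis @ coeffs - state) / np.linalg.norm(state))
```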