Third EAGE Workshop on High Performance Computing for Upstream

Date
1 - 4 October
Location
Athens, Greece
Registration
Closing soon
Call for papers
Closed

General Information

Workshop Overview

The energy market is experiencing historic changes in the price of oil that have prodded the industry to seek higher productivity, lower costs and increased efficiency. HPC modelling and simulation is a leading technology in this effort. Faster algorithms and hardware lead to improved visibility of the subsurface and the systematic investigation of more drilling and production scenarios. The larger the memories available, the higher the resolution of those simulations. Better mathematics and algorithms produce more accurate solutions using fewer calculations. Co-designing algorithms with computer architectures can yield reductions in total cost of ownership. This technical evolution in HPC helps make the industry Faster, Better, and Cheaper, which is the underlying theme of this third instalment of the EAGE workshop for HPC in Upstream.

Upstream simulation and modelling is our principal mechanism for accurately locating hydrocarbons and producing them optimally. The reliance on data for making better business decisions at a lower cost is becoming critical. Seismic data are explored using established algorithms such as Reverse Time Migration (RTM), Full Waveform Inversion (FWI) and Electromagnetic Modelling (EM) to illuminate the hidden subsurface of the earth, while reservoir simulation is used to produce fields optimally and predict the time evolution of assets. Both are highly compute-intensive activities, which push the leading edge of HPC storage, interconnect and calculation. The industry is evolving on several fronts. Changes in the underlying hardware, with the advent of co-processing technologies and many-core CPUs, are challenging practitioners to develop new algorithms and port old ones to reap the most performance from modern hardware. The explosion of data and the recent rapid development in machine learning (ML) are leading to non-traditional ways of interpreting seismic and reservoir data. The emergence of significantly faster reservoir simulation technology is breathing new life into multi-resolution and uncertainty quantification workflows.

The ability to create and mine these data relies on the optimal utilisation of supercomputers. This is the result of various synergies between industries, companies, departments and, most importantly, people. HPC IT departments (and, increasingly, HPC cloud solution providers) focus on minimising turnaround times for various workloads, while also deploying the various compute architectures in a cost-competitive fashion and adapting to the fast-paced innovation in the semiconductor industry. Research groups and software application teams in both academia and industry develop new algorithms and keep abreast of the latest developments, adapting and optimising existing and new production frameworks for the latest parallel programming models, languages and architectures.

The workshop brings together experts to examine the state-of-the-art key applications employed in the upstream industry and to anticipate the ambitions that increased computational power will enable.