Third EAGE Workshop on High Performance Computing for Upstream
The energy market is experiencing historic changes in the price of oil that have prodded the industry to seek higher productivity, lower costs and greater efficiency. HPC modelling and simulation is a leading technology in this effort. Faster algorithms and hardware improve visibility of the subsurface and enable the systematic investigation of more drilling and production scenarios. The larger the available memory, the higher the resolution of those simulations. Better mathematics and algorithms produce more accurate solutions with fewer calculations. Co-designing algorithms with computer architectures can reduce total cost of ownership. This technical evolution in HPC helps make the industry Faster, Better and Cheaper, the underlying theme of this third instalment of the EAGE workshop on HPC in Upstream.
Upstream simulation and modelling is our principal mechanism for accurately locating hydrocarbons and producing them optimally. Reliance on data for making better business decisions at lower cost is becoming critical. Seismic data are explored using established algorithms such as Reverse Time Migration (RTM), Full Waveform Inversion (FWI) and Electromagnetic (EM) modelling to illuminate the hidden subsurface of the earth, while reservoir simulation is used to produce fields optimally and predict the time evolution of assets. Both are highly compute-intensive activities that push the leading edge of HPC storage, interconnect and computation. The industry is evolving on several fronts. Changes in the underlying hardware, with the advent of co-processing technologies and many-core CPUs, are challenging practitioners to develop new algorithms and port old ones to extract the most performance from modern hardware. The explosion of data and the recent rapid development of machine learning (ML) are leading to non-traditional ways of interpreting seismic and reservoir data. The emergence of significantly faster reservoir simulation technology is breathing new life into multi-resolution and uncertainty-quantification workflows.
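To illustrate why these workloads are so compute-intensive: at the heart of RTM and FWI lies explicit finite-difference time-stepping of the wave equation over very large grids, a stencil applied millions of times per shot. The sketch below is a deliberately simplified, hypothetical example (2-D constant-density acoustic wave equation, second-order stencil, periodic boundaries via NumPy; grid size, velocity and step sizes are illustrative only), not any production RTM/FWI implementation:

```python
import numpy as np

def wave_step(p_prev, p_curr, c, dt, dx):
    """One explicit time step of the 2-D acoustic wave equation:
    p_next = 2*p_curr - p_prev + (c*dt)^2 * Laplacian(p_curr).
    This stencil is the innermost kernel repeated at every time step."""
    lap = (
        np.roll(p_curr, 1, axis=0) + np.roll(p_curr, -1, axis=0) +
        np.roll(p_curr, 1, axis=1) + np.roll(p_curr, -1, axis=1) -
        4.0 * p_curr
    ) / dx**2
    return 2.0 * p_curr - p_prev + (c * dt) ** 2 * lap

# Toy model: 200x200 grid, 2 km/s velocity, impulsive source at the centre.
# CFL stability: c*dt/dx = 0.2 < 1/sqrt(2) for this 2-D scheme.
n, dx, dt, c = 200, 10.0, 0.001, 2000.0  # cells, m, s, m/s
p_prev = np.zeros((n, n))
p_curr = np.zeros((n, n))
p_curr[n // 2, n // 2] = 1.0
for _ in range(100):
    p_prev, p_curr = p_curr, wave_step(p_prev, p_curr, c, dt, dx)
```

Even this toy touches five grid points per cell per step; production codes use higher-order stencils on 3-D grids with billions of cells, absorbing boundaries and many thousands of shots, which is what drives the demand for accelerators and fast interconnects discussed above.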
The ability to create and mine these data relies on the optimal utilisation of supercomputers, which is the result of synergies between industries, companies, departments and, most importantly, people. HPC IT departments (and, increasingly, HPC cloud solution providers) focus on minimising turnaround times for diverse workloads while deploying compute architectures cost-competitively and adapting to the fast pace of innovation in the semiconductor industry. Research groups and software application teams in both academia and industry develop new algorithms, keep abreast of the latest advances, and adapt and optimise existing and new production frameworks to the latest parallel programming models, languages and architectures.
The workshop brings together experts to survey the state of the art in key applications employed in the upstream industry and to anticipate the ambitions that growing computational power will enable.