DHPC User Stories

About DHPC

Scientists increasingly need extensive computing power to solve complex problems in physics, mechanics and dynamics. The Delft High Performance Computing Centre (DHPC) operates the infrastructure (hardware, software and staff) that enables TU Delft researchers to perform complex analysis and modelling. At the same time, we give Bachelor, Master and PhD students hands-on experience with the tools they will need in their careers.

Both high-performance simulations and high-performance data science are evolving rapidly, and combining these techniques will lead to completely new insights in science and engineering, spur innovation, and train the high-performance computing engineers of the future.

Due to the rapidly evolving hardware and tools for numerical simulations, HPC has significantly changed the way fundamental research is conducted at universities. Simulations not only replace experiments, but also add very valuable fundamental insights. We see the results in all kinds of disciplines, such as materials science, fluid dynamics, quantum mechanics, design optimization, big data mining and artificial intelligence.


Agenda

21 March 2025, 12:30 to 13:15

[NA] Antoine Lechevallier: Enhancing History Matching with Machine Learning: A Hybrid Newton Approach

"History matching involves adjusting model parameters to improve the fit between observed and simulated data, ensuring better forecasting and reservoir management. Traditional ensemble-based methods, while effective, are computationally intensive, particularly for large datasets and complex reservoir models. This study introduces an innovative methodology that builds on the "Hybrid Newton" method by leveraging machine learning (ML) for non-linear preconditioning to accelerate history matching, with a specific focus on incremental learning for near-well prediction.

Part 1: Methodology Introduction on the Drogon Field (Rehearsal for SPE RSC25)

The iterative nature of history matching requires frequent updates to parameters such as porosity and permeability to account for uncertainties. Ensemble-based techniques refine multiple model realizations but at a high computational cost, largely due to repeated simulations. Machine learning presents a promising solution, with deep learning capturing non-linear and time-dependent dynamics. Surrogate models help reduce computational demands while maintaining accuracy, yet training these models for ensemble-based history matching remains challenging due to large datasets and generalization issues.
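To make the computational pattern concrete, the following sketch implements one ensemble-smoother update with perturbed observations, a standard building block behind ensemble-based history matching. It is a schematic NumPy illustration only, not the specific workflow of the talk: the linear operator `G` stands in for a reservoir simulator, and all dimensions and noise levels are toy-sized assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear forward model standing in for a reservoir simulator:
# it maps a parameter vector m (e.g. log-permeability) to observations d.
n_param, n_obs, n_ens = 10, 4, 50
G = rng.normal(size=(n_obs, n_param))
m_true = rng.normal(size=n_param)
obs_std = 0.1
d_obs = G @ m_true + rng.normal(scale=obs_std, size=n_obs)

# Prior ensemble of parameter realizations; one "simulation" per member.
M = rng.normal(size=(n_param, n_ens))
D = G @ M

# Ensemble-smoother update: M_a = M + C_md (C_dd + C_e)^-1 (d_obs + E - D)
Mc = M - M.mean(axis=1, keepdims=True)
Dc = D - D.mean(axis=1, keepdims=True)
C_md = Mc @ Dc.T / (n_ens - 1)              # parameter-data covariance
C_dd = Dc @ Dc.T / (n_ens - 1)              # data-data covariance
C_e = obs_std**2 * np.eye(n_obs)            # observation-error covariance
E = rng.normal(scale=obs_std, size=(n_obs, n_ens))  # perturbed observations
M_post = M + C_md @ np.linalg.solve(C_dd + C_e, (d_obs[:, None] + E) - D)

# The updated ensemble should fit the observations better than the prior.
mis_prior = np.linalg.norm(D - d_obs[:, None])
mis_post = np.linalg.norm(G @ M_post - d_obs[:, None])
```

The repeated-simulation cost the abstract refers to sits in the `D = G @ M` line: in a real workflow each column of `D` is a full reservoir simulation, re-run for every ensemble and every update round.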

To address this, we enhance standard solvers with ML-based optimizations. Deep neural networks can predict reservoir outputs efficiently and improve solver performance by refining initial guesses. The Hybrid Newton method reduces the number of iterations required for convergence by leveraging ML-assisted initial guesses. Specifically, the local hybrid Newton strategy focuses on near-well regions, where well events introduce non-linear stiffness. Well dynamics often cause significant changes in reservoir behavior, necessitating drastic time-step reductions for iterative non-linear solvers to achieve convergence. Well behavior tends to exhibit similarity across spatial and temporal dimensions, presenting an opportunity for incremental learning.
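The warm-start idea behind the hybrid Newton strategy can be sketched in a few lines: run the same Newton iteration twice, once from a conventional cold start and once from a surrogate-supplied guess near the solution, and compare iteration counts. This is a minimal NumPy illustration on a toy two-equation system, not the reservoir residual, and `ml_guess` is simply a hand-picked value standing in for a trained surrogate's prediction.

```python
import numpy as np

def newton(f, jac, x0, tol=1e-10, max_iter=50):
    """Plain Newton iteration; returns the iterate and the iteration count."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        r = f(x)
        if np.linalg.norm(r) < tol:
            return x, k
        x = x - np.linalg.solve(jac(x), r)
    return x, max_iter

# Toy nonlinear system standing in for the reservoir residual at one step.
f = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
jac = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]],
                          [1.0, -1.0]])

# Cold start: e.g. the previous time step's state, far from the solution.
x_cold, it_cold = newton(f, jac, np.array([5.0, -3.0]))

# Warm start: a hypothetical surrogate prediction close to the solution.
ml_guess = np.array([1.4, 1.4])
x_warm, it_warm = newton(f, jac, ml_guess)
```

Both runs reach the same root, but the warm start needs fewer iterations; in the local strategy described above, that saving is harvested precisely in the near-well region where stiffness would otherwise force time-step cuts.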

We capitalize on this by training deep learning models tailored to each well’s near-well region after each history matching ensemble. These models predict solutions in the near-well region and serve as initial guesses for Newton’s method, ensuring faster convergence. Our methodology integrates ML models as local non-linear preconditioners for subsequent history matching ensembles, leveraging nested parameter distributions from prior simulations. This approach is cost-effective, as it does not require additional simulations and remains computationally feasible due to its localized application. Implemented using the open-source Python Ensemble Toolbox for ensemble modeling and OPM Flow for simulation, our methodology is validated on the Drogon field dataset. Results demonstrate a significant reduction in non-linear solver iterations required for convergence. Moreover, training deep learning models for each well can be carried out in parallel on CPUs, further enhancing efficiency.
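The per-well training step is embarrassingly parallel, which the following schematic sketch illustrates. Each "model" here is just a least-squares fit on synthetic data standing in for a per-well deep network, and the well names are invented; the point is only that the jobs share nothing and can run side by side on CPU workers (a real run would use separate processes or cluster jobs rather than threads).

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def train_well_model(seed):
    """Fit a tiny least-squares surrogate for one well's near-well region;
    stands in for training a per-well deep network on ensemble data."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(200, 5))      # near-well states from past ensembles
    y = X @ rng.normal(size=5)         # hypothetical converged solutions
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# One independent training job per (invented) well name.
wells = {"PROD-1": 0, "PROD-2": 1, "INJ-1": 2}
with ThreadPoolExecutor() as pool:
    models = dict(zip(wells, pool.map(train_well_model, wells.values())))
```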

Part 2: Assessment of SWIM on a Simplified Test Case

While the methodology has shown promising results on the Drogon field dataset, further assessment is necessary to evaluate computational efficiency and generalization capabilities. The Sample Where It Matters (SWIM) strategy has been proposed to reduce training time by optimizing data selection during supervised learning, particularly when integrating ML into solvers. High offline data generation costs for training neural networks have historically limited computational benefits, and SWIM aims to address this issue.

To validate SWIM, we employ a simplified test case with basic reservoir geometries and black-oil equations within the OPM Flow simulator. The strategy reduces computational demand by selectively training ML models where discrepancies are most pronounced, ensuring targeted learning without excessive redundant computations. Fully connected neural networks enhance Newton’s method, with SWIM ensuring efficient training on parallel CPUs. The combination of ensemble-based history matching with nested parameter distributions and SWIM optimization results in significant improvements in efficiency and solver robustness.
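The sampling idea can be illustrated with a schematic active-learning loop: fit a cheap surrogate on a small labelled subset, score the remaining candidate pool by discrepancy, and only add the worst-predicted points to the training set. This is our own minimal interpretation of "sample where it matters" in NumPy, not the SWIM algorithm of the talk; the 1-D target and random-feature model are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Candidate pool of solver states (1-D for clarity); the target stands in
# for the converged near-well solution, with a sharp localized transition.
pool_x = np.linspace(-3.0, 3.0, 400)[:, None]
pool_y = np.tanh(4.0 * pool_x)

# Small fixed random-feature expansion so a linear fit can bend.
W = rng.normal(size=(1, 32))
b = rng.normal(size=32)
Phi = np.tanh(pool_x @ W + b)

# Start from a small random training subset, then repeatedly add the
# candidates where the current surrogate is worst: sample where it matters.
idx = list(rng.choice(len(pool_x), size=10, replace=False))
for _ in range(5):
    w, *_ = np.linalg.lstsq(Phi[idx], pool_y[idx], rcond=None)
    err = np.abs(Phi @ w - pool_y).ravel()
    worst = np.argsort(err)[-10:]          # largest-discrepancy candidates
    idx = sorted(set(idx) | set(worst.tolist()))

n_used = len(idx)   # only a fraction of the pool was ever labelled
```

The saving mirrors the abstract's point about offline data-generation cost: every point *not* selected is a full solver run that never has to be performed to produce a training label.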

Conclusion

This research highlights the potential of machine learning to augment traditional history matching processes. By integrating ML-enhanced solvers and incremental learning strategies, we achieve a scalable and efficient framework that accelerates history matching while maintaining robustness. The proposed methodology can be seamlessly integrated into existing workflows, complementing conventional solvers while preserving their theoretical guarantees. Ultimately, our work paves the way for broader ML applications in reservoir engineering, optimizing computational resources and improving predictive accuracy."