Abstract:
At present, unconventional crude oil production in China accounts for less than 2% of total oil output, and mature oilfields remain the primary contributors to long-term stable production. Re-fracturing is a crucial reservoir stimulation technique, and accurate post-fracturing production prediction plays a key role in selecting candidate wells for re-fracturing. However, owing to internal discontinuities in the reservoir, heterogeneity in porosity and permeability, and missing critical reservoir parameters, conventional post-fracturing production prediction methods based on empirical formulas or numerical simulation have limited applicability in mature oilfields. Deep learning models offer a promising alternative. Traditional deep learning approaches such as recurrent neural networks (RNNs) and long short-term memory (LSTM) networks suffer from vanishing gradients and a limited capacity to model long-term dependencies, making them inadequate for petroleum time-series data characterized by high dimensionality, non-stationarity, and noise. The Transformer architecture, leveraging its multi-head attention mechanism and parallel computation, effectively captures both short- and long-term dependencies in production time series. This paper reviews advances in re-fracturing technology and recent progress in deep time-series forecasting models, and on this basis proposes a Transformer-based deep time-series model for forecasting post-re-fracturing production in low-efficiency wells. A case study is performed using historical production data from Block W of an oilfield in the Junggar Basin. This work is an innovative attempt to establish a theoretical and methodological framework for large-scale, efficient, and precise well selection for re-fracturing in mature oilfields, offering new perspectives and solutions for maintaining stable production.
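To make the attention mechanism referred to above concrete, the following is a minimal, illustrative sketch of scaled dot-product attention over a sequence of time-step feature vectors, written in plain Python. It is not the model proposed in this paper (which uses multi-head attention inside a full Transformer); the function name and toy inputs are for illustration only. Each query time step attends over all key time steps, so dependencies at any lag contribute to the output directly rather than through a recurrent chain.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: lists of equal-length float vectors, one per time step.

    For each query vector, compute dot-product similarity with every key,
    scale by sqrt(d_k), normalize with softmax, and return the
    attention-weighted sum of the value vectors.
    """
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)  # weights sum to 1 across all time steps
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

Because every output position is a weighted sum over the entire sequence, the path length between any two time steps is constant, which is why attention handles long-range dependencies more directly than RNN- or LSTM-style recurrence; multi-head attention repeats this computation with several learned projections in parallel.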
Future research should focus on two directions. First, to control computational cost, the attention mechanism of the classical Transformer architecture should be optimized and combined with time-series decomposition techniques to enable low-computation re-fracturing production prediction. Second, for multi-block collaborative well selection, domain adaptation theory, particularly adversarial domain adaptation and pseudo-label domain adaptation, should be incorporated to develop a Transformer backbone with transfer-learning capability.