The oil industry for engineers, programmers, mathematicians and the working masses, part 4

This is the fourth part of a series of articles for future mathematicians and programmers who will have to solve problems related to modeling oil production and to developing the engineering software that supports it.

Today we will talk about why field models are needed and how they are built. A model is that very plan of action that must exist, together with the intended result of those actions.

Modeling, Forecast, Uncertainty

All of the physical effects listed in the previous articles (one, two, three) matter not just as trivia about how the world works: most likely they will have to be taken into account when building a model that can correctly predict the future. Why bother predicting the future in oil production if the oil price and the coronavirus remain unpredictable? For the same reason as anywhere else: to make the right decisions.

In the case of a field, we cannot directly observe what is happening underground between the wells. Almost everything accessible to us is tied to the wells, that is, to rare spots on vast expanses of swamp (everything we can measure amounts to about 0.5% of the rock; the properties of the remaining 99.5% we can only "guess" at). These are the measurements taken while a well was being drilled, the readings of the instruments installed in the wells (bottomhole pressure, the proportions of oil, water and gas in the production), and the measured and set operating parameters of the wells: when to turn on, when to turn off, at what rate to pump.

The right model is one that correctly predicts the future. But the future has not yet arrived, and if you want to judge whether a model is good right now, this is what is done: all the available factual information about the field is loaded into the model, our own guesses about the unknown information are added in line with our assumptions (the catch phrase "two geologists, three opinions" is precisely about these conjectures), and the processes that took place underground are simulated: filtration, pressure redistribution and so on. The model outputs the well performance indicators that should have been observed, and these are compared with the indicators actually observed. In other words, we are trying to build a model that reproduces the history.
In fact, you could cheat and simply force the model to produce the required numbers. But, firstly, that is practically impossible, and secondly, it will be noticed anyway (by the experts at the very state agencies to which the model has to be submitted).
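
To make this concrete, here is a minimal sketch, assuming hypothetical arrays of monthly oil rates for one well, of how the mismatch between the simulated and the observed indicators might be reduced to a single number to be minimized:

```python
import numpy as np

def history_mismatch(simulated, observed, weights=None):
    """Root-mean-square mismatch between simulated and observed well rates.

    simulated, observed: 1D arrays of, say, monthly oil rates for one well.
    weights: optional per-point weights (e.g. to trust recent data more).
    """
    simulated = np.asarray(simulated, dtype=float)
    observed = np.asarray(observed, dtype=float)
    if weights is None:
        weights = np.ones_like(observed)
    err = (simulated - observed) ** 2 * weights
    return np.sqrt(err.sum() / weights.sum())

# Hypothetical example: monthly oil rates, m3/day
observed = [120.0, 115.0, 108.0, 101.0, 97.0]
simulated = [125.0, 118.0, 105.0, 99.0, 90.0]
print(history_mismatch(simulated, observed))  # one number to drive toward zero
```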

If the model cannot reproduce the history, its input has to be changed, but which part of it? The actual data cannot be changed: it is the result of observing and measuring reality, data from instruments. Instruments, of course, have their own error, and they are operated by people who can also make mistakes or lie, but the uncertainty of the actual data in the model is usually small. What can and should be changed is what carries the greatest uncertainty: our assumptions about what is happening between the wells. In this sense, building a model is an attempt to reduce the uncertainty in our knowledge of reality (in mathematics this process is known as solving an inverse problem, and inverse problems in our field are as common as bicycles in Beijing!).

If the model reproduces the history accurately enough, we have hope that the knowledge of reality embedded in the model does not differ too much from reality itself. Then, and only then, can we run such a model as a forecast into the future, and we will have more reason to believe that forecast.

What if it turns out to be possible to build not one but several different models that all reproduce the history well enough, yet give different forecasts? We have no choice but to live with this uncertainty and make decisions with it in mind. Moreover, having several models that give a range of possible forecasts, we can try to quantify the risks of a decision, whereas with a single model we would remain in the unjustified confidence that everything will turn out exactly as the model predicts.
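
A minimal sketch of how such a range of forecasts might be turned into risk estimates, assuming a handful of hypothetical cumulative production forecasts from equally plausible history-matched models:

```python
import numpy as np

# Hypothetical cumulative oil forecasts (thousand tonnes) from several
# equally plausible history-matched models of the same field.
forecasts = np.array([830.0, 910.0, 765.0, 880.0, 845.0, 920.0, 790.0])

# In the industry P90 usually means "90% chance of at least this much",
# i.e. the 10th percentile of the distribution, and P10 is the optimistic case.
p90, p50, p10 = np.percentile(forecasts, [10, 50, 90])
print(f"P90 (conservative): {p90:.0f}")
print(f"P50 (median):       {p50:.0f}")
print(f"P10 (optimistic):   {p10:.0f}")
```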

Models in the life of the field

In order to make decisions while developing a field, you need a holistic model of the entire field. Moreover, nowadays it is impossible to develop a field without one at all: such a model is required by the government bodies of the Russian Federation.

It all starts with a seismic model, built from the results of seismic exploration. Such a model makes it possible to "see" three-dimensional surfaces underground: specific layers from which seismic waves reflect well. It gives almost no information about the properties we need (porosity, permeability, saturation, etc.), but it does show how the layers bend in space. If you made a multi-layered sandwich and then somehow bent it (or someone sat on it), you have every reason to believe that all the layers are bent in roughly the same way. Therefore we can understand how the whole layered cake of sediments settling on the ocean floor was deformed, even if the seismic model shows us only one of the layers, the one that happens to reflect seismic waves well. At this point the data science engineers perk up, because the automatic picking of such reflecting horizons in a seismic cube, which the participants of one of our hackathons worked on, is a classic pattern recognition task.
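
As an illustration only, here is the crudest possible automatic "pick": for every trace of a hypothetical amplitude cube, take the strongest reflection inside a depth window. Real solutions, including the hackathon ones, rely on proper pattern recognition rather than a simple argmax:

```python
import numpy as np

def pick_horizon(cube, z_min, z_max):
    """Naive horizon pick: for every (i, j) trace of a seismic amplitude
    cube shaped (ni, nj, nz), return the sample index of the strongest
    reflection inside the window [z_min, z_max)."""
    window = np.abs(cube[:, :, z_min:z_max])
    return z_min + window.argmax(axis=2)   # 2D map of picked sample indices

# Hypothetical synthetic cube: 50 x 60 traces, 200 samples each,
# with one bright reflector around sample 120.
rng = np.random.default_rng(0)
cube = rng.normal(0, 0.1, size=(50, 60, 200))
cube[:, :, 118:122] += 1.0
horizon = pick_horizon(cube, 100, 150)
print(horizon.shape, horizon.mean())  # (50, 60), roughly 118-121
```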

Then exploratory drilling begins, and as the wells are drilled, instruments are lowered into them that measure all sorts of quantities along the wellbore, that is, well logging (geophysical well surveying) is carried out. The result of such a survey is a well log, i.e. a curve of some physical quantity measured at a fixed step along the entire wellbore. Different instruments measure different quantities, and trained engineers then interpret these curves to obtain meaningful information. One instrument measures the natural gamma radioactivity of the rock. Clays "glow" more strongly and sandstones more weakly; any interpreter knows this and identifies them on the log curve: here are clays, here is a sandstone layer, here is something in between. Another instrument measures the natural electrical potential between adjacent points that arises when drilling fluid penetrates the rock: a high potential indicates a filtration connection between points of the reservoir, so the engineer concludes that permeable rock is present. A third instrument measures the resistivity of the fluid saturating the rock: salt water conducts current, oil does not, which makes it possible to separate oil-saturated rocks from water-saturated ones, and so on.
At this point the data science engineers perk up again, because the input for this problem is plain numerical curves, and replacing the interpreting engineer with an ML model that draws conclusions about rock properties from those curves means solving a classic classification problem. Their eyes start twitching only later, when it turns out that some of the curves accumulated from old wells exist only as long rolls of paper.
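
A minimal sketch of that classification setup, using an entirely made-up table of gamma ray, SP and resistivity samples labeled by an interpreter and a random forest from scikit-learn:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical training table: one row per depth step of a logged well.
# Columns: gamma ray (API), spontaneous potential (mV), resistivity (ohm*m).
rng = np.random.default_rng(1)
n = 600
gr = np.r_[rng.normal(95, 10, n // 2), rng.normal(45, 8, n // 2)]    # clay | sand
sp = np.r_[rng.normal(-5, 5, n // 2), rng.normal(-60, 10, n // 2)]
res = np.r_[rng.normal(4, 1, n // 2), rng.normal(30, 12, n // 2)]
X = np.c_[gr, sp, res]
y = np.r_[np.zeros(n // 2), np.ones(n // 2)]  # 0 = clay, 1 = sandstone (interpreter's labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("holdout accuracy:", model.score(X_te, y_te))
```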

In addition, during drilling a core is taken from the well: samples of rock that are more or less intact (if you are lucky) and not altered by the drilling itself. These samples are sent to a laboratory, where their porosity, permeability, saturation and all sorts of mechanical properties are determined. If it is known (and recorded correctly) from what depth a specific core sample was taken, then when the laboratory data arrive it becomes possible to compare the values shown by all the geophysical instruments at that depth with the porosity, permeability and saturation the rock had at that depth according to the laboratory core studies. In this way the readings of the geophysical instruments can be "calibrated", and afterwards, from their data alone, without core, we can draw conclusions about the rock properties we need to build a model. The devil is in the details: the instruments do not measure exactly what is determined in the laboratory, but that is a completely different story.
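
A minimal sketch of such a calibration, assuming hypothetical density log readings at cored depths and lab porosities of the corresponding plugs; a simple regression is fitted and then applied where there is no core:

```python
import numpy as np

# Hypothetical calibration data: bulk density log readings (g/cm3) at the
# depths where core plugs were cut, and lab-measured porosity of those plugs.
density_log = np.array([2.55, 2.48, 2.40, 2.33, 2.27, 2.21])
core_porosity = np.array([0.06, 0.10, 0.14, 0.18, 0.21, 0.25])

# Fit a simple linear relationship porosity = a * density + b ...
a, b = np.polyfit(density_log, core_porosity, deg=1)

# ... and use it in intervals (or wells) where there is no core at all.
new_log_values = np.array([2.50, 2.30])
predicted_porosity = a * new_log_values + b
print(predicted_porosity)
```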

Thus, having drilled several wells and carried out these studies, we can state fairly confidently what rock, and with what properties, lies where those wells were drilled. The problem is that we do not know what is happening between the wells. And here the seismic model comes to our aid.

At the wells we know exactly what properties the rock has at what depth, but we do not know how the rock layers observed at the wells propagate and bend between them. The seismic model does not allow us to determine exactly which layer lies at what depth, but it confidently shows how all the layers spread and bend at once, the character of the bedding. The engineers then mark characteristic points in the wells, placing markers at certain depths: at this depth is the top of the formation, at that depth its base. The surfaces of the top and the base between the wells are then, roughly speaking, drawn parallel to the surface seen in the seismic model. The result is a set of three-dimensional surfaces that bound the space of interest to us, and we are, of course, interested in the formations containing oil. What we get is called a structural model, because it describes the structure of the formation but not its internal content. The structural model says nothing about porosity, permeability, saturation or pressure inside the formation.
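
A toy sketch of this idea, with hypothetical inputs: take the depth grid of the seismic horizon, compare it with the marker depths picked in the wells, and shift the horizon so that it honors the wells (here, crudely, by the mean residual; real workflows interpolate the residuals over the area):

```python
import numpy as np

def structural_surface(seismic_depth, well_ij, well_marker_depth):
    """Very rough sketch: bend the formation-top surface the same way the
    seismic horizon bends, tying it to marker depths picked in the wells.

    seismic_depth     : 2D array, depth of the reflecting horizon on an (I, J) grid
    well_ij           : list of (i, j) grid cells that wells pass through
    well_marker_depth : formation-top depth picked by the engineer in each well
    """
    # Residual between the marker and the seismic horizon at each well ...
    residuals = [well_marker_depth[k] - seismic_depth[i, j]
                 for k, (i, j) in enumerate(well_ij)]
    # ... and, in this toy version, shift the whole seismic surface by the mean residual.
    return seismic_depth + np.mean(residuals)

# Hypothetical 100 x 100 horizon and three wells
seismic = 2400 + 30 * np.sin(np.linspace(0, 3, 100))[:, None] * np.ones((1, 100))
top = structural_surface(seismic, [(10, 10), (50, 40), (80, 70)], [2455.0, 2470.0, 2448.0])
print(top.shape)
```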

Then comes the discretization stage, in which the region of space occupied by the field is divided into a sort of curved parallelepiped made of cells, in accordance with the bedding (whose character, remember, is visible in the seismic model!). Each cell of this curved box is uniquely identified by three indices, I, J and K. The layers of the box follow the bedding of the rock layers, and the number of layers along K and the number of cells along I and J are determined by the level of detail we can afford.

How detailed is our information about the rock along the wellbore, that is, vertically? As detailed as the measurement step of the geophysical instrument moving along the wellbore, which is, as a rule, every 20-40 cm, so each layer can be 40 cm or 1 m thick.

How detailed is our lateral information, i.e. away from the well? Not detailed at all: away from the well we have no information, so it makes no sense to divide the space into very small cells along I and J, and most often they are 50 or 100 m in both coordinates. Choosing the size of these cells is one of the important engineering tasks.
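
A back-of-the-envelope sketch, with hypothetical field dimensions, of how the grid size follows from these choices:

```python
import math

# Hypothetical field extent and chosen cell sizes
field_length_m = 12_000      # along I
field_width_m = 8_000        # along J
reservoir_thickness_m = 40   # along K

cell_dx = 100                # m, lateral cell size along I
cell_dy = 100                # m, lateral cell size along J
cell_dz = 0.4                # m, layer thickness, close to the logging step

ni = math.ceil(field_length_m / cell_dx)
nj = math.ceil(field_width_m / cell_dy)
nk = math.ceil(reservoir_thickness_m / cell_dz)
print(ni, nj, nk, "cells total:", ni * nj * nk)   # 120 x 80 x 100 = 960,000
```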

After the entire region of space has been divided into cells, the expected simplification is made: within each cell the value of any parameter (porosity, permeability, pressure, saturation, etc.) is considered constant. Of course, this is not true in reality, but since we know that sediments accumulated on the sea floor in layers, rock properties change far more strongly in the vertical direction than in the horizontal one.

So, we have a grid of cells, and each cell has its own (unknown to us) value of each of the important parameters describing both the rock and the fluids saturating it. So far this grid is empty, but some of its cells are crossed by wells along which we ran instruments and obtained the geophysical log curves. Interpretation engineers, using laboratory core studies, correlations, experience and a fair amount of strong language, convert the values of those curves into the values of the rock and fluid characteristics we need, and transfer these values from the well into the grid cells the well passes through. The result is a grid that has values in some cells, while most cells still have none. The values in all the other cells have to be invented using interpolation and extrapolation. The geologist's experience, his knowledge of how rock properties are usually distributed, allows him to choose suitable interpolation algorithms and set their parameters sensibly. But in any case we must remember that all of this is conjecture about the uncertainty that lies between the wells; it is not for nothing, let me repeat the common truth once more, that two geologists will have three different opinions about the same deposit.
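
A minimal sketch of the simplest such interpolation, inverse distance weighting over one grid layer with hypothetical well cells and porosity values; real geological modeling uses geostatistics (variograms, kriging, stochastic realizations) rather than this:

```python
import numpy as np

def idw_layer(ni, nj, well_cells, well_values, power=2.0):
    """Fill one grid layer with a property by inverse distance weighting
    from the cells crossed by wells. Only the simplest possible stand-in
    for proper geostatistical interpolation."""
    grid = np.zeros((ni, nj))
    wells = np.array(well_cells, dtype=float)
    vals = np.array(well_values, dtype=float)
    for i in range(ni):
        for j in range(nj):
            d = np.hypot(wells[:, 0] - i, wells[:, 1] - j)
            if d.min() < 1e-9:                 # cell contains a well: keep its value
                grid[i, j] = vals[d.argmin()]
            else:
                w = 1.0 / d ** power
                grid[i, j] = (w * vals).sum() / w.sum()
    return grid

# Hypothetical layer of 50 x 50 cells, three wells with interpreted porosity
porosity = idw_layer(50, 50, [(5, 7), (25, 30), (44, 12)], [0.19, 0.24, 0.15])
print(porosity[25, 30], porosity[0, 49])
```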

The result of this work is a geological model: a three-dimensional curved parallelepiped divided into cells that describes the structure of the field, plus several three-dimensional arrays of properties in those cells, most often porosity, permeability, saturation and the "sandstone" / "clay" flag.

Then the reservoir engineers, the hydrodynamicists, take over. They may coarsen the geological model by merging several layers vertically and recalculating the rock properties (this is called upscaling, and it is a separate, difficult task). Then they add the remaining properties the hydrodynamic simulator needs in order to have something to flow: in addition to porosity, permeability and the oil, water and gas saturations, these are pressure, gas content and so on. They add the wells to the model and enter information about when and in what mode each well operated. You have not forgotten that we are trying to reproduce the history in order to have some hope of a correct forecast? The hydrodynamicists take the laboratory reports and add to the model the physicochemical properties of oil, water, gas and rock and all their dependencies (most often on pressure), and everything that results, now a hydrodynamic model, is sent to a hydrodynamic simulator. The simulator honestly calculates which cell everything flows from and into at each moment in time, produces graphs of the technological indicators for each well, and these are carefully compared with the real historical data. The hydrodynamicist sighs, looking at the discrepancy, and goes off to adjust all the uncertain parameters he is trying to guess, so that at the next simulator run he gets something closer to the observed data. Or maybe at the run after that. Or the one after that, and so on.
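
As an illustration of the upscaling step alone, here is a sketch with hypothetical layer thicknesses and permeabilities, using the common simple rules of a thickness-weighted arithmetic mean for horizontal permeability and a harmonic mean for vertical permeability:

```python
import numpy as np

def upscale_vertical(thickness, k_horizontal, k_vertical):
    """Merge a stack of thin geological layers into one simulation layer.

    Common simple rules: thickness-weighted arithmetic mean for horizontal
    permeability (flow along the layers) and thickness-weighted harmonic
    mean for vertical permeability (flow across the layers)."""
    h = np.asarray(thickness, dtype=float)
    kh = np.asarray(k_horizontal, dtype=float)
    kv = np.asarray(k_vertical, dtype=float)
    kh_up = (h * kh).sum() / h.sum()
    kv_up = h.sum() / (h / kv).sum()
    return kh_up, kv_up

# Hypothetical stack of four 0.4 m layers with different permeabilities (mD)
print(upscale_vertical([0.4, 0.4, 0.4, 0.4], [250, 10, 120, 400], [25, 1, 12, 40]))
```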

An engineer preparing the surface facilities model takes the flow rates that the field will produce according to the simulation results and puts them into his own model, which calculates what pressure there will be in which pipeline and whether the existing pipeline system will be able to "digest" the field's production: treat the produced oil, prepare the required volume of injection water, and so on.
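
A toy sketch of the kind of check such a model performs, using the single-phase Darcy-Weisbach pressure drop with hypothetical pipe and flow parameters (real surface-network models handle multiphase flow, elevation changes and whole networks):

```python
import math

def pipeline_pressure_drop(q_m3_per_day, length_m, diameter_m,
                           density=850.0, friction_factor=0.02):
    """Rough Darcy-Weisbach pressure drop (Pa) for single-phase liquid flow."""
    q = q_m3_per_day / 86400.0                     # m3/s
    area = math.pi * diameter_m ** 2 / 4.0
    velocity = q / area                            # m/s
    return friction_factor * (length_m / diameter_m) * density * velocity ** 2 / 2.0

# Hypothetical example: 2000 m3/day of liquid through 5 km of 0.2 m pipe
dp = pipeline_pressure_drop(2000, 5000, 0.2)
print(f"pressure drop is about {dp / 1e5:.2f} bar")
```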

And finally, at the highest level, the level of the economic model, the economist calculates the stream of expenses for the construction and maintenance of wells and for the electricity to run pumps and pipelines, and the stream of income from delivering oil into the pipeline system, multiplies them by the discount factor raised to the appropriate power, and obtains the total NPV of the completed field development project.
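
A minimal sketch of that calculation, with hypothetical annual cash flows and a 12% discount rate:

```python
def npv(cash_flows, discount_rate):
    """Net present value of a stream of annual cash flows.
    cash_flows[0] is year 0 (typically negative: drilling and construction)."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows))

# Hypothetical project, million rubles: heavy spending up front,
# then income from oil deliveries minus operating costs.
cash_flows = [-1500, 420, 510, 480, 430, 380, 330, 280]
print(f"NPV = {npv(cash_flows, 0.12):.0f} mln rub")
```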

Preparing all these models, of course, requires heavy use of databases for storing the information and of specialized engineering software that processes all the input data and performs the modeling itself, that is, predicts the future from the past.

A separate software product is used to build each of the models listed above, most often foreign-made, often with practically no alternatives and therefore very expensive. Such products have been developed for decades, and repeating their path with the resources of a small institute is pointless. But the dinosaurs were eaten not by other dinosaurs, but by small, hungry, purposeful ferrets. The important thing is that, as with Excel, only about 10% of the functionality is needed for daily work, and our home-grown replacements, like the Strugatskys' characters who "know ... but know how to do it well", will cover exactly those 10%. In general, we are full of hopes, and there are already certain grounds for them.

This article describes only one path, the main road of the life cycle of a whole-field model, and even here there is plenty of room for software developers to roam, and with the competitors' current pricing models there will be enough work for everyone. The next article will be a spin-off, a sort of "Rogue One", about some particular engineering modeling tasks: hydraulic fracturing modeling and coiled tubing.

To be continued…
