Gradient search for quadratic regression coefficients

My main hobby is aerospace. And "space" tasks – putting a payload into orbit, returning from orbit through the atmosphere – are functions of a whole set of parameters. For example, first-stage launch vehicle control, even in two dimensions (ignoring changes in azimuth and dog-leg maneuvers), is described by at least 5 variables:

  • the duration of the vertical section of the climb after the start;

  • angle of attack at the beginning of the gravity turn;

  • the duration of the descent maneuver;

  • angle of attack after performing a gravity turn;

  • the durations of the pauses between first-stage engine cut-off, stage separation, and second-stage engine ignition.

You can also add engine throttling, dynamic-pressure limits, and the fuel reserve needed to return and land the first stage.

So even in the simplest setting you need to consider a vector of 10–15 components (and do not forget the 2nd and 3rd stages, the impact zones of spent stages and fairings, and much more interesting stuff). Such a problem cannot be solved analytically with a set of ready-made formulas. Numerical methods are needed, among which are the well-known, simple and understandable methods of gradient search. (I know about random search, genetic algorithms and swarm methods, but I need a transparent, fast, dirty and reliable solution that will not try to become smarter than me.)

A gradient (from the Latin gradiens, genitive gradientis – walking, growing) is a vector whose direction indicates the direction of the fastest increase of some quantity.

Computing a Gradient from Finite Differences
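In code this step boils down to the standard central difference, ∂f/∂xᵢ ≈ (f(x + h·eᵢ) − f(x − h·eᵢ)) / (2h). A minimal Python sketch is below; the name grad_fd, the increment h, and the choice of central rather than one-sided differences are my assumptions, not code taken from the article's sources.

```python
import numpy as np

def grad_fd(f, x, h=1e-6):
    """Central-difference estimate of the gradient of f at the point x."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        # i-th partial derivative from two function evaluations
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g
```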

It remains only to find the step size by which we will move in the direction of the gradient along each dimension. A constant step is a simple and understandable solution, but different variables affect the objective function differently, and with a constant step the search is likely to end up wandering along the slopes of a "ridge" between local optima.

To find the optimal step, we use the dichotomy method – simple, and it does not even require computing derivatives.

We split the interval in half, compare the function values on either side of the split point, keep the half that contains the better value, and repeat.
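A minimal sketch of such a one-dimensional dichotomy search (the name dichotomy_step, the interval bounds and the tolerances are my assumptions, not the article's exact code):

```python
def dichotomy_step(phi, a=0.0, b=1.0, delta=1e-8, eps=1e-6):
    """Dichotomy search for the step length t in [a, b] that minimizes phi(t).
    Assumes phi is unimodal on the interval."""
    while b - a > eps:
        mid = 0.5 * (a + b)
        # compare the values just to the left and right of the split point
        if phi(mid - delta) < phi(mid + delta):
            b = mid   # the minimum lies in the left half
        else:
            a = mid   # the minimum lies in the right half
    return 0.5 * (a + b)
```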

We combine these two methods: at each step we first compute the gradient, then find the optimal step value for each component of the objective function's argument vector, and then move to the next point. If the modulus of the difference between the value of the objective function at this point and the previously obtained extremum turns out to be smaller than a sufficiently small epsilon, the search ends.
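Putting it together, the combined loop might look roughly like the sketch below. The article phrases the search as finding a maximum; the regression problem further down minimizes a deviation, so the sketch is written as minimization (maximizing is the same thing with the sign of f flipped). It reuses grad_fd and dichotomy_step from the sketches above; the function name, the default step interval and the iteration cap are my assumptions.

```python
def gradient_search(f, x0, step_max=1.0, eps=1e-3, max_iter=1000):
    """Gradient search that minimizes f, with a dichotomy line search
    along each component; uses grad_fd and dichotomy_step from above."""
    x = np.asarray(x0, dtype=float)
    f_prev = f(x)
    for _ in range(max_iter):
        g = grad_fd(f, x)
        steps = np.zeros_like(x)
        for i in range(x.size):
            # best step length for the i-th component, moving against the gradient
            def phi(t, i=i):
                trial = x.copy()
                trial[i] -= t * g[i]
                return f(trial)
            steps[i] = dichotomy_step(phi, 0.0, step_max)
        x = x - steps * g
        f_cur = f(x)
        if abs(f_cur - f_prev) < eps:   # stop when the improvement is tiny
            return x
        f_prev = f_cur
    return x
```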

And so as not to sprain your brain right away, let's start with a simple and verifiable problem – quadratic regression. The essence is simple: there is a set of experimental data describing the dependence of some Y on a parameter X (the set is noisy and contains outliers).

We need to find coefficients a0, a1 and a2 of a parabolic function such that this approximating curve, drawn through the experimental set of points, has the smallest average deviation from the experimental data. Such a problem has been solved many times in a variety of ways, and these solutions have long been built into Excel, OpenOffice and MathCAD.

Let's state the problem: for a fixed data set of the form [X, Y], find the values of the regression coefficients a0, a1, a2 such that the sum over all X from our sample of |R(a0, a1, a2, X) – Y| is minimal.
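Assuming the parabola has the usual form R(x) = a0 + a1·x + a2·x², that objective can be written as a small helper (a hypothetical make_objective, not the article's code):

```python
def make_objective(X, Y):
    """Sum of absolute deviations between the parabola a0 + a1*x + a2*x^2
    and the experimental points (X, Y)."""
    X = np.asarray(X, dtype=float)
    Y = np.asarray(Y, dtype=float)
    def objective(a):
        a0, a1, a2 = a
        return np.sum(np.abs(a0 + a1 * X + a2 * X**2 - Y))
    return objective
```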

Let's start from the point [-5, -5, -5] (this is not the best starting point, and ideally the choice of a starting point deserves a separate treatment, but starting from suboptimal conditions is a good stability check for the scheme).
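A possible usage sketch, starting from that point. The data set here is synthetic and generated only for illustration; the article's actual sample is not reproduced.

```python
# Synthetic noisy data set -- stand-in for the article's experimental sample
rng = np.random.default_rng(0)
X = np.linspace(-10.0, 10.0, 50)
Y = 2.0 + 0.5 * X + 0.3 * X**2 + rng.normal(scale=3.0, size=X.size)

objective = make_objective(X, Y)
a = gradient_search(objective, x0=[-5.0, -5.0, -5.0])
print("coefficients:", a)
print("residual sum:", objective(a))
```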

From a deviation of 5540 down to a modest 27 in 14 steps.

Now let's compare the results with the quadratic regression built into OpenOffice.

The red line is PlanMaker's built-in quadratic regression, the dark crimson line is the regression whose coefficients were obtained by gradient search.

Now let’s compare the quantitative characteristics.

The * subscript marks values obtained by the gradient method, the t subscript marks the built-in PlanMaker regression. The homemade method wins by 9.46%.

Conclusions:

Gradient search with the step length selected by the dichotomy method can indeed find the extremum of a function of a vector argument of arbitrary dimension.

The efficiency of the method was confirmed on the problem of finding quadratic regression coefficients; the polynomial regression built into the OpenOffice/PlanMaker spreadsheet was used as a reference.

Someday I will bolt this gradient engine onto my rocket-science code.

The algorithm sources live on my GitHub.
