What is dynamic programming – experts explain

The running time of many operations does not depend on the size of the input: for example, getting an element by index in an array takes constant time regardless of the array's length. For other operations the running time grows linearly with the input, for example, getting an element by index in a linked list – the more elements, the longer it takes to reach the desired one. Finally, there are problems for which exhaustive search simply takes too long: finding the shortest route that passes through n cities requires examining n! candidate routes, which is infeasible for large n.

Some such problems can be solved by breaking the original problem into smaller, simpler subproblems, solving those, and combining their results into a solution of the original problem. This approach is called dynamic programming and was proposed by Richard Bellman in the 1950s.

There are two main approaches to dynamic programming:

  • Top-down dynamic programming, where the original problem is recursively broken down into subproblems; each subproblem is solved once (its result is memoized), and the results are combined to obtain a solution to the original problem.
  • Bottom-up dynamic programming, where the smallest subproblems are solved first, and their results are then combined, step by step, into a solution of the overall problem.

One of the best-known introductory dynamic programming problems is computing Fibonacci numbers, and it is solved efficiently by bottom-up dynamic programming. With this approach, to find the Nth element of the sequence, the first, second, and subsequent elements are computed in order, each as the sum of the two previous ones. This runs in O(N) time, while the naive recursive approach takes roughly O(2^N) time, which is dramatically slower.
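Both approaches can be illustrated on Fibonacci numbers; a minimal Python sketch:

```python
from functools import lru_cache

def fib_bottom_up(n: int) -> int:
    """Bottom-up DP: build up from the base cases. O(n) time, O(1) space."""
    if n < 2:
        return n
    prev, curr = 0, 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr

@lru_cache(maxsize=None)
def fib_top_down(n: int) -> int:
    """Top-down DP: the naive recursion plus memoization, also O(n)."""
    if n < 2:
        return n
    return fib_top_down(n - 1) + fib_top_down(n - 2)

print(fib_bottom_up(10))  # 55
print(fib_top_down(10))   # 55
```

Without the `lru_cache` memoization, the recursive version recomputes the same subproblems over and over, which is where the O(2^N) blow-up comes from.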

Many phenomena are described by Fibonacci numbers, but let's look at a more practical example – the traveling salesman problem or, in modern terms, the courier problem: a courier must visit N addresses, visiting each of them exactly once, and end the route at the starting point. The task is to find the shortest route.

This problem can be solved by brute force – generate all N! possible routes for N addresses, then select the route of minimum length. The complexity of this algorithm is O(N! * N), and the computation time grows very quickly with the number of addresses: three addresses give only six candidate routes, but 10 addresses already give more than 3.5 million!
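The brute-force search can be sketched in Python as follows (the 4-address distance matrix is a made-up example):

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Try every ordering of the addresses (address 0 is the fixed start/end).

    dist is an N x N matrix of pairwise distances; complexity is O(N! * N).
    """
    n = len(dist)
    best_route, best_len = None, float("inf")
    for perm in permutations(range(1, n)):
        route = (0, *perm, 0)
        length = sum(dist[a][b] for a, b in zip(route, route[1:]))
        if length < best_len:
            best_route, best_len = route, length
    return best_route, best_len

# Hypothetical symmetric distance matrix for 4 addresses
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]
print(tsp_brute_force(dist))  # ((0, 1, 3, 2, 0), 23)
```

Even for this toy input the loop examines all 3! = 6 orderings; at N = 15 it would be over a trillion.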

Dynamic programming solves this problem exactly in far less time. The key observation is that the shortest route through a given set of addresses ending at a given address can be built from the shortest routes through smaller subsets – this is the Held-Karp algorithm: at each step, an already-computed optimal partial route is extended by one more address. The resulting complexity is O(N^2 * 2^N), which is about 102,400 operations for the same 10 addresses instead of roughly 36 million. (An even simpler alternative is the greedy heuristic of always visiting the nearest unvisited address next; it runs in only O(N^2), but the resulting route is not guaranteed to be the shortest.)
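The subset-based dynamic programming solution with O(N^2 * 2^N) complexity is known as the Held-Karp algorithm; here is a minimal sketch in Python (the distance matrix is the same hypothetical 4-address example):

```python
from itertools import combinations

def held_karp(dist):
    """Held-Karp DP for the courier/TSP problem: exact, O(N^2 * 2^N).

    dp[(S, i)] = length of the shortest path that starts at address 0,
    visits every address in frozenset S exactly once, and ends at i.
    """
    n = len(dist)
    # Base case: paths that go straight from 0 to a single address i
    dp = {(frozenset([i]), i): dist[0][i] for i in range(1, n)}
    # Extend optimal paths over ever-larger subsets of addresses
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for i in subset:
                dp[(S, i)] = min(
                    dp[(S - {i}, j)] + dist[j][i] for j in subset if j != i
                )
    # Close the tour by returning to address 0
    full = frozenset(range(1, n))
    return min(dp[(full, i)] + dist[i][0] for i in range(1, n))

dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]
print(held_karp(dist))  # 23
```

It finds the same optimal length as brute force, but by reusing solutions to subset subproblems instead of re-examining every full route.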

Thus, we have covered the main idea of dynamic programming: splitting a complex problem into a collection of simpler ones that, taken together, can be solved in far less time.
