Solution of a Problem in Concurrent Programming Control

Hydra, which is dedicated to this particular area, decided to fill this gap. By the way, what do you think is the best way to translate the word "concurrent" into Russian? We chose "competitive", but there seems to be no consensus here.

Edsger W. Dijkstra
Eindhoven University of Technology, Netherlands

A number of predominantly independent sequential-cyclic processes with limited means of communication with each other can be implemented in such a way that at any time one and only one of them is in the “critical section” of its cycle.


In this paper, we present a solution to a problem whose solvability, as far as the author knows, has been open since at least 1962. The work consists of three parts: problem, solution and proof. Although the problem statement may at first seem somewhat academic, the author hopes that anyone familiar with the logical problems that arise in the interaction of computers will appreciate the significance of the fact that this problem can, in principle, be solved.


To begin with, consider N computers, each executing its own process, which for our purposes can be regarded as cyclic. Each cycle has a so-called "critical section", and the computers must be programmed in such a way that at any moment only one of these N cyclic processes is in its critical section. To implement this mutual exclusion of critical-section execution, the computers can communicate with each other through a common store. Writing to this store and non-destructively reading from it are indivisible operations: when two or more computers attempt to communicate with the same storage location simultaneously (whether for reading or for writing), these communications take place one after the other, in an unknown order.

The solution must meet the following requirements.

(a) The solution must be symmetrical for N computers; therefore it is not allowed to introduce a fixed priority.

(b) No assumptions about the relative speeds of these N computers are allowed; we cannot even assume that their speeds are constant in time.

(c) If any of the computers is stopped well outside its critical section, this must not be allowed to lead to the potential blocking of the others.

(d) If more than one computer is about to enter its critical section, it must be impossible to devise for them such finite speeds that the decision about which of them will enter its critical section first is postponed indefinitely. In other words, constructions in which "After you"-"After you" blocking is still possible, though improbable, must not be regarded as valid solutions.

We encourage the inquisitive reader to stop here for a moment and try to solve the problem on their own, for only in this way can one get a feeling for the tricky consequences of the fact that each computer can request only one one-way message at a time. And only in this way will the reader realize to what extent this problem is far from trivial.



The common store contains:

Boolean array b, c[1:N]; integer k

The integer k will satisfy 1 ≤ k ≤ N, and b[i] and c[i] can be set only by the i-th computer; however, they can be read by the others. It is assumed that all computers start well outside their critical sections, with all the Boolean values mentioned set to true. The initial value of k is immaterial.

The program for the i-th computer (1 ≤ i ≤ N) is given below:
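The program listing itself is missing from this copy; the following is a reconstruction from Dijkstra's original paper, using the labels Li0 through Li4 that the proof below refers to:

```
integer j;
Li0: b[i] := false;
Li1: if k ≠ i then
Li2:   begin c[i] := true;
Li3:     if b[k] then k := i;
         go to Li1
       end
     else
Li4:   begin c[i] := false;
         for j := 1 step 1 until N do
           if j ≠ i and ¬ c[j] then go to Li1
       end;
critical section;
c[i] := true; b[i] := true;
remainder of the cycle in which stopping is allowed;
go to Li0
```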


To begin with, the solution is safe in the sense that no two computers can be in their critical sections simultaneously: the only way to enter one's critical section is to complete the compound statement Li4 without jumping back to Li1, i.e., to find all the other c's true after having set one's own c to false.

The second part of the proof must show that no infinite "After you"-"After you" blocking can occur; namely, that when none of the computers is in its critical section, at least one of the looping computers (i.e., those jumping back to Li1), and therefore exactly one, will be allowed to enter its critical section in due time.

If the k-th computer is not among the looping ones, then b[k] will be true, and the looping computers will all find k ≠ i. As a result, one or more of them will find in Li3 the Boolean b[k] true, and hence one or more will decide to assign "k := i". After the first assignment "k := i", b[k] becomes false, and no new computers can decide to assign a new value to k. When all the assignments to k that were decided upon have been performed, k will point to one of the looping computers and will not change its value for the time being, that is, until b[k] becomes true again, i.e., until the k-th computer has completed its critical section. As soon as the value of k no longer changes, the k-th computer will wait (via the compound statement Li4) until all the other c's are true; and this situation, if not already present, will certainly arise, since all the other looping computers are forced to set their c true when they find k ≠ i. And with this, in the author's opinion, the proof is complete.
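As an empirical check on the argument, the program can be transliterated into Python threads. This sketch is not from the paper: CPython's global interpreter lock stands in for the assumption that individual reads and writes of the common store are indivisible, and the names N, ITERS, inside and total are our own.

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # force frequent thread switches to stress the protocol

N = 3          # number of "computers"
ITERS = 50     # critical-section entries per computer

# The common store; index 0 is unused so indices match the paper's 1..N.
b = [True] * (N + 1)   # b[i] is true while computer i is outside the entry protocol
c = [True] * (N + 1)   # c[i] is true while computer i is outside its critical section
k = 1                  # 1 <= k <= N; the initial value is immaterial

inside = [0]   # how many computers are currently in a critical section
total = [0]    # completed critical sections (touched only under mutual exclusion)

def computer(i):
    global k
    for _ in range(ITERS):
        b[i] = False                              # Li0
        while True:
            if k != i:                            # Li1
                c[i] = True                       # Li2
                if b[k]:                          # Li3
                    k = i
                continue                          # back to Li1
            c[i] = False                          # Li4
            if all(c[j] for j in range(1, N + 1) if j != i):
                break                             # all other c's true: enter
        # critical section
        inside[0] += 1
        assert inside[0] == 1, "mutual exclusion violated"
        total[0] += 1
        inside[0] -= 1
        c[i] = True
        b[i] = True
        # remainder of the cycle

threads = [threading.Thread(target=computer, args=(i,)) for i in range(1, N + 1)]
for t in threads: t.start()
for t in threads: t.join()
print(total[0])  # N * ITERS = 150: every computer completed all its cycles
```

If mutual exclusion ever failed, the assertion inside the critical section would fire and the final count would fall short of N × ITERS.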
