About C++ and Object-Oriented Programming

Hello, Habr!

We would like to draw your attention to an article whose author does not approve of a purely object-oriented approach to working with C++. Please evaluate, if possible, not only the author’s argumentation, but also the logic and style.

Lately, a lot has been written about C++, the direction in which the language is developing, and the fact that much of what is called “modern C++” is simply not an option for game developers.

While I fully share this point of view, I tend to view the evolution of C++ as the result of pervasive ideas, the ones most developers are guided by, becoming ingrained. In this article, I’ll try to organize some of these ideas along with my own thoughts, and maybe the result will be something coherent.

Object-oriented programming (OOP) as a tool

Although C++ is described as a multi-paradigm programming language, in practice most programmers use C++ purely as an object-oriented language (generic programming is used merely to “supplement” OOP).

OOP is supposed to be a tool: one of many paradigms a programmer can use to solve problems in code. However, in my experience, OOP is accepted by most professionals as the gold standard for software development. Typically, developing a solution begins with determining what objects we need; solving the specific problem begins only after the code has been distributed among those objects. With the transition to this kind of object-oriented thinking, OOP turns from one tool among many into the whole toolbox.

On entropy as the secret force that fuels software development

I like to think of an OOP solution as a constellation: a group of objects with lines drawn between them, seemingly at random. Such a solution can also be considered a graph in which objects are nodes and the relations between them are edges, but the sense of a group or cluster conveyed by the constellation metaphor suits me better (by comparison, a graph is too abstract).

But I don’t like how such “constellations of objects” are composed. In my understanding, each such constellation is nothing more than a snapshot of the image that has formed in the programmer’s head, reflecting what the solution space looks like at a particular moment. Even with all the promises that object-oriented design makes about extensibility, reusability, encapsulation, and so on, the future is unpredictable, so in each case we can only offer a solution for exactly the task we face right now.

We should be encouraged that we are “just” solving the problem directly before us, but in my experience, a programmer using OOP-style design principles creates a solution while constraining himself with the assumption that the problem itself will not change significantly and that, accordingly, the solution can be considered permanent. I mean that from this point on, people start talking about the solution in terms of the objects that form the aforementioned constellation, rather than in terms of data and algorithms; the problem itself is abstracted away.
Nevertheless, a program is subject to entropy no less than any other system, and therefore we all know the code will change, and in unpredictable ways. To me it is absolutely clear that the code will degrade in any case, sliding into chaos and disorder, unless you consciously fight it.

I’ve seen this manifest in many different ways in OOP solutions:

  • New intermediate levels appear in the hierarchy that were never originally intended.
  • New virtual functions are added with empty implementations in most of the hierarchy.
  • One of the objects in the constellation requires more processing than planned, which strains the connections between the other objects.
  • Callbacks are added to the hierarchy so that objects at one level can communicate with objects at another level without having explicit knowledge of each other.
  • And so on.
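The second symptom, a virtual function with empty implementations across most of the hierarchy, is easy to sketch. All names below are invented for illustration; this is not code from any real project:

```cpp
#include <string>

// Hypothetical sketch of the "empty implementations" symptom: a virtual
// function added for the sake of one subclass saddles every other class
// in the hierarchy with a do-nothing default.
struct Entity {
    virtual ~Entity() = default;
    virtual std::string Name() const = 0;
    // Added later for a single subclass; everyone else inherits a no-op.
    virtual void OnNetworkSync() {}
};

struct Rock : Entity {
    std::string Name() const override { return "rock"; }
    // OnNetworkSync() inherited as a do-nothing stub.
};

struct Player : Entity {
    std::string Name() const override { return "player"; }
    void OnNetworkSync() override { synced = true; }  // the one real use
    bool synced = false;
};
```

Every caller still pays for the virtual dispatch to `OnNetworkSync()`, even though for most of the hierarchy there is nothing behind it.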

These are all examples of improperly organized extensibility. Moreover, the outcome is always the same; it may come in a few months or in a few years. Refactoring is used to try to eliminate the violations of OOP design principles that were made when new objects were added to the constellation, and those objects were added because the problem itself had been reformulated. Sometimes refactoring helps. For a while. But entropy is relentless, and programmers don’t have time to refactor every OOP constellation to overcome it, so any project regularly finds itself in the same situation, whose name is chaos.

In the life cycle of any OOP project, sooner or later there comes a point after which it is impossible to maintain it. Typically, at this point, one of two actions should be taken:

  • Go to the “black box”: hide the constellation behind some facade and slowly pull it out of the rest of the code. The system can continue to solve the original problem for which it was created, if it still works decently, but the development of new features stops completely, and fixing bugs takes a long time, if it succeeds at all.
  • Rewrite from scratch: the OOP design created to solve the original problem is by now so far removed from the problem’s current state that no gradual refactoring can bring it back in line.

Please note: the black-box option will still require a rewrite if the development of new features must continue and/or bugs still need to be eliminated.

The situation with rewriting a solution brings us back to the phenomenon of a snapshot of the solution space at a particular moment. So what has changed between OOP design #1 and the current situation? Essentially, everything. The problem has changed; therefore, a different solution is required.

While we were writing the solution following the principles of OOP design, we abstracted away the problem, and as soon as it changed, our solution fell apart like a house of cards.
I think it is at this moment that we begin to wonder what went wrong; we try to take a different path and update our problem-solving strategies based on the results of a postmortem. However, whenever I encounter such a “time to rewrite” scenario, nothing changes: the principles of OOP are applied again, a new snapshot is implemented to match the current state of the problem space, and the whole cycle repeats.

Ease of code removal as a design principle

In any system built on OOP principles, it is the objects in the “constellation” that receive the main attention. But I believe the relationships between objects are at least as important as, if not more important than, the objects themselves.

I prefer simple solutions in which the code’s dependency graph consists of the minimum number of nodes and edges. The simpler the solution, the easier it is not only to change but also to remove. I have also found that the easier code is to remove, the faster you can refocus the solution and adapt it to changing problem conditions. At the same time, the code becomes more resistant to entropy, since it takes much less effort to keep it in order and prevent it from sliding into chaos.
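One shape such a minimal dependency graph can take is plain data plus free functions. The example below is a sketch with invented names, not a prescription: each function depends only on the struct it touches, so removing a feature means deleting one struct and the handful of functions that mention it.

```cpp
// Plain data: no base classes, no hidden coupling to the rest of the code.
struct Enemy {
    float x = 0.0f, y = 0.0f;
    float health = 100.0f;
};

// Free functions over the data. Deleting the Enemy feature means deleting
// this struct and these two functions; nothing else in the program changes.
void ApplyDamage(Enemy& e, float amount) { e.health -= amount; }

bool IsDead(const Enemy& e) { return e.health <= 0.0f; }
```

The dependency edges here are exactly the ones you can see in the signatures, which is what makes the code easy to reason about and easy to remove.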

About performance by definition

One of the main reasons to avoid OOP design is performance. The more code you need to run, the worse the performance will be.

It is also impossible not to note that OOP features, by definition, do not shine in performance terms. In Compiler Explorer, I implemented a simple OOP hierarchy with an interface and two derived classes that override a single pure virtual function.

The code in this example either prints “Hello, World!” or it doesn’t, depending on the number of arguments passed to the program. Instead of programming this directly, one of the standard OOP mechanisms, inheritance, is used to solve the problem in code.

In this case, what is most striking is how much code the compilers generate, even after optimization. Looking closer, you can see how costly and at the same time useless all this bookkeeping is: when a nonzero number of arguments is passed to the program, the code still allocates memory (calling new), loads the vtable addresses of both objects, loads the address of the Work() function for ImplB and jumps to it, only to return immediately, since there is nothing to do there. Finally, delete is called to free the allocated memory.

None of these operations were necessary at all, but the processor performed them all properly.
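The linked Compiler Explorer snippet is not reproduced in the article, but a hierarchy of the shape described above might look roughly like this (a sketch, not the author’s exact code; the return value is added here only to make the behavior observable):

```cpp
#include <cstdio>
#include <memory>

struct Base {
    virtual ~Base() = default;
    virtual bool Work() = 0;  // returns whether anything was printed
};

struct ImplA : Base {
    bool Work() override { std::puts("Hello, World!"); return true; }
};

struct ImplB : Base {
    bool Work() override { return false; }  // nothing to do here
};

// Heap allocation, vtable load, indirect call, and deallocation,
// all just to decide whether to print one line. (argc == 1 means the
// program received no arguments beyond its own name.)
bool Run(int argc) {
    std::unique_ptr<Base> obj;
    if (argc == 1)
        obj = std::make_unique<ImplA>();
    else
        obj = std::make_unique<ImplB>();
    return obj->Work();
}
```

Even in the ImplB case, where nothing happens at all, the program still pays for new, the vtable lookup, the indirect call, and delete.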

Thus, if one of the primary goals of your product is high performance (it would be strange if it were otherwise), then your code should avoid unnecessary costly operations, prefer simple ones that are easy to reason about, and use constructs that help achieve this goal.
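For contrast, here is the same decide-whether-to-print behavior written directly, in the spirit of “prefer simple constructs”: one comparison and one branch, with no allocation and no virtual dispatch (again a sketch, with the return value added only for observability):

```cpp
#include <cstdio>

// The same observable behavior with no hierarchy: no new/delete,
// no vtable, just a branch on the argument count.
bool RunSimple(int argc) {
    if (argc == 1) {
        std::puts("Hello, World!");
        return true;
    }
    return false;
}
```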

Take Unity, for example. As part of their recent “performance is correctness” practice, C#, an object-oriented language, is used because that language is already used in the engine itself. However, they settled on a subset of C#, and one that is not rigidly tied to OOP, and on its basis they create constructs honed for high performance.

Given that a programmer’s job is to solve problems using a computer, it is astonishing how little attention our industry devotes to writing code that actually makes the processor do the work it is particularly good at.

About fighting stereotypes

In his article “Overcomplication is the root of all evil,” Angelo Pesce gets to the point (see the last section, “People”) by admitting that most software problems are actually human problems.

The people on a team need to interact and develop a shared understanding of what the overall goal is and what the path to achieving it looks like. If there is disagreement in the team, for example about the path to the goal, then further progress requires building consensus. This is usually not difficult if the differences of opinion are small, but it is much harder if the positions differ fundamentally, say, “OOP or not OOP.”
Changing your mind is not easy. Doubting your point of view, realizing how wrong you were, and adjusting your course is hard and painful. But it is much harder still to change someone else’s mind!

I had a lot of conversations with different people about OOP and its inherent problems, and although I believe that I have always been able to explain why I think this way and not otherwise, I do not think that I managed to turn anyone away from OOP.

Over the years, though, I have identified three main arguments behind people’s unwillingness to give the other side a chance:

  • “That wouldn’t happen with good OOP.” “That’s just poorly designed OOP.” “This code doesn’t follow OOP principles,” and the like. I hear things like this when I demonstrate examples of OOP code that has decayed (as I said above, this inevitably happens with OOP code). This is a typical example of the “No true Scotsman” fallacy.
  • “I know OOP, and if we start from scratch, I don’t want to use anything else.” This is the fear of losing one’s “senior” status after applying OOP principles throughout one’s career and leading other people who were also required to apply them. I believe we are dealing here with an example of the “sunk cost” fallacy.
  • “Everyone knows OOP; it is very convenient to speak with people in a common language, with shared knowledge.” This is the logical fallacy called “appeal to the people” (argumentum ad populum): if almost all programmers use OOP principles, then the idea cannot be wrong.

I am fully aware that exposing the logical fallacies in an argument is not enough to debunk it. However, I believe that by seeing the flaws in your own reasoning, you can get to the bottom of things and find the deeper reason why you reject an unfamiliar idea.
