Eternal Sunshine of the Clean .NET
If you find it interesting to mentally return to that era and reflect on .NET in a "then and now" context, then read on. I think it will interest both those who started coding recently and don't know the quirks of previous versions, and those who want to indulge in a bit of nostalgia.
Age of Pain
When I started developing, SharePoint 2010 ran on the .NET Framework 3.5, which already included a lot: LINQ had appeared, and there was primitive AJAX. But the platform was very limiting: extending it was rather difficult, and adequate tooling simply did not exist back then.
Pain 1: Creating a single application
Back then, web parts for SharePoint were built on WebForms, with each web part being essentially a separate application. For me it was unrealistic to build a single application this way, because each web part initializes its own context for connecting to the database, so a single shared context was impossible. For example, to display data from the database on pages I used SqlDataSource (connecting to the database separately in each widget), and to pull from 3-4 tables I had 3-4 DataSources on a widget, which of course affected page load speed. ADO.NET Entity Framework had already appeared by that time, but until version 4.1 it was inconvenient to use with SharePoint because the two products did not interact well.
Pain 2: Inaccessibility of support and patterns
We wrote web parts for SP 2010 using the SP 2007 web part technology, because Visual Studio 2008 had no templates or support for the new version. Gradually, with the release of Visual Studio 2010, templates appeared and work became easier: you could create list definitions and code them from the IDE, and create a site template (defining the desired content types and list descriptions). Previously all of this was done by hand by editing XML files, and that was undoubtedly painful for anyone just diving into .NET development: you didn't understand what file you were editing or why, and relied only on the word of some guy on a forum.
Pain 3: Asynchrony …
In the .NET Framework 3.5 there was no asynchrony in the form we know it now: you had to run code on another thread and communicate through delegate callbacks, and in WinForms you could use a BackgroundWorker (a component that ran work on a second, parallel thread). So asynchronous programming existed, but implementing it was beyond a junior developer's understanding.
In the .NET Framework 4, the Task Parallel Library appeared, and with it tasks: instead of declaring delegates, we could create a task, pass it an action, execute it on a parallel thread, check its status, and receive a signal when it completed. That was real progress for parallel development, when you need to process a large amount of data, because previously the barrier to entry was much higher.
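The contrast above can be sketched roughly like this (an illustrative toy example, not code from the article; the "long-running work" is just an arithmetic stand-in):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class TplSketch
{
    // Old style (.NET 3.5): start a thread by hand and pass the result
    // back through a delegate callback.
    static void OldStyle(Action<int> onDone)
    {
        var thread = new Thread(() =>
        {
            int result = 2 + 2;   // some long-running work
            onDone(result);       // signal completion via the callback
        });
        thread.Start();
        thread.Join();
    }

    // TPL style (.NET 4): a Task carries its own status and result.
    static int NewStyle()
    {
        Task<int> task = Task.Factory.StartNew(() => 2 + 2);
        task.Wait();                         // block until the work completes
        Console.WriteLine(task.Status);      // RanToCompletion
        return task.Result;
    }

    static void Main()
    {
        OldStyle(r => Console.WriteLine($"callback got {r}"));
        Console.WriteLine(NewStyle());
    }
}
```

The point is not that threads disappeared, but that the Task object itself exposes status and result, so the plumbing of callbacks and manual signaling goes away.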
… and frozen windows
You need to understand that the web is very different from desktop development (by "console applications" here I mean not ConsoleApp specifically, but any application that runs in the OS interface). In a desktop application, all operations run synchronously by default, and a long operation "freezes" the interface as if the application had hung. So that the program did not appear unresponsive, we ran all long operations on a separate thread and added progress bars: this way the user saw the application's activity, and the worker thread could be controlled from outside, for example through a delegate.
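A minimal sketch of that pattern, assuming a delegate stands in for a real progress bar update (the names and the simulated work are illustrative only):

```csharp
using System;
using System.Threading;

class ProgressSketch
{
    // Run a long operation off the "UI" thread, reporting progress
    // through a delegate so the interface can stay responsive.
    static void RunWithProgress(Action<int> reportProgress)
    {
        var worker = new Thread(() =>
        {
            for (int step = 1; step <= 5; step++)
            {
                Thread.Sleep(10);            // simulate a chunk of work
                reportProgress(step * 20);   // e.g. advance a progress bar
            }
        });
        worker.Start();
        worker.Join();  // in a real UI app you would not block like this
    }

    static void Main()
    {
        RunWithProgress(p => Console.WriteLine($"progress: {p}%"));
    }
}
```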
Pain 4: Deployment is Coming
The .NET Framework 3.5 also brought another painful technology: MS AJAX. UpdatePanel content was refreshed from the backend while everything else on the page was not rebuilt at all. In SharePoint it worked very erratically because of how controls are initialized in the page life cycle: for us it kicked in only after the first postback (sometimes the second), and in general it was hard to get MS AJAX working on the first try, although with plain WebForms an UpdatePanel was quite simple to use. Classic AJAX (XMLHttpRequest) was not an option in that version of SharePoint either, because each action required a separate handler on the backend, and a pile of them had to be wired into every web part. Even then, it was not always possible to get this functionality working.
When, in parallel, I worked on other WebForms applications for SharePoint-adjacent tasks, I was surprised to find that deploying a project is a problem only for SharePoint. The other applications initialized instantly: the window loaded, and it worked (magic!). In SharePoint, deployment took 2 to 3 minutes, and you were stuck in a constant deploy-and-wait cycle.
Everyone understood that deployment was a long process punctuated by mini-breaks. But I am grateful for this pain: it is how I learned to write more code and make fewer mistakes in a single development iteration.
Pain 5: Windows and nothing but Windows
At that time, .NET still positioned itself as a development platform for Windows. Yes, there was the Mono project, which was in essence a reimplementation of .NET for Linux, but it was an alternative version of the main Framework, and the project page (www.mono-project.com/docs/about-mono/compatibility) still lists, by Framework version, the features it does not support. Developing something for Linux with it was far from friendly: Mono did not have the same support and community, and if you touched some unimplemented feature, the code could simply break. In other words, unless you targeted Mono from the start, you could not write platform-independent code at all. I don't dismiss the significance of this project for the development of .NET as a whole (without it, Core would not have appeared), but personally I had no combat experience with it.
Age of Pros (Painkiller)
Simply using the latest version of pure .NET in your projects eliminates almost all of these problems. The Framework has many pluses now, but below I will discuss the advantages in terms of Core, since that is what I worked with.
Plus 1: Performance
When .NET Core appeared, it became possible to do familiar operations far better and faster. According to some benchmarks, applications built on it run up to 5,000 times faster than their .NET Framework counterparts. Compilation and startup, though, sometimes take longer: "harness slowly, ride fast."
Plus 2: Cross-platform
The main plus of .NET Core is that the same code runs on Windows, Linux, and Mac. For example, you can build a microservice-architecture application with an asynchronous logging service communicating through a message queue. I remember how I, a developer writing mainly for Windows, wrote daemons (services) for Linux, and they worked stably, quickly, and on the first try, with the whole system working in concert: the application, the API service, and the message queue itself. It's just amazing to write in your usual language for an OS the platform was not originally designed for!
Plus 3: Async everything
Now you can write a backend not just in parallel, not just multithreaded, but fully asynchronously (!), which lets you move individual tasks off the main thread into dedicated asynchronous methods or code blocks. This, in turn, lets you write beautiful, clean code free of bulky constructions: it is easy to understand, and asynchronous methods are written like synchronous ones yet work as they should.
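That "written as synchronous, works as asynchronous" quality looks like this in practice (a toy sketch; `FetchAsync` is an illustrative stand-in for a real database or HTTP call):

```csharp
using System;
using System.Threading.Tasks;

class AsyncSketch
{
    // An async method reads like straight-line synchronous code:
    // no callbacks, no manual thread management.
    static async Task<int> LoadAndSumAsync()
    {
        int a = await FetchAsync(2);   // the thread is freed while waiting
        int b = await FetchAsync(3);
        return a + b;
    }

    // Simulated I/O call.
    static async Task<int> FetchAsync(int value)
    {
        await Task.Delay(10);          // non-blocking wait
        return value;
    }

    static void Main()
    {
        Console.WriteLine(LoadAndSumAsync().Result);  // prints 5
    }
}
```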
Plus 4: Unloading libraries and less intensive memory consumption
If you look at the current 8th version of C#, it has a lot of sugar, and the changes are fascinating. First, we previously had no way to unload a dynamically loaded DLL: we could load libraries into a project at runtime, but they stayed hanging in memory. With the release of Core 3.0, it became possible to dynamically load and unload libraries depending on the goal. For example, in a file-search application, if the user selects the XML extension, we dynamically load an XML parser and search its document tree; if they want to search JSON, we load a JSON library and search its body. Libraries are loaded only under the conditions that require them, and there is no need to keep them in RAM. And yes, the application stopped constantly consuming memory: when we unload an assembly, we free all the resources attached to it.
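The mechanism behind this is the collectible AssemblyLoadContext introduced in .NET Core 3.0. A minimal sketch, assuming a hypothetical plugin assembly path (the path and the plugin itself are not from the article):

```csharp
using System;
using System.Runtime.Loader;

class UnloadSketch
{
    // Load a plugin into a collectible context, use it, then unload it
    // so the assembly and everything it holds can be reclaimed by the GC.
    static void RunPlugin(string assemblyPath)
    {
        var context = new AssemblyLoadContext("PluginContext", isCollectible: true);
        try
        {
            var assembly = context.LoadFromAssemblyPath(assemblyPath);
            // ... instantiate types from `assembly` and do the search here ...
        }
        finally
        {
            context.Unload();  // the assembly becomes eligible for collection
                               // once no references into it remain
        }
    }
}
```

Note that unloading is cooperative: the memory is actually reclaimed only after the GC sees no remaining references into the context.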
Plus 5: Tuples
The language is still young, vibrant, and actively developing. Recent versions added a lot of cool things: tuples, for example, are a hot topic. Yes, there was a Tuple type before, but it was a separate class wrapping its elements. Now it has been reworked into value tuples, so a method can return not one object but several. Previously, to return more than one value, you had to declare an out/ref parameter or invent a separate class and drag it around; now you can simply return a tuple.
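A minimal sketch of the difference (the method name is illustrative):

```csharp
using System;

class TupleSketch
{
    // A value tuple lets one method return several named values
    // without out-parameters or a dedicated result class.
    static (int Quotient, int Remainder) DivMod(int a, int b)
        => (a / b, a % b);

    static void Main()
    {
        var (q, r) = DivMod(7, 3);      // deconstruct into locals
        Console.WriteLine($"{q} {r}");  // prints "2 1"
    }
}
```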
Many developers have this attitude toward language changes: until things were made good, we didn't know they were bad. .NET Core is open source, so anyone can propose a feature and write about their pain on the portal. There are, of course, many points of contention: some people wait eagerly for changes that others find completely uncomfortable. For example, nullable reference type checking was added in version 8 of the language, and its convenience is still debated: the feature was announced two versions earlier but shipped only in the final Core 3.0, and it is disabled by default, since turning it on can break a large existing project. But when writing an application from scratch, it is extremely useful and makes the application cleaner and more transparent.
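Here is a small sketch of what the feature changes (the method is illustrative): with nullable checking enabled, a plain `string` is assumed non-null, and anything that may be null must be declared `string?` and checked before use, with the compiler tracking the null state.

```csharp
#nullable enable
using System;

class NullableSketch
{
    // `string?` declares that null is an expected value; passing null into
    // a plain `string` parameter would produce a compiler warning instead.
    static int SafeLength(string? text)
    {
        if (text is null)
            return 0;          // below this check, the compiler knows
                               // `text` is non-null
        return text.Length;
    }

    static void Main()
    {
        Console.WriteLine(SafeLength(null));     // prints 0
        Console.WriteLine(SafeLength("hello"));  // prints 5
    }
}
```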
In my opinion, the platform as it stands today is already a strong player in the development world, with a fairly low entry threshold (there are lower ones, but working with them is harder). Of course, choosing a platform means weighing a number of factors and depends on your goals. If it's a complex application that processes terabytes of data and must be verified down to the byte, then that is hardcore programming in C++. But you have to understand that this means half a year of development and two more of rework, and by release time it will already be obsolete. Besides, there are not many people who code in C++. If we're talking about enterprise development, where the time to release is two weeks, then it makes sense to choose a technology that gets you a finished result faster (Java, .NET, PHP).