Of course, the first reasonable question is why bother building some hybrid of a Radeon and an Nvidia card at all when, if you have two PCI-e slots, you could simply install two Nvidia cards in SLI and enjoy life with the same FPS, if not more.
The answer is extremely simple: you could install a powerful Radeon and a mediocre Nvidia card, and that was already enough for a big performance gain. Judging by the Web Archive, the technique was in active use until around 2013. Building a good AMD system and adding an old GeForce to it for pennies sounded quite like a plan. A pervert's plan, but a plan nonetheless.
Physics – it evaporated
Here everything is much simpler: at one time the PhysX physics coprocessor did not belong to Nvidia, just as the Voodoo graphics accelerators came from 3dfx. Back when physics was fashionable and was an engine of progress, Nvidia tried to put it everywhere it could (Ageia was acquired back in early 2008).
When physics stopped being fashionable (and especially after the incident that made it possible to run physics hybrids alongside AMD cards), Nvidia closed up shop and made the libraries almost publicly available, but without hardware acceleration (if the calculations are not performed on an Nvidia graphics chip). At the same time, PhysX itself shipped under a license of sorts, under which games could be developed but a percentage of sales had to go to Nvidia (I'm not strong on this point; if I'm corrected in the comments, I'll amend this here).
What's the result? If you don't have Nvidia on board, you won't see the physics (yes, there are other engines, including Havok and Bullet, but Nvidia had, and still has, the most impressive physics).
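To make "hardware-accelerated physics" concrete, here is a minimal sketch (plain Python, not the PhysX API; the bounce damping factor is my own illustrative choice) of the per-body work a physics engine repeats every frame. It is exactly this kind of loop, run over thousands of bodies and particles, that a dedicated coprocessor or GPU takes over:

```python
# Illustrative sketch, not PhysX: advance rigid bodies one step with
# semi-implicit Euler integration under gravity, plus a crude floor bounce.
GRAVITY = -9.81  # m/s^2, along the y axis

def step(bodies, dt):
    """Advance every body by one time step dt (seconds)."""
    for b in bodies:
        b["vy"] += GRAVITY * dt   # update velocity first...
        b["x"] += b["vx"] * dt    # ...then position (semi-implicit Euler)
        b["y"] += b["vy"] * dt
        if b["y"] < 0.0:          # crude floor collision: bounce with damping
            b["y"] = 0.0
            b["vy"] = -b["vy"] * 0.5
    return bodies

# One ball thrown sideways from 10 m up, simulated for one second at 60 steps/s
ball = {"x": 0.0, "y": 10.0, "vx": 1.0, "vy": 0.0}
for _ in range(60):
    step([ball], 1.0 / 60.0)
```

The loop is trivially parallel across bodies, which is why it maps so well onto GPU hardware and why losing hardware acceleration hurts.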
And what is the main game-development market? That's right: consoles. There you would have to bundle a whole extra library, and it wouldn't work everywhere. Here is the answer to the question of where physics disappeared from games, somewhere around 2015. (Of course, it didn't disappear everywhere, and not completely. There were games with their own in-house physics, like Hydrophobia; and the physics in The Witcher, for one, seems to be calculated through PhysX and works fine on cards from the red team. But these are details.)
But the problem isn't even that. It's that Nvidia created (bought, stole, modified, underline whichever applies) a lot of interesting technologies that have sunk into oblivion (perhaps ray tracing is next in line).
A detailed article on how Reflex works.
A technology used in shooters on monitors of 144 Hz and above to reduce input lag, particularly when DLSS 3 frame generation is in play. One of those cases where Nvidia creates a problem and then solves it itself.
In general, Reflex would have a right to exist in eSports, if not for one thing. Most eSports players deliberately drop their graphics settings through the floor to achieve minimal latency natively, so for them Reflex itself is redundant. And other players, who don't use DLSS 3.0, simply won't feel the difference.
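The latency arithmetic here can be sketched with a toy model (the queue depth and the linear relation are my simplifying assumptions, not Nvidia's formula): pipeline lag is roughly the number of frames queued ahead of the GPU times the frame time, so high FPS already buys most of what Reflex offers.

```python
# Back-of-envelope model (my own simplification): input lag contributed by
# the render pipeline is roughly queued frames times frame time.
def pipeline_latency_ms(fps, queued_frames):
    frame_time_ms = 1000.0 / fps
    return queued_frames * frame_time_ms

# eSports setup: settings on the floor, 300 FPS, default ~3-frame queue
esports = pipeline_latency_ms(300, 3)   # about 10 ms
# Same queue at 60 FPS: five times the lag
casual = pipeline_latency_ms(60, 3)     # about 50 ms
# Reflex-style fix: keep the queue near 1 frame
reflex = pipeline_latency_ms(60, 1)     # about 16.7 ms
```

Shrinking the frame time (high FPS) and shrinking the queue (Reflex) attack the same product, which is why a player already running 300 FPS at minimum settings barely feels the difference.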
We have spare CUDA cores: where should we put them to work? Let's draw fur! In practice it was a gimmick technology that didn't catch on, for two reasons:
The result is simply disgusting.
It did not affect the development process itself in any way (unlike ray tracing).
And it loaded the cards heavily, too. Perhaps if texture had been added to the hair, or the amount of hair (wool/fur) had been increased, everything could have gone in a completely different direction, and we would now be enjoying beautiful scarves instead of realistic lighting.
Although there were plenty of hair- and fur-physics implementations, the most memorable remains The Witcher 3, where it was still often turned off, for purely aesthetic reasons.
It remained a tech demo, although quite an impressive one.
It was used only in Watch Dogs 2, where no one noticed it. In many ways it became redundant with the advent of ray tracing. And if you remember your history, Carmack was drawing shadows like these back in 2001!
Despite all the physical authenticity, water in Assassin's Creed IV: Black Flag (where it was used) did not behave much better than in Corsairs: KVL (GPK), which ran on the Storm Engine.
SLI left us
Before the finale, one interesting observation. Since the release of the GeForce 10 series, Radeon cards in the desktop segment have been playing catch-up.
And Nvidia's powerful breakthrough in 2016 practically eliminated the need for SLI and CrossFire. This is clearly visible: already in the 10 series, SLI was available only on cards from the 1070 upward, and by the 30 series it was limited to cards of the 3090 level.
And if it weren't for the miners (who slowed down progress and drove card prices sky-high), Nvidia might have abandoned SLI even earlier.
It is important to note that Nvidia never considered using SLI in the way described earlier, with all the physics placed on one card and all the rendering on another.
The whole of SLI (and/or CrossFire) has always come down to one of two implementations:
Alternate frame rendering, where the cards take turns preparing whole frames. A dead-end branch, because frames differ in complexity, so their preparation cannot be parallelized ideally.
Preparing the same frame in parts (each part of the frame is rendered by its own video card).
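The two schemes above can be sketched as scheduling rules (an illustrative sketch, not actual driver code; function names and the even split are my own):

```python
# AFR (alternate frame rendering): whole frames alternate between GPUs.
# SFR (split frame rendering): every frame is split, each GPU renders a slice.

def afr_assign(frame_index, gpu_count=2):
    """Alternate Frame Rendering: frame N goes entirely to one GPU."""
    return frame_index % gpu_count

def sfr_assign(frame_height, gpu_count=2):
    """Split Frame Rendering: each GPU gets a horizontal slice of the frame.
    Returns (gpu, first_line, last_line_exclusive) tuples."""
    slice_h = frame_height // gpu_count
    return [(gpu, gpu * slice_h, (gpu + 1) * slice_h) for gpu in range(gpu_count)]

# AFR: frames 0, 2, 4... on GPU 0; frames 1, 3, 5... on GPU 1
assignments = [afr_assign(i) for i in range(4)]
# SFR: a 1080-line frame split into two 540-line slices
slices = sfr_assign(1080)
```

In practice the SFR split point had to be balanced dynamically (the sky half is cheaper than the ground half), and AFR suffered from exactly the uneven frame complexity the first item mentions.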
The second implementation, although it seemed ideal, turned out in practice to be not so rosy. The bandwidth of the SLI connector did not allow a twofold speedup. Later it also became economically pointless, once a single x70-tier card within a series was cheaper than two x50s. Which is how we arrived at what we have now: huge monolithic chips from Nvidia.
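Why the bridge bandwidth caps the speedup can be shown with a back-of-envelope calculation (the millisecond figures are illustrative, not measurements): halving the render work still leaves a fixed per-frame transfer and sync cost that does not shrink.

```python
# Illustrative Amdahl-style estimate: splitting a frame across two GPUs
# halves the render work, but compositing the halves over the SLI bridge
# adds a serial per-frame transfer/sync cost.
def sli_speedup(render_ms, transfer_ms, gpus=2):
    single = render_ms                      # one GPU does everything
    multi = render_ms / gpus + transfer_ms  # parallel render + serial transfer
    return single / multi

# 16 ms of render work, 3 ms of bridge transfer per frame:
speedup = sli_speedup(16.0, 3.0)   # about 1.45x, far from the ideal 2x
```

The faster the GPUs got, the smaller the render term became relative to the fixed transfer term, so the payoff of a second card kept shrinking.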
This is a rather curious phenomenon, considering that in processors the trend is exactly the opposite: in recent years the parallelization of processes and processors has only been gaining momentum, right up to the emergence of chiplets.
We took a wrong turn – Why AMD’s idea looks more interesting, even though it was catching up
And while I was going on about how SLI is ineffective, Nvidia is great and AMD is not, I missed the most important thing: over the last few generations (namely, since the release of RDNA), AMD has been actively moving from a single die to a chiplet design for its video cards.
So far this has happened only in part of the lineup, but the trend itself shows that the technology is by no means as dead-end as it seems. With a chiplet system, AMD can run a single production line turning out unified chips and cover the entire range of video cards, from the most budget to the most top-end, simply by placing, relatively speaking, a different number of chips on the board.
Already now, Navi 31, used in the 7900 XT, contains seven dies: one graphics chip and six cache-memory chips. That means that by Navi 4C there could be several dozen chips on one substrate, each doing its own calculations.
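The "one line, many tiers" idea can be sketched like this (the die counts and the compute-units-per-die figure are hypothetical, for illustration only, not AMD's real SKUs):

```python
# Illustrative only: with a chiplet design, the fab makes one compute die,
# and a product tier is just "how many dies go on the substrate".
COMPUTE_UNITS_PER_DIE = 16  # hypothetical figure for illustration

def sku_compute_units(die_count):
    """Total compute units for a SKU built from die_count identical dies."""
    return die_count * COMPUTE_UNITS_PER_DIE

lineup = {"budget": 1, "midrange": 2, "enthusiast": 4, "flagship": 6}
sku_sizes = {name: sku_compute_units(dies) for name, dies in lineup.items()}
```

One mask set and one production line then cover the whole stack; yield also improves, because a defect kills one small die instead of one huge monolithic chip.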
With an overall reduction in cost, the technology has almost unlimited scalability, which could turn AMD from the one catching up into the leader. But most interestingly, such an approach, split across separate chips, could finally bring hardware-accelerated physics back to us, and in a few years we may see objects that are not nailed to the floor.
At the same time, the reason AMD switched to chiplets is most likely banal: they hit the performance ceiling of their monolithic dies. Nvidia is somewhere around the same point (the very need to use DLSS to render games confirms this). And placing several chips on one board will most likely give a greater gain than connecting two cards with a cable, which could lead to a rather interesting, sharp jump in performance.
It's also funny that the modern chiplet system suspiciously resembles, in general terms, what things arrived at back in the days of the Cell processor in the PlayStation 3. Of course, progress and the avoidance of past mistakes play their role, but the observation is entertaining.