Image reconstruction technologies have many followers, but also many detractors. In a utopian world, our graphics hardware would be able to render our games at very high resolutions with ray tracing enabled, delivering an outstanding visual finish and a very high sustained frame rate at the same time.
The problem is that, in practice, this scenario is out of reach. At least it is with most graphics hardware and with many games.
This is the context in which image reconstruction technologies have arrived, and they have landed with one purpose: to strike a balance between what current graphics hardware can do and what users ask of it.
It doesn’t matter if you look at NVIDIA’s DLSS technology, AMD’s FidelityFX Super Resolution (FSR), or any other image reconstruction procedure; all of them seek to offer us a visual finish with the highest possible quality, and, at the same time, a sustained frame rate of at least 60 FPS.
Image reconstruction technologies seek to balance what graphics hardware is capable of and what we ask it to do
However, despite sharing the same purpose, the strategy each image reconstruction technology uses to achieve it is different. And the result each of them delivers is also usually different.
We can currently classify them into two categories: those that resort to deep learning in an attempt to maximize image quality, and those that implement reconstruction using a spatial scaling algorithm. NVIDIA’s DLSS belongs to the first category; AMD’s FSR and NVIDIA’s Image Scaling belong to the second.
In practice, as we confirmed in our analysis of DLSS 2.0, this deep learning image reconstruction procedure works very well. It does so because, in most of the games that implement it, it offers very high image quality, often close to rendering at native resolution, along with a high frame rate.
However, this technology requires the graphics hardware to incorporate specialized functional units capable of efficiently solving highly parallelizable matrix operations, such as the Tensor cores featured in GeForce RTX graphics processors.
The link between DLSS and hardware means this image reconstruction technique is only available on GeForce RTX graphics cards. Fortunately, technologies that use spatial scaling, such as AMD’s FSR or NVIDIA Image Scaling, among other options, are less demanding on hardware and can be used on a very wide range of graphics cards.
A few weeks ago NVIDIA revealed that it had finalized revision 2.3 of DLSS, and today it has published a graphics driver update that implements a new spatial scaling algorithm that promises to take its Image Scaling technology to another level. And what does the new DLSS 2.3, which has just landed in ‘Doom Eternal’ and ‘Cyberpunk 2077’, promise us? According to NVIDIA, more precise handling of motion vectors (we will talk about them below) that seeks to reduce ghosting, as well as a more accurate recreation of particle effects, among other improvements.
Reconstruction through deep learning vs. spatial scaling
The starting point of DLSS (Deep Learning Super Sampling) technology is very different from that of the spatial scaling techniques we currently use. Leaving aside the more complicated details of how NVIDIA has implemented this innovation, what interests us is that DLSS works with a temporal buffer that gives the reconstruction algorithm access to the three frames prior to the one that needs to be reconstructed.
DLSS uses a temporal buffer, motion vectors, and an inference engine to recover as much detail as possible
It also uses mathematical objects known as motion vectors, which describe the displacement of elements that change position between two consecutive frames. Once all this information has been collected, the inference engine processes it using deep learning techniques in order to reconstruct a new image that incorporates the maximum possible level of detail.
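To illustrate the role motion vectors play, here is a minimal Python sketch. It is a toy, not NVIDIA’s implementation: the single-channel frames, the `(dx, dy)` vector layout, and the nearest-neighbour gather are all simplifying assumptions. It warps the previous frame along per-pixel motion vectors so that it lines up with the current frame, which is the kind of alignment a temporal reconstruction step needs before it can reuse detail from frame history.

```python
import numpy as np

def reproject(prev_frame, motion):
    """Warp the previous frame along per-pixel motion vectors so it
    aligns with the current frame (toy nearest-neighbour gather)."""
    h, w = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # motion[..., 0] holds dx and motion[..., 1] holds dy: how far each
    # pixel has moved since the previous frame (hypothetical layout).
    src_x = np.clip(np.round(xs - motion[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - motion[..., 1]).astype(int), 0, h - 1)
    return prev_frame[src_y, src_x]

# A frame whose brightness grows from left to right, and a scene that
# moved one pixel to the right: every motion vector is (dx=1, dy=0).
prev = np.tile(np.arange(4.0), (4, 1))
motion = np.zeros((4, 4, 2))
motion[..., 0] = 1.0
aligned = reproject(prev, motion)
```

Real implementations use bilinear sampling and reject samples that fail a depth or color test (disocclusions), but the gather above captures the core idea: motion vectors tell the reconstructor where each pixel came from.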
This, very roughly, is how DLSS works, but, as we have seen, there are also several spatial scaling algorithms (those used by AMD and NVIDIA are different) positioned as alternatives to this technology. All of them have in common the fact that they dispense with the analysis of multiple frames, the handling of motion vectors, and the subsequent processing by a deep learning engine. And they share a similar strategy when it comes to scaling images.
Most of them approach the reconstruction procedure by dividing it into two phases. During the first, they “stretch” the frame that needs to be scaled to fit the target resolution, “inventing” the missing pixels in a more or less ingenious way from the adjacent pixels. In the second phase, they analyze the resulting image and rebuild the edges of all objects in order to eliminate jagged edges and give the frame a higher level of detail.
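The two phases described above can be sketched in a few lines of Python. This is a deliberately minimal illustration under simplifying assumptions, not AMD’s or NVIDIA’s actual algorithm: phase one is a plain bilinear stretch that invents each missing pixel from its nearest neighbours, and phase two is a simple unsharp mask that firms up the edges the stretch softened.

```python
import numpy as np

def bilinear_upscale(img, scale):
    """Phase 1: stretch the frame, inventing missing pixels by
    blending the adjacent source pixels (bilinear interpolation)."""
    h, w = img.shape
    # Map each output pixel back to a fractional source coordinate.
    ys = (np.arange(h * scale) + 0.5) / scale - 0.5
    xs = (np.arange(w * scale) + 0.5) / scale - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None]
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def sharpen(img, amount=0.5):
    """Phase 2: unsharp mask; boost each pixel's difference from its
    3x3 neighbourhood average to firm up edges softened by the stretch."""
    padded = np.pad(img, 1, mode="edge")
    blur = sum(padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0
    return img + amount * (img - blur)

low_res = np.full((2, 2), 5.0)
frame = sharpen(bilinear_upscale(low_res, 2))  # flat regions stay flat
```

Real spatial upscalers such as FSR use more sophisticated edge-adaptive filters rather than a fixed unsharp mask, but the split into “stretch, then restore edges” is the same.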
Image reconstruction using spatial scaling is simpler than the processing carried out by DLSS. Its main advantage is that it does not require the graphics hardware to incorporate specific functional units, which places this technique within the reach of almost any relatively modern graphics card. As we have seen, DLSS only works on GeForce RTX graphics cards because, among other things, it requires Tensor cores.
DLSS technology is more complex and demanding on hardware than spatial scaling, but it gives us better results
Our graphics card reviews since the first GeForce RTX cards arrived have shown us that DLSS recovers more detail and is less sensitive to motion artifacts than spatial scaling techniques.
Still, both technologies continue to improve. As I mentioned above, NVIDIA recently released DLSS revision 2.3, and today it announces the arrival of a new spatial scaling and sharpening algorithm that becomes part of its Image Scaling technology.
Unlike DLSS, the Image Scaling reconstruction procedure works on all GeForce GTX cards, and this new revision, according to NVIDIA, has a beneficial impact on both performance and image quality. However, this is not all the company has presented today; it has also published ICAT, a new analysis tool designed to test the image quality provided by graphics cards.
We will soon use it to prepare a comparison in which we will analyze in detail the image quality delivered by NVIDIA’s DLSS 2.3 and Image Scaling reconstruction technologies and AMD’s FidelityFX Super Resolution.
More information | NVIDIA