Dynamic upscaling is a major feature of modern games and the latest graphics cards, but there are several competing technologies to choose from. Intel’s Xe Super Sampling (XeSS), Nvidia’s Deep Learning Super Sampling (DLSS), and AMD’s FidelityFX Super Resolution (FSR) each take their own approach, and they differ in performance, visual quality, game support, and hardware support.
Although there’s an argument for simply turning on whatever your hardware and games support, it’s important to know how these technologies differ if you’re choosing between them or weighing different graphics cards based on their XeSS, DLSS, and FSR support. Here’s a breakdown of these upscalers and which one might be the best fit for you.
In general, DLSS leads the pack in image quality thanks to its AI approach, but it’s no longer the clear leader because of FSR 2.0. The original implementation of FSR was pretty mediocre, but the 2.0 update puts it on nearly equal footing with DLSS. We really like FSR 2.0 for its hardware support, as it works on nearly every GPU made in the past five years.
XeSS is a little different. Unlike DLSS and FSR, there isn’t one definitive version. Instead, there’s an Intel Arc-exclusive version of XeSS that takes advantage of the XMX cores on Arc GPUs, and a vendor-agnostic version that, like FSR, doesn’t require any dedicated AI hardware.
So, where does XeSS fit in? Well, the AI-powered version is significantly behind DLSS and FSR 2.0 in terms of image quality, which is basically the same position FSR 1.0 occupied: not terrible, but not amazing, either. When we compared the Performance modes of XeSS and DLSS in Shadow of the Tomb Raider, we found that DLSS had superior quality in general. We haven’t yet tested XeSS against FSR 2.0, but the conclusion would probably be similar, as FSR 2.0 is usually just as good as DLSS.
For detailed images, have a look at our Shadow of the Tomb Raider XeSS Performance comparison.
Higher quality modes of XeSS aren’t all that much better, either. In this shot from Hitman 3, you can see that in every mode, the foliage is blurrier than at native resolution. For the performance gains (which we’ll discuss in a moment), the trade-off in image quality isn’t impressive.
For detailed images, have a look at our Hitman 3 XeSS comparison.
We expected XeSS to be more comparable to DLSS thanks to its use of AI hardware, but this clearly isn’t the case. At the same time, FSR 2.0 proves that AI hardware isn’t even necessary to make a good upscaler (though FSR 2.0 isn’t without its problems). Intel’s approach to upscaling occupies an awkward middle ground between AMD and Nvidia: one version utilizes AI hardware like DLSS, while the other works like FSR on GPUs without it.
Both DLSS and FSR had bad starts, though, so there’s no reason to believe XeSS can’t eventually become good. Hopefully, XeSS 2.0 will put Intel on equal footing with its competitors once people start buying the new Arc GPUs.
Performance is the other side of the upscaling coin. An upscaler isn’t worth using if it looks terrible, but it also needs to make a real impact on FPS; otherwise, you might as well run at native resolution. It comes down to how much image quality you’re willing to sacrifice for a higher frame rate, which is why all of these upscalers offer different modes that let you tune quality and performance to your tastes.
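These quality modes all work the same basic way: the game renders internally at a lower resolution, and the upscaler reconstructs the output resolution from it. As a rough sketch, here are the internal resolutions that fall out of the commonly published DLSS/FSR 2.0 preset scale factors (the exact factors are an illustrative assumption here, not vendor-exact values for every upscaler):

```python
# Sketch: internal render resolution for common upscaler quality modes.
# Scale factors follow the commonly published DLSS/FSR 2.0 presets;
# treat them as illustrative assumptions, not vendor-exact values.
QUALITY_MODES = {
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
    "Ultra Performance": 3.0,
}

def internal_resolution(out_w, out_h, mode):
    """Return the (width, height) the GPU actually renders before upscaling."""
    scale = QUALITY_MODES[mode]
    return round(out_w / scale), round(out_h / scale)

# At 4K output, Performance mode renders internally at 1080p:
print(internal_resolution(3840, 2160, "Performance"))  # (1920, 1080)
```

The lower the internal resolution, the bigger the frame-rate gain and the harder the upscaler has to work to hide artifacts, which is exactly the trade-off the quality modes expose.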
Since Nvidia GPUs support XeSS, FSR, and DLSS, they’re the ideal cards to compare performance between each upscaler (minus the AI-powered version of XeSS for Intel Arc). In our Intel Arc A770 and A750 review, we tested the RTX 3060 in Shadow of the Tomb Raider and Hitman 3 using all available quality modes for XeSS and DLSS, and the results are pretty conclusive.
In Shadow of the Tomb Raider, XeSS improved performance by up to 43% in its Performance mode, but DLSS delivered a 67% frame rate increase with its own Performance mode. In Ultra Performance mode, DLSS doubled the frame rate, a much larger improvement than XeSS could deliver. When you consider the image quality difference between each upscaler’s Performance mode (which we showed in the previous section), DLSS is the clear winner.
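To put those percentages in concrete terms, here’s the arithmetic against a hypothetical 60 fps native baseline (the baseline is our own illustrative assumption, not a figure from the review):

```python
# Illustrative frame-rate math for the uplift figures quoted above.
# The 60 fps native baseline is a hypothetical number for this example.
def boosted_fps(native_fps, percent_gain):
    """Frame rate after an upscaler's percentage improvement."""
    return native_fps * (1 + percent_gain / 100)

native = 60
print(round(boosted_fps(native, 43), 1))   # XeSS Performance: 85.8
print(round(boosted_fps(native, 67), 1))   # DLSS Performance: 100.2
print(round(boosted_fps(native, 100), 1))  # DLSS Ultra Performance (doubled): 120.0
```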
It’s a similar story in Hitman 3. The margins here are basically the same as in Tomb Raider except for DLSS’s Ultra Performance mode, which couldn’t double the frame rate. Even without Ultra Performance, though, DLSS is still the clear winner when it comes to performance.
We should also note that DLSS stands to get even faster with the upcoming 3.0 version, which brings AI-generated frames into play. Nvidia promises big performance gains with DLSS 3, but game support will be limited for a while, and in our testing with the RTX 4090, image quality takes a noticeable hit. DLSS 3 isn’t an existential threat to XeSS at the moment, but it’s not great for Intel to lack a feature that might become more useful in the future.
As for FSR 2.0, it’s usually about on par with DLSS in performance. While we haven’t tested it directly against XeSS, it’s likely we’d see FSR well in the lead and XeSS significantly behind, just as we do with DLSS versus XeSS. FSR doesn’t have AI-generated frames like DLSS 3, however, and it’s not clear how AMD will bridge that gap, since its GPUs have no AI hardware, at least for now.
Still, FSR 2.0 was good enough at launch that we started to consider whether DLSS was even necessary anymore. DLSS 3 might change that if you can afford an RTX 4000 series graphics card, but considering most can’t, that may leave FSR as the upscaling king long-term.
DLSS is the oldest of the three upscaling technologies, and unsurprisingly, it supports the most games. It’s available in dozens of titles, including Cyberpunk 2077, Marvel’s Avengers, and Outriders, and Nvidia is constantly adding support for new games. Support is tiered, however: the greatest number of games support DLSS 1 and 2, while DLSS 3 support remains limited for now.
FSR is much newer, but that hasn’t held it back from growing an impressive list of supported titles. At the time of publication, the heavy hitters are God of War, Deathloop, and Red Dead Redemption II. FSR 2.0 support is also planned for Hitman 3, Microsoft Flight Simulator, and upcoming games like Forspoken and Uncharted’s PC port.
Generally speaking, if a game has FSR, it’ll have DLSS, and vice versa, though older titles that came out before FSR will often only have DLSS. It seems like we’ll see a similar trend with XeSS, as several games that have XeSS or will support it in the near future also have DLSS and FSR 2.0. For example, both of the games we tested for image quality (Shadow of the Tomb Raider and Hitman 3) support DLSS and XeSS.
The biggest difference between DLSS, FSR, and XeSS is hardware support — and it may be the difference that defines which is the best upscaling option. DLSS requires an Nvidia RTX graphics card. Not only is the feature limited to Nvidia hardware, but it’s also limited to the latest generations of Nvidia hardware: specifically, you need at least an RTX 2000 card to use DLSS at all and an RTX 4000 to use DLSS 3.
That’s because DLSS requires the Tensor cores on recent Nvidia graphics cards, which handle the AI calculations. FSR doesn’t use AI, so it doesn’t require any particular hardware. The strength of FSR isn’t broader game support or better image quality than DLSS, because it has neither of those advantages; it’s that anyone can use it.
Outside of graphics cards from AMD and Nvidia, FSR also works on integrated graphics, APUs, and graphics cards several generations old. There’s a quality trade-off, but most gamers don’t have a recent Nvidia graphics card; the majority are still using older GPUs, AMD cards, or integrated graphics.
XeSS strikes a compromise between the two. Like DLSS, XeSS uses dedicated cores (called XMX cores on Intel graphics cards) to handle the AI calculations. The full version of XeSS requires these cores, so it will only work on Intel graphics cards. But Intel is making two versions.
This is something we wanted to see from DLSS. Essentially, Intel is offering developers two different versions of XeSS: one that requires the dedicated XMX cores and another that’s a general-purpose solution for a “wide range of hardware.” In theory, it’s the best of DLSS and FSR mashed into one, but there is one clear problem: two versions mean twice the work for developers if XeSS is difficult to implement, so developers may not adopt it widely in their games.
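In practice, an engine integrating XeSS would pick a code path at runtime based on the GPU it finds. The sketch below illustrates that dispatch; the function and field names are hypothetical (Intel’s general-purpose fallback uses DP4a integer instructions, which is how we label it here):

```python
# Hypothetical sketch of how an engine might pick an XeSS code path.
# Field names are illustrative; Intel's real SDK handles this internally.
def select_xess_path(gpu):
    if gpu.get("has_xmx"):
        # Intel Arc: dedicated XMX matrix engines run the full AI model.
        return "xmx"
    if gpu.get("supports_dp4a"):
        # Other vendors: general-purpose fallback using DP4a instructions.
        return "dp4a"
    # No suitable path: the game falls back to another upscaler or native.
    return None

select_xess_path({"has_xmx": True})        # returns "xmx"
select_xess_path({"supports_dp4a": True})  # returns "dp4a"
```

The upside of this design is that one integration can serve both paths; the downside, as noted above, is that developers have to validate quality and performance on both.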
A rough start for player three
DLSS leads the pack in quality and game support. If it worked across multiple generations of graphics cards from different brands, it would make FSR obsolete. Instead, FSR is filling the void that DLSS can’t fill by offering similar image quality and performance boosts to a wider variety of cards. It’s a classic closed-source versus open-source battle, and it’s difficult for either solution to make the other obsolete when each one has distinct advantages and disadvantages.
XeSS tries to one-up both DLSS and FSR by combining the best of each: DLSS’s AI-powered image quality and FSR’s wide compatibility. Unfortunately, the visual fidelity and performance gains aren’t in XeSS’s favor. At the moment, XeSS serves no purpose for Nvidia users because DLSS is simply better; for AMD and Intel users, XeSS is only appealing if FSR 2.0 isn’t an option.
A future version of XeSS might change things, or rather, a future version of XeSS needs to change things. We saw it with DLSS and FSR: the second version actually made the feature worth using. XeSS is in the exact same position, and hopefully, Intel will be able to replicate Nvidia’s and AMD’s progress.