Oh look! A test post! I haven't made one of these in a very long time. That's right. Now you can read to your heart's content the crap I am about to type. YES!
RX 5500 XT?
Uh, okay. Well, I bought an RX 5500 XT. And you might be like: "Sash, why did you buy an RX 5500 XT when your RX 590 is technically faster in games?". Well, you're kind of right, and also kind of wrong (I'll come to that in a sec). Anyway, if you read that post I linked above you'll realise that not all my decisions concerning my PC are FoR ThE HiGhEsT FrAmErAtE. Long story short, the 5500 XT 8GB provides the performance of my RX 590 (which I am happy with) at half the power use, whilst also having 8GB. Nvidia can't offer that memory capacity until you hit the £330+ mark, and the RX 580 isn't exactly an efficient card. I also wanted to do this test, so there you have it.
So I took some games I usually bench, along with Warframe and did some tests. And then threw together some charts. But first I'll tell you my system configuration...
Ryzen 9 3950X @ stock (Be Quiet Shadow Rock TF 2)
2x8GB DDR4-3600 C15-15-15-35-CR1-tRFC485
Asrock X570M Pro4
AMD Radeon Driver 20.2.2
Windows 10 1909
It should be a bit better at keeping these tests GPU-bound than the Ryzen 3 1200 I used to bench with. I still have that chip, by the way; I'm considering what to do with it, and might keep it as a spare. Anyway, let's talk notes. I always like to put some notes before the test, and this is no different, because there is a big note here regarding my RX 590 that you might have noticed from the title of the post...
My RX 590 is derpy. In a recent post I mentioned some performance variation, and this seems to remain the case. Essentially, it provides a Firestrike result about 5% lower than the average RX 590's; it's not far off a heavily overclocked RX 580 in terms of Firestrike graphics score. I didn't test other games before the variation appeared (too busy playing them...), and 5% is not a whole lot, but it does make a difference in synthetic tests like this. Whether this is an AMD driver, hardware, or 3DMark / software issue, I do not know. I just thought I'd point it out (worst case: add ~5% to the results here). Either way, rather than rage quit the entire test I thought I'd go ahead, since this is the performance I was actually getting with the RX 590.
RTSS is enabled in all tests.
I have a second monitor connected in all tests.
All games are set to their highest preset at 1920x1080. This is Ultra, or 'Crazy' in Ashes of the Singularity's case.
GameWorks Light Shafts are enabled in Batman: Arkham Knight. (I don't care, it's between two Radeons and I'd play it with them enabled).
HairWorks is enabled in Metro Exodus. (See above).
All games use a built-in benchmark, aside from Warframe, whose result is taken by walking through Fortuna. Deferred rendering is enabled.
It's results time! Rather than spend 87 years making a ton of charts, I'll put them all together in one big one for your viewing pleasure.
Well, that probably isn't what you were expecting if you came to this post from a popular YouTube reviewer or publication, because most of their results have the RX 5500 XT slightly behind the RX 590 on average. That is, unless you noticed that my RX 590 result says "derpy". I can't actually confirm whether the results are derpy in normal games, but keep that in mind: my Firestrike score dropped about 5%, and it was consistent.
If you're mad that my RX 590 result isn't perfect, then you can just add 5% to its results, which probably puts it in line with a 'Not Derpy' RX 590. In any case, I got a slight increase in performance that I wasn't even after, since I was already happy with the RX 590's, but the power this card uses to deliver it is a lot lower.
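Since I keep saying "add ~5%", here is that correction as a trivial sketch, in case you want to apply it to the chart yourself. The game names and FPS values below are made-up placeholders, not numbers from my charts.

```python
# Scale my derpy RX 590 results up by ~5% to approximate a healthy card.
# The FPS values are illustrative placeholders, not my actual results.

def un_derp(fps, factor=1.05):
    """Estimate a 'Not Derpy' RX 590 result from a derpy one."""
    return round(fps * factor, 1)

derpy_results = {"Game A": 80.0, "Game B": 64.0}  # placeholder FPS numbers
corrected = {game: un_derp(fps) for game, fps in derpy_results.items()}
print(corrected)
```

Obviously this is a fudge, not a measurement, but it's the honest worst case for this comparison.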
The RX 5500 XT is quite a bit more efficient. For this efficiency chart I analysed the software readings of the GPU-only power reported by the AMD driver in games. Since I spend a lot of time with my eyes glued to my OSD, I know pretty much what the RX 590 uses in various games, so I'll give an average range from what I saw, versus the same for the RX 5500 XT in these test scenes. If you don't like me eyeballing the results, you don't have to read this bit. ¯\_(ツ)_/¯
Keep in mind this is GPU-only power draw. It doesn't include the fans or the memory chips (or the tiny LEDs in the RX 590's case). For the RX 590, you can add around 25-30 W for the 8x 1GB GDDR5 DRAM chips. The RX 5500 XT has only four GDDR6 DRAM chips, each with 2GB capacity; this memory subsystem should be more efficient, so you can add around 15-20 W for the board power including memory. Both estimates put my readings in line with the official TBP for each card.
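To make the back-of-envelope maths concrete, here's a quick sketch of the estimate: the GPU-only software reading plus an assumed memory/board overhead, then FPS delivered per watt. All the wattage and FPS numbers here are illustrative placeholders picked from the ranges above, not my actual readings.

```python
# Rough efficiency estimate: software GPU-only power reading plus an
# assumed memory/board overhead, then frames delivered per watt.
# All numbers are illustrative placeholders, not my real measurements.

def board_power(gpu_only_watts, memory_overhead_watts):
    """Total estimated board power from the GPU-only software reading."""
    return gpu_only_watts + memory_overhead_watts

def perf_per_watt(avg_fps, total_watts):
    """Average FPS delivered per watt of estimated board power."""
    return avg_fps / total_watts

# Hypothetical scene where both cards average ~60 FPS:
rx590 = perf_per_watt(60, board_power(160, 28))   # ~25-30 W for 8x GDDR5
rx5500 = perf_per_watt(60, board_power(85, 18))   # ~15-20 W for board + GDDR6
print(f"RX 590:     {rx590:.2f} FPS/W")
print(f"RX 5500 XT: {rx5500:.2f} FPS/W")
```

At the same framerate, fewer total watts means more FPS per watt, which is the whole point of the swap.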
This is why I love this card. It also means less heat is dumped into my case: less heat warming the NVMe drives right under the card, and the various other components and SMDs around that area. I'm not even concerned about noise; the RX 590 wasn't loud, because its heatsink is a lot larger, but it still had to dump all that thermal energy into my case.
But Sash, you're stupid you wasted money on the same performance!
( ͡ಠ ʖ̯ ͡ಠ)
Conclusion and things.
Sash got the same performance with a much lower power draw, plus better features and DirectX support. Sash is happy, because s/he also gets to try out a new GPU s/he hasn't owned before. RDNA is cool; I really want to type a babble on it soon. Also...
I am going to type a mini Tech-Babble on Navi 14 soon(ish), too. Maybe today, maybe not. I was going to do it here, but I want to play games on the new card and tweak it a bit with auto-undervolting and a reduced power limit. I think this card has the potential to be significantly more efficient than any other budget GPU on the market, given the 7nm process, the presence of only 4 DRAM chips on a 128-bit interface, and Navi's tendency to gain a lot more performance per watt with scaled-back clocks than Pascal or Turing (the pre-update RX 5600 XT is the most efficient video card available).
The high voltage and lacklustre (I mean, it's not that bad, but I get it) performance per watt versus Nvidia's 12nm processors is definitely an AMD thing, not a TSMC 7nm thing. Like Vega, Navi's efficiency curve really becomes a wall above about 1600 MHz.
Anyway, I'm happy because I feel most comfortable not spending over £200 on a video card. Every single time I spend more, I always sell it because it's too much performance, too expensive and scares me O_o.
It's not cheap; I prefer the term 'cost effective'. Except, if I'm brutally honest, I paid £185 for this card, and if you don't see the 8GB + perf/watt combination niche that I wanted (the former versus the 1650 Super, the latter versus Polaris), it's really not amazing value. Not terrible, just not great.
I'll stop babbling now.