I'm sorry if this post is boring; writing isn't my strongest point, and a quick, short post is all I'm capable of, but hopefully you'll get the idea. Let's recall the differences between our candidates, the RX 5700 and the RX 5700 XT. Both GPUs are based on the Navi 10 die, manufactured on TSMC's 7nm process. The non-XT is a cut-down version of the XT (36 vs 40 compute units). Quick napkin math gives a theoretical ~11% performance difference between the two cards at the same clocks. So what results are we actually seeing in games? (Results are taken from these videos: https://www.youtube.com/watch?v=oOt1lOMK5qY , https://www.youtube.com/watch?v=C8UMu5zJ_dU )
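The napkin math here is just CU-count scaling at equal clocks (real games obviously won't scale this cleanly, which is kind of the whole point of this post):

```python
# Theoretical throughput scales with CU count if clocks are equal
# and the shader array is the only limiter (a big "if").
cu_xt = 40      # RX 5700 XT compute units
cu_non_xt = 36  # RX 5700 compute units

advantage = cu_xt / cu_non_xt - 1
print(f"Theoretical XT advantage at same clocks: {advantage:.1%}")  # ~11.1%
```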
Results in Shadow of the Tomb Raider: https://imgur.com/a/WCkMRId , https://imgur.com/a/NPdm3Ap . Here, with a 4-5% core frequency advantage for the non-XT, we see even performance between the non-XT and the XT.
Basically, in any video game you'll see the same difference... none.
This reminds me of Vega 56/64, where we saw the same behavior in games with those GPUs. Clock-for-clock results were identical, which was explained by underutilization of Vega's CUs in games. What I'm trying to bring up is that I'm afraid we're going to see this kind of CU underutilization all over again with Big Navi. It should presumably come with 80 CUs; if we imagine a cut-down version with around 72 CUs, we might see the same performance controversy.
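For the hypothetical Big Navi scenario (80 CUs for the full die and ~72 CUs for a cut-down part are my guesses here, not confirmed specs), the same napkin math gives roughly the same theoretical gap that games might once again fail to show:

```python
# Hypothetical Big Navi SKUs -- both CU counts are speculation, not confirmed.
cu_full = 80  # guessed full die
cu_cut = 72   # guessed cut-down version

gap = cu_full / cu_cut - 1
print(f"Theoretical full-die advantage at same clocks: {gap:.1%}")  # ~11.1%
```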
P.S. If you have any ideas about this situation, please write them down. Maybe I'm just missing something about RDNA and GCN, some kind of bottleneck outside the CUs? Thanks, I have no idea how you read this whole mess above, gj.