I ran a simple test on my laptop with the Ryzen 7 PRO 2700X in it. The goal was to determine whether disabling a CCX on the processor, and using the spare thermal and electrical headroom to increase the frequency of the remaining cores (speed over throughput), provides higher performance in Warframe in the "corpus.outbreak" scenario: a simple scene with lots of AI activity that measures how quickly the CPU can spit out frames. It is not the be-all and end-all test for Warframe, and the result should be taken as academic in nature. Still, the insight is useful, and the result seems to favour the wider but lower-clocked configuration.
In single-CCX mode, I set all the cores in CCX0 (which contains the best core according to Ryzen Master) to 4025 MHz and locked them there. In this mode, the cores have up to 8MB of addressable L3 cache and there is no inter-CCX penalty, which is often cited as one of the causes of the "lower performance" of earlier Zen-based processors in video games.
In the second mode, the CPU operates at stock parameters: ~3.7 GHz on all 8 cores, with both CCXs enabled. Individual cores still only have up to 8MB of fast addressable L3 cache, but there are now twice as many cores, and in scenarios where all of them can be used, "effective" use of up to 16MB of L3 can be made.
My conclusion is that since the modern Windows 10 scheduler understands the CCX topology of all Ryzen processors, and the game seems to make effective use of ~8 threads, offloading work to physical cores instead of logical SMT ones improves frame rate by freeing up execution time on the cores that run latency-sensitive threads such as the render thread.
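As an aside, the single-CCX mode here was set in BIOS, but a softer approximation can be tried from software by restricting a process's CPU affinity to the logical CPUs belonging to CCX0. The sketch below uses Linux's `os.sched_setaffinity` (on Windows the equivalent would be `SetProcessAffinityMask` or `start /affinity`); the `CCX0_CPUS` mapping is an assumption, not something verified in this test:

```python
import os

# Assumption: with SMT enabled on a 2700X, the first 8 logical CPUs
# correspond to CCX0's four cores and their SMT siblings. Verify the
# real mapping with a topology tool (e.g. lscpu, hwloc, Ryzen Master).
CCX0_CPUS = set(range(8))

def pin_to_ccx0(pid=0):
    """Restrict a process (0 = the calling process) to CCX0's logical CPUs.

    Linux-only: os.sched_setaffinity is not available on Windows, where
    SetProcessAffinityMask or Task Manager's affinity dialog does the job.
    """
    os.sched_setaffinity(pid, CCX0_CPUS)
    return os.sched_getaffinity(pid)
```

Note that this only confines the threads to one CCX; unlike disabling the CCX in BIOS, it does not free up thermal headroom reserved for the idle half of the chip, so the two approaches are not fully equivalent.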
A nice test showing that throughput wins over latency and raw clock speed here, and that the reign of the "4 core, 8 thread" CPU is truly over, even in fast-running popular games like Warframe. My prediction is that this trend will continue, dispelling many of the arguments that the Ryzen 1000 and 2000 series specifically are "poor for gaming".
Note: as the scene is randomised, the frame rate varies from moment to moment depending on how many mobs are in view. The instantaneous frame rate, as such, is not a great metric for performance analysis, but the entire length of the video should provide some insight. I will update this with frame-time and frame-rate performance logs from both tests at some point.
Update: here is the updated video with the results of the frametime/framerate analysis. I have also included the full scene so you can see what the game was rendering at the time.
Results: (no, you don't get a fancy graph)
(2 CCX, stock): 26190 frames rendered in 169.328 s
Average framerate : 154.6 FPS
Minimum framerate : 63.7 FPS
Maximum framerate : 292.3 FPS
1% low framerate : 107.8 FPS
0.1% low framerate : 1.4 FPS
(1 CCX, 4 GHz): 25042 frames rendered in 167.250 s
Average framerate : 149.7 FPS
Minimum framerate : 52.7 FPS
Maximum framerate : 314.5 FPS
1% low framerate : 104.9 FPS
0.1% low framerate : 1.4 FPS
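For anyone wanting to reproduce these numbers from their own frametime logs: the statistics above can be derived from a list of per-frame times in milliseconds. The exact definition of "1% low" varies between capture tools; this sketch uses the percentile-of-frametimes convention (1% low FPS = 1000 / 99th-percentile frametime), and the function names are illustrative, not from the tool used for the capture:

```python
def percentile(sorted_vals, p):
    """Nearest-rank percentile of an already-sorted list (p in 0..100)."""
    if not sorted_vals:
        raise ValueError("empty frametime log")
    k = max(0, min(len(sorted_vals) - 1,
                   round(p / 100 * len(sorted_vals)) - 1))
    return sorted_vals[k]

def frame_stats(frametimes_ms):
    """Summarise a frametime log the way the results above are reported."""
    total_s = sum(frametimes_ms) / 1000.0
    fts = sorted(frametimes_ms)
    return {
        "frames": len(frametimes_ms),
        "duration_s": total_s,
        "avg_fps": len(frametimes_ms) / total_s,
        "min_fps": 1000.0 / fts[-1],   # slowest single frame
        "max_fps": 1000.0 / fts[0],    # fastest single frame
        "1%_low_fps": 1000.0 / percentile(fts, 99),
        "0.1%_low_fps": 1000.0 / percentile(fts, 99.9),
    }
```

The gap between the average and the 0.1% low in both runs shows why percentile lows matter: a handful of very long frames barely move the average but dominate the perceived stutter.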