Our review of AMD’s 12-core Ryzen 9 3900X CPU, in five words:
Damn, this CPU is fast.
But keep reading, because the Ryzen 9 3900X is likely as significant, and as game-changing, as AMD’s original K7 Athlon series of CPUs, which crossed the 1GHz line first, or its Athlon 64 CPU, which ushered 64-bit computing into the desktop PC.
You’d think the Ryzen 9 3900X would have a hard time achieving the same greatness. It's true that it doesn't quite shake all the gaming-performance bugaboos of past generations. But we think when the dust settles, the CPU series will easily be a first-ballot, CPU hall of fame entry.
It is, after all, the first consumer x86 chip to be produced on a 7nm process node. Intel’s current desktop chips are still all built on a 14nm process node, and the company will just begin to move to 10nm later this year. We suspect the chip giant is a little envious that AMD reached this tiny die shrink first.
With that production technology lead, AMD breaks out a redesigned 2nd generation “Zen” core for the Ryzen 3000 that promises double the floating point performance over the previous Ryzen 2000 series, as well as a 15-percent increase in “instructions per clock” (think overall efficiency per clock).
On an even deeper level, AMD says it has improved instruction pre-fetching, further enhanced the instruction cache, and doubled the micro-op cache. The doubled floating point performance comes from widening the execution units to 256 bits, so AVX2 (256-bit Advanced Vector Extensions) instructions now execute natively in a single pass (and yes, Intel fans, we know some Core chips have AVX-512). AVX’s impact is mostly seen in video encoding today, but it can rear its performance head elsewhere too.
AMD has essentially doubled the L3 cache on the Ryzen 3000 chips, and the company is going for some Apple-esque marketing by calling it Game Cache. The cache, up to 70MB of combined L2 and L3 on the Ryzen 9 3900X, goes a long way toward reducing effective memory latency on the Ryzen 3000s. It also tends to boost gaming performance dramatically, so AMD feels calling it Game Cache can help the average consumer understand its benefits. Yes, that larger L3, err, Game Cache will also help application performance, but no one gets excited about App Cache, we guess.
Besides the cores, AMD has also significantly rejiggered its chiplet design. While the original Zen-based Ryzens were monolithic, with the CPU cores, memory controller, and PCIe controller all on a single 14nm die stitched together by Infinity Fabric, the Zen 2-based Ryzen 3000s split the design up: the cores live on one or two 7nm compute dies (CCDs), while the memory controller and PCIe 4.0 controller move to a separate IO die. Unlike the 7nm compute dies, the IO die is built on a 12nm process. This helps lower the overall cost of the CPU, because it saves AMD from spending valuable 7nm wafer capacity at its fab partner, TSMC, on IO circuitry that gains little from the shrink.
The thousand-dollar question is whether the gaming deficit that has dogged Ryzen performance from day one has finally been erased in situations where the GPU is not the limiting factor. We can say from what we’ve seen that Ryzen 3000 doesn’t quite win all of the time, but it’s so close now, even with Nvidia’s brutally fast RTX 2080 Ti driving it, that it just won’t matter 99 percent of the time.
Yes, we said PCIe 4.0, which is the next iteration of PCIe. PCIe 4.0 essentially doubles the clock speed and throughput over PCIe 3.0. AMD’s move to PCIe 4.0 is another feather in its cap, as Intel continues to stick with PCIe 3.0 speeds on its CPUs. Nvidia, likewise, “only” has PCIe 3.0-based GPUs.
While the performance benefit of PCIe 4.0 outside of SSDs won’t be easily realized today, the new standard does help free up more lanes and more ports elsewhere in the PC. If you want to take advantage of PCIe 4.0 SSDs, AMD’s Ryzen 3000 and the new X570 chipset are the only game in town.
You can read all about PCIe 4.0 in this explainer. If you’re confused to hear that PCIe 5.0 and PCIe 6.0 are already in earlier stages of development, remember that it takes time to go from initial spec to actual hardware. PCIe 4.0 is basically the only answer today, and a decent bragging point for AMD.
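That doubling is easy to sanity-check with back-of-the-envelope math. The sketch below is ours, not anything from AMD or the PCI-SIG (the function name and structure are our own): PCIe 3.0 runs at 8 GT/s per lane and PCIe 4.0 at 16 GT/s, both using 128b/130b encoding, so a four-lane NVMe link goes from roughly 3.9GB/s to roughly 7.9GB/s.

```python
# Rough PCIe throughput math: per-lane transfer rate times lane count,
# scaled by the 128b/130b line coding used since PCIe 3.0.
def pcie_bandwidth_gbps(transfer_rate_gt: float, lanes: int) -> float:
    """Approximate usable bandwidth in GB/s for a PCIe 3.0+ link."""
    encoding_efficiency = 128 / 130  # 128b/130b encoding overhead
    return transfer_rate_gt * lanes * encoding_efficiency / 8  # bits -> bytes

# PCIe 3.0 runs at 8 GT/s per lane; PCIe 4.0 doubles that to 16 GT/s.
gen3_x4 = pcie_bandwidth_gbps(8, 4)    # typical NVMe SSD link
gen4_x4 = pcie_bandwidth_gbps(16, 4)

print(f"PCIe 3.0 x4: ~{gen3_x4:.1f} GB/s")  # ~3.9 GB/s
print(f"PCIe 4.0 x4: ~{gen4_x4:.1f} GB/s")  # ~7.9 GB/s
```

Scale the lane count up to 16 and the same math yields the roughly 31.5GB/s a PCIe 4.0 x16 graphics slot could offer, if a GPU existed to use it.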
Boatloads of value
And yes, there’s still value. While Intel charges $488 for its flagship 8-core Core i9-9900K, AMD will give you 12 cores it claims are just as fast, if not faster, for $499, with a bundled RGB cooler too.
How much value? To find out just how much bang per buck you’re getting, we mapped out the CPUs by how much each thread costs. As you can see, AMD owns Intel in this category: at $21 per thread for the Ryzen 9 3900X, the Core i9-9900K’s $31 per thread isn’t even in the same ballpark.
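The math behind that comparison is simple division. Here’s a quick sketch using the list prices and thread counts cited above; the review’s chart rounds the results to whole dollars ($21 and $31).

```python
# Dollars-per-thread comparison using the list prices and thread
# counts cited in this review.
cpus = {
    "Ryzen 9 3900X": {"price": 499, "threads": 24},  # 12 cores / 24 threads
    "Core i9-9900K": {"price": 488, "threads": 16},  # 8 cores / 16 threads
}

for name, spec in cpus.items():
    cost_per_thread = spec["price"] / spec["threads"]
    print(f"{name}: ${cost_per_thread:.2f} per thread")
# Ryzen 9 3900X: $20.79 per thread
# Core i9-9900K: $30.50 per thread
```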
None of the cost-per-thread or fancy 7nm process matters, though, if the performance isn’t there. Let’s get on with what you came here for: to find out how fast the Ryzen 9 3900X is.
How we tested
For this review we decided to focus on three key CPUs. First, we use AMD’s 2nd-generation Ryzen 7 2700X as a baseline. Second, we bring in the main competitor at $488: Intel’s mighty Core i9-9900K. The last chip is none other than AMD’s $499 Ryzen 9 3900X.
We tested each CPU in parallel, with the Ryzen 7 2700X mounted in an MSI X470 Gaming M7 AC, the Core i9-9900K into an Asus Maximus XI Hero, and the Ryzen 9 3900X into an MSI X570 Godlike.
For graphics, the initial CPU and some game testing were performed with Founders Edition GeForce GTX 1080 cards. We ran additional game testing with Founders Edition GeForce RTX 2080 Ti cards.
All three builds used the latest available UEFI/BIOS and drivers and fresh installs of Windows 10 Professional 1903. This version of Windows is particularly important, because AMD says 1903 now includes scheduler optimizations that help it dispatch threads in a more efficient manner on Ryzen 3000.
Remember: AMD’s CPUs group their cores into clusters, with high-speed access inside a cluster but slower links between clusters. Older versions of Windows were never designed for multi-die designs, so the scheduler would dispatch one thread to one cluster and a second thread to a different cluster. That sapped performance, because the two threads then had to communicate across the slower link between clusters instead of simply running in the same one. That’s now fixed: Windows 10 1903 will send threads to the same cluster of CPU cores when possible. This, AMD claims, can yield up to a 15-percent boost. Officials also say the gain isn’t uniform across applications, however, so your mileage will vary.
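To make the scheduler change concrete, here’s a minimal sketch of the underlying idea: keep cooperating threads on one cluster of cores instead of spreading them around. This is our own illustration, not AMD’s or Microsoft’s code. It uses the Linux-only `os.sched_setaffinity` call (Windows exposes the same concept through affinity masks), and the four-core "cluster" is an arbitrary stand-in for a real CCX, since actual cluster boundaries depend on the CPU’s topology.

```python
# Illustrative sketch: pin the current process to one cluster of cores,
# mimicking what the Windows 10 1903 scheduler now does automatically
# when it co-locates sibling threads on Ryzen.
import os

# Take up to four of the cores we're allowed to run on as a stand-in
# for a single CCX-style cluster (an assumption, not real topology data).
available = sorted(os.sched_getaffinity(0))
cluster0 = set(available[:4])

# Restrict this process to that cluster so its threads stay close together.
os.sched_setaffinity(0, cluster0)
print("Running on cores:", sorted(os.sched_getaffinity(0)))
```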
While we used the same amount of DDR4 in dual-channel mode in all three builds, we did vary one aspect: the Core i9-9900K and Ryzen 7 2700X ran 16GB of DDR4/3200 CL14, while the Ryzen 9 3900X ran 16GB of DDR4/3600 CL15. We wanted to test the Ryzen 9 with its optimal memory clock of 3,600MHz. Due to time constraints, we’re initially showing only DDR4/3600 performance, and we’ll add DDR4/3200 results once time permits. AMD tells us, however, that DDR4/3200 CL14 should yield only single-digit percentage differences compared to DDR4/3600 CL15.
The other variable here is storage. The Ryzen 7 and Core i9 were both tested with very fast MLC-based Samsung 960 Pro 512GB SSDs at PCIe Gen 3 speeds. The Ryzen 9 3900X, however, is the first CPU and platform we’ve seen to support PCIe 4.0. Because it is a key feature of the new platform, we used a 2TB Corsair MP600 PCIe 4.0 SSD plugged directly into the CPU’s PCIe lanes. For the tests we’re running, storage should not impact CPU performance.
To MCE or not?
For this review, as with our original Core i9-9900K review, we were torn as to whether to enable the “multi-core enhancement” feature. MCE is a motherboard-enabled feature that runs Intel “K” CPUs at higher clock speeds, while using more power and producing more heat. What offends some is that MCE is technically a violation of the letter of Intel’s law and considered an “overclock.”
You’d think that makes turning it off an easy decision. The problem is, just about every mid- to high-end Intel motherboard ships with MCE set to auto out of the box. That means any review of these CPUs with the feature explicitly set to off isn’t quite an honest portrayal of the Core i9-9900K’s true speed as most consumers will experience it.
Leaving it on gets even stickier, because every motherboard maker implements it slightly differently. There’s no easy way to draw an exact bead on performance with MCE on.
In the end, our tests are all run with MCE off on the Intel CPU, and AMD’s somewhat similar Precision Boost Overdrive turned off as well. We’ll explore this in greater depth in another story, but for now note that running with MCE off tends to hurt Intel CPUs more than running with PBO off on AMD CPUs.
Got all that? Then keep reading to see charts, charts, and more performance charts.