
Largely yes, but the 7800X3D ($370->$300 on sale, 120W TDP) still beats the 14900KS since the extra L3 cache is more important than clock for most games: https://www.techpowerup.com/review/intel-core-i9-14900ks/18....

(As mentioned in another post here, for even more CPU-intensive games like Factorio, the difference is even starker.)

For general mixed workloads (assuming you need more threads), I think the 7950X3D is the way to go - 8 3D VCache cores for similar gaming, 20% cheaper than the 14900KS, generally neck and neck on development workloads but also has AVX512 support, and of course all while using significantly less power (also 120W TDP, 160W PPT - you can do even better w/ PBO+UV or Eco mode). Here's the OOTB power consumption comparison, it's a bit bonkers: https://www.techpowerup.com/review/intel-core-i9-14900ks/23....



None of these benchmarks are useful unless they’re maxing out memory speeds. If you’re buying the top-of-the-line CPU, you’re also buying the fastest RAM.

7950x3d is limited to ddr5 6000 while 14900 can do ddr5 8000.

The benchmarks you shared are using ddr5 6000. So by upgrading the ram, the 14900 should come out on top. The memory controller is a key part of the cpu. It makes sense to test them with the best components they are able to use. I know it probably doesn’t matter, but if you’re chasing those last 5% of performance fast ram is a better value than custom water cooling.


I would love to see some data to back that up. I expect L3 cache to have a much more drastic difference than memory bandwidth.


AMD CPUs idle at high power, which kinda negates their efficiency under load. Their G-series desktop CPUs are the opposite: they idle very, very efficiently.


While the G series (basically mobile chips) do idle a lot lower, here's a chart that shows that Ryzen chips idle at about the same level as their Intel high-end desktop counterparts (a few watts less actually, but negligible): https://www.guru3d.com/review/amd-ryzen-7-8700g-processor-re...

One thing to keep in mind is that while slightly higher idle power might cost you a bit more, high power when processing means that you will also need a much beefier cooling solution and have higher noise levels and exhaust more heat to boot. Again, from the TechPowerup review https://www.techpowerup.com/review/intel-core-i9-14900ks/22.... they show that for a multi-threaded blender workload, the 7950X3D sits at 140W, while the 14900KS hits 374W. I don't believe there's a single consumer air cooler that can handle that kind of load, so you'll be forced to liquid cool (or go even more exotic).


I find a lot of contradictory info about the idle consumption of AMD chips; different graphs show different numbers (so I do not trust the source you've linked, as my own 12500 machine idles at about 25 W), but the overall consensus on Reddit and other sources is that a typical AMD system idles about 10 watts higher than a comparable Intel one. Some posts claim that idle dissipation for AMD (CPU only) reaches 55 W. My own experience with Ryzens (a 3600, last time) is that they do indeed idle (and run lightly loaded) at higher power. For my typical usage scenario, where the CPU does nothing 80-85% of the time, it matters.


For anyone that doesn't need the highest performance and where efficiency is super important, I can recommend the current generation of mini PCs/NUCs that run mobile chips. Last summer I picked up a Ryzen 7 7940HS-based mini PC that idles at about 10W (from the wall), has a decent iGPU (the Radeon 780M is about on par with a mobile GTX 1650), and its 65W-max Zen 4 cores (8x) actually manage to be very competitive with my old custom 5950X workstation: https://github.com/lhl/linuxlaptops/wiki/Minisforum-UM790-Pr...

Intel Meteor Lake NUCs should perform similarly, but they tend to be a few hundred dollars more for basically the same performance (the Minisforum EliteMini UM780 XTX barebones is currently $440, the cheapest Core Ultra 7 155H minipc I could find was the ASRock Industrial NUC BOX-155H at $700). At this point though, personally, I'd wait for the upcoming Zen5/RDNA3.5 Strix Point APUs, which should be the next big jump up in terms of performance.


Definitely agree on the all-over-the-place metrics for AMD. This is somewhat complicated by the chipset. The X570 chipset by itself used something like 7-8 extra watts over the X470, because it was the CPU's repurposed I/O die (the part responsible for most of the CPU's idle power) turned into a chipset.

Different motherboards and settings are sort of a hidden factor in this in general it seems.


Don't ignore that "idle" isn't a real thing. Most Reddit users complaining about high idle consumption have a program causing the problem. For me, shutting down the Steam program took my Ryzen 3600 from 22 watts "idle" to 2 watts idle.

There is no such thing as idle in a modern desktop.


This problem also shows up in Intel's latest Meteor Lake laptop processors, which are supposed to be able to power off the CPU chiplet and idle with the two low-power cores on the SoC chiplet. In practice, most OEMs ship too much crapware on their laptops for that capability to kick in, and it's really hard to get Windows cleaned up enough to get the battery life the chip ought to provide.


LOL


Do the X3D chips with mixed 3D and normal cache still have scheduling issues where games can use the slow cache? Back when they were new I heard you want the 8 core version so you have no chance of that happening.


There have been updates on core parking ("Game mode") in Windows so it's probably fine, but I think for max perf, people are still using Process Lasso (I don't use a mixed X3D chip myself, so I haven't paid super close attention to it).
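For what it's worth, on Linux the core of what Process Lasso does (keeping a game on the V-Cache CCD) can be approximated with a plain CPU affinity mask from the standard library. A minimal sketch, under the assumption that the V-Cache CCD maps to CPUs 0-7; the real mapping on any given system should be verified with `lscpu -e` or `/sys/devices/system/cpu/cpu*/cache/index3/shared_cpu_list`:

```python
# Sketch: pin a process to the (assumed) V-Cache CCD on Linux,
# analogous to what Process Lasso does on Windows.
# ASSUMPTION: the V-Cache cores are CPUs 0-7; check your topology.
import os

VCACHE_CPUS = set(range(8))  # hypothetical layout, verify per system


def pin_to_vcache(pid: int = 0) -> set:
    """Restrict `pid` (0 = current process) to the V-Cache cores,
    keeping only cores the process is actually allowed to use."""
    available = os.sched_getaffinity(pid)
    target = VCACHE_CPUS & available
    if target:  # only apply if the assumed cores exist here
        os.sched_setaffinity(pid, target)
    return os.sched_getaffinity(pid)


if __name__ == "__main__":
    print("now running on CPUs:", sorted(pin_to_vcache()))
```

On Windows the same idea goes through `SetProcessAffinityMask` (or Task Manager), but the scheduler-driven core parking in Game Mode is usually enough these days, per the comments above.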


I do have a 7950X3D.

It’s improved a lot, I still use Lasso but strictly speaking I don’t really need to.


That is still a problem, and it's the reason the 7800X3D is as good as the 7950X3D for gaming. But if you do other things as well, you can go with the 7950X3D; it's more expensive, though.


.. because the cache gives you better single threaded performance?


In many cases yes. Some single-threaded workloads are very sensitive to e.g. memory latency. They end up spending most of their time with the CPU waiting on a cache-missed memory load to arrive.

Typically, those would be sequential algorithms with large memory needs and very random (think: hash table) memory accesses.

Examples: SAT solvers, anything relying on sparse linear algebra
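A quick way to feel this effect is to sum the same array twice, once in sequential index order and once in shuffled order: the shuffled pass defeats the hardware prefetcher and spends its time on cache misses. A rough sketch (CPython's interpreter overhead mutes the gap compared to compiled code, so the timings are illustrative only):

```python
# Sketch: sequential vs. random access over the same data.
# On a large enough array the shuffled pass is dominated by
# cache misses; both passes compute the identical sum.
import random
import time

N = 2_000_000
data = list(range(N))
seq_idx = list(range(N))
rnd_idx = seq_idx[:]
random.shuffle(rnd_idx)


def timed_sum(indices):
    """Sum `data` in the given index order, returning (total, seconds)."""
    t0 = time.perf_counter()
    total = sum(data[i] for i in indices)
    return total, time.perf_counter() - t0


seq_total, seq_t = timed_sum(seq_idx)
rnd_total, rnd_t = timed_sum(rnd_idx)
assert seq_total == rnd_total  # same work, different access pattern
print(f"sequential: {seq_t:.3f}s  random: {rnd_t:.3f}s")
```

The same structure is why a big L3 helps so much in the workloads above: it turns a large fraction of those random misses into cache hits.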


Obscure personal conspiracy theory: The CPU vendors, notably Intel, deliberately avoid adding the telemetry that would make it trivial for the OS to report a % spent in memory wait.

Users might realize how many of their cores and cycles are being effectively wasted by limits of the memory / cache hierarchy, and stop thinking of their workloads as “CPU bound”.


Arm v8.4 onwards has exactly this (https://docs.kernel.org/arch/arm64/amu.html). It counts the number of (active) cycles where instructions can't be dispatched while waiting for data. There can be a very high percentage of idle cycles. Lots of improvements to be found with faster memory (latency and throughput).


The performance counters for that have been in the chips for a long time. You can argue that perf(1) has unfriendly UX of course.


I think AMD has a tool to check something somewhat related (Cache misses) in AMD uProf


Right, so does Intel in at least their high end chips. But a count of last-level misses is just one factor in the cost formula for memory access.

I appreciate it’s a complicated and subjective measurement: Hyperthreading, superscalar, out-of-order all mean that a core can be operating at some fraction of its peak (and what does that mean, exactly?) due to memory stalls, vs. being completely idle. And reads meet the instruction pipeline in a totally different way than writes do.

But a synthesized approximation that could serve as the memory stall equivalent of -e cycles for perf would be a huge boon to performance analysis & optimization.
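For illustration, the synthesized metric being asked for could start as simply as a ratio of two counters perf already exposes (`cycles` and `stalled-cycles-backend`); the counter values below are made-up numbers, not measurements:

```python
# Sketch of a crude "memory wait %" metric from raw cycle counters,
# e.g. from: perf stat -e cycles,stalled-cycles-backend ./workload
# The counter values here are hypothetical, for illustration only.
def stall_fraction(cycles: int, stalled_cycles: int) -> float:
    """Fraction of core cycles in which no work could be dispatched."""
    if cycles <= 0:
        raise ValueError("cycles must be positive")
    return stalled_cycles / cycles


cycles = 10_000_000_000   # hypothetical total cycles
stalled = 6_500_000_000   # hypothetical backend-stalled cycles
print(f"{stall_fraction(cycles, stalled):.0%} of cycles stalled")
```

As the comment notes, backend stalls lump memory waits together with other structural hazards, so this is an upper bound on memory-wait time rather than the clean per-level attribution you'd really want.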


In the end, in many real situations, yes. Just look at what is recommended month after month for top gaming PC builds. Intel stuff is there only as 'if you really have to go the Intel way, here is a worse, louder and more power-hungry alternative'.



