6.2 GHz Intel Core i9-14900KS Review (tomshardware.com)
149 points by tomcam on March 17, 2024 | 270 comments


When did CPUs break 5-6GHz? It feels surprising to see such a high GHz number after being conditioned that "clock speed can't go up anymore, so the number of cores goes up instead" and not seeing anything much higher than 4GHz.


Intel got 5GHz back in 2019 with the i9-9900XE. The X900-suffix lines have been Intel's "screw it, we need the highest single core performance" product slot for a long time, and what that SKU can do is really not a real reflection on any other part of the product line or what is even "reasonable", just what is "possible" (barely).

On the other hand, Intel did get the 12700k to 5GHz in 2021. That's an actual flagship part that actual people might actually buy for an actual purpose.

When Intel ran into their 14nm woes and started actually getting pushed by AMD, they responded by upping their core counts (part way) to erode AMD's core count advantage, while leaning into their single core performance advantage (which was truly a real thing against Zen1/2) by really starting to push up their clock speeds.


> not a real reflection on any other part of the product line or what is even "reasonable", just what is "possible" (barely)

As an example: to hit these speeds, Intel and AMD are pushing 1.4V into chips built on processes that work best at and below 1V. And the result isn't a stable, long-lasting chip: https://news.ycombinator.com/item?id=39478551


This is also why servers and other chips that run for a long, long time (think servers that aren't decommissioned for years) don't run at such high voltages and clock speeds.

What I would want to know is what happens to these good binned chips when you lower their voltages and frequencies and run them at a more reasonable 3.5GHz or something for a long time. Is the price / power / performance ratio better than a server chip, or is it worse? Would love some data on that. But no one is buying these for that, and server chips have other things going for them, like being able to get the chip replaced for years, etc. Still, it would be interesting to find out if these are the best 'quality' chips out there.


> What I would want to know is what happens to these good binned chips when you lower their voltages and frequencies and run them at a more reasonable 3.5GHz or something for a long time

I'm not sure about intel chips, but amd's last couple generations of chips have an "eco mode" which underclocks the chip for you. You get about 85-100% of the performance while consuming 60% as much power. The chips probably last way longer like that too. NVidia's GPUs have similar options. 3090/4090 cards can have a totally reasonable power budget if you're happy to lose ~10% of your framerate.

Arstechnica added eco mode to their benchmarks:

https://arstechnica.com/gadgets/2023/03/ryzen-7950x3d-review...


Where are you getting that ECO mode causes a 15% performance drop?

In my pretty unscientific testing, I saw ECO mode have zero impact on single core workloads and ~4% on multicore with a big drop in cpu temperature.

If I am reading the “Gaming CPU” chart correctly from your link, the AMD 170w performance was 114.8 FPS, but the 105w ECO performance was 111.4 FPS. Effectively nothing for a huge drop in power.


> Where are you getting that ECO mode causes a 15% performance drop?

It depends on the test. In that link I posted, 3dmark’s cpu test shows eco mode dropping the score from 19000 to 14000. Which I guess is more like a 25% perf drop. In other tests, performance increased in eco mode.

It seems like it depends a lot on the workload.


Was this on a laptop or desktop? If a huge drop in power results in no meaningful drop in performance I would say that is a sign that previously you were hitting a point where heat was more of a problem than power.


My testing was on a desktop with a way over specced cooler. Regardless, the above link shows that they were getting huge power drops for basically no loss in performance.

Which makes sense to me. We have seemingly run out of cheap architectural wins. CPUs and GPUs keep cheating by amping up the power draw to improve the performance numbers for vanishingly small returns. Getting an extra 5% performance out of these chips can require tens of watts.
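
Back-of-the-envelope version of why: dynamic power goes roughly as C·V²·f, and the last few hundred MHz need extra voltage on top, so power rises much faster than the clock. A tiny sketch - the voltage/frequency pairs are made up purely for illustration:

    # Rough illustration of why the last few hundred MHz are so expensive.
    # Dynamic power scales roughly with C * V^2 * f, and reaching higher
    # frequencies requires raising V, so power grows much faster than clocks.
    # The voltage/frequency pairs below are invented for illustration only.
    def relative_dynamic_power(freq_ghz: float, volts: float) -> float:
        """Dynamic power relative to an arbitrary baseline (C held constant)."""
        return volts ** 2 * freq_ghz

    points = [
        (4.5, 1.00),  # hypothetical "efficient" operating point
        (5.5, 1.25),  # pushing harder
        (6.2, 1.40),  # KS-style peak boost
    ]

    base = relative_dynamic_power(*points[0])
    for f, v in points:
        p = relative_dynamic_power(f, v) / base
        print(f"{f:.1f} GHz @ {v:.2f} V -> {f / points[0][0]:.0%} clocks, {p:.0%} dynamic power")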


Thanks. The 7950X doesn't have a non-X variant. So it seems the non-X variant chips are basically doing the same - giving you lower clocks for lower wattage, with an option to tinker at your own risk to push the chip, as those are not as good as the X variants.


I run a 12500 undervolted by 50 mV, which gives me about 20% lower consumption.


I design chips in similar technologies. What these binned parts are is fast-fast parts, meaning both PMOS and NMOS transistors have lower threshold voltage than typical, due to the stochastic nature of production or intentional skewing of the process during fabrication. You can run them at faster clock speeds. If you stack higher voltage on top, you can run quite a bit faster. For a while. The transistors age. Aging is an exponential function of voltage level and temperature, and a linear function of clock frequency. With aging, the device threshold voltages increase rapidly, so the devices get slower. On top of this there is electromigration (EM). With EM the interconnect resistance increases. These effects combined, you have a horrible product lifetime at those conditions. After failing to work at 5-6 GHz at 1.4V, it would most likely still work at a lower clock frequency, because in essence it becomes a slower chip.

Now to answer your question: a fast binned part operating at nominal conditions will perform exactly the same as a typical chip from the same series. The power consumption would be something like 10% higher due to higher dynamic power and leakage. The lifetime would be much longer than a typical device, though. So, in short, it wouldn't be outperforming anything.
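
To make the shape of that claim concrete, here's a toy lifetime model with invented coefficients - real NBTI/HCI and electromigration models are calibrated per process and are far more involved, so treat this purely as a sketch of the dependence:

    # Toy model: lifetime exponential in voltage and temperature, roughly
    # linear in clock, as described above. Coefficients are invented for
    # illustration and do not correspond to any real process.
    import math

    def relative_lifetime(v: float, temp_c: float, freq_ghz: float,
                          v0=1.0, t0_c=60.0, f0=4.0, kv=8.0, kt=0.07) -> float:
        """Lifetime relative to the (v0, t0_c, f0) reference point."""
        voltage_term = math.exp(-kv * (v - v0))         # exponential in voltage
        thermal_term = math.exp(-kt * (temp_c - t0_c))  # Arrhenius-like in temperature
        activity_term = f0 / freq_ghz                   # linear in switching activity
        return voltage_term * thermal_term * activity_term

    print(relative_lifetime(1.0, 60, 4.0))   # 1.0 by construction
    print(relative_lifetime(1.4, 100, 6.2))  # a tiny fraction of the reference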


I would suggest not spreading FUD unless you can cite hard evidence.

Intel warrants their processors for 3 years if they are operated according to spec, which includes supplying 1.3~1.4V as far as I'm aware.

From what I can tell, the linked thread and article concern CPUs that were either defective out of the box (and thus under warranty, regardless of usage) or were driven out of spec by mobos overclocking by default (which would void the warranty).


Wasn't there an article here on HN more recently? Anyway, check out https://semiengineering.com/the-rising-price-of-power-in-chi... which also describes a few issues with actual chip aging. That may explain why Intel for example gives only 3 years of warranty, despite the fact that people still believe in "that is solid state electronics, it will last a lifetime!"

Also, the overall cost of design and production has been going up for years (>10 years already or so), which is why only these few fabs are left that run these advanced processes, and they need massive scale to stay profitable. This "going up" is, btw, non-linear, more like exponential, and that also applies to power consumption of the resulting chips if you want to eke out "a bit more".


I thought the 3 year thing was standard for many years, pretty much as long as there have been retail-boxed CPUs.

I think the logistics of a long term CPU warranty might be difficult too; aside from a few perennial, typically embedded market, products, manufacturers want to close out product lines and keeping warranty spares for decades becomes a burden.

Imagine the fracas at the RMA office: "He says he doesn't WANT a new 14900KS, he wants a 50MHz 486SX2 to replace the one that blew out, and he has the state attorney general on line 2."


Most PCs are operated for way longer than 3 years, but still are there any reports of dead CPUs that were operated in spec? With millions of devices out there, you'd expect some to pop up, but I only ever hear of dead HDDs or defective RAM.


I recently had a motherboard die on me, I think at the age of 4 years and a bit... Everything else is still working fine. So that might be in the pile of things that could act up in some cases.


Had a 6900k bail out on me after 4-ish years. Got a new one from the extended warranty.


It's not FUD to point to real problems that are occurring in the field. A processor that's unstable is still unstable even if it's covered by warranty. And to the extent that these real issues are being caused by aggressive motherboard defaults, Intel is not absolved: they have huge leverage over their motherboard partners, and are seemingly failing to use it to ensure users get a good, stable experience out of the box (likely because Intel cares more about high scores on popular benchmarks than about subtle stability issues in this market segment).


Yes, real problems stemming from factory-defective products or products that were driven out of spec. The article linked even admits they (author or publisher) don't have enough evidence to pinpoint what the problem is; the best they could do was the aforementioned. Nowhere do they say "Driving 1.4V is killing CPUs." or something similar; just potential workarounds like reducing clock multipliers below spec and configuring mobos to enforce Intel's power limits.

Drive known-good products according to published specifications at load for statistically significant durations. If the results are that the majority of products fail to perform as warranted, then we can talk about how Intel (and I guess AMD) are driving their products to the point of failure.

Otherwise in the absence of such data, I'm going to look at the silent majority satisfied with their purchases and infer that the products concerned are working fine.


> If the results are that the majority of products fail to perform as warranted, then we can talk about how Intel (and I guess AMD) are driving their products to the point of failure.

That's a stupidly high bar. Recalls and class-action lawsuits don't need to be justified by failure rates as high as 50%, and I'm merely discussing that there are signs of trouble, not demanding a recall or other serious action from Intel. Intel's recent top of the line desktop chips are misbehaving in a way that is genuinely noteworthy, even if we don't have the impact solidly quantified and don't have a smoking gun. It's worth discussing, and worth keeping an eye out for similar issues from other chips that are being pushed to similar extremes.


>Intel's recent top of the line desktop chips are misbehaving in a way that is genuinely noteworthy, even if we don't have the impact solidly quantified and don't have a smoking gun.

And all I am asking is for you to cite proper evidence for your claim. The article you linked does not say driving 1.4V is damaging the CPUs, it's actually explicit that the cause is unknown. Speaking more broadly, most people who have bought the CPUs concerned have had no problems (or at least do not voice such concerns).

To reiterate, I am asking you to cite evidence for your claim that "Intel and AMD are pushing 1.4V into chips built on processes that work best at and below 1V. And the result isn't a stable, long-lasting chip." If you can't or won't, this is just FUD.


If the instability is something that develops over time as a genuine change in behavior of the chip, and not merely an artifact due to the evidence of instability taking time to pile up, then the extreme voltages are by far the most plausible culprit. And if on the other hand these chips are slightly unstable out of the box, despite the high voltages required to hit these peak frequencies and record-setting benchmark scores, it suggests that the clock speeds are being pushed too far.

Either way, the high operating voltages compared to what we see in laptop and server CPUs (and GPUs for any market segment) are worth raising an eyebrow at. At a minimum, it's a symptom of the desperation Intel and AMD have for perennially leapfrogging each other in ways that are increasingly irrelevant to the average customer and the rest of their product stack.


Some of this is modeled over expected lifetime / usage. In general, things such as electromigration, self-heating, bit-cell degradation, etc. are modeled either for 3y, 5y or 10y, depending on CPU, SKU, and target market. Now, whether the process corner(s), voltage, and frequency that were picked to perform this analysis are a good reflection of how the CPU is actually being used/pushed is a different matter.


This explains a lot about my computer experiences over the last few years. I've been buying absolute top-end hardware, and been consistently running into a plethora of absolutely weird technical issues unlike any of my previous experiences. I even had to RMA a 12900KS directly with Intel after it couldn't run at stock settings bug-free.


>If the instability is something that develops over time as a genuine change in behavior of the chip, and not merely an artifact due to the evidence of instability taking time to pile up, then the extreme voltages are by far the most plausible culprit.

That's all fine and dandy, but can you please cite some evidence to support those claims?

This should not be such a farfetched request.


Would be nice to refer to something that does not say the reason is unknown anyway.


Exactly, a warranty is fine in theory, but I care about machine uptime and where my own time is going.

Often the time to deal with warranty exceeds the cost of a new component, in which case I don’t bother and just try a different brand.

Case in point, had some trouble with WD NVME drives - just tossed them, bought Samsung, and moved on.


There was no FUD there at all. Grow up.


It is FUD to state that 1.4V damages CPUs without citing evidence to support that statement.

Personally, I have a 14700K in my desktop and my laptop has a 12700H. Both routinely push 1.3~1.4V under load operating according to spec. If that is causing damage, I certainly would like to know and I asked for citation. I see none after prodding, so as far as I'm concerned it's FUD.


People say a lot of things that are not technically true. CPU speeds will probably keep going up for years or decades. But when CPU Hz were rising, the routine was things like 4MHz -> 8MHz every other year or so. Going from 4 to 6GHz over around 10-20 years is, effectively, flat compared to the rate of change people were dealing with before the year 2000.


https://www.extremetech.com/computing/158178-amd-unveils-wor...

Officially, 10-11 years ago. (This first 5GHz CPU wasn't that impressive, and part of the reason "chasing MHz" started to fade into irrelevance.)

For modern AMD, it wasn't until 2022 that they ventured back across 5GHz, but this time it's much more impressive.

https://www.theverge.com/2022/5/23/23137217/amd-ryzen-7000-c...


Before that, IBM POWER6 reached 5 GHz in 2008: https://en.wikipedia.org/wiki/POWER6


Just today there was news of a 9GHz overclocking record. It was cooled with liquid helium though.



The 6.2 GHz is just the boost frequency, the base frequency is still well below 4 GHz, at 3.2 GHz. So, your CPU can essentially go into berserk mode for a few seconds and double its frequency, but cannot maintain this frequency. This is similar to a marathon runner who can sprint intermittently.


The 14900K(S) has an unlimited boost duration. It'll run those clocks until your cooler can't handle it anymore, they never give up on their own. Intel added this not that long ago to differentiate the xx900k from the xx700k.


"The 14900K(S) has an unlimited boost duration."

Okay, but unless you have cooling with liquid nitrogen you cannot maintain 6.2 GHz for long due to thermals in practice, hence after a short sprint it will fall back to a much lower frequency.
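
If you want to watch that happen on your own machine, a minimal sketch for Linux (assumes the standard cpufreq sysfs layout; start a heavy load in another terminal and watch the clocks sag as the package heats up):

    # Sample per-core clocks once a second; scaling_cur_freq is reported in kHz.
    import glob, time

    paths = glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq")

    for _ in range(30):
        mhz = [int(open(p).read()) // 1000 for p in paths]
        print(f"max {max(mhz)} MHz   min {min(mhz)} MHz")
        time.sleep(1)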


Very recently:

12900K: 5.2 GHz peak

13900K: 5.8 GHz peak

14900K: 6.0 GHz peak

14900KS: 6.2 GHz peak


For how many milliseconds ?


Single core only, for a good few seconds at full whack - if you have the enthusiast-grade cooling


Which sort of defeats the whole purpose for >95% of the target market - super loud machines are extremely annoying to everybody, and all non-IT (and most IT) folks look at them as a failed engineering/PC building effort.


Custom loop watercooling (not the all in one kind) and especially, direct die (delidded) cooling, can just about tame the beast. Someone has linked the DerBauer vid where he does it.

Of course, the market for enthusiast stuff like direct die watercooling and KS-series purchasers have a significant overlap <:o)


I have yet to read a review for how-to-build-your-own-pc that recommends water coolers (or more exotic stuff) as risk-free and maintenance-free. I don't mean a year or two.

I've been running my desktop since... 2018? and haven't had to think about noise, power consumption, or worries about short-circuiting everything, ever. There is no amount of computing power in this world worth losing that (for me; I know I am pretty far from enthusiasts in this).


I don't think aircooling is going to do it for these – though even watercooling can become loud if stressed. I guess phase-change heat pumps are the logical next step. Peltier cooling is of course totally silent, and enthusiasts dabbled in it already back in the 90s – not sure how common it is these days outside astrophoto imaging rigs. But you can fairly easily get temperature deltas of around 100 kelvin with a Peltier heat pump. You'd still have to dissipate the heat somehow, of course.


I'd be really worried about condensation with peltier.


Phase change enthusiasts already have to deal with condensation.


If you tweak bios settings you can easily do one core at those speeds, or two cores 100Mhz lower, indefinitely without thermal worries.


I have a 12900k and was transcoding video on it yesterday. It stays stable at ~4.5GHz indefinitely when working on all cores.


2010, IBM z196 was 5.2GHz, then in 2012 the zEC12 was 5.5GHz. As always, the consumer world follows with a bit of lag.


I was surprised too. Then I checked my ryzen laptop and .... yep, max speed 6GHz.

I haven't noticed before either.

But the fact that my laptop pulls this off at a way lower TDP makes me wonder how competitive that Intel chip is, especially as the article's title already says it has huge power consumption.....


If you have the ability to cool the cpu then it's not a hard limit; just one that's not practical for mainstream consumers.


A long time ago. AMD was famously first with 5GHz 10 years ago and then Intel not too long later.


AMD got 5GHz with Raphael in 2022; Intel hit it a few years earlier with Coffee Lake in 2018.

(no, fx-9590 does not count)


Maybe never? Bram Nauta's ISSCC 2024 talk covered this and is worth watching.



Sustained or multicore clock speeds cannot really go up. But it's easier to force a single core up to 6GHz for 30s or so, which is really what this is talking about.


Technically AMD had CPUs boosting to 5GHz in 2013, but it was the horrible Bulldozer FX series with abysmal IPC.


Intel is still the king in single threaded performance. However, single threaded performance will end up being the bottleneck for fewer and fewer workloads. I have to wonder, with a $690 price tag and a 320 watt power budget who is this chip for? Team red wins most of the benchmarks with cheaper silicon.


>I have to wonder, with a $690 price tag and a 320 watt power budget who is this chip for?

There used to be a business in selling these ultra performant ST chips to high frequency traders, who'd overclock the absolute crap out of them. Luckily, this is no longer a business. But these kind of products used to be the "waste stream" from them, left overs.

But now, the KS-series intel chips are for slightly insane gamers and overclocking enthusiasts with more money than sense. That's okay, there's a real market segment there. At our work, we buy the 14700K for dev machines like sane people.


Case in point, the Xeon X5698, which was a 2-core 4.4GHz base freq Westmere made just for HFT. The regular ones were 3.6@4C and 3.467@6C, so it was quite a boost.


>There used to be a business in selling these ultra performant ST chips to high frequency traders, who'd overclock the absolute crap out of them. Luckily, this is no longer a business.

HF trading is no longer a business?


FPGAs have even lower latency than general purpose processors.


>single threaded performance will end up being the bottleneck for fewer and fewer workloads

the bottleneck will always be the fact that software is either unable or unwilling to be parallelised


I think they are in a similar segment as sports cars or supercars, just a slightly lower bracket. Those with disposable income who just want to get the top-tier thing.

You can get performant enough parts for half and reasonable enough parts for even less than that. But some people have hobby where they just want the fastest thing.


Not quite: even single threaded, the workload matters.


„Single threaded“ IS the work load, right? I‘m confused.


Different types of single threaded work stress different parts of the Core. Some might be throughput heavy, some might require a lot of cache, ...


Aren’t most games still dominated by single thread performance?


Largely yes, but the 7800X3D ($370->$300 on sale, 120W TDP) still beats the 14900KS since the extra L3 cache is more important than clock for most games: https://www.techpowerup.com/review/intel-core-i9-14900ks/18....

(As mentioned in another post here, for even more CPU-intensive games like Factorio, the difference is even starker.)

For general mixed workloads (assuming you need more threads), I think the 7950X3D is the way to go - 8 3D VCache cores for similar gaming, 20% cheaper than the 14900KS, generally neck and neck on development workloads but also has AVX512 support, and of course all while using significantly less power (also 120W TDP, 160W PPT - you can do even better w/ PBO+UV or Eco mode). Here's the OOTB power consumption comparison, it's a bit bonkers: https://www.techpowerup.com/review/intel-core-i9-14900ks/23....


All of these benchmarks are not useful unless they’re maxing out memory speeds. If you’re buying the top of the line cpu, you’re also buying the fastest ram.

7950x3d is limited to ddr5 6000 while 14900 can do ddr5 8000.

The benchmarks you shared are using ddr5 6000. So by upgrading the ram, the 14900 should come out on top. The memory controller is a key part of the cpu. It makes sense to test them with the best components they are able to use. I know it probably doesn’t matter, but if you’re chasing those last 5% of performance fast ram is a better value than custom water cooling.


I would love to see some data to back that up. I expect L3 cache to have a much more drastic difference than memory bandwidth.


AMD CPUs idle at high power, which kinda negates their efficiency when loaded. Their G-series desktop CPUs are the opposite: they idle very, very efficiently.


While the G series (basically mobile chips) do idle a lot lower, here's a chart that shows that Ryzen chips idle at about the same level (a few watts less actually, but negligible) as their Intel high-end desktop counterparts: https://www.guru3d.com/review/amd-ryzen-7-8700g-processor-re...

One thing to keep in mind is that while slightly higher idle power might cost you a bit more, high power when processing means that you will also need a much beefier cooling solution and have higher noise levels and exhaust more heat to boot. Again, from the TechPowerup review https://www.techpowerup.com/review/intel-core-i9-14900ks/22.... they show that for a multi-threaded blender workload, the 7950X3D sits at 140W, while the 14900KS hits 374W. I don't believe there's a single consumer air cooler that can handle that kind of load, so you'll be forced to liquid cool (or go even more exotic).


I find a lot of contradictory info about what the idle consumption of AMDs is; different graphs show different numbers (so I do not trust the source you've linked, as my own 12500 machine idles at about 25w), but the overall consensus on Reddit and other sources is that a typical AMD system will idle at about +10 watts compared to Intel. Some posts claim that idle dissipation for AMD (CPU only) reaches 55w. My own experience with Ryzens (it was a 3600 last time) is that they indeed idle (and while lightly loaded too) at higher power. For my typical usage scenario, where 80-85% of the time the CPU does nothing, it matters.


For anyone that doesn't need the highest performance and where efficiency is super important, I can recommend the current generation of mini PCs/NUCs that run mobile chips. Last summer I picked up a Ryzen 7 7940HS-based mini PC that idles at about 10W (from the wall), has a decent iGPU (the Radeon 780M is about on par with a mobile GTX 1650), and its 65W-max Zen 4 cores (8x) actually manage to be very competitive with my old custom 5950X workstation: https://github.com/lhl/linuxlaptops/wiki/Minisforum-UM790-Pr...

Intel Meteor Lake NUCs should perform similarly, but they tend to be a few hundred dollars more for basically the same performance (the Minisforum EliteMini UM780 XTX barebones is currently $440, the cheapest Core Ultra 7 155H minipc I could find was the ASRock Industrial NUC BOX-155H at $700). At this point though, personally, I'd wait for the upcoming Zen5/RDNA3.5 Strix Point APUs, which should be the next big jump up in terms of performance.


Definitely agree on the all-over-the-place metrics for AMD. This is somewhat complicated by the chipset. The X570 chipset actually used like 7-8 extra watts over the X470 by itself, because the chipset is a repurposed I/O die, and the I/O die is the idle-power-hungry part of the CPU.

Different motherboards and settings are sort of a hidden factor in this in general it seems.


Don't ignore that "idle" isn't a real thing. Most Reddit users complaining about high idle consumption have a program causing the problem. For me, shutting down the Steam program took my Ryzen 3600 from 22 watts "idle" to 2 watts idle.

There is no such thing as idle in a modern desktop.
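
If you'd rather measure than argue, a rough sketch for Linux using the RAPL counter exposed through the powercap interface (path and driver support vary by CPU and kernel, it usually needs root, and this is CPU package power only, not draw at the wall):

    # Average package power over a 10 second window from the RAPL energy counter.
    import time

    ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package domain, microjoules

    def read_uj() -> int:
        with open(ENERGY) as f:
            return int(f.read())

    e0, t0 = read_uj(), time.time()
    time.sleep(10)
    e1, t1 = read_uj(), time.time()

    # the counter wraps eventually; ignored here for the sake of a short sketch
    print(f"average package power: {(e1 - e0) / 1e6 / (t1 - t0):.1f} W")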


This problem also shows up in Intel's latest Meteor Lake laptop processors, which are supposed to be able to power off the CPU chiplet and idle with the two low-power cores on the SoC chiplet. In practice, most OEMs ship too much crapware on their laptops for that capability to kick in, and it's really hard to get Windows cleaned up enough to get the battery life the chip ought to provide.


LOL


Do the X3D chips with mixed 3D and normal cache still have scheduling issues where games can use the slow cache? Back when they were new I heard you want the 8 core version so you have no chance of that happening.


There have been updates on core parking ("Game mode") in Windows so it's probably fine, but I think for max perf, people are still using Process Lasso (I don't use a mixed X3D chip myself, so I haven't paid super close attention to it).


I do have a 7950X3D.

It’s improved a lot, I still use Lasso but strictly speaking I don’t really need to.


That is still a problem; that is the reason why the 7800X3D is as good as the 7950X3D. But if you do other things, you can go with the 7950X3D - it's more expensive though.


.. because the cache gives you better single threaded performance?


In many cases yes. Some single-threaded workloads are very sensitive to e.g. memory latency. They end up spending most of their time with the CPU waiting on a cache-missed memory load to arrive.

Typically, those would be sequential algorithms with large memory needs and very random (think: hash table) memory accesses.

Examples: SAT solvers, anything relying on sparse linear algebra
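
A tiny way to see the effect: sum the same data in order and in a shuffled order. The shuffled walk defeats the hardware prefetchers, so most of the time goes to waiting on cache misses (numpy's gather overhead contributes too - a C pointer-chase shows it even more starkly):

    # Sequential vs. random-order access over an array much larger than L3.
    import time
    import numpy as np

    n = 20_000_000
    data = np.arange(n, dtype=np.int64)
    order = np.random.permutation(n)

    t = time.perf_counter()
    s1 = data.sum()                 # linear sweep: prefetch-friendly
    seq = time.perf_counter() - t

    t = time.perf_counter()
    s2 = data[order].sum()          # random gather: dominated by misses
    rnd = time.perf_counter() - t

    assert s1 == s2
    print(f"sequential: {seq:.3f}s   random order: {rnd:.3f}s   ({rnd / seq:.1f}x slower)")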


Obscure personal conspiracy theory: The CPU vendors, notably Intel, deliberately avoid adding the telemetry that would make it trivial for the OS to report a % spent in memory wait.

Users might realize how many of their cores and cycles are being effectively wasted by limits of the memory / cache hierarchy, and stop thinking of their workloads as “CPU bound”.


Arm v8.4 onwards has exactly this (https://docs.kernel.org/arch/arm64/amu.html). It counts the number of (active) cycles where instructions can't be dispatched while waiting for data. There can be a very high percentage of idle cycles. Lots of improvements to be found with faster memory (latency and throughput).


The performance counters for that have been in the chips for a long time. You can argue that perf(1) has unfriendly UX of course.


I think AMD has a tool to check something somewhat related (Cache misses) in AMD uProf


Right, so does Intel in at least their high end chips. But a count of last-level misses is just one factor in the cost formula for memory access.

I appreciate it’s a complicated and subjective measurement: Hyperthreading, superscalar, out-of-order all mean that a core can be operating at some fraction of its peak (and what does that mean, exactly?) due to memory stalls, vs. being completely idle. And reads meet the instruction pipeline in a totally different way than writes do.

But a synthesized approximation that could serve as the memory stall equivalent of -e cycles for perf would be a huge boon to performance analysis & optimization.


In the end, in many real-time situations, yes. Just look at what is recommended month after month for top gaming PC builds. Intel stuff is there as 'just if you really have to go the Intel way, here is a worse, louder and more power hungry alternative'.


Most games love cache. AMD's X3D CPUs tend to be either better, or at worst extremely competitive with Intel's top chips for games, at a far lower power budget.


To a degree, but it's typically more like 'this workload uses 2-4 cores and wants peak performance on all of them', not really 'this workload uses a single core' on modern games. And then some games will happily use 8+ cores now

The user-mode video driver and kernel mode driver both use other cores as well


But with all the background OS activity, will this chip ever sustain turbo to 6.2 for noticeable periods? Pure benchmarking win imo.


You think 16 efficient cores will not make it?


They're clocked lower.


Not anymore. Modern games do multi-threading quite well.


> However, single threaded performance will end up being the bottleneck for fewer and fewer workloads.

True, the bottleneck will shrink - but according to Amdahl's law [0], it will never really go away.

Also, the more cores you have, the more a single-threaded performance increase multiplies. Imagine million-core CPUs in the future - even a tiny increase in single-threaded performance would be multiplied a millionfold.

[0] https://en.wikipedia.org/wiki/Amdahl's_law
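
The shape of the law in a few lines (the 5% serial fraction below is just an illustrative number):

    # Amdahl's law: with serial fraction s, the best speedup from n cores is
    # 1 / (s + (1 - s) / n). Even a small serial fraction caps the benefit of
    # more cores, which is why single-thread speed keeps mattering.
    def amdahl_speedup(serial_fraction: float, cores: int) -> float:
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

    for cores in (8, 32, 1_000_000):
        print(f"{cores:>9} cores -> {amdahl_speedup(0.05, cores):5.1f}x max speedup")

With 5% serial work the million-core machine tops out at a 20x speedup, so raising single-thread speed still moves the ceiling.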


I use a 13900K (5.8 GHz) and will say this: Chrome/Brave with 50+ tabs opens instantly : - ) And I mean it. You click the icon and it is opened.

(Work paid for it for ML prototyping.)


Isn’t that mostly IO limited anyways?


If you can afford a 13900k you most likely have a ludicrous amount of ram and a decent % of your HD is cached in RAM anyway.


It depends on the algorithm that I am designing or working with. Having said that, I do have 128 GB of DDR5, which helps with IO.


This has been exactly my experience, and why I love single thread perf.

I've had 12900k, 12900ks,13900k processors, I'm going to build a new one with either a 14900k or ks. I own a P5800x optane ssd to match.


How many windows are the tabs in? That's a much bigger factor since they added lazy loading years ago.


Intel beats AMD on idle consumption though


As a home consumer though why would I care about power consumption. What is that like a few extra dollars in power per month?


Power costs vary wildly from country to country, let alone state to state in the US. [1]

Also, many more people are installing solar power and residential batteries, so there’s that.

https://www.statista.com/statistics/263492/electricity-price...
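
Rough numbers, with illustrative assumptions about usage and rates:

    # Ballpark of the "few extra dollars" question. The draw, hours, and
    # rates below are assumptions for illustration, not measurements.
    extra_watts = 150         # extra draw vs. a more efficient chip, under load
    hours_per_day = 4         # hours per day actually spent under heavy load
    rates = {"cheap power": 0.10, "mid-range": 0.17, "expensive power": 0.40}  # $/kWh

    kwh_per_month = extra_watts / 1000 * hours_per_day * 30
    for label, rate in rates.items():
        print(f"{label}: ~${kwh_per_month * rate:.2f}/month for {kwh_per_month:.0f} extra kWh")

So anywhere from pocket change to a noticeable line item, depending on where you live and how hard you push the box.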


PC power consumption is an important metric for those with backup power systems, especially in countries with infrequent electricity delivery (e.g. South Africa).

If you can afford a 320W CPU in the first place, you can probably afford the batteries to power the thing for a few hours, but it does still add a considerable amount to your backup costs.


fwiw a 320w cpu isn't running at 320w all of the time. if you're powering something off of batteries in a consumer situation, a 240w vs 320w cpu isn't going to move the needle unless you're really running it hard (like a game)


The baseline draw is still much higher than lower spec alternatives, and it means you would need to cater for the high end scenario in your battery estimations if you intend to actually use it during power outages.

I switched from a 5950X + 3080, to a 5700G APU, to finally an M1 MBP + Steam Deck last year for this exact reason. Far cheaper to have a 250wh battery that can handle those two for the ±2h outages every day.


For an ordinary consumer UPS that's not trying to keep the system up for hours but just a few minutes, the peak power consumption probably matters more than the average: a 750VA UPS might simply trip its overcurrent protection if you're unlucky enough to have a power outage coincide with a spike in system load. With a big enough GPU, even a 1000VA unit might not be safe.

And it might be hard to get your system configured to throttle itself quickly enough when the power goes out: having the UPS connected over USB to a userspace application that monitors it and changes Windows power plans in response to going on battery could be too slow to react. It's a similar problem to what gaming laptops face, where they commonly lose more than half of their GPU performance and sometimes a sizeable chunk of CPU performance when not plugged in, because their batteries cannot deliver enough current to handle peak load.


I am in a western country with good power. Again why would I care?


It's not always about you.


[flagged]


Power delivery is fine where I live, mostly decarbonated, too. So, power draw in and of itself is not an issue for me personally.

The reason I care is that a CPU (or any component) drawing this much power will turn it into a lot of heat. These components don't like actually being hot, so they require some kind of contraption to move that heat elsewhere. Which usually means noise. I hate noise.

Also, air conditioning isn't widespread where I live, and since it's an apartment, I need permission from the HOA to set up an external unit (internal units are noisy, so I don't care for them, because I hate noise). So having a space heater grilling my legs in the summer is a pain. I also hate heat.

So, I don't see this as buying a "subpar product". I see it as trying to figure the correct compromise between competing characteristics for a product given my constraints.


This is a forum, fyi


Yes and I am allowed to post, thank you


Then get a low power processor. There’s no reason power consumption should really come up as a prime selling point in any discussion regarding home usage. Obviously when you do things at scale in a data center it makes sense to talk about.


Your original post was to ask why should anyone care about power draw, to which I provided an answer.

Now you shift goalposts to saying you should only care in data centre contexts, in a thread discussing a desktop processor.

It's okay if power consumption doesn't matter to you, but that doesn't mean it doesn't matter to everyone. That's why it's important to have these metrics in the first place and ideally to try and optimise them.


Don't forget cooling. The 14900KS almost requires good watercooling to reach its full potential.


I care because I don't like noisy computers and I don't like to run a space heater during summer.

Lower power consumption means less heat, which means less noise from cooling fans and lower temperature increase in my room.

YMMV


Maybe you don’t. But most other people do and so does Intel. It’s not good business to have a poor perf per watt chip in 2024. Everything from phones, laptops, to servers care very much about perf per watt except the very hardcore DIY niche that you might belong to.


Why does it matter? It’s always plugged in and the cost difference is negligible.


Normal people don't know what perf per watt means, don't be like that


They might not know the term "perf per watt" but they feel the heat, fan, noise, and speed on devices they use.


None of those are a problem on a desktop PC, only on shitty laptops.


Do you have experience with a modern high-wattage CPU during summer? Yes, a good cooler can make it work. But where does that heat end up? It gets blown out from the case and first heats your legs (depending on desk, position to wall, etc), and then the entire room. It can be very noticeable and not in a good way.


I have 2 saved profiles in my BIOS, one where the CPU is allowed to consume as much current as it wants that I use from mid October to mid May, and one where the CPU is capped at 65W for the rest of the year.

I do something similar with my GPU, 75% cap in the summer, 105% cap in the winter.
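
For what it's worth, on Linux the CPU half of that can be scripted without rebooting into the BIOS, by writing the package power limit through the powercap/RAPL interface. A sketch only: it needs root, the path and driver support vary by CPU and kernel, and BIOS limits may still take precedence:

    # Seasonal CPU power cap via powercap/RAPL (Linux, run as root).
    LIMIT = "/sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw"

    def set_package_power_limit(watts: int) -> None:
        with open(LIMIT, "w") as f:
            f.write(str(watts * 1_000_000))  # the interface takes microwatts

    set_package_power_limit(65)     # summer profile
    # set_package_power_limit(253)  # winter profile: back to stock-ish limits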


>Do you have experience with a modern high-wattage CPU during summer?

i7 4790k w noctua cooler is fine :)


Do you have A/C in that room?

I actually have a similar desktop next to my legs, only it has the Xeon version with twice the cores. It's an absolute PITA in the summer with no A/C. It's also quite noisy when the temperature in the room reaches 27 ºC.


Same CPU, haven't bothered to replace it until maybe these days


  None of those are a problem on a desktop PC, only on shitty laptops.
My post specifically states the perf per watt advantage on laptops, phones, any small device, servers. I also mentioned this advantage being less on hardcore DIY computers.


Do you realise how big the PC gaming sector is these days? High-performing desktop chips are not for the hardcore DIY enthusiast market anymore. There are now millions of gamers buying off-the-shelf PCs with the highest spec components as standard.


The reverse of what you said is true.

DIY desktop market is smaller than ever. You can see this in the number of discrete GPU sales which has drastically declined over the last 20 years[0] save for a few crypto booms.

Gaming laptops are now more popular than gaming desktops.[1]

If you disagree, I'd like to see your sources.

[0]https://cdn.mos.cms.futurecdn.net/9hGBfdHQBWtrbYQKAfFZWD-120...

[1]https://web.archive.org/web/20220628032503/https://www.idc.c...


Looks like gaming laptop vs gaming desktop sales are roughly the same:

"In terms of market share, according to a 2020 report by Statista, Notebooks / laptops accounted for 46.8% of the global personal computer market, while desktop PC made up 40.6% of the market. The remaining market share was made up of other devices such as tablets and workstations."

https://www.statista.com/statistics/1119850/gaming-pc-market...

https://www.tech-bazaar.com/laptop-market-as-compared-to-des...


I'm upvoting this to counter the downvotes because, unfortunately, normal people don't know.

Specifically, normal people don't know what "watt" is. Seriously. There is a reason electrician is a skilled profession. Most of us here do know watts and the like, so it's easy to forget that normal people aren't like us.


Normal people understand watts. Because they know what an electric heater is.

And using 2x to 3x more electricity means more heat in their room.

Also many countries have smart electricity meters with in home units which tell them exactly how many watts are currently being consumed and how much that costs them.


I’m going to push back on this with a simple example. Go to your local hardware store and check out the electric space heater section. There will be a wide variety of units rated for small rooms, medium rooms, and large rooms, based on square footage. The heaters will have a variety of form factors and physical dimensions. Many of them will have mentions of “eco” and “efficiency”. Every single one of them, and I mean literally, will be a 1500W heater (or whatever your region’s equivalent maximum load per plug is which may vary in 240v countries). Exact same wattage, all 100% efficient because their only job is to produce heat, with wildly different dimensions and text on the box. Customers will swear up and down about the difference between these units.


I had to do this after my gas costs went well above my electric costs. Maybe you are in a country/area where your hardware store doesn’t supply a variety of heaters, but at my local store, no two models were the same wattage.


It grinds my gears that electric lawn mowers are marketed by voltage even though that has no relation to grass cut per time.


Normal people know that 60w bulb can burn you.


Do they? Even I have a problem nowadays with this, because they write 60W but it's an LED and it acts like a 60W bulb but it's not 60W. 60W is more like branding.


A 60w bulb can literally blind you (temporarily) these days.


So you're just going to waste for no reason? Do you also leave the faucet running when you brush your teeth? It's a few cents per month at most, after all.


It’s not for no reason, you’re getting more performance. It’s like saying everyone should buy a 4 cylinder Corolla for every situation, including tractors used in farms.


The context is precisely that there are other, power-efficient options that are at least as performant.


For me it's about having a silent PC and not making the room feel like a sauna.

I have a Ryzen 7950X in ECO mode 105W that's very fast in every workload I can throw at it, trivial to run whisper quiet with a basic Noctua cooler, and barely warms the room.


Maybe your morals guide you to act to mitigate the climate crisis, or maybe you don't have air conditioning and get hot summers.


Get off your high horse. This is negligible power usage on any scale of things.


You should of course prioritize low hanging fruit and eliminate flying, car use, meat, low efficiency housing, etc. But thinking that it's only worth doing these one at a time serially is a fallacy, as is getting stuck on the low impact of individual actions. Dividing the emissions pie to insignificant slices and then arguing that nothing significant can be done is just fooling yourself into inaction.

Regarding horses, pointing out the emissions angle in response to "why would I care about power consumption" is basic table stakes in the world's current situation, no need to get offended.


In the US, not known for its frugality, "Residential daily consumption of electricity is 12 kilowatt-hours (kWh) per person."


Freedom is consuming as much electricity as we desire. So long as we're happy to pay for it, of course. (I am.)


Freedom is also choosing to respect our environment and making a conscious effort not to waste our resources just for the hell of it.


Keyword there being choose. Freedom is the right to choose whether I buy more power or not, which I'm happy to pay.

I have no interest in living in your dictatorial world where everyone must hug trees.


Emitting CO2 does harm to others, not just yourself, so there's good grounds for regulating it.


More power equals more heat. Water cooling and large heatsinks and loud fans are not desirable.


Noctua coolers and fans m8


Where do you think those fans blow the heat?


I am in Sweden summer is max 30C here... so not really an issue... for now...


I mentioned power just to round out three major complaints about the chip. Intel chips are more expensive, consume more electricity, and benchmark lower.

In general, as a home user, you should care about power consumption in desktop computers for the following reasons.

* Higher power requirements means more expense in other parts of the build. It means higher PSU requirements, and stronger cooling requirements.

* If you ever plan on running the PC on battery backup (UPS or Tesla Powerwall) or if you plan on installing solar energy then power consumption becomes a bigger expense.


> why would I care about power consumption

Environmental impact maybe? Higher power usage = more heat = more noise?

There’s a bunch of other cases too.


You are absolutely right, most home users don't care about power consumption when it comes to desktop computers.

All this negative shilling about the power consumption of Intel and AMD desktop CPUs started after Apple ARM processors appeared on the desktop. Apple sells soldered SSDs and CPUs with un-upgradeable RAM, and the proprietary SoC is no longer a true general purpose CPU as only macOS can run properly on it. Performance-wise they also don't truly beat AMD (and in some cases Intel) processors. This is a huge negative factor for Apple Silicon based hardware. Thus, the only negative marketing they can do about Intel and AMD processors is based on their higher power consumption.

That said, Intel and AMD will (and do seem to) care about power consumption for their laptop and server segments.


Apple’s chips are well regarded simply because they’re laptop chips - which they excel at because that’s where power consumption matters.


You can run Linux and openbsd on Apple M series chips, what do you mean only macOS?


Linux and *BSD run crippled on M series chips as Apple doesn't provide the hardware specifications or device APIs for system programmers to utilise the GPU etc. on its SoC. Linux developers are forced to reverse engineer everything for Apple Silicon because Apple is hostile to system developers. This is in sharp contrast to Intel or AMD processors, where these OSes can fully utilise the chip hardware to deliver maximal performance, because Intel and AMD are more open with their hardware literature.


If we're going to benchmark single core optimised CPU's on games, I'd love to see sim games on the list instead of just the current AAA titles.

How many TPS can I get on my Factorio megabase.


The 7800X3D is in a completely different universe for Factorio from the 14900KS, it's like 60% faster which is just absurd. Factorio absolutely loves that extra L3 cache.

https://www.tomshardware.com/news/ryzen-7-7800x3d-smashes-co...

https://www.reddit.com/r/factorio/comments/12ckmc3/hardwares... (this doesn't have the 14900K(S) but given that it's the same thing as the 13900K you can guess where it'll end up, and it ain't close to the X3D's)


Cache is king for gaming because of the sheer amount of data needing to be processed. Anything ALU-limited in games is categorically not cache or memory limited, but modern CPU cores are so memory-speed starved that making the cores stupid fast only takes you so far. Especially with the insane power budget these chips ask for.

Games need to do a crazy amount of branchy unpredictable work on an enormous dataset so they're very often limited by memory latency which is alleviated most readily by more cache and prefetching. It's why the ECS pattern is so popular as of late from a performance standpoint because it encourages more logic to be written as iterations over dense linear arrays which are just about the best case for a CPUs cache management and prefetching logic.

Factorio is probably the best example of this because it's a game where performance is entirely CPU limited where AAA games more often used in benchmarks are much more dominated by GPU time.
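
As a toy illustration of the layout idea - numpy arrays standing in for the contiguous component arrays a native engine would use, so the absolute numbers say more about Python than about caches:

    # "One object per entity" (scattered on the heap, pointer chasing) vs.
    # "one dense array per component" (linear sweeps the CPU can prefetch).
    import time
    import numpy as np

    N = 1_000_000
    dt = 1 / 60

    class Entity:                       # array-of-structs style
        __slots__ = ("x", "y", "vx", "vy")
        def __init__(self):
            self.x = self.y = 0.0
            self.vx = self.vy = 1.0

    entities = [Entity() for _ in range(N)]

    x = np.zeros(N); y = np.zeros(N)    # struct-of-arrays / ECS-ish style
    vx = np.ones(N); vy = np.ones(N)

    t = time.perf_counter()
    for e in entities:                  # per-entity objects: scattered accesses
        e.x += e.vx * dt
        e.y += e.vy * dt
    aos = time.perf_counter() - t

    t = time.perf_counter()
    x += vx * dt                        # dense component arrays: linear sweeps
    y += vy * dt
    soa = time.perf_counter() - t

    print(f"object-per-entity: {aos * 1000:.0f} ms   component arrays: {soa * 1000:.0f} ms")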


Why is the 7800X3D so much faster in that benchmark than 7950X3D? The 7950X3D seems to boost on paper to even higher frequency. Is the cache different per core?


7950X3D only has the 3D stacked cache on one of the two chiplets, so as far as gaming is concerned you only want to use 8 of the 16 cores. 7800X3D would boost harder for gaming because the 7950X3D has a second vestigial (as far as gaming is concerned) chiplet that eats power that could be better spent in the v-cache enabled die.


Factorio didn't get detected as a game properly so it was half on the vcache cores and half not on it. It's kinda heterogenous compute, and schedulers aren't that smart about it (at least Windows' isn't). If you set affinity to just the 8 cores with vcache then the 7950x3d matches the 7800x3d
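
If the detection misses like that, pinning manually is simple enough. A sketch for Linux below (equivalent to taskset; on Windows, Task Manager affinity or Process Lasso does the same). The assumption that logical CPUs 0-15 are the V-cache CCD is just that, an assumption - the numbering varies, so check lscpu -e or the per-core cache sizes first:

    import os

    # Assumed mapping: logical CPUs 0-15 = the eight V-cache cores + SMT siblings.
    vcache_cpus = set(range(0, 16))

    os.sched_setaffinity(0, vcache_cpus)   # pin this process (pid 0 = self)
    # ...then spawn the game from here; children inherit the affinity mask.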


When the cache is filled up then they're pretty much tied.

https://m.youtube.com/watch?v=0oALfgsyOg4

Excluding the massive power difference...


Gamer's Nexus is starting to do simulation benchmarks, like here https://gamersnexus.net/cpus/new-amd-ryzen-7-5700x3d-cpu-rev...

(direct link to image: https://gamersnexus.net/u/styles/large_responsive_no_waterma...)


Same here, I'm more interested how it performs in X4 Foundations with one of the benchmark saves. This is a very CPU and RAM heavy game.


I am finding it hard to come to terms with the power and thermal figures. The CPU draws 320 watts and likes to operate at 100 degrees Celsius. The CPU uses adaptive boost technology to pull more power to get to the 100C temperature mark. Again, from the review, most enthusiast motherboards default to limits near 4096W of power and 512A of current.

Compared to household appliances, with 4000 watts of power you can run 3 microwaves (at 1200 watts), more than 5 refrigerators (at 800 watts), a reasonably sized central air conditioning unit (though 5KW models aren't that rare), etc.

These power and thermal figures make me wonder why Intel is not moving towards Apple's design philosophy behind the M1, M2, M3 series of chips.


> Most enthusiast motherboards default to limits near 4096W of power and 512A of current.

No, that's just a way to set no upper limit. A very beefy desktop power supply is 1600W, and that's typically over-specced to handle brief surges of power.


Why are you comparing the absolutely most power hungry, performant chip meant for desktop use in workstations and gaming machines with a low powered ARM chip meant for laptops?

Intel makes laptop chips too and those don't use 100W.

Why does everyone these days hate consumer choice and market diversity so much?


>Why does everyone these days hate consumer choice and market diversity so much?

A lot of people got into CPU topics because Apple created their M1s, and they think that ARM is some unparalleled thing that everyone must adopt and that Apple's design goals are the most important (other market segments are irrelevant)


The extremely nonlinear scaling is what gets me.


You can get a Mac Pro "desktop tower/cheese grater" with M2. iMac too.


I9-14900KS and top AMD chips run laps around Apple chips.

They shouldn't even be compared since they target different audiences.


And? Those are still significantly slower than this chip on the benchmarks they can run.

Why do you need to pull in Apple so badly into this?

This bloviating is really getting tiring here - this is the most expensive, most high-end chip that people put into gaming machines and their workstations. People who really want the maximum power no matter the cost or heat.

If you utter "laptop", "M", "ARM" in this context you're not the target market for this chip. That's OK. Not everything needs to be a medium powered laptop chip for browsing.


You can downvote me all you want, it does not matter. You can call all ARM chips "laptop" grade all you want. But this misrepresents how this arch is used in servers and desktop "PCs" right now. And that's the only point I want to make. ARM arch chips are not laptop only. Can they match the power of this new i9? No. But that wasn't my point.


You can get x86 chips designed for laptops in desktop form factors too.

It's not about Arm. All of Apple's M chips so far have been primarily designed for mobile use, and that strongly affects how the power usage scales. It makes a basic comparison of watts not very useful.


> All of Apple's M chips so far have been primarily designed for mobile use

There, you said it again. This is wrong - you hear Apple M chip and you think "laptop" primarily, but that is no longer true. Just as it would be to say ARM is primarily for smartphones. Now I don't want to talk about sales numbers, but the M chips in the Apple Studio and Mac Pro (and those used by other manufacturers in servers) are a different category than "laptop". For a quick shallow impression see: https://nanoreview.net/en/cpu-compare/intel-core-i9-14900k-v...

(nevermind it's the i9-14900K)

Again, the I9 is more powerful, but that doesn't make all of ARM or M a "laptop".


> the M chips in the Apple Studio and Mac Pro (and those used by other manufacturers in servers) are a different category than "laptop".

The M2 ultra is basically two M2 Maxs stuck together. It's the same chip that's in laptops, and was designed around a laptop power budget.

In what way is it in a different category?

Designing for different power targets has a significant effect on power and performance metrics. When a chip is designed to be able to take tons of watts, that hurts its efficiency even when you're currently running at a low wattage. So comparing chips with different wattage philosophies gets tricky.


They have a broad line up.

They provide laptop chips, desktop, high-end enthusiast desktop and server chips.

Why should they just copy apple?!


Manufacturers try to reuse as much as possible for efficiency's sake but one size does not fit all. If you try to have the same underlying blocks powering your super low-power ultra-portables, as well as the high power server chips, and everything in between (including the monstrosity in the article) the definition of efficiency starts to need a very subjective understanding.


Not sure whether it was AMD or Asus but a few months ago Asus pushed a firmware update to their recent AMD motherboards preventing users from undervolting their CPU in the motherboard because it would fry the motherboard. I can't find the article now but it happened last year.


They do, check Lunar Lake


Because of the PC gaming crowd - marketing of the bigger numbers plays to them - nobody cares about or understands IPC or power efficiency.

Apple had it easier with the switch to ARM - they use vague metrics like "over 2 times faster" without actually getting into benchmarking or technical details like the PC crowd does.


Apple's crowd is more tolerant to BS too.

See Apple still comparing M3 to M1 instead of comparing M3 to M2.

They will tell you "It's because M1 users are the most probable to upgrade" but nothing says Apple couldn't have compared to the M2 also.

That's the amount of copium one has to inhale when stuck to the whims of a single vendor.


> Apple's crowd is more tolerant to BS too.

> That's the amount of copium one has to inhale when stuck to the whims of a single vendor.

These are such weird things to add here. Your comment was a perfectly good criticism of the way Apple markets their M1/2/3 CPUs, but then you added that extra nonsense and just came off as someone who spent way too much time in a PC gaming culture war on Reddit that the rest of us don't really care about.


Please refrain from ad-hominem attacks as per HN guidelines.

As for Apple consumers being more susceptible to being deceived, it seems we'll have to agree to disagree on our opinions. At least I provided an example.

Your comment adds nothing.


They said the content of your comment was bad. That is significantly less "ad hominem" than your comment was.


Oddly, that power draw is close to a Threadripper 7995WX with 96 cores on OEM systems


At 512A of current you are capped somewhere around 750W.


I do like the Task Energy plot towards the end (second to last plot).

It shows how wasteful these new Intel processors are compared to their direct competition.


A great gift for the Stellaris fan in your life!


You mean you get that far that CPU matters without some random catastrophe waltzing through your empire like you were dust?


play on a large map with lots of civilizations, doesn't take long.

add multiplayer into the mix for an even worse experience.


Fair enough. My friend and I play relatively small galaxies and both have somewhat beefy processors, so we haven't really noticed it slowing down outside the "hey where'd the weekend go" amounts of time it consumes.


15 years ago we made a Celeron run at 8GHz, things have really stagnated http://www.madshrimps.be/articles/article/937/LN2-Overclocki...


I'd prefer more cores evenly scaled, not a few cores with some crazy frequencies.


I build art using an M1 Ultra with 20 cores. Many times a day, I have all cores running for 5-20 seconds at a time. I have also never heard a fan running in my Mac Studio. Having a single core that runs at a crazy speed but is cooled by some crazy cooling fluid will not make much difference to me. I can see a single-thread game being helped by these processors, however. I can't see anyone being silly enough to run these in a datacenter all day long unless you build it in space.


I agree but I just wanted to point out, in case you're curious, that it's actually harder to dissipate heat in a vacuum because you can't use ambient air for convection. An underwater data center though...


Single-threaded games are not a thing today, so that's not something to worry about.

Besides, games are GPU-bound, not CPU-bound, for the most part.


Depends on what you do. This is still a consumer CPU, so having fewer, faster cores benefits the average consumer application more.


I don't agree. The average consumer application these days should be parallelized like any other, so more cores should be better. It's not the last century anymore.


The time taken for any process is still limited by Amdahl's law. You can't make inherently serial parts any faster by throwing more cores at it.
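
As a back-of-the-envelope sketch (just the textbook formula, nothing from the article): even with 90% of the work parallelizable, 16 cores only buy you ~6.4x, and 64 cores barely reach ~8.8x.

    # Amdahl's law: speedup = 1 / ((1 - p) + p / n)
    # p = parallelizable fraction of the work, n = number of cores
    def speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    for n in (4, 8, 16, 64):
        print(n, "cores:", round(speedup(0.9, n), 1), "x")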


I disagree. Any CPU you get nowadays has a minimum of 4 cores, which is plenty for the average consumer application. The most basic things people do - email, browsing, messaging - are still bottlenecked by single-threaded performance more than multi-threaded. Even most AAA games don't scale beyond 8 cores.


It’s not about a single application scaling to all cores, but modern OSes do A LOT in the background.


Even on my decade old CPU, all those background OS processes fit into a single core.


And those background tasks can be scheduled onto the slower cores.


Basic things like compression can be needed by a consumer application, and that's just one example.

So I completely disagree with the "2 cores are enough for consumer applications" idea.

And it's even further from the truth for games. Last time I looked at something like Cyberpunk 2077 in the debugger, it had 81 threads. 81! Some of them were vkd3d-proton threads, but only a small part.

And it actually does load the CPU pretty evenly if you monitor it, so I'd say it scales OK.


Cyberpunk may not be an extreme outlier, but it's definitely better than the average game at making effective use of more than a handful of CPU cores.


The data those threads are touching is just as important. Cache locality is hugely important to many game workloads.


Not the case for gaming; single-core performance is still king for the average game.


For the average game from 10 years ago - maybe. Not for modern games; for them, GPU performance is king, not single-core CPU performance.


6.2 GHz? When did that happen?


When it happens, it only happens for less than a minute as thermal throttling kicks in, unless you're doing heroic cooling things that cost extra $$$.


Tell me more about these heroics. Their water-cooled setup still maxed out; I'm interested in more aggressive options.


You can remove the heat spreader, AKA delidding the CPU. It's a significant bottleneck in cooling modern, power-dense CPUs like this, if you're relying on a fluid at ambient temperature.


Yep, the venerable der8auer channel has confirmed this does allow the P-cores to run at 6 GHz continuously at ~75 °C:

DE: https://youtu.be/S_d74JB2ECY

EN: https://youtu.be/5AA2AsK2ewE


For the real heroics we'd need cooperation from the manufacturer. Such as using monoisotopic diamond as substrate or putting some channels for liquid helium into the chip.


Things that are not liquids at 1 atm of pressure. Liquid helium, liquid nitrogen, etc.


You can run a water chiller, but managing condensation gets tricky.


Closed loop helium gas cooling system = lots of gigahertz


With cloud, why aren't more heroic things done, e.g. LN2 cooling for CPUs to sustain 6 GHz?


Cloud providers care about performance per watt (when you are effectively running a small power plant). Threadripper absolutely dominates in this space.

https://youtu.be/yDEUOoWTzGw?si=RoA5HLGRSPzrsamX&t=6m


Are there workloads bottlenecked only by single-threaded perf per watt, where having multiple cores awake in the same box would be antithetical to the point? Some kind of scale-out HFT liquidity strategy, perhaps?


Minecraft servers are single-threaded. For hosting one of those you want a specialized hoster, which will usually have Intel boxes.


I comparison-shop dedicated hosting services a lot, and I've often wondered why a "Minecraft server" is advertised as its own thing, separate from a regular machine. I had always assumed it was just an SEO thing. TIL.


Sounds like they’re saying it’s the base rate.



Suppose Apple created a larger M-series CPU that had the same power draw (320 W) and operating temperature (100 C). How well would it perform against this?


For single core, it wouldn't perform as well as you think because performance per watt is not linear. It would push this hypothetical CPU well into the zone of diminishing returns.

For multi core you can just look at AMD Epyc CPUs which achieve great performance by using many (up to 128) relatively low power cores. An all core load pulls around 400W of power.
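
On the "performance per watt is not linear" point, roughly speaking (a crude model, not figures from the article): dynamic power scales with C*V^2*f, and voltage has to climb with frequency near the top of the curve, so power grows more like the cube of the clock.

    # Crude cubic model of dynamic power vs clock (an assumption; it ignores
    # leakage and the real, non-linear voltage/frequency curve of any chip).
    def relative_power(clock_ratio):
        return clock_ratio ** 3

    # Pushing a core from 4.5 GHz to 6.2 GHz (+38% clock) costs roughly
    # 2.6x the power under this model -- classic diminishing returns.
    print(round(relative_power(6.2 / 4.5), 1))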


> and operating temperature (100 C)

Like the new MacBook Air, you mean?


Zing!


Haha. It would have active cooling and sit in a pretty silver pizza box in a dark dungeon.

But why is this such a silly idea? A rebirth of the Apple Xserve. They make excellent performance cores, they have half of TSMC dedicated to them, and developers love them.


> This allows it to hit the previously unheard of 6.2 GHz on two cores

Does this mean that only two cores can be simultaneously at that speed?


Yes, that's "Intel Turbo Boost Max Technology 3.0" https://www.tomshardware.com/reference/intel-favored-cpu-cor...

The CPU has 2 golden cores that can hit max boost, and the others can't (at least not consistently enough to market it)
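
On Linux with the intel_pstate driver you can usually spot the favored cores yourself, since they advertise a higher per-core maximum frequency in sysfs (a sketch; assumes the cpufreq interface is exposed on your system):

    # List each logical CPU's advertised max frequency; the two "golden" cores
    # typically report a higher value than the rest.
    import glob

    paths = glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/cpuinfo_max_freq")
    for path in sorted(paths, key=lambda p: int(p.split("/")[5][3:])):
        with open(path) as f:
            mhz = int(f.read()) // 1000
        print(path.split("/")[5], mhz, "MHz")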


Intel chips are already running extremely hot and take a lot to cool properly; there's not enough thermal headroom to boost all cores to those clocks and voltages in the latest generation of chips. Two cores is probably pushing it.


What in the actual, 6.2 GHz!? The power draw must be insane.


I hate P-cores and E-cores. It feels like they're not trying to improve performance, and are instead just chasing the marketing appeal of ever-higher core counts.

This of course backfires once people realize it doesn't matter if you have 64 cores when most of them run like crap and force you to constantly self-manage which processes are allowed to run on the performance cores (sketched below).

I'd rather have fewer cores, all equally fast, that don't cause a hassle.
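
For reference, that self-managing looks something like this on Linux (the CPU IDs are an assumption for an 8P+16E part where logical CPUs 0-15 are the hyperthreaded P-cores; check your actual topology with lscpu first):

    # Pin the current process (and anything it spawns) onto the assumed P-cores.
    import os

    P_CORE_CPUS = set(range(0, 16))          # hypothetical: 8 P-cores x 2 threads
    os.sched_setaffinity(0, P_CORE_CPUS)     # pid 0 = the calling process
    print("now restricted to CPUs:", sorted(os.sched_getaffinity(0)))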


> when most of them run like crap

That's not even close to an accurate description of Intel's E cores. They have the same performance per clock as the Skylake cores they shipped in all their desktop processors from 2015 until just three years ago, and the E cores run at similar speeds. They may be descended from Intel Atom, but they're not crap.

The general idea of the E cores is sound: for tasks that can scale to more than a handful of threads, the E cores provide better performance per mm^2 of silicon and better performance per Watt. Going beyond the current 6-8 P cores in a consumer processor makes no sense; extra silicon budget should be spent on more E cores (and maybe more cache per E core).


Except, as AMD is showing, you can have both and not be at the mercy of hoping your OS's scheduler consistently figures out what to do.


AMD's main advantage is that their P cores are much more reasonably-sized than Intel's P cores, so they don't need a heterogeneous solution as badly as Intel. But even so, AMD has an entry-level consumer chip that is heterogeneous, though not as severely as Intel's consumer chips.


AMD has the advantage that Zen 4c is functionally the same as Zen 4 in every way except its frequency profile. The cache hierarchy and IPC of Phoenix 2's heterogeneous cores are identical, so it's much easier to make scheduling decisions than on Intel.


AMD "c" cores are about 65% as big as the normal ones.

Intel "E" cores are about 30% as big as the normal ones.

AMD avoids the scheduling issues, but does not get the density benefits.


E-core boost is limited to ~4 GHz on most models. Falling back to a ~2015 mid-range CPU when your OS makes a scheduling error feels really bad, and it is very noticeable.


I wish someone would release a CPU with a ton of E-cores; otherwise the idea makes no sense to me as a user of a workstation-class machine.

When I can get a CPU that is all P-cores and has significantly more cores than even the biggest P + E hybrid CPU, then what's the point? But if they could give me, say, a hybrid CPU with double the core count of the biggest available P-only CPU - now we're talking! Alas, the market for such a thing is probably not big enough to be economical.


The power/performance/area tradeoff is that 4 E-cores fit in the space of one P-core. Some newer laptops use Intel 2P+8E or 2P+4E parts.


But having E-cores does seem to offer the benefit of greater multi-threaded performance and reduced power consumption - something Intel has struggled with [1][2]. I find this innovation from Intel much more refreshing than their typical route of clocking CPUs beyond the point of diminishing returns on power consumption. I imagine this extra flexibility in power consumption also makes a lot of sense for laptops.

Though I don't have experience using these CPUs, so I don't know how well scheduling of processes is handled (it's probably only going to improve from here, though).

Sure, marketing might play a bit into it. Less savvy buyers might make the mistake of comparing core counts across different brands/architectures rather than checking benchmark comparisons. Makes me think of the class action over the Bulldozer architecture [3]. As long as they advertise the cores as distinct P- and E-cores, I think it's fair enough.

[1] https://www.tomshardware.com/reviews/intel-core-i9-12900k-an...

[2] https://www.tomshardware.com/reviews/cpu-hierarchy,4312.html...

[3] https://en.wikipedia.org/wiki/Bulldozer_(microarchitecture)#...


I believe Intel E-cores are all about maximizing performance. Each E-core has 40-50% of the performance of a P-core, but it only takes ~1/4 of the space. You need a few fast cores for tasks that don't parallelize well, but beyond that, you really want to maximize performance per die area and/or power efficiency.

It's a bit inconvenient that all cores are not identical, especially in real-time applications such as video games. Some schedulers may also not be able to use them well automatically. But if you care mostly about throughput, E-cores are superior to P-cores.
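
Working through the numbers in the first paragraph (the ~40-50% performance and ~1/4 area figures are estimates from this thread, not measurements):

    # Throughput per unit of die area under the rough figures above.
    p_perf, p_area = 1.00, 1.00    # P-core as the baseline
    e_perf, e_area = 0.45, 0.25    # assumed: ~45% of the perf in ~25% of the area

    print("P-core perf/area:", p_perf / p_area)   # 1.0
    print("E-core perf/area:", e_perf / e_area)   # 1.8
    # For embarrassingly parallel work, four E-cores in one P-core's footprint
    # deliver roughly 1.8x the throughput of that single P-core.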


Even the P-cores are not identical. On a 14900K there are two cores with +2 speed bins (6 GHz) vs all the others (5.8 GHz). And there are local thermal effects, because the 8 P-cores are arranged in two rows of four. Some of them get heated from all sides, some don't. Optimal scheduling on such a system is not easy.


[flagged]


Please try not to take perceived criticism of Intel as a personal attack, and please don't accuse others of being partisan without stopping to consider whether the bias you perceive is present in the comment itself or only in your reading of it.

Jeffbee's comment can reasonably be read as simply pointing out the futility of asking for all cores to be identical: the existence of minor but measurable variations between nominally identical cores is the unavoidable reality that undermines any attempt to insist on complete symmetry between cores on the same chip. This fact is not an attack on Intel; it applies just as much to chips made by AMD, and nobody presented it as a downside unique to Intel. Nor does jeffbee seem to actually be asking for an idealized chip that can be optimally managed by the simplest schedulers.


I write low-level code and I need to profile it. It's enough pain trying to understand performance under the turbo boost model. Then Intel added AVX-512 throttling. Now there are P vs E cores, and some P-cores have a different max clock than others.

AMD adds its own set of problems with the lower-clocked CCD on X3D parts.

As a developer I don't mind P/E on my laptop; I actually enjoy the Apple Silicon version of it. On a desktop, though, where cooling and power are not a problem, I would strongly prefer a uniform, predictable architecture.


I personally feel that if you have bought a high-end machine and are not thinking about how to get peak performance, you might have wasted your money.


E-cores make a significant difference in the most common computer use case: many low-workload tasks and system processes, plus a few high-workload tasks.

This allows maximizing throughput for the majority of users while also optimizing for silicon usage and power draw.

Perhaps E-cores don't benefit your particular use cases, but there's been over a decade of proof of the value of heterogeneous cores across mobile, laptops, and now desktops.


P-cores and E-cores make engineering sense; that's why they do it. Giving every core the same feature set and maximum clock wastes power and thermal headroom for no measurable gain in the majority of real-world workloads.


I suspect that's the outcome of trying to optimise for some specific efficiency benchmark.


It's the outcome of trying to optimize for cost while trying to maintain good performance on two very different categories of workload, and hurriedly throwing together a product based on the components they had on hand (the Core and Atom families of CPU core designs).


For well-scaling workloads, optimizing for efficiency is optimizing for performance. For serial workloads, even one P-core is enough. There aren't many workloads in the middle that scale, but only to a few cores.


That is a way to increase core count at a manageable cost, given that having threads share the same core turned out not to be such a good idea after all.


And this “backfiring” - is it in the room with us now? Keeping low-priority tasks from blocking other work that needs the performance is a great feature.


E-cores would be fine if there were on the order of a hundred or more of them in exchange for a P-core's die area, and the memory bandwidth to keep them fed.


We have those, but we call them GPUs.

E cores are meant to make a much milder tradeoff of performance and features than would be required to jam that many computing units into that area.


I don't care about any of these things. Just give me more memory channels.


Needs to be $140 cheaper.



