Yeah, the obvious thing with processors is to do something similar:
(1) Measure MIPS with perf
(2) Compare that to the max MIPS for your processor
Unfortunately, MIPS is too vague since the amount of work done depends on the instruction, and there's no good way to measure max MIPS for most processors. (╯°□°)╯︵ ┻━┻
The advantage of stress-ng is that it's easy to make it run with specific CPU utilization numbers. The tests where I run some number of workers at 100% utilization are interesting since they give such perfect graphs, but I think the version where I have 24 workers and increase their utilization slowly is more realistic for showing how production CPU utilization changes.
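A minimal sketch of how that ramp could be driven from Python (`--cpu`, `--cpu-load`, and `--timeout` are real stress-ng flags; assumes stress-ng is installed if you actually run the commands):

```python
def stress_cmd(workers, load_pct, timeout="60s"):
    # Build a stress-ng invocation: `workers` CPU stressors, each trying
    # to hold `load_pct` percent utilization for `timeout`.
    return ["stress-ng", "--cpu", str(workers),
            "--cpu-load", str(load_pct), "--timeout", timeout]

# Ramp 24 workers from 10% to 100% load in steps of 10;
# each command could be executed with subprocess.run(cmd, check=True).
ramp = [stress_cmd(24, load) for load in range(10, 101, 10)]
```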
Fun data point though: I just ran the Phoronix nginx benchmark under three configurations and got these results:
- Pinned to 6 cores: 28k QPS
- Pinned to 12 cores: 56k QPS
- All 24 cores: 62k QPS
I'm not sure how this applies to realistic workloads where you're using all of the cores but not maxing them out, but it looks like hyperthreading only adds ~10% performance in this case.
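Plugging in the QPS numbers above:

```python
# QPS figures from the three benchmark runs above
qps_6, qps_12, qps_24 = 28_000, 56_000, 62_000

physical_scaling = qps_12 / qps_6   # 2.0x: near-linear scaling across physical cores
ht_gain = qps_24 / qps_12 - 1       # ~0.107: the hyperthreads add ~11% here
```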
Some esoteric methods of measuring CPU utilization are to calculate either the current power usage over the max available power, or the current temperature over the max operating temperature. Unfortunately these are typically even more non-linear than the standard metrics (but they can be useful sometimes).
except it doesn't really tell you much, because having some parts of the CPU underutilized doesn't mean adding load will utilize them. For example, if your current load underutilizes the floating point units and nothing else you run uses them, that headroom is effectively unusable.
Thanks for the feedback. I think you're right, so I changed a bunch of references and updated the description of the processor to 12 core / 24 thread. In some cases, I still think "cores" is the right terminology though, since my OS (confusingly) reports utilization as-if I had 24 cores.
There are observable differences. For example, under HT, a TLB flush or context switch on one hyperthread will likely be observable by the neighboring thread, whereas on a fully dedicated core you won't observe such things.
A big part of this is that CPU utilization metrics are frequently averaged over a long period of time (like a minute), but if your SLO is 100 ms, what you care about is whether there's any ~100 ms period where CPU utilization is at 100%. Measuring p99 (or even p100) CPU utilization can make this a lot more visible.
The vertical for this company was one where the daily traffic was oddly regular. That the two lines matched expectations likely has to do with the smoothness of the load.
The biggest problem was not variance in request rate; it was variance in request cost, which is usually where queuing kicks in, unless you're being dumb about things. I think for a lot of apps p98 is probably a better metric to chase; p99 and p100 are useful for understanding your application better, but I'm not sure you want your bosses to fixate on them.
But our contracts were for p95, which was fortunate given the workload, or at least whoever made the contracts got good advice from the engineering team.
If your SLO is 100 ms you need far more granular measurement periods than that. You should measure the p99 or p100 utilization for every 5-ms interval or so.
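As a sketch of why the window size matters (the 5 ms samples themselves would come from something like diffing /proc/stat counters on Linux; the percentile math is the point here):

```python
import math

def p99(samples):
    # Nearest-rank p99 over per-interval utilization samples (0.0 - 1.0)
    ordered = sorted(samples)
    return ordered[math.ceil(0.99 * len(ordered)) - 1]

# A minute that "averages 41%" can still pin the CPU in ~100 ms bursts:
samples = [0.4] * 11_800 + [1.0] * 200   # 12,000 x 5 ms intervals = 1 minute
avg = sum(samples) / len(samples)        # ~0.41 -- looks comfortably idle
spike = p99(samples)                     # 1.0 -- the saturated bursts show up at p99
```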
To be fair, in most of these tests hyperthreading did provide a significant benefit (in the general CPU stress test, the hyperthreads increased performance by ~66%). It's just confusing that utilization metrics treat hyperthread usage the same as full physical cores.
Expanding on that last point, one of the examples in the article is someone making ~$70 per month in cryptocurrency. The CI people -could- send a lawyer to Vietnam to try to collect that $70, but even if they succeed it's very much not worth it.
They're not starting a CI job per hash (that would be too slow). I'm not sure exactly how each of these cryptocurrencies works, but presumably what they're doing is starting a miner which attempts hashes for a while and then stops. And the only reason the jobs stop at all is that it would be too obvious if they ran continuously.
Investment info sites seem to be skeptical that there's anything wrong with naked shorts. They became illegal after the 2008 crisis, but they're bad in the sense of being extremely risky, not in the sense of being fraudulent.
It's also unclear if naked shorting is actually happening here. It's entirely possible for the short interest to be far above 100%. Consider this setup:
- WidgetCo floats exactly one share
- Alice buys the one share (short interest = 0%)
- Alice lends the share to Bob
- Bob sells the share to Carol (short interest = 100%)
- Carol lends the share to Dave
- Dave sells the share to Eve (short interest = 200%)
- Repeat until short interest reaches the moon
Bob and Dave both legitimately borrowed a share and sold it, so there's no naked shorting going on.
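A tiny ledger sketch of that chain (using the hypothetical names from the example above):

```python
# Every short sale below is backed by a genuinely borrowed share, yet
# short interest (shares sold short / float) still exceeds 100%.
float_shares = 1
sold_short = 0
for lender, borrower in [("Alice", "Bob"), ("Carol", "Dave")]:
    sold_short += 1  # the borrower sells the share lent by the lender
    print(f"{borrower} sells the share borrowed from {lender}: "
          f"short interest = {sold_short / float_shares:.0%}")
```

This prints short interest of 100% and then 200%, with no naked (unborrowed) short anywhere in the chain.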
I suspect what's happening with GameStop is that the stock price is just so absurdly overvalued that short interest has reached a level that people previously thought was impossible.
Interesting, I suppose I didn't realize the exact definition of "naked" -- I was regarding "not naked" a bit more abstractly as "protected from inability to purchase the share when required", with "naked" implying that you did not have that protection ...
To me the deviation from the abstract definition above almost seems like a bug in the "borrow checker" ... I'm not totally clear on why the system is designed to allow the same share to be "loaned" multiple times in this manner ...
Why is this (not particularly sound) definition of "borrow" desirable ...? I suppose this scheme has the effect of making assets appear more liquid than they might actually be -- but -- why is that good?
Doesn't it seem like there's a kind of bias introduced by this "illusion of more liquidity"? Saying this mechanism "increases liquidity" feels almost like putting grease on a slope and saying "the slope is steeper" ...
> I suspect what's happening with GameStop is that the stock price is just so absurdly overvalued that short interest has reached a level that people previously thought was impossible.
This is backwards. The stock was already over 140% shorted when it was valued less than the cash it had on hand. Plus there were a number of factors converging on a potential turnaround. That's how this all started; it was originally a strong value play with the short squeeze as only a possible cherry on top. You can check the old videos from Roaring Kitty on YouTube to see the original analysis.