Great time to research whether those choices are still valid or if there's a better way. In any case, it's just an overview, not a total rewrite from the AI's perspective.
To the last point I would see it the other way around. Rearranging code for the Pentium's pipelined FPU with 0-cycle FXCH sped up floating point by probably well over 2x compared to heavily optimized code running on a K5/K6. I'm not even sure the K6/K6-2 ever got 0-cycle FXCH; the K6-III did, but there was still no FPU pipelining until the Athlon.
Quake wouldn't have happened until the Pentium II if Intel hadn't pipelined the FPU.
You're not wrong, the performance gain from proper FPU instruction scheduling on a Pentium was immense. But applications written before Quake and the Pentium gained prominence, or non-game-oriented ones, would have needed more blended code generation. Optimizing for the highest-end CPU of the day at the cost of the lowest-end one wouldn't necessarily have been a good idea, unless your lowest CPU was a Pentium. (Which it was for Quake, which was a slideshow on a 486.)
K6 did have the advantage of being OOO, which reduced the importance of instruction scheduling a lot, and having good integer performance. It also had some advantage with 3DNow! starting with K6-2, for the limited software that could use it.
Jeff is so worried about advertisers at this point he can't even call it what it is. Double rainbows are odd, and so are axolotls. This thing is not odd, it's just bad.
Why "bad"? It seems to me it does exactly what it sets out to do.
Obviously, if you just want a fast laptop with a long battery life and you don't care what is inside it then you should get a Mac, or possibly something with the latest Qualcomm SoC, or an x86.
If so then this isn't for you anyway.
Jeff's facts are, obviously, correct but I really wish he'd drop all the snark. Just start off right at the start by saying "If you don't want this BECAUSE it's the RISC-V then it's not for you, wait for the 8-wide RVA23 machines in a year or so" and then stick to the facts from then on.
The people who are actually interested in something like this need a machine to work on for the next year, and this is by far the best option at the moment (unless you need RVV).
It's, so far, and for many purposes, the fastest RISC-V machine you can buy [1] and you can carry it around and even use it without power in a cafe or something for a while.
I don't even remember the last time I wanted to use my laptop away from AC power for more than 2-3 hours. On my 24-core i9 the battery life is only slightly longer anyway -- about 5 hours of light editing and browsing in Linux, but if I actually start doing heavy compiling, drawing 200W, then it's dead really quickly.
[1] the Milk-V Pioneer with 64 slower cores is faster for some things, but there isn't all that much that can really use more than 8 cores, even most software builds. And it's been out of production for a year, and costs $2500+ anyway.
No, not on a laptop with anything like a comparable number of cores.
Any x86 or Apple Silicon laptop that can match the DC-ROMA II in QEMU will need around three times as many cores -- if the task even scales to that many cores -- and will cost a lot more.
I tried compiling GCC 13 on my i9-13900HX laptop with 24 cores, and on a Milk-V Megrez, which uses the same chip but with only half the cores (4 cores, not 8):
on Megrez:
real 260m14.453s
user 872m5.662s
sys 32m13.826s
On docker/QEMU on i9:
real 209m15.492s
user 2848m3.082s
sys 29m29.787s
Only about 25% faster on the x86 laptop. Compared to an 8-core RISC-V it would be slower.
And 3.2x more CPU time on the x86 with QEMU than on the RISC-V natively, so you'd need that many more "performance" cores than this RISC-V laptop has RISC-V cores.
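For reference, here's a quick sketch of where those two ratios come from, using the `real` and `user` times quoted above (converted to minutes):

```python
# Sanity-check of the GCC 13 build-time ratios quoted above (times in minutes).
megrez_real = 260 + 14.453 / 60   # native RISC-V, 4x P550
megrez_user = 872 + 5.662 / 60
i9_real = 209 + 15.492 / 60       # 24-core i9 under docker/QEMU
i9_user = 2848 + 3.082 / 60

wall_speedup = megrez_real / i9_real   # how much faster the i9 finished overall
cpu_ratio = i9_user / megrez_user      # total CPU time burned vs. native RISC-V

print(f"wall-clock speedup: {wall_speedup:.2f}x")  # ~1.24x, i.e. ~25% faster
print(f"CPU-time ratio:     {cpu_ratio:.2f}x")     # ~3.27x more core-minutes
```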
Or build Linux kernel 7503345ac5f5 (almost exactly a year old at this point) using RISC-V defconfig:
VisionFive 2 (4x U74 in-order cores @1.5 GHz, similar to RPi 3)
real 67m35.189s
user 249m55.469s
sys 13m35.877s
Milk-V Megrez (4x P550 cores @1.8 GHz)
real 42m12.414s
user 149m5.034s
sys 11m33.624s
The cheap (~$50) VisionFive 2 is the same speed as an M1 Mac with qemu, or twice as fast as a 6-core Zen 2 with qemu.
The 4 core Megrez takes around twice as long as the 24 core i9 with qemu. Eight of the same cores in the DC-Roma II will match the 24 core i9 and be more than three times faster than the 8 core M1 Mac.
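The board-to-board comparison works out like this (a quick sketch from the `real` times above; the per-clock figure assumes, simplistically, that build time scales with core frequency):

```python
# Wall-clock comparison of the two boards on the kernel build above (minutes).
vf2_real = 67 + 35.189 / 60     # VisionFive 2: 4x U74 in-order @ 1.5 GHz
megrez_real = 42 + 12.414 / 60  # Milk-V Megrez: 4x P550 out-of-order @ 1.8 GHz

speedup = vf2_real / megrez_real
print(f"P550 board finishes {speedup:.2f}x faster")  # ~1.60x

# Normalizing for the 1.5 vs 1.8 GHz clocks, the P550's rough per-clock
# advantage on this workload:
per_clock = speedup / (1.8 / 1.5)
print(f"~{per_clock:.2f}x per clock")  # ~1.33x
```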
It might have been human operated, but it also might have just been copying its training data.
A robot that properly supports being teleoperated wouldn't immediately fall over the moment someone deactivates a headset. Falling over is almost the worst thing a robot can do; you would trash a lot of prototypes and expensive lab equipment if robots fell over every time an operator needed the toilet or had to speak to someone. If you had such a bug, fixing it would be the very first thing you'd do. And it's not like making robots stand still is a hard problem these days - there's no reason removing a headset should cause the robot to immediately deactivate.
You'd also have to explain why the supposed Tesla teleoperator would take the headset off with people in front of him/her during a public demonstration, knowing it would cause the robot to die on camera and get them immediately fired.
I think it's just as plausible that the underlying VLA model is trained using teleoperation data generated by headset wearers, and just like LLMs it has some notion of a "stop token" intended for cases where it completed its mission. We've all seen LLMs try a few times to solve a problem, give up and declare victory even though it obviously didn't succeed. Presumably they learned that behavior from humans somewhere along the line. If VLA models have a similar issue then we would expect to see cases where it gets frustrated or mistakes failure for success, copies the "I am done with my mission" motion it saw from its trainers and then issues a stop token, meaning it stops sending signals to the motors and as a consequence immediately falls over.
This would be expected for Tesla given that they've always been all-in on purely neural end-to-end operation. It would be most un-Tesla-like for there to be lots of hand crafted logic in these things. And as VLA models are pretty new, and partly based on LLM backbones, we would expect robotic VLA models to have the same flaws as LLMs do.
Well, the human operator was just taking off a VR headset (and presumably forgot to deactivate the robot first). It just so happened to also look like the robot was fed up with life.
Yes, and pay them in crypto using wallets confirmed to be under FSB control, and manage them using FSB agents. All you have to do is take over the FSB, it's really simple!
In other news, the pro-Russian opposition and PiS (the former anti-EU ruling party) refused to override the president's veto on crypto, despite evidence of all the captured saboteurs being paid this way. https://www.reuters.com/business/polish-parliament-upholds-c...