
Euro importers love hail-damaged Copart cars; they're very cheap to fix here.

I was wondering about the reason and suspect a lack of advertising. There was an ad campaign in Computer Gaming World magazine: http://heroescommunity.com/viewthread.php3?TID=40698&pagenum...

CGW issue 144, 1996/07: a small, piss-poor banner

CGW 146, 1996/09: full page, but confusing, with only two tiny, bad game screens and a wall of text

CGW 147, 1996/10 (the month of release): the same wall of text and one slightly better tiny screen

CGW 148, 1996/11: half-page ad, but finally full of good screens and an actual description of what the game is all about

CGW 151, 1997/02: they finally got the hang of this, with a detailed description of what to expect from the game

CGW 152 (1997/03), CGW 153 (1997/04), and CGW 156 (1997/07): reverting to the issue 148 style

CGW reviewed it very late, in 1997/02, giving it 100%.

I can't find any ads in period-correct PC Gamer. They gave it a great review, but the annual PC Gamer Top 100 for 1996 didn't even mention it, and the 1997 list put it far back at #25: https://www.pixsoriginadventures.co.uk/pc-gamer-top-100-1997...


Afaik no, the German report just lists passed/flagged TÜV. TÜV fails you for negligence, like not servicing the car every year like a good German VW owner.

It does not require that, though obviously a car which is never serviced is more likely to fail.

TÜV inspection is all about checking whether the car is routinely maintained and in optimal working condition. It fails you on things like:

- rusted rotors: a Tesla owner won't ever notice anything wrong with the brakes

- worn-out suspension: Tesla owners are used to a harsh ride


But it also flags tons of your deliberate design choices as bugs, and will recommend removing things it doesn't understand.

just like any junior dev

consider rewriting in rust

that's gonna be painful, as the borrow checker really trips up LLMs

I do a lot of LLM work in Rust, and I find the type system is a huge defense against errors and hallucinations compared to JavaScript or even TypeScript.
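
To make that concrete, here's a minimal sketch (hypothetical types, not from any real project) of the kind of mistake Rust rejects at compile time but JavaScript would happily run:

    // Newtypes turn "passed the wrong quantity" into a type error.
    #[derive(Debug)]
    struct Temperature(f64); // degrees Celsius

    #[derive(Debug)]
    struct Pressure(f64); // kilopascals

    fn set_thermostat(t: Temperature) {
        println!("thermostat set to {:?}", t);
    }

    fn main() {
        let reading = Pressure(101.3);
        println!("sensor reading: {:?}", reading);

        // If a model hallucinates and passes the wrong value here, this simply
        // doesn't compile; the JavaScript equivalent runs and silently misbehaves.
        // set_thermostat(reading); // error[E0308]: mismatched types

        set_thermostat(Temperature(21.5)); // the corrected call
    }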

Great time to research whether those choices are still valid or if there's a better way. In any case, it's just an overview, not a total rewrite from the AI's perspective.

Isn't this just a very effective ad for Palantir? Anyone considering Palantir is of the opinion that the pager operation was super successful.

On the last point I'd see it the other way around. Rearranging code for the pipelined, 0-cycle-FXCH Pentium FPU sped up floating point by probably way more than 2x compared to heavily optimized code running on a K5/K6. I'm not even sure if the K6/K6-2 ever got 0-cycle FXCH; the K6-III did, but there was still no FPU pipelining until the Athlon.

Quake wouldn't have happened until the Pentium II if Intel hadn't pipelined the FPU.


You're not wrong; the performance gain from proper FPU instruction scheduling on a Pentium was immense. But applications written before Quake and the Pentium gained prominence, or ones that weren't game-oriented, would have needed more blended code generation. Optimizing for the highest-end CPU of the time at the cost of the lowest-end CPU wouldn't necessarily have been a good idea, unless your lowest-end CPU was a Pentium. (Which it was for Quake, which was a slideshow on a 486.)

K6 did have the advantage of being OOO, which reduced the importance of instruction scheduling a lot, and having good integer performance. It also had some advantage with 3DNow! starting with K6-2, for the limited software that could use it.
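
For anyone who hasn't done this kind of tuning, here's a rough sketch of the underlying idea in Rust (illustrative only: the real Pentium trick was x87-specific, using 0-cycle FXCH to juggle the register stack, but the dependency-chain side of it looks like this):

    // One accumulator: every add depends on the previous one, so a pipelined
    // FPU spends most of its time waiting on that single chain.
    fn dot_serial(a: &[f64], b: &[f64]) -> f64 {
        let mut sum = 0.0;
        for i in 0..a.len() {
            sum += a[i] * b[i];
        }
        sum
    }

    // Two independent accumulators: the chains can overlap in the pipeline,
    // which is the rough shape of the ~2x gain described above.
    fn dot_interleaved(a: &[f64], b: &[f64]) -> f64 {
        let (mut s0, mut s1) = (0.0, 0.0);
        let mut i = 0;
        while i + 1 < a.len() {
            s0 += a[i] * b[i];
            s1 += a[i + 1] * b[i + 1];
            i += 2;
        }
        if i < a.len() {
            s0 += a[i] * b[i];
        }
        s0 + s1
    }

    fn main() {
        let a: Vec<f64> = (0..1000).map(|i| i as f64).collect();
        let b: Vec<f64> = (0..1000).map(|i| (i * 2) as f64).collect();
        println!("{} {}", dot_serial(&a, &b), dot_interleaved(&a, &b));
    }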


Jeff is so worried about advertisers at this point that he can't even call it what it is. Double rainbows are odd, and so are axolotls. This thing is not odd, it's just bad.

Why "bad"? It seems to me it does exactly what it sets out to do.

Obviously, if you just want a fast laptop with a long battery life and you don't care what is inside it then you should get a Mac, or possibly something with the latest Qualcomm SoC, or an x86.

If so then this isn't for you anyway.

Jeff's facts are, obviously, correct but I really wish he'd drop all the snark. Just start off right at the start by saying "If you don't want this BECAUSE it's the RISC-V then it's not for you, wait for the 8-wide RVA23 machines in a year or so" and then stick to the facts from then on.

The people who are actually interested in something like this need a machine to work on for the next year, and this is by far the best option at the moment (unless you need RVV).

It's, so far, and for many purposes, the fastest RISC-V machine you can buy [1] and you can carry it around and even use it without power in a cafe or something for a while.

I don't even remember the last time I wanted to use my laptop away from AC for more than 2-3 hours. On my 24-core i9 the battery life is only slightly longer anyway -- about 5 hours of light editing and browsing in Linux, but if I start to actually do heavy compiling using 200W then it's dead really quickly.

[1] the Milk-V Pioneer with 64 slower cores is faster for some things, but there isn't all that much that can really use more than 8 cores, even most software builds. And it's been out of production for a year, and costs $2500+ anyway.


I suspect a normal laptop with QEMU would run RISC-V code faster.

No, not on a laptop with anything like a comparable number of cores.

Any x86 or Apple Silicon laptop that can match the DC-ROMA II in QEMU will need around three times as many cores -- if the task even scales to that many cores -- and will cost a lot more.

I tried compiling GCC 13 on my i9-13900HX laptop with 24 cores, and on a Milk-V Megrez, which uses the same chip but only one of them (4 cores, not 8):

on Megrez:

    real    260m14.453s
    user    872m5.662s
    sys     32m13.826s

On docker/QEMU on i9:

    real    209m15.492s
    user    2848m3.082s
    sys     29m29.787s
Just 25% faster on the x86 laptop. Compared to an 8-core RISC-V it would be slower.

And 3.2x more CPU time on the x86 with QEMU than on the RISC-V natively, so you'd need that many more "performance" cores than this RISC-V laptop has RISC-V cores.

Or build Linux kernel 7503345ac5f5 (almost exactly a year old at this point) using RISC-V defconfig:

i9-13900HX docker/qemu

    real    19m12.787s
    user    583m44.139s
    sys     10m3.000s
Ryzen 5 4500U laptop docker/qemu (Zen2 6 cores, Win11)

    real    143m20.069s
    user    820m26.988s
    sys     24m33.945s
Mac Mini M1 docker/qemu (4P + 4E cores)

    real    69m16.520s
    user    531m47.874s
    sys     12m28.567s
VisionFive 2 (4x U74 in-order cores @1.5 GHz, similar to RPi 3)

    real    67m35.189s
    user    249m55.469s
    sys     13m35.877s
Milk-V Megrez (4x P550 cores @1.8 GHz)

    real    42m12.414s
    user    149m5.034s
    sys     11m33.624s
The cheap (~$50) VisionFive 2 is the same speed as an M1 Mac with qemu, or twice as fast as the 6-core Zen 2.

The 4 core Megrez takes around twice as long as the 24 core i9 with qemu. Eight of the same cores in the DC-Roma II will match the 24 core i9 and be more than three times faster than the 8 core M1 Mac.
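
(For reference, the ratios above come straight from the timings; a quick check in Rust, with times rounded to minutes:)

    fn main() {
        // GCC 13 build: Megrez (4x P550, native) vs i9-13900HX (docker/qemu)
        let (megrez_real, megrez_user) = (260.2_f64, 872.1_f64);
        let (i9_real, i9_user) = (209.3_f64, 2848.1_f64);
        println!("wall clock: i9/qemu {:.2}x faster", megrez_real / i9_real); // ~1.24x, i.e. ~25%
        println!("CPU time: qemu burns {:.1}x more", i9_user / megrez_user); // ~3.3x

        // Kernel defconfig build, wall clock
        let (vf2, m1_qemu, zen2_qemu, megrez, i9_qemu) = (67.6, 69.3, 143.3, 42.2, 19.2);
        println!("VisionFive 2 vs M1/qemu: {:.2}x", m1_qemu / vf2); // ~1.0x, same speed
        println!("VisionFive 2 vs Zen2/qemu: {:.2}x", zen2_qemu / vf2); // ~2.1x
        println!("Megrez vs i9/qemu: {:.2}x slower", megrez / i9_qemu); // ~2.2x
    }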


>teleoperation aside ... feel more human

umm, so ignoring that it was operated by a human, it acted surprisingly human-like? :)


It might have been human operated, but it also might have just been copying its training data.

A robot that properly supports being teleoperated wouldn't immediately fall over the moment someone deactivates a headset. Falling over is almost the worst thing a robot can do; you would trash a lot of prototypes and expensive lab equipment that way if they fell over every time an operator needed the toilet or had to speak to someone. If you had such a bug, it would be the very first thing you would fix. And it's not like making robots stay still while standing is a hard problem these days; there's no reason removing a headset should cause the robot to immediately deactivate.

You'd also have to hypothesize about why the supposed Tesla teleoperator would take the headset off with people in front of them during a public demonstration, despite knowing that this would cause the robot to die on camera and get them immediately fired.

I think it's just as plausible that the underlying VLA model is trained using teleoperation data generated by headset wearers, and just like LLMs it has some notion of a "stop token" intended for cases where it completed its mission. We've all seen LLMs try a few times to solve a problem, give up and declare victory even though it obviously didn't succeed. Presumably they learned that behavior from humans somewhere along the line. If VLA models have a similar issue then we would expect to see cases where it gets frustrated or mistakes failure for success, copies the "I am done with my mission" motion it saw from its trainers and then issues a stop token, meaning it stops sending signals to the motors and as a consequence immediately falls over.

This would be expected for Tesla given that they've always been all-in on purely neural end-to-end operation. It would be most un-Tesla-like for there to be lots of hand crafted logic in these things. And as VLA models are pretty new, and partly based on LLM backbones, we would expect robotic VLA models to have the same flaws as LLMs do.
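
A toy sketch of what that failure mode would look like structurally (purely illustrative, and obviously nothing to do with Tesla's actual stack):

    enum Action {
        MotorTargets(Vec<f64>), // joint position targets
        Stop,                   // the learned "I'm done" token
    }

    // Stand-in for a VLA policy step; a real one would consume camera frames,
    // proprioception, etc.
    fn policy_step(step: u32) -> Action {
        if step < 100 {
            Action::MotorTargets(vec![0.0; 28])
        } else {
            // The model decides, rightly or wrongly, that the task is finished.
            Action::Stop
        }
    }

    fn main() {
        let mut step = 0;
        loop {
            match policy_step(step) {
                Action::MotorTargets(targets) => {
                    // send_to_motors(&targets); // placeholder for a real actuator interface
                    let _ = targets;
                }
                Action::Stop => {
                    // With no balance controller underneath, the moment commands
                    // stop the robot goes limp and falls over.
                    break;
                }
            }
            step += 1;
        }
    }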


Well, the human operator was just taking off a VR headset (and presumably forgot to deactivate the robot first). It just so happened to also look like the robot was fed up with life.

Any RAM fabs in northern Japan?

Yes, and pay them in crypto using wallets confirmed to be under FSB control, and manage them using FSB agents. All you have to do is take over the FSB, it's really simple!

In other news, the pro-Russian opposition and PiS (the former anti-EU ruling party) refused to overrule the president's veto on crypto, despite evidence of all the captured saboteurs being paid this way. https://www.reuters.com/business/polish-parliament-upholds-c...

meanwhile https://cryptonews.com/news/russian-spy-ring-funded-through-...

