I was shocked when IBM acquired Red Hat a few years ago. I had silently assumed that by then Red Hat was far bigger than IBM, so the reverse would have made more sense to me.
honestly I think it's a net positive (for me at least) because it ensures Fedora has great POWER support (I'll never be able to afford a POWER machine at this rate, but the architecture is an absolute pleasure to work with whenever I have to)
Strange that SpaceX doesn’t seem to be suffering from that limitation. Could it be that the real problem is pork barrel spending and government wastefulness?
Why would they go to the moon? They’re far too busy doing things that actually matter, such as slashing launch costs by 80% or more, while achieving the highest reliability of any launch system ever.
What are you talking about? SLS is on the way to the Moon now. Starship is still in development. SpaceX only exists because of massive NASA subsidy. Any success from SpaceX is thanks to NASA.
NASA gave SpaceX some startup money as a bet that they could kick-start commercial spaceflight, and the bet paid off to the tune of millions of dollars saved. There were never massive subsidies, and there aren't any subsidies at all today.
This is a lie. SpaceX has received at least 3.5 billion dollars from NASA for contracts. You can claim these aren’t subsidies but they are direct funding that allowed SpaceX to build up revenue streams like Starlink using the launch vehicles paid for by NASA. It’s the exact same funding model that Boeing takes advantage of. SpaceX would not exist without NASA. They’re collaborators, not competitors.
Almost all of what makes spaceflight “cool” today is inherited excitement and nostalgia, most of it unearned by the current generation of space endeavors.
Apollo was a humanity-defining undertaking. Repeating the same thing 60 years later with outdated technology at outrageous cost, as pork barrel spending, while far superior launch systems have been available for a decade, is about as far away from being “cool” as I can imagine.
The average ESA environmental observation satellite is a lot cooler (and a lot more important) than this launch.
> A 500k line codebase for an agent CLI proves one thing: making a probabilistic LLM behave deterministically is a massive state-management nightmare.
Considering what the entire system ends up being capable of, 500k lines is about 0.001% of what I would have expected something like that to require 10 years ago.
You can combine that with all the training and inference code, and at the end of the day, a system that literally writes code ends up being smaller than the LibreOffice codebase.
> You can combine that with all the training and inference code, and at the end of the day, a system that literally writes code ends up being smaller than the LibreOffice codebase.
You really need to compare it to the model weights though. That’s the “code”.
... what are you even talking about? "The system that literally writes code" has a few hundred trillion parameters. How is that smaller than LibreOffice?
> Across these variations, the overall result stays quite consistent: under certain conditions, ordinary people can be led to do harmful things.
The pop culture version of what happened in those experiments is “regular people will administer potentially lethal shocks when told to”, and that claim has been refuted experimentally many times over.
Contrary to most reports, the original experimenters never told participants that the shocks were supposedly lethal or even dangerous. When participants in a later recreation were actually told that there was a health risk, and that they should ignore it, the vast majority refused to administer the shocks.[1]
In other words, the Milgram experiment, as commonly understood, is somewhere between sensationalism and an outright lie.
Many cloud products now continuously send themselves the input you type while you are typing it, to squeeze the maximum possible amount of data from your interactions.
I don’t know whether ChatGPT is one of those products, but if it is, that behavior might be a side effect of blocking the input pipeline until verification completes. It might be that they want to get every single one of your keystrokes, but only after checking that you’re not a bot.
It's still possible to let users start typing right away; just hold the characters in memory and delay sending them until the checks are complete.
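A minimal sketch of that buffering idea (all names here are hypothetical, and a real client would do this in browser-side JavaScript rather than Python):

```python
class BufferedInput:
    """Hold keystrokes in memory until a bot check passes, then flush them."""

    def __init__(self, send):
        self.send = send        # callback that actually transmits text
        self.buffer = []        # keystrokes typed before verification
        self.verified = False

    def on_key(self, ch):
        if self.verified:
            self.send(ch)           # past the check: stream immediately
        else:
            self.buffer.append(ch)  # not yet: keep locally, user keeps typing

    def on_verified(self):
        self.verified = True
        if self.buffer:
            self.send("".join(self.buffer))  # flush the backlog once
            self.buffer.clear()
```

The user never notices the check: input is accepted from the first keystroke, and the server only sees anything once verification has completed.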
This was actually one of the reasons why Instagram felt smooth.
Another thing: Facebook/Instagram have also detected when a person uploads an image and then deletes it, inferred from that that they are insecure, and in the case of TEENAGE girls, actually recorded that in their profile (that they are insecure) and shown them beauty products...
I really like telling this example because people in real life, and even online, get so shocked. I mean, they know Facebook is bad, but they don't know it's this bad.
[Also a bit offtopic, but I really like how in item?id=3913919 the 391 comes twice :-) , it's a good item id]
I just checked the network inspector; the only thing it does per keypress is generate an autocomplete list. It doesn't seem too hard to delay the autocomplete generation until after whichever checks you run have passed.
I wondered if ChatGPT streams my message to the GPU while I type it, because the response comes weirdly fast after I submit the message. But I don't know much about how this stuff works.
Absolutely. I went to great lengths to install Asahi on my work M1, only to have most things not work (RTFM). So when one is forced to use macOS, may it round corners in hell, for work…
I think it’s even crazier that a visible slice of the address space that is supposed to last for the rest of humanity’s future has already been allocated.
It's 1/8 of that space and it's being allocated in big blocks that are expected not to run out unless humanity expands to the whole solar system. If it does run out, there are 7 tries left. More if you only use half as much space next time.
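Assuming the thread is about IPv6, where the "visible slice" would be the 2000::/3 global unicast block, the fractions are easy to check:

```python
TOTAL_BITS = 128                          # IPv6 addresses are 128 bits
total = 2 ** TOTAL_BITS                   # the entire address space
global_unicast = 2 ** (TOTAL_BITS - 3)    # a /3 prefix such as 2000::/3

fraction = global_unicast / total
print(fraction)                           # 0.125, i.e. 1/8 of the space

# The remaining seven /3 blocks are the "7 tries left".
remaining_blocks = total // global_unicast - 1
print(remaining_blocks)                   # 7
```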