Hacker News | latexr's comments

Believe it or not, humans can be quite imaginative and creative, and have been coming up with these types of hacks since well before current LLMs.

> I only hope the models get good enough to not be so samey in the future.

Why would you hope to be more easily fooled?


Not GP but I'm personally hoping that if I'm inevitably doomed to be exposed to this horseshit every day that it becomes tolerable to read. For world-shaking language-based superintelligences, they can't write to save their very expensive lives.

> Well the main thing he is known for is "it's all gonna crash" and that's a fact that this page admits he's wrong about.

Have there been specific claims about when it’s going to crash? I find it hard to believe he claimed it was all going to crash by early 2026. Maybe I’m wrong; I haven’t read all of his posts. But neither has the author: they admit in the repo this is all LLM output, and nothing was verified by humans.


> All verdicts are LLM-scored, not human-verified.

In other words, could be all slop. Or maybe it’s not. Maybe it’s mixed. No one knows.


Fair critique. The methodology doc covers this: both pipelines agree on the high-confidence clusters (security vulnerabilities, bubble predictions) even though they disagree on edge cases. The repo is public specifically so people can spot-check. If you find a claim where the scoring is wrong, I'd genuinely like to know.

> If you find a claim where the scoring is wrong, I'd genuinely like to know.

So you’re asking me to do the work you should have done in the first place? If you didn’t put any effort into it, why should I waste my time checking your non-work and correcting it to your credit?

If you had actually put in the effort then sure, I’d be amenable to helping make this the best it can be. But you didn’t, so what’s the point? Why should anyone spend their time fixing other people’s slop?


I am curious whether claims are scored more accurately by LLMs when reviewed and edited by LLMs prior to posting the claim.

> Apple Silicon systems also require Rosetta2

Then this software is on a clock. Apple has announced the end of Rosetta.

https://9to5mac.com/2026/02/16/macos-26-4-will-notify-users-...


Safari’s extension model could be really good by now, had they not stopped putting effort into it. You are able to define which extensions have access to which websites, and if that applies always or only in non-Private¹ mode. You can also easily allow an extension access for one day on one website.

But there are a couple of things I find subpar:

You can’t import/export a list of website permissions. For a couple of extensions I’d like to say “you have access to every website, except this narrow list” and be able to edit that list and share it between extensions.

On iOS, the only way to explicitly deny website access in an extension’s permissions is to first allow it, then change the configuration to deny. This is bonkers. As per the example above, the only way to give an extension access to everything except a narrow list of websites is to first give it access to all of them.

Finally, these permissions do not sync between macOS and iOS, which increases the maintenance burden.

¹ Private being the equivalent to incognito.
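For context on where those per-website permissions come from: a Safari Web Extension declares the sites it wants in its manifest, and Safari then surfaces that request as the per-website allow/deny controls described above, with the user’s choices overriding whatever the manifest asks for. A minimal hypothetical manifest (the name and permission set are illustrative, not from any real extension):

```json
{
  "manifest_version": 3,
  "name": "Example Extension",
  "version": "1.0",
  "permissions": ["storage"],
  "host_permissions": ["<all_urls>"]
}
```

Even with `<all_urls>` requested here, Safari still prompts the user per website and offers the one-day or always grants mentioned above; there is just no manifest-level or user-level way to express "everywhere except this list".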


Contents of the blog are themselves written by LLM.

https://github.com/coollabsio/llmhorrors.com/blob/main/CLAUD...

The whole website seems to be focused on promoting the author and their projects more than sharing the information. Just link to the original.

https://www.reddit.com/r/googlecloud/comments/1reqtvi/82000_...

Posted to HN twice recently.

https://news.ycombinator.com/item?id=47231708

https://news.ycombinator.com/item?id=47184182


What do you expect from a website named llmhorrors.com?

I would expect it to not be written by an LLM. Molly White didn’t run Web3 is Going Great on the blockchain.

https://www.web3isgoinggreat.com/


And looking at her main website https://www.citationneeded.news/ there is a tip jar but it doesn't accept crypto. I was expecting her to take at least the major coins like Ada, Eth and BTC, but she's consistent with her views.

False equivalence, Tesla also does not run their website from a Model S.

The joke is that LLM Horrors is anti-LLM, and Web3 is Going Just Great is anti-Web3. The equivalent for Tesla would be them putting an ICE inside their Model 2 if they didn’t believe in EVs.

Another plea for @dang to integrate pangram into all story and comment submissions

Some BSD server somewhere which was last rebooted in 1994. No one is really sure where it’s physically located, but it keeps everything running.

And it still pings, of course


> He's is careful not to opine on Musk's other dealings, which is fair. As someone who wants to know more about SpaceX, I don't want to read yet more about Tesla, or Twitter, or Trump, or Epstein.

But all of those matter, and are not isolated. When the leader of an organisation is distracted by the other organisations they control, it matters. It also matters when they are repeatedly wrong about their predictions, even if on another organisation, because it helps you calibrate expectations.

https://en.wikipedia.org/wiki/List_of_predictions_for_autono...


> But all of those matter, and are not isolated.

And have been done to death elsewhere.

Meanwhile, Berger produces balanced, informed, interesting, and informative coverage of space tech (in general, not just SpaceX).


Fair. I misunderstood your original post as “not wanting to read/care at all about”, but you did say “yet more”. Thank you for clarifying.

> that old fallacy that you think the news is correct about everything, until they report about something you know.

Gell-Mann Amnesia.

https://en.wikipedia.org/wiki/Michael_Crichton#Gell-Mann_amn...

The description is slightly backwards. The problem is you continue to trust the news after seeing how wrong they are about something on which you’re an expert.

