Hacker News

Unfortunately, to a majority of the population approximately 100% of LLM output seems entirely legitimate.


I agree wholeheartedly, and this is the core problem - many of the people evangelizing LLMs for a particular task (especially investors and AI gold rush "entrepreneurs") do not have enough expertise in that particular field to effectively evaluate the quality of the output. It sure looks the part though, and for those with a shallow understanding, it is often enough.


That, combined with the confidence with which all of its output is communicated back to the user.


I’ve been trying ChatGPT for transit directions on Shanghai’s metro and it has been absolutely terrible. Hallucinating connections and routes.

But all of its responses definitely seem convincing (as it has been trained to do).


Except for things they happen to know something about.


Unfortunately, too few people are making the obvious leap from "LLMs aren't great for topics I have expertise in" to "maybe that means LLMs aren't actually great for the other topics either."


We as humans aren't good at that. Even before AI, this tendency had a name: the "Gell-Mann amnesia" effect.


And a sizable portion of the population believes vaccines don't work and/or contain 5G!

I feel like I'm watching a tsunami about to hit while literally already drowning from a different tsunami.



