
When improper use of a statistical model generates bogus inferences in generative AI, we call the result a "hallucination"...




It should have been called confabulation; hallucination is not the correct analog. Tech bros simply used the first word they thought of, and it unfortunately stuck.

"Undesirable output" might be more accurate, since there is no difference in the process that creates a useful output versus a "hallucination" other than the utility of the resulting data.
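To make that concrete: here is a minimal sketch (plain NumPy, with hypothetical logits standing in for a real model's output) of the decoding step most LLMs repeat for every token. Nothing in it distinguishes a factual continuation from a "hallucinated" one; the only thing that varies is the probabilities the model happens to assign.

    import numpy as np

    def sample_next_token(logits, temperature=1.0, rng=None):
        # The same step runs for every token, "true" or "hallucinated":
        # scale the logits, softmax into probabilities, sample one index.
        rng = rng or np.random.default_rng()
        scaled = np.asarray(logits, dtype=float) / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    # Hypothetical logits over a tiny vocabulary; the sampler has no
    # notion of which choice would be "correct".
    print(sample_next_token([2.1, 0.3, -1.0, 1.7]))

The point is that "hallucination" is not a separate failure mode inside the sampler; it is a judgment we apply to the output afterward.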

I had a partially formed insight along these lines: LLMs exist in a latent space of information that has very little external grounding. A sort of dreamspace. I wonder if embodying them in robots would anchor them to some kind of ground-truth source?



