Hacker News

Even though I understand LLMs' penchant for hallucination, I tend to trust them more than I should when I am busy and/or dealing with "non-critical" information.

I'm wondering if ChatGPT (and similar products) will mimic social media as a vector of misinformation and confirmation bias.

LLMs are very clearly not that today, but social media didn't start out anything like the cesspool it has become.

There are a great many ways that being the trusted source of customized, personalized truth can be monetized, but I think very few would be good for society.
>I'm wondering if ChatGPT (and similar products) will mimic social media as a vector of misinformation

Russia is already performing data poisoning attacks on LLMs: https://www.newsguardrealitycheck.com/p/a-well-funded-moscow...