Even though I understand LLMs' penchant for hallucination, I tend to trust them more than I should when I'm busy and/or dealing with "non-critical" information.
I'm wondering if ChatGPT (and similar products) will mimic social media as a vector of misinformation and confirmation bias.
LLMs are very clearly not that today, but social media didn't start out anything like the cesspool it has become.
There are a great many ways that being the trusted source of customized, personalized truth can be monetized, but I think very few would be good for society.