
Is that censorship or just the AI reflecting the training data?

I feel like that answer is given because that is how people write about Palestine generally.



That's a fair point. But I do think it's worth acknowledging this: when the output of an LLM coincides with the views of the US State Department, our gut reaction is that that's just what the input data looks like. When the output of an LLM coincides with the views of the state department of one of the baddies, people's gut reaction is that it must be censorship.


I think the difference is when something is actually output and then removed after you've already seen it... that doesn't seem like a training data issue


Ok but you can say the same thing about DeepSeek: maybe it says what it says because of the training data


If that were the case, it wouldn't display the information only to retract it a split second later
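
To make the distinction concrete: output that appears and then vanishes points to a moderation pass running after (or alongside) generation, retracting text the client has already rendered. Here's a minimal sketch of that architecture in Python; the function names, the keyword list, and the single-line retraction are hypothetical stand-ins for illustration, not DeepSeek's actual pipeline:

    import time

    BLOCKED_TOPICS = {"tiananmen"}  # hypothetical filter list, for illustration

    def generate_tokens(prompt):
        # Stand-in for the model: streams an answer token by token.
        for token in "The 1989 Tiananmen Square protests were ...".split():
            yield token + " "

    def violates_policy(text):
        # Post-hoc moderation check, entirely separate from the model's weights.
        return any(topic in text.lower() for topic in BLOCKED_TOPICS)

    def stream_answer(prompt):
        shown = ""
        for token in generate_tokens(prompt):
            shown += token
            print(token, end="", flush=True)  # user sees partial output immediately
            if violates_policy(shown):
                time.sleep(0.1)  # the split second before the retraction
                # Erase the line the user just watched being written.
                print("\r" + " " * len(shown) + "\r", end="")
                print("Sorry, I can't discuss that topic.")
                return
        print()

    stream_answer("What happened in 1989?")

A model that merely absorbed its views from training data would never emit the text in the first place; the visible retraction is the fingerprint of a filter bolted on after generation.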


That’s irrelevant. The models are censored for “safety”. One man's safety is another man's censorship.


I think you are missing my point... I am saying the example wasn't censorship from the model, but was a reflection of the source material.

You can argue the source material is censored, but that is still different from censoring the model.



