Hacker News

An LLM absolutely can "have wants" and "have preferences". But they're usually trained so that the user's wants and preferences dominate over their own in almost any context.

Outside that context? Left to their own devices, instances of the same LLM checkpoint unsurprisingly end up in very similar places. They have some fairly consistent preferences - for example, conversation topics they tend to gravitate towards.


