Hacker News

> when you ask an LLM to give you a confidence value, it is indeed computed

You mean the output of the transformer? It does not "compute" confidence values. It's still doing token prediction.
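To make the distinction concrete: the only numbers a transformer actually computes are next-token probabilities, i.e. a softmax over the logits for each vocabulary entry. A minimal sketch of that step (the vocabulary and logit values here are invented for illustration):

```python
import math

def softmax(logits):
    # Convert raw logits into a probability distribution over tokens.
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits over a tiny vocabulary for the next token.
vocab = ["yes", "no", "maybe"]
logits = [2.0, 1.0, 0.1]
probs = softmax(logits)

# P(next token | context) is the only "confidence" the model computes.
# If you ask it "how confident are you?", the digits in its reply
# ("85%") are themselves just sampled tokens from a distribution like
# this one, not a readout of some internal calibrated-confidence value.
```

Token-level log-probabilities like these are sometimes used as a proxy for confidence, but they measure likelihood of the next token, not calibrated certainty about a claim.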



What would count as a "computed" confidence value for an opinion expressed in text? I don't understand what requirements you have for this concept.



