
This assumes our models perfectly model the world, which I don't think is true. I mean, we straight up know it's not true - we tell models what they can and can't say.


“we tell models what they can and can't say.”

Thus introducing our worldly biases.


I guess it's a matter of semantics, but I reject the notion it's even possible to accurately model the world. A model is a distillation, and if it's not, then it's not a model, it's the actual thing.

There will always be some lossiness, and in it, bias. In my opinion.



