
Beyond just checking for mistakes, it would be interesting to see whether Cyc has concepts that LLMs lack, or vice versa. Could we determine this by examining the models' internals?

