
> gpt-5.2 did ~2x better than gpt-5.2-codex.. why?

Optimising a model for a particular task via fine-tuning (a.k.a. post-training) can degrade its performance on other tasks, an effect sometimes called catastrophic forgetting. People want Codex to "generate code", "drive agents", and so on, so OpenAI fine-tuned for exactly that.
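A toy sketch of that trade-off (hypothetical, nothing to do with the actual models): pre-train a one-parameter linear model on task A, fine-tune it on task B, and watch task-A error climb.

```python
def loss(w, data):
    # Mean squared error of the model y = w * x over (x, y) pairs.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def train(w, data, lr=0.05, steps=200):
    # Plain gradient descent on the MSE.
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

task_a = [(x, 2.0 * x) for x in (-2.0, -1.0, 1.0, 2.0)]   # "general" task: y = 2x
task_b = [(x, -1.0 * x) for x in (-2.0, -1.0, 1.0, 2.0)]  # "specialist" task: y = -x

w = train(0.0, task_a)            # pre-train on task A; w converges to ~2
loss_a_before = loss(w, task_a)   # near zero

w = train(w, task_b)              # fine-tune on task B; w drifts toward -1
loss_a_after = loss(w, task_a)    # task-A error grows: the "forgetting"

print(loss_a_before, loss_a_after)
```

With a single parameter the two objectives pull in opposite directions, so the fine-tuned weight can't serve both; large models have far more capacity, but narrow post-training still shifts them away from the pre-trained optimum.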
