Hacker News

Do you have a sense of whether these validation loss improvements are leading to generalized performance uplifts? From afar I can't tell whether these are broadly useful new ideas or just industrialized overfitting on a particular (model, dataset, hardware) tuple.


Why set the bar higher on generalization for autoresearch than for the research humans generally do?


industrialized overfitting is basically what ML researchers do


