
The state space of chess is so huge that even a giant training set would be a _very_ sparse sample of the Stockfish-computed value function.

So the network still needs to do some impressive generalization in order to "interpolate" between those samples.

I think so, anyway (didn't read the paper, but I worked on AlphaZero-like algorithms for a few years).
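
As a rough back-of-envelope sketch (my numbers, not the paper's): even a hypothetical "giant" set of Stockfish-labeled positions covers a vanishingly small fraction of the roughly 10^44 legal chess positions, so the network cannot get by on memorization.

    # Back-of-envelope sketch; both numbers are assumptions, not from the paper.
    legal_positions = 10**44    # rough estimate of legal chess positions (Tromp's bound is ~4.8e44)
    training_samples = 10**10   # hypothetical "giant" Stockfish-labeled training set

    coverage = training_samples / legal_positions
    print(f"fraction of state space seen in training: ~{coverage:.0e}")  # ~1e-34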


