
Seems much more likely that the cost will drop by 99%. With open-source models and architectural innovations, something like Claude will run on a local machine for free.


How much RAM and SSD will future local inference need to be competitive with today's cloud inference?
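
As a rough back-of-envelope (a sketch, not Claude's actual footprint — its parameter count isn't public — I'm assuming a hypothetical 70B dense model with Llama-70B-like dimensions):

    # Back-of-envelope RAM estimate for local inference: weights at a
    # given quantization level plus a KV cache. All model dimensions
    # below are assumptions for illustration, not any real model's specs.

    def weights_gib(params_billions: float, bits_per_weight: float) -> float:
        """GiB needed to hold the weights at a given quantization level."""
        return params_billions * 1e9 * bits_per_weight / 8 / 2**30

    def kv_cache_gib(layers: int, kv_heads: int, head_dim: int,
                     context_len: int, bytes_per_elem: int = 2) -> float:
        """GiB for the key/value cache: two tensors (K and V) per layer."""
        return (2 * layers * kv_heads * head_dim * context_len
                * bytes_per_elem) / 2**30

    if __name__ == "__main__":
        # Hypothetical 70B dense model.
        for bits in (16, 8, 4):
            print(f"{bits:>2}-bit weights: {weights_gib(70, bits):6.1f} GiB")
        # Assumed dims: 80 layers, 8 KV heads (GQA), head_dim 128,
        # 32k context, fp16 cache elements.
        print(f"KV cache @ 32k ctx: {kv_cache_gib(80, 8, 128, 32_768):.1f} GiB")

Under those assumptions: ~130 GiB of weights at 16-bit, ~33 GiB at 4-bit, plus ~10 GiB of KV cache at a 32k context. So a 4-bit quant of a model that size fits in roughly 48-64 GiB of RAM, and SSD only needs to hold the weight file itself.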



