
The expectation is that we won't be prompting these models the way we do now down the road. As in: prompting is the command line of LLMs, and at some point we'll get the equivalent of a GUI (either because the LLM is so good we can be clueless about how to prompt, or because it's so good at eliciting your requirements, or because there is no prompt at all and you interface with the LLM completely differently).

You could foresee that under the covers there is always going to be some prompting, but it will be done rarely, and only by a few people?



I agree that there will be new and interesting abstractions for prompting, but I have this feeling that the promise of LLMs for the foreseeable future is to apply them to new and unique business cases. I think this will always require interacting with them at a lower level to some extent.
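To make the "prompting still happens under the covers" idea concrete, here's a minimal Python sketch. The names (TaskSpec, compile_prompt) are hypothetical and not from any real library: a GUI-like layer gathers structured requirements, and a thin lower-level function still compiles them into the plain text prompt the model actually sees.

    # Minimal sketch: a higher-level interface collects structured inputs,
    # but a plain text prompt is still assembled underneath.
    from dataclasses import dataclass, field


    @dataclass
    class TaskSpec:
        """Structured requirements a GUI-like layer might gather instead of free-form prompting."""
        goal: str
        audience: str = "general"
        constraints: list[str] = field(default_factory=list)


    def compile_prompt(spec: TaskSpec) -> str:
        """Turn the structured spec into the raw prompt sent to the model."""
        lines = [f"Task: {spec.goal}", f"Audience: {spec.audience}"]
        lines += [f"Constraint: {c}" for c in spec.constraints]
        return "\n".join(lines)


    if __name__ == "__main__":
        spec = TaskSpec(
            goal="Summarize this quarter's support tickets",
            constraints=["Keep it under 200 words", "Highlight recurring issues"],
        )
        print(compile_prompt(spec))

Someone still has to own compile_prompt and tune the wording inside it for each new business case; that's the lower-level prompting that survives even if end users never see a prompt box.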




