
> To an "just autocomplete" person, the authors are straightforwardly sharing summaries of some sci-fi fan fiction that they actively collaborated with their models to write.

Also on the cynical side here, and agreed: The real-world LLM is a machine that dreams new words to append to a document, and the people using it have chosen to guide it into building upon a chat/movie-script document, seeded with a fictional character described as an LLM or AI assistant. The (real) LLM periodically appends whatever text "best fits" the story so far.

The problem arises when humans point to text describing the fictional conduct of a fictional character and assume it is somehow indicative of the text-generating program of the same name, as if that program were already intelligent and doing a deliberate author self-insert.

Imagine that we adjust the scenario slightly, from "You are a Large Language Model helping people answer questions" to "You are Santa Claus giving gifts to all the children of the world." When someone observes the text "Ho ho ho, welcome little ones", that does not mean Santa is real, and it does not mean the software is kind to human children. It just tells us the generator is producing Santa-stories that match the Santa-stories it was trained on.
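
To make the "the persona is just seed text" point concrete, here is a minimal sketch (assuming the OpenAI Python SDK, an API key in the environment, and a placeholder model name; nothing about the point is vendor-specific). The entire "Santa" character is one system line in the document, and the model's only job is to append a plausible continuation:

    # Minimal sketch: the persona exists only as seed text in the document.
    # Assumes the OpenAI Python SDK (>= 1.0); the model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            # The "character" is nothing but this line of seed text.
            {"role": "system",
             "content": "You are Santa Claus giving gifts to all the children of the world."},
            {"role": "user", "content": "Hello, who are you?"},
        ],
    )

    # The generator appends whatever continuation best fits a Santa-story,
    # e.g. "Ho ho ho, welcome little ones" -- which tells us nothing about
    # Santa being real, or about the program being kind to children.
    print(resp.choices[0].message.content)

Swap the system line for "You are a Large Language Model helping people answer questions" and the same machinery produces assistant-stories instead of Santa-stories.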

> It just reads as an increasingly convoluted setup, driven by the continued time, money, and attention that keep being poured into the research effort.

To recycle a comment from some months back:

There is a lot of hype and investor money running on the idea that if you make a really good text prediction engine, it will usefully impersonate a reasoning AI.

Even worse is the hype that an extraordinarily good text prediction engine will usefully impersonate a reasoning AI that will usefully impersonate a specialized AI.


