It is disheartening to see how many people are trying to tell you you're wrong when this is literally what it does. It's a very powerful and useful feature, but the overselling of AI has led to people who just want this to be so much more than it actually is.
It sees goat, lion, cabbage, and looks for something that said goat/lion/cabbage. It does not have a concept of "leave alone" and it's not assigning entities with parameters to each item. It does care about things like sentence structure and whatnot, so it's more complex than a basic lookup, but the amount of borderline worship this is getting is disturbing.
A transformer is a universal approximator and there is no reason to believe it's not doing actual calculation. GPT-3.5+ can't do math that well, but it's not "just generating text", because its math errors aren't just regurgitating existing problems found in its training text.
It also isn't generating "the most likely response" - that's what original GPT-3 did, GPT-3.5 and up don't work that way. (They generate "the most likely response" /according to themselves/, but that's a tautology.)
The "most likely response" to text you wrote is: more text you wrote. Anytime the model provides an output you yourself wouldn't write, it isn't "the most likely response".
I believe that ChatGPT works by inserting some ANSWER_TOKEN. That is, a prompt like "Tell me about cats" would probably produce "Tell me about cats because I like them a lot", but the interface wraps your prompt like "QUESTION_TOKEN: Tell me about cats ANSWER_TOKEN:"
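Something like this, with made-up token names since I don't know the actual format:

```python
# Hypothetical wrapper; the real special tokens and template are placeholders here.
QUESTION_TOKEN = "<|question|>"
ANSWER_TOKEN = "<|answer|>"

def wrap_prompt(user_text: str) -> str:
    # The model then continues the text after ANSWER_TOKEN instead of
    # continuing the user's own sentence.
    return f"{QUESTION_TOKEN}{user_text}\n{ANSWER_TOKEN}"

print(wrap_prompt("Tell me about cats"))
# <|question|>Tell me about cats
# <|answer|>
```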
text-davinci-003 has no trouble working as a chat bot: https://i.imgur.com/lCUcdm9.png (note that the poem lines it gave me should've been green, I don't know why they lost their highlight color)
Yeah, that's an interesting question I didn't consider actually. Why doesn't it just keep going? Why doesn't it generate an 'INPUT:' line?
It's certainly not that those tokens are hard-coded. I tried a completely different format with no prior instruction, and it works: https://i.imgur.com/ZIDb4vM.png (again, highlighting is broken. The LLM generated all the text after 'Alice:' for all lines except for the first one.)
Then I guess that it is learned behavior. It recognizes the shape of a conversation and it knows where it is supposed to stop.
It would be interesting to stretch this model, like asking it to continue a conversation between 4-5 people where the speaking order is not regular, the user plays 2 of the people, and the model plays the other 3.
That's just a supervised fine-tuning method to skew outputs favorably. I'm working with it on biologics modeling using laboratory feedback, actually. The underlying inference structure is not changed.
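For anyone unfamiliar, here is a toy sketch of a supervised fine-tuning step, using a stand-in two-layer model so it actually runs; the real model is a transformer and the data are human-preferred responses, but the loop has the same shape. Only the weights move; the forward pass used at inference is untouched.

```python
import torch
from torch import nn

# Toy stand-in: embedding + linear next-token head. The real model is a large
# transformer, but the supervised fine-tuning loop has the same shape.
vocab_size = 100
model = nn.Sequential(nn.Embedding(vocab_size, 32), nn.Linear(32, vocab_size))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

prompt = torch.tensor([5, 7, 9])        # made-up token ids for the prompt
preferred = torch.tensor([11, 13])      # made-up ids for the preferred response
tokens = torch.cat([prompt, preferred])

# Same next-token prediction used at inference time; the loss only covers the
# positions that should reproduce the preferred response, which is what skews
# future outputs toward responses like it.
logits = model(tokens[:-1])
targets = tokens[1:]
loss = nn.functional.cross_entropy(
    logits[len(prompt) - 1:], targets[len(prompt) - 1:]
)
optimizer.zero_grad()
loss.backward()
optimizer.step()   # weights change; the architecture and forward pass do not
```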
I wonder if that was why, when I asked v3.5 to generate a number with 255, it failed all the time, but v4 does it correctly. By the way, do not even try with Bing.
One area that is really interesting, though, is that it can interpret pictures, as in the example of a glove above a plank with something on the other end, where it correctly recognises the objects, interprets them as words, and then predicts an outcome.
This sort of fusion of different capabilities is likely to produce something that feels similar to AGI in certain circumstances. It is certainly a lot more capable than things that came before for mundane recognition tasks.
Now of course there are areas where it would perform very badly, but in unimportant domains, on trivial but large, predictable datasets, it could perform far better than humans would. To take one example, on identifying tumours or other patterns in images, this sort of AI would probably be a massively helpful assistant, allowing a radiologist to review an order of magnitude more cases if given the right training.
This is a good point, IMO. An LLM is clearly not an AGI, but along with other systems it might be capable of being part of an AGI. It's overhyped, for sure, but still incredibly useful, and we would be unwise to assume that it won't become a lot more capable yet.
Absolutely. It's still fascinating tech and very likely to have serious implications and huge use cases. It just drives me crazy to see tech breakthroughs being overhyped and over-marketed based on that hype (frankly, much like the whole "we'll be on Mars by year X" nonsense).
One of the biggest reasons these misunderstandings are so frustrating is that you can't have a reasonable discussion about the potential interesting applications of the tech. On some level, copywriting may devolve into auto-generating prompts for things like GPT with a few editors sanity-checking the output (depending on the level of quality required), and I agree that a second-opinion "check for tumors" use has a LOT of interesting applications (and several concerning ones, such as over-reliance on a model causing people who fall outside the bell curve to have even more trouble getting treatment).
All of this is a much more realistic real-world use case RIGHT NOW, but instead we've got people fantasizing about how close we are to AGI and ignoring shortcomings to shoehorn it into their preferred solution.
OpenAI ESPECIALLY reinforces this by being very selective with their results and the way they frame things. I became aware of this as a huge Dota fan of over a decade when they ran their bot matches there. And while it was very, very interesting and put up some impressive results, the framing of those results does NOT portray the reality.
Nearly everything that has been written on the subject is misleading in that way.
People don't write about GPT: they write about GPT personified.
The two magic words are, "exhibit behavior".
GPT exhibits the behavior of "humans writing language" by implicitly modeling the "already-written-by-humans language" of its training corpus, then using that model to respond to a prompt.
Right, anthropomorphization is the biggest source of confusion here. An LLM gives you a perfect answer to a complex question and you think wow, it really "understood" my question.
But no! It doesn't understand, it doesn't reason, these are concepts wholly absent from its fundamental design. It can do really cool things despite the fact that it's essentially just a text generator. But there's a ceiling to what can be accomplished with that approach.
It's presented as a feature when GPT provides a correct answer.
It's presented as a limitation when GPT provides an incorrect answer.
Both of these behaviors are literally the same. We are sorting them into the subjective categories of "right" and "wrong" after the fact.
GPT is fundamentally incapable of modeling that difference. A "right answer" is every bit as valid as a "wrong answer". The two are equivalent in what GPT is modeling.
Lies are a valid feature of language. They are shaped the same as truths.
The only way to resolve this problem is brute force: provide every unique construction of a question, and the corresponding correct answer to that construction.
Not entirely. It's modeling a completion in a given context. That language model "understands" that if one party stops speaking, the other party generally starts, etc. It also "understands" that if someone says something 'wrong' the other party often mentions it, which makes the first party respond thusly, and so forth.
If you ask it what the outcome of a lie is on the conversation it can generally answer. If you ask it for a sample conversation where someone is factually incorrect, or lying, and caught out, it can generate it.
If you give it a fact and ask it to lie about that fact, it will.
I'd agree it doesn't understand anything, but I think it does "understand" things. And yes, it's a language model so semantic distance and other textual details are all it has to go by.
> not by logical decision
Almost entirely yes, but you can have it textually model a logical analysis and then check that model itself. It's not "doing logic", but it almost never fails simple exercises either.
> The meaning of semantic distance usually leads to the correct path, but sometimes that pattern is ambiguous.
Of course. But "a little knowledge is a dangerous thing" as well. Often even real knowledge and analysis leads to the wrong place. In both cases (with a junior human or a LLM as an assistant) you can model their basic processes and stack the information in such a way that their simple model will lead them to the correct place.
It may not know what a lie is, in the sense of having felt the need to hide the truth to avoid personal punishment, but it certainly "knows" what one is and how it shapes the conversation for the purposes of writing a lie, writing a response to a lie, detecting potential lies, etc.
How do we define "understanding" though? Because you can add to your prompt: "Do it step by step, showing how you come up with the solution" and it will simplify the original complex problem into multiple smaller pieces. It's clearly doing something.
For example, I just asked bing: What does the following expression equal? Show your work, step by step: 4(13+2)7-1(4+3+9)+7^(2(2+1))
And bing responded with:
"To solve a math expression, you need to follow the order of operations: parentheses, exponents, multiplication/division, and addition/subtraction (PEMDAS).1 Here are the steps to solve your expression:
4(13+2)7-1(4+3+9)+7^(2(2+1))
Step 1: Simplify inside the parentheses first.
4(15)*7-1(16)+7^(6)
Step 2: Apply the exponents next.
420-16+117649
Step 3: Add or subtract from left to right.
118053
The final answer is 118053. I hope this helps.
"
I wrote the silly expression randomly. I'm not saying it's "thinking" or "understanding", but I wouldn't expect a text generator to be able to break the problem down like that.
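For what it's worth, the arithmetic is actually right; here's a quick check in plain Python with the implicit multiplications written out explicitly:

```python
# The expression with the implicit multiplications written out explicitly.
value = 4 * (13 + 2) * 7 - 1 * (4 + 3 + 9) + 7 ** (2 * (2 + 1))
print(value)  # 118053  (i.e. 420 - 16 + 117649)
```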
> To solve a math expression, you need to follow the order of operations: parentheses, exponents, multiplication/division, and addition/subtraction (PEMDAS).1 Here are the steps to solve your expression:
It isn't actually thinking about any of that statement. That's just boilerplate that goes at the beginning of this story. It's what Bing is familiar with seeing as a continuation of your prompt, "show your work, step by step".
It gets more complicated when it shows addition being correctly simplified, but that behavior is still present in the examples in its training corpus.
---
The thinking and understanding happened when the first person wrote the original story. It also happened when people provided examples of arithmetic expressions being simplified, though I suspect bing has some extra behavior inserted here.
All the thought and meaning people put into text gets organized into patterns. An LLM finds a prompt in the patterns it modeled, and "continues" the patterns. We find meaning correctly organized in the result. That's the whole story.
In first-year engineering we learned about the concept of behavioral equivalence: with a digital or analog system you could formally show that two things do the same thing even though their internals are different. If only the debates about ChatGPT had some of that considered nuance instead of anthropomorphizing it; even some linguists seem guilty of this.
No, because behavioral equivalence is used in systems engineering theory to mathematically prove that two control systems are equivalent. The mathematical proof is complete, e.g. over all internal state transitions and the cross product of the two machines.
With anthropomorphization there is none of that rigor, which lets people use sloppy arguments about what ChatGPT is doing and isn't doing.
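To make the contrast concrete, here is a toy sketch of my own (not from any textbook) of what behavioral equivalence means: two implementations with completely different internals that agree everywhere. A real equivalence argument would be a proof over the state cross-product, as above, not enumeration, but the idea is the same.

```python
def sum_iterative(n: int) -> int:
    # Internals: a loop accumulating a running total.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_closed_form(n: int) -> int:
    # Internals: a single arithmetic formula, no loop at all.
    return n * (n + 1) // 2

# Same behavior, different internals (checked here by enumeration over a
# finite domain; a formal argument would be a proof, not a test).
assert all(sum_iterative(n) == sum_closed_form(n) for n in range(10_000))
```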
The biggest problem I've seen when people try to explain it is in the other direction, not people describing something generic that can be interpreted as a Markov chain, they're actually describing a Markov chain without realizing it. Literally "it predicts word-by-word using the most likely next word".
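For contrast, an actual "most likely next word" Markov chain is just a lookup table over observed word pairs, with no notion of structure beyond the immediately preceding word. A toy sketch of mine, purely illustrative:

```python
from collections import Counter, defaultdict

corpus = "the goat eats the cabbage and the lion eats the goat".split()

# The entire "model" is this lookup table of observed word pairs.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(word: str) -> str:
    # "The most likely next word": the most frequent observed continuation.
    return bigrams[word].most_common(1)[0][0]

word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # quickly loops: "the goat eats the goat eats the"
```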
I don't know where this comes from, because this is literally wrong. It sounds like Chomsky dismissing current AI trends because of the mathematical beauty of formal grammars.
First of all, it's a black-box algorithm with pretty universal capabilities by our current SOTA standards. It might appear primitive in a few years, but right now the pure approximation and generalisation capabilities are astounding. So this:
> It sees goat, lion, cabbage, and looks for something that said goat/lion/cabbage
can not be stated as truth without evidence. Same here:
> it's not assigning entities with parameters to each item. It does care about things like sentence structure and what not
Where's your evidence?
The enormous parameter space coupled with our best-performing network structure so far gives it quite a bit of flexibility. It can memorise things but also derive rules and computation in order to generalise. We do not just memorise everything or look things up in the dataset. Of course it learned how to solve things and derive solutions, but the relevant data points for the puzzle could be {enormous set of logic problems}, from which it derived general rules that translate to each problem. Generalisation IS NOT trying to find the closest data point, but finding rules that explain as many data points as possible, maybe ones unseen in the test set. A fundamental difference.
I am not blindly hyping it, but if we humans can reason, then NNs potentially can too. Maybe not GPT-4. We do not know how humans do it, so an argument about intrinsic properties is worthless. It's all about capabilities. Reasoning is a functional description as long as you can't tell me exactly how we do it. Maybe Wittgenstein can help us: "Whereof one cannot speak, thereof one must be silent." As long as there's no tangible definition of reasoning, it's worthless to discuss it.
If we want to talk about fundamental limitations, we have to talk about things like ChatGPT-4 not being able to simulate, because its runtime is fundamentally limited by design. It cannot recurse. It can run only a fixed number of steps, always the same, until it has to return an answer. So if there is some kind of recursion learned through weights encoding programs interpreted by later layers, the recursion depth is limited.
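Schematically (this is my own illustration, not any real model's code), the point about a fixed number of steps is that the per-token forward pass is a fixed stack of layers; nothing in it loops until the problem is solved.

```python
import numpy as np

N_LAYERS = 4   # fixed at training time; made-up number for illustration
D_MODEL = 8

rng = np.random.default_rng(0)
# Stand-in "layers"; any fixed per-layer transformation makes the point.
weights = [rng.standard_normal((D_MODEL, D_MODEL)) for _ in range(N_LAYERS)]

def forward(x: np.ndarray) -> np.ndarray:
    # Always exactly N_LAYERS steps, however hard the question is. There is
    # no `while not solved:` anywhere; any learned "recursion" has to fit
    # inside this fixed depth.
    for w in weights:
        x = np.tanh(x @ w)
    return x

print(forward(rng.standard_normal(D_MODEL)).shape)  # (8,)
```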
Just months ago we saw in research out of Harvard that even a very simplistic GPT model builds internalized abstract world representations from the training data within its NN.
People parroting the position from you and the person before you are like doctors who learned about something in school but haven't kept up with emerging research that has since invalidated what they learned, so they go around spouting misinformation because it was thought to be true when they learned it but is now known to be false, and the correction just hasn't caught up to them yet.
So many armchair experts who took a ML course in undergrad pitching in their two cents having read none of the papers in the past year.
This is a field where research perspectives are shifting within months, not even years. So unless you are actively engaging with emerging papers, and given your comment I'm guessing you aren't, you may be on the wrong side of the Dunning-Kruger curve here.
That's a very strong claim. I believe you that there's a lot happening in this field, but it doesn't seem possible to even answer the question either way. We don't know what reasoning looks like under the hood. It's still a "know it when you see it" situation.
> GPT model builds internalized abstract world representations from the training data within its NN.
Do any of those words even have well-defined meanings in this context?
I'll try to figure out what paper you're referring to. But if I don't find it / for the benefit of others just passing by, could you explain what they mean by "internalized"?
> Just months ago we saw in research out of Harvard that even a very simplistic GPT model builds internalized abstract world representations from the training data within its NN.
I've seen this asserted without citation numerous times recently, but I am quite suspicious. Not suspicious that there exists a study that claims this, but that the claim is well supported.
There is no mechanism for directly assessing this, and I'd be suspicious that there is any good proxy for assessing it in AIs, either. Research on this type of cognition in animals tends to be contentious, and proxies for it should be easier to construct there than for AIs.
> the wrong side of the Dunning-Kruger curve
The relationship between confidence and perception in the D-K paper, as I recall, is a line, and it's roughly “on average, people of all competency levels see themselves slightly closer to the 70th percentile than they actually are.” So, I guess the “wrong side” is the side anywhere under the 70th percentile in the skill in question?
> I guess the “wrong side” is the side anywhere under the 70th percentile in the skill in question?
This is being far too generous to parent’s claim, IMO. Note how much “people of all competency levels see themselves slightly closer to the 70th percentile than they actually are” sounds like regression to the mean. And it has been compellingly argued that that’s all DK actually measured. [1] DK’s primary metric for self-assessment was to guess your own percentile of skill against a group containing others of unknown skill. This fully explains why their correlation between self-rank and actual rank is less than 1, and why the data is regressing to the mean, and yet they ignored that and went on to call their test subjects incompetent, despite having no absolute metrics for skill at all and testing only a handful of Ivy League students (who are primed to believe their skill is high).
Furthermore, it’s very important to know that replication attempts have shown a complete reversal of the so-called DK effect for tasks that actually require expertise. DK only measured very basic tasks, and one of the four tasks was subjective(!). When people have tried to measure the DK effect on things like medicine or law or engineering, they’ve shown that it doesn’t exist. Knowledge of NN research is closer to an expert task than a high school grammar quiz, and so not only does DK not apply to this thread, we have evidence that it’s not there.
The singular reason that DK even exists in the public consciousness may be that people love the idea that they can somehow see and measure incompetence in a debate based on how strongly an argument is worded. Unfortunately that isn't true, and one of the few things the DK paper did actually show is that people's estimates of their relative skill correlate with their actual relative skill, for the few specific skills they measured. Personally I think this paper's methodology has a confounding-factor hole the size of the Grand Canyon, that the authors and public both have dramatically and erroneously over-estimated its applicability to all humans and all skills, and that it's one of the most shining examples of sketchy social science research going viral, giving the public completely wrong ideas, and being used incorrectly more often than not.
Why are you taking the debate personally enough to be nasty to others?
> you may be on the wrong side of the Dunning-Kruger curve here.
Have you read the Dunning & Kruger paper? It demonstrates a positive correlation between confidence and competence. Citing DK in the form of a thinly veiled insult is misinformation of your own, demonstrating and perpetuating a common misunderstanding of the research. And this paper is more than 20 years old...
So I’ve just read the Harvard paper, and it’s good to see people exploring techniques for X-ray-ing the black box. Understanding better what inference does is an important next step. What the paper doesn’t explain is what’s different between a “world model” and a latent space. It doesn’t seem surprising or particularly interesting that a network trained on a game would have a latent space representation of the board. Vision networks already did this; their latent spaces have edge and shape detectors. And yet we already know these older networks weren’t “reasoning”. Not that much has fundamentally changed since then other than we’ve learned how to train larger networks reliably and we use more data.
Arguing that this “world model” is somehow special seems premature and rather overstated. The Othello research isn’t demonstrating an “abstract” representation, it’s the opposite of abstract. The network doesn’t understand the game rules, can’t reliably play full Othello games, and can’t describe a board to you in any other terms than what it was shown, it only has an internal model of a board, formed by being shown millions of boards.
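For anyone curious, "probing" a latent space usually means fitting a small classifier on hidden activations and checking whether some board property is linearly decodable from them. A rough, self-contained sketch with synthetic stand-in data (the real paper uses the Othello network's actual hidden states):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins: rows play the role of hidden activations (one per board
# state) and the label plays the role of a board property such as "square X is
# occupied". In the real work the activations come from the trained network.
hidden = rng.standard_normal((1000, 64))
label = (hidden @ rng.standard_normal(64) > 0).astype(int)

probe = LogisticRegression(max_iter=1000).fit(hidden[:800], label[:800])
print(probe.score(hidden[800:], label[800:]))  # high accuracy = linearly decodable
```

High probe accuracy says the information is present and linearly decodable from the latent space; whether that deserves the name "world model" is exactly the question.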