Hacker News
Are AI language models being used to flood us with hype?
4 points by vba616 on Feb 15, 2023 | hide | past | favorite | 4 comments
I find the gap between what everybody has been writing recently and my actual experience using ChatGPT inexplicable.

I asked it something like "give me suggestions for distinguishing situation X from not-X probabilistically with data from a smartphone's sensor array".

It gives me a bunch of plausible suggestions, but the bullet points look suspiciously like an embellished list of all of the commonly used sensors on a phone. (why shouldn't it...)

So I decide to zero in on one of them arbitrarily, and I ask it to characterize the probability distribution for sensor Y in the "X" and "not-X" cases (I'm not wording it like I do here, so don't jump to conclusions).

Then it says, contradicting the earlier response, that really, you can't expect any difference between X and not-X and it avoids giving anything quantitative. It smoothly describes a distribution centered around zero with a small variance, but that's the very opposite of helpful.

Ok, but I keep reading that it's very useful if you know how to prompt it, if you break things down into smaller steps, and so on.

So next I ask it to provide some of the competing sources for the two contradictory claims, that the output of sensor Y does or does not differ depending on X. If ChatGPT can't understand its sources, perhaps I can read them.

And it gives me a really nice looking result, with three citations and hyperlinks for each.

But thennnnn...I click on the links. First for the less probable claim that sensor Y can determine X. Two go nowhere (possibly related to having numbers in the URL) and the third goes to a completely unrelated study about...cats.

Ok, but I know what you want to say...you want to say it's still useful if you can see the other three links aren't garbage. Alas, it didn't work that way. None of them was verifiable, and the only one that wasn't verifiably worthless went to a very large and paywalled document, impossible to be sure about. I would bet the contents aren't even in the training data though.

Next I pointed out the invalid links supporting what I wanted to be true, that sensor Y is useful for my purpose. ChatGPT immediately groveled, as it does, saying it had reviewed all the information, and I was correct, and there was nothing on the internet supporting its original suggestion.

But that doesn't help! You can't prove a negative, and it obviously hasn't done any sort of search. I stated that, but it apparently was stuck in its local min/max, I suppose due to the prior conversation.

At this point it seems clear to me that these language models are simply ELIZA combined with a method for auto-generating language rules from data instead of a hand-written script.
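To make the ELIZA comparison concrete: ELIZA worked from a hand-written script of pattern/response rules. Here's a minimal sketch of that idea (the rules below are illustrative only, not Weizenbaum's actual DOCTOR script):

```python
import re

# A few hand-written pattern -> response rules, in the spirit of ELIZA.
# (Hypothetical rules for illustration; not the original script.)
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause\b", re.I), "Is that the real reason?"),
]
DEFAULT = "Please tell me more."

def eliza_reply(text: str) -> str:
    # Return the response template for the first matching rule,
    # echoing back part of the user's own input.
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(*m.groups())
    return DEFAULT
```

The point of the analogy: nothing here models the world; it only reshuffles the user's words. The claim above is that LLMs do the same thing, except the "rules" are induced from data rather than written by hand.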

But everything I read to the contrary is leading to a sense of derealization. Are the people who are writing what seems like fan-fiction trying to fool others? Or are they fooled? Or am I?

One thing seems clear to me - the tone or flavor of ChatGPT content, which seems both inhuman and annoying when you're trying to get anything useful out of it, does not seem new to me. A great deal of online journalism has resembled it for quite some time, in my opinion; take that for what you will.



All of this is because ChatGPT is a language model, not a knowledge model.

LLMs are designed to produce output that is similar (in linguistic terms) to what their training data contained in similar circumstances ... Yes, LLMs work on many indistinct levels; yes, they refer to the input you provide; yes, LLMs produce results that are easily mistaken for intelligent or useful; but no, the LLM doesn't understand anything - it's merely producing output similar to what it saw in training.
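The "output similar to training" point can be made concrete with a toy stand-in: a bigram model that just samples whichever word tended to follow the current word in its training text. (This is a drastic simplification - real LLMs are neural networks, not bigram tables - but it shows how fluent-looking text can be generated with no model of truth at all.)

```python
import random
from collections import Counter, defaultdict

def train_bigrams(corpus: str):
    """Count, for each word, which words followed it in the corpus."""
    words = corpus.split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def generate(follows, start: str, n: int, seed: int = 0) -> str:
    """Emit up to n more words by sampling a statistically likely
    next word each time. Locally plausible, globally clueless."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        counts = follows.get(out[-1])
        if not counts:
            break  # no observed continuation for this word
        choices, weights = zip(*counts.items())
        out.append(rng.choices(choices, weights=weights)[0])
    return " ".join(out)
```

Every word it emits is drawn from patterns in the training text, which is exactly why the output reads naturally while carrying no guarantee of being correct.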

https://www.newyorker.com/tech/annals-of-technology/chatgpt-... gives a really nice analogy.

> At this point it seems clear to me that these language models are simply ELIZA combined with a method for auto-generating language rules from data instead of a hand-written script.

Agreed - much more complex than ELIZA, but not much smarter. And ELIZA fooled some of the people, some of the time; LLMs are fooling more of the people, more of the time. :(


I posted this a couple of months ago: https://news.ycombinator.com/item?id=33781661

  [Excitement over Chatgpt is] really not that different from Eliza being cool for a few minutes before being obviously found wanting
and got what I thought was a funny response of feigned amazement that I could think that way. It's funny to see the wave of hype starting to break.


The thing is I've gotten useful work out of it.

I'm not trying to say it's AGI or perfect, but it saves time on some things.

And for coding I go to Google and Stack Overflow less. Maybe 8 out of 10 times I'm happy with ChatGPT. Since I mostly use it for coding, I can verify pretty quickly whether it worked out or not. For me it saves a lot of typing.


... but enormously disappointing to see the way so many people have been taken in.

The number and strength of defensive statements on Hacker News is appalling. Aren't we supposed to be the smart people who know what's going on, not the gullible people being taken in? (But, as they say, a salesperson is the easiest person to sell to.)



