
It is supremely annoying when I ask in a group if someone has experience with a tool or system and some idiot copies my question into some LLM and pastes the answer. I can use the LLM just like anyone; if I'm asking for EXPERIENCE it is because I want the opinion of a human who actually had to deal with stuff like corner cases.


It's the 2025 version of lmgtfy.


Nah, that’s different. Lmgtfy has nothing to do with experience, other than experience in googling. Lmgtfy applies to stuff that can expediently be googled.


In my experience, what people usually did was take your question from a forum, go to lmgtfy, paste the exact words in, and then link back to it. As if to say, "See how easy that was? Why are you asking us when you could have just done that?"

Yes, it's true there could have been a skill issue. But it could also be true that the person just wanted input from people rather than Google. So that's why I drew the connection.


I largely agree with your description, and I think that’s different from the above case of explicitly asking for experience and then someone posing the question to an LLM. Also, when googling, you typically got (or used to get) information written down by people, from a much larger pool and better curated via page ranking, than whoever you happen to be asking. So it’s not like you were typically getting better quality by not googling.


That's why I said it's the 2025 version of that, given the new technology. I'm not saying it's the same thing. I guess I'm not being clear, sorry.


It’s not clear to me in what way it is a version of that, other than the response being different from what the asker wanted. The point of lmgtfy is to show that the asker could legitimately and reasonably easily have found the answer himself. You can argue that it is sometimes done in cases where googling actually wouldn’t provide the desired information, but that is far from the common case. This present version is substantially different from that. It is invariably true that an LLM response won’t give you the awareness and judgement of someone with experience in a certain topic.


Okay I see the confusion. We are coming from different perspectives.

There are three main reasons I can think of for asking the Internet a question in 2010:

1. You don't know how to ask Google / you are too lazy.

2. You don't trust Google.

3. You already tried Google and it doesn't have the answer or it's wrong.

Maybe there are more I can't think of. But let's say you have one of those three reasons, so you post a question to an Internet forum in the year 2010. Someone replies back with lmgtfy. There are three typical responses depending on which of those reasons you had for posting:

1. "Thanks"

2. "Thanks, but I don't trust those sources, so I reiterate my question."

3. "Thanks, but I tried that and the answer is wrong, so I reiterate my question."

Now it's the year 2025 and you post a question to an Internet forum because you either don't know how to ask ChatGPT, don't trust ChatGPT, or already tried it and it's giving nonsense. Someone replies back with an answer from ChatGPT. There are three typical responses depending on your reason for posting to the forum.

1. "Thanks"

2. "Thanks, but I don't trust those sources, so I reiterate my question."

3. "Thanks, but I tried that and the answer is wrong, so I reiterate my question."

So the reason I drew the parallel was because of the similarity of experiences between 2010 and now for someone who doesn't trust this new technology.


In my experience, what happened was that the top hit for the question was a topical forum, with a lmgtfy link as a response to the exact question I was googling.


The whole point of paying a domain expert is so that you don't have to google shit all day.


That’s exactly how I feel


If it's not worth writing, it's not worth reading.


Reminds me of something I wrote back in 2023: "If you wrote it with an LLM, it wasn't worth writing" https://jfloren.net/b/2023/11/1/0


There's a lot of documentation that I've found was left unwritten but that I would have loved to read.


I mean, there is a lot of hand written crap to, so even that isn't a good rule.


Both statements can be true at the same time, even though they seem to point in different directions. Here's how:

1. *"If it's not worth writing, it's not worth reading"* is a normative or idealistic statement — it sets a standard or value judgment about the quality of writing and reading. It suggests that only writing with value, purpose, or quality should be produced or consumed.

2. *"There is a lot of handwritten crap"* is a descriptive statement — it observes the reality that much of what is written (specifically by hand, in this case) is low in quality, poorly thought-out, or not meaningful.

So, putting them together:

* The first expresses *how things ought to be*.

* The second expresses *how things actually are*.

In other words, the existence of a lot of poor-quality handwritten material does not invalidate the ideal that writing should be worth doing if it's to be read. It just highlights a gap between ideal and reality — a common tension in creative or intellectual work.

Would you like to explore how this tension plays out in publishing or education?



> If it's not worth writing, it's not worth reading.

It does NOT mean, AT ALL, that if it is worth writing, it is worth reading.

Logic 101?


That rule does not imply the inverse
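

To spell out the logic (W and R are my own shorthand for "worth writing" and "worth reading", not anyone's terms above):

Rule: ¬W → ¬R

Contrapositive: R → W (equivalent: anything worth reading was worth writing)

Inverse: W → R (not implied: being worth writing doesn't guarantee being worth reading)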


I mean we have automated systems that 'write' things like tornado warnings. Would you rather we have someone hand write that out?

It seems the initial rule is rather worthless.


1. I think the warnings are generally "written" by humans. Maybe some variables filled in during the automation.

2. So a rule with occasional exceptions is worthless, ok


>I mean, there is a lot of hand written crap to

You know how I know the difference between something an AI wrote and something a human wrote? The AI knows the difference between "to" and "too".

I guess you proved your point.


It is a necessary but not sufficient condition, perhaps?


Necessary != sufficient.



