
>I've resorted to appending things like "THIS IS JUST A QUESTION. DO NOT EDIT CODE. DO NOT RUN COMMANDS". Which is ridiculous.

Funny to read that, because for me it's not even new behavior. I have developed a tendency to add something like "(genuinely asking, do not take as a criticism)".

I'm from a more confrontational culture, so I had just assumed this was corporate American tone framing criticism softly, and me compensating for it.


Same here. I quickly learned that if you merely ask questions about its understanding or plans, it starts looking for alternatives, because my questioning is interpreted as rejection or criticism rather than just taken at face value. So I often (not always) have to caveat questions like that too. It's really been like that since before Claude Code or Codex even rolled around.

It's just strange because that's a very human behavior, and although the model learns from humans, it isn't one, so it would be nice if it acted more robotically in this sense.


Yeah, numerous times I've replied to a comment online, to add supporting context, and it's been interpreted as a retort. So now I prefix them with 'Yeah, '.

Splitting tasks like researching, coding, and reviewing across multiple LLMs and/or sessions can reduce or eliminate these issues while giving you more control over your context windows. This also provides the side benefit of a ‘second opinion’ and a more general perspective on a given topic.

Very interesting observation. Wondering if anyone has ever analyzed the underlying "culture" of LLMs and what this would mean for international users.

Obviously you want AI to do your job, so you should accept the result and not coauthor it.

The reason is the system prompt they provided. They probably added a clause like “plan user’s requirements… and implement the required code”

We're training neural networks on human content to be human-like. We don't have "robotic content".

Do what you would do with a person, which is to allocate time for them to produce documentation, and be specific about it.

Appending "Good." before clarifying questions actually helps with that suprisingly well.

You're absolutely right! No, really: I've never had this problem of unprompted changes when I'm just asking, but I always (I think even in real-life conversations with real people) start with feedback: "Works great. What happens if..."

I think people having different styles of prompting LLMs leads to different model preferences. It's like you can work better with some colleagues while with others it does not really "click".


Plus one to this -- also "very well" can indicate that I'm satisfied with the output produced and now we are on to the next stage.

I just append "explain", or start with "tell me."

So instead of:

"Why is foo str|None and not str"

I'd do:

"tell me why foo is str|None and not str"

or

"Why is foo str|None and not str, explain"

Which is usually good enough.

If you're asking this kind of question, the answer probably deserves to be a code comment.


  > the answer probably deserves to be a code comment.

No..? As others mentioned, somehow Codex is "smart" enough to tell questions and requests apart.

Oh, funny enough, I often add stuff like "genuinely asking, do not take as a criticism" when talking with humans, so I do it naturally with LLMs.

People often use questions as an indirect form of telling someone to do something or criticizing something.

I've definitely had people mistake my questions for attempts to attack them.

There are a lot of times when people do expect the LLM to interpret their question as a command to do something. And they would get quite angry if the LLM just answered the question.

Not that I wouldn't prefer it if LLMs took things more literally, but these models are trained for the average neurotypical user, so that quirk makes perfect sense to me.


Personally, I defined <dtf> as 'don't touch files' in the general claude.md, with the explanation that when this is present in the query, it means not to edit anything, just answer questions.

It has worked pretty well up until now; when I include <dtf> in the query, the model has never run around modifying things.
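
For anyone wanting to try the same trick, a minimal sketch of what such a claude.md entry could look like (the wording below is my own illustration, not the commenter's actual file):

  ## Query markers

  - <dtf> ("don't touch files"): if this tag appears anywhere in my message,
    do not edit any files and do not run any commands. Only answer the
    question in the chat.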


I've been using chat and copilot for many months but finally gave claude code a go, and I've been interested in how it does seem to have a bit more of an attitude to it. Like copilot is just endlessly patient with every little nitpick and whim you have, but I feel like Claude is constantly like "okay I'm committing and pushing now.... oh, oh wait, you're blocking me. What is it you want this time bro?"

"Don't act, just a question" works for me.

Try /btw

This is the prompt that Claude Code adds when you use /btw

https://github.com/Piebald-AI/claude-code-system-prompts/blo...


I found that helpful for a question but the btw query seemed to go to a subagent that couldn't interrupt or direct the main one. So it really was just for informational questions, not "hey what if we did x instead of y?"

That's not a thing in Claude ... so no.

It actually is; I don't know for how long, but it prompted me to try this a few days ago.

Can't be rolled out to all users yet, then, because I just get:

> Unknown skill: btw


Yesterday it was showing a hint in the corner to use "/btw" but when I first tried it I got this same error. About ten minutes later (?) I noticed it was still showing the same hint in the corner, so I tried it again and it worked. Seemed to be treated as a one-off question which doesn't alter the course of whatever it was already working on.

It is in Claude Code, specifically for this use case.

It's not really the same use case. It's a smaller model, it doesn't have tools, it can't investigate, etc. The only thing it can do is answer questions about whatever is in the current context.

It's new

I've never experienced this, but I guess I always respond with something like "No, [critique/steer]" or "Mostly fine, but [critique/steer]".

You can just put it in PLAN mode (assuming VS Code); that works well enough - I've never seen it edit code when in that state.

Then it will try to update the plan. Sometimes I have a plan that I'm ready to approve, but I get an idea, "what if we use/do this instead of that", and all I want is a quick answer, with or without additional exploring. What I don't want is to adjust a plan I already like based on a thing I say that may not pan out.

Or ask mode? Isn't that what ask mode is for?

Charitable reading. Culture; tone; throughout history these have been medium and message of the art of interpersonal negotiation in all its forms (not that many).

A machine that requires them in order to work better is not an imaginary para-person that you now get to boss around; the "anthropic" here is "as in the fallacy".

It's simply a machine that is teaching certain linguistic patterns to you. As part of an institution that imposes them. It does that, emphatically, not because the concepts implied by these linguistic patterns make sense. Not because they are particularly good for you, either.

I do not, however, see like a state. The code's purpose is to be the most correct representation of a given abstract matter as accessible to individual human minds - and like GP pointed out, these workflows make that stage matter less, or not at all. All engineers now get to be sales engineers, too! Primarily! Because it's more important! And the most powerful cognitive toolkit! (Well, after that other one, the one for suppressing others' cognition.)

Fitting: most software these days is either an ad or a storefront.

>80% of the time I ask Claude Code a question, it kinda assumes I am asking because I disagree with something it said, then acts on a supposition.

Humans do this too. Increasingly so over the past ~1y. Funny...

Some always did though. Matter of fact, I strongly suspect that the pre-existing pervasiveness of such patterns of communication and behavior in the human environment, is the decisive factor in how - mutely, after a point imperceptibly, yet persistently - it would be my lot in life to be fearing for my life throughout my childhood and the better part of the formative years which followed. (Some AI engineers are setting up their future progeny for similar ordeals at this very moment.)

I've always considered it significant how back then, the only thing which convincingly demonstrated to me that rationality, logic, conversations even existed, was a beat up old DOS PC left over from some past generation's modernization efforts - a young person's first link to the stream of human culture which produced said artifact. (There's that retrocomputing nostalgia kick for ya - heard somewhere that the future AGI will like being told of the times before it existed.)

But now I'm half a career into all this goddamned nonsense. And I'm seeing smart people celebrating the civilization-scale achievement of... teaching the computers how to pull ape shit! And also seeing a lot of ostensibly very serious people, who we are all very much looking up to, seem to be liking the industry better that way! And most everyone else is just standing by listless - because if there's a lot of money riding on it then it must be a Good Thing, right? - we should tell ourselves that and not meddle.

All of which, of course, does not disturb, wrong, or radicalize me in the slightest.



