
What would you call this behaviour, then?

Marketing. "Oh look how powerful our model is, we can barely contain its power"

This has been a thing since GPT-2, why do people still parrot it

I don’t know what your comment is referring to. Are you criticizing the people parroting “this tech is too dangerous to leave to our competitors” or the people parroting “the only people who believe in the danger are in on the marketing scheme”

fwiw I think people can perpetuate the marketing scheme while being genuinely concerned with misaligned superintelligence


Even hackernews readers are eating it right up.

This place is shockingly uncritical when it comes to LLMs. Not sure why.

We want to make money from the clueless. Don't ruin it!

Hilarious for this to be downvoted.

"LLMs are deceiving their creators!!!"

Lol, you all just want it to be true so badly. Wake the fuck up, it's a language model!


A very complicated pattern matching engine providing an answer based on its inputs, heuristics and previous training.

Great. So if that pattern matching engine matches the pattern of "oh, I really want A, but saying so will elicit a negative reaction, so I emit B instead because that will help make A come about" what should we call that?

We can handwave defining "deception" as "being done intentionally" and carefully carve our way around so that LLMs cannot possibly do what we've defined "deception" to be, but now we need a word to describe what LLMs do do when they pattern match as above.


The pattern matching engine does not want anything.

If the training data gives the engine incentives to generate outputs that reduce negative reactions (as measured by, say, sentiment analysis), those outputs may contradict tokens it has already generated.

"Want" requires intention and desire. Pattern matching engines have none.


I wish (/desire) there were a way to dispel this notion that the robots are self aware. It’s seriously digging into popular culture much faster than “the machine produced output that makes it appear self aware”

Some kind of national curriculum for machine literacy, I guess mind literacy really. What was just a few years ago a trifling hobby of philosophizing is now the root of how people feel about regulating the use of computers.


The issue is that one group of people are describing observed behavior, and want to discuss that behavior, using language that is familiar and easily understandable.

Then a second group of people come in and derail the conversation by saying "actually, because the output only appears self aware, you're not allowed to use those words to describe what it does. Words that are valid don't exist, so you must instead verbosely hedge everything you say or else I will loudly prevent the conversation from continuing".

This leads to conversations like the one I'm having, where I described the pattern matcher matching a pattern, and the Group 2 person was so eager to point out that "want" isn't a word that's Allowed, that they totally missed the fact that the usage wasn't actually one that implied the LLM wanted anything.


Thanks for your perspective, I agree it counts as derailment, we only do it out of frustration. "Words that are valid don't exist" isn't my viewpoint, more like "Words that are useful can be misleading, and I hope we're all talking about the same thing"

You misread.

I didn't say the pattern matching engine wanted anything.

I said the pattern matching engine matched the pattern of wanting something.

To an observer the distinction is indistinguishable and irrelevant, but the purpose is to discuss the actual problem without pedants saying "actually the LLM can't want anything".


> To an observer the distinction is indistinguishable and irrelevant

Absolutely not. I expect more critical thought in a forum full of technical people when discussing technical subjects.


I agree, which is why it's disappointing that you were so eager to point out that "The LLM cannot want" that you completely missed how I did not claim that the LLM wanted.

The original comment had the exact verbose hedging you are asking for when discussing technical subjects. Clearly this is not sufficient to prevent people from jumping in with an "Ackshually" instead of reading the words in front of their face.


> The original comment had the exact verbose hedging you are asking for when discussing technical subjects.

Is this how you normally speak when you find a bug in software? You hedge language around marketing talking points?

I sincerely doubt that. When people find bugs in software they just say that the software is buggy.

But for LLMs there's this ridiculous roundabout about "pattern matching behaving as if it wanted something", which is a roundabout way to ascribe intentionality.

If you said this about your OS, people would look at you funny, or assume you were joking.

Sorry, I don't think I am in the wrong for asking people to think more critically about this shit.


> Is this how you normally speak when you find a bug in software? You hedge language around marketing talking points?

I'm sorry, what are you asking for exactly? You were upset because you hallucinated that I said the LLM "wanted" something, and now you're upset that I used the exact technically correct language you specifically requested because it's not how people "normally" speak?

Sounds like the constant is just you being upset, regardless of what people say.

People say things like "the program is trying to do X", when obviously programs can't try to do a thing, because that implies intention, and they don't have agency. And if you say your OS is lying to you, people will treat that as though the OS is giving you false information when it should have different true information. People have done this for years. Here's an example: https://learn.microsoft.com/en-us/answers/questions/2437149/...


I hallucinated nothing, and my point still stands.

You actually described a bug in software by ascribing intentionality to an LLM. That you "hedged" the language by saying that "it behaved as if it wanted" does little to change the fact that this is not how people normally describe a bug.

But when it comes to LLMs there's this pervasive anthropomorphic language used to make it sound more sentient than it actually is.

Ridiculous talking points implying that I am angry is just regular deflection. Normally people do that when they don't like criticism.

Feel free to have the last word. You can keep talking about LLMs as if they are sentient if you want, I already pointed out the bullshit and stressed the point enough.


If you believe that, you either have not reread my original comment, or are repeatedly misreading it. I never said what you claim I said.

I never ascribed intentionality to an LLM. This was something you hallucinated.


It's not a pattern matching engine. It's an association prediction engine.

We are talking about LLM's not humans.

In Finland automatic camera fines (they're not exactly fines but I have no idea how to translate "liikennevirhemaksu" so work with me here) are the problem of whoever owns the car. If the owner wasn't the one driving the car, then it's up to them to inform the police who was actually driving


Interesting! If I translate it from Finnish to German, Google says something along the lines of "traffic violation fee". Usually you can't punish someone for something someone else has done, but maybe if you call it a fee (which doesn't imply punishment) instead of a fine, you can (at least in Finland)?

Reminds me of the fines for using public transport without a ticket in Germany: they're not called fines either, but "erhöhtes Beförderungsentgelt" ("increased transportation fee"). I'm sure there's a very good reason for this name too...


"Traffic vioaltion fee" is a great translation. As far as I understand the logic behind them, they're meant for relatively minor violations where a fine would be kind of overkill and specifically have to be "directed" at the right person.

The downside is that unlike fines which scale by income here – the term is "päiväsakko" or "day fine", a fine unit that scales with net income – the fees are fixed sums, so unless a person with high income really does something heinous with their car, they're not as likely to get 200k€ (really) speeding tickets.

So now if you're rich you can speed all you want and pay a relatively small fee for it, as long as you're not doing 200km/h in a school zone or something like that

Edit: https://www.nbcnews.com/id/wbna4233383

From 2004. He was driving 80km/h in a 40km/h zone.

"Millionaire hit with record speeding fine"

One of Finland’s richest men has been fined a record 170,000 euros ($217,000) for speeding through the centre of the capital, police said on Tuesday.
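
To make the asymmetry above concrete, here's a rough Python sketch of an income-scaled day fine next to a flat camera fee. The 1/60 divisor, 255 € deduction and 6 € floor follow the commonly cited approximation of the Finnish day-fine formula; the flat fee and the day-fine count are made-up placeholders, not the actual statutory values.

    # Illustrative only: income-scaled day fine (päiväsakko) vs. a flat camera fee.
    # Constants are an approximation of the Finnish formula; the flat fee and
    # the day-fine count below are made up for this example.

    def day_fine_unit(net_monthly_income_eur: float) -> float:
        # value of one day fine, floored at 6 EUR
        return max(6.0, (net_monthly_income_eur - 255.0) / 60.0)

    def speeding_fine(net_monthly_income_eur: float, day_fine_count: int) -> float:
        # total fine = number of day fines (set by the offence) * unit value
        return day_fine_count * day_fine_unit(net_monthly_income_eur)

    FLAT_CAMERA_FEE_EUR = 200.0  # hypothetical flat "liikennevirhemaksu"

    for income in (2_500, 25_000, 600_000):
        print(f"net {income:>7} EUR/month -> day fines {speeding_fine(income, 30):>9.0f} EUR, flat fee {FLAT_CAMERA_FEE_EUR:.0f} EUR")
    # The day-fine total grows with income while the flat fee stays put,
    # which is exactly the gap a high earner can exploit with camera fees.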


I'm sure some bright spark will soon show up to say that it was actually NATO who was violating our airspace for decades, just like they're claiming that NATO is the one cutting cables here


Yeah sure, we keep cutting our own telecoms cables multiple times per year, using Russian-operated ships as a front.

The Eagle S (I think it was?) case was brought to court here in Finland and they even admitted to dragging their anchor but steadfastly maintained that it was due to their own incompetence (which the judge unfortunately believed).

I suppose that was also a NATO ploy?


The US is blowing up Venezuelan boats, and according to Seymour Hersh, blew up Nord Stream. Why would a few cables be beyond US/NATO capabilities if it drums up popular support for US extra-judicial interdiction of other countries' maritime activity?


Do you understand that this has been going on for much longer than the US's Venezuelan murder spree, and longer than Trump has been president (this time around)?

Also, as I said, we have a crew of a Russian-operated ship on the record admitting to cutting a cable by dragging their anchor, and all the previous cases have also been traced to other Russian-operated ships (well, I think one was Chinese though) using AIS and radar data, and this has been done by OSINT folks in addition to the local authorities here around the Baltic. Are all of these people being controlled by NATO and the US?

Pro-Russian people like you assume that other countries will always just let the US or "NATO" do whatever they want and have absolutely zero autonomy at all, and you're absolute experts at ignoring everything that doesn't fit your insanely simplistic narrative that's predicated on the idea that Russia is just a perpetual victim and a spooky spooky NATO CIA USA cabal is actually doing everything bad that the Russians get up to.


Nowhere in this article does it say anything about Russians admitting to cutting the cable, let alone doing it on purpose with malicious intent, so you are just making things up now.

The list of US acts of terrorism goes beyond the Trump presidency; it's convenient for liberals to blame everything on Trump but the bombing of Nord Stream occurred under Biden; Obama was droning weddings while Hillary Clinton was setting fire to Libya (using NATO, the "defensive" alliance that strikes first!)

All the previous cases of cable cutting, alleged by Western newspapers without any shred of evidence, are a good way of beating the war drums. The war propaganda and hysteria this time is more intense than the Iraq war, which I think you are too young to remember. It is unclear what material advantage Russia would get from cutting cables, but with hysteria, reason is not required.

"Pro-Russian people" like me .. well I'm pro-peace actually rather than pro-Russian and have seen that the Russians offered negotiations with the US and Europe multiple times that were rejected. Negotiations that might have averted bloodshed. It's interesting that a "non-binary" person like you (according to your Github) wants to view people in a binary category as pro/anti-Russian rather than perhaps having a different perspective.

As to the substance of your last point: I remember Europe actually arguing against the US during the 2003 invasion of Iraq and now seeing Europe being a bunch of kept poodles that would prefer to commit economic, moral and geopolitical suicide rather than stand up for themselves.


> The war propaganda and hysteria this time is more intense than the Iraq war, which I think you are too young to remember.

This feels like falling into a time warp back to February 2022 when the same sentiments were expressed vis-a-vis the imminent invasion. I see a lot of whataboutism, but not a whole lot of reasoning for why this isn't likely to be more of the same?


> Negotiations that might have averted bloodshed.

I mean they could have simply not invaded Ukraine. Seems like that's the thing a peace advocate such as yourself would endorse.


Reality would be much funnier if I didn't have to live in it


LLMs really can't be improved all that much beyond what we currently have, because they're fundamentally limited by their architecture, which is what ultimately leads to this sort of behaviour.

Unfortunately the AI bubble seems to be predicated on just improving LLMs and really really hoping that they'll magically turn into even weakly general AIs (or even AGIs like the worst Kool-aid drinkers claim they will), so everybody is throwing absolutely bonkers amounts of money at incremental improvements to existing architectures, instead of doing the hard thing and trying to come up with better architectures.

I doubt static networks like LLMs (or practically all other neural networks that are currently in use) will ever be candidates for general AI. All they can do is react to external input, they don't have any sort of an "inner life" outside of that, ie. the network isn't active except when you throw input at it. They literally can't even learn, and (re)training them takes ridiculous amounts of money and compute.

I'd wager that for producing an actual AGI, spiking neural networks or something similar to them would be what you'd want to lean in to, maybe with some kind of neuroplasticity-like mechanism. Spiking networks already exist and they can do some pretty cool stuff, but nowhere near what LLMs can do right now (even if they do do it kinda badly). Currently they're harder to train than more traditional static NNs because they're not differentiable so you can't do backpropagation, and they're still relatively new so there's a lot of open questions about eg. the uses and benefits of different neural models and such.
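
For anyone curious, here's a minimal sketch (plain NumPy, made-up parameters and inputs) of a single leaky integrate-and-fire neuron, the usual building block of a spiking network. The hard threshold in the middle is the non-differentiable step mentioned above: its derivative is zero almost everywhere, which is why ordinary backpropagation doesn't apply directly.

    import numpy as np

    # Minimal illustrative LIF neuron; parameters and inputs are made up.
    def lif_run(inputs, tau=5.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
        # Integrate the input current with leak; emit a spike (1) when the
        # membrane potential crosses the threshold, then reset it.
        v, spikes = 0.0, []
        for i in inputs:
            v += (dt / tau) * (-v + i)    # leaky integration
            spiked = v >= v_thresh        # hard, non-differentiable threshold
            spikes.append(1 if spiked else 0)
            if spiked:
                v = v_reset               # reset after firing
        return np.array(spikes)

    rng = np.random.default_rng(0)
    train = lif_run(rng.uniform(0.0, 2.0, size=60))
    print(train.sum(), "spikes in", train.size, "steps")  # sparse 0/1 spike train

(In practice people work around the threshold with surrogate gradients and similar tricks, but that's extra machinery that static, fully differentiable nets simply don't need.)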


I think there is something to be said about the value of bad information. For example, pre-AI, how might you come to the correct answer for something? You might dig into the underlying documentation or whatever "primary literature" exists for that thing and get the correct answer.

However, that was never very many people. Only the smart ones. Many would prefer to have shouted into the void at reddit/stackoverflow/quora/yahoo answers/forums/irc/whatever, to seek an "easy" answer that is probably not entirely correct if you bothered going right to the source of truth.

That represents a ton of money controlling that pipeline and selling expensive monthly subscriptions to people to use it. Even better if you can shoehorn yourself into the workplace, and get work to pay for it at a premium per user. Get people to come to rely on it and have no clue how to deal with anything without it.

It doesn't matter if it's any good. That isn't even the point. It just has to be the first thing people reach for and therefore available to every consumer and worker, a mandatory subscription most people now feel obliged to pay for.

This is why these companies are worth billions. Not for the utility, but from the money to be made off of the people who don't know any better.


But the thing is that they aren't even making money; eg. OpenAI lost $11 billion in one quarter. Big LLMs are just so fantastically expensive to train and operate, and they ultimately really aren't as useful to eg. businesses as they've been evangelised to be, so demand just hasn't picked up – plus the subscription plans are priced so low that most if not all "LLM operators" (OpenAI, Anthropic, etc) apparently actually lose money on even the most expensive ones. They'd lose all their customers if the plans actually cost as much as they should.

Apropos to that, I wonder if OpenAI et al are losing money on API plans too, or if it's just the subscriptions.

Source for the OpenAI loss figure: https://www.theregister.com/2025/10/29/microsoft_earnings_q1...

Source for OpenAI losing money on their $200/mo sub: https://fortune.com/2025/01/07/sam-altman-openai-chatgpt-pro...


To lose 11 billion means you have successfully convinced some people to give you 11 billion to lose. And money wasn't lost either. It was spent. It was used for things, making people richer and buying hardware, which also makes people richer.


Based on your ad hominem of a reply I suppose it's safe to assume you don't have the experience, then


What is your proposed alternative, though?


Silos. You can create your own and say anything you want (only constrained by the law). Everyone else can join it, or blacklist it, for themselves. Nobody gets to shut off someone else's silo, they can only ignore it for themselves. Nobody gets to decide what other people choose to read or write.

For the case of Reddit, a silo maps nicely onto a subreddit. Within any subreddit the moderator can have full control, they can moderate it exactly as they choose. If you don't like it, create your own where you will have free rein.


What about content that is illegal in the country that your "silo" is hosted in, like, say, CSAM (but you can really really substitute anything else illegal there, like eg. planning terrorist attacks)? If a "silo" is CSAM-friendly or its express purpose is posting it and its moderators don't want to remove illegal content, what then?


I hope there are no legal jurisdictions that are actually CSAM-friendly. But this isn't a unique problem; there are many situations in the world where legal jurisdictions are muddy. For example, when over-the-air television signals can be received across country borders. Just let the law sort it out. Admittedly, it's more difficult for companies that operate in multiple countries, but they're already managing to do it today. The main hope is that companies will not add any additional censorship themselves, and that an attitude of free exchange and tolerance would be the default position for more of us than it is today.


That already exists, it’s called a website.


That's a good point. But for all practical purposes, Facebook, Reddit, and other major social networks represent what the web means to an average person. Many of them never even open a browser. So those major social networks should be treated more like a public square, for the discoverability that provides, if nothing else. And in the context of sites being delisted and apps being banned (Google, Apple, etc), it would be nice for major social networks to be committed to free speech on their platforms.


If there's something I'd expect Google to use a strong consistency model for, it'd be a credit system like that.

Well, not that they don't do stupid things all the time, but having credits live on a system with a weak consistency model would be silly.
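
A rough sketch of what that means in practice, using plain Python and SQLite with made-up table and names (no claim that this resembles Google's actual billing system): a conditional, transactional debit can never drive the balance negative, which a read-modify-write over an eventually consistent store easily could.

    import sqlite3

    # Illustrative only: strongly consistent debit of a credit balance.
    # Table name, user ids and amounts are made up for this sketch.
    conn = sqlite3.connect("credits.db", isolation_level=None)  # autocommit; transactions managed explicitly
    conn.execute("CREATE TABLE IF NOT EXISTS credits (user_id TEXT PRIMARY KEY, balance INTEGER)")
    conn.execute("INSERT OR IGNORE INTO credits VALUES ('alice', 100)")

    def debit(user_id: str, amount: int) -> bool:
        # Atomically spend `amount` credits; False means insufficient funds.
        conn.execute("BEGIN IMMEDIATE")  # take the write lock up front
        cur = conn.execute(
            "UPDATE credits SET balance = balance - ? WHERE user_id = ? AND balance >= ?",
            (amount, user_id, amount),
        )
        ok = cur.rowcount == 1           # 0 rows updated => not enough credits
        conn.execute("COMMIT" if ok else "ROLLBACK")
        return ok

    print(debit("alice", 30))   # True, balance is now 70
    print(debit("alice", 100))  # False, balance unchanged at 70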


So yes, it would have an effect; even with your imaginary numbers that'd be a 3x drawdown


it might bring in the schedules, but since it probably wouldn't cause there to be an actual hole, it's really more about long-term fab build plans than anything else


> since it probably wouldn't cause there to be an actual hole, it's really more about long-term fab build plans than anything else

Equities are forward looking. TSMC's valuation doesn't make sense if it doesn't have a backlog to grow into.


Exactly. A drop in the expected growth would absolutely cause a drop in valuation as investors reassess their holdings

