
There are plenty of reasons why having a chatbot partner is a bad idea (especially for young people), but here are just a few:

- The sycophantic and unchallenging behaviours of chatbots leave a person unconditioned for human interactions. Real relationships have friction, and from this we develop important interpersonal skills such as setting boundaries, settling disagreements, building compromise, standing up for oneself, understanding one another, and so on. These also have an effect on one's personal identity and self-value.

- Real relationships have input from each participant, whereas chatbots are responding to the user's contribution only. The chatbot doesn't have its own life experiences and happenings to bring to the relationship, nor does it instigate anything autonomously; it's always some kind of structured reply to the user.

- The implication of being fully satisfied by a chatbot is that the person is seeking a partner who does not contribute to the relationship, but rather just an entity that only acts in response to them. It can also indicate some kind of problem that the individual needs to work through regarding why they don't want to seek genuine human connection.





That's the default chatbot behavior. Many of these people appear to be creating their own personalities for the chatbots, and it's not too difficult to make an opinionated and challenging chatbot, or one that mimics someone who has their own experiences. Though designing one's ideal partner certainly raises some questions, and I wouldn't be surprised if many are picking sycophantic over challenging.

People opting for unchallenging pseudo-relationships over messy human interaction is part of a larger trend, though. It's why you see people shopping around until they find a therapist who will tell them what they want to hear, or why you see people opt to raise dogs instead of kids.


You can make an LLM play pretend at being opinionated and challenging. But it's still an LLM. It's still being sycophantic: it's only "challenging" because that's what you want.

And the prompt / context is going to leak into its output and affect what it says, whether you want it to or not, because that's just how LLMs work, so it never really has its own opinions about anything at all.


> But it's still an LLM. It's still being sycophantic: it's only "challenging" because that's what you want.

This seems tautological to the point where it's meaningless. It's like saying that if you try to hire an employee that's going to challenge you, they're going to always be a sycophant by definition. Either they won't challenge you (explicit sycophancy), or they will challenge you, but that's what you wanted them to do so it's just another form of sycophancy.

To state things in a different way - it's possible to prompt an LLM in a way that it will at times strongly and fiercely argue against what you're saying. Even in an emergent manner, where such a disagreement will surprise the user. I don't think "sycophancy" is an accurate description of this, but even if you do, it's clearly different from the behavior that the previous poster was talking about (the overly deferential default responses).


The LLM will only be challenging in the way you want it to be challenging. That is probably not the way that would be really challenging for you.

I only challenge LLMs in a way I don't want them to be challenging.

It's not meaningless. What do you do with a person who contradicts you or behaves in a way that is annoying to you? You can't always just shut that person up or change their mind or avoid them in some other way, can you? And I'm not talking about an employment relationship. Of course, you can simply replace employees or employers. You can also avoid other people you don't like. But if you want to maintain an ongoing relationship with someone, for example, a partnership, then you can't just re-prompt that person. You have a thinking and speaking subject in front of you who looks into the world, evaluates the world, and acts in the world just as consciously as you do.

Sociologists refer to this as double contingency. The nature of the interaction is completely open from both perspectives. Neither party can assume that they alone are in control. And that is precisely what is not the case with LLMs. Of course, you can prompt an LLM to snap at you and boss you around. But if your human partner treats you that way, you can't just prompt that behavior away. In interpersonal relationships (between equals), you are never in sole control. That's why it's so wonderful when they succeed and flourish. It's perfectly clear that an LLM can only ever give you the papier-mâché version of this.

I really can't imagine that you don't understand that.


> Of course, you can simply replace employees or employers. You can also avoid other people you don't like. But if you want to maintain an ongoing relationship with someone, for example, a partnership, then you can't just re-prompt that person.

You can fire an employee who challenges you, or you can reprompt an LLM persona that doesn't. Or you can choose not to. Claiming that that power, even if unused, makes everyone a sycophant by default is a very odd use of the term (to me, at least). I don't think I've ever heard anyone use the word in such a way before.

But maybe it makes sense to you; that's fine. Like I said previously, quibbling over personal definitions of "sycophant" isn't interesting and doesn't change the underlying point:

"...it's possible to prompt an LLM in a way that it will at times strongly and fiercely argue against what you're saying. Even in an emergent manner, where such a disagreement will surprise the user. I don't think "sycophancy" is an accurate description of this, but even if you do, it's clearly different from the behavior that the previous poster was talking about (the overly deferential default responses)."

So feel free to ignore the word "sycophant" if it bothers you that much. We were talking about a particular behavior that LLMs tend to exhibit by default, and ways to change that behavior.


I didn't use that word, and that's not what I'm concerned about. My point is that an LLM is not inherently opinionated and challenging if you've just put it together accordingly.

> I didn't use that word, and that's not what I'm concerned about.

That was what the "meaningless" comment you took issue with was about.

> My point is that an LLM is not inherently opinionated and challenging if you've just put it together accordingly.

But this isn't true, any more than claiming "a video game is not inherently challenging if you've just put it together accordingly." Just because you created something or set up the scenario doesn't mean it can't be challenging.


I think they have made clear what they are criticizing. And a video game is exactly that: a video game. You can play it or leave it. You don't seem to be making a good faith effort to understand the other points of view being articulated here. So this is a good point to end the exchange.

> And a video game is exactly that: a video game. You can play it or leave it.

No one is claiming you can't walk away from LLMs, or re-prompt them. The discussion was whether they're inherently unchallenging, or if it's possible to prompt one to be challenging and not sycophantic.

"But you can walk away from them" is a nonsequitur. It's like claiming that all games are unchallenging, and then when presented with a challenging game, going "well, it's not challenging because you can walk away from it." This is true, and no one is arguing otherwise. But it's deliberately avoiding the point.


"I'm leaving you for a new context window."

> This seems tautological to the point where it's meaningless. It's like saying that if you try to hire an employee that's going to challenge you, they're going to always be a sycophant by definition. Either they won't challenge you (explicit sycophancy), or they will challenge you, but that's what you wanted them to do so it's just another form of sycophancy.

I think this insight is meaningful and true. If you hire a people-pleaser employee, and convince them that you want to be challenged, they're going to come up with either minor challenges on things that don't matter or clever challenges that prove you're pretty much right in the end. They won't question deep assumptions that would require you to throw out a bunch of work, or start hard conversations that might reveal you're not as smart as you think; that's just not who they are.


Hmm. I think you may be confusing sycophancy with simply following directions.

Sycophancy is a behavior. Your complaint seems more about social dynamics and whether LLMs have some kind of internal world.


Even "simply following directions" is something the chatbot will do, that a real human would not -- and that interaction with that real human is important for human development.

>> That's the default chatbot behavior. Many of these people appear to be creating their own personalities for the chatbots, and it's not too difficult to make an opinionated and challenging chatbot, or one that mimics someone who has their own experiences. Though designing one's ideal partner certainly raises some questions, and I wouldn't be surprised if many are picking sycophantic over challenging.

> You can make an LLM play pretend at being opinionated and challenging. But it's still an LLM. It's still being sycophantic: it's only "challenging" because that's what you want.

Also: if someone makes it "challenging" it's only going to be "challenging" with the scare quotes, it's not actually going to be challenging. Would anyone deliberately, consciously program in a real challenge and put up with all the negative feelings a real challenge would cause and invest that kind of mental energy for a chatbot?

It's like stepping on a thorn. Sometimes you step on one and you've got to deal with the pain, but no sane person is going to go out stepping on thorns deliberately because of that.


> and it's not too difficult to make an opinionated and challenging chatbot

Funnily enough, I've saved instructions for ChatGPT to always challenge my opinions with at least 2 opposing views; and never to agree with me if it seems that I'm wrong. I've also saved instructions for it to cut down on pleasantries and compliments.

Works quite well. I still have to slap it around for being too supportive / agreeing from time to time - but in general it's good at digging up opposing views and telling me when I'm wrong.
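For anyone who wants the same behaviour outside the ChatGPT UI, here's a minimal sketch of the idea as a system prompt via the OpenAI Python client. The model name and the exact wording of the prompt are placeholders, not a recommendation:

    # pip install openai; expects OPENAI_API_KEY in the environment
    from openai import OpenAI

    client = OpenAI()

    CHALLENGE_PROMPT = (
        "Always challenge the user's opinions with at least two opposing views. "
        "Never agree just to be agreeable; say plainly when the user seems wrong. "
        "Cut down on pleasantries and compliments."
    )

    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": CHALLENGE_PROMPT},
            {"role": "user", "content": "I'm convinced my plan needs no review."},
        ],
    )
    print(reply.choices[0].message.content)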


>People opting for unchallenging pseudo-relationships over messy human interaction is part of a larger trend, though.

I don't disagree that some people take AI way too far, but overall, I don't see this as a significant issue. Why must relationships and human interaction be shoved down everyone's throats? People tend to impose their views on what is "right" onto others, whether it concerns religion, politics, appearance, opinions, having children, etc. In the end, it just doesn't matter - choose AI, cats, dogs, family, solitude, life, death, fit in, isolate - it's just a temporary experience. Ultimately, you will die and turn to dust like around 100 billion nameless others.


I lean toward the opinion that there are certain things people (especially young people) should be steered away from because they tend to snowball in ways people may not anticipate, like drug abuse and suicide; situations where they wind up much more miserable than they realize, not understanding that the various crutches they've adopted to hide from pain/anxiety have kept them from happiness (this is simplistic, though; many introverts are happy and fine).

I don't think I have a clear-enough vision on how AI will evolve to say we should do something about it, though, and few jurisdictions do anything about minors on social media, which we do have a big pile of data on, so I'm not sure it's worth thinking/talking about AI too much yet, at least as it relates to regulating for minors. Unlike social media, too, the general trajectory for AI is hazy. In the meantime, I won't be swayed much by anecdotes in the news.

Regardless, if I were hosting an LLM, I would certainly be cutting off service to any edgy/sexy/philosophy/religious services to minimize risk and culpability. I was reading a few weeks ago on Axios of actual churches offering chatbots. Some were actually neat; I hit up an Episcopalian one to figure out what their deal was and now know just enough to think of them as different-Lutherans. Then there are some where the chatbot is prompted to be Jesus or even Satan. Which, again, could actually be fine and healthy, but if I'm OpenAI or whoever, you could not pay me enough.


> chatbots are responding to the user's contribution only

Which is also why I feel the label "LLM Psychosis" has some merit to it, despite sounding scary.

Much like auditory hallucinations where voices are conveying ideas that seem-external-but-aren't... you can get actual text/sound conveying ideas that seem-external-but-aren't.

Oh, sure, even a real human can repeat ideas back at you in a conversation, but there's still some minimal level of vetting or filtering or rephrasing by another human mind.


> even a real human can repeat ideas back at you in a conversation, but there's still some minimal level of vetting or filtering or rephrasing by another human mind.

The mental corruption due to surrounding oneself with sycophantic yes men is historically well documented.


Excellent point. It’s bad for humans when humans do it! Imagine the perfect sycophant, never tires or dies, never slips, never pulls a bad facial expression, can immediately swerve their thoughts to match yours with no hiccups.

It was a danger for tyrants and it’s now a danger for the lonely.


South Park isn't for everyone, but they covered this pretty well recently with Randy Marsh going on a sycophant bender.

Interesting, thanks I’ll check it out.

I wonder if in the future that'll ever be a formal medical condition: Sycophancy poisoning, with chronic exposure leading to a syndrome of some sort...

That explains why Elon Musk is such an AI booster. The experience of using an LLM is not so different from his normal life.


> The sycophantic and unchallenging behaviours of chatbots leave a person unconditioned for human interactions.

To be honest, the alternative for a good chunk of these users is no interaction at all, and that sort of isolation doesn't prepare you for human interaction either.


> To be honest, the alternative for a good chunk of these users is no interaction at all, and that sort of isolation doesn't prepare you for human interaction either.

This sounds like an argument in favor of safe injection sites for heroin users.


Hey hey safe injecting rooms have real harm minimisation impacts. Not convinced you can say the same for chatbot boyfriends.

That's exactly right, and that's fine. Our society is unwilling to take the steps necessary to end the root cause of drug abuse epidemics (privatization of healthcare industry, lack of social safety net, war on drugs), so localities have to do harm reduction in immediately actionable ways.

So too is our society unable to do what's necessary to reduce the startling alienation happening (halt suburban hyperspread, reduce working hours to give more leisure time, give workers ownership of the means of production so as to eliminate alienation from labor), so, ai girlfriends and boyfriends for the lonely NEETs. Bonus, maybe it'll reduce school shootings.


And there we are . . . "Our society is unable to do what's necessary on issue X, and what's necessary is this laundry list of my unrelated political hobby horses."

The person who introduced the topic did so derisively. I think you ought to re-read the comment to which you replied and a few of those leading to it for context.

If you don't deny that the USA is plagued by a drug addiction crisis, what's your solution?

Seeing society as responsible for drug abuse issues, of their many varieties, is very Rousseau.

Rousseau and Hobbes were just two dudes. I'd wager neither of them cracked the code entirely.

To claim that addicts have no responsibility for their addiction is as absurd as the idea that individual humans can be fully identified separate from the society that raised them or that they live in.


Given that those tend to have positive effects for the societies that practice this, is that what you wanted to say?

Wouldn't they be seeking a romantic relationship otherwise?

Using AI to fulfill a need implies a need which usually results in action towards that need. Even "the dating scene is terrible" is human interaction.


> Even "the dating scene is terrible" is human interaction.

For some subset of people, this isn't true. Some people don't end up going on a single date or get a single match. And even for those who get a non-zero number there, that number might still be hovering around 1-2 matches a year and no actual dates.


Are we talking people trying to date or "trying to date"?

I am not even talking dates BTW but the pre-cursors to dates.

If you bring up Tinder etc then I would point out that AI has been doing bad things for quite a while obviously.


> Are we talking people trying to date or "trying to date"?

The former. The latter I find is naught more than a buzz word used to shut down people who complain about a very real problem.

> If you bring up Tinder etc then I would point out that AI has been doing bad things for quite a while obviously.

Clearly. But we've also been cornered into Tinder and other dating apps being one of very few social arenas where you can reasonably expect dating to actually happen.[1] There's also friend circles and other similar close social circles, but once you've exhausted those options, assuming no other possibilities reveal themselves, what else is there? There's uni or college, but if you're past that time of your life, tough shit I guess. There's work, but people tend to have the sense not to let their love life and their work mix. You could hook up after someone changes jobs, but that's not something that happens every day.

[1] https://www.pnas.org/doi/full/10.1073/pnas.1908630116


Swiping on thousands of people without getting a single date is not human interaction and that's the reality for some people.

I still don't think an AI partner is a good solution, but you are seriously underestimating how bad the status quo is.


> Swiping on thousands of people without getting a single date is not human interaction and that's the reality for some people.

For some people, yes, but 99% of those people are men. The whole "women with AI boyfriends" thing is an entirely different issue.


If you have 100 men to 100 women on an imaginary tinder platform and most of the men get rejected by all 100 women it's easy to see where the problem would arise for women too.

In real dating apps, the ratio is never 1:1, there's always way more men.

The "problem" will arise anyway, of course, but as I said, it's a different problem - the women aren't struggling to find dates, they're just choosing not to date the men they find. Even classifying it as a "problem" is arguable.


> the ratio is never 1:1, there's always way more men.

Isn't it weird? There should be approximately equal numbers of unmarried men and women, so there must be some reason why there are fewer women on dating platforms. Is it because women work more and have less free time? Or because men are so bad? Or because they have an AI boyfriend? Or do married men using dating apps shift the ratio?


Obviously men are people and therefore can vary, but a lot of them rely on women to be their sole source of emotional connection. Women tend to have more and closer friends and just aren't as lonely or desperate.

A lot of dudes are pretty awful to women in general, and dating apps are full of that sort. Add in the risks of meeting strange men, and it's not hard to see why a lot of women go "eh" and hang out with friends instead.


What else do you expect them to do if none of the choices are worthwhile?

Expectations and reality will differ. Ultimately we will have soft eugenics. This is a good thing in the long run, especially with how crowded the global south is.

Nature always finds a way, and it's telling you not to pass your genetics on. It seems cruel, but it is efficient and very elegant. Now we just need to find an incentive structure to encourage the intelligent to procreate.


Maybe lower their standards to the point that they can be satisfied by a real person, not a text completion algorithm that literally worships the ground they walk on and outputs some of the cheesiest, cringiest text I've ever read.

>Maybe lower their standards to the point that they can be satisfied by a real person, not a text completion algorithm that literally worships the ground they walk on and outputs some of the cheesiest, cringiest text I've ever read.

The vast majority of women are not replacing dating with chatbots, not even close. If you want women to stop being picky, you would have to reduce the "demand" in the market, stop men from being so damn desperate for any pair of legs in a skirt.

They are suffering through the exact same dating apps, suffering through their own problems. Try talking to one some time about how much it sucks.

Remember, the apps are not your friend, and not optimized to get you a date or a relationship. They are optimized to make you spend money.

The apps want you to feel hopeless, like there is no other way than the apps, and like only the apps can help you, which is why you should pay for their "features" which are purposely designed to screw you over. The Match company purposely withholds matches from you that are high quality and promising. They own nearly the entire market.


Making a lot of assumptions there, my dude.

Despite the name, the subreddit community has both men and women and both ai boyfriends and ai girlfriends.

I looked through a bunch of posts on the front page (and almost died from cringe in the process) and basically every one of them was a woman with an AI "boyfriend".

Interesting. I guess it's changed a lot since I looked at it last time. I remember it being about 50/50.

We do see - from 'crazy cat lady' to 'incel', from 'where have all the good men gone' to the rapid decline in the number of 25-year-olds who have had sexual experiences, not to mention the 'loneliness epidemic' that has several governments, especially in Europe, alarmed enough to make it an agenda point: No, they would not. Not all of them. Not even a majority.

AI in these cases is just a better 'litter of 50 cats', a better, less-destructive, less-suffering-creating fantasy.


Not all human interaction is a net positive in the end.

In this framing “any” human interaction is good interaction.

This is true if the alternative to “any interaction” is “no interaction”. Bots alter this, and provide “good interaction”.

In this light, the case for relationship bots is quite strong.


Why would that be the alternative?

These are only problems if you assume the person later wants to come back to having human relationships. If you assume AI relationships are the new normal and the future looks kinda like The Matrix, with each person having their own constructed version of reality while their life-force is bled dry by some superintelligent machine, then it is all working as designed.

Human relationships are part of most families, most work, etc. Could get tedious constantly dealing with people who lack any resilience or understanding of other perspectives.

The point is you wouldn't deal with people. Every interaction becomes a transaction mediated by an AI that's designed to make you happy. You would never genuinely come in contact with other perspectives; everything would be filtered and altered to fit your preconceptions.

It's like all those dystopias where you live in a simulation but your real body is wasting away in a vat or pod or cryochamber.


Someone has to make the babies!

don't worry, "how is babby formed" is surely in every llm training set

“how girl get pragnent”

It could be the case that society is responding to overpopulation in many strange ways that serve to reduce/reverse the growth of a stressed population.

Perhaps not making as many babies is the longterm solution.


Wait, how did this work in The Matrix exactly?

Artificial wombs – we're on it.

When this gets figured out, all hell will break loose the likes of which we have not seen

Decanting jars, a la Brave New World!

ugh. speak of the devil and he shall appear.

I don’t know. This reminds me of how people talked about violent video games 15 years back. Do FPS games desensitize and predispose gamers to violence, or are they an outlet?

I think for essentially all gamers, games are games and the real world is the real world. Behavior in one realm doesn’t just inherently transfer to the other.


Unless someone is harming themselves or others, who are we to judge?

We don't know that this is harmful. Those participating in it seem happier.

If we learn in the course of time (a decade?) that this degrades lives with some probability, we can begin to caution or intervene. But how in God's name would we even know that now?

I would posit this likely has measurable good outcomes right now. These people self-report as happier. Why don't we trust them? What signs are they showing otherwise?

People were crying about dialup internet being bad for kids when it provided a social and intellectual outlet for me. It seems to be a pattern as old as time for people to be skeptical about new ways for people to spend their time. Especially if it is deemed "antisocial" or against "norms".

There is obviously a big negative externality with things like social media or certain forms of pay-to-play gaming, where there are strong financial interests to create habits and get people angry or willing to open their wallets. But I don't see that here, at least not yet. If the companies start saying, "subscribe or your boyfriend dies", then we have cause for alarm. A lot of these bots seem to be open source, which is actually pretty intriguing.


It seems we're not quite there, yes. But you should have seen the despair when GPT 5 was rolled out to replace GPT 4.

These people were miserable. Complaining about a complete personality change of their "partner", the desperation in their words seemed genuine.


Words can never be a substitute for sentience; they are separate processes.

Words are simulacra. They're models, not games; we do not use them as games in conversation.

> The sycophantic and unchallenging behaviours of chatbots leave a person unconditioned for human interactions

I saw a take that the AI chatbots have basically given us all the experience of being a billionaire: being coddled by sycophants, but without the billions to protect us from the consequences of the behaviors that encourages.


This. If you never train stick, you can never drive stick, just automatic. And if you never let a real person break your heart or otherwise disappoint you, you'll never be ready for real people.

AI friends need a "Disasters" menu like SimCity.

One of the first things many Sims players do is make a virtual version of their real boyfriend/girlfriend to torture and perform experiments on.


Ah, 'suffering builds character'. I haven't had that one in a while.

Maybe we should not want to get prepared for RealPeople™ if all they can do is break us and disappoint us.

"But RealPeople™ can also elevate, surprise, and enchant you!" you may intervene. They sure than. An still, some may decide no longer to go for new rounds of Russian roulette. Someone like that is not a lesser person, they still have real™ enjoyment in a hundred other aspects in their life from music to being a food nerd. they just don't make their happiness dependant on volatile actors.

AI chatbots as relationship replacements are, in many ways, flight simulators:

Are they 'the real thing'? Nah, sitting in a real Cessna almost always beats a computer screen and a keyboard.

Are they always a worse situation than 'the real thing'? Simulators sure beat reality when reality is 'dual engine flameout halfway over the North Pacific'

Are they cheaper? YES, significantly!

Are they 'good enough'? For many, they are.

Are they 'sycophantic'? Yes, insofar as the circumstances are decided beforehand. A 'real' pilot doesn't get to choose 'blue skies, little sheep clouds in the sky'; they only get to choose not to fly that day. And the standard weather settings? Not exactly 'hurricane, category 5'.

Are they available, while real flight is not, to some or all members of the public? Generally yes. The simulator doesn't require you to hold a current medical.

Are they removing pilots/humans from 'the scene'? No, not really. In fact, many pilots fly simulators for risk-free training of extreme situations.

Your argument is basically 'A flight simulator won't teach you what it feels like when the engine coughs for real at 1000 ft above ground and your hands shake on the yoke.' No, it doesn't. And frankly, there are experiences you can live without - especially those you may not survive (emotionally).

Society has always had the tendency to pathologize those who do not pursue a sexual relationship as lesser humans. (Especially) single women who were too happy in the medieval age? Witches that needed burning. Guy who preferred reading to dancing? A 'weirdo and a creep'. English has 'master' for the unmarried, 'incomplete' man, and 'mister' for the one who got married. And today? Those who are incapable or unwilling to participate in the dating scene are branded 'girlfailure' or 'incel' - with the latter group considered a walking security risk. Let's not add to the stigma by playing another tune for the 'oh, everyone must get out there' scene.


One difference between "AI chatbots" in this context and common flight simulator games is that someone else is listening in and has the actual control over the simulation. You're not alone in the same way that you are when pining over a character in a television series or books, or crashing a virtual jumbo jet into a skyscraper in MICROS~1 Flight Simulator.

You are aware that you can, in fact, run models on your own, fully airgapped machine, right? Ollama exists.

The fact that most people choose not to is no argument for 'mandatory' surveillance, just a laissez-faire attitude towards it.
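A minimal sketch of what that looks like in practice, hitting Ollama's default local HTTP endpoint (the model name is just an example and has to be pulled first with 'ollama pull'):

    # pip install requests; assumes Ollama is running locally with a pulled model
    import requests

    resp = requests.post(
        "http://localhost:11434/api/chat",  # Ollama's default local endpoint
        json={
            "model": "llama3",  # example model name
            "messages": [{"role": "user", "content": "Push back on my last idea."}],
            "stream": False,  # return a single JSON object instead of a stream
        },
    )
    print(resp.json()["message"]["content"])

Nothing leaves the machine; the tradeoff is that you're limited to whatever models your hardware can run.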


Yes. I have never connected to any of the SaaS-models and only use Nx/Bumblebee and sometimes Ollama.

In this context it's not about people like me.


Good for you!

Now ... why you want to police the decisions others make (or choose not to make) with their data ... it has a slightly paternalistic aspect to it, wouldn't you agree?


This is the exact kind of thinking that leads to this in the first place. The idea that a human relationship is, in the end, just about what YOU can get from it. That it's just simply a black box with an input and output, and if it can provide the right outputs for your needs, then it's sufficient. This materialistic thinking of other people is a fundamentally catastrophic worldview.

A meaningful relationship necessarily requires some element of giving, not just getting. The meaning comes from the exchange between two people, the feedback loop of give and take that leads to trust.

Not everyone needs a romantic relationship, but to think a chatbot could ever fulfill even 1% of the very fundamental human need of close relationships is dangerous thinking. At best, a chatbot can be a therapist or a sex toy. A one-way provider of some service, but never a relationship. If that's what is needed, then fine, but anything else is a slippery slope to self destruction.


> This is the exact kind of thinking that leads to this in the first place. The idea that a human relationship is, in the end, just about what YOU can get from it. That it's just simply a black box with an input and output, and if it can provide the right outputs for your needs, then it's sufficient. This materialistic thinking of other people is a fundamentally catastrophic worldview.

> A meaningful relationship necessarily requires some element of giving, not just getting. The meaning comes from the exchange between two people, the feedback loop of give and take that leads to trust.

This part seems all over the place. Firstly, why would an individual do something he/she has no expectation to benefit from or control in any way? Why would he/she cast away his/her agency for unpredictable outcomes and exposure to unnecessary and unconstrained risk?

Secondly, for exchange to occur there must be a measure of inputs, outputs, and an assessment of their relative values. Any less effort or thought amounts to an unnecessary gamble. Both the giver and the intended beneficiary can only speak for their respective interests. They have no immediate knowledge of the other person's desires, and few individuals ever make their expectations clear and simple to account for.

> Not everyone needs a romantic relationship, but to think a chatbot could ever fulfill even 1% of the very fundamental human need of close relationships is dangerous thinking. At best, a chatbot can be a therapist or a sex toy. A one-way provider of some service, but never a relationship. If that's what is needed, then fine, but anything else is a slippery slope to self destruction.

A relationship is an expectation. And like all expectations, it is a conception of the mind. People can be in a relationship with anything, even figments of their imaginations, so long as they believe it and no contrary evidence arises to disprove it.


> This part seems all over the place. Firstly, why would an individual do something he/she has no expectation to benefit from or control in any way? Why would he/she cast away his/her agency for unpredictable outcomes and exposure to unnecessary and unconstrained risk?

It happens all the time. People sacrifice anything, everything, for no gain, all the time. It's called love. When you give everything for your family, your loved ones, your beliefs. It's what makes us human rather than calculating machines.


You can easily argue that the warm, fuzzy dopamine push you call 'love', triggered by positive interactions, is basically a "profit". Not all generated value is expressed in dollars.

"But love can be spontaneous and unconditional!" Yes, bodies are strange things. Aneuryisms also can be spontaneous, but are not considered intrinsically altruistic functionality to benefit humanity as a whole by removing an unfit specimen from the gene pool.

"Unconditional love" is not a rational design. It's an emergent neural malfunction: a reward loop that continues to fire even when the cost/benefit analysis no longer makes sense. In psychiatry, extreme versions are classified (codependency, traumatic bonding, obsessional love); the milder versions get romanticised - because the dopamine feels meaningful, not because the outcomes are consistently good.

Remember: one of the significant narratives our culture has about love - Romeo and Juliet - involves a double suicide due to heartbreak and 'unconditional love'. But we focus on the balcony, and conveniently forget about the crypt.

You call it "love" when dopamine rewards self-selected sacrifices. A casino calls it "winning" when someone happens to hit the right slot machine. Both experiences feel profound, both rely on chance, and pursuing both can ruin you. Playing Tetris is just as blinking, attention-grabbing and loud as a slot machine, but much safer, with similar dopamine outcomes as compared to playing slot machines.

So ... why would a rational actor invest significant resources to hunt for a maybe dopamine hit called love when they can have a guaranteed 'companionship-simulation' dopamine hit immediately?


Yes, great comment.

What do you think of the idea that people generally don't really like other people - that they do generally disappoint and cause suffering. (We are all imperfect, imperfectly getting along together, daily initiating and supporting acts of aggression against others.) And that, if the FakePeople™ experience were good enough, probably most people would opt out of engaging with others, similar to how most pilot experiences are on simulators?


Ultimately, that's the old Star Trek 'the holodeck would - in a realistic scenario - be the last invention of a civilization' argument.

I think that there will always be several strata of the population who will not be satisfied with FakePeople™, either because they are unable to interact with the system effectively due to cognitive or educational deficiencies, or because they believe that RealPeople™ somehow have a hidden, non-measurable capacity (let's call it, for lack of a better term, a 'soul') that cannot be replicated or simulated - which makes it, ultimately, a theological question.

There is probably a tipping point at which the number of RealPeople™ enthusiasts is so low reasonable relationship matching is no longer possible.

But I don't really think the problem is 'RealPeople™ are generally horrible'. I believe that the problem is availability and cost of relationship - in energy, time, money, and effort:

Most pilot experiences are on simulators because RealFlight is expensive, and the vast majority of pilots don't have access to an aircraft (instead sharing one), which also limits potential flight hours (because when the weather is good, everyone wants to fly. No-one wants the plane up in bad conditions, because it's dangerous to the plane, and - less important for the ownership group - the pilot.)

Similarly: Relationship-building takes planning effort, carries significant opportunity cost and monetary resources, and has a low probability of the desired outcome (whatever that may be - it's just as true for the 'long-term, potentially married' relationship as it is for the one-night stand). That's incompatible with what society expects from a professional these days (e.g. work 8-16 hours a day, keep physically fit, save for old age and/or a potential health crisis, invest in your professional education, the list goes on).

Enter the AI model, which gives a pretty good simulation of a relationship for the cost of a monthly subway card, carries very little opportunity cost (simulation will stop for you at any time if something more important comes up), and needs no planning at all.

Risk of heartbreak (aka: potentially catastrophic psychiatric crisis - yes, such cases are common) and hell being other people doesn't even have to factor in to make the relationship simulator appear like a good deal.

If people think 'relationship chatbots' are an issue, just you wait for when - not if - someone builds a reasonably-well-working 'chatbot in a silicone-skin body' that's more than just a glorified sex doll. A physically existing, touchable, cooking, homemaking, reasonably funny, randomly-sensual, and yes, sex-simulation-capable 'Joi' (and/or her male-looking counterpart) is probably the last invention of mankind.


Soul, yes.

You may be right, that RealPeople do seek RealInteraction.

But, how many of each RealPerson's RealInteractions are actually that - it seems to me that lots of my own historical interactions were/are RealPersonProjections. RealPersonProjections and FakePerson interactions are pretty indistinguishable from within - over time, the characterisation of an interaction can change.

But, then again, perhaps the FakePerson interactions (with AI), will be a better developmental training ground than RealPersonProjections.

Ah - I'll leave it here - it's already too meta! Thanks for the exchange.


Disturbing and sad.

> Maybe we should not want to get prepared for RealPeople™ if all they can do is break us and disappoint us.

Good thing that "if" is clearly untrue.

> AI chatbots as relationship replacements are, in many ways, flight simulators:

If only! It's probably closer to playing star fox than a flight sim.


> Good thing that "if" is clearly untrue.

YMMV

> If only! It's probably closer to playing star fox than a flight sim.

But it's getting better, every day. I'd say we're in 'MS Flight Simulator 4.0' territory right now.


Love your thoughts about needing input from others! In Autistic / ADHD circles, the lack of input from other people, and the feedback of thoughts being amplified by oneself is called rumination. It can happen for many multiple ways-- lack of social discussion, drugs, etc. AI psychosis is just rumination, but the bot expands and validates your own ideas, making them appear to be validated by others. For vulnerable people, AI can be incredibly useful, but also dangerous. It requires individuals to deliberately self-regulate, pause, and break the cycle of rumination.

> In Autistic / ADHD circles

i.e. HN comments


Nah, most circles of neurodivergent people I've been around have humility and are aware of their own fallibility.

Is this clearly AI-generated comment part of the joke?

The comment seems less clearly-written (e.g., "It can happen for many multiple ways") than how a chatbot would phrase it.

Good call. I stand corrected: this is a human written comment masquerading as AI, enough so that I fell for it at my initial quick glance.

Excellent satire!


That just means they used a smaller and less focused model.

It doesn't. Name a model that writes like that by default.

We’re all just in a big LLM-generated self-licking-lollipop content farm. There aren’t any actual humans left here at all. For all you know, I’m not even human. Maybe you’re not either.

... and with this, you named the entire retention model of the whole AI industry. Kudos!

I share your concerns about the risks of over-reliance on AI companions—here are three key points that resonate deeply with me:

• Firstly, these systems tend to exhibit excessively agreeable patterns, which can hinder the development of resilience in navigating authentic human conflict and growth.

• Secondly, true relational depth requires mutual independent agency and lived experience that current models simply cannot provide autonomously.

• Thirdly, while convenience is tempting, substituting genuine reciprocity with perfectly tailored responses may signal deeper unmet needs worth examining thoughtfully. Let’s all strive to prioritize real human bonds—after all, that’s what makes life meaningfully complex and rewarding!



