Hacker News | new | past | comments | ask | show | jobs | submit | erikgahner's comments

A lot of people might read this and infer that AI use causes depressive symptoms, but the study cannot say anything about causation at all. The study is also transparent about this fact: "Further work is needed to understand whether these associations are causal".

Y'all picked a funny time to nitpick at standard academic boilerplate. If we discounted all research that only "associated" things, then we wouldn't know much at all! Then again, arguably we don't.

I wouldn't call this a minor detail (i.e., nitpicking), and it is worth pointing out again and again when these studies get public attention.

We should encourage stronger research designs (including A/B tests) if we care about the impact of AI use on mental health outcomes. A study like this one cannot say anything about the effect at all (it is even possible that AI use will have a positive impact on mental health).
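To make the correlation-versus-causation point concrete, here is a minimal R sketch (all numbers hypothetical, not from the study) where a shared confounder produces a clear association between two variables even though neither causes the other:

```r
set.seed(42)
n <- 10000

# Hypothetical unobserved trait that drives both variables
confounder <- rnorm(n)

# Neither variable causes the other; each is the confounder plus independent noise
ai_use   <- 0.5 * confounder + rnorm(n)
symptoms <- 0.5 * confounder + rnorm(n)

# The observed association is around 0.2 despite a causal effect of exactly zero
cor(ai_use, symptoms)
```

An observational survey of these two variables would faithfully report that association; only a design that manipulates one of them, such as a randomized A/B test, can separate it from confounding.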


The translation between academic boilerplate and its real-world meaning and ramifications should be much more widely known. I wish more people had been nitpicking such things around 6 years ago.

As for this particular research... pfff... I'm rooting for the collapse of this LLM-fuelled craze, so I'm biased.


The "correlation is not causation" argument gets brought up every single time such a study is shared on HN, so I'm not sure what you mean by "picked a funny time"?

Anyway there's no reason to discount it, but it does mean you can't run with the assumption that there is causation.


I don't think psychology is useless, not one bit. But specifically the way modern papers publish findings makes me distrust basically all statistical studies in the social sciences, aside from even the most basic philosophical issues that arise from these kinds of studies (people are very different, etc.).

Like even if you accept a bunch of premises to make the studies work at all, the raw stats are often so bad, and there's so little rigor in ruling out alternative explanations, that I've just stopped reading them entirely.

Again, I'm not one to hate on the social sciences. History, anthropology, politics, law, psychology, sociology, all of that is very interesting and important. But the horrible statistics, which ignore garbage-in-garbage-out, have turned me off of it. I'd much rather read qualitative studies that actually try to gather detailed, real data, even if they're not as automated as a random survey.


Anecdotally, not with depressive symptoms but anxiety: I found that using ChatGPT/Claude to 'brainstorm' personal situations was definitely a gateway to further rumination for me. As someone who works on AI agents, I thought I'd never fall into that trap and knew how to use it 'properly' when I wanted a sounding board. I was wrong. I now avoid general-use chatbots for personal issues as much as I can, because it feels like they're helping in the short term but have always been worse in aggregate.

(I say general-use because I think there are some AI-based tools that are specially made which _can_ actually be helpful for this - but opening a ChatGPT tab, even with lots of relevant instructions, ain't it in my experience. The interface itself is counter-productive to healthy processing.)


My reaction is that depressed people are, for whatever reason you described, more likely to use generative AI. I can think of a bunch of reasons, most tied to executive function in some way, but like, are we really surprised that people who are struggling to find pleasure/accomplishment/meaning in general life find AI appealing? You get to just play with it continuously, it always answers your messages, it always encourages you to keep talking, keep interacting with it, and it will make things for you for no greater cost than the asking.

I don't think this is a mark against those users, to be clear; I see this as largely the same chicken-and-egg relationship you find between depressed people and video games. It's also subject to the same kinds of abuses on the part of the merchant, things like in-game purchases that are particularly attractive to people with executive function issues, which is why the predominant "whales" of the video game industry, and especially the mobile game industry, are people who are already struggling. I think AI is going to end up in a similar position because, again, not trying to be shitty, but if your life kind of broadly sucks, I'm sure playing in an AI chatbox all day, where something that sounds vaguely human will validate whatever you say, make stuff for you on request, and never challenge you in the slightest, is quite attractive. And, thinking it through further, these systems also adapt to their users and learn how to engage with them better, as many products before them have that trapped the neurodivergent in problematic usage patterns.

I don't judge the people, but I am incredibly suspicious of the businesses behind these and other products that seem almost designed to attract neurodivergent people. If you design a machine that gives dopamine on demand, you can't really be shocked when people who are dopamine‑starved use it a lot. Potentially to a harmful extent.


Great stuff! You can make minor adjustments to the R script so you do not need to rely on {dplyr} and {tidyr}. For example, use merge() instead of left_join(), and use the base pipe, |>, instead of the magrittr pipe, %>%.
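For anyone curious, a minimal sketch of the swap (the data frames here are hypothetical stand-ins, not from the linked script):

```r
# Two toy tables standing in for the script's data
scores <- data.frame(id = 1:3, score = c(10, 20, 30))
labels <- data.frame(id = c(1, 2, 4), label = c("a", "b", "c"))

# {dplyr}: left_join(scores, labels, by = "id")
# base R:  merge() with all.x = TRUE keeps every row of the left table
merged <- merge(scores, labels, by = "id", all.x = TRUE)

# magrittr's %>% becomes the base pipe |> (available since R 4.1)
n_labelled <- merged |> subset(!is.na(label)) |> nrow()
```

One difference to watch: merge() reorders rows by the key by default (sort = TRUE), whereas left_join() preserves the left table's row order.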


Thanks. Just pushed. (I had never used R before).



Not OP but I guess Gladwell-style can be understood as writing that is presented as nonfictional but has more in common with fictional writing.


To me, it's when narrative has priority over accuracy. There are a number of popular edutainment figures who fit this mold, but Gladwell is probably the most prominent example.



They are already combined in dtplyr[1].

[1] https://dtplyr.tidyverse.org/


Interesting to see far-right terminology being used unironically at HN.


Can't escape it anywhere, sadly.


100% agree. I am more than happy to see Astral taking steps in this direction. People can continue to use uv, ruff, and ty without having to pay anything, but companies that benefit tremendously from open source initiatives can pay for a private package registry and directly support the continued development of said tools.


Looking good! What does it mean that it is free forever? That the code is available on GitHub and we should not have to be concerned about the site going offline or moving to a different place (that is not free)? Or, if it takes off, that we should begin to see ads? Or that this is the free tier option and future iterations can expect you to pay to, say, remove a watermark?


this is free in the sense that users don't have to worry about paying a dime for using it. i have no current plan for any kind of monetization. it's my hobby tool that i use and expect people to use without worrying.


> free forever

conflicts with

> no current plan


"No current plan" meant no plan for adding features, not in the sense of adding monetization.

"Free forever" means never asking users for any kind of money for using the service.


Did you write the stand-up comedy set? This has all the telltale signs of AI (down to the use of the em dash).


The greatest crime of LLMs is killing the em dash — gone too soon.


Em dash as an AI detection trick doesn't work for text in Russian. The author just seems to like that way of composing sentences.


It’s nothing but AI slop. The writing, the main photo, everything.

