Hacker News | sindriava's comments

You seem very comfortable making unfounded claims. I don't think this is very constructive or adds much to the discussion. While we can debate the stylistic changes of the previous commenter, you seem to be discounting the rate at which the writing style of various LLMs has backpropagated into many peoples' brains.


I can sympathize with being mistakenly accused of using LLM output, but as a reader, the above format of "It's not x - it's y" repeated multiple times for artificial dramatic emphasis, to make a pretty mundane point that could use 1/3 the length, grates on me like reading LinkedIn or marketing voice, whether it's AI or not (and it's almost always AI anyway).

I've seen fairly niche subreddits go from enjoyable and interesting to ruined by being clogged with LLM spam that sounds exactly like this so my tolerance for reading it is incredibly low, especially on HN, and I'll just dismiss it.

I probably lose the occasional legitimate original observation now and then, but in a world where our attention is being hijacked by zero-effort spam everywhere you look, I just don't have the time or energy to drop that heuristic.


This also discounts the fact that people actually do talk like that. In fact, these days I have to modify my prose to be intentionally less LLM-like lest the reader think it's LLM output.


ITT: Keyboard warriors with no background in economics deciding if multinational corporate deals are a bubble or not


ITT: people believe that those in jobs with big educational backgrounds might have incentives to say something not 100% backed by evidence because it would impact their job.


A careless comment from the Fed Chair could quite literally destroy the economy. He’s constrained in what he can say - you almost never get to hear what Powell is really thinking.


smh, keyboard warriors with no background in tailoring deciding whether the emperor has clothes or not




People also think that two companies doing business with each other is somehow a "circular dependency".


> LLMs have already hit their limits

This is the most profoundly idiotic sentiment to ever come out of the AI discourse. Not only has it been sung for so long that it's been proven wrong multiple times now, it also pulls the strawman of the century, because NO ONE working in frontier AI labs is pushing the narrative that "we don't need anything to change to reach AGI". You can stick your head in the sand all you want; progress is happening and it's not slowing down as of yet. Your knowledge of AI being limited to talking to chatbots doesn't change a thing.


https://manuelmoreale.com/thoughts/look-another-ai-browser is not what the vast majority of people on HN say



"This is the most profoundly idiotic sentiment to ever come out of the AI discourse. "

Man, the hubris of folks like you - it's going to be epic when things inevitably fall apart at the seams.


Take this attitude somewhere else, this isn't Reddit.

To set the record straight:

- "CoT reasoning - stolen from Chinese AI labs" I should really hope this point doesn't need correcting. Accusing anyone of stealing from "Chinese AI labs" is laughable at this point.

- "Codex is a ripoff of Claude Code" Claude Code wasn't the first CLI agent and you could just as easily "accuse" Anthropic of stealing the idea of chatting with an LLM from OpenAI.

- "Sora is a low quality clone of Google’s Veo3." Do you realize video models existed BEFORE you were born, which was apparently yesterday?

- "another Perplexity ripoff." Wait until you hear how Perplexity came to be.


We know for a fact that current LLMs are massively inefficient; this is not a new thing. But every optimization you make will allow you to run more inference on this hardware. There's no reason for it to make the hardware meaningless, any more than more efficient cars obsoleted roads.


> But every optimization you make will allow you to run more inference with this hardware

Unless the optimization relies in part on a different hardware architecture, and is no more efficient than current techniques on existing hardware.

> there's not a reason for it to make it meaningless any more than more efficient cars didn't obsolete roads

Rail cars are pretty darned efficient, but they don’t really work on roads made for the other kind.


Current systems are already tremendously useful in the medical field. And I'm not talking about your doctor asking ChatGPT random shit, I'm saying radiology results processing, patient monitoring, monitoring of medication studies... The list goes on. Not to mention many of the research advances done using automated systems already, for example for weather forecasting.


I'm getting real "put everything on the blockchain" vibes from answers like this. I remember when folks were telling me that hospitals were going to put patient records on the blockchain. As for radiology, it doesn't seem this use of AI is as much of a "slam dunk" as it first appeared[1][2]. We'll see, I guess.

Right now I kind of land on the side of "Where is all the shovelware?". If AI is such a huge productivity boost for developers, where is all the software those developers are supposedly writing[3]? But this is just a microcosm of a bigger question. Almost all the economic growth since the AI boom started has been in AI companies. If AI is revolutionizing multiple fields, why aren't relevant companies in those fields also growing at above-expected rates? Where's all this productivity that AI is supposedly unlocking?

[1] https://hms.harvard.edu/news/does-ai-help-or-hurt-human-radi...

[2] https://www.ajronline.org/doi/10.2214/AJR.24.31493

[3] https://mikelovesrobots.substack.com/p/wheres-the-shovelware...


Ok, but I am asking for uses for LLMs specifically.

Of course I agree ML has already helped in many other areas and has a bright future. But the thing everyone is talking about here is LLMs.


Demis Hassabis seems to think this, and not only does he not focus only on LLMs, he got a Nobel Prize for a non-LLM system ;)


As far as I know, that Nobel Prize was for being the project manager...


If you talk to any of his early investors, they considered him absolutely crucial to the project.


They say the same about Sam Altman....


While I understand you wrote "free time", this isn't Reddit. Keep the snarkiness to a minimum.


This website is worse than Reddit. Pretentious AI bros everywhere. It even has the Reddit color.


I personally think the US benefited from recent events in a similar way a body benefits from a fever. So yes, even though it might not feel like it at times.


In the sense that a lot of fevers are deadly.

Bodies do not "benefit" from fever. A fever is a signal that pathogens have recently entered the body, and the body is desperately at work trying to kick them out again. If it fails, you die. The fever is a direct mirror of the inflammation caused by that fight.

So, yes, the current administration certainly caused a fever. And the only thing the US benefits from are the antibodies fighting that pathogen.


I appreciate you're trying to give well meaning advice, but do you think what you wrote could be perceived as very condescending?

