You seem very comfortable making unfounded claims. I don't think this is very constructive or adds much to the discussion. While we can debate the stylistic choices of the previous commenter, you seem to be discounting the rate at which the writing style of various LLMs has backpropagated into many people's brains.
I can sympathize with being mistakenly accused of using LLM output, but as a reader, the above format of "It's not x, it's y" repeated multiple times for artificial dramatic emphasis, to make a pretty mundane point that could be made in a third of the length, grates on me like reading LinkedIn or marketing voice whether it's AI or not (and it's almost always AI anyway).
I've seen fairly niche subreddits go from enjoyable and interesting to ruined, clogged with LLM spam that sounds exactly like this, so my tolerance for reading it is incredibly low, especially on HN, and I'll just dismiss it.
I probably lose the occasional legitimate original observation now and then, but in a world where our attention is being hijacked by zero-effort spam everywhere you look, I just don't have the time or energy to avoid that heuristic.
You're also discounting the fact that people actually do talk like that. In fact, these days I have to modify my prose to be intentionally less LLM-like lest the reader think it's LLM output.
ITT: people who believe that those in jobs requiring big educational backgrounds might have incentives to say things not 100% backed by evidence, because saying otherwise would impact their jobs.
A careless comment from the Fed Chair could quite literally destroy the economy. He’s constrained in what he can say - you almost never get to hear what Powell is really thinking.
This is the most profoundly idiotic sentiment to ever come out of the AI discourse. Not only has it been sung for so long that it's been proven wrong multiple times now, it also sets up the strawman of the century, because NO ONE working in a frontier AI lab is pushing the narrative that "we don't need anything to change to reach AGI". You can stick your head in the sand all you want; progress is happening and it's not slowing down as of yet. Your knowledge of AI being limited to talking to chatbots doesn't change a thing.
Take this attitude somewhere else, this isn't Reddit.
To set the record straight:
- "CoT reasoning- stolen from Chinese AI labs" I should really hope this point doesn't need correcting. Accusing anyone from stealing of stealing from "Chinese AI labs" is laughable at this point.
- "Codex is a ripoff of Claude Code" Claude Code wasn't the first CLI agent and you could just as easily "accuse" Anthropic of stealing the idea of chatting with an LLM from
OpenAI.
- "Sora is a low quality clone of Google’s Veo3." Do you realize video models existed BEFORE you were born, which was apparently yesterday?
- "another Perplexity ripoff." Wait until you hear how Perplexity came to be.
We know for a fact that current LLMs are massively inefficient; this is not a new thing. But every optimization you make will allow you to run more inference on the same hardware. That doesn't make the hardware meaningless, any more than more efficient cars made roads obsolete.
Current systems are already tremendously useful in the medical field. And I'm not talking about your doctor asking ChatGPT random shit; I'm talking about radiology results processing, patient monitoring, monitoring of medication studies... the list goes on. Not to mention the many research advances already made using automated systems, weather forecasting for example.
I'm getting real "put everything on the blockchain" vibes from answers like this. I remember when folks were telling me that hospitals were going to put patient records on the blockchain. As for radiology, it doesn't seem this use of AI is as much of a "slam dunk" as it first appeared[1][2]. We'll see, I guess.
Right now I kind of land on the side of "Where is all the shovelware?". If AI is such a huge productivity boost for developers, where is all the software those developers are supposedly writing[3]? But this is just a microcosm of a bigger question. Almost all the economic growth since the AI boom started has been in AI companies. If AI is revolutionizing multiple fields, why aren't the relevant companies in those fields also growing at above-expected rates? Where's all this productivity that AI is supposedly unlocking?
I personally think the US benefited from recent events in a similar way to how a body benefits from a fever. So yes, even though it might not feel like it at times.
Bodies do not "benefit" from fever. A fever is a signal that pathogens have recently entered the body and that the body is desperately at work trying to kick them out again. If it fails, you die. The fever is a direct mirror of the inflammation caused by that fight.
So, yes, the current administration certainly caused a fever. And the only thing the US benefits from are the antibodies fighting that pathogen.