Hacker News | agubelu's comments

That would be as close to a declaration of war as you can get without firing a bullet.

The immediate and obvious response would be for the foreign branches of those companies to be declared "of national interest", nationalized and forced to keep operating.


>"nationalized and forced to keep operating."

Assuming there is no kill switch that would render the whole infrastructure, hardware included, inoperable.


I'd imagine the government would be in talks with the highest-ranking local Amazon employees long before that, but I can't imagine a country trusting the hardware or wanting to manage the jank.


It's called us-east-1?


AWS China is a completely separate partition under separate Chinese management, with no dependencies on us-east-1. It also greatly lags in feature deployments as a result.


We saw the damage CrowdStrike caused in a few hours.


> * Everyone overestimates their driving ability vs the average

Exactly. If there's an issue, I'd much rather delegate control to the pilots or train drivers who have been trained to deal with such issues.


I'll take a trip by train or plane rather than by car every single time.

I feel WAY safer knowing that the vehicle is operated by trained professionals and that there's an extremely robust system around them to ensure safety, rather than trusting whatever semblance of control I think I have driving my car.


So you don't do anything because you have a job you need to keep and a kid to take care of, but you're perfectly okay with moving to a completely different country on short notice?


The US, for better or worse, isn't a cohesive country of people interested in a collective, but a smash-and-grab of economic gains sourced from those who are forced to live in it and cannot flee to developed countries. You come to it, or stay in it, to make more income than you would in developed countries, to the detriment of everyone else.

Whether you believe the economic human factory farm that is the US is worth saving or preserving will be a function of your lived experience and mental model. "What are you optimizing for?"


Calling the USA an "economic human factory farm" is the best thing I've heard all year.

Yeah, we have some perks here. But they're not as rare as our propaganda would have us believe, and we sure do pay for them in various ways.


Yes because one of those can get my face smashed in by a baton. Moving is a far safer option for my family.

Call it selfish if you want (hell, I'd even agree with you), but my priority is my family and my life. This idea that I have to care about "my country" is patriotic BS pounded into us to make us more likely to join the army.


Just curious, do you have dual citizenship? If not, what exactly is your plan to acquire legal resident status quickly, and where?


I'd say a couple of reasons:

- The American political system has been very successful in telling its people that the only acceptable way to show discontent and enact change is by voting in elections.

- Lots of people are okay with it because it can only happen to the "bad guys", and why would it ever happen to them since they're the "good guys"... right?


> The American political system has been very successful in telling its people that the only acceptable way to show discontent and enact change is by voting in elections.

Has it? Because I recall a bunch of people gathering in the wrong building on Jan 6.


Very does not mean perfectly.


... yet still tens of millions of eligible voters don't even bother

the country is very low-density, there's no one obvious point to protest (there was Occupy Wall Street... then the Seattle TAZ... and that's it, oh, and the Capitol on January 6th), strikes and unions are legally neutered, and it's just not the American way anymore

the country has a lot of experience "managing" internal unpleasantness: see the lead-up to the Civil War, then Reconstruction, then a lull as innovation in racism led to legalized economic racism (the usual "walking while black" crimes, vagrancy laws, etc.), then the civil rights era with its riots, and since then (and as always) police brutality has been used as a substitute for training and funding


I think a general strike might be effective for low-density places, though that requires enough people taking part to make it truly effective. That way you don't need an obvious place to protest apart from your workplace and it'd be a non-violent protest that would definitely get the attention of the wealthy.


How do you measure whether some output corresponds to 8 hours of work, and not 4 or 16 hours?


He doesn't know what he's talking about. Bunch of wannabe founders waxing BS. If you want 8 hours of guaranteed output, use a bot.


Isn't that what the author means?

"it still requires genuine expertise to spot the hallucinations"

"works very well if you do know what you are doing"


The author's headline starts with "LLMs are a failure"; it's hard to take the author seriously with such hyperbole, even if the second part of the headline ("A new AI winter is coming") might be right.


But it can work well even if you don't know what you are doing (or don't look at the impl).

For example, build a TUI or GUI with Claude Code while only giving it feedback on the UX/QA side. I've done it many times despite 20 years of software experience. -- Some stuff just doesn't justify me spending my time credentializing in the impl.

Hallucinations that lead to code that doesn't work just get fixed. Most code I write isn't like "now write an accurate technical essay about hamsters," where hallucinations can sneak through unless I scrutinize it; rather, the code would just fail to work and trigger the LLM's feedback loop to fix it when it tries to run/lint/compile/typecheck.
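The loop described here can be sketched as a toy retry harness (purely illustrative; `check` and `fix` are stand-ins for the real run/lint/typecheck step and the LLM's correction pass):

```python
def feedback_loop(check, fix, max_attempts=5):
    """Toy sketch of a fix-until-it-passes loop.

    check() returns a list of error messages (empty when everything passes);
    fix(errors) stands in for asking the LLM to repair the code.
    """
    for attempt in range(1, max_attempts + 1):
        errors = check()
        if not errors:
            return attempt  # converged: the code runs/lints/typechecks
        fix(errors)
    return None  # gave up: the hallucination survived the loop


# Simulated example: the first two "runs" fail, the third passes.
results = iter([["NameError: foo"], ["TypeError: bar"], []])
attempts = feedback_loop(lambda: next(results), lambda errs: None)
print(attempts)  # 3
```

The point of the sketch is that failures are detected mechanically, so a hallucination that breaks the build never needs a human to spot it.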

But the idea that you can only build with LLMs if you have a software engineer copilot isn't true, and it inches further from true every month, so it kind of sounds like a convenient lie we tell ourselves as engineers (and understandably so: it's scary).


> Hallucinations that lead to code that doesn't work just get fixed

How about hallucinations that lead to code that doesn't work outside of the specific conditions that happen to be true in your dev environment? Or, even more subtly, hallucinations that lead to code which works but has critical security vulnerabilities?


Replace "hallucination" with "oversight" or "ignorance" and you have the same issue when a human writes the code.

A lot of that will come down to the prompter's own foresight, much like the vigilance of a beginner developer who knows they're working on a part of the system that is particularly important to get right.

That said, only a subset of software needs an authentication solution or has zero tolerance for a bug in some codepath. Those constraints don't apply to almost any of the apps/TUIs/GUIs I've built over the last few months.

If you have to restrict the domain to those cases for LLMs to be "disastrous", then I'll grant that for this convo.

What about everything else?


> A lot of that will come to the prompter's own foresight

And, on the current trend, how on earth are prompters supposed to develop this foresight, this expertise, this knowledge?

Sure, fine, we have them now, in the form of experienced devs, but these people will eventually be lost via attrition, and even faster if companies actually make good on their threat to replace a team of 10 devs with a team of three prompters (former senior devs).

The short-sightedness of this, the ironic lack of foresight, is troubling. You're talking about shutting off the pipeline that will produce these future prompters.

The only way through, I think, will be if (very big if) the LLMs get so much better at coding (not code-gen) that you won't need a skilled prompter.

Good luck with that.


> how on earth are prompters supposed to develop this foresight, this expertise, this knowledge?

I suppose curiosity. The same way anyone develops expertise in the abstractions below after getting excited about the higher layer.


Have you checked your package imports lately?


Even assuming that's the case, everyone's acting like throwing more GPUs at the problem is somehow gonna get them to AGI


Far more is being done than simply throwing more GPUs at the problem.

GPT-5 required less compute to train than GPT-4.5. Data, RL, architectural improvements, etc. all contribute to the rate of improvement we're seeing now.


The very idea that AGI will arise from LLMs is ridiculous at best.

Computer science hubris at its finest.


Why is it ridiculous that an LLM or a system similar to or built off of an LLM could reach AGI?


Because intelligence is so much more than stochastically repeating stuff you've been trained on.

It needs to learn new information, create novel connections, be creative... We are utterly clueless as to how the brain works and how intelligence arises.

We just took one cell, a neuron, made the simplest possible model of it, made some copies, and you think it will suddenly spark into life if you throw GPUs at it?


> It needs to learn new information, create novel connections, be creative.

LLMs can do all of those things.


Nope. They can't learn anything beyond their training data, only within the very narrow context window.

Any novel connections come through randomness, hence hallucinations instead of useful connections grounded in background knowledge of the systems or concepts involved.

As for creativity, see my previous point. If I spit out words that happen to go next to each other, that isn't creativity. Creativity implies a goal, a purpose (or sometimes chance), combined with systematic thinking and an understanding of the world.


I was considering refuting this point by point, but it seems your mind is already made up.

I feel that many people who deny the current utility and abilities of large language models will continue to do so long after they've exceeded human intelligence, because the perception that they are fundamentally limited (regardless of whether they actually are, or whether the criticisms make any sense) is necessary for some load-bearing part of their sanity.


I never denied the utility or abilities of LLMs. I use them almost daily, and they can be very useful.

I am denying that they are intelligent, or that by scaling them / upgrading them they will suddenly spring to life and become AGI.


If AGI is built from LLMs, how could we trust it? It's going to "hallucinate", so I'm not sure that this AGI future people are clamoring for is going to really be all that good if it is built on LLMs.


Humans are wrong all the time too. Intentionally and unintentionally.


Because LLMs are just stochastic parrots and don't do any thinking.


Humans who repeatedly deny LLM capabilities despite the numerous milestones they've surpassed seem more like stochastic parrots.

The same arguments are always brought up, often as short, pithy one-liners without much clarification. It seems silly that despite this argument first emerging when LLMs could barely write functional code, now that LLMs have reached gold-medal performance on the IMO, it is still being made with little interrogation of its potential faults, or clarification of the precise boundary of intelligence LLMs will never be able to cross.


Which novel idea have LLMs brought forward so far?


> Claim: gpt-5-pro can prove new interesting mathematics.

>Proof: I took a convex optimization paper with a clean open problem in it and asked gpt-5-pro to work on it. It proved a better bound than what is in the paper, and I checked the proof it's correct.

https://x.com/SebastienBubeck/status/1958198661139009862

(please excuse the x link)


Call me back when LLMs stop "hallucinating" constantly.


"Wrong" is a concept that clearly applies when something is objectively wrong.

I asked ChatGPT for help with Wordle the other day, by asking for a 5-letter word that contained P, M, K and Y. It said:

> Yes, the word skimp contains the letters P, M, K, and Y

Would you say that wrong is not a concept that applies to this answer?
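The error is mechanically checkable, which is what makes it objectively wrong rather than a matter of interpretation (a minimal sketch; "skimp" is the word the model suggested):

```python
# Check which of the requested Wordle letters are missing from "skimp".
required = {"p", "m", "k", "y"}
missing = required - set("skimp")
print(missing)  # {'y'}: "skimp" contains no Y, so the model's claim was false
```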


I think the original commenter meant that the LLM can't be called wrong because the concept requires understanding. However, I think it would be fine to call the LLM's response incorrect.


> Suddenly no British man can be more than 3 months in another European country before being “banned”

People finding out that voting against free movement means movement will no longer be free remains my favorite Brexit trope.

