Everyone knew that was the future and that the big auto manufacturers were deliberately dragging their feet.
No-one (serious) thought there was a market for the Cybertruck.
The stock price is pure madness; it's as if robotaxis are already priced in, but that's clearly not going to happen for Tesla. And even if it did, it would be a smallish market: their brand has become toxic in so many big markets.
> No-one (serious) thought there was a market for the Cybertruck.
If they'd hit the price and performance of the launch announcements, they might have. $40k base for what he initially talked about is a vastly better proposition than $61k base for what he actually delivered.
Good for them as a company; that's why they are still here.
And now? Everyone builds EVs, and everyone is as far along as Tesla, or further.
Even old-school companies like BMW now have more models than Tesla, and the Cybertruck was expensive to build, badly built, and did not deliver what Elon Musk, that druggie and anti-democrat, promised.
Tesla unveiled the Roadster 20 years ago. That's plenty of time for other companies to catch up. They made a bet that once the battery moat evaporated the millions of miles of driving footage, powering affordable fully autonomous driving, would be their next moat. They failed, not because camera-based FSD is a silly idea (we drive with our eyes after all), but because it's a really hard problem. If they had won that bet, Tesla would justify its valuation. They didn't, and so we're left with the flailing of a doomed company.
The first electric car predates the 20th century. That seems pretty obvious.
The problem was always batteries and charging infrastructure. I wouldn't call those semi-impossible, but they're something Tesla definitely contributed to significantly.
> The first electric car predates the 20th century. That seems pretty obvious.
If you count remote control cars as well then you have an even weightier point.
But if you're serious about adapting technologies, countries and drivers to electric cars, then you'll know that an electric car being made in the 19th century is totally irrelevant. Toyota even bet big on hydrogen rather than electric for a long time; that's how non-obvious it was.
>an electric car being made in the 19th century is totally irrelevant.
But then you strangely ignored why it was irrelevant, which I already pointed out and was the meat of the statement. The concept of an electric car is painfully simple. Way more so than an internal combustion engine, in fact.
> The first electric car predates the 20th century
Great, now do steam. Having been produced in the past does not mean it will make a comeback, despite steam being quieter, having great torque, and the main ingredient for propulsion (water) being safer than gasoline for normal people to refuel with.
What I could see happening is Alphabet getting an exclusive lock on Tesla (probably not buying it, because the stock is too high) and then quasi-merging it with Waymo for a fully integrated, functional robotaxi company. A bit like when they bought Motorola's phone division.
They're real at scale. Plenty of bugs don't surface until you're running under heavy load on distributed infrastructure. Often the culprit is low in the stack. Asking the reporter the right questions may not help in this case: you have full traces, but can't reproduce in a test environment.
When the cause is difficult to source or fix, it's sometimes easier to address the effect by coding around the problem, which is why mature code tends to have some unintuitive warts to handle edge cases.
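For a concrete (made-up) sketch of what one of those warts looks like: if a call fails intermittently somewhere low in the stack and you can't source or reproduce it, you wrap it and retry. withRetry and the backoff policy here are illustrative, not any particular library's API:

    // Hedged sketch: when a low-level fault can't be sourced or reproduced,
    // address the effect by wrapping the call. Names are made up.
    async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
      let lastError: unknown;
      for (let i = 0; i < attempts; i++) {
        try {
          return await fn();
        } catch (err) {
          lastError = err;
          // Exponential backoff: 100ms, 200ms, 400ms, ...
          await new Promise((r) => setTimeout(r, 100 * 2 ** i));
        }
      }
      throw lastError; // still failing after all attempts: give up
    }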
> Unless your code is multi-threaded, to which I say, good luck!
What isn't multi-threaded these days? Kinda hard to serve HTTP without concurrency, and practically every new business needs to be on the web (or to serve multiple mobile clients; same deal).
All you need is a database and web form submission and now you have a full distributed system in your hands.
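Even that minimal setup already has classic distributed-systems races. Here's a sketch of the check-then-act problem in TypeScript, with an in-memory set and a timeout standing in for real database round-trips (all names invented):

    // Two concurrent submissions of the same form: both SELECT-style checks
    // run before either INSERT, so both report success.
    const usernames = new Set<string>();
    const dbRoundTrip = () => new Promise((r) => setTimeout(r, 10));

    async function registerUser(name: string): Promise<string> {
      await dbRoundTrip();                     // "SELECT": is the name taken?
      const taken = usernames.has(name);
      await dbRoundTrip();                     // another hop before the write
      if (taken) return `rejected duplicate ${name}`;
      usernames.add(name);                     // "INSERT"
      return `created ${name}`;
    }

    Promise.all([registerUser("alice"), registerUser("alice")])
      .then((results) => console.log(results)); // [ 'created alice', 'created alice' ]

The usual fix is to push the invariant into the database itself (a unique constraint) rather than checking it in application code.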
nginx is also from the era when fast static file serving was still a huge challenge, and "enough to run a business" for many purposes. Most software being written has more mutable state, and much more potential for edge cases.
You mean in a single-threaded context like JavaScript? (Or with the Python GIL giving the impression of the same.) That removes some memory-corruption races, but leaves all the logical problems in place. The biggest change is that you only have fixed points where interleaving can happen, which limits the possibilities, but in either scenario the number of possible paths is so big it's typically too large for a human to enumerate.
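To illustrate the kind of logical race that survives single-threading, here's a minimal lost-update sketch in TypeScript (names are invented); the await is exactly one of those fixed interleaving points:

    // No memory corruption is possible here, but the logical race remains:
    // both tasks read the same balance before either writes it back.
    let balance = 100;
    const fetchFee = () => new Promise<number>((r) => setTimeout(() => r(10), 5));

    async function charge(): Promise<void> {
      const current = balance;       // read
      const fee = await fetchFee();  // interleaving point: other tasks run here
      balance = current - fee;       // write based on a stale read
    }

    Promise.all([charge(), charge()])
      .then(() => console.log(balance)); // 90, not 80: one charge was lost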
Webdevs unaware of race conditions -> complex page fails to load. They're lucky in how the domain sandboxes their bugs into affecting just that one page.
Historically I would have agreed with you. But since the rise of LLM-assisted coding, I've encountered an increasing number of things I'd call clear "ghost bugs" in single threaded code. I found a fun one today where invoking a process four times with a very specific access pattern would cause a key result of the second invocation to be overwritten. (It is not a coincidence, I don't think, that these are exactly the kind of bugs a genAI-as-a-service provider might never notice in production.)
I think our phone lines must work differently: the entire infrastructure is owned by one company (BT), which must lease it to other companies. So they can do things like this, as everyone needs a router at the end to access it, and that's how they charge per customer.
There is a separate cable network, again with one operator (Virgin), which doesn't lease it out.
It's not clear what happened from this news report.
His error in judgement may have been that he hadn't investigated the problem sufficiently and then falsely testified to the government. That's a big deal on its own.
The officer involved might have been fired or reprimanded; we don't know from that article.
>I think the true mind-boggle is you don't seem to realize just how much content the AI companies have stolen.
What makes you think I don't realize it? Your comment looks like it was generated by an LLM, because that's a hallucination: it's not true at all.
AI companies have stolen a lot of content for training. I AGREE with this. So have you. That content lives rent-free in your head as your memory. It's the same concept.
Legally speaking, though, AI companies are a bit more in the red, because the law, from a practical standpoint, doesn't exactly make illegal anything stored in your brain... but from a technical standpoint, information in your brain, on a hard drive, or on a billboard is still information instantiated/copied in the physical world.
The text you write and output is simply a reconfiguration of that information in your head. Look at what you're typing: the English language. It's not copyrighted, but every single word you're typing was not invented by you, and the grammar rules and conventions were ripped off from existing standards.
I think you are pointing out the exact conflation here. The commenter probably didn't steal a bunch of code, because it is possible to reason from first principles and rules and still end up being able to code as a human.
It did not take me reading the entirety of available public code to be kind of okay at programming, I created my way to being kind of okay at programming. I was given some rules and worked with those, I did not mnemonic my way into logic.
That none of us scraped and consumed the entire internet is hopefully pretty obvious, but we still have capabilities in excess of AI.
What’s being missed here is how fundamentally alien the starting point is.
A human does not begin at zero. A human is born with an enormous amount of structure already in place: a visual system that segments the world into objects, depth, edges, motion, and continuity; a spatial model that understands inside vs outside, near vs far, occlusion, orientation, and scale; a temporal model that assumes persistence through time; and a causal model that treats actions as producing effects. None of this has to be learned explicitly. A baby does not study geometry to understand space, or logic to understand cause and effect. The brain arrives preloaded.
Before you ever read a line of code, you already understand things like hierarchy, containment, repetition, symmetry, sequence, and goal-directed behavior. You know that objects don’t teleport, that actions cost effort, that symbols can stand in for things, and that rules can be applied consistently. These are not achievements. They are defaults.
An LLM starts with none of this.
It does not know what space is. It has no concept of depth, proximity, orientation, or object permanence. It does not know that a button is “on” a screen, that a window contains elements, or that left and right are meaningful distinctions. It does not know what vision is, what an object is, or that the world even has structure. At initialization, it does not even know that logic exists as a category.
And yet, we can watch it learn these things.
We know LLMs acquire spatial reasoning because they can construct GUIs with consistent layout, reason about coordinate systems, generate diagrams that preserve relative positioning, and describe scenes with correct spatial relationships. We know they acquire a functional notion of vision because they can reason about images they generate, anticipate occlusion, preserve perspective, and align visual elements coherently. None of that was built in. It was inferred.
But that inference did not come from code alone.
Code does not contain space. Code does not contain vision. Code does not contain the statistical regularities of the physical world, human perception, or how people describe what they see. Those live in diagrams, illustrations, UI mockups, photos, captions, instructional text, comics, product screenshots, academic papers, and casual descriptions scattered across the entire internet.
Humans don’t need to learn this because evolution already solved it for us. Our visual cortex is not trained from scratch; it is wired. Our spatial intuitions are not inferred; they are assumed. When we read code, we already understand that indentation implies hierarchy, that nesting implies containment, and that execution flows in time. An LLM has to reverse-engineer all of that.
That is why training on “just code” is insufficient. Code presupposes a world. It presupposes agents, actions, memory, time, structure, and intent. To understand code, a system must already understand the kinds of things code is about. Humans get that for free. LLMs don’t.
So the large, messy, heterogeneous corpus is not indulgence. It is compensation. It is how a system with no sensory grounding, no spatial intuition, and no causal priors reconstructs the scaffolding that humans are born with.
Once that scaffolding exists, the story changes.
Once the priors are in place, learning becomes local and efficient. Inside a small context window, an LLM can learn a new mini-language, adopt a novel set of rules, infer an unfamiliar API, or generalize from a few examples it has never seen before. No retraining. No new data ingestion. The learning happens in context.
This mirrors human learning exactly.
When you learn a new framework or pick up a new problem domain, you do not replay your entire lifetime of exposure. You learn from a short spec, a handful of examples, or a brief conversation. That only works because your priors already exist. The learning is cheap because the structure is already there.
The same is true for LLMs. The massive corpus is not what enables in-context learning; it is what made in-context learning possible in the first place.
The difference, then, is not that humans reason while LLMs copy. The difference is that humans start with a world model already installed, while LLMs have to build one from scratch. When you lack the priors, scale is not cheating. It is the price of entry.
But this is beside the point. We know for a fact that the outputs of humans and LLMs are novel generalizations and not copies of existing data. It's easily proven by asking either a human or an LLM to write a program that doesn't exist in the universe; both the human and the LLM can readily do this. So in the end, both the human and the LLM have copied data in their minds and can generalize new data OFF of that copied data. It's just that the LLM has more copied data while the human has less, but both have copied data.
In fact, the priors that a human is born with can even be described as copied data, just encoded in our genes, such that we are born with brains that inherit a learning bias optimized for our given reality.
That is what is missing. You are looking at the speed of learning during training. The apt comparison in that case would be reconstructing a human brain neuron by neuron. If you want to compare how fast a human learns a new programming language with an LLM, the correct comparison is how fast an LLM learns a new programming language AFTER it has been trained, solely within inference in the context window.
GW doesn't have competitors; it has an absolute monopoly on the 40k and Fantasy worlds it has built up. It's like saying there are competitors to LOTR or Star Wars or DnD.
Their worlds are their monopolies. Worlds that now have multiple decades' worth of lore investment (almost 50 years now, I think).
Someone else being able to make cheaper little plastic models doesn't affect GW in the slightest. Nor does pumping out AI slop stories.
The Horus Heresy book series is something like 64 books now. And that's just a spin-off, set 10,000 years before when 40k actually takes place.
With so much lore they need complicated archiving and tracking to keep on top of it all (I happen to know their chief archivist).
You can't replace that. I only say all this to try to explain how far off the mark you are in understanding what the actual value of the company is.
I live in Nottingham, where GW is based. Another of my friends happens to have a company on an industrial estate where there are like 3 other tabletop gaming companies, all ex-GW staff.
You could probably fit all their buildings in the pub that GW has on its colossal factory site.
I used to know people who worked at Boots, which used to be the big Nottingham employer. Nowadays, I know more people who work at GW.
BattleTech is somewhat of a competitor, and a variety of smaller games have some niches.
Plenty of people use proxies, too. There are places that do monthly packs of new STLs that could be an entire faction army, and there have long been places that sold "definitely not Space Marines and Sisters of Battle" minis too.
They don't face a threat of anyone overtaking them at present, but AI making alternatives in this vein even cheaper could eat away at portions of their bottom line.
As a BattleTech lover, I find the phrase "somewhat of a competitor" a bit vague. I see BattleTech as a 3%er - one of a few 3%ers - compared to the near-monopoly of WH40K (and fantasy WH).
As an aside, I am somewhat disappointed that BattleTech's appeal to the mainstream is largely down to the MechWarrior games, which have minimal lore.
There is so much more that could be done. But the current owners seem to be pretty poor at translating all their paperwork stories for the modern crowd.
I don't get why you conflate privacy and resistance to censorship.
I think privacy is essential for freedom.
I'm also fine with lots of censorship on publicly accessible websites.
I don't want my children watching beheading videos, or being exposed to extremists like (as an example of many) Andrew Tate. And people like Andrew Tate are actively pushed by YouTube, TikTok, etc. I don't want my children to be exposed to what I personally feel are extremist Christians in America, who infest children's channels.
I think anyone advocating against censorship is incredibly naive about how impossible it's become for parents. Right now it's a binary choice:
1. No internet for your children
2. Risk potential, massive, life-altering harm, as parental controls are useless, half-hearted or non-existent. Even companies like Sony or Apple make it almost impossible to have a choice in what your children can access. It's truly bewildering.
And I think you should have to identify yourself. You should be liable for what you post to the internet, and if a company has published your material but doesn't know who you are, THEY should be liable for the material published.
Safe harbor laws and anonymous accounts should never have been allowed to co-exist. It should have been one or the other. It's a preposterous situation we're in.
Voluntary “censorship” (not being shown visceral media you didn't ask for) and censorship for children are very important.
Bad “censorship” is involuntarily denying or hiding from adults what they want to see. IMO, that power tends to get abused, so it should only be applied in specific, exceptional circumstances (and probably always temporarily, if only because information tends to leak, so there should be a longer-term fix that makes it unnecessary).
I agree with you that children should be protected from beheading and extremism; also, you should be able to easily avoid that yourself. I disagree in that, IMO, anonymous accounts and “free” websites should exist and be accessible to adults. I believe that trusted locked-down websites should also exist, which require ID and block visceral media; and bypassing the ID requirement or filter (as a client) or not properly enforcing it (as a server operator) should be illegal. Granting children access to unlocked sites should also be illegal (like giving children alcohol, except parents are allowed to grant their own children access).
A hangout for 11-16 year olds often seems to devolve into a bunch of kids all watching their own phones. It's really depressing to watch, though they do seem to play as well.
We have had several arguments about no social media, and we're only 1 year into the 6-ish-year "too naïve to look after yourself on the internet" phase, and the eldest has already figured out how to download some chat app I'd never even heard of without permission.
This is a horrible comment and is exactly what we're trying to avoid on HN. The guidelines make it clear we're trying for something better here. HN is only a place where people want to participate because enough people take care to make their contributions far more substantive than this. Please do your part by reading the guidelines and making an effort to observe them in future.
These ones in particular are relevant:
Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.
When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."
Please don't fulminate. Please don't sneer, including at the rest of the community.
Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
Eschew flamebait. Avoid generic tangents. Omit internet tropes.
Please don't use Hacker News for political or ideological battle. It tramples curiosity.