But it is clearly a shaming attack on the contributor. The post calls him ego-driven, defensive, an inferior coder, and many other (mild) insults. Sure, it doesn't accuse him of being a friend of Epstein, but that is not the only way of attacking someone.
Elon Musk is a fascist sympathizer who is currently being investigated in at least one European country for election interference, and a known liar who has promised Level 5 self-driving "by the end of the year" every year since roughly 2018, people landing on Mars "by next year" since 2022, a Tesla electric truck cheaper than rail "today" back in 2020, and much else.
You need only look at Tesla's attempts to compete with Waymo to see that you are just wrong. They tried to actually deploy fully autonomous Teslas, and it doesn't really work; it requires a human supervisor per car.
They are behind Waymo, but they are getting there. They started giving fully autonomous rides without a safety driver in Austin last month. Tesla chose a harder, camera-only approach, but it's more scalable once it works.
Clearly at this point the camera-only thing is the ego of Musk getting in the way of the business, because any rational executive would have slapped a LIDAR there long ago.
> Mr Keegan said he was “pretty confident” that in “the next five to 10 years” driverless vehicles would “make a major contribution in terms of sustainable transport” on Dublin’s streets.
As always, people were overoptimistic back then, too. There are currently no driverless vehicles in Dublin at all, with none expected anytime soon unless you count the metro system (strictly speaking driverless, but clearly not what he was talking about).
Ask Musk why he refuses to provide details of accidents so we can make a judgment.
Tesla’s own Vehicle Safety Report claims that the average US driver experiences a minor collision every 229,000 miles, meaning the robotaxi fleet is crashing four times more often even by the company’s own benchmark.
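For what it's worth, the "four times" figure is just a ratio of miles-per-collision numbers. A rough sketch of that arithmetic, with the robotaxi-side figures as placeholder assumptions picked only to show how the comparison works (they are not published fleet data):

```python
# Illustrative arithmetic only: the robotaxi figures are placeholder
# assumptions chosen to show how a "N times more often" comparison is
# computed; they are not published fleet data.
BENCHMARK_MILES_PER_COLLISION = 229_000  # Tesla Vehicle Safety Report benchmark

robotaxi_miles = 400_000       # hypothetical fleet mileage
robotaxi_collisions = 7        # hypothetical incident count

robotaxi_miles_per_collision = robotaxi_miles / robotaxi_collisions
ratio = BENCHMARK_MILES_PER_COLLISION / robotaxi_miles_per_collision

print(f"Robotaxi: one collision every {robotaxi_miles_per_collision:,.0f} miles")
print(f"Collision rate vs benchmark: {ratio:.1f}x more often")
```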
I don't see how we could know the rate of minor collisions for US drivers like that. There's no way most people report 1-4 mph "collisions" with things like this.
You don't have to know. You can fully remove the few "minor" accidents (which a self-driving car shouldn't be having ever anyway) and the Tesla still comes out worse than a statistical bucket that includes a staggering number of people who are currently driving drunk, or high, or while reading a book.
The car cannot be drunk or high. It can't be sleepy. It can't be distracted. It can't be worse at driving than any of the other cars. It can't get road rage. So why is it running into a stationary object at 17mph?
Worse, it's very very easy to take a human that crashes a lot and say "You actually shouldn't be on the road anymore" or at least their insurance becomes expensive. The system in all of these cars is identical. If one is hitting parked objects at 17mph, they would almost all do it.
You need to eat somewhere between roughly 1300 and 2000 Cal every day to maintain your weight, even if you aren't doing any exercise at all.
If you want to lose weight, it's far easier to remove 800 Cal from your diet, at least time-wise, than it is to exercise off 800 Cal's worth every day.
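Some rough numbers on the time cost; the kcal-per-minute rates below are ballpark assumptions and vary a lot with body weight and intensity:

```python
# Ballpark time needed to burn an 800 kcal deficit through exercise.
# The kcal-per-minute rates are rough assumptions, not measurements.
deficit_kcal = 800

activities = {
    "brisk walking": 5,      # ~kcal/min, rough assumption
    "jogging": 10,
    "moderate cycling": 8,
}

for name, kcal_per_min in activities.items():
    minutes = deficit_kcal / kcal_per_min
    print(f"{name}: ~{minutes:.0f} minutes to burn {deficit_kcal} kcal")
# vs. dropping ~800 kcal of food, which costs no extra time at all.
```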
Either way, if you're losing weight at any appreciable rate, you will feel hungry (at least if it's not chemically induced in some way, such as by chemo or GLP-1 agonists or similar). That's just something you have to get used to if you want to lose weight.
This is well-intentioned but I think it oversimplifies in ways that can actually be harmful. "Just get used to being hungry" is rough advice to give people - chronic hunger is one of the main reasons diets fail, and framing weight loss as a willpower contest against hunger ignores that satiety is heavily influenced by _what_ you eat, not just how much. A 400 kcal meal of protein, fat, and fiber will keep you full for hours; 400 kcal of simple carbs will leave you hungry again in 45 minutes, in part because of the insulin and blood glucose dynamics involved.
The calories in/out model isn't wrong exactly, but it's so reductionist that it becomes misleading in practice. It omits hormonal responses (insulin, leptin, ghrelin), the thermic effect differences between macronutrients (your body burns 20-30% of protein calories just processing them vs 0-5% for fat), gut microbiome composition, sleep quality, stress hormones, meal timing, and individual metabolic variation. Two people eating identical calorie counts can have very different outcomes. Telling someone "just eat less and accept the hunger" without any of that context can set them up for a miserable yo-yo cycle - or worse, a disordered relationship with food.
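A quick back-of-envelope on that thermic-effect difference, using the ranges cited above (the example meals are hypothetical):

```python
# Net calories after the thermic effect of food (TEF), using the ranges
# cited above: ~20-30% of protein calories vs ~0-5% of fat calories are
# burned just processing the food. The example meals are hypothetical.
def net_kcal(kcal: float, tef_fraction: float) -> float:
    """Calories left after the body's cost of digesting/processing them."""
    return kcal * (1 - tef_fraction)

protein_heavy = net_kcal(400, 0.25)  # 400 kcal, mostly protein, TEF ~25%
fat_heavy = net_kcal(400, 0.03)      # 400 kcal, mostly fat, TEF ~3%

print(f"Protein-heavy 400 kcal meal: ~{protein_heavy:.0f} kcal net")
print(f"Fat-heavy 400 kcal meal:     ~{fat_heavy:.0f} kcal net")
```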
No, the common "wisdom" you are parroting here is harmful, because it just doesn't work.
We have been telling people for decades now to worry that they might harm themselves with too much restriction, and it is just wrong. What is harmful is being overweight. What is harmful is then confusing people into thinking they are somehow going to lose weight without much restriction or without being hungry.
This also scales really badly with age, because as you age, CNS recovery gets worse and worse compared to muscle recovery.
At 55, there is simply no way for me to lose weight other than being hungry. It is impossible to recover from the amount of exercise that would be needed. The reality is that no one needs to worry about too much restriction until they are down to around 12% or so body fat. The fact that a person's body fat % is never mentioned in any of this is exemplary of how bad the standard advice is.
Most people have too much leptin and leptin resistance. Then those same people get the same bad advice over and over not to restrict too much, because you don't want to be like an anorexic or an extreme athlete and have too little leptin. Ignoring, of course, that the anorexic and the extreme athlete are going to have incredibly low body fat percentages.
I think the advice that everyone who is overweight or obese really needs is to experiment with different ways of reducing their food consumption while managing their hunger and cravings, and find out a method that works for them. I don't think there's any universal solution. Even saying "eat less simple carbs, those make you more hungry because of this and that chemical pathway" is not good universal advice, because food consumption is not strictly tied to hunger in all people. It is up to you as the one who wants to lose weight to experiment and figure out what motivates you and works for you longer term.
For example, I don't feel satisfied with my meal if I don't feel slightly full. So, what has worked for me is to generally have a single large meal per day, in which I will typically eat whatever I've been really craving since my last meal. On some days that might be steak and broccoli, on other days it might be a McDonald's meal, or some cake. When I get cravings, it's far easier for me to defer them to tomorrow's meal than it would be to just stop eating junk food entirely, or to eat half a burger and two fries from the bag. The exact opposite might be true for other people, and you won't really know until you've tried for yourself.
One thing I will note - I think one of the concerns of the poster you are replying to with focusing too much on enduring hunger is that it might lead some people to develop anorexia, which is indeed a huge problem, even when the person is really overweight (since their anorexia will not just go away once they've lost that extra weight, it will keep going until they get dangerously malnourished).
I don't think I implied that the only thing that matters to weight loss is CICO, and that you only need willpower to lose weight. I don't personally believe this at all.
My point was instead that whatever effort you can spend on weight loss is better spent on managing your diet than increasing your level of activity (though I should also say that fitness is important beyond weight loss). Even when I said you can reduce 800 Cal of food, that doesn't mean "just skip a meal" (though that is also a valid strategy for some people). It can also mean "eat different kinds of food".
However, I do strongly believe that for any weight loss at a significant pace (say, 1kg/month or faster), and assuming it's not just a correction after a short stint of overeating (as in, it's more than losing 1-2kg you put on over Christmas) - then some feeling of hunger is inevitable. Losing long-term accumulated weight is going against your body's "wishes" (especially in the lipostat model, where your body has a set fat% equilibrium that it seeks to maintain), and hunger is an inevitable response to that. How much hunger you will feel can be controlled by better food choices and so on, but you will have to also get used to feeling some level of hunger.
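To put a number on that pace, here's the usual back-of-envelope using the common ~7,700 kcal per kg of body fat approximation:

```python
# Back-of-envelope: daily deficit implied by a given fat-loss pace,
# using the common ~7,700 kcal per kg of body fat approximation.
KCAL_PER_KG_FAT = 7_700

def daily_deficit(kg_per_month: float, days: int = 30) -> float:
    return kg_per_month * KCAL_PER_KG_FAT / days

for pace in (0.5, 1.0, 2.0):  # kg lost per month
    print(f"{pace} kg/month -> ~{daily_deficit(pace):.0f} kcal/day deficit")
# 1 kg/month works out to roughly a 250 kcal/day deficit, sustained.
```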
For cardio sure but for weight training you're burning calories and tearing muscle fibres to increase size/strength. Also depending on the running you're doing, you're likely staying fitter.
Sure it's easier to fast but you're missing out on the other benefits associated with exercise.
> Sure it's easier to fast but you're missing out on the other benefits associated with exercise.
This is very true, exercise is very important for health regardless of its effect on weight.
> For cardio sure but for weight training you're burning calories and tearing muscle fibres to increase size/strength.
True, but you need to spend even more time to rack up 800 Cal worth of exercise by weight training compared to doing cardio, as a beginner or even an intermediate level gym goer.
It is also true though that weight training, if you actually successfully build muscle mass, can significantly increase your BMR and thus help with losing weight in that way, even if you're not spending hours or lifting hundreds of kilos at every session.
Yeah, unfortunately the back-of-envelope physics math on the kcal burned lifting weights is deeply disappointing. Luckily our bodies are quite inefficient compared to a bomb calorimeter, because lazy potential-energy math gets me less than a (k)calorie per 3 sets of 5 lifts.
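Spelling out that lazy potential-energy math (the load and bar travel are assumptions, picked just to show the order of magnitude):

```python
# Lazy potential-energy estimate for 3 sets of 5 lifts. Load and bar
# travel are assumptions for illustration; muscle is only roughly 20-25%
# mechanically efficient, which is why the real burn is several times
# this number.
G = 9.81             # m/s^2
mass_kg = 50         # assumed load on the bar
lift_height_m = 0.5  # assumed bar travel per rep
reps = 3 * 5

work_joules = mass_kg * G * lift_height_m * reps
kcal = work_joules / 4184  # 1 kcal = 4184 J

print(f"Mechanical work: {work_joules:.0f} J ≈ {kcal:.2f} kcal")
# -> roughly 3,700 J, i.e. under one kcal for all 15 reps
```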
A tiny technical note - Cal is the official symbol for a "large calorie", equal to a kcal, 1000 cal, if you want to be precise but concise on the exact type of calorie you're talking about.
I don't know if this applies to AI usage, but actual gambling addicts most certainly do not shop around for the best possible rewards: they stick more or less to the place they got addicted at initially. Not to mention, there are plenty of people addicted to "casinos" that give zero monetary rewards, such as Candy Crush or Farmville back in the day, and Genshin Impact or other gacha games today.
So, if there's a way to get people addicted to AI conversations, that's an excellent way to make money even if you are way behind your competitors, as addicted buyers are much more loyal than other clients.
You're taking the gambling analogy too seriously. People do in fact compare different LLMs and shop around. How gamblers choose casinos is irrelevant, because this whole analogy is nothing more than a lazy excuse for AI holdouts to feel smug.
> My first instinct was, I had underspecified the location of the car. The model seems to assume the car is already at the car wash from the wording. GPT 5.x series models behave a bit more on the spectrum so you need to tell them the specifics.
This makes little sense, even though it sounds superficially convincing. Why would a language model assume that the car is already at the destination when evaluating the difference between walking and driving? And why not mention that, if it was really assuming it?
What seems to me far, far more likely to be happening here is that the phrase "walk or drive for <short distance>" is too strongly associated in the training data with the "walk" response, and the "car wash" part of the question simply can't flip enough weights to matter in the default response. This is also to be expected given that there are likely extremely few similar questions in the training set, since people just don't ask about what mode of transport is better for arriving at a car wash.
This is a clear case of a language model having language model limitations. Once you add more text in the prompt, you reduce the overall weight of the "walk or drive" part of the question, and the other relevant parts of the phrase get to matter more for the response.
You may be anthropomorphizing the model, here. Models don’t have “assumptions”; the problem is contrived and most likely there haven’t been many conversations on the internet about what to do when the car wash is really close to you (because it’s obvious to us). The training data for this problem is sparse.
I may be missing something, but this is the exact point I thought I was making as well. The training data for questions about walking or driving to car washes is very sparse; and training data for questions about walking or driving based on distance is overwhelmingly larger. So, the stat model has its output dominated by the length-of-trip analysis, while the fact that the destination is "car wash" only affects smaller parts of the answer.
I got your point, because it seemed that you were precisely avoiding the anthropomorphizing and were in fact homing in on what's happening with the weights. The only way I can imagine these models handling trick questions lies beyond word prediction or reinforcement training, UNLESS the reinforcement training comes from a world simulation that is as complete as possible, including as much mechanics as possible, and the neural networks are trained on that.
For instance, think of chess engines with AI: they can train themselves simply by playing many, many games. The "world simulation" with those is the classic chess engine architecture, but it uses the positional weights produced by the neural network. So says Gemini, anyway:
"ai chess engine architecture"
"Modern AI chess engines (e.g., Lc0, Stockfish) use
a hybrid architecture combining deep neural networks for positional evaluation with advanced search algorithms like Monte-Carlo Tree Search (MCTS) or alpha-beta pruning. They feature three core components: a neural network (often CNN-based) that analyzes board patterns (matrices) to evaluate positions, a search engine that explores move possibilities, and a Universal Chess Interface (UCI) for communication."
So with no model of the world to play with, I'm thinking the chatbot LLMs can only go with probabilities, or with whatever matches the prompt best in the crazy-dimensional thing that goes on inside the neural networks. If it had access to a simple world of cars and car washes, it could run a simulation and rank the options appropriately, and it could possibly infer, through either simulation or training on those simulations, that if you are washing a car, the operation will fail if the car is not present. I really like this car wash trick question lol
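For anyone curious, here's a toy sketch of that search-plus-learned-evaluation split. The evaluator is a dummy material count standing in for a neural network's positional score, and it assumes the python-chess package; a real engine would pair a trained network with MCTS or alpha-beta, not this naive search:

```python
# Toy version of the "learned evaluation + search" split described above.
# The evaluator is a crude material count standing in for a neural
# network's positional score; real engines (Lc0, Stockfish NNUE) plug a
# trained network into MCTS or alpha-beta search instead.
import chess  # assumes the python-chess package is installed

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board: chess.Board) -> float:
    """Stand-in for the neural net: material balance from White's view."""
    score = 0.0
    for piece, value in PIECE_VALUES.items():
        score += value * len(board.pieces(piece, chess.WHITE))
        score -= value * len(board.pieces(piece, chess.BLACK))
    return score

def negamax(board: chess.Board, depth: int) -> float:
    """Tiny depth-limited search; the 'world model' is the rules of chess."""
    if depth == 0 or board.is_game_over():
        return evaluate(board) if board.turn == chess.WHITE else -evaluate(board)
    best = float("-inf")
    for move in board.legal_moves:
        board.push(move)
        best = max(best, -negamax(board, depth - 1))
        board.pop()
    return best

def best_move(board: chess.Board, depth: int = 2) -> chess.Move:
    best_score, choice = float("-inf"), None
    for move in board.legal_moves:
        board.push(move)
        score = -negamax(board, depth - 1)
        board.pop()
        if score > best_score:
            best_score, choice = score, move
    return choice

print(best_move(chess.Board()))  # picks a (not very good) opening move
```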
Reasoning automata can make assumptions. Lots of algorithms make "assumptions", often with backtracking if they don't work out. There is nothing human about making assumptions.
What you might be arguing is that LLMs are not reasoning but merely predicting text. In that case they wouldn't make assumptions. If we were talking about GPT-2 I would agree on that point, but I'm skeptical that this is still true of the current generation of LLMs.
I'd argue that "assumptions", i.e. the statistical models it uses to predict text, is basically what makes LLMs useful. The problem here is that its assumptions are naive. It only takes the distance into account, as that's what usually determines the correct response to such a question.
I think that’s still anthropomorphization. The point I’m making is that these things aren’t “assumptions” as we characterize them, not from the model’s perspective. We use assumptions as an analogy but the analogy becomes leaky when we get to the edges (like this situation).
It is not anthropomorphism. It is literally a prediction model and saying that a model "assumes" something is common parlance. This isn't new to neural models, this is a general way that we discuss all sorts of models from physical to conceptual.
And in the case of an LLM, walking a noncommutative path down a probabilistic knowledge manifold, it's incorrect to oversimplify the model's capabilities as simply parroting a training dataset. It has an internal world model and is capable of simulation.
> Why would a language model assume that the car is already at the destination when evaluating the difference between walking and driving? And why not mention that, if it was really assuming it?
Because it assumes it's a genuine question, not a trick.
That's not evidence that the model is assuming anything, and this is not a brainteaser. A brainteaser would be exactly the opposite, a question about walking or driving somewhere where the answer is that the car is already there, or maybe different car identities (e.g. "my car was already at the car wash, I was asking about driving another car to go there and wash it!").
If the LLM were really basing its answer on a model of the world where the car is already at the car wash, and you asked it about walking or driving there, it would have to answer that there is no option, you have to walk there since you don't have a car at your origin point.
If it's a genuine question, and if I'm asking if I should drive somewhere, then the premise of the question is that my car is at my starting point, not at my destination.
While I don't think anyone has a plausible theory that goes to this level of detail on how humans actually think, there's still a major difference. I think it's fair to say that if we are doing a brute force search, we are still astonishingly more energy efficient at it than these LLMs. The amount of energy that goes into running an LLM for 12h straight is vastly higher than what it takes for humans to think about similar problems.
In the research group I'm in, we usually try a few approaches to each problem. Let's say we get:
Method A) 30% speed reduction and 80% precision decrease
Method B) 50% speed reduction and 5% precision increase
Method C) 740% speed reduction and 1% precision increase
and we only publish B. It's not brute force[1], but throwing noodles at the wall and seeing what sticks, like the GP said. We don't throw spoons[1], but everything that looks like a noodle has a high chance of being thrown. It's a mix of experience[1] and not enough time to try everything.
One thing we know for sure is that humans learn from their interactions, while LLMs don't (beyond some small context window). This clear fact alone makes it worthless to debate with a current AI.