Hacker News | ALittleLight's comments

The 20k model has no subscription? How does that work? Surely it's using a fair amount of compute, isn't it?


They will brag to investors about each unit sold to demonstrate product/market fit, and then raise probably 10-100x the sale price per unit from those investors.


How far we've fallen, when the concept of owning something you bought seems preposterous to some people.


The only part of this article I believe is the legal and bureaucratic burdens part.

"Human radiologists spend a minority of their time on diagnostics and the majority on other activities, like talking to patients and fellow clinicians"

I've had the misfortune of dealing with a radiologist or two this year. They spent 10-20 minutes talking about the imaging and the results with me. What they said was very superficial and they didn't have answers to several of the questions I asked.

I went over the images and pathology reports with ChatGPT, and it was much better informed, did have answers to my questions, and raised additional questions I should have been asking. I've used ChatGPT's information on the rare occasions when doctors deign to speak with me, and it's always been right. Repeating ChatGPT's conclusions and observations to my doctors has twice changed the course of my treatment this year, and the doctors have never said anything I learned from ChatGPT is wrong. By contrast, my doctors are often wrong, forgetful, or mistaken. I trust ChatGPT way more than them.

Good image recognition models are probably much better than human radiologists already, and certainly could be vastly better. One obstacle this post mentions, that AI models "struggle to replicate this performance in hospital conditions", is purely a choice. If HMOs trained models on real hospital data, this would no longer be the case (if it even is now, which I doubt).

I think it's pretty clearly doctors, and their various bureaucratic and legal allies, defending their legal monopoly so they can keep providing worse and slower healthcare at higher prices and continue making money, at the small cost of the sick getting worse and dying.


Talented individual(s) who want to do a startup.


First, just from a "danger" standpoint: more people in the EU die from heat than from guns in the US, and roughly 8 times more people in Europe die from cold than from heat. So I would say that we live in an environment where our neighbors are armed the same way you live in an environment that is often dangerously hot or cold, i.e. we get used to it.

Second, you can walk or drive on a street. Every passerby in a car could kill you if they wanted to by colliding with you. It rarely happens. Stand next to a tall ledge or overpass with crowds walking by and watch the teeming masses: you're unlikely to see any of the thousands of people walking by leap off to their end. Similarly, even though basically anyone could kill you, it's very rare to encounter someone willing to end, or severely degrade, their own life by killing you. Almost nobody wants to do it.

Charlie Kirk is/was kind of an extreme example: he said many things that severely angered hostile people, and he went into big crowds and said provocative things many times before being shot. I think in most situations you have to push pretty hard to make people angry enough to shoot at you. If you can avoid dangerous neighborhoods, dangerous professions (drugs and gangs), and dangerous people (especially boyfriends/husbands), then you are pretty unlikely to be shot, and you benefit from being able to carry guns, or keep guns in your home, to protect yourself and your family.

For one example, consider the "Grooming gangs" in the UK, where thousands of men raped thousands of girls for decades with the tacit knowledge/permission of authorities - and despite the pleas of the girls and parents for help. Such a thing could be handled quite differently in a society that was well armed. If the police wouldn't help you, you might settle the matter yourself.


This looks pretty trivial. Obviously, modern gains in life expectancy came from removing things that killed us at an early age. That says nothing about future gains in life expectancy, which may come from biological or medical interventions that reduce senescence.


This is like the worst case of "Sales promises features that don't exist" ever.


Musk's overpromised Full Self Driving is driving Tesla customers insane, and they're finally breaking away from his death cult.

"All I want is a refund!"

https://www.youtube.com/watch?v=fQaavQNGsMY


I have a 3 year old who uses AI by talking to ChatGPT's advanced voice mode. He enjoys talking to it and we also use image generation to generate images he's interested in.

He also likes to translate words into different languages. He uses Alexa for this, constantly translating stuff into all the languages he knows about, and he learns that other languages exist when Alexa misunderstands him and translates into a new one. E.g. yesterday he asked Alexa to translate "eat spinach", it misunderstood him as asking for "eat" in "Finnish", and now he knows there's a new language he can translate to.

One thing that comes up a lot in our household is which things in the house are intelligent and which are not. For example, I once heard my son asking the fan to turn itself on, which seems pretty reasonable since some things in our house do respond to voice commands, and sorting out which do and which don't is not intuitive. There's a similar issue in the car. When our car's audio system is connected to a phone, my son can control what music is playing by saying "Okay Google, play labubu", but when we're listening to the radio, that doesn't work, and sometimes he will try to command, or ask us to control, the radio anyway (e.g. "restart this song"). It's a difficult concept to explain to a child that we control what plays from the phone but not on the radio, and why.

Another AI activity we've done is vibe coding. My son is a big fan of numbers and countdowns, and asking Claude to generate a webpage with colorful numbers counting up and down and buttons to click to change the numbers and animations and so on works really well.
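For anyone curious, the core of such a page is just a tiny bit of counter state wired to buttons. Here's a hypothetical minimal sketch of the counting logic (names like `makeCounter` are my own illustration, not from any actual Claude output; a real vibe-coded page would wire these to buttons and animate the digits):

```javascript
// Hypothetical sketch of the counting logic behind a "colorful numbers" page.
// A generated page would attach up()/down() to click handlers and render get().
function makeCounter(start = 0, step = 1) {
  let value = start;
  return {
    up: () => (value += step),    // "count up" button
    down: () => (value -= step),  // "count down" button
    get: () => value,             // current number to display
  };
}

// e.g. a countdown starting at 10
const countdown = makeCounter(10, 1);
countdown.down(); // value is now 9
```

The fun part for a kid isn't the state logic, of course, but the colors and animations layered on top, which is exactly the part Claude is good at generating on demand.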


One big challenge I see with this is that it will attract people who struggle to make friends. A club of lonely men seems like a place I would be embarrassed to go to and hesitant to make friends at. Usually friends have more to recommend them than loneliness.


That's exactly how meetups always felt to me.


In economics, this is known as the problem of adverse selection.


Why would we be boiling to death in this situation? Jupiter is much farther from Earth than the Sun is, and Jupiter is also much smaller. Heat would increase, but probably not by that much.


I would rather expect Earth not to have a stable orbit: either ripped apart by fluctuating tidal forces, flung away, or thrown into one of the suns (then boiling would happen, briefly), or just generally a much more extreme place compared to now.


Even without telepathy, I think AI will get there. Doctors don't have that much time with, or access to, a patient. Imagine telling ChatGPT what you feel and what your symptoms are; it asks follow-up questions, gives some suggestions on changes or over-the-counter remedies, and comes to a diagnosis.

Once ChatGPT has done that 10 million times, and can learn from or search those records, vague descriptions of symptoms will likely sound pretty similar to cases it has already seen.


It won't, because LLM execution is fundamentally nondeterministic. The patient will describe something irrelevant to the diagnosis (because the patient doesn't know what is relevant), the LLM will latch onto that, and your kidney cancer will be diagnosed as a knee fracture.


That should work, if the AMA ever allows it.

I'm thinking AI + MRI imaging could figure out a lot.

Much work to do...

