
Why would you expect it to generate more effective skills when you aren't even making a salt circle or lighting incense?

LLM-generated blogslop is getting published in The Atlantic now?

"ordering" means arranging things in order by some metric.

"sorting" means assigning things into bins (which are usually ordered).


This is news to me. Source?

That's because it's not true.

https://www.merriam-webster.com/dictionary/ordering

Order - transitive verb - 1. to put in order : arrange - "The books are ordered alphabetically by author."

noun - 4. b(1) the arrangement, organization, or sequence of objects or of events - "alphabetical/chronological/historical order" "listed the items in order of importance"

https://www.merriam-webster.com/dictionary/sorting

Sort - transitive verb - 1. to put in a certain place or rank according to characteristics - "sort the mail" "sorted the winners from the losers" "sorting the data alphabetically"

noun - 5. an instance of sorting - "a numeric sort of a data file"


I don't get how this disagrees with what GGP wrote.

https://dictionary.cambridge.org/dictionary/english/sort

to put a number of things in an order or to separate them into groups: Paper, plastic, and cans are sorted for recycling.

sort something into something I'm going to sort these old books into those to be kept and those to be thrown away.

sort something by something You can use the computer to sort the newspaper articles alphabetically, by date, or by subject.

sort (through) She found the ring while sorting (through) some clothes.


"The children were sorted in to two lines by gender then ordered by height"

You might substitute "sorted by height", but it's certainly not a correction, while "ordered into lines" would be an error.


What do you do when you sort your washing?

The sorting office for a postal service.

and don't forget "subtle backdoors disguised as regular example code"


I mean, people who use LLMs to crank out code are cranking it out by the millions of lines. Even if you have never seen it used toward a net positive result, you have to admit there is a LOT of it.


If all code is eventually tech debt, that sounds like a massive problem.


If an app makes a diagnosis or a recommendation based on health data, that's Software as a Medical Device (SaMD) and it opens up a world of liability.

https://www.fda.gov/medical-devices/digital-health-center-ex...


How do you suggest to deal with Gemini? It's extremely useful for understanding whether something is worrying or not. Whether we like it or not, it's a main participant in the discussion.


Ideally, hold Google liable until their AI doesn’t confabulate medical advice.

Realistically, sign a EULA waiving your rights because their AI confabulates medical advice


Apparently we should hire the Guardian to evaluate LLM output accuracy?

Why are these products being put out there for these kinds of things with no attempt to quantify accuracy?

In many areas AI has become this toy that we use because it looks real enough.

It sometimes works for some things in math and science because we test its output, but overall you don't go to Gemini and have it say "there's an 80% chance this is correct". At least then you could evaluate that claim.

There's a kind of task LLMs aren't well suited to because there's no intrinsic empirical verifiability, for lack of a better way of putting it.


Because $$$


Because privatized $$$ and public !!!


> How do you suggest to deal with Gemini?

Don't. I do not ask my mechanic for medical advice; why would I ask a random output machine?


This "random output machine" is already in large use in medicine so why exactly not? Should I trust the young doctor fresh out of the Uni more by default or should I take advises from both of them with a grain of salt? I had failures and successes with both of them but lately I found Gemini to be extremely good at what it does.


The "well we already have a bunch of people doing this and it would be difficult to introduce guardrails that are consistently effective so fuck it we ball" is one of the most toxic belief systems in the tech industry.


> This "random output machine" is already in large use in medicine

By doctors. It's like handling dangerous chemicals. If you know what you're doing you get some good results, otherwise you just melt your face off.

> Should I trust the young doctor fresh out of the Uni

You trust the process that got the doctor there: the knowledge they absorbed, the checks they passed. The doctor doesn't operate in a vacuum; there's a structure in place to validate critical decisions. Anyway, you won't blindly trust one young doctor; if it's important, you get a second opinion from another qualified doctor.

In the fields I know a lot about, LLMs fail spectacularly so, so often. Having that experience and knowing how badly they fail, I have no reason to trust them in any critical field where I cannot personally verify the output. A medical AI could enhance a trained doctor, or give false confidence to an inexperienced one, but on its own it's just dangerous.


There's a difference between a doctor (an expert in their field) using AI (specialising in medicine) and you (a lay person) using it to diagnose and treat yourself. In the US, it takes at least 10 years of studying (and interning) to become a doctor.


Even so, it's rather common for doctors to not be able to diagnose correctly. It's a guessing game for them too. I don't know so much about the US, but it's a real problem in large parts of the world. As the comment stated, I would take anything a doctor says with a pinch of salt. Particularly so when the problem is not obvious.


These things are not equivalent.

This is really not that far off from the argument that "well, people make mistakes a lot, too, so really, LLMs are just like people, and they're probably conscious too!"

Yes, doctors make mistakes. Yes, some doctors make a lot of mistakes. Yes, some patients get misdiagnosed a bunch (because they have something unusual, or because they are a member of a group—like women, people of color, overweight people, or some combination—that American doctors have a tendency to disbelieve).

None of that means that it's a good idea to replace those human doctors with LLMs that occasionally make up brand-new diseases that don't exist.


It takes 10 years of hard work to become a proficient engineer too, yet that doesn't prevent us from missing things. That argument cannot hold. AI is already widespread in medical treatment.


An engineer is not a doctor, nor a doctor an engineer. Yes, AI is being used in medicine - as a tool for the professional - and that's the right use for it. Helping a radiologist read an X-ray, MRI scan or CT scan, helping a doctor create an effective treatment plan, warning a pharmacologist about unsafe combinations (dangerous drug interactions) when different medications are prescribed, etc. are all areas where an AI can make the job of a professional easier and better, and also help create better AI.


And where did I claim otherwise? You're not disagreeing with me but only reinforcing my point


When a doctor gets it wrong they end up in a courtroom, lose their job and the respect of their peers.

Nobody at Google gives a flying fuck.


Not really, these are exceptional cases. For most misdiagnoses, or failures to diagnose at all, nothing happens to the doctor.


Why stop at AI? By that same logic, we should ban non-doctors from being allowed to Google anything medical.


Nobody can (nor should) stop you from learning and educating yourself. However, it doesn't mean that just because you can use Google or AI, you can consider yourself a doctor:

- Bihar teen dies after ‘fake doctor’ conducts surgery using YouTube tutorial: Report - https://www.hindustantimes.com/india-news/bihar-teen-dies-af...

- Surgery performed while watching YouTube video leaves woman dead - https://www.tribuneindia.com/news/uttar-pradesh/surgery-perf...

- Woman dies after quack delivers her baby while watching YouTube videos - https://www.thehindu.com/news/national/bihar/in-bihar-woman-...

Educating a user about their illness and treatment is a legitimate use case for AI, but acting on its advice to treat yourself or self-medicate would be plain stupidity. (Thankfully, self-medicating isn't as easy because most medications require a prescription. However, so-called "alternative" medicines are often a grey area, even with regulations (for example, in India).)


> This "random output machine" is already in large use in medicine so why exactly not?

Where does "large use" of LLMs in medicine exist? I'd like to stay far away from those places.

I hope you're not referring to machine learning in general, as there are worlds of differences between LLMs and other "classical" ML use cases.



Instead of asking me to spend $150 and 4 hours, could you maybe just share the insights you gained from this course?


No, I'm not asking you to spend $150, I'm providing the evidence you're looking for. Mayo Clinic, probably one of the most prominent private clinics in the US, is using transformers in their workflow, and there are many other similar links you could find online, but you choose to remain ignorant. Congratulations.


The existence of a course on this topic is NOT evidence of "large use". The contents of the course might contain such evidence, or they might contain evidence that LLM use is practically non-existent at this point (the flowery language used to describe the course is used for almost any course tangentially related to new technology in the business context, so that's not evidence either).

But your focus on the existence of this course as your only piece of evidence is evidence enough for me.


Focus? You asked me for evidence. I provided it, and from a source that carries a lot of weight. If that's the focus you're looking for, then sure. Take it as you will, I am not here to convince anyone of anything. Have a look at the recent past to see how transformers have solved long-standing problems nobody believed were tractable up to that point.


An online course, even if offered by a reputable medical institution, hardly backs your argument.


An LLM is just a tool. How the tool is used is also an important question. People vibe code these days, sometimes without proper review, but do you want them to vibe code a nuclear reactor controller without reviewing the code?

In principle we could just let anyone use an LLM for medical advice, provided that they know LLMs are not reliable. But LLMs are engineered to sound reliable, and people often just believe their output. And cases have shown that this can have severe consequences...


- The AI systems mostly in use in medicine are not LLMs.

- Yes. All doctors' advice should be taken cautiously, and every doctor recommends you get a second opinion for that exact reason.


> How do you suggest to deal with Gemini?

With robust fines based on a percentage of revenue whenever it breaks the law would be my preference. I'm not here to attempt solutions to Google's self-inflicted business-model challenges.


If it's giving out medical advice without a license, it should be banned from giving medical advice and the parent company fined or forced to retire it.


No.

"Whether we like it or not" is LLM inevitabilism.

https://news.ycombinator.com/item?id=44567857


Yes.

  >Argument By Adding -ism To The End Of A Word
Counterpoint: LLMs are inevitable.

Can't put that genie back in the bottle, no matter how much the powers-that-be may wish. Such is the nature of (technological) genies.

The only way to 'stop' LLMs is to invent something better.


Depends on whether the cost of training can be recouped (with profit) from the proceeds of usage. Plenty of inventions prove themselves to be uneconomic.


Thought-terminating cliché.


As a certified electrical engineer, I find it staggering how many times Google's LLM has suggested something that would, at minimum, have started a fire.

I have the capacity to know when it is wrong, but then I teach this at university level. What worries me are the people at the starting end of the Dunning-Kruger curve who take that wrong advice and start "fixing" things in spaces where this might become a danger to human life.

No information is superior to wrong information presented in a convincing way.


The Land of Eternal Motion in HyperRogue is an example of a hyperbolic snake game. You can continue moving indefinitely with an infinite tail.

https://hyperrogue.miraheze.org/wiki/Land_of_Eternal_Motion


Small correction: AI was not invented and it did not arrive.


it's physically painful for me to stop talking about my topic


it's physically painful for everyone else to listen to you talking about your topic... if you talk for too long, that is. :)


The (in)famous astronomer Tycho Brahe died from a bladder infection after politeness prevented him from leaving the audience on such an occasion.

https://en.wikipedia.org/wiki/Tycho_Brahe#Illness,_death,_an...


Indeed! It's said that no talk should be longer than a microcentury.

https://www.ams.org/notices/199701/comm-rota.pdf

https://susam.net/microcentury.html
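For reference, a quick back-of-the-envelope conversion (plain arithmetic, not taken from the linked pieces):

  # How long is a microcentury? About 52.6 minutes.
  minutes_per_year = 365.25 * 24 * 60           # ~525,960 minutes
  microcentury = 100 * minutes_per_year / 1e6   # a century in minutes, times 10^-6
  print(round(microcentury, 1))                 # 52.6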


Nice one! I would've said half a microcentury would be the perfect limit for most school lectures, but for an engaging talk a full microcentury would be acceptable :-)


> if you talk for too long that is

I've made that mistake. Talking for longer than 50 minutes is a bad idea.


LOL. I get writer's cramp every time I write a check.


That’s funny because I get dementia every time I have to use my debit card. No matter how many times I think I know where it is, it isn’t there.


It's certainly a problem when circular investment structures are used to get around legal limits on the amount of leverage or fractional reserve, or to dodge taxes from bringing offshore funds onshore.


Plenty of sneaky ways of using different accounting years offshore to push taxes forward indefinitely too, since the profit is never present at the year end.

