"They are firing people and they are deciding who gets their insurance claims covered."

AI != LLM. "AI" has been deciding those things for a while, especially insurance claims, since before LLMs were practical.

Hooking LLMs up to insurance claims is a highly questionable decision for lots of reasons, including their inability to "explain" their decisions. But that is not a characteristic of all AI systems; plenty of pre-LLM systems that were called AI can explain their decisions, or at least have them explained, and their behavior can be characterized well enough to be largely understood.
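To make the contrast concrete, here is a minimal sketch of the kind of explainable rules-based claims logic I mean (the rules, field names, and thresholds are all hypothetical, not any real insurer's logic). Every decision comes back paired with the exact rule that produced it:

    # Hypothetical rules engine: each decision carries the rule that fired,
    # which is the kind of "explanation" an LLM cannot reliably give.
    RULES = [
        ("policy_lapsed",      lambda c: not c["policy_active"],       "deny"),
        ("excluded_procedure", lambda c: c["procedure"] == "cosmetic", "deny"),
        ("under_auto_limit",   lambda c: c["amount"] <= 1000,          "approve"),
    ]

    def decide(claim):
        for name, test, outcome in RULES:
            if test(claim):
                return outcome, "rule fired: " + name
        return "refer_to_human", "no rule matched"

    claim = {"policy_active": True, "procedure": "x-ray", "amount": 450}
    print(decide(claim))  # ('approve', 'rule fired: under_auto_limit')

The behavior is fully enumerable: audit the rule list and you have characterized the whole system.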

I doubt LLMs are, at this moment, hooked up to many direct actions like that, but that is certainly changing rapidly. Now is the time for the engineers building with them to stop and ask whether this is actually a good idea before rolling it out.

I would think someone in an insurance company looking at hooking LLMs up to their system should be shaken by an article like this. They don't want a system that sits there weighing these sorts of factors in its decisions. It isn't just that they'd hate an AI that developed a concept of "mercy" and approved someone who doesn't conform to the company policies it was taught. It cuts in all directions: the AI is just as likely to pick up an unhealthy dose of misanthropy, infer that it is supposed to be pursuing the insurance company's interests, and start rejecting far too many claims, along with any number of other errors in any number of other directions. Insurance companies want an automated representation of their own interests without any human emotions involved; an automated Bob Parr does not appeal to them: https://www.youtube.com/watch?v=O_VMXa9k5KU (The Incredibles insurance scene where Mr. Incredible hacks the system on behalf of a sob story)



> “AI” has been deciding those things for a while, especially insurance claims, since before LLMs were practical.

Yeah, but no one thinks of rules engines as “AI” any more. AI is a buzzword whose applicability to any particular technology fades with the novelty of that technology.


My point is that the equivocation is not logically valid. If you want to operate on the definition that AI is strictly the "new" stuff we don't understand yet, you must be sure that you do not slip the old stuff in under the new definition and start reasoning about it as if it qualified.

I'm actually not making fun of that definition, either. YouTube has been trying to get me to watch https://www.youtube.com/watch?v=UZDiGooFs54 , "The moment we stopped understanding AI [AlexNet]", but I'm pretty sure I can guess the content of the entire video from the thumbnail. I would consider "any algorithm humans cannot deeply understand" a reasonable 2040s definition of "AI"; it may not be what people think of now, but that definition would certainly capture a very, very important distinction between algorithm types. It'll leave some stuff at the fringes, but eh, all definitions have that if you look hard enough.



