This is how a lot of the world works. Certain things aren't done much because they take a lot of human effort, and that creates a status quo.
For example, a lot more people would sue each other over petty things if it suddenly became very easy and cost-efficient. It's not, so they don't.
Another example of AI doing this exact type of thing in another realm: in the past, convincing someone you were somebody they should give money to in a scam was very possible, but also difficult and not very cost-efficient. You could try to impersonate someone's daughter or a police officer, but it took a lot of effort to get it right.
Now, with voice-mimicking AI, deepfakes, social media to mine for personal info, etc., it's not as difficult, and so, very likely, it's becoming a bigger problem than it was.
You mean Microsoft is a menace. Microsoft has been tricking generations of people into using OneDrive. I hope nobody is dumb enough to pay for it; I'd sooner create a ton of fake emails and fill it up with junk.
I wouldn't be too shocked if they're real. They aren't going to make humanoid robots that are actually useful and don't price 99% of people out of buying them, and they have to come out with something new eventually.
And they can copy a lot of features from the better, cheaper Chinese cars and just sell them for 3x as much in the American market, because they have no competition here and the Chinese are barred from selling their cars here.
Still, even if they are real, it doesn't mean their company should be valued at 21x Ford's value, or even 1x Ford's.
If an AI company has done unethical things, do you think it is inappropriate to discuss that? Take Grok: among other things, it created sexualized images of underage women without their consent, not by accident but as a feature. Is that just something you want to ignore? In response, the people in charge merely restricted the feature to paid subscribers instead of removing it.
Do you think people who mention Grok creating CSAM have a holier-than-thou attitude? Do you not think the people who ignore that are worse than other people?
I don't see a comment in this thread concretely discussing said unethical things.
Not sure why you felt the need to switch the topic to Grok. As for its nudification incident, it seems a bit of a stretch to say that malicious actors bypassing its safety controls was not an accident.
Initially, the image features were restricted to paying subscribers to prevent abuse by anonymous actors; this obviously happened while they were tightening safety controls against abuse.
If you're going to bring up that old topic, at least try to get the facts straight.
I switched to Grok because it's a very cut-and-dried case of an AI company having poor ethics.
To me it seems like MUCH more of a stretch to think that the people behind Grok believed their safety controls worked, but you can believe that if you wish. Deepfakes of non-consenting adults were trending on X all the time, and Elon even appears to have shared them himself, which is pretty bad even if they're all adults. And I'm sure you believe that they believed the AI could perfectly tell the difference between an underage person and an adult, although it seems clear they didn't test it very much.
I for one am appalled at TCP/IP because it facilitates so much unethical behavior. I of course am holier than thou because I do not ignore this and am a voice that raises awareness. I shall not be silenced!
Heh, I never thought about that, but it's so true. If society breaks down on the extreme level they anticipate, the smart thing to do is probably join a super tight-knit community with lots of young people - maybe the furries or the Amish.
If society breaks down, it will be too late for nearly all outsiders to join such groups, unless you bring very valuable skills or other attributes to the table.
The time to build your community is now, before things get so bad that every helpless individual is looking for a group to save them.
I wonder whether The Walking Dead ever did episodes with a surviving Amish community among its many spinoffs. A potential problem for them is being outgunned by any aggressive community nearby.
There are a lot of different kinds of LLMs. None of the ones I've encountered are good writers; in fact, all of them are horrible at it.
But I wonder if there's one out there that I don't know about, with a different kind of training, that actually is good at writing and fun to talk to for a long time. (Granted, some people love talking to GPT-4, but some people also loved talking to ELIZA, so clearly some people have a super high tolerance for slop.)
I can just see you telling that to a slave in 1830. What? You don't like slavery? Don't you see that you have to expect the slave owners to act in accordance with the incentives God gave them and force you to gather cotton until your hands bleed? Change it or cope, dude. Or someone watching from the hills as the horde of Genghis Khan torches their city and puts everyone they've ever known to the sword - those Mongols are acting in accordance with incentives! Change it or cope!
I dabble in ASCII art and use Playscii these days. It's still pretty hard to make amazing-looking art even with these great tools, which just shows how legendary the demoscene is.
I was looking for photos of NYC in the 1990s a few weeks ago. I eventually found some, but my search was greatly obstructed by AI photos of NYC in the 1990s.
The experience made me certain that AI is going to do much more harm than good to the business of archiving historical photos.
As for the lady who is distorting photos to colorize them - I don't even understand why you would want to do that. There are other ways!
Yeah, you're right. That's why she's doing it. But it's a weird idea: I like this historical photo, so I'm going to distort it in order to add color, which makes it not a historical photo anymore. I guess to her the distortion is so minimal it loses nothing, but to me it loses everything.
It's like saying "I love Da Vinci's art, so I'm going to draw a moustache on everyone in The Last Supper," which you probably wouldn't do if you really loved Da Vinci's art.
There are some pretty obvious distortions when you look closely at the difference between the historical and AI-corrupted images. But I have to admit, the colorized one has a nice vibe to it; if you don't look too closely, it gives a really nice feel for what the moment was actually like, more than the accurate black-and-white.
Which is to say, I think it comes down to what you value most in historical photos: a forensic record of truth, or a general idea of what it was like to live at the time, compared to today.
The photo is oversaturated and psychedelic. It seriously looks like what the world looks like on a dose of drugs. I much prefer the black-and-white one. They're both unreal in their own "same same, but different" ways.
I'm firmly against uncontrolled AI use. But as long as the edits are strongly labeled, I have to say I enjoy the effect.
Maybe it's because I'm too young and have never had B&W content around, but the edited picture lets me experience the photograph as real, as a place I could have walked around, which I can't really do with the original. I find that effect more valuable than a specific roof being deformed or whatever.
The effect bugs me personally mainly because the cars are implausible colors and there are a ton of small changes to, e.g., the windows on the campers. But even more annoyingly, most of her posts are just the color photos without even the source pic. She clearly enjoys it, and many people in the comments do too, but I just have this existential dread that those photos will be slurped up in the next AI push and treated as historical truth in the future.
Then you pass both the original and the moustache'd photo across the table while boisterously announcing "look how absurd it is to love something so wholly and completely!" to the room instead of to the person the photographs were passed to!
Yeah, I agree that small social networks are better. But some people are just bad at using social media, even if they're great people in other ways, so they share AI slop and made-up political bait posts. You may have to curate your feed a bit.