I am yet to see a robot that could clean my bathroom. And I have a pretty basic bathroom: a toilet, a shower, a bathtub, a sink, a mirror, some shelves, laundry baskets, a washing machine, a window, a door, a floor.
How would you design a robot that can clean all of those?
I appreciate that you are taking the Janitor example quite literally.
From a jobs-to-be-done perspective on housekeeping tasks, there has been a lot more progress in the past year than I realized. I thought the same until it was shared with me.
Speaking of "the West" is dumb, ignorant, and, worst of all, not really helpful or insightful. The same goes for speaking of "the East" or "Asia". It just doesn't make sense to make these broad, generalized statements about multiple self-governing countries spanning hundreds of millions of people and thousands of square kilometers.
What a perfect example to demonstrate the "collective ignorance and hypocrisy of western people" they were mentioning. There is a dichotomy in "Corruption", it is weaponized as a tool of neocolonialism in the continued subjugation of the global south and systematically downplayed and re-framed in the west.
The west is just shorthand for countries within the global north that are part of the international liberal order. This is all well-established terminology, including "western imperialism" and "western hegemony". It's not our fault you are hearing these words for the very first time.
1. Certain users do not like "political" topics on the front page. But as I said, the very idea of "apolitical" tech news is naive, especially in times like these.
2. Some users want to suppress it because it goes against their own political interests.
Either way, it's a gross misuse of the flag button. I am wondering: are there any consequences for wrongly flagging submissions?
So what? I don't really care if you are proud of your work if I think your work is objectively evil. I imagine the designers of the gas chambers at Auschwitz were also proud of their "good" work. Yeah, I ain't empathizing with them either, and you can call that a "blind spot".
Where in the parent comment do you see them saying they are objectively “right”?
I read it as honestly subjective: “I see morality this way, you see it another way. If you act in a way that my morality deems evil, I will judge you for it regardless of how it fits into your belief system.”
If you can conclusively prove that liberals are evil, be my guest. Give me your rhetorical coup de grâce that annihilates half of America's voting bloc.
Even the Nazis retained the ability to judge their opponents when WWII ended. Pity that their opponents were hangmen that wanted them to answer for murder.
Actors being this wealthy and famous has always been a mystery to me. Oh, so you are a good-looking person who recites other people's words for money while faking emotions? And you can do as many takes as you need, and your fuckups will be corrected in post-production anyway? Well, I guess the work you do totally merits the hundreds of millions of dollars you've amassed.
Like, even kicking a ball or whatever makes more sense to me, because there is an objective measurement of what it means to do it well, while with actors it's mostly about sympathy or preference.
Actors have a kind of legally-enforced monopoly. They're not employees you hire, they're products you buy.
If you want to make a movie starring Nicole Kidman, you have to pay whatever Nicole Kidman wants you to pay. You're legally forbidden from hiring an "off-brand" person and making her look indistinguishable from Kidman.
If you want to hire a Scala programmer, there's plenty of easily-replaceable people willing and able to do that job. No single person dictates how much money Scala programmers make.
Famous actors are basically a category of which they're the only member, and so they can set their prices. You can switch to a different category (just as you can switch from Scala to TypeScript) if one becomes too expensive, but that too carries some expense.
Franchises have a similar problem. If all your friends are watching Game of Thrones, you too want to watch Game of Thrones, even if there are other shows which are just as good. This means the Makers of GoT can dictate GoT prices, because the government gives them a legal monopoly on GoT distribution.
There's certainly a lot of actors that seem to just phone in a performance and are mainly hired due to their looks and high profiles, but don't forget about the actors that can elevate just about any role that they're in due to their skills and artistry.
It's celebrity. People want to imagine themselves like these icons they've built, even if only through the laziest of efforts. I wonder if it's an innate human trait to aspire to be like those we admire.
Clearly "tasting good" is not the primary driver behind all of this. Good taste is an incentive to satisfy something more primary, just as sex feels good in order to encourage procreation.
> Good taste is an incentive to satisfy something more primary,
Covered in my comment above: The more primary drive is that those are high calorie foods. A drive to consume more high calorie foods is beneficial in times of food scarcity, like the past.
I believe there is a lot of shame-induced ignorance around this whole subject. Culturally, poopin' is in a similar category to sex or death, outlawed from most "civilized" debates. But consider how central digestion is to our existence: before almost everything else, we must consume -> digest -> expel. You are not getting that smart brain of yours without that poopy butthole to go along with it.
I believe the issue of proving who is and who isn't really human on the Internet will be a really important issue in the coming years, especially without sacrificing people's right to privacy and anonymity in the process.
I hope I'm wrong but I don't think a privacy friendly alternative is going to exist. It's going to go the way of show me your drivers license to use my site.
Why wouldn't criminals just use stolen identities, like they do now? And if someone verifies they are a person, that doesn't mean they're not leaving their PC on with some AI that uses their credentials.
The point of these systems is not to ban any possibility of fake accounts. The point is to add friction so that creating accounts is harder than banning them, so criminals can’t recreate them at scale. Otherwise bans take seconds to overcome and a single person can run 10000 automated identities.
No credential will be sufficient, this is basically an unsolvable enforcement problem. That doesn't obviate the utility of rules and norms, but there's no airtight system which will hold back AI generated content.
Verifiable credentials have been an idea for a long time now. It wouldn't be that hard to solve. Sign everything you post with a verifiable credential. Implement support on all social media sites. The question is whether the forum implementers, governing bodies, and social media site owners want to try to build a solution like this or not.
It doesn’t stop people posting AI slop, it stops people from posting AI slop more than once. If you ban somebody for spamming today, they just create a new account and keep on spamming. If you can determine they are the same person you banned before using verifiable credentials, it makes the ban actually effective.
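The signing part of this is mechanically simple. A minimal sketch, using Python's stdlib `hmac` as a stand-in for a real verifiable-credential scheme (a deployed system would use an asymmetric signature like Ed25519 so sites can verify without holding the secret; the key and post text here are invented):

```python
import hashlib
import hmac

def sign_post(secret_key: bytes, post_text: str) -> str:
    # Produce a tag binding this post to whoever holds secret_key.
    return hmac.new(secret_key, post_text.encode(), hashlib.sha256).hexdigest()

def verify_post(secret_key: bytes, post_text: str, tag: str) -> bool:
    # Constant-time comparison to avoid timing side channels.
    expected = sign_post(secret_key, post_text)
    return hmac.compare_digest(expected, tag)

key = b"holder-credential-secret"   # hypothetical credential key
tag = sign_post(key, "hello HN")
assert verify_post(key, "hello HN", tag)        # genuine post checks out
assert not verify_post(key, "tampered", tag)    # altered post fails
```

The ban-effectiveness argument only needs the binding: if every post carries a tag tied to one credential, banning the credential bans all of its future posts at once.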
Layer on captchas. It won't completely stop slop but it's an incentive against slop flooding. And I mean, nothing is stopping a human from just going into ChatGPT by hand and asking for output and copy/pasting that into an HN post box.
I feel like we need a distributed system/protocol that allows people to have pseudonyms not linked to their real identity, but with a shared reputation/trust score, so if you’re a bad actor using a pseudonym your real identity and all your other sock puppets are penalized too.
I know very little about this, but I sense that some combination of buzzwords like homomorphic encryption, zk-SNARKs, and yes, blockchains could be useful.
Of course this would present problems if any of your identities were ever compromised and your reputation destroyed.
Driving everything by reputation-weighted identities just creates echo-chambers you then cannot escape.
The most useful time for the blowhard to spout off at me is the moment it makes me most uncomfortable. Because the blowhard probably has a valid point at some level; he's just being an ass about it.
When we meet that moment with discipline, are able to identify and respond to the kernels of truth, ignore the chaff belted out, and focus on the merits of the argument irrespective of the source of an adversarial viewpoint, we thrive.
I like the blowhards just the way they are, unruly and insolent.
That is exactly what will happen. The sad thing is, it needs to happen. I've found myself advocating for this lately, when 10 years ago, I wouldn't have even considered taking that position.
If Web3-like session-signing had taken off enough to become OS or even browser-native, we would have had a fighting chance of remaining mostly anonymous. But that just didn't happen, and isn't going to happen. Mostly because fraud ruined Web3.
There's literally no other way to combat rampant botting, child abuse, and nation-state originating disinformation campaigns and the intentional creation of public discord.
That's a false dichotomy. There are other possible approaches to address these issues that don't include ID verification. It also isn't the golden solution, verified accounts could still be stolen or bought.
You're a fool if you believe this. Nation states will still have utter impunity. That's why they build, buy, or bully backdoors into secure designs. The Epstein class will still get away with murder. All the little people will cower in fear of reprisal for speaking their minds.
A completely anonymous stranger has no way to prove that they're human that can't be imitated by an AI. We've even seen that, in some cases, AIs can look more human to humans than real humans do.
The only solution I can think of to that problem is some sort of provenance system. Even before AI, if some random person told me a thing, I'd ignore them; If my most trusted friend told me something, I'd believe them.
We're going to need a digital equivalent. If I see a post/article/comment I need my tech to automatically check the author and rank it based on their position in my trust network. I don't necessarily need to know their identity, but I do need to know their identity relative to me.
Reputation tracking is the key. The simplest option is open-invite, invite-only spaces: any user can invite more users, but only users with an invite can participate. Most Discord servers work like this, secret societies like the Oddfellows do, as does the other site.
If you keep track of the invite tree, you can "prune" it as needed to reduce moderation load: low-quality users don't tend to be the source of high-quality users, and in the cases where they are, those high-quality users tend to find other people willing to vouch for them faster than their inviter catches a ban.
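The pruning operation is just a subtree walk. A minimal sketch (the user names are invented; a real system would flag the subtree for review rather than ban it outright):

```python
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    invitees: list = field(default_factory=list)  # users this user invited

def subtree(user):
    # Every user whose invite chain passes through `user`.
    out = [user]
    for child in user.invitees:
        out.extend(subtree(child))
    return out

root = User("admin")
spammer = User("spammer")
sock = User("sockpuppet")
spammer.invitees.append(sock)
root.invitees.append(spammer)

# Banning "spammer" flags their whole invite subtree, not the rest of the tree.
flagged = {u.name for u in subtree(spammer)}
assert flagged == {"spammer", "sockpuppet"}
assert "admin" not in flagged
```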
The open-invite system works well in many cases. It works particularly well in-person but even there you can get drift over time. Our fraternity unanimously agreed on every single initiate who joined; the cohort today is still very different from the one 20 years ago.
In online systems the scales quickly get too big for open-invite. There needs to be a way to automatically update the trust network at a fine grain.
The one that jumps to mind is an inference system; when I +/- a comment, I'm really noting that I trust or distrust the author. It can be general or on a specific topic (eg I trust the author to tell the truth or I trust the author to make me laugh). I could also infer that other people with similar trust patterns are likely trustworthy. And I could likely infer that people who are trusted by people I trust are trustworthy.
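The inference described above can be sketched as vote-pattern similarity: people who vote like me probably share my trust network, so their votes on authors I haven't rated extend it. A toy version (all names and vote values invented):

```python
from math import sqrt

votes = {  # voter -> {author: +1 upvote / -1 downvote}
    "me":    {"alice": 1, "bob": 1, "carol": -1},
    "dana":  {"alice": 1, "bob": 1, "erin": 1},
    "troll": {"alice": -1, "carol": 1},
}

def similarity(a, b):
    # Cosine similarity between two voters' +/-1 vote vectors
    # (absent votes count as zero).
    common = votes[a].keys() & votes[b].keys()
    if not common:
        return 0.0
    dot = sum(votes[a][k] * votes[b][k] for k in common)
    return dot / sqrt(len(votes[a]) * len(votes[b]))

def inferred_trust(me, author):
    # Weight every other voter's opinion of `author` by how much
    # their voting pattern agrees with mine.
    return sum(
        similarity(me, voter) * votes[voter][author]
        for voter in votes
        if voter != me and author in votes[voter]
    )

# "dana" votes like me, so her upvote of "erin" carries over;
# "troll" votes against me, so his upvote of "carol" counts negatively.
assert inferred_trust("me", "erin") > 0
assert inferred_trust("me", "carol") < 0
```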
Yes, and they're all full of suckers. In the best case, which is already bad, you get a pretentious online night club like Clubhouse; in the worst case, you get Epstein's island.
These walled-off societies always attract people who are drawn to exclusivity, are run like dystopian island communities or high school cliques, and tend to, in a William Gibson 'anti-marketing' way, be paradoxically even more vapid.
No, you need actual open access and reputation systems. A good blueprint is something like a well-functioning academic community: a combination of eliminating commercial motives, strict rules, a high premium on reputation and correctness, peer review, and arguably also real identities and faces.
I don't think the real issue is LLM posts. The issue with low quality on the Internet has always been quantity. The problem has always been humans who post too much, humans who use software to post too much, and now it's humans who use LLMs to post too much.
The problem with a medium that is completely free and unrestricted is that whoever posts the most sort of wins. I could post this opinion 30-40 times in this thread, using bots and alternative accounts, and completely move the discussion to be only this.
Someone using an LLM to craft a reply is not a problem on its own. Using it to craft a low-effort reply in 3 seconds just to get it out is the problem.
> Someone using an LLM to craft a reply is not a problem on its own.
No, someone using an LLM to craft a reply is a problem on its own. I want to hear what a human has to say, not a human filtered through a computer program. No grammar editing, nothing. Give me your actual writing or I'm not interested.
Do you though? Like what real difference does it make to you? Can you even tell if this has been passed through an LLM or not? If you can't tell, why does it matter?
I don't want to be robo-slopped at en masse or be fed complete fabrications but neither of those actually require an LLM. If you're going to use an LLM to gather your thoughts, I don't see a problem with that.
The difference is that you get to see the unfiltered, unique perspective of a real human being. Just like I don't want to talk to anyone through an Instagram or TikTok beauty filter or accent remover. If your thoughts are unordered, that's okay; I'll take your unordered thoughts over some smoothed-over crap.
Do people really have such a low opinion of themselves that they have to push every single thing through some kind of layer of artifice?
> the difference is that you get to see the unfiltered, unique perspective of a real human being.
The implicit, unfounded assumption is that that's actually worth more than a well-written, orderly response. Most comments are kind of crap.
Not everyone is good at writing. In some cases, it might even be a disability aid. And if their comments aren't good, we have a system in place to rank them accordingly. Again, I think the only problem is quantity. If we're overrun with low-effort posts, no amount of ranking will help that.
> The implicit, unfounded assumption is that that's actually worth more than a well-written, orderly response.
It's not implicit or unfounded. The parent comment is explicitly saying that's what they prefer. And, as an actual human, their preference is intrinsically valid for them.
If I like my kid's crappy cooking over a Michelin-star meal made by a robot... then I get to like my kid's crappy cooking more. I have that right. There is no social consensus when it comes to what I want. You can't argue whether my preference is correct or not, it's my preference.
As a software developer and human being, I know people often say they prefer one thing while actually preferring something else. That's human nature.
People have strong feelings about AI in general, and that can definitely cloud what they will say about it. Everybody hates AI but, like CGI in movies, they likely only hate the AI or CGI that they notice.
Believing that, say, the use of AI will primarily enrich billionaires that are already doing societal harm is not clouding one's view of AI. It is one's view of AI.
To say otherwise is to say that worrying about lung cancer is clouding one's view of smoking.
> they likely only hate the AI or CGI that they notice.
No, this is simply not true at all. I dislike use of AI even more when I don't notice it. My goal getting on the Internet is to connect with other actual people and their creativity. I want actual people to be more connected to each other, and AI makes that worse, especially when it's good enough that people don't even realize they are being intermediated by corporations pumping out simulated humanity.
> Believing that, say, the use of AI will primarily enrich billionaires that are already doing societal harm is not clouding one's view of AI. It is one's view of AI.
That's fine. Nobody is forcing you to use AI. I dislike it when people force their ideas onto others.
> My goal getting on the Internet is to connect with other actual people and their creativity.
It's too bad your goal doesn't include interacting with people who don't speak your language and use AI to translate for them. Or people who struggle with writing in general. I don't think it's as black and white as you make it out to be.
> Nobody is forcing you to use AI. I dislike it when people force their ideas onto others.
I'm still being forced to live in a world filled with people who do use it and whose behavior affects me.
We had the President of the United States posting AI-manipulated propaganda on social media. Millions of voters saw that, regardless of whether or not I happen to personally use ChatGPT.
It doesn't matter if I light up a cigarette myself if I have to spend all day in a crowded bar where everyone else is smoking.
> I don't think it's as black and white as you make it out to be.
I'm not saying it's black and white. All I'm saying is that your description of someone's strong feelings about AI as "clouding" their stance is incorrect. You can be clear-headed about feeling something is a large net negative for the world.
> I'm still being forced to live in a world filled with people who do use it and whose behavior affects me.
My point... way at the top... is exactly that. People's behavior does have an effect but it always has.
The President of the United States posting manipulated propaganda is the problem; using AI now just makes it more obvious. It's actually better, right now, that it is so obvious. But anyone can do, and has done, that with lesser tools to better effect.
People posting bullshit on the Internet has always been a problem. I'm not even sure how an AI ban is enforceable. While I don't think I have the solution, I think it makes more sense to look at this as a content problem instead of a tool problem. Both quality and quantity.
If you had the LLM write the comment, then it wasn't your thoughts.
I sometimes wonder if people aren't forgetting why we're on this platform.
The goal is to have interesting discourse and maybe grow as a human by broadening your horizons. The likelihood of that happening with LLMs talking for you is basically nil, hence... why even go through the motions at that point? It's not like you get anything for upvotes on HN.
> If you had the LLM write the comment, then it wasn't your thoughts.
But what if I provided the LLM my thoughts? That's actually how I use LLMs in my life -- I provide it with my thoughts and it generates things from those thoughts.
Now if I'm just giving it your comment and asking it to reply, then yes, those aren't my thoughts. Why would I do that? I think the answer goes back to my original point.
If I'm telling you my thoughts and then you go and tell a friend those thoughts, would you say those are still my thoughts even though I wasn't the one expressing them directly to your friend?
I like to think about it in terms of output-to-prompt ratio. For HN comments, I think an output ratio of 1 or less is _probably_ fine. Examples:
- Translating (relatively) literally from one language to another is ~1:1.
- Automatic spelling/grammar correction is ~1:1.
- Using an LLM to help you find a concise way of expressing what you mean, i.e. giving it extra context to help it suggest a phrasing with the connotation you want, would be <1:1.
Expansion (output > prompt) is where it gets problematic, at least for HN comments: if you give it an 8 word prompt and it expands it to 50, you've just wasted the reader's time -- they could've read the prompt and gotten the same information.
(expansion is perfectly fine in a coding context -- it often takes way fewer words to express what you want the program to do than the generated code will contain.)
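The heuristic above reduces to a one-liner if you approximate it by word count. A toy version (the prompt/output strings are invented examples, and word count is obviously a crude proxy for information content):

```python
def output_to_prompt_ratio(prompt: str, output: str) -> float:
    # Words of generated output per word of prompt; >1 means expansion.
    return len(output.split()) / max(len(prompt.split()), 1)

# Cleanup/compression: the output is no longer than what the author typed.
prompt = "fix grammar: their going to the store"
edited = "They're going to the store."
assert output_to_prompt_ratio(prompt, edited) <= 1.0

# Expansion: a short prompt padded into filler wastes the reader's time.
short_prompt = "say why microservices are good"
padded = ("Microservices offer unparalleled scalability, resilience, "
          "agility, and maintainability, empowering teams to ship faster "
          "while reducing coupling across the organization at every level.")
assert output_to_prompt_ratio(short_prompt, padded) > 1.0
```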
As for expansion, that might just be the risk we take. I've been downvoted on reddit for being "too verbose" in my replies, and I'm a human. And perhaps just reading the prompt in that case wouldn't give you more information; the LLM might actually have some insight that is relevant to the conversation. What's the difference between that and googling for something and pasting it in?
The linked rule does not make such a distinction, and I don't see how this rule could be enforced with such a caveat, either.
Hence, no, none of these examples should be okay, even if pure translation and grammar checking are going to be effectively impossible to detect, and thus likely pointless to talk about.
And the last one is often detectable and very clearly against the rule; I'm not sure how you can come to any other conclusion.
> I don't see how this rule could be enforced with such a caveat
I don't see how this rule is going to be enforced anyway. Many people posting with AI help won't get noticed at all, and about 100 times as many people are going to be accused of using AI because they use proper grammar.
Amusingly, your comment carries some of the tropes of AI authorship ("is not a problem on its own.... is the problem"), but the fact that it's not shaped like a profound insight being discovered in every line is what makes it human.
How much AI writing will pass under the radar when the big companies aren't all optimizing to generate the most engagement-hacking content in a chatbot UI? Maybe it'll still stand out for being low quality, but I'm not sure. There's lots of low-quality, human-authored content.
Not sure where my comment is going, I just kinda rambled.
I'm going to guess we'll eventually settle on a pseudo-anonymous cert system like HTTPS, where some companies are entrusted with verification, and if that company says "that's definitely a human," it'll fly. Not a great solution, of course, but I really can't see a non-chain-of-custody/trust-based approach to the problem. Those approaches might only slightly compromise anonymity in optimal scenarios, but some compromise is inevitable.
Will it be? Or is the solution to move to smaller, trusted networks where there's less need for proof. Unfortunately I think the age of large scale open discussion forums like HN is coming to an end.
I think this is the most likely and best path. There's no stopping the flood of bots, the dead internet theory is beyond just a theory at this point.
Best we can do, for the internet and ourselves, is to move away from it and into smaller networks that can be more effectively moderated, and where there is still a level of "human verification" before someone gets invited to participate.
I don't like what that will do to being able to find information publicly, though. The big advantage of internet forums (that have all but disappeared into private Discords) is searchability/discoverability. Ran into a problem, or have a question about some super-niche project or hobby? Good chance someone else on the net has also had it and made a post about it somewhere, and the post and answers are public.
Moving more and more into private communities removes that, and that is a great loss IMO.
> Moving more and more into private communities removes that, and that is a great loss IMO
It is a great loss. Unfortunately this is a result of unchecked greed and an attitude of technological progress at any cost. Frankly we enabled this abuse by naively trying to maintain a free and open internet for people. Maybe we should have been much more aggressively closed off from the start, and not used the internet to share so freely.
The utility of those larger sites is coming to an end, but most people aren't discerning or ambitious enough to leave and seek out the smaller places you mentioned. Places like this will remain but will join Facebook, Reddit, and Twitter as shadows of their prior useful selves. The smaller, better sites won't have to worry about attracting the masses and therefore worsening, because the masses have finally settled.
> I believe the issue of proving who is and who isn't really human on the Internet will be a really important issue in the coming years
On a site like HN it's kinda easy to vet for at least those that already had thousands of karma before ChatGPT had its breakthrough moment a few years ago.
Now an AI could be asked to "Use my HN account and only write in my style" and probably fool people but I take it old-timers (HN account wise) wouldn't, for the most part, bother doing something that low. Especially not if the community says it's against the guidelines.
If it becomes one, then that will be the end of sites like Hacker News.
This site, at its core, is fundamentally too low-bandwidth, too text-only, and too hands-off-moderated to be able to shoulder the burden of distinguishing real human-sourced dialog from text generated by machines that are optimized to generate dialog that looks human-sourced. Expect the consequence to be that the experience you are having right now will drastically shift.
My personal guess: sites like this will slop up and human beings will ship out, going to sites where they have some mechanism for trust establishment, even if that mechanism is as simple and lo-fi as "The only people who can connect to this site are ones the admin, who is Steve and we all know Steve, personally set up an account for." This has, of course, sacrificed anonymity. But I fundamentally don't see an attestation-of-humanity model that doesn't sacrifice anonymity at some layer; the whole point of anonymity on the Internet was that nobody knew you were a dog (or, in this case, a lobster), and if we now care deeply about a commenter's nephropid (or canid) qualities, we'll probably have to sacrifice that feature.
This issue (human attestation) is the product of these AI companies. They are poisoning the well, only to sell the cure. This may not have initially been the plan of many of these companies, but it is the eventual end goal of all of them. Very similar to war profiteers: selling both the problem and the solution simultaneously has yet to be made illegal, but has long been masterfully capitalized on, and will continue to be, vigorously, because nobody will stop it.
Years ago (around 2020, when GPT-2 and 3 became publicly available) I noticed, and was incredibly critical of, how prevalent LLM-generated content was on reddit. I was permanently banned for "abusing reports" for reporting AI-generated comments as spam. Before that, I had posted about how I believed that the fight against bots was over because the uncanny valley of text generation had been crossed; prior to the public availability of LLMs, most spam/bot comments were either shotgunned scripts that are easily blockable by the most rudimentary of spam filters, generated gibberish created by Markov chains, or simply old scraped comments being reposted. The landscape of bot operation at the time largely relied on gaming human interaction, which required carefully gaming the temporal relevance of text content, the coherence of text content (in relation to comment chains), and the most basic attempt at appearing to be organic.
After LLMs became publicly available, text content that was temporally, contextually, and coherently relevant could be generated instantly for free. This removed practically every non-platform-imposed friction for a bot to be successful on reddit (and to generalize, anywhere that people interact). Now the onus of determining what is and isn't organic interaction is squarely on the platform, which is a difficult problem because now bot operators have had much of their work freed up, and can solely focus on gaming platform heuristics instead of also having to game human perception.
This is where AI companies come in to monetize the disaster they have created; by offering fingerprinting services for content they generate, detection services for content made by themselves and others, and estimations of human authenticity for content of any form. All while they continue to sell their services that contradict these objectives, and after having stolen literally everything that has ever been on the internet to accomplish this.
These people are evil. Not these companies - they are legal constructions that don't think or feel or act. These people are evil.
You just need to pay someone 1 cent every time they scan their eye for you. You will have people sitting at home and giving their eye scans to AIs to use.
It's not clear to me how this is verifiable without constant hardware supervision. Even that'll get cracked, just like DVD encryption back in the day.
You almost need dedicated hardware that can't run any other software, just a mechanical keyboard communicating over an analog medium; something terribly expensive and inconvenient for AI farms to duplicate.
One physical robot with four wheels, a camera, and 101 up/down "fingers" to match the keyboard could roll between physical machines and type on mechanical hardware keyboards. This brings the ceiling on how many accounts you can control down to the number of computers you have, but that's not a high price to pay.
You could sell physical items at any store where you have to show your ID, and you get one for the age group you are in.
That kills two birds with one stone: you can then show everywhere online that you are human and how old you are, without the services needing any personal information about you, and the sellers don't know what you use that ID tag for.
People who are posting AI comments or setting up AI bots are... people. They can show their ID. If a website owner doesn't have a way to ban that specific human and the bad guy can always get another voucher, it's sort of meaningless.
In fact, even if you can ban the human for life, I'm not sure it solves anything. There are billions of people out there and there's money to be made by monetizing attention. AI-generated content is a way to do that, so there's plenty of takers who don't mind the risk of getting booted from some platform once in a blue moon if it makes them $5k/month without requiring any effort or skill.
Perhaps not only just show your id to get your "Over age X verification object", but your ID also gets irreversibly altered (like a punch card) that makes it one-time-use only.
That might make it less likely someone would ever sell it, because getting a new one might take a very long "cool-down" time, and it'd severely hamper the seller.
I like Mitchell's Vouch idea. At the end of the day, it's all about trust. Anything else is an abstraction attempting to replicate some spectrum of trust.
I think we’ll see a return to smaller groups and implementing a lot of systems the way we do it IRL. I think you could definitely do a more fine-grained system that progressively adds less score to contacts the further away they are. In combination with some type of accumulating reputation system, you’d have both a force to keep out unknown IDs, but also a reason for one to stick to their current ID even though it’s anonymous.
Adding this type of rep system would destroy a lot of what is so cool about the internet, though. There'd probably be segregation based on rep if it's very visible, with new IDs drowning in a sea of noise. Being anonymous but with a record isn't the same as posting for the very first time as a completely blank identity and still being given an audience. Making online comms more like real life would alleviate some problems but would also lose part of the reason they're used in the first place. I don't see any other way to do it besides maybe a state-provided anonymous identity provider (though that's risky for a number of reasons), but it's going to be sad to see things go.
> especially without sacrificing people's right to privacy and anonymity in the process
I'm afraid the ship has sailed on this one. What other solutions have you heard of apart from the dystopian eyeball-scanning, ID-uploading, biometrics-profiling obvious ones?
(knowing that of course, neither of those actually solve the problem)