It's funny how, in the end, the ultimate and perhaps only proof of personhood is being a person, engaged in the world. Sort of how the ultimate decentralized currency is encompassed in the global market of rising and falling economic powers. The oracle problem is a problem because there are no oracles. There is only the test of time. Or something along that line.
> the ultimate and perhaps only proof of personhood is being a person, engaged in the world
I buy this.
Daniel Suarez had a very similar idea, although he referred to it as the bot problem[0]. I believe this approach of identities withstanding "the test of time" solves the oracle problem but at the cost of a delayed solution. Initially you have lots and lots of bots and Sybil attacks are common. Then after a while, identities/nyms that exist and interact with the world increase in trustworthiness. Trustworthy identities will eventually be stolen or sold to bad actors, but like fake identities today they will be expensive.
My identity on hackernews is over 11 years old. Creating such an identity with the comment history, connection to a true name, and content over 11 years would be very expensive. Likely more expensive than a fake passport.
For instance, Islamic State terrorists were buying counterfeit passports for $15,000 that allowed them to enter the EU [1].
The major downside of such a system is that isolated people or people with few resources would be at a major disadvantage, and we would essentially be replicating much of the inequality of the credit score system.
[1]: "One such network, run by an Uzbek with extremist links living in Turkey, is now selling high-quality fake passports for up to $15,000 (£11,132) purporting to be from various countries. In at least 10 cases the Guardian is aware of, people who illegally crossed the Syrian border into Turkey have used his products to depart through Istanbul airport.", https://www.theguardian.com/world/2022/jan/31/revealed-how-f...
Keybase was sort of trying to do this with social proof, though its reliance on a centralized provider (Keybase itself) made it brittle.
I think you can get there with something like Urbit IDs that are easy to ban. If IDs are not infinite you get some protection against abuse - pairing that with some proof-of-humanity and you can get closer, but there are still issues of someone doing the proof and then selling their ID to a bot. At least if it's easy to ban you can try to make that not economical. Doubly true if the IDs have a non-zero (but low) cost.
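The economics of "make abuse not economical" can be made explicit with a back-of-the-envelope model. All the numbers below are invented for illustration; the point is just that priced, bannable IDs tilt the inequality against spam:

```python
# Back-of-the-envelope: abusing a priced, bannable ID only pays off
# if expected earnings before the ban exceed the ID's cost.
# All numbers are invented for illustration.

def abuse_is_profitable(id_cost, revenue_per_day, days_until_ban):
    """True if a bot earns back its ID cost before getting banned."""
    return revenue_per_day * days_until_ban > id_cost

# Free-ish IDs plus slow moderation: spam pays.
print(abuse_is_profitable(id_cost=1.0, revenue_per_day=0.5,
                          days_until_ban=30))   # True

# Costlier IDs plus fast bans: spam doesn't pay.
print(abuse_is_profitable(id_cost=10.0, revenue_per_day=0.5,
                          days_until_ban=3))    # False
```

A service can push on either lever: raise the ID cost, or shorten the time a misbehaving ID survives before a ban.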
The problem isn't trivial though - it's not obvious what will work best and it'll likely always be somewhat of an arms race, especially if you want to keep privacy.
> Keybase was sort of trying to do this with social proof, though its reliance on a centralized provider (Keybase itself) made it brittle.
I think something like this could've worked, which is why I was so sad over its demise. I think Keybase (like the MIT PGP keyring, which it is sort of a fancier version of) was predicated on the idea that it's much easier to build a centralized keyring and then decentralize it once it's widely adopted than to build a decentralized keyring from the ground up.
> My identity on hackernews is over 11 years old. Creating such an identity with the comment history, connection to a true name, and content over 11 years would be very expensive. Likely more expensive than a fake passport.
Many users here are far more insightful than ChatGPT. You might get some OK comments with ChatGPT and a stray upvote here and there, but I think you're more likely to get banned if you employ ChatGPT to write your HN comments at any significant volume.
Not that everyone writes great comments, all the time, but I'm arguing you probably won't break into the 1000s of HN karma if you try to automate it (because you'll probably get shadowbanned first)
Even that's a bit of a crapshoot. I have a Reddit account that's over 15 years old. I have a Neopets account that's over 20. I can't get into either of them, a consequence of (my abuse of) poor UX design. (What's the fake birth date I chose in 2000? Who knows! Was taking advantage of Reddit's account-juicing tactic of not requiring an email worth losing a decade-old identity to hackers? Probably not!)
I've tried to recover them, and faced a (perhaps justified) customer support brick wall. Was Neopets supposed to anticipate Millennial nostalgia (and our preteen willingness to circumvent COPPA) in their sign-up processes a generation ago? How obligated is Reddit to investigate someone's claim to any single account?
But without those accounts, I'm two closer to being a digital non-entity, for significant portions of my life.
As proof of the original publication date, not only is it in the Wayback Machine, but it's also cited in several academic papers and books. It's great that someone is actually doing it, but I'm also kind of ticked they didn't cite this essay as prior art in their patent application.
>My identity on hackernews is over 11 years old. Creating such an identity with the comment history, connection to a true name, and content over 11 years would be very expensive. Likely more expensive than a fake passport.
You can buy very old accounts on any platform for very cheap. Like under $100 for a 10 year account on a popular platform cheap. Most platforms offer comment editing, and most people don't archive everyone else's profiles nor do they have access to the database to check for consistent changes.
Meaning that, using your account as an example: if someone bought it from you (or it was hacked after inactivity and you no longer cared about it), they could easily rewrite what they need to paint the picture that your username isn't actually your real name but a pseudonym. Most people won't scrutinize it that far anyway, as usernames are easily ignored on most sites, same with comment histories. I'm just indicating how easy it is to rewrite both of those aspects.
The only platforms where this would be difficult are platforms that already partake in substantial identity verification like Facebook.
On HN you can only edit comments until they're 2h old, only delete them until somebody responds. Editing history as you've described would require admin privileges. Good luck getting that for $100.
This assumes that platforms never change policies and that admin privileges are expensive and impenetrable. Neither are true, as we've already witnessed multiple times now across various platforms.
No, it doesn't. I'm not speaking of "platforms." I'm speaking of exactly one platform that has an extremely small admin crew. Buying your way to a rewritten history on HN is highly implausible. Hacking, perhaps, but I'd expect that this community has been rattling those doors for the platform's entire existence.
Such a fake passport is not the same as a complete fake identity. There is more to that than a passport and a lot of it you categorically cannot fake.
A fake passport can trick some people some of the time but not all people all of the time. As such those $15k do not represent a full fake identity. Your history of engaging with the state in some way, paying taxes, requesting a new passport, getting your driver's license, whatever, serves as defense in depth.
States are pretty good at this, actually.
Also, your account is worthless. Hard or impossible to replicate, sure, but if there is no buyer it's worth zero.
I never realized that painting was at the LA county museum of art until I was standing face to face with it. It felt ironic to have one level of meta removed. I was thinking "this is "this is not a pipe""
I strongly second latexr's recommendation of Scott McCloud's "Understanding Comics", and the spread he linked to above. Scott McCloud is the Marshall McLuhan of Comic Art.
It's a speech by the author Robert Anton Wilson discussing the work of Alfred Korzybski, who helped to explain and popularize the map/territory distinction and how it relates to the philosophy of science.
I'm a huge fan of Wilson and Korzybski, and the ideas Wilson discusses in this lecture definitely changed my life and the way I think about science. Even so, I would caution that it's not all that relevant to the discussion here.
Words are tools of communication, but they are also tools of deception and obfuscation. They never represent reality correctly, but they can be useful.
There is a chain of complexity where subatomic wavicles assemble into atoms, molecules, self-reproducing molecules, cells, multicellular life, specialized multicellular life, different clades of life, and on up to forms of intelligence and self-awareness.
It is not possible for a human-equivalent intelligence to maintain continuous awareness of all these levels of complexity, and so we operate, much of the time, in abstractions and metaphors that we mistake for being real. We code-switch, consciously and not, through sets of behaviors that pretend that various abstractions are real.
Among these abstractions: a town is a group of buildings and people, but different social abstractions don't have to agree about the contents. So the post office, the street maps, the police and the real estate agents can all be simultaneously in disagreement about the house where the author lives in terms of their mappings of "towns" and "jurisdictions", yet all agree on the street address and which building they mean.
When you can't do something because it's illegal, you can do it, but you know that various social institutions will attempt to inflict various consequences on you if they become aware of your action in the appropriate ways. But most of the time you just say "I can't" and fall back on "I shouldn't".
There's a lot more, but it's about the same: we build lots of maps, we don't agree on them, but we keep acting as though they were real even though we know that they aren't. When someone is invested enough in their current map, they can become very upset with someone who points out that it's fictional.
Interesting that you raise the connection to schizotypal thinking. I recently learned there is a quite explicit connection between excessively instrumental, abstraction-obsessed thinking and schizophrenia. I recommend looking into the work of Iain McGilchrist if you want to learn more. A brief sample: https://www.youtube.com/watch?v=jkfMnaLpU7s
> The oracle problem is a problem because there are no oracles
To expand on this and correct your first statement, physical engagement can't be a proof of personhood either.
There can never be a proof of personhood, or if someone is a human or not, or if someone is conscious, etc.
It's more practically applicable than it sounds. Think conjoined twins (sharing a brain), disabled people, people in coma, long memory loss and all other edge cases.
In the end, even that is an approximation. We want to limit our services to "humans" because humans can spend money, and humans can only put so much content into our services (spamming limited versus a bot), and humans want to talk to humans, maybe humans only want to date humans, etc.
But if there was a bot that was well behaved (for whatever that means for a given service), and somehow legitimately viewed ads, and could legitimately decide to buy something with real money from an ad, there's a lot of sites that would no longer want to ban that particular bot. Now they need to distinguish between that bot and the other "bad" bots. In general, if a bot can "behave well" we don't necessarily want to kick them out just for being bots.
In the end, as bots become more like humans and humans (through things like the Mechanical Turk) become more like bots, service providers will be required to think very carefully about what it is they actually want to demand of their users. Splitting people into "human" and "bot" is already only an approximation on the grounds that there are plenty of humans services don't want around, and there are already some bots that services are fine with (e.g., some helpful reddit bots), so that split isn't going to be good enough. The process of thinking through what is really desirable at a much higher level of detail will be fun to watch, and there will be a lot of different answers for different services.
For your service maybe certain humans don't qualify and maybe certain bots qualify. However, the matter at hand and WC's purpose is proof of humanhood in particular.
>perhaps only proof of personhood is being a person, engaged in the world.
Not even. People have been having double-lives for a lot longer than we've had the Internet. The difference is that in physical space it's a lot harder to do so, because you can only be in one place at one time and you have to move from place to place rather than teleporting. So if you want to catch someone in the act of, say, voting twice, you just need to trace their movements.
That being said, there are also plenty of situations in which we consider having multiple identities online to be a good thing. Facebook's "real name" policy - forcing everyone to tie themselves to a government issued legal name instead of just a consistent one - was and is cancer. The whole industry of V-Tubers literally runs on talent living a double life and a healthy dose of kayfabe. And we don't yell at character actors because Robert Downey Jr. is also Iron Man.
Anonymity is considered a vital bedrock of liberal values. But so is voting, and this is where being able to create additional identities becomes a problem very quickly.
Wikipedia has an interesting problem of "ripened socks" where people will register multiple accounts and keep them in reserve, specifically to defeat accusations of sock puppeting. This can be detected, but it's murky - are you actually a sock puppet, or do you just lurk Wikipedia a lot[0]? You could probably even go further and split your editing history across multiple sock puppets. Statistical analysis might be able to reveal that, say, two accounts tend to edit the same set of articles, but that would also have a high false positive rate and disenfranchise other editors.
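The kind of statistical analysis described above can be sketched in a few lines. This is a toy illustration, with invented account names and edit histories, that flags account pairs whose edited-article sets overlap beyond a threshold (Jaccard similarity):

```python
# Toy sock-puppet heuristic: flag account pairs whose edited-article
# sets overlap suspiciously. Accounts and histories are invented.
from itertools import combinations

edits = {
    "alice": {"Map", "Territory", "Semantics", "Comics"},
    "bob":   {"Bitcoin", "PayPal", "Escrow"},
    "carol": {"Map", "Territory", "Semantics", "Passports"},
}

def jaccard(a, b):
    """Set overlap: |A intersect B| / |A union B|."""
    return len(a & b) / len(a | b)

THRESHOLD = 0.5  # arbitrary; a real analysis needs base rates

suspicious = [
    (u, v)
    for (u, v) in combinations(sorted(edits), 2)
    if jaccard(edits[u], edits[v]) >= THRESHOLD
]
print(suspicious)  # [('alice', 'carol')]
```

Which also shows the false-positive problem: two genuine editors who happen to share a niche interest would trip the exact same threshold.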
The underlying base assumption of the crypto crowd is a sort of extreme para-identitarianism. Anti-identitarianism in its extreme would be something like Japanese-style imageboards[1], with their obsession over anonymous posting. Identitarianism would be Facebook's real name policy. Para-identitarianism instead wants identity to be a side effect of resource scarcity. An identity like a signing key is non-scarce, I can just make more keys, so you cannot hold elections by simply counting the signing keys. But the money in your Bitcoin wallet is a paraidentity: it is provable scarcity, and thus can be bent into a sort of identity, at the expense of disenfranchising everyone who has not bought into the system. In fact, crypto hucksters explicitly threaten people who do not buy their coins with being left out of their future utopia.
What an interesting thought. Maybe the increasing number of entities I get to interact with that do less and less to provide that proof goes to show how relatively unimportant being human will be.
The existing finance system is largely reputation based, with a light side of contract enforcement and a heavy dose of middlemen. It mostly works, but one of the biggest advantages of crypto stuff is removing the need for reputation and trust.
Except it doesn’t. That’s exactly what we’ve learned over the last decade. It never did either of those things. What it did is that it clouded reputation and trust just about enough to trick people into the illusion of a “trustless” system. There is no such thing. In the end you trust the coders trusted by the people who use the apps implementing the blockchain. There’s no tangible difference from trusting the coders trusted by the banks you use. Yes, with some cryptocurrencies they’re not associated directly with a centralised organisation. But that will absolutely change over time if the cryptocurrency ever becomes relevant to the big corporations of the world. Just as the WWW in practice now is controlled by Google and Apple.
The point of selling the idea that a cryptocurrency is “trustless” is to trick you into trusting it just long enough to pull off a Ponzi-scheme or rug-pull. And yes, that goes for Bitcoin too.
> one of the biggest advantages of crypto stuff is removing the need for reputation and trust.
I don't think it does that, though, at least when used as a currency. The irreversibility of crypto transactions means that you have to have some trust that the person you're dealing with will be willing to make things right when erroneous transactions happen.
When people say "trust" in the context of Bitcoin, it's about not having to trust a middle man, e.g. PayPal, who are notorious for capriciously locking you out of your account and stealing your money, or denying service to marginalized populations.
It doesn't mean you can't get ripped off by the party you're transacting with directly, but neither does cash, unless you pay for some kind of insurance. Which you can also do with cryptocurrency.
Credit cards come with a form of insurance built in, but that doesn't mean you're not paying for it (they charge fees), including when you trust your counterparty and don't want to pay extra for insurance on a low-risk transaction.
> When people say "trust" in the context of Bitcoin, it's about not having to trust a middle man
That's all fine if you're a cryptocurrency guy, but in terms of what ordinary people mean by "trust", I don't see how cryptocurrency removes "the need for reputation and trust." You still need those things.
> neither does cash
True. Which is why cash also has a need for reputation and trust.
All I'm asserting is that the need to trust people is not eliminated by using cryptocurrency. How is my assertion wrong?
Someone says that TLS secures your credit card number when buying things online. Which it does.
You say that it doesn't do that, because someone could break into your house and install a hidden camera to capture you entering in your credit card number and it doesn't prevent that.
But you use other things to prevent that. That isn't the kind of security that TLS claims to provide.
And notably, there aren't a lot of alternatives to the kind of trust that blockchains would allow you to avoid placing in the likes of PayPal.
> The irreversibility of crypto transactions means that you have to have some trust that the person you're dealing with will be willing to make things right when erroneous transactions happen.
The cypherpunk answer to this dilemma is that reversibility of transactions under some circumstances is a service that, if it is desired (and for quite a few applications of cryptocurrencies it is), should be built in a layer above the "bare-metal" blockchain.
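One common shape for such a layer is escrow with an arbiter: funds move only when two of the three parties agree on the outcome. This is a conceptual sketch only, with invented names and flow, not any real protocol; on an actual chain this role is played by things like multisig scripts:

```python
# Toy escrow: a reversibility layer on top of an irreversible ledger.
# Names and flow are illustrative, not a real protocol.

class Escrow:
    def __init__(self, buyer, seller, arbiter, amount):
        self.parties = {buyer, seller, arbiter}
        self.buyer, self.seller = buyer, seller
        self.amount = amount
        self.approvals = set()   # (party, outcome) votes
        self.settled = None      # becomes (outcome, payee, amount)

    def approve(self, party, outcome):
        """Record a vote; settle once any two parties agree."""
        if party not in self.parties or self.settled:
            return self.settled
        self.approvals.add((party, outcome))
        voters = {p for (p, o) in self.approvals if o == outcome}
        if len(voters) >= 2:  # 2-of-3 agreement settles the escrow
            payee = self.seller if outcome == "release" else self.buyer
            self.settled = (outcome, payee, self.amount)
        return self.settled

e = Escrow("buyer", "seller", "arbiter", 100)
e.approve("buyer", "refund")           # buyer disputes the purchase
print(e.approve("arbiter", "refund"))  # arbiter sides with the buyer
# -> ('refund', 'buyer', 100)
```

The base-layer transaction into escrow is still irreversible; the "reversal" is just a second transaction that the layered rules decide to authorize.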