"nostr - Notes and Other Stuff Transmitted by Relays". "The simplest open protocol that is able to create a censorship-resistant global "social" network once and for all."
Jack stumbled upon it yesterday and created an account, so there has already been a flood of new users in the last 16 hours.
I recommend https://astral.ninja for first-time users, as it makes it easy to generate a key pair, which you can then use as you try the many other clients.
This is a very new protocol, so the clients are still dev level and not production level, but development has been very rapid on this protocol.
One thing that stands out to me as a privacy advocate is that, unlike Twitter or Mastodon, the admins (relay operators in the case of nostr) can't read your DMs, so nostr is already ahead of both of these technologies in that regard.
I think he's wrong. Honestly, the idea that "social" needs a protocol isn't right. It's that services need a protocol. HTTP was about information retrieval, not about services. It was simply about rendering web pages; we then moved into creating services on top of that. SMTP was very much about asynchronous, text-based messaging. That scaled very well. XMPP failed due to its complexity. Any social protocol, IMO, would fail due to the inherent baggage of attempting to build it for Twitter-like behaviour. What we need to do is separate the service from the protocol. The protocol is simple: we need a way to communicate with services that provide some functionality that's action-driven. The services themselves can then implement all the complexity, but they too should be open, most likely open source with open APIs.
People are getting lost in the dream of a twitter protocol. It's not the right framing. We have a bigger issue. All current centralised services are closed source and closed APIs. We need open source services and open APIs with an agreement on a communication protocol that goes beyond HTTP.
What's frustrating is this will mostly fall on deaf ears and I don't think Jack or anyone at his level is the person to execute on this. I don't think high profile people or high profile projects are going to do it. It needs time to bake in a different way. It needs actual use before it sees the light of day.
Yes! This is the approach we're taking at https://braid.org -- extending HTTP itself into a decentralized synchronization protocol, so that any application built on top of it can be decentralized.
Specifically, you can divide any application's protocol into two parts:
- A data synchronization protocol
- A data schema on top of that protocol
Existing decentralized social protocols (like ActivityPub) encode both the data synchronization and a set of data schemas into a single protocol. ActivityPub lets you broadcast a bunch of tweets to a bunch of subscribers, but doesn't work very well for other features, like decentralized moderation.
It's hard to add new features to these protocols because you have to go through the entire consensus process for each feature. This makes it hard to improve ActivityPub to add new features, such as decentralized moderation.
Instead, Braid tackles just the synchronization, with a very simple extension to HTTP, so that different applications just boil down to different data schemas. These data schemas are very easy to translate between (manually, or automatically), which means that any individual application can innovate to its heart's content on new features, and any other application can easily synchronize with any of those new features, to interoperate, and grow a decentralized web of state + features.
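To make the sync-vs-schema split concrete, here's a rough Go sketch of what a Braid-style subscription request might look like: an ordinary HTTP GET, plus extension headers asking the server to keep streaming updates. The header names (Subscribe, Parents) loosely follow the Braid-HTTP drafts but are illustrative here, not authoritative.

```go
package main

import (
	"fmt"
	"net/http"
)

// subscribeRequest builds a plain HTTP request extended with Braid-style
// headers: "Subscribe" asks for a stream of updates instead of a single
// response, and "Parents" resumes from the last version the client has seen.
// Header names are illustrative, loosely based on the Braid-HTTP drafts.
func subscribeRequest(url, lastVersion string) (*http.Request, error) {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Subscribe", "true")
	if lastVersion != "" {
		req.Header.Set("Parents", lastVersion)
	}
	return req, nil
}

func main() {
	// A client resuming a subscription to a shared resource from version v41.
	req, err := subscribeRequest("https://example.com/chat", "v41")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Header.Get("Subscribe"))
	fmt.Println(req.Header.Get("Parents"))
}
```

The point of the sketch is that everything application-specific lives in the resource's data schema; the synchronization layer is just HTTP plus a couple of headers.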
I think ActivityPub isn't so much a protocol as it is an API. It doesn't matter that it has a spec; it's just something defined on top of HTTP. There's an API doc, you can implement it, and it has some nice interop. Essentially, ActivityPub is a Mastodon API at this point.
Twitter is a centralised platform, a Twitter protocol would just end up looking like a documented API spec which anyone could then implement. It's a bit like AWS S3. S3 has now been implemented by most of the major cloud providers and there's some other open source software that does it too. Does it make it a universal storage service? I'm not so sure, but at least you know you can get data in/out of these various clouds using it.
Social as a protocol came close in XMPP, but nothing in this feed/timeline model has a good solution yet, and I'm not seeing ActivityPub as that either. I'd be more inclined to lean into an open API for social and a protocol for communication between services that lets us build on top of these things in a standardised way. Because even open APIs evolve and move too quickly to bake too deeply into a protocol. We need to strike a balance between the things that need to evolve versus the things that need a stable interface. Protocols give us stability with an envelope for something that's constantly changing.
The open native protocol for social media already exists. It's called ActivityPub, it's a W3C standard, and millions already use it every day through many platforms that support it.
Jack (on behalf of Twitter) walked away from the consortium that was discussing the protocol, came up with a copycat that has only a few incremental changes, probably lost the chance of showcasing BlueSky on a large platform like Twitter, is now working on yet another social platform just to showcase it, and he wants us to take him seriously.
Why bother? This guy has already missed his train, he's years behind, and the only reason why a platform should use BlueSky over ActivityPub seems to be "because I've built it and I want to make some money out of it".
There's not really entanglement AFAIK, but Bluesky did an investigation of existing decentralized network protocols and ActivityPub was one of them. There's a paper reporting the results of this investigation, but I do not have the link at hand.
I cannot understand how everyone avoids the most obvious solution to all of this.
Blocking.
Instead of bans, Twitter could just add people to their own block list, and then tell new and existing users that they can opt in to a moderated Twitter experience. They could have different block lists for different moderation reasons, e.g. bots and spam vs. terms violations.
An even better solution is a delegative blocking system for accounts and keywords. It's basically crowd-sourcing account blocking, where you are saying: "I want to block all accounts and keywords that this person blocks". Then you can have an allowlist to override some blocks.
Don't like conspiracy theorists? I am certain there are many people willing to tag all the conspiracy theorists they find.
The best part would be having a toggle that can show you all the content that is blocked, and why it was blocked (i.e. You delegated all your blocking to account X, and they delegated their blocking to account Y, and account Y placed this account on their block list named Z).
You could also have "smart block lists" where you could use queries to block people. E.g. "Block all followers of X". An issue you might have though is to differentiate between those that follow in support of someone, or follow simply to know what they said and challenge it.
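A minimal sketch of how the delegated blocking described above could be resolved: walk the chain of delegations, collect every account they block, and let a personal allowlist override any inherited block. The data shapes and function name here are invented for illustration.

```go
package main

import "fmt"

// effectiveBlocks computes a user's blocked set by transitively following
// their blocking delegations, with a personal allowlist override.
// delegations maps an account to the accounts it delegates blocking to;
// blocks maps an account to the accounts it has directly blocked.
func effectiveBlocks(user string, delegations, blocks map[string][]string, allow map[string]bool) map[string]bool {
	blocked := map[string]bool{}
	seen := map[string]bool{}
	var walk func(acct string)
	walk = func(acct string) {
		if seen[acct] {
			return // guard against delegation cycles
		}
		seen[acct] = true
		for _, b := range blocks[acct] {
			if !allow[b] { // the allowlist overrides any inherited block
				blocked[b] = true
			}
		}
		for _, d := range delegations[acct] {
			walk(d)
		}
	}
	walk(user)
	return blocked
}

func main() {
	// "me" delegates to modX, who delegates to modY; I trust "friend"
	// personally even though modY has blocked them.
	delegations := map[string][]string{"me": {"modX"}, "modX": {"modY"}}
	blocks := map[string][]string{"modX": {"spam1"}, "modY": {"spam2", "friend"}}
	allow := map[string]bool{"friend": true}
	fmt.Println(effectiveBlocks("me", delegations, blocks, allow))
}
```

The `seen` set also answers the obvious follow-up about what happens when two accounts delegate to each other: the walk simply terminates.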
Ideally I would like this for all internet content.
So I agree with Jack when he says:
> The biggest mistake I made was continuing to invest in building tools for us to manage the public conversation, versus building tools for the people using Twitter to easily manage it for themselves
Just as long as it's always opt-in for everyone individually.
This would be useful, but is not sufficient; it's not enough that I can block CSAM, it needs to be possible to block other people from seeing it. Same goes for terrorist conspiracies, libel, wire fraud, and so on.
In the article, Jack links the Twitter thread wherein he reflects on a specific controversial ban. He only links the first tweet in the thread, but clearly still intends it to be read long-form and stands by his prior reasoning despite any principles enumerated in this article itself.
In the thread’s second tweet, Jack wrote:
> We faced an extraordinary and untenable circumstance, forcing us to focus all of our actions on public safety. Offline harm as a result of online speech is demonstrably real, and what drives our policy and enforcement above all.
In the current article, he doesn’t address this topic at all. In fact, he focuses very much on eliminating barriers to serving content which, again according to his own thread, was inciting offline violence.
He proposes solutions which instead put users in control of moderating whether their feed is full of incitement to violence. In other words, he’s proposing that the people who would be incited to violence are the ones who should moderate their exposure to such incitement, without interference, and suggests that this would encourage healthier dialogue, with greater transparency, reducing the influence of powerful governments and businesses on acceptable speech.
What happens offline vanished from his analysis, but it can’t be ignored. If all OtherPlatformBesidesTwitter users have a free hand in deciding what OPBT serves them, and it’s acknowledged that content on OBPT can pose safety risks to other people completely off-platform…
… what Jack is suggesting here is that the incitement to violence should be facilitated without hindrance, and the potential victims should just tune it out because it’s not their preferred content.
Somewhat amusingly at an emotional distance, that’s almost exactly Twitter with human moderation removed. More darkly, what he’s proposing is a recruitment tool for literally any wonder imaginable, no matter how much it would also be an unspeakable horror.
I share most of Jack’s stated principles, as a former anarchist who mostly hasn’t shifted values apart from some more nuanced analysis of power. It’s unforgivable that he’s using that analysis to promote a model of social media that puts the onus on platforms to serve those eager for violence and gives no consideration for anyone who doesn’t want to be their victim.
And in case I’ve minced words, lol… I’m very much saying that Jack is finally coming out and admitting his goal is a social media specifically for nazis, stalkers, and other monsters whose victims either never escape or suffer lifelong trauma if they do escape.
Also, say what you want about Elon Musk, but at least he doesn’t have much pretense about twirling his evil mustache. Jack doesn’t even seem to want to figure out what side anything is on. I take that back: that’s his whole thesis, “seem” is too mealymouthed.
And he does this while dunking on previous Twitter staff for enacting that very policy (some moderation actions were shown to be reach limitations in the Twitter Files nothingburger).
Don't think because Musk can construct grammatically correct sentences he has any kind of coherent vision for Twitter.
Musk is dimmer than Jack, but that has the benefit of making the same motives more obvious. Case in point: he's already doing what Jack proposed in the article.
> Only the original author may remove content they produce.
Counter example: Cyber bullying. Person A and B talk on twitter about how to best hurt person C, damage C's reputation, invite others to hurt C, etc.
C himself is never part of the conversation. As for A, B and all the others which are part of the conversation, none of those people have any interest to end the conversation or to retract anything they said.
Nevertheless, the conversation can cause real harm to C, even though C is not part of it.
I addressed this in my comment, but it seems Jack thinks this was his mistake at Twitter. It’s Person C’s onus to block Persons A & B. Never mind if A and or B maim or kill C in actual meatspace, speech not only must go on but it must absolutely be unfettered by every single platform or else someone will claim speech has become something less than free.
To Jack, who has perfectly demonstrated the absolutist position, I have no care in the world whether to approve of what you say, but I will defend to the death your right to force others to let you say it to enough people that someone winds up dead.
That isn’t the scenario described by OP. In their scenario:
1. Person C isn’t accusing anyone of anything at all, and needn’t do so because,
2. Persons A & B are openly posting their threats and incitement on the platform.
3. Even if person C is provoking persons A & B with similar threats and incitement elsewhere, that doesn’t justify their behavior in kind.
All of that said, this kind of skepticism—pervasive in these discussions—is itself often introduced in bad faith. I don’t think that’s the case here, I think you’ve probably misread the original comment, but its frequent use as a bad faith tool promotes this kind of error as well.
Maybe you are not aware, but for many of the main controversies about bans on Twitter, the main debate is about what constitutes a threat. It's never clear-cut threats or incitement, as evidenced in the Twitter Files. And there is no objective measure.
As you’re well aware I replied to a very well specified hypothetical and I responded to its specific conditions. It was about clear threats by participating persons against a non participating person. Continuing to interject suspicion of the hypothetical target of those clear threats is very much an example of bad faith.
I am questioning the premise of the hypothetical as unrealistic. Now if you want to solely debate your hypothetical, then how about you simply don't reply to my branch of inquiry? No one is forcing you to. You choose the debates you want to engage in. The point of HN is for intellectual curiosity - which I would think involves exploring many branches of discussion. It's not about trying to win debates.
> I am questioning the premise of the hypothetical as unrealistic.
You… think it’s unrealistic that two people with no other distinguishing characteristics than letters A & B might make threats and incite others on a social media platform with no limits to their doing so, against another person with no distinguishing characteristic other than the letter C? How can that possibly be good faith?
> Now if you want to solely debate your hypothetical, then how about you simply don't reply to my branch of inquiry?
I’m not even trying to debate anything, I noticed that you moved the goalposts and pointed it out in hopes you’d appreciate the clarification, not that you’d triple down and insist that even a very simple proposition of two people abusing an unmoderated platform should be turned around to scrutinize their targeted victim without any shred of reason to think their target had anything to scrutinize other than your apparent determination to find it.
Don’t you realize that you’re demonstrating exactly the kind of bias I warned about?
So in Jack's model, as far as I understand it, a bad actor could dox and post a bounty on someone's life, lose or destroy their key, and no one in the world could delete or block this information?
Do I have that scenario correct?
Counterpoint/question: Isn't this already all possible on TOR? Would ease of access make Jack's proposal more dangerous?
Jack's model is that moderation should only occur via algorithms - but algorithms in this context are designed to maximize engagement, which means maximizing the reach of "illegal" content, because controversy drives engagement. "Algorithmic moderation" is a contradiction in terms, and counter to the incentives of social media.
Although strictly illegal content is a red herring since not even most "free speech" platforms (not on the dark web) are willing to risk jail time for hosting content that explicitly breaks the law - certainly not Reddit. What we're really talking about is technically legal but still harmful content, although at the extreme I guess the free speech extremist position has to allow illegal content as well.
> Jack's model is that moderation should only occur due to algorithms
This is super hand wavy on his part. [0]
So algorithms just rise from the virgin snow having solved human philosophy and sorted all social norms? No humans were involved in the making of these algorithms?
It's really hard to take a proposal seriously when it contains something like that. Please someone explain to me how I can NOT read it as: "Responsibility for the platform I created? No, no, I only chose the team who coded it and I only oversaw the creation of this magic algo. That absolves me of everything!"
I find this type of harm-reduction discourse really weaselly. I imagine you have no problem with www and email, which are also free-speech extremes that allow harmful content.
Why would the protocol go out of its way to do the job of a person whose aim is to usurp any of these powers? It takes very little imagination to see what could happen if you provide Jack’s proposed platform, every single coup in the world is precedent.
> if it's illegal the person C can go to the police
Just to be clear - as long as they stay away from outright illegal content, A and B can cyber-bully C? Because there's a whole swathe of behaviours that can make someone's life a misery without crossing into illegality...
Legally, Twitter is not responsible for content posted by third parties because of Section 230.
So Twitter will do what is best for their business.
The problem is that for each of your statements, the truth is always subjective. And a semblance of truth can only be established in a court of law. Otherwise it's just someone or some group's opinion. If there was actual reputation damage, then it must at least involve a false statement, and then fulfil the rest of the legal criteria for defamation as judged by a court.
There could also be a scenario where person C has conspired to have person A and B banned based on false accusations.
Say Twitter had no intervention:
Person C should block users A and B. And they should notify their network that they have blocked these users and recommend that they should also block these users.
A and B still remain on the platform though at the disgust of person C. Person C may file a defamation suit if they want, and Twitter must cooperate with law enforcement.
Person C could also leave the platform.
People could also leave the platform because of what they have seen happen to person C.
And finally, advertisers may leave the platform because of what happened.
Less users + less advertisers = less money. So naturally the platform wants to avoid this. Hence, they must create their own rules to avoid users and advertisers leaving, and find the right balance. The optimum is a set of rules that maximizes long-term revenue.
Now person A and B can still talk, and they can still do everything that person C accused them of, but just not on that platform. They might go to another social media platform to continue exactly the same behavior.
So you have to ask yourself, what is the difference between this happening on one platform vs the other?
If it's causing real harm, surely it will cause real harm if done on another platform too.
I read the whole thing but stopped taking it seriously after this line:
> Only the original author may remove content they produce.
This has some pretty bad implications if we think about obvious content problems like child abuse. And sure, algorithms can remove this from people’s feeds, but what about the victims?
Am I the only person who thinks this is a giant, glaring hole in this whole argument?
Almost this whole issue boils down to this: do we need forced content removal, and if so, how do we handle it? And I have to say I'm on the freedom side here. Just like with encryption, you can't have a "trusted" middleman with a backdoor. You can't trust a state, a corporation, a "committee of wise men". You can't have a central, dependable authoriser for a complex, abstract entity like content.
There are two forces at play here, general public ("everyone else outside the poster") and law enforcement.
- Groups will always be able to flag/hide/shadowban content. Yes, this is not scalable to millions of people/posts (without masses of slave-wage moderators or automation, both of which would make ample mistakes). But I also think that communities are not scalable to millions of people/posts. For all the necessary "common space", it is, and probably should be, "the frontier/wild west". Clients might be restricted from accessing it or may have filtering safeguards.
- Law enforcement should have the right to pursue people posting illegal content in real life, and remove it on their behalf with whatever means they have at hand - no matter if in the US, Europe, China or North Korea. But they should be only able to do this via their own (black-hat) methods and not built-in switches served conveniently by the network protocol. Just because the digital world allows complete control, that doesn't mean society should utilise it for its own steering.
In the end my guess is that such debates are kind of needless/theoretical - the world will gravitate to some similar kind of self-balancing anyway, like how it did with digital content piracy.
Arguing that law enforcement should be forced to remove content via hacking is literally trusting middlemen with a backdoor. If you can't trust states, you can't trust law enforcement. If you can trust law enforcement to use "black hat" methods, you can trust them to interact legally with a company and have an API.
But why, in an open protocol, spend engineering resources making it easier for them? I get why Big Tech does it: they need the .gov in their court when regulations come up in the legislature, and they want the fat .mil contracts. People hacking for fun in their spare time don't have the same motivations.
Because we live in a society under the rule of law for which law enforcement's ability to enforce laws is generally seen as a good thing. The proper place to add 'friction' in that process is legislation - requiring a warrant, for instance.
But there's nothing inherently unjust or malign about law enforcement being able to easily perform its duties (given a legitimate context) or even for companies to give law enforcement the means to police illegal content online, that's their job, nor is the cause of liberty necessarily served by making it arbitrarily difficult.
And if you need a purely technical argument: requiring law enforcement to use black-hat methods means maintaining a minimal level of unsafety in the protocol.
To each their own. Personally, if I'm hacking on a hobby project, the last thing I want to do is read government standards documents and set up meetings with law enforcement IT people. If I were forced to do so, I would just start model railroading instead.
I read that with a bit of nuance, like "only the original author or someone who has taken control of their resources", although you're right that some care would be needed in considering a statement like that.
I understood the statement relative to the case where someone else centralized ultimately has control of your content, like Twitter. The other extreme is some decentralized equivalent to the case of hard-encrypted files that are only openable to a person with special knowledge. There's lots of grey there, and decentralization opens up lots of questions, like "does any one person really have control over content in a decentralized system?"
I guess the comment seemed too fuzzy to me to be interpreted in much detail, so I was willing to flow with it, at least to some level.
No, I think you are right. I think it is a very important function of a society to decide on things that should not be public or even illegal. This has always been the case and child abuse is the most obvious example.
It was odd that someone like Jack could make such a naive statement. Makes me think he's been a bit disconnected from all this for a while.
Maybe he assumed that illegal content was an obvious exception. However, illegal content differs between countries. And severe censorship is usually of illegal content.
I do hate to sound like the neckbeard that I am, but what about RSS? Doesn't it fit the bill as far as the three principles?
> Social media must be resilient to corporate and government control.
> Only the original author may remove content they produce.
... these are the same point? but RSS addresses them both by the fact that content can be hosted on blogging software running in one's basement.
> Moderation is best implemented by algorithmic choice.
... accomplished by having a choice of a variety of RSS clients.
RSS also, by being significantly less efficient at spreading information, seemingly would mitigate/eliminate many of the more harmful aspects of a single-platform publisher/feed aggregator like twitter.
This may in fact be a feature rather than a bug. "Anyone in the world can slam their crappy reply to my post in my face and create a message, notification to my phone, etc." is probably a bad model. "Anyone in my personal friend list" is better but sometimes still too broad.
Web logs had a characteristic where you kinda had to put a bit of elbow grease into seeking out replies, and that isn't all bad. There are plenty of instances in the old web log communities of some monomaniacal troll paralleling some bigger blog's every post with their inane (if not "insane" as my swipe keyboard started with) replies, but mostly nobody has to care. You visit the original web blog and you don't have to see it under every post. Even the original author can pretty much just get on with life without having to worry about it. This is not something "conventional" social media does well.
I remain unconvinced that the model of pushing replies into everyone's face, bundled with the original content, is the best or only model for social media. It's probably a nontrivial part of the reason everything seems so homogenous nowadays... it really is. The same regress-to-the-mean in every "social" interaction because it's the same regress-to-the-mean people and patterns in every single one. Give people a place to be themselves without a hundred people tromping in and you get those unique voices back.
I understand why you may say that, but it's not about misanthropy or introversion. It's about scaling. If you are in a situation where your posts get maybe four comments, all from your real-life friends, all generally positive, hey, great. You don't have a problem.
But let's say you're writing a technical blog, and you've picked up 10,000 readers. You may get a few hundred comments. And they're going to look like an HN discussion area; a few positive hints, but mostly negative comments. If you don't moderate, they're going to start to tend to actively insulting comments as the inevitable escalations develop. So maybe now you need a moderator. Who wants that.
If anything, it's the misanthropic ones who don't care! They're the ones willing to host a brawl in their comment section and give as good as they get, if not better. They're the ones who don't lose any sleep over ban-hammering someone just because. It's the normies who start having problems here.
The social media you describe and are implicitly praising promotes drive-by interactions and tons of shallow connections. The weblog community structure promoted fewer, but stronger, connections. You might not have realized this, but a lot of blogrolls back in the day weren't just "Hey, you may be randomly interested in these blogs"... after all, why would anybody put those up? They were actually a community. Conversations would wander back and forth in these communities for days at a time on a topic, with cross-linking, rebuttals, amplifications, etc. And it was nothing like the heat-but-no-light of a Twitter reply chain.
I miss it. It was much nicer in a lot of ways, and could have been improved even more. But it takes a bit of a critical mass, maybe not a ton, but even so we've lost it to an endless chain of mostly vapid, regress-to-the-mean boring homogeneity.
Much later followup: FWIW, I blame a very popular misunderstanding of "free speech". Free speech means you can have your space to talk, and in that space you can say most whatever you want (subject to some legal restrictions like direct death threats and other issues), and anyone who wants to see what you have to say can come see.
It does not mean that anyone who says a thing online, like a blog post, newspaper article, government report, etc. is obligated to host a comment section that allows anyone with a keyboard to put their speech right next to the original topic, on nearly the same footing. I think the all-but-subconscious idea that comment sections are somehow mandatory, or the sine qua non of "free speech", seriously hobbles people as they try to design communities.
Weblogs empowered the individual, and then we could build interactions from there. Comment-based designs pull everything down to the same level. Mastodon et al still go in the wrong direction, in my opinion.
RSS would (hypothetically) just be the feed protocol for posts. Replies would be handled out of band, potentially using mechanisms that already exist.¹
Heck, replies could be published using RSS too, with a reference to the original so client apps could thread them. In that scenario there'd be an opportunity for third parties to associate replies with posts for users who want to hear from users they don't follow.
Good point. I was aware Linkbacks existed, and had tried to read that article before, but never really understood why they were supposed to be useful. Especially in the way they manifest as visual clutter on my Wordpress blog.
But I can see now that they are potentially a way for distributed conversation to happen (with a lot of work on building better UX norms). I wonder what the pros and cons are compared to ActivityPub.
I wonder that too. ActivityPub/ActivityStreams seems really interesting, but I haven’t found a good roll-up of (1) the problems it was designed to solve and (2) the design philosophy of this solution. Once I understand what’s possible vs. what Mastodon’s implementation of ActivityPub can do, I hope to be able to answer the question: Can an alternate implementation be less of a nightmare to scale, or is that intrinsic to the protocol?
True. It's a simpler, even dumber model, but the core thing - following people's posts - is preserved. And there are other benefits. I'm really dating myself here, but it was nice that engaging with RSS often involved visiting someone's personal website - often a richer, quirkier and crankier space for self-expression than the few fields offered by the Twitter profile.
benefit #2 is that it just slows information down. I know I'm tilting at windmills on this one. But somehow the very speed of information on twitter seems to be a component of its societal risk. speed limit for memes?
I genuinely don't know how to read Jack. His actions contradict his words, and it's tough to believe he didn't know what Vidjya was up to. Playing ignorant makes it hard to trust anything he says.
According to the book Hatching Twitter and the YouTube channel ModernMBA[1], Jack was very careful of cultivating his public image to the point of public deception.
It seems like the more successful you get, the more you lose touch with reality. Jack seems well-intentioned, but his execution seems so poor and misguided.
You elevate yourself to higher and higher levels of abstraction until you aren’t even here anymore. You tell people what to do but no longer have a clue what they are doing. You get fed so much information that you can only consider any idea for a few minutes. You are surrounded by people pretending to agree with you to get money or favors.
It sounds like he was uncomfortable with his power as Twitter CEO but he couldn't get rid of that power (because of "the shareholders") so he let Twitter run on autopilot and vetoed everything that his executives tried to do. Unfortunately that's not a neutral position given that evil flourishes when Jack does nothing. (Clearly he's not a consequentialist.)
I want a federated platform where I can moderate content for myself, and other users can federate my moderation actions. Moderation actions include "delete", "upvote", and "downvote". Oh, and the interface needs to be more like Reddit/HN rather than Twitter/Mastodon: something that's conducive to actual conversation rather than disconnected streams of thoughts. Aether [0] had some of the right concepts here, but made bad choices in technology (P2P instead of federated; custom app instead of web) and ultimately couldn't get traction.
No one is asking for a universal protocol or language to share cat pics.
Technocrats want to provide the next unicorn that captures attention thereby making a new centralized system.
I use Go, net/http, and oh look, I can share state across the internet.
In physics, electromagnetism came together only after ditching the literal knobs and pulleys previously used to get a handle on the behavior.
IMO computer science needs to do that too; it keeps creating tons of jargon as if anything more than electron state actually exists in a machine.
The simplest option is to change the law to require ISPs to provide a static IP on request. Then all of Jack's goals can be met without over-engineering another protocol.
The internet is an ever-expanding series of interoperable protocols. The ask here - that we define shared standards for means of information exchange that are already well known to humankind but so far mostly proprietary or unstandardized - seems remarkably pedestrian. I'm not sure what has you so fired up against it?
Standardizing social interactions between humans and making them persistent is great if you want to collect and process this data in bulk, which is appealing to SV capital.
It is negative value to actual people, who often want to communicate spontaneously and transiently, and who already have lots of open information-exchange standards at their disposal and can just... use them.
What's the Go method to upvote a content and how do other federated endpoints receive that signal? Or are you telling me to build it myself, in which case... yeah, that's what this thread is about?
It's not Go's duty to provide this capability. The internet - more specifically, HTTP - has bred many methods for this over the years. The current iteration is APIs and their different flavors.
Before that we had cgi, HTTP forms and other ways.
You can notify your federated peers via XMPP, if you wish.
Working on it. There's a prototype and explanation at https://peeryview.org/about. We don't have federation (using braid) tested yet, but you can play with the basic model of realtime choose-your-own-moderation via a subjective web-of-trust, similar to liquid democracy.
Great point. This is the root of so much of the challenge on the internet.
There's a certain techno-utopianism libertarian view that says "absolutely, anybody should be able to say anything". And then there's a totalitarian authoritarian view that says "nobody should be able to post anything unless the Government authorises it".
Most people exist somewhere in the middle, recognising there is a lot of nuance. It's impossible to draw a clear, consistent boundary. So instead we depend on human intervention, and often the humans get it wrong - but it's the best we have.
What makes that "complete freedom" claim weird is also its (historical) novelty. Nobody has ever been able to say everything without consequences, yet this utopia claims it as a timeless right. Or maybe I'm misreading them, and they only defend the right to say bad stuff right before going to jail for it? Fine, but in that case there's no novelty at all, so it's probably not that.
The key is accepting legitimacy of various authorities under their respective domains and architecting protocols that respect and delegate or federate decisions to them as needed.
> There's a certain techno-utopianism libertarian view that says "absolutely, anybody should be able to say anything".
Scratch beneath the surface and it's usually BS. "Free speech" in tech seems to mostly be about who gets to be the gatekeeper, rather than removing gatekeepers altogether.
> > And then there's a totalitarian authoritarian view that says "nobody should be able to post anything unless the Government authorises it".
We all know of millions of people trying out many apps in an attempt to escape their own living hell of "totalitarian authoritarians" and their gatekeepers, but I digress.
So, it may be bovine excrement to you, but it is a defining principle in libertarians' reflection on the current extreme views, and probably a viable "canary in the coal mine" test.
“If you want to know who controls you, look at who you are not allowed to criticize.” - commonly attributed to Voltaire
> I’m a strong believer that any content produced by someone for the internet should be permanent until the original author chooses to delete it. It should be always available and addressable.
I think he's lived in the bubble too long, tbh. Addressable how? "Addressable" assumes an equal balance of power between the one putting up the content and the one addressing the content.
> Moderation is best implemented by algorithmic choice.
Here's an idea: moderation is best implemented by everyone. You follow well-known algorithms after initial registration; then you can tweak the algorithm or apply algorithms made by others.
We already had FOAF as far back as 2004 with the semantic web. But the problem is that the network of connections, the social graph, is a moat that prevents users from leaving the network and joining another.
I was just thinking "this is bitcoin but for social media" when he said it out loud:
> It has to be something akin to what bitcoin has shown to be possible. If you want proof of this, get out of the US and European bubble of the bitcoin price fluctuations and learn how real people are using it for censorship resistance in Africa and Central/South America.
I agree, ActivityPub could/should be a huge player in these changes. Jack recognizes ActivityPub's potential too, albeit as "Mastodon" rather than ActivityPub, which is somewhat a self-own to his point: the title is about internet protocols, yet here he is citing the implementation, not the protocol (which tbf was a late-coming change for Mastodon):
> As far as the free and open social media protocol goes, there are many competing projects: @bluesky is one with the AT Protocol, Mastodon another, Matrix yet another…and there will be many more.
ActivityPub is a good tool, but there's still a lot more required.
Starring someone's post kind of goes nowhere. Having a network that can at least in some cases make this popularity information or some kind of visible reaction system should at least be an option. We could have a stream of ActivityPub "Like" actions. But how do we gather & see those viewer's likes? How would we add more flexible reactions? There's work to do here, with interaction among feeds.
Moderation decisions are yet further- far further- behind, are just a couple bits of data inside most implementations. Being able to have multiple different sources of moderation, broadcasting their moderation, and letting us mix & remix these moderation feeds into our experiences is, to me, the essence of what Jack was talking about. ActivityPub has "Flag" actions which we could make a feed out of, but those are pretty bare-bones signals that I feel we'd need to expand on significantly. We don't even have a feed of moderation we can begin to work from here.
ActivityPub implementations today are a reasonably good first-order social network. We can swap toots, yay. But Jack here is talking about the higher-order networks that emerge atop the feed of posts. There's a compounding of further levels of social data, networked too, that's not really happening so far. ActivityPub can and likely will come around to being relevant here... but right now it's early days, and that maturation into social moderation hasn't really begun at all. Jack's post sets a good long-term view, in my opinion, that ActivityPub needs to move toward, regardless of this post or anyone or any other org.
He has to have known about ActivityPub, but with bluesky and now this he seems determined to create a competing standard. Reminds me of that XKCD comic.
Maybe this is just what he wants us to think, but I still think Jack and Twitter were different - different from other social media. Old Twitter was thoughtful and unique, not a run-of-the-mill eyeball-maximizing social media company (Zuck, mostly).
This post seems to prove that, and I truly hope we can break free of central influence on society's conversations.
They all seem different in the beginning before they hone their revenue model and the casual users are swamped or intimidated by professional users. Early Twitter was where friends hung out, before they left for Instagram and the resulting Twitter became about politics and opinion and personal brands. Early Instagram was great too.
Same. 2008-09 in my experience, it was much more like banter between friends. Lots of creative friends left for Instagram soon after it launched and never returned. The tone of Twitter inevitably changed. I use(d) Twitter and use Instagram very differently, as the circles of friends using them diverged.
Of course the fraudsters that created scams like Crypto and NFT would love to unleash a decentralized network that could not be regulated or moderated by congress, and that could not be properly logged anywhere by anyone, so that fraud could more easily be foisted upon everyone.
If this is permitted, and people buy into the crap peddling, then the outcome is well deserved by the victims.
Sorry to be so glum, but Jack had a big role in the promotion and fostering of Crypto and NFTs while he served as Twitter CEO... It should not be given a free pass. The Internet was not meant to be without credibility and accountability; in an age where the warnings are clearly visible, accountability and credibility should cancel most social media platforms based on their performance to date alone in creating stable new business models.
Current Social Media is built on ID theft, deception, unpaid labor, and fraud. Mark my words. It's not a stable platform to build anything else on at all.
> As far as the free and open social media protocol goes, there are many competing projects: @bluesky is one with the AT Protocol, Mastodon another, Matrix yet another…and there will be many more. One will have a chance at becoming a standard like HTTP or SMTP. This isn’t about a “decentralized Twitter.” This is a focused and urgent push for a foundational core technology standard to make social media a native part of the internet.
"nostr - Notes and Other Stuff Transmitted by Relays". "The simplest open protocol that is able to create a censorship-resistant global "social" network once and for all."
Jack stumbled upon it yesterday and created an account, so there has already been a flood of new users in the last 16 hours.
https://github.com/nostr-protocol/nostr
I recommend https://astral.ninja for first-time users, as it makes it easy to generate a key pair, which you can then use as you try the many other clients.
This is a very new protocol, so the clients are still at dev quality rather than production quality, but development has been very rapid.
One thing that stands out to me as a privacy advocate is that, unlike on Twitter or Mastodon, the admins (relay operators, in nostr's case) can't read your DMs, so nostr is already ahead of both of those technologies in that regard.