
I think we disagree on the problem.

The thing you want to avoid is this:

a.scamsite.com gets blocked so they just put their phishing pages on b.scamsite.com

The PSL (or your solution) isn't a "don't trust subdomains" notification; it's "if one subdomain is bad, you should still trust the others," and the problem there is that you can't trust them.

You could combine the two, but you still need the suffix list or similar curation.
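To make the grouping concrete, here's a minimal sketch of how a suffix-list lookup changes which hosts share a reputation bucket. MINI_PSL is a made-up excerpt (the real list lives at publicsuffix.org), and the lookup logic is simplified:

```python
# Hypothetical excerpt of a public suffix list. The "netlify.com" entry
# means each customer subdomain is treated as its own registrable domain.
MINI_PSL = {"com", "netlify.com"}

def registrable_domain(host: str) -> str:
    """Return the label one level above the longest matching public suffix."""
    labels = host.split(".")
    for i in range(len(labels)):
        suffix = ".".join(labels[i:])
        if suffix in MINI_PSL:
            return ".".join(labels[max(i - 1, 0):])
    return host

# Without a PSL entry, both scam hosts collapse into one reputation bucket:
print(registrable_domain("a.scamsite.com"))    # scamsite.com
print(registrable_domain("b.scamsite.com"))    # scamsite.com
# With "netlify.com" on the PSL, each customer is judged separately:
print(registrable_domain("haxor.netlify.com")) # haxor.netlify.com
print(registrable_domain("legit.netlify.com")) # legit.netlify.com
```

So a blocklist keyed on the registrable domain catches b.scamsite.com automatically, while the PSL entry is what spares Netlify's other customers; the curation decides which case you're in.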



It's more like "provenance" of content. I broadcast my accountability for "myblog.com/posts/...", but would disavow "myblog.com/posts/.../#comments"

There are some mechanisms like "nofollow", but nothing systematic, and no "protocol" for disavowing paths, uploads, or fragments.
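No such protocol exists, but a purely hypothetical "disavowal manifest" in the spirit of robots.txt might look like a list of avow/disavow patterns; everything here (rule names, defaults) is invented for illustration:

```python
import fnmatch

# Hypothetical manifest: an author vouches for post bodies but disavows
# the comment fragments and user uploads under the same origin.
RULES = [
    ("avow", "/posts/*"),
    ("disavow", "/posts/*#comments"),
    ("disavow", "/uploads/*"),
]

def is_avowed(url_path: str) -> bool:
    """Last matching rule wins; unmatched paths default to avowed."""
    verdict = True
    for verb, pattern in RULES:
        if fnmatch.fnmatch(url_path, pattern):
            verdict = (verb == "avow")
    return verdict

print(is_avowed("/posts/2024/hello"))           # True
print(is_avowed("/posts/2024/hello#comments"))  # False
print(is_avowed("/uploads/cat.gif"))            # False
```

Crawlers or reputation systems could then score disavowed paths separately, the way rel="nofollow" hints at for individual links today.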

Back in the slashdot days, I thought of blogs as "the stationery of the internet", a way to more authoritatively declare that the content was yours... but interop is hard and unprofitable, so walled gardens became the norm.

We just haven't had the benefit or forcing function which encourages a solution to "that stuff over there is less trusted than my stuff over here".

Maybe we're at the point where hosts of any kind MUST be responsible (or accountable) for any content originating from their domain? It kills indie/anonymous hosting, but puts a fine "KYC" point on distributing "evil" stuff on the internet?


Again I think we're talking about different things.

> We just haven't had the benefit or forcing function which encourages a solution to "that stuff over there is less trusted than my stuff over here".

No, the problem is we can't let people say "that stuff is someone else's fault" when it is their own fault.

Scammers will claim subdomains are actually just not them and are other bad actors, and you're back to loads of phishing pages.

> Maybe we're at the point where hosts of any kind MUST be responsible (or accountable) for any content originating from their domain?

We already are at that point; the PSL exists to get past it for cases where people host on subdomains. Netlify shouldn't have to risk having every customer flagged if one customer is a phisher. The curation is vital.

The other solution would be to have another approach around hosting where verifiable owners could publish wherever they want and it's tied to a real entity, but that has other worrying outcomes I assume.


If you reread my final paragraph (MUST be responsible) then I think we're reaching the same conclusion: "on behalf of" is untenable for small hosts (ie: anyone smaller than Google or Facebook)

The other way of looking at it might be something like "DMARC-4-HTTP", ie: sign Content-Length and a Content-Sig header with a public/private key, and if you serve something like `SELECT comments FROM evil` then that "taints" your key.

Getting back to Netlify: index.html would be signed by netlify.gpg, but haxor.netlify.com would be signed by not_netlify.gpg

...we can call it "web of trust 2.0" :-P
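A toy sketch of that signing idea, with HMAC standing in for the public/private-key signature (all key names here are made up; real machinery in this space is standardized as HTTP Message Signatures, RFC 9421):

```python
import hashlib
import hmac

# Hypothetical keys: one the platform vouches with, one for tenant content.
NETLIFY_KEY = b"netlify-vouches-for-this"
NOT_NETLIFY_KEY = b"customer-content-only"

def sign_response(body: bytes, key: bytes) -> str:
    # Bind the signature to the length plus a body digest, as proposed above.
    msg = f"content-length:{len(body)}\n".encode() + hashlib.sha256(body).digest()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify(body: bytes, signature: str, key: bytes) -> bool:
    return hmac.compare_digest(sign_response(body, key), signature)

index_html = b"<h1>Welcome</h1>"
phish_page = b"<h1>Enter your bank password</h1>"

sig = sign_response(index_html, NETLIFY_KEY)
print(verify(index_html, sig, NETLIFY_KEY))  # True: the platform vouches
print(verify(phish_page, sig, NETLIFY_KEY))  # False: not signed by the platform
# haxor.netlify.com content would only verify under NOT_NETLIFY_KEY,
# so a bad page never "taints" the platform's key.
```

The interesting property is the same one the PSL curation provides, but cryptographic rather than list-based: reputation attaches to the signing key instead of the hostname.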

Appreciate the honest discussion!



