Hacker News | justin101's comments

I am curious about the legality of this. I guess I assumed that doing this type of thing would technically be a DMCA-type breach. So this makes me wonder: is my assumption wrong? How does this work legally?


It's typically against terms of service to decompile or reverse engineer applications you download in this way, but it's also typically against terms of service to use their services from unofficial clients, so I think they're already way past T&Cs.


RE for interoperability is allowed in many jurisdictions (afaik, ianal)


What does copyright have to do with this?


I think the implication is that bypassing cert pinning could be considered a violation of the anti-circumvention provisions in the DMCA and WIPO Copyright Treaty, because it results in decryption of copyrighted content without the permission of the copyright owner.

IANAL, but in the US, at least, I think the exemptions for good-faith security research[1] would apply. Maybe even the reverse-engineering for interoperability language in the DMCA itself[2].

[1] https://www.federalregister.gov/documents/2015/10/28/2015-27...

[2] https://www.govinfo.gov/content/pkg/PLAW-105publ304/pdf/PLAW...


I'm not an expert, but I think in the Blizzard v. Glider case the courts decided that if you've violated a program's EULA, it becomes copyright infringement to duplicate the bits from your disk into RAM.


The serverless name is unhelpful. What these companies really mean by serverless is:

"your code that handles HTTP requests must be able to startup, run, and shutdown effectively instantly"

aka "serve your HTTP without bloated long running processes."

aka `cgi-bin` scripts.
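To make the cgi-bin comparison concrete, here's a minimal sketch of a classic CGI-style handler in Python (my own illustration, not from the thread): the process starts, reads the request from environment variables per the CGI convention, writes one response, and exits.

```python
#!/usr/bin/env python3
# Minimal CGI-style handler: start, serve one request, shut down.
# The web server passes request data via environment variables.
import os

def handle() -> str:
    # PATH_INFO and QUERY_STRING are standard CGI variables;
    # the defaults here just let the sketch run standalone.
    path = os.environ.get("PATH_INFO", "/")
    query = os.environ.get("QUERY_STRING", "")
    body = f"Hello from {path} (query: {query!r})\n"
    # A CGI response is headers, a blank line, then the body.
    return f"Content-Type: text/plain\r\nContent-Length: {len(body)}\r\n\r\n{body}"

if __name__ == "__main__":
    print(handle(), end="")
```

The whole "serverless" lifecycle is right there: no long-running process, no state between requests.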


GCP's max instance count is helpful to a certain extent, but it doesn't completely protect you from DDoS attacks. If you have 3 nodes that can each handle 1000 requests per second, and those requests are doing something expensive, the max instance limit doesn't help you.

Although I guess if you have 3 nodes that can only handle 100 requests each, the damage from a DDoS would be much more limited.
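The arithmetic above can be made explicit (illustrative numbers only, mirroring the two scenarios in the comment):

```python
def max_absorbed_rps(max_instances: int, per_instance_rps: int) -> int:
    # With a hard instance cap, the worst-case load the service
    # (and your bill) can absorb scales with per-instance throughput.
    return max_instances * per_instance_rps

# Same 3-instance cap, very different exposure:
heavy = max_absorbed_rps(3, 1000)  # expensive handlers, 3000 req/s worst case
light = max_absorbed_rps(3, 100)   # cheap handlers, 300 req/s worst case
```

So the cap bounds concurrency, not cost per request: the damage ceiling is still proportional to how much work each instance can be made to do.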


True, it doesn’t protect you from DDOS. But how is it different from having 3 regular hosts?


I enjoyed Google's GCP for quite a long time, until they removed the ability to cap expenses with a budget limit. (It used to be that if you hit your budget limit, you could make your site error out.) Now everyone on GCP is one DDoS away from a nightmare cloud bill. I'd rather use a traditional server and be able to sleep at night. I moved all my sites and client sites off GCP.

The most annoying part is that Google's infrastructure for cloud computing is so much better than the others if you're willing to work within their ecosystem. Simple deployments, version management, rollbacks, etc... There is nothing quite like it. (I'm not saying there aren't competitors, just that Google seems easiest to use.)


> Simple deployments, version management, rollbacks, etc... There is nothing quite like it.

We do all of this with Azure


Youtube has a sense of humor and hardcodes this video about "videos that have 301 views" to have 301 views.


I am as horrified as you probably are, but Google is quite possibly not allowed (by law) to share certain links. Even if it's technically allowed, their lawyers would likely not let them participate in or facilitate sharing links to sites that would be deemed "facilitating illegal activity".

People are starting to learn why all their s*t should not be up in the cloud.


This isn't sharing, these bookmarks are private to the user in question.


It is for sharing. This is their shared/public bookmark collections, not browser bookmarks.


No it's not, read the comments. This is their synced bookmarks that are not publicly shared.


I think this depends on a radical redefinition of what it means to "share" something. If I put a book in a self-storage unit, and later go to retrieve it, is the owner of the property "providing" or "sharing" the book? What if I am leasing an apartment from a landlord?

I suppose it is an issue inherent to services set up to run through central providers, who can institute arbitrary controls on the services; if they don't, they are seen as failing to do so, which of course exposes them to liability and censure, et cetera.


Irrespective of whether this is right or wrong, the concern of the lawyers at Google would not be "if Google _should_ be held accountable for the private activity of users"; the concern of the lawyers will always be "what _could_ they be held accountable for."


Section 230 of the CDA shields Google from liability for the links users choose to share using their services.


I doubt that applies if they have a court order or request from the FBI to block certain domain names or links.

If you had a court order or "request for cooperation" from the FBI to block that domain name, would you do it?


Not being from the US, this is probably a stupid question. But why can't people in the US willingly and freely sell things (legitimate or otherwise) to other willing buyers without government interference/participation?

I guess I used to think the US was the "land of the free", and I understood that people used this phrase in a literal way. I'm wondering: when Americans use the phrase "land of the free", perhaps I am misunderstanding, and it's really generally used in an ironic way?

Can someone please explain?


"Can't people in the US go out and shoot other people, because it is the land of the free?"

No, there are still laws, and fraud is illegal.


Well, first there are taxes. Second, some things like narcotics are not allowed to be sold. Other controlled substances as well.


Where does one even go about finding 12 GB of pure Latin text?


I had the same question, wondering what sort of workflow would have this task in the critical path. Maybe if the Library of Congress needs to change their default text encoding, it'll save a minute or two?

The benchmark result is cool, but I'm curious how well it works with smaller outputs. When I've played around with SIMD stuff in the past, you can't necessarily go off of metrics like "bytes generated per cycle", because of how much CPU frequency can vary when using SIMD instructions, context-switching costs, and different thermal properties (e.g. maybe the work per cycle is higher with SIMD, but the CPU generates heat much more quickly and downclocks itself).
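One way to sidestep those cycle-counting pitfalls is to measure end-to-end wall-clock throughput, which already folds in frequency scaling and thermal downclocking. A rough sketch (my own illustration, not the article's benchmark; the function name and sample data are made up):

```python
import time

def throughput_mb_s(data: bytes, reps: int = 5) -> float:
    # Best-of-N wall-clock measurement: wall time already accounts for
    # frequency scaling and thermal effects that per-cycle metrics hide.
    best = float("inf")
    for _ in range(reps):
        start = time.perf_counter()
        data.decode("latin-1")  # the operation under test
        best = min(best, time.perf_counter() - start)
    return len(data) / best / 1e6  # MB/s

# ~400 KB of Latin-1 bytes, including non-ASCII (0xE9 = 'é')
rate = throughput_mb_s(b"\xe9abc" * 100_000)
```

Running the same measurement on both small bursts and large buffers would show whether the advantage holds outside sustained workloads.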


Not sure whether that was sarcastic, but ISO-8859-1 (Latin 1) encodes most European languages, not just Latin.

https://en.wikipedia.org/wiki/ISO/IEC_8859-1


But where do you find it? Almost the entirety of the internet is UTF-8. You can always transcode to Latin 1 for testing purposes, but that raises the question of the practical benefits of this algorithm.
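Transcoding a UTF-8 corpus down to Latin-1 for testing is a one-liner in Python, with the caveat that only code points up to U+00FF survive (the sample strings below are my own):

```python
# Text whose characters all fall in U+0000..U+00FF round-trips losslessly.
utf8_text = "Queensrÿche café naïve"
latin1_bytes = utf8_text.encode("latin-1")
assert latin1_bytes.decode("latin-1") == utf8_text

# Characters outside that range must be dropped or replaced first:
mixed = "café €5"  # the euro sign U+20AC is not in Latin-1
safe = mixed.encode("latin-1", errors="replace")  # € becomes b"?"
```

So synthetic test data is easy to produce, even if real-world Latin-1 corpora are getting scarce.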


Older corpora are probably still in Latin-1 or some variant. That could include decades of newspaper publications.


All of Europe wrote in Latin 1 for a decade. There are billions of files encoded in Latin 1 everywhere.


Where?


It's not necessarily about sustained throughput spent only in this routine. It can be small bursts of processing text segments that are then handed off to other parts of the program.

Once a program is optimized to the point where no leaf method / hot loop takes up more than a few percent of runtime and algorithmic improvements aren't available or extremely hard to implement the speed of all the basic routines (memcpy, allocations, string processing, data structures) start to matter. The constant factors elided by Big-O notation start to matter.


The Vatican?


The "Latin" in Latin-1 refers to the alphabet, not the language. In fact, Latin-1 can encode many Western European languages.


I believe it was a joke.

But the humour may have been lost in translation. It's funnier in the original ASCII.


The high bit is generally used to indicate humour.


Yes, but no one can say that. Managers need to go off to project-management, team-management, and all sorts of other scenic hotel retreats to learn how to use JIRA better, because of course the fault lies with JIRA and the underlings.


There are a number of reasons this could be that are not necessarily nefarious. It's odd to jump straight to "something evil is going on".

Tell me this: does Twitter have some kind of "play nice" code that slows down inbound click-throughs to a site so it doesn't DDoS other sites? I can easily imagine a scenario where anti-DDoS code would allow small sites to pass through quickly, yet sites under heavy "click through" load are being slightly throttled.


Then click delays would appear random to any single client.


This wouldn’t reduce the total requests made, so it would be a weird anti-DDoS measure.


Indeed. A five-second delay only means the DDoS starts five seconds later.

