Some FBI agents came to my house once and told me that my home Internet had been used to visit Islamic Extremist websites. They brought a local police office with them and a 'threat assessment' coordinator from my workplace. They asked me if my family was Muslim and wanted to know if we had been radicalized.
We are not religious (at all). We do not attend church, synagogue or mosque. We are lower middle class white Americans born and raised in the USA and have never traveled outside the country.
I have no idea why they thought this about us. Maybe it was an IP mix-up, but it was very disturbing. I feared that I may lose my job. I became very afraid of the FBI that day. I think this could happen to anyone at anytime.
"threat assessment' coordinator from my workplace"
"I feared that I may lose my job."
I understand that the police/FBI have to conduct investigations. What I don't understand is the involvement of the employer; it's extremely disturbing. You have not been convicted, you have not been charged, you are not even a suspect or accused of anything at this point. How is your private life the business of your employer?
Why is your privacy being breached and your livelihood being placed at risk?
Surely the FBI is not allowed to publicise random dirt they find on innocent people?
The FBI still has buildings named after J Edgar Hoover. That should tell you everything you need to know about their institutional respect for justice and due process.
I'm also not an American, but as far as I've read: massive abuse of power in using the FBI to spy on political rivals, illegal wiretapping, illegal surveillance of US congressmen and even presidents, running extremely controversial programs like COINTELPRO, and investigations that tried to hinder the civil rights movement, etc.
> Surely the FBI is not allowed to publicise random dirt they find on innocent people?
If they're doing an investigation, they very likely got the employer involved in order to get more information on the person they're investigating, and companies have liaisons for law enforcement, as well. If the FBI comes knocking and says, "we think you've hired a terrorist," it's going to ruffle some feathers at the company no matter how unfounded or untruthful the claim is.
It isn't just the suspicion of terrorism that might have law enforcement or the FBI knocking at an employer's door. If someone is suspected of any type of cyber crime, the FBI will be coming for all of their computers and electronic devices, including the ones they use at work.
Depending on the company, they would likely audit its activities in case the company itself was a vector, assuming that terrorists also require intelligence networks.
These are par-for-the-course FBI intimidation tactics, along with interviewing everyone you've regularly conversed with. It serves a double purpose: investigation, while simultaneously making you radioactive to be around.
Thereby isolating the person during a period of high emotional anxiety.
You deserve to always be assumed innocent until proven guilty, and you will have to be proven guilty to be found guilty; realistically speaking, those premises are extremely technical.
You don't have to be found guilty to be punished; look up "case load". They can keep you on probation and monitoring for as long as they want to draw out the case, and the whole time you are required to make monthly payments or risk going to jail.
Without specifics, or some indication of who is triggering the delay (e.g., defendants may request delays), I couldn't possibly comment.
Given that law and legal processes are not my bailiwick, I'd probably not be able to comment intelligently regardless. But you've posed a null-content question.
The State Attorney General is dragging the case out because they refuse to look at it. They also filed it under the wrong statute, so their arguments are incorrect.
I'm reading a book where the main character receives a subpoena to attend an interview with the Portuguese dictatorship's political police. Nothing has happened to him (so far), but everybody in the hotel where he is staying starts to treat him differently.
Who will be first in line when a firing is necessary? Probably the guy who has problems with the FBI.
It's (scarily) interesting that they react with actual personal attendance based purely on a very limited set of electronic information.
From your further description:
> We are not religious (at all). We do not attend church, synagogue or mosque. We are lower middle class white Americans born and raised in the USA and have never traveled outside the country.
Wouldn't the FBI have been able to do any amount of background searching (read: further electronic information gathering) that would be less effort-intensive than arranging a 'threat assessment' coordinator from throw_away_dgs' actual workplace and a local police officer for an in-person door-knock? If such background checks were performed, then either they don't have much data or their threat weightings are set to red-scare levels of paranoia. Either way, it's scary.
I think what he experienced is another manifestation of the same phenomenon as zero-tolerance policies in schools: institutions ask their enforcers to suspend common sense and strictly enforce the letter of the law/guideline/etc., even in situations where any reasonable person would decide it made no sense. They do this because such common sense and gut feelings are how bias and prejudice might creep into their oh-so-perfect system.
It used to be that if a teacher saw a kid get bullied and then punch his bully back, the teacher was empowered to evaluate the situation using their best judgement, and punish the bully while congratulating the bullied kid who stuck up for himself. The system sees a problem with that; the teacher's perception of the incident might have bias and prejudice. The system's solution is to have zero tolerance for any violence and punish both students equally. The system's solution to the possibility of prejudice against one student is to ensure prejudice against both students.
At my school it was worse than that. Anyone "involved" in a physical altercation would be suspended. Someone could walk up and punch you, and you would be suspended for it. This obviously had a chilling effect on reporting. No more bullying. Problem solved.
Such policies also justify and encourage excessive retribution. If you’re getting suspended whether you fight back or not, may as well cause some real damage to earn it.
Of course. Bias and prejudice are always a real concern. In situations where the teacher gets it wrong and punishes the bullied kid, the kid learns an unfortunate but useful lesson: that some agents of the system cannot be relied on.
But the zero tolerance response to this circumstance ensures the bullied student is prejudiced against, judging him guilty before considering the facts of the individual circumstance. What does that teach the kid? That the system itself cannot be relied on.
Was about to comment the same thing. I teach future teachers, and I always say that everyone forgets their school math and chemistry lessons after cramming for the test. What sticks is learning how to survive in an unequal, dysfunctional system where you're the oppressed class, fighting among each other while you can't touch the people in power.
This is how 95% of the world works. In most countries, people are conditioned to "join" the rulers from a very young age, and people who use critical thinking are a tiny minority (often invisible).
They are right that everyone is biased; what they completely fail to establish is how they improved their own perception. Actions justified by the presence of bias and prejudice very closely mirror religious dogma by a more objective metric.
Actually, I think they had no intel. You NEED intel for a judge to order a subpoena, and if a subpoena was issued, the ISP would open its firehose and overwhelm the FBI with evidence suggesting that there's nothing to investigate. And having visited extremist sites a handful of times, even inadvertently, is probably not going to meet the threshold for a subpoena.
If the FBI visited me and casually asked about my web history, I would casually ask them to pound sand (as should everyone!). But if the agent was accompanied by someone from my employer, I would eagerly cart up every single device in my home and offer to carry it out to their vehicle (as I fear most would).
It smells like someone is taking massive investigative shortcuts, at very significant cost to the accused. Then again, I can’t even fathom the upside for the FBI.
My gut reaction is simply speed. Why sit at my desk for a few hours reading documents when I can make a couple of phone calls and be scary for 20 minutes to feel secure in saying "yep, not terrorists"?
Or - you know - “weeeelp, I’ve been sitting at this desk all morning, let’s go talk to someone”.
Why spend the extra time and effort, let's just hit the road and totally and completely fuck at least one citizen's opinion of the entire system upon which their life and livelihood depends.
Saves me a couple of hours, and the sun's out. Sold!
Ironically, maybe this will actually radicalise the people they're investigating for radicalisation.
> Then again, I can’t even fathom the upside for the FBI.
The upside is power.
You yourself said as much: "If the agent was accompanied with someone from my employer, I would eagerly cart up every single device in my home and offer to carry it out to their vehicle."
You fear them. Rightly so. The FBI has incredible power, backed by the full might of corporate media. To cross them is to be crushed.
Why would they need a warrant, when Apple and Google climb over each other to volunteer every scrap of your private information? Why take the time for a trial, when justice can more efficiently be served by both your employer and your union gleefully ruining you financially upon request?
People have been demanding[1] this for years. Now it's here.
>If such background checks were performed, then they either don't have much data or their threat weightings are set to red-scare levels of paranoia. Either way, it's scary.
They're not gonna have anything happen to them if they go tough on (and fuck over) an innocent guy.
They're gonna look bad if they miss a terrorist.
So they have no incentive to not have "red-scare levels of paranoia".
That's true. I still remember that the Boston Bombers were on international watch lists, and their home country warned the US (whichever TLA; it may have been an issue of crossed wires) that these guys were on the move, and it was all ignored.
Now, visit a 'bad' website, or somehow be mistaken for someone that visited a 'bad' website, and you'll get some deep personal treatment.
Feds can't win, but it seems to be through their own laziness or incompetence or lack of interagency cooperation.
Or maybe it's because of their motives, and the level of capture they have over their 'customers'? Seems pretty simple to me. They have a monopoly on the service, and the only retribution people can take is political, which means everything is done on appearance.
That's absolutely fucked. The whole story of the Holy Land Foundation being railroaded and labeled as terrorists when all they did was advocate for human rights of Palestinians...it's an incredibly chilling story. To hear that those who merely donated to a worthy cause were also then audited...the outrageous injustice makes my blood boil.
I’ll be the dissenting voice and say this reads like a “sow discord in the US 101”. Why on earth would the FBI bring both the police and a “threat assessment” coordinator from your work to interview you? Why would your workplace ever agree to it? That screams lawsuit waiting to happen.
And on that note, why didn’t you sue your workplace for harassment? Whether you’re religious or not isn’t any of their business and is a protected class.
A decade ago the FBI harassed me at my home, twice waking me up from sleep, and at a past employer before that, on entirely unfounded claims.
They didn't care what the consequences were for targeting someone innocent.
They also made nasty threats like, "Someone has to go down for this, and if you help us collect intel on the industry peers we suspect, then someone else can be that person."
I told them politely to go die in a fire, because I was not about to help them harass other innocent people, but it was terrifying nonetheless that they seemingly had the power to end my whole universe.
I became convinced through that ordeal that the FBI is a deeply corrupt organization that creates pressure to close cases by any means needed.
The OPs post seems totally believable and consistent with stories I have heard from others, particularly if they work for an organization that has the US government as a customer like a defense contractor.
You're incredibly naive if you think this kind of stuff doesn't happen all the time since 9/11. I personally know several people with similar stories in the US.
You know several people whose employers sent someone to their house with FBI agents to harass them about their religious beliefs?? And none of them sued?
I’m not surprised at all that the FBI is harassing people, I find it incredibly hard to believe a private business would touch the situation with a 4,000 foot pole. They have absolutely nothing to gain and massive liability.
Is it prohibited to visit those websites? I was once interested in understanding the way radicals think and reading their arguments, so I spent some time hanging around some radical websites.
I'm fairly confident that those agencies use context in an automated manner to get any meaningful results.
So "keyword" (could be a word, domain or some other pattern) X may trigger only if Y and Z was already triggered. And some keyword A may only trigger if B was NOT present.
This way you can distinguish doctors, reporters or people studying history or chemistry from those who plan something.
Or e.g. ML applied to patterns over time. Globally.
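The "trigger only in context" idea described above can be sketched as a toy rule engine. To be clear, everything here (the tokens, the rule contents, the function name) is invented purely for illustration and bears no relation to any real system:

```python
def triggered(tokens: set, require: set, exclude: set) -> bool:
    """Fire only if every required context token is present
    and no excluding token is present."""
    return require <= tokens and not (exclude & tokens)

# Hypothetical "observed search context" sets: a chemistry student
# cramming for an exam vs. a pattern meant to look like planning.
student = {"nitrate", "synthesis", "exam", "textbook"}
suspect = {"nitrate", "synthesis", "timer", "detonator"}

rule_require = {"nitrate", "synthesis"}           # X triggers only with Y and Z
rule_exclude = {"exam", "textbook", "journalism"}  # ...and only if B is absent

print(triggered(student, rule_require, rule_exclude))  # False: exam context excludes
print(triggered(suspect, rule_require, rule_exclude))  # True
```

The point of the exclusion set is exactly the commenter's claim: the same keyword in a "doctor/reporter/student" context should not fire, while the same keyword without that context might.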
And yes, I do not like it at all. HN is full of people who may well research some kind of bomb, religion, or whatever else out of pure curiosity, but since there are not many such people, it could become a problem in court one day.
Mix in some Snowden, your hardware stack, gag orders, and the fact that we have more laws than anybody can read, and you may feel like watching some stupid memes instead.
To quote a Dartmouth history professor who taught a class on the subject: "if you don't get randomly selected for a search on your next flight you aren't doing your homework"
It's not prohibited, but they notice, and they subject you to harassment at every interaction with every part of the system that is integrated with their database.
Did they have a warrant? Never talk to the police without counsel; refuse all searches without warrants. "We might think you went on a website" is not probable cause. You have a right to an attorney and to silence.
Most of the time they log your plaintext DNS queries. But DoH is encrypted, so they won't be able to log your DNS queries. Cloudflare is not the only DoH provider; there are many. If you want, you can grab several lines of PHP code and create your own DoH endpoint in another country. Because DoH is HTTPS, they cannot distinguish it from normal HTTPS traffic. Of course, if they use deep-packet-inspection tools they will know what website you are visiting, but those are not used widely; they are used to target specific people. To sum up: DoH is better than plaintext DNS queries.
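For the curious, here's roughly what a DoH lookup looks like on the wire: an ordinary DNS query in RFC 1035 wire format, base64url-encoded into a plain HTTPS GET per RFC 8484. A minimal Python sketch (it only builds the request URL and never sends anything; the Cloudflare resolver URL is just a well-known example endpoint):

```python
import base64
import struct

def build_dns_query(hostname: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query in wire format (RFC 1035).
    qtype=1 is an A-record lookup; ID=0 as RFC 8484 recommends."""
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)  # RD flag, 1 question
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

def doh_get_url(resolver: str, hostname: str) -> str:
    """Encode the query as unpadded base64url, per RFC 8484 section 4.1."""
    payload = base64.urlsafe_b64encode(build_dns_query(hostname)).rstrip(b"=")
    return f"{resolver}?dns={payload.decode('ascii')}"

url = doh_get_url("https://cloudflare-dns.com/dns-query", "example.com")
print(url)
```

To an observer on the path, this GET is indistinguishable from any other HTTPS request to that host, which is the whole point.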
That's extremely disturbing. Accessing some random website should never cause police to show up. They should never even know what you did. That's like keeping tabs on what books people read and raiding somebody's house because they looked up how bombs are made.
> We are lower middle class white Americans born and raised in the USA and have never traveled outside the country.
I am most curious why you believe that is a defense against radicalization. In the US that is perhaps the most common demographic for radicalization of any type.
OP apparently managed to clear up the mistake without much bother by speaking to them (although they were understandably shaken up by the experience). This presumably wouldn't have happened if they'd done what you suggest.
Not speaking to law enforcement outside the presence of your attorney is excellent advice. There's no downside to having the attorney there, and potentially life-shattering downsides to attempting otherwise.
Just like the NSA spying on Americans is unlawful [0], the FBI terrorizing political movements is unlawful [1], and the CIA operating in the US is unlawful [2].
Yet, I'm pretty sure all these are still happening, to a certain degree, to this day.
That is little consolation in the court of public opinion, where FBI management and the Justice Department have demonstrated willingness and capability to hold mob court and manipulate public opinion outside the formal legal system. They will SWAT you themselves if they like, on live TV.
That's not really how it works. Sure, it is also a way to circumvent such local legislation, but for that to work, American allies would need to run actual surveillance structures on the US mainland proper, out in the open.
You know, like the US does in the countries of its "allies", like Germany [0].
Do you really think the US would allow German intelligence agencies to build whole complexes, plugged right into the US's largest IXP?
That's why this situation is not nearly as "symbiotic" as it's often made out to be. At best that applies to Five Eyes countries, and even there only to a very limited degree, as no Five Eyes member has as much foreign presence as the US.
To this rhetorical question, a resounding “yes” answer. There is credible suggestion that GCHQ has been invited to operate US facilities on US soil for this explicit purpose.
The people responsible for investigating and prosecuting such crimes have some not so great incentives to avoid doing so and keep the whole thing secret though, don't they?
Sounds like an easy way to have your case tossed out in court.
It's funny how much this differs from my own personal experience with law enforcement. The friends I know are timid as hell and don't do anything without a warrant, just to stay on the safe side, even if they probably don't need one.
Good luck with that. In my case there were a ton of violations of the SCA. Violations of the SCA are only actionable if they are "constitutional" in nature. (That essentially means that if the government indicts you based on information they illegally gathered by violating the SCA, but the information did not belong to you, say it belonged to your wife or business partner, then you can't get the information suppressed/excluded in court.)
In my case the government did violate the SCA and my constitutional rights, but two judges have looked at it and both gave the same answer: the police must be allowed to commit crimes to gather evidence. Next stop: the appeals courts.
Yep, the courts side with law enforcement. The whole 'truth comes out in a fair fight' is completely undermined by this. The system protects itself above all else.
I was involved with a case that sounds similar. The judges don't care about your rights and blatantly misapply the law. Also, magistrates are complete BS and don't even know basic legal stuff. I had one think I called him prejudiced when I requested a case be dismissed with prejudice... Complaints do nothing. There's no real oversight, leading to a completely incompetent system.
You have to generally assume that the FBI and other government agencies are competent. My baseline, starting assumption is that if everyone in the US was too scared to use programs like PRISM, they wouldn't have been built.
So these kinds of claims just don't make any sense in a world where we know that government has conducted surveillance without a warrant, and where we know that the FBI has built entire programs designed to make it easier for them to conduct surveillance without a warrant.
From the article posted that you're replying to:
> What Administration officials tend to obscure is that what they seek is not immunity for future cooperation with lawful surveillance, but rather telecom immunity for assisting with unlawful surveillance conducted from October 2001 through January 17, 2007, as part of the warrantless wiretap program initiated by the White House.
I'm not sure I understand what your implication is. I don't understand how it's possible to respond to an article that is about telecoms seeking immunity for previous unlawful actions by saying, "the government/businesses would be way too scared to do anything unlawful." I mean... obviously not, they sought immunity for it. They wouldn't just randomly do that, the most likely explanation is that they made immunity a pressing issue because they thought they needed it.
It does not seem to me that the optimistic world you describe and the observable actions and lobbying efforts of companies/administrations line up with each other.
Being charitable, let’s assume his friends work as homicide or theft detectives. If so, they need a high standard for admissible evidence to build their case.
If on the other hand his friends are street cops tasked with clearing a corner of drug dealers because some neighbor complained to their council person who complained to the police chief then those cops don’t necessarily care about extrajudicial activities.
Having been harassed by street cops and interacted with homicide detectives, I can tell you they vary tremendously in professionalism.
They definitely need a high standard for admissible evidence; that doesn't stop them from purchasing large amounts of data from all-too-willing communications companies and using parallel construction to build their case once they find out what happened via warrantless spying.
They can also query these messages to see if there is something on the dealers they get paid from and then warn them if something comes up. It works both ways, no?
Let's be honest: how often do people tell their pals how they commit crimes, or are less than scrupulous at work, assuming their pals aren't criminals as well? People tend to keep things like that secret, even from people who are close to them.
An EO making it lawful for a federal agency to collect doesn't mean it is lawful for a private company to disclose; it doesn't change when a company is permitted to disclose the content of messages under the SCA.
You are correct. There's also varying two-party/one-party consent required depending on the state, in the absence of a warrant. But unless you're targeting the devices, you will not get much at all from service providers. They simply don't keep it, contrary to what I read here.
The reality is that many times the only barrier to sensitive information is a shared login which many people know and a statement that users represent that they have legal authority to access that info.
Major service providers do not maintain SMS history beyond 24 hours, let alone 1-7 years (as of the last time I worked a case). They're transparent about it as well: look up the LE liaison contacts on their sites, and they'll clearly list what is and isn't available. That's why it's crucial to get the actual devices themselves. Reason: the infrastructure needed to manage SMS content for every customer for 7 years, with zero business justification/use case, is phenomenal. They'd spend most of their time responding to civil and criminal subpoenas/warrants. That would be a feat the NSA would be proud of. Been there and done that a hundred times. (This also aligns with certain VPN providers refusing to keep logs. It's a cost that provides zero returns, so they cut it as a business decision, not because they're trying to stick it to the man.)
They sold access to send or send/receive messages for use cases where customers would legitimately consent, e.g., a wireless Bluetooth accessory that wants to access and reply to SMS message content on Apple devices that Apple won't grant access to.
Still. It meant a very powerful API key had to be protected and never abused.
I can only imagine others obtain this kind of god-mode SMS access with less than ethical intentions.
I'm surprised to hear this has changed so significantly since the Snowden leaks, especially after the blatant attack on Qwest CEO Joseph Nacchio for refusing to spy. It was established then that the major mobile telcos in the USA (T-Mobile, AT&T, Verizon, etc.) were keeping and providing full SMS data for 2-5 years.
There's no reason for them to keep those records, other than for law enforcement's sake. No use case for calling up your operator to ask about that text message you got "from Fred at 4am one day a couple years ago."
IIRC the only reason this amendment was made was because the 180 day limit was found unconstitutional anyway by an appellate court. So, technically the amendment did nothing.
It doesn't matter where your data is held, locally or cloud, (if you are an American resident and your data is in the USA) as it is _your_ data and it is unconstitutional for them to read it without a warrant. In theory.
Source is a few years old, but I suppose we can make another FOIA request to find out how long carriers store text messages these days - it was basically 0-5 days a decade ago:
Idk... back in the mid-2000s my parents managed to get a transcript of all of my (minor) sister's SMS messages going back a few months (as part of a billing dispute).
You'll be lucky if it's any longer than 24 hours now. There's no business use case for building and maintaining the technological infrastructure to manage it for years. It's private info, and they can't sell it to anyone without legal liability. If LE gave them the funds to build this infrastructure and use it for retention, then the service provider is essentially an agent of the state at that point.
I can only imagine that the scale of all US SMS messages is absolutely staggering. It probably eclipses all other text formats combined in terms of daily production. Here's a blog post from a few years ago estimating it at 26 billion text messages per day and rising: https://www.textrequest.com/blog/how-many-texts-people-send-...
Not counting media and assuming they are all 160-byte messages, that's about 4 terabytes per day, or about 200 Wikipedias per day. I guess that's not too bad in terms of storage requirements; certainly a manageable amount of data for a telecom to store. But assuming you want those indexed and easily retrievable somehow, it could get very burdensome to manage and interact with, and that tends to balloon the size at least a little as well.
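The back-of-the-envelope math above, spelled out (using the ~26-billion-messages-per-day figure from the linked post and assuming text-only, full-length messages):

```python
messages_per_day = 26_000_000_000  # ~26 billion/day, per the linked estimate
bytes_per_message = 160            # a full-length, text-only SMS

total_bytes = messages_per_day * bytes_per_message
terabytes_per_day = total_bytes / 1e12
print(f"{terabytes_per_day:.2f} TB/day")  # 4.16 TB/day
```

So raw storage is cheap; as the comment notes, it's the indexing, retention policy, and liability around the data, not the bytes themselves, that make it burdensome.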
The liability and legal issues around it (both externally and internally - don't want employees spying on their exes, leaking data from celebs, in addition to the policing issues, etc) makes it pretty undesirable to store though.
This seems like a good place to say that I strongly recommend Yasha Levine's book Surveillance Valley (https://www.goodreads.com/book/show/34220713-surveillance-va...), where he suggests that all of this is working as intended, going all the way back to the military counter-insurgency roots of the ARPANET, first in places like Vietnam, and then back home against anti-war and leftist movements.

The contemporary theme that is relevant is the fact that current privacy projects like Tor, Signal, OTF, and BBG are fundamentally military-funded and survive on government contracts. It distracts from the needed political discourse into a technological one, where "encryption is the great equalizer" and everyone can resist Big Brother in their own way on the platforms the government has built. Encryption does exist, but it also distracts from other vectors: vulnerabilities (which led to Ulbricht getting caught), what services you would e2e-connect to, how you get the clients that connect to those services, what store can push binaries for said clients, etc.
Yasha Levine is a conspiracy theorist hack. There’s really no other way to say it. His narrative is attractive to a left leaning audience with shallow knowledge in this area, but the reality is that without publicly funded software like Tor, Signal, OTF, and my own Lantern, our world would be more fully saturated with corporate control of the internet. We need more public funding for open source software (with public security audits, mind you), not less. Without them, we’d basically be left with Wikipedia as the only popular entity on the internet outside of corporate control.
All of these projects are more properly grouped with government funding in other spheres, such as the BBC or PBS in media, than they are with the surveillance state or the NSA. Levine overlooks basic details, such as reproducible builds, that quickly collapse the house of cards that is his narrative. He tries to paint them all with the NSA brush, when, in fact, they’re simply projects that have historically received some of their funding from the government while fulfilling missions with extraordinary humanitarian benefits. Levine’s own knowledge and experience in this area is shallow. Look elsewhere.
I don't disagree with what you're saying. I'm not sure your statement is in disagreement with mine either? I don't think he's saying less OSS is better or anything dogmatic? All he's saying is that using Tor/Signal shouldn't be the end all be all of your surveillance concerns.
> would be more fully saturated with corporate control of the internet
You might disagree. His point was that the "corporate controllers of the internet" support projects like Tor because A) it gives a (somewhat ineffective) channel for people to focus on rather than political recourses and B) there's no real threat to the corporate model. What would you do in this e2e encrypted internet without corporate services?
> such as reproducible builds
Seems like a tangential point. You can have an untampered copy of a client with a vulnerability.
> funding from the government while fulfilling missions with extraordinary humanitarian benefits
I don't think this is in disagreement with anything either
> from the government while fulfilling missions with extraordinary humanitarian benefits
Ahh yes, the famed Operation Condor, Operation Gladio, Operation Iceberg, and so many other famed "humanitarian" projects.
At the end of the day, all that you mentioned comes back to a post-facto "it is good because *we* do it". I would go so far as to say that most people here on HN are well aware of the start of Google, when it was funded by US intel as a way to parse Vietnam-era datasets, or of how US intel uses Radio Free Asia to destabilize enemy countries abroad. But again, it is only good/not bad when *we* do it.
Apologies for a rather low-quality comment, but these types of people handwaving away the actual structure behind all of this really get on my nerves, especially when I have had family members tortured as a consequence of these US activities.
I’m certainly not defending all US government actions. That’s exactly the point. Levine tries to lump all of this in with surveillance. The US government funds the NSA, that is true. It also funds food stamps. And torture. The trick is to untangle it.
USAID is specifically designed and named that way so as to tangle it. Tell me, how would your average Joe understand that USAID is an intelligence-agency spinoff designed to sound "good" while doing evil all over the world, rather than what its name suggests? You know... aid?
The NSA, CIA, extraordinary rendition, and so many other things don't exist by accident. If said "government" wishes to spend such amounts of money and resources to enact such evil under the veil of security, then I don't know about you, but that to me and several other people just reads as "the US government being flat-out evil".
Do remember that there was *wide* support and acceptance back in the Kennedy days for just dissolving the CIA.
> Levine tries to lump all of this in with surveillance.
I am not particularly kind to the guy, but he's merely looking at it at a holistic, system-design level; any programmer-minded person would do the exact same thing when presented with a black-box problem
But as far as the food stamps go, wouldn't it be great if the system were set up in such a way that food stamps were not needed to begin with? And on the flip side, why would "the government" allow a societal structure in which the maintenance of "food stamps" is necessary for the organization of the nation? I see that last bit in particular, if anything, as a national security problem...
As the Clintonites would say: "It's the economy, stupid."
It seems obvious that USAID is an intelligence front (I've encountered a few instances where it was mentioned that someone worked for USAID at the time, while it was simultaneously obvious that it would make way more sense if they were Intelligence), but is there any concrete evidence for that?
*Especially* since we are talking about USAID. In the case of the NED, for example, things get slightly murkier, because then it is a matter of private rather than public record, but it still works as a tool for the management of semi-clandestine operations and operations that need plausible deniability from the CIA's end, or at least as much deniability as it can muster. Though these days they prefer to work with shell groups and other associated partners, such as the Atlas Network; Radio Free Asia also falls into that category, as does Voice of America
If you are interested in books, both Killing Hope by William Blum and Legacy of Ashes by Weiner are very, very good authoritative sources on the matter
If you prefer podcasts, Warnerd Radio has a couple of very good episodes on the National Endowment for Democracy, though they both quote excerpts from the books above
Yes, there is concrete evidence. Specifically, the Office of Public Safety mentioned by Cyanbird was an official cover given to CIA personnel to train local and national police forces in puppet countries in how to fight a 'counterinsurgency'. This included setting up national ID cards to track everyone, NSA-style signals intelligence, and extensive use of torture. One of their favorite methods was to use portable US Army telephones: they had a hand-crank generator capable of producing enough current/voltage to torture but were unlikely to cause cardiac arrest, they had an obvious non-torture use case so ordering them was not suspicious, and they had very fine wires that could be inserted up the urethra or stuck between teeth to deliver very painful electric shocks to sensitive areas.

Dan Mitrione was a USAID OPS guy who was killed in South America in the 70s (Uruguay, I believe) in retaliation for his role in abuse and torture, and who was known for abducting homeless people upon whom his trainees could practice their torture techniques. The 1980 documentary "Inside the Company" about the CIA lays this out very well. It's long but is worth a watch, and I have seen no comparable films exposing this level of CIA activity since.

Vietnam and the Phoenix Program is another classic example. John Manopoli was officially working for OPS in USAID but was in fact CIA. He first implemented the national ID card program used to generate the lists of thousands of names of people to abduct, torture, and either imprison or kill, and he was instrumental in that part of the plan as well. Almost the only references to John Manopoli are in books about torture in the Phoenix Program, in listings in USAID OPS phone books, or in a handful of official OPS papers showing he did the same type of work in several other countries.
While those programs certainly existed, this is blatantly a false equivalence: you can still have humanitarian programs while being a military hegemon. It's not one or the other.
This is in fact one distinct reason the CIA/NSA won't accept recruits who have previously served in the Peace Corps (and vice versa), amongst other reasons.
This comment is an incredibly naive attempt at a smear.
> Without them, we’d basically be left with Wikipedia as the only popular entity on the internet outside of corporate control.
Wikipedia is absolutely not "outside of corporate control". It is trivially astroturfed to advance special interests.
> All of these projects are more properly grouped with government funding in other spheres, such as the BBC or PBS in media
Both BBC and PBS routinely publish outright disinformation to advance the special interests of their corporate/government clients, including the intelligence community. For example, look at PBS Frontline's ridiculous puff piece for the violent extremist group HTS last year.
> Levine overlooks basic details, such as reproducible builds
Reproducible builds are also easily circumvented by selectively deploying backdoors and other malware, based on IP or other fingerprints.
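To make that objection concrete, here is a minimal sketch (hypothetical artifact names, not any real project's infrastructure) of what selective deployment looks like: a distribution server hands the clean, reproducibly-built artifact to everyone except a handful of targeted IPs, so public verifiers see bytes matching the published hash while the target receives an implant:

```python
import hashlib

# Hypothetical artifacts: the clean build is what independent
# rebuilders reproduce, and its hash is what gets published.
CLEAN_BUILD = b"app-v1.0 official release bytes"
BACKDOORED_BUILD = b"app-v1.0 official release bytes + implant"

PUBLISHED_HASH = hashlib.sha256(CLEAN_BUILD).hexdigest()

def serve_artifact(client_ip: str, targeted_ips: set) -> bytes:
    """Return the backdoored build only to targeted clients;
    everyone else (including auditors) gets the clean build."""
    if client_ip in targeted_ips:
        return BACKDOORED_BUILD
    return CLEAN_BUILD
```

The flip side, worth noting, is that a target who actually compares their download's hash against the published reproducible-build hash would detect the swap, so "easily circumvented" assumes victims don't verify.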
If there are good reasons to dispute Levine's investigative journalism, they're not here.
Um, ok. All of the above projects use not only reproducible builds for many platforms, but they’re all open source, and they all have public security audits. Those three pillars are about as good as it gets. Is there something you would add?
I’m not claiming PBS and the BBC are perfect entities, but they do offer an alternative source of information that runs against the grain of corporate media. You would prefer…what exactly?
First, there’s a vast difference between the State Department and the Pentagon. Lumping those two together just reflects an unsophisticated understanding of the federal government. Signal has never received any State Department or Pentagon money. Tor had a significant early contribution from a researcher at the Naval Research Lab. That’s the extent of any Pentagon funding. They have received significant State Department funding, but to call the State Department “warmongers” is just not accurate.
Please stop spreading misinformation. From the Tor Project's public IRS documents:
> While funding for Tor originally focused on basic research to better understand anonymity, privacy, and censorship-resistance, the majority of funding now falls into three categories: development funding from groups like Radio Free Asia and DARPA to design and build prototypes based on research done both inside Tor and also at other institutions; deployment funding from organizations like the US State Department and Sweden's Foreign Ministry; and unrestricted contributions from private foundations, corporations, and individual donors. Following is a breakdown of the Tor Project's funding sources for the period ended June 30, 2020:
> Funding from US government sources: US State Dept - Bureau of Democracy, Human Rights and Labor $752,154; Georgetown University - National Science Foundation $98,727; Radio Free Asia / Open Technology Fund $908,744; New York University - Institute of Museum and Library Services $101,549; Georgetown University - Defense Advanced Research Projects Agency $392,008.
> Funding from non-US government sources: Digital Impact Alliance - United Nations $25,000; Swedish International Development Cooperation Agency (SIDA) $284,697.
> Funding from corporate sources: Mozilla $157,500; Avast $50,000; Mullvad $50,000.
> Funding from private foundations: Open Source Collective $23,100; Media Democracy Fund $270,000; Zcash Foundation $51,122; Mozilla Open Source Support (MOSS) $75,000; RIPE $53,114; Craig Newmark Philanthropic Fund $50,000; Stefan Thomas Charitable Foundation $50,000; Kao Foundation $10,000; Marin Community Foundation $1,000; individual donations $890,353.
Yes they’ve received funding from DARPA. I realized I forgot that after I posted. Good catch. To my knowledge, that funding is for new anti-censorship transports to sneak traffic in and out of censored countries.
And the State Department are definitely warmongers.
SecState Kissinger orchestrated the incineration of Laos, Cambodia and Vietnam.
SecState Powell orchestrated the flattening of Iraq.
SecState Clinton orchestrated the butchering of Libya.
SecState Pompeo tried and failed to orchestrate the annihilation of Iran by assassinating top officials and drawing them into war.
And so on and so forth. These aren't even theories. The State Department is closely involved in destabilizing sovereign governments through the full spectrum of means, including war, to advance Washington's interests.
Signal isn't funded by the military, by OTF/BBG, or by any branch of the US government. People who claim otherwise are deeply confused about a program OTF ran that sponsored third-party security reviews and development projects (summer-of-code style), none of which was mediated through OTF; it was just a bucket of money.
You should be extremely skeptical about people who bring OTF/BBG up in these discussions. I have complicated feelings about Tor stemming mostly from culture and effectiveness concerns and would push back on claims that it's co-opted by the Navy or corporate interests, but at least I can see a clear (if silly) line connecting Tor to these supposed conflicts of interest.
Correct, it is not funded by "the military", but this is incorrect
> any branch of the US government
Because Signal/TextSecure received considerable amounts of seed capital from Radio Free Asia, which is a CIA spinoff, with the explicit aim of funding the development of the cryptography at the grassroots level. Not per se to have full control of it like the NSA would have done, but because having strong cryptography on such platforms (Telegram might be another) is highly effective against perceived US enemies like, well... Iran, or Syria, and it allows their assets/agents to communicate more easily while abroad without bulky extra proprietary phones or software
All of the above is mentioned at length in Surveillance Valley, btw
As I understand it the technology behind Tor is strengthened by an arms race. You want several different well-funded entities running nodes, because that makes the service better for everybody. Even if some of those entities are hostile they still help unless one entity controls a large portion of interior nodes and even then you're only giving metadata to that single entity (whichever it is) by using Tor, not anybody else - which is better than you're going to do with alternative technologies.
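The "large portion of interior nodes" risk can be made quantitative with a standard back-of-the-envelope model: in a traffic-correlation attack, an adversary controlling a fraction f of relays wins roughly when it controls both the entry (guard) and the exit of a circuit, which happens with probability about f². A toy sketch (it deliberately ignores bandwidth weighting and long-lived guards, both of which real Tor path selection uses):

```python
import random

def analytic_compromise_probability(f: float) -> float:
    """Chance both the guard and the exit of a circuit are hostile,
    if relays are chosen uniformly and a fraction f is adversarial."""
    return f * f

def simulated_compromise_rate(f: float, trials: int = 100_000,
                              seed: int = 0) -> float:
    """Monte Carlo check of the analytic estimate above."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        guard_hostile = rng.random() < f
        exit_hostile = rng.random() < f
        if guard_hostile and exit_hostile:
            hits += 1
    return hits / trials
```

With f = 0.1 (10% of relays hostile) the analytic value is 0.01, and the simulation lands close to it. The middle relay never appears in the formula, which is the point made above: a hostile interior node only ever sees metadata, not both endpoints.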
This analogy unfortunately cuts both ways: if you've got technology that undermines the majority government/power structure in a secure fashion, you'll always have the ability to come in as an intelligence agency and foment an insurgency movement.
Which also unfortunately points to them having exploits no one has discovered yet in said technology tools.
They can still maintain generalized situational control via additional superiority vectors(MASINT, HUMINT, GEOINT, OSINT, FININT etc.)
Ulbricht was caught via poor OPSEC and not via a Firefox/Tor 0day, afaik. Though there was/is speculation that a Firefox/Tor 0day was used to bring down some Tor markets and possibly to locate the Silk Road's server. Silk Road 2.0 was brought down in a few months, which could indicate such a 0day existed. Or that it was run by some former Silk Road staff members who got doxed when Silk Road 1.0 was shut down.
Ulbricht was caught because an FBI agent, who would read things slowly and twice, recognized these 4 letters: heyy.
That's how Ulbricht sometimes spelled hey, and the agent had seen that particular spelling before in his investigation, in an email from Ulbricht's student email address.
Nick Bilton's book “American Kingpin: The Epic Hunt for the Criminal Mastermind Behind the Silk Road” is a great read, highly recommended.
much more likely -- sigint tooling was applied to identify ulbricht, bulk metadata was turned over for his comms history, and it was pored over for things they could connect with sr to get warrants. imo, at least.
but getting to claim you're such a sharp investigator that you can figure it out by noticing the word heyy makes for a much better story to tell an author.
It was more complicated than just heyy, but I won't spoil the book.
It's been awhile since I've read it, but my impression was that solving the case was mostly traditional casework, and a lot of it, by many different people/agents/agencies.
That Reuters article certainly gives pause. Thanks for the link.
That's not what I think, that's what Nick Bilton thinks. The quality of his book makes me partial to his thesis, of course, but NSA conspiracy blah adds nothing.
Also, lots more went into catching him than just heyy, but that was the lucky break that got him caught. Now he shares a prison with Dr. Unabomber Kaczynski.
That could be the story but since parallel construction is routinely used to hide the existence of surveillance tools and back doors it’s not unreasonable to doubt it.
I thought I had heard it was Stack Overflow; is that looped in somehow?
It’s not fiction you’re spoiling, but a factual conversation about events that you’re not going into due to spoilers. It is an odd defence that kills the conversation when other people bring up good points.
The parallel construction argument seems way more plausible if there’s nothing else besides “heyy”. If there is more, please say what it is instead of mentioning it exists but refusing to say it.
Where is any evidence of Tor being a military surveillance project? I find it hard to believe an open source project like this has been infiltrated. Yes, there is suspicion some ECC curves are compromised, but only the ones provided by NIST. I'd really like to see actual evidence against Tor.
The seed for that line of thinking is the fact that a US Navy lab built it.[0] Having said that, I believe that's the only basis and is a far cry from making the theory convincing or even probable.
“The Navy built it” is a bit of an exaggeration. Paul Syverson did early work on it at the Naval Research Lab, and Roger Dingledine and Nick Mathewson added to the collaboration at approximately the same time, with neither having anything to do with the Navy. That’s the extent of the military connection - some relationship in the first year or so of an 18 year or so project.
There's been a suspiciously downplayed number of ephemeral hidden services raided / internationally taken down on the Tor network for it to be mere circumstance.
No one takes much notice since they're regularly hosting the worst content on the internet.
Thank you. I never knew the source of the ridiculous theory that the internet sprang from spying attempts on the Vietnamese. I am always looking for keywords to filter conspiracy weirdos. Yasha Levine: added.
Wow, that is so thin it is transparent. If this is the sort of 'proof' that we are going to find then I am glad you posted the ref here so that I could add yet another kook to the list of those whose privacy/security rantings and books I can ignore. The biggest danger to long-term privacy projects is not the risk of taking advantage of an opportune partnership with a government agency when incentives align, it is conspiracy nutjobs poisoning the well with their paranoia and delusions.
So if you have something to hide, don't use iCloud backup.
And WhatsApp will give them the target's full contact book (that was to be expected), but also everyone that has the target in their contact list. That last one is quite far-reaching.
Most people don't realize that most people have something to hide. The USA has so many laws on its books. Many of which are outright bizarre[0] and some of which normal people might normally break[1].
And that's only counting current/past laws. It wasn't that long ago that a US President was suggesting all Muslims should be forced to carry special IDs[2]. If you have a documented history of being a Muslim, it could be harder to fight a non-compliance charge.
I always liked this one I found in the Illinois statutes - it basically criminalizes every person online:
Barratry. If a person wickedly and willfully excites and stirs up actions or quarrels between the people of this State with a view to promote strife and contention, he or she is guilty of the petty offense of common barratry[.]
There is a renaissance of such laws regarding causing offense. That would basically cover anybody whose face you don't like. I wonder how much consideration goes into suggestions like this; the side effects should hit you in the face like a truck.
Did you even read the Snopes article you referenced before making what seems like a definitive claim that Trump was suggesting Muslims carry special IDs? Because Snopes's own rating is a "Mixture" of truth and falsehood, and if you read the assessment, it is grasping at straws to even reach that conclusion.
Sure, I can accept there is some nuance, but the phrasing and definitive manner of your original statement is very misleading. I'm not the biggest fan of the guy, but casually mentioning that he suggested the idea, when in actuality it was an idea posed by a reporter, is bad faith in my opinion.
> “Certain things will be done that we never thought would happen in this country in terms of information and learning about the enemy,” he added. “We’re going to have to do things that were frankly unthinkable a year ago.”
> “We’re going to have to look at a lot of things very closely,” Trump continued. “We’re going to have to look at the mosques. We’re going to have to look very, very carefully.”
That's all he said to the interviewer. The interviewer was asking the hypothetical and suggested the special identification! He wouldn't take the bait, so since he didn't answer the hypothetical they said "he wouldn't deny it" and wrote the campaign of hit-piece articles anyway. Whatever response they got, they would have written that same piece. If he had answered one way, they would have quoted him out of context. Since he responded generically, it's obviously drummed up. The fact check is hilarious. "Mixed", lol.
Your last sentence just made me freak out thinking that I've previously done such stupidity in front of a "law officer".
I never for one second thought it could be a trap; I was overly willing to cooperate and truthfully respond to a "theoretical" inquiry. Damn, it hurts in retrospect.
Reporter: "Should there be a database or system that tracks Muslims in this country?"
Trump: "There should be a lot of systems, beyond databases. I mean, we should have a lot of systems."
And then he tried to backpedal. Decided it was a watch list, not a database, etc. Basically the usual shtick of his where he tries to say everything and nothing at the same time.
> There should be a lot of systems, beyond databases. I mean, we should have a lot of systems
Beyond databases. What does that mean? That could be analog systems, that could be anything not stored in a computer.
Nothing to do with identification which would need a database. It's a generic answer to avoid a hypothetical. It's a nonanswer.
He said nothing, not everything. You are attributing the reporter's question to him. The reporter is posing the hypothetical that they created in the first place in the initial interview.
My main point was that hypotheticals are always a trap (unless among friends!), but that's a great example of an obvious one.
The usual shtick is to say nothing, because the journalistic usual shtick is to ask gotcha hypotheticals.
You're kind of quibbling over details. The below quote is already bad enough:
> "We’re going to have to look at the mosques. We’re going to have to look very, very carefully."
I already do not trust the person who has said that. Does it really matter if he proposed a full-fledged ID system? He still proposed monitoring mosques. He still proposed surveillance based on religious identity.
The correct answer to that question, "should Muslims be subject to special scrutiny" is a simple "no". I don't really get the debate about hypotheticals; this a question that does have a straightforward, right answer. And the implications here in regards to surveillance and ordinary people having stuff to hide -- those implications are all the same regardless of whether or not Trump actually proposed a literal database.
He was open to increased surveillance on Americans based on their religious identity, he didn't immediately shut the idea down.
Details are important. The media campaigns are claiming he wanted Muslim identification, a system THEY proposed in their hypothetical. When he didn't confirm they said "he wouldn't deny it" as their proof of support.
> The below quote is already bad enough. He still proposed surveillance based on religious identity.
He said nothing about citizens or monitoring them based on religious identity. He said look at mosques, that's all. Mosques are often the target of attacks.
Are you proposing that increased surveillance of mosques is to protect them? That requires a certain level of imagination given the full context of the quote:
> "Certain things will be done that we never thought would happen in this country in terms of information and learning about the enemy," he added. "We’re going to have to do things that were frankly unthinkable a year ago."
> "We’re going to have to look at a lot of things very closely," Trump continued. "We’re going to have to look at the mosques. We’re going to have to look very, very carefully."
----
And once again, it kind of doesn't matter. An increased focus on monitoring places of worship is monitoring people based on their religious identity. I don't know a single Christian who would argue to me that monitoring churches isn't the same thing as monitoring Christians.
Mosques and churches are not abstract concepts that are divorced from the people inside of them. When you monitor an institution, you are necessarily monitoring the people inside of it, and it is reasonable for them to be concerned about the government taking an interest in their religious-identity. To argue otherwise requires someone to completely divorce religious identity from the practice of religion, and that's just not a reasonable argument to make.
----
> Details are important.
Not in the context of the original statement, "ordinary people often do have something to hide, and should care about privacy." Look, whatever, you trust Trump. You shouldn't, but you do. Fine.
Do you trust Biden? Do you trust the current government not to attempt to monitor you based on your vaccine status?
You're fighting over the idea that "your guy" wouldn't surveil ordinary people, but this also kind of doesn't matter, because your guy isn't in the White House right now, and I can guarantee you that Republicans are never going to have permanent power over the government. No party wins forever. You have as much reason as anyone else to care about personal privacy, so why are you fighting over who specifically is a threat? Does it change anything about the overall privacy debate?
Like I said, he always manages to say exactly the right things so the people who support him will read between the lines, but leave just enough ambiguity so those same people can quibble constantly over whether that was what he really meant.
> hypotheticals are always trap
He could have just said "No." Or "I have no such plans at this time." if he wanted to sound like a typical politician. His circumlocution is legendary, because it allows everyone to believe what they want to believe. Politicians all have this problem, but Trump elevates it to a whole new level.
You and the person you are communicating with must both not use iCloud backup. And since Apple pushes the backup features pretty heavily, you can be reasonably sure that the person you are communicating with is using backups. I.e., you cannot use iMessage.
I got off all Apple products when they showed me their privacy stance is little more than marketing during the CSAM fiasco, but IIRC the trouble with iCloud backup is it stores the private key used to encrypt your iMessages backup. Not ideal to be sure, but wouldn't iMessage users be well protected against dragnet surveillance, or do we know that they're decrypting these messages en masse and sharing them with state authorities?
Has Apple made any public statements regarding iCloud's lack of privacy features? It takes the wind out of their privacy marketing, which is effectively hurting ad tech but not truly protecting consumers from state-level actors with data access.
Here is an excerpt. The language sounds like encryption is enabled and the chart includes iCloud features as server and in transit protected. Seems like smoke and mirrors then.
> On each of your devices, the data that you store in iCloud and that's associated with your Apple ID is protected with a key derived from information unique to that device, combined with your device passcode which only you know. No one else, not even Apple, can access end-to-end encrypted information.
E2EE for backups was in the iOS 15 beta but was removed (it did not land in the release) after they changed the timetable of the CSAM scanning feature. So we will see whether we get E2EE backups once that image scanning lands.
Yes, and you can delete old backups on iCloud - and then switch to local, automatic, fully encrypted backups to a Mac or PC running iTunes.
HN tends to get very frothy-at-the-mouth over Apple and privacy, but the reality is that iPhones can easily be set up to offer security and privacy that is best in class. They play well with self-hosted sync services like Nextcloud, and unlike the Android-based "privacy" distros you're not running an OS made by a bunch of random nameless people, you can use banking apps, etc.
The only feature I miss is being able to control background data usage like Android does.
For Signal users this means the messages do, of course, exist on your phone, which will be the first thing these agencies seek to abscond with once you're detained, as it's infinitely more crackable in their hands.
As a casual reminder: the Fifth Amendment protects your speech, not your biometrics. Do not use face or fingerprint to secure your phone. Use a strong passphrase, and if in doubt, power down the phone (Android), as this offers the greatest protection against the offline brute-force and side-channel attacks currently used to exploit running processes on the phone.
My advice if you’re not on the level where three letter agencies are actively interested in your comings and goings:
- Use a strong pass phrase
- Enable biometrics so you don’t need to type that pass phrase 100 times per day
- Learn the shortcut to have your phone disable biometrics and require the pass phrase, so you can use it when the police are coming for you, you're entering the immigration line at the airport, etc. - on iPhone this is mashing the side button 5 times
In case anyone with an Android is confused because they don't see the option: I believe that you have to explicitly enable the Lockdown option in Android's system settings before it shows up.
There are a couple of apps that will also lock the phone down instantly with a tap. I'm sorry, I forget the names; I have been using an iPhone too long now to remember them. But it's handy if you have the phone in hand and "open". You can put a shortcut on every "page" of your Android and tap it to enforce locking the phone by passcode, so on most phones it would be a swipe and a tap, probably less than 200 milliseconds if you've practiced it.
On recent iPhones, the way to disable biometrics is to hold the side button and either volume button until a prompt appears, then tap cancel. Mashing the side button 5 times does not work.
Not sure how recent you're talking but I have an iPhone 11 Pro and I just tested pressing the side button 5 times and it takes me to the power off screen and prompts me for my password the same way that side button + volume does.
Apple's docs also say that pressing the side button 5 times still works.
> If you use the Emergency SOS shortcut, you need to enter your passcode to re-enable Touch ID, even if you don't complete a call to emergency services.
Pressing it five times starts the emergency SOS countdown (and requires the passcode next time) on my iPhone XS. Maybe you have the auto-calling disabled?
It doesn't on my 2nd Gen iPhone SE (2020). That said, anything that causes the "swipe to power off" screen to appear has the same effect, so essentially holding down the button for 5 seconds does the trick.
If you _are_ at the level where TLAs are interested in you they will not give you a chance to mash that button. You will have a loaded gun pointed at your head out of nowhere and you will freeze. From experience.
In most cases you are going to want to separately passphrase your messaging stuff so it is locked up when you are not using it. That makes every thing else a lot easier. For example, there is a Signal fork that supports such operation:
I think it would stay unlocked for a time, possibly until you locked it. Possibly such an arrangement would be more practical for something offline like encrypted email.
A compromise would be to just encrypt the saved messages under a passphrase. You could use a public key so that you would only need the passphrase to read the old messages. I haven't heard of anything that actually does this.
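The design just described, a public key so writing needs no passphrase while reading old messages does, can be sketched with textbook toy RSA. This is a stdlib-only illustration with wildly insecure demo parameters; a real implementation would use a vetted crypto library and hybrid encryption:

```python
import hashlib

# Toy RSA parameters (insecure, for illustration only):
# n = 61 * 53 = 3233, e public exponent, d private exponent.
N, E, D = 3233, 17, 2753

def derive_key(passphrase: str, length: int) -> bytes:
    """Stretch a passphrase into key bytes for wrapping d."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(),
                               b"demo-salt", 100_000)[:length]

def wrap_private_exponent(d: int, passphrase: str) -> bytes:
    """Store d encrypted under the passphrase (XOR with derived key)."""
    raw = d.to_bytes(2, "big")
    key = derive_key(passphrase, len(raw))
    return bytes(a ^ b for a, b in zip(raw, key))

def unwrap_private_exponent(wrapped: bytes, passphrase: str) -> int:
    key = derive_key(passphrase, len(wrapped))
    return int.from_bytes(bytes(a ^ b for a, b in zip(wrapped, key)), "big")

def archive_message(text: str) -> list:
    """Writing to the archive needs only the public key (N, E)."""
    return [pow(b, E, N) for b in text.encode()]

def read_message(ciphertext: list, wrapped_d: bytes, passphrase: str) -> str:
    """Reading requires the passphrase to recover d first."""
    d = unwrap_private_exponent(wrapped_d, passphrase)
    return bytes(pow(c, d, N) for c in ciphertext).decode()
```

The app can append to the archive at any time without prompting for the passphrase, but a wrong passphrase yields a wrong exponent and garbage plaintext, so old messages stay locked.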
That's actually the old method for iPhone 7 and before. Now, you can activate emergency SOS by holding the power button and one of the volume buttons. Assuming you don't need to contact any emergency contacts or services, just cancel out of that and your passcode will be required to unlock.
That did the trick, thanks. But ultimately I'm behind on updates, so my phone could probably be broken into trivially with the forensic tools available to most law enforcement. I'm going to update soon.
Don't have any family or friends, either. If you refuse to talk and invoke your rights the government will just threaten to hurt those you love until you break and give up your passwords. From experience.
I liked it in Wrath of Man where one guy is acting tough as fuck until they bring his girl into the room.
Also, if you can, if you are encrypting data, use a hidden volume inside the first - that way you can give the government the outer password and they'll be happy thinking they have everything.
Not "recently". Disappearing messages have been there for at least 5 years.
Almost _all_ my Signal chats are on 1 week or 1 day disappearing settings. It helps to remind everyone to grab useful info out of the chat (for example, stick dinner plan times/dates/locations into a calendar) rather than hoping everybody on the chat remembers to delete messages intended to be ephemeral.
The "$person set disappearing messages to 5 minutes" has become shorthand for "juicy tidbit that's not to be repeated" amongst quite a few of my circles of friends. Even in face to face discussion, someone will occasionally say something like "bigiain has set disappearing messages to five minutes" as a joke/gag way of saying what used to be expressed as "Don't tell anyone, but..."
Keep in mind that any time a message is on flash storage there might be a hidden copy kept for flash technical reasons. It is hard to get to (particularly if the disk is encrypted) but might still be accessible in some cases.
I think encrypted messengers should have a "completely off the record" mode that can easily be switched on and off. Such a mode would guarantee that your messages are never stored anywhere that might become permanent. When you switch it off then everything is wiped from memory. That might be a good time to ensure any keys associated with a forward secrecy scheme are wiped as well.
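A minimal sketch of what such a mode could look like: a session that keeps messages only in mutable in-memory buffers and overwrites them on exit. (Caveat: CPython gives no real guarantee against the runtime copying data around, so a production version would need locked, non-swappable buffers and explicit erasure of any forward-secrecy keys, as noted above.)

```python
class OffTheRecordSession:
    """Holds messages only in RAM and overwrites them when closed."""

    def __init__(self):
        self._buffers = []

    def add(self, text: str) -> None:
        # bytearray is mutable, so it can be zeroed in place later
        self._buffers.append(bytearray(text.encode()))

    def messages(self) -> list:
        return [bytes(b).decode() for b in self._buffers]

    def wipe(self) -> None:
        # Zero every buffer in place, then drop the references.
        for buf in self._buffers:
            for i in range(len(buf)):
                buf[i] = 0
        self._buffers.clear()

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.wipe()
        return False
```

Used as `with OffTheRecordSession() as otr:`, everything added during the session is overwritten the moment the block exits, which is the "switch it off and it's gone" behavior described above.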
> And a screenshot, or another camera, or a rooted phone can easily defeat that.
Not if the message has already been deleted. Auto-deleting messages are so the recipient doesn't have to delete them manually, not so the recipient can't possibly keep a copy.
Exactly this. Even more: auto-deleting messages also mean the sender doesn't have to delete them manually. Most people do not understand this. I even had a discussion with an open source chat app implementer who insisted on not implementing disappearing messages because they couldn't really be enforced.
That's a different threat model, no messaging app is trying to protect the sender from the receiver. Disappearing messages are meant to protect two parties communicating with each other against a 3rd party who would eventually gain access to the device and its data.
Wickr has a "screenshot notification to sender" feature (which of course, can be worked around by taking a pic of the screen without Wickr knowing you've done it).
District courts don't make law. Magistrates working for those district courts even less so. The case this news article cites has no precedential value anywhere - not even within N.D.Cal. - and should not be relied upon.
Agreed. That decision is unlikely to be repeated by any appellate court. IMO, all the rulings on biometrics not being testimonial are constitutionally correct, even if that sucks. A lot of constitutional rulings suck.
The real solution is for a federal statute to require warrants.
The contents of your mind are protected because you must take an active part to disclose them. Of course, they can still order you to give them the password and stick you in jail on contempt of court charges if you don't.
Check out Habeas Data. It's a fascinating/horrifying book detailing much of this.
"Your honor, the state agrees to not prosecute on any information inferrable from the text of the password."
"Understood. The defendant's Fifth Amendment right to protection from self-incrimination is secured. As per the prior ruling, the defendant will remain in custody for contempt of court until such time as they divulge the necessary password to comply with the warrant."
I don't know why you're being downvoted. For a start, if it was a third party that had the passcode and refused to divulge it they can be held in jail until they release it, e.g. if your wife knows it. (There are many cases where people have been sentenced to years or decades in prison for not testifying)
If it is you not divulging your own passcode, then legally the judge can't give you contempt, but in reality they could give you contempt until you fought it through the appellate court. Contempt is a special type of thing - certainly here in Illinois you have no right to a jury trial on contempt charges. You're just fucked.
I believe judges can, in fact, hold a defendant for refusing to give up their own passwords, and that the contempt could be indefinite. This is a point of law that is not settled at the federal level yet, and at the state level it varies from jurisdiction to jurisdiction.
In one case, the appellate court at the federal level simply refused to hear the case that had been decided at the state supreme court level.
They don't actually need your passphrase to unlock your phone - they just need somebody with the passphrase to unlock it for them. And if there's any doubt about who that is, then having that passphrase counts as testimonial; but if there's not - it might not count as testimonial.
Although there are apparently a whole bunch of legal details that matter here; courts have in some cases held that defendants can be forced to decrypt a device when the mere act of being able to decrypt it is itself a foregone conclusion.
(If you want to google a few of these cases, the all writs act is a decent keyword to include in the search).
The defendant never needs to divulge the passphrase - they simply need to provide a decrypted laptop.
We really should up our game on encryption - perhaps some kind of time-based crypto rotation that inherently self-destructs, rendering the data unusable if you don't authenticate with it every so often. If you are physically unable to unlock a device, you can't be compelled to do so.
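The scheme above could be sketched roughly like this (a minimal illustration with made-up names; a real design would need tamper-resistant hardware, since software alone can't stop an attacker from imaging the key first):

```python
import time

class ExpiringKeyStore:
    """Hypothetical sketch: a key that must be re-authenticated within
    `ttl` seconds, or it is wiped and the data becomes unrecoverable."""

    def __init__(self, key: bytes, ttl: float):
        self._key = bytearray(key)
        self.ttl = ttl
        self._last_auth = time.monotonic()

    def authenticate(self):
        """Check in before the deadline to keep the key alive."""
        self._check()
        self._last_auth = time.monotonic()

    def get_key(self) -> bytes:
        self._check()
        return bytes(self._key)

    def _check(self):
        if self._key and time.monotonic() - self._last_auth > self.ttl:
            # Deadline missed: zero the key material, then drop it.
            for i in range(len(self._key)):
                self._key[i] = 0
            self._key = bytearray()
        if not self._key:
            raise RuntimeError("key expired and was destroyed")

store = ExpiringKeyStore(b"\x01" * 32, ttl=0.05)
print(len(store.get_key()))  # 32: key still available within the window
time.sleep(0.1)              # miss the check-in deadline
try:
    store.get_key()
except RuntimeError as e:
    print(e)                 # key expired and was destroyed
```

Once the key is gone, "unlock this device" is physically impossible, which is the legal property the comment is after.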
I think a fingerprint is easier to get if you’re not willing to cooperate. However, I think if they really, I mean really want your password, they will probably find a way to get it out of you. I think it also depends if it’s the local sheriff asking for your password or someone from the FBI while you’re tied up in a bunker somewhere in Nevada.
This would be difficult to prove. They would have to know for certain the evidence was on there to begin with. I don't see the prosecutor easily meeting their burden of proof on this charge.
This is how the statute is worded here in Illinois:
"A person obstructs justice when, with intent to prevent the apprehension or obstruct the prosecution or defense of any person, he or she knowingly commits any of the following acts: (1) Destroys, alters, conceals or disguises physical evidence."
Ugh. It's a vague law. I don't even know how they would prosecute that for virtual evidence held on a device that they didn't already have a view inside of.
i was under such duress that i was shaking so badly that i made typos in my 30 character password 10 times. the loss of evidence is not my fault as it is the people putting me under that duress. don't think it'll hold up though
FaceID can already prevent a device from unlocking if someone is sleeping. In theory devices could detect if they were being unlocked "under duress" by using biometrics to look at facial expressions, heartbeat, etc, and then wipe themselves. I don't know how practical in reality but perhaps it could be a feature you turn on in a sensitive environment.
How? They can physically overpower you and place the sensor against your finger, or in front of your eye and pry it open without your consent and gain access with 0 input from you. How do they similarly force you to type something that requires deliberate, repeated concrete actions on your part?
In my case they threatened to harm my wife if I didn't stop refusing. After my case is over I'll happily release the video tapes so you can see how this shit works.
You are wrong. It protects passwords as speech, as they are testimonial, per many court rulings. It does not protect biometrics based on law that basically says the police can force you to give up your fingerprints for their records, so they can sure as fuck force your finger onto a reader.
(Oh and by the way, as I mentioned in the comment you replied to, the fifth amendment DOES NOT PROTECT SPEECH. That's the FIRST amendment. The FIFTH amendment protects AGAINST SELF-INCRIMINATION.)
Arguably, yes. That's why it's important to know the shortcut on iOS to render faceid inoperable until you give it the password - mash the power button five times fast!
Telegram is encrypted OVER THE WIRE and AT REST by default with strong encryption no matter what you do. It's E2EE if you select private chat with someone.
Lots of FUD out there about Telegram not being encrypted that's just not true. There's nothing either side can do to send a message in clear text / unencrypted.
"Encrypted OVER THE WIRE and AT REST" means that telegram has easy and unfettered access to chat logs. So they can give it up to authorities. (I don't argue that they DO, just that they very much CAN).
This is proven by an extremely simple experiment: you log in on your new phone, enter password and instantly see all chats.
Another simple experiment suggests that chats are unlikely to even be encrypted at rest: Telegram has extremely fast server-side message search. You log into a web client and, half a second later, you can type a search query and uncover chats from years ago.
It kinda depends on if images and videos are encrypted separately and only indexed at first.
How much data is there in your chats? 1 megabyte is around one thick book in plaintext.
AES-CBC, as an example method, decrypts at more than 2 gigabits per second with hardware opcodes (on a 2012-era processor), per https://www.bearssl.org/speed.html
At that speed, it is impossible to tell from search latency alone whether there is encryption.
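To put numbers on that: only the ~2 Gbit/s figure comes from the BearSSL benchmark linked above; the chat-history size is a made-up heavy-user estimate.

```python
# Back-of-the-envelope: could a server decrypt an entire chat history
# on the fly and still make search feel instant?
throughput_bits_per_s = 2e9          # AES-CBC with hardware opcodes (BearSSL benchmark)
chat_history_mb = 50                 # hypothetical heavy user's full history
bits_to_decrypt = chat_history_mb * 8 * 1_000_000
seconds = bits_to_decrypt / throughput_bits_per_s
print(f"{seconds * 1000:.0f} ms")    # 200 ms -- fast search proves nothing either way
```

So a half-second search result is consistent both with plaintext storage and with decrypt-on-read.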
Encryption over the wire and at rest is a basic expectation of any web service today. They would meet that criterion just by using SSL and disk encryption on their servers. E2EE is a much stronger criterion.
> It's E2EE if you select private chat with someone.
And it's not E2EE if you fail to select private chat.
What this means is that any conversations where you do select E2EE are the ones the "authorities" will take interest in, even if only to the extent of metadata.
That's the fundamental problem with E2EE-by-exception, rather than by default. It calls attention to specific data, even if it's not cleartext, rather than obscuring everything.
Telegram only uses end to end encryption for secret chats. All other chats are only encrypted on the wire with Telegram's keys. Your comment was encrypted on the wire to HN but that's not going to do anything to keep it away from the FBI. The majority of all Telegram messages are only secured by Telegram's unwillingness to cave to outside pressure. It's in plaintext as far as they're concerned.
For somebody who isn’t super cryptography-savvy, what’s the difference between over the wire and E2EE? Does the former mean that Telegram itself can read non-private-chat messages if it so chooses?
> For somebody who isn’t super cryptography-savvy, what’s the difference between over the wire and E2EE?
E2EE: As long as it is correctly set up and no significant breakthrough happens in math, nobody except the sender and the receiver can read the messages.
> Does the former mean that telegram itself can read non-private-chat messages if it so chooses?
Correct. They say they store messages encrypted and store keys and messages in different jurisdictions, effectively preventing themselves from abusing it or being coerced into giving it away, but this cannot be proven.
If your life depends on it, use Signal, otherwise use the one you prefer and can get your friends to use (preferably not WhatsApp though, as it leaks all your connections to Facebook and uploads your data unencrypted to Google for indexing(!) if you enable backups).
Edited to remove ridiculously wrong statement, thanks kind SquishyPanda23 who pointed it out.
Pretty much. End-to-end uses the encryption keys of both _users_ to send. Over the wire has both sides use the platform's keys, so the platform decrypts, stores in plain text, and sends it encrypted again to the other side. Over the wire is basically just HTTPS.
Over the wire is when it's encrypted during transmission between the user and Telegram's servers (HTTPS or SSL/TLS, etc.). At rest is when it's encrypted in their DBs or on their hard drives. Theoretically, Telegram can still read the contents if they wished to, by setting up the appropriate code or tools in between these steps.
E2EE means that the users exchange encryption keys, and they encrypt the data at the client, so that only the other client can decrypt it. Meaning Telegram can never inspect the data if they wanted to.
Yes. Worth remembering also that even with E2EE, an ad-tech-driven company could have endpoints determine marketing segments based on the content of conversations and report those to the company to better target ad spend.
Also, as is the case with WhatsApp, they siphon off your metadata and even have the gall to make an agreement with Google to download message content unencrypted to Google when one enables backups.
are you trolling? telegram (and therefore the fbi) has full access to the content of every message, unless you use private chat, which nobody does, and which isn't even available on desktop. i use it, but it's about as private as discord. which is to say not at all
I don't know whether Telegram is E2EE by default (probably not.) When you do a call on telegram you are given a series of emoji and they are supposed to match what the person on the other side has, and that's supposed to indicate E2EE for that call.
But they would have to fake the voice: I call the other person and say "my emoji sequence is this, this and that" for the other person to verify, and vice versa.
Person A calls you. I intercept the call, so person A is calling me, and then I call you (spoofing so I look like Person A). When you pick up, I pick up, then I transmit what you're saying to Person A (and vice versa).
How do you know I'm intercepting the transmission? Does the emoji sequence verify the call, perhaps?
The emoji sequence is a hash of the secret key values generated as part of a modified/extended version of the Diffie-Hellman key exchange. The emoji sequence is generated and displayed independently on both devices before the final necessary key exchange message is transmitted over the wire, so a man-in-the-middle has no way of modifying messages in flight to ensure that both parties end up generating the same emoji sequence.
The emoji sequence represents the secret key exchange between you and the other party. If you intercept the call, you are making one key exchange with person A, and another key exchange with person B. Due to the mathematics involved, there is no way for you to force both key exchanges to yield the same result.
For a "standard" DH key exchange it would be possible to brute force the emoji sequence to be the same (since it's too short to be resistant to brute forcing), but the protocol that Telegram uses specifically defends against that by having both sides commit to their share of the key ahead of time, so they cannot try different numbers.
So person A and person B are going to see different emojis no matter what you do. To fake a phone verification while performing a man-in-the-middle attack you'd also have to fake their voices to each other. That's hard.
Both connections would show different emojis on both sides then. So you would need to somehow deep fake the voice of the one telling their emojis to the other one.
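The claim in the comments above can be demonstrated with a toy Diffie-Hellman exchange (toy parameters and a made-up emoji alphabet, nothing like Telegram's actual MTProto protocol): a man-in-the-middle necessarily ends up running two separate key exchanges, so the two victims derive different secrets and different fingerprints.

```python
import hashlib
import secrets

# Toy DH parameters -- far too weak for real use, illustration only.
P = 2**127 - 1  # a Mersenne prime
G = 5

def dh_keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def emoji_fingerprint(shared: int, length: int = 8) -> str:
    # Hash the shared secret down to a short sequence over a small
    # emoji alphabet, roughly how short authentication strings work.
    alphabet = ["🐶", "🐱", "🦊", "🐼", "🦁", "🐸", "🐵", "🦉"]
    digest = hashlib.sha256(str(shared).encode()).digest()
    return "".join(alphabet[b % len(alphabet)] for b in digest[:length])

# Honest call: Alice and Bob derive the same secret, hence the same emoji.
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()
assert emoji_fingerprint(pow(b_pub, a_priv, P)) == \
       emoji_fingerprint(pow(a_pub, b_priv, P))

# MITM: Mallory runs two separate exchanges, so Alice's secret with
# Mallory differs from Bob's secret with Mallory -- the phones will
# display different emoji sequences, which the voice check exposes.
m_priv, m_pub = dh_keypair()
alice_sees = emoji_fingerprint(pow(m_pub, a_priv, P))
bob_sees = emoji_fingerprint(pow(m_pub, b_priv, P))
print(alice_sees == bob_sees)  # False (with overwhelming probability)
```

Note this plain DH version is still brute-forceable as described above; the commit-before-reveal step Telegram adds is what closes that gap.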
Real privacy is too burdensome for most users, so they feel just fine if the service owner promises in a stern voice that their chats are really secure.
It is not necessary to provide real security, fingerprint verification, etc., if the users are already happy with the level of security they are promised.
The emoji comparison thing is mathematically solid. Assuming the clients aren't backdoored (and the Telegram client is open source, so that's not that easy), there is no way for an attacker to make both sides show the same emoji. If they want to convince two users that they have an E2EE connection while performing a man in the middle attack, they'd have to fake their voices to each other to change what emoji sequence they each read out. That's hard, and therefore this is real, meaningful privacy.
Telegram can potentially perform a MITM at any time and generate matching emoji images for both sides of the conversation, since you can't really trust the app code to be the same as what they put on GitHub. If you've built it yourself, that'd reduce the risk, but nobody does that because blind trust is much easier.
This is true, and IMHO somewhere that app stores could potentially assist in building trust for OSS apps being distributed.
What I'm envisioning is a 'build hash' that is reproducible based on the public source code with a given set of compiler settings (i.e. the same ones used for the published build). The system's app-management widget could then display this build hash in the app-check menu.
This would likely require more care in packaging, as well as some form of secure config API that allows companies to provide certain bits of configuration (i.e. remote servers to contact) without impacting the build output. This would mean that yes, people would still need to audit the code, but at least it's easy for anyone to canary out to the internet that the hashes are mismatching, same for when someone does find something on an audit.
OTOH, I'm sure Telegram's competitors in the chat space would love a reason to de-legitimize them, so it wouldn't surprise me if -someone- out there was already doing some sort of compare on published builds.
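The 'build hash' idea above boils down to a byte-for-byte comparable digest (a toy sketch with made-up artifact bytes; real reproducible builds additionally require pinned toolchains, normalized timestamps, and deterministic packaging):

```python
import hashlib

def artifact_hash(artifact: bytes) -> str:
    """Digest of a build artifact. With reproducible builds, anyone
    compiling the published source with the published settings should
    get exactly this hash."""
    return hashlib.sha256(artifact).hexdigest()

# Hypothetical store-published binary vs. a local rebuild from source.
published = artifact_hash(b"app-binary-v1.0")
local_rebuild = artifact_hash(b"app-binary-v1.0")
tampered = artifact_hash(b"app-binary-v1.0+backdoor")

print(published == local_rebuild)  # True: the rebuild matches the store copy
print(published == tampered)       # False: any modification is detectable
```

Anyone can run this comparison, which is what makes the "canary out to the internet" step cheap.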
This chart is showing what messaging providers are willing to give to law enforcement, not a reflection of the technical capabilities of the messaging provider.
I assume what they're showing for Telegram (basically no data except IP/phone data if Telegram decides it's for a legit counter-terrorism activity) is a matter of Telegram business policy.
Signal gives the limited information they do because I assume they are subject to warrants from U.S. courts. Telegram is run, to my understanding, from jurisdictions where enforcing a U.S. court order would be difficult-to-impossible, and they keep the private keys to decrypt their stored message content split between servers in relatively non-overlapping legal jurisdictions, so even a successful seizure of data in one wouldn't be enough to decrypt message content.
That's all well and good -- and I appreciate Telegram for setting things up that way -- but that means at any time Telegram could make a policy decision to cooperate with law enforcement and provide much more than what is shown on this chart. Signal, on the other hand, could choose to cooperate as much as they want but not have the technical capability to provide more information. (Barring them updating their client to intentionally build in a backdoor, etc., but I'm basing this on what the current implementation is.)
The other important thing about this chart: this is the unclassified version. Is there another classified document out there which says "we have a secret relationship with Telegram/whomever and they give us all the message content we want" but they don't advertise to the law enforcement community at large? They secretly use it to aid in parallel construction so they don't ever have to reveal that a messaging vendor is giving them message content in court? We have no idea.
tl;dr: Telegram looks great on this chart because of policy, not technology. I love Telegram, but I'm under no illusions that it's appropriate for talking about things I wouldn't want law enforcement to have access to. Luckily, I haven't found myself needing to talk to my friends about illegal activity.
>Telegram looks great on this chart because of policy, not technology.
This is what puzzles me about Apple: they absolutely have the capability to MITM iMessage pretty discreetly. Because Apple just completely hand-waves away key distribution and can silently add and remove keys at their leisure, it's largely only policy that underpins their security. They're not Telegram; they aren't structured to be in a position to ignore demands from the justice system to assist with some agent's latest fishing expedition. How are they getting away with not providing stuff that they obviously have access to? The PDF lists "Pen Register: no capability"
Telegram isn't based in Russia (anymore). The company is incorporated in Dubai since 2017 [0]. They opposed Russian warrants in the past, resulting in the blocking of the app in the territory for some time [1].
Not exactly. Non-secret chats are stored encrypted on Telegram's servers, and separately from keys. The goal seems to be to require multiple jurisdictions to issue a court order before data can be decrypted.
Telegram doesn't store your messages forever, and they are encrypted; seizing the servers won't allow you to decrypt them unless you also seize the correct servers in another country.
It is widely known and confirmed by Telegram themselves that your messages are encrypted at rest by keys they possess.
This is a similar process to what Dropbox, iCloud, Google Drive, and Facebook Messenger do. Your files with cloud services aren’t stored unencrypted on a hard drive - they’re encrypted, with the keys kept somewhere else by the cloud provider. This way somebody can’t walk out with a rack and access user data.
Encrypted at rest means the data is encrypted as stored on disk, not that they do not have access to the keys. That would be end-to-end encryption.
What Telegram claims to have done is set this up in a way that makes it very hard for a single party/state to get these keys. It's not possible to make this completely impossible (if you have a server processing user data, it will have the keys loaded at some point, and there is always some way to physically attack it), but it is possible to make it very hard (physical tamper detection on the servers, secure boot tied to machine identity credentials required to access key material, etc - it's hard, but not impossible, to make this too difficult for any nation state to bypass). We don't know how good their set-up is, but it's certainly possible to do a good job at doing what they claim to be doing.
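The split-jurisdiction idea can be modeled with XOR secret sharing (a minimal concept sketch; Telegram has not published the details of its actual scheme): every share is required to rebuild the key, and any incomplete subset is statistically indistinguishable from random bytes.

```python
import secrets

def split_key(key: bytes, n: int = 2):
    """Split `key` into n XOR shares; ALL n shares are needed to rebuild
    it -- a toy model of key material held in separate jurisdictions."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def combine(shares):
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

master_key = secrets.token_bytes(32)
shares = split_key(master_key, n=3)  # e.g. one share per jurisdiction

print(combine(shares) == master_key)      # True: all three shares recover the key
print(combine(shares[:2]) == master_key)  # False: a partial seizure yields nothing
```

A court order in one jurisdiction thus recovers only random-looking bytes; the weak point, as noted above, is the running server that must load the combined key.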
It doesn't matter at all if you consider the risks of the FBI (or FSB) accessing your chat logs. Telegram can produce your unencrypted chats for them, whether they are encrypted at rest or not.
I just don't see why they would make life harder for themselves when developing stuff, given how often Durov lies. He claimed that all Telegram developers were outside of Russia, but then it turned out they were working one floor from his old VK company office, right in Saint Petersburg.
Check the difference between Telegram and WhatsApp.
Add to this the fact that WhatsApp
- uploads messages unencrypted to Google if you or someone you chat with enable backups
- and sends all your metadata to Facebook.
Then remember how many people here have tried to tell us that Telegram is unusable and WhatsApp is the bee's knees.
Then think twice before taking security advice from such people again.
PS: as usual, if your life depends on it I recommend using Signal, and also being generally careful. For postcard messaging use whatever makes you happy (except WhatsApp ;-)
This isn't the case anymore with WhatsApp. Backups to iCloud and Google Drive are optionally fully encrypted. You have the choice of storing the decryption artifacts on Facebook's servers (which are held in a Secure Enclave) or backing up the 64-character decryption code yourself.
Telegram defaults to no encryption, does not do encrypted group chats, and has a home-rolled encryption protocol, which almost guarantees it's weak, as nearly every home-rolled encryption system is (if not also backdoored). Coupled with it being headquartered in Russia, that means it is completely untrustworthy.
The only reason Telegram comes out on top of Whatsapp in the document in question is because Telegram is a foreign company with little interest in cooperating with a US domestic police agency; the FBI has no leverage over Russian companies.
What that list doesn't show is what Telegram does when the FSB knocks. By all means, give your potentially embarassing message content to a hostile nation's intelligence service.
This is plain false as can be verified by anyone who can check Telegram GitHub repos or run the app in a debugging environment.
Telegram defaults to point-to-point encryption. Same as banks and gmail.
Fun fact: back in the days WhatsApp sent messages unencrypted (i.e. as plain text) over port 443(!).
> does not do encrypted group chats,
again, point-to-point encryption
> has a home-rolled encryption protocol which almost guarantees it's weak as nearly every home-rolled encryption system always is (if not also backdoored).
Earlier versions had serious problems. Newer versions are supposedly better.
Also there is a lot of difference between home-grown cryptography by a math wizard, made open source for everyone to inspect and various secret sauce variants.
HN has a long history of claiming it can be trivially broken, yet despite the source code being available no one has done it? Laziness or incompetence? Or maybe it isn't so simple?
I don't know but if you want to shut me up and make your claim to fame: do break Telegram cryptography. You'll do the world a service both by exposing it and by shutting up people like me.
Meanwhile, stop spreading lies. Telegram is not unencrypted. It is point-to-point encrypted by default.
I obviously was referring to e2ee; everything is point to point encrypted these days. e2ee is turned off by default and cannot be enabled for group chats.
I stand by my assertion that Telegram's proprietary secret encryption is nearly guaranteed to be weaker than industry-standard encryption. "Home grown is always weaker" is a well known position of almost the entire crypto community.
I further stand by my assertion that Telegram's encryption is nearly guaranteed to be backdoored, because there is literally zero reason for a startup to invest the massive engineering resources needed to successfully develop and maintain its own encryption algorithms, unless they were being paid to do so.
The NSA has a long history of backdooring private encryption technology through industry "partnerships."
Do you seriously think Putin would allow a domestic company to develop a communication tool that would allow Russians to communicate with each other in complete privacy?
> prove it or shut up.
Go read the HN commenting policy (specifically around civility) or shut up.
So you admit you weren't just spreading inaccuracies you heard from someone else but you knew you were posting disinformation.
> I further stand by my assertion that Telegram's encryption is nearly guaranteed to be backdoored, because there is literally zero reason for a startup to invest the massive engineering resources needed to successfully develop and maintain its own encryption algorithms, unless they were being paid to do so.
This is a good argument.
> Do you seriously think Putin would allow a domestic company to develop a communication tool that would allow Russians to communicate with each other in complete privacy?
Telegram is not a Russian company?
>> prove it or shut up.
> Go read the HN commenting policy (specifically around civility) or shut up.
Sorry. I was too harsh. I actually regret it.
Compared to willfully spreading disinformation, though, it seems pretty minor?
-----
A bit more: I know local police used to use Telegram. That worries me.
It is actually even more complicated:
If Putin reads my most personal messages I don't care.
If NSA or even worse, local police actually took their time to read my messages I'd be mad or worried.
However, if the FSB asked for help, they would need a very good reason, and I'd try to consult with local law enforcement first.
If the local police asked for help, however, I'd go out of my way to help them.
That is a lot of speculation. If you read the encryption protocol, the actual methods being used for encryption are well known. The client is open source and supports reproducible builds. If there is a backdoor, it is in front of our eyes.
> What that list doesn't show is what Telegram does when the FSB knocks. By all means, give your potentially embarassing message content to a hostile nation's intelligence service.
Telegram has had a lot of trouble operating in Russia. It was blocked for two years. [1]
If they were so cooperative, why pass up the opportunity to watch their own people? Or did they become cooperative after the unblock? It seems they help on some level [2], but whether this threatens users in other countries is hard to say.
Apple did stop updates for Telegram. Google and Apple have a weak history of complying with Russian requests; maybe they comply with other countries more, but not Russia.
Do you want this to stop? Raise awareness, add this to your mail sig:
> This electronic communication has been processed by the United
> States National Security Agency.
If it makes people uncomfortable, GOOD. Pretending that your mail - and their mail - is not being accessed is not the way to resolve this uncomfortable situation. Ending it is the way. And that demands awareness.
I wonder how this affects nonprofits like Matrix/Element and Signal. What can they do with them? Gangstalk their developers? Coerce big tech to ban them from their appstores?
The design of these decentralized/federated platforms is specifically so that their operators can't easily be coerced into disclosing incriminating information. In some sense, it's similar to how BitTorrent implicates its users.
The issue is a bit more complex. I was thinking more on the lines of "will I get bothered for making crypto available for the masses that nobody can crack?"
Well, Signal does not have the data; they comply with such orders with the tiny amount of metadata they have (like a timestamp of when your account was created, and that's about it).
Telegram, as I understand it, can access your messages when stored in their Cloud[1]. They just make a choice to not provide the content of those to anyone.
I've had surprisingly good luck with strong-arming people into switching. The important part is having their trust, if they don't believe you they won't listen. The next part is to make simple, verifiable, and non-technical arguments for switching. Believe it or not, almost everybody is willing to take small steps if they're free.
Instead of rambling on and on about "end to end encryption" or "double-ratchet cryptographic algorithms" or other junk only nerds care about, approach it like this:
* There are no ads, and none of the messages you send can be used for advertising
* It's not owned by Facebook, Google, Microsoft, or any of the other mega-corporations, and you don't need an account on one of their sites to use it
* It will still work great if you travel, change providers, etc
* It's much safer to use on public Wi-Fi than other services or SMS
Honestly, don't even touch on law enforcement access as in the OP. That can strike a nerve for some people. The best appeals are the simple ones.
Also, a big one that works for me (especially for iPhone users, who are the hardest to convert): "You can send full-quality images and videos to Android users." The fact that Apple shoots themselves in the foot is an advantage for Signal.
That's not Apple's flaw, it's a flaw with SMS. It can only handle file sizes up to a certain limit, and during periods of congestion carriers lower that limit.
The best advice I have to give to get people to switch is showing that you have cross platform capabilities. Essentially everyone can have the features of iMessage/WA: full resolution images and videos, responding to messages with emojis (WA doesn't have), stickers (unfortunately you have to grab from signalstickers.com instead of in-app), voice and video calling, etc. If Apple didn't have such a closed ecosystem then I think it would be harder to get people to switch. In this respect, Signal is more feature rich than anything else (except Telegram, but Telegram doesn't have the same security and isn't trustless).
I think the common mistake is trying to convince people with the security. Use that as a bonus, not the main feature. You're talking geek to people that don't speak geek (convince geeks with these arguments, not mom and dad). I also suggest strong arming people and using momentum (if 4 people in a group of 5 have Signal, switch the group to Signal. Or respond to WA messages on Signal).
I switched to Signal and got a few people to switch too; then they started their shitcoin (MOB). IMO Signal Messenger is just a way for that company to reach their shitcoin goals. Uninstalled, and never recommending it again.
I remember many people being pissed off when these features were announced some months ago.
As far as I can tell, nothing really happened afterwards. I use Signal on a daily basis and haven't noticed any coin-related functionalities. Either they were canceled, haven't been released yet or they're just buried somewhere deep and not advertised.
MOB is in beta and I think getting moved (if not already) to main soon. But it is non-intrusive and you won't notice it unless you look for it. People are just complaining about a feature that you have to look for. I'm not a fan of MOB and how the situation was handled, but I also think the reactions people are having are a bit over the top.
It's still a pain to buy MOB in the US so it's not that usable in the states. It would have been interesting to me if they just used Zcash instead of rolling their own, but I'm not sure what's supposed to be special about MOB vs. Zcash.
I'd love Zcash (forced private transactions). But honestly I'd also like if we could use different currencies. My dream was that you could send cash and they would just use MOB as the intermediate transaction (so your bank would just see a transaction to/from Signal and not who you were sending/receiving to/from). But that also has technical challenges and legal issues so I understand why not. I think a multi-currency wallet is the next best option imo.
Yeah, my long term hope for this stuff is that Urbit succeeds and then a lot of the UX here gets fixed by that and all of these apps become redundant and unnecessary. I'm definitely in the minority there but I think there's a future path where that's possible and works well.
I have had some success. It helps that many of the people I regularly contact were willing to migrate, even after some time. Most already used WhatsApp, so the friction to installing a new app was less than someone not accustomed to using a dedicated app for messaging.
But most of my American friends that don't have international contacts still just use SMS because they are not really accustomed to an app such as WhatsApp and so on.
The way these became bullet points on the slide is roughly:
An active investigation leads an agent to a suspect known to have used one of these applications
An administrative subpoena is issued to the company asking for what information is available
The company is then ordered by a federal judge to provide information related to a particular account or accounts
The company complies.
This is why it is important to understand how your messaging service handles data and how you can compromise your own safekeeping of all or part of that data.
Well, who cares, when all they need is something like Pegasus to obtain full access to your phone simply by sending you a WhatsApp message (without you even having to open the message).
Knowing how well-guarded iOS is against app developers, I wonder what kind of zero-day would suddenly turn a message received in WhatsApp into full system access. I think NSO found a WhatsApp backdoor, not a zero-day bug.
Can't the FBI get chat logs from WeChat? https://www.youtube.com/watch?v=N5V7G9IBomQ In the short documentary that the FBI made about catching Kevin Mallory, they mention catching him sending classified material via WeChat.
I use LINE a fair bit; I have a number of Japanese friends, as well as friends who have traveled to Japan. I had no idea they had implemented much better encryption [1]. I'm convincing all my contacts to turn the option on now.
The funny thing is that sometimes when I search for Arabic words about Islam, I get results for some old, and usually the most extremist, books in the CIA library (direct links to PDFs), which makes me wonder why.
Isn't this protection simply imaginary, when in practice all the FBI has to do to up the ante is request military-grade interception from a willing foreign counterpart?
The point of promoting and using privacy respecting software is not necessarily to make it impossible for law enforcement to get what they want. It's to make it somewhat expensive and require targeted probes.
You simply want it to be cost-prohibitive to engage in mass surveillance of everyone, because that is an immensely powerful tool of totalitarian oppression that gets really bad if we happen to elect the wrong person even once.
I agree with you personally, but I'd flag that this economic argument makes for extremely poor policy. It's quite unclear that the marginal cost is non-zero, or even flat per person. One might reasonably conclude we are already each inside a high-resolution springing trap, waiting for the moment we find ourselves athwart the powers that be. Imagine a physical-space equivalent, where the local police could simply call in foreign air strikes on domestic citizens, with only economics to prevent it. We must have transparent and firm laws, reformed at a fundamental level.
Didn't Russia (the FSB) try to block Telegram in the past, unsuccessfully? I feel like they are fairly safe and trustworthy. Of course, I like Signal best, but Telegram has so many nice features.
If I remember correctly, standard SMS has no security on it at all and is in the clear during transit. I may be wrong, and I'm never afraid of being corrected.
My family has really taken to it. Granted, to them it's mostly just a "message the family" app, and they are not at all technically fluent, but they seem to have picked it up just fine.
I really think this is not discussed when Hacker News brings up secure messaging. The user experience is so much more important than the underlying tech. My family doesn't care about end-to-end encryption. They care about video calling at the press of a button, and easy features that are just there and work, the way Zoom and the many other software products they have to use work.
Thank you Signal team for focusing so hard on the user experience.
Current U.S. "due process" includes national security letters and other secret legal requests and secret courts to approve those requests. So there are still some checks and balances but it's less clear that they are working well enough or as intended.
Just look at the transparency reports of major Internet companies; they can report numbers of (certain types of) requests and that's about it. Mass surveillance under seal is not a great trend.
When political parties start advocating for jailing political opponents and treating the Supreme Court as a political office to be filled by nomination, I find it harder to trust the current due process.
Ok, let's turn it around and say they would keep me safe from you. Why would they? What's their motivation to keep ME safe from YOU? Are you even a threat? Am I a threat? And would a real threat even be caught by this system?
This discussion is not very interesting from a security perspective. I tuned out at “cloud”.
If it’s not in your physical possession, it’s not your computer. If it’s not your computer, then whoever administers the computer, or whoever [points a gun at/gives enough money to] the administrator of that system can access whatever you put on that system.
If a “cloud” or “service” is involved, then you can trivially use them to move or store data that you encrypted locally on your computer with your key that was generated and stored locally and never left your system. But subject to the limits above, the administrators of the other computers will still be able to see metadata like where the data came from and is going to. And they might be able to see your data too if you ever (even once, ask Ross Ulbricht) failed to follow the basic encryption guidelines above.
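The encrypt-locally-before-uploading workflow described above can be sketched in a few lines. This is a toy illustration of the principle only (it uses a one-time pad, where the key must be random, as long as the message, and never reused); a real tool would use an audited AEAD cipher such as AES-GCM from a vetted library. The point is that the key is generated and kept on your machine, and only ciphertext ever reaches the cloud provider:

```python
import secrets


def encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    # Toy one-time pad: key is random, same length as the message,
    # and must never be reused. Use a vetted AEAD library in practice.
    key = secrets.token_bytes(len(plaintext))  # generated locally, never uploaded
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext


def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))


# Only `ciphertext` is handed to the cloud provider. The provider
# still sees metadata (size, timing, endpoints), just not the content.
key, ciphertext = encrypt(b"meet at noon")
assert decrypt(key, ciphertext) == b"meet at noon"
```

Note that this protects only the payload: as the comment says, the administrators of the remote machines still see where the data came from and where it's going.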
You can make metadata access harder via VPNs and Tor, but you CANNOT make it impossible; in the worst case, maybe your adversary controls all the Tor nodes and has compromised the software.
Which leads me to my last point, if you did not write (or at least read) the code that you’re using to do all of the above, then you’re at the mercy of whoever wrote it.
And, if you try to follow perfect operational security, you will have a stressful and unpleasant life, as it’s really really hard.
> if you did not write (or at least read) the code that you’re using to do all of the above, then you’re at the mercy of whoever wrote it.
It's worse than that. Even if you read the code, you have to trust that the code you read is the code a service is actually using. Even if you deploy the code yourself, you have to trust that the infrastructure you're running on does not have some type of backdoor. Even if you run your own infrastructure, hardware can still have backdoors. Of course, the likelihood of any of these things actually becoming a problem decreases significantly as you read through the paragraph.
Indeed. I mentioned those specific things because it has been done. However, I think the likelihood of the average user being affected by things near the end of the list is generally quite small. If we aren't willing to accept this, at some point, we can't use technology for anything important.
If I'm reading this page correctly, AMD is working on something that would let you run trusted code that not even someone with physical access to the hardware could read (without breaking this system).
> With the confidential execution environments provided by Confidential VM and AMD SEV, Google Cloud keeps customers' sensitive code and other data encrypted in memory during processing. Google does not have access to the encryption keys. In addition, Confidential VM can help alleviate concerns about risk related to either dependency on Google infrastructure or Google insiders' access to customer data in the clear.
Then you only have to trust that AMD did not accidentally or intentionally introduce a bug in the system. Remember Spectre? Remember all the security bugs in the Intel management code?
You also have to trust that AMD generated and have always managed the encryption keys for that system properly and in accordance with their documentation.
And are you even sure that you’re actually running on an AMD system? If the system is in the cloud, then it’s hard to be sure what is executing your code.
And are you sure that your code didn’t accidentally break the security guarantees of the underlying system?
I have worked on all these problems in my day job, working on HSMs. At the end of the day there are still some leaps of faith.
You'd also need to consider AMD's management engine, the Platform Security Processor. If we're really slinging conspiracy theories, AMD processors are likely just as backdoored as Intel ones. I don't mean to be grim, but I think it's safe to assume that the US government has direct memory access to the vast majority of computer processors you can buy these days.
I probably shouldn't have removed my tinfoil lining yet but yes, you're correct. Any information the US government has access to through these channels is also probably accessible by our surveillance/intelligence allies. It raises a lot of questions about how deep the rabbit hole goes, but I won't elucidate them here since I've been threatened with bans for doing so. I guess it's a do-your-own research situation, but always carry a healthy degree of skepticism when you read about anything government-adjacent.