> "How do they accomplish their goals with project BULLRUN? One way is that United States National Security Agency (NSA) participates in Internet Engineering Task Force (IETF) community protocol standardization meetings with the explicit goal of sabotaging protocol security to enhance NSA surveillance capabilities." "Discussions with insiders confirmed what is claimed in as of yet unpublished classified documents from the Snowden archive and other sources." (page 6-7, note 8)
There have long been stories about meddling in other standards orgs (both to strengthen and to weaken them), but I don't recall hearing rumors of sabotage of IETF standards.
These rumors, about IETF in particular, predate the Snowden disclosures.
Almost immediately after that happened, a well-known cypherpunk person accused the IETF IPSEC standardization process of subversion, pointing out that Phil Rogaway (an extraordinarily well-respected and influential academic cryptographer) had tried in vain to get the IETF not to standardize a chained-IV CBC transport (a bug class now best known, in TLS, as BEAST), while a chorus of IETF gadflies dunked on him and questioned his credentials. The gadflies ultimately prevailed.
The moral of this story, and what makes these "NSA at IETF" allegations so insidious, is that the IETF is perfectly capable of subverting its own cryptography without any help from a meddling intelligence agency. This is a common failure of all standards organizations (W3C didn't need any help coming up with XML DSIG, which is probably the worst cryptosystem ever devised), but it's somewhat amplified in open settings like IETF.
> The moral of this story, and what makes these "NSA at IETF" allegations so insidious, is that the IETF is perfectly capable of subverting its own cryptography without any help from a meddling intelligence agency
The three-letter agencies usually recruit people from academia and sensitive organizations in order to pursue their agenda.
This would be a snappier comeback if it wasn't the academic cryptographer who was right, and the random people in non-sensitive organizations who shot the right answer down.
I wonder what other institutions are infested with swarms of gadflies that'll all swear up and down in unison that something is definitely good or bad, will attack outsiders with narratives that disagree with their own, and weaken the credibility of the entire institution in the process.
Surely this can't be a phenomenon unique to technology, can it?
It's just human nature. I refuse to participate in standards work, partly because it's much more pleasant throwing rocks from a safe distance, but also because I recognize in myself the same ordinary frailties that had friends of the original IPSEC standards authors writing mailing list posts about Rogaway being a "supposed" cryptographer. There but for the grace of not joining standards lists go I.
I think --- I am not kidding here, I believe this sincerely --- the correct conclusion is community-driven cryptographic standards organizations are a force for evil. Curve25519 is a good standard. So is Noise, so is WireGuard. None of those were developed in a standards organization. It's hard to think of a good cryptosystem that was. TLS? It took decades --- bad decades --- to get to 1.3. Of that outcome, I believe it was Watson Ladd who said that if you turned it in as the "secure transport" homework in your undergraduate cryptography class, you'd get a B.
It goes further back than crypto: back in the day, there was the design-by-committee OSI versus the "rough consensus and running code" IETF. The result is that while we still teach the OSI 7-layer model in universities, in practice we use TCP/IP, often with HTTP and TLS on top.
You're basically describing politics, where specific opinions are central to a group identity. So long as there is an innate human desire to belong, there will be plenty of people who are happy to live with any amount of cognitive dissonance.
Joining a gadfly swarm just gives them an opportunity to prove their worth to the group.
DNSSEC is a great example of just how poorly committee designed crypto is. The government doesn't need to do anything to standards, they can just let them play out as-is.
We ended up with a huge mess that solves no actual problems, introduced problems that never existed before, and is so confusing in what it does that people still blindly defend it.
Why do you assume the government didn't do anything? If anyone on those committees was paid or influenced by the NSA/CIA, they would not have disclosed it.
You're right, of course.
But standards bodies also sabotage their own work when there's no security relevance (USB's dozens of options with impenetrable names like "USB 3.2 Gen 2×2")
And the US DoD reportedly loves Trusted Platform Modules, so presumably if the US intervened at all it was to improve the spec - and yet it's got more holes than swiss cheese.
It’s true that you can always imagine a perfectly-concealed conspiracy, but we see the same dynamics unfold in many places where there’s no security impact, so the parsimonious explanation is that there is no conspiracy, only normal human social dynamics.
I'm curious as to how successful they were at subverting the IETF process. It wouldn't be impossible, but since much of the process is in the open it could be difficult, especially if they did it under their own name.
I suspect most of it was done under different corporate identities, and probably just managed to slow adoption of systematic security architectures. Of course, once the Snowden papers came out, all that effort was rendered moot, as the IETF reacted pretty hard.
Ya ever heard of the OAuth2 protocol? I spent almost half a decade working on identity stuff, and spending a lot of time in OAuth land. OAuth is an overly complicated mess that has many many ways to go wrong and very few ways to go right.
If you told me the NSA/CIA had purposefully sabotaged the development of the OAuth2 protocol to make it so complex that no one can implement it securely it'd be the best explanation I've heard yet about why it is the monstrosity it is.
> Ya ever heard of the OAuth2 protocol?
Have you ever seen SAML? Now there is a protocol that seems borderline sabotaged. CSRF tokens? An optional part of the spec. Which part of the response is signed? Up to you, with different implementations making different choices; better verify the sig covers the relevant part of the doc. Can you change the signed part in a way that alters the XML parse tree without invalidating the signature? Of course you can!
Oauth2 is downright sane in comparison.
[To be clear, SAML is not an IETF spec; it just solves a similar problem to OAuth2]
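The parse-tree attack described above (XML signature wrapping) can be sketched without any real XML-DSig machinery. The element names and IDs below are invented for illustration; the point is that the node the signature check "proves" and the node the application actually consumes can be different elements:

```python
import xml.etree.ElementTree as ET

# Hypothetical SAML-like response. Real signatures are omitted; assume
# the signature covers only the Assertion with ID "legit".
signed_response = """
<Response>
  <Assertion ID="legit"><Subject>alice</Subject></Assertion>
</Response>
"""

# Attacker relocates the signed assertion into a wrapper and prepends
# their own. If the verifier checks "the assertion with ID 'legit'" but
# the consumer reads "the first Assertion in document order", they
# disagree about which subject was authenticated.
wrapped_response = """
<Response>
  <Assertion ID="evil"><Subject>attacker</Subject></Assertion>
  <Wrapper>
    <Assertion ID="legit"><Subject>alice</Subject></Assertion>
  </Wrapper>
</Response>
"""

def verified_subject(doc):
    # What the signature check "proves": the legit assertion is intact.
    root = ET.fromstring(doc)
    node = root.find(".//Assertion[@ID='legit']")
    return node.findtext("Subject")

def consumed_subject(doc):
    # What a naive consumer actually acts on: the first assertion it sees.
    root = ET.fromstring(doc)
    return root.find(".//Assertion").findtext("Subject")

print(verified_subject(wrapped_response))  # alice (signature still "valid")
print(consumed_subject(wrapped_response))  # attacker (what the app trusts)
```

Real libraries defend against this by resolving the verified node by reference and only consuming that exact node, which is exactly the kind of detail the spec leaves easy to get wrong.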
Honestly, as someone who has implemented both SAML and OAuth2 (+ OIDC) providers, I found SAML much easier to understand. Yes, there are dangerous exploits around XML. Yes, the spec is HUGE. But practically speaking, people only implement a few parts of it, and those parts were, IMHO, easier to understand and reason about.
This is definitely an unpopular opinion however.
What's insidious about SAML is that it really is mostly straightforward to understand, but it's built on a foundation of sand, bone dust, and ash; it works --- mostly, modulo footguns like audience restrictions --- if you assume XML signature validation is reliable. But XML signature validation is deeply cursed, and is so complicated that most fielded SAML implementations are wrapping libxmlsec, a gnarly C codebase nobody reads.
Redirecting the user, when they tap sign-in on untrustednewsite.com, to a new window on bigsitewithallyourdata.com with the domain hidden, and saying “Yeah, give us your login credentials” always felt like the craziest thing to me.
So ripe for man-in-the-middle attacks. Even if you just did a straight modal and said “put your Google credentials into these fields”, we’re training people that that’s totally fine.
OAuth2 is about the simplest thing you could come up with to solve the delegated authentication problem. Its complexity stems mostly from the environment it operates in: it's an authentication protocol that must run on top of unmodified, oblivious browsers.
There's a lot of random additional complexity floating around it, but that complexity tracks changes in how applications are deployed, to mobile applications and standalone front-end browser applications that can't hold "app secrets".
The whole system is very convoluted, I'm not saying you're wrong to observe that, but I'd argue that's because apps have a convoluted history. The initial idea of the OAuth2 code flow is pretty simple.
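The core of the code flow really is small. A minimal sketch of building the initial authorization request (the endpoint URLs and client ID are made up; PKCE and `state` are included because modern profiles expect them for public clients that can't hold an app secret):

```python
import base64
import hashlib
import secrets
from urllib.parse import urlencode

# Hypothetical provider endpoints -- real values come from provider docs.
AUTHORIZE_URL = "https://auth.example.com/authorize"
TOKEN_URL = "https://auth.example.com/token"

def build_authorization_request(client_id, redirect_uri, scope):
    # state: CSRF protection for the redirect back to the app.
    # verifier/challenge: PKCE (RFC 7636), which lets public clients
    # (mobile apps, SPAs) prove possession of the original request
    # without a client secret.
    state = secrets.token_urlsafe(16)
    verifier = secrets.token_urlsafe(32)
    challenge = base64.urlsafe_b64encode(
        hashlib.sha256(verifier.encode()).digest()
    ).rstrip(b"=").decode()
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,
        "code_challenge": challenge,
        "code_challenge_method": "S256",
    }
    return f"{AUTHORIZE_URL}?{urlencode(params)}", state, verifier

url, state, verifier = build_authorization_request(
    "my-client", "https://app.example.com/callback", "openid profile")
# The user agent is redirected to `url`; the provider redirects back with
# ?code=...&state=..., and the app then POSTs code + verifier to TOKEN_URL.
```

Everything beyond this (token refresh, implicit flow, device flow, the various extension grants) is where the "many ways to go wrong" accumulate.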
The problem is, as anyone with sufficient experience knows, it is perfectly possible and common for devs to design security disasters without any involvement by the spooks. I suspect the NSA counts on this and uses any covert influence they might have to slow corrections (e.g., profiles for OAuth2 that actually are reasonable, patches to common services, etc.)
Sabotaging a design is remarkably easy. We have several individuals who do it almost effortlessly; it's almost a talent for some. I suspect that doing it maliciously, while hiding behind some odd corner scenario or some compatibility requirement, can't be that hard and will be almost impossible to prove or detect.
On the other hand, why would the NSA even bother? With these talented individuals providing them a never ending supply of security issues? And they don't even have to pay for them besides having their normal 0-day discovery team look for them. I think supporting this is their reaction when a 0-day they are relying on gets patched out.
I can imagine special cases where they would exert resources and effort, such as the IETF or some other economically efficient chokepoint (Rust?), but not in general.
Just a concrete example of that time we know the NSA actually did their job properly: https://en.wikipedia.org/wiki/Data_Encryption_Standard#NSA's...
Fantastic and damning read. I had no idea they were so active and nefarious even in the 70s
Are you sure you read it carefully? NSA's role in DES was to make it resistant to linear and differential cryptanalysis.
True, but they also forced the reduction of the key size from 64 bits to 56 bits, making it much more vulnerable to brute force.
So they have hardened DES towards attacks from those with much less money than NSA, while making it weaker against rich organizations like NSA.
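The arithmetic of that trade-off is worth spelling out. Dropping 8 key bits shrinks the brute-force space by a factor of 2^8 = 256, which matters mostly to an attacker rich enough to search it at all (the keys-per-second rate below is a hypothetical round number, not a claim about any real machine):

```python
# Back-of-envelope arithmetic for the 64 -> 56 bit reduction.
keys_64 = 2 ** 64
keys_56 = 2 ** 56
print(keys_64 // keys_56)  # 256: the factor by which the keyspace shrank

# At a hypothetical 10^9 keys/second, expected time to search half the space:
rate = 1e9
seconds_per_year = 3600 * 24 * 365
years_56 = (keys_56 / 2) / rate / seconds_per_year
years_64 = (keys_64 / 2) / rate / seconds_per_year
print(round(years_56, 1))  # ~1.1 years at 56 bits
print(round(years_64))     # ~292 years at 64 bits
```

A well-funded agency can parallelize that search across thousands of chips (as the EFF's Deep Crack later demonstrated for 56-bit DES); a casual attacker cannot, which is exactly the asymmetry the parent comment describes.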
I'm too young to have firsthand experience, but I've seen the speculation that IPSEC was an example of this kind of strategy. It's certainly more-- ahem-- flexible than it probably ought to be. I know there have been exploits against ISAKMP implementations. I'd assume the baroque nature of the protocol drove some of that vulnerability.
Not IETF, but NIST, which I suspect is worse. Dual_EC_DRBG was withdrawn when it was discovered to be an attempt by the NSA to sabotage ECC specifications.
NIST EC DSA curves are the only ones used by CAs, are manipulatable, and have no explanation for their origin. Pretty much the entire HTTPS web is likely an open book to the NSA.
I don't know what you mean by "manipulatable". I don't know what you mean by "DSA". I assume what you're doing is casting aspersions on the NIST P-curves. They're the most widely used curves on the Internet (hopefully not for too long).
I don't think all that many serious people believe there's anything malign about them. It's easy to dunk on them, because they're based on "random seeds" generated inside NIST or NSA. As a "nothing up our sleeves" measure, NSA generated these seeds and then passed them through SHA1, thus ostensibly destroying any structure in the seed before using it to generate coefficients for the curve.
The ostensible "backdoor attack" here is that NSA could have used its massive computation resources (bear in mind this happened in the 1990s) to conduct a search for seeds that hashed to some kind of insecure curve. The problem with this logic is that unless the secret curve weakness is very common, the attack is implausible. It's not that academic researchers automatically would have found it, but rather that NSA wouldn't be able to count on them not finding it.
Search for [koblitz menezes enigma curve] for a paper that goes into the history of this, and makes that argument (I just shoplifted it from the paper). If you don't know who Neal Koblitz and Alfred Menezes are, we're not speaking the same language anyways.
The real subtext to this "P-curves are corrupted" claim is that there are curves everyone drastically prefers, most especially Curve25519 (the second-most popular curve on the Internet). Modern curves have nice properties, like (mostly) not requiring point validation for security (any 32-byte string is a secure key, which is decidedly not the case for ordinary short Weierstrass curves), and being easy to implement without timing attacks.
The author of Curve25519 doesn't trust the NIST curves. That can mean something to you, but it's worth pointing out that author doesn't trust any other curves, either. Cryptographers have proposed "nothing up my sleeves" curves, whose coefficients are drawn from mathematical constants (like pi and e). Bernstein famously co-authored a paper that attempted to demonstrate that you could generate an "untrustworthy" curve by searching through permutations of NUMS parameters. It was fun stunt cryptography, but if you're looking for parsimony and peer consensus on these issues, you're probably better off with Menezes.
Incidentally, you said "ECDSA curve". ECDSA is an algorithm, not a curve. But nobody likes ECDSA, which was also designed by the NSA. A very similar situation plays out there --- Curve25519's author also invented Ed25519, a Schnorr-like signing scheme that resolves a variety of problems with ECDSA. Few people claim ECDSA is enemy action, though; we all just sort of understand that everyone had a lot to learn back in 1997.
I don't think that quite emphasises enough why the P curves are rightly seen as suspicious.
The NSA knew all about the importance of nothing-up-my-sleeve numbers for eliminating suspicions around their involvement. That's why the curve constants are the output of a hash function.
And yet for this scheme to correctly eliminate suspicion, it's vital that the inputs to the hash function are also above suspicion. So a good choice of input to the hash function would be something like 1, or the digits of pi, or some other natural number. Then there can be no possibility of game playing even if the hash function is broken (which as we now know, SHA-1 is).
The NSA didn't do this. Instead they picked huge numbers that have no obvious origin and then refused to explain where they came from. They created something that superficially looks like it should place the scheme above suspicion, but when you look at the details you discover it doesn't actually do so.
This is catastrophic! The absolute best case explanation is rank incompetence, but that's quite simply implausible as these schemes are otherwise designed in a highly competent manner. The NSA just doesn't develop cryptographic schemes with obvious howling errors in them. Except in this case, where they supposedly did.
None of the attempted justifications for why this is OK hold water, in my view. They all rest on a series of very dubious assumptions:
1. If there was a way to build exploitable curves, academics would know about it.
2. If an outside researcher did discover such a technique, they'd actually tell everyone about it and not, say, be bribed or coerced or convinced to stay silent by the NSA.
3. The NSA didn't know SHA-1 was broken despite being the designers of it, and therefore would have had to do a brute force search for an exploitable curve, which they couldn't have done.
What do we know today? Well, we know that SHA-1 has critical weaknesses which took decades to discover, we know that the NSA had an active programme of sabotaging cryptographic standards via kleptography, and we know that they are more than capable of corrupting large numbers of people across many institutions, including professional cryptographers working at places like RSA. So none of these assumptions look safe.
I think the idea that this can't be an NSA attack despite things looking exactly the way it would if it was boils down to a desire to believe that academia can't be far behind what the NSA can do. But that's a very subjective sociological belief. Having spent time at academic cryptography conferences and reading their papers, I don't find it hard to believe that the NSA could know things about ECC that aren't in the public domain. Academic incentives just aren't there to research the things the NSA cares about, especially in recent years.
Another take... How long did OpenSSL have the Heartbleed vulnerability? And that was EASY to understand: it was completely open and readable code, there are a billion-plus more programmers than cryptographers, and all those academics didn't catch it either.
I'm out of my depth for the math portion of the discussion, but I can't say that "other people would know" is a reason I can get behind.
IDK, it does look bad.
Forged certificates would immediately show up in Certificate Transparency logs.
He isn’t saying the CA is bad. He is saying the curves selected are arbitrary and they stuck to specific ones with no reason. That the NSA has a backdoor to at least some TLS EC algorithms.
Now, IMO, I thought the whole point was that the curve itself didn’t really matter, as you are just picking X/Y points on it and doing some work from there. But if there is a flaw, and it requires specific curves to work, well, there you go.
The NIST process (especially then) isn't fully open, which makes it easier to subvert with an inside agent.
There is no evidence that it was backdoored.
Circumstantial evidence is still evidence. I'll acknowledge that this is extremely tenuous, but: the NSA has gimped algos before, wants to do it again as frequently as possible, and has the capacity to do so.
Unfortunately, at some point we (non-crypto-experts) have to trust something.
The article references Russia's SORM system, which provides not only the FSB but also the police and tax agencies with essentially full access to everything on the internet, including credit card transactions. This stuff started in 1995 and was penetrated by the NSA.
> Under SORM‑2, Russian Internet service providers (ISPs) must install a special device on their servers to allow the FSB to track all credit card transactions, email messages and web use. The device must be installed at the ISP's expense.
Originally there was a warrant system, but it seemed quite liberal, and they don’t bother with the secret-court-system “oversight” like the US:
> Since 2010, intelligence officers can wiretap someone's phones or monitor their Internet activity based on received reports that an individual is preparing to commit a crime. They do not have to back up those allegations with formal criminal charges against the suspect. According to a 2011 ruling, intelligence officers have the right to conduct surveillance of anyone who they claim is preparing to call for "extremist activity."
Then in 2016 a counterterrorism law was passed, and it sounds like ISPs/telecoms are required to store everything for six months; it merely has to be requested by “authorities” (guessing beyond just the FSB) without a court order:
> Internet and telecom companies are required to disclose these communications and metadata, as well as "all other information necessary" to authorities on request and without a court order
> Equally troubling, the new counterterrorism law also requires Internet companies to provide to security authorities “information necessary for decoding” electronic messages if they encode messages or allow their users to employ “additional coding.” Since a substantial proportion of Internet traffic is “coded” in some form, this provision will affect a broad range of online activity.
> Then in 2016 a counterterrorism law was passed, and it sounds like ISPs/telecoms are required to store everything for six months; it merely has to be requested by “authorities” (guessing beyond just the FSB) without a court order
Russia is not alone. The EU required the same thing from 2006, until the EU Court struck it down in 2014: https://en.wikipedia.org/wiki/Data_Retention_Directive
https://en.wikipedia.org/wiki/Data_retention#European_Union is a fun read in how the EU fined countries for adhering to their respective national constitutions and refusing to implement that directive.
Looking beyond the EU, there are plenty of allegedly democratic countries on that wiki page with legally-required data retention:
> In 2015, the Australian government introduced mandatory data retention laws that allows data to be retained up to two years. [..] It requires telecommunication providers and ISPs to retain telephony, Internet and email metadata for two years, accessible without a warrant
So if governments are sniffing on high entropy traffic, could we just send normal seeming (SSH or whatever) packets with the payload coming from /dev/urandom? Would that be a denial of service?
If you could get enough people doing it, yes. But in practice the only people who would care enough would be the people the governments want to watch. Even a decent chunk of the crypto community would rather dunk on a cryptosystem they don't like than actually encrypt their emails (although of course how much of that is the NSA disrupting things is an open question).
I actually sorta think unencrypted email has been a boon for society as both corporations and government agencies have left paper trails that helped expose their misdeeds later.
I thought there were wildly popular messaging apps now that were encrypted by default ?
There are, but basically only because of Facebook, who seem to actually care quite a bit about crypto and have the clout to make things happen. I don't think it's coincidence that they're also the main sender of encrypted emails (you can set your PGP key and then they'll use it for all their emails to you).
Create 1000s of files like this:
dd if=/dev/urandom of=encrypted1.zip bs=4M count=1
and upload them to Dropbox and Google Drive.
The NSA will archive them for you, expecting to decrypt the content later, when a software bug is found or hardware is more capable.
When they find a way to "decrypt" /dev/urandom output into, you know, whatever is expedient for them at some future date, do you think a secret judge in a secret court is going to believe that you were just moving around random noise for the lulz?
My conspiracy theory is that AES 256 has been cracked by NSA/CIA but they just shut up about it so everyone feels safe.
If AES is cracked by the NSA then they know that there is a fault. They must therefore assume that others could know about it and might exploit that fault, either now or in the future. A massive amount of American infrastructure, including intelligence services, relies upon AES not having such holes. They wouldn't sit on such information.
> including intelligence services, relies upon AES not having such holes. They wouldn't sit on such information.
The NSA uses a completely different set of algorithms called Suite A. They don't use AES for exactly this reason.
> Suite A will be used for the protection of some categories of especially sensitive information (a small percentage of the overall national security-related information assurance market)
> a small percentage of the overall national security-related information assurance market
That quote does not exist in the one Wikipedia citation. I am not sure where they got it. Any time data flows from one physical location to another it is encrypted with Suite A algorithms, which is arguably where it matters the most.
The government is known to hoard and deploy zero-days for surveillance... That was part of the Snowden leak. Google (or ask ChatGPT) about Tailored Access Operations, revealed in the Snowden papers.
If the flaw in AES is subtle enough, they may be (justifiably?) convinced that nobody else's capabilities will suffice to discover and/or exploit it - or, at least, that they have enough of a grasp of what everyone else is doing that no other actor could discover the flaw without the NSA finding out about this.
Encryption bugs are fundamentally different than other exploits; if I send traffic protected by encryption then it needs to remain encrypted as long as the information needs to be secret. With national security that can be decades.
They wouldn't go public with such information. They would very quietly get the most important parts moved off of it, quickly, with as little noise as possible.
Moving any part of the US government off of AES would be noticed. The NSA doesn't send IT people to do installs. They set standards which are then incorporated into products by outside contractors selling to government agencies. Any attempted migration away from AES would be news here on HN within hours.
As an example of one such standard which was recently updated and still includes AES256, https://en.m.wikipedia.org/wiki/Commercial_National_Security...
If they can find a flaw in AES 256, others can too.
My theory is they've got a backdoor in the iPhone hardware random number generator. It's the most obvious high-value target, it's inherently almost undetectable (the output of a CSPRNG is indistinguishable from real random numbers), and you can keep crypto researchers busy with fights about whose cryptosystem is best, safe in the knowledge that it doesn't matter what they come up with, they were owned from the start.
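The "inherently almost undetectable" point can be made concrete. Here is a sketch of a backdoored generator (an illustrative construction, not any real DRBG or any claim about actual iPhone hardware): the output is a keyed hash stream, so to everyone without the key it is statistically indistinguishable from true randomness, while the key holder can regenerate any device's entire "random" output:

```python
import hashlib

# The secret the manufacturer (or its coercer) keeps. Everything below
# is illustrative; no real product is claimed to work this way.
BACKDOOR_KEY = b"known only to the manufacturer"

def backdoored_rng(device_id, nbytes):
    """SHA-256 in counter mode, keyed with a fixed secret. Users see
    output that passes every statistical test; the key holder can
    replay any device's whole stream from just the device ID."""
    out = b""
    counter = 0
    while len(out) < nbytes:
        block = hashlib.sha256(
            BACKDOOR_KEY + device_id + counter.to_bytes(8, "big")
        ).digest()
        out += block
        counter += 1
    return out[:nbytes]

# The device generates a "random" 32-byte private key...
device_key = backdoored_rng(b"device-1234", 32)
# ...and the key holder, knowing only the device ID, derives the same bytes.
attacker_copy = backdoored_rng(b"device-1234", 32)
assert device_key == attacker_copy
```

No amount of black-box statistical testing of the output distinguishes this from an honest CSPRNG, which is why RNG subversion is the nightmare case: every protocol built on top inherits the compromise.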
Wow djb was on his committee, cool.
DJB is widely thought to be the entire reason he was in the program at all.
I didn’t know who this was; others probably don’t either.