If you can produce a collision against SHA-1, the Bitcoin network will pay you 2.47 BTC (worth about $1111 at current market prices): https://bitcointalk.org/index.php?topic=293382.0
Without joining a mining pool or broadcasting the transaction, wouldn't it be incredibly difficult to have a chance to successfully beat all the others and mine the block yourself?
Or is the assumption that anyone who is able to solve the puzzle will also be able to mine any and all blocks instantly?
And why couldn't someone re-steal the reward?
AES-specific instructions were already a bit questionable, and this is even more so. Algorithm-specific instructions are basically going to become very dated cruft within a few years to a decade at most, and are probably a bad thing to add to any instruction set. It would be better to add instructions that speed up all sorts of crypto algorithms.
AES-specific instructions are also the only sane way to implement AES without timing attacks.
I'm curious about this "instructions for speeding up all sorts of crypto algorithms" proposal -- how would you do that, given that crypto algorithms tend to have a wide variety of implementations and a wide variety of mathematical underpinnings? Do you want instructions that speed up all sorts of math?
I'm not sure that's true. Maybe you're thinking about implementing GCM without lookup tables; GCM is most often considered in an AES context, and is infamously tricky to do in software in constant time at reasonable speed.
I didn't think it was obvious, but I also didn't think the performance hit was that bad. (We're getting out of my comfort zone; I know how constant-time AES implementations work, but not what the current speed records for them are.)
The best timings I'm aware of are ~7cpb for AES-CTR, and ~14cpb for GHASH on Nehalem [2]. It's a bitsliced implementation, so it makes sense to compare it to counter-mode AES-NI. A recent AES-NI implementation on Sandy Bridge [1, pg. 25-26] achieves 0.79cpb for AES-CTR, and 1.68cpb for GHASH.
The point: the ratios 14/1.68 and 7/0.79 are quite similar.
PS: The performance of PCLMULQDQ was vastly improved in Haswell, and I believe AES-GCM there runs at something like 1.5cpb. However, the vector size in Haswell also doubles to 256 bits, which would also improve a hypothetical bitsliced AES-GCM implementation. It's hard to say what that speed would be, so I won't try to compare things on Haswell.
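A quick back-of-the-envelope check of the claim that the ratios are similar, using only the cycles-per-byte figures quoted above:

```python
# Cycles-per-byte figures taken from the comment above.
soft_ctr, ni_ctr = 7.0, 0.79        # bitsliced vs. AES-NI AES-CTR
soft_ghash, ni_ghash = 14.0, 1.68   # software vs. PCLMULQDQ GHASH

print(round(soft_ctr / ni_ctr, 1))      # AES-CTR slowdown → 8.9
print(round(soft_ghash / ni_ghash, 1))  # GHASH slowdown   → 8.3
```

So the software implementations pay roughly the same ~8-9x penalty for both primitives, which is the point being made.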
There are definitely AES cache timing issues! I had it in my head that GHASH was harder to make constant time than AES, probably because of Adam Langley, but 'pbsd points out that it's subtler than that. Both GCM and AES are tricky to do in constant time in software.
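To illustrate the cache-timing issue the comments above are pointing at, here's a toy sketch (not real AES; the identity table stands in for the S-box): a direct lookup touches a secret-dependent cache line, while a constant-time variant must read every table entry regardless of the input.

```python
# Stand-in for the AES S-box; real AES uses a fixed 256-entry table.
SBOX = list(range(256))

def leaky_lookup(x: int) -> int:
    # Which cache line gets touched depends on the secret index x.
    return SBOX[x]

def constant_time_lookup(x: int) -> int:
    # Read every entry; select the wanted one with a mask, so the
    # memory-access pattern is independent of x.
    out = 0
    for i, v in enumerate(SBOX):
        mask = -(i == x) & 0xFF  # 0xFF when i == x, else 0x00
        out |= v & mask
    return out

assert all(leaky_lookup(x) == constant_time_lookup(x) for x in range(256))
```

The constant-time version does 256x the work per lookup, which is why bitsliced implementations (which avoid tables entirely) are the usual answer in software.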
AES-NI instructions have their own bit in CPUID (not bundled with any SSE bit), so future chips could omit them and software would fall back to regular AES code paths.
I'm in two minds about this: it's great that we're going to be able to move forward, but I'm not sure about one organisation having the power to unilaterally dictate to everyone and expect people to listen to them.
Also, I imagine that people are still generating SHA-1 certs out of a (possibly misguided) sense of remaining compatible with old devices. Anyone know what impact this might have?
Seriously? I don't do much encryption, but which .NET framework lib still uses SHA-1?
Of course, this is one of those cases where I'd imagine that MS's approach to protecting backwards compat will mean nothing but a warning thrown when you reference the old lib, and any new libraries sport a completely different API making them unsuitable as a drop-in replacement. MS protects compatibility religiously while simultaneously applying a Not Invented Here mentality to code that actually was Invented Here.
RSACryptoServiceProvider uses SHA-1. It does not ask for a hash function. I learned this recently while trying to port a client C# program over to Golang.
Microsoft has only supported SHA-2 since Windows XP SP3, and on Windows Server 2003 only with a hotfix (which needs to be downloaded separately!). So, ironically, Microsoft's own (old but still supported) products are the main reason people are still generating SHA-1 certificates.
The Windows Server 2003 hotfix addresses being able to receive and utilize a SHA-2 certificate, so unless you're suggesting the vast majority of webservers are WS2003, I'm guessing not. As for Windows XP, SP3 supports it without a hotfix, has been out for five years, and is currently the only supported branch of XP.
Really the problem is just that the security industry moves very slowly, and when it does it lurches forward unpredictably because of the sudden release of a viable attack. As Microsoft suggests, a preimage attack could debilitate online security and would force a much more haphazard (and risky) move to new certificates. I expect >99% of web-browsing computers support SHA-2 these days, so there's no reason to allow the CA industry to continue limping along.
So, ironically, it's Microsoft that will force the industry to move forward for once.
And as it happens, the Server 2003 hotfix required to support SHA256 certs and the XP/Server 2003 hotfix required for enrollment of SHA256 certificates (not that it matters much, since only the machine enrolling SHA256 certs needs it) are both included in MS13-095, because the GDR branch was stripped from new XP/Server 2003 updates ~6 months ago.
"one organisation having the power to unilaterally dictate to everyone and expect people to listen to them"
In tech frequently a benevolent dictatorship is the only way to get things done. They could take this through the IETF or ICANN or something but they'd spend 5 years listening to people prattle on about edge cases while SHA-1 becomes progressively less secure. I worked with airlines for a number of years and the amount of effort spent trying to keep old devices alive on their networks FAR exceeds what it would've cost to just upgrade the devices.
It's the 'expect people to listen to them' bit that I'm less sure of. Most of the other things you're referring to are rather more obviously purely for the benefit of the instigator, and they can compel people to listen.
In this case, the change should actually benefit the community. Again, they can compel people to listen but I sort of get the impression that they're expecting people to be happy about the change. Indeed, I'm not going to complain too loudly especially as none of my certificates have a far enough out expiry to be affected.
I suppose my point is that even if you think everyone will be pleased at a change you're making, if you want people to be happy you should probably check with them before acting.
> Any certificate being used on the public web today which has an expiry date more than 3 years in the future will not be able to live out its full life.
Why not have a later cut-off for certificates that were issued before, say, 1 January 2014? That way, CAs after January will know not to sign SHA-1-based certificates that expire in or after 2016, but current long-life certificates (which aren't self-signed anyway) will still be OK.
Also, microsoft.com seems to be using a certificate which uses MD5...
According to this EFF presentation [1] (slide 24) where they scanned the internet for SSL servers in 2010, it's actually an underestimate -- in fact around 99.995% of servers use SHA1. The query shown only considers certs created in 2010, though; I'm grabbing the data set to verify now, but I suspect that 98% is not unreasonable.
Just noticed all four certs in the chain on https://www.microsoft.com use SHA1. The root cert, "Baltimore CyberTrust Root", expires in 2025. Will root CAs also have to be replaced by 2016?
The relative urgency around this cutoff comes off as panicky to me. They never seemed to bother updating roots or adding SNI support for older, still-supported OSes like WinXP.
We are at the end of the phase-out (the last deadline is the end of 2013). Mozilla will still accept SHA-1, but recommends against it. I would not be surprised, however, if Moz supported MS in their effort.
Personally I think the risk that CAs are or will be compromised is much higher than the risk of a practical SHA-1 preimage attack. The entire CA model is very shaky.
MS has a history of helping out its buddies in the certificate troll mafia. This seems like it kind of fits in that category.
You don't need a preimage attack for SHA-1 to cause problems with CAs, just collisions. For example, MD5 does not have any preimage attacks I'm aware of, but it is possible to create rogue CA certificates that use MD5 [1]. Bruce Schneier estimates that we will see SHA-1 collisions relatively soon [2], so it's not like Microsoft is just making this stuff up.
I agree that CA compromise is a serious problem, but it's not one Microsoft can do something about. They can ban SHA-1 in certificates, and I think it's a good idea.
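A toy illustration of why collisions arrive so much sooner than preimages, which is the distinction drawn above: truncate SHA-1 to a small number of bits and a birthday-style search finds a collision after roughly 2^(n/2) attempts, whereas inverting a specific digest still costs about 2^n. (The truncation and brute-force loop are purely for demonstration.)

```python
import hashlib
from itertools import count

def truncated_sha1(data: bytes, nbytes: int) -> bytes:
    # Toy "mini SHA-1": keep only the first nbytes of the digest.
    return hashlib.sha1(data).digest()[:nbytes]

def find_collision(nbytes: int):
    # Birthday search: hash distinct messages until two share a digest.
    seen = {}
    for i in count():
        msg = str(i).encode()
        h = truncated_sha1(msg, nbytes)
        if h in seen:
            return seen[h], msg
        seen[h] = msg

# 24-bit truncation: a collision appears after ~2^12 hashes on average,
# while a preimage of a given 24-bit value would need ~2^24.
m1, m2 = find_collision(3)
assert m1 != m2
assert truncated_sha1(m1, 3) == truncated_sha1(m2, 3)
```

The same square-root gap is why the rogue-CA attack cited above needed only MD5 collisions, never a preimage.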
Windows XP already has support via a hotfix. Windows XP dies in 144 days, anyway, so anyone left using it by the time this takes effect will be using a hopelessly insecure system for some time already.
It won't die. It will be used for years by millions of users. That some company stops issuing security updates for a piece of software means nothing to most users. It will probably even make them happier, because they won't be nagged by update messages all the time.
People can still use it. But it's dead, insecure, and unsupported. Just like Windows 3.x, 95, NT, 98, 98se, 2000, etc. People still use all of those, too.
Even if SHA-1 were weakened considerably, this would not impact this particular use case all that much... except in cases where git hashes are used in a security-critical way. (Which is probably not that great an idea...)
Doesn't a signed git tag rely on signing the 160-bit hash? So if SHA-1 were really weakened, you could reuse a signature by generating a repo that hashes to the same value as the signed one?
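For reference, a git object id is SHA-1 over a type/size header plus the raw content, and that 160-bit id is all the chain of hashes behind a signed tag ultimately pins down, so a colliding object would be covered by the same signature. A minimal sketch of the hashing:

```python
import hashlib

def git_object_id(obj_type: str, body: bytes) -> str:
    # Git hashes "<type> <size>\0" followed by the raw object bytes.
    header = f"{obj_type} {len(body)}\0".encode()
    return hashlib.sha1(header + body).hexdigest()

# Matches `echo hello | git hash-object --stdin`
print(git_object_id("blob", b"hello\n"))
# → ce013625030ba8dba906f756967f9e9ca394464a
```

Commit and tag objects are hashed the same way, with the parent/commit ids embedded as hex text in the body.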
Unaffected by Microsoft's announcement, yes. However, they will be affected by any successful preimage [edit: or collision] attacks on SHA-1, and the possibility of such an attack appears to motivate Microsoft's announcement.
Yes, website operators that don't have any Windows users they care about aren't affected. All three of them.
Somewhat more relevantly, CAs that don't have any customers who have any Windows users they care about aren't affected. That's probably closer to zero.