> IME you can't reliably extract the intent from the C code, much less the binary, so you can't really fix these bugs without a human rewriting the source.
I am pretty sure that the parent is talking about hardware memory safety which doesn't require any "human rewriting the source".
The same thing can be said about a Rust vector OOB panic or any other bug in any safe language. Bugs happen, which is why programmers are employed in the first place!
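To make the comparison concrete, here's a minimal Rust sketch of the OOB panic being discussed: the out-of-bounds access doesn't corrupt memory, it trips a runtime bounds check and halts, which is the same detect-and-halt behavior a hardware check would give you.

```rust
fn main() {
    let v = vec![10, 20, 30];
    // An in-bounds access works as expected.
    assert_eq!(v[2], 30);
    // An out-of-bounds index doesn't silently read past the buffer;
    // the runtime bounds check panics at the bad access. catch_unwind
    // is used here just to observe the panic without aborting the test.
    let result = std::panic::catch_unwind(|| v[3]);
    assert!(result.is_err());
}
```

Either way the program stops instead of leaking or corrupting data; the debate is only about where the check lives.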
> The same thing can be said about a Rust vector OOB panic or any other bug in any safe language. Bugs happen, which is why programmers are employed in the first place!
Sure, the point is you're going to need the programmer either way, so "hardware security lets us detect the problem without rewriting the code" isn't really a compelling advantage for that approach.
If a program halts, that is a narrow security issue that will not leak data. Humans need to fix bugs, but that is nothing new. A memory bug with such features would be hardly more significant than any other bug, and people would get better at fixing them over time because they would be easier to detect.
> If a program halts, that is a narrow security issue that will not leak data.
Maybe. It depends on what fallback the business that was using it has when that program doesn't run.
> Humans need to fix bugs, but that is nothing new. A memory bug with such features would be hardly more significant than any other bug
Perhaps. But it seems to me that the changes that you'd need to make to fix such a bug are much the same changes that you'd need to make to port the code to Rust or what have you, since ultimately in either case you have to prove that the memory access is correct. Indeed I'd argue that an approach that lets you find these bugs at compile time rather than run time has a distinct advantage.
>Perhaps. But it seems to me that the changes that you'd need to make to fix such a bug are much the same changes that you'd need to make to port the code to Rust or what have you, since ultimately in either case you have to prove that the memory access is correct.
No, you wouldn't need to prove that the memory access is correct if you relied on hardware features. Or I should say: that proof would mostly be done by the compiler and library writers who implement the low-level stuff like array allocations. The net lines of code changed would definitely be fewer than a complete rewrite, and would not require the rediscovery of specifications that normally has to happen in the course of a rewrite.
>Indeed I'd argue that an approach that lets you find these bugs at compile time rather than run time has a distinct advantage.
It is an advantage, but it's not free. Every compilation takes longer in a more restrictive language. The benefits diminish rapidly with the number of instances of the program that are effectively running tests, which is incidentally one metric that correlates positively with how significant bugs actually are. You could think of runtime checks as free unit tests, almost. The extra hardware does have a cost, but that cost is WAAAY lower than the cost of a wholesale rewrite.
> No, you wouldn't need to prove that the memory access is correct if you relied on hardware features. Or I should say: that proof would mostly be done by the compiler and library writers who implement the low-level stuff like array allocations. The net lines of code changed would definitely be fewer than a complete rewrite, and would not require the rediscovery of specifications that normally has to happen in the course of a rewrite.
I don't see how the hardware features make this part any easier than a Rust-style borrow checker, or how they avoid the same rediscovery of specifications. Checking at runtime has some advantages: if there are codepaths that are never actually run, you can skip getting those correct (although it's sometimes hard to tell the difference between a codepath that's never run and one that's rarely run). But for every memory access that does happen, your compiler/runtime/hardware is answering the same question either way - "why is this memory access legitimate?" - and answering it requires the same amount of logic (and potentially involves arbitrarily complex aspects of the rest of the code) in either setting.
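The "same question either way" point can be sketched in a few lines of Rust. Both functions below answer "why is this access legitimate?"; the difference is only whether the answer is enforced at the access (a runtime or hardware check that halts) or surfaced to the caller (who must handle the failure case explicitly). The function names are illustrative, not from any library.

```rust
// The check happens at the access itself: an illegal index
// halts the program, like a hardware trap would.
fn runtime_checked(v: &[i32], i: usize) -> i32 {
    v[i]
}

// The check is surfaced in the type: the caller must handle None,
// so the "proof" of legitimacy lives in the surrounding code.
fn caller_proved(v: &[i32], i: usize) -> Option<i32> {
    v.get(i).copied()
}

fn main() {
    let v = [1, 2, 3];
    assert_eq!(runtime_checked(&v, 1), 2);
    assert_eq!(caller_proved(&v, 1), Some(2));
    // Out of bounds: no halt, just an explicit None to deal with.
    assert_eq!(caller_proved(&v, 9), None);
}
```

In both cases someone has to establish that the index is in range; the approaches only differ in who discovers it when it isn't.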
That's possible but unlikely. I would be OK with requiring software bugs like that to be fixed, unless a given instance can be explained away as impossible to trigger for some reason. We could almost certainly move toward requiring this kind of stuff to be fixed much more easily than we could go down the commonly proposed "rewrite it in another language bro" path.
There's no such thing as hardware memory safety that involves absolutely no change to the semantics of the machine as seen by the compiled C program. There are going to be false positives.