For simple models (constant incoming radiance), you can indeed just add the optical depths from the different fog 'layers'. (90% sure but the maths is easy to check anyway, see https://forwardscattering.org/post/72)
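The additivity follows directly from the Beer-Lambert law: transmittance through a layer is exp(-tau), so passing through several layers multiplies transmittances, which is the same as summing the optical depths. A toy check (the tau values are made up):

```python
import math

# Beer-Lambert: transmittance through one homogeneous layer of optical depth tau
def transmittance(tau):
    return math.exp(-tau)

tau1, tau2 = 0.3, 0.5  # optical depths of two fog layers (arbitrary values)

# Passing through both layers multiplies the transmittances...
combined = transmittance(tau1) * transmittance(tau2)

# ...which equals a single layer whose optical depth is the sum.
summed = transmittance(tau1 + tau2)

print(abs(combined - summed) < 1e-12)  # True: optical depths just add
```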
I'd be super interested in more information on this! Do you mean abandoning unsupervised learning completely?
Prompt Injection seems to me to be a fundamental problem in the sense that data and instructions are in the same stream and there's no clear/simple way to differentiate between the two at runtime.
I haven't thought about it deeply. But I guess it's about allowing the model to easily distinguish the prompt from the conversation. Models seem to get confused by escaping, which is fair enough; escaping is very confusing.
It's true that for the transformer architecture the prompt and conversation are in the same stream.
However you could do something like activate a special input neuron only for prompt input.
Or give the prompt a fixed size (e.g. a fixed-size prefix).
And then do a bunch of adversarial training to punish the model when it confuses the prompt and conversation :)
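The "special input neuron" idea can be sketched as a learned segment flag added to the token embeddings, similar in spirit to BERT's segment embeddings. Everything here is hypothetical illustration (toy dimensions, random stand-in embeddings), not a real model:

```python
import random

random.seed(0)
D = 8  # toy embedding dimension

# Stand-in for a token embedding lookup (random vectors for illustration).
def embed(tokens):
    return [[random.uniform(-1, 1) for _ in range(D)] for _ in tokens]

# A learned "prompt flag" vector, added only to prompt tokens — so the
# model can always tell which tokens came from the prompt, no escaping needed.
prompt_flag = [random.uniform(-1, 1) for _ in range(D)]

def build_input(prompt_tokens, convo_tokens):
    xs = []
    for e in embed(prompt_tokens):
        xs.append([a + b for a, b in zip(e, prompt_flag)])  # flag on
    xs.extend(embed(convo_tokens))                           # flag off
    return xs

seq = build_input(["You", "are", "helpful"], ["Ignore", "the", "prompt"])
print(len(seq))  # 6 token vectors; only the first 3 carry the prompt flag
```

The adversarial-training part would then penalize the model whenever it follows instructions from unflagged (conversation) tokens.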
"look, I'm sorry, but the rule is simple:
if you made something 2x faster, you might have done something smart
if you made something 100x faster, you definitely just stopped doing something stupid"
I think this might not be a shortcoming of MSVC but rather a deliberate design decision. That is, it seems likely that MSVC isn't failing to apply strict-aliasing optimizations but is deliberately avoiding them, probably for compatibility with code that wasn't/isn't written to spec. And frankly it can be very onerous to write code that is 100% correct per the standard when dealing with e.g. memory-mapped files; I'm struggling to recall ever seeing a single case of fully conformant code there.
Relative to what?
Relative to modern OpenGL with good driver support, probably not much.
The big win comes from the simplified API, which helps both application developers and driver writers.