The standard definition of quantized arithmetic for neural networks is not the same as IEEE 754 floating-point arithmetic (single or double precision), which is itself an approximation of arithmetic over the reals: https://arxiv.org/abs/1712.05877
In that paper they consistently say "integer quantization" for a reason. People relax it to just "quantization" because it's a natural shorthand.
4-bit NormalFloat Quantization: "The NormalFloat (NF) data type builds on Quantile Quantization [15], which is an information-theoretically optimal data type that ensures each quantization bin has an equal number of values assigned from the input tensor."
- QLoRA: Efficient Finetuning of Quantized LLMs https://arxiv.org/abs/2305.14314
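A rough sketch of the quantile idea (not the exact NF4 construction from the QLoRA paper, which additionally guarantees an exact zero code, handles the tails differently, and uses a block-wise absmax scale): place levels at the midpoints of equal-probability slices of a standard normal and rescale to [-1, 1]. The function names here are made up for illustration.

```python
from statistics import NormalDist

def quantile_levels(num_bins: int = 16):
    """Sketch of quantile quantization: choose levels so each bin covers an
    equal share of a standard-normal input, then normalize to [-1, 1]."""
    nd = NormalDist()
    # midpoints of equal-probability slices of the normal CDF
    qs = [(i + 0.5) / num_bins for i in range(num_bins)]
    levels = [nd.inv_cdf(q) for q in qs]
    amax = max(abs(v) for v in levels)
    return [v / amax for v in levels]  # sorted ascending by construction

def quantize(x, levels):
    """Map a (pre-normalized) value in [-1, 1] to its nearest level's 4-bit code."""
    return min(range(len(levels)), key=lambda i: abs(levels[i] - x))
```

The point of the equal-probability construction is that for normally distributed weights every 4-bit code gets used about equally often, which is what "information-theoretically optimal" is gesturing at.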
3. Float8 Quantized Fine-tuning, for speeding up fine-tuning by dynamically quantizing high precision weights and activations to float8, similar to pre-training in float8.
- https://docs.pytorch.org/ao/stable/eager_tutorials/finetuning.html
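As a rough illustration of what dynamic float8 quantization is doing (a toy sketch only, not torchao's actual implementation): scale each tensor so its absolute max lands in the e4m3 range, round to the reduced 3-bit mantissa, and rescale. Subnormals, NaNs, and the e5m2 variant are ignored here, and the helper names are made up.

```python
import math

E4M3_MAX = 448.0  # largest finite e4m3 magnitude

def quantize_e4m3(x: float) -> float:
    """Round x to a nearby float8 e4m3 value (sketch: normal numbers only)."""
    if x == 0.0:
        return 0.0
    sign = math.copysign(1.0, x)
    mag = min(abs(x), E4M3_MAX)  # saturate rather than overflow to inf
    m, e = math.frexp(mag)       # mag = m * 2**e with m in [0.5, 1)
    # keep 1 implicit + 3 explicit mantissa bits: round m to multiples of 2**-4
    m = round(m * 16) / 16
    return sign * math.ldexp(m, e)

def dynamic_quantize(xs):
    """Per-tensor dynamic scaling: map the absmax onto the e4m3 range,
    quantize, then rescale back to the original magnitude."""
    amax = max(abs(v) for v in xs) or 1.0
    scale = E4M3_MAX / amax
    return [quantize_e4m3(v * scale) / scale for v in xs]
```

The "dynamic" part is that `scale` is recomputed from each tensor's current absmax at runtime, rather than calibrated once ahead of time.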
If you want to be really pedantic you could have just said everything implemented on digital computers is quantized since it's all just boolean arithmetic on some finite bit vectors.
Do you mean the distribution of representable numbers as floats or do you mean real numbers? I always assumed infinitely many values could be stored between 0 and 1, because you can take 1/x of everything. But I have never had enough free time for the maths.
I'm not sure how to answer because I'm not sure which question you're asking.
For infinity: you can't compute +/-inf, and there also isn't an infinite set of representable numbers on [0,1]. You get more with fp64 and more still with fp128, but it's always finite. This is what leads to that thing where you add numbers and get something like 1.9999999998 (I did not count the number of 9s). Look at how numbers are represented on computers: a sign, a mantissa, and an exponent. You'll see there are more representable numbers on [-1,1] than in any other interval of the same width, which makes that kind of normalization important when doing numerical work on computers.
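You can see the spacing grow directly with Python's `math.ulp`, which gives the gap between a float and the next representable one:

```python
import math

# The gap between adjacent representable doubles grows with magnitude,
# so [0, 1] is covered far more densely than intervals further out.
print(math.ulp(1.0))   # 2**-52, about 2.2e-16
print(math.ulp(1e6))   # about 1.2e-10
print(math.ulp(1e15))  # 0.125 -- gaps bigger than 1/8 of an integer step

# Rounding to this finite grid is also why sums drift:
print(0.1 + 0.2)       # 0.30000000000000004, not 0.3
```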
This also causes breakdowns in seemingly ordinary math, such as addition and multiplication not being associative. Associativity doesn't hold under finite precision, which means floating-point numbers don't form a field to work within. This is true regardless of the precision level, which is why I made my previous comment.
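The non-associativity is easy to demonstrate with plain Python floats:

```python
# Floating-point addition is not associative: grouping changes which
# intermediate result gets rounded.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # 0.1 + 0.2 rounds up to 0.30000000000000004 first
right = a + (b + c)  # 0.2 + 0.3 happens to round to exactly 0.5

print(left == right)  # False: the two groupings differ in the last bit
```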
For real numbers: we're talking about computers, and computers only ever use a finite subset of the reals, so I'm not sure why you're bringing them up.