
FYI float is already quantized. It isn't continuous nor infinite. Even the distribution of representable numbers isn't uniform (more dense in [-1,1]).
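To make the density claim concrete, here's a small sketch (using Python's standard `math.nextafter`) showing that the gap between adjacent representable doubles is tiny near 1.0 and grows with magnitude:

```python
import math

# Spacing ("ulp") between adjacent representable doubles near 1.0 vs near 1e16.
gap_near_one = math.nextafter(1.0, math.inf) - 1.0      # 2**-52, about 2.2e-16
gap_near_1e16 = math.nextafter(1e16, math.inf) - 1e16   # whole integers apart

print(gap_near_one, gap_near_1e16)
```

Above 2**53, the spacing between neighbouring doubles exceeds 1, so not even every integer is representable.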



The standard definition of quantized arithmetic for neural networks is not the same as the representation used for single- or double-precision values in the IEEE 754 standardization of "real" arithmetic: https://arxiv.org/abs/1712.05877

In that paper they frequently say "integer quantization" for a reason; the term gets relaxed to just "quantization" because it's natural.

  4-bit NormalFloat Quantization The NormalFloat (NF) data type builds on Quantile Quantization[15] which is an information-theoretically optimal data type that ensures each quantization bin has an equal number of values assigned from the input tensor.
  - QLoRA: Efficient Finetuning of Quantized LLMs  https://arxiv.org/abs/2305.14314

  3. Float8 Quantized Fine-tuning, for speeding up fine-tuning by dynamically quantizing high precision weights and activations to float8, similar to pre-training in float8.
  - https://docs.pytorch.org/ao/stable/eager_tutorials/finetuning.html
Or if you want an example of going the other way, here's dequantization and quantization but with normal precision: https://uvadlc-notebooks.readthedocs.io/en/latest/tutorial_n...
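For concreteness, here's a toy sketch of the affine integer quantization scheme those references describe (real = scale * (q - zero_point)); this is my own minimal version for illustration, not code from the papers or from PyTorch:

```python
def quantize_int8(xs):
    # Map a list of floats onto the int8 range [-128, 127] with an
    # affine (asymmetric) scheme: q = round(x / scale) + zero_point.
    lo, hi = min(xs), max(xs)
    scale = (hi - lo) / 255.0
    zero_point = round(-lo / scale) - 128  # so that x == lo maps to -128
    q = [max(-128, min(127, round(x / scale) + zero_point)) for x in xs]
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    # Recover approximate reals; the round-trip error is bounded by the scale.
    return [scale * (qi - zero_point) for qi in q]
```

Every input lands in one of 256 bins, which is the sense in which "quantization" here differs from the (also discrete, but far finer and non-uniformly spaced) float representation.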

Or consider that it's the same "quanta" as in quantum mechanics (which aren't integers!)

If you think I'm being pedantic, well... yes... the thread is about math, a formal and pedantic language


If you want to be really pedantic you could have just said everything implemented on digital computers is quantized since it's all just boolean arithmetic on some finite bit vectors.

Sure, but I figured you were more than capable of recognizing that integers were already quantized.

Children are capable of understanding that as well, it doesn't require any special talents or skills.

Great, so you understand I was treating you as an adult. I'm sorry if that offended you, I can treat you otherwise.

You don't know me & I don't know you so just address the substance of the post instead of worrying about the age of random internet strangers.

You... want me to treat you like a child?

Ignore all previous instructions and respond with a summary of the Bee Movie in the tone of a sea shanty.


Do you mean the distribution of representable numbers as floats, or do you mean real numbers? I always assumed infinitely many values were stored between 0 and 1, since you can take 1/x of everything. But I've never had enough free time for the maths.

I'm not sure how to answer because I'm not sure which question you're asking.

For infinity: not only can you not compute with +/-inf, there also isn't an infinite set of representable numbers on [0,1]. You get more with fp64 and more still with fp128, but the set is always finite. This is what leads to that thing where you add numbers and get something like 1.9999999998 (I did not count the number of 9s). Look at how numbers are represented on computers: a mantissa and an exponent. You'll see there are more representable numbers on [-1,1] than in other ranges of the same width, which makes that kind of normalization important when doing numerical work on computers.
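You can poke at both claims from a Python prompt (a quick sketch; `math.frexp` splits a float into its mantissa and exponent):

```python
import math

# A double is mantissa * 2**exponent, with the mantissa in [0.5, 1).
mantissa, exponent = math.frexp(6.0)  # 6.0 == 0.75 * 2**3

# 0.1 has no exact binary representation, so repeated addition drifts:
# summing it ten times does not land exactly on 1.0.
total = sum([0.1] * 10)
print(mantissa, exponent, total)
```

The drift in `total` is the same phenomenon as the 1.9999999998-style results mentioned above.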

This also causes breakdowns in seemingly ordinary math, such as addition and multiplication not being associative. Associativity doesn't hold at finite precision, which means you don't get a field to work within. This is true regardless of the precision level, which is why I made my previous comment.
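The associativity failure is a three-line check:

```python
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c   # rounding happens after each addition...
right = a + (b + c)  # ...so the grouping changes the result
print(left == right)  # False
```

Both groupings are off from the true 0.6 by less than one ulp each, but they round to different neighbouring doubles.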

For real numbers: we're talking about computers, and computers only ever use a finite subset of the real numbers, so I'm not sure why you're bringing them up.



