
> What Lindelof says is that the size of the output is always bounded by ε% as many digits as the input, for ANY (arbitrarily miniscule) ε > 0.

Is there a reason or benefit to stating it like that rather than saying the ratio of output digit size to input digit size asymptotically approaches zero? Or did I misunderstand the explanation?
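(To be explicit about my reading: the two phrasings pin down the same condition, assuming the digit counts are positive. A quick sketch, writing d_out(T) and d_in(T) for the output and input digit counts — labels made up just for this note:)

    % "Bounded by eps% of the input digits, for every eps > 0" is exactly
    % the definition of the digit ratio tending to zero (the ratio is >= 0).
    \[
      \bigl(\forall \varepsilon > 0 \ \exists T_\varepsilon \ \forall T \ge T_\varepsilon:
        \ d_{\mathrm{out}}(T) \le \varepsilon\, d_{\mathrm{in}}(T)\bigr)
      \iff
      \lim_{T \to \infty} \frac{d_{\mathrm{out}}(T)}{d_{\mathrm{in}}(T)} = 0.
    \]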



Thinking about digit size is sort of fine as a simplification / for not turning off general audiences with equations, but it's a bit of a pain to think about if, e.g., the things you're comparing might be smaller than one.

What Lindelof says more precisely is that for any ε > 0 and any L-function L(s), there is a constant C_(ε,L) (depending only on ε and L, but crucially not on s) such that |L(s)| <= C_(ε,L) (1 + |Im(s)|)^ε for every s on the critical line.
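To connect that to the digit-count phrasing: taking log10 of both sides gives digits(|L(s)|) <= log10(C_(ε,L)) + ε·digits(Im(s)), and the constant term gets diluted as Im(s) grows. A throwaway numeric sketch (the values of C and ε below are invented purely for illustration, not taken from any actual L-function):

    import math

    # Illustrative only: pretend a Lindelof-type bound |L(s)| <= C * (1 + t)**eps
    # holds on the critical line, with t = |Im(s)|.  C and eps are made up.
    C = 1000.0
    eps = 0.01

    def digits(x: float) -> float:
        # Rough "digit count" of a positive quantity, i.e. its log base 10,
        # clamped at zero for quantities smaller than one.
        return max(math.log10(x), 0.0)

    for k in (3, 6, 12, 24, 48):
        t = 10.0 ** k                         # an input with k digits
        bound = C * (1.0 + t) ** eps          # right-hand side of the bound
        ratio = digits(bound) / digits(t)     # output digits / input digits
        print(f"t = 1e{k:>2}:  digits(bound) = {digits(bound):5.2f},  ratio = {ratio:.4f}")

    # The fixed log10(C) term washes out as t grows, so the ratio tends to eps:
    # for any eps > 0 fixed in advance, the output eventually has at most about
    # eps times as many digits as the input.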


There may be some linguistic benefits to the "epsilon" formulation when discussing subconvexity rather than Lindelof. For instance, I think "the output is eventually bounded by 24% of the input" sounds more natural than "the limsup of output over input is at most 24%".



