lolcatuser's comments | Hacker News

Definitely. This, plus a graduate degree in a more specific field, and you end up with a very well-rounded education.


FYI, I just read some more about polymathy and had a look at the curricula. To me it feels a bit too 'meta'. You might as well say: stay open and curious about any kind of knowledge.

That said, I'm still open and curious... could you please be so kind as to elaborate on how that education is very well-rounded? What would the studies yield, more concretely?


The reason division by zero can be a number is that floating-point numbers aren't individual numbers but ranges of numbers. So 0.0 isn't really the integer 0, it is the range of numbers between 0 and the smallest representable non-zero number. So division by 0.0 may be division by zero, or it may be division by a very small number, in which case it would approach infinity.

The same goes for the floating-point infinity: it doesn't represent the concept of infinity, it is a placeholder for every number between the largest representable number and infinity. That's how dividing a very large number by a very small number can result in infinity, a "number" which isn't really a number at all.

This is the philosophy by which IEEE floating-point numbers were designed, and it's the explanation for why negative zeroes and infinities make sense in floating point.

The way I find it easiest to reason about is to take the graph of a function with an asymptote and round it to the nearest representable values. You still need a way to say "there is an asymptote here!" even though you can't represent every value, and so you introduce infinities and negative zeroes to preserve as much information as possible.
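
For concreteness, here's a minimal C sketch (assuming IEEE 754 semantics, which virtually every modern compiler provides) of the special values described above:

    #include <stdio.h>

    int main(void) {
        volatile double zero = 0.0;     /* volatile so the compiler doesn't fold the divisions away */
        double pos_inf = 1.0 / zero;    /* +inf: "divided by something vanishingly small" */
        double neg_inf = 1.0 / -zero;   /* -inf: the sign of zero carries through */
        double not_num = zero / zero;   /* NaN: no single limiting value exists */
        printf("%f %f %f\n", pos_inf, neg_inf, not_num);  /* typically prints: inf -inf nan */
        return 0;
    }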


This isn't true. Floats don't have interval semantics; they have the semantics of exact points, with math that rounds the exact result (otherwise the result of subtracting two floats would sometimes span more than one float).
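
A quick way to see the exact-point-plus-rounding model in action (the output shown is what typical IEEE 754 doubles give):

    #include <stdio.h>

    int main(void) {
        /* The literal 0.1 is stored as one exact point, not a range: */
        printf("%.20f\n", 0.1);             /* 0.10000000000000000555... */

        /* Each operation computes the exact result, then rounds it to the
           nearest representable point, so these two roundings differ: */
        printf("%d\n", 0.1 + 0.2 == 0.3);   /* prints 0 */
        return 0;
    }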


Moreover, the spec defines directed rounding modes (round toward zero, toward +∞, and toward −∞, in addition to the default round-to-nearest) that are designed exactly for doing interval arithmetic.
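
A minimal sketch of that in C using the standard <fenv.h> interface; note that whether the compiler actually honors runtime rounding-mode changes can depend on flags (e.g. GCC's -frounding-math):

    #include <fenv.h>
    #include <stdio.h>

    #pragma STDC FENV_ACCESS ON            /* we change the rounding mode at run time */

    int main(void) {
        volatile double a = 1.0, b = 3.0;  /* volatile keeps the divisions at run time */

        fesetround(FE_DOWNWARD);           /* round toward -infinity */
        double lo = a / b;                 /* lower bound of the interval containing 1/3 */

        fesetround(FE_UPWARD);             /* round toward +infinity */
        double hi = a / b;                 /* upper bound of the interval containing 1/3 */

        fesetround(FE_TONEAREST);          /* restore the default */
        printf("1/3 lies in [%.17g, %.17g]\n", lo, hi);
        return 0;
    }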


I don't know how they work internally, but I can say that power hammers used for blacksmithing sound similar to the woodpecker in the video, only much slower. A couple hard, fast hits with absurd force before slowing down and stopping.

I'm guessing the woodpecker behaves that way because it's putting momentum into the hitting, even if that doesn't totally make sense in my head. When hammering on something, it's easiest to let gravity do most of the work and focus your effort on aiming and raising the hammer, so you naturally have 1-2 hits at the end that are solely momentum-based.

The woodpecker is pecking horizontally, though, not in line with gravity, so my thought process isn't a perfect analogue. But if their tongue works like a spring, then I can imagine it making sense.


A hypothetical god would not destroy the world, but absolutely could.

I think that's the idea: they're not reveling in the fact that they can do anything anybody could reasonably want to do; they're reveling in the fact that they can do everything else, too.


It's better to say that, in C, characters are bytes. That's why coming at this from the modern perspective of "characters are the things on my screen" will always confuse you: C doesn't have a separate notion of bytes, only characters. C programmers (should) understand this the same way Lisp programmers understand that "CAR" and "CDR" mean "first" and "rest", or the way Forth programmers understand that "the stack" is the data stack and has no relation to the call stack.


So a "char" is a byte, but a "char" is perhaps not a "character", except as a funny coincidence of jargon.


C isn't Java. Even Niklaus Wirth, in Pascal, Oberon, and the like, avoided making identifiers too long. 'GetStrSz()' is enough to achieve (most of) what you want, assuming certain naming conventions:

- It makes it clear that this returns the number of bytes, assuming a naming convention where `sz` refers to size (in bytes) and `ln` refers to length (in some other unit, which would be specified in the type). Note that in C, 'characters' refers to bytes. It's a flaw in how C names its types, yes, but I wouldn't say it should be any different just because other languages do things differently.

- It doesn't use full words because I don't think it needs to. Abbreviations are OK as long as every (invested) party agrees that they're sane, and I think they're pretty sane.

- It makes it explicit that it is performing a calculation (hence, is O(n)) via 'get'.

I don't think all this is necessary, though - I actually think 'strln()' is enough. First, because 'characters' means bytes, I can assume that this function is getting the number of characters (bytes) in a string. I wouldn't expect it to give me anything else! Second, in C, if strings were a struct of some sort, I'd expect to be able to get their length via 'str->ln', which would be O(1). The fact that the length is found through a function in the first place signals to me that it's doing something behind the scenes to figure it out. Remember, that's just my opinion, which I admit is extreme, but I think yours is just as extreme.
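
To make the O(1)-vs-O(n) distinction concrete, a small hypothetical sketch (neither the struct nor the function below is part of the C standard library):

    #include <stddef.h>

    /* Hypothetical counted-string struct: the length is just 'str->ln', O(1). */
    struct str {
        char  *data;
        size_t ln;
    };

    /* For a plain zero-terminated char*, the size must be computed by scanning: O(n). */
    size_t GetStrSz(const char *s) {
        size_t n = 0;
        while (s[n] != '\0')
            n++;
        return n;
    }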


Naming is extremely important, and while strlen is a very basic and hardly ambiguous example, consistency is key and I believe that good naming rules should be applied globally or at least at the framework level.

I think that full words and verbs are easier to read and avoid ambiguity.

I guess this is a matter of style and preference.

This anecdote reminds me of the Mutazt type, something I found in a new codebase I was asked to debug. I had to dig for almost an hour to find out exactly what this type was.

Turns out it was a char*, a C string. Buried under 4-5 levels of abstraction.

Mutazt = Mutable ASCII Zero Terminated.


Maybe my least favorite "feature" of C. I can manage most aspects of zero-terminated strings well enough, but when I have to specify the length of them, is it an 'int', 'size_t', 'ssize_t', or something else? (Answer: All of the above!)


It's entirely possible to write a wrapper function with a short name to convert string literals to actual string objects.

    my_function(my_var, 3.6, $("bzarflo"), my_other_var, false);
That isn't much more of a mouthful, and as long as 'my_function' knows to free it, you're A-OK! The only trouble is that '$()' isn't legal in standard C, so a real solution would have to be something like 'str()'.
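
A rough sketch of what such a wrapper might look like; the 'string' type and 'str()' function here are hypothetical, not any real library's API:

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical string-object type. */
    typedef struct {
        char  *data;
        size_t len;
    } string;

    /* Wrap a literal (or any char*) in a heap-allocated string object.
       The callee is expected to free it, as described above. */
    static string *str(const char *literal) {
        string *s = malloc(sizeof *s);
        if (!s) return NULL;
        s->len = strlen(literal);
        s->data = malloc(s->len + 1);
        if (!s->data) { free(s); return NULL; }
        memcpy(s->data, literal, s->len + 1);
        return s;
    }

    /* Usage: my_function(my_var, 3.6, str("bzarflo"), my_other_var, false); */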


You absolutely can justify infinity - even if your real system can't represent numbers larger than X, you can't guarantee that won't change in the future.

For example, in C, integers have minimum sizes, but no maximum. We're lucky nobody ever went, "well, our PDP-11 can only go up to 2^16, so let's set the maximum there."

That's not to say it doesn't make sense to limit things after a certain point, but you'd damn well better be able to justify it, especially if you're using a recursive data structure that could absolutely grow to infinity if needed.
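
For instance, <limits.h> only promises lower bounds on the limits; the actual ceilings are up to the implementation (the values shown are what a typical 64-bit Linux target gives):

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        /* The standard only guarantees INT_MAX >= 32767 and LONG_MAX >= 2147483647. */
        printf("INT_MAX  = %d\n", INT_MAX);    /* 2147483647 here */
        printf("LONG_MAX = %ld\n", LONG_MAX);  /* 9223372036854775807 here */
        return 0;
    }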


I can guarantee it’ll never be able to support infinity records. I can also guarantee that most data structures can’t even support a googol of records and never will. Some won’t ever support a quintillion records due to complexity, and quantum won’t fix all of those. There may be structures that support this, but they aren’t the ones we use today.

And the reason is that complexity theory is on shaky ground these days and needs a revision. Forty years and six orders of magnitude ago we treated a bunch of operations as constant, which is a clever fiction that is blatantly obvious now, but which some of you knuckleheads continue to ignore. Most C terms are in fact stair-stepped log(n). Every time log(n) doubles and crosses a threshold, the cost of the operation doubles, and is therefore log(n), not C. When you're talking billions of records that distinction is already fairly obvious, and people dealing with trillions have to architect for it.

Memory access, as it turns out, is sqrt(n), not C, and with magic hardware might some day reach the cube root of n, and don't count on it getting any better until we can bend space. With that sort of memory access, a lot of O(1) algorithms perform worse than log(n) operations.

As you go into 128-bit addressing, the difference between n log(n) and n^(3/2) * log(n)² becomes impractical. There's a wall there, and we won't get past it without completely different algorithms or desktop FTL technology.

The next order of magnitude of n will require a complete rewrite of that section of the code, so in practical terms the n only goes up in a new program, not the old one, and therefore the old program cannot and never will handle a thousand times more data than it was designed for.


You're getting deep into the realm of data sets that don't fit into a computer. I don't think you're talking about solving the same problems.

> Forty years and six orders of magnitude ago we treated a bunch of operations as constant, which is a clever fiction that is blatantly obvious now, but which some of you knuckleheads continue to ignore. Most C terms are in fact stair-stepped log(n). Every time log(n) doubles and crosses a threshold, the cost of the operation doubles, and is therefore log(n), not C.

Memory latency hasn't really budged in 25 years, and it's significantly lower than it was 40 years ago.

Pointers have gotten bigger, but they're not that big compared to data and they're only one notch away from the maximum you could ever use in a single computer.

Where are we seeing these slowdowns?

> I can also guarantee that most data structures can’t even support a googol of records and never will. Some won’t ever support a quintillion records due to complexity

A B-tree will do fine.

> Memory access, as it turns out, is sqrt(n)

Not in real single computers.

And if our baseline is 1983 latency, it would take an unrealistic amount of circuitry and physical size for speed-of-light delays to drag modern tech down to equal it, let alone be worse.

I agree that actual infinity is impractical, but “zero, one, and out of memory” gives the wrong idea, because it implies you might limit things to the amount of memory currently available or available soon. "Infinity" is better at getting across the idea that your code is not allowed to bake in any limits.


For giggles I did the math.

n^(3/2) * log(n)²

The time required for 1000n is 3 million times more than for n. If anyone makes a computer that is 6 orders of magnitude faster than current computers, none of us here will be alive to see it.


I want to say, INT_MAX, but I know what you mean. It has been useful that the value varies by implementation (and even by compiler flag, on occasion).


I think you both need to watch some YouTube math videos on what exactly infinity means. As Everyday Maths put it, it’s not “count until you can’t count anymore and then add one to the number”. That’s the sort of wrong frame of thinking that makes you think that an infinite pile of hundred dollar bills is worth more than an infinite pile of one dollar bills. They are both worth $infinity.

We don't have any algorithms that can deal with infinity, so stop pretending like we do. And we couldn't build one unless the universe is infinite and expansion turns out to be incorrect. So no infinity in computer data structures.


You're being willfully obtuse here.

When somebody says "this algorithm is unbounded" they of course don't mean that they've changed the laws of the universe to allow for an infinite array of memory that it can work with. They just mean that if you could do that, it'd work.

As an example: `while (node) node = node->next;` (in C, where `node`'s type defines a reference to `next` of the same type) will traverse an unbounded list. It will work just as well for zero nodes, one node, a dozen nodes, a billion nodes, and so on, as the number of nodes approaches infinity. Obviously it is impossible for you to ever create a computer with an unbounded amount of memory, but computer scientists can talk about algorithms the same way mathematicians can talk about what `f(x)=x^2` does at infinity.

No mathematician, with the knowledge we have now, would ever say, "It's unreasonable for us to talk about infinity. That's simply not possible, ever, now or in the future, so let's limit things at 999,999,999,999,999,999,999,999,999." We settled this debate back in the 1600s.
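
For reference, a self-contained version of the snippet above, with the node type spelled out (the field names are just illustrative):

    #include <stddef.h>

    struct node {
        struct node *next;   /* recursive type: no bound on list length is baked in */
        int          value;
    };

    /* Visits however many nodes exist: zero, one, a billion, ... */
    size_t count_nodes(const struct node *node) {
        size_t n = 0;
        while (node) {
            n++;
            node = node->next;
        }
        return n;
    }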


Ounces can measure both volume and weight, depending on the context.

In this case, there's not enough context to tell, so the comment is total BS.

If they meant ounces (volume), then an ounce of gold would weigh more than an ounce of feathers, because gold is denser. If they meant ounces (weight), then an ounce of gold and an ounce of feathers weigh the same.


> Ounces can measure both volume and weight, depending on the context.

That's not really accurate and the rest of the comment shows it's meaningfully impacting your understanding of the problem. It's not that an ounce is one measure that covers volume and weight, it's that there are different measurements that have "ounce" in their name.

Avoirdupois ounce (oz) - A unit of mass in the Imperial and US customary systems, equal to 1/16 of a pound or approximately 28.3495 grams.

Troy ounce (oz t or ozt) - A unit of mass used for precious metals like gold and silver, equal to 1/12 of a troy pound or approximately 31.1035 grams.

Apothecaries' ounce (℥) - A unit of mass historically used in pharmacies, equal to 1/12 of an apothecaries' pound or approximately 31.1035 grams. It is the same as the troy ounce but used in a different context.

Fluid ounce (fl oz) - A unit of volume in the Imperial and US customary systems, used for measuring liquids. There are slight differences between the two systems:

a. Imperial fluid ounce - 1/20 of an Imperial pint or approximately 28.4131 milliliters.

b. US fluid ounce - 1/16 of a US pint or approximately 29.5735 milliliters.

An ounce of gold is heavier than an ounce of iridium, even though it's not as dense. This question isn't silly; it's actually a real problem. For example, you could be shipping some silver and think you can just sum the ounces and make sure you're under the weight limit. But the weight limit and the silver are measured in different ounces.
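
To make the shipping example concrete, a quick back-of-the-envelope conversion in C, using the gram values from the list above (the quantities are made up):

    #include <stdio.h>

    #define GRAMS_PER_TROY_OZ 31.1035   /* how the silver is measured */
    #define GRAMS_PER_AVDP_OZ 28.3495   /* how the shipping weight limit is measured */

    int main(void) {
        double silver_ozt = 100.0;                        /* 100 troy ounces of silver */
        double grams      = silver_ozt * GRAMS_PER_TROY_OZ;
        double scale_oz   = grams / GRAMS_PER_AVDP_OZ;    /* what the carrier's scale reads */
        printf("%.1f ozt of silver = %.1f oz on the shipping scale\n", silver_ozt, scale_oz);
        /* prints: 100.0 ozt of silver = 109.7 oz on the shipping scale */
        return 0;
    }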


No, they're relying on the implied use of Troy ounces for precious metals.

Using fluid oz for gold without saying so would be bonkers. Using Troy oz for gold without saying so is standard practice.

Edit: Doing this with a liquid vs. a solid would be a fun trick though.

