Well, wouldn't the program itself be an input on which a human is unable to determine the result (i.e., whether the program halts)? I'm curious about your thoughts here; maybe there's something I'm missing.
The function we are trying to compute is undecidable. Sure, we as humans understand that there's a dichotomy here: if the program halts, it won't halt; if it doesn't halt, it will halt. But the function we are asked to compute must have exactly one output for a given input. So a human, when given this program as input, is also unable to assign an output.
So humans can't solve the halting problem either; we are just able to recognize that the problem is undecidable.
With this example, a human can examine the implementation of the doesHalt function to determine what it will return for the input, and thus whether the program will halt.
Note: whatever algorithm is implemented in the doesHalt function will contain a bug for at least some inputs, since it's trying to generalize something that is non-algorithmic.
In principle no algorithm can be created to determine if an arbitrary program will halt, since whatever that algorithm is, it could be implemented in a function which a program calls (with itself as the input) before doing the opposite thing.
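To make the construction concrete, here's a minimal sketch in Python (the names `does_halt` and `contrarian` are mine, and the body of `does_halt` is left unimplemented, since the whole point is that no correct implementation can exist):

```
def does_halt(program, program_input):
    """Claimed halting checker: True iff program(program_input) halts."""
    ...  # any candidate algorithm would go here

def contrarian(program):
    if does_halt(program, program):
        while True:   # the checker said "halts", so loop forever
            pass
    else:
        return        # the checker said "loops forever", so halt immediately

# Feeding contrarian to itself forces the contradiction:
# contrarian(contrarian) halts  <=>  does_halt(contrarian, contrarian) returned False
contrarian(contrarian)
```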
it's a _necessary_ but _not sufficient_ condition for a recession. so it does provide you information. if the yield curve does not invert, it is unlikely there will be a recession.
lepton flavor does not need to be conserved. it is an approximate symmetry of nature. if lepton flavor were conserved, neutrinos could not oscillate.
quarks, the particles composing protons and neutrons, have fractional charge; these would be the particles that would interact with an electron or positron. the charges wouldn't work out (charge is conserved, as far as we know...) so there wouldn't be a fundamental electromagnetic interaction between a single quark and an e+/e- (i.e., an annihilation). But there are fundamental weak interactions between quarks and e+/e-; these processes are known as inverse beta decay and are used for PET scans.
To quote Wikipedia: “Lepton flavor is only approximately conserved, and is notably not conserved in neutrino oscillation.[6] However, total lepton number is still conserved in the Standard Model.“
The beta decay gives rise to e.g. a positron and a neutrino (or an electron and an anti-neutrino).
What is going on in neutron stars? The layman's answer is that the electrons get squeezed into the protons due to the extreme gravity, leaving only neutrons. But I suppose (?) there needs to be an anti-neutrino or similar that comes along to "complete" the reaction?
mesons are composed of a quark and an antiquark and can be electrically neutral or charged. baryons are composed of three quarks and can likewise be neutral or charged. both mesons and baryons are classified as hadrons.
charge, baryon number, isospin and strangeness are all related by the Gell-Mann-Nishijima formula[1].
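for reference, the formula (writing Q for electric charge, I_3 for the third component of isospin, B for baryon number and S for strangeness) is

```
Q = I_3 + (B + S) / 2
```

e.g. the proton (I_3 = +1/2, B = 1, S = 0) gets Q = +1, and the K+ meson (I_3 = +1/2, B = 0, S = +1) also gets Q = +1.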
the overarching principle underlying this is that in quantum chromodynamics, the theory of the strong interaction, quarks transform in the fundamental representation of an underlying SU(3) gauge group, carrying color charge (a quantum number), with electric charge quantized at -1/3 e and +2/3 e. QCD is a gauge theory (like all our other fundamental theories of nature), and its gauge boson, the gluon, pops out of demanding the theory be invariant under local SU(3) gauge transformations.
that said, no one has any clue why all of our physical theories are gauge theories :-)
Would a theory of the history of the universe be a non-gauge theory since there was fast initial inflation, a relative slowdown, and now accelerating expansion again? Nothing seems conserved here. It seems symmetries were broken as the forces split apart, anti-particles were annihilated, etc
I'm not sure what courses they had in mind, but Victor Shoup (one of the authors of the OP) has a book on number theory and algebra that goes over probability. That would probably be most useful if your goal is to study the applied crypto book.
The (self-reported) prerequisites are minimal: just calculus and mathematical maturity should be sufficient. I would check it out (it's free) and see if it's at an appropriate level.
Unfortunately I've yet to come across an introductory text or course on probability that is actually good :-(
a huge part of it is trying to solve / formulate easier versions of the problem, or problems that are similar or related to the original problem. Or making some stronger assumptions to get rid of the clutter / all of the moving variables and distill it down to the smallest form a human brain can handle! For many of the "huge" unsolved problems, there tends to be a program of "dominos" or "ledges" you hope to work on in some order that will make the original problem fall.
that way you don't just meander idly from day to day, but instead gain some intuition for the central problem (and of course have publishable work to appease the grant gods / the university).
trying to code up some of the work to experiment is also useful, but that can be a research problem of its own :-)
but then the heap would grow down in such a diagram...
btw my understanding is that the heap came first in the logical development of C and other systems programming languages. so it made sense to have .text and other program data at the lowest virtual memory addresses and then have the heap grow towards higher memory addresses.
then when the stack became a thing it had to grow down...
The stack might have actually come before the heap in history; it predates any systems programming language that was higher level than assembly. Stacks were seen in the Z4, and Turing wrote about them in the 40s.
a pattern that i converged on --- at least in postgres --- is to aggregate your data into json objects and then go from there. you don't need to know how many attributes (columns) should be in the result of your pivot. you can also do this in reverse (pivot from wide to long) with the same technique.
so for example if you have the schema `(obj_id, key, value)` in a long-formatted table (say `long_table`), where an `obj_id` will have data spanning multiple rows, then you can issue a query like
```
SELECT obj_id, jsonb_object_agg(key, value) FROM long_table GROUP BY obj_id;
```
it's been a while since i've had to do a task requiring this, so details are fuzzy, but the pattern's there.
so each row in your query result would look like a json document: `(obj_id, {"key1": "value", "key2": "value", ...})`
XOR is used a ton in the theoretical underpinnings of cryptography. It's used in the one time pad, which is essentially the "smallest" cryptographic scheme that is perfectly secure (perfect secrecy has a mathematical definition in this context; it's not saying there can never be any attacks).
In general the reason why is that if you have two random variables x and y, where x has any distribution (so for example x could even be "attack normandy on june 6" with certainty) and y is independent of x and uniformly distributed across all n-bit strings (so it could be any string of n zeros and ones with equal probability), then you can show that x ^ y is also uniformly distributed across all n-bit strings.
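The calculation behind that claim is short (using the independence of x and y): for any fixed n-bit string s,

```
Pr[x ^ y = s] = Σ_v Pr[x = v] · Pr[y = v ^ s]   (y must equal v ^ s when x = v)
              = Σ_v Pr[x = v] · 2^(-n)          (y is uniform)
              = 2^(-n)
```

so the distribution of x ^ y is uniform no matter what x is.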
Because of this property it's used frequently in many higher order methods as well.
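Here's what that looks like as a toy one time pad in Python (the helper name `xor_bytes` is mine; a real one time pad additionally requires that the key be truly random, as long as the message, and never reused):

```
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    assert len(a) == len(b)
    return bytes(x ^ y for x, y in zip(a, b))

message = b"attack normandy on june 6"
key = os.urandom(len(message))          # one uniformly random key byte per message byte

ciphertext = xor_bytes(message, key)    # without the key this looks uniformly random
recovered = xor_bytes(ciphertext, key)  # XOR with the same key inverts the encryption
assert recovered == message
```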
i second tristan needham's visual complex analysis.
some other good ones:
* the one we used in my undergrad course was fisher's complex variables which is great if you're learning for the purposes of applications. it's a cheap dover book.
* rudin's real and complex analysis (if theory is your thing. note that rudin's books, while great, do require a good background in math).
* as the article mentions, eli stein has a series of books on the four main branches of analysis. i believe the second book is on complex analysis.