The back button breaking seems to happen when you're not logged in, since loading the dashboard redirects to the homepage. It might be better to show a modal or something prompting the user to log in, or at least to use window.location.replace so it doesn't kill the back button.
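Rough sketch of what I mean (the function and route are made up; the point is just that replace() swaps the current history entry instead of pushing a new one):

```typescript
// Hypothetical guard on the dashboard page.
function redirectIfLoggedOut(isLoggedIn: boolean): void {
  if (!isLoggedIn) {
    // Setting window.location.href = "/" pushes a new history entry, so Back
    // returns to the dashboard, which redirects again - the back-button trap.
    // replace() swaps the current entry instead, so Back goes where you came from.
    window.location.replace("/");
  }
}
```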
The demo page looks to be what I was missing. Are these placed in a specific location (e.g. the bottom right corner of the window), or is that up to the site owner?
Where the snippets go is entirely up to the web page owner; they decide where to paste the code. If they have script limitations, they can also use a single button to collect donations without any messages.
There is no green light at the start - it's the lights going out that they react to. There's also no minimum time; you can get moving after 1 ms and it's legal. In fact, you can move before the lights go out - there's a tolerance before you're classed as moving.
What happened to a memory leak being memory that was allocated but has no reference to it, so it can't be freed? If you can copy the map, release the original, and watch memory usage drop, then there is no leak?
This is also why valgrind classifies the leaks it reports with stuff like "still reachable" or "possibly still in use" (I might be remembering the exact phrasing incorrectly). It would be pretty hard to programmatically determine whether the memory that's still kept around was intended to be kept around or not, which is why valgrind supports generating "suppressions" (and specifying them in subsequent runs to be ignored).
This is the use case for weak maps, which both Java and JavaScript have. In the latter case, the map is not iterable, so one cannot observe JavaScript GC (through WeakMap at least).
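Rough sketch of the pattern in TypeScript (names made up):

```typescript
// A WeakMap associates data with objects without keeping those objects alive.
const derived = new WeakMap<object, string>();

let user: { id: number } | null = { id: 42 };
derived.set(user, "expensive computed value");

console.log(derived.get(user)); // "expensive computed value"

user = null; // once unreachable elsewhere, both key and value may be collected

// There is no derived.size, derived.keys(), or for..of over a WeakMap - you
// can't ask "which entries are still here?", which is what keeps the GC
// unobservable through it.
```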
A memory leak has always meant "this program keeps allocating more memory as it runs, even though it's not being asked to store anything new". That is equivalent to saying that a program has a memory leak when it fails to free memory that is no longer needed, not just memory that is no longer reachable.
For example, a common memory leak is adding items to a "cache" without any mechanism that ever evicts them. The "cache" is thus not a cache, but a memory leak (a common implementation of this leaking scenario is that items are put in a map, but never removed from the map).
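A minimal sketch of that pattern, with made-up names:

```typescript
// The "cache" that is really a leak: every value stays reachable from `cache`
// forever, so the GC can never reclaim it, even though most entries will
// never be looked up again.
const cache = new Map<string, Uint8Array>();

function lookup(key: string): Uint8Array {
  let value = cache.get(key);
  if (value === undefined) {
    value = new Uint8Array(1024 * 1024); // stand-in for an expensive result
    cache.set(key, value); // items go in...
  }
  return value; // ...but nothing ever takes them out, so memory only grows
}
```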
A memory leak has never, as far as I know, referred to the specific case of memory that is no longer accessible from program code and so can never be freed. In fact, that definition doesn't even make sense from a runtime-system perspective, since even in C the memory is actually always still reachable - from malloc()'s internal data structures.
Those pretty much can't happen in garbage collected languages, so the usage of the term has been widened to include things like this. I agree it's a shame.
Memory leaks have always meant "failing to free memory that is no longer needed".
Garbage collection literature often stresses the difference between "no longer needed" and "not reachable", noting that the former is not automatically enforceable (it amounts to solving the halting problem), so the latter is used only as a heuristic for it. The fact that garbage collectors can't prevent all memory leaks is therefore stressed throughout the literature.
I read about this in the Garbage Collection Handbook [1], which is an excellent overview of the entire field (at least up to ~2016) and discusses the distinction at length. I don't have it on me to quote, but a very clear distinction is made between "live objects" and "reachable objects", with reachability acting as a computable proxy for the uncomputable property of liveness. Liveness is defined as "this object will be used again by the program in some way", and a memory leak is defined as "failing to free an object that is no longer live". An unreachable object can't be live, but there are many ways to have a reachable object that is not live.
To prove that this is used in the literature at large, here is the abstract of a random GC paper I found [0]:
> Functional languages manage heap data through garbage collection. Since static analysis of heap data is difficult, garbage collectors conservatively approximate the liveness of heap objects by reachability i.e. every object that is reachable from the root set is considered live. Consequently, a large amount of memory that is reachable but not used further during execution is left uncollected by the collector.
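To make the reachable-vs-live distinction concrete, here's a contrived sketch (assuming a long-running script; names are made up):

```typescript
// After the console.log below, `report` is never used again - it is no
// longer "live" - but it remains reachable from module scope, so a
// reachability-based collector has to keep all of it in memory.
const report: number[] = new Array(10_000_000).fill(0);

console.log(report.length); // the last use of `report` in the whole program

setInterval(() => {
  // runs for hours and never touches `report`, yet `report` can't be freed
}, 1000);
```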
Although, it's pretty useless as an exploit, since it requires you to be able to run arbitrary Go code to begin with (the author admits as much). It's _very_ unlikely that a remote attacker could exploit a data race in a regular Go program.
Every GC language is by definition memory safe; memory safety in programming does not mean that accessing the same resource from two threads is safe.
I don't know how it works in other languages, but accessing a partially overwritten slice in Go (as will happen in the presence of data races) can cause your code to access out-of-bounds memory. And as we all know, once you have read/write access to arbitrary areas in memory, you've basically opened up Pandora's box.
I don't think you can have data races (but certainly you can have race conditions) in python because of the GIL. I imagine Ruby is similar. Otherwise, no, the other languages you listed are not "memory safe". Once you start reading and writing to arbitrary locations in a process, almost anything can happen. But certainly you can say that there are different degrees of memory safety. All of the languages you mentioned are leaps and bounds above C/C++.
The same goes for Rust and most other "safe" languages. They all have synchronization primitives that make it safe, but you need to use them - the compiler won't always tell you.
For Rust specifically, the compiler does force safe programs to have no data races. That's actually what the ownership system, Send and Sync are about. If you manage to corrupt memory or have undefined behavior in safe Rust, that should be a compiler or library bug.
That is basically the entire shtick of Rust: data is "owned", and only the owner can write. You can "borrow" something for read access, but while it's borrowed it can't be written to.
There are of course workarounds for this like reference counted wrappers and so on.
> Trying to boot Linux from a USB stick failed out of the box for no obvious reason, but after further examination the cause became clear - the firmware defaults to not trusting bootloaders or drivers signed with the Microsoft 3rd Party UEFI CA key.
Isn't that precisely what the GP linked to? If it's just a matter of changing a BIOS switch then this is a non-issue imo.
It may be a non-issue for some of the developer folks on here.
However, navigating scary BIOS menus, in a foreign language, full of technical jargon even I can't always understand (and I do assemble my desktops from parts), is another matter. Even keeping Ubuntu (or Fedora or whatever) running on a new laptop would probably not happen for many non-technical folks if they ran into this sort of 1990s-level dark-arts boot-time shenanigans.
At least that's my experience with relatives and friends who need computers for non-technical work or entertainment. Not all, but most of them have found Linux an overall nicer user experience than Windows, so this would be a net negative development. Defaults matter, so this is definitely an issue in my opinion!
The PDF linked to further up this thread describes the process of trusting the 3rd party Microsoft CA that the original article mentions has been distrusted.
Am I missing something, or are there no pages that actually show what they look like?