
> Somehow we've built a culture where its socially acceptable, and even rewarded, to behave online in ways that would never be acceptable in other social settings.

I don't think we've "built" the culture per se. I think it's more like humans evolved over millions of years for small, in-person groups, and we're not "built" (by nature) to handle the endless sea of online strangers. On the internet, the personal familiarity is gone, the proximity is gone, the facial expressions and tone are gone, and indeed the fear of repercussions is mostly gone.



You could use a free text-to-speech application, e.g. https://www.naturalreaders.com/online/?s=V3715efd32dc2c47409...

I believe that if the runners' speeds are all real numbers which are linearly independent over the rationals, then the associated dynamical system will be ergodic

https://en.wikipedia.org/wiki/Ergodicity

and what you say is true.

But in addition to your example, you also have to worry about cases like sqrt(2), sqrt(3), and sqrt(2) + sqrt(3).
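
To make that concrete (a small worked example of my own, not from the original comment): each of these three speeds is irrational, yet they fail the linear-independence hypothesis.

```latex
% Each of \sqrt{2}, \sqrt{3}, and \sqrt{2}+\sqrt{3} is irrational,
% yet a rational (indeed integer) combination of them vanishes:
1 \cdot \sqrt{2} \;+\; 1 \cdot \sqrt{3} \;+\; (-1)\cdot\bigl(\sqrt{2}+\sqrt{3}\bigr) \;=\; 0
% So the set \{\sqrt{2}, \sqrt{3}, \sqrt{2}+\sqrt{3}\} is linearly
% dependent over \mathbb{Q}, and ergodicity of the joint system
% is not guaranteed by the independence criterion above.
```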


To me, the most compelling telling of the Grothendieck story is this:

https://www.psychologytoday.com/us/articles/201707/the-mad-g...

A different telling of another profoundly gifted character is this:

http://www.inquiriesjournal.com/articles/1638/divinity-in-th...

The state of mind where new ideas are born is a liminal state that integrates disciplines effortlessly. Think Herb Simon, Marshall McLuhan or Arthur Koestler.


"Concepts are bundles of useful correlations."

Is "concept" a bundle of useful correlations?

Is "bundle" a bundle of useful correlations?

Is "useful" a bundle of useful correlations?

Is "correlation" a bundle of useful correlations?

And correlations of what? Very strong emphasis on what. Quid est?

From the get-go, like all skepticism, this approach begins by sitting on a branch that more or less acknowledges reality and its comprehensibility, but quickly begins sawing off the very branch it is sitting on. Skepticism is always a form of selective denial, and as such, it is incoherent. It's like those people who argue passionately that there is no truth while in the lecture hall, but as soon as the clock strikes 4pm, they're worried about missing the bus and getting everybody in the room to the next lecture, or as Duhem would say, going home to kraut and pipe.

"Triangularity" is not a "bundle of correlations". Triangularity is an irreducible, but analyzable whole, a form that is instantiated in the world within particulars and that we encounter in particulars and abstract from those particulars to arrive at the concept. That's what a concept is: a form abstracted from concrete particulars that exists in the intellect as a universal.

Unfortunately, mechanistic metaphysics, taught and insinuated in our scientistic curricula from a young age, is a difficult intellectual habit to break for many. It is very difficult for people to uproot these obstinate presuppositions.


Personal favorite, visualizing used disk space in order to find large directories that can be deleted. I use QDirStat on Linux and WizTree on Windows.

Showing CPU performance of an application is a pretty neat use case for one type of treemap too, typically called flamegraphs, but in reality I think they're just upside-down treemaps :)


This is super cool. This exploit will be one of the canonical examples that just running something in a VM does not mean it's safe. We've always known about VM breakout, but this is a no-breakout massive exploit that is simple to execute and gives big payoffs.

Remember: just because this one bug gets fixed in microcode doesn't mean there's not another one of these waiting to be discovered. Many (most?) 0-days are known about by black-hats-for-hire well before they're made public.

CPU vulnerabilities found in the past few years:

  https://en.wikipedia.org/wiki/Meltdown_(security_vulnerability)
  https://en.wikipedia.org/wiki/Spectre_(security_vulnerability)
  https://aepicleak.com/
  https://en.wikipedia.org/wiki/Software_Guard_Extensions#SGAxe
  https://en.wikipedia.org/wiki/Software_Guard_Extensions#LVI
  https://en.wikipedia.org/wiki/Software_Guard_Extensions#Plundervolt
  https://en.wikipedia.org/wiki/Software_Guard_Extensions#MicroScope_replay_attack
  https://en.wikipedia.org/wiki/Software_Guard_Extensions#Enclave_attack
  https://en.wikipedia.org/wiki/Software_Guard_Extensions#Prime+Probe_attack
  https://www.vusec.net/projects/crosstalk/
  https://en.wikipedia.org/wiki/Hertzbleed
  https://www.securityweek.com/amd-processors-expose-sensitive-data-new-squip-attack/

I would be cautious or even distrustful of using anything from Oracle. VirtualBox components come under three different licenses - GPLv2, personal use & evaluation license, and an enterprise license. Their VirtualBox license FAQ [1] gives them enough leeway to change future licenses at will. If an exploit is discovered in your old VirtualBox and they've changed the license, you're out of luck.

Be especially careful when installing their extension pack, as it is under an evaluation license.

We've moved our development to KVM and Virtual Machine Manager on Linux [3] and UTM on Mac [4]. There are other options to run your VM, such as Multipass [5] or VirtualBuddy [6].

On a digressive topic - it was fun migrating our legacy application server stack from Oracle Java (old & poorly considered decision) to OpenJDK, thanks to their license [2].

[1] https://www.virtualbox.org/wiki/Licensing_FAQ

[2] https://www.oracle.com/java/technologies/javase/jdk-faqs.htm...

[3] https://ubuntu.com/blog/kvm-hyphervisor

[4] https://mac.getutm.app/

[5] https://multipass.run/

[6] https://github.com/insidegui/VirtualBuddy


The eternal struggle. Information wants to be free, and then people use those freedoms to do the most screwed up things imaginable, and people like this pay the price.

It’s a damn shame how the original cyberpunk dream played out. We could’ve had a world where companies couldn’t do anything about people using their ideas. Instead we get one where you can’t even be anonymous without rubbing elbows with child predators.

It’s surprising how much anonymity and the subject at hand are correlated. In my 20s I liked to explore, as I’m sure many of you do too. I once met someone in the Whonix community who wanted to nix Google Maps entirely; he spent a lot of time downloading maps and trying to build a way to view them locally, which I think will prove prescient one day. It already is in many parts of the world: if you don’t have cell service, you can’t just pull up Google Maps. Nowadays Starlink solves that problem, but back then it wasn’t clear that we’d ever be able to have maps at our fingertips regardless of internet access. This was back in the era of that poor CNET reporter who got lost with his family in the mountains precisely because they had no maps, and ended up dying of exposure when he went to get help. Never leave your car.

I found all of this fascinating. What a project! Make all of google maps accessible right from your phone, with no internet. I briefly fell in love with that community.

Ultimately what drove me away was the literal flood of child porn that was always right next to anything to do with tor, whonix, or anonymity in general. I have a pretty high tolerance for “operating in gray areas,” like this guy. But one of the tragedies of the cyberpunk dream is that the entire scene has been coopted by cp. In some sense cp is the ultimate test of anonymity, since you’ll be thrown in prison pretty much instantly if caught. So perhaps it’s no surprise that it’s the most common and pervasive result of anonymity, but it sure is a shame.



I know about 5 git commands and have worked with gitflow and feature branch repos for years now.

I can't say I've ever had to think about anything or longed for a better system.

And yeah, I've also done the xkcd hack when things refuse to smash together.

Works for me.

As long as the Visual Studio Code git blame extension keeps telling me, line by line, what ticket/issue each change came from, I'm happy.


I found it useful to screenrecord all my computer activity. It is OCR searchable. This way if I vaguely remember some bits, I can reconstruct my sources. It also preserves original text, in case sources get edited.

I manage this with conditional includes in ~/.gitconfig:

  [includeIf "gitdir:~/work/client1/"]
    path = ~/work/client1/.gitconfig
  [includeIf "gitdir:~/work/client2/"]
    path = ~/work/client2/.gitconfig
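
Each included file can then set identity options that apply only inside that directory tree. A minimal sketch of what `~/work/client1/.gitconfig` might contain (the name and email are hypothetical placeholders):

```ini
# ~/work/client1/.gitconfig -- per-client overrides (hypothetical values)
[user]
    name = Jane Doe
    email = jane@client1.example
```

Git evaluates the `includeIf` conditions at runtime, so `git config user.email` reports the client-specific value only when run from inside `~/work/client1/`.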

Keep your data private and don't leak it to third parties. Use something like privateGPT (32k stars). Not your keys, not your data.

"Interact privately with your documents using the power of GPT, 100% privately, no data leaks"[0]

[0] https://github.com/imartinez/privateGPT


It's interesting how much it parallels the novel 1984. The members of the party lived under extreme surveillance and control for what?

My analysis is that the outer party traded off those things effectively for a sense of safety, and the inner party traded off even more for pride. In total the party was only about 20% of the populace.

At any point Winston could have left. The party had obviously created an oppressive wartime environment that made it scary to leave the safety of the party, but Winston had many opportunities to simply walk away.

But he was invested.

And I think that's the parallel I see in modern capitalist society. We're so invested that we cannot imagine walking away. Disposable gadgets, next-day shipping from the comfort of your couch, virtual social scores (either crowd sourced or provided by your ruling party). These things all feed into the sunk cost fallacy that starting over would be too hard.

And maybe it is. Maybe that's why I keep marching with the herd.


Response from Scott Aaronson:

https://scottaaronson.blog/?p=1799


A) Hell yeah

B) Prerequisites to machine learning math (so prerequisites to linear algebra, multivariate calculus, and "advanced" probability/statistics)

B.2) It would be best if it covered only the prerequisites to ML math and nothing else



AI/ML researcher + builder.

Mainly HN and ML subreddits, and a bunch of newsletters. My overarching goal is to completely stay away from Twitter (which I stopped going to 2 months ago). I instead rely on the above to bubble up interesting things.

Also as others said, find something to build and work on and don’t keep looking sideways (aka Twitter, random news), just keep going. It can be discouraging and depressing to see that someone else is doing something similar. Instead go deep into what you’re doing and only once in a while check around.

Follow your own “train of thought”.

If you think back on the great advances in science or computing, the deep work was done by people relentlessly pursuing their ideas, not people constantly bombarded by "news".

In the LLM era the barrier to build useful/interesting things has gotten very low, leading to a ton of distracting noise.


My favorite quote on the meaning of life, from Viktor Frankl's "Man's Search for Meaning":

> For the meaning of life differs from man to man, from day to day and from hour to hour. What matters, therefore, is not the meaning of life in general but rather the specific meaning of a person's life at a given moment. To put the question in general terms would be comparable to the question posed to a chess champion: "Tell me, Master, what is the best move in the world?" There simply is no such thing as the best or even a good move apart from a particular situation in a game and the particular personality of one's opponent. The same holds for human existence. One should not search for an abstract meaning of life. Everyone has his own specific vocation or mission in life to carry out a concrete assignment which demands fulfillment. Therein he cannot be replaced, nor can his life be repeated. Thus, everyone's task is as unique as is his specific opportunity to implement it.


I'm not sure about searches, but you can view the metrics for page views: https://en.wikipedia.org/wiki/Wikipedia:Pageview_statistics

Hard not to "accept" inflation when life's necessities are priced on markets experiencing inflation...

What does "consumer pushback" even mean? How does one "vote with your dollar" in a captured market? Does Bloomberg think "protests" are going to do anything except feed the culture wars by giving fodder to news media outlets' spin doctors?


Next time you eat a fast food item, think about how many people were involved or one step away from making that burger or sub.

Meat, bun, spices, ketchup, mustard, pickles, lettuce, cheese, bacon...

Every one of those has hundreds or thousands of people involved: researched, seeded, planted, grown, harvested, collected, processed, quality controlled, containerized, packaged, distributed, opened, prepared, put on your bun, served.

All those steps have at least one person involved, plus management, plus sales, plus quality, plus food scientist research, plus people's tastes research. Plus the manufacturing of the tractors, packaging systems, quality control equipment, processing equipment... hundreds more people.

We eat like kings of old with hundreds of people working to feed us one meal, and we don't think anything of it.


The loss of society's expectation that a claim be supported with an argument is dangerous.

Constructing an argument to defend this statement is left as an exercise for the reader.


Here comes the memory hole, folks. This copyright claim is just the ruse to break down the wall, with the real purpose being to lay claim to news and internet archiving so that inconvenient news and information can be more easily memory holed without archives existing.

Let me put it this way, everyone should start working on decentralized archiving tools and retention of information locally about topics they are particularly interested in. I don’t say that out of the blue.


Machine learning is good at statistical sampling, approximation, and categorization.

Let's think about hash functions. Not inverting a hash, just applying one.

- A hash function has a bit-perfect map between input and output, "close" is not enough.

- A good hash function is designed to produce a completely different answer if even one input bit is changed. It's not a smooth function, in terms of the input space and the natural topology of the output space. Let me call that bit-sensitivity.

- I guess you could frame a hash as a categorization, but the number of categories would have to be the size of the output space, which is insanely big for any hash worth your time. You'd have to recognize 2^|bits in output|. Good luck!

So ML seems to be a very bad match for implementing the (forward) hash itself.

What about the inverse?

- The bit-perfection requirement is extremely onerous. Either the output of the inverse has the correct hash or it does not.

- At least the space of inputs which match a given hash IS a distribution (there are many inputs which yield the same hash, so there are many valid outputs given an input for the inverse hash). Unfortunately, because the purpose of the hash is to be bit-sensitive, the distribution of hash inputs (inverse outputs) is horribly multimodal.

- You can't generate the output of the inverse in stages, because the bit-sensitivity makes it nigh impossible to say you're meaningfully getting closer to the correct answer.

Also seems to be a bad match for ML.

Finally, some broader implications. Good hashes are hoped to be one-way functions. https://en.wikipedia.org/wiki/One-way_function Inverting one succeeds with very little probability, by definition. If your approach worked it would show that there is SOME algorithm for inverting these functions and thus they are not one-way. That's OK, there's actually no proof that ANY function is one-way. But our current hash functions have survived years of intense scrutiny by human-intelligence experts. Learning to invert one would be a major, exciting development, upending an entire field. Not impossible, but a reason for skepticism.
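
The bit-sensitivity (avalanche) property described above is easy to see empirically. A minimal sketch using SHA-256 (my choice of hash, purely for illustration): flipping a single input bit flips roughly half of the 256 output bits.

```python
import hashlib

def bit_diff(a: bytes, b: bytes) -> int:
    """Count the number of differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

msg1 = b"hello world"
msg2 = b"iello world"  # 'h' (0x68) vs 'i' (0x69): a single-bit difference

d1 = hashlib.sha256(msg1).digest()
d2 = hashlib.sha256(msg2).digest()

print(bit_diff(msg1, msg2))  # 1 — the inputs differ in exactly one bit
print(bit_diff(d1, d2))      # ~128 of 256 output bits flip
```

This is exactly why gradient-style "getting closer" signals are useless here: a near-miss input produces an output that is statistically indistinguishable from a random guess.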


Yes I do this routinely, using a Quest Pro.

The main downside is that the resolution just isn't there yet on the Quest Pro to make what you are seeing comparable to a real monitor. If you want to see the same "amount" of, e.g., text, you will be looking at a bigger screen in space, which is what I do. I stare at the equivalent of about a 32" monitor. It is OK if you only need a single monitor, but the advertised benefit of having as many monitors as you want isn't as great as you would think, because they have to be so big that you have to rotate your head a lot.

You also need to be pretty good at touch typing. If you can't type without looking at your fingers, it becomes painful fast - even with passthrough showing you the keyboard or with a tracked keyboard (which shows up rendered in VR).

All of this will change with the next gen as they bump up significantly in resolution. I already use it routinely to get a "change of scene" and to focus more. It's quite fun to go into the public rooms and co-work with half a dozen other people doing the same thing from all over the world. So once the resolution goes up another notch I think we'll hit the point where it's unambiguously better for a lot of people than a real monitor and this will start to take off. For now, you have to have a reason or be an enthusiast to want to do it.


I had an idea a few years ago around anonymity, but now it seems even more possible with things like billion parameter LLMs. A service where you enter text that you want to post on Reddit/Twitter/etc. but you instead give it to your locally-running model, which is trained to output the text with the following transformations:

  1. Writing style change
  2. Degree of extra/reduced fluff
  3. Degree of typos
  4. Typing style change (e.g. using semicolons a lot, misusing commas consistently, conventions like S.O.S vs. SOS)
All in an effort to further anonymize your text. The idea is that people have distinctive, consistent habits: grammatical quirks, recurring mistakes, writing style, concision, and so on. You create a "profile" for each identity you want, usually one per account.

Another interesting concept to explore is platform style. Have you ever noticed that everyone on Reddit sort of sounds the same? People on HN sort of sound the same. I think there's an emergent platform-based profile that top comments hover around, because adhering to this style is more likely to get upvotes. The hive mind is real!


The worst thing about being smart is how easy it is to talk yourself into believing just about anything. After all, you make really good arguments.

EA appeals to exactly that kind of really-smart-person who is perfectly capable of convincing themselves that they're always right about everything. And from there, you can justify all kinds of terrible things.

Once that happens, it can easily spiral out from there. People who know perfectly well they're misbehaving will claim that they aren't, using the same arguments. It won't hold water, but now we're swamped, and the entire thing crumbles.

I'd love to believe in effective altruism. I already know that my money is more effective in the hands of a food bank than giving people food myself. I'd love to think that could scale. It would be great to have smarter, better-informed people vetting things. But I don't have any reason to trust them -- in part because I know too many of the type of people who get involved and aren't trustworthy.

