Hacker News | new | past | comments | ask | show | jobs | submit | hannasm's comments

Honestly, I think I am already sold on AI. Who is going to be the first company to show us all how much it really costs and start the enshittification? First to market wins, right?

  > patched an Encrypted DNS Server to store the original TTL of a response, defined as the minimum TTL of its records, for each incoming query

The article seems to be based on capturing live DNS data from a real network. While it may be true that persistent connections reduce the impact of low TTLs, the article certainly seems to be accounting for that, unless their network is only using HTTP/1.0 for some reason.

I agree that a low TTL could help during an outage if you actually wanted to move your workload somewhere else (I didn't see that mentioned in the article), but I've never actually seen this done in my experience. Setting the TTL extremely low for some sort of extreme DR scenario smells like an anti-pattern to me.

Consider the counterpoint: a high TTL can keep your service from going down if the DNS server crashes or loses connectivity.
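A minimal sketch of that trade-off (a hypothetical stub resolver, not anything from the article; all names are made up): within the TTL, a cached answer keeps working even while the upstream DNS server is unreachable, whereas a very low TTL forces a fresh lookup that fails.

```python
import time

class StubResolver:
    """Toy caching resolver illustrating the TTL trade-off."""

    def __init__(self, upstream):
        self.upstream = upstream  # callable: name -> (ip, ttl_seconds)
        self.cache = {}           # name -> (ip, expires_at)

    def resolve(self, name, now=None):
        now = time.time() if now is None else now
        hit = self.cache.get(name)
        if hit and now < hit[1]:
            return hit[0]                  # still fresh: no upstream query needed
        ip, ttl = self.upstream(name)      # raises if the upstream is down
        self.cache[name] = (ip, now + ttl)
        return ip
```

With a one-hour TTL, an upstream outage after the first lookup is invisible to clients for the rest of that hour; with a one-second TTL, the same outage surfaces almost immediately.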


This article does a good job of calling attention to the pattern.

If you work in PowerShell you can start out in the terminal, and once you've got whatever you need working you can grab the history (Get-History) and write it to a file, which I've always referred to as a `sample`. Then, when it becomes important enough that other people ask me about it regularly, I refactor the `sample` into a true production-grade `script`. It often doesn't start out with a clear direction, and creating a separate file is just unnecessary ceremony when you can tinker and export later, once the `up-enter` pattern actually appears.
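The export step can be as small as a one-liner (a sketch; the `sample.ps1` filename is just an example):

```powershell
# Dump the current session's command history into a file to seed a `sample`.
Get-History | Select-Object -ExpandProperty CommandLine |
    Out-File -FilePath sample.ps1
```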


I had the same thought, but unfortunately, even if that translation is accurate, it could still be bidirectional hallucination and would not really be sufficient evidence...

It's another reformulation rather than a true proof: instead of needing a proof of the theorem, we now need to prove that this proof actually proves the theorem. The proof itself is so incomprehensible that it can't be trusted on its own, but if it can be shown to be trustworthy, then the theorem must be true.


Well, I think this is a great step forward, but it would be even better if we could mix aspect ratios more freely...

Consider a layout similar to the OP's, but where landscape images can span multiple columns, in addition to everything it already does.

The thing about masonry is that it adapts to the size of the images. You could already do masonry using flexbox if you know the image sizes (https://github.com/hannasm/masonflexjs). Doing it as a true mosaic layout would be a step above current capabilities. At that point, though, it's probably pretty easy to create configurations that don't fit perfectly or require lots of empty space to lay out nicely.
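The known-sizes case reduces to a simple greedy placement. Here is a minimal sketch in Python (hypothetical, not the linked masonflexjs code): drop each image into whichever column is currently shortest, then render each column as a flex column.

```python
def layout_masonry(image_heights, column_count):
    """Greedy masonry placement: each image goes to the shortest column so far."""
    columns = [{"height": 0, "images": []} for _ in range(column_count)]
    for h in image_heights:
        # pick the currently shortest column (ties go to the leftmost)
        target = min(columns, key=lambda c: c["height"])
        target["images"].append(h)
        target["height"] += h
    return columns
```

This is exactly why the sizes must be known up front: the greedy choice depends on each image's height, which is also why a true multi-column mosaic is a harder packing problem.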


Kind of random, but why, in the linked repo, are you using .NET Core to minify a JavaScript file? I'm just curious; it seems like overkill to me.


You could always interpret a variable from the perspective of its memory address. It is clearly variable in the sense that its contents can and will change between allocations of that address; an immutable variable, however, is intended to remain constant for as long as its current allocation lasts.


This article is barely a comment on some other situation, but I've been saying this to anyone who will listen for years.

There's nothing special about whitespace (unless you write python).

Capitalization rules and a bunch of other stuff in your coding convention document are usually just signs of poor tooling and a lack of skill.

Give me a PR that satisfies the requirements and the appropriate test cases, and I'll happily rewrite it as I see fit: spaces-only indentation, curly braces on new lines, and so on.

The hard part is the first two tasks; you can train an intern to do the third.


  > these papers keep stapling on broad philosophical claims about whether models can “really reason” that are just completely unsupported by the content of the research.

From the scientific papers I've read, almost every single one does this. What's the point of publishing a paper if it doesn't at least try to convince readers that something award-worthy has been learned?

Usually there are some interesting ideas hidden in the data, but the paper's methods and scope weren't even worthy of a conclusion to begin with. It's just one data point in the vast sea of scientific experimentation.

The conclusion feels to me like a cultural phenomenon, and writing one is just a matter of survival for most authors. I have to imagine it was easier in the past:

"Does the flame burn green? Why yes it does..."

These days it's more like

"With my two hours of compute on the million-dollar mainframe, my toy LLM didn't seem to get there; YMMV."


Everybody else here has a fantastic solution to your complaint, but wouldn't it be even better to think big and wish that stupid whitespace formatting issues weren't something git was tokenizing to begin with?


Whitespace is important for a myriad of reasons, so the implementation makes sense to me. However, I would love to see git gain configuration around syntax awareness. That's a huge undertaking, but one can dream.


My condolences to everyone affected.

In a gerontological context: it would be interesting to understand more about his genealogy. Does 76 mark an above- or below-average age given his family's historical lifespans?

For someone who experienced great success in life, and was (at least at one point) in peak physical shape, the cause of his demise seems especially interesting to me.


