Hacker News | muvlon's comments

So you're saying instead of assessing the current capabilities of the technology, we should imagine its future capabilities, "accept" that they will surely be achieved and then assess those?

I would assess the directionality and rate of the trend. If it's getting better fast and we don't see a limit to that trend then it will eventually pass whatever threshold we set for adoption.

As a Nix evangelist, I have to say: Nix is really not capable of replacing language-specific package managers.

> running arbitrary commands to invoke language-specific package managers.

This is exactly what we do in Nix. You see this everywhere in nixpkgs.

What sets Nix apart from Docker is not that it works well at a finer granularity, i.e. source-file level, but that it has real hermeticity and thus reliable caching. That is, we also run arbitrary commands, but they don't get to talk to the internet and thus don't get to e.g. `apt update`.

In a Dockerfile, you can `apt update` all you want, and this makes the build layer cache a very leaky abstraction. This is merely an annoyance when working on an individual container build, but it would be a complete dealbreaker at Linux-distro scale, which is what Nix operates at.


Fundamentally speaking, the key point is really just hermeticity and reliable caching. Running arbitrary commands was never the problem anyway. What makes gcc a blessed command but the compiler for my own language an "arbitrary" one?

And in languages with insufficient abstraction power like C and Go, you often need to invoke a code generation tool to generate the sources; that's an extremely arbitrary command. These are just non-problems if you have hermetic builds and reliable caching.
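To make this concrete, here's a minimal sketch of what "arbitrary command, but hermetic" looks like as a Nix derivation. The package and paths are placeholders (protobuf stands in for any codegen tool); the point is that the command runs freely inside the sandbox but cannot reach the network, so the output depends only on the declared inputs:

```nix
# Hypothetical sketch: run an "arbitrary" codegen step hermetically.
# ./proto and the choice of protobuf are placeholders.
{ pkgs ? import <nixpkgs> {} }:

pkgs.stdenv.mkDerivation {
  name = "generated-sources";
  src = ./proto;
  nativeBuildInputs = [ pkgs.protobuf ];  # any generator works the same way
  buildPhase = ''
    mkdir -p $out
    # As arbitrary as any other command; the sandbox just guarantees
    # it can't fetch anything over the network mid-build.
    protoc --cpp_out=$out *.proto
  '';
  # buildPhase writes straight to $out, so skip the install phase.
  dontInstall = true;
}
```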


I mean, I guess at a theoretical level. In practice, it's just not a large problem.

Well, arbitrary granularity is possible with Nix, but the build systems of today simply do not utilise it. For example, I've written an experimental C build system for Nix which handles all compiler orchestration, and it works great: you get minimal recompilations and free distributed builds. It would be awesome if something like this were actually available for major languages (Rust?). Let me know if you're working on or have seen anything like this!

A problem with that is that Nix is slow.

On my nixos-rebuild, building a simple config file for /etc takes much longer than a common gcc invocation to compile a C file. I suspect that is due to something in Nix's Linux sandbox setup being slow, or at least I remember some issue discussions around that; I think the worst part of it got improved, but it's still quite slow today.

Because of that, it's much faster to do N build steps inside 1 nix build sandbox, than the other way around.
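A hedged sketch of what that batching looks like (file names invented): rather than one derivation per `.c` file, you pay the sandbox setup cost once and run all the compile steps inside it:

```nix
# Sketch: one sandbox, N compile steps. ./src is a placeholder.
{ pkgs ? import <nixpkgs> {} }:

pkgs.stdenv.mkDerivation {
  name = "batched-objects";
  src = ./src;
  buildPhase = ''
    mkdir -p $out
    for f in *.c; do
      # Each gcc invocation here is cheap compared to spinning up
      # a fresh Nix sandbox per file; setup happens only once.
      gcc -c "$f" -o "$out/''${f%.c}.o"
    done
  '';
  dontInstall = true;
}
```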

Another issue is that some programming languages have build systems that are better than the "oneshot" compilation used by most languages (one compiler invocation per file producing one object file, e.g. `gcc -c x.c -o x.o`). For example, Haskell has `ghc --make`, which compiles the whole project in one compiler invocation, with very smart recompilation avoidance (per-function dependency tracking; comment changes don't trigger recompilation; etc.), avoidance of repeat work (e.g. parsing/deserialising the inputs to a module's compilation only once and keeping them in memory), and amortised compiler startup cost.

Combining that with per-file general-purpose hermetic build systems is difficult and currently not implemented anywhere as far as I can tell.

To get something similar with Nix, the language-specific build system would have to invoke Nix in a very fine-grained way, e.g. to get "avoidance of codegen if only a comment changed", Nix would have to be invoked at each of the parser/desugar/codegen parts of the compiler.

I guess a solution to that is to make the oneshot mode much faster by better serialisation caching.


What if you set up a sandbox pool? Maybe I'm rambling (I haven't read much Nix source code), but that should allow for only a couple of milliseconds of latency on these types of builds. I have considered forking Nix to make this work, but in my testing with my experimental build system, I never experienced much latency in builds. The trick to reducing latency in development builds is to forcibly disable the network lookups which normally happen before Nix starts building a derivation:

    preferLocalBuild = true;
    allowSubstitutes = false;
Set these in each derivation. According to my testing, the most impactful thing you could do in a Nix fork in this case is to build derivations preemptively while substitutes and caches are still being fetched, instead of doing those steps in order.
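For context, here's roughly where those two attributes go (a sketch; the package itself and its build commands are placeholders):

```nix
# Sketch: a local-only dev derivation that skips substitute lookups.
{ pkgs ? import <nixpkgs> {} }:

pkgs.stdenv.mkDerivation {
  name = "dev-build";
  src = ./.;
  # Skip the pre-build network round-trips to binary caches:
  preferLocalBuild = true;
  allowSubstitutes = false;
  buildPhase = "make";
  installPhase = "make install PREFIX=$out";
}
```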

If you are interested in seeing my experiment, it's open on your favourite forge:

https://github.com/poly2it/kein



I use crane, but it does not have arbitrary granularity. The end goal would be something which handled all builds in Nix.

Sure, and that's useful but neither revolutionary nor exclusive to 3D printers. You can use a milling machine to mill a bunch of pieces for a milling machine. You can use a PCB printer to print the PCBs for a PCB printer. A 3D printer is much, much closer to this than it is to a self-replicating machine.

> You can use a milling machine to mill a bunch of pieces for a milling machine.

Now that CNC mills get more affordable, people are starting to get vocal about their visions of a self-milling CNC mill. :-)


A classic manual Bridgeport mill, a foundry for making castings, a heat-treating furnace, a steel planer, a lathe, a drill press, a grinder, and a supply of steel is enough for a master machinist to reproduce all that. That's what was used to make machine tools in the first half of the 20th century.

... and now work on

- how these machining processes can be automated, and

- how the cost, space requirements and noise levels for these machines can be reduced so that every ambitious maker can have them in their apartment

Voila, the start of a home manufacturing revolution ...


I get the feeling there is something interesting here, but the website seems myopically focused on syntax. It doesn't really tell me what this language is good at or how you'd expect people to use it.

They are quite literally negotiable: https://isrg.formstack.com/forms/rate_limit_adjustment_reque...

There are also a bunch of rate limit exemptions that automatically apply whenever you "renew" a cert: https://letsencrypt.org/docs/rate-limits/#non-ari-renewals. That means whenever you request a cert and there already is an issued certificate for the same set of identities.


Your comment is 100% correct, but I just want to point out that this doesn't negate the risks of bob's approach here.

LE wouldn't see this as a legitimate reason to raise rate limits, and such a request takes weeks to handle anyway.

Indeed, some rate limits don't apply for renewals but some still do.


> If you’ve hit a rate limit, we don’t have a way to temporarily reset it. [1]

From your link:

> move the adjustments to production twice monthly.

I don't know about your use case but I couldn't risk being unable to get a new certificate for at least a fortnight because my container was stuck in a restart loop.

[1]: https://letsencrypt.org/docs/rate-limits/


Because with Next.js, Vercel was able to turn the frontend stack into also a really shitty backend stack. And it's particularly shitty at being deployed, so they're in the business of doing that for you.


The world has not decided that spyware can't be produced. Mostly, the powers that be treat it like weapons of war.

That is, companies can make and sell it as long as they only sell it to governments and only the ones that we like.


What other markdown viewers or editors support URL schemes that just execute code? And not in a browser sandbox, but in the same security context Notepad itself is running in.


Funnily enough, the core Windows API here that brings with it support for every URL scheme under the sun is plain old ShellExecute() from the mid-90s IE-in-the-shell era when such support was thought reasonable. (I actually still think it’s reasonable, just not with the OS architectures we have now or had then.)


I used to enjoy it much more before it became just another podcast extolling the virtues of AI-assisted coding. I have too many of those already.


I appreciate their treatment of the current AI boom cycle. Just last night they had Evan Ratliff on from the Shell Game podcast[1], and it was a great episode. They're not breathlessly hyping AI and trying to make a quick buck off it; instead, it seems they're taking an honest, rigorous look at it (which is sadly pretty rare) and talking about the successes as well as the failures. Personally I don't always agree with their takes; I'm more firmly in Ed Zitron's camp that this is all a massive financial scam, isn't really good for much, and will do a lot more harm than good in the long run. They're less negatively biased than that, which is fine.

[1] https://www.shellgame.co/


How would they? This is AI, it has to move faster than you can even ask security questions, let alone answer them.

