So you're saying instead of assessing the current capabilities of the technology, we should imagine its future capabilities, "accept" that they will surely be achieved and then assess those?
I would assess the directionality and rate of the trend. If it's getting better fast and we don't see a limit to that trend, then it will eventually pass whatever threshold we set for adoption.
As a Nix evangelist, I have to say: Nix is really not capable of replacing language-specific package managers.
> running arbitrary commands to invoke language-specific package managers.
This is exactly what we do in Nix. You see this everywhere in nixpkgs.
What sets Nix apart from Docker is not that it works well at a finer granularity (i.e. source-file level), but that it has real hermeticity and thus reliable caching. That is, we also run arbitrary commands, but they don't get to talk to the internet and thus can't e.g. `apt update`.
In a Dockerfile, you can `apt update` all you want, which makes the build layer cache a very leaky abstraction. That is merely an annoyance when working on an individual container build, but it would be a complete dealbreaker at Linux-distro scale, which is where Nix operates.
Fundamentally, the key point really is just hermeticity and reliable caching. Running arbitrary commands was never the problem. What makes gcc a blessed command and the compiler for my own language an "arbitrary" one, anyway?
And in languages with insufficient abstraction power, like C and Go, you often need to invoke a code-generation tool to produce the sources; that's an extremely arbitrary command. These are simply non-problems if you have hermetic builds and reliable caching.
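To make that concrete, here's a minimal sketch of a codegen step run hermetically inside a single Nix derivation. `stdenv.mkDerivation` is the real nixpkgs API; `my-generator` and the file names are hypothetical stand-ins for whatever codegen tool you use:

```nix
# Sketch: a hypothetical code generator run as just another build command.
# Inside the sandbox there is no network access, so even this "extremely
# arbitrary" command produces a reproducible, cacheable result.
pkgs.stdenv.mkDerivation {
  name = "hello-generated";
  src = ./.;
  nativeBuildInputs = [ my-generator ];  # hypothetical codegen tool
  buildPhase = ''
    my-generator schema.def > generated.c  # arbitrary codegen step
    gcc -O2 generated.c main.c -o hello
  '';
  installPhase = ''
    mkdir -p $out/bin
    cp hello $out/bin/
  '';
}
```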
Well, arbitrary granularity is possible with Nix, but today's build systems simply don't utilise it. For example, I've written an experimental C build system for Nix that handles all compiler orchestration, and it works great: you get minimal recompilations and free distributed builds. It would be awesome if something like this were actually available for major languages (Rust?). Let me know if you're working on, or have seen, anything like this!
On my nixos-rebuild, building a simple config file for /etc takes much longer than a typical gcc invocation compiling a C file. I suspect that is due to something slow in Nix's Linux sandbox setup; at least I remember some issue discussions around that. I think the worst part got improved, but it's still quite slow today.
Because of that, it's much faster to do N build steps inside 1 nix build sandbox, than the other way around.
Another issue is that some languages have build systems that are better than the "oneshot" compilation model most compilers use (one compiler invocation per source file producing one object file, e.g. `gcc -c x.c -o x.o`). For example, Haskell has `ghc --make`, which compiles the whole project in one compiler invocation, with very smart per-function recompilation avoidance (comment changes don't trigger recompilation, etc.), avoidance of repeated work (e.g. parsing/deserialising the inputs to a module's compilation only once and keeping them in memory), and amortised compiler startup cost.
Combining that with per-file general-purpose hermetic build systems is difficult and currently not implemented anywhere as far as I can tell.
To get something similar with Nix, the language-specific build system would have to invoke Nix in a very fine-grained way; e.g. to get "avoidance of codegen if only a comment changed", Nix would have to be invoked at each of the parse/desugar/codegen stages of the compiler.
I guess a solution to that is to make the oneshot mode much faster by better serialisation caching.
What if you set up a sandbox pool? Maybe I'm rambling (I haven't read much Nix source code), but that should allow these types of builds to incur only a couple of milliseconds of latency. I have considered forking Nix to make this work, but in my testing with my experimental build system I never experienced much latency in builds anyway. The trick to reducing latency in development builds is to forcibly disable the network lookups that normally happen before Nix starts building a derivation:
Set these in each derivation. According to my testing, the most impactful thing you could do in a Nix fork is to start building derivations preemptively while substitutes and caches are still being fetched, instead of doing those steps in order.
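The standard derivation attributes for skipping those network lookups are the following; both are real Nix derivation options, though whether they are exactly the ones meant here is my assumption:

```nix
# Sketch, assuming the usual attributes for avoiding substituter round-trips:
mkDerivation {
  # ...
  allowSubstitutes = false;  # don't query binary caches for this derivation
  preferLocalBuild = true;   # build locally rather than on remote builders
}
```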
If you are interested in seeing my experiment, it's open on your favourite forge:
Sure, and that's useful, but neither revolutionary nor exclusive to 3D printers. You can use a milling machine to mill a bunch of pieces for a milling machine. You can use a PCB printer to print the PCBs for a PCB printer. A 3D printer is much, much closer to this than it is to a self-replicating machine.
A classic manual Bridgeport mill, a foundry for making castings, a heat-treating furnace, a steel planer, a lathe, a drill press, a grinder, and a supply of steel are enough for a master machinist to reproduce all of that. That's what was used to make machine tools in the first half of the 20th century.
I get the feeling there is something interesting here, but the website seems myopically focused on syntax. It doesn't really tell me what this language is good at or how you'd expect people to use it.
There are also a bunch of rate-limit exemptions that automatically apply whenever you "renew" a cert: https://letsencrypt.org/docs/rate-limits/#non-ari-renewals. That is, whenever you request a cert and there is already an issued certificate for the same set of identifiers.
> If you’ve hit a rate limit, we don’t have a way to temporarily reset it. [1]
From your link:
> move the adjustments to production twice monthly.
I don't know about your use case, but I couldn't risk being unable to get a new certificate for at least a fortnight because my container was stuck in a restart loop.
Because with Next.js, Vercel managed to turn the frontend stack into a really shitty backend stack as well. And it's particularly shitty to deploy, so they're in the business of doing that for you.
What other markdown viewers or editors support URL schemes that just execute code? And not in a browser sandbox, but in the same security context that Notepad itself runs in.
Funnily enough, the core Windows API here that brings with it support for every URL scheme under the sun is plain old ShellExecute() from the mid-90s IE-in-the-shell era when such support was thought reasonable. (I actually still think it’s reasonable, just not with the OS architectures we have now or had then.)
I appreciate their treatment of the current AI boom cycle. Just last night they had Evan Ratliff on from the Shell Game podcast [1], and it was a great episode. They're not breathlessly hyping AI and trying to make a quick buck off it; instead, they seem to be taking an honest, rigorous look at it (which is sadly pretty rare) and covering the successes as well as the failures. Personally, I don't always agree with their takes; I'm more firmly in Ed Zitron's camp that this is all a massive financial scam, isn't really good for much, and will do a lot more harm than good in the long run. They're less negatively biased than that, which is fine.