Do you mean the distribution of representable numbers as floats, or of real numbers? I always assumed infinitely many values could be stored between 0 and 1, because you can map everything there via 1/x. But I've never had enough free time for the maths.
I'm not sure how to answer because I'm not sure which question you're asking.
For infinity: not only can you not compute +/-inf, there also isn't an infinite set of representable numbers on [0,1]. You get more with fp64 and more again with fp128, but it's still finite. This is what leads to that thing where you add numbers and get something like 1.9999999998 (I did not count the number of 9s). Look at how numbers are represented on computers: a sign, a mantissa, and an exponent. You'll see there are more representable numbers on [-1,1] than in any other range, which makes that kind of normalization important when doing numerical work on computers.
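You can see both the uneven density and the resulting drift directly from Python's stdlib; a quick sketch:

```python
import math

# The gap between adjacent representable doubles (the "unit in the
# last place") grows with magnitude: floats are densest near zero.
print(math.ulp(1.0))    # ~2.22e-16
print(math.ulp(1e16))   # 2.0 -- adjacent doubles are 2 apart out here

# This spacing is why sums drift: 0.1 has no exact binary
# representation, so ten of them don't add up to exactly 1.0.
total = sum([0.1] * 10)
print(total)            # 0.9999999999999999
```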
This also causes breakdowns in seemingly ordinary math, such as addition and multiplication no longer being associative. Associativity doesn't hold under finite precision, which means floats don't form the field you'd want to work within. This is true regardless of the precision level, which is why I made my previous comment.
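The associativity failure takes one line to demonstrate:

```python
# Floating-point addition is not associative: each grouping rounds
# at a different intermediate value, so the two results disagree.
left = (0.1 + 0.2) + 0.3
right = 0.1 + (0.2 + 0.3)
print(left)           # 0.6000000000000001
print(right)          # 0.6
print(left == right)  # False
```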
For real numbers: we're talking about computers, and computers only use a finite subset of the reals, so I'm not sure why you're bringing them up.
I'm just guessing, but it seems the people who write these agent CLIs haven't found a good heuristic for allowing/disallowing/asking the user about permissions for commands, so instead of sitting down and actually figuring it out, someone had the bright idea to let the LLM manage that allowing/disallowing itself. How that ever made sense will probably forever be lost on me.
`chroot` was literally the first thing I reached for when I first installed a local agent, purely by intuition (I later moved on to a container wrapper), and now I'm reading about people giving these agents direct access to reply to their emails and more.
> I'm just guessing, but it seems the people who write these agent CLIs haven't found a good heuristic for allowing/disallowing/asking the user about permissions for commands, so instead of sitting down and actually figuring it out, someone had the bright idea to let the LLM manage that allowing/disallowing itself. How that ever made sense will probably forever be lost on me.
I don't think there is such a good heuristic. The user wants the agent to do the right thing and not to do the wrong thing, but the capabilities needed are identical.
> `chroot` is literally the first thing I used when I first installed a local agent, by intuition (later moved on to a container-wrapper), and now I'm reading about people who are giving these agents direct access to reply to their emails and more.
That's a good, safe, and sane default for project-focused agent use, but it seems like those playing it risky are using agents for general-purpose assistance and automation. The access required to do so chafes against strict sandboxing.
There still needs to be a harness running on your local machine to spawn the processes in their sandboxes. I consider that "part of the LLM" even if it isn't doing any inference.
If that part were running sandboxed, then it would be impossible for it to contact the OpenAI servers (to get the LLM's responses), or to spawn an unsandboxed process (for situations where the LLM requests it from the user).
That's obviously not true. A sandbox doesn't rule that out; you can pre-open resources and hand them in. Open a socket to the OpenAI servers, then pass that socket off to the sandboxed process and let it communicate over it. Now it can talk to OpenAI's servers, but it can't open connections to any other servers or do anything else.
The startup process which sets up the original socket would have to be privileged, of course, but only for the purpose of setting up the initial connection. The running LLM harness process would not have any ability to break out of the sandbox after that.
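A minimal sketch of that pre-opened-socket pattern, using fork and an inherited socketpair as a stand-in for the real API connection (the actual sandboxing step, seccomp/chroot/Seatbelt/etc., is elided with a comment):

```python
import os
import socket

def demo_fd_handoff() -> bool:
    """A privileged launcher opens the only connection the child will
    ever have, then forks; the child inherits the descriptor. In a real
    harness the child would enter its sandbox before using it."""
    api_side, harness_side = socket.socketpair()
    pid = os.fork()
    if pid == 0:
        # Child ("harness"): the sandbox would be applied right here,
        # forbidding any new sockets. The inherited fd still works.
        api_side.close()
        harness_side.sendall(b"request")
        ok = harness_side.recv(1024) == b"response"
        os._exit(0 if ok else 1)
    # Parent plays the API server for the purposes of this demo.
    harness_side.close()
    assert api_side.recv(1024) == b"request"
    api_side.sendall(b"response")
    _, status = os.waitpid(pid, 0)
    api_side.close()
    return os.WEXITSTATUS(status) == 0

print(demo_fd_handoff())  # True on POSIX systems
```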
As for spawning unsandboxed processes, that would require a much more sophisticated system whereby the harness uses an API to request permission from the user to spawn the process. We already have APIs like this for requesting extra permissions from users on Android and iOS, so it's not in-principle impossible either.
In practice I think such requests would be a security nightmare and best avoided, since essentially it would be like a prisoner asking the guard to let him out of jail and the guard just handing the prisoner the keys. That unsandboxed process could do literally anything it has permissions to do as a non-sandboxed user.
The devil is in the details. How much of the code running on my machine is confined to the sandbox, versus how much runs in the bootstrap phase? I haven't looked, but I would hope it can survive a security audit.
If I'm following this, it means you still need to audit all the code the LLM writes, though, since anything you run from another terminal window runs as you, with full permissions.
The thing is that on macOS at least, Codex does have the ability to use an actual OS sandbox, which I believe prevents certain write operations and network access.
Is it asking you permission to run that python command? If so, then that's expected: commands that you approve get to run without the sandbox.
The point is that Codex can (by default) run commands on its own, without approval (e.g., running `make` on the project it's working on), but they're subject to the imposed OS sandbox.
This is controlled by the `--sandbox` and `--ask-for-approval` arguments to `codex`.
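For illustration, an invocation pinning both behaviors might look something like this (the flag values below are recalled from recent Codex CLI builds, not verified here, so check `codex --help`):

```shell
# OS-level sandbox limited to writes inside the workspace; only ask
# the user when a command needs to escape the sandbox.
codex --sandbox workspace-write --ask-for-approval on-request
```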
Probably for any case where an actual human is doing it. On an image you obviously want to do it at bake time, so I feel default off with a flag would have been a better design decision for pip.
I've just read the thread and I use Python, so I can't comment on what % of the speedup attributed to uv comes from this optimization.
Images are a good example where doing it at install time is probably best, yeah, since every run of the image starts fresh, losing the compilation that happened the last time the image was started.
If it were an optional toggle, it would probably become best practice to activate compilation in Dockerfiles.
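The bake-time step itself is already in the stdlib, so a Dockerfile can do it today with a `RUN python -m compileall ...` line. A sketch of what that does, on a throwaway package (the package name is made up):

```python
import compileall
import pathlib
import tempfile

# Bake-time bytecode compilation: precompiling a package tree so the
# first import at runtime doesn't pay the .py -> .pyc cost.
with tempfile.TemporaryDirectory() as root:
    pkg = pathlib.Path(root) / "mypkg"          # stand-in for site-packages
    pkg.mkdir()
    (pkg / "__init__.py").write_text("x = 1\n")
    ok = compileall.compile_dir(root, quiet=1)  # writes __pycache__/*.pyc
    assert ok
    assert list(pkg.glob("__pycache__/*.pyc"))
```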
> On an image you obviously want to do it at bake time
It seems like tons of people are creating container images with an installer tool and having it do a bunch of installations, rather than creating the image with the relevant Python packages already in place. Hard to understand why.
For that matter, a pre-baked Python install could do much more interesting things to improve import times than just leaving a forest of `.pyc` files in `__pycache__` folders all over the place.