
Do you mean the distribution of representable numbers as floats, or of real numbers? I always assumed there were infinitely many values stored between 0 and 1, because you can 1/x everything. But I have never had enough free time for the maths.

I'm not sure how to answer because I'm not sure which question you're asking.

For infinity: you can't calculate +/-inf, but there also isn't an infinite set of representable numbers on [0,1]. You get more with fp64 and more still with fp128, but it's always finite. This is what leads to that thing where you might add numbers and get something like 1.9999999998 (I did not count the number of 9s). Look at how numbers are represented on computers: a system of mantissa and exponent. You'll see there are more representable numbers on [-1,1] than in other ranges, which makes that kind of normalization important when doing numerical work on computers.
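A quick Python sketch of both effects (the gaps between representable doubles, and the accumulated rounding from repeated addition):

```python
import math

# Gap between a double and the next representable double, near 1 vs near 1e16:
print(math.nextafter(1.0, 2.0) - 1.0)     # ~2.2e-16: doubles are dense near 1
print(math.nextafter(1e16, 2e16) - 1e16)  # 2.0: above 2**53 even some integers are skipped

# Repeated addition accumulates rounding error:
print(sum([0.1] * 10))  # 0.9999999999999999, not 1.0
```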

This also causes breakdowns in seemingly ordinary math, such as addition and multiplication not being associative. Associativity doesn't hold with finite precision, which means floats don't give you a field to work within. This is true regardless of the precision level, which is why I made my previous comment.
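The associativity failure is easy to see in Python:

```python
# Floating-point addition is not associative:
left = (0.1 + 0.2) + 0.3   # 0.6000000000000001
right = 0.1 + (0.2 + 0.3)  # 0.6
print(left == right)       # False
```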

For real numbers: we're talking about computers, and computers only use a finite subset of the real numbers, so I'm not sure why you're bringing them up.


My problem was simply chrome so I switched to brave.

I read the old forums carefully:

[BIG Warning: this didn't work for child commenter]

- simply decline/reject the TOS on install. It will auto uninstall the installer and go away.

Life has been good since.


I TRUSTED YOU! Nooooooooo!

Anyway, I hope you're happy.

(I thought it would show me a TOS prompt again, but it did not. My bad.)


What? Let me put this warning in the top comment. I got one.

FWIW the lockout probably wasn't related... maybe the content you were working on or your context window management somehow triggered something?

It's up.

I would like to know why it is correcting "thanks" to "Thanksgiving".


I considered sending a similar tweet to Tim Cook today but decided it would do nothing.

I tried adding swifter, I tried vtt; they failed at that too.

If you're wondering, this message is typed on my Apple phone.

*Edit: Just figured out I can delete the apple keyboard from the enabled list. They can put that in their metrics and smoke it.*


My Codex just uses Python to write files around the sandbox when I ask it to patch an SDK outside its path.


It's definitely not a sandbox if you can just "use python to write files" outside of it o_O


Hence the article’s security theatre remark.

I’m not sure why everyone seems to have forgotten about Unix permissions, proper sandboxing, jails, VMs etc when building agents.

Even just running the agent as a different user with minimal permissions and jailed into its home directory would be simple and easy enough.
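A minimal Python sketch of the easy version of this idea: strip the environment and pin the working directory for every command the agent spawns. (`run_restricted` and its allowlist are made up for illustration; this is only environment hygiene, not a real jail — a real setup would add a dedicated user, `os.chroot`, or a container.)

```python
import os
import subprocess
import sys

def run_restricted(argv, workdir, allowed_env=("PATH",)):
    """Run a command with a stripped-down environment, confined to workdir.
    Not actual isolation: the child still runs as the current user. Add
    chroot/setuid (as root) or a container for a real jail."""
    env = {k: v for k, v in os.environ.items() if k in allowed_env}
    return subprocess.run(argv, cwd=workdir, env=env,
                          capture_output=True, text=True)

# The child only sees the variables we allowed through:
result = run_restricted(
    [sys.executable, "-c", "import os; print(sorted(os.environ))"], "."
)
print(result.stdout)
```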


I'm just guessing, but seems the people who write these agent CLIs haven't found a good heuristic for allowing/disallowing/asking the user about permissions for commands, so instead of trying to sit down and actually figure it out, someone had the bright idea to let the LLM also manage that allowing/disallowing themselves. How that ever made sense, will probably forever be lost on me.

`chroot` is literally the first thing I used when I first installed a local agent, by intuition (later moved on to a container-wrapper), and now I'm reading about people who are giving these agents direct access to reply to their emails and more.


> I'm just guessing, but seems the people who write these agent CLIs haven't found a good heuristic for allowing/disallowing/asking the user about permissions for commands, so instead of trying to sit down and actually figure it out, someone had the bright idea to let the LLM also manage that allowing/disallowing themselves. How that ever made sense, will probably forever be lost on me.

I don't think there is such a good heuristic. The user wants the agent to do the right thing and not to do the wrong thing, but the capabilities needed are identical.

> `chroot` is literally the first thing I used when I first installed a local agent, by intuition (later moved on to a container-wrapper), and now I'm reading about people who are giving these agents direct access to reply to their emails and more.

That's a good, safe, and sane default for project-focused agent use, but it seems like those playing it risky are using agents for general-purpose assistance and automation. The access required to do so chafes against strict sandboxing.


Here's OpenAI's docs page on how they sandbox Codex: https://developers.openai.com/codex/security/

Here's the macOS kernel-enforced sandbox profile that gets applied to processes spawned by the LLM: https://github.com/openai/codex/blob/main/codex-rs/core/src/...

I think skepticism is healthy here, but there's no need to just guess.


That still doesn't seem ideal. Run the LLM itself in a kernel-enforced sandbox, lest it find ways to exploit vulnerabilities in its own code.


The LLM inference itself doesn't "run code" per se (it's just doing tensor math), and besides, it runs on OpenAI's servers, not your machine.


There still needs to be a harness running on your local machine to spawn the processes in their sandboxes. I consider that "part of the LLM" even if it isn't doing any inference.


If that part were running sandboxed, then it would be impossible for it to contact the OpenAI servers (to get the LLM's responses), or to spawn an unsandboxed process (for situations where the LLM requests it from the user).


That's obviously not true. You can set up a sandbox however you want: open a socket to the OpenAI servers, then pass it off to the sandboxed process and let it communicate over that socket. Now it can talk to OpenAI's servers, but it can't open connections to any other servers or do anything else.

The startup process which sets up the original socket would have to be privileged, of course, but only for the purpose of setting up the initial connection. The running LLM harness process would not have any ability to break out of the sandbox after that.
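The "open the connection first, then hand it to the jailed process" trick is ordinary file-descriptor inheritance. A toy Python version, using a local socketpair to stand in for the connection the privileged launcher would open:

```python
import os
import socket

# Privileged launcher creates the connection up front...
launcher_end, jailed_end = socket.socketpair()

pid = os.fork()
if pid == 0:
    # ...and the sandboxed child inherits only its end of it.
    # Inside a real sandbox it could not open any *new* connections,
    # but it can still talk over this inherited fd.
    launcher_end.close()
    jailed_end.sendall(b"request from the jail")
    jailed_end.close()
    os._exit(0)

jailed_end.close()
msg = launcher_end.recv(1024)
os.waitpid(pid, 0)
print(msg)  # b'request from the jail'
```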

As for spawning unsandboxed processes, that would require a much more sophisticated system whereby the harness uses an API to request permission from the user to spawn the process. We already have APIs like this for requesting extra permissions from users on Android and iOS, so it's not in-principle impossible either.

In practice I think such requests would be a security nightmare and best avoided, since essentially it would be like a prisoner asking the guard to let him out of jail and the guard just handing the prisoner the keys. That unsandboxed process could do literally anything it has permissions to do as a non-sandboxed user.


You are essentially describing the system that Codex (and, I presume, Claude Code et al.) already implements.


The devil is in the details. How much of the code running on my machine is confined to the sandbox vs. how much is used in the bootstrap phase? I haven't looked, but I would hope it can survive some security audits.


If I'm following this, it means you still need to audit all code that the LLM writes, though, since anything you run from another terminal window will run as you, with full permissions.


The thing is that on macOS at least, Codex does have the ability to use an actual sandbox that I believe prevents certain write operations and network access.


Is it asking you permission to run that python command? If so, then that's expected: commands that you approve get to run without the sandbox.

The point is that Codex can (by default) run commands on its own, without approval (e.g., running `make` on the project it's working on), but they're subject to the imposed OS sandbox.

This is controlled by the `--sandbox` and `--ask-for-approval` arguments to `codex`.


These are called flat because they are defined in flat files.

Apparently researchers call non-hierarchical state machines "flat machines" / "flat agents". Oh well!
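For what it's worth, a "flat" (single-level, non-hierarchical) state machine is just a transition table — no nested states. A minimal Python sketch (the state and event names here are made up):

```python
# One flat level of states -- no hierarchy, no nested sub-states.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "start"): "running",
    ("running", "stop"): "idle",
}

def step(state, event):
    """Return the next state; unknown (state, event) pairs leave it unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["start", "pause", "start", "stop"]:
    state = step(state, event)
print(state)  # "idle"
```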

I think editing post content locks after some time or edit count.

Well-formatted examples:

* https://github.com/memgrafter/flatagents/tree/main/sdk/pytho...

* https://github.com/memgrafter/research-crawler-flatagents

* https://github.com/memgrafter/claude-skills-flatagents


Probably for any case where an actual human is doing it. On an image you obviously want to do it at bake time, so I feel default-off with a flag would have been a better design decision for pip.

I just read the thread and use Python; I can't comment on what % of the speedup attributed to uv comes from this optimization.


Images are a good example where doing it at install time is probably best, yeah, since every run of the image starts "fresh", losing the compilation that happened the last time the image was started.

If it were an optional toggle, it would probably become best practice to enable compilation in Dockerfiles.
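The bake-time version is just the stdlib `compileall` module run once during the build. A sketch (the tiny `myapp` package here is faked in a temp dir, standing in for real installed packages):

```python
import compileall
import pathlib
import tempfile

# Sketch: precompile a package tree once at image build time, so every
# container start imports from __pycache__ instead of recompiling.
pkg = pathlib.Path(tempfile.mkdtemp()) / "myapp"
pkg.mkdir()
(pkg / "mod.py").write_text("X = 1\n")

compileall.compile_dir(str(pkg), quiet=1)
print(sorted(p.name for p in (pkg / "__pycache__").iterdir()))
```

In a Dockerfile this would typically be a `RUN python -m compileall <site-packages or app dir>` step right after the install.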


> On an image you obviously want to do it at bake time

It seems like tons of people are creating container images with an installer tool and having it do a bunch of installations, rather than creating the image with the relevant Python packages already in place. Hard to understand why.

For that matter, a pre-baked Python install could do much more interesting things to improve import times than just leaving a forest of `.pyc` files in `__pycache__` folders all over the place.

