Non-trivial, doesn’t work with standard build tooling, and unless something has changed it produces installers that extract into several different files. You don’t just get a standalone statically linked binary that you can hand off.
I don't think that's even well-defined if you have arbitrary infix operators with arbitrary precedence and arbitrary associativity (think Haskell).
If $, & and @ are operators in that order of precedence, all right-associative, then using your notation, what is:
a & << b $ c >> @ d
If $ is reduced below & but above @ then it's the same as:
((a & b) $ c) @ d
If it's reduced below both & and @ then it becomes:
(a & b) $ (c @ d)
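For concreteness, here's a minimal precedence-climbing sketch in Python (the precedence numbers are made up for this example, not from the article) showing how the ungrouped string parses differently depending on where $ ends up relative to & and @:

  def parse(tokens, prec):
      pos = 0

      def expr(min_prec):
          nonlocal pos
          lhs = tokens[pos]  # operands are single identifiers
          pos += 1
          while pos < len(tokens) and prec[tokens[pos]] >= min_prec:
              op = tokens[pos]
              pos += 1
              rhs = expr(prec[op])  # recurse at the same level: right-associative
              lhs = f"({lhs} {op} {rhs})"
          return lhs

      return expr(0)

  tokens = "a & b $ c @ d".split()
  # '$' demoted below '&' but above '@':
  print(parse(tokens, {"&": 3, "$": 2, "@": 1}))  # (((a & b) $ c) @ d)
  # '$' demoted below both '&' and '@':
  print(parse(tokens, {"&": 3, "@": 2, "$": 1}))  # ((a & b) $ (c @ d))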
I think conceptualizing parentheses as "increase priority" is fundamentally not the correct abstraction; it's school brain, in a way. They are a way to specify an arbitrary tree of expressions, and in that sense they're complete.
I think ungrouping makes sense if you consider reverse parentheses as a syntactic construct added to the language, not as a replacement for the existing parentheses.
For instance, take "] e [" as the notation for reverse parentheses around an expression e. In each example below, the first line is the original expression, the second shows the reverse-parenthesis simplification, the third the grouping after parsing, and the fourth the postfix form:
A + B * (C + D) * (E + F)
=> A + B * (C + D) * (E + F)
=> (A + ((B * (C + D)) * (E + F)))
=> A B C D + * E F + * +
A + ] B * (C + D) [ * (E + F)
=> A + B * C + D * (E + F)
=> ((A + (B * C)) + (D * (E + F)))
=> A B C * + D E F + * +
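As a sanity check, a small shunting-yard sketch in Python (my own addition, assuming standard precedence and left-associative + and *) reproduces those two postfix lines from the simplified expressions:

  PREC = {"+": 1, "*": 2}

  def to_postfix(tokens):
      out, ops = [], []
      for t in tokens:
          if t in PREC:
              # pop operators of higher or equal precedence (left-associative)
              while ops and ops[-1] in PREC and PREC[ops[-1]] >= PREC[t]:
                  out.append(ops.pop())
              ops.append(t)
          elif t == "(":
              ops.append(t)
          elif t == ")":
              while ops[-1] != "(":
                  out.append(ops.pop())
              ops.pop()  # discard the "("
          else:
              out.append(t)  # operand
      while ops:
          out.append(ops.pop())
      return " ".join(out)

  print(to_postfix("A + B * ( C + D ) * ( E + F )".split()))  # A B C D + * E F + * +
  print(to_postfix("A + B * C + D * ( E + F )".split()))      # A B C * + D E F + * +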
So what ungrouping would mean is to undo the grouping done by regular parentheses.
However, this is not what is proposed later in the article.
Possibilities include reversing the priority inside the reverse parentheses, or lowering the priority wrt the rest of the expression.
I'm not sure I'm following, but I think what he means is that if normal parentheses around an addition mean the addition must precede multiplication, then these anti-parentheses around a multiplication have to make addition take place before it.
One of the mental frameworks that convinced me is how much of a "free action" it is. Have the LLM (or the agent) churn on some problem while you do something else. Come back and review the result. If you had to put significant effort into each query, I agree it wouldn't be worth it, but you can just type something into the textbox and wait.
I got nerd sniped and tried to reconstruct the games. The notation is weird and they aren't using modern conventions (a1 is dark, queens on the d-file, kings on the e-file, white goes first.)
Also, as the article mentions there are a few errors. With a bit of deduction this is my best attempt at reconstructing the first one:
I care very little about fashion, whether in clothes or in computers. I've always liked Anthropic products a bit more, but Codex is excellent; if that's your jam, more power to you.
I believe the argument isn't that ancient statues were ugly, but rather that reconstructions are ugly (unfortunately this has been used to argue against the now ascertained fact that ancient statues were indeed painted). Purely subjective judgement from someone not trained in the arts: that photo of the Augusto di Prima Porta doesn't look like a great paint-job. The idea that, like the statue itself, the painting must instead have been a great work of art lost to time seems solid to me.
This is very interesting if used judiciously, I can see many use cases where I'd want interfaces to be drawn dynamically (e.g. charts for business intelligence.)
What scares me is that even without arbitrary code generation, there's the potential for hallucinations and prompt injection to hit hard if a solution like this isn't sandboxed properly. An automatically generated "confirm purchase" button like in the shown example is... probably not something I'd leave entirely unsupervised just yet.
I agree saying "they don't think" and leaving it at that isn't particularly useful or insightful, it's like saying "submarines don't swim" and refusing to elaborate further. It can be useful if you extend it to "they don't think like you do". Concepts like finite context windows, or the fact that the model is "frozen" and stateless, or the idea that you can transfer conversations between models are trivial if you know a bit about how LLMs work, but extremely baffling otherwise.
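To make the "frozen and stateless" point concrete, here is a toy Python sketch (call_model, model-a and model-b are hypothetical stand-ins, not a real API): the model keeps nothing between calls, so the client re-sends the whole history every turn, and because that history is just data it can be handed to a different model mid-conversation.

  def call_model(model_name, messages):
      # Placeholder for an HTTP call to whatever chat API you use.
      return f"[{model_name} reply given {len(messages)} messages]"

  history = [{"role": "user", "content": "Explain context windows briefly."}]
  history.append({"role": "assistant", "content": call_model("model-a", history)})

  history.append({"role": "user", "content": "Now continue that thought."})
  # Nothing stops us from sending the exact same history to a different model:
  history.append({"role": "assistant", "content": call_model("model-b", history)})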
> or the fact that the model is "frozen" and stateless,
Much like a human adult. Models get updated less frequently than humans do, and AI systems can fetch new information and store it for context.
> or the idea that you can transfer conversations between models are trivial
because computers are better-organized than humanity.
My context window is about a day. I can remember what I had for lunch today, and sometimes what I had for lunch yesterday. Beyond that, my lunches are gone from my context window and are only in my training data. I have vague ideas about what dishes I ate, but don't remember what days specifically. If I had to tell you what separate dishes I ate in the same meal, I don't have specific memories of that. I remember I ate fried plantains, and I ate beans & rice. I assume they were on the same day because they are from the same cuisine, and am confident enough that I would bet money on it, but I don't know for certain.
One of my earliest memories is of painting a ceramic mug when I was about 3 years old. The only reason I remember it is because every now and then I think about what my earliest memory is, and then I refresh my memory of it. I used to remember a few other things from when I was slightly older, but no longer do, because I haven't had reasons to think of them.
I don't think humans have the specific black-and-white distinction between types of knowledge the way LLMs do, but there is definitely a lot of behavior that is similar to context window vs. training data (and a gradient in between). We remember recent things a lot better than less recent things. The amount of stuff we can hold in our "working memory" is quite limited. If you try to hold a complex thought in your mind, you can probably do that indefinitely, but if you then try to hold a second equally complex thought as well, you'll often lose the details of the first thought and need to reread or rederive those details.
A lot of people genuinely can't remember what they did an hour ago, but to be very clear, you're implying that an LLM can't "remember" something from an hour or three hours ago, when the opposite is true.
I can restart a conversation with an LLM 15 days later and the state is exactly as it was.
Can't do that with a human.
The idea that humans have a longer, more stable context window than LLMs CAN be true, or is even LIKELY to be true, for certain activities, but please let's be honest about this.
If you talk to someone for an hour about a technical topic, I would guesstimate that 90% of humans would start to lose track of details within about 10 minutes. So they write things down, or they mentally repeat to themselves the things they know or have recognized they keep forgetting.
I know this because it's happened continually in tech companies decade after decade.
LLMs have already passed the Turing test. They continue to pass it. They fool and outsmart people day after day.
I'm no fan of the hype AI is receiving, especially around overstating its impact in technical domains, but pretending that LLMs can't or don't consistently perform better than most human adults on a variety of different activities is complete nonsense.
It doesn't sound like you really understand what these statements mean. If LLMs are like any humans, it's those with late-stage dementia, not healthy adults.
It's one of the failure modes of online forums. Everyone piles on and you get an unrealistic opinion sample. I'm not exactly trying to shove AI into everything; I'm wary of overhyping and mostly conservative in my technology choices. Still, I get a lot out of LLMs and agents for coding tasks.
I have trouble understanding how a forum of supposedly serious coders can be so detached from reality, but I do know that this is one of HN’s pathologies.
I think it's more of a thread-bound dynamic rather than HN as a whole. If the thread starts positive you get "AGI tomorrow", if the thread starts negative you get "stochastic parrot".
But I see what you mean, there have been at least a few insane comment sections for sure.