Hacker News | qsort's comments


Non-trivial, doesn’t work with standard build tooling, and unless something has changed it produces installers that extract into several different files. You don’t just get a standalone statically linked binary that you can hand off.

I don't think that's even well-defined if you have arbitrary infix operators with arbitrary precedence and arbitrary associativity (think Haskell). Suppose $, & and @ are operators in that order of precedence, all right-associative. Using your notation, what is:

  a & << b $ c >> @ d
If $ is reduced below & but above @ then it's the same as:

  ((a & b) $ c) @ d
If it's reduced below both & and @ then it becomes:

  (a & b) $ (c @ d)
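To make the precedence setup concrete, here is a minimal Haskell sketch with hypothetical operators |$|, |&| and |@| standing in for $, & and @ (decreasing precedence, all right-associative). It prints the default grouping, which is exactly what the inverse parentheses would have to override:

  -- Hypothetical stand-ins for $, & and @ from the example above.
  -- Precedence decreases from |$| to |@|; all right-associative.
  infixr 8 |$|
  infixr 7 |&|
  infixr 6 |@|

  (|$|), (|&|), (|@|) :: String -> String -> String
  a |$| b = "(" ++ a ++ " $ " ++ b ++ ")"
  a |&| b = "(" ++ a ++ " & " ++ b ++ ")"
  a |@| b = "(" ++ a ++ " @ " ++ b ++ ")"

  main :: IO ()
  main = putStrLn ("a" |&| "b" |$| "c" |@| "d")
  -- prints: ((a & (b $ c)) @ d)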
I think conceptualizing parentheses as "increase priority" is fundamentally not the correct abstraction; it's school brain in a way. They are a way to specify an arbitrary tree of expressions, and in that sense they're complete.

Clearly we need left-associative and right-associative inverse parentheses.

a & )b $ c) @ d would mean ((a & b) $ c) @ d.

a & (b $ c( @ d would mean a & (b $ (c @ d)).

Combining both, a & )b $ c( @ d would mean (a & b) $ (c @ d).

;)


Am I stupid if I don't get it? What is the intended end state? What does "ungroup operands" mean?

I think ungrouping makes sense if you consider reverse parentheses as a syntactic construct added to the language, not a replacement for the existing parentheses.

For instance, using "] e [" as the notation for reverse parentheses around expression e, with the second line showing reverse-parenthesis simplification, the third line showing the grouping after parsing, and the fourth line using postfix notation:

  A + B * (C + D) * (E + F)
  => A + B * (C + D) * (E + F)
  => (A + (B * (C + D) * (E + F)))
  => A B C D + E F + * * +

  A + ] B * (C + D) [ * (E + F)
  => A + B * C + D * (E + F)
  => ((A + (B * C)) + (D * (E + F)))
  => A B C * + D E F + * +
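As a sanity check on those postfix lines, here is a minimal RPN evaluator sketch; the variable bindings are arbitrary assumptions, chosen only to show that the two groupings evaluate differently:

  import Data.Char (isAlpha)
  import Data.Maybe (fromMaybe)

  -- Minimal postfix (RPN) evaluator over single-letter variables.
  eval :: [(Char, Double)] -> String -> Double
  eval env = head . foldl step [] . words
    where
      step (y:x:st) "+" = (x + y) : st
      step (y:x:st) "*" = (x * y) : st
      step st [v] | isAlpha v = fromMaybe (error "unbound") (lookup v env) : st
      step _ tok = error ("bad token: " ++ tok)

  main :: IO ()
  main = do
    let env = zip "ABCDEF" [2, 3, 5, 7, 11, 13]
    print (eval env "A B C D + E F + * * +")  -- A + B*(C+D)*(E+F) = 866
    print (eval env "A B C * + D E F + * +")  -- (A + B*C) + D*(E+F) = 185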

So what ungrouping would mean is to undo the grouping done by regular parentheses.

However, this is not what is proposed later in the article.

Possibilities include reversing the priority inside the reverse parentheses, or lowering the priority with respect to the rest of the expression.
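The first reading is easy to demonstrate in Haskell with hypothetical operators whose priorities are reversed relative to ordinary arithmetic (addition binding tighter than multiplication):

  -- Hypothetical reversed-priority operators: .+. binds tighter
  -- than .*., the opposite of ordinary arithmetic.
  infixl 7 .+.
  infixl 6 .*.

  (.+.), (.*.) :: Int -> Int -> Int
  (.+.) = (+)
  (.*.) = (*)

  main :: IO ()
  main = print (2 .*. 3 .+. 4)  -- parses as 2 * (3 + 4) = 14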


I'm not sure I'm following, but I think what he means is that if normal parentheses around an addition mean the addition must precede multiplication, then anti-parentheses around a multiplication have to make the addition take place before it.

I stumbled on this:

https://lobste.rs/s/qoqfwz/inverse_parentheses#c_n5z77w

which should provide the answer.


One of the mental frameworks that convinced me is how much of a "free action" it is. Have the LLM (or the agent) churn on some problem while you do something else. Come back and review the result. If you had to put significant effort into each query, I agree it wouldn't be worth it, but you can just type something into the textbox and wait.

Are you counting the time/effort to evaluate the accuracy and relevance of an LLM left to "think" for a while?

I got nerd sniped and tried to reconstruct the games. The notation is weird and they aren't using modern conventions (a1 is dark, queens on the d-file, kings on the e-file, white goes first.)

Also, as the article mentions, there are a few errors. With a bit of deduction, this is my best attempt at reconstructing the first one:

https://lichess.org/HzzfuyWv

In keeping with the theme of exciting new technology, I tried giving the problem to Opus 4.5 but it seems to hallucinate badly: https://claude.ai/share/299fb10e-8465-41b3-bad5-85500291ed67


I care very little about fashion, whether in clothes or in computers. I've always liked Anthropic products a bit more but Codex is excellent, if that's your jam more power to you.

I believe the argument isn't that ancient statues were ugly, but rather that reconstructions are ugly (unfortunately this has been used to argue against the now-ascertained fact that ancient statues were indeed painted). Purely subjective judgement from someone not trained in the arts: that photo of the Augusto di Prima Porta doesn't look like a great paint job. The idea that, like the statue itself, the painting must instead have been a great work of art lost to time seems solid to me.

> the now ascertained fact that ancient statues were indeed painted

"Now ascertained"? Ancient sources specifically say they were painted.


This is very interesting if used judiciously, I can see many use cases where I'd want interfaces to be drawn dynamically (e.g. charts for business intelligence.)

What scares me is that even without arbitrary code generation, there's the potential for hallucinations and prompt injection to hit hard if a solution like this isn't sandboxed properly. An automatically generated "confirm purchase" button like in the example shown is... probably not something I'd leave entirely unsupervised just yet.


I agree that saying "they don't think" and leaving it at that isn't particularly useful or insightful; it's like saying "submarines don't swim" and refusing to elaborate further. It can be useful if you extend it to "they don't think like you do". Concepts like finite context windows, the fact that the model is "frozen" and stateless, or the idea that you can transfer conversations between models are trivial if you know a bit about how LLMs work, but extremely baffling otherwise.


> Concepts like

> finite context windows

like a human has

> or the fact that the model is "frozen" and stateless,

much like a human adult. Models get updated less frequently than humans, and AI systems can fetch new information and store it in their context.

> or the idea that you can transfer conversations between models are trivial

because computers are better-organized than humanity.


> much like a human adult.

I do hope you're able to remember what you had for lunch without incessantly repeating it to keep it in your context window


My context window is about a day. I can remember what I had for lunch today, and sometimes what I had for lunch yesterday. Beyond that, my lunches are gone from my context window and are only in my training data. I have vague ideas about what dishes I ate, but don't remember what days specifically. If I had to tell you what separate dishes I ate in the same meal, I don't have specific memories of that. I remember I ate fried plantains, and I ate beans & rice. I assume they were on the same day because they are from the same cuisine, and am confident enough that I would bet money on it, but I don't know for certain.

One of my earliest memories is of painting a ceramic mug when I was about 3 years old. The only reason I remember it is because every now and then I think about what my earliest memory is, and then I refresh my memory of it. I used to remember a few other things from when I was slightly older, but no longer do, because I haven't had reasons to think of them.

I don't think humans have the specific black-and-white distinctions between types of knowledge the way LLMs do, but there is definitely a lot of behavior that is similar to context window vs. training data (and a gradient in between). We remember recent things a lot better than less recent things. The amount of stuff we can hold in our "working memory" is limited. If you try to hold a complex thought in your mind, you can probably do that indefinitely, but if you then try to hold a second, equally complex thought as well, you'll often lose the details of the first and need to reread or rederive them.


Wouldn't context be comparable to human short-term memory, which could be neurons firing in a certain pattern repeatedly to keep it there?

How would you say human short-term memory works if not by repeated firing (similar to repeatedly feeding the same tokens back in over and over)?


A lot of people genuinely can't remember what they did an hour ago, but to be very clear, you're implying that an LLM can't "remember" something from an hour or three hours ago, when the opposite is true.

I can restart a conversation with an LLM 15 days later and the state is exactly as it was.

Can't do that with a human.

The idea that humans have a longer, more stable context window than LLMs can be, or is even likely to be, true for certain activities, but please let's be honest about this.

If you talk to someone for an hour about a technical topic, I would guesstimate that 90% of humans would start to lose track of details within about 10 minutes. So they write things down, or they mentally repeat to themselves the things they know they keep forgetting.

I know this because it's happened continually in tech companies decade after decade.

LLMs have already passed the Turing test. They continue to pass it. They fool and outsmart people day after day.

I'm no fan of the hype AI is receiving, especially around overstating its impact in technical domains, but pretending that LLMs can't or don't consistently perform better than most human adults on a variety of activities is complete nonsense.


The Turing test was passed back in the 60s by ELIZA; it doesn't mean anything.


Why doesn't it mean anything?


I do hope you're able to remember what buttons you just pressed without staring at your hands while doing so to keep it in your working memory

I do hope you're able to remember which browser tab you were on 5 tab switches ago without keeping track of it...


> much like a human adult.

It doesn't sound like you really understand what these statements mean. If LLMs are like any humans, it's those with late-stage dementia, not healthy adults.


It's one of the failure modes of online forums: everyone piles on and you get an unrealistic opinion sample. I'm not exactly trying to shove AI into everything; I'm wary of overhyping and mostly conservative in my technology choices. Still, I get a lot out of LLMs and agents for coding tasks.


I have trouble understanding how a forum of supposedly serious coders can be so detached from reality, but I do know that this is one of HN's pathologies.


I think it's more of a thread-bound dynamic rather than HN as a whole. If the thread starts positive you get "AGI tomorrow", if the thread starts negative you get "stochastic parrot".

But I see what you mean, there have been at least a few insane comment sections for sure.

