
I'd be curious if there were some measurements of the final effects, since presumably models won't <think> in caveman speak nor code like that

> Not listed here is how banks themselves have changed to be almost entirely online

Sorry what? Was this not the central theme of the article? (albeit with a title that used the word "iPhone" to be catchier)


Yeah, that could have been worded better. My point was more that the banks didn't turn into software (an app) with just developers working on it; rather, the labor force that was doing teller operations moved.


> This is wrong. It's not insider trading. Lutnick didn't have inside information. His son just had a brain. Anyone who read the case knew which way the court was going, it was the least surprising decision ever. Perhaps the only surprising thing is that the court ever heard it.

If this was so obvious, wouldn't there have been more competitors pushing down the value of it?


Think about how to actually pull this trade off. It isn't pushing a button on your trading app. Competitors can't enter this competition easily.


I thought his son just had a brain?


And a small army of folks to do his bidding. Cantor has over 10k people, he's the chairman.


Mr. Less-than-Consistently-Candid strikes again


> But now that most code is written by LLMs, it's as "hard" for the LLM to write Python as it is to write Rust/Go

The LLM still benefits from the abstraction provided by Python (fewer tokens and less cognitive load). I could see a pipeline working where one model writes in Python or so, then another model is tasked to compile it into a more performant language


It's very good (in our experience, YMMV of course) when the LLM writes a prototype in Python and then ports it automatically 1:1 to Rust for perf. We write prototypes in JS and Python and they get auto-ported to Rust; we have been doing this for about a year on all our projects where it makes sense. In the past months it has been incredibly good with Claude Code; it is absolutely automatic; we run it in a loop until all tests (many handwritten in the original language) succeed.
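The shape of that loop is roughly the following sketch. Everything here is a hypothetical stand-in: `run_tests` would wrap something like a `cargo test` invocation, and `ask_llm_to_fix` would be a call out to Claude Code with the failures; neither name comes from the parent comment.

```python
# Hypothetical sketch of the "port until tests pass" loop described above.
def port_loop(run_tests, ask_llm_to_fix, max_iters=20):
    for i in range(max_iters):
        failures = run_tests()        # e.g. collect failing tests from `cargo test` output
        if not failures:
            return i                  # all tests green: port accepted after i fix rounds
        ask_llm_to_fix(failures)      # feed the failing tests back to the model
    raise RuntimeError("port did not converge")
```

The `max_iters` cap is just a guard so an unportable change can't spin forever.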


IDK what's going on in your shop but that sounds like a terrible idea!

- Libraries don't necessarily map one-to-one from Python to Rust/etc.

- Paradigms don't map neatly; Python is OO, Rust leans more towards FP.

- Even if the code can be re-written in Rust, it's probably not the most Rustic (?) approach or the most performant.


It doesn't map anything 1 to 1, it uses our guidelines and architecture for porting it which works well. I did say YMMV anyway; it works well for us.


Sorry, so basically you're saying there are two separate guidelines, one for Python and one for Rust, and you have the LLM write it first in Python and then Rust. But I still don't understand why it would be any better than writing the code in Rust in one go? Why "priming" it in Python would improve the result in any way?

Also, what happens when bug fixes are needed? Again first in Py and then in Rs?


Why not get it to write it in Rust in the first place?


Presumably the thought experiment hasn’t matured to that point yet.


I think that's not as beneficial as having proper type errors and feeding those back into itself as it writes


Expressive linting seems more useful for that than lax typing without null safety.


NP (as in P = NP) is also much lower for Python than Rust on the human side.


What does that mean? Can you elaborate?


Sorry, yes. LLMs write code that's then checked by human reviewers. Maybe it will be checked less in the future. But I'm not seeing fully-autonomous AI on the horizon.

At that point, the legibility and prevalence of humans who can read the code becomes almost more important than which language the machine "prefers."


Well, verification is easier than creation (i.e., P ≠ NP). I think humans who can quickly verify something works will be in more demand than those who know how to write it. Even better: Since LLMs aren't as creative as humans (in-distribution thinking), test-writers will be in more demand (out-of-distribution thinkers). Both of these mean that humans will still be needed, but for other reasons.

The future belongs to generalists!


> The future belongs to generalists!

Couldn't be more correct.

The experienced generalists with techniques of verification testing are the winners [0] in this.

But one thing you cannot do is openly admit, or be found out to have said, something like "I don't know a single line of Rust/Go/Typescript/$LANG code but I used an AI to do all of it" when the system breaks down and you can't fix it.

It would be quite difficult to take a SWE seriously who prides themselves on having zero understanding and experience of building production systems and runs the risk of losing the company time and money.

[0] https://news.ycombinator.com/item?id=46772520


I prefer my C compiler to write my asm for me from my C code but I can still (and sometimes have to!) read the asm it creates.


P ≠ NP is NOT confirmed and my god I really do not want that to ever be confirmed

I really do want to live in the world where P = NP and we can get polynomial-time algorithms for the NP problems currently believed to be intractable.

I reject your reality and substitute my own.


I figure OP would try and give the models pure text forms of the game?

.....

l....

l....

l.ttt

l..t.


This is fair, but this seems like the only way to test this type of thing while avoiding the risk of harassing tons of farmers with AI emails. In the end, the performance will be judged on how much of a human harness is given


I associate "yello" with Homer Simpson: https://www.facebook.com/TheDoctorZaius/videos/7233283715092...

(fingers crossed I'm not somehow doxxing myself by sharing a fb link)


Forgot about that. So there's about a 100% chance my dad got it from that.


People will pay extra for Opus over Sonnet and often describe the $200 Max plan as cheap because of the time it saves. Paying for a somewhat better harness follows the same logic


The game looks really good, although I think it'd be improved if the sphere was a bit smaller. It feels like it takes too long for the game to become difficult


Here's a console command you can run to increase the snake length immediately, and thus the difficulty:

   (() => { let count = 50; const delay = 100; const interval = setInterval(() => { addSnakeNode(); if (--count <= 0) clearInterval(interval); }, delay);})()


Why wrap in a lambda?


Because I learned JS before ECMAScript 6 was widely supported by browsers and haven't written a ton of it targeting modern browsers. You're right that it's unnecessary.


Could be to allow use of local variables that do not leak into the scope this code is executed in. That's what I use this pattern for.


pro tip: no longer necessary

    { let count = 50; const interval = setInterval(() => { addSnakeNode(); if (--count <= 0) clearInterval(interval); }, 100) }


And polluting the global variable namespace hardly matters when using the console.


Speed should slightly increase with each new apple


I strongly disagree, I like that the challenge comes from the snake getting longer as opposed to speed.


Agree - my millennial brain got bored quickly and it was still very easy.


Easy up to ~70, interesting between 80-110, very hard around 120-130. I think scores above 200 are pretty sus, there is very little room on the sphere at that point (using the cheat from sibling comment). Anything >400 is definitely made up.

