Yeah, that could have been worded better. My point was more that banks didn't turn into software (an app) maintained only by developers; it's just that the labor force that had been doing teller operations moved.
> This is wrong. It's not insider trading. Lutnick didn't have inside information. His son just had a brain. Anyone who read the case knew which way the court was going, it was the least surprising decision ever. Perhaps the only surprising thing is that the court ever heard it.
If this was so obvious, wouldn't there have been more competitors pushing down the value of it?
> But now that most code is written by LLMs, it's as "hard" for the LLM to write Python as it is to write Rust/Go
The LLM still benefits from the abstraction provided by Python (fewer tokens and less cognitive load). I could see a pipeline working where one model writes in Python or similar, then another model is tasked with compiling it into a more performant language.
It's very good (in our experience, YMMV of course) to have the LLM write the prototype in Python and then port it automatically 1:1 to Rust for perf. We write prototypes in JS and Python and then have them auto-ported to Rust, and we have been doing this for about a year for all our projects where it makes sense. In the past months it has been incredibly good with Claude Code; it is absolutely automatic: we run it in a loop until all the tests (many handwritten in the original language) succeed.
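A minimal sketch of that kind of loop, with `run_tests` and `fix` as hypothetical stand-ins for the real test command (e.g. the Rust test suite) and the LLM invocation; the names and structure are illustrative, not the commenter's actual tooling.

```python
def run_until_green(run_tests, fix, max_iters=10):
    """Repeat the fix/test cycle until the suite passes or the budget runs out.

    run_tests: callable returning a list of failures (empty list = all green).
    fix:       callable that takes the failures and attempts a repair, e.g.
               by feeding them back to a coding agent (stand-in here).
    Returns the 1-based attempt number on success, or None on give-up.
    """
    for attempt in range(1, max_iters + 1):
        failures = run_tests()
        if not failures:
            return attempt  # all tests pass; the port is accepted
        fix(failures)       # hand the failing output back to the agent
    return None
```

In practice `run_tests` would shell out to something like `cargo test` and `fix` would re-invoke the agent with the failure log; the loop itself is all the orchestration needed.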
Sorry, so basically you're saying there are two separate pipelines, one for Python and one for Rust, and you have the LLM write it first in Python and then in Rust. But I still don't understand why that would be any better than writing the code in Rust in one go. Why would "priming" it in Python improve the result in any way?
Also, what happens when bug fixes are needed? Again first in Py and then in Rs?
Sorry, yes. LLMs write code that's then checked by human reviewers. Maybe it will be checked less in the future. But I'm not seeing fully-autonomous AI on the horizon.
At that point, the legibility and prevalence of humans who can read the code becomes almost more important than which language the machine "prefers."
Well, verification is easier than creation (cf. P vs. NP: checking a solution is easier than finding one). I think humans who can quickly verify that something works will be in more demand than those who know how to write it. Even better: since LLMs aren't as creative as humans (in-distribution thinking), test-writers will be in more demand (out-of-distribution thinkers). Both of these mean that humans will still be needed, but for other reasons.
Experienced generalists with verification-testing techniques are the winners [0] in this.
But one thing you cannot do is openly admit (or be found out to have said) something like "I don't know a single line of Rust/Go/TypeScript/$LANG code, but I used an AI to do all of it" when the system breaks down and you can't fix it.
It would be quite difficult to take seriously a SWE who prides themselves on having zero understanding of, or experience with, building production systems, and who risks losing the company time and money.
This is fair, but it seems like the only way to test this type of thing while avoiding the risk of harassing tons of farmers with AI emails. In the end, the performance will be judged on how much of a human harness is given.
People will pay extra for Opus over Sonnet and often describe the $200 Max plan as cheap because of the time it saves. Paying for a somewhat better harness follows the same logic.
The game looks really good, although I think it'd be improved if the sphere were a bit smaller. It feels like it takes too long for the game to become difficult.
Because I learned JS before ECMAScript 6 was widely supported by browsers and haven't written a ton of it targeting modern browsers. You're right that it's unnecessary.
Easy up to ~70, interesting between 80 and 110, very hard around 120-130. I think scores above 200 are pretty sus; there is very little room on the sphere at that point (using the cheat from the sibling comment). Anything >400 is definitely made up.