Hacker News | jmalicki's comments

In Cursor you highlight and hit Ctrl-L, then use voice prompting - I can do this today!

All you have to do is record a table of fixup locations that you can fill in in a second pass once the labels are resolved.
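To make the idea concrete, here is a minimal sketch of one-pass assembly with a fixup table (the toy instruction format and names are illustrative, not any real assembler's):

```python
# One-pass assembly with a fixup table: forward branches are emitted with a
# placeholder operand, and once all labels are known, a second pass patches
# the recorded locations.

def assemble(lines):
    code = []        # list of (mnemonic, operand) "machine words"
    labels = {}      # label name -> code index
    fixups = []      # (code index, label name) to patch later

    for line in lines:
        if line.endswith(":"):
            labels[line[:-1]] = len(code)
        elif line.startswith("jmp "):
            target = line.split()[1]
            fixups.append((len(code), target))
            code.append(("jmp", None))   # placeholder operand
        else:
            code.append((line, None))

    for index, target in fixups:         # second pass: resolve placeholders
        code[index] = ("jmp", labels[target])
    return code

prog = ["jmp end", "nop", "end:", "nop"]
print(assemble(prog))   # -> [('jmp', 2), ('nop', None), ('nop', None)]
```

This works when every instruction has a fixed size; the relaxation problem below is what happens when it doesn't.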

In practice, one of the difficulties in getting _clang_ to assemble the Linux kernel (as opposed to GNU `as`, aka GAS) was having clang implement support for "fragments" in more places.

https://eli.thegreenplace.net/2013/01/03/assembler-relaxatio...

There were a few cases, IIRC, around usage of the `.` operator, which means something to the effect of "the current point in the program." It can be used in complex expressions, and sometimes resolving those requires multiple passes. So supporting GAS-compatible syntax in more than just the basic cases forces the architecture of your assembler to be multi-pass.
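The multi-pass requirement can be seen in branch relaxation: instruction sizes depend on offsets, and offsets depend on instruction sizes, so you iterate to a fixed point. A toy sketch (the two-instruction format and size rules are made up for illustration):

```python
# Toy branch relaxation: a jump is 2 bytes if its displacement fits in a
# signed byte, else 4 bytes. Sizes depend on offsets and offsets depend on
# sizes, so iterate until nothing changes.

def relax(instrs):
    # instrs: ("jmp", i) jumps to the start of instruction i (i may equal
    #         len(instrs), meaning "end of code"); ("pad", n) occupies n bytes.
    sizes = [2 if op == "jmp" else n for op, n in instrs]
    changed = True
    while changed:
        changed = False
        offsets = [0]
        for s in sizes:                  # byte offset of each instruction
            offsets.append(offsets[-1] + s)
        for i, (op, arg) in enumerate(instrs):
            if op == "jmp":
                disp = offsets[arg] - offsets[i + 1]
                want = 2 if -128 <= disp <= 127 else 4
                if want != sizes[i]:     # widening one jump can push
                    sizes[i] = want      # others out of range too
                    changed = True
    return sizes

# A jump over 200 bytes of padding must use the long form:
print(relax([("jmp", 2), ("pad", 200)]))   # -> [4, 200]
```

Expressions over `.` are the general form of this: any size or offset that depends on other sizes can invalidate earlier layout decisions, hence the fragment machinery.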


It's also interesting because the radius of curvature is smaller, meaning the distance to the horizon is shorter north-south, and a lot of these views are north-south. So the increase in mountain height more than overcomes the other effect!

Woah, I've been thinking about this whole project for so long, but never considered that!

Are we saying lines of sight are not symmetric? Why not?

The Earth is an oblate spheroid to a good approximation. It's not that they're not symmetric, but at the equator the north-south axis has a higher rate of curvature than anywhere else (while the east-west direction has a somewhat lower rate, because of the larger circumference due to the equatorial bulge).

So the fact that these long lines of sight are near the equator on a north-south axis (or, symmetrically, south-north) is remarkable: the high rate of curvature in that direction at those latitudes should give the shortest distance to the horizon on Earth, making those lines of sight that much more impressive!
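You can put rough numbers on this with the standard horizon approximation d ≈ √(2Rh), using the local radius of curvature R in each direction (WGS84 constants; refraction ignored, and the observer height is just an example):

```python
import math

# Horizon distance d ~ sqrt(2*R*h) for observer height h, using the local
# radius of curvature R in each direction at the equator (WGS84, no refraction).
a = 6378137.0                  # equatorial radius (m)
f = 1 / 298.257223563          # flattening
e2 = f * (2 - f)               # eccentricity squared

M_eq = a * (1 - e2)            # meridional (north-south) radius of curvature
N_eq = a                       # prime-vertical (east-west) radius of curvature

h = 6000.0                     # a ~6000 m peak
d_ns = math.sqrt(2 * M_eq * h) / 1000   # km
d_ew = math.sqrt(2 * N_eq * h) / 1000   # km

print(f"north-south: {d_ns:.1f} km, east-west: {d_ew:.1f} km")
```

The north-south horizon does come out shorter (roughly 276 km vs. 277 km for this height), though the difference is small, so the extra mountain height dominates, as the comment above says.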


I have had success with having the skill create a new branch, move pieces of code there, test them after the move, then commit.

So commit locally and have it recreate the commit as a sequence on another branch.


That sounds like stacked changes (if you're not familiar, think of how lkml patches are numbered 0/8, 1/8, etc., where each is a standalone change that only depends on the ones before it), and I have been having agents create sets of stacked PRs when I have a big diff.

Instead of an ordering of files, it creates an ordering of PRs, where each has its own description, independent CI, etc., and can be merged one at a time (perhaps at the small cost of the main branch containing unused library functions until the final PR is merged).


It's been a hugely popular PE play: any time a brand has a reputation for very well made, buy-it-for-life-level stuff that people pay a high price for, you can buy it and start reducing the quality for a few years, selling cheaper, lower-quality goods at the same price and hoping no one notices.

For the first few years, there aren't enough product issues for most of the hardcore enthusiasts to notice - maybe your tent ripping was just bad luck, and it may take two years for even a mediocre tent to weaken and fail for all but the people taking their tent to Denali or something.

Eventually the people who know move on and stop paying for the poorly made crap, but it's still seen as an exclusive brand by people who care about showing off that they can afford something expensive, as opposed to those for whom the quality was worth paying more for.


This isn't what the parent was talking about, but probabilistic programming languages are totally a thing!

https://en.wikipedia.org/wiki/Probabilistic_programming
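The core idea can be shown in plain Python, without any particular PPL (this is a toy rejection-sampling sketch, not how real systems like those on the linked page do inference):

```python
import random

# A probabilistic program: write a generative model, condition on observed
# data, and infer the posterior over latent variables -- here via
# brute-force rejection sampling.

def model():
    coin_is_biased = random.random() < 0.5          # prior: 50/50
    p_heads = 0.9 if coin_is_biased else 0.5
    flips = [random.random() < p_heads for _ in range(8)]
    return coin_is_biased, flips

def posterior_biased_given(observed_heads, trials=100_000):
    accepted = []
    for _ in range(trials):
        biased, flips = model()
        if sum(flips) == observed_heads:             # condition on the data
            accepted.append(biased)
    return sum(accepted) / len(accepted)

random.seed(0)
print(posterior_biased_given(8))   # seeing 8/8 heads -> biased very likely
```

Real PPLs replace the rejection loop with smarter inference (MCMC, variational methods), but the programming model - generate, condition, query - is the same.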


In the same way that, in Rust, you can download a package with Cargo and use it without reimplementing it, an LLM can download and explore all written human knowledge to produce a solution.

Or like how you can loop over all combinations of all inputs in a short computer program - it will just take a while!
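That brute-force shape is a few lines (this toy subset-sum search is my own illustration of the point):

```python
from itertools import product

# Brute force as a "short program": enumerate every combination of inputs
# and keep the best. Short to write, exponentially slow to run (2**n tries).
def best_subset(weights, capacity):
    best = ()
    for chosen in product([0, 1], repeat=len(weights)):
        picked = tuple(w for w, c in zip(weights, chosen) if c)
        if sum(picked) <= capacity and sum(picked) > sum(best):
            best = picked
    return best

print(best_subset([3, 5, 7, 11], capacity=13))   # -> (5, 7)
```

The program is short because all the work is pushed into runtime, which is the point: shortness of description and efficiency of execution are different axes.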

If you have a programming language where finding an efficient algorithm is a compiler optimization, then your programs can get a lot shorter.


But then models will do more computation, and so be slower.

What will have to change are workflows. Why are you ever waiting for the prompt to return? When you send an email, do you stare at your screen until you get a reply?


> Where is this all leading to, if after all the billions spent and all the benchmarks beaten conclusively, LLMs still can't do reasoning, can't do world-modelling, can't do context learning and so on, and so forth?

Humans completely displaced from the workforce while they harp "but LLMs can't really think and don't really have creativity!"


Humans are being displaced because moronic business magnates are trying to force-feed the country their wares, but failing spectacularly, so now they are forcing governments across the world to buy their wares under threat from the US government.

I think that’s more of a reflection on what employers are willing to pay for…

Agreed, but does it matter? LLMs will affect us by how much they displace the parts of us employers are willing to pay for. The "but they don't really think!" is just a cope.
