
Inflation risk.

I think most people would accept inflation as less of a risk than 20+% swings of the crypto market on a fairly common basis.

If they’re that common, it’s easy to profit 20% from them.

That makes absolutely zero sense, and you know it. I understand you are here to essentially shill bitcoin given you have a company that exists because of it but at least argue in good faith.

A strange game. The only winning move is not to play.

> My multinational big corporation employer has reporting about how much each employee uses AI, with a naughty list of employees who aren't meeting their quota of AI usage.

“Why don’t you just make the minimum 37 pieces of flAIr?”


> the prompt is now the source that needs to be maintained

The inference response to the prompt is not deterministic. In fact, it’s probably chaotic since small changes to the prompt can produce large changes to the inference.
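For what it's worth, here is a toy sketch of the mechanism (not a real model; the vocabulary and probabilities below are invented for illustration): with temperature > 0, sampling makes the same prompt produce different completions across runs, while greedy decoding at temperature 0 is repeatable.

    # Toy illustration only: same prompt, different completions when sampling.
    import random

    # Hypothetical next-token distribution for one fixed prompt.
    VOCAB = ["return x", "return x + 1", "raise ValueError", "pass"]
    PROBS = [0.4, 0.3, 0.2, 0.1]

    def sample_completion(temperature: float = 1.0) -> str:
        """Pick a 'next token' by weighted random choice, as samplers do."""
        if temperature == 0.0:
            # Greedy decoding: always the single most likely token.
            return VOCAB[PROBS.index(max(PROBS))]
        weights = [p ** (1.0 / temperature) for p in PROBS]
        return random.choices(VOCAB, weights=weights, k=1)[0]

    print([sample_completion() for _ in range(5)])     # varies run to run
    print([sample_completion(0.0) for _ in range(5)])  # always the same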


> The inference response to the prompt is not deterministic.

So? Nobody cares.

Is the output of your C compiler the same every time you run it? How about your FPGA synthesis tool? Is that deterministic? Are you sure?

What difference does it make, as long as the code works?


> Is the output of your C compiler the same every time you run it?

Yes? Because of actual engineering, mind you, and not rolling the dice until the lucky number comes up.

https://reproducibility.nixos.social/evaluations/2/2d293cbfa...
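If you'd rather check it yourself than take a dashboard's word for it, here's a rough sketch (assuming a `cc` on your PATH and a `main.c` in the working directory; serious reproducible-build efforts also normalize environment details such as timestamps and build paths):

    # Compile the same source twice and compare hashes of the outputs.
    import hashlib
    import subprocess

    def build_and_hash(src: str, out: str) -> str:
        subprocess.run(["cc", "-c", src, "-o", out], check=True)
        with open(out, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    h1 = build_and_hash("main.c", "build1.o")
    h2 = build_and_hash("main.c", "build2.o")
    print("bit-identical" if h1 == h2 else "outputs differ")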


It's not true for a place-and-route engine, so why does it have to be true for a C compiler?

Nobody else cares. If you do, that's great, I guess... but you'll be outcompeted by people who don't.



That's an advertisement, not an answer.

Did you really read and understand this page in the 1 minute between my post and your reply, or did you write a dismissive answer immediately?

Eh, I'll get an LLM to give me a summary later.

In the meantime: no, deterministic code generation isn't necessary, and anyone who says it is is wrong.


The C compiler will still make working programs every time, so long as your code isn’t broken. But sometimes the code ChatGPT produces won’t work. Or it'll kinda work, but you’ll get weird, different bugs each time you generate it. No thanks.

Nothing matters but d/dt. It's so much better than it was a year ago, it's not even funny.

How weird would it be if something like this worked perfectly out of the box, with no need for further improvement and refinement?


> So? Nobody cares

Yeah, the business surely won't care when we rerun the prompt and the server works completely differently.

> Is the output of your C compiler the same every time you run it

I've never, in my life, had a compiler generate instructions that do something completely different from what my code specifies.

That you would suggest we will reach a level where an English language prompt will give us deterministic output is just evidence you've drunk the kool-aid. It's just not possible. We have code because we need to be that specific, so the business can actually be reliable. If we could be less specific, we would have done that before AI. We have tried this with no-code tools. Adding randomness is not going to help.


> I've never, in my life, had a compiler generate instructions that do something completely different from what my code specifies.

Nobody is saying it should. Determinism is not a requirement for this. There are an infinite number of ways to write a program that behaves according to a given spec. This is equally true whether you are writing the source code, an LLM is writing the source code, or a compiler is generating the object code.

All that matters is that the program's requirements are met without undesired side effects. Again, this condition does not require deterministic behavior on the author's part or the compiler's.

To the extent it does require determinism, the program was poorly- or incompletely-specified.
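To make that concrete, here's a small hypothetical example: two different source texts that both satisfy the same spec ("return the unique elements in ascending order"). Neither is "the" correct program; the same spec-level checks accept either one, regardless of whether a human, an LLM, or a code generator produced it.

    def unique_sorted_a(xs):
        return sorted(set(xs))

    def unique_sorted_b(xs):
        out = []
        for x in xs:
            if x not in out:
                out.append(x)
        out.sort()
        return out

    # The spec-level checks don't care which implementation they're given.
    for impl in (unique_sorted_a, unique_sorted_b):
        assert impl([3, 1, 2, 3]) == [1, 2, 3]
        assert impl([]) == []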

> That you would suggest we will reach a level where an English language prompt will give us deterministic output is just evidence you've drunk the kool-aid.

No, it's evidence that you're arguing with a point that wasn't made at all, or that was made by somebody else.


You're on the wrong axis. You have to be deterministic about following the spec, or it's a BUG in the compiler. Whether or not you actually have the exact same instructions, a compiler will always do what the code says or it's bugged.

LLMs do not and cannot follow the spec of English reliably, because English is open to interpretation, and that's a feature. It makes LLMs good at some tasks, but terrible for what you're suggesting. And it's weird because you have to ignore the good things about LLMs to believe what you wrote.

> There are an infinite number of ways to write a program that behaves according to a given spec

You're arguing for more abstractions on top of an already leaky abstraction. English is not an appropriate spec. You can write 50 pages of what an app should do and somebody will get it wrong. It's good for ballparking what an app should do, and LLMs can make that part faster, but it's not good for reliably plugging into your business. We don't write vars, loops, and ifs for no reason. We do it because, at the end of the day, an English spec is meaningless until someone actually encodes it into rules.

The idea that this will be AI, and we will enjoy the same reliability we get with compilers, is absurd. It's also not even a conversation worth having when LLMs hallucinate basic Linux commands.


People are betting trillions that you're the one who's "on the wrong axis." Seems that if you're that confident, there's money to be made on the other side of the market, right? Got any tips?

Essentially all of the drawbacks to LLMs you're mentioning are either already obsolete or almost so, or are solvable by the usual philosopher's stone in engineering: negative feedback. In this case, feedback from carefully-structured tests. Safe to say that we'll spend more time writing tests and less time writing original code going forward.
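One way to picture tests as that negative feedback loop, with an entirely hypothetical generated function (parse_price is a toy example of mine, not anyone's API): the human owns the checks, and the generated implementation gets regenerated or repaired until they pass.

    # Hypothetical workflow: the human writes the spec checks;
    # a model proposes parse_price until spec_checks() passes.
    def spec_checks(parse_price) -> bool:
        cases = {
            "$1,234.50": 1234.50,
            "12": 12.0,
            "  $0.99 ": 0.99,
        }
        try:
            return all(abs(parse_price(s) - want) < 1e-9 for s, want in cases.items())
        except Exception:
            return False

    # One candidate implementation (could have come from a model run):
    def parse_price(s: str) -> float:
        return float(s.strip().lstrip("$").replace(",", ""))

    assert spec_checks(parse_price)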


> People are betting trillions that you're the one who's "on the wrong axis."

People are betting trillions of dollars that AI agents will do a lot of useful economic work in 10 years. But if you take the best LLMs in the world, and ask them to make a working operating system, C compiler or web browser, they fail spectacularly.

The insane investment in AI isn't because today's agents can reliably write software better than senior developers. The investment is a bet that they'll be able to reliably solve some set of useful problems tomorrow. We don't know which problems they'll be able to reliably solve, or when. They're already doing some useful economic work. And AI agents will probably keep getting smarter over time. That's all we know.

Maybe in a few years LLMs will be reliable enough to do what you're proposing. But neither I - nor most people in this thread - think they're there yet. If you think we're wrong, prove us wrong with code. Get ChatGPT - or whichever model you like - to actually do what you're suggesting. Nobody is stopping you.


> Get ChatGPT - or whichever model you like - to actually do what you're suggesting. Nobody is stopping you.

I do, all the time.

> But if you take the best LLMs in the world, and ask them to make a working operating system, C compiler or web browser, they fail spectacularly.

Like almost any powerful tool, there are a few good ways to use LLM technology and countless bad ways. What kind of moron would expect "Write an operating system" or "Write a compiler" or "Write a web browser" to yield anything but plagiarized garbage? A high-quality program starts with a high-quality specification, same as always. Or at least with carefully-considered intent.

The difference is, given a sufficiently high-quality specification, an LLM can handle the specification->source step, just as a compiler or assembler relieves you of having to micromanage the source->object code step.

IMHO, the way it will shake out is that LLMs as we know them today will be only components, perhaps relatively small ones, of larger systems that translate human intent to machine function. What we call "programming" today is only one implementation of a larger abstraction.


> I expect actual writing of code "by hand" to be the same sort of activity as doing integrals by hand - something you may do either to advance the state of the art, or recreationally, but not something you would try to do "in anger" when faced with a looming project deadline.

This doesn’t seem like a good example. People who engineer systems that rely on integrals still know what an integral is. They might not be doing it manually, but it’s still part of the tower of knowledge that supports whatever work they are doing now. Say you are modeling some physical system in Matlab - you know what an integral is, how it connects with the higher level work that you’re doing, etc.

An example from programming: you know what process isolation is, and how memory is allocated, etc. You’re not explicitly working with that when you create a new python list that ends up on the heap, but it’s part of your tower of knowledge. If there’s a need, you can shake off the cobwebs and climb back down the tower a bit to figure something out.

So here’s my contention: LLMs make it optional to have the tower of knowledge that is required today. Some people seem to be very productive with agentic coding tools today - because they already have the tower. We are in a liminal state that allows for this, since we all came up in the before time, struggling to get things to compile, scratching our heads at core dumps, etc.

What happens when you no longer need to have a mental model of what you’re doing? The hard problems in comp sci and software engineering are no less hard after the advent of LLMs.


Here's one way to think about it:

Architects are not civil engineers and often don't know the details of construction, project management, structural engineering, etc. For a few years there will still be a role for a human "architect", but most of the specific low-level stuff will be automated. Eventually there won't be an architect either, but that may be 10 years away.


An optional tower of knowledge leads to a ballooning of incompetence and future problems.

How much of this usage is replacing a web search or spelling/grammar checks with something orders of magnitude more costly?

Growth in the PC market and internet usage had a substantial bottom-up component. The PC, even without connectivity, was useful for word processing, games, etc. Families stretched their budgets to buy one in the '80s and '90s.

Internet traffic famously doubled every 100 days during the expansion era. Its usefulness was blindingly obvious - there was no need for management to send out emails warning that they were monitoring internet usage, and you'd better make sure that you were using it enough. Can you imagine!

We are at a remarkable point in tech. The least-informed people in an organization (execs) are pushing a technology onto their organizations. A jaw-droppingly enormous amount of capital is being deployed in essentially a "pushing on a rope" scenario.


I use AnyList for recipes, grocery/shopping lists and checklists. It’s a great app!


I don’t know how you work, but I spend a good portion of my day in a terminal while working on AI-type projects.

The terminal never left.


I'd like to draw a distinction between text-based, word/command-driven interfaces and the "terminal" itself.

It so happens that right now one is synonymous with the other, but there's no intrinsic requirement.

There's probably something to be said for the inherent constraints imposed by the terminal protocol, but, again, we can build the same things without that.


It is a “joke” insofar as it’s an asinine undertaking.

It’s not a “joke” in the sense of being lighthearted or unserious: there was a press conference at the White House. Official US maps have been updated. Google Maps has been updated.

