Hacker News | nlehuen's comments

And just like that, you find yourself implementing a compiler (specs to plan) and a virtual machine (plan to actions)!


> And just like that, you find yourself implementing a compiler (specs to plan) and a virtual machine (plan to actions)!

Not just any compiler, but one for a non-typesafe, ad-hoc, informally specified grammar with a bunch of unspecified or under-specified behaviour.

Not sure if we can call this a win :-)


Greenspun's tenth rule in action!


It can be type safe and testable with free monads
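Free monads are really a Haskell/Scala idiom, but a loose Python analogue of the same idea (all names here are invented for illustration) is to represent effects as plain data and swap interpreters, so tests can run a pure interpreter while production runs an effectful one:

```python
from dataclasses import dataclass

# Effects as plain data: a loose analogue of a free monad's instruction set.
@dataclass
class WriteLine:
    text: str

def program():
    # The "program" yields effect descriptions instead of performing them.
    yield WriteLine("hello")
    yield WriteLine("world")

def run_real(prog):
    # Effectful interpreter: actually performs the side effects.
    for effect in prog:
        print(effect.text)

def run_test(prog):
    # Pure interpreter: just records what would have happened.
    return [e.text for e in prog]

assert run_test(program()) == ["hello", "world"]
```

The program itself never touches I/O, which is what makes it testable; static typing (in a language that has it) then makes the instruction set type-safe too.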


This is why I think things like devops benefit from the traditional computer science education. Once you see the pattern, whatever project you were assigned looks like something you've done before. And your users will appreciate the care and attention.


I think you're already doing that? The only thing that's added is serializing the plan to a file and then deserializing it to make the changes.


Yeah any time you're translating "user args" and "system state" to actions + execution and supporting a "dry run" preview it seems like you only really have two options: the "ad-hoc quick and dirty informal implementation", or the "let's actually separate the planning and assumption checking and state checking from the execution" design.
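A minimal sketch of the second option, with hypothetical action and state names: build the plan as data first, so a dry run can print (or serialize) it without touching anything:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class CreateFile:  # hypothetical action type; a plan is just a list of these
    path: str
    contents: str

def plan(desired, current):
    """Planning phase: diff desired state against current state."""
    return [CreateFile(p, c) for p, c in desired.items() if p not in current]

def execute(actions, dry_run=True):
    """Execution phase: the only place real I/O would happen."""
    log = []
    for a in actions:
        if dry_run:
            log.append(f"would create {a.path}")
        else:
            log.append(f"create {a.path}")  # real file writes would go here
    return log

actions = plan({"a.txt": "hi", "b.txt": "yo"}, current={"b.txt": "yo"})
print(json.dumps([asdict(a) for a in actions]))  # the plan serializes cleanly
print(execute(actions, dry_run=True))
```

Because the plan is inert data, "preview", "save plan to file", and "apply" all fall out of the same structure.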


I was thinking that he's describing implementing an initial algebra for a functor (≈AST) and an F-Algebra for evaluation. But I guess those are different words for the same things.
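In very concrete (and much less general) terms: the AST plays the role of the initial algebra, and the evaluator is one F-algebra giving it an interpretation. A toy sketch, with invented node names:

```python
from dataclasses import dataclass

# Toy expression "functor": a node is either a literal or an addition.
@dataclass
class Lit:
    value: int

@dataclass
class Add:
    left: "Lit | Add"
    right: "Lit | Add"

# An "algebra" for evaluation: one case per constructor.
def evaluate(node):
    if isinstance(node, Lit):
        return node.value
    return evaluate(node.left) + evaluate(node.right)

print(evaluate(Add(Lit(1), Add(Lit(2), Lit(3)))))  # 6
```

A pretty-printer would be a second algebra over the same AST, which is the point of the separation.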


I came here to write that :-)


Actually there are π(N) ~ N / ln(N) primes less than N per the Prime Number Theorem, so π(2^160) ~ 2^153.2; this only drops about 7 bits. So that does increase the odds of collision, but much less than I expected!
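The estimate is easy to sanity-check numerically:

```python
import math

N = 2 ** 160
# Prime Number Theorem: pi(N) ~ N / ln(N)
approx_primes = N / math.log(N)
# Effective "bit width" of the prime space:
bits = math.log2(approx_primes)  # ~153.2
lost = 160 - bits                # ~6.8 bits dropped, i.e. about 7
print(bits, lost)
```

The ~7 bits is just log2(ln(2^160)) = log2(160 · ln 2) ≈ log2(110.9) ≈ 6.8.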


Maths saved the day again!

I added a section to the README and pages site noting your logic.


It’s ok, you can still assign a unique hash for more than half of the atoms in the universe.


There are at least some in the Paris subway, including one that went at 12 km/h but was decommissioned in 2011:

https://en.wikipedia.org/wiki/Moving_walkway#Trottoir_roulan...


That one was in service during roughly the same period I was passing through Montparnasse station somewhat regularly, and over those years I never managed to take it: it was always either broken or running opposite to my direction.

I do think a concept with parallel tracks moving at different speeds would have been easier to use and more reliable though. But it might not have been revolutionary/over-engineered enough to attract attention and subsidies.


Man, they should've designed it like in the video, with parallel tracks moving at different speeds. But people's lack of attention would probably lead them to park a foot on each track and take a tumble.

Speaking of speed, in the Stockholm main station the escalators go faster than others I've experienced... But I don't know if they've adjusted the speed since my experience years ago.


Decommissioned but still rolling, just slower.


I initially thought that this space was encumbered with patents, but Gemini Deep Research says the main patent expired in 2018: https://docs.google.com/document/d/1GmWGZMe_cjszV5eEegeFiWXK...


China is west of the USA.


The globe is roughly one half open water (the Pacific) and one half continents. China is east on the continent half.


Europe is west of the US


No, it’s east. Far east.


Not to worry, there is a 278 page book about initialization in C++!

https://leanpub.com/cppinitbook

(I don't know whether it's good or not, I just find it fascinating that it exists)


Wow! Exhibit 1 for the prosecution.


C++ doesn't have initiation hazing rituals, but initialization hazing rituals. (One of which is that book.)


That's what I've been saying, every line of C++ is a book waiting to be written.


Well, authors are incentivized to write long books. Having said that, it obviously doesn't change the fact that C++ init is indeed bonkers.


What would be the incentive for making this a long book? Couldn't be money.


It is, actually. It's been shown that longer books make more sales because they are considered more trustworthy, so authors are incentivized to artificially drag them out longer than they actually need to be.


Ever written one? How much did you make?


The money isn't from book sales. The money is that you can charge higher consultant fees because "you wrote the book". If you don't play the game, of course you won't make money, but writing a book is one step. (The full game has lots of different paths; there are other ways to make a lot of money without writing a book.)


Right, so no money in increased sales, which was obviously the point of the comment.


I imagine if I'd managed to actually memorize all of C++'s initialization rules, I'd probably have to write a book too just to get it all out, or I'd lose my sanity.


Then you can proudly put “C++ initialization consultant” on your resumé and get paid $1000 a day fixing class constructors at Fortune 500 companies.


There are no limits to how much of a specialized expert you can become in C++.

Knowing all of it just isn't possible from my experience.


Imagine you're in a world where magazines are dead, but books are still a thing, and stores won't stock a thin book.


I left on 2022-12-18 after some of the muskian shenanigans and I haven't missed it since, quite the contrary in fact. Before that I had to put a time limit on using the app to fight off the dark patterns.

Now I'm going to actually delete my account.


Holy cow, I created my account in May of 2007. What a different world we live in now.


You are posting this under a pseudonym. If you published something horrific or illegal, it would be this web site's responsibility to censor your content and/or identify you when asked by the authorities. Which do you prefer?


> when asked by authorities

Key point right here.

You let people post what they will, and if the authorities get involved, cooperate with them. HN should not be preemptively monitoring all comments and making corporate moralistic judgments on what you wrote and censoring people who mention Mickey Mouse or post song lyrics or talk about hotwiring a car.

Why shouldn't OpenAI do the same?


It seems reasonable to work with law enforcement if information provides details about a crime that took place in the real world. I am not sure what purpose censoring as a responsibility would serve? Who cares if someone writes a fictional horrific story? A site like this may choose to remove noise to keep the quality of the signal high, but preference and responsibility are not the same.


This website is not a tool - not really.

Your keyboard is.

Censoring AI generation itself is very much like censoring your keyboard or text editor or IDE.

Edit: Of course, "literally everything is a tool", yada yada. You get what I mean. There is a meaningful difference between tools that translate our thoughts to a digital medium (keyboards) and tools that share those thoughts with others.


A website is almost certainly a tool. It has servers and distributes information typed on thousands of keyboards to millions of screens.


HN is the one doing the distribution, not the user. The latter is free to type whatever they want, but they are not entitled to have HN distribute their words. Just like a publisher does not have to publish a book they don't want to.


When someone posts on FB, they don't consider that FB is publishing their content for them.


I also work at Google and I agree with the general sentiment that AI completion is not doing engineering per se, simply because writing code is just a small part of engineering.

However in my experience the system is much more powerful than you described. Maybe this is because I'm mostly writing C++ for which there is a much bigger training corpus than JavaScript.

One thing the system is already pretty good at is writing entire short functions from a comment. The trick is not to write:

  function getAc...

But instead:

  // This function smargls the bleurgh
  // by flooming the trux.
  function getAc...

This way the completion goes much farther and the quality improves a lot. Essentially, use comments as the prompt to generate large chunks of code, instead of giving minimal context to the system, which limits it to single-line completion.


This kind of not having to think about the implementation worries me the most, especially in a language that we've by now well established can't be written safely by humans (including by Google's own research into Android vulnerabilities, if I'm not mistaken), at least at the current level of LLM capability.

Time will tell whether it outputs worse, equal, or better quality than skilled humans, but I'd be very wary of anything it suggests beyond obvious boilerplate (like all the symbols needed in a for loop) or naming things (function name and comment autocompletes like the person above you described)


> worries me the most

It isn't something I worry about at all. If it doesn't work and starts creating bugs and horrible code, the best places will adjust to that and it won't be used or will be used more judiciously.

I'll still review code like I always do and prevent bad code from making it into our repo. I don't see why it's my problem to worry about. Why is it yours?


Because I do security audits

Functional bugs in edge cases are annoying enough, and I seem to run into them regularly as a user, but there's also a whole class of people creating edge cases for their own purposes. The nonchalant "if it doesn't work"... I don't know whether that confirms my suspicion that not all developers are aware of (as a first step, let alone control for) the risks.


And especially if it generates bugs in ways different from humans - human review might be less effective at catching it...


It generates bugs in pretty similar ways. It’s based on human-written code, after all.

Edge cases will usually be the ones to get through. Most developers don’t correctly write tests that exercise the limits of each input (or indeed have time to both unit test every function that way, and integration test to be sure the bigger stories are correctly working). Nothing about ai assist changes any of this.

(If anybody starts doing significant fully unsupervised “ai” coding they would likely pay the price in extreme instability so I’m assuming here that humans still basically read/skim PRs the same as they always have)


Except that no one trusts Barney down the hall who has Stack Overflow open 24/7. People naturally trust AI implicitly.


It's worrying, yes, but we've had stackoverflow copy-paste coding for over a decade now already, which has exactly the same effects.

This isn't a new concern. Thoughtless software development started a long time ago.


As a security consultant, I think I'm aware of security risks all the time, also when I'm developing code just as a hobby in my spare time. I can't say that I've come across a lot of Stack Overflow code that was unsafe. It happens (like unsafe SVG file upload handling advice), and I know of analyses that find such code in spades, but I personally correct the few instances that I see (I have enough Stack Overflow rep to downvote, comment, or even edit without the user's approval, though I'm not sure I've ever needed that). The ones found in studies may be in less-popular answers that people don't come across as often; otherwise we should be seeing more of them, both personally and in customers' code.

So that's not to say there is nothing to be concerned about on stackoverflow, just that the risk seems manageable and understood. You also nearly always have to fit it to your own situation anyway. With the custom solutions from generative models, this is all not yet established and you're not having to customise (look at) it further if it made a plausible-looking suggestion

Perhaps this way of coding ends up introducing fewer bugs. Time will tell, but we all know how many wrong answers these things generate in text as well as what they were trained on, giving grounds for worry—while also gathering experience, of course. I'm not saying to not use it at all. It's a balance and something to be aware of

I also can't say that I find it to be thoughtless when I look for answers on stackoverflow. Perhaps as a beginning coder, you might copy bigger bits? Or without knowing what it does? That's not my current experience, though


This is a good idea even outside of Google, with tools like copilot and such.

Often when I don't know exactly what function / sequence of functions I need to achieve a particular outcome, I put in a comment describing what I want to do, and Copilot does the rest. I then remove the comment once I make sure that the generated code actually works.

I find it a lot less flow-breaking than stackoverflow or even asking an LLM.

It doesn't work all of the time, and sometimes you do have to Google still, but for the cases it does work for, it's pretty nice.


Why remove the comment that summarises the intent for humans? The compiler will ignore your comment anyway, so it's only there for the next human who comes along and will help them understand the code


Because the code, when written, is usually obvious enough.

Something like:

  query = query.orderBy(field: "username", Ordering.DESC)

Doesn't need an explanation, but when working in a language I don't know well, I might not remember whether I'm supposed to call orderBy on the query or on the ORM module and pass query as the argument, whether the kwarg is called "field" or "column", whether it wants a string or something like `User.name` as the column expression, how to specify the ordering and so on.


Like he says, the "comment" describes what he wants to do. That's not what humans are interested in. The human already knows "what he wants to do" when they read the code. It's the things like "why did he want to do this in the first place?" that is lacking in the code, and what information is available to add in a comment for the sake of humans.

Remember, LLMs are just compilers for programming languages that just so happen to have a lot of similarities with natural language. The code is not the comment. You still need to comment your code for humans.


> Like he says, the "comment" describes what he wants to do. That's not what humans are interested in.

When I'm maintaining other people's code, or my own after enough time has gone by, I'm very interested in that sort of comment. It gives me a chance to see if the code as written does what the comment says it was intended to do. It's not valuable for most of the code in a project, but is incredibly valuable for certain key parts.

You're right that comments about why things were done the way they were are the most valuable ones, but this kind of comment is in second place in my book.


Or for something that needs a quick mathematical lemma or a worked example. A comment on the "what" is fantastic there.


It's often unnecessarily verbose. If you read a comment and glance at the code that follows, you'll understand what it is supposed to do. But the comment you're giving as an instruction to an LLM usually contains information which will then be duplicated in the generated code.


I see. Might still be good to have a verbose comment than no comment at all, as well as a marker of "this was generated" so (by the age of the code) you have some idea of what quality the LLM was in that year and whether to proofread it once more or not


External comments are API usage comments. LLM prompts are also implementation proposals.

Implementation comments belong inside the implementation, so they should be moved there if not deleted.


Next human will put the code in a prompt and ask what it does. Chinese Whispers.


I tried making a meme some months ago with exactly this idea, but for emails. One person tells an LLM "answer that I'm fine with either option" and sends a 5 KB email; the recipient then gets the automatic summary function to tell them (in a good case) "they're happy either way" or (in a bad case) "they don't give a damn". It didn't really work; it was too complex for the meme format, as far as my abilities went. But yeah, the bad-translator effect is something I'm very much expecting from people who use an LLM without disclosing it.


If someone is going to use an LLM to send me an email, I'd much rather them just send me the prompt directly. For the LLM message to be useful the prompt would have included all the context and details anyway, I don't need an LLM to make it longer and sound more "professional" or polite.


That is actually exactly my unstated point / the awareness I was hoping to achieve by trying to make that meme :D


Not necessarily. Your prompt could include instructions to gather information from your emails and address book to tell your friend about all the relevant contacts you know in the shoe industry.


Well that sounds reasonable enough. My only request is that you send me the prompt and let me decide if I want to comply...informed consent!



Wow, I love good, original programming jokes like these, even just the ideas of the jokes. I used to browse r/ProgrammerHumor frequently, but it is too repetitive: mostly recycled memes, and rarely anything new.

This is one that I really liked: https://www.reddit.com/r/ProgrammerHumor/comments/l5gg3t/thi...


(No need to Orientalize to defamiliarize, especially when a huge fraction of the audience is Chinese, so Orientalizing doesn't defamiliarize anyway. A game of Whispers or Telephone works fine.)


Do the Chinese call it English Whispers?


Chinese-Americans, at least, call it a game of Telephone, like everyone else in the English-speaking world except for the actual English.

We call it “Telephone” because “Chinese Whispers” not only sounds racist, it is also super confusing. You need a lot of cultural context to understand the particular way in which Chinese whispers would be different from any other set of whispers.


I happened to re-read this, and to be clear, I'm not Chinese-American. the "we" there means "everyone else in the English-speaking world except for the actual English."


It’s all Greek to them.


Pardon my French.


I can guarantee you there is more publicly accessible javascript in the world than C++.

Copilot will autocomplete entire functions as well, sometimes without comments or even after just typing "f". It uses your previous edits as context and can assume what you're implementing pretty well.


I can guarantee you that the author was referring to code within Google. That is, their tooling is trained on internal code bases. I imagine C++ dwarfs JavaScript there.


Google does not write much publicly available JavaScript. They wrote their own special flavor. (Same for any huge legacy operation.)


Can we get some more info on what you're referring to?


They're probably talking about Closure Compiler type annotations [0], which never really took off outside Google, but (imo) were pretty great in the days before TypeScript. (Disclosure: Googler)

0. https://github.com/google/closure-compiler/wiki/Annotating-J...


I find writing code to be almost relaxing, plus it's really a tiny fraction of dev work. I'm not too excited about potential productivity gains based purely on authoring snippets. I find it much more interesting to boost maintainability, robustness, and other quality metrics (not the quality of the AI output, but the actual quality of the code base).


I frequently use copilot and also find that writing comments like you do, to describe what I expect each function/class/etc to do gives superb results, and usually eliminates most of the actual coding work. Obviously it adds significant specification work but that’s not usually a bad thing.


I don't work at Google, but I do something similar with my code: write comments, generate the code, and then have the AI tooling create test cases.

AI coding assistants are generally really good at ramping up a base level of tests, which you can then direct to add more specific scenarios.


Has anyone made a coding assistant which can do this based off audio which I’m saying out loud while I’m typing (interview/pairing style), so instead of typing the comment I can just say it?


I had some success using this for basic input, but never took it very far. It's meant to be customizable for that sort of thing though: https://talon.wiki/quickstart/getting_started/ (Edit: just the voice input part)


Comment Driven Programming might be interesting, as an offshoot of Documentation Driven Programming


That's pretty nice. Does it write modern C++, as one would expect?


Yes it does. Internally Google uses C++20 (https://google.github.io/styleguide/cppguide.html#C++_Versio...) and the model picks the style from training, I suppose.


OTOH it's sad that a Nobel-winning macroeconomist has to write a blog article about fixing Anaconda's hostile takeover of student machines and setting up a basic Python environment.


Sure, but compared to Matlab or Stata it's still great news.

I mean, dependency management in Python is a complete mess. It's not something we can expect a mere Nobel winner to solve.

