Was it a fiasco? Really? The rust unwrap call is the equivalent of C code like this:
int result = foo(…);
assert(result >= 0);
If that assert tripped, would you blame the assert? Of course not. Or blame C? No. If that assert tripped, it’s doing its job by telling you there’s a problem in the call to foo().
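For comparison, here's roughly what the Rust side of that analogy looks like - a minimal sketch where foo() is a hypothetical stand-in for the real fallible call:

    // Hypothetical stand-in for the real fallible call.
    fn foo() -> Result<i32, String> {
        Err("foo hit an unexpected condition".to_string())
    }

    fn main() {
        // Same spirit as the C assert above: if foo() returned Err,
        // unwrap() panics right here and the program stops, pointing
        // at the real problem in foo() rather than hiding it.
        let result = foo().unwrap();
        println!("got {result}");
    }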
You can write buggy code in rust just like you can in any other language.
I think it's because unwrap() seems too unassuming at a glance. If it were or_panic() instead I think people would intuit it more as extremely dangerous. I understand that we're not dealing with newbies here, but everyone is still human and everything we do to reduce mistakes is a good thing.
I'm sure lots of bystanders are surprised to learn what .unwrap() does. But reading the post, I didn't get the impression that anyone at cloudflare was confused by unwrap's behaviour.
If you read the postmortem, they talk in depth about what the issue really was - which from memory is that their software statically allocated room for 20 rules or something. And their database query unexpectedly returned more than 20 items. Oops!
I can see the argument for renaming unwrap to unwrap_or_panic. But no alternate spelling of .unwrap() would have saved cloudflare from their buggy database code.
Looking at that unwrap as a Result<T> handler, the arguable issue with the code was the lack of informative explanation in the unexpected case. Panicking from the ill-defined state was desired behaviour, but explicit is always better.
The argument to the contrary is that reading the error out loud showed “the config initializer failing to return a valid configuration”. A panic trace saying “config init failed” is a minor improvement.
If we’re gonna guess and point fingers, I think the configuration init should be doing its own panicking and logging when it blows up.
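To make the "explicit is better" point concrete, here's a hedged sketch using .expect() instead of .unwrap() - the names (Config, load_config, the error text) are made up for illustration and are not Cloudflare's actual code:

    #[derive(Debug)]
    struct Config { rules: Vec<String> }

    #[derive(Debug)]
    struct ConfigError(String);

    // Hypothetical loader standing in for the real config initializer.
    fn load_config() -> Result<Config, ConfigError> {
        Err(ConfigError("feature query returned more rows than the limit".to_string()))
    }

    fn main() {
        // Same behaviour as unwrap() - panic on Err - but the message lands
        // in the panic trace, so whoever gets paged sees why init failed.
        let config = load_config()
            .expect("config init failed: could not load a valid feature file");
        println!("loaded {} rules", config.rules.len());
    }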
First, again, that’s not the issue. The bug was in their database code. Could this codebase be improved with error messages? Yes for sure. But that wouldn’t have prevented the outage.
Almost every codebase I’ve ever worked in, in every programming language, could use better human readable error messages. But they’re notoriously hard to figure out ahead of time. You can only write good error messages for error cases you’ve thought through. And most error cases only become apparent when you stub your toe on them for real. Then you wonder how you missed it in the first place.
In any case, this sort of thing has nothing to do with rust.
It's not unassuming. Rust is superior to many other languages for making this risky behaviour visually present in the code base.
You can go ahead and grep your codebase for this today, instead of waiting for an incident.
I'm a fairly new migrant from Java to C#, and when I do some kind of collection lookup, I still need to check whether the method will return a null, throw an exception, expect an out variable, or worst of all, make up some kind of default. C#'s equivalent to unwrap seems to be '!' (or maybe .Val() or something?)
Whether the value is null (and under which conditions) is encoded into the nullability of the return value. Unless you work with a project which went out of its way to disable NRTs (which I've sadly seen happen).
> I think it's because unwrap() seems too unassuming at a glance. If it were or_panic() instead I think people would intuit it more as extremely dangerous.
Anyone who has learned how to program Rust knows that unwrap() will panic if the thing you are unwrapping is Err/None. It's not unassuming at all. When the only person who could be tripped up by a method name is a complete newbie to the language, I don't think it's actually a problem.
Similarly, assert() isn't immediately obvious to a beginner that it will cause a panic. Heck, the term "panic" itself is non obvious to a beginner as something that will crash the program. Yet I don't hear anyone arguing that the panic! macro needs to be changed to crash_this_program. The fact of the matter is that a certain amount of jargon is inevitable in programming (and in my view this is a good thing, because it enables more concise communication amongst practitioners). Unwrap is no different than those other bits of jargon - perhaps non obvious when you are new, but completely obvious once you have learned it.
I don't think you can know what unwrap does and assume it is safe. Plus warnings about unwrap are very common in the Rust community, I even remember articles saying to never use it.
I have always been critical of the Rust hype but unwrap is completely fine. It's an escape hatch that has legitimate uses. Some code is fine to just fail.
It is easy to spot during code review. I have never programmed Rust professionally and even I would have asked about the unwrap in the Cloudflare code if I had reviewed that. You can even enforce not using unwrap at all through automatic tooling.
The point is Rust provides more safety guarantees than C. But unwrap is an escape hatch, one that can blow up in your face. If they had taken the Haskell route and not provided unwrap at all, this wouldn't have happened.
Haskell’s fromJust, and similar partial functions like head, are as dangerous as Rust’s unwrap. The difference is only in the failure mode. Rust panics, whereas Haskell throws a runtime exception.
You might think that the Haskell behavior is “safer” in some sense, but there’s a huge gotcha: exceptions in pure code are the mortal enemy of lazy evaluation. Lazy evaluation means that an exception can occur after the catch block that surrounded the code in question has exited, so the exception isn’t guaranteed to get caught.
Exceptions can be ok in a monad like IO, which is what they’re intended for - the monad enforces an evaluation order. But if you use a partial function like fromJust in pure code, you have to be very careful about forcing evaluation if you want to be able to catch the exception it might generate. That’s antithetical to the goal of using exceptions - now you have to write the code carefully to make sure exceptions are catchable.
The bottom line is that for reliable code, you need to avoid fromJust and friends in Haskell as much as you do in Rust.
The solution in both languages is to use a linter to warn about the use of partial functions: HLint for Haskell, Clippy for Rust. If Cloudflare had done that - and paid attention to the warning! - they would have caught that unwrap error of theirs at linting time. This is basically a learning curve issue.
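On the Rust side, one way to enforce that with Clippy is its allow-by-default restriction lints, turned into hard errors at the crate root - a sketch:

    // At the top of main.rs or lib.rs:
    #![deny(clippy::unwrap_used)]   // every .unwrap() on Option/Result fails `cargo clippy`
    #![deny(clippy::expect_used)]   // optionally ban .expect() as well

    fn main() {
        let parsed: Result<i32, _> = "42".parse();
        // let n = parsed.unwrap();     // rejected by the lint above
        let n = parsed.unwrap_or(0);    // forced to spell out a fallback instead
        println!("{n}");
    }

(Newer Cargo versions also let you set these under a [lints.clippy] table in Cargo.toml, if attributes aren't your thing.)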
> The difference is only in the failure mode. Rust panics, whereas Haskell throws a runtime exception.
Fun facts: Rust’s default panic handler also throws a runtime exception just like C++ and other languages. Rust also has catch blocks (std::panic::catch_unwind). But it’s rarely used. By convention, panicking in rust is typically used for unrecoverable errors, when the program should probably crash. And Result is used when you expect something to be fallible - like parsing user input.
You see catch_unwind in the unit test runner. (That’s why a failing test case doesn’t stop other unit tests running). And in web servers to return 50x. You can opt out of this behaviour with panic=abort in Cargo.toml, which also makes rust binaries a bit smaller.
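A minimal sketch of what that looks like - roughly the shape of what the test runner does around each test, assuming the default panic=unwind strategy:

    use std::panic;

    fn main() {
        // catch_unwind turns a panic inside the closure into an Err,
        // instead of tearing down the whole process.
        let outcome = panic::catch_unwind(|| {
            let empty: Vec<i32> = Vec::new();
            empty[0] // out-of-bounds index -> panic
        });

        assert!(outcome.is_err());
        println!("caught the panic, still running");
        // With panic = "abort" in Cargo.toml this no longer works:
        // the process dies at the panic instead.
    }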
The difference is not just convention. You mentioned some similarities between Rust panics and C++ exceptions, but there are some important differences. If you tried to write Rust code that used panics and catch_unwind as a general exception mechanism, you’d soon run into those differences, and find out why Rust code isn’t written that way.
The key difference is that in the general case, panics are designed to lead to program termination, not recovery. Examples like unit tests are a special case - the fact that handling panics works for that case doesn’t mean it would work well more broadly.
The point you mentioned, about being able to configure panics to abort, is another issue. If you did that in a program which used panics as an exception handling mechanism, the program would fail on its first exception. Of course you can say “just don’t do that”, but the point is it highlights the difference in the semantics of panics vs. exceptions.
Also, panics are not typed, the way exceptions are in C++ or Java. Using them as a general exception handling mechanism would either be very limiting, or require the design of a whole infrastructure for that.
There are other issues as well, including behavior related to threads, to FFI, and to where panics can even be caught.
I forgot about fromJust. On the other hand, fromJust is shunned by practically everybody writing Haskell. `unwrap` doesn't have the same status. I also understand why. Rust wanted to be more appealing and not too restrictive, while Haskell doesn't care about attracting developers.
It's not just fromJust; there are many other partial functions with the same issue, such as head, tail, init, last, read, foldl1, maximum, minimum, etc.
It's an overstatement to say that these are "shunned by practically everybody". They're commonly used in scenarios where the author is confident that the failure condition can't happen due to e.g. a prior test or guard, or that failure can be reliably caught. For example, you can catch a `read` exception reliably in IO. They're also commonly used in GHCi or other interactive environments.
I disagree that the Rust perspective on unwrap is significantly different. Perhaps for beginning programmers in the language? But the same is often true of Haskell. Anyone with some experience should understand the risks of these functions, and if they don't, they'll eventually learn the hard way.
One pattern in Rust that may mislead beginners is that unwrap is often used on things like builders in example docs. The logic here is that if you're building some critical piece of infra that the rest of the program depends on, then if it fails to build the program is toast anyway, so letting it panic can make sense. These examples are also typically scenarios where builder failure is unusual. In that case, it's the author's choice whether to handle failure or just let it panic.
Haskell is far more dangerous. It allows you to simply destructure the `Just` variant without a path for the empty case, causing a runtime error if it ever occurs.
> The point is Rust provides more safety guarantees than C. But unwrap is an escape hatch
Nope. Rust never makes any guarantees that code is panic-free. Quite the opposite. Rust crashes in more circumstances than C code does. For example, indexing past the end of an array is undefined behaviour in C. But if you try that in rust, your program will detect it and crash immediately.
More broadly, safe rust exists to prevent undefined behaviour. Most of the work goes to stopping you from making common memory related bugs, like use-after-free, misaligned reads and data races. The full list of guarantees is pretty interesting[1]. In debug mode, rust programs also crash on integer overflow and underflow. (Thanks for the correction!). But panic is well defined behaviour, so that's allowed. Surprisingly, you're also allowed to leak memory in safe rust if you want to. Why not? Leaks don't cause UB.
You can tell at a glance that unwrap doesn't violate safe rust's rules because you can call it from safe rust without an unsafe block.
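A quick sketch of that difference - the same out-of-bounds access that's undefined behaviour in C is a well-defined panic in safe Rust, with a checked alternative if you don't want the crash:

    fn main() {
        let v = vec![1, 2, 3];

        // Checked access: returns an Option instead of panicking.
        assert_eq!(v.get(10), None);

        // Direct indexing is still bounds-checked: instead of C's undefined
        // behaviour, this panics with "index out of bounds" - defined
        // behaviour, and no unsafe block anywhere in sight.
        let x = v[10];
        println!("{x}");
    }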
I never said Rust makes guarantees that code is panic-free. I said that Rust provides more safety guarantees than C. The Result type is one of them, because you have to handle the error case explicitly - if you don't use unwrap, that is.
Also, when I say safety guarantees, I'm not talking about safe rust. I'm talking about Rust features that prevent bugs, like the borrow checker, types like Result and many others.
Ah thanks for the clarification. That wasn’t clear to me reading your comment.
You’re right that rust forces you to explicitly decide what to do with Result::Err. But that’s exactly what we see here. .unwrap() is handling the error case explicitly. It says “if this is an error, crash the program. Otherwise give me the value”. It’s a very useful function that was used correctly here. And it functioned correctly by crashing the program.
I don’t see the problem in this code, beyond it not giving a good error message as it crashed. As the old joke goes, “Task failed successfully.”
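To illustrate that, a small sketch - both branches below are explicit decisions about the Err case, and unwrap() just picks "crash"; parse() stands in for any fallible call:

    fn main() {
        let input = "not a number";

        // Decision 1: handle both cases with a match.
        match input.parse::<i32>() {
            Ok(n) => println!("got {n}"),
            Err(e) => eprintln!("bad input: {e}"),
        }

        // Decision 2: unwrap() - "if this is Err, panic here and now".
        // Still an explicit choice; this is the line that crashes.
        let n: i32 = input.parse().unwrap();
        println!("{n}");
    }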
This is the equivalent of force-unwrap in Swift, which is strongly discouraged. Swift format will reject this anti-pattern. The code running the internet probably should not force unwrap either.
In rust, handing out indexes isn’t that common. It’s generally bad practice because your program will end up with extra, unnecessary bounds checks. Usually we program rust just the same as in C - get a reference (pointer) to an item inside the array and pass that around. The rust compiler ensures the array isn’t modified or freed while the pointer is held. (Which is helpful, but very inconvenient at times!)
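A small sketch of that pattern - handing out a reference into the slice rather than an index (largest() is a made-up example, not from any particular codebase):

    // Return a reference to an element instead of its index; the borrow
    // checker guarantees the slice isn't modified or freed while the
    // reference is alive, so no extra bounds check is needed later.
    fn largest(items: &[i32]) -> Option<&i32> {
        items.iter().max()
    }

    fn main() {
        let nums = vec![3, 9, 4];
        if let Some(biggest) = largest(&nums) {
            println!("largest is {biggest}");
        }
    }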
Yes. And there’s still lots of places where you can get significant speed ups by simply applying those old techniques in a new domain or a novel way. The difference between a naive implementation of an algorithm and an optimised one is often many orders of magnitude. Look at automerge - which went from taking 30 seconds on a simple example to tens of milliseconds.
I think about this regularly when I compile C++ or rust using llvm. It’s an excellent compiler backend. It produces really good code. But it is incredibly slow, and for no good technical reason. Plenty of other similar compilers run circles around it.
Imagine an llvm rewrite by the people who made V8, or chrome or the unreal engine. Or the guy who made luajit or the Go compiler team. I’d be shocked if we didn’t see an order of magnitude speed up overnight. They’d need some leeway to redesign llvm IR of course. And it would take years to port all of llvm’s existing optimisations. But my computer can retire billions of operations per second. And render cyberpunk at 60fps. It shouldn’t take seconds of cpu time to compile a small program.
Lately rust is my primary language, and I couldn't agree more with this.
I've taken to using typescript for prototyping - since it's fast (enough), and it's trivial to run both on the server (via bun) or in a browser. The type system is similar enough to rust that swapping back and forth is pretty easy. And there's a great package ecosystem.
I'll get something working, iterate on the design, maybe go through a few rewrites and when I'm happy enough with the network protocol / UI / data layout, pull out rust, port everything across and optimize.
It's easier than you think to port code like this. Our intuition is all messed up when it comes to moving code between languages because we look at a big project and think of how long it took to write that in the first place. But rewriting code from imperative language A to B is a relatively mechanical process. It's much faster than you think. I'm surprised it doesn't happen more often.
I'm in a similar place, but my stack is Python->Go
With Python I can easily iterate on solutions, observe them as they change, use the REPL to debug things and in general just write bad code just to get it working. I do try to add type annotations etc and not go full "yolo Javascript everything is an object" -style :)
But in the end running Python code on someone else's computer is a pain in the ass, so when I'm done I usually use an LLM to rewrite the whole thing in Go, which in most cases gives me a nice speedup and more importantly I get a single executable I can just copy around and run.
In the few cases where the solution requires a Python library that doesn't have a Go equivalent, I just stick with the Python version and shove it in a container or something for distribution.
I'm in the camp of "If your target is Go, then prototype in Go." I don't bother with the intermediate step. Go is already so very close to being a dynamic language that I don't get the point. Just write "bad" Go to prototype quickly. Skip the error checks. Panic for fun. Write long functions. Make giant structs. Don't worry about memory.
You mentioned running someone else's Python is painful, and it most certainly is. With no other language have I dealt with more of the "Well, it works on my machine" excuse, after being passed down the world's worst code from a "data scientist". Then the "well, use virtual environments"... Oh, you didn't provide that. What version are you using? What libraries did you manually copy into your project? I abhor the language/runtime. Since most of us don't work in isolation, I find the intermediate prototype in another language before Go a waste of time and resources.
Now... I do support an argument for "we prototype in X because we do not run X in production". That means that prototype code will not be part of our releases. Let someone iterate quickly in a sandbox, but they can't copy/paste that stuff into the main product.
Just a stupid rant. Sorry. I'm unemployed. Career is dead. So, I shouldn't even hit "reply"... but I will.
I second your experience with Python. I've been coding in Python for 10+ years. When I get passed down some 'data scientist' code, it often breaks.
With Rust, it was amazing - it was a pain to get it compiled and get past the restrictions (coming from a Python coder) - but the code just ran without a hitch, and it was fast; I never even tried to optimize it.
As a Python 'old-timer', I also am not impressed with all the gratuitous fake typing, and especially Pydantic. Pydantic feels so un-pythonic; they're trying to make it like Go or Rust, but it's falling flat, at least for me.
Is there a good resource on how to get better at python prototyping?
The typing system makes it somewhat slow for me and I am faster prototyping in Go than in Python, despite writing more Python code. And yes, I use type annotations everywhere, ideally even using pydantic.
I tend to use it a lot for data analytics and exploration, but I do this now in nushell, which holds up very well for this kind of task.
When I'm receiving some random JSON from an API, it's so much easier to drop into a Python REPL and just wander around the structure and figure out what's where. I don't need to have a defined struct with annotations for the data to parse it like in Go.
In the first phase I don't bother with any linters or type annotations, I just need the skeleton of something that works end to end. A proof of concept if you will.
Then it's just iterating with Python, figuring out what comes in and what goes out and finalising the format.
Well yeah. And because when an expert looks at the code chatgpt produces, the flaws are more obvious. It programs with the skill of the median programmer on GitHub. For beginners and people who do cookie cutter work, this can be incredible because it writes the same or better code they could write, fast and for free. But for experts, the code it produces is consistently worse than what we can do. At best my pride demands I fix all its flaws before shipping. More commonly, it’s a waste of time to ask it to help, and I need to code the solution from scratch myself anyway.
I use it for throwaway prototypes and demos. And whenever I’m thrust into a language I don’t know that well, or to help me debug weird issues outside my area of expertise. But when I go deep on a problem, it’s often worse than useless.
This is why AI is the perfect management Rorschach test.
To management (out of IC roles for long enough to lose their technical expertise), it looks perfect!
To ICs, the flaws are apparent!
So inevitably management greenlights new AI projects* and behaviors, and then everyone is in the 'This was my idea, so it can't fail' CYA scenario.
* Add in a dash of management consulting advice here, and note that management consultants' core product was already literally 'something that looks plausible enough to make execs spend money on it'
In my experience (with ChatGPT 5.1 as of late), the AI follows a problem->solution internal logic and doesn't stop to think about how to structure its code.
If you ask for an endpoint to a CRUD API, it'll make one. If you ask for 5, it'll repeat the same code 5 times and modify it for the use case.
A dev wouldn't do this, they would try to figure out the common parts of code, pull them out into helpers, and try to make as little duplicated code as possible.
I feel like the AI has a strong bias towards adding things, and not removing them. The most obviously wrong thing is with CSS - when I try to do some styling, it gets 90% of the way there, but there's almost always something that's not quite right.
Then I tell the AI to fix a style, since that div is getting clipped or not correctly centered etc.
It almost always keeps adding properties, and after 2-3 tries and an incredibly bloated style, I delete the thing and take a step back and think logically about how to properly lay this out with flexbox.
> If you ask for an endpoint to a CRUD API, it'll make one. If you ask for 5, it'll repeat the same code 5 times and modify it for the use case.
>
>A dev wouldn't do this, they would try to figure out the common parts of code, pull them out into helpers, and try to make as little duplicated code as possible.
>
>I feel like the AI has a strong bias towards adding things, and not removing them.
I suspect this is because an LLM doesn't build a mental model of the code base like a dev does. It can decide to look at certain files, and maybe you can improve this by putting a broad architecture overview of a system in an agents.md file, I don't have much experience with that.
But for now, I'm finding it most useful to still think in terms of code architecture, give it small steps that are part of that architecture, and then iterate based on your own review of the AI-generated code. I don't have the confidence in it to just let some agent plan, and then run for tens of minutes or even hours building out a feature. I want to be in the loop earlier to set the direction.
A good system prompt goes a long way with the latest models. Even just something as simple as "use DRY principles whenever possible." or prompting a plan-implement-evaluate cycle gets pretty good results, at least for tasks that are doing things that AI is well trained on like CRUD APIs.
> If you ask for an endpoint to a CRUD API, it'll make one. If you ask for 5, it'll repeat the same code 5 times and modify it for the use case.
I don’t think this is an inherent issue with the technology. Duplicate code detectors have been around for ages. Give an AI agent a tool which calls one, ask it to reduce duplication, and it will start refactoring.
Of course, there is a risk of going too far in the other direction: refactorings which technically reduce duplication but which have unacceptable costs (you can be too DRY). But some possible solutions: (a) ask it to judge if the refactoring is worth it or not - if it judges no, just ignore the duplication and move on; (b) get a human to review the decision in (a); (c) if the AI repeatedly makes the wrong decision (according to the human), prompt engineering, or maybe even just some hardcoded heuristics
It actually is somewhat a limit of the technology. LLMs can't go back and modify their own output, later tokens are always dependent on earlier tokens and they can't do anything out of order. "Thinking" helps somewhat by allowing some iteration before they give the user actual output, but that requires them to write it the long way and THEN refactor it without being asked, which is both very expensive and something they have to recognize the user wants.
Coding agents can edit their own output - because their output is tool calls to read and write files, and so they can write a file, run some check on it, modify the file to try to make it pass, run the check again, etc
Sorry, but from where I sit, this only marginally closes the gap between AI and truly senior engineers.
Basically human junior engineers start by writing code in a very procedural and literal style with duplicate logic all over the place because that's the first step in adapting human intelligence to learning how to program. Then the programmer realizes this leads to things becoming unmaintainable and so they start to learn the abstraction techniques of functions, etc. An LLM doesn't have to learn any of that, because they already know all languages and mechanical technique in their corpus, so this beginning journey never applies.
But what the junior programmer has that the LLM doesn't, is an innate common sense understanding of the human goals that are driving the creation of the code to begin with, and that serves them through their entire progression from junior to senior. As you point out, code can be "too DRY", but why? Senior engineers understand that DRYing up code is not a style issue, it's more about maintainability and understanding what is likely to change, and what the apparent effects will be to the human stakeholders who depend on the software. Basically: do these things map to concepts that are the same for human users and are unlikely to diverge in the future? This is also a surprisingly deep question, as perhaps every human stakeholder will swear up and down they are the same, but nevertheless 6 months from now a problem arises that requires them to diverge. At this point there is now a cognitive overhead and dissonance in explaining that divergence to the users who were heretofore perfectly satisfied with one domain concept.
Ultimately the value function for success of a specific code factoring style depends on a lot of implicit context and assumptions that are baked into the heads of various stakeholders for the specific use case and can change based on myriad outside factors that are not visible to an LLM. Senior engineers understand the map is not the territory, for LLMs there is no territory.
I’m not suggesting AIs can replace senior engineers (I don’t want to be replaced!)
But, senior engineers can supervise the AI, notice when it makes suboptimal decisions, intervene to address that somehow (by editing prompts or providing new tools)… and the idea is gradually the AI will do better.
Rather than replacing engineers with AIs, engineers can use AIs to deliver more in the same amount of time
Which I think points out the biggest issue with current AI - knowledge workers in any profession at any skill level tend to get the impression that AI is very impressive, but is prone to fail at real-world tasks unpredictably. So the mental model of a 'junior engineer', or any human who does simple tasks by themselves reliably, is wrong.
AI operating at all levels needs to be constantly supervised.
Which would still make AI a worthwhile technology, as a tool, as many have remarked before me.
The problem is, companies are pushing for agentic AI instead of one that can do repetitive, short-horizon tasks in a fast and reliable manner.
Sure. My point was AI was already 25% of the way there even with their verbose messy style. I think with your suggestions (style guidance, human in the loop, etc) we get at most 30% of the way there.
Bad code is only really bad if it needs to be maintained.
If your AI reliably generates working code from a detailed prompt, the prompt is now the source that needs to be maintained. There is no important reason to even look at the generated code
> the prompt is now the source that needs to be maintained
The inference response to the prompt is not deterministic. In fact, it’s probably chaotic since small changes to the prompt can produce large changes to the inference.
The C compiler will still make working programs every time, so long as your code isn’t broken. But sometimes the code chatgpt produces won’t work. Or it'll kinda work but you’ll get weird, different bugs each time you generate it. No thanks.
Yeah the business surely won't care when we rerun the prompt and the server works completely differently.
> Is the output of your C compiler the same every time you run it
I've never, in my life, had a compiler generate instructions that do something completely different from what my code specifies.
That you would suggest we will reach a level where an English language prompt will give us deterministic output is just evidence you've drank the kool-aid. It's just not possible. We have code because we need to be that specific, so the business can actually be reliable. If we could be less specific, we would have done that before AI. We have tried this with no code tools. Adding randomness is not going to help.
> I've never, in my life, had a compiler generate instructions that do something completely different from what my code specifies.
Nobody is saying it should. Determinism is not a requirement for this. There are an infinite number of ways to write a program that behaves according to a given spec. This is equally true whether you are writing the source code, an LLM is writing the source code, or a compiler is generating the object code.
All that matters is that the program's requirements are met without undesired side effects. Again, this condition does not require deterministic behavior on the author's part or the compiler's.
To the extent it does require determinism, the program was poorly- or incompletely-specified.
> That you would suggest we will reach a level where an English language prompt will give us deterministic output is just evidence you've drank the kool-aid.
No, it's evidence that you're arguing with a point that wasn't made at all, or that was made by somebody else.
You're on the wrong axis. You have to be deterministic about following the spec, or it's a BUG in the compiler. Whether or not you actually have the exact same instructions, a compiler will always do what the code says or it's bugged.
LLMs do not and cannot follow the spec of English reliably, because English is open to interpretation, and that's a feature. It makes LLMs good at some tasks, but terrible for what you're suggesting. And it's weird because you have to ignore the good things about LLMs to believe what you wrote.
> There are an infinite number of ways to write a program that behaves according to a given spec
You're arguing for more abstractions on top of an already leaky abstraction. English is not an appropriate spec. You can write 50 pages of what an app should do and somebody will get it wrong. It's good for ballparking what an app should do, and LLMs can make that part faster, but it's not good for reliably plugging into your business. We don't write vars, loops, and ifs for no reason. We do it because, at the end of the day, an English spec is meaningless until someone actually encodes it into rules.
The idea that this will be AI, and we will enjoy the same reliability we get with compilers, is absurd. It's also not even a conversation worth having when LLMs hallucinate basic linux commands.
People are betting trillions that you're the one who's "on the wrong axis." Seems that if you're that confident, there's money to be made on the other side of the market, right? Got any tips?
Essentially all of the drawbacks to LLMs you're mentioning are either already obsolete or almost so, or are solvable by the usual philosopher's stone in engineering: negative feedback. In this case, feedback from carefully-structured tests. Safe to say that we'll spend more time writing tests and less time writing original code going forward.
> People are betting trillions that you're the one who's "on the wrong axis."
People are betting trillions of dollars that AI agents will do a lot of useful economic work in 10 years. But if you take the best LLMs in the world, and ask them to make a working operating system, C compiler or web browser, they fail spectacularly.
The insane investment in AI isn't because today's agents can reliably write software better than senior developers. The investment is a bet that they'll be able to reliably solve some set of useful problems tomorrow. We don't know which problems they'll be able to reliably solve, or when. They're already doing some useful economic work. And AI agents will probably keep getting smarter over time. That's all we know.
Maybe in a few years LLMs will be reliable enough to do what you're proposing. But neither I - nor most people in this thread - think they're there yet. If you think we're wrong, prove us wrong with code. Get ChatGPT - or whichever model you like - to actually do what you're suggesting. Nobody is stopping you.
> Get ChatGPT - or whichever model you like - to actually do what you're suggesting. Nobody is stopping you.
I do, all the time.
> But if you take the best LLMs in the world, and ask them to make a working operating system, C compiler or web browser, they fail spectacularly.
Like almost any powerful tool, there are a few good ways to use LLM technology and countless bad ways. What kind of moron would expect "Write an operating system" or "Write a compiler" or "Write a web browser" to yield anything but plagiarized garbage? A high-quality program starts with a high-quality specification, same as always. Or at least with carefully-considered intent.
The difference is, given a sufficiently high-quality specification, an LLM can handle the specification->source step, just as a compiler or assembler relieves you of having to micromanage the source->object code step.
IMHO, the way it will shake out is that LLMs as we know them today will be only components, perhaps relatively small ones, of larger systems that translate human intent to machine function. What we call "programming" today is only one implementation of a larger abstraction.
I think this might be plausible in the future, but it needs a lot more tooling. For starters you need to be able to run the prompt through the exact same model so you can reproduce a "build".
Even the exact same model isn't enough. There are several sources of nondeterminism in LLMs. These would all need to be squashed or seeded - which as far as I know isn't a feature that openai / anthropic / etc provide.
Well.. except the AI models are nondeterministic. If you ask an AI the same prompt 20 times, you'll get 20 different answers. Some of them might work, some probably won't. It usually takes a human to tell which are which and fix problems & refactor. If you keep the prompt, you can't manually modify the generated code afterwards (since it'll be regenerated). Even if you get the AI to write all the code correctly, there's no guarantee it'll do the same thing next time.
> It programs with the skill of the median programmer on GitHub
This is a common intuition but it's provably false.
The fact that LLMs are trained on a corpus does not mean their output represents the median skill level of the corpus.
Eighteen months ago GPT-4 was outperforming 85% of human participants in coding contests. And people who participate in coding contests are already well above the median skill level on Github.
And capability has gone way up in the last 18 months.
The best argument I've yet heard against the effectiveness of AI tools for SW dev is the absence of an explosion of shovelware over the past 1-2 years.
Basically, if the tools are even half as good as some proponents claim, wouldn't you expect at least a significant increase in simple games on Steam or apps in app stores over that time frame? But we're not seeing that.
Interesting approach. I can think of one more explanation the author didn't consider: what if software development time wasn't the bottleneck to what he analyzed? The chart for Google Play app submissions, for example, goes down because Google made it much more difficult to publish apps on their store in ways unrelated to software quality. In that case, it wouldn't matter whether AI tools could write a billion production-ready apps, because the limiting factor is Google's submission requirements.
There are other charts besides Google Play. Particularly insightful is the Steam chart, as Steam is already full of shovelware and, in my experience, many developers wish they were making games but the pay is bad.
GitHub repos is pretty interesting too but it could be that people just aren't committing this stuff. Showing zero increase is unexpected though.
I've had this same thought for some time. There should have been an explosion in startups, new product from established companies, new apps by the dozen every day. If LLMs can now reliably turn an idea into an application, where are they?
The argument against this is that shovelware has a distinctly different distribution model now.
App stores have quality hurdles that didn’t exist in the diskette days. The types of people making low quality software now can self publish (and in fact do, often), but they get drowned out by established big dogs or the ever-shifting firehose of our social zeitgeist if you are not where they are.
Anyone who has been on Reddit this year in any software adjacent sub has seen hundreds (at minimum) of posts about “feedback on my app” or slop posts doing a god awful job of digging for market insights on pain points.
The core problem with this guy’s argument is that he’s looking in the wrong places - where a SWE would distribute their stuff, not a normie - and then drawing the wrong conclusions. And I am telling you, normies are out there, right now, upchucking some of the sloppiest of slop software you could ever imagine with wanton abandon.
Or even solving problems that businesses need to solve, generally speaking.
This complete misunderstanding of what software engineering even is is the major reason so many engineers are fed up with the clueless leaders foisting AI tools upon their orgs because they apparently lack the critical reasoning skills to be able to distinguish marketing speak from reality.
I don't think this disproves my claim, for several reasons.
First, I don't know where those human participants came from, but if you pick people off the street or from a college campus, they aren't going to be the world's best programmers. On the other hand, github users are on average more skilled than the average CS student. Even students and beginners who use github usually don't have much code there. If the LLMs are weighted to treat every line of code about same, they'd pick up more lines of code from prolific developers (who are often more experienced) than they would from beginners.
Also in a coding contest, you're under time pressure. Even when your code works, it's often ugly and thrown together. On github, the only code I check in is code that solves whatever problem I set out to solve. I suspect everyone writes better code on github than we do in programming competitions. I suspect if you gave the competitors functionally unlimited time to do the programming competition, many more would outperform GPT-4.
Programming contests also usually require that you write a fully self contained program which has been very well specified. The program usually doesn't need any error handling, or need to be maintained. (And if it does need error handling, the cases are all fully specified in the problem description). Relatively speaking, LLMs are pretty good at these kind of problems - where I want some throwaway code that'll work today and get deleted tomorrow.
But most software I write isn't like that. And LLMs struggle to write maintainable software in large projects. Most problems aren't so well specified. And for most code, you end up spending more effort maintaining the code over its lifetime than it takes to write in the first place. Chatgpt usually writes code that is a headache to maintain. It doesn't write or use local utility functions. It doesn't factor its code well. The code is often overly verbose. It often writes code that's very poorly optimized. Or the code contains quite obvious bugs for unexpected input - like overflow errors or boundary conditions. And the code it produces very rarely handles errors correctly. None of these problems really matter in programming competitions. But it does matter a lot more when writing real software. These problems make LLMs much less useful at work.
> The fact that LLMs are trained on a corpus does not mean their output represents the median skill level of the corpus.
It does, by default. Try asking ChatGPT to implement quicksort in JavaScript, the result will be dogshit. Of course it can do better if you guide it, but that implies you recognize dogshit, or at least that you use some sort of prompting technique that will veer it off the beaten path.
I asked the free version of ChatGPT to implement quicksort in JS. I can't really see much wrong with it, but maybe I'm missing something? (Ugh, I just can't get HN to format code right... pastebin here: https://pastebin.com/tjaibW1x)
----
function quickSortInPlace(arr, left = 0, right = arr.length - 1) {
  if (left < right) {
    const pivotIndex = partition(arr, left, right);
    quickSortInPlace(arr, left, pivotIndex - 1);
    quickSortInPlace(arr, pivotIndex + 1, right);
  }
  return arr;
}

function partition(arr, left, right) {
  const pivot = arr[right];
  let i = left;
  for (let j = left; j < right; j++) {
    if (arr[j] < pivot) {
      [arr[i], arr[j]] = [arr[j], arr[i]];
      i++;
    }
  }
  [arr[i], arr[right]] = [arr[right], arr[i]]; // Move pivot into place
  return i;
}
This is exactly the level of code I've come to expect from chatgpt. It's about the level of code I'd want from a smart CS student. But I'd hope to never use this in production:
- It always uses the last item as a pivot, which will give it pathological O(n^2) performance if the list is sorted. Passing an already sorted list to a sort function is a very common case. Good quicksort implementations will use a random pivot, or at least the middle pivot so re-sorting lists is fast.
- If you pass already sorted data, the recursive call to quickSortInPlace will take up stack space proportional to the size of the array. So if you pass a large sorted array, not only will the function take n^2 time, it might also generate a stack overflow and crash.
- This code: ... = [arr[j], arr[i]]; creates an array and immediately destructures it. This is - or at least used to be - quite slow. I'd avoid doing that in the body of quicksort's inner loop.
- There's no way to pass a custom comparator, which is essential in real code.
I just tried in firefox:
// Sort an array of 1 million sorted elements
arr = Array(1e6).fill(0).map((_, i) => i)
console.time('x')
quickSortInPlace(arr)
console.timeEnd('x')
My computer ran for about a minute then the javascript virtual machine crashed:
Uncaught InternalError: too much recursion
This is about the quality of quicksort implementation I'd expect to see in a CS class, or in a random package in npm. If someone on my team committed this, I'd tell them to go rewrite it properly. (Or just use a standard library function - which wouldn't have these flaws.)
OK, you just added requirements the previous poster had not mentioned. Firstly, how often do you really need to sort a million elements in a browser anyway? I expect that sort of heavy lifting would usually be done on the server, where you'd also want to do things like paging.
Secondly, if a standard implementation was to be used, that's essentially a No-Op. AI will reuse library functions where possible by default and agents will even "npm install" them for you. This is purely the result of my prompt, which was simply "Can you write a QuickSort implementation in JS?"
In any case, to incorporate your feedback, I simply added "that needs to sort an array of a million elements and accepts a custom comparator?" to the initial prompt and reran in a new session, and this is what I got in less than 5 seconds. It runs in about 160ms on Chrome:
How long would your team-mate have taken? What else would you change? If you have further requirements, seriously, you can just add those to the prompt and try it for yourself for free. I'd honestly be very curious to see where it fails.
However, this exchange is very illustrative: I feel like a lot of the negativity is because people expect AI to read their minds and then hold it against it when it doesn't.
> OK, you just added requirements the previous poster had not mentioned.
Lol of course! The real requirements for a piece of software are never specified in full ahead of time. Figuring out the spec is half the job.
> Firstly, how often do you really need to sort a million elements in a browser anyway? I expect that sort of heavy lifting would usually be done on the server
Who said anything about the browser? I run javascript on the server all the time.
Don't defend these bugs. 1 million items just isn't very many items for a sort function. On my computer, the built in javascript sort function can sort 1 million sorted items in 9ms. I'd expect any competent quicksort implementation to be able to do something similar. Hanging for 1 minute then crashing is a bug.
If you want a use case, consider the very common case of sorting user-supplied data. If I can send a JSON payload to your server and make it hang for 1 minute then crash, you've got a problem.
> If you have further requirements, seriously, you can just add those to the prompt and try it for yourself for free. [..] How long would your team-mate have taken?
We've gotta compare like for like here. How long does it take to debug code like this when an AI generates it? It took me about 25 minutes to discover & verify those problems. That was careful work. Then you reprompted it, and then you tested the new code to see if it fixed the problems. How long did that take, all added together? We also haven't tested the new code for correctness or to see if it has new bugs. Given its a complete rewrite, there's a good chance chatgpt introduced new issues. I've also had plenty of instances where I've spotted a problem and chatgpt apologises then completely fails to fix the problem I've spotted. Especially lifetime issues in rust - its really bad at those!
The question is this: Is this back and forth process faster or slower than programming quicksort by hand? I'm really not sure. Once we've reviewed and tested this code, and fixed any other problems in it, we're probably looking at about an hour of work all up. I could probably implement quicksort at a similar quality in a similar amount of time. I find writing code is usually less stressful than reviewing code, because mistakes while programming are usually obvious. But mistakes while reviewing are invisible. Neither you nor anyone else in this thread spotted the pathological behavior this implementation had with sorted data. Finding problems like that by just looking is hard.
Quicksort is also the best case for LLMs. It's a well-understood, well-specified problem with a simple, well-known solution. There isn't any existing code it needs to integrate with. But those aren't the sort of problems I want chatgpt's help solving. If I could just use a library, I'm already doing that. I want chatgpt to solve problems it's probably never seen before, with all the context of the problem I'm trying to solve, to fit in with all the code we've already written. It often takes 5-10 minutes of typing and copy+pasting just to write a suitable prompt. And in those cases, the code chatgpt produces is often much, much worse.
> I feel like a lot of the negativity is because people expect AI to read their minds and then hold it against it when it doesn't.
Yes exactly! As a senior developer, my job is to solve the problem people actually have, not the problem they tell me about. So yes of course I want it to read my mind! Actually turning a clear spec into working software is the easy part. ChatGPT is increasingly good at doing the work of a junior developer. But as a senior dev / tech lead, I also need to figure out what problems we're even solving, and what the best approach is. ChatGPT doesn't help much when it comes to this kind of work.
(By the way, that is basically a perfect definition of the difference between a junior and senior developer. Junior devs are only responsible for taking a spec and turning it into working software. Senior devs are responsible for reading everyone's mind, and turning that into useful software.)
And don't get me wrong. I'm not anti chatgpt. I use it all the time, for all sorts of things. I'd love to use it more for production grade code in large codebases if I could. But bugs like this matter. I don't want to spend my time babysitting chatgpt. Programming is easy. By the time I have a clear specification in my head, it's often easier to just type out the code myself.
That's where we come in of course! Look into spec-driven development. You basically encourage the LLM to ask questions and hash out all these details.
> Who said anything about the browser?... Don't defend these bugs.
A problem of insufficient specification... didn't expect an HN comment to turn into an engineering exercise! :-) But these are the kinds of things you'd put in the spec.
> How long does it take to debug code like this when an AI generates it? It took me about 25 minutes to discover & verify those problems.
Here's where it gets interesting: before reviewing any code, I basically ask it to generate tests, which always all pass. Then I review the main code and test code, at which point I usually add even more test-cases (e.g. https://news.ycombinator.com/item?id=46143454). And, because codegen is so cheap, I can even include performance tests, (which statistically speaking, nobody ever does)!
Here's a one-shot result of that approach (I really don't mean to take up more of your time, this is just so you can see what it is capable of):
https://pastebin.com/VFbW7AKi
While I do review the code (a habit -- I always review my own code before a PR), I review the tests more closely because, while boring, I find them a) much easier to review, and b) more confidence-inspiring than manual review of intricate logic.
> I want chatgpt to solve problems it's probably never seen before, with all the context of the problem I'm trying to solve, to fit in with all the code we've already written.
Totally, and again, this is where we come in! Still, it is a huge productivity booster even in uncommon contexts. E.g. I'm using it to do computer vision stuff (where I have no prior background!) with opencv.js for a problem not well-represented in the literature. It still does amazingly well... with the right context. Its initial suggestions were overindexed on the common case, but over many conversations, it "understood" my use-case and consistently gives appropriate suggestions. And because it's vision stuff, I can instantly verify results by sight.
Big caveat: success depends heavily on the use-case. I have had more mixed results in other cases, such as concurrency issues in an LD_PRELOAD library in C. One reason for the mixed sentiments we see.
> ChatGPT is increasingly good at doing the work of a junior developer.
Yes, and in fact, I've been rather pessimistic about the prospects of junior developers, a personal concern given I have a kid who wants to get into software engineering...
I'll end with a note that my workflow today is extremely different from before AI, and it took me many months of experimentation to figure out what worked for me. Most engineers simply haven't had the time to do so, which is another reason we see so many mixed sentiments. But I would strongly encourage everybody to invest the time and effort because this discipline is going to change drastically really quickly.
It really depends on what you’re doing. AI models are great at kind of junior programming tasks. They have very broad but often shallow knowledge - so if your job involves jumping between 18 different tools and languages you don’t know very well, they’re a huge productivity boost. “I don’t write much sql, or much Python. Make a query using sqlalchemy which solves this problem. Here’s our schema …”
AI is terrible at anything it hasn’t seen 1000 times before on GitHub. It’s bad at complex algorithmic work. Ask it to implement an order statistic tree with internal run length encoding and it will barely be able to get off the starting line. And if it does, the code will be so broken that it’s faster to start from scratch. It’s bad at writing rust. ChatGPT just can’t get its head around lifetimes. It can’t deal with really big projects - there’s just not enough context. And its code is always a bit amateurish. I have 10+ years of experience in JS/TS. It writes code like someone with about 6-24 months experience in the language. For anything more complex than a react component, I just wouldn’t ship what it writes.
I use it sometimes. You clearly use it a lot. For some jobs it adds a lot of value. For others it’s worse than useless. If some people think it’s a waste of time for them, it’s possible they haven’t really tried it. It’s also possible their job is a bit different from your job and it doesn’t help them.
> But is there a real connection between being wrong and not being read or are you yourself wrong ?
You don’t need to be a standup comedian yourself to spot bad comedy.
> Furthermore, I doubt there are any chances "right/wrong" applies to aesthetical types of philosophical discussions.
It’s hard to figure out what readers want because you don’t get direct feedback. But if you spend any amount of time in front of an audience, it becomes incredibly clear that some things work on stage better than others. I truly believe charisma is a learnable skill. By treating it as talent we deprive people who aren’t charismatic of the chance to improve. Writing is just the same. Claiming that there’s no “right/wrong” here implies that it’s impossible to learn to write in a more engaging way. And that’s obviously false.
I did a clowning course a few years ago. In one silly exercise we all partnered up. Each couple were given a tennis ball, and we had to squish the ball between our foreheads so it wouldn’t fall. And like that, move around the room. Afterwards the teacher got half the class on stage to do it again, while everyone else watched. Then the audience got to vote on which couple we liked the most. It was surreal - almost everyone voted for the same pair. Those two in particular were somehow more interesting than everyone else. In that room there was a right and a wrong way to wordlessly hold a tennis ball between two people’s faces. And we all agreed on what it was.
> You don't need to be a standup comedian yourself to spot bad comedy.
I am not a native English speaker; I don't know anything about the humorous forms of that tongue.
Charisma depends on your audience, and audiences can differ quite a lot. There is no "right/wrong" because what pleases you as an audience may be considered wrong by another one.
"Writing in a more engaging way" aka changing your conceptions of what is right/wrong in order to conform to the current cultural supremacy that is built up every day by pushing some kind of fast-food culture or idk.
Your story is interesting, and I don't understand how you could be surprised: people that go to clowning classes can share the same taste about what is good/bad? That's not a very surprising fact! If you had told me that they were people from different cultures ...
Do you think Baudelaire cared about engagement? You talk as if there were no way taste could dramatically change to the point where "ugly" becomes "good" or vice versa. Some of the writers and artists I like the most braved the taste™ of the hegemonic cultures of their time, and just trusted their own intuition about what they wanted to express, say, create.
Marcel Duchamp is a great example of how a mid level joke can change the art world suddenly (and people's taste with it).
> Charisma depends on your audience, and audiences can differ quite a lot. There is no "right/wrong" because what pleases you as an audience may be considered wrong by another one.
Yes; one of the most important aspects of charisma is being sensitive to your audience. Charismatic people watch how their performance is received, and adjust it on the fly. Not too much, but enough to make the audience feel cared for. This is one reason why there's a sort of magic in live performances.
I also think we're talking about two extreme ideas here that are both wrong:
1. Performances are on an objective spectrum from "right" to "wrong"
2. Nothing is good or bad. Everything is subjective.
The truth is somewhere in the middle. There's no such thing as "the objectively best pieces of music (/art / writing / etc)". But some music, art and writing is enjoyed by many people. And some is junk. There is no objective measure of music. But also, nobody would consider my amateur piano playing to be as good as The Beatles or Mozart.
> "Writing in a more engaging way" aka changing your conceptions of what is right/wrong in order to conform to the current cultural supremacy that is built up every day by pushing some kind of fast-food culture or idk.
I don't know where to start with this.
Again, there are two extremes that are both wrong: a) As a writer / performer, you should conform exactly to whatever the audience wants. And b) Forget the audience. Write however you want without any regard for them.
Both of these extreme positions will result in bad work. The answer is somewhere in the middle. We don't want a performer to be our slave or our master. We want you to be our friend. Our leader. Our teacher.
In other words, write however you want. But if you don't care about your audience, don't be surprised if your audience doesn't care about you.
> people who go to clowning classes can share the same taste about what is good/bad? That's not a very surprising fact! If you had told me that they were people from different cultures...
I'm Australian. The class was in France, taught by a French clown. There were students from the USA, Canada, Australia, the UK, South Africa, NZ, Finland, Germany and more.
Not all art works across different cultures, but clowning does. I think if you showed our performances to a group of monkeys, even they would find it funny and, if they could, they would pick out the same favorites.
Of course everything is subjective. The fact that we're social animals creates the feeling that there are some kind of rules, but that's just a bias. We're biased.
There is no absolutely junk art-artefact, because humanity potentially extends across time to so many instances of different humans that you cannot know in advance whether something will be considered "good" at some point in time, in some culture, in some individual brain, etc...
> Both of these extreme positions will result in bad work.
That's absurd: if someone "conforms exactly to whatever the audience wants", then everyone in the audience would be pleased. How could that be bad work?
Side note: is it really possible for an artist to forget the audience? I mean, "however you want without any regard for them" is possible, but given that you write as you want, it would be an absolute masterclass if you managed to not be cared about by anyone.
But yeah, that's not a constituent of what is or isn't an artwork. This discussion is useless to the artist.
> [clown class]
Okay, but globalization, the hegemony of certain cultures, etc...
Are we truly different cultures? I don't think so.
But anyway, that's not the problem.
The problem is that you were using this example to argue that taste is not relative.
I accept that a group of clowning students were impressed by the same clowns. But I don't accept that there wouldn't be some differences if the whole of humanity (past and future) had voted that day.
> There is no absolutely junk art-artefact, because humanity potentially extends across time to so many instances of different humans that you cannot know in advance whether something will be considered "good" at some point in time, in some culture, in some individual brain, etc...
Look at the number of plays each song gets on Spotify. If everyone had their own, totally unique taste, there would be no mathematical correlation between which songs I enjoy and which songs you enjoy. We would see a roughly uniform distribution of plays across all the songs on Spotify. But the distribution is very non-uniform. Some songs get billions of listens. Some songs get essentially none.
However, if we all had exactly the same taste, Spotify would only need a small selection of "the best" songs for everybody to enjoy. This is also not what we see.
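To make that concrete, here's a toy simulation - made-up numbers and a made-up reinforcement rule, not real Spotify data - comparing the two worlds: one where every play is an independent, uniform random pick, and one where listeners tend to play what other people already play:

    # Toy model only: the song count, play count and reinforcement rule
    # are illustrative assumptions, not anything measured from Spotify.
    import random
    from collections import Counter

    N_SONGS = 500
    N_PLAYS = 20_000
    random.seed(42)

    # World 1: totally independent tastes - every play is a uniform random pick.
    unique_tastes = Counter(random.randrange(N_SONGS) for _ in range(N_PLAYS))

    # World 2: shared tastes - a song that has been played before is a bit
    # more likely to be played again (crude preferential attachment).
    weights = [1.0] * N_SONGS
    shared_tastes = Counter()
    for _ in range(N_PLAYS):
        song = random.choices(range(N_SONGS), weights=weights, k=1)[0]
        shared_tastes[song] += 1
        weights[song] += 1.0  # every play nudges the song's future odds up

    def top_share(plays, k=10):
        # Fraction of all plays captured by the k most-played songs.
        return sum(count for _, count in plays.most_common(k)) / N_PLAYS

    print(f"top-10 share, independent tastes: {top_share(unique_tastes):.1%}")
    print(f"top-10 share, shared tastes:      {top_share(shared_tastes):.1%}")

In the first world the top 10 songs barely beat an even share of plays; in the second they soak up several times that, and the gap widens if you strengthen the reinforcement. Real streaming numbers look far more like the second world than the first.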
Art has fashions. But many aspects of music and storytelling have remained relevant across cultures and across time so far. We like musical rhythm. We enjoy narrative in stories. We enjoy stories about relationships between people. We like some variety, but not too much. And so on. I'm sure tastes will change. But if I just mash my hand on the piano with no skill and upload that to Spotify, I doubt that even in the fullness of time I'll ever get as many listens as The Beatles.
> That's absurd: if someone "conforms exactly to whatever the audience wants", then everyone in the audience would be pleased. How could that be bad work? [...] This discussion is useless to the artist.
Yes exactly. An artist can't work like this. It wouldn't work. It has the wrong energy.
It's kind of paradoxical, but the audience doesn't want to feel like we're too much in charge. We like it when performers take risks on stage, and show us who they are so we can judge them. Look at the top-rated videos on YouTube. Or the most popular songs. Or any list of the best movies ever made. All of them will contain a strong, clear point of view from the artist. Stanley Kubrick and Mick Jagger don't ask the audience what we want. They tell us what we want. (And they get it right.)
---
At a broader level, I think this whole discussion is a diversion. You seem to have argued both that all art is subjective, and that creating works of art based on what people want would be submitting to the "current cultural supremacy". Both of these arguments sound like excuses to me. Excuses not to try to become skilled. Excuses to skip being sensitive to your audience. Excuses that protect you from failure. For what it's worth, I struggle with this too. My clown teacher told me I need to "try to not die so much" when things don't work on stage. It is very difficult. If human tastes really do change so much across time, then don't create for people in the future. Create for people right now. The people right in front of you, who you can understand.
The most successful clowns, businessmen and writers all care about their audience. But when things inevitably don't work, they acknowledge the failure with lightness and try again.
I'm glad, because there is simply a misunderstanding of my words here.
I am not saying that the ethos of the artist should be "taste is random".
In fact the act of being an artist is to embrace biases.
I agree with the Kantian interpretation of aesthetics as a "teleological" thing: there is no objective "beauty", yet we perpetually fail to embrace this and instead, when we find something "beautiful", we treat this judgement as if it were universal, absolute and objective (when it is not).
On the other hand, to complete this, I agree with Wittgenstein when he says that "you cannot SAY anything about aesthetics/ethics".
These two ideas don't form a paradox.
> Create for people right now.
I would prefer:
Set your own dogma or not, but do what you feel you want to do, be it creating for others, for yourself (you abominable narcissus), for your cat, for the banana you just ate, or for whatever supreme being you choose to believe in.
Choose your audience, let an audience choose you, or choose to be alone - whatever fits you.
To me, we're in a difficult position because the only way we have to quantify the value of an artwork (music in this case) is the number of streams it has.
Call me a madman, but it is not rare for me to hate the most-streamed music and to prefer music with essentially no streams.
Yet I seriously doubt I am "musically dumb".
What I find instead is that advertising, reputation, exposure, a good label and radio play will get you a long way towards becoming a "most streamed" artist.
And no, that's not "an excuse" to not try to become skilled... What sense does it make to say "Dua Lipa is more skilled than J.S. Bach because she has more streams" (or the contrary), or "AC/DC are more skilled than Alan Vega because they sold more records" (or the contrary)?
> If I just mash my hand on the piano with no skill
Okay, that much is pretty certain.
Now if you wrote a small piano piece, there is no way you could predict whether it will become a hit or not. That depends on factors that go far beyond "the piece in question".
> Now if you wrote a small piano piece, there is no way you could predict whether it will become a hit or not. That depends on factors that go far beyond "the piece in question".
This is where I think we really disagree. If I want to make music people like, I’m pretty sure piano lessons would help me. Theory. Rhythm. Learning to sing. Then I need to practice! Making a smash hit isn’t predictable, but it’s not random either. Luck is necessary, but not sufficient. As the saying goes, most overnight successes are 20 years in the making. Watch the early stuff from Louis CK. From Trey Parker and Matt Stone. It’s not as good. They got better over time.
You can learn to write better. To be more charismatic. To connect better to an audience. You’re not in control of whether or not an iOS app is successful. But you can’t make it at all if you don’t know how to code. And if you’re bad at design it probably won’t make it. It’s not simply a coincidence that some blog posts get read and others are ignored. Ask anyone successful. By honestly any metric of success. Practice, skill and hard work won’t guarantee anyone cares about your craft. But if you don’t try? Don’t listen to your audience and improve? Good luck.
What I said is banal: no matter how skilled you are, there's no guarantee of success, and within the small window of opportunity that is "becoming successful", there are (maybe normally distributed) skilled and unskilled people.
Not sure about "most overnight successes are 20 years in the making"; if I want to be perfectly rational, I recognize that this sentence is often false (but I don't have the data to analyze this in depth; I would love to be able to check it, though).
I don't mean to offend you, but we have a tendency to pick a few examples and to hallucinate something from those few examples (which is a natural and quasi-reasonable thing to do when you have a sufficiently large dataset); here you tell me about Louis CK, and while it may be true that he got better over time, I am pretty sure we can find counterexamples, no? I imagine it's not a rare thing to meet people who prefer early-xxxx to later-xxxx.
> Ask anyone successful.
I decided to erase what I meant to answer here (maybe it's time to move on haha).
Well, I think I understood your position, and I'm glad we took the time to talk. The internet is full of these opportunities and I enjoy that from time to time, even if thinking in a language I master less well than my main one is always tiresome and somewhat "violent" (I feel dumber in English?). (Wow, it's late in Australia!)
Yeah I’ve occasionally mentioned things at work, and had someone say “I think I read a blog post about that once”. Only to discover they read about it on my blog! Incredibly satisfying.
I’ve also seen screenshots of my blog posts show up in random technical talks I happened to watch. I want to shout at the screen - “That was meeeee!”
This is also true of a lot of other disciplines. I’ve been learning filmmaking lately (and editing, colour science, etc). There are functionally infinite beginner-friendly videos online about anything you can imagine, but very little content that slowly teaches the fundamentals or presents intermediate skills. It’s all “Here’s 5 pieces of gear you need!” and “One trick that will make your lighting better”. There’s almost nothing in between - no 3-hour videos explaining in detail how to set up an interview properly. Stuff like that.
I've found the best route at that point is just... copying people who are really good. For my interest (3D modeling), the videos with voice-over and directions are all pretty basic, but if I want to see how someone approaches a large, complex object, I will literally watch a timelapse of them doing it and scrub the video in increments to see each modifier/action they took. It's slow, but that's also how I built some intuition and muscle memory. That's just the way...
Yeah. I remember seeing the circuit board for official Xbox controllers vs cheap 3rd party ones. The official controllers had about 10x as many components. I don’t know what all that stuff does, but I’m sure it all contributes to the controllers feeling and working better.
I wonder the same thing about chargers. I’ve recently moved from a 3rd-party charger for my camera batteries that I got on Amazon to an official Sony charger. The 3rd-party charger seemed to work great - but it was practically weightless. The Sony charger is clearly a far more complex (and more expensive) product. I don’t know if all that complexity is actually worth it. What does it all do? But I assume so.