`Readonly<T>` in TypeScript is almost useless, unsound and completely unsafe (as are most other things in TypeScript), and in no way equivalent to the Rust affine type system.
In particular, Readonly only prevents writing to the immediate fields of the object, but doing eg `declare const x: Readonly<X>; x.a.b = ...` is completely fine (ie. nested mutability is allowed).
If you want transitive immutability, you need a type-level function (such as `ReadonlyDeep` from `type-fest`), but then that gives terrible error messages.
Also, since TypeScript ignores `readonly` modifiers during assignability checks, combining Readonly with generics (or even a plain assignment) can silently and automatically cast it away, making it largely pointless for actual safety...
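A minimal sketch of both failure modes, with a made-up `X`:

```typescript
interface X { n: number; a: { b: number } }

declare const x: Readonly<X>;

// x.n = 1;   // rejected: 'n' is read-only (the shallow case works)
x.a.b = 2;    // accepted: readonly does not propagate to nested fields

// readonly modifiers are ignored during assignability checks,
// so an innocent-looking assignment silently discards them:
const y: X = x; // accepted, no cast needed
y.n = 3;        // mutates the supposedly readonly object
```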
These links are mostly unmaintained, AI-generated, random MCP servers. You could have put in a bit more effort than copy-pasting search results...
To talk about where it's _actually_ at:
Agentic IDEs have LSP support built in, which is better than just tree-sitter. Copilot in VSCode is one example, and contrary to what you might expect, it can actually use arbitrary models & has BYOK.
90% of it flew over my head, but since we are mostly practical engineers here, one question of course presented itself:
Is this a physically realizable mode of computation? (ie. what's the physical equivalent of a "geometric computer")
Error handling patterns very much depend on the domain, with possibly completely opposite recommendations, and any stated principle should specify the domain in which it is a good approach.
Pure, deterministic algorithms have the luxury of much stricter and more concrete error handling strategies.
On the other hand, "promptly handling errors" sounds naive for large distributed systems where almost every call can fail for reasons unknown. Experience has already shown that exactly the opposite approach, just panicking and restarting the "service" (a process in Erlang terms, though it has nothing to do with Unix processes and is far more granular), can build extremely reliable systems.
In conventional programming languages, that roughly corresponds to adding top- or mid-level error boundaries (exception/panic handlers) at critical junctures.
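A minimal sketch of such a boundary in TypeScript (names and restart policy are made up): the task body makes no attempt to recover, while the supervisor catches anything, logs it, and restarts from a clean state.

```typescript
async function supervise(runTask: () => Promise<void>, maxRestarts = 5): Promise<void> {
  for (let attempt = 1; attempt <= maxRestarts; attempt++) {
    try {
      await runTask();
      return; // task completed normally
    } catch (err) {
      console.error(`task crashed (attempt ${attempt}):`, err);
      // fall through and restart rather than patching up mid-task
    }
  }
  throw new Error(`task failed permanently after ${maxRestarts} restarts`);
}
```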
But perhaps that's what you mean by "error bubbling nests shallowly"? No idea, since it's not exactly clear.
I generally agree with you. I have a bad habit of simplifying my statements and losing nuance. If an error occurs, it should be handled reasonably and soon. If the cause or solution to an error is unclear, restarting at the task boundary is valid. Blowing away the whole task in one stroke is an instance of handling the error promptly. If the solution is in-task, it should be expediently reached through the call structure. Returning through many functions, performing cleanup all the while, in order to reach the handler is the pattern of deeply-nested error bubbling that I dislike.
Of course, this is still a cursory description. I think various strategies by domain agree on general principles. There is likely a class of errors that is too likely and/or unsolvable, in which case whole-task restarts are advisable. Within the task, there may be other kinds of errors that can and should reasonably be resolved. User interactivity is also relevant (e.g. is a user directly interacting with a task in the error state). Overall, this comes down to a few factors for each error, roughly: likelihood, severity, solvability. Errors should always be reported when possible and, if not fixed, brought to the attention of a capable supervisor, with supervision eventually culminating at a person (e.g. a user or sysadmin).
The solution to LLM slop in general is almost certainly nothing like what is proposed here; that's just buzzword bingo in the Python ecosystem (and it sounds like an AI hallucination).
What you (and most vibe coders) are missing is just Good Old Fashioned verified software engineering - strong contracts (via better static types, contract libraries, linters, compilers, automated review and soft rule enforcement via eg. another LLM etc.), abstractions that reduce redundancy and increase cohesion, meaningful tests (and meta-tests & metrics ensuring test quality) etc. etc.
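As one small, hypothetical instance of the "strong contracts via better static types" point, a branded type in TypeScript makes an invariant unforgeable: the only way to obtain the type is through the validating constructor, so every downstream consumer gets the contract checked by the compiler.

```typescript
// A string that is statically guaranteed to be non-empty.
type NonEmptyString = string & { readonly __brand: "NonEmptyString" };

function parseNonEmpty(s: string): NonEmptyString {
  if (s.length === 0) throw new Error("contract violation: empty string");
  return s as NonEmptyString;
}

function greet(name: NonEmptyString): string {
  return `Hello, ${name}!`;
}

greet(parseNonEmpty("world")); // ok
// greet("world");             // compile error: plain string rejected
```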
No. Taking the value of a single character is a correct perfect hash function, assuming there exists a position at which all strings in the input set differ.
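For example, with this made-up key set, every key differs at index 1 ('e', 'r', 'l'), so a single `charCodeAt` is already a perfect hash for the set:

```typescript
const keys = ["red", "green", "blue"];
const hash = (s: string) => s.charCodeAt(1);
console.assert(new Set(keys.map(hash)).size === keys.length); // no collisions
```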
Option and Result types, as implemented today in mainstream languages (ie. mostly anemically), are not the answer to exceptions being a mess.
Exceptions have a lot of additional functionality in larger ecosystems such as:
- Backtraces, ie. showing the exact path of the error from its source to wherever it was handled, in a zero-cost way. This is by far the most important aspect of exceptions, as it enables automatically analysing and aggregating them in large systems, to eg. attribute blame for changes in error metrics to individual commits.
- Nested exceptions ie. converting from one error system to another without losing information. Extensible with arbitrary metadata.
- An open and extensible error type hierarchy. Again, necessary in large scale systems to differentiate between eg. the cause (caller fault, callee fault aka HTTP 400/500 divide), retryable or permanently fatal, loggable etc. exceptions while also maintaining API/ABI backward/forward compatibility.
(for some of these, eg. Rust has crates for a Result-y equivalent, but a community consensus does not exist, yet...)
General-purpose exceptions are simply complicated, and any system trying to "re-invent" them will eventually run into the same problems. Over-simplifying error handling just results in less maintainable, debuggable and reliable systems.
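To make the nested-exception and open-hierarchy points concrete, here is a hypothetical TypeScript sketch (assuming an ES2022 target for `Error`'s standard `cause` option; all names are made up):

```typescript
// An open, extensible hierarchy: subclasses distinguish caller vs.
// callee fault (the HTTP 400/500 divide) and carry a retryable flag.
class AppError extends Error {
  readonly retryable: boolean;
  constructor(message: string, options?: ErrorOptions & { retryable?: boolean }) {
    super(message, options);
    this.retryable = options?.retryable ?? false;
  }
}
class CallerFault extends AppError {} // ~ HTTP 4xx
class CalleeFault extends AppError {} // ~ HTTP 5xx

function fetchUser(id: string): never {
  try {
    throw new Error("ECONNRESET"); // low-level transport failure
  } catch (err) {
    // nest rather than swallow: the original error and its stack
    // trace travel along as `cause`
    throw new CalleeFault(`failed to fetch user ${id}`, { cause: err, retryable: true });
  }
}
```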
This isn't a binary choice. In Scala, you can use Throwable or Exception as your error type with Either:
`Either[Throwable, Option[Foobar]]`
The type `Try[T]` is essentially `Either[Throwable, T]`.
`Either[Throwable, T]`, `Try`, as well as `IO` from Cats Effect give you the stack traces that you expect from conventional Java style, with the superior option of programming in the monadic / "railway" style. `Try` also interfaces nicely with Java libraries: `val result: Try[Foobar] = Try(javaFunction())`.
Don't agree with a single thing, especially not with the characterization that functional error handling is some kind of attempt at reinventing exceptions. But yeah, it's clear my and your camp will never agree lol. Fortunately for you, so far, your camp has mostly won, at least in the "object oriented" languages. But I think that's rapidly changing.
I am not in any sort of "camp", in fact I prefer using a mostly functional style. The above comment was based on experience working in large (~100M LoC) code bases.
As the comment clearly indicates, it is about anemic/"naive" functional error handling not being the counterpoint to general-purpose exceptions, not functional error handling vs. exceptions in general.
I do mostly prefer error handling being explicitly marked at every call site (ie. the functional style), but note that this is not always meaningfully possible in very large systems (at least beyond the notion of "I do not know exactly what errors are possible here, just propagate whatever happens", which is equivalent to regular exception handling).
And, as I already mentioned in the original, Rust does have functional solutions to some of these problems, and as other comments indicate, eg. Scala has them as well (probably even theoretically better since it can be a strict superset of the existing zero-cost exception model in the JVM).
The backtrace argument is good, but I wonder how valuable traces would be in a world that never experienced reads-of-nothing (NPEs, reading from undefined, out-of-bounds array reads, etc.). Presumably this would be because of 100% use of ADTs, or maybe some other mechanism; but even Haskell throws exceptions out of `IO a`, so such a world might never be realized.
From an optimization perspective, such dialects are pretty much like the intermediate data structures that "single-IR"-style passes build internally anyway (eg. various loop analyses), just in a sharable and more consistent (if less performant) form.
Single-IR passes, from that perspective, are roughly equivalent to MLIR-style `ir = loop_to_generic_dialect(my_loop_optimization(generic_to_loop_dialect(ir)))`.
This assumes the existence of bidirectional dialect transformations. Note that even LLVM IR, while a single IR, is technically multi-level as well: eg. for instruction selection, it needs to be canonicalized & expanded first, and feeding arbitrary IR into that pass will result in an exception (or sometimes even a segfault, considering it is C++).
Also, even though passes for single IR can theoretically be run in an arbitrary order, they are generally run in an order that can re-use (some) intermediate analysis results. This is, again, equivalent to minimizing the number of inter-dialect transformations in a multi-dialect IR.
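A toy sketch of that equivalence (all names hypothetical): caching an analysis between single-IR passes plays the same role as staying inside the loop dialect between passes instead of converting back and forth each time.

```typescript
interface IR { version: number; ops: string[] }
interface LoopInfo { backEdges: number[] } // ~ the "loop dialect" view

function analyzeLoops(ir: IR): LoopInfo {
  // expensive analysis, ~ generic_to_loop_dialect
  return { backEdges: ir.ops.flatMap((op, i) => (op === "br_back" ? [i] : [])) };
}

let cached: { version: number; info: LoopInfo } | undefined;
function getLoopInfo(ir: IR): LoopInfo {
  // passes ordered so the analysis stays valid can reuse it,
  // ~ minimizing the number of inter-dialect transformations
  if (!cached || cached.version !== ir.version) {
    cached = { version: ir.version, info: analyzeLoops(ir) };
  }
  return cached.info;
}
```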
It requires the final function to have a numerically reasonable finite difference gradient, which is somewhat different from what is commonly referred to as "differentiable" - eg. the insides of that function could still use non-differentiable/non-analytic functions.
It seems to be based on numeric.js, which is based on the classic Fortran UNCMIN [1] optimizer.
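For reference, a minimal central-difference gradient (step size `h` picked arbitrarily here) shows why the function only needs to be numerically well-behaved at the evaluation points, not analytically differentiable everywhere:

```typescript
function fdGradient(f: (x: number[]) => number, x: number[], h = 1e-6): number[] {
  return x.map((_, i) => {
    const xp = x.slice(); xp[i] += h; // forward point
    const xm = x.slice(); xm[i] -= h; // backward point
    return (f(xp) - f(xm)) / (2 * h);
  });
}

// |x| is non-differentiable at 0, yet has a perfectly usable finite
// difference gradient at almost every point the optimizer will visit:
console.log(fdGradient(xs => Math.abs(xs[0]), [0.5])); // ~ [1]
```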