I've worked through the same process in SolidJS, which had the dynamic dependency tracking from the beginning.
I agree that not seeing reactivity in the type system can be irritating. In theory, you can wrap reactive elements in `Computed` objects (Angular's signals have this, I believe) so you can follow them a bit better, but the problem is that you can still accidentally end up with implicitly reactive values, so it only works as a kind of opt-in "here be reactivity" signal, and you can't guarantee that just because you can't see a `Computed`, that reactivity has been ruled out.
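To sketch what I mean (this is my own toy `Computed<T>` marker, not any particular framework's API): the wrapper makes reactivity visible in a signature, but a plain-looking value can still be reactive behind the scenes.

```typescript
// Hypothetical marker type: wrapping reactive values makes them
// visible in type signatures.
class Computed<T> {
  constructor(private readonly fn: () => T) {}
  get(): T {
    return this.fn();
  }
}

// The type tells you this input is reactive...
function render(label: Computed<string>): string {
  return `<span>${label.get()}</span>`;
}

// ...but this signature looks inert, even if `label` was produced
// inside a reactive scope - that's the "opt-in only" problem.
function renderPlain(label: string): string {
  return `<span>${label}</span>`;
}
```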
That said, I find I eventually built up a good intuition for where reactivity would be, usually with the logic that functions were reactive and single values weren't, kind of like thunks in other contexts. For me, at least, it feels much simpler to have this implicit tracking, because then I don't need to define dependencies explicitly, but I can generally see them in my code.
Yeah, the UX/DX of turning these algorithms into something usable is really interesting, and something I didn't get to talk much about.
With the variations on the push algorithm, you do kind of need to know the graph topology ahead of time, at least to be able to traverse it efficiently and correctly (this is the topological sorting thing). But for pull (and therefore for push-pull), the dependencies can be completely dynamic - in a programming language, you can call `eval` or something, or in a spreadsheet you could use `indirect` to generate cell references dynamically. For push-pull specifically, when you evaluate a node you would generally delete all of its upstream connections (i.e. for each cell it depends on currently, remove that dependency) and then rebuild that node's connections to the graph while evaluating it.
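That delete-and-rebuild step looks roughly like this (a toy cell model, names are mine): dependencies are discovered while the formula runs, so they can change from one evaluation to the next.

```typescript
// Toy push-pull cell: dependency edges are discovered during
// evaluation, so the formula can read different cells on different
// runs (fully dynamic dependencies).
type Cell = {
  formula: (read: (dep: Cell) => number) => number;
  value: number;
  deps: Set<Cell>;       // cells this one currently reads
  dependents: Set<Cell>; // cells that currently read this one
};

function evaluate(cell: Cell): number {
  // Drop all of this cell's current upstream connections...
  for (const dep of cell.deps) dep.dependents.delete(cell);
  cell.deps.clear();
  // ...then rebuild them as the formula actually reads its inputs.
  const read = (dep: Cell) => {
    cell.deps.add(dep);
    dep.dependents.add(cell);
    return dep.value;
  };
  cell.value = cell.formula(read);
  return cell.value;
}
```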
Signals libraries are exactly where I found this concept, and they work basically like you describe. I think this is a big part of what makes signals work so well in comparison to, say, RxJS - they're basically the same concept (here's a new reactive primitive, let's model our data in terms of this primitive and then plug that into the UI, so we can separate business logic from rendering logic more cleanly), but the behaviour of a signal is often easier to understand because it's not built from different combinators but just described in "ordinary" code. In effect, if observables are monads that need to be wired together with the correct combinators, then signals are do-notation.
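A minimal sketch of that "ordinary code" point (my own toy signal implementation, not any real library): the derived value is written as a plain function, and the library tracks which signals it reads, instead of requiring explicit combinators like `combineLatest` + `map`.

```typescript
// Toy implicit-tracking signals. While a derivation runs, reads
// register the derivation as a subscriber.
let currentTracker: (() => void) | null = null;

function signal<T>(initial: T) {
  let value = initial;
  const subscribers = new Set<() => void>();
  return {
    get(): T {
      if (currentTracker) subscribers.add(currentTracker);
      return value;
    },
    set(next: T) {
      value = next;
      for (const sub of [...subscribers]) sub();
    },
  };
}

function computed<T>(fn: () => T) {
  let value!: T;
  const recompute = () => {
    const prev = currentTracker;
    currentTracker = recompute;
    value = fn(); // any signal read here subscribes `recompute`
    currentTracker = prev;
  };
  recompute();
  return { get: () => value };
}
```

The derivation then reads like straight-line code: `computed(() => `${first.get()} ${last.get()}`)`, with no operator chain in sight.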
With your push-pull algorithm, were you considering that the graph had already been built up, e.g. by an earlier pull phase? And the push-pull bit is just for subsequent updates? If so, then I think I'm following :).
I've been working in the context of reactivity on the backend where you're often loading up the calculations "from scratch" in a request.
I agree with your monad analogy! We looked into using something like Rx in the name of not reinventing the wheel. If you build out your calculation graph in a DSL like that, then you can do more analysis of the graph structure. But as you said in the article, it can get complicated. And Rx and co are oriented towards "push" flows where you always need to process every input event. In our context, you don't necessarily care about every input if the user makes a bunch of changes at once; it's very acceptable to e.g. debounce the end result.
With push-pull, assuming you set up the dependency chain on pull, you need an initial pull to give you your initial values and wire up the graph, and then every time an update comes in you use the push phase to determine what should be changed, and the pull phase to update it consistently.
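The two phases can be sketched like this (a toy of my own, assuming the graph is already wired up): push only marks downstream nodes dirty, and pull does the actual recomputation lazily when a value is read.

```typescript
// Toy two-phase update: push flags, pull recomputes.
type CalcNode = {
  compute: () => number;
  dependents: CalcNode[];
  dirty: boolean;
  cached: number;
};

// Push phase: an input changed, so flag everything downstream,
// but do no computation yet.
function push(changed: CalcNode) {
  for (const d of changed.dependents) {
    if (!d.dirty) {
      d.dirty = true;
      push(d);
    }
  }
}

// Pull phase: recompute only when a value is actually read.
function pull(node: CalcNode): number {
  if (node.dirty) {
    node.cached = node.compute();
    node.dirty = false;
  }
  return node.cached;
}
```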
There's some great discussion over on lobste.rs (https://lobste.rs/s/2zk3oe/pushing_pulling_three_reactivity), but I particularly recommend this link that someone posted there to a post that covers much of the same topic but in a lot more detail with reference to various existing libraries and tools and how they do things: https://lord.io/spreadsheets/
Yours might go a little less into the details, but it's really clear, and I like the diagrams and the explanation around glitch hazards. Please do follow up on your tangents if you have time.
I really enjoyed your post and was surprised to see it not posted here. I guess now I can leave the comment I wasn't able to leave on lobste.rs :)
The format made for good lunchtime reading -- the care you put into making it easily readable shows. Are the illustrations actually hand-drawn? Looking forward to the next part(s) that you hinted at!
The illustrations are hand-drawn on my tablet, and then converted to SVG and touched up via an incredibly laborious and probably fairly unnecessary process.
Not the author, but when I want to make diagrams like this, I usually use tldraw! It has a nice nearly-hand-drawn feel that's casual and approachable for these kinds of sketches.
> This way you never have content that is out of sync.
They can definitely go out of sync, particularly if something that isn't the editor or the AI changes the code (e.g. running shell commands or opening the file in a different editor and making changes there). I've had a whole load of issues with VSCode where there's been spurious edits all over the place that show up again even if I try and revert them, because every time the AI makes an unrelated edit, VSCode tries to reset the file to the version it thinks exists and then play the AI's edits on top.
Firstly, if you're doing those steps, you're building your own tutorial, not just following the exact steps in a manual provided with the software. The sample config won't be exact or perfect for your setup, so you'll need to at least figure out how to adjust it to your needs.
That said, I think you're still learning things when building IKEA-style software. The first time I learned how to program, I learned from a book and I tried things out by copying listings from the book by hand into files on my computer and executing them. Essentially, it was programming-by-IKEA-manual, but it was valuable because I was trying things out with my own hands, even if I didn't always fully understand why I needed the code I'd been told to write.
From there I graduated to fiddling with those examples and making changes to make it do what I wanted, not what the book said. And over time I figured out how to write entirely new things, and so on and so forth. But the first step required following very simple instructions.
The analogy isn't perfect, because my goal with IKEA furniture is usually not to learn how to build furniture, but to get a finished product. So I learn a little bit about using tools, but not a huge amount. Whereas when typing in that code as a kid, my goal was learning, and the finished product was basically useless outside of that.
The author's example there feels like a bit of both worlds. The task requires more independent thought than an IKEA manual, so they need to learn and understand more. But the end goal is still practical.
But Anthropic can't be a winning bidder, can they? They're specifically saying they won't offer certain services that the US Gov wants. Therefore they de facto fail any bid that requires them to offer those services. (And from Anthropic's side, it sounds like they're also refusing to bid for those contracts.)
The first two things that spring to mind are pasties from the UK (which are not usually spherical but can get quite hemispherical), and the "UFO-Döner" from Germany (which are more oblate spheroids). Maybe by combining these ideas, your friend can get closer to their dream?
Yeah, a lot of the examples made me think "wait, there's something else going on there, right?", which would make sense if the author has difficulty communicating or negotiating their proposals.
In the first example, for example, they suggested a new metric to track added warnings in the build, and then there was a disagreement in the team, and then as a footnote someone went and fixed the warnings anyway? That sounds like the author might be missing something from their story.
> In the first example, for example, they suggested a new metric to track added warnings in the build, and then there was a disagreement in the team, and then as a footnote someone went and fixed the warnings anyway? That sounds like the author might be missing something from their story.
I do not find anything missing here. This is how things often play out in reality - both in your retelling of it and in what was actually written in the article.
Your retelling: some people agree and some disagree with the new metric. That is completely normal. Then someone who agrees, or wants to keep the peace, or just temporarily doesn't feel like doing "real Jira" tasks, fixes the warnings. The team moves on.
Actual article: the warnings get solved when it becomes apparent that one of them caused a production issue. That is when the "this new process step matters" side wins.
I'm referencing the footnote where the author says that the discussion caused one team member to go and fix the issue. The warnings causing a production issue is, I think, a complete hypothetical.
What this story is missing is an explanation for why people were disagreeing. Like, why is someone not looking at warnings? Is it that the warnings are less important than the author understands? Is it that the warnings come from something that the team have little control over? And the solution the author suggests - would it really have changed anything if they already weren't looking at warnings? The author writes as if their proposal would have fixed things, but that's not really clear to me, because it's basically just a view into whether the problem is getting worse, which can be ignored just as easily as the problem itself.
Someone hacked his site or something, so I can't get back to it. But I thought you meant the situation in one of the first paragraphs, where the team started taking some issue seriously after an actual problem.
And honestly, I have seen people disagree with and fight literally standard changes like "let's have a pipeline that runs tests before merge" or "database changes must go through a test environment before being sent over".
It is perfectly possible and normal for people to fight change and be wrong, without there being some grave, smart, missing reason. I have no problem trusting the author that he was simply right in hindsight.
If you've ever tried to improve processes or a project with persistent issues, the problems the author described are entirely believable. The author doesn't know what to do in that situation, but he described the usual dynamic pretty accurately.
I think there's a lot more to it than just that, but part of the problem is that you just get an uncanny valley feeling. All of the phrases and rhetorical tricks that these tools use are perfectly valid, but together they somehow feel thin?
That said, some specific things that feel very AI-y are: the mostly short, equally-sized paragraphs with occasional punchy one-sentence paragraphs interspersed between them; the use of bold when listing things (and the number of two-element lists); a couple of "it's not X, it's Y"-style statements; one paragraph ending with a "they say it's X, but it's actually Y" construct; and even the phrasing of some of the headings.
None of these are necessarily individually tells of AI writing (and I suspect if you look through my own comments and blog posts on various sites, you'd find me using many of the same constructs, because they're all either effective rhetorically, or make the text clearer and easier to understand). But there's something about the concentration of them here that feels like AI - the uncanny valley feeling.
I would put money on this post at least having gone through AI review, if not having been generated by AI from human-written notes. I understand why people do that, but I also think it's a shame that some of the individual colour of people's writing is disappearing from these sorts of blog posts.