In short: I've replaced the Common Lisp loop (which works for Hunchentoot, since it opens threads, but not for Woo, since it blocks) with a deeper integration into the event loop:
> And that was the main change: looking at the innards of it, there are some features available, like woo.ev:evloop. This was not enough, and access to the libev timer was also needed. After some work with lev and CFFI, the SDK now implements a Node.js-style approach using libev timers via woo.ev:evloop and the lev CFFI bindings (check woo-async.lisp).
This is likely (almost surely) not perfect or even ideal, but it does seem to work, and I've been testing the demo app with 1 worker and multiple clients.
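To give a rough idea of the shape of the change — this is an illustrative sketch only, and the names `schedule-tick`, `make-ev-timer`, and `start-watcher` are hypothetical placeholders, not the SDK's actual API (that lives in woo-async.lisp):

```lisp
;; Hypothetical sketch -- SCHEDULE-TICK, MAKE-EV-TIMER and
;; START-WATCHER are illustrative names, not the real API in
;; woo-async.lisp. The point is the shift away from a blocking loop.
(defun schedule-tick (callback interval)
  "Register CALLBACK to fire every INTERVAL seconds on Woo's libev
loop, instead of (loop (funcall callback) (sleep interval)), which
would block Woo's single-threaded event loop."
  (let ((timer (make-ev-timer :repeat interval :callback callback)))
    ;; Attaching the watcher to woo.ev:evloop lets Woo keep serving
    ;; other clients between ticks, Node.js setInterval-style.
    (start-watcher woo.ev:evloop timer)
    timer))
```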
The CL-SPICE library I used, which wraps the SPICE C library through CFFI, doesn't cover the type of SPICE kernel that I wanted to use for the Comms module. I could try to add it, but that could be more involved than I expected and would put the whole thing on hold.
So I used the FORTRAN SDK for SPICE, since I had used it before, and it's reasonably small and easy. The alternative could be the C SDK, but I went with FORTRAN since I already had most of the code from a previous project.
I haven't tried Wookie, since adding Clack+Woo was already a substantial change. Reading https://fukamachi.hashnode.dev/woo-a-high-performance-common... , which compares Woo with Wookie, I'm not sure it would make a difference. I might be wrong, but it says:
> Of course, this architecture also has its drawbacks as it works in a single thread, which means only one process can be executed at a time. When a response is being sent to one client, it is not possible to read another client's request.
... which for SSE seems similar to the issue with Woo. I wrote a bit more on it in https://github.com/fsmunoz/datastar-cl/blob/main/SSE-WOO-LIM... , and it may be more of a "me" problem than anything else, but keeping an SSE stream open doesn't play well with async models. That's why I added a with-sse-response macro that, unlike with-sse-connection, sends events without keeping the connection open.
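For context, the thread-per-connection case is the easy one: under Hunchentoot you can hold the stream open and write SSE events in a plain loop. A minimal sketch of that underlying pattern (not the SDK's with-sse-connection macro itself):

```lisp
;; Minimal Hunchentoot SSE sketch -- this works because Hunchentoot
;; gives each request its own thread, so blocking in SLEEP is
;; harmless. Under Woo's single-threaded event loop the same loop
;; would starve every other client, hence the async rework.
(hunchentoot:define-easy-handler (events :uri "/events") ()
  (setf (hunchentoot:content-type*) "text/event-stream")
  (let ((stream (flexi-streams:make-flexi-stream
                 (hunchentoot:send-headers)
                 :external-format :utf-8)))
    (loop repeat 10 do
      (format stream "data: tick~%~%") ; one SSE event per iteration
      (force-output stream)
      (sleep 1))))
```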
Wookie is built on cl-async, so my hope is that it would be more tractable to write a proper async SSE handler there. But I haven't looked at whether it's possible to keep a connection open asynchronously.
This is my attempt at something that makes using Common Lisp with Datastar easier. To test the SDK I made a demo that shows a simulation of the Cassini-Huygens mission using the NASA SPICE toolkit and the JPL Horizons API: https://dataspice.interlaye.red/
The Datastar API itself is very simple, 3 functions or so; I ended up spending a lot more time on things like keeping the SSE stream open, compression support (zstd only atm), and trying to use CLOS in a way that would fit both Hunchentoot and Clack (not always easy).
Very nice, thank you. The tests directory is good for testing, and I suggest adding an examples directory with a few very short and complete simple examples.
Depends on what you want; I touch upon that somewhat. To replicate this specific pattern, you can replace Squid with something that fills the same gap without any major changes -- nginx or Caddy, for example -- but you would have to make sure the feature set is adequate. I see Squid as being egress-first, where others are ingress-first (nginx being used as an ingress controller, recently discontinued but still...), so I do think that for this specific purpose it works quite well.
As for Envoy and others, I think this would fit a different architecture that I sort of point to near the end, one that includes using a service mesh: Istio, for example, uses Envoy for its Egress Gateway, Cilium also has an Egress Gateway, etc. To me, though, this would be a separate pattern.
Yes! And this can partially be a limitation that helps, in the sense that it forces you to add that. In this example, I had to spend some time with the Common Lisp dexador approach to make it work. I've added a "PROXY: " UI hint to the page at https://horizons.interlaye.red/ ; you will see that it says "-- PROXY: http://squid.egress-proxy.svc.cluster.local:3128 --". This was actually something from my debugging that I decided to keep.
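In case it helps anyone hitting the same thing: on the dexador side, the fix amounts to pointing requests at the proxy. A sketch (the hostname is specific to my cluster, and dexador also honors the usual proxy environment variables):

```lisp
;; Explicit per-request proxy via dexador's :proxy argument;
;; alternatively, setting HTTPS_PROXY in the pod spec works too.
(dex:get "https://ssd.jpl.nasa.gov/api/horizons.api"
         :proxy "http://squid.egress-proxy.svc.cluster.local:3128")
```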
A next article will likely address this limitation, though, and look into transparent proxying. That will involve nftables, sidecars, etc., and the further we go in this direction, the more installing a CNI that comes with this by default starts to make sense.
Depending on what you mean by "lock down", this or something like it could work: you are essentially defining a single outbound communication path. In a way, your scenario was one of the reasons behind this experiment.
I'll take a look at the overflow thing, although I'm not sure I will be able to fix it: I do have an image at the start which is an alternative to the text-based drawing, so nothing is lost. I use my own blogging solution, which is essentially Texinfo (https://interlaye.red/Texiblog.html), so these blocks are the result of using an @example block (which is then converted into a preformatted block). I'm not sure this can be improved, apart from (as you said) using alternative images.
You can certainly use the Squid ACLs to limit the egress for agents. One of the current shortcomings (I explicitly mentioned it near the end) is that there's no per-namespace granularity, so you wouldn't be able to determine it on a per-agent level -- but you would be able to generally establish that all agents would only have access to a global whitelist.
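Something along these lines in squid.conf would do it — the subnet and domains below are placeholders, to be adapted to your cluster's pod CIDR and actual whitelist:

```
# Placeholder subnet and domains -- adapt to your own cluster.
acl agents src 10.42.0.0/16
acl allowed dstdomain .pypi.org .github.com
http_access allow agents allowed
http_access deny all
```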
I guess you have just described what I was hinting at here:
>Linked with several of the above (mainly the centralised configuration) is that when using ACL rules to limit communication to external domains, these are cumulative: all namespaces will be able to communicate with all whitelisted domains, even if they only need to communicate with some of them.
> These limitations point toward why more sophisticated solutions exist, after all; a follow-up article will explore using Squid’s include directive to enable per-namespace configuration, and in doing so, show why you’d eventually want a controller or operator to manage the complexity.
... which is actually a good thing. More than making something "new", it's great to hear that the overall approach is sound.