I mean. I've owned several IoT devices that work either locally or over the internet. Some of this you can just blame on local networks being fiddly in ways that are difficult to control.
Over local network it's an unreliable assumption that device A can discover device B through some form of broadcast. There are ways to intentionally or unintentionally block that. And then even if you know each party's IP, some networks will intentionally isolate different users for security reasons.
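For concreteness, "some form of broadcast" usually means something like mDNS/SSDP or a plain UDP broadcast probe. A minimal Go sketch of the latter (the port and payload are made up for illustration); on a network that blocks broadcast or isolates clients, this just times out:

```go
package main

import (
	"log"
	"net"
	"time"
)

// Toy discovery probe: shout on the broadcast address and see who answers.
// The port (9999) and payload are made up for illustration; real devices
// typically use mDNS/SSDP. Where broadcast is blocked or clients are
// isolated, this silently finds nothing.
func main() {
	pc, err := net.ListenPacket("udp4", ":0")
	if err != nil {
		log.Fatal(err)
	}
	defer pc.Close()

	bcast := &net.UDPAddr{IP: net.IPv4bcast, Port: 9999}
	if _, err := pc.WriteTo([]byte("DISCOVER"), bcast); err != nil {
		log.Fatal(err)
	}

	pc.SetReadDeadline(time.Now().Add(2 * time.Second))
	buf := make([]byte, 1024)
	n, addr, err := pc.ReadFrom(buf)
	if err != nil {
		log.Println("no reply (broadcast may be blocked):", err)
		return
	}
	log.Printf("device at %v replied: %q", addr, buf[:n])
}
```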
Is it an Apple/Android limitation or a more basic networking limitation that drives devices to communicate with centralized servers on the internet?
> Over local network it's an unreliable assumption that device A can discover device B through some form of broadcast.
Yes, and I'm 100% sure people at the Wi-Fi consortia and the like could design a thing that fixes that. They came up with DHCP, which smells vaguely like this; I'm sure there's a way.
For another data point: last week I ordered my first PCBs from JLCPCB. 2 fully assembled (parts already soldered) and 3 bare. $40 for the boards themselves, $40 for shipping, and $20 for US tariffs, for a total of $100. Should take a week to arrive; shipping's cheaper if you're willing to wait a month.
Re: help: I asked for some help on Libera's ##electronics channel. I think there are larger communities on Reddit that would also take a pass over designs.
My impression is that for straightforward circuits (not very high frequency or high power) you can get away with almost anything as far as layout goes. You punch in some generous settings for trace spacing etc. in the CAD software and it does some basic validation (are all the parts connected? is anything too close together?).
I used KiCad. It works well, though for assembly EasyEDA is probably lower friction. I had to dig around to find the right footprints for certain parts.
Based on how broken the site's external links were, I incorrectly concluded the whole blog was created by genAI. Then I thought to check web.archive.org and found they're legitimate links & it's just internet bit rot.
A more optimistic takeaway is that if you set out to solve a hard problem, you might be surprised by your tech's applications elsewhere. Between 1983 and 1989 they built a company that they went on to sell for ~$25 million, or ~$64 million in 2024 dollars. I don't know how much went into it, but it doesn't sound like an obvious failure.
The ideal set of building blocks depends on the problem.
If the building blocks make it easy to write concurrent code (Go, Erlang), then it becomes easier to write a server. If they make it easy to represent "A or B or C" and pattern match on trees (ML-like languages), then it becomes easier to write a compiler.
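To make the first half concrete, a toy sketch of my own (not from any particular project): in Go, the only concurrency machinery a basic server needs is the go keyword, which is a big part of why servers fall out of those building blocks so naturally.

```go
package main

import (
	"bufio"
	"log"
	"net"
)

// A toy echo server: "handle each client concurrently" is one keyword.
func main() {
	ln, err := net.Listen("tcp", ":8080")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Print(err)
			continue
		}
		go handle(conn) // one goroutine per connection
	}
}

func handle(conn net.Conn) {
	defer conn.Close()
	scanner := bufio.NewScanner(conn)
	for scanner.Scan() {
		if _, err := conn.Write(append(scanner.Bytes(), '\n')); err != nil {
			return
		}
	}
}
```

The ML half of the claim is the mirror image: a sum type plus pattern matching is most of what an AST walker needs.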
Add to that: if you are trying to make an easy-to-onboard language, you want to look at how beginners use it, not experts. Someone writing a compiler for language X is certainly an expert in X.
Maybe the decline of desktop applications (Java, C#) and Android (Java)? And then Go coming out with some killer frameworks? I have to stretch to imagine anything causing Go to outpace the C# or Java ecosystems.
I'm 30 and have been using vim since college. Usually I'm not actually using the vim editor, but some IDE that has vim keybindings. Good enough for me since I don't customize it much anyway. And then I get the best of both worlds: IDE features for navigating around files (jump to definition, display references), and vim for navigating within a file. Professionally I never paid attention to how many people are using it. Offhand I know a couple of people that use emacs, and zero that use vim, but this is probably because emacs users talk about emacs :)
Ahh, having a generic pool abstraction that collects results is tempting... nice work. I likely won't use this library though, since:
- I don't want to mess with panics.
- The default concurrency (GOMAXPROCS) is almost never what I want.
- Aggregated errors are almost never what I want. (I haven't read the implementation, but I worry about losing important error codes, or propagating something huge across an RPC boundary).
- Using it would place a burden on any reader who isn't familiar with the library.
> The default concurrency (GOMAXPROCS) is almost never what I want.
FWIW, the default concurrency has been changed to "unlimited" since the 0.1.0 release.
> Aggregated errors are almost never what I want.
Out of curiosity, what do you want? There is an option to only keep the first error, and it's possible to unwrap the error into the slice of errors that compose it, if that's what you want.
> Using it would place a burden on any reader that isn't familiar with the library
Using concurrency in general places a burden on the reader :) I personally find using patterns like this to significantly reduce the read/review burden.
> FWIW, the default concurrency has been changed to "unlimited" since the 0.1.0 release.
Nice! Will that end up on GitHub?
> Out of curiosity, what do you want?
Most often I want to return just the first error. Some reasons: (1) smaller error messages passed across RPC boundaries; (2) original errors can be inspected as intended (e.g. error codes); (3) when the semantics are to cancel after the first error, the errors that come after it are just noise.
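For illustration, the shape I usually reach for is golang.org/x/sync/errgroup, where Wait returns only the first non-nil error and the derived context cancels the rest (fetchItem is a made-up stand-in for the real work):

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// fetchItem is a stand-in for the real work (an RPC, a DB read, ...).
func fetchItem(ctx context.Context, id int) error {
	// Check ctx so siblings can bail out early after the first failure.
	select {
	case <-ctx.Done():
		return ctx.Err()
	default:
	}
	if id == 3 {
		return fmt.Errorf("fetch %d: boom", id)
	}
	return nil
}

func main() {
	g, ctx := errgroup.WithContext(context.Background())
	for id := 1; id <= 5; id++ {
		id := id // capture loop variable (needed before Go 1.22)
		g.Go(func() error {
			return fetchItem(ctx, id)
		})
	}
	// Wait returns only the first non-nil error; later errors are dropped,
	// which keeps messages small and preserves the original error value
	// for error-code checks.
	if err := g.Wait(); err != nil {
		fmt.Println("failed:", err)
	}
}
```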
A couple of other thoughts:
- I think the non-conc comparison code is especially hairy since it's using goroutine pools. There's nothing wrong with that and it's fast, just not the easiest to work with. Often goroutine overhead is negligible and I would bound concurrency in dumber ways, e.g. by shoving a semaphore (a buffered channel, or x/sync/semaphore) into whatever code I otherwise have; rough sketch after this list.
- I like that errgroup's .Go() blocks when the concurrency limit is reached. Based on a quick skim of the code I think(?) conc does that too, but it could use documentation.
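Here's the rough sketch promised above, showing both points: a buffered channel used as a "dumb" semaphore, and errgroup's SetLimit, which makes Go() block once the limit is reached. The work function and item list are placeholders.

```go
package main

import (
	"fmt"
	"sync"

	"golang.org/x/sync/errgroup"
)

// work is a stand-in for whatever each goroutine actually does.
func work(i int) error {
	fmt.Println("working on", i)
	return nil
}

// boundedDumb: a buffered channel used as a semaphore, the "dumber" way
// of capping how many goroutines run at once.
func boundedDumb(limit int, items []int) {
	sem := make(chan struct{}, limit)
	var wg sync.WaitGroup
	for _, i := range items {
		i := i
		sem <- struct{}{} // blocks once `limit` workers are in flight
		wg.Add(1)
		go func() {
			defer func() { <-sem; wg.Done() }()
			work(i)
		}()
	}
	wg.Wait()
}

// boundedErrgroup: errgroup's SetLimit makes Go() block until a slot
// frees up, which is the behavior mentioned above.
func boundedErrgroup(limit int, items []int) error {
	var g errgroup.Group
	g.SetLimit(limit)
	for _, i := range items {
		i := i
		g.Go(func() error { return work(i) })
	}
	return g.Wait()
}

func main() {
	items := []int{1, 2, 3, 4, 5}
	boundedDumb(2, items)
	_ = boundedErrgroup(2, items)
}
```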
It's already there! I just haven't cut a release since the change.
> Most often I want to return just the first error.
In many cases, I do too, which is why (*pool).WithFirstError() exists :)
> original errors can be inspected as intended
If you're using errors.Is() or errors.As(), error inspection should still work as expected.
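For illustration, using errors.Join from the standard library as a stand-in for an aggregated error (not conc's actual error type): errors.Is and errors.As walk the wrapped errors, so sentinel and typed checks keep working.

```go
package main

import (
	"errors"
	"fmt"
)

// ErrRateLimited stands in for a sentinel error that callers care about
// (e.g. something mapped to an RPC error code).
var ErrRateLimited = errors.New("rate limited")

func main() {
	// errors.Join stands in for an aggregated error from several tasks;
	// any aggregate that implements Unwrap behaves the same way here.
	agg := errors.Join(
		fmt.Errorf("fetch user: %w", ErrRateLimited),
		errors.New("fetch orders: connection reset"),
	)

	// The sentinel is still detectable through the wrapping.
	fmt.Println(errors.Is(agg, ErrRateLimited)) // true
}
```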
> Often goroutine overhead is negligible and I would bound concurrency in dumber ways
Yes, definitely. And that's what I've always done too. However, I've found it's surprisingly easy to get subtly wrong, especially when modifying code I didn't write, and even more so if I want to propagate panics (which I do, though that seems to be a somewhat controversial opinion in this thread). Conc is intended to be a well-known pattern that I don't have to think about too much when using it.
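For anyone curious, the general shape of panic propagation is roughly this; a sketch of my own, not conc's implementation, which also does things like preserving the original stack trace:

```go
package main

import "fmt"

// propagate runs f in a new goroutine, captures any panic, and re-raises
// it on the calling goroutine. This is only the general shape; a real
// implementation also wants to keep the original stack trace.
func propagate(f func()) {
	done := make(chan any, 1)
	go func() {
		defer func() { done <- recover() }()
		f()
	}()
	if p := <-done; p != nil {
		panic(p) // re-panic where the caller can see it
	}
}

func main() {
	defer func() {
		fmt.Println("recovered in caller:", recover())
	}()
	propagate(func() { panic("boom") })
}
```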
> I think(?) conc does that too, but it could use documentation
It does! I'll update the docs to make that more clear.
Tell the teammates that you want to see them grow to become code reviewers. At first, require PRs to be approved by two people: first someone else reviews it; once they LGTM, you review it for final approval. Do this for a while, until you start noticing you have little to add. Then grant the relevant reviewer the privilege to review in a single pass. Bus factor += 1, and better knowledge transfer.
That is the process you'll have when you get back. But in the short term, it's probably fine to let them approve each other's code while you're gone. Yes, they might break something... but you can always roll back (I hope). Blocking all PR merging until you return sounds a little overkill to me, but there might be some reason I'm missing for why the stakes are that high.
> Over local network it's an unreliable assumption that device A can discover device B through some form of broadcast. There are ways to intentionally or unintentionally block that. And then even if you know each party's IP, some networks will intentionally isolate different users for security reasons.
> Is it an Apple/Android limitation or a more basic networking limitation that drives devices to communicate with centralized servers on the internet?
And yes I agree it does seem ridiculous.