> The European Union’s Digital Identity Wallet takes a radically different approach. Zero-knowledge proofs let you verify age without revealing personal data—like showing you’re over 18 without disclosing your birthdate or identity details
I'm not buying that "I am over 18, but I tell you this without revealing that I am over 18" is a real thing
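It is a real thing, though. The underlying trick goes back to sigma protocols like Schnorr's: you can prove you know a secret x behind a public value y = g^x mod p without revealing x. A toy sketch with deliberately tiny, insecure parameters (and nothing like the EU wallet's actual construction, which builds on signed credentials):

```python
import secrets

# Toy Schnorr identification protocol. Parameters are illustrative and
# insecure (real deployments use large groups); the point is only to show
# that "prove knowledge without revealing it" is mathematically possible.
p, q, g = 23, 11, 2          # g has prime order q in Z_p*

x = 7                        # prover's secret (think: the thing you don't reveal)
y = pow(g, x, p)             # public value

# Prover commits to a fresh random nonce.
r = secrets.randbelow(q)
t = pow(g, r, p)

# Verifier issues a random challenge.
c = secrets.randbelow(q)

# Prover's response: s leaks nothing about x because r is uniform random.
s = (r + c * x) % q

# Verifier's check: g^s == t * y^c (mod p) holds iff the prover knew x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("verified without revealing x")
```

The response s is masked by the one-time random r, so the verifier learns nothing beyond "the check passed". Age proofs layer the same idea over a credential signed by an issuer, so you can prove "birthdate in this credential is more than 18 years ago" without showing the birthdate.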
Is there a name for this? I think about this all the time. I've always had a theory that some offensive words may actually be persisting longer solely because we essentially calcify their definitions and never allow them to evolve into new less offensive meanings.
Suggestion: it should, at the very least, not show a "_ is now on Signal!" UI notification. As a nice-to-have, yes, it should black-hole messages until at least one reply happens.
No, but I can empathize. I never understood the Nix language. It's impenetrable to me. I hate it so much.
I keep checking back every year or so secretly hoping they'll have upgraded the language. It's fascinating how successful Nix is given how utterly opaque the language is.
Imagine how hard moderating a forum is. Now imagine moderating the whole Internet. Everyone always thinks it's trivial. Everyone couldn't be more wrong.
How convenient that Google gets to reap all of the benefits of that situation without bearing any of the responsibility. Oh, but it's fine, because they have a trillion dollars and "it's hard, you guuyyysss!"
"let's make private corporations responsible for policing the behavior of other corporations instead of the government and legislation" <- this is what I'm hearing
> For example, my software renders the smiling face with horns as red [], but often the depiction is purple.
It seems the article is ironically falling for the same problem. This would be worked around by including images of the emoji variants, rather than relying on Unicode.
It's not insane. The best codebase I ever inherited was about 50kloc of C# that ran pretty much everything in the entire company. One web server and one DB server easily handled the ~1000 requests/minute. And the code was way more maintainable than any other nontrivial app I've worked on professionally.
I work(ed) on something similar in Java, and it still works quite well. But the last few years have increasingly been about getting berated by management over why things aren't modern Kubernetes/microservices-based by now.
I feel like people forget just how fast a single machine can be. If your database is SQLite the app will be able to burn down requests faster than you ever thought possible. You can handle much more than 23 req/day.
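To put a rough number on it (illustrative only, not a rigorous benchmark, and hardware-dependent): a quick sketch of point-read throughput against an in-memory SQLite table.

```python
import sqlite3
import time

# Rough single-machine throughput sketch: simple indexed reads against an
# in-memory SQLite database. Numbers are illustrative, not a benchmark.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [(f"user{i}",) for i in range(10_000)])
conn.commit()

n = 100_000
start = time.perf_counter()
for i in range(n):
    conn.execute("SELECT name FROM users WHERE id = ?",
                 (i % 10_000 + 1,)).fetchone()
elapsed = time.perf_counter() - start
print(f"{n / elapsed:,.0f} point reads/sec")
```

Even with Python's interpreter overhead in the loop, one process burns through orders of magnitude more queries per second than most sites see per day.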
In the not-too-distant past I was handling many thousands of DB-backed requests per hour in a Python app running plain PostgreSQL.
You can get really, really far with a decent machine if you mind the bottlenecks. Getting into swap? Add RAM. Blocked by IO? Throw in some more NVMe. Any reasonable CPU can process a lot more data than it's popular to think.
Anytime someone talks about scale I remember just how much data the low-MHz CPUs used to process. Sure, the modern stuff has nicer UIs, but the UIs of the past were not bad, and a lot of data was processed. Almost nobody has more data than what the busiest 200MHz CPU in 1999 used to handle alone, so if you can't do it, that isn't a scaling problem, it is a people problem. (Don't get me wrong, scaling out might be a good trade-off to make - but don't say you couldn't do it on a single computer.)
It's not. It's kind of bonkers to pursue that when you have a lot of traffic, but it's a perfectly sane starting point until you know where the pain points are.
In general, the vast number of small shops chugging away with a tractably sized monolith aren't really participating in the conversation, just idly wondering what approach they'd take if they suddenly needed to scale up.
I'm not even sure it's bonkers if you have a lot of traffic. It depends on the nature of the traffic and how you define "a lot". In general, though, it's amazing how low latency a function call that can handle passing data back and forth within a memory page or a few cache lines is compared to inter-process communication, let alone network I/O.
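A quick, unscientific way to see that gap on your own machine: time an in-process function call against a loopback TCP round trip carrying the same tiny payload (all names here are just for the sketch).

```python
import socket
import threading
import time

# Compare an in-process call vs. a loopback TCP round trip for the same
# 4-byte payload. Illustrative only; real RPC adds serialization on top.

def echo(data: bytes) -> bytes:
    return data

def serve(sock: socket.socket) -> None:
    conn, _ = sock.accept()
    with conn:
        while chunk := conn.recv(64):
            conn.sendall(chunk)

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=serve, args=(srv,), daemon=True).start()
cli = socket.create_connection(srv.getsockname())

N = 10_000
t0 = time.perf_counter()
for _ in range(N):
    echo(b"ping")
call_ns = (time.perf_counter() - t0) / N * 1e9

t0 = time.perf_counter()
for _ in range(N):
    cli.sendall(b"ping")    # strict ping-pong, so no coalescing
    cli.recv(64)
net_ns = (time.perf_counter() - t0) / N * 1e9

print(f"function call ~{call_ns:.0f} ns, loopback round trip ~{net_ns:.0f} ns")
```

And loopback is the flattering case: it never touches a NIC. A real network hop between services adds on top of that.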
The corollary to that is, it's amazing how far you can push vertical scaling if you're mindful of how you use memory. I've seen people scale single-process, single-threaded systems multiple orders of magnitude past the point where many people would say scale-out is an absolute necessity, just by being mindful of things like locality of reference and avoiding unnecessary copying.
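On the unnecessary-copying point, a tiny illustration (timings vary by machine; the chunk sizes are arbitrary): building one buffer by repeated bytes concatenation, which recopies the growing buffer, versus a single join.

```python
import time

# Build the same ~2 MB buffer two ways. Repeated bytes concatenation
# copies the whole accumulated buffer on each append (quadratic work);
# join makes one allocation and one pass. Sizes are arbitrary.
chunks = [b"x" * 1024] * 2_000

t0 = time.perf_counter()
buf = b""
for c in chunks:
    buf += c                 # recopies everything accumulated so far
copy_s = time.perf_counter() - t0

t0 = time.perf_counter()
buf2 = b"".join(chunks)      # single allocation, single pass
join_s = time.perf_counter() - t0

print(f"concat: {copy_s:.4f}s  join: {join_s:.4f}s")
```

Same output either way; the difference is purely how much memory traffic you generate to get there, and that kind of difference is exactly what decides how far one box takes you.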
if you have 23 requests per day, the insane thing is wondering whether or not you've chosen the correct infrastructure, because it really doesn't matter.
do whatever you want, you've already spent more time than it's worth considering it.
most productive applications have more RPS than that. we should ideally be speaking about how to architect _productive_ applications and not just mocks and prototypes
Don't know if this is sarcasm or not. If you have 23 req/day, then there's no tech problem to solve. Whatever you have is good enough, and increasing traffic will come from solving problems outside tech (marketing, etc)