Hacker News | new | past | comments | ask | show | jobs | submit | infogulch's comments

This looks pretty nice to use!

How does it work if you want to join multiple complex subqueries?

How far can a new query language like this go? Could this be added as a native query language in e.g. postgresql?


That's an interesting stress test for I2P. They should try to fix that: the protocol should be resilient to such an event. Even if there are 10x more bad nodes than good nodes (assuming they were noncompliant I2P actors, based on that thread), the good nodes should still be able to find each other and continue working. To be fair, spam will always be a thorny problem in completely decentralized protocols.

> Even if there are 10x more bad nodes than good nodes [...] the good nodes should still be able to find each other

What network, distributed or decentralized, can survive such an event? Most protocols break down once some N% of the network consists of bad nodes; the usual guarantee is something like "at least half the nodes are honest," so asking a network to survive 10x as many bad nodes as good ones is a far stronger demand. Are there existing decentralized/distributed protocols that would survive a 1000% attack of bad nodes?


No. They should not try to survive such attacks. The best defense against a temporary attack is often to pull the plug: better that than potentially exposing users. When there are 10x as many bad nodes as good ones, the base protection of any anonymity network is likely compromised. Shut down, survive, and return once the attacker has moved on.

This is why Tor is centralized, so that they can take action like cutting out malicious nodes if needed. It’s decentralized in the sense that anyone can participate by default.

> so that they can take action like cutting out malicious nodes if needed

How does that work?


While anyone can run a Tor node and register it as available, the tags that Tor relays get assigned, and the list of relays itself, are controlled by 9 consensus servers[1] run by different members of the Tor project (in different countries). They can thus easily block nodes.

[1]: https://consensus-health.torproject.org/


Interesting, thank you so much! Yeah, if those 9 really are independent entities, I’d say I don’t see many issues here.

It's 10, not 9. And there are severe problems with having a total of 10 DAs be the essential source of truth for the whole network. It would be trivial to DDoS the DAs and bring down the Tor network, or at the very least disrupt it: https://arxiv.org/abs/2509.10755

It's the only complaint I have about the current state of Tor. Anyone should be able to run a directory authority, regardless of whether the operator is trusted or not (same as normal relays).


Couldn't you

A: Run your own network that trusts the existing DAs plus whatever nodes you think ought to be included, and convince everyone that this is better (if it is)?

B: Run a node and convince others to trust yours, so that eventually there are 11, then 12, and so forth?


Anyone can. The DA code is open source and is used whenever you run a testnet. You can also run a DA on the mainnet; how do you think the 10 primary DAs exist? They're not 10 computers owned by a single organization, they're run by 10 mutually trusting individuals. However, most of the network won't trust you.

Why would an attacker move on if it can maintain a successful DoS attack forever?

Because botnets are mostly there to make money nowadays, or are owned by state actors.

Either way, it’s opportunity cost.


The mentioned botnet didn't intentionally take down I2P. It's run by a bunch of kids who don't know what they're doing.

Finding good nodes is a thorny problem for human friendship, too!

That's why the Web of Trust, or classic GnuPG key-signing parties, are a forgotten/ignored must-have. Anyone can change and go rogue, of course, but it's statistically less likely.

If I understand GP correctly, the web of trust comes after finding these human nodes, and will not help you in the process.

It doesn't work for I2P due to its design, but for things like Nostr it works well. Essentially, the goal is to build up a list of "known" reliable relays over time, while simultaneously blacklisting anyone who joins and proves to be unreliable, relying on the statistic that collaborative individuals outnumber hostile ones in any sufficiently large cohort.
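The build-a-reputation-then-blacklist approach can be sketched in a few lines. This is an illustrative toy, not any real Nostr client's API; the class and threshold values are assumptions:

```python
from collections import defaultdict

class RelayTracker:
    """Track per-relay reliability; trust only relays with enough good history."""
    def __init__(self, min_attempts=5, min_success_rate=0.8):
        self.stats = defaultdict(lambda: [0, 0])  # url -> [successes, attempts]
        self.min_attempts = min_attempts
        self.min_success_rate = min_success_rate

    def record_result(self, url, ok):
        s = self.stats[url]
        s[1] += 1
        if ok:
            s[0] += 1

    def trusted(self):
        # Trusted once a relay has enough history and a high success rate.
        return [u for u, (ok, n) in self.stats.items()
                if n >= self.min_attempts and ok / n >= self.min_success_rate]

    def blacklisted(self):
        # Relays with enough history but a poor track record get cut.
        return [u for u, (ok, n) in self.stats.items()
                if n >= self.min_attempts and ok / n < self.min_success_rate]

t = RelayTracker()
for _ in range(10):
    t.record_result("wss://good.example", True)
    t.record_result("wss://bad.example", False)
print(t.trusted())      # ['wss://good.example']
print(t.blacklisted())  # ['wss://bad.example']
```

A newcomer needs sustained good behavior before being trusted, which is exactly what makes a drive-by Sybil flood less effective.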

Of course, it's far from being 100% effective, but it mitigates the issue significantly.


Hostile entities generally have a lot of money they can use to perform a Sybil attack.

Sure, but that can't break the trusted part of the network, which can remain operational in that case, even if not really anonymous anymore.

Funny and excellent comment!

This is a nice increment in ACME usability.

Once again I would like to ask CA/B to permit name constrained, short lifespan, automatically issued intermediate CAs. Last year's request: https://news.ycombinator.com/item?id=43563676
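X.509 already has the machinery for this; in OpenSSL-config terms, a hypothetical name-constrained intermediate might carry extensions like the following (section name and domain are illustrative):

```ini
[ v3_constrained_intermediate ]
basicConstraints = critical, CA:true, pathlen:0
keyUsage = critical, keyCertSign, cRLSign
# Limit the intermediate to one domain tree; leaf certs for any other
# name fail path validation at the client.
nameConstraints = critical, permitted;DNS:example.com
```

With a short lifespan and automated issuance via ACME, such an intermediate would let an operator mint their own leaf certs without widening the blast radius beyond their own domain.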


Does GNL support pleats, tucks, and darts? These sewn features help make flat cloth conform to curves in the body. The terms don't seem to be mentioned explicitly in the repo, though maybe they can be implemented with the existing notation.

https://en.wikipedia.org/wiki/Pleat | https://en.wikipedia.org/wiki/Tuck_(sewing) | https://en.wikipedia.org/wiki/Dart_(sewing)



GitHub doesn't do plural folding, which trips up my searches sometimes. Thanks.

Live Streaming → Video Calling → Screen Sharing → Remote Control → Remote Desktop

Each is only a small conceptual step from the previous. I wonder if we'll ever have technology that enables smooth transitions from one end to the other.


How I see SQL databases evolving over the next 10 years:

    1. integrate an off-the-shelf OLAP engine
       forward OLAP queries to it
       deal with continued issues keeping the two datasets in sync
    2. rebase the OLTP and OLAP engines onto a unified storage layer
       storage layer supports page-aligned row-oriented files, column-oriented files, and remote files
       still have data and semantic inconsistencies due to running two engines
    3. merge the engines
       policy to automatically archive old records to a compressed column-oriented file format
       option to move archived record files to remote object storage, fetched on demand
       queries seamlessly integrate data from freshly updated records and archived records
       the only noticeable difference is that queries for very old records take a few seconds longer to return
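The end state in step 3 is easy to model in miniature: hot rows live in a row store, old rows get archived into a columnar layout, and a scan merges both transparently. All names here are illustrative, and the "archive" stands in for compressed files on object storage:

```python
hot = []                          # row-oriented: list of dicts (fresh records)
cold = {"id": [], "value": []}    # column-oriented: dict of parallel lists

def insert(row):
    hot.append(row)

def archive(before_id):
    # Move old rows into the columnar archive (in a real system: a
    # compressed file format, possibly fetched on demand from remote storage).
    global hot
    old = [r for r in hot if r["id"] < before_id]
    hot = [r for r in hot if r["id"] >= before_id]
    for r in old:
        cold["id"].append(r["id"])
        cold["value"].append(r["value"])

def scan():
    # A query sees one logical table, regardless of where each row lives.
    yield from ({"id": i, "value": v} for i, v in zip(cold["id"], cold["value"]))
    yield from hot

for i in range(6):
    insert({"id": i, "value": i * 10})
archive(before_id=3)
print([r["id"] for r in scan()])  # [0, 1, 2, 3, 4, 5]
```

The only user-visible seam is latency: rows served from `cold` may need a remote fetch, which is the "few seconds longer" in the last step.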


If you keep the fd open maybe you could read refreshed secrets through it for live secret rotation.
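This does work in the simple case, with one caveat worth flagging: it only picks up changes if the updater rewrites the file in place (truncate + write). If the updater replaces the file atomically via rename, the old fd still points at the old inode and keeps reading the old secret. A minimal sketch of the in-place case:

```python
import os, tempfile

path = os.path.join(tempfile.mkdtemp(), "secret")
with open(path, "w") as w:
    w.write("secret-v1")

f = open(path, "r")            # long-lived fd, opened once at startup
first = f.read()

with open(path, "w") as w:     # in-place rewrite: same inode, new bytes
    w.write("secret-v2")

f.seek(0)                      # rewind the same fd...
second = f.read()              # ...and read the rotated secret through it
f.close()
print(first, "->", second)     # secret-v1 -> secret-v2
```

Since many secret managers rotate via rename for atomicity, a robust consumer would re-open the path (or watch it) rather than rely on a single fd.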


Send up a spacecraft with back-to-back, equal-area solar panels and radiators (you have to reject heat backwards; you can't reject it to your neighboring sphere elements!). Push your chip temp as high as possible (90C? 100C?). Find a favorable choice of vapor for a heat pump / Organic Rankine Cycle (possibly dual-loop) to boost the temp to 150C for the radiator. Cool the chip with vapor 20C below its running temp. 20-40% of the solar power goes to run the pumps, leaving 60-80% for the workload (a resistor with extra steps).

There are a lot of degrees of freedom to optimize something like this.

Spacecraft radiator system using a heat pump - https://patents.google.com/patent/US6883588B1/en
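A quick back-of-envelope shows why boosting the radiator temperature is worth spending pump power on. Pure Stefan-Boltzmann radiation into deep space; the emissivity value is an assumption:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)
EPS = 0.9          # radiator emissivity (assumed)

def flux(t_celsius):
    """Heat rejected per square meter of radiator at a given temperature."""
    t = t_celsius + 273.15
    return EPS * SIGMA * t ** 4

for t in (100, 150):
    print(f"{t} C radiator: {flux(t):.0f} W/m^2")
# Because flux scales as T^4, going from a 100 C to a 150 C radiator
# raises rejection per unit area by roughly 65%, shrinking the radiator
# for the same heat load -- the payoff for the 20-40% of power spent
# driving the heat pump.
```

That T^4 scaling is the main degree of freedom the comment is optimizing over: hotter radiators are smaller, but lifting heat uphill costs compressor work.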


Now check this out:

> Protobuf performs up to 6 times faster than JSON. - https://auth0.com/blog/beating-json-performance-with-protobu... (2017)

That's 30x faster just by switching to a zero-copy data format that's suitable for both in-memory use and the network. JSON services spend 20-90% of their compute on serde; a zero-copy data format would essentially eliminate it.


Why don't we use standardized zero-copy data formats for this kind of thing? A standardized layout like Arrow means that the data is not tied to the layout/padding of a particular language, potential security problems like bounds checks are automatically handled by the tooling, and it works well over multiple communication channels.


While Arrow is amazing, it is only the C Data Interface that can be FFI'd, which is pretty low level. If you have something higher-level like a table or a vector of record batches, you have to write quite a bit of FFI glue yourself. It is still performant because it's a tiny amount of metadata, but it can still be a bit tedious.

And the reason is ABI compatibility. Reasoning about ABI compatibility across different C++ versions and optimization levels and architectures can be a nightmare, let alone different programming languages.

The reason it works at all for Arrow is that the leaf levels of the data model are large contiguous columnar arrays, so reconstructing the higher layers still gets you a lot of value. The other domains where it works are tensors/DLPack and scientific arrays (Zarr etc). For arbitrary struct layouts across languages/architectures/versions, serdes is way more reliable than a universal ABI.
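The "large contiguous columnar arrays" point is easy to see concretely: a buffer of fixed-width values can be reinterpreted as typed data with a cast, no parsing and no copy. A minimal stdlib illustration (Arrow's C Data Interface hands over essentially this kind of buffer plus a small metadata struct):

```python
import array

# "Wire" bytes: five native-endian int32 values, as they might arrive
# in a shared or received buffer.
wire = array.array("i", [1, 2, 3, 4, 5]).tobytes()

# Zero-copy access: reinterpret the bytes as a typed view, no deserialization.
view = memoryview(wire).cast("i")
print(view[2])     # 3
print(sum(view))   # 15
```

Contrast this with an arbitrary struct, where field order, padding, and alignment vary by compiler and language, which is exactly why serdes beats a universal ABI there.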

