Working with Jenkins CasC, Job DSL and declarative pipelines, I'm not sure where the "million times" comes from. Sure, there are some annoying parts, and GHA has the social network for reusable actions, but apart from that it's not that different.
Old-school Maven-style jobs where you type shell script into a `<textarea>`? Yeah, let's not talk about those, but we don't have a single one of them left anymore.
downdetectorsdowndetector.com does not load the results as part of the HTML, nor does it make any API requests to retrieve the status. Instead, the obfuscated JavaScript contains a `generateMockStatus()` function with parts like `responseTimeMs: randomInt(...)` and a hardcoded `status: up` / `httpStatus: 200`. I didn't reverse-engineer the entire script, but based on it incorrectly showing downdetector.com as being up today, I'm pretty sure that downdetectorsdowndetector.com is just faking the results.
downdetectorsdowndetectorsdowndetector.com and downdetectorsdowndetectorsdowndetectorsdowndetector.com seem like they might be legit. One has the results in the HTML, the other fetches some JSON from a backend (`status4.php`).
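If anyone wants to spot-check this without wading through the obfuscated bundle by hand, something along these lines should do it (how the script URLs are referenced is a guess on my part, so treat this as a sketch rather than a working scraper):

```python
import re
import urllib.request
from urllib.parse import urljoin

def fetch(url: str) -> str:
    req = urllib.request.Request(url, headers={"User-Agent": "curl/8"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

# Pull the page, then grep every referenced script bundle for the
# tell-tale generateMockStatus function.
base = "https://downdetectorsdowndetector.com/"
html = fetch(base)
for src in re.findall(r'<script[^>]+src="([^"]+)"', html):
    bundle_url = urljoin(base, src)
    if "generateMockStatus" in fetch(bundle_url):
        print("mock status generator found in", bundle_url)
```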
There was additional control difficulty, too, due to reduced hydraulic pressure:
> On the Boeing 767, the control surfaces are so large that the pilots cannot move them with muscle power alone. Instead, hydraulic systems are used to multiply the forces applied by the pilots. Since the engines supply power for the hydraulic systems, in the case of a complete power outage, the aircraft was designed with a ram air turbine that swings out from a compartment located beneath the bottom of the 767,[10] and drives a hydraulic pump to supply power to hydraulic systems.
> As the aircraft slowed on approach to landing, the reduced power generated by the ram air turbine rendered the aircraft increasingly difficult to control.[16]
> The forward slip disrupted airflow past the ram air turbine, which decreased the hydraulic power available; the pilots were surprised to find the aircraft slow to respond when straightening after the forward slip.
> Cloudflare’s critical Workers KV service went offline due to an outage of a 3rd party service that is a key dependency. As a result, certain Cloudflare products that rely on KV service to store and disseminate information are unavailable [...]
Surprising, but not entirely implausible for a GCP outage to spread to CF.
Probably unintentional. "We just read this config from this URL at startup" can easily snowball into "if that URL is unavailable, this service will go down globally, and all running instances will fail to restart when the DevOps team tries to do a pre-emptive rollback".
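A rough sketch of the difference (the URL and cache path are made up, and this is the generic pattern, not a claim about Cloudflare's actual setup): the first version fails closed the moment the config host has an outage, while the second at least lets instances restart from the last config they successfully fetched.

```python
import json
import urllib.request

CONFIG_URL = "https://config.internal.example/service.json"  # hypothetical
CACHE_PATH = "/var/cache/service/config.last-known-good.json"

def load_config_fragile():
    # "We just read this config from this URL at startup":
    # any outage of the config host now blocks every instance from starting.
    with urllib.request.urlopen(CONFIG_URL, timeout=5) as resp:
        return json.load(resp)

def load_config_with_fallback():
    # Try the URL, but fall back to the last config we successfully fetched,
    # so a restart or rollback during the upstream outage still comes up.
    try:
        with urllib.request.urlopen(CONFIG_URL, timeout=5) as resp:
            config = json.load(resp)
        with open(CACHE_PATH, "w") as f:
            json.dump(config, f)
        return config
    except OSError:
        with open(CACHE_PATH) as f:
            return json.load(f)
```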
After reading about Cloudflare's infra in post-mortems, it has always been surprising how immature their stack is. Like, they used to run their entire global control plane in a single failure domain.
I'm not sure who is running the show there, but the whole thing seems kinda shoddy given Cloudflare's position as the backbone of a large portion of the internet.
I personally work at a place with a smaller market cap than Cloudflare, and we were hit by the exact same kind of incident (datacenter power went out) and had almost no downtime, whereas the entire Cloudflare API was down for nearly a day.
Nice job keeping your app up during the outage, but I'm not sure you can say "the whole thing seems kinda shoddy" when they're handling the amount of traffic they are.
What's the alternative here? Do you want them to replicate their infrastructure across different cloud providers with automatic fail-over? That sounds... heck, I don't know if modern devops is really up to that. It would probably cause more problems than it would solve...
I was really surprised. Depending on another enterprise's cloud services is risky in general, I think, but pretty much everyone does it these days; I just didn't expect Cloudflare to be among them.
AWS has Outposts racks that let you run AWS instances and services in your own datacenter, managed just like the ones running in AWS datacenters. Neat, but incredibly expensive.
> What's the alternative here? Do you want them to replicate their infrastructure
Cloudflare advertises themselves as _the_ redundancy / CDN provider. Don't ask me for an "alternative", tell them to get their backend infra shit in order.
There are roughly 20-25 major IaaS providers in the world that should have close to no dependencies on each other. I'm almost certain Cloudflare believed that was their posture, and that the action items coming out of this post-mortem will be to make sure that it actually is.
Based on [1] it seems like a single `management.endpoints.web.exposure.include=*` is enough to expose everything, including the heapdump endpoint, over the public HTTP API without authentication. It's even there in the docs as an example.
Looks like there is a change [2] coming to the `management.endpoint.heapdump.access` default value that would make this harder to expose by accident.
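For anyone copy-pasting from those docs, a more conservative setup looks roughly like this (property names as I understand the current Spring Boot docs; the `access` property is the newer mechanism that [2] is about):

```properties
# Expose only what you actually need over HTTP; keep the rest off the web port.
management.endpoints.web.exposure.include=health,info

# Belt and braces: turn off the heapdump endpoint entirely unless you need it.
management.endpoint.heapdump.enabled=false
# (Newer Spring Boot versions model this via access instead, e.g.
#  management.endpoint.heapdump.access=none)
```

And even then, exposure settings don't add authentication by themselves; if the actuator endpoints are reachable on a public port they still need to be locked down, e.g. via Spring Security.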
Going a bit further, it seems like there's a grain of truth here: HTTP/2 has a stream priority dependency mechanism [1], and this report [2] from Imperva describes an actual Dependency Cycle DoS in the nghttp implementation.
Unfortunately that's where it seems to end... I'm not that familiar with QUIC and HTTP/2, but I think the closest it gets is that the GitHub repo exists and has a `class QuicConnection` [3]. Beyond that, the QUIC protocol layer doesn't have any concept of exchanging stream priorities [4] and HTTP/2 priorities are something the client sends, not the server? The PoC also mentions HTTP/3 and PRIORITY_UPDATE frames, but those are from the newer RFC 9218 [5] and lack the stream dependencies used in HTTP/2 PRIORITY frames.
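For context, the HTTP/2 mechanism in question is the PRIORITY frame from RFC 7540, whose payload is literally a stream-dependency pointer plus a weight; a dependency cycle is just two of these pointing at each other. A rough sketch of the wire format (stream IDs and weight picked arbitrarily):

```python
import struct

def h2_priority_frame(stream_id: int, depends_on: int, weight: int = 16,
                      exclusive: bool = False) -> bytes:
    """Build an HTTP/2 PRIORITY frame (RFC 7540, frame type 0x2).

    Payload = 1-bit exclusive flag + 31-bit stream dependency + 8-bit weight
    (encoded on the wire as weight - 1).
    """
    payload = struct.pack(
        ">IB",
        (0x80000000 if exclusive else 0) | (depends_on & 0x7FFFFFFF),
        (weight - 1) & 0xFF,
    )
    # 9-byte frame header: 24-bit length, type, flags, 31-bit stream id.
    header = struct.pack(">BHBBI", 0, len(payload), 0x2, 0x0,
                         stream_id & 0x7FFFFFFF)
    return header + payload

# A dependency cycle is just two of these pointing at each other:
# stream 3 depends on stream 5, and stream 5 depends on stream 3.
cycle = h2_priority_frame(3, depends_on=5) + h2_priority_frame(5, depends_on=3)
```

HTTP/3's PRIORITY_UPDATE frame from RFC 9218 only carries an urgency/incremental value, so there's no dependency graph to make cyclic in the first place, which is why the PoC's framing doesn't add up.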
This is a whole new problem open source projects will be facing: AI slop PRs and vulnerability reports, which will only be solved by using AI tools to filter through the unholy amount of them.
AI filtering AI that's submitted based on AI scouring the web for ways to make probably less money than it costs to run. The future looks like turning on computers and having them run at 100% GPU + CPU usage 100% of the time with zero clue what they're doing. What a future.
It looks like the Iberian peninsula is relatively isolated from the rest of the CESA synchronous grid, with only 2% cross-border capacity compared to local generation. [1]
There's a map at [2].
> The Spanish electricity system is currently connected to the systems of France, Portugal, Andorra and Morocco. The exchange capacity of this interconnection is around 3 GW, which represents a low level of interconnection for the peninsula. The international interconnection level is calculated by comparing the electricity exchange capacity with other countries with the generation capacity or installed power.
That graph doesn't seem to make a very clear distinction between historical, real-time and predicted values... I think the event happened at 12:30 local time or so.
There seems to be some kind of recurring daily pattern where the French-Spanish interconnect switches from Spain -> France imports to France -> Spain exports at around that time, and then back again in the late afternoon.