I'm sorry, I don't get the point. Perhaps I'm missing something.
If I open a single port to my home server, then anybody can send any traffic to my server on that port. The attack surface is exactly the process running on my home server, listening on that port.
If I use the cloudflare tunnel, anybody using my web service connects to some cloudflare server which transparently forwards, through the tunnel, everything to the process running at home. The attack surface is ... exactly the process running on my home server, receiving everything coming into the tunnel, effectively listening on the port opened on the cloudflare server.
Where is the difference? Any security issue in the process running on my server that can be exploited by sending traffic to it is attackable in either case.
Does cloudflare filter the traffic in any way? How does it know what's good and what's bad traffic?
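For concreteness, the forwarding described above is typically set up via cloudflared's ingress rules, which map a public hostname on Cloudflare's edge to the local process. A minimal sketch (tunnel name, hostname, and port are illustrative placeholders, not from the thread):

```yaml
# ~/.cloudflared/config.yml -- minimal sketch; all names are illustrative
tunnel: home                              # created via: cloudflared tunnel create home
credentials-file: /home/user/.cloudflared/<tunnel-id>.json

ingress:
  - hostname: app.example.com             # public hostname served by Cloudflare
    service: http://localhost:8080        # the local process that receives the traffic
  - service: http_status:404              # catch-all for unmatched hostnames
```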
> If I open a single port to my home server, then anybody can send any traffic to my server on that port. The attack surface is exactly the process running on my home server, listening on that port.
If you open a single port on your home server, you're exposing that port, sure. But you're also exposing your IP, and with that comes attacks on your IP stack, if you're worried about that. Presumably cloudflare proxies application traffic, but likely normalizes fragmentation, TCP flags, and whatnot.
Additionally, when you're exposing your IP, you're subject to volumetric attacks on it. High-volume DDoS often works by spoofing your IP in small requests to UDP servers that will respond, generating high volumes of traffic that overwhelm either your system in general or the bandwidth on your connection. If you're behind a tunnel, the tunnel endpoint gets that traffic instead, and Cloudflare seems to manage that well. If you manage to attract a DDoS at the application level, that could very well make it through the tunnel and overwhelm your service. I think Cloudflare does offer some filters for that, but my knowledge is limited. IMHO, most of the value is in avoiding non-application traffic; but I just host most of my stuff on cheap hosting, and if someone wants to DDoS me, my server will go down and that's fine.
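To put rough numbers on the reflection idea above (the figures are ballpark illustrations, not from the comment):

```python
# Back-of-envelope math for a UDP reflection/amplification attack.
# Figures are illustrative; real amplification factors vary by protocol.
# The attacker spoofs the victim's IP in small requests to open UDP
# services; the services' much larger replies all land on the victim.

request_bytes = 60          # small spoofed DNS query
response_bytes = 3000       # large DNS response to that query (illustrative)
amplification = response_bytes / request_bytes

attacker_uplink_mbps = 100
traffic_at_victim_mbps = attacker_uplink_mbps * amplification

print(amplification)            # 50.0
print(traffic_at_victim_mbps)   # 5000.0 -- far beyond a typical home uplink
```

The point is that the attacker's own bandwidth is multiplied by the reflectors, which is why absorbing this at a large edge network rather than at a home connection matters.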
The article is mostly about the how, and not the why. It briefly mentions the why with:
> you might be worried about forwarding your IP and connections to the world without properly securing them. Setting it all up sounds like a hassle, right?
If I were to do this, it would be because I didn't want to expose my IP to the world. And the two big reasons not to expose your IP are avoiding DDoS and reducing the privacy impact. Other people have chimed in that they do it because their IP is not static, and I think you can run the CF tunnel client behind CGNAT, which is also valuable.
CF also allows adding authentication, everything from OTP to third-party OIDC, including major providers like Google, GitHub, etc. In addition, you can block access by region or country.
Also, not everyone can simply open a port on their router. Lots of people have ISPs that prohibit it, or are behind CGNAT. So CF tunnels make it a lot easier for them to self-host and expose those apps.
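The reason this works behind CGNAT is that the cloudflared client only makes outbound connections to Cloudflare's edge, so no inbound port or port forward is ever needed. A rough sketch of the usual setup (tunnel name and hostname are illustrative placeholders):

```
$ cloudflared tunnel login                           # authorize against your CF account
$ cloudflared tunnel create home                     # creates the tunnel + credentials file
$ cloudflared tunnel route dns home app.example.com  # point a hostname at the tunnel
$ cloudflared tunnel run home                        # dials out; the router stays closed
```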
The point is the problem of exposing a port itself, as opposed to the additional problem of whatever security concerns your backend "process" may have.
I suppose you may not imagine that exposing a port is somehow problematic. However, it is. First, an open port reveals many things[1] about your operation you would likely prefer not to reveal. Second, it requires an Internet service that permits control over open ports, and the authority to utilize it; either or both may not be available to you.
I have no trouble appreciating the value of this, both for personal and commercial purposes. The inherent DDOS protection alone is a huge benefit.
[1] Off the top of my head:
a.) The ASN and, ultimately, the ISP you're using.
b.) The approximate physical location of your system.
c.) Through fingerprinting, your firewall device, and whatever problems it has.
Yeah, the point of CloudFlare tunnels is absolutely not what is shown in this article. It's to privately expose services on the web without opening ports.
You can put auth, georestrictions, etc. in front, so that people are authorized before they ever reach your computer.
I expose a lot of services on my NAS via CloudFlare tunnels, but every single one of them is behind an authentication screen managed by CloudFlare and running on their servers.
Endpoints being visible to the internet is one of the main reasons I created connet [1] - with it you can choose when and where to realize the other end. Another benefit is that endpoints talk to each other directly (under many conditions) without traffic ever hitting the cloud.
I might be wrong, but I think with Cloudflare tunnel (same with tailscale), you don't need to open that port to the public? That is at least my understanding. Still, Cloudflare must communicate somehow with the external world, and if that is compromised, then so is your service.