Hacker News | peanut-walrus's comments

So everyone has wanted "year of the Linux desktop" for a while. This year, since Microsoft has decided to call open season on their own feet and Valve has taken a break from swimming in their money pool to make sure absolutely any piece of software ever written can run on Linux, it looks like this might actually be happening. I am seeing a massive influx of new users, driven by distros like Cachy, Nobara, Bazzite. A lot of them don't have previous Linux experience and are generally not the most technically savvy users.

This absolutely terrifies me. Linux desktop security is, to put it politely, nonexistent. And the culture that goes with Linux desktop users just makes things worse: there's still a lot of BOFH gatekeeping going on, laughing at the new users when they inevitably mess something up, and, worst of all, completely refusing to admit that the Linux desktop has security issues. Whenever a new user asks what antivirus they should run, they are usually met with derision and ridicule, because the (old-school) Linux users genuinely think their computers are somehow immune and can never be hacked.

The first cybercriminals to put some development effort into Linux ransomware/stealers are going to wreak havoc and a lot of people are going to be in for a rude awakening. The D-Bus issue with secrets in the article is just one of many many many ways in which Linux desktops are insecure by design.

There are of course distros out there that take security seriously, but we are not really seeing new users migrating to Qubes en masse.

Edit: not calling out the distros above in particular, all 3 are doing very good work and are not really any worse in security than most other distros.


Any Windows program you download can steal all your secrets too. The only operating systems that isolate programs by default are on phones (and Chromebooks).

Unless you give it admin permissions, it really can't (admittedly, a lot of Windows users do run their computers with their admin account by default). Also, Windows users generally have at least some kind of anti-malware running, which, while not perfect, does work well against most spray-and-pray malware out there.

Edit: did some research, I must correct myself, the stealers have indeed evolved so admin permissions are not required for most credentials on Windows either.

However, should "strictly speaking, not really worse than Windows" be the security target we aim for in Linux?


All your data is owned by your user. If you run a program, it will have access to all your data. Admin or not is irrelevant here.

The keyring is pretty open on Windows: if you know the key, you can request anything, even secrets stored by another app. There is a way to lock a secret to a specific app, but it's not properly enforced in most versions of Windows.

The only user data that would require admin privileges is that of sandboxed Windows Store applications, where even the owner can't access it directly from outside the program without being admin.


The main problems with these kinds of in-repo vault solutions:

- Sharing the encryption key among all team members. You need to be able to add/remove people with access, and the only way is to rotate the key and only let the current set of people know the new one.

- Version control is pointless: you just see that the vault changed, with no hint as to what was actually updated inside it.

- Unless you are really careful, just one time forgetting to encrypt the vault when committing changes means you need to rotate all your secrets.


Agreed on 1 and 3, but a tip re 2: sops encrypts JSON and YAML semantically, so the key names of objects are preserved. In other words, you can see which key changed.

Whether that is a feature or a metadata leak is up to the beholder :)


git-crypt solves all 3 (mostly)

> Sharing encryption key for all team members

You're enrolling a particular user's public key and encrypting a symmetric key with it, not generating a single encryption key that you distribute. You can roll the underlying symmetric key at any time, and git-crypt works transparently for all users, since they receive the new symmetric key when they pull (encrypted with their asymmetric keys).
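A rough sketch of that flow with GPG-based git-crypt (the email is a placeholder):

```
# one-time setup in the repo
git-crypt init

# enroll a teammate: git-crypt encrypts the repo's symmetric key
# to their GPG public key and commits it under .git-crypt/
git-crypt add-gpg-user alice@example.com

# a newly enrolled user decrypts the working tree with their private key
git-crypt unlock
```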

> Version control is pointless

git-crypt solves this for local diff operations. For anything web-based like git{hub,lab,tea,coffee} it still sucks.

> - Unless you are really careful, just one time forgetting to encrypt the vault when committing changes means you need to rotate all your secrets.

With git-crypt, if your .gitattributes is set correctly (to include a file) and git-crypt isn't working or can't encrypt things, the commit fails, so there's no risk there.
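For reference, the .gitattributes wiring looks roughly like this (the patterns are illustrative; the last line keeps .gitattributes itself from ever being encrypted):

```
secrets/** filter=git-crypt diff=git-crypt
*.key filter=git-crypt diff=git-crypt
.gitattributes !filter !diff
```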

You can, of course, put secrets in files that you don't choose to encrypt. That is, I suppose, a risk of any solution, in-repo or out-of-repo.


For 1), it seems like you could do a proxy-encryption solution.

Edit: that's the wrong way to phrase it, I think. What I mean to say is: have a message key to encrypt the body, then rotate that key when team membership changes, and "let them know" by updating a header that holds the new message key encrypted with a key derived from each current member's public key.


Re 2: you can implement a custom Git diff driver and so (with the encryption key) see what changed, straight from `git diff`.
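One way to wire that up is Git's textconv mechanism, sketched here with sops (assuming its `-d` decrypt flag and locally available keys; the driver name `sopsdiff` is arbitrary):

```
# .gitattributes
*.enc.yaml diff=sopsdiff

# one-time local config: decrypt each side before diffing
# git config diff.sopsdiff.textconv "sops -d"
```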

Here's another one:

- using a third party tool to read and store credentials is an attack vector itself.


Cloudflare is widely used because it's the easiest way to run a website for free or expose local services to the internet. I think for most Cloudflare users, the DDoS protection is not the main reason they're using it.

I am using cloudflare because the origin servers are IPv6 only.

Cloudflare hosts websites for free?

Yup, the free plan is quite generous.

Yes, they have free plans.

Vulnerabilities in the software you use don't even make the top 5 in ways bad guys actually compromise you.

The most common attacks:

- Phishing

- Getting the user to run the malware themselves

- Credential reuse

- Literal physical theft

- Users uploading their own stuff completely willingly to some sketchy service

Vulnerabilities in the services you use are important, but you can't update those yourself :)


> Users uploading their own stuff completely willingly to some sketchy service

> Getting the user to run the malware themselves

Here are two good reasons for not trusting a password manager that stores your vault online.

On the other hand, most people have no backup strategy for their digital life.


For 1, you can still have extremely malicious networks. It's true that your web traffic is likely encrypted, but... what services are exposed on your machine? Do you have mapped Samba shares?

For 5 - session cookies are one of the main things stealers look for. Deleting cookies is absolutely good advice until browsers build in better mitigations against cookie theft.

For 6 - if there were a standard interface through which password managers could rotate my creds, I would sure as hell use it. Force-rotating passwords is only "bad" if people need to remember them. Any random credentials stored in a vault absolutely should be rotated periodically; there is no reason not to.

I don't see the point of this letter; none of the "bad" advice they call out is harmful to security in any way, and if people feel safer avoiding public wifi, so be it. Is it just a callout to other CISOs to update their security hygiene powerpoints?


While you and I would love it if password managers would rotate creds, we're not yet at the point where people will use password managers. They're still using CompanynameFall2025!. Next month, they'll dutifully rotate their password to CompanynameWinter2025! because their work policy is still stuck on shitty standards.

> This kind of advice is well-intentioned but misleading. It consumes the limited time people have to protect themselves and diverts attention from actions that truly reduce the likelihood and impact of real compromises.

When you've got 15 seconds to _maybe_ get someone to change their behavior for the better, you need to discard everything that's not essential and stay very very far away from "yes, but" in your explanations.


If your VPN provider offers a SOCKS5 instance, you can do this entire thing with a socat one-liner (plus the DNS hijack, of course).


It makes sense to think about price in the context of your business. If your entire infra cost is a rounding error on your balance sheet, of course you would pick the provider with the best features and availability guarantees (choose IBM/AWS). If your infra cost makes up a significant percentage of your operating expenses, you will start spending engineering effort to lower the cost.

That's why AWS can get away with charging the prices they do: even though it is expensive, for most companies it is not expensive enough to make it worth their while to look for cheaper alternatives.


It’s often less about engineering effort and more about taking some small risks to try less mainstream (but still relatively mature) alternatives by reasoning from first principles and doing a bit of homework.

From our experience, you can actually end up in a situation that requires less engineering effort and is more stable, while saving on costs, if you dare to go down to somewhat lower abstraction layers. Sometimes being closer to the metal is simpler, not more complex. And in practice, complexity causes outages far more often than hardware reliability does.


I wonder if, similar to infrastructure resilience, code resilience is also required for critical services that can never go down? Instead of relying on a single implementation of a critical service, have multiple independent implementations in different languages. Back when I was running my own DNS servers, I always made sure the primary and secondary ran on different platforms and different software.


What traffic would you request the upstream providers to block if you're getting hit by Aisuru? Considering the botnet consists of residential routers, those are the same networks your users will be originating from. Sure, in the best case, if your site is very regional, you can just block all traffic from outside your country - but most services don't have this luxury.

Blocking individual IP addresses? Sure, but consider that before your service detects enough anomalous traffic from one particular IP and is able to send the request to block upstream, your service will already be down from the aggregate traffic. Even a "slow" ddos with <10 packets per second from one source is enough to saturate your 10Gbps link if the attacker has a million machines to originate traffic from.
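The arithmetic behind that claim is worth spelling out; even at a single full-size packet per second per bot, a million-strong botnet exceeds a 10 Gbps link:

```python
# Back-of-the-envelope check: aggregate botnet traffic vs. link capacity.
def aggregate_gbps(bots: int, pps_per_bot: float, packet_bytes: int = 1500) -> float:
    """Aggregate traffic in Gbit/s, assuming full-size (1500 B) packets."""
    return bots * pps_per_bot * packet_bytes * 8 / 1e9

# 1,000,000 bots x 1 packet/second x 1500 bytes = 12 Gbps,
# already more than a 10 Gbps link can carry.
assert aggregate_gbps(1_000_000, 1) == 12.0
```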


In many cases the infected devices are in developing countries where none of your customers are. Many sites are regional - for example, a medium-sized business operating within one country, or even one city.

And even if the attack comes from your country, it is better to block part of the customers and figure out what to do next rather than have your site down.


Could it not be argued that ISPs should be forced to block users with vulnerable devices?

They have all the data on what CPE a user has, can send a letter and email with a deadline, and cut them off after it expires and the router has not been updated/is still exposed to the wide internet.


My dad’s small town ISP called him to say his household connection recently started saturating the link 24/7 and to look into whether a device had been compromised.

(Turns out some raspi reseller shipped a product with an empty username/password.)

While a cute story, how do you scale that? And what about all the users that would be incapable of troubleshooting it, like if their laptop, roku, or smart lightbulb were compromised? They just lose internet?

And what about a botnet that doesn’t saturate your connection, how does your ISP even know? They get full access to your traffic for heuristics? What if it’s just one curl request per N seconds?

Not many good answers available if any.


> While a cute story, how do you scale that? And what about all the users that would be incapable of troubleshooting it, like if their laptop, roku, or smart lightbulb were compromised? They just lose internet?

Uh, yes. Exactly and plainly that. We also suspend people's driver's licenses, or at the very least seriously fine them, if they misbehave on the road, including driving around in unsafe cars.

Access to the Internet should be a privilege, not a right. Maybe the resulting anger from widespread crackdowns would be enough of a push for legislators to demand better security from device vendors.

> And what about a botnet that doesn’t saturate your connection, how does your ISP even know?

In ye olde days providers were required to have abuse@ mailboxes. Credible evidence of malicious behavior reported to these did lead to customers being told to clean up shop, or else.


Xfinity did exactly this to me a few years ago. I wasn't compromised but tried running a blockchain node on my machine. The connection to the whole house was blocked off until I stopped it.


It could be argued that ISPs should not snoop on my traffic, barring a court order.


So accept that your customers won't be able to use your services whenever some russian teenager is bored? Yeah, good luck with justifying that choice.


And how often does that happen?


For the service I'm responsible for, 4 times in the last 24 hours.


Congratulations, you're the exception rather than the norm.

