Hacker News | new | past | comments | ask | show | jobs | submit | tadfisher's comments

That's what's nice about coarse-grained feature options like Rust's editions or Haskell's "languages", you can opt in to better default behavior and retain compatibility with libraries coded to older standards.
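For the unfamiliar, the Rust opt-in is a single line of project config; a crate on an older edition still interoperates with dependencies built against a newer one (the crate name below is made up):

```toml
[package]
name = "example"     # hypothetical crate name
version = "0.1.0"
edition = "2021"     # opt in to 2021-edition defaults; older-edition deps still link
```

Haskell's analogue is a per-module pragma such as `{-# LANGUAGE GHC2021 #-}`, which selects a whole bundle of extensions at once.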

The "null vs null" problem is commonly described as a problem with the concept of "null" or optional values; I think of it as a problem with how the language represents "references", whether via pointers or some opaque higher-level concept. Hoare's billion-dollar mistake was not providing references which are guaranteed to be non-null, i.e. ones that always refer to a value which exists.
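To make that concrete, here's a minimal sketch in Java, where every object reference is nullable by default; `NonNull` is a hypothetical wrapper, not a standard type. The null check happens exactly once, at construction, so every holder of a `NonNull<T>` can dereference without checking:

```java
// Sketch: encoding "a reference that is guaranteed to exist" in a language
// whose references are nullable by default. The invariant is established in
// the factory method, so get() never needs a null check.
final class NonNull<T> {
    private final T value;

    private NonNull(T value) { this.value = value; }

    static <T> NonNull<T> of(T value) {
        if (value == null) throw new IllegalArgumentException("value is null");
        return new NonNull<>(value);
    }

    T get() { return value; }  // never null, by construction
}
```

Languages like Kotlin and Rust bake this guarantee into the type system, so the wrapper (and its runtime check) disappears entirely.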


I was in Oahu last week in a place that experienced 10 inches of rainfall in one day. I had never been in a situation where stepping outside felt like turning on a shower.

Hawaii gets a lot of rain. IIRC some spots there are among the highest-rainfall places anywhere. I experienced a flash flood on Maui near Hana: there was no rain when we started hiking, and it turned into torrential rain. Quite an experience. The campground turned into a swamp with knee-high water.

Regular thing in Hilo or Waiʻaleʻale.

For those unaware, Mencius Moldbug is the pen name of Curtis Yarvin, thought leader for the Silicon Valley branch of right-wing technofascist weirdos which includes Peter Thiel and apparently half of a16z.

Is LadybugDB not one of these 25 projects?

LadybugDB is backed by this tech (I didn't write it)

https://vldb.org/cidrdb/2023/kuzu-graph-database-management-...

You can judge for yourself what work has been done in the last 5 months. Many short videos here. New open source contributors who I didn't know before ramping up.

https://youtube.com/@ladybugdb


Those 25 are me too; this one is a me as well /s.

Lies. Everyone knows The Red Green Show is the only television program legally allowed in Canada.

Not just Canada. Never screened here AFAIK so I had to buy it on DVD.

Which is frankly hilarious because the Microsoft Store is the worst offender when it comes to hosting straight-up scams.

I'm not the only one who has noticed: https://www.reddit.com/r/windows/s/6y39VNaLUh


The same is true on Android.

Did you visit that link? The top-downloaded apps on the Microsoft Store are 50% scams, compared to 0% on the Play Store and App Store.

Banks do these things to check security boxes, not to prevent scams.

In this case, they don't want users to reverse-engineer their app or look at logs that might inadvertently leak information about how to reverse-engineer their app. It is pointless, I know, but some security consultant has created a checkbox which must be checked at all costs.


Yes, it is really dumb that some of these settings are exposed to all apps with no permission gating [0]. But it will likely always be possible to fingerprint based on enabled developer options because there are preferences which can only be enabled via the developer options UI and (arguably) need to be visible to apps.

0: https://developer.android.com/reference/android/provider/Set...
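As a sketch of why this is fingerprintable, here's the shape of the check in Java. On Android the reader would be `Settings.Global.getInt(resolver, key, 0)` (no permission required); the lambda stands in for it here so the logic runs off-device. The two keys are the string values behind `Settings.Global.DEVELOPMENT_SETTINGS_ENABLED` and `Settings.Global.ADB_ENABLED`:

```java
import java.util.List;
import java.util.function.ToIntFunction;

// Sketch: fingerprinting "developer mode" from world-readable global settings.
// On Android, readSetting would be resolver -> Settings.Global.getInt(...).
final class DevModeFingerprint {
    static final List<String> KEYS = List.of(
            "development_settings_enabled",  // toggled by the Developer options UI
            "adb_enabled");                  // toggled by "USB debugging"

    static boolean looksLikeDeveloper(ToIntFunction<String> readSetting) {
        // Any nonzero value marks the device as a likely developer device.
        return KEYS.stream().anyMatch(k -> readSetting.applyAsInt(k) != 0);
    }
}
```

The point of the comment stands: as long as any of these keys is readable without a permission prompt, apps can bucket users by it.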


What might help more is a permission system where you can configure, per app, which settings can be read, including the option to return errors (or fake values) instead of the actual ones. Even for settings that are readable by default, you could still override them per app. (This has other benefits as well: if a setting doesn't work properly due to a bug, you can work around it.)

We'll see when this rolls out, but I don't foresee the package manager checking for developer mode when launching "unverified" apps, just when installing them. AFAICT the verification service is only queried on install currently.

Googler here (community engagement for Android) - I looked into the developer options question, and it's my understanding that you don't have to keep developer options enabled after you enable the advanced flow. Once you make the change on your device, it's enabled.

If you turn off developer options, then to turn off the advanced flow, you would first have to turn developer options back on.


Why can't stores take over the "verification" process (like they do already)? Why do app developers have to be verified themselves, why does the verification have to be done by google? There are so many options, why choose google of all companies? Just laziness?

If I understand correctly, the F-Droid store itself would be possible to install without waiting period, as it's an app from a verified developer.

Would apps installed from F-Droid be subject to this process, or would they also be exempt? Could that be a solution that makes everyone happy? Android already tracks which app store an app originates from re: autoupdating.

Also: Can I skip the 24h by changing my phone's clock?


> as it's an app from a verified developer.

Well that's if they go through the verification process, which does not seem like a thing they'd want to do - https://f-droid.org/en/2026/02/24/open-letter-opposing-devel...


If one verified app can install many unverified apps, either aurora droid or fdroid basic or one of the many other frontends would end up offering that feature quickly.

But there's been some comments that even that wouldn't be possible, every app would have to be verified individually, or be signed by a developer with less than 20 installs.

(Which of course then begs the question: Why not build a version of Fdroid that generates its own signing key and resigns every app on device?)


Honestly, if coerced sideloading is a real attack vector, then this seems to be a pretty fair compromise.

I just remain skeptical that this tactic is successful on modern Android, with all the settings and scare screens you need to go through in order to sideload an app and grant dangerous permissions.

I expect scammers will move to pre-packaged software with a bundled ADB client for Windows/Mac, then the flow is "enable developer options" -> "enable usb debugging" -> "install malware and grant permissions with one click over ADB". People with laptops are more lucrative targets anyway.


I predict that they're going to introduce further restrictions, but I think the restrictions will only apply to certain powerful Android permissions.

The use case they're trying to protect against is malware authors "coaching" users to install their app.

In November, they specifically called out anonymous malware apps with the permission to intercept text messages and phone calls (circumventing two-factor authentication). https://android-developers.googleblog.com/2025/11/android-de...

After today's announced policy goes into effect, it will be easier to coach users to install a Progressive Web App ("Installable Web Apps") than it will be to coach users to sideload a native Android app, even if the Android app has no permissions to do anything more than what an Installable Web App can do: make basic HTTPS requests and store some app-local data. (99% of apps need no more permissions than that!)

I think Google believes it should be easy to install a web app. It should be just as easy to sideload a native app with limited permissions. But it should be very hard/expensive for a malware author to anonymously distribute an app with the permission to intercept texts and calls.


I don't think Google has a strategy around what should be easy for users to do. PWAs still lack native capabilities and are obviously shortcuts to Chrome, and Google pushes developers to Trusted Web Activities which need to be published on the Play Store or sideloaded.

But these developer verification policies don't make any exceptions for permission-light apps, nor do they make it harder to sideload apps which request dangerous permissions; they just identify developers. I also suspect that making developer verification dependent on app manifest permissions opens up a bypass, as the package manager would need to check both on each update instead of just on first install.


> But it should be very hard/expensive for a malware author to anonymously distribute an app with the permission to intercept texts and calls.

And how hard/expensive should it be for the developer of a legitimate F/OSS app to intercept calls/texts?


Yep, I have a legitimate use case for exactly this. It integrates directly with my application and gives it native phone capabilities that are unavailable if I were to use a VoIP provider of any kind.

As a legitimate developer developing an app with the power to take over the phone, I think it's appropriate to ask you to verify your identity. It should be an affordable one-time verification process.

This should not be required for apps that do HTTPS requests and store app-local data, like 99%+ of all apps, including 99% of F-Droid apps.

But, in my opinion, the benefit of anonymity to you is much smaller than the harm of anonymous malware authors coaching/coercing users to install phone-takeover apps.

(I'm sure you and I won't agree about this; I bet you have a principled stand that you should be able to anonymously distribute malware phone-takeover apps because "I own my device," and so everyone must be vulnerable to being coerced to install malware under that ethical principle. It's a reasonable stance, but I don't share it, and I don't think most people share it.)


I think you read a bit too much into my message. I agree, it's complicated, I don't want my parents and grandparents easily getting scammed.

But yes they are my devices, and I should be able to do exactly what I want with them. If I'm forced to deal with other developers incredibly shitty decisions around how they treat VoIP numbers, guess who's going to have a stack of phones with cheap plans in the office instead of paying a VoIP provider...

But no, I have no interest in actually distributing software like that further than the phones sitting in my office.


For a security-sensitive permission like intercepting texts and calls, I'm not sure it makes sense for that to be anonymous at all, not even for local development, not even for students/hobbyists.

Getting someone to verify their identity before they have the permission to completely takeover my phone feels pretty reasonable to me. It should be a cheap, one-time process to verify your identity and develop an app with that much power.

I can already hear the reply, "What a slippery slope! First Google will make you verify identity for complete phone takeovers, but soon enough they'll try to verify developer identity for all apps."

But if I'm forced to choose between "any malware author can anonymously intercept texts and calls" or "only identified developers can do that, and maybe someday Google will go too far with it," I'm definitely picking the latter.


The scam only has to work on a tiny slice of users, and the people who fall for fake bank alerts or package texts will march through a pile of Android warnings if the script is convincing enough. Once the operator gets them onto a PC, the whole thing gets easier because ADB turns it into a guided install instead of a phone-only sideload.

That's why I don't think the extra prompts matter much beyond raising attacker cost a bit. Google is patching the visible path while the scam just moves one hop sideways.


> Honestly, if coerced sideloading is a real attack vector, [...]

I don't believe that it is. I follow this "scene" pretty closely, and that means I read about successful scams all the time. They happen in huge numbers. Yet I have never encountered a reliable report of one that utilized a "sideloaded"[1] malicious app. Not once. Phishing email messages and web sites, sure. This change will not help counter those, though.

I don't even see what you could accomplish with a malicious app that you couldn't otherwise. I would certainly be interested to hear of any real world cases demonstrating the danger.

[1] When I was a kid, this was called "installing."


This is the thing that bothers me the most about this. It is as if even the HN crowd is taking it as given that malware is this big problem for banking on Android, but in reality there seems to be very little evidence to back this up. I regularly read local (Finnish) news stories about scams, and they always seem to be about purely social engineering via WhatsApp, or the scammer calling their number and convincing the victim they are a banking official or police, etc.

That's why I'm inclined to believe Google is just using safety as an excuse to further leverage their monopoly.

