I wonder, seeing the immense growth in 2023/2024, how that correlates with the Ladybird project, which officially started in 2024.
Could Manifest v3 be the reason we have so much fresh air blowing through the browser ecosystem, or does it just stem from a general unhappiness with said ecosystem?
I think that Ladybird has driven a lot of the effort; otherwise we'd just see browsers continuing to use Chromium, with backports being worked on to keep v2 alive.
Ladybird was already progressing rapidly within SerenityOS well before it was officially launched, and I think that's given people a new inspiration for how plausible it is to create a browser from scratch. I'm really pleased we're seeing Servo having a resurgence too.
It’s indeed rapidly progressing feature-wise, but I have yet to see an explanation for how they intend to manage security once market adoption happens.
Ladybird is written in C++, which is memory-unsafe by default (unlike Rust, which is memory-safe by default). Firefox and Chrome also use C++, and each of them has 3-4 critical vulnerabilities related to memory safety per year, despite the massive resources Mozilla and Google have invested in security. I don’t understand how the Ladybird team could possibly hope to secure a C++ browser engine, given that even engineering giants have consistently failed to do so.
> Firefox and Chrome also use C++, and each of them has 3-4 critical vulnerabilities related to memory safety per year, despite the massive resources Mozilla and Google have invested in security.
And part of Firefox's/Chrome's security effort has been to use memory-safe languages in critical sections like file format decoders. They're far too deeply invested in C++ to move away entirely in our lifetimes, but they are taking advantage of other languages where they feasibly can, so writing a new browser in pure C++ is a regression from what the big players are already doing.
I just checked out Servo, and like all browsers it has a VERY large footprint of dependencies (notably GStreamer/GObject, libpng/jpeg, PCRE). Considering browsers have quite decent process isolation (the whole browser process vs. heavily sandboxed renderer processes), I wonder how tangible the Rust advantage turns out to be.
I just looked at the top CVEs for Chrome in 2025. There are 5 which allow escaping the sandbox, and the top ones seem to be V8 bugs where the JIT is coaxed into generating exploitable code.
One seems to be a genuine use-after-free.
So I can echo what you wrote about the JS engine being most exploitable, but how is Rust supposed to help with generating memory-safe JITed code?
I know they have said that. But it feels a bit strange to me to continue to develop in C++ then, if they eventually will have to rewrite everything in Swift. Wouldn't it be better to switch language sooner rather than later in that case?
Or maybe it doesn't have to take so much time to do a rewrite if an AI does it. But then I also wonder why not do it now, rather than wait.
That is the plan, but they are stalled on that effort by difficulties getting Swift's memory model (reference counting) to play nice with Ladybird's (garbage collection).
I think there was some work with the Swift team at Apple to fix this but there haven't been any updates in months
I know that that’s the plan, but I believe it when I see it. Mozilla invented entire language features for Rust based on Servo’s needs. It’s doubtful whether a language like Swift, which is used mostly for high-level UI code, has what it takes to serve as the foundation of a browser engine.
The increased activity came from Igalia who started working on Servo in 2023 with support from the Linux Foundation. Prior to that the project was effectively dead in the water with no sponsored development.
But the question still remains, why did Igalia pick up a dead project?
I doubt you'd invest that kind of money/time into a project without a good reason. I am not saying that Ladybird or Manifest v3 are the reason, I just notice a lot of new energy in the not-just-Chrome category and wonder what the other reasons might be.
Andreas Kling is pretty open about his reasons for starting the Ladybird project, and I only know Servo from his monthly videos and a few other side notes, so I was surprised that it gained so much traction after being basically dead.
> But the question still remains, why did Igalia pick up a dead project?
Igalia is generally pro open-source, and Servo certainly aligns with their ethos, but a lot of the money came from Futurewei / Huawei who are interested in Servo because it's not US based, and therefore they are actually able to contribute to it (they are effectively banned from contributing to Chrome/Firefox/Safari due to US sanctions). There is now also funding from the Sovereign Tech Fund who are also interested in a "European browser" (and NLnet, but they fund all sorts of things)
As I understand it, funding was provided by NLnet[1], a longstanding Dutch non-profit that focuses on supporting open internet technologies. The funding was provided specifically for reviving Servo. By the looks of it, the money itself mostly comes from the EU, which has various grant programmes to fund open access technology, digital sovereignty, etc. Given several Servo contributors worked for Igalia, I expect they submitted a proposal to NLnet for them to fund Servo development, and it was successful.
Awesome! Something that I had on my todo list the past couple of months, because I also switched from YT Music to Navidrome + Bandcamp. Feels great to own your music again.
There was a discussion here on HN about OpenAI and its privacy. Same confusion about e2ee. Users thinking e2ee is possible when you chat with an AI agent.
>Users thinking e2ee is possible when you chat with an ai agent.
It shouldn't be any harder than e2ee chatting with any other user. It's just instead of the other end chatting using a keyboard as an input they chat using a language model to type the messages. Of course like any other e2ee solution, the person you are talking to also has access to your messages as that's the whole point, being able to talk to them.
I do not think this matches anyone's mental model of what "end-to-end encrypted" for a conversation between me and what is ostensibly my own computer should look like.
If you promise end-to-end encryption, and later it turns out your employees have been reading my chat transcripts...
I'm not sure how you can call chatgpt "ostensibly my own computer" when it's primarily a website.
And honestly, E2EE's strict definition (messages between user 1 and user 2 cannot be decrypted by message platform)... Is unambiguously possible for chatGPT. It's just utterly pointless when user2 happens to also be the message platform.
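To make the "user2 is also the platform" point concrete, here's a toy sketch (an XOR one-time pad standing in for a real cipher; not actual E2EE machinery, and the names are made up for illustration): the transport only ever sees ciphertext, but the second end, which here is the model operator itself, holds the key and decrypts as a matter of course.

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # One-time-pad XOR; stands in for a real E2EE cipher.
    return bytes(a ^ b for a, b in zip(data, key))

# The two "ends" share a key, exactly as in any E2EE chat.
key = secrets.token_bytes(64)

# End 1: the user encrypts a prompt.
prompt = b"my private question"
ciphertext = xor(prompt, key)

# The transport layer (the "message platform") sees only ciphertext...
assert ciphertext != prompt

# ...but end 2 is the model operator, holding the other copy of the key.
# Decrypting at your peer's end is the whole point of E2EE, so when the
# peer IS the platform, the platform reads the plaintext regardless.
assert xor(ciphertext, key) == prompt
```

So the strict definition is satisfied while the guarantee users actually care about is not.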
If you message support for $chat_platform (if there is such a thing) do you expect them to be unable to read the messages?
It's still a disingenuous use of the term. And, if TFA is anything like multiple other providers, it's going to be "oh, the video is E2EE. But the 5fps 'non-sensitive' 512×512px preview isn't."
> it's primarily a website … unambiguously possible[sic] for chatGPT … happens to also be the message platform
I assume you mean impossible, and in either case that’s not quite accurate. The “end” is a specific AI model you wished to communicate with, not the platform. You’re suggesting they are one and the same, but they are not and Google proves that with their own secure LLM offering.
But I’m 100% with you on it being a disingenuous use.
No, no typo. The problem with ChatGPT is that the third party that would be attesting that's how it works is just the 2nd party.
I'm not familiar with the referenced Google secure LLM, but offhand, if it's TEE-based, Google would be publishing auditable/signed images and Intel/AMD would be the third party attesting that's what's actually running. TEEs are way out of my expertise though, and there's a ton of places and ways for it to break down.
> And honestly, E2EE's strict definition (messages between user 1 and user 2 cannot be decrypted by message platform)... Is unambiguously possible for chatGPT. It's just utterly pointless when user2 happens to also be the message platform.
This is basically the whole thrust of Apple's Private Cloud Compute architecture. It is possible to build a system that prevents user2 from reading the chats, but it's not clear that most companies want to work within those restrictions.
> If you message support for $chat_platform (if there is such a thing) do you expect them to be unable to read the messages?
If they marketed it as end-to-end encrypted? 100%, unambiguously, yes. And certainly not without me, as the user, granting them access permissions to do so.
If you have an E2EE chat with McDonalds, you shouldn't be surprised that McDonalds employees can read the messages you've sent that account. When messaging accounts controlled by businesses then the business can see those messages.
This is why I specified "mental model". Interacting with ChatGPT is not marketed as "sending messages to OpenAI the company". It is implied to be "sending messages to my personal AI assistant".
Yeah of course, technically that is true. Still, when talking about e2ee in any context, it implies to the non-technical user: the company providing the service cannot read what I am writing.
That's not given in any of those examples. In the case of ChatGPT and this toilet sensor, e2ee is almost equivalent to 'we use https'. But nowadays everybody uses https, so it does not sound as good in marketing.
Yes but National Security Letters make that pointless. You can't encrypt away a legal obligation. The point of e2ee is that a provider can say to the feds "this is all the information we have", and removing the e2ee would be noticed by security researchers.
If the provider controls one of the ends then the feds can instruct them to tap that end and nobody is any the wiser.
The best you can do is either to run the inference in an outside jurisdiction (hard for large scale AI), or to attempt a warrant canary.
> Yes but National Security Letters make that pointless
It seems ridiculous to use the term "national security letter" as opposed to "subpoena" in this context, there is no relevant distinction between the two when it comes to this subject. A pointless distraction.
> You can't encrypt away a legal obligation.
Of course you can't. But a subpoena (or a NSL, which is a subpoena) can only mandate you to provide information which you have within your control. It can not mandate you to procure information which you do not have within your control.
If you implement e2ee, customer chats are not within your control. There is no way to breach that with a subpoena. A subpoena can not force you to implement a backdoor or disable e2ee.
I believe we are in agreement. If you are a communication platform that implements e2ee then you provide the guarantee to users, backed by security researchers, that the government can't read their communications by getting a subpoena from the communication platform.
The problem with AI platforms is that they are also a party to the communication, therefore they can indeed be forced to reveal chats, and therefore it's not e2ee because defining e2ee that way would render the term without distinction.
I knew I'd already seen this. Seemed like a great tool then as well as now. Will definitely deploy it on my personal file server. Just haven't gotten around to it.
You cannot compare these examples. There is currently no way to encrypt the user message and have the model on the server read/process the message without it being decrypted first.
Mullvad and E2EE messengers do not need to process the contents of the message on their servers. All they do is pass it to another computer. It could be scrambled binary for all they care.
But any AI company _has_ to read the content of the message by definition of their service.
Lumo never promises encryption while processing a conversation on their servers. Chats HAVE to be decrypted at some point on the server, or sent already decrypted by the client, even when they are stored encrypted.
Read the marketing carefully and you will notice that there is no word about encrypted processing, just storage - and of course that's a solved problem, because it was solved decades ago.
The agent needs the data decrypted, at least for the moment, I know of no model that can process encrypted data. So as long as the model runs on a server, whoever manages that server has access to your messages while they are being processed.
EDIT:
Even found an article where they acknowledge this [0]. Even though there seem to exist models/techniques that can produce output from encrypted messages with 'Homomorphic Encryption' [1], it is not practical, as it would take days to produce an answer and it would consume huge amounts of processing power.
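For the curious, the flavor of "computing on encrypted data" can be shown with textbook unpadded RSA, which happens to be multiplicatively homomorphic. This is a toy with tiny primes (the classic p=61, q=53 textbook numbers), nothing like the lattice-based schemes real homomorphic encryption uses and not secure in any way, but it illustrates how a server could combine ciphertexts without ever decrypting them:

```python
# Toy unpadded RSA: multiplicatively homomorphic. Illustration only.
p, q = 61, 53
n = p * q                           # public modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

m1, m2 = 6, 7
# The server multiplies two ciphertexts without ever decrypting...
c_product = (enc(m1) * enc(m2)) % n
# ...yet the result decrypts to the product of the plaintexts.
assert dec(c_product) == m1 * m2  # 42
```

Fully homomorphic schemes extend this to arbitrary additions and multiplications on encrypted values, which is exactly what makes them so expensive in practice.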
While I can understand that it's frustrating, kernel-level anti-cheat is an abomination in itself and should in no way be supported. It is a security flaw in its own right!
It's also unfortunately impossible to have a good competitive multi-player online experience without kernel-level anti-cheat. It's simply too easy to cheat at many of these games in the absence of strict control measures, and even a single cheater can ruin a game session for every other gamer.
No one reached directly for kernel-level anti-cheat. It was the result of an escalation of the sophistication of cheating solutions.
While you're at it, you might as well downgrade to Win7. No, jokes aside, if all the software you want runs on it, it is fine. Just no security updates, etc.
I would suggest just switching to Linux and using a VM for things that NEED to be Windows. Games that run kernel-level anti-cheat won't run, but tbh those are nothing I would suggest installing anyway.
The big two that spring to mind are online games and Adobe software. I don't think a VM can usually meet the performance needed for either.
I do wish more artists would take a chance on open-source software, but most of the ones I know are still insistent that nothing can ever come close to Adobe. But that's a rant for another time.
Games run pretty great on Linux, but if you do want a VM, passing a graphics card through to that VM via VFIO provides 95%+ of the performance of native.
Virtual reality headsets with dual 4K screens running at 75Hz+ perform well on a Windows VM done that way. A normal flatscreen game is going to be just fine.
I stayed on Win7 until December 2023, when there was some exploit that was attackable by just viewing an image, so just browsing the web would've made you vulnerable (I believe it was in the WebP format).
Although it seems there are people still Frankensteining Win7, and even patching DLLs to make the newest browsers/apps still run on it.
Famously, MS Teams was really screwed up, but I had to use it for work.
The problem is not the lack of patches. The problem is with websites refusing service based on the client's version in the user agent, or breaking by using cutting-edge features without a polyfill.
All the Fairphone versions support /e/OS as far as I know. I have the Fairphone 5 with the current /e/OS version, completely un-Googled. But you also have the option to allow partial Google-fication in /e/OS so you don't miss out on most of the features and paid apps you had.
While I agree with you, I feel like that was not the point the author was making.
It was more so a warning that the combination of little-reviewed community plugins and an unsandboxed macOS binary is a potential risk. And with that sentiment I can also agree.
That was my take too. I am less concerned with an app being simply closed source and much more concerned with closed source coupled with skipping review and the general approved distribution models on the two platforms.