I'll just echo here what I say whenever Wayland/X11 comes up: multiple monitors and screen tearing.
Wayland rocks those two challenges. No screen tearing, I can pick scaling for individual monitors on the fly, and the colors look gorgeous.
There were features that it was missing in previous releases -- I have to use a pretty recent version of Fedora to have all the bugs I need shaken out of KDE so that I can use Wayland -- but it's really going well now. Zoom works thanks to PipeWire, copy and paste works. I hope KFonts will start working soon, but I've been able to work around that with fc-cache.
You are talking about a very rare niche case where one monitor runs with VRR and the other doesn't. Everything else is handled just fine with X.
> screen tearing.
"No screen tearing" just means forced vsync, which is easily possible on X11 with a configuration switch or by using a compositor. Actually, forced vsync is one of the great disadvantages of Wayland, because it always comes at the cost of higher latency.
> Wayland rocks those two challenges.
And it sucks at every other challenge. Most importantly, standardization and development. The Wayland API ecosystem is ultra developer-unfriendly and complicated, and it will pose serious harm to the FOSS landscape, which thrives on hobbyist and niche applications. It's so bad it wouldn't be far-fetched to call it sabotage. (E.g. look at the hello world: https://github.com/emersion/hello-wayland/blob/master/main.c -- it's a complete mess)
Not really that niche when you have a 4K laptop hooked up to the 1080p monitors that you were able to buy because it was on your own dime and you wanted to go the affordable route. Every time I do that on X, the only thing I can do is downscale my 4K laptop screen to 1440p; I can't set a fractional scale for it while still preserving reasonable performance.
And no one cares about standardization and development. They just want to use their blasted laptop. I don't know about any of that stuff, I just know Wayland works better out of the box.
The basic idea here is that you provide some callback functions that handle events, set up a framebuffer with mmap, and register your surface and framebuffer with the compositor. There is nothing complicated or messy going on here.
This is also the low-level interface; most people will not be using this directly and will instead be using a GUI library like GTK or whatever.
I don't know very much C, and I have been able to get a "hello world" Wayland window set up and displaying some silly graphics with cairo in a few hours or less.
For my personal taste it's already a mess, but for the average masochist it's maybe bearable. The real mess starts from here, though:
- Wayland cannot render strings. You need something external just to render any text other than "hello world". And depending on the library, text rendering will look different for each application.
- Wayland can and will block your render loop for arbitrary reasons (like being minimized), potentially indefinitely. In order to make this application useful you have to put everything concerning rendering in its own thread, which complicates things hugely.
- If you want basic functionality like capturing the screen (formerly a simple XGetImage() call), you have to talk to dbus and PipeWire, which pull in all kinds of dependencies and require loads of parallel infrastructure to be running. And then you still have no guarantee that it works on every compositor.
To me these are all advantages, not disadvantages.
Wayland is not X11 and doesn't want to be, it doesn't implement even a fraction of what X11 did and that is an intentional design choice!
Wayland does not provide any graphics drawing primitives at all. It only sets up a handle to some shared memory and expects the client to do all drawing.
I think the point here is that you shouldn't be using the display protocol to render text in the first place. If you want to draw text you use a library that is designed for that task rather than shoving that functionality into the compositor and into the Wayland protocol itself.
>- Wayland can and will block your render loop for arbitrary reasons (like being minimized), potentially indefinitely. In order to make this application useful you have to put everything concerning rendering in its own thread, which complicates things hugely.
You don't have to use a thread to do this! You aren't forced to block and wait for the callbacks, you can implement your own event loop and check for messages when you want to!
>- If you want basic functionality like capturing the screen (formerly a simple XGetImage() call), you have to talk to dbus and PipeWire, which pull in all kinds of dependencies and require loads of parallel infrastructure to be running. And then you still have no guarantee that it works on every compositor.
I think it's good that clients can't capture the screen without permission, and I think using dbus for IPC is very reasonable, but I don't know anything about pipewire or the compatibility stuff.
Also I think it's fair to not like some of these design choices for sure, but it would probably be better for people who like X11's design to continue to use X11 rather than trying to force Wayland into an X11 clone. X11 will be around for a very long time so I don't think anybody will be forced to stop using it any time soon, and people can pick it up if development stops!
Wayland is a display server. Display servers in general do not render strings; they render bitmaps passed in by the clients. Clients are welcome to use their favorite text-rendering libraries to draw into those bitmaps.
> Wayland can and will block your render loop for arbitrary reasons (like being minimized) potentially indefinitely.
Do you realize that all Wayland calls are async? On the client side, you don't have to block waiting on an empty socket if you don't want to.
A few comments above, someone complained that a basic Wayland client is complicated. The complicated thing was... setting up an event loop.
> If you want basic functionality like capturing the screen (formerly a simple XGetImage() call), you have to talk to dbus and PipeWire, which pull in all kinds of dependencies and require loads of parallel infrastructure to be running.
Yes; but these dependencies are in different processes, not yours. From the POV of your process, it is just structured IPC.
XGetImage() is not that simple; it only worked coincidentally, due to an implementation detail. Having a global framebuffer is not mandatory; it just happened that at the time all PCs had one. Nowadays, even PC hardware is getting overlays, so there might not be a buffer that represents what's on the display anymore.
Another issue is that it is impossible to implement zero-copy screen casting with XGetImage(). It is possible, and has been done, with Wayland and dmabuf: feeding the screencasted surface into a hardware video encoder (without the buffer bouncing between system and GPU RAM several times), with userspace getting an already-compressed video stream.
A final issue is that XGetImage() is not gated behind user permission and provides no indicator that the screen was grabbed or is being cast. Wayland does.
Last I knew, multi-monitor with mixed DPIs (an increasingly common situation with laptops standardizing around HiDPI displays) was messy under X, with xorg.conf fiddling required to get things working well.
This is not a problem with X. DPI and pitch information is accessible via the xrandr extension. It's GNOME and GTK that arbitrarily choose to ignore that information.
The problem with X is that all X11 screens have to have the same DPI (and a few other things, like color depth). Which is quite a problem when you have mixed-DPI displays.
Now, separate X11 screens do not have this limitation, but they have other ones: like not being able to move your window from one to another without restarting the app. Historically, there was only one app capable of changing the display without a restart: XEmacs. For all the others, I doubt users would accept such limitations.
Now, both Xinerama and Xrandr standardized on using a single X11 screen for multi-monitor setups for exactly this reason. With displays of roughly the same DPI, and graphics becoming truecolor, it wasn't really a problem. But nowadays, it is.
That is mixed-DPI hardware. Another issue with your argument is that "DPI and pitch information is accessible via the xrandr extension". Yes, it is, that is true. The problem is that it is a one-way street. The client can read that, and it can respect it and adjust its rendering... but it doesn't have to. The client is free to ignore all of it. Now the display server has a problem: it doesn't know what the client decided. Should it upscale the client or not? Nobody knows; all the display server has is an opaque bitmap.
With Wayland, the client must explicitly set the scale and communicate it to the display server, so the compositor knows how to handle that surface. It can even transparently downscale the surface when the user moves it from a HiDPI to a normal-DPI display. Something impossible on X11.
To be fair, I actually ran a system recently (Surface Tablet with integrated Intel graphics) where the screen tearing on X just wasn't solvable. Neither with settings in the xorg.conf nor with a compositor could I get Youtube videos to not tear on fast movements. If Wayland really does solve that for those machines - while being deactivatable where it counts - that would be a big plus even for me.
Currently you can't turn off vsync on Wayland, except in some limited cases involving full-screen apps with direct scanout, which may not even be implemented in your compositor or may not be possible due to a missing implementation of async page flips with atomic modesetting.
Due to design issues relating to implicit sync in Wayland, any misbehaving app can cause your entire desktop to drop frames, so if you're a multi-monitor user, expect stutters when the browser you have open in the background was a little too slow. Or worse, if an app is badly broken [1].
The good news is that all of this is fixable. Windows has forced compositing for most cases and it performs much more consistently. A browser taking a while to render can't hang unrelated windows' composition. Wayland compositors can get there too eventually by moving to explicit sync APIs [2].
For now if you do care about these issues staying on compositorless X is the least bad option.
Oh my goodness. Is that what's been going on? I've noticed that when I open one particular browser on my desktop, the entire display freezes for roughly a second before it becomes responsive again.
Wayland was never going to be fully Xorg compatible. It's not just that it would be a massive effort, but it conflicts with core Wayland concepts relating to isolation and security. If keystroke access and window properties were still a free-for-all like on x11, we'd be back around to building on an imperfect protocol. Distros and desktops can build around those insecure concepts if they want (KDE has options for some of them), but it doesn't make sense to include it as part of the protocol.
The discussion is circular, and it ultimately amounts to a lot of dissatisfaction on either side. People should use what they want to use and support what they're capable of supporting. Neither x11 nor Wayland is going away, so we need less of the "make Wayland more like x11" and "make x11 more like Wayland" bandwagoning.
I have to admit that I find the isolation and security design to be rather strange. Isolating graphical applications requires a lot of pieces, one of which (what Wayland did) is preventing them from poking at other applications via the GUI. It requires isolating them in the backend (out of scope for Wayland, but Flatpak is at least trying). But it also requires preventing them from spoofing each other and thus deliberately confusing the user. This seems like it needs UI enforced on top of the isolated applications, which means drawing them in a box (like a nested compositor, at which point none of what Wayland did to isolate applications matters) or enforcing informative window decorations. And that part seems like it requires server-side decorations, but Wayland is allergic to SSD.
So I don’t get it. How exactly is the core Wayland protocol a good base for the GUI parts of isolation?
You can render decorations server-side, it's just not guaranteed that the client will respect it. If you really want a Qubes-style SSD desktop, it's attainable in Wayland although it will look incredibly ugly and be highly redundant. Good luck pitching that to GNOME and KDE devs as a default.
So... I don't see how the isolation design is strange. Wayland makes sure that windows are individually isolated, and Flatpak/Bubblewrap isolates the backend and provides interaction portals. It's not a perfect solution, but it does stop your timer application from being a secret keylogger. If your biggest concern is a Trojan horse attack, it sounds to me like Wayland did what it set out to do.
> Wayland was never going to be fully Xorg compatible.
Maybe not, but the fact that Wayland doesn't support an important use case for me is why, regardless of any benefits Wayland may have over X, I won't be using Wayland.
> Neither x11 nor Wayland is going away
I sure hope this is the case. I don't care if Wayland exists, I worry that it will become the only realistic option.
> regardless of any benefits Wayland may have over X, I won't be using Wayland.
That's fine. I don't really know what your 'mystery feature' is, but I feel pretty certain it's on a Wayland roadmap somewhere. The same cannot be said for new features in x11.
> I don't care if Wayland exists, I worry that it will become the only realistic option.
It is the only realistic option, if you care about security and isolation. x11 is very flexible and fun, but it's not surprising that the people taking the Linux desktop seriously are pushing for Wayland. It sucks that you're unable to use it for whatever reason, but people aren't going to reallocate development resources to give a dying protocol new features.
> people aren't going to reallocate development resources to give a dying protocol new features.
Which is actually fine by me. I don't need it to have new features.
My point isn't that X is better or worse than Wayland -- clearly the answer to that question is "it depends". I was just expressing the concern that Wayland may eventually become mandatory.
I respect their internals discussion, but they had only three user-facing issues & I think all of them have been either fully or partially addressed. Screen recording, for example, has been reasonably well addressed for well over a year, and protocols for it have existed for a while.
Also, it is against the rules to backhandedly ask "Did you read the article?" Please read the guidelines.
Except if you’re an nVidia user which I’m guessing you’re not. Multiple monitors (even single monitor) and graphical glitches and tearing all over the place.
As a developer with a Linux laptop (ThinkPads, etc.) since forever, I have been avoiding nVidia like the plague since switcheroo became a thing more than a decade ago. Every time I tried, the user experience with (discrete) nVidia cards, as someone who isn't a single-screen gamer, has been a horrible waste of battery. Intel and AMD GPUs getting so much better has been a total blessing. Being able to just connect an external monitor/projector with different scaling without a hassle is very liberating in a professional setting.
The latest 545 Nvidia driver is so broken that I had to downgrade back to 535. A huge amount of Linux desktop issues can be traced to Nvidia and their drivers.
Maybe although I haven’t experienced issues that appear to be Nvidia driver issues per se (ie I’ve installed older ones and newer ones and all issues remain). I’m on a desktop 2080 so that may be it. Haven’t noticed anything especially bad with 545. What kinds of issues have you noticed?
There's an example right above you of NVIDIA's explicit-sync patches being rejected because AMD and Intel aren't ready to move on that piece of functionality yet, despite implicit sync resulting in higher latency and a worse experience for users, and despite explicit sync being generally agreed to be the direction development should go, which is what NVIDIA has implemented. And this follows the overall hostility over EGLStreams or whatever it was.
They're doing the code and releasing the patches, they largely capitulated on whatever the EGLStreams issue was supposed to be about, and they are leading in what everyone involved seems to agree is "the right direction", yet the linux community still can't find its way to "yes". At some point you have to start looking at it as not being entirely NVIDIA's fault - "fuck you, NVIDIA" is no longer an expression of how difficult they are to work with, but rather a generalized expression of hostility from the linux community over perceived historical slights etc.
Like, if there's some big history of warfare between IBM and the kernel team, are you just going to reject any patch coming from IBM because you don't like the company, even when it's generally agreed that the patch is generally right and going in the consensus direction, just because you don't like the name on the signoff? That seems to be where the linux community is at these days.
The whole "fuck you, nvidia!" thing has always been noxious, that's an example of linus's personal toxicity that a certain segment of the linux community has eagerly celebrated. At best it is a thought-terminating cliche that contributes nothing and drags down the debate, and at worst it's given covering-fire for that segment to act out in their own immature, negative ways, to the detriment of the users.
It's not a good look, it's just immature, and at this point it's practically expected from the kernel team. Look who's in charge. And it's not just linus, Greg KH seems pretty bad too, from a casual observer every time I see his name come up it's some childish troublemaking like the symbols-relicensing warfare thing. He's personally made my life worse on multiple occasions, breaking shit that is working for reasons I don't really care about. And I know people think it's for a good cause, but at the end of the day users don't really care about that shit.
The BSD/Unix community is generally a lot more adult and mature, and tbh avoiding linux hi-jinx is a great reason to go with a macbook these days. It doesn't have to be this way, the personality-driven shit doesn't exist in most other spaces and they are better for it. The Linux kernel team is almost uniquely bad about this, and it's largely due to poor leadership and toxic individuals at the top that then bleeds into these adjacent spaces and venues. The wayland people feel empowered to act out when the head honcho publically tells a major partner to go fuck themselves, and so on.
It kinda is what it is - as long as there are children at the helm, there are gonna be issues in linux-land.
* Excessive terminal use
* Code authoring (VSCode/Vim)
* Web browsing/app usage in Firefox/Google Chrome
* Screen sharing and recording (browser/OBS)
* Electron apps including Slack, Element, and Discord
* Occasional use of Krita^
^ Krita has a known issue with (X)Wayland, canvas acceleration, and tablet input.
I decided to give Wayland on my workstation another go when the 545 series dropped. Hilariously, the main reason being the underlying features added that enabled support for GNOME's Night Light feature.
Other than the first day of going through my apps and making sure things worked, I haven't had any real issues. For the work I do, Wayland more than covers my needs.
I have gotten a smidgen of cursor glitching after upgrading from the released 545.09.02 NFB driver to the beta-NFB 545.09.06 version. It only happens when GNOME switches to its "loading" cursor, but it's mostly been ironed out by now. I updated the driver as part of the system upgrade from Fedora 38 to 39, so that probably had a hand in causing it.
Can't say I've seen that one. I have seen Chrome have some windowing issues, but nothing that would drive me crazy. I've also added "--gtk-version=4" to my launcher to support alternate inputs (e.g. typing booster).
I think it should be a point of pride for the linux developer community that nvidia cards work as well as they do seeing as the company is actually hostile.
The only solution for the end user is to vote with their wallets if running linux and/or wayland is important to them.
Bringing it as an argument _against_ wayland, however, is missing the point entirely in my opinion.
Well the bug that I’m experiencing is specific to Xwayland and seems to be a bug where the wayland developers are refusing to take Nvidia’s patches to fix the issue. So maybe it is wayland and not the drivers? Nvidia and Google seem to have a decent relationship working together to resolve issues for Linux Chrome Wayland.
What patches, and why? The other two major GPU companies (Intel and AMD) don't need to send any patches to make their GPUs work perfectly on Wayland. Why does Nvidia need to? Isn't it their fault that their driver system is flawed?
Today Xwayland relies on implicit synchronization. That's apparently a legacy mechanism. Nvidia wants to add explicit synchronization support, which it already has, as that sounds like the future, whereas AMD and Intel aren't ready for it. Xwayland devs claim the ecosystem isn't ready, and yet if a simple patch fixes the issue for Nvidia, and the Xwayland folks agree GPUs will eventually use explicit synchronization, it seems like the Wayland folks are the ones dragging their feet.