Rolling your own UI in OpenGL is very doable and can make a lot of sense for any application that requires non-standard custom widgets. Use your preferred TrueType rendering library to generate your text textures; Blender uses FreeType, I think. Widgets can all be done with vectors and gradients: start with functions that create primitives such as rounded boxes or different line types and build from there.
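To make the "start with primitive builders" idea concrete, here is a minimal sketch in C++ of a function that emits a triangle-fan vertex list for a rounded box. All names are illustrative, and uploading the vertices to a VBO and shading them (flat color or gradient) is assumed to be handled by your own renderer.

```cpp
// rounded_rect.cpp -- sketch of a primitive builder for a hand-rolled GL UI.
// Generates a triangle-fan vertex list for a rounded rectangle; feeding the
// result into a VBO and drawing it with your own shader is left to your
// renderer. Names here are illustrative, not taken from any real codebase.
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// Returns vertices for GL_TRIANGLE_FAN: first the center, then the outline
// walked counter-clockwise, with `segments` points per rounded corner.
std::vector<Vec2> roundedBox(float x, float y, float w, float h,
                             float radius, int segments = 8) {
    std::vector<Vec2> v;
    v.push_back({x + w * 0.5f, y + h * 0.5f});  // fan center

    // Corner arc centers in counter-clockwise order, with the starting
    // angle of each 90-degree arc.
    const Vec2 centers[4] = {
        {x + w - radius, y + h - radius},  // top-right
        {x + radius,     y + h - radius},  // top-left
        {x + radius,     y + radius},      // bottom-left
        {x + w - radius, y + radius},      // bottom-right
    };
    const float startAngle[4] = {0.0f, 1.5708f, 3.1416f, 4.7124f};

    for (int c = 0; c < 4; ++c) {
        for (int s = 0; s <= segments; ++s) {
            float a = startAngle[c] + (1.5708f * s) / segments;
            v.push_back({centers[c].x + radius * std::cos(a),
                         centers[c].y + radius * std::sin(a)});
        }
    }
    v.push_back(v[1]);  // close the fan on the first outline vertex
    return v;
}
```

From there, gradients, borders, and line styles are just more such builders feeding the same renderer.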
Having done both web and GL UX for a living, I think you might be overestimating the complexity of a GL implementation and underestimating the complexity of meeting the same specs with a web implementation.
While not quite the same thing, if you have the time, dip your toes into some immediate-mode UI, for example Dear ImGui. It is enjoyable, not a grind.
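For a taste of what that immediate-mode style looks like, here is a small hypothetical per-frame panel using the Dear ImGui API. It assumes a platform/renderer backend (e.g. GLFW + OpenGL3) has already been initialized and NewFrame() has been called; only the per-frame widget code is shown, and the variable names are made up.

```cpp
// imgui_taste.cpp -- the flavor of immediate-mode UI referred to above.
#include "imgui.h"

void drawToolPanel(float& brushSize, bool& wireframe) {
    ImGui::Begin("Tools");                        // a window, created on the fly
    ImGui::Text("No retained widget tree here");  // widgets exist only per frame
    ImGui::SliderFloat("Brush size", &brushSize, 1.0f, 100.0f);
    ImGui::Checkbox("Wireframe", &wireframe);
    if (ImGui::Button("Reset")) {                 // true on the frame it's clicked
        brushSize = 10.0f;
        wireframe = false;
    }
    ImGui::End();
}
```

Because the whole UI is re-declared every frame, there is no separate event-listener plumbing to maintain; the `if (Button(...))` pattern is the event handling.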
Never did they say that they would implement the UI of Blender (I assume that's what you were referring to) using OpenGL. I think what they meant was that implementing just a UI in OpenGL isn't as hard as the other guy thought.
Do you think Wasm has any chance of replacing the whole HTML/CSS stack with just WebGPU in a canvas? I have been playing around a bit with wgpu in Rust, and I can compile the same project either as a native binary or as a .js that just renders to the browser. It seems to work pretty well. Photoshop seems to be runnable in the browser now, and I've seen a lot of other cool stuff, but things like fluid simulations still seem to be very laggy.
Rendering everything into a canvas will realistically mean a total lack of accessibility features. Also, you won't be able to use the DOM inspector. I don't think that would be an improvement at all. If one could work with the DOM via Wasm (plus source maps so you could still use the debugger), it might be something.
I have an OpenGL personal project whose UI I'm struggling with (currently using Qt5 with a GLCanvas and buttons around it). If I wanted to switch to a GL-only UI (and discard Qt altogether), how would I add event listeners? I can get a borderless window as a GL viewport, but I don't know how to detect clicks and match them to the GL object that was clicked.
You have a click at pixel 43, so what did you draw between pixels 40 and 50? That's what they clicked. You have to know what's on the screen, but you should know this, because you put it there. Or just use Dear ImGui.
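A minimal sketch of that idea, assuming you record a rectangle per widget as you draw and that your windowing layer (GLFW, SDL, raw Win32, etc.) hands you the click coordinates. The types and function names here are purely illustrative.

```cpp
// hit_test.cpp -- "you put it there, so you know what was clicked."
#include <string>
#include <vector>

struct Rect { float x, y, w, h; };

struct DrawnWidget {
    std::string id;
    Rect bounds;   // screen-space rectangle this widget was drawn into
};

// Rebuilt every frame alongside the draw calls (clear it before drawing).
std::vector<DrawnWidget> gDrawList;

void recordWidget(const std::string& id, Rect r) {
    gDrawList.push_back({id, r});
}

// Called from your mouse-button callback. Iterate back to front so the
// most recently drawn (topmost) widget wins on overlap.
const DrawnWidget* hitTest(float mx, float my) {
    for (auto it = gDrawList.rbegin(); it != gDrawList.rend(); ++it) {
        const Rect& r = it->bounds;
        if (mx >= r.x && mx <= r.x + r.w && my >= r.y && my <= r.y + r.h)
            return &*it;
    }
    return nullptr;  // clicked empty space / the 3D viewport
}
```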
There’s so much more than that though. You need to build your own accessibility tree and hooks into the OS’s assistive tech infrastructure just for a start.
I know it's not just screen readers, but what a11y concerns would make sense for Blender? It barely uses audio, and what it does use can probably be covered by external tooling. For visuals, the only issues that occur are probably covered by Blender's UI scaling/zoom. But I could easily be forgetting something or be ignorant of it.
Edit: Oops, somehow forgot input - but there again, I would naively expect most things to work via keyboard/mouse emulation, and beyond that you'd probably need custom integration, but it's got the Python hooks to facilitate that.
What comes to mind is someone with a tremor who is unable to use classic pointing devices and might have better luck using tab/arrow-key navigation to move through the buttons/menus/etc. From my cursory examination of the product I don't see much support for keyboard navigation, though as a professional tool I'm sure there's a plethora of keyboard shortcuts one could learn.
Good point, I'd never noticed that before - menus seem fine once you get them open, but I can't find a way to open any menu without clicking, and e.g. the preferences pane does seem completely impossible to navigate via keyboard. So yes, I agree that that appears to be a downside of their own toolkit.
Though I think all, or pretty much all, menu items can be accessed by pressing space and then typing the name of the item (if you use the setting that maps space to command search). There is an autocompletion list and it remembers the last action. That is even better than what many other GUI applications do, where you have to dig for ages through deeply nested menus for an action whose name you know but whose location you don't. Quite frankly, every program should have that feature.
FWIW, you need to implement that yourself with basically any sufficiently advanced toolkit. Even in HTML-land, any list widget worth its salt handles keystrokes itself.
> As a multi platform OpenGL app everything we draw is quite hidden from screen readers. Without cross-platform open source libraries available I can’t think of a feasible way of interacting with existing screen readers.