I love KDE, especially since the Plasma 6 release, but man, is the Settings program poorly designed and littered with settings 99% of users will never need.
So many options are placed seemingly at random. Related options like lock screen, login screen, and desktop background settings are spread across three different main categories.
Customization options so extensive and granular one can only wonder about their purpose. Even in their latest release blog post they chose to brag about the new ability to change intensity/thickness of frames. I don't think most people care about stuff like this.
Until recently, defaults were straight-up insane: single-click to open folders/launch programs, inverted touchpad scrolling, etc.
If you navigate to Settings -> Sound you'll be presented with some options but also buttons in the top right that will open a mostly empty screen with a few additional options. Why not split the whole page into parts and present everything on a single screen? Why not tabs?
Sometimes those buttons in the top right behave differently: some open a whole new page, some open just a popup, and others a dropdown.
And oh man, just navigating Settings sucks. The main list mixes single-level and two-level entries, with two-level entries opening another, mostly empty vertical pane, so the actual size of the right pane changes and the top text jumps around depending on what you press. Why do some settings have two levels, some have tabs, and some have those junky top-right buttons that need their own back button to show up in the interface whenever they're pressed? I'm not for or against any of those design choices, but why all of them at random? I just want some goddamn consistency.
Cherry on top is the bloat most distros choose to install alongside Plasma desktop. Dragon Player? kMail? Does anyone even use these? I dislike Gnome a lot but at least their preinstalled software is minimal, elegant and actively supported/developed. Most KDE programs look like they stopped receiving updates in 2008.
I still think it's a great DE but there's much room for improvement.
Can only speak for myself but the problem is that with KDE there's always stuff I need to go in and change because I don't like the defaults, and then I fall into a rabbit hole of endless tweaking from which it's difficult to escape because no matter how much time I spend I can never get it to be just right.
Funny, I feel the same about GNOME. I haven't played with others enough to comment, I suppose, but all of them are missing some basic creature-comfort stuff like a full TCP/IP config dialog or a genuinely fluid, working app store out of the box. Distros add these, but what is going on here?
The thing with GNOME is having to stack a bunch of extensions (most of which will only somewhat meet your needs) to get desired features, half of which will break periodically because there’s no stable extension API.
GNOME and KDE sit on extreme opposite ends of the minimalist/maximalist spectrum.
It's a quality issue in my experience. Nobody ever bothered with polishing the defaults, and the "option bombardment" is really bad, incoherent design rather than simply having too many things.
I remember spending hours customising the KDE 5 task bar clock, trying to correct the padding. Eventually I gave up customising it and switched to GNOME.
KDE app customisation is also a mess compared to something like foobar2000.
The defaults have been polished more times than I can count and virtually every KDE release changes some defaults to be more user friendly. It's been getting better for a long time.
The wealth of things in the KDE settings is mostly stuff people will likely never change, or that can be tweaked but doesn't necessarily need to be. For example, let's look at GNOME's settings app. It has menus and options for everything the average user needs (network settings, mouse and display options, etc.) but leaves out things people need to change for specific workflows (like the option to have focus follow the mouse). A settings app should let the user set the things needed for a computer to work properly while separating deeper-level customization for those who want it.
I think emacs does a very good job at this. You can configure most of the settings people need to be productive in a text editor from the menu bar, while leaving the extremely rich customization of emacs to the options menu and elisp config files.
The better the code is, the less detailed a mental map is required. It's a bad sign if you need too much deep knowledge of multiple subsystems and their implementation details to fix one bug without breaking everything. Conversely, if drive-by contributors can quickly figure out a bug they're facing and write a fix by only examining the place it happens with minimal global context, you've succeeded at keeping your code loosely-coupled with clear naming and minimal surprises.
I think there has always been some truth to that, long before AI. Being driven to get up and just do the thing is the most important factor in getting things done. Expertise and competency are force multipliers, but you can pick those up along the way - I think people who prefer to front-load a lot of theory find this distasteful, sometimes even ego-threatening, but it's held true in my observations across my career.
Yes, sometimes people who barrel forward can create a mess, and there are places where careful deliberation and planning really pay off, but in most cases, my observation has been that the "do-ers" produce a lot of good work, letting the structure of the problem space reveal itself as they go along and adapting as needed, without getting hung up on academic purity or aesthetically perfect code; in contrast, some others can fall into pathological over-thinking and over-planning, slowing down the team with nitpicks that don't ultimately matter, demanding to know what your contingencies are for x y z and w without accepting "I'll figure it out when or if any of those actually happen" - meanwhile their own output is much slower, and while it may be more likely to work according to their own plan the first time without bugs, it wasn't worth the extra time compared to the first approach. It's premature optimization but applied to the whole development process instead of just a piece of code.
I think the over-thinkers are more prone to shun AI because they can't be sure that every line of code was done exactly how they would do it, and they see (perhaps unwarranted) value in everything being structured according to a perfect human-approved plan and within their full understanding; I do plan out the important parts of my architecture to a degree before starting, and that's a large part of my job as a lead/architect, but overall I find the most value in the do-er approach I described, which AI is fantastic at helping iterate on. I don't feel like I'm committing some philosophical sin when it produces some module as a black box and it works without me carefully combing through it - the important part is that it works without blowing up resource usage and I can move on to the next thing.
The way the interviewed person described fast iteration with feedback has always been how I learned best - I had a lot of fun and foundational learning playing with the (then-brand-new) HTML5 stuff like making games on canvas elements and using 3D rendering libraries. And this results in a lot of learning by osmosis, and I can confirm that's also the case using AI to iterate on something you're unfamiliar with - shaders in my example very recently. Starting off with a fully working shader that did most of the cool things I wanted it to do, generated by a prompt, was super cool and motivating to me - and then as I iterated on it and incorporated different things into it, with or without the AI, I learned a lot about shaders.
Overall, I don't think the author's appraisal is entirely wrong, but the result isn't necessarily a bad thing - motivation to accomplish things has always been the most important factor, and now other factors are somewhat diminished while the motivation factor is amplified. Intelligence and expertise can't be discounted, but the importance of front-loading them can easily be overstated.
>Is this supposed to be an implicit dig at audiobooks? The scientific consensus seems to be that there's no difference to comprehension or retention
I wouldn't trust that "scientific consensus" if my life depended on it.
For starters, there's no scientific consensus.
The linked post refers to just two studies, both of doubtful quality. One says "it's no different"; the other says it's worse.
The one that says "it's no different" asked participants to read/listen to a mere two chapters, ~3,000 words in total.
That's a Substack essay or New Yorker article, not a book, and only one text type (a non-fiction historical account; how does that translate to literary, technical, theoretical, philosophical texts, and so on?). The test to check retention was multiple choice, not qualitative comprehension. And there were several other issues besides.
And on the other study in the post, the audio group performed much worse.
The medium feels wholly immaterial in this case. The words reach your brain, and then it's up to you to think about them, imagine the scene, process ideas. Audiobooks let the narrator add inflection, which maybe takes a slight load off you, but I don't see the big deal. I've read lots of fiction, and listened to a lot on road trips, and I don't feel like my comprehension suffered in either case compared to the other. The important thing is you can have the same level of conversation about the material - I don't believe all this woo about reading being the only pure and intellectual way to process information.
Well, we don’t say that “seeing” a theater play is the same as “reading” a theater play - regardless of comprehension or retention - so why should we say that “listening” to a book is the same as “reading” a book?
Drawing these distinctions is complicated by multi-modal consumption. As an avid lifelong reader (nearly a book per week for about 50 years) I greatly enjoy reading on my kindle and seamlessly switching to listening while driving or doing the dishes. With most books these days it's probably 80% reading -- but in the past, when I had a long commute, it was closer to 50/50. When discussing a given book with others, it's practically irrelevant whether I read or listened to the audiobook narration.
As for theater plays, attending a live performance with actors is fundamentally different from reading the script.
I'm pretty sure it will vary a LOT from person to person... I remember what I see very well... what I hear, not nearly as much. I say this because when I was commuting I'd listen to a lot of audiobooks and podcasts... I didn't retain much at all. But I can skim a written article and retain a lot more. Further still, if I literally copy something I see while writing it down, it's hard for me not to remember it. That last bit got me through high school, as I never did any homework but always aced tests.
Everyone is definitely different in terms of how they learn best. That's not to say that listening to non-fiction is or isn't better for oneself than nothing, or even that different forms of media may not differ. There's nothing wrong with entertainment or factual knowledge... (See "Fat Electrician" on YouTube/Pepperbox for a lot of both.)
I think GP is making a subtler point, not that listening to audio books is worse than reading books with your eyes, but that it's telling that people who listen to audio books themselves go out of their way to emphasize that it's equivalent to reading, thus betraying that in their own value system they put a higher value on (actual) reading.
Better at reading, yes, but not necessarily better at comprehension, which is what I believe people are getting at in these discussions. I read and listen. Initially my comprehension and memory while listening were inferior, but you can learn the skill of deep concentration on audio (or some may have it natively).
I mean, no one is listening to an audiobook of An Eternal Golden Braid - even if one existed, it couldn't lead to an equivalent outcome compared to reading it. Let's not even get started on the impact on literary devices like wordplay and neologisms.
There doesn't need to be an implicit dig; audiobooks are explicitly a different medium, and in the Marshall McLuhan sense obviously thus impact comprehension, retention, and the overall grok.
This sounds like the kind of low-thought pattern-based repetitive task where you could tell an LLM to do it and almost certainly expect a fully correct result (and for it to find some bugs along the way), especially if there's some test coverage for it to verify itself against. If you're skeptical, you could tell it to do it on some files you've already converted by hand and compare the results. This kind of thing was a slam dunk for an LLM even a year or two ago.
> It seems a lot of large AI models basically just copy the training data and add slight modifications
This happens even to human artists who aren't trying to plagiarize - for example, guitarists often come up with a riff that turns out to be very close to one they heard years ago, even if it feels original to them in the moment.
Good - we've been building the seed corpus for AI the past 50 years, and all this manual work now becomes exponentially more useful to others who get to build amazing things without all the tedium. I'm personally thrilled if my code made it in to the machine to help others. We laid train tracks by hand so that they could invent a machine to do it and we can focus on the destination.
I've been coding for over a decade, and I've built some great things, but the slow, careful, painstaking drudge-work parts were always the biggest motivation-killers. AI is worth it at any cost for removing the friction from these parts the way it has for me. Days of work are compressed into 20 minutes sometimes (e.g. convert a huge file of Mercurial hooks into Git hooks, knowing only a little about Mercurial hooks and none about Git hooks re: technical implementation). Donkey-work that would serve no value wasting my human time and energy on when a machine can do it, because it learned from decades of examples from the before-times when people did this by hand. If some people abuse the tools to make a morg here and there, so be it; it's infinitely worth the tradeoff.
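To give a flavor of the kind of translation that conversion involves (hypothetical hook and check; a Mercurial hook is a line of config in .hg/hgrc, while a Git hook is a standalone executable dropped into .git/hooks), a minimal sketch:

```shell
#!/bin/sh
# Mercurial side (config, not code) - in .hg/hgrc you'd have something like:
#   [hooks]
#   pretxncommit.todo = sh check_todo.sh
#
# Git side: the same check lives in an executable file at .git/hooks/pre-commit.
# Shared check, illustrative only: fail if any listed file contains a TODO marker.
check_no_todo() {
    for f in "$@"; do
        if grep -q 'TODO' "$f"; then
            echo "hook: TODO marker found in $f" >&2
            return 1   # nonzero exit aborts the commit in both systems
        fi
    done
    return 0
}

# In the Git pre-commit hook, the file list would come from the staged diff:
#   check_no_todo $(git diff --cached --name-only --diff-filter=ACM)
```

The mechanical part (where the hook lives, how it gets its file list, what the exit code means) differs between the two systems, which is exactly the donkey-work being described.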
Yeah, I felt kind of bad that he gave me such an earnest, thought-out reply to what was essentially a stupid morg/borg joke. But his final sentence suggests that he at least got my joke.
(I don't entirely agree with him, but I upvoted for at least trying to get us back on topic!)
I find AI is great at documenting code. It's a description of what the code does and how to use it - all that matters is that it's correct and easy to read, which it almost certainly will be in my experience.
I have quite a different take on that. As much as most people view documentation as a chore, there is value in it.
See it as code review, reflection, getting a birds eye view.
When I document my code, I often stop in between, and think: That implementation detail doesn't make sense/is over convoluted/can be simplified/seems to be lacking sanity check etc…
There is also the art of subtly injecting humor in it, with, e.g. code examples.
Documentation is needed for intent. For everything else you could just read the code. With well-written code, “what the code does and how to use it” should be clear.
> all that matters is that it's correct and easy to read
Absolutely disagree. A lot of the best docs I've read feel more personal, and have little extra touches like telling the reader which sections to skip or to spend more time in depending on what your background is.
Formatting and layout matters too. Docs sites with messy navigation and sidenotes all over the place might be "easy to read" if you can focus on only looking at one thing, but when you try to read the whole thing, you just get a bunch of extra noise that could've been left out.
Anyone lumping AI in with gambling and crypto scams has their head firmly in the sand. The value in making it easier to make computers do things for you is plainly obvious. People all across the tech literacy spectrum are seeing benefits, from coding projects that took days or weeks taking minutes to hours to finish to soccer moms telling their phones to fix up family photos and find them the email from the doctor's that's buried in their inbox. Competent doctors are getting help catching things from scans/symptoms that would take House MD to connect but are a breeze for an LLM, and there are numerous reports of people figuring out their own longstanding obscure medical issues by asking an LLM to speculate on their symptoms after a hundred doctors failed to diagnose it. Someone who can't afford to get ripped off by a mechanic and doesn't have the spare hours in the day and technical knowledge to Google it the old-fashioned way has a decent chance of solving simple car problems instantly on their own by asking ChatGPT or Gemini. It's frankly a miracle, and it keeps making more leaps and bounds every time the peanut gallery starts going on about it having hit a wall it can never improve from.
On the other hand, it’s not clear if any of it is sustainably profitable. Zitron has done a lot of reporting in this area, but since he’s “controversial” let’s stick with Forbes and the case of Sora (2025):
As far as I can tell, the math isn’t getting any better. The financial costs of running an AI service are enormous and it’s not clear where sufficient revenue is going to magically come from once the aggressive loss-leading ends.
"Anyone lumping AI in with gambling and crypto scams has their head firmly in the sand."
I disagree. While some AI related stuff is promising, much of it is consumerist or data harvesting. Many people are basically gambling on any sort of stock related to AI (vs diversifying). Education is likely declining as adoption allows students to avoid critical thought or applying concepts.
"It's frankly a miracle, and it keeps making more leaps and bounds every time the peanut gallery starts going on about it having hit a wall it can never improve from."
Something explainable by engineering isn't a miracle. It's just an expansion of neural nets and the other predecessors from 30 years ago. In my experience, it has trouble following basic directions, such as keeping a summary to a single page.
> Education is likely declining as adoption allows students to avoid critical thought or applying concepts.
I highly doubt this will be the case. This common viewpoint is almost certainly no more than another iteration of Plato decrying the invention of books.
Books and calculators still required knowledge of concepts and application. That's vastly different from students today asking AI to write a book report for them and retaining no knowledge from it.
I don't consider AI to be outstanding. Maybe that bridge they're talking about was. Seems that this comes down to opinion.