As someone who appreciates machine learning, my main dissonance is that interacting with Microsoft's implementation of AI feels like "don't worry, we will do the thinking for you".
This appears everywhere, with every tool trying to autocomplete every sentence and action, creating a very clunky ecosystem where I am constantly pressing 'escape' and 'backspace' to undo some action that tries to rewrite what I am doing into something I don't want or didn't intend.
It is wasting time, and none of the things I want are optimized; their tools feel like they are helping people write "good morning team, today we are going to do a Business, but first we must discuss the dinner reservations" emails.
I broadly agree. They package "copilot" in a way that constantly gets in your way.
The one time I thought it could be useful, in diagnosing why two Azure services seemingly couldn't talk to each other, it was completely useless.
I had more success describing the problem in vague terms to a different LLM than with an AI supposedly plugged into the Azure organisation and able to directly query information.
My 2 cents: this is what happens when OKRs are executed without a vision, or when "AI everywhere" is the whole vision, and, well, it sucks.
The goal is AI everywhere, so top-down, everyone will implement it and be rewarded for doing so; there are incentives for each team to do it: money, promotions, budget.
100 teams? 100 AI integrations or more, not the 10 entry points it (maybe) should be.
This means that for a year or more there will be a lot of AI everywhere, impossible to avoid, and usability will sink.
Now, if this were only done by Microsoft, I would not mind. The issue is that this behavior is becoming widespread.
You would think they would care that their brand is being torched, but I guess they figure they're too big to need to care.
Their new philosophy is "the user is too stupid to even think for themselves LOL." It's not just their rhetoric; it's every single choice they've made, screaming out their new priorities, of which user respect is both last and least.
I had that experience too. Working with Azure is already a nightmare, but the Copilot tool built into Azure is completely useless for troubleshooting. I just pasted log output into Claude and got actual answers. Microsoft's first-party stuff just seems so half-assed and poorly thought out.
Why is this, I wonder? Aren't the models trained on about the same blob of huggingface web scrapes anyway? Does one tool do a better job of pre-parsing the web data, or pre-parsing the prompts, or enhancing the prompts? Or a better sequence of self-repair in an agent-like conversation? Or maybe more precision in the weights and a more expensive model?
their products are just good enough to allow them to put a checkbox in a feature table, so the product can be sold to someone who will then never have to use it
but not even a penny more will be spent than the absolute bare minimum to achieve that
this explains Teams, Azure, and everything else they make that you can think of
How do you QA adding a weird prediction tool to, say, Outlook? I have to use Outlook at one of my clients and have switched to writing all emails in VS Code and then pasting them into Outlook, because the "autocomplete" is unbearable… Not sure QA is even possible with tools like these…
Part of QA used to be evaluating whether a change was actually helpful in doing the thing it was supposed to be doing.
... why, it's almost like in eliminating the QA function, we removed the final checks and balances on developers (read: PMs) from implementing whatever ass-backwards feature occurs to them.
Just in time for 'AI all the things!' directives to come down from on high.
exactly!! though evaluating whether a change was actually helpful in doing the thing it was supposed to be doing is hard when no one knows what it is supposed to be doing :)
I had a WTF moment last week. I was writing SQL, and there was no autocomplete at all. Then a chunk of autocomplete code appeared that looked like an SQL injection attack, with some "drop table" mixed in. The code would not have worked, it was syntactically rubbish, but it still looked spooky. I should have made a screenshot of it.
This is the most annoying thing, and it's even happened to JetBrains' Rider too.
Some stuff that used to work well with smart autocomplete / intellisense got worse with AI based autocomplete instead, and there isn't always an easy way to switch back to the old heuristic based stuff.
You can disable it entirely and get dumb autocomplete, or get the "AI powered" rubbish, but they had a very successful heuristic / statistics based approach that worked well without suggesting outright rubbish.
In .NET we've had IntelliSense for 25 years that would only suggest properties that could exist, and then a while ago I suddenly found that VS Code was auto-completing properties that don't exist.
It's maddening! The least they could have done is put in a Roslyn pass to filter out the impossible.
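A rough sketch of what such a pass could look like, purely hypothetical and nothing like the real VS internals, assuming the editor already gives you the SemanticModel and the receiver expression being dotted into:

    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.CodeAnalysis;
    using Microsoft.CodeAnalysis.CSharp;
    using Microsoft.CodeAnalysis.CSharp.Syntax;

    static class SuggestionFilter
    {
        // Keep only AI-suggested member names that actually exist on the
        // static type of the receiver expression. (A real filter would
        // also walk base types and interfaces.)
        public static IEnumerable<string> KeepReal(
            SemanticModel model,
            ExpressionSyntax receiver,
            IEnumerable<string> aiSuggestedNames)
        {
            ITypeSymbol type = model.GetTypeInfo(receiver).Type;
            if (type is null)
                return Enumerable.Empty<string>();

            var realMembers = new HashSet<string>(
                type.GetMembers().Select(m => m.Name));
            return aiSuggestedNames.Where(realMembers.Contains);
        }
    }

Even something this crude would have caught the phantom properties.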
Loosely related: voice control on Android with Gemini is complete rubbish compared to the old Assistant. I used to be able to have texts read out and dictate replies whilst driving. Now it's all nondeterministic, which adds cognitive load and is unsafe in the same way touch screens in cars are worse than tactile controls.
I've been immensely frustrated by no longer being able to set reminders by voice. I got so used to saying "remind me in an hour to do x" and now that's just entirely not an option.
I'm a very forgetful person and easily distracted. This feature was incredibly valuable to me.
I got Gemini Pro (or whatever it's called) for free for a year on my new Pixel phone, but there's an option to keep Assistant, which I'm using.
Gotta love the enshittification: "new and better" being more CPU cycles being burned for a worse experience.
I just have a shortcut to the Gemini webpage on my home screen if I want to use it. For some reason I can't place a plain web shortcut (maybe it's my ancient launcher, which isn't even in the Play Store anymore), so I have to make a Tasker task that opens the webpage when run.
This is my biggest frustration. Why not check with the compiler and only generate code that would actually compile? I've had this with Go and .NET in the JetBrains IDEs.
Had to turn ML auto-completion off. It was getting in the way.
There is no setting to revert back to the very reliable, high-quality "AI" autocomplete that reliably did not recommend class methods that do not exist, and reliably figured out the pattern in the 20 lines I was writing, without randomly suggesting 100 lines of new code that only disrupt my view of the code I am trying to work on.
I even clicked the "Don't do multiline suggestions" checkbox because the above was so absurdly anti-productive, but it was ignored.
The most WTF moment for me was that recent Visual Studio versions hooked up the “add missing import” quick fix suggestion to AI. The AI would spin for 5s, then delete the entire file and only leave the new import statement.
I’m sure someone on the VS team got a pat on the back for increasing AI usage but it’s infuriating that they broke a feature that worked perfectly for a decade+ without AI. Luckily there was a switch buried in settings to disable the AI integration.
You can still use the older ML-model (and non-LLM-based!) IntelliCode completion suggestions - it’s buried in the VS Installer as an optional feature entirely separate from anything branded CoPilot.
The last time I asked Gemini to assist me with some SQL, this is what I got (inside my postgres query form):
    This task cannot be accomplished USING standard SQL queries against
    the provided database schema. Replication slots are managed through
    PostgreSQL system views AND functions, NOT through user-defined
    tables. Therefore, I must return
Gemini weirdly messes things up even though it seems to have the right information, something I've started noticing more often recently. I'd ask it to generate a curl command to call some API, and it would describe (correctly) how to do it and then generate the code/command, but the command would have obvious things missing: the 'https://' prefix in some cases, sometimes the API path, sometimes the auth header/token, even though it mentioned all of those things correctly in the text summary it gave above the code.
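For illustration, this is the shape of a complete command it would correctly describe and then mangle (endpoint and token made up, obviously):

    curl -H "Authorization: Bearer $API_TOKEN" \
         "https://api.example.com/v1/items"

It would spell all of that out in prose, then emit something like `curl api.example.com/items`, with the scheme, the path, or the header silently dropped.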
I feel like this problem was far less prevalent a few months/weeks ago (before gemini-3?).
Using it for research/learning purposes has been pretty amazing though, while Claude Code is still the best for coding, in my experience.
This is a great post. Next time you see it, grab a screenshot, put it on GitHub Pages, and post it here on HN. It will generate lots of interesting discussion about rubbish suggestions from poor LLM models.
This seems like it should be a killer feature: Copilot having access to configuration and logs and being able to identify where a failure is coming from. This stuff is tedious manually, since I basically run through a checklist of where the failure could occur and there's no great way to automate that; plus, sometimes there are subtle typo-type issues. Copilot can generate the checklist reasonably well but can't execute on it, even from Copilot within Azure. Why not??
I have had great luck with ChatGPT trying to figure out a complex AWS issue with:
“I am going to give you the problem I have. I want you to help me work backwards step by step and give me the AWS cli commands to help you troubleshoot. I will give you the output of the command”.
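It usually starts with identity and configuration sanity checks before narrowing in on the actual resources; these are real AWS CLI commands, though which ones it asks for obviously depends on the problem:

    # confirm which identity/role the CLI is actually using
    aws sts get-caller-identity
    # confirm region/profile configuration
    aws configure list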
It’s a combination of advice that ChatGPT gives me and my own rubberducking.
that's what happens when everyone is under the guillotine and their lives depend on overselling this shit ASAP instead of playing/experimenting to figure things out
I've worked in tech and lived in SF for ~20 years and there's always been something I couldn't quite put my finger on.
Tech has always had a culture of aiming for "frictionless" experiences, but friction is necessary if we want to maneuver and get feedback from the environment. A car can't drive if there's no friction between the tires and the road, even though it's helped by having no friction between the chassis and the air.
Friction isn't fungible.
John Dewey described this rationale in Human Nature and Conduct as thinking that "Because a thirsty man gets satisfaction in drinking water, bliss consists in being drowned." He concludes:
”It is forgotten that success is success of a specific effort, and satisfaction the fulfillment of a specific demand, so that success and satisfaction become meaningless when severed from the wants and struggles whose consummations they are, or when taken universally.”
In "Mind and World", McDowell criticizes this sort of thinking, too, saying:
> We need to conceive this expansive spontaneity as subject to control from outside our thinking, on pain of representing the operations of spontaneity as a frictionless spinning in a void.
And that's really what this is about, I think. Friction-free is the goal but friction-free "thought" isn't thought at all. It's frictionless spinning in a void.
I teach and see this all the time in EdTech. Imagine if students could just ask the robot XYZ and how much time it'd free up! That time could be spent on things like relationship-building with the teacher, new ways of motivating students, etc.
Except... those activities supply the "wants and struggles" whose consummations build the relationships! Maybe the robot could help the student, say, ask better questions of the teacher, or direct the student to peers who were similarly confused but figured it out.
But I think that strikes many tech-minded folks as "inefficient" and "friction-ful". If the robot knows the answer to my question, why slow me down by redirecting me to another person?
This is the same logic that says making dinner is a waste of time and we should all live off nutrient mush. The purpose of preparing dinner is to make something you can eat, and the purpose of eating is nutrient acquisition, right? Just beam those nutrients into my bloodstream and skip the rest.
Not sure how to put this all together into something pithy, but I see it all as symptoms of the same cultural impulse. One that's been around for decades and decades, I think.
People want the cookie, but they also want to be healthy. They want to never be bored, but they also want to have developed deep focus. They want instant answers, but they also want to feel competent and capable. Tech optimizes for revealed preference in the moment. Click-through rates, engagement metrics, conversion funnels: these measure immediate choices. But they don't measure regret, or what people wish they had become, or whether they feel their life is meaningful.
Nobody woke up in 2005 thinking "I wish I could outsource my spatial navigation to a device." They just wanted to not be lost. But now a generation has grown up without developing spatial awareness.
> Tech optimizes for revealed preference in the moment.
I appreciate the way you distinguish this from actual revealed preference, which I think is key to understanding why what tech is doing is so wrong (and, bluntly, evil) despite it being what "people want". I like the term "revealed impulse" for this distinction.
It's the difference between choosing not to buy a bag of chips or a box of cookies at the store, because you know it'll be a problem and your actual preference is not to eat those things, and having someone leave chips and cookies at your house without your asking and giving in to the impulse to eat too many of them when you did not want them in the first place.
Example from social media: My "revealed preference" is that I sometimes look at and read comments from shit on my Instagram algo feed. My actual preference is that I have no algo feed, just posts on my "following" tab, or at least that I could default my view to that. But IG's gone out of their way (going so far as disabling deep link shortcuts to the following tab, which used to work) to make sure I don't get any version of my preference.
So I "revealed" that my preference is to look at those algo posts sometimes, but if you gave me the option to use the app to follow the few accounts I care about (local businesses, largely) but never see algo posts at all, ever, I'd hit that toggle and never turn it off. That's my actual preference, despite whatever was "revealed". That other preference isn't "revealed" because it's not even an option.
Just like with the chips and cookies, the costs of social media are delayed and diffuse. Eating/scrolling feels good now. The cost (diminished attention span, shallow relationships, health problems) shows up gradually over years.
Yes, I agree with this. I think more people than not would benefit from actively cultivating space in their lives to be bored. Even something as basic as putting your phone in the internal zip pocket of your bag, so that when you're standing in line at the store/post office/whatever you can't be arsed to just reach for your phone, and instead you're in your head or aware of your surroundings. Both can be such wonderful and interesting places, but we seem to forget that now.
Plants "want" nitrogen, but dump fertilizer onto soil and you get algal blooms, dead zones, plants growing leggy and weak.
A responsible farmer is a steward of the local ecology, and there's an "ecology of friction" here. The fertilizer company doesn't say "well, the plants absorbed it."
But tech companies do.
There's something puritanical about pointing to "revealed preference" as absolution, I think. When clicking is consent then any downstream damage is a failure of self-control on the user's part. The ecological cost/responsibility is externalized to the organisms being disrupted.
Like Schopenhauer said: "Man kann tun, was er will, aber er kann nicht wollen, was er will." One can do what one wants, but one cannot will what one wants.
I wouldn't go as far as old Arthur, but I do think we should demand a level of "ecological stewardship". Our will is conditioned by our environment and tech companies overtly try to shape that environment.
I think that's partially true. The point is to have the freedom to pursue higher-level goals. And one thing tech doesn't do - and education in general doesn't do either - is give experience of that kind of goal setting.
I'm completely happy to hand over menial side-quest programming goals to an AI. Things like stupid little automation scripts that require a lot of learning from poor docs.
But there's a much bigger issue with tech products - like Facebook, Spotify, and AirBnB - that promise lower friction and more freedom but actually destroy collective and cultural value.
AI is a massive danger to that. It's not just about forgetting how to think, but how to desire - to make original plans and have original ideas that aren't pre-scripted and unconsciously enforced by algorithmic control over motivation, belief systems, and general conformity.
Tech has been immensely destructive to that impulse. Which is why we're in a kind of creative rut where too much of the culture is nostalgic and backward-looking, and there isn't that sense of a fresh and unimagined but inspiring future to work towards.
I don't think I could agree with you more. I think more people in tech and business should think about and read about philosophy, the mind, social interactions, and society.
EdTech, for example, really seems to neglect the kind of bonds people form when they go through difficult things together, and pushing through difficulties is how we improve. Asking a robot XYZ does not improve us. AI and LLMs do not know how to teach; they are not Socratic, pushing and prodding at our weaknesses and assessing us so we improve. They just say how smart we are.
This is perhaps one of the most articulate takes on this I have ever read - thank you!
And - for myself, it was friction that kickstarted my interest in "tech" - I bought a janky modem, and it had IRQ conflicts with my Windows 3 mouse at the time - so, without internet (or BBSes at that time), I had to troubleshoot and test different settings with the 2-page technical manual that came with it.
It was friction that made me learn how to program and read manuals/syntax/language/framework/API references to accomplish things for hobby projects - which then led to paying work. It was friction not having my "own" TV and access to all the visual media I could consume "on-demand" as a child; therefore I had to entertain myself by reading books.
Friction is an element of the environment like any other. There's an "ecology of friction" we should respect. Deciding friction is bad and should be eradicated is like deciding mosquitoes or spiders or wolves are bad and should be eradicated.
Sometimes friction is noise. Sometimes friction is signal. Sometimes the two can't be separated.
I learned much the same way you did. I also started a coding bootcamp, so I've thought a lot about what counts as "wasted" time.
I think of it like building a road through wilderness. The road gets you there faster, but careless construction disturbs the ecosystem. If you're building the road, you should at least understand its ecological impact.
Much of tech treats friction as an undifferentiated problem to be minimized or eliminated—rather than as part of a living system that plays an ecological role in how we learn and work.
Take Codecademy, which uses a virtual file system with HTML, CSS, and JavaScript files. Even after mastering the lessons, many learners try the same tasks on their own computers and ask, "Why do I need to put this CSS file in that directory? What does that have to do with my hard drive?"
If they'd learned directly on their own machines, they would have picked up the hard-drive concepts along the way. Instead, they learned a simplified version that, while seemingly more efficient for "learning to code," creates its own kind of waste.
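Concretely, the thing the sandbox hides from them is as small as this (a made-up minimal layout):

    project/
      index.html   <- contains <link rel="stylesheet" href="css/style.css">
      css/
        style.css

Seeing those two files on their own disk answers the "what does this have to do with my hard drive?" question by itself.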
But is that to say the student "should" spend a week struggling? Could they spend a day, say, and still learn what the friction was there to teach? Yes, usually.
I tell everyone to introduce friction into their lives... especially if they have kids. Friction is good! Friction is part of the je ne sais quoi that makes humans create.
In my experience, part of the 'frictionless' experience is also to provide minimal information about any issues and no way to troubleshoot. Everything works until it doesn't, and when it doesn't, you are at the mercy of the customer support queue and of getting an agent with the ability to fix your problem.
> but friction is necessary if we want to maneuver and get feedback from the environment
You are positing that we are active learners whose goal is clarity of cognition, and that friction and cognitive struggle are part of that. Clarity is attempting to understand the "know-how" of things.
Tech, and dare I say the natural laziness inherent in us, instead wants us to be zombies being fed the "know-that", as that is deemed sufficient: i.e. the dystopia portrayed in the Matrix movies, or the rote student regurgitating memes. But know-that is not the same as know-how, and know-how keeps evolving, requiring a continuously learning agent.
Looking at it from a slightly different angle, one I find most illuminating: removing "friction" is like removing "difficulty" from a game, and "friction free" as an ideal is like "cheat codes from the start" as an ideal. It's making a game where there's a single button that says "press here to win." The goal isn't to remove "friction"; it's to remove a specific type of valueless friction and replace it with valuable friction.
Thank you for expressing this. It might not be pithy, but it's something I've been thinking about for a long time, and this is a well-articulated way of expressing it.
I don't know. You can be banging your head against the wall to demolish it or you can use manual/mechanical equipment to do so. If the wall is down, it is down. Either way you did it.
> ...Microsoft's implementation of AI feels like "don't worry, we will do the thinking for you"
I feel like that describes nearly all of the "productivity" tools I see in AI ads. Sadly enough, it also aligns with how most people use it, in my personal experience. Just a total offloading of the need to think.
Sheesh, I notice I also just ask an assistant quite a bit rather than putting in the effort to think about things. Imagine people who drive everywhere with GPS (even for routine drives) and are lost without it, and imagine that for everything needing a little thought...
As an old school interface/interaction designer, I see this as a direct consequence of how the discipline of software design has evolved in the last decade or two.
We've gone from conceiving of software as tools - constructs that enhance and amplify their user's skills and capabilities - to magic boxes that should aim to do everything with just one button (and maybe even that is one action too many).
This shift in thinking is visible in how junior designers and product managers are trained and incentivized to think about their work. “Anticipating the user’s intent”, “providing a magical experience”, “making the simplest, most beautiful and intuitive product” - all things that are so routine parlance now that they sound trite, but that would make any software designer from the 80s/90s catatonic because of how orthogonal they are to good tool design.
To caricature a bit, the industry went from being run by people designing heavy machinery to people designing Disneyland rides. Disneyland rides are great and have their place, but you probably don't want your tractor to be designed like one.
Perhaps this is a feature and not a bug for MS. Every time you hit escape or accept, you're giving them more training samples. The more training data they can get you to give them, the better. So they WANT to be throwing out possibly irrelevant suggestions at every opportunity.
As much as I love JetBrains (IntelliJ and friends), I have the same feeling this year. I undo an accidental tab/whatever far more often than I accept one. I'm not anti-LLM -- they are great for many things, but I am tired of undoing shitty suggestions. Literally, many of them produce a syntax error. Please don't read this post as dumping on JetBrains. I still love their products.
> It is wasting time, and none of the things I want are optimized; their tools feel like they are helping people write "good morning team, today we are going to do a Business, but first we must discuss the dinner reservations" emails.
No trolling: This is genius-level sarcasm. You do realise that most "business" emails are essentially this, right? Oh, right, you knew that already!
I agree. I am happiest just using plain Emacs for coding, every once in a while separately using an LLM, or using gemini-cli or codex once or twice a day for a single task.
My comment is about coding, but I hold the same opinion for writing emails: once in a blue moon, I will use an LLM manually.
You raise a good point. For specific programming tasks, I don't really want token-by-token suggestions in an IDE. And, like you, when I have a specific problem, e.g., "I need to do Kerberos auth like this in that language," I go ask an LLM, and it is generally very useful. Then I look at the produced code and say: "Oh, that's how you do it." I almost never copy/paste the results from the LLM into my code base.
>As someone who appreciates machine learning, my main dissonance is that interacting with Microsoft's implementation of AI feels like "don't worry, we will do the thinking for you".
This is the nightmare scenario with AI, i.e. people settling for Microsoft/OpenAI et al. doing the "thinking" for them.
It is alluring, but of course it is not going to work. It is similar to what happened to the internet via social media, i.e. "kick back and relax, we'll give you what you really want, you don't have to take any initiative".
My pitch against this is to vehemently resist the chatbot-style solutions/interfaces and demand intelligent workspaces.
A world full of humans being guided by computers would be... dystopian.
Although I imagine a version where AI drives humans who mindlessly trust it to be more vegetarian or take public transport, helping save the environment (an ironic wish, since AI is burning the planet). Of course, "AI" is guided by its owners, so there'd be a camp that uses Grok and will still drive SUVs, eat meat, and be racist idiots...
The disappointing thing is that I'd rather they spent the time improving security, but it sounds like all cycles are being shoved into making AI shovels. Last year the CEO promised security would come first, but that's not the case.