I’m wary of the exuberance around AI displacing quality.
But some of the worst experiences I’ve had with coworkers were with those who made programming part of their identity. Every technical disagreement on a PR became a threat to identity or principles, and ceased being about making the right decision in that moment. Identity means: there’s us, and them, and they don’t get it.
‘Programmer’ is much better off as a description of one who does an activity. Not an identity.
I see this same thing now. In this case, it’s a more senior engineer and his manager taking credit for work done by a less senior engineer who’d left the team.
There’s simply no advantage to crediting work to someone who’s left the team.
We love to blame the unfortunate. It’s called the just-world fallacy. It’s deeply uncomfortable to realize that this kind of thing is the norm, and justice is the exception. I’ve been extremely fortunate in my career, but not due to any special savviness of my own.
It’s not true that if someone else is getting credit for your work, that’s a you problem.
At my workplace now, there’s a senior staff engineer taking credit for work that was done by someone three levels below him. And the senior staff engineer still thinks he isn’t getting enough credit for his work. His manager has been crediting him for the work the less senior engineer had done, since the less senior engineer is no longer on that team, in forums the less senior engineer has no access to.
The less senior engineer is plenty likeable. As is the senior staff engineer. But the less senior engineer had left that team, and the senior staff engineer and his manager are unscrupulous and do whatever works to their advantage.
As someone who isn't invested in this spat, it just looks petty for OpenAI to put this on their website.
Just write a press release and let the tech press publish it. Don't host it yourself. The legalistic language belongs in a filing, not a user-facing blog.
That's not at all what they meant. They meant they ended up raising their own quality bar tremendously because the QA person represented a ~P5 user, not a P50 or P95 user, so they had to design around misuse and the sad path instead of the happy path. Doing so is actually a good quality in a QA.
Frankly, I think the 'latest' generation of models from a lot of providers, which switch between 'fast' and 'thinking' modes, are 'latest' mainly because they push users toward cheaper inference by default. In ChatGPT I still trust o3 the most; it gives me fewer flat-out wrong or nonsensical responses.
I suspect that once these models hit 'good enough' for ~90% of users and use cases, the providers started optimizing for cost instead of quality, while still benchmarking and advertising on quality.
As long as I'm not reviewing PRs with thousands of net-new lines that weren't even read by their submitter, I'm fine with anything. The software design I've seen from peers using AI code agents has been dreadful.
I think some who are excited about AI programming are happy they can build a lot more things. Others, I think, are excited they can build the same amount of things with a lot less thinking. The agent and their code reviewers can do the thinking for them.
Oh, not at all. I appreciate the rhetoric, although it didn't work on me, because I've never experienced Google as childish or even amicable. So the author's strong predisposition to think of Google as a marvelous friend was a bit jarring to me (and that's why the tone personally struck me as verging on emotional manipulation). But, to each their own. Expressing commentary and emotions is good; I prefer it to cold facts all the time :-)
I don't think this quite captures the problem: even if the code is functional and proven to work, it can still be bad in many other ways.
The submitter should understand how it works and be able to 'own' and review modifications to it. That's cognitive work submitters, by offloading the understanding to an LLM, ipso facto don't do. It becomes the actual hard work reviewers and future programmers have to do instead.