> The world is so not ready for the impact of LLMs on security issues.
I agree, but it's the people I'm worried about.
I'm hearing anecdotes from all over about devs pushing LLM-generated code changes into production without retaining any knowledge of what it is they're pushing. The changes compound, their understanding of the codebase diminishes, and so the actions become riskier.
What's worse is that a lot of this behavior is being driven by leaders, whether directly (e.g. unrealistic velocity goals, promoting people based on hand-wavy "use AI" initiatives, etc.) or indirectly (e.g. layoffs overloading remaining devs, putting inexperienced devs in senior roles, etc.).
The world's gone mad and large swaths of the industry seem hellbent on rediscovering the security basics the hard way.
The gamble is that you can cruise on the senior engineer’s diminishing understanding for a few years until models become good enough that you don’t need any humans in the loop and you can fire all those expensive seniors.
The tragedy is having a bunch of those senior engineers writing blog posts and whatnot about how productive they are, without realising that it means the business now needs fewer of them.
I suppose that if you don’t believe that models will be good enough to work completely without senior engineer help, positioning yourself as a master prompter is a good move to improve your chances of not getting fired.
If all you have is being good at prompting, you're gone. Businesses are going to prefer new grads who have taken some class in AI prompting (which many schools now offer). It doesn't matter if the class isn't any good. What matters is the idea that they had formal training in this thing, and that they're willing to work for far less pay than any senior.
> I'm hearing anecdotes from all over about devs pushing LLM-generated code changes into production without retaining any knowledge of what it is they're pushing. The changes compound, their understanding of the codebase diminishes, and so the actions become riskier.
The difference is twofold. First, junior devs who ask for code reviews on massive, 2000+ line diffs get coached, and eventually fired if they persist at it. And second, even the most prolific junior engineer would take years to write what Claude is capable of generating in an afternoon.
When Sundar Pichai announces that 75% of all new code at Google is AI-generated, their stock price goes up. If he were to announce that 75% of all new code at Google is now written by junior engineers, this would trigger a massive sell-off and a lot of employees would resign.
The dangers of technical debt and the importance of mitigating it have been known for a long time. Unfortunately a lot of entities now ignore all experience and best practices as soon as you say the "AI" buzzword.
> I'm hearing anecdotes from all over about devs pushing LLM-generated code changes into production without retaining any knowledge of what it is they're pushing. The changes compound, their understanding of the codebase diminishes, and so the actions become riskier.
I don’t think so.
An LLM can produce higher-quality documentation than most humans. If it's not already happening, when a new developer joins a team, they're going to have an LLM produce whatever documentation they need, including why certain decisions were made.
It could also summarize years of email threads and code reviews that, let's face it, a new person wouldn’t be able to ingest anyway; it's not like a new developer gets to take a week off to get caught up on everything that happened before they got there. English not their first language? Well, the LLM can present the information in virtually any language required.
As the models continue to improve, they'll spot patterns in the code that a human wouldn’t be able to see.
> An LLM can produce higher-quality documentation than most humans.
"Can" is bearing some heavy weight there.
LLM-generated documentation has such low information density that it's useless. Yes, it writes nice sentences… but it contains so much noise that, right now, reading the code is better documentation than every piece of LLM-generated documentation I've seen.
The same goes for LLM-generated articles. I close them after the second sentence because at least 90% of the content is useless filler.
I almost closed this one when I read the first few sentences, because these kinds of articles are usually time-wasting nonsense. But this was different. This was gold. Most sentences contained something new, something worthwhile. (Of course, people also write unnecessarily long articles… looking at you, Atlantic.)
You can throw out almost everything by volume from LLM-generated documentation without losing any information.
Currently, if I smell (and it’s very easy to smell) LLM generated documentation or article, then I close it immediately, because it’s good for only one thing: wasting my time, for no good reason.
> LLM-generated documentation has such low information density that it's useless. Yes, it writes nice sentences… but it contains so much noise that, right now, reading the code is better documentation than every piece of LLM-generated documentation I've seen.
I should clarify: the documentation I’m talking about is not generated using a generic LLM prompt, which would mostly suck.
With the proper context and additions (skills, plugins, MCPs) LLMs can produce high-quality documentation. You'd also have subagents doing QA of the documentation.
If stuff really goes wrong, you need people who deeply understand the codebase so that they know where to look and how to diagnose the issue. It might be the case in the future that LLMs become so powerful they'll diagnose any issue (I doubt it), but until then, we need people in the loop.
1. When I benchmarked it, AFP was significantly faster than SMB. Both with SMB2 and SMB3. Even when transport encryption was turned off.
2. On SMB2+, symlinks created by the client are not real symlinks. They're "Minshall+French" links which only look like symlinks to other SMB2+ clients. To the server and NFS mounts they look like flat files with the target path encoded in them.
3. It exposes a different precision for certain timestamps. Software that uses this metadata to decide whether a file needs to be updated will see almost every file as needing a resync.
It's been a year or two since I checked the status of these. The situation may have improved since last I looked.
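The second point is easy to see on disk: Samba's "mfsymlinks" (Minshall+French) emulation stores the link as an ordinary 1067-byte file beginning with `XSym`, followed by a zero-padded target length, an MD5 of the target path, and the target itself. A minimal Python sketch of detecting and reading one, based on the layout documented by Samba (illustrative only, not a full implementation):

```python
import hashlib

MFSYMLINK_SIZE = 1067  # mfsymlink files are always exactly 1067 bytes


def read_mfsymlink(path):
    """Return the symlink target if `path` is a Minshall+French
    symlink file, else None."""
    with open(path, "rb") as f:
        data = f.read(MFSYMLINK_SIZE + 1)
    # Must be exactly 1067 bytes and start with the XSym magic.
    if len(data) != MFSYMLINK_SIZE or not data.startswith(b"XSym\n"):
        return None
    # Layout: "XSym\n" + "NNNN\n" (target length) + 32-hex-char MD5
    # of the target + "\n" + target path + padding.
    lines = data.split(b"\n", 3)
    length = int(lines[1].decode())
    md5hex = lines[2].decode()
    target = lines[3][:length]
    if hashlib.md5(target).hexdigest() != md5hex:
        return None  # checksum mismatch: just a file that looks similar
    return target.decode()
```

This is why the server and NFS clients see a flat file: nothing outside an SMB2+ client ever interprets that `XSym` blob as a link.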
Yeah I recently migrated my NAS and took the opportunity to switch from AFP to SMB for my Time Machine backups. There were so many problems like the ones you describe that I gave up and went back to AFP. Looks like I'm going to be forced to spend a weekend with Claude figuring this out.
If you're using Synology, a couple years ago they finally published a help article that lists (IIRC) 3-5 settings important to switching from AFP to SMB. I had tried before that but to no avail.
> YouTube added a feature so I can easily skip in-video sponsored sections
That feature benefits YouTube, too. Maybe even more than its value as a Premium feature. It makes it so that viewers can skip the ads the creator was paid to make without YouTube getting a cut of the proceeds, pushing down the value of those ads.
Hasn't this always been the case? If a movie or show features product placement, a TV station playing said movie/show doesn't get any of the proceeds from that advertisement, do they?
This is just the pros having more tact than amateurs, and having actual writers. I do see some "influencers" who do more of a pure product placement. They just happen to be drinking a specific energy drink in every video, where it sits perfectly with the label out. I see some YouTubers trying to get better at integrating the ad into the video, but most of them can't be bothered to write and record a custom script.
That said, Subway often seemed to get pretty heavy with its product placement. The last season of Chuck had a good amount of this, even what was essentially an ad read right in the middle of an episode by Big Mike. On Community they personified Subway and based a whole episode on him. In the Office they brought in Ryan Howard to say “eat fresh” over and over again, and even called out that it was for Subway to make sure it didn’t go over anyone’s head. Subway was big on sponsoring the last seasons of struggling shows with loyal fanbases, and littering the episodes with Subway product placement to the point where it became a plot point. I remember Zachary Levi (Chuck) tweeting out to ask everyone to go buy some Subway before the finale. It sounded like if Subway saw enough of a spike in buying from the sponsorship, they might fund yet another season.
I know, but I don't see a fundamental difference. If TV networks are happy to pay for a show that also gets advertising revenue from product placement, I don't see why YouTube would not be happy to deliver ads and pay some percent of that to a channel that displays its own ads. Especially given that YouTube has much, much less cost per video than a traditional network, which can only broadcast one program at a time.
They bill it as being for testing, but it works great if all you want is a no-fuss S3-compatible API on top of a filesystem. I've run it on my NAS for a few years now to provide a much faster transfer protocol compared to SMB.
How disappointing it is to see how easily some leaders in our industry abandon their principles, and how cheaply they sell out their fellow man.
The tech industry was never perfect. It was never a charity. But there was a time, several years ago now, when people were more driven to build things that delighted others.
That was always a lie. Gates became the richest pedo in the world building a monopoly of bad software and destroying other people's businesses. Since then things only got worse.
Just last week, two NYPD cops were indicted for evidence tampering for doing exactly that.
The indicted cops responded to an off-duty cop's DUI crash. They texted each other on their personal phones so as not to create a record. They positioned their bodycams so as not to capture the incident. At one point, one of the cops held the other's bodycam to make it look as though he was still standing there while he secretly called their supervisor. They then let the drunk cop drive away. Hours later, another officer found the car parked on the sidewalk. That officer did finally arrest him.
"These police officers did their job. We should not be here today," said union president Patrick Hendry, who accused the DA of targeting the officers. "He needs to support officers instead of going after them. Enough is enough."
To their credit, these charges came based on a referral from NYPD's Internal Affairs Bureau, though it came four years later.