I don't get the fight against estimates. An estimate is an estimate. An estimate can be wrong. It likely is wrong, that's fine, it doesn't have to be perfect. There is a confidence interval. You can communicate that.
Very often something like "6-12 months" is a good enough estimate. I've worked in software a long time and I really don't get why many people think it's impossible to give such an estimate. Most of us are developing glorified CRUD apps, it's not rocket science. And even rocket science can be estimated to a usable degree.
Really you have no idea if feature X is going to take 1 day or 1 year?
It's almost never fine, though. When it's fine, people aren't pressured into giving estimates.
> It likely is wrong, that's fine
The most you can do is say it. Communication demands effort from all involved parties, and way too many people in a position to demand estimates just refuse to put any effort into it.
You've never had a manager or product person take estimates, even clearly communicated as low confidence or rife with unknowns, as gospel truth? Lucky you.
Engineer: “It will take me two days [of work].” Sales: “We will have your fix ready in three calendar days [today + 2].”
Actual work that week gives the employee 3 hours of non-meeting time, each daily meeting adds 0.5 hours of high-urgency administrative work. Fridays we have mandatory all-hands town halls…
Repeat that cycle for every customer-facing issue, every demo-facing issue, and every internal political issue, and you quickly drive deep frustration and back talking.
I think there’s a fundamental truth: no one in their right mind, not even motivated engineers, actually hears anything but calendar days when getting “days” estimates. It’s a terrible misrepresentation almost all the time, and engineers do a disservice when they yield to pressure to deliver them outside the broader planning process.
Project schedules should be the only place that time commitments come from, since they’re informed with necessary resource availability.
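To make the mismatch concrete, here is a back-of-the-envelope sketch in Python using the numbers from the comment above; the constants are illustrative assumptions, not measurements.

```python
# Rough sketch: why "two days of work" is not "two calendar days".
# The 3 focus hours/day and 0.5 admin hours per daily meeting are the
# assumed figures from the comment above, not measured data.

WORK_ESTIMATE_HOURS = 2 * 8        # "two days of work", counted as 8-hour days
FOCUS_HOURS_PER_DAY = 3.0          # non-meeting time actually available per day
ADMIN_HOURS_PER_DAY = 0.5          # follow-up work each daily meeting creates

effective_hours_per_day = FOCUS_HOURS_PER_DAY - ADMIN_HOURS_PER_DAY

calendar_days = WORK_ESTIMATE_HOURS / effective_hours_per_day
print(f"'2 days of work' ≈ {calendar_days:.1f} calendar days of elapsed time")
# -> roughly 6-7 working days, before the Friday town hall eats another slot
```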
For me the worst part is that I (and they) don't fully know what the person asking me for the estimate wants me to build, and usually the fastest way to find out is to just build the thing.
But very often the CI operations _are_ the problem. It's just YAML files with unlimited configuration options that have very limited documentation, without any type of LSP.
Personally I think this is an extreme waste of time. Every week you're learning something new that is already outdated the next week. You're telling me AI can write complex code but isn't able to figure out how to properly guide the user into writing usable prompts?
A somewhat intelligent junior will dive deep for one week and be on the same knowledge level as you in roughly 3 years.
No matter how good AI gets, we will never be in a situation where a person with poor communication skills will be able to use it as effectively as someone whose communication skills are razor sharp.
But the examples you've posted have nothing to do with communication skills, they're just hacks to get particular tools to work better for you, and those will change whenever the next model/service decides to do things differently.
I'm generally skeptical of Simon's specific line of argument here, but I'm inclined to agree with the point about communication skill.
In particular, the idea of saying something like "use red/green TDD" is an expression of communication skill (and also, of course, awareness of software methodology jargon).
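For anyone who doesn't know the jargon: "red/green TDD" just means write a failing test first (red), then write the minimal code that makes it pass (green). A tiny pytest-style sketch, with a made-up function name for illustration:

```python
# Red: write the test before the implementation behaves correctly;
# running pytest at this point should fail.
def test_slugify_replaces_spaces():
    assert slugify("Hello World") == "hello-world"

# Green: write just enough code to make the test pass, then refactor.
# (In practice the test and implementation live in separate files.)
def slugify(text: str) -> str:
    return text.strip().lower().replace(" ", "-")
```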
Ehhh, I don't know. "Communication" is for sapients. I'd call that "knowing the right keywords".
And if the hype is right, why would you need to know any of them? I've seen people unironically suggest telling the LLM to "write good code", which seems even easier.
I sympathize with your view on a philosophical level, but the consequence is really a meaningless semantic argument. The point is that prompting the AI with words that you'd actually use when asking a human to perform the task, generally works better than trying to "guess the password" that will magically get optimum performance out of the AI.
Telling an intern to care about code quality might actually cause an intern who hasn't been caring about code quality to care a little bit more. But it isn't going to help the intern understand the intended purpose of the software.
I'm not making a semantic argument, I'm making a practical one.
> prompting the AI with words that you'd actually use when asking a human to perform the task, generally works better
Ok, but why would you assume that would remain true? There's no reason it should.
As AI starts training on code made by AI, you're going to get feedback loops as more and more of the training data is going to be structured alike and the older handwritten code starts going stale.
If you're not writing the code and you don't care about the structure, why would you ever need to learn any of the jargon? You'd just copy and paste prompts out of Github until it works or just say "hey Alexa, make me an app like this other app".
Why do you bother with all this discussion? Like, I get it the first x times for some low x, it's fun to have the discussion. But after a while, aren't you just tired of the people who keep pushing back? You are right, they are wrong. It's obvious to anyone who has put the effort in.
It's also useful for figuring out what I think and how best to express that. Sometimes I get really great replies too - I compared ethical LLM objections to veganism today on Lobste.rs and got a superb reply explaining why the comparison doesn't hold: https://lobste.rs/s/cmsfbu/don_t_fall_into_anti_ai_hype#c_oc...
Yes and no. Knowing the terminology is a short-cut to make the LLM use the correct part of its "brain".
Like when working with video, if you use "timecode" instead of "timestamp", it'll use the video production part of the vector memory more. Video production people always talk about "timecodes", not "timestamps".
You can also explain the idea of red/green testing the long way without mentioning any of the keywords. It might work, but just knowing you can say "use red/green testing" is a magic shortcut to the correct result.
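A small illustration of that shortcut, sketched with the OpenAI Python SDK; the model name and the parse_config() function are placeholders, and any chat-capable model/tool would do. Both prompts ask for the same thing, but the jargon version compresses a paragraph of instruction into a few tokens the model already associates with the right behaviour.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The long way: spell the methodology out without the keywords.
long_prompt = (
    "Before changing parse_config(), first add a test that fails because the "
    "bug is present, run it to confirm it fails, then change the code until "
    "the test passes, and re-run the whole suite."
)

# The shortcut: lean on jargon the model already 'knows'.
short_prompt = "Fix the parse_config() bug using red/green TDD."

for prompt in (long_prompt, short_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content[:200], "\n---")
```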
Thus: working with LLMs is a skill, but also an ever-changing skill.
Why can't both be true at the same time? Maybe their problems are more complex than yours. Why do you assume it's a skill issue and ignore the contextual variables?
On the rare occasions that I can convince them to share the details of the problems they are tackling and the exact prompts they are using it becomes very clear that they haven't learned how to use the tools yet.
I'm kind of curious about the things you're seeing. I find the best way is to have them come up with a plan for the work they're about to do, and then make sure they actually finish it, because they like to skip stuff if it requires too much effort.
I mean, I just think of them like a dog that'll get distracted and go off doing some other random thing if you don't supervise them enough and you certainly don't want to trust them to guard your sandwich.
Repairable laptops don't reduce e-waste. You replace the mainboard and then what? You have a spare mainboard that sits there collecting dust. The best way to prevent e-waste is to build durable laptops that last a lifetime. Like Dell, HP and Lenovo have been doing for years (while also being very repairable at the same time).
We have open source documentation and CAD around the Mainboards to enable people to reuse them as single board computers or mini PCs after upgrading them out of their laptops. Even if the original owner of the Mainboard has no use for that, the functionality means it has resale value for others to use, reducing waste.
Nice for experimentation, but if you want a daily driver that lasts for years: Dell Latitude (now Dell Pro), HP EliteBook or Lenovo ThinkPad. Literally laptops built to last. Will last a decade with ease. Higher segments are of course better than lower segments, but in general they're very, very good if you stay away from the lowest tier.
Agreed. Trackpads on Windows are very good (approaching Mac quality) but on Linux it's hit and (mostly) miss. Gnome gestures are borderline unusable. Sometimes Gnome forgets how many fingers I'm using and every single-finger mouse movement is suddenly a gesture, I have to retry gestures to switch workspaces because the first two times it fails, etc. It becomes worse with more windows open. No back-swipe gesture in Chrome, etc. Basic stuff that is annoying in everyday use. Flawless mouse/touchpad support is not too much to ask.