Just do the work at your body's natural pace. Your body will tell you when it's tired, when it's ready to focus, and when it's excited to work.
As with everything, the problem is hubris and impatience. So eager to prove god-like status, you stumble and fall, posting ephemeral think pieces about your "journey" on social media. Reading LinkedIn is the closest I've ever got to thinking "maybe asylums would be okay with a few reforms..."
My favorite (which I think/hope was a joke) was the guy cooking chicken breast in his hotel coffee maker to "show his commitment to the company" and how hard he was willing to grind.
Cooking with hotel coffee makers during a grind is a long-standing tradition. Usually it's ramen or canned food, though.
For better or worse, this is becoming more difficult as the standard coffee maker in rooms these days is a one-cup-at-a-time model. So you need your own container, or you'll have to use the ice bucket (if they have one).
A chicken breast is a bit far-fetched, but I could see it happening with someone who was craving chicken at an odd hour.
If they really believe their AI is that good and their security practices and tooling that solid, why can't they automatically flag this stuff? I'm sure they can, but once something is flagged, a human has to check it, and that seems costly.
Career-limiting, perhaps (if expressing normal human emotion is a minus inside an organization, it may be time to bail), but some of the best minds I've met/observed were absolute curmudgeons (with purpose: they were properly bothered by a problem and refused to go along with the "sweep it under the rug" behavior).
Sure, I've dealt with plenty of assholes, too, but the grumps are usually just tired of their valid insight being ignored by more foolish, orthogonally incentivized types (read: "playing the game," not "making it work well").
We've all tolerated the grumpy genius at some point in our careers. Nevertheless, most of us would prefer to work with a person who's both smart and kind over someone who's smart and curmudgeonly. It is possible to be both smart and kind, and I've had the pleasure of working with such people.
Assholes can sap an organization's strength faster than their intelligence can add value. I'm not suggesting the author is an asshole, though; there's not enough evidence in this post.
This only happens because the software industry has fallen into the Religion of Speed. I see it constantly: justified corner-cutting, rushing shit out the door, and always loading up another feature/project/whatever with absolutely zero self-awareness. AI is just an amplifier for bad behavior that was already causing chaos.
What's not being said here but should be: discipline matters. It's part of being a professional, and it's what comes before being able to ship code that "just works."
> For software engineering, it is useless unless you're writing snippets that already exist in the LLM's corpus.
If I give something like Sonnet the docs for my JS framework, it can write code "in it" just fine. It makes the occasional mistake, but if I provide proper context and planning up front, it can knock out some fairly impressive stuff (e.g., helping me to wire up a shipping/logistics dashboard for a new ecom business).
That said, this requires me to police the chat (preferred) vs. letting an agent loose. I think the latter is just opening your wallet to model providers, but shrug.
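For anyone curious what "give it the docs" looks like in practice, here's a minimal sketch using the Anthropic TypeScript SDK. The docs path, the dashboard prompt, and the model alias are my own placeholders, not the parent commenter's actual setup:

```ts
// Sketch: pin your framework's docs into the system prompt so the model
// writes against YOUR API surface instead of whatever is in its corpus.
import fs from "node:fs";
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Hypothetical docs file for the framework in question.
const frameworkDocs = fs.readFileSync("docs/my-framework.md", "utf8");

const message = await client.messages.create({
  model: "claude-3-5-sonnet-latest", // any Sonnet-class model
  max_tokens: 2048,
  system:
    "You write code for the framework documented below. " +
    "Use only APIs that appear in these docs.\n\n" +
    frameworkDocs,
  messages: [
    {
      role: "user",
      content: "Wire up a dashboard view that lists shipments by status.",
    },
  ],
});

// Text blocks carry the generated code; print the first one.
console.log(message.content[0].type === "text" ? message.content[0].text : "");
```

The "policing the chat" part is just reviewing each response before feeding the next prompt, rather than letting the loop run unattended.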
If you need a shipping dashboard, then yeah, that's a very common, very simple use case: just hook up an API to a UI. Even then, I don't think you'll end up with a very maintainable app that way, especially if you have multiple views (the LLMs aren't consistent in how they use features; they're always generating from scratch and matching whatever's closest).
What I'm saying is that whenever you need to actually do some software design, i.e., tackle a novel problem, they are useless.
I've enjoyed Howard Marks's writing/thinking in the past, but this is clearly a person who thinks he understands the topic but doesn't have the slightest clue. Someone trying to be relevant/engaged before really thinking about what is fact vs. fiction.
I believe it's you who is misunderstanding his positions here. He clearly lays out that he is focused on the irrational optimism affecting the investment around the tech, not on whether the tech itself is viable. His analysis is well thought out from the perspective he's approaching it from.
He clearly states he doesn't understand the topic.
But you don't need to understand the tech to explore the ramifications, which is what he's done here, and it's an insightful & fairly even-handed take on it.
It does feel like the AI chat here gets bogged down in "it's not that great, it's overhyped, etc." without actually engaging with it properly. Even if it's crap, if it eliminates 5-10% of companies' labour costs, that's a huge deal, and the second-order effects on the economy and society will be profound. And from where I'm standing, that's quite possible without AI even being that good.
One can see he knows little about AI and relies on the judgments of others. Yet he knows a lot more about economics, finance, and history than most AI practitioners.
As a founder of an AI company, I actually agreed with most of the article and found it to be very close to my mental model of the world. Turns out you might actually not need to understand what's causing the hype if you know that history rhymes...!