This is one of the best Rust articles I've ever read. It's obviously written from experience, and it covers a lot of _business logic_ footguns that Rust doesn't typically protect you against unless you do a bit of careful coding that lets the compiler help you.
So many Rust articles focus on people doing dark sorcery with "unsafe"; this is just normal, everyday API design, which is far more practical for most people.
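To make that concrete, here's a minimal sketch of the kind of thing meant by letting the compiler enforce business logic (my own example, not from the article; the UserId/OrderId names are hypothetical): the newtype pattern turns a rule like "don't mix these two kinds of IDs up" into a compile-time check.

    // Both are a u64 at runtime, but distinct types at compile time,
    // so they can't be swapped by accident.
    #[derive(Debug, Clone, Copy, PartialEq, Eq)]
    struct UserId(u64);

    #[derive(Debug, Clone, Copy, PartialEq, Eq)]
    struct OrderId(u64);

    fn cancel_order(user: UserId, order: OrderId) {
        println!("user {user:?} cancelled order {order:?}");
    }

    fn main() {
        cancel_order(UserId(42), OrderId(7));
        // cancel_order(OrderId(7), UserId(42)); // compile error: mismatched types
    }

The raw u64 still exists underneath; the point is that mixing the two up becomes a type error instead of a production incident.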
1) We have barely scratched the surface of what is possible to do with existing AI technology.
2) Almost all of the money we are spending on AI now is ineffectual and wasted.
---
If you go back to the late 1990s, that's where most companies were with _computers_: huge, wasteful projects that didn't improve productivity at all. It sometimes took 10 years of false starts to really get traction.
It's interesting to remember that Microsoft was around back then too; it lost approximately 58% of its valuation and took approximately 14 years to regain it.
It's not "fundamentally flawed". It is brilliant at what it does. What is flawed is how people are applying it to solve specific problems. It isn't a "do anything" button that you can just push. Every problem you apply AI to still has a ton of engineering work that needs to be done to make it useful.
I thought this for a while, but I've also been thinking about all the stupid, false stuff that actual humans believe. I'm not sure AI won't get to a point where, even if it's not perfect, it's no worse than people are at selectively observing policies, holding wrong beliefs, or just making things up when they don't know.
> Every problem you apply AI to still has a ton of engineering work that needs to be done to make it useful.
Ok, but that isn't useful to me. If I have to hold the bot's hand to get stuff done, I'll just do it myself, which will be both faster and higher quality.
That’s not my experience at all, I’m getting it done much faster and the quality is on par. It’s hard to measure, but as a small business owner it’s clear to me that I now require fewer new developers.
You’re correct: you need to learn how to use it. But for some reason HN has an extremely strong anti-AI sentiment, unless it’s about fundamental research.
At this point, I consider these AI tools to be an invaluable asset to my work in the same way that search engines are. It’s integrated into my work. But it takes practice on how to use it correctly.
I think what it comes down to is that the advocates making false claims are relatively uncommon on HN. So, for example, I don't know what advocates you're talking about here. I know people exist who say they can vibe-code quality applications with 100k LoC, or that guy at Anthropic who claims that software engineering will be a dead profession in the first half of '26, and I know that these people tend to be the loudest on other platforms. I also know sober-minded people exist who say that LLMs save them a few hours here and there per week trawling documentation, writing a 200 line SQL script to seed data into a dev db, or finding some off-by-one error in a haystack. If my main or only exposure to AI discourse was HN, I would really only be familiar with the latter group and I would interpret your comment as very biased against AI.
Alternatively, you are referring to the latter group and, uh, sorry.
The whole point I tried to make when I said “you need to learn how to use it” is that it’s not vibe coding. It has nothing to do with vibes. You need to be specific and methodical to get good results, and use it for appropriate problems.
I think the AI companies have over-promised in terms of “vibe” coding, as you need to be very specific, not at all based on “vibes”.
I’m one of those advocates for AI, but on HN it consistently gets downvoted no matter how I try to explain things. There’s a super strong anti-AI sentiment here.
My suspicion is that they (HN) are very concerned this technology is pushing hard into their domain expertise and feel threatened (and rightfully so).
While it will suck when that happens (and inevitably it will), that time is not now. I'm not one to say LLMs are useless, but they aren't all they're being marketed to be.
There is no scenario where AI is a net benefit. There are three possibilities:
1. AI does things we can already do but cheaper and worse.
This is the current state of affairs. Things are mostly the same except for the flood of slop driving out quality. My life is moderately worse.
2. Total victory of capital over labor.
This is what the proponents are aiming for. It's disastrous for the >99% of the population who will become economically useless. I can't imagine any kind of universal basic income when the masses can instead be conveniently disposed of with automated killer drones or whatever else the victors come up with.
3. Extinction of all biological life.
This is what happens if the proponents succeed better than they anticipated. If recursively self-improving ASI pans out then nobody stands a chance. There are very few goals an ASI can have that aren't better accomplished with everybody dead.
What is the motivation for killing off the population in scenario 2? That's a post-scarcity world where the elites can have everything they want, so what more are they getting out of mass murder? A guilty conscience, potentially for some multiple of human lifespans? Considerably less status and fame?
Even if they want to do it for no reason, they'll still be happier if their friends and family are alive and happy, which recurses about 6 times before everybody on the planet is alive and happy.
It's not a post-scarcity world. There's no obvious upper bound on the resources AGI could use, and no obvious stopping point where you can call it smart enough. So long as there are other competing elites, the incentive is to keep improving it. All the useless people will be using resources that could be used to make more semiconductors and power plants.
This is such a weirdly antagonistic take. JavaScript was out there first, it was good enough, and it was a vast improvement over both Flash and Java in the browser. There's no guarantee that some committee-designed language would ever have made it out to the public, let alone gotten any kind of uptake, or been better than JavaScript.
JS at the time was obviously thrown together in a huge rush, a very poorly designed land grab during the dot-com IPO gold rush.
"Designed by committee" as the alternative is a false dichotomy.
There was already much better work on languages aimed at this kind of purported requirement of non-programmer (or less-programmer) use. JS obviously didn't even try to address those users.
And there were certainly better languages for letting full-programmers accomplish the same things.
Even Sun themselves internally had better work on a multimedia Web browser at the time.
Instead, some team just threw together whatever would let them make the press release they wanted, ASAP, not caring whether it was trash, or whether they could have done better within the press-release time constraint if they'd cared.
There's a market out there, and a first-mover advantage. Technologies are embedded in, and products of, existing political, social, and economic processes; they're not standalone entities.
Think of all the other "worse" languages that triumphed over their purported betters.
I agree that first mover advantage is an important concept. I don't know how much it applies to the JS launch situation, but there was definitely a rush.
I think the reason this is interesting to mathematicians is that he was working within an axiomatic system that is fairly new and, _in particular_, is thought not to be strong enough to prove the pigeonhole principle. Since he proved that all these other theorems are equivalent to the pigeonhole principle, all of those other theorems are probably also not provable in PV1.
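For reference, this is the (finite) pigeonhole principle in its standard form; formalizing it inside a weak system like PV1 is more delicate than this, so treat the notation as a sketch:

    \forall n \in \mathbb{N},\ \forall f : \{1,\dots,n+1\} \to \{1,\dots,n\},\
    \exists\, i \neq j \ \text{such that} \ f(i) = f(j)

That is, no function from a set of n+1 elements into a set of n elements can be injective.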
> How will the Google/Anthropic/OpenAI's of the world make money on AI if open models are competitive with their models?
So, a couple of things. There are going to be a handful of companies in the world with the infrastructure footprint and engineering org capable of running LLMs efficiently and at scale. You are never going to be able to run open models in your own infra in a way that is cost-competitive with using their API.
Competition _between_ the largest AI companies _will_ drive API prices to essentially zero profit margin, but none of those companies will care, because they aren't primarily going to make money by selling the LLM API -- your usage of their API just subsidizes their infrastructure costs, and they'll use that infra to build products like ChatGPT and Claude. Those products are their moat and will be where 90% of their profit comes from.
I am not sure why everyone is so obsessed with "moats" anyway. Why does Gmail have so many users? Anybody can build an email app. For the same reason that people stick with Gmail, people are going to stick with ChatGPT. It's being integrated into every aspect of their lives. The switching costs for people are going to be immense.
I frequently ask ChatGPT to research products or look at reviews, etc., and it is pretty obvious that I want to buy something, yet the bridge from 'researching products' to 'buying stuff' is basically non-existent on ChatGPT right now. ChatGPT having some affiliate relationships with merchants might actually be quite useful for a lot of people and would probably generate a ton of revenue.
That assumes a certain kind of ad, though. Even a "punch the monkey" style banner ad would be a start. I can't imagine they wouldn't be very careful not to give consumers the impression that their "thumb was on the scale" of what ads you see.
Sure, but affiliate != ads. Rather, both affiliate links and paid ad slots are by definition not neutral and thus bias your results, no matter what anyone claims.
I think this is good, but I doubt it'll actually impact rental prices as much as people think, because the problem is fundamentally a housing shortage.
The problem is fundamentally a housing shortage AND RealPage recommended to large rental operators that they leave apartments empty. So they definitely did actually pour gasoline on the fire.
Landlords are completely capable of leaving units vacant all by themselves, and there isn't any evidence that RealPage clients were more likely to do that.
> For tenants, the system upends the practice of negotiating with apartment building staff. RealPage discourages bargaining with renters and has even recommended that landlords in some cases accept a lower occupancy rate in order to raise rents and make more money.
...
> Apartment managers can reject the software’s suggestions, but as many as 90% are adopted, according to former RealPage employees.
Yeah, I mean, I don't know how to tell you this, but that's not a source. Rental managers leave units vacant all the time, hoping for an unrealistic price. It's entirely possible, and even consistent with those claims, that RealPage clients were no worse than general property managers in this regard. They may even have been more willing to accept lower prices based on the RealPage recommendations.
Your rebuttal is literally "nuh uh." You asked for a source, you were given a source: documentation that shows RealPage discouraged landlords from negotiating, and landlords admitting to not negotiating. COULD they have negotiated? Sure, and a unicorn COULD be orbiting Saturn with 5,000 homes in tow, trying to figure out the optimal thrust vector to come to Earth. Anything COULD happen, but we know what DID happen: RealPage and services like it have enabled landlords to collude with dubious legality on rent prices, and rent prices went up as a result. The most predictable outcome.
The biggest change in my career was when I got promoted to be a Linux sysadmin at a large tech company that was moving to AWS. It was my first sysadmin job and I barely knew what I was doing, but I knew some Bash and Python. I could either learn how to manage stuff in data centers by logging into servers with SSH and running Perl scripts, or I could learn CloudFormation, because that was what management wanted. Everybody else on my team thought AWS was a fad and refused to touch it unless absolutely forced to. I wrote a ton of terrible CloudFormation and Chef cookbooks, got promoted twice, and my salary went from $50,000 a year to $150,000 a year in 3 years, after I took a job elsewhere. AFAIK, most of the people on that team got laid off when the whole team was eliminated a few years after I left.