ajkjk's comments

one reason is that startup culture is cringe as hell

I'm being coarse, but like... it is though.


What does that mean?

If you are operating under the constraint that talking to strangers is impossible then I could see why ChatGPT feels like a godsend...

> like building an AI product made me part of the problem.

It's not about their careers. It's about the injustice of the whole situation. Can you possibly perceive the injustice? That the thing they're pissed about is the injustice? You're part of the problem because you can't.

That's why it's not about whether the tools are good or bad. Most of them suck, also, but occasionally they don't--but that's not the point. The point is the injustice of having them shoved in your face; of having everything that could be doing good work pivot to AI instead; of everyone shamelessly bandwagoning it and ignoring everything else; etc.


> It's not about their careers.

That's the thing, though, it is about their careers.

It's not just that people who spend years to decades learning their craft are annoyed that someone can put a prompt into a chatbot, get an app that mostly works, and claim to have 'written' it without understanding any of the code.

It's that the executives are positively giddy at the prospect that they can get rid of some number of their employees and the rest will use AI bots to pick up the slack. Humans need things like a desk and dental insurance and they fall unconscious for several hours every night. AI agents don't have to take lunch breaks or attend funerals or anything.

Most employees that have figured this out resent AI getting shoved into every facet of their jobs because they know exactly what the end goal is: that lots of jobs are going to be going away and nothing is going to replace them. And then what?


disagree completely. You're doing the thing I described: assuming it's all ultimately about personal benefit when they're telling you directly that it's not. The same people could trivially capitalize on the shifting climate and have a good career in the new world. But they'd still be pissed about it.

I'm one of these people. So is everyone I know. The grievance is moral, not utilitarian. I don't care about executives getting rid of people. I care that they're causing obviously stupid things to happen, based on their stupid delusions, making everyone's lives worse, and they're unaccountable for it. And in doing so they devalue all of the things I consider to be good about tech, like good software that works and solves real problems. Of course they always did that but it's especially bad now.


> You're doing the thing I described: assuming it's all ultimately about personal benefit when they're telling you directly that it's not.

It doesn't matter how much astroturf I read, I can see what's happening with my own eyes.

> The grievance is moral, not utilitarian.

Nope, it's both.

Businesses have no morals. (Most) people do. Everything that a business does is in service of the bottom line. They aren't pushing AI everywhere out of some desire to help humanity, they're doing it because they sunk a lot of resources into it and are trying to force an ROI.

There are a lot of people who have fully bought in to AI and think that it's more capable than it is. We just had a thread the other day where someone was using AI to vibe code an app, but managed to accidentally tell the LLM to delete the contents of his hard drive.

AI apologists insist that AI agents are a vital tool for doing more faster and handwave any criticism. It doesn't matter that AI agents consume an obscene amount of resources to do it, or that pretend developers are using it to write code they don't understand and can't test that they're shoving into production anyway. That's all fine because a loud fraction of senior developers are using it to bypass the 'boring parts' of writing programs to focus on the interesting bits.


I feel like this is a textbook example of how people talk past each other. There are people in this world who operate under personal utility maximization, and they think everyone else does also. Then there are people who are maximizing for justice: trying to do the most meaningful work themselves while being upset about injustices. Call it scrupulosity, maybe. Executives doing stupid pointless things to curry favor is massively unjust, so it's infuriating.

If you are a utilitarian person and you try to parse a scrupulous person according to your utilitarianism of course their actions and opinions will make no sense to you. They are not maximizing for utility, whatsoever, in any sense. They are maximizing for justice. And when injustices are perpetrated by people who are unaccountable, it creates anger and complaining. It's the most you can do. The goal is to get other people to also be mad and perhaps organize enough to do something about it. When you ignore them, when you fail to parse anything they say as about justice, then yes, you are part of the problem.


> like [being involved in creation of the problem] made me a part of the problem.

Yeah, that's weird. Why would anyone think that? /s


hell if passing the buck is the opposite of holding the bag then maybe we should mix em

maybe the full array of options is: pass the hot potato, hold the buck, or drop it like a bag.


or you can thoughtfully consider it and maybe learn something

quotes like this are only used to dismiss observations you don't like


The quote makes a statement; we don't know if it is true. What can you learn from that? It might spark some thoughts, maybe.

exactly. maybe you think of it as a smidge more credible because someone else thinks it, even. Especially if they're a generally intelligent person whose other thoughts you like.

Bro, you literally provided zero evidence, learn what?

When someone suggests an idea without evidence there's still a modicum of data in the fact that they believe it. You don't have to, like, suddenly change your mind, but you also don't have to blow it off as unsubstantiated entirely. Probably they believe it, and said it, for a reason. Anyway whether or not you blow it off is entirely an indication of your trust in them, and has nothing to do with whether they presented evidence.

My feeling is we need laws to stop it

The industry agrees with you, hence the regulatory capture.

Too big to fail now

If it only takes a few years for a private entity to become "too big to fail" and quasi-immune to government regulation, we have a real problem.

And yeah, honestly we do seem to have a real problem. Here's hoping OpenAI doesn't get the bailout they seem to be angling for...

You don't like some features being added to products so you want laws against adding certain features?

I might not like a certain feature, but I'd dislike the government preventing companies from adding features a whole lot more. The thought of that terrifies me.

(To be clear, legitimate regulations around privacy, user data, anti-fraud, etc. are fine. But just because you find AI features to be something you don't... like? That's not a legitimate reason for government intervention.)


I think it's more about enforcing having easy mechanisms to opt out, which seem to be absent with regards to AI integration.

It's better to assume good faith when providing a counter argument.


That doesn't change anything. If there aren't any harms except that certain people don't "like" a feature, it's not the government's role to force companies to allow users to opt out of features. If you don't like a feature, don't buy the product. The government should not be micromanaging product design.

What product should I buy if I need a smartphone to e.g. pay for parking but I don't want a smartphone that tracks me?

Take it up with your city council, if they're the ones requiring a smartphone to pay for parking.

But also, you're going to have to be more specific about what tracking you're worried about. Cell towers need to track you to give you service. But the parking app only gets the data you enable with permissions, and the data the city requires you to give the app (e.g. a payment method). So I'm not super clear what tracking you're concerned about?

If you don't use your smartphone for anything but paying for parking, I genuinely don't know what tracking you're concerned about.


> If there aren't any harms except that certain people don't "like" a feature

The reason I don't like these sorts of features is because I think they are harmful, personally


In a democratic society, "government" is just a tool that the people use to exercise their will.

Why isn’t it the governments role?

Because you think it’s not?

What if I, and many other people, think that it is?


Because it's ultimately a form of censorship. Governments shouldn't be in the business of shutting down speech some people don't like, and in the same way shouldn't be in the business of shutting down software features some people don't like. As long as nobody is being harmed, censorship is bad and anti-democratic. (And we make exceptions for cases of actual harm, like libelous or threatening speech, or a product that injures or defrauds its users.) Freedom is a fundamental aspect of democracy, which is why freedoms are written into constitutions so simple majority vote can't remove them.

1) Integration or removal of features isn't speech. And has been subject to government compulsion for a long time (e.g. seat belts and catalytic converters in automobiles).

2) Business speech is limited in many, many ways. There is even compelled speech in business (e.g. black box warnings, mandatory sonograms prior to abortions).


I said, "As long as nobody is being harmed". Seatbelts and catalytic converters are about keeping people safe from harm. As are black box warnings and mandatory sonograms.

And legally, code and software are considered a form of speech in many contexts.

Do you really want the government to start telling you what software you can and cannot build? You think the government should be able to outlaw Python and require you to do your work in Java, and outlaw JSON and require your APIs to return XML? Because that's the type of interference you're talking about here.


Mandatory sonograms aren't about harm prevention. (Though yes, I would agree with you if you said the government should not be able to compel them.)

In the US, commercial activities do not have constitutionally protected speech rights, with the sole exception of "the press". This is covered under the commerce clause and the first amendment, respectively.

I assemble DNA, I am not a programmer. And yes, due to biosecurity concerns there are constraints. Again, this might be covered under your "does no harm" standard. Though my making smallpox, for example, would not be causing harm any more than someone building a nuclear weapon would cause harm. The harm would come from releasing it.

But I think, given that AI has encouraged people to suicide, and would allow minors the ability to circumvent parental controls, as examples, that regulations pertaining to AI integration in software, including mandates that allow users to disable it (NOTE, THIS DOESN'T FORCE USERS TO DISABLE IT!!), would also fall under your harm standard. Outside of that, the leaking of personally identifiable information does cause material harm every day. So there needs to be proactive control available to the end user regarding what AI does on their computer, and how easy it is to accidentally enable information-gathering AI when that was not intended.

I can come up with more examples of harm beyond mere annoyance. Hopefully these examples are enough.


Those examples of harm are not good ones.

The topic of suicide and LLMs is a nuanced and complex one, but LLMs aren't suggesting it out of nowhere when summarizing your inbox or calendar. Those are conversations users actively start.

As for leaking PII, that's definitely something to be aware of, but it's not a major practical concern for any end users so far. We'll see if prompt injection turns into a significant real-world threat and what can be done to mitigate it.

But people here aren't arguing against LLM features based on substantial harms. They're doing it because they don't like it in their UX. That's not a good enough reason for the government to get involved.

(Also, regarding sonograms, I typed without thinking -- yes of course the ones that are medically unnecessary have no justification in law, which is precisely why US federal courts have struck them down in North Carolina, Indiana, and Kentucky. And even when they're medically necessary, that's a decision for doctors not lawmakers.)


> Those examples of harm are not good ones.

I emphatically disagree. See you at the ballot box.

> but it's not a major practical concern for any end users so far.

My wife came across a post or comment by a person considering preemptive suicide in fear that their ChatGPT logs will ever get leaked. Yes, fear of leaks is a major practical concern for at least that user.


Fear of leaks, or the other harms you mention, have nothing to do with the question at hand, which is whether these features are enabled by default.

If someone is using ChatGPT, they're using ChatGPT. They're not inputting sensitive personal secrets by accident. Turning Gemini off by default in Gmail isn't going to change whether someone is using ChatGPT as a therapist or something.

You seem to simply be arguing that you don't like LLM's. To which I'll reply: if they do turn out to present substantial harms that need to be regulated, then so be it, and regulate them appropriately.

But that applies to all of them, and has nothing to do with the question at hand, which is whether they can be enabled by default in consumer products. As long as chatgpt.com and gemini.google.com exist, there's no basis for asking the government to turn off LLM features by default in Gmail or Calendar, while making them freely available as standalone products. Does that make sense?


Yes. I think laws should be used to shut down things that are universally disliked but for which there is no other mechanism for accountability. That seems like obviously the point of laws.

Except these LLM features are not universally disliked. If they were, believe me, the companies would not be building them.

Lots of people actually find them useful. And the features are being iterated on to improve them.


not the LLM features. The undisable-able intrusions to advertise them, which rely on controlling platforms and so being able to use them to anticompetitively promote their own products.

Yes, the LLM features. There's nothing anticompetitive in a popup or button for your own feature in your own product. People like me find many of them useful. Maybe you find that inconvenient for the narrative you're pushing, but it's true.

What do you mean narrative? The narrative is that I hate them. If almost everyone else also does, we should ban them. Simple as that.

It is no different from passing legislation to ban spam or mandate that newsletters have one-click unsubscribe buttons. I hate living in a world where corporations are unaccountably disrespectful to their users. Laws are how you hold them accountable. I don't care if the justification for the laws is competition or spam or what. It's simply the point of laws: to give society power over things that individuals can't easily have power over, so that they may improve their lives. To argue that we shouldn't improve our lives is absurd. The only justification for not doing it would be that it is immoral to do so, which it is not. Otherwise the only remaining question is whether we can rally the political will to do it. Not likely in the short term, but the way things are going, I expect it will happen eventually.


> If almost everyone else also does, we should ban them.

But not "almost everyone" hates them. Plenty of people like them and use them. You're ignoring that.

And think about applying your argument to free speech. If most people don't like an opinion, should we ban it?

You shouldn't be able to ban things you merely don't like. There needs to be some kind of legitimate harm. Having Gemini in your Gmail isn't creating any harm.


What is so terrifying about exerting democratic control over software critical to exist in society?

> over software critical to exist in society?

I don't know what that means grammatically.

But you could ask, what is so terrifying about exerting democratic control over people's free speech, over the political opinions they're allowed to express?

The answer is, because it infringes on freedom. As long as these AI features aren't harming anyone -- if your only complaint is you find their presence annoying, in a product you have a free choice in using or not using -- then there's no democratic justification for passing laws against them. Democratic rights take precedence.


This is the argument against all customer protection as well as things like health codes, right?

Nobody is FORCING you to go to that restaurant so it's antidemocracy to take away their freedom to not wash their hands when they cook?


Please see the part of my comment where I say as long as it's not harming anyone.

> As long as these AI features aren't harming anyone

Why do you say this? They are clearly harming people's privacy. Or don't you believe privacy is a right? A lot of people do - democratically.


If you can show it's harming privacy, then regulate privacy. That's legitimate. But I assume you're talking about AI training, not feature usage.

Trying to regulate whether an end-user feature is available just because you don't "like" AI creep is no different from trying to regulate that user interfaces ought to use flat design rather than 3D effects like buttons with shadows. It would be an illegitimate use of government power.


Making sure people have the option not to listen to your "speech" is not control over people's free speech.

It absolutely is.

When I buy a book, I don't want the government deciding in advance which paragraphs should be included, and which paragraphs people "shouldn't have to listen to". So I don't want it doing that with software either. It's the same thing.

You don't have to buy that book in the first place. The same way you don't have to use a piece of software.


You're trying to make it sound like a corporation's right to force AI on us is equivalent to an individual's right to speech, which is idiotic on its face. But I'd also point out that speech is regulated in the US, so you're still not making the point you think you're making.

And as far as I'm concerned, as long as Google and Apple have a monopoly on smartphone software, they should be regulated into the ground. Consumers have no alternatives, especially if they have a job.


It's not "idiotic on its face" and that's not appropriate for HN. Please see the guidelines:

https://news.ycombinator.com/newsguidelines.html

Code and software are very much forms of speech in a legal sense.

And free speech is regulated in cases of harm, like violent threats or libel. But there's no harm here in any legal sense. People are just unhappy with the product UX -- that there are buttons and areas dedicated to AI features.

Companies should absolutely have the freedom to build the products they want as long as there's no actual harm. If you merely don't like a UX, use a competing product. If you don't like the UX of any product, then tough. Products aren't usually perfectly what you want, and that's OK.


You're completely ignoring the most important point I raised, which is that I can't use a competing product. I can't stop using Microsoft, Google, Meta, or Apple products and still be a part of my industry or US society.

So what's your argument?

You're not being forced to use the AI features. If you don't want to use them, don't use them. There's zero antitrust or anticompetitive issue here.

Your argument that Google and Apple should be "regulated into the ground" isn't an argument. It's a vengeful emotion or part of a vague ideology or something.

If I want blenders to be sold in bright orange, but the three brands at my local store are all black or silver, I really don't think it's right that the government should pass a law requiring stores to carry blenders in bright orange. But that's what you're asking for, for the government to determine which features software products have.


> You're not being forced to use the AI features. If you don't want to use them, don't use them

You can't turn them off in many products, and Microsoft's and Google's roadmaps both say that they're going to disable turning them off, starting with using existing telemetry for AI training.

> Your argument that Google and Apple should be "regulated into the ground" isn't an argument. It's a vengeful emotion or part of a vague ideology or something.

You're just continuing to ignore that all of this is based on their market dominance. There are literally two options for smartphone operating systems. For something that's vital to modern life, that's unacceptable and gives users no choice.

If a company gets to enjoy a near-monopoly status, it has to be regulated to prevent abuse of its power. There's a huge amount of precedent for this in industries like telecom.

> If I want blenders to be sold in bright orange, but the three brands at my local store are all black or silver, I really don't think it's right that the government should pass a law requiring stores to carry blenders in bright orange

Do you really not see the difference between "color of blender" and "unable to turn off LLMs on a device that didn't have any on it when I bought it"?


> Do you really not see the difference between "color of blender" and "unable to turn off LLMs on a device that didn't have any on it when I bought it"?

Do you really not see that there is no difference?

Either the government starts dictating product design or it doesn't.

I don't want a world where the government decides which features software makers include or turn on or off by default. Whether there are 20 companies competing in a space or mainly 2.

Don't you see where that leads? Suddenly it's dictating encryption and inserting backdoors. Suddenly it starts allowing Truth Social to build new features and removing features on Twitter.

This is a bigger issue than you seem to be acknowledging. The freedom to create the software you want, provided it's not causing actual harm, is as important to preserve as the freedom to write the books or blog posts you want.

If this had something to do with antitrust then the fact that there are only two major phone platforms would be relevant. But the fact that both platforms are implementing LLM features is not anticompetitive. To the contrary, it's competitive even if you personally don't like it. It's literally no different from them both supporting 1,000 other features in common.


> But you could ask, what is so terrifying about exerting democratic control over people's free speech, over the political opinions they're allowed to express?

What an asinine comparison lol



Bruh learn to take responsibility for your behavior

"Bruh", please read the guidelines. Your comments are completely inappropriate for HN.

Newsflash

If voters democratically decide to do something, that's democracy at work.


"Newsflash", the entire point of constitutions that enumerate rights is that fundamental rights and freedoms may not be abridged even by majority decision.

If a Supreme Court strikes down a majority-passed law limiting free speech guaranteed by the Constitution, that's democracy at work.


If they can't be abridged, then why do we have amendments?

And no, that would be the courts at work, which may or may not be beholden to the whims of other political figures.


It takes more than majority vote to add a new amendment.

Go ahead and try, but I don't think you'll find that an amendment to restrict people's freedoms is going to be very popular. Because it will be seen as anti-democratic.


I mean you said 60 percent yourself, that would be a majority decision, and a democratic one.

I'm not sure the point you're trying to make here.

Voters restrict their own freedoms all the time. Hell, my state recently passed a law preventing Ranked Choice voting.


I'm not following you. I didn't say 60%? And 60% is a supermajority, not a majority. Which is a huge distinction. And US constitutional amendments require much stricter thresholds than that -- two thirds of Congress and three quarters of states. That's a gigantic bar.

Yes voters try to restrict their own freedoms all the time. We have constitutions with rights to block them from doing that in fundamental ways. That's what protection from tyranny of the majority is all about. Just because you have a majority doesn't mean you're allowed to take away rights. That's a fundamental principle of democracy. Democracy isn't just majority rule -- it's the protection of rights as well.


>You don't like some features being added to products so you want laws against adding certain features?

Correct, especially when the features break copyright law, use as much electricity as Belgium, and don't actually work at all. Just a simple button that says "Enabled", and it's off by default. Shouldn't be too hard, yeah? You can continue to use the slop machine, that's fine. Don't force the rest of us to get down in the trough with you.


I have no problem with a company voluntarily choosing to make it a toggle.

I have a big problem with a government forcing companies to enable toggles on features because users complain about the UX.

If there are problems with copyright, that's an issue for the courts -- not a user toggle. If you have problems with the electricity, then that's an issue for electricity infrastructure regulations. If you think it doesn't work, then don't use it.

Passing a law forcing a company to turn off a legal feature by default is absurd. It's no different from asking a publisher to censor pages of a book that some people don't like, and make them available only by a second mail-order purchase to the publisher. That's censorship.


>I have a big problem with a government forcing companies to enable toggles on features because users complain about the UX.

I have a big problem with companies forcing me to use garbage I don't want to use.

>If there are problems with copyright, that's an issue for the courts -- not a user toggle.

But in the meantime, companies can just get away with breaking the law.

>If you have problems with the electricity, then that's an issue for electricity infrastructure regulations.

But in the meantime, companies can drive up the cost of electricity with reckless abandon.

>If you think it doens't work, then don't use it.

I wish I lived in your world where I can opt out of all of this AI garbage.

>Passing a law forcing a company to turn off a legal feature by default is absurd.

"Legal" is doing a lot of heavy lifting. You know the court system is slow, and companies running roughshod over the law until the litigation works itself out "because they're all already doing it anyway" is par for the course. AirBnB should've been illegal, but by the time we went to litigate it, it was too big. Spotify sold pirated music until it was too big to litigate. How convenient that this keeps happening. To the casual observer, it would almost seem intentional, but no, it's probably just some crazy coincidence.

>It's no different from asking a publisher to censor pages of a book that some people don't like, and make them available only by a second mail-order purchase to the publisher. That's censorship.

Forcing companies to stop being deleterious to society is not censorship, and it isn't Handmaid's Tale to enforce a semblance of consumer rights.


> I have a big problem with companies forcing me to use garbage I don't want to use.

That pretty much sums it up. And the answer is: too bad. Deal with it, like the rest of us.

I have a big problem with companies not sending me a check for a million dollars. But companies don't obey my whims. And I'm not going to complain that the government should do something about it, because that would be silly and immature.

In reality, companies try their best to build products that make money, and they compete with each other to do so. These principles have led to amazing products. And as long as no consumers are being harmed (e.g. fraud, safety, etc.), asking the government to interfere in product decisions is a terrible idea. The free market exists because it does a better job than any other system at giving consumers what they want. Just because you or a group of people personally don't like a particular product isn't a reason to overturn the free market and start asking the government to interfere with product design. Because if you start down that path, pretty soon they're going to be interfering with the things you like.


> The free market exists because it does a better job than any other system at giving consumers what they want.

Bull. Free markets are subject to a lot of pressures, both from the consumers, but also from the corporate ownership and supply chains. The average consumer cannot afford a bespoke alternative for everything they want, or need, so are subject to a market. Within the constraints of that market it is, indeed, best for them if they are free to choose what they want.

But from personal experience I know damn sure that what I really really want is often not available, so I'm left signalling with my money that a barely tolerable alternative is acceptable. And then, over a long enough period of time, I don't even get that barely tolerable alternative anymore as the company has phased it out. Free markets, in an age of mass production and lower margins, universally mean that a fraction of the market will be unable to buy what they want, and the alternatives available may mean they have to go without entirely. Because we have lost the ability to make it ourselves (assuming we ever had that ability).


> But from personal experience I know damn sure that what I really really want is often not available

But that's just life. I genuinely don't understand how you can complain that not every product is exactly the product you want. Companies are designing their products to meet the needs of millions of people at the price point they can pay for it. Not for you personally.

We have more consumer choice than we've ever had in modern history, and you're still complaining it's not enough?

Even when we lived in tribes and made everything ourselves, we were extremely limited in our options to the raw materials available locally, and the extremely limited ability to transform things. We've never had more choice than we have today. I cannot fathom how you are still able to complain about it.


I'm just formulating an argument that a free market is not the be all and end all. If you have the money, bespoke is better. And if you don't have the money, making it yourself is better, if you have the skills (which most don't for most purposes).

Issues that do plague the current market in the US, that impact my household enough to notice, are:

1) Product trends. When a market leader decides to go all in on something, a lot of the other companies follow along. We've seen this in internet connectivity, touchscreens in new cars, ingredients in hair care products, among others. This greatly limits the ability of consumers to find alternatives that do not have these trends. In personal care products this is a significant issue when it comes to allergies or other kinds of sensitivities.

But in general just look at the number of people who complain about things such as a lack of discrete buttons for touchpads. Not even Framework offers buttoned touchpads as an option, despite there being a market for them.

It's obvious that it's the vocal, heavy spenders who determine what's on the market. Or it's a race to the bottom in terms of price that determines this. It's not the average consumer.

2) Perfume cross-contamination as an extension of chemical odors in general[0,1]. In recent years many companies with perfumed products such as cleaning agents have increased the perfume or increased its duration with fixatives. This amplified after so many people had their sense of smell damaged during early COVID (lots of complaints about scented candles and the like not having an odor anymore, et cetera).

This wouldn't be a problem from a consumer point of view except that the perfumes transfer to non-perfumed products - basically anything that has plastic or paper absorbs second-hand fragrances pretty well. I live in as close as we can get to a perfume-free household, for medical reasons. It's effectively impossible to buy certain classes of products, or anything at all from certain retailers, that doesn't come perfumed. There are major stores such as Amazon and Target that we rarely buy from as we have to spend a lot of money, time, and effort to desmell products (basically everything purchased from Amazon or Target now has a second-hand perfume).

It's possible to have stores that have both perfumed products and non-perfumed products such that perfume cross-contamination doesn't occur. But this requires the appropriate ventilation, and isn't something that's going to happen unless one of the principals of the store has a sensitivity.

And then there are perfumes picked up in transit from the wholesaler, trucking company, or shipping company.

I hope someday to win Powerball or Mega Millions so that I can start a company dedicated to perfume-free household basics. That are guaranteed to still be perfume-free on delivery.

0 - https://www.drsteinemann.com/faqs.html

1 - https://dynamics.org/Altenberg/CURRENT_AFFAIRS/CHINA_PLASTIC...


On the one hand, I'm annoyed by some of the same things that annoy you.

On the other hand, it's never been easier to buy fragrance-free versions of detergents, cleaning products, personal care products, etc. When I was growing up, they didn't exist at all -- everything was horribly scented. Now "free" or "free and clear" is a whole product category. Literally everything I buy is fragrance-free, and it's wonderful. Little of it's available at my local CVS, but it's all available on Target.com or Amazon. Thanks to the free market.

And when you say "it's the vocal, heavy spenders who determine what's on the market" that's not true at all. It's the race to the bottom in terms of price, which you say, but that is the average consumer. The average consumer wants to spend less. You can spend more to get better products, usually.

Trends really are cost-driven and consumer-driven. If companies make things people really don't like, people stop buying them and the companies change. There are a million examples, from New Coke to the Apple touchbar. You're arguing the free market is failing, but it really does work. You're demanding something better, but when you add government intervention to dictate how products are made, that's generally going to make things worse, because why would the government be better than free competition for consumers' wallets?


>That pretty much sums it up. And the answer is: too bad. Deal with it, like the rest of us.

I am dealing with it, thanks, by fighting against it.

>I have a big problem with companies not sending me a check for a million dollars. But companies don't obey my whims. And I'm not going to complain that the government should do something about it, because that would be silly and immature.

Because as we all know, forcing you to use the abusive copyright laundering slop machine is exactly morally equivalent to not getting arbitrary cheques in the mail.

>In reality, companies try their best to build products that make money, and they compete with each other to do so.

In the Atlas Shrugged cinematic universe, maybe. Now, companies try to extract as much as they can by doing as little as possible. Who was Google competing with for their AI summary, when it's so laughably bad, and the only people who want it are the people whose paycheques depend on it, or people who want engagement on LinkedIn?

>The free market exists because it does a better job than any other system at giving consumers what they want.

Nobody wants this!

>Because if you start down that path, pretty soon they're going to be interfering with the things you like.

I mean, they're doing that, too, and people like you look down your nose and tell me to take that abuse as well. So no, I'm not going to sit idly by and watch these parasites ruin what shreds of humanity I have left.


>> The free market exists because it does a better job than any other system at giving consumers what they want.

> Nobody wants this!

OK, well if you don't believe in the free market then sure.

Good luck seeing how well state ownership manages the economy and if it does a better job at delivering the software features you want, or even of putting food on your table. Because the entire history of the twentieth century says you're not going to like it.


"regulating corporate overreach = state ownership"

Huh.

Your argument boils down to "it is wrong for people to defend themselves from corporations", but the cases you're making are incoherent. It seems like you believe this but don't know why you believe it and you're making up gibberish to defend it. I'd suggest you stop and analyze why you believe this--like what you really think will happen, and why you really think people do not have a right to defend themselves. Personally I can think of no situation where it is moral to say: people should not defend themselves. The concept seems absurd. To me all of human history is evidence that people do, always, have a right to defend themselves, and much evil has been perpetrated by the notion that they should sit down and endure abuses instead.


No, you seem to not be reading what I'm saying. Please don't call it "incoherent" or "gibberish" just because you don't agree. That's completely inappropriate.

We're talking about a UX choice and you're talking about people "defending themselves" as opposed to "enduring abuses" coming from "much evil"?

The justification you're proposing is the same one that censors free speech, because people want to defend themselves from certain ideas, or things they just don't "like".

There's no harm here. Nobody's attacking you. You're not being abused. We're talking about a software feature you think is inconvenient that it takes up space on your screen.

I think companies should have the freedom to design products the way they want, as long as it's not causing harm. Which in this case, it's not. You just don't like it. But that's not harm. If you don't like it, don't use it. Same as if you don't like a book, don't read it.

Rights and freedoms exist for a good reason. They're not absolute because they can conflict with each other, but in this case there's zero conflict. There's no justification for the government to start dictating Google's UX in this case.


Why would this be flagged?


I don't think it's that. It bothers me a lot too, and not because anyone else is judging me or anything. I think it's just that it's depressing... it sucks to be doing bad work, on top of other bad work, and unable to do good work instead. It is incredibly frustrating to care about quality but be surrounded by and constrained to work on crap. Just feels like everything went horribly wrong somewhere and you're powerless to do anything about it and your only option is to suck it up.

The feeling of "I know how to fix this but I'm not allowed to" can eat away at you easily. Then there are things I know how I might fix, but realistically couldn't, because it's a lot to take on and there will almost never be enough time and hands to get it done within the set constraints.

This is due to how incentives are aligned: systems that will power things for, say, a decade at least, but are worked on a quarterly basis.

Why is this alive and well, then? Because it doesn't actually matter as long as the money keeps rolling in. It is also possible that the losses caused, or the efficiency not achieved, never show up in the accounts.


>The feeling of "I know how to fix this but I'm not allowed to" can eat away at you easily.

This really is the worst, and that's why I left my first job. Funnily enough, I just took that job back after a few years but I am now the lead and sole developer on it, I'm having the time of my life doing what I've always wanted to do back then, and seeing the product now flourish.

The bad code didn't really matter, it was the fact that I was not allowed to improve it and forced to build new features on top of crappy code that made me quit in the first place.


That's in addition to the parent comment. They are both true. Caring is for other people, the tickets must flow.

Adoption = number of users

Adoption rate = first derivative

Flattening adoption rate = the second derivative is negative

Starting to flatten = the third derivative is negative

I don't think anyone cares what the third derivative of something is when the first derivative could easily change by a macroscopic amount overnight.


Adoption rate is not the derivative of adoption; rate of change is. Adoption rate is the percentage of uptake (so it's of the same order as adoption itself). It flattening means the first derivative is getting close to 0.

I agree, I think I misunderstood their wording.

In which case it's at least funny, but maybe subtract one from all my derivatives.. Which kills my point also. Dang.


It maps pretty cleanly to the well understood derivatives of a position vector. Position (user count), velocity (first derivative, change in user count over time), acceleration (second derivative, speeding up or flattening of the velocity), and jerk (third derivative, change in acceleration such as the shift from acceleration to deceleration).

It really is a beautiful title.


It is a beautiful title and a beautiful way to think about it—alas, I think gp is right: here, from the charts anyway, the writer seems to mean the count of firms reporting adoption (as a proportion of total survey respondents).

Which paints a grimmer picture—I was surprised that they report a marked decline in adoption amongst firms of 250+ employees. That rate-as-first-derivative apparently turned negative months ago!

Then again, it’s awfully scant on context: does the absolute number of firms tell us much about how (or how productively) they’re using this tech? Maybe that’s for their deluxe investors.


It is not velocity, it is not change. Have you read the graphs? What do you think 12% in Aug and Sep for 250+ employee companies means: that another 12% of companies adopted AI, or that a flat 12% of companies had adopted it in Aug and that didn't change in Sep?

> Have you read the graphs?

Yes. The title specifically is beautiful. The charts aren't nearly as interesting, though probably a bit more than a meta discussion on whether certain time intervals align with one interpretation of the author's intent or another.


The function log(x) also has a derivative that goes closer and closer to 0.

However lim x->inf log(x) is still inf.


Is it your assertion that an "infinite" percentage of businesses will use AI on a long enough time scale?

If you need everything to be math, at least have the courtesy to use the https://en.wikipedia.org/wiki/Logistic_function and not unbounded logarithmic curves when referring to our very finite world.


> Adoption = number of users

> Adoption rate = first derivative

If you mean with respect to time, that's wrong. The denominator in adoption rate that makes it a “rate” is the number of existing businesses, not time. It is adoption scaled to the universe of businesses, not the rate of change of adoption over time.


The adoption rate is the rate of adoption over time.

One could try to make an argument that "adoption rate" should mean change in adoption over time, but the meaning as used in this article is unambiguously not that. It's just percentages, not time derivatives, as clearly shown by the vertical axis labels.

There's another axis on the charts.

Yes, it charts the adoption rate (adopting firms/firms) against time. But it doesn't use the term "adoption rate" to mean the first derivative of "adoption" with respect to time.

When it talks about the adoption rate flattening it is talking about the first derivative of the adoption rate (as defined in the previous paragraph, not as you wish it was defined) with respect to time tending toward 0 (and, consequently, the second derivative being negative.) Not the third derivative with respect to time being negative.


I assure you I don't have any wishes one way or another.

What tickled me into making the comment above had nothing to do with whether adoption rate was used by the author (or is used generally) to mean market penetration or the rate of adoption. It was because a visual aid that is labeled ambiguously enough to support the exact opposite perspective was used as a basis for clearing up any ambiguity.

The purpose of a time series chart is necessarily time-derivative, as the slope or shape of the line is generally the focus (is a value trending upward, downward, varying seasonally, etc). It's fair to include or omit a label on the dependent axis. If omitted, it's also fair to label the chart as the dependent variable and also to let the "... over time" be implicit.

However, when the dependent axis is not explicitly labeled and "over time" is left implicit, it's absolutely hilarious to me to point to it and say it clearly shows that the chart's title is or is not time-derivative.

I know comment sections are generally for heated debates trying to prove right and wrong, but sometimes it's nice to be able to muse for a moment on funny things like this.


Normally, the adoption rate of something is the percentage ratio of adopters to non-adopters.

While there's an extreme amount of hype around AI, it seems there's an equal amount of demand for signs that it's a bubble or it's slowing down.

Well, that’s only because it exhibits all the signs of a bubble. It’s not exactly a grand conspiracy.

You could use that logic to dismiss any analysis of any trajectory ever.

Perfectly excusable post that says absolutely nothing about anything.


Looking at the graphs in the linked article, a more accurate title would probably be "AI adoption has stagnated" - which a lot of people are going to care about.

Corporate AI adoption looks to be hitting a plateau, and adoption in large companies is even shrinking. The only market still showing growth is companies with fewer than 5 employees - and even there it's only linear growth.

Considering our economy is pumping billions into the AI industry, that's pretty bad news. If the industry isn't rapidly growing, why are they building all those data centers? Are they just setting money on fire in a desperate attempt to keep their share price from plummeting?


When all the dust settles, I think it's probably going to be the biggest bubble ever. The unjustified hype is unbelievable.

For some reason I can't even get Claude Code (Running GLM 4.6) to do the simplest of tasks today without feeling like I want to tear my hair out, whereas it used to be pretty good before.

They are all struggling mightily with the economics, and I suspect after each big announcement of a new improved model x.y.z where they demo shiny so called advancement, all the major AI companies heavily throttle their models in use to save a buck.

At this point I'm seriously considering biting the bullet and avoiding all use of AI for coding, except for research and exploring codebases.

First it was Bitcoin, and now this, careening from one hyper-bubble to a worse one.


I don’t understand, how can adoption rate change overnight if its derivative is negative? Trying to draw a parallel to get intuition, if adoption is distance, adoption rate speed, and the derivative of adoption rate is acceleration, then if I was pedal to the floor but then release the pedal and start braking, I’ll not lose the distance gained (adoption) but my acceleration will flatten then get negative and my speed (adoption rate) will ultimately get to 0 right? Seems pretty significant for an industry built on 2030 projections.

One announcement from a company or government can suddenly change the derivative discontinuously.

Derivatives IRL do not follow the rules of calculus that you learn in class because they don't have to be continuous. (You could quibble that if you zoom in enough it can be regarded as continuous, but you don't gain anything from doing that; it really does behave discontinuously.)


Derivatives in actual calculus don’t have to be continuous either. Consider the function defined by f(x) = x^2 sin(1/x) for x != 0; f(0) = 0.

The derivative at 0 exists and is 0, because lim h-> 0 (h^2 sin(1/h))/h = lim h-> 0 (h sin(1/h)), which equals 0 because the sin function is bounded.

When x !=0, the derivative is given by the product and chain rules as 2x sin(1/x) - cos(1/x), which obviously approaches no limit as x-> 0, and so the derivative exists but is discontinuous.
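A quick numeric check of that classic counterexample (the sample points 10^-k are arbitrary):

```python
import math

def fprime(x):
    """Derivative of x^2 sin(1/x) for x != 0, per the product/chain rules above."""
    return 2 * x * math.sin(1 / x) - math.cos(1 / x)

# The 2x sin(1/x) term vanishes as x -> 0, but cos(1/x) keeps
# oscillating between -1 and 1, so f'(x) approaches no limit
# even though f'(0) = 0 exists.
samples = [fprime(10 ** -k) for k in range(3, 8)]
print(samples)  # bounded, but never settling toward a single value
```

The samples stay within roughly [-1, 1] yet never converge, which is exactly the "derivative exists but is discontinuous" behavior.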


Not sure what kind of calculus you took; at least here in the States it's very standard to learn about such functions in class, and yes, there is a difference between discontinuous and the slope being really large (though finite) for a brief period of time.

You rarely study delta and step functions in an introductory calculus class. In this case the first derivative would be a step function, in the sense that over any finite interval it appears to be discontinuous. Since you can only sample a function in reality there's no distinguishing the discontinuous version from its smooth approximation.

(I suppose a rudimentary version of this is taught in intro calc. It's been a long time so I don't really remember.)


I'm sure it depends on who's teaching the class and what curriculum they follow, but we were doing piecewise linear functions well before differentiation so I think I do actually disagree as per your caveat. It's also possible that the courses triaged different material. As a calc for engineers not calc for math majors taker, my experience may have been heavier on deltas and steps.

Not to be all “do you know who X is,” but I did have to chuckle a little when I saw who it is that you’re teaching differentiation to here…

As seems to have sort of happened between March and April of this year, at least from the Ramp chart in TFA. I wonder what that was about.

Person who draws comparison from current situation to derivatives points out that derivatives rules don't apply to current situation.

Awesome stuff.


I don't understand your point. It seemed like the person I was replying to didn't understand how both claims could be simultaneously true so I was elaborating.

Yeah, what a jerk.

You win today.

I can't believe i was down voted for this silly comment on a third derivative pun. Get a life, techie.

Hehehehehheeh

I think it might be answering long-term questions about direct chat use of AIs. Of course as AI goes through its macroscopic changes the amount it gets used for each person will increase, however some will continue to avoid using AI directly, just like I don't fully use GPS navigation but I benefit from it whether I like it or not when others are transporting me or delivering things to me.

Not really. In this context adoption might be the number of users, but adoption rate is the fraction of users that adopted this out of all users.

Hm that's true. Both seem plausible in English. I didn't look closely enough to figure out which they meant.
