Hacker News

There’s a lot of furor in this thread, but people felt the same way when Google Street View came out. Eventually they worked through most of the thorny bits and people use Street View now.

I suspect MSFT is in a similar spot. If they don’t train on more data, they’ll be left behind by Anthropic/OAI. If they do, they’ll annoy a few diehards for a while, they’ll work through the kinks, then everyone will get used to it.




That comparison doesn’t hold at all. This would be equivalent to Google publishing photos of inside your home.

Or, perhaps more directly, training their image-gen models on your private Google Photos.


Conceptually I think it’s a fine comparison.

They’re training (with an opt out) on stuff people feel is an invasion of their privacy to make their service better.


> (with an opt out)

And this is the problem: any time you're adding a new "feature" that invades users' privacy, it needs to be opt-in.


> it needs to be opt-in.

And again: Google did this and everyone eventually got over it. Cars do this with telematics data and everyone is over that too.


That doesn't change the ethics of it.

The only reason that worked is because Google is an unstoppable monopoly juggernaut whose name is literally synonymous with searching the web.


Idk man… people have mostly decided they’re fine with sharing their data.

People use Google Maps even though their location is bought and sold openly. People put their deepest secrets into ChatGPT even after its CEO warned that they use it to sell you ads, keep all of it even after you delete it, and may even be required to produce it during a lawsuit. People still buy new cars even though they track their movements.

What feels to you like a deeply offensive ethical violation is probably fine with the general population. In general, people are willing to trade their data for good-quality services.


What you're describing is "revealed preferences theory", and it requires a truly free set of choices. Applied to companies like Google, it relies on "free market theory" of the kind that only works with a spherical market in a frictionless vacuum.

People "have decided" this because they're not given good choices, they're not well-informed about what their data is going to be used for, and their real buying power has been going down for decades so they don't have the disposable funds to put toward the much more expensive options that would allow them to retain their privacy (even in the rare cases where such options still exist).



