I'm one of the ones who sees ChatGPT's mistakes. It makes lots of them. Some are very dumb.
But the idea that it's a fraud or a scam or whatever label the naysayers choose to put on it is just wrong. It makes mistakes, but it also provides a lot of value when used correctly.
Consider the many use cases where you don't need to rely on its "knowledge" at all - just this morning, I used it to write some marketing emails. I give it a paragraph about the company, plus some specifics about the particular email, and it knocks out something that's entirely usable in seconds.
There are plenty of tasks like this, where it's creating something based on your input and is immediately and clearly useful. Just because there are use cases where it fails doesn't make it a scam.
I would wager that the recipients of marketing emails do not consider them valuable. The prospect that in the future, these will not even have human creativity flowing into them just adds insult to injury.
I've got a few thousand people who have signed up for these emails and haven't yet unsubscribed. Whenever I send out a marketing email to that list, a number of people immediately make purchases.
Why would that be the case if people didn't consider them valuable?