I'd like to know more about OpenAI's claimed usage figures, specifically how many people are really, actively using it. I can't stop hearing about LLMs in places like Hacker News, but in my real life away from the tech sphere, if I asked someone whether they know what OpenAI or ChatGPT is, I'd say maybe half would be aware of it, a small fraction would have ever used it even once, and I don't believe I know a single person who actively uses it to get things done.
I think an overwhelming majority of users prompt it to generate a wedding speech in the style of H.P. Lovecraft or whatever, have a giggle and never use it again. I am so sceptical of the claimed figures, especially the often-repeated claim that "ChatGPT [is] the fastest-growing consumer application in history"[0].
I learnt to code decades ago and later studied computer science and physics. I now work in marketing more than anything (the marketing side of other things like film, tech and politics). To do that I often build websites and apps. ChatGPT is invaluable to me. I use it as a coding assistant. It can write things in languages I don’t know, enabling things that weren’t possible within budget a year ago. It works because I have the technical skill to prompt it properly - building larger programs piece by piece - and because I have the ability to identify errors and fix them. If I were limited to the raw output it would be an exercise in frustration, but as a person with coding chops I can 5x my output by providing detailed feedback and suggested approaches. It’s quite revolutionary. That said, though I work with text, photos and video, I find it useless and uninspiring for creative output. For me it’s purely a technical tool.
I use it in a limited, get-things-done way as a kind of substitute for Stack Overflow, as in: here's my code, here's the error message, what did I do wrong? I've also found it quite handy for legal questions - asking it questions of UK law, it can be surprisingly good, and regular lawyers surprisingly bad in a too-busy-to-return-your-calls kind of way.
But yeah, I don't know many people using it for practical stuff.
"Fastest growing" is a suspect statistic even when it is technically accurate. A company can go from having one customer to three, and honestly exclaim, "We tripled our user base in one day!"
The thing that bugs me about this is how he just dismisses Uber as providing nothing.
I guess the thousands of rides I've taken so far were just... nothing? I'll tell you what. I remember the world before Uber. It fundamentally improved my life.
If I were one of the shrill victim types, I would call this take ableist against blind people who have objectively gotten value out of Uber.
Instead, I'll just call it silly.
> The thing that bugs me about this is how he just dismisses Uber as providing nothing.
I think you are misreading him, but it's mostly his fault. He's not talking about whether consumers got value out of things during a bubble, but what is left over after it "pops". His argument is basically that Uber did not represent a viable business plan initially/currently (true) and will never find a viable plan (arguable), and therefore at some point it will collapse when people get tired of subsidizing the rides in various ways. His claim is that when this happens, what is left afterwards is strictly worse than what existed before Uber.
Now I'm not going to opine on the argument or the timeline, but nowhere in there does he claim that people getting subsidized rides didn't benefit from them at the time, just that it nets out negative over time.
This idea is just so fundamentally weird that I didn't think it could have been what he meant.
Why does surplus only matter if you capture it after the bubble pops? Does the money I saved on Uber not carry forward? Does the cheap trip I took to the doctor's office where they found something important not still benefit me even if the company later shuts down?
Edit: You would need a literal God's-eye perspective of the Universe to know how something like this actually nets out.
I think you're oversimplifying it. Of course it provides something of value to some people - it pays drivers, riders get rides home from the airport, etc. I don't think he's saying that Uber has never provided anything of value at any point, but he's making the case that it's still a scam or a bubble, because the company isn't actually making money, it knows it, and has no plan to become profitable. Therefore, it's a net loss, and essentially scamming investors out of money, plus the other negatives he mentions. I'm sure some people made tons of money when the stock market crashed in 1929, that doesn't mean it wasn't catastrophic - just because some people benefited.
So not only is the author wrong about the point that he obviously did not make, but he is wrong about the point that everyone else understands him to have made too, even though you are still unable to understand it.
I don't know, man, how hard is it to just admit that you were wrong?
On one hand, it destroyed the livelihoods of many people. On the other, it made ordering a taxi so easy that a lot of people stopped driving home from parties drunk.
Uber has also made getting around in a lot of cities much safer. I've gotten into taxis with drunk drivers before (seriously, wtf), not a problem with Uber, or at least not for very long. (When I reported the drunk taxi driver, it turned out the medallion owner wasn't working that night and had lent his cab to someone else; there was no trace of my ride or of the driver...)
I think you can kind of have both. For example in Budapest Uber is banned and I miss it, but Bolt provides the exact same user experience for ordering a regulated city taxi (with fixed pricing as well). I can get one quickly any time of day.
In American cities I often found I had to wait 20 minutes to get an Uber or Lyft, if one even came. Is that because the economics of it fundamentally don't work and no one wants to drive for them any more?
> I think you can kind of have both. For example in Budapest Uber is banned and I miss it, but Bolt provides the exact same user experience for ordering a regulated city taxi (with fixed pricing as well). I can get one quickly any time of day.
The American taxi companies screwed up. tl;dr: by maintaining a low supply of taxi medallions, cities and cab companies preserved the wealth of people who had already acquired one. This meant taxi companies could never fulfill demand during peak hours.
Also city councils had to bow to pressure to not have streets clogged 24/7 with taxis.
> In American cities I often found I had to wait 20 minutes to get an Uber or Lyft, if one even came. Is that because the economics of it fundamentally don't work and no one wants to drive for them any more?
Depends on the time of day, the city, and where you are at. I've not had a problem getting an Uber in Seattle, but the prices in Seattle are extravagant.
> Uber has also made getting around in a lot of cities much safer.
This has been the complete polar opposite of my experience in NYC. The professional taxi drivers are pretty consistent, while something like 1 in 4 Uber/Lyft drivers barely knows how to drive at all. I distinctly remember one driver so terrifyingly incompetent I made him pull over and let me out of the car five miles from where I was going...
By safer I meant that by reducing the friction of ordering a driver, more people just go ahead and have someone else drive them home instead of getting behind the wheel while drunk.
Drunk people are stupid. A giant "get me home now" button is more likely to get utilized versus stumbling around trying to wave down a taxi.
Also the cash thing: knowing that a card is 100% going to be accepted is huge, compared with taxis and "oops, the card reader is broken today".
> Flagging a cab on a busy road in NYC is always faster than waiting for an Uber, at least in my experience.
In Seattle, you can't really flag down a cab.
What you could do in the past (pre-uber) was call their phone number, have the dispatcher on the other end pick up and be annoyed that you are bothering him, and request a cab that may or may not show up.
Also, you had to know where you were, which for a lot of people I know is a serious challenge. A surprisingly large percentage of people have serious problems figuring out what intersection they're at. (To be fair, sometimes you're on a street that, for whatever reason, doesn't have any street signs, or the street sign is not lit up at all and you have to use your phone's flashlight to try to read it... and again, all this while drunk.)
>Uber burned $31 billion in investor cash, mostly from the Saudi royal family, to create the illusion of a viable business. Not only did that fraud end up screwing over the retail investors...
but looking now, Uber's market cap is $128bn, so it doesn't look like the investors did that badly or that there's nothing there.
Why would you buy Uber for $128bn if it can't make money? I assume there is a faith that it will find some way to be profitable in the future. But they have been at it for a while…
Not only did that fraud end up screwing over the retail investors who made the Saudis and the other early investors a pile of money after the company’s IPO – but it also destroyed the legitimate taxi business and convinced cities all over the world to starve their transit systems of investment because Uber seemed so much cheaper. Uber continues to hemorrhage money, resorting to cheap accounting tricks to make it seem like they’re finally turning it around, even as they double the price of rides and halve driver pay (and still lose money on every ride). The market can remain irrational longer than any of us can stay solvent, but when Uber runs out of suckers, it will go the way of other pump-and-dumps like WeWork.
--
Enumerating the manifest harms doesn't mean you didn't enjoy cheap rides for a time; it's not all or nothing.
"Cheap rides" do have a measurable cost to others and society.
In addition to the ills cited here one could add many others, not least of which was the elevation of a grotesque ethos of "breaking things" (i.e. evading any legal obligation or social responsibility until regulators and critics, viewed with contempt, "catch up"), which notably distilled the toxic, douche-bag tech-bro culture. Both still tarnish and retard our industry.
Right. He talks about hypothetical outcomes as if they have happened, which dramatically reduces his credibility.
For instance, in Denver we still have several taxi companies (I think even more than when I moved here in 2011). Those taxi companies actually use apps now, and they never have the "oh, the credit card reader is broken" excuse anymore. Riding a non-Uber taxicab in Denver is an objectively improved experience now!
He gets enough wrong throughout the article that I can’t really draw much from his conclusions. Fun food for thought, though, as I do agree we are in an AI bubble. Regardless of that, for anyone using these models, the transcendent change to daily productivity/workflows has been mind-boggling. Intellectuals like to opine but fail to see enough value, or lack the capacity to build value, and that’s why they remain intellectuals and not practitioners.
I work in AI now and spent 3 years in the crypto space; to say crypto has left nothing of value really is an oversimplistic view. And to say Uber is a failed company (I worked at a ride-share company too), wow. Does he not remember the utter hellscape of getting a taxi in SF 10 years ago?
The executives at these companies (mine too) openly declare a bubble similar to .com in the offing: cash in on it while you can. Maybe you’ll get lucky and get early access to some of these models we’ve got cooking to use for personal projects ;).
Regarding Uber: it is deeply polarizing because it made no money up until the most recent quarter, while racking up $31.5 billion in operating losses since 2014.
But they made more than $300 million last quarter, if I recall.
The “yet to turn a profit” criticism rose to a roar in 2022 as the easy money dried up. See articles like:
Of course, this opinion is now under threat as Uber is turning a profit. But the idea that Uber is a failure that just hasn’t realized its full potential in that capacity yet is still deeply entrenched.
It’s interesting that as much as their CEO talks mobility and self-driving, their fastest-growing profit segment by FAR is delivery, and it’s almost half their revenue.
They may be a bet on self driving for mobility, but they are making most of their profit growth from Amazon-style last mile.
Crypto is really cool; decentralized computation and asset management are very valid use cases. Look at $rndr or $tao. There is still a lot of really good innovation going on that you and the mainstream media will pick up on in about 1.5 years. :)
Math calculation, general knowledge, interviewing, Python scripting, vision for UI building, vision for technical analysis, GPTs for subject-matter expertise, autogen for doc creation, competitive analysis using vision, builder.io (Figma plugin) for HTML-to-Figma designs, builder.io for generative designs for Figma.
Many more ways; the only limit is your imagination. Basically just a massive accelerator and upleveler for me.
>All the big, exciting uses for AI are either low-dollar (helping kids cheat on their homework, generating stock art for bottom-feeding publications) or high-stakes and fault-intolerant (self-driving cars, radiology, hiring, etc.).
Wtf is Doctorow talking about here? Because the voice transcription, text-to-speech, optical character recognition, and language translation applications are already changing the world. I'm open to calling it a "bubble," but the sentence above is absurd. Reminds me of the John McCarthy quote "As soon as it works, no one calls it AI anymore"
I don’t think he’s questioning the technological potential of AI, just the business viability of the current crop of AI companies offering billion-dollar models.
Sure, AI can do language translation - but any major money making operation can’t rely solely on an AI translation (or an AI text-to-speech, or an AI transcription). Big important business needs a human in the loop to verify everything the AI does, so there really isn’t much money being saved.
You don’t need a human in the loop for a $10 app to help you learn Spanish, but that’s not the kind of big business that will keep OpenAI’s lights on.
Yes, and they'll just go back to crypto mining when they flood the market with low prices. This ends up increasing the mining difficulty and increasing energy consumption for the same number of blocks mined.
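That's the difficulty-adjustment mechanism at work: a Bitcoin-style chain retargets so the block rate stays constant no matter how much hardware shows up. A simplified, illustrative sketch of the retarget rule (not the actual consensus code):

    // Simplified Bitcoin-style difficulty retarget: every 2016 blocks,
    // difficulty is rescaled so blocks keep averaging ~10 minutes.
    // More hash power => higher difficulty => same block count, more energy.
    public class RetargetSketch {
        static final double TARGET_SECONDS = 2016 * 600.0; // two weeks of 10-min blocks

        static double retarget(double oldDifficulty, double actualSecondsForWindow) {
            double ratio = TARGET_SECONDS / actualSecondsForWindow;
            // Real implementations clamp the adjustment to 4x per window.
            ratio = Math.max(0.25, Math.min(4.0, ratio));
            return oldDifficulty * ratio;
        }

        public static void main(String[] args) {
            // If cheap GPUs flood back in and the window takes half as long,
            // difficulty doubles, and so does the energy spent per block:
            System.out.println(retarget(1.0, TARGET_SECONDS / 2)); // prints 2.0
        }
    }

So the extra hardware never yields extra blocks; it just raises the bar (and the power bill) for everyone.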
> The largest of these models are incredibly expensive. They’re expensive to make, with billions spent acquiring training data, labelling it, and running it through massive computing arrays to turn it into models.
This is the part that doesn't seem like an unsustainable bubble. Initial training is very expensive, but the cost of running a trained model keeps coming down as we find more and more ways of making it smaller while maintaining a high level of accuracy. It really comes down to whether these advancements can keep going and meet costs before investment dries up.
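For a back-of-the-envelope sense of why the shrinking matters (the 7B parameter count is just illustrative; the bytes-per-parameter figures are the standard precisions):

    // Rough serving-memory footprint of a model at different weight precisions.
    // fp32 = 4 bytes/param, fp16 = 2, int8 = 1, 4-bit quantization = 0.5.
    public class FootprintSketch {
        public static void main(String[] args) {
            long params = 7_000_000_000L;              // illustrative 7B model
            String[] label = {"fp32", "fp16", "int8", "4-bit"};
            double[] bytesPerParam = {4.0, 2.0, 1.0, 0.5};
            for (int i = 0; i < label.length; i++) {
                double gb = params * bytesPerParam[i] / 1e9;
                System.out.printf("%-5s -> ~%.1f GB of weights%n", label[i], gb);
            }
        }
    }

Going from ~28 GB at fp32 to ~3.5 GB at 4-bit is the difference between datacenter hardware and a decent consumer GPU, which is a big chunk of the running-cost story.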
I'm not sure to what degree large-scale LLMs are a bubble. My sense is there are enough useful applications lurking in there that people will be paying ChatGPT API fees for a long time. I also think those fees are high enough that the company is not only covering costs but paying out huge bonuses to every developer.
As for other AI technologies, we already have many proven examples in use today generating value. Things like automated text scraping are everywhere, and applications like TTS and ASR are somewhat commonplace as well.

I think the big difference between AI and previous tech bubbles is that AI has a huge amount of useful research literature demonstrating various kinds of value underlying it. There is a question of whether research value can be translated into business value effectively, but there are very real improvements in AI that underlie the explosion in AI businesses.

I'm hard pressed to find the same level of technical innovation in crypto over the same time period. As I understand it, most cryptocurrencies run minor variations of one of two algorithms. I also think the dotcom bubble in general had very little of the same research underpinnings.

There are a lot of AI problems where we are twice as good as we were five years ago. It's not surprising that some start-ups will exist to check if that's good enough to generate business value. I'd be very shocked if this AI bubble didn't leave a lot of good things behind, because it's very unlike the other bubbles in the fundamentals that underpin it.
> My sense is there are enough useful applications
FWIW tech bubbles always have a large number of useful applications; hell, the dot-com bust was full of companies held up for mockery that (maybe?) turned out just to be 15 years too early.
Bubble doesn't mean "there is nothing useful there" it just means the interest/focus/investment gets unmoored from the fundamentals. I think we are pretty clearly there in the AI space right now, even if you focus on core technologies that are working.
>very shocked if this AI Bubble didn't leave a lot of good things behind
you are basically arguing it's his "Bubble 1.0" rather than "Bubble 2.0", and he is less optimistic, but you aren't arguing fundamentally different things.
I don't think a bubble associated with a technology means that the technology is not incredibly useful. Rather, it's an overly aggressive inflation of value that is nominally attributable to that technology, where much of that value later collapses.
The dot com bubble did not invalidate the importance of the web. But there were all kinds of shenanigans going on around the promise of the web. People were setting up companies thinking/claiming that making a website would make the business wildly valuable, ignoring all the actual problems unrelated to the web that they'd need to solve before they would have viability. They were then successfully going public with nothing but the website built - no actual product in place. Then it collapsed. But the web carried on and some of the companies being built through that period (e.g., Amazon, eBay, Google) were quite real.
I see similar things going on with AI now...
An electric utility company that a friend works at has an edict to lay off all 200 customer support staff and replace them with a chatbot. They don't want to pay OpenAI, so the executives are giving the team (of 3 data scientists) 6 months to build their own in-house LLMs. They don't have the data or the infrastructure expertise to do it. But, AI!
A tech company with 1-2K employees and lots of revenue that another friend works at is preparing to IPO in the next year. Executives are forcing LLMs on teams that can't find reasonable applications for them. It's being done to boost their valuation heading into the IPO because they can then call themselves "AI powered".
At a FAANG that another friend works at, a non-technical VP with lots of available budget commandeered several data science and engineering teams to build something laughably bad with LLMs (LLMs would never work for this application, customers would hate the outcome, and it would create contractual problems for the company). The technical teams were tempted by the budget after a round of layoffs, but they could only stomach it for so long. After most of a year and tens of millions of dollars spent (plus the contractual issues starting to appear), sanity prevailed and the project was cancelled.
None of these examples say that LLMs (or other ML approaches) won't continue to have life/society-altering impact. They point to people trying to exploit/build the hype for short-term financial gain. It's why so many ML practitioners don't like the term AI - many of them feel like AI == ML + hype. ML is just a tool, and all the hype makes people engage in science-fiction-type thinking that obfuscates where the value really is. But it will all work because AI!
These are only anecdotes, but what I'm seeing now feels a lot like what I saw in the dot com bubble.
I think the author is broadly correct: I think basically everyone can agree that we're in an AI investment bubble, and it is important and interesting to try to evaluate what, if anything, will be left when it pops.
But it strikes me as intellectually dishonest to assume in advance that everything that is currently being built will fail entirely.
Doctorow is so relentlessly cynical about tech in general and Silicon Valley in particular that I find it really hard to read his stuff without rolling my eyes.
Take this section:
> In other words, an AI-supported radiologist should spend exactly the same amount of time considering your X-ray, and then see if the AI agrees with their judgment, and, if not, they should take a closer look. AI should make radiology more expensive, in order to make it more accurate.
Implicit in this statement is the assumption that AI evaluation of X-rays will never get any better than it is right now. Why? Why would we assume that when the long arc of technology bends always towards things getting better/faster/cheaper?
He's talking about the value proposition RIGHT NOW, which is higher accuracy but also higher cost (because you'd need the same human in the loop, plus pay for the AI), and how that's different from how AI is being pitched right now, which is: save money and use fewer people. He's not assuming it won't get better; he's saying that's what it is right now.
We've been doing that sort of AI + radiologist since the mid 90s now, and it has made an incremental improvement; I expect it will continue to make incremental improvements. It's certainly cheaper than double reading everything (which has been shown to improve some things, e.g. screening). Anyone expecting a radical change in that model in the short term is unfamiliar with the SOTA in one or both domains.
To put it another way, if we are looking for places to get a human out of the loop, radiology shouldn't be that high up anyone's list - but decision support is super useful sometimes.
A.I. has something special to it because of its singular talent for bullshitting.
Last week I tried really seriously to use JetBrains' A.I. assistant at work.
It could answer questions about my code sometimes, and it could figure out what my database schema was by looking at the stubs JooQ built out of the tables.
I asked it to write a recursive CTE query, which is a little tricky to express in JooQ because you run into one of those "need to access the definition before it is defined" problems that they invented the Y combinator to deal with. Most people just hand-code the table reference in the join, roughly as in the sketch below.
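Something like this, against a hypothetical parent/child NODE table (a sketch of the usual workaround, not my actual schema):

    // The recursive half can't reference the typed CTE object before it's
    // declared, so "tree" is referenced by name() instead.
    import static org.jooq.impl.DSL.*;

    import org.jooq.DSLContext;

    public class RecursiveCteSketch {
        static void fetchTree(DSLContext ctx) {
            var tree = name("tree").fields("id", "parent_id").as(
                select(field(name("node", "id")), field(name("node", "parent_id")))
                    .from(table(name("node")))
                    .where(field(name("node", "parent_id")).isNull())
                .unionAll(
                    select(field(name("node", "id")), field(name("node", "parent_id")))
                        .from(table(name("node")))
                        // hand-coded reference to the CTE being defined:
                        .join(table(name("tree")))
                        .on(field(name("tree", "id"))
                            .eq(field(name("node", "parent_id"))))));

            ctx.withRecursive(tree).selectFrom(tree).fetch();
        }
    }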
The assistant tried to write the query numerous times; it got the basic idea right but could never quite figure out how to close the loop on the recursion. It was exhausting to keep cutting and pasting code from it, hitting alt+ENTER to bring in imports, and finding sometimes I got the wrong imports (java.lang.Record vs jooq.Record), etc.
It would often say things that were totally wrong such as needing JooQ 3.15 to write CTEs because it misread the documentation.
Now my experience is that many ignorant programmers are also belligerent, while the assistant is always polite and actually will change its opinion if you show it evidence to the contrary... But maybe that makes it more dangerous, because the better it gets socially, the better it gets at keeping you in a tarpit.
The sort of deception that goes into making a bubble generally requires a kernel of truth to build and exaggerate on. Bubbles have always served to get investors to pay for training new labor for the industry. That depresses wages and labor costs in the long term.
The fact it isn't also triggering a hiring boom stands out to me as worth looking at.
I don't think anyone can meaningfully predict the AI bubble. Expert predictions go anywhere from galactic colonization bubble (do they collapse at the edge of the galaxy as resources get scarce, or continue on to the rest of the universe?) to whole-world GDP bubbles, to big tech bubbles, startup bubbles, etc. I haven't seen convincing evidence for a probability density that isn't fairly smeared over all these options and wide timescales.
Unlike traditional bubbles we get qualitatively different outcomes at greater scale. A million GPT-2s is nothing like 1 GPT-4. Researchers are continually surprised by new abilities at greater scales.
I think the difference between the other bubbles (Enron, CDOs, crypto, Pets.com, etc.) and the AI bubble is stark.
None of those other bubbles had something truly new at their core. You can say AI image generation is gimmicky, but you can't deny that it's a capability that machines have now and didn't before.
A rush of speculative investment into finding applications for that capability makes sense. Some will turn out to be pets.com and others will become Amazon. But the world has already changed, and all that remains is to figure out who is going to make money from that.
All your examples have fundamentally new stuff at the core though? E-commerce lets you compete with hundreds of brick-and-mortar locations using a single warehouse. Crypto enabled global 24/7 markets. CDOs were derivatives with built-in hedges: the houses. Enron is an accounting fraud case, not a bubble—energy trading is still super profitable.
E-commerce isn’t on that list for the same reason AI isn’t: it had real value. Cryptocurrency had a ton of money poured into it but never found something it did better than the alternatives - PayPal didn’t even feel the need to pretend to treat customers better! – and magical thinking about security and customer service not mattering didn’t help.
Pets.com is harder to understand if you weren’t there: it looks like a not-bad idea, which it was, but the business was set up as if it were a lucrative high-margin industry. There was no way it could succeed losing money on every sale, and they locked those costs in so well there was no real way to lose less money unless you could figure out how to make shipping far cheaper (most of what people buy at pet stores is heavy food). The hype was off the charts, though, and people poured money into the stock anyway. I worked with a major competitor and was on a few calls with their ostensible technical staff, and it was palpable how they had hired anyone who could spell HTML and everyone was killing time waiting for their options to vest rather than building anything real.
AI is like the early web in that regard: there are plenty of outright fraudulent companies and a bunch of doomed ones who just don’t realize it yet, but there is also a real capability that wiser people will use. As with the early web, I’m expecting a lot of startups to get a ton of attention compared to boring old companies actually doing useful things.
There was definitely some level of novelty in each case I mention, but a very different level than AI offers. Without going into massive detail I would point out that CDOs were not the first example of securitisation and crypto was not the first example of a 24/7 market.
Something missing here from other bubbles like dotcom, tulips, 2006 housing or Dogecoin is people speculating on asset prices going up driving the thing. The root cause in all of those was people buying overpriced whatevers hoping to flip them to the next fool. There hasn't been very much of that yet with AI. Maybe it is to come?
Wikipedia's definition:
>An economic bubble (also called a speculative bubble or a financial bubble) is a period when current asset prices greatly exceed their intrinsic valuation, being the valuation that the underlying long-term fundamentals justify.
I'm not sure the AI stuff even qualifies as a bubble under that. If you assume better-than-human AI comes, which is quite likely, it will be able to do all the things and more, so you could hazard a value on the order of world GDP, which is approx $100 trillion.
> Every business plan has the word “AI” in it, even if the business itself has no AI in it
I agree. A product needs at least an LLM under the hood, or Bayesian inference, or machine learning, or neural nets to be deemed 'AI'. Otherwise it's just a bunch of if...else statements.
I’m pretty sure that a lot of people are going to end up going down rabbit holes that waste a lot of their money chasing the golden payout on AI, but I also don’t think it’s our typical bubble.
Microsoft owns 49%, and we’re already seeing its usefulness in the world of enterprise organisations, where Microsoft is frankly becoming irreplaceable on the office side of things. We’re only in the early stages of using Copilot, Bing agent, 365 or whatever it’s called, and it’s rapidly changing our “boring” business processes.

It’s things like having it listen in on Teams meetings and write summaries. Making non-ugly, non-sleep-inducing presentations in a financial company focused on green energy… seriously… It’s replaced all our external spending on corporate designs; it’s not on par with what you’d see in top-of-the-line communication, but it’s good enough to send out to our investors. It’s also good enough to replace any form of icon libraries and stock pictures.

We use it to enhance Excel (not sure how exactly, and I’m pretty sure I don’t want to know, considering what lives in our shared Excel sheets isn’t supposed to live there). It’s helping more digitally inclined PMs and business people automate a lot of their processes in ways no other no-code or RPA tool has ever done before. It’s helping BI write shitty software, which is perfectly fine since we can always re-write the stuff that needs to be efficient and maintainable. And so on.
But it’s also just part of the Microsoft package. If you or I buy into this, or even smaller companies do, you’re going to pay the prices listed by Microsoft. That’s not how enterprise pays, though; we go through these 3rd-party dealers with all sorts of fancy certifications and whatever that they pay to have, and then we buy packaged subscriptions on things we need. Of course, if you compared, we probably pay more, but it doesn’t look like that on the budgets, and that’s what matters when you roll these things out.

So for us AI is similar to how Microsoft Teams became our only communications tool. Not because it was better (at least not at the time) but because it was “free”. AI is also “free” for us, and because it is, it’s going to lock us even harder into Microsoft as a vendor. It’s not like there was really an alternative anyway, but now there really isn’t. And that’s not a bubble; it’ll only strengthen Microsoft unless EU or US regulation does something about it, but how can they when they’re using these tools themselves?
I'm not sure the first sentence is correct: "Of course AI is a bubble." If the product is good enough, the "every single billboard is advertising some kind of AI company" phenomenon may just show enthusiasm.
Maybe it will be answered in the first sentience ;) Sentient AI would be quite a thing.
...I feel compelled to quote this now... more for the parent of your comment
> Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that".
Not the point of the article, as I have no serious opinion on the "AI" issue (might be a bubble). However, I did find one comment interesting, which is that, in Doctorow's opinion, a lot of companies/humans prioritize (Fast + Cheap) >> Quality.
That part, I actually agree with. So much lately feels like people trying to inflate a garbage bag, make it look real pretty, and then run away with the money before you notice. And not really care about the fallout afterward, because they got their island and, in Hollywood parlance, their "** U money". Never understood that in Hollywood (your job is, like, the most desirable one).
Tech bros, I can kind of understand wanting to get away. Especially if you never cared about tech and just wanted quick cash. Probably a huge number of the humans drawn toward the FAANG realm are there because of money and never even slightly cared about tech. They just see $'s.
"Enron's fraud left nothing behind but a string of suspicious deaths."
Man this guy really knows nothing about energy...
Enron happened at the start of the shale and natural gas revolution that reshaped American energy and global politics. Enron's energy traders alone (including John Arnold) helped structure global markets for natural gas. Their physical pipeline assets are owned by Berkshire Hathaway. The fraud of Enron was on top of a real set of businesses, with massive physical assets!
> or a large language model can be fun, and playful communities have sprung up around these subsidized, free-to-use tools
I expect better from Doctorow, who has long advocated for people taking personal control of technology.
The cool thing about this generation of AI is that you can run it at home, or even on your phone. A $3000 MacBook (yes, expensive) can run a useful voice-controlled AI that can do a lot of cool stuff. Image generation models also run locally, and there has been an explosion of tools, running on people's PCs, that allow for entire new creative workflows.
Is the training subsidized? Yes, it is. However, finetuning is crowdsourced, and again we find ourselves in a new field where anyone who wants to can quickly get up to speed and make serious, real contributions.
He later denigrates the "small" models, ignoring that Mistral's latest release is powerful and that people can run models that approach GPT-3.5 (which is incredibly useful!). Powerful voice recognition, the type only dreamed about a decade ago, is now available for everyone. The image generation models that are available for personal use are incredibly powerful.
Research papers are coming out left and right about how to do video generation. Models are getting cheaper to run all the time, and we are exactly one (hard-fought...) manufacturing node improvement away from this stuff being usable on affordable home machines.
> But with AIs’ tendency to “hallucinate” and confabulate, there’s an increasing recognition that these AI judgments require a “human in the loop” to carefully review their judgments.
FFS, people hallucinate things that aren't there too. Every time a human accesses a memory in their brain, the memory gets altered a little bit. Fact checking exists for a reason.
> AI companies are implicitly betting that their customers will buy AI for highly consequential automation
How about 10xing current employees?
OpenAI is doing a terrible job of capturing the value it provides. Heck most people are doing a terrible job of utilizing AI in their day to day, versus what they could be doing.
> All the big, exciting uses for AI are either low-dollar (helping kids cheat on their homework, generating stock art for bottom-feeding publications) or high-stakes and fault-intolerant (self-driving cars, radiology, hiring, etc.).
"Take these 10 PowerPoint slide decks, extract the graphs we used in last quarter's summary report, and bring all the chart styles into alignment with our corporate branding".
"Go through these last 5 years of corporate financial filings and highlight any areas of business that you think the company is trying to cover up and obfuscate".
"Add error checking and retry logic to this shell script, as well as output logging when an error occurs"
"Write a mock for this object"
"The explanation by teacher gave of how to integrate a sine wave didn't make sense, can you provide an alternate explanation?"
I think the term bubble is getting mistreated here. Poor little thing.
If enthusiasm and investment are the measure of a “bubble”, then every new investable technology wave that comes along will be one.
If “bubble” means “getting more resources thrown at it than its potential justifies” then we have an entirely different conversation going on.
So, which is AI? I’d argue the potential for AI is larger than the entire Internet thus far, though whether LLMs specifically are going to pan out is another thing entirely.
What we have is the tendrils of capitalism reaching out to try to fund the next general purpose technology (gpt!) wave, which when you think about it makes a lot of sense. They aren’t bubbles, they are experimentation phases while the mechanisms of our society attempt to determine the breadth and depth and general utility of the new innovations. Call that a bubble if you must, but it’s really something else.
Rather than use the bubble terminology, I prefer to think of things as waves as it’s entirely more accurate.
Here’s a well-written summary by Nicholas Bergman so I don’t have to write so much:
Charles A. Beard warned almost one hundred years ago: “Technology marches in seven-league boots from one ruthless, revolutionary conquest to another, tearing down old factories and industries, flinging up new processes with terrifying rapidity.” This image of technology is bleak at best, but there is a lot of truth tucked away inside this Technological Determinist statement.
Throughout the history of mankind, significant changes in how society, business and culture function have followed key developments in our technology. Across thousands of years, humanity has ridden five distinct technological waves – waves that have all uniquely impacted how we experience the world, and waves that have all eventually taken backstage and given way to a new one.
The five waves that have crashed into our society so far are as follows: “Early Mechanisation” (1770s to the 1830s), “Steam Power and Railways” (1830s to 1880s), “Electrical and Heavy Engineering” (1880s to 1930s), “Fordist Mass Production” (1930s to 1970s) and “Information and Communication” (1970s to 2010s). We are now beginning to see the surge of mankind’s sixth wave: “Miniaturization”.
How do we know? Because all prior waves have been charged by the same forms of technologies: General Purpose Technologies (GPTs) to be exact. GPTs are paramount in connecting the past with the present, and, in the most reductionist sense, a GPT is a single generic technology that develops over time but can still be described as the same technology over its lifetime – the printing press, the wheel and the computer are three quintessential examples of GPTs. Initially, these technologies have immense space for growth and are eventually widely adopted – permeating different sectors and industries over time.
In order for a new technological wave to grow, there must be at least one, but often multiple GPTs that drive it. These technologies themselves must go through a three-step process in order to reach widespread adoption: experimentation, expansion and transformation.
Taking cars as an example: at first, no one really understood the applications of this new technology; it wasn’t viewed as a great method of transportation and was just a novelty way of getting from A to B – this was its experimental phase. Following this comes the expansionist phase, where we begin to see interesting innovative use cases as businesses incorporate the technology into their scopes, and a democratization of the technology due to increased affordability and reliability. Finally, we see the transformational phase, characterized here by the emergence of suburbs, drive-in cinemas and shopping malls. Essentially the technology began to transform how society worked; how society was experienced was now determined by this new technology.
So, calling it “a bubble” cheapens what it is. It’s phase 1: experimentation. And in many ways Cory is following the same template when he talks about whether it will leave anything behind or not, but it’s not a pessimistic process like the term bubble is implying. It’s a deeply healthy and recurring process of humans exploring new technologies in order to determine whether they are a general purpose technology and whether it can be, and how, widely deployed.
This really puts some perspective on the crypto wave as well. If you think of it as a giant beta to see what worked and what didn’t, with eventual deployment of solid working solutions for digital exchange finding foothold, it’s hard to see it as a “failure” as that nomenclature is again a diminishment of what is simply the healthy and natural process of exploring new technology waves for broad purpose and suitability.
Whether you believe that “miniaturization” is the wave is not the point though. We may not know exactly what form the wave will take until it’s quite obvious, or every investor would be right 100 percent of the time. But the pattern is the point.
If you want to read more, this is where the quotes are from:
> AI isn't a bubble [because] AI will deliver just like the internet.
And everyone knows there was no internet bubble. No period of proliferation of available venture capital, rapid growth of valuations followed by all gains being lost.
The internet was both a monumentally successful technology and a bubble. The dot-com crash of 2000 saw most stocks give up all gains from the prior ~5 years, and the surviving stocks took over a decade to recover their previous highs.
> Bubbles may form when enthusiasm and money gets ahead of progress
Nah, it will deliver on some of those promises, but there is definitely zero chance that it will deliver on all of them (maybe not even most of them). That also isn't how bubbles form, though.
The Internet delivered five trillion-dollar companies. It was also a bubble that popped in 2002. For every Google or Amazon there are a dozen Nortels and Webvans.
AI is a bubble in the sense that there isn't a direct path of "put more money into research, get a better result". I'd almost argue we are spending money in the wrong areas.
The internet had clear and well-defined steps for improvement. Although we can increase accuracy or whatever by 1-2%, that doesn't define definitive steps toward AGI.
Humanity is spending money in a lot of different areas, and we're seeing both theoretical and functional progress. The progress in applications is already destabilizing a lot of industries in just the last year, how much faster should we progress?
[0] https://www.reuters.com/technology/chatgpt-sets-record-faste...