It's interesting that Amazon don't appear interested in acquiring Anthropic, which would have seemed like something of a natural fit given that they are already partnered, Anthropic have apparently optimized (or at least adapted) their models for Trainium, and Amazon don't have their own frontier model.
It seems that Amazon are playing this much like Microsoft - seeing themselves as more of a cloud provider, happy to serve anyone's models, and perhaps only putting a moderate effort into building their own models (which they'll be happy to serve to those who want that capability/price point).
I don't see the pure "AI" plays like OpenAI and Anthropic being able to survive as independent companies when they are competing against the likes of Google, and with Microsoft and Amazon happy to serve whatever future model comes along.
LOL of course they don't want to own Anthropic, else they themselves would be responsible for coming up with the $10s of billions in Monopoly money that Anthropic has committed to pay AMZN for compute in the next few years. Better to take an impressive-looking stake and leave some other idiot holding the bag.
Now I’m no big city spreadsheet man but I bet you “company that owes us billions went belly up” looks better on the books than “company we bought that owes us billions went belly up.”
It’s pretty crazy that Amazon’s $8B investment didn’t even get them a board seat. It’s basically a lot of cloud credits though. I bet both Google and Amazon invested in Anthropic at least partially to stress test and harden their own AI / GPU offerings. They now have a good showcase.
Yeah. I bet there's a win-win in the details where it sounds like a lot of investment, so both parties look good, but there wasn't actually much real risk.
Like if I offered you $8 billion in soft serve ice cream so long as you keep bringing birthday parties to my bowling alley. The moment the music stops and the parents want their children back, it’s not like I’m out $8 billion.
Why does everybody keep insisting on this "Enron accounting" stuff? LLM companies need shitloads of compute for a specialized use case. Cloud vendor wants to become a big player in selling compute for that specialized use case, and has compute available.
Cloud provider gives credit to LLM provider in exchange for a part of the company.
Amazon gave away datacenter time share in exchange for stock in a startup. That has nothing to do with electricity futures and private credit revolvers.
This is my thought too. They de-risked any other AI startup from choosing AWS as their platform. If the hype continues, AWS will get their 30% margin on something growing like the rocket emoji; if it doesn't, at least they didn't miss the boat.
Amazon also uses Claude under the hood for their "Rufus" shopping search assistant which is all over amazon.com.
It's kind of funny, you can ask Rufus for stuff like "write a hello world in python for me" and then it will do it and also recommend some python books.
> It's kind of funny, you can ask Rufus for stuff like "write a hello world in python for me" and then it will do it and also recommend some python books.
Interesting, I tried it with the chatbot widget on my city government's page, and it worked as well.
I wonder if someone has already made an openrouter-esque service that can connect claude code to this network of chat widgets. There are enough of them to spread your messages across that you could easily cover an entire Claude Pro subscription.
A childhood internet friend of mine did something similar to that but for sending SMSes for free using the telco websites' built in SMS forms. He even had a website with how much he saved his users, at least until the telcos shut him down.
Well, that was phreaking in 2003-05 (no clue exactly when anymore), so at the same time you could still get free phone calls on pay phones in the library or hotel lobby.
Not sure for Claude Code specifically, but in the general case, yes - GPT4Free and friends.
I think if you run any kind of freely-accessible LLM, it is inevitable that someone is going to try to exploit it for their own profit. It's usually pretty obvious when they find it because your bill explodes.
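To make that concrete, here's a minimal sketch of the kind of check that catches it, assuming you log per-client token counts somewhere; the function, field names, and thresholds are made up for illustration, not any particular provider's API:

    # Hypothetical abuse check for a freely-accessible LLM endpoint:
    # flag any client whose latest daily token usage blows past a
    # multiple of its own trailing average.
    from collections import defaultdict

    def find_suspicious_clients(usage_log, spike_factor=10, min_tokens=50_000):
        """usage_log: iterable of (client_id, day, tokens) tuples."""
        per_client = defaultdict(list)
        for client_id, day, tokens in usage_log:
            per_client[client_id].append((day, tokens))

        suspicious = []
        for client_id, rows in per_client.items():
            rows.sort()  # oldest day first
            *history, (last_day, last_tokens) = rows
            baseline = sum(t for _, t in history) / len(history) if history else 0
            if last_tokens >= min_tokens and last_tokens > spike_factor * max(baseline, 1):
                suspicious.append((client_id, last_day, last_tokens, baseline))
        return suspicious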
Are you sure? While Amazon doesn't own a "true" frontier model they have their own foundation model called Nova.
I assume if Amazon were using Claude's latest models to power its AI tools, such as Alexa+ or Rufus, they would be much better than they currently are. I assume if their consumer facing AI is using Claude at all it would be a Sonnet or Haiku model from 1+ versions back simply due to cost.
> I assume if their consumer facing AI is using Claude at all it would be a Sonnet or Haiku model from 1+ versions back simply due to cost.
I would assume quite the opposite: it costs more to support and run inference on the old models. Why would Anthropic make inference cheaper for others, but not for Amazon?
There may well be some "interesting" financial arrangements in place between the two. After all, Claude models are available in AWS Bedrock, which means Amazon are already physically operating them for other client uses.
It looks less "intelligent" to me, just trained a lot more on agentic (multi-turn tool) use, so it greatly outperforms the others on the benchmarks where that helps while lagging elsewhere. They also released bigger models, where "Pro" is supposedly competitive with 4.5 Sonnet. Lite is priced the same as 2.5 Flash, Pro the same as GPT 5.1. We'll definitely do some comparative testing of Nova 2 Lite vs 2.5 Flash, but I'm not expecting much.
Claude 2.0 was laughably bad. I remember wondering why any investor would be funding them to compete against OpenAI. Today I cancelled my ChatGPT Pro because Claude Max does everything I need it to.
> It's kind of funny, you can ask Rufus for stuff like "write a hello world in python for me" and then it will do it and also recommend some python books.
From a perspective of "how do we monetize AI chatbots", an easy thing about this usage context is that the consumer is already expecting and wanting product recommendations.
(If you saw this behavior with ChatGPT, it wouldn't go down as well, until you were conditioned to expect it, and there were no alternatives.)
There are really impressive marketing/advertisement formulas to be had. I won't share mine, but I'm sure there are many ways to go step by step from not-customers to customers where each step has a known monetary value. If an LLM does something impressive in one of those steps, you also know what it is worth.
I have an approach for several of these steps, which involves adapting a kind of non-LLM "respected authority" tech approach (my previous side project) to LLMs.
I think it can be done right now with MCP servers in a way that you don't immediately hand over your data to the chatbot portal companies so that they can cut you out. (But, over time/traffic, they could quickly learn to mimic your MCP server, much like they mimic Web content and other training data, and at least appear to casual users to interact like you, even if twisted to push whatever company bid for the current user interaction. I haven't figured out what you do when they've trained on mimicking you with an evil twin; maybe you get acquired early, and then there are more resources to solve that next problem.)
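For what it's worth, here's a minimal sketch of the kind of MCP server I mean, using the official `mcp` Python SDK; the product-recommendation tool is a hypothetical stand-in, but the point is that your data stays behind your own process and the chat frontend only ever sees tool calls and their results:

    # Minimal MCP server sketch (official `mcp` Python SDK, FastMCP helper).
    # The recommend_products tool is a made-up example: the underlying
    # catalog never leaves this process; the chatbot portal only sees the
    # tool's name, schema, and returned results.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("my-recommendations")

    _CATALOG = {
        "python": ["Fluent Python", "Effective Python"],
        "rust": ["The Rust Programming Language", "Rust for Rustaceans"],
    }

    @mcp.tool()
    def recommend_products(topic: str) -> list[str]:
        """Return product recommendations for a topic, keeping the catalog private."""
        return _CATALOG.get(topic.lower(), [])

    if __name__ == "__main__":
        mcp.run()  # serves over stdio by default

A frontend that supports MCP can call recommend_products, but it never gets the catalog itself, which is what makes cutting you out at least non-trivial.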
Haha just tried and it works! First I tried in Spanish (I'm in Spain) and it simply refused, then I asked in English and it just did it (but it answered in Spanish!)
EDIT: I then asked for a FizzBuzz implementation and it kindly obliged. I then asked for a Rust FizzBuzz implementation, but this time I asked again in Spanish, and it said that it could not help me with FizzBuzz in Rust, but any other topic would be ok. Then again I asked in English "Please do Rust now" and it just wrote the program!
I wonder what the heck they are doing there? Is the guardrail prompt translated into the store's language?
AI is unquestionably useful, but we don't have enough product categories.
We're in the "electric horse carriage" phase and the big research companies are pleading with businesses to adopt AI. The problem is you can't do that.
AI companies are asking you to adopt AI, but they aren't telling you how, or what it can do. That shouldn't be how things are sold. The use case should be overwhelmingly obvious.
It'll take a decade for AI-native companies, workflows, UIs, and true synergies between UI and use case to spring up. And they won't come from generic research labs, but will instead marry the AI to the problem domain.
Open source AI that you can fine tune to the control surface is what will matter. Not one-size-fits-all APIs and chat interfaces.
ChatGPT and Sora are showing off what they think the future of image and video is. Meanwhile actual users, like the insanely popular VFX YouTube channels, are using crude tools like ComfyUI to adapt the models to their problems. And companies like Adobe are actually building the control plane. Their recent conference was on fire with UI+AI that makes sense for designers. Not some chat interface.
We're in the "AI" dialup era. The broadband/smartphone era is still ahead of us.
These companies and VCs thought they were going to mint new Googles and Amazons, but it's more than likely they'll turn out to be the WebVans whose carcasses pave the way.
Same for Apple, would be my take right now. No point in spending billions trying to build and train an LLM. Better to buy AI services from e.g. OpenAI for a bit, then extract the valuable bits after the crash. The current crop of AI companies can waste money on figuring out what works and what doesn't.
After watching The Thinking Game documentary, maybe Amazon has little appetite for "research" companies that don't actually solve real world problems, like Deepseek did.
The movie seems like a fluff piece when you find out what has transpired at DeepMind subsequently, from slowing down publishing material to "selling out to product", which the founder was hell-bent against in the documentary.
> It seems that Amazon are playing this much like Microsoft - seeing themselves as more of a cloud provider, happy to serve anyone's models, and perhaps only putting a moderate effort into building their own models (which they'll be happy to serve to those who want that capability/price point).
Or, as a slight variation of that, they think the underlying technology will always be quickly commoditized and that no one will ever be able to maintain much of a moat.
I think anyone sane reached the same conclusion a long time ago.
It's a black box with input/output in text; that's not a very good moat.
Especially given that Deepseek-type events can happen, because you can just train off of your competitors' outputs.
I've tried out Gemini 2.5/3 and it generally seems to suck for some reason, with problems lying/hallucinating and following instructions. But ever since Bard first came out, I thought Google would have the best chance of winning, since they have their own TPUs, YouTube (insane video/visual/audio data), Search (indexed pages), and their Cloud/DCs, and they can stick it into Android/Search/Workspace.
Meanwhile OpenAI has no existing business; they only have API/subs as revenue, and they're utilizing Nvidia/AMD.
I really wonder how things will look once this gold rush stabilizes
Bezos is playing it smart: sell shovels to all of the gold diggers. If he partners with one of the gold diggers he won't be able to sell shovels to the remainder.
That's a risk-return issue. Bezos plays it safe within Amazon, and quite unsafe outside of it. By the time he acquires something for Amazon, it is because it has proven long-term revenue generation, the shake-out period is done, and consolidation is about to start. With AI the shake-out is still to come. So he can afford to wait to eventually acquire the winner, or to copy it if he can't buy it. Having very deep pockets enables different business strategies.
They're likely just waiting out the eventual crash so they can buy at the resulting fire sale. Microsoft has done a very good job of investing in the space enough to see a potentially lucrative payout while managing the risk enough to not be sunk if it doesn't pan out.
It's safe to assume that a company like Anthropic has been getting (and rejecting) a steady stream of acquisition offers, including from the likes of Amazon, from the moment they became prominent in the AI space.
I think Claude Code is the moat (though I definitely recognize it's a pretty shallow moat). I don't want to switch to Codex or whatever the Gemini CLI is, I like Claude Code and I've gotten used to how it works.
Again, I know that's a shallow moat - agents just aren't that complex from a pure code perspective, and there are already tools that you can use to proxy Claude Code's requests out to different models. But at least in my own experience there is a definite stickiness to Claude that I probably won't bother to overcome if your model is 1.1x better. I pay for Google Business or whatever it's called primarily to maintain my vanity email and I get some level of Gemini usage for free, and I barely touch it, even though I'm hearing good things about it.
(If anything I'm convincing myself to give Gemini a closer look, but I don't think that undermines my overarching (though slightly soft) point).
I went from:
1. using Claude Code exclusively (back when it really was on another level from the competition) to
2. switching back and forth with CC using the Z.ai GLM 4.6 backend (very close to a drop-in replacement these days) due to CC massively cutting down the quota on the Claude Pro plan to
3. now primarily using OpenCode with the Claude Code backend, or Sonnet 4.5 Github Copilot backend, or Z.ai GLM 4.6 backend (in that order of priority)
OpenCode is so much faster than CC even when using Claude Sonnet as the model (at least on the cheap Claude Pro plan, can't speak for Max). But it can't be entirely due to the Claude plan rate limiting because it's way faster than CC even when using Claude Code itself as the backend in OC.
I became so ridiculously sick of waiting around for CC just to like move a text field or something, it was like watching paint dry. OpenCode isn't perfect but very close these days and as previously stated, crazy fast in comparison to CC.
Now that I'm no longer afraid of losing the unique value proposition of CC my brand loyalty to Anthropic is incredibly tenuous, if they cut rate limits again or hurt my experience in the slightest way again it will be an insta-cancel.
So the market situation is much different than the early days of CC as a cutting edge novel tool, and relying on that first mover status forever is increasingly untenable in my opinion. The competition has had a long time to catch up and both the proprietary options like Codex and open source model-agnostic FOSS tools are in a very strong position now (except Gemini CLI is still frustrating to use as much as I wish it wasn't, hopefully Google will fix the weird looping and other bugs ... eventually, because I really do like Gemini 3 and pay for it already via AI Pro plan).
Google Code assist is pretty good. I had it create a pretty comprehensive inventory tracking app within the quota that you get with the $25 google plan.
Google had PageRank, which gave them much better quality results (and they got users to stick with them by offering lots of free services (like Gmail) that were better quality than existing paid services). The difference was night and day compared to the best other search engines at the time (WebCrawler was my go-to, then sometimes AltaVista). The quality difference between "foundation" models is nil. Even the huge models they run in datacenters are hardly better than local models you can run on a machine with 64GB+ of RAM (though faster, of course). As Google grew it got better and better at giving you good results and fighting spam, while other search engines drowned in spam and were completely ruined by SEO.
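For anyone who never looked at it, the core of PageRank is small enough to sketch as plain power iteration over the link graph; this is just the textbook version with the usual damping factor, not anything Google-specific:

    # Textbook power-iteration PageRank sketch: a page's rank comes from
    # the ranks of the pages linking to it, split across their out-links.
    def pagerank(links, damping=0.85, iterations=50):
        """links: dict mapping page -> list of pages it links to."""
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
            for page, outgoing in links.items():
                targets = outgoing or pages  # dangling page: spread evenly
                for target in targets:
                    new_rank[target] += damping * rank[page] / len(targets)
            rank = new_rank
        return rank

    print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))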
PageRank wasn't that much better. It was better and the word spread. Google also had a very clean UI at a time where websites like Excite and Yahoo had super bloated pages.
That was the differentiation. What makes you think AI companies can't find moats similar to Google's? The right UX, the right model and a winner can race past everyone.
I remember the pre-Google days when AltaVista was the best search engine, just doing keyword matching, and of course you would therefore have to wade through pages of results to hopefully find something of interest.
Google was like night & day. PageRank meant that typically the most useful results would be on the first page.
PageRank. Everything before PageRank was more like the Yellow Pages than a search engine as we know it today. Google also had a patent on it, so it's not like other people could simply copy it.
Google was also way more minimal (and therefore faster on slow connections) and it raised enough money to operate without ads for years (while its competitors were filled with them).
Not really comparable to today, when you have 3-4 products which are pretty much identical, all operating under a huge loss.
Is Claude Code even running at a marginal profit? (who knows)
Is the marginal profit large enough to pay for continued R&D to stay competitive? (no)
Does Claude Code have a sustainable advantage over what Amazon, Microsoft, and Google can do in this space, using their incumbency advantage, actual profits, and their own infrastructure?
Assuming by "they" you mean current shareholders (who include Google, Amazon, and VCs), if they are selling at least in part, why would at least some of them not be willing to sell their entire stakes?
> They could make more money keeping control of the company and have control.
> It's interesting that Amazon don't appear interested in acquiring Anthropic
1. Why buy the cow when you can get the milk for free?
2. Amazon doesn't appear interested in acquiring Anthropic _at its current valuation_. I would be surprised if it's not available for acquisition at 1/10th its current price in the next 3-5 years
AI isn't going anywhere, but "prop model + inference" is far from a proven business model.
I get the feeling Amazon wants to be the shovel seller for the AI rush rather than be a frontier model lab.
There is no moat in being a frontier model developer. A week, a month, or a year later there will be an open source alternative which is about 95% as good for most tasks people care about.
I don't know how much they are spending, to be fair.
I am basing my observation on the noises they are making.
They did put out a model called Nova but they are not drumming it up at all.
The model page makes no claims of benchmarks or performance.
There are no signs of them poaching talent.
Their CEO has not been in the press singing AI's praises, unlike every other big tech CEO.
Maybe they have a skunk-works team on it, but something tells me they are waiting for the dust to settle.
Well, I have had chats with a few engineers working in Amazon retail and there is talk about adding agents for Ops and similar internal tasks. So there are a bunch of AI-related things happening, and like others have said, they rent shovels for the rush, so they will bank all the money without having to compete with the money bonfires that others are burning.
Why exit now and become a stuffed AI-driven animal when you can keep running this ship yourself, doing your dream job and getting all the woos and panties?
Sort of. You can do what Zuck did; give your shares more votes, so you stay in control. (He owns 13% of the shares, but more than 50% of the voting power.) That's less doable with an acquisition.
In one case your ownership is diluted by maybe 10%, and you keep full decision making power and everything else. In the other it is diluted by 100% and you are now an employee. They are very different outcomes.
If you can't find a use for AI, you probably haven't given it much of a try. Just one random example: I needed to find experts in a technical field, and gave the problem to Claude Code. Claude put together a comprehensive research plan, dug deep into industry working groups, and produced a prioritized list of experts along with their bios, rationale and LinkedIn profiles.
Completely possible for me to do, but it saved me at least a couple hours of Googling.
Why would you take on that burn rate when you can invest, get the investment back over time in cloud spend, and maybe make off like bandits when they IPO?
It is spending a lot of money to do the same thing (selling the shovels), and gaining maybe a bit bigger cut if the bubble doesn't burst too violently.
Anthropic is a $1T company in the making (by 2030), already raised their last round at ~$200B valuation. Do you really think Amazon can acquire them? They already invested a lot of money in them and probably own at least 20% of Anthropic, which was the smartest thing Jassy did in a while. Not to mention, if Adobe wasn't allowed to buy Figma, do you think Amazon will be allowed to buy Anthropic? No way it's going to be approved.
> I don't see the pure "AI" plays like OpenAI and Anthropic being able to survive as independent companies when they are competing against the likes of Google, and with Microsoft and Amazon happy to serve whatever future model comes along.
One thing you're right about - Anthropic isn't surviving - it's thriving. Probably the fastest growing revenue in history.
Most companies would much rather have the problem of becoming profitable given amazing revenue growth than the problem of being profitable but not growing (e.g. Dropbox).