DDR5 is ~8GT/s, GDDR6 is ~16GT/s, GDDR7 is ~32GT/s. It's faster but the difference isn't crazy and if the premise was to have a lot of slots then you could also have a lot of channels. 16 channels of DDR5-8200 would have slightly more memory bandwidth than RTX 4090.
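Rough numbers behind that claim (back-of-the-envelope only; the RTX 4090 figure assumes its published 384-bit bus of GDDR6X at 21 GT/s):

```python
# Peak theoretical bandwidth = transfer rate (GT/s) * bus width (bytes per transfer).
def bandwidth_gb_s(rate_gt_s: float, bus_width_bits: int) -> float:
    return rate_gt_s * bus_width_bits / 8

ddr5_16ch = 16 * bandwidth_gb_s(8.2, 64)   # 16 channels of DDR5-8200 -> ~1050 GB/s
rtx_4090  = bandwidth_gb_s(21.0, 384)      # 384-bit GDDR6X at 21 GT/s -> ~1008 GB/s

print(f"16x DDR5-8200: {ddr5_16ch:.0f} GB/s")
print(f"RTX 4090:      {rtx_4090:.0f} GB/s")
```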
Yeah, so DDR5 is 8 GT/s and GDDR7 is 32 GT/s.
Bus width is 64 bits vs 384. Combine the 6x wider bus with the 4x data rate and the VRAM is already 4*6 = 24 times faster.
You can add more channels, sure, but each extra channel makes it less and less likely that you'll boot at full speed. Look at modern AM5 struggling to boot above DDR5-6000 with more than two sticks.
So you'd have to get an insane six channels just to match the bus width, at which point your only choice to stay stable would be to lower the speed so much that you're back to the same order-of-magnitude difference, really.
Now we could instead solder that RAM, move it closer to the GPU and cross-link channels to reduce noise. We could also increase the speed and oh, we just invented soldered-on GDDR…
The bus width is the number of channels. They don't call them channels when they're soldered, but 384 bits is already the equivalent of six 64-bit channels. The premise is that you would have more. Dual-socket Epyc systems already have 24 channels (12 channels per socket). It costs money, but so does 256GB of GDDR.
> Look at modern AM5 struggling to boot at over 6000 with more than two sticks.
The relevant number for this is the number of sticks per channel. With 16 channels and 64GB sticks you could have 1TB of RAM with only one stick per channel. Use CAMM2 instead of DIMMs and you get the same speed and capacity from 8 slots.
But it would still be faster than splitting the model up across a cluster, right? I've also wondered why they haven't just shipped GPUs socketed like CPUs.
Man, I'd love to have a GPU socket. But it'd be pretty hard to get a standard going that everyone would support. Look at CPU sockets: we barely get crossover for like two generations.
But boy, a standard GPU socket so you could easily BYO cooler would be nice.
The problem isn't the sockets. It costs a lot to spec and build new sockets; we wouldn't swap them for no reason.
The problem is that the signals and features that the motherboard and CPU expect are different between generations. We use different sockets on different generations to prevent you plugging in incompatible CPUs.
We used to have cross-generational sockets in the 386 era because the hardware supported it. Motherboards weren't changing so you could just upgrade the CPU. But then the CPUs needed different voltages than before for performance. So we needed a new socket to not blow up your CPU with the wrong voltage.
That's where we are today. Each generation of CPU wants different voltages, power, signals, a specific chipset, etc. Within the same +-1 generation you can swap CPUs because they're electrically compatible.
To have universal CPU sockets, we'd need a universal electrical interface standard, which is too much of a moving target.
AMD would probably love to never have to tool up a new CPU socket. They don't make money on the motherboard you have to buy. But the old motherboards just can't support new CPUs. Thus, new socket.
Would that be worth anything, though? What about the overhead of clock cycles needed for loading from and storing to RAM? Might not amount to a net benefit for performance, and it could also potentially complicate heat management I bet.
Gaming on Linux in 2015 was a giant PITA, and most recent games either didn't work properly or didn't work at all through Wine.
In 2025 I just buy games on Steam blindly because I know they'll work, except for a handful of multiplayer titles that use unsupported kernel-level anticheat.
A receiver has always been a pretty standard part of even really simple AV setups - you can get half-decent ones pretty cheap, and then you just run either the HDMI ARC port or the optical/coax digital audio out from your TV to the receiver so that everything you plug into your TV has its audio go out to the speakers.
I doubt many real-world use cases would run out of incrementing 64-bit IDs - collisions if they were random, sure, but i64 max is 9,223,372,036,854,775,807 - if each row took only 1 bit of space, that would be slightly more than an exabyte of data.
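Just to sanity-check that arithmetic (a quick back-of-the-envelope, assuming the deliberately silly 1 bit per row):

```python
# How much data would it take to exhaust signed 64-bit ids at 1 bit per row?
i64_max = 2**63 - 1                     # 9,223,372,036,854,775,807 possible ids
bytes_needed = i64_max / 8              # 1 bit of storage per row
print(f"{bytes_needed / 1e18:.2f} EB")  # ~1.15 exabytes
```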
Being able to create something and know its ID before waiting for an HTTP round trip simplifies enough code that I think UUIDs are worth it for me. I hadn't really considered the potential perf optimization from orderable IDs before, though - I will consider UUID v7 in the future.
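For anyone wondering what "orderable" means in practice: UUIDv7 front-loads a 48-bit millisecond timestamp, so IDs generated later sort later and cluster nicely in b-tree indexes. A minimal sketch of the RFC 9562 layout, assuming you don't already have a library for it (`generate_uuid7` is just an illustrative name; use a maintained implementation in production):

```python
import os
import time
import uuid

def generate_uuid7() -> uuid.UUID:
    """Illustrative UUIDv7: 48-bit unix-ms timestamp up front, random bits after.

    Layout per RFC 9562: unix_ts_ms (48) | version=7 (4) | rand_a (12) | variant=0b10 (2) | rand_b (62).
    """
    unix_ts_ms = int(time.time() * 1000) & ((1 << 48) - 1)
    rand_a = int.from_bytes(os.urandom(2), "big") & 0x0FFF            # 12 random bits
    rand_b = int.from_bytes(os.urandom(8), "big") & ((1 << 62) - 1)   # 62 random bits

    value = (unix_ts_ms << 80) | (0x7 << 76) | (rand_a << 64) | (0b10 << 62) | rand_b
    return uuid.UUID(int=value)

# IDs created in later milliseconds compare greater, so inserts land near the
# end of an index instead of at random pages the way UUIDv4 keys do.
print(sorted(generate_uuid7() for _ in range(3)))
```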
I think the writer's classification of this as a Chinese vs English distinction is a bit presumptuous - maybe it holds for the portion of the USA the OP is familiar with, but I'll jump on the bandwagon to say this kind of negated-negative language is very, very common in New Zealand.
Not bad, not wrong, no problem etc etc are all very common, and we have the following too:
I think besides the mechanics, the other thing that makes the Grandia / Grandia 2 battle system so fun is how snappy all the animations and interactions are. You never really feel like you're waiting for things to happen even though it is semi turn-based.
I've used LLMs to help me write large sets of test cases, but it requires a lot of iteration, and the mistakes they make are both very common and insidious.
Stuff like reimplementing large amounts of the code inside the tests because testing the actual code is "too hard", spending inordinate amounts of time covering every single edge case of some tiny bit of input processing unrelated to the main business logic, mocking out the code under test, changing failing tests to match obviously incorrect behavior... basically all the mistakes you'd expect from totally green devs who don't understand the purpose of tests.
It saves a shitload of time setting up all the scaffolding and whatnot, but unless they very carefully reviewed and either manually edited or iterated a lot with the LLM, I'd be almost certain the tests were garbage, given my experience.
(This is with fairly current models too, btw - mostly Sonnet 4 and 4.5. Also, in fairness to the LLM, a shocking proportion of tests written by real people that I've read are unhelpful garbage too; I can't imagine the training data is of great quality.)
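To make the "mocking out the code under test" failure mode concrete, here's a hedged sketch of the shape it usually takes (`calculate_discount` is a made-up stand-in for real business logic, not anything from the thread):

```python
import unittest
from unittest.mock import patch

# Hypothetical function under test.
def calculate_discount(order_total: float) -> float:
    return order_total * 0.1 if order_total >= 100 else 0.0

class BadDiscountTest(unittest.TestCase):
    # Anti-pattern: the function under test is patched away, so the assertion
    # only checks the mock. This stays green no matter how broken the real
    # calculate_discount is.
    @patch(f"{__name__}.calculate_discount", return_value=10.0)
    def test_discount(self, _mock):
        self.assertEqual(calculate_discount(100.0), 10.0)

class GoodDiscountTest(unittest.TestCase):
    # Exercise the real implementation, including the threshold edge case.
    def test_discount_applied_at_threshold(self):
        self.assertEqual(calculate_discount(100.0), 10.0)

    def test_no_discount_below_threshold(self):
        self.assertEqual(calculate_discount(99.99), 0.0)

if __name__ == "__main__":
    unittest.main()
```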
This is true of basically everything people complain about having gotten worse over time.
Whiteware and kitchen appliances are the same - you can absolutely buy a fridge, or a stand mixer, or whatever, that will work well and last forever. It's just that the value proposition, compared to cheap crap that will still likely last a few years at 1/5th of the price, is not great unless you're going to use it really heavily.
Last time I had to buy a refrigerator it seemed like the choice was between one that cost around $1k and one that cost $10k. I really couldn't find a mid-quality option. There wasn't a price point at around 2x the cheap ones for better quality.
Those price points exist, it's just that they're usually the same cheap fridges crammed full of pointless features that actually make the whole thing less reliable because it's more stuff to break.
What I wanted was a refrigerator with a reliable compressor. That's where it really seemed like the only options are cheap and astronomical.
This is actually super helpful! I ended up with a less expensive GE model because it seemed like they were the only brand with positive reliability reports besides the super expensive premium brands.
The compressor is replaceable. Also, how do you judge the reliability of a compressor before buying it?
Instead, try to find a refrigerator with access to the cooling pipes. Last fridge I threw away had a leak that couldn't be patched because the pipes were all embedded in the plastic walls of the fridge.
Yeah, I think the caveat is that the compressor and maybe the seals, lights and a few other bits are the ONLY repairable parts of most fridges. The whole structure of a modern fridge is foam panels and sheet-metal folds that aren't ever meant to come apart after being assembled.
Got a nice Samsung fridge for €500; it's been running without issues for 10 years already. There's no sense in buying an expensive fridge unless you need a professional one.