It's because I only have a phone to use for coding. That said, I am planning to make it more general; mobile development is just one of the main goals of this language.
It might have more value than you think. If you look up SCEV in LLVM, you'll see it's primarily used for analysis, and it enables other optimizations well beyond the math loops that, on their own, probably don't show up very often.
What's actually way cooler about this is that it's generic. Anybody could pattern match the "sum of a finite integer sequence" but the fact that it's general purpose is really awesome.
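For a concrete picture of what that buys you, here's a minimal C sketch (illustrative only; the exact rewrite depends on the optimizer and pass pipeline) of the kind of induction-variable loop that scalar-evolution analysis lets a compiler collapse into a closed form:

    #include <stdint.h>

    /* Naive loop: sums 0 + 1 + ... + (n - 1). Scalar evolution lets the
     * compiler describe `total` as a function of the trip count and
     * replace the whole loop with a closed-form expression. */
    uint64_t sum_naive(uint64_t n) {
        uint64_t total = 0;
        for (uint64_t i = 0; i < n; i++) {
            total += i;
        }
        return total;
    }

    /* Roughly what the optimized code ends up computing: n * (n - 1) / 2. */
    uint64_t sum_closed_form(uint64_t n) {
        return n * (n - 1) / 2;
    }

As noted above, the flashy closed-form rewrite is just the demo; most of SCEV's value is feeding trip counts and induction-variable facts to other passes.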
"we wouldn’t even need to bother looking at the AI-generated code any more, just like we don’t bother looking at the machine code generated by a compiler."
I liked the article, but I found the random remark about RISC vs CISC to be very similar to what the author is complaining about. The difference between the Apple M series and AMD's Zen series is NOT a RISC vs CISC issue. In fact, many would argue it's fair to say that ARM is not RISC and x86-64 is not CISC. These terms were coined for machines vastly different from what we have today, and the RISC vs CISC debate, like the LISP machine debate, really only lasted about five years. The fact is, we are all using out-of-order superscalar hardware where the decoders are not even close to the main consumers of power and area on these chips. Under the hood they are all doing pretty much the same thing. But because it has a name and a marketable "war", and because people can easily understand the difference between fixed-width and variable-width encodings, they overestimate the significance of the one part they understand compared to the internal engineering choices and process node choices that actually matter but that they don't know about or understand. Unfortunately a lot of people hear the RISC vs CISC bedtime story and think there's no microcode on their M series chips.
You can go read about the real differences on sites like Chips and Cheese, but those aren't pop-sciencey and fun! It's mostly boring engineering details like the size of reorder buffers and the TSMC process node, and it takes more than 5 minutes to learn. You can't just pick it up one day like a children's story with a clear conclusion and moral. Just stop. If I can acquire all of your CPU microarchitecture knowledge from a Linus Tech Tips video, you shouldn't have an opinion on it.
If you look at the finished product and you prefer the M series, that's great. But that doesn't mean you understand why it's different from the Zen series.
There seem to be very real differences between x86 and ARM not only in the designs they make easy, but also in the difficulty of making higher-performance designs.
It's telling that ARM, Apple, and Qualcomm have all shipped designs that are physically smaller, faster, and consume way less power vs AMD and Intel. Even ARM's medium cores have had higher IPC than same-generation x86 big cores since at least the A78. SiFive's latest RISC-V cores are looking to match or exceed x86 IPC too. x86 is quickly becoming dead last, which shouldn't be possible if the ISA doesn't matter at all given AMD and Intel's budgets (AMD, for example, spends more on R&D than ARM's entire gross revenue).
ISA matters.
x86 is quite constrained by its decoders, with Intel's 6- and 8-wide decoders being massive and sucking an unbelievable amount of power, and AMD choosing a hyper-complex 2x4 decoder implementation that bottlenecks serial throughput. Meanwhile, ARM and Apple have been shipping 6-wide and even wider decoders without that kind of complexity.
32-bit ARM is a lot simpler than x86, but ARM still claimed a massive 75% reduction in decoder size when it switched to 64-bit-only in the A715, while increasing throughput. Things like a uop cache aren't free. They take die area and power. Even worse, somebody has to spend a bunch of time designing and verifying these workarounds, which balloons costs and increases time to market.
Another way the ISA matters is the memory model. ARM uses barriers/fences, which are only added where needed. x86 uses a much tighter memory model that implies a lot of ordering the developers and the compiler didn't actually need or want, and that costs performance. The solution (not sure if x86 implementations actually do this) is deep analysis of which implicit barriers can be provably ignored, and speculation on the rest. Once again though, wiring all these proofs into the CPU is complicated and error-prone, which slows things down while bloating circuitry, using extra die area/power, and sucking up time/money that could be spent in more meaningful ways.
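To make the barrier point concrete, here's a minimal C11 sketch (names are made up for illustration): the only place ordering is requested is the release/acquire pair, which is where an ARM compiler emits a barrier or store-release instruction, while x86's total-store-order model effectively pays for that ordering on every ordinary store whether the code asked for it or not.

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Hypothetical producer/consumer handoff, for illustration only. */
    static _Atomic int payload;
    static atomic_bool ready;

    void producer(int value) {
        /* No ordering requested here, so no barrier is needed. */
        atomic_store_explicit(&payload, value, memory_order_relaxed);
        /* Release store: the one place the programmer asked for ordering.
         * On ARM this is where a barrier/store-release gets emitted;
         * on x86 every plain store already carries this ordering. */
        atomic_store_explicit(&ready, true, memory_order_release);
    }

    int consumer(void) {
        /* Acquire load pairs with the release store above. */
        while (!atomic_load_explicit(&ready, memory_order_acquire)) {
            /* spin */
        }
        return atomic_load_explicit(&payload, memory_order_relaxed);
    }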
While the theoretical performance mountain is the same, taking the stairs with ARM or RISC-V is going to be much easier/faster than trying to climb up the cliff faces.
> It's telling that ARM, Apple, and Qualcomm have all shipped designs that are physically smaller, faster, and consume way less power vs AMD and Intel.
These companies target different workloads. ARM, Apple, and Qualcomm are all making processors primarily designed to be run in low power applications like cell phones or laptops, whereas Intel and AMD are designing processors for servers and desktops.
> x86 is quickly becoming dead last, which shouldn't be possible if the ISA doesn't matter at all given AMD and Intel's budgets (AMD, for example, spends more on R&D than ARM's entire gross revenue).
My napkin math is that Apple’s transistor volumes are roughly comparable to the entire PC market combined, and they’re doing most of that on TSMC’s latest node. So at this point, I think it’s actually the ARM ecosystem that has the larger R&D budget.
This hasn't been true for at least half of a decade.
The latest generation of phone chips runs from 4.2GHz all the way up to 4.6GHz, with even a single core using 12-16 watts of power and multi-core loads hitting over 20 watts.
Those cores are designed for desktops and happen to work in phones, but phones still lean mostly on the smaller, energy-efficient M-cores and E-cores because a phone's power and thermal budget can't keep up with the P-cores.
ARM's Neoverse cores are mostly just their normal P-cores with more validation and certification. Nuvia (designers of Qualcomm's cores) was founded because the M-series designers wanted to make a server-specific chip and Apple wasn't interested. Apple themselves have made mind-blowingly huge chips for their Max/Ultra designs.
"x86 cores are worse because they are server-grade" just isn't a valid rebuttal. A phone is much more constrained than a watercooled server in a datacenter. ARM chips are faster and consume less power and use less die area.
> So at this point, I think it’s actually the ARM ecosystem that has the larger R&D budget.
Apple doesn't design ARM's chips and we know ARM's peak revenue and their R&D spending. ARM pumps out several times more cores per year along with every other thing you would need to make a chip (and they announced they are actually making their own server chips). ARM does this with an R&D budget that is a small fraction of AMD's budget to do the same thing.
What is AMD's excuse? Either everybody at AMD and Intel sucks, or all the extra work to make x86 fast (and validating all the weirdness around it) is a ball and chain slowing them down.
Probably because most “emulation” is more like “transpilation” these days - there is a hit up front to translate into native instructions, but they are then cached and repeatedly executed like any other native code.
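A toy sketch of that idea in C (not Rosetta's actual design; translate_block here is a made-up stand-in for a real translator): a direct-mapped cache keyed by guest address, so the expensive translation happens once per block and every later execution is a cheap lookup.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef void (*host_block_fn)(void);

    static void translated_stub(void) { /* stands in for generated native code */ }

    /* Stand-in for the expensive translator: a real one would decode guest
     * (e.g. x86) instructions and emit host (e.g. ARM) code. */
    static host_block_fn translate_block(uint64_t guest_pc) {
        printf("translating block at %#llx (slow, done once)\n",
               (unsigned long long)guest_pc);
        return translated_stub;
    }

    #define CACHE_SLOTS 1024

    static struct {
        uint64_t      guest_pc;
        host_block_fn host_code;
    } cache[CACHE_SLOTS];

    static host_block_fn lookup_or_translate(uint64_t guest_pc) {
        size_t slot = guest_pc % CACHE_SLOTS;
        if (cache[slot].host_code == NULL || cache[slot].guest_pc != guest_pc) {
            cache[slot].guest_pc  = guest_pc;      /* miss: translate and fill */
            cache[slot].host_code = translate_block(guest_pc);
        }
        return cache[slot].host_code;              /* hit: run cached code */
    }

    int main(void) {
        /* The second and third calls for the same block hit the cache. */
        lookup_or_translate(0x1000)();
        lookup_or_translate(0x1000)();
        lookup_or_translate(0x1000)();
        return 0;
    }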
But only on Apple ARM implementations, which have specific hardware built into the chip to do so (emulate the x86 memory model), and that won't be available in the future because they're dropping Rosetta 2.
Apple isn't dropping Rosetta 2. They say quite clearly that it's sticking around indefinitely for older applications and games.
It seems to me that Apple is simply going to require native ARM versions of new software if you want it to be signed and verified by them (which seems pretty reasonable after 5+ years).
> In fact, many would argue it's fair to say that ARM is not RISC
It isn't now... ;-)
It's interesting to look at how close old ARM2/ARM3 code was to 6502 machine code. It's not totally unfair to think of the original ARM chip as a 32-bit 6502 with scads of registers.
But even the ARM1 had some concessions to pragmatics, like push/pop of many registers (with a pretty clever microcoded implementation!), shifted registers/rotated immediates as operands, and auto-incrementing/decrementing address registers for loads/stores.
Stephen Furber has an extended discussion of the trade-offs involved in those decisions in his "VLSI RISC Architecture and Organization" (and he also pretty much admits that having the PC as a GPR is a bad idea: the hardware is noticeably complicated for rather small gains on the software side).
I hate the idea of having one "Software Discipline". Something is lost when people are constrained by OOP or TDD or "Clean Code". Obviously, as with the example of TDD in the article, a lot of these terms mean different things to different people. Hence whenever "Clean Code" is criticized, people who think their code is "clean" take up arms.
I tend to disagree with most of these rulesets, which are meaningless as "engineering". The idea that a function should only be 40 lines long is offensive to me. Personally, I would rather have one 400-line function than ten 40-line functions. I'm a Ziguana. I care about handling edge cases, and I think my programming language should be a domain-specific language for producing optimal assembly.
I would not constrain other people who feel differently. I read an article where some project transitioned from Rust to Zig, even though the people on the team were all Rustaceans. Obviously their Rust people hated this and left! To me, that's not a step in the right direction just because I prefer Zig to Rust! That's a disaster because you're taking away the way your team wants to build software.
I think hardly any of the things we disagree on actually have much to do with "Engineering". We mostly aren't proving our code correct, nor defining all the bounds in which it should work. I personally tend to think in those terms, and certain self-contained pieces of my software have these limits documented, but I'm not using tools that do this automatically yet. I'd love to build such tools in the coming years, though. But there's always the problem that people build tools that fail to recognize common, correct use-cases, and then people have to stop doing correct things that the tool can't understand.
I don't remember much of it now, but as a kid I did a history project on this where I went to the local state university and read all the references in the archives related to celluloid and the other names it went by. A really interesting subject, for sure!
According to Wikipedia, Alexander Parkes created the first celluloid (later called "Parkesine") on purpose in 1855 (as mentioned in the article, Collodion already existed and, when dried, created a celluloid-like film). John Wesley Hyatt apparently acquired Parkes's patent.
Daniel Spill, who worked directly with Parkes in England, founded several companies with him there selling celluloid.
Spill and Hyatt spent the better part of a decade in court against each other over who had invented it first and who had the rights to the patents. The judge ultimately ruled that both of them could continue their businesses, and that Parkes had invented it first.
“Safety” as defined in the context of the Rust language - the absence of Undefined Behavior - is important 100% of the time. Without it, you are not writing programs in the language you think you are using.
That’s a convoluted way to say that UB is much, much worse than you think. A C program with UB is not a C program, but something else.
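A classic C illustration of how that divergence shows up in practice (behavior varies by compiler and flags, so treat this as a sketch): signed overflow is undefined, so the optimizer may assume it never happens and quietly delete the programmer's overflow check.

    #include <limits.h>
    #include <stdio.h>

    /* Intended as an overflow check, but signed overflow is UB, so the
     * compiler may assume x + 1 never wraps and fold this to "return 0". */
    int wraps_when_incremented(int x) {
        return x + 1 < x;
    }

    int main(void) {
        /* At -O2, many compilers print 0 here, even though INT_MAX + 1
         * "obviously" wraps on two's-complement hardware. */
        printf("%d\n", wraps_when_incremented(INT_MAX));
        return 0;
    }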
mmm, no, that's an explicitly incomplete list of informally-described observable behaviors that qualify as undefined -- which is fine and good! but quite far from a specification!
Andrew wrote ONE SENTENCE and that's enough for you to diagnose him? That's enough for you to identify a pattern of behavior? Really?
He's "polishing a pig"? He's hiding "all issues" with Zig in internal mailing lists to "project a polished facade"? ALL?! You got all that from one sentence saying he wishes the author took a different approach?
Alright, fine. Here's my analysis of your character and lifelong patterns of behavior based on your first two sentences:
You just want to tear down everybody who is trying to do good work if they make any mistake at all. You look for any imperfection in others because criticizing people is the only approximation of joy in your existence. You are the guy that leaves Google reviews of local restaurants where you just critique the attractiveness of the women who work there. You see yourself as totally justified and blameless for your anti-social behavior no matter the circumstances, and you actually relish the idea of someone being hurt by you because that's all the impact you could hope for.
If that's not accurate to who you are, well, ¯\_(ツ)_/¯ that's just how it reads to me.
The internal communication channels are where the people who can fix the problem are looking. They aren't looking at random blogs until it's too late to actually have a meaningful and calm discussion with the person raising the issue.
If the author had raised the issue on the actual channels that the Zig project requests people use, and then the Zig team had been dismissive or rude, then, yeah, for sure go writing blog posts. I'm not sure why this is such a hard thing to grasp. If you have an issue, raise it with the people who can fix the issue first. Don't immediately go screaming from the roof-tops. That behaviour is entitled, immature, insincere and unproductive.
You seem to be reading a lot into my replies that isn't there. I'm not sure why you're so offended. At no point has either of us actually addressed the grievances of the blog post's author; that's one of the many reasons a public blog post wasn't the best option. It's like you feel I'm attacking your right to complain about things. I'm not. Complain away, it's healthy, but there are better ways to communicate with the people who can actually address the problem.