They're different problem spaces: TrustZone is a trusted execution environment, while TPM is an API for key storage and attestation that revolves around system state (PCRs).
Essentially, TPM is a standardized API for implementing a few primitives over the state of PCRs. Fundamentally, TPM is just the ability to say "encrypt and store this blob in a way that it can only be recovered if all of these values were sent in the right order," or "sign this challenge with an attestation that can only be provided if these values match." You can use a TEE to implement a TPM and on most modern x86 systems (fTPM) this is how it is done anyway.
You don't really need an fTPM either, in some sense; one could use TEE primitives to write a trusted application that performs similar tasks. However, TPM provides the API by which most early-boot systems (UEFI) provide their measurements, so it's the easiest way to do system attestation on commodity hardware.
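To make the "encrypt this blob so it can only be recovered if these values were sent in the right order" idea concrete, here's a minimal Python sketch of the PCR extend-and-seal concept. All names here are my own, and the dict is a stand-in: a real TPM keeps the secret and the policy check inside the chip.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # PCR extend: new = SHA-256(old_pcr || SHA-256(event)).
    # Order matters: the same events in a different order give a different value.
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def measure_boot(events):
    pcr = b"\x00" * 32  # PCRs start zeroed at reset
    for e in events:
        pcr = extend(pcr, e)
    return pcr

# Toy "sealing": a real TPM stores the secret internally and enforces
# the PCR policy in hardware; this dict just models the idea.
sealed = {
    "policy": measure_boot([b"firmware", b"bootloader", b"kernel"]),
    "secret": b"disk-encryption-key",
}

def unseal(sealed_blob, events):
    if measure_boot(events) != sealed_blob["policy"]:
        raise PermissionError("PCR mismatch: boot chain changed")
    return sealed_blob["secret"]
```

Sending the same measurements in a different order produces a different PCR value, so `unseal` fails — which is the whole trick behind sealing disk keys to boot state.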
Sure, why not? You have reference implementations for both TrustZone/OP-TEE (from Microsoft!) and in the Linux kernel. No need to code anything; everything is already there, tested and ready to work.
As I understand it, you cannot actually deploy an fTPM (in embedded and other scenarios) unless you run your own full PKI and have your CA signed off by Microsoft or some other TPM consortium member. So sure, the code exists, but it's also just a dummy implementation, and for any embedded product that is not super cost-conscious I will forever recommend just buying the $1 chip, connecting it via SPI and living happily ever after. Check the box; in embedded, most non-technical people can't even begin to understand what FDE means anyway.
If you don't need the TPM checkbox, most vendors have simple signing fuses that are a lot easier than going the fTPM route.
Well, you have much more control over the lower-level boot process on ARM chips, and each SoC manufacturer has its own implementation of Trusted Boot, which relies on cryptography and secrets inside the SoC rather than on a TPM as in the x86/UEFI boot process.
In the context of trusted boot — not much. If your specific application doesn't require TPM 2.0's advanced features, like separate NVRAM and different locality levels, then it's not worth using a dedicated chip.
However, if you want something like PIN brute-force protection with a cooldown on a separate chip, a dTPM will do that. This is more or less exactly why Apple, Google and other major players have a separate chip for the most sensitive stuff: to prevent security bypasses when an attacker has gained code execution (or some kind of reset) on the application processor.
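As a toy model of the cooldown logic such a chip enforces (entirely my own sketch — in a real secure element or dTPM dictionary-attack lockout, the failure counter and secret live in tamper-resistant storage, so code on the application processor can't reset them):

```python
import time

class PinGuard:
    # Models secure-element PIN throttling: the failure counter is
    # "inside the chip", so the host OS can't zero it out.
    def __init__(self, pin: str, base_delay: float = 1.0):
        self._pin = pin
        self._failures = 0
        self._locked_until = 0.0
        self._base = base_delay

    def try_pin(self, guess: str) -> bool:
        now = time.monotonic()
        if now < self._locked_until:
            raise RuntimeError("cooldown active, try again later")
        if guess == self._pin:
            self._failures = 0
            return True
        self._failures += 1
        # Exponential backoff: 1s, 2s, 4s, ... doubling per failure,
        # which makes brute-forcing a short PIN impractical.
        self._locked_until = now + self._base * 2 ** (self._failures - 1)
        return False
```

The point isn't the backoff math — any OS can do that — it's that the state enforcing it survives anything an attacker does to the application processor.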
> their own implementation of Trusted Boot which relies on the cryptography and secrets inside the SoC rather than TPM as in x86/UEFI boot process.
TPM and x86 trusted boot / root of trust are completely separate things, linked _only_ by the provision of measurements from the (_presumed_!) good firmware to the TPM.
x86 trusted boot relies on the same SoC manufacturer type stuff as in ARM land, starting with a fused public key hash; on AMD it's driven by the PSP (which is ARM!) and on Intel it's a mix of TXE and the ME.
This is a common mistake and very important to point out, because using a TPM alone on x86 doesn't prove anything; unless you _also_ have a root of trust, an attacker could just be feeding the "right" hashes to the TPM and you'd never know the difference.
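A toy illustration of that replay problem (my own sketch, reusing the standard SHA-256 extend construction): the TPM only ever sees digests handed to it, so compromised firmware — absent a root of trust verifying that firmware — can simply extend with the digests a good boot would have produced.

```python
import hashlib

def extend(pcr: bytes, digest: bytes) -> bytes:
    # The TPM hashes whatever digest the caller hands it; it has no
    # way to check that the digest matches the code that actually ran.
    return hashlib.sha256(pcr + digest).digest()

# Digests an honest boot of known-good components would produce:
good_digests = [hashlib.sha256(b"good-bootloader").digest(),
                hashlib.sha256(b"good-kernel").digest()]

pcr_honest = b"\x00" * 32
for d in good_digests:
    pcr_honest = extend(pcr_honest, d)

# Malicious firmware runs evil code but replays the known-good digests:
pcr_forged = b"\x00" * 32
for d in good_digests:  # NOT hashes of what actually executed
    pcr_forged = extend(pcr_forged, d)

# Attestation based on this PCR passes anyway.
assert pcr_forged == pcr_honest
```

The root of trust closes exactly this gap: it guarantees the code doing the measuring is itself genuine before anything gets extended.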
On ARM, you control the whole boot process on many SoCs, and can make your own bespoke secure/trusted/measured boot chain, from the boot ROM to the very last boot stages (given that your SoC manufacturer provides a root of trust and all the documentation on how to use it), without a TPM.
You more or less can't do that on x86; you have to rely on existing proprietary code to implement measured boot via the TPM (as the only method), on top of which you can implement trusted boot using the TPM and all the measurements that proprietary code has already made.
You can do that on x86 too; the main difference is a combination of openness and who you need to sign an NDA with (which, granted, is a big difference, since most ARM vendors are more likely to be your friend than Intel). However, there are a ton of x86-based arcade machines, automotive systems, and so on which have a secured root of trust and do not use UEFI at all. On Intel, you get unprovisioned chips and burn your KM hash into the PCH FPFs to advance the lifecycle state at EOM, which is basically the same thing you'd do with most ARM SoCs.
I've cracked into many x86-based arcade machines (and non-arcade gambling machines), and none of them used anything really bespoke. I've never seen a non-BIOS/UEFI x86 system in my life.
Not going to say they're non-existent, but probably the only mention I've seen of not using UEFI on Intel chips was in a presentation from Intel itself on Linux optimization for automotive, where they booted Linux in 2 seconds from cold boot.
I've seen the Intel bare-metal stuff in enough automotive products to call it extant in the wild; I've only heard of it being used in video arcade stuff so maybe I was misinformed there.
Anyway, I think we're both on the same page regardless that TPM and hardware root of trust are not the same thing. In some configurations a TPM can (weakly) attest that the hardware root of trust is present, but it doesn't actually provide a hardware root of trust itself, and that looks architecturally very similar on x86 to how it looks anywhere else (mask ROM verifies a second bootloader against an RTL- or manufacturing-fused chipmaker public key hash, the second bootloader measures subsequent material against an OEM-fused key hash, and so it goes).
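That chain can be sketched roughly like this (hypothetical names, and a simplification: real implementations verify signatures against fused public-key hashes, not bare image hashes):

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Boot stage images; in reality each stage is signed, and the fused
# values would be hashes of signing keys rather than of the images.
stage1 = b"second-stage bootloader"
stage2 = b"OS bootloader"
stage3 = b"kernel"

CHIPMAKER_FUSED_HASH = sha256(stage1)    # fixed in RTL / chipmaker fuses
OEM_FUSED_HASH = sha256(stage2)          # burned into OEM fuses at EOM
HASH_BAKED_INTO_STAGE2 = sha256(stage3)  # carried inside verified stage 2

def boot(s1: bytes, s2: bytes, s3: bytes) -> str:
    # Stage 0: immutable mask ROM checks stage 1 against the silicon root of trust.
    if sha256(s1) != CHIPMAKER_FUSED_HASH:
        raise RuntimeError("stage 1 rejected, halting")
    # Stage 1: checks stage 2 against the OEM-fused hash.
    if sha256(s2) != OEM_FUSED_HASH:
        raise RuntimeError("stage 2 rejected, halting")
    # Stage 2: checks stage 3 against a hash in its own (already verified) image.
    if sha256(s3) != HASH_BAKED_INTO_STAGE2:
        raise RuntimeError("stage 3 rejected, halting")
    return "booted"
```

Each stage only trusts the next because it was itself verified by the stage before, bottoming out in the immutable ROM and fuses; tamper with any image and its stage refuses to hand off.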
I think the general problem is that SoC-based security relies on internal "fuses" that are write-once, as the name suggests, which usually means that they are usable by the manufacturer only.
TPMs can be reprogrammed by the customer. If the device needs to be returned for repairs, the customer can remove their TPM, so that even the manufacturer cannot crack open the box and have access to their secrets.
That's only theory though, as the box could actually be "dirty" inside; for instance, it could leak the secrets obtained from the TPM to mass storage via a swap partition (I don't think those are common in embedded systems, though).
There have been many vulnerabilities in TrustZone implementations, and both Google and Apple now use separate secure element chips. In Apple's case they put the secure element into their main SoC, but on devices whose silicon wasn't designed in house, like Intel Macs, they had the T2 Security Chip. On all Pixel devices I'm pretty sure the Titan has been a separate chip (at least since they started including it at all).
So yes, incorporating a separate secure element/TPM chip into a design is probably more secure, but ultimately the right call will always depend on your threat model.