The given history of UTF-16 and UTF-8 is a bit muddled.
> UTF-16 was redefined to be ill-formed if it contains unpaired surrogate 16-bit code units.
This is incorrect. UTF-16 did not exist until Unicode 2.0, which was the version of the standard that introduced surrogate code points. UCS-2 was the 16-bit encoding that predated it, and UTF-16 was designed as a replacement for UCS-2 in order to handle supplementary characters properly.
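For concreteness, here's a quick sketch (in Python, purely illustrative; U+1F600 is just an arbitrary example character) of the surrogate-pair arithmetic UTF-16 added: a supplementary code point is biased by 0x10000 and split into two 10-bit halves, which is exactly what a single 16-bit UCS-2 code unit has no way to express.

```python
# Illustrative: how UTF-16 encodes a supplementary code point
# as a surrogate pair.
cp = 0x1F600
assert cp > 0xFFFF  # beyond what one 16-bit UCS-2 unit can hold

v = cp - 0x10000              # 20-bit value
high = 0xD800 + (v >> 10)     # high (lead) surrogate
low = 0xDC00 + (v & 0x3FF)    # low (trail) surrogate
print(hex(high), hex(low))    # 0xd83d 0xde00

# Python's UTF-16 codec produces exactly these two code units:
assert '\U0001F600'.encode('utf-16-be') == bytes([
    high >> 8, high & 0xFF, low >> 8, low & 0xFF,
])
```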
> UTF-8 was similarly redefined to be ill-formed if it contains surrogate byte sequences.
Not really true either. UTF-8 was originally created in 1992, long before Unicode 2.0, and at the time was defined in terms of UCS. It became part of the Unicode standard with Unicode 2.0, and from that point on incorporated surrogate code point handling. I'm not sure it's relevant to talk about UTF-8 prior to its inclusion in the Unicode standard, but even then, encoding the code point range D800-DFFF was not allowed, for the same reason it was not actually allowed in UCS-2: that range was unallocated (it was in fact part of the Special Zone, for which I can't find an actual definition in the scanned dead-tree Unicode 1.0 book, though I haven't read it cover to cover).

The distinction is that encoding those code points was not considered "ill-formed", so it was perfectly legal to receive UCS-2 that encoded those values, process it, and re-transmit it (just as it's legal to process and retransmit text streams that represent characters unknown to the process; the assumption is that the process that originally encoded them understood the characters). So technically yes, UTF-8 changed from its original UCS-based definition to one that explicitly treats encoding D800-DFFF as ill-formed, but UTF-8 as it has existed in the Unicode Standard has always considered it ill-formed.
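You can see the modern "ill-formed" rule directly in any strict UTF-8 codec; a quick Python demonstration (illustrative only, not part of the history above):

```python
# A strict UTF-8 codec treats surrogates as ill-formed in both directions.

# Encoding a lone surrogate code point (Python strings can hold one):
try:
    '\ud800'.encode('utf-8')
except UnicodeEncodeError as e:
    print(e)  # ... 'utf-8' codec can't encode ... surrogates not allowed

# Decoding the byte sequence that *would* represent U+D800 (ED A0 80):
try:
    b'\xed\xa0\x80'.decode('utf-8')
except UnicodeDecodeError as e:
    print(e)  # ... invalid continuation byte
```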
> Unicode text was restricted to not contain any surrogate code point. (This was presumably deemed simpler than only restricting pairs.)
This is a bit of an odd parenthetical. Regardless of encoding, it's never legal to emit a text stream that contains surrogate code points, as these code points are explicitly reserved for use by UTF-16. The UTF-8 and UTF-32 encodings explicitly treat attempts to encode these code points as ill-formed, but there's no reason to ever allow them in the first place: doing so violates the Unicode conformance rules. And because no conforming process could have encoded those code points to begin with, there is no reason for any process to attempt to interpret them when consuming a Unicode encoding. Allowing them would just be a potential security hazard (the same rationale for treating non-shortest-form UTF-8 encodings as ill-formed). It has nothing to do with simplicity.
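The non-shortest-form parallel is easy to demonstrate as well; a small, purely illustrative Python example:

```python
# Overlong (non-shortest-form) encodings are ill-formed for the same
# security reason: C0 AF would decode to '/' if accepted, letting a
# slash smuggle past byte-oriented filters that only look for 0x2F.
assert '/'.encode('utf-8') == b'\x2f'  # the only valid encoding

try:
    b'\xc0\xaf'.decode('utf-8')        # two-byte overlong form of '/'
except UnicodeDecodeError as e:
    print(e)  # ... invalid start byte
```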