Apologies for the confusion (and my description of sync writes above is slightly off the mark as well, see the very bottom of this post for correction).
To be clear here: when I say block size I mean a read or write operation being done by ZFS; when I say recordsize I mean the per-dataset ZFS tuning property; when I say stripe I mean a row of data across all disks in a RAIDz(n) vdev.
For read operations I agree: if the data you are reading sits within a block that was written at a larger size, you will read more data from the zpool than you need.
But for write operations ZFS writes blocks of data as large as necessary, but not necessarily an entire stripe. See Matt Ahrens' blog post on RAIDz stripe width. There is a really good chart in the middle of that post that visualizes write allocation in a RAIDz vdev (unless OpenZFS has changed that basic part of the ZFS design).
So a 4KiB block of data written as a 4KiB block will be read back as a 4KiB block (plus parity). If that 4KiB sits in the middle of a larger (say 64KiB) block, then yes, more than 4KiB will have to be read to get at the data, and on modification the entire 64KiB block will have to be written out (or broken up into multiple smaller blocks), since there is a single pointer to the entire original block of data.
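The read-amplification point above is just arithmetic: ZFS reads whole blocks, so a sub-block request pulls in the full block. A minimal sketch (illustrative only, not ZFS internals; the function name is mine):

```python
# Sketch: bytes actually read from the zpool when a small request falls
# inside a larger ZFS block. ZFS reads whole blocks, so a sub-block
# request costs the full block (parity reads ignored here for simplicity).

def bytes_read(request_bytes, block_bytes):
    """Return bytes pulled from the pool to satisfy a request."""
    return max(request_bytes, block_bytes)

print(bytes_read(4 * 1024, 4 * 1024))   # 4096  - 4KiB written as a 4KiB block
print(bytes_read(4 * 1024, 64 * 1024))  # 65536 - same 4KiB inside a 64KiB block
```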
The implication from the linked article is that RAIDz writes to each device in the vdev one sector at a time, so the total stripe size of a vdev with 512-byte-sector drives would be 512 bytes times the number of disks; for 4KiB sectors the stripe would be 4KiB times the number of disks. Note that a block write does not need to hit all disks, just enough to hold the block of data plus parity.
For example, if you have a RAIDz1 vdev of 6 x 4KiB-sector disks, the stripe size would be 24KiB. If you wrote a 4KiB block of data, ZFS would commit it to 2 disks: one data sector and one parity sector. If you wrote a 48KiB block of data, ZFS would commit 3 parity and 12 data sectors as follows [PDDDDD,PDDDDD,PDD], assuming it can start at the beginning of a stripe.
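The allocation in that example can be sketched as a small calculation. This is a model of the one-parity-sector-per-row layout from Matt Ahrens' post, assuming 4KiB sectors; the function and names are mine, not ZFS code:

```python
# Sketch: RAIDz1 sector allocation for a single block write, assuming
# 4KiB sectors and one parity sector per row (per the stripe-width post).

SECTOR = 4096  # bytes per sector on a 4Kn disk

def raidz1_layout(block_bytes, disks):
    """Return rows of 'P' (parity) and 'D' (data) sectors for one block."""
    data_sectors = -(-block_bytes // SECTOR)  # ceiling division
    data_per_row = disks - 1                  # one parity sector per row
    rows = []
    while data_sectors > 0:
        d = min(data_sectors, data_per_row)
        rows.append("P" + "D" * d)
        data_sectors -= d
    return rows

print(raidz1_layout(48 * 1024, 6))  # ['PDDDDD', 'PDDDDD', 'PDD']
print(raidz1_layout(4 * 1024, 6))   # ['PD'] - one data disk plus one parity disk
```

The 48KiB case reproduces the [PDDDDD,PDDDDD,PDD] layout from the text: 12 data sectors plus 3 parity sectors across three rows.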
My understanding of recordsize as used by ZFS is that it is not related to RAIDz stripe size; it sets the maximum block size used for files in a dataset. Separately, writes are aggregated in memory and flushed to disk in transaction groups, with the flush triggered either by enough dirty data accumulating or by a timeout (originally 30 seconds, but very quickly after ZFS was released that was tuned down to 5 seconds).
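The two flush triggers can be modeled in a few lines. This is a sketch of the idea only, assuming a dirty-data limit and a 5-second timeout; the class and its names are illustrative, not OpenZFS code:

```python
# Sketch: the two triggers that flush buffered writes to disk, as
# described above - enough dirty data accumulated, or a timeout.

import time

class TxgSketch:
    """Toy model of a transaction group's flush decision."""

    def __init__(self, size_limit, timeout_s=5.0, now=time.monotonic):
        self.size_limit = size_limit  # dirty-data threshold in bytes
        self.timeout_s = timeout_s    # flush timeout (5s in modern ZFS)
        self.now = now                # injectable clock for testing
        self.dirty = 0
        self.opened = now()

    def write(self, nbytes):
        """Buffer a write; return True if a flush is now due."""
        self.dirty += nbytes
        return self.should_flush()

    def should_flush(self):
        return (self.dirty >= self.size_limit
                or self.now() - self.opened >= self.timeout_s)

    def flush(self):
        """Commit buffered data and open a fresh group."""
        flushed, self.dirty = self.dirty, 0
        self.opened = self.now()
        return flushed
```

With a fake clock you can see both triggers: a half-full group flushes once 5 seconds pass, and a group that crosses the size limit flushes immediately.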
And all of this changes with compression, as a user write of 4KiB may occupy less than 4KiB on disk after compression.
ZFS Sync Writes
When ZFS receives a synchronous write it must commit it to non-volatile storage before returning to the calling program. The sync write data is immediately committed both to the ARC (as an async write would be) and to the ZIL/SLOG (ZFS Intent Log/Separate LOG device). The basic ZIL is an area at the start of the zpool that is written sequentially; a SLOG is a separate device (called the LOG device in TrueNAS) that is also written sequentially.

The sync writes in the ARC are committed to the zpool the same way as any other write; in fact, I do not think the ARC distinguishes sync from async writes. Neither the ZIL nor the SLOG is actually read unless the write data in the ARC never gets committed. This can happen if the server crashes, in which case the ZIL/SLOG is read when the zpool is imported/mounted at boot time and replayed to ensure that the sync write data is committed to the zpool. Any async write data that was not committed is lost, but the filesystem remains consistent.
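The sync-write path above can be sketched as a toy model: a sync write lands in both the cache and a sequential log before returning; the log is only ever read back after a crash. This is a conceptual model under my own naming, not ZFS internals:

```python
# Sketch: sync vs async write durability, per the description above.
# 'arc' models in-memory writes awaiting commit, 'zil' the sequential
# intent log, 'disk' the durable zpool contents.

class PoolSketch:
    def __init__(self):
        self.arc = []   # buffered writes (sync and async alike)
        self.zil = []   # intent log: sync writes only
        self.disk = []  # durable zpool contents

    def write(self, data, sync=False):
        self.arc.append(data)      # every write goes through the cache
        if sync:
            self.zil.append(data)  # sync also lands in the intent log
            # only now may the call return to the caller

    def txg_commit(self):
        """Normal flush path: ARC contents reach the pool, log is cleared."""
        self.disk.extend(self.arc)
        self.arc.clear()
        self.zil.clear()           # logged entries are now redundant

    def crash_and_import(self):
        """Crash: uncommitted async data is lost; the log is replayed."""
        self.arc.clear()
        self.disk.extend(self.zil)
        self.zil.clear()
```

Running one async and one sync write through a crash shows the behavior described: the sync write survives via log replay, the async write is lost, and the pool stays consistent.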