Thanks for the feedback thus far, folks.
etorix:
If you ever have an incident with the cold storage pool, you’ll be in SMR Hell…
Frankly, I don’t understand the point of 8 TB HDDs nowadays. The $/TB sweet spot should be around 18-20 TB; anything smaller than 16 TB is just too small to consider.
If it comes to that, I’d probably resilver with a CMR drive.
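For anyone curious, that swap would just be a standard zpool replace - a minimal sketch, where the pool name (“backup”) and the device names are purely illustrative:

```
# Swap the failing SMR disk for the CMR replacement; ZFS resilvers onto the new drive
zpool replace backup da3 da9
zpool status backup    # keep an eye on resilver progress
```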
As for the decision on 8TBs: they can be had cheap these days… I also got lucky with a helluva deal from a buddy who shelved his own NAS plans - 8x 18TBs, which I’ve just finished upgrading one of my 8x 8TB VDEVs with…
You’re right - the sweet spot SHOULD be 18-20TB these days, but like I said, large-capacity drives are eye-wateringly expensive in my neck of the woods, with hardly any deep-discount sales. I can’t justify laying out nearly $5k on another 8x new 18-20TB drives… So I’ll roll with cheap and cheerful 8TBs for now, until a decent deal comes along on another set of 18-20TBs.
etorix:
At this price I assume that drives aren’t new, so consider they are burnt-in and just check SMART.
So many small, and probably 10k rpm drives, will use a lot of power compared with a lower number of high capacity 3.5" drives, but if this is cold storage it may not matter much.
Nope - ex-datacentre MD1200 shelves - BUT they were running conventional RAID… They had a few SMART tests at <20 hours, and spent the rest of their lives relying on the RAID controller’s oversight. I ran SMART tests when I fired up the shelves - all seemed well - and then one of the drives started acting up as soon as I chucked a few hundred GB of data onto a test pool to gauge some performance metrics… So now I’m running at least 1-2 passes of badblocks and another long SMART test on them before deployment - just to be sure.
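For reference, this is roughly the burn-in I’m running per drive - a sketch, with the device path as a placeholder, and note that badblocks -w is destructive, so only run it on disks with nothing on them:

```
# Full destructive write/verify pass over the whole disk (4K blocks, show progress)
badblocks -b 4096 -wsv /dev/da2
# Then kick off a long SMART self-test, and review the results once it completes
smartctl -t long /dev/da2
smartctl -a /dev/da2
```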
They’re 7200 RPM Seagate ST4000NM0063 drives - and if I’m effectively only going to run them for 3-4 days a year as a cold storage backup solution, their power consumption makes little to no difference in the grand scheme of things - the comparative cost of larger drives would outweigh the cheap 4TB drives’ power consumption many times over.
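Back-of-the-envelope, with assumed numbers: a dozen of these at ~10 W each, spun up for ~96 hours a year, works out to 12 × 10 W × 96 h ≈ 11.5 kWh annually - pocket change at typical electricity rates, against hundreds of dollars per new 18-20TB drive.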
etorix:
Hot spares are useful if the system is live and can react to failures. In cold storage, a “hot spare” would be… cold—and useless.
LOL - you say that… I was tempted to refer to it as a lukewarm spare in my post.
My reasoning for a spare would be to enable an immediate replacement in situ, without having to pull the offending disk (or connect the replacement disk to the server first) - but now that I think about it, you’re right. I’m being daft on that front.
Arwen:
As an owner of a Seagate 8TB SMR Archive drive, I think they are not Host Aware, but just Drive Managed. Could be wrong. However, even Host Aware are only useful IF the host driver software can work with the SMR drive to select areas to write or clear. Not the case with OpenZFS.
I’d understood the 8TB Archives to be some of the earliest host-aware SMR examples. But yeah, it’s ultimately moot if OpenZFS doesn’t know what to do with them.
Arwen:
In regards to using 8 of them in a RAID-Z2 pool, I think this fails the sanity check 
Part of the issue is that internally they fragment the data. With ZFS doing COW, Copy On Write, this leads to both internal and external fragmentation causing them to be quite slow after long use.
Some people in the past foolishly said “My pool of Seagate SMR Archive disks is just fine, been using it for months without serious problems…” Well, that is NOT the problem we are warning against. It’s after they have been in heavy use, or long use, that the fragmentation rears its ugly head and tanks performance. Even to the point of a SMR disk becoming OFFLINE due to timeouts. Causing DEGRADED pools.
Duly noted … 
Ay caramba, the considerations on SMRs seem to be all over the place. After much searching, I’d basically understood them to behave just fine in a media-hoarding environment, where all the data is written sequentially in large chunks and hardly ever deleted or edited…
And anecdotally, I kind of AM one of those people you refer to - the second of the 3x VDEVs on my server consisted of 8x 8TB Archives in Z2… These are the same ones I’m now replacing, after nigh-on 9 years of 24/7 availability, numerous 3G/s capacity-upgrade drive replacements, and 1.5G/s+ scrubs. I’ve had one or two of them go on the fritz, and replaced them with no hassles.
I’d spent a good few years inactive on the Free/TrueNAS forums, and only now did I learn that my Archives are in fact SMR, and that SMR is considered the boogeyman-devil-babayaga-vengeful-John-Wick of ZFS data mangling - so I’m replacing them with CMRs now…
It’d feel a bit wasteful to just throw them out or use them as paperweights, though - hence the thought of employing them for cold storage backups…
The way I figured it: cold backup storage in Z1/Z2 (perhaps even as a 2nd-tier backup?), and if a drive conks out, replace it with a CMR - and if that resilver goes painfully slowly, nuke it and take my chances pulling all the data onto a spare disk shelf…
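Roughly the replication workflow I have in mind, sketched with placeholder names (“tank” as the live pool, “coldbackup” as the shelf pool):

```
# Spin up the cold shelf, replicate a recursive snapshot, then spin it back down
zpool import coldbackup
zfs snapshot -r tank@cold-2025-01
zfs send -R tank@cold-2025-01 | zfs receive -Fdu coldbackup
zpool export coldbackup
```

Subsequent runs would send incrementals (zfs send -RI) instead of the full stream.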
That said, I’m not arguing for their use - just spitballing and offering my own anecdotal considerations… If conventional wisdom dictates keeping them away from ZFS, then we keep them away from any and all forms of ZFS… so be it. And if they’re not to be trusted in any way, shape or form, then so be that as well…