Yes, it will be RAID-Z2 containing 2 VDEVs, with 8 disks each.
Is this kind of layout not supported for the special VDEV, or is it just not recommended as a general practice? I mean, will I experience any sort of issue if I use RAID-Z2 (either a single VDEV or multiple) and use a special VDEV as a 3-way or 4-way mirror?
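For reference, a layout like that can be sketched with plain `zpool` commands. This is a hypothetical sketch only; the pool name and device names are placeholders, not anything from this thread:

```shell
# Hypothetical sketch -- "tank" and all device names are placeholders.
# Two 8-disk RAID-Z2 data vdevs:
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
  raidz2 da8 da9 da10 da11 da12 da13 da14 da15

# A 3-way mirrored special vdev for metadata (and, optionally, small blocks):
zpool add tank special mirror nvd0 nvd1 nvd2
```

Nothing prevents mixing RAID-Z2 data vdevs with a mirrored special vdev; the question in this thread is only how much mirror redundancy is sensible.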
So, you mean that the special metadata VDEV should have the same layout/redundancy as the data pool, and that it cannot be smaller or larger than the data pool's redundancy? Is that what you mean?
Yes, I fully understand that, but this is the best I can do, at least for now. Maybe in the near future I can shift this one to backup duty, or use a more heavily configured NAS for the backup and keep the current one as the active NAS.
Woohoo. Gives me so much relief!
How would I even determine whether I need L2ARC for my current setup? I know that one needs to have at least 64GB of RAM to start in the first place.
AFAIK it’s not mandatory (but I’ll let the experts weigh in), but what would be the purpose of having super-high redundancy on the special VDEV and not in the pool? That would only make sense if, for some reason, the special VDEV were prone to more wear and tear during the resilvering process, but (again, AFAIK) that is not the case.
That makes sense. In this case, though, I’d recommend being value-conscious (note that I’m not saying “thrifty”) in how you equip your system, so you can get to the stage of having a backup system sooner.
There has been discussion on the forums about that, and even in a recent TrueNAS Tech Talk. TL;DR: you do not NEED 64GB of RAM, but ZFS will use most of the available RAM for its ARC read cache. I recommend you watch that section of the episode.
There is an ongoing discussion about L2ARC in another thread on this forum.
The nice thing about L2ARC is that you can try it out with a cheap SSD you have lying around: since it’s a read cache (as previously mentioned), pool integrity is not jeopardized (only, potentially, the lifetime of your SSD in the long run). And you can always remove the cache device from the pool. So it costs nothing to try it out.
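Adding and removing a cache device is a two-command affair. Hypothetical sketch; the pool name and device are placeholders:

```shell
# Hypothetical sketch -- "tank" and "ada3" are placeholders.
# Add a spare SSD as L2ARC (a "cache" vdev). Cache devices need no
# redundancy: if the SSD dies, reads just fall back to the pool.
zpool add tank cache ada3

# Watch hit rates for a while; remove it at any time with no data loss:
zpool remove tank ada3
```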
The final answer will depend on the use-case for your system, which I didn’t see mentioned in this thread.
My understanding is that there is no data held on the sVDEV which is also held on the data vdevs. So if there is a RAID-Z2 data vdev, you can lose two disks and still have a copy of the data. One would need a three-way mirror in the sVDEV to have the same level of redundancy.
If one used a single disk in the sVDEV, and that disk got trashed, the entire pool would fail. Do not skimp on redundancy on a special vdev.
Maybe separate your data into separate datasets, based on how much each benefits from having metadata and small files on the sVDEV.
(I mean, raw footage in one dataset, which does not need much metadata, and small files, like photos, and backups in the other one.)
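That split can be done per dataset via the `special_small_blocks` property. Hypothetical sketch; dataset names are placeholders:

```shell
# Hypothetical sketch -- dataset names are placeholders.
# Raw footage: large records, no small-block redirection to the special vdev
# (only metadata goes there):
zfs create -o recordsize=1M -o special_small_blocks=0 tank/footage

# Photos/backups: blocks up to 64K are stored on the (faster) special vdev:
zfs create -o special_small_blocks=64K tank/photos
```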
So help me out here folks as the ‘Special VDEV’ use case has always confused me a little.
I’ve just checked one of my servers, and it has an ARC hit ratio of 99%, of which 97.7% is metadata. Now, given that RAM is much faster than any other device you could use for your ‘Special VDEV’, why, in my use case, would I want one?
Would it be to simply speed up the remaining 1% of ARC misses of which some ‘MIGHT’ be metadata?
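For anyone wanting to check their own numbers: on Linux OpenZFS the raw counters live in `/proc/spl/kstat/zfs/arcstats` (the `arc_summary` tool pretty-prints them), and the hit ratio is just hits over total lookups. A minimal sketch with made-up counter values, not real readings:

```shell
# Made-up example counters -- on Linux ZFS, real values come from the
# "hits" and "misses" fields of /proc/spl/kstat/zfs/arcstats.
hits=990000
misses=10000

# Hit ratio = hits / (hits + misses), as a percentage:
ratio=$(awk -v h="$hits" -v m="$misses" 'BEGIN { printf "%.1f", 100 * h / (h + m) }')
echo "ARC hit ratio: ${ratio}%"   # prints "ARC hit ratio: 99.0%"
```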
Not true with a special vdev: if a dataset’s record size is set at or below the small-file cutoff (`special_small_blocks`), every block qualifies as “small” and the dataset will sit entirely on the special vdev.
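A sketch of that footgun, with placeholder names. Per the OpenZFS docs, blocks up to `special_small_blocks` in size go to the special vdev, so a cutoff equal to the record size catches everything:

```shell
# Hypothetical sketch -- names are placeholders.
# recordsize (64K) is not larger than special_small_blocks (64K), so EVERY
# data block of this dataset counts as "small" and lands on the special vdev:
zfs create -o recordsize=64K -o special_small_blocks=64K tank/all_on_special
```

That can be deliberate (an all-flash dataset inside a mostly-HDD pool), but it will fill the special vdev fast if done by accident.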
As far as I know, “dedup”, for example, is a property of the dataset, not the VDEV. So you can have a deduped dataset along with a non-deduped one on the same physical set of drives.
(I guess you can even nest a non-deduped dataset under a deduped one, but I am not so sure of that, since I don’t like using dedup.)
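Dedup being a per-dataset property, the mix looks like this. Hypothetical sketch; dataset names are placeholders:

```shell
# Hypothetical sketch -- dataset names are placeholders.
# Dedup is set per dataset, so these two share the same pool and vdevs:
zfs create -o dedup=on  tank/deduped
zfs create -o dedup=off tank/plain

# Children inherit the property by default, but can override it:
zfs create -o dedup=off tank/deduped/nodedup
```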
To have something in ARC, it has to become hot first. Everything that is in your ARC has missed a fast read before becoming hot.
The special vdev also takes care of metadata writes, so it will speed up your metadata writes as well.
And the special vdev’s small-files setting can be helpful for RAIDZ or dRAID: it can prevent padding, it can prevent inefficient storage caused by pool geometry, and it can help against the minimum stripe size of dRAID.