I have a TrueNAS SCALE server that I've had running for a while, but disk I/O is becoming an issue. I would like to add an NVMe/SSD storage device and create a separate pool just for my VMs to use. Is this something I can do, i.e. force the VMs to use a separate pool? Or can someone point me to some guides to help me understand this process better?
Just remember:
- VMs do synchronous I/O, which actually performs two writes per I/O, waiting for the first one to complete before execution continues. So depending on the size of your VM data and your total I/O throughput, you can either put the whole dataset on NVMe or just put the first write on NVMe (a SLOG).
- Whatever you choose, you will need to mirror the new devices, so buy 2x NVMe/SSD.
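To make the SLOG option concrete, here is a minimal sketch of attaching a mirrored pair of NVMe devices as a log vdev. The pool name `tank` and the device paths are placeholder assumptions; on TrueNAS you would normally do this through the web UI rather than the shell.

```shell
# Attach two NVMe devices as a mirrored SLOG to an existing pool.
# "tank" and the /dev/nvme* paths are example names -- substitute your own.
zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1

# Confirm the log vdev now shows up under the pool:
zpool status tank
```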
P.S. Those of us who answer questions here would rather you give more details and ask for advice first rather than make a mistake and then ask for help correcting it.
This is data that is backed up elsewhere and thus only needs a single NVMe/SSD. My main issue is that when my Plex users are active, it starts to lag out some of my VMs because my SATA hard drives are only 5400 RPM. I would like to move all my VMs to a separate, much faster NVMe drive and leave my big data store on the arrayed 5400 RPM WD Red drives. Eventually I will upgrade to bigger storage drives, but those will still be non-SSD/NVMe as cost is a concern. I know the apps section can be assigned to a different storage pool, and I think I should be able to do the same with the VMs? Either way, nothing on this server needs to be backed up, as all the actual data is backed up to a Synology NAS. I'm not using this as a cache or SLOG drive; not sure why you think that's coming into play here?
Some WD Red drives are SMR - you need to check your exact model number to confirm that they are NOT SMR drives because SMR drives are absolutely and completely UNSUITABLE for use in any form of redundant ZFS pool!!
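If you're unsure of the model, `smartctl` can print it; the device path below is an example. As a rough guide, WD Red EFRX / Red Plus / Red Pro models are CMR, while the plain WD Red EFAX models are the SMR ones, but always verify against WD's published CMR/SMR list.

```shell
# Print the drive's model number so it can be checked against WD's CMR/SMR list.
# /dev/sda is an example device path -- run this per drive in the pool.
smartctl -i /dev/sda | grep -i 'model'
```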
Regarding cache and log drives, by the sound of it you don’t understand how these work in ZFS and why they might be applicable to your situation.
As I said before, IMO you will be better off explaining your situation e.g. what the VMs are for and how much data they need, and describing your existing TrueNAS configuration in detail, and then asking for advice (rather than giving snippets of information and then telling the more experienced people trying to help you that they don’t know what they are talking about).
I'm just gonna move on to something else where I don't get treated like I'm an idiot, thanks for the unkind words. (P.S. I read the drive suggestions 3 years ago when I decided to use TrueNAS.) None of my drives are SMR.
Well, I cannot speak for others, but personally I was simply trying to help based on the level of information you provided and the level of expertise that you claimed not to have (your original post asks about how you can use a separate pool, which suggests only a basic knowledge of TrueNAS).
But if you want to respond to someone trying to help you avoid future issues as being equal to someone treating you as an idiot, that is certainly your prerogative.
Are you saying a SLOG will do more to increase my disk I/O than moving to a faster disk would? Because most of my VMs and data are fine as long as both are low usage, but heavy use of both bogs things down, and to me separating one of those onto a separate drive sounds like what I was asking for help with, and seems like a better solution than a cache drive?
No - I am saying that the solution depends on what type of performance problem you have.
Write Performance
Using an NVMe pair for either SLOG or the whole pool will help the first write I/O to the ZIL complete very quickly, which is what your VM will see. However, if your VMs produce so much I/O that the ZIL fills up and further writes to the ZIL have to wait until some of them have been destaged to the data part of the pool, then only putting the whole pool on faster drives will help.
If your VM pool is (say) 20TB in size, then NVMe for the whole pool will be expensive, whereas a small NVMe pair for SLOG will be pretty cheap.
But if your VMs use (say) 200GB in total, then you might as well put the whole of this on an NVMe pair.
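One way to see whether sync writes (and therefore a SLOG) are even in play for a given zvol is to check its `sync` property; the dataset path below is an assumed example.

```shell
# With sync=standard (the default), guest flush requests go through the ZIL,
# which is exactly what a SLOG accelerates. sync=always forces every write
# through it; sync=disabled bypasses it (and risks data loss on power failure).
zfs get sync,volsize tank/vms/vm1
```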
Read Performance
VMs doing reads from a zvol are essentially doing random reads, without TrueNAS doing any intelligent read-ahead. If the block is already in ARC, then a read from disk isn't needed; otherwise a physical read is needed to get the block into ARC. Holding the data on NVMe will speed up that first read, and having more memory will help it stay in ARC.
If your data is on a normal file system mounted in the VM, then normal read-ahead can apply to large sequential files, meaning a lot more data will be served from ARC, and more memory is probably the cheapest solution.
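On TrueNAS SCALE (Linux), you can gauge how well reads are being absorbed by ARC from the kernel's arcstats counters; a miss count that climbs quickly under load means reads are going to disk. A minimal check:

```shell
# Raw ARC hit/miss counters. A rising "misses" value while the VMs are busy
# means reads are being served from disk rather than from memory.
awk '$1 == "hits" || $1 == "misses" {print $1, $3}' /proc/spl/kstat/zfs/arcstats
```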
Yes, you can definitely set up a separate pool for your VMs to use faster NVMe/SSD storage in TrueNAS. You’ll want to create a new pool specifically for the SSDs and then move your VM datasets or virtual disks to that pool. After creating the pool, you can configure each VM to use the new pool by adjusting the storage settings for each VM to point to the SSD pool. There are guides on the TrueNAS forums and the official docs that walk through creating pools and migrating datasets. Check out their official documentation on ZFS Pools and setting VM storage locations to get started. That should give you the performance boost you’re looking for!
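For reference, the underlying steps look roughly like this from the shell. Pool, dataset, and device names are all assumptions, and on TrueNAS you'd normally create the pool and replicate the zvol via the web UI instead.

```shell
# Create a single-disk NVMe pool (no redundancy -- acceptable here only
# because the data is backed up to the Synology NAS). Names are examples.
zpool create fast /dev/nvme0n1

# Copy an existing VM zvol to the new pool via snapshot + send/receive:
zfs snapshot tank/vms/vm1@migrate
zfs send tank/vms/vm1@migrate | zfs receive fast/vms/vm1
# Then edit the VM's disk device in the UI to point at fast/vms/vm1.
```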