Suggestions on server setup

I am looking for suggestions on how to best set up my TrueNAS CORE server. Should I use a cache drive? How about a metadata special device? Things like that. My scenario: I work at a small computer shop, and I want to use this server to hold customer data while backing it up for a transfer to another system, then transfer that data back to their computer later, or often to another computer they brought in. I have been using OpenMediaVault but have had a lot of issues with small files bogging down transfer speeds. We intend to upgrade the server to a 2.5Gbps card; any suggestions on one to use? There is usually only one computer transferring to or from the server at a time.

I have a Dell PowerEdge T330 with a PERC H730 set to HBA mode, 32GB of RAM, six 400GB Intel DC S3700 SATA SSDs, two 3TB Constellation ES.3 SAS drives, and a random spare SSD for installing TrueNAS to. I also have a few 250GB 970 EVO Plus drives along with PCIe-to-M.2 adapter cards that I can use for metadata special devices or cache drives if they would be helpful. I am currently transferring data to the HDDs at the end of the day as an extra copy in case something does go wrong with the primary SSD array.

Fix slow ZFS, get More IOPS! Best Practices for TrueNAS - YouTube

All the info I came across said that more RAM is far superior to any cache drive.

The max RAM this system can hold is 64GB, and most of the time our data transfers are in the hundreds of gigs. I also realized I made a mistake in the original post and said 16GB when I meant the 32GB that is in the system now. I am also wondering how much of the data we need will actually still be in ARC, since we sometimes do the transfer and then have to wait a week or more before we can transfer the data back. The large files I expect to transfer quickly even without a cache: with the six drives in a Z1 array I expect to see around 2.3GBps, which is far higher than the 2.5Gbps Ethernet speed. This is why I am wondering if a metadata special device using a pair of 970 EVO Plus drives to mirror the metadata, with small files assigned to the rest of the space, might be a better idea than the cache.

I’ll be honest, that sounds dodgy. My advice is don’t.

None, they are all crap. Anything between 1GbE and 10GbE is a farce.

That’s going to kill performance, you would want to replace that with a real HBA. If that’s what you’re using right now, I think it goes some way towards explaining why the performance you were seeing was so bad.

This data has to come from somewhere and/or go somewhere. Is the bottleneck here really the server or is it the somewhere?


The “cache drive” you mention is, I assume, the L2ARC: that’s a read cache that could help your situation if set to metadata-only. If you are dealing with lots of small files, a metadata vdev might be helpful if properly configured, but at that point I believe you would be better off just building a regular NVMe pool… or just an enclosure that lets you merge two (or more) SSDs into a single virtual drive of greater size. Actually, since the data is meant to be temporary, that method would be ideal unless you need more than 4TB of space; you could do the same with HDDs, but they are less portable :stuck_out_tongue:
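If you do go the L2ARC route, limiting it to metadata is a one-line property change. A sketch, assuming a pool named `tank` and one of the 970 EVO Plus drives showing up as `nvd0` (both names are placeholders; substitute your own):

```shell
# Attach the NVMe drive as an L2ARC (cache) device to the pool.
zpool add tank cache nvd0

# Restrict the L2ARC to caching metadata only, so small-file
# lookups benefit without the cache filling up with bulk file data.
zfs set secondarycache=metadata tank
```

`secondarycache` can also be set per dataset, so you could cache metadata only for the share that holds the small-file backups.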

Two approaches to consider:

  1. A pool consisting of several vdevs of 2-way or 3-way mirrors (SSDs). Simple and fast, but depending on the number of customers you might start running out of space.

  2. sVDEV pool - HDDs for bulk storage and SSDs for small files and metadata. More complicated and the sVDEV drives need to be high quality enterprise drives in at least a 3-way mirror. I’d mirror the HDDs for speed also.

If you are not very familiar with TrueNAS, I’d avoid the sVDEV option as there is too much that can go wrong resulting in a hosed pool. Option 1 seems like a better bet, especially as SSDs keep coming down in cost and a flash pool with quality drives largely obviates the benefit of a L2ARC, sVDEV, or other add-ons to HDD pools.
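For reference, the two layouts above translate into `zpool` commands roughly like this. A sketch assuming a pool named `tank`, the six SATA SSDs at `da0`–`da5`, and the two HDDs at `da6`/`da7` (all placeholder names; the sVDEV is shown as a 2-way mirror for brevity, though 3-way is the safer recommendation):

```shell
# Option 1: all-SSD pool of three 2-way mirror vdevs.
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5

# Option 2: mirrored HDDs for bulk storage plus a mirrored
# special (metadata) vdev on SSDs.
zpool create tank mirror da6 da7 special mirror da0 da1

# Route blocks of 64K or smaller to the special vdev so small
# files land on flash; this must stay below the dataset recordsize.
zfs set special_small_blocks=64K tank
```

Note that losing the special vdev loses the whole pool, which is why the mirror quality and redundancy matter so much for option 2.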

All that said, I’m not sure it’s a good idea to do backups of customer data onto your equipment. I would not want to be responsible for the PII or any contents that may not be legal (CSAM).

Instead, I’d give them the option to buy an inexpensive SSD for you to back up onto if they want that extra layer of protection. Then hand the drive to them along with their computer when they leave.

I am not sure how to reply to the direct quotes like you did, but I just wanted to say that we have had a couple of the external SSDs we usually back customer data up to before reinstalling a system fail on us. Fortunately both died before we had wiped the original data; we are primarily trying to prevent a catastrophe where the backup drive dies after the customer’s drive is wiped and we have no way to recover the data.

Thank you for the advice in the comments in between.

I am fairly certain that it is the server. We are using Cat6 cables into a 24-port switch, and the problem isn’t always happening. If I am transferring large files, I usually cap out at 100+ MBps. It is just on smaller files that our speed tanks so badly that we switched to using external SSDs instead.

By cache, I did indeed mean L2ARC, sorry for not specifying that.

Thanks Constantin, I had been focused on setting up a Z1 array and didn’t think about setting it up in a multi-way mirror like that, which makes me feel silly. Especially since we rarely have to copy more than a few hundred gigs.

I didn’t realize there could be so many issues with an sVDEV; I figured that as long as I mirrored the drives and they didn’t both fail at the same time, I would be fine. Thanks for the heads up.

I intend to set up the customer data share to be accessible only from specific computers, either with a VLAN or by configuring TrueNAS to allow access only from specific hosts, to help keep it secure. I will also delete the data as soon as the computer leaves the shop.
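Since TrueNAS serves SMB through Samba, the per-host restriction can be expressed with the standard `hosts allow` / `hosts deny` share parameters (TrueNAS CORE exposes these as auxiliary parameters on the share). A sketch with a hypothetical share name, path, and addresses:

```ini
; Hypothetical share definition; substitute your own path and
; the addresses of the shop workstations allowed to connect.
[customer-data]
    path = /mnt/tank/customer-data
    hosts allow = 192.168.10.10 192.168.10.11
    hosts deny = 0.0.0.0/0
```

Samba evaluates `hosts allow` before `hosts deny`, so the listed workstations get in and everything else is refused at connection time.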

While we encourage every customer to have an external drive for backing up their data not everyone can afford that and in some cases they can barely afford the repair. The owner is extremely generous and is a real bleeding heart.

I hear you and at the same time I still wonder if a rotating set of DAS drives wouldn’t make more sense. That is, buy a few external SSDs of varying capacities, back the computers up to them individually, then erase them each time a computer is checked out again. With SSDs, this process is quick and the data unrecoverable.

It’s also a simple solution that prevents you from retaining any objectionable content between customers. If the drives are formatted with encryption (i.e. Apple FileVault, Windows Bitlocker, etc.) then the data is also protected while it is at rest.

I’d label each drive by capacity to match the amount of data you have to back up (i.e. 500GB, 1TB, etc.) and then use a large re-usable plastic sleeve / bag to keep the drive and the laptop together until the customer picks up their machine.


Yup, with a couple of 1TB SSDs and a RAID1 enclosure you would be done with a far simpler and portable solution.

You can use TN for that, but considering the amount of data you need to temporarily store, it’s not a cost-efficient way. If you are using this to explore TN while helping with work, that’s another thing, and that goes beyond monetary value.
