Trying to decide what OS to use on my new NAS for purely shared storage

Would you look at that. So there is. Granted, there isn’t a password on the BIOS at the moment.

I am having a strange sense of déjà vu, though, from the past few days messing with this machine. Can’t quite place what exactly. Either way, I do appreciate all the help from you and everyone, especially the Dell-specific stuff. I hadn’t intended on this becoming a Dell support thread :slight_smile:

Motherboard arrives tomorrow and hopefully gets this machine moving forward.

Actually, the déjà vu might be from my R250 (I think that’s the model) 1U server that had a power failure and needed a board replacement. I don’t use it anymore, so I forgot about it.


New motherboard is here! I had a thought, though. Since the 2x 2.5" FlexBay SAS attaches to the front 12-drive backplane, does that mean my system won’t see those drives when I pass the HBA through to the VM? There are several SAS cable plugs on the motherboard; I wonder if one of those can be used to avoid that. Reason being, I’d like to use them for VM datastores in ESXi. Though I guess I could use them in TN instead and just use the M.2 PCIe drives for datastores.

It LIVES!

However, I can’t figure out how to get out of BIOS manufacturing mode, even though I entered the old service tag and it was accepted.

Apparently I also plugged in a SAS cable incorrectly, even though I am 99% sure I put it back the way it was. Should be an easy fix, though I can’t read the text on the boards to tell which connector is which. Front bay drives are detected, as is the FlexBay SSD. Also seems like riser issues, but the cards on them are detected. I’ll reseat them.


Also, it seems I lost the iDRAC Enterprise license that was enabled on the old board :frowning:


Maybe a bad SAS cable. Guess I will just disconnect the FlexBay. I mean, it detects the SSD installed and the SAS drive I have in the front bay. It doesn’t, however, detect the PCIe M.2 adapter, even after enabling bifurcation. It also has the correct LCC update setting of HTTP instead of HTTPS, if that matters, but it can’t connect and errors out. I also can’t boot the DRM image like I was able to on the old motherboard setup. I enabled USB 3.0 but just get a GRUB command line. At least the system boots, but it’s such a pain expecting a factory-fresh motherboard to just work and not need so much fiddling.

EDIT:

Fixed the SAS issue, and I had forgotten to change the boot mode to UEFI, so now it’s running the DRM ISO and updating things. I just wish the fans wouldn’t run full tilt while doing it. It takes about 45 minutes and you can hear the thing outside.

Still getting a power and health error, but the logs say everything is A-OK.

ESXi installed. However, the system fans will not slow down; they are stuck at 86%. I can’t handle them being this loud. They idle while booting, etc., but as soon as you do anything like update firmware or run an OS they are just ON. Currently sitting at 120 CFM loud.

The CPUs are not hot.

OK, so I got the NVMe and SSD to detect in ESXi by disabling and re-enabling passthrough on them. Installing TNC. Will have to figure out how to get the HBA to pass through.
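For anyone following along: on ESXi 7.0+ passthrough can also be toggled from the ESXi shell instead of the vSphere UI. A minimal sketch, assuming a hypothetical PCI address for the HBA (find the real one in the `list` output); it defaults to a dry run that just prints the commands:

```shell
#!/bin/sh
# Sketch: toggling PCIe passthrough for an HBA from the ESXi shell.
# HBA_ADDR is a placeholder -- look up your card's address first.
# DRY_RUN=1 (default) only prints what would run.
DRY_RUN=${DRY_RUN:-1}
HBA_ADDR="0000:02:00.0"   # hypothetical address; check the list output below

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "$@"              # dry run: show the command instead of executing it
  else
    "$@"
  fi
}

# Show current passthrough state of all PCI devices
run esxcli hardware pci pcipassthru list

# Enable passthrough for the HBA (ESXi 7.0+ syntax)
run esxcli hardware pci pcipassthru set -d "$HBA_ADDR" -e true
```

After enabling, the device still has to be added to the VM as a PCI device in its settings, and the VM typically needs a full memory reservation for passthrough to work.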


Woot. Since that 1TB SSD is a Pro model, does that mean it can handle being used by TN without wearing out in a few months?

Now to decide between a 12-way Z2 or Z3: 49TB usable on Z3, 54TB on Z2. A 5TB difference doesn’t seem like much of a savings to justify going Z2.
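Those usable figures line up with 12 × 6 TB drives once you account for decimal-vs-binary units (a 6 TB drive is only ~5.46 TiB). A rough sketch, assuming 6 TB disks and ignoring ZFS slop space and metadata overhead:

```python
# Rough RAIDZ usable-capacity estimate for a single 12-wide vdev.
# Assumes 6 TB (decimal, 10**12 bytes) drives; ignores ZFS slop space and
# metadata, so a real pool reports slightly less.
DRIVE_TB = 6          # marketing terabytes per drive
WIDTH = 12            # disks in the vdev

def usable_tib(parity: int) -> float:
    """Data disks * drive size, converted from TB (10**12) to TiB (2**40)."""
    data_disks = WIDTH - parity
    return data_disks * DRIVE_TB * 10**12 / 2**40

z2 = usable_tib(2)    # RAIDZ2: 10 data disks
z3 = usable_tib(3)    # RAIDZ3: 9 data disks
print(f"Z2 ~ {z2:.1f} TiB, Z3 ~ {z3:.1f} TiB, difference ~ {z2 - z3:.1f} TiB")
```

That gives roughly 54.6 vs 49.1 TiB, matching the numbers above, with the "5TB" gap being one drive’s worth of data capacity.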

Went with Z3. Next I set up an SMB share on my pool. I selected guest access and unchecked “enable ACL.” I can open the share on Windows, but I have no permissions. I guess I need to make a user and figure out how all that works.


Followed this tutorial. Didn’t work; I still can’t log in with the user. I’d really like guest access so multiple machines can be connected to it at the same time.

Note to self: do not buy Dell hardware.


Haha. I had decent luck once; the second time was hell. If I ever build another server it will be DIY all the way. I enjoy DIY but skipped it this time because I found a good deal. Lesson learned.

Since installing the new motherboard, the system is missing all sorts of things that I can’t just reinstall, such as hardware diagnostics. The DRM ISO I ran to update the BIOS etc. supposedly installs that tool, but nope. So I have no idea how to diagnose the power/health issue, since the iDRAC logs don’t tell me anything; it just tells me to install the tool from the LCC, but the LCC doesn’t show anything to update or install. Then there’s losing the Enterprise iDRAC license (granted, I didn’t back it up; I didn’t know that was a thing). Oh, and I can’t use the LCC.

Another downside is that Dell software, especially newer versions, hates third-party hardware, be it PCIe cards or even SSD/NVMe drives. So in my instance, the 120 CFM jet sitting on the table is due to having non-Dell-certified SATA SSD and M.2 drives, or at least that’s the likely culprit, even though I have the M.2s in Dell-branded adapters. My old R710, though, I can slap whatever I want into and it idles fine.

They claim it’s because they can’t know how the hardware will perform, or that it may not report the proper data to adjust thermal limits even if it tries. So the solution is to crank those bad boys up to ear-bleeding levels to ensure proper cooling. But Dell, what if users want to override or disable that behavior, maybe even set a profile or limits? Sorry, you’re SOL. Now, you could run a CLI command to adjust it statically, maybe. However, that won’t persist after reboots, and something could revert it at any time.
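For reference, the static-fan CLI workaround usually means the widely shared Dell PowerEdge raw IPMI opcodes. A sketch, with the caveats above baked in: these are unsupported by Dell, may be blocked on newer iDRAC firmware, and do not persist across reboots. The iDRAC address and credentials are placeholders, and it defaults to a dry run that only prints the commands:

```shell
#!/bin/sh
# Sketch: static fan duty cycle on Dell PowerEdge via raw IPMI.
# Unsupported, non-persistent, and may be rejected by newer iDRAC firmware.
DRY_RUN=${DRY_RUN:-1}
IDRAC_HOST="192.168.1.120"   # hypothetical iDRAC address
IDRAC_USER="root"            # placeholder credentials
IDRAC_PASS="calvin"
PCT=30                       # target fan duty cycle, percent

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

HEX=$(printf '0x%02x' "$PCT")   # the duty cycle is sent as one hex byte

# 1. Take fan control away from the automatic profile
run ipmitool -I lanplus -H "$IDRAC_HOST" -U "$IDRAC_USER" -P "$IDRAC_PASS" \
    raw 0x30 0x30 0x01 0x00
# 2. Set all fans to a static duty cycle
run ipmitool -I lanplus -H "$IDRAC_HOST" -U "$IDRAC_USER" -P "$IDRAC_PASS" \
    raw 0x30 0x30 0x02 0xff "$HEX"
# To hand control back: ... raw 0x30 0x30 0x01 0x01

# On iDRAC 8/9 there is also a supported knob for the third-party-card
# fan offset specifically:
#   racadm set system.thermalsettings.ThirdPartyPCIFanResponse 0
```

The `racadm` setting is the one aimed directly at the "non-Dell PCIe card = jet engine" behavior; the raw IPMI route overrides everything, so watch your temps if you use it.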

I haven’t put the server in the rack in the garage yet, since I want to mess with it some more. I want to get TNC fully working first. None of the drives are showing temps, I/O, or other data, so I’m not sure if it can read them or even has access. So, can it even read SMART? Not sure yet. I also assume I need mail server credentials to set up email alerts. I kind of wish I had installed SCALE, as the UI looks more polished and better laid out. I suppose I can switch, since it’s a VM and I haven’t really done anything with it.

If I decide to delete the CORE VM and move to SCALE, do I need to delete the pool? Will I be able to set up a new pool like before, with it formatting, etc., in the process?

On bare metal you would not have any issues migrating from CORE to SCALE beyond the jails; in a virtualized instance like this one, I believe it should be about the same.

You can just upgrade the current CORE VM to SCALE without deleting and recreating it.
If you want to delete the boot pool VM, I would export the other pools first, then import them back after installing SCALE.

I guess I just wanted to start fresh as maybe I did something wrong.

So, trying to set up SMB:

Pool:Dataset

Dataset Options:

Dataset Permissions:

Group Windows:

Group Windows Members:

smbshare User Permissions:

Network Tab on Windows:
[screenshot]

Login:
[screenshot]

Error:
[screenshot]

If I try this method as either \\TRUENAS\SAS\WindowsShare or without the SAS:
[screenshot]

I’d prefer to have guest access so I don’t even have to bother with this, but I’m not sure how, as the checkbox for it doesn’t seem to work.

Well, I stripped the ACL and then added the group, and now users work. Now to figure out how to add no-login guest access, as I can’t access the share from my Server 2012 R2 VM on another machine.
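For context on what the guest checkbox is actually doing: TrueNAS generates its Samba config from the UI, so this is illustrative only and not something to hand-edit on TrueNAS, but at the Samba level guest access roughly boils down to:

```ini
; Illustrative smb.conf fragment -- pool/share names match this thread.
[global]
    map to guest = Bad User     ; failed logins fall back to the guest account

[WindowsShare]
    path = /mnt/SAS/WindowsShare
    guest ok = yes              ; allow connections without credentials
    read only = no
```

If a Windows client still prompts for credentials, it’s often the client side refusing guest/insecure logons rather than the share itself.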

I need to be able to access it from that machine so I can transfer all my media to the new server.

NVM, I had to enable legacy support in the network settings of TNC. I’m now able to sync my data to TNC.

However, it doesn’t show any I/O or other stats in the reporting dashboard. Does that mean I won’t be able to detect drive issues or failures? I guess if it scrubs, the data should be OK, and I can see drive failures from iDRAC or the LEDs on the drive bays.

Going to mark this solved.

The decision was ESXi 8.0.2 with TNC virtualized and the HBA330 passed through. Transfers over SMB easily stay at 99 MB/s.


Apparently using PCIe passthrough does not pass SMART, or other information, to TN.

Well darn. Guess I can kind of rely on the server itself to monitor health then. I’ve personally never had a hard drive fail but know it does happen. My hitachi nas drives are going on 6ish years now always spun up on the old r710 server. So crossing fingers. These seagate enterprise sas drives are rated for 550TB/year. They will never see those kind of numbers. Granted are second hard.