Thanks, guys. It was very important that my VMs could access the pools/SMB shares. I had read people praise how your VMs get, in their words, "direct access" if you virtualize on TrueNAS, only to find out it isn't really quite that way and also needs a workaround (a bridge). So either way there isn't direct access (correct?); the VMs just use SMB shares like any other device connected to the same network, not like a VM. That isn't really a big deal and is how I already do it with ESXi and OMV. I'll read over the Accessing NAS From a VM documentation.
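In other words, from inside a guest it's just an ordinary SMB mount. A minimal sketch from a Linux VM (the hostname, share name, and credentials file below are placeholders):

```
# mount a TrueNAS SMB share inside a Linux VM (requires cifs-utils)
sudo mkdir -p /mnt/nas
sudo mount -t cifs //truenas.local/tank /mnt/nas \
    -o credentials=/root/.smbcred,uid=$(id -u),gid=$(id -g),vers=3.0
```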
Also, do I need a SLOG/L2ARC or drives for metadata? I do have 96 GB of RAM in it, but I'd rather not use up a bunch of it so the VMs can have it. With that said, I've been looking over build options and what everything means. I'm leaning toward a 12-drive RAIDZ2, or maybe an 11-drive RAIDZ2 plus one hot spare. Redundancy isn't as important as available space, and IOPS isn't as important since nothing will be running off that space; it's more about streaming/transfer speeds.
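To put the two layouts in concrete terms (TrueNAS builds pools through the UI, so this is only to illustrate the geometry; the pool and device names are made up):

```
# Option A: one 12-wide RAIDZ2 vdev (two drives of parity)
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11

# Option B: one 11-wide RAIDZ2 vdev plus a hot spare
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 spare da11
```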
Edit: I see there are "Intel HPE IBM P3600 1.6TB HHHL PCI-E NVMe SSD 100% Life Remaining" drives on eBay for $120, and I saw a 400 GB one in your build, Stux.
That is from before Optane really took over the SLOG market. At the time the only Optane drive was the newly released P4800X (IIRC). Later on I used a P4801X M.2 drive for another build.
Unfortunately Optane is getting harder to source now :(, but it's still probably the bee's knees.
I have a few M.2 drives (500 GB–1 TB) lying around. Any recommendation for a PCIe card to slap them into that doesn't require external power or a SATA cable? Maybe a two- or four-port M.2 card?
Thanks, it looks like it does, but bifurcation needs to be enabled per slot in the BIOS. Not sure if the riser is x16 though; it wasn't specified, so I'm going to assume x8 until it arrives.
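Once it's here I should be able to confirm the negotiated width from a shell with something like this (the bus address is just an example and will differ on my system):

```
# find the card's PCIe address, then compare maximum vs. negotiated link width
lspci
sudo lspci -vv -s 41:00.0 | grep -E 'LnkCap|LnkSta'
# LnkCap shows what the device supports (e.g. x16); LnkSta shows what the slot actually negotiated (e.g. x8)
```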
Putting the VMs in a dedicated SSD pool is good practice and well supported.
If you want to live a bit on the wild side, buy enterprise-quality SSDs that are big enough and then implement a sVDEV (special vdev). That buys you the performance increase of SSDs for metadata and small files, and it can host your VMs too, all within the same pool.
If you define the VM dataset as being entirely "small file" (i.e., the metadata small-file size equals the record size), it is forced onto the sVDEV. Now your general pool gets the awesome metadata performance that makes directory traversals, rsync, etc. a joy, small files can be read and written quickly, and your VMs can make use of the SSDs without resorting to a separate pool.
(Dataset record sizes are set in the Storage/Pools menu; I'm writing a primer on this, see the sVDEV tag.)
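Roughly, the CLI knobs look like this (the pool "tank", dataset "tank/vms", and device names are just placeholders; the same settings are exposed per dataset in the TrueNAS UI):

```
# add the special vdev as a three-way mirror of SSDs (see the warning below)
zpool add tank special mirror nvme0n1 nvme1n1 nvme2n1

# force the whole VM dataset onto the sVDEV by making the small-file cutoff
# equal to the record size (blocks <= this size are stored on the special vdev)
zfs get recordsize tank/vms
zfs set special_small_blocks=128K tank/vms
```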
Remember though: IF your sVDEV goes, your entire pool will be unrecoverable. Choose good drives, use at least a three-way mirror, and be sure to have SMART tests, notifications, etc. enabled so you catch a failing drive before it goes OFFLINE. There be dragons here, so if the above sounds too complicated, just go with the tried and true and host the VMs on a separate SSD pool.
I like this idea. I'd likely mirror to be safe, so I'm gonna have to look for some more SSDs.
M.2 adapter card:
So it does seem all the riser PCIe slots are x8. They apparently hate us having x16 in enterprise servers. Same goes for my Dell 710 LFF. I could have gotten a single x16 riser, but that seemed like a waste compared to the three usable x8 slots I could have instead.
With that said, are there any x4/x8 M.2 PCIe adapter cards that will suffice? I've seen some, but they all have a single SATA port behind one of the M.2 slots that is stated as required for whatever reason. Maybe the SATA cable is only required for certain uses; I'm not sure. I had considered grabbing a Dell-branded one with two M.2 slots so that my server fans don't run at full throttle when there's an unknown product in there.
The HBA330 is a perfectly decent LSI SAS3008 HBA. It should be flashed to an appropriate firmware version, of course, but otherwise it should be just fine.
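If you want to sanity-check it once it's installed, something like this will report the firmware it's running (assuming Broadcom's sas3flash utility is available on the system; output varies by card):

```
# list all SAS3 controllers with their firmware and BIOS versions
sas3flash -listall
# more detail on the first controller (index 0)
sas3flash -c 0 -list
```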
Look harder: there are many adapters and risers for M.2 or U.2 drives in PCIe slots; the more expensive cards with PLX switches do not require bifurcation and can hold four drives in an x8 slot.
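Once drives are on such a card, it's easy to confirm they all enumerate. A rough check from a Linux shell (assumes the nvme-cli package is installed):

```
# confirm the switch and the NVMe controllers behind it are visible on the bus
lspci | grep -i -e 'plx' -e 'non-volatile'
# list the NVMe drives the OS actually sees
nvme list
```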
I bought an HBA330 mono ($16) to swap out the H330 mono instead of needing to flash it with HBA330 firmware. I will see if there are any updates for it, though.
Thanks. I was looking on eBay. I'll try using better keywords, like the ones the AliExpress listings use.
Well, it's here, yay! It even came with a four-port 10 GbE NDC. Not that I have anything that can use it, but nice, I suppose. I should be able to get the new CPUs, HBA, and SATADOM installed and the OS on it today. None of the trays came with HDD screws, so I'll have some tomorrow (oversight on my part not having any). The flexbay for the SSDs and other stuff should be here later in the weekend.
EDIT:
Maybe not. I can't get it to boot, or sometimes it will, so I'm going to take a break. I think part of it is that it needs an HDD populated in the front panel, and I can't really seat the drives without screws. The iDRAC finally lit up, so I'm going to bring in a monitor and see what's up. I reseated all the RAM and risers. I did notice that whoever originally set up the server didn't split the 16 GB DIMMs between CPUs, so two 16 GB sticks are on the left CPU with a mix of 8 GB, and all 8 GB sticks are on the right CPU. So I'll need to adjust that. Not that it's preventing anything; it just seems like an odd setup.
Edit 2:
I don't think it likes the SATADOM in the yellow port, or maybe it's just coincidence. Either way, I installed the new CPUs and the HBA, and manually inserted 2 of the 12 SAS drives in their trays until the screws get here. The drives initialized and are all green. My VGA monitor attached to my Dell 710 is shot, so it looks like I need to Goodwill another one or get a VGA-to-HDMI adapter. I'll also have to bridge my desktop's Wi-Fi connection to Ethernet so I can run a short cable to access the server. I don't feel like working in the garage where my rack is until it's ready to stay in the rack.
Edit 3:
So the front panel has some health indicators, like on your car dashboard. There's an icon related to power blinking amber. So either both of the 750 W 80+ Platinum PSUs have a fault somewhere in voltage or whatnot, or the voltage regulator on the board is going bad/is bad. If it's the motherboard, it seems I'll be SOL unless the seller refunds me, or at least enough to replace the board/PSUs. Once the VGA-to-HDMI converter comes I can figure out Dell's OpenManage software and actually see what the errors are, or confirm my suspicions.
Edit 4:
I got the 2.5" flexbay backplane kit in today, so while I wait for screws and the VGA-to-HDMI adapter I figured I'd install it. Thankfully I have an SFF Dell 710, so I pulled some SAS drives out in their trays, and it fit like a glove and works. I'll install the SSDs when I can figure out what is going on. I'm hoping I don't need a new motherboard, so I contacted the seller to see what he observed or thinks. Hoping they will take care of me if I do need a new motherboard.
This is getting tedious, but with second-hand stuff that's just sometimes how it is. I also ordered a new PSU to see if that's the issue. I might have to install ESXi so I can install Dell's OpenManage software to look at what the errors are. I've been contemplating just virtualizing TrueNAS SCALE if I have to go the ESXi route for diagnostics. We shall see.
Welp, I believe the server is dead. I notice that for the 30-day money-back guarantee the buyer pays the shipping to return it for a refund, which is nearly half the cost of the server. It looks like I won't have the funds to fix this or move forward, as I'm not going to pay $100 to return a DOA server. I'm waiting to hear back from the seller, as they seem reputable with 30k reviews and 99.7% positive feedback.
If I get stuck with no option but to keep it as is, even though it never worked when I got it, a new motherboard and power supplies will be the bare-minimum fix. While not that expensive, that doesn't mean it will fix everything; it could be just part of the issue. Maybe it then needs a new backplane, RAM, etc., which can add up quickly. I'll update when I have a solution instead of bumping my thread over others with misc updates.
Edit:
The seller got back to me with a "well, it worked here; set up a return," which means I cover the shipping. I sent him a video showing that it indeed doesn't work, along with the error lights on the motherboard, and asked if he would do a partial refund so I can at least replace some parts to see if I can fix it.
I don't want to blast them or anything, but this is their store, and maybe they will come back with free return shipping or a partial refund: https://www.ebay.com/str/eftnetworks
Based on our discussion before I bought the server, I don't feel like they know much about networking/server gear, even though that's mostly what they sell. But they have what appears to be a great seller rating, so it wasn't too concerning.
They did show screenshots from inside the iDRAC web UI in the listing, and that it does POST to the BIOS, so I don't know if it's shipping damage or just my bad luck. The odd thing to me is that they don't list their items as parts/repair only, but if you scroll down any item they sell, they pretty much imply that regardless of condition; however, they don't put it in the description or title. It seems slightly deceptive, and I didn't notice it, but that's hindsight on my end.