Newbie Here! I'm having problems getting a second IP address

Seriously, I'm just a dumb old guy with a computer. I don't know much about techy things, and when things go wrong I need help.

I just built a DIY NAS to replace my old, now-obsolete (but still working) Drobo 5N2.

  • JONSBO N1 Mini-ITX NAS Chassis, ITX Computer Case, 5+1 Disk Bays NAS Mini Aluminum Case, SFX Power
  • Intel N100 N305 NAS Motherboard 6 Bay, Mini ITX Mainboard, Purple PCB, 12th Gen Intel N100
  • Patriot Signature Line Series DDR5 32GB (1 x 32GB) 4800MHz SODIMM
  • Ediloca EN760 SSD with Heatsink 500GB PCIe Gen4, NVMe M.2 2280, 4800MB/s,
  • Binardat 10G Ethernet PCIe Network Adapter, NIC
  • Apevia SFX-AP500W Mini ITX Solution/Micro ATX/SFX 500W Power Supply

Storage

  • 3× Seagate IronWolf 8TB NAS Internal Hard Drive HDD – 3.5 Inch SATA 6Gb/s 7200 RPM 256MB Cache for RAID
  • 2× Western Digital 4TB WD Red NAS Internal Hard Drive HDD - 5400 RPM, 256 MB Cache, 3.5"
  • 240GB SATA SSD 2.5" Solid State Drive

OS
TrueNAS Scale 24.10.0.2

So the build went well right up until it was time for the initial boot. Pressed the button… Nothing. No beep. No fans. No LEDs. Nothing. So now what? Wait, I know, maybe I mixed up the front panel connections. Nope, tried every possible combination. Still nothing. Long story short, I pulled my hair out for the next two days thinking I had screwed something up, until I finally took it to my local computer store to see where I went wrong. Once there, I watched them mess with it for the next 90+ minutes and still nothing. Finally somebody in the background piped up, “Well, maybe it’s the RAM.” Sure enough, they swapped out the RAM and I was looking at the BIOS screen within a minute. Lesson learned: I just assumed that because the RAM was new it would be good. Not so. Name-brand RAM, too.

After that things went better. I installed TrueNAS SCALE 24.10 on the 240GB SSD. I set up the three 8TB drives as a RAIDZ1 pool and the two 4TB drives as a striped mirror pool. The 500GB NVMe was later set up as a cache drive for the RAIDZ1 pool. (Not sure if I got that right, but it seems to go faster with the cache drive.)
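
For anyone keeping score, here is roughly how those two layouts work out capacity-wise (a back-of-the-envelope Python sketch that ignores ZFS metadata overhead and the TB-vs-TiB difference):

```python
# Rough usable-capacity math for the two pools described above
# (raw vendor TB only; ignores ZFS metadata/slop overhead and TB-vs-TiB).

def raidz_usable(drive_tb: float, n_drives: int, parity: int) -> float:
    """Approximate usable space of a single RAIDZ vdev."""
    return drive_tb * (n_drives - parity)

def mirror_usable(drive_tb: float) -> float:
    """A 2-way mirror stores one copy's worth of data."""
    return drive_tb

pool1 = raidz_usable(8, 3, parity=1)   # 3 x 8 TB RAIDZ1
pool2 = mirror_usable(4)               # 2 x 4 TB mirror
print(f"RAIDZ1 pool: ~{pool1:.0f} TB usable, survives 1 drive failure")
print(f"Mirror pool: ~{pool2:.0f} TB usable, survives 1 drive failure")
```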

The motherboard has two 2.5G RJ45 ports and I added a 10GbE PCIe NIC. My Mac Studio is connected to the 10G NIC. The default settings had given a DHCP IP address to the two 2.5G NICs but no IP for the 10G PCIe card. I figured out how to configure a static connection on the 10GbE PCIe NIC by changing the alias. I connected my Mac over the 10G link and was able to transfer files. The best speed I've seen is about 600 MB/s.
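
As a rough sanity check on that 600 MB/s figure, here is what the raw line rates work out to (a quick Python sketch; real-world throughput always sits somewhat below line rate because of Ethernet/TCP/SMB overhead):

```python
# Quick sanity check on link speeds vs the ~600 MB/s observed over the 10G link.
# Raw line rate only; real throughput is lower due to Ethernet/TCP/SMB overhead.

def line_rate_mb_s(gigabits_per_second: float) -> float:
    """Convert a link speed in Gb/s to MB/s (1 byte = 8 bits)."""
    return gigabits_per_second * 1000 / 8

for name, gbps in [("2.5GbE", 2.5), ("10GbE", 10.0)]:
    print(f"{name}: {line_rate_mb_s(gbps):.0f} MB/s raw line rate")

# 2.5GbE: ~312 MB/s raw, so a 2.5G port could never reach 600 MB/s.
# 10GbE : 1250 MB/s raw, so ~600 MB/s means the 10G link is being used,
#         with disks, the protocol, or a single-stream copy as the likely cap.
```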

So the problem I'm having is that I've somehow broken the IP address assignment on the original integrated 2.5G NICs when I set up the 10G NIC, and I'd like to get that sorted out so that all of the ports work.

I believe you can only have one interface active with DHCP. You have to work through the networking GUI, then test and save your setup. The interfaces also have to be on different network address ranges.
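
To make the "different address ranges" part concrete, here is a small sketch using Python's standard ipaddress module; the interface roles and addresses are made-up examples, not anything pulled from this build:

```python
# Why TrueNAS rejects two interfaces on the same subnet: the routing table
# can't decide which port to use for that network. The stdlib ipaddress
# module makes the overlap easy to see.

import ipaddress

nic_10g  = ipaddress.ip_interface("192.168.10.2/24")   # example static alias, 10G card
nic_2_5g = ipaddress.ip_interface("192.168.10.3/24")   # hypothetical 2.5G port

print(nic_10g.network)                                  # 192.168.10.0/24
print(nic_2_5g.network)                                 # 192.168.10.0/24
print(nic_10g.network.overlaps(nic_2_5g.network))       # True -> invalid setup

# Put the second port on its own range instead:
nic_2_5g = ipaddress.ip_interface("192.168.20.3/24")
print(nic_10g.network.overlaps(nic_2_5g.network))       # False -> OK
```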

I don't know how well your 10G card will work. It looks to use an Aquantia(?) chipset. The usual recommendations are Intel, Chelsio, etc.
https://www.truenas.com/community/resources/10-gig-networking-primer.42/

You only list 32GB of RAM, so you should get rid of the 500GB NVMe as L2ARC. It will just eat into the RAM available to the regular ARC. 64GB is the usual starting point for adding L2ARC.
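
For a rough sense of the cost, here is a back-of-the-envelope Python sketch. The ~70-bytes-of-RAM-per-cached-block figure is an approximation commonly quoted for OpenZFS and varies by version and workload, so treat the results as orders of magnitude only:

```python
# Rough estimate of how much RAM the L2ARC headers themselves consume.
# Assumption (approximate, commonly cited for OpenZFS): ~70 bytes of ARC
# header per block cached on the L2ARC device. Actual cost varies.

L2ARC_BYTES = 500e9          # 500 GB cache device
HEADER_BYTES_PER_BLOCK = 70  # approximate, varies by OpenZFS version

for block_size in (16 * 1024, 128 * 1024, 1024 * 1024):
    blocks = L2ARC_BYTES / block_size
    overhead_gb = blocks * HEADER_BYTES_PER_BLOCK / 1e9
    print(f"{block_size // 1024:>4} KiB blocks: ~{overhead_gb:.2f} GB of RAM for headers")

# Small-block workloads are where a big L2ARC really bites into a 32 GB system;
# and any RAM spent indexing L2ARC is RAM the (much faster) primary ARC can't use.
```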

See the ZFS primer and the hardware docs for your version of TrueNAS.
https://www.truenas.com/docs/references/zfsprimer/

2 Likes

The onboard Aquantia in my NAS works OK-ish, but seems to drop connections under heavy loads. I should probably reconfigure to use the Chelsio card I put in there for testing.

As was said, you’re really only supposed to have DHCP on one interface. It also isn’t a valid configuration to have more than one interface on the same subnet. See:

1 Like

“striped mirror” does not sound right.
Assuming you made a (2-way) mirror vdev, you either used it as a separate pool, or you striped it with the 3-wide raidz1 in the same pool. To be honest, these 4 TB WD Red with 256 MB cache look suspiciously like the SMR version of WD Red, and if so they should best not be used at all with ZFS.
For the sake of your data, I'd prefer that you had five 8 TB drives (or larger) and a 5-wide raidz2. If you use apps, you may use this 500 GB M.2 as a non-redundant app pool, backed up to the HDD pool.

1 Like

Thnx for taking time to reply. You're probably correct; I'm just using incorrect terminology. To clarify, I have set up two separate pools. The first pool has three 8TB drives and has been set up as a RAIDZ1 pool with single-drive redundancy and a 500GB NVMe cache drive. The second pool is the two 4TB drives, mirrored together for one-drive redundancy. Further, I'm not sure if the WD drives are SMR or CMR, but they were just a couple of spare drives that I had kicking around, so I basically put them in there temporarily to test the system before buying larger drives for my primary vdev. I will ultimately remove the 4TB WD drives and expand my primary pool with two more 8TBs. (Hoping for a good Black Friday sale.) :wink:
As for the 500GB NVMe, I noticed a significant increase in file transfer speed after I set it as a cache drive for the RAIDZ1 pool: transfer speeds were around 300-400 MB/s before and up to 600 MB/s after adding the cache drive. After setting up the cache drive I also noticed that the 32GB of RAM became almost fully utilized as cache, while system services were using only 2-3GB and about 2GB of RAM was left free. I'm totally fine with the fact that the RAM is fully utilized as long as it's not going to cause another problem. I could increase the RAM, but the motherboard only has a single SODIMM slot and will only support up to 48GB. I would be reluctant to disconnect the cache drive unless there is a better configuration that will increase transfer speeds, or a reason why I definitely should not have a cache drive set up as I have done.
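
If you want to see exactly where that RAM is going, here is a small sketch that reads the kernel's ZFS ARC statistics directly on the NAS (TrueNAS SCALE is Linux OpenZFS underneath, which exposes them at /proc/spl/kstat/zfs/arcstats):

```python
# Print ARC and L2ARC usage straight from the kernel stats exposed by
# Linux OpenZFS. Run on the NAS itself; field names are standard arcstats entries.

ARCSTATS = "/proc/spl/kstat/zfs/arcstats"

def read_arcstats(path: str = ARCSTATS) -> dict[str, int]:
    stats = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            # data lines look like: "<name> <type> <value>"
            if len(parts) == 3 and parts[2].isdigit():
                stats[parts[0]] = int(parts[2])
    return stats

s = read_arcstats()
gib = 1024 ** 3
print(f"ARC size      : {s['size'] / gib:.1f} GiB (target max {s['c_max'] / gib:.1f} GiB)")
print(f"L2ARC data    : {s['l2_size'] / gib:.1f} GiB on the NVMe")
print(f"L2ARC headers : {s['l2_hdr_size'] / gib:.2f} GiB of RAM")
print(f"ARC hit rate  : {100 * s['hits'] / (s['hits'] + s['misses']):.1f}%")
```
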
But enough about that. The real issue I contacted this forum about is sorting out the problem of not being able to use all of the Ethernet ports. It seems to me that I need to figure out how to configure a second subnet mask, if that's even possible. Any help with that would be very much appreciated.

What is it, really, that you’re trying to accomplish here?

2 Likes

See the memory sizing guide on ARC, L2ARC and RAM.

https://www.truenas.com/docs/scale/24.10/gettingstarted/scalehardwareguide/#memory-sizing

Fair enough. But as @dan already pointed out, what’s the actual issue here?
TrueNAS really wants ONE network interface plugged in—or two with IPMI.
You will NOT achieve 5 Gb/s transfers by bonding the two 2.5 Gb/s interfaces together. If you have a managed switch you may aggregate the two links, but that will only provide redundant access should one cable fail, or 5 Gb/s of bandwidth overall across many clients. Any single client is only going to use one physical link at a time and will be capped at a maximum of 2.5 Gb/s.
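
To illustrate why a single client stays on one link, here is a simplified Python sketch of the common "layer2" transmit hash described in the Linux bonding documentation (XOR of the MAC addresses modulo the number of links); the MAC addresses are made up, and real bonds may use other hash policies:

```python
# Why bonding two 2.5G ports doesn't give one client 5 Gb/s: the bond picks
# one physical link per source/destination pair. Simplified "layer2" hash.

def mac_to_int(mac: str) -> int:
    return int(mac.replace(":", ""), 16)

def layer2_link(src_mac: str, dst_mac: str, n_links: int) -> int:
    return (mac_to_int(src_mac) ^ mac_to_int(dst_mac)) % n_links

NAS_MAC = "aa:bb:cc:00:00:01"      # example addresses, not real hardware
MAC_STUDIO = "aa:bb:cc:00:00:22"

# Every frame between the same two MACs hashes to the same link:
for _ in range(3):
    print(layer2_link(NAS_MAC, MAC_STUDIO, n_links=2))   # same index each time

# A different client can land on the other link, so aggregation helps many
# clients at once, but any single client still tops out at 2.5 Gb/s.
OTHER_CLIENT = "aa:bb:cc:00:00:23"
print(layer2_link(NAS_MAC, OTHER_CLIENT, n_links=2))
```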

1 Like

Almost certainly the best arrangement is to ignore the 2.5 GbE ports entirely, get a switch with two 10GbE ports + however many 1/2.5 GbE ports are needed, and use that to connect the NAS, the Mac, and the rest of the network devices together. They’re pretty readily available and cheap these days.

3 Likes

Hello Dan. Thnx for chiming in. My ultimate goal is to connect my Mac Studio desktop computer to the 10G NIC and have the two integrated 2.5G ports active, so that I can connect one to my TV (media server) and the other as a spare for my laptop or whatever as need be. I have no particular desire to increase performance or bandwidth by linking, bridging, bonding or whatever. I just want them to work. I thought I could just do that (dumb old guy), but apparently I can only have one IP address for all the NICs on one subnet. If I understand it correctly, I need to configure a second subnet in order to accomplish that.
The 10G NIC is currently connected to my Mac and seems to work fine with a reliable and consistent connection. The thing is, I can no longer get the integrated 2.5G NICs to connect. They show up in the TrueNAS network widget, but when I try to edit them I cannot assign an alias IP address to either of them. I am aware that I can only have a single DHCP connection; I have unchecked the DHCP box for all NICs in the network widget. I was able to assign an alias IP address to the 10G NIC, but now I'm unable to assign an IP to either of the two 2.5G NICs. I'm thinking I've just made a stupid old-guy mistake and just need to reconfigure the settings to make this work. Simple, right? But I'm obviously missing something. Further, I would rather not have to buy more hardware (i.e., managed switches) in order to do this.
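
For reference, here is a sketch of how a no-switch layout could be planned, with every directly-attached device on its own subnet. All of the addresses are made-up examples, each device would need matching static settings on its side, and the TV/laptop would only see the NAS this way (part of why the switch route ends up simpler):

```python
# One possible layout for "TV on one 2.5G port, laptop on the other,
# Mac on the 10G card": each directly-attached device gets its own subnet.

import ipaddress

plan = {
    "10G NIC  <-> Mac Studio": ipaddress.ip_network("192.168.10.0/24"),
    "2.5G #1  <-> TV":         ipaddress.ip_network("192.168.20.0/24"),
    "2.5G #2  <-> laptop":     ipaddress.ip_network("192.168.30.0/24"),
}

# Sanity check: no two links may share an address range.
nets = list(plan.values())
for i in range(len(nets)):
    for j in range(i + 1, len(nets)):
        assert not nets[i].overlaps(nets[j]), "subnets must not overlap"

for link, net in plan.items():
    hosts = list(net.hosts())
    print(f"{link}: NAS {hosts[0]}, device {hosts[1]}, mask {net.netmask}")
```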

Oh, I see. From the outset I was convinced that I had just configured my setup incorrectly and that changing my settings would solve my problem. With that in mind I was trying to avoid purchasing additional switches/equipment. But as you suggest, a simple switch is not very expensive, and if that will solve the issue then I think I should just do that.
Thnx so much for your input. :slight_smile:

So here’s the problem: even though you don’t realize it, you’re trying to use your NAS as a network switch. It isn’t a network switch. Using it against its design is likely to prove frustrating. Here are the options I see:

DHCP has masked many of the complexities of TCP/IP for simple networks. But once your network is less simple (and multiple interfaces on the same device is not simple), suddenly you need to know much more. Or reframe your problem such that your network is still simple.

2 Likes

Including your router/DHCP server :wink:

1 Like

Agreed, simple sounds like the solution. I just ordered a 6-port switch (4× 2.5G plus 2× 10G) and it will be here Tuesday.
I guess I was trying to do what you describe in the second option above; I just don't know how to configure different/new subnets. I also didn't realize that this is something a NAS would have problems with.
Thnx again Dan.

Hey, while I've got your attention: @SmallBarky has suggested that the way I've configured my RAIDZ1 pool, with a 500GB NVMe stick attached to it as a cache drive, may be problematic. He suggests that I should have 64GB of RAM in order to have an L2ARC. Is there a different or better way to configure my pool to make use of the 500GB stick and achieve better data transfer speeds? Or should I just disconnect it and use it for other purposes? I could install 48GB of RAM, but that is the max my motherboard will support.

I'd agree with Barky that the cache device isn't doing you any favors; best to remove it from your pool and find another use for it.