Noob, need help, can't install OS

I'm losing my mind. Let me begin with: I have no idea what I am doing. I have spent an entire weekend trying to install TrueNAS Scale on a UCS S3260 and have gotten nowhere. I have two 1.6 TB Intel SSDs and thought I would put them in RAID1 to install the OS on, but I get this error: disk is too small to hold gpt data (0 sectors) aborting.

Then I thought the drivers for the RAID controller weren't there, because I originally tried to install Proxmox but it would not read my NIC (same with Debian/CentOS). Also, RAID1 would require the controller, but I have also had everything in a JBOD state and still get the same error. Then I read that ZFS shouldn't sit on a RAID controller and that it's better to disable it. So I tried to disable it, but I think this thing has two?!?! No idea what I am looking for at this point, to be honest.

Tried to install CentOS and I get this error: error wiping old signatures from /dev/sdf
Took both SSDs out, secure erased them through an ASUS BIOS, then set a GPT partition table since it kept saying something about MBR. I still get the same MBR/GPT errors when installing TrueNAS.

I think I am at a point where I have just tried so many random things that I don't know what to do. I can't even run most of the Linux commands to troubleshoot the drives because I can't get an OS installed. Any help would be greatly appreciated. I can post more specific stuff; it's just that right now I wouldn't even know which error to associate with what (the last 72 hours have been a blur).

Why would you waste two 1.6 TB SSDs as boot devices? TrueNAS does support mirrored boot devices, but the boot device is only the boot device–you can’t use it for app or other data storage. Thus you’d really want the smallest SSD(s) you can find, as long as they’re at least 32 GB. Try attaching one directly to the motherboard, and see how the installation goes.

I’m not at all familiar with that system beyond what I see on the data sheet. I see one of the options is to have two nodes in one chassis–in that case, I have no idea how they interact with the storage. TrueNAS doesn’t support HA unless you’re using the Enterprise version (which would require iX hardware).


I have no experience with Cisco servers, but I think we can assume that it is still an industry-standard PC-type architecture, since it apparently can run Windows Server…

  1. @dan 's advice is sound - get yourself a single small SSD as a boot drive - a decent brand but cheap. The spec says that this supports NVMe, but it doesn't say what type.

  2. However if you want to install for practice on one of the 1.6TB SSDs, then install as a non-mirror first - you can always add a mirror device later.

  3. Try NOT to use secure erase on any SSD - SSDs have a limited number of writes, and secure erase will use a significant number of these. Instead, to wipe a drive, just re-initialise the GPT partition table using another system.

  4. According to the spec, the controller is an LSI HBA - so you will need to flash it into IT mode using a utility. See other forum posts for details on how to do this.

  5. I am unclear what your system has two of:

    • 2x nodes - according to the spec, each node is a separate system with shared access to storage, which almost certainly won't work for TrueNAS, but you can probably run two separate TrueNAS systems with independent storage on them; OR

    • One node and 2x LSI HBAs, in which case you can attach a LOT of disks.

  6. You may need to configure the hardware to make each node separate, and you almost certainly need to set the BIOS for various things. If the HBA is in RAID mode then there are almost certainly BIOS settings to define which disks are combined into a RAID logical disk as presented to the O/S, and this should be configurable through the BIOS. To start off with, you definitely need to configure the RAID so that it shows each physical drive as a separate logical drive - and the "(0 sectors)" message suggests to me that you haven't got this right yet. Once you install the O/S and flash the HBA to IT mode, the physical->logical mapping is turned off. A quick sanity check for this is sketched just after this list.
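
If it helps, here is a minimal sketch of that sanity check, assuming you can get to a shell from the TrueNAS installer menu or a live Linux USB (device names and menu wording will vary on your system):

    # Which storage controllers does the OS actually see?
    lspci | grep -i -E 'sas|raid|lsi'
    # Each physical disk should appear here with its real size and model.
    lsblk -o NAME,SIZE,MODEL,TYPE
    # If a drive shows up with a size of 0, or as one huge "virtual" volume,
    # the controller is still remapping disks and the RAID/BIOS settings need another look.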

Hope this helps. But if you need more help, send us some pics of the internal boards and disk connections, as well as pics of the relevant parts of the BIOS settings and we will try to help more.


If his objective is to use RAID1 for the boot pool, he won't be able to if he flashes it to IT mode. Not that it would be a bad thing at all, since hardware RAID is not compatible with ZFS.


Ah yes - I had assumed that he meant ZFS mirror, but I agree that he probably meant RAID1.

@Churlie - As @Davvo says, don’t use hardware RAID under any circumstances. Use ZFS mirroring instead.

However, boot drive mirroring has limited value because nothing you need except the system configuration is stored on the boot drive - and once you have TN installed you can use @joeschmuck 's Multi-Report script to send you the system config once per week by email as a backup.


If a system really needs high availability of the boot pool, please read this.

However, in most cases it's just better to go with a single boot drive… It dies, you swap it and import the configuration, done. Or, if you don't have easy access to the system, you can use a mirrored boot pool: it refuses to boot, you connect to the IPMI and tell the BIOS to boot from the healthy drive of the pair, and it's fixed until you can get your hands on the system.

One of the greatest strengths of TN is that, as long as you have done your homework, you can easily get the system back online when trouble happens. That's if you decide that downtime is acceptable; otherwise you have the link.

TN is a fine piece of code.


They were what was already in there, lol. This is a good point, let me see if I can find a smaller drive and plug it into the motherboard.

I did mean RAID1. I guess I didn't consider hardware RAID being an issue until I started struggling right away.

  1. Will look around; the 1.6 TB drives are just what was on hand and I was being dumb.
  2. Sounds like mirroring isn't worth it, from the other replies?
  3. Will do.
  4. There is a C3000 RAID controller for the M4 server as well as the LSI. Should I have PCH SATA Mode set to AHCI, Disabled, or LSI SW RAID? Do I need to flash it to IT mode if I disable it? (I'm totally lost on what's best for ZFS and TrueNAS.)
  5. It has one server node, and I guess the LSI RAID and the C3000 RAID (which is what I was referring to having two of).
  6. Initially these were all set to virtual drives. I then set everything back to default and put them all to "JBOD" (no clue if that is right). I am still confused about this part: do I make virtual drives even if I disable RAID? Curious what the best setup is here for performance.

The goal is to make this a fast NAS for video editing and Plex. I'm an idiot and trying to stumble my way through. Wanted to go with TrueNAS since I have decent hardware that I thought would get the best performance from TrueNAS (I have 512 GB of RAM and 600 TB of HDDs).

This is what I get when trying to install TrueNAS on one of the drives:

You do NOT need a fast NAS for Plex. I have a very low power processor and small memory in my NAS and it is great for Plex. Depending on your software and how you set it up, you may need a fast NAS for video editing, but the key to this is probably to have sufficient RAM that the video file(s) you are editing are kept in ARC and (for writing) that you have async writes on.
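
For what it's worth, once TrueNAS is installed, both of those things can be checked from the shell. A minimal sketch, assuming a hypothetical dataset called tank/video (and note that sync=disabled trades write safety for speed, so only use it for data you can afford to re-copy):

    # How much RAM the ZFS ARC is currently using.
    arc_summary | head -n 30
    # Check and change the sync setting on an example dataset ("tank/video" is a placeholder).
    zfs get sync tank/video
    zfs set sync=disabled tank/video   # async writes: faster, but recent writes can be lost on power cut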

We all started out with a lack of knowledge and learned what we needed to. However, "stumbling through", i.e. trial and error, may be a way to get something running, but because it is random attempts until something just happens to work and then you stop, it is quite likely to get you a configuration that gives you major headaches later on, e.g. in stability or pool recovery.

So, my advice (as an IT professional of 40+ years) is to get some expert advice here (or hire someone who genuinely knows what they are doing, avoiding all charlatans who say they do but really don’t).

I have no experience of using C3000 (or even LSI) cards, but all the experts say to use LSI, so if you can connect all your drives to that I would remove the C3000 card from the server completely.

Setting to JBOD is a good thing to try, because JBOD (Just a Bunch Of Disks) should present the disks in their raw format, i.e. no hardware RAID. AHCI also sounds like no hardware RAID. (By comparison, "virtual drives" pretty much describes what you are trying to avoid - a hardware card pretending to be one type of disk whilst doing clever things with several real disks behind the scenes.)
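
One rough way to verify that the disks really are being passed through raw, assuming smartmontools is available in whatever shell you are using (and /dev/sda is just an example device name):

    # Should report the Intel SSD's own model and serial number.
    smartctl -i /dev/sda
    # If it reports the RAID card's logical volume instead of the real drive,
    # the card is still hiding the physical disks from the OS.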

Best performance is not just about the size of your **** … ooops … I mean RAM and disk. It is about understanding how you will be accessing the data from your PC, and then understanding what happens under the covers - the impact of the network and the way ZFS works with RAM and disks - to decide on the best way of setting things up. In my own case, even with a very resource-constrained and SLOOOOOOW TN server, it is the network that is the constraining factor.
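
If you want to check whether the network is your bottleneck too, an iperf3 run between the editing PC and the NAS is the usual quick test. A sketch, assuming iperf3 is available on both ends and that 192.168.1.50 stands in for the NAS's address:

    # On the TrueNAS box (server side):
    iperf3 -s
    # On the editing PC (client side), a 30-second throughput test:
    iperf3 -c 192.168.1.50 -t 30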

One thing to remember though about using a beast of a server is that it may result in the best of something unexpected - the best (i.e. highest) electricity bill you can possibly imagine.


Sticks, right? The ones you RAM in the tight DIMM slots.


Wipe the drives and try again after verifying the checksum of the downloaded file.
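
For the checksum part, a quick sketch on Linux (the filename is illustrative - match it to whatever you actually downloaded, and compare against the hash published on the download page):

    sha256sum TrueNAS-SCALE-24.04.0.iso
    # If the printed hash doesn't match the published one, re-download the installer.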


This is one of two problems:

  1. You still haven’t got your BIOS settings right, and a RAID card is still presenting an invalid disk; or

  2. You need to use a command line utility to re-initialise the disks with an empty GPT partition table.

My advice - start the installation program, then before you try to install, go into the command shell and run some Linux commands to see what disks have been presented, their sizes, and so on.

I don’t have time right now to list out the commands in detail, but you can try:

  • lsblk -io KNAME,TYPE,SIZE,MODEL
  • parted --list
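
And if it turns out to be problem 2, here is a hedged sketch of re-initialising a disk, assuming /dev/sdX is the device you identified with the commands above and that wipefs/sgdisk exist in the shell you are using (a live Linux USB will have them if the installer shell doesn't):

    # DANGER: this destroys everything on the target disk - double-check the device name first.
    wipefs -a /dev/sdX          # clear old filesystem/RAID signatures (the "old signatures" error)
    sgdisk --zap-all /dev/sdX   # wipe any existing MBR and GPT structures
    sgdisk -o /dev/sdX          # write a fresh, empty GPT partition table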

Hope this helps.


This is the RAID card; still struggling. Formatted a 250 GB SSD plugged into a slot on the server. Still get the same error that the disk is too small to install TrueNAS.

The server only has one node. Got a smaller drive and plugged it directly into the SSD slot on the motherboard; still the same issue.

It’s difficult to know what advice to offer next.

We don’t know the hardware cabling from ports/cards to drives.

We don’t know your BIOS settings.

And the devil is in the details.

In the absence of detailed information, I think my only advice is:

  1. Remove all the HBA/RAID cards and disks and cables except the system boot drive, and get TrueNAS working first.

  2. Then add the LSI-based controller and the first set of disks, check in the BIOS that there is absolutely NO hardware RAID defined and that all disks are defined as JBOD, and then see whether the disks appear in TN correctly.

  3. Do the same with the Cisco RAID card (or sell it on eBay and replace it with an LSI-based card instead).

When you have the hardware working, come back to us for advice on what pools to define.

Thank you for all the help. I used the HUU from Cisco to update my BIOS and everything else, and boom, TrueNAS saw all the drives and installed on the new 250 GB SSD I put in. Still not 100% sure I won't fall into some trap later, but for now I still need to figure out why I can't see my network.

Anyway, thank you all for the time you took to help me out, I really do appreciate it and learned a lot just from this thread.