Beginner - TrueNAS Setup on used SuperMicro X10SDV-6C+-TLN4F Board

Regarding my “old” WD Red Pluses
It seems like the seller was telling the truth when he said the disks had mostly been “inactive”.

The drives have seen approx. 37K power-on hours,
but “only” 7.8K spindle power-on hours & 2.8K “head flying” hours.

Device Statistics (GP Log 0x04)
Page  Offset Size        Value Flags Description
0x01  =====  =               =  ===  == General Statistics (rev 1) ==
0x01  0x008  4             150  ---  Lifetime Power-On Resets
0x01  0x010  4           36925  ---  Power-on Hours
0x01  0x018  6     78535702790  ---  Logical Sectors Written
0x01  0x020  6       164011802  ---  Number of Write Commands
0x01  0x028  6    435328738408  ---  Logical Sectors Read
0x01  0x030  6       405173831  ---  Number of Read Commands
0x01  0x038  6      4080981120  ---  Date and Time TimeStamp
0x03  =====  =               =  ===  == Rotating Media Statistics (rev 1) ==
0x03  0x008  4            7852  ---  Spindle Motor Power-on Hours
0x03  0x010  4            2797  ---  Head Flying Hours
0x03  0x018  4           12154  ---  Head Load Events
0x03  0x020  4               0  ---  Number of Reallocated Logical Sectors
0x03  0x028  4               0  ---  Read Recovery Attempts
0x03  0x030  4               0  ---  Number of Mechanical Start Failures
0x03  0x038  4               0  ---  Number of Realloc. Candidate Logical Sectors
0x03  0x040  4              95  ---  Number of High Priority Unload Events
0x04  =====  =               =  ===  == General Errors Statistics (rev 1) ==
0x04  0x008  4               0  ---  Number of Reported Uncorrectable Errors
0x04  0x010  4          393222  ---  Resets Between Cmd Acceptance and Completion
0x05  =====  =               =  ===  == Temperature Statistics (rev 1) ==
0x05  0x008  1              29  ---  Current Temperature
0x05  0x010  1              32  ---  Average Short Term Temperature
0x05  0x018  1              23  ---  Average Long Term Temperature
0x05  0x020  1              47  ---  Highest Temperature
0x05  0x028  1              14  ---  Lowest Temperature
0x05  0x030  1              41  ---  Highest Average Short Term Temperature
0x05  0x038  1              14  ---  Lowest Average Short Term Temperature
0x05  0x040  1              36  ---  Highest Average Long Term Temperature
0x05  0x048  1              16  ---  Lowest Average Long Term Temperature
0x05  0x050  4               0  ---  Time in Over-Temperature
0x05  0x058  1              65  ---  Specified Maximum Operating Temperature
0x05  0x060  4               0  ---  Time in Under-Temperature
0x05  0x068  1               0  ---  Specified Minimum Operating Temperature
0x06  =====  =               =  ===  == Transport Statistics (rev 1) ==
0x06  0x008  4             828  ---  Number of Hardware Resets
0x06  0x010  4             307  ---  Number of ASR Events
0x06  0x018  4               0  ---  Number of Interface CRC Errors
                                |||_ C monitored condition met
                                ||__ D supports DSN
                                |___ N normalized value
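
For reference, that table is the ATA Device Statistics log, which smartctl can dump directly (the device name below is a placeholder):

smartctl -l devstat /dev/sdX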

I have attached the SMART statistics of one of the drives; all 5 WD Red Pluses look quite “alike” …

Graphics (attached as screenshots)

And “Text” …

WDC_WD40EFRX-68N32N0_WD-WCC7K3RUxxxx_2024-09-26_1128.txt (14.1 KB)

Could I still use the 4TB Reds for a while …
They seem to have been “idle” for a lot of the power-on time …

TIA
Bingo

Yes - it is getting quite confusing now. Next time please start a new thread.

If the only concern is power-on hours (and 4-5 years is not a lot for a NAS disk), if you are willing to accept that it might fail at some point in the next few years, and there are literally zero other concerns (no reallocated sectors), then I would personally keep using it.


Thank you for your elaborate answer.
I know where to start now.

I now have a Tosh N300 8TB and two NVMe disks (256 & 512 GB) waiting at the post office.

NVMe Boot
Should I use the 256 or the 512 GB as the TrueNAS boot drive?
If I want to make the spinning disks data-only (i.e. no TrueNAS log or apps),
could I put those on the “boot NVMe”? I doubt I’ll be using any apps at all.

6th HDD
I’m not sure there are HDD screws for the 6th drive in the chassis.
Do I need special screws??? (The box is at work right now.)
If the rubber dampening grommets are missing, any pointers on where to get new ones??

I’ll try Deb12 first; if successful, then I’m finally ready to install TN :grinning:

Be prepared for some pool/dataset questions coming soon.

And GENTS, thank you for your invaluable support.

Generally, the cheapest and smallest drive, as the extra capacity will not be used.

Yes. Make sure to always have a recent copy of the configuration file at hand.

There should be a complete set in the box. If not, contact support at Fractal Design.


Any quick hints on how to specify the NVMe for those “pools” during an install?
I’ll also have a look in the guide.

Wouldn’t be fair … I bought the NAS used …

Thnx :grinning:

You can still contact Fractal Design for parts, be it as complimentary service or a sale.

When replacing my rather old WD Red 4 TB drives (don’t have the exact model at hand; pre “SMR-gate”), because one of them started to throw errors and all were way past 5 years of operation, I opted for Seagate IronWolf. I found that the SMART data the drives return is largely garbage, so if you rely on tools like Scrutiny or pull drive health data into Grafana or some such, I’d not recommend them.

Returned them and got WD Red Plus instead.

Now “The Lady” is ready for the TrueNAS install.

I have succeeded in replacing the mSATA boot disk with a Patriot M.2 P300 256GB NVMe disk. And I have added a Tosh N300 8TB as the “now free” SCSI-0.

This was taken in a Secure Boot / UEFI-booted Deb12:

NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda           8:0    0   7,3T  0 disk 
sdb           8:16   0   3,6T  0 disk 
sdc           8:32   0   3,6T  0 disk 
sdd           8:48   0   3,6T  0 disk 
sde           8:64   0   3,6T  0 disk 
sdf           8:80   0   3,6T  0 disk 
nvme0n1     259:0    0 238,5G  0 disk 
├─nvme0n1p1 259:1    0   512M  0 part /boot/efi
├─nvme0n1p2 259:2    0   237G  0 part /
└─nvme0n1p3 259:3    0   977M  0 part [SWAP]

It was "tricky" to get Deb12 to install .... 
I powered the system down , and swapped the MSATA with the NVME.
I went to Bios setup , but there was no trace of a NVME ... 
Well no worries i thought, i'll just reinstall DEB12 via the USB Stick, i used before.

Nope … Got in trouble with : “The software kludge/abomination called UEFI”

Why ?? … I did use the same one to UEFI install DEB12 on the MSATA. :lying_face:
My guess is: That the NVME disk was “Blank” as in no EFI partition, and that made Bios EFI boot go “haywire”.
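
If that guess is right, an alternative to the CSM round-trip described below would have been to create an EFI System Partition by hand from a live USB. An untested sketch, assuming the NVMe is empty (this is destructive) and shows up as /dev/nvme0n1:

# Create a 512 MiB EFI System Partition (sgdisk is in the gdisk package)
sgdisk -n 1:0:+512M -t 1:ef00 /dev/nvme0n1
# Format it as FAT32, which UEFI expects
mkfs.vfat -F 32 /dev/nvme0n1p1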

I needed to get an EFI partition on that NVMe drive …
I disabled Secure Boot, enabled CSM in the BIOS, and tried the DEB12 install again.
Now it would boot from the DEB12 USB stick.

Installed DEB12 on the now-visible NVMe disk, which also creates an EFI partition.
Booted the newly installed (non Secure Boot) DEB12 install without issues.

In the BIOS I switched back to “Secure Boot enabled”, CSM disabled …
Well, before disabling CSM, the BIOS insisted that I set “Onboard Video OPROM” to EFI.

And to activate that setting I had to reboot …
Now I could disable CSM.

NB - I had:
M.2 PCI-E OPROM set to EFI
SLOT7 Bifurcation set to x4x4x4x4
SLOT7 OPROM set to EFI
ALL the time, no matter if I used legacy or Secure Boot.
I’m not sure if the SLOT7 settings have any influence on the M.2 NVMe slot,
but I’m quite sure I read that in order to be able to NVMe-boot from the M.2 slot, it had to be set to EFI.

My Secure Boot settings are here

At this point I could have tried to boot the already installed DEB12.
But I wanted to be sure I could UEFI-boot from the DEB12 USB stick, as I have a feeling the TrueNAS installer would use the same setup.

I saved the BIOS config and did an F11 boot selection, selected the DEB12 USB and … Yesss, now it booted in UEFI Secure Boot mode.

Installed DEB12 again … Getting quite used to doing that :upside_down_face:
No issues during install, and on the next reboot:
it Secure-Booted directly into DEB12 on the NVMe disk.
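
As a verification step (not something the install requires), the running Debian can list which UEFI boot entries the firmware registered:

# Show the firmware's UEFI boot entries and boot order
efibootmgr -v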

Long story, but I thought I’d describe my “issues” and how I solved them.

During the next days (or next weekend) I might have time to make a TrueNAS SCALE USB stick and install it.

Any issues creating a bootable stick from the ISO?
I hope it’s just a “dd or Balena Etcher” dump to the USB stick, or … ??
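
In case it helps, the dd variant is roughly the following - the ISO filename here is illustrative (yours will differ), and triple-check the device name, since dd overwrites the target without asking:

# Identify the USB stick first
lsblk

# Write the ISO to the stick (replace /dev/sdX with the stick, NOT a data disk)
dd if=TrueNAS-SCALE.iso of=/dev/sdX bs=4M status=progress conv=fsync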

It might be useful to someone some day… but I never had this kind of trouble on any motherboard with the plainest settings: “UEFI only / CSM disabled / Secure Boot disabled”. (It’s 2024! “Legacy” should not even exist as an option!)
IMHO, you got yourself into this trouble by throwing Secure Boot into the mix.


I thought UEFI boot was a TrueNAS requirement?
Doesn’t UEFI boot require Secure Boot to be enabled??

I’m absolutely NOT in love with UEFI / Secure Boot.

And I’m not a server guru.

Legacy should be supported, but UEFI is recommended. (It’s 2024…)
Secure Boot is absolutely NOT necessary. If someone gets physical access to your NAS it won’t help anyway.


Yes. But actually I just use Ventoy these days.

Stick the checksum file in the same directory as the ISO and you can verify the ISO was copied correctly.
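
Assuming the checksum file is in the standard sha256sum format, verification is one command (filenames here are illustrative):

sha256sum -c TrueNAS-SCALE.iso.sha256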


Hmmm … I’m always a bit reluctant to use new (unknown) “boot software”, even if the source is on GitHub.
The “Alipay” button at the bottom of the GitHub page makes it a bit better.
Then it might not be fully gov-funded …

There was a close call in late March.

I’ll have a look at Ventoy, but will prob. end up using Balena :blush:


As a beginner:
Should I install SCALE Dragonfish, or Electric Eel RC1?

Is the upgrade path from DF to EE easy for a beginner, or … ?

I’m leaning towards DF, primarily because I’m a beginner and wouldn’t be able to recognize an obvious “bug” in EE.

But if I’m hit with some (for a beginner) ugly upgrade path from DF to EE … I’m not sure.

Any hints?

As a beginner you should not mess with BETAs or RCs (not to mention nightlies…). As for a .0 release, you decide…
Upgrading is as simple as selecting a different train in the GUI.


Getting closer to TrueNAS Install

Still using DEB-12:

I performed SMART long tests

smartctl -t long /dev/sdx

on all the drives, and all passed … The 8TB Tosh took 667 minutes :roll_eyes:
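
Progress and results of those long tests can also be read back from the drive’s self-test log (device name is a placeholder):

smartctl -l selftest /dev/sdX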

And after that, some read/write throughput tests.
In hindsight I should have added count=45000 to all the dd commands, to get dd to stop nicely instead of having to Ctrl-C it.
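
For reference, the bounded variant of the read test would look like this - with bs=1M, count=45000 reads about 44 GiB and then exits on its own (/dev/sdX is a placeholder):

dd if=/dev/sdX of=/dev/null bs=1M count=45000 status=progress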

Read tests

*** sda         = Brand new Tosh N300 8TB

root@sm01:~# dd if=/dev/sda of=/dev/null bs=1M status=progress
10955522048 bytes (11 GB, 10 GiB) copied, 42 s, 261 MB/s^C
10649+0 records in
10648+0 records out
11165237248 bytes (11 GB, 10 GiB) copied, 43,7934 s, 255 MB/s


*** sdb..sdf    = 35K hour - WD Red Plus 4TB

root@sm01:~#  dd if=/dev/sdb of=/dev/null bs=1M status=progress
9767485440 bytes (9,8 GB, 9,1 GiB) copied, 56 s, 174 MB/s^C
9381+0 records in
9380+0 records out
9835642880 bytes (9,8 GB, 9,2 GiB) copied, 57,3095 s, 172 MB/s


root@sm01:~# dd if=/dev/sdc of=/dev/null bs=1M status=progress
10400825344 bytes (10 GB, 9,7 GiB) copied, 56 s, 186 MB/s^C
9998+0 records in
9997+0 records out
10482614272 bytes (10 GB, 9,8 GiB) copied, 57,4176 s, 183 MB/s


root@sm01:~# dd if=/dev/sdd of=/dev/null bs=1M status=progress
10928259072 bytes (11 GB, 10 GiB) copied, 61 s, 179 MB/s^C
10548+0 records in
10547+0 records out
11059331072 bytes (11 GB, 10 GiB) copied, 62,7862 s, 176 MB/s

root@sm01:~# dd if=/dev/sde of=/dev/null bs=1M status=progress
10498342912 bytes (10 GB, 9,8 GiB) copied, 62 s, 169 MB/s^C
10140+0 records in
10139+0 records out
10631512064 bytes (11 GB, 9,9 GiB) copied, 63,666 s, 167 MB/s


root@sm01:~# dd if=/dev/sdf of=/dev/null bs=1M status=progress
11387535360 bytes (11 GB, 11 GiB) copied, 67 s, 170 MB/s^C
10964+0 records in
10963+0 records out
11495538688 bytes (11 GB, 11 GiB) copied, 68,6915 s, 167 MB/s

Write tests - I chose ~44 GB, as I have 32 GB RAM


I was quite surprised that the read speed was slower than the write speed.
Is it normal to write faster than read?

Or is writing to /dev/null slower than reading from /dev/zero ???
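
One likely explanation: by default, dd writes go through the page cache, so a short write test partly reports RAM speed instead of disk speed. A version that bypasses the cache would look like this (DESTRUCTIVE - it overwrites the disk, so only run it on a drive with no data on it; /dev/sdX is a placeholder):

# oflag=direct bypasses the page cache so the figure reflects the disk itself
dd if=/dev/zero of=/dev/sdX bs=1M count=45000 oflag=direct status=progress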

The brand-new 8TB Tosh N300 is quite a bit faster than the old WD Reds.

Now to disable Secure Boot as per the recommendation here, and install DEB-12 to see it working.

And then install TrueNAS, maybe Sunday …

Since I’m going to use this box as a backup storage box:

This thread has me worried

I will make heavy use of NFS for backing up my other Linuxes.

Should I reconsider SCALE, and maybe install Core 13.3 for a start?

I read that the 13.3 ZFS is compatible with the newer SCALE versions.

Most bad stories, taken out of the context of the millions of good ones that you never hear about, will get you worried.

This guy has a specific piece of hardware, and he either has something incompatible with Debian Linux or a hardware fault that causes Linux to crash.

There are literally billions of Linux installations that don’t crash.

  1. Do we know that this one person was having crashes explicitly due to NFS?
  2. How many other NFS users are there who don’t get crashes?
  3. Why the heck would you use NFS for backups rather than e.g. ZFS replication or rsync, which are designed for incremental copying? (Sketches below.)
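
For what it’s worth, minimal sketches of both alternatives - the hostname, pool, and dataset names are made up for illustration:

# File-level incremental backup over SSH with rsync
rsync -aH --delete /home/ backupnas:/mnt/tank/backups/myhost/home/

# Block-level ZFS replication: snapshot, then send only the increment
zfs snapshot tank/data@2024-09-27
zfs send -i tank/data@2024-09-26 tank/data@2024-09-27 | ssh backupnas zfs recv tank/backup/data

On TrueNAS itself you would normally set this up as a replication task in the GUI rather than by hand.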

I read loads of posts from people who successfully updated from Core to Scale, helped by it being the same ZFS on both versions.

I read one article about someone who had used FreeNAS for a decade or so with a specific type of encryption that is no longer supported on SCALE.

At some point you are going to have to stop worrying and start building. My advice: Build it, and then before you put data on it, ask here for a final sanity check.