I have two Intel Optane H10 drives installed. That's the model with 256 GB of NAND and 16 GB of 3D XPoint on a single SSD. One shows up fine as 256 GB, but the other only as 16 GB, and I don't know why.
What are you using them for? Do you have one as SLOG and underprovisioned? Please post your TrueNAS version.
Browse some other threads and do the tutorial with the bot to get your forum trust level up, if you need to post images, etc.
TrueNAS-Bot
Type this in a new reply and send to bring up the tutorial, if you haven't done it already.
@TrueNAS-Bot start tutorial
…and which requires a specific (Windows-only?) driver to work as a single unit, right?
Given the general opinion of running ZFS on top of a RAID controller (and here, that would be a third-party software RAID), the easy answer is: Don't try to use these with ZFS.
Unless, maybe, you manage to have the QLC NAND and the 3D XPoint report as separate drives.
The output of lspci (possibly with extra verbosity) could be of interest to understand how the drives show up.
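For example, something along these lines (a generic sketch; -nn and -vv are standard lspci flags, the grep pattern is just one way to narrow the output to NVMe controllers, and the 06:00.0 address is only an example):

```
# List PCI devices with vendor/device IDs, filtered to NVMe controllers
sudo lspci -nn | grep -i 'non-volatile'

# Verbose dump for a single controller at an example address
sudo lspci -vv -s 06:00.0
```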
Eureka! The x4 H10 consists of two independent x2 drives.
Intel officially names this device "Optane Memory H10 with Solid State Storage". It is much easier to think of it as a hybrid drive, or two drives in one. On one half of the M.2 stick, Intel has placed 32 GB of Optane memory. The rest of the M.2 is used to host 512 GB of QLC-based NAND. The two are independent drives, each with dedicated x2 PCIe Gen 3 bandwidth. In fact, if you disable Optane in the Intel Rapid Storage Technology driver, the two devices will appear as independent in the Windows 10 Device Manager. Used as Intel intends, however, they will appear as a single drive.
Unless your slots can bifurcate down to x2x2, or there's a small PCIe switch on the H10, you can only see "the first part". That one shows up as the QLC part and the other as the Optane part suggests that the slots have their lanes reversed.
What's the motherboard and its BIOS settings? How are the drives connected?
I have an ASRock RB4M2 card bifurcated to x4x4x4x4, which holds all four of my Optane modules: the two 256 GB H10s and the two 16 GB modules. My mainboard is a B450M Pro4. In lsblk, one of them shows up as 256 GB and nothing else, which would probably be the QLC, and the other one as 16 GB only. I never thought about them being combined in firmware; I assumed they would be combined in hardware with a controller, like SSDs with a RAM cache.
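For reference, a quick way to list each device with its size, model and serial is something like the following (the column list is just a suggestion; all of them are standard lsblk output columns):

```
# Show each block device with its size, model and serial number
lsblk -o NAME,SIZE,MODEL,SERIAL
```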
06:00.0, 07:00.0, 08:00.0, 09:00.0: these do show up as four different devices, with one possibly being a newer revision.
Now if someone has an idea why not all four drives appear in TrueNAS, and what could be done with them…
Drop me a sudo nvme list and a sudo nvme list-subsys, please. You can use triple backticks (```) around your results, or the </> button in the reply composer, to paste the text as preformatted output.
I think I didn't make myself clear. I do in fact have four real drives. Two are 16 GiB Optane Memory modules: they have only 16 GiB of 3D XPoint on them and nothing more. The other two are 256 GiB drives, each with 256 GiB of NAND plus 16 GiB of Optane. That's why four show up, but as you can see, three are showing up as 16 GiB, one of which is a 256 GiB drive.
```
Node                  Generic               SN                   Model                                    Namespace Usage                      Format           FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme3n1          /dev/ng3n1            PHBT839201T6016D     INTEL MEMPEK1W016GA                      1         14.40 GB / 14.40 GB        512 B + 0 B      K3110310
/dev/nvme2n1          /dev/ng2n1            PHBT839002V8016D     INTEL MEMPEK1W016GA                      1         14.40 GB / 14.40 GB        512 B + 0 B      K3110310
/dev/nvme1n1          /dev/ng1n1            PHTE146502ZA256D-2   INTEL HBRPEKNX0101AHO                    1         14.40 GB / 14.40 GB        512 B + 0 B      HPS3
/dev/nvme0n1          /dev/ng0n1            PHTE1411055E256D-1   INTEL HBRPEKNX0101AH                     1         256.06 GB / 256.06 GB      512 B + 0 B      HPS3
```

```
nvme-subsys3 - NQN=nqn.2014.08.org.nvmexpress:80868086PHBT839201T6016D     INTEL MEMPEK1W016GA
\
 +- nvme3 pcie 0000:09:00.0 live
nvme-subsys2 - NQN=nqn.2014.08.org.nvmexpress:80868086PHBT839002V8016D     INTEL MEMPEK1W016GA
\
 +- nvme2 pcie 0000:08:00.0 live
nvme-subsys1 - NQN=nqn.2014.08.org.nvmexpress:80868086PHTE146502ZA256D-2   INTEL HBRPEKNX0101AHO
\
 +- nvme1 pcie 0000:07:00.0 live
nvme-subsys0 - NQN=nqn.2021-10.com.intel:nvm-subsystem-sn-phte1411055e256d-1
\
 +- nvme0 pcie 0000:06:00.0 live
```
It looks like the Optane H10 does not even work on Intel platforms when it's on CPU lanes: you need to put it on PCH lanes. (See the table at the bottom of page 1.)
Not much luck with H10 in the old forum:
Intel only supports using the H10 on a validated Intel platform with their RST software. Even their own C242, in the right Coffee Lake generation, does not qualify…
Wow, that's really unfortunate. I really should have looked into this more before buying them. They would probably work in my main PC, but 256 GB is not going to get me very far there. Maybe when I decide to upgrade my PC, I can use that hardware for my server and get proper use out of them.
Return and resell are your best (least bad) options.
These duds are already out of support and not usable beyond 12th gen and RST v.19.
That's really odd. lspci and nvme are picking them up, but it is indeed showing the QLC part of one of the drives as 16G as well. It may be down to the CPU/PCH issue, but I would've expected it not to register the storage at all.
Edit:
Okay, that makes more sense to me. I didn't see this post (I entered lower down and scrolled up) pointing out the four physical NVMe drives. This will be because the motherboard does not support the required x2x2 bifurcation level.
Some onboard M.2 slots may support this, and the H10 Optane cards can be addressed and utilized individually on those boards even without Intel software.
But it looks like your board will only go down to x4x4x4x4, and not "x2x2x2x2x2x2x2x2".
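If you want to double-check what the enumerated controller itself reports, rather than what the namespace shows, something like this may help (a sketch, assuming nvme-cli is installed and that /dev/nvme1 is the H10 half in question; tnvmcap is the total NVM capacity field in the id-ctrl output):

```
# Query the controller identify data and pull out the total capacity field
sudo nvme id-ctrl /dev/nvme1 | grep -i tnvmcap
```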
One model number has an extra O, so these are actually two different revisions. I suppose that the two revisions present their drives in opposite order: Optane first on *0, 660p first on the other (which lspci shows as rev. 03). In either case, only the first drive registers, the same as putting multiple drives on a riser in a slot that is NOT bifurcated.
That, or the bifurcating riser reverses the lanes on some M.2 slots.
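To confirm the revision difference, something like this should show it (standard lspci flags; the bus addresses are taken from the list-subsys output above):

```
# Show vendor/device IDs and the revision for each enumerated H10 half
sudo lspci -nn -s 06:00.0
sudo lspci -nn -s 07:00.0
```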