Disable PUIS? Exos SAS drives not powering up

Hi,
I’ve been trying to power up two Exos X18 18TB drives, but I can’t get them to work.
I’m using an HBA (IT mode) with SFF-8643 to SFF-8482 connectors.

Using the same connector, I tried another SAS drive I pulled from an old R730 and it worked, but the Exos drives still didn’t spin up.

I tried jumping pins 3+4 on the drive, but it didn’t help, though I can’t guarantee I did it right.

Has anyone been in the same situation or found a solution?
Would very much appreciate any help =)

Correction: I did not jump pins 3+4; I misunderstood and tried it on the data connector instead.
Is there any software solution to this?

The only software solution is to use a SAS enclosure with enclosure services.

It is my understanding that you can’t disable the “PWDIS feature” in firmware, because the feature is also intended to overcome firmware lockups / bugs.

There are two common methods to disable the PWDIS feature:

  1. Use a bit of electrical tape to cover the 3.3 V pins on the disk(s)
  2. Use a power cable or drive cage that does not wire up those pins.

The Wikipedia article on SATA even mentions these workarounds:

Workarounds include using a Molex adapter without 3.3 V or putting insulating tape over the PWDIS pin.

You weren’t supposed to jump anything; you were supposed to tape over/isolate the pin to prevent contact, as @Arwen explained.

I am not sure if jumping the pins could cause damage, hopefully it didn’t.

Hi,
I tried using electrical tape on the pins, and they spun right up.

I then went and purchased Molex-to-SATA adapters, and they worked perfectly.

I was in the middle of creating a pool with the drives when the power went down, and when I started the server back up, the drives were gone, and so was the pool.

I’ve tried reseating the connections and checking for the drives with lsblk, but nothing is showing up.
The two other NVMe drives are showing up fine.
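
For reference, here’s the kind of check I mean, beyond just running lsblk: a rough Python sketch that reads /sys/block directly, which lists the same devices lsblk reports.

```python
#!/usr/bin/env python3
"""List the block devices the kernel currently sees, similar to lsblk."""
from pathlib import Path

for dev in sorted(Path("/sys/block").iterdir()):
    sectors = int((dev / "size").read_text())      # size in 512-byte sectors
    model_file = dev / "device" / "model"          # present for SATA/SAS/NVMe disks
    model = model_file.read_text().strip() if model_file.exists() else "?"
    print(f"{dev.name:10s} {sectors * 512 / 1e12:8.2f} TB  {model}")
```

It shows the two NVMe drives but neither SAS drive, so the kernel isn’t enumerating them at all.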

It’s just odd that it worked, and now suddenly both are gone.
Any ideas?

Thanks

Solved the issue by removing one of the NVMe drives; it seems like it was disabling the PCIe slot, and therefore the SAS controller.
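
For anyone who hits the same thing: a quick way to confirm whether the HBA is even enumerating on the PCIe bus is to list the PCI devices the kernel sees. A rough Python sketch reading /sys/bus/pci (lspci gives the same information with friendlier names):

```python
#!/usr/bin/env python3
"""Rough sketch: list the PCI devices the kernel enumerated, to spot a missing HBA."""
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    vendor = (dev / "vendor").read_text().strip()    # e.g. 0x1000 for Broadcom/LSI HBAs
    device = (dev / "device").read_text().strip()
    pci_class = (dev / "class").read_text().strip()  # 0x0107xx = Serial Attached SCSI controller
    print(f"{dev.name}  class={pci_class}  id={vendor}:{device}")
```

If the HBA’s entry disappears when the extra NVMe drive is installed, the slots are sharing lanes and the board is enabling one or the other.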

Thank you Arwen and Neofusion for the help =)

Yes, that is one of the problems with consumer and lower-end server hardware: it may come with limited PCIe lanes. That may force a user to choose between a regular PCIe slot, an NVMe drive slot, or even an extra piece of built-in hardware, like a 2nd Ethernet port.

<rant>
The current desktop limitation on CPU-supplied PCIe lanes has become a very noticeable problem. For example, the AM4 socket has 24 PCIe lanes, 4 of which are needed by the chipset. We get a little more on AM5: 28 PCIe lanes, again 4 of which are used by the chipset.

My thought is that we need 40 lanes now:

  • 16 lanes for a discrete GPU
  • 8 lanes for additional PCIe slot(s), in either a 1 x 8 or 2 x 4 config
  • 8 lanes for two 4-lane NVMe slots
  • 4 lanes that could be software-configurable as an x4 PCIe slot, an additional x4 NVMe slot, or 4 additional SATA ports
  • 4 lanes for the chipset, which could have some pass-through lanes for other I/O

That totals 40 lanes.
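
Trivial arithmetic, but here it is as a snippet for anyone re-juggling the budget (the category names are just shorthand for the list above):

```python
# Hypothetical 40-lane budget from the list above.
lanes = {
    "discrete GPU": 16,
    "additional PCIe slot(s)": 8,
    "two x4 NVMe slots": 8,
    "software-configurable x4": 4,
    "chipset uplink": 4,
}
print(sum(lanes.values()))  # 40
```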

Now, to be fair, bumping the speed up to PCIe 5.0 helps. But it does not help if the slots are not available!
</rant>
