Using multiple targets with iSCSI sharing

I’ve tried everything I can think of, with one portal on my only up IP, 192.168.0.13, both manually and via the wizard.
I’ve got multiple Targets, Extents, and Associated Targets, but I can’t discover anything other than the first target.

I’ve got 3 in total atm:
dockerhost
share-dockerhost
test

multiple Extents: 2 LUNs on dockerhost, one on test:

Target       LUN ID   Extent
dockerhost   0        dockerhost
dockerhost   1        share-dockerhost
test         0        test

To list what’s available, I use
iscsi-ls iscsi://192.168.0.13

Target:iqn.2005-10.org.freenas.ctl:dockerhost Portal:172.16.0.1:3260,1
Target:iqn.2005-10.org.freenas.ctl:dockerhost Portal:192.168.1.13:3260,1
Target:iqn.2005-10.org.freenas.ctl:dockerhost Portal:192.168.0.13:3260,1
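As a cross-check from a Linux client, open-iscsi’s iscsiadm can run the same SendTargets discovery (a sketch, assuming open-iscsi is installed; the awk line just shows pulling the IQN column out of that output format using a sample line):

```shell
# SendTargets discovery against the portal from this thread (needs a
# reachable portal and open-iscsi installed):
#   iscsiadm -m discovery -t sendtargets -p 192.168.0.13
# iscsiadm prints one "portal,tpgt target-iqn" line per target/portal pair;
# extracting just the IQN from a sample line of that format:
printf '192.168.0.13:3260,1 iqn.2005-10.org.freenas.ctl:dockerhost\n' | awk '{print $2}'
# prints iqn.2005-10.org.freenas.ctl:dockerhost
```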

Trying the -s option ends in a failure:
iscsi-ls -s iscsi://192.168.0.13

Target:iqn.2005-10.org.freenas.ctl:dockerhost Portal:172.16.0.1:3260,1
list_luns: iscsi_connect failed. iscsi_service failed with : iscsi_service_reconnect_if_loggedin. Can not reconnect right now.

I’ve got a Proxmox instance which, accordingly, can’t see any other target than dockerhost.
A VM is successfully using the dockerhost target and its associated Extent as storage, but it only sees the first LUN.

Am I doing anything wrong?
Is there anything special to configure to be able to use multiple Targets and LUNs on one Portal?

You seem to have multiple extents for a single target…

Is that intended? It’s not recommended for performance reasons.

Suggest you set up one at a time and see where it breaks. With 3 it’s a bit confusing, especially with the same LUN IDs.

OK, I tried to clean up and keep only one Target and one Associated Target.

I wasn’t aware it wasn’t good to have multiple extents for a single target. What would LUN IDs be for otherwise?

At any rate, the cleanup just killed scst.service, though the GUI didn’t show anything abnormal.

/etc/scst.conf

HANDLER vdisk_fileio {
}
HANDLER vdisk_blockio {
    DEVICE dockerhost {
        filename /dev/zvol/FAST/iSCSI-dockerhost
        blocksize 4096
        read_only 0
        usn c121b0651602135
        naa_id 0x6589cfc000000eff46f0a247a2b7ca55
        prod_id "iSCSI Disk"
        rotational 0
        t10_vend_id TrueNAS
        t10_dev_id c121b0651602135
        threads_num 32
    }

    DEVICE shareexisting {
        filename /dev/zvol/backup/FAST-snapshot-backup/FAST/ubuntudockerhost-0vhbns
        blocksize 4096
        read_only 0
        usn 539962f3cfb47c8
        naa_id 0x6589cfc000000b16b51fe4a5e1eff8f2
        prod_id "iSCSI Disk"
        rotational 0
        t10_vend_id TrueNAS
        t10_dev_id 539962f3cfb47c8
        threads_num 32
    }

    DEVICE share-dockerhost {
        filename /dev/zvol/backup/FAST-snapshot-backup/SSD0/containers-content
        blocksize 4096
        read_only 0
        usn c553945fd86e8c9
        naa_id 0x6589cfc0000003e7aa2fef8fa1052373
        prod_id "iSCSI Disk"
        rotational 0
        t10_vend_id TrueNAS
        t10_dev_id c553945fd86e8c9
        threads_num 32
    }

    DEVICE test {
        filename /dev/zvol/SSD0/test
        blocksize 4096
        read_only 0
        usn 7407dc34f71c5c6
        naa_id 0x6589cfc0000007ad8f1d5be8e054b182
        prod_id "iSCSI Disk"
        rotational 0
        t10_vend_id TrueNAS
        t10_dev_id 7407dc34f71c5c6
        threads_num 32
    }

}

TARGET_DRIVER iscsi {
    enabled 1
    link_local 0

    TARGET iqn.2005-10.org.freenas.ctl:dockerhost {
        rel_tgt_id 1
        enabled 1
        per_portal_acl 1

        GROUP security_group {
            INITIATOR *\#192.168.0.13

            LUN 0 dockerhost
        }
    }
}

journalctl -xeu scst.service

░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ An ExecStart= process belonging to unit scst.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 1.
Nov 26 21:04:37 freenas systemd[1]: scst.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ The unit scst.service has entered the 'failed' state with result 'exit-code'.
Nov 26 21:04:37 freenas systemd[1]: Failed to start scst.service - LSB: SCST - A Generic SCSI Target Subsystem.
░░ Subject: A start job for unit scst.service has failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit scst.service has finished with a failure.
░░
░░ The job identifier is 147949 and the job result is failed.
Nov 26 21:17:23 freenas systemd[1]: Starting scst.service - LSB: SCST - A Generic SCSI Target Subsystem...
░░ Subject: A start job for unit scst.service has begun execution
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit scst.service has begun execution.
░░
░░ The job identifier is 148255.
Nov 26 21:17:23 freenas iscsi-scstd[2201022]: max_data_seg_len 1048576, max_queued_cmds 2048
Nov 26 21:17:24 freenas scst[2200950]: Loading and configuring SCST
Nov 26 21:17:24 freenas scst[2201114]: Collecting current configuration: done.
Nov 26 21:17:24 freenas scst[2201114]: -> Checking configuration file '/etc/scst.conf' for errors.
Nov 26 21:17:24 freenas scst[2201114]:         -> Done, 0 warnings found.
Nov 26 21:17:24 freenas scst[2201114]: -> Applying configuration.
Nov 26 21:17:24 freenas scst[2201114]:         -> Opening device 'dockerhost' using handler 'vdisk_blockio': done.
Nov 26 21:17:24 freenas scst[2201114]:         -> Setting device attribute 'prod_id' to value 'iSCSI Disk' for device 'dockerhost': done.
Nov 26 21:17:24 freenas scst[2201114]:         -> Setting device attribute 'threads_num' to value '32' for device 'dockerhost': done.
Nov 26 21:17:24 freenas scst[2201114]:         -> Setting device attribute 'usn' to value 'c121b0651602135' for device 'dockerhost': done.
Nov 26 21:17:24 freenas scst[2201114]:         -> Setting device attribute 'naa_id' to value '0x6589cfc000000eff46f0a247a2b7ca55' for device 'dockerhost': done.
Nov 26 21:17:24 freenas scst[2201114]:         -> Setting device attribute 't10_vend_id' to value 'TrueNAS' for device 'dockerhost': done.
Nov 26 21:17:24 freenas scst[2201114]:         -> Opening device 'share-dockerhost' using handler 'vdisk_blockio': done.
Nov 26 21:17:24 freenas scst[2201114]: FATAL: Received the following error:
Nov 26 21:17:24 freenas scst[2201114]:         A fatal error occurred. See "dmesg" for more information.
Nov 26 21:17:28 freenas scst[2200950]:  failed!
Nov 26 21:17:28 freenas systemd[1]: scst.service: Control process exited, code=exited, status=1/FAILURE
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ An ExecStart= process belonging to unit scst.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 1.
Nov 26 21:17:28 freenas systemd[1]: scst.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ The unit scst.service has entered the 'failed' state with result 'exit-code'.
Nov 26 21:17:28 freenas systemd[1]: Failed to start scst.service - LSB: SCST - A Generic SCSI Target Subsystem.
░░ Subject: A start job for unit scst.service has failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit scst.service has finished with a failure.
░░
░░ The job identifier is 148255 and the job result is failed.

dmesg

[...]
[1212177.685706] [2200968]: dev_vdisk: Registering virtual vdisk_blockio device dockerhost (BLOCKIO)
[1212177.686797] [2200968]: dev_vdisk: Auto enable thin provisioning for device /dev/zvol/FAST/iSCSI-dockerhost
[1212177.686806] [2200968]: dev_vdisk: Attached SCSI target virtual disk dockerhost (file="/dev/zvol/FAST/iSCSI-dockerhost", fs=61440MB, bs=4096, nblocks=15728640, cyln=7680)
[1212177.687025] [2200968]: scst: Added device dockerhost to group copy_manager_tgt (LUN 0, flags 0x4) to target copy_manager_tgt
[1212177.687047] [2200968]: scst: Attached to virtual device dockerhost (id 1)
[1212177.758419] [2200968]: scst: Changed cmd threads num to 32
[1212177.758661] [2201069]: dev_vdisk: USN for device dockerhost changed to c121b0651602135
[1212177.759227] [2200968]: dev_vdisk: Registering virtual vdisk_blockio device share-dockerhost (BLOCKIO)
[1212177.761257] [2200968]: dev_vdisk: ***WARNING***: Device /dev/zvol/backup/FAST-snapshot-backup/SSD0/containers-content doesn't support barriers, switching to NV_CACHE mode. Read README for more details.
[1212177.774932] [2200968]: dev_vdisk: Auto enable thin provisioning for device /dev/zvol/backup/FAST-snapshot-backup/SSD0/containers-content
[1212177.774953] [2200968]: dev_vdisk: Attached SCSI target virtual disk share-dockerhost (file="/dev/zvol/backup/FAST-snapshot-backup/SSD0/containers-content", fs=61440MB, bs=4096, nblocks=15728640, cyln=7680)
[1212177.775274] [2200968]: scst: ***ERROR***: Device handler's vdisk_blockio attach_tgt() failed: -30
[1212177.775734] [2200968]: dev_vdisk: Detached virtual device share-dockerhost ("/dev/zvol/backup/FAST-snapshot-backup/SSD0/containers-content")

Removing Extent share-dockerhost in the GUI and running systemctl start scst.service didn’t make any difference; the error was the same.

Removing share-dockerhost from /etc/scst.conf via the CLI and restarting the service, the issue was the same with the next Extent in line, shareexisting.

Removing it from the GUI didn’t make any change to scst.conf.
Removing it from scst.conf via the CLI and restarting scst.service did get the service started without error.

And this time, from a VM:

$ iscsi-ls -s iscsi://192.168.0.13
Target:iqn.2005-10.org.freenas.ctl:dockerhost Portal:192.168.0.13:3260,1
Lun:0    Type:DIRECT_ACCESS (Size:59G)

These Extents were made from devices that showed up in the GUI as selectable devices. I intentionally selected devices that came from Replication Tasks, i.e. replicated zvol snapshots.
I wanted to test a way to boot a Proxmox VM from a TrueNAS VM zvol snapshot replica.

Is there any reason this failed? Why would these be selectable from the Device dropdown when creating an Extent?

Further testing the iSCSI share, everything worked as expected.
Adding a test Target and an Associated Target with Extent test worked,
with LUN ID being either 0 or 1:

iscsi-ls -s iscsi://192.168.0.13
Target:iqn.2005-10.org.freenas.ctl:test Portal:192.168.0.13:3260,1
Lun:1    Type:DIRECT_ACCESS (Size:19G)
Target:iqn.2005-10.org.freenas.ctl:dockerhost Portal:192.168.0.13:3260,1
Lun:0    Type:DIRECT_ACCESS (Size:59G)

or

$ iscsi-ls -s iscsi://192.168.0.13
Target:iqn.2005-10.org.freenas.ctl:test Portal:192.168.0.13:3260,1
Lun:0    Type:DIRECT_ACCESS (Size:19G)
Target:iqn.2005-10.org.freenas.ctl:dockerhost Portal:192.168.0.13:3260,1
Lun:0    Type:DIRECT_ACCESS (Size:59G)

and even two Associated Targets on the same target:

$ iscsi-ls -s iscsi://192.168.0.13
Target:iqn.2005-10.org.freenas.ctl:dockerhost Portal:192.168.0.13:3260,1
Lun:0    Type:DIRECT_ACCESS (Size:59G)
Lun:1    Type:DIRECT_ACCESS (Size:19G)

and then I can see them on Proxmox accordingly.
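Presumably, on the SCST side this two-LUN setup just adds a second LUN line under the target’s group. A hypothetical fragment in the style of the /etc/scst.conf shown earlier (names reused from this thread, not a verified dump):

```
TARGET iqn.2005-10.org.freenas.ctl:dockerhost {
    rel_tgt_id 1
    enabled 1
    per_portal_acl 1

    GROUP security_group {
        INITIATOR *\#192.168.0.13

        LUN 0 dockerhost
        LUN 1 test
    }
}
```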

Why are multiple LUNs on a Target discouraged?

Just read the recommended post, @Captain_Morgan; I don’t see a clear case against multiple Extents per target, though.
True, the poster in the 2017 thread mentions some maintenance issues, but it’s specific and old enough that I’d still go for it anyway if I have a clear use case where it’s convenient.
I’ll double-check ASAP whether the issue shows up with Proxmox consuming TrueNAS SCALE 24.10.

The issue is that a Target represents one TCP connection in iSCSI. That becomes a bandwidth bottleneck, and there is no sharing of bandwidth between LUNs. (It’s also unclear how well clients handle it.)
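That one-session-per-target behavior can be seen from a Linux initiator: iscsiadm lists exactly one session per logged-in target, however many LUNs it exposes (the sketch below works on sample output, since iscsiadm itself needs a live session):

```shell
# List active iSCSI sessions (one TCP connection each):
#   iscsiadm -m session
# Sample output line for the target in this thread, with the IQN in field 4;
# counting lines per IQN shows one session even when LUN 0 and LUN 1 are mapped:
printf 'tcp: [1] 192.168.0.13:3260,1 iqn.2005-10.org.freenas.ctl:dockerhost (non-flash)\n' \
  | awk '{count[$4]++} END {for (t in count) print count[t], t}'
# prints 1 iqn.2005-10.org.freenas.ctl:dockerhost
```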

In any case, after your testing of each setup step, is there a specific bug that you have found? Then you can see if the same issue exists in SCALE.

Fixing systems isn’t my day job, so it’s easier to resolve a situation that has the following characteristics:

  1. Working system
  2. Minimum change that should be OK
  3. Non-working system.

The issue is that a Target represents one TCP connection in iSCSI. That becomes a bandwidth bottleneck, and there is no sharing of bandwidth between LUNs. (It’s also unclear how well clients handle it.)

I see it more clearly now; I didn’t catch that immediately.

I’ve tested removing an Associated Target from a Target with 2 LUNs, with one LUN mounted and in use by a VM on Proxmox.
Everything went smoothly; Proxmox handled it perfectly, refreshing the available LUNs on the TrueNAS SCALE iSCSI storage immediately.

As for the issue I met earlier when trying to create an Extent from a replicated zvol (available in the Device dropdown in the Extent creation dialog), do we agree that it should work?

If you replicated a zvol, you will have replicated any internal GUID/UUID that Proxmox assigned to it. Presenting the same GUID/UUID to Proxmox from two separate iSCSI targets likely caused a collision, leading Proxmox to “experience a bit of undefined behavior”: it would appear to have the same disk connected via two distinct targets, but with one side outright refusing writes rather than redirecting through iSCSI MPIO/ALUA rules.

If I have a moment, I’ll replicate that and see what comes up in the logs, but I imagine it’s going to be a number of iSCSI-related errors asking what the heck just happened to a previously unambiguous path to storage that just became “Schrödinger’s Disk”.
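One way to spot such a collision from the Proxmox side is to compare on-disk identifiers across the attached disks. A sketch (the blkid invocation and device names are illustrative; the pipe just shows flagging duplicates in that kind of output):

```shell
# Filesystem/partition UUIDs live inside the zvol's data, so a replica
# exposes the same UUIDs as the original. On the initiator host, something like
#   blkid -o export /dev/sdb /dev/sdc
# prints UUID= lines per device. Flagging duplicates in sample output:
printf 'UUID=dead-beef\nUUID=dead-beef\n' | sort | uniq -d
# prints UUID=dead-beef
```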

All right, that’s a good point; I’ll have to pay attention to that when trying to restore alongside a live volume.

But in this case, the error was on the TrueNAS side.
Creating these Extents produced the dmesg error:

-> Opening device 'share-dockerhost' using handler 'vdisk_blockio': done.
Nov 26 21:17:24 freenas scst[2201114]: FATAL: Received the following error:
Nov 26 21:17:24 freenas scst[2201114]:         A fatal error occurred. See "dmesg" for more information.

[1212177.759227] [2200968]: dev_vdisk: Registering virtual vdisk_blockio device share-dockerhost (BLOCKIO)
[1212177.761257] [2200968]: dev_vdisk: ***WARNING***: Device /dev/zvol/backup/FAST-snapshot-backup/SSD0/containers-content doesn't support barriers, switching to NV_CACHE mode. Read README for more details.
[1212177.774932] [2200968]: dev_vdisk: Auto enable thin provisioning for device /dev/zvol/backup/FAST-snapshot-backup/SSD0/containers-content
[1212177.774953] [2200968]: dev_vdisk: Attached SCSI target virtual disk share-dockerhost (file="/dev/zvol/backup/FAST-snapshot-backup/SSD0/containers-content", fs=61440MB, bs=4096, nblocks=15728640, cyln=7680)
[1212177.775274] [2200968]: scst: ***ERROR***: Device handler's vdisk_blockio attach_tgt() failed: -30
[1212177.775734] [2200968]: dev_vdisk: Detached virtual device share-dockerhost ("/dev/zvol/backup/FAST-snapshot-backup/SSD0/containers-content")

scst.service was still running after creating the problematic Extents, but nothing else created from the iSCSI share was working. And restarting the service would just leave it blocked until the Extents were removed.

I don’t know what happened exactly (I still have the problematic zvol, so I should be able to reproduce it), but the iSCSI share service should know better and display some error in the GUI rather than getting irremediably stuck (from a GUI standpoint).
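One lead on that attach_tgt() failed: -30 (an assumption on my part, not confirmed anywhere in this thread): on Linux, errno 30 is EROFS, “Read-only file system”, and replication tasks typically leave the destination dataset with readonly=on, which would explain why a replicated zvol registers fine but fails when SCST tries to attach it read-write:

```shell
# errno 30 on Linux is EROFS ("Read-only file system"):
python3 -c 'import errno, os; print(errno.errorcode[30], "=", os.strerror(30))'
# prints EROFS = Read-only file system
# If that's the cause, the replicated zvol should show readonly=on:
#   zfs get readonly backup/FAST-snapshot-backup/SSD0/containers-content
```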

I’ll see about reproducing this and ping our SCSI team on the back end - although it’s a holiday in the USA today, so I can’t promise anything in terms of an SLA response. :slight_smile: