Trying to find out why it's so bad. Any ideas?
The pool drives are (mirrored pairs):
PNY_CS900_1TB_SSD (2)
WDC_WDS100T1R0A-68A4W0 (2)
T-FORCE_1TB (2)
WDC_WDS100T1R0A-68A4W0 (2)
WD_BLACK SN770 1TB cache drive
WD_BLACK SN770 1TB log drive
I get a pitiful 100-200 MB/s write performance over 10 Gb NICs.
I'm using iSCSI, hence the log drive.
An iperf test comes back with a healthy 8.5 to 9.8 Gb/s.
Arwen
January 12, 2025, 11:05pm
Your SSD pool says 2-drive mirrors, 6 drives total, but you list 8 drives:
2 pny branded
4 western digital red
2 T-FORCE
First up, please supply the output of the following commands:
zpool status SSD_pool
zpool list SSD_pool
zfs list -t all -r SSD_pool
zfs get recordsize -r SSD_pool
Next, are you using zVols or files on an NFS share?
You state iSCSI, which implies zVols, but just checking.
There are known gotchas with VM storage and iSCSI volumes.
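For reference, a quick way to check the zvol settings that usually matter for iSCSI would be something like the following (the <pool>/<zvol> path is a placeholder, since we don't know yours yet):
zfs get volblocksize,compression,sync <pool>/<zvol>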
How are you measuring performance? What is the client? Can you also post lspci -vvv
Oops, yeah, there are 8 drives; I'll need to update that.
I noticed the terrible performance when copying/moving files from Windows.
Also, the pool's name is just SSD, not SSD_pool.
root@truenas:/mnt# zpool status SSD
pool: SSD
state: ONLINE
scan: scrub repaired 0B in 00:17:07 with 0 errors on Sun Jan 12 00:17:08 2025
config:
NAME STATE READ WRITE CKSUM
SSD ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
71d2769c-4bbd-425f-b01a-1bbeeaa9b97a ONLINE 0 0 0
5d665544-a62b-4fab-8e61-5be5e01ee0f8 ONLINE 0 0 0
mirror-1 ONLINE 0 0 0
88eed3d1-a012-4ff5-9e79-86e8f8b39abc ONLINE 0 0 0
d04980b6-9566-4e78-bb40-dea9846ce000 ONLINE 0 0 0
mirror-3 ONLINE 0 0 0
4613f3cb-6e76-4b70-b8c5-4a1a09aeaf8a ONLINE 0 0 0
4cb72d18-fa1a-4880-9999-94f5c90f1558 ONLINE 0 0 0
mirror-6 ONLINE 0 0 0
17576d4c-f63e-46dc-800f-de8d391caf7d ONLINE 0 0 0
489628b7-bc15-4971-b107-ee57f879162c ONLINE 0 0 0
logs
48b129c8-8e89-4b17-b858-d07c18fd9a09 ONLINE 0 0 0
cache
774380bd-9e95-4054-a94f-e627abe04937 ONLINE 0 0 0
errors: No known data errors
root@truenas:/mnt# zpool list SSD
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
SSD 3.63T 576G 3.07T - - 0% 15% 1.00x ONLINE /mn
root@truenas:/mnt# zfs list -t all -r SSD
NAME USED AVAIL REFER MOUNTPOINT
SSD 2.27T 1.25T 112K /mnt/SSD
SSD/SsdIscsiDataset 2.20T 1.25T 96K /mnt/SSD/SsdIscsiDataset
SSD/SsdIscsiDataset/ssdmain 2.20T 2.96T 505G -
SSD/game_saves 946M 1.25T 946M /mnt/SSD/game_saves
SSD/game_saves@auto-2024-08-24_14-00 60K - 946M -
SSD/mineos 65.5G 422G 58.0G /mnt/SSD/mineos
SSD/mineos@auto-2024-09-04_22-20 7.55G - 41.7G -
SSD/mineos@main-2024-10-04_04-00 0B - 58.0G -
SSD/mineos@main-2024-10-05_04-00 0B - 58.0G -
SSD/server-games 4.40G 1.25T 96K /mnt/SSD/server-games
SSD/server-games@serversaver-2024-10-05_13-00 0B - 96K -
SSD/server-games@serversaver-2024-10-05_14-00 0B - 96K -
SSD/server-games@serversaver-2024-10-05_15-00 0B - 96K -
SSD/server-games@serversaver-2024-10-05_17-00 0B - 96K -
SSD/server-games@serversaver-2024-10-05_18-00 0B - 96K -
SSD/server-games@serversaver-2024-10-05_19-00 0B - 96K -
SSD/server-games@serversaver-2024-10-05_20-00 0B - 96K -
SSD/server-games@serversaver-2024-10-05_21-00 0B - 96K -
SSD/server-games@serversaver-2024-10-05_22-00 0B - 96K -
SSD/server-games@before modding 0B - 96K -
SSD/server-games@auto-2025-01-10_16-00 0B - 96K -
SSD/server-games@auto-2025-01-10_17-00 0B - 96K -
SSD/server-games@auto-2025-01-10_18-00 0B - 96K -
SSD/server-games@auto-2025-01-10_19-00 0B - 96K -
SSD/server-games@auto-2025-01-10_20-00 0B - 96K -
SSD/server-games@auto-2025-01-10_21-00 0B - 96K -
SSD/server-games@auto-2025-01-10_22-00 0B - 96K -
SSD/server-games@auto-2025-01-10_23-00 0B - 96K -
SSD/server-games@auto-2025-01-11_00-00 0B - 96K -
SSD/server-games@auto-2025-01-11_01-00 0B - 96K -
SSD/server-games@auto-2025-01-11_02-00 0B - 96K -
SSD/server-games@auto-2025-01-11_03-00 0B - 96K -
SSD/server-games@auto-2025-01-11_04-00 0B - 96K -
SSD/server-games@auto-2025-01-11_05-00 0B - 96K -
SSD/server-games@auto-2025-01-11_06-00 0B - 96K -
SSD/server-games@auto-2025-01-11_07-00 0B - 96K -
SSD/server-games@auto-2025-01-11_08-00 0B - 96K -
SSD/server-games@auto-2025-01-11_09-00 0B - 96K -
SSD/server-games@auto-2025-01-11_10-00 0B - 96K -
SSD/server-games@auto-2025-01-11_11-00 0B - 96K -
SSD/server-games@auto-2025-01-11_12-00 0B - 96K -
SSD/server-games@auto-2025-01-11_13-00 0B - 96K -
SSD/server-games@auto-2025-01-11_14-00 0B - 96K -
SSD/server-games@auto-2025-01-11_15-00 0B - 96K -
SSD/server-games@auto-2025-01-11_16-00 0B - 96K -
SSD/server-games@auto-2025-01-11_17-00 0B - 96K -
SSD/server-games@auto-2025-01-11_18-00 0B - 96K -
SSD/server-games@auto-2025-01-11_19-00 0B - 96K -
SSD/server-games@auto-2025-01-11_20-00 0B - 96K -
SSD/server-games@auto-2025-01-11_21-00 0B - 96K -
SSD/server-games@auto-2025-01-11_22-00 0B - 96K -
SSD/server-games@auto-2025-01-11_23-00 0B - 96K -
SSD/server-games@auto-2025-01-12_00-00 0B - 96K -
SSD/server-games@auto-2025-01-12_01-00 0B - 96K -
SSD/server-games@auto-2025-01-12_02-00 0B - 96K -
SSD/server-games@auto-2025-01-12_03-00 0B - 96K -
SSD/server-games@auto-2025-01-12_04-00 0B - 96K -
SSD/server-games@auto-2025-01-12_05-00 0B - 96K -
SSD/server-games@auto-2025-01-12_06-00 0B - 96K -
SSD/server-games@auto-2025-01-12_07-00 0B - 96K -
SSD/server-games@auto-2025-01-12_08-00 0B - 96K -
SSD/server-games@auto-2025-01-12_09-00 0B - 96K -
SSD/server-games@auto-2025-01-12_10-00 0B - 96K -
SSD/server-games@auto-2025-01-12_11-00 0B - 96K -
SSD/server-games@auto-2025-01-12_12-00 0B - 96K -
SSD/server-games@auto-2025-01-12_13-00 0B - 96K -
SSD/server-games@auto-2025-01-12_14-00 0B - 96K -
SSD/server-games@auto-2025-01-12_15-00 0B - 96K -
SSD/server-games/satisfactory 4.40G 1.25T 2.24G /mnt/SSD/server-games/satisfactory
SSD/server-games/satisfactory@serversaver-2024-10-05_13-00 2.75M - 1.92G -
SSD/server-games/satisfactory@serversaver-2024-10-05_14-00 2.86M - 1.92G -
SSD/server-games/satisfactory@serversaver-2024-10-05_15-00 104K - 1.92G -
SSD/server-games/satisfactory@serversaver-2024-10-05_17-00 84K - 1.92G -
SSD/server-games/satisfactory@serversaver-2024-10-05_18-00 72K - 1.92G -
SSD/server-games/satisfactory@serversaver-2024-10-05_19-00 2.29M - 1.92G -
SSD/server-games/satisfactory@serversaver-2024-10-05_20-00 2.51M - 1.92G -
SSD/server-games/satisfactory@serversaver-2024-10-05_21-00 2.69M - 1.92G -
SSD/server-games/satisfactory@serversaver-2024-10-05_22-00 3.00M - 1.92G -
SSD/server-games/satisfactory@auto-2025-01-10_16-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-10_17-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-10_18-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-10_19-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-10_20-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-10_21-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-10_22-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-10_23-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-11_00-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-11_01-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-11_02-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-11_03-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-11_04-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-11_05-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-11_06-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-11_07-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-11_08-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-11_09-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-11_10-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-11_11-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-11_12-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-11_13-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-11_14-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-11_15-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-11_16-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-11_17-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-11_18-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-11_19-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-11_20-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-11_21-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-11_22-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-11_23-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-12_00-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-12_01-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-12_02-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-12_03-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-12_04-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-12_05-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-12_06-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-12_07-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-12_08-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-12_09-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-12_10-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-12_11-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-12_12-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-12_13-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-12_14-00 0B - 2.24G -
SSD/server-games/satisfactory@auto-2025-01-12_15-00 150M - 2.24G -
root@truenas:/mnt#
root@truenas:/mnt# zfs get recordsize -r SSD
NAME PROPERTY VALUE SOURCE
SSD recordsize 128K default
SSD/SsdIscsiDataset recordsize 128K default
SSD/SsdIscsiDataset/ssdmain recordsize - -
SSD/game_saves recordsize 128K default
SSD/game_saves@auto-2024-08-24_14-00 recordsize - -
SSD/mineos recordsize 128K default
SSD/mineos@auto-2024-09-04_22-20 recordsize - -
SSD/mineos@main-2024-10-04_04-00 recordsize - -
SSD/mineos@main-2024-10-05_04-00 recordsize - -
SSD/server-games recordsize 128K default
SSD/server-games@serversaver-2024-10-05_13-00 recordsize - -
SSD/server-games@serversaver-2024-10-05_14-00 recordsize - -
SSD/server-games@serversaver-2024-10-05_15-00 recordsize - -
SSD/server-games@serversaver-2024-10-05_17-00 recordsize - -
SSD/server-games@serversaver-2024-10-05_18-00 recordsize - -
SSD/server-games@serversaver-2024-10-05_19-00 recordsize - -
SSD/server-games@serversaver-2024-10-05_20-00 recordsize - -
SSD/server-games@serversaver-2024-10-05_21-00 recordsize - -
SSD/server-games@serversaver-2024-10-05_22-00 recordsize - -
SSD/server-games@before modding recordsize - -
SSD/server-games@auto-2025-01-10_16-00 recordsize - -
SSD/server-games@auto-2025-01-10_17-00 recordsize - -
SSD/server-games@auto-2025-01-10_18-00 recordsize - -
SSD/server-games@auto-2025-01-10_19-00 recordsize - -
SSD/server-games@auto-2025-01-10_20-00 recordsize - -
SSD/server-games@auto-2025-01-10_21-00 recordsize - -
SSD/server-games@auto-2025-01-10_22-00 recordsize - -
SSD/server-games@auto-2025-01-10_23-00 recordsize - -
SSD/server-games@auto-2025-01-11_00-00 recordsize - -
SSD/server-games@auto-2025-01-11_01-00 recordsize - -
SSD/server-games@auto-2025-01-11_02-00 recordsize - -
SSD/server-games@auto-2025-01-11_03-00 recordsize - -
SSD/server-games@auto-2025-01-11_04-00 recordsize - -
SSD/server-games@auto-2025-01-11_05-00 recordsize - -
SSD/server-games@auto-2025-01-11_06-00 recordsize - -
SSD/server-games@auto-2025-01-11_07-00 recordsize - -
SSD/server-games@auto-2025-01-11_08-00 recordsize - -
SSD/server-games@auto-2025-01-11_09-00 recordsize - -
SSD/server-games@auto-2025-01-11_10-00 recordsize - -
SSD/server-games@auto-2025-01-11_11-00 recordsize - -
SSD/server-games@auto-2025-01-11_12-00 recordsize - -
SSD/server-games@auto-2025-01-11_13-00 recordsize - -
SSD/server-games@auto-2025-01-11_14-00 recordsize - -
SSD/server-games@auto-2025-01-11_15-00 recordsize - -
SSD/server-games@auto-2025-01-11_16-00 recordsize - -
SSD/server-games@auto-2025-01-11_17-00 recordsize - -
SSD/server-games@auto-2025-01-11_18-00 recordsize - -
SSD/server-games@auto-2025-01-11_19-00 recordsize - -
SSD/server-games@auto-2025-01-11_20-00 recordsize - -
SSD/server-games@auto-2025-01-11_21-00 recordsize - -
SSD/server-games@auto-2025-01-11_22-00 recordsize - -
SSD/server-games@auto-2025-01-11_23-00 recordsize - -
SSD/server-games@auto-2025-01-12_00-00 recordsize - -
SSD/server-games@auto-2025-01-12_01-00 recordsize - -
SSD/server-games@auto-2025-01-12_02-00 recordsize - -
SSD/server-games@auto-2025-01-12_03-00 recordsize - -
SSD/server-games@auto-2025-01-12_04-00 recordsize - -
SSD/server-games@auto-2025-01-12_05-00 recordsize - -
SSD/server-games@auto-2025-01-12_06-00 recordsize - -
SSD/server-games@auto-2025-01-12_07-00 recordsize - -
SSD/server-games@auto-2025-01-12_08-00 recordsize - -
SSD/server-games@auto-2025-01-12_09-00 recordsize - -
SSD/server-games@auto-2025-01-12_10-00 recordsize - -
SSD/server-games@auto-2025-01-12_11-00 recordsize - -
SSD/server-games@auto-2025-01-12_12-00 recordsize - -
SSD/server-games@auto-2025-01-12_13-00 recordsize - -
SSD/server-games@auto-2025-01-12_14-00 recordsize - -
SSD/server-games@auto-2025-01-12_15-00 recordsize - -
SSD/server-games/satisfactory recordsize 128K default
SSD/server-games/satisfactory@serversaver-2024-10-05_13-00 recordsize - -
SSD/server-games/satisfactory@serversaver-2024-10-05_14-00 recordsize - -
SSD/server-games/satisfactory@serversaver-2024-10-05_15-00 recordsize - -
SSD/server-games/satisfactory@serversaver-2024-10-05_17-00 recordsize - -
SSD/server-games/satisfactory@serversaver-2024-10-05_18-00 recordsize - -
SSD/server-games/satisfactory@serversaver-2024-10-05_19-00 recordsize - -
SSD/server-games/satisfactory@serversaver-2024-10-05_20-00 recordsize - -
SSD/server-games/satisfactory@serversaver-2024-10-05_21-00 recordsize - -
SSD/server-games/satisfactory@serversaver-2024-10-05_22-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-10_16-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-10_17-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-10_18-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-10_19-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-10_20-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-10_21-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-10_22-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-10_23-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-11_00-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-11_01-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-11_02-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-11_03-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-11_04-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-11_05-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-11_06-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-11_07-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-11_08-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-11_09-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-11_10-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-11_11-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-11_12-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-11_13-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-11_14-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-11_15-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-11_16-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-11_17-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-11_18-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-11_19-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-11_20-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-11_21-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-11_22-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-11_23-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-12_00-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-12_01-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-12_02-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-12_03-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-12_04-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-12_05-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-12_06-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-12_07-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-12_08-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-12_09-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-12_10-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-12_11-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-12_12-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-12_13-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-12_14-00 recordsize - -
SSD/server-games/satisfactory@auto-2025-01-12_15-00 recordsize - -
root@truenas:/mnt#
You need to explain:
How the disks are shared on the NIC, and what the client is.
What your iperf results are for the network (so we can determine whether this is a network or a disk issue).
Whether this is a read or a write performance issue (because how ZFS does reads and writes under the covers is very, very different).
If it is a write performance issue, whether you are doing synchronous or asynchronous writes (the commands sketched below will show this).
P.S. It's unclear what the benefit is of an SSD L2ARC, or indeed an SSD SLOG, on an SSD pool.
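If it helps, roughly these two commands would answer the sync question and show what the log and cache devices are actually doing during a copy:
zfs get -r sync SSD
zpool iostat -v SSD 5
The first shows which datasets/zvols are forcing synchronous writes; run the second during a copy to watch per-vdev throughput, including the log and cache devices.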
This is the output of lspci -vvv.
It's too big to fit in preformatted text:
log.txt (94.4 KB)
It's a 10 Gb NIC connected to a 10 Gb SFP+ switch port over a direct attach cable.
My computer is connected to the switch over an SFP+ fiber cable. I've given the results of iperf below.
iperf client run from the PC:
\iperf>iperf3 -c 192.168.0.250
Connecting to host 192.168.0.250, port 5201
[ 4] local 192.168.0.25 port 56686 connected to 192.168.0.250 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 1021 MBytes 8.56 Gbits/sec
[ 4] 1.00-2.00 sec 1015 MBytes 8.51 Gbits/sec
[ 4] 2.00-3.00 sec 1019 MBytes 8.55 Gbits/sec
[ 4] 3.00-4.00 sec 1.01 GBytes 8.65 Gbits/sec
[ 4] 4.00-5.00 sec 1.00 GBytes 8.63 Gbits/sec
[ 4] 5.00-6.00 sec 1.00 GBytes 8.61 Gbits/sec
[ 4] 6.00-7.00 sec 1015 MBytes 8.51 Gbits/sec
[ 4] 7.00-8.00 sec 1020 MBytes 8.56 Gbits/sec
[ 4] 8.00-9.00 sec 1.00 GBytes 8.59 Gbits/sec
[ 4] 9.00-10.00 sec 1.00 GBytes 8.59 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 9.98 GBytes 8.58 Gbits/sec sender
[ 4] 0.00-10.00 sec 9.98 GBytes 8.58 Gbits/sec receiver
iperf Done.
iperf run from the TrueNAS side (as client to the PC):
brandon@truenas:~$ iperf3 -c 192.168.0.25
Connecting to host 192.168.0.25, port 5201
[ 5] local 192.168.0.250 port 54026 connected to 192.168.0.25 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 1011 MBytes 8.48 Gbits/sec 0 411 KBytes
[ 5] 1.00-2.00 sec 1015 MBytes 8.51 Gbits/sec 0 411 KBytes
[ 5] 2.00-3.00 sec 1.00 GBytes 8.60 Gbits/sec 0 411 KBytes
[ 5] 3.00-4.00 sec 1.01 GBytes 8.69 Gbits/sec 0 411 KBytes
[ 5] 4.00-5.00 sec 1.01 GBytes 8.68 Gbits/sec 0 411 KBytes
[ 5] 5.00-6.00 sec 1.00 GBytes 8.61 Gbits/sec 0 411 KBytes
[ 5] 6.00-7.00 sec 1.01 GBytes 8.70 Gbits/sec 0 411 KBytes
[ 5] 7.00-8.00 sec 1.01 GBytes 8.65 Gbits/sec 0 411 KBytes
[ 5] 8.00-9.00 sec 1.01 GBytes 8.67 Gbits/sec 0 411 KBytes
[ 5] 9.00-10.00 sec 1.07 GBytes 9.17 Gbits/sec 0 411 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 10.1 GBytes 8.68 Gbits/sec 0 sender
[ 5] 0.00-10.00 sec 10.1 GBytes 8.67 Gbits/sec receiver
iperf Done.
The iSCSI share is set to forced sync writes.
Everything else is set to the TrueNAS defaults.
Read performance is great; it maxes out the NIC.
Write performance is not: I'm getting an average of 2 GB/s or 150 - 200 MB/s.
How much data are you using to test your write performance? And could it be limited by reading the data off your PC disk?
brandon_evans:
2 gb/s or 150 - 200 mb/s
Can we please use correct terminology to avoid risk of confusion? Bytes are denoted by B, bits by b.
So this should say "2Gb/s or 150-200MB/s".
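(For reference: 8.58 Gb/s ÷ 8 ≈ 1.07 GB/s, so the network tops out at roughly 1.07 GB/s of payload; 150-200 MB/s of writes is nowhere near that ceiling.)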
Thankfully, the HBA is running at PCIe Gen 3 x8 and the NIC is running at PCIe Gen 2 x8, so that's good and expected for your hardware choices.
Can you provide an example of how you are testing? Where are you copying to and from? From the iSCSI drive to the iSCSI drive? From your C: drive to the iSCSI drive? What type of files?
If you can, I'd be interested in seeing what local performance looks like. This will remove the iSCSI variable and help us identify whether there's a problem in the system itself or whether the problem is with the iSCSI performance between your client and the NAS.
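If fio happens to be installed (an assumption on my part; the test directory below is just a placeholder), a purely local sequential write test on the pool could look roughly like this:
mkdir -p /mnt/SSD/fio-test
fio --name=seqwrite --directory=/mnt/SSD/fio-test --rw=write --bs=1M --size=8g --numjobs=1 --ioengine=libaio --iodepth=16 --end_fsync=1
rm -r /mnt/SSD/fio-test
The --end_fsync=1 flag makes fio flush at the end, so the reported speed reflects data actually reaching the drives rather than just sitting in RAM.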
You can also use the tool I wrote for this, potentially.
While I do plan on making things more configurable at some point, the default threading behavior was intentional.
As an example, I have an all-NVMe system with a large number of threads and a lot of RAM bandwidth (top example: 2 CPUs, Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz) and another all-NVMe system (bottom example: 1 CPU, AMD Ryzen 5 5600G with Radeon Graphics) with a lot less RAM and RAM bandwidth, but much faster cores.
###################################
# DD Benchmark Result…
Well, it seems that for some reason I can't get this program to work. Sad.
root@truenas:/# sudo git clone https://github.com/nickf1227/TN-Bench.git && cd TN-Bench && python3 truenas-bench.py
fatal: could not create work tree dir 'TN-Bench': Read-only file system
Know a fix?
I was testing with 150 GB of data to get past the ZFS ARC cache, which is 75.8 GiB, using the Windows built-in file transfer. I also tried RoboCopy; it got me slightly better results, at 3 Gbps.
Change your directory into /mnt/SSD or some other location on your pool that's writeable.
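In other words, something along these lines (same repo and script as before, just run from a writable location on the pool):
cd /mnt/SSD
git clone https://github.com/nickf1227/TN-Bench.git
cd TN-Bench
python3 truenas-bench.py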
A single large 150 GB file? A bunch of smaller files? These things matter.
Also, can you provide:
zfs get all SSD/SsdIscsiDataset/ssdmain
Assuming that's the zvol?
Well, alright, that works. I'll be back with results. Thanks.
Sara
January 13, 2025, 7:03am
That probably explains it.
You use consumer SSDs but force sync writes.
You have an SN770 as your log drive.
Every single write you do will use the SN770 for the ZIL.
Basically, you get the sync write speed of a single consumer disk that has horrible sync write performance to begin with (because of the missing PLP).
Now what to do next? First of all, get rid of the cache and log devices.
Start with plain mirrors. There is a high probability that you don't need anything other than mirrors.
If you are still not happy with the performance, understand which part you are unhappy with
(opinions_about_tech_stuff/ZFS/ZFS tuning flowchart.md at main · jameskimmel/opinions_about_tech_stuff · GitHub) and how L2ARC and SLOG work and what their hardware requirements are.
I would also think about whether it really is necessary to force sync or not. That depends on your use case, of course, which we don't know.
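For what it's worth, removing the log and cache devices is non-destructive and can be undone later; a rough sketch using the device IDs from the zpool status output above (the TrueNAS web UI can do the same thing):
zpool remove SSD 48b129c8-8e89-4b17-b858-d07c18fd9a09
zpool remove SSD 774380bd-9e95-4054-a94f-e627abe04937
The first is the log (SLOG) device and the second is the cache (L2ARC) device from that output.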
I'm afraid that the log drive is not it. I've set sync back to default and then off and got a very similar result. And the NAS is connected to a UPS.
Well, it's finally finished with the test on the SSD pool.
Note: this cool bit of software makes its dataset with sync set to off.
And it's already a mirror pool.
###################################
# DD Benchmark Results for Pool: SSD #
###################################
# Threads: 1 #
# 1M Seq Write Run 1: 458.15 MB/s #
# 1M Seq Write Run 2: 339.52 MB/s #
# 1M Seq Write Avg: 398.83 MB/s #
# 1M Seq Read Run 1: 5999.34 MB/s #
# 1M Seq Read Run 2: 6867.89 MB/s #
# 1M Seq Read Avg: 6433.62 MB/s #
###################################
# Threads: 5 #
# 1M Seq Write Run 1: 294.24 MB/s #
# 1M Seq Write Run 2: 238.95 MB/s #
# 1M Seq Write Avg: 266.59 MB/s #
# 1M Seq Read Run 1: 893.43 MB/s #
# 1M Seq Read Run 2: 962.56 MB/s #
# 1M Seq Read Avg: 927.99 MB/s #
###################################
# Threads: 10 #
# 1M Seq Write Run 1: 188.29 MB/s #
# 1M Seq Write Run 2: 199.36 MB/s #
# 1M Seq Write Avg: 193.83 MB/s #
# 1M Seq Read Run 1: 928.48 MB/s #
# 1M Seq Read Run 2: 998.26 MB/s #
# 1M Seq Read Avg: 963.37 MB/s #
###################################
# Threads: 20 #
# 1M Seq Write Run 1: 190.38 MB/s #
# 1M Seq Write Run 2: 183.57 MB/s #
# 1M Seq Write Avg: 186.97 MB/s #
# 1M Seq Read Run 1: 978.49 MB/s #
# 1M Seq Read Run 2: 1036.84 MB/s #
# 1M Seq Read Avg: 1007.67 MB/s #
###################################
Cleaning up test files...
Also, for others: if you named your pool with spaces in it, like "apps pool" or "Big Guy", it will not work. Yay, bugs: the program doesn't like spaces and spams this:
Try 'dd --help' for more information.
dd: unrecognized operand 'pool/tn-bench/file_1.dat'
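That error is just shell word-splitting: an unquoted path containing a space gets split into two arguments, so dd sees a stray operand. A hypothetical illustration (the "apps pool" path is made up):
dd if=/dev/zero of=/mnt/apps pool/tn-bench/file_1.dat bs=1M count=1024
dd if=/dev/zero of="/mnt/apps pool/tn-bench/file_1.dat" bs=1M count=1024
The first form fails with exactly that "unrecognized operand" error; the second works because the quotes keep the path as a single argument.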
Sara
January 13, 2025, 9:15am
By setting it to default on TrueNAS, you let the initiator decide. Your iSCSI initiator uses sync by default. So probably nothing changed.
Set it to disabled on TrueNAS and test again.
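A minimal sketch of that test, using the zvol path from the earlier zfs list output (do this only for the duration of the test, since sync=disabled trades safety for speed):
zfs set sync=disabled SSD/SsdIscsiDataset/ssdmain
# re-run the copy from Windows, then put the default back:
zfs set sync=standard SSD/SsdIscsiDataset/ssdmain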
The UPS has nothing to do with it.
I know where you are coming from; iXsystems writing that they recommend a UPS has confused thousands of users. But a UPS has nothing to do with SLOG, PLP, or performance.
Why the heck are you attempting to run a 3rd-party tool as root?
Do you understand how dangerous that is on a Linux-based system?
brandon_evans:
dd
If you are going to use dd as the basis for the source of data you are writing to disk, then you need to use it correctly for the results to be meaningful. And for us to have any idea whether you have used it correctly (and I reckon the chances are 80/20 that you haven't), you need to tell us what your dd command was when you share the results with us.
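For example, something along these lines (an illustration only, not what was actually run) avoids the two classic dd pitfalls of compressible zeros and unflushed writes:
dd if=/dev/urandom of=/dev/shm/rand8g.bin bs=1M count=8192
dd if=/dev/shm/rand8g.bin of=/mnt/SSD/ddtest.bin bs=1M conv=fsync status=progress
rm /dev/shm/rand8g.bin /mnt/SSD/ddtest.bin
The first command stages incompressible data in RAM (keep it well below your free RAM); conv=fsync in the second makes dd flush before reporting, so the timing reflects real writes to the pool rather than dirty data in memory.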
etorix
January 13, 2025, 1:14pm
But your SLOG is a single consumer-grade SN770 which lacks PLP. It's probably worse than the default ZIL which, at least, would be striped over all drives. (Oops! Ninja'd by @Sara)
Try removing the SLOG and test again.
And how much RAM do you have? A 1 TB L2ARC may actually harm you.
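To answer that, something like the following would do (arc_summary ships with OpenZFS, so it should already be on the box):
free -h
arc_summary
free -h shows total system RAM; arc_summary shows the ARC and L2ARC statistics, including how much RAM the L2ARC headers are consuming.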