25.04 performance problems after upgrade

Since upgrading to 25.04 I am only getting about 200 megabits over a gigabit connection to my machine, over either SMB or iSCSI. I can saturate the gigabit connection when sending to the server. My desktop is a wired Windows 11 Pro machine with a Ryzen 9 9500x, an Intel NIC, 64 GB of RAM and a Gen 3 NVMe SSD. Right now I am making a final replication and am going to revert back to EE to see if things improve. I will add more details once I run the test… it's going to take a bit as I am moving 11 TB around… :) My server configs are in my signature.


Just to add some notes: SMB performance is 150 Mbps while iSCSI is 400 Mbps. It will take a bit to replicate, rebuild the primary, and then move 10 TB back. Will post updates as I have more data.
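For anyone who wants to reproduce these numbers, here is a rough sketch of how I could time a transfer from Python instead of eyeballing Explorer; the drive letter and file paths are just placeholders for whatever share and test file you happen to use:

```python
# rough throughput check: copy a large local file to the SMB share and report Mbit/s
# (C:\temp\testfile.bin and Z:\ are placeholders, not my actual paths)
import os
import shutil
import time

SRC = r"C:\temp\testfile.bin"   # a few-GB file on the local NVMe drive
DST = r"Z:\testfile.bin"        # mapped SMB share

size_bytes = os.path.getsize(SRC)

start = time.perf_counter()
shutil.copyfile(SRC, DST)
elapsed = time.perf_counter() - start

mbits = size_bytes * 8 / 1_000_000
print(f"copied {size_bytes / 1e9:.2f} GB in {elapsed:.1f} s "
      f"-> {mbits / elapsed:.0f} Mbit/s")
```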


So I have reloaded the NIC drivers on my workstation, to no avail. I am using custom share parameters with SMB2/3 file handles enabled, but again nothing there has changed except the upgrade to 25.04. I am going to try setting the SMB default share type and then making sure the ACLs are set the way I want them… but that doesn't explain the 50 percent performance hit on iSCSI. I also have a Win10 workstation and will go investigate that in case it's a 25.x/Win11 interaction.
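To separate the network layer from SMB/iSCSI entirely, a raw TCP throughput test between the workstation and the server can help (basically a poor man's iperf). Below is a minimal sketch; the port number and the ~1 GiB transfer size are arbitrary choices. Run it as "server" on one end and "client <ip>" on the other:

```python
# minimal raw-TCP throughput test to rule the network in or out
# usage:  python nettest.py server
#         python nettest.py client <server-ip>
import socket
import sys
import time

PORT = 5001
CHUNK = 1 << 20          # 1 MiB per send/recv
TOTAL = 1024 * CHUNK     # send roughly 1 GiB

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        received = 0
        start = time.perf_counter()
        while True:
            data = conn.recv(CHUNK)
            if not data:
                break
            received += len(data)
        elapsed = time.perf_counter() - start
        print(f"received {received / 1e9:.2f} GB from {addr[0]} "
              f"-> {received * 8 / 1e6 / elapsed:.0f} Mbit/s")

def client(host):
    payload = b"\0" * CHUNK
    sent = 0
    start = time.perf_counter()
    with socket.create_connection((host, PORT)) as conn:
        while sent < TOTAL:
            conn.sendall(payload)
            sent += CHUNK
    elapsed = time.perf_counter() - start
    print(f"sent {sent / 1e9:.2f} GB -> {sent * 8 / 1e6 / elapsed:.0f} Mbit/s")

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])
```

If this maxes out the link in both directions, the cabling and NIC are fine and the problem sits in the protocol/OS layer; if it doesn't, it points at the physical network.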

So on the Win10 machine, iSCSI is performing fine but I still see about a 20 percent performance drop on SMB. Observing the server itself, I am used to seeing around 15 percent CPU usage almost constantly per workstation, and I am NOT seeing that… so something isn't right with 25.04. I am going to revert, probably to EE or earlier, as a test… I do not know if it's the new ZFS pool options, something within 25.x, or both.
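As a rough way to put a number on that, something like the following can total up smbd CPU usage on the server side while a transfer runs. This is a generic Linux sketch that assumes psutil is available, not anything TrueNAS-specific (top/htop tells you much the same thing):

```python
# sample total CPU usage of all smbd processes once per second for ~30 seconds
# assumes psutil is installed (pip install psutil)
import time
import psutil

procs = [p for p in psutil.process_iter(["name"]) if p.info["name"] == "smbd"]
for p in procs:
    p.cpu_percent(None)   # prime the per-process counters

for _ in range(30):
    time.sleep(1.0)
    total = 0.0
    for p in procs:
        try:
            total += p.cpu_percent(None)
        except psutil.NoSuchProcess:
            pass
    print(f"smbd total CPU: {total:.1f} %")
```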

So for right now I simply deleted the pool on 25.x, rebooted, restored the config, and am restoring the files now. Will try again once the replication is complete.

I found another thread where the suggestion for a different issue was to return the share type back to default… which I do not wish to do. Once the replication is completed I will check my performance numbers again, and if they are still poor I will revert the share type back to default for my personal shares and retest.

SMB performance is unchanged after a full rebuild of 25.04, and iSCSI will not show up on my Windows workstations. Looks like I will be reverting back to EE at the least… will keep posting as I continue.

So I did some file transfers on my MacBook Air and I am getting 400 megabits a second, which is the maximum it does over my Wi-Fi. So it appears to be something between Windows and 25.x for sure. This weekend I will revert to an earlier version.


OK, I have now reinstalled EE and am restoring the config. Once that is done it's time to replicate the 11 TB back and see whether my performance has been restored or not…

Did you reboot your Windows clients and restart the SMB server after removing your multi-protocol NFS shares? If you didn't, then the testing was most likely invalid, since the Windows client most likely cached lease support info.

Yeppers… I have now totally wiped my primary server and reinstalled EE… but now the data will not replicate back to it. Let me start another thread since this is a new issue.

I wound up going back to Core on the primary. Versus the hours and hours of not being able to get the primary to connect to the backup using any version of SCALE, I had Core up and running in 20 minutes and receiving data. Once the replication is completed I will be able to resume my testing… and I will be taking my backup server back to Core.

So now the primary is on Dragonfish and I am replicating my 12 TB of data… then I will recreate the shares and iSCSI connections and test the performance.


So I had both a hardware and a software problem. I finally redid my primary from scratch with Core 13.3; that allowed me to get FT to replicate back to the primary, which it refused to do otherwise. Once I did that, I upgraded the system to EE. My daughter's Windows 10 machine is now performing properly. My Windows 11 Pro system was still hamstrung, so the issue was limited to there. All cable tests and other diagnostics passed, so I decided to bypass the in-wall run between my server room and my machine… and that took care of it. So the fix was to go back to Core, replicate, and upgrade to EE for one machine, and to bypass a bad network segment for the other. Looks like I will be taking the backup machine back to EE as well and staying there for the foreseeable future.