Stux
June 23, 2024, 1:12pm
21
So, you’re not sure.
And now we’re back to not knowing.
FWIW, I think ARC is cleverer and doesn’t waste space on block padding. The block is just small. After all, it’s stored compressed in the ARC anyway, so it can be any size.
It is decompressed as it’s written out to the destination of the fread() call.
But no. I haven’t checked that part of the source code recently.
(I last checked the ZFS source code when figuring out how ARC sizing works…)
EDIT: BTW, the app expects to read exactly the file size in bytes, no padding.
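For anyone who wants to convince themselves of that last point, here’s a minimal C sketch (my own illustration, not anything from the ZFS source) showing that fread() hands the application exactly the file’s logical size, with no block padding visible:

/* Hypothetical demo: fread() returns exactly the file's logical size,
 * regardless of how the filesystem sizes or compresses the record. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    struct stat st;
    if (stat(argv[1], &st) != 0) {
        perror("stat");
        return 1;
    }

    FILE *f = fopen(argv[1], "rb");
    if (f == NULL) {
        perror("fopen");
        return 1;
    }

    char *buf = malloc((size_t)st.st_size);
    if (buf == NULL) {
        perror("malloc");
        fclose(f);
        return 1;
    }

    /* Ask for st_size bytes; for a regular file we should get exactly
     * that many back, never padded out to a block boundary. */
    size_t n = fread(buf, 1, (size_t)st.st_size, f);
    printf("file size: %lld bytes, fread returned: %zu bytes\n",
           (long long)st.st_size, n);

    free(buf);
    fclose(f);
    return 0;
}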
Kiwi
June 23, 2024, 1:14pm
22
This one is not on Realtek, at least not really. This one is on me.
Live Ubuntu yielded basically full interface performance in either direction.
Since the same hardware performs fine under a different OS, I checked the driver and updated it from a version released in 2015 to one released in 2022.
Results:
Accepted connection from 2001:a61:27c9:3f01:ad98:5fa4:95f:33ff, port 54773
[ 5] local 2001:a61:27c9:3f01:9e69:b4ff:fe65:8a40 port 5201 connected to 2001:a61:27c9:3f01:ad98:5fa4:95f:33ff port 54774
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 277 MBytes 2.32 Gbits/sec
[ 5] 1.00-2.00 sec 279 MBytes 2.34 Gbits/sec
[ 5] 2.00-3.00 sec 279 MBytes 2.34 Gbits/sec
[ 5] 3.00-4.00 sec 279 MBytes 2.34 Gbits/sec
[ 5] 4.00-5.00 sec 279 MBytes 2.34 Gbits/sec
[ 5] 5.00-6.00 sec 279 MBytes 2.34 Gbits/sec
[ 5] 6.00-7.00 sec 259 MBytes 2.17 Gbits/sec
[ 5] 7.00-8.00 sec 249 MBytes 2.09 Gbits/sec
[ 5] 8.00-9.00 sec 279 MBytes 2.34 Gbits/sec
[ 5] 9.00-10.00 sec 279 MBytes 2.34 Gbits/sec
[ 5] 10.00-10.02 sec 4.28 MBytes 2.34 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.02 sec 2.68 GBytes 2.30 Gbits/sec receiver
Connecting to host 192.168.178.70, port 5201
[ 5] local 192.168.178.14 port 60502 connected to 192.168.178.70 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 284 MBytes 2.38 Gbits/sec 188 462 KBytes
[ 5] 1.00-2.00 sec 282 MBytes 2.37 Gbits/sec 237 349 KBytes
[ 5] 2.00-3.00 sec 282 MBytes 2.37 Gbits/sec 142 399 KBytes
[ 5] 3.00-4.00 sec 283 MBytes 2.37 Gbits/sec 194 408 KBytes
[ 5] 4.00-5.00 sec 281 MBytes 2.36 Gbits/sec 170 409 KBytes
[ 5] 5.00-6.00 sec 283 MBytes 2.37 Gbits/sec 79 413 KBytes
[ 5] 6.00-7.00 sec 284 MBytes 2.38 Gbits/sec 40 406 KBytes
[ 5] 7.00-8.00 sec 283 MBytes 2.37 Gbits/sec 72 416 KBytes
[ 5] 8.00-9.00 sec 283 MBytes 2.38 Gbits/sec 80 415 KBytes
[ 5] 9.00-10.00 sec 282 MBytes 2.37 Gbits/sec 30 355 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 2.76 GBytes 2.37 Gbits/sec 1232 sender
[ 5] 0.00-10.00 sec 2.76 GBytes 2.37 Gbits/sec receiver
iperf Done.
Real-world file transfers are now at around 2.3 Gbit/s, which matches these numbers.
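(Quick sanity check, assuming iperf3’s binary-prefix GBytes: 2.76 GiB × 2^30 bytes/GiB × 8 bits/byte ÷ 10.00 s ≈ 2.37 Gbit/s, so the summary line and the per-second intervals are internally consistent.)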
Drivers matter. Lesson learned.
Thank you for the handholding. All the things that can go wrong with TrueNAS/ZFS scared me into headless chicken mode.
I will get back to following the interesting discussion concerning record sizes now. (…and to thinking about how to upgrade to 10 GbE)
Stux:
So, you’re not sure.
And now we’re back to not knowing.
FWIW, I think ARC is cleverer and doesn’t waste space on block padding. The block is just small. After all, it’s stored compressed in the ARC anyway, so it can be any size.
I’m like “95% sure” that this is how the ARC behaves, as opposed to non-ARC buffers in RAM accessed by applications.
@Kiwi: You’re being very rude. Please don’t hijack our off-topic discussion in your own thread.
Stux
June 23, 2024, 1:26pm
25
Okay, we both agree that we both think that this is how ARC works… right?
I.e., compressed and not padded.
(I don’t care about the user’s memory allocation; those semantics were nailed down by K&R circa 50 years ago.)
Sara
June 24, 2024, 7:20am
27
Still not sure if I get this right; send help, please.
Reading through your examples, this is how I thought it would behave, except that I assumed the selection of the record size happens after compression.
Question 1: Going by your examples, I can’t see a scenario where a smaller record size offers any advantage, since, again, it is only a maximum value for the power of two and not a fixed size.
Question 2: How does this work:
winnielinnie:
If your dataset’s “recordsize” policy is 1M, and you save a highly-compressible 980K file, then it will be comprised of a single block that is 1M (next power-of-two) in size. It will be 1M in RAM when used by applications, and perhaps 160K stored on disk. (Again, assuming inline compression is enabled.)
So if it uses a 1M block, but only 160K is stored on disk, is it technically still a 1M block that uses only 160K of storage? I was under the assumption that it compresses first, gets to 160K, and because of that creates a 256K block (the next power of two). See the sketch below.
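To make the two readings concrete, here is a toy C sketch (entirely my own illustration of the descriptions in this thread, not code from OpenZFS) contrasting interpretation A, where the logical block is sized from the uncompressed data and compression only shrinks the on-disk footprint, with interpretation B, my compress-first assumption that would yield a 256K block:

/* Toy illustration of the two block-sizing orders being discussed.
 * Purely hypothetical; not taken from the OpenZFS source. */
#include <stdio.h>
#include <stdint.h>

/* Round up to the next power of two, clamped to the recordsize cap.
 * 512 as the smallest block is an assumption for this sketch. */
static uint64_t next_pow2(uint64_t n, uint64_t cap)
{
    uint64_t p = 512;
    while (p < n && p < cap)
        p <<= 1;
    return p;
}

int main(void)
{
    uint64_t recordsize = 1 << 20;      /* 1M recordsize policy */
    uint64_t file_size  = 980 * 1024;   /* the 980K file from the example */
    uint64_t compressed = 160 * 1024;   /* ~160K after compression */

    /* Interpretation A (winnielinnie's description): the logical block
     * is sized from the uncompressed data; compression only shrinks
     * what lands on disk. Prints: 1024K logical, ~160K on disk. */
    printf("A: logical block %lluK, on disk ~%lluK\n",
           (unsigned long long)(next_pow2(file_size, recordsize) / 1024),
           (unsigned long long)(compressed / 1024));

    /* Interpretation B (my assumption): compress first, then pick the
     * power of two. Prints: 256K. */
    printf("B: logical block %lluK\n",
           (unsigned long long)(next_pow2(compressed, recordsize) / 1024));

    return 0;
}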