Rebuilding 4-wide RAIDZ2 pool into striped mirrors pool

Hello everyone.

New to the forums; this is my first post, looking for advice on rebuilding a pool with new drives on the way. It will be a somewhat long post, as I try to be as thorough as possible.

I have a main system running TrueNAS Scale with 64GB RAM and a 4 x 16TB RAIDZ2 pool. The drives are 17 months old and the pool is currently about 35% full. I recently fitted this system with a 10GbE NIC, as I intend to use it for VFX workloads. I want to rebuild the pool into a striped mirrors configuration for better performance.

I ordered some parts that haven’t arrived yet:

  • New backup system
  • 4 x 16TB drives identical to the current ones
  • 2 x 500GB M.2 NVMe drives

The pool in the backup system will be configured in RAIDZ2 as it will be offsite and I won’t have 24/7 access to it so greater redundancy is preferred.

When I receive the new parts I plan on using the new system initially only to burn in the new drives. After that I need advice on how to best utilize the new drives to configure both pools.

I’ve narrowed it down to 3 options, but I’m not sure which is the best approach (rough command sketch of the migration after this list).

Option 1:

    • Create the striped mirrors pool using the new drives in the main system.
    • Replicate the datasets to the new pool.
    • Import the old RAIDZ2 pool into the new backup system.

Option 2:

    • Create the backup RAIDZ2 pool using the new drives in the main system.
    • Replicate the datasets to the new RAIDZ2 pool.
    • Destroy the existing pool and create the striped mirrors pool using the old drives.
    • Replicate the datasets to the new striped mirrors pool.
    • Import the new RAIDZ2 pool into the new backup system.

Option 3:

    • Use 2 of the new drives to replace 2 drives in the main pool. Do this one drive at a time and wait for each resilver to finish.
    • Use the 2 removed drives with the other 2 new drives to create the new striped mirrors pool in the main system, using one of each in each mirror VDEV.
    • Replicate the datasets to the new pool.
    • Import the RAIDZ2 pool into the new backup system.
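
For reference, here is roughly what option 1 boils down to at the command line. This is only a sketch: pool, dataset, and device names are placeholders, and in practice I’d do the replication through TrueNAS replication tasks rather than by hand.

  # main system: build the striped mirrors pool from the four new drives
  zpool create fast mirror sde sdf mirror sdg sdh

  # snapshot and replicate each dataset from the old RAIDZ2 pool ("tank")
  zfs snapshot -r tank/projects@migrate
  zfs send -R tank/projects@migrate | zfs recv -F fast/projects

  # once everything is verified on the new pool, move the old pool offsite
  zpool export tank
  # ...and on the backup system:
  zpool import tank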

Option 1 is the one with the least friction. The new performance pool gets all new drives and the old pool continues as it is.

Option 2 I’m the least fond of, as it uses all the aged drives for the performance pool and carries the added cost of replicating the datasets twice.

Option 3 I hadn’t even thought of until recently, when I read somewhere that it may be good practice to mix drives of different ages in a striped mirrors config, as it avoids symmetric wear and tear within each mirror. The same would apply to the RAIDZ2 pool, I presume. This option requires 2 separate resilvers, which would take considerably longer than the other 2 options.
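
To make option 3 concrete, the two in-place swaps would look roughly like this (device names are placeholders; with a spare SATA port the old drive can stay attached while its replacement resilvers):

  # swap the first old drive for a new one
  zpool replace tank sdb sde
  zpool status tank          # wait until the resilver completes
  # then swap the second one
  zpool replace tank sdd sdf
  zpool status tank          # second resilver; only then reuse sdb and sdd for the mirrors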

I am leaning toward option 1 but I am open to suggestions and advice. Is there another option I haven’t considered?

As far as the NVMe drives are concerned, the situation is as follows:

The main system currently uses about 20GB of memory for services. I bought an Intel NUC, which will run either Proxmox or XCP-ng, to offload compute to it and free up more memory for the ZFS cache. I’m still not sure whether the main system will run any containers or VMs after all this, but the aim is to have as few as possible.

The plan was to use both NVMe drives in a mirror VDEV pool for containers / VMs, and maybe iSCSI for the NUC. This pool would be replicated onto the main pool, which in turn would be backed up to the new system.
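
As a sketch of what I mean (names are placeholders; I’d actually set this up through the TrueNAS UI and a scheduled replication task):

  # mirrored NVMe pool for containers / VMs / iSCSI zvols
  zpool create nvmepool mirror nvme0n1 nvme1n1

  # periodic replication of the app datasets onto the main pool
  zfs snapshot -r nvmepool/apps@auto-2024-09-17
  zfs send -R nvmepool/apps@auto-2024-09-17 | zfs recv -F tank/backups/apps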

I’m not at all familiar with the different types of cache drive available in TrueNAS, as I have no experience with them. From the reading I’ve done, it seems the only type that might benefit my use case is L2ARC, but its benefit may range from marginal to inconsequential. Also, 500GB may be excessive and a waste.

I don’t expect to receive the new hardware for at least another 2 weeks, so I will continue researching this further. However, I’ve reached a point where a sanity check, general thoughts, and recommendations are more than welcome and will be greatly appreciated.

Given that any VDEV in a ZFS pool is striped with the others, a “striped mirrors pool” is a redundant, albeit effective, way of saying it.

I would go either option 1 or 3, depending on what I wanted to achieve.

Only L2ARC is a proper cache; SLOG is NOT a write cache as most users understand it. Start without frills and then look at your ARC’s hit ratio: generally, if it’s below 90%[1] you can benefit[2] from L2ARC.
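
For example, either of these will show it (both ship with OpenZFS); treat the 90% figure as a rule of thumb rather than a hard threshold:

  arc_summary | grep -A4 "ARC total accesses"   # the "Total hits" percentage is your overall ratio
  arcstat 5                                      # live per-interval hit/miss percentages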


  1. e.g. 82%. ↩︎

  2. a metadata-only L2ARC is a slightly different story. ↩︎


500 GB is the right size for an L2ARC with 64 GB RAM. Run arc_summary on your system after some days (or weeks) of uptime to know whether you would benefit from it (quite possibly not).

SLOG is not useful unless you have critical workloads with sync writes (databases, VMs, etc.).
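
A quick way to gauge that before spending money (pool name is a placeholder): check which datasets can even generate sync writes, and how much traffic actually hits the ZIL.

  zfs get -r sync tank                     # standard / always / disabled, per dataset
  arc_summary | grep -A5 "ZIL committed"   # commit counts and how much goes to SLOG vs the pool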

As for your migration, option 2 makes little sense to me.


(Why make it simple when you can make it complicated)

Option 3 implies you expect the older drives to fail sooner, which you’d then need to address immediately. If you go this route and have a spare SATA port, do the replacements with the old drive still in place to maintain redundancy.

Option 1 is the most straightforward, and raidz2 deals with drive failures more gracefully than 2-way mirrors, so it makes sense to leave the older drives there.


I was under the impression that L2ARC could only be added on pool creation. Will do just that and see where it leads.

I mentioned this option because RAIDZ2 puts more strain on drives due to parity, and I figured it might be a good idea to use the new drives for RAIDZ2 and possibly extend the lifespan of the older ones in the mirror VDEVs configuration.

Story of my life :sweat_smile:

I do not expect any of those drives to fail. They’ve been rock solid the entire time.

L2ARC (and SLOG) can be added and removed at any time, irrespective of the geometry of the underlying data vdevs. If your system has been up and in use for some time, you may run arc_summary now and post the result.
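
For example (pool and device names are placeholders), both can be attached to and detached from a live pool:

  zpool add tank cache nvme0n1                  # attach an L2ARC device
  zpool remove tank nvme0n1                     # detach it again, no harm done
  zpool add tank log mirror nvme0n1 nvme1n1     # a (mirrored) SLOG works the same way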

I updated it an hour ago. However, this system is used only as a systems and projects backup at the moment. It won’t be used for a VFX workflow until after I’ve built the new pool, so running arc_summary now wouldn’t give any meaningful results.

It would give you a baseline.

That’s during resilvering and, more marginally, scrubbing: its main advantage is being able to withstand up to two drive failures in the same VDEV.

It’s almost midnight where I am and the system is barely in use, just my brothers streaming movies (separate pool, will be removed from this system when I put it into production).

arc_summary

ZFS Subsystem Report Tue Sep 17 23:26:08 2024
Linux 6.6.32-production+truenas 2.2.4-1
Machine: truenas (x86_64) 2.2.4-1

ARC status: HEALTHY
Memory throttle count: 0

ARC size (current): 72.3 % 44.2 GiB
Target size (adaptive): 72.3 % 44.2 GiB
Min size (hard limit): 3.2 % 1.9 GiB
Max size (high water): 31:1 61.2 GiB
Anonymous data size: < 0.1 % 640.5 KiB
Anonymous metadata size: < 0.1 % 6.6 MiB
MFU data target: 37.5 % 15.9 GiB
MFU data size: 6.1 % 2.6 GiB
MFU ghost data size: 0 Bytes
MFU metadata target: 12.5 % 5.3 GiB
MFU metadata size: 2.7 % 1.2 GiB
MFU ghost metadata size: 0 Bytes
MRU data target: 37.5 % 15.9 GiB
MRU data size: 89.9 % 38.2 GiB
MRU ghost data size: 2.1 GiB
MRU metadata target: 12.5 % 5.3 GiB
MRU metadata size: 1.2 % 515.6 MiB
MRU ghost metadata size: 0 Bytes
Uncached data size: 0.0 % 0 Bytes
Uncached metadata size: 0.0 % 0 Bytes
Bonus size: 0.6 % 269.9 MiB
Dnode cache target: 10.0 % 6.1 GiB
Dnode cache size: 13.5 % 844.5 MiB
Dbuf size: 0.8 % 356.8 MiB
Header size: 0.7 % 326.6 MiB
L2 header size: 0.0 % 0 Bytes
ABD chunk waste size: < 0.1 % 11.7 MiB

ARC hash breakdown:
Elements max: 1.4M
Elements current: 100.0 % 1.4M
Collisions: 152.6k
Chain max: 4
Chains: 107.6k

ARC misc:
Deleted: 403.8k
Mutex misses: 122
Eviction skips: 22.9k
Eviction skips due to L2 writes: 0
L2 cached evictions: 0 Bytes
L2 eligible evictions: 44.3 GiB
L2 eligible MFU evictions: < 0.1 % 304.5 KiB
L2 eligible MRU evictions: 100.0 % 44.3 GiB
L2 ineligible evictions: 4.3 GiB

ARC total accesses: 97.3M
Total hits: 97.8 % 95.2M
Total I/O hits: 0.3 % 294.1k
Total misses: 1.8 % 1.8M

ARC demand data accesses: 17.9 % 17.4M
Demand data hits: 95.8 % 16.7M
Demand data I/O hits: 1.4 % 235.1k
Demand data misses: 2.9 % 498.1k

ARC demand metadata accesses: 80.8 % 78.6M
Demand metadata hits: 99.9 % 78.5M
Demand metadata I/O hits: < 0.1 % 30.4k
Demand metadata misses: 0.1 % 53.7k

ARC prefetch data accesses: 1.2 % 1.2M
Prefetch data hits: 0.3 % 3.8k
Prefetch data I/O hits: 0.2 % 2.0k
Prefetch data misses: 99.5 % 1.2M

ARC prefetch metadata accesses: 0.1 % 117.6k
Prefetch metadata hits: 30.0 % 35.2k
Prefetch metadata I/O hits: 22.6 % 26.5k
Prefetch metadata misses: 47.5 % 55.8k

ARC predictive prefetches: 100.0 % 1.3M
Demand hits after predictive: 74.5 % 980.8k
Demand I/O hits after predictive: 18.5 % 242.9k
Never demanded after predictive: 7.0 % 92.5k

ARC prescient prefetches: < 0.1 % 131
Demand hits after prescient: 80.2 % 105
Demand I/O hits after prescient: 19.8 % 26
Never demanded after prescient: 0.0 % 0

ARC states hits of all accesses:
Most frequently used (MFU): 92.2 % 89.7M
Most recently used (MRU): 5.7 % 5.5M
Most frequently used (MFU) ghost: 0.0 % 0
Most recently used (MRU) ghost: < 0.1 % 18
Uncached: 0.0 % 0

DMU predictive prefetcher calls: 15.3M
Stream hits: 9.0 % 1.4M
Hits ahead of stream: 9.5 % 1.5M
Hits behind stream: 71.2 % 10.9M
Stream misses: 10.2 % 1.6M
Streams limit reached: 33.5 % 526.0k
Stream strides: 8.9k
Prefetches issued 1.2M

L2ARC not detected, skipping section

Solaris Porting Layer (SPL):
spl_hostid 0
spl_hostid_path /etc/hostid
spl_kmem_alloc_max 16777216
spl_kmem_alloc_warn 65536
spl_kmem_cache_kmem_threads 4
spl_kmem_cache_magazine_size 0
spl_kmem_cache_max_size 32
spl_kmem_cache_obj_per_slab 8
spl_kmem_cache_slab_limit 16384
spl_max_show_tasks 512
spl_panic_halt 1
spl_schedule_hrtimeout_slack_us 0
spl_taskq_kick 0
spl_taskq_thread_bind 0
spl_taskq_thread_dynamic 1
spl_taskq_thread_priority 1
spl_taskq_thread_sequential 4
spl_taskq_thread_timeout_ms 5000

Tunables:
brt_zap_default_bs 12
brt_zap_default_ibs 12
brt_zap_prefetch 1
dbuf_cache_hiwater_pct 10
dbuf_cache_lowater_pct 10
dbuf_cache_max_bytes 18446744073709551615
dbuf_cache_shift 5
dbuf_metadata_cache_max_bytes 18446744073709551615
dbuf_metadata_cache_shift 6
dbuf_mutex_cache_shift 0
ddt_zap_default_bs 15
ddt_zap_default_ibs 15
dmu_object_alloc_chunk_shift 7
dmu_prefetch_max 134217728
icp_aes_impl cycle [fastest] generic x86_64 aesni
icp_gcm_avx_chunk_size 32736
icp_gcm_impl cycle [fastest] avx generic pclmulqdq
ignore_hole_birth 1
l2arc_exclude_special 0
l2arc_feed_again 1
l2arc_feed_min_ms 200
l2arc_feed_secs 1
l2arc_headroom 2
l2arc_headroom_boost 200
l2arc_meta_percent 33
l2arc_mfuonly 0
l2arc_noprefetch 1
l2arc_norw 0
l2arc_rebuild_blocks_min_l2size 1073741824
l2arc_rebuild_enabled 1
l2arc_trim_ahead 0
l2arc_write_boost 8388608
l2arc_write_max 8388608
metaslab_aliquot 1048576
metaslab_bias_enabled 1
metaslab_debug_load 0
metaslab_debug_unload 0
metaslab_df_max_search 16777216
metaslab_df_use_largest_segment 0
metaslab_force_ganging 16777217
metaslab_force_ganging_pct 3
metaslab_fragmentation_factor_enabled 1
metaslab_lba_weighting_enabled 1
metaslab_preload_enabled 1
metaslab_preload_limit 10
metaslab_preload_pct 50
metaslab_unload_delay 32
metaslab_unload_delay_ms 600000
send_holes_without_birth_time 1
spa_asize_inflation 24
spa_config_path /etc/zfs/zpool.cache
spa_load_print_vdev_tree 0
spa_load_verify_data 1
spa_load_verify_metadata 1
spa_load_verify_shift 4
spa_slop_shift 5
spa_upgrade_errlog_limit 0
vdev_file_logical_ashift 9
vdev_file_physical_ashift 9
vdev_removal_max_span 32768
vdev_validate_skip 0
zap_iterate_prefetch 1
zap_micro_max_size 131072
zfetch_hole_shift 2
zfetch_max_distance 67108864
zfetch_max_idistance 67108864
zfetch_max_reorder 16777216
zfetch_max_sec_reap 2
zfetch_max_streams 8
zfetch_min_distance 4194304
zfetch_min_sec_reap 1
zfs_abd_scatter_enabled 1
zfs_abd_scatter_max_order 13
zfs_abd_scatter_min_size 1536
zfs_admin_snapshot 0
zfs_allow_redacted_dataset_mount 0
zfs_arc_average_blocksize 8192
zfs_arc_dnode_limit 0
zfs_arc_dnode_limit_percent 10
zfs_arc_dnode_reduce_percent 10
zfs_arc_evict_batch_limit 10
zfs_arc_eviction_pct 200
zfs_arc_grow_retry 0
zfs_arc_lotsfree_percent 10
zfs_arc_max 0
zfs_arc_meta_balance 500
zfs_arc_min 0
zfs_arc_min_prefetch_ms 0
zfs_arc_min_prescient_prefetch_ms 0
zfs_arc_pc_percent 300
zfs_arc_prune_task_threads 1
zfs_arc_shrink_shift 0
zfs_arc_shrinker_limit 0
zfs_arc_sys_free 0
zfs_async_block_max_blocks 18446744073709551615
zfs_autoimport_disable 1
zfs_bclone_enabled 1
zfs_bclone_wait_dirty 0
zfs_blake3_impl cycle [fastest] generic sse2 sse41 avx2
zfs_btree_verify_intensity 0
zfs_checksum_events_per_second 20
zfs_commit_timeout_pct 10
zfs_compressed_arc_enabled 1
zfs_condense_indirect_commit_entry_delay_ms 0
zfs_condense_indirect_obsolete_pct 25
zfs_condense_indirect_vdevs_enable 1
zfs_condense_max_obsolete_bytes 1073741824
zfs_condense_min_mapping_bytes 131072
zfs_dbgmsg_enable 1
zfs_dbgmsg_maxsize 4194304
zfs_dbuf_state_index 0
zfs_ddt_data_is_special 1
zfs_deadman_checktime_ms 60000
zfs_deadman_enabled 1
zfs_deadman_failmode wait
zfs_deadman_synctime_ms 600000
zfs_deadman_ziotime_ms 300000
zfs_dedup_prefetch 0
zfs_default_bs 9
zfs_default_ibs 15
zfs_delay_min_dirty_percent 60
zfs_delay_scale 500000
zfs_delete_blocks 20480
zfs_dirty_data_max 4294967296
zfs_dirty_data_max_max 4294967296
zfs_dirty_data_max_max_percent 25
zfs_dirty_data_max_percent 10
zfs_dirty_data_sync_percent 20
zfs_disable_ivset_guid_check 0
zfs_dmu_offset_next_sync 1
zfs_embedded_slog_min_ms 64
zfs_expire_snapshot 300
zfs_fallocate_reserve_percent 110
zfs_flags 0
zfs_fletcher_4_impl [fastest] scalar superscalar superscalar4 sse2 ssse3 avx2
zfs_free_bpobj_enabled 1
zfs_free_leak_on_eio 0
zfs_free_min_time_ms 1000
zfs_history_output_max 1048576
zfs_immediate_write_sz 32768
zfs_initialize_chunk_size 1048576
zfs_initialize_value 16045690984833335022
zfs_keep_log_spacemaps_at_export 0
zfs_key_max_salt_uses 400000000
zfs_livelist_condense_new_alloc 0
zfs_livelist_condense_sync_cancel 0
zfs_livelist_condense_sync_pause 0
zfs_livelist_condense_zthr_cancel 0
zfs_livelist_condense_zthr_pause 0
zfs_livelist_max_entries 500000
zfs_livelist_min_percent_shared 75
zfs_lua_max_instrlimit 100000000
zfs_lua_max_memlimit 104857600
zfs_max_async_dedup_frees 100000
zfs_max_dataset_nesting 50
zfs_max_log_walking 5
zfs_max_logsm_summary_length 10
zfs_max_missing_tvds 0
zfs_max_nvlist_src_size 0
zfs_max_recordsize 16777216
zfs_metaslab_find_max_tries 100
zfs_metaslab_fragmentation_threshold 70
zfs_metaslab_max_size_cache_sec 3600
zfs_metaslab_mem_limit 25
zfs_metaslab_segment_weight_enabled 1
zfs_metaslab_switch_threshold 2
zfs_metaslab_try_hard_before_gang 0
zfs_mg_fragmentation_threshold 95
zfs_mg_noalloc_threshold 0
zfs_min_metaslabs_to_flush 1
zfs_multihost_fail_intervals 10
zfs_multihost_history 0
zfs_multihost_import_intervals 20
zfs_multihost_interval 1000
zfs_multilist_num_sublists 0
zfs_no_scrub_io 0
zfs_no_scrub_prefetch 0
zfs_nocacheflush 0
zfs_nopwrite_enabled 1
zfs_object_mutex_size 64
zfs_obsolete_min_time_ms 500
zfs_override_estimate_recordsize 0
zfs_pd_bytes_max 52428800
zfs_per_txg_dirty_frees_percent 30
zfs_prefetch_disable 0
zfs_read_history 0
zfs_read_history_hits 0
zfs_rebuild_max_segment 1048576
zfs_rebuild_scrub_enabled 1
zfs_rebuild_vdev_limit 67108864
zfs_reconstruct_indirect_combinations_max 4096
zfs_recover 0
zfs_recv_best_effort_corrective 0
zfs_recv_queue_ff 20
zfs_recv_queue_length 16777216
zfs_recv_write_batch_size 1048576
zfs_removal_ignore_errors 0
zfs_removal_suspend_progress 0
zfs_remove_max_segment 16777216
zfs_resilver_disable_defer 0
zfs_resilver_min_time_ms 3000
zfs_scan_blkstats 0
zfs_scan_checkpoint_intval 7200
zfs_scan_fill_weight 3
zfs_scan_ignore_errors 0
zfs_scan_issue_strategy 0
zfs_scan_legacy 0
zfs_scan_max_ext_gap 2097152
zfs_scan_mem_lim_fact 20
zfs_scan_mem_lim_soft_fact 20
zfs_scan_report_txgs 0
zfs_scan_strict_mem_lim 0
zfs_scan_suspend_progress 0
zfs_scan_vdev_limit 16777216
zfs_scrub_error_blocks_per_txg 4096
zfs_scrub_min_time_ms 1000
zfs_send_corrupt_data 0
zfs_send_no_prefetch_queue_ff 20
zfs_send_no_prefetch_queue_length 1048576
zfs_send_queue_ff 20
zfs_send_queue_length 16777216
zfs_send_unmodified_spill_blocks 1
zfs_sha256_impl cycle [fastest] generic x64 ssse3 avx avx2 shani
zfs_sha512_impl cycle [fastest] generic x64 avx avx2
zfs_slow_io_events_per_second 20
zfs_snapshot_history_enabled 1
zfs_spa_discard_memory_limit 16777216
zfs_special_class_metadata_reserve_pct 25
zfs_sync_pass_deferred_free 2
zfs_sync_pass_dont_compress 8
zfs_sync_pass_rewrite 2
zfs_sync_taskq_batch_pct 75
zfs_traverse_indirect_prefetch_limit 32
zfs_trim_extent_bytes_max 134217728
zfs_trim_extent_bytes_min 32768
zfs_trim_metaslab_skip 0
zfs_trim_queue_limit 10
zfs_trim_txg_batch 32
zfs_txg_history 100
zfs_txg_timeout 5
zfs_unflushed_log_block_max 131072
zfs_unflushed_log_block_min 1000
zfs_unflushed_log_block_pct 400
zfs_unflushed_log_txg_max 1000
zfs_unflushed_max_mem_amt 1073741824
zfs_unflushed_max_mem_ppm 1000
zfs_unlink_suspend_progress 0
zfs_user_indirect_is_special 1
zfs_vdev_aggregation_limit 1048576
zfs_vdev_aggregation_limit_non_rotating 131072
zfs_vdev_async_read_max_active 3
zfs_vdev_async_read_min_active 1
zfs_vdev_async_write_active_max_dirty_percent 60
zfs_vdev_async_write_active_min_dirty_percent 30
zfs_vdev_async_write_max_active 10
zfs_vdev_async_write_min_active 2
zfs_vdev_def_queue_depth 32
zfs_vdev_default_ms_count 200
zfs_vdev_default_ms_shift 29
zfs_vdev_disk_classic 1
zfs_vdev_disk_max_segs 0
zfs_vdev_failfast_mask 1
zfs_vdev_initializing_max_active 1
zfs_vdev_initializing_min_active 1
zfs_vdev_max_active 1000
zfs_vdev_max_auto_ashift 14
zfs_vdev_max_ms_shift 34
zfs_vdev_min_auto_ashift 9
zfs_vdev_min_ms_count 16
zfs_vdev_mirror_non_rotating_inc 0
zfs_vdev_mirror_non_rotating_seek_inc 1
zfs_vdev_mirror_rotating_inc 0
zfs_vdev_mirror_rotating_seek_inc 5
zfs_vdev_mirror_rotating_seek_offset 1048576
zfs_vdev_ms_count_limit 131072
zfs_vdev_nia_credit 5
zfs_vdev_nia_delay 5
zfs_vdev_open_timeout_ms 1000
zfs_vdev_queue_depth_pct 1000
zfs_vdev_raidz_impl cycle [fastest] original scalar sse2 ssse3 avx2
zfs_vdev_read_gap_limit 32768
zfs_vdev_rebuild_max_active 3
zfs_vdev_rebuild_min_active 1
zfs_vdev_removal_max_active 2
zfs_vdev_removal_min_active 1
zfs_vdev_scheduler unused
zfs_vdev_scrub_max_active 3
zfs_vdev_scrub_min_active 1
zfs_vdev_sync_read_max_active 10
zfs_vdev_sync_read_min_active 10
zfs_vdev_sync_write_max_active 10
zfs_vdev_sync_write_min_active 10
zfs_vdev_trim_max_active 2
zfs_vdev_trim_min_active 1
zfs_vdev_write_gap_limit 4096
zfs_vnops_read_chunk_size 1048576
zfs_wrlog_data_max 8589934592
zfs_xattr_compat 0
zfs_zevent_len_max 512
zfs_zevent_retain_expire_secs 900
zfs_zevent_retain_max 2000
zfs_zil_clean_taskq_maxalloc 1048576
zfs_zil_clean_taskq_minalloc 1024
zfs_zil_clean_taskq_nthr_pct 100
zfs_zil_saxattr 1
zil_maxblocksize 131072
zil_maxcopied 7680
zil_nocacheflush 0
zil_replay_disable 0
zil_slog_bulk 67108864
zio_deadman_log_all 0
zio_dva_throttle_enabled 1
zio_requeue_io_start_cut_in_line 1
zio_slow_io_ms 30000
zio_taskq_batch_pct 80
zio_taskq_batch_tpq 0
zio_taskq_read fixed,1,8 null scale null
zio_taskq_write batch fixed,1,5 scale fixed,1,5
zstd_abort_size 131072
zstd_earlyabort_pass 1
zvol_blk_mq_blocks_per_thread 8
zvol_blk_mq_queue_depth 128
zvol_enforce_quotas 1
zvol_inhibit_dev 0
zvol_major 230
zvol_max_discard_blocks 16384
zvol_num_taskqs 0
zvol_open_timeout_ms 1000
zvol_prefetch_bytes 131072
zvol_request_sync 0
zvol_threads 0
zvol_use_blk_mq 0
zvol_volmode 2

ZIL committed transactions: 554.6k
Commit requests: 25.7k
Flushes to stable storage: 25.7k
Transactions to SLOG storage pool: 0 Bytes 0
Transactions to non-SLOG storage pool: 1.4 GiB 33.7k

Doesn’t it also put more strain on the pool when writing data? It has to write both the data and the parity for that data. When I use zfs send | zfs receive, the read rate from the source pool is around half the write rate on the destination pool.
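
Back of the envelope, that seems consistent with what I’d expect, assuming the destination is the 4-wide RAIDZ2:

  # 4-wide RAIDZ2: a full-stripe write lands on 2 data disks + 2 parity disks,
  # so raw bytes written ≈ 2 x logical bytes received; X MB/s read from the
  # source then shows up as roughly 2X MB/s of aggregate writes on the
  # destination (small blocks and padding skew the ratio somewhat)
  zpool iostat -v tank 5    # per-disk rates make the ratio visible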

I suggest reading the quotes incorporated in this resource and the thread they come from.


Thanks for this. Just skimmed it now and it looks like the kind of information I need to read. Will look at it tomorrow as it’s getting late here.
