L2ARC or new pool

Hello everyone,
I just got a new Intel Optane 900P.
I'm wondering whether I should add it to my current pool as L2ARC, or set it up as a separate pool to host VMs or some apps. Here is my current arc_summary output:
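For reference, the two options I'm weighing would look roughly like this. The pool name and device path below are placeholders for illustration, not my actual layout:

```shell
# Option A: attach the 900P to the existing pool as an L2ARC (cache) device.
# "tank" and the by-id path are placeholders, not my real pool/device names.
zpool add tank cache /dev/disk/by-id/nvme-INTEL_SSDPED1D280GA_XXXX

# Option B: create a separate single-disk pool for VMs/apps instead.
# Note: a single-vdev pool like this has no redundancy.
zpool create optane /dev/disk/by-id/nvme-INTEL_SSDPED1D280GA_XXXX
```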

------------------------------------------------------------------------
ZFS Subsystem Report                            Mon Aug 11 20:34:21 2025
Linux 6.6.44-production+truenas                                 2.2.99-1
Machine: truenas (x86_64)                                       2.2.99-1

ARC status:
        Total memory size:                                      62.7 GiB
        Min target size:                                3.1 %    2.0 GiB
        Max target size:                               98.4 %   61.7 GiB
        Target size (adaptive):                        88.2 %   54.6 GiB
        Current size:                                  88.2 %   54.5 GiB
        Free memory size:                                        2.9 GiB
        Available memory size:                                 753.2 MiB

ARC structural breakdown (current size):                        54.5 GiB
        Compressed size:                               95.8 %   52.2 GiB
        Overhead size:                                  3.3 %    1.8 GiB
        Bonus size:                                     0.1 %   43.4 MiB
        Dnode size:                                     0.3 %  148.5 MiB
        Dbuf size:                                      0.1 %   83.5 MiB
        Header size:                                    0.4 %  211.3 MiB
        L2 header size:                                 0.0 %    0 Bytes
        ABD chunk waste size:                         < 0.1 %   10.9 MiB

ARC types breakdown (compressed + overhead):                    54.0 GiB
        Data size:                                     92.8 %   50.1 GiB
        Metadata size:                                  7.2 %    3.9 GiB

ARC states breakdown (compressed + overhead):                   54.0 GiB
        Anonymous data size:                          < 0.1 %    1.1 MiB
        Anonymous metadata size:                        0.1 %   57.4 MiB
        MFU data target:                               37.5 %   20.2 GiB
        MFU data size:                                  2.1 %    1.1 GiB
        MFU evictable data size:                        1.5 %  815.4 MiB
        MFU ghost data size:                                     0 Bytes
        MFU metadata target:                           12.5 %    6.7 GiB
        MFU metadata size:                              1.0 %  571.5 MiB
        MFU evictable metadata size:                    0.7 %  401.1 MiB
        MFU ghost metadata size:                                 0 Bytes
        MRU data target:                               37.5 %   20.2 GiB
        MRU data size:                                 90.7 %   49.0 GiB
        MRU evictable data size:                       86.1 %   46.5 GiB
        MRU ghost data size:                                     2.5 GiB
        MRU metadata target:                           12.5 %    6.7 GiB
        MRU metadata size:                              6.1 %    3.3 GiB
        MRU evictable metadata size:                    5.8 %    3.1 GiB
        MRU ghost metadata size:                                 0 Bytes
        Uncached data size:                             0.0 %    0 Bytes
        Uncached metadata size:                         0.0 %    0 Bytes

ARC hash breakdown:
        Elements max:                                             853.9k
        Elements current:                             100.0 %     853.9k
        Collisions:                                                 5.7M
        Chain max:                                                     5
        Chains:                                                    40.3k

ARC misc:
        Memory throttles:                                              0
        Memory direct reclaims:                                        0
        Memory indirect reclaims:                                      0
        Deleted:                                                   72.7M
        Mutex misses:                                               2.3k
        Eviction skips:                                             9.5k
        Eviction skips due to L2 writes:                               0
        L2 cached evictions:                                     0 Bytes
        L2 eligible evictions:                                   8.7 TiB
        L2 eligible MFU evictions:                    < 0.1 %   96.0 KiB
        L2 eligible MRU evictions:                    100.0 %    8.7 TiB
        L2 ineligible evictions:                               396.9 MiB

ARC total accesses:                                               177.4M
        Total hits:                                    95.2 %     168.8M
        Total I/O hits:                               < 0.1 %      69.8k
        Total misses:                                   4.8 %       8.5M

ARC demand data accesses:                              20.2 %      35.8M
        Demand data hits:                              99.3 %      35.6M
        Demand data I/O hits:                           0.2 %      57.3k
        Demand data misses:                             0.6 %     210.8k

ARC demand metadata accesses:                          75.1 %     133.3M
        Demand metadata hits:                         100.0 %     133.2M
        Demand metadata I/O hits:                     < 0.1 %       1.0k
        Demand metadata misses:                       < 0.1 %      34.0k

ARC prefetch data accesses:                             4.6 %       8.2M
        Prefetch data hits:                           < 0.1 %       2.9k
        Prefetch data I/O hits:                       < 0.1 %        248
        Prefetch data misses:                         100.0 %       8.2M

ARC prefetch metadata accesses:                       < 0.1 %      72.1k
        Prefetch metadata hits:                        37.2 %      26.8k
        Prefetch metadata I/O hits:                    15.5 %      11.2k
        Prefetch metadata misses:                      47.2 %      34.1k

ARC predictive prefetches:                            100.0 %       8.3M
        Demand hits after predictive:                  98.8 %       8.2M
        Demand I/O hits after predictive:               0.7 %      57.9k
        Never demanded after predictive:                0.5 %      41.8k

ARC prescient prefetches:                             < 0.1 %        105
        Demand hits after prescient:                   73.3 %         77
        Demand I/O hits after prescient:               26.7 %         28
        Never demanded after prescient:                 0.0 %          0

ARC states hits of all accesses:
        Most frequently used (MFU):                    86.6 %     153.6M
        Most recently used (MRU):                       8.6 %      15.2M
        Most frequently used (MFU) ghost:               0.0 %          0
        Most recently used (MRU) ghost:               < 0.1 %        105
        Uncached:                                       0.0 %          0

DMU predictive prefetcher calls:                                   16.8M
        Stream hits:                                   15.7 %       2.6M
        Hits ahead of stream:                          13.9 %       2.3M
        Hits behind stream:                            18.2 %       3.1M
        Stream misses:                                 52.1 %       8.7M
        Streams limit reached:                          1.4 %     121.9k
        Stream strides:                                           126.1k
        Prefetches issued                                           8.3M

L2ARC not detected, skipping section

Solaris Porting Layer (SPL):
        spl_hostid                                                     0
        spl_hostid_path                                      /etc/hostid
        spl_kmem_alloc_max                                      16777216
        spl_kmem_alloc_warn                                        65536
        spl_kmem_cache_kmem_threads                                    4
        spl_kmem_cache_magazine_size                                   0
        spl_kmem_cache_max_size                                       32
        spl_kmem_cache_obj_per_slab                                    8
        spl_kmem_cache_slab_limit                                  16384
        spl_panic_halt                                                 1
        spl_schedule_hrtimeout_slack_us                                0
        spl_taskq_kick                                                 0
        spl_taskq_thread_bind                                          0
        spl_taskq_thread_dynamic                                       1
        spl_taskq_thread_priority                                      1
        spl_taskq_thread_sequential                                    4
        spl_taskq_thread_timeout_ms                                 5000

Tunables:
        brt_zap_default_bs                                            12
        brt_zap_default_ibs                                           12
        brt_zap_prefetch                                               1
        dbuf_cache_hiwater_pct                                        10
        dbuf_cache_lowater_pct                                        10
        dbuf_cache_max_bytes                        18446744073709551615
        dbuf_cache_shift                                               5
        dbuf_metadata_cache_max_bytes               18446744073709551615
        dbuf_metadata_cache_shift                                      6
        dbuf_mutex_cache_shift                                         0
        ddt_zap_default_bs                                            15
        ddt_zap_default_ibs                                           15
        dmu_ddt_copies                                                 0
        dmu_object_alloc_chunk_shift                                   7
        dmu_prefetch_max                                       134217728
        icp_aes_impl                cycle [fastest] generic x86_64 aesni
        icp_gcm_avx_chunk_size                                     32736
        icp_gcm_impl               cycle [fastest] avx generic pclmulqdq
        ignore_hole_birth                                              1
        l2arc_exclude_special                                          0
        l2arc_feed_again                                               1
        l2arc_feed_min_ms                                            200
        l2arc_feed_secs                                                1
        l2arc_headroom                                                 8
        l2arc_headroom_boost                                         200
        l2arc_meta_percent                                            33
        l2arc_mfuonly                                                  0
        l2arc_noprefetch                                               1
        l2arc_norw                                                     0
        l2arc_rebuild_blocks_min_l2size                       1073741824
        l2arc_rebuild_enabled                                          1
        l2arc_trim_ahead                                               0
        l2arc_write_boost                                       33554432
        l2arc_write_max                                         33554432
        metaslab_aliquot                                         1048576
        metaslab_bias_enabled                                          1
        metaslab_debug_load                                            0
        metaslab_debug_unload                                          0
        metaslab_df_max_search                                  16777216
        metaslab_df_use_largest_segment                                0
        metaslab_force_ganging                                  16777217
        metaslab_force_ganging_pct                                     3
        metaslab_fragmentation_factor_enabled                          1
        metaslab_lba_weighting_enabled                                 1
        metaslab_preload_enabled                                       1
        metaslab_preload_limit                                        10
        metaslab_preload_pct                                          50
        metaslab_unload_delay                                         32
        metaslab_unload_delay_ms                                  600000
        raidz_expand_max_copy_bytes                            167772160
        raidz_expand_max_reflow_bytes                                  0
        raidz_io_aggregate_rows                                        4
        send_holes_without_birth_time                                  1
        spa_asize_inflation                                           24
        spa_config_path                             /etc/zfs/zpool.cache
        spa_cpus_per_allocator                                         4
        spa_load_print_vdev_tree                                       0
        spa_load_verify_data                                           1
        spa_load_verify_metadata                                       1
        spa_load_verify_shift                                          4
        spa_num_allocators                                             4
        spa_slop_shift                                                 5
        spa_upgrade_errlog_limit                                       0
        vdev_file_logical_ashift                                       9
        vdev_file_physical_ashift                                      9
        vdev_removal_max_span                                      32768
        vdev_validate_skip                                             0
        zap_iterate_prefetch                                           1
        zap_micro_max_size                                        131072
        zap_shrink_enabled                                             1
        zfetch_hole_shift                                              2
        zfetch_max_distance                                     67108864
        zfetch_max_idistance                                    67108864
        zfetch_max_reorder                                      16777216
        zfetch_max_sec_reap                                            2
        zfetch_max_streams                                             8
        zfetch_min_distance                                      4194304
        zfetch_min_sec_reap                                            1
        zfs_abd_scatter_enabled                                        1
        zfs_abd_scatter_max_order                                     13
        zfs_abd_scatter_min_size                                    1536
        zfs_active_allocator                                     dynamic
        zfs_admin_snapshot                                             0
        zfs_allow_redacted_dataset_mount                               0
        zfs_arc_average_blocksize                                   8192
        zfs_arc_dnode_limit                                            0
        zfs_arc_dnode_limit_percent                                   10
        zfs_arc_dnode_reduce_percent                                  10
        zfs_arc_evict_batch_limit                                     10
        zfs_arc_eviction_pct                                         200
        zfs_arc_grow_retry                                             0
        zfs_arc_lotsfree_percent                                      10
        zfs_arc_max                                                    0
        zfs_arc_meta_balance                                         500
        zfs_arc_min                                                    0
        zfs_arc_min_prefetch_ms                                        0
        zfs_arc_min_prescient_prefetch_ms                              0
        zfs_arc_pc_percent                                           300
        zfs_arc_prune_task_threads                                     1
        zfs_arc_shrink_shift                                           0
        zfs_arc_shrinker_limit                                         0
        zfs_arc_shrinker_seeks                                         2
        zfs_arc_sys_free                                               0
        zfs_async_block_max_blocks                  18446744073709551615
        zfs_autoimport_disable                                         1
        zfs_bclone_enabled                                             1
        zfs_bclone_wait_dirty                                          0
        zfs_blake3_impl          cycle [fastest] generic sse2 sse41 avx2
        zfs_btree_verify_intensity                                     0
        zfs_checksum_events_per_second                                20
        zfs_commit_timeout_pct                                        10
        zfs_compressed_arc_enabled                                     1
        zfs_condense_indirect_commit_entry_delay_ms                    0
        zfs_condense_indirect_obsolete_pct                            25
        zfs_condense_indirect_vdevs_enable                             1
        zfs_condense_max_obsolete_bytes                       1073741824
        zfs_condense_min_mapping_bytes                            131072
        zfs_dbgmsg_enable                                              1
        zfs_dbgmsg_maxsize                                       4194304
        zfs_dbuf_state_index                                           0
        zfs_ddt_data_is_special                                        1
        zfs_deadman_checktime_ms                                   60000
        zfs_deadman_enabled                                            1
        zfs_deadman_events_per_second                                  1
        zfs_deadman_failmode                                        wait
        zfs_deadman_synctime_ms                                   600000
        zfs_deadman_ziotime_ms                                    300000
        zfs_dedup_log_flush_entries_min                             1000
        zfs_dedup_log_flush_flow_rate_txgs                            10
        zfs_dedup_log_flush_min_time_ms                             1000
        zfs_dedup_log_flush_passes_max                                 8
        zfs_dedup_log_mem_max                                  673470464
        zfs_dedup_log_mem_max_percent                                  1
        zfs_dedup_log_txg_max                                          8
        zfs_dedup_prefetch                                             0
        zfs_default_bs                                                 9
        zfs_default_ibs                                               15
        zfs_delay_min_dirty_percent                                   60
        zfs_delay_scale                                           500000
        zfs_delete_blocks                                          20480
        zfs_dirty_data_max                                    4294967296
        zfs_dirty_data_max_max                                4294967296
        zfs_dirty_data_max_max_percent                                25
        zfs_dirty_data_max_percent                                    10
        zfs_dirty_data_sync_percent                                   20
        zfs_disable_ivset_guid_check                                   0
        zfs_dmu_offset_next_sync                                       1
        zfs_embedded_slog_min_ms                                      64
        zfs_expire_snapshot                                          300
        zfs_fallocate_reserve_percent                                110
        zfs_flags                                                      0
        zfs_fletcher_4_impl [fastest] scalar superscalar superscalar4 sse2 ssse3 avx2
        zfs_free_bpobj_enabled                                         1
        zfs_free_leak_on_eio                                           0
        zfs_free_min_time_ms                                        1000
        zfs_history_output_max                                   1048576
        zfs_immediate_write_sz                                     32768
        zfs_initialize_chunk_size                                1048576
        zfs_initialize_value                        16045690984833335022
        zfs_keep_log_spacemaps_at_export                               0
        zfs_key_max_salt_uses                                  400000000
        zfs_livelist_condense_new_alloc                                0
        zfs_livelist_condense_sync_cancel                              0
        zfs_livelist_condense_sync_pause                               0
        zfs_livelist_condense_zthr_cancel                              0
        zfs_livelist_condense_zthr_pause                               0
        zfs_livelist_max_entries                                  500000
        zfs_livelist_min_percent_shared                               75
        zfs_lua_max_instrlimit                                 100000000
        zfs_lua_max_memlimit                                   104857600
        zfs_max_async_dedup_frees                                 100000
        zfs_max_dataset_nesting                                       50
        zfs_max_log_walking                                            5
        zfs_max_logsm_summary_length                                  10
        zfs_max_missing_tvds                                           0
        zfs_max_nvlist_src_size                                        0
        zfs_max_recordsize                                      16777216
        zfs_metaslab_find_max_tries                                  100
        zfs_metaslab_fragmentation_threshold                          70
        zfs_metaslab_max_size_cache_sec                             3600
        zfs_metaslab_mem_limit                                        25
        zfs_metaslab_segment_weight_enabled                            1
        zfs_metaslab_switch_threshold                                  2
        zfs_metaslab_try_hard_before_gang                              0
        zfs_mg_fragmentation_threshold                                95
        zfs_mg_noalloc_threshold                                       0
        zfs_min_metaslabs_to_flush                                     1
        zfs_multihost_fail_intervals                                  10
        zfs_multihost_history                                          0
        zfs_multihost_import_intervals                                20
        zfs_multihost_interval                                      1000
        zfs_multilist_num_sublists                                     0
        zfs_no_scrub_io                                                0
        zfs_no_scrub_prefetch                                          0
        zfs_nocacheflush                                               0
        zfs_nopwrite_enabled                                           1
        zfs_object_mutex_size                                         64
        zfs_obsolete_min_time_ms                                     500
        zfs_override_estimate_recordsize                               0
        zfs_pd_bytes_max                                        52428800
        zfs_per_txg_dirty_frees_percent                               30
        zfs_prefetch_disable                                           0
        zfs_read_history                                               0
        zfs_read_history_hits                                          0
        zfs_rebuild_max_segment                                  1048576
        zfs_rebuild_scrub_enabled                                      1
        zfs_rebuild_vdev_limit                                  67108864
        zfs_reconstruct_indirect_combinations_max                   4096
        zfs_recover                                                    0
        zfs_recv_best_effort_corrective                                0
        zfs_recv_queue_ff                                             20
        zfs_recv_queue_length                                   16777216
        zfs_recv_write_batch_size                                1048576
        zfs_removal_ignore_errors                                      0
        zfs_removal_suspend_progress                                   0
        zfs_remove_max_segment                                  16777216
        zfs_resilver_disable_defer                                     0
        zfs_resilver_min_time_ms                                    3000
        zfs_scan_blkstats                                              0
        zfs_scan_checkpoint_intval                                  7200
        zfs_scan_fill_weight                                           3
        zfs_scan_ignore_errors                                         0
        zfs_scan_issue_strategy                                        0
        zfs_scan_legacy                                                0
        zfs_scan_max_ext_gap                                     2097152
        zfs_scan_mem_lim_fact                                         20
        zfs_scan_mem_lim_soft_fact                                    20
        zfs_scan_report_txgs                                           0
        zfs_scan_strict_mem_lim                                        0
        zfs_scan_suspend_progress                                      0
        zfs_scan_vdev_limit                                     16777216
        zfs_scrub_after_expand                                         1
        zfs_scrub_error_blocks_per_txg                              4096
        zfs_scrub_min_time_ms                                       1000
        zfs_send_corrupt_data                                          0
        zfs_send_no_prefetch_queue_ff                                 20
        zfs_send_no_prefetch_queue_length                        1048576
        zfs_send_queue_ff                                             20
        zfs_send_queue_length                                   16777216
        zfs_send_unmodified_spill_blocks                               1
        zfs_sha256_impl       cycle [fastest] generic x64 ssse3 avx avx2
        zfs_sha512_impl             cycle [fastest] generic x64 avx avx2
        zfs_slow_io_events_per_second                                 20
        zfs_snapshot_history_enabled                                   1
        zfs_spa_discard_memory_limit                            16777216
        zfs_special_class_metadata_reserve_pct                        25
        zfs_sync_pass_deferred_free                                    2
        zfs_sync_pass_dont_compress                                    8
        zfs_sync_pass_rewrite                                          2
        zfs_traverse_indirect_prefetch_limit                          32
        zfs_trim_extent_bytes_max                              134217728
        zfs_trim_extent_bytes_min                                  32768
        zfs_trim_metaslab_skip                                         0
        zfs_trim_queue_limit                                          10
        zfs_trim_txg_batch                                            32
        zfs_txg_history                                              100
        zfs_txg_timeout                                                5
        zfs_unflushed_log_block_max                               131072
        zfs_unflushed_log_block_min                                 1000
        zfs_unflushed_log_block_pct                                  400
        zfs_unflushed_log_txg_max                                   1000
        zfs_unflushed_max_mem_amt                             1073741824
        zfs_unflushed_max_mem_ppm                                   1000
        zfs_unlink_suspend_progress                                    0
        zfs_user_indirect_is_special                                   1
        zfs_vdev_aggregation_limit                               1048576
        zfs_vdev_aggregation_limit_non_rotating                   131072
        zfs_vdev_async_read_max_active                                 3
        zfs_vdev_async_read_min_active                                 1
        zfs_vdev_async_write_active_max_dirty_percent                 60
        zfs_vdev_async_write_active_min_dirty_percent                 30
        zfs_vdev_async_write_max_active                               10
        zfs_vdev_async_write_min_active                                2
        zfs_vdev_def_queue_depth                                      32
        zfs_vdev_default_ms_count                                    200
        zfs_vdev_default_ms_shift                                     29
        zfs_vdev_disk_classic                                          0
        zfs_vdev_disk_max_segs                                         0
        zfs_vdev_failfast_mask                                         1
        zfs_vdev_initializing_max_active                               1
        zfs_vdev_initializing_min_active                               1
        zfs_vdev_max_active                                         1000
        zfs_vdev_max_auto_ashift                                      14
        zfs_vdev_max_ms_shift                                         34
        zfs_vdev_min_auto_ashift                                       9
        zfs_vdev_min_ms_count                                         16
        zfs_vdev_mirror_non_rotating_inc                               0
        zfs_vdev_mirror_non_rotating_seek_inc                          1
        zfs_vdev_mirror_rotating_inc                                   0
        zfs_vdev_mirror_rotating_seek_inc                              5
        zfs_vdev_mirror_rotating_seek_offset                     1048576
        zfs_vdev_ms_count_limit                                   131072
        zfs_vdev_nia_credit                                            5
        zfs_vdev_nia_delay                                             5
        zfs_vdev_open_timeout_ms                                    1000
        zfs_vdev_queue_depth_pct                                    1000
        zfs_vdev_raidz_impl cycle [fastest] original scalar sse2 ssse3 avx2
        zfs_vdev_read_gap_limit                                    32768
        zfs_vdev_rebuild_max_active                                    3
        zfs_vdev_rebuild_min_active                                    1
        zfs_vdev_removal_max_active                                    2
        zfs_vdev_removal_min_active                                    1
        zfs_vdev_scheduler                                        unused
        zfs_vdev_scrub_max_active                                      3
        zfs_vdev_scrub_min_active                                      1
        zfs_vdev_sync_read_max_active                                 10
        zfs_vdev_sync_read_min_active                                 10
        zfs_vdev_sync_write_max_active                                10
        zfs_vdev_sync_write_min_active                                10
        zfs_vdev_trim_max_active                                       2
        zfs_vdev_trim_min_active                                       1
        zfs_vdev_write_gap_limit                                    4096
        zfs_vnops_read_chunk_size                                1048576
        zfs_wrlog_data_max                                    8589934592
        zfs_xattr_compat                                               0
        zfs_zevent_len_max                                           512
        zfs_zevent_retain_expire_secs                                900
        zfs_zevent_retain_max                                       2000
        zfs_zil_clean_taskq_maxalloc                             1048576
        zfs_zil_clean_taskq_minalloc                                1024
        zfs_zil_clean_taskq_nthr_pct                                 100
        zfs_zil_saxattr                                                1
        zil_maxblocksize                                          131072
        zil_maxcopied                                               7680
        zil_nocacheflush                                               0
        zil_replay_disable                                             0
        zil_slog_bulk                                           67108864
        zio_deadman_log_all                                            0
        zio_dva_throttle_enabled                                       1
        zio_requeue_io_start_cut_in_line                               1
        zio_slow_io_ms                                             30000
        zio_taskq_batch_pct                                           80
        zio_taskq_batch_tpq                                            0
        zio_taskq_read                         fixed,1,8 null scale null
        zio_taskq_write                             sync null scale null
        zio_taskq_write_tpq                                           16
        zstd_abort_size                                           131072
        zstd_earlyabort_pass                                           1
        zvol_blk_mq_blocks_per_thread                                  8
        zvol_blk_mq_queue_depth                                      128
        zvol_enforce_quotas                                            1
        zvol_inhibit_dev                                               0
        zvol_major                                                   230
        zvol_max_discard_blocks                                    16384
        zvol_num_taskqs                                                0
        zvol_open_timeout_ms                                        1000
        zvol_prefetch_bytes                                       131072
        zvol_request_sync                                              0
        zvol_threads                                                   0
        zvol_use_blk_mq                                                0
        zvol_volmode                                                   2

ZIL committed transactions:                                       470.6k
        Commit requests:                                           93.6k
        Flushes to stable storage:                                 93.6k
        Transactions to SLOG storage pool:            0 Bytes          0
        Transactions to non-SLOG storage pool:        1.6 GiB      98.8k

So currently 95% of your reads are already coming from RAM. An L2ARC might speed up the remaining 5% that come from your disks, but then again it might not.
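For reference, that ratio comes straight from two kernel counters, `hits` and `misses`, in `/proc/spl/kstat/zfs/arcstats`. A minimal sketch of the arithmetic, using placeholder counter values rather than numbers from a live system:

```shell
# ARC hit ratio = hits / (hits + misses).
# On a live system, you can pull the counters with something like:
#   awk '/^(hits|misses) / {print $1, $3}' /proc/spl/kstat/zfs/arcstats
# The values below are placeholders for illustration only.
hits=95000000
misses=5000000
awk -v h="$hits" -v m="$misses" \
    'BEGIN { printf "ARC hit ratio: %.1f%%\n", 100 * h / (h + m) }'
# prints "ARC hit ratio: 95.0%"
```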

How long has the system been running since its last restart?


It has only been two days since I restarted.
During this period I uploaded about 7TB of data.
Or maybe I need to observe for a while longer?

It’s reads that are cached in ARC. Keep the system running for another week or two and use it as you normally would. My guess is that your ARC hit ratio will remain at 95% or higher, in which case an L2ARC will be of little help, if any.

OK, I will observe for a while longer
thanks.

ARC stats are great on a production system whose workload is fairly constant. They are not as representative if your use case is not constant.

Case in point: I use rsync to back up my NAS. Before the metadata-only L2ARC, processing a terabyte of data took over an hour as rsync traversed every nook and cranny of the NAS file system. The L2ARC sped up my rsync by 12x.

So yes, on average, an L2ARC may do little for you per the stats, but I would test under the use-case conditions you care about rather than rely on arcstats alone. After all, an L2ARC can be attached and detached without any issues. It can even fail and your pool will be fine.
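Because a cache device holds no pool data, trying one is low risk. A sketch of the relevant commands, assuming a pool named `tank` and the Optane appearing as `nvme0n1` (both names are hypothetical, substitute your own):

```shell
# Attach the device as L2ARC (a cache vdev); non-destructive and reversible
zpool add tank cache nvme0n1

# Optionally cache only metadata, which is what helped the rsync case above
zfs set secondarycache=metadata tank

# Detach it again if it doesn't help; the pool is unaffected
zpool remove tank nvme0n1
```

Note that `secondarycache` is a per-dataset property, so it can also be set only on the datasets whose workload you want to accelerate.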

So my advice: if your system has more than 32GB of RAM, install it, let it get hot (usually three passes or so), and then see how the system behaves in your use case, ideally with a stopwatch so you can quantify the benefit.

A 95% ARC hit rate is a nice value, but as you said, it reflects only two days' worth of data, so I doubt an L2ARC would actually help much at all right now. Collect a good week or month of data, then see where the hit rate sits; you will have a much more accurate evaluation then. If it drops to 75%, an L2ARC may help a lot. Capacity is important as well.

What is your use case for needing faster data access? I’m trying to help justify it or just the opposite.

Other factors in your posting… What is the capacity of this new drive, and would it alone be good for your VMs? High speed is great, but if you put it where it will not be utilized, it means nothing. Also, if only this one drive supported your VMs, it would be a single-device stripe; you would do better with a mirror, so get two. :clown_face:

I’m sure you have read that many people add an L2ARC and end up with an overall reduction in data access speed. It is very minor but measurable. You would be better off doubling your RAM if possible.

As it stands, the L2ARC is a no-go from my perspective. If you have some large databases which are frequently accessed, then I’d say an L2ARC may help speed those things up. But again, you already have a 95% hit ratio, which is pretty darn good.

This is just my opinion and everyone has them. Gather more data and I’d recommend reading more about L2ARC and where it can shine the most.


I can certainly appreciate that stance for your system since it’s (almost entirely?) flash. My main criticism re: ARC stats, especially in a SOHO setting, is that the use case may not be running constantly enough for the ARC stats to capture it. That in turn can make the stats misleading.

Case in point: my NAS’s ARC stats were also in the 90s prior to adding the L2ARC, and they may have been entirely accurate, since I wasn’t rsyncing the system every day, constantly.

But it did make a big difference to me when my metadata-only L2ARC cut backups down from day-plus affairs to just a few hours. Similarly, ordinary directory traversal via the macOS GUI also ran faster.

So while I appreciate the benefit of ARC stats, I would not rely on them unless the system is in constant use. If your system is not constantly running the workload you care about, then specifically measure the impact of the L2ARC on the use case(s) that matter to you.

And if adding L2ARC does nothing or negatively impacts your workflow, simply remove the L2ARC.


You make a very compelling case to test and try; as you say, there is nothing to lose.

Out of interest, what are your ARC and L2ARC hit ratios since you added it?

Hah, I switched to an sVDEV, so my metadata is even faster than it was with the L2ARC: metadata no longer has to miss in the cache before it gets added to the L2ARC. I’ll have a look when I get home.
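For anyone curious, a special vdev is added much like a cache device, but with one crucial difference: it holds actual pool data (metadata, and optionally small blocks), so losing it loses the pool, and it must be redundant. A hedged sketch, with hypothetical pool and device names:

```shell
# Add a mirrored special vdev for metadata. Unlike an L2ARC cache device,
# this stores pool data, so it must match the pool's redundancy, and it
# generally cannot be removed from a pool with raidz top-level vdevs.
zpool add tank special mirror nvme0n1 nvme1n1

# Optionally also steer small blocks (per dataset) onto the special vdev
zfs set special_small_blocks=16K tank
```

This is why the try-it-and-see advice above applies to L2ARC but not to an sVDEV: the former is disposable, the latter is a permanent commitment.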


@Constantin I appreciate your experience and insight.

What? All flash… Well yes, but I also don’t use mine the way most people likely do. Mine is just a NAS. It is my backup of all my computers and data, should I ever need to recover it. The data that is only stored on the NAS is backed up to the cloud, and I periodically back up the entire NAS to a USB drive. Got to tell you, that cloud has got to be full by now.


I appreciate your insights more than you can imagine! I’m in the process of getting multi-report set up again and look forward to getting those HDD reports in the email soon!

Hello, I have the 280GB Intel 900P.
I’m considering upgrading my LAN switch to a 10G device in the future, so I’m thinking of using it as the L2ARC.
However, expanding the memory seems like a good option as well.
Regarding the VM, I’m hoping to run a Windows system and some Windows services, so 280GB should be more than enough.

More memory is always better than an L2ARC if you can add more. I purchased half of my capacity and I can easily add the other half if I needed to, but for a home system, 64GB is more than I need. I would like to brag and say I have 128GB but my wallet would scream at me.

What size is your regular backup?

On second thought, you can afford for your data to not be in ARC. You have an NVMe pool after all.

I can fit my entire NAS worth of data on a 4TB SSD. My used capacity is 3.31TB and available is 5.37TB. If my data grows fast, it would only be due to creating complete images of my computers. I had to purge a lot of old images from last year and earlier that I didn’t need.

My use case is a backup device primarily. ESXi likes the faster NVMe drives. The reason I have NVMe is right place, right time, right price. I think of myself as lucky. And I hope these outlast my spinners by a decade. We will see.

Regardless, I still don’t think an sVDEV would be wise for the OP’s system. It would work, and it might gain a few milliseconds, but is it worth the added risk?