Building a vdev for 500 IP Cameras

I am trying to record about 500 IP cameras continuously.
5 MP to 4K resolution
About 6 Mbps at 20-30 fps
30 days' worth of storage

I have about 40 26TB 7200 RPM WD drives. I know in the past I've always had IOPS issues. Trying to figure out the best drive layout using RAIDZ2.

Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
512GB RAM
25GbE NIC x 2

Also, will increasing the RAM help?


I’m not an expert, but IMO with this many cameras, your workload can be considered random writes. So you need 500 write IOPS.

Overall pool IOPS is the sum of all vdevs' IOPS. A RAIDZ vdev's write IOPS is roughly that of a single drive. A single HDD does about 100 IOPS, maybe a bit more with a larger queue depth (your case).

So you need at least 4 or 5 vdevs. Better to test whether 4 is enough.

Also, my calculations do not take scrubs into account. Perhaps even 5 vdevs would not be enough.
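In back-of-the-envelope terms (a sketch, assuming ~100 write IOPS per disk and one write stream per camera, as above):

```
# each RAIDZ vdev writes at roughly single-disk speed, ~100 IOPS,
# so divide the 500 camera streams by per-vdev IOPS
echo $(( 500 / 100 ))   # -> 5 vdevs as a bare minimum, before scrub overhead
```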


With this many drives it could be beneficial to use dRAID, but I’m not familiar with it.
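For anyone curious, a dRAID layout is declared at pool creation time. A hypothetical sketch with 42 disks (double parity, 8 data disks per redundancy group, 2 distributed spares; the pool name and device names are placeholders):

```
# draid2:8d:42c:2s = parity 2, 8 data disks per group, 42 children, 2 spares;
# (42 - 2) children divide evenly into groups of 8 data + 2 parity
zpool create tank draid2:8d:42c:2s /dev/da{0..41}
```

The claimed benefit is much faster rebuilds (all disks participate), which matters with drives this large.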


Also, ZFS is not the most performant filesystem. Do you really need it for IP camera footage?

Probably not. I don’t think this workload needs much RAM. Perhaps even 16GB would be enough (but better to test it).

On second thought: 6 (Mbps) x 60 (sec) x 60 (min) x 24 (h) x 30 (d) = 15 552 000 Mbit → ÷8 = 1 944 000 MB ≈ 1.85TB per camera.

Maybe you should just deploy a single mirror of 2-3 4TB NVMe drives and call it a day? That is for sure cheaper than 40x 26TB drives.

1.85TB x 500 = 925TB. So my thought was wrong.
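Re-checking the arithmetic in shell (same assumptions as above: 6 Mbps per camera, 30 days, 500 cameras):

```
# per camera per month: 6 Mbit/s * 86400 s/day * 30 days, / 8 bits per byte
echo $(( 6 * 86400 * 30 / 8 ))                 # 1944000 MB, roughly 1.9 TB per camera
# across all 500 cameras, in TB
echo $(( 6 * 86400 * 30 / 8 * 500 / 1000000 )) # ~972 TB, same ballpark as 925TB above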


EDIT: Fixed calculations.

I am going to need about 600TB

For IP cameras?

My bad. I forgot to multiply by 500.

So it is 925TB. With 5 RAIDZ2 vdevs you wouldn’t have enough space, and a scrub would give this pool a hard time anyway.

Perhaps someone more proficient will help you.

I can add more 26TB drives. I am just trying to figure out the best layout. So RAM won't help IOPS for recording cameras?

You are going to be writing on a continuous basis, so the IOPS / throughput of the pool must match or exceed the rate that the total cameras require. Additional RAM would increase the amount that ZFS could hold in memory before needing to write, but that only helps if the load is bursty (which most loads [not yours] are).
The good news is that each write is sequential; the bad news is that there are 500 different streams on a continuous, non-stop basis hammering those drives.
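For scale, here is the aggregate ingest rate, assuming all 500 cameras stream at the full 6 Mbps simultaneously:

```
# total ingest in Mbit/s and MB/s
echo $(( 500 * 6 ))       # 3000 Mbit/s of continuous writes
echo $(( 500 * 6 / 8 ))   # 375 MB/s the pool must sustain, non-stop
```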

I would suggest contacting iX for some advice, as you are at the bleeding edge of performance requirements here. But my personal thoughts are to use a number of smaller servers with the cameras split between them. Maybe 5 servers, each with a twin-vdev pool and 100 cameras on each.

[Please note that at this point my advice is worth exactly what you paid for it.]


Minding that you don’t want to fill a pool near 100%, and that your workload will likely result in high fragmentation…
Say five 8-wide vdevs: 5 × 6 × 26 TB = 780 TB. That would just fit, without any guarantee that five vdevs are enough for IOPS.
So you will certainly need more drives.
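To put numbers on that (a sketch; the 80% fill ceiling is a common rule of thumb, not a hard limit):

```
# five 8-wide RAIDZ2 vdevs: 5 vdevs * (8 - 2) data disks * 26 TB each
echo $(( 5 * (8 - 2) * 26 ))            # 780 TB of raw data capacity
echo $(( 5 * (8 - 2) * 26 * 80 / 100 )) # ~624 TB usable at a safer 80% fill
```

624 TB usable against a ~925 TB requirement is why more drives are needed.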

How is the data being shared? Is it NFS, and is a SLOG going to be helpful? Would a couple of special vdevs help manage metadata writes to ease the IOPS pressure? How do you manage bandwidth issues to the drives, i.e. HBAs and expanders?

Whilst this is an interesting technical question to consider, it needs professional input from the iX team.

If it's NFS: It's cameras, so setting the sync write setting to disabled would be a good idea. As far as I am aware that setting will override any protocol setting (happy to be corrected). You definitely do not want to be needing a SLOG, as that will result in a slower pool.
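A minimal sketch of that setting, assuming a dataset named tank/cameras (hypothetical name):

```
# disable sync writes for the camera dataset; ZFS will batch writes into
# transaction groups instead of honoring each fsync from the client
zfs set sync=disabled tank/cameras
zfs get sync tank/cameras   # verify the property took effect
```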


It’s going to a Windows program, so SMB. I believe they can take NFS, but I would need to install NFS drivers on Windows.

Are you going to run BlueIris in a VM, or in a separate box, just using TrueNAS as a storage solution?
SMB is async by default.

I am guessing - you mentioned Windows. And I only mentioned BlueIris as it's (mostly) a Windows-only package.

No, not Blue Iris; we're using Genetec. It is Windows in a VM. TrueNAS will just be for storage. The software can connect directly to an SMB share to record. They have their storage calculators, and it says RAID 6 with 24 drives. Not sure how they are split up or what would be equivalent in TrueNAS ZFS.

If the cameras are only recording movement, that MIGHT make it practical (from what I can tell), depending on what the location is and how busy it gets.

A single RAID6 would have a single set of IOPS - I cannot believe it's possible to record 500 cameras on a single R6 array, whatever compression tricks are used (H.265/H.264/VC-1 etc.).


With that many separate streams to deal with, the traffic congestion may be a starting problem, despite enough total bandwidth. This is a lot of packets to write to files.

But why create a single point of failure for all 500 cameras? Fundamental security rules say you should split recordings across separate servers, ideally in 2 separate locations; and I don’t mean redundancy.

What I’d expect from such a system is that idle cameras (not recording action) would stream 6 fps tops - recording a scene where nothing happens at 20 fps is a waste of storage. I’ve heard about systems capable of taking 12 fps and live-compressing long fragments as differential frames, with very long intervals between key frames, to save space in CCTV recordings. But that requires larger amounts of RAM to play with.


So I ran a fio test on 1 of my drives to see the speed. Which do I look at to determine IOPS for an IP camera workload?

Jobs: 16 (f=16): [m(16)][100.0%][r=530MiB/s,w=537MiB/s][r=4237,w=4292 IOPS][eta 00m:00s]
randrw: (groupid=0, jobs=16): err= 0: pid=37334: Wed Sep 17 07:47:34 2025
read: IOPS=4784, BW=599MiB/s (628MB/s)(35.1GiB/60124msec)
bw ( KiB/s): min=209026, max=1065251, per=100.00%, avg=613513.76, stdev=9763.15, samples=1920
iops : min= 1632, max= 8321, avg=4791.47, stdev=76.26, samples=1920
write: IOPS=4774, BW=597MiB/s (626MB/s)(35.1GiB/60124msec); 0 zone resets
bw ( KiB/s): min=227468, max=1002759, per=100.00%, avg=612292.43, stdev=8878.18, samples=1920
iops : min= 1776, max= 7833, avg=4781.92, stdev=69.35, samples=1920
cpu : usr=0.56%, sys=0.18%, ctx=136776, majf=0, minf=5334
IO depths : 1=0.0%, 2=0.0%, 4=0.1%, 8=11.5%, 16=63.4%, 32=25.1%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=96.5%, 8=0.7%, 16=1.0%, 32=1.9%, 64=0.0%, >=64=0.0%
issued rwts: total=287652,287068,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
READ: bw=599MiB/s (628MB/s), 599MiB/s-599MiB/s (628MB/s-628MB/s), io=35.1GiB (37.7GB), run=60124-60124msec
WRITE: bw=597MiB/s (626MB/s), 597MiB/s-597MiB/s (626MB/s-626MB/s), io=35.1GiB (37.7GB), run=60124-60124msec

If it was a single camera? I’d look at sequential speeds. For like 500 cameras? Assume whichever number is slowest, because I’d argue we’re now in random territory.

*edited out typos as this was posted on phone.
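If you want the benchmark to look more like the real workload, something along these lines might be closer (a sketch; the path, file size, and job count are placeholders to be scaled toward the real hardware):

```
# approximate many slow sequential writers: 50 concurrent streams of
# large sequential writes; scale --numjobs toward 500 as hardware allows
fio --name=cam-sim --directory=/mnt/tank/fio-test --rw=write --bs=1M \
    --size=2G --numjobs=50 --ioengine=libaio --iodepth=1 \
    --time_based --runtime=120 --group_reporting
```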

I may be joining this conversation without as much knowledge as the others above, but I’ll offer my contribution based on a project I’m putting together for a client who needed a 200TB data pool for data archiving.
Until recently, I’d never built anything larger than a TrueNAS with 8 disks in RAIDZ2 and a single zvol, but I ended up studying a lot for this project.
In your case, I believe the best option would be to purchase two separate enclosures of 40 or more disks, reduce the disk size to something like 10-16TB, and create 6 different RAIDZ2 vdevs with 10 disks each.
This would give you a lot of IOPS due to the large number of vdevs in the pool. However, the costs would be quite prohibitive; for my much smaller system, it’s already quite expensive. =/
