Have you used BTRFS recently? And what's your opinion?

Oh - that’s easy to answer. GPL.

In this particular case one cannot even blame NIH syndrome. Linus has publicly stated that unless he gets a written statement from Larry Ellison (that part was a joke) or Oracle’s legal department (this one’s dead serious) that it will be ok to put ZFS into the Linux kernel now and forever, he will simply not do it.

The CDDL under which ZFS is published is a recognized and perfectly valid open source license. It’s the GPL that is the problem.

That’s why Illumos and FreeBSD have had ZFS as the default for years now while the standard Linux kernel will never incorporate it.

Some distribution builders like Canonical’s Mark Shuttleworth go the road of “so sue me, Larry” and include it in Ubuntu despite the legal situation.


Okay, maybe I don’t get that… what do you use UFS and EXT4 for?
For VM images, i.e. images on the hypervisor, and then you store VM snapshots to ZFS?

thanks!

I think we can still blame a severe case of NIH syndrome on top of usual GPL integrism.
Not to forget philosophical issues with ZFS being both a file system and a volume manager, in plain contradiction of a Linus/Linux dogma that file system and volume manager shall be two separate layers.

It’s implied that it’s used by the virtualized OS.

ZFS provides a zvol, which the VM sees as a formattable “block device” and formats with UFS, XFS, Ext4, etc.
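As a rough sketch of what that looks like on the host (the pool and dataset names here are made up for illustration):

```shell
# Create a sparse 32 GiB zvol on an existing pool named "tank".
zfs create -s -V 32G tank/vms/guest1-disk0

# The zvol appears as a block device the hypervisor can attach to a VM:
ls -l /dev/zvol/tank/vms/guest1-disk0

# Inside the guest, that device is just an ordinary disk; the guest OS
# partitions and formats it with ext4, XFS, UFS, ... as it pleases.
```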


@Stux :point_up:

So it’s not possible to format that “block device” as ZFS again?

thx

It is, but as I wrote above:


@pmh meaning that once one is running, let’s say, ESXi, it’s not a good idea to have ZFS inside the VMs; instead use ext4 etc., and snapshotting moves one level up, from ZFS to the hypervisor, correct?

Thanks!

sorry for dumb questions.

Yes, the VM disk image files are stored on ZFS, so you get redundancy, checksums, snapshots, incremental replication, … on that level. The guest OS is completely oblivious to the situation.

For example @work we run two NVMe based hypervisor systems doing hourly snapshots of all VMs and replicating them to the respective other system.
In case of a machine failure I can boot up the VMs on the other one. Not a real live HA cluster but good enough for us.
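A minimal sketch of that hourly snapshot-and-replicate scheme (pool, dataset, snapshot and host names are all made up for illustration):

```shell
# On hypervisor A: take a recursive snapshot of the dataset holding the VM images.
zfs snapshot -r tank/vms@hourly-2

# Initial replication: send a full stream of an earlier snapshot to the peer.
zfs send -R tank/vms@hourly-1 | ssh hypervisor-b zfs receive -F tank/vms

# Every hour after that, only the delta between the last two snapshots travels:
zfs send -R -i tank/vms@hourly-1 tank/vms@hourly-2 \
  | ssh hypervisor-b zfs receive -F tank/vms
```

After a failure of hypervisor A, the VMs can be started from the replicated dataset on hypervisor B, losing at most the interval since the last received snapshot.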

Storing all VMs on a non-ZFS NVMe drive - because of the performance?

thanks

I don’t understand. The NVMe drives are part of a zpool. Our hypervisors use ZFS for storage. Of course. You can pick from at least four alternatives that support local ZFS:

  • TrueNAS CORE (deprecated)
  • TrueNAS SCALE
  • Proxmox
  • XCP-ng

ESXi is dead. Thanks, Broadcom.


@pmh
Some time ago we had that discussion… so I set up ESXi using NVMe (not sure what FS ESXi is using), so all VMs are on that drive.

In case ESXi is dead, what’s the alternative - Proxmox?
I think I went through discussions saying there is no better hypervisor than ESXi… even virtualized TrueNAS is recommended only for ESXi.

Proxmox? If so, how do you export a ZFS pool from TrueNAS to it?

  1. I use TrueNAS as my hypervisor.
  2. Proxmox supports native ZFS - no TrueNAS needed if you decide to use Proxmox.
  3. ESXi uses VMFS which is proprietary and cannot do any redundancy on its own, so it has to rely on “hardware RAID” for local storage. Which is bad.
  4. You can of course build a nice large SAN with TrueNAS and iSCSI and use that as storage for ESXi or Proxmox or HyperV or whatever …

what do u use to run VMs (linux, win, bsd…) ? dont get it.

TrueNAS CORE includes bhyve. SCALE includes KVM.

Do you (“you”, not “u”) know what a hypervisor is?


TrueNAS - both editions - contains a complete hypervisor and has for years. You can run VMs on TrueNAS.

For me that’s the entire point of TrueNAS. It’s a hyperconverged solution combining storage, virtualisation and containers - jails on FreeBSD and docker on Linux.


With ZFS of course :wink:

Assuming the virtual disks are hosted on a ZFS file system (usually as a Zvol) you snapshot and replicate from outside, like pmh said.


I demonstrate using snapshots with VMs in this video:

TrueNAS is sometimes used as a data store for ESXi, and in this case it supports ESXi-triggered snapshots too. (VAAI, iirc)


Back to BTRFS.

For practical purposes, BTRFS is a single-OS file system, while ZFS exists on various *nix-type OSes (FreeBSD, Solaris, Linux, macOS) and EVEN MS-Windows. I’m not saying that OpenZFS on macOS or MS-Windows is as well supported as on FreeBSD or Linux, but it exists.

Next, a very recent OpenZFS will run on a dual-core Celeron with 2 GB of memory (my old laptop from 2014…). Do I get decent ARC usage? Of course not, but that was not the point. I want the checksums, scrubs and boot environments that OpenZFS gives me.
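For illustration, the checksum-verification side of that boils down to two commands (pool name assumed):

```shell
# Start a scrub: ZFS re-reads every allocated block and verifies its checksum,
# repairing from redundancy (mirror/RAID-Z/copies) where possible.
zpool scrub tank

# Show scrub progress and any checksum errors found or repaired.
zpool status tank
```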

However, very few Linux distros offer boot-to-ZFS as an option. That is one reason why I stayed with the Gentoo Linux distro. Not great for doing updates, but ZFS on root works well.

So if someone wanted easy Linux with a single disk, but with redundancy, then using BTRFS with its “copies=2” feature might seem like a good idea. However, there are drawbacks:

  1. BTRFS does not checksum everything
  2. BTRFS is NOT 100% COW (Copy On Write), so during a crash you can LOSE DATA!
  3. To this very day, RAID-5/6 is still not considered reliable.
  4. Certain features (compression algorithm, checksum algorithm, encryption algorithm, etc.) are not as well handled as in OpenZFS. If I understand correctly, you select one algorithm and anything written after that uses it. OpenZFS allows different datasets in the same pool to use different algorithms.
  5. It will be 5 years before some of the newer features of BTRFS get reasonable field testing.
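To illustrate the per-dataset flexibility mentioned in point 4, OpenZFS sets compression and checksum algorithms as dataset properties, so datasets in the same pool can differ (pool and dataset names are made up):

```shell
# Two datasets in one pool with different compression algorithms:
zfs create -o compression=lz4  tank/fast
zfs create -o compression=zstd tank/archive

# The checksum algorithm can likewise be set per dataset:
zfs set checksum=sha256 tank/archive

# Inspect the resulting properties:
zfs get compression,checksum tank/fast tank/archive
```

Changing a property later only affects newly written blocks, but because the setting is per dataset you can mix policies freely within one pool.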

All in all, too much time has passed since the inception of BTRFS. It should have been MUCH farther along than it is today. Now with several alternatives, BTRFS may end up dead.
