About a year ago I switched to ZFS for Proxmox so that I wouldn’t be running a technology preview.

Btrfs gave me no issues for years, and I even replaced a dying disk without trouble. I use RAID 1 on my Proxmox machines. Anyway, I moved to ZFS and it has been a less than ideal experience. The separate kernel modules mean that I can’t downgrade the kernel, and the performance on my hardware is abysmal: I get only around 50–100 MB/s versus the several hundred I would get with btrfs.

Any reason I shouldn’t go back to btrfs? There seems to be a community fear of btrfs eating data or throwing unexplainable errors. That is sad to hear, as btrfs has had lots of time to mature over the last 8 years. I would never have considered it 5–6 years ago, but now it seems like a solid choice.

Anyone else pondering the switch, or already using btrfs?

  • Domi@lemmy.secnd.me · 1 month ago

    btrfs has been the default file system for Fedora Workstation since Fedora 33, so there’s not much reason not to use it.

  • vividspecter@lemm.ee · 1 month ago

    No reason not to. Old reputations die hard, but it’s been many many years since I’ve had an issue.

    I also like that btrfs is a lot more flexible than ZFS, which is pretty strict about the size and number of disks; with btrfs you can upgrade an array ad hoc.

    I’ll add that you should avoid RAID5/6, as that is still not considered safe, but you mentioned RAID1, which has no such issues.

      • sntx@lemm.ee · 1 month ago

        It’s affected by the write-hole phenomenon. In btrfs’s case that can mean perfectly good old data gets silently corrupted, without any notice.

      • vividspecter@lemm.ee · 1 month ago

        Check the status here. It looks like it may be a little better than in the past, but I’m not sure I’d trust it.

        An alternative approach I use is mergerfs + snapraid + snapraid-btrfs. This isn’t the best idea for a system drive, but for something like a NAS it works well, and snapraid-btrfs doesn’t have the write-hole issues that normal snapraid does, since it operates on read-only snapshots instead of the live data.
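
        In case it helps, a rough sketch of that setup (paths and disk names here are examples, not a drop-in config):

            # /etc/snapraid.conf (fragment): one parity disk, two data disks
            parity /mnt/parity1/snapraid.parity
            content /var/snapraid/snapraid.content
            data d1 /mnt/disk1/
            data d2 /mnt/disk2/

            # /etc/fstab: pool the data disks into a single mount with mergerfs
            /mnt/disk* /mnt/storage fuse.mergerfs defaults,allow_other,category.create=mfs 0 0

        snapraid-btrfs then runs the usual snapraid sync/scrub against read-only btrfs snapshots of the data disks rather than the live files.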

  • ikidd@lemmy.world · 1 month ago

    The btrfs RAID subsystem hasn’t been fixed, is still buggy, and does weird shit on scrubs. But fill your boots, it’s your data.

  • Moonrise2473@feddit.it · 1 month ago

    One day I had a power outage and couldn’t mount the btrfs system disk anymore. I could mount it from another Linux install, but I couldn’t boot from it anymore. I was very pissed; I lost a whole day of work.

  • exu@feditown.com · 1 month ago

    Did you set the correct block size for your disk? Modern SSDs in particular like to pretend they have 512 B sectors for compatibility reasons, while the hardware can only do 4K sectors. Make sure to set ashift=12.

    Proxmox also uses a very small volblocksize by default (8k, or 16k on newer versions). This mostly applies to RAIDZ, but try using a higher value like 64k.

    https://discourse.practicalzfs.com/t/psa-raidz2-proxmox-efficiency-performance/1694
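
    As a minimal sketch of both fixes (the pool name and device paths are placeholders):

        # Check what sector sizes the drive reports
        lsblk -o NAME,PHY-SEC,LOG-SEC

        # Create the pool with 4K sectors (ashift=12) regardless of what the drive claims
        zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb

        # /etc/pve/storage.cfg: larger zvol block size; only affects newly created VM disks
        zfspool: local-zfs
                pool tank
                blocksize 64k
                content rootdir,images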

    • randombullet@programming.dev · 1 month ago

      I’m thinking of bumping mine up to 128k since I do mostly photography and videography, but I’ve heard that 1M can increase write speeds while decreasing read speeds?

      I’ll have a RAIDZ1 and a RAIDZ2 pool for hot storage and warm storage.
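
      If it helps, here’s roughly what that tuning might look like (the dataset name is made up; note that recordsize applies to datasets while volblocksize applies to zvols, and either one only affects newly written data):

          # Large records for big sequential media files
          zfs create -o recordsize=1M tank/media

          # Or change an existing dataset; only new writes use the new size
          zfs set recordsize=1M tank/media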

  • poVoq@slrpnk.net · 1 month ago

    I have been using btrfs in RAID1 for a few years now with no major issues.

    It’s a bit annoying that a system with a degraded RAID doesn’t boot up without manual intervention, though.

    Also, not sure why, but I recently broke a system installation on btrfs by taking the drive out and accessing it (and writing to it) from another PC via a USB adapter. But I guess that’s not a common scenario.
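
    For anyone hitting the degraded-boot issue, the manual intervention is usually something like this (the device path is an example):

        # From the emergency/initramfs shell, mount the surviving member explicitly
        mount -o degraded /dev/sda2 /mnt

        # Or boot automatically in degraded mode (risky: it's easy to miss the failure)
        # by adding rootflags=degraded to the kernel command line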

  • Brownian Motion@lemmy.world · 1 month ago

    My setup is different from yours, but not totally different. I run ESXi 8, and I started to use BTRFS on some of my VMs.

    I had a power failure that lasted longer than the UPS could handle. Most of the systems shut down safely; a few VMs did not. All of the EXT4 VMs were easily recovered (as was another one running XFS). Two of the BTRFS systems crashed into a non-recoverable state.

    Nothing I could do would fix them; they were just toast. I had no choice but to recover from backups. This made me highly aware that BTRFS is still not a reliable FS.

    I am migrating everything from BTRFS to something more stable and reliable like EXT4. It’s simply not worth the headache.

      • Brownian Motion@lemmy.world · 1 month ago

        It was only a few weeks ago (maybe 4). The systems are all kept up to date with Ansible. Most are Debian, but there are a few Ubuntu. The two that failed were both Debian.

        Granted, both that failed had high [virtual] disk usage compared to the other VMs. I can’t remember the exact failure now, but lots of searching confirmed it was likely unrecoverable (they could boot, but only read-only). None of the “dangerous” btrfs check commands could recover it; they spat out tons of errors about mismatched somethings (again, I’ve forgotten the error).

    • Possibly linux@lemmy.zip (OP) · 1 month ago

      Btrfs RAID 5 and RAID 6 are unstable and dangerous.

      Bcachefs is cool, but it is way too new and isn’t even part of the kernel as of yet.

  • SendMePhotos@lemmy.world · 1 month ago

    I run it now because I wanted to try it. I haven’t had any issues. A friend recommended it as a stable option.

  • SRo@lemmy.dbzer0.com · 1 month ago

    One time I had a power outage and one of the btrfs HDDs (not in a RAID) couldn’t be read anymore after reboot. Even with help from the (official) btrfs mailing list, it was impossible to repair the file system. After a lot of low-level tinkering I was able to retrieve the files, but the file system itself was absolutely broken; no repair process was possible. I have since switched to ZFS; the emergency options are much more capable.
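
    For what it’s worth, the usual last-resort tool for pulling files off a btrfs volume that won’t mount is btrfs restore (device and target paths here are examples):

        # Copy whatever is still readable to another disk, without mounting the volume
        btrfs restore -v /dev/sdb1 /mnt/recovery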