You might not even like rsync. Yeah, it’s old. Yeah, it’s slow. But if you’re working with Linux, you’re going to need to know it.

In this video I walk through my favorite everyday flags for rsync.

Support the channel:
https://patreon.com/VeronicaExplains
https://ko-fi.com/VeronicaExplains
https://thestopbits.bandcamp.com/

Here’s a companion blog post, where I cover a bit more detail: https://vkc.sh/everyday-rsync

Also, @BreadOnPenguins made an awesome rsync video and you should check it out: https://www.youtube.com/watch?v=eifQI5uD6VQ

Lastly, I left out all of the ssh setup stuff because I made a video about that and the blog post goes into a smidge more detail. If you want to see a video covering the basics of using SSH, I made one a few years ago and it’s still pretty good: https://www.youtube.com/watch?v=3FKsdbjzBcc

Chapters:
1:18 Invoking rsync
4:05 The --delete flag for rsync
5:30 Compression flag: -z
6:02 Using tmux and rsync together
6:30 but Veronica… why not use (insert shiny object here)

  • clif@lemmy.world · 12 hours ago

    I’ll never not upvote Veronica Explains. Excellent creator and excellent info on everything I’ve seen.

  • Appoxo@lemmy.dbzer0.com · 13 hours ago

    Veeam for image/block-based backups of Windows, Linux, and VMs.
    Syncthing for syncing smaller files across devices.

    Thank you very much.

  • atk007@lemmy.world · 15 hours ago

    Rsnapshot. It uses rsync, but provides snapshot management and multiple backup versioning.

    • BonkTheAnnoyed@lemmy.blahaj.zone · 14 hours ago

      Yah, I really like this approach. Same reason I set up Timeshift and Mint Backup on all the user machines in my house. For others rsync + cron is aces.
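
As a sketch, the rsync + cron combination mentioned above can be a single crontab line (paths here are hypothetical):

```
# Added via `crontab -e`: mirror /home to a mounted backup drive nightly
# at 02:30, appending output to a log file.
30 2 * * * rsync -a --delete /home/ /mnt/backup/home/ >> /var/log/rsync-backup.log 2>&1
```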

    • Tja@programming.dev · 14 hours ago

      Yes, but a few hours writing my own scripts will save me from several minutes of reading its documentation…

      • atk007@lemmy.world · 13 hours ago

        It took me like 10 minutes to set up rsnapshot (installing it and writing systemd unit/timer files) on my servers.
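
For anyone curious, a minimal version of that unit/timer pair might look like this (file names and paths are a sketch, not the commenter's actual config):

```
# /etc/systemd/system/rsnapshot-daily.service
[Unit]
Description=rsnapshot daily backup

[Service]
Type=oneshot
ExecStart=/usr/bin/rsnapshot daily

# /etc/systemd/system/rsnapshot-daily.timer
[Unit]
Description=Run rsnapshot daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with: systemctl enable --now rsnapshot-daily.timer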

    • Encrypt-Keeper@lemmy.world · 12 hours ago

      I’m not super familiar with Syncthing, but judging by the name I’d say Syncthing is not at all meant for backups.

    • conartistpanda@lemmy.world · 13 hours ago

      Syncthing is technically for synchronizing data across devices in real time (which I do with my phone), but I also use it to transfer data weekly over wi-fi to my old 2013 laptop with a 500GB HDD and Linux Mint. I only boot that laptop to transfer data, and even then I pause transfers to it once they finish, so I can keep larger backups that wouldn’t fit on my phone. LocalSend is unreliable for large amounts of data, while Syncthing can resume the transfer if anything goes wrong. On top of that, Syncthing also works on Windows and Android out of the box.

  • vext01@lemmy.sdf.org · 21 hours ago

    I used to use rsnapshot, which is a thin wrapper around rsync to make it incremental, but moved to restic and never looked back. Much easier and encrypted by default.

  • Mio@feddit.nu · 22 hours ago (edited)

    I think there are better alternatives for backup, like kopia and restic. Even Seafile. I want protection against ransomware, plus storage compression, encryption, versioning, sync on write, and block deduplication.

    • Toribor@corndog.social · 14 hours ago

      This exactly. I’d use rsync to sync a directory to a location to then be backed up by kopia, but I wouldn’t use rsync exclusively for backups.

  • Xylight@lemdro.id · 22 hours ago

    rsync for backups? I guess it depends on what kind of backup.

    For redundant backups of my data and configs that I still have a live copy of, I use restic; it compresses extremely well.

    I have used rsync to permanently move something to another drive, though.

    • HereIAm@lemmy.world · 14 hours ago

      Compared to something multi-threaded, yes. But there are obviously a number of bottlenecks that might diminish the gains of a multi-threaded program.

    • okamiueru@lemmy.world · 17 hours ago

      That part threw me off. Last time I used it, I did incremental backups of a 500-gig disk once a week or so, and it took 20 seconds max.

  • ryper@lemmy.ca · 1 day ago (edited)

    I was planning to use rsync to ship several TB of stuff from my old NAS to my new one soon. Since we’re already talking about rsync, I guess I may as well ask if this is the right way to go?

    • SayCyberOnceMore@feddit.uk · 19 hours ago

      It depends

      rsync is fine, but to clarify a little further…

      If you think you’ll stop the transfer and want it to resume (and some data might have changed), then yep, rsync is best.

      But if you’re just doing a one-off bulk transfer in a single run, then you could use other tools like scp (or xcopy), or, if you’ve mounted the remote NAS at a local mount point, just plain old cp.

      The reason is that rsync has to work out what’s at the other end for each file, so it does some back-and-forth communication per file, which (as someone else pointed out) can load the CPU and reduce throughput.

      (From memory, I think Raspberry Pis don’t handle large transfers over scp well; I seem to recall a buffer gets saturated and the throughput drops off after a minute or so.)

      Also, on a local network there’s probably no point in using the encryption or compression options, especially for photos, videos, and music; you’re just loading the CPU again for it to work out that the data can’t be compressed any further.

      • ryper@lemmy.ca · 14 hours ago

        It’s just a one-off transfer, I’m not planning to stop the transfer, and it’s my media library, so nothing should change, but I figured something resumable is a good idea for a transfer that’s going to take 12+ hours, in case there’s an unplanned stop.

    • GreenKnight23@lemmy.world · 1 day ago

      yes, it’s the right way to go.

      rsync over ssh is the best, and works as long as rsync is installed on both systems.

      • qjkxbmwvz@startrek.website · 24 hours ago

        On low-end CPUs you can max out the CPU before maxing out the network. If you want to get fancy, you can run rsync over an unencrypted remote shell like rsh, but I would only do this if the computers were directly connected to each other by a single Ethernet cable.

    • Suburbanl3g3nd@lemmings.world · 1 day ago (edited)

      I couldn’t tell you if it’s the right way, but I used it on my Raspberry Pi 4 to sync 4TB of stuff from my Plex drive to a backup, with a script set up to check/mirror daily. The initial copy took a day and a half, and now it syncs in minutes at most when there’s new data.

      • sugar_in_your_tea@sh.itjust.works · 14 hours ago

        That would only matter if it’s lots of small files, right? And after the initial sync, you’d have very few files, no?

        Rsync is designed for incremental syncs, which is exactly what you want in a backup solution. If your multithreaded alternative doesn’t do a diff, rsync will win on larger data sets that don’t have rapid changes.