Hello selfhosted! Sometimes I have to transfer big files or large amounts of small files in my homelab. I’ve used rsync, but specifying the IP address, the folders, and everything else is a bit fiddly. I thought about writing a bash script, but before I do that I wanted to ask you about your favourite way to achieve this. Maybe I’m missing out on an awesome tool I wasn’t even thinking about.

  • node815@lemmy.world · 18 days ago

    I work from home, but even though my two systems (home and work) are on the same LAN, they don’t see each other for file sharing. I get paid via direct deposit like everyone else, which means my pay stubs are all electronic. I print those out and then use WinSCP to copy them over to my desktop. No other files are ever sent.

    At home, depending on the number of files, I either use SFTP via FileZilla or, for a single file if the mood strikes me, just scp if I’m already on the CLI, which seems to be most of the time these days when I’m working on my personal servers. I’ve found that SFTP transfers faster than copying and pasting to the NFS share on the same drive.

  • motsu@lemmy.world · 18 days ago

    SMB share if it’s desktop to desktop. If it’s from phone to PC, I throw it on Nextcloud on the phone, then grab it from the web UI on the PC.

    SMB is the way to go if you have identity set up, since your PC auth will carry over for the connection to the SMB share. Nextcloud is less typing if not, since you can just have persistent auth in the app/web UI.

    • theorangeninja@sopuli.xyz (OP) · 19 days ago

      Sounds very straightforward. Do you have a Samba docker container running on your server, or how do you do that?

      • drkt@lemmy.dbzer0.com · 19 days ago (edited)

        I just type sftp://[ip, domain, or SSH alias] into my file manager and browse it like a regular folder.
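
        (For illustration: GNOME and KDE file managers handle these URIs through GVFS/KIO, and an alias from ~/.ssh/config works in place of the hostname. A sketch from the terminal side, with a hypothetical alias “homeserver”:)

            # "homeserver" is a hypothetical alias defined in ~/.ssh/config
            gio mount sftp://homeserver/        # GNOME's GVFS; KDE's KIO accepts the same URI
            ls /run/user/$UID/gvfs/             # the mount is also reachable here for CLI tools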

          • drkt@lemmy.dbzer0.com · 17 days ago (edited)

            Linux is truly extensible, and that’s the part of it I both love and struggle to explain the most.
            I can sit at my desktop developing code that physically resides on my server, and interact with it from my laptop. This doesn’t require any strange, janky setup; it’s just SSH. It’s extensible.
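
            (For illustration, a minimal sketch of that workflow using sshfs; the “devbox” alias and paths are hypothetical:)

                # mount a project that lives on the server onto the local machine
                sshfs devbox:/home/me/project ~/project
                # edit locally with any editor; changes land directly on the server
                fusermount -u ~/project    # unmount when finished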

            • blackbrook@mander.xyz · 17 days ago (edited)

              I love this so much. When I first switched to Linux, being able to just list a bunch of server aliases along with their private key references in my .ssh/config made my life SO much easier than the redundantly maintained, hard-to-manage PuTTY and WinSCP configurations on Windows.
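
              (For illustration, a minimal ~/.ssh/config along those lines; the host names, addresses, and key paths are hypothetical:)

                  Host nas
                      HostName 192.168.1.10
                      User me
                      IdentityFile ~/.ssh/id_ed25519

                  Host seedbox
                      HostName seedbox.example.com
                      IdentityFile ~/.ssh/seedbox_key

                  # after this, ssh/scp/rsync/sftp all accept "nas" and "seedbox"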

      • Kit@lemmy.blahaj.zone · 18 days ago

        I have two servers, one Mac and one Windows. For the Mac I just map directly to the SMB share; for the Windows machine it’s a standard network share. My desktop runs Linux and connects to both with ease.

      • Lv_InSaNe_vL@lemmy.world · 18 days ago

        I don’t have a docker container; I just have Samba running on the server itself.

        I do have an ownCloud container running, which is mapped to a directory, and I have that shared out through Samba so I can access it through my file manager. But that’s unnecessary, because ownCloud is kind of trash.
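
        (For illustration, a bare-bones Samba share definition; the share name, path, and user are hypothetical, and the user still needs to be added with smbpasswd -a on the server:)

            # /etc/samba/smb.conf
            [files]
                path = /srv/files
                read only = no
                valid users = me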

    • GamingChairModel@lemmy.world · 17 days ago

      Yeah, I do still use rsync for the stuff that would take a long time, but for one-off file movement I just use a mounted network drive in the normal file browser, including on Windows and macOS machines.
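
      (For illustration, keeping such a mount persistent on a Linux client can be a single fstab line; the server, share, and credentials file here are hypothetical:)

          # /etc/fstab
          //nas/share  /mnt/share  cifs  credentials=/etc/smb-creds,uid=1000  0  0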

  • Admiral Patrick@dubvee.org · 19 days ago (edited)

    Depends on what I’m transferring and to/from where:

    • scp is my go-to, since I’m a Linux household and have SSH keys set up, with LDAP SSO as a fallback (a quick sketch follows this list)
    • sshfs if I’m too lazy to connect via SMB/NFS (or don’t feel like installing the tools for them), or I’m traversing a WAN
    • rsync for bulk transfers and backups
    • Snapdrop/Pairdrop for one-off file/text shares between devices with GUIs (mostly phone <-> PC)
    • SMB if I’m on a client PC and need to work with the files directly from the fileserver
    • NFS between servers
    • To get bulk data to my phone (e.g. updating my music library), I connect via USB in MTP mode and copy from the server via SMB or sshfs.
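
    (For illustration, the scp and sshfs cases above with a hypothetical “fileserver” alias:)

        # one-off copy, keys already exchanged
        scp -r ./configs/ fileserver:/etc/backups/

        # sshfs across a WAN; -o reconnect re-establishes the mount after drops
        sshfs -o reconnect fileserver:/srv/share ~/mnt/share
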
  • neidu3@sh.itjust.works · 18 days ago (edited)

    rsync if it’s a source/destination I don’t use very often.

    More common transfer locations are handled via NFS.

  • sugar_in_your_tea@sh.itjust.works · 18 days ago

    What’s wrong with rsync? If you don’t like IP addresses, use a domain name. If you use certificate authentication, you can tab complete the folders. It’s a really nice UX IMO.

    If you’ll do this a lot, just mount the target directory with sshfs or NFS. Then use rsync or a GUI file manager.
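
    (For illustration: with a hypothetical “nas” alias in ~/.ssh/config and bash-completion installed, remote paths tab-complete and the command stays short:)

        rsync -avh --progress ~/photos/ nas:/tank/photos/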

        • jollyrogue@lemmy.ml · 17 days ago (edited)

          The daemon tracks file state, so the transfers start quicker because rsync doesn’t have to scan the filesystem.
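
          (For illustration, a minimal rsync daemon module; the module name and paths are hypothetical:)

              # /etc/rsyncd.conf on the server
              [backups]
                  path = /srv/backups
                  read only = false

              # client side: the double colon (or rsync://) targets the daemon
              rsync -av ~/docs/ server::backups/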

            • jollyrogue@lemmy.ml · 16 days ago

              Not necessarily. Rsync deltas are very efficient, and not everything supports deltas.

              It may very well be the correct tool for the job.

              Anyway, problem fit wasn’t part of the question.

              • sugar_in_your_tea@sh.itjust.works · 16 days ago

                Yeah, there are probably a few perfect fits for it. I don’t rsync between machines very often, so the only use case I might have is backups, which is already well covered with a number of tools. Otherwise I just want to sync a few directories.

    • Grumuk@lemmy.ml · 18 days ago

      I never even set up DNS for things that aren’t public-facing. I just keep /etc/hosts updated everywhere and ssh/scp/rsync things around using their non-FQDN hostnames.
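
      (For illustration, with hypothetical names and addresses:)

          # /etc/hosts, kept in sync on every machine
          192.168.1.10  nas
          192.168.1.11  jellyfin

          # then short hostnames work everywhere
          scp backup.tar.gz nas:/srv/backups/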

  • MasterBlaster@lemmy.world · 18 days ago

    By “homelab”, do you mean your local network? I tend to use shared folders, KDE Connect, or WebDAV.

    I like WebDAV, which I can activate on Android with DAVx5 and Material Files, and I use it for Joplin.

    The nice thing about this setup is that I also have a certificate-secured OpenVPN, so in a pinch I can access it all remotely by activating that VPN, then disconnecting when I’m done.

  • magic_smoke@lemmy.blahaj.zone · 19 days ago (edited)

    • sftp for quick shit like config files off a random server, because it’s easy and on by default with sshd in most distros
    • rsync for big one-time moves
    • SMB for client-facing network shares
    • NFS for SAN usage (mostly storage for virtual machines; a minimal export sketch follows the list)
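
    (For illustration, an NFS export on the server side; the path and subnet are hypothetical, applied with exportfs -ra:)

        # /etc/exports
        /srv/vmstore  192.168.10.0/24(rw,sync,no_subtree_check)
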
  • Xanza@lemm.ee · 19 days ago

    rclone. I have a few helper functions:

    fn mount { rclone mount http: X: --network-mode }
    fn kdrama {|x| rclone --multi-thread-streams=8 --checkers=2 --transfers=2 --ignore-existing --progress copy http:$x nas:Media/KDrama/$x --filter-from ~/.config/filter.txt }
    fn tv {|x| rclone --multi-thread-streams=8 --checkers=2 --transfers=2 --ignore-existing --progress copy http:$x nas:Media/TV/$x --filter-from ~/.config/filter.txt }
    fn downloads {|x| rclone --multi-thread-streams=8 --checkers=2 --transfers=2 --ignore-existing --progress copy http:$x nas:Media/Downloads/$x --filter-from ~/.config/filter.txt }

    So I download something to my seedbox, then use rclone lsd http: to get the exact name of the folder/files, and run tv "filename", which invokes my function. It pulls all the files (based on filter.txt) over multiple threads into the correct folder on my NAS. Works great, and it maxes out my connection.