Hello selfhosted! Sometimes I have to transfer big files or large amounts of small files in my homelab. I've used rsync, but specifying the IP address and the folders and everything is a bit fiddly. I thought about writing a bash script, but before I do that I wanted to ask you about your favourite way to achieve this. Maybe I'm missing out on an awesome tool I wasn't even thinking about.
Snapdrop if both machines have a GUI/web browser: https://github.com/SnapDrop/snapdrop
Scp otherwise
I work from home, and while my two systems (home and work) are on the same LAN, they don't see each other for file sharing. I get paid via direct deposit like everyone else, which means my pay stubs are all electronic. I print those out and then use WinSCP to copy them over to my desktop. No other files are ever sent.
At home, depending on the number of files, I either use SFTP via FileZilla, or, if the mood strikes me and it's a single file, I'll just use scp if I'm already on the CLI, which seems to be most of the time these days when I'm working on my personal servers. I've found that SFTP transfers faster than copy/pasting to the NFS share on the same drive.
As a lazy person, I just prefer sftp on Thunar.

SMB share if it's desktop to desktop. If it's from phone to PC, I throw it on Nextcloud on the phone, then grab it from the web UI on the PC.
SMB is the way to go if you have identity set up, since your PC auth will carry over to the connection to the SMB share. Nextcloud will be less typing if not, since you can just have persistent auth in the app/web UI.
Not gonna lie, I just map a network share and copy and paste through the gui.
Sounds very straightforward. Do you have a Samba docker container running on your server, or how do you do that?
Do you really need a container for Samba?
I see the benefits of containers, but for a use like this it would be overkill.
I just type
sftp://[ip, domain or SSH alias]
into my file manager and browse it as a regular folder.

Dolphin?
Any file manager on Linux supports this
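If you want the same thing from a terminal, GVFS-based file managers expose their mounts there too; a quick sketch, assuming a hypothetical host called server:

```
# mount the SFTP location (same mechanism the file manager uses)
gio mount sftp://user@server/

# the mount then shows up under the per-user gvfs FUSE directory
ls /run/user/$UID/gvfs/
```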
YOU CAN DO THAT???
Linux is truly extensible and it is the part I both love and struggle to explain the most.
I can sit at my desktop, developing code that physically resides on my server, and interact with it from my laptop. This does not require any strange janky setup, it's just SSH. It's extensible.

I love this so much. When I first switched to Linux, being able to just list a bunch of server aliases along with the private key references in my .ssh/config made my life SO much easier than the redundantly maintained and hard-to-manage PuTTY and WinSCP configurations on Windows.
I have two servers, one Mac and one Windows. For the Mac I just map directly to the SMB share; for the Windows one it's a standard network share. My desktop runs Linux and connects to both with ease.
I don't have a docker container, I just have Samba running on the server itself.
I do have an owncloud container running, which is mapped to a directory. And I have that shared out through samba so I can access it through my file manager. But that’s unnecessary because owncloud is kind of trash.
Yeah, I mean I do still use rsync for the stuff that would take a long time, but for one-off file movement I just use a mounted network drive in the normal file browser, including on Windows and MacOS machines.
Same lol, somebody please enlighten me on a faster way!
scp
Checks username… yeah that tracks
scp is deprecated.
SCP, the protocol, is deprecated. scp, the command, just uses the SFTP protocol these days. I find its syntax convenient.
Oh does it? I didn’t realize that. I’ve just switched over to rsync completely.
Since OpenSSH version 9.0, so like mid '22. So as long as you’re not running something more out of date than that.
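Nothing changes on the command line either way; a quick illustration (host and paths made up):

```
# on OpenSSH >= 9.0 this speaks SFTP under the hood
scp -r ./photos user@server:/srv/media/

# -O forces the legacy SCP protocol for very old remote ends
scp -O notes.txt user@server:/tmp/
```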
Depends on what I’m transferring and to/from where:
- scp is my go-to since I'm a Linux household and have SSH keys set up and LDAP SSO as a fallback
- sshfs if I'm too lazy to connect via SMB/NFS (or I don't feel like installing the tools for them) or I'm traversing a WAN (see the sketch after this list)
- rsync for bulk transfer and backups
- Snapdrop/Pairdrop for one-off file/text shares between devices with GUIs (mostly phone <–> PC)
- SMB if I'm on a client PC and need to work with the files directly from the fileserver
- NFS between servers
- To get bulk data to my phone (e.g. updating my music library), I connect via USB in MTP mode and copy from the server via SMB or sshfs.
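The sshfs mount is a one-liner; a minimal sketch, assuming a host reachable as server and FUSE installed (paths invented):

```
mkdir -p ~/mnt/server
sshfs server:/srv/data ~/mnt/server   # mount the remote directory over SSH

# ...work with the files as if they were local...

fusermount -u ~/mnt/server            # unmount when done
```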
rsync if it's a from/to I don't need very often. More common transfer locations are handled via NFS.
Syncthing and/or ftp.
What’s wrong with rsync? If you don’t like IP addresses, use a domain name. If you use certificate authentication, you can tab complete the folders. It’s a really nice UX IMO.
If you’ll do this a lot, just mount the target directory with sshfs or NFS. Then use rsync or a GUI file manager.
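Concretely, an alias in ~/.ssh/config keeps the rsync invocation short (host details invented):

```
# ~/.ssh/config
Host nas
    HostName 192.168.1.50
    User me
    IdentityFile ~/.ssh/id_ed25519
```

After that it's just rsync -avP ./photos/ nas:photos/, and if your bash-completion setup supports it, remote paths tab complete too.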
Just don’t run rsync as a daemon as that’s a security nightmare
Why would you do that? That sounds awful…
The daemon tracks file state, so the transfers start quicker because rsync doesn’t have to scan the filesystem.
Right, but if you’re transferring things that frequently, there are better solutions.
Not necessarily. Rsync deltas are very efficient, and not everything supports deltas.
It may very well be the correct tool for the job.
Anyway, problem fit wasn’t part of the question.
Yeah, there are probably a few perfect fits for it. I don’t rsync between machines very often, so the only use case I might have is backups, which is already well covered with a number of tools. Otherwise I just want to sync a few directories.
It is; rsync's daemon mode sends data in plain text, and there is an optional password that is also sent in plain text.
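For the curious, a bare-bones daemon module looks something like this (module name and paths invented); the secrets file authenticates clients, but nothing is encrypted on the wire:

```
# /etc/rsyncd.conf
[media]
    path = /srv/media
    read only = false
    auth users = backup
    secrets file = /etc/rsyncd.secrets   # plain-text user:password pairs
```

Running rsync over SSH instead (the plain rsync user@host:path form) sidesteps all of that.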
I never even set up DNS for things that aren’t public facing. I just keep /etc/hosts updated everywhere and ssh/scp/rsync things around using their non-fqdn hostnames.
You could also use mDNS to the same effect.
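Concretely, the hosts-file version is just an entry per machine (addresses invented); with mDNS you'd skip the file and use hostname.local instead:

```
# /etc/hosts
192.168.1.10   nas
192.168.1.11   mediabox
```

Then scp backup.tar nas:/srv/backups/ works by name with no DNS server involved.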
SFTP! 😃
By “homelab”, do you mean your local network? I tend to use shared folders, kdeconnect, or WebDAV.
I like WebDAV, which I can activate on Android with DAVx5 and Material Files, and I use it for Joplin.

The nice thing about this setup is that I also have a certificate-secured OpenVPN, so in a pinch I can access it all remotely by activating that VPN, then disconnecting when I'm done.
Rsync and NFS for me.
And me.
- sftp for quick shit like config files off a random server, because it's easy and is on by default with sshd in most distros
- rsync for big one-time moves
- smb for client-facing network shares
- NFS for SAN usage (mostly storage for virtual machines)
rclone. I have a few helper functions:
```
fn mount { rclone mount http: X: --network-mode }
fn kdrama {|x| rclone --multi-thread-streams=8 --checkers=2 --transfers=2 --ignore-existing --progress copy http:$x nas:Media/KDrama/$x --filter-from ~/.config/filter.txt }
fn tv {|x| rclone --multi-thread-streams=8 --checkers=2 --transfers=2 --ignore-existing --progress copy http:$x nas:Media/TV/$x --filter-from ~/.config/filter.txt }
fn downloads {|x| rclone --multi-thread-streams=8 --checkers=2 --transfers=2 --ignore-existing --progress copy http:$x nas:Media/Downloads/$x --filter-from ~/.config/filter.txt }
```
So I download something to my seedbox, then use
rclone lsd http:
to get the exact name of the folder/files, and run tv "filename"
and it runs my function. Pulls all the files (based on filter.txt) using multiple threads to the correct folder on my NAS. Works great, and maxes out my connection.
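For anyone wondering what goes in a --filter-from file, here's a hypothetical one using rclone's filter syntax (- excludes, + includes, first matching rule wins):

```
# ~/.config/filter.txt
- *.nfo
- sample/**
+ *
```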