I’m trying to find a good method of making periodic, incremental backups. I assume that the most minimal approach would be to have a cron job run rsync
periodically, but I’m curious what other solutions may exist.
I’m interested in both command-line and GUI solutions.
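To make it concrete, the minimal rsync-in-cron approach I have in mind would be something like this (paths and schedule are just placeholders):

```
# Nightly incremental backup at 02:00; --link-dest hard-links unchanged files
# against the previous run, so each dated directory looks like a full copy
# but only changed files take up new space.
0 2 * * * rsync -a --delete --link-dest=/mnt/backup/latest /home/me/ /mnt/backup/$(date +\%F)/ && ln -sfn /mnt/backup/$(date +\%F) /mnt/backup/latest
```

I’m mostly curious whether dedicated tools handle retention, pruning, and verification better than hand-rolled scripts like this.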
I don’t. I lose my data like all the cool (read: fool) kids.
I too rawdog linux like a chad
Timeshift is a great tool for creating incremental backups. Basically it’s a frontend for rsync and it works great. If needed you can also use it from the CLI.
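If you do want the CLI route, the basic invocations are roughly the following (going from memory, so check timeshift --help; the comment and tag are just examples):

```
sudo timeshift --create --comments "before upgrade" --tags D   # take an on-demand snapshot
sudo timeshift --list                                          # list existing snapshots
sudo timeshift --restore                                       # interactive restore
```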
I use an rsync + btrfs snapshot solution:
- Use rsync to incrementally collect all data into a btrfs subvolume
- Deduplicate using duperemove
- Create a read-only snapshot of the subvolume
I don’t have a backup server, just an external drive that I only connect during backup.
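In practice a backup run boils down to something like this (device and paths are placeholders, not my exact setup):

```
# mount the external btrfs drive
sudo mount /dev/sdX1 /mnt/backup

# 1. incrementally sync everything into a writable subvolume
rsync -aHAX --delete /home/me/ /mnt/backup/current/

# 2. deduplicate extents within the subvolume
sudo duperemove -rdh /mnt/backup/current

# 3. freeze the result as a read-only snapshot
sudo btrfs subvolume snapshot -r /mnt/backup/current "/mnt/backup/snap-$(date +%F)"

sudo umount /mnt/backup
```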
Deduplication is mediocre; I am still looking for a snapshot-aware duperemove replacement.
I’m not trying to start a flame war, but I’m genuinely curious: why do people like btrfs over zfs? Btrfs seems very much “not ready for prime time”.
Features necessary for most btrfs use cases are all stable, plus btrfs is readily available in the Linux kernel, whereas for zfs you need an additional kernel module. The availability advantage of btrfs is a big plus in case of a disaster, i.e. no additional work is required to recover your files.
(All the above only applies if your primary OS is Linux, if you use Solaris then zfs might be better.)
btrfs is included in the Linux kernel; zfs is not on most distros.
The tiny chance of an external kernel module breaking on a kernel upgrade does happen sometimes, and is probably scary enough for a lot of people.
Fair enough.
I’ve only ever run ZFS on a Proxmox/server system, but doesn’t it require a not-insignificant amount of resources to run? BTRFS is not flawless, but it does have a pretty good feature set.
I have scripts scheduled to run rsync on local machines, which save incremental backups to my NAS. The NAS in turn is incrementally backed up to a remote server with Borg.
Not all of my machines are on all the time so I also built in a routine which checks how old the last backup is, and only makes a new one if the previous backup is older than a set interval.
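The age check is nothing clever; stripped down it amounts to something like this (paths and the 24-hour threshold are placeholders, not my actual script):

```
#!/bin/sh
# Skip the backup if the previous one is newer than MAX_AGE seconds.
STAMP=/mnt/nas/backups/.last_backup
MAX_AGE=$((24 * 3600))

now=$(date +%s)
last=$(stat -c %Y "$STAMP" 2>/dev/null || echo 0)

if [ $((now - last)) -ge "$MAX_AGE" ]; then
    rsync -a --delete /home/me/ /mnt/nas/backups/current/ && touch "$STAMP"
fi
```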
I also save a lot of my config files to a local git repo, the database of which is regularly dumped and backed up in the same way as above.
I use timeshift. It really is the best. For servers I go with restic.
I use timeshift because it was pre-installed. But I can vouch for it; it works really well, and lets you choose and tweak every single thing in a legible user interface!
When I do something really dumb I typically just use dd to create an iso. I should probably find something better.
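For the record, the usual incantation is something like the following; the device and output path are examples, and it’s worth triple-checking them since dd will happily overwrite the wrong disk:

```
# raw image of the whole disk; best done from a live USB so the filesystem is idle
sudo dd if=/dev/sdX of=/mnt/external/disk-backup.img bs=4M status=progress conv=fsync
```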
I use Back In Time to backup my important data on an external drive. And for snapshots I use timeshift.
Back In Time? Doesn’t timeshift have the same purpose, or is it just a matter of preference?
Yes, it is the same purpose, kinda. But timeshift runs as a cron job and allows for easy rollbacks, while I use BIT for manual backups.
At the core it has always been rsync and Cron. Sure I add a NAS and things like rclone+cryptomator to have extra copies of synchronized data (mostly documents and media files) spread around, but it’s always rsync+Cron at the core.
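The extra copies are just thin layers on top of that; e.g. pushing the documents folder to a cloud remote with rclone is a one-liner (the remote name is whatever you set up with rclone config):

```
# mirror the local documents tree to the configured cloud remote
rclone sync ~/Documents mycloud:backup/documents --progress
```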
I rotate between a few computers. Everything is synced between them with syncthing and they all have automatic btrfs snapshots. So I have several physical points to roll back from.
For a worst case scenario everything is also synced offsite weekly to a pCloud share. I have a little script that mounts it with pcloudfs, encfs and then rsyncs any updates.
I use Borg backup with Vorta for a GUI. Hasn’t let me down yet.
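Vorta is just driving standard Borg underneath, so the CLI equivalent is roughly this (repo path, archive name, and prune policy are examples):

```
borg init --encryption=repokey /mnt/backup/borg-repo          # one-time repository setup
borg create --stats /mnt/backup/borg-repo::home-{now} ~/      # deduplicated, incremental archive
borg prune --keep-daily 7 --keep-weekly 4 /mnt/backup/borg-repo
```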
I run Openmediavault and I back up using BorgBackup. Super easy to set up, use, and modify.
I use btrbk to send btrfs snapshots to a local NAS. Consistent backups with no downtime. The only annoyance (for me at least) is that both the send and receive ends must use the same SELinux policy or labels won’t match.
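Under the hood this is just btrfs send/receive; a hand-rolled version of a single incremental transfer looks roughly like this (hostname, paths, and snapshot names are placeholders):

```
# take a new read-only snapshot locally
sudo btrfs subvolume snapshot -r /home "/snapshots/home.$(date +%F)"

# send only the delta against the previous snapshot to the NAS
sudo btrfs send -p /snapshots/home.2024-01-01 "/snapshots/home.$(date +%F)" \
  | ssh nas sudo btrfs receive /mnt/backup/home/
```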
Vorta + borgbase. The yearly subscription is cheap and fits my storage needs by quite some margin. Gives me peace of mind to have an off-site backup.
I also store my documents on Google Drive.
I use Pika backup, which uses borg backup under the hood. It’s pretty good, with amazing documentation. The main issue I have with it is that it’s really finicky and kind of a pain to set up, even if it “just works” after that.
Can you restore from it? That’s the part I’ve always struggled with.
The way pika backup handles it, it loads the backup as a folder you can browse. I’ve used it a few times when hopping distros to copy and paste stuff from my home folder. Not very elegant, but it works and is very intuitive, even if I wish I could just hit a button and reset everything to the snapshot.
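For anyone doing this with plain borg instead of Pika, the same folder-style browsing is available via borg mount (repo and archive names are examples):

```
# expose an archive as a browsable filesystem, copy what you need, unmount when done
borg mount /mnt/backup/borg-repo::home-2024-01-01 /mnt/restore
cp -a /mnt/restore/home/me/.config ~/
borg umount /mnt/restore
```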
Kopia or Restic. Both do incremental, deduplicated backups and support many storage services.
Kopia provides a UI for the end user and has integrated scheduling. Restic is a powerful CLI tool that you build your backup system on, but usually one does not need more than a cron job for that. I use a set of custom systemd jobs and generators for my restic backups.
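A bare-bones restic setup really is just a couple of commands plus whatever scheduler you prefer (repo location, paths, and retention policy are examples):

```
restic -r /srv/restic-repo init                 # one-time: create the repository
restic -r /srv/restic-repo backup ~/work        # each run: incremental, deduplicated snapshot
restic -r /srv/restic-repo forget --keep-daily 7 --keep-weekly 5 --prune
```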
Keep in mind that backups on local, constantly connected storage are hardly backups. When the machine fails hard, the backups are lost together with the original data. So timeshift alone is not really a solution. Also: test your backups.