Background
For years, I used Bareos, a Bacula fork that was started basically because "Bacula is too corporate and keeps features in the enterprise edition for years before putting them in the community edition".
It was a very convenient system, keeping the list of all backups of all my machines (half a dozen at home, half a thousand at work) in one place.
Over the years it came full circle, with Bareos no longer even shipping packages for Debian. The main driving reason for the initial install was "I can just install the bareos client and get on with backing up my machine", and Bareos looked to be the "community chosen" fork, but with time "just install the package" was no longer true and the only provided packages were snapshots of the dev branch, often buggy to the point of needing to occasionally restart the daemon when backups stopped working. Its "everything is a tape in disguise" approach was also problematic when trying to do anything but plain hard drive/tape backup.
So I started looking for alternatives. There is no really good "enterprise" solution comparable to Bareos in the OSS space; there is BackupPC, but managing a Perl app whose last stable release was 4 years ago didn't look like something I want to do in my free time.
Rejected alternatives
I looked for something that could at least store data on S3 (due to the ease of doing that at wider scale, and having cloud options if needed), plus the typical nice-to-haves of backup software, like retention periods and the ability to easily exclude files, preferably by just dropping a file that marks a directory/files to exclude.
Borg backup
It's entirely fine and fitting; the sparse choice of storage backends is what removed it from the list.
Syncthing
It's a great file synchronization tool, but it's not designed for backups, so it has only light un-delete functionality. I already use it for sync, but it's not exactly the required feature set for backups.
Amanda
Just as with Bareos/Bacula, they are still in denial and pretend everything is tape. I used it some time ago and it was... okay, but not something I was looking for. It also "decides for itself" when to make a full or incremental backup based on storage, which isn't exactly great when dealing with limited internet bandwidth.
Kopia
Lastly, the one most similar to restic: Kopia. I test-drove the two for a few months, and my main take-away is that restic is better for scripted backups while Kopia takes a "user UI first" approach. If all you need is to back up a few desktop machines, by all means just use Kopia, it does that excellently.
Why restic was chosen
- very good CLI support
- always-incremental approach, with a snapshot after every backup
- fast enough (within single-digit % compared to Kopia)
- Supports mounting snapshots as FUSE (a few others like Kopia or Borg support that too, unlike Bareos)
- Backup encryption by the client (supported by most of the backup applications above)
- Available in the Debian repository (borgbackup and amanda are available too; Kopia needs an external repo)
- Multiple key support - the ability to have a "master" key that can access every archive and per-host keys that each host can use to restore itself (a quick sketch below)
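For illustration, a minimal sketch of that multiple-key workflow (the repository URL and the master password here are placeholders):

```bash
# the repo was initialized with the "master" password; add a second,
# per-host key so the machine never needs to know the master secret
export RESTIC_REPOSITORY="s3:https://s3.example.com/mydesktop-backup/restic"  # placeholder
export RESTIC_PASSWORD="master-key-password"                                  # placeholder
restic key add    # prompts for the new per-host password
restic key list   # every key listed here can unlock the repository
```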
Preparing storage
I chose the popular MinIO to provide S3-compatible storage for the server. The main reason was built-in replication, as I wanted to put a second server on my cheapo dedicated Kimsufi box.
Installing minio
Installation couldn't be much easier. We create the unit file:
```ini
[Unit]
Description=Minio S3 server
# if minio storage is on a mount, this makes systemd mount it before minio starts
ConditionPathExists=/var/backup/minio

[Service]
Type=simple
User=minio
Group=minio
Environment=MINIO_ROOT_USER="super_secret_access_key"
Environment=MINIO_ROOT_PASSWORD="super_secret_secret_key"
ExecStart=/usr/local/bin/minio server "/var/backup/minio"
Restart=always
RestartSec=30
LimitNOFILE=40000

[Install]
WantedBy=multi-user.target
```
download the binaries:
```bash
cd /usr/local/bin
wget https://dl.min.io/server/minio/release/linux-amd64/minio && chmod +x minio
wget https://dl.min.io/client/mc/release/linux-amd64/mc && mv mc m-c && chmod +x m-c # renamed to avoid conflicts with midnight commander
```
(the guide assumes you have `/usr/local/bin` in your `PATH` for the purpose of running these commands; if not, `export PATH="$PATH:/usr/local/bin"`)
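Optionally, verify the server binary against the published checksum (the `.sha256sum` URL follows MinIO's usual naming convention; double-check on dl.min.io if it 404s):

```bash
wget https://dl.min.io/server/minio/release/linux-amd64/minio.sha256sum
sha256sum -c minio.sha256sum   # should print: minio: OK
```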
make the user:
```bash
useradd minio --shell /bin/false --system -d /var/backup/minio --create-home
```
make sure the data directory exists, then start the service via `systemctl start minio`; `systemctl status minio` should return something like this:
```
* minio.service - Minio S3 server
     Loaded: loaded (/etc/systemd/system/minio.service; disabled; preset: enabled)
     Active: active (running) since Sat 2024-12-14 10:16:38 CET; 9min ago
   Main PID: 2394186 (minio)
      Tasks: 10 (limit: 18996)
     Memory: 125.4M
        CPU: 5.042s
     CGroup: /system.slice/minio.service
             `-2394186 /usr/local/bin/minio server /var/backup/minio/data

Dec 14 10:16:38 cthulhu systemd[1]: Started minio.service - Minio S3 server.
Dec 14 10:16:39 cthulhu minio[2394186]: INFO: Formatting 1st pool, 1 set(s), 1 drives per set.
Dec 14 10:16:39 cthulhu minio[2394186]: INFO: WARNING: Host local has more than 0 drives of set. A host failure will result in data becoming unavailable.
Dec 14 10:16:39 cthulhu minio[2394186]: MinIO Object Storage Server
Dec 14 10:16:39 cthulhu minio[2394186]: Copyright: 2015-2024 MinIO, Inc.
Dec 14 10:16:39 cthulhu minio[2394186]: License: GNU AGPLv3 - https://www.gnu.org/licenses/agpl-3.0.html
Dec 14 10:16:39 cthulhu minio[2394186]: Version: RELEASE.2024-11-07T00-52-20Z (go1.23.3 linux/amd64)
Dec 14 10:16:39 cthulhu minio[2394186]: API: http://1.2.3.4:9000 http://5.6.7.8:9000 http://127.0.0.1:9000
Dec 14 10:16:39 cthulhu minio[2394186]: WebUI: http://1.2.3.4:32827 http://5.6.7.8:32827 http://127.0.0.1:32827
Dec 14 10:16:39 cthulhu minio[2394186]: Docs: https://docs.min.io
```
if everything is okay, run `systemctl enable minio` to make it run on boot.
Now we can set an alias for our new server in `m-c`:
```
# m-c alias set local http://127.0.0.1:9000 super_secret_access_key super_secret_secret_key
m-c: Configuration written to `/root/.m-c/config.json`. Please update your access credentials.
m-c: Successfully created `/root/.m-c/share`.
m-c: Initialized share uploads `/root/.m-c/share/uploads.json` file.
m-c: Initialized share downloads `/root/.m-c/share/downloads.json` file.
Added `local` successfully.
```
and we should get info about our locally installed server:
```
# m-c admin info local
●  127.0.0.1:9000
   Uptime: 11 minutes
   Version: 2024-11-07T00:52:20Z
   Network: 1/1 OK
   Drives: 1/1 OK
   Pool: 1

┌──────┬────────────────────────┬─────────────────────┬──────────────┐
│ Pool │ Drives Usage           │ Erasure stripe size │ Erasure sets │
│ 1st  │ 69.8% (total: 700 GiB) │ 1                   │ 1            │
└──────┴────────────────────────┴─────────────────────┴──────────────┘

1 drive online, 0 drives offline, EC:0
```
Depending on your setup, you might need to tweak additional environment variables in the unit file.
At this point, you might want to set up external access to the API part (the one running on port `9000`). This is beyond the scope of this guide, but the simplest haproxy config for it would be:
```
...
frontend
...
    acl minio hdr(host) minio.example.com
    use_backend b_minio if minio
...

backend b_minio
    option httpchk
    http-check send meth GET uri /minio/health/live ver HTTP/1.1 hdr Host 127.0.0.1
    server minio 127.0.0.1:9000 check
```
Setting it up behind a reverse proxy/load balancer with a Let's Encrypt cert is highly recommended, but outside the scope of this tutorial. You probably also want to limit the listening IPs to just `127.0.0.1` and only publish it to the world over an SSL connection.
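For example, here is a sketch of binding the API to localhost only via MinIO's `--address` flag, so that nothing but the local reverse proxy can reach it (adapt to your own unit file):

```ini
# in /etc/systemd/system/minio.service, [Service] section
ExecStart=/usr/local/bin/minio server --address "127.0.0.1:9000" "/var/backup/minio"
```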
Configuring policies
Each user in minio has a policy establishing what they can and cannot do with buckets. We will be creating an additional one that basically says "you are only allowed to touch the bucket that is named after your user":
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::${aws:username}",
        "arn:aws:s3:::${aws:username}/*"
      ]
    }
  ]
}
```
Put that in a JSON file and apply it via `m-c admin policy create local backup_user minio_backup_policy.json` (the version needs to stay `2012-10-17`; it's the policy language version, not the version of the policy itself).
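As an optional sanity check, the policy should now be visible on the server:

```bash
m-c admin policy info local backup_user   # dumps the JSON body we just uploaded
```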
Storage checklist
At this point you should have:

- the `minio` daemon running
- an access URL to minio, either a direct one (`http://your.server.ip.addr:9000`, NOT recommended for security reasons) or one hidden behind an SSL proxy, like `https://s3.example.com`
- `m-c` configured and allowing you to edit the server config. Minio also has a web UI on the other port mentioned in the logs; you want to put that behind an SSL proxy too or, at the very least, a VPN
- `m-c admin policy list local` returning `backup_user` on the list
Setting up restic
First, we create a username and a bucket for the first machine's backup and give it the right policy:
CAVEAT: use only domain-friendly names (alphanumeric + dash [-]). Minio will allow you to create a user with an underscore [_] but NOT a bucket with an underscore, so name it `something-backup` instead of `something_backup`. This is due to buckets needing to work both in path (`s3.example.com/bucket-name`) and domain (`bucket-name.s3.example.com`) format in the S3 protocol.
```
# m-c admin user add local mydesktop-backup very-long-and-random-desktop-secret
Added user `mydesktop-backup` successfully.
# m-c admin policy attach local backup_user --user mydesktop-backup
Attached Policies: [backup_user]
To User: mydesktop-backup
```
Now everything is ready to make our first backup! I'd split the "credentials" part from the "actual backup script" for clarity, so the environment file should look like this:
```bash
export AWS_ACCESS_KEY_ID="mydesktop-backup"
export AWS_SECRET_ACCESS_KEY="very-long-and-random-desktop-secret"
export RESTIC_PASSWORD="different-very-long-secret-for-client-encryption"
export RESTIC_REPOSITORY="s3:https://s3.example.com/mydesktop-backup/restic"
```
Remember to `chmod 600` the file so no one other than root can access the credentials!
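One way (just a suggestion) to get the permissions right from the start is to create the file empty with the correct mode before putting the secrets in:

```bash
install -m 600 -o root -g root /dev/null /etc/restic.env  # empty file, mode 600
$EDITOR /etc/restic.env                                   # now add the exports
```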
Then we can include it in our backup script:
```bash
#!/bin/bash
source /etc/restic.env
export PATH="$PATH:/usr/local/bin"
# we first make sure any stale locks (which are rare but still) get cleared; if that
# command fails it probably means the repo is not initialized or the creds are wrong,
# so we attach `restic init` here so it gets created if needed
restic unlock || restic init
restic backup --exclude-if-present .nobackup "/var/www"
restic forget --keep-hourly 4 --keep-daily 7 --keep-weekly 4 --keep-monthly 3
restic prune
```
At this point it is worth going through the documentation to see what each option does and which ones you might want to add. The result of `restic backup` should look like this:
```
repository b6f23fde opened (repository version 2) successfully, password is correct
no parent snapshot found, will read all files

Files:       10487 new, 0 changed, 0 unmodified
Dirs:        1580 new, 0 changed, 0 unmodified
Added to the repository: 287.263 MiB (225.105 MiB stored)

processed 10487 files, 363.680 MiB in 0:24
snapshot d40d33d2 saved
```
Tuning the backup
Backup options
- `--exclude-if-present .nobackup` - excludes every directory (and its subdirectories) from the backup if a file of that name exists in it. Can be specified multiple times. It is highly recommended to drop such a file in directories like browser caches or programming languages' package manager caches (maybe even the Steam directory) if you do not want your backups filled up by temporary/easily re-created files.
- `--one-file-system` - handy if the place you are backing up has some mounts you either do not care about or want to back up separately; for example, you might not want to back up the NAS directory mounted on your desktop computer.
- `--stdin` and `--stdin-filename` - allows you to back up a single "file" by feeding data directly from stdin. That is handy for software that can write backups to stdout, like many database backup programs, and is a recommended way of backing up SQL databases, essentially doing `pg_dumpall | restic backup --stdin --stdin-filename db.sql`.
- `--exclude-larger-than` - very situationally useful, but allows you to omit big files if for some reason you want to skip them. One use might be avoiding build artifacts, though marking them with an exclude is a far more reliable method.
- `--tag` - adds an extra tag to the backup, convenient if you then want to filter by it (a combined example follows this list).
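For illustration, several of these options combined into one invocation (the path and the 1G limit are made-up values, not recommendations):

```bash
restic backup \
    --tag daily \
    --one-file-system \
    --exclude-if-present .nobackup \
    --exclude-larger-than 1G \
    /home/myuser
```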
Retention options
`restic forget --help` displays the whole set of `--keep` variables to set; the important things to remember are:

- the difference between `--keep-X` and `--keep-within-X` is that the latter is relative to the latest snapshot rather than the current time. Referencing the latest snapshot makes sure there are always X old backups left even if the backups stopped working, while the basic, say, `--keep-monthly=3` will start removing everything older than 3 months even if backups stopped. Both behaviours might be desirable depending on the use case.
- you can filter a given policy both by tag and by host, so you might, for example, create a tag `daily` that keeps the last 28 days and a tag `weekly` that just keeps each week's snapshot
- `--group-by` allows different retention to be applied per tag, not "just" per path, if you happen to have 2 backups touching the same path (a short sketch follows this list)
- `--host myhostname` might be required if you want to re-use the same repository for multiple hosts
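A sketch of the points above (the `10d` window and the grouping are arbitrary picks, not recommendations):

```bash
# keep everything taken within 10 days of the latest snapshot, and apply
# the policy per host+tag group so each tier is pruned independently
restic forget --group-by host,tags --keep-within 10d
```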
Restoring the backup
Restoring individual files can be done via path (you can find files with `restic find`, dump a single file with `restic dump` and restore a whole snapshot with `restic restore`), but by far the easiest method is just mounting the repository via FUSE (the `fuse` package must be installed):
```
# restic mount mnt
repository b6f23fde opened (repository version 2) successfully, password is correct
Now serving the repository at mnt
Use another terminal or tool to browse the contents of this folder.
When finished, quit with Ctrl-c here or umount the mountpoint.
```
now, in another terminal (or just a file browser):
```
# cd mnt
~/mnt# find -maxdepth 2
.
./hosts
./hosts/tt-rss
./tags
./ids
./ids/d40d33d2
./snapshots
./snapshots/2024-12-14T11:19:48+01:00
./snapshots/latest
```
you can just enter the directory containing a given snapshot and copy what you need.
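If you'd rather not mount anything, `restic restore` can do the same; here is a sketch (the target and include paths are placeholders):

```bash
# restore one directory from the latest snapshot into /tmp/restore
restic restore latest --target /tmp/restore --include /var/www/mysite
```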
Automating the backup
For more complex setups, I'd divide the backup script into "maintenance" (running the prune and the retention stuff) and the individual backups. We could use cron for that, or systemd timers. Example setup:
`/etc/restic/env` - same as the env file above, just the access data for the scripts
`/etc/restic/maintenance.sh`:
```bash
#!/bin/bash
export PATH="$PATH:/usr/local/bin"
source /etc/restic/env
# just in case/for first run
restic unlock || restic init
# two tiers, for "stuff I work on now" and "stuff that changes rarely"
restic forget --tag hourly --keep-hourly 48 --keep-daily 14 --keep-weekly 4 --keep-monthly 3
restic forget --tag daily --keep-daily 7 --keep-weekly 4 --keep-monthly 3
restic prune
```
`/etc/restic/my-web-stuff.sh`:
```bash
#!/bin/bash
export PATH="$PATH:/usr/local/bin"
source /etc/restic/env
restic backup --tag daily /var/www
sudo -u postgres pg_dumpall | restic backup --tag daily --stdin --stdin-filename postgres.sql
```
`/etc/restic/my-home-stuff.sh`:
```bash
#!/bin/bash
export PATH="$PATH:/usr/local/bin"
source /etc/restic/env
restic backup --tag hourly /home/myuser/code
restic backup --tag daily /home/myuser/pics
```
(don't forget to `chmod +x` them! `/etc/restic` should also be readable only by root)
Then it's just a matter of running the stuff. The simplest way is to use ol' reliable cron/anacron:
`/etc/cron.weekly/backup-maintenance`:
```bash
#!/bin/bash
# send all of the messages to syslog, else cron will spam us
/etc/restic/maintenance.sh 2>&1 | logger -t backup-maintenance
```
`/etc/cron.daily/backup-daily`:
```bash
#!/bin/bash
/etc/restic/my-web-stuff.sh 2>&1 | logger -t backup-daily
```
`/etc/cron.hourly/backup-hourly`:
```bash
#!/bin/bash
/etc/restic/my-home-stuff.sh 2>&1 | logger -t backup-hourly
```
The same scripts can be used from systemd timers if you prefer those.
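For instance, a minimal timer/service pair for the hourly job could look like this (the unit names are my own invention; adjust `OnCalendar` to taste):

```ini
# /etc/systemd/system/backup-hourly.service
[Unit]
Description=Hourly restic backup

[Service]
Type=oneshot
ExecStart=/etc/restic/my-home-stuff.sh
```

```ini
# /etc/systemd/system/backup-hourly.timer
[Unit]
Description=Run backup-hourly.service every hour

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now backup-hourly.timer`.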
Checking the backup
A VERY simplistic way to check whether backups are being made, with a script:
```bash
#!/bin/bash
export PATH="$PATH:/usr/local/bin"
source /etc/restic/env
export LC_ALL=C
CURRENT_DATE=$(date "+%F")
YESTERDAY=$(date "+%F" -d yesterday)
if [ -z "$1" ]; then
    expected=1
else
    expected=$1
fi
backups_in_date=$(restic snapshots --latest 1 | grep -P "($CURRENT_DATE|$YESTERDAY)" | wc -l)
if [ "$backups_in_date" -eq 0 ]; then
    echo "no backup found!"
    exit 2
fi
if [ "$backups_in_date" -lt "$expected" ]; then
    echo "expected $expected backups, got $backups_in_date"
    restic snapshots --latest 1
    exit 1
else
    echo "all OK: $backups_in_date"
    exit 0
fi
```
By default the script will pass if there is ANY recent backup; you can specify the minimal number of unique backups (that is, ones not sharing path/host) to count, for example:
```
# ./check_backup.sh 3
expected 3 backups, got 1
repository b6f23fde opened (repository version 2) successfully, password is correct
ID        Time                 Host      Tags    Paths
------------------------------------------------------------
f0914d1b  2024-12-14 12:04:03  somehost  hourly  /root
------------------------------------------------------------
1 snapshots
```