TL;DR: the text below shows commands and prices for using Contabo storage with 3x RAID redundancy: $2 a month for 250GB of user data.
There's this thing called the cloud. If you have an account with a company which owns a bit of the cloud, you can run processes locally which create, amend and delete remote resources for which you make payment.
The granularity of payment differs from one supplier to another. Amazon Web Services (AWS), for instance, bills most instances by the second (with a one-minute minimum), which means you can create a virtual computer when you need one, run your process for a quarter of an hour, and then delete the instance; your monthly bill might show that later as $15. That's cheaper than owning one. Right, I checked... today you can rent a virtual 448-core machine with 6TB of memory and a 100Gbit network connection for $54.60 an hour, billed by the second. Or you can make a dozen smaller machines for the same price. So long as you destroy them afterwards, they're comparatively cheap.
And AWS storage for data backups is $5 a month per 100GB with 1GB granularity.
Contabo turns out to be cheaper for data backups: $2 a month per 250GB, with 250GB and 1-month granularity, up to an account limit of 240TB. I'm paying for one unit, and that one unit is where the data from all five of the machines I'm handling is backed up.
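Worked out per gigabyte, using the two advertised prices above:

```shell
# monthly cost per GB: AWS $5 per 100GB vs Contabo $2 per 250GB
awk 'BEGIN { printf "AWS $%.3f/GB  Contabo $%.3f/GB\n", 5/100, 2/250 }'
# prints: AWS $0.050/GB  Contabo $0.008/GB
```

So Contabo works out at roughly a sixth of the AWS price per gigabyte, before any granularity effects.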
I'm using "block storage", which is fine for a few large files. It's awful for storing a file system one file at a time - for that you need a file-system backup plan. AWS has one, for example; I'm not sure what Contabo offers in that regard.
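The backup script below talks to the storage through an rclone remote named eu2, using rclone's S3 backend (hence the --s3-no-head flag). A sketch of what ~/.config/rclone/rclone.conf might look like for such a remote - the endpoint, provider value, and placeholder keys here are illustrative assumptions, not copied from a real account:

```ini
# hypothetical rclone remote for S3-compatible storage, named "eu2"
[eu2]
type = s3
provider = Other
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = eu2.contabostorage.com
```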
So, on each machine I'm backing up, I have a script which gathers all the data into one place, merges it into a single backup file, compresses it, encrypts it, and copies it to my Contabo block storage account. As copies accumulate I'll work out an automated deletion process which keeps just a few long-term and a few recent ones; I've not done that yet.
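One possible shape for that deletion process - a sketch I haven't run against the real account, with a helper name and keep-count of my own invention; it only keeps the newest few, so the "few long-term copies" part would need an extra rule on top:

```shell
# print all but the KEEP newest backup names fed on stdin;
# the timestamp embedded in each name sorts lexicographically,
# so a plain sort puts them in age order (oldest first)
prune_candidates() {
    local keep=$1
    sort | head -n -"$keep"
}

# hypothetical usage against the remote, using rclone's lsf and deletefile:
#   rclone lsf eu2:private/$ARCHIVE | prune_candidates 4 |
#       while read -r f; do rclone deletefile "eu2:private/$ARCHIVE/$f"; done
```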
Code:
#!/bin/bash
# remote archive to contabo, requires:
#   ~/.config/rclone/rclone.conf  (contabo permission)
#   .my.cnf                       (mysql permission)
#   passphrase                    (gpg permission)
#   contabo-backup.sh             (this script)
# jh april 2020

if (( EUID != 0 )); then
    echo "Please run as root!"
    exit 1
fi

cd /root/contabo || exit 1

ARCHIVE=$(</etc/hostname)
BACKDIR=$ARCHIVE.Back.$(date "+%Y.%m.%d-%H.%M.%S")

echo "Contabo archive $BACKDIR started"
mkdir /tmp/$BACKDIR

# stop apache while the databases are dumped, so the dump is consistent
service apache2 stop
echo "apache down"
date +%T
mysqldump --all-databases >/tmp/$BACKDIR/$ARCHIVE.sql
service apache2 start
date +%T
echo "apache back"

# gather everything worth keeping into the staging directory
rsync -a /usr/local/bin /usr/local/sbin /var/www /etc /root /tmp/$BACKDIR

# pack and compress; quote the exclude so the shell doesn't expand the glob
tar -cJ --exclude "tmp/$BACKDIR/root/.gnupg/*" -f $BACKDIR.txz -C / tmp/$BACKDIR
rm -rf /tmp/$BACKDIR

# encrypt, upload, tidy up
gpg --cipher-algo aes256 --output $BACKDIR.txz.gpg --passphrase-file ./passphrase --batch --yes --symmetric $BACKDIR.txz
rm $BACKDIR.txz
rclone sync -P $BACKDIR.txz.gpg eu2:private/$ARCHIVE --s3-no-head
rm $BACKDIR.txz.gpg

date +%T
echo "Contabo archive $BACKDIR completed"
That runs from /etc/cron.weekly (note that run-parts, which executes that directory on Debian-family systems, skips filenames containing a dot, so the installed copy needs a dot-free name).
To get the data restored, I download the backup file to the local machine and:
Code:
gpg -o example.com.Back.2022.04.14-19.19.41.txz --passphrase-file ./passphrase --batch -d example.com.Back.2022.04.14-19.19.41.txz.gpg
unxz example.com.Back.2022.04.14-19.19.41.txz
tar xf example.com.Back.2022.04.14-19.19.41.tar
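The .txz naming works out because unxz replaces a recognised .txz suffix with .tar when it decompresses. A minimal local round trip (throwaway names, no gpg step) showing the same pack/unpack sequence:

```shell
# build a tiny .txz archive the same way the backup script does, then
# unpack it with the same unxz + tar sequence used in the restore
d=$(mktemp -d) && cd "$d"
mkdir demo && echo hello > demo/file
tar -cJ -f demo.txz demo   # xz-compressed tar
rm -r demo
unxz demo.txz              # yields demo.tar
tar xf demo.tar
cat demo/file              # prints: hello
```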
It took me a while to work all that out. This thread will be a helpful reminder after I've forgotten stuff.