rsync

Last updated: 2024-04-14

sudo apt install rsync -y

Using rsync is remarkably simple. For example:

rsync /directory-to-be-backed-up /location-where-the-backup-is-stored

Or, put more simply:

rsync /source /destination

An example:

rsync -a /appdata /mnt/backup
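One subtlety worth knowing: a trailing slash on the source changes the result. Without it, rsync copies the directory itself into the destination; with it, only the directory's contents. A self-contained sketch with throwaway directories:

```shell
# Demonstrate rsync's trailing-slash rule with temporary directories.
src=$(mktemp -d); dst1=$(mktemp -d); dst2=$(mktemp -d)
mkdir "$src/data" && echo "x" > "$src/data/f.txt"

rsync -a "$src/data"  "$dst1"   # no slash: creates $dst1/data/f.txt
rsync -a "$src/data/" "$dst2"   # slash:    creates $dst2/f.txt directly
```

This is why `rsync -a /appdata /mnt/backup` above produces /mnt/backup/appdata rather than spilling the contents of /appdata directly into /mnt/backup.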

Additional options that can be used:

  • -a (archive mode; it combines the following options):
    • -g (preserve group)
    • -l (lowercase L: copy symbolic links as symlinks)
    • -o (preserve owner)
    • -p (preserve permissions)
    • -r (recurse into subdirectories)
    • -t (preserve modification times)
    • -D (preserve device and special files)

The -a option is therefore one that should never be missing. Further options that are important to me:

  • -v and -P (verbose and progress display; I use these only when starting rsync manually)
  • -n (ONLY simulates the command to be executed; a dry run)
  • --bwlimit (bandwidth limit in KiB/s, e.g. --bwlimit=50000). Important: without this option rsync can claim the entire available bandwidth!
  • --delete (Caution! Deletes files or directories on the destination if they were deleted in the source)
  • --rsync-path="sudo rsync" (an option that is often overlooked: it runs rsync with "sudo" on the remote system or network share!)

There are many more, but these are the options that are most important to me.

Example: rsync -avP /tank0/ds1 /tank2

Source:

https://wiki.ubuntuusers.de/rsync/

An example of using rsync with an SMB network share. I first create my backup directory, into which the remote share will later be mounted; then, in this example, I mount the remote share, run rsync, and finally disconnect the share again.

Setup on the target device (the device with the remote share directory) is assumed to be complete.

Now, on the host device that creates the backup:

sudo apt-get install smbclient

Check whether the target device (192.168.1.93) is reachable, and whether I have set up the SMB share there correctly:

smbclient -N -L //192.168.1.93/

Output:

pi@pi:~ $ smbclient -N -L //192.168.1.93/
Anonymous login successful

        Sharename      Type      Comment
        ---------      ----      -------
        zfsbackup      Disk
        IPC$           IPC       IPC Service (pi server)
SMB1 disabled -- no workgroup available
pi@pi:~ $

Another check, to see whether I have access with the specified Samba user:

smbclient -U sambauser -L //192.168.1.93/

If the password prompt appears, followed by the share listing, everything is fine so far.

Now I create my local mount directory:

sudo mkdir -p /mnt/backup

Now my mount command:

sudo mount -t cifs -o username=SAMBAUSER,password=PASSWORT //192.168.1.254/shared /mnt/backup

Note that it is not the share's filesystem path that is used here, but the share name as declared in the smb.conf on the remote system (shared). Example:

[shared]
comment = Samba-Pi-Freigabe
path = /mnt/sharedfolders
browseable = yes
read only = no

If the previous mount command does not work, here is an alternative:

sudo mount -t cifs //192.168.1.254/shared /mnt/backup -o vers=3.0,username=SAMBAUSER,password=PASSWORT

Running rsync, using my Heimdall directory as an example:

sudo rsync -a --rsync-path="sudo rsync" --bwlimit=50000 /appdata/heimdall /mnt/backup

Unmounting the mounted directory again:

sudo umount -l /mnt/backup
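Before running rsync against the mount point, it is worth verifying that the mount actually succeeded; otherwise rsync quietly backs up into the empty local directory. A hedged sketch (using the /mnt/backup path from above):

```shell
# Abort early if the share is not actually mounted at /mnt/backup.
if mountpoint -q /mnt/backup; then
    echo "share is mounted, safe to run rsync"
else
    echo "mount failed, skipping backup" >&2
fi
```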

Wrapped in a script:

sudo su
touch /home/backuplog.txt && echo '// Updates via "backupscript"' | tee -a /home/backuplog.txt
cd && mkdir -p scriptfiles
cd && cd scriptfiles && nano backupscript.sh

I enter the following (adjust the paths, username, and password to your own setup):

#!/bin/bash

## create the local mount directory
sudo mkdir -p /mnt/backup
#
## mount the remote share
sudo mount -t cifs -o username=SAMBAUSER,password=PASSWORT //192.168.1.254/shared /mnt/backup
#
## rsync
sudo rsync -a --rsync-path="sudo rsync" --bwlimit=50000 --delete /appdata/heimdall /mnt/backup
#
## further backup commands (I keep extending this section)

#
## unmount the remote share
sudo umount -l /mnt/backup
#
## log file entry (prefix with # if not needed)
echo "$(date +%y-%m-%d_%H:%M:%S)" | sudo tee -a /home/backuplog.txt
#

Ctrl-X, Y, Enter

sudo chmod 700 backupscript.sh

To run the script manually, enter the following command:

cd && cd scriptfiles && sudo ./backupscript.sh
exit

Now automate the script.

For example, on Openmediavault:

Under System → Scheduled Tasks, add a new task, choose the schedule, and insert the following command:

cd /root/scriptfiles && sudo ./backupscript.sh

For example, on Raspberry Pi OS using crontab:

sudo crontab -e

Insert the following at the end of the file:

0 2 * * * ~/scriptfiles/backupscript.sh >/dev/null 2>&1
#

Ctrl-X, Y, Enter

The backup script now runs daily at 02:00.
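For reference, the five time fields at the start of that crontab line mean:

```
0 2 * * *  ~/scriptfiles/backupscript.sh
│ │ │ │ └─ day of week (0-7; * = every day)
│ │ │ └─── month (1-12)
│ │ └───── day of month (1-31)
│ └─────── hour (0-23); 2 = 02:00
└───────── minute (0-59); 0 = on the hour
```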

This is what I use for my regular data backups.


Now for THE ultimate approach to script-based backups. It must of course be adapted to each individual use case!

#!/bin/bash
# #####################################
# Name:        rsync Incremental Backup
# Description: Creates incremental backups and deletes outdated versions
# Author:      Marc Gutt
# Version:     1.3
# Source: https://forums.unraid.net/topic/97958-rsync-incremental-backup/
##
# Modifications: Knilix
##
#########################################################
BACKUP_ANZAHL="7"   # applies only to the retained dumps
#########################################################
## enable maintenance mode
# this command applies only to /linuxserver/nextcloud
sudo docker exec --user abc nextcloud-new php /config/www/nextcloud/occ maintenance:mode --on
wait
## create the backup directory for the dump if it does not exist yet
sudo mkdir -p /mnt/user/backups/nc-new/dumpfile
#
### create the dump file (ALL databases):
docker exec -t mariadb-new /usr/bin/mysqldump --user=root --password=ROOTPASSWORT --lock-tables --all-databases | gzip > /mnt/user/backups/nc-new/dumpfile/dumpall_$(date +"%Y-%m-%d_%H_%M_%S").gz
wait
# remove dumps that are no longer needed
pushd /mnt/user/backups/nc-new/dumpfile &> /dev/null; ls -tr /mnt/user/backups/nc-new/dumpfile/dumpall* | head -n -${BACKUP_ANZAHL} | xargs rm -f; popd &> /dev/null
# #####################################
# Settings
# #####################################
# backup source to destination
backup_jobs=(
  # source                               # destination
  "/mnt/user/nextclouddata-new"          "/mnt/user/backups/nc-new/nextclouddata-new"
  "/mnt/cache/appdata/nextcloud-new"     "/mnt/user/backups/nc-new/nextcloud-new"
  "/mnt/cache/appdata/mariadb-new"       "/mnt/user/backups/nc-new/mariadb-new"
  "/mnt/cache/appdata/elasticsearch"     "/mnt/user/backups/nc-new/elasticsearch"
)

# keep backups of the last X days
keep_days=14

# keep multiple backups of one day for X days
keep_days_multiple=1

# keep backups of the last X months
keep_months=12

# keep backups of the last X years
keep_years=3

# keep the most recent X failed backups
keep_fails=3

# rsync options which are used while creating the full and incremental backup
rsync_options=(
#  --dry-run
  --archive # same as --recursive --links --perms --times --group --owner --devices --specials
  --human-readable # output numbers in a human-readable format
  --itemize-changes # output a change-summary for all updates
  --exclude="[Tt][Ee][Mm][Pp]/" # exclude dirs with the name "temp" or "Temp" or "TEMP"
  --exclude="[Tt][Mm][Pp]/" # exclude dirs with the name "tmp" or "Tmp" or "TMP"
# For a Nextcloud backup, the following exclude MUST be removed (or at least commented out with a leading #)
# --exclude="Cache/" # exclude dirs with the name "Cache"
)
# notify if the backup was successful (1 = notify)
notification_success=0

# notify if last backup is older than X days
notification_backup_older_days=30

# create destination if it does not exist
create_destination=1

# backup does not fail if files vanished during transfer https://linux.die.net/man/1/rsync#:~:text=vanished
skip_error_vanished_source_files=1

# backup does not fail if source path returns "host is down".
# This could happen if the source is a mounted SMB share, which is offline.
skip_error_host_is_down=1

# backup does not fail if file transfers return "host is down"
# This could happen if the source is a mounted SMB share, which went offline during transfer
skip_error_host_went_down=1

# backup does not fail, if source path does not exist, which for example happens if the source is an unmounted SMB share
skip_error_no_such_file_or_directory=1

# a backup fails if it contains less than X files
backup_must_contain_files=2

# a backup fails if more than X % of the files couldn't be transferred because of "Permission denied" errors
permission_error_treshold=20

# user-defined rsync command
#alias rsync='sshpass -p "<password>" rsync -e "ssh -o StrictHostKeyChecking=no"'

# user-defined ssh command
#alias ssh='sshpass -p "<password>" ssh -o "StrictHostKeyChecking no"'

# #####################################
# Script
# #####################################

# make script race condition safe
if [[ -d "/tmp/${0//\//_}" ]] || ! mkdir "/tmp/${0//\//_}"; then echo "Script is already running!" && exit 1; fi; trap 'rmdir "/tmp/${0//\//_}"' EXIT;

# allow usage of alias commands
shopt -s expand_aliases

# functions
remove_last_slash() { [[ "${1%?}" ]] && [[ "${1: -1}" == "/" ]] && echo "${1%?}" || echo "$1"; }
notify() {
  echo "$2"
  if [[ -f /usr/local/emhttp/webGui/scripts/notify ]]; then
    /usr/local/emhttp/webGui/scripts/notify -i "$([[ $2 == Error* ]] && echo alert || echo normal)" -s "$1 ($src_path)" -d "$2" -m "$2"
  fi
}

# check user settings
backup_path=$(remove_last_slash "$backup_path")
[[ "${rsync_options[*]}" == *"--dry-run"* ]] && dryrun=("--dry-run")

# check if rsync exists
! command -v rsync &> /dev/null && echo "rsync command not found!" && exit 1

# check if sshpass exists if it has been used
echo "$(type rsync) $(type ssh)" | grep -q "sshpass" && ! command -v sshpass &> /dev/null && echo "sshpass command not found!" && exit 1

# set empty dir
empty_dir="/tmp/${0//\//_}"

# loop through all backup jobs
for i in "${!backup_jobs[@]}"; do

  # get source path and skip to next element
  ! (( i % 2 )) && src_path="${backup_jobs[i]}" && continue

  # get destination path
  dst_path="${backup_jobs[i]}"

  # check user settings
  src_path=$(remove_last_slash "$src_path")
  dst_path=$(remove_last_slash "$dst_path")
 
  # get ssh login and remote path
  ssh_login=$(echo "$dst_path" | grep -oP "^.*(?=:)")
  remote_dst_path=$(echo "$dst_path" | grep -oP "(?<=:).*")
  if [[ ! "$remote_dst_path" ]]; then
    ssh_login=$(echo "$src_path" | grep -oP "^.*(?=:)")
  fi

  # create timestamp for this backup
  new_backup="$(date +%Y%m%d_%H%M%S)"

  # create log file
  log_file="$(mktemp)"
  exec &> >(tee "$log_file")

  # obtain last backup
  if last_backup=$(rsync --dry-run --recursive --itemize-changes --exclude="*/*/" --include="[0-9]*/" --exclude="*" "$dst_path/" "$empty_dir" 2>&1); then
    last_backup=$(echo "$last_backup" | grep -oP "[0-9_/]*" | sort -r | head -n1)
  # create destination path
  elif echo "$last_backup" | grep -q "No such file or directory" && [[ "$create_destination" == 1 ]]; then
    unset last_backup last_include
    if [[ "$remote_dst_path" ]]; then
      mkdir -p "$empty_dir$remote_dst_path" || exit 1
    else
      mkdir -p "$empty_dir$dst_path" || exit 1
    fi
    IFS="/" read -r -a includes <<< "${dst_path:1}"
    for j in "${!includes[@]}"; do
      includes[j]="--include=$last_include/${includes[j]}"
      last_include="${includes[j]##*=}"
    done
    rsync --itemize-changes --recursive "${includes[@]}" --exclude="*" "$empty_dir/" "/"
    find "$empty_dir" -mindepth 1 -type d -empty -delete
  else
    rsync_errors=$(grep -Pi "rsync:|fail|error:" "$log_file" | tail -n3)
    notify "Could not obtain last backup!" "Error: ${rsync_errors//[$'\r\n'=]/ } ($rsync_status)!"
    continue
  fi

  # create backup
  echo "# #####################################"
  # incremental backup
  if [[ "$last_backup" ]]; then
    echo "last_backup: '$last_backup'"
    # warn user if last backup is really old
    last_backup_days_old=$(( ($(date +%s) - $(date +%s -d "${last_backup:0:4}${last_backup:4:2}${last_backup:6:2}")) / 86400 ))
    if [[ $last_backup_days_old -gt $notification_backup_older_days ]]; then
      notify "Last backup is too old!" "Error: The last backup is $last_backup_days_old days old!"
    fi
    # rsync returned only the subdir name, but we need an absolute path
    last_backup="$dst_path/$last_backup"
    echo "Create incremental backup from $src_path to $dst_path/$new_backup by using last backup $last_backup"
    # remove ssh login if part of path
    last_backup="${last_backup/$(echo "$dst_path" | grep -oP "^.*:")/}"
    rsync "${rsync_options[@]}" --stats --delete --link-dest="$last_backup" "$src_path/" "$dst_path/.$new_backup"
  # full backup
  else
    echo "Create full backup from $src_path to $dst_path/$new_backup"
    rsync "${rsync_options[@]}" --stats "$src_path/" "$dst_path/.$new_backup"
  fi

  # check backup status
  rsync_status=$?
  # obtain file count of rsync
  file_count=$(grep "^Number of files" "$log_file" | cut -d " " -f4)
  file_count=${file_count//,/}
  [[ "$file_count" =~ ^[0-9]+$ ]] || file_count=0
  echo "File count of rsync is $file_count"
  # success
  if [[ "$rsync_status" == 0 ]]; then
    message="Success: Backup of $src_path was successfully created in $dst_path/$new_backup ($rsync_status)!"
  # source path is a mounted SMB server which is offline
  elif [[ "$rsync_status" == 23 ]] && [[ "$file_count" == 0 ]] && [[ $(grep -c "Host is down (112)" "$log_file") == 1 ]]; then
    message="Skip: Backup of $src_path has been skipped as host is down"
    [[ "$skip_error_host_is_down" != 1 ]] && message="Error: Host is down!"
  elif [[ "$rsync_status" == 23 ]] && [[ "$file_count" -gt 0 ]] && [[ $(grep -c "Host is down (112)" "$log_file") == 1 ]]; then
    message="Skip: Backup of $src_path has been skipped as host went down"
    [[ "$skip_error_host_went_down" != 1 ]] && message="Error: Host went down!"
  # source path is wrong (maybe unmounted SMB server)
  elif [[ "$rsync_status" == 23 ]] && [[ "$file_count" == 0 ]] && [[ $(grep -c "No such file or directory (2)" "$log_file") == 1 ]]; then
    message="Skip: Backup of $src_path has been skipped as source path does not exist"
    [[ "$skip_error_no_such_file_or_directory" != 1 ]] && message="Error: Source path does not exist!"
  # check if there were too many permission errors
  elif [[ "$rsync_status" == 23 ]] && grep -c "Permission denied (13)" "$log_file"; then
    message="Warning: Some files had permission problems"
    permission_errors=$(grep -c "Permission denied (13)" "$log_file")
    error_ratio=$((100 * permission_errors / file_count)) # note: integer result, not float!
    if [[ $error_ratio -gt $permission_error_treshold ]]; then
      message="Error: $permission_errors/$file_count files ($error_ratio%) return permission errors ($rsync_status)!"
    fi
  # some source files vanished
  elif [[ "$rsync_status" == 24 ]]; then
    message="Warning: Some files vanished"
    [[ "$skip_error_vanished_source_files" != 1 ]] && message="Error: Some files vanished while backup creation ($rsync_status)!"
  # all other errors are critical
  else
    rsync_errors=$(grep -Pi "rsync:|fail|error:" "$log_file" | tail -n3)
    message="Error: ${rsync_errors//[$'\r\n'=]/ } ($rsync_status)!"
  fi

  # backup remains or is deleted depending on status
  # delete skipped backup
  if [[ "$message" == "Skip"* ]]; then
    echo "Delete $dst_path/.$new_backup"
    rsync "${dryrun[@]}" --recursive --delete --include="/.$new_backup**" --exclude="*" "$empty_dir/" "$dst_path"
  # check if enough files have been transferred
  elif [[ "$message" != "Error"* ]] && [[ "$file_count" -lt "$backup_must_contain_files" ]]; then
    message="Error: rsync transferred less than $backup_must_contain_files files! ($message)!"
  # keep successful backup
  elif [[ "$message" != "Error"* ]]; then
    echo "Make backup visible ..."
    # remote backup
    if [[ "$remote_dst_path" ]]; then
      # check if "mv" command exists on remote server as it is faster
      if ssh -n "$ssh_login" "command -v mv &> /dev/null"; then
        echo "... through remote mv (fast)"
        [[ "${dryrun[*]}" ]] || ssh "$ssh_login" "mv \"$remote_dst_path/.$new_backup\" \"$remote_dst_path/$new_backup\""
      # use rsync (slower)
      else
        echo "... through rsync (slow)"
        # move all files from /.YYYYMMDD_HHIISS to /YYYYMMDD_HHIISS
        if ! rsync "${dryrun[@]}" --delete --recursive --backup --backup-dir="$remote_dst_path/$new_backup" "$empty_dir/" "$dst_path/.$new_backup"; then
          message="Error: Could not move content of $dst_path/.$new_backup to $dst_path/$new_backup!"
        # delete empty source dir
        elif ! rsync "${dryrun[@]}" --recursive --delete --include="/.$new_backup**" --exclude="*" "$empty_dir/" "$dst_path"; then
          message="Error: Could not delete empty dir $dst_path/.$new_backup!"
        fi
      fi
    # use local renaming command
    else
      echo "... through local mv"
      [[ "${dryrun[*]}" ]] || mv -v "$dst_path/.$new_backup" "$dst_path/$new_backup"
    fi
  fi

  # notification
  if [[ $message == "Error"* ]]; then
    notify "Backup failed!" "$message"
  elif [ "$notification_success" == 1 ]; then
    notify "Backup done." "$message"
  fi

  # loop through all backups and delete outdated backups
  echo "# #####################################"
  echo "Clean up outdated backups"
  unset day month year day_count month_count year_count
  while read -r backup_name; do

    # failed backups
    if [[ "${backup_name:0:1}" == "." ]] && ! [[ "$backup_name" =~ ^[.]+$ ]]; then
      if [[ "$keep_fails" -gt 0 ]]; then
        echo "Keep failed backup: $backup_name"
        keep_fails=$((keep_fails-1))
        continue
      fi
      echo "Delete failed backup: $backup_name"

    # successful backups
    else
      last_year=$year
      last_month=$month
      last_day=$day
      year=${backup_name:0:4}
      month=${backup_name:4:2}
      day=${backup_name:6:2}
      # all date parts must be integer
      if ! [[ "$year$month$day" =~ ^[0-9]+$ ]]; then
        echo "Error: $backup_name is not a backup!"
        continue
      fi
      # keep all backups of a day
      if [[ "$day_count" -le "$keep_days_multiple" ]] && [[ "$last_day" == "$day" ]] && [[ "$last_month" == "$month" ]] && [[ "$last_year" == "$year" ]]; then
        echo "Keep multiple backups per day: $backup_name"
        continue
      fi
      # keep daily backups
      if [[ "$keep_days" -gt "$day_count" ]] && [[ "$last_day" != "$day" ]]; then
        echo "Keep daily backup: $backup_name"
        day_count=$((day_count+1))
        continue
      fi
      # keep monthly backups
      if [[ "$keep_months" -gt "$month_count" ]] && [[ "$last_month" != "$month" ]]; then
        echo "Keep monthly backup: $backup_name"
        month_count=$((month_count+1))
        continue
      fi
      # keep yearly backups
      if [[ "$keep_years" -gt "$year_count" ]] && [[ "$last_year" != "$year" ]]; then
        echo "Keep yearly backup: $backup_name"
        year_count=$((year_count+1))
        continue
      fi
      # delete outdated backups
      echo "Delete outdated backup: $backup_name"
    fi

    # ssh
    if [[ "$remote_dst_path" ]]; then
      if ssh -n "$ssh_login" "command -v rm &> /dev/null"; then
        echo "... through remote rm (fast)"
        [[ "${dryrun[*]}" ]] || ssh "$ssh_login" "rm -r \"${remote_dst_path:?}/${backup_name:?}\""
      else
        echo "... through rsync (slow)"
        rsync "${dryrun[@]}" --recursive --delete --include="/$backup_name**" --exclude="*" "$empty_dir/" "$dst_path"
      fi
    # local (rm is 50% faster than rsync)
    else
      [[ "${dryrun[*]}" ]] || rm -r "${dst_path:?}/${backup_name:?}"
    fi

  done < <(rsync --dry-run --recursive --itemize-changes --exclude="*/*/" --include="[.0-9]*/" --exclude="*" "$dst_path/" "$empty_dir" | grep -oP "[.0-9_]*" | sort -r)

  # move log file to destination
  log_path=$(rsync --dry-run --itemize-changes --include=".$new_backup/" --include="$new_backup/" --exclude="*" --recursive "$dst_path/" "$empty_dir" | cut -d " " -f 2)
  [[ $log_path ]] && rsync "${dryrun[@]}" --remove-source-files "$log_file" "$dst_path/$log_path/$new_backup.log"
  [[ -f "$log_file" ]] && rm "$log_file"
done
## disable maintenance mode
sudo docker exec --user abc nextcloud-new php /config/www/nextcloud/occ maintenance:mode --off 