Compare commits


2 Commits

| Author | SHA1 | Message | Date |
| ------------ | ---------- | ---------------------------------------- | -------------------------- |
| Asif Bacchus | 8680f91572 | update readme to reference wiki | 2020-05-24 04:48:21 -06:00 |
| Asif Bacchus | b6c05e4ccb | update logwatch readme to point to wiki | 2020-05-24 03:55:20 -06:00 |
2 changed files with 74 additions and 810 deletions

README.md

@@ -1,569 +1,120 @@
# Mailcow Backup Using borgbackup <!-- omit in toc -->

This script automates backing up your Mailcow installation using borgbackup and a remote ssh-capable storage system. I suggest using rsync.net since they have great speeds and a special pricing structure for borgbackup/attic users ([details here](https://www.rsync.net/products/attic.html)).

This script automates the following tasks:

- Optionally copies a 503 error page to your webserver so users know when your server is unavailable due to backups being performed. The 503 file is removed when the backup is completed so users can log in again
- Dumps the Mailcow mySQL database and adds it to the backup
- Handles stopping and re-starting mail-flow containers (postfix and dovecot) so everything is in a consistent state during the backup
- Allows you to specify additional files you want backed up
- Allows you to specify files/directories to exclude from your backups
- Runs 'borg prune' to make sure you are trimming old backups on your schedule
- Creates a clear, easy-to-parse log file so you can keep an eye on your backups and any errors/warnings

## Contents <!-- omit in toc -->
- [quick start](#quick-start)
- [configuration file](#configuration-file)
- [running the script](#running-the-script)
- [scheduling your backup via cron](#scheduling-your-backup-via-cron)
- [Final notes](#final-notes)
## quick start
Clone this repo or download a release file into a directory of your choosing. For all examples in this document, I will assume you will run the script from */scripts/backup*. Make sure the script file is executable and you protect the *.details* file since it contains things like your repo password:
```bash
# run commands as root
sudo -s

# find somewhere to clone the repo
cd /usr/local/src

# clone the repo from my server (best choice)
git clone https://git.asifbacchus.app/asif/MailcowBackup.git

# or clone from github
git clone https://github.com/asifbacchus/MailcowBackup.git

# make a home for your backup script
mkdir -p /scripts/backup
cd /scripts/backup

# copy files from cloned repo to this new home
cp /usr/local/src/MailcowBackup/backup/* ./

# make script executable and protect your .details file
chmod +x backup.sh
chmod 600 backup.details
```
## configuration file
You will need to let the script know how to access your remote repo along with any passwords/keyfiles needed to encrypt data. This is all handled via the plain-text 'configuration details' file. By default, this file is named *backup.details*. The file itself is fully commented so setting it up should not be difficult. If you need more information, consult [page 4.0](https://git.asifbacchus.app/asif/MailcowBackup/wiki/4.0-Configuration-details-file) in the wiki.
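
For orientation, the file is read line by line in a fixed order: borg base directory, SSH private key for the remote server, connection string to the remote repo, repo password, extra-files list, exclusion patterns, prune options and, lastly, the remote borg path. The first four lines are required; the others may be left blank. Here is a minimal sketch using placeholder values only (substitute your own paths, repo and password):

```ini
/var/borgbackup
/var/borgbackup/SSHprivate.key
myuser@usw-s001.rsync.net:MailcowBackup/
myPaSsWoRd
/scripts/backup/xtraLocations.borg
/scripts/backup/excludeLocations.borg
--keep-within=14d
borg1
```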
## running the script
After setting up the *.details* file correctly, and assuming you are running a default setup of Mailcow according to the documentation, you just have to run the script and it will find everything on its own. In particular, the defaults are set as follows:
- mailcow.conf is located at */opt/mailcow-dockerized/mailcow.conf*
- docker-compose file is located at */opt/mailcow-dockerized/docker-compose.yml*
- the log file will be saved in the same directory as the script with the same name as the script but with the extension *.log*
To get a list of all configuration options with defaults:
```bash
./backup.sh --help
```
To run with defaults:
```bash
./backup.sh
```
To run with a custom log file name and location:
```bash
./backup.sh --log /var/log/mailcow_backup.log
```
To copy a 503 error page to your webroot:
```bash
# assuming default NGINX webroot (/usr/share/nginx/html)
./backup.sh -5
# custom webroot
./backup.sh -5 -w /var/www/
```
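
Note the script only copies (and later deletes) the 503 page; your webserver must still check for that file and actually return the 503 error. A minimal NGINX sketch, assuming the default webroot above (consult the wiki for full details):

```nginx
server {
    ...
    # if the 503 page exists, a backup is running: return 503
    if (-f /usr/share/nginx/html/503.html) {
        return 503;
    }
    ...
    # serve the custom 503 page for any URL while the backup runs
    error_page 503 @backup;
    location @backup {
        root /usr/share/nginx/html;
        rewrite ^(.*)$ /503.html break;
    }
}
```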
Common usage: custom log file and copy 503 to custom webroot
```bash
./backup.sh -l /var/log/mailcow_backup.log -5 -w /var/www/
```
Non-default mailcow location (example: */var/mailcow*):
```bash
./backup.sh --docker-compose /var/mailcow/docker-compose.yml --mailcow-config /var/mailcow/mailcow.conf
```
For more configuration options, see [page 3.0](https://git.asifbacchus.app/asif/MailcowBackup/wiki/3.0-Script-parameters) in the wiki and [page 4.4](https://git.asifbacchus.app/asif/MailcowBackup/wiki/4.4-Configuration-examples) for some configuration examples. Consult section 7 of the wiki for information about the log file and how to integrate it with logwatch.
## scheduling your backup via cron
Edit your root user's crontab and add an entry like this which would run the script using defaults at 1:07am daily:
```ini
7 1 * * * /scripts/backup/backup.sh -l /var/log/mailcow_backup.log > /dev/null 2>&1
```
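
If you haven't edited a crontab before, the round trip looks like this:

```bash
# open root's crontab in your default editor
sudo crontab -e
# ...add the entry above, save and exit, then confirm it was recorded...
sudo crontab -l
```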
## Final notes
I think that's everything. For detailed information, please review the [wiki](https://git.asifbacchus.app/asif/MailcowBackup/wiki/_pages). If I've forgotten to document something there, please let me know. I know the wiki is long, but I hate how poorly documented so many Linux and open-source programs/scripts are, especially for newbies, and I didn't want to make that same mistake.
I don't script too often and I'm a horrible programmer, so if you see anything that can or should be improved, please let me know by filing an issue or submitting your changes via a pull request! I love learning new ways of doing things and getting feedback, so suggestions and comments are more than welcome.
If this has helped you out, then please visit my blog at [https://mytechiethoughts.com](https://mytechiethoughts.com) where I solve problems like this all the time on a shoe-string or zero budget. Thanks!


@@ -1,300 +1,13 @@
# Using Logwatch to monitor the backup script
## quick start
Simply copy the contents of this folder to your logwatch configuration directory (*/etc/logwatch/* by default). The directory structure is already correct for a default Debian/Ubuntu logwatch installation. You **must** update the paths in */etc/logwatch/conf/logfiles/backup.conf* to point to your script's log file, but that's the only required change. Please consult [page 7.1.5](https://git.asifbacchus.app/asif/MailcowBackup/wiki/7.1.5-Testing) in the wiki for information on how to test logwatch using this new configuration.
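
For example, if you run the backup script with *-l /var/log/backup.log*, the relevant entries in that log-group file would look like this (the *Archive* line only matters if you rotate and compress your logs):

```ini
LogFile = /var/log/backup.log
Archive = /var/log/backup.log.?.gz
```

Once everything is copied, you can test just this service from the command line; a quick sketch using standard logwatch options (adjust the range and detail level to taste):

```bash
logwatch --service backup --output stdout --format text --range today --detail 5
```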
## more information
Please consult [section 7.1](https://git.asifbacchus.app/asif/MailcowBackup/wiki/7.1-Using-logwatch) in the wiki for detailed information about each of the logwatch configuration files contained in this section of the git repo and how to customize them for your environment.
## final thoughts
I hope this helps you get your mailcow backup integrated with logwatch easily and quickly. If you have any suggestions/improvements, drop me a line in the issues section!