
Using Logwatch to monitor the Cloudflare DDNS updater script

The Cloudflare DDNS update script's log file has been set up so that utilities like Logwatch can easily parse it. For Logwatch to generate reports, a LogFile Group file, a service definition file, and a service script have to be created. The correct (general) directory structure already exists in this git repository. Below are the details of each file.


LogFile Group file (/etc/logwatch/conf/logfiles/cfddns.conf)

Log file location

This file is commented so you can update it as necessary for your environment (e.g. if you've changed the name of the log file generated by the script via the -l parameter).

LogFile = /path/to/your/cfddns.log
...

Update this as needed to point to the location and name of the log file generated by the updater script. Remember, by default, the log file is created in the same directory as the script itself. Best practice suggests using the -l flag to change this location to something like /var/log/cfddns.log, for example. In that case, the entry would look like:

LogFile = /var/log/cfddns.log
...

Archive location and name format

If you want Logwatch to process old (archived) log files generated by something like Logrotate, then you have to specify the location and file name format of those files. I've included the generalized compressed format of such rotated files as the default. If you store your log files in the recommended location (/var/log/) and are using Logrotate with compression enabled, the Archive line would look like:

...
Archive = /var/log/cfddns.log.?.gz
...

This would tell Logwatch, when the archive option is set to true, that your cfddns.log files are archived as: cfddns.log.1.gz, cfddns.log.2.gz, etc. and are all located in /var/log/.

Note: This line is entirely optional and is only used if you set the archive option in Logwatch to true. You can comment out or delete this line if you wish.
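
For reference, a minimal Logrotate configuration that would produce archives matching the pattern above might look like the sketch below. The drop-in path /etc/logrotate.d/cfddns and the rotation schedule are just examples; adjust them to taste.

```
# /etc/logrotate.d/cfddns (example path)
# Rotate weekly, keep four old logs, and gzip them so they appear as
# cfddns.log.1.gz, cfddns.log.2.gz, etc.
/var/log/cfddns.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
```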

External script for timestamp processing

Since the log file uses a non-standard (according to Logwatch) method of time-stamping, a custom filter had to be created. See the relevant section of this document for more information.

The shared script is referenced by prefixing its filename with an asterisk (*):

...
*sqFullStampAnywhere
...

If you change the name of that file, you will have to change this line. Remember that whatever name you type here is converted to all lowercase, so your filename should be all lowercase as well.
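
Putting those three pieces together, a complete LogFile Group file for the recommended log location would look something like this (only the log path should need adjusting for your environment):

```
# /etc/logwatch/conf/logfiles/cfddns.conf
# Which log file(s) this group covers
LogFile = /var/log/cfddns.log
# Optional: archived (rotated) copies, used when Logwatch's archive option is turned on
Archive = /var/log/cfddns.log.?.gz
# Shared script that filters entries by timestamp
*sqFullStampAnywhere
```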

Service definition file (/etc/logwatch/conf/services/cfddns.conf)

LogFile group definition

The service file needs to know which LogFile group it is responsible for processing. This MUST match the name of your LogFile Group file:

LogFile = cfddns
...

If you change your LogFile Group filename, then update it here too without the .conf extension.

Report title

The Logwatch output (HTML or text) is divided into sections. You can define this section's title to be anything that has meaning for you. I have arbitrarily chosen "CloudFlare DDNS update", but you can change it to anything you want by modifying the line:

...
Title = "CloudFlare DDNS update"

Service script (/etc/logwatch/scripts/services/cfddns)

Logwatch calls the script whose name matches the service name. You'll notice that I just named everything cfddns to keep things simple. You can change this to whatever you want, however; if you changed the service file name to "cloudflare.conf", for example, you would have to rename this script file to "cloudflare" with no extension. Note: the script is a Perl file.

In essence, Logwatch just feeds the log file(s) defined in the LogFile Group file to this script on standard input (STDIN), then takes whatever the script writes to standard output (STDOUT) and assembles that into its report. A rough sketch of this flow appears after the detail-level list below.

Detail levels

The script supports four (4) detail levels as follows:

  • Level 0: Summary output only
    • This will display an aggregate total of certain logged elements: the total number of hostnames (A and AAAA) that are already up to date, those that needed updating, those successfully updated, and the total number of errors (of any type) encountered by the script. All totals are relative to the reporting period Logwatch is using (--range parameter). This is the recommended reporting level; it does not take up much space and is quick to read. If the number of successful updates matches the number of needed updates and no errors are logged, then things are working properly. If you notice errors, you should consult the full logs.
  • Levels 1-4: Critical messages
    • These levels use the same data that Level 0 summarizes, but output the actual messages from the log file. For example, you will see the actual text of the errors logged instead of just a total count. This level of reporting is useful when initially monitoring the script's operation, since you can see the actual text of any generated errors.
  • Level 5: Verbose (debugging) output
    • Like the previous levels, this outputs the actual messages found in the log file. However, it also includes [INFO] tags, which contain logged messages such as the detected IP address, the specific names of any hostnames not found in your Cloudflare account, etc. This level of reporting is useful for diagnosing why errors are occurring or if you just want more insight into how the script works. It will make your Logwatch reports longer and consume more of your time to review, so you should not use this level day-to-day.
  • Levels 6+: Complete log file dump
    • Any number greater than 5 passed as a detail level will trigger the script to dump the entire log file to Logwatch line by line. This is useful only if you are debugging an issue and cannot access the raw log file itself; the raw file is colour-coded, which makes it much easier to read, so consult it directly whenever you can.
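
To make the STDIN/STDOUT flow and the detail-level handling concrete, here is a rough sketch of the general shape such a service script takes. This is not the actual script shipped in this repository; the counters and the [ERROR]/[INFO]-style match patterns are only illustrative. Logwatch passes the requested detail level to service scripts in the LOGWATCH_DETAIL_LEVEL environment variable and the log lines on STDIN.

```perl
#!/usr/bin/perl
# Illustrative sketch only -- not the cfddns service script itself.
use strict;
use warnings;

# Logwatch hands the requested detail level to service scripts
# through this environment variable.
my $Detail = $ENV{'LOGWATCH_DETAIL_LEVEL'} || 0;

my ($UpToDate, $Updated, $Errors) = (0, 0, 0);
my (@ErrorLines, @InfoLines);

while (defined(my $ThisLine = <STDIN>)) {
    chomp $ThisLine;

    # Hypothetical patterns; the real script matches the exact
    # message text written by the updater.
    if    ($ThisLine =~ /up.to.date/i)           { $UpToDate++; }
    elsif ($ThisLine =~ /successfully updated/i) { $Updated++; }
    elsif ($ThisLine =~ /\[ERROR\]/)             { $Errors++; push @ErrorLines, $ThisLine; }
    elsif ($ThisLine =~ /\[INFO\]/)              { push @InfoLines, $ThisLine; }

    # Detail 6+: dump every line straight back to Logwatch.
    print "$ThisLine\n" if $Detail > 5;
}

if ($Detail == 0) {
    # Summary totals only.
    print "Hostnames already up-to-date: $UpToDate\n";
    print "Hostnames updated:            $Updated\n";
    print "Errors encountered:           $Errors\n";
}
elsif ($Detail <= 5) {
    # Detail 1-4: show the actual error messages;
    # detail 5 also includes the [INFO] lines.
    print "$_\n" for @ErrorLines;
    if ($Detail >= 5) {
        print "$_\n" for @InfoLines;
    }
}

exit 0;
```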

Timestamp processing script (/etc/logwatch/scripts/shared/sqfullstampanywhere)

This is basically a modified version of the 'applyeurodate' script that comes with Logwatch. It had to be modified to search within [square brackets] and to accept characters coming before the stamp (i.e. ANSI colour codes). If you change the 'stamp' variable in the updater script to format the timestamp to your liking (which is totally fine!), then you'll probably have to update this file. There are two lines you need to modify to suit your new 'stamp' variable.

The time format specification

SearchDate is the variable used in the Perl script to do exactly what it says: search for the date stamp. I have it set up to look for the format 'year-month-day hour:minute:second'. Note that we don't care about brackets or anything else here; we're just defining the format of the date/time stamp.

...
$SearchDate = TimeFilter('%Y-%m-%d %H:%M:%S');
...

If you changed the 'stamp' variable so it was formatted as 'month/day/year hour:minute' (e.g. '[09/27/2018 18:38]'), then you'd update the $SearchDate variable as follows:

...
$SearchDate = TimeFilter('%m/%d/%Y %H:%M');
...

The search REGEX

The Perl script uses a 'regular expression' (REGEX) to search the log file for '$SearchDate'. For the default datestamp, it looks like:

...
if ($ThisLine =~ m/\[$SearchDate\] /o) {
...

The REGEX appears between 'm/' and '/o'. In this case, it searches for '$SearchDate' inside [square brackets] anywhere on the line, because ANSI colour codes often appear before the datestamp in the default log file. If you have modified your log format so that the datestamp appears at the beginning of the line and uses the example format from the section above (slashes instead of dashes), then you'd rewrite the REGEX as follows:

...
if ($ThisLine =~ m/^\[$SearchDate\] /o) {
...

or using regular brackets anywhere on the line:

...
if ($ThisLine =~ m/\($SearchDate\) /o) {
...

or without any brackets but appearing at the beginning of the line:

...
if ($ThisLine =~ m/^$SearchDate /o) {
...
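
For orientation, the overall shape of this shared filter script, modelled on the stock 'applyeurodate' that ships with Logwatch and assembled from the two default lines discussed above, is roughly the following (see the actual file in this repository for the working version):

```perl
#!/usr/bin/perl
# Rough sketch of the timestamp filter, modelled on Logwatch's
# stock 'applyeurodate' shared script.
use Logwatch ':dates';

# Build a regex fragment that matches timestamps inside the
# reporting range Logwatch was invoked with (--range).
my $SearchDate = TimeFilter('%Y-%m-%d %H:%M:%S');

# Pass through only the lines whose [timestamp] is in range;
# everything else is dropped before the service script sees it.
while (defined(my $ThisLine = <STDIN>)) {
    if ($ThisLine =~ m/\[$SearchDate\] /o) {
        print $ThisLine;
    }
}
```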

Testing

Run logwatch --help and note the options. You can test just this service locally on your screen with the following commands (assuming you kept the default names for everything):

# Summary output, entire duration of the log file
logwatch --service cfddns --output stdout --format text --range all --detail 0

# Minimal detail, yesterday only
logwatch --service cfddns --output stdout --format text --range yesterday --detail 1

# Verbose output, today only
logwatch --service cfddns --output stdout --format text --range today --detail 5

Final thoughts

That's it! I'm a horrible Perl programmer, so if anyone can optimize or improve the script file used by Logwatch, please do! Otherwise, I hope this made sense and helped you integrate the updater script with Logwatch for easy monitoring :-)