
Using Logwatch to monitor Cloudflare DDNS updater script

The Cloudflare DDNS update script's log file has been set up so that utilities like Logwatch can easily parse it. To make that happen, a LogFile Group file, a Service definition file and a Service script have to be created so Logwatch can generate reports. The correct (general) directory structure has been created in this git archive already. Below are the details of each file.

You can implement this setup easily by copying it into your /etc/logwatch directory and then modifying the files as necessary:

cd /etc/logwatch
cp -R /path/to/CloudflareDDNS_repo/etc/logwatch/* ./

If you need help getting Logwatch installed and set up, please check out my blog post.

Contents

  • LogFile Group file
  • Service definition file
  • Service script
  • Timestamp processing script
  • Testing
  • Final thoughts

LogFile Group file

This file is located within the repo at /etc/logwatch/conf/logfiles/cfddns.conf

Log file location

Update this as needed to point to the location and name of the log file generated by the updater script. Remember, by default, the log file is created in the same directory as the script itself.

LogFile = /path/to/your/cfddns.log
...

Best practice is to use the --log flag to change this location to something like /var/log/cfddns.log. In that case, the entry would look like:

LogFile = /var/log/cfddns.log
...

Archive location and name format

If you want Logwatch to process old (archived) log files generated by something like Logrotate, then you have to specify the location and file-name format of those files. I've included the generalized compressed format of such rotated files as the default in this file. If you store your log files in the recommended location (/var/log/) and are using Logrotate with compression enabled, the Archive line would look like:

...
Archive = /var/log/cfddns.log.?.gz
...

This would tell Logwatch, when the archive option is set to true, that your cfddns.log files are archived as: cfddns.log.1.gz, cfddns.log.2.gz, etc. and are all located in /var/log/.

Note: This line is totally optional and only used if you set the archive option in Logwatch to true. You can comment/delete this line if you wish.
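
If you are not already rotating this log, a minimal Logrotate sketch that would produce archives matching the pattern above might look like the following (the file location /etc/logrotate.d/cfddns and the weekly schedule are just assumptions; adjust them to suit your system):

# /etc/logrotate.d/cfddns (hypothetical location)
/var/log/cfddns.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}

With compress enabled and rotate 4, you end up with cfddns.log.1.gz through cfddns.log.4.gz in /var/log/, which is exactly what the Archive pattern above matches.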

External script for timestamp processing

Since the log file uses a non-standard (according to Logwatch) method of date-stamping, a custom filter had to be created. See the relevant section of this document for more information.

The script is referenced by placing an asterisk (*) before its filename.

...
*sqFullStampAnywhere
...

If you change the name of this file, you will have to change this line. Remember that whatever name you type here is converted to all lowercase, so your filename should be all lowercase too.
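
Putting the pieces together, a complete LogFile Group file using the recommended log location and the default shared-script name would look something like this (a sketch only; your paths may differ):

LogFile = /var/log/cfddns.log
Archive = /var/log/cfddns.log.?.gz
*sqFullStampAnywhere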

Service definition file

This file is located within the repo at /etc/logwatch/conf/services/cfddns.conf

LogFile Group file definition

The service file needs to know what group of log files it is responsible for processing. This MUST match the name of your LogFile Group file:

LogFile = cfddns
...

If you change your LogFile Group filename, then update it here also without the .conf extension.
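
For example, if you renamed the LogFile Group file to, say, cloudflare.conf, this line would simply become:

LogFile = cloudflare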

Report title

The Logwatch output file (html or text) is divided into sections. You can define the title to be anything that has meaning for you. I have arbitrarily chosen "CloudFlare DDNS update" but you can change it to anything you want by modifying the line:

...
Title = "CloudFlare DDNS update"

Detail level

If you want to set the detail level of this service differently from your other services (which will use the --detail switch value or the value in your logwatch.conf), then you can define that level here. By default, it appears like this in the service configuration file:

...
# Override the detail level for this service
# Remember the levels are: 0, 1-4, 5, 6+
# Detail = 0

Simply uncomment the Detail line and set it to the value you want enforced. For example, here I'm setting it to output level 5 regardless of whatever settings everything else is using.

# Override the detail level for this service
# Remember the levels are: 0, 1-4, 5, 6+
Detail = 5
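
For reference, the whole service definition with all three settings filled in (default names, forced detail level of 5) would read something like:

LogFile = cfddns
Title = "CloudFlare DDNS update"
# Override the detail level for this service
# Remember the levels are: 0, 1-4, 5, 6+
Detail = 5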

Service script

This file is located within the repo at /etc/logwatch/scripts/services/cfddns

Logwatch calls whichever script has a name matching the service name. You'll notice that I just named everything cfddns to keep things simple. You can change this to whatever you want. If you changed the service name to "cloudflare.conf", for example, you would have to rename this script file to "cloudflare" with no extension. Note: the script is a Perl file (see the shebang), but it can be written in any language.

In essence, Logwatch just feeds the log file(s) defined in the LogFile Group file to the script on standard input (STDIN) and then assembles whatever the script writes to standard output (STDOUT) into its report.
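
To make that flow concrete, here is a stripped-down sketch of a generic Logwatch service filter. It is not the actual cfddns script, and the 'ERROR' pattern is just a placeholder, but it shows the STDIN/STDOUT contract and how the requested detail level arrives via the LOGWATCH_DETAIL_LEVEL environment variable:

#!/usr/bin/perl
use strict;
use warnings;

# Logwatch passes the requested detail level in the environment
my $Detail = $ENV{'LOGWATCH_DETAIL_LEVEL'} || 0;
my $Errors = 0;

# Each line of the log file(s) named in the LogFile Group file arrives on STDIN
while (defined(my $ThisLine = <STDIN>)) {
    if ($ThisLine =~ /ERROR/) {    # placeholder pattern, not the real log tags
        $Errors++;
        print $ThisLine if $Detail > 0;    # echo raw messages at higher detail
    }
}

# Anything printed to STDOUT is assembled into the Logwatch report
print "Total errors: $Errors\n" if $Errors;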

Detail levels

The script supports four (4) detail levels as follows:

  • Level 0: Summary output only

    • This will display a simple aggregate of status message categories over the reporting period:

      • Entries successfully updated
      • Entries already up-to-date
      • Hosts failed to update
      • Undefined hosts (i.e. requested to update but record doesn't exist)
      • Total warning messages
      • Total errors
    • This is the recommended reporting level. It does not take up much space and is quick to read. If the successful-update and/or already-up-to-date counts match what you expect and no errors are logged, then you can assume things are working properly. If the numbers aren't right or you see errors/warnings, then you can investigate by consulting the actual logs or increasing the detail level in Logwatch.

    • For example: Let's suppose you are running an update every 15 minutes. Doing the math...

      (entries successfully updated) + (entries already up-to-date) = (24 hrs x 60 min) / 15 min = 96

      Therefore, you expect Entries successfully updated and Entries already up-to-date to total 96. If that's the case and no errors or warnings are logged, things are ok. Pretty easy, right? That's why this is the recommended filter setting.
      
  • Levels 1-4: Critical messages

    • This uses the data which is summarized by Level 0 but outputs the actual messages in the log file. For example, you will see the actual text of the errors logged instead of just a total number of errors. This level of reporting is useful when initially monitoring the script's operation since you can see the text of any generated errors.
    • Levels 1, 2, 3 & 4 are identical so pick your favourite number.
  • Level 5: Verbose output

    • Like the previous level, this outputs the actual messages found in the log file. However, it also includes CF-ERR tags and tally count messages. This can help you pinpoint why the Cloudflare API is rejecting your requests by letting you see things like authentication errors or malformed addresses, etc.
    • Honestly, this is not much more information than L1-L4 and is often a better choice while debugging any issues since you get the Cloudflare API messages.
    • This level of output is much more verbose than the summary report. It also takes much more time and patience to review, so it is only recommended when you're dealing with issues.
    • This is not recommended for day-to-day or routine reports.
  • Levels 6+: Complete log file dump

    • Any number greater than 5 passed as a detail level will trigger the script to dump the entire log file out to Logwatch line-by-line. This is really only useful during debugging or dealing with serious issues where you do not have access to the actual log file. While this is an exact echo of the log file, it likely will not be colour-coded which makes it harder to review.
    • Use this detail level only when you need to see the entire log file and cannot otherwise access the log file.
    • Depending on how your logwatch treats this log dump, you may see gibberish control codes like \e[0m;]. If this is the case, run the script with the --no-colour or --nc option to remove ANSI colour formatting.

Timestamp processing script

This file is located within the repo at /etc/logwatch/scripts/shared/sqfullstampanywhere

This is basically a modified version of the 'applyeurodate' script that comes with Logwatch. It had to be modified to search within [square brackets] and to accept characters coming before the stamp (i.e. ANSI colour codes). If you change the 'stamp' variable in the updater script to format the timestamp to your liking (which is totally fine!) then you'll probably have to update this file. There are two lines you need to modify to suit your new 'stamp' variable.

This entire section is only applicable if you are a very curious person or if you change the hard-coded stamp function in the script. If you did not make any changes and you like a little mystery in your life, you can safely skip this entire section.

The time format specification

'$SearchDate' is the variable used in the Perl script to do exactly what it says: search for the date stamp. I have it set up to look for the format 'year-month-date hour:minute:second'. Note, we don't care about brackets or anything else here; we're just defining the format of the date/time stamp.

...
$SearchDate = TimeFilter('%Y-%m-%d %H:%M:%S');
...

If you changed the 'stamp' variable so it was formatted as 'month/day/year hour:minute' (ex: '[09/27/2018 18:38]') then you'd update the $SearchDate variable as follows (note: no mention of the square brackets!):

...
$SearchDate = TimeFilter('%m/%d/%Y %H:%M');
...

The search REGEX

The Perl script uses a 'regular expression' (REGEX) to search within the log file for '$SearchDate'. For the default date stamp, this specification looks like:

...
if ($ThisLine =~ m/\[$SearchDate\] /o) {
...

The REGEX appears between 'm/' and '/o'. In this case, it searches for '$SearchDate' inside [square brackets] appearing anywhere on the line. This is because ANSI colour-codes often appear before the date stamp in the default log file. If you have modified this so that your date stamp appears at the beginning of the line and in the example format in the section above (using slashes instead of dashes) then you'd rewrite this REGEX as follows:

...
if ($ThisLine =~ m/^\[$SearchDate\] /o) {
...

or using regular brackets anywhere on the line:

...
if ($ThisLine =~ m/\($SearchDate\) /o) {
...

or without any brackets but appearing at the beginning of the line:

...
if ($ThisLine =~ m/^$SearchDate /o) {
...
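
Tying the two pieces together, the whole shared script is not much more than the following sketch (assuming Logwatch's bundled Logwatch Perl module, which provides TimeFilter; the real file in the repo differs slightly):

#!/usr/bin/perl
use Logwatch ':dates';

# Build a pattern that matches only timestamps inside the reporting range
$SearchDate = TimeFilter('%Y-%m-%d %H:%M:%S');

# Pass through only the lines whose [date stamp] falls within that range
while (defined($ThisLine = <STDIN>)) {
    if ($ThisLine =~ m/\[$SearchDate\] /o) {
        print $ThisLine;
    }
}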

Testing

Run logwatch --help and note the options. You can test just this service locally on your screen with commands like the following (assuming you kept default names for everything):

# Summary output, entire duration of log file
logwatch --service cfddns --output stdout --format text --range all --detail 0

# Minimal detail, yesterday only
logwatch --service cfddns --output stdout --format text --range yesterday --detail 1

# Verbose output, today only
logwatch --service cfddns --output stdout --format text --range today --detail 5

Final thoughts

That's it! I'm a horrible Perl programmer, so if anyone can optimize/improve the script file used for Logwatch, please do! Otherwise, I hope this made sense and helped you integrate the updater script with Logwatch for easy monitoring :-)