In part one of our Linux Logging Guide Overview, we discussed the basics of the Linux logging framework: the common Linux log files and their locations, the syslog protocol, and the rsyslog daemon to ingest message streams. We also covered common Linux commands to access and manipulate these log streams.
In part two, we’ll cover more advanced logging concepts, helping you manage logs on your systems and pinpoint problems faster. We’ll dive into:
- Configuring the rsyslog daemon
- Using the systemd and journald utilities to inspect the logs of services on your system
- Using logrotate to maintain the most relevant logs on your system without filling up your disks
The rsyslog Daemon Configuration
As we covered in part one, Linux uses a daemon called rsyslogd to process messages using the syslog protocol. This service evolved from the original syslog daemon into the current enterprise-level logging system. Let's inspect the contents of the default rsyslog file. On CentOS 8, for example, you can find this file at /etc/rsyslog.conf.
#input(type="imtcp" port="514")

###########################
#### GLOBAL DIRECTIVES ####
###########################
#
# Use traditional timestamp format.
# To enable high precision timestamps, comment out the following line.
#
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
#
# Set the default permissions for all log files.
#
$FileOwner root
$FileGroup adm
$FileCreateMode 0640
$DirCreateMode 0755
$Umask 0022
#
# Where to place spool and state files
#
$WorkDirectory /var/spool/rsyslog
#
# Include all config files in /etc/rsyslog.d/
#
$IncludeConfig /etc/rsyslog.d/*.conf
###############
#### RULES ####
###############
#
# First some standard log files. Log by facility.
#
auth,authpriv.* /var/log/auth.log
*.*;auth,authpriv.none -/var/log/syslog
#cron.* /var/log/cron.log
daemon.* -/var/log/daemon.log
kern.* -/var/log/kern.log
lpr.* -/var/log/lpr.log
mail.* -/var/log/mail.log
user.* -/var/log/user.log
#
# Logging for the mail system. Split it up so that
# it is easy to write scripts to parse these files.
#
mail.info -/var/log/mail.info
mail.warn -/var/log/mail.warn
mail.err /var/log/mail.err
#
# Some "catch-all" log files.
#
*.=debug;\
    auth,authpriv.none;\
    mail.none        -/var/log/debug
*.=info;*.=notice;*.=warn;\
    auth,authpriv.none;\
    cron,daemon.none;\
    mail.none        -/var/log/messages
#
# Emergencies are sent to everybody logged in.
#
*.emerg :omusrmsg:*
Let's look at some components of this file. The configuration uses the module directive to import a feature, then configures it with a matching input directive. You can chain many of these modules to configure rsyslog for your needs.
module(load="imtcp")
input(type="imtcp" port="514")
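For example, you could chain the UDP and TCP input modules so rsyslog accepts syslog messages over both protocols. This is an illustrative sketch; the port numbers shown are the conventional syslog defaults, not a requirement:

```
# Load and listen for UDP syslog traffic
module(load="imudp")
input(type="imudp" port="514")

# Load and listen for TCP syslog traffic
module(load="imtcp")
input(type="imtcp" port="514")
```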
Directives are provided for the rsyslog process to turn on global features or extend the configuration using other files on the system.
# File syncing capability is disabled by default. This feature is usually not required,
# not useful and an extreme performance hit
#$ActionFileEnableSync on
# Include all config files in /etc/rsyslog.d/
$IncludeConfig /etc/rsyslog.d/*.conf
Along with configuring the rsyslog service, you can control what actions to run on messages. Some actions include:
- Writing the messages to a provided file path
- Providing a username to print the message in a user's console
- Providing asterisks (*) to write the message to all users
- Using @host to send the message to a different host (to be processed by that remote host's syslog daemon)
# Write kernel logs to file
kern.* /var/log/kern
# Everybody gets emergency messages
*.emerg *
# Forward all messages to another host
*.* @@remote-host:514
Rsyslog can format messages with predefined templates, processing them into a required output format. It ships with a set of built-in templates (such as the traditional syslog file format referenced above) that match the default formats expected by other applications and log analysis tools.
Sometimes, you may need your own log format for a system. With the $ActionFileDefaultTemplate parameter, you can configure rsyslog to write messages in your custom log format.
# Create a log template format
$template myFormat,"%rawmsg%\n"
# Set the template as the default
$ActionFileDefaultTemplate myFormat
After defining the $ActionFileDefaultTemplate parameter in your rsyslog config file, rsyslog will format subsequent messages using the default template you set.
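As a more concrete illustration, the template below writes the timestamp, hostname, tag, and message body, then applies that template to a single rule. The template name simpleFormat is an arbitrary example; the property names between % signs are standard rsyslog properties:

```
# Define a custom output format using rsyslog message properties
$template simpleFormat,"%timegenerated% %HOSTNAME% %syslogtag%%msg%\n"

# Apply the template to a specific rule instead of globally
user.* /var/log/user.log;simpleFormat
```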
What is the systemd journal?
Systemd is a Linux system and service manager that can start other daemons on-demand based on user-defined configurations. The service allows administrators to configure a range of actions on a system, from low-level services like mounting disks and network locations to starting user-facing services like the GUI or a web server. You can configure the services' lifecycle behavior through the system's unit, target, or plain text configuration files located on the disk.
The systemd journal provides a centralized process and system logging tool for the services managed by systemd. Note that this is unlike the syslog files discussed earlier, as these service logs are not written to plain text files. Instead, the journald daemon maintains indexed, binary journal files itself, writing them to the /var/log/journal directory (or to /run/log/journal when storage is volatile). Using journald as the logging service gives system administrators a single toolchain for managing the service lifecycle and monitoring service logs. To configure the journald service, update its config file (typically at /etc/systemd/journald.conf) to control storage options, log retention, and the forwarding of logs to other services if needed.
What is journalctl?
The journalctl
utility allows you to access logging information stored by the journald service. You can use this tool to query logging information for specific applications or services.
# Query for messages from the cron service
journalctl -u cron.service
Additional querying of messages is also available by timeframe, the user that produced the message, or by message priority.
# Query for messages within a timeframe
journalctl --since 09:00 --until "1 hour ago"
# Check logs made by a specific user
journalctl _UID=100
# Show any error messages
journalctl -p err
When inspecting logs, it may be helpful to narrow down some of the information gathered from journalctl. You can do this by specifying the output message format of found messages.
journalctl -o short
journalctl -o verbose
journalctl -o json-pretty
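The JSON output formats are especially convenient for scripting. As a sketch of the idea, a short Python script can consume lines produced by journalctl -o json and pick out fields such as MESSAGE and _SYSTEMD_UNIT. The sample entry below is fabricated for illustration; real entries carry many more fields:

```python
import json

# In practice you would stream lines from, e.g.:
#   journalctl -o json -u cron.service
# Here we parse a fabricated sample entry for illustration.
sample_line = (
    '{"MESSAGE": "session opened for user root", '
    '"_SYSTEMD_UNIT": "cron.service", "PRIORITY": "6"}'
)

def summarize(journal_line: str) -> str:
    """Return 'unit: message (priority N)' for one JSON journal entry."""
    entry = json.loads(journal_line)
    return "{}: {} (priority {})".format(
        entry.get("_SYSTEMD_UNIT", "?"),
        entry.get("MESSAGE", ""),
        entry.get("PRIORITY", "?"),
    )

print(summarize(sample_line))
```

Because each line of `-o json` output is a self-contained JSON object, the same function works unchanged when reading journalctl output line by line from a pipe.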
journald Configuration
To configure the journald service, the /etc/systemd/journald.conf
file is available on a typical installation of systemd and journald. Below is the default config file that is found on a Centos installation:
[Journal]
#Storage=auto
#Compress=yes
#Seal=yes
#SplitMode=uid
#SyncIntervalSec=5m
#RateLimitIntervalSec=30s
#RateLimitBurst=10000
#SystemMaxUse=
#SystemKeepFree=
#SystemMaxFileSize=
#SystemMaxFiles=100
#RuntimeMaxUse=
#RuntimeKeepFree=
#RuntimeMaxFileSize=
#RuntimeMaxFiles=100
#MaxRetentionSec=
#MaxFileSec=1month
#ForwardToSyslog=no
#ForwardToKMsg=no
#ForwardToConsole=no
#ForwardToWall=yes
#TTYPath=/dev/console
#MaxLevelStore=debug
#MaxLevelSyslog=debug
#MaxLevelKMsg=notice
#MaxLevelConsole=info
#MaxLevelWall=emerg
#LineMax=48K
As you can see, all the config lines use default values and are commented out. Generally, these values will be sufficient to manage the journals on your machine sensibly. If you have more specific requirements for the journald process, some of the key parameters and their available values are listed as follows:
- Storage: sets which storage backend to use for the journal
  - volatile: stores data in memory, which is lost on restart.
  - persistent: stores journal files on disk in the /var/log/journal directory.
  - none: no messages are stored.
  - auto: the default configuration, which attempts to use persistent storage and falls back to volatile if the disk is not writable.
- Compress: if true, this will compress data objects written to the journal. By default, this occurs when the size is larger than 512 bytes. This parameter can also be set to a number directly, defining the threshold size of the data before compression happens.
- SplitMode: with this parameter, you can dictate how journald splits the journal files it writes. You can configure it as none to avoid splitting journal files and keep all logs in a single stream, or as uid to create individual journals per user. Note that uid is the default configuration and requires persistent storage.
- RateLimitIntervalSec and RateLimitBurst: these configure how many messages the service will accept before dropping further messages. Once the configured interval is over, further service messages will be processed again.
Note that any changes to the /etc/systemd/journald.conf file require restarting the journald service for the changes to take effect.
$ sudo systemctl restart systemd-journald
Rotating Log Files
As you track messages on a Linux system, the log files grow over time. If left uncontrolled and unmonitored, these growing files can exhaust your storage space and make logs harder to search. You can avoid this issue by setting up logrotate, which automates the creation of new log files and the archival or removal of old ones.
Based on your log retention configuration, log rotation can occur when a log file reaches a specific size or when a certain duration has passed since the last rotation. Log rotation also removes the oldest files from the system, keeping the most recent messages available. On a typical Linux installation, the logrotate configuration file is located at /etc/logrotate.conf.
# see "man logrotate" for details
# global options do not affect preceding include directives
# rotate log files weekly
weekly
# keep 4 weeks worth of backlogs
rotate 4
# create new (empty) log files after rotating old ones
create
# use date as a suffix of the rotated file
#dateext
# uncomment this if you want your log files compressed
#compress
# packages drop log rotation information into this directory
include /etc/logrotate.d
# system-specific logs may also be configured here.
Parameters to configure include the rotation frequency of the logs with the daily, weekly, or monthly directives. You can also control how many rotated files to keep on disk with the rotate parameter, which takes a number.
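To make the rotate-and-prune behavior concrete, here is a minimal Python sketch of what a directive like rotate 4 (combined with create) does conceptually. This is an illustration of the idea only, not how logrotate is actually implemented:

```python
import os

def rotate(path: str, keep: int) -> None:
    """Shift path -> path.1, path.1 -> path.2, ..., dropping files past `keep`."""
    # Delete the oldest rotated file if it exists
    oldest = "{}.{}".format(path, keep)
    if os.path.exists(oldest):
        os.remove(oldest)
    # Shift the remaining rotated files up by one, oldest first
    for i in range(keep - 1, 0, -1):
        src = "{}.{}".format(path, i)
        if os.path.exists(src):
            os.rename(src, "{}.{}".format(path, i + 1))
    # Rotate the live log and recreate an empty one (the `create` directive)
    if os.path.exists(path):
        os.rename(path, path + ".1")
        open(path, "w").close()
```

Each call renames the live log to a numbered backup and discards the file that falls past the retention count, which is exactly the disk-space guarantee the rotate directive provides.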
You can also cap log file size. The size parameter rotates a log once it exceeds a given size, ignoring the time-based schedule, while the related maxsize parameter rotates when either the size threshold or the rotation interval is reached, whichever comes first.
# Rotate the log when it reaches the following size
size 10M
Other packages and services on your system that produce logs may configure rotation for themselves by dropping their own logrotate files into place. The include directive tells logrotate to read these additional config files, extending the configuration used.
# RPM packages drop log rotation information into this directory
include /etc/logrotate.d
If you need to interact with logrotate manually or through automation, you can run the logrotate command yourself. It is normally invoked automatically through cron, but you can call it at any time, passing the path of the configuration file to use as an argument. Adding the -f option forces a rotation even when logrotate would not otherwise consider one necessary.
logrotate -f /etc/logrotate.conf
To verify what the command will do without making any changes to files, you can perform a dry run of the command using the -d
flag.
logrotate -d /etc/logrotate.conf
Log your data with CrowdStrike Falcon Next-Gen SIEM
Elevate your cybersecurity with the CrowdStrike Falcon® platform, the premier AI-native platform for SIEM and log management. Experience security logging at a petabyte scale, choosing between cloud-native or self-hosted deployment options. Log your data with a powerful, index-free architecture, without bottlenecks, allowing threat hunting with over 1 PB of data ingestion per day. Ensure real-time search capabilities to outpace adversaries, achieving sub-second latency for complex queries. Benefit from 360-degree visibility, consolidating data to break down silos and enabling security, IT, and DevOps teams to hunt threats, monitor performance, and ensure compliance seamlessly across 3 billion events in less than 1 second.