In previous sections of this guide, we examined logging architectures, node-level logging, cluster-level logging, and sidecar patterns. We also touched upon the benefits of centralized logging. In this section, we’ll expand on centralized logging to look at backend systems and how to use them.

Centralized logging gives you access to historical log data and makes searching through the logs generated by different nodes or pods in a cluster far more manageable. Without a centralized logging system, a pod's logs can be lost when the pod terminates. In part three of this logging guide, we'll cover the numerous advantages of centralizing logs.

Understanding Logging in a Kubernetes Cluster

Given the complexity of Kubernetes, knowing where to look in your data for analytics or debugging purposes can be challenging. Capturing pod and system logs is critical for containerized workloads in Kubernetes. You can also collect various other log data types in Kubernetes, such as audit or ingress logs.

In a Kubernetes cluster, applications run as containers. A containerized application writes its output to the stdout and stderr streams, and the container runtime captures these streams and redirects them to log files on the node, usually in JSON format. You can fetch these container logs by running this command:

$ kubectl logs podname
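
kubectl logs also accepts flags for common variations of this task; the pod, container, and namespace names below are placeholders:

$ kubectl logs podname -c containername   # logs from a specific container in the pod
$ kubectl logs podname --previous         # logs from the previous, terminated instance
$ kubectl logs podname -f                 # stream new log lines as they arrive
$ kubectl logs podname -n mynamespace     # logs from a pod in another namespace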

Control plane components, such as etcd, kube-apiserver, and kube-scheduler, generate logs at the Kubernetes cluster level:

  • etcd stores information about the desired and actual state of the system. It also exposes metrics, such as disk write performance and gRPC request statistics.
  • kube-apiserver logs provide information about the API server responsible for serving the API requests. These logs can be found at /var/log/kube-apiserver.log.
  • kube-scheduler logs contain information about the cluster’s components responsible for scheduling decisions. These logs can be found at /var/log/kube-scheduler.log.

Combined, these various sources of data provide insight into how Kubernetes performs as a system.
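
On clusters where the control plane runs as static pods (as it does with kubeadm, for example), you can also read these logs through kubectl. The exact pod names vary from cluster to cluster:

$ kubectl get pods -n kube-system                           # list the control plane pods on your cluster
$ kubectl logs -n kube-system kube-apiserver-controlplane   # pod name varies by node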

Another log-generating component is kubelet, which runs at the node level. On nodes that use systemd, the kubelet and container runtime write to journald. To access these logs, you can run the journalctl command.
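
For example, running the following on a systemd-based node retrieves the kubelet's log entries:

$ journalctl -u kubelet                       # all kubelet log entries
$ journalctl -u kubelet --since "1 hour ago"  # only entries from the last hour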

The network proxy, kube-proxy, runs on each node and writes logs to the /var/log directory. In addition to these logs, we can collect Kubernetes events and audit logs. If you're experiencing a cluster-level issue, these logs are a good starting point for troubleshooting.
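
Kubernetes events, for instance, are available directly from the API server:

$ kubectl get events --sort-by=.metadata.creationTimestamp   # recent cluster events, oldest first
$ kubectl get events -n mynamespace                          # events scoped to one namespace (placeholder name)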

Logging Agents

Using a node-level logging agent is common because it lets you configure logging parameters. For example, you can configure the shipping frequency, log levels, and log format for the logging agent.

Logging frequency controls how often the logging agent collects and ships logs to the centralized logging backend. For example, you can configure logs to ship once every hour; the agent collects and buffers logs locally, then sends them to the logging backend each hour.

You can configure log levels so the agent selectively ships only events at or above a defined severity. For example, you can define a configuration that captures all logs of INFO severity or higher.
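
As a rough illustration, here is a minimal sketch of these parameters in a Fluent Bit agent configuration. Fluent Bit is just one example of a node logging agent, and the file path, tag, and backend host below are assumptions:

[SERVICE]
    Flush        3600                        # buffer locally, ship once per hour
    Log_Level    info                        # verbosity of the agent's own logs

[INPUT]
    Name         tail
    Path         /var/log/containers/*.log   # typical container log path on a node
    Tag          kube.*

[FILTER]
    Name         grep                        # keep only INFO-or-higher records
    Match        kube.*
    Regex        log (INFO|WARN|ERROR|FATAL)

[OUTPUT]
    Name         es                          # e.g., an Elasticsearch-compatible backend (assumed)
    Match        *
    Host         logging-backend.example.com
    Port         9200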

A centralized logging system is essential for viewing all these logs in one location. Before we dive into what to look for in a log management backend, consider the following differences between non-centralized and centralized logging.

Challenges of Non-centralized Logging

Various challenges arise when you don't use centralized logging. As the number of logs grows, so does the complexity of identifying and interpreting problems in your system. For example, if your database is running slowly, you can analyze the database's slow query logs; however, this shouldn't be the only place you look. Most systems are multitiered, with frontend, middleware, and database components, and you'll need to examine the logs generated by one or more of these tiers. You might also need to examine logs from other sources, such as the underlying server, network, or storage subsystems.

Kubernetes is highly dynamic and distributed in nature, so you'll work with multiple machines, each running many containers that can terminate at any time. A container's logs are lost once it's evicted from its node.

All of this makes a centralized logging backend critical. Centralized logging preserves the logs of these short-lived containers and keeps them searchable.

Benefits of Centralized Logging

The key benefits of centralizing logs include:

  • Handling different log formats
  • Efficient central storage
  • Improved querying
  • Correlating events

Within Kubernetes, an application runs as one or more pods, and finding a particular log statement or error message becomes difficult as your cluster grows. For example, if your Kubernetes cluster runs multiple replicas of a pod, and any of them could have served a particular request, searching for the relevant logs manually is tedious and error-prone.
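
You can see the problem even with kubectl alone: to find which replica handled a request, you'd have to pull the logs from every pod behind the service and filter them yourself (the label and search string here are placeholders):

$ kubectl logs -l app=myapp --prefix | grep "order-12345"   # fetch logs from all matching pods, tagged by pod name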

Centralized logging helps you avoid losing logs when a container is short-lived or a pod terminates abruptly. With centralized logging, pod log data persists beyond the lifecycle of a container or pod. In addition to helping with troubleshooting, a centralized logging mechanism is extremely useful for identifying performance bottlenecks.

Looking at logs from different pods or nodes separately can also make it difficult to understand patterns or derive overall system metrics. With a centralized logging system, you can obtain an aggregated view of your system, deriving key metrics about the system’s health and performance. This approach to log analysis makes it easier to quickly catch security issues.

Aggregated logs are also required for security audits and for fulfilling compliance requirements.

Learn More

Read our Cybersecurity 101 post on centralized logging to learn more about how it works, its benefits, and why it's necessary.

Read: Centralized Logging

Centralized Logging Backends

To make the extensive log data generated by your Kubernetes applications actionable, you must collect and aggregate it in a well-structured format. Modern log management tools are designed to retrieve, correlate, and display Kubernetes log data in an interactive interface for analysis and troubleshooting. Let's look at the key features you'd need from a centralized logging system for Kubernetes.

Easy Log Searching and Querying

Centralized logging solutions should offer search and discovery tools for locating the logs and events related to your applications and services, and several platforms support logging query languages to make this easier. Kubernetes logs can differ in source, log level, logging handler, and format, and a robust query language makes it much easier to search across them.
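
For example, a Lucene-style query (the exact syntax varies by platform, and these field names are illustrative) can narrow millions of log lines down to the errors from a single namespace:

kubernetes.namespace:"payments" AND level:ERROR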

Scalable Data Ingestion

Centralized logging systems for Kubernetes should comfortably meet your data ingestion requirements as the system scales up or down. They should also support streaming data ingestion, giving you immediate system-wide awareness so you can handle issues as they emerge.

Complex Data Correlation

Aggregating data in a centralized logging backend allows you to correlate logs and derive useful metrics. Centralized log management unlocks the capability of collecting, storing, and analyzing log data from several sources in a unified platform. When a Kubernetes cluster runs a workload, the workload might interact with internal or external resources, which can also change the system's state. Therefore, the logging backend should be able to correlate logs generated by different components for easy analysis.

Alerting and Analysis

Centralized logging systems for Kubernetes should let you set up alerts on complex searches and integrate them with third-party tools. For example, teams might want to send alert notifications to platforms such as PagerDuty or Slack.

Centralized logging systems should also use machine learning or data analysis algorithms to surface insights from seemingly unrelated pieces of data.

Log your data with CrowdStrike Falcon Next-Gen SIEM

Elevate your cybersecurity with the CrowdStrike Falcon® platform, the premier AI-native platform for SIEM and log management. Experience security logging at petabyte scale, choosing between cloud-native and self-hosted deployment options. Log your data with a powerful, index-free architecture, without bottlenecks, allowing threat hunting with over 1 PB of data ingestion per day. Ensure real-time search capabilities to outpace adversaries, achieving sub-second latency for complex queries. Benefit from 360-degree visibility, consolidating data to break down silos and enabling security, IT, and DevOps teams to hunt threats, monitor performance, and ensure compliance seamlessly across 3 billion events in less than one second.

Schedule Falcon Next-Gen SIEM Demo

Arfan Sharif is a product marketing lead for the Observability portfolio at CrowdStrike. He has over 15 years of experience driving Log Management, ITOps, Observability, Security, and CX solutions for companies such as Splunk, Genesys, and Quest Software. Arfan graduated in Computer Science at Bucks and Chilterns University and has a career spanning Product Marketing and Sales Engineering.