Internet Information Services (IIS) is a web server developed by Microsoft that ships as part of the Windows Server operating system.

IT operations teams and webmasters use IIS logs to troubleshoot web applications. However, IIS logs are not always straightforward to work with, particularly for busy sites with many components. The web server can generate numerous verbose logs every day, and making sense of the information they contain requires both an understanding of the data and the right tools to extract meaningful insights.

This article will show you how to get the most from your IIS logs. We’ll cover the different log formats, the important data fields to look at, and how a centralized log management solution can help with IIS log analysis.

Let’s dive in.

What Are IIS Logs?

IIS creates log files for each website it serves. You can set the log file location for an IIS-hosted website from the “Logging” section of the website. If you run IIS as a service on a Windows server, then the default location of its log files is %SystemDrive%\inetpub\logs\LogFiles. The %SystemDrive% value is typically C:.

Each website has a site ID. The log file subfolder for a particular site sits under the main LogFiles folder and is named W3SVC followed by the site ID (for example, W3SVC1 for site ID 1). You can also use the Windows Server Event Viewer to view IIS logs.

IIS logs are enabled and saved automatically in Azure Cloud Services but must be configured in Azure App Service. In both cases, the log file paths differ from the standard Windows Server path.

IIS logs provide valuable data on how users interact with your website or application. Some useful data elements in IIS logs include:

  • The Source IP address
  • Web pages accessed
  • URI queries
  • HTTP methods
  • HTTP status codes returned

Different IT teams will use IIS logs for different purposes. Application developers can use IIS logs to address bugs and critical errors in the web application. A SecOps team might use the logs to investigate unusual behavior and potentially malicious activities like DDoS attacks. And IT operations teams can use the logs to troubleshoot HTTP response errors or slow response times.

IIS Log Formats

IIS offers flexible logging options, allowing you to choose from different log formats. IIS log formats allow you to specify the log event fields, the field separators, and the time format. Just like the log file location, you can set the log file format of an IIS-hosted website in the “Logging” settings of the website. Regardless of the format you select, all logs are written in ASCII text. The following table shows a high-level overview of the IIS log formats:

Type    Default    Customizable Fields    Separator    Time Format    Compatible with FTP
W3C     Yes        Yes                    Space        UTC            Yes
IIS     No         No                     Comma        Local Time     Yes
NCSA    No         No                     Space        Local Time     No

W3C Format

W3C is the default IIS log format and lets you choose which fields to include, which can also help reduce log file size. The time is recorded in UTC. The snippet below shows an IIS log file in W3C format.

#Software: Internet Information Services 6.0
#Version: 1.0
#Date: 2001-05-02 17:42:15
#Fields: time c-ip cs-method cs-uri-stem sc-status cs-version
17:42:15 172.16.255.255 GET /default.htm 200 HTTP/1.0
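
Because the W3C format declares its own column layout in the #Fields directive, entries can be parsed generically. The following minimal Python sketch (illustrative only, not tied to any particular library) reads that directive and turns each entry into a dictionary keyed by the field names shown in the sample above.

# Minimal sketch: parse W3C-format IIS log lines into dictionaries
# using the #Fields directive declared by the log file itself.
def parse_w3c(lines):
    fields = []
    for line in lines:
        line = line.strip()
        if line.startswith("#Fields:"):
            fields = line.split()[1:]              # column layout for later entries
        elif line and not line.startswith("#"):
            yield dict(zip(fields, line.split()))  # space-separated values

sample = """#Fields: time c-ip cs-method cs-uri-stem sc-status cs-version
17:42:15 172.16.255.255 GET /default.htm 200 HTTP/1.0"""

for entry in parse_w3c(sample.splitlines()):
    print(entry["cs-uri-stem"], entry["sc-status"])  # /default.htm 200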

IIS Format

The IIS format is less flexible, as you can’t customize the fields included. However, the file format is CSV and, therefore, easy to parse. The time is the server’s local time. The snippet below shows two log events in IIS format.

192.168.114.201, -, 03/20/01, 7:55:20, W3SVC2, SALES1, 172.21.13.45, 4502, 163, 3223, 200, 0, GET, /DeptLogo.gif, -,

172.16.255.255, anonymous, 03/20/01, 23:58:11, MSFTPSVC, SALES1, 172.16.255.255, 60, 275, 0, 0, 0, PASS, /Intro.htm, -,
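
Because the IIS format is comma-separated with a fixed column order, Python’s standard csv module can split each entry. In the sketch below, the column labels are informal shorthand for the IIS-format layout rather than official field names.

import csv
import io

sample = (
    "192.168.114.201, -, 03/20/01, 7:55:20, W3SVC2, SALES1, "
    "172.21.13.45, 4502, 163, 3223, 200, 0, GET, /DeptLogo.gif, -,"
)

# Informal shorthand labels for the fixed IIS-format column order.
columns = [
    "client-ip", "username", "date", "time", "service", "server-name",
    "server-ip", "time-taken", "bytes-received", "bytes-sent",
    "status", "windows-status", "method", "target", "parameters",
]

for row in csv.reader(io.StringIO(sample), skipinitialspace=True):
    entry = dict(zip(columns, row))
    print(entry["client-ip"], entry["method"], entry["target"], entry["status"])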

NCSA Common Log File Format

The NCSA format is another fixed format, and it does not allow customizing event fields. It’s simpler than the IIS and W3C formats, containing only basic information like username, time, request type, and HTTP status code. The time is recorded as the server’s local time. The snippet below shows an example:

172.21.13.45 - Microsoft\fred [08/Apr/2001:17:39:04 -0800] "GET /scripts/iisadmin/ism.dll?http/serv HTTP/1.0" 200 3401
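
The NCSA common log format is simple enough to match with a regular expression. The sketch below parses the example entry above; real-world entries may require a more forgiving pattern.

import re

# Minimal sketch: a regular expression for the NCSA common log format.
NCSA_PATTERN = re.compile(
    r'(?P<host>\S+) (?P<ident>\S+) (?P<user>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<bytes>\d+|-)'
)

line = ('172.21.13.45 - Microsoft\\fred [08/Apr/2001:17:39:04 -0800] '
        '"GET /scripts/iisadmin/ism.dll?http/serv HTTP/1.0" 200 3401')

match = NCSA_PATTERN.match(line)
if match:
    print(match.group("user"), match.group("status"), match.group("bytes"))
    # Microsoft\fred 200 3401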

Contents of the IIS Log

IIS log file entries can include different fields based on the log file format. The following list describes some of these fields:

  • Date and Time (date, time): The date and time of the client request. You can correlate this with other logs (for example, application logs) to find more details when troubleshooting an issue.
  • Client IP Address (c-ip): The IP address of the website client. You can use this to track geolocation or specific servers, and to identify suspicious sources.
  • Username (cs-username): The user making the request; anonymous users are represented by a hyphen (“-”). It can help identify application-related issues for specific users, particularly in intranet applications.
  • Method (cs-method): The HTTP request method (such as GET, POST, or PUT), used to track the action made by the user.
  • Bytes Sent and Received (sc-bytes, cs-bytes): The number of bytes sent and received by the server. Used to assess bandwidth needs or to spot suspicious behavior if more bytes are sent than usual.
  • Time Taken (time-taken): The number of milliseconds taken to complete the request. It can help troubleshoot website latency issues.
  • User Agent (cs(User-Agent)): The browser type used by the client. It can help when troubleshooting possible compatibility issues.
  • Protocol Status (sc-status): The HTTP status code of the request. It can help troubleshoot website errors.
  • Referrer (cs(Referer)): The site that directed the user to this site. It helps customize site content depending on user interaction.
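
As a simple illustration of how these fields can be put to work, the following sketch flags error responses (an sc-status of 400 or above) and slow requests (a time-taken over an assumed threshold) from entries that have already been parsed into dictionaries keyed by W3C field names; the threshold and sample data are illustrative.

# Minimal sketch: flag error responses and slow requests from parsed
# IIS log entries. The threshold and sample entries are illustrative.
SLOW_MS = 2000

entries = [
    {"cs-uri-stem": "/default.htm", "sc-status": "200", "time-taken": "120"},
    {"cs-uri-stem": "/reports", "sc-status": "500", "time-taken": "3400"},
    {"cs-uri-stem": "/login", "sc-status": "401", "time-taken": "45"},
]

for entry in entries:
    status = int(entry["sc-status"])
    elapsed_ms = int(entry["time-taken"])
    if status >= 400 or elapsed_ms > SLOW_MS:
        print(f'{entry["cs-uri-stem"]}: status={status}, time-taken={elapsed_ms} ms')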

Why Use a Log Management Solution for IIS Logs?

A busy IIS server can host dozens of websites, each with multiple log files. It’s not practical to manually download, read, and identify issues from each site’s log files; a log management solution is the better option. But that’s not the only reason to use one.

Centralization

A log management solution can automatically capture, parse, index, compress, and store your IIS log files. This can save disk space on the web servers and save you from manually logging into each server to collect the logs.

Context

Log management systems enrich log data by adding context to it. For example, they can add geolocation data to a log event based on its source IP address. Other contextual information can likewise help you identify issues quickly.
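
As a rough illustration of enrichment, the sketch below attaches geolocation context to a log event based on its source IP address; GEO_LOOKUP is a hypothetical, hard-coded table standing in for a real geolocation database or service.

# Minimal sketch: enrich a log event with geolocation context.
# GEO_LOOKUP is hypothetical and stands in for a real geo database.
GEO_LOOKUP = {
    "172.16.255.255": {"country": "US", "city": "Seattle"},
}

def enrich(event):
    enriched = dict(event)
    enriched["geo"] = GEO_LOOKUP.get(event.get("c-ip"), {"country": "unknown"})
    return enriched

print(enrich({"c-ip": "172.16.255.255", "cs-uri-stem": "/default.htm"}))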

Searching and Analyzing

Log management solutions make it easy to search, filter, sort, group, and analyze log events. For example, you may only be interested in HTTP status codes in the 4xx and 5xx ranges. Some tools use common query languages like SQL; others use their own proprietary languages. In most cases, you can save frequently used search queries.
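
For instance, a typical aggregation, grouping status codes by class so you can focus on 4xx and 5xx responses, might look like the following sketch (plain Python rather than any particular tool’s query language).

from collections import Counter

# Minimal sketch: group HTTP status codes by class (2xx, 3xx, 4xx, 5xx),
# similar to a saved search or aggregation in a log management tool.
status_codes = ["200", "200", "304", "403", "404", "500"]
by_class = Counter(code[0] + "xx" for code in status_codes)

print(by_class)                                   # counts per status class
print(by_class["4xx"] + by_class["5xx"], "error responses")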

Correlation

Bringing all of your logs together enables you to correlate events from different systems. For example, you could correlate your IIS logs with network logs to investigate if network latency is causing performance issues on the website. You could also apply AI to correlated IIS and authentication logs to identify suspicious user behavior.

Visualization

Most log management solutions let you build charts, widgets, and dashboards from event trends, which simplifies troubleshooting. For example, you can quickly identify a spike in user requests on a graph and correlate it with a geolocation map to identify the traffic source.

Alerting

You can also set alerts in log management solutions for anomalies detected in log files. For example, if over 1,000 GET requests occur within one minute, then an alert can be triggered to warn about a possible DDoS attack.
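
A simplified version of that alert logic, counting GET requests per minute against a threshold, could look like the sketch below; the threshold and sample timestamps are illustrative.

from collections import Counter

# Minimal sketch: count GET requests per minute and flag minutes that
# exceed a threshold, mirroring the DDoS alert example above.
THRESHOLD = 1000

entries = [
    {"time": "17:42:15", "cs-method": "GET"},
    {"time": "17:42:45", "cs-method": "GET"},
    {"time": "17:43:02", "cs-method": "POST"},
]

gets_per_minute = Counter(
    entry["time"][:5]                  # truncate hh:mm:ss to hh:mm
    for entry in entries
    if entry["cs-method"] == "GET"
)

for minute, count in gets_per_minute.items():
    if count > THRESHOLD:
        print(f"ALERT: {count} GET requests at {minute} (possible DDoS)")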

Discover the world’s leading AI-native platform for next-gen SIEM and log management

Elevate your cybersecurity with the CrowdStrike Falcon® platform, the premier AI-native platform for SIEM and log management. Experience security logging at a petabyte scale, choosing between cloud-native or self-hosted deployment options. Log your data with a powerful, index-free architecture, without bottlenecks, allowing threat hunting with over 1 PB of data ingestion per day. Ensure real-time search capabilities to outpace adversaries, achieving sub-second latency for complex queries. Benefit from 360-degree visibility, consolidating data to break down silos and enabling security, IT, and DevOps teams to hunt threats, monitor performance, and ensure compliance seamlessly across 3 billion events in less than 1 second.