Log Buffering in .NET 9: The Good, the Bad, and the Ugly

In the world of production software, logging presents a difficult trade-off. When everything runs smoothly, excessive logs are not only a costly waste of resources; drowning in a sea of pointless log records also gets in the way when diagnosing issues. But when an error occurs, you can never have enough detail, especially about the events that led to the failure. Microsoft’s new log buffering feature in .NET 9 aims to solve this dilemma by letting you have your cake and eat it too.

This feature ships in two NuGet packages: Microsoft.Extensions.Telemetry provides the basic buffering infrastructure, while Microsoft.AspNetCore.Diagnostics.Middleware adds ASP.NET-specific functionality. [Documentation](https://learn.microsoft.com/en-us/dotnet/core/extensions/log-buffering).

The concept is simple: instead of disabling and immediately discarding low-severity log levels like Debug or Information, you can temporarily hold them in a memory buffer. If an error occurs, you can then “flush” this buffer, writing the recent, detailed history to your logs. If no error occurs, the buffered logs are simply discarded, never hitting your ingestion infrastructure. This allows you to retroactively enable detailed logging precisely when you need it most. Basically logging at 1.21 gigawatts with a flux capacitor …


A Quick Word on OpenTelemetry

Before diving into log buffering, it’s worth noting that a comprehensive observability strategy using OpenTelemetry (OTel) can often mitigate the need for overly verbose logging. The three pillars of OTel—tracing, metrics, and logs—work together to provide deep insight.

I have seen applications where OTel metrics revealed thousands of exceptions handled silently within a library due to configuration/compatibility issues. These exceptions would never appear in any application log, because they were caught in the library code. Similarly, distributed traces can often pinpoint the root cause of an error with minimal log data, because they show the correlated distributed sequence of invocations. If you haven’t explored a full OpenTelemetry setup, it should be your first stop for diagnosing complex issues.


How Log Buffering Works

Log buffering integrates directly with the existing Microsoft.Extensions.Logging infrastructure you already use.

  1. Add the Packages: You’ll need Microsoft.Extensions.Telemetry for the core functionality and, for web applications, Microsoft.AspNetCore.Diagnostics.Middleware. Both work with .NET 9 and newer.
  2. Configure the Buffer: In your application setup, you enable and configure the buffering policies, such as buffer size and eviction time. Additionally, you provide a set of rules that determine which log statements should be buffered. The simplest filter is based on LogLevel: buffer everything up to LogLevel Information. Higher-severity records are still written to the log immediately, without being buffered. For ASP.NET Core, you can even configure a buffer scoped per incoming request.
  3. Flush When Needed: The magic happens when you manually call Flush() on the log buffer, typically within an exception handler or error-handling middleware. This is the signal to write the buffered logs to your configured log sinks (e.g., console, Serilog, OpenTelemetry Collector). The buffer itself is a new type you resolve from the DI container.
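The steps above can be sketched roughly as follows. This is a sketch based on the preview API: the names `AddGlobalBuffer`, `GlobalLogBuffer`, `LogBufferingFilterRule`, and the `Microsoft.Extensions.Diagnostics.Buffering` namespace come from the Microsoft docs at the time of writing and may still shift, so double-check against the package you actually install:

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Diagnostics.Buffering;
using Microsoft.Extensions.Logging;

var builder = WebApplication.CreateBuilder(args);

// Step 2: buffer Information and below; Warning and Error bypass
// the buffer and are written immediately.
builder.Logging.AddGlobalBuffer(options =>
{
    options.Rules.Add(new LogBufferingFilterRule(logLevel: LogLevel.Information));
});

var app = builder.Build();

// Step 3: flush from error-handling code, so the detailed history
// is written out alongside the error itself.
app.Use(async (context, next) =>
{
    try
    {
        await next(context);
    }
    catch
    {
        // GlobalLogBuffer is registered in DI by AddGlobalBuffer.
        context.RequestServices.GetRequiredService<GlobalLogBuffer>().Flush();
        throw;
    }
});

app.Run();
```

If no exception ever reaches the middleware, the buffered records age out and are discarded without ever leaving the process.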

The Good: Seamless Integration & Scoped Buffering 👍

The best parts of this feature are its seamless integration and powerful scoping capabilities.

You continue using the familiar ILogger<T> interface throughout your application without any changes to your existing log statements. It works transparently with your current logging providers and sinks. The primary benefit is cost and noise reduction, as you can confidently add detailed logs knowing they will only be stored when truly needed. Buffering options can also be supplied via configuration sources such as appsettings.json.

A key advantage for ASP.NET applications is request-scoped buffering. The library can create and manage a separate buffer for each concurrent HTTP request. This isolates your logging, ensuring that when one request fails and flushes its detailed logs, it doesn’t include noise from other requests that are executing successfully in parallel.
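A per-request setup might look like the following. Again a sketch, not gospel: `AddPerIncomingRequestBuffer` and `PerRequestLogBuffer` are the names used in the docs for the Microsoft.AspNetCore.Diagnostics.Middleware package at the time of writing, and the exact overloads may differ:

```csharp
using Microsoft.AspNetCore.Diagnostics.Buffering;
using Microsoft.Extensions.Logging;

var builder = WebApplication.CreateBuilder(args);

// Each concurrent HTTP request gets its own buffer for Information and below.
builder.Logging.AddPerIncomingRequestBuffer(LogLevel.Information);

var app = builder.Build();

app.MapGet("/work", (ILogger<Program> logger, PerRequestLogBuffer buffer) =>
{
    logger.LogInformation("Buffered; only surfaces if this request fails.");
    try
    {
        DoWork(); // hypothetical work method, stands in for your handler logic
        return Results.Ok();
    }
    catch (Exception ex)
    {
        // Flushes only this request's buffered records, not those of
        // parallel requests that are succeeding.
        buffer.Flush();
        logger.LogError(ex, "Request failed.");
        return Results.StatusCode(500);
    }
});

app.Run();

static void DoWork() { /* ... */ }
```

The isolation is the point: a failing request dumps its own breadcrumb trail without dragging in the Debug chatter of its healthy neighbors.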

The Bad: Conditional Timestamp Troubles 🕰️

A significant drawback stems from how timestamps are handled in the .NET logging pipeline. The new buffering infrastructure correctly captures the timestamp when your log statement is executed. However, the existing pipeline was not designed with buffering in mind, and the final logging provider has the last word on the timestamp: normally, each provider assigns the current timestamp to a log record when its .Log method is called.

The problem is that providers that are not (yet) aware of buffering ignore the captured timestamp and assign a new one when the log is finally written, i.e. when the buffer is flushed (which can be seconds after the log statements were issued from application code to the ILogger instance). [Providers need to implement a new interface](https://github.com/dotnet/extensions/discussions/6507), `IBufferedLogger`, in order to consume the correctly captured timestamps. When using a provider that does not (yet) do this, your logs will have wrong timestamps.

  • Correct Behavior: Some providers, like the built-in .NET Console Logger, are aware of this and will correctly use the original timestamp.
  • Incorrect Behavior: Other popular sinks, including some configurations of Serilog and the OpenTelemetry Logger, are not (at the time of writing) aware of this and will assign their own timestamp at the moment of the flush. When using Aspire Dashboard or Seq to surface logs via OpenTelemetry, the buffered log statements will not carry the correct timestamp: every entry in a flushed batch gets nearly the same timestamp, the time of the flush rather than the time of the original event. This erases critical timing information and, as we’ll see next, leads to bigger problems.
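On the provider side, supporting buffering means implementing `IBufferedLogger` from Microsoft.Extensions.Logging.Abstractions. A minimal sketch of a console-style logger that honors the captured timestamps might look like this; the `BufferedLogRecord` members used here (`Timestamp`, `LogLevel`, `FormattedMessage`) reflect my reading of the .NET 9 abstractions and are worth double-checking:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Logging.Abstractions;

// A logger that can receive whole batches of buffered records on flush.
public sealed class TimestampAwareLogger : ILogger, IBufferedLogger
{
    public void LogRecords(IEnumerable<BufferedLogRecord> records)
    {
        foreach (var record in records)
        {
            // record.Timestamp is the time the log statement originally
            // executed, not the time of the flush.
            Console.WriteLine(
                $"{record.Timestamp:O} [{record.LogLevel}] {record.FormattedMessage}");
        }
    }

    // Regular, unbuffered path: higher-severity logs arrive here directly,
    // so stamping "now" is correct for them.
    public void Log<TState>(LogLevel logLevel, EventId eventId, TState state,
        Exception? exception, Func<TState, Exception?, string> formatter)
        => Console.WriteLine(
            $"{DateTimeOffset.UtcNow:O} [{logLevel}] {formatter(state, exception)}");

    public bool IsEnabled(LogLevel logLevel) => true;

    public IDisposable? BeginScope<TState>(TState state) where TState : notnull => null;
}
```

The buffering infrastructure checks whether a provider's logger implements this interface; if it does, flushed records are delivered as a batch with their original timestamps intact, otherwise they are replayed one by one through the plain `Log` path and get re-stamped.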

The Ugly: Out-of-Order Mayhem 🤯

The direct consequence of incorrect timestamps is that your logs will likely appear out of order. The official documentation even states that log statement order is not guaranteed.

While in theory your log management platform should be able to sort entries by timestamp to restore chronological order, this system breaks down when the timestamps themselves are wrong.

The buffering mechanism only applies to the log levels you configure it for (e.g., Information and below). Higher-severity logs like Warning or Error will bypass the buffer and be written immediately with a correct, current timestamp.

Consider this sequence of events:

  1. _logger.LogDebug("Starting operation..."); (Goes to buffer)
  2. _logger.LogInformation("Processing item A."); (Goes to buffer)
  3. _logger.LogWarning("Connection is slow."); (Written immediately)
  4. _logger.LogInformation("Processing item B."); (Goes to buffer)
  5. _logger.LogError("Failed to process!"); (Written immediately, error handling code also triggers Flush())

If your logger assigns timestamps on flush, your final output will destroy the chronological reality of what happened:

[WRN] Connection is slow.
[ERR] Failed to process!
[DBG] Starting operation...
[INF] Processing item A.
[INF] Processing item B.

Scoping and Bleed-Over

For ASP.NET Core applications, the .AddPerIncomingRequestBuffer() feature is a clean solution, but it isn’t foolproof. Not all code executes within a request context. Background tasks and hosted services (IHostedService) have no HttpContext, and their logs fall back to a global buffer. This creates a risk that flushing a request-specific buffer also flushes unrelated logs from a background service.

This problem is compounded by the fact that diagnostic activity information (like TraceId and SpanId from System.Diagnostics.Activity) may not be preserved during buffering. Losing this data can break the crucial link between your logs and their corresponding distributed traces, severely hindering diagnostics in a modern observability stack.
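Until providers reliably preserve activity context through the buffer, one defensive (if inelegant) workaround is to stamp the trace identifiers into the log state yourself, using the standard System.Diagnostics.Activity API. This is my own mitigation sketch, not something the buffering feature prescribes; the extension method name is hypothetical:

```csharp
using System.Diagnostics;
using Microsoft.Extensions.Logging;

public static class TracedLoggerExtensions
{
    // Includes the current TraceId/SpanId as structured properties, so the
    // trace correlation survives even if the buffer drops the ambient
    // Activity by the time the record is flushed.
    public static void LogInformationTraced(this ILogger logger, string message)
    {
        var activity = Activity.Current;
        logger.LogInformation("{Message} (TraceId={TraceId}, SpanId={SpanId})",
            message,
            activity?.TraceId.ToString() ?? "none",
            activity?.SpanId.ToString() ?? "none");
    }
}
```

The cost is duplicated data in the happy path, but it keeps the log-to-trace link intact when the buffered records finally land in your backend.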


Conclusion: A Powerful Tool with Sharp Edges

Log buffering in .NET 9 is a powerful feature that directly addresses a common pain point in production logging. It offers a smart way to get detailed diagnostics without paying the price of constant verbose logging.

However, it’s a tool that must be used with a clear understanding of its significant limitations. Before implementing it, verify that your chosen logging provider correctly handles pre-captured timestamps. Use it wisely, and always consider if a more robust observability solution with tracing and metrics might be a better fit for your needs.


(I had Gemini tidy up this posting, and though it reads well, it does lose a bit of its (charming) imperfection … and Gemini overdid the brashness a tad)