Audit .NET/.NET Core Apps with Audit.NET and AWS QLDB

This week we have a guest post from Adrian Iftode. It introduces the auditing framework Audit.NET and shows how to create audit trails for .NET/.NET Core applications using AWS QLDB.

I believe every team has had to justify how certain things happened the way they did. The customer asks how the system got into a specific state. Why did some users have access to a sensitive module when they didn't have the right policy? Why was the contract paid even though a cancellation had been registered earlier? An order appears as delivered, yet the end client never received a notification? The customer wants to know why. Is it a bug, an operations issue, a system misuse? As the team starts investigating, they soon find out there is no regression. The contract was indeed canceled, but reopened at the client's request. The users had the right policy at the moment they accessed the protected module, but it was later changed to one that denies access. The end client was notified about the order delivery; it turns out the phone number was wrong at the moment of the notification and was corrected later. The team can feel that the customer is not convinced, and wishes there were a better way to prove the correctness of both the logs and the system at any point in time.

AWS QLDB (AWS Quantum Ledger Database)

How do you build digital trust? If you have ever downloaded a software utility, you might have noticed that the hosting website also publishes a string you can check against once the download finishes. That string is produced by a so-called hash function, a mathematical way to generate a practically unique string from an input, in this case the file itself. If any bit is modified during the download, the resulting string looks completely different when the same hash algorithm is applied. Once the file is downloaded, the next step is to apply the hash function to it and compare the output with the string published by the hosting website. If they match, the file is trusted because the website is trusted. In effect, the website acts as a central authority.
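The verification step described above can be sketched in a few lines of C#. This is purely illustrative; the published hash you compare against would come from the hosting website.

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

public static class DownloadVerifier
{
    // Compute the SHA-256 hash of some content as a lowercase hex string.
    public static string Sha256Of(Stream content)
    {
        using var sha256 = SHA256.Create();
        return Convert.ToHexString(sha256.ComputeHash(content)).ToLowerInvariant();
    }

    // Compare the computed hash with the one published by the hosting website.
    public static bool Matches(Stream content, string publishedHash) =>
        Sha256Of(content) == publishedHash.ToLowerInvariant();
}
```

If `Matches` returns true, the downloaded bytes are exactly the ones the website hashed when it published the string.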

In our case, we have more problems to solve. Not only do we need to keep the history of every state change (contract, order, access policies), but we also need to prove that the state changes requested by the application services were not tampered with. If the details of any database transaction can be wrapped in a document, that document can form the input of a hash function, and we just need to store the documents and their corresponding hashes. Most databases follow the WAL (Write-Ahead Logging) protocol, which means no data is written to the database files before the transaction details are written to the logs. Storing only the documents and their individual hashes doesn't solve the problem, though: the documents need to be chained, forming a hash chain. When a new transaction is added to the chain, the hashes of all the previous transactions are included in the input of the new transaction's hash. For large chains, such as those produced by OLTP databases, an efficient way is needed to verify that a transaction is part of the chain. Such a structure is a Merkle tree: instead of looping through all the previous transactions, only the adjacent hashes are needed, just enough to get to the top of the tree. This data structure can be maintained either by a central authority or by several parties, the latter forming a distributed blockchain. For our use case a single authority is good enough, and that is what AWS provides as the QLDB service.
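The chaining idea can be sketched with a minimal hash chain: each entry's hash covers both its own document and the previous entry's hash, so altering any earlier document invalidates every hash after it. This is a simplified illustration, not QLDB's actual scheme.

```csharp
using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;

public sealed class HashChain
{
    private readonly List<(string Document, byte[] Hash)> _entries = new();

    // Append a document; its hash also covers the previous entry's hash.
    public void Append(string document)
    {
        byte[] previous = _entries.Count == 0 ? Array.Empty<byte>() : _entries[^1].Hash;
        _entries.Add((document, HashOf(previous, document)));
    }

    // Recompute every hash from the start; any tampered document breaks the chain.
    public bool Verify()
    {
        byte[] previous = Array.Empty<byte>();
        foreach (var (document, hash) in _entries)
        {
            previous = HashOf(previous, document);
            if (!CryptographicOperations.FixedTimeEquals(previous, hash))
                return false;
        }
        return true;
    }

    private static byte[] HashOf(byte[] previousHash, string document)
    {
        using var sha256 = SHA256.Create();
        var input = new List<byte>(previousHash);
        input.AddRange(Encoding.UTF8.GetBytes(document));
        return sha256.ComputeHash(input.ToArray());
    }
}
```

Re-running `Verify` from the first entry is linear in the chain length, which is exactly the cost a Merkle tree avoids.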

Audit Trails - a specialized form of logging

An audit trail is a specialized form of logging: given a system state, we need to know which actions were taken in order to reconstruct how and why the system got into that state. Building systems with audit capabilities is often a functional requirement.

To answer how and why a contract was canceled, every system component engaged in this business operation must first audit all the involved actions, so the audit trail can be queried later. As a request flows through the system, new information is added to the audit event: the component name, the identity or user name executing the request, what the data looked like before and after the modification, timestamps, machine names, a common identifier to correlate the request across components, and any other information needed to connect the request with other systems. This operation is vital for some businesses, so it is often considered part of the transaction: the cancellation of a contract is successful only if there is also a record in the audit trail.

One could rely on the ILogger interfaces to implement this requirement, but there are a few problems: logging can easily be turned off, a failure to write a log message won't crash the application, and ILogger has no specialized primitives for audit logging.

Here is where Audit.NET shines, as it provides two simple primitives specialized for audit logging: AuditScope and AuditScopeFactory.

Integrating Audit.NET and QLDB in .NET Core

Audit.NET is an extensible framework for auditing executing operations in .NET and .NET Core. It comes with two types of extensions: data providers (or data sinks) and interaction extensions. Data providers store the audit events in various persistent storages, while interaction extensions create specialized audit events based on the execution context, such as Entity Framework, MVC, WCF, HttpClient, and many others.

One of the data providers is Audit.NET.AmazonQLDB.

The prerequisite for using Audit.NET with QLDB is that a ledger must already exist on AWS. To create one, you can follow the instructions in the AWS docs. Once this is ready, follow these steps:

Install the Audit.NET.AmazonQLDB package:

Install-Package Audit.NET.AmazonQLDB

Configure the QLDB driver just before running the .NET Core host; in the configuration below, all audit events are saved to the QLDBDemo ledger.

using Amazon.QLDB.Driver;
using Audit.Core;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;

namespace QLDBDemo
{
    public class Program
    {
        public static void Main(string[] args)
        {
            Configuration.Setup()
                .UseAmazonQldb(config => config.WithQldbDriver(QldbDriver.Builder()
                        .WithLedger("QLDBDemo")
                        .Build())
                    .Table("AuditEvents"));

            CreateHostBuilder(args).Build().Run();
        }
        
        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.UseStartup<Startup>();
                });
    }
}

Register the IAuditScopeFactory interface as a singleton so it can be injected into the application services. The role of the AuditScopeFactory is to create an AuditScope object that tracks all the audited data that will be saved to persistent storage. The AuditScope class follows the C# Dispose pattern, so the save operation is called automatically in using blocks.

public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<IAuditScopeFactory, AuditScopeFactory>();
}

Audit the Hello World endpoint: when a request reaches the root endpoint, a new AuditScope is created and consumed in a using statement. A name is given to the event type (HelloWorld). The event data is saved to the QLDB ledger just before the function ends.

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.UseRouting();

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapGet("/", async context =>
        {
            var auditScopeFactory = context.Request.HttpContext.RequestServices.GetRequiredService<IAuditScopeFactory>();

            await using var _ = await auditScopeFactory.CreateAsync(new AuditScopeOptions
            {
                EventType = "HelloWorld"
            });

            await context.Response.WriteAsync("Hello World!");
        });
    });
}

To see the actual audit event data, navigate to the AWS QLDB service, select the QLDBDemo ledger, and execute the following PartiQL statement in the Query Editor.

SELECT *
FROM AuditEvents

You can view each record as an Ion document. Amazon Ion is an open-source, richly-typed superset of JSON developed by Amazon. A record might look like this:

{
  EventType: "HelloWorld",
  Environment: {
    UserName: "Adrian Iftode",
    MachineName: "DESKTOP-8SVJ206",
    DomainName: "DESKTOP-8SVJ206",
    CallingMethodName: "System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start()",
    AssemblyName: "System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e",
    Culture: "en-US"
  },
  StartDate: "2020-09-27T11:53:02.973389Z",
  EndDate: "2020-09-27T11:53:02.9848242Z",
  Duration: 11
}

Each record in the AuditEvents table has a history of revisions. Depending on the configured creation policy, each record will have zero or more modifications. The list of changes can be queried using the history function:
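The creation strategy is controlled through Audit.NET's event creation policy. A configuration sketch, assuming the Audit.Core package's fluent setup:

```csharp
using Audit.Core;

// InsertOnStartReplaceOnEnd inserts an audit record when the AuditScope is
// created and replaces it when the scope is disposed, so the ledger's
// history() shows a revision per update. The default, InsertOnEnd,
// writes the record only once, when the scope is disposed.
Configuration.Setup()
    .WithCreationPolicy(EventCreationPolicy.InsertOnStartReplaceOnEnd);
```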

SELECT *
FROM history(AuditEvents)

The output of this query will contain records like the following:

{
  blockAddress: {
    strandId: "5pxmvg4JADmBeFIh7fLULL",
    sequenceNo: 32
  },
  hash: {{ULu/OP9lHr/7xuEx+BrlVQdu5Y9kyp/2KPCjxiMla/A=}},
  data: {
    EventType: "HelloWorld",
    Environment: {
      UserName: "Adrian Iftode",
      MachineName: "DESKTOP-8SVJ206",
      DomainName: "DESKTOP-8SVJ206",
      CallingMethodName: "System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start()",
      AssemblyName: "System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e",
      Culture: "en-US"
    },
    StartDate: "2020-09-27T17:58:29.0672349Z",
    Duration: 0
  },
  metadata: {
    id: "0bGI6MKepSe9Trm4haPUnr",
    version: 0,
    txTime: 2020-09-27T17:58:30.728Z,
    txId: "4yjXiIBwKZG9cv3TWWvEdn"
  }
}

Notice the additional fields besides data. They contain information about the block position in the Merkle tree (blockAddress), the hash of this document, and metadata like id, version, transaction time, and transaction id. To prove the integrity of this log entry, three pieces of information are needed: the block address, the document id (metadata.id), and the digest (top hash) of the QLDBDemo ledger. With this information a proof is requested from AWS QLDB, and the service responds with the list of intermediary hashes, so the client can rebuild the path to the top node and compare the result with the advertised digest.
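The proof mechanics can be illustrated with a generic Merkle membership check: given a leaf hash and the sibling hashes along the path to the root, recompute the root and compare it with the advertised digest. Note this is a generic sketch, not QLDB's exact algorithm; in particular, QLDB uses its own rule for ordering the two hashes before combining them (see the AWS data verification docs).

```csharp
using System;
using System.Collections.Generic;
using System.Security.Cryptography;

public static class MerkleProof
{
    // Concatenate two child hashes and hash the result to get the parent.
    public static byte[] Combine(byte[] left, byte[] right)
    {
        using var sha256 = SHA256.Create();
        var input = new byte[left.Length + right.Length];
        Buffer.BlockCopy(left, 0, input, 0, left.Length);
        Buffer.BlockCopy(right, 0, input, left.Length, right.Length);
        return sha256.ComputeHash(input);
    }

    // Walk from the leaf to the root using only the sibling hashes along the
    // path, then compare the recomputed root with the advertised digest.
    public static bool Verify(byte[] leafHash,
                              IEnumerable<(byte[] Hash, bool IsLeft)> proofPath,
                              byte[] expectedDigest)
    {
        var current = leafHash;
        foreach (var (sibling, isLeft) in proofPath)
            current = isLeft ? Combine(sibling, current) : Combine(current, sibling);
        return CryptographicOperations.FixedTimeEquals(current, expectedDigest);
    }
}
```

The key property is that the proof path is logarithmic in the number of leaves, so verification stays cheap even for a large ledger.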

There are probably many ways to audit your system, and different implementations. I personally prefer to avoid reinventing the wheel unless there are serious reasons to do so. If a system is critical, if it must comply with state regulations, or if business partners need assurance about data integrity, I would choose a centralized ledger database.
