Push MDATP Alerts to Log Analytics using Logic Apps

One of the questions that I get asked all the time is how to integrate cloud solutions into monitoring platforms. Whether it is Azure AD sign-in logs, Exchange Audit Logs, or anything else, the primary desire is a centralized location for these logs to provide a “single pane of glass”.

In the past, Rich and I have talked with clients about using Azure Log Analytics as the centralized platform. However, it was never really a SIEM, and many clients wanted to keep using their on-premises solutions, such as Splunk, ArcSight, or similar.

With the addition of Azure Sentinel on top of Azure Log Analytics, we’re starting to have more conversations about leveraging cloud-native solutions instead of bolted-on products. Because of the work we do in the Microsoft security space, one of the things we wanted to do was push data from Microsoft Defender ATP (formerly Windows Defender ATP) into Log Analytics, so that we can then query and alert on it within Sentinel.

To do this, we decided to use Azure Logic Apps for two main reasons:

  1. It is low-cost, with a consumption-based model
  2. It provides a modular, graphical means of authoring workflows

This Logic App consists of five easy steps.

Step 1: The Trigger

#TriggerWarning

All Logic Apps must start with a trigger, either event-based or schedule-based. We decided to start this Logic App off simply: a trigger that fires whenever a new Defender ATP alert occurs. Pretty simple, and we just used Rich’s account for the API connection.

Logic App Trigger

Step 2: API Call, The First

The first real step after receiving data via the trigger is to… Receive data.

Huh?

To push data into Log Analytics and have it parsed properly, we want to submit each alert (and its corresponding information) as a single JSON object to the Log Analytics endpoint. However, the output from the trigger is just the alert ID and machine ID. We could feed those into the built-in Logic Apps action for retrieving alerts, but that action hands back the alert as a set of nicely parsed fields rather than the single raw JSON object we want to forward.

To fix this, our first real action pulls the alert directly from the MDATP API, which does give us the raw JSON object. To handle authentication for this call, we created an App Registration in Azure AD, and the HTTP action uses the following configuration:

  • Method: GET
  • URI: https://api.securitycenter.windows.com/api/alerts/<Alert ID from Step 1>
  • Authentication: Active Directory OAuth
  • Tenant: <AAD Tenant GUID>
  • Audience: https://api.securitycenter.windows.com
    • Note the lack of a trailing “/” at the end
  • Client ID: <App Registration Client ID>
  • Credential Type: Secret
  • Secret: <App Registration Key>

First API Call
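
If you’re curious what that HTTP action is doing under the covers, here’s a minimal Python sketch of the equivalent calls. The tenant, client ID, secret, and alert ID values are placeholders for the settings above, and the Logic App handles the token exchange for you, so this is purely illustrative:

```python
# Minimal sketch of the same call outside Logic Apps (client credentials flow).
# TENANT_ID, CLIENT_ID, CLIENT_SECRET, and alert_id are placeholders for the
# values referenced in the step above.
import requests

TENANT_ID = "<AAD Tenant GUID>"
CLIENT_ID = "<App Registration Client ID>"
CLIENT_SECRET = "<App Registration Key>"
RESOURCE = "https://api.securitycenter.windows.com"  # no trailing slash

# Get a token for the MDATP API using the App Registration (client credentials).
token_response = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "resource": RESOURCE,
    },
)
access_token = token_response.json()["access_token"]
headers = {"Authorization": f"Bearer {access_token}"}

# Pull the raw alert JSON, just like the Logic App HTTP action does.
alert_id = "<Alert ID from Step 1>"
alert = requests.get(f"{RESOURCE}/api/alerts/{alert_id}", headers=headers).json()
```

The Audience field in the Logic App maps to the resource the token is requested for, which is why that trailing slash matters: the value has to match the API’s identifier exactly.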

Step 3: API Call, The Second

Because alert info isn’t everything we need, we also decided to pull in machine information for a given alert. Luckily, this configuration is nearly identical to the other API call, just a different endpoint. For this, we’re using the following settings. Stop me if this looks familiar:

  • Method: GET
  • URI: https://api.securitycenter.windows.com/api/machines/<Machine ID from Step 1>
  • Authentication: Active Directory OAuth
  • Tenant: <AAD Tenant GUID>
  • Audience: https://api.securitycenter.windows.com
    • Note the lack of a trailing “/” at the end
  • Client ID: <App Registration Client ID>
  • Credential Type: Secret
  • Secret: <App Registration Key>

Second API Call
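
Continuing the earlier sketch, the raw equivalent of this step just reuses the same token and headers against the machines endpoint (machine_id being the placeholder for the machine ID the trigger hands us):

```python
# Same token and headers as the previous sketch; only the endpoint changes.
machine_id = "<Machine ID from Step 1>"
machine = requests.get(f"{RESOURCE}/api/machines/{machine_id}", headers=headers).json()
```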

Step 4: Putting it All Together

Now that we have all of our data, we can push it into Log Analytics. However, we want the two pieces to stay linked, and we don’t want to maintain multiple tables. To do that, we simply join the two JSON objects into a single one using the union() function in Logic Apps. Luckily, the two objects share very few fields, so we don’t have to worry about overwriting properties. The merged object is then passed to the in-box action that sends data to Log Analytics, into a custom log we’ve named MDATPAlerts. Note: log names in Log Analytics are case sensitive, so if you are creating multiple Logic Apps to forward data, our MDATPAlerts log would be different from an MDATPalerts log.

Sending data to Log Analytics
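
For reference, the combination of union() and the Send Data action amounts to merging the two objects and POSTing the result to the Log Analytics HTTP Data Collector API. Here’s a rough Python sketch of that, assuming the alert and machine objects from the earlier sketches and placeholder workspace credentials; the built-in action does all of this signing for you:

```python
import base64
import datetime
import hashlib
import hmac
import json

import requests

WORKSPACE_ID = "<Log Analytics Workspace ID>"
SHARED_KEY = "<Workspace Primary Key>"
LOG_TYPE = "MDATPAlerts"  # shows up in the workspace as MDATPAlerts_CL

# Equivalent of the Logic Apps union() expression: one merged JSON object
# (properties from the machine object win if a name collides).
body = json.dumps({**alert, **machine}).encode("utf-8")

# The Data Collector API signs each request with an HMAC-SHA256 over these fields.
rfc1123_date = datetime.datetime.utcnow().strftime("%a, %d %b %Y %H:%M:%S GMT")
string_to_sign = f"POST\n{len(body)}\napplication/json\nx-ms-date:{rfc1123_date}\n/api/logs"
signature = base64.b64encode(
    hmac.new(base64.b64decode(SHARED_KEY), string_to_sign.encode("utf-8"), hashlib.sha256).digest()
).decode()

# Post the merged record to the workspace as the MDATPAlerts custom log type.
requests.post(
    f"https://{WORKSPACE_ID}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01",
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"SharedKey {WORKSPACE_ID}:{signature}",
        "Log-Type": LOG_TYPE,
        "x-ms-date": rfc1123_date,
    },
)
```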

The Final Product

In the end, we now have all MDATP alerts flowing into our workspace, where we can generate alerts in Sentinel, build dashboards, and correlate against other potential IoCs. All of these logs land in our MDATPAlerts_CL log (the CL suffix meaning Custom Log) in Log Analytics and can be searched from there. Then the fun begins!

Running this Blog on Azure

One of the things I get asked about a lot is real-world examples of running workloads on Azure. Because this blog is hosted in my Azure subscription, I figured this would be a great place to start! The number of resources (and the way I’ve deployed them) may also give other people ideas for their own deployments.

So to begin, here is what I have deployed across three different Resource Groups (RGs):

Resource Group #1 – Azure DevOps

DevOps Resource Group

This is the easiest, so we may as well start here. When I was doing some functional load testing, I didn’t have an Azure DevOps instance associated with my personal account, so I used an automatically generated one on Azure, which is deployed into its own Resource Group. That’s it, just a DevOps instance.

  • Total Resources in RG #1: 1
    • Azure DevOps Organization

Resource Group #2 – The Real Stuff

Main Resource Group

Now on to my second Resource Group, which is responsible for hosting the actual blog and directly required services. This is deployed on Azure Platform as a Service (PaaS) so there is not a server in sight! In fact, this is deployed 100% on containers, as most PaaS services are.

The blog itself runs on a single Azure App Service in North Central US, on a Linux App Service Plan using the Basic tier (sorry for the performance). The Linux App Service runs a WordPress container, which hosts this site and uses an Azure Database for MySQL on the backend. I am also storing all of my media directly in an Azure Storage account, which gives me 5 PB of capacity (at the time of writing), versus the 10 GB of local storage that comes with the App Service.

To monitor the environment I’ve got an App Insights workspace set up, with its shared dashboard resource, and an anomalous failures alert, all of which are separate resources.

The final item I have in this Resource Group is the SSL certificate bound to the site, which we’ll discuss more in the next section.

  • Total Resources in RG #2: 8
    • Azure App Service
    • Azure App Service Plan
    • Azure Database for MySQL
    • Azure Storage Account
    • Application Insights Workspace
    • App Insights Shared Dashboard
    • App Insights anomalous failures alert
    • App Service Certificate

Resource Group #3 – Security is Kinda Important

Encryption Resource Group

The last item in the previous section was the App Service certificate. You may or may not have noticed, but this site currently uses LetsEncrypt. LetsEncrypt is great because it’s free, but the certificates are only good for 3 months at a time, rather than the usual 1-3 years. Rather than renewing the certificate manually every 3 months, I’m automating the process using a great WebJob solution from Ohad Schneider called letsencrypt-webapp-renewer. It’s just a WebJob that runs in an Azure App Service, which itself has its own App Service Plan.

To keep costs low by sticking to the Free tier, I am triggering this WebJob with an Azure Function (hooray for 1M free executions and 400,000 free GB-s), which also has its own consumption-based App Service Plan.
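
As a rough illustration only (the app name, WebJob name, and credential settings here are placeholders, not my actual configuration), a timer-triggered Function can kick off a triggered WebJob through the Kudu API, something like this:

```python
# Hypothetical timer-triggered Azure Function (Python) that starts the
# letsencrypt-webapp-renewer WebJob via the Kudu API. The SCM site, WebJob
# name, and credential settings are placeholders.
import os

import azure.functions as func
import requests


def main(timer: func.TimerRequest) -> None:
    scm_site = "https://<webjob-host-app>.scm.azurewebsites.net"
    webjob_name = "letsencrypt-webapp-renewer"

    # Publishing (deployment) credentials for the App Service hosting the WebJob,
    # pulled from the Function's application settings.
    user = os.environ["WEBJOB_DEPLOY_USER"]
    password = os.environ["WEBJOB_DEPLOY_PASSWORD"]

    # Kudu exposes triggered WebJobs at /api/triggeredwebjobs/<name>/run.
    response = requests.post(
        f"{scm_site}/api/triggeredwebjobs/{webjob_name}/run",
        auth=(user, password),
    )
    response.raise_for_status()
```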

Finally, I’ve got an additional Storage Account to store required files and such for this solution.

  • Total Resources in RG #3: 5
    • Azure App Service
    • Azure Function
    • Azure App Service Plan x2
    • Azure Storage Account

Recap:

In total, I am running 14 resources in Azure to support this blog, for a total running cost of about $50/mo, spread across 3 different Resource Groups. I could consolidate, but I like having some separation between resources. This also allows me to view the cost of individual components, so that I can better track my costs and projections.