Add pre-built AMIs to Terraform and update documentation

Chris Long
2019-03-09 21:28:06 -08:00
parent 5978e1b750
commit 26140b2d41
15 changed files with 302 additions and 70 deletions

2
.gitignore vendored

@@ -6,3 +6,5 @@ Boxes/*
.DS_Store
Terraform/*/*.tfstate
Terraform/*/.terraform
Terraform/*/*.tfvars
Terraform/*/*.lock.info


@@ -1,17 +0,0 @@
# The region you would like EC2 instances in
# Defaults to us-west-1
region = ""
# Path to the credentials file for AWS (usually /Users/username/.aws/credentials)
shared_credentials_file = ""
# Path to the SSH public key to be added to the logger host
# Example: /Users/username/.ssh/id_terrraform.pub
public_key_path = ""
# AMI ID for each host
# Example: "ami-xxxxxxxxxxxxxxxxx"
logger_ami = ""
dc_ami = ""
wef_ami = ""
win10_ami = ""
# IP Whitelist - Subnets listed here can access the lab over the internet
# Sample: ["1.1.1.1/32", "2.2.2.2/24"]
ip_whitelist = [""]


@@ -0,0 +1,19 @@
# Method 2 - Deploy DetectionLab Using Pre-Built AMIs
This method brings DetectionLab infrastructure online with Terraform, using pre-built shared AMIs.
The supplied Terraform configuration can then be used to create EC2 instances and all requisite networking components.
## Prerequisites
* A machine to run Terraform from
* An AWS account
* An AWS user and access keys to use with the AWS CLI
* Optional but recommended: a separate user for Terraform
## Step by step guide
1. [Configure the AWS command line utility](https://docs.aws.amazon.com/polly/latest/dg/setup-aws-cli.html)
2. Copy the file at [/DetectionLab/Terraform/terraform.tfvars.example](./terraform.tfvars.example) to `/DetectionLab/Terraform/terraform.tfvars`
3. In `terraform.tfvars`, provide overrides for the variables specified in [variables.tf](./variables.tf)
4. From the `/DetectionLab/Terraform/` directory, run `terraform init` to set up the initial Terraform configuration
5. Run `terraform apply` to begin the provisioning process (a condensed command sketch follows this list)
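For reference, here is a minimal sketch of steps 2–5 as shell commands, assuming DetectionLab is checked out at `/path/to/DetectionLab` and that you edit `terraform.tfvars` before applying:
```
# Condensed sketch of the pre-built AMI workflow (the repo path is an assumption)
cd /path/to/DetectionLab/Terraform
cp terraform.tfvars.example terraform.tfvars
"${EDITOR:-vi}" terraform.tfvars    # override the variables defined in variables.tf
terraform init                      # download the AWS provider and initialize state
terraform apply                     # review the plan, then answer "yes" to provision
```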


@@ -1,21 +1,16 @@
# DetectionLab Terraform
When I considered how to build DetectionLab using Terraform, two possibilities came to mind:
### Method 1 - Building locally and exporting VMs
The general concept behind this method is to use Virtualbox or VMware to build DetectionLab. You can then use AWS's VM import capabilities to create AMIs based off of the virtual machines. Once that process is complete, the infrastructure can easily be spun up using a Terraform configuration file.
### Method 1 - Building the VMs locally and exporting them to AWS as AMIs
One method for spinning up DetectionLab in AWS is to begin by using Virtualbox or VMware to build DetectionLab locally. You can then use AWS's VM import capabilities to create AMIs based off of the virtual machines. Once that process is complete, the infrastructure can easily be spun up using a Terraform configuration file.
This method has the benefit of allowing users to customize the VMs before importing them to AWS.
The obvious downside is that it still requires local infrastructure to build the lab, and uploading large OVA files to S3 can be extremely time consuming on slower connections.
The instructions for deploying DetectionLab in AWS via this method are available here: [Build Your Own AMIs README](./VM_to_AMIs.md)
### Method 2 - Building and deploying in AWS
The alternative to building locally would be to build the lab entirely in AWS. This would mean the Packer builds would need to be modified to generate EBS volumes and the Vagrant provisioning would need to be modified to support cloud infrastructure. Virtualbox and VMware-based builds benefit from things like virtual machine guest tools for file sharing, which are obviously unavailable on AWS instances.
This method has the benefit of not requiring any local infrastructure for builds but requires a lot of work, cost, and time to convert the build process to be cloud-based.
### Method 2 - Pre-built AMIs
As of March 2019, I am now sharing pre-built AMIs on the Amazon Marketplace. The code in main.tf uses Terraform data sources to determine the correct AMI ID and will use the pre-built AMIs by default (an example lookup is sketched after this section).
### Progress Updates
Using this method, it should be possible to bring DetectionLab online in under 15 minutes.
The instructions for deploying DetectionLab to AWS via Method 1 are available here: [Method 1 README](./Method1/Method1.md)
Progress on Method 2 will be tracked using a GitHub project that is viewable here: https://github.com/clong/DetectionLab/projects/1
The instructions for deploying DetectionLab in AWS using the pre-built AMIs are available here: [Pre-Built AMIs README](./Pre-Built_AMIs.md)
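To illustrate roughly what the data-source lookup in main.tf resolves, you could query for the newest matching AMI yourself with the AWS CLI. Note that the name filter below is a placeholder for illustration, not the actual filter used in main.tf:
```
# Placeholder example: find the most recent AMI whose name matches a pattern
# The "detectionlab-wef*" filter is an assumption for illustration only
aws ec2 describe-images \
  --filters "Name=name,Values=detectionlab-wef*" \
  --query 'sort_by(Images, &CreationDate)[-1].ImageId' \
  --output text
```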

45
Terraform/VM_to_AMIs.md Normal file

@@ -0,0 +1,45 @@
# Method 1 - Build Locally and Import to AWS
This method involves using Terraform to bring DetectionLab infrastructure online by first building it locally using Virtualbox/VMware and then [importing the resulting virtual machines](https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html#import-vm-image) as AMIs on AWS.
The supplied Terraform configuration can then be used to create EC2 instances and all requisite networking components.
## Prerequisites
* A machine to build DetectionLab with
* An AWS account
* An AWS user and access keys to use with the AWS CLI
* Optional but recommended: a separate user for Terraform
## Step by step guide
1. Build the lab by following the [README](https://github.com/clong/DetectionLab/blob/master/README.md)
2. [Configure the AWS command line utility](https://docs.aws.amazon.com/polly/latest/dg/setup-aws-cli.html)
3. [Create an S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-bucket.html). You will upload the DetectionLab VMs to this bucket later.
4. For the VM import to work, you must create a role named `vmimport` with a trust relationship policy document that allows VM Import/Export to assume the role, and you must attach an IAM policy to the role:
```aws iam create-role --role-name vmimport --assume-role-policy-document file:///path/to/DetectionLab/Terraform/vm_import/trust-policy.json```
5. Edit `/path/to/DetectionLab/Terraform/vm_import/role-policy.json` and insert the name of the bucket you created in step 3 on lines 12-13, replacing `YOUR_BUCKET_GOES_HERE` with the name of your bucket.
6. Use the `put-role-policy` command to attach the IAM policy to the `vmimport` role, giving VM Import/Export access to your bucket:
```aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file:///path/to/DetectionLab/Terraform/vm_import/role-policy.json```
7. Export the DetectionLab VMs as single-file OVAs if they are not already in that format
8. [Upload the OVAs to the S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/upload-objects.html) you created in step 3
9. Edit the `logger.json`, `dc.json`, `wef.json`, and `win10.json` files and modify the `S3Bucket` and `S3Key` values to match the location of the OVA files in your S3 bucket.
10. Import the VMs from S3 as AMIs by running the following commands:
```
aws ec2 import-image --description "dc" --license-type byol --disk-containers file:///path/to/DetectionLab/Terraform/vm_import/dc.json
aws ec2 import-image --description "wef" --license-type byol --disk-containers file:///path/to/DetectionLab/Terraform/vm_import/wef.json
aws ec2 import-image --description "win10" --license-type byol --disk-containers file:///path/to/DetectionLab/Terraform/vm_import/win10.json
aws ec2 import-image --description "logger" --license-type byol --disk-containers file:///path/to/DetectionLab/Terraform/vm_import/logger.json
```
11. Check the status of the import tasks with the following command (a polling sketch follows this list):
```aws ec2 describe-import-image-tasks --import-task-ids <import-ami-xxxxxxxxxxxxxxxxx>```
12. Fill out the variables in `/path/to/DetectionLab/Terraform/terraform.tfvars`
13. Run `terraform init` to set up the initial Terraform configuration
14. `cd /path/to/DetectionLab/Terraform && terraform apply`
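The import tasks from step 10 can take a while. As referenced in step 11, here is a small polling sketch; the task ID is a placeholder you would substitute from the `import-image` output:
```
# Poll a single import task until it reaches the "completed" status
TASK_ID="import-ami-xxxxxxxxxxxxxxxxx"   # placeholder - use your real task ID
while true; do
  STATUS=$(aws ec2 describe-import-image-tasks --import-task-ids "$TASK_ID" \
    --query 'ImportImageTasks[0].Status' --output text)
  echo "$(date) ${TASK_ID}: ${STATUS}"
  [ "$STATUS" = "completed" ] && break
  sleep 60
done
```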


@@ -1,27 +1,3 @@
# Terraform configuration to be used with DetectionLab Method1
# Before using this, you must fill out the variables in terraform.tfvars
# Please follow the instructions in https://github.com/clong/DetectionLab/blob/master/Terraform/Method1/Method1.md
variable "region" {
default = "us-west-1"
}
variable "shared_credentials_file" {
type = "string"
}
variable "key_name" {
default = "id_terraform"
}
variable "public_key_path" {
type = "string"
}
variable "ip_whitelist" {
type = "list"
}
variable "logger_ami" {}
variable "dc_ami" {}
variable "wef_ami" {}
variable "win10_ami" {}
# Specify the provider and access details
provider "aws" {
shared_credentials_file = "${var.shared_credentials_file}"
@@ -129,6 +105,14 @@ resource "aws_security_group" "windows" {
cidr_blocks = "${var.ip_whitelist}"
}
# Windows ATA
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = "${var.ip_whitelist}"
}
# Allow all traffic from the private subnet
ingress {
from_port = 0
@@ -147,75 +131,124 @@ resource "aws_security_group" "windows" {
}
resource "aws_key_pair" "auth" {
key_name = "${var.key_name}"
public_key = "${file(var.public_key_path)}"
key_name = "${var.public_key_name}"
public_key = "${file("${var.public_key_path}")}"
}
resource "aws_instance" "logger" {
instance_type = "t3.medium"
ami = "${var.logger_ami}"
instance_type = "t2.medium"
ami = "ami-0ad16744583f21877"
tags {
Name = "logger"
}
subnet_id = "${aws_subnet.default.id}"
vpc_security_group_ids = ["${aws_security_group.logger.id}"]
key_name = "${aws_key_pair.auth.id}"
key_name = "${aws_key_pair.auth.key_name}"
private_ip = "192.168.38.105"
# Run the following commands to restart Fleet
# Provision the AWS Ubuntu 16.04 AMI from scratch.
provisioner "remote-exec" {
inline = [
"cd /home/vagrant/kolide-quickstart && sudo docker-compose stop",
"sudo service docker restart",
"cd /home/vagrant/kolide-quickstart && sudo docker-compose start"
"sudo add-apt-repository universe && sudo apt-get update && sudo apt-get install -y git",
"echo 'logger' | sudo tee /etc/hostname && sudo hostnamectl set-hostname logger",
"sudo adduser --disabled-password --gecos \"\" vagrant && echo 'vagrant:vagrant' | sudo chpasswd",
"echo 'vagrant ALL=(ALL:ALL) NOPASSWD:ALL' | sudo tee -a /etc/sudoers",
"sudo git clone https://github.com/clong/DetectionLab.git /opt/DetectionLab",
"sudo sed -i \"s#sed -i 's/archive.ubuntu.com/us.archive.ubuntu.com/g' /etc/apt/sources.list##g\" /opt/DetectionLab/Vagrant/bootstrap.sh",
"sudo sed -i 's/eth1/eth0/g' /opt/DetectionLab/Vagrant/bootstrap.sh",
"sudo sed -i 's/ETH1/ETH0/g' /opt/DetectionLab/Vagrant/bootstrap.sh",
"sudo sed -i 's#/usr/local/go/bin/go get -u#GOPATH=/root/go /usr/local/go/bin/go get -u#g' /opt/DetectionLab/Vagrant/bootstrap.sh",
"sudo sed -i 's#/vagrant/resources#/opt/DetectionLab/Vagrant/resources#g' /opt/DetectionLab/Vagrant/bootstrap.sh",
"sudo chmod +x /opt/DetectionLab/Vagrant/bootstrap.sh",
"sudo apt-get update",
"sudo /opt/DetectionLab/Vagrant/bootstrap.sh",
"sudo pip3.6 install --upgrade --force-reinstall pip==9.0.3 && sudo pip3.6 install -r /home/vagrant/caldera/caldera/requirements.txt && sudo pip3.6 install --upgrade pip",
"sudo service caldera stop && sudo service caldera start"
]
connection {
type = "ssh"
user = "vagrant"
password = "vagrant"
user = "ubuntu"
private_key = "${file("${var.private_key_path}")}"
}
}
root_block_device {
delete_on_termination = true
volume_size = 64
}
}
resource "aws_instance" "dc" {
instance_type = "t2.small"
ami = "${var.dc_ami}"
instance_type = "t2.medium"
ami = "${data.aws_ami.dc_ami.image_id}"
tags {
Name = "dc.windomain.local"
}
subnet_id = "${aws_subnet.default.id}"
vpc_security_group_ids = ["${aws_security_group.windows.id}"]
private_ip = "192.168.38.102"
provisioner "remote-exec" {
connection = {
type = "winrm"
user = "vagrant"
password = "vagrant"
agent = "false"
insecure = "true"
}
inline = [
"powershell -command \"$newDNSServers = @('192.168.38.102','8.8.8.8'); $adapters = Get-WmiObject Win32_NetworkAdapterConfiguration | Where-Object {$_.IPAddress -match '192.168.38.'}; $adapters | ForEach-Object {$_.SetDNSServerSearchOrder($newDNSServers)}\"",
]
}
root_block_device {
delete_on_termination = true
}
}
resource "aws_instance" "wef" {
instance_type = "t2.small"
ami = "${var.wef_ami}"
instance_type = "t2.medium"
ami = "${data.aws_ami.wef_ami.image_id}"
tags {
Name = "wef.windomain.local"
}
subnet_id = "${aws_subnet.default.id}"
vpc_security_group_ids = ["${aws_security_group.windows.id}"]
private_ip = "192.168.38.103"
provisioner "remote-exec" {
connection = {
type = "winrm"
user = "vagrant"
password = "vagrant"
agent = "false"
insecure = "true"
}
inline = [
"powershell -command \"$newDNSServers = @('192.168.38.102','8.8.8.8'); $adapters = Get-WmiObject Win32_NetworkAdapterConfiguration | Where-Object {$_.IPAddress -match '192.168.38.'}; $adapters | ForEach-Object {$_.SetDNSServerSearchOrder($newDNSServers)}\"",
]
}
root_block_device {
delete_on_termination = true
}
}
resource "aws_instance" "win10" {
instance_type = "t2.small"
ami = "${var.win10_ami}"
instance_type = "t2.medium"
ami = "${data.aws_ami.win10_ami.image_id}"
tags {
Name = "win10.windomain.local"
}
subnet_id = "${aws_subnet.default.id}"
vpc_security_group_ids = ["${aws_security_group.windows.id}"]
private_ip = "192.168.38.104"
provisioner "remote-exec" {
connection = {
type = "winrm"
user = "vagrant"
password = "vagrant"
agent = "false"
insecure = "true"
}
inline = [
"powershell -command \"$newDNSServers = @('192.168.38.102','8.8.8.8'); $adapters = Get-WmiObject Win32_NetworkAdapterConfiguration | Where-Object {$_.IPAddress -match '192.168.38.'}; $adapters | ForEach-Object {$_.SetDNSServerSearchOrder($newDNSServers)}\"",
]
}
root_block_device {
delete_on_termination = true
}
}

15
Terraform/outputs.tf Normal file

@@ -0,0 +1,15 @@
output "logger_public_ip" {
value = "${aws_instance.logger.public_ip}"
}
output "dc_public_ip" {
value = "${aws_instance.dc.public_ip}"
}
output "wef_public_ip" {
value = "${aws_instance.wef.public_ip}"
}
output "win10_public_ip" {
value = "${aws_instance.win10.public_ip}"
}
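Once `terraform apply` finishes, these outputs can be read back at any time. A quick sketch (the SSH username and key path are assumptions taken from the logger connection block and the example tfvars, so adjust them for your setup):
```
# Print a single output value
terraform output logger_public_ip
# Or list all outputs
terraform output
# Example SSH into the logger host (username and key path assumed from the example config)
ssh -i /home/user/.ssh/id_logger ubuntu@$(terraform output logger_public_ip)
```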


@@ -0,0 +1,6 @@
region = "us-west-1"
shared_credentials_file = "/home/user/.aws/credentials"
public_key_name = "id_logger"
public_key_path = "/home/user/.ssh/id_logger.pub"
private_key_path = "/home/user/.ssh/id_logger"
ip_whitelist = ["1.2.3.4/32"]
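If you don't already have a keypair to use for `public_key_path` and `private_key_path`, one way to create a dedicated one matching the example paths above (the filename and lack of passphrase are assumptions):
```
# Generate a dedicated keypair for the lab (no passphrase; adjust to taste)
ssh-keygen -t rsa -b 4096 -f /home/user/.ssh/id_logger -N ''
```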


@@ -0,0 +1,9 @@
[
{
"Description": "dc",
"Format": "ova",
"UserBucket": {
"S3Bucket": "YOUR_BUCKET_GOES_HERE",
"S3Key": "dc.ova"
}
}]


@@ -0,0 +1,9 @@
[
{
"Description": "logger",
"Format": "ova",
"UserBucket": {
"S3Bucket": "YOUR_BUCKET_GOES_HERE",
"S3Key": "logger.ova"
}
}]


@@ -0,0 +1,27 @@
{
"Version":"2012-10-17",
"Statement":[
{
"Effect":"Allow",
"Action":[
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListBucket"
],
"Resource":[
"arn:aws:s3:::YOUR_BUCKET_GOES_HERE",
"arn:aws:s3:::YOUR_BUCKET_GOES_HERE/*"
]
},
{
"Effect":"Allow",
"Action":[
"ec2:ModifySnapshotAttribute",
"ec2:CopySnapshot",
"ec2:RegisterImage",
"ec2:Describe*"
],
"Resource":"*"
}
]
}


@@ -0,0 +1,15 @@
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": { "Service": "vmie.amazonaws.com" },
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals":{
"sts:Externalid": "vmimport"
}
}
}
]
}


@@ -0,0 +1,9 @@
[
{
"Description": "wef",
"Format": "ova",
"UserBucket": {
"S3Bucket": "YOUR_BUCKET_GOES_HERE",
"S3Key": "wef.ova"
}
}]


@@ -0,0 +1,9 @@
[
{
"Description": "win10",
"Format": "ova",
"UserBucket": {
"S3Bucket": "YOUR_BUCKET_GOES_HERE",
"S3Key": "win10.ova"
}
}]


@@ -0,0 +1,56 @@
#! /bin/bash
# This script is used to manually prepare an Ubuntu 16.04 server for DetectionLab building
sed -i 's/archive.ubuntu.com/us.archive.ubuntu.com/g' /etc/apt/sources.list
if [[ "$VAGRANT_ONLY" -eq 1 ]] && [[ "$PACKER_ONLY" -eq 1 ]]; then
echo "Somehow this build is configured as both packer-only and vagrant-only. This means something has gone horribly wrong."
exit 1
fi
# Install Virtualbox 5.2
echo "deb http://download.virtualbox.org/virtualbox/debian xenial contrib" >> /etc/apt/sources.list
wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -
apt-get update
apt-get install -y linux-headers-"$(uname -r)" virtualbox-5.2 build-essential unzip git ufw apache2 python-pip
pip install awscli --upgrade --user
export PATH=$PATH:/root/.local/bin
echo "building" > /var/www/html/index.html
# Set up firewall
ufw allow ssh
ufw allow http
ufw default allow outgoing
ufw --force enable
git clone https://github.com/clong/DetectionLab.git /opt/DetectionLab
# Install Vagrant
mkdir /opt/vagrant
cd /opt/vagrant || exit 1
wget https://releases.hashicorp.com/vagrant/2.2.4/vagrant_2.2.4_x86_64.deb
dpkg -i vagrant_2.2.4_x86_64.deb
vagrant plugin install vagrant-reload
# Make the Vagrant instances headless
cd /opt/DetectionLab/Vagrant || exit 1
sed -i 's/vb.gui = true/vb.gui = false/g' Vagrantfile
# Install Packer
mkdir /opt/packer
cd /opt/packer || exit 1
wget https://releases.hashicorp.com/packer/1.3.2/packer_1.3.2_linux_amd64.zip
unzip packer_1.3.2_linux_amd64.zip
cp packer /usr/local/bin/packer
# Make the Packer images headless
cd /opt/DetectionLab/Packer || exit 1
for file in *.json; do
sed -i 's/"headless": false,/"headless": true,/g' "$file";
done
# Ensure the script is executable
chmod +x /opt/DetectionLab/build.sh
cd /opt/DetectionLab || exit 1
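A hypothetical way to run this preparation script on a fresh Ubuntu 16.04 build host over SSH; the script filename and hostname below are assumptions, not names used by the repo:
```
# Copy the preparation script to the build host and run it as root (names are placeholders)
scp prepare_build_host.sh ubuntu@build-host.example.com:/tmp/
ssh ubuntu@build-host.example.com 'sudo bash /tmp/prepare_build_host.sh'
```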