Azure development

Chris Long
2020-06-14 18:45:18 -07:00
parent a033ea2b60
commit 5791b99c8f
35 changed files with 1236 additions and 17 deletions

AWS/Terraform/Pre-Built_AMIs.md Normal file

@@ -0,0 +1,25 @@
# Method 1 - Use Pre-Built AMIs
This method uses Terraform to bring DetectionLab infrastructure online by using pre-built shared AMIs.
The supplied Terraform configuration can then be used to create EC2 instances and all requisite networking components.
## Prerequisites
* A system with Terraform, the AWS CLI, and git installed
* An AWS account
* AWS credentials for Terraform
## Step by step guide
1. Ensure the prerequisites are installed:
* [Terraform](https://www.terraform.io/downloads.html)
* [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html)
* [git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
2. [Configure the AWS command line utility](https://docs.aws.amazon.com/polly/latest/dg/setup-aws-cli.html) and set up a user for Terraform via `aws configure --profile terraform`.
3. Create a private/public keypair to use to SSH into logger: `ssh-keygen -b 2048 -f ~/.ssh/id_logger`
4. Copy the file at [/DetectionLab/Terraform/terraform.tfvars.example](./terraform.tfvars.example) to `/DetectionLab/Terraform/terraform.tfvars`
5. In `terraform.tfvars`, provide overrides for the variables specified in [variables.tf](./variables.tf)
6. From the `/DetectionLab/Terraform` directory, run `terraform init` to set up the initial Terraform configuration
7. Run `terraform apply` to begin the provisioning process
[![DetectionLab - Terraform](https://i.vimeocdn.com/video/777172792_640.webp)](https://vimeo.com/331695321)
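Step 3 can be run non-interactively as sketched below. The temp directory stands in for `~/.ssh`, and the empty passphrase (`-N ""`) is for illustration only; protect any long-lived key with a real passphrase.

```shell
# Generate the keypair Terraform loads into logger's authorized_keys file.
KEYDIR="$(mktemp -d)"   # stand-in for ~/.ssh in this sketch
ssh-keygen -t rsa -b 2048 -f "$KEYDIR/id_logger" -N "" -q
ls "$KEYDIR"            # lists id_logger and id_logger.pub
```

Point `public_key_path` and `private_key_path` in `terraform.tfvars` at wherever the key actually lives.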

AWS/Terraform/README.md Normal file

@@ -0,0 +1,21 @@
# DetectionLab Terraform
### Method 1 - Pre-built AMIs
#### Estimated time to build: 30 minutes
As of March 2019, I am sharing pre-built AMIs on the AWS Marketplace. The code in `main.tf` uses Terraform data sources to determine the correct AMI IDs and will use the pre-built AMIs by default.
Using this method, it should be possible to bring DetectionLab online in under 30 minutes.
The instructions for deploying DetectionLab in AWS using the pre-built AMIs are available here: [Pre-Built AMIs README](./Pre-Built_AMIs.md)
### Method 2 - Building the VMs locally and exporting them to AWS as AMIs
#### Estimated time to build: 3-4 hours
One method for spinning up DetectionLab in AWS is to first build it locally using VirtualBox or VMware. You can then use AWS's VM import capabilities to create AMIs from the resulting virtual machines. Once that process is complete, the infrastructure can be spun up using the supplied Terraform configuration.
This method has the benefit of allowing users to customize the VMs before importing them to AWS.
The instructions for deploying DetectionLab in AWS via this method are available here: [Build Your Own AMIs README](./VM_to_AMIs.md)

AWS/Terraform/VM_to_AMIs.md Normal file

@@ -0,0 +1,45 @@
# Method 2 - Build Locally and Import to AWS
This method uses Terraform to bring DetectionLab infrastructure online by first building it locally with VirtualBox/VMware and then [importing the resulting virtual machines](https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html#import-vm-image) as AMIs on AWS.
The supplied Terraform configuration can then be used to create EC2 instances and all requisite networking components.
## Prerequisites
* A machine to build DetectionLab with
* An AWS account
* An AWS user and access keys to use with the AWS CLI
* Optional but recommended: a separate user for Terraform
## Step by step guide
1. Build the lab by following the [README](https://github.com/clong/DetectionLab/blob/master/README.md)
2. [Configure the AWS command line utility](https://docs.aws.amazon.com/polly/latest/dg/setup-aws-cli.html)
3. [Create an S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-bucket.html). You will upload the DetectionLab VMs to this bucket later.
4. For the VM import to work, you must create a role named `vmimport` with a trust relationship policy document that allows VM Import to assume the role:
```aws iam create-role --role-name vmimport --assume-role-policy-document file:///path/to/DetectionLab/Terraform/vm_import/trust-policy.json```
5. Edit `/path/to/DetectionLab/Terraform/vm_import/role-policy.json` and insert the name of the bucket you created in step 3 on lines 12-13, replacing `YOUR_BUCKET_GOES_HERE` with the name of your bucket.
6. Use the `put-role-policy` command to attach the IAM policy to the `vmimport` role, giving VM Import/Export access to it:
```aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file:///path/to/DetectionLab/Terraform/vm_import/role-policy.json```
7. Export the DetectionLab VMs as single-file OVAs if they are not already in that format
8. [Upload the OVAs to the S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/upload-objects.html) you created in step 3
9. Edit the `dc.json`, `wef.json`, and `win10.json` files and modify the `S3Bucket` and `S3Key` values to match the location of the OVA files in your S3 bucket.
10. Import the VMs from S3 as AMIs by running the following commands:
```
aws ec2 import-image --description "dc" --license-type byol --disk-containers file:///path/to/DetectionLab/Terraform/vm_import/dc.json
aws ec2 import-image --description "wef" --license-type byol --disk-containers file:///path/to/DetectionLab/Terraform/vm_import/wef.json
aws ec2 import-image --description "win10" --license-type byol --disk-containers file:///path/to/DetectionLab/Terraform/vm_import/win10.json
```
11. Check on the status of the importation with the following command:
```aws ec2 describe-import-image-tasks --import-task-ids <import-ami-xxxxxxxxxxxxxxxxx>```
12. Copy the file at [/DetectionLab/Terraform/terraform.tfvars.example](./terraform.tfvars.example) to `/DetectionLab/Terraform/terraform.tfvars`
13. Fill out the variables in `/DetectionLab/Terraform/terraform.tfvars`
14. From the `DetectionLab/Terraform` directory, run `terraform init` to set up the initial Terraform configuration
15. Run `terraform apply` to begin provisioning
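The placeholder swap in step 5 can also be scripted. The snippet below creates a stub `role-policy.json` so the example is self-contained; in practice, run only the `sed` line against `/path/to/DetectionLab/Terraform/vm_import/role-policy.json`, and note that the bucket name shown is a made-up example (BSD/macOS `sed` needs `-i ''` instead of `-i`).

```shell
# Stub containing the same placeholder the real role-policy.json uses
# (created here only so this example is self-contained).
cat > role-policy.json <<'EOF'
{
  "Resource": [
    "arn:aws:s3:::YOUR_BUCKET_GOES_HERE",
    "arn:aws:s3:::YOUR_BUCKET_GOES_HERE/*"
  ]
}
EOF
# Swap in your bucket name (example name shown).
sed -i 's/YOUR_BUCKET_GOES_HERE/my-detectionlab-bucket/g' role-policy.json
grep -c 'my-detectionlab-bucket' role-policy.json   # prints 2
```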

AWS/Terraform/locals.tf Normal file

@@ -0,0 +1,6 @@
locals {
  fleet_url     = "https://${aws_instance.logger.public_ip}:8412"
  splunk_url    = "https://${aws_instance.logger.public_ip}:8000"
  ata_url       = "https://${aws_instance.wef.public_ip}"
  guacamole_url = "http://${aws_instance.logger.public_ip}:8080/guacamole"
}

AWS/Terraform/main.tf Normal file

@@ -0,0 +1,307 @@
# Specify the provider and access details
provider "aws" {
  shared_credentials_file = var.shared_credentials_file
  region                  = var.region
  profile                 = var.profile
}

# Create a VPC to launch our instances into
resource "aws_vpc" "default" {
  cidr_block = "192.168.0.0/16"
}

# Create an internet gateway to give our subnet access to the outside world
resource "aws_internet_gateway" "default" {
  vpc_id = aws_vpc.default.id
}

# Grant the VPC internet access on its main route table
resource "aws_route" "internet_access" {
  route_table_id         = aws_vpc.default.main_route_table_id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.default.id
}

# Create a subnet to launch our instances into
resource "aws_subnet" "default" {
  vpc_id                  = aws_vpc.default.id
  cidr_block              = "192.168.38.0/24"
  availability_zone       = var.availability_zone
  map_public_ip_on_launch = true
}

# Adjust VPC DNS settings to not conflict with the lab
resource "aws_vpc_dhcp_options" "default" {
  domain_name          = "windomain.local"
  domain_name_servers  = concat([aws_instance.dc.private_ip], var.external_dns_servers)
  netbios_name_servers = [aws_instance.dc.private_ip]
}

resource "aws_vpc_dhcp_options_association" "default" {
  vpc_id          = aws_vpc.default.id
  dhcp_options_id = aws_vpc_dhcp_options.default.id
}

# Our default security group for the logger host
resource "aws_security_group" "logger" {
  name        = "logger_security_group"
  description = "DetectionLab: Security Group for the logger host"
  vpc_id      = aws_vpc.default.id

  # SSH access
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = var.ip_whitelist
  }

  # Splunk access
  ingress {
    from_port   = 8000
    to_port     = 8000
    protocol    = "tcp"
    cidr_blocks = var.ip_whitelist
  }

  # Fleet access
  ingress {
    from_port   = 8412
    to_port     = 8412
    protocol    = "tcp"
    cidr_blocks = var.ip_whitelist
  }

  # Guacamole access
  ingress {
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = var.ip_whitelist
  }

  ingress {
    from_port   = 8443
    to_port     = 8443
    protocol    = "tcp"
    cidr_blocks = var.ip_whitelist
  }

  # Allow all traffic from the private subnet
  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["192.168.38.0/24"]
  }

  # Outbound internet access
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "windows" {
  name        = "windows_security_group"
  description = "DetectionLab: Security group for the Windows hosts"
  vpc_id      = aws_vpc.default.id

  # RDP
  ingress {
    from_port   = 3389
    to_port     = 3389
    protocol    = "tcp"
    cidr_blocks = var.ip_whitelist
  }

  # WinRM
  ingress {
    from_port   = 5985
    to_port     = 5986
    protocol    = "tcp"
    cidr_blocks = var.ip_whitelist
  }

  # Windows ATA
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = var.ip_whitelist
  }

  # Allow all traffic from the private subnet
  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["192.168.38.0/24"]
  }

  # Outbound internet access
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_key_pair" "auth" {
  key_name   = var.public_key_name
  public_key = file(var.public_key_path)
}

resource "aws_instance" "logger" {
  instance_type = "t3.medium"
  ami           = coalesce(var.logger_ami, data.aws_ami.logger_ami.image_id)
  tags = {
    Name = "logger"
  }
  subnet_id              = aws_subnet.default.id
  vpc_security_group_ids = [aws_security_group.logger.id]
  key_name               = aws_key_pair.auth.key_name
  private_ip             = "192.168.38.105"

  # Provision the AWS Ubuntu 18.04 AMI from scratch.
  provisioner "remote-exec" {
    inline = [
      "sudo apt-get -qq update && sudo apt-get -qq install -y git",
      "echo 'logger' | sudo tee /etc/hostname && sudo hostnamectl set-hostname logger",
      "sudo adduser --disabled-password --gecos \"\" vagrant && echo 'vagrant:vagrant' | sudo chpasswd",
      "sudo mkdir /home/vagrant/.ssh && sudo cp /home/ubuntu/.ssh/authorized_keys /home/vagrant/.ssh/authorized_keys && sudo chown -R vagrant:vagrant /home/vagrant/.ssh",
      "echo 'vagrant ALL=(ALL:ALL) NOPASSWD:ALL' | sudo tee -a /etc/sudoers",
      "sudo git clone https://github.com/clong/DetectionLab.git /opt/DetectionLab",
      "sudo sed -i 's/eth1/ens5/g' /opt/DetectionLab/Vagrant/bootstrap.sh",
      "sudo sed -i 's/ETH1/ens5/g' /opt/DetectionLab/Vagrant/bootstrap.sh",
      "sudo sed -i 's/eth1/ens5/g' /opt/DetectionLab/Vagrant/resources/suricata/suricata.yaml",
      "sudo sed -i 's#/vagrant/resources#/opt/DetectionLab/Vagrant/resources#g' /opt/DetectionLab/Vagrant/bootstrap.sh",
      "sudo sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config",
      "sudo service ssh restart",
      "sudo chmod +x /opt/DetectionLab/Vagrant/bootstrap.sh",
      "sudo apt-get -qq update",
      "sudo /opt/DetectionLab/Vagrant/bootstrap.sh",
    ]

    connection {
      host        = coalesce(self.public_ip, self.private_ip)
      type        = "ssh"
      user        = "ubuntu"
      private_key = file(var.private_key_path)
    }
  }

  root_block_device {
    delete_on_termination = true
    volume_size           = 64
  }
}

resource "aws_instance" "dc" {
  instance_type = "t3.medium"

  provisioner "remote-exec" {
    inline = [
      "choco install -force -y winpcap",
      "ipconfig /renew",
      "powershell.exe -c \"Add-Content 'c:\\windows\\system32\\drivers\\etc\\hosts' ' 192.168.38.103 wef.windomain.local'\"",
    ]

    connection {
      type     = "winrm"
      user     = "vagrant"
      password = "vagrant"
      host     = coalesce(self.public_ip, self.private_ip)
    }
  }

  # Uses the dc_ami variable if set; otherwise falls back to the data source AMI
  ami = coalesce(var.dc_ami, data.aws_ami.dc_ami.image_id)
  tags = {
    Name = "dc.windomain.local"
  }
  subnet_id              = aws_subnet.default.id
  vpc_security_group_ids = [aws_security_group.windows.id]
  private_ip             = "192.168.38.102"

  root_block_device {
    delete_on_termination = true
  }
}

resource "aws_instance" "wef" {
  instance_type = "t3.medium"

  provisioner "remote-exec" {
    inline = [
      "choco install -force -y winpcap",
      "powershell.exe -c \"Add-Content 'c:\\windows\\system32\\drivers\\etc\\hosts' ' 192.168.38.102 dc.windomain.local'\"",
      "powershell.exe -c \"Add-Content 'c:\\windows\\system32\\drivers\\etc\\hosts' ' 192.168.38.102 windomain.local'\"",
      "ipconfig /renew",
    ]

    connection {
      type     = "winrm"
      user     = "vagrant"
      password = "vagrant"
      host     = coalesce(self.public_ip, self.private_ip)
    }
  }

  # Uses the wef_ami variable if set; otherwise falls back to the data source AMI
  ami = coalesce(var.wef_ami, data.aws_ami.wef_ami.image_id)
  tags = {
    Name = "wef.windomain.local"
  }
  subnet_id              = aws_subnet.default.id
  vpc_security_group_ids = [aws_security_group.windows.id]
  private_ip             = "192.168.38.103"

  root_block_device {
    delete_on_termination = true
  }
}

resource "aws_instance" "win10" {
  instance_type = "t2.medium"

  provisioner "remote-exec" {
    inline = [
      "choco install -force -y winpcap",
      "powershell.exe -c \"Add-Content 'c:\\windows\\system32\\drivers\\etc\\hosts' ' 192.168.38.102 dc.windomain.local'\"",
      "powershell.exe -c \"Add-Content 'c:\\windows\\system32\\drivers\\etc\\hosts' ' 192.168.38.102 windomain.local'\"",
      "ipconfig /renew",
    ]

    connection {
      type     = "winrm"
      user     = "vagrant"
      password = "vagrant"
      host     = coalesce(self.public_ip, self.private_ip)
    }
  }

  # Uses the win10_ami variable if set; otherwise falls back to the data source AMI
  ami = coalesce(var.win10_ami, data.aws_ami.win10_ami.image_id)
  tags = {
    Name = "win10.windomain.local"
  }
  subnet_id              = aws_subnet.default.id
  vpc_security_group_ids = [aws_security_group.windows.id]
  private_ip             = "192.168.38.104"

  root_block_device {
    delete_on_termination = true
  }
}

AWS/Terraform/outputs.tf Normal file

@@ -0,0 +1,35 @@
output "region" {
  value = var.region
}

output "logger_public_ip" {
  value = aws_instance.logger.public_ip
}

output "dc_public_ip" {
  value = aws_instance.dc.public_ip
}

output "wef_public_ip" {
  value = aws_instance.wef.public_ip
}

output "win10_public_ip" {
  value = aws_instance.win10.public_ip
}

output "ata_url" {
  value = local.ata_url
}

output "fleet_url" {
  value = local.fleet_url
}

output "splunk_url" {
  value = local.splunk_url
}

output "guacamole_url" {
  value = local.guacamole_url
}

AWS/Terraform/terraform.tfvars.example Normal file

@@ -0,0 +1,8 @@
region                  = "us-west-1"
profile                 = "terraform"
shared_credentials_file = "/home/user/.aws/credentials"
public_key_name         = "id_logger"
public_key_path         = "/home/user/.ssh/id_logger.pub"
private_key_path        = "/home/user/.ssh/id_logger"
ip_whitelist            = ["1.2.3.4/32"]
availability_zone       = "us-west-1b"

AWS/Terraform/variables.tf Normal file

@@ -0,0 +1,112 @@
variable "region" {
  default = "us-west-1"
}

variable "profile" {
  default = "terraform"
}

variable "availability_zone" {
  description = "https://www.terraform.io/docs/providers/aws/d/availability_zone.html"
  default     = ""
}

variable "shared_credentials_file" {
  description = "Path to your AWS credentials file"
  type        = string
  default     = "/home/username/.aws/credentials"
}

variable "public_key_name" {
  description = "The name of the AWS keypair used to authenticate to logger. Can be anything you specify."
  default     = "id_logger"
}

variable "public_key_path" {
  description = "Path to the public key to be loaded into the logger authorized_keys file"
  type        = string
  default     = "/home/username/.ssh/id_logger.pub"
}

variable "private_key_path" {
  description = "Path to the private key used to authenticate to logger"
  type        = string
  default     = "/home/username/.ssh/id_logger"
}

variable "ip_whitelist" {
  description = "A list of CIDRs that will be allowed to access the EC2 instances"
  type        = list(string)
  default     = [""]
}

variable "external_dns_servers" {
  description = "Configure lab to allow external DNS resolution"
  type        = list(string)
  default     = ["8.8.8.8"]
}

# Use data sources to resolve the AMI ID for the Ubuntu 18.04 AMI
data "aws_ami" "logger_ami" {
  owners = ["099720109477"]

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-20191113"]
  }
}

# Use data sources to resolve the AMI ID for the pre-built DC host
data "aws_ami" "dc_ami" {
  owners = ["505638924199"]

  filter {
    name   = "name"
    values = ["detectionlab-dc"]
  }
}

# Use data sources to resolve the AMI ID for the pre-built WEF host
data "aws_ami" "wef_ami" {
  owners      = ["505638924199"]
  most_recent = true

  filter {
    name   = "name"
    values = ["detectionlab-wef"]
  }
}

# Use data sources to resolve the AMI ID for the pre-built Win10 host
data "aws_ami" "win10_ami" {
  owners      = ["505638924199"]
  most_recent = true

  filter {
    name   = "name"
    values = ["detectionlab-win10"]
  }
}

# If you are building your own AMIs, replace the default values below with
# the AMI IDs
variable "logger_ami" {
  type    = string
  default = ""
}

variable "dc_ami" {
  type    = string
  default = ""
}

variable "wef_ami" {
  type    = string
  default = ""
}

variable "win10_ami" {
  type    = string
  default = ""
}
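If you built and imported your own AMIs (Method 2), the resolved image IDs can be pinned through these variables in `terraform.tfvars`; because `main.tf` wraps each in `coalesce()`, a non-empty value here wins over the data-source lookup. The IDs below are placeholders, not real AMIs:

```hcl
# terraform.tfvars overrides for self-built AMIs (placeholder IDs shown;
# substitute the IDs reported once each import-image task completes).
dc_ami     = "ami-0123456789abcdef0"
wef_ami    = "ami-0123456789abcdef1"
win10_ami  = "ami-0123456789abcdef2"
# Leave logger_ami empty to keep the stock Ubuntu 18.04 data source.
logger_ami = ""
```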


@@ -0,0 +1,4 @@
terraform {
  required_version = ">= 0.12"
}

AWS/Terraform/vm_import/dc.json Normal file

@@ -0,0 +1,9 @@
[
  {
    "Description": "dc",
    "Format": "ova",
    "UserBucket": {
      "S3Bucket": "YOUR_BUCKET_GOES_HERE",
      "S3Key": "dc.ova"
    }
  }
]

AWS/Terraform/vm_import/role-policy.json Normal file

@@ -0,0 +1,27 @@
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR_BUCKET_GOES_HERE",
        "arn:aws:s3:::YOUR_BUCKET_GOES_HERE/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:ModifySnapshotAttribute",
        "ec2:CopySnapshot",
        "ec2:RegisterImage",
        "ec2:Describe*"
      ],
      "Resource": "*"
    }
  ]
}

AWS/Terraform/vm_import/trust-policy.json Normal file

@@ -0,0 +1,15 @@
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "vmie.amazonaws.com" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:Externalid": "vmimport"
        }
      }
    }
  ]
}

AWS/Terraform/vm_import/wef.json Normal file

@@ -0,0 +1,9 @@
[
  {
    "Description": "wef",
    "Format": "ova",
    "UserBucket": {
      "S3Bucket": "YOUR_BUCKET_GOES_HERE",
      "S3Key": "wef.ova"
    }
  }
]

AWS/Terraform/vm_import/win10.json Normal file

@@ -0,0 +1,9 @@
[
  {
    "Description": "win10",
    "Format": "ova",
    "UserBucket": {
      "S3Bucket": "YOUR_BUCKET_GOES_HERE",
      "S3Key": "win10.ova"
    }
  }
]