first commit
commit 7ae0b00241 (2022-12-27 21:59:06 +01:00)
29 changed files with 1157 additions and 0 deletions

.gitignore vendored Normal file

@@ -0,0 +1,19 @@
.env
.vagrant/
auditbeat-*.deb
data/certificates/certs/
data/opensearch-dashboards/certs/
data/opensearch-node1/certs/
data/opensearch-node1/data/
data/opensearch-node1/config/internal_users.yml
data/opensearch-node2/certs/
data/opensearch-node2/data/
data/opensearch-node2/config/internal_users.yml
data/grafana/data/
data/traefik/certs/
data/apidemo-cron/output/
data/apidemo-filebeat/data/
data/syslog-filebeat/data/
data/graylog/data/
data/graylog-mongodb/configdb/
data/graylog-mongodb/db/

README.md Normal file

@@ -0,0 +1,99 @@
# Log Collection Lab
## Overview
This repository creates a two-node OpenSearch cluster in a fully automated way.
Certificates are created (no demo certs) and the default passwords are changed.
It includes demo configurations for Logstash and Filebeat.

Full feature list:
- Vagrant downloads and creates an Ubuntu VM and installs Docker
- setup container that creates a CA and certificates for the services
- two-node OpenSearch cluster
  - uses your own certificates
  - default passwords changed
- opensearch-dashboards
  - creates the index pattern automatically in the global tenant
- Grafana
  - creates the datasource automatically
- traefik reverse proxy for opensearch-dashboards
- mdns responder to allow connections by hostname instead of IP
- beats Logstash container so other computers in your lab can send data (Winlogbeat, Auditbeat, or Packetbeat)
- log receiver for Cisco ASA, Cisco IOS, Checkpoint, Snort, CEF, Syslog, and NetFlow
- cron container that periodically downloads JSON data from an API, including a Filebeat/Logstash pipeline

The repository was created to give you a starting point for your own OpenSearch installation.
It shows you how to change common settings and replace default passwords and certificates.
The demo configurations give you a start with file ingestion, syslog, grok, Beats, Logstash configs, certificate creation, Docker, Vagrant, and more.
## Requirements
Install VirtualBox and Vagrant on your computer.
## Start
Run
```
vagrant up
```
in the repository directory.
## Opensearch Dashboard login
URL: https://opensearch.local (or http://192.168.57.2:5601)
Username: admin
Password: vagrant
## Grafana Dashboard login
URL: https://grafana.local
Username: admin
Password: vagrant
# Network
The logger virtual machine has three network interfaces:
1. NAT
2. private network with static IP 192.168.57.2 (only reachable from your host)
3. bridged network with DHCP

You can send Beats data from other hosts to the bridged IP address.
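The hostnames are announced over mDNS by the mdns container (see data/mdns/config/names.csv). If your host does not resolve .local names, a minimal workaround is a hosts-file entry mapping them to the private IP:
```
192.168.57.2 opensearch.local traefik.local grafana.local
```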
## Beats
- install a Beats collector (https://www.elastic.co/beats/) on your computer
  - winlogbeat, auditbeat, or packetbeat are easy to get started with
- change the output settings in the configuration file to:
```
output.logstash:
  hosts: ["192.168.57.2:5044"]
```
You will need to remove the output.elasticsearch section.
- the information will be stored in OpenSearch in a daily index (logstash-beats-<date>)
- you just need to create the index pattern (see the sketch below)
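If you prefer the API over the UI, the index pattern can be created the same way the setup container creates logstash-* (see the setup scripts later in this commit). A hedged sketch, run from your host; the pattern name logstash-beats-* is an assumed choice for the Beats index:
```
curl -k -X POST "https://opensearch.local/api/saved_objects/index-pattern/logstash-beats-*" \
  -u 'admin:vagrant' \
  -H "securitytenant: global" \
  -H "osd-xsrf: true" \
  -H "content-type: application/json" \
  -d '{ "attributes": { "title": "logstash-beats-*", "timeFieldName": "@timestamp" } }'
```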
## Password changes
All passwords are set to "vagrant" in this repository.
The password hashes are stored in internal_users.yml, and the Logstash clear-text password is in the .env file (used by the Logstash containers).
If you want to change the passwords, you need to replace the hashes and tell OpenSearch to reload the security configuration.
The securityadmin command must be executed on both OpenSearch nodes:
```
docker exec -it opensearch-node1 /usr/share/opensearch/plugins/opensearch-security/tools/securityadmin.sh --clustername opensearch-cluster --configdir /usr/share/opensearch/config/opensearch-security -cacert /usr/share/opensearch/config/certs/opensearch-ca.pem -key /usr/share/opensearch/config/certs/opensearch-admin.key -cert /usr/share/opensearch/config/certs/opensearch-admin.pem -h `cat /etc/hostname`
docker exec -it opensearch-node2 /usr/share/opensearch/plugins/opensearch-security/tools/securityadmin.sh --clustername opensearch-cluster --configdir /usr/share/opensearch/config/opensearch-security -cacert /usr/share/opensearch/config/certs/opensearch-ca.pem -key /usr/share/opensearch/config/certs/opensearch-admin.key -cert /usr/share/opensearch/config/certs/opensearch-admin.pem -h `cat /etc/hostname`
```
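New hashes can be generated with the hash.sh tool that ships with the security plugin (the tool path is noted at the top of the internal_users example file); a minimal sketch, assuming the containers are already running:
```
# generate a bcrypt hash to paste into internal_users.yml
docker exec -it opensearch-node1 /usr/share/opensearch/plugins/opensearch-security/tools/hash.sh -p 'my-new-password'
```
When changing the logstash user, remember to also update LOGSTASH_PASSWORD in the .env file.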
# Troubleshooting
## Docker
```
vagrant ssh
```
-> logs you into the VM
```
sudo -s
cd /vagrant
docker-compose logs -f
```
-> all files are mapped into this folder of the VM. You can use all docker and docker-compose commands as usual (ps, exec, ...); see the sketch below.
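For example, a few common commands (a sketch; run as root in /vagrant):
```
docker-compose ps                         # list containers and their state
docker-compose logs -f opensearch-node1   # follow the logs of one service
docker exec -it opensearch-node1 bash     # open a shell inside a container
```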
## OpenSearch
Useful queries (e.g. via the Dev Tools console in OpenSearch Dashboards):
```
GET _cat/shards?v=true&h=index,shard,prirep,state,node,unassigned.reason&s=state
GET _cluster/allocation/explain
GET _cluster/settings?flat_settings=true&include_defaults=true
PUT _cluster/settings
{ "persistent" : { "cluster.routing.allocation.enable" : "all" } }
```

Vagrantfile vendored Normal file

@@ -0,0 +1,37 @@
Vagrant.configure("2") do |config|
  config.vm.define "opensearch", autostart: true do |cfg|
    cfg.vm.box = "ubuntu/jammy64"
    cfg.vm.hostname = "opensearch"
    cfg.vm.network :private_network, ip: "192.168.57.2", gateway: "192.168.57.1", dns: "8.8.8.8"
    cfg.vm.network "public_network"
    cfg.vm.boot_timeout = 1200
    cfg.vm.provider "virtualbox" do |vb|
      vb.gui = true
      vb.name = "opensearch"
      vb.cpus = 2
      vb.memory = "8192"
    end
    cfg.vm.provision "shell", run: "once", inline: <<-SHELL
      export DEBIAN_FRONTEND=noninteractive
      rm -rf /var/lib/apt/lists/*
      apt update
      apt -y upgrade
      apt -y install docker.io docker-compose
      apt -y autoremove
      apt clean
      echo vm.max_map_count=262144 >> /etc/sysctl.conf
      sysctl -p
      cd /vagrant
      docker-compose up -d
      mkdir /opt/install && cd /opt/install
      wget https://artifacts.elastic.co/downloads/beats/auditbeat/auditbeat-oss-7.12.1-amd64.deb
      dpkg -i auditbeat-oss-7.12.1-amd64.deb
      echo "give opensearch some time to start"
      echo "connect to opensearch-dashboards afterwards with"
      echo "URL: https://opensearch.local/ (or http://192.168.57.2:5601)"
      echo "Username: admin"
      echo "Password: vagrant"
    SHELL
  end
end

data/apidemo-cron/build/Dockerfile Normal file

@@ -0,0 +1,9 @@
FROM ubuntu:22.04
RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install cron curl jq dos2unix
COPY entrypoint.sh /opt/entrypoint.sh
RUN dos2unix /opt/entrypoint.sh ; chmod +x /opt/entrypoint.sh
CMD ["sh", "/opt/entrypoint.sh"]

data/apidemo-cron/build/entrypoint.sh Normal file

@@ -0,0 +1,8 @@
#!/bin/bash
dos2unix $COMMAND
echo "$SCHEDULE $USER $COMMAND" > /etc/cron.d/api-cronjob
chmod 0644 /etc/cron.d/api-cronjob
crontab /etc/cron.d/api-cronjob
touch /var/log/cron.log
env > /etc/environment && cron -f

data/apidemo-cron/scripts/get_cryptocurrency.sh Normal file

@@ -0,0 +1,5 @@
#!/bin/bash
DATE=`date +"%Y-%m-%d"`
curl https://api.coindesk.com/v1/bpi/currentprice.json > /tmp/cryptocurrency.json
jq -c 'del(.disclaimer)' /tmp/cryptocurrency.json >> /opt/output/cryptocurrency_$DATE.json
find /opt/output/ -mtime +5 -delete

data/apidemo-filebeat/config/filebeat.yml Normal file

@@ -0,0 +1,13 @@
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - ${INPUT_PATH}
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
output.logstash:
  enabled: true
  hosts: ["${LOGSTASH_HOST}"]

data/apidemo-logstash/config/logstash.conf Normal file

@@ -0,0 +1,26 @@
input {
  beats {
    port => 5044
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  #stdout {}
  #file {
  #  path => "/tmp/output.json"
  #}
  opensearch {
    hosts => ["${OPENSEARCH_HOST}"]
    index => "${OPENSEARCH_INDEX}-%{+YYYY-MM-dd}"
    user => "${LOGSTASH_USER}"
    password => "${LOGSTASH_PASSWORD}"
    ssl => true
    ssl_certificate_verification => false
  }
}

data/beats-logstash/config/logstash.conf Normal file

@@ -0,0 +1,23 @@
input {
  beats {
    port => 5044
  }
}
filter {
}
output {
  #stdout {}
  #file {
  #  path => "/tmp/output.json"
  #}
  opensearch {
    hosts => ["${OPENSEARCH_HOST}"]
    index => "${OPENSEARCH_INDEX}-%{+YYYY-MM-dd}"
    user => "${LOGSTASH_USER}"
    password => "${LOGSTASH_PASSWORD}"
    ssl => true
    ssl_certificate_verification => false
  }
}

data/mdns/build/Dockerfile Normal file

@@ -0,0 +1,9 @@
FROM ubuntu:22.04
RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install avahi-utils libnss-mdns dos2unix
ADD entrypoint.sh /opt/entrypoint.sh
RUN dos2unix /opt/entrypoint.sh ; chmod +x /opt/entrypoint.sh
CMD ["sh", "/opt/entrypoint.sh"]

data/mdns/build/entrypoint.sh Normal file

@@ -0,0 +1,15 @@
#!/bin/bash
service dbus start
service avahi-daemon start
dos2unix /opt/config/names.csv
while read LINE; do
    PUBLISH_HOSTNAME=$(echo $LINE | cut -d ";" -f 1)
    PUBLISH_IP=$(echo $LINE | cut -d ";" -f 2)
    echo "$PUBLISH_HOSTNAME - $PUBLISH_IP"
    /usr/bin/avahi-publish -a -R $PUBLISH_HOSTNAME $PUBLISH_IP &
done < /opt/config/names.csv
tail -f /dev/null

data/mdns/config/names.csv Normal file

@@ -0,0 +1,3 @@
opensearch.local;192.168.57.2
traefik.local;192.168.57.2
grafana.local;192.168.57.2

data/opensearch-node1/config/internal_users_example.yml Normal file

@@ -0,0 +1,59 @@
---
# This is the internal user database
# The hash value is a bcrypt hash and can be generated with /usr/share/opensearch/plugins/opensearch-security/tools/hash.sh

_meta:
  type: "internalusers"
  config_version: 2

admin:
  hash: "$2y$12$x22en27Ec7WS8OmtW1MxMeu7l0GHHrSwEn3HMH/o4JcKeeAQ.UGFK"
  reserved: true
  backend_roles:
  - "admin"
  description: "Demo admin user"

anomalyadmin:
  hash: "$2y$12$x22en27Ec7WS8OmtW1MxMeu7l0GHHrSwEn3HMH/o4JcKeeAQ.UGFK"
  reserved: false
  opendistro_security_roles:
  - "anomaly_full_access"
  description: "Demo anomaly admin user, using internal role"

kibanaserver:
  hash: "$2y$12$x22en27Ec7WS8OmtW1MxMeu7l0GHHrSwEn3HMH/o4JcKeeAQ.UGFK"
  reserved: true
  description: "Demo OpenSearch Dashboards user"

kibanaro:
  hash: "$2y$12$x22en27Ec7WS8OmtW1MxMeu7l0GHHrSwEn3HMH/o4JcKeeAQ.UGFK"
  reserved: false
  backend_roles:
  - "kibanauser"
  - "readall"
  attributes:
    attribute1: "value1"
    attribute2: "value2"
    attribute3: "value3"
  description: "Demo OpenSearch Dashboards read only user, using external role mapping"

logstash:
  hash: "$2y$12$x22en27Ec7WS8OmtW1MxMeu7l0GHHrSwEn3HMH/o4JcKeeAQ.UGFK"
  reserved: false
  backend_roles:
  - "logstash"
  description: "Demo logstash user, using external role mapping"

readall:
  hash: "$2y$12$x22en27Ec7WS8OmtW1MxMeu7l0GHHrSwEn3HMH/o4JcKeeAQ.UGFK"
  reserved: false
  backend_roles:
  - "readall"
  description: "Demo readall user, using external role mapping"

snapshotrestore:
  hash: "$2y$12$x22en27Ec7WS8OmtW1MxMeu7l0GHHrSwEn3HMH/o4JcKeeAQ.UGFK"
  reserved: false
  backend_roles:
  - "snapshotrestore"
  description: "Demo snapshotrestore user, using external role mapping"

data/opensearch-node1/config/opensearch.yml Normal file

@@ -0,0 +1,17 @@
cluster.name: docker-cluster
network.host: 0.0.0.0
plugins.security.authcz.admin_dn:
- "CN=admin,O=security,L=IT,ST=NY,C=US"
plugins.security.nodes_dn:
- "CN=opensearch-node*"
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.ssl.transport.resolve_hostname: false
plugins.security.ssl.http.enabled: true
plugins.security.allow_unsafe_democertificates: true
plugins.security.allow_default_init_securityindex: true
plugins.security.audit.type: internal_opensearch
plugins.security.enable_snapshot_restore_privilege: true
plugins.security.check_snapshot_restore_write_privileges: true
plugins.security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
plugins.security.system_indices.enabled: true
plugins.security.system_indices.indices: [".plugins-ml-model", ".plugins-ml-task", ".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*", ".opendistro-anomaly-detector*", ".opendistro-anomaly-checkpoints", ".opendistro-anomaly-detection-state", ".opendistro-reports-*", ".opensearch-notifications-*", ".opensearch-notebooks", ".opensearch-observability", ".opendistro-asynchronous-search-response*", ".replication-metadata-store"]

data/opensearch-node2/config/opensearch.yml Normal file

@@ -0,0 +1,17 @@
cluster.name: docker-cluster
network.host: 0.0.0.0
plugins.security.authcz.admin_dn:
- "CN=admin,O=security,L=IT,ST=NY,C=US"
plugins.security.nodes_dn:
- "CN=opensearch-node*"
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.ssl.transport.resolve_hostname: false
plugins.security.ssl.http.enabled: true
plugins.security.allow_unsafe_democertificates: true
plugins.security.allow_default_init_securityindex: true
plugins.security.audit.type: internal_opensearch
plugins.security.enable_snapshot_restore_privilege: true
plugins.security.check_snapshot_restore_write_privileges: true
plugins.security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
plugins.security.system_indices.enabled: true
plugins.security.system_indices.indices: [".plugins-ml-model", ".plugins-ml-task", ".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*", ".opendistro-anomaly-detector*", ".opendistro-anomaly-checkpoints", ".opendistro-anomaly-detection-state", ".opendistro-reports-*", ".opensearch-notifications-*", ".opensearch-notebooks", ".opensearch-observability", ".opendistro-asynchronous-search-response*", ".replication-metadata-store"]

data/setup/build/01_precreate_folders.sh Normal file

@@ -0,0 +1,30 @@
#!/bin/bash
mkdir -p /data/graylog-mongodb/configdb
mkdir -p /data/graylog-mongodb/db
chmod 777 /data/graylog-mongodb/configdb
chmod 777 /data/graylog-mongodb/db
if [ ! -f /data/.env ]
then
    cp /data/env_example /data/.env
fi
if [ ! -f /data/opensearch-node1/config/internal_users.yml ]
then
    cp /data/opensearch-node1/config/internal_users_example.yml /data/opensearch-node1/config/internal_users.yml
    mkdir /data/opensearch-node2/config/
    cp /data/opensearch-node1/config/internal_users_example.yml /data/opensearch-node2/config/internal_users.yml
fi
if [ ! -d "/data/opensearch-node1/data/" ]
then
    echo "creating opensearch node1 data directory"
    mkdir -p /data/opensearch-node1/data/
fi
if [ ! -d "/data/opensearch-node2/data/" ]
then
    echo "creating opensearch node2 data directory"
    mkdir -p /data/opensearch-node2/data/
fi

data/setup/build/02_generate_certificates.sh Normal file

@@ -0,0 +1,109 @@
#!/bin/bash
if [ ! -f /data/certificates/certs/opensearch-ca.key ]
then
    echo "generating CA"
    mkdir -p /data/certificates/certs/
    openssl genrsa -out /data/certificates/certs/opensearch-ca.key 2048
    openssl req -new -x509 -sha256 -days 3650 -subj "/C=US/ST=NY/L=IT/O=security/CN=opensearch-ca" -key /data/certificates/certs/opensearch-ca.key -out /data/certificates/certs/opensearch-ca.pem
    openssl x509 -noout -subject -in /data/certificates/certs/opensearch-ca.pem
fi
if [ ! -f /data/certificates/certs/opensearch-admin.key ]
then
    echo "generating admin user key"
    mkdir -p /data/certificates/certs/
    openssl genrsa -out /data/certificates/certs/opensearch-admin_rsa.key 2048
    openssl pkcs8 -v1 PBE-SHA1-3DES -nocrypt -in /data/certificates/certs/opensearch-admin_rsa.key -topk8 -out /data/certificates/certs/opensearch-admin.key
    openssl req -new -inform PEM -outform PEM -subj "/C=US/ST=NY/L=IT/O=security/CN=admin" -key /data/certificates/certs/opensearch-admin.key -out /data/certificates/certs/opensearch-admin.csr
    openssl x509 -req -days 3650 -in /data/certificates/certs/opensearch-admin.csr -CA /data/certificates/certs/opensearch-ca.pem -CAkey /data/certificates/certs/opensearch-ca.key -CAcreateserial -sha256 -out /data/certificates/certs/opensearch-admin.pem
    #openssl verify -CAfile /data/certificates/certs/opensearch-ca.pem /data/certificates/certs/opensearch-admin.pem
    #openssl x509 -noout -subject -in /data/certificates/certs/opensearch-admin.pem
fi
if [ ! -f /data/opensearch-node1/certs/opensearch-node1.key ]
then
    for NODE_NAME in "node1" "node2"
    do
        echo "generating certificate opensearch-$NODE_NAME"
        mkdir -p /data/opensearch-$NODE_NAME/certs/
        cat << EOF > /tmp/request.conf
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no
[req_distinguished_name]
C = US
ST = NY
L = IT
O = security
CN = opensearch-$NODE_NAME
[v3_req]
keyUsage = keyEncipherment, dataEncipherment, digitalSignature, nonRepudiation
extendedKeyUsage = serverAuth, clientAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = docker-cluster
DNS.2 = opensearch-$NODE_NAME
RID.1 = 1.2.3.4.5.5
EOF
        openssl genrsa -out /data/opensearch-$NODE_NAME/certs/opensearch-$NODE_NAME-rsa.key 2048
        openssl pkcs8 -inform PEM -outform PEM -in /data/opensearch-$NODE_NAME/certs/opensearch-$NODE_NAME-rsa.key -topk8 -nocrypt -v1 PBE-SHA1-3DES -out /data/opensearch-$NODE_NAME/certs/opensearch-$NODE_NAME.key
        openssl req -new -config /tmp/request.conf -key /data/opensearch-$NODE_NAME/certs/opensearch-$NODE_NAME.key -out /data/opensearch-$NODE_NAME/certs/opensearch-$NODE_NAME.csr
        openssl x509 -req -days 3650 -extfile /tmp/request.conf -extensions v3_req -in /data/opensearch-$NODE_NAME/certs/opensearch-$NODE_NAME.csr -CA /data/certificates/certs/opensearch-ca.pem -CAkey /data/certificates/certs/opensearch-ca.key -CAcreateserial -sha256 -out /data/opensearch-$NODE_NAME/certs/opensearch-$NODE_NAME.pem
        cp /data/certificates/certs/opensearch-ca.pem /data/opensearch-$NODE_NAME/certs/
        cp /data/certificates/certs/opensearch-admin.pem /data/opensearch-$NODE_NAME/certs/
        cp /data/certificates/certs/opensearch-admin.key /data/opensearch-$NODE_NAME/certs/
        #openssl verify -CAfile /data/opensearch-$NODE_NAME/certs/opensearch-ca.pem /data/opensearch-$NODE_NAME/certs/opensearch-$NODE_NAME.pem
        #openssl x509 -text -in /data/opensearch-$NODE_NAME/certs/opensearch-$NODE_NAME.pem
    done
fi
if [ ! -f /data/traefik/certs/traefik.key ]
then
    echo "generating certificate traefik"
    mkdir -p /data/traefik/certs/
    cat << EOF > /tmp/request.conf
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no
[req_distinguished_name]
C = US
ST = NY
L = IT
O = security
CN = opensearch-lab
[v3_req]
keyUsage = keyEncipherment, dataEncipherment, digitalSignature, nonRepudiation
extendedKeyUsage = serverAuth, clientAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = traefik.local
DNS.2 = opensearch.local
DNS.3 = grafana.local
EOF
    ##openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout /data/traefik/certs/server.key -out /data/traefik/certs/server.crt -subj "/C=US/ST=NY/L=IT/O=security/CN=logger"
    #openssl genrsa -out /data/traefik/certs/traefik_rsa.key 2048
    #openssl pkcs8 -inform PEM -outform PEM -in /data/traefik/certs/traefik_rsa.key -topk8 -nocrypt -v1 PBE-SHA1-3DES -out /data/traefik/certs/traefik.key
    #openssl req -new -subj "/C=US/ST=NY/L=IT/O=security/CN=traefik" -key /data/traefik/certs/traefik.key -out /data/traefik/certs/traefik.csr
    #openssl x509 -req -days 3650 -in /data/traefik/certs/traefik.csr -CA /data/certificates/certs/opensearch-ca.pem -CAkey /data/certificates/certs/opensearch-ca.key -CAcreateserial -sha256 -out /data/traefik/certs/traefik.pem
    #openssl verify -CAfile /data/certificates/certs/opensearch-ca.pem /data/traefik/certs/traefik.pem
    #openssl x509 -noout -subject -in /data/traefik/certs/traefik.pem
    openssl genrsa -out /data/traefik/certs/server_rsa.key 2048
    openssl pkcs8 -inform PEM -outform PEM -in /data/traefik/certs/server_rsa.key -topk8 -nocrypt -v1 PBE-SHA1-3DES -out /data/traefik/certs/server.key
    openssl req -new -config /tmp/request.conf -key /data/traefik/certs/server.key -out /data/traefik/certs/server.csr
    openssl x509 -req -days 3650 -extfile /tmp/request.conf -extensions v3_req -in /data/traefik/certs/server.csr -CA /data/certificates/certs/opensearch-ca.pem -CAkey /data/certificates/certs/opensearch-ca.key -CAcreateserial -sha256 -out /data/traefik/certs/server.pem
    #openssl verify -CAfile /data/traefik/certs/server.pem /data/traefik/certs/server.pem
    #openssl x509 -text -in /data/traefik/certs/server.pem
fi
sleep 2

data/setup/build/03_configure_opensearch.sh Normal file

@@ -0,0 +1,42 @@
#!/bin/bash
## run security_admin in each node
#for NODE_NAME in "node1" "node2"
#do
#
#    COMMAND=(docker exec -it opensearch-$NODE_NAME /usr/share/opensearch/plugins/opensearch-security/tools/securityadmin.sh --clustername opensearch-cluster --configdir /usr/share/opensearch/config/opensearch-security -cacert /usr/share/opensearch/config/certs/opensearch-ca.pem -key /usr/share/opensearch/config/certs/opensearch-admin.key -cert /usr/share/opensearch/config/certs/opensearch-admin.pem -h opensearch-$NODE_NAME)
#
#    until "${COMMAND[@]}" ; do
#        echo "opensearch not up yet. retrying in 10 seconds..."
#        sleep 10
#    done
#done

# use the opensearch-dashboards api to create the index pattern logstash-* for the global tenant until it succeeds (this will not create it for your personal tenant)
cat > /tmp/opensearch_create_index_pattern.sh << EOF
curl -k \
-X POST "http://opensearch-dashboards:5601/api/saved_objects/index-pattern/logstash-*" \
-u 'admin:vagrant' \
-H "securitytenant:global" \
-H "osd-xsrf:true" \
-H "content-type:application/json" \
-d "{ \"attributes\": { \"title\": \"logstash-*\", \"timeFieldName\": \"@timestamp\" } }"
EOF
cat > /tmp/opensearch_check_index_pattern.sh << EOF
curl -k \
-X GET "http://opensearch-dashboards:5601/api/saved_objects/index-pattern/logstash-*" \
-u 'admin:vagrant' \
-H "securitytenant:global" \
-H "osd-xsrf:true" \
-H "content-type:application/json" \
| grep "namespace"
EOF
chmod +x /tmp/opensearch_*.sh
until "/tmp/opensearch_check_index_pattern.sh" ; do
    echo "opensearch index-pattern does not exist; trying to create logstash-*"
    /tmp/opensearch_create_index_pattern.sh
    sleep 10
done
echo "opensearch index-pattern created"

data/setup/build/04_configure_grafana.sh Normal file

@@ -0,0 +1,56 @@
#!/bin/bash
cat > /tmp/grafana_check.sh << EOF
curl -k \
-X GET "http://grafana:3000/api/datasources" \
-u 'admin:vagrant' \
-H "content-type:application/json" \
| grep '"name":"OpenSearch"'
EOF
cat > /tmp/grafana_initial_setup.sh << EOF
curl -k \
-X POST "http://grafana:3000/api/datasources" \
-u 'admin:vagrant' \
-H "content-type:application/json" \
-d '
{
  "orgId": 1,
  "name": "OpenSearch",
  "type": "grafana-opensearch-datasource",
  "typeName": "OpenSearch",
  "typeLogoUrl": "public/plugins/grafana-opensearch-datasource/img/logo.svg",
  "access": "proxy",
  "url": "https://opensearch-node1:9200",
  "basicAuth": true,
  "basicAuthUser": "admin",
  "isDefault": true,
  "secureJsonData": {
    "basicAuthPassword": "vagrant"
  },
  "jsonData": {
    "database": "logstash-*",
    "esVersion": "8.0.0",
    "flavor": "opensearch",
    "logLevelField": "fields.level",
    "logMessageField": "message",
    "maxConcurrentShardRequests": 5,
    "pplEnabled": true,
    "timeField": "@timestamp",
    "tlsAuthWithCACert": false,
    "tlsSkipVerify": true,
    "version": "1.0.0"
  },
  "readOnly": false
}
'
EOF
chmod +x /tmp/grafana*.sh
until "/tmp/grafana_check.sh" ; do
    echo "Grafana settings not applied; retrying"
    /tmp/grafana_initial_setup.sh
    sleep 10
done
echo "Grafana settings applied"

data/setup/build/Dockerfile Normal file

@@ -0,0 +1,9 @@
FROM ubuntu:22.04
RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install openssl docker.io curl dos2unix
COPY *.sh /opt/
RUN chmod +x /opt/*.sh ; dos2unix /opt/*.sh
CMD ["bash", "/opt/entrypoint.sh"]

data/setup/build/entrypoint.sh Normal file

@@ -0,0 +1,11 @@
#!/bin/bash
/opt/01_precreate_folders.sh
/opt/02_generate_certificates.sh
echo "initial setup done. tag setup container as healthy to start other containers"
touch /tmp/healthcheck.txt
/opt/03_configure_opensearch.sh
/opt/04_configure_grafana.sh
sleep infinity

data/syslog-filebeat/config/filebeat.yml Normal file

@@ -0,0 +1,51 @@
# for more modules visit https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-modules-overview.html
filebeat.inputs:
- type: udp
  max_message_size: 10KiB
  host: "0.0.0.0:514"
  tags: ["udp-514"]
- type: tcp
  max_message_size: 10MiB
  host: "0.0.0.0:514"
  tags: ["tcp-514"]
filebeat.modules:
#- module: cisco
#  asa:
#    var.syslog_host: 0.0.0.0
#    var.syslog_port: 9001
#    var.log_level: 5
#
#- module: cisco
#  ios:
#    var.syslog_host: 0.0.0.0
#    var.syslog_port: 9002
#    var.log_level: 5
#
#- module: cef
#  log:
#    var.syslog_host: 0.0.0.0
#    var.syslog_port: 9003
#
#- module: checkpoint
#  firewall:
#    var.syslog_host: 0.0.0.0
#    var.syslog_port: 9004
#
- module: netflow
  log:
    enabled: true
    var:
      netflow_host: 0.0.0.0
      netflow_port: 2055
      tags: ["netflow"]
#- module: snort
#  snort:
#    var.syslog_host: 0.0.0.0
#    var.syslog_port: 9532
output.logstash:
  enabled: true
  hosts: ["${LOGSTASH_HOST}"]

data/syslog-logstash/config/logstash.conf Normal file

@@ -0,0 +1,28 @@
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => ["message", "<%{DATA:event_priority}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{GREEDYDATA:syslog_process}\[%{NUMBER:syslog_uid}\]: %{DATA:SYSLOGMESSAGE}"]
    add_tag => [ "syslog" ]
  }
}
output {
  #stdout {}
  #file {
  #  path => "/tmp/output.json"
  #}
  opensearch {
    hosts => ["${OPENSEARCH_HOST}"]
    index => "${OPENSEARCH_INDEX}-%{+YYYY-MM-dd}"
    user => "${LOGSTASH_USER}"
    password => "${LOGSTASH_PASSWORD}"
    ssl => true
    ssl_certificate_verification => false
  }
}

data/traefik/config/encryption.toml Normal file

@@ -0,0 +1,9 @@
[tls.stores]
  [tls.stores.default]
    [tls.stores.default.defaultCertificate]
      certFile = "/etc/traefik/certs/server.pem"
      keyFile = "/etc/traefik/certs/server.key"

[[tls.certificates]]
  certFile = "/etc/traefik/certs/server.pem"
  keyFile = "/etc/traefik/certs/server.key"

docker-compose.yml Normal file

@@ -0,0 +1,412 @@
version: '3'
services:
  # this container creates certificates used by other services
  setup:
    build: ./data/setup/build/.
    container_name: "setup"
    restart: "no"
    hostname: setup
    volumes:
      - "./data:/data"
    networks:
      - setup-net
    healthcheck:
      test: ["CMD-SHELL", "test -f /tmp/healthcheck.txt"]
      interval: 10s
      timeout: 5s
      retries: 5
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
  # avahi mdns broadcasts the name opensearch.local to make the dashboard accessible by this name in your browser
  mdns:
    build: ./data/mdns/build/.
    container_name: "mdns"
    restart: "no"
    hostname: mdns
    volumes:
      - "./data/mdns/config:/opt/config"
    network_mode: "host"
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
  # reverse proxy used to accept http/https traffic and forward it to the containers
  traefik:
    image: "traefik:v2.9.1"
    container_name: "traefik"
    hostname: traefik
    restart: always
    depends_on:
      - setup
    command:
      #- "--log.level=DEBUG"
      - "--api.dashboard=true" # enable traefik dashboard
      - "--api.insecure=true" # URL for traefik dashboard = http://opensearch.local:8080/dashboard/ (needs ports: 8080 to be enabled)
      - "--global.sendAnonymousUsage=false"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.http.address=:80"
      - "--entrypoints.https.address=:443"
      - "--providers.file.filename=/etc/traefik/encryption.toml"
      - "--providers.file.watch=true"
    labels:
      - traefik.enable=true
      - traefik.http.routers.traefik.rule=Host(`traefik.local`)
      - traefik.http.routers.traefik.tls=true
      - traefik.http.routers.traefik.entrypoints=https
      - traefik.http.routers.traefik.service=api@internal
      - traefik.http.routers.traefik.middlewares=traefik-auth-middleware
      - traefik.http.middlewares.traefik-auth-middleware.basicauth.users=admin:$$apr1$$QIHSR7rW$$fW5DzBnqnCbHP5L2k6kfY0 #admin:vagrant
      - traefik.http.services.traefik.loadbalancer.server.scheme=http
      - traefik.http.services.traefik.loadbalancer.server.port=8080
    networks:
      - traefik-net
    ports:
      - "80:80"
      - "443:443"
      #- "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./data/traefik/config/encryption.toml:/etc/traefik/encryption.toml:ro
      - ./data/traefik/certs/:/etc/traefik/certs/:ro
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
  # Opensearch two node cluster
  opensearch-node1:
    image: opensearchproject/opensearch:2.3.0
    container_name: opensearch-node1
    hostname: opensearch-node1
    restart: always
    depends_on:
      setup:
        condition: service_healthy
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node1
      - discovery.seed_hosts=opensearch-node1,opensearch-node2
      - cluster.initial_cluster_manager_nodes=opensearch-node1,opensearch-node2
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
      - plugins.security.ssl.transport.pemkey_filepath=certs/opensearch-node1.key
      - plugins.security.ssl.transport.pemcert_filepath=certs/opensearch-node1.pem
      - plugins.security.ssl.transport.pemtrustedcas_filepath=certs/opensearch-ca.pem
      - plugins.security.ssl.http.pemkey_filepath=certs/opensearch-node1.key
      - plugins.security.ssl.http.pemcert_filepath=certs/opensearch-node1.pem
      - plugins.security.ssl.http.pemtrustedcas_filepath=certs/opensearch-ca.pem
      - cluster.routing.allocation.disk.threshold_enabled=true
      - cluster.routing.allocation.disk.watermark.low=97%
      - cluster.routing.allocation.disk.watermark.high=98%
      - cluster.routing.allocation.disk.watermark.flood_stage=99%
      #- network.publish_host=192.168.57.2
      - DISABLE_INSTALL_DEMO_CONFIG=true
      - bootstrap.memory_lock=true
      - plugins.security.ssl.transport.enforce_hostname_verification=false
      - plugins.security.ssl.transport.resolve_hostname=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536 # maximum number of open files for the OpenSearch user, set to at least 65536 on modern systems
        hard: 65536
    volumes:
      - ./data/opensearch-node1/data/:/usr/share/opensearch/data
      - ./data/opensearch-node1/certs/:/usr/share/opensearch/config/certs:ro
      - ./data/opensearch-node1/config/opensearch.yml:/usr/share/opensearch/config/opensearch.yml:ro
      - ./data/opensearch-node1/config/internal_users.yml:/usr/share/opensearch/config/opensearch-security/internal_users.yml:ro
    #ports:
    #  - 9200:9200
    #  - 9600:9600 # required for Performance Analyzer
    networks:
      - opensearch-db-net
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
  opensearch-node2:
    image: opensearchproject/opensearch:2.3.0
    container_name: opensearch-node2
    hostname: opensearch-node2
    restart: always
    depends_on:
      setup:
        condition: service_healthy
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node2
      - discovery.seed_hosts=opensearch-node1,opensearch-node2
      - cluster.initial_cluster_manager_nodes=opensearch-node1,opensearch-node2
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
      - plugins.security.ssl.transport.pemkey_filepath=certs/opensearch-node2.key
      - plugins.security.ssl.transport.pemcert_filepath=certs/opensearch-node2.pem
      - plugins.security.ssl.transport.pemtrustedcas_filepath=certs/opensearch-ca.pem
      - plugins.security.ssl.http.pemkey_filepath=certs/opensearch-node2.key
      - plugins.security.ssl.http.pemcert_filepath=certs/opensearch-node2.pem
      - plugins.security.ssl.http.pemtrustedcas_filepath=certs/opensearch-ca.pem
      - cluster.routing.allocation.disk.threshold_enabled=true
      - cluster.routing.allocation.disk.watermark.low=97%
      - cluster.routing.allocation.disk.watermark.high=98%
      - cluster.routing.allocation.disk.watermark.flood_stage=99%
      #- network.publish_host=192.168.57.2
      - DISABLE_INSTALL_DEMO_CONFIG=true
      - bootstrap.memory_lock=true
      - plugins.security.ssl.transport.enforce_hostname_verification=false
      - plugins.security.ssl.transport.resolve_hostname=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - ./data/opensearch-node2/data/:/usr/share/opensearch/data
      - ./data/opensearch-node2/certs/:/usr/share/opensearch/config/certs:ro
      - ./data/opensearch-node2/config/opensearch.yml:/usr/share/opensearch/config/opensearch.yml:ro
      - ./data/opensearch-node2/config/internal_users.yml:/usr/share/opensearch/config/opensearch-security/internal_users.yml:ro
    networks:
      - opensearch-db-net
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
  # opensearch dashboards for search and dashboarding
  opensearch-dashboards:
    image: opensearchproject/opensearch-dashboards:2.3.0
    container_name: opensearch-dashboards
    hostname: opensearch-dashboards
    restart: always
    depends_on:
      setup:
        condition: service_healthy
      opensearch-node1:
        condition: service_started
      opensearch-node2:
        condition: service_started
    environment:
      OPENSEARCH_HOSTS: '["https://opensearch-node1:9200","https://opensearch-node2:9200"]'
      OPENSEARCH_USERNAME: "kibanaserver"
      OPENSEARCH_PASSWORD: "vagrant"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.opensearch-dashboards.service=opensearch-dashboards"
      - "traefik.http.routers.opensearch-dashboards.entrypoints=https"
      - "traefik.http.routers.opensearch-dashboards.tls=true"
      - "traefik.http.routers.opensearch-dashboards.rule=Host(`opensearch.local`)"
      - "traefik.http.services.opensearch-dashboards.loadbalancer.server.port=5601"
      - "traefik.http.services.opensearch-dashboards.loadbalancer.server.scheme=http"
      - "traefik.docker.network=traefik-net"
    volumes:
      - ./data/opensearch-dashboards/certs/:/usr/share/opensearch-dashboards/config/certs:ro
    #ports:
    #  - 5601:5601
    expose:
      - "5601"
    networks:
      - setup-net
      - traefik-net
      - opensearch-db-net
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
  # simple logstash listening on port 5044. Install winlogbeat, auditbeat, or packetbeat and send data to this container (5044/tcp -> logstash -> opensearch)
  beats-logstash:
    image: opensearchproject/logstash-oss-with-opensearch-output-plugin:8.4.0
    container_name: beats-logstash
    hostname: beats-logstash
    restart: always
    depends_on:
      - opensearch-node1
    environment:
      - OPENSEARCH_HOST=https://opensearch-node1:9200
      - LOGSTASH_USER=logstash
      - LOGSTASH_PASSWORD=${LOGSTASH_PASSWORD:-vagrant}
      - OPENSEARCH_INDEX=logstash-beats
    volumes:
      - ./data/beats-logstash/config/logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro
    networks:
      - external-net
      - opensearch-db-net
    ports:
      - 5044:5044
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
  # uses filebeat modules to open syslog ports (network -> filebeat -> logstash -> opensearch)
  syslog-filebeat:
    image: elastic/filebeat:8.4.3
    container_name: "syslog-filebeat"
    hostname: syslog-filebeat
    restart: always
    depends_on:
      - syslog-logstash
    environment:
      - LOGSTASH_HOST=syslog-logstash:5044
    command: ["--strict.perms=false"]
    volumes:
      - ./data/syslog-filebeat/config/filebeat.yml:/usr/share/filebeat/filebeat.yml
      #- ./data/syslog-filebeat/data:/usr/share/filebeat/data # not needed for test environments
    networks:
      - external-net
      - syslog-net
    ports:
      - 514:514 # TCP input
      - 514:514/udp # UDP input
      - 9001:9001 # Cisco ASA
      - 9002:9002 # Cisco IOS
      - 9003:9003 # CEF
      - 9004:9004 # Checkpoint
      - 2055:2055 # NetFlow
      - 2055:2055/udp # NetFlow
      - 9532:9532 # Snort
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
  syslog-logstash:
    image: opensearchproject/logstash-oss-with-opensearch-output-plugin:8.4.0
    container_name: syslog-logstash
    hostname: syslog-logstash
    restart: always
    depends_on:
      - opensearch-node1
    environment:
      - OPENSEARCH_HOST=https://opensearch-node1:9200
      - LOGSTASH_USER=logstash
      - LOGSTASH_PASSWORD=${LOGSTASH_PASSWORD:-vagrant}
      - OPENSEARCH_INDEX=logstash-syslog
    volumes:
      - ./data/syslog-logstash/config/logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro
    networks:
      - syslog-net
      - opensearch-db-net
    expose:
      - "5044"
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
  # api demo example. connects to the coindesk free api every minute, uses jq as a parsing example, and sends the result through filebeat to logstash (cron -> file -> filebeat -> logstash -> opensearch)
  apidemo-cron:
    build: ./data/apidemo-cron/build/.
    container_name: "apidemo-cron"
    hostname: apidemo-cron
    restart: always
    depends_on:
      - apidemo-filebeat
    environment:
      - SCHEDULE=* * * * *
      - USER=root
      - COMMAND=bash /opt/scripts/get_cryptocurrency.sh
    volumes:
      - ./data/apidemo-cron/scripts:/opt/scripts/
      - ./data/apidemo-cron/output:/opt/output/
    networks:
      - apidemo-net
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
  apidemo-filebeat:
    image: elastic/filebeat:8.4.3
    container_name: "apidemo-filebeat"
    hostname: apidemo-filebeat
    restart: always
    depends_on:
      - apidemo-logstash
    environment:
      - INPUT_PATH=/opt/input/*.json
      - LOGSTASH_HOST=apidemo-logstash:5044
    command: ["--strict.perms=false"]
    volumes:
      - ./data/apidemo-filebeat/config/filebeat.yml:/usr/share/filebeat/filebeat.yml
      - ./data/apidemo-cron/output:/opt/input/
      #- ./data/apidemo-filebeat/data:/usr/share/filebeat/data # not needed for test environments
    networks:
      - apidemo-net
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
  apidemo-logstash:
    image: opensearchproject/logstash-oss-with-opensearch-output-plugin:8.4.0
    container_name: apidemo-logstash
    hostname: apidemo-logstash
    restart: always
    depends_on:
      - opensearch-node1
    environment:
      - OPENSEARCH_HOST=https://opensearch-node1:9200
      - LOGSTASH_USER=logstash
      - LOGSTASH_PASSWORD=${LOGSTASH_PASSWORD:-vagrant}
      - OPENSEARCH_INDEX=logstash-demoapi
    volumes:
      - ./data/apidemo-logstash/config/logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro
    networks:
      - apidemo-net
      - opensearch-db-net
    expose:
      - "5044"
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
  grafana:
    image: grafana/grafana
    container_name: grafana
    hostname: grafana
    restart: always
    user: root
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.grafana.service=grafana"
      - "traefik.http.routers.grafana.entrypoints=https"
      - "traefik.http.routers.grafana.tls=true"
      - "traefik.http.routers.grafana.rule=Host(`grafana.local`)"
      - "traefik.http.services.grafana.loadbalancer.server.port=3000"
      - "traefik.http.services.grafana.loadbalancer.server.scheme=http"
      - "traefik.docker.network=traefik-net"
    volumes:
      - ./data/grafana/data:/var/lib/grafana
    environment:
      default_timezone: 'Europe/Amsterdam'
      GF_INSTALL_PLUGINS: grafana-piechart-panel,grafana-clock-panel,grafana-simple-json-datasource,grafana-opensearch-datasource
      GF_SECURITY_ADMIN_USER: admin
      GF_SECURITY_ADMIN_PASSWORD: vagrant
    networks:
      - setup-net
      - traefik-net
      - opensearch-db-net
    expose:
      - 3000
    #ports:
    #  - 3000:3000
networks:
  setup-net:
  external-net:
  traefik-net:
    name: traefik-net
  opensearch-dashboards-net:
  opensearch-db-net:
  graylog-net:
  apidemo-net:
  syslog-net:

Binary file not shown (image, 32 KiB)

env_example Normal file

@@ -0,0 +1 @@
LOGSTASH_PASSWORD=vagrant

reset.ps1 Normal file

@@ -0,0 +1,19 @@
Remove-Item -recurse -path .\data\.env
Remove-Item -recurse -path .\data\certificates\certs
Remove-Item -recurse -path .\data\opensearch-ca
Remove-Item -recurse -path .\data\opensearch-node1\certs
Remove-Item -recurse -path .\data\opensearch-node1\data
Remove-Item -recurse -path .\data\opensearch-node1\config\internal_users.yml
Remove-Item -recurse -path .\data\opensearch-node2\certs
Remove-Item -recurse -path .\data\opensearch-node2\data
Remove-Item -recurse -path .\data\opensearch-node2\config\internal_users.yml
Remove-Item -recurse -path .\data\opensearch-dashboards\certs
Remove-Item -recurse -path .\data\traefik\certs
Remove-Item -recurse -path .\data\apidemo-cron\output
Remove-Item -recurse -path .\data\apidemo-filebeat\data
Remove-Item -recurse -path .\data\syslog-filebeat\data
Remove-Item -recurse -path .\data\grafana\data

reset.sh Normal file

@@ -0,0 +1,21 @@
#!/bin/bash
rm -rf data/.env
rm -rf data/certificates/certs/
rm -rf data/opensearch-ca
rm -rf data/opensearch-node1/certs
rm -rf data/opensearch-node1/data
rm -rf data/opensearch-node1/config/internal_users.yml
rm -rf data/opensearch-node2/certs
rm -rf data/opensearch-node2/data
rm -rf data/opensearch-node2/config/internal_users.yml
rm -rf data/opensearch-dashboards/certs
rm -rf data/traefik/certs
rm -rf data/apidemo-cron/output
rm -rf data/apidemo-filebeat/data
rm -rf data/syslog-filebeat/data
rm -rf data/grafana/data