
Log Collection Lab

Overview

This repository creates a two-node OpenSearch cluster in a fully automated way.
Custom certificates are created (no demo certs) and the default passwords are changed.
It includes demo configurations for Logstash and Filebeat.

Full feature list:

  • Vagrant to download and create an Ubuntu VM and install Docker
  • setup container to create a CA and certificates for the services
  • two node opensearch cluster
    • uses your own certificates
    • default passwords changed
  • opensearch-dashboard
    • creates index pattern automatically in global tenant
  • Grafana
    • creates datasource automatically
  • traefik reverse proxy for opensearch-dashboard
  • mdns responder to allow connection by hostname instead of IP
  • Beats Logstash container so other computers in your lab (running Winlogbeat, Auditbeat, or Packetbeat) can send information
  • log receiver for Cisco ASA, Cisco IOS, Checkpoint, Snort, CEF, Syslog, and Netflow
  • cron container to periodically download JSON information from an API, including a Filebeat/Logstash pipeline

The repository was created to give you a starting point for your own OpenSearch installation.
It shows you how to change common settings and replace default passwords and certificates.
The demo configurations give you a start with file ingestion, syslog, grok, Beats, Logstash configs, certificate creation, Docker, Vagrant, and more.

Requirements

Install VirtualBox and Vagrant on your computer.

Start

Run

vagrant up 

in the directory containing the Vagrantfile.
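The whole lifecycle uses standard Vagrant commands; a minimal sketch (the first run downloads the Ubuntu box and builds all containers, so it takes a while):

```shell
# Bring the lab up (provisions the VM and starts all containers)
vagrant up

# Check the VM state
vagrant status

# Stop the VM without losing data
vagrant halt

# Tear everything down, including stored indices
vagrant destroy -f
```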

OpenSearch Dashboards login

URL: https://opensearch.local (or http://192.168.57.2:5601)
Username: admin
Password: vagrant

Grafana Dashboard login

URL: https://grafana.local
Username: admin
Password: vagrant

Network

The logger virtual machine has three network interfaces.

  1. NAT
  2. private network with static IP 192.168.57.2 (only reachable from your host)
  3. bridged network with DHCP

You can send Beats data from other hosts to the bridged IP address.
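To find out which bridged address the VM received via DHCP, you can inspect its interfaces from the host (interface names may vary with the base box):

```shell
vagrant ssh -c "ip -4 -brief addr show"
```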

Beats

  • install a beats collector (https://www.elastic.co/beats/) on your computer
    • Winlogbeat, Auditbeat, or Packetbeat are easy ways to get started
  • change output settings in the configuration file to:
output.logstash:
  hosts: ["192.168.57.2:5044"]

You will need to remove the output.elasticsearch section.

  • information will be stored in opensearch-logstash-beats-$date
  • you just need to create the index pattern
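If you prefer to script that last step, an index pattern can also be created through the OpenSearch Dashboards saved-objects API. A sketch, assuming the proxy forwards the Dashboards API and the credentials above (the pattern id "opensearch-logstash-beats" is an arbitrary choice):

```shell
curl -k -u admin:vagrant \
  -X POST "https://opensearch.local/api/saved_objects/index-pattern/opensearch-logstash-beats" \
  -H "osd-xsrf: true" -H "Content-Type: application/json" \
  -d '{"attributes":{"title":"opensearch-logstash-beats-*","timeFieldName":"@timestamp"}}'
```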

Password changes

All passwords are set to "vagrant" in this repository.
The password hashes are stored in internal_users.yml, and the Logstash clear-text password is in the .env file (used by the Logstash containers).
If you want to change a password, you need to replace the hashes and tell OpenSearch to reload the security configuration. The securityadmin command must be executed on both OpenSearch nodes:

docker exec -it opensearch-node1 /usr/share/opensearch/plugins/opensearch-security/tools/securityadmin.sh --clustername opensearch-cluster --configdir /usr/share/opensearch/config/opensearch-security -cacert /usr/share/opensearch/config/certs/opensearch-ca.pem -key /usr/share/opensearch/config/certs/opensearch-admin.key -cert /usr/share/opensearch/config/certs/opensearch-admin.pem -h `cat /etc/hostname` 

docker exec -it opensearch-node2 /usr/share/opensearch/plugins/opensearch-security/tools/securityadmin.sh --clustername opensearch-cluster --configdir /usr/share/opensearch/config/opensearch-security -cacert /usr/share/opensearch/config/certs/opensearch-ca.pem -key /usr/share/opensearch/config/certs/opensearch-admin.key -cert /usr/share/opensearch/config/certs/opensearch-admin.pem -h `cat /etc/hostname` 
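New hashes for internal_users.yml can be generated with the hash.sh tool that ships with the security plugin, for example:

```shell
docker exec -it opensearch-node1 \
  /usr/share/opensearch/plugins/opensearch-security/tools/hash.sh -p 'MyNewPassword'
```

Paste the resulting bcrypt hash into internal_users.yml, then run the securityadmin commands above to apply it.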

Troubleshooting

Docker

vagrant ssh 

-> logs you into the VM

sudo -s
cd /vagrant && docker-compose logs -f

-> all files are mapped into this folder of the VM. You can use all docker and docker-compose commands as usual (ps, exec, ...)
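A few commands that are typically useful inside the VM (container and service names are taken from this repository's compose file; adjust if yours differ):

```shell
# List running containers and their state
docker ps

# Follow the logs of a single service
docker-compose logs -f logstash

# Open a shell inside a container
docker exec -it opensearch-node1 bash
```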

OpenSearch API

Useful queries for diagnosing shard allocation (run them in the Dashboards Dev Tools console):

GET _cat/shards?v=true&h=index,shard,prirep,state,node,unassigned.reason&s=state

GET _cluster/allocation/explain

GET _cluster/settings?flat_settings=true&include_defaults=true

PUT _cluster/settings
{ "persistent" : { "cluster.routing.allocation.enable" : "all" } }
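The same queries can be run from the host with curl, assuming the cluster's REST port (9200 by default) is reachable on the private network; adjust host and port if the compose file maps them differently:

```shell
curl -k -u admin:vagrant "https://192.168.57.2:9200/_cluster/allocation/explain"

curl -k -u admin:vagrant -X PUT "https://192.168.57.2:9200/_cluster/settings" \
  -H "Content-Type: application/json" \
  -d '{"persistent":{"cluster.routing.allocation.enable":"all"}}'
```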