Log Monitoring with Elastic Stack
Scenario: Monitoring Apache Logs
- Basic Structure for Log Monitoring using Elastic Stack
- Let's set up the Apache server:
sudo apt update
sudo apt install apache2 -y
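- Before wiring up the rest of the stack, it's worth confirming Apache is actually serving. A quick check (assuming a default install, queried on localhost):

```shell
# Confirm the service is up and the default page answers with HTTP 200.
sudo systemctl status apache2 --no-pager
curl -s -o /dev/null -w "%{http_code}\n" http://localhost
```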
- Once the Apache server is set up, we create a small script to generate traffic:
#!/bin/bash
# Target server; replace with your Apache server's public IP.
SERVER_IP_ADDRESS='54.244.44.152'
# Poll the server forever: two requests to the home page and one to
# test.html (which 404s if the page doesn't exist), so the access log
# gets a mix of status codes.
while true
do
    curl "http://${SERVER_IP_ADDRESS}"
    sleep 1s
    curl "http://${SERVER_IP_ADDRESS}"
    sleep 1s
    curl "http://${SERVER_IP_ADDRESS}/test.html"
    sleep 1s
done
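- Assuming the script is saved as poll.sh (the name is arbitrary), make it executable and run it in the background so it keeps writing entries to the access log:

```shell
chmod +x poll.sh
# nohup keeps the loop running after the shell exits; stop it later with pkill -f poll.sh
nohup ./poll.sh > /dev/null 2>&1 &
```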
- Setting up logstash: Refer Here
- Now we need to create a Logstash pipeline that receives logs from Beats, transforms the Apache log entries, and sends them to Elasticsearch
- The (untested) pipeline for transforming the Apache logs is shown below
input {
  beats {
    port => 5044
  }
}

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  elasticsearch {
    cloud_auth => "elastic:vu2fraeXw6bLb7DuDF6w0U3A"
    hosts => ["https://qt-elastic.es.us-central1.gcp.cloud.es.io"]
    index => "apache-%{+yyyy.MM.dd}"
  }
}
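- Before starting the service, Logstash can validate the pipeline syntax. A sketch of the check, assuming the default Debian/Ubuntu package locations:

```shell
# -f points at the pipeline file; -t (--config.test_and_exit) parses it and exits.
sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash \
  -f /etc/logstash/conf.d/apache.conf -t
```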
- Now save this pipeline as apache.conf in
/etc/logstash/conf.d/
and then enable and start Logstash
sudo systemctl enable logstash
sudo systemctl start logstash
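- If events don't show up later, the first things to check are whether the service came up and whether the Beats input is listening on port 5044 (commands assume systemd and iproute2):

```shell
sudo systemctl status logstash --no-pager     # service state
sudo journalctl -u logstash -n 50 --no-pager  # recent logs; pipeline errors show up here
sudo ss -tlnp | grep 5044                     # the Beats input should be listening
```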
- Now we need to set up a Beat which reads the logs written to
/var/log/apache2/access.log
- The Elastic Stack ships several Beats for different data sources
- For our scenario, we use Filebeat
- Refer Here for the installation
- Installation using APT Refer Here
- Refer Here for configuring Filebeat
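- A minimal filebeat.yml sketch for this scenario, assuming Filebeat runs on the same host as Apache and ships to the Logstash instance set up above (the localhost address is an assumption; point it at wherever Logstash actually runs):

```yaml
# /etc/filebeat/filebeat.yml (sketch)
filebeat.inputs:
  - type: filestream
    id: apache-access
    paths:
      - /var/log/apache2/access.log

# Ship to the Logstash Beats input on port 5044, not directly to Elasticsearch.
output.logstash:
  hosts: ["localhost:5044"]
```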
- Create a data view in Kibana for the apache-* index pattern
- Let's search for the logs with status code 200 or 404 and save the query
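- As a sanity check outside Kibana, the same 200/404 filter can be run over raw combined-format lines at the shell: field 9 of Apache's combined log format is the status code. The two sample lines below are made up, shaped like the entries our polling script produces:

```shell
# Two hypothetical combined-format entries, shaped like Apache's access.log.
printf '%s\n' \
  '10.0.0.1 - - [10/Oct/2024:13:55:36 +0000] "GET / HTTP/1.1" 200 3477 "-" "curl/7.81.0"' \
  '10.0.0.1 - - [10/Oct/2024:13:55:39 +0000] "GET /test.html HTTP/1.1" 404 196 "-" "curl/7.81.0"' |
awk '$9 == 200 || $9 == 404 { print $9 }'
```

In Kibana itself, the field to filter on depends on the grok filter's ECS compatibility mode: legacy mode emits `response`, while ECS mode emits `http.response.status_code`.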
- Refer Here for the changeset containing the polling script and the Apache pipeline config for Logstash