Logstash architecture
- The Logstash processing pipeline has three stages:
- inputs
- filters
- outputs
- The basic architecture is a pipeline: events flow from inputs, through filters, to outputs
- A Logstash pipeline is defined in a configuration file ending with .conf, structured as shown below
```conf
input {
  # input plugins
}
filter {
  # filter plugins
}
output {
  # output plugins
}
```
- For input plugins Refer Here, for filter plugins Refer Here, and for output plugins Refer Here
- Let's start creating Logstash configurations
- Scenario 1: Read input from standard input and write the output to standard output
- Refer Here for the configuration file created
- Run Logstash from the command line

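Since the linked configuration is not reproduced here, a minimal sketch of such a stdin-to-stdout pipeline (using the standard `stdin`, `stdout`, and `rubydebug` plugins) might look like this:

```conf
# Read events typed into the terminal and echo them back
# as structured events on standard output
input {
  stdin { }
}
output {
  stdout { codec => rubydebug }
}
```

Assuming the file is saved as, say, `stdin-stdout.conf` (a hypothetical name), it would be run with `bin/logstash -f stdin-stdout.conf` from the Logstash installation directory.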
- Scenario 2: Read input from standard input, convert the message to uppercase, and write the output to standard output
- Refer Here for the conf file
- Run Logstash from the command line

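A sketch of this configuration, assuming the standard `mutate` filter's `uppercase` option is used for the conversion:

```conf
input {
  stdin { }
}
filter {
  # Convert the contents of the "message" field to uppercase
  mutate {
    uppercase => [ "message" ]
  }
}
output {
  stdout { codec => rubydebug }
}
```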
- Scenario 3: Read the input from standard input and split the message on the | character; give the first item (message[0]) a new field named firstname and the second item a field named lastname
- Refer Here for the configuration file

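One way to sketch this (the linked file may differ) is with `mutate`: `split` turns the message into an array, and `add_field` copies the array elements into the new fields:

```conf
input {
  stdin { }
}
filter {
  mutate {
    # "john|doe" becomes the array ["john", "doe"] in the message field
    split => { "message" => "|" }
    # Copy the array elements into named fields
    add_field => {
      "firstname" => "%{[message][0]}"
      "lastname"  => "%{[message][1]}"
    }
  }
}
output {
  stdout { codec => rubydebug }
}
```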
- Listing Plugins

- We can also group the plugins by type: input, output, filter, or codec
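Installed plugins are listed with the `logstash-plugin` utility that ships with Logstash; `--group` restricts the listing to one plugin type:

```shell
# List all installed plugins
bin/logstash-plugin list

# List only filter plugins (also works with input, output, codec)
bin/logstash-plugin list --group filter
```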
- Scenario 4: Try to read the movies from movies.csv and display the contents
- Refer Here for the configuration file used
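A sketch of such a configuration using the `file` input and the `csv` filter; the path and column names below are assumptions and should be adjusted to the actual movies.csv:

```conf
input {
  file {
    path => "/data/movies.csv"        # assumed location of the file
    start_position => "beginning"
    sincedb_path => "/dev/null"       # re-read the file on every run (useful while testing)
  }
}
filter {
  csv {
    separator => ","
    columns => [ "movieId", "title", "genres" ]   # assumed column names
  }
}
output {
  stdout { codec => rubydebug }
}
```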
- We can conclude that, given the message format, we can do some formatting/filtering using Logstash and send the result to Elasticsearch
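Sending the filtered events to Elasticsearch is done with the `elasticsearch` output plugin; a minimal sketch, where the host URL and index name are assumptions:

```conf
output {
  elasticsearch {
    hosts => [ "http://localhost:9200" ]
    index => "movies"
  }
}
```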
- When applications run, they generate logs
- We need to extract meaningful information from these logs to gain insights
- Basic Log Pipeline

- Extracting meaningful information from logs is challenging, so we need to look into some additional filters offered by Logstash
- grok filter Refer Here
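The grok filter matches unstructured log text against named patterns and extracts the matches into fields. A sketch using standard grok patterns on a simple request log line:

```conf
filter {
  grok {
    # Parses lines like: 55.3.244.1 GET /index.html 15824 0.043
    # into the fields client, method, request, bytes, and duration
    match => {
      "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}"
    }
  }
}
```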
