
Logs with Filebeat, Logstash, Elasticsearch and Kibana

Complex systems require monitoring, and the log files are an important part of it. One of the best ways to access, search and view multiple log files is the combination of Filebeat, Logstash, Elasticsearch and Kibana. However, configuring them can be difficult, so here I am demonstrating a possible setup. First, a few words about each of them, in case you don't know them: Filebeat is a tool that watches for file system changes and uploads the file contents to a destination (output). Elasticsearch and Logstash are the most commonly used outputs, but Kafka and many others are also supported. Logstash is a tool for beautifying the logs. It is based on the input-filter-output model: it can convert the log events into a different format, add and remove fields, etc. Elasticsearch is a very famous search engine, based on Lucene, which stores documents inside indices. Kibana is basically the GUI of Elasticsearch. It provides a user interface for searching and displaying the data.
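As a minimal sketch of how the pieces fit together: Filebeat tails the log files and ships them to Logstash, which parses them and forwards them to Elasticsearch. The file paths, hosts and ports below are assumptions for illustration, not taken from the post:

```
# filebeat.yml -- ship local log files to Logstash
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/*.log   # assumed application log path

output.logstash:
  hosts: ["localhost:5044"]    # default Beats port on the Logstash host

# pipeline.conf -- a Logstash input-filter-output pipeline
input {
  beats {
    port => 5044
  }
}
filter {
  # example filter: parse a leading ISO8601 timestamp and a log level
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
  }
}
```

Kibana, pointing at the same Elasticsearch instance, can then be used to search and display the indexed logs.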

ElasticSearch Index Rollover with Timestamps

If you are using ElasticSearch for storing system or application logs, then your ES cluster can quickly get very big. Fortunately, ElasticSearch provides functionality for automatic rollover and deletion of indices. Here is how to configure it. Note: all configurations and examples here are based on ELK 7.x. Some functionalities are not available in previous versions; if you are using an older version, consider upgrading. Create lifecycle policies ElasticSearch provides the Index Lifecycle Management (ILM) functionality, which can roll over indices. It is configured by defining the phases of an index (hot, warm, cold, delete). When handling logs, we can store them in different indices, which are rolled over automatically on certain conditions (size or time). Additionally, old log indices can be deleted to free up space. In the example below we are creating a lifecycle policy "test". The "hot" phase, in which the index is
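A policy like the one described can be created with the ILM API, for example from the Kibana Dev Tools console. The policy name "test" comes from the post; the rollover and retention thresholds below are illustrative assumptions:

```
PUT _ilm/policy/test
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "10gb",
            "max_age": "7d"
          }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```

The "hot" phase rolls the write index over once it exceeds 10 GB or 7 days of age; indices then enter the "delete" phase 30 days after rollover and are removed to free up space.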