Version 6 of the Elastic Stack has been available for some time now, packed with new features and improved performance. Compatibility with syslog-ng was checked already during the alpha phase of development, as syslog-ng is becoming popular among Elasticsearch users: it can greatly simplify logging to Elasticsearch. There are no major changes from the syslog-ng point of view, but – to improve your copy & paste experience – I have updated my getting started guide from Elastic Stack 5 to 6.

This is a quick how-to guide to get you started with syslog-ng (the current version is 3.13) and Elasticsearch 6 on RHEL/CentOS 7.

Installing applications

As a first step, you have to enable a number of software repositories, and then install applications from them. These repositories contain Elasticsearch, the latest version of syslog-ng, and the dependencies of syslog-ng. These are all required for Elasticsearch 6 support.

  1. In the case of RHEL: You first have to enable the so-called “optional” repository (or repo, in its more popular shorter form), which contains a number of packages that are required to start syslog-ng.

In the case of CentOS: The content of this repo is included in CentOS, so you do not have to enable it there separately.

subscription-manager repos --enable rhel-7-server-optional-rpms
  2. The Extra Packages for Enterprise Linux (EPEL) repository contains many useful packages that are not included in RHEL. It also has an older version of syslog-ng, but that version does not support Elasticsearch at all. Still, a few dependencies of syslog-ng come from this repo. You can enable it by downloading and installing an RPM package:
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm -Uvh epel-release-latest-7.noarch.rpm
  3. Next, add the repo containing the latest unofficial build of syslog-ng. At the time of writing this blog post, it is syslog-ng 3.13 and it is available on the Copr Build Service. Download the repo file to /etc/yum.repos.d/, so you can install and enable syslog-ng:
cd /etc/yum.repos.d/
wget https://copr.fedorainfracloud.org/coprs/czanik/syslog-ng313/repo/epel-7/czanik-syslog-ng313-epel-7.repo
yum install syslog-ng
yum install syslog-ng-java
systemctl enable syslog-ng
systemctl start syslog-ng

It is not strictly required, but you can avoid some confusion if you also delete rsyslog at the same time:

yum erase rsyslog
  4. To install Elasticsearch, you have to use your text editing skills: copy and paste the repository information from https://www.elastic.co/guide/en/elasticsearch/reference/current/rpm.html into a file under /etc/yum.repos.d:
cd /etc/yum.repos.d/
vi elastic.repo
yum install elasticsearch
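For reference, at the time of writing the repo file for the 6.x series on the Elastic page above looks like this (verify it against the current documentation before use):

```ini
[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
```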
  5. Before starting Elasticsearch, you should change at least one setting in the configuration file: the name of the Elasticsearch cluster. Make sure that there is no other cluster with the same name on your network. The name you define here will also be used later in your syslog-ng configuration. Once you have configured it, you can enable and start Elasticsearch.
echo cluster.name: syslog-ng >> /etc/elasticsearch/elasticsearch.yml
systemctl enable elasticsearch
systemctl start elasticsearch
  6. Java-based destinations in syslog-ng require libjvm.so to be in the library path. My blog post at https://czanik.blogs.balabit.com/2016/03/troubleshooting-java-support-in-syslog-ng/ describes the topic in detail.

If you only have a single Java version on your system, the commands below add the directory containing libjvm.so to the library path:

echo /usr/lib/jvm/jre/lib/amd64/server > /etc/ld.so.conf.d/java.conf
ldconfig

You can check whether syslog-ng finds the libjvm.so file using the following command:

syslog-ng -V

The version information printed also includes a warning message if syslog-ng cannot find libjvm.so. In this case, refer to the blog mentioned above to resolve the problem.

Configuring syslog-ng

As a last step, create a configuration file for syslog-ng. A base configuration is already in place. You can extend it by creating a file under /etc/syslog-ng/conf.d with a .conf extension.

cd /etc/syslog-ng/conf.d
vi es.conf

The following configuration has a few twists that give you some name-value pairs to analyze without having to write PatternDB rules.

The complete configuration is included at the end of this section; the snippets below demonstrate the role of each part.

The first part of the configuration defines a file source for audit.log.

source s_auditd {
  file("/var/log/audit/audit.log");
};

The next part defines the Elasticsearch destination. The name of the Elasticsearch cluster is “syslog-ng”. If you have configured something else as the name of the Elasticsearch cluster, use that name here. Note that the client mode must be “http”, as other modes are not supported for Elasticsearch 5.0 or later (except for “https” used for encrypted connections).

destination d_elastic {
  elasticsearch2 (
    cluster("syslog-ng")
    client_mode("http")
    index("syslog-ng")
    type("test")
    template("$(format-json --scope rfc5424 --scope nv-pairs --exclude DATE --key ISODATE)")
  )
};
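To give you an idea of what the template does, here is a hypothetical, hand-written example of the kind of JSON document this destination sends to Elasticsearch for a local log message (values are made up; the field names come from the rfc5424 scope of format-json):

```json
{
  "PROGRAM": "sshd",
  "PRIORITY": "info",
  "PID": "1234",
  "MESSAGE": "Accepted publickey for root from 192.0.2.10",
  "ISODATE": "2018-01-15T10:23:45+01:00",
  "HOST": "centos7",
  "FACILITY": "auth"
}
```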

The first log path sends local logs to the Elasticsearch destination without any processing. The source of the local logs, source(s_sys), is defined in /etc/syslog-ng/syslog-ng.conf, the main configuration file of syslog-ng.

log {
  source(s_sys);
  destination(d_elastic);
};

The second log path parses audit.log with the Linux audit parser, and further parses the MSG field of audit logs, which can contain valuable information (for example, source IP address and the status of an SSH login). Just like the other log path, this one also stores the results to Elasticsearch, but in this case, it includes many interesting name-value pairs.

log {
  source(s_auditd);
  parser {
    linux-audit-parser (prefix("auditd."));
  };
  parser {
    kv-parser (template("${auditd.msg}") prefix("amsg."));
  };
  destination(d_elastic);
};
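If you are curious what the kv-parser step contributes, the following stand-alone sketch mimics it on a simplified, hand-written audit msg field (the sample string is made up for illustration; the real parser also handles quoting and other edge cases that this loop does not):

```shell
# Hypothetical content of ${auditd.msg} (values made up for illustration)
msg="op=login id=1000 exe=/usr/sbin/sshd addr=192.0.2.10 res=success"

# Split on whitespace and emit each pair the way kv-parser would name it,
# using the amsg. prefix from the configuration above
for kv in $msg; do
  echo "amsg.${kv%%=*} = ${kv#*=}"
done
```

This prints one line per pair, such as "amsg.addr = 192.0.2.10", which is exactly the kind of name-value pair you will later see as a field in Elasticsearch.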

And here is the complete configuration to make copy & paste easier for you:

source s_auditd {
  file("/var/log/audit/audit.log");
};
destination d_elastic {
  elasticsearch2 (
    cluster("syslog-ng")
    client_mode("http")
    index("syslog-ng")
    type("test")
    template("$(format-json --scope rfc5424 --scope nv-pairs --exclude DATE --key ISODATE)")
  )
};
log {
  source(s_sys);
  destination(d_elastic);
};
log {
  source(s_auditd);
  parser {
    linux-audit-parser (prefix("auditd."));
  };
  parser {
    kv-parser (template("${auditd.msg}") prefix("amsg."));
  };
  destination(d_elastic);
};

Once you have saved the file, restart syslog-ng so that it picks up the new configuration:

systemctl restart syslog-ng

Displaying results

Most people use Elasticsearch because they want to use Kibana to search and visualize their log messages. To set up Kibana:

  1. Install Kibana from the previously configured Elastic repository by issuing the following command:
yum install kibana
  2. By default, the Kibana web interface binds only to 127.0.0.1, so you cannot reach it from a remote machine. If you want to access Kibana remotely, change the server.host setting in /etc/kibana/kibana.yml to the server’s IP address or to 0.0.0.0.
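The relevant line in /etc/kibana/kibana.yml would look something like this (0.0.0.0 listens on all interfaces; use a specific address if you prefer):

```yaml
server.host: "0.0.0.0"
```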
  3. You can now enable and start Kibana:
systemctl enable kibana
systemctl start kibana
  4. When you first open Kibana on port 5601, it displays an initial setup screen. Enter “syslog-ng*” as the index name here – that is, if you have followed my instructions above and used the same index name.
  5. Once Kibana has found the index, you have to configure the “Time-field name”. If you use the above configuration for syslog-ng, it is “ISODATE”.
  6. Click Create, and Kibana is ready to use.

Are you stuck?

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or you can even chat with us. For a long list of possibilities, check our contact page at https://syslog-ng.org/contact-us/. On Twitter I am available as @PCzanik.