Balabit Blog – Security and Business in Context
https://www.balabit.com/blog

Web interfaces for your syslog server – an overview
https://www.balabit.com/blog/web-interfaces-for-your-syslog-server-an-overview/
Thu, 12 Oct 2017
This is the 2017 edition of my most popular blog post, about syslog-ng web-based graphical user interfaces (web GUIs). Many things have changed in the past few years: in 2011 only a single logging as a service solution was available, while now I regularly run into yet another new one. The number of logging-related GUIs is also growing, so I will mostly focus on syslog-ng based solutions.

Introduction

Centralized logging of events has been an important part of IT for many years. It is more convenient to browse logs in a central location rather than viewing them on individual machines. Central storage is also more secure. Even if logs stored locally are altered or removed, you can still check the logs on the central log server. Compliance with different regulations also makes central logging necessary.

System administrators often prefer the command line. Utilities such as grep and AWK are powerful tools, but complex queries can be completed much faster when logs are indexed in a database and queried through a web interface. With large volumes of messages, a web-based solution backed by an indexed database is not just convenient, it is a necessity. At thousands of incoming messages per second, the indexes of log databases still give Google-like response times even for the most complex queries, while traditional text-based tools cannot scale as efficiently.
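For context, here is a minimal sketch of the command-line approach (the log lines and message format below are made up for illustration, and the logs are piped inline instead of read from a file):

```shell
# Count failed SSH logins per source IP from a few sample log lines.
printf '%s\n' \
  'Oct 12 07:30:01 host sshd[101]: Failed password for root from 10.0.0.5' \
  'Oct 12 07:30:02 host sshd[102]: Failed password for admin from 10.0.0.5' \
  'Oct 12 07:30:03 host sshd[103]: Accepted password for bob from 10.0.0.9' \
| grep 'Failed password' \
| awk '{print $NF}' \
| sort | uniq -c
```

This works well for ad-hoc queries on a single host; the point above is that it stops being practical once the message volume calls for indexing.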

Logging as a Service (LaaS)

A couple of years ago, Loggly was the pioneer of logging as a service (LaaS). Today there are many other LaaS providers (Papertrail, Logentries, Sumo Logic, and so on), and syslog-ng works perfectly with them. On the other hand, it is better to think twice before relying solely on a cloud service. LaaS needs a continuous network connection to avoid losing log messages, and a high-speed uplink if you have more than just a few events per second. For applications already running in the cloud this is less of a concern, but fast and reliable network connections are quite expensive and should be factored in when looking for a log management solution for your local logs.

Structured fields and name-value pairs in logs are increasingly important, as they are easier to search and easier to create meaningful reports from. The more recent IETF RFC 5424 syslog standard supports structured data, but it is still not in widespread use.

People started to use JSON embedded into legacy (RFC 3164) syslog messages. The syslog-ng application can send JSON-formatted messages; for example, it can convert the following into structured JSON:

  • RFC5424-formatted log messages
  • Windows Eventlog messages received from the syslog-ng Agent for Windows application
  • Name-value pairs extracted from a log message with PatternDB or the CSV parser.

Loggly and other services can receive JSON-formatted messages, and make them conveniently available from the web interface.
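As a sketch, a syslog-ng destination along these lines can forward logs as JSON over the network; the host name and port are placeholders, and your LaaS provider's documentation will specify the exact transport and any authentication token required:

```
destination d_laas {
    # $(format-json) serializes the parsed name-value pairs as JSON
    syslog("logs.example.com" port(6514) transport("tls")
        template("$(format-json --scope rfc5424 --scope nv-pairs)\n")
    );
};
```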

Some non-syslog-ng-based solutions

Before moving on to solutions with syslog-ng at their heart, I would like to say a few words about the others, some of which were included in the previous edition of this blog.

LogAnalyzer from the makers of Rsyslog was a simple, easy-to-use PHP application a few years ago. While it has developed quite a lot, recently I could not get it to work with syslog-ng. Some popular monitoring applications, such as Nagios and Cacti, also have syslog support to some extent. I have tested some of these, and have even sent patches and bug reports to enhance their syslog-ng support, but syslog is not their focus, just one of many possible inputs.

The ELK stack (Elasticsearch + Logstash + Kibana) and Graylog2 have become popular recently, but they use their own log collectors instead of syslog-ng, and syslog is just one of many log sources. Their syslog support is quite limited both in performance and in protocol support. They recommend using file readers to collect syslog messages, but that increases complexity, as it adds yet another piece of software on top of syslog(-ng), and filtering still needs to be done on the syslog side. Note that syslog-ng can send logs to Elasticsearch natively, which can greatly simplify your logging architecture.

Collecting and displaying metrics data

You can collect metrics data, for example from netdata or collectd, using syslog-ng, and send the collected data to Graphite or Elasticsearch. Graphite has its own web interface, while you can use Kibana to query and visualize data collected into Elasticsearch. Another option is Grafana. Originally developed as an alternative web interface to Graphite databases, it can now also visualize data from many more data sources, including Elasticsearch. It can combine multiple data sources on a single dashboard and provides fine-grained access control.

Syslog-ng based solutions

As promised at the beginning, I will focus on syslog-ng based solutions. While every software described below was originally built on syslog-ng Open Source Edition (except for Balabit’s own syslog-ng Store Box (SSB)), there are already some large-scale deployments that use syslog-ng Premium Edition as the syslog server.

  • The syslog-ng application and SSB focus on generic log management tasks and compliance
  • ELSA serves the need of network security professionals
  • LogZilla focuses on logs from Cisco devices
  • Recent syslog-ng releases are also able to store log messages directly into Elasticsearch, a distributed, scalable database system popular in DevOps environments, which enables the use of Kibana for analyzing log messages.

Benefits of using syslog-ng PE with these solutions include the logstore, a tamper-proof log storage (even if it means that your logs are stored twice), binaries for over 50 platforms – including Windows – and vendor support.

Mojology

Mojology is a MongoDB-based web GUI for syslog-ng, but it is not actively maintained anymore. Still, if you use MongoDB for all of your business data, it might provide a base for your own log viewing software, as the source is available on GitHub: https://github.com/algernon/mojology

LogZilla

LogZilla is the commercial reincarnation of one of the oldest syslog-ng web GUIs: PHP-Syslog-NG. It provides the familiar user interface of its predecessor, but also includes many new features. The user interface supports Cisco Mnemonics, extended graphing capabilities, and e-mail alerts. Behind the scenes, LDAP integration, message de-duplication, and indexing for quick searching were added for large datasets.

Over the past years it has received many small improvements: it became faster, and role-based access control was added, as well as live tailing of log messages. Of course, all these new features come at a price; the free edition, which I often recommended for small sites with Cisco logs, is completely gone now.

Two years ago LogZilla 5, a complete rewrite, became available with many performance improvements under the hood and a new dashboard on the surface. Development never stopped, and LogZilla can now parse and enrich log messages and automatically respond to events.

It is an ideal solution for a network operations center (NOC) full of Cisco devices.

Web site: http://logzilla.net/

ELSA – Enterprise Log Search and Archive

Enterprise Log Search and Archive (ELSA) is a centralized logging framework with syslog-ng at its heart. It is the first larger open source project outside of Balabit utilizing the power of the syslog-ng PatternDB log classification tool.

The ELSA architecture is designed to work at a high, continuous incoming message rate, and to bear even higher load peaks for hours. Unlike other solutions that use regular expressions, ELSA utilizes PatternDB, which is more powerful and requires fewer resources. For example, you can narrow a search to cases where a given IP address is the target of an operation, rather than performing a general IP address search. ELSA has many features to make the work of sysadmins easier, including a tab-based user interface to run related queries in parallel, scheduled searches, and easy-to-build queries using menus.

In the past few years ELSA has improved in many ways. It now comes with a good number of bundled message patterns for security-related logs, such as firewall, IDS, and IPS logs. There are also plugins that can enrich logs with additional information, such as whois records for IP addresses found in the logs.

For better scalability, the web interface, the database and the indexers can be distributed to multiple machines. This distributed configuration ensures that even an extreme amount of log messages can be searched with Google-like response times.

ELSA is an ideal solution for security teams with large amounts of logs, as it can collect and analyze them in near real time.

The source code is available at https://github.com/mcholste/elsa

Unfortunately, ELSA is no longer actively developed. On the other hand, it still offers many unique features. If you want to give it a try, the easiest way is to install Security Onion, which integrates ELSA with a variety of network security software, such as Bro, netsniff-ng, and more. In the long term, ELSA will be replaced by Kibana in Security Onion.

Elasticsearch and Kibana

Elasticsearch is gaining momentum as the ultimate destination for log messages. There are two major reasons for this:

  • You can store arbitrary name-value pairs coming from structured logging or message parsing.
  • You can use Kibana as a search and visualization interface.

The syslog-ng application can send logs directly into Elasticsearch. We call this an ESK stack (Elasticsearch + syslog-ng + Kibana).
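As a minimal sketch of such a destination, assuming the Java-based elasticsearch2 driver of syslog-ng of that era (the server name and index pattern below are placeholders, so check the version-specific documentation for the exact options):

```
destination d_elastic {
    elasticsearch2(
        client-mode("http")                      # talk to the Elasticsearch HTTP API
        server("localhost") port("9200")
        index("syslog-${YEAR}.${MONTH}.${DAY}")  # one index per day
        type("messages")
        template("$(format-json --scope rfc5424 --scope nv-pairs)")
    );
};
```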

Learn how you can simplify your logging to Elasticsearch by using syslog-ng: https://www.balabit.com/blog/logging-to-elasticsearch-made-simple-with-syslog-ng/

You can also give the Elastic integration of Security Onion a try. Instructions are available at https://github.com/Security-Onion-Solutions/security-onion/wiki/Elastic. Note that this is still alpha quality at the time of writing and not recommended for production.

syslog-ng Store Box (SSB)

SSB is a log management appliance built on syslog-ng Premium Edition. SSB adds a powerful indexing engine, authentication and access control, customized reporting capabilities, and an easy-to-use web-based user interface.

Recent versions introduced AWS and Azure cloud support and horizontal scalability using remote logspaces. The new content-based alerting can send an e-mail alert whenever a match between the contents of a log message and a search expression is found.

SSB is really fast when it comes to indexing and searching log data. To put this scalability in context, the largest SSB appliance stores up to 10 terabytes of uncompressed, raw logs. With SSB’s current indexing performance of 100,000 events per second, that equates to approximately 8.6 billion logs or 1.7 terabytes of log data per day (calculating with an average event size of 200 bytes). Using compression, a single large SSB appliance can store approximately one month of log data for an enterprise generating 1.7 terabytes of event data a day. This compares favorably to other solutions that require several nodes for collecting this amount of messages and even more nodes for storing them. While storing logs in the cloud is getting popular, on-premise log storage is still a lot cheaper for large amounts of logs.
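The arithmetic behind these figures checks out; a quick back-of-the-envelope sketch using the numbers from the text:

```shell
# Sizing check: events/second and average event size are taken from the text.
EPS=100000              # indexing performance: events per second
SECS_PER_DAY=86400
AVG_EVENT_BYTES=200

events_per_day=$((EPS * SECS_PER_DAY))   # 8,640,000,000, i.e. ~8.6 billion
tb_per_day=$(awk -v e="$events_per_day" -v b="$AVG_EVENT_BYTES" \
    'BEGIN { printf "%.2f", e * b / 1e12 }')
echo "$events_per_day events/day, $tb_per_day TB/day"
```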

The GUI makes searching logs, configuring and managing the SSB easy. The search interface allows you to use wildcards and Boolean operators to perform complex searches, and drill down on the results. You can gain a quick overview and pinpoint problems fast by generating ad-hoc charts from the distribution of the log messages.

Configuring the SSB is done through the user interface. All of the flexible filtering, classification and routing features in the syslog-ng Open Source and Premium Editions can be configured with it. Access and authentication policies can be set to integrate with Microsoft Active Directory, LDAP and Radius servers. The web interface is accessible through a network interface dedicated to the management traffic. This management interface is also used for backups, sending alerts, and other administrative traffic.

SSB is a ready-to-use appliance, which means that no software installation is necessary. It is easily scalable, because SSB is available both as a virtual machine and as a physical appliance, ranging from entry level servers to multiple-unit behemoths. For mission critical applications you can use SSB in High Availability mode. Enterprise level support for SSB and syslog-ng PE are also available.

Read more about Balabit syslog-ng and SSB products

Request evaluation version / callback

Upgrading from syslog-ng open source to premium edition
https://www.balabit.com/blog/upgrading-from-syslog-ng-open-source-to-premium-edition/
Thu, 05 Oct 2017
The syslog-ng application has two different editions. Most of my readers use the syslog-ng open source edition (OSE). There is also a commercial version, called the syslog-ng premium edition (PE), which comes with a few extra features as well as commercial support. Upgrading from OSE to PE, or the other way around, is not automated and not even always possible. This is due to feature set differences: OSE and PE share a common core but have a different focus. Read on to learn about some of the limitations and some tips for upgrading.

Background

Development of syslog-ng was started by Balázs Scheidler – one of Balabit’s founders – years before Balabit was founded. At that time and for many years that followed, syslog-ng was fully open source. It quickly became part of most Linux distributions and BSD variants. After a while, however, requests for commercial support started coming in, and that’s how syslog-ng PE was born. While both versions are developed mostly by the same set of people, there are also some important differences.

Both syslog-ng OSE and PE users call their software “syslog-ng” without any additional marking. To keep things simple, I refer to them as “OSE” and “PE”, or simply “syslog-ng” when a statement is valid for both.

OSE – as its name implies – is developed in the open and includes many community-contributed features. Some of these are highly experimental, require exotic external dependencies, or are important only to a very limited set of users. PE is built from the same code base, but includes only a subset of OSE features that are well tested and represent commercial value. These features can be commercially supported because they are covered by automated end-to-end tests, which ensure that they not only compile but also work correctly on many different platforms. PE also has some exclusive features, mostly related to compliance. Contact us if you use (or plan to use) PE but miss a feature that is currently only available in OSE.

Packaging of syslog-ng OSE and PE also varies greatly. With PE, it is easy: all dependencies are included in a single package, either in a distribution-specific format (rpm or deb) or in a generic .run installer. With OSE, it is completely different. Distribution packages do not bundle dependencies and only include features whose dependencies are available within the distribution. Packaging is modular to make sure that you install only a minimal set of extra dependencies. For example, SQL drivers are only installed if you install the syslog-ng-sql sub package.

To add insult to injury, the naming and content of sub packages varies between distributions, and there are also unofficial OSE packages enabling more features than available in official distribution packages.

What it means for you

Even if you use only basic features in OSE, you will need to edit your syslog-ng.conf to use the correct version number at the top of the file. Most likely you will need to make some more modifications as well.

As mentioned above, not all OSE features are available in PE. If you try to start PE with an unknown feature enabled, it fails to start. Packaging can also trigger conflicts, for example with systemd service files:

[root@localhost ~]# rpm -Uvh syslog-ng-premium-edition-compact-7.0.5-1.rhel7.x86_64.rpm 
Preparing...                          ################################# [100%]
	file /usr/lib/systemd/system/syslog-ng.service from install of syslog-ng-premium-edition-compact-7.0.5-1.rhel7.x86_64 conflicts with file from package syslog-ng-3.12.1-2.el7.centos.x86_64
[root@localhost ~]#

Upgrading – the clean way

The cleanest way to upgrade from syslog-ng OSE to PE is to remove the OSE package from the system. Unless you never touched the syslog-ng configuration, you should of course make a backup of syslog-ng.conf first. This way you can avoid the packaging conflicts and feature differences and do a clean installation of PE.

In my examples below, I upgrade syslog-ng OSE version 3.12 from my unofficial repositories, running on Red Hat Enterprise Linux 7.4, to syslog-ng PE version 7.0.5.

Removing OSE

The following instructions assume that the user is in the /root directory.

  1. Copy the contents of /etc/syslog-ng to a directory under /root (or where you can find it…), so you have a backup you can work from later: cp -R /etc/syslog-ng sngose
  2. Remove the syslog-ng package and dependent sub packages: yum erase syslog-ng
  3. Remove the /etc/syslog-ng directory: rm -fr /etc/syslog-ng

Note that you should check the output of yum carefully. If any applications other than syslog-ng and its sub packages are listed, you should instead remove syslog-ng using rpm -e --nodeps, so that dependent packages are not removed.

Installing PE

The following instructions assume that the PE rpm package is available in the current directory. You can install syslog-ng PE using the following command:

[root@localhost ~]# rpm -Uvh syslog-ng-premium-edition-compact-7.0.5-1.rhel7.x86_64.rpm 
Preparing...                          ################################# [100%]
Trying to stop syslog services on Linux, using systemd services.
Updating / installing...
   1:syslog-ng-premium-edition-compact################################# [100%]
Created symlink from /etc/systemd/system/multi-user.target.wants/syslog-ng.service to /usr/lib/systemd/system/syslog-ng.service.
[root@localhost ~]#

Merging configurations

The configuration file of the freshly installed PE is available under /opt/syslog-ng/etc/syslog-ng.conf. Before doing anything else, I’d recommend making a backup of it. The next steps are not carved into stone and largely depend on your previous OSE configuration and what you want to achieve:

  • Append your old OSE configuration to /opt/syslog-ng/etc/syslog-ng.conf
  • Edit out redundant configuration parts – for example, a version declaration – and those referring to features unavailable in PE – like the Riemann destination.
  • Syntax check your configuration using the -s option of syslog-ng. Make sure that you use the full path to PE, or add it to the PATH: /opt/syslog-ng/sbin/syslog-ng -s
  • If no errors are found, stop syslog-ng: systemctl stop syslog-ng
  • Try to start syslog-ng from the command line in the foreground using the -F option, so you can see any errors:
[root@localhost etc]# /opt/syslog-ng/sbin/syslog-ng -F
[2017-10-03T14:04:18.968550] Error resolving reference; content='source', name='s_sys', location='/opt/syslog-ng/etc/syslog-ng.conf:86:2'

In this case, I forgot to rename a reference to the local system sources. The OSE package used the ‘s_sys’ name for it, the default PE configuration uses ‘s_local’. Once I fixed it, I ran into another problem. As I’m an OSE user, I completely forgot that some features of PE require a license file to be present:

[root@localhost etc]# /opt/syslog-ng/sbin/syslog-ng -F
[2017-10-03T14:07:05.894534] syslog-ng running in client/relay mode, cannot initialize plugin; plugin name='java'
[2017-10-03T14:07:05.894560] Error initializing message pipeline; plugin name='java', location='#buffer:2:3'
  • All is well that ends well. If your configuration works fine, you do not have to start syslog-ng in the foreground anymore. Stop it using Ctrl-C, and start syslog-ng as a service: systemctl start syslog-ng

Future

There are plans at Balabit to make migration easier. Of course, not all of the above problems can be eliminated, but there is still room for improvement. Official OSE packages from Balabit will be available soon. As part of this effort, we will try to make sure that these OSE packages are easier to upgrade to syslog-ng PE.

Related reading

There are many other upgrade scenarios for syslog-ng PE. Check the documentation for details: https://www.balabit.com/documents/syslog-ng-pe-7.0-guides/en/syslog-ng-pe-guide-admin/html/upgrading-syslog-ng.html

 

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or you can even chat with us. For a long list of possibilities, check our contact page at https://syslog-ng.org/contact-us/. On Twitter, I am available as @PCzanik.

Privileged Identity Theft: A familiar theme in the Deloitte data breach
https://www.balabit.com/blog/privileged-identity-theft-familiar-theme-deloitte-data-breach/
Wed, 04 Oct 2017
Like myself, security professionals reading about the Deloitte data breach in the Guardian must have felt a sense of dread as they came across the sentence:

“The hacker compromised the firm’s global email server through an ‘administrator’s account’ that, in theory, gave them privileged, unrestricted ‘access to all areas’.”

Privileged identity theft, the compromise of privileged account credentials, is devastating. This is precisely what we saw with Deloitte’s breach, where the hacker compromised the firm’s global email server through a privileged administrator account which required only a single password.

 

Undiscovered hack

In my recent blog “Five Process Changes to Mitigate Privileged Account Risk”, I reviewed some quick wins regarding privileged accounts, but these are just the beginning. If a company such as Deloitte, with one of the most skilled IT teams in the industry, can suffer a data breach, it serves as a warning to all companies: if hackers are able to obtain privileged credentials, perimeters alone will never be enough to keep them out.

As reported by the Guardian, Deloitte discovered the hack in March, but cyber attackers could have breached its systems as long ago as October or November 2016. It’s not uncommon for hackers to go undiscovered for long periods of time like this. In targeted attacks, hackers usually gain a foothold first through compromising a user account and then look for other accounts to compromise with the aim of escalating privileges. By compromising privileged accounts, they can roam IT systems undetected – even for months – under the guise of authorized users.

 

Deploy a defense-in-depth security strategy

While password management – including two-factor authentication – is a good first line of defense, implementing monitoring tools that track privileged users’ activity and notify security teams of a potential breach is a necessary part of a defense-in-depth security strategy. Advanced analytics that examine user behavior in real time to assess whether it is normal or unusual, even getting down to minute traits such as changes in typing speed or common spelling errors, provide an added layer of protection.

With these two fundamentals in place – 1) continuously being on the lookout; and 2) looking out for behavioral anomalies – organizations can ensure they’re able to expose hackers at the very moment they gain privileged access to the network.

Our latest white paper, “Understanding Privileged Identity Theft”, details the typical attack methods criminals use to compromise credentials, why current methods don’t offer adequate protection, and what measures you can take to stop these threats. You can download it here.

Filling your data lake with log messages: the syslog-ng Hadoop (HDFS) destination
https://www.balabit.com/blog/filling-your-data-lake-with-log-messages-the-syslog-ng-hadoop-hdfs-destination/
Thu, 28 Sep 2017
Petabytes of data are now collected into huge data lakes around the world, and Hadoop is the technology enabling this. While syslog-ng has been able to write logs to Hadoop using workarounds (mounting HDFS through FUSE) for quite some time, the new Java-based HDFS destination driver provides both better performance and more reliability. Instead of developing our own implementation, syslog-ng utilizes the official HDFS libraries from http://hadoop.apache.org/, making it a future-proof solution.

Support for HDFS has been available in syslog-ng since Open Source Edition version 3.7 and Premium Edition 5.3, with recent versions greatly enhancing its functionality.

 


Getting started with Hadoop

When I first tested Hadoop, I did it the hard way, following the long and detailed instructions on the Apache Hadoop website and configuring everything manually. It worked, but it was quite a long and tiring process. Fortunately, in the age of virtualization, configuration management and Docker, there are much easier ways to get started. Just download a Docker image, a Puppet manifest or a complete virtual machine, and you can have Hadoop up and running in your environment in a matter of minutes. Of course, while next-next-finish is good for testing, for a production environment you still need to gain some in-depth knowledge about Hadoop.

Hadoop is now an Apache project, as you might have already figured out from the above URL. On the other hand, there is now a huge ecosystem built around Hadoop, with many vendors providing their own Hadoop distribution with support, integration, and additional management tools. Some of the largest Hadoop vendors are already certified to work with syslog-ng: we are a MapR Certified Technology Partner and Hortonworks HDP Certified, but Cloudera and others should work as well. On the syslog-ng side we tend to use the HDFS libraries (JAR files) from the Apache project, except for MapR, which uses a slightly different implementation. For the purposes of this blog I used the Hortonworks Sandbox as the HDFS server, a virtual machine for testing purposes with everything pre-installed. But everything I write here should apply on the syslog-ng side to any other Hadoop implementation (except for MapR, as noted).

 

Benefits of using syslog-ng with Hadoop

You most likely already use syslog-ng on your Linux machines or, perhaps without knowing it, on your network or storage devices. You can use it the same way in a Big Data environment as well. The syslog-ng application can write log messages to the Hadoop Distributed File System (HDFS). However, it does not simply collect syslog messages and write them to HDFS: syslog-ng can collect messages from several sources and both process and filter them before storing them to HDFS. This can simplify the architecture, lessen the load on both the storage and the processing side of the Hadoop infrastructure thanks to filtering, and ease the work of processing jobs, as they receive pre-processed messages.

Data collector

Based on the name of syslog-ng, most people consider it an application for collecting syslog messages. That is partially right: syslog-ng can collect syslog messages from a large number of platform-specific sources (like /dev/log, the journal, sun-streams, and so on). But syslog-ng can also read files, run applications and collect their standard output, read messages from sockets and pipes, and receive messages over the network. There is no need for a separate script or application to accomplish these tasks: syslog-ng can be used as a generic data collector, which can greatly simplify the data pipeline.

A considerable number of devices emit a high number of syslog messages to the network but cannot store them: routers, firewalls, network appliances. The syslog-ng application can collect these messages, even at high message rates, whether they are transmitted using the legacy or the RFC 5424 syslog protocol, over TCP, UDP or TLS.

This means that application logs can be enriched with syslog and networking device logs, providing valuable context for operations teams, and all of this is delivered by a single application: syslog-ng.
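The collection options above could be combined in a single source block along these lines (the file path, command, ports, and certificate locations are illustrative placeholders):

```
source s_collect {
    file("/var/log/myapp/app.log" follow-freq(1));   # follow an application log file
    program("/usr/local/bin/metrics-collector");     # read a command's standard output
    network(transport("udp") port(514));             # legacy (RFC 3164) syslog from devices
    syslog(transport("tls") port(6514)               # RFC 5424 syslog over TLS
        tls(key-file("/etc/syslog-ng/key.pem")
            cert-file("/etc/syslog-ng/cert.pem"))
    );
};
```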

Data processor

There are several ways to process data in syslog-ng. First of all, data is parsed. By default this is done by one of the syslog parsers (either the legacy or the RFC 5424 parser), but it can be replaced by others, or the message content can be parsed further. Columnar data can be parsed with the CSV parser, free-form messages – like most syslog messages – with the PatternDB parser, and there are parsers for JSON data and key-value pairs as well. If a message does not fit any of the above categories, you can even write a custom parser in Python. You can read in depth about the parsers in Chapter 12 (Parsing and segmenting structured messages) and Chapter 13 (Processing message content with a pattern database) of the documentation.
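For instance, parsing a comma-separated application log into name-value pairs with the CSV parser might look like this (the column names and the s_orders source are hypothetical):

```
parser p_orders {
    csv-parser(
        columns("ORDER.ID", "ORDER.USER", "ORDER.AMOUNT")
        delimiters(",")
        flags(strip-whitespace)
    );
};

log {
    source(s_orders);    # hypothetical file() source reading the CSV log
    parser(p_orders);
    destination(d_hdfs);
};
```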

Messages can also be rewritten, for example to overwrite credit card numbers or user names in order to comply with regulations or privacy requirements.
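A rewrite rule for this purpose could be sketched as follows. The regular expression is deliberately simplistic and only illustrative (a real card-number mask needs a more careful pattern), and the source name is a placeholder:

```
# Hypothetical sketch: replace 16-digit sequences in the message body.
rewrite r_mask_cc {
    subst("[0-9]{16}", "[REDACTED-PAN]", value("MESSAGE"), type("pcre"), flags(global));
};

log { source(s_local); rewrite(r_mask_cc); destination(d_hdfs); };
```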

Data can also be enriched in multiple ways. You can add name-value pairs from external files, or use the PatternDB parser to create additional name-value pairs based on message content. The GeoIP parser can add the geographical location based on an IP address contained in the log message.
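For instance, a GeoIP parser could be attached in a sketch like this. The field, prefix and database path are assumptions, and the exact options depend on your syslog-ng version:

```
# Hypothetical sketch: derive location name-value pairs from an IP address
# stored in the HOST macro; the database path is a placeholder.
parser p_geoip {
    geoip("${HOST}"
        prefix("geoip.")
        database("/usr/share/GeoIP/GeoLiteCity.dat")
    );
};

log { source(s_net); parser(p_geoip); destination(d_hdfs); };
```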

It is also possible to completely reformat messages using templates based on the requirements or the needs of applications processing the log data. Why send all fields from a web server log if only a third of them are used on the processing end?
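As a sketch, a destination template might keep just a handful of fields when forwarding; the field selection and the destination name here are illustrative only:

```
# Hypothetical sketch: forward only a date, a host and the message body
# as JSON instead of the full set of name-value pairs.
destination d_slim {
    file("/var/log/slim.json"
        template("$(format-json date=$ISODATE host=$HOST msg=$MESSAGE)\n")
    );
};
```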

All of this can be done close to the message source, on your syslog-ng clients, relays or servers, which can significantly lessen the load on your Hadoop infrastructure.

Data filtering

Unless you really want to forward all collected data, you will use one or more filters in syslog-ng. But even if you store everything, you will most likely store different logs in different files. There are several filter functions, both for message content and for message parameters like application name or message priority. All of these can be freely combined using Boolean operators, making very complex filters possible. Filters have two major uses:

  • only relevant messages get through, the rest can be discarded
  • messages can be routed to the right destinations

Either of these can lessen the resource usage, and therefore the load, on both storage nodes and data processing jobs.
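A combined filter along these lines illustrates both uses. The program name, priorities, and the source and destination names are assumptions for illustration:

```
# Hypothetical sketch: keep only high-priority sshd or auth messages,
# route them to HDFS, and let everything else fall through to a local file.
filter f_auth_errors {
    (program("sshd") or facility(auth)) and level(err..emerg);
};

log { source(s_net); filter(f_auth_errors); destination(d_hdfs); };

# flags(fallback) matches only messages not matched by other log paths.
log { source(s_net); destination(d_local); flags(fallback); };
```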

How does the syslog-ng HDFS driver work?

The last step on the syslog-ng side is to store the collected, processed and filtered log messages on HDFS. In order to do that, you need to configure an HDFS destination in your syslog-ng.conf (or in a new .conf file under /etc/syslog-ng/conf.d if you have the include feature configured). The basic configuration is very simple:

destination d_hdfs {
    hdfs(
        client_lib_dir("/opt/hadoop/libs")
        hdfs_uri("hdfs://10.20.30.40:8020")
        hdfs_file("/user/log/logfile.txt")
    );
};

The client_lib_dir option is a list of directories where the required Java classes are located. The hdfs_uri option sets the URI in hdfs://hostname:port format. For MapR, replace hdfs:// with maprfs://. The last mandatory option is hdfs_file, which sets the path and name of the log file. For additional options, check the documentation.

Hadoop-specific considerations

There are some limitations when using the HDFS driver, due to how Hadoop and the client libraries work.

On syslog-ng OSE 3.11 / PE 7.0.4 and earlier:

  • While appending to files can be enabled, it is still an experimental feature in HDFS. To work around this on the syslog-ng side, it is not possible to use macros in hdfs_file. Also, a UUID is appended to the file name, and a new UUID is generated each time syslog-ng is reloaded or when the HDFS client returns an error.

  • You cannot define when log messages are flushed. The syslog-ng application cannot influence when the messages are actually written to disk.

The good news is that with recent Hadoop versions, append mode works reliably. In syslog-ng OSE 3.12 and PE 7.0.5, support for append mode and macros was added. You can enable append mode by using the hdfs-append-enabled(true) option in the HDFS driver; this way, no UUID is appended to the file name. To use macros, simply include them in the file name. Here is the above configuration with appending enabled and the file name containing macros:

destination d_hdfs {
    hdfs(
        client-lib-dir("/opt/hdfs")
        hdfs-uri("hdfs://172.16.146.141:8020")
        hdfs-file("/user/log/czplog-$DAY-$HOUR.txt")
        hdfs-append-enabled(true)
    );
};

In addition, Kerberos authentication was added to the HDFS driver with syslog-ng version 3.10.

Summary

There are endless debates about whether it is better to store all of your logs in your data lake (skeptics call it the grave 🙂 ) or to keep only those that are relevant for operations or business analytics. In either case, there are many benefits to using syslog-ng as a data collection, processing and filtering tool in a Hadoop environment. A single application can collect log and other data from many sources, which complement each other well. Processing of your data can be done close to the source, in efficient C code, lessening the load on the processing side of your Hadoop infrastructure. And before storing your messages to HDFS, you can use filters to throw away irrelevant messages or to route your messages to the right files.

The Hadoop destination is available both in syslog-ng Open Source Edition (OSE) and in the commercially supported Premium Edition (PE). Getting started with it takes a bit more configuration work than with other, non-Java based destinations, but it is worth the effort.

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or you can even chat with us. For a long list of possibilities, check our contact page at https://syslog-ng.org/contact-us/. On Twitter, I am available as @PCzanik.

The post Filling your data lake with log messages: the syslog-ng Hadoop (HDFS) destination appeared first on Balabit Blog.

Five Process Changes to Mitigate Privileged Account Risk

(Originally published Tue, 19 Sep 2017)
Cyber attackers target privileged accounts, and organizations with weak security practices can easily fall prey to privileged identity theft: the compromise of privileged account credentials. Armed with credentials to administrative and service accounts with access to critical IT assets, an attacker can steal data on an industrial scale. If you look at the ten biggest data breaches in history, seven were either suspected or explicitly known to have involved privileged identity theft.

It’s easy to look to technology to harden privileged accounts against attackers, but process changes are just as important, because technology alone won’t save your organization. These are some straightforward process changes that can reduce the risk of a successful attack:

  1. Understand the size of the target

    You can’t defend what you don’t know exists. Establishing a comprehensive and up-to-date list of privileged accounts allows organizations to implement security measures on all of their accounts. As IT environments grow, the number of administrative, service and other types of privileged accounts can proliferate. In large enterprises, getting a handle on privileged accounts can be difficult, but it’s worth the effort.

  2. Limit the size of the target

    Limit the scope of each privileged account across the infrastructure to enforce the principle of least privilege: each account should have exactly the minimum rights required to carry out a specific task. For example, an account set up for administering an application should not have any system privileges beyond what is needed to make changes to the application’s configuration and to restart the application. On a similar note, avoid enabling accounts on systems where they are not needed.

  3. Delete accounts and privileges that are no longer required

    In today’s business environment, organizations experience constant change when it comes to identity and access management. Employees come and go, and change roles as projects begin and end. This dynamic change can lead to security gaps. Inadequate offboarding often creates a situation in which credentials exist for employees who have left the company or changed positions. In the case of contractors, this situation may be even more difficult to manage, particularly if they only required access for a fixed-term project.

  4. Implement a formal password policy

    Companies with a mature security posture usually implement a formal password policy for privileged accounts. The policy should include changing default passwords as a matter of course and requiring strong passwords. It should also prohibit the sharing of passwords for privileged accounts. These seem like obvious recommendations, but companies large and small still fail to take these steps, making life easy for hackers.

  5. Prevent users from taking shortcuts

    Most users accessing privileged accounts such as administrative and service accounts do so to complete their daily tasks. Like anyone, privileged users want to work as efficiently as possible and are just as prone to the temptation of taking shortcuts when it comes to security. Educating employees on security policies and encouraging good behavior can go a long way toward mitigating risks.


These five process improvements can yield big results in making privileged identity theft more difficult for hackers. In our latest white paper, Understanding Privileged Identity Theft, we show why privileged account credentials are a target for criminals, how they are compromised, how current methods fail, and what measures you can take to stop these threats. You can download it here.

New website and official repo coming up, tell us what you think

(Originally published Fri, 15 Sep 2017)
We at Balabit are and have always been both very excited and humbled by the community that formed around syslog-ng over the past two decades. Many of you have been a part of this long journey and helped syslog-ng to get to where it is today, and we thank you for that.

Throughout the years, we have provided best-in-class documentation not just for our commercial offerings but for syslog-ng Open Source Edition as well. However, for the past half decade, official binaries have existed only for syslog-ng Premium Edition.

Soon, this will change.

We will introduce a brand new website for all that is syslog-ng, and with that will come an official syslog-ng OSE repository with binaries for some of the most popular Linux distributions. We expect to launch syslog-ng.com in November, moving content for both OSE and its commercial versions to the site to become a single hub for all syslog-ng users.

The official syslog-ng OSE repository will be just one part (although a major one) of a comprehensive overhaul of our OSE-related content platform. We will provide regular technical webinars and trainings for syslog-ng OSE users, more blogs on practical tips & tricks, common use cases, and so on. We plan to channel all syslog-ng related discussion into a forum on the new site, although this will take some more time. And we have already started putting more resources into the development of the Open Source Edition.

This is a major commitment on our side to the community, and we know that it is the right move. However, we haven’t quite decided on the implementation yet, and we are looking for your feedback. This is a serious business decision, and ultimately all stakeholders (we as a company and you as members of the community) have to benefit from this move, otherwise there is no reason to do it.

We are listing all this so you can see the bigger picture before we ask the question: would you register on the upcoming syslog-ng website to get access to the official repository and the aforementioned content?

What would you gain if you registered?

  • Access to a repository containing the latest syslog-ng packages
  • Alerts, announcements and news about the syslog-ng releases
  • Regular news about syslog-ng related content to help you better solve problems with syslog-ng (blog posts, tutorials, interesting use cases about problems that others solved with syslog-ng, and so on)

What would we gain if you registered?

  • First and foremost, we would get information that is immensely useful for optimizing our development resources: how many of you use the repositories, which versions of syslog-ng you use, which platforms are the most important, which ones you do not use anymore, and so on. Unfortunately, we cannot get such data if you use external repositories (like the current unofficial ones), or if you install syslog-ng from your distribution.
  • Information about which countries are hubs of syslog-ng users: for example, where we could organize meetups most successfully, or participate in local conferences to strengthen the community.
  • The ability to send you information about syslog-ng, both to help you learn about the possibilities of syslog-ng, and also to let you know about our commercial offerings. We know that some of you won’t be interested in anything commercial, and that’s fine. But to keep syslog-ng alive, we also need the support of those who are. We believe that registering on the new website and receiving the occasional email from us is a small favor to ask in return for the convenience of having access to an official repository with up-to-date packages, and we will strive to send you emails with useful content.

Please let us know what you think using this short Google Form.

Installing syslog-ng on AWS Linux AMI

(Originally published Thu, 14 Sep 2017)
You do not have to live without your favorite syslog implementation even on the Amazon Web Services (AWS) Linux AMI. This Linux distribution is based on Red Hat Enterprise Linux version 6, and it takes only minimal extra work to install syslog-ng on it.

Before you begin

There are many different Linux distributions available on AWS, and many of them include syslog-ng as an easy-to-install part of the distribution. The one I am writing about is the Amazon Linux AMI, the custom Linux distribution maintained by Amazon:

  • The AWS Linux AMI is based on RHEL 6, so you can use syslog-ng built for that. This means that you can enable the EPEL repository and use syslog-ng from there. While it works, it is not recommended, as it contains an ancient version (3.2). I recommend using my unofficial syslog-ng packages instead.
  • The latest available version for RHEL 6 is 3.9. This still needs the EPEL repository for dependencies, and you will need to enable my repository as well.

If your company policy suggests using EPEL instead of the latest version, read my blog about the new core features of syslog-ng, which include advanced message parsing and disk-based buffering, and think again.

Installing syslog-ng

Enter the commands below to install syslog-ng 3.9 on AWS Linux AMI:

  1. yum-config-manager --enable epel
    Enables the EPEL repository that contains some of the dependencies necessary to run syslog-ng. The repo file is already there, but it is not enabled.
  2. yum-config-manager --add-repo=https://copr.fedorainfracloud.org/coprs/czanik/syslog-ng39epel6/repo/epel-6/czanik-syslog-ng39epel6-epel-6.repo
    Enables my unofficial syslog-ng repository for RHEL 6. Skip this step only if you are not allowed to use external repositories other than EPEL.
  3. rpm -e --nodeps rsyslog
    Removes rsyslog – which conflicts with syslog-ng – without removing packages, like cronie, that depend on syslog functionality.
  4. yum install -y syslog-ng
    Installs syslog-ng. The “-y” option saves you from answering a few prompts.
  5. chkconfig syslog-ng on
    Makes sure that syslog-ng is started at boot.
  6. /etc/init.d/syslog-ng start
    Starts syslog-ng.

Automating syslog-ng installation

Installing applications from the command line is OK when you have a single machine. When using a private or public cloud, automation is a must if you do not want to waste a lot of time (and money). You can easily automate the above steps by adding them as a shell script while launching a new machine in AWS.

#!/bin/bash
yum-config-manager --enable epel
yum-config-manager --add-repo=https://copr.fedorainfracloud.org/coprs/czanik/syslog-ng39epel6/repo/epel-6/czanik-syslog-ng39epel6-epel-6.repo
rpm -e --nodeps rsyslog
yum install -y syslog-ng
chkconfig syslog-ng on
/etc/init.d/syslog-ng start

If you use the web console to launch a new instance, you can paste the above script in step #3 (“configure instance”) in the text box under “Advanced Details”.

Of course, it is even more elegant if you turn the above commands into a cloud-init script. I leave that exercise up to the reader.

Testing

By now, syslog-ng is installed on your system with the default configuration. Before tailoring it to your environment, make sure that everything works as expected.

You can check that syslog-ng is up and running using the /etc/init.d/syslog-ng status command, which prints the process ID of the syslog-ng application on screen.

You can check (very) basic functionality using the logger command. Enter:

logger this is a test message

And check if it is written to /var/log/messages using the tail command:

tail /var/log/messages

Unless your system is already busy serving users, you should see a similar message among the last lines:

Sep 11 13:09:39 ip-172-x-y-z ec2-user[3395]: this is a test message

What is next?

Here I list a few resources worth reading if you want to learn more about syslog-ng and AWS or if you get stuck along the way:

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or you can even chat with us. For a long list of possibilities, check our contact page at https://syslog-ng.org/contact-us/. On Twitter, I am available as @PCzanik.

Collecting logs from containers using Docker volumes

(Originally published Thu, 07 Sep 2017)
I have already covered how to use syslog-ng in a Docker environment as a traditional central syslog server and how to collect host and container logs from the host journal. There are many applications that log to files or pipes instead of their stdout, the place where Docker expects logs. Fortunately, by using Docker volumes, you can share data among containers, and syslog-ng can collect these logs as well.

Before you begin

In this blog, I detail methods of collecting logs from other containers that are not covered by Docker’s own log collecting method (collecting everything sent to the stdout of a container as logs). For other use cases, read my previous blogs:

You should be familiar with how Docker handles volumes. In my examples, I use the legacy “-v” syntax and mostly use bind mounts. This syntax works with older Docker releases as well. In most cases, it should not be difficult to convert these to the “--mount” syntax and use volumes instead. If these concepts are new to you, read the Docker storage documentation at https://docs.docker.com/engine/admin/volumes/.

Reading logs from files

This time, you will read log files from other containers using Docker volumes. There will be a container running syslog-ng and two other containers sending log messages. This example is just a laboratory experiment, but with minimal modifications you should be able to use it to collect messages from web servers with many virtual hosts logging to separate files.

You will need four terminal windows:

  • Terminal 1 and Terminal 2: In the first two, you will start up Alpine Linux in interactive mode. I use Alpine Linux Docker images in my examples, as Alpine is the quickest to download and start due to its extremely small size. You can use your own favorite Linux distribution with minor modifications to the command line if you really want to.
  • Terminal 3: In the third window you start up syslog-ng.
  • Terminal 4: In the fourth one you can check your logs.

Preparations

First, start up two containers in two separate terminal windows. You will use them later on to create some log messages.

Steps:

Step 1.

Go to Terminal 1 and enter the following command:

docker run -ti -v container1:/var/log alpine /bin/ash

This command downloads an Alpine Linux image and starts it in interactive mode due to the “-ti” option. Alpine uses “ash” as its shell, so this example uses it as the command to start. The most important part of the command line is “-v container1:/var/log”. It automatically creates a Docker volume called “container1” (if it does not exist yet) and makes it available in the container under the /var/log directory.

Step 2.

Go to Terminal 2 and enter the following command:

docker run -ti -v container2:/var/log alpine /bin/ash

Step 3.

Go to Terminal 3. In this window you will prepare your syslog-ng container. Here, you will use bind mounts as this way files are easier to edit and check from the host machine.

Step 4.

Create a directory structure under the /data/syslog-ng directory for the configuration and logs.

mkdir -p /data/syslog-ng/conf /data/syslog-ng/logs

Step 5.

Create a new syslog-ng configuration file and save it as /data/syslog-ng/conf/fileread.conf with the following content:

@version: 3.11

source s_internal {
  internal();
};

destination d_int {
  file("/var/log/int");
};

log {source(s_internal); destination(d_int); };

source s_wild { wildcard-file(
    base-dir("/var/log/common")
    filename-pattern("*.log")
    recursive(yes)
    follow-freq(1)
  );
};

destination d_fromwild {
  file("/var/log/fromwild"
    template("$(format_json --scope rfc5424 --scope nv_pairs)\n\n")
  );
};
log {source(s_wild); destination(d_fromwild);};

The first three lines ensure that the internal messages of syslog-ng are saved to a file. The “s_wild” source defines a wildcard file source. You will see later on that log directories from the other containers are mapped to sub-directories under the /var/log/common directory of the syslog-ng container. Files matching the “*.log” pattern in any of the sub-directories under /var/log/common are read by syslog-ng recursively. The “d_fromwild” destination saves logs to a file with JSON formatting, so you will be able to see the file names in the output as well.

Step 7.

Finally, start syslog-ng using the following command line:

docker run -ti -v container1:/var/log/common/c1 -v container2:/var/log/common/c2 -v /data/syslog-ng/conf/fileread.conf:/etc/syslog-ng/syslog-ng.conf -v /data/syslog-ng/logs/:/var/log --name sng_fileread balabit/syslog-ng:latest

This starts the latest Balabit syslog-ng image where the options mean:

  • “-ti” interactive mode, makes testing and debugging easier. Use “-d” in a production environment.
  • “-v container1:/var/log/common/c1 -v container2:/var/log/common/c2” maps the Docker volumes for log directories from the two other containers to sub-directories under /var/log/common directory.
  • “-v /data/syslog-ng/conf/fileread.conf:/etc/syslog-ng/syslog-ng.conf -v /data/syslog-ng/logs/:/var/log” maps the configuration file and log directories from the host file system.
  • “–name sng_fileread” names the container.
  • “balabit/syslog-ng:latest” starts (and downloads – if necessary) the latest Balabit syslog-ng image from the Docker hub.

Testing

This test will send a sample log message from the first and the second container to syslog-ng that you will be able to check in your fourth terminal window.

Steps:

Step 1.

Go to Terminal 1 and enter:

echo "from container1" >> /var/log/first.log

Step 2.

Go to Terminal 2 and enter:

echo "from container2" >> /var/log/second.log

Expected outcome:

Go to Terminal 4 and check if logs arrive. If you have configured everything correctly, you should see something like the following examples:

# tail -f /data/syslog-ng/logs/fromwild
{"SOURCE":"s_wild","PROGRAM":"from","PRIORITY":"notice","MESSAGE":"container1","LEGACY_MSGHDR":"from ","HOST_FROM":"525628e0a8e5","HOST":"525628e0a8e5","FILE_NAME":"/var/log/common/c1/first.log","FACILITY":"user","DATE":"Sep  1 08:04:51"}

{"SOURCE":"s_wild","PROGRAM":"from","PRIORITY":"notice","MESSAGE":"container2","LEGACY_MSGHDR":"from ","HOST_FROM":"525628e0a8e5","HOST":"525628e0a8e5","FILE_NAME":"/var/log/common/c2/second.log","FACILITY":"user","DATE":"Sep  1 08:05:14"}

Reading logs from pipes

If you want to avoid dealing with log files, allocating disk space, rotating them, and so on, you can often use pipes instead. For simplicity you will reuse the previous test environment.

Preparation

You will need the four terminal windows that you have prepared in the Reading logs from files section.

Steps:

Step 1.

Go to Terminal 3. Append the following lines to /data/syslog-ng/conf/fileread.conf:

source s_pipe { pipe("/var/log/common/c1/pipe");};
destination d_frompipe {file("/var/log/frompipe");};
log {source(s_pipe); destination(d_frompipe);};

The pipe source automatically creates the pipe if it does not already exist. In this case, the pipe will also show up in the /var/log directory of the first container.

Step 2.

Restart your syslog-ng container:

Step 2/a.

To stop the container, press Ctrl-C.

Step 2/b.

To start the container again, enter:

docker start -a sng_fileread

Note that this command will only work if you have used the previous test environment from the Reading logs from files section. If you have named the container something other than “sng_fileread” (in Step 7 in Preparations), make sure to use that name instead.

This starts up the container named “sng_fileread” again. The “-a” option makes sure that it runs in the foreground where you can immediately see any error messages from syslog-ng.

Testing

This test will send data into the pipe and check whether it arrives.

Steps:

Step 1.

Go to Terminal 1.

Step 2.

To make sure that the pipe is there, enter

ls -l /var/log

It will display something like the following:

total 4
-rw-r--r--    1 root     root            16 Sep  1 08:04 first.log
prw-------    1 root     root             0 Sep  1 09:21 pipe

Step 3.

Send some data into the pipe:

echo testing the pipe > /var/log/pipe

Expected outcome:

You should see the log message in /data/syslog-ng/logs/frompipe:

[root@localhost ~]# tail /data/syslog-ng/logs/frompipe 
Sep  1 09:57:04 650e03ee2170 testing the pipe
[root@localhost ~]#

What is next?

In this blog, I have shown you the basics of using Docker volumes to share logs among containers. I kept the examples simple and focused on testing instead of production usage. In real life, you will most likely combine what you learned here with the possibilities detailed in my previous two Docker blogs:

As usual, I omitted many details to keep my blog at a reasonable length. Here I list a few resources worth reading if you want to learn more or if you get stuck along the way:

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or you can even chat with us. For a long list of possibilities, check our contact page at https://syslog-ng.org/contact-us/. On Twitter, I am available as @PCzanik.

Cost of cybercrime – Analysis of Maersk’s case

(Originally published Tue, 05 Sep 2017)
It is very rare that we hear exact numbers from companies that were victims of a cyberattack. Although the Ponemon Institute publishes an annual research report on this topic that gives insight from a global perspective, the data is aggregated, so it doesn’t provide details of individual cases. That is why the quarterly report of A.P. Moller – Maersk is an extraordinary read for security professionals. Just to recap, A.P. Moller – Maersk was one of the major high-profile victims of the NotPetya malware at the end of June 2017. According to a Splash247.com report at the time,

in the two days since the Maersk Group was hit by the Petya ransomware attack, operations at many of its sites across the globe have returned to manual.

As the company’s press release states:

“in the last week of the quarter we were hit by a cyber-attack, which mainly impacted Maersk Line, APM Terminals and Damco. Business volumes were negatively affected for a couple of weeks in July and as a consequence, our Q3 results will be impacted. We expect the cyber-attack will impact results negatively by USD 200-300m.”

That is approximately 1% of the yearly global revenue of the Danish shipping behemoth.

Average cost of cybercrime

As it turns out from the Ponemon research, US organizations have the highest average cost of cybercrime ($17.36 million), and Australia has the lowest ($4.30 million). In the Maersk case, the numbers are 10 times higher. Since Maersk is number 558 on the Forbes Global 2000 list, we can be sure that there are many more companies who have suffered, are suffering, or will suffer the same or even higher losses due to cybercriminals, not to mention the thousands of smaller companies who may have suffered losses in line with the Ponemon average. Therefore, we can conclude that cyberattacks are even more costly than stated in the report.

Avoid losses

There are various ways to avoid these losses. First of all, cybersecurity should be a priority for all companies. There are no verticals or companies whose daily operations do not rely on IT, but there are verticals and companies who don’t care about IT security, because they are unregulated or simply follow the “nothing has happened yet” principle. We have to warn them that a whole industry’s operations can be upended by cyberattacks, as the shipping industry experienced in the summer of 2017. Besides the Maersk case, HMS Queen Elizabeth is running the outdated Windows XP and is theoretically exposed to exploits, and based on a BBC report, some crucial nautical communication systems, such as Ecdis and VSat, also have vulnerabilities. Moreover, when two modern, highly equipped US Navy ships collide with other vessels in the span of three months (4 cases in total this year), a cyberattack is one of the first things that occurs to experts. We don’t know who will be the next victim, but don’t be surprised if a new industry joins the list of compromised victims.

Key factors to reduce the cost of cybercrime

Among others, Ponemon highlights some key technical factors of successful companies that are essential to reducing the cost of cybercrime (excerpt):

  • Faster detection and recovery. To reduce the time to determine the root cause of the attack and control the costs associated with a lengthy time to detect and contain the attack, these organizations are increasing their investment in technologies to help facilitate the detection process.
  • Reducing third-party risk. These organizations are able to reduce the risk of taking on a significant new supplier or partner by conducting thorough audits and assessments of the third party’s data protection practices.
  • Addressing insider threats. A possible negative consequence of reorganization or acquisition of a new company can be disgruntled or negligent employees. These organizations ensure processes and technologies are in place to manage end user access to sensitive information. Further, there are training and awareness programs in place to address risks to sensitive data caused by changes in organizational structure and new communication channels.
  • Optimizing SIEM. These companies deploy advanced security information and event management (SIEM) with features such as the ability to monitor and correlate events in real-time to detect critical threats and detect unknown threats through user behavior analytics.

 

In Maersk’s case, NotPetya was the main source of financial loss. Our friends at Scademy have published an extensive list of ways NotPetya could have been eliminated. One of their pieces of advice is to “restrict the local administration access to privileged users; avoid giving each of your users’ local admin access to all machines unless necessary to protect against the PsExec vector”.

 

Solution

We at Balabit are working on products that can successfully reduce the financial losses caused by cyber incidents and truly support the efforts mentioned above, especially around the privileged user problem. Balabit Privileged Access Management (PAM) is designed primarily to support rapid incident investigation and reduce detection and recovery time. Here you can find how to accelerate your incident response with privileged access management. Balabit Privileged Session Management, a module of our PAM solution, is an efficient way to reduce third-party risks. Here you can find some tips for managing third-party system administrators. Together with the Balabit Privileged Account Analytics module, which specializes in analyzing privileged user behavior, it is also a good option for combating insider threats, as described in our essential guide to privileged user monitoring. Moreover, the Balabit Log Management product line can help you build an efficient log management infrastructure, as you can read in our log management essentials report.

The post Cost of cybercrime – Analysis of Maersk’s case appeared first on Balabit Blog.

Creating time lapse videos from log messages using OpenShot https://www.balabit.com/blog/creating-time-lapse-videos-log-messages-using-openshot/ https://www.balabit.com/blog/creating-time-lapse-videos-log-messages-using-openshot/#respond Wed, 30 Aug 2017 11:27:51 +0000 http://www.balabit.com/blog/?p=2438
You can create your own time lapse videos from log messages. It is not rocket science, and it is possible using a purely open source tool chain. In my previous blog, I explained how you can create a heat map from IP addresses in your log messages using syslog-ng, GeoIP, Elasticsearch and Kibana. Here we go a few steps further: configuring Kibana, taking regular screenshots, and turning them into a video using OpenShot.

Before you begin

Unless you are only interested in learning how to create a time lapse using OpenShot, you should start by reading my previous blog about creating a heat map. It explains everything up to the point where you have your first heat map compiled from the geolocations of IP addresses on your screen: that is, you have a log source, you have parsed logs, sent them to Elasticsearch and are now displaying them in Kibana.

Creating a time lapse video requires installing a few more applications on your machine:

  • a screenshot application: I used "gnome-screenshot", which comes as part of GNOME
  • a web browser: I used Firefox
  • an application to rename screenshots: I used “pyRenamer”
  • OpenShot to turn screenshots into a time lapse video

You will also need to turn off any screensavers on the machine where you capture the screenshots, otherwise you will be taking screenshots of a black screen. 🙂 Trust me, it is not that funny if you discover it only after a few hours…

I did everything in a virtual machine so my laptop was not blocked from use while the screenshots were being created.

Configuring Kibana

There are many ways you can show information in a time lapse video:

  • You can create a cumulative map, starting with an empty map and showing all connection attempts right from the beginning till the end. The video will start with a few dots at the beginning and will end with larger colored areas, showing a summary of dangerous (more active) areas. In this case, you configure Kibana to show a whole day or week, so all addresses are displayed on the map.
  • You can also rely on the rolling average of connection attempts. In this case, you configure Kibana to display only the last few minutes or hours on screen. The configured time interval will greatly influence the outcome:
    • If you configure a shorter time interval, you will only see some quickly disappearing dots on the screen.
    • With a longer interval, there is a chance for data to accumulate, having a similar effect as the cumulative map but with a new dimension: continuous change is also shown.

Another question is how often to take a screenshot. For my time lapses, I configured one screenshot a minute. For a busy network, one minute might be too long; for a quiet network, it could be too short. Whatever interval you choose, use it consistently while taking screenshots.

Both of these settings (Time Range and Auto-refresh) can be configured when you click the clock icon in the upper right-hand corner of Kibana:

Creating screenshots

Once you have configured Kibana, it is time to start creating screenshots. Before doing that, however, there is one more step: maximize the browser window and make the browser full screen. This is not strictly necessary, but this way you don’t have to post-process the images to remove possibly sensitive data like your bookmarks.

My desktop environment, GNOME, has a bundled screenshot application: gnome-screenshot. If you use another desktop, you might need to install another application, like “screenshot-tool”. The only important feature is that the application should work from the command line.

Use this command line from a terminal window running on the same desktop:

while true ; do sleep 60 ; gnome-screenshot -B ; done

Where:

  • while true; do starts an endless loop, which you can break using the Ctrl-C keyboard combination.
  • sleep 60 means that you start with a 60-second sleep period, so you have time to switch back to the browser window before screenshots are taken. If you configured a refresh rate other than 1 minute in Kibana, adjust the value here (in seconds).
  • gnome-screenshot is the name of the screenshot application you are using on the command line.
  • -B is used for two reasons: it removes the window borders from the gnome-screenshot capture and, even more importantly, it makes the application run without bringing up its GUI.
  • done marks the end of the loop.

All you need now is patience. I was collecting screenshots for a bit more than half a day. For a first experiment, an hour is enough (but be aware that this results in a 2-second time lapse video if you leave the default “30 frames a second” setting untouched). Once you have enough screenshots, switch to the terminal window and terminate the while loop using Ctrl-C.
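The arithmetic in the parenthetical above generalizes: the video length equals the number of screenshots divided by the frame rate. A quick sketch for the two cases mentioned (one screenshot a minute, default 30 fps):

```shell
# Video length in seconds = number of screenshots / frames per second.
awk 'BEGIN { printf "1 hour     -> %.0f s\n",  60 / 30 }'   # prints "1 hour     -> 2 s"
awk 'BEGIN { printf "half a day -> %.0f s\n", 720 / 30 }'   # prints "half a day -> 24 s"
```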

Renaming screenshots

Depending on your screenshot application, image files are saved to different locations with different names. In the case of gnome-screenshot, files are saved under the “Pictures” directory in your home directory, with names that include the date and time the image was created, for example, “Pictures/Screenshot from 2017-08-26 09-24-08.png”. Unfortunately, OpenShot does not recognize image sequences named this way. You need to rename the image files so that they have a sequence number in their name, starting with zero.

There are many tools available for mass renaming files. My choice was “pyRenamer”, which lets you rename the files from a GUI:

  1. Switch to the directory containing the images.
  2. Select the files.
  3. Set the “Original file name pattern” to “{X}.png”.
  4. Set the “Renamed file name pattern” to “map{num4}.png”.
  5. Click “Preview”.
  6. Click “Rename”.
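If you prefer to stay on the command line instead of a GUI renamer, the same renaming can be done with a short shell loop. This is a sketch, not part of pyRenamer: the renumber function name and the demo directory are illustrative. Lexical glob order matches chronological order here because gnome-screenshot embeds the date and time in the file names.

```shell
# Rename timestamped gnome-screenshot files to the zero-based numbered
# sequence OpenShot expects (map0000.png, map0001.png, ...).
renumber() {
    dir=$1
    i=0
    for f in "$dir"/Screenshot\ from\ *.png; do
        [ -e "$f" ] || continue          # glob matched nothing
        mv -- "$f" "$dir/$(printf 'map%04d.png' "$i")"
        i=$((i + 1))
    done
    echo "$i"                            # number of files renamed
}

# Demo on a throwaway directory with two fake screenshots:
demo=$(mktemp -d)
touch "$demo/Screenshot from 2017-08-26 09-24-08.png" \
      "$demo/Screenshot from 2017-08-26 09-25-08.png"
renumber "$demo"    # prints "2"; files are now map0000.png and map0001.png
```

To use it on real screenshots, call renumber on your Pictures directory instead of the demo directory.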

Converting screenshots into time lapse video using OpenShot

The final step is to convert the screenshot files into a time lapse video. Using OpenShot might be overkill for this task, as there are many command line tools that can do the job. On the other hand, OpenShot hides away their complexity and has many additional features, like creating title screens (not covered here), which can come in handy.

When you start OpenShot, it opens an “Untitled Project” with no files and an empty timeline. Here I describe only the minimal steps required to create a time lapse video. Check the OpenShot documentation if you want to add some sound or a title / end screen.

  1. Click “File” menu > “Import files”, search for the first screenshot in the file dialog, and click “Open”.
  2. Click “Yes” in the pop-up asking if you want to treat the image as an image sequence.
  3. The image sequence will show up as a thumbnail under “Project files”. If you want to change the frame rate (for example, to create a longer time lapse video from just a few screenshots), right-click the thumbnail, choose “File properties” and change the frame rate.
  4. Drag and drop the thumbnail to the timeline on any of the tracks. You should see a new thumbnail on the track and also a preview window.
  5. Click “Export video” in the “File” menu. Give the video a name. As we used 30 as the frame rate, choose a profile with 30 fps that suits your screen resolution.
  6. Finally, click “Export Video”, sit back and relax. Depending on your computer and the number of screenshots, your time lapse video should be ready in a matter of seconds or minutes.

What is next

Heat maps and time lapse videos are extremely powerful tools when it comes to visualizing large amounts of raw data. They can be especially useful when you wish to highlight trends and potential focus areas that merit further attention.

As usual, I omitted many details to keep my blog at a reasonable length. Here I list a few resources worth reading if you want to learn more or if you get stuck along the way:

While my blogs focus on the open source edition (OSE) of syslog-ng, you can use the latest release of syslog-ng Premium Edition as well to parse log messages and add geographical information.

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or you can even chat with us. For a long list of possibilities, check our contact page at https://syslog-ng.org/contact-us/. On Twitter, I am available as @PCzanik.

The post Creating time lapse videos from log messages using OpenShot appeared first on Balabit Blog.
