Sending logs to Splunk through HTTP

For quite some time, Splunk has recommended collecting syslog messages with syslog-ng, saving them to files, and sending them to Splunk using forwarders. Unless you have a very high message rate, the HTTP destination of syslog-ng can greatly simplify this logging architecture. Instead of writing messages to files and reading them with a forwarder, syslog-ng can forward messages to the Splunk HTTP Event Collector (HEC) directly, using HTTP or HTTPS connections. And if you parse messages using syslog-ng, you can send the resulting name-value pairs to Splunk in JSON format and search them instantly.

Before you begin

For my tests, I used the latest available releases of Splunk and syslog-ng running on CentOS 7. I used Splunk 7.0.1 and syslog-ng 3.13.1. You do not have to run the latest and greatest, as HEC was already available in Splunk 6.x and the HTTP destination is available in syslog-ng from version 3.7, with encryption (HTTPS) added in 3.10.

Note that the HTTP destination is usually not part of the core syslog-ng package. On CentOS, it is in a sub-package called syslog-ng-http. The name may vary in other distributions.

For simplicity, I use unencrypted connections, but unless syslog-ng and Splunk are running on the same machine, you should enable SSL.

Enable HEC in Splunk

In order to receive messages over HTTP, you need to enable the HTTP Event Collector in Splunk.

Log in as administrator, and choose Settings > Data inputs > HTTP Event Collector. In the upper right corner, click Global Settings. Here, click Enable and remove the check mark from Enable SSL. If you modify an already existing Splunk installation, make any further site-specific changes as necessary.

Sending syslog messages as is

The easiest way to use the HTTP destination is to send syslog messages as is, without any modifications. This may also be a requirement of some Splunk applications that expect unmodified syslog messages.

Setting up a token in Splunk

Splunk needs a token in log messages to figure out their format and intended destination. You can create one by going to the HEC page in Splunk and clicking the New Token button in the upper right corner.

First, give the token a name. On the next screen, select the “syslog” source type. At the end of the process, you will see a token similar to this: 94476318-fc2c-410b-a9a8-5796585ffc9e. Make a note of it, as you will need it in the syslog-ng configuration. Keep this tab open, as you will need it again later on.
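
Before configuring syslog-ng, you can verify that HEC accepts the token. The sketch below uses curl, and assumes Splunk runs on localhost with SSL disabled, using the example token from above:

curl http://localhost:8088/services/collector/raw \
    -H "Authorization: Splunk 94476318-fc2c-410b-a9a8-5796585ffc9e" \
    -d "hello from curl"

A response of {"text":"Success","code":0} indicates that the token works.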

Configuring syslog-ng

You can append the configuration snippet below to the end of your current syslog-ng configuration, or create a new .conf file under /etc/syslog-ng/conf.d/ if your distribution is set up to include that directory.

destination d_http1 {
    http(url("http://localhost:8088/services/collector/raw")
        method("POST")
        user_agent("syslog-ng")
        user("user")
        password("94476318-fc2c-410b-a9a8-5796585ffc9e")
        body("${ISODATE} ${HOST} ${MSGHDR}${MESSAGE}")
    );
};
log { source(s_sys); destination(d_http1); };

The above configuration assumes that Splunk is running on localhost and that you have a source called “s_sys” (the name of the local log source on CentOS). You should replace the value of the password() parameter with the token you just generated, and modify any other parameters as necessary.
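
If you keep SSL enabled in HEC instead, the destination needs matching TLS settings. Below is a minimal sketch, assuming syslog-ng 3.10 or later; the host name and CA file path are placeholders for your environment:

destination d_https {
    http(url("https://splunk.example.com:8088/services/collector/raw")
        method("POST")
        user_agent("syslog-ng")
        user("user")
        password("94476318-fc2c-410b-a9a8-5796585ffc9e")
        body("${ISODATE} ${HOST} ${MSGHDR}${MESSAGE}")
        # placeholder CA file: the certificate that signed Splunk's server certificate
        tls(peer-verify(yes) ca-file("/etc/syslog-ng/splunk-ca.pem"))
    );
};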

Testing

Once you have restarted syslog-ng, you are ready for testing. Enter the following command in a shell running on the same machine as syslog-ng:

logger This is a test

If you kept the token setup window open, you can now select the Search & Report button there, and it will bring you right to the search interface, narrowing your searches to the new token. Otherwise, start the search application and search for your test message.

Sending parsed messages in JSON format

The syslog-ng application can parse log messages using the built-in parsers. The parsers help you to locate interesting information in log messages and create name-value pairs from them. You can use the name-value pairs to filter log messages and/or to make data available for searching and dashboards.

You can forward parsed values to Splunk HEC in JSON format. This enables you to do more precise searches: instead of trying to locate information in free-form text, you can search the extracted name-value pairs.

Setting up a token in Splunk

Splunk needs a token in log messages to figure out their format and intended destination. You can create one by going to the HEC page in Splunk and clicking the New Token button in the upper right corner.

First, give the token a name. On the next screen, select the “_JSON” source type. At the end of the process, you will see a token similar to this: 1d5f30bb-d9ed-4933-9430-0cd17e9857b6. Make a note of it, as you will need it in the syslog-ng configuration. Keep this tab open, as you will need it again later on.

Configuring syslog-ng

You can append the configuration snippet below to the end of your current syslog-ng configuration, or create a new .conf file under /etc/syslog-ng/conf.d/ if your distribution is set up to include that directory.

destination d_http2 {
    http(url("http://localhost:8088/services/collector/raw")
        method("POST")
        user_agent("syslog-ng")
        user("user")
        password("1d5f30bb-d9ed-4933-9430-0cd17e9857b6")
        body("$(format-json --scope all-nv-pairs)")
    );
};
parser p_patterndb {
  db-parser(
    file("/etc/syslog-ng/sshd.pdb")
  );
};
log {
  source(s_sys);
  parser(p_patterndb);
  destination(d_http2);
};

The above configuration assumes that Splunk is running on localhost and that you have a source called “s_sys” (the name of the local log source on CentOS). You should replace the value of the password() parameter with the token you just generated, and modify any other parameters as necessary.

Compared to the previous configuration, you can notice two changes here:

  • The body() template uses $(format-json --scope all-nv-pairs), so all name-value pairs are sent as a JSON object instead of a plain syslog line.
  • A patterndb parser (p_patterndb) is added to the log path, which creates name-value pairs from sshd messages based on the rules in /etc/syslog-ng/sshd.pdb (see the sketch below).
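
The sshd.pdb pattern database itself is not shown in this post. As a rough illustration only, such a file could look like the following sketch; the rule IDs, the message pattern, and the field names are assumptions modeled on the secevt and usracct fields shown later, not the actual database used in this test:

<patterndb version='4' pub_date='2017-12-07'>
  <ruleset name='sshd' id='a7f2452e-1111-2222-3333-444444444444'>
    <pattern>sshd</pattern>
    <rules>
      <!-- hypothetical rule matching "Failed password for USER from HOST port PORT ssh2" -->
      <rule provider='example' id='a7f2452e-1111-2222-3333-555555555555' class='violation'>
        <patterns>
          <pattern>Failed password for @ESTRING:usracct.username: @from @ESTRING:usracct.device: @port @NUMBER:usracct.port@ ssh2</pattern>
        </patterns>
        <values>
          <value name='secevt.verdict'>REJECT</value>
        </values>
      </rule>
    </rules>
  </ruleset>
</patterndb>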

Testing

Once you have restarted syslog-ng, you are ready for testing. Try to connect to the machine running syslog-ng using SSH a few times, and check the logs.

If you kept the token setup window open, you can now select the Search & Report button there, and it will bring you right to the search interface, narrowing your searches to the new token. Otherwise, start the search application and search for sshd messages.

You should see a similar JSON-formatted message on screen:

[Screenshot: the parsed sshd message displayed in Splunk as JSON-formatted name-value pairs]

At the top, you can see the original syslog fields. The “_classifier” fields are added by the patterndb parser, just like “secevt” and “usracct”. The fields coming from journald are not expanded here to save space, but they are practically the syslog fields repeated, plus a few additional fields.

If you want to search for all the rejected connections, search for:

sourcetype="_json" | spath "secevt.verdict" | search "secevt.verdict"=REJECT

You can learn more about the HTTP destination at https://www.balabit.com/sites/default/files/documents/syslog-ng-ose-latest-guides/en/syslog-ng-ose-guide-admin/html-single/index.html#configuring-destinations-http-nonjava

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or you can even chat with us. For a long list of possibilities, check our contact page at https://syslog-ng.org/contact-us/. On Twitter, I am available as @PCzanik.

Why is IT Security winning battles, but losing the war…?

When a child goes near something hot, a parent will warn them not to touch it. But of course, if the warnings aren’t heeded, the child may get burned. With repetition, the message eventually sinks in and behavior changes.

The message to patch has been repeated many times. There is a seemingly unending litany of examples where this has not happened.

Regulators are considering how best to ensure adequately maintained security; privileged access monitoring and protecting data in the cloud are also hot topics high on their agenda.

Of course, none of this is news to us. We have been living in this world of patch-Tuesday-compromise-Wednesday for some time, and yet many organizations are still not embracing patching. Certain infrastructure may never be patched on the grounds of stability. One CEO I met ranked their IT priorities in this order: Security, Reliability, and Features. If only all boards and execs did that.

The patching world 10 years ago

As I interact with people at events and dinners, I look around and see people that want to do the right thing. They know they need to patch. They know they need to implement Privileged Session Management. Yet ask those same people if they are happy with the level of patching within their organizations and they will either avoid the question or laugh at this apparent joke.

Think back to 10 years ago in the consumer patching world: unpatched home computers were getting compromised within minutes of being placed on an always-on broadband connection. They would be turned into unwitting participants in botnets, or have keyloggers installed to try and capture online banking credentials.

Fast forward to today, and those same consumer PCs are the most patched and up to date that they can be. So what happened? Patching became the default: updates are applied automatically every week, and virus definitions every day, without the user having to do anything other than the occasional reboot.

Over that same time period, corporate computers have not changed. If we leave aside the cloud for the moment, the way that corporate desktops get patched is the same as it was 10 years ago. Sure, the packaging and deployment systems may have evolved, but fundamentally no organization (that I know of, at least) defaults to installing all patches as soon as they come out.

When I ask people why they don’t just install all patches, they say “stability”. But if I go back to the insightful CEO and follow their order, stability or reliability would come after security. I believe that order is correct. However, I also believe you don’t have to shoot yourself in the foot just for security.

Default one tenth of your desktops to patch within a day or two, and the rest automatically over the next week. Within a week you would have all desktops patched. Obviously you would need to invest in automation, processes and people. But the cost is far less than having to clean up WannaCry or play whack-a-mole to remove something once it is inside a corporate network.

But what about the servers, I hear you cry? My simple answer: do the same. The two best defenses you can have for free are these: patch and reboot. Patch within days, reboot weekly.

Suggest this to your IT department and they will probably come up with a hundred reasons why not, or some edge-case systems that it wouldn’t work for. In one of my previous blogs I talked about ‘default to yes’. It helps them find the one way it could work, instead of focusing on the many reasons it won’t. If you can get them to a position where you don’t have to worry about patching on 80% of the estate, then their precious time can be spent focusing on the 20% that really needs it.

Conclusion – what the world holds for IT security

I started this by saying we are winning battles and not the war. We are managing internal IT wrong. We continue to do it how we always have, and we’ve failed to heed the lessons of others, so we continue to do the same thing and expect different results.

Ask the teams who test your patches when a patch last caused an issue. I don’t mean a major operating system upgrade or a major package upgrade, but a weekly or monthly security patch. When I asked this question, I was shocked at the answer. I was expecting the failure rate to be low, in the 5-10% region, but they couldn’t think of a single issue since 2012.

When you think about your internal IT, think about how the cloud can influence your behavior. Can you imagine a cloud vendor testing every one of their customers’ setups against the latest OS patch? No. They just patch it and give you the tools to manage. The more we decouple the platform from the product, the better and easier this will be.

For more information on winning the IT security war, download the “Privileged Identity Theft” whitepaper here.

Adrian has been working in Information Security for 16 of his 21 years in technology. He classes himself as a technologist who specializes in security. He is now CISO at London Stock Exchange Group, having just been HSBC’s CISO, where he was in charge of all IT security globally for two years. He has worked in many industries and companies, bringing his unique brand of security that puts the business first.

Prior to HSBC, Adrian was Executive in Residence at Accel Partners (London), helping and advising startups, and CISO at Skype for 5 years, where web scale and big data made it one of the most challenging environments to apply security to. Betfair, Man Group, Barclays Capital, BAA, and BA are just some of the other companies he has worked for across the high-tech and financial sectors. He holds a Masters in Information Security from the University of London.

Artificial Intelligence in cybersecurity: friend or foe?

There’s no doubting that artificial intelligence (AI) is on the rise. It’s already supporting businesses with their marketing strategies, being used in driverless cars and recommending movies for us to watch.

It’s also expected to grow even further in the coming years. According to Gartner, AI technologies will be found in almost all new software, products and services by 2020. And according to Constellation Research, the AI market will surpass $100 billion by 2025. Businesses will be making the most of AI to complement their data analysis, automate their processes and anticipate future trends.

But the question remains: can businesses adopt AI and remain protected from cyber threats?

Entry points and malware

The Internet of Things (IoT) means everyday objects are generating more traffic, collecting more data and opening up more entry points for attack than ever before. This, alongside more integrated networks, means today’s cybercriminals have a plethora of entry points for bringing down an organization.

The truth is that as businesses grow smarter with AI, so do their attackers. Already, malware can infiltrate a system, collect and transmit data, and remain undetected for days. But with AI, an attack has the ability to adapt and learn how to improve its effectiveness with every moment it goes unnoticed.

What AI means for cyber-security

It’s worth noting that AI refers to the broad concept of machines being able to mimic human cognitive functions. It can detect patterns, spot anomalies, classify data and group information. Machine learning, meanwhile, can be seen as an embodiment of AI – when machines are given enough data, they can use it to solve problems by themselves.

In an ideal world, AI and machine learning would be able to spot and shut down an attack before humans need to do anything. After all, they have the ability to detect anomalous behavior and deter security intrusions on a round-the-clock basis.

However, this isn’t always the case. Machine learning requires feedback when determining what is ‘good’ or ‘bad’. But often, malicious attacks can seem unthreatening from the outset and slip past AI’s algorithms. What’s more, AI and machine learning may spot deviations in patterns that are not actually attacks, leading to inefficiencies in deploying security resources. And because machine learning depends on data to learn, attacks against the AI itself become possible.

Humans matter

Given its flaws, AI should not be considered an adequate replacement for human surveillance. At least not in the immediate future. Every technology has limits. Even with AI, human knowledge will remain vital to understanding how to react to a threat, and the depth of the issue at hand.

A hybrid approach, where some processes are automated while the rest remain the responsibility of humans, is the most logical option. AI can share some of the burden of surveillance, while taking mundane chores out of human hands.

Tomorrow’s solution

The AI-paved path ahead is an efficient one. But CIOs need to ask the right questions to ensure they don’t get swept up in the AI hype. Any security solution claiming absolute protection should be treated with caution. Because while the potential for security to become more proactive than reactive is there, a dual approach is definitely needed.

Human expertise along with AI technology can achieve better results than either one alone.

For more information on how businesses can adopt AI technology, download our whitepaper.

Five biometrics that are easier to hack than you think

The benefits of using biometrics data as part of a wider security program are clear. Passwords can be difficult to remember, especially when you have to mentally retain multiple passwords for a growing number of digital accounts. But you’ll never forget your fingerprints or your voice. Logging in with something you are, rather than something you know, has distinct benefits for the end-user.

But that doesn’t mean that all biometric authentication measures are totally secure. In fact, many biometrics technologies are easier to hack than you think. Think your irises, fingerprints and human subtleties are unique and incorruptible? Think again.

Fingerprint recognition

Hackers have managed to use graphite powder, etching machines and wood glue to create fingerprint replicas good enough to fool scanners. Normally this would require access to something the target had touched, but not for much longer. Tsutomu Matsumoto, a researcher from Yokohama National University, managed to create a graphite mold from a picture of a latent fingerprint on a wine glass. It fooled scanners 80% of the time.

Iris scanning

The Chaos Computer Club, a hacking collective based in Berlin, managed to deceive iris-scanning technology using a dummy eye created from a photo print-out. A high-resolution image of an iris was wrapped around a contact lens to simulate the curvature of the eye. This means that anyone with a good-quality Twitter profile picture could be hacked.

Facial recognition

Researchers from the University of North Carolina created a system that builds digital models of people’s faces based on photos from Facebook. The models are rendered in 3D and then displayed using VR technology that simulates the motion and depth cues that facial recognition systems look for. The animation was convincing enough to bypass four out of the five systems tested.

Voice recognition

Criminals have been known to cold call targets and take voice samples from the call for hacking purposes. These samples can either be fed into a voice synthesizer to generate phrases that were never originally said, or hackers can try to get their victims to say the security phrases that would give them access to their accounts.

DNA sequencing

Even though DNA analysis is not widely used as a security measure, it’s interesting to know that it could potentially be used for nefarious purposes. Scientists at the University of Washington encoded malware into a genetic molecule that was then used to take control of the computer used to analyze it. While we are perhaps a long way off from DNA hacking becoming commonplace, it is a stark reminder that fraudsters are always coming up with new techniques.

Building a security ecosystem

The fact that many of these biometrics technologies can be hacked is troubling. Especially because, while you can reset a password or a PIN code, you cannot reset your retinas. Once biometric data is in the possession of hackers, there is always a risk it could be used to compromise personal or professional accounts.

One possible way to prevent such attacks is to move towards using behavioral biometrics such as gait recognition, keystroke dynamics or mouse movement analysis. These behaviors can be continuously monitored and verified without disturbing users, unlike physiological biometrics technology, which requires intrusive one-off authentication.

Read our blog on why behavior matters for further information.

Whichever biometrics technology is used, it is crucial that it forms a part of a multi-factor security infrastructure. Utilizing several verification measures in unison will give the largest possible chance to avoid hackers gaining access to sensitive information.


For more information about the pros and cons of various biometrics technologies, download our free whitepaper.

Application adapters and enterprise-wide message model for syslog-ng

Do you want to simplify parsing your log messages? Try the new “application adapter” and “enterprise-wide message model” frameworks in syslog-ng: you can automatically parse log messages and forward the results to another syslog-ng instance. Optionally, you can also include the original, raw message that you can forward unmodified to a SIEM system for further analysis.

Many organizations store logs without parsing or analyzing the messages, because they are only required to collect the logs. Doing anything further with logs is an effort low on their priority list – except when a crash or breach occurs. The new “application adapter” and “enterprise-wide message model” frameworks of syslog-ng make it easier for you to get started with message parsing, and allow you to forward the results. They provide:

  • A set of example adapters for sudo, iptables and other log messages that create name-value pairs from the important information in the messages.
  • A new syslog-ng() destination which forwards every name-value pair together with the log message.
  • A new default-network-drivers() source that listens on all standard syslog ports and automatically parses the incoming messages. The system() source was also changed to parse locally generated messages.
  • A new flag called “store-raw-message” that allows you to forward the original message unmodified, just as syslog-ng has received it.

With these changes, syslog-ng becomes capable of turning incoming unstructured syslog messages into a set of name-value pairs, making structured log processing and searching possible, and also making it easier to create dashboards (for example, in Kibana or Splunk).

While these features are a work in progress and might change considerably in future releases, we appreciate early testing and feedback. With the help of your comments, we can cover not just our internal use cases, but also take into account the needs of the wider community.

Before you begin

Note that both of these features are still a work in progress. Right now they are not available in the main source code, only in the https://github.com/balabit/syslog-ng/pull/1689 pull request, where you can read the comments and take a look at the code. For testing, you can also download RPM packages for openSUSE, SLES and RHEL (and compatibles) from my git snapshot repositories.

You should have at least two (virtual) machines available for testing: one acting as a client and another acting as a server.

Before showing you configurations for a test environment, let me introduce you to the major components of application adapters and the enterprise-wide message model in more depth.

Application adapters

Application adapters enable automatic parsing for a variety of log messages. Right now there are adapters for sudo, iptables, and some of the Cisco log formats. As a result of parsing, syslog-ng creates name-value pairs that you can use later on for filtering. You can also store them together with the message for easier querying. Syslog-ng adds a prefix to the names of parsed values: a dot and the application name. For example, for “sudo” logs the prefix is “.sudo.”.

This is the information forwarded about a sudo event. As you can see, the JSON template replaces the leading dot with an underscore:

{
    "_sudo": {
        "USER": "root",
        "TTY": "pts/0",
        "SUBJECT": "czanik",
        "PWD": "/home/czanik",
        "COMMAND": "/bin/ls /root"
    },
    "SOURCE": "s_dnd",
    "PROGRAM": "sudo",
    "PRIORITY": "notice",
    "PID": "8740",
    "MESSAGE": "  czanik : TTY=pts/0 ; PWD=/home/czanik ; USER=root ; COMMAND=/bin/ls /root",
    "ISODATE": "2017-11-23T06:05:57-05:00",
    "HOST_FROM": "localhost",
    "HOST": "localhost.localdomain",
    "FACILITY": "authpriv"
}

Before the regular syslog fields you can see values parsed from the message by syslog-ng under the “sudo” hierarchy.

After you have installed syslog-ng, you can find the application adapters bundled with syslog-ng under the “scl” directory, which is normally located at /usr/share/syslog-ng/include/scl in your filesystem. Taking one of these as an example, you can create such an adapter yourself for the logs of other applications. For example, here is the content of /usr/share/syslog-ng/include/scl/sudo/sudo.conf (minus the copyright notice at the beginning of the file):

block parser sudo-parser(prefix('.sudo.')) {
	kv-parser(prefix(`prefix`) pair-separator(';') extract-stray-words-into('0'));
        csv-parser(columns("`prefix`SUBJECT") template("$(list-head $0)") delimiters(' '));
};

application sudo[syslog] {
        filter { program("sudo" type(string)); };
        parser { sudo-parser(); };
};
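
Following the same pattern, you can sketch an adapter for your own application. The example below is purely illustrative: it assumes a hypothetical application called “myapp” that logs space-separated key=value pairs:

block parser myapp-parser(prefix('.myapp.')) {
    # assumes myapp emits messages like "user=joe action=login result=ok"
    kv-parser(prefix(`prefix`));
};

application myapp[syslog] {
    filter { program("myapp" type(string)); };
    parser { myapp-parser(); };
};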

Enterprise-wide message model

Until now, you could parse log messages or enrich them with external data on any component of your logging infrastructure that was running syslog-ng: a client, a relay, or the server. However, forwarding the parsed values, or any other external information, to the next component of the infrastructure was difficult and inconvenient, so parsing typically happened on the server. The new transport mechanism of syslog-ng (dubbed the enterprise-wide message model) allows you to deliver structured messages from the initial receiving syslog-ng component right up to the central log server, through any number of hops. It does not matter whether you parse the messages on the client, on a relay, or on the central server: the structured results will be available where you store the messages. Optionally, you can also forward the original raw message exactly as the first syslog-ng component in your infrastructure received it, which is important if you want to forward a message, for example, to a SIEM system. To make use of the enterprise-wide message model, you have to use the syslog-ng() destination on the sender side, and the default-network-drivers() source on the receiver side.

The syslog-ng() destination

The syslog-ng() destination uses a special format to forward messages and the name-value pairs related to the message to other syslog-ng instances. The program name in the message is set to @syslog-ng, and the rest of the information is embedded in JSON format.

The default-network-drivers() source

The default-network-drivers() source listens on all standard syslog-related ports and automatically parses messages using the available application adapters. It supports TCP and UDP connections using both the legacy and the new syslog protocol. Encrypted connections are also supported, but they need additional configuration. The source also automatically recognizes and parses messages sent using the syslog-ng() destination.
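
A minimal sketch of what that additional configuration for encryption could look like is shown below; the certificate and key paths are placeholders:

source s_dnd_tls {
  default-network-drivers(
    # placeholder server certificate and key for the TLS-enabled syslog port
    tls(key-file("/etc/syslog-ng/server.key") cert-file("/etc/syslog-ng/server.crt"))
  );
};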

Note that you might need to update your SELinux and firewall rules to be able to use this feature.

The store-raw-message flag

One simple use case for this mechanism is the transfer of the original raw message as received from the client. With the new “store-raw-message” flag on inputs, syslog-ng saves the original message as received from the client in a name-value pair called ${RAWMSG}. You can forward this raw message to another syslog-ng component using the syslog-ng() destination + default-network-drivers() source pair, all the way to your central syslog-ng server, which can forward it to a SIEM system in its original form, ensuring that the SIEM can process it.

Setting up a test environment

If you really want to, you can test the new features using a single instance of syslog-ng, but I recommend using two (virtual) machines for a more realistic test. My example configurations assume that you have two machines: a client and a server. They give you plenty of room for testing, so even if you do not have an Elasticsearch server ready, you can see how the features work.

For my tests, I installed syslog-ng on CentOS 7 from the RPM packages in my experimental git snapshot repository, and I simply disabled SELinux and firewalld. While this is OK for a test environment, you should never do that in production. Check my blog if you need help running syslog-ng with SELinux and firewalld enabled.
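
If you prefer to keep firewalld running instead, opening the standard syslog ports that default-network-drivers() listens on should be sufficient. A sketch, assuming the default zone:

firewall-cmd --permanent --add-port=514/udp
firewall-cmd --permanent --add-port=514/tcp
firewall-cmd --permanent --add-port=601/tcp
firewall-cmd --permanent --add-port=6514/tcp
firewall-cmd --reload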

Configuring the client

You can append the configuration snippet below to the end of your current syslog-ng configuration, or create a new .conf file under /etc/syslog-ng/conf.d/ if your distribution is set up to include that directory.

source s_net {
  tcp(ip("0.0.0.0") port("514") flags(store-raw-message));
};

destination d_kv {
  file("/var/log/messages-kv.log" template("$ISODATE $HOST $(format-welf --scope all-nv-pairs)\n") frac-digits(3));
};

log { source(s_sys); destination(d_kv); };

destination d_ewmm {
  syslog-ng(server("172.16.146.131"));
};

log { source(s_sys); source(s_net); destination(d_ewmm); };

In this configuration:

  • The s_net source collects syslog messages over a TCP port and also stores the original message to RAWMSG thanks to flags(store-raw-message)
  • The d_kv destination writes the parsed messages into a local file
  • The first log statement connects local system logs to the above destination. The CentOS package uses s_sys for local system logs, and as it utilizes the system() source, messages are parsed automatically.
  • The d_ewmm destination sends log messages to another syslog-ng instance specified in the server() parameter.
  • The second log statement connects local logs and the above network source to another syslog-ng instance using the enterprise-wide message model.

Configuring the server

You can append the configuration snippet below to the end of your current syslog-ng configuration, or create a new .conf file under /etc/syslog-ng/conf.d/ if your distribution is set up to include that directory.

# network source opening default syslog ports
source s_dnd {
  default-network-drivers();
};

# plain text file destination
destination d_fromnet {
  file("/var/log/fromnet");
};

# saving from network source to plain text file
log {source(s_dnd); destination(d_fromnet); };

# text file destination with all value pairs welf formatted
destination d_kv {
  file("/var/log/fromnet-kv.log" template("$ISODATE $HOST $(format-welf --scope all-nv-pairs)\n") frac-digits(3));
};

# saving from network source to text file together with value pairs
log { source(s_dnd); destination(d_kv); };

# Elasticsearch destination
destination d_elastic {
  elasticsearch2 (
    cluster("elasticsearch")
    client_mode("http")
    index("syslog-${YEAR}.${MONTH}.${DAY}")
    time-zone(UTC)
    type("syslog")
    flush-limit(1)
    server("172.16.146.131")
    template("$(format-json --rekey .* --shift 1 --scope rfc5424 --scope dot-nv-pairs --scope nv-pairs --exclude DATE --key ISODATE)")
    persist-name(elasticsearch-syslog)
  )
};

# template test
destination d_templatetest {
  file("/var/log/templatetest" 
    template("$(format-json --rekey .* --shift 1 --scope rfc5424 --scope dot-nv-pairs --scope nv-pairs --exclude DATE --key ISODATE)\n\n")
  );
};

# network destination sending original raw message
destination d_siem {
  tcp("172.16.146.130" port("514") template("${RAWMSG}\n"));
};

# saving from network source to Elasticsearch
log {
  source(s_dnd);
  destination(d_elastic);
  destination(d_templatetest);
#  destination(d_siem);
};

In this configuration:

  • The s_dnd source utilizes the new default-network-drivers source. Encryption does not work, as it is not configured here.
  • The d_fromnet destination is a regular plain text file destination followed by a log statement connecting it to the s_dnd source.
  • The d_kv destination includes all the value pairs welf formatted followed by a log statement connecting it to the s_dnd source.
  • The d_elastic destination sends the message together with all related value pairs to an Elasticsearch database. Note that the template includes “--rekey .* --shift 1”, as field names starting with an underscore are problematic in Elasticsearch. This option removes the leading dot from field names, so format-json does not replace it with an underscore.
  • The d_templatetest destination uses the same template as the Elasticsearch destination, except for the additional line feeds at the end. You can use it to fine-tune your template, or instead of the Elasticsearch destination if you do not have Elasticsearch in your environment.
  • The d_siem destination sends the original raw message to a TCP destination. Note the template: it uses the RAWMSG macro, which exists only if the store-raw-message flag is enabled.
  • Finally, there is a log statement which connects the s_dnd source both to Elasticsearch and the test destination.

Testing

Once everything is configured and syslog-ng is reloaded so that the configuration takes effect, you are ready for testing. We send two test messages. The first test is running sudo as a user and checking for traces in the logs. The other is sending a simple message to the network source on the client, so we can check raw message forwarding.

Note that if you use journald for logging, as for example CentOS 7 does, you will see tons of additional name-value pairs coming from the journald driver of syslog-ng.

Testing with sudo

Use sudo for something on the client machine and check the logs. In my example I executed “sudo ls /root” as user czanik.

The /var/log/secure file on the client and the /var/log/fromnet file on the server should contain a similar line:

Nov 24 07:03:31 localhost.localdomain sudo[12766]:   czanik : TTY=pts/0 ; PWD=/home/czanik ; USER=root ; COMMAND=/bin/ls /root

The /var/log/messages-kv.log file on the client and the /var/log/fromnet-kv.log on the server should contain a similar line. Note the field names starting with .sudo. and the many name-value pairs coming from journald.

2017-11-24T07:03:31.611-05:00 localhost.localdomain .journald.MESSAGE="  czanik : TTY=pts/0 ; PWD=/home/czanik ; USER=root ; COMMAND=/bin/ls /root" .journald.PRIORITY=5 .journald.SYSLOG_FACILITY=10 .journald.SYSLOG_IDENTIFIER=sudo .journald._AUDIT_LOGINUID=0 .journald._AUDIT_SESSION=2 .journald._BOOT_ID=7091940ba2d14a37ac1d652bb048a06f .journald._CAP_EFFECTIVE=1fffffffff .journald._CMDLINE="sudo ls /root" .journald._COMM=sudo .journald._EXE=/usr/bin/sudo .journald._GID=1000 .journald._HOSTNAME=localhost.localdomain .journald._MACHINE_ID=47cc66fe04ad4698a6bee3ef8d6ac86f .journald._PID=12766 .journald._SELINUX_CONTEXT=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 .journald._SOURCE_REALTIME_TIMESTAMP=1511525011608545 .journald._SYSTEMD_CGROUP=/user.slice/user-0.slice/session-2.scope .journald._SYSTEMD_OWNER_UID=0 .journald._SYSTEMD_SESSION=2 .journald._SYSTEMD_SLICE=user-0.slice .journald._SYSTEMD_UNIT=session-2.scope .journald._TRANSPORT=syslog .journald._UID=0 .sudo.COMMAND="/bin/ls /root" .sudo.PWD=/home/czanik .sudo.SUBJECT=czanik .sudo.TTY=pts/0 .sudo.USER=root HOST=localhost.localdomain HOST_FROM=localhost MESSAGE="  czanik : TTY=pts/0 ; PWD=/home/czanik ; USER=root ; COMMAND=/bin/ls /root" PID=12766 PROGRAM=sudo SOURCE=s_dnd

The /var/log/templatetest file should contain a similar line in JSON format:

{"sudo":{"USER":"root","TTY":"pts/0","SUBJECT":"czanik","PWD":"/home/czanik","COMMAND":"/bin/ls /root"},"journald":{"_UID":"0","_TRANSPORT":"syslog","_SYSTEMD_UNIT":"session-2.scope","_SYSTEMD_SLICE":"user-0.slice","_SYSTEMD_SESSION":"2","_SYSTEMD_OWNER_UID":"0","_SYSTEMD_CGROUP":"/user.slice/user-0.slice/session-2.scope","_SOURCE_REALTIME_TIMESTAMP":"1511525011608545","_SELINUX_CONTEXT":"unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023","_PID":"12766","_MACHINE_ID":"47cc66fe04ad4698a6bee3ef8d6ac86f","_HOSTNAME":"localhost.localdomain","_GID":"1000","_EXE":"/usr/bin/sudo","_COMM":"sudo","_CMDLINE":"sudo ls /root","_CAP_EFFECTIVE":"1fffffffff","_BOOT_ID":"7091940ba2d14a37ac1d652bb048a06f","_AUDIT_SESSION":"2","_AUDIT_LOGINUID":"0","SYSLOG_IDENTIFIER":"sudo","SYSLOG_FACILITY":"10","PRIORITY":"5","MESSAGE":"  czanik : TTY=pts/0 ; PWD=/home/czanik ; USER=root ; COMMAND=/bin/ls /root"},"SOURCE":"s_dnd","PROGRAM":"sudo","PRIORITY":"notice","PID":"12766","MESSAGE":"  czanik : TTY=pts/0 ; PWD=/home/czanik ; USER=root ; COMMAND=/bin/ls /root","ISODATE":"2017-11-24T07:03:31-05:00","HOST_FROM":"localhost","HOST":"localhost.localdomain","FACILITY":"authpriv"}

And finally, you should see the same data in Elasticsearch as well, if you enabled it.
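
You can also run a quick query from the command line. The sketch below assumes Elasticsearch listens on its default port on the server address used in the configuration above:

curl -s 'http://172.16.146.131:9200/syslog-*/_search?q=PROGRAM:sudo&size=1&pretty'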

Testing raw message forwarding

To test raw message forwarding you should send a test message to the network source on the client. Use a similar command line:

logger -T -n 127.0.0.1 -P 514 -i This is a test

Using the above configurations the message is sent directly to the server using the enterprise-wide message model.

The /var/log/fromnet file should contain a similar line:

Nov 24 07:35:36 127.0.0.1 root[13629]: This is a test

The /var/log/templatetest file is more interesting, as it also contains the name-value pairs, including RAWMSG. As you can see, the raw message is completely untouched and includes all fields of a syslog message, including the priority value between “<” and “>” at the beginning.

{"SOURCE":"s_dnd","RAWMSG":"<5>Nov 24 07:35:36 root[13629]: This is a test","PROGRAM":"root","PRIORITY":"notice","PID":"13629","MESSAGE":"This is a test","LEGACY_MSGHDR":"root[13629]: ","ISODATE":"2017-11-24T07:35:36-05:00","HOST_FROM":"127.0.0.1","HOST":"127.0.0.1","FACILITY":"kern"}

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or you can even chat with us. For a long list of possibilities, check our contact page at https://syslog-ng.org/contact-us/. On Twitter, I am available as @PCzanik.

What’s the difference between a ‘Malicious Threat’ and a ‘Careless Insider’?

We all fantasize about what we’ll do on our last day of work. But few of us will go as far as the Twitter customer support employee who used their last day on the job to deactivate Donald Trump’s account. As far as final working day pranks go, one that makes international headlines is hard to beat.

It’s a stark reminder that businesses often grant employees a huge amount of power and access to valuable information. Such access can easily be abused with malicious (or mischievous) intent. Or it can provide the conditions in which a careless employee can allow sensitive data to be compromised.

To truly protect themselves, businesses must understand the difference between Malicious Threats and Careless Insiders, and how to protect against both.

What is a Careless Insider?

Careless Insiders are employees that accidentally leak company data, either through some mistake on their part or because they have unknowingly had their accounts hacked.

With employees and third parties getting more and more access to company data, this is becoming a regular occurrence. For example, a lawyer for Wells Fargo accidentally leaked 1.4 gigabytes of confidential client data when sending documents for a defamation lawsuit. And prior to that, the personal information of 36,000 Boeing workers was leaked when an employee accidentally emailed a spreadsheet containing the data to his spouse.

People make mistakes. But when they have access to huge amounts of sensitive information, these mistakes can have potentially disastrous consequences.

And sometimes, as a result of poor security hygiene, employees can give hackers access to company data without even realizing it. Hackers can compromise employee accounts through social engineering, phishing, installing malware, or sometimes, simply guessing a weak password using information gathered from social media.

In these instances, fraudsters can then exploit the privileged access of the user to steal data. These privileged access accounts often sit far outside of the business itself – the infamous Target breach was initiated using the stolen credentials of a small HVAC supplier with access to Target’s systems.

What is a Malicious Threat?

Malicious Threats are employees that deliberately steal or leak company data for personal or financial gain.

Often, this involves employees making off with company data and selling it on the black market, such as when an employee of healthcare firm Bupa stole 108,000 customers’ private information and shared it with a third party. Sometimes the reasons are more personal. A disgruntled senior auditor at UK retailer Morrisons leaked the details of over 100,000 employees after he was cautioned for inappropriate use of the company’s mail room.

Sadly, the Malicious Threat trend is becoming increasingly prevalent. Recent research found that nearly a quarter (24%) of employees have intentionally misused company email accounts to leak confidential information, typically sharing it with a competitor or a new employer.

How to combat the threat

In the light of the growing danger from Malicious Threats and Careless Insiders, there are several tools businesses can use:

  • Password Management – Specially-designed software that controls access to privileged accounts, generates strong passwords, randomizes them and stores them in a password vault. This makes it more difficult for hackers to steal credentials from employees.
  • Privileged Session Management – Systems that restrict user activity to just the areas of the network they need to access, and provide audit trails of user activity in real-time. This limits the type of assets that can be accessed by hackers if they compromise an employee login.
  • User Behavior Analytics – Continuously monitors user behavior (such as the rhythm of keystrokes or mouse movement) to detect anomalies in user activity. This can help identify when a hacker is logged into an employee’s privileged account and exhibiting abnormal behavior.

Find out more about how to defend against Malicious Threats and Careless Insiders by reading our free whitepaper, Understanding Privileged Identity Theft.

Beyond passwords: how human behavior is the next step for security

Privileged users come with their own particular set of security challenges. The more access they have to a network, the greater the risk that they could compromise data and disclose confidential information. You just need to think back to Wikileaks or the Edward Snowden NSA leaks to grasp the extent of the damage that can be done when a privileged access user goes rogue.

But it’s not just privileged users within an organization that pose a threat. Contractors, suppliers and any third-party user that needs access to critical infrastructure all represent a significant risk. Not to mention the cybercriminals that are a constant danger if defenses aren’t up to scratch.

With a powerful set of credentials, anyone with privileged user access can bypass security controls and turn off monitoring systems – essentially breaching your defenses while going undetected. Plus, it only takes one account to be compromised for things to cascade into an enterprise-wide disaster.


Passwords just aren’t enough

To protect your network, a holistic approach to security is required. This means going beyond passwords and integrating more modern forms of security, such as contextual identity solutions.

Whereas basic privileged access monitoring often gives you an overview of a user’s actions, alerting you to anything that deviates from typical behavioral patterns – such as multiple login attempts – contextual monitoring provides something entirely different: it can give context to a user’s actions and intent.

Contextual monitoring tools do this by taking into account a user’s device, IP address, time of access and previous interactions to understand if the actions a user is taking are in line with standard behavior. And by using machine learning algorithms, they can help security teams to quickly identify compromised accounts or discover unauthorized account sharing.


Next generation security

Significantly, today’s advanced security tools can take this type of acute monitoring one step further. By monitoring, analyzing and understanding more nuanced user behavior, they give organizations an even greater chance of detecting and preventing security breaches, without having to disrupt the user experience. It’s all about combining the three pillars of authentication into one step.

Two factor authentication in the form of device verification and passwords are already a mainstay for many systems. In-depth behavioral analytics takes this even further by telling a system who the user is from a biological standpoint.

It’s all about understanding unique human movements, such as the way a user types on a keyboard, moves a mouse, holds a device, or even the way they hold a stylus or walk with a device. This kind of detailed biometric analysis can help security teams distinguish between what is usual behavior and what isn’t, as well as between human and automated activity, allowing them to implement the best course of action – whether that’s session termination or continued monitoring. All without disrupting normal privileged user workflows.

Of course, tracking micro-movements isn’t a failsafe form of defense, because insider actions, whether malicious or accidental, can still cause breaches. But any added security measures that strengthen defenses will put businesses in a better security position than simply relying on log management alone.

The way the tech landscape is evolving suggests that we’re moving towards a complex IT world in which everyone and everything is connected. To truly stay secure in this environment, firewalls and passwords simply won’t cut it anymore. Behavior is the new battleground.


Download our free whitepaper to learn more about Privileged Access Management and how, in the event of an incident, it can help you respond quickly and effectively.

Sending netdata metrics through syslog-ng to Elasticsearch

netdata is a system for distributed real-time performance and health monitoring. You can use syslog-ng to collect and filter data provided by netdata, and then send it to Elasticsearch for long-term storage and analysis. The aim is to send both metrics and logs to an Elasticsearch instance, and then access them via Kibana. You could also use Grafana for visualization, but that is not covered in this blog post.

I would like to thank Fabien Wernli and Bertrand Rigaud for their help in writing this HowTo.

Before you begin

This workflow uses two servers. Server A is the application server, where metrics and logs are collected and sent to Server B, which hosts the Elasticsearch and Kibana instances. I use CentOS 7 in my examples, but the steps should be fairly similar on other platforms, even if package management commands and repository names are different.

In the example command lines and configurations, servers will be referred to by the names “servera” and “serverb”. Replace them with their IP addresses or real host names to reflect your environment.

Installation of applications

First we install all necessary applications. Once all components are up and running, we will configure them to work nicely together.

Installation of Server A

Server A runs netdata and syslog-ng. As netdata is quite a new product and develops quickly, it is not yet available in official distribution package repositories. There is a pre-built generic binary available, but installing from source is easy.

  1. Install a few development-related packages:
yum install autoconf automake curl gcc git libmnl-devel libuuid-devel lm_sensors make MySQL-python nc pkgconfig python python-psycopg2 PyYAML zlib-devel
  2. Clone the netdata git repository:
git clone https://github.com/firehol/netdata.git --depth=1
  3. Change to the netdata directory, and start the installation script as root:
cd netdata
./netdata-installer.sh
  4. When prompted, hit Enter to continue.

The installer script not only compiles netdata but also starts it and configures systemd to start netdata automatically. When installation completes, the installer script also prints information about how to access, stop, or uninstall the application.

By default, the web server of netdata listens on all interfaces on port 19999. You can test it at http://servera:19999/.
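
You can also check it from the command line using netdata’s REST API, assuming the default port:

curl -s http://servera:19999/api/v1/charts | head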

For other platforms or up-to-date instructions on future netdata versions, check https://github.com/firehol/netdata/wiki/Installation#1-prepare-your-system.

  5. Once netdata is up and running, install syslog-ng.

Enable EPEL and add a repository with the latest syslog-ng version, with Elasticsearch support enabled:

yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
cat <<EOF | tee /etc/yum.repos.d/czanik-syslog-ng312-epel-7.repo
[czanik-syslog-ng312]
name=Copr repo for syslog-ng312 owned by czanik
baseurl=https://copr-be.cloud.fedoraproject.org/results/czanik/syslog-ng312/epel-7-x86_64/
type=rpm-md
skip_if_unavailable=True
gpgcheck=1
gpgkey=https://copr-be.cloud.fedoraproject.org/results/czanik/syslog-ng312/pubkey.gpg
repo_gpgcheck=0
enabled=1
enabled_metadata=1
EOF

6. Install syslog-ng and Java modules:

yum install syslog-ng syslog-ng-java

7. Make sure that libjvm.so is available to syslog-ng (for additional information, check: https://www.balabit.com/blog/troubleshooting-java-support-syslog-ng/):

echo /usr/lib/jvm/jre/lib/amd64/server > /etc/ld.so.conf.d/java.conf
ldconfig

8. Disable rsyslog, then enable and start syslog-ng:

systemctl stop rsyslog
systemctl enable syslog-ng
systemctl start syslog-ng

9. Once you are ready, you can check whether syslog-ng is up and running by sending it a log message and reading it back:

logger bla
tail -1 /var/log/messages

You should see a similar line on screen:

Nov 14 06:07:24 localhost.localdomain root[39494]: bla

Installation of Server B

Server B runs Elasticsearch and Kibana.

  1. Install JRE:
yum install java-1.8.0-openjdk.x86_64
  2. Add the Elasticsearch repository:
cat <<EOF | tee /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
  3. Install, enable, and start Elasticsearch:
yum install elasticsearch
systemctl enable elasticsearch.service
systemctl start elasticsearch.service
  4. Install, enable, and start Kibana:
yum install kibana
systemctl enable kibana.service
systemctl start kibana.service
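
Before moving on, you can check that both services are up. Elasticsearch should answer on port 9200 with a short JSON document containing the cluster name and version (it may take half a minute to start), and Kibana should be listening on port 5601:

curl http://localhost:9200/
ss -ltn | grep 5601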

Configuring applications

Once we have installed all the applications, we can start configuring them.

Configuring Server A

  1. First, configure syslog-ng. Replace its original configuration in /etc/syslog-ng/syslog-ng.conf with the following configuration:
@version:3.12
@include "scl.conf"

source s_system {
    system();
    internal();
};

source s_netdata {
  network(
    transport(tcp)
    port(1234)
    flags(no-parse)
    tags(netdata)
  );
};

parser p_netdata {
  json-parser(
    prefix("netdata.")
  );
};

filter f_netdata {
  match("users" value("netdata.chart_type"));
};

destination d_elastic {
  elasticsearch2 (
    cluster("elasticsearch")
    client_mode("http")
    index("syslog-${YEAR}.${MONTH}")
    time-zone(UTC)
    type("syslog")
    flush-limit(1)
    server("serverB")
    template("$(format-json --scope rfc5424 --scope nv-pairs --exclude DATE --key ISODATE)")
    persist-name(elasticsearch-syslog)
  )
};

destination d_elastic_netdata {
  elasticsearch2 (
    cluster("syslog-ng")
    client_mode("http")
    index("netdata-${YEAR}.${MONTH}.${DAY}")
    time-zone(UTC)
    type("netdata")
    flush-limit(512)
    server("serverB")
    template("${MSG}")
    persist-name(elasticsearch-netdata)
  )
};

log {
  source(s_netdata);
  parser(p_netdata);
  filter(f_netdata);
  destination(d_elastic_netdata);
};

log {
    source(s_system);
    destination(d_elastic);
};

This configuration sends netdata metrics and also all syslog messages to Elasticsearch directly.
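
Before restarting syslog-ng, it is worth validating the new configuration. The following command only checks the syntax and exits without actually starting the service:

syslog-ng --syntax-only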

  2. If you want to collect some of the logs locally as well, keep the relevant parts of the original configuration, or write your own rules.
  3. Change the server("serverB") option in both destinations to the host name or IP address of your Elasticsearch server. The f_netdata filter shows one possible way of filtering netdata metrics before storing them in Elasticsearch. Adapt it to your environment.
  4. Next, configure netdata. Open its configuration file (/etc/netdata/netdata.conf), and replace the [backend] section with the following snippet:
[backend]
        enabled = yes
        type = json
        destination = localhost:1234
        data source = average
        prefix = netdata
        update every = 10
        buffer on failures = 10
        timeout ms = 20000
        send charts matching = *

The “send charts matching” setting here serves a similar role to the f_netdata filter in the syslog-ng configuration. You can use either of them, but syslog-ng provides more flexibility.
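
For reference, netdata sends each metric to port 1234 as a single JSON document per line. The record below is only an illustration (the exact set of fields may vary between netdata versions), but it shows where the netdata.chart_type name-value pair used by the f_netdata filter comes from:

{"prefix":"netdata","hostname":"servera","chart_id":"users.cpu","chart_family":"cpu","chart_context":"users.cpu","chart_type":"users","units":"percentage","id":"root","name":"root","value":0.25,"timestamp":1510646400}

If you want to see the real records on your system, stop syslog-ng temporarily and listen on the port yourself with nc -l 1234.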

  5. Finally, restart both netdata and syslog-ng so that the configurations take effect. Note that with the above configuration, logs no longer arrive in local files. You can check your logs once the Elasticsearch server is configured.

Configuring Server B

Elasticsearch controls how data is stored and indexed using index templates. The following two templates will ensure netdata and syslog data have the correct settings.

  1. First, save the two templates below to files on Server B:
  • netdata-template.json:
{
    "order" : 0,
    "template" : "netdata-*",
    "settings" : {
      "index" : {
        "query": {
          "default_field": "_all"
        },
        "number_of_shards" : "1",
        "number_of_replicas" : "0"
      }
    },
    "mappings" : {
      "netdata" : {
        "_source" : {
          "enabled" : true
        },
        "dynamic_templates": [
          {
            "string_fields": {
              "mapping": {
                "type": "keyword",
                "doc_values": true
              },
              "match_mapping_type": "string",
              "match": "*"
            }
          }
        ],
        "properties" : {
          "timestamp" : {
            "format" : "epoch_second",
            "type" : "date"
          },
          "value" : {
            "index" : true,
            "type" : "double",
            "doc_values" : true
          }
        }
      }
    },
    "aliases" : { }
}
  • syslog-template.json:
{
    "order": 0,
    "template": "syslog-*",
    "settings": {
      "index": {
        "query": {
          "default_field": "MESSAGE"
        },
        "number_of_shards": "1",
        "number_of_replicas": "0"
      }
    },
    "mappings": {
      "syslog": {
        "_source": {
          "enabled": true
        },
        "dynamic_templates": [
          {
            "string_fields": {
              "mapping": {
                "type": "keyword",
                "doc_values": true
              },
              "match_mapping_type": "string",
              "match": "*"
            }
          }
        ],
        "properties": {
          "MESSAGE": {
            "type": "text",
            "index": "true"
          }
        }
      }
    },
    "aliases": {}
}

2. Once you have saved them, you can use the REST API to push them to Elasticsearch:

curl -XPUT 0:9200/_template/netdata -d@netdata-template.json
curl -XPUT 0:9200/_template/syslog  -d@syslog-template.json
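
You can verify that Elasticsearch accepted the templates by reading them back:

curl 0:9200/_template/netdata?pretty
curl 0:9200/_template/syslog?pretty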


3. You can now edit your Elasticsearch configuration file and enable binding to an external interface so it can receive data from syslog-ng. Open /etc/elasticsearch/elasticsearch.yml and set the network.host parameter:

network.host:
  - [serverB_IP]
  - 127.0.0.1

Of course, replace [serverB_IP] with the actual IP address.

4. Restart Elasticsearch so the configuration takes effect.
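
After the restart, a quick check from Server A confirms that Elasticsearch is reachable over the network (as before, replace serverb with the real host name or IP address):

curl http://serverb:9200/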

5. Finally, edit your Kibana configuration (/etc/kibana/kibana.yml), and append the following few lines to the file:

server.port: 5601
server.host: "[serverB_IP]"
server.name: "A_GREAT_TITLE_FOR_MY_LOGS"
elasticsearch.url: "http://127.0.0.1:9200"

As usual, replace [serverB_IP] with the actual IP address.

6. Restart Kibana so the configuration takes effect.
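
Once Kibana is back up (this can take a minute), you can confirm that it responds on the external address; an HTTP request for the headers is enough:

curl -I http://serverb:5601/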

Testing

You should now be able to log in to Kibana on port 5601 of Server B. On first use, set up your index patterns (netdata-* and syslog-* in this setup), and then you are ready to query your logs. If it does not work, here is a list of possible problems:

  • “serverb” has not been rewritten to the proper IP address in configurations.
  • SELinux is running (for testing, “setenforce 0” is enough, but for production, make sure that SELinux is properly configured).
  • The firewall is blocking network traffic (see the example below).
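
If the firewall is the problem, on a stock CentOS 7 installation you can open the two ports used on Server B with firewalld. This is fine for testing; in production, restrict access to trusted sources only:

firewall-cmd --permanent --add-port=9200/tcp
firewall-cmd --permanent --add-port=5601/tcp
firewall-cmd --reload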

Further reading

I have given only minimal instructions here to get you started with netdata, syslog-ng, and Elasticsearch. You can learn more on their respective documentation pages.

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or you can even chat with us. For a long list of possibilities, check our contact page at https://syslog-ng.org/contact-us/. On Twitter, I am available as @PCzanik.

Don’t make these common password mistakes

Recent major cyber breaches highlight that there’s still much more to be done when it comes to educating employees about security best practices.

Take Deloitte, for example, who were hit with a cyberattack earlier this year. Their systems were compromised through a hacked administrator’s account that did not require multi-factor authentication. As a result, the hacker had unrestricted privileged access to internal files, revealing the emails, usernames, passwords, and personal details of Deloitte’s large blue-chip clients.

The lesson here is simple: you can never be too careful. Robust privileged access management solutions could have prevented this breach. But security can also come from simple steps, such as setting secure passwords.

In fact, all companies should be encouraging employees to set strong passwords. As an IT manager, you may have the know-how, but this doesn’t always trickle down to employees.

Here are some tips and tricks worth reminding others of when it comes to cyber-hygiene.

Size matters

While a password such as Secur!tTy123 may seem hard for humans to crack, it’s relatively easy for computers to guess eventually. The thing to remember is that the longer the password, the harder it is to crack. So, opt for something like a string of random words, such as ‘swan windmill heartbeat soccer Ryvita’, over a shorter combination of alphanumeric nonsense.

Spread out special characters

Most password fields require you to include upper and lower cases as well as numbers and symbols. This is all well and good, but most people tend to capitalize the first letter of the password and add a symbol or number at the end. Again, cracking tools can model this predictable behavior, making the additional special characters largely redundant.

Don’t force regular changes

Not too long ago, many regulators and standard organizations recommended regular password changes. But this is no longer the best course of action. Regular changes encourage risky behavior – using passwords that can be easily guessed, using predictable password strategies, or reusing the same password for multiple accounts. Instead, encourage long passwords and the use of a password manager.


Assume nothing is private

You may think you have your social media profiles on lockdown, but hackers can still find out the names of your family members or interests by digging around. Likewise, don’t save your passwords in a plain text file and assume no one will be able to find it. Pen and paper is harder to hack than a Word doc. And instead of writing your passwords down, consider writing the name of the website, your login and a clue that will jog your memory.



Of course, passwords alone won’t keep you totally protected. They should be used with multi-factor authentication and work alongside other security measures, such as privileged user access management solutions. These, combined with regular training, can lower the risk of a hack and stop your organization from becoming the next Equifax.



For more advice on protecting against privileged account hacks, download this whitepaper.

Social engineering: how hackers are using your open information

Tag. Like. Share. Post.

With social media, it’s now all too easy to live a part of your life online.

While this may be good news for your online friends and the tech companies who want to sell you things, it’s also great news for hackers. Because, when used without caution, social media has the means to give criminals the arsenal they need to substantially magnify the effectiveness of their attacks.


Viral clicking

Employees are an important first line of defense against hacking. But they’re also the most vulnerable. While most employees know not to open suspicious emails, seemingly innocent links on social media are another story. Whether it’s discount vouchers, surveys with prizes if forwarded along, or fake recruiters asking for personal details on LinkedIn – people are often more trusting when clicking on links that appear to come from a friend. But once a link is clicked, it can quickly lead to malware being installed in a browser, revealing a user’s location, type of device, and operating system: information that can be handy for launching future attacks.

Without proper privacy settings and vigilance, social media posts can be used to pursue socially engineered fraud, launch targeted phishing campaigns, or commit identity theft. And even when posts are set to private, it’s still possible to be compromised. For example, say a crook gets hold of your Head of Finance’s personal Facebook account by posing as a high school acquaintance. From there, they can see what nickname he or she goes by, and what interests he or she might have. The crook can use this information to guess their password and gain privileged access to other accounts.

Malicious attacks can also come in the form of bogus plugins. The Google Docs phishing scam in May this year fooled many Gmail users into handing over access to their email. And compromised third-party tools can lead to widespread damage. Earlier this year a number of prominent Twitter accounts, including Forbes and Amnesty International, were hacked to tweet Nazi-related messages after the Twitter analytics tool, Twitter Counter, was hacked.

There are also insight tools to consider. Today’s technology can scan an employee’s social media account to find out what their interests and tastes are. AI solutions can also scan an image to guess a person’s age and gender. And it doesn’t take a genius to figure out personal addresses through the electoral register and geotagged photos. Combined with the techniques above, these insights can be used to launch a highly personalized and sophisticated phishing scam.

Finally, there’s the old-school method of physical theft. By stealing someone’s phone, tablet or laptop, crooks have immediate access to social media profiles – unless adequate protections are put in place.


Why vigilance is the way forward

Large investments in password-centered online security tools won’t necessarily do the trick, especially if employees continue to click on dubious links. And banning social media altogether will cause more harm than good, given the likely cultural backlash and employees’ tendency to use social media anyway.

Going forward, the best defense is a hybrid one. Businesses must foster a security-conscious culture through continuous training and education. There must be a social media policy in place that ensures all checks and balances are made before any information is shared on the company’s social media. And employers must also look at making sure their security solutions have the means to scan and stop malicious attacks in real time.

Remember, it’s not just the big corporate enterprises or government bodies that are at risk. While they may be the ones we hear most about in the news, the fact is that businesses of all sizes can be affected. LinkedIn, Twitter and Facebook aren’t just harmless distractions. When used without discretion, they can lead to potentially disastrous results.

Businesses and individuals alike should be wary. Here are some tips everyone can use to protect themselves and their businesses.

 
  1. Update passwords regularly

Social media passwords should all be unique and routinely updated. Don’t stick these on a note. A password manager may help.

  2. Be careful when handing out credentials

Third-party sites should all be treated with caution. Regularly check what sites your social media is linked with.

  3. Exercise caution on public Wi-Fi

BYOD means sensitive information can be accessed by anyone, anywhere, unless the right precautions are in place.

  4. Check privacy settings

Birthdays, family members and education can easily be found on social media and used to bypass security check questions in password recovery. Keep these hidden.

  5. Be vigilant

Check what your company is posting and make sure it can’t be used to socially engineer employees. A social media policy can help.


Download our whitepaper on Privileged Identity Theft for more information on protecting against hackers online.
