The Bro Network Security Monitor is an open source network monitoring framework. In a nutshell, Bro monitors packet flows over a network, creates high-level “flow” events from them, and stores the events as single tab-separated lines in a log file. You can then parse these log files to mine for information about the traffic on the network you are monitoring. An excellent way to parse the Bro log files and visualize the data is the ELK stack. At the heart of ELK are Elasticsearch, Logstash, and Kibana. Logstash parses the Bro logs, Elasticsearch stores the parsed data, and Kibana provides a beautiful GUI for data mining and visualization.

If you already have a network tap installed (with optional bonded network interfaces), Bro installed, and the ELK stack installed on your system, all that’s left to do is create and deploy a configuration file that tells Logstash where to find the Bro logs, how to manipulate them, and where to put them (Elasticsearch).

A redditor on /r/netsec pointed out that the csv filter is much more efficient than the grok filter and pointed me to a git repo with some Logstash conf files for parsing Bro logs. I’ve since forked the repo and modified the files to better suit my needs, including fixing the tab separator delimiter, adding a geoip filter, and fixing a few bugs. A quick way to get the conf file(s) is to pull them directly from GitHub into Logstash’s conf.d directory, as shown in the following code block. Note that Logstash loads all the config files it finds in conf.d at startup.
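For example, assuming Logstash’s config directory is /etc/logstash/conf.d (the URL is the conn log conf file from the forked repo; the repo contains similar files for the other Bro logs):

```shell
# Pull a Bro conf file straight from GitHub into Logstash's conf.d directory
cd /etc/logstash/conf.d
sudo wget https://raw.githubusercontent.com/timmolter/logstash-dfir/master/conf_files/bro/bro-conn_log.conf
# repeat for any other bro-*.conf files in the repo that you need
```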

Note that starting from Logstash 2.x, the elasticsearch host configuration has changed. The error you will encounter looks something like this:

The fix is just to change host to hosts in the config files, which I've updated in the above *.conf files.
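Sketched for the output section, the change looks like this (localhost is a placeholder for your Elasticsearch host):

```
# Logstash 1.x (obsolete in 2.x and later):
output {
  elasticsearch { host => "localhost" }
}

# Logstash 2.x and later:
output {
  elasticsearch { hosts => ["localhost"] }
}
```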

Explanation

Let’s take a closer look at the file: https://raw.githubusercontent.com/timmolter/logstash-dfir/master/conf_files/bro/bro-conn_log.conf

  1. In the input section, we need to put all paths to the actual Bro log files on OUR system.
  2. In the output section at the end of the config file, we need to push the data to Elasticsearch: elasticsearch { hosts => ["localhost"] } (host on Logstash 1.x, as noted above).
  3. In the main filter section, a csv filter is assigned and configured for the bro log. You can hand write the csv filters if you want.
  4. The other filter sections do a few more manipulations to the data and are explained quite well in the comment sections.
  5. Starting with Elasticsearch 2.0, field names may no longer contain a . (dot) character. Since the Bro logs contain fields with dots in their names (e.g. id.orig_p), we need to use a filter to convert the dots to underscores. Otherwise you may see an error like: failed to put mappings on indices [[logstash-2016.05.02]], type [bro-conn_log] MapperParsingException[Field name [id.orig_h] cannot contain '.']. The mutate plugin is used to convert the field names containing dots to underscores with the rename command.
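Abridged, the overall shape of bro-conn_log.conf looks roughly like the sketch below. The path and column list are placeholders, and only a few fields are shown; see the actual file linked above for the complete version:

```
input {
  file {
    type => "bro-conn_log"
    path => "/usr/local/bro/logs/current/conn.log"   # adjust to your Bro log path
    start_position => "beginning"
  }
}

filter {
  if [type] == "bro-conn_log" {
    csv {
      # first few conn.log columns shown; the real conf file lists all of them
      columns => ["ts","uid","id.orig_h","id.orig_p","id.resp_h","id.resp_p"]
      separator => "	"    # a literal tab character, not "\t"
    }
    # Elasticsearch 2.0+ rejects dots in field names, so rename them
    mutate {
      rename => { "id.orig_h" => "id_orig_h"
                  "id.orig_p" => "id_orig_p" }
    }
  }
}

output {
  elasticsearch { hosts => ["localhost"] }
}
```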

logstash-filter-translate

The above logstash config uses a plugin called logstash-filter-translate. The following terminal commands show how to install the logstash-filter-translate plugin. For a more in-depth explanation of installing logstash plugins see How to Install Logstash Plugins for Version 1.5.
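Assuming Logstash is installed under /opt/logstash (adjust the path for your install), the plugin can be installed like this:

```shell
# Install the translate filter plugin using Logstash's bundled plugin tool
cd /opt/logstash
sudo bin/plugin install logstash-filter-translate
```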

Deploying

To check that the configuration(s) are valid without starting Logstash, run the following:
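A sketch, assuming Logstash lives in /opt/logstash and the conf file is in /etc/logstash/conf.d:

```shell
# --configtest parses the config and exits without processing any events
/opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/bro-conn_log.conf
```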

Test run in the console:
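To run Logstash in the foreground against a single conf file (same path assumptions as above), so you can watch it work:

```shell
# Runs in the foreground; stop it with Ctrl-C
/opt/logstash/bin/logstash -f /etc/logstash/conf.d/bro-conn_log.conf
```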

Restart Logstash and it will automatically pick up the new config file. It could take up to a minute before it actually starts pumping data.
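On a System V init system:

```shell
sudo service logstash restart
```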

Or, for a system with systemd:
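```shell
sudo systemctl restart logstash
```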

Debugging

For debugging, we can start Logstash with the --debug flag with the following command:
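Same path assumptions as before:

```shell
# Verbose logging; very noisy, so only use it while troubleshooting
/opt/logstash/bin/logstash --debug -f /etc/logstash/conf.d/bro-conn_log.conf
```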

In any of the config files, you can also change the output to push data to the console instead of to Elasticsearch by adding stdout {}.

codec => rubydebug can also be used for debugging; its output is pretty-printed and easier to read.
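Both variants, sketched as a replacement output section:

```
output {
  stdout {}
}

# or, pretty-printed with the rubydebug codec:
output {
  stdout { codec => rubydebug }
}
```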

And here are some extra commands controlling the logstash service:

System V
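```shell
sudo service logstash start
sudo service logstash stop
sudo service logstash restart
sudo service logstash status
```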

Systemd
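```shell
sudo systemctl start logstash
sudo systemctl stop logstash
sudo systemctl restart logstash
sudo systemctl status logstash
```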

If Logstash does not start, look in the following logs for any errors:
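Assuming the default log location of /var/log/logstash:

```shell
tail -n 50 /var/log/logstash/logstash.log
tail -n 50 /var/log/logstash/logstash.err
# on systemd systems the journal may hold additional startup errors:
sudo journalctl -u logstash -e
```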

To see which pid logstash has for killing it:
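```shell
# the [l] trick keeps the grep process itself out of the results
ps -ef | grep [l]ogstash
# or simply:
pgrep -f logstash
```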

To see what config parameters Logstash started up with, use the following command:

You will get something like the following:
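The full java command line shows up in the process listing. The output below is purely illustrative; the exact paths and flags depend on your install:

```shell
ps -ef | grep [l]ogstash
# illustrative output (abridged):
# logstash 12345 1 ... java ... logstash/runner.rb agent -f /etc/logstash/conf.d -l /var/log/logstash/logstash.log
```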

sincedb_path

The sincedb_path needs to be writeable by the logstash user. One way to do this is to set the sincedb_path to /var/tmp if your system has this writeable directory. If you are seeing error messages related to the sincedb_path, the first thing to check is the permissions on the configured path.
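A quick sketch of that permission check, assuming sincedb_path is set to /var/tmp as suggested above:

```shell
# Check whether the sincedb directory is writeable by the current user
SINCEDB_DIR=/var/tmp
if [ -w "$SINCEDB_DIR" ]; then
  echo "writeable"
else
  echo "not writeable"
fi
# when Logstash runs as a service, check as the logstash user instead:
#   sudo -u logstash test -w /var/tmp && echo writeable
```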

Related Resources

Installing Bro on Ubuntu: http://knowm.org/how-to-install-bro-network-security-monitor-on-ubuntu/
How to Create a Bonded Network Interface: http://knowm.org/how-to-create-a-bonded-network-interface/
How to Set Up the ELK Stack- Elasticsearch, Logstash and Kibana: http://knowm.org/how-to-set-up-the-elk-stack-elasticsearch-logstash-and-kibana
