The Bro Network Security Monitor is an open-source network monitoring framework. In a nutshell, Bro watches the packet flows on a network, generates high-level "flow" events from them, and stores those events as single tab-separated lines in log files. You can then parse these log files to mine for information about the traffic on the network you are monitoring. An excellent way to parse the Bro log files and visualize the data is the ELK stack: Elasticsearch, Logstash, and Kibana. Logstash parses the Bro logs, Elasticsearch stores the parsed data, and Kibana provides a beautiful GUI for data mining and visualization.
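To get a feel for the raw input, here is an abridged, illustrative conn.log header (the concrete values in a real log will of course differ); the #fields line names the same tab-separated columns that the csv filter maps later in this post:

#separator \x09
#set_separator	,
#empty_field	(empty)
#unset_field	-
#path	conn
#fields	ts	uid	id.orig_h	id.orig_p	id.resp_h	id.resp_p	proto	service	duration	...
#types	time	string	addr	port	addr	port	enum	string	interval	...

Each subsequent line in the file is one tab-separated connection record; the header lines beginning with # are exactly what the drop filter below throws away.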
If you already have a network tap installed (with optional bonded network interfaces), Bro installed, and the ELK stack installed on your system, all that's left to do is create and deploy a configuration file for Logstash that tells it where to find the Bro logs, how to manipulate them, and where to send them (Elasticsearch).
A redditor on /r/netsec pointed out that the csv filter is much more efficient than the grok filter and pointed me to a git repo with some Logstash conf files for parsing Bro logs. I've since forked the repo and modified the files to suit my needs better, including fixing the tab separator delimiter, adding a geoip filter, and fixing a few bugs. A quick way to get the conf file(s) is to pull them directly from GitHub into Logstash's conf.d directory, as shown in the following code block. Note that Logstash loads all the config files it finds in conf.d at startup.
cd /etc/logstash/conf.d/
sudo wget -N https://raw.githubusercontent.com/timmolter/logstash-dfir/master/conf_files/bro/bro-conn_log.conf
sudo wget -N https://raw.githubusercontent.com/timmolter/logstash-dfir/master/conf_files/bro/bro-dns_log.conf
sudo wget -N https://raw.githubusercontent.com/timmolter/logstash-dfir/master/conf_files/bro/bro-files_log.conf
sudo wget -N https://raw.githubusercontent.com/timmolter/logstash-dfir/master/conf_files/bro/bro-http_log.conf
sudo wget -N https://raw.githubusercontent.com/timmolter/logstash-dfir/master/conf_files/bro/bro-notice_log.conf
sudo wget -N https://raw.githubusercontent.com/timmolter/logstash-dfir/master/conf_files/bro/bro-ssh_log.conf
sudo wget -N https://raw.githubusercontent.com/timmolter/logstash-dfir/master/conf_files/bro/bro-ssl_log.conf
sudo wget -N https://raw.githubusercontent.com/timmolter/logstash-dfir/master/conf_files/bro/bro-weird_log.conf
sudo wget -N https://raw.githubusercontent.com/timmolter/logstash-dfir/master/conf_files/bro/bro-x509_log.conf
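Logstash will only pick these up if they actually landed in conf.d, so a quick sanity check does not hurt:

ls -1 /etc/logstash/conf.d/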
Note that starting with Logstash 2.x, the elasticsearch output's host configuration has changed. The error you will encounter looks something like this:
Pipeline aborted due to error: The setting `host` in plugin `elasticsearch` is obsolete and is no longer available. Please use the 'hosts' setting instead. You can specify multiple entries separated by comma in 'host:port' format.
The fix is simply to change host to hosts in the config files, which I've already done in the *.conf files above.
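For reference, here is the obsolete 1.x form next to the 2.x form. The explicit localhost:9200 value is just an assumption for a default local Elasticsearch install; the bare localhost used in the conf files works as well:

# Logstash 1.x (obsolete):
output {
  elasticsearch { host => localhost }
}

# Logstash 2.x and later:
output {
  elasticsearch { hosts => ["localhost:9200"] }
}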
Explanation
Let’s take a closer look at the file: https://raw.githubusercontent.com/timmolter/logstash-dfir/master/conf_files/bro/bro-conn_log.conf
########################
# logstash Configuration Files - Bro IDS Logs
# Created by 505Forensics (http://www.505forensics.com)
# MIT License, so do what you want with it!
#
# For use with logstash, elasticsearch, and kibana to analyze logs
#
# Usage: Reference this config file for your instance of logstash to parse Bro conn logs
#
# Limitations: Standard Bro log delimiter is tab.
#
# Dependencies: Utilizing the logstash 'translate' filter requires having the logstash contrib plugins added, which are community supported and not part of the official release. Visit logstash.net to find out how to install these
#
#######################

input {
  file {
    type => "bro-conn_log"
    start_position => "end"
    sincedb_path => "/var/tmp/.bro_conn_sincedb"

    #Edit the following path to reflect the location of your log files. You can also change the extension if you use something else
    path => "/nsm/bro/logs/current/conn.log"
  }
}

filter {

  #Let's get rid of those header lines; they begin with a hash
  if [message] =~ /^#/ {
    drop { }
  }

  #Now, using the csv filter, we can define the Bro log fields
  if [type] == "bro-conn_log" {
    csv {
      columns => ["ts","uid","id.orig_h","id.orig_p","id.resp_h","id.resp_p","proto","service","duration","orig_bytes","resp_bytes","conn_state","local_orig","missed_bytes","history","orig_pkts","orig_ip_bytes","resp_pkts","resp_ip_bytes","tunnel_parents"]

      #If you use a custom delimiter, change the following value in between the quotes to your delimiter. Otherwise, insert a literal <tab> in between the two quotes on your logstash system, use a text editor like nano that doesn't convert tabs to spaces.
      separator => "	"
    }

    #Let's convert our timestamp into the 'ts' field, so we can use Kibana features natively
    date {
      match => [ "ts", "UNIX" ]
    }

    # add geoip attributes
    geoip {
      source => "id.orig_h"
      target => "orig_geoip"
    }
    geoip {
      source => "id.resp_h"
      target => "resp_geoip"
    }

    #The following makes use of the translate filter (logstash contrib) to convert conn_state into human text. Saves having to look up values for packet introspection
    translate {
      field => "conn_state"
      destination => "conn_state_full"
      dictionary => [
        "S0", "Connection attempt seen, no reply",
        "S1", "Connection established, not terminated",
        "S2", "Connection established and close attempt by originator seen (but no reply from responder)",
        "S3", "Connection established and close attempt by responder seen (but no reply from originator)",
        "SF", "Normal SYN/FIN completion",
        "REJ", "Connection attempt rejected",
        "RSTO", "Connection established, originator aborted (sent a RST)",
        "RSTR", "Established, responder aborted",
        "RSTOS0", "Originator sent a SYN followed by a RST, we never saw a SYN-ACK from the responder",
        "RSTRH", "Responder sent a SYN ACK followed by a RST, we never saw a SYN from the (purported) originator",
        "SH", "Originator sent a SYN followed by a FIN, we never saw a SYN ACK from the responder (hence the connection was 'half' open)",
        "SHR", "Responder sent a SYN ACK followed by a FIN, we never saw a SYN from the originator",
        "OTH", "No SYN seen, just midstream traffic (a 'partial connection' that was not later closed)"
      ]
    }

    mutate {
      convert => [ "id.orig_p", "integer" ]
      convert => [ "id.resp_p", "integer" ]
      convert => [ "orig_bytes", "integer" ]
      convert => [ "duration", "float" ]
      convert => [ "resp_bytes", "integer" ]
      convert => [ "missed_bytes", "integer" ]
      convert => [ "orig_pkts", "integer" ]
      convert => [ "orig_ip_bytes", "integer" ]
      convert => [ "resp_pkts", "integer" ]
      convert => [ "resp_ip_bytes", "integer" ]
      rename => [ "id.orig_h", "id_orig_host" ]
      rename => [ "id.orig_p", "id_orig_port" ]
      rename => [ "id.resp_h", "id_resp_host" ]
      rename => [ "id.resp_p", "id_resp_port" ]
    }
  }
}

output {
  # stdout { codec => rubydebug }
  elasticsearch { hosts => localhost }
}
- In the input section, we need to put the actual paths to the Bro log files on OUR system.
- In the output section at the end of the config file, we push the data to Elasticsearch: elasticsearch { hosts => localhost }.
- In the main filter section, a csv filter is assigned and configured for the Bro log. You can also hand-write the csv filters if you prefer.
- The other filter sections perform a few more manipulations on the data and are explained quite well in the comments.
- Starting with Elasticsearch 2.0, field names may no longer contain a dot character. Since the Bro logs contain fields with dots in their names (e.g. id.orig_p), we need a filter to convert the dots to underscores. If not, you may see an error like: failed to put mappings on indices [[logstash-2016.05.02]], type [bro-conn_log] MapperParsingException[Field name [id.orig_h] cannot contain '.']. The mutate plugin handles this by renaming the dotted fields with its rename option (a newer hash-style spelling of the same rename is sketched below).
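As an aside, newer Logstash releases also accept a hash form of mutate's rename option, which reads a little more clearly than the repeated array form used in the conf file above; a sketch of the equivalent block:

mutate {
  rename => {
    "id.orig_h" => "id_orig_host"
    "id.orig_p" => "id_orig_port"
    "id.resp_h" => "id_resp_host"
    "id.resp_p" => "id_resp_port"
  }
}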
logstash-filter-translate
The above Logstash config uses a plugin called logstash-filter-translate. The following terminal commands show how to install it. For a more in-depth explanation of installing Logstash plugins, see How to Install Logstash Plugins for Version 1.5.
cd /opt/logstash
sudo bin/plugin install logstash-filter-translate
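To verify the plugin actually installed, bin/plugin can also list what is present (in later Logstash releases this tool was renamed to bin/logstash-plugin):

bin/plugin list | grep translate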
Deploying
To check whether the configuration file(s) are valid without actually starting Logstash, run the following:
sudo -u logstash /opt/logstash/bin/logstash agent -f /etc/logstash/conf.d --configtest
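If all the files parse cleanly, Logstash prints a short confirmation and exits; on the 2.x series it looks like this:

Configuration OK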
Test run in the console:
sudo -u logstash /opt/logstash/bin/logstash -f /etc/logstash/conf.d --debug
Restart Logstash and it will automatically pick up the new config files. It can take up to a minute before it actually starts pumping data.
sudo /etc/init.d/logstash restart
Or, for a system with systemd:
sudo systemctl restart logstash
Debugging
For debugging, we can start Logstash with the --debug flag, as in the test-run command shown above.
In any of the config files, you can also change the output to push data to the console instead of to Elasticsearch by adding stdout {}:
output {
  stdout {}
}
codec => rubydebug can also be used for debugging; its output is formatted more readably.
output {
  stdout { codec => rubydebug }
}
And here are some extra commands for controlling the logstash service:
System V
sudo /etc/init.d/logstash stop
sudo /etc/init.d/logstash start
sudo /etc/init.d/logstash restart
Systemd
sudo systemctl stop logstash
sudo systemctl start logstash
sudo systemctl restart logstash
If Logstash does not start, look in the following logs for any errors:

sudo nano /var/log/upstart/logstash.log
sudo nano /var/log/logstash/logstash.log

Running /opt/logstash/bin/logstash --help also lists the available command-line flags, which helps when an option is misspelled.
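On a systemd-based system there is no upstart log; the same startup errors land in the journal instead:

sudo journalctl -u logstash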
To find the pid Logstash is running under, in order to kill it:
ps -ef | grep logstash
sudo kill -9 5877
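Alternatively, pgrep returns the pid directly, and it is worth trying a plain SIGTERM before resorting to -9:

pgrep -f logstash
sudo kill 5877   # plain SIGTERM, using the example pid from above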
To see what config parameters Logstash started up with, use the following command:
ps aux | grep logstash
You will get something like the following:
logstash 2635 202 2.4 6837484 396528 pts/0 SNl 13:44 0:24 /usr/bin/java -Djava.io.tmpdir=/var/lib/logstash -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Xmx1024m -Xss2048k -Djffi.boot.library.path=/opt/logstash/vendor/jruby/lib/jni -Djava.io.tmpdir=/var/lib/logstash -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Xbootclasspath/a:/opt/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/opt/logstash/vendor/jruby -Djruby.lib=/opt/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main --1.9 /opt/logstash/lib/logstash/runner.rb agent -f /etc/logstash/conf.d -l /var/log/logstash/logstash.log
sincedb_path
The sincedb_path needs to be writable by the logstash user. One way to ensure this is to set the sincedb_path to a file under /var/tmp, if your system has this writable directory. If you are seeing error messages related to the sincedb_path, the first thing to check is the permissions on the configured path.
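A quick way to inspect and, if necessary, fix those permissions, assuming Logstash runs as the logstash user and using the sincedb path from the conn.log config above:

ls -l /var/tmp/.bro_conn_sincedb
sudo chown logstash:logstash /var/tmp/.bro_conn_sincedb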
Related Resources
Installing Bro on Ubuntu: http://knowm.org/how-to-install-bro-network-security-monitor-on-ubuntu/
How to Create a Bonded Network Interface: http://knowm.org/how-to-create-a-bonded-network-interface/
How to Set Up the ELK Stack: Elasticsearch, Logstash and Kibana: http://knowm.org/how-to-set-up-the-elk-stack-elasticsearch-logstash-and-kibana