Suricata 4.x and ELK with dashboards on Debian

Here I am, a year and a half later, finally updating this blog with a new post. I was originally not going to write one, but I think there is enough material for a quick post.

First things first, I grabbed the latest Suricata from the main website (4.0 at the time of writing) and built it straight onto Debian with the following commands:

apt-get -y install libpcre3 libpcre3-dbg libpcre3-dev \
build-essential autoconf automake libtool libpcap-dev libnet1-dev \
libyaml-0-2 libyaml-dev zlib1g zlib1g-dev libmagic-dev libcap-ng-dev \
libjansson-dev pkg-config libnetfilter-queue-dev
wget http://www.openinfosecfoundation.org/download/suricata-4.0.0.tar.gz
tar xf suricata-4.0.0.tar.gz
cd suricata-4.0.0
./configure --enable-nfqueue --prefix=/usr --sysconfdir=/etc --localstatedir=/var
make -j3
make install
ldconfig
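
Before going further, it doesn't hurt to sanity-check the freshly built binary; both of these flags exist in 4.0:

# quick check that the build worked and NFQueue support was compiled in
suricata -V
suricata --build-info | grep -i nfqueue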

Next, edit /etc/suricata/suricata.yaml and change the HOME_NET line to match your setup. For instance, if your public IP is 8.8.8.8 and you have a second IP on a private network, say 10.0.0.2, set it to the following:

HOME_NET: "[8.8.8.8,10.0.0.2]"
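
(Note the space after the colon, which YAML requires.) For reference, HOME_NET lives under the vars/address-groups section of suricata.yaml, so in context it looks like this:

vars:
  address-groups:
    HOME_NET: "[8.8.8.8,10.0.0.2]"
    EXTERNAL_NET: "!$HOME_NET"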

At this point you have a functional Suricata; you can launch it from the command line with:

suricata -c /etc/suricata/suricata.yaml -i eth0
# if this is happy, you should get something like
1/8/2017 -- 21:52:34 - <Notice> - This is Suricata version 4.0.0 RELEASE
1/8/2017 -- 21:52:40 - <Notice> - all 2 packet processing threads, 4 management threads initialized, engine started.
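
With this config, Suricata writes its EVE JSON log to /var/log/suricata/eve.json; tailing it is a quick way to confirm events are flowing before wiring up Logstash:

tail -f /var/log/suricata/eve.json
# each line is one JSON event (flow, dns, http, alert, ...)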

You then need to install Logstash, Kibana and Elasticsearch. First off, grab the packages you need. This is for Debian, so I am installing the DEB ones and using systemd to set it all up. Note that we are also installing openjdk-8, which is recommended for this version of Logstash and ES.

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.5.1.deb
wget https://artifacts.elastic.co/downloads/kibana/kibana-5.5.1-amd64.deb
wget https://artifacts.elastic.co/downloads/logstash/logstash-5.5.1.deb
apt-get install openjdk-8-jre
dpkg -i elasticsearch-5.5.1.deb
dpkg -i kibana-5.5.1-amd64.deb
dpkg -i logstash-5.5.1.deb

If, like me, you are short on memory, you will want ES to grab less of it on startup. Beware of this setting: the right value depends on how much data you collect, among other things, so this is NOT gospel. Edit /etc/default/elasticsearch and change this line:

ES_JAVA_OPTS="-Xms512m -Xmx512m"
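
For what it's worth, the 5.x deb packages also ship /etc/elasticsearch/jvm.options, which is the other place the heap size can be set; either works, just don't set it in both:

# /etc/elasticsearch/jvm.options (alternative to ES_JAVA_OPTS)
-Xms512m
-Xmx512m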

Second, you need to tell Logstash to process the JSON logs from Suricata. It took me a while to find a working syntax, so I decided to copy/paste it all here for posterity. Create /etc/logstash/conf.d/suricata_eve.conf with the following:

input {
  file {
    path => ["/var/log/suricata/eve.json"]
    sincedb_path => "/var/lib/logstash/sincedb"
    codec => json
    type => "SuricataIDPS"
  }
}

filter {
  if [type] == "SuricataIDPS" {
    date {
      match => [ "timestamp", "ISO8601" ]
    }
    ruby {
      # keep only the first part of the libmagic output for fileinfo events
      code => "
        if event.get('[event_type]') == 'fileinfo'
           event.set('[fileinfo][type]', event.get('[fileinfo][magic]').to_s.split(',')[0])
        end
        "
    }
    if [src_ip] {
      geoip {
        source => "src_ip"
        target => "geoip"
        #database => "/usr/share/GeoIP/GeoIP.dat"
        add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
        add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
      }
      mutate {
        convert => [ "[geoip][coordinates]", "float" ]
      }
      # fall back to the destination IP if the source lookup found nothing
      if ![geoip][ip] {
        if [dest_ip] {
          geoip {
            source => "dest_ip"
            target => "geoip"
            #database => "/usr/share/GeoIP/GeoIP.dat"
            add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
            add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
          }
          mutate {
            convert => [ "[geoip][coordinates]", "float" ]
          }
        }
      }
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost"]
  }
}
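
Before starting the service, it is worth testing the config; the deb package installs the binary under /usr/share/logstash, and -t (--config.test_and_exit) does a syntax check. Also remember that Logstash runs as its own user, so it must be able to read the Suricata log (the chmod below is the blunt option; adding the logstash user to a shared group is nicer):

/usr/share/logstash/bin/logstash --path.settings /etc/logstash \
  --config.test_and_exit -f /etc/logstash/conf.d/suricata_eve.conf
# make sure the "logstash" user can read the eve log
chmod 644 /var/log/suricata/eve.json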

You can then enable the services at boot and start them:

systemctl enable elasticsearch.service
systemctl start elasticsearch.service

systemctl enable logstash.service
systemctl start logstash.service

systemctl enable kibana.service
systemctl start kibana.service
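
Once everything is up (ES and Logstash can take a minute or two on first boot), a few curls confirm each piece is listening on its default port:

curl -s http://localhost:9200/     # Elasticsearch cluster info
curl -s http://localhost:9600/     # Logstash monitoring API
curl -sI http://localhost:5601/    # Kibana should answer with HTTP headers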

Next, check the logs to make sure that all is working; the most important one is Logstash's (/var/log/logstash/logstash-plain.log). My output looks like this:

[2017-08-01T22:13:01,980][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2017-08-01T22:13:01,984][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2017-08-01T22:13:02,276][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<Java::JavaNet::URI:0x203c3fe9>}
[2017-08-01T22:13:02,285][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-08-01T22:13:02,454][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2017-08-01T22:13:02,469][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<Java::JavaNet::URI:0x296de911>]}
[2017-08-01T22:13:02,474][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-geoip-4.2.1-java/vendor/GeoLite2-City.mmdb"}
[2017-08-01T22:13:02,578][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-geoip-4.2.1-java/vendor/GeoLite2-City.mmdb"}
[2017-08-01T22:13:02,580][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
[2017-08-01T22:13:03,356][INFO ][logstash.pipeline        ] Pipeline main started
[2017-08-01T22:13:03,572][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
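
If everything is wired correctly, documents should start landing in the daily logstash-* indices, which you can confirm straight from Elasticsearch:

curl -s 'http://localhost:9200/logstash-*/_count?pretty'
# "count" should be non-zero and growing as Suricata logs events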

Because I needed to, I also set Kibana up behind nginx with some reverse proxy config; here's the snippet for completeness:

server {
    listen 127.0.0.1:6666;
    server_name localhostkibana;
    access_log /var/log/nginx/kibana.access.log;
    error_log /var/log/nginx/kibana.error.log;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
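
As usual with nginx, test the config and reload rather than restart:

nginx -t && systemctl reload nginx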

You are now in a position to look at your events. I went to grab these dashboards: https://github.com/StamusNetworks/KTS5.git (be aware that the dashboards need to match your major version of Kibana). Clone the repo, cd into it, then run:

./load.sh

At this point you can go to http://localhost:5601, create an index pattern based on logstash-*, and enjoy your dashboards.
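
If you prefer to script that last step, Kibana 5 stores its index patterns as documents in the .kibana index, so the pattern can also be created with a curl (a sketch assuming default ports and settings):

curl -XPUT 'http://localhost:9200/.kibana/index-pattern/logstash-*' \
  -H 'Content-Type: application/json' \
  -d '{"title": "logstash-*", "timeFieldName": "@timestamp"}'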