Suricata 4.x and ELK with dashboards on Debian

Here I am, a year and a half later, finally updating this blog with a new post. I was originally not going to write one, but I think there is enough material for a quick post.

First things first: I grabbed the latest Suricata from the main website (4.0 at the time of writing) and installed it straight onto Debian with the following commands:

apt-get -y install libpcre3 libpcre3-dbg libpcre3-dev \
build-essential autoconf automake libtool libpcap-dev libnet1-dev \
libyaml-0-2 libyaml-dev zlib1g zlib1g-dev libmagic-dev libcap-ng-dev \
libjansson-dev pkg-config libnetfilter-queue-dev
tar xf suricata-4.0.0.tar.gz
cd suricata-4.0.0
./configure --enable-nfqueue --prefix=/usr --sysconfdir=/etc --localstatedir=/var
make -j3
make install

Next, edit /etc/suricata/suricata.yaml and change the HOME_NET line to match your setup: it should cover every address this box considers "home", for instance your public IP plus any private network ranges you have.
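The HOME_NET variable lives under vars: address-groups: in suricata.yaml. As a sketch, the stock default covers the RFC 1918 private ranges; widen or narrow it to the addresses your box actually owns:

```yaml
vars:
  address-groups:
    # Every network this sensor should treat as "home".
    HOME_NET: "[192.168.0.0/16,10.0.0.0/8,172.16.0.0/12]"
    EXTERNAL_NET: "!$HOME_NET"
```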


At this point you have a functional Suricata; you can launch it from the command line with:

suricata -c /etc/suricata/suricata.yaml -i eth0
# if this is happy, you should get something like
1/8/2017 -- 21:52:34 - <Notice> - This is Suricata version 4.0.0 RELEASE
1/8/2017 -- 21:52:40 - <Notice> - all 2 packet processing threads, 4 management threads initialized, engine started.
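Once running, Suricata writes its events to /var/log/suricata/eve.json, one JSON document per line. A quick way to get a feel for the format is to parse a line yourself; the record below is a hand-written sample in the shape of a typical alert event, not real output:

```python
import json

# Hand-written sample shaped like a Suricata EVE "alert" record.
sample = ('{"timestamp":"2017-08-01T21:52:40.000000+0000","event_type":"alert",'
          '"src_ip":"10.0.0.5","dest_ip":"192.0.2.1",'
          '"alert":{"signature":"GPL ATTACK_RESPONSE id check returned root"}}')

event = json.loads(sample)
print(event["event_type"], event["src_ip"], "->", event["dest_ip"])
```

event_type also takes values such as dns, http and fileinfo; that field is what drives the Logstash filtering further down.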

You then need to install Logstash, Kibana and Elasticsearch. First off, grab the packages you need; this is Debian, so I am installing the DEB packages and using systemd to set it all up. Note that we also install OpenJDK 8, which is the recommended Java for this version of Logstash and Elasticsearch.

apt-get install openjdk-8-jre
dpkg -i elasticsearch-5.5.1.deb
dpkg -i kibana-5.5.1-amd64.deb
dpkg -i logstash-5.5.1.deb

If, like me, you are short on memory, you will want Elasticsearch to grab less of it on startup. Be careful with this setting: the right value depends on how much data you collect, among other things, so this is NOT gospel. Edit /etc/default/elasticsearch and change this line:

ES_JAVA_OPTS="-Xms512m -Xmx512m"

Second, you need to tell Logstash to process the JSON logs from Suricata. It took me a while to find a working syntax, so I decided to copy/paste it all here for posterity. Create /etc/logstash/conf.d/suricata_eve.conf with:

input {
  file {
    path => ["/var/log/suricata/eve.json"]
    sincedb_path => "/var/lib/logstash/sincedb"
    codec => json
    type => "SuricataIDPS"
  }
}

filter {
  if [type] == "SuricataIDPS" {
    date {
      match => [ "timestamp", "ISO8601" ]
    }
    ruby {
      code => "
        if event.get('[event_type]') == 'fileinfo'
          event.set('[fileinfo][type]', event.get('[fileinfo][magic]').to_s.split(',')[0])
        end
      "
    }
  }

  if [src_ip] {
    geoip {
      source => "src_ip"
      target => "geoip"
      #database => "/usr/share/GeoIP/GeoIP.dat"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
    if ![geoip][ip] {
      if [dest_ip] {
        geoip {
          source => "dest_ip"
          target => "geoip"
          #database => "/usr/share/GeoIP/GeoIP.dat"
          add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
          add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
        }
        mutate {
          convert => [ "[geoip][coordinates]", "float" ]
        }
      }
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost"]
  }
}

Enable the services at startup and start them with:

systemctl enable elasticsearch.service
systemctl start elasticsearch.service

systemctl enable logstash.service
systemctl start logstash.service

systemctl enable kibana.service
systemctl start kibana.service

Next, check out the logs (/var/log/logstash/logstash-plain.log); Logstash is the most important one to verify that everything is working. My output looks like this:

[2017-08-01T22:13:01,980][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2017-08-01T22:13:01,984][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2017-08-01T22:13:02,276][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<Java::JavaNet::URI:0x203c3fe9>}
[2017-08-01T22:13:02,285][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-08-01T22:13:02,454][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2017-08-01T22:13:02,469][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<Java::JavaNet::URI:0x296de911>]}
[2017-08-01T22:13:02,474][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-geoip-4.2.1-java/vendor/GeoLite2-City.mmdb"}
[2017-08-01T22:13:02,578][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-geoip-4.2.1-java/vendor/GeoLite2-City.mmdb"}
[2017-08-01T22:13:02,580][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
[2017-08-01T22:13:03,356][INFO ][logstash.pipeline        ] Pipeline main started
[2017-08-01T22:13:03,572][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

Because I needed to, I also set Kibana up behind nginx with some reverse proxying; here’s the snippet for completeness:

server {
        server_name  localhostkibana;
        access_log   /var/log/nginx/kibana.access.log;
        error_log    /var/log/nginx/kibana.error.log;

        location / {
                proxy_pass http://localhost:5601;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection 'upgrade';
                proxy_set_header Host $host;
                proxy_cache_bypass $http_upgrade;
        }
}

You are now in a position to look at your events. I went and grabbed a set of ready-made dashboards (be aware that dashboards need to match your major version of Kibana): git clone the repository, then follow its README to load them.


At this point you can go to http://localhost:5601, create an index pattern based on logstash-*, and enjoy your dashboards.

Mac 2006 SSD upgrade

I have a first-generation Intel MacBook Pro from 2006, the one with the Radeon card, which makes it incompatible with any version of Mac OS past 10.7.5 (there are of course alternatives, but the Radeon incompatibility means I have not been too pushed to actually try them).

I upgraded this laptop a few years ago to 3GB of RAM, which was quite a nice upgrade. The hard disk, however, has been the same for over 5 years: a sturdy but slow 5400rpm drive.

When I started looking at options to upgrade this laptop, an obvious one was an SSD. As it turns out, I had a Samsung 840 Pro lying around, and although I am aware the controller supports only SATA-I, I thought it was worth a shot.

You also need to be aware that Apple, up until recent versions, supports TRIM only on “official” SSDs, which is definitely a problem if you want the drive to survive. I ended up using Chameleon SSD Optimizer, which can enable TRIM for “non-official” SSDs.

You can then double-check in the System Report that it worked. Be aware, too, that every patch update will disable TRIM support and require you to enable it again.

Last but not least, speed. I am now able to boot that 10-year-old laptop in 15 seconds, and that is with FileVault enabled. Programs open very fast, much faster than with the old drive.

I’d say not a bad upgrade at all.

NAS Upgrade, new drives!

This post is more of a reminder to myself on upgrading and rebuilding the RAID on these new drives before I forget.

This is what I did on the parted front to set up the new clean HDDs.

~# parted -a optimal /dev/sdc
(parted) mklabel gpt
(parted) mkpart primary 1MB 6TB
(parted) set 1 raid on
(parted) quit
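Rather than retyping those parted steps on each drive, the same sequence can be scripted; a sketch, assuming the four new drives are sdb through sde (the echo is left in so the commands are only printed until you have reviewed the device list):

```shell
# Print (not run) the parted invocation for each new 6TB drive.
# Remove the echo once the device list is confirmed correct.
for d in sdb sdc sdd sde; do
  echo parted -s -a optimal "/dev/$d" mklabel gpt mkpart primary 1MB 6TB set 1 raid on
done
```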

Rinse and repeat for all drives. Then check them out with mdadm (sure, you can use hardware RAID if you have more faith in it; I don’t):

~#  mdadm -E /dev/sd[b-e]
   MBR Magic : aa55
Partition[0] :   4294967295 sectors at            1 (type ee)
   MBR Magic : aa55
Partition[0] :   4294967295 sectors at            1 (type ee)
   MBR Magic : aa55
Partition[0] :   4294967295 sectors at            1 (type ee)
   MBR Magic : aa55
Partition[0] :   4294967295 sectors at            1 (type ee)

Let’s create the RAID across the drives. This will take a very long time to complete, so go to bed and check on it the next day.

mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

Check the progress:

~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sde1[3] sdd1[2] sdc1[1] sdb1[0]
      11720779776 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
      [>....................]  resync =  0.1% (10083584/5860389888) finish=574.8min speed=169611K/sec
      bitmap: 44/44 pages [176KB], 65536KB chunk

unused devices: <none>

You can detail the raid to make sure it looks good:

~# mdadm --detail /dev/md0
        Version : 1.2
  Creation Time : Tue Nov 17 21:38:06 2015
     Raid Level : raid6
     Array Size : 11720779776 (11177.81 GiB 12002.08 GB)
  Used Dev Size : 5860389888 (5588.90 GiB 6001.04 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Tue Nov 17 21:40:44 2015
          State : clean, resyncing
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

  Resync Status : 0% complete

           Name : belial:0  (local to host belial)
           UUID : 95dadcec:5bbf183d:eab5aaff:fc3aa00d
         Events : 31

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1
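As a sanity check on those numbers: RAID 6 gives you (n − 2) drives’ worth of usable space, so the Array Size should be exactly twice the Used Dev Size (both reported by mdadm in 1 KiB blocks):

```python
drives = 4
used_dev_size = 5860389888             # per-drive size in 1 KiB blocks, from mdadm --detail
array_size = (drives - 2) * used_dev_size  # RAID 6 loses two drives to parity
print(array_size)                      # matches the Array Size line above
```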

Last but not least, save the config, because mdadm does not write it out by default:

mdadm --detail --scan --verbose >> /etc/mdadm/mdadm.conf
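For reference, the appended entry looks roughly like this (UUID and name taken from the --detail output above; the exact fields vary with the mdadm version):

```
ARRAY /dev/md0 level=raid6 num-devices=4 metadata=1.2 name=belial:0 UUID=95dadcec:5bbf183d:eab5aaff:fc3aa00d
   devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1
```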

That is it.

Steam Link and Controller

I cannot recall the last time I actually wrote a gaming review. That’s probably because I have never posted any such thing on this blog 🙂 I am a big gamer though, and for that reason I’ve had a Windows PC for many years now. I started using Steam about 10 or 11 years ago; at the time, there was very little indication that it would become the behemoth it is today.

Fast forward to earlier in the year: I pre-ordered the Steam Link and controller, which finally arrived today. This is pre-release software, as general availability is pencilled in for the 10th of November. The Steam Link is a streaming box that you put in your living room, onto your TV, to stream and play games from your heftier PC. This is quite handy for me, as I wanted to play video games from my couch and had quite a hefty list of PC games bought during sales that I never completed (and in some cases, never started).

I have created a small gallery of pictures showing you some screens from the Link so you can have some idea of what it looks like. If you want to take a look, head over there.

The unboxing was quite nice. The Steam Link has 3 USB ports, 1 HDMI port, 1 ethernet port and a 2.5A power supply; you can also connect it via wireless (which I have not tried). Once I connected it all, it went straight into upgrading the software on the Link, then offered to pair the Link with my PC. I was afraid I would have to use my full username/password but was nicely surprised by a one-time password instead. Since I bought the controller too, it offered to upgrade its firmware, though for this you will need to connect the controller to your PC.

After this, you are ready to go. I tried the Steam controller for a while but got too impatient. It has a lot of potential, BUT I could not grasp it quickly enough to enjoy it straight away. One thing it has going for it is that the community has uploaded configurations for given games, so you can remap the controller and enjoy it with your favourite titles. As I said, I will need to come back to that later. One last point on the controller: it feels a bit cheap compared to the latest console controllers. I then decided to connect my Xbox controller to the Link, which worked perfectly straight away.

I tested a fair few games, amongst which: Left 4 Dead 2, Saints Row 4, Payday 2, Alan Wake, Bioshock Infinite, The Wolf Among Us and Batman: Arkham Origins. I was pleased to see how responsive the whole setup is. The last time I tried Steam streaming was from the Windows PC, as the processing machine, to a 2014 MacBook Pro as the client; I will pass on the headache of controller support, but I also had another teething issue, a lack of sound, which turned out to be software bugs. All the games I tested with the Link work perfectly and fast.

You do see some streaming/compression artefacts in the picture, but they are quite minimal. Bioshock Infinite looks nothing short of amazing on my big-screen TV. Controls are really fast, and it almost feels like you are playing on a console or natively. I was curious about the streaming speed/quality, so I looked at my network: the Link used between 7 and 15Mb/s, which is not much in fairness. This is with the default network streaming setting of balanced; there are also fast and beautiful options. I did not test fast, but I did test beautiful, and it makes me wish it were the default, as “beautiful” quality does not seem to add lag and provides a much nicer experience on a big screen. Bioshock just became super nice-looking (see the last picture in my gallery). It did not seem to take much more bandwidth, though you can push it all the way up to 30Mb/s.

That is my closing thought on this very quick preview of the Steam Link. I am loving the box so far and can see myself using it quite regularly. For a box that retails at about 60 euros, you cannot really go wrong if your aim is to play games from your couch 🙂

The Bike Situation

For once, I am not going to talk about technical stuff, but about cycling (this article is about Dublin, but I am sure it applies to a fair few cities). I have been sitting on this article for a few weeks, and I feel it is now time to publish it. I would like to thank Rich for the edits and corrections.

I am mainly a cyclist; this has to be said so people reading this understand that the article might be slightly biased. I live in Dublin and commute by bicycle to the city centre every day. That said, I am also a driver and a pedestrian, and I have lived in various parts of the world where I have seen all styles of people. This brings me to my point: people are stupid. Yes, it could be you, or your neighbour. The point is, regardless of the mode of transport, there are idiots everywhere, and on my commute I see them every day.

Let’s start with the drivers. There are good and bad drivers, like there are good and stupid people. The bad ones are amazing at squeezing a cyclist into the pavement, exceeding the speed limit (sometimes by a lot) and creating waves of air as they pass you, and breaking red lights. I see this on my commute every single day. My favourite infraction remains a vehicle passing you closely only to turn left just in front of you – this happens much more than you would think. Cars parked in the cycle lane (and I do not mean parking spots, but cars parked on single or double yellow lines where only bicycles are supposed to be allowed) are also very common. I will not expand on the bus drivers who feel the need to push past you as you cycle, even when you leave them as much space as possible.

The second type is the cyclists. One could argue that the rise of Dublin Bikes has brought out a lot of less aware cyclists, and not a single day passes where I do not see one of them breaking a red light or going against traffic on a one-way road. The funny thing is, this applies just as much to people who own their bikes and most likely commute regularly too. Breaking red lights invites natural selection; I am fairly convinced the chicken run applies here: stupid chicken crosses the road at the wrong time, balance in the universe is restored. Some cyclists also pass other bikes on the left-hand side, which is not only stupid but very dangerous. And on the nice list of infractions, bikes riding the pavement at speed are equally bad for pedestrians and do not send the message that cyclists are responsible. Why? Because people are stupid.

The third type is the pedestrians. I hear over and over again that pedestrians are endangered by cyclists as they cross the road, and I can confirm I have seen numerous occasions where cyclists were reckless towards people crossing. But unfortunately, the same can be said of pedestrians, who are a danger to themselves and to the rest of the traffic. Considering how many never turn their head to check for incoming traffic, and how many look down at their mobiles rather than paying attention, they are quite hazardous in my view; those headless chickens should be cited by the guards. It is also worth mentioning that a fair few pedestrians do not give a shit about common sense and cross in front of you out of sheer defiance, because you are supposed to be in control of your vehicle. I believe natural selection should apply here too.

As most of you know, from the 1st of August new laws have come into effect that allow guards to fine you on the spot, 40 euros per infraction. They can now sanction breaking red lights, cycling without lights at night, or cycling in pedestrian areas, and the Garda have already announced blitz operations from the 1st of August. My problem with this is that it will probably catch a good few stupid cyclists, but what about the stupid drivers? According to this article, the Garda do not seem concerned about stupid drivers, just stupid cyclists. That does not seem fair on any account, and it is not an accurate way to deal with the issue.

It is quite clear that Dublin is biased pro-car, so I do not see this improving any time soon; Dublin is not a cycle-friendly zone. The government likes to think it has made the city as bike-friendly as possible, but until you split the cycle lanes from the car lanes, this is unlikely to be resolved. And before you say it cannot be done, I would encourage you to look at Amsterdam’s history: they made it happen. Given that a fair few driving licences were simply handed out a few years back, this does not make for a very safe place. Dublin should still look into splitting lanes; at least then stupid people would be commuting in the same lane, on the same transport type.

A cyclist is much more vulnerable than a car; drivers, be aware of that fact. It is quite easy to tip over a cyclist through speed and/or proximity, and the only protection we have is a helmet, which protects the head and nothing else.

So until we actually have more solutions, stop being stupid.

Grafana 2 and Nginx

I have spent a bit more time tuning that setup so it works the way I want(tm). I have also decided that I no longer want to use the default Graphite web UI, but rather use Grafana exclusively. The collector used here is Diamond, which has an extensive list of collectors to grab all the metrics.

First, set up your Graphite site in the backend. I allow it to be reachable only locally, because Grafana will proxy queries to it. Create /etc/nginx/sites-available/graphite and fill it with:

server {

  server_name  localhost;
  root   /opt/graphite/webapp/graphite;
  index  index.html index.php;

  access_log  /var/log/nginx/graphite.access.log;
  error_log   /var/log/nginx/graphite.error.log;

  location / {
    gzip off;
    include uwsgi_params;
  }
}

You then need to install Grafana wherever you see fit; in this example, I put it under /opt/grafana. Then create /etc/nginx/sites-available/grafana:

server {

  listen 80;

  root   /var/www/html;
  index  index.html index.php;

  access_log  /var/log/nginx/grafana.access.log;
  error_log   /var/log/nginx/grafana.error.log;

  location / {
    proxy_pass http://localhost:3000;
  }
}

You will of course still need to set up Grafana itself; I will refer you to the official documentation.

Symlink the two sites under sites-enabled, then restart nginx. Et voilà!

Graphite and Nginx with uwsgi (2015 version)

A few years back, I wrote this tutorial to install Graphite with uwsgi on Debian. At the time, I used uwsgi 0.9.9, which has since evolved. My current Debian Jessie can use packages available from the system rather than tarballs; it now ships version 2.0.7.

Needless to say, it has changed quite a bit. I spent some time configuring it right and eventually got it. For posterity, here’s the config I now use; the rest of the configuration from the previous post is still pretty much valid.

Install the following packages: uwsgi uwsgi-core uwsgi-plugin-python.
Create a file called graphite.ini in /etc/uwsgi/apps-available/, copy the following into it, and symlink it into apps-enabled.

[uwsgi]
processes = 2
uid = www-data
gid = www-data
chdir = /opt/graphite/webapp
pythonpath = "['/opt/graphite/webapp'] + sys.path"
manage-script-name = true
mount = /graphite=/opt/graphite/conf/graphite.wsgi
socket =

Restart the uwsgi service and check the logs; by default they are created in /var/log/uwsgi/app/graphite.log.

Upgrading your nexus phone the adb way

Because I got fed up with OTAs, and because I also play too much with my phone, I decided to load factory images directly with fastboot/ADB. This process did not wipe my data, as long as you are careful about what you wipe.

In order to do this, you need your bootloader unlocked; if you only unlock it now, you will lose all your data.

Here goes:

# check that you see your phone
fastboot devices
# get latest image
# md5sum this shit
md5sum hammerhead-lmy47d-factory-6c1ad81e.tgz
# untar and get into it
tar xvf hammerhead-lmy47d-factory-6c1ad81e.tgz
cd hammerhead-lmy47d/
# unzip the different images
# flash all the shit
fastboot flash radio radio-hammerhead-m8974a-
fastboot flash system system.img
fastboot flash boot boot.img
# reboot in recovery and flush cache and dalvik
# reboot and let it upgrade, profit.

6 months later with Tado

I posted a review of the Tado back in March and thought I would re-visit it after over 6 months of using it.

I would like to start with an important disclaimer: never assume the wiring is right if you have moved into a place and are not quite confident with electricity. Back then, I based my wiring on the existing wiring done for the timer; it turns out that was incorrect. This is why, when Tado brought out the hot water schedule functionality in the summer, it did not work for me. A friend came by, looked at the wiring and figured out what was wrong. Thinking back, I got quite lucky that I managed to get the heating working at all. Long story short: a second set of eyes that actually understands electricity is a good thing. Thanks Glen 🙂

Tado has brought out a new model with a display, which is not the model I have, just in case you landed here from a search and wonder what is coming next.

Tado completely rewrote the web interface and the phone apps. I have to say, it is very welcome, as the previous web interface was a bit clunky and the phone app was also in need of a cleanup, if only for speed. I am glad to report the changes actually helped.

Revisiting what I think of the Tado: it has been a very good investment. Now that the wiring is fixed, I have the thermostat as before, but also the water schedule when I want it, which will be really cool when the warm days come back. I have advised a few friends to go for Tado, and as far as I know, no one has been disappointed yet. Disclaimer: I am not paid by Tado to write this; I am just very enthusiastic about this device.

Importing sqlite3 to MySQL in a semi non-painful way

The example below should work for most data. I needed to import Graphite dashboards in sqlite3 format into MySQL, which is now our standard backend. These are the rough steps I used.

Get the file from /opt/graphite/storage/graphite.db, then dump it like it is 1982:

sqlite3 graphite.db
sqlite> .output graphite.sql
sqlite> .dump dashboard_dashboard

Grab that wonderful Python script. I am copy/pasting it here just in case it disappears.

#! /usr/bin/env python

import sys

def main():
    print "SET sql_mode='NO_BACKSLASH_ESCAPES';"
    lines = sys.stdin.read().split("\n")
    for line in lines:
        processLine(line)

def processLine(line):
    # Skip sqlite-specific statements that have no MySQL equivalent.
    if (
        line.startswith("PRAGMA") or
        line.startswith("BEGIN TRANSACTION;") or
        line.startswith("COMMIT;") or
        line.startswith("DELETE FROM sqlite_sequence;") or
        line.startswith("INSERT INTO \"sqlite_sequence\"")
    ):
        return
    line = line.replace("AUTOINCREMENT", "AUTO_INCREMENT")
    line = line.replace("DEFAULT 't'", "DEFAULT '1'")
    line = line.replace("DEFAULT 'f'", "DEFAULT '0'")
    line = line.replace(",'t'", ",'1'")
    line = line.replace(",'f'", ",'0'")
    # Turn double-quoted identifiers into backticks, but leave
    # double quotes inside single-quoted strings alone.
    in_string = False
    newLine = ''
    for c in line:
        if not in_string:
            if c == "'":
                in_string = True
            elif c == '"':
                newLine = newLine + '`'
                continue
        elif c == "'":
            in_string = False
        newLine = newLine + c
    print newLine

if __name__ == "__main__":
    main()
Save it under a name of your choice (I will use sqlite2mysql.py below), then execute the following:

cat graphite.sql | python sqlite2mysql.py > graphite-mysql.sql
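To see what the quote handling does, here is the same character loop as a standalone function in Python 3, run on a made-up dump line (the table name matches the one dumped above; the values are invented):

```python
def mysqlify(line):
    # Same logic as processLine's loop: double quotes become backticks,
    # except inside single-quoted string literals, where they are kept.
    in_string = False
    out = ''
    for c in line:
        if not in_string:
            if c == "'":
                in_string = True
            elif c == '"':
                out += '`'
                continue
        elif c == "'":
            in_string = False
        out += c
    return out

sample = 'INSERT INTO "dashboard_dashboard" VALUES(1,\'a "quoted" title\');'
print(mysqlify(sample))
# INSERT INTO `dashboard_dashboard` VALUES(1,'a "quoted" title');
```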

I would advise you to add a few statements at the beginning and the end, like:

USE graphite; 

All you need to do now is load it into MySQL like this:

mysql < graphite-mysql.sql

You should now have a full database of dashboards in MySQL.