Know your event size in Graylog2

It may be interesting to know your event size in Graylog — or, as a matter of fact, the size of any document you store in your Elasticsearch backend. You can use it for capacity planning based on your average EPS (Events Per Second), monitor its fluctuation, etc.

This specific example will be Graylog2 centric. It works on Graylog 1.3 and Elasticsearch 1.7.x.

1. Create a file containing the JSON content for a new index template — let’s call it enable_size.json, such as:

{
  "template": "graylog*",
  "mappings": {
    "message": {
      "_size" : { "enabled" : true, "store" : true }
    }
  }
}

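Before sending the template, it is worth sanity-checking that the file is valid JSON. A quick sketch (assumes python3 is available on the machine; the filename matches the step above):

```shell
# Write the template file (same content as above) and verify that it
# parses as valid JSON before pushing it to Elasticsearch.
cat > enable_size.json <<'EOF'
{
  "template": "graylog*",
  "mappings": {
    "message": {
      "_size" : { "enabled" : true, "store" : true }
    }
  }
}
EOF

python3 -m json.tool enable_size.json > /dev/null && echo "enable_size.json is valid JSON"
```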
2. Using a simple curl command from your shell, send that JSON content to your Elasticsearch node, such as:

curl -XPUT http://$ES_HOST:9200/_template/custom-graylog -d @enable_size.json

You should get {"acknowledged":true} in response. All your newly created graylog indices will now store each document's size.

3. You can then manually cycle your deflector in the UI (System > Indices) or use the POST hook in graylog’s REST API, for example:

curl -uadmin:admin -XPOST

4. In order to query for the average size of documents for the current index, as pointed to by the graylog deflector alias:

curl -s -XPOST http://$ES_HOST:9200/graylog2_deflector/_search?pretty -d '{
  "query" : { "match_all": {} },
  "aggs": {
    "avg_doc_size": {
      "avg": {
        "field": "_size"
      }
    }
  }
}'

NB: The deflector alias name could be graylog_deflector and not graylog2_deflector as above.
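The interesting part of the response lives under the aggregations key. Here is a trimmed, hypothetical example of the response shape and how to extract the average (using python3 here for portability; the values are made up):

```shell
# Trimmed, hypothetical example of what the _search call above returns.
cat > response.json <<'EOF'
{
  "hits": { "total": 1200 },
  "aggregations": { "avg_doc_size": { "value": 612.5 } }
}
EOF

# Pull out the average document size in bytes.
python3 -c "import json; print(json.load(open('response.json'))['aggregations']['avg_doc_size']['value'])"
# prints 612.5
```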

5. If you want to use this within Zabbix, for example (and a shameless plug for my Zabbix template on Graylog’s marketplace), you can embed the above into a small bash script used as a Zabbix externalscript, leveraging the excellent jq tool, and start plotting the data.


#!/bin/bash
[[ -z "$1" ]] && echo "Hostname needed" && exit 1

curl -s -XPOST "http://$1:9200/graylog2_deflector/_search?pretty" -d '{
  "query" : { "match_all": {} },
  "aggs": {
    "avg_doc_size": {
      "avg": {
        "field": "_size"
      }
    }
  }
}' | jq '.aggregations.avg_doc_size.value'
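As mentioned at the start, the average size is handy for capacity planning. A back-of-the-envelope sketch, with made-up numbers for the average event size and EPS (note that _size measures the source document only; index overhead and replicas add on top):

```shell
# Hypothetical inputs: average event size in bytes (from the aggregation
# above) and your sustained events-per-second rate.
AVG_SIZE_BYTES=600
EPS=1500

# Raw index volume per day, in GiB: size * EPS * 86400 seconds / 2^30.
DAILY_GIB=$(awk -v s="$AVG_SIZE_BYTES" -v e="$EPS" 'BEGIN { printf "%.1f", s * e * 86400 / (1024 ^ 3) }')
echo "~${DAILY_GIB} GiB/day"   # prints ~72.4 GiB/day
```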

Quickly deploy a 3-member mongoDB replica set with Vagrant

If you quickly need to test a 3-member mongoDB replica set, this post may be for you.

Using Vagrant and VirtualBox, you can quickly deploy 3 linux servers running mongoDB, automatically configured as a replica set with 1 primary and 2 secondaries. The configuration below is to be used for quick test/dev purposes, not production.


Download and install VirtualBox and Vagrant on your computer.

Set up vagrant with the initial box image

(Optional if you’re using the Vagrantfile in the tarball below.)
Open up a terminal and type:

vagrant init precise64

Vagrant config files

VirtualBox (default)

Download this tarball in a place of your liking and untar it. You can edit the files as you see fit, but they should get you started.

Mac OSX / Parallels

Thanks to the vagrant-parallels project, you can install the plugin to run vagrant with parallels.

Run the following commands, and download this Vagrantfile:

vagrant plugin install vagrant-parallels
vagrant box add --provider=parallels precise64

You can then up your VMs by appending --provider=parallels to the vagrant up commands:

vagrant up --provider=parallels


For more info, see the parallels-plugin website.


Go to the newly created directory where your Vagrantfile resides, and power everything up from there.

vagrant up

It is important that “mongo1” gets deployed last, as the replica set provisioning is done on that node, which will become the primary. This ordering is automatic based on the file you just downloaded.

It is possible that you get this warning message when the VMs come up:

The guest additions on this VM do not match the installed version of
VirtualBox! In most cases this is fine, but in rare cases it can
prevent things such as shared folders from working properly. If you see
shared folder errors, please make sure the guest additions within the
virtual machine match the version of VirtualBox you have installed on
your host and reload your VM.

Guest Additions Version: 4.2.0
VirtualBox Version: 4.3

This should not matter. It doesn’t for these specific versions, and for the purpose of this article.

In a matter of minutes, your mongoDB replica set will be ready.


The 3 servers are bound to the following IP addresses (VirtualBox / Parallels):

  • mongo1 : /
  • mongo2 : /
  • mongo3 : /

You should end up with the following mongoDB configuration, out of mongo1:

set0:PRIMARY> rs.conf()
{
	"_id" : "set0",
	"version" : 3,
	"members" : [
		{
			"_id" : 0,
			"host" : ""
		},
		{
			"_id" : 1,
			"host" : "mongo3:27017"
		},
		{
			"_id" : 2,
			"host" : "mongo2:27017"
		}
	]
}

Testing with data

Insert a line for testing, on the primary node:

set0:PRIMARY> db.something.insert( {test : true} )
set0:PRIMARY> db.something.find();
{ "_id" : ObjectId("53133ee70a67e2fcfad30e41"), "test" : true }

Now, log on to one of the secondaries, and query for that data:

set0:SECONDARY> db.something.find();
error: { "$err" : "not master and slaveOk=false", "code" : 13435 }

This is normal. You have to tell Mongo to allow reads on the secondaries:

set0:SECONDARY> rs.slaveOk()
set0:SECONDARY> db.something.find();
{ "_id" : ObjectId("53133ee70a67e2fcfad30e41"), "test" : true }