Graylog and stream alerts on message terms

In a stream, you may wonder why you get search hits when looking for a basic string in message with the query string built for you by clicking the little magnifying glass "+" next to a record's message field, yet none when you copy/paste or type out that very same query — and hence why no alert triggers.

This is due to terms.

Our devoted folks at Graylog actually warn us about it:

Here is an example. I want to trigger an alert on the field content value condition message:"has not been migrated". The screenshot below shows the associated terms. In this particular example, I am getting values from a file via graylog-sidecar and nxlog on a Windows machine.


If I just create my stream alert with a "Field content value condition" where "message" contains "has not been migrated" (i.e. typing it out or copy-pasting the text), it won't work. But there is a trick. A bit ugly, but it works.

Given the screenshot above, build your query simply by clicking on the magnifying glass, and search. There, you can (and need to) adjust it; it will retain its "magic", and it will match. Yet nothing stands out in that query compared to the one you typed out, right?

Now, let’s look at the Elasticsearch query to see how it was built. Click on Show query.


The query string is probably not the one you would have expected. This is because of terms. Each arrow points to a letter of what I am looking for: “h”, “a”, “s” and so on.

(obviously, the whole query doesn't fit in the screenshot)

You need to put that odd-looking query into your alerting criteria instead of your "plain text" query. The alert will then trigger as expected.
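To get an intuition for what the index actually stores, you can approximate the standard analyzer's behavior in the shell. This is only a rough sketch of the real analysis chain (which you could also inspect via Elasticsearch's _analyze API), not Graylog-specific code:

```shell
#!/bin/sh
# Rough approximation of what Elasticsearch's standard analyzer does to the
# message field: lowercase everything, then split on non-alphanumerics.
# Each resulting term is what gets indexed -- not the phrase as a whole.
echo "Has not been migrated" | tr 'A-Z' 'a-z' | tr -cs 'a-z0-9' '\n'
```

A phrase query against the analyzed field has to line up with those stored terms, which is exactly what the magnifying-glass-built query does for you.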

Know your event size in Graylog2

It may be interesting to know your event size in Graylog — or as a matter of fact, the size of any document you store in your Elasticsearch backend. You can use it for capacity planning based on your average EPS (Event Per Second), monitor its fluctuation, etc…

This specific example will be Graylog2 centric. It works on Graylog 1.3 and Elasticsearch 1.7.x.

1. Create a file containing the JSON content to create a new index template — let's call it enable_size.json:

{
  "template": "graylog*",
  "mappings": {
    "message": {
      "_size" : { "enabled" : true, "store" : true }
    }
  }
}

2. Using a simple curl command from your shell, send that JSON content to your Elasticsearch:

curl -XPUT http://$ES_HOST:9200/_template/custom-graylog -d @enable_size.json

You should get {"acknowledged":true} in response. All newly created graylog indices will now store each document's size.

3. You can then manually cycle your deflector in the UI (System > Indices) or use the POST hook in Graylog's API, for example:

curl -uadmin:admin -XPOST

4. In order to query for the average size of documents for the current index, as pointed to by the graylog deflector alias:

curl -s -XPOST "http://$ES_HOST:9200/graylog2_deflector/_search?pretty" -d '{
  "query" : { "match_all": {} },
  "aggs": {
    "avg_doc_size": {
      "avg": {
        "field": "_size"
      }
    }
  }
}'
NB: The deflector alias name could be graylog_deflector and not graylog2_deflector as above.

5. If you want to use this within Zabbix, for example (and a shameless plug for my Zabbix template on Graylog's marketplace), you can embed the above into a small bash script used as a Zabbix externalscript, leverage the excellent jq tool, and start plotting the data.


#!/bin/bash
# Query the deflector index for the average document size; pass your
# Elasticsearch host as the only argument.
[[ -z "$1" ]] && echo "Hostname needed" && exit 1

curl -s -XPOST "http://$1:9200/graylog2_deflector/_search?pretty" -d '{
  "query" : { "match_all": {} },
  "aggs": {
    "avg_doc_size": {
      "avg": {
        "field": "_size"
      }
    }
  }
}' | jq '.aggregations.avg_doc_size.value'
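With the average size in hand, a back-of-the-envelope calculation gives you the expected daily index growth. The numbers below are made up for illustration; plug in the value returned by the aggregation and your own EPS:

```shell
#!/bin/sh
# Daily index growth = average event size x events per second x 86400.
# Both input values here are hypothetical examples.
AVG_SIZE_BYTES=550   # from the avg_doc_size aggregation above
EPS=300              # your average events per second
DAILY_BYTES=$((AVG_SIZE_BYTES * EPS * 86400))
echo "~$((DAILY_BYTES / 1024 / 1024)) MB/day before replicas and overhead"
```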

Overall ranking – Tour du lac de Neuchâtel 2015 (amateur category)

After quickly generating an API to extract the content, here is an Excel file that should give a roughly correct result. It is best to download the file and view it in Excel or another spreadsheet application.

Overall ranking – Tour du lac de Neuchâtel 2015

Note that this sheet is entirely unofficial and is in no way affiliated with any organization.

QNAP TimeMachine .AppleDB folder

Are you a QNAP user? Ever got the error “The operation couldn’t be completed. (OSStatus error 2.)” out of the blue after months of smooth backup operations on your Mac OSX?

If you have tried disabling/re-enabling AFP and the TimeMachine service, deleted your sparsebundle, and tried all kinds of other things in the hope of fixing the problem, to no avail, try the following to clear up the metadata:

* ssh to your qnap box (in my case a slow TS412)
* cd /share/MD0_DATA/.timemachine
* rm -rf .AppleDB

Then launch your TimeMachine backup again. That should solve the issue.

FreeNAS: howto create a link aggregation when you only have 2 NICs

EDIT: This article can still be useful, but apparently it can be done without the workaround: clear all interfaces, create your lagg in the FreeNAS shell console, and associate the interfaces. A reboot is mandatory here for the lagg to appear in the first menu, where you can then set its IP address.

If you've tried to configure LACP on your FreeNAS server and only have 2 NICs, you may have run into the issue where you can't assign an IP address to a lagg at the console, and where you can't create a lagg from the web interface if either of your two interfaces already has an IP assigned. Therefore, there is no way to create a lagg unless you have an extra NIC into which you can plug your laptop, with an IP configured on a different network.

I haven’t found a way to do it with FreeNAS tools. If you have, please let me know!

Drop to your FreeNAS console shell!

Make sure your individual interfaces do not have any IP assigned. Delete previous lagg.
Modify the below info to suit your needs.

# ifconfig lagg0 create
# ifconfig lagg0 laggproto lacp laggport bce0 laggport bce1 netmask
# ifconfig bce0 up
# ifconfig bce1 up
# route add default

Your LACP should now come up. However, if you reboot, you’ll lose everything.

To persist your changes, mount your / read-write.

mount -uw /

Do not modify /etc/rc.conf directly. Instead, modify /conf/base/etc/rc.conf.
Append the following at the end of the file.

ifconfig_lagg0="laggproto lacp laggport bce0 laggport bce1"
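For comparison, on stock FreeBSD a persistent lagg configuration usually also declares the cloned interface and brings the member NICs up. I have not verified the following on FreeNAS, and the address is a placeholder, but it would look something along these lines:

```shell
# Stock-FreeBSD-style lagg persistence (untested here on FreeNAS).
# is a placeholder address; use your own IP and netmask.
cloned_interfaces="lagg0"
ifconfig_bce0="up"
ifconfig_bce1="up"
ifconfig_lagg0="laggproto lacp laggport bce0 laggport bce1 netmask"
```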

Reboot to make sure everything comes back as it should. Check your services also.

The FreeNAS GUI will see your lagg0 as a regular interface, and you won’t see your 2 individual interfaces anymore. Do not add another lagg from the GUI, or it will steal the physical interface and leave your lagg0 unusable.

Edit: It looks like the physical interfaces, in this case bce0 and bce1, do not re-attach to the lagg after a reboot. If someone knows why, please comment or send me an email. Thanks 🙂

Deploy Bonita Community on OpenShift

Do you want to run your Bonita Community Tomcat instance on Redhat’s Openshift Online, using MySQL?

This post will not go into the details of using OpenShift, but rather focus on making Bonita Community Ed. 6.2.4 work on it, along with MySQL. I will not go into details of a regular installation of Bonita either.

Note to Bonita Subscription users: it is currently impossible to run the Subscription Edition on OpenShift (and most likely on other PaaS services), because license requests cannot be generated there. Being one of those users, I raised a ticket with Bonita support, but didn't get much help, nor any hope of seeing this change in the future.


  • Have a free OpenShift account
  • Have deployed an OpenShift application with the MySQL 5.5 and Tomcat 7 (JBoss EWS 2.0) cartridges
  • Have downloaded the Bonita BPM deployment bundle to your workstation

Note: After your git clone, don’t forget to remove the pom.xml from your git root and the src folder in order to avoid triggering a maven build when git pushing your changes.

git rm -r src/ pom.xml

Configuring Bonita the OpenShift way

If you have already deployed Bonita on one of your own servers, you will soon notice that you cannot install it the same way on OpenShift.

Don’t forget that for all these local files you’ll modify, you’ll have to git add, git commit, and git push them.

The .openshift folder

After doing your git clone, you will have a local OpenShift directory (which we'll call $OPENSHIFT_LOCAL_HOME), along with whatever apps you have. Let's assign a few variables for the purposes of this post. Don't confuse them with variables you might find on your OpenShift instance:


The relevant folders are located under $OPENSHIFT_LOCAL_APP/.openshift. In that folder, you will notice 4 directories, 3 of which are of interest for now.


Make sure you have an empty file named "java7" in it. This tells your application to use Java 7; without it, you'd be running on Java 6.


This is where you will configure the environment variables needed to start your Bonita context.

Create a file named pre_start_jbossews-2.0, and put something along these lines inside:


export CATALINA_OPTS="${CATALINA_OPTS} ${BONITA_HOME} ${DB_OPTS} ${BTM_OPTS} -Dfile.encoding=UTF-8 -Xshare:auto -XX:+HeapDumpOnOutOfMemoryError"
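For completeness, here is what the variable definitions referenced above might look like. These are my assumptions for a working setup, not the original's values: OPENSHIFT_DATA_DIR is a standard OpenShift environment variable, while the bonita.home path and the Bitronix/Bonita system properties shown are hypothetical and must be adapted to your layout:

```shell
# Hypothetical definitions for the variables consumed by CATALINA_OPTS above.
export BONITA_HOME="-Dbonita.home=${OPENSHIFT_DATA_DIR}/bonita_home-6.2.4"
export DB_OPTS="-Dsysprop.bonita.db.vendor=mysql"
export BTM_OPTS="-Dbtm.root=${OPENSHIFT_DATA_DIR} -Dbitronix.tm.configuration=${OPENSHIFT_DATA_DIR}/"
```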


This is the place where you will put your .properties files. This is what you should have at the end:

ironman:config fblaise$ ls -l1

Bonita and Tomcat configuration

Local files configuration

Not all files need to be modified. Find below the ones that do.

In your local config folder under $OPENSHIFT_LOCAL_APP/.openshift, locate the line starting with "common.loader" and append the following:
Note that this string will be unique for each of you: the string of "x" above represents your OpenShift instance's user. You should then get:



Add the following right above the <GlobalNamingResources> tag:


There is also an H2 listener line you should remove, since we're using MySQL; look in that area as well.


You will notice a resource that is already configured for MySQL, done when you add the MySQL cartridge. Here, we will add our Bonita datasource and bitronix transaction factory. It should look like this:

<!-- Configure Bonita Datasource -->
        <Resource name="bonitaDS" auth="Container" type="javax.sql.DataSource"
       factory="" uniqueName="jdbc/bonitaDSXA" />

        <Transaction factory="" />

        <Resource name="bonitaSequenceManagerDS"
             validationQuery="SELECT 1"

There is no variable interpolation in this file, so you will have to get the IP address from your OpenShift instance. You will notice in the block below which variables hold the values you're looking for.

## NOT INTERPOLATING !! resource.ds1.driverProperties.URL=jdbc:mysql://${OPENSHIFT_MYSQL_DB_HOST}:${OPENSHIFT_MYSQL_DB_PORT}/bonita?dontTrackOpenResources=true&useUnicode=true&characterEncoding=UTF-8
resource.ds1.testQuery=SELECT 1

Don’t forget to git commit.

war, bonita_home and libraries


Copy the bonita.war found in your Bonita download in $OPENSHIFT_LOCAL_APP/webapps. git add and commit.


There are several ways to go about that one. I will just present the one I use.

– On your openshift instance


– scp all the .jar files you will find in your Bonita Community download under Tomcat-6.0.37/lib (except the *h2* ones) to the directory above.
– Don’t forget to put in there your MySQL connector jar file as well.


Upload the bonita_home-6.2.4 (containing the client and server subfolders) to your OpenShift instance, straight into $OPENSHIFT_DATA_DIR.

Don’t forget to change from h2 to mysql


Push and deploy

Everything should now be set.

Make sure you've git added and committed all your local files, that your bonita.war is in the local webapps directory, and that you've removed your pom.xml and src folder.

git push

Performing a git push will restart your application, along with all your cartridges.

You can tail the logs straight from your terminal with

rhc tail 

You can also check in your web browser to see if the login screen shows up, then log in with your install user.


Of course, since the application will be out in the wild, don't forget to change Bonita's default passwords if you haven't already done so (i.e., the install user, platformAdmin, etc.).

I may have forgotten some things, as I have written this article some time after doing it. Please let me know if things are missing or are wrong.

Quickly deploy a mongoDB 3-members replica set with vagrant

If you quickly need to test a 3-member mongoDB replica set, this post may be for you.

Using Vagrant and VirtualBox, you can quickly deploy 3 linux servers running mongoDB, automatically configured as a replica set with 1 primary and 2 secondaries. The configuration below is to be used for quick test/dev purposes, not production.


Download and install VirtualBox and Vagrant on your computer.

Setup vagrant with initial box image

(Optional if you're using the Vagrantfile in the tarball below)
Open up a terminal and type:
[sourcecode language=’bash’]vagrant init precise64[/sourcecode]

Vagrant config files

Virtualbox (default)

Download this tarball to a place of your liking and untar it. You can edit the files as you see fit, but they should get you started.

Mac OSX / Parallels

Thanks to the vagrant-parallels project, you can install the plugin to run vagrant with parallels.

Run the following commands, and download this Vagrantfile:

vagrant plugin install vagrant-parallels
vagrant box add --provider=parallels precise64

You can then bring up your VMs by appending --provider=parallels to your vagrant up commands:


For more info, see the parallels-plugin website.


Power everything up from the newly created directory where your Vagrantfile resides.

vagrant up

It is important that "mongo1" gets deployed last, as the provisioning for the replica set is done on that node, which will become the primary. This is automatic based on the file you just downloaded.

It is possible that you get this warning message when the VMs come up:

 The guest additions on this VM do not match the installed version of
VirtualBox! In most cases this is fine, but in rare cases it can
prevent things such as shared folders from working properly. If you see
shared folder errors, please make sure the guest additions within the
virtual machine match the version of VirtualBox you have installed on
your host and reload your VM.

Guest Additions Version: 4.2.0
VirtualBox Version: 4.3

This should not matter. It doesn’t for these specific versions, and for the purpose of this article.

In a matter of minutes, your mongoDB replica set will be ready.


The 3 servers are bound to the following IP addresses (VirtualBox / Parallels):

  • mongo1 : /
  • mongo2 : /
  • mongo3 : /

You should end up with the following mongoDB configuration, out of mongo1:

set0:PRIMARY> rs.conf()
{
	"_id" : "set0",
	"version" : 3,
	"members" : [
		{
			"_id" : 0,
			"host" : ""
		},
		{
			"_id" : 1,
			"host" : "mongo3:27017"
		},
		{
			"_id" : 2,
			"host" : "mongo2:27017"
		}
	]
}

Testing with data

Insert a line for testing, on the primary node:

set0:PRIMARY> db.something.insert( {test : true} )
set0:PRIMARY> db.something.find();
{ "_id" : ObjectId("53133ee70a67e2fcfad30e41"), "test" : true }

Now, log on to one of the secondaries, and query for that data:

set0:SECONDARY> db.something.find();
error: { "$err" : "not master and slaveOk=false", "code" : 13435 }

This is normal. You have to tell Mongo to allow reads on the secondaries.

set0:SECONDARY> rs.slaveOk()
set0:SECONDARY> db.something.find();
{ "_id" : ObjectId("53133ee70a67e2fcfad30e41"), "test" : true }

Get desktop notification for filesystem usage

Here is a little unintrusive script that will alert you whenever one of your filesystems goes above a certain usage threshold.
This was tested under openSUSE 12.3, but should work on any Linux running KDE (or at least having the kdialog binary installed).

You can save it under your user's bin directory, for example /home/fblaise/bin/ in my case.

[sourcecode language='bash']
#!/bin/bash
# Fred Blaise
# Cron this script in order to receive passive alerts about filesystems getting full.
# Set PCT_THRESHOLD to your liking

export DISPLAY=:0

PCT_THRESHOLD=90
ALERT_TITLE="WARNING: Filesystem almost full"

df -h | grep ^/dev | awk '{print $1,$5,$6}' |
while read devfs pctused mntpoint; do
    pctnum=${pctused%\%}
    if [[ "${pctnum}" -ge "${PCT_THRESHOLD}" ]]; then
        kdialog --title "${ALERT_TITLE}" --passivepopup "${devfs} mounted on ${mntpoint} is at ${pctused}."
    fi
done
[/sourcecode]

Don't forget to make this shell script executable.

We could then imagine a crontab entry like this, for checking every 10 minutes:

[sourcecode language='bash']*/10 * * * * fblaise /home/fblaise/bin/[/sourcecode]

Whenever one of your filesystems crosses the threshold, a passive box will be displayed. I have my bar on the right side of the screen, and the result is the following:


It is a very basic script. One could add support for choosing which FS types should be monitored, or how often to receive notifications. If you do make this better, please share!

Monitor your cron jobs with Jenkins

Have you ever found some of your cron jobs failing for a while before you realized it while checking the user's mailbox on that machine? How about having a web dashboard that gives you an "at a glance" status of these?

Here comes Jenkins. The tool is Java-based and is designed to monitor automated builds (of Java code, for example) or to watch over cron jobs; it is meant to be part of the Continuous Integration concept.

Let's create a simple example; we will use a job to capture the console output of a trivial command, ls -l. This is on Mac OS X.
Download Jenkins from the homepage. Installation is fairly straightforward, and on Mac at least, your browser opens up automatically at http://localhost:8080 to present you with the Jenkins dashboard.

Let's create a new job in this interface. Select "Monitor an external job" and give it a meaningful name. Press OK.
Jenkins job creation
On the next screen, enter a description, then save.

If you go back to the dashboard, your newly created job appears but does not present any data yet.

– Open a terminal.
[sourcecode language='bash']export JENKINS_HOME=http://@localhost:8080/[/sourcecode]

– Find out where your "jenkins-core" jar file is. The exact name depends on the version you downloaded.
On Mac OS X, and in my particular case, it is located at
[sourcecode language='bash']/Users/Shared/Jenkins/Home/war/WEB-INF/lib/jenkins-core-1.500.jar[/sourcecode]

Now, we will prepend the actual command we want to monitor with the Jenkins java command, and associate it with the job name we created above.
[sourcecode language='bash']java -jar /Users/Shared/Jenkins/Home/war/WEB-INF/lib/jenkins-core-1.500.jar "List home directory" ls -l[/sourcecode]
command line

If you now go back to the dashboard and hit refresh, you will see the status change.
Status update dashboard

Click on the job, and then on the "last build" permalink (if you hover over the link, a small pop-up appears).
build output

And there you can see your console output.
console output

Therefore, if you have a cron job you want to monitor, just follow the same steps, making sure JENKINS_HOME is exported and that the output is reported under the right job name.
And you have it all in a web dashboard!

If you want an example of a well-fed dashboard, you can visit Apache's builds website.
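To actually tie this to cron, the wrapper invocation can live directly in the crontab. Everything here except the JENKINS_HOME/java -jar pattern is hypothetical: the schedule, the "Nightly backup" job name, and the script path are placeholders for your own values:

```shell
# Hypothetical crontab entry: report a nightly script's output and exit
# status to a Jenkins external job named "Nightly backup".
0 2 * * * fblaise JENKINS_HOME=http://localhost:8080/ java -jar /Users/Shared/Jenkins/Home/war/WEB-INF/lib/jenkins-core-1.500.jar "Nightly backup" /home/fblaise/bin/backup.sh
```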

Netgear WNR2200: firmware upgrade makes bonjour/AFP shares unusable

For those having a Netgear WNR2200 router and using Apple gear at home, such as AFP shares for your TimeMachine backup needs, please be aware that updating your firmware to 1.0.1.x will likely make that unusable. It seems that bonjour discovery no longer comes across.

After reverting back to the original firmware, everything came back on.

Your mileage may vary. I'd be interested to know whether you're impacted or not. Please comment either way if you have a few spare minutes, and if you were impacted, were you able to fix it? If so, how?