Free OnSite SEO Offline Tool

Recently we revamped our company’s website. We dropped our Rails-based CMS in favour of the static site generator Jekyll. Jekyll gives us great flexibility and extensibility, while the site loads blazing fast. Not only is the asset management with support for LESS and CoffeeScript a big win, we were also able to implement all missing pieces quickly. E.g. we created new Liquid template tags for email obfuscation, Google Analytics, or YouTube videos.

So the page was done. Now we needed publicity, reputation and SEO – everyone needs SEO. While OffSite SEO with all the back links, social media and other authorities is a very difficult topic, we could do some homework on OnPage SEO. OnPage SEO evaluates a single page and gives you recommendations on how to improve it. Some of the rules are:

  • Have only one h1 heading
  • Write keywords and description in your HTML header
  • All images should have an alt attribute
  • title and h1 header should be different
  • All internal links to anchors should be valid
  • Text to HTML markup ratio should be high
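
Rules like these are mechanical, which is exactly why they lend themselves to automation. As an illustration only (this is not code from the tool described below, and the class and method names are made up), two of the rules can be checked with naive regular expressions over the raw HTML:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class OnPageCheck {

    // counts occurrences of an opening HTML tag, e.g. "h1"
    // (naive: ignores comments and CDATA sections)
    static int countTags(String html, String tag) {
        Matcher m = Pattern.compile("<" + tag + "[\\s>]",
                Pattern.CASE_INSENSITIVE).matcher(html);
        int n = 0;
        while (m.find()) n++;
        return n;
    }

    // counts <img> tags that carry no alt attribute
    static int imagesWithoutAlt(String html) {
        Matcher m = Pattern.compile("<img\\b[^>]*>",
                Pattern.CASE_INSENSITIVE).matcher(html);
        int n = 0;
        while (m.find()) {
            if (!m.group().toLowerCase().contains("alt=")) n++;
        }
        return n;
    }

    public static void main(String[] args) {
        String page = "<html><head><title>t</title></head>"
                + "<body><h1>One</h1><h1>Two</h1>"
                + "<img src=\"a.png\"><img src=\"b.png\" alt=\"b\"></body></html>";
        System.out.println("h1 count: " + countTags(page, "h1"));         // 2 -> violates rule 1
        System.out.println("img without alt: " + imagesWithoutAlt(page)); // 1 -> violates rule 3
    }
}
```

A real checker should of course parse the DOM instead of using regular expressions, which is what a headless browser like PhantomJS gives you for free.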

There are many online SEO services out there where you can check one page for free or pay a lot to get a full report. This was not an option. Back to DIY work. Validating OnPage SEO for one page is fine. However, with more than 30 pages this work gets very time consuming. Further, manual work is fragile, error-prone and regress-prone. Why not let a computer program do it?

Unfortunately, we did not find any appropriate offline tool which crawls your site and gives improvement suggestions for all pages (if you know one, please leave a comment). One afternoon hack later, a little tool called OnSite SEO was born.

OnSite SEO is a tool to crawl, inspect and score your site offline. It inspects each page and extracts key properties such as meta information, headers, resources and text. These properties are scored by a list of different rating functions to give a final score for your page. You can also validate links, check images and do other arbitrary calculations if you know JavaScript. The tool is based on Node.js, PhantomJS, jQuery, and AngularJS.


Since it is Christmas time, we published this tool for free! Download, use, fork and extend the OnSite SEO tool on GitHub or visit the demo page.

We wish you a SEO Christmas and a happy new year 2014!


Massive Graph Insert with OrientDB

In our product REWOO Scope we use a Postgres database as the underlying data storage. While Postgres delivers good overall performance, some of our data structures are more graph-like. A standard RDBMS does not suit them well – not even a recursive SQL query in Postgres: our current implementation needs over 2½ minutes to collect the over 70,000 vertices of one particular sub graph. So we invested some time to evaluate OrientDB, an awesome open source graph-document database.

One of our problems was to load our graph structure with over 1.3M vertices and almost 2.5M edges into OrientDB. Inserting the vertices went well, but inserting the edges was a pain. Further, we got an OConcurrentModificationException here or an OutOfMemoryError (perm space) there. After some implementation iterations we found a good way:

  • Create indexes before inserting vertices or edges
  • Insert the vertices with OIntentMassiveInsert()
  • Insert the edges with disabled Level 1 cache
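
The three steps translate roughly into the following Java-flavoured pseudocode. This is an outline only, not the code from the blog entry: `Node`, `Link`, `nodes`, `links` and `findVertex` are placeholder names, and the calls shown (`createKeyIndex`, `declareIntent`, `getLevel1Cache`) reflect the OrientDB 1.5 Blueprints/document API as far as we know it – check the OrientDB documentation before use.

```java
// Pseudocode sketch – placeholder types, no transaction handling or error checks.
OrientGraph graph = new OrientGraph("local:/tmp/massive");

// 1. create the key index BEFORE inserting, so edge inserts can
//    look up their endpoint vertices quickly
graph.createKeyIndex("externalId", Vertex.class);

// 2. insert all vertices under the massive insert intent
graph.getRawGraph().declareIntent(new OIntentMassiveInsert());
for (Node n : nodes) {
    Vertex v = graph.addVertex(null);
    v.setProperty("externalId", n.id);
}
graph.getRawGraph().declareIntent(null);

// 3. insert the edges with the level 1 cache disabled to keep
//    the heap (and perm space) usage flat
graph.getRawGraph().getLevel1Cache().setEnabled(false);
for (Link l : links) {
    graph.addEdge(null, findVertex(l.from), findVertex(l.to), "linksTo");
}
graph.shutdown();
```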

Our setting

  • Directed graph with 1.3M vertices and 2.5M edges
  • Each vertex has 3 properties (1x String, 2x Long). Edges do not have properties.
  • Test machine: i7 8×3.4GHz, 8GB RAM, SSD, Ubuntu 13.04, 64 Bit
  • OrientDB 1.5.0, Java 1.7.0_09 with -Xmx6g

Now it needs about 100 seconds to insert the 1.3M vertices (0.07ms/vertex) and about 370 seconds to insert the 2.5M edges (0.15ms/edge). The graph database needs about 450 MB of disk space. BTW: the final graph traversal of 70,000 vertices took about 4 seconds, which is a very good result compared to our current implementation.


For sample source code read the full blog entry.


SeaCon 2013 – Software Architecture, Processes and Management

I attended the SeaCon 2013 in Hamburg, Germany, a conference about software architecture, processes and management. It was a very delightful conference with lots of fresh talks, good food and awesome gimmicks. Here are my findings of these two days:

  • Evaluate the needs of outsourcing well! In most cases it makes more sense to develop a mobile app or an e-commerce software on your own than by a 3rd party. Knowledge is power. If it is your team that gets into it then you are in control of it. Your team has a strong relationship to the project and you are able to change everything whenever it is required. If you outsource something, make sure that all the software sources and rights belong to you after the external project has finished. Otherwise you will have a kind of vendor lock and have to pay for subsequent changes (which might take ages to complete, too).
  • It is very popular to develop software using agile methods like Scrum, Kanban or Scrum Ban. Today, every modern development is agile. However, the old waterfall software development architecture still applies to everything but the development itself: the management, budgeting and external agreements do not fit into the agile development framework — yet. There is a need to change that. The management should adapt agile methods for quicker decision cycles. The budgeting should be approved and reconsidered in shorter intervals than one year (AKA beyond budgeting in Germany) and contracts to 3rd parties should be more loosened.
  • Similar to the previous topic but different: Large projects cannot be controlled but guided. It is impossible to accurately estimate all aspects of a larger project like its complete feature set, required time, and total costs. However with shorter control and decision cycles you can guide the project better than in one large phase. A decision cycle should not be longer than three months and decisions should be made by a heterogeneous group of specialists.
  • It is more important to consider the business and investment value a specific task has for your company rather than asking the software development department about the required time to complete the job. If you know your business value well, you can use and organize your resources (man power, time, and money) in a better way.

Why findAll on null returns empty lists in Groovy

Recently while debugging our grails application I saw something like this:

variable.findAll { it.condition }

Since the debugger told me that this variable was null, I happily thought I might have found my little bug and moved on waiting for the big crash and a nice NullPointerException. Instead I got an empty list.

Wait WAT?

That seemed rather bizarre to me, so I checked it inside a groovy shell.

null.findAll { true } ==> []
null.findAll { false } ==> []
null.findAll { it } ==> []
null.findAll {} ==> []
null.findAll() ==> []

Ok, what’s going on here? The first thing to note is that Groovy’s null is an object.

null.getClass().name ==> org.codehaus.groovy.runtime.NullObject

From the Java perspective this is kind of surprising, but since I’m familiar with Ruby it was somewhat expected. So far no magic, but where does that findAll method come from?

==> [equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait, getMetaClass, getProperty, invokeMethod, setMetaClass, setProperty, asBoolean, asType, clone, equals, getNullObject, getProperty, hashCode, invokeMethod, is, iterator, plus, setProperty, toString]

Not there… so Groovy voodoo. Luckily there are some developers in our company who are more experienced with Groovy than me (even one who contributed to it some time ago), so I could ask someone other than Google. We got the source code (Groovy 1.8 in our case) and dug into it. The place where a lot of those magical methods dwell is DefaultGroovyMethods. According to the documentation, the static methods in this class will be available on the class of each method’s first parameter. So here we found the following:

public static Collection findAll(Object self, Closure closure) {
    List answer = new ArrayList();
    Iterator iter = InvokerHelper.asIterator(self);
    return findAll(closure, answer, iter);
}

Which at least explains why that is available for null. Furthermore it shows us that findAll should work on any Object, too. A quick check in the console confirms this.

new Object().findAll { true } ==> [java.lang.Object@79ad86e9]

However, it does not explain how the invocation works and why the result is []. So what’s happening here? The asIterator method simply invokes a method named iterator on self. Groovy’s NullObject defines this particular method in the following way:

public Iterator iterator() {
    return Collections.EMPTY_LIST.iterator();
}

This clearly explains why we get an empty list from our findAll call. In the case of an arbitrary object we again find (after an unsuccessful lookup) the iterator method for objects in the DefaultGroovyMethods class, which simply puts the object into a collection and iterates over it.

public static Iterator iterator(Object o) {
    return DefaultTypeTransformation.asCollection(o).iterator();
}

What is still missing for a full understanding of this phenomenon is how those default Groovy methods get invoked. Covering this would be way beyond the scope of this blog post. If you browse around a little in the source, all this meta stuff can get kind of overwhelming. What we can take away so far (besides getting confused by Groovy’s method invocation mechanisms) is some more awareness of the fact that anything and everything can happen in languages such as Groovy, even when it all starts with an innocent null object…

Flash Container Height in IE10

Recently we had a problem with our Flash client container in Internet Explorer 10. In the CSS, the height and width of the container are set to 100% to consume all available space. Firefox, Chrome, Opera and IE (before version 10) have no problem sizing the Flash container correctly. However, IE 10 renders the Flash container at about 30% of the height while consuming 100% of the width. The div container has 100% height, but the embedded object has not.

Our HTML source looks like:

<div style="height: 100%; width: 100%;">
  <embed id="RewooClient" width="100%" height="100%"
    type="application/x-shockwave-flash" name="Rewoo" src="Rewoo.swf"
    allowfullscreeninteractive="true" allowscriptaccess="sameDomain" >
    <!-- ... -->
  </embed>
</div>

Internet searches did not help. It seems that nobody else has this problem. The pragmatic solution was to use jQuery to set the height manually on page load and on window resize:

(function($) {
  $(document).ready(function() {
    if ($.browser.msie && $.browser.version >= 10) {
      $('#RewooClient').attr('height', $(window).height());
      $(window).resize(function() {
        $('#RewooClient').attr('height', $(window).height());
      });
    }
  });
})(jQuery);
[update 2013-05-13] Starting with jQuery 1.9 the $.browser feature was removed and you need to add the jQuery Migrate plugin to get the code working again (thanks to Ed Graham).[/update]

Maybe this helps you… And if you know a better solution, please drop a line.

Grails Webdav Plugin with Apache Shiro causes Hibernate LazyInitializationException

We use Grails 2.1 with Apache Shiro 1.1.3 as the security layer which handles user authentication, e.g. login via web app or WebDAV. To provide an easy and flexible file service we use the webdav plugin 3.0.1. It hides the complexity of the HTTP file protocol WebDAV and gives a simple interface to work with a virtual network filesystem structure.

When a user tries to log in, our SystemDBRealm authenticates the given user with its password against the database and does other checks as well. These checks include the validation of the user role. The user role is modeled via SystemUserRolRel within our Grails domain model. SystemUserRolRel has two fields, user and role, which link the SystemUser and SystemRole domain models together. Hibernate loads these models lazily through special Hibernate proxies. These proxies are resolved on demand into proper domain model instances, which requires a valid Hibernate session.

To check that the given user has a valid role we execute the following code (simplified to depict the problem):

def user = SystemUser.findByUsername(username)
def standardRole = SystemRole.findByName('Standard')
def hasStandardRole = SystemUserRolRel.findAllByUser(user).find { it.role == standardRole } != null

On a normal login via browser, SystemDBRealm has a Hibernate session and can resolve the it.role Hibernate proxy within the find closure find { it.role == standardRole }.

In case of a login via WebDAV (e.g. through cadaver, a command line WebDAV client), the user is authenticated via Shiro’s BasicHttpAuthenticationFilter. As described in Basic HTTP Auth with Shiro in Grails, this basic authentication filter is configured in Config.groovy like this:

security.shiro {
        authc.required = false
        filter.config = """\
authcBasic = org.apache.shiro.web.filter.authc.BasicHttpAuthenticationFilter
authcBasic.applicationName = Rewoo World

/webdav/** = authcBasic
"""
}

The BasicHttpAuthenticationFilter extracts the user password token and returns it to Shiro. Shiro processes this authentication token in the SystemDBRealm just like on a browser based login. But now the Hibernate session is missing and the find closure find { it.role == standardRole } throws a Hibernate LazyInitializationException: it.role cannot be resolved.

To solve this issue we use the withNewSession closure of an arbitrary domain class to wrap the authentication code defined above (in our case we chose the SystemUser class, but another class should be fine as well):

SystemUser.withNewSession {
  def user = SystemUser.findByUsername(username)
  def standardRole = SystemRole.findByName('Standard')
  def hasStandardRole = SystemUserRolRel.findAllByUser(user).find { it.role == standardRole } != null
}

Now a Hibernate session is bound to the closure and the it.role Hibernate proxy can be resolved again.

PS: A ticket is filed at GPWEBDAV-18 for this issue. Comments are welcome.

Team escape in August 2012

Our team escaped from the daily work into our neighbouring state Rhineland-Palatinate (Rheinland-Pfalz). We ate, drank, laughed, talked with each other about this and that and walked through the beautiful countryside instead of thinking about computers, code and customers.

on the way to the castle of hambach

We walked from Neustadt an der Weinstrasse through the hills to the Castle of Hambach. The castle is known as a very important site for German democracy. In 1832 approx. 30,000 people met here to celebrate the Hambach Festival. During the festival, people demanded freedom of assembly, freedom of the press, freedom of speech, civil rights and national unity of all German states. This festival is known as the root of German democracy.

on the castle of hambach

After our visit to the Castle of Hambach we walked through the countryside along the Southern Wine Route to the town of Edenkoben.
In Edenkoben we let the day fade out over a small pint of beer and a glass of wine.

after the walk in edenkoben

Simple user authentication with Postfix and Dovecot

Every public SMTP mail server requires some sort of user authentication. One way is by using SASL, the Simple Authentication and Security Layer. Reviewers of this technology say: it is neither the one nor the other. Anyone who has configured a Cyrus saslauthd knows why.

This article shows how to configure SMTP user authentication without configuring a saslauthd. A running Dovecot IMAP/POP3 daemon which authenticates users is required.

Further, the article shows a simple solution for configuring the Postfix SMTP server with user authentication via SASL and Dovecot. All configuration snippets refer to the Dovecot and Postfix services delivered with Debian “Squeeze” 6.0. The current versions are Dovecot 1.2.15 and Postfix 2.7.1. In Dovecot 2 everything changed regarding SASL and the authentication mechanisms; please read the Dovecot wiki for more information.

Dovecot configuration

Before configuring Postfix you should check if Dovecot’s configuration is prepared for SASL authentication. Open the Dovecot configuration file, usually located at /etc/dovecot/dovecot.conf, and check if the following lines are present:

auth default {
    user = vmail
    mechanisms = plain login
    socket listen {
        master {
            path = /var/run/dovecot/auth-master
            mode = 0600
            user = dovecot
            group = dovecot
        }
        client {
            path = /var/spool/postfix/private/auth
            mode = 0600
            user = postfix
            group = postfix
        }
    }
}
Please note the client configuration section: if you use a chroot environment, verify that the given UNIX socket file is located within the chroot path of Postfix. Otherwise Postfix won’t be able to access the socket file and can’t authenticate users.

With mechanisms = plain login you configure the offered authentication mechanisms. Note that any mechanism stronger than PLAIN or LOGIN will need clear text passwords in the user database: cryptographic authentication mechanisms like CRAM-MD5 or DIGEST-MD5 won’t work with encrypted passwords in your user database.

Now restart Dovecot service if you changed anything in its configuration file.

Postfix configuration

If you have configured Dovecot for SASL authentication you can enable SASL authentication in Postfix as well. The whole configuration is stored in /etc/postfix/ You shouldn’t need any alias configuration or any other map(ping) file. Open the file /etc/postfix/ and add the following lines:

# authentication via SASL
smtpd_sasl_auth_enable = yes
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
broken_sasl_auth_clients = yes
smtpd_sasl_security_options = noanonymous
smtpd_sasl_tls_security_options = noanonymous

The line smtpd_sasl_type = dovecot activates the Dovecot SASL interface, integrated in Postfix since version 2.3. With the command postconf -a you get all SASL implementations known to Postfix. On Debian Squeeze you should get the following output:

root@host:~# postconf -a
cyrus
dovecot

On Debian Squeeze the authentication type dovecot is used. Please activate dovecot and restart the Postfix mail server. With the configuration smtpd_sasl_path = private/auth you define the UNIX socket file used for communication with Dovecot. This is the same file as configured in the client section of dovecot.conf. Make sure that Postfix has read and write access to this file. The line smtpd_sasl_security_options = noanonymous disables anonymous logins; with this option the SMTP server offers PLAIN and LOGIN as authentication mechanisms. You can disable other mechanisms as well, e.g. noplaintext disables any plain text mechanisms. Unfortunately, this did not work on my machine: Dovecot did not understand this option and authentication was not successful.

All my machines use hashed passwords stored in a database, so Dovecot can only offer plain text mechanisms for authentication. In this case you should add the following option: smtpd_tls_auth_only = yes. It enables authentication only if TLS is used as a secure transport connection.
You should test your configuration by connecting to the SMTP service via telnet and issuing the command EHLO <server>. The returned list of supported functions should not contain the keywords LOGIN or PLAIN, since you are using an insecure connection (plain TCP). If you try the same with an encrypted connection via openssl s_client -connect :25 -starttls smtp and type EHLO <server>, the PLAIN and LOGIN authentication functions should be offered.
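
Such a test session could look like the following transcript (host names, banner and capability list are placeholders and will differ on your system; the important part is the AUTH line appearing only on the TLS connection):

```
$ openssl s_client -connect mail.example.com:25 -starttls smtp
...
EHLO client.example.com
250-mail.example.com
250-SIZE 10240000
250-AUTH PLAIN LOGIN
250 DSN
```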

If you want to enable mail delivery to external systems for authenticated users, you have to add the option permit_sasl_authenticated to the list of smtpd_recipient_restrictions. permit_sasl_authenticated should be added before the first reject_* option. For example:

smtpd_recipient_restrictions =
        permit_mynetworks,
        permit_sasl_authenticated,
        reject_unauth_destination

Again, you have to restart the Postfix mail server to activate the new configuration.


Anyone who has already configured Postfix with Cyrus SASL knows that the configuration shown is just one way to get user authentication done. The given configuration is short and simple. IMHO, reducing the complexity of the SASL configuration makes your mail server more maintainable: you can identify configuration problems faster and solve them quicker. Further, security issues caused by a complex configuration can be avoided.

Continuing links:

Polluted Mocked Test Data from Unit Tests in Integration Tests using Grails 1.3.5

Recently the REWOO source code showed some unpredictable test results in our Grails test environment when running unit and integration tests together. We’re using Grails 1.3.5 and run test-app to execute all our unit and integration tests at once.

Behaviour: after running all unit tests successfully, the first integration test failed. Running the failed test separately succeeded. Ignoring the first integration test did not help because then the next integration test failed. Further, executing the unit tests with test-app -unit and then the integration tests with test-app -integration did not show the error.

Inspection: the debugger showed that some test data of our domain model from the unit tests was still available in the first integration test and caused the error. What? Unit test data pollutes integration tests? Hibernate’s PersistenceContext was empty. The SecondLevelCache was empty. The following integration tests were cleaned up correctly.

After some time this led us to the bug entry GRAILS-7514, “testdata from unit test is available in integration test”. It describes an issue with domain model hierarchies and their mocks in unit tests, which are not cleaned up correctly. So some metaClass assignments are still alive in the following integration tests. The resolution from Graeme Rocher was a kind of “works on my machine” with Grails 2.0. But we have Grails 1.3.5!

Solution: true, we had a bunch of mocked domain models, and our domain models have a class hierarchy up to four levels deep. Digging deeply into the Grails test code we found a snippet in grails-1.3.5/src/test/grails/test/MetaTestHelper.groovy which deals with the MetaClassRegistry of Grails. All we had to do was to clean it in the tearDown() method of our base unit test class to get clean domain classes. The spook disappeared.

Use the following code in your unit tests if you experience the same:

public class RewooUnitTestCase extends GrailsUnitTestCase {

    def tearDown() {
        // ....
        mockCleanup()
        super.tearDown()
    }

    /**
     * De-mock class hierarchies of RewooType and RewooElement
     * This cleanup is required to clean metaClass assignments of mocks with class
     * hierarchies. Otherwise it would pollute following tests. In particular, it
     * would pollute the first integration test after a unit test.
     * This solution was inspired by grails' MetaTestHelper.groovy
     * See also
     */
    private mockCleanup() {
        List classes = [RewooType.class, ...]
        classes += [RewooElement.class, ...]
        classes.each { clazz ->
            GroovySystem.metaClassRegistry.removeMetaClass clazz
        }
    }

    // ...
}



Lampengeist – Showing the CI Build Status with a Programmable Power Connector

Our research and development department uses a little tool called “Lampengeist” to indicate whether a build on the CI was successful or not. All we needed to implement this feature is the Jenkins plugin Hudson Post build task and a small shell script – and of course a programmable power connector. We chose the EnerGenie EG-PMS-LAN by Gembird because it is manageable via network.

Copy script

The script is very simple. It only needs the tools curl and logger. Attention: this script uses advanced scripting features of the Bash shell, so it may not work with other shells, e.g. dash. Take the script below and copy it into the directory /usr/local/bin/ on your build server.


#!/bin/bash
# This script toggles a socket on
# "energenie LAN programmable power connector"

typeset -a connectors=( )
typeset -a passwords=( p4ssw0rd )
typeset toggle="" state_dir="/var/tmp" lock_file="/var/lock/toggle_lamp"
typeset -i i=0 socket=1

typeset -r connectors passwords state_dir

# wait for a stale lock to disappear, give up after ~20 seconds
while [ -e ${lock_file} ]; do
    if [ $i -gt 10 ]; then
        logger -p user.error -t `basename $0` -s -- "Could not execute - lock file detected."
        echo "Please contact administrator if problem exists for longer time." >&2
        exit 3
    fi
    i=`expr $i + 1`
    sleep 2
done

touch $lock_file

################# FUNCTIONS ###################

usage() {
    cat << EOF
You called ${0} with unsupported option(s).
Usage: ${0} [1|2|3|4] <on|off>
Numbers 1 to 4 stand for the socket number. If no socket is given, it will
toggle socket 1 per default.
Please try again.
EOF
}

get_states() {
# get states of sockets
    local srv=$1
    if [ $# -ne 1 ]; then
        return 1
    fi
    states=( $(curl -f http://${srv}/status.html 2>/dev/null | sed -r "s/(.*)((ctl.*)([0|1]),([0|1]),([0|1]),([0|1]))(.*)/\4 \5 \6 \7/") )
}

toggle() {
    local server="" str_state=""
    local -i i=0 state sckt

    if [ $# -ne 3 ]; then
        return 1
    fi

    # sort the three arguments: socket number, on/off and server
    while [ $# -gt 0 ]; do
        case $1 in
            [1-4])
                sckt=$1
                ;;
            on)
                state=1 str_state="on"
                ;;
            off)
                state=0 str_state="off"
                ;;
            *)
                server=$1
                ;;
        esac
        shift
    done

    # poll status and toggle only if needed
    get_states ${server}
    if [ ${state} -ne ${states[$( expr ${sckt} - 1 )]} ]; then
        curl -f -d "ctl${sckt}=${state}" http://${server}/ &>/dev/null
        logger -p -t `basename $0` -- "state of ${server} socket ${sckt} toggled ${str_state} by ${LOGNAME}"
    fi
}

persist() {
# for cron job use only
# saves state of sockets

    local state_file
    local -i i=0 j=0
    while [ ${i} -lt ${#connectors[*]} ]; do
        typeset -r state_file=${state_dir}/${connectors[$i]}

        if (curl -f -d "pw=${passwords[$i]}" http://${connectors[$i]}/login.html 2>/dev/null | grep -q 'status.html'); then
            logger -p -t `basename $0` -- "Save states of ${connectors[$i]} to file ${state_file}"
            # get states
            get_states ${connectors[$i]}
            echo "SavedStates=( ${states[*]} )" > ${state_file}

            # switch off all sockets one by one
            while [ $j -lt ${#states[*]} ]; do
                j=`expr ${j} + 1`
                toggle ${j} off ${connectors[$i]}
                sleep 1
            done

            # logout
            curl -f http://${connectors[$i]}/login.html &>/dev/null
            logger -p -t `basename $0` -- "States saved and all sockets switched off"
        else
            logger -p user.error -t `basename $0` -s -- "Login to ${connectors[$i]} failed."
        fi

        i=`expr ${i} + 1`
        typeset +r state_file
    done
}

recover() {
# recovers states from state file

    local state_file new_state
    local -a SavedStates
    local -i i=0 j=0

    while [ ${i} -lt ${#connectors[*]} ]; do
        typeset -r state_file=${state_dir}/${connectors[$i]}

        if [ -r ${state_file} ]; then

            source ${state_file}
            if (curl -f -d "pw=${passwords[$i]}" http://${connectors[$i]}/login.html 2>/dev/null | grep -q 'status.html'); then

                logger -p -t `basename $0` -- "Restore socket states from ${state_file} to ${connectors[$i]}"
                while [ ${j} -lt ${#SavedStates[*]} ]; do
                    if [ ${SavedStates[$j]} -eq 0 ]; then
                        new_state="off"
                    else
                        new_state="on"
                    fi
                    j=`expr ${j} + 1`
                    toggle ${j} ${new_state} ${connectors[$i]}
                    sleep 1
                done

                # logout
                curl -f http://${connectors[$i]}/login.html &>/dev/null
                logger -p -t `basename $0` -- "Socket states restored and switched on if needed."
            else
                logger -p user.error -t `basename $0` -s -- "Login to ${connectors[$i]} failed."
            fi
            rm ${state_file}
        else
            logger -p user.error -t `basename $0` -s -- "Could not read file ${state_file}"
        fi

        i=`expr ${i} + 1`
        typeset +r state_file
    done
}

common() {
# common mode

    local -i i=0

    while [ ${i} -lt ${#connectors[*]} ]; do
        local state_file=${state_dir}/${connectors[$i]}
        if [ -e ${state_file} ]; then
            # state file exists -> do not toggle live, change in state file only
            if [ ${new_state} = "on" ]; then
                new_state=1
            elif [ ${new_state} = "off" ]; then
                new_state=0
            fi
            socket=`expr ${socket} - 1`

            source $state_file
            if [ ${SavedStates[${socket}]} -ne ${new_state} ]; then
                SavedStates[${socket}]=${new_state}
                echo "SavedStates=( ${SavedStates[*]} )" > ${state_file}
                logger -p -t `basename $0` -- "Toggled state of socket ${socket} to ${new_state} in state file by ${LOGNAME}"
            fi
        else
            if (curl -f -d "pw=${passwords[$i]}" http://${connectors[$i]}/login.html 2>/dev/null | grep -q 'status.html'); then
                toggle ${socket} ${new_state} ${connectors[$i]}
#               curl -f -d "ctl${socket}=${new_state}" http://${connectors[$i]}/ &>/dev/null
                # logout
                curl -f http://${connectors[$i]}/login.html &>/dev/null
            else
                logger -p user.error -t `basename $0` -s -- "Login to ${connectors[$i]} failed."
            fi
        fi
        i=$( expr $i + 1 )
    done
}
############# END FUNCTIONS ##################

typeset -r curl_bin="$(which curl | head -n 1)"

if [ -z "${curl_bin}" ]; then
    echo "Tool curl not found. Please install it."
    rm $lock_file
    exit 1
fi

if [ $# -lt 1 ]; then
    echo "No action provided. What should I do?"
    rm $lock_file
    exit 1
fi

while [ $# -ge 1 ]; do
    case ${1} in
        [1-4])
            socket=${1}
            ;;
        on|off)
            new_state=${1}
            mode="common"
            ;;
        --save)
            mode="persist"
            ;;
        --recover)
            mode="recover"
            ;;
        *)
            usage
            rm $lock_file && exit 2
            ;;
    esac
    shift
done

case ${mode} in
    persist) persist ;;
    recover) recover ;;
    *)       common  ;;
esac

rm $lock_file && exit 0 || exit 1

You can name the file as you want, but the file name of the script will be taken as the “tag” by the logger utility. This means the file name will be posted in the 4th field of the syslog line. The name of the user executing the script will be posted to syslog, too. And finally the action – toggle socket X on|off – will be posted to the syslog file. Messages like these will show up in your system log file:

Mar 19 15:00:13 hostname toggle_lamp: state of socket 1 toggled on by jenkins
Mar 19 15:19:08 hostname toggle_lamp: state of socket 1 toggled off by jenkins

After copying the script onto the build server, you should make it executable (for jenkins): go to /usr/local/bin and execute chmod 755 toggle_lamp. Edit the file, add the hostname or IP of your manageable power connector to the empty connectors array and change p4ssw0rd to its password.

Configure post build task

Now you can configure your project to switch sockets on or off. Go to the Jenkins start page and choose the project/job which should toggle your lamp. Click the link configure and scroll to the end of the configuration page. In the section Post-Build-Actions you will find a new option called Post build task. Activate this option and it will expand. The Hudson post build task plugin will scan the log file generated by Jenkins.

In our environment we scan the build log for ABORTED, BUILD SUCCESSFUL and BUILD FAILED. If the build finished successfully we call /usr/local/bin/toggle_lamp 1 off to switch off the lamp. And we call /usr/local/bin/toggle_lamp on if the build failed or was aborted. Scroll to the end of the page and click the button Save when you have defined your tasks.

Power-saving cron jobs

The script posted above implements an advanced feature: you can create a cron job which scans the state of all sockets of your power connector, persists the state to a file and then switches off all sockets. In the morning a second cron job reads the state file and restores the state of the sockets. The user calling the script via cron needs write permissions (create and delete files) on the directory /var/tmp/.


# save state of manageable power connector and switch off all sockets
30 20 * * 1-5	test -x /usr/local/bin/toggle_lamp && /usr/local/bin/toggle_lamp --save
# restore state of manageable power connector
30 06 * * 1-5	test -x /usr/local/bin/toggle_lamp && /usr/local/bin/toggle_lamp --recover

The cron jobs above save the states to file at 20:30 every day from Monday to Friday, so on Saturday and Sunday all sockets stay switched off. The saved states are restored to the power connector sockets at 06:30 in the morning.