Unit tests in bash

Yesterday I ported the tear-down part of MicroFW from Python back to Bash. (It was just unhappy in Python.) While doing so, I tested it by uploading it to my server, running it, and seeing what happened. This is obviously unsatisfactory, but since the script runs a bunch of iptables calls and the like, how am I supposed to test it? It integrates pretty deeply into the system, no way of mocking that, right?

Well, yes way: You just need to shamelessly abuse the function keyword…

Mocking system commands

As you may or may not know, bash allows the definition of functions:

function something() {
    echo "I do stuff!"
}

These functions can then be invoked like any other command, at least from inside the script:

function something() {
    echo "I do stuff!"
}

something
# prints I do stuff!

So, say I have a script that heavily invokes iptables, like so:

iptables -t filter -D FORWARD -j MFWFORWARD

If I now were to define a function named iptables, that function would get called instead of the actual iptables command, right?

function iptables () {
    echo "iptables" "$@" >> "$TEMPDIR/out.txt"
}

Right! So now I can just run my original script by sourcing it, and whenever it tries to run iptables, it will instead just append the invocation to a logfile but not actually do anything. I can then grep through that logfile to see if the invocation I’m looking for is actually there. Pretty nice.
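
To make those greps read like assertions, a tiny helper goes a long way. This is just a sketch of how I'd do it, not code from MicroFW: assert_ran is a made-up name, and it assumes TEMPDIR points to a temporary directory (e.g. from mktemp -d):

TEMPDIR="$(mktemp -d)"    # the mocked iptables above logs into this directory

# hypothetical helper: succeed only if the given invocation shows up in the log
function assert_ran() {
    grep -qF -- "$*" "$TEMPDIR/out.txt"
}

# after running the code under test:
assert_ran iptables -t filter -D FORWARD -j MFWFORWARD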

Did you know source can take args?

Alright, so after defining the mock functions, let's source the original script:

source src/microfw.sh
# prints "Need a command, see --help" and exits

Ahh damn, it checks $1 for a command that tells it what to do. How do we pass one? We can't just assign a value to $1 like this:

1=tear_down
source src/microfw.sh

This just results in 1=tear_down: command not found. But it turns out we can pass an argument to source itself:

source src/microfw.sh tear_down

This will work! However, I didn't end up using it, because I needed a RUNNING_IN_CI variable eventually anyway, so I just added code that lets me define the command via a variable directly.
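
Just to illustrate the variable-based approach (the names here are made up for illustration; MicroFW's actual variables may differ), the script can fall back to a variable when no positional argument is given, and the test sets it before sourcing:

# in the script: take the command from $1, falling back to a variable
cmd="${1:-${MICROFW_COMMAND:-}}"

# in the test:
RUNNING_IN_CI=true
MICROFW_COMMAND=tear_down
source src/microfw.sh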

I can haz test runner?

Now, for the whole thing to feel really unittesty, we need a test runner that tells us which tests have succeeded and which ones have failed. How can we achieve that? In JavaScript, I'd probably do something along the lines of:

function test_something() {
    // Whatever unittest framework syntax necessary to assert something
}

function run_test(test_fn) {
    if (test_fn()) {
        console.log(test_fn.name + " ... ok");
    } else {
        console.log(test_fn.name + " ... failed");
    }
}

run_test(test_something);

This is hella crude, but you get the idea: Define the test as a function, pass it to a test runner, and the test runner will then take care of telling you that it ran the test and what the result was. But can we actually pass a function to another function, to have it called by that function instead of calling it ourselves? Turns out yes, we can:

function test_something() {
    echo "Running a test"
}

function run_test() {
    FUNC="$1"
    $FUNC
}

run_test test_something
# prints "Running a test"

See how I use a string variable in place of a command there, and it just works? Coming from a higher-level language this feels really, really wrong - I'm passing a string, for crying out loud, how can I just treat it like a function!? - but that's how it is. So, adding just a tiny bit more ceremony, like an if statement that checks the result and some result printing, we end up with this:

function test_something() {
    echo "Running a test"
}

function run_test() {
    FUNC="$1"
    echo -n "$FUNC... "
    if $FUNC; then
        echo "ok"
    else
        echo "failed"
        exit 1
    fi
}

run_test test_something
# prints "test_something... ok"

And nicely enough, since set -e normally makes a function bail out as soon as a command fails, I can just add a bunch of checks to my test functions and it'll just work:

function test_no_state() {
    tear_down
    [ ! -e "$TEMPDIR/out.txt" ]
}

This function runs the code and checks that no out.txt file has been created by it, effectively testing that the code didn't do anything. I don't need to add any ceremony to test_no_state to check the result of that check, because if it fails, the function returns a non-zero status and run_test will declare the test failed!
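
One caveat worth knowing about: when a function runs as the condition of an if (which is exactly what run_test does with if $FUNC), bash ignores set -e inside it, so strictly speaking only the exit status of the last check decides the result. If you want every failing check in a longer test to abort it immediately, one option is to run the test body in a subshell with errexit switched back on. A rough sketch of that variant (mine, not MicroFW's):

function run_test() {
    FUNC="$1"
    echo -n "$FUNC... "
    set +e                  # don't let a failing test kill the parent shell here
    ( set -e; "$FUNC" )     # subshell: any failing check aborts the test right away
    status=$?
    set -e                  # restore errexit (assuming the test script uses set -e)
    if [ "$status" -eq 0 ]; then
        echo "ok"
    else
        echo "failed"
        exit 1
    fi
}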

Conclusion

So, unit tests in bash are possible. I'm quite flabbergasted, but oh well :D