Starting 1000 processes in parallel

I'm currently debugging a little server I've written, and I wanted to see what happens when thousands of clients connect to it at once. I've also written a little client app that's easy to run; both server and client are written in Python. But how do I run the client a thousand times in parallel while still keeping control, in case the processes go rogue in some way?

I was kinda surprised that this is actually pretty easy to do in bash:

#!/bin/bash

set -e
set -u

if [ -z "${1:-}" ]; then
    echo "Usage: $0 <number of children>"
    exit 1
fi

kill_children () {
    # Send SIGTERM to every direct child of this script.
    pkill -TERM -P "$$"
}

trap kill_children EXIT


# Start clients

echo "Forking..."

for i in $(seq 1 "$1"); do
    python client.py --id "$(printf '%015i' "$i")" >/dev/null &
    sleep 0.1
done

echo "Done forking. Waiting..."

wait
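
As an aside, the printf '%015i' in the loop just zero-pads each client's numeric ID to a fixed width of 15 digits, so the IDs all have the same length and sort cleanly:

```shell
# Zero-pad an ID to 15 digits, as the loop above does for client.py.
printf '%015i\n' 7     # → 000000000000007
printf '%015i\n' 1000  # → 000000000001000
```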

That's all there is to it; bash handles this pretty well. The & starts each client as a background job, the wait ensures that the script doesn't exit while there are still children running, and the kill_children function makes sure that all the children get shot in the head properly once I hit Ctrl+C.
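
The same trap-and-wait pattern can be sketched in isolation. Here plain sleep children stand in for client.py so the snippet is self-contained:

```shell
#!/bin/bash
set -eu

kill_children () {
    # TERM every direct child of this script; ignore pkill's
    # nonzero exit status when no children are left to kill.
    pkill -TERM -P "$$" || true
}
trap kill_children EXIT

# Stand-in for client.py: three children that would outlive us.
for i in $(seq 1 3); do
    sleep 30 &
done

# All three are now running as background jobs of this shell.
n=$(jobs -pr | wc -l | tr -d ' ')
echo "running children: $n"

# Exiting (or Ctrl+C) from here fires the EXIT trap, which reaps
# the sleepers instead of leaving them orphaned.
```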

This is fun!

And for the worst-case scenarios, I guess there's always killall...
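
A sketch of what that last resort looks like, using a throwaway sleep as the target instead of the real clients; killall python would be the blunt version, but it takes down every Python process on the machine, not just these clients:

```shell
# Start a throwaway child to stand in for a runaway client.
sleep 100 &
pid=$!
sleep 0.2  # give the child a moment to exec before matching its cmdline

# killall matches on the process name (e.g. `killall python`), which
# is indiscriminate. pkill -f matches against the full command line
# instead, so only our target gets the TERM:
pkill -f 'sleep 100'

# Exit status 143 = 128 + 15 (SIGTERM): the child really was killed.
wait "$pid" || status=$?
echo "child exit status: $status"
```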