At work, I write bash scripts frequently. My supervisor has suggested that the entire script be broken into functions, similar to the following example:

#!/bin/bash

# Configure variables
declare_variables() {
    noun=geese
    count=three
}

# Announce something
i_am_foo() {
    echo "I am foo"
    sleep 0.5
    echo "hear me roar!"
}

# Tell a joke
walk_into_bar() {
    echo "So these ${count} ${noun} walk into a bar..."
}

# Emulate a pendulum clock for a bit
do_baz() {
    for i in {1..6}; do
        expr $i % 2 >/dev/null && echo "tick" || echo "tock"
        sleep 1
    done
}

# Establish run order
main() {
    declare_variables
    i_am_foo
    walk_into_bar
    do_baz
}

main

Is there any reason to do this other than "readability", which I think could be equally well established with a few more comments and some line spacing?

Does it make the script run more efficiently (I would actually expect the opposite, if anything), or does it make it easier to modify the code beyond the aforementioned readability potential? Or is it really just a stylistic preference?

Please note that although the script doesn't demonstrate it well, the "run order" of the functions in our actual scripts tends to be very linear -- walk_into_bar depends on stuff that i_am_foo has done, and do_baz acts on stuff set up by walk_into_bar -- so being able to arbitrarily swap the run order isn't something we would generally be doing. For example, you wouldn't suddenly want to put declare_variables after walk_into_bar; that would break things.

An example of how I would write the above script would be:

#!/bin/bash

# Configure variables
noun=geese
count=three

# Announce something
echo "I am foo"
sleep 0.5
echo "hear me roar!"

# Tell a joke
echo "So these ${count} ${noun} walk into a bar..."

# Emulate a pendulum clock for a bit
for i in {1..6}; do
    expr $i % 2 >/dev/null && echo "tick" || echo "tock"
    sleep 1
done
Three advantages of functions are: (1) they are easier to test and verify correctness, (2) functions can be easily reused (sourced) in future scripts, (3) your boss likes them. Never underestimate the importance of number 3. – John1024 7 hours ago
I like your boss. In my scripts I also put main() at the top and add main "$@" at the bottom to call it. That lets you see the high level script logic first thing when you open it. – John Kugelman 3 hours ago

Readability is one thing, but there is more to modularisation than that. (For functions, "semi-modularisation" is perhaps the more accurate term.)

Inside functions you can keep some variables local, which increases reliability and decreases the chance of things getting messed up.
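For instance, here is a minimal sketch of what `local` buys you (the function and variable names are made up for illustration):

```shell
#!/bin/bash
count=three              # a global, as in the question's script

tick_off() {
    local count=0        # "local" makes this invisible outside the function
    count=$((count + 1))
    echo "inside: ${count}"
}

tick_off                 # prints "inside: 1"
echo "outside: ${count}" # the global is untouched: prints "outside: three"
```

Without `local`, the assignment inside `tick_off` would silently clobber the global `count` that `walk_into_bar` depends on.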

Another pro of functions is reusability. Once a function is coded, it can be called multiple times in the same script, and you can also port it to other scripts.
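A sketch of that reuse: put the function in a library file and pull it into any future script with `source` (or `.`). The library path and file below are invented; the example writes the library itself just so it is self-contained:

```shell
#!/bin/bash
# Hypothetical shared library; a real one would live somewhere permanent.
cat > /tmp/joke_lib.sh <<'EOF'
walk_into_bar() {
    echo "So these ${1} ${2} walk into a bar..."
}
EOF

source /tmp/joke_lib.sh      # any script can now reuse the function
walk_into_bar three geese
walk_into_bar two penguins
```

Note the function takes its data as arguments instead of reading globals, which is what makes it portable between scripts.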

Your code may be linear now, but in the future you may enter the realm of multi-processing (in the Bash world, parallelism comes from background jobs rather than threads). Once you learn to do things in functions, you will be well equipped for the step into parallelism.
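For example, once the work is wrapped in a function, running two copies concurrently is just `&` plus a `wait`; `slow_task` below is an invented stand-in for real work:

```shell
#!/bin/bash
slow_task() {
    sleep "$1"           # stand-in for real work taking $1 seconds
    echo "task $2 done"
}

slow_task 1 A &          # launch both tasks as background jobs
slow_task 1 B &
wait                     # block until every background job finishes
echo "all done"
```

The two one-second tasks overlap, so the whole script finishes in about one second rather than two. Doing the same with inline top-to-bottom code would mean duplicating the block for each job.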


In my comment, I mentioned three advantages of functions:

  1. They are easier to test and verify correctness.

  2. Functions can be easily reused (sourced) in future scripts.

  3. Your boss likes them.

And, never underestimate the importance of number 3.

I would like to address one more issue:

... so being able to arbitrarily swap the run order isn't something we would generally be doing. For example, you wouldn't suddenly want to put declare_variables after walk_into_bar, that would break things.

To get the benefit of breaking code into functions, one should try to make the functions as independent as possible. If walk_into_bar requires a variable that is not used elsewhere, then that variable should be defined in and made local to walk_into_bar. The process of separating the code into functions and minimizing their inter-dependencies should make the code clearer and simpler.
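One possible sketch of such a decoupled walk_into_bar: pass the data in as arguments and declare anything internal with `local`, so the function no longer cares what ran before it. (The default values here are invented for illustration.)

```shell
#!/bin/bash
walk_into_bar() {
    local noun=${1:-geese}    # hypothetical defaults; callers may override
    local count=${2:-three}
    echo "So these ${count} ${noun} walk into a bar..."
}

walk_into_bar                 # uses the defaults
walk_into_bar penguins two    # the caller, not global state, supplies the data
```

With this shape, moving declare_variables around (or deleting it entirely) can no longer break the joke.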

Ideally, functions should be easy to test individually. If, because of interactions, they are not easy to test, then that is a sign that they might benefit from refactoring.
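One common way to make that individual testing possible is to guard the script's entry point so that sourcing it defines the functions without running main. A sketch, with a hypothetical file written inline so the example is self-contained:

```shell
#!/bin/bash
# Hypothetical script under test, written to /tmp for the demo.
cat > /tmp/bar.sh <<'EOF'
walk_into_bar() {
    echo "So these ${1} ${2} walk into a bar..."
}
# Only run the "main" part when executed directly, not when sourced:
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    walk_into_bar three geese
fi
EOF

source /tmp/bar.sh                    # defines the function, runs nothing
result=$(walk_into_bar two cats)      # exercise one function in isolation
[ "$result" = "So these two cats walk into a bar..." ] && echo "test passed"
```

This is the shell analogue of Python's `if __name__ == "__main__":` guard, and it is what lets a test harness source the script and poke at each function separately.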

I'd argue that it's sometimes sensible to model and enforce those dependencies, vs refactoring to avoid them (since if there are enough of them, and they're sufficiently hairy, that can just lead to a case where things are no longer modularized into functions at all). A very complicated use case once inspired a framework to do just that. – Charles Duffy 2 hours ago
