Unix & Linux Stack Exchange is a question and answer site for users of Linux, FreeBSD and other Un*x-like operating systems.
At work, I write bash scripts frequently. My supervisor has suggested that the entire script be broken into functions, similar to the following example:

#!/bin/bash

# Configure variables
declare_variables() {
    noun=geese
    count=three
}

# Announce something
i_am_foo() {
    echo "I am foo"
    sleep 0.5
    echo "hear me roar!"
}

# Tell a joke
walk_into_bar() {
    echo "So these ${count} ${noun} walk into a bar..."
}

# Emulate a pendulum clock for a bit
do_baz() {
    for i in {1..6}; do
        expr $i % 2 >/dev/null && echo "tick" || echo "tock"
        sleep 1
    done
}

# Establish run order
main() {
    declare_variables
    i_am_foo
    walk_into_bar
    do_baz
}

main

Is there any reason to do this other than "readability", which I think could be equally well established with a few more comments and some line spacing?

Does it make the script run more efficiently (I would actually expect the opposite, if anything), or does it make it easier to modify the code beyond the aforementioned readability potential? Or is it really just a stylistic preference?

Please note that although the script doesn't demonstrate it well, the "run order" of the functions in our actual scripts tends to be very linear -- walk_into_bar depends on stuff that i_am_foo has done, and do_baz acts on stuff set up by walk_into_bar -- so being able to arbitrarily swap the run order isn't something we would generally be doing. For example, you wouldn't suddenly want to put declare_variables after walk_into_bar, that would break things.

An example of how I would write the above script would be:

#!/bin/bash

# Configure variables
noun=geese
count=three

# Announce something
echo "I am foo"
sleep 0.5
echo "hear me roar!"

# Tell a joke
echo "So these ${count} ${noun} walk into a bar..."

# Emulate a pendulum clock for a bit
for i in {1..6}; do
    expr $i % 2 >/dev/null && echo "tick" || echo "tock"
    sleep 1
done
Three advantages of functions are: (1) they are easier to test and verify correctness, (2) functions can be easily reused (sourced) in future scripts, (3) your boss likes them. Never underestimate the importance of number 3. – John1024 yesterday

I like your boss. In my scripts I also put main() at the top and add main "$@" at the bottom to call it. That lets you see the high-level script logic first thing when you open it. – John Kugelman yesterday

I disagree with the notion that readability can "be equally well established with a few more comments and some line spacing." Except maybe for fiction, I wouldn't want to deal with a book that doesn't have a table of contents and descriptive names for each chapter and section. In programming languages, that's the kind of readability that functions can provide, and comments can't. – Rhymoid yesterday

Note that variables declared in functions should be declared local – this provides variable scope, which is incredibly important in any non-trivial script. – Boris the Spider yesterday

I disagree with your boss. If you have to break down your script into functions, you probably shouldn't write a shell script in the first place. Write a program instead. – el.pescado yesterday

Readability is one thing, but there is more to modularisation than that. (Semi-modularisation is perhaps the more accurate term for functions.)

In functions you can keep some variables local, which increases reliability and decreases the chance of state being accidentally clobbered elsewhere.

Another pro of functions is re-usability. Once a function is coded, it can be applied multiple times in the script. You can also port it to another script.

Your code may be linear now, but in the future you may enter the realm of multi-processing in the bash world (bash offers processes, not threads). Once you learn to do things in functions, you will be well equipped for the step into parallelism.
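For instance, here is a minimal sketch (the task names and sleeps are invented) of how functions become natural units of parallel work with & and wait:

```shell
#!/bin/bash

# Two independent tasks, each wrapped in a function.
fetch_logs()   { sleep 1; echo "logs fetched"; }
build_report() { sleep 1; echo "report built"; }

# Run both in the background and wait for both to finish:
# total wall time is roughly 1 second instead of 2.
fetch_logs &
build_report &
wait
echo "all tasks done"
```

Without the functions, there would be no convenient unit to send to the background.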

Very good answer, although it would be much better if it were broken into functions. – Pierre Arlaud 23 hours ago

Maybe add that functions allow you to import that script into another script (by using source or . scriptname.sh) and use those functions as if they were in your new script. – SnakeDoc 13 hours ago

That's already covered in another answer. – tomas 13 hours ago

I recognize that, but your answer is the top-voted one, and is missing this information. Thought it might make yours more complete. – SnakeDoc 13 hours ago

I faced a case today where I had to redirect some of the output of a script to a file (to send it via email) instead of echoing. I simply had to do myFunction >> myFile to redirect the output of the desired functions. Pretty convenient. Might be relevant. – Etsitpab Nioliv 11 hours ago
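To illustrate that comment with a minimal sketch (the function and file are invented): the redirection is applied once, at the call site, and covers everything the function prints.

```shell
#!/bin/bash

# All of this function's output can be redirected where it is called.
report() {
    echo "Subject: nightly report"
    echo "Everything is fine."
}

outfile=$(mktemp)      # stand-in for the file that gets emailed
report >> "$outfile"   # one redirection, instead of one per echo inside
```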

In my comment, I mentioned three advantages of functions:

  1. They are easier to test and verify correctness.

  2. Functions can be easily reused (sourced) in future scripts.

  3. Your boss likes them.

And, never underestimate the importance of number 3.

I would like to address one more issue:

... so being able to arbitrarily swap the run order isn't something we would generally be doing. For example, you wouldn't suddenly want to put declare_variables after walk_into_bar, that would break things.

To get the benefit of breaking code into functions, one should try to make the functions as independent as possible. If walk_into_bar requires a variable that is not used elsewhere, then that variable should be defined in and made local to walk_into_bar. The process of separating the code into functions and minimizing their inter-dependencies should make the code clearer and simpler.

Ideally, functions should be easy to test individually. If, because of interactions, they are not easy to test, then that is a sign that they might benefit from refactoring.
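For example (the function and test values are invented), a function with no outside dependencies can be exercised directly:

```shell
#!/bin/bash

# A small, self-contained function is easy to test in isolation.
is_even() {
    (( $1 % 2 == 0 ))
}

# Ad-hoc checks: call the function with known inputs and
# report any result that does not match expectations.
is_even 4 || echo "FAIL: 4 should be even"
is_even 7 && echo "FAIL: 7 should be odd"
echo "checks finished"
```

If is_even instead depended on globals set up by three other functions, this kind of direct check would be impossible, which is exactly the refactoring signal described above.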

I'd argue that it's sometimes sensible to model and enforce those dependencies, vs refactoring to avoid them (since if there are enough of them, and they're sufficiently hairy, that can just lead to a case where things are no longer modularized into functions at all). A very complicated use case once inspired a framework to do just that. – Charles Duffy yesterday

What needs to be divided into functions should be, but the example takes it too far. I think the only one that really bugs me is the variable declaration function. Global variables, especially static ones, should be defined globally in a commented section dedicated to that purpose. Dynamic variables should be local to the functions that use and modify them. – Xalorous 23 hours ago

You break the code into functions for the same reason you would do that for C/C++, python, perl, ruby or whatever programming language code. The deeper reason is abstraction - you encapsulate lower level tasks into higher level primitives (functions) so that you don't need to bother about how things are done. At the same time, the code becomes more readable (and maintainable), and the program logic becomes more clear.

However, looking at your code, I find it quite odd to have a function that only declares variables; that really makes me raise an eyebrow.


While I totally agree with the reusability, readability, and delicately kissing the boss's butt, there is one other advantage of functions in bash: variable scope. As the LDP shows:

#!/bin/bash
# ex62.sh: Global and local variables inside a function.

func ()
{
  local loc_var=23       # Declared as local variable.
                         # Uses the 'local' builtin.
  echo
  echo "\"loc_var\" in function = $loc_var"
  global_var=999         # Not declared as local.
                         # Therefore, defaults to global. 
  echo "\"global_var\" in function = $global_var"
}  

func

# Now, to see if local variable "loc_var" exists outside the function.

echo
echo "\"loc_var\" outside function = $loc_var"
                                      # $loc_var outside function = 
                                      # No, $loc_var not visible globally.
echo "\"global_var\" outside function = $global_var"
                                      # $global_var outside function = 999
                                      # $global_var is visible globally.
echo                      

exit 0
#  In contrast to C, a Bash variable declared inside a function
#+ is local ONLY if declared as such.

I don't see this very often in real-world shell scripts, but it seems like a good idea for more complex scripts. Reducing variable scope helps avoid bugs where you clobber a variable that another part of the code expects.

Reusability often means creating a common library of functions and sourcing that library into all of your scripts. This won't help the scripts run faster, but it will help you write them faster.
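A minimal sketch of such a library (the file name and helper functions are hypothetical):

```shell
#!/bin/bash
# common.sh: hypothetical shared helpers, sourced by many scripts.

log() { printf '%s %s\n' "$(date +%T)" "$*"; }
die() { log "ERROR: $*"; exit 1; }

# A consuming script would then start with something like:
#   . /path/to/common.sh
#   log "backup starting"
#   [ -d "$backup_dir" ] || die "backup directory missing"
```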


A completely different reason from those already given in other answers: this technique, in which the sole top-level statement other than function definitions is a call to main, is sometimes used to make sure the script does not accidentally do anything nasty if it is truncated. A script may be truncated if it is piped from process A to process B (the shell) and process A terminates, for whatever reason, before it has finished writing the whole script. This is especially likely if process A fetches the script from a remote resource. While that is not a good idea for security reasons, it is something that is done, and some scripts have been modified to anticipate the problem.
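The pattern in code (a toy sketch; the function is invented): bash reads and executes a script statement by statement, so a truncated copy would at worst define incomplete functions or raise a syntax error. Nothing actually runs until the final line.

```shell
#!/bin/bash
# If this file is cut off anywhere above the last line, the shell
# either reports a syntax error or defines nothing dangerous;
# no work happens until the final call to main.

remove_old_backups() {
    echo "pretending to remove old backups"
}

main() {
    remove_old_backups
}

main "$@"
```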

Interesting! But I find it troubling that one has to take care of those things in each program. On the other hand, this exact main() pattern is common in Python, where one uses if __name__ == '__main__': main() at the end of the file. – Martin Ueding 20 hours ago

A process requires a sequence. Most tasks are sequential. It makes no sense to mess with the order.

But the super big thing about programming - which includes scripting - is testing. Testing, testing, testing. What test scripts do you currently have to validate the correctness of your scripts?

Your boss is trying to guide you from being a script kiddie to being a programmer. This is a good direction to go in. People who come after you will like you.

BUT. Always remember your process-oriented roots. If it makes sense to have the functions ordered in the sequence in which they are typically executed, then do that, at least as a first pass.

Later, you will come to see that some of your functions are handling input, others output, others processing, others modelling data, and others manipulating the data, so it may be smart to group similar methods, perhaps even moving them off into separate files.

Later still, you may come to realize you've now written libraries of little helper functions that you use in many of your scripts.


I've started using this same style of bash programming after reading Kfir Lavi's blog post "Defensive Bash Programming". He gives quite a few good reasons, but personally I find these the most important:

  • procedures become descriptive: it's much easier to figure out what a particular part of the code is supposed to do. Instead of a wall of code, you see "Oh, the find_log_errors function reads that log file for errors", instead of dealing with a whole lot of awk/grep/sed lines that use god knows what type of regex.

  • you can debug functions by enclosing them in set -x and set +x. Once you know the rest of the code works alright, you can use this trick to focus on debugging only that specific function.

  • printing usage with cat <<- EOF . . . EOF. I've used it quite a few times to make my code much more professional. In addition, a parse_args() function using getopts is quite convenient. Again, this helps with readability, instead of shoving everything into the script as a giant wall of text. It's also convenient to reuse these.
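A condensed sketch of those last two idioms together (the options and messages are invented for the example):

```shell
#!/bin/bash

# Print usage from a here-document: the help text reads as one block.
usage() {
    cat <<EOF
Usage: ${0##*/} [-v] [-f FILE]
  -v       verbose output
  -f FILE  read input from FILE
EOF
}

# Gather all option handling into one function using getopts.
parse_args() {
    while getopts "vf:" opt; do
        case $opt in
            v) verbose=1 ;;
            f) file=$OPTARG ;;
            *) usage; exit 1 ;;
        esac
    done
}

parse_args "$@"
```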

And obviously, this is much more readable for someone who knows C or Java, or Vala, but has limited bash experience.


Aside from the reasons given in other answers, a cynical one: If the coders are billing for lines of code, those functions add lines, and therefore make a program more "valuable". :(

