Unix & Linux Stack Exchange is a question and answer site for users of Linux, FreeBSD and other Un*x-like operating systems.


If I want to count the lines of code, the trivial thing is

cat *.c *.h | wc -l

But what if I have several subdirectories?

Off-topic: Why the unnecessary cat? wc -l *.c *.h does the same thing. – Thomas Padron-McCarthy yesterday
@ThomasPadron-McCarthy No it doesn't. You'd need wc -l *.c *.h | tail -n 1 to get similar output. – Gilles 20 hours ago
Note that some (possibly even most) modern shells (Bash v4, Zsh, probably more) provide a recursive-globbing mechanism using **, so you could have used wc -l **/*.{h,c} or something similar. Note that in Bash, at least, this option (called globstar) is off by default. But also note that in this particular case, cloc or SLOCCount is a much better option. (Also, ack may be preferable to find for easily finding/listing source files.) – Kyle Strand 19 hours ago
wc -l counts lines, not lines of code. 7000 blank lines will still show up in wc -l but wouldn't count in a code metric. (comments too usually don't count) – coteyr 9 hours ago
Accepted answer (+18)

The easiest way is to use the tool called cloc. Use it this way:

cloc .

That's it. :-)

-1 because this program doesn't have any way to recognise lines of code in languages outside of its little, boring brain. It knows about Ada and Pascal and C and C++ and Java and JavaScript and "enterprise" type languages, but it refuses to count the SLOC by just file extension, and is thus completely useless for DSLs, or even languages it just happens to not know about. – cat 6 hours ago
@cat Nothing is perfect, and nothing can fulfill all your past and future demands. – Ho1 5 hours ago
Well, the programming language which CLOC refuses to acknowledge does indeed fulfill all my past and future demands :) – cat 5 hours ago

You should probably use SLOCCount or cloc for this; they're designed specifically for counting lines of source code in a project, regardless of directory structure. Either

sloccount .

or

cloc .

will produce a report on all the source code starting from the current directory.

If you want to use find and wc, GNU wc has a nice --files0-from option:

find . -name '*.[ch]' -print0 | wc --files0-from=-

(Thanks to SnakeDoc for the cloc suggestion!)

+1 for sloccount. Interestingly, running sloccount /tmp/stackexchange (created again on May 17 after my most recent reboot) says that the estimated cost to develop the sh, perl, awk, etc files it found is $11,029. and that doesn't include the one-liners that never made it into a script file. – cas yesterday
Estimating cost based on lines of code? What about all the people employed to re-factor spaghetti into something maintainable? – OrangeDog yesterday
@OrangeDog you could always try to account for that in the overhead; see the documentation for an explanation of the calculation (with very old salary data) and the parameters you can tweak. – Stephen Kitt yesterday
cloc is good as well: github.com/AlDanial/cloc – SnakeDoc 21 hours ago
@StephenKitt> still, the main issue is that it's counting backwards. When cleaning up code, you often end up with fewer lines. Sure, you could try to hand-wave an overhead to incur on the rest of the code to account for the removed lines, but I don't see how that's better than just guessing the whole price in the first place. – spectras 11 hours ago

You can use find together with xargs and wc:

find . -type f \( -name '*.h' -o -name '*.c' \) | xargs wc -l
(that assumes file paths don't contain blanks, newlines, single quote, double quote or backslash characters though. It may also output several total lines if several wcs are being invoked.) – Stéphane Chazelas 9 hours ago
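A NUL-separated variant sidesteps both problems from the comment above: -print0 and xargs -0 preserve arbitrary file names, and a single cat feeds wc one stream, so there is exactly one total. This is a sketch using hypothetical demo files created in a temporary directory:

```shell
# Hypothetical demo directory so the sketch is self-contained.
dir=$(mktemp -d)
printf 'a\n' > "$dir/has space.c"
printf 'b\nc\n' > "$dir/plain.h"

# NUL-separated paths survive spaces, quotes and newlines; piping
# through a single cat means wc prints one grand total, not several.
result=$(find "$dir" -type f \( -name '*.h' -o -name '*.c' \) -print0 |
  xargs -0 cat | wc -l)
echo "$result"

rm -rf "$dir"
```

Note that -print0 and -0 are not POSIX, but GNU and BSD find/xargs both support them.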
Perhaps the several wc commands problem can be addressed by piping find to while read FILENAME; do . . .done structure. And inside the while loop use wc -l. The rest is summing up the total lines into a variable and displaying it. – Serg 7 hours ago
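The loop described in the comment above might look like this (a sketch with hypothetical demo files; since it reads one file name per line, it still assumes names contain no newlines):

```shell
# Hypothetical demo files so the sketch runs on its own.
dir=$(mktemp -d)
printf '1\n2\n' > "$dir/a.c"
printf '1\n2\n3\n' > "$dir/b.h"

# Count each file individually and accumulate the sum ourselves,
# so wc never gets a chance to emit per-batch "total" lines.
total=$(
  find "$dir" -type f \( -name '*.h' -o -name '*.c' \) |
  {
    sum=0
    while IFS= read -r f; do
      n=$(wc -l < "$f")
      sum=$((sum + n))
    done
    echo "$sum"
  }
)
echo "Total lines: $total"

rm -rf "$dir"
```

The braces keep the summation in one subshell, so the running total survives until the final echo.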

As the wc command can take multiple arguments, you can pass all the file names to wc using the + form of find's -exec action (standardised by POSIX):

find . -type f -name '*.[ch]' -exec wc -l {} +

Alternately, in bash, using the shell option globstar to traverse the directories recursively:

shopt -s globstar
wc -l **/*.[ch]

Other shells either enable recursive globbing with ** by default (e.g. zsh) or provide an option similar to Bash's globstar.


An easy command:

find . -name '*.[ch]' | xargs wc -l
(that assumes file paths don't contain blanks, newlines, single quote, double quote or backslash characters though. It may also output several total lines if several wcs are being invoked.) – Stéphane Chazelas 9 hours ago

Sample using awk:

find . -name '*.[ch]' -exec wc -l {} \; |
  awk '{SUM+=$1}; END { print "Total number of lines: " SUM }'
Use + in place of \;. – Jonathan Leffler 4 hours ago
@JonathanLeffler Why? – Hastur 29 mins ago
@Hastur: It runs wc -l for groups of files, rather like xargs does, but it handles odd-ball characters (like spaces) in file names without needing either xargs or the (non-standard) -print0 and -0 options to find and xargs respectively. It's a minor optimization. The downside would be that each invocation of wc would output a total line count at the end when given multiple files — the awk script would have to deal with that. So, it's not a slam-dunk, but very often, using + in place of \; with find is a good idea. – Jonathan Leffler 25 mins ago
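Putting those two comments together: -exec … + batches the files, and the awk script skips wc's per-batch total lines before summing. A sketch with hypothetical demo files (note that a source file literally named "total" would confuse the filter, as would paths containing spaces):

```shell
# Hypothetical demo files so the sketch is self-contained.
dir=$(mktemp -d)
printf '1\n2\n' > "$dir/a.c"
printf '1\n2\n3\n' > "$dir/b.h"

# wc emits one "total" line per batch with -exec … +; drop those
# lines in awk, then sum the per-file counts.
result=$(
  find "$dir" -name '*.[ch]' -exec wc -l {} + |
  awk '$2 != "total" { sum += $1 } END { print "Total number of lines: " sum }'
)
echo "$result"

rm -rf "$dir"
```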

find . -name \*.[ch] -print | wc -l should do the trick. There are several possible variations on that as well, such as using -exec instead of piping the output to wc.

But find . -name \*.[ch] -print doesn't print the contents of the files, only the file names. So I count the number of files instead, don't I? Do I need xargs? – Programmer 400 yesterday
@Programmer400 yes, you'd need xargs, and you'd also need to watch for multiple wc invocations if you have lots of files; you'd need to look for all the total lines and sum them. – Stephen Kitt yesterday
If you just want the total line count, you'd need to do find . -name \*.[ch] -print0 | xargs -0 cat | wc -l – fluffy 19 hours ago
Note that this (find . -name \*.[ch] -print | wc -l) counts the number of files (unless a file name contains a newline — but that's very unusual) — it does not count the number of lines in the files. – Jonathan Leffler 22 mins ago
