Unix & Linux Stack Exchange is a question and answer site for users of Linux, FreeBSD and other Un*x-like operating systems. It's 100% free, no registration required.

Yesterday I read this SO comment which says that in the shell (at least bash) >&- "has the same result as" >/dev/null.

That comment actually refers to the ABS guide as the source of its information. But that source says that the >&- syntax "closes file descriptors".

It is not clear to me whether the two actions of closing a file descriptor and redirecting it to the null device are totally equivalent. So my question is: are they?

On the surface of it, it seems that closing a descriptor is like closing a door, but redirecting it to a null device is like opening a door to limbo! The two don't seem exactly the same to me: if I see a closed door, I won't try to throw anything out of it, but if I see an open door, I will assume I can.

In other words, I have always wondered if >/dev/null means that cat mybigfile >/dev/null would actually process every byte of the file and write it to /dev/null which forgets it. On the other hand, if the shell encounters a closed file descriptor I tend to think (but am not sure) that it will simply not write anything, though the question remains whether cat will still read every byte.

This comment says >&- and >/dev/null "should" be the same, but that is not a very resounding answer to me. I'd like a more authoritative answer, with some reference to a standard or to source code.
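For what it's worth, the difference is already visible from an interactive shell. A quick sketch in bash (the exact error text may vary between shells):

```shell
# Redirecting to /dev/null: the write succeeds and the data is discarded.
echo hello >/dev/null          # exit status 0

# Closing the descriptor: the write fails with EBADF.
echo hello >&-                 # bash: echo: write error: Bad file descriptor
```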


2 Answers

Accepted answer (54 votes)

No, you certainly don't want to close file descriptors 0, 1 and 2.

If you do so, the first time the application opens a file, it will become stdin/stdout/stderr...
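That's because open() always returns the lowest-numbered free descriptor. On Linux you can make this visible through /proc; a rough sketch (the exact listing depends on which descriptors ls itself happens to hold open):

```shell
# Normally ls sees fds 0, 1 and 2, plus the descriptor it opened for the
# /proc/self/fd directory itself (usually 3):
bash -c 'ls /proc/self/fd'

# With fd 2 closed before ls starts, the directory lands on the freed
# descriptor 2 instead, so no fd 3 shows up:
bash -c 'exec 2>&-; ls /proc/self/fd'
```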

For instance, if you do:

echo text | tee file >&-

When tee (at least some implementations, like busybox's) opens the file for writing, it will be opened on file descriptor 1 (stdout). So tee will write text twice into file:

$ echo text | strace tee file >&-
[...]
open("file", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 1
read(0, "text\n", 8193)                 = 5
write(1, "text\n", 5)                   = 5
write(1, "text\n", 5)                   = 5
read(0, "", 8193)                       = 0
exit_group(0)                           = ?

That has been known to cause security vulnerabilities. For instance:

chsh 2>&-

chsh (a setuid application) may then end up writing its error messages into /etc/passwd.

Some tools and even some libraries try to guard against that. For instance, GNU tee will move the file descriptor to one above 2 if a file it opens for writing would otherwise be assigned descriptor 0, 1 or 2, while busybox tee won't.

Most tools, if they can't write to stdout (because, for instance, it's not open), will report an error message on stderr, in the language of the user (which means extra processing to open and parse localisation files...). So it will be significantly less efficient, and may well cause the program to fail.

In any case, it won't be more efficient. The program will still do a write() system call. It can only be more efficient if the program gives up writing to stdout/stderr after the first failing write() system call, but programs generally don't do that. They generally either exit with an error or keep on trying.
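As a small illustration of that failure mode, a sketch with GNU cat (the exact message and exit status may differ with other implementations):

```shell
# cat reads the file just fine, but every write(1) fails with EBADF,
# so it complains on stderr and exits non-zero:
cat /etc/passwd >&-
# cat: write error: Bad file descriptor   (wording varies)
```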

I think this answer would be even better if the final paragraph was at the top (since it is what most directly answers the OP's question), and it then went on to discuss why it's a bad idea even if it did mostly work. But I'll take it, have an upvote. ;) –  Michael Kjörling 2 days ago
Does it even work? Your tee command just gives an error on my bash 4.3.25 (tee: standard output: Bad file descriptor and tee: write error: Bad file descriptor) and so does any other attempt at using >&- like echo foo &>- or echo foo 1>&-. It looks like bash won't let me close stdout. –  terdon 2 days ago
@terdon, that error is by tee and you're using GNU tee. Try with busybox tee as I said. –  Stéphane Chazelas 2 days ago
@jamadagni If the link provided by Stéphane doesn't answer the question, I would say that sounds like the beginnings of a separate question as it isn't directly related to the relative efficiency of the two methods. –  Michael Kjörling 2 days ago
I appreciate that Stéphane starts with this very important security warning, as it would be less visible if the last paragraph were on top. +1 from me. –  Olivier Dulac yesterday

IOW I have always wondered if >/dev/null means that cat mybigfile >/dev/null would actually process every byte of the file and write it to /dev/null which forgets it.

It's not a full answer to your question, but yes, the above is how it works.

cat reads the named file(s), or standard input if no files are named, and writes their contents to its standard output until it encounters EOF on the last file named (standard input included). That is its job.

By adding >/dev/null you are redirecting standard output to /dev/null. That is a special file (a device node) which throws away anything written to it (and immediately returns EOF on read). Note that I/O redirection is a feature provided by the shell, not by each individual application, and that there is nothing magical about the name /dev/null, only about what happens to exist there on most Unix-like systems.
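Both halves of that description can be checked directly; a sketch:

```shell
# Writes to /dev/null complete successfully and are simply discarded;
# dd's statistics on stderr confirm that all the data was written:
dd if=/dev/zero of=/dev/null bs=1M count=10
# -> "10+0 records out"

# Reads from /dev/null hit EOF immediately:
wc -c </dev/null
# -> 0
```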

It's also important to note that the specific mechanics of device nodes vary from operating system to operating system, but cat (which, on a GNU system, means coreutils) is cross-platform (the same source code needs to run at least on Linux and Hurd) and hence cannot depend on the internals of any specific kernel. Additionally, it still works if you create an alias of /dev/null (on Linux, a device node with the same major/minor device numbers) under another name. And there's always the case of writing somewhere else that behaves effectively the same way (say, /dev/zero).

It follows that cat is unaware of the special properties of /dev/null, and indeed is likely unaware of the redirection in the first place; it still needs to perform exactly the same work: it reads the named files and writes their contents to its standard output. That the standard output of cat happens to go into a void is not something cat itself is concerned with.
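One way to convince yourself that cat really consumes every byte even when the output goes nowhere: feed it through a pipe whose writer could not finish unless the reader kept draining it. A sketch (roughly 7 MB of data against a pipe buffer of about 64 KB, so seq would block forever if cat stopped reading):

```shell
# This only terminates because cat keeps reading from the pipe and
# writing every byte (into the void):
seq 1000000 | cat >/dev/null && echo done
# -> done
```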

To extend your answer: Yes, cat mybigfile > /dev/null will cause cat to read every byte of bigfile into memory. And, for every n bytes, it will call write(1, buffer, n). Unbeknownst to the cat program, the write will do absolutely nothing (except maybe for some trivial bookkeeping). Writing to /dev/null does not require processing every byte. –  G-Man 2 days ago
I remember I was blown away when I read the Linux kernel source of the /dev/null device. I was expecting there to be some elaborate system of free()ing buffers, etc., but, no, it's just basically a return(). –  Brian Minton yesterday
@G-Man: I don't know that you can guarantee that to be true in all cases. I can't find evidence now, but I recall some implementation of either cat or cp that would work by mmaping big chunks of the source file into memory, then calling write() on the mapped region. If you were writing to /dev/null, the write() call would return at once without faulting in the pages of the source file, so it would never actually be read from the disk. –  Nate Eldredge yesterday
Also, something like GNU cat does run on many platforms, but a casual glance at the source code will show lots of #ifdefs: it's not literally the same code that runs on all the platforms, and there are plenty of system-dependent sections. –  Nate Eldredge yesterday
@NateEldredge: Interesting point, but I was just building on Michael's answer, so you aren't contradicting me so much as you are contradicting Michael. –  G-Man yesterday
