
I have a small 10-liner function that writes some data to a file using an std::ofstream. I did not explicitly call .close() at the end of my function, but it failed code review with the reason that it is better to explicitly call it for style and verbosity reasons. I understand there is no harm in calling .close() explicitly, but does calling it explicitly just before a return statement indicate a lack of understanding or faith in RAII?

The C++ standard says:

§27.8.1.2

virtual ~basic_filebuf();

[3] Effects: Destroys an object of class basic_filebuf<charT,traits>. Calls close().

Am I justified in my argument that calling .close() at the end of a function is redundant and/or unnecessary?

bool SomeClass::saveData()
{
    std::ofstream saveFile(m_filename);

    if (!saveFile.is_open())
        return false;

    saveFile << m_member1 << std::endl;
    saveFile << m_member2 << std::endl;

    saveFile.close(); // passed review only with this line
    return true;
}

The function is only supposed to return false if the file could not be opened for writing.

If the reviewers need reassurance that close is called automatically, then C++ is probably not the best language choice. The "verbosity" reason is particularly alarming. –  Fred Nurk Feb 2 '11 at 11:33
“it failed code review with the reason that it is better to explicitly call it for … verbosity reasons ” – Please explain that last bit, it doesn’t make any sense. –  Konrad Rudolph Feb 2 '11 at 14:16
Obviously your reviewers' style conflicts with the design of the standard library. –  kizzx2 Feb 2 '11 at 16:11
@Konrad: By verbosity, I mean that we are closing the file even knowing it will be closed anyway in the destructor, because it shows that we know we are done with it. In some instances, we have files open for a long time; so we have a rule that we close every stream explicitly. –  dreamlax Feb 3 '11 at 2:34
@dreamlax: but how is verbosity ever an advantage? I agree that explicitness may be, and that this sometimes entails verbosity – but this is always a trade-off between the two. I have never seen a situation where verbosity would be an advantage in itself. That’s what I meant by “it doesn’t make any sense”. –  Konrad Rudolph Feb 3 '11 at 8:21

6 Answers

Accepted answer (49 votes)

I would argue the exact opposite.

Explicitly closing a stream is probably not what you want to do, because close() can report errors (by setting error flags, or by throwing an exception if the exception mask is set). So an explicit close() signals that you intend both to close the stream and to handle any errors that result from closing it, or that you want to fail fast and let the exception kill the application.

If you don't care about the errors (i.e. you are not going to handle them anyway), you should just let the destructor do the closing. The destructor will catch and discard any exceptions, allowing code to flow normally. When dealing with the closing of a file, this is usually what you want: if the closing fails, does it matter?

"if the closing fails does it matter?" For output streams, I think it probably does, as the output may be buffered until the call to close(). –  Roddy Feb 2 '11 at 10:40
@Roddy: but you can’t do anything to stop this from failing. It may matter to report this failure, but no other meaningful action can be taken. –  Konrad Rudolph Feb 2 '11 at 14:17
@Konrad. Agree you can't stop it failing (but that's true with many exceptions). If the user selects "Save..." in a GUI app, and close() throws, then notifying the user and allowing him to re-try the save (maybe to a different volume) is probably very meaningful... –  Roddy Feb 2 '11 at 15:11
@Roddy: granted. But something different: does this error actually occur? I can’t remember ever seeing something like that (except when you prematurely plug off a USB drive etc. …). –  Konrad Rudolph Feb 2 '11 at 15:16
@Roddy: Of course there are times when it does matter and you want to report it (GUI applications). Then you would use close() explicitly and check the error. Whether you care or not is totally situational, and only the dev can decide that at the point of usage. In the above example it does not sound like any checking was done, and thus they did not care. –  Loki Astari Feb 2 '11 at 17:19

At the end of the day this is a further incarnation of the famous question: "ARE EXCEPTIONS EVIL?". It's a great question for a flame war.

The explicit close is a good idea if exceptions are deprecated by your team, and the style and return value of your function suggest that this may be the case.

But if exceptions are deprecated, you have to interrogate the error flags after writing and after closing, and return true or false accordingly. Just closing explicitly is not nearly enough.

fstream::exceptions() lets you query the stream's exception mask, or set it so that the error conditions you care about throw.


I agree with Loki's answer: the difference between calling close explicitly, and letting the destructor call close, is that the destructor will implicitly catch (i.e. conceal) any exception thrown by close.

The destructor must do this (not propagate exceptions) because it may run while another exception is already being thrown, and throwing a second exception during stack unwinding terminates the program (which is why all destructors should avoid throwing exceptions).

Unlike Loki, I would argue that you do want to call close explicitly, precisely because you want any exception from close to be visible. For example, perhaps the data is important and you want it written to disk; perhaps the disk is full, the << output operators only write to an in-memory buffer, and no one notices the full disk until close implicitly flushes. You're not allowed to return false, because false is defined to mean that the file couldn't be opened. IMO the only sane/safe thing you can do, then, is throw an exception.

It's up to the caller to catch any exception; having no exception thrown should be a guarantee that close was successful and the data safely written to the O/S.


Assuming that the fstream object is local to the function, I would tend to argue against this. People need to become accustomed to letting RAII do its job, and closing an fstream object falls under that heading. Extra code that doesn't accomplish something useful is almost always a poor idea.

Edit: Lest I be misunderstood, I would argue against this, not only for this specific case, but in general. It's not merely useless, but tends to obscure what's needed, and (worst of all) is essentially impossible to enforce in any case -- people who think only in terms of the "normal" exit from the function really need to stop and realize that the minute they added exception handling to C++, the rules changed in a fundamental way. You need to think in terms of RAII (or something similar) that ensures cleanup on exit from scope -- and explicitly closing files, releasing memory, etc., does not qualify.

Sorry, your assumption is correct, I forgot to mention that the fstream is local to the function. –  dreamlax Feb 2 '11 at 5:24
I feel a little bit like asking "what's the point in having destructors if we're doing the work manually..." but I know it won't go far. I put the explicit call in just to satisfy the code review but I felt extremely reluctant. –  dreamlax Feb 2 '11 at 6:41
@dreamlax: It's to give you flexibility. In most cases you don't care whether the close() works or not (there is nothing you can do about it, except log the information), so the destructor works perfectly. But in the situations where you do care, you have the option of calling close() manually and checking whether it worked (and if not, sending that 2 AM SMS to support telling them the file system is down and needs immediate replacement). –  Loki Astari Feb 2 '11 at 8:09
re. Exceptions: The next (il)logical step is to put a try/catch construct in so that the unnecessary close() still gets called if an exception is thrown… –  Roddy Feb 2 '11 at 10:54
One reason that the project manager gave was that under stressed conditions where file handles are scarce, we need to close files as soon as we're done with them. By creating a rule that you close every stream, regardless of how trivial the function, you force the developer to think about the lifetime of the object. –  dreamlax Feb 3 '11 at 2:54

There is a middle ground here. The reason the reviewers want that explicit close() "as a matter of style and verbosity" is that without it they can't tell just from reading the code if you meant to do it that way, or if you completely forgot about it and just got lucky. It's also possible their egos were bruised from failing to notice or remember, at least at first, that close() would be called by the destructor. Adding a comment that the destructor calls close() isn't a bad idea. It's a little gratuitous, but if your coworkers need clarification and/or reassurance now, there's a good chance a random maintainer a few years down the road will too, especially if your team doesn't do a lot of file I/O.

There shouldn't be any "just got lucky" w.r.t. resources in C++. There should be "just works". That's how RAII is designed, and how it works when used properly. –  Sebastian Redl Feb 24 '14 at 16:28
I disagree. This is C++, if you need a comment reminding you that an fstream will close on its own, then you're not a C++ programmer. –  jliv902 Mar 25 '14 at 22:30

I'm torn on this one. You are absolutely correct. However, if a coding standard requires calling close() explicitly, or the group has reached a consensus on doing so, there's not much you can do. If I were you, I would just go with the flow. Arguing such things is unproductive.

The problem with small-group consensus (i.e. development teams) is that it can often be wrong, because the group is swayed by the loudest voice. For group consensus to work, the group has to be large enough that loud individuals can be counteracted by the correct decision of a lot of people (this is why SO works (loud people may get an initial vote up but eventually the correct answer usually gets the votes (eventually))). –  Loki Astari Feb 2 '11 at 7:57
@Martin: Nice nested parenthesis. –  GManNickG Feb 3 '11 at 0:32
@grokus I agree with Tux-D's sentiment. But I don't believe in design/review by committee. If someone has the power to dictate these types of standards, they need to be shown another way or proven incorrect. The strategy for this will completely depend on the individual. The only thing that is going to happen in large meetings is arguing and an eventual increase in volume until a mediator steps in and makes the decision for you. The best response to this is to construct an app in which close() is called and throws. Show that it kills the app. Then give them a choice: handle all errors, or use RAII. –  Andrew Finnell Sep 4 '11 at 13:51
