Is there a (technical or practical) limit to how large you can configure the maximum number of open files in Linux? Are there some adverse effects if you configure it to a very large number (say 1-100M)?

I'm thinking of server usage here, not embedded systems. Programs that use huge numbers of open files can of course eat memory and be slow, but I'm interested in the adverse effects of configuring the limit much larger than necessary (e.g. memory consumed just by the configuration itself).
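For reference, a minimal sketch (assuming a typical glibc/Linux system with /proc mounted) of how a process can inspect both its own per-process limit and the system-wide fs.file-max ceiling:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        /* Per-process limit on open file descriptors (what "ulimit -n" reports). */
        if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
            printf("RLIMIT_NOFILE: soft=%llu hard=%llu\n",
                   (unsigned long long)rl.rlim_cur,
                   (unsigned long long)rl.rlim_max);

        /* System-wide ceiling on open file handles (sysctl fs.file-max). */
        FILE *f = fopen("/proc/sys/fs/file-max", "r");
        if (f) {
            unsigned long long file_max;
            if (fscanf(f, "%llu", &file_max) == 1)
                printf("fs.file-max: %llu\n", file_max);
            fclose(f);
        }
        return 0;
    }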


I suspect the main reason for the limit is to avoid excess memory consumption (each open file descriptor uses kernel memory). It also serves as a safeguard against buggy applications leaking file descriptors and consuming system resources.

But given the absurd amount of RAM modern systems have compared to systems from 10 years ago, I think the defaults today are quite low.

In 2011 the default hard limit for file descriptors on Linux was increased from 1024 to 4096.

Some software (e.g. MongoDB) uses many more file descriptors than the default limit. The MongoDB folks recommend raising this limit to 64,000. I've used an rlimit_nofile of 300,000 for certain applications.

As long as you keep the soft limit at the default (1024), it's probably fairly safe to increase the hard limit. Programs have to call setrlimit() explicitly to raise their soft limit, and they still can't go above the hard limit.
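As a rough sketch of what that looks like from the program's side (assuming a typical glibc/Linux environment), a process can raise its soft limit up to, but not beyond, the hard limit:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }

        /* Raise the soft limit as far as the hard limit allows; going above
         * rl.rlim_max would require CAP_SYS_RESOURCE (i.e. root). */
        rl.rlim_cur = rl.rlim_max;
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }

        printf("soft limit raised to %llu\n", (unsigned long long)rl.rlim_cur);
        return 0;
    }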


This hasn't actually answered the question, though, which asked if there was a technical or practical limit to how high one can set the hard limit. There is, but this answer does not mention it at all. – JdeBP 7 hours ago

The impact wouldn't normally be observable, but the kernel's I/O subsystem has to track all of those open file descriptors, and they can also have an impact on cache efficiency.

Such limits have the advantage of protecting the user from their own (or third parties') mistakes. For example, if you run a small program or script that forks indefinitely, it will eventually hit one of the ulimits, which prevents a more severe (and possibly unrecoverable) system freeze.
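To illustrate the safeguard (a contrived sketch, not something you'd run in production): deliberately lower the soft limit, then leak descriptors until the kernel refuses to hand out more:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* Artificially lower the limit so the failure is reached quickly. */
        struct rlimit rl = { .rlim_cur = 64, .rlim_max = 64 };
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }

        /* Simulate a descriptor leak: keep opening without ever closing. */
        int count = 0;
        for (;;) {
            int fd = open("/dev/null", O_RDONLY);
            if (fd < 0) {
                /* Typically fails with EMFILE: the per-process limit stopped the leak. */
                printf("open() failed after %d descriptors: %s\n",
                       count, strerror(errno));
                return 0;
            }
            count++;
        }
    }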

Unless you have a specific reason to increase any of these limits, you should leave them alone and sleep better.

