So as a general rule, to avoid a stack overflow, big objects should be allocated on the heap (correct me if I'm wrong). But since the heap and the stack expand towards each other, wouldn't this cause a heap overflow, or alternatively limit the space for the stack and raise the chances of a stack overflow?
migrated from stackoverflow.com May 8 at 12:10
Obviously, storing X bytes of data is going to reduce the amount of free memory by at least X bytes, no matter where you put it.
I don't think that this is the main distinction to draw. If you need data to be accessible outside of the current stack frame (for example as a global variable, or to pass it to another thread), you cannot put it on the stack. The stack shrinks when a subroutine returns, so all data stored in its stack frame is lost. Data on the heap stays "alive" until you de-allocate it.

Beyond that, because stack memory is generally much more limited than heap memory in "real life" (not just two unlimited sections growing towards each other, as you alluded to in your question), you may want to put anything large on the heap even if you could put it on the stack. For example, Java places everything in the heap. The downside is more complex memory management.
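A minimal C sketch of the lifetime point above: heap memory survives the return of the function that allocated it, while a stack frame's locals do not. (The function name `make_counters` is just an illustration, not from any of the answers.)

```c
#include <stdlib.h>

/* Heap allocation: the buffer outlives the function that created it.
   The caller owns the memory and must free() it eventually. */
int *make_counters(size_t n) {
    int *buf = malloc(n * sizeof *buf);   /* lives until free() */
    if (buf == NULL)
        return NULL;                      /* allocation can fail */
    for (size_t i = 0; i < n; i++)
        buf[i] = 0;
    return buf;                           /* safe: heap memory survives the return */
}

/* The stack version would be a bug -- the array dies with the frame:

   int *broken(void) {
       int local[16];
       return local;   // dangling pointer: the frame is gone after return
   }
*/
```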
Given the assumptions below, yes: it does not really matter whether you pile it on the floor or hang it from the ceiling. Your question indicates a choice between heap and stack only. (And we are not talking about very big objects that might be better mmap'd, or other less common scenarios.)

Assumptions:
Now what happens if we remove these assumptions?
Summary: the exact behavior depends a lot on your OS and your compiler; on Linux, only the sky is the limit.

Addendum: related SO question
Stack overflow is what happens when an architecture with a bounded stack tries to increment its stack pointer beyond its maximum possible value. Sometimes this is a hard limit, such as on the 6502, which had a 256-byte stack at a fixed location in memory and a one-byte stack pointer: pushing a 257th byte onto the stack would cause the stack pointer to wrap around and clobber the bottom of the stack. On modern systems it's usually a soft limit that can be reset before starting the program or while the program is running; step over that line on a POSIX-y system and you get a segmentation fault. It's also theoretically possible to have a stack that continues to grow downward as long as memory is available to hold it.

The phenomenon you're referring to is called a stack-heap collision, where one grows so much that it overlaps the other. You saw these a lot more frequently when address spaces (and memory) were smaller; now it's a lot less common because we don't run out of either nearly as often.

My advice would be to base your decisions on how you're going to dispose of the memory. If it's something that can disappear when you go out of scope, put it on the stack, because that will save you the headache of managing it. If it needs to stay around past the life of the current scope, put it on the heap. Don't take this as a hard-and-fast rule, either, because there are situations where it doesn't apply. Use your judgement: if you're going to allocate something you know will run you out of stack, use the heap and make sure you clean up after yourself. Large allocations tend to be re-used, which is why they get put on the heap more often.

The bottom line is that if you're so short on memory that stack and heap will collide, you have bigger problems than where to put what you allocate.
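One way to picture that advice in C: keep small, scope-local buffers on the stack and take anything that could exhaust a few-megabyte stack from the heap, freeing it yourself. A sketch (the function name and the 64 MB figure are illustrative, not from the answer):

```c
#include <stdlib.h>
#include <string.h>

size_t sum_bytes(const unsigned char *data, size_t n) {
    /* Small and scope-local: fine on the stack, reclaimed automatically
       when the function returns. */
    unsigned char small[64];
    memcpy(small, data, n < sizeof small ? n : sizeof small);

    /* 64 MB declared as a local array would overflow most default
       stacks (commonly 1-8 MB), so take it from the heap instead. */
    unsigned char *big = malloc(64u * 1024 * 1024);
    if (big == NULL)
        return 0;

    size_t total = 0;
    for (size_t i = 0; i < n; i++)
        total += data[i];

    free(big);   /* heap memory: clean up after yourself */
    return total;
}
```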
The stack's forte is ultra-fast allocation and deallocation of memory that follows a LIFO pattern (last allocated is first deallocated). If you need memory that will follow a different allocation/deallocation pattern, you are going to have to get it from the heap. Note that in common multiprocessing operating systems, every process gets its own stack and its own heap. Typically the stack for a process is only 1 MB to 5 MB; the small size is why you should avoid allocating large structures on the stack. In contrast, the heap can grow to consume as much of the physical memory as the OS will allow. Yes, you can eventually use up all the heap that the OS will grant you, but that's typically going to be a factor of 1000 larger than the size of the stack.
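On POSIX systems you can inspect the per-process stack limit this answer describes via `getrlimit(2)` (the same interface `setrlimit(2)`, mentioned in a comment below, uses to change it). A sketch, assuming a POSIX system; the helper name is made up:

```c
#include <sys/resource.h>

/* Return the current (soft) stack limit in bytes, or 0 on error.
   On Linux this is commonly 8 MB by default; it may also be
   RLIM_INFINITY, i.e. "unlimited". */
unsigned long long stack_limit_bytes(void) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) != 0)
        return 0;
    return (unsigned long long)rl.rlim_cur;
}
```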
This is something that you do not need to worry about. The heap and the stack are managed separately. More specifically, the heap is backed by virtual memory, so if you over-allocate it the OS will page things out. The stack is a separate thing that does not normally expand beyond a certain size; it does not freely allocate from the heap.

The reason they say to allocate the big stuff from the heap is that the heap can handle it and the stack, being a fixed size, cannot. If you look more closely at the stack you might think that it can keep growing forever, or until memory is exhausted, and in principle you are right. However, most (if not all) OSes monitor the size of the stack and will prevent it from growing continuously.

Lastly, data you allocate off the heap is managed data: the allocator hands it out and takes it back as you request, and when it's out, it's out. If the heap were to run out, malloc would return NULL. Note that on most OSes virtual memory backs this up, so in practice malloc will almost always return something.
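The "malloc would return NULL" point is worth handling explicitly in code. A common sketch is a checked wrapper (the name `xmalloc` is a widespread convention, not part of the standard library); note that on Linux with memory overcommit enabled, failure may instead surface later via the OOM killer, but the NULL check is still required for correctness:

```c
#include <stdio.h>
#include <stdlib.h>

/* malloc that never returns NULL: on exhaustion it reports the
   failure and terminates instead of letting the caller dereference
   a null pointer. */
void *xmalloc(size_t n) {
    void *p = malloc(n);
    if (p == NULL) {
        fprintf(stderr, "out of memory (%zu bytes)\n", n);
        exit(EXIT_FAILURE);
    }
    return p;
}
```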
The answer is clearly platform-dependent, compiler-dependent and even library-dependent. But consider that in a process there is one stack per thread, and as many heaps as the process requests from the system (normally one, but there can be more: an OS can have an API to allocate various "heaps"). There are OSes that give a process a single chunk, in which the heap and the stacks are carved out by subtraction, and OSes that give distinct blocks to the process, each mapping a different block of physical memory. On OSes and processors that implement virtual memory, the process address space is mapped to physical memory by pointer tables invisible to the process itself.

When we say that the stack grows and shrinks, we are talking in the "process view". In the "OS view", a stack is just a block that doesn't grow or shrink upon every call/return; it is just more or less filled up. When it becomes full, it has to be reallocated in another physical memory area with more space. The cost of "reallocating a stack" is heavy (it must have no "holes" inside), so many OSes don't allow a stack to grow beyond a predefined size. But the heap does not need that care: if a heap block is full, another block is added; there is no need to make it contiguous.

This, on many platforms, gives the perception (in the "process space") that the stack is limited (usually to 4 MB or similar) while the heap is "endless" (limited only by physical memory or the swap area).
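The "one stack per thread" point can be made concrete with POSIX threads, where the per-thread stack size is even configurable at creation time. A sketch, assuming a pthreads platform; the helper name and the 2 MB figure are illustrative:

```c
#include <pthread.h>

static void *worker(void *arg) {
    (void)arg;
    /* This thread runs on its own stack, separate from the
       creating thread's stack. */
    return (void *)42;
}

/* Create a thread with an explicit 2 MB stack, join it, and
   return the value it produced (-1 on failure). */
int run_with_2mb_stack(void) {
    pthread_attr_t attr;
    pthread_t tid;
    void *ret;

    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 2 * 1024 * 1024);  /* per-thread stack */
    if (pthread_create(&tid, &attr, worker, NULL) != 0) {
        pthread_attr_destroy(&attr);
        return -1;
    }
    pthread_join(tid, &ret);
    pthread_attr_destroy(&attr);
    return (int)(long)ret;
}
```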
setrlimit(2). – Blrfl May 8 at 15:30