Arduino Stack Exchange is a question and answer site for developers of open-source hardware and software that is compatible with Arduino.
The use of malloc() and free() seems pretty rare in the Arduino world. It is used in pure AVR C much more often, but still with caution.

Is it a really bad idea to use malloc() and free() with Arduino?

you'll run out of memory really fast otherwise, and if you know how much memory you'll use you might as well statically allocate it anyway – ratchet freak Mar 9 '14 at 13:32
I don't know if it's bad, but I think it isn't used because you almost never run out of RAM for most sketches and it's just a waste of flash and precious clock cycles. Also, don't forget about scope (although I don't know if that space is still allocated for all variables). – Anonymous Penguin Mar 9 '14 at 16:50
As usual, the right answer is "it depends." You haven't provided enough information to know for sure whether dynamic allocation is right for you. – WineSoaked Mar 9 '14 at 17:22

My general rule for embedded systems is to only malloc() large buffers, and only once, at the start of the program, e.g., in setup(). The trouble comes when you allocate and de-allocate memory. Over a long-running session, memory becomes fragmented, and eventually an allocation fails due to the lack of a sufficiently large free area, even though the total free memory is more than adequate for the request.

(Historical perspective, skip if not interested: depending on the loader implementation, the only advantage of run-time allocation vs. compile-time allocation (initialized globals) is the size of the hex file. When embedded systems were built with off-the-shelf computers having all-volatile memory, the program was often uploaded to the embedded system from a network or an instrumentation computer, and the upload time was sometimes an issue. Leaving buffers full of zeros out of the image could shorten the time considerably.)

If I need dynamic memory allocation in an embedded system, I generally malloc() a large pool and divide it into fixed-size buffers (or one pool each for small buffers and large buffers) and do my own allocation/de-allocation from that pool. Then every request for any amount of memory up to the fixed buffer size is honored with one of those buffers. The calling function doesn't need to know whether the buffer is larger than requested, and by avoiding splitting and re-combining blocks we avoid fragmentation. Of course, memory leaks can still occur if the program has allocate/de-allocate bugs.
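
As a rough sketch of that approach (the block size, the block count, and the poolAlloc()/poolFree() names are all made up for this illustration, and a static array stands in for the single up-front malloc() of the pool):

#define POOL_BLOCK_SIZE 32      // every block handed out has this size
#define POOL_BLOCK_COUNT 8      // total blocks carved out of the pool

static uint8_t pool[POOL_BLOCK_COUNT][POOL_BLOCK_SIZE];
static bool poolUsed[POOL_BLOCK_COUNT];   // true = block currently handed out

// Return a fixed-size block, or NULL if the request is too big or the pool is exhausted.
void *poolAlloc(size_t size) {
    if (size > POOL_BLOCK_SIZE) return NULL;
    for (uint8_t i = 0; i < POOL_BLOCK_COUNT; i++) {
        if (!poolUsed[i]) {
            poolUsed[i] = true;
            return pool[i];
        }
    }
    return NULL;
}

// Give a block back to the pool; p must have come from poolAlloc().
void poolFree(void *p) {
    for (uint8_t i = 0; i < POOL_BLOCK_COUNT; i++) {
        if (p == pool[i]) {
            poolUsed[i] = false;
            return;
        }
    }
}

Because every caller gets the same block size regardless of what it asked for, freeing and re-allocating never splits or merges anything, which is what keeps fragmentation away; a real implementation would probably also guard poolFree() against double frees.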

Very insightful answer! Thanks. – Gabriel Staples Oct 5 '15 at 0:20
    
Another historical note, this quickly led to the BSS segment, which allowed a program to zero its own memory for initialization, without slowly copying the zeros during program load. – rsaxvc Apr 13 at 5:22

Typically, when writing Arduino sketches, you will avoid dynamic allocation (be it with malloc or new for C++ instances); people tend to use global (or static) variables, or local (stack) variables, instead.

Using dynamic allocation can lead to several problems:

  • memory leaks (if you lose a pointer to memory you previously allocated or, more likely, if you forget to free allocated memory once you no longer need it)
  • heap fragmentation (after several malloc/free calls), where the heap grows bigger than the amount of memory actually allocated at any given time

In most situations I have faced, dynamic allocation was either not necessary, or could be avoided with macros as in the following code sample:

MySketch.ino

#define BUFFER_SIZE 32
#include "Dummy.h"

Dummy.h

class Dummy
{
    byte buffer[BUFFER_SIZE];
    ...
};

Without #define BUFFER_SIZE, if we wanted the Dummy class to have a non-fixed buffer size, we would have to use dynamic allocation, as follows:

class Dummy
{
    const byte* buffer;

    public:
    Dummy(int size):buffer(new byte[size])
    {
    }

    ~Dummy()
    {
        delete [] buffer;
    }
};

In this case, we have more options than in the first sample (e.g. we can use different Dummy objects with different buffer sizes), but we may have heap fragmentation issues.

Note the use of a destructor to ensure dynamically allocated memory for buffer will be freed when a Dummy instance is deleted.


Using dynamic allocation (via malloc/free or new/delete) isn't inherently bad as such. In fact, for something like string processing (e.g. via the String object), it's often quite helpful. That's because many sketches use several small fragments of strings, which eventually get combined into a larger one. Using dynamic allocation lets you use only as much memory as you need for each one. In contrast, using a fixed-size static buffer for each one could end up wasting a lot of space (causing it to run out of memory much faster), although it depends entirely on the context.

With all of that being said, it's very important to make sure memory usage is predictable. Allowing the sketch to use arbitrary amounts of memory depending on run-time circumstances (e.g. input) can easily cause a problem sooner or later. In some cases, it might be perfectly safe, e.g. if you know the usage will never add up to much. Sketches can change during the programming process though. An assumption made early-on could be forgotten when something is changed later, resulting in an unforeseen problem.

For robustness, it's usually better to work with fixed-size buffers where possible, and design the sketch to work explicitly with those limits from the outset. That means any future changes to the sketch, or any unexpected run-time circumstances, should hopefully not cause any memory problems.
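
As a sketch of that mindset (the 64-character limit and the readLine() name are just assumptions for this example), reading serial input into a fixed buffer with an explicit, designed-in bound could look like this:

#define MAX_LINE 64                 // design-time limit, chosen up front

char line[MAX_LINE + 1];            // +1 for the terminating '\0'
size_t lineLen = 0;

// Accumulate characters from Serial; when a newline arrives, 'line' holds a
// complete, NUL-terminated line. Anything beyond MAX_LINE is silently dropped.
void readLine() {
    while (Serial.available()) {
        char c = Serial.read();
        if (c == '\n') {
            line[lineLen] = '\0';
            lineLen = 0;            // ready for the next line
            return;
        }
        if (lineLen < MAX_LINE) {
            line[lineLen++] = c;    // the limit is enforced explicitly
        }
    }
}

Whatever the limit is, it is visible in one place and the worst-case memory usage is known at compile time.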


I disagree with people who think you shouldn't use it or that it is generally unnecessary. I believe it can be dangerous if you don't know the ins and outs of it, but it is useful. I do have cases where I don't know (and shouldn't care to know) the size of a structure or a buffer (at compile time or run time), especially when it comes to libraries I send out into the world. I agree that if your application is only dealing with a single, known structure, you should just bake that size in at compile time.

Example: I have a serial packet class (a library) that can take arbitrary-length data payloads (which can be a struct, an array of uint16_t, etc.). On the sending end of that class, you simply tell the Packet.send() method the address of the thing you wish to send and the HardwareSerial port through which you wish to send it. On the receiving end, however, I need a dynamically allocated receive buffer to hold the incoming payload, as that payload could be a different structure at any given moment, depending on the application's state, for instance. If I were only ever sending a single structure back and forth, I'd just make the buffer the size it needs to be at compile time. But in the case where packets may be different lengths over time, malloc() and free() are not so bad.
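
The library's real API isn't reproduced here, but a hypothetical receive path (a one-byte length followed by the payload, with handlePayload() standing in for whatever the application does with the data) shows the short-lived malloc()/free() pattern being described:

void handlePayload(const uint8_t *payload, uint8_t len);  // assumed to exist elsewhere

// Hypothetical receive path, not the actual Packet library: the peer sends a
// one-byte length followed by that many payload bytes.
void receivePayload(HardwareSerial &port) {
    if (port.available() < 1) return;
    uint8_t len = port.read();                 // payload length announced by the sender
    uint8_t *buf = (uint8_t *)malloc(len);
    if (buf == NULL) return;                   // allocation failed: drop the packet
    for (uint8_t i = 0; i < len; i++) {
        while (!port.available()) {}           // busy-wait for each byte (no timeout)
        buf[i] = port.read();
    }
    handlePayload(buf, len);                   // use the payload...
    free(buf);                                 // ...then free the short-lived buffer
}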

I've run tests with the following code for days, letting it loop continuously, and I've found no evidence of memory fragmentation. After freeing the dynamically allocated memory, the free amount returns to its previous value.

// found at learn.adafruit.com/memories-of-an-arduino/measuring-free-memory
int freeRam () {
    extern int __heap_start, *__brkval;
    int v;
    return (int) &v - (__brkval == 0 ? (int) &__heap_start : (int) __brkval);
}

uint8_t *_tester;

void setup() {
    Serial.begin(115200); // any convenient baud rate works for watching the output
}

void loop() {
    // random(1, 1000) can return up to 999, so len needs to be wider than uint8_t
    uint16_t len = random(1, 1000);
    Serial.println("-------------------------------------");
    Serial.println("len is " + String(len, DEC));
    Serial.println("RAM: " + String(freeRam(), DEC));
    Serial.println("_tester = " + String((uint16_t)_tester, DEC));
    Serial.println("allocating _tester memory");
    _tester = (uint8_t *)malloc(len);
    Serial.println("RAM: " + String(freeRam(), DEC));
    Serial.println("_tester = " + String((uint16_t)_tester, DEC));
    Serial.println("Filling _tester");
    for (uint16_t i = 0; i < len; i++) {
        _tester[i] = 255;
    }
    Serial.println("RAM: " + String(freeRam(), DEC));
    Serial.println("freeing _tester memory");
    free(_tester); _tester = NULL;
    Serial.println("RAM: " + String(freeRam(), DEC));
    Serial.println("_tester = " + String((uint16_t)_tester, DEC));
    delay(1000); // quick look at the output before the next iteration
}

I haven't seen any sort of degradation in RAM or in my ability to allocate it dynamically using this method, so I'd say it's a viable tool. FWIW.

Your test code conforms to the usage pattern 2. Allocate only short-lived buffers I described in my previous answer. This is one of those few usage patterns known to be safe. – Edgar Bonet May 18 '15 at 15:33
    
In other words, the problems will come up when you start sharing the processor with other unknown code - which is precisely the problem you think you are avoiding. Generally, if you want something that will always work or else fail during linking, you make a fixed allocation of the maximum size and use it over and over again, for example by having your user pass it in to you in initialization. Remember you are typically running on a chip where everything has to fit in 2048 bytes - maybe more on some boards but also maybe a lot less on others. – Chris Stratton May 18 '15 at 16:48
    
@EdgarBonet Yes, exactly. Just wanted to share. – StuffAndyMakes May 19 '15 at 17:29
    
@ChrisStratton It's because of the memory constraints that I believe dynamically allocating the buffer of only the size required in the moment is ideal. If the buffer isn't needed during regular operation, the memory is available for something else. But, I see what you're saying: allocate a reserved space so that something else doesn't take away the space you may need for the buffer. All great information and thoughts. – StuffAndyMakes May 19 '15 at 17:34
    
Dynamically allocating a buffer of only the size needed is risky, as if anything else allocates before you free it you can be left with fragmentation - memory that you can't re-use. Also, dynamic allocation has tracking overhead. Fixed allocation doesn't mean you can't multiply use the memory, it just means that you have to work the sharing into the design of your program. For a buffer with purely local scope, you might also weigh use of the stack. You haven't checked for the possibility of malloc() failing either. – Chris Stratton May 19 '15 at 17:46

I have taken a look at the algorithm used by malloc() from avr-libc, and there seem to be a few usage patterns that are safe from the point of view of heap fragmentation:

1. Allocate only long-lived buffers

By this I mean: allocate all you need at the beginning of the program, and never free it. Of course, in this case, you might as well use static buffers...
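
A minimal sketch of that pattern (the 512-byte size is arbitrary; in a real program it might depend on configuration read at startup, which is the main reason not to simply use a static buffer):

uint8_t *bigBuffer;                       // allocated once, lives for the whole run

void setup() {
    Serial.begin(9600);
    bigBuffer = (uint8_t *)malloc(512);   // one long-lived allocation, never freed
    if (bigBuffer == NULL) {
        Serial.println(F("allocation failed"));
        while (true) {}                   // halt: nothing sensible to do without it
    }
}

void loop() {
    // work with bigBuffer here; since nothing is ever freed, the heap cannot fragment
}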

2. Allocate only short-lived buffers

Meaning: you free the buffer before allocating anything else. A reasonable example might look like this:

void foo()
{
    size_t size = figure_out_needs();
    char * buffer = malloc(size);
    if (!buffer) fail();
    do_whatever_with(buffer);
    free(buffer);
}

If there is no malloc inside do_whatever_with(), or if that function frees whatever it allocates, then you are safe from fragmentation.

3. Always free the last allocated buffer

This is a generalization of the two previous cases. If you use the heap like a stack (last in is first out), then it will behave like a stack and not fragment. It should be noted that in this case it is safe to resize the last allocated buffer with realloc().
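
A contrived illustration of that last-in, first-out discipline (the sizes are arbitrary and error handling is kept minimal):

void lifoExample() {
    char *a = (char *)malloc(100);          // allocated first
    char *b = (char *)malloc(50);           // allocated last
    if (a == NULL || b == NULL) { free(b); free(a); return; }
    // ... work with a and b ...
    char *grown = (char *)realloc(b, 80);   // resizing the last allocation is safe
    if (grown != NULL) b = grown;
    // ... work with the resized b ...
    free(b);                                // last in, first out
    free(a);
}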

4. Always allocate the same size

This will not prevent fragmentation, but it is safe in the sense that the heap will not grow larger than the maximum used size. If all your buffers have the same size, you can be sure that, whenever you free one of them, the slot will be available for subsequent allocations.
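
For instance (the Record struct is made up), a program that only ever allocates one type of node gets this property automatically:

// Every allocation is sizeof(Record), so any freed slot fits any later request
// and the heap never grows past its high-water mark.
struct Record {
    uint32_t timestamp;
    int16_t  value;
};

Record *newRecord() {
    return (Record *)malloc(sizeof(Record));
}

void discardRecord(Record *r) {
    free(r);                                // this slot can be reused as-is
}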

Pattern 2 should be avoided as it adds cycles for malloc() and free() when this can be done with "char buffer[size];" (in C++). I would also like to add the anti-pattern "Never from an ISR". – Mikael Patel Feb 1 at 14:50

No, but they must be used very carefully with regard to free()ing allocated memory. I have never understood why people say direct memory management should be avoided; it implies a level of incompetence that's generally incompatible with software development.

Let's say you're using your Arduino to control a drone. Any error in any part of your code could potentially cause it to fall out of the sky and hurt someone or something. In other words, if someone lacks the competence to use malloc(), they likely shouldn't be coding at all, since there are so many other areas where small bugs can cause serious issues.

Are the bugs caused by malloc() harder to track down and fix? Yes, but that's more a matter of frustration on the coder's part than of actual risk. As far as risk goes, any part of your code can be equally or more risky than malloc() if you don't take the steps to make sure it's done right.

It's interesting you used a drone as an example. According to this article (mil-embedded.com/articles/…), "Due to its risk, dynamic memory allocation is forbidden, under the DO-178B standard, in safety-critical embedded avionics code." – Gabriel Staples Oct 5 '15 at 0:30
    
Chalk it up as a bad example. In my experience, either an interpreter/compiler will optimize, fix, or warn about bugs, or the coder must take it upon themselves to find and fix most errors. In such cases, manual memory allocation is just one of thousands of ways a buggy piece of code can cause a system to crash and burn. – JSON Oct 5 '15 at 3:04
