The use of malloc() and free() seems pretty rare in the Arduino world. It is used in pure AVR C much more often, but still with caution. Is it a really bad idea to use malloc() and free() with Arduino?
My general rule for embedded systems is to only malloc() large buffers, and only once, at the start of the program (e.g. in setup()). The trouble comes when you allocate and free memory repeatedly: over a long run session the heap becomes fragmented, and an allocation can eventually fail even though there is enough free memory in total.

(Historical perspective, skip if not interested: depending on the loader implementation, the only advantage of run-time allocation vs. compile-time allocation (initialized globals) is the size of the hex file. When embedded systems were built with off-the-shelf computers having all-volatile memory, the program was often uploaded to the embedded system from a network or an instrumentation computer, and the upload time was sometimes an issue. Leaving buffers full of zeros out of the image could shorten the time considerably.)

If I need dynamic memory allocation in an embedded system, I generally malloc(), or preferably statically allocate, a large pool and manage it myself, so the program's memory usage stays bounded and predictable.
Typically, when writing Arduino sketches, you will avoid dynamic allocation (be it with malloc()/free() or C++'s new/delete), preferring fixed-size, statically allocated buffers. Using dynamic allocation can lead to several problems: the heap can become fragmented over time, and the sketch can run out of memory at run time, which is hard to detect and recover from on a microcontroller with only a few kilobytes of RAM.
In most situations I have faced, dynamic allocation was either not necessary, or could be avoided with macros, as in the following code sample: the sketch (MySketch.ino) defines a buffer-size macro before including the library header (Dummy.h), and the header uses that macro to size a plain array at compile time instead of calling malloc(). Without the macro, the header falls back to a default size.

In this case, we have more options than in the first sample (e.g. each sketch can use a different buffer size). Note the use of a destructor to ensure the dynamically allocated memory for the buffer is released when the object is destroyed.
Using dynamic allocation (via malloc()/free() or new/delete) is not forbidden, though. With all of that being said, it's very important to make sure memory usage is predictable. Allowing the sketch to use arbitrary amounts of memory depending on run-time circumstances (e.g. input) can easily cause a problem sooner or later. In some cases, it might be perfectly safe, e.g. if you know the usage will never add up to much.

Sketches can change during the programming process, though. An assumption made early on could be forgotten when something is changed later, resulting in an unforeseen problem. For robustness, it's usually better to work with fixed-size buffers where possible, and to design the sketch to work explicitly with those limits from the outset. That means any future changes to the sketch, or any unexpected run-time circumstances, should hopefully not cause any memory problems.
I disagree with people who think you shouldn't use it, or that it is generally unnecessary. I believe it can be dangerous if you don't know the ins and outs of it, but it is useful. I do have cases where I don't know (and shouldn't care to know) the size of a structure or a buffer (at compile time or run time), especially when it comes to libraries I send out into the world. I agree that if your application is only dealing with a single, known structure, you should just bake in that size at compile time.

Example: I have a serial packet class (a library) that can take arbitrary-length data payloads (a struct, an array of uint16_t, etc.). On the sending end of that class, you simply tell the Packet.send() method the address of the thing you wish to send and the HardwareSerial port through which you wish to send it. On the receiving end, however, I need a dynamically allocated receive buffer to hold the incoming payload, as that payload could be a different structure at any given moment, depending on the application's state, for instance. If I were only ever sending a single structure back and forth, I'd just make the buffer the size it needs to be at compile time. But in the case where packets may be different lengths over time, malloc() and free() are not so bad.

I've run tests with the following code for days, letting it loop continuously, and I've found no evidence of memory fragmentation. After freeing the dynamically allocated memory, the free amount returns to its previous value.
I haven't seen any sort of degradation in RAM or in my ability to allocate it dynamically using this method, so I'd say it's a viable tool. FWIW.
I have taken a look at the algorithm used by malloc(), and there seem to be a few usage patterns that are safe with respect to heap fragmentation:

1. Allocate only long-lived buffers. By this I mean: allocate all you need at the beginning of the program, and never free it. Of course, in this case, you could as well use static buffers...

2. Allocate only short-lived buffers. Meaning: you free the buffer before allocating anything else. A reasonable example might look like this:
If there is no malloc() inside the function that uses the buffer, the buffer is freed before any other allocation happens, and fragmentation cannot occur.

3. Always free the last allocated buffer. This is a generalization of the two previous cases. If you use the heap like a stack (last in is first out), then it will behave like a stack and not fragment. It should be noted that in this case it is safe to resize the last allocated buffer with realloc().

4. Always allocate the same size. This will not prevent fragmentation, but it is safe in the sense that the heap will not grow larger than the maximum used size. If all your buffers have the same size, you can be sure that, whenever you free one of them, the slot will be available for subsequent allocations.
No, but they must be used very carefully with regard to free()ing allocated memory. I have never understood why people say direct memory management should be avoided, as it implies a level of incompetence that's generally incompatible with software development.

Let's say you're using your Arduino to control a drone. Any error in any part of your code could potentially cause it to fall out of the sky and hurt someone or something. In other words, if someone lacks the competence to use malloc(), they likely shouldn't be coding at all, since there are so many other areas where small bugs can cause serious issues.

Are the bugs caused by malloc() harder to track down and fix? Yes, but that's more a matter of frustration on the coder's part than of risk. As far as risk goes, any part of your code can be equally or more risky than malloc() if you don't take the steps to make sure it's done right.