
In my numerical physics code, I need to create an array of Derived objects through a unique_ptr typed to the Base class. Normally, I would have:

// Header file of the Base class
class Particle{
public:
    Particle();             // some constructor
    virtual ~Particle();    // virtual destructor because of polymorphism
    virtual void function();    // some random function for demonstration
};

// Header file of the Derived class
class Electron : public Particle{
public:
    Electron();
    // additional things, dynamic_cast<>s, whatever
};

Later in my code, to create an array of Derived objects with the Base type pointer, I would do

Particle* electrons = new Electron[count];

The advantage is that I can use the array in the really convenient form electrons[number].function(), because the index in [] is translated into an offset from the start of the array that lands on the proper Electron instance. However, using raw pointers gets messy, so I decided to switch to smart pointers.

Problem is with the definition of the Derived objects. I can do the following:

std::unique_ptr<Particle, std::default_delete<Particle[]>> electrons(new Electron[count]);

which creates the array of polymorphic Electrons and even uses the proper delete[] call. The problem lies in how specific objects of the array are accessed, as I have to do this:

electrons.get()[number].function();

and I don't like the get() part, not one bit.

I could do the following:

std::unique_ptr<Particle[]> particles(new Particle[count]);

and yes, call the Particle instances in the array with

particles[number].function();

and everything would be fine and dandy, except that I would not be using the specifics of the Electron class, so the code would be useless.

And now for the funny part, let's do one more thing, shall we?

std::unique_ptr<Particle[]> electrons(new Electron[count]);

BOOM!

use of deleted function ‘std::unique_ptr<_Tp [], _Dp>::unique_ptr(_Up*) [with _Up = Electron; <template-
 parameter-2-2> = void; _Tp = Particle; _Dp = std::default_delete<Particle []>]’

What is going on?

EDIT Thank you for all of the responses. Currently, I am thinking about the following solutions:

  • Changing the type of pointer to the type of derived class, therefore std::unique_ptr<Electron[]> electrons(new Electron[count]),
  • using std::vector<std::unique_ptr<Particle>>();

I will post the results later.

    
The error is only a symptom of a design issue. Your design should best distinguish the creation of the array (keeping in mind that arrays are not themselves polymorphic) and the polymorphic use of the array. – Christophe 2 hours ago

3 Answers

The problem with your design is that your objects are derived and polymorphic, but arrays of those objects are not.

For example, Electron could have additional data that a Particle doesn't have. Then the size of an Electron object would no longer be the same as that of a Particle object. So the pointer arithmetic that is needed to access array elements would not work anymore.

This problem exists for raw pointers to arrays as well as for unique_ptr to arrays. Only the objects themselves are polymorphic. If you want to use them without the risk of slicing, you'd need an array of pointers to polymorphic objects.

If you look for additional arguments explaining why this design should be avoided, you may have a look at the section of Scott Meyers' book "More effective C++" titled "Item 3: never treat arrays polymorphically".

Alternative: change your design

For example, use a vector of the real type to create your objects, and a vector of Particle pointers to use those objects polymorphically:

#include <algorithm>
#include <vector>

std::vector<Electron> myelectrons(count);   // my real object store
std::vector<Particle*> ve(count, nullptr);  // my adaptor for polymorphic access
std::transform(myelectrons.begin(), myelectrons.end(), ve.begin(),
               [](Particle& e){ return &e; });  // use an algorithm to populate easily
for (auto x : ve)  // plain C++11: no need to know the container type or size
    x->function();



std::unique_ptr is preventing you from shooting yourself in the foot, as std::default_delete<T[]> calls delete[], which has the behaviour specified in the standard:

If a delete-expression begins with a unary :: operator, the deallocation function’s name is looked up in global scope. Otherwise, if the delete-expression is used to deallocate a class object whose static type has a virtual destructor, the deallocation function is the one selected at the point of definition of the dynamic type’s virtual destructor (12.4). Otherwise, if the delete-expression is used to deallocate an object of class T or array thereof, the static and dynamic types of the object shall be identical and the deallocation function’s name is looked up in the scope of T.

In other words, code like this:

Base* p = new Derived[50];
delete[] p;

is undefined behaviour.

It may have seemed to work on some implementations. There, the delete[] call looks up the size of the allocated array and calls the destructor on each element, which requires the elements to have a known size. Since derived objects may be larger, the pointer arithmetic goes wrong and the destructors are called with the wrong addresses.

Let's review what you tried:

std::unique_ptr<Particle[]> electrons(new Electron[count]);

there's code in std::unique_ptr's constructor that detects these violations; see cppreference.

std::unique_ptr<Particle, std::default_delete<Particle[]>> electrons(new Electron[count]);

is undefined behaviour: you essentially tell the compiler that delete[] is a valid way to release the resource you pass to the constructor of electrons, which isn't true, as mentioned above.

The only safe way with polymorphism is to use std::unique_ptr pointing to individual derived objects, as in std::vector<std::unique_ptr<Particle>>.

Since you mention that performance is critical, dynamically allocating every Particle separately will be slow; in that case you can:

  • use an object pool
  • make use of flyweight pattern
  • refactor it to avoid inheritance
  • use std::vector<Electron> or std::unique_ptr<Electron[]> directly.
    
But I was using it this way the whole time and it worked. I am just stunned now... How am I supposed to create this array, then? – bluecore 3 hours ago
2  
@bluecore That's how undefined behaviour works. Invalid code may seem to run correctly, but crash on the newer compiler or after any code change. I'll update my answer soon. – milleniumbug 3 hours ago
    
Well, that would actually explain why it was working, but weirdly, sometimes... – bluecore 3 hours ago
    
std::unique_ptr<Electron[]> and that's what I am thinking about right now. Even though I would have to implement templates to the other areas of the code, as I am using Ions also. That was the reason I needed to have the same Base type. – bluecore 2 hours ago

Use a std::vector or std::array (if you know the count at compile time) of std::unique_ptr. Something like this:

#include <vector>
#include <memory>

class A
{
public:

    A() = default;
    virtual ~A() = default;
};

class B : public A
{
public:

    B() = default;
    virtual ~B() = default;
};

int main()
{
    auto v = std::vector<std::unique_ptr<A>>();

    v.push_back(std::make_unique<A>());
    v.push_back(std::make_unique<B>());

    return 0;
}

Edit: In terms of speed I did a quick test with the 3 methods and this is what I found:

Debug

6.59999430  : std::vector (with reserve, unique_ptr)
5.68793220  : std::array (unique_ptr)
4.85969770  : raw array (new())

Release

4.81274890  : std::vector (with reserve, unique_ptr)
4.42210580  : std::array (unique_ptr)
4.12522340  : raw array (new())

Finally, I did a test where I used new() for all 3 versions instead of unique_ptr:

4.13924640 : std::vector
4.14430030 : std::array
4.14081580 : raw array

So you see there's really no difference in a release build, all else being equal.

    
The problem with std::vector<> in my application is that it's slow. Raw arrays are about 30% quicker in access than the dynamically created std::vector. Bear in mind please that the number of particles may be in millions, which makes std::vector<> useless. Array might work, but I am defining the array in header files and something like std::array var; doesn't work, as it needs to know the size at compile time. – bluecore 2 hours ago
    
@bluecore: How std::vector can be slower than array created with new[]? – GingerPlusPlus 2 hours ago
1  
You can reserve a suitably large size for std::vector beforehand to avoid a repeated reserve/copy. I mean you're going to do that with a raw array dynamically created anyway presumably. – Robinson 2 hours ago
    
@GingerPlusPlus Try to do std::vector<double> var; and then double* var = new double[]; with the size of array 10 000 000. Now blast that into a for cycle and do some arithmetic operations and watch the time, you'll be surprised. Not to mention that you actually have to push_back() into the vector first. And it really isn't that surprising at all. std::vector<> is a dynamically allocated field, prone to memory fragmentation etc., whereas new[] is just a simple continuous block of memory. – bluecore 2 hours ago
    
Only in debug build. In a release build it should be almost as fast. – Robinson 2 hours ago
