When performing a groupby on dates (stored as object), I realized it was much less efficient than on int. Here is an example:
import pandas as pd
import numpy as np

df = pd.DataFrame({'id1': [1, 1, 1, 1, 2, 2, 2, 3, 3, 3],
                   'id2': [10, 20, 30, 10, 20, 30, 10, 20, 30, 10],
                   'value': [123, 156, 178, 19, 354, 26, 84, 56, 984, 12],
                   'date': ['2015-01-12', '2014-09-27', '2014-10-14', '2010-11-26',
                            '2010-04-09', '2012-12-21', '2009-08-16',
                            '2013-07-09', '2014-02-14', '2012-12-04']})
df
Out[1]:
date id1 id2 value
0 2015-01-12 1 10 123
1 2014-15-27 1 20 156
2 2014-10-14 1 30 178
3 2010-11-26 1 10 19
4 2010-04-09 2 20 354
5 2012-12-21 2 30 26
6 2009-08-16 2 10 84
7 2013-07-09 3 20 56
8 2014-02-14 3 30 984
9 2012-12-04 3 10 12
Here are the types of the column:
df.dtypes
Out[2]:
date object
id1 int64
id2 int64
value int64
dtype: object
And now let's take a look at the efficiency of aggregations:
%timeit df.groupby(['id1','id2']).agg({'value':np.sum})
1000 loops, best of 3: 1.35 ms per loop
%timeit df.groupby(['id1','id2']).agg({'date':np.max})
100 loops, best of 3: 2.75 ms per loop
As you can see, it takes twice as long for date as it does for value, which is inconvenient on big dataframes. Is there a way to perform the agg more efficiently on dates? Maybe by changing the type of the date column, or by using another function to get the max?
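One possible direction (a sketch, not a definitive answer, following the hint in the comments below): convert the column with pd.to_datetime so the aggregation runs on a datetime64 column instead of Python objects, which lets pandas use vectorized comparisons. The actual speedup will depend on the pandas version and data size:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({'id1': [1, 1, 1, 1, 2, 2, 2, 3, 3, 3],
                   'id2': [10, 20, 30, 10, 20, 30, 10, 20, 30, 10],
                   'value': [123, 156, 178, 19, 354, 26, 84, 56, 984, 12],
                   'date': ['2015-01-12', '2014-09-27', '2014-10-14', '2010-11-26',
                            '2010-04-09', '2012-12-21', '2009-08-16',
                            '2013-07-09', '2014-02-14', '2012-12-04']})

# Convert the object column to datetime64 so the groupby max is
# computed on a native datetime dtype rather than Python strings.
df['date'] = pd.to_datetime(df['date'])
print(df.dtypes['date'])  # datetime64[ns]

result = df.groupby(['id1', 'id2']).agg({'date': 'max'})
print(result)
```

Passing the string 'max' instead of np.max also lets pandas dispatch to its own cythonized aggregation path where available.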
Comments:
– EdChum: "...datetime, but I note that 2014-15-27 is not a valid date unless you're using some funky calendar."
– ysearka: "...date column is still object, I suspect this is the problem since python can't use dedicated functions for objects."