I have a canonical Pandas transform example in which performance seems inexplicably slow. I have read the Q&A on the apply method, which is related but, in my humble opinion, offers an incomplete and potentially misleading answer to my question, as I explain below.
The first five rows of my dataframe are:

     id        date      xvar
0  1004  1992-05-31  4.151628
1  1004  1993-05-31  2.868015
2  1004  1994-05-31  3.043287
3  1004  1995-05-31  3.189541
4  1004  1996-05-31  4.008760
- There are 24,693 rows in the dataframe.
- There are 2,992 unique id values.
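To make the comparison reproducible, here is a minimal sketch that builds a stand-in for df_r with the same shape; the random draws are my assumption (I cannot share the actual data), so absolute timings will differ:

import numpy as np
import pandas as pd

# Hypothetical stand-in for df_r: 24,693 rows over roughly 2,992 ids.
# Values are random, so only the shape mimics the real data.
rng = np.random.default_rng(0)
n_rows, n_ids = 24_693, 2_992
df_r = pd.DataFrame({
    'id': rng.integers(1000, 1000 + n_ids, size=n_rows),
    'date': pd.Timestamp('1992-05-31'),
    'xvar': rng.normal(3.5, 0.5, size=n_rows),
})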
I want to center xvar by id.
Approach 1 takes 861 ms:
# Center xvar by subtracting each id group's mean via a Python lambda
df_r['xvar_center'] = (
    df_r
    .groupby('id')['xvar']
    .transform(lambda x: x - x.mean())
)
Approach 2 takes 9 ms:
# Group means
df_r_mean = (
    df_r
    .groupby('id', as_index=False)['xvar']
    .mean()
    .rename(columns={'xvar': 'xvar_avg'})
)

# Merge group means onto dataframe and center
df_w = (
    pd
    .merge(df_r, df_r_mean, on='id', how='left')
    .assign(xvar_center=lambda x: x.xvar - x.xvar_avg)
)
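For reference, here is a hedged sketch of how the two approaches can be timed with the standard-library timeit module; it assumes df_r is already in memory (the synthetic stand-in above works), and exact numbers will vary by machine:

import timeit

# One run per approach; raise `number` for more stable estimates.
t1 = timeit.timeit(
    lambda: df_r.groupby('id')['xvar'].transform(lambda x: x - x.mean()),
    number=1,
)
t2 = timeit.timeit(
    lambda: (
        pd.merge(
            df_r,
            df_r.groupby('id', as_index=False)['xvar']
                .mean()
                .rename(columns={'xvar': 'xvar_avg'}),
            on='id',
            how='left',
        )
        .assign(xvar_center=lambda x: x.xvar - x.xvar_avg)
    ),
    number=1,
)
print(f'Approach 1: {t1:.3f} s, Approach 2: {t2:.3f} s')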
The Q&A on the apply method recommends relying on vectorized functions whenever possible, much as @sammywemmy's comment implies; I see this as the overlap between the two questions. However, the Q&A on the apply method also states:
"...here are some common situations where you will want to get rid of any calls to
apply...Numeric Data"
@sammywemmy's comment does not "get rid of any calls to" the transform method; on the contrary, their answer relies on transform. Therefore, unless @sammywemmy's suggestion is strictly dominated by an alternative approach that does not rely on the transform method, I think my question and its answer are sufficiently distinct from the discussion in the Q&A on the apply method. For concreteness, my reading of @sammywemmy's suggestion is sketched below. (Thank you for your patience and help.)
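If I understand the comment correctly, the suggestion is to pass the built-in 'mean' reducer to transform instead of a Python lambda, which keeps the computation vectorized while still calling transform. This is my reading, not a quote of the comment:

# Vectorized centering: the built-in 'mean' reducer avoids the
# Python-level lambda, yet the code still relies on transform.
df_r['xvar_center'] = (
    df_r['xvar']
    - df_r.groupby('id')['xvar'].transform('mean')
)

If that is indeed the suggestion, it reinforces my point: the fast path still goes through the transform method; it only avoids the Python-level lambda.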