30.2 ms ± 2.53 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
3.18 ms ± 180 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
136 ms ± 7.08 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
2.25 s ± 331 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Digging a little deeper, one issue is that IntervalTree hardcodes leafsize=100, which doesn't scale well for larger data. Ideally, the leaf size should be set dynamically based on the number of intervals.
Any progress on this issue? I encountered this performance problem as well when working on large DataFrames with a pd.IntervalIndex as bins. In my case the intervals are non-contiguous, so I cannot pass array bins directly. But I think a simple workaround with minimal performance overhead is the following:
from itertools import chain

import numpy as np
import pandas as pd

array = np.random.uniform(0, 7, size=(1000,))
bins = pd.IntervalIndex.from_arrays([0, 2, 4, 6], [1, 3, 5, 7])

# Workaround for `pd.cut(array, bins=bins)`, assuming no duplicate bin edges:
# interleave the left/right edges so each gap becomes an interval of its own,
# then drop the gap categories. Note `pd.cut` needs a materialized sequence
# of edges, so wrap the chain in `list(...)`.
edges = list(chain.from_iterable(zip(bins.left, bins.right)))
out = pd.cut(array, bins=edges)
out = out.remove_categories(out.categories[1::2])

# Test correctness against the direct (slow) IntervalIndex path
expected = pd.cut(array, bins=bins)
pd.testing.assert_extension_array_equal(out, expected)
Pandas version checks
I have checked that this issue has not already been reported.
I have confirmed this issue exists on the latest version of pandas.
I have confirmed this issue exists on the main branch of pandas.
Reproducible Example
pd.cut with an IntervalIndex is ~10x slower than pd.cut with bin edges specified, for large arrays. For small arrays, using the IntervalIndex is faster. This was surprising to me. If this is expected behaviour, it would be nice to update the docstring for pd.cut.
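A sketch of the comparison (the array sizes, bin count, and repeat count below are assumptions for illustration; the original benchmark code is not shown above):

```python
import timeit

import numpy as np
import pandas as pd

edges = np.linspace(0.0, 10.0, 11)                    # plain bin edges
interval_bins = pd.IntervalIndex.from_breaks(edges)   # equivalent IntervalIndex

for n in (1_000, 100_000):  # assumed sizes; the issue doesn't state them
    array = np.random.uniform(0.1, 9.9, size=n)
    t_edges = timeit.timeit(lambda: pd.cut(array, bins=edges), number=3)
    t_iv = timeit.timeit(lambda: pd.cut(array, bins=interval_bins), number=3)
    print(f"n={n:>7}: edges={t_edges:.4f}s  IntervalIndex={t_iv:.4f}s")
```

Both calls produce the same categorization here, since `IntervalIndex.from_breaks(edges)` yields exactly the right-closed intervals that `pd.cut` builds from the edge array; only the lookup path differs.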
Installed Versions
commit : e8093ba
python : 3.10.2.final.0
python-bits : 64
OS : Darwin
OS-release : 21.5.0
Version : Darwin Kernel Version 21.5.0: Tue Apr 26 21:08:22 PDT 2022; root:xnu-8020.121.3~4/RELEASE_X86_64
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.4.3
numpy : 1.21.6
pytz : 2021.3
dateutil : 2.8.2
setuptools : 59.8.0
pip : 22.0.3
Cython : None
pytest : 6.2.5
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.0.3
IPython : 8.0.1
pandas_datareader: None
bs4 : None
bottleneck : None
brotli :
fastparquet : None
fsspec : 2022.01.0
gcsfs : None
markupsafe : 2.0.1
matplotlib : 3.5.1
numba : 0.55.0
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : None
snappy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : 2022.6.0rc1.dev16+g6c8db5ed0
xlrd : None
xlwt : None
zstandard : None
Prior Performance
No response