
PERF: cut with IntervalIndex slower than cut with array of bin edges for large arrays #47614

Open
3 tasks done
dcherian opened this issue Jul 6, 2022 · 2 comments
Labels
cut, Interval, Performance

Comments


dcherian commented Jul 6, 2022

Pandas version checks

  • I have checked that this issue has not already been reported.

  • I have confirmed this issue exists on the latest version of pandas.

  • I have confirmed this issue exists on the main branch of pandas.

Reproducible Example

pd.cut with an IntervalIndex is ~10x slower than pd.cut with an array of bin edges for large arrays. For small arrays, using the IntervalIndex is faster.

This was surprising to me. If this is expected behaviour, it would be nice to note it in the docstring for pd.cut.

import numpy as np
import pandas as pd

bins = np.arange(-40, 40, 0.1)
index = pd.IntervalIndex.from_breaks(bins)

N = 1_000

%timeit pd.cut(0 + 20 * np.random.standard_normal(N), bins)
%timeit pd.cut(0 + 20 * np.random.standard_normal(N), index)

N = 1_000_000

%timeit pd.cut(0 + 20 * np.random.standard_normal(N), bins)
%timeit pd.cut(0 + 20 * np.random.standard_normal(N), index)
# N = 1_000
30.2 ms ± 2.53 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)  # bin edges
3.18 ms ± 180 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)  # IntervalIndex

# N = 1_000_000
136 ms ± 7.08 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)  # bin edges
2.25 s ± 331 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)  # IntervalIndex
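
For what it's worth, a minimal sketch (my own illustration, not a claim about pandas internals) of why the edge-array path can be so much faster: for sorted, non-overlapping, right-closed bins the lookup reduces to a single np.searchsorted over the breaks, and it produces the same codes as the IntervalIndex path.

import numpy as np
import pandas as pd

breaks = np.arange(-40, 40, 0.1)
index = pd.IntervalIndex.from_breaks(breaks)  # closed="right" by default
values = 20 * np.random.standard_normal(1_000_000)

# For right-closed bins (breaks[i], breaks[i+1]], searchsorted with
# side="left" returns i + 1 for a value in bin i, so subtract 1.
codes = np.searchsorted(breaks, values, side="left") - 1
# Values at/below the first break or above the last fall outside all bins.
codes[(values <= breaks[0]) | (values > breaks[-1])] = -1

expected = pd.cut(values, index).codes  # -1 marks out-of-bins / NaN
np.testing.assert_array_equal(codes, expected)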

Installed Versions

INSTALLED VERSIONS

commit : e8093ba
python : 3.10.2.final.0
python-bits : 64
OS : Darwin
OS-release : 21.5.0
Version : Darwin Kernel Version 21.5.0: Tue Apr 26 21:08:22 PDT 2022; root:xnu-8020.121.3~4/RELEASE_X86_64
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8

pandas : 1.4.3
numpy : 1.21.6
pytz : 2021.3
dateutil : 2.8.2
setuptools : 59.8.0
pip : 22.0.3
Cython : None
pytest : 6.2.5
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.0.3
IPython : 8.0.1
pandas_datareader: None
bs4 : None
bottleneck : None
brotli :
fastparquet : None
fsspec : 2022.01.0
gcsfs : None
markupsafe : 2.0.1
matplotlib : 3.5.1
numba : 0.55.0
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : None
snappy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : 2022.6.0rc1.dev16+g6c8db5ed0
xlrd : None
xlwt : None
zstandard : None

Prior Performance

No response

@dcherian added the Needs Triage and Performance labels Jul 6, 2022
dcherian added commits to xarray-contrib/flox that referenced this issue on Jul 8 and Jul 9, 2022; one of the commit messages reads:
* Skip factorizing with RangeIndex

fastpath for binning by multiple variables.

* Workaround pandas-dev/pandas#47614

* Avoid dispatching to pandas searchsorted

* Remove unused variable.

* ravel to reshape(-1)

* Revert "Avoid dispatching to pandas searchsorted"

This reverts commit 9aab6a4c194fa5c14b1f28ccb89dc7f8f8ebaa7d.
mroeschke (Member) commented

Digging a little deeper, one issue is that IntervalTree hardcodes leaf_size=100, which doesn't scale well for larger data. Ideally, the leaf size should be set dynamically depending on the number of intervals.
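
For anyone who wants to experiment with that, here is a minimal sketch. It assumes the private pandas._libs.interval.IntervalTree constructor (with its leaf_size keyword) and its get_indexer method; these are internal APIs that may change between versions, so this is only a way to poke at the effect of the leaf size, not a supported workaround.

import numpy as np
from pandas._libs.interval import IntervalTree

breaks = np.arange(-40, 40, 0.1)
left, right = breaks[:-1], breaks[1:]
values = 20 * np.random.standard_normal(1_000_000)

# Default tree (leaf_size=100, as used by IntervalIndex lookups)
# versus the same tree built with a much larger leaf size.
default_tree = IntervalTree(left, right, closed="right")
big_leaf_tree = IntervalTree(left, right, closed="right", leaf_size=5_000)

%timeit default_tree.get_indexer(values)
%timeit big_leaf_tree.get_indexer(values)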

@mroeschke added the Interval and cut labels and removed the Needs Triage label Feb 2, 2024

hchau630 commented Mar 7, 2025

Any progress on this issue? I ran into this performance problem as well when working with large DataFrames and a pd.IntervalIndex as bins. In my case the intervals are non-contiguous, so I can't use an array of bin edges directly. A simple workaround with minimal performance overhead is the following:

from itertools import chain
import numpy as np
import pandas as pd

array = np.random.uniform(0, 7, size=(1000,))
bins = pd.IntervalIndex.from_arrays([0, 2, 4, 6], [1, 3, 5, 7])

# workaround for `pd.cut(array, bins=bins)`, assuming no duplicate bin edges
out = pd.cut(array, bins=chain.from_iterable(zip(bins.left, bins.right)))
out = out.remove_categories(out.categories[1::2])

# test correctness
expected = pd.cut(array, bins=bins)
pd.testing.assert_extension_array_equal(out, expected)
