Caching the tuple hash calculation speeds up some code significantly #131525

Open
mdboom opened this issue Mar 20, 2025 · 1 comment
Assignees: mdboom

Labels: interpreter-core (Objects, Python, Grammar, and Parser dirs), performance (Performance or resource usage), type-feature (A feature request or enhancement)

Comments

mdboom (Contributor) commented Mar 20, 2025

Proposal:

Back in 2013, it was determined that caching the result of tuple_hash did not yield any significant speedup.

However, a lot has changed since then, and in a recent experiment that added a tuple hash cache back in, the mdp benchmark sped up by 86%.

Admittedly, there was no measurable improvement on any other benchmark, but there also appears to be no downside, including no increase in memory usage as measured by max_rss.
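To make the intuition concrete, here is a small pure-Python micro-benchmark sketch. This is only an illustration, not the proposed change itself (which would presumably be a C-level cache around tuple_hash in Objects/tupleobject.c): today a tuple used repeatedly as a dict key is re-hashed on every lookup, so code that probes dicts with long tuple keys in a tight loop, as the mdp benchmark presumably does, pays the full hashing cost each time.

```python
import timeit

# A moderately long tuple used over and over as a dict key, as in
# dynamic-programming-style workloads.
key = tuple(range(100))
table = {key: 0}

# Each lookup currently re-hashes `key` from scratch; with a cached hash the
# per-lookup hashing cost would drop to a field read after the first call.
lookup_time = timeit.timeit("table[key]", globals={"table": table, "key": key},
                            number=200_000)
hash_time = timeit.timeit("hash(key)", globals={"key": key}, number=200_000)

print(f"dict lookups: {lookup_time:.3f}s  (much of this is re-hashing the key)")
print(f"hash() alone: {hash_time:.3f}s")
```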

Has this already been discussed elsewhere?

No response given

Links to previous discussion of this feature:

No response given

Linked PRs

mdboom added the performance (Performance or resource usage) label Mar 20, 2025
mdboom self-assigned this Mar 20, 2025
mdboom added a commit to mdboom/cpython that referenced this issue Mar 20, 2025
mdboom added a commit to mdboom/cpython that referenced this issue Mar 20, 2025
rhettinger (Contributor) commented:

This would nicely simplify and speed up the pure Python implementation of lru_cache. The _HashedSeq class would no longer be needed, and a plain tuple would suffice.
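For context, this is roughly what _HashedSeq looks like in Lib/functools.py today (slightly abridged); it exists solely to ensure the key tuple is hashed at most once, which a tuple-level hash cache would make unnecessary:

```python
class _HashedSeq(list):
    """Wraps an lru_cache key so that hash() is computed at most once,
    since the cache may hash the key several times on a miss."""

    __slots__ = 'hashvalue'

    def __init__(self, tup, hash=hash):
        self[:] = tup                # store the key elements
        self.hashvalue = hash(tup)   # hash the tuple once, up front

    def __hash__(self):
        return self.hashvalue        # reuse the cached value on every lookup
```

With the cache built into tuples themselves, the key construction in lru_cache could return a plain tuple and this wrapper could be dropped.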

picnixz added the type-feature (A feature request or enhancement) and interpreter-core (Objects, Python, Grammar, and Parser dirs) labels Mar 22, 2025