Add asv benchmarks for essential functions #23935
Conversation
Hello @qwhelan! Thanks for updating the PR.
Comment last updated on November 26, 2018 at 23:13 UTC
Force-pushed from ad87395 to be0b0cd.
Could you show the output of
Force-pushed from be0b0cd to a0450cb.
@mroeschke Sure,
Codecov Report

```
@@           Coverage Diff            @@
##           master   #23935   +/-   ##
=======================================
  Coverage   92.31%   92.31%
=======================================
  Files         161      161
  Lines       51471    51471
=======================================
  Hits        47515    47515
  Misses       3956     3956
```

Continue to review full report at Codecov.
@mroeschke Nothing too shocking, but a reasonable fraction are demonstrating small slowdowns compared to
Looks ok to me. @topper-123 if you have any comments. @mroeschke feel free to merge when satisfied.
Force-pushed from a0450cb to 9b1a80f.
@mroeschke And the
Great, thanks @qwhelan! More benchmarks are always appreciated.
Looks good. @qwhelan, you mention
I'm working on Windows and experience very uneven results when running ASVs, and I actually have more trust in doing timeit manually, which is a bit sad. Got any pointers on this bug? I can't find anything on the ASV GitHub issue list.
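As a rough illustration of the manual `timeit` approach mentioned here, a sketch might look like the following; the DataFrame shape and the `.rename()` call are illustrative choices, not taken from this PR:

```python
import timeit

import numpy as np
import pandas as pd

# Hypothetical workload: a modest DataFrame and a cheap operation.
df = pd.DataFrame(np.random.randn(10_000, 10))

# repeat=5 runs of 100 calls each; the minimum run is least affected
# by background noise, which is why timeit's docs recommend it.
runs = timeit.repeat(lambda: df.rename(columns=str), repeat=5, number=100)
best = min(runs) / 100  # seconds per call
print(f"{best * 1e6:.1f} us per call")
```

Taking the minimum over several repeats is the usual way to reduce the scheduling jitter that makes single timings uneven.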
@topper-123 It's pretty simple. I've been punting on opening a PR on this as I'm waiting for comments on the one I already have open, but I have a small notebook demonstrating the quantization that I threw together before I went on vacation. I'll try to post it later today.
Opened airspeed-velocity/asv#775 for anyone interested in the
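A minimal sketch of how timer quantization can be observed (this is not the notebook referenced above, just an illustration of the effect: on platforms with a coarse clock, timings shorter than the clock's tick get rounded to multiples of it, which produces uneven benchmark results):

```python
import time

def timer_granularity(timer, samples=100_000):
    """Estimate the smallest nonzero tick a timer reports by polling it
    back-to-back and taking the minimum observed delta."""
    smallest = float("inf")
    prev = timer()
    for _ in range(samples):
        cur = timer()
        if cur != prev:
            smallest = min(smallest, cur - prev)
            prev = cur
    return smallest

# Benchmarks much shorter than this tick are effectively quantized.
print(f"perf_counter tick: {timer_granularity(time.perf_counter):.2e} s")
print(f"time.time tick:    {timer_granularity(time.time):.2e} s")
```

Comparing the two clocks shows why a high-resolution timer matters for sub-millisecond benchmarks.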
* PERF: add asv benchmarks for uncovered plotting methods
* PERF: add asv benchmark for DataFrame.rename()
* PERF: add asv benchmarks for .dot()
* PERF: add asv benchmarks for uncovered string methods
* PERF: add asv benchmarks for .expanding() and .ewm()
* PERF: add asv benchmarks for .corr() and .cov()
* PERF: add asv benchmarks for TimedeltaIndex
* PERF: add asv benchmarks for cut()/qcut()
I've jury-rigged `asv` with `coverage` to enable automatic identification of what parts of the code base are lacking `asv` benchmark coverage, and found a number of essential functions on my first pass:

* `pd.DataFrame.rename()`
* `pd.cut()` / `pd.qcut()`
* `.dot()`
* any `.plot()` method other than `line`
* `.str` methods
* `.corr()` and `.cov()` (we're covering the rolling version, which is a different path than full-sample)
* `TimedeltaIndex`
There's still a lot left, but the above represents about a 20% increase in benchmark coverage compared to baseline. (This is a little bit of a hand-wavy metric, as `import`s/`def`s/etc. that are normally counted towards coverage are excluded unless explicitly run inside a benchmark, leading to very low benchmark coverage metrics in the neighborhood of 10%.)

`git diff upstream/master -u -- "*.py" | flake8 --diff`
s/etc that are normally counted towards coverage are excluded unless explicitly run inside a benchmark, leading to very low benchmark coverage metrics in the neighborhood of 10%).git diff upstream/master -u -- "*.py" | flake8 --diff