Chunking implemented for Issue #67 for improved memory management #105
Conversation
Main change: created chunking logic to call the classifier with a maximum number of test samples per call (detailed description as a code comment). In addition, the following changes were made:
- Replaced the try/catch with an explicit check for the available function, either decision_function or predict_proba.
- Explicitly defined the solver in the example, due to changes in the interface, as per recommendation.
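A minimal sketch of what such an explicit check could look like, assuming a fitted estimator named classifier; the actual PR code may differ:

```python
# Hedged sketch: pick the prediction function explicitly instead of
# wrapping the call in try/except.
if hasattr(classifier, "decision_function"):
    classifier_pred_or_decide = classifier.decision_function
else:
    classifier_pred_or_decide = classifier.predict_proba
```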
```python
# As we do not know the shape of the output (it depends on whether the
# decision function or the predict probability is called), we simply call
# the classifier on a single sample and use the shape of that output.
Y_result = np.empty((X_axis0_size,) + classifier_pred_or_decide(X[0:1]).shape[1:])
```
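For context, here is a minimal sketch of the chunked, pre-allocating prediction loop under discussion; the names predict_in_chunks and chunk_size are hypothetical, not the PR's actual identifiers:

```python
import numpy as np

def predict_in_chunks(classifier_pred_or_decide, X, chunk_size=1000):
    # Probe a single sample to learn the per-sample output shape and dtype.
    probe = classifier_pred_or_decide(X[0:1])
    n_samples = X.shape[0]
    # Pre-allocate the full result once, then fill it chunk by chunk.
    Y_result = np.empty((n_samples,) + probe.shape[1:], dtype=probe.dtype)
    for start in range(0, n_samples, chunk_size):
        stop = min(start + chunk_size, n_samples)
        Y_result[start:stop] = classifier_pred_or_decide(X[start:stop])
    return Y_result
```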
I think we don't need to do that. The time it takes to allocate these is so small that it doesn't really matter.
I just saw your comment re seasoned assembly programmer.
The reason I don't want to optimize here is the cost of maintenance vs speed.
Reallocating is clearly slower, but I'm pretty sure it's imperceptibly slower, and the code using pre-allocation is quite a bit more complex, more likely to introduce bugs, and harder to modify.
I imagine the python community makes a different trade-off between readability and speed here than the assembly community ;)
Thanks for the feedback!
Sorry the explanation wasn't very clear. It is not about the pre-allocation of Y_result versus the allocation of the loop_times individual result chunks; I agree that makes no difference at all.
It is about concatenating the incrementally growing result set, which copies the data on each loop iteration, using something like:

```python
y_result = np.concatenate((y_result, y_chunk))
```

This allocates memory and copies the data on every loop iteration. For the larger data sets I tried, it is an order of magnitude slower (10 seconds vs. 100 seconds), which I thought would justify the pre-allocation.
I actually got the pre-allocation from a Python recipe; in assembler it would look very different ;)
I would not grow the result in each iteration but only call stack once at the end. That should reduce the memory allocated, and I would be surprised if it impacted the runtime that much, though I could be wrong of course.
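A minimal sketch of that suggestion, with hypothetical names (chunk_size is assumed); np.concatenate is used here since the per-chunk outputs share all trailing dimensions:

```python
import numpy as np

def predict_in_chunks_v2(classifier_pred_or_decide, X, chunk_size=1000):
    # Collect each chunk's output in a Python list (cheap appends) and
    # concatenate once at the end, avoiding per-iteration array copies.
    chunks = []
    for start in range(0, X.shape[0], chunk_size):
        chunks.append(classifier_pred_or_decide(X[start:start + chunk_size]))
    return np.concatenate(chunks)
```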
I'll write the version without the pre-allocation and test it ... I'll let you know.
Below is the performance evaluation of the current HEAD and the two alternatives.
In summary:
- Both alternatives show a significant advantage in memory and execution time over the current master.
- The advantage of the pre-allocation is minimal / hardly measurable. Hence I will submit a new PR with the accumulation code, which is slightly more readable. Please close this one.
Performance evaluation for:

```python
MLPClassifier(solver='lbfgs', random_state=0, hidden_layer_sizes=[1000, 1000, 1000])
```

Runtimes are in seconds, averaged over 3 iterations per configuration.

| Variant | Architecture | Avg. runtime (s) | Std. dev. (s) | Memory allocated |
|---|---|---|---|---|
| Current HEAD | 32-bit | Out of Memory Error | n/a | n/a |
| Current HEAD | 64-bit | 361.94 | 0.85 | 36 GB |
| bug_32bit_out_of_memory (pre-allocation, this PR) | 32-bit | 151.97 | 0.54 | ~1 GB |
| bug_32bit_out_of_memory (pre-allocation, this PR) | 64-bit | 103.93 | 0.34 | ~1 GB |
| bug_32bit_out_of_memory_v2 (accumulation) | 32-bit | 152.87 | 0.67 | ~1 GB |
| bug_32bit_out_of_memory_v2 (accumulation) | 64-bit | 104.28 | 0.43 | ~1 GB |
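The thread does not show the timing harness itself; a minimal sketch of how averages and standard deviations like these could be gathered (benchmark and n_runs are hypothetical names, not from the PR):

```python
import statistics
import time

def benchmark(fn, n_runs=3):
    # Time n_runs full calls of fn and report mean and standard deviation.
    runtimes = []
    for _ in range(n_runs):
        start = time.perf_counter()
        fn()
        runtimes.append(time.perf_counter() - start)
    return statistics.mean(runtimes), statistics.stdev(runtimes)
```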
Thanks!
Can you check the overhead this creates for simpler models, like decision trees and logistic regression? I imagine it's near zero, but it would be good to check.
I tested it with Linear Regression; it does not make any difference (if you don't concatenate ;)
@behreth I think just running it on a single fast model and the neural net should be fine, thank you!
Replaced with revised PR #106
Thanks for the benchmark! You could have pushed into this PR, but opening a new one is fine as well :)
This closes issue #67 (memory error on 32-bit Python).
Main change:
- Created chunking logic to call the classifier with a maximum number of test samples per call (detailed description as a code comment).
In addition the following minor changes were made:
- Replaced the try/catch with an explicit check for the available function, either decision_function or predict_proba.
- Explicitly defined the solver in the example, due to changes in the interface, as per recommendation.
Notes
I initially tested whether the architecture of the Python executable is 64-bit or 32-bit, in order to skip the chunking when it is unnecessary, using:

```python
import sys

is_64bits = sys.maxsize > 2**32
```
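A sketch of how such a gate might have been wired in, reusing the hypothetical predict_in_chunks from the sketch above (this check was ultimately dropped, as explained below):

```python
import sys

# Chunk only on 32-bit interpreters, where address space is the hard limit.
# This gate was removed because chunking also helped on 64-bit machines.
if sys.maxsize > 2**32:
    Y_result = classifier_pred_or_decide(X)  # predict in a single call
else:
    Y_result = predict_in_chunks(classifier_pred_or_decide, X)
```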
Nevertheless, when running the test with a large number of nodes (> 1000) and 3 layers, chunking proved to be a significant performance improvement on 64-bit machines as well, as it avoided paging memory on my 16 GB machine (a factor of 3-5 speedup).
This was tested across both architectures (on Windows).
Finally, it should be noted that running the code on 64-bit architectures is recommended.