pd.to_numeric - float64, object or error? #17007
Comments
This case is clearly a bug:

```python
In [18]: pd.to_numeric([200, 300, '', 'NaN', 10000000000000000000], errors='coerce')
Out[18]: array([200, 300, '', 'NaN', 10000000000000000000], dtype=object)
```

But it is not entirely clear how we should be handling "big" integers. numpy does not cast to a float on default construction, but we could here:

```python
In [22]: np.array([10000000000000000000])
Out[22]: array([10000000000000000000], dtype=uint64)

In [23]: np.array([30000000000000000000])
Out[23]: array([30000000000000000000], dtype=object)
```
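To make that escalation explicit, here is a small runnable sketch (values chosen for illustration) of how numpy widens integer dtypes instead of casting to float:

```python
import numpy as np

# Values that fit in int64 stay int64; values that only fit in
# uint64 escalate to uint64; anything larger falls back to object.
print(np.array([2**62]).dtype)       # int64
print(np.array([10**19]).dtype)      # uint64 (10**19 > 2**63 - 1)
print(np.array([3 * 10**19]).dtype)  # object (3 * 10**19 > 2**64 - 1)
```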
@chris-b1 : Your first example, I can explain. You can't hold both NaN (from coercing '' and 'NaN') and uint64 values in the same numeric array, so the result falls back to object. As for your point about "big" integers, do keep in mind that you can't hold integers above np.iinfo(np.uint64).max.
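For reference, the exact bounds in play here:

```python
import numpy as np

print(np.iinfo(np.int64).max)   # 9223372036854775807  == 2**63 - 1
print(np.iinfo(np.uint64).max)  # 18446744073709551615 == 2**64 - 1
```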
@mficek : My explanation about not holding NaN alongside uint64 applies to your examples as well. For your first example, … Of the four examples, only your last one would I consider a bug. When we …
Ah, I see... I thought it was erroring on an element earlier. Yes, that …

We made a deliberate decision not to do that.
Thanks for your explanation! I figured out a workaround that works for me with pandas==0.20.3: basically, convert to string and append '.' to every "number" (which forces the conversion to always yield float64), then replace elements bigger than np.iinfo(np.uint64).max with np.nan, drop the NaNs, and convert everything to uint64 (the type I need in my code):

```python
typ = np.uint64  # the target dtype
# Appending '.' makes every token parse as a float, so coercion
# produces float64 + NaN instead of an object array:
d[col] = (d[col].astype(str) + '.').apply(pd.to_numeric, errors='coerce')
d.loc[d[col] > np.iinfo(typ).max, col] = np.nan
d = d.dropna(subset=[col])  # drop the rows; reassigning d[col].dropna()
                            # would realign on the index and keep the NaNs
d[col] = d[col].astype(typ)
```

Nothing nice, but it works. Can I help in any way with resolving this issue? Despite being a big fan of pandas, I have never contributed to its code...
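For anyone reading this on a newer pandas: the nullable UInt64 extension dtype (added in pandas 0.24) can represent missing values directly, which sidesteps the whole no-NaN-in-uint64 problem. A minimal sketch:

```python
import pandas as pd

# Nullable unsigned integers store missing values as <NA> instead of
# forcing a fallback to an object array:
s = pd.array([200, 300, None, 10**19], dtype="UInt64")
print(s.dtype)  # UInt64
```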
@chris-b1 : I actually can explain why the first and third examples behave as they do. When you try to convert the "NaN" to a number under errors='coerce', it becomes np.nan, and np.nan cannot be stored alongside uint64 values, so the whole result falls back to object. To be fair, this is not well-documented, and we should at least document this (unless people really think this behavior should be changed).
@mficek : Absolutely! The following code locations will help you out here: …

Try to figure out why the … behaves the way it does there. Finally, here is the link to the documentation for contributing to pandas: …
I think that that …

That said, I do see your point that in the range of int64 max to uint64 max it becomes "lossier".
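A quick illustration of that lossiness: float64 carries 52 fraction bits, so above 2**63 the spacing between adjacent representable floats is already 2**11 = 2048:

```python
x = 2**63 + 1                               # just above np.iinfo(np.int64).max
print(int(float(x)) == x)                   # False: rounds to exactly 2**63
print(float(2**63 + 2048) - float(2**63))   # 2048.0, the local float spacing
```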
@chris-b1 : I would disagree with you on that. Also, having worked on …
Sorry, what I meant is that we don't have a placeholder for missing uint64 values.

It doesn't matter what is there when we have …
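Concretely, the constraint being described, as a two-liner:

```python
import numpy as np

arr = np.array([1, 2, 3], dtype=np.uint64)
arr[0] = np.nan  # raises ValueError: cannot convert float NaN to integer
```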
Code Sample, a copy-pastable example if possible
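The exact sample here is an assumption, reconstructed from the values and behavior discussed in this thread (pandas 0.20.3):

```python
import pandas as pd

s = pd.Series([200, 300, '', 'NaN', 10000000000000000000])

# Passing the Series directly reportedly yields object dtype:
pd.to_numeric(s, errors='coerce')

# Applying element-wise reportedly yields float64:
s.apply(pd.to_numeric, errors='coerce')
```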
Problem description
Hi guys, I realized that the result of to_numeric changes depending on the way you pass a Series to that function. Please see the example above. When I call to_numeric with the series passed as a parameter, it returns "object", but when I apply to_numeric to that series, it returns "float64". Moreover, I'm a bit confused about the correct behavior of to_numeric: why doesn't it convert a very long int-like number to float64? It throws an exception from which I can't even deduce which number (position, index) caused it.
I'm pretty sure my issue is already being discussed somewhere; I tried to search for the proper issue but only found bits and pieces about to_numeric and conversions in general. Please feel free to move my issue to a more appropriate thread.
Output of pd.show_versions()
pandas: 0.20.3
pytest: None
pip: 9.0.1
setuptools: 27.2.0
Cython: None
numpy: 1.13.1
scipy: None
xarray: None
IPython: 5.4.1
sphinx: None
patsy: None
dateutil: 2.6.1
pytz: 2017.2
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: 0.999999999
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.9.6
s3fs: None
pandas_gbq: None
pandas_datareader: None