title | diff | body | url | created_at | closed_at | merged_at | updated_at |
---|---|---|---|---|---|---|---|
DOC iteritems docstring update and examples | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 81d5c112885ec..288ff26b14bc4 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -778,14 +778,50 @@ def style(self):
return Styler(self)
def iteritems(self):
- """
+ r"""
Iterator over (column name, Series) pairs.
- See also
+ Iterates over the DataFrame columns, returning a tuple with the column name
+ and the content as a Series.
+
+ Yields
+ ------
+ label : object
+ The column names for the DataFrame being iterated over.
+ content : Series
+ The column entries belonging to each label, as a Series.
+
+ See Also
--------
- iterrows : Iterate over DataFrame rows as (index, Series) pairs.
- itertuples : Iterate over DataFrame rows as namedtuples of the values.
+ DataFrame.iterrows : Iterate over DataFrame rows as (index, Series) pairs.
+ DataFrame.itertuples : Iterate over DataFrame rows as namedtuples of the values.
+ Examples
+ --------
+ >>> df = pd.DataFrame({'species': ['bear', 'bear', 'marsupial'],
+ ...                    'population': [1864, 22000, 80000]},
+ ...                   index=['panda', 'polar', 'koala'])
+ >>> df
+          species  population
+ panda       bear        1864
+ polar       bear       22000
+ koala  marsupial       80000
+ >>> for label, content in df.iteritems():
+ ...     print('label:', label)
+ ...     print('content:', content, sep='\n')
+ ...
+ label: species
+ content:
+ panda         bear
+ polar         bear
+ koala    marsupial
+ Name: species, dtype: object
+ label: population
+ content:
+ panda     1864
+ polar    22000
+ koala    80000
+ Name: population, dtype: int64
"""
if self.columns.is_unique and hasattr(self, '_item_cache'):
for k in self.columns:
| Updated iteritems docstring to start with an infinitive and added a short example
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/22658 | 2018-09-10T12:34:25Z | 2018-09-27T12:54:12Z | 2018-09-27T12:54:12Z | 2018-09-27T12:54:28Z |
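A minimal usage sketch of the API documented above (editor's illustration, not part of the PR; `df` mirrors the toy frame from the new docstring — note that on pandas ≥ 2.0 this iterator survives only under the name `DataFrame.items`):

```python
import pandas as pd

df = pd.DataFrame({'species': ['bear', 'bear', 'marsupial'],
                   'population': [1864, 22000, 80000]},
                  index=['panda', 'polar', 'koala'])

# iteritems() yields one (column label, column Series) pair per column,
# so this loop visits 'species' and then 'population' in column order.
for label, content in df.iteritems():
    print('label:', label)
    print('content:', content, sep='\n')
```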
DOC: Follows ISO 639-1 code | diff --git a/doc/cheatsheet/Pandas_Cheat_Sheet_JA.pdf b/doc/cheatsheet/Pandas_Cheat_Sheet_JA.pdf
new file mode 100644
index 0000000000000..daa65a944e68a
Binary files /dev/null and b/doc/cheatsheet/Pandas_Cheat_Sheet_JA.pdf differ
diff --git a/doc/cheatsheet/Pandas_Cheat_Sheet_JA.pptx b/doc/cheatsheet/Pandas_Cheat_Sheet_JA.pptx
new file mode 100644
index 0000000000000..6270a71e20ee8
Binary files /dev/null and b/doc/cheatsheet/Pandas_Cheat_Sheet_JA.pptx differ
diff --git a/doc/cheatsheet/Pandas_Cheat_Sheet_JP.pdf b/doc/cheatsheet/Pandas_Cheat_Sheet_JP.pdf
deleted file mode 100644
index 746d1b6c980fe..0000000000000
Binary files a/doc/cheatsheet/Pandas_Cheat_Sheet_JP.pdf and /dev/null differ
diff --git a/doc/cheatsheet/Pandas_Cheat_Sheet_JP.pptx b/doc/cheatsheet/Pandas_Cheat_Sheet_JP.pptx
deleted file mode 100644
index f8b98a6f1f8e4..0000000000000
Binary files a/doc/cheatsheet/Pandas_Cheat_Sheet_JP.pptx and /dev/null differ
| ## Changes
- changed suffix `_JP` to `_JA` per `ISO 639-1`
- fixed a typo in `Pandas_Cheat_Sheet_JA.pdf`
- translated `Pandas_Cheat_Sheet_JA.pptx` into Japanese | https://api.github.com/repos/pandas-dev/pandas/pulls/22657 | 2018-09-10T08:51:47Z | 2018-09-30T21:27:18Z | 2018-09-30T21:27:18Z | 2018-09-30T21:27:18Z |
BUG: output formatting with to_html(), index=False and/or index_names=False (#22579, #22747) | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 9bb6e0c90ae06..0890dfe76bbde 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1520,6 +1520,8 @@ Notice how we now instead output ``np.nan`` itself instead of a stringified form
- :func:`read_sas()` will correctly parse sas7bdat files with data page types having also bit 7 set (so page type is 128 + 256 = 384) (:issue:`16615`)
- Bug in :meth:`detect_client_encoding` where potential ``IOError`` goes unhandled when importing in a mod_wsgi process due to restricted access to stdout. (:issue:`21552`)
- Bug in :func:`to_html()` with ``index=False`` misses truncation indicators (...) on truncated DataFrame (:issue:`15019`, :issue:`22783`)
+- Bug in :func:`to_html()` with ``index=False`` when both columns and row index are ``MultiIndex`` (:issue:`22579`)
+- Bug in :func:`to_html()` with ``index_names=False`` displaying index name (:issue:`22747`)
- Bug in :func:`DataFrame.to_string()` that broke column alignment when ``index=False`` and width of first column's values is greater than the width of first column's header (:issue:`16839`, :issue:`13032`)
- Bug in :func:`DataFrame.to_string()` that caused representations of :class:`DataFrame` to not take up the whole window (:issue:`22984`)
- Bug in :func:`DataFrame.to_csv` where a single level MultiIndex incorrectly wrote a tuple. Now just the value of the index is written (:issue:`19589`).
diff --git a/pandas/io/formats/html.py b/pandas/io/formats/html.py
index cac0c699d7046..eb11dd461927b 100644
--- a/pandas/io/formats/html.py
+++ b/pandas/io/formats/html.py
@@ -43,6 +43,35 @@ def __init__(self, formatter, classes=None, notebook=False, border=None,
self.table_id = table_id
self.render_links = render_links
+ @property
+ def show_col_idx_names(self):
+ # see gh-22579
+ # Column misalignment also occurs for
+ # a standard index when the columns index is named.
+ # Determine if ANY column names need to be displayed
+ # since if the row index is not displayed, a column of
+ # blank cells needs to be included before the DataFrame values.
+ # TODO: refactor to add show_col_idx_names property to
+ # DataFrameFormatter
+ return all((self.fmt.has_column_names,
+ self.fmt.show_index_names,
+ self.fmt.header))
+
+ @property
+ def row_levels(self):
+ if self.fmt.index:
+ # showing (row) index
+ return self.frame.index.nlevels
+ elif self.show_col_idx_names:
+ # see gh-22579
+ # Column misalignment also occurs for
+ # a standard index when the columns index is named.
+ # If the row index is not displayed, a column of
+ # blank cells needs to be included before the DataFrame values.
+ return 1
+ # not showing (row) index
+ return 0
+
@property
def is_truncated(self):
return self.fmt.is_truncated
@@ -201,7 +230,7 @@ def write_result(self, buf):
def _write_header(self, indent):
truncate_h = self.fmt.truncate_h
- row_levels = self.frame.index.nlevels
+
if not self.fmt.header:
# write nothing
return indent
@@ -267,12 +296,26 @@ def _write_header(self, indent):
values = (values[:ins_col] + [u('...')] +
values[ins_col:])
- name = self.columns.names[lnum]
- row = [''] * (row_levels - 1) + ['' if name is None else
- pprint_thing(name)]
-
- if row == [""] and self.fmt.index is False:
- row = []
+ # see gh-22579
+ # Column Offset Bug with to_html(index=False) with
+ # MultiIndex Columns and Index.
+ # Initially fill row with blank cells before column names.
+ # TODO: Refactor to remove code duplication with code
+ # block below for standard columns index.
+ row = [''] * (self.row_levels - 1)
+ if self.fmt.index or self.show_col_idx_names:
+ # see gh-22747
+ # If to_html(index_names=False) do not show columns
+ # index names.
+ # TODO: Refactor to use _get_column_name_list from
+ # DataFrameFormatter class and create a
+ # _get_formatted_column_labels function for code
+ # parity with DataFrameFormatter class.
+ if self.fmt.show_index_names:
+ name = self.columns.names[lnum]
+ row.append(pprint_thing(name or ''))
+ else:
+ row.append('')
tags = {}
j = len(row)
@@ -287,18 +330,28 @@ def _write_header(self, indent):
self.write_tr(row, indent, self.indent_delta, tags=tags,
header=True)
else:
- if self.fmt.index:
- row = [''] * (self.frame.index.nlevels - 1)
- row.append(self.columns.name or '')
- else:
- row = []
+ # see gh-22579
+ # Column misalignment also occurs for
+ # a standard index when the columns index is named.
+ # Initially fill row with blank cells before column names.
+ # TODO: Refactor to remove code duplication with code block
+ # above for columns MultiIndex.
+ row = [''] * (self.row_levels - 1)
+ if self.fmt.index or self.show_col_idx_names:
+ # see gh-22747
+ # If to_html(index_names=False) do not show columns
+ # index names.
+ # TODO: Refactor to use _get_column_name_list from
+ # DataFrameFormatter class.
+ if self.fmt.show_index_names:
+ row.append(self.columns.name or '')
+ else:
+ row.append('')
row.extend(self.columns)
align = self.fmt.justify
if truncate_h:
- if not self.fmt.index:
- row_levels = 0
- ins_col = row_levels + self.fmt.tr_col_num
+ ins_col = self.row_levels + self.fmt.tr_col_num
row.insert(ins_col, '...')
self.write_tr(row, indent, self.indent_delta, header=True,
@@ -346,9 +399,6 @@ def _write_regular_rows(self, fmt_values, indent):
index_values = self.fmt.tr_frame.index.map(fmt)
else:
index_values = self.fmt.tr_frame.index.format()
- row_levels = 1
- else:
- row_levels = 0
row = []
for i in range(nrows):
@@ -356,18 +406,24 @@ def _write_regular_rows(self, fmt_values, indent):
if truncate_v and i == (self.fmt.tr_row_num):
str_sep_row = ['...'] * len(row)
self.write_tr(str_sep_row, indent, self.indent_delta,
- tags=None, nindex_levels=row_levels)
+ tags=None, nindex_levels=self.row_levels)
row = []
if self.fmt.index:
row.append(index_values[i])
+ # see gh-22579
+ # Column misalignment also occurs for
+ # a standard index when the columns index is named.
+ # Add blank cell before data cells.
+ elif self.show_col_idx_names:
+ row.append('')
row.extend(fmt_values[j][i] for j in range(self.ncols))
if truncate_h:
- dot_col_ix = self.fmt.tr_col_num + row_levels
+ dot_col_ix = self.fmt.tr_col_num + self.row_levels
row.insert(dot_col_ix, '...')
self.write_tr(row, indent, self.indent_delta, tags=None,
- nindex_levels=row_levels)
+ nindex_levels=self.row_levels)
def _write_hierarchical_rows(self, fmt_values, indent):
template = 'rowspan="{span}" valign="top"'
@@ -376,6 +432,8 @@ def _write_hierarchical_rows(self, fmt_values, indent):
truncate_v = self.fmt.truncate_v
frame = self.fmt.tr_frame
nrows = len(frame)
+ # TODO: after gh-22887 fixed, refactor to use class property
+ # in place of row_levels
row_levels = self.frame.index.nlevels
idx_values = frame.index.format(sparsify=False, adjoin=False,
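Between the source changes above and the regenerated test fixtures below, a hedged repro sketch of the two bugs this PR targets (editor's illustration with made-up frames and labels, not code from the PR):

```python
import numpy as np
import pandas as pd

# gh-22579: with MultiIndex columns and a MultiIndex row index,
# to_html(index=False) used to emit the blank header cells reserved
# for the hidden index levels, shifting every header row out of
# alignment with the data cells.
columns = pd.MultiIndex.from_product([['a', 'b'], ['c', 'd']])
index = pd.MultiIndex.from_product([['x', 'y'], [0, 1]])
df = pd.DataFrame(np.arange(16).reshape(4, 4), index=index, columns=columns)
html = df.to_html(index=False)

# gh-22747: index_names=False used to still render the row that
# carries the index name in the <thead>.
df2 = pd.DataFrame({'A': [1, 2]},
                   index=pd.Index(['x', 'y'], name='index.name'))
html2 = df2.to_html(index_names=False)
```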
diff --git a/pandas/tests/io/formats/data/datetime64_hourformatter.html b/pandas/tests/io/formats/data/html/datetime64_hourformatter.html
similarity index 100%
rename from pandas/tests/io/formats/data/datetime64_hourformatter.html
rename to pandas/tests/io/formats/data/html/datetime64_hourformatter.html
diff --git a/pandas/tests/io/formats/data/datetime64_monthformatter.html b/pandas/tests/io/formats/data/html/datetime64_monthformatter.html
similarity index 100%
rename from pandas/tests/io/formats/data/datetime64_monthformatter.html
rename to pandas/tests/io/formats/data/html/datetime64_monthformatter.html
diff --git a/pandas/tests/io/formats/data/escape_disabled.html b/pandas/tests/io/formats/data/html/escape_disabled.html
similarity index 100%
rename from pandas/tests/io/formats/data/escape_disabled.html
rename to pandas/tests/io/formats/data/html/escape_disabled.html
diff --git a/pandas/tests/io/formats/data/escaped.html b/pandas/tests/io/formats/data/html/escaped.html
similarity index 100%
rename from pandas/tests/io/formats/data/escaped.html
rename to pandas/tests/io/formats/data/html/escaped.html
diff --git a/pandas/tests/io/formats/data/gh12031_expected_output.html b/pandas/tests/io/formats/data/html/gh12031_expected_output.html
similarity index 100%
rename from pandas/tests/io/formats/data/gh12031_expected_output.html
rename to pandas/tests/io/formats/data/html/gh12031_expected_output.html
diff --git a/pandas/tests/io/formats/data/gh14882_expected_output_1.html b/pandas/tests/io/formats/data/html/gh14882_expected_output_1.html
similarity index 100%
rename from pandas/tests/io/formats/data/gh14882_expected_output_1.html
rename to pandas/tests/io/formats/data/html/gh14882_expected_output_1.html
diff --git a/pandas/tests/io/formats/data/gh14882_expected_output_2.html b/pandas/tests/io/formats/data/html/gh14882_expected_output_2.html
similarity index 100%
rename from pandas/tests/io/formats/data/gh14882_expected_output_2.html
rename to pandas/tests/io/formats/data/html/gh14882_expected_output_2.html
diff --git a/pandas/tests/io/formats/data/gh14998_expected_output.html b/pandas/tests/io/formats/data/html/gh14998_expected_output.html
similarity index 100%
rename from pandas/tests/io/formats/data/gh14998_expected_output.html
rename to pandas/tests/io/formats/data/html/gh14998_expected_output.html
diff --git a/pandas/tests/io/formats/data/gh15019_expected_output.html b/pandas/tests/io/formats/data/html/gh15019_expected_output.html
similarity index 100%
rename from pandas/tests/io/formats/data/gh15019_expected_output.html
rename to pandas/tests/io/formats/data/html/gh15019_expected_output.html
diff --git a/pandas/tests/io/formats/data/gh21625_expected_output.html b/pandas/tests/io/formats/data/html/gh21625_expected_output.html
similarity index 100%
rename from pandas/tests/io/formats/data/gh21625_expected_output.html
rename to pandas/tests/io/formats/data/html/gh21625_expected_output.html
diff --git a/pandas/tests/io/formats/data/gh22270_expected_output.html b/pandas/tests/io/formats/data/html/gh22270_expected_output.html
similarity index 100%
rename from pandas/tests/io/formats/data/gh22270_expected_output.html
rename to pandas/tests/io/formats/data/html/gh22270_expected_output.html
diff --git a/pandas/tests/io/formats/data/html/gh22579_expected_output.html b/pandas/tests/io/formats/data/html/gh22579_expected_output.html
new file mode 100644
index 0000000000000..425b0f915ed16
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/gh22579_expected_output.html
@@ -0,0 +1,76 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr>
+ <th colspan="2" halign="left">a</th>
+ <th colspan="2" halign="left">b</th>
+ </tr>
+ <tr>
+ <th>c</th>
+ <th>d</th>
+ <th>c</th>
+ <th>d</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <td>0</td>
+ <td>10</td>
+ <td>10</td>
+ <td>10</td>
+ </tr>
+ <tr>
+ <td>1</td>
+ <td>11</td>
+ <td>11</td>
+ <td>11</td>
+ </tr>
+ <tr>
+ <td>2</td>
+ <td>12</td>
+ <td>12</td>
+ <td>12</td>
+ </tr>
+ <tr>
+ <td>3</td>
+ <td>13</td>
+ <td>13</td>
+ <td>13</td>
+ </tr>
+ <tr>
+ <td>4</td>
+ <td>14</td>
+ <td>14</td>
+ <td>14</td>
+ </tr>
+ <tr>
+ <td>5</td>
+ <td>15</td>
+ <td>15</td>
+ <td>15</td>
+ </tr>
+ <tr>
+ <td>6</td>
+ <td>16</td>
+ <td>16</td>
+ <td>16</td>
+ </tr>
+ <tr>
+ <td>7</td>
+ <td>17</td>
+ <td>17</td>
+ <td>17</td>
+ </tr>
+ <tr>
+ <td>8</td>
+ <td>18</td>
+ <td>18</td>
+ <td>18</td>
+ </tr>
+ <tr>
+ <td>9</td>
+ <td>19</td>
+ <td>19</td>
+ <td>19</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/gh22783_expected_output.html b/pandas/tests/io/formats/data/html/gh22783_expected_output.html
similarity index 100%
rename from pandas/tests/io/formats/data/gh22783_expected_output.html
rename to pandas/tests/io/formats/data/html/gh22783_expected_output.html
diff --git a/pandas/tests/io/formats/data/html/gh22783_named_columns_index.html b/pandas/tests/io/formats/data/html/gh22783_named_columns_index.html
new file mode 100644
index 0000000000000..55ab290920cc5
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/gh22783_named_columns_index.html
@@ -0,0 +1,30 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr style="text-align: right;">
+ <th>columns.name</th>
+ <th>0</th>
+ <th>1</th>
+ <th>...</th>
+ <th>3</th>
+ <th>4</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th></th>
+ <td>1.764052</td>
+ <td>0.400157</td>
+ <td>...</td>
+ <td>2.240893</td>
+ <td>1.867558</td>
+ </tr>
+ <tr>
+ <th></th>
+ <td>-0.977278</td>
+ <td>0.950088</td>
+ <td>...</td>
+ <td>-0.103219</td>
+ <td>0.410599</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/gh6131_expected_output.html b/pandas/tests/io/formats/data/html/gh6131_expected_output.html
similarity index 100%
rename from pandas/tests/io/formats/data/gh6131_expected_output.html
rename to pandas/tests/io/formats/data/html/gh6131_expected_output.html
diff --git a/pandas/tests/io/formats/data/gh8452_expected_output.html b/pandas/tests/io/formats/data/html/gh8452_expected_output.html
similarity index 100%
rename from pandas/tests/io/formats/data/gh8452_expected_output.html
rename to pandas/tests/io/formats/data/html/gh8452_expected_output.html
diff --git a/pandas/tests/io/formats/data/index_1.html b/pandas/tests/io/formats/data/html/index_1.html
similarity index 100%
rename from pandas/tests/io/formats/data/index_1.html
rename to pandas/tests/io/formats/data/html/index_1.html
diff --git a/pandas/tests/io/formats/data/index_2.html b/pandas/tests/io/formats/data/html/index_2.html
similarity index 100%
rename from pandas/tests/io/formats/data/index_2.html
rename to pandas/tests/io/formats/data/html/index_2.html
diff --git a/pandas/tests/io/formats/data/index_3.html b/pandas/tests/io/formats/data/html/index_3.html
similarity index 100%
rename from pandas/tests/io/formats/data/index_3.html
rename to pandas/tests/io/formats/data/html/index_3.html
diff --git a/pandas/tests/io/formats/data/index_4.html b/pandas/tests/io/formats/data/html/index_4.html
similarity index 100%
rename from pandas/tests/io/formats/data/index_4.html
rename to pandas/tests/io/formats/data/html/index_4.html
diff --git a/pandas/tests/io/formats/data/index_5.html b/pandas/tests/io/formats/data/html/index_5.html
similarity index 100%
rename from pandas/tests/io/formats/data/index_5.html
rename to pandas/tests/io/formats/data/html/index_5.html
diff --git a/pandas/tests/io/formats/data/index_formatter.html b/pandas/tests/io/formats/data/html/index_formatter.html
similarity index 100%
rename from pandas/tests/io/formats/data/index_formatter.html
rename to pandas/tests/io/formats/data/html/index_formatter.html
diff --git a/pandas/tests/io/formats/data/html/index_named_multi_columns_named_multi.html b/pandas/tests/io/formats/data/html/index_named_multi_columns_named_multi.html
new file mode 100644
index 0000000000000..817b54d77f8b1
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/index_named_multi_columns_named_multi.html
@@ -0,0 +1,34 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr>
+ <th></th>
+ <th>columns.name.0</th>
+ <th colspan="2" halign="left">a</th>
+ </tr>
+ <tr>
+ <th></th>
+ <th>columns.name.1</th>
+ <th>b</th>
+ <th>c</th>
+ </tr>
+ <tr>
+ <th>index.name.0</th>
+ <th>index.name.1</th>
+ <th></th>
+ <th></th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th rowspan="2" valign="top">a</th>
+ <th>b</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ <tr>
+ <th>c</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/index_named_multi_columns_named_standard.html b/pandas/tests/io/formats/data/html/index_named_multi_columns_named_standard.html
new file mode 100644
index 0000000000000..e85965f14075d
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/index_named_multi_columns_named_standard.html
@@ -0,0 +1,29 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr style="text-align: right;">
+ <th></th>
+ <th>columns.name</th>
+ <th>0</th>
+ <th>1</th>
+ </tr>
+ <tr>
+ <th>index.name.0</th>
+ <th>index.name.1</th>
+ <th></th>
+ <th></th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th rowspan="2" valign="top">a</th>
+ <th>b</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ <tr>
+ <th>c</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/index_named_multi_columns_unnamed_multi.html b/pandas/tests/io/formats/data/html/index_named_multi_columns_unnamed_multi.html
new file mode 100644
index 0000000000000..7af63e893b12e
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/index_named_multi_columns_unnamed_multi.html
@@ -0,0 +1,34 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr>
+ <th></th>
+ <th></th>
+ <th colspan="2" halign="left">a</th>
+ </tr>
+ <tr>
+ <th></th>
+ <th></th>
+ <th>b</th>
+ <th>c</th>
+ </tr>
+ <tr>
+ <th>index.name.0</th>
+ <th>index.name.1</th>
+ <th></th>
+ <th></th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th rowspan="2" valign="top">a</th>
+ <th>b</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ <tr>
+ <th>c</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/index_named_multi_columns_unnamed_standard.html b/pandas/tests/io/formats/data/html/index_named_multi_columns_unnamed_standard.html
new file mode 100644
index 0000000000000..2f7837864bf88
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/index_named_multi_columns_unnamed_standard.html
@@ -0,0 +1,29 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr style="text-align: right;">
+ <th></th>
+ <th></th>
+ <th>0</th>
+ <th>1</th>
+ </tr>
+ <tr>
+ <th>index.name.0</th>
+ <th>index.name.1</th>
+ <th></th>
+ <th></th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th rowspan="2" valign="top">a</th>
+ <th>b</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ <tr>
+ <th>c</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/index_named_standard_columns_named_multi.html b/pandas/tests/io/formats/data/html/index_named_standard_columns_named_multi.html
new file mode 100644
index 0000000000000..ca9b8bd834a9c
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/index_named_standard_columns_named_multi.html
@@ -0,0 +1,30 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr>
+ <th>columns.name.0</th>
+ <th colspan="2" halign="left">a</th>
+ </tr>
+ <tr>
+ <th>columns.name.1</th>
+ <th>b</th>
+ <th>c</th>
+ </tr>
+ <tr>
+ <th>index.name</th>
+ <th></th>
+ <th></th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th>0</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ <tr>
+ <th>1</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/index_named_standard_columns_named_standard.html b/pandas/tests/io/formats/data/html/index_named_standard_columns_named_standard.html
new file mode 100644
index 0000000000000..6478c99ad85e9
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/index_named_standard_columns_named_standard.html
@@ -0,0 +1,26 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr style="text-align: right;">
+ <th>columns.name</th>
+ <th>0</th>
+ <th>1</th>
+ </tr>
+ <tr>
+ <th>index.name</th>
+ <th></th>
+ <th></th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th>0</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ <tr>
+ <th>1</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/index_named_standard_columns_unnamed_multi.html b/pandas/tests/io/formats/data/html/index_named_standard_columns_unnamed_multi.html
new file mode 100644
index 0000000000000..d7660872177dc
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/index_named_standard_columns_unnamed_multi.html
@@ -0,0 +1,30 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr>
+ <th></th>
+ <th colspan="2" halign="left">a</th>
+ </tr>
+ <tr>
+ <th></th>
+ <th>b</th>
+ <th>c</th>
+ </tr>
+ <tr>
+ <th>index.name</th>
+ <th></th>
+ <th></th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th>0</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ <tr>
+ <th>1</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/index_named_standard_columns_unnamed_standard.html b/pandas/tests/io/formats/data/html/index_named_standard_columns_unnamed_standard.html
new file mode 100644
index 0000000000000..4810f66018d3b
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/index_named_standard_columns_unnamed_standard.html
@@ -0,0 +1,26 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr style="text-align: right;">
+ <th></th>
+ <th>0</th>
+ <th>1</th>
+ </tr>
+ <tr>
+ <th>index.name</th>
+ <th></th>
+ <th></th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th>0</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ <tr>
+ <th>1</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/index_none_columns_named_multi.html b/pandas/tests/io/formats/data/html/index_none_columns_named_multi.html
new file mode 100644
index 0000000000000..e111f55be7d25
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/index_none_columns_named_multi.html
@@ -0,0 +1,25 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr>
+ <th>columns.name.0</th>
+ <th colspan="2" halign="left">a</th>
+ </tr>
+ <tr>
+ <th>columns.name.1</th>
+ <th>b</th>
+ <th>c</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th></th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ <tr>
+ <th></th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/index_none_columns_named_standard.html b/pandas/tests/io/formats/data/html/index_none_columns_named_standard.html
new file mode 100644
index 0000000000000..d3a9ba017b43e
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/index_none_columns_named_standard.html
@@ -0,0 +1,21 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr style="text-align: right;">
+ <th>columns.name</th>
+ <th>0</th>
+ <th>1</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th></th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ <tr>
+ <th></th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/index_none_columns_none.html b/pandas/tests/io/formats/data/html/index_none_columns_none.html
new file mode 100644
index 0000000000000..44899858d9519
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/index_none_columns_none.html
@@ -0,0 +1,12 @@
+<table border="1" class="dataframe">
+ <tbody>
+ <tr>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ <tr>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/index_none_columns_unnamed_multi.html b/pandas/tests/io/formats/data/html/index_none_columns_unnamed_multi.html
new file mode 100644
index 0000000000000..b21a618328b1b
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/index_none_columns_unnamed_multi.html
@@ -0,0 +1,21 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr>
+ <th colspan="2" halign="left">a</th>
+ </tr>
+ <tr>
+ <th>b</th>
+ <th>c</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ <tr>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/index_none_columns_unnamed_standard.html b/pandas/tests/io/formats/data/html/index_none_columns_unnamed_standard.html
new file mode 100644
index 0000000000000..1249fa5605099
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/index_none_columns_unnamed_standard.html
@@ -0,0 +1,18 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr style="text-align: right;">
+ <th>0</th>
+ <th>1</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ <tr>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/index_unnamed_multi_columns_named_multi.html b/pandas/tests/io/formats/data/html/index_unnamed_multi_columns_named_multi.html
new file mode 100644
index 0000000000000..95c38c9c8fd28
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/index_unnamed_multi_columns_named_multi.html
@@ -0,0 +1,28 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr>
+ <th></th>
+ <th>columns.name.0</th>
+ <th colspan="2" halign="left">a</th>
+ </tr>
+ <tr>
+ <th></th>
+ <th>columns.name.1</th>
+ <th>b</th>
+ <th>c</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th rowspan="2" valign="top">a</th>
+ <th>b</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ <tr>
+ <th>c</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/index_unnamed_multi_columns_named_standard.html b/pandas/tests/io/formats/data/html/index_unnamed_multi_columns_named_standard.html
new file mode 100644
index 0000000000000..9583a21f55f01
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/index_unnamed_multi_columns_named_standard.html
@@ -0,0 +1,23 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr style="text-align: right;">
+ <th></th>
+ <th>columns.name</th>
+ <th>0</th>
+ <th>1</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th rowspan="2" valign="top">a</th>
+ <th>b</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ <tr>
+ <th>c</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/index_unnamed_multi_columns_unnamed_multi.html b/pandas/tests/io/formats/data/html/index_unnamed_multi_columns_unnamed_multi.html
new file mode 100644
index 0000000000000..f620259037b60
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/index_unnamed_multi_columns_unnamed_multi.html
@@ -0,0 +1,28 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr>
+ <th></th>
+ <th></th>
+ <th colspan="2" halign="left">a</th>
+ </tr>
+ <tr>
+ <th></th>
+ <th></th>
+ <th>b</th>
+ <th>c</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th rowspan="2" valign="top">a</th>
+ <th>b</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ <tr>
+ <th>c</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/index_unnamed_multi_columns_unnamed_standard.html b/pandas/tests/io/formats/data/html/index_unnamed_multi_columns_unnamed_standard.html
new file mode 100644
index 0000000000000..2ca18c288437b
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/index_unnamed_multi_columns_unnamed_standard.html
@@ -0,0 +1,23 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr style="text-align: right;">
+ <th></th>
+ <th></th>
+ <th>0</th>
+ <th>1</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th rowspan="2" valign="top">a</th>
+ <th>b</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ <tr>
+ <th>c</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/index_unnamed_standard_columns_named_multi.html b/pandas/tests/io/formats/data/html/index_unnamed_standard_columns_named_multi.html
new file mode 100644
index 0000000000000..ed3360f898afd
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/index_unnamed_standard_columns_named_multi.html
@@ -0,0 +1,25 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr>
+ <th>columns.name.0</th>
+ <th colspan="2" halign="left">a</th>
+ </tr>
+ <tr>
+ <th>columns.name.1</th>
+ <th>b</th>
+ <th>c</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th>0</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ <tr>
+ <th>1</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/index_unnamed_standard_columns_named_standard.html b/pandas/tests/io/formats/data/html/index_unnamed_standard_columns_named_standard.html
new file mode 100644
index 0000000000000..54da03858a9a4
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/index_unnamed_standard_columns_named_standard.html
@@ -0,0 +1,21 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr style="text-align: right;">
+ <th>columns.name</th>
+ <th>0</th>
+ <th>1</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th>0</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ <tr>
+ <th>1</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/index_unnamed_standard_columns_unnamed_multi.html b/pandas/tests/io/formats/data/html/index_unnamed_standard_columns_unnamed_multi.html
new file mode 100644
index 0000000000000..b57fafbe0ca40
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/index_unnamed_standard_columns_unnamed_multi.html
@@ -0,0 +1,25 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr>
+ <th></th>
+ <th colspan="2" halign="left">a</th>
+ </tr>
+ <tr>
+ <th></th>
+ <th>b</th>
+ <th>c</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th>0</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ <tr>
+ <th>1</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/index_unnamed_standard_columns_unnamed_standard.html b/pandas/tests/io/formats/data/html/index_unnamed_standard_columns_unnamed_standard.html
new file mode 100644
index 0000000000000..235ca61a9e63d
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/index_unnamed_standard_columns_unnamed_standard.html
@@ -0,0 +1,21 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr style="text-align: right;">
+ <th></th>
+ <th>0</th>
+ <th>1</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th>0</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ <tr>
+ <th>1</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/justify.html b/pandas/tests/io/formats/data/html/justify.html
similarity index 100%
rename from pandas/tests/io/formats/data/justify.html
rename to pandas/tests/io/formats/data/html/justify.html
diff --git a/pandas/tests/io/formats/data/multiindex_1.html b/pandas/tests/io/formats/data/html/multiindex_1.html
similarity index 100%
rename from pandas/tests/io/formats/data/multiindex_1.html
rename to pandas/tests/io/formats/data/html/multiindex_1.html
diff --git a/pandas/tests/io/formats/data/multiindex_2.html b/pandas/tests/io/formats/data/html/multiindex_2.html
similarity index 100%
rename from pandas/tests/io/formats/data/multiindex_2.html
rename to pandas/tests/io/formats/data/html/multiindex_2.html
diff --git a/pandas/tests/io/formats/data/multiindex_sparsify_1.html b/pandas/tests/io/formats/data/html/multiindex_sparsify_1.html
similarity index 100%
rename from pandas/tests/io/formats/data/multiindex_sparsify_1.html
rename to pandas/tests/io/formats/data/html/multiindex_sparsify_1.html
diff --git a/pandas/tests/io/formats/data/multiindex_sparsify_2.html b/pandas/tests/io/formats/data/html/multiindex_sparsify_2.html
similarity index 100%
rename from pandas/tests/io/formats/data/multiindex_sparsify_2.html
rename to pandas/tests/io/formats/data/html/multiindex_sparsify_2.html
diff --git a/pandas/tests/io/formats/data/multiindex_sparsify_false_multi_sparse_1.html b/pandas/tests/io/formats/data/html/multiindex_sparsify_false_multi_sparse_1.html
similarity index 100%
rename from pandas/tests/io/formats/data/multiindex_sparsify_false_multi_sparse_1.html
rename to pandas/tests/io/formats/data/html/multiindex_sparsify_false_multi_sparse_1.html
diff --git a/pandas/tests/io/formats/data/multiindex_sparsify_false_multi_sparse_2.html b/pandas/tests/io/formats/data/html/multiindex_sparsify_false_multi_sparse_2.html
similarity index 100%
rename from pandas/tests/io/formats/data/multiindex_sparsify_false_multi_sparse_2.html
rename to pandas/tests/io/formats/data/html/multiindex_sparsify_false_multi_sparse_2.html
diff --git a/pandas/tests/io/formats/data/render_links_false.html b/pandas/tests/io/formats/data/html/render_links_false.html
similarity index 100%
rename from pandas/tests/io/formats/data/render_links_false.html
rename to pandas/tests/io/formats/data/html/render_links_false.html
diff --git a/pandas/tests/io/formats/data/render_links_true.html b/pandas/tests/io/formats/data/html/render_links_true.html
similarity index 100%
rename from pandas/tests/io/formats/data/render_links_true.html
rename to pandas/tests/io/formats/data/html/render_links_true.html
diff --git a/pandas/tests/io/formats/data/html/trunc_df_index_named_multi_columns_named_multi.html b/pandas/tests/io/formats/data/html/trunc_df_index_named_multi_columns_named_multi.html
new file mode 100644
index 0000000000000..e66d3c816e67d
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/trunc_df_index_named_multi_columns_named_multi.html
@@ -0,0 +1,88 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr>
+ <th></th>
+ <th></th>
+ <th>foo</th>
+ <th colspan="2" halign="left">a</th>
+ <th>...</th>
+ <th colspan="2" halign="left">b</th>
+ </tr>
+ <tr>
+ <th></th>
+ <th></th>
+ <th></th>
+ <th colspan="2" halign="left">c</th>
+ <th>...</th>
+ <th colspan="2" halign="left">d</th>
+ </tr>
+ <tr>
+ <th></th>
+ <th></th>
+ <th>baz</th>
+ <th>e</th>
+ <th>f</th>
+ <th>...</th>
+ <th>e</th>
+ <th>f</th>
+ </tr>
+ <tr>
+ <th>foo</th>
+ <th></th>
+ <th>baz</th>
+ <th></th>
+ <th></th>
+ <th></th>
+ <th></th>
+ <th></th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th rowspan="2" valign="top">a</th>
+ <th rowspan="2" valign="top">c</th>
+ <th>e</th>
+ <td>0</td>
+ <td>1</td>
+ <td>...</td>
+ <td>6</td>
+ <td>7</td>
+ </tr>
+ <tr>
+ <th>f</th>
+ <td>8</td>
+ <td>9</td>
+ <td>...</td>
+ <td>14</td>
+ <td>15</td>
+ </tr>
+ <tr>
+ <th>...</th>
+ <th>...</th>
+ <th>...</th>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
+ <tr>
+ <th rowspan="2" valign="top">b</th>
+ <th rowspan="2" valign="top">d</th>
+ <th>e</th>
+ <td>48</td>
+ <td>49</td>
+ <td>...</td>
+ <td>54</td>
+ <td>55</td>
+ </tr>
+ <tr>
+ <th>f</th>
+ <td>56</td>
+ <td>57</td>
+ <td>...</td>
+ <td>62</td>
+ <td>63</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/trunc_df_index_named_multi_columns_named_standard.html b/pandas/tests/io/formats/data/html/trunc_df_index_named_multi_columns_named_standard.html
new file mode 100644
index 0000000000000..536b371145081
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/trunc_df_index_named_multi_columns_named_standard.html
@@ -0,0 +1,72 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr style="text-align: right;">
+ <th></th>
+ <th></th>
+ <th>columns.name</th>
+ <th>0</th>
+ <th>1</th>
+ <th>...</th>
+ <th>6</th>
+ <th>7</th>
+ </tr>
+ <tr>
+ <th>foo</th>
+ <th></th>
+ <th>baz</th>
+ <th></th>
+ <th></th>
+ <th></th>
+ <th></th>
+ <th></th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th rowspan="2" valign="top">a</th>
+ <th rowspan="2" valign="top">c</th>
+ <th>e</th>
+ <td>0</td>
+ <td>1</td>
+ <td>...</td>
+ <td>6</td>
+ <td>7</td>
+ </tr>
+ <tr>
+ <th>f</th>
+ <td>8</td>
+ <td>9</td>
+ <td>...</td>
+ <td>14</td>
+ <td>15</td>
+ </tr>
+ <tr>
+ <th>...</th>
+ <th>...</th>
+ <th>...</th>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
+ <tr>
+ <th rowspan="2" valign="top">b</th>
+ <th rowspan="2" valign="top">d</th>
+ <th>e</th>
+ <td>48</td>
+ <td>49</td>
+ <td>...</td>
+ <td>54</td>
+ <td>55</td>
+ </tr>
+ <tr>
+ <th>f</th>
+ <td>56</td>
+ <td>57</td>
+ <td>...</td>
+ <td>62</td>
+ <td>63</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/trunc_df_index_named_multi_columns_unnamed_multi.html b/pandas/tests/io/formats/data/html/trunc_df_index_named_multi_columns_unnamed_multi.html
new file mode 100644
index 0000000000000..d472cdecb12c9
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/trunc_df_index_named_multi_columns_unnamed_multi.html
@@ -0,0 +1,88 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr>
+ <th></th>
+ <th></th>
+ <th></th>
+ <th colspan="2" halign="left">a</th>
+ <th>...</th>
+ <th colspan="2" halign="left">b</th>
+ </tr>
+ <tr>
+ <th></th>
+ <th></th>
+ <th></th>
+ <th colspan="2" halign="left">c</th>
+ <th>...</th>
+ <th colspan="2" halign="left">d</th>
+ </tr>
+ <tr>
+ <th></th>
+ <th></th>
+ <th></th>
+ <th>e</th>
+ <th>f</th>
+ <th>...</th>
+ <th>e</th>
+ <th>f</th>
+ </tr>
+ <tr>
+ <th>foo</th>
+ <th></th>
+ <th>baz</th>
+ <th></th>
+ <th></th>
+ <th></th>
+ <th></th>
+ <th></th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th rowspan="2" valign="top">a</th>
+ <th rowspan="2" valign="top">c</th>
+ <th>e</th>
+ <td>0</td>
+ <td>1</td>
+ <td>...</td>
+ <td>6</td>
+ <td>7</td>
+ </tr>
+ <tr>
+ <th>f</th>
+ <td>8</td>
+ <td>9</td>
+ <td>...</td>
+ <td>14</td>
+ <td>15</td>
+ </tr>
+ <tr>
+ <th>...</th>
+ <th>...</th>
+ <th>...</th>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
+ <tr>
+ <th rowspan="2" valign="top">b</th>
+ <th rowspan="2" valign="top">d</th>
+ <th>e</th>
+ <td>48</td>
+ <td>49</td>
+ <td>...</td>
+ <td>54</td>
+ <td>55</td>
+ </tr>
+ <tr>
+ <th>f</th>
+ <td>56</td>
+ <td>57</td>
+ <td>...</td>
+ <td>62</td>
+ <td>63</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/trunc_df_index_named_multi_columns_unnamed_standard.html b/pandas/tests/io/formats/data/html/trunc_df_index_named_multi_columns_unnamed_standard.html
new file mode 100644
index 0000000000000..31c71ca3e59f6
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/trunc_df_index_named_multi_columns_unnamed_standard.html
@@ -0,0 +1,72 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr style="text-align: right;">
+ <th></th>
+ <th></th>
+ <th></th>
+ <th>0</th>
+ <th>1</th>
+ <th>...</th>
+ <th>6</th>
+ <th>7</th>
+ </tr>
+ <tr>
+ <th>foo</th>
+ <th></th>
+ <th>baz</th>
+ <th></th>
+ <th></th>
+ <th></th>
+ <th></th>
+ <th></th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th rowspan="2" valign="top">a</th>
+ <th rowspan="2" valign="top">c</th>
+ <th>e</th>
+ <td>0</td>
+ <td>1</td>
+ <td>...</td>
+ <td>6</td>
+ <td>7</td>
+ </tr>
+ <tr>
+ <th>f</th>
+ <td>8</td>
+ <td>9</td>
+ <td>...</td>
+ <td>14</td>
+ <td>15</td>
+ </tr>
+ <tr>
+ <th>...</th>
+ <th>...</th>
+ <th>...</th>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
+ <tr>
+ <th rowspan="2" valign="top">b</th>
+ <th rowspan="2" valign="top">d</th>
+ <th>e</th>
+ <td>48</td>
+ <td>49</td>
+ <td>...</td>
+ <td>54</td>
+ <td>55</td>
+ </tr>
+ <tr>
+ <th>f</th>
+ <td>56</td>
+ <td>57</td>
+ <td>...</td>
+ <td>62</td>
+ <td>63</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/trunc_df_index_named_standard_columns_named_multi.html b/pandas/tests/io/formats/data/html/trunc_df_index_named_standard_columns_named_multi.html
new file mode 100644
index 0000000000000..779e84f6ee6d1
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/trunc_df_index_named_standard_columns_named_multi.html
@@ -0,0 +1,74 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr>
+ <th>foo</th>
+ <th colspan="2" halign="left">a</th>
+ <th>...</th>
+ <th colspan="2" halign="left">b</th>
+ </tr>
+ <tr>
+ <th></th>
+ <th colspan="2" halign="left">c</th>
+ <th>...</th>
+ <th colspan="2" halign="left">d</th>
+ </tr>
+ <tr>
+ <th>baz</th>
+ <th>e</th>
+ <th>f</th>
+ <th>...</th>
+ <th>e</th>
+ <th>f</th>
+ </tr>
+ <tr>
+ <th>index.name</th>
+ <th></th>
+ <th></th>
+ <th></th>
+ <th></th>
+ <th></th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th>0</th>
+ <td>0</td>
+ <td>1</td>
+ <td>...</td>
+ <td>6</td>
+ <td>7</td>
+ </tr>
+ <tr>
+ <th>1</th>
+ <td>8</td>
+ <td>9</td>
+ <td>...</td>
+ <td>14</td>
+ <td>15</td>
+ </tr>
+ <tr>
+ <th>...</th>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
+ <tr>
+ <th>6</th>
+ <td>48</td>
+ <td>49</td>
+ <td>...</td>
+ <td>54</td>
+ <td>55</td>
+ </tr>
+ <tr>
+ <th>7</th>
+ <td>56</td>
+ <td>57</td>
+ <td>...</td>
+ <td>62</td>
+ <td>63</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/trunc_df_index_named_standard_columns_named_standard.html b/pandas/tests/io/formats/data/html/trunc_df_index_named_standard_columns_named_standard.html
new file mode 100644
index 0000000000000..b86454f5fb11f
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/trunc_df_index_named_standard_columns_named_standard.html
@@ -0,0 +1,62 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr style="text-align: right;">
+ <th>columns.name</th>
+ <th>0</th>
+ <th>1</th>
+ <th>...</th>
+ <th>6</th>
+ <th>7</th>
+ </tr>
+ <tr>
+ <th>index.name</th>
+ <th></th>
+ <th></th>
+ <th></th>
+ <th></th>
+ <th></th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th>0</th>
+ <td>0</td>
+ <td>1</td>
+ <td>...</td>
+ <td>6</td>
+ <td>7</td>
+ </tr>
+ <tr>
+ <th>1</th>
+ <td>8</td>
+ <td>9</td>
+ <td>...</td>
+ <td>14</td>
+ <td>15</td>
+ </tr>
+ <tr>
+ <th>...</th>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
+ <tr>
+ <th>6</th>
+ <td>48</td>
+ <td>49</td>
+ <td>...</td>
+ <td>54</td>
+ <td>55</td>
+ </tr>
+ <tr>
+ <th>7</th>
+ <td>56</td>
+ <td>57</td>
+ <td>...</td>
+ <td>62</td>
+ <td>63</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/trunc_df_index_named_standard_columns_unnamed_multi.html b/pandas/tests/io/formats/data/html/trunc_df_index_named_standard_columns_unnamed_multi.html
new file mode 100644
index 0000000000000..24b776e18bef9
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/trunc_df_index_named_standard_columns_unnamed_multi.html
@@ -0,0 +1,74 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr>
+ <th></th>
+ <th colspan="2" halign="left">a</th>
+ <th>...</th>
+ <th colspan="2" halign="left">b</th>
+ </tr>
+ <tr>
+ <th></th>
+ <th colspan="2" halign="left">c</th>
+ <th>...</th>
+ <th colspan="2" halign="left">d</th>
+ </tr>
+ <tr>
+ <th></th>
+ <th>e</th>
+ <th>f</th>
+ <th>...</th>
+ <th>e</th>
+ <th>f</th>
+ </tr>
+ <tr>
+ <th>index.name</th>
+ <th></th>
+ <th></th>
+ <th></th>
+ <th></th>
+ <th></th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th>0</th>
+ <td>0</td>
+ <td>1</td>
+ <td>...</td>
+ <td>6</td>
+ <td>7</td>
+ </tr>
+ <tr>
+ <th>1</th>
+ <td>8</td>
+ <td>9</td>
+ <td>...</td>
+ <td>14</td>
+ <td>15</td>
+ </tr>
+ <tr>
+ <th>...</th>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
+ <tr>
+ <th>6</th>
+ <td>48</td>
+ <td>49</td>
+ <td>...</td>
+ <td>54</td>
+ <td>55</td>
+ </tr>
+ <tr>
+ <th>7</th>
+ <td>56</td>
+ <td>57</td>
+ <td>...</td>
+ <td>62</td>
+ <td>63</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/trunc_df_index_named_standard_columns_unnamed_standard.html b/pandas/tests/io/formats/data/html/trunc_df_index_named_standard_columns_unnamed_standard.html
new file mode 100644
index 0000000000000..a0ca960207ac0
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/trunc_df_index_named_standard_columns_unnamed_standard.html
@@ -0,0 +1,62 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr style="text-align: right;">
+ <th></th>
+ <th>0</th>
+ <th>1</th>
+ <th>...</th>
+ <th>6</th>
+ <th>7</th>
+ </tr>
+ <tr>
+ <th>index.name</th>
+ <th></th>
+ <th></th>
+ <th></th>
+ <th></th>
+ <th></th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th>0</th>
+ <td>0</td>
+ <td>1</td>
+ <td>...</td>
+ <td>6</td>
+ <td>7</td>
+ </tr>
+ <tr>
+ <th>1</th>
+ <td>8</td>
+ <td>9</td>
+ <td>...</td>
+ <td>14</td>
+ <td>15</td>
+ </tr>
+ <tr>
+ <th>...</th>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
+ <tr>
+ <th>6</th>
+ <td>48</td>
+ <td>49</td>
+ <td>...</td>
+ <td>54</td>
+ <td>55</td>
+ </tr>
+ <tr>
+ <th>7</th>
+ <td>56</td>
+ <td>57</td>
+ <td>...</td>
+ <td>62</td>
+ <td>63</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/trunc_df_index_none_columns_named_multi.html b/pandas/tests/io/formats/data/html/trunc_df_index_none_columns_named_multi.html
new file mode 100644
index 0000000000000..6640db4cf8704
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/trunc_df_index_none_columns_named_multi.html
@@ -0,0 +1,66 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr>
+ <th>foo</th>
+ <th colspan="2" halign="left">a</th>
+ <th>...</th>
+ <th colspan="2" halign="left">b</th>
+ </tr>
+ <tr>
+ <th></th>
+ <th colspan="2" halign="left">c</th>
+ <th>...</th>
+ <th colspan="2" halign="left">d</th>
+ </tr>
+ <tr>
+ <th>baz</th>
+ <th>e</th>
+ <th>f</th>
+ <th>...</th>
+ <th>e</th>
+ <th>f</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th></th>
+ <td>0</td>
+ <td>1</td>
+ <td>...</td>
+ <td>6</td>
+ <td>7</td>
+ </tr>
+ <tr>
+ <th></th>
+ <td>8</td>
+ <td>9</td>
+ <td>...</td>
+ <td>14</td>
+ <td>15</td>
+ </tr>
+ <tr>
+ <th>...</th>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
+ <tr>
+ <th></th>
+ <td>48</td>
+ <td>49</td>
+ <td>...</td>
+ <td>54</td>
+ <td>55</td>
+ </tr>
+ <tr>
+ <th></th>
+ <td>56</td>
+ <td>57</td>
+ <td>...</td>
+ <td>62</td>
+ <td>63</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/trunc_df_index_none_columns_named_standard.html b/pandas/tests/io/formats/data/html/trunc_df_index_none_columns_named_standard.html
new file mode 100644
index 0000000000000..364a0b98d6548
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/trunc_df_index_none_columns_named_standard.html
@@ -0,0 +1,54 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr style="text-align: right;">
+ <th>columns.name</th>
+ <th>0</th>
+ <th>1</th>
+ <th>...</th>
+ <th>6</th>
+ <th>7</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th></th>
+ <td>0</td>
+ <td>1</td>
+ <td>...</td>
+ <td>6</td>
+ <td>7</td>
+ </tr>
+ <tr>
+ <th></th>
+ <td>8</td>
+ <td>9</td>
+ <td>...</td>
+ <td>14</td>
+ <td>15</td>
+ </tr>
+ <tr>
+ <th>...</th>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
+ <tr>
+ <th></th>
+ <td>48</td>
+ <td>49</td>
+ <td>...</td>
+ <td>54</td>
+ <td>55</td>
+ </tr>
+ <tr>
+ <th></th>
+ <td>56</td>
+ <td>57</td>
+ <td>...</td>
+ <td>62</td>
+ <td>63</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/trunc_df_index_none_columns_none.html b/pandas/tests/io/formats/data/html/trunc_df_index_none_columns_none.html
new file mode 100644
index 0000000000000..e2af1ba42e940
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/trunc_df_index_none_columns_none.html
@@ -0,0 +1,39 @@
+<table border="1" class="dataframe">
+ <tbody>
+ <tr>
+ <td>0</td>
+ <td>1</td>
+ <td>...</td>
+ <td>6</td>
+ <td>7</td>
+ </tr>
+ <tr>
+ <td>8</td>
+ <td>9</td>
+ <td>...</td>
+ <td>14</td>
+ <td>15</td>
+ </tr>
+ <tr>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
+ <tr>
+ <td>48</td>
+ <td>49</td>
+ <td>...</td>
+ <td>54</td>
+ <td>55</td>
+ </tr>
+ <tr>
+ <td>56</td>
+ <td>57</td>
+ <td>...</td>
+ <td>62</td>
+ <td>63</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/trunc_df_index_none_columns_unnamed_multi.html b/pandas/tests/io/formats/data/html/trunc_df_index_none_columns_unnamed_multi.html
new file mode 100644
index 0000000000000..8c9a9e244277b
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/trunc_df_index_none_columns_unnamed_multi.html
@@ -0,0 +1,58 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr>
+ <th colspan="2" halign="left">a</th>
+ <th>...</th>
+ <th colspan="2" halign="left">b</th>
+ </tr>
+ <tr>
+ <th colspan="2" halign="left">c</th>
+ <th>...</th>
+ <th colspan="2" halign="left">d</th>
+ </tr>
+ <tr>
+ <th>e</th>
+ <th>f</th>
+ <th>...</th>
+ <th>e</th>
+ <th>f</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <td>0</td>
+ <td>1</td>
+ <td>...</td>
+ <td>6</td>
+ <td>7</td>
+ </tr>
+ <tr>
+ <td>8</td>
+ <td>9</td>
+ <td>...</td>
+ <td>14</td>
+ <td>15</td>
+ </tr>
+ <tr>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
+ <tr>
+ <td>48</td>
+ <td>49</td>
+ <td>...</td>
+ <td>54</td>
+ <td>55</td>
+ </tr>
+ <tr>
+ <td>56</td>
+ <td>57</td>
+ <td>...</td>
+ <td>62</td>
+ <td>63</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/trunc_df_index_none_columns_unnamed_standard.html b/pandas/tests/io/formats/data/html/trunc_df_index_none_columns_unnamed_standard.html
new file mode 100644
index 0000000000000..b9dcf52619490
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/trunc_df_index_none_columns_unnamed_standard.html
@@ -0,0 +1,48 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr style="text-align: right;">
+ <th>0</th>
+ <th>1</th>
+ <th>...</th>
+ <th>6</th>
+ <th>7</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <td>0</td>
+ <td>1</td>
+ <td>...</td>
+ <td>6</td>
+ <td>7</td>
+ </tr>
+ <tr>
+ <td>8</td>
+ <td>9</td>
+ <td>...</td>
+ <td>14</td>
+ <td>15</td>
+ </tr>
+ <tr>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
+ <tr>
+ <td>48</td>
+ <td>49</td>
+ <td>...</td>
+ <td>54</td>
+ <td>55</td>
+ </tr>
+ <tr>
+ <td>56</td>
+ <td>57</td>
+ <td>...</td>
+ <td>62</td>
+ <td>63</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/trunc_df_index_unnamed_multi_columns_named_multi.html b/pandas/tests/io/formats/data/html/trunc_df_index_unnamed_multi_columns_named_multi.html
new file mode 100644
index 0000000000000..0590d0dea6669
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/trunc_df_index_unnamed_multi_columns_named_multi.html
@@ -0,0 +1,78 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr>
+ <th></th>
+ <th></th>
+ <th>foo</th>
+ <th colspan="2" halign="left">a</th>
+ <th>...</th>
+ <th colspan="2" halign="left">b</th>
+ </tr>
+ <tr>
+ <th></th>
+ <th></th>
+ <th></th>
+ <th colspan="2" halign="left">c</th>
+ <th>...</th>
+ <th colspan="2" halign="left">d</th>
+ </tr>
+ <tr>
+ <th></th>
+ <th></th>
+ <th>baz</th>
+ <th>e</th>
+ <th>f</th>
+ <th>...</th>
+ <th>e</th>
+ <th>f</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th rowspan="2" valign="top">a</th>
+ <th rowspan="2" valign="top">c</th>
+ <th>e</th>
+ <td>0</td>
+ <td>1</td>
+ <td>...</td>
+ <td>6</td>
+ <td>7</td>
+ </tr>
+ <tr>
+ <th>f</th>
+ <td>8</td>
+ <td>9</td>
+ <td>...</td>
+ <td>14</td>
+ <td>15</td>
+ </tr>
+ <tr>
+ <th>...</th>
+ <th>...</th>
+ <th>...</th>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
+ <tr>
+ <th rowspan="2" valign="top">b</th>
+ <th rowspan="2" valign="top">d</th>
+ <th>e</th>
+ <td>48</td>
+ <td>49</td>
+ <td>...</td>
+ <td>54</td>
+ <td>55</td>
+ </tr>
+ <tr>
+ <th>f</th>
+ <td>56</td>
+ <td>57</td>
+ <td>...</td>
+ <td>62</td>
+ <td>63</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/trunc_df_index_unnamed_multi_columns_named_standard.html b/pandas/tests/io/formats/data/html/trunc_df_index_unnamed_multi_columns_named_standard.html
new file mode 100644
index 0000000000000..28a2d964675a3
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/trunc_df_index_unnamed_multi_columns_named_standard.html
@@ -0,0 +1,62 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr style="text-align: right;">
+ <th></th>
+ <th></th>
+ <th>columns.name</th>
+ <th>0</th>
+ <th>1</th>
+ <th>...</th>
+ <th>6</th>
+ <th>7</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th rowspan="2" valign="top">a</th>
+ <th rowspan="2" valign="top">c</th>
+ <th>e</th>
+ <td>0</td>
+ <td>1</td>
+ <td>...</td>
+ <td>6</td>
+ <td>7</td>
+ </tr>
+ <tr>
+ <th>f</th>
+ <td>8</td>
+ <td>9</td>
+ <td>...</td>
+ <td>14</td>
+ <td>15</td>
+ </tr>
+ <tr>
+ <th>...</th>
+ <th>...</th>
+ <th>...</th>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
+ <tr>
+ <th rowspan="2" valign="top">b</th>
+ <th rowspan="2" valign="top">d</th>
+ <th>e</th>
+ <td>48</td>
+ <td>49</td>
+ <td>...</td>
+ <td>54</td>
+ <td>55</td>
+ </tr>
+ <tr>
+ <th>f</th>
+ <td>56</td>
+ <td>57</td>
+ <td>...</td>
+ <td>62</td>
+ <td>63</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/trunc_df_index_unnamed_multi_columns_none.html b/pandas/tests/io/formats/data/html/trunc_df_index_unnamed_multi_columns_none.html
new file mode 100644
index 0000000000000..387ac51b17634
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/trunc_df_index_unnamed_multi_columns_none.html
@@ -0,0 +1,50 @@
+<table border="1" class="dataframe">
+ <tbody>
+ <tr>
+ <th rowspan="2" valign="top">a</th>
+ <th rowspan="2" valign="top">c</th>
+ <th>e</th>
+ <td>0</td>
+ <td>1</td>
+ <td>...</td>
+ <td>6</td>
+ <td>7</td>
+ </tr>
+ <tr>
+ <th>f</th>
+ <td>8</td>
+ <td>9</td>
+ <td>...</td>
+ <td>14</td>
+ <td>15</td>
+ </tr>
+ <tr>
+ <th>...</th>
+ <th>...</th>
+ <th>...</th>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
+ <tr>
+ <th rowspan="2" valign="top">b</th>
+ <th rowspan="2" valign="top">d</th>
+ <th>e</th>
+ <td>48</td>
+ <td>49</td>
+ <td>...</td>
+ <td>54</td>
+ <td>55</td>
+ </tr>
+ <tr>
+ <th>f</th>
+ <td>56</td>
+ <td>57</td>
+ <td>...</td>
+ <td>62</td>
+ <td>63</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/trunc_df_index_unnamed_multi_columns_unnamed_multi.html b/pandas/tests/io/formats/data/html/trunc_df_index_unnamed_multi_columns_unnamed_multi.html
new file mode 100644
index 0000000000000..30cd85904be4e
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/trunc_df_index_unnamed_multi_columns_unnamed_multi.html
@@ -0,0 +1,78 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr>
+ <th></th>
+ <th></th>
+ <th></th>
+ <th colspan="2" halign="left">a</th>
+ <th>...</th>
+ <th colspan="2" halign="left">b</th>
+ </tr>
+ <tr>
+ <th></th>
+ <th></th>
+ <th></th>
+ <th colspan="2" halign="left">c</th>
+ <th>...</th>
+ <th colspan="2" halign="left">d</th>
+ </tr>
+ <tr>
+ <th></th>
+ <th></th>
+ <th></th>
+ <th>e</th>
+ <th>f</th>
+ <th>...</th>
+ <th>e</th>
+ <th>f</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th rowspan="2" valign="top">a</th>
+ <th rowspan="2" valign="top">c</th>
+ <th>e</th>
+ <td>0</td>
+ <td>1</td>
+ <td>...</td>
+ <td>6</td>
+ <td>7</td>
+ </tr>
+ <tr>
+ <th>f</th>
+ <td>8</td>
+ <td>9</td>
+ <td>...</td>
+ <td>14</td>
+ <td>15</td>
+ </tr>
+ <tr>
+ <th>...</th>
+ <th>...</th>
+ <th>...</th>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
+ <tr>
+ <th rowspan="2" valign="top">b</th>
+ <th rowspan="2" valign="top">d</th>
+ <th>e</th>
+ <td>48</td>
+ <td>49</td>
+ <td>...</td>
+ <td>54</td>
+ <td>55</td>
+ </tr>
+ <tr>
+ <th>f</th>
+ <td>56</td>
+ <td>57</td>
+ <td>...</td>
+ <td>62</td>
+ <td>63</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/trunc_df_index_unnamed_multi_columns_unnamed_standard.html b/pandas/tests/io/formats/data/html/trunc_df_index_unnamed_multi_columns_unnamed_standard.html
new file mode 100644
index 0000000000000..81edece220408
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/trunc_df_index_unnamed_multi_columns_unnamed_standard.html
@@ -0,0 +1,62 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr style="text-align: right;">
+ <th></th>
+ <th></th>
+ <th></th>
+ <th>0</th>
+ <th>1</th>
+ <th>...</th>
+ <th>6</th>
+ <th>7</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th rowspan="2" valign="top">a</th>
+ <th rowspan="2" valign="top">c</th>
+ <th>e</th>
+ <td>0</td>
+ <td>1</td>
+ <td>...</td>
+ <td>6</td>
+ <td>7</td>
+ </tr>
+ <tr>
+ <th>f</th>
+ <td>8</td>
+ <td>9</td>
+ <td>...</td>
+ <td>14</td>
+ <td>15</td>
+ </tr>
+ <tr>
+ <th>...</th>
+ <th>...</th>
+ <th>...</th>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
+ <tr>
+ <th rowspan="2" valign="top">b</th>
+ <th rowspan="2" valign="top">d</th>
+ <th>e</th>
+ <td>48</td>
+ <td>49</td>
+ <td>...</td>
+ <td>54</td>
+ <td>55</td>
+ </tr>
+ <tr>
+ <th>f</th>
+ <td>56</td>
+ <td>57</td>
+ <td>...</td>
+ <td>62</td>
+ <td>63</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/trunc_df_index_unnamed_standard_columns_named_multi.html b/pandas/tests/io/formats/data/html/trunc_df_index_unnamed_standard_columns_named_multi.html
new file mode 100644
index 0000000000000..2acacfed3a6d0
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/trunc_df_index_unnamed_standard_columns_named_multi.html
@@ -0,0 +1,66 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr>
+ <th>foo</th>
+ <th colspan="2" halign="left">a</th>
+ <th>...</th>
+ <th colspan="2" halign="left">b</th>
+ </tr>
+ <tr>
+ <th></th>
+ <th colspan="2" halign="left">c</th>
+ <th>...</th>
+ <th colspan="2" halign="left">d</th>
+ </tr>
+ <tr>
+ <th>baz</th>
+ <th>e</th>
+ <th>f</th>
+ <th>...</th>
+ <th>e</th>
+ <th>f</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th>0</th>
+ <td>0</td>
+ <td>1</td>
+ <td>...</td>
+ <td>6</td>
+ <td>7</td>
+ </tr>
+ <tr>
+ <th>1</th>
+ <td>8</td>
+ <td>9</td>
+ <td>...</td>
+ <td>14</td>
+ <td>15</td>
+ </tr>
+ <tr>
+ <th>...</th>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
+ <tr>
+ <th>6</th>
+ <td>48</td>
+ <td>49</td>
+ <td>...</td>
+ <td>54</td>
+ <td>55</td>
+ </tr>
+ <tr>
+ <th>7</th>
+ <td>56</td>
+ <td>57</td>
+ <td>...</td>
+ <td>62</td>
+ <td>63</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/trunc_df_index_unnamed_standard_columns_named_standard.html b/pandas/tests/io/formats/data/html/trunc_df_index_unnamed_standard_columns_named_standard.html
new file mode 100644
index 0000000000000..c9bacdbd241a6
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/trunc_df_index_unnamed_standard_columns_named_standard.html
@@ -0,0 +1,54 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr style="text-align: right;">
+ <th>columns.name</th>
+ <th>0</th>
+ <th>1</th>
+ <th>...</th>
+ <th>6</th>
+ <th>7</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th>0</th>
+ <td>0</td>
+ <td>1</td>
+ <td>...</td>
+ <td>6</td>
+ <td>7</td>
+ </tr>
+ <tr>
+ <th>1</th>
+ <td>8</td>
+ <td>9</td>
+ <td>...</td>
+ <td>14</td>
+ <td>15</td>
+ </tr>
+ <tr>
+ <th>...</th>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
+ <tr>
+ <th>6</th>
+ <td>48</td>
+ <td>49</td>
+ <td>...</td>
+ <td>54</td>
+ <td>55</td>
+ </tr>
+ <tr>
+ <th>7</th>
+ <td>56</td>
+ <td>57</td>
+ <td>...</td>
+ <td>62</td>
+ <td>63</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/trunc_df_index_unnamed_standard_columns_none.html b/pandas/tests/io/formats/data/html/trunc_df_index_unnamed_standard_columns_none.html
new file mode 100644
index 0000000000000..f2696f7d6b46a
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/trunc_df_index_unnamed_standard_columns_none.html
@@ -0,0 +1,44 @@
+<table border="1" class="dataframe">
+ <tbody>
+ <tr>
+ <th>0</th>
+ <td>0</td>
+ <td>1</td>
+ <td>...</td>
+ <td>6</td>
+ <td>7</td>
+ </tr>
+ <tr>
+ <th>1</th>
+ <td>8</td>
+ <td>9</td>
+ <td>...</td>
+ <td>14</td>
+ <td>15</td>
+ </tr>
+ <tr>
+ <th>...</th>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
+ <tr>
+ <th>6</th>
+ <td>48</td>
+ <td>49</td>
+ <td>...</td>
+ <td>54</td>
+ <td>55</td>
+ </tr>
+ <tr>
+ <th>7</th>
+ <td>56</td>
+ <td>57</td>
+ <td>...</td>
+ <td>62</td>
+ <td>63</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/trunc_df_index_unnamed_standard_columns_unnamed_multi.html b/pandas/tests/io/formats/data/html/trunc_df_index_unnamed_standard_columns_unnamed_multi.html
new file mode 100644
index 0000000000000..37e731520c7d9
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/trunc_df_index_unnamed_standard_columns_unnamed_multi.html
@@ -0,0 +1,66 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr>
+ <th></th>
+ <th colspan="2" halign="left">a</th>
+ <th>...</th>
+ <th colspan="2" halign="left">b</th>
+ </tr>
+ <tr>
+ <th></th>
+ <th colspan="2" halign="left">c</th>
+ <th>...</th>
+ <th colspan="2" halign="left">d</th>
+ </tr>
+ <tr>
+ <th></th>
+ <th>e</th>
+ <th>f</th>
+ <th>...</th>
+ <th>e</th>
+ <th>f</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th>0</th>
+ <td>0</td>
+ <td>1</td>
+ <td>...</td>
+ <td>6</td>
+ <td>7</td>
+ </tr>
+ <tr>
+ <th>1</th>
+ <td>8</td>
+ <td>9</td>
+ <td>...</td>
+ <td>14</td>
+ <td>15</td>
+ </tr>
+ <tr>
+ <th>...</th>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
+ <tr>
+ <th>6</th>
+ <td>48</td>
+ <td>49</td>
+ <td>...</td>
+ <td>54</td>
+ <td>55</td>
+ </tr>
+ <tr>
+ <th>7</th>
+ <td>56</td>
+ <td>57</td>
+ <td>...</td>
+ <td>62</td>
+ <td>63</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/trunc_df_index_unnamed_standard_columns_unnamed_standard.html b/pandas/tests/io/formats/data/html/trunc_df_index_unnamed_standard_columns_unnamed_standard.html
new file mode 100644
index 0000000000000..3241ff41c5c58
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/trunc_df_index_unnamed_standard_columns_unnamed_standard.html
@@ -0,0 +1,54 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr style="text-align: right;">
+ <th></th>
+ <th>0</th>
+ <th>1</th>
+ <th>...</th>
+ <th>6</th>
+ <th>7</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th>0</th>
+ <td>0</td>
+ <td>1</td>
+ <td>...</td>
+ <td>6</td>
+ <td>7</td>
+ </tr>
+ <tr>
+ <th>1</th>
+ <td>8</td>
+ <td>9</td>
+ <td>...</td>
+ <td>14</td>
+ <td>15</td>
+ </tr>
+ <tr>
+ <th>...</th>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
+ <tr>
+ <th>6</th>
+ <td>48</td>
+ <td>49</td>
+ <td>...</td>
+ <td>54</td>
+ <td>55</td>
+ </tr>
+ <tr>
+ <th>7</th>
+ <td>56</td>
+ <td>57</td>
+ <td>...</td>
+ <td>62</td>
+ <td>63</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/truncate.html b/pandas/tests/io/formats/data/html/truncate.html
similarity index 100%
rename from pandas/tests/io/formats/data/truncate.html
rename to pandas/tests/io/formats/data/html/truncate.html
diff --git a/pandas/tests/io/formats/data/truncate_multi_index.html b/pandas/tests/io/formats/data/html/truncate_multi_index.html
similarity index 100%
rename from pandas/tests/io/formats/data/truncate_multi_index.html
rename to pandas/tests/io/formats/data/html/truncate_multi_index.html
diff --git a/pandas/tests/io/formats/data/truncate_multi_index_sparse_off.html b/pandas/tests/io/formats/data/html/truncate_multi_index_sparse_off.html
similarity index 100%
rename from pandas/tests/io/formats/data/truncate_multi_index_sparse_off.html
rename to pandas/tests/io/formats/data/html/truncate_multi_index_sparse_off.html
diff --git a/pandas/tests/io/formats/data/unicode_1.html b/pandas/tests/io/formats/data/html/unicode_1.html
similarity index 100%
rename from pandas/tests/io/formats/data/unicode_1.html
rename to pandas/tests/io/formats/data/html/unicode_1.html
diff --git a/pandas/tests/io/formats/data/unicode_2.html b/pandas/tests/io/formats/data/html/unicode_2.html
similarity index 100%
rename from pandas/tests/io/formats/data/unicode_2.html
rename to pandas/tests/io/formats/data/html/unicode_2.html
diff --git a/pandas/tests/io/formats/data/with_classes.html b/pandas/tests/io/formats/data/html/with_classes.html
similarity index 100%
rename from pandas/tests/io/formats/data/with_classes.html
rename to pandas/tests/io/formats/data/html/with_classes.html
diff --git a/pandas/tests/io/formats/test_to_html.py b/pandas/tests/io/formats/test_to_html.py
index 9662b3d514cb8..ca33185bf79eb 100644
--- a/pandas/tests/io/formats/test_to_html.py
+++ b/pandas/tests/io/formats/test_to_html.py
@@ -29,7 +29,7 @@ def expected_html(datapath, name):
str : contents of HTML file.
"""
filename = '.'.join([name, 'html'])
- filepath = datapath('io', 'formats', 'data', filename)
+ filepath = datapath('io', 'formats', 'data', 'html', filename)
with open(filepath, encoding='utf-8') as f:
html = f.read()
return html.rstrip()
@@ -415,6 +415,96 @@ def test_to_html_multiindex_max_cols(self, datapath):
expected = expected_html(datapath, 'gh6131_expected_output')
assert result == expected
+ def test_to_html_multi_indexes_index_false(self, datapath):
+ # GH 22579
+ df = DataFrame({'a': range(10), 'b': range(10, 20), 'c': range(10, 20),
+ 'd': range(10, 20)})
+ df.columns = MultiIndex.from_product([['a', 'b'], ['c', 'd']])
+ df.index = MultiIndex.from_product([['a', 'b'],
+ ['c', 'd', 'e', 'f', 'g']])
+ result = df.to_html(index=False)
+ expected = expected_html(datapath, 'gh22579_expected_output')
+ assert result == expected
+
+ @pytest.mark.parametrize('index_names', [True, False])
+ @pytest.mark.parametrize('index', [True, False])
+ @pytest.mark.parametrize('column_index, column_type', [
+ (Index([0, 1]), 'unnamed_standard'),
+ (Index([0, 1], name='columns.name'), 'named_standard'),
+ (MultiIndex.from_product([['a'], ['b', 'c']]), 'unnamed_multi'),
+ (MultiIndex.from_product(
+ [['a'], ['b', 'c']], names=['columns.name.0',
+ 'columns.name.1']), 'named_multi')
+ ])
+ @pytest.mark.parametrize('row_index, row_type', [
+ (Index([0, 1]), 'unnamed_standard'),
+ (Index([0, 1], name='index.name'), 'named_standard'),
+ (MultiIndex.from_product([['a'], ['b', 'c']]), 'unnamed_multi'),
+ (MultiIndex.from_product(
+ [['a'], ['b', 'c']], names=['index.name.0',
+ 'index.name.1']), 'named_multi')
+ ])
+ def test_to_html_basic_alignment(
+ self, datapath, row_index, row_type, column_index, column_type,
+ index, index_names):
+ # GH 22747, GH 22579
+ df = DataFrame(np.zeros((2, 2), dtype=int),
+ index=row_index, columns=column_index)
+ result = df.to_html(index=index, index_names=index_names)
+
+ if not index:
+ row_type = 'none'
+ elif not index_names and row_type.startswith('named'):
+ row_type = 'un' + row_type
+
+ if not index_names and column_type.startswith('named'):
+ column_type = 'un' + column_type
+
+ filename = 'index_' + row_type + '_columns_' + column_type
+ expected = expected_html(datapath, filename)
+ assert result == expected
+
+ @pytest.mark.parametrize('index_names', [True, False])
+ @pytest.mark.parametrize('index', [True, False])
+ @pytest.mark.parametrize('column_index, column_type', [
+ (Index(np.arange(8)), 'unnamed_standard'),
+ (Index(np.arange(8), name='columns.name'), 'named_standard'),
+ (MultiIndex.from_product(
+ [['a', 'b'], ['c', 'd'], ['e', 'f']]), 'unnamed_multi'),
+ (MultiIndex.from_product(
+ [['a', 'b'], ['c', 'd'], ['e', 'f']], names=['foo', None, 'baz']),
+ 'named_multi')
+ ])
+ @pytest.mark.parametrize('row_index, row_type', [
+ (Index(np.arange(8)), 'unnamed_standard'),
+ (Index(np.arange(8), name='index.name'), 'named_standard'),
+ (MultiIndex.from_product(
+ [['a', 'b'], ['c', 'd'], ['e', 'f']]), 'unnamed_multi'),
+ (MultiIndex.from_product(
+ [['a', 'b'], ['c', 'd'], ['e', 'f']], names=['foo', None, 'baz']),
+ 'named_multi')
+ ])
+ def test_to_html_alignment_with_truncation(
+ self, datapath, row_index, row_type, column_index, column_type,
+ index, index_names):
+ # GH 22747, GH 22579
+ df = DataFrame(np.arange(64).reshape(8, 8),
+ index=row_index, columns=column_index)
+ result = df.to_html(max_rows=4, max_cols=4,
+ index=index, index_names=index_names)
+
+ if not index:
+ row_type = 'none'
+ elif not index_names and row_type.startswith('named'):
+ row_type = 'un' + row_type
+
+ if not index_names and column_type.startswith('named'):
+ column_type = 'un' + column_type
+
+ filename = 'trunc_df_index_' + row_type + '_columns_' + column_type
+ expected = expected_html(datapath, filename)
+ assert result == expected
+
@pytest.mark.parametrize('index', [False, 0])
def test_to_html_truncation_index_false_max_rows(self, datapath, index):
# GH 15019
@@ -429,13 +519,20 @@ def test_to_html_truncation_index_false_max_rows(self, datapath, index):
assert result == expected
@pytest.mark.parametrize('index', [False, 0])
- def test_to_html_truncation_index_false_max_cols(self, datapath, index):
+ @pytest.mark.parametrize('col_index_named, expected_output', [
+ (False, 'gh22783_expected_output'),
+ (True, 'gh22783_named_columns_index')
+ ])
+ def test_to_html_truncation_index_false_max_cols(
+ self, datapath, index, col_index_named, expected_output):
# GH 22783
data = [[1.764052, 0.400157, 0.978738, 2.240893, 1.867558],
[-0.977278, 0.950088, -0.151357, -0.103219, 0.410599]]
df = DataFrame(data)
+ if col_index_named:
+ df.columns.rename('columns.name', inplace=True)
result = df.to_html(max_cols=4, index=index)
- expected = expected_html(datapath, 'gh22783_expected_output')
+ expected = expected_html(datapath, expected_output)
assert result == expected
def test_to_html_notebook_has_style(self):
| - [x] closes #22579
- [x] closes #22747
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
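For reference, a minimal sketch of what the new parametrized tests exercise (the frame below is illustrative, not one of the stored fixtures):

```python
import numpy as np
import pandas as pd

idx = pd.MultiIndex.from_product([['a', 'b'], ['c', 'd'], ['e', 'f']],
                                 names=['foo', None, 'baz'])
cols = pd.MultiIndex.from_product([['a', 'b'], ['c', 'd'], ['e', 'f']],
                                  names=['foo', None, 'baz'])
df = pd.DataFrame(np.arange(64).reshape(8, 8), index=idx, columns=cols)

# Truncated rendering with the index suppressed -- the case from GH 22579;
# index_names=False covers the "unnamed" variants in the fixture grid.
html = df.to_html(max_rows=4, max_cols=4, index=False, index_names=False)
```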
| https://api.github.com/repos/pandas-dev/pandas/pulls/22655 | 2018-09-10T03:35:05Z | 2019-01-01T16:35:15Z | 2019-01-01T16:35:14Z | 2019-01-01T21:00:37Z |
ENH: Support writing timestamps with timezones with to_sql | diff --git a/doc/source/io.rst b/doc/source/io.rst
index 68faefa872c88..9f458b58717d6 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -4806,6 +4806,36 @@ default ``Text`` type for string columns:
Because of this, reading the database table back in does **not** generate
a categorical.
+.. _io.sql_datetime_data:
+
+Datetime data types
+'''''''''''''''''''
+
+Using SQLAlchemy, :func:`~pandas.DataFrame.to_sql` is capable of writing
+datetime data that is timezone naive or timezone aware. However, the resulting
+data stored in the database ultimately depends on the supported data type
+for datetime data of the database system being used.
+
+The following table lists supported data types for datetime data for some
+common databases. Other database dialects may have different data types for
+datetime data.
+
+=========== ============================================= ===================
+Database SQL Datetime Types Timezone Support
+=========== ============================================= ===================
+SQLite ``TEXT`` No
+MySQL ``TIMESTAMP`` or ``DATETIME`` No
+PostgreSQL ``TIMESTAMP`` or ``TIMESTAMP WITH TIME ZONE`` Yes
+=========== ============================================= ===================
+
+When writing timezone aware data to databases that do not support timezones,
+the data will be written as timezone naive timestamps that are in local time
+with respect to the timezone.
+
+:func:`~pandas.read_sql_table` is also capable of reading datetime data that is
+timezone aware or naive. When reading ``TIMESTAMP WITH TIME ZONE`` types, pandas
+will convert the data to UTC.
+
Reading Tables
''''''''''''''
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index eb7a11e4ba17e..507af19bd3f29 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -222,6 +222,7 @@ Other Enhancements
- :class:`IntervalIndex` has gained the :meth:`~IntervalIndex.set_closed` method to change the existing ``closed`` value (:issue:`21670`)
- :func:`~DataFrame.to_csv`, :func:`~Series.to_csv`, :func:`~DataFrame.to_json`, and :func:`~Series.to_json` now support ``compression='infer'`` to infer compression based on filename extension (:issue:`15008`).
The default compression for ``to_csv``, ``to_json``, and ``to_pickle`` methods has been updated to ``'infer'`` (:issue:`22004`).
+- :meth:`DataFrame.to_sql` now supports writing ``TIMESTAMP WITH TIME ZONE`` types for supported databases. For databases that don't support timezones, datetime data will be stored as timezone unaware local timestamps. See the :ref:`io.sql_datetime_data` for implications (:issue:`9086`).
- :func:`to_timedelta` now supports iso-formated timedelta strings (:issue:`21877`)
- :class:`Series` and :class:`DataFrame` now support :class:`Iterable` in constructor (:issue:`2193`)
- :class:`DatetimeIndex` gained :attr:`DatetimeIndex.timetz` attribute. Returns local time with timezone information. (:issue:`21358`)
@@ -1245,6 +1246,9 @@ MultiIndex
I/O
^^^
+- Bug in :meth:`to_sql` when writing timezone aware data (``datetime64[ns, tz]`` dtype) would raise a ``TypeError`` (:issue:`9086`)
+- Bug in :meth:`to_sql` where a naive DatetimeIndex would be written as ``TIMESTAMP WITH TIME ZONE`` type in supported databases, e.g. PostgreSQL (:issue:`23510`)
+
.. _whatsnew_0240.bug_fixes.nan_with_str_dtype:
Proper handling of `np.NaN` in a string data-typed column with the Python engine
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 396b092a286c1..75c00eabe57e8 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2397,6 +2397,15 @@ def to_sql(self, name, con, schema=None, if_exists='fail', index=True,
--------
pandas.read_sql : read a DataFrame from a table
+ Notes
+ -----
+ Timezone aware datetime columns will be written as
+ ``Timestamp with timezone`` type with SQLAlchemy if supported by the
+ database. Otherwise, the datetimes will be stored as timezone unaware
+ timestamps local to the original timezone.
+
+ .. versionadded:: 0.24.0
+
References
----------
.. [1] http://docs.sqlalchemy.org
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index 00fbc35ed1e7d..2f411a956dfb8 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -592,12 +592,17 @@ def insert_data(self):
data_list = [None] * ncols
blocks = temp._data.blocks
- for i in range(len(blocks)):
- b = blocks[i]
+ for b in blocks:
if b.is_datetime:
- # convert to microsecond resolution so this yields
- # datetime.datetime
- d = b.values.astype('M8[us]').astype(object)
+ # return datetime.datetime objects
+ if b.is_datetimetz:
+ # GH 9086: Ensure we return datetimes with timezone info
+ # Need to return 2-D data; DatetimeIndex is 1D
+ d = b.values.to_pydatetime()
+ d = np.expand_dims(d, axis=0)
+ else:
+ # convert to microsecond resolution for datetime.datetime
+ d = b.values.astype('M8[us]').astype(object)
else:
d = np.array(b.get_values(), dtype=object)
@@ -612,7 +617,7 @@ def insert_data(self):
return column_names, data_list
def _execute_insert(self, conn, keys, data_iter):
- data = [{k: v for k, v in zip(keys, row)} for row in data_iter]
+ data = [dict(zip(keys, row)) for row in data_iter]
conn.execute(self.insert_statement(), data)
def insert(self, chunksize=None):
@@ -741,8 +746,9 @@ def _get_column_names_and_types(self, dtype_mapper):
def _create_table_setup(self):
from sqlalchemy import Table, Column, PrimaryKeyConstraint
- column_names_and_types = \
- self._get_column_names_and_types(self._sqlalchemy_type)
+ column_names_and_types = self._get_column_names_and_types(
+ self._sqlalchemy_type
+ )
columns = [Column(name, typ, index=is_index)
for name, typ, is_index in column_names_and_types]
@@ -841,14 +847,19 @@ def _sqlalchemy_type(self, col):
from sqlalchemy.types import (BigInteger, Integer, Float,
Text, Boolean,
- DateTime, Date, Time)
+ DateTime, Date, Time, TIMESTAMP)
if col_type == 'datetime64' or col_type == 'datetime':
+ # GH 9086: TIMESTAMP is the suggested type if the column contains
+ # timezone information
try:
- tz = col.tzinfo # noqa
- return DateTime(timezone=True)
+ if col.dt.tz is not None:
+ return TIMESTAMP(timezone=True)
except AttributeError:
- return DateTime
+ # The column is actually a DatetimeIndex
+ if col.tz is not None:
+ return TIMESTAMP(timezone=True)
+ return DateTime
if col_type == 'timedelta64':
warnings.warn("the 'timedelta' type is not supported, and will be "
"written as integer values (ns frequency) to the "
@@ -1275,8 +1286,9 @@ def _create_table_setup(self):
structure of a DataFrame. The first entry will be a CREATE TABLE
statement while the rest will be CREATE INDEX statements.
"""
- column_names_and_types = \
- self._get_column_names_and_types(self._sql_type_name)
+ column_names_and_types = self._get_column_names_and_types(
+ self._sql_type_name
+ )
pat = re.compile(r'\s+')
column_names = [col_name for col_name, _, _ in column_names_and_types]
diff --git a/pandas/tests/io/test_sql.py b/pandas/tests/io/test_sql.py
index 237cc2936919e..777b04bbae97d 100644
--- a/pandas/tests/io/test_sql.py
+++ b/pandas/tests/io/test_sql.py
@@ -961,7 +961,8 @@ def test_sqlalchemy_type_mapping(self):
utc=True)})
db = sql.SQLDatabase(self.conn)
table = sql.SQLTable("test_type", db, frame=df)
- assert isinstance(table.table.c['time'].type, sqltypes.DateTime)
+ # GH 9086: TIMESTAMP is the suggested type for datetimes with timezones
+ assert isinstance(table.table.c['time'].type, sqltypes.TIMESTAMP)
def test_database_uri_string(self):
@@ -1361,9 +1362,51 @@ def check(col):
df = sql.read_sql_table("types_test_data", self.conn)
check(df.DateColWithTz)
+ def test_datetime_with_timezone_roundtrip(self):
+ # GH 9086
+ # Write datetimetz data to a db and read it back
+ # For dbs that support timestamps with timezones, should get back UTC
+ # otherwise naive data should be returned
+ expected = DataFrame({'A': date_range(
+ '2013-01-01 09:00:00', periods=3, tz='US/Pacific'
+ )})
+ expected.to_sql('test_datetime_tz', self.conn, index=False)
+
+ if self.flavor == 'postgresql':
+ # SQLAlchemy "timezones" (i.e. offsets) are coerced to UTC
+ expected['A'] = expected['A'].dt.tz_convert('UTC')
+ else:
+ # Otherwise, timestamps are returned as local, naive
+ expected['A'] = expected['A'].dt.tz_localize(None)
+
+ result = sql.read_sql_table('test_datetime_tz', self.conn)
+ tm.assert_frame_equal(result, expected)
+
+ result = sql.read_sql_query(
+ 'SELECT * FROM test_datetime_tz', self.conn
+ )
+ if self.flavor == 'sqlite':
+ # read_sql_query does not return datetime type like read_sql_table
+ assert isinstance(result.loc[0, 'A'], string_types)
+ result['A'] = to_datetime(result['A'])
+ tm.assert_frame_equal(result, expected)
+
+ def test_naive_datetimeindex_roundtrip(self):
+ # GH 23510
+ # Ensure that a naive DatetimeIndex isn't converted to UTC
+ dates = date_range('2018-01-01', periods=5, freq='6H')
+ expected = DataFrame({'nums': range(5)}, index=dates)
+ expected.to_sql('foo_table', self.conn, index_label='info_date')
+ result = sql.read_sql_table('foo_table', self.conn,
+ index_col='info_date')
+ # result index will gain a name from the set_index operation; expected won't
+ tm.assert_frame_equal(result, expected, check_names=False)
+
def test_date_parsing(self):
# No Parsing
df = sql.read_sql_table("types_test_data", self.conn)
+ expected_type = object if self.flavor == 'sqlite' else np.datetime64
+ assert issubclass(df.DateCol.dtype.type, expected_type)
df = sql.read_sql_table("types_test_data", self.conn,
parse_dates=['DateCol'])
| - [x] closes #9086
- [x] closes #23510
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
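As a usage sketch of the new behaviour (assumes SQLAlchemy is installed; the in-memory SQLite engine and table name are illustrative):

```python
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine('sqlite://')  # SQLite has no timezone support

df = pd.DataFrame({'A': pd.date_range('2013-01-01 09:00:00',
                                      periods=3, tz='US/Pacific')})
# Previously raised TypeError; now stored as naive local timestamps on
# SQLite, or as TIMESTAMP WITH TIME ZONE on e.g. PostgreSQL.
df.to_sql('test_datetime_tz', engine, index=False)

result = pd.read_sql_table('test_datetime_tz', engine)
print(result['A'])  # naive here; tz-aware (UTC) when read from PostgreSQL
```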
| https://api.github.com/repos/pandas-dev/pandas/pulls/22654 | 2018-09-10T02:04:06Z | 2018-11-08T14:55:50Z | 2018-11-08T14:55:50Z | 2018-11-08T18:02:15Z |
BUG SeriesGroupBy.mean() overflowed on some integer arrays (#22487) | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 649629714c3b1..1f5ba610cdeb7 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -761,6 +761,7 @@ Groupby/Resample/Rolling
- Bug in :meth:`Resampler.apply` when passing positional arguments to applied func (:issue:`14615`).
- Bug in :meth:`Series.resample` when passing ``numpy.timedelta64`` to ``loffset`` kwarg (:issue:`7687`).
- Bug in :meth:`Resampler.asfreq` when frequency of ``TimedeltaIndex`` is a subperiod of a new frequency (:issue:`13022`).
+- Bug in :meth:`SeriesGroupBy.mean` when values were integral but could not fit inside int64, overflowing instead. (:issue:`22487`)
Sparse
^^^^^^
diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index b8cbb41501dd1..f6e7e87f1043b 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -90,6 +90,33 @@ def ensure_categorical(arr):
return arr
+def ensure_int64_or_float64(arr, copy=False):
+ """
+ Ensure that an dtype array of some integer dtype
+ has an int64 dtype if possible
+ If it's not possible, potentially because of overflow,
+ convert the array to float64 instead.
+
+ Parameters
+ ----------
+ arr : array-like
+ The array whose data type we want to enforce.
+ copy: boolean
+ Whether to copy the original array or reuse
+ it in place, if possible.
+
+ Returns
+ -------
+ out_arr : The input array cast as int64 if
+ possible without overflow.
+ Otherwise the input array cast to float64.
+ """
+ try:
+ return arr.astype('int64', copy=copy, casting='safe')
+ except TypeError:
+ return arr.astype('float64', copy=copy)
+
+
def is_object_dtype(arr_or_dtype):
"""
Check whether an array-like or dtype is of the object dtype.
diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
index ba04ff3a3d3ee..d9f7b4d9c31c3 100644
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -23,6 +23,7 @@
ensure_float64,
ensure_platform_int,
ensure_int64,
+ ensure_int64_or_float64,
ensure_object,
needs_i8_conversion,
is_integer_dtype,
@@ -471,7 +472,7 @@ def _cython_operation(self, kind, values, how, axis, min_count=-1,
if (values == iNaT).any():
values = ensure_float64(values)
else:
- values = values.astype('int64', copy=False)
+ values = ensure_int64_or_float64(values)
elif is_numeric and not is_complex_dtype(values):
values = ensure_float64(values)
else:
diff --git a/pandas/tests/groupby/test_function.py b/pandas/tests/groupby/test_function.py
index f8a0f1688c64e..775747ce0c6c1 100644
--- a/pandas/tests/groupby/test_function.py
+++ b/pandas/tests/groupby/test_function.py
@@ -1125,3 +1125,12 @@ def h(df, arg3):
expected = pd.Series([4, 8, 12], index=pd.Int64Index([1, 2, 3]))
tm.assert_series_equal(result, expected)
+
+
+def test_groupby_mean_no_overflow():
+ # Regression test for (#22487)
+ df = pd.DataFrame({
+ "user": ["A", "A", "A", "A", "A"],
+ "connections": [4970, 4749, 4719, 4704, 18446744073699999744]
+ })
+ assert df.groupby('user')['connections'].mean()['A'] == 3689348814740003840
| When integer arrays contained integers that were outside
the range of int64, the conversion would overflow.
Instead, only allow safe casting, and if a safe cast cannot
be done, cast to float64 instead.
- [X] closes #22487
- [X] tests added / passed
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [X] whatsnew entry
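A short reproduction of the overflow this fixes (the last value exceeds the int64 range, so the column comes in as uint64):

```python
import pandas as pd

df = pd.DataFrame({
    'user': ['A'] * 5,
    'connections': [4970, 4749, 4719, 4704, 18446744073699999744],
})
# Casting uint64 values above int64's maximum with casting='safe' raises
# TypeError, so the new helper falls back to float64 instead of wrapping.
print(df.groupby('user')['connections'].mean())
# A    3.689349e+18   (previously a garbage value from int64 overflow)
```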
| https://api.github.com/repos/pandas-dev/pandas/pulls/22653 | 2018-09-09T22:29:20Z | 2018-09-18T14:47:32Z | 2018-09-18T14:47:31Z | 2018-09-18T14:47:41Z |
ASV: more for str.cat | diff --git a/asv_bench/benchmarks/strings.py b/asv_bench/benchmarks/strings.py
index b203c8b0fa5c9..ccfac2f73f14d 100644
--- a/asv_bench/benchmarks/strings.py
+++ b/asv_bench/benchmarks/strings.py
@@ -1,7 +1,7 @@
import warnings
import numpy as np
-from pandas import Series
+from pandas import Series, DataFrame
import pandas.util.testing as tm
@@ -12,9 +12,6 @@ class Methods(object):
def setup(self):
self.s = Series(tm.makeStringIndex(10**5))
- def time_cat(self):
- self.s.str.cat(sep=',')
-
def time_center(self):
self.s.str.center(100)
@@ -87,6 +84,32 @@ def time_repeat(self, repeats):
self.s.str.repeat(self.repeat)
+class Cat(object):
+
+ goal_time = 0.2
+ params = ([0, 3], [None, ','], [None, '-'], [0.0, 0.001, 0.15])
+ param_names = ['other_cols', 'sep', 'na_rep', 'na_frac']
+
+ def setup(self, other_cols, sep, na_rep, na_frac):
+ N = 10 ** 5
+ mask_gen = lambda: np.random.choice([True, False], N,
+ p=[1 - na_frac, na_frac])
+ self.s = Series(tm.makeStringIndex(N)).where(mask_gen())
+ if other_cols == 0:
+ # str.cat self-concatenates only for others=None
+ self.others = None
+ else:
+ self.others = DataFrame({i: tm.makeStringIndex(N).where(mask_gen())
+ for i in range(other_cols)})
+
+ def time_cat(self, other_cols, sep, na_rep, na_frac):
+ # before the concatenation (one caller + other_cols columns), the total
+ # expected fraction of rows containing any NaN is:
+ # reduce(lambda t, _: t + (1 - t) * na_frac, range(other_cols + 1), 0)
+ # for other_cols=3 and na_frac=0.15, this works out to ~48%
+ self.s.str.cat(others=self.others, sep=sep, na_rep=na_rep)
+
+
class Contains(object):
goal_time = 0.2
| Planning to do a clean-up of the `Series.str.cat` internals and wanted to expand ASV-coverage before doing so.
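For reference, one cell of the parameter grid above corresponds to a call like this (N shrunk for illustration):

```python
import numpy as np
import pandas as pd
import pandas.util.testing as tm

N = 100
mask = np.random.choice([True, False], N, p=[0.85, 0.15])
s = pd.Series(tm.makeStringIndex(N)).where(mask)
others = pd.DataFrame({i: tm.makeStringIndex(N) for i in range(3)})

# other_cols=3, sep=',', na_rep='-', na_frac=0.15
out = s.str.cat(others=others, sep=',', na_rep='-')
```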
| https://api.github.com/repos/pandas-dev/pandas/pulls/22652 | 2018-09-09T20:42:12Z | 2018-09-14T12:43:59Z | 2018-09-14T12:43:59Z | 2018-09-15T16:39:27Z |
BUG: Make sure that sas7bdat parsers memory is initialized to 0 (#21616) | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index fb7af00f61534..2add33becd679 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -734,7 +734,7 @@ I/O
- :func:`read_html()` no longer ignores all-whitespace ``<tr>`` within ``<thead>`` when considering the ``skiprows`` and ``header`` arguments. Previously, users had to decrease their ``header`` and ``skiprows`` values on such tables to work around the issue. (:issue:`21641`)
- :func:`read_excel()` will correctly show the deprecation warning for previously deprecated ``sheetname`` (:issue:`17994`)
- :func:`read_csv()` will correctly parse timezone-aware datetimes (:issue:`22256`)
--
+- :func:`read_sas()` will correctly parse numbers in sas7bdat files that have width less than 8 bytes. (:issue:`21616`)
Plotting
^^^^^^^^
diff --git a/pandas/io/sas/sas7bdat.py b/pandas/io/sas/sas7bdat.py
index b2d930c1be5e7..efeb306b618d1 100644
--- a/pandas/io/sas/sas7bdat.py
+++ b/pandas/io/sas/sas7bdat.py
@@ -614,7 +614,7 @@ def read(self, nrows=None):
ns = (self.column_types == b's').sum()
self._string_chunk = np.empty((ns, nrows), dtype=np.object)
- self._byte_chunk = np.empty((nd, 8 * nrows), dtype=np.uint8)
+ self._byte_chunk = np.zeros((nd, 8 * nrows), dtype=np.uint8)
self._current_row_in_chunk_index = 0
p = Parser(self)
diff --git a/pandas/tests/io/sas/data/cars.sas7bdat b/pandas/tests/io/sas/data/cars.sas7bdat
new file mode 100644
index 0000000000000..ca5d3474c36ad
Binary files /dev/null and b/pandas/tests/io/sas/data/cars.sas7bdat differ
diff --git a/pandas/tests/io/sas/test_sas7bdat.py b/pandas/tests/io/sas/test_sas7bdat.py
index 101ee3e619f5b..efde152a918bd 100644
--- a/pandas/tests/io/sas/test_sas7bdat.py
+++ b/pandas/tests/io/sas/test_sas7bdat.py
@@ -183,6 +183,22 @@ def test_date_time(datapath):
tm.assert_frame_equal(df, df0)
+def test_compact_numerical_values(datapath):
+ # Regression test for #21616
+ fname = datapath("io", "sas", "data", "cars.sas7bdat")
+ df = pd.read_sas(fname, encoding='latin-1')
+ # The two columns CYL and WGT in cars.sas7bdat have column
+ # width < 8 and only contain integral values.
+ # Test that pandas doesn't corrupt the numbers by adding
+ # decimals.
+ result = df['WGT']
+ expected = df['WGT'].round()
+ tm.assert_series_equal(result, expected, check_exact=True)
+ result = df['CYL']
+ expected = df['CYL'].round()
+ tm.assert_series_equal(result, expected, check_exact=True)
+
+
def test_zero_variables(datapath):
# Check if the SAS file has zero variables (PR #18184)
fname = datapath("io", "sas", "data", "zero_variables.sas7bdat")
| Memory for numbers in the sas7bdat parser was not properly initialized to 0.
For sas7bdat files with numbers smaller than 8 bytes this made the
least significant part of the numbers essentially random.
Fix this by initializing the memory to zero.
- [X] closes #21616
- [X] tests added / passed
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [X] whatsnew entry
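The underlying problem can be sketched outside the parser; this mimics a 4-byte SAS numeric landing in an 8-byte slot (byte layout illustrative for little-endian doubles):

```python
import numpy as np

full = np.float64(1234.0).tobytes()        # 8 little-endian bytes
truncated = full[4:]                       # SAS stores only the high bytes

slot = np.zeros(8, dtype=np.uint8)         # the fix: zeroed padding
slot[4:] = np.frombuffer(truncated, dtype=np.uint8)
print(slot.view(np.float64)[0])            # 1234.0 exactly

# With np.empty the first four bytes held leftover memory, so the
# decoded value came out as 1234.0 plus random low-order noise.
```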
| https://api.github.com/repos/pandas-dev/pandas/pulls/22651 | 2018-09-09T18:58:39Z | 2018-09-15T12:12:56Z | 2018-09-15T12:12:56Z | 2018-09-15T12:13:01Z |
BUG: Dont include deleted rows from sas7bdat files (#15963) | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 31ef70703e2ca..3f25d03b22cae 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -760,6 +760,8 @@ I/O
- :func:`read_sas()` will correctly parse sas7bdat files with many columns (:issue:`22628`)
- :func:`read_sas()` will correctly parse sas7bdat files with data page types having also bit 7 set (so page type is 128 + 256 = 384) (:issue:`16615`)
- Bug in :meth:`detect_client_encoding` where potential ``IOError`` goes unhandled when importing in a mod_wsgi process due to restricted access to stdout. (:issue:`21552`)
+- :func:`read_sas()` will not include rows in sas7bdat files that have been marked as deleted by SAS, but are still present in the file. (:issue:`15963`)
+
Plotting
^^^^^^^^
diff --git a/pandas/io/sas/sas.pyx b/pandas/io/sas/sas.pyx
index a5bfd5866a261..0e1341fbecd05 100644
--- a/pandas/io/sas/sas.pyx
+++ b/pandas/io/sas/sas.pyx
@@ -204,9 +204,9 @@ cdef enum ColumnTypes:
# type the page_data types
cdef int page_meta_type = const.page_meta_type
-cdef int page_mix_types_0 = const.page_mix_types[0]
-cdef int page_mix_types_1 = const.page_mix_types[1]
cdef int page_data_type = const.page_data_type
+cdef int page_mix_type = const.page_mix_type
+cdef int page_type_mask = const.page_type_mask
cdef int subheader_pointers_offset = const.subheader_pointers_offset
@@ -219,7 +219,7 @@ cdef class Parser(object):
int64_t[:] column_types
uint8_t[:, :] byte_chunk
object[:, :] string_chunk
- char *cached_page
+ uint8_t *cached_page
int current_row_on_page_index
int current_page_block_count
int current_page_data_subheader_pointers_len
@@ -231,6 +231,7 @@ cdef class Parser(object):
int bit_offset
int subheader_pointer_length
int current_page_type
+ int current_page_deleted_rows_bitmap_offset
bint is_little_endian
const uint8_t[:] (*decompress)(int result_length,
const uint8_t[:] inbuff)
@@ -253,6 +254,7 @@ cdef class Parser(object):
self.subheader_pointer_length = self.parser._subheader_pointer_length
self.is_little_endian = parser.byte_order == "<"
self.column_types = np.empty(self.column_count, dtype='int64')
+ self.current_page_deleted_rows_bitmap_offset = -1
# page indicators
self.update_next_page()
@@ -309,10 +311,55 @@ cdef class Parser(object):
self.update_next_page()
return done
+ cdef int calculate_deleted_rows_bitmap_offset(self):
+ """Calculate where the deleted rows bitmap is located
+ in the page. It is _current_page_deleted_rows_bitmap_offset's
+ bytes away from the end of the row values"""
+
+ cdef:
+ int deleted_rows_bitmap_offset, page_type
+ int subheader_pointers_length, align_correction
+ int row_count
+
+ if self.parser._current_page_deleted_rows_bitmap_offset is None:
+ return -1
+
+ deleted_rows_bitmap_offset = \
+ self.parser._current_page_deleted_rows_bitmap_offset
+
+ page_type = self.current_page_type
+ subheader_pointers_length = \
+ self.subheader_pointer_length * self.current_page_subheaders_count
+
+ if page_type & page_type_mask == page_data_type:
+ return (
+ self.bit_offset +
+ subheader_pointers_offset +
+ self.row_length * self.current_page_block_count +
+ deleted_rows_bitmap_offset)
+ elif page_type & page_type_mask == page_mix_type:
+ align_correction = (
+ self.bit_offset +
+ subheader_pointers_offset +
+ subheader_pointers_length
+ ) % 8
+ row_count = min(self.parser._mix_page_row_count,
+ self.parser.row_count)
+ return (
+ self.bit_offset +
+ subheader_pointers_offset +
+ subheader_pointers_length +
+ align_correction +
+ self.row_length * row_count +
+ deleted_rows_bitmap_offset)
+ else:
+ # I have never seen this case.
+ return -1
+
cdef update_next_page(self):
# update data for the current page
- self.cached_page = <char *>self.parser._cached_page
self.cached_page = <uint8_t *>self.parser._cached_page
self.current_row_on_page_index = 0
self.current_page_type = self.parser._current_page_type
self.current_page_block_count = self.parser._current_page_block_count
@@ -321,11 +368,29 @@ cdef class Parser(object):
self.current_page_subheaders_count =\
self.parser._current_page_subheaders_count
+ self.current_page_deleted_rows_bitmap_offset =\
+ self.calculate_deleted_rows_bitmap_offset()
+
+ cdef bint is_row_deleted(self, int row_number):
+ cdef:
int row_idx
+ unsigned char byte, row_bit
+ if self.current_page_deleted_rows_bitmap_offset == -1:
+ return 0
+ row_idx = (row_number + 1) // 8
+ row_bit = 1 << (7 - (row_number % 8))
+
+ byte = self.cached_page[
+ self.current_page_deleted_rows_bitmap_offset + row_idx]
+
+ return byte & row_bit
+
cdef readline(self):
cdef:
int offset, bit_offset, align_correction
int subheader_pointer_length, mn
+ int block_count
bint done, flag
bit_offset = self.bit_offset
@@ -340,7 +405,7 @@ cdef class Parser(object):
# Loop until a data row is read
while True:
- if self.current_page_type == page_meta_type:
+ if self.current_page_type & page_type_mask == page_meta_type:
flag = self.current_row_on_page_index >=\
self.current_page_data_subheader_pointers_len
if flag:
@@ -355,8 +420,7 @@ cdef class Parser(object):
current_subheader_pointer.offset,
current_subheader_pointer.length)
return False
- elif (self.current_page_type == page_mix_types_0 or
- self.current_page_type == page_mix_types_1):
+ elif self.current_page_type & page_type_mask == page_mix_type:
align_correction = (bit_offset + subheader_pointers_offset +
self.current_page_subheaders_count *
subheader_pointer_length)
@@ -365,21 +429,35 @@ cdef class Parser(object):
offset += subheader_pointers_offset
offset += (self.current_page_subheaders_count *
subheader_pointer_length)
- offset += self.current_row_on_page_index * self.row_length
- self.process_byte_array_with_data(offset,
- self.row_length)
+
+ # Skip past rows marked as deleted
mn = min(self.parser.row_count,
self.parser._mix_page_row_count)
+ while (self.is_row_deleted(self.current_row_on_page_index) and
+ self.current_row_on_page_index < mn):
+ self.current_row_on_page_index += 1
+
+ if self.current_row_on_page_index < mn:
+ offset += self.current_row_on_page_index * self.row_length
+ self.process_byte_array_with_data(offset, self.row_length)
if self.current_row_on_page_index == mn:
done = self.read_next_page()
if done:
return True
return False
- elif self.current_page_type & page_data_type == page_data_type:
- self.process_byte_array_with_data(
- bit_offset + subheader_pointers_offset +
- self.current_row_on_page_index * self.row_length,
- self.row_length)
+ elif self.current_page_type & page_type_mask == page_data_type:
+ block_count = self.current_page_block_count
+
+ # Skip past rows marked as deleted
+ while (self.is_row_deleted(self.current_row_on_page_index) and
+ self.current_row_on_page_index != block_count):
+ self.current_row_on_page_index += 1
+
+ if self.current_row_on_page_index < block_count:
+ self.process_byte_array_with_data(
+ bit_offset + subheader_pointers_offset +
+ self.current_row_on_page_index * self.row_length,
+ self.row_length)
flag = (self.current_row_on_page_index ==
self.current_page_block_count)
if flag:
diff --git a/pandas/io/sas/sas7bdat.py b/pandas/io/sas/sas7bdat.py
index 3582f538c16bf..f948af0ae293f 100644
--- a/pandas/io/sas/sas7bdat.py
+++ b/pandas/io/sas/sas7bdat.py
@@ -298,12 +298,12 @@ def _parse_metadata(self):
def _process_page_meta(self):
self._read_page_header()
- pt = [const.page_meta_type, const.page_amd_type] + const.page_mix_types
- if self._current_page_type in pt:
+ pt = [const.page_meta_type, const.page_amd_type, const.page_mix_type]
+ page_type = self._current_page_type
+ if page_type & const.page_type_mask in pt:
self._process_page_metadata()
- is_data_page = self._current_page_type & const.page_data_type
- is_mix_page = self._current_page_type in const.page_mix_types
- return (is_data_page or is_mix_page
+ pt = [const.page_mix_type, const.page_data_type]
+ return (page_type & const.page_type_mask in pt
or self._current_page_data_subheader_pointers != [])
def _read_page_header(self):
@@ -313,6 +313,12 @@ def _read_page_header(self):
tx = const.block_count_offset + bit_offset
self._current_page_block_count = self._read_int(
tx, const.block_count_length)
+ if self._current_page_type & const.page_has_deleted_rows_bitmap:
+ tx = const.page_deleted_rows_bitmap_offset * self._int_length
+ self._current_page_deleted_rows_bitmap_offset = self._read_int(
+ tx, self._int_length)
+ else:
+ self._current_page_deleted_rows_bitmap_offset = None
tx = const.subheader_count_offset + bit_offset
self._current_page_subheaders_count = (
self._read_int(tx, const.subheader_count_length))
@@ -420,6 +426,9 @@ def _process_rowsize_subheader(self, offset, length):
offset + const.row_length_offset_multiplier * int_len, int_len)
self.row_count = self._read_int(
offset + const.row_count_offset_multiplier * int_len, int_len)
+ self.rows_deleted_count = self._read_int(
+ offset + const.rows_deleted_count_offset_multiplier * int_len,
+ int_len)
self.col_count_p1 = self._read_int(
offset + const.col_count_p1_multiplier * int_len, int_len)
self.col_count_p2 = self._read_int(
@@ -601,19 +610,20 @@ def _process_format_subheader(self, offset, length):
def read(self, nrows=None):
+ row_count = self.row_count - self.rows_deleted_count
if (nrows is None) and (self.chunksize is not None):
nrows = self.chunksize
elif nrows is None:
- nrows = self.row_count
+ nrows = row_count
if len(self._column_types) == 0:
self.close()
raise EmptyDataError("No columns to parse from file")
- if self._current_row_in_file_index >= self.row_count:
+ if self._current_row_in_file_index >= row_count:
return None
- m = self.row_count - self._current_row_in_file_index
+ m = row_count - self._current_row_in_file_index
if nrows > m:
nrows = m
@@ -647,12 +657,11 @@ def _read_next_page(self):
self._read_page_header()
page_type = self._current_page_type
- if page_type == const.page_meta_type:
+ if page_type & const.page_type_mask == const.page_meta_type:
self._process_page_metadata()
- is_data_page = page_type & const.page_data_type
- pt = [const.page_meta_type] + const.page_mix_types
- if not is_data_page and self._current_page_type not in pt:
+ pt = [const.page_meta_type, const.page_mix_type, const.page_data_type]
+ if page_type & const.page_type_mask not in pt:
return self._read_next_page()
return False
diff --git a/pandas/io/sas/sas_constants.py b/pandas/io/sas/sas_constants.py
index 98502d32d39e8..d1cb42c44b60b 100644
--- a/pandas/io/sas/sas_constants.py
+++ b/pandas/io/sas/sas_constants.py
@@ -43,6 +43,7 @@
os_name_length = 16
page_bit_offset_x86 = 16
page_bit_offset_x64 = 32
+page_deleted_rows_bitmap_offset = 3
subheader_pointer_length_x86 = 12
subheader_pointer_length_x64 = 24
page_type_offset = 0
@@ -52,11 +53,15 @@
subheader_count_offset = 4
subheader_count_length = 2
page_meta_type = 0
+# If page_type has bit 7 set there may be deleted rows.
+# These are marked in a bitmap following the row data.
+page_has_deleted_rows_bitmap = 128
page_data_type = 256
page_amd_type = 1024
page_metc_type = 16384
page_comp_type = -28672
-page_mix_types = [512, 640]
+page_mix_type = 512
+page_type_mask = (page_data_type | page_mix_type | page_amd_type)
subheader_pointers_offset = 8
truncated_subheader_id = 1
compressed_subheader_id = 4
@@ -64,6 +69,7 @@
text_block_size_length = 2
row_length_offset_multiplier = 5
row_count_offset_multiplier = 6
+rows_deleted_count_offset_multiplier = 7
col_count_p1_multiplier = 9
col_count_p2_multiplier = 10
row_count_on_mix_page_offset_multiplier = 15
diff --git a/pandas/tests/io/sas/data/datetime_deleted_rows.csv b/pandas/tests/io/sas/data/datetime_deleted_rows.csv
new file mode 100644
index 0000000000000..1687dcda79435
--- /dev/null
+++ b/pandas/tests/io/sas/data/datetime_deleted_rows.csv
@@ -0,0 +1,4 @@
+Date1,Date2,DateTime,DateTimeHi,Taiw
+1960-01-06,1960-01-04,1677-09-21 00:12:44,1677-09-21 00:12:43.145225525,1912-01-01
+1960-01-03,1960-01-05,2262-04-11 23:47:16,1960-01-01 00:00:00.000000000,1960-01-02
+1960-01-06,1960-01-04,1677-09-21 00:12:44,2262-04-11 23:47:16.854774475,1912-01-01
diff --git a/pandas/tests/io/sas/data/datetime_deleted_rows.sas7bdat b/pandas/tests/io/sas/data/datetime_deleted_rows.sas7bdat
new file mode 100644
index 0000000000000..a2e25c6fb0b3a
Binary files /dev/null and b/pandas/tests/io/sas/data/datetime_deleted_rows.sas7bdat differ
diff --git a/pandas/tests/io/sas/data/deleted_rows.csv b/pandas/tests/io/sas/data/deleted_rows.csv
new file mode 100644
index 0000000000000..509bcccfc4363
--- /dev/null
+++ b/pandas/tests/io/sas/data/deleted_rows.csv
@@ -0,0 +1,9993 @@
+idx
+4.0
+6.0
+7.0
+8.0
+9.0
+10.0
+11.0
+12.0
+13.0
+14.0
+15.0
+16.0
+17.0
+18.0
+19.0
+20.0
+21.0
+22.0
+23.0
+24.0
+25.0
+26.0
+27.0
+28.0
+29.0
+30.0
+31.0
+32.0
+33.0
+34.0
+35.0
+36.0
+37.0
+38.0
+39.0
+40.0
+41.0
+42.0
+43.0
+44.0
+45.0
+46.0
+47.0
+48.0
+49.0
+50.0
+51.0
+52.0
+53.0
+54.0
+55.0
+56.0
+57.0
+58.0
+59.0
+60.0
+61.0
+62.0
+63.0
+64.0
+65.0
+66.0
+67.0
+68.0
+69.0
+70.0
+71.0
+72.0
+73.0
+74.0
+75.0
+76.0
+77.0
+78.0
+79.0
+80.0
+81.0
+82.0
+83.0
+84.0
+85.0
+86.0
+87.0
+88.0
+89.0
+90.0
+91.0
+92.0
+93.0
+94.0
+95.0
+96.0
+97.0
+98.0
+99.0
+100.0
+101.0
+102.0
+103.0
+104.0
+105.0
+106.0
+107.0
+108.0
+109.0
+110.0
+111.0
+112.0
+113.0
+114.0
+115.0
+116.0
+117.0
+118.0
+119.0
+120.0
+121.0
+122.0
+123.0
+124.0
+125.0
+126.0
+127.0
+128.0
+129.0
+130.0
+131.0
+132.0
+133.0
+134.0
+135.0
+136.0
+137.0
+138.0
+139.0
+140.0
+141.0
+142.0
+143.0
+144.0
+145.0
+146.0
+147.0
+148.0
+149.0
+150.0
+151.0
+152.0
+153.0
+154.0
+155.0
+156.0
+157.0
+158.0
+159.0
+160.0
+161.0
+162.0
+163.0
+164.0
+165.0
+166.0
+167.0
+168.0
+169.0
+170.0
+171.0
+172.0
+173.0
+174.0
+175.0
+176.0
+177.0
+178.0
+179.0
+180.0
+181.0
+182.0
+183.0
+184.0
+185.0
+186.0
+187.0
+188.0
+189.0
+190.0
+191.0
+192.0
+193.0
+194.0
+195.0
+196.0
+197.0
+198.0
+199.0
+200.0
+201.0
+202.0
+203.0
+204.0
+205.0
+206.0
+207.0
+208.0
+209.0
+210.0
+211.0
+212.0
+213.0
+214.0
+215.0
+216.0
+217.0
+218.0
+219.0
+220.0
+221.0
+222.0
+223.0
+224.0
+225.0
+226.0
+227.0
+228.0
+229.0
+230.0
+231.0
+232.0
+233.0
+234.0
+235.0
+236.0
+237.0
+238.0
+239.0
+240.0
+241.0
+242.0
+243.0
+244.0
+245.0
+246.0
+247.0
+248.0
+249.0
+250.0
+251.0
+252.0
+253.0
+254.0
+255.0
+256.0
+257.0
+258.0
+259.0
+260.0
+261.0
+262.0
+263.0
+264.0
+265.0
+266.0
+267.0
+268.0
+269.0
+270.0
+271.0
+272.0
+273.0
+274.0
+275.0
+276.0
+277.0
+278.0
+279.0
+280.0
+281.0
+282.0
+283.0
+284.0
+285.0
+286.0
+287.0
+288.0
+289.0
+290.0
+291.0
+292.0
+293.0
+294.0
+295.0
+296.0
+297.0
+298.0
+299.0
+300.0
+301.0
+302.0
+303.0
+304.0
+305.0
+306.0
+307.0
+308.0
+309.0
+310.0
+311.0
+312.0
+313.0
+314.0
+315.0
+316.0
+317.0
+318.0
+319.0
+320.0
+321.0
+322.0
+323.0
+324.0
+325.0
+326.0
+327.0
+328.0
+329.0
+330.0
+331.0
+332.0
+333.0
+334.0
+335.0
+336.0
+337.0
+338.0
+339.0
+340.0
+341.0
+342.0
+343.0
+344.0
+345.0
+346.0
+347.0
+348.0
+349.0
+350.0
+351.0
+352.0
+353.0
+354.0
+355.0
+356.0
+357.0
+358.0
+359.0
+360.0
+361.0
+362.0
+363.0
+364.0
+365.0
+366.0
+367.0
+368.0
+369.0
+370.0
+371.0
+372.0
+373.0
+374.0
+375.0
+376.0
+377.0
+378.0
+379.0
+380.0
+381.0
+382.0
+383.0
+384.0
+385.0
+386.0
+387.0
+388.0
+389.0
+390.0
+391.0
+392.0
+393.0
+394.0
+395.0
+396.0
+397.0
+398.0
+399.0
+400.0
+401.0
+402.0
+403.0
+404.0
+405.0
+406.0
+407.0
+408.0
+409.0
+410.0
+411.0
+412.0
+413.0
+414.0
+415.0
+416.0
+417.0
+418.0
+419.0
+420.0
+421.0
+422.0
+423.0
+424.0
+425.0
+426.0
+427.0
+428.0
+429.0
+430.0
+431.0
+432.0
+433.0
+434.0
+435.0
+436.0
+437.0
+438.0
+439.0
+440.0
+441.0
+442.0
+443.0
+444.0
+445.0
+446.0
+447.0
+448.0
+449.0
+450.0
+451.0
+452.0
+453.0
+454.0
+455.0
+456.0
+457.0
+458.0
+459.0
+460.0
+461.0
+462.0
+463.0
+464.0
+465.0
+466.0
+467.0
+468.0
+469.0
+470.0
+471.0
+472.0
+473.0
+474.0
+475.0
+476.0
+477.0
+478.0
+479.0
+480.0
+481.0
+482.0
+483.0
+484.0
+485.0
+486.0
+487.0
+488.0
+489.0
+490.0
+491.0
+492.0
+493.0
+494.0
+495.0
+496.0
+497.0
+498.0
+499.0
+500.0
+501.0
+502.0
+503.0
+504.0
+505.0
+506.0
+507.0
+508.0
+509.0
+510.0
+511.0
+512.0
+513.0
+514.0
+515.0
+516.0
+517.0
+518.0
+519.0
+520.0
+521.0
+522.0
+523.0
+524.0
+525.0
+526.0
+527.0
+528.0
+529.0
+530.0
+531.0
+532.0
+533.0
+534.0
+535.0
+536.0
+537.0
+538.0
+539.0
+540.0
+541.0
+542.0
+543.0
+544.0
+545.0
+546.0
+547.0
+548.0
+549.0
+550.0
+551.0
+552.0
+553.0
+554.0
+555.0
+556.0
+557.0
+558.0
+559.0
+560.0
+561.0
+562.0
+563.0
+564.0
+565.0
+566.0
+567.0
+568.0
+569.0
+570.0
+571.0
+572.0
+573.0
+574.0
+575.0
+576.0
+577.0
+578.0
+579.0
+580.0
+581.0
+582.0
+583.0
+584.0
+585.0
+586.0
+587.0
+588.0
+589.0
+590.0
+591.0
+592.0
+593.0
+594.0
+595.0
+596.0
+597.0
+598.0
+599.0
+600.0
+601.0
+602.0
+603.0
+604.0
+605.0
+606.0
+607.0
+608.0
+609.0
+610.0
+611.0
+612.0
+613.0
+614.0
+615.0
+616.0
+617.0
+618.0
+619.0
+620.0
+621.0
+622.0
+623.0
+624.0
+625.0
+626.0
+627.0
+628.0
+629.0
+630.0
+631.0
+632.0
+633.0
+634.0
+635.0
+636.0
+637.0
+638.0
+639.0
+640.0
+641.0
+642.0
+643.0
+644.0
+645.0
+646.0
+647.0
+648.0
+649.0
+650.0
+651.0
+652.0
+653.0
+654.0
+655.0
+656.0
+657.0
+658.0
+659.0
+660.0
+661.0
+662.0
+663.0
+664.0
+665.0
+666.0
+667.0
+668.0
+669.0
+670.0
+671.0
+672.0
+673.0
+674.0
+675.0
+676.0
+677.0
+678.0
+679.0
+680.0
+681.0
+682.0
+683.0
+684.0
+685.0
+686.0
+687.0
+688.0
+689.0
+690.0
+691.0
+692.0
+693.0
+694.0
+695.0
+696.0
+697.0
+698.0
+699.0
+700.0
+701.0
+702.0
+703.0
+704.0
+705.0
+706.0
+707.0
+708.0
+709.0
+710.0
+711.0
+712.0
+713.0
+714.0
+715.0
+716.0
+717.0
+718.0
+719.0
+720.0
+721.0
+722.0
+723.0
+724.0
+725.0
+726.0
+727.0
+728.0
+729.0
+730.0
+731.0
+732.0
+733.0
+734.0
+735.0
+736.0
+737.0
+738.0
+739.0
+740.0
+741.0
+742.0
+743.0
+744.0
+745.0
+746.0
+747.0
+748.0
+749.0
+750.0
+751.0
+752.0
+753.0
+754.0
+755.0
+756.0
+757.0
+758.0
+759.0
+760.0
+761.0
+762.0
+763.0
+764.0
+765.0
+766.0
+767.0
+768.0
+769.0
+770.0
+771.0
+772.0
+773.0
+774.0
+775.0
+776.0
+777.0
+778.0
+779.0
+780.0
+781.0
+782.0
+783.0
+784.0
+785.0
+786.0
+787.0
+788.0
+789.0
+790.0
+791.0
+792.0
+793.0
+794.0
+795.0
+796.0
+797.0
+798.0
+799.0
+800.0
+801.0
+802.0
+803.0
+804.0
+805.0
+806.0
+807.0
+808.0
+809.0
+810.0
+811.0
+812.0
+813.0
+814.0
+815.0
+816.0
+817.0
+818.0
+819.0
+820.0
+821.0
+822.0
+823.0
+824.0
+825.0
+826.0
+827.0
+828.0
+829.0
+830.0
+831.0
+832.0
+833.0
+834.0
+835.0
+836.0
+837.0
+838.0
+839.0
+840.0
+841.0
+842.0
+843.0
+844.0
+845.0
+846.0
+847.0
+848.0
+849.0
+850.0
+851.0
+852.0
+853.0
+854.0
+855.0
+856.0
+857.0
+858.0
+859.0
+860.0
+861.0
+862.0
+863.0
+864.0
+865.0
+866.0
+867.0
+868.0
+869.0
+870.0
+871.0
+872.0
+873.0
+874.0
+875.0
+876.0
+877.0
+878.0
+879.0
+880.0
+881.0
+882.0
+883.0
+884.0
+885.0
+886.0
+887.0
+888.0
+889.0
+890.0
+891.0
+892.0
+893.0
+894.0
+895.0
+896.0
+897.0
+898.0
+899.0
+900.0
+901.0
+902.0
+903.0
+904.0
+905.0
+906.0
+907.0
+908.0
+909.0
+910.0
+911.0
+912.0
+913.0
+914.0
+915.0
+916.0
+917.0
+918.0
+919.0
+920.0
+921.0
+922.0
+923.0
+924.0
+925.0
+926.0
+927.0
+928.0
+929.0
+930.0
+931.0
+932.0
+933.0
+934.0
+935.0
+936.0
+937.0
+938.0
+939.0
+940.0
+941.0
+942.0
+943.0
+944.0
+945.0
+946.0
+947.0
+948.0
+949.0
+950.0
+951.0
+952.0
+953.0
+954.0
+955.0
+956.0
+957.0
+958.0
+959.0
+960.0
+961.0
+962.0
+963.0
+964.0
+965.0
+966.0
+967.0
+968.0
+969.0
+970.0
+971.0
+972.0
+973.0
+974.0
+975.0
+976.0
+977.0
+978.0
+979.0
+980.0
+981.0
+982.0
+983.0
+984.0
+985.0
+986.0
+987.0
+988.0
+989.0
+990.0
+991.0
+992.0
+993.0
+994.0
+995.0
+996.0
+997.0
+998.0
+999.0
+1000.0
+1001.0
+1002.0
+1003.0
+1004.0
+1005.0
+1006.0
+1007.0
+1008.0
+1009.0
+1010.0
+1011.0
+1012.0
+1013.0
+1014.0
+1015.0
+1016.0
+1017.0
+1018.0
+1019.0
+1020.0
+1021.0
+1022.0
+1023.0
+1024.0
+1025.0
+1026.0
+1027.0
+1028.0
+1029.0
+1030.0
+1031.0
+1032.0
+1033.0
+1034.0
+1035.0
+1036.0
+1037.0
+1038.0
+1039.0
+1040.0
+1041.0
+1042.0
+1043.0
+1044.0
+1045.0
+1046.0
+1047.0
+1048.0
+1049.0
+1050.0
+1051.0
+1052.0
+1053.0
+1054.0
+1055.0
+1056.0
+1057.0
+1058.0
+1059.0
+1060.0
+1061.0
+1062.0
+1063.0
+1064.0
+1065.0
+1066.0
+1067.0
+1068.0
+1069.0
[... sequential values +1070.0 through +9008.0, incrementing by 1.0 per line, elided ...]
+9009.0
+9013.0
[... sequential values +9014.0 through +9071.0, incrementing by 1.0 per line, elided ...]
+9072.0
+9073.0
+9074.0
+9075.0
+9076.0
+9077.0
+9078.0
+9079.0
+9080.0
+9081.0
+9082.0
+9083.0
+9084.0
+9085.0
+9086.0
+9087.0
+9088.0
+9089.0
+9090.0
+9091.0
+9092.0
+9093.0
+9094.0
+9095.0
+9096.0
+9097.0
+9098.0
+9099.0
+9100.0
+9101.0
+9102.0
+9103.0
+9104.0
+9105.0
+9106.0
+9107.0
+9108.0
+9109.0
+9110.0
+9111.0
+9112.0
+9113.0
+9114.0
+9115.0
+9116.0
+9117.0
+9118.0
+9119.0
+9120.0
+9121.0
+9122.0
+9123.0
+9124.0
+9125.0
+9126.0
+9127.0
+9128.0
+9129.0
+9130.0
+9131.0
+9132.0
+9133.0
+9134.0
+9135.0
+9136.0
+9137.0
+9138.0
+9139.0
+9140.0
+9141.0
+9142.0
+9143.0
+9144.0
+9145.0
+9146.0
+9147.0
+9148.0
+9149.0
+9150.0
+9151.0
+9152.0
+9153.0
+9154.0
+9155.0
+9156.0
+9157.0
+9158.0
+9159.0
+9160.0
+9161.0
+9162.0
+9163.0
+9164.0
+9165.0
+9166.0
+9167.0
+9168.0
+9169.0
+9170.0
+9171.0
+9172.0
+9173.0
+9174.0
+9175.0
+9176.0
+9177.0
+9178.0
+9179.0
+9180.0
+9181.0
+9182.0
+9183.0
+9184.0
+9185.0
+9186.0
+9187.0
+9188.0
+9189.0
+9190.0
+9191.0
+9192.0
+9193.0
+9194.0
+9195.0
+9196.0
+9197.0
+9198.0
+9199.0
+9200.0
+9201.0
+9202.0
+9203.0
+9204.0
+9205.0
+9206.0
+9207.0
+9208.0
+9209.0
+9210.0
+9211.0
+9212.0
+9213.0
+9214.0
+9215.0
+9216.0
+9217.0
+9218.0
+9219.0
+9220.0
+9221.0
+9222.0
+9223.0
+9224.0
+9225.0
+9226.0
+9227.0
+9228.0
+9229.0
+9230.0
+9231.0
+9232.0
+9233.0
+9234.0
+9235.0
+9236.0
+9237.0
+9238.0
+9239.0
+9240.0
+9241.0
+9242.0
+9243.0
+9244.0
+9245.0
+9246.0
+9247.0
+9248.0
+9249.0
+9250.0
+9251.0
+9252.0
+9253.0
+9254.0
+9255.0
+9256.0
+9257.0
+9258.0
+9259.0
+9260.0
+9261.0
+9262.0
+9263.0
+9264.0
+9265.0
+9266.0
+9267.0
+9268.0
+9269.0
+9270.0
+9271.0
+9272.0
+9273.0
+9274.0
+9275.0
+9276.0
+9277.0
+9278.0
+9279.0
+9280.0
+9281.0
+9282.0
+9283.0
+9284.0
+9285.0
+9286.0
+9287.0
+9288.0
+9289.0
+9290.0
+9291.0
+9292.0
+9293.0
+9294.0
+9295.0
+9296.0
+9297.0
+9298.0
+9299.0
+9300.0
+9301.0
+9302.0
+9303.0
+9304.0
+9305.0
+9306.0
+9307.0
+9308.0
+9309.0
+9310.0
+9311.0
+9312.0
+9313.0
+9314.0
+9315.0
+9316.0
+9317.0
+9318.0
+9319.0
+9320.0
+9321.0
+9322.0
+9323.0
+9324.0
+9325.0
+9326.0
+9327.0
+9328.0
+9329.0
+9330.0
+9331.0
+9332.0
+9333.0
+9334.0
+9335.0
+9336.0
+9337.0
+9338.0
+9339.0
+9340.0
+9341.0
+9342.0
+9343.0
+9344.0
+9345.0
+9346.0
+9347.0
+9348.0
+9349.0
+9350.0
+9351.0
+9352.0
+9353.0
+9354.0
+9355.0
+9356.0
+9357.0
+9358.0
+9359.0
+9360.0
+9361.0
+9362.0
+9363.0
+9364.0
+9365.0
+9366.0
+9367.0
+9368.0
+9369.0
+9370.0
+9371.0
+9372.0
+9373.0
+9374.0
+9375.0
+9376.0
+9377.0
+9378.0
+9379.0
+9380.0
+9381.0
+9382.0
+9383.0
+9384.0
+9385.0
+9386.0
+9387.0
+9388.0
+9389.0
+9390.0
+9391.0
+9392.0
+9393.0
+9394.0
+9395.0
+9396.0
+9397.0
+9398.0
+9399.0
+9400.0
+9401.0
+9402.0
+9403.0
+9404.0
+9405.0
+9406.0
+9407.0
+9408.0
+9409.0
+9410.0
+9411.0
+9412.0
+9413.0
+9414.0
+9415.0
+9416.0
+9417.0
+9418.0
+9419.0
+9420.0
+9421.0
+9422.0
+9423.0
+9424.0
+9425.0
+9426.0
+9427.0
+9428.0
+9429.0
+9430.0
+9431.0
+9432.0
+9433.0
+9434.0
+9435.0
+9436.0
+9437.0
+9438.0
+9439.0
+9440.0
+9441.0
+9442.0
+9443.0
+9444.0
+9445.0
+9446.0
+9447.0
+9448.0
+9449.0
+9450.0
+9451.0
+9452.0
+9453.0
+9454.0
+9455.0
+9456.0
+9457.0
+9458.0
+9459.0
+9460.0
+9461.0
+9462.0
+9463.0
+9464.0
+9465.0
+9466.0
+9467.0
+9468.0
+9469.0
+9470.0
+9471.0
+9472.0
+9473.0
+9474.0
+9475.0
+9476.0
+9477.0
+9478.0
+9479.0
+9480.0
+9481.0
+9482.0
+9483.0
+9484.0
+9485.0
+9486.0
+9487.0
+9488.0
+9489.0
+9490.0
+9491.0
+9492.0
+9493.0
+9494.0
+9495.0
+9496.0
+9497.0
+9498.0
+9499.0
+9500.0
+9501.0
+9502.0
+9503.0
+9504.0
+9505.0
+9506.0
+9507.0
+9508.0
+9509.0
+9510.0
+9511.0
+9512.0
+9513.0
+9514.0
+9515.0
+9516.0
+9517.0
+9518.0
+9519.0
+9520.0
+9521.0
+9522.0
+9523.0
+9524.0
+9525.0
+9526.0
+9527.0
+9528.0
+9529.0
+9530.0
+9531.0
+9532.0
+9533.0
+9534.0
+9535.0
+9536.0
+9537.0
+9538.0
+9539.0
+9540.0
+9541.0
+9542.0
+9543.0
+9544.0
+9545.0
+9546.0
+9547.0
+9548.0
+9549.0
+9550.0
+9551.0
+9552.0
+9553.0
+9554.0
+9555.0
+9556.0
+9557.0
+9558.0
+9559.0
+9560.0
+9561.0
+9562.0
+9563.0
+9564.0
+9565.0
+9566.0
+9567.0
+9568.0
+9569.0
+9570.0
+9571.0
+9572.0
+9573.0
+9574.0
+9575.0
+9576.0
+9577.0
+9578.0
+9579.0
+9580.0
+9581.0
+9582.0
+9583.0
+9584.0
+9585.0
+9586.0
+9587.0
+9588.0
+9589.0
+9590.0
+9591.0
+9592.0
+9593.0
+9594.0
+9595.0
+9596.0
+9597.0
+9598.0
+9599.0
+9600.0
+9601.0
+9602.0
+9603.0
+9604.0
+9605.0
+9606.0
+9607.0
+9608.0
+9609.0
+9610.0
+9611.0
+9612.0
+9613.0
+9614.0
+9615.0
+9616.0
+9617.0
+9618.0
+9619.0
+9620.0
+9621.0
+9622.0
+9623.0
+9624.0
+9625.0
+9626.0
+9627.0
+9628.0
+9629.0
+9630.0
+9631.0
+9632.0
+9633.0
+9634.0
+9635.0
+9636.0
+9637.0
+9638.0
+9639.0
+9640.0
+9641.0
+9642.0
+9643.0
+9644.0
+9645.0
+9646.0
+9647.0
+9648.0
+9649.0
+9650.0
+9651.0
+9652.0
+9653.0
+9654.0
+9655.0
+9656.0
+9657.0
+9658.0
+9659.0
+9660.0
+9661.0
+9662.0
+9663.0
+9664.0
+9665.0
+9666.0
+9667.0
+9668.0
+9669.0
+9670.0
+9671.0
+9672.0
+9673.0
+9674.0
+9675.0
+9676.0
+9677.0
+9678.0
+9679.0
+9680.0
+9681.0
+9682.0
+9683.0
+9684.0
+9685.0
+9686.0
+9687.0
+9688.0
+9689.0
+9690.0
+9691.0
+9692.0
+9693.0
+9694.0
+9695.0
+9696.0
+9697.0
+9698.0
+9699.0
+9700.0
+9701.0
+9702.0
+9703.0
+9704.0
+9705.0
+9706.0
+9707.0
+9708.0
+9709.0
+9710.0
+9711.0
+9712.0
+9713.0
+9714.0
+9715.0
+9716.0
+9717.0
+9718.0
+9719.0
+9720.0
+9721.0
+9722.0
+9723.0
+9724.0
+9725.0
+9726.0
+9727.0
+9728.0
+9729.0
+9730.0
+9731.0
+9732.0
+9733.0
+9734.0
+9735.0
+9736.0
+9737.0
+9738.0
+9739.0
+9740.0
+9741.0
+9742.0
+9743.0
+9744.0
+9745.0
+9746.0
+9747.0
+9748.0
+9749.0
+9750.0
+9751.0
+9752.0
+9753.0
+9754.0
+9755.0
+9756.0
+9757.0
+9758.0
+9759.0
+9760.0
+9761.0
+9762.0
+9763.0
+9764.0
+9765.0
+9766.0
+9767.0
+9768.0
+9769.0
+9770.0
+9771.0
+9772.0
+9773.0
+9774.0
+9775.0
+9776.0
+9777.0
+9778.0
+9779.0
+9780.0
+9781.0
+9782.0
+9783.0
+9784.0
+9785.0
+9786.0
+9787.0
+9788.0
+9789.0
+9790.0
+9791.0
+9792.0
+9793.0
+9794.0
+9795.0
+9796.0
+9797.0
+9798.0
+9799.0
+9800.0
+9801.0
+9802.0
+9803.0
+9804.0
+9805.0
+9806.0
+9807.0
+9808.0
+9809.0
+9810.0
+9811.0
+9812.0
+9813.0
+9814.0
+9815.0
+9816.0
+9817.0
+9818.0
+9819.0
+9820.0
+9821.0
+9822.0
+9823.0
+9824.0
+9825.0
+9826.0
+9827.0
+9828.0
+9829.0
+9830.0
+9831.0
+9832.0
+9833.0
+9834.0
+9835.0
+9836.0
+9837.0
+9838.0
+9839.0
+9840.0
+9841.0
+9842.0
+9843.0
+9844.0
+9845.0
+9846.0
+9847.0
+9848.0
+9849.0
+9850.0
+9851.0
+9852.0
+9853.0
+9854.0
+9855.0
+9856.0
+9857.0
+9858.0
+9859.0
+9860.0
+9861.0
+9862.0
+9863.0
+9864.0
+9865.0
+9866.0
+9867.0
+9868.0
+9869.0
+9870.0
+9871.0
+9872.0
+9873.0
+9874.0
+9875.0
+9876.0
+9877.0
+9878.0
+9879.0
+9880.0
+9881.0
+9882.0
+9883.0
+9884.0
+9885.0
+9886.0
+9887.0
+9888.0
+9889.0
+9890.0
+9891.0
+9892.0
+9893.0
+9894.0
+9895.0
+9896.0
+9897.0
+9898.0
+9899.0
+9900.0
+9901.0
+9902.0
+9903.0
+9904.0
+9905.0
+9906.0
+9907.0
+9908.0
+9909.0
+9910.0
+9911.0
+9912.0
+9913.0
+9914.0
+9915.0
+9916.0
+9917.0
+9918.0
+9919.0
+9920.0
+9921.0
+9922.0
+9923.0
+9924.0
+9925.0
+9926.0
+9927.0
+9928.0
+9929.0
+9930.0
+9931.0
+9932.0
+9933.0
+9934.0
+9935.0
+9936.0
+9937.0
+9938.0
+9939.0
+9940.0
+9941.0
+9942.0
+9943.0
+9944.0
+9945.0
+9946.0
+9947.0
+9948.0
+9949.0
+9950.0
+9951.0
+9952.0
+9953.0
+9954.0
+9955.0
+9956.0
+9957.0
+9958.0
+9959.0
+9960.0
+9961.0
+9962.0
+9963.0
+9964.0
+9965.0
+9966.0
+9967.0
+9968.0
+9969.0
+9970.0
+9971.0
+9972.0
+9973.0
+9974.0
+9975.0
+9976.0
+9977.0
+9978.0
+9979.0
+9980.0
+9981.0
+9982.0
+9983.0
+9984.0
+9985.0
+9986.0
+9987.0
+9988.0
+9989.0
+9990.0
+9991.0
+9993.0
+9994.0
+9995.0
+9996.0
+9997.0
+9998.0
+9999.0
+10000.0
diff --git a/pandas/tests/io/sas/data/deleted_rows.sas7bdat b/pandas/tests/io/sas/data/deleted_rows.sas7bdat
new file mode 100644
index 0000000000000..052274e690271
Binary files /dev/null and b/pandas/tests/io/sas/data/deleted_rows.sas7bdat differ
diff --git a/pandas/tests/io/sas/test_sas7bdat.py b/pandas/tests/io/sas/test_sas7bdat.py
index 705387188438f..4e49db381e198 100644
--- a/pandas/tests/io/sas/test_sas7bdat.py
+++ b/pandas/tests/io/sas/test_sas7bdat.py
@@ -214,7 +214,23 @@ def test_inconsistent_number_of_rows(datapath):
# Regression test for issue #16615. (PR #22628)
fname = datapath("io", "sas", "data", "load_log.sas7bdat")
df = pd.read_sas(fname, encoding='latin-1')
- assert len(df) == 2097
+ assert len(df) == 2088
+
+
+def test_deleted_rows(datapath):
+ # Regression test for issue #15963. (PR #22650)
+ TESTS = [['deleted_rows', {}],
+ ['datetime_deleted_rows', {
+ 'parse_dates': ['Date1', 'Date2', 'DateTime',
+ 'DateTimeHi', 'Taiw']
+ }]]
+ for fn, csv_kwargs in TESTS:
+ fname = datapath("io", "sas", "data", "{}.sas7bdat".format(
+ fn))
+ df = pd.read_sas(fname, encoding='latin-1')
+ fname = datapath("io", "sas", "data", "{}.csv".format(fn))
+ df0 = pd.read_csv(fname, **csv_kwargs)
+ tm.assert_frame_equal(df, df0)
def test_zero_variables(datapath):
| Sas7bdat files may contain rows that have actually been deleted.
After reverse engineering the file format a bit, I found that
if the page_type has bit 7 set, a bitmap follows the normal row
data, with a bit set for each row that has been deleted. This PR
uses that information to exclude deleted rows from the
resulting dataframe.
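For illustration, here is a minimal sketch of that bitmap logic. The names (`DELETED_ROWS_BIT`, `page_has_deletion_bitmap`, `deleted_row_mask`) and the MSB-first bit packing are assumptions for exposition, not the actual parser internals:
```
# Hypothetical sketch of the deleted-rows bitmap described above.
# Assumptions: bit 7 of page_type flags the bitmap's presence, and
# bits are packed most-significant-bit first, one bit per row.
DELETED_ROWS_BIT = 0x80

def page_has_deletion_bitmap(page_type):
    # True when bit 7 of the page type is set
    return bool(page_type & DELETED_ROWS_BIT)

def deleted_row_mask(bitmap_bytes, row_count):
    # A set bit marks the corresponding row as deleted.
    return [bool(bitmap_bytes[i // 8] & (0x80 >> (i % 8)))
            for i in range(row_count)]

# e.g. 0b10100000 -> rows 0 and 2 deleted out of 3
assert deleted_row_mask(b"\xa0", 3) == [True, False, True]
```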
This PR is built on top of #22628
I've added two test cases: one from the issue and a constructed
example that tests parsing on different page types and across multiple sas7bdat pages.
The constructed example is rather large, mainly so that the data
flows across several pages.
- [X] closes #15963
- [X] tests added / passed
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [X] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/22650 | 2018-09-09T16:35:50Z | 2019-02-06T03:31:36Z | null | 2020-05-04T05:30:40Z |
BUG/ENH: Handle AmbiguousTimeError in date rounding | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 487d5d0d2accd..de4d33789105a 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -182,6 +182,7 @@ Other Enhancements
- :func:`to_timedelta` now supports iso-formated timedelta strings (:issue:`21877`)
- :class:`Series` and :class:`DataFrame` now support :class:`Iterable` in constructor (:issue:`2193`)
- :class:`DatetimeIndex` gained :attr:`DatetimeIndex.timetz` attribute. Returns local time with timezone information. (:issue:`21358`)
+- :meth:`round`, :meth:`ceil`, and :meth:`floor` for :class:`DatetimeIndex` and :class:`Timestamp` now support an ``ambiguous`` argument for handling datetimes that are rounded to ambiguous times (:issue:`18946`)
- :class:`Resampler` now is iterable like :class:`GroupBy` (:issue:`15314`).
- :meth:`Series.resample` and :meth:`DataFrame.resample` have gained the :meth:`Resampler.quantile` (:issue:`15023`).
- :meth:`Index.to_frame` now supports overriding column name(s) (:issue:`22580`).
diff --git a/pandas/_libs/tslibs/nattype.pyx b/pandas/_libs/tslibs/nattype.pyx
index fd8486f690745..ae4f9c821b5d1 100644
--- a/pandas/_libs/tslibs/nattype.pyx
+++ b/pandas/_libs/tslibs/nattype.pyx
@@ -477,6 +477,13 @@ class NaTType(_NaT):
Parameters
----------
freq : a freq string indicating the rounding resolution
+ ambiguous : bool, 'NaT', default 'raise'
+ - bool contains flags to determine if time is dst or not (note
+ that this flag is only applicable for ambiguous fall dst dates)
+ - 'NaT' will return NaT for an ambiguous time
+ - 'raise' will raise an AmbiguousTimeError for an ambiguous time
+
+ .. versionadded:: 0.24.0
Raises
------
@@ -489,6 +496,17 @@ class NaTType(_NaT):
Parameters
----------
freq : a freq string indicating the flooring resolution
+ ambiguous : bool, 'NaT', default 'raise'
+ - bool contains flags to determine if time is dst or not (note
+ that this flag is only applicable for ambiguous fall dst dates)
+ - 'NaT' will return NaT for an ambiguous time
+ - 'raise' will raise an AmbiguousTimeError for an ambiguous time
+
+ .. versionadded:: 0.24.0
+
+ Raises
+ ------
+ ValueError if the freq cannot be converted
""")
ceil = _make_nat_func('ceil', # noqa:E128
"""
@@ -497,6 +515,17 @@ class NaTType(_NaT):
Parameters
----------
freq : a freq string indicating the ceiling resolution
+ ambiguous : bool, 'NaT', default 'raise'
+ - bool contains flags to determine if time is dst or not (note
+ that this flag is only applicable for ambiguous fall dst dates)
+ - 'NaT' will return NaT for an ambiguous time
+ - 'raise' will raise an AmbiguousTimeError for an ambiguous time
+
+ .. versionadded:: 0.24.0
+
+ Raises
+ ------
+ ValueError if the freq cannot be converted
""")
tz_convert = _make_nat_func('tz_convert', # noqa:E128
diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
index 52343593d1cc1..e985a519c3046 100644
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -656,7 +656,7 @@ class Timestamp(_Timestamp):
return create_timestamp_from_ts(ts.value, ts.dts, ts.tzinfo, freq)
- def _round(self, freq, rounder):
+ def _round(self, freq, rounder, ambiguous='raise'):
if self.tz is not None:
value = self.tz_localize(None).value
else:
@@ -668,10 +668,10 @@ class Timestamp(_Timestamp):
r = round_ns(value, rounder, freq)[0]
result = Timestamp(r, unit='ns')
if self.tz is not None:
- result = result.tz_localize(self.tz)
+ result = result.tz_localize(self.tz, ambiguous=ambiguous)
return result
- def round(self, freq):
+ def round(self, freq, ambiguous='raise'):
"""
Round the Timestamp to the specified resolution
@@ -682,32 +682,61 @@ class Timestamp(_Timestamp):
Parameters
----------
freq : a freq string indicating the rounding resolution
+ ambiguous : bool, 'NaT', default 'raise'
+ - bool contains flags to determine if time is dst or not (note
+ that this flag is only applicable for ambiguous fall dst dates)
+ - 'NaT' will return NaT for an ambiguous time
+ - 'raise' will raise an AmbiguousTimeError for an ambiguous time
+
+ .. versionadded:: 0.24.0
Raises
------
ValueError if the freq cannot be converted
"""
- return self._round(freq, np.round)
+ return self._round(freq, np.round, ambiguous)
- def floor(self, freq):
+ def floor(self, freq, ambiguous='raise'):
"""
return a new Timestamp floored to this resolution
Parameters
----------
freq : a freq string indicating the flooring resolution
+ ambiguous : bool, 'NaT', default 'raise'
+ - bool contains flags to determine if time is dst or not (note
+ that this flag is only applicable for ambiguous fall dst dates)
+ - 'NaT' will return NaT for an ambiguous time
+ - 'raise' will raise an AmbiguousTimeError for an ambiguous time
+
+ .. versionadded:: 0.24.0
+
+ Raises
+ ------
+ ValueError if the freq cannot be converted
"""
- return self._round(freq, np.floor)
+ return self._round(freq, np.floor, ambiguous)
- def ceil(self, freq):
+ def ceil(self, freq, ambiguous='raise'):
"""
return a new Timestamp ceiled to this resolution
Parameters
----------
freq : a freq string indicating the ceiling resolution
+ ambiguous : bool, 'NaT', default 'raise'
+ - bool contains flags to determine if time is dst or not (note
+ that this flag is only applicable for ambiguous fall dst dates)
+ - 'NaT' will return NaT for an ambiguous time
+ - 'raise' will raise an AmbiguousTimeError for an ambiguous time
+
+ .. versionadded:: 0.24.0
+
+ Raises
+ ------
+ ValueError if the freq cannot be converted
"""
- return self._round(freq, np.ceil)
+ return self._round(freq, np.ceil, ambiguous)
@property
def tz(self):
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 3f8c07fe7cd21..578167a7db500 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -99,6 +99,18 @@ class TimelikeOps(object):
frequency like 'S' (second) not 'ME' (month end). See
:ref:`frequency aliases <timeseries.offset_aliases>` for
a list of possible `freq` values.
+ ambiguous : 'infer', bool-ndarray, 'NaT', default 'raise'
+ - 'infer' will attempt to infer fall dst-transition hours based on
+ order
+ - bool-ndarray where True signifies a DST time, False designates
+ a non-DST time (note that this flag is only applicable for
+ ambiguous times)
+ - 'NaT' will return NaT where there are ambiguous times
+ - 'raise' will raise an AmbiguousTimeError if there are ambiguous
+ times
+ Only relevant for DatetimeIndex
+
+ .. versionadded:: 0.24.0
Returns
-------
@@ -168,7 +180,7 @@ class TimelikeOps(object):
"""
)
- def _round(self, freq, rounder):
+ def _round(self, freq, rounder, ambiguous):
# round the local times
values = _ensure_datetimelike_to_i8(self)
result = round_ns(values, rounder, freq)
@@ -180,19 +192,20 @@ def _round(self, freq, rounder):
if 'tz' in attribs:
attribs['tz'] = None
return self._ensure_localized(
- self._shallow_copy(result, **attribs))
+ self._shallow_copy(result, **attribs), ambiguous
+ )
@Appender((_round_doc + _round_example).format(op="round"))
- def round(self, freq, *args, **kwargs):
- return self._round(freq, np.round)
+ def round(self, freq, ambiguous='raise'):
+ return self._round(freq, np.round, ambiguous)
@Appender((_round_doc + _floor_example).format(op="floor"))
- def floor(self, freq):
- return self._round(freq, np.floor)
+ def floor(self, freq, ambiguous='raise'):
+ return self._round(freq, np.floor, ambiguous)
@Appender((_round_doc + _ceil_example).format(op="ceil"))
- def ceil(self, freq):
- return self._round(freq, np.ceil)
+ def ceil(self, freq, ambiguous='raise'):
+ return self._round(freq, np.ceil, ambiguous)
class DatetimeIndexOpsMixin(DatetimeLikeArrayMixin):
@@ -264,7 +277,7 @@ def _evaluate_compare(self, other, op):
except TypeError:
return result
- def _ensure_localized(self, result):
+ def _ensure_localized(self, result, ambiguous='raise'):
"""
ensure that we are re-localized
@@ -274,6 +287,8 @@ def _ensure_localized(self, result):
Parameters
----------
result : DatetimeIndex / i8 ndarray
+ ambiguous : str, bool, or bool-ndarray
+ default 'raise'
Returns
-------
@@ -284,7 +299,7 @@ def _ensure_localized(self, result):
if getattr(self, 'tz', None) is not None:
if not isinstance(result, ABCIndexClass):
result = self._simple_new(result)
- result = result.tz_localize(self.tz)
+ result = result.tz_localize(self.tz, ambiguous=ambiguous)
return result
def _box_values_as_index(self):
diff --git a/pandas/tests/scalar/timestamp/test_unary_ops.py b/pandas/tests/scalar/timestamp/test_unary_ops.py
index bf41840c58ded..f83aa31edf95a 100644
--- a/pandas/tests/scalar/timestamp/test_unary_ops.py
+++ b/pandas/tests/scalar/timestamp/test_unary_ops.py
@@ -132,6 +132,28 @@ def test_floor(self):
expected = Timestamp('20130101')
assert result == expected
+ @pytest.mark.parametrize('method', ['ceil', 'round', 'floor'])
+ def test_round_dst_border(self, method):
+ # GH 18946 round near DST
+ ts = Timestamp('2017-10-29 00:00:00', tz='UTC').tz_convert(
+ 'Europe/Madrid'
+ )
+        # ambiguous=True should resolve to the DST (first) occurrence
+ result = getattr(ts, method)('H', ambiguous=True)
+ assert result == ts
+
+ result = getattr(ts, method)('H', ambiguous=False)
+ expected = Timestamp('2017-10-29 01:00:00', tz='UTC').tz_convert(
+ 'Europe/Madrid'
+ )
+ assert result == expected
+
+ result = getattr(ts, method)('H', ambiguous='NaT')
+ assert result is NaT
+
+ with pytest.raises(pytz.AmbiguousTimeError):
+ getattr(ts, method)('H', ambiguous='raise')
+
# --------------------------------------------------------------
# Timestamp.replace
diff --git a/pandas/tests/series/test_datetime_values.py b/pandas/tests/series/test_datetime_values.py
index 5b45c6003a005..fee2323310b9c 100644
--- a/pandas/tests/series/test_datetime_values.py
+++ b/pandas/tests/series/test_datetime_values.py
@@ -5,6 +5,7 @@
import calendar
import unicodedata
import pytest
+import pytz
from datetime import datetime, time, date
@@ -95,42 +96,6 @@ def compare(s, name):
expected = Series(exp_values, index=s.index, name='xxx')
tm.assert_series_equal(result, expected)
- # round
- s = Series(pd.to_datetime(['2012-01-01 13:00:00',
- '2012-01-01 12:01:00',
- '2012-01-01 08:00:00']), name='xxx')
- result = s.dt.round('D')
- expected = Series(pd.to_datetime(['2012-01-02', '2012-01-02',
- '2012-01-01']), name='xxx')
- tm.assert_series_equal(result, expected)
-
- # round with tz
- result = (s.dt.tz_localize('UTC')
- .dt.tz_convert('US/Eastern')
- .dt.round('D'))
- exp_values = pd.to_datetime(['2012-01-01', '2012-01-01',
- '2012-01-01']).tz_localize('US/Eastern')
- expected = Series(exp_values, name='xxx')
- tm.assert_series_equal(result, expected)
-
- # floor
- s = Series(pd.to_datetime(['2012-01-01 13:00:00',
- '2012-01-01 12:01:00',
- '2012-01-01 08:00:00']), name='xxx')
- result = s.dt.floor('D')
- expected = Series(pd.to_datetime(['2012-01-01', '2012-01-01',
- '2012-01-01']), name='xxx')
- tm.assert_series_equal(result, expected)
-
- # ceil
- s = Series(pd.to_datetime(['2012-01-01 13:00:00',
- '2012-01-01 12:01:00',
- '2012-01-01 08:00:00']), name='xxx')
- result = s.dt.ceil('D')
- expected = Series(pd.to_datetime(['2012-01-02', '2012-01-02',
- '2012-01-02']), name='xxx')
- tm.assert_series_equal(result, expected)
-
# datetimeindex with tz
s = Series(date_range('20130101', periods=5, tz='US/Eastern'),
name='xxx')
@@ -261,6 +226,64 @@ def get_dir(s):
with pytest.raises(com.SettingWithCopyError):
s.dt.hour[0] = 5
+ @pytest.mark.parametrize('method, dates', [
+ ['round', ['2012-01-02', '2012-01-02', '2012-01-01']],
+ ['floor', ['2012-01-01', '2012-01-01', '2012-01-01']],
+ ['ceil', ['2012-01-02', '2012-01-02', '2012-01-02']]
+ ])
+ def test_dt_round(self, method, dates):
+ # round
+ s = Series(pd.to_datetime(['2012-01-01 13:00:00',
+ '2012-01-01 12:01:00',
+ '2012-01-01 08:00:00']), name='xxx')
+ result = getattr(s.dt, method)('D')
+ expected = Series(pd.to_datetime(dates), name='xxx')
+ tm.assert_series_equal(result, expected)
+
+ def test_dt_round_tz(self):
+ s = Series(pd.to_datetime(['2012-01-01 13:00:00',
+ '2012-01-01 12:01:00',
+ '2012-01-01 08:00:00']), name='xxx')
+ result = (s.dt.tz_localize('UTC')
+ .dt.tz_convert('US/Eastern')
+ .dt.round('D'))
+
+ exp_values = pd.to_datetime(['2012-01-01', '2012-01-01',
+ '2012-01-01']).tz_localize('US/Eastern')
+ expected = Series(exp_values, name='xxx')
+ tm.assert_series_equal(result, expected)
+
+ @pytest.mark.parametrize('method', ['ceil', 'round', 'floor'])
+ def test_dt_round_tz_ambiguous(self, method):
+ # GH 18946 round near DST
+ df1 = pd.DataFrame([
+ pd.to_datetime('2017-10-29 02:00:00+02:00', utc=True),
+ pd.to_datetime('2017-10-29 02:00:00+01:00', utc=True),
+ pd.to_datetime('2017-10-29 03:00:00+01:00', utc=True)
+ ],
+ columns=['date'])
+ df1['date'] = df1['date'].dt.tz_convert('Europe/Madrid')
+ # infer
+ result = getattr(df1.date.dt, method)('H', ambiguous='infer')
+ expected = df1['date']
+ tm.assert_series_equal(result, expected)
+
+ # bool-array
+ result = getattr(df1.date.dt, method)(
+ 'H', ambiguous=[True, False, False]
+ )
+ tm.assert_series_equal(result, expected)
+
+ # NaT
+ result = getattr(df1.date.dt, method)('H', ambiguous='NaT')
+ expected = df1['date'].copy()
+ expected.iloc[0:2] = pd.NaT
+ tm.assert_series_equal(result, expected)
+
+ # raise
+ with pytest.raises(pytz.AmbiguousTimeError):
+ getattr(df1.date.dt, method)('H', ambiguous='raise')
+
def test_dt_namespace_accessor_categorical(self):
# GH 19468
dti = DatetimeIndex(['20171111', '20181212']).repeat(2)
| - [x] closes #18946
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
As discussed in #22560, this adds an `ambiguous` argument to `round`, `ceil`, and `floor` so the user can dictate how to handle rounding timestamps that land on ambiguous times.
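A quick illustration, mirroring the new tests:
```
import pandas as pd

# 2017-10-29 02:00 in Madrid occurs twice (DST fall-back), so
# rounding to the hour hits an ambiguous wall-clock time.
ts = pd.Timestamp('2017-10-29 00:00:00', tz='UTC').tz_convert('Europe/Madrid')

ts.round('H', ambiguous=True)    # keep the DST side: 02:00:00+02:00
ts.round('H', ambiguous=False)   # standard-time side: 02:00:00+01:00
ts.round('H', ambiguous='NaT')   # NaT
# ts.round('H')                  # default 'raise' -> pytz.AmbiguousTimeError
```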
| https://api.github.com/repos/pandas-dev/pandas/pulls/22647 | 2018-09-09T06:48:32Z | 2018-09-23T13:25:41Z | 2018-09-23T13:25:41Z | 2018-09-23T16:18:22Z |
TST: Avoid DeprecationWarnings | diff --git a/pandas/core/common.py b/pandas/core/common.py
index a3fba762509f1..92e4e23ce958e 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -122,6 +122,24 @@ def is_bool_indexer(key):
return False
+def cast_scalar_indexer(val):
+ """
+ To avoid numpy DeprecationWarnings, cast float to integer where valid.
+
+ Parameters
+ ----------
+ val : scalar
+
+ Returns
+ -------
+ outval : scalar
+ """
+ # assumes lib.is_scalar(val)
+ if lib.is_float(val) and val == int(val):
+ return int(val)
+ return val
+
+
def _not_none(*args):
"""Returns a generator consisting of the arguments that are not None"""
return (arg for arg in args if arg is not None)
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 710c9d0e296c9..b2b6e02e908c5 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -2047,6 +2047,7 @@ def __getitem__(self, key):
promote = self._shallow_copy
if is_scalar(key):
+ key = com.cast_scalar_indexer(key)
return getitem(key)
if isinstance(key, slice):
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 955f1461075f9..90743033e492c 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -1551,6 +1551,8 @@ def __setstate__(self, state):
def __getitem__(self, key):
if is_scalar(key):
+ key = com.cast_scalar_indexer(key)
+
retval = []
for lev, lab in zip(self.levels, self.labels):
if lab[key] == -1:
diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index 6501717f715cb..b175dd540a518 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -1033,10 +1033,14 @@ def css_bar(start, end, color):
def css(x):
if pd.isna(x):
return ''
+
+ # avoid deprecated indexing `colors[x > zero]`
+ color = colors[1] if x > zero else colors[0]
+
if align == 'left':
- return css_bar(0, x, colors[x > zero])
+ return css_bar(0, x, color)
else:
- return css_bar(min(x, zero), max(x, zero), colors[x > zero])
+ return css_bar(min(x, zero), max(x, zero), color)
if s.ndim == 1:
return [css(x) for x in normed]
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index 01fafd7219382..f785ec35f52db 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -2431,7 +2431,7 @@ def assert_raises_regex(_exception, _regexp, _callable=None,
You can also use this in a with statement.
- >>> with assert_raises_regex(TypeError, 'unsupported operand type\(s\)'):
+ >>> with assert_raises_regex(TypeError, r'unsupported operand type\(s\)'):
... 1 + {}
>>> with assert_raises_regex(TypeError, 'banana'):
... 'apple'[0] = 'b'
| Avoid a boatload of these:
```
pandas/tests/plotting/test_frame.py::TestDataFramePlots::()::test_plot
/home/travis/build/pandas-dev/pandas/pandas/core/indexes/multi.py:1556: DeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
if lab[key] == -1:
/home/travis/build/pandas-dev/pandas/pandas/io/formats/style.py:1039: DeprecationWarning: In future, it will be an error for 'np.bool_' scalars to be interpreted as an index
return css_bar(min(x, zero), max(x, zero), colors[x > zero])
pandas/tests/series/test_analytics.py::TestCategoricalSeriesAnalytics::()::test_drop_duplicates_categorical_non_bool[False-datetime64[D]]
source:2442: DeprecationWarning: invalid escape sequence \(
```
That last one was a nice little adventure to track down to `pd.util.testing`.
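For reference, the indexing warnings in the block above go away because integral float keys are now cast to ints before they reach numpy; a minimal standalone illustration of the same idea (not the exact helper):
```
import numpy as np

arr = np.arange(5)
key = 2.0
# cast_scalar_indexer-style check: integral floats become ints
if isinstance(key, float) and key == int(key):
    key = int(key)
arr[key]  # indexes cleanly, no "non-integer number" DeprecationWarning
```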
Does not silence these, as I'm not confident that is The Right Thing To Do:
```
pandas/tests/tslibs/test_parsing.py::TestGuessDatetimeFormat::()::test_guess_datetime_format_nopadding[2011-1-1 0:0:0-%Y-%m-%d %H:%M:%S]
/home/travis/miniconda3/envs/pandas/lib/python3.6/site-packages/dateutil/parser/__init__.py:46: DeprecationWarning: _timelex is a private class and may break without warning, it will be moved and or renamed in future versions.
warnings.warn(msg, DeprecationWarning)
```
Does not silence these, as I'm not sure how:
```
pandas/tests/io/test_sql.py::TestSQLiteAlchemyConn::()::test_read_table
/home/travis/miniconda3/envs/pandas/lib/python3.6/site-packages/_pytest/fixtures.py:795: RemovedInPytest4Warning: Fixture setup_method called directly. Fixtures are not meant to be called directly, are created automatically when test functions request them as parameters. See https://docs.pytest.org/en/latest/fixture.html for more information.
res = next(it)
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/22646 | 2018-09-09T03:52:05Z | 2018-09-12T11:26:13Z | 2018-09-12T11:26:13Z | 2018-09-12T13:42:04Z |
TST: Collect/Use arithmetic test fixtures | diff --git a/pandas/tests/arithmetic/conftest.py b/pandas/tests/arithmetic/conftest.py
index 844472b8bcf0d..b800b66e8edea 100644
--- a/pandas/tests/arithmetic/conftest.py
+++ b/pandas/tests/arithmetic/conftest.py
@@ -28,14 +28,32 @@ def zero(request):
return request.param
+# ------------------------------------------------------------------
+# Vector Fixtures
+
@pytest.fixture(params=[pd.Float64Index(np.arange(5, dtype='float64')),
pd.Int64Index(np.arange(5, dtype='int64')),
- pd.UInt64Index(np.arange(5, dtype='uint64'))],
+ pd.UInt64Index(np.arange(5, dtype='uint64')),
+ pd.RangeIndex(5)],
ids=lambda x: type(x).__name__)
-def idx(request):
+def numeric_idx(request):
+ """
+ Several types of numeric-dtypes Index objects
+ """
return request.param
+@pytest.fixture
+def tdser():
+ """
+ Return a Series with dtype='timedelta64[ns]', including a NaT.
+ """
+ return pd.Series(['59 Days', '59 Days', 'NaT'], dtype='timedelta64[ns]')
+
+
+# ------------------------------------------------------------------
+# Scalar Fixtures
+
@pytest.fixture(params=[pd.Timedelta('5m4s').to_pytimedelta(),
pd.Timedelta('5m4s'),
pd.Timedelta('5m4s').to_timedelta64()],
@@ -47,6 +65,72 @@ def scalar_td(request):
return request.param
+@pytest.fixture(params=[pd.offsets.Day(3),
+ pd.offsets.Hour(72),
+ pd.Timedelta(days=3).to_pytimedelta(),
+ pd.Timedelta('72:00:00'),
+ np.timedelta64(3, 'D'),
+ np.timedelta64(72, 'h')])
+def three_days(request):
+ """
+ Several timedelta-like and DateOffset objects that each represent
+ a 3-day timedelta
+ """
+ return request.param
+
+
+@pytest.fixture(params=[pd.offsets.Hour(2),
+ pd.offsets.Minute(120),
+ pd.Timedelta(hours=2).to_pytimedelta(),
+ pd.Timedelta(seconds=2 * 3600),
+ np.timedelta64(2, 'h'),
+ np.timedelta64(120, 'm')])
+def two_hours(request):
+ """
+ Several timedelta-like and DateOffset objects that each represent
+ a 2-hour timedelta
+ """
+ return request.param
+
+
+_common_mismatch = [pd.offsets.YearBegin(2),
+ pd.offsets.MonthBegin(1),
+ pd.offsets.Minute()]
+
+
+@pytest.fixture(params=[pd.Timedelta(minutes=30).to_pytimedelta(),
+ np.timedelta64(30, 's'),
+ pd.Timedelta(seconds=30)] + _common_mismatch)
+def not_hourly(request):
+ """
+ Several timedelta-like and DateOffset instances that are _not_
+ compatible with Hourly frequencies.
+ """
+ return request.param
+
+
+@pytest.fixture(params=[np.timedelta64(4, 'h'),
+ pd.Timedelta(hours=23).to_pytimedelta(),
+ pd.Timedelta('23:00:00')] + _common_mismatch)
+def not_daily(request):
+ """
+ Several timedelta-like and DateOffset instances that are _not_
+ compatible with Daily frequencies.
+ """
+ return request.param
+
+
+@pytest.fixture(params=[np.timedelta64(365, 'D'),
+ pd.Timedelta(days=365).to_pytimedelta(),
+ pd.Timedelta(days=365)] + _common_mismatch)
+def mismatched_freq(request):
+ """
+ Several timedelta-like and DateOffset instances that are _not_
+ compatible with Monthly or Annual frequencies.
+ """
+ return request.param
+
+
# ------------------------------------------------------------------
@pytest.fixture(params=[pd.Index, pd.Series, pd.DataFrame],
@@ -59,6 +143,18 @@ def box(request):
return request.param
+@pytest.fixture(params=[pd.Index,
+ pd.Series,
+ pytest.param(pd.DataFrame,
+ marks=pytest.mark.xfail(strict=True))],
+ ids=lambda x: x.__name__)
+def box_df_fail(request):
+ """
+ Fixture equivalent to `box` fixture but xfailing the DataFrame case.
+ """
+ return request.param
+
+
@pytest.fixture(params=[
pd.Index,
pd.Series,
diff --git a/pandas/tests/arithmetic/test_datetime64.py b/pandas/tests/arithmetic/test_datetime64.py
index d597ea834f097..a3fa4e6b88256 100644
--- a/pandas/tests/arithmetic/test_datetime64.py
+++ b/pandas/tests/arithmetic/test_datetime64.py
@@ -27,29 +27,6 @@
DatetimeIndex, TimedeltaIndex)
-# ------------------------------------------------------------------
-# Fixtures
-
-@pytest.fixture(params=[pd.offsets.Hour(2), timedelta(hours=2),
- np.timedelta64(2, 'h'), Timedelta(hours=2)],
- ids=str)
-def delta(request):
- # Several ways of representing two hours
- return request.param
-
-
-@pytest.fixture(
- params=[
- datetime(2011, 1, 1),
- DatetimeIndex(['2011-01-01', '2011-01-02']),
- DatetimeIndex(['2011-01-01', '2011-01-02']).tz_localize('US/Eastern'),
- np.datetime64('2011-01-01'),
- Timestamp('2011-01-01')],
- ids=lambda x: type(x).__name__)
-def addend(request):
- return request.param
-
-
# ------------------------------------------------------------------
# Comparisons
@@ -697,23 +674,20 @@ def test_dt64ser_sub_datetime_dtype(self):
# TODO: This next block of tests came from tests.series.test_operators,
# needs to be de-duplicated and parametrized over `box` classes
- @pytest.mark.parametrize(
- 'box, assert_func',
- [(Series, tm.assert_series_equal),
- (pd.Index, tm.assert_index_equal)])
- def test_sub_datetime64_not_ns(self, box, assert_func):
+ @pytest.mark.parametrize('klass', [Series, pd.Index])
+ def test_sub_datetime64_not_ns(self, klass):
# GH#7996
dt64 = np.datetime64('2013-01-01')
assert dt64.dtype == 'datetime64[D]'
- obj = box(date_range('20130101', periods=3))
+ obj = klass(date_range('20130101', periods=3))
res = obj - dt64
- expected = box([Timedelta(days=0), Timedelta(days=1),
- Timedelta(days=2)])
- assert_func(res, expected)
+ expected = klass([Timedelta(days=0), Timedelta(days=1),
+ Timedelta(days=2)])
+ tm.assert_equal(res, expected)
res = dt64 - obj
- assert_func(res, -expected)
+ tm.assert_equal(res, -expected)
def test_sub_single_tz(self):
# GH12290
@@ -1113,40 +1087,40 @@ def test_dti_add_intarray_no_freq(self, box):
# -------------------------------------------------------------
# Binary operations DatetimeIndex and timedelta-like
- def test_dti_add_timedeltalike(self, tz_naive_fixture, delta, box):
+ def test_dti_add_timedeltalike(self, tz_naive_fixture, two_hours, box):
# GH#22005, GH#22163 check DataFrame doesn't raise TypeError
tz = tz_naive_fixture
rng = pd.date_range('2000-01-01', '2000-02-01', tz=tz)
rng = tm.box_expected(rng, box)
- result = rng + delta
+ result = rng + two_hours
expected = pd.date_range('2000-01-01 02:00',
'2000-02-01 02:00', tz=tz)
expected = tm.box_expected(expected, box)
tm.assert_equal(result, expected)
- def test_dti_iadd_timedeltalike(self, tz_naive_fixture, delta):
+ def test_dti_iadd_timedeltalike(self, tz_naive_fixture, two_hours):
tz = tz_naive_fixture
rng = pd.date_range('2000-01-01', '2000-02-01', tz=tz)
expected = pd.date_range('2000-01-01 02:00',
'2000-02-01 02:00', tz=tz)
- rng += delta
+ rng += two_hours
tm.assert_index_equal(rng, expected)
- def test_dti_sub_timedeltalike(self, tz_naive_fixture, delta):
+ def test_dti_sub_timedeltalike(self, tz_naive_fixture, two_hours):
tz = tz_naive_fixture
rng = pd.date_range('2000-01-01', '2000-02-01', tz=tz)
expected = pd.date_range('1999-12-31 22:00',
'2000-01-31 22:00', tz=tz)
- result = rng - delta
+ result = rng - two_hours
tm.assert_index_equal(result, expected)
- def test_dti_isub_timedeltalike(self, tz_naive_fixture, delta):
+ def test_dti_isub_timedeltalike(self, tz_naive_fixture, two_hours):
tz = tz_naive_fixture
rng = pd.date_range('2000-01-01', '2000-02-01', tz=tz)
expected = pd.date_range('1999-12-31 22:00',
'2000-01-31 22:00', tz=tz)
- rng -= delta
+ rng -= two_hours
tm.assert_index_equal(rng, expected)
# -------------------------------------------------------------
@@ -1252,27 +1226,23 @@ def test_dti_isub_tdi(self, tz_naive_fixture):
# TODO: A couple other tests belong in this section. Move them in
# A PR where there isn't already a giant diff.
- def test_add_datetimelike_and_dti(self, addend):
+ @pytest.mark.parametrize('addend', [
+ datetime(2011, 1, 1),
+ DatetimeIndex(['2011-01-01', '2011-01-02']),
+ DatetimeIndex(['2011-01-01', '2011-01-02']).tz_localize('US/Eastern'),
+ np.datetime64('2011-01-01'),
+ Timestamp('2011-01-01')
+ ], ids=lambda x: type(x).__name__)
+ @pytest.mark.parametrize('tz', [None, 'US/Eastern'])
+ def test_add_datetimelike_and_dti(self, addend, tz):
# GH#9631
- dti = DatetimeIndex(['2011-01-01', '2011-01-02'])
- msg = 'cannot add DatetimeIndex and {0}'.format(
- type(addend).__name__)
+ dti = DatetimeIndex(['2011-01-01', '2011-01-02']).tz_localize(tz)
+ msg = 'cannot add DatetimeIndex and {0}'.format(type(addend).__name__)
with tm.assert_raises_regex(TypeError, msg):
dti + addend
with tm.assert_raises_regex(TypeError, msg):
addend + dti
- def test_add_datetimelike_and_dti_tz(self, addend):
- # GH#9631
- dti_tz = DatetimeIndex(['2011-01-01',
- '2011-01-02']).tz_localize('US/Eastern')
- msg = 'cannot add DatetimeIndex and {0}'.format(
- type(addend).__name__)
- with tm.assert_raises_regex(TypeError, msg):
- dti_tz + addend
- with tm.assert_raises_regex(TypeError, msg):
- addend + dti_tz
-
# -------------------------------------------------------------
# __add__/__sub__ with ndarray[datetime64] and ndarray[timedelta64]
@@ -1391,21 +1361,14 @@ def test_sub_period(self, freq, box):
with pytest.raises(TypeError):
p - idx
- @pytest.mark.parametrize('box', [
- pd.Index,
- pd.Series,
- pytest.param(pd.DataFrame,
- marks=pytest.mark.xfail(reason="Tries to broadcast "
- "incorrectly",
- strict=True,
- raises=ValueError))
- ], ids=lambda x: x.__name__)
@pytest.mark.parametrize('op', [operator.add, ops.radd,
operator.sub, ops.rsub])
@pytest.mark.parametrize('pi_freq', ['D', 'W', 'Q', 'H'])
@pytest.mark.parametrize('dti_freq', [None, 'D'])
- def test_dti_sub_pi(self, dti_freq, pi_freq, op, box):
+ def test_dti_sub_pi(self, dti_freq, pi_freq, op, box_df_broadcast_failure):
# GH#20049 subtracting PeriodIndex should raise TypeError
+ box = box_df_broadcast_failure
+
dti = pd.DatetimeIndex(['2011-01-01', '2011-01-02'], freq=dti_freq)
pi = dti.to_period(pi_freq)
@@ -1748,31 +1711,30 @@ def test_dti_add_offset_tzaware(self, tz_aware_fixture, box):
tm.assert_equal(offset, expected)
-@pytest.mark.parametrize('klass,assert_func', [
- (Series, tm.assert_series_equal),
- (DatetimeIndex, tm.assert_index_equal)])
-def test_dt64_with_offset_array(klass, assert_func):
+@pytest.mark.parametrize('klass', [Series, DatetimeIndex])
+def test_dt64_with_offset_array(klass):
# GH#10699
# array of offsets
box = Series if klass is Series else pd.Index
+ dti = DatetimeIndex([Timestamp('2000-1-1'), Timestamp('2000-2-1')])
+
+ s = klass(dti)
+
with tm.assert_produces_warning(PerformanceWarning):
- s = klass([Timestamp('2000-1-1'), Timestamp('2000-2-1')])
result = s + box([pd.offsets.DateOffset(years=1),
pd.offsets.MonthEnd()])
exp = klass([Timestamp('2001-1-1'), Timestamp('2000-2-29')])
- assert_func(result, exp)
+ tm.assert_equal(result, exp)
# same offset
result = s + box([pd.offsets.DateOffset(years=1),
pd.offsets.DateOffset(years=1)])
exp = klass([Timestamp('2001-1-1'), Timestamp('2001-2-1')])
- assert_func(result, exp)
+ tm.assert_equal(result, exp)
-@pytest.mark.parametrize('klass,assert_func', [
- (Series, tm.assert_series_equal),
- (DatetimeIndex, tm.assert_index_equal)])
-def test_dt64_with_DateOffsets_relativedelta(klass, assert_func):
+@pytest.mark.parametrize('klass', [Series, DatetimeIndex])
+def test_dt64_with_DateOffsets_relativedelta(klass):
# GH#10699
vec = klass([Timestamp('2000-01-05 00:15:00'),
Timestamp('2000-01-31 00:23:00'),
@@ -1789,11 +1751,11 @@ def test_dt64_with_DateOffsets_relativedelta(klass, assert_func):
('microseconds', 5)]
for i, kwd in enumerate(relative_kwargs):
op = pd.DateOffset(**dict([kwd]))
- assert_func(klass([x + op for x in vec]), vec + op)
- assert_func(klass([x - op for x in vec]), vec - op)
+ tm.assert_equal(klass([x + op for x in vec]), vec + op)
+ tm.assert_equal(klass([x - op for x in vec]), vec - op)
op = pd.DateOffset(**dict(relative_kwargs[:i + 1]))
- assert_func(klass([x + op for x in vec]), vec + op)
- assert_func(klass([x - op for x in vec]), vec - op)
+ tm.assert_equal(klass([x + op for x in vec]), vec + op)
+ tm.assert_equal(klass([x - op for x in vec]), vec - op)
@pytest.mark.parametrize('cls_and_kwargs', [
@@ -1816,10 +1778,8 @@ def test_dt64_with_DateOffsets_relativedelta(klass, assert_func):
'Easter', ('DateOffset', {'day': 4}),
('DateOffset', {'month': 5})])
@pytest.mark.parametrize('normalize', [True, False])
-@pytest.mark.parametrize('klass,assert_func', [
- (Series, tm.assert_series_equal),
- (DatetimeIndex, tm.assert_index_equal)])
-def test_dt64_with_DateOffsets(klass, assert_func, normalize, cls_and_kwargs):
+@pytest.mark.parametrize('klass', [Series, DatetimeIndex])
+def test_dt64_with_DateOffsets(klass, normalize, cls_and_kwargs):
# GH#10699
# assert these are equal on a piecewise basis
vec = klass([Timestamp('2000-01-05 00:15:00'),
@@ -1849,26 +1809,24 @@ def test_dt64_with_DateOffsets(klass, assert_func, normalize, cls_and_kwargs):
continue
offset = offset_cls(n, normalize=normalize, **kwargs)
- assert_func(klass([x + offset for x in vec]), vec + offset)
- assert_func(klass([x - offset for x in vec]), vec - offset)
- assert_func(klass([offset + x for x in vec]), offset + vec)
+ tm.assert_equal(klass([x + offset for x in vec]), vec + offset)
+ tm.assert_equal(klass([x - offset for x in vec]), vec - offset)
+ tm.assert_equal(klass([offset + x for x in vec]), offset + vec)
-@pytest.mark.parametrize('klass,assert_func', zip([Series, DatetimeIndex],
- [tm.assert_series_equal,
- tm.assert_index_equal]))
-def test_datetime64_with_DateOffset(klass, assert_func):
+@pytest.mark.parametrize('klass', [Series, DatetimeIndex])
+def test_datetime64_with_DateOffset(klass):
# GH#10699
s = klass(date_range('2000-01-01', '2000-01-31'), name='a')
result = s + pd.DateOffset(years=1)
result2 = pd.DateOffset(years=1) + s
exp = klass(date_range('2001-01-01', '2001-01-31'), name='a')
- assert_func(result, exp)
- assert_func(result2, exp)
+ tm.assert_equal(result, exp)
+ tm.assert_equal(result2, exp)
result = s - pd.DateOffset(years=1)
exp = klass(date_range('1999-01-01', '1999-01-31'), name='a')
- assert_func(result, exp)
+ tm.assert_equal(result, exp)
s = klass([Timestamp('2000-01-15 00:15:00', tz='US/Central'),
pd.Timestamp('2000-02-15', tz='US/Central')], name='a')
@@ -1876,8 +1834,8 @@ def test_datetime64_with_DateOffset(klass, assert_func):
result2 = pd.offsets.Day() + s
exp = klass([Timestamp('2000-01-16 00:15:00', tz='US/Central'),
Timestamp('2000-02-16', tz='US/Central')], name='a')
- assert_func(result, exp)
- assert_func(result2, exp)
+ tm.assert_equal(result, exp)
+ tm.assert_equal(result2, exp)
s = klass([Timestamp('2000-01-15 00:15:00', tz='US/Central'),
pd.Timestamp('2000-02-15', tz='US/Central')], name='a')
@@ -1885,8 +1843,8 @@ def test_datetime64_with_DateOffset(klass, assert_func):
result2 = pd.offsets.MonthEnd() + s
exp = klass([Timestamp('2000-01-31 00:15:00', tz='US/Central'),
Timestamp('2000-02-29', tz='US/Central')], name='a')
- assert_func(result, exp)
- assert_func(result2, exp)
+ tm.assert_equal(result, exp)
+ tm.assert_equal(result2, exp)
@pytest.mark.parametrize('years', [-1, 0, 1])
diff --git a/pandas/tests/arithmetic/test_numeric.py b/pandas/tests/arithmetic/test_numeric.py
index d3957330f11e4..fcfc3994a88c8 100644
--- a/pandas/tests/arithmetic/test_numeric.py
+++ b/pandas/tests/arithmetic/test_numeric.py
@@ -17,15 +17,6 @@
from pandas import Timedelta, Series, Index, TimedeltaIndex
-@pytest.fixture(params=[pd.Float64Index(np.arange(5, dtype='float64')),
- pd.UInt64Index(np.arange(5, dtype='uint64')),
- pd.Int64Index(np.arange(5, dtype='int64')),
- pd.RangeIndex(5)],
- ids=lambda x: type(x).__name__)
-def idx(request):
- return request.param
-
-
# ------------------------------------------------------------------
# Comparisons
@@ -135,20 +126,18 @@ def test_ops_series(self):
tm.assert_series_equal(expected, td * other)
tm.assert_series_equal(expected, other * td)
- @pytest.mark.parametrize('index', [
- pd.Int64Index(range(1, 11)),
- pd.UInt64Index(range(1, 11)),
- pd.Float64Index(range(1, 11)),
- pd.RangeIndex(1, 11)],
- ids=lambda x: type(x).__name__)
+ # TODO: also test non-nanosecond timedelta64 and Tick objects;
+ # see test_numeric_arr_rdiv_tdscalar for note on these failing
@pytest.mark.parametrize('scalar_td', [
Timedelta(days=1),
Timedelta(days=1).to_timedelta64(),
Timedelta(days=1).to_pytimedelta()],
ids=lambda x: type(x).__name__)
- def test_numeric_arr_mul_tdscalar(self, scalar_td, index, box):
+ def test_numeric_arr_mul_tdscalar(self, scalar_td, numeric_idx, box):
# GH#19333
- expected = pd.timedelta_range('1 days', '10 days')
+ index = numeric_idx
+
+ expected = pd.timedelta_range('0 days', '4 days')
index = tm.box_expected(index, box)
expected = tm.box_expected(expected, box)
@@ -159,28 +148,27 @@ def test_numeric_arr_mul_tdscalar(self, scalar_td, index, box):
commute = scalar_td * index
tm.assert_equal(commute, expected)
- @pytest.mark.parametrize('index', [
- pd.Int64Index(range(1, 3)),
- pd.UInt64Index(range(1, 3)),
- pd.Float64Index(range(1, 3)),
- pd.RangeIndex(1, 3)],
- ids=lambda x: type(x).__name__)
- @pytest.mark.parametrize('scalar_td', [
- Timedelta(days=1),
- Timedelta(days=1).to_timedelta64(),
- Timedelta(days=1).to_pytimedelta()],
- ids=lambda x: type(x).__name__)
- def test_numeric_arr_rdiv_tdscalar(self, scalar_td, index, box):
- expected = TimedeltaIndex(['1 Day', '12 Hours'])
+ def test_numeric_arr_rdiv_tdscalar(self, three_days, numeric_idx, box):
+ index = numeric_idx[1:3]
+
+ broken = (isinstance(three_days, np.timedelta64) and
+ three_days.dtype != 'm8[ns]')
+ broken = broken or isinstance(three_days, pd.offsets.Tick)
+ if box is not pd.Index and broken:
+ # np.timedelta64(3, 'D') / 2 == np.timedelta64(1, 'D')
+ raise pytest.xfail("timedelta64 not converted to nanos; "
+                           "Tick division not implemented")
+
+ expected = TimedeltaIndex(['3 Days', '36 Hours'])
index = tm.box_expected(index, box)
expected = tm.box_expected(expected, box)
- result = scalar_td / index
+ result = three_days / index
tm.assert_equal(result, expected)
with pytest.raises(TypeError):
- index / scalar_td
+ index / three_days
# ------------------------------------------------------------------
@@ -188,7 +176,9 @@ def test_numeric_arr_rdiv_tdscalar(self, scalar_td, index, box):
class TestDivisionByZero(object):
- def test_div_zero(self, zero, idx):
+ def test_div_zero(self, zero, numeric_idx):
+ idx = numeric_idx
+
expected = pd.Index([np.nan, np.inf, np.inf, np.inf, np.inf],
dtype=np.float64)
result = idx / zero
@@ -196,7 +186,9 @@ def test_div_zero(self, zero, idx):
ser_compat = Series(idx).astype('i8') / np.array(zero).astype('i8')
tm.assert_series_equal(ser_compat, Series(result))
- def test_floordiv_zero(self, zero, idx):
+ def test_floordiv_zero(self, zero, numeric_idx):
+ idx = numeric_idx
+
expected = pd.Index([np.nan, np.inf, np.inf, np.inf, np.inf],
dtype=np.float64)
@@ -205,7 +197,9 @@ def test_floordiv_zero(self, zero, idx):
ser_compat = Series(idx).astype('i8') // np.array(zero).astype('i8')
tm.assert_series_equal(ser_compat, Series(result))
- def test_mod_zero(self, zero, idx):
+ def test_mod_zero(self, zero, numeric_idx):
+ idx = numeric_idx
+
expected = pd.Index([np.nan, np.nan, np.nan, np.nan, np.nan],
dtype=np.float64)
result = idx % zero
@@ -213,7 +207,8 @@ def test_mod_zero(self, zero, idx):
ser_compat = Series(idx).astype('i8') % np.array(zero).astype('i8')
tm.assert_series_equal(ser_compat, Series(result))
- def test_divmod_zero(self, zero, idx):
+ def test_divmod_zero(self, zero, numeric_idx):
+ idx = numeric_idx
exleft = pd.Index([np.nan, np.inf, np.inf, np.inf, np.inf],
dtype=np.float64)
@@ -430,8 +425,9 @@ def test_div_equiv_binop(self):
result = second / first
tm.assert_series_equal(result, expected)
- def test_div_int(self, idx):
+ def test_div_int(self, numeric_idx):
# truediv under PY3
+ idx = numeric_idx
result = idx / 1
expected = idx
if PY3:
@@ -445,13 +441,15 @@ def test_div_int(self, idx):
tm.assert_index_equal(result, expected)
@pytest.mark.parametrize('op', [operator.mul, ops.rmul, operator.floordiv])
- def test_mul_int_identity(self, op, idx, box):
+ def test_mul_int_identity(self, op, numeric_idx, box):
+ idx = numeric_idx
idx = tm.box_expected(idx, box)
result = op(idx, 1)
tm.assert_equal(result, idx)
- def test_mul_int_array(self, idx):
+ def test_mul_int_array(self, numeric_idx):
+ idx = numeric_idx
didx = idx * idx
result = idx * np.array(5, dtype='int64')
@@ -461,39 +459,45 @@ def test_mul_int_array(self, idx):
result = idx * np.arange(5, dtype=arr_dtype)
tm.assert_index_equal(result, didx)
- def test_mul_int_series(self, idx):
+ def test_mul_int_series(self, numeric_idx):
+ idx = numeric_idx
didx = idx * idx
arr_dtype = 'uint64' if isinstance(idx, pd.UInt64Index) else 'int64'
result = idx * Series(np.arange(5, dtype=arr_dtype))
tm.assert_series_equal(result, Series(didx))
- def test_mul_float_series(self, idx):
+ def test_mul_float_series(self, numeric_idx):
+ idx = numeric_idx
rng5 = np.arange(5, dtype='float64')
result = idx * Series(rng5 + 0.1)
expected = Series(rng5 * (rng5 + 0.1))
tm.assert_series_equal(result, expected)
- def test_mul_index(self, idx):
+ def test_mul_index(self, numeric_idx):
# in general not true for RangeIndex
+ idx = numeric_idx
if not isinstance(idx, pd.RangeIndex):
result = idx * idx
tm.assert_index_equal(result, idx ** 2)
- def test_mul_datelike_raises(self, idx):
+ def test_mul_datelike_raises(self, numeric_idx):
+ idx = numeric_idx
with pytest.raises(TypeError):
idx * pd.date_range('20130101', periods=5)
- def test_mul_size_mismatch_raises(self, idx):
+ def test_mul_size_mismatch_raises(self, numeric_idx):
+ idx = numeric_idx
with pytest.raises(ValueError):
idx * idx[0:3]
with pytest.raises(ValueError):
idx * np.array([1, 2])
@pytest.mark.parametrize('op', [operator.pow, ops.rpow])
- def test_pow_float(self, op, idx, box):
+ def test_pow_float(self, op, numeric_idx, box):
# test power calculations both ways, GH#14973
+ idx = numeric_idx
expected = pd.Float64Index(op(idx.values, 2.0))
idx = tm.box_expected(idx, box)
@@ -502,8 +506,9 @@ def test_pow_float(self, op, idx, box):
result = op(idx, 2.0)
tm.assert_equal(result, expected)
- def test_modulo(self, idx, box):
+ def test_modulo(self, numeric_idx, box):
# GH#9244
+ idx = numeric_idx
expected = Index(idx.values % 2)
idx = tm.box_expected(idx, box)
@@ -512,7 +517,8 @@ def test_modulo(self, idx, box):
result = idx % 2
tm.assert_equal(result, expected)
- def test_divmod(self, idx):
+ def test_divmod(self, numeric_idx):
+ idx = numeric_idx
result = divmod(idx, 2)
with np.errstate(all='ignore'):
div, mod = divmod(idx.values, 2)
@@ -530,7 +536,8 @@ def test_divmod(self, idx):
@pytest.mark.xfail(reason='GH#19252 Series has no __rdivmod__',
strict=True)
- def test_divmod_series(self, idx):
+ def test_divmod_series(self, numeric_idx):
+ idx = numeric_idx
other = np.ones(idx.values.shape, dtype=idx.values.dtype) * 2
result = divmod(idx, Series(other))
with np.errstate(all='ignore'):
diff --git a/pandas/tests/arithmetic/test_period.py b/pandas/tests/arithmetic/test_period.py
index 92123bf48bb47..3210290b9c5c8 100644
--- a/pandas/tests/arithmetic/test_period.py
+++ b/pandas/tests/arithmetic/test_period.py
@@ -3,7 +3,6 @@
# behave identically.
# Specifically for Period dtype
import operator
-from datetime import timedelta
import numpy as np
import pytest
@@ -17,80 +16,10 @@
import pandas.core.indexes.period as period
from pandas.core import ops
from pandas import (
- Period, PeriodIndex, period_range, Timedelta, Series,
+ Period, PeriodIndex, period_range, Series,
_np_version_under1p10)
-# ------------------------------------------------------------------
-# Fixtures
-
-_common_mismatch = [pd.offsets.YearBegin(2),
- pd.offsets.MonthBegin(1),
- pd.offsets.Minute()]
-
-
-@pytest.fixture(params=[timedelta(minutes=30),
- np.timedelta64(30, 's'),
- Timedelta(seconds=30)] + _common_mismatch)
-def not_hourly(request):
- """
- Several timedelta-like and DateOffset instances that are _not_
- compatible with Hourly frequencies.
- """
- return request.param
-
-
-@pytest.fixture(params=[np.timedelta64(4, 'h'),
- timedelta(hours=23),
- Timedelta('23:00:00')] + _common_mismatch)
-def not_daily(request):
- """
- Several timedelta-like and DateOffset instances that are _not_
- compatible with Daily frequencies.
- """
- return request.param
-
-
-@pytest.fixture(params=[np.timedelta64(365, 'D'),
- timedelta(365),
- Timedelta(days=365)] + _common_mismatch)
-def mismatched(request):
- """
- Several timedelta-like and DateOffset instances that are _not_
- compatible with Monthly or Annual frequencies.
- """
- return request.param
-
-
-@pytest.fixture(params=[pd.offsets.Day(3),
- timedelta(days=3),
- np.timedelta64(3, 'D'),
- pd.offsets.Hour(72),
- timedelta(minutes=60 * 24 * 3),
- np.timedelta64(72, 'h'),
- Timedelta('72:00:00')])
-def three_days(request):
- """
- Several timedelta-like and DateOffset objects that each represent
- a 3-day timedelta
- """
- return request.param
-
-
-@pytest.fixture(params=[pd.offsets.Hour(2),
- timedelta(hours=2),
- np.timedelta64(2, 'h'),
- pd.offsets.Minute(120),
- timedelta(minutes=120),
- np.timedelta64(120, 'm')])
-def two_hours(request):
- """
- Several timedelta-like and DateOffset objects that each represent
- a 2-hour timedelta
- """
- return request.param
-
-
# ------------------------------------------------------------------
# Comparisons
@@ -752,8 +681,9 @@ def test_add_iadd_timedeltalike_annual(self):
rng += pd.offsets.YearEnd(5)
tm.assert_index_equal(rng, expected)
- def test_pi_add_iadd_timedeltalike_freq_mismatch_annual(self, mismatched):
- other = mismatched
+ def test_pi_add_iadd_timedeltalike_freq_mismatch_annual(self,
+ mismatched_freq):
+ other = mismatched_freq
rng = pd.period_range('2014', '2024', freq='A')
msg = ('Input has different freq(=.+)? '
'from PeriodIndex\\(freq=A-DEC\\)')
@@ -762,8 +692,9 @@ def test_pi_add_iadd_timedeltalike_freq_mismatch_annual(self, mismatched):
with tm.assert_raises_regex(period.IncompatibleFrequency, msg):
rng += other
- def test_pi_sub_isub_timedeltalike_freq_mismatch_annual(self, mismatched):
- other = mismatched
+ def test_pi_sub_isub_timedeltalike_freq_mismatch_annual(self,
+ mismatched_freq):
+ other = mismatched_freq
rng = pd.period_range('2014', '2024', freq='A')
msg = ('Input has different freq(=.+)? '
'from PeriodIndex\\(freq=A-DEC\\)')
@@ -782,8 +713,9 @@ def test_pi_add_iadd_timedeltalike_M(self):
rng += pd.offsets.MonthEnd(5)
tm.assert_index_equal(rng, expected)
- def test_pi_add_iadd_timedeltalike_freq_mismatch_monthly(self, mismatched):
- other = mismatched
+ def test_pi_add_iadd_timedeltalike_freq_mismatch_monthly(self,
+ mismatched_freq):
+ other = mismatched_freq
rng = pd.period_range('2014-01', '2016-12', freq='M')
msg = 'Input has different freq(=.+)? from PeriodIndex\\(freq=M\\)'
with tm.assert_raises_regex(period.IncompatibleFrequency, msg):
@@ -791,8 +723,9 @@ def test_pi_add_iadd_timedeltalike_freq_mismatch_monthly(self, mismatched):
with tm.assert_raises_regex(period.IncompatibleFrequency, msg):
rng += other
- def test_pi_sub_isub_timedeltalike_freq_mismatch_monthly(self, mismatched):
- other = mismatched
+ def test_pi_sub_isub_timedeltalike_freq_mismatch_monthly(self,
+ mismatched_freq):
+ other = mismatched_freq
rng = pd.period_range('2014-01', '2016-12', freq='M')
msg = 'Input has different freq(=.+)? from PeriodIndex\\(freq=M\\)'
with tm.assert_raises_regex(period.IncompatibleFrequency, msg):
diff --git a/pandas/tests/arithmetic/test_timedelta64.py b/pandas/tests/arithmetic/test_timedelta64.py
index def7a8be95fc8..5050922173564 100644
--- a/pandas/tests/arithmetic/test_timedelta64.py
+++ b/pandas/tests/arithmetic/test_timedelta64.py
@@ -18,60 +18,6 @@
DataFrame)
-# ------------------------------------------------------------------
-# Fixtures
-
-@pytest.fixture
-def tdser():
- """
- Return a Series with dtype='timedelta64[ns]', including a NaT.
- """
- return Series(['59 Days', '59 Days', 'NaT'], dtype='timedelta64[ns]')
-
-
-@pytest.fixture(params=[pd.offsets.Hour(2), timedelta(hours=2),
- np.timedelta64(2, 'h'), Timedelta(hours=2)],
- ids=lambda x: type(x).__name__)
-def delta(request):
- """
- Several ways of representing two hours
- """
- return request.param
-
-
-@pytest.fixture(params=[timedelta(minutes=5, seconds=4),
- Timedelta('5m4s'),
- Timedelta('5m4s').to_timedelta64()],
- ids=lambda x: type(x).__name__)
-def scalar_td(request):
- """
- Several variants of Timedelta scalars representing 5 minutes and 4 seconds
- """
- return request.param
-
-
-@pytest.fixture(params=[pd.Index, Series, pd.DataFrame],
- ids=lambda x: x.__name__)
-def box(request):
- """
- Several array-like containers that should have effectively identical
- behavior with respect to arithmetic operations.
- """
- return request.param
-
-
-@pytest.fixture(params=[pd.Index,
- Series,
- pytest.param(pd.DataFrame,
- marks=pytest.mark.xfail(strict=True))],
- ids=lambda x: x.__name__)
-def box_df_fail(request):
- """
- Fixture equivalent to `box` fixture but xfailing the DataFrame case.
- """
- return request.param
-
-
# ------------------------------------------------------------------
# Timedelta64[ns] dtype Comparisons
@@ -522,8 +468,8 @@ def test_td64arr_add_sub_timestamp(self, box):
with pytest.raises(TypeError):
tdser - ts
- def test_tdi_sub_dt64_array(self, box_df_fail):
- box = box_df_fail # DataFrame tries to broadcast incorrectly
+ def test_tdi_sub_dt64_array(self, box_df_broadcast_failure):
+ box = box_df_broadcast_failure
dti = pd.date_range('2016-01-01', periods=3)
tdi = dti - dti.shift(1)
@@ -540,8 +486,8 @@ def test_tdi_sub_dt64_array(self, box_df_fail):
result = dtarr - tdi
tm.assert_equal(result, expected)
- def test_tdi_add_dt64_array(self, box_df_fail):
- box = box_df_fail # DataFrame tries to broadcast incorrectly
+ def test_tdi_add_dt64_array(self, box_df_broadcast_failure):
+ box = box_df_broadcast_failure
dti = pd.date_range('2016-01-01', periods=3)
tdi = dti - dti.shift(1)
@@ -559,43 +505,33 @@ def test_tdi_add_dt64_array(self, box_df_fail):
# ------------------------------------------------------------------
# Operations with int-like others
- @pytest.mark.parametrize('box', [
- pd.Index,
- Series,
- pytest.param(pd.DataFrame,
- marks=pytest.mark.xfail(reason="Attempts to broadcast "
- "incorrectly",
- strict=True, raises=ValueError))
- ], ids=lambda x: x.__name__)
- def test_td64arr_add_int_series_invalid(self, box, tdser):
+ def test_td64arr_add_int_series_invalid(self, box_df_broadcast_failure,
+ tdser):
+ box = box_df_broadcast_failure
tdser = tm.box_expected(tdser, box)
err = TypeError if box is not pd.Index else NullFrequencyError
with pytest.raises(err):
tdser + Series([2, 3, 4])
- def test_td64arr_radd_int_series_invalid(self, box_df_fail, tdser):
- box = box_df_fail # Tries to broadcast incorrectly
+ def test_td64arr_radd_int_series_invalid(self, box_df_broadcast_failure,
+ tdser):
+ box = box_df_broadcast_failure
tdser = tm.box_expected(tdser, box)
err = TypeError if box is not pd.Index else NullFrequencyError
with pytest.raises(err):
Series([2, 3, 4]) + tdser
- @pytest.mark.parametrize('box', [
- pd.Index,
- Series,
- pytest.param(pd.DataFrame,
- marks=pytest.mark.xfail(reason="Attempts to broadcast "
- "incorrectly",
- strict=True, raises=ValueError))
- ], ids=lambda x: x.__name__)
- def test_td64arr_sub_int_series_invalid(self, box, tdser):
+ def test_td64arr_sub_int_series_invalid(self, box_df_broadcast_failure,
+ tdser):
+ box = box_df_broadcast_failure
tdser = tm.box_expected(tdser, box)
err = TypeError if box is not pd.Index else NullFrequencyError
with pytest.raises(err):
tdser - Series([2, 3, 4])
- def test_td64arr_rsub_int_series_invalid(self, box_df_fail, tdser):
- box = box_df_fail # Tries to broadcast incorrectly
+ def test_td64arr_rsub_int_series_invalid(self, box_df_broadcast_failure,
+ tdser):
+ box = box_df_broadcast_failure
tdser = tm.box_expected(tdser, box)
err = TypeError if box is not pd.Index else NullFrequencyError
with pytest.raises(err):
@@ -669,9 +605,9 @@ def test_td64arr_add_sub_numeric_scalar_invalid(self, box, scalar, tdser):
Series([1, 2, 3])
# TODO: Add DataFrame in here?
], ids=lambda x: type(x).__name__)
- def test_td64arr_add_sub_numeric_arr_invalid(self, box_df_fail, vec,
- dtype, tdser):
- box = box_df_fail # tries to broadcast incorrectly
+ def test_td64arr_add_sub_numeric_arr_invalid(
+ self, box_df_broadcast_failure, vec, dtype, tdser):
+ box = box_df_broadcast_failure
tdser = tm.box_expected(tdser, box)
err = TypeError
if box is pd.Index and not dtype.startswith('float'):
@@ -744,8 +680,8 @@ def test_timedelta64_operations_with_timedeltas(self):
# roundtrip
tm.assert_series_equal(result + td2, td1)
- def test_td64arr_add_td64_array(self, box_df_fail):
- box = box_df_fail # DataFrame tries to broadcast incorrectly
+ def test_td64arr_add_td64_array(self, box_df_broadcast_failure):
+ box = box_df_broadcast_failure
dti = pd.date_range('2016-01-01', periods=3)
tdi = dti - dti.shift(1)
@@ -760,8 +696,8 @@ def test_td64arr_add_td64_array(self, box_df_fail):
result = tdarr + tdi
tm.assert_equal(result, expected)
- def test_td64arr_sub_td64_array(self, box_df_fail):
- box = box_df_fail # DataFrame tries to broadcast incorrectly
+ def test_td64arr_sub_td64_array(self, box_df_broadcast_failure):
+ box = box_df_broadcast_failure
dti = pd.date_range('2016-01-01', periods=3)
tdi = dti - dti.shift(1)
@@ -843,7 +779,7 @@ def test_td64arr_sub_NaT(self, box):
res = ser - pd.NaT
tm.assert_equal(res, expected)
- def test_td64arr_add_timedeltalike(self, delta, box):
+ def test_td64arr_add_timedeltalike(self, two_hours, box):
# only test adding/sub offsets as + is now numeric
rng = timedelta_range('1 days', '10 days')
expected = timedelta_range('1 days 02:00:00', '10 days 02:00:00',
@@ -851,10 +787,10 @@ def test_td64arr_add_timedeltalike(self, delta, box):
rng = tm.box_expected(rng, box)
expected = tm.box_expected(expected, box)
- result = rng + delta
+ result = rng + two_hours
tm.assert_equal(result, expected)
- def test_td64arr_sub_timedeltalike(self, delta, box):
+ def test_td64arr_sub_timedeltalike(self, two_hours, box):
# only test adding/sub offsets as - is now numeric
rng = timedelta_range('1 days', '10 days')
expected = timedelta_range('0 days 22:00:00', '9 days 22:00:00')
@@ -862,7 +798,7 @@ def test_td64arr_sub_timedeltalike(self, delta, box):
rng = tm.box_expected(rng, box)
expected = tm.box_expected(expected, box)
- result = rng - delta
+ result = rng - two_hours
tm.assert_equal(result, expected)
# ------------------------------------------------------------------
@@ -934,9 +870,9 @@ def test_td64arr_add_offset_index(self, names, box):
# TODO: combine with test_td64arr_add_offset_index by parametrizing
# over second box?
- def test_td64arr_add_offset_array(self, box_df_fail):
+ def test_td64arr_add_offset_array(self, box_df_broadcast_failure):
# GH#18849
- box = box_df_fail # tries to broadcast incorrectly
+ box = box_df_broadcast_failure
tdi = TimedeltaIndex(['1 days 00:00:00', '3 days 04:00:00'])
other = np.array([pd.offsets.Hour(n=1), pd.offsets.Minute(n=-2)])
@@ -957,9 +893,9 @@ def test_td64arr_add_offset_array(self, box_df_fail):
@pytest.mark.parametrize('names', [(None, None, None),
('foo', 'bar', None),
('foo', 'foo', 'foo')])
- def test_td64arr_sub_offset_index(self, names, box_df_fail):
+ def test_td64arr_sub_offset_index(self, names, box_df_broadcast_failure):
# GH#18824, GH#19744
- box = box_df_fail # tries to broadcast incorrectly
+ box = box_df_broadcast_failure
tdi = TimedeltaIndex(['1 days 00:00:00', '3 days 04:00:00'],
name=names[0])
other = pd.Index([pd.offsets.Hour(n=1), pd.offsets.Minute(n=-2)],
@@ -975,9 +911,9 @@ def test_td64arr_sub_offset_index(self, names, box_df_fail):
res = tdi - other
tm.assert_equal(res, expected)
- def test_td64arr_sub_offset_array(self, box_df_fail):
+ def test_td64arr_sub_offset_array(self, box_df_broadcast_failure):
# GH#18824
- box = box_df_fail # tries to broadcast incorrectly
+ box = box_df_broadcast_failure
tdi = TimedeltaIndex(['1 days 00:00:00', '3 days 04:00:00'])
other = np.array([pd.offsets.Hour(n=1), pd.offsets.Minute(n=-2)])
@@ -994,9 +930,9 @@ def test_td64arr_sub_offset_array(self, box_df_fail):
@pytest.mark.parametrize('names', [(None, None, None),
('foo', 'bar', None),
('foo', 'foo', 'foo')])
- def test_td64arr_with_offset_series(self, names, box_df_fail):
+ def test_td64arr_with_offset_series(self, names, box_df_broadcast_failure):
# GH#18849
- box = box_df_fail # tries to broadcast incorrectly
+ box = box_df_broadcast_failure
box2 = Series if box is pd.Index else box
tdi = TimedeltaIndex(['1 days 00:00:00', '3 days 04:00:00'],
@@ -1027,9 +963,10 @@ def test_td64arr_with_offset_series(self, names, box_df_fail):
tm.assert_equal(res3, expected_sub)
@pytest.mark.parametrize('obox', [np.array, pd.Index, pd.Series])
- def test_td64arr_addsub_anchored_offset_arraylike(self, obox, box_df_fail):
+ def test_td64arr_addsub_anchored_offset_arraylike(
+ self, obox, box_df_broadcast_failure):
# GH#18824
- box = box_df_fail # DataFrame tries to broadcast incorrectly
+ box = box_df_broadcast_failure
tdi = TimedeltaIndex(['1 days 00:00:00', '3 days 04:00:00'])
tdi = tm.box_expected(tdi, box)
@@ -1090,11 +1027,11 @@ def test_td64arr_mul_int(self, box):
result = 1 * idx
tm.assert_equal(result, idx)
- def test_td64arr_mul_tdlike_scalar_raises(self, delta, box):
+ def test_td64arr_mul_tdlike_scalar_raises(self, two_hours, box):
rng = timedelta_range('1 days', '10 days', name='foo')
rng = tm.box_expected(rng, box)
with pytest.raises(TypeError):
- rng * delta
+ rng * two_hours
def test_tdi_mul_int_array_zerodim(self, box):
rng5 = np.arange(5, dtype='int64')
@@ -1107,8 +1044,8 @@ def test_tdi_mul_int_array_zerodim(self, box):
result = idx * np.array(5, dtype='int64')
tm.assert_equal(result, expected)
- def test_tdi_mul_int_array(self, box_df_fail):
- box = box_df_fail # DataFrame tries to broadcast incorrectly
+ def test_tdi_mul_int_array(self, box_df_broadcast_failure):
+ box = box_df_broadcast_failure
rng5 = np.arange(5, dtype='int64')
idx = TimedeltaIndex(rng5)
expected = TimedeltaIndex(rng5 ** 2)
@@ -1120,7 +1057,7 @@ def test_tdi_mul_int_array(self, box_df_fail):
tm.assert_equal(result, expected)
def test_tdi_mul_int_series(self, box_df_fail):
- box = box_df_fail # DataFrame tries to broadcast incorrectly
+ box = box_df_fail
idx = TimedeltaIndex(np.arange(5, dtype='int64'))
expected = TimedeltaIndex(np.arange(5, dtype='int64') ** 2)
@@ -1133,7 +1070,7 @@ def test_tdi_mul_int_series(self, box_df_fail):
tm.assert_equal(result, expected)
def test_tdi_mul_float_series(self, box_df_fail):
- box = box_df_fail # DataFrame tries to broadcast incorrectly
+ box = box_df_fail
idx = TimedeltaIndex(np.arange(5, dtype='int64'))
idx = tm.box_expected(idx, box)
@@ -1186,7 +1123,7 @@ def test_td64arr_div_int(self, box):
result = idx / 1
tm.assert_equal(result, idx)
- def test_tdi_div_tdlike_scalar(self, delta, box):
+ def test_tdi_div_tdlike_scalar(self, two_hours, box):
# GH#20088, GH#22163 ensure DataFrame returns correct dtype
rng = timedelta_range('1 days', '10 days', name='foo')
expected = pd.Float64Index((np.arange(10) + 1) * 12, name='foo')
@@ -1194,17 +1131,17 @@ def test_tdi_div_tdlike_scalar(self, delta, box):
rng = tm.box_expected(rng, box)
expected = tm.box_expected(expected, box)
- result = rng / delta
+ result = rng / two_hours
tm.assert_equal(result, expected)
- def test_tdi_div_tdlike_scalar_with_nat(self, delta, box):
+ def test_tdi_div_tdlike_scalar_with_nat(self, two_hours, box):
rng = TimedeltaIndex(['1 days', pd.NaT, '2 days'], name='foo')
expected = pd.Float64Index([12, np.nan, 24], name='foo')
rng = tm.box_expected(rng, box)
expected = tm.box_expected(expected, box)
- result = rng / delta
+ result = rng / two_hours
tm.assert_equal(result, expected)
# ------------------------------------------------------------------
@@ -1260,14 +1197,14 @@ def test_td64arr_floordiv_int(self, box):
result = idx // 1
tm.assert_equal(result, idx)
- def test_td64arr_floordiv_tdlike_scalar(self, delta, box):
+ def test_td64arr_floordiv_tdlike_scalar(self, two_hours, box):
tdi = timedelta_range('1 days', '10 days', name='foo')
expected = pd.Int64Index((np.arange(10) + 1) * 12, name='foo')
tdi = tm.box_expected(tdi, box)
expected = tm.box_expected(expected, box)
- result = tdi // delta
+ result = tdi // two_hours
tm.assert_equal(result, expected)
# TODO: Is this redundant with test_td64arr_floordiv_tdlike_scalar?
@@ -1364,14 +1301,6 @@ def test_td64arr_div_numeric_scalar(self, box, two, tdser):
result = tdser / two
tm.assert_equal(result, expected)
- @pytest.mark.parametrize('box', [
- pd.Index,
- Series,
- pytest.param(pd.DataFrame,
- marks=pytest.mark.xfail(reason="broadcasts along "
- "wrong axis",
- strict=True))
- ], ids=lambda x: x.__name__)
@pytest.mark.parametrize('dtype', ['int64', 'int32', 'int16',
'uint64', 'uint32', 'uint16', 'uint8',
'float64', 'float32', 'float16'])
@@ -1380,9 +1309,11 @@ def test_td64arr_div_numeric_scalar(self, box, two, tdser):
Series([20, 30, 40])],
ids=lambda x: type(x).__name__)
@pytest.mark.parametrize('op', [operator.mul, ops.rmul])
- def test_td64arr_rmul_numeric_array(self, op, box, vector, dtype, tdser):
+ def test_td64arr_rmul_numeric_array(self, op, box_df_fail,
+ vector, dtype, tdser):
# GH#4521
# divide/multiply by integers
+ box = box_df_fail # broadcasts incorrectly but doesn't raise
vector = vector.astype(dtype)
expected = Series(['1180 Days', '1770 Days', 'NaT'],
@@ -1428,22 +1359,15 @@ def test_td64arr_div_numeric_array(self, box, vector, dtype, tdser):
with pytest.raises(TypeError):
vector / tdser
- # TODO: Should we be parametrizing over types for `ser` too?
- @pytest.mark.parametrize('box', [
- pd.Index,
- Series,
- pytest.param(pd.DataFrame,
- marks=pytest.mark.xfail(reason="broadcasts along "
- "wrong axis",
- strict=True))
- ], ids=lambda x: x.__name__)
@pytest.mark.parametrize('names', [(None, None, None),
('Egon', 'Venkman', None),
('NCC1701D', 'NCC1701D', 'NCC1701D')])
- def test_td64arr_mul_int_series(self, box, names):
+ def test_td64arr_mul_int_series(self, box_df_fail, names):
# GH#19042 test for correct name attachment
+ box = box_df_fail # broadcasts along wrong axis, but doesn't raise
tdi = TimedeltaIndex(['0days', '1day', '2days', '3days', '4days'],
name=names[0])
+ # TODO: Should we be parametrizing over types for `ser` too?
ser = Series([0, 1, 2, 3, 4], dtype=np.int64, name=names[1])
expected = Series(['0days', '1day', '4days', '9days', '16days'],
@@ -1491,10 +1415,6 @@ def test_float_series_rdiv_td64arr(self, box, names):
class TestTimedeltaArraylikeInvalidArithmeticOps(object):
- @pytest.mark.parametrize('scalar_td', [
- timedelta(minutes=5, seconds=4),
- Timedelta('5m4s'),
- Timedelta('5m4s').to_timedelta64()])
def test_td64arr_pow_invalid(self, scalar_td, box):
td1 = Series([timedelta(minutes=5, seconds=3)] * 3)
td1.iloc[2] = np.nan
| Takes over from #22350 (which would have been a rebasing nightmare), addressing AFAICT all unaddressed comments from there.
All remaining fixtures in tests/arithmetic are collected in `tests/arithmetic/conftest.py`.
Fixtures are given more descriptive names and docstrings as requested in #22350.
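For illustration, here is roughly what one of the consolidated fixtures looks like (a sketch reconstructed from the `delta`/`two_hours` fixtures removed in this diff; the conftest file itself is not shown here, so names and docstring wording are approximate):

```python
# sketch of tests/arithmetic/conftest.py; the real file also carries
# box, box_df_fail, numeric_idx, tdser, and the mismatched_freq fixtures
from datetime import timedelta

import numpy as np
import pytest

import pandas as pd


@pytest.fixture(params=[pd.offsets.Hour(2),
                        timedelta(hours=2),
                        np.timedelta64(2, 'h'),
                        pd.Timedelta(hours=2)],
                ids=lambda x: type(x).__name__)
def two_hours(request):
    """
    Several timedelta-like and DateOffset objects that each represent
    a 2-hour timedelta.
    """
    return request.param
```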
A handful of tests are altered to use fixtures; in a number of cases their `pytest.mark.parametrize` arguments were already equivalent to one of the fixtures.
A few new broken cases are identified, specifically numeric `Series` and `DataFrame` `__mul__` or `__rdiv__` with a) non-nanosecond `timedelta64` or b) `Tick`. (I'll open an Issue for these)
Some tests got cleaned up nicely by using `tm.assert_equal` instead of parametrizing with `assert_func` | https://api.github.com/repos/pandas-dev/pandas/pulls/22645 | 2018-09-09T01:36:38Z | 2018-09-12T11:28:48Z | 2018-09-12T11:28:48Z | 2018-09-12T13:41:23Z |
API/ENH: tz_localize handling of nonexistent times: rename keyword + add shift option | diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst
index 85b0abe421eb2..a52c80106f100 100644
--- a/doc/source/timeseries.rst
+++ b/doc/source/timeseries.rst
@@ -2357,6 +2357,38 @@ constructor as well as ``tz_localize``.
# tz_convert(None) is identical with tz_convert('UTC').tz_localize(None)
didx.tz_convert('UCT').tz_localize(None)
+.. _timeseries.timezone_nonexistent:
+
+Nonexistent Times when Localizing
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+A DST transition may also shift the local time ahead by 1 hour, creating nonexistent
+local times. The behavior of localizing a timeseries with nonexistent times
+can be controlled by the ``nonexistent`` argument. The following options are available:
+
+* ``raise``: Raises a ``pytz.NonExistentTimeError`` (the default behavior)
+* ``NaT``: Replaces nonexistent times with ``NaT``
+* ``shift``: Shifts nonexistent times forward to the closest real time
+
+.. ipython:: python
+
+    dti = date_range(start='2015-03-29 01:30:00', periods=3, freq='H')
+    # 2:30 is a nonexistent time
+
+Localization of nonexistent times will raise an error by default.
+
+.. code-block:: ipython
+
+ In [2]: dti.tz_localize('Europe/Warsaw')
+ NonExistentTimeError: 2015-03-29 02:30:00
+
+Transform nonexistent times to ``NaT``, or shift them forward to the closest real time.
+
+.. ipython:: python
+
+    dti
+    dti.tz_localize('Europe/Warsaw', nonexistent='shift')
+    dti.tz_localize('Europe/Warsaw', nonexistent='NaT')
+
+
.. _timeseries.timezone_series:
TZ Aware Dtypes
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 4f17133ef4a8c..ef576223db4d1 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -205,6 +205,7 @@ Other Enhancements
- New attribute :attr:`__git_version__` will return git commit sha of current build (:issue:`21295`).
- Compatibility with Matplotlib 3.0 (:issue:`22790`).
- Added :meth:`Interval.overlaps`, :meth:`IntervalArray.overlaps`, and :meth:`IntervalIndex.overlaps` for determining overlaps between interval-like objects (:issue:`21998`)
+- :meth:`Timestamp.tz_localize`, :meth:`DatetimeIndex.tz_localize`, and :meth:`Series.tz_localize` have gained the ``nonexistent`` argument for alternative handling of nonexistent times. See :ref:`timeseries.timezone_nonexistent` (:issue:`8917`)
.. _whatsnew_0240.api_breaking:
@@ -912,6 +913,7 @@ Deprecations
- :meth:`FrozenNDArray.searchsorted` has deprecated the ``v`` parameter in favor of ``value`` (:issue:`14645`)
- :func:`DatetimeIndex.shift` and :func:`PeriodIndex.shift` now accept ``periods`` argument instead of ``n`` for consistency with :func:`Index.shift` and :func:`Series.shift`. Using ``n`` throws a deprecation warning (:issue:`22458`, :issue:`22912`)
- The ``fastpath`` keyword of the different Index constructors is deprecated (:issue:`23110`).
+- :meth:`Timestamp.tz_localize`, :meth:`DatetimeIndex.tz_localize`, and :meth:`Series.tz_localize` have deprecated the ``errors`` argument in favor of the ``nonexistent`` argument (:issue:`8917`)
.. _whatsnew_0240.prior_deprecations:
diff --git a/pandas/_libs/tslibs/conversion.pyx b/pandas/_libs/tslibs/conversion.pyx
index d7eef546befbd..f9c604cd76472 100644
--- a/pandas/_libs/tslibs/conversion.pyx
+++ b/pandas/_libs/tslibs/conversion.pyx
@@ -1,5 +1,4 @@
# -*- coding: utf-8 -*-
-
import cython
from cython import Py_ssize_t
@@ -44,6 +43,7 @@ from nattype cimport NPY_NAT, checknull_with_nat
# Constants
cdef int64_t DAY_NS = 86400000000000LL
+cdef int64_t HOURS_NS = 3600000000000
NS_DTYPE = np.dtype('M8[ns]')
TD_DTYPE = np.dtype('m8[ns]')
@@ -458,8 +458,7 @@ cdef _TSObject convert_str_to_tsobject(object ts, object tz, object unit,
if tz is not None:
# shift for localize_tso
ts = tz_localize_to_utc(np.array([ts], dtype='i8'), tz,
- ambiguous='raise',
- errors='raise')[0]
+ ambiguous='raise')[0]
except OutOfBoundsDatetime:
# GH#19382 for just-barely-OutOfBounds falling back to dateutil
@@ -826,7 +825,7 @@ def tz_convert(int64_t[:] vals, object tz1, object tz2):
@cython.boundscheck(False)
@cython.wraparound(False)
def tz_localize_to_utc(ndarray[int64_t] vals, object tz, object ambiguous=None,
- object errors='raise'):
+ object nonexistent=None):
"""
Localize tzinfo-naive i8 to given time zone (using pytz). If
there are ambiguities in the values, raise AmbiguousTimeError.
@@ -837,7 +836,10 @@ def tz_localize_to_utc(ndarray[int64_t] vals, object tz, object ambiguous=None,
tz : tzinfo or None
ambiguous : str, bool, or arraylike
If arraylike, must have the same length as vals
- errors : {"raise", "coerce"}, default "raise"
+    nonexistent : {'raise', 'NaT', 'shift'} or None
+        How to handle local times that do not exist in the zone (DST-skipped)
+
+ .. versionadded:: 0.24.0
Returns
-------
@@ -849,16 +851,13 @@ def tz_localize_to_utc(ndarray[int64_t] vals, object tz, object ambiguous=None,
ndarray ambiguous_array
Py_ssize_t i, idx, pos, ntrans, n = len(vals)
int64_t *tdata
- int64_t v, left, right
+ int64_t v, left, right, val, v_left, v_right
ndarray[int64_t] result, result_a, result_b, dst_hours
npy_datetimestruct dts
bint infer_dst = False, is_dst = False, fill = False
- bint is_coerce = errors == 'coerce', is_raise = errors == 'raise'
+ bint shift = False, fill_nonexist = False
# Vectorized version of DstTzInfo.localize
-
- assert is_coerce or is_raise
-
if tz == UTC or tz is None:
return vals
@@ -888,39 +887,45 @@ def tz_localize_to_utc(ndarray[int64_t] vals, object tz, object ambiguous=None,
"the same size as vals")
ambiguous_array = np.asarray(ambiguous)
+ if nonexistent == 'NaT':
+ fill_nonexist = True
+ elif nonexistent == 'shift':
+ shift = True
+ else:
+ assert nonexistent in ('raise', None), ("nonexistent must be one of"
+ " {'NaT', 'raise', 'shift'}")
+
trans, deltas, typ = get_dst_info(tz)
tdata = <int64_t*> cnp.PyArray_DATA(trans)
ntrans = len(trans)
+ # Determine whether each date lies left of the DST transition (store in
+ # result_a) or right of the DST transition (store in result_b)
result_a = np.empty(n, dtype=np.int64)
result_b = np.empty(n, dtype=np.int64)
result_a.fill(NPY_NAT)
result_b.fill(NPY_NAT)
- # left side
- idx_shifted = (np.maximum(0, trans.searchsorted(
+ idx_shifted_left = (np.maximum(0, trans.searchsorted(
vals - DAY_NS, side='right') - 1)).astype(np.int64)
- for i in range(n):
- v = vals[i] - deltas[idx_shifted[i]]
- pos = bisect_right_i8(tdata, v, ntrans) - 1
-
- # timestamp falls to the left side of the DST transition
- if v + deltas[pos] == vals[i]:
- result_a[i] = v
-
- # right side
- idx_shifted = (np.maximum(0, trans.searchsorted(
+ idx_shifted_right = (np.maximum(0, trans.searchsorted(
vals + DAY_NS, side='right') - 1)).astype(np.int64)
for i in range(n):
- v = vals[i] - deltas[idx_shifted[i]]
- pos = bisect_right_i8(tdata, v, ntrans) - 1
+ val = vals[i]
+ v_left = val - deltas[idx_shifted_left[i]]
+ pos_left = bisect_right_i8(tdata, v_left, ntrans) - 1
+ # timestamp falls to the left side of the DST transition
+ if v_left + deltas[pos_left] == val:
+ result_a[i] = v_left
+ v_right = val - deltas[idx_shifted_right[i]]
+ pos_right = bisect_right_i8(tdata, v_right, ntrans) - 1
# timestamp falls to the right side of the DST transition
- if v + deltas[pos] == vals[i]:
- result_b[i] = v
+ if v_right + deltas[pos_right] == val:
+ result_b[i] = v_right
if infer_dst:
dst_hours = np.empty(n, dtype=np.int64)
@@ -935,7 +940,7 @@ def tz_localize_to_utc(ndarray[int64_t] vals, object tz, object ambiguous=None,
stamp = _render_tstamp(vals[trans_idx])
raise pytz.AmbiguousTimeError(
"Cannot infer dst time from %s as there "
- "are no repeated times" % stamp)
+ "are no repeated times".format(stamp))
# Split the array into contiguous chunks (where the difference between
# indices is 1). These are effectively dst transitions in different
# years which is useful for checking that there is not an ambiguous
@@ -960,7 +965,7 @@ def tz_localize_to_utc(ndarray[int64_t] vals, object tz, object ambiguous=None,
if switch_idx.size > 1:
raise pytz.AmbiguousTimeError(
"There are %i dst switches when "
- "there should only be 1." % switch_idx.size)
+ "there should only be 1.".format(switch_idx.size))
switch_idx = switch_idx[0] + 1
# Pull the only index and adjust
a_idx = grp[:switch_idx]
@@ -968,10 +973,11 @@ def tz_localize_to_utc(ndarray[int64_t] vals, object tz, object ambiguous=None,
dst_hours[grp] = np.hstack((result_a[a_idx], result_b[b_idx]))
for i in range(n):
+ val = vals[i]
left = result_a[i]
right = result_b[i]
- if vals[i] == NPY_NAT:
- result[i] = vals[i]
+ if val == NPY_NAT:
+ result[i] = val
elif left != NPY_NAT and right != NPY_NAT:
if left == right:
result[i] = left
@@ -986,19 +992,27 @@ def tz_localize_to_utc(ndarray[int64_t] vals, object tz, object ambiguous=None,
elif fill:
result[i] = NPY_NAT
else:
- stamp = _render_tstamp(vals[i])
+ stamp = _render_tstamp(val)
raise pytz.AmbiguousTimeError(
"Cannot infer dst time from %r, try using the "
- "'ambiguous' argument" % stamp)
+ "'ambiguous' argument".format(stamp))
elif left != NPY_NAT:
result[i] = left
elif right != NPY_NAT:
result[i] = right
else:
- if is_coerce:
+ # Handle nonexistent times
+ if shift:
+ # Shift the nonexistent time forward to the closest existing
+ # time
+                remaining_ns = val % HOURS_NS
+                new_local = val + (HOURS_NS - remaining_ns)
+ delta_idx = trans.searchsorted(new_local, side='right') - 1
+ result[i] = new_local - deltas[delta_idx]
+ elif fill_nonexist:
result[i] = NPY_NAT
else:
- stamp = _render_tstamp(vals[i])
+ stamp = _render_tstamp(val)
raise pytz.NonExistentTimeError(stamp)
return result
diff --git a/pandas/_libs/tslibs/nattype.pyx b/pandas/_libs/tslibs/nattype.pyx
index ae4f9c821b5d1..0eec84ecf8285 100644
--- a/pandas/_libs/tslibs/nattype.pyx
+++ b/pandas/_libs/tslibs/nattype.pyx
@@ -564,14 +564,26 @@ class NaTType(_NaT):
- 'NaT' will return NaT for an ambiguous time
- 'raise' will raise an AmbiguousTimeError for an ambiguous time
- errors : 'raise', 'coerce', default 'raise'
+ nonexistent : 'shift', 'NaT', default 'raise'
+ A nonexistent time does not exist in a particular timezone
+ where clocks moved forward due to DST.
+
+ - 'shift' will shift the nonexistent time forward to the closest
+ existing time
+ - 'NaT' will return NaT where there are nonexistent times
+            - 'raise' will raise a NonExistentTimeError if there are
+ nonexistent times
+
+ .. versionadded:: 0.24.0
+
+ errors : 'raise', 'coerce', default None
- 'raise' will raise a NonExistentTimeError if a timestamp is not
valid in the specified timezone (e.g. due to a transition from
- or to DST time)
+ or to DST time). Use ``nonexistent='raise'`` instead.
- 'coerce' will return NaT if the timestamp can not be converted
- into the specified timezone
+ into the specified timezone. Use ``nonexistent='NaT'`` instead.
- .. versionadded:: 0.19.0
+ .. deprecated:: 0.24.0
Returns
-------
diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
index 0c2753dbc6f28..08b0c5472549e 100644
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -961,7 +961,8 @@ class Timestamp(_Timestamp):
def is_leap_year(self):
return bool(ccalendar.is_leapyear(self.year))
- def tz_localize(self, tz, ambiguous='raise', errors='raise'):
+ def tz_localize(self, tz, ambiguous='raise', nonexistent='raise',
+ errors=None):
"""
Convert naive Timestamp to local time zone, or remove
timezone from tz-aware Timestamp.
@@ -978,14 +979,26 @@ class Timestamp(_Timestamp):
- 'NaT' will return NaT for an ambiguous time
- 'raise' will raise an AmbiguousTimeError for an ambiguous time
- errors : 'raise', 'coerce', default 'raise'
+ nonexistent : 'shift', 'NaT', default 'raise'
+ A nonexistent time does not exist in a particular timezone
+ where clocks moved forward due to DST.
+
+ - 'shift' will shift the nonexistent time forward to the closest
+ existing time
+ - 'NaT' will return NaT where there are nonexistent times
+            - 'raise' will raise a NonExistentTimeError if there are
+ nonexistent times
+
+ .. versionadded:: 0.24.0
+
+ errors : 'raise', 'coerce', default None
- 'raise' will raise a NonExistentTimeError if a timestamp is not
valid in the specified timezone (e.g. due to a transition from
- or to DST time)
+ or to DST time). Use ``nonexistent='raise'`` instead.
- 'coerce' will return NaT if the timestamp can not be converted
- into the specified timezone
+ into the specified timezone. Use ``nonexistent='NaT'`` instead.
- .. versionadded:: 0.19.0
+ .. deprecated:: 0.24.0
Returns
-------
@@ -999,13 +1012,31 @@ class Timestamp(_Timestamp):
if ambiguous == 'infer':
raise ValueError('Cannot infer offset with only one time.')
+ if errors is not None:
+ warnings.warn("The errors argument is deprecated and will be "
+ "removed in a future release. Use "
+ "nonexistent='NaT' or nonexistent='raise' "
+ "instead.", FutureWarning)
+ if errors == 'coerce':
+ nonexistent = 'NaT'
+ elif errors == 'raise':
+ nonexistent = 'raise'
+ else:
+ raise ValueError("The errors argument must be either 'coerce' "
+ "or 'raise'.")
+
+ if nonexistent not in ('raise', 'NaT', 'shift'):
+ raise ValueError("The nonexistent argument must be one of 'raise',"
+ " 'NaT' or 'shift'")
+
if self.tzinfo is None:
# tz naive, localize
tz = maybe_get_tz(tz)
if not is_string_object(ambiguous):
ambiguous = [ambiguous]
value = tz_localize_to_utc(np.array([self.value], dtype='i8'), tz,
- ambiguous=ambiguous, errors=errors)[0]
+ ambiguous=ambiguous,
+ nonexistent=nonexistent)[0]
return Timestamp(value, tz=tz)
else:
if tz is None:
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index ac90483513af5..b6574c121c087 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -611,7 +611,8 @@ def tz_convert(self, tz):
# No conversion since timestamps are all UTC to begin with
return self._shallow_copy(tz=tz)
- def tz_localize(self, tz, ambiguous='raise', errors='raise'):
+ def tz_localize(self, tz, ambiguous='raise', nonexistent='raise',
+ errors=None):
"""
Localize tz-naive Datetime Array/Index to tz-aware
Datetime Array/Index.
@@ -627,8 +628,7 @@ def tz_localize(self, tz, ambiguous='raise', errors='raise'):
tz : string, pytz.timezone, dateutil.tz.tzfile or None
Time zone to convert timestamps to. Passing ``None`` will
remove the time zone information preserving local time.
- ambiguous : str {'infer', 'NaT', 'raise'} or bool array,
- default 'raise'
+ ambiguous : 'infer', 'NaT', bool array, default 'raise'
- 'infer' will attempt to infer fall dst-transition hours based on
order
@@ -639,15 +639,27 @@ def tz_localize(self, tz, ambiguous='raise', errors='raise'):
- 'raise' will raise an AmbiguousTimeError if there are ambiguous
times
- errors : {'raise', 'coerce'}, default 'raise'
+        nonexistent : 'shift', 'NaT', default 'raise'
+ A nonexistent time does not exist in a particular timezone
+ where clocks moved forward due to DST.
+
+ - 'shift' will shift the nonexistent times forward to the closest
+ existing time
+ - 'NaT' will return NaT where there are nonexistent times
+        - 'raise' will raise a NonExistentTimeError if there are
+ nonexistent times
+
+ .. versionadded:: 0.24.0
+
+ errors : {'raise', 'coerce'}, default None
- 'raise' will raise a NonExistentTimeError if a timestamp is not
valid in the specified time zone (e.g. due to a transition from
- or to DST time)
+ or to DST time). Use ``nonexistent='raise'`` instead.
- 'coerce' will return NaT if the timestamp can not be converted
- to the specified time zone
+ to the specified time zone. Use ``nonexistent='NaT'`` instead.
- .. versionadded:: 0.19.0
+ .. deprecated:: 0.24.0
Returns
-------
@@ -689,6 +701,23 @@ def tz_localize(self, tz, ambiguous='raise', errors='raise'):
'2018-03-03 09:00:00'],
dtype='datetime64[ns]', freq='D')
"""
+ if errors is not None:
+ warnings.warn("The errors argument is deprecated and will be "
+ "removed in a future release. Use "
+ "nonexistent='NaT' or nonexistent='raise' "
+ "instead.", FutureWarning)
+ if errors == 'coerce':
+ nonexistent = 'NaT'
+ elif errors == 'raise':
+ nonexistent = 'raise'
+ else:
+ raise ValueError("The errors argument must be either 'coerce' "
+ "or 'raise'.")
+
+ if nonexistent not in ('raise', 'NaT', 'shift'):
+ raise ValueError("The nonexistent argument must be one of 'raise',"
+ " 'NaT' or 'shift'")
+
if self.tz is not None:
if tz is None:
new_dates = conversion.tz_convert(self.asi8, 'UTC', self.tz)
@@ -698,9 +727,9 @@ def tz_localize(self, tz, ambiguous='raise', errors='raise'):
tz = timezones.maybe_get_tz(tz)
# Convert to UTC
- new_dates = conversion.tz_localize_to_utc(self.asi8, tz,
- ambiguous=ambiguous,
- errors=errors)
+ new_dates = conversion.tz_localize_to_utc(
+ self.asi8, tz, ambiguous=ambiguous, nonexistent=nonexistent,
+ )
new_dates = new_dates.view(_NS_DTYPE)
return self._shallow_copy(new_dates, tz=tz)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 31b700abcfdb3..c24872d7c89e9 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -8627,7 +8627,7 @@ def _tz_convert(ax, tz):
return result.__finalize__(self)
def tz_localize(self, tz, axis=0, level=None, copy=True,
- ambiguous='raise'):
+ ambiguous='raise', nonexistent='raise'):
"""
Localize tz-naive TimeSeries to target time zone.
@@ -8649,6 +8649,17 @@ def tz_localize(self, tz, axis=0, level=None, copy=True,
- 'NaT' will return NaT where there are ambiguous times
- 'raise' will raise an AmbiguousTimeError if there are ambiguous
times
+ nonexistent : 'shift', 'NaT', default 'raise'
+ A nonexistent time does not exist in a particular timezone
+ where clocks moved forward due to DST.
+
+ - 'shift' will shift the nonexistent times forward to the closest
+ existing time
+ - 'NaT' will return NaT where there are nonexistent times
+        - 'raise' will raise a NonExistentTimeError if there are
+ nonexistent times
+
+ .. versionadded:: 0.24.0
Returns
-------
@@ -8658,10 +8669,14 @@ def tz_localize(self, tz, axis=0, level=None, copy=True,
TypeError
If the TimeSeries is tz-aware and tz is not None.
"""
+ if nonexistent not in ('raise', 'NaT', 'shift'):
+ raise ValueError("The nonexistent argument must be one of 'raise',"
+ " 'NaT' or 'shift'")
+
axis = self._get_axis_number(axis)
ax = self._get_axis(axis)
- def _tz_localize(ax, tz, ambiguous):
+ def _tz_localize(ax, tz, ambiguous, nonexistent):
if not hasattr(ax, 'tz_localize'):
if len(ax) > 0:
ax_name = self._get_axis_name(axis)
@@ -8670,19 +8685,23 @@ def _tz_localize(ax, tz, ambiguous):
else:
ax = DatetimeIndex([], tz=tz)
else:
- ax = ax.tz_localize(tz, ambiguous=ambiguous)
+ ax = ax.tz_localize(
+ tz, ambiguous=ambiguous, nonexistent=nonexistent
+ )
return ax
# if a level is given it must be a MultiIndex level or
# equivalent to the axis name
if isinstance(ax, MultiIndex):
level = ax._get_level_number(level)
- new_level = _tz_localize(ax.levels[level], tz, ambiguous)
+ new_level = _tz_localize(
+ ax.levels[level], tz, ambiguous, nonexistent
+ )
ax = ax.set_levels(new_level, level=level)
else:
if level not in (None, 0, ax.name):
raise ValueError("The level {0} is not valid".format(level))
- ax = _tz_localize(ax, tz, ambiguous)
+ ax = _tz_localize(ax, tz, ambiguous, nonexistent)
result = self._constructor(self._data, copy=copy)
result.set_axis(ax, axis=axis, inplace=True)
diff --git a/pandas/tests/indexes/datetimes/test_timezones.py b/pandas/tests/indexes/datetimes/test_timezones.py
index dc01f7ccbd496..1369783657f92 100644
--- a/pandas/tests/indexes/datetimes/test_timezones.py
+++ b/pandas/tests/indexes/datetimes/test_timezones.py
@@ -312,9 +312,13 @@ def test_dti_tz_localize_nonexistent_raise_coerce(self):
index.tz_localize(tz=tz)
with pytest.raises(pytz.NonExistentTimeError):
- index.tz_localize(tz=tz, errors='raise')
+ with tm.assert_produces_warning(FutureWarning):
+ index.tz_localize(tz=tz, errors='raise')
- result = index.tz_localize(tz=tz, errors='coerce')
+ with tm.assert_produces_warning(FutureWarning,
+ clear=FutureWarning,
+ check_stacklevel=False):
+ result = index.tz_localize(tz=tz, errors='coerce')
test_times = ['2015-03-08 01:00-05:00', 'NaT',
'2015-03-08 03:00-04:00']
dti = to_datetime(test_times, utc=True)
@@ -574,6 +578,42 @@ def test_dti_tz_localize_bdate_range(self):
localized = dr.tz_localize(pytz.utc)
tm.assert_index_equal(dr_utc, localized)
+ @pytest.mark.parametrize('tz', ['Europe/Warsaw', 'dateutil/Europe/Warsaw'])
+ @pytest.mark.parametrize('method, exp', [
+ ['shift', '2015-03-29 03:00:00'],
+ ['NaT', pd.NaT],
+ ['raise', None],
+ ['foo', 'invalid']
+ ])
+ def test_dti_tz_localize_nonexistent(self, tz, method, exp):
+ # GH 8917
+ n = 60
+ dti = date_range(start='2015-03-29 02:00:00', periods=n, freq='min')
+ if method == 'raise':
+ with pytest.raises(pytz.NonExistentTimeError):
+ dti.tz_localize(tz, nonexistent=method)
+ elif exp == 'invalid':
+ with pytest.raises(ValueError):
+ dti.tz_localize(tz, nonexistent=method)
+ else:
+ result = dti.tz_localize(tz, nonexistent=method)
+ expected = DatetimeIndex([exp] * n, tz=tz)
+ tm.assert_index_equal(result, expected)
+
+ @pytest.mark.filterwarnings('ignore::FutureWarning')
+ def test_dti_tz_localize_errors_deprecation(self):
+ # GH 22644
+ tz = 'Europe/Warsaw'
+ n = 60
+ dti = date_range(start='2015-03-29 02:00:00', periods=n, freq='min')
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ with pytest.raises(ValueError):
+ dti.tz_localize(tz, errors='foo')
+ # make sure errors='coerce' gets mapped correctly to nonexistent
+ result = dti.tz_localize(tz, errors='coerce')
+ expected = dti.tz_localize(tz, nonexistent='NaT')
+ tm.assert_index_equal(result, expected)
+
# -------------------------------------------------------------
# DatetimeIndex.normalize
diff --git a/pandas/tests/scalar/timestamp/test_timezones.py b/pandas/tests/scalar/timestamp/test_timezones.py
index 8cebfafeae82a..827ad3581cd49 100644
--- a/pandas/tests/scalar/timestamp/test_timezones.py
+++ b/pandas/tests/scalar/timestamp/test_timezones.py
@@ -79,20 +79,44 @@ def test_tz_localize_ambiguous(self):
('2015-03-08 02:30', 'US/Pacific'),
('2015-03-29 02:00', 'Europe/Paris'),
('2015-03-29 02:30', 'Europe/Belgrade')])
+ @pytest.mark.filterwarnings('ignore::FutureWarning')
def test_tz_localize_nonexistent(self, stamp, tz):
# GH#13057
ts = Timestamp(stamp)
with pytest.raises(NonExistentTimeError):
ts.tz_localize(tz)
+ # GH 22644
with pytest.raises(NonExistentTimeError):
- ts.tz_localize(tz, errors='raise')
- assert ts.tz_localize(tz, errors='coerce') is NaT
+ with tm.assert_produces_warning(FutureWarning):
+ ts.tz_localize(tz, errors='raise')
+ with tm.assert_produces_warning(FutureWarning):
+ assert ts.tz_localize(tz, errors='coerce') is NaT
def test_tz_localize_errors_ambiguous(self):
# GH#13057
ts = Timestamp('2015-11-1 01:00')
with pytest.raises(AmbiguousTimeError):
- ts.tz_localize('US/Pacific', errors='coerce')
+ with tm.assert_produces_warning(FutureWarning):
+ ts.tz_localize('US/Pacific', errors='coerce')
+
+ @pytest.mark.filterwarnings('ignore::FutureWarning')
+ def test_tz_localize_errors_invalid_arg(self):
+ # GH 22644
+ tz = 'Europe/Warsaw'
+ ts = Timestamp('2015-03-29 02:00:00')
+ with pytest.raises(ValueError):
+ with tm.assert_produces_warning(FutureWarning):
+ ts.tz_localize(tz, errors='foo')
+
+ def test_tz_localize_errors_coerce(self):
+ # GH 22644
+ # make sure errors='coerce' gets mapped correctly to nonexistent
+ tz = 'Europe/Warsaw'
+ ts = Timestamp('2015-03-29 02:00:00')
+ with tm.assert_produces_warning(FutureWarning):
+ result = ts.tz_localize(tz, errors='coerce')
+ expected = ts.tz_localize(tz, nonexistent='NaT')
+ assert result is expected
@pytest.mark.parametrize('stamp', ['2014-02-01 09:00', '2014-07-08 09:00',
'2014-11-01 17:00', '2014-11-05 00:00'])
@@ -158,6 +182,30 @@ def test_timestamp_tz_localize(self, tz):
assert result.hour == expected.hour
assert result == expected
+ @pytest.mark.parametrize('tz', ['Europe/Warsaw', 'dateutil/Europe/Warsaw'])
+ def test_timestamp_tz_localize_nonexistent_shift(self, tz):
+ # GH 8917
+ ts = Timestamp('2015-03-29 02:20:00')
+ result = ts.tz_localize(tz, nonexistent='shift')
+ expected = Timestamp('2015-03-29 03:00:00').tz_localize(tz)
+ assert result == expected
+
+ @pytest.mark.parametrize('tz', ['Europe/Warsaw', 'dateutil/Europe/Warsaw'])
+ def test_timestamp_tz_localize_nonexistent_NaT(self, tz):
+ # GH 8917
+ ts = Timestamp('2015-03-29 02:20:00')
+ result = ts.tz_localize(tz, nonexistent='NaT')
+ assert result is NaT
+
+ @pytest.mark.parametrize('tz', ['Europe/Warsaw', 'dateutil/Europe/Warsaw'])
+ def test_timestamp_tz_localize_nonexistent_raise(self, tz):
+ # GH 8917
+ ts = Timestamp('2015-03-29 02:20:00')
+ with pytest.raises(pytz.NonExistentTimeError):
+ ts.tz_localize(tz, nonexistent='raise')
+ with pytest.raises(ValueError):
+ ts.tz_localize(tz, nonexistent='foo')
+
# ------------------------------------------------------------------
# Timestamp.tz_convert
diff --git a/pandas/tests/series/test_timezones.py b/pandas/tests/series/test_timezones.py
index 472b2c5644fa5..8c1ea6bff5f4d 100644
--- a/pandas/tests/series/test_timezones.py
+++ b/pandas/tests/series/test_timezones.py
@@ -13,7 +13,7 @@
from pandas._libs.tslibs import timezones, conversion
from pandas.compat import lrange
from pandas.core.indexes.datetimes import date_range
-from pandas import Series, Timestamp, DatetimeIndex, Index
+from pandas import Series, Timestamp, DatetimeIndex, Index, NaT
class TestSeriesTimezones(object):
@@ -33,6 +33,21 @@ def test_series_tz_localize(self):
tm.assert_raises_regex(TypeError, 'Already tz-aware',
ts.tz_localize, 'US/Eastern')
+ @pytest.mark.filterwarnings('ignore::FutureWarning')
+ def test_tz_localize_errors_deprecation(self):
+ # GH 22644
+ tz = 'Europe/Warsaw'
+ n = 60
+ rng = date_range(start='2015-03-29 02:00:00', periods=n, freq='min')
+ ts = Series(rng)
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ with pytest.raises(ValueError):
+ ts.dt.tz_localize(tz, errors='foo')
+ # make sure errors='coerce' gets mapped correctly to nonexistent
+ result = ts.dt.tz_localize(tz, errors='coerce')
+ expected = ts.dt.tz_localize(tz, nonexistent='NaT')
+ tm.assert_series_equal(result, expected)
+
def test_series_tz_localize_ambiguous_bool(self):
# make sure that we are correctly accepting bool values as ambiguous
@@ -60,6 +75,29 @@ def test_series_tz_localize_ambiguous_bool(self):
result = ser.dt.tz_localize('US/Central', ambiguous=[False])
tm.assert_series_equal(result, expected1)
+ @pytest.mark.parametrize('tz', ['Europe/Warsaw', 'dateutil/Europe/Warsaw'])
+ @pytest.mark.parametrize('method, exp', [
+ ['shift', '2015-03-29 03:00:00'],
+ ['NaT', NaT],
+ ['raise', None],
+ ['foo', 'invalid']
+ ])
+ def test_series_tz_localize_nonexistent(self, tz, method, exp):
+ # GH 8917
+ n = 60
+ dti = date_range(start='2015-03-29 02:00:00', periods=n, freq='min')
+ s = Series(1, dti)
+ if method == 'raise':
+ with pytest.raises(pytz.NonExistentTimeError):
+ s.tz_localize(tz, nonexistent=method)
+ elif exp == 'invalid':
+ with pytest.raises(ValueError):
+                s.tz_localize(tz, nonexistent=method)
+ else:
+ result = s.tz_localize(tz, nonexistent=method)
+ expected = Series(1, index=DatetimeIndex([exp] * n, tz=tz))
+ tm.assert_series_equal(result, expected)
+
@pytest.mark.parametrize('tzstr', ['US/Eastern', 'dateutil/US/Eastern'])
def test_series_tz_localize_empty(self, tzstr):
# GH#2248
| - [x] closes #8917
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
Currently, users have no control over how nonexistent datetimes are handled when calling `tz_localize`, unlike ambiguous times. This adds a new keyword `nonexistent` to `tz_localize` so that users can now choose:
``'raise'``: Raise an error (default)
``'NaT'``: Replace nonexistent times with ``'NaT'``
``'shift'``: Shift nonexistent times forward to the closest existing time
| https://api.github.com/repos/pandas-dev/pandas/pulls/22644 | 2018-09-09T01:34:58Z | 2018-10-25T11:46:30Z | 2018-10-25T11:46:30Z | 2018-10-25T15:48:10Z |
DOC: improve doc string for .aggregate and .transform | diff --git a/ci/doctests.sh b/ci/doctests.sh
index 2af5dbd26aeb1..654bd57107904 100755
--- a/ci/doctests.sh
+++ b/ci/doctests.sh
@@ -21,7 +21,7 @@ if [ "$DOCTEST" ]; then
# DataFrame / Series docstrings
pytest --doctest-modules -v pandas/core/frame.py \
- -k"-assign -axes -combine -isin -itertuples -join -nlargest -nsmallest -nunique -pivot_table -quantile -query -reindex -reindex_axis -replace -round -set_index -stack -to_dict -to_stata -transform"
+ -k"-assign -axes -combine -isin -itertuples -join -nlargest -nsmallest -nunique -pivot_table -quantile -query -reindex -reindex_axis -replace -round -set_index -stack -to_dict -to_stata"
if [ $? -ne "0" ]; then
RET=1
@@ -35,7 +35,7 @@ if [ "$DOCTEST" ]; then
fi
pytest --doctest-modules -v pandas/core/generic.py \
- -k"-_set_axis_name -_xs -describe -droplevel -groupby -interpolate -pct_change -pipe -reindex -reindex_axis -resample -sample -to_json -to_xarray -transform -transpose -values -xs"
+ -k"-_set_axis_name -_xs -describe -droplevel -groupby -interpolate -pct_change -pipe -reindex -reindex_axis -resample -sample -to_json -to_xarray -transpose -values -xs"
if [ $? -ne "0" ]; then
RET=1
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 251bc6587872d..bb08d4fa5582b 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -109,10 +109,9 @@
_shared_doc_kwargs = dict(
axes='index, columns', klass='DataFrame',
axes_single_arg="{0 or 'index', 1 or 'columns'}",
- axis="""
- axis : {0 or 'index', 1 or 'columns'}, default 0
- - 0 or 'index': apply function to each column.
- - 1 or 'columns': apply function to each row.""",
+ axis="""axis : {0 or 'index', 1 or 'columns'}, default 0
+ If 0 or 'index': apply function to each column.
+ If 1 or 'columns': apply function to each row.""",
optional_by="""
by : str or list of str
Name or list of names to sort by.
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 2e5da21f573b0..243784ea84d43 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -4545,17 +4545,16 @@ def pipe(self, func, *args, **kwargs):
Parameters
----------
- func : function, string, dictionary, or list of string/functions
+ func : function, str, list or dict
Function to use for aggregating the data. If a function, must either
- work when passed a %(klass)s or when passed to %(klass)s.apply. For
- a DataFrame, can pass a dict, if the keys are DataFrame column names.
+ work when passed a %(klass)s or when passed to %(klass)s.apply.
Accepted combinations are:
- - string function name.
- - function.
- - list of functions.
- - dict of column names -> functions (or list of functions).
+ - function
+ - string function name
+ - list of functions and/or function names, e.g. ``[np.sum, 'mean']``
+ - dict of axis labels -> functions, function names or list of such.
%(axis)s
*args
Positional arguments to pass to `func`.
@@ -4564,7 +4563,11 @@ def pipe(self, func, *args, **kwargs):
Returns
-------
- aggregated : %(klass)s
+ DataFrame, Series or scalar
+ if DataFrame.agg is called with a single function, returns a Series
+ if DataFrame.agg is called with several functions, returns a DataFrame
+            if Series.agg is called with a single function, returns a scalar
+ if Series.agg is called with several functions, returns a Series
Notes
-----
@@ -4574,50 +4577,71 @@ def pipe(self, func, *args, **kwargs):
""")
_shared_docs['transform'] = ("""
- Call function producing a like-indexed %(klass)s
- and return a %(klass)s with the transformed values
+        Call ``func`` on self, producing a %(klass)s with transformed values
+        that has the same axis length as self.
.. versionadded:: 0.20.0
Parameters
----------
- func : callable, string, dictionary, or list of string/callables
- To apply to column
+ func : function, str, list or dict
+ Function to use for transforming the data. If a function, must either
+ work when passed a %(klass)s or when passed to %(klass)s.apply.
- Accepted Combinations are:
+ Accepted combinations are:
- - string function name
- function
- - list of functions
- - dict of column names -> functions (or list of functions)
+ - string function name
+        - list of functions and/or function names, e.g. ``[np.exp, 'sqrt']``
+ - dict of axis labels -> functions, function names or list of such.
+ %(axis)s
+ *args
+ Positional arguments to pass to `func`.
+ **kwargs
+ Keyword arguments to pass to `func`.
Returns
-------
- transformed : %(klass)s
+ %(klass)s
+ A %(klass)s that must have the same length as self.
- Examples
+ Raises
+ ------
+ ValueError : If the returned %(klass)s has a different length than self.
+
+ See Also
--------
- >>> df = pd.DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C'],
- ... index=pd.date_range('1/1/2000', periods=10))
- df.iloc[3:7] = np.nan
-
- >>> df.transform(lambda x: (x - x.mean()) / x.std())
- A B C
- 2000-01-01 0.579457 1.236184 0.123424
- 2000-01-02 0.370357 -0.605875 -1.231325
- 2000-01-03 1.455756 -0.277446 0.288967
- 2000-01-04 NaN NaN NaN
- 2000-01-05 NaN NaN NaN
- 2000-01-06 NaN NaN NaN
- 2000-01-07 NaN NaN NaN
- 2000-01-08 -0.498658 1.274522 1.642524
- 2000-01-09 -0.540524 -1.012676 -0.828968
- 2000-01-10 -1.366388 -0.614710 0.005378
-
- See also
+ %(klass)s.agg : Only perform aggregating type operations.
+ %(klass)s.apply : Invoke function on a %(klass)s.
+
+ Examples
--------
- pandas.%(klass)s.aggregate
- pandas.%(klass)s.apply
+ >>> df = pd.DataFrame({'A': range(3), 'B': range(1, 4)})
+ >>> df
+ A B
+ 0 0 1
+ 1 1 2
+ 2 2 3
+ >>> df.transform(lambda x: x + 1)
+ A B
+ 0 1 2
+ 1 2 3
+ 2 3 4
+
+ Even though the resulting %(klass)s must have the same length as the
+ input %(klass)s, it is possible to provide several input functions:
+
+ >>> s = pd.Series(range(3))
+ >>> s
+ 0 0
+ 1 1
+ 2 2
+ dtype: int64
+ >>> s.transform([np.sqrt, np.exp])
+ sqrt exp
+ 0 0.000000 1.000000
+ 1 1.000000 2.718282
+ 2 1.414214 7.389056
""")
# ----------------------------------------------------------------------
@@ -9401,7 +9425,7 @@ def ewm(self, com=None, span=None, halflife=None, alpha=None,
cls.ewm = ewm
- @Appender(_shared_docs['transform'] % _shared_doc_kwargs)
+ @Appender(_shared_docs['transform'] % dict(axis="", **_shared_doc_kwargs))
def transform(self, func, *args, **kwargs):
result = self.agg(func, *args, **kwargs)
if is_scalar(result) or len(result) != len(self):
diff --git a/pandas/core/series.py b/pandas/core/series.py
index a4d403e4bcd94..654ba01bc7897 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -89,10 +89,8 @@
_shared_doc_kwargs = dict(
axes='index', klass='Series', axes_single_arg="{0 or 'index'}",
- axis="""
- axis : {0 or 'index'}
- Parameter needed for compatibility with DataFrame.
- """,
+ axis="""axis : {0 or 'index'}
+ Parameter needed for compatibility with DataFrame.""",
inplace="""inplace : boolean, default False
If True, performs operation inplace and returns None.""",
unique='np.ndarray', duplicated='Series',
@@ -3098,6 +3096,12 @@ def aggregate(self, func, axis=0, *args, **kwargs):
agg = aggregate
+ @Appender(generic._shared_docs['transform'] % _shared_doc_kwargs)
+ def transform(self, func, axis=0, *args, **kwargs):
+ # Validate the axis parameter
+ self._get_axis_number(axis)
+ return super(Series, self).transform(func, *args, **kwargs)
+
def apply(self, func, convert_dtype=True, args=(), **kwds):
"""
Invoke function on values of Series. Can be ufunc (a NumPy function
| Since #21224, operations using ``axis=1`` in df.aggregate and df.transform now work the same as when axis=0.
This PR updates the methods' doc strings to reflect the new reality. For example, we can now pass a dict to DataFrame.agg/transform when ``axis=1`` also, and ``DataFrame.transform`` now has an ``axis`` parameter.
There's a minor API change, as ``Series.transform`` should have an ``axis=0`` parameter to match the API of ``Series.aggregate``.
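A minimal sketch of the updated surface (expected outputs are taken from the examples added to the docstrings in this diff; the `axis=1` call assumes the post-#21224 row-wise behavior):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': range(3), 'B': range(1, 4)})

# axis=1 applies the function to each row; for an elementwise func the
# result matches the default axis=0 call
df.transform(lambda x: x + 1, axis=1)
#    A  B
# 0  1  2
# 1  2  3
# 2  3  4

# Series.transform now accepts axis=0 purely for symmetry with aggregate
pd.Series(range(3)).transform([np.sqrt, np.exp], axis=0)
#        sqrt       exp
# 0  0.000000  1.000000
# 1  1.000000  2.718282
# 2  1.414214  7.389056
```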
Also some related minor clarifications. | https://api.github.com/repos/pandas-dev/pandas/pulls/22641 | 2018-09-08T21:31:34Z | 2018-09-18T12:18:09Z | 2018-09-18T12:18:08Z | 2018-09-20T21:12:02Z |
BUG: df.sort_values() not respecting na_position with categoricals #22556 | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 16f0b9ee99909..351dc363c9550 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -788,6 +788,7 @@ Categorical
^^^^^^^^^^^
- Bug in :meth:`Categorical.from_codes` where ``NaN`` values in ``codes`` were silently converted to ``0`` (:issue:`21767`). In the future this will raise a ``ValueError``. Also changes the behavior of ``.from_codes([1.1, 2.0])``.
+- Bug in :meth:`Categorical.sort_values` where ``NaN`` values were always positioned in front regardless of ``na_position`` value. (:issue:`22556`).
- Bug when indexing with a boolean-valued ``Categorical``. Now a boolean-valued ``Categorical`` is treated as a boolean mask (:issue:`22665`)
- Constructing a :class:`CategoricalIndex` with empty values and boolean categories was raising a ``ValueError`` after a change to dtype coercion (:issue:`22702`).
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 79070bbbfd11a..8735284617f31 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -45,6 +45,8 @@
import pandas.core.algorithms as algorithms
+from pandas.core.sorting import nargsort
+
from pandas.io.formats import console
from pandas.io.formats.terminal import get_terminal_size
from pandas.util._validators import validate_bool_kwarg, validate_fillna_kwargs
@@ -1605,32 +1607,15 @@ def sort_values(self, inplace=False, ascending=True, na_position='last'):
msg = 'invalid na_position: {na_position!r}'
raise ValueError(msg.format(na_position=na_position))
- codes = np.sort(self._codes)
- if not ascending:
- codes = codes[::-1]
-
- # NaN handling
- na_mask = (codes == -1)
- if na_mask.any():
- n_nans = len(codes[na_mask])
- if na_position == "first":
- # in this case sort to the front
- new_codes = codes.copy()
- new_codes[0:n_nans] = -1
- new_codes[n_nans:] = codes[~na_mask]
- codes = new_codes
- elif na_position == "last":
- # ... and to the end
- new_codes = codes.copy()
- pos = len(codes) - n_nans
- new_codes[0:pos] = codes[~na_mask]
- new_codes[pos:] = -1
- codes = new_codes
+ sorted_idx = nargsort(self,
+ ascending=ascending,
+ na_position=na_position)
+
if inplace:
- self._codes = codes
- return
+ self._codes = self._codes[sorted_idx]
else:
- return self._constructor(values=codes, dtype=self.dtype,
+ return self._constructor(values=self._codes[sorted_idx],
+ dtype=self.dtype,
fastpath=True)
def _values_for_rank(self):
diff --git a/pandas/core/sorting.py b/pandas/core/sorting.py
index 5aa9ea658482b..ee1c62f3decf9 100644
--- a/pandas/core/sorting.py
+++ b/pandas/core/sorting.py
@@ -241,7 +241,19 @@ def nargsort(items, kind='quicksort', ascending=True, na_position='last'):
# specially handle Categorical
if is_categorical_dtype(items):
- return items.argsort(ascending=ascending, kind=kind)
+ if na_position not in {'first', 'last'}:
+ raise ValueError('invalid na_position: {!r}'.format(na_position))
+
+ mask = isna(items)
+ cnt_null = mask.sum()
+ sorted_idx = items.argsort(ascending=ascending, kind=kind)
+ if ascending and na_position == 'last':
+ # NaN is coded as -1 and is listed in front after sorting
+ sorted_idx = np.roll(sorted_idx, -cnt_null)
+ elif not ascending and na_position == 'first':
+ # NaN is coded as -1 and is listed in the end after sorting
+ sorted_idx = np.roll(sorted_idx, cnt_null)
+ return sorted_idx
items = np.asanyarray(items)
idx = np.arange(len(items))
diff --git a/pandas/tests/frame/test_sorting.py b/pandas/tests/frame/test_sorting.py
index 599ae683f914b..41b11d9c15f35 100644
--- a/pandas/tests/frame/test_sorting.py
+++ b/pandas/tests/frame/test_sorting.py
@@ -10,7 +10,7 @@
from pandas.compat import lrange
from pandas.api.types import CategoricalDtype
from pandas import (DataFrame, Series, MultiIndex, Timestamp,
- date_range, NaT, IntervalIndex)
+ date_range, NaT, IntervalIndex, Categorical)
from pandas.util.testing import assert_series_equal, assert_frame_equal
@@ -161,7 +161,7 @@ def test_sort_nan(self):
'B': [5, 9, 2, nan, 5, 5, 4]},
index=[2, 0, 3, 1, 6, 4, 5])
sorted_df = df.sort_values(['A', 'B'], ascending=[
- 1, 0], na_position='first')
+ 1, 0], na_position='first')
assert_frame_equal(sorted_df, expected)
# na_position='last', not order
@@ -170,7 +170,7 @@ def test_sort_nan(self):
'B': [4, 5, 5, nan, 2, 9, 5]},
index=[5, 4, 6, 1, 3, 0, 2])
sorted_df = df.sort_values(['A', 'B'], ascending=[
- 0, 1], na_position='last')
+ 0, 1], na_position='last')
assert_frame_equal(sorted_df, expected)
# Test DataFrame with nan label
@@ -514,7 +514,7 @@ def test_sort_index_categorical_index(self):
df = (DataFrame({'A': np.arange(6, dtype='int64'),
'B': Series(list('aabbca'))
- .astype(CategoricalDtype(list('cab')))})
+ .astype(CategoricalDtype(list('cab')))})
.set_index('B'))
result = df.sort_index()
@@ -598,3 +598,81 @@ def test_sort_index_intervalindex(self):
closed='right')
result = result.columns.levels[1].categories
tm.assert_index_equal(result, expected)
+
+ def test_sort_index_na_position_with_categories(self):
+ # GH 22556
+ # Positioning missing value properly when column is Categorical.
+ categories = ['A', 'B', 'C']
+ category_indices = [0, 2, 4]
+ list_of_nans = [np.nan, np.nan]
+ na_indices = [1, 3]
+ na_position_first = 'first'
+ na_position_last = 'last'
+ column_name = 'c'
+
+ reversed_categories = sorted(categories, reverse=True)
+ reversed_category_indices = sorted(category_indices, reverse=True)
+ reversed_na_indices = sorted(na_indices, reverse=True)
+
+ df = pd.DataFrame({
+ column_name: pd.Categorical(['A', np.nan, 'B', np.nan, 'C'],
+ categories=categories,
+ ordered=True)})
+ # sort ascending with na first
+ result = df.sort_values(by=column_name,
+ ascending=True,
+ na_position=na_position_first)
+ expected = DataFrame({
+ column_name: Categorical(list_of_nans + categories,
+ categories=categories,
+ ordered=True)
+ }, index=na_indices + category_indices)
+
+ assert_frame_equal(result, expected)
+
+ # sort ascending with na last
+ result = df.sort_values(by=column_name,
+ ascending=True,
+ na_position=na_position_last)
+ expected = DataFrame({
+ column_name: Categorical(categories + list_of_nans,
+ categories=categories,
+ ordered=True)
+ }, index=category_indices + na_indices)
+
+ assert_frame_equal(result, expected)
+
+ # sort descending with na first
+ result = df.sort_values(by=column_name,
+ ascending=False,
+ na_position=na_position_first)
+ expected = DataFrame({
+ column_name: Categorical(list_of_nans + reversed_categories,
+ categories=categories,
+ ordered=True)
+ }, index=reversed_na_indices + reversed_category_indices)
+
+ assert_frame_equal(result, expected)
+
+ # sort descending with na last
+ result = df.sort_values(by=column_name,
+ ascending=False,
+ na_position=na_position_last)
+ expected = DataFrame({
+ column_name: Categorical(reversed_categories + list_of_nans,
+ categories=categories,
+ ordered=True)
+ }, index=reversed_category_indices + reversed_na_indices)
+
+ assert_frame_equal(result, expected)
+
+ def test_sort_index_na_position_with_categories_raises(self):
+ df = pd.DataFrame({
+ 'c': pd.Categorical(['A', np.nan, 'B', np.nan, 'C'],
+ categories=['A', 'B', 'C'],
+ ordered=True)})
+
+ with pytest.raises(ValueError):
+ df.sort_values(by='c',
+ ascending=False,
+ na_position='bad_position')
diff --git a/pandas/tests/io/test_stata.py b/pandas/tests/io/test_stata.py
index 303d3a3d8dbe9..7b9e23fca59aa 100644
--- a/pandas/tests/io/test_stata.py
+++ b/pandas/tests/io/test_stata.py
@@ -995,7 +995,7 @@ def test_categorical_sorting(self, file):
parsed = read_stata(getattr(self, file))
# Sort based on codes, not strings
- parsed = parsed.sort_values("srh")
+ parsed = parsed.sort_values("srh", na_position='first')
# Don't sort index
parsed.index = np.arange(parsed.shape[0])
| - [x] closes #22556
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/22640 | 2018-09-08T20:51:57Z | 2018-10-18T16:07:26Z | 2018-10-18T16:07:26Z | 2018-10-18T23:32:03Z |
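For illustration only, a standalone sketch of the `np.roll` trick the patch adds to `nargsort`: missing values in a `Categorical` are coded as `-1`, so an ascending argsort puts them first, and rotating the index array by the null count moves them to the end for `na_position='last'`. The toy `codes` array is hypothetical:

```python
import numpy as np

codes = np.array([0, -1, 1, -1, 2])  # -1 marks NaN in a Categorical's codes
cnt_null = (codes == -1).sum()

sorted_idx = codes.argsort(kind="quicksort")
print(sorted_idx)                      # [1 3 0 2 4] -- the two NaNs lead

# na_position='last': rotate the leading NaN positions to the tail
print(np.roll(sorted_idx, -cnt_null))  # [0 2 4 1 3]
```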
pythonize cython code | diff --git a/.coveragerc b/.coveragerc
deleted file mode 100644
index 13baa100b84b7..0000000000000
--- a/.coveragerc
+++ /dev/null
@@ -1,30 +0,0 @@
-# .coveragerc to control coverage.py
-[run]
-branch = False
-omit = */tests/*
-plugins = Cython.Coverage
-
-[report]
-# Regexes for lines to exclude from consideration
-exclude_lines =
- # Have to re-enable the standard pragma
- pragma: no cover
-
- # Don't complain about missing debug-only code:
- def __repr__
- if self\.debug
-
- # Don't complain if tests don't hit defensive assertion code:
- raise AssertionError
- raise NotImplementedError
- AbstractMethodError
-
- # Don't complain if non-runnable code isn't run:
- if 0:
- if __name__ == .__main__.:
-
-ignore_errors = False
-show_missing = True
-
-[html]
-directory = coverage_html_report
diff --git a/pandas/_libs/algos.pyx b/pandas/_libs/algos.pyx
index 249033b8636bd..415e7026e09c8 100644
--- a/pandas/_libs/algos.pyx
+++ b/pandas/_libs/algos.pyx
@@ -1,7 +1,7 @@
# -*- coding: utf-8 -*-
-cimport cython
-from cython cimport Py_ssize_t
+import cython
+from cython import Py_ssize_t
from libc.stdlib cimport malloc, free
from libc.string cimport memmove
@@ -114,7 +114,7 @@ cpdef ndarray[int64_t, ndim=1] unique_deltas(ndarray[int64_t] arr):
@cython.wraparound(False)
@cython.boundscheck(False)
-def is_lexsorted(list list_of_arrays):
+def is_lexsorted(list_of_arrays: list) -> bint:
cdef:
Py_ssize_t i
Py_ssize_t n, nlevels
diff --git a/pandas/_libs/hashing.pyx b/pandas/_libs/hashing.pyx
index 88b4d97de492c..c2305c8f3ff00 100644
--- a/pandas/_libs/hashing.pyx
+++ b/pandas/_libs/hashing.pyx
@@ -3,7 +3,6 @@
# at https://github.com/veorq/SipHash
import cython
-from cpython cimport PyBytes_Check, PyUnicode_Check
from libc.stdlib cimport malloc, free
import numpy as np
@@ -44,6 +43,7 @@ def hash_object_array(object[:] arr, object key, object encoding='utf8'):
char **vecs
char *cdata
object val
+ list datas = []
k = <bytes>key.encode(encoding)
kb = <uint8_t *>k
@@ -57,12 +57,11 @@ def hash_object_array(object[:] arr, object key, object encoding='utf8'):
vecs = <char **> malloc(n * sizeof(char *))
lens = <uint64_t*> malloc(n * sizeof(uint64_t))
- cdef list datas = []
for i in range(n):
val = arr[i]
- if PyBytes_Check(val):
+ if isinstance(val, bytes):
data = <bytes>val
- elif PyUnicode_Check(val):
+ elif isinstance(val, unicode):
data = <bytes>val.encode(encoding)
elif val is None or is_nan(val):
# null, stringify and encode
@@ -132,15 +131,6 @@ cdef inline void _sipround(uint64_t* v0, uint64_t* v1,
v2[0] = _rotl(v2[0], 32)
-# TODO: This appears unused; remove?
-cpdef uint64_t siphash(bytes data, bytes key) except? 0:
- if len(key) != 16:
- raise ValueError("key should be a 16-byte bytestring, "
- "got {key} (len {klen})"
- .format(key=key, klen=len(key)))
- return low_level_siphash(data, len(data), key)
-
-
@cython.cdivision(True)
cdef uint64_t low_level_siphash(uint8_t* data, size_t datalen,
uint8_t* key) nogil:
diff --git a/pandas/_libs/index.pyx b/pandas/_libs/index.pyx
index d5846f2b42378..562c1ba218141 100644
--- a/pandas/_libs/index.pyx
+++ b/pandas/_libs/index.pyx
@@ -1,10 +1,7 @@
# -*- coding: utf-8 -*-
from datetime import datetime, timedelta, date
-cimport cython
-
-from cpython cimport PyTuple_Check, PyList_Check
-from cpython.slice cimport PySlice_Check
+import cython
import numpy as np
cimport numpy as cnp
@@ -30,15 +27,15 @@ cdef int64_t iNaT = util.get_nat()
cdef inline bint is_definitely_invalid_key(object val):
- if PyTuple_Check(val):
+ if isinstance(val, tuple):
try:
hash(val)
except TypeError:
return True
# we have a _data, means we are a NDFrame
- return (PySlice_Check(val) or util.is_array(val)
- or PyList_Check(val) or hasattr(val, '_data'))
+ return (isinstance(val, slice) or util.is_array(val)
+ or isinstance(val, list) or hasattr(val, '_data'))
cpdef get_value_at(ndarray arr, object loc, object tz=None):
@@ -88,7 +85,7 @@ cdef class IndexEngine:
void* data_ptr
loc = self.get_loc(key)
- if PySlice_Check(loc) or util.is_array(loc):
+ if isinstance(loc, slice) or util.is_array(loc):
return arr[loc]
else:
return get_value_at(arr, loc, tz=tz)
@@ -640,7 +637,7 @@ cdef class BaseMultiIndexCodesEngine:
def get_loc(self, object key):
if is_definitely_invalid_key(key):
raise TypeError("'{key}' is an invalid key".format(key=key))
- if not PyTuple_Check(key):
+ if not isinstance(key, tuple):
raise KeyError(key)
try:
indices = [0 if checknull(v) else lev.get_loc(v) + 1
diff --git a/pandas/_libs/internals.pyx b/pandas/_libs/internals.pyx
index 996570dae3302..681530ed494d7 100644
--- a/pandas/_libs/internals.pyx
+++ b/pandas/_libs/internals.pyx
@@ -1,10 +1,9 @@
# -*- coding: utf-8 -*-
-cimport cython
-from cython cimport Py_ssize_t
+import cython
+from cython import Py_ssize_t
from cpython cimport PyObject
-from cpython.slice cimport PySlice_Check
cdef extern from "Python.h":
Py_ssize_t PY_SSIZE_T_MAX
@@ -30,14 +29,15 @@ cdef class BlockPlacement:
cdef bint _has_slice, _has_array, _is_known_slice_like
def __init__(self, val):
- cdef slice slc
+ cdef:
+ slice slc
self._as_slice = None
self._as_array = None
self._has_slice = False
self._has_array = False
- if PySlice_Check(val):
+ if isinstance(val, slice):
slc = slice_canonize(val)
if slc.start != slc.stop:
@@ -55,7 +55,8 @@ cdef class BlockPlacement:
self._has_array = True
def __str__(self):
- cdef slice s = self._ensure_has_slice()
+ cdef:
+ slice s = self._ensure_has_slice()
if s is not None:
v = self._as_slice
else:
@@ -66,15 +67,17 @@ cdef class BlockPlacement:
__repr__ = __str__
def __len__(self):
- cdef slice s = self._ensure_has_slice()
+ cdef:
+ slice s = self._ensure_has_slice()
if s is not None:
return slice_len(s)
else:
return len(self._as_array)
def __iter__(self):
- cdef slice s = self._ensure_has_slice()
- cdef Py_ssize_t start, stop, step, _
+ cdef:
+ slice s = self._ensure_has_slice()
+ Py_ssize_t start, stop, step, _
if s is not None:
start, stop, step, _ = slice_get_indices_ex(s)
return iter(range(start, stop, step))
@@ -83,7 +86,8 @@ cdef class BlockPlacement:
@property
def as_slice(self):
- cdef slice s = self._ensure_has_slice()
+ cdef:
+ slice s = self._ensure_has_slice()
if s is None:
raise TypeError('Not slice-like')
else:
@@ -91,7 +95,8 @@ cdef class BlockPlacement:
@property
def indexer(self):
- cdef slice s = self._ensure_has_slice()
+ cdef:
+ slice s = self._ensure_has_slice()
if s is not None:
return s
else:
@@ -103,7 +108,8 @@ cdef class BlockPlacement:
@property
def as_array(self):
- cdef Py_ssize_t start, stop, end, _
+ cdef:
+ Py_ssize_t start, stop, end, _
if not self._has_array:
start, stop, step, _ = slice_get_indices_ex(self._as_slice)
self._as_array = np.arange(start, stop, step,
@@ -113,17 +119,19 @@ cdef class BlockPlacement:
@property
def is_slice_like(self):
- cdef slice s = self._ensure_has_slice()
+ cdef:
+ slice s = self._ensure_has_slice()
return s is not None
def __getitem__(self, loc):
- cdef slice s = self._ensure_has_slice()
+ cdef:
+ slice s = self._ensure_has_slice()
if s is not None:
val = slice_getitem(s, loc)
else:
val = self._as_array[loc]
- if not PySlice_Check(val) and val.ndim == 0:
+ if not isinstance(val, slice) and val.ndim == 0:
return val
return BlockPlacement(val)
@@ -139,8 +147,9 @@ cdef class BlockPlacement:
[o.as_array for o in others]))
cdef iadd(self, other):
- cdef slice s = self._ensure_has_slice()
- cdef Py_ssize_t other_int, start, stop, step, l
+ cdef:
+ slice s = self._ensure_has_slice()
+ Py_ssize_t other_int, start, stop, step, l
if isinstance(other, int) and s is not None:
other_int = <Py_ssize_t>other
@@ -184,7 +193,7 @@ cdef class BlockPlacement:
return self._as_slice
-cdef slice_canonize(slice s):
+cdef slice slice_canonize(slice s):
"""
Convert slice to canonical bounded form.
"""
@@ -282,7 +291,7 @@ def slice_getitem(slice slc not None, ind):
s_start, s_stop, s_step, s_len = slice_get_indices_ex(slc)
- if PySlice_Check(ind):
+ if isinstance(ind, slice):
ind_start, ind_stop, ind_step, ind_len = slice_get_indices_ex(ind,
s_len)
diff --git a/pandas/_libs/interval.pyx b/pandas/_libs/interval.pyx
index d8e2e8eb4b4ea..82261094022fb 100644
--- a/pandas/_libs/interval.pyx
+++ b/pandas/_libs/interval.pyx
@@ -271,7 +271,7 @@ cdef class Interval(IntervalMixin):
return ((self.left < key if self.open_left else self.left <= key) and
(key < self.right if self.open_right else key <= self.right))
- def __richcmp__(self, other, int op):
+ def __richcmp__(self, other, op: int):
if hasattr(other, 'ndim'):
# let numpy (or IntervalIndex) handle vectorization
return NotImplemented
diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index 6b425d7022ecd..0b9793a6ef97a 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -2,14 +2,10 @@
from decimal import Decimal
import sys
-cimport cython
-from cython cimport Py_ssize_t
+import cython
+from cython import Py_ssize_t
from cpython cimport (Py_INCREF, PyTuple_SET_ITEM,
- PyList_Check,
- PyString_Check,
- PyBytes_Check,
- PyUnicode_Check,
PyTuple_New,
Py_EQ,
PyObject_RichCompareBool)
@@ -91,13 +87,14 @@ def values_from_object(object obj):
@cython.wraparound(False)
@cython.boundscheck(False)
-def memory_usage_of_objects(object[:] arr):
+def memory_usage_of_objects(arr: object[:]) -> int64_t:
""" return the memory usage of an object array in bytes,
does not include the actual bytes of the pointers """
- cdef:
- Py_ssize_t i, n
- int64_t size = 0
+ i: Py_ssize_t
+ n: Py_ssize_t
+ size: int64_t
+ size = 0
n = len(arr)
for i in range(n):
size += arr[i].__sizeof__()
@@ -127,7 +124,7 @@ def is_scalar(val: object) -> bint:
return (cnp.PyArray_IsAnyScalar(val)
# As of numpy-1.9, PyArray_IsAnyScalar misses bytearrays on Py3.
- or PyBytes_Check(val)
+ or isinstance(val, bytes)
# We differ from numpy (as of 1.10), which claims that None is
# not scalar in np.isscalar().
or val is None
@@ -140,7 +137,7 @@ def is_scalar(val: object) -> bint:
or util.is_offset_object(val))
-def item_from_zerodim(object val):
+def item_from_zerodim(val: object) -> object:
"""
If the value is a zerodim array, return the item it contains.
@@ -359,7 +356,7 @@ def get_reverse_indexer(ndarray[int64_t] indexer, Py_ssize_t length):
return rev_indexer
-def has_infs_f4(ndarray[float32_t] arr):
+def has_infs_f4(ndarray[float32_t] arr) -> bint:
cdef:
Py_ssize_t i, n = len(arr)
float32_t inf, neginf, val
@@ -374,7 +371,7 @@ def has_infs_f4(ndarray[float32_t] arr):
return False
-def has_infs_f8(ndarray[float64_t] arr):
+def has_infs_f8(ndarray[float64_t] arr) -> bint:
cdef:
Py_ssize_t i, n = len(arr)
float64_t inf, neginf, val
@@ -530,7 +527,8 @@ def clean_index_list(list obj):
for i in range(n):
v = obj[i]
- if not (PyList_Check(v) or util.is_array(v) or hasattr(v, '_data')):
+ if not (isinstance(v, list) or
+ util.is_array(v) or hasattr(v, '_data')):
all_arrays = 0
break
@@ -1120,7 +1118,7 @@ def infer_dtype(object value, bint skipna=False):
.format(typ=type(value)))
else:
- if not PyList_Check(value):
+ if not isinstance(value, list):
value = list(value)
from pandas.core.dtypes.cast import (
construct_1d_object_array_from_listlike)
@@ -1209,15 +1207,15 @@ def infer_dtype(object value, bint skipna=False):
if is_bool_array(values, skipna=skipna):
return 'boolean'
- elif PyString_Check(val):
+ elif isinstance(val, str):
if is_string_array(values, skipna=skipna):
return 'string'
- elif PyUnicode_Check(val):
+ elif isinstance(val, unicode):
if is_unicode_array(values, skipna=skipna):
return 'unicode'
- elif PyBytes_Check(val):
+ elif isinstance(val, bytes):
if is_bytes_array(values, skipna=skipna):
return 'bytes'
@@ -1474,7 +1472,7 @@ cpdef bint is_float_array(ndarray values):
cdef class StringValidator(Validator):
cdef inline bint is_value_typed(self, object value) except -1:
- return PyString_Check(value)
+ return isinstance(value, str)
cdef inline bint is_array_typed(self) except -1:
return issubclass(self.dtype.type, np.str_)
@@ -1490,7 +1488,7 @@ cpdef bint is_string_array(ndarray values, bint skipna=False):
cdef class UnicodeValidator(Validator):
cdef inline bint is_value_typed(self, object value) except -1:
- return PyUnicode_Check(value)
+ return isinstance(value, unicode)
cdef inline bint is_array_typed(self) except -1:
return issubclass(self.dtype.type, np.unicode_)
@@ -1506,7 +1504,7 @@ cdef bint is_unicode_array(ndarray values, bint skipna=False):
cdef class BytesValidator(Validator):
cdef inline bint is_value_typed(self, object value) except -1:
- return PyBytes_Check(value)
+ return isinstance(value, bytes)
cdef inline bint is_array_typed(self) except -1:
return issubclass(self.dtype.type, np.bytes_)
diff --git a/pandas/_libs/missing.pyx b/pandas/_libs/missing.pyx
index c787cc61e8773..2590a30c57f33 100644
--- a/pandas/_libs/missing.pyx
+++ b/pandas/_libs/missing.pyx
@@ -1,9 +1,7 @@
# -*- coding: utf-8 -*-
-from cpython cimport PyFloat_Check, PyComplex_Check
-
-cimport cython
-from cython cimport Py_ssize_t
+import cython
+from cython import Py_ssize_t
import numpy as np
cimport numpy as cnp
@@ -23,8 +21,9 @@ cdef int64_t NPY_NAT = util.get_nat()
cdef inline bint _check_all_nulls(object val):
""" utility to check if a value is any type of null """
- cdef bint res
- if PyFloat_Check(val) or PyComplex_Check(val):
+ res: bint
+
+ if isinstance(val, (float, complex)):
res = val != val
elif val is NaT:
res = 1
@@ -117,7 +116,7 @@ cpdef bint checknull_old(object val):
cdef inline bint _check_none_nan_inf_neginf(object val):
try:
- return val is None or (PyFloat_Check(val) and
+ return val is None or (isinstance(val, float) and
(val != val or val == INF or val == NEGINF))
except ValueError:
return False
diff --git a/pandas/_libs/ops.pyx b/pandas/_libs/ops.pyx
index a194f1588e231..e21bce177b38b 100644
--- a/pandas/_libs/ops.pyx
+++ b/pandas/_libs/ops.pyx
@@ -1,12 +1,11 @@
# -*- coding: utf-8 -*-
import operator
-from cpython cimport (PyFloat_Check, PyBool_Check,
- PyObject_RichCompareBool,
+from cpython cimport (PyObject_RichCompareBool,
Py_EQ, Py_NE, Py_LT, Py_LE, Py_GT, Py_GE)
-cimport cython
-from cython cimport Py_ssize_t
+import cython
+from cython import Py_ssize_t
import numpy as np
from numpy cimport ndarray, uint8_t, import_array
@@ -272,7 +271,7 @@ def maybe_convert_bool(ndarray[object] arr,
for i in range(n):
val = arr[i]
- if PyBool_Check(val):
+ if isinstance(val, bool):
if val is True:
result[i] = 1
else:
@@ -281,7 +280,7 @@ def maybe_convert_bool(ndarray[object] arr,
result[i] = 1
elif val in false_vals:
result[i] = 0
- elif PyFloat_Check(val):
+ elif isinstance(val, float):
result[i] = UINT8_MAX
na_count += 1
else:
diff --git a/pandas/_libs/parsers.pyx b/pandas/_libs/parsers.pyx
index 91faed678192f..e3df391c5c45d 100644
--- a/pandas/_libs/parsers.pyx
+++ b/pandas/_libs/parsers.pyx
@@ -10,12 +10,12 @@ from csv import QUOTE_MINIMAL, QUOTE_NONNUMERIC, QUOTE_NONE
from libc.stdlib cimport free
from libc.string cimport strncpy, strlen, strcasecmp
-cimport cython
-from cython cimport Py_ssize_t
+import cython
+from cython import Py_ssize_t
from cpython cimport (PyObject, PyBytes_FromString,
- PyBytes_AsString, PyBytes_Check,
- PyUnicode_Check, PyUnicode_AsUTF8String,
+ PyBytes_AsString,
+ PyUnicode_AsUTF8String,
PyErr_Occurred, PyErr_Fetch)
from cpython.ref cimport Py_XDECREF
@@ -1341,9 +1341,9 @@ cdef object _false_values = [b'False', b'FALSE', b'false']
def _ensure_encoded(list lst):
cdef list result = []
for x in lst:
- if PyUnicode_Check(x):
+ if isinstance(x, unicode):
x = PyUnicode_AsUTF8String(x)
- elif not PyBytes_Check(x):
+ elif not isinstance(x, bytes):
x = asbytes(x)
result.append(x)
@@ -2046,7 +2046,7 @@ cdef kh_str_t* kset_from_list(list values) except NULL:
val = values[i]
# None creeps in sometimes, which isn't possible here
- if not PyBytes_Check(val):
+ if not isinstance(val, bytes):
raise ValueError('Must be all encoded bytes')
k = kh_put_str(table, PyBytes_AsString(val), &ret)
diff --git a/pandas/_libs/properties.pyx b/pandas/_libs/properties.pyx
index 0f2900619fdb6..6e4c0c62b0dd8 100644
--- a/pandas/_libs/properties.pyx
+++ b/pandas/_libs/properties.pyx
@@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-
-from cython cimport Py_ssize_t
+from cython import Py_ssize_t
from cpython cimport (
PyDict_Contains, PyDict_GetItem, PyDict_SetItem)
diff --git a/pandas/_libs/reduction.pyx b/pandas/_libs/reduction.pyx
index d87a590730fd6..681ea2c6295f2 100644
--- a/pandas/_libs/reduction.pyx
+++ b/pandas/_libs/reduction.pyx
@@ -1,7 +1,7 @@
# -*- coding: utf-8 -*-
from distutils.version import LooseVersion
-from cython cimport Py_ssize_t
+from cython import Py_ssize_t
from cpython cimport Py_INCREF
from libc.stdlib cimport malloc, free
diff --git a/pandas/_libs/sparse.pyx b/pandas/_libs/sparse.pyx
index 7f5990ce5d65c..2993114a668bb 100644
--- a/pandas/_libs/sparse.pyx
+++ b/pandas/_libs/sparse.pyx
@@ -2,7 +2,7 @@
import operator
import sys
-cimport cython
+import cython
import numpy as np
cimport numpy as cnp
diff --git a/pandas/_libs/testing.pyx b/pandas/_libs/testing.pyx
index ab7f3c3de2131..10f68187938c0 100644
--- a/pandas/_libs/testing.pyx
+++ b/pandas/_libs/testing.pyx
@@ -22,24 +22,30 @@ cdef NUMERIC_TYPES = (
np.float64,
)
+
cdef bint is_comparable_as_number(obj):
return isinstance(obj, NUMERIC_TYPES)
+
cdef bint isiterable(obj):
return hasattr(obj, '__iter__')
+
cdef bint has_length(obj):
return hasattr(obj, '__len__')
+
cdef bint is_dictlike(obj):
return hasattr(obj, 'keys') and hasattr(obj, '__getitem__')
+
cdef bint decimal_almost_equal(double desired, double actual, int decimal):
# Code from
# http://docs.scipy.org/doc/numpy/reference/generated
# /numpy.testing.assert_almost_equal.html
return abs(desired - actual) < (0.5 * 10.0 ** -decimal)
+
cpdef assert_dict_equal(a, b, bint compare_keys=True):
assert is_dictlike(a) and is_dictlike(b), (
"Cannot compare dict objects, one or both is not dict-like"
@@ -56,6 +62,7 @@ cpdef assert_dict_equal(a, b, bint compare_keys=True):
return True
+
cpdef assert_almost_equal(a, b,
check_less_precise=False,
bint check_dtype=True,
diff --git a/pandas/_libs/tslib.pyx b/pandas/_libs/tslib.pyx
index 93fae695d51fd..16fea0615f199 100644
--- a/pandas/_libs/tslib.pyx
+++ b/pandas/_libs/tslib.pyx
@@ -1,7 +1,5 @@
# -*- coding: utf-8 -*-
-from cython cimport Py_ssize_t
-
-from cpython cimport PyFloat_Check, PyUnicode_Check
+from cython import Py_ssize_t
from cpython.datetime cimport (PyDateTime_Check, PyDate_Check,
PyDateTime_CheckExact,
@@ -601,7 +599,7 @@ cpdef array_to_datetime(ndarray[object] values, errors='raise',
if len(val) == 0 or val in nat_strings:
iresult[i] = NPY_NAT
continue
- if PyUnicode_Check(val) and PY2:
+ if isinstance(val, unicode) and PY2:
val = val.encode('utf-8')
try:
@@ -740,7 +738,7 @@ cpdef array_to_datetime(ndarray[object] values, errors='raise',
# set as nan except if its a NaT
if checknull_with_nat(val):
- if PyFloat_Check(val):
+ if isinstance(val, float):
oresult[i] = np.nan
else:
oresult[i] = NaT
diff --git a/pandas/_libs/tslibs/ccalendar.pyx b/pandas/_libs/tslibs/ccalendar.pyx
index ec54c023290b3..7d58b43e5d460 100644
--- a/pandas/_libs/tslibs/ccalendar.pyx
+++ b/pandas/_libs/tslibs/ccalendar.pyx
@@ -4,8 +4,8 @@
Cython implementations of functions resembling the stdlib calendar module
"""
-cimport cython
-from cython cimport Py_ssize_t
+import cython
+from cython import Py_ssize_t
from numpy cimport int64_t, int32_t
diff --git a/pandas/_libs/tslibs/conversion.pyx b/pandas/_libs/tslibs/conversion.pyx
index fe664cf03b0b9..d7eef546befbd 100644
--- a/pandas/_libs/tslibs/conversion.pyx
+++ b/pandas/_libs/tslibs/conversion.pyx
@@ -1,7 +1,7 @@
# -*- coding: utf-8 -*-
-cimport cython
-from cython cimport Py_ssize_t
+import cython
+from cython import Py_ssize_t
import numpy as np
cimport numpy as cnp
diff --git a/pandas/_libs/tslibs/fields.pyx b/pandas/_libs/tslibs/fields.pyx
index 9cbad8acabff1..684344ceb9002 100644
--- a/pandas/_libs/tslibs/fields.pyx
+++ b/pandas/_libs/tslibs/fields.pyx
@@ -4,8 +4,8 @@ Functions for accessing attributes of Timestamp/datetime64/datetime-like
objects and arrays
"""
-cimport cython
-from cython cimport Py_ssize_t
+import cython
+from cython import Py_ssize_t
import numpy as np
cimport numpy as cnp
diff --git a/pandas/_libs/tslibs/frequencies.pyx b/pandas/_libs/tslibs/frequencies.pyx
index 70a3f3f410636..c555fce9dd007 100644
--- a/pandas/_libs/tslibs/frequencies.pyx
+++ b/pandas/_libs/tslibs/frequencies.pyx
@@ -321,7 +321,7 @@ cpdef object get_freq(object freq):
# ----------------------------------------------------------------------
# Frequency comparison
-cpdef bint is_subperiod(source, target):
+def is_subperiod(source, target) -> bint:
"""
Returns True if downsampling is possible between source and target
frequencies
@@ -374,7 +374,7 @@ cpdef bint is_subperiod(source, target):
return source in {'N'}
-cpdef bint is_superperiod(source, target):
+def is_superperiod(source, target) -> bint:
"""
Returns True if upsampling is possible between source and target
frequencies
diff --git a/pandas/_libs/tslibs/nattype.pyx b/pandas/_libs/tslibs/nattype.pyx
index 08d9128ff660c..fd8486f690745 100644
--- a/pandas/_libs/tslibs/nattype.pyx
+++ b/pandas/_libs/tslibs/nattype.pyx
@@ -1,7 +1,6 @@
# -*- coding: utf-8 -*-
from cpython cimport (
- PyFloat_Check, PyComplex_Check,
PyObject_RichCompare,
Py_GT, Py_GE, Py_EQ, Py_NE, Py_LT, Py_LE)
diff --git a/pandas/_libs/tslibs/np_datetime.pyx b/pandas/_libs/tslibs/np_datetime.pyx
index f0aa6389fba56..e0ecfc24804a9 100644
--- a/pandas/_libs/tslibs/np_datetime.pyx
+++ b/pandas/_libs/tslibs/np_datetime.pyx
@@ -1,7 +1,7 @@
# -*- coding: utf-8 -*-
from cpython cimport (Py_EQ, Py_NE, Py_GE, Py_GT, Py_LT, Py_LE,
- PyUnicode_Check, PyUnicode_AsASCIIString)
+ PyUnicode_AsASCIIString)
from cpython.datetime cimport (datetime, date,
PyDateTime_IMPORT,
@@ -175,7 +175,7 @@ cdef inline int _string_to_dts(object val, npy_datetimestruct* dts,
int result
char *tmp
- if PyUnicode_Check(val):
+ if isinstance(val, unicode):
val = PyUnicode_AsASCIIString(val)
tmp = val
diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index 8c53fabffdbeb..4d611f89bca9c 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -1,7 +1,7 @@
# -*- coding: utf-8 -*-
-cimport cython
-from cython cimport Py_ssize_t
+import cython
+from cython import Py_ssize_t
import time
from cpython.datetime cimport (PyDateTime_IMPORT,
diff --git a/pandas/_libs/tslibs/parsing.pyx b/pandas/_libs/tslibs/parsing.pyx
index 6ee6c4b9d9026..3887957aeefd4 100644
--- a/pandas/_libs/tslibs/parsing.pyx
+++ b/pandas/_libs/tslibs/parsing.pyx
@@ -537,7 +537,7 @@ except (ImportError, AttributeError):
pass
-def _format_is_iso(f):
+def _format_is_iso(f) -> bint:
"""
Does format match the iso8601 set that can be handled by the C parser?
Generally of form YYYY-MM-DDTHH:MM:SS - date separator can be different
diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx
index f68b6d8fdef57..43dc415bfd464 100644
--- a/pandas/_libs/tslibs/period.pyx
+++ b/pandas/_libs/tslibs/period.pyx
@@ -2397,7 +2397,6 @@ class Period(_Period):
# ('T', 5) but may be passed in as a string like '5T'
# ordinal is the period offset from the gregorian proleptic epoch
-
cdef _Period self
if freq is not None:
@@ -2495,7 +2494,7 @@ cdef int64_t _ordinal_from_fields(int year, int month, quarter, int day,
minute, second, 0, 0, base)
-def quarter_to_myear(int year, int quarter, freq):
+def quarter_to_myear(year: int, quarter: int, freq):
"""
A quarterly frequency defines a "year" which may not coincide with
the calendar-year. Find the calendar-year and calendar-month associated
diff --git a/pandas/_libs/tslibs/resolution.pyx b/pandas/_libs/tslibs/resolution.pyx
index 4e3350395400c..4acffdea78f55 100644
--- a/pandas/_libs/tslibs/resolution.pyx
+++ b/pandas/_libs/tslibs/resolution.pyx
@@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-
-from cython cimport Py_ssize_t
+from cython import Py_ssize_t
import numpy as np
from numpy cimport ndarray, int64_t, int32_t
diff --git a/pandas/_libs/tslibs/strptime.pyx b/pandas/_libs/tslibs/strptime.pyx
index d472320cfdb12..46a1145009857 100644
--- a/pandas/_libs/tslibs/strptime.pyx
+++ b/pandas/_libs/tslibs/strptime.pyx
@@ -20,7 +20,7 @@ except:
except:
from _dummy_thread import allocate_lock as _thread_allocate_lock
-from cython cimport Py_ssize_t
+from cython import Py_ssize_t
import pytz
diff --git a/pandas/_libs/tslibs/timedeltas.pyx b/pandas/_libs/tslibs/timedeltas.pyx
index b84c1a753215a..9b13ef5982396 100644
--- a/pandas/_libs/tslibs/timedeltas.pyx
+++ b/pandas/_libs/tslibs/timedeltas.pyx
@@ -6,9 +6,9 @@ import warnings
import sys
cdef bint PY3 = (sys.version_info[0] >= 3)
-from cython cimport Py_ssize_t
+from cython import Py_ssize_t
-from cpython cimport PyUnicode_Check, Py_NE, Py_EQ, PyObject_RichCompare
+from cpython cimport Py_NE, Py_EQ, PyObject_RichCompare
import numpy as np
cimport numpy as cnp
@@ -281,7 +281,7 @@ cpdef inline int64_t cast_from_unit(object ts, object unit) except? -1:
cdef inline _decode_if_necessary(object ts):
# decode ts if necessary
- if not PyUnicode_Check(ts) and not PY3:
+ if not isinstance(ts, unicode) and not PY3:
ts = str(ts).decode('utf-8')
return ts
diff --git a/pandas/_libs/tslibs/timezones.pyx b/pandas/_libs/tslibs/timezones.pyx
index 36ec499c7335c..b7e4de81da35c 100644
--- a/pandas/_libs/tslibs/timezones.pyx
+++ b/pandas/_libs/tslibs/timezones.pyx
@@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-
-from cython cimport Py_ssize_t
+from cython import Py_ssize_t
# dateutil compat
from dateutil.tz import (
diff --git a/pandas/_libs/window.pyx b/pandas/_libs/window.pyx
index b25fb47065fdd..d4b61b8611b68 100644
--- a/pandas/_libs/window.pyx
+++ b/pandas/_libs/window.pyx
@@ -1,8 +1,8 @@
# -*- coding: utf-8 -*-
# cython: boundscheck=False, wraparound=False, cdivision=True
-cimport cython
-from cython cimport Py_ssize_t
+import cython
+from cython import Py_ssize_t
from libcpp.deque cimport deque
from libc.stdlib cimport malloc, free
diff --git a/pandas/_libs/writers.pyx b/pandas/_libs/writers.pyx
index 8e55ffad8d231..9af12cbec1e9c 100644
--- a/pandas/_libs/writers.pyx
+++ b/pandas/_libs/writers.pyx
@@ -1,7 +1,7 @@
# -*- coding: utf-8 -*-
-cimport cython
-from cython cimport Py_ssize_t
+import cython
+from cython import Py_ssize_t
from cpython cimport PyBytes_GET_SIZE, PyUnicode_GET_SIZE
@@ -36,9 +36,10 @@ def write_csv_rows(list data, ndarray data_index,
cols : ndarray
writer : object
"""
- cdef int N, j, i, ncols
- cdef list rows
- cdef object val
+ cdef:
+ int N, j, i, ncols
+ list rows
+ object val
# In crude testing, N>100 yields little marginal improvement
N = 100
@@ -157,8 +158,9 @@ def string_array_replace_from_nan_rep(
Replace the values in the array with 'replacement' if
they are 'nan_rep'. Return the same array.
"""
+ cdef:
+ int length = arr.shape[0], i = 0
- cdef int length = arr.shape[0], i = 0
if replace is None:
replace = np.nan
diff --git a/setup.cfg b/setup.cfg
index c4e3243d824e5..5fc0236066b93 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -40,3 +40,33 @@ markers =
high_memory: mark a test as a high-memory only
doctest_optionflags = NORMALIZE_WHITESPACE IGNORE_EXCEPTION_DETAIL
addopts = --strict-data-files
+
+
+[coverage:run]
+branch = False
+omit = */tests/*
+plugins = Cython.Coverage
+
+[coverage:report]
+ignore_errors = False
+show_missing = True
+# Regexes for lines to exclude from consideration
+exclude_lines =
+ # Have to re-enable the standard pragma
+ pragma: no cover
+
+ # Don't complain about missing debug-only code:
+ def __repr__
+ if self\.debug
+
+ # Don't complain if tests don't hit defensive assertion code:
+ raise AssertionError
+ raise NotImplementedError
+ AbstractMethodError
+
+ # Don't complain if non-runnable code isn't run:
+ if 0:
+ if __name__ == .__main__.:
+
+[coverage:html]
+directory = coverage_html_report
| Use Python-style type annotations instead of Cython-style in a few places.
Use Python-style isinstance checks in cases where Cython will automatically optimize them into C calls.
Part of the hope is that we can get the code close enough to valid Python that we can trick flake8 into working on it. | https://api.github.com/repos/pandas-dev/pandas/pulls/22638 | 2018-09-08T17:52:45Z | 2018-09-12T11:33:44Z | 2018-09-12T11:33:44Z | 2018-09-12T18:29:06Z
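As a rough, Python-3-only mirror of the isinstance rewrite (the real code lives in Cython, where `isinstance(x, bytes)`/`isinstance(x, unicode)` compile down to the same C-API type checks as the removed `PyBytes_Check`/`PyUnicode_Check` calls), here is a sketch of `_ensure_encoded` from `parsers.pyx`; the fallback branch substitutes a `str(x).encode` round-trip for the original `asbytes`:

```python
def ensure_encoded(lst, encoding="utf-8"):
    result = []
    for x in lst:
        if isinstance(x, str):          # was: PyUnicode_Check(x)
            x = x.encode(encoding)
        elif not isinstance(x, bytes):  # was: not PyBytes_Check(x)
            x = str(x).encode(encoding)
        result.append(x)
    return result

print(ensure_encoded(["True", b"FALSE", 0]))  # [b'True', b'FALSE', b'0']
```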
CI / BLD: Various CI Backports | diff --git a/.circleci/config.yml b/.circleci/config.yml
new file mode 100644
index 0000000000000..e947f30d285cd
--- /dev/null
+++ b/.circleci/config.yml
@@ -0,0 +1,147 @@
+version: 2
+jobs:
+
+ # --------------------------------------------------------------------------
+ # 0. py27_compat
+ # --------------------------------------------------------------------------
+ py27_compat:
+ docker:
+ - image: continuumio/miniconda:latest
+ # databases configuration
+ - image: circleci/postgres:9.6.5-alpine-ram
+ environment:
+ POSTGRES_USER: postgres
+ POSTGRES_DB: pandas_nosetest
+ - image: circleci/mysql:8-ram
+ environment:
+ MYSQL_USER: "root"
+ MYSQL_HOST: "localhost"
+ MYSQL_ALLOW_EMPTY_PASSWORD: "true"
+ MYSQL_DATABASE: "pandas_nosetest"
+ environment:
+ JOB: "2.7_COMPAT"
+ ENV_FILE: "ci/circle-27-compat.yaml"
+ LOCALE_OVERRIDE: "it_IT.UTF-8"
+ MINICONDA_DIR: /home/ubuntu/miniconda3
+ steps:
+ - checkout
+ - run:
+ name: build
+ command: |
+ ./ci/install_circle.sh
+ ./ci/show_circle.sh
+ - run:
+ name: test
+ command: ./ci/run_circle.sh --skip-slow --skip-network
+
+ # --------------------------------------------------------------------------
+ # 1. py36_locale
+ # --------------------------------------------------------------------------
+ py36_locale:
+ docker:
+ - image: continuumio/miniconda:latest
+ # databases configuration
+ - image: circleci/postgres:9.6.5-alpine-ram
+ environment:
+ POSTGRES_USER: postgres
+ POSTGRES_DB: pandas_nosetest
+ - image: circleci/mysql:8-ram
+ environment:
+ MYSQL_USER: "root"
+ MYSQL_HOST: "localhost"
+ MYSQL_ALLOW_EMPTY_PASSWORD: "true"
+ MYSQL_DATABASE: "pandas_nosetest"
+
+ environment:
+ JOB: "3.6_LOCALE"
+ ENV_FILE: "ci/circle-36-locale.yaml"
+ LOCALE_OVERRIDE: "zh_CN.UTF-8"
+ MINICONDA_DIR: /home/ubuntu/miniconda3
+ steps:
+ - checkout
+ - run:
+ name: build
+ command: |
+ ./ci/install_circle.sh
+ ./ci/show_circle.sh
+ - run:
+ name: test
+ command: ./ci/run_circle.sh --skip-slow --skip-network
+
+ # --------------------------------------------------------------------------
+ # 2. py36_locale_slow
+ # --------------------------------------------------------------------------
+ py36_locale_slow:
+ docker:
+ - image: continuumio/miniconda:latest
+ # databases configuration
+ - image: circleci/postgres:9.6.5-alpine-ram
+ environment:
+ POSTGRES_USER: postgres
+ POSTGRES_DB: pandas_nosetest
+ - image: circleci/mysql:8-ram
+ environment:
+ MYSQL_USER: "root"
+ MYSQL_HOST: "localhost"
+ MYSQL_ALLOW_EMPTY_PASSWORD: "true"
+ MYSQL_DATABASE: "pandas_nosetest"
+
+ environment:
+ JOB: "3.6_LOCALE_SLOW"
+ ENV_FILE: "ci/circle-36-locale_slow.yaml"
+ LOCALE_OVERRIDE: "zh_CN.UTF-8"
+ MINICONDA_DIR: /home/ubuntu/miniconda3
+ steps:
+ - checkout
+ - run:
+ name: build
+ command: |
+ ./ci/install_circle.sh
+ ./ci/show_circle.sh
+ - run:
+ name: test
+ command: ./ci/run_circle.sh --only-slow --skip-network
+
+ # --------------------------------------------------------------------------
+ # 3. py35_ascii
+ # --------------------------------------------------------------------------
+ py35_ascii:
+ docker:
+ - image: continuumio/miniconda:latest
+ # databases configuration
+ - image: circleci/postgres:9.6.5-alpine-ram
+ environment:
+ POSTGRES_USER: postgres
+ POSTGRES_DB: pandas_nosetest
+ - image: circleci/mysql:8-ram
+ environment:
+ MYSQL_USER: "root"
+ MYSQL_HOST: "localhost"
+ MYSQL_ALLOW_EMPTY_PASSWORD: "true"
+ MYSQL_DATABASE: "pandas_nosetest"
+
+ environment:
+ JOB: "3.5_ASCII"
+ ENV_FILE: "ci/circle-35-ascii.yaml"
+ LOCALE_OVERRIDE: "C"
+ MINICONDA_DIR: /home/ubuntu/miniconda3
+ steps:
+ - checkout
+ - run:
+ name: build
+ command: |
+ ./ci/install_circle.sh
+ ./ci/show_circle.sh
+ - run:
+ name: test
+ command: ./ci/run_circle.sh --skip-slow --skip-network
+
+
+workflows:
+ version: 2
+ build_and_test:
+ jobs:
+ - py27_compat
+ - py36_locale
+ - py36_locale_slow
+ - py35_ascii
diff --git a/ci/appveyor-27.yaml b/ci/appveyor-27.yaml
index 84107c605b14f..e47ebf75344fa 100644
--- a/ci/appveyor-27.yaml
+++ b/ci/appveyor-27.yaml
@@ -12,7 +12,7 @@ dependencies:
- matplotlib
- numexpr
- numpy=1.10*
- - openpyxl
+ - openpyxl=2.5.5
- pytables==3.2.2
- python=2.7.*
- pytz
diff --git a/ci/appveyor-36.yaml b/ci/appveyor-36.yaml
index 5e370de39958a..d007f04ca0720 100644
--- a/ci/appveyor-36.yaml
+++ b/ci/appveyor-36.yaml
@@ -10,7 +10,7 @@ dependencies:
- matplotlib
- numexpr
- numpy=1.13*
- - openpyxl
+ - openpyxl=2.5.5
- pyarrow
- pytables
- python-dateutil
diff --git a/ci/circle-27-compat.yaml b/ci/circle-27-compat.yaml
index 81a48d4edf11c..e037877819b14 100644
--- a/ci/circle-27-compat.yaml
+++ b/ci/circle-27-compat.yaml
@@ -4,11 +4,11 @@ channels:
- conda-forge
dependencies:
- bottleneck=1.0.0
- - cython=0.24
+ - cython=0.28.2
- jinja2=2.8
- numexpr=2.4.4 # we test that we correctly don't use an unsupported numexpr
- - numpy=1.9.2
- - openpyxl
+ - numpy=1.9.3
+ - openpyxl=2.5.5
- psycopg2
- pytables=3.2.2
- python-dateutil=2.5.0
diff --git a/ci/circle-35-ascii.yaml b/ci/circle-35-ascii.yaml
index 602c414b49bb2..745678791458d 100644
--- a/ci/circle-35-ascii.yaml
+++ b/ci/circle-35-ascii.yaml
@@ -2,7 +2,7 @@ name: pandas
channels:
- defaults
dependencies:
- - cython
+ - cython>=0.28.2
- nomkl
- numpy
- python-dateutil
diff --git a/ci/circle-36-locale.yaml b/ci/circle-36-locale.yaml
index cc852c1e2aeeb..a85e0b58f5e33 100644
--- a/ci/circle-36-locale.yaml
+++ b/ci/circle-36-locale.yaml
@@ -13,7 +13,7 @@ dependencies:
- nomkl
- numexpr
- numpy
- - openpyxl
+ - openpyxl=2.5.5
- psycopg2
- pymysql
- pytables
diff --git a/ci/circle-36-locale_slow.yaml b/ci/circle-36-locale_slow.yaml
index cc852c1e2aeeb..a85e0b58f5e33 100644
--- a/ci/circle-36-locale_slow.yaml
+++ b/ci/circle-36-locale_slow.yaml
@@ -13,7 +13,7 @@ dependencies:
- nomkl
- numexpr
- numpy
- - openpyxl
+ - openpyxl=2.5.5
- psycopg2
- pymysql
- pytables
diff --git a/ci/install_circle.sh b/ci/install_circle.sh
index 5ffff84c88488..f8bcf6bcffc99 100755
--- a/ci/install_circle.sh
+++ b/ci/install_circle.sh
@@ -6,14 +6,7 @@ echo "[home_dir: $home_dir]"
echo "[ls -ltr]"
ls -ltr
-echo "[Using clean Miniconda install]"
-rm -rf "$MINICONDA_DIR"
-
-# install miniconda
-wget http://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -q -O miniconda.sh || exit 1
-bash miniconda.sh -b -p "$MINICONDA_DIR" || exit 1
-
-export PATH="$MINICONDA_DIR/bin:$PATH"
+apt-get update -y && apt-get install -y build-essential postgresql-client-9.6
echo "[update conda]"
conda config --set ssl_verify false || exit 1
@@ -48,9 +41,17 @@ source $ENVS_FILE
# edit the locale override if needed
if [ -n "$LOCALE_OVERRIDE" ]; then
+
+ apt-get update && apt-get -y install locales locales-all
+
+ export LANG=$LOCALE_OVERRIDE
+ export LC_ALL=$LOCALE_OVERRIDE
+
+ python -c "import locale; locale.setlocale(locale.LC_ALL, \"$LOCALE_OVERRIDE\")" || exit 1;
+
echo "[Adding locale to the first line of pandas/__init__.py]"
rm -f pandas/__init__.pyc
- sedc="3iimport locale\nlocale.setlocale(locale.LC_ALL, '$LOCALE_OVERRIDE')\n"
+ sedc="3iimport locale\nlocale.setlocale(locale.LC_ALL, \"$LOCALE_OVERRIDE\")\n"
sed -i "$sedc" pandas/__init__.py
echo "[head -4 pandas/__init__.py]"
head -4 pandas/__init__.py
diff --git a/ci/install_db_circle.sh b/ci/install_db_circle.sh
deleted file mode 100755
index a00f74f009f54..0000000000000
--- a/ci/install_db_circle.sh
+++ /dev/null
@@ -1,8 +0,0 @@
-#!/bin/bash
-
-echo "installing dbs"
-mysql -e 'create database pandas_nosetest;'
-psql -c 'create database pandas_nosetest;' -U postgres
-
-echo "done"
-exit 0
diff --git a/ci/requirements-optional-conda.txt b/ci/requirements-optional-conda.txt
index e8cfcdf80f2e8..ca60c772392e7 100644
--- a/ci/requirements-optional-conda.txt
+++ b/ci/requirements-optional-conda.txt
@@ -11,7 +11,7 @@ lxml
matplotlib
nbsphinx
numexpr
-openpyxl
+openpyxl=2.5.5
pyarrow
pymysql
pytables
diff --git a/ci/requirements-optional-pip.txt b/ci/requirements-optional-pip.txt
index 877c52fa0b4fd..a6009c270c2a6 100644
--- a/ci/requirements-optional-pip.txt
+++ b/ci/requirements-optional-pip.txt
@@ -13,7 +13,7 @@ lxml
matplotlib
nbsphinx
numexpr
-openpyxl
+openpyxl=2.5.5
pyarrow
pymysql
tables
@@ -26,4 +26,4 @@ sqlalchemy
xarray
xlrd
xlsxwriter
-xlwt
\ No newline at end of file
+xlwt
diff --git a/ci/run_circle.sh b/ci/run_circle.sh
index 435985bd42148..fc2a8b849a354 100755
--- a/ci/run_circle.sh
+++ b/ci/run_circle.sh
@@ -6,4 +6,4 @@ export PATH="$MINICONDA_DIR/bin:$PATH"
source activate pandas
echo "pytest --strict --junitxml=$CIRCLE_TEST_REPORTS/reports/junit.xml $@ pandas"
-pytest --strict --junitxml=$CIRCLE_TEST_REPORTS/reports/junit.xml $@ pandas
+pytest --strict --color=no --junitxml=$CIRCLE_TEST_REPORTS/reports/junit.xml $@ pandas
diff --git a/ci/travis-27-locale.yaml b/ci/travis-27-locale.yaml
index 1312c1296d46a..eacae4630edeb 100644
--- a/ci/travis-27-locale.yaml
+++ b/ci/travis-27-locale.yaml
@@ -7,7 +7,7 @@ dependencies:
- cython=0.24
- lxml
- matplotlib=1.4.3
- - numpy=1.9.2
+ - numpy=1.9.3
- openpyxl=2.4.0
- python-dateutil
- python-blosc
diff --git a/ci/travis-27.yaml b/ci/travis-27.yaml
index 22b993a2da886..26a520a16a4cc 100644
--- a/ci/travis-27.yaml
+++ b/ci/travis-27.yaml
@@ -27,6 +27,7 @@ dependencies:
- PyCrypto
- pymysql=0.6.3
- pytables
+ - blosc=1.14.3
- python-blosc
- python-dateutil=2.5.0
- python=2.7*
diff --git a/ci/travis-35-osx.yaml b/ci/travis-35-osx.yaml
index e74abac4c9775..5722d91781999 100644
--- a/ci/travis-35-osx.yaml
+++ b/ci/travis-35-osx.yaml
@@ -12,7 +12,7 @@ dependencies:
- nomkl
- numexpr
- numpy=1.10.4
- - openpyxl
+ - openpyxl=2.5.5
- pytables
- python=3.5*
- pytz
diff --git a/ci/travis-36-doc.yaml b/ci/travis-36-doc.yaml
index 8705b82412e7c..05ff26020ac7d 100644
--- a/ci/travis-36-doc.yaml
+++ b/ci/travis-36-doc.yaml
@@ -21,7 +21,7 @@ dependencies:
- notebook
- numexpr
- numpy=1.13*
- - openpyxl
+ - openpyxl=2.5.5
- pandoc
- pyqt
- pytables
diff --git a/ci/travis-36-slow.yaml b/ci/travis-36-slow.yaml
index 6c475dc48723c..ae6353216cc2d 100644
--- a/ci/travis-36-slow.yaml
+++ b/ci/travis-36-slow.yaml
@@ -10,7 +10,7 @@ dependencies:
- matplotlib
- numexpr
- numpy
- - openpyxl
+ - openpyxl=2.5.5
- patsy
- psycopg2
- pymysql
diff --git a/ci/travis-36.yaml b/ci/travis-36.yaml
index 006276ba1a65f..83f963b9d9b6d 100644
--- a/ci/travis-36.yaml
+++ b/ci/travis-36.yaml
@@ -17,7 +17,7 @@ dependencies:
- nomkl
- numexpr
- numpy
- - openpyxl
+ - openpyxl=2.5.5
- psycopg2
- pyarrow
- pymysql
diff --git a/circle.yml b/circle.yml
deleted file mode 100644
index 66415defba6fe..0000000000000
--- a/circle.yml
+++ /dev/null
@@ -1,38 +0,0 @@
-machine:
- environment:
- # these are globally set
- MINICONDA_DIR: /home/ubuntu/miniconda3
-
-
-database:
- override:
- - ./ci/install_db_circle.sh
-
-
-checkout:
- post:
- # since circleci does a shallow fetch
- # we need to populate our tags
- - git fetch --depth=1000
-
-
-dependencies:
- override:
- - >
- case $CIRCLE_NODE_INDEX in
- 0)
- sudo apt-get install language-pack-it && ./ci/install_circle.sh JOB="2.7_COMPAT" ENV_FILE="ci/circle-27-compat.yaml" LOCALE_OVERRIDE="it_IT.UTF-8" ;;
- 1)
- sudo apt-get install language-pack-zh-hans && ./ci/install_circle.sh JOB="3.6_LOCALE" ENV_FILE="ci/circle-36-locale.yaml" LOCALE_OVERRIDE="zh_CN.UTF-8" ;;
- 2)
- sudo apt-get install language-pack-zh-hans && ./ci/install_circle.sh JOB="3.6_LOCALE_SLOW" ENV_FILE="ci/circle-36-locale_slow.yaml" LOCALE_OVERRIDE="zh_CN.UTF-8" ;;
- 3)
- ./ci/install_circle.sh JOB="3.5_ASCII" ENV_FILE="ci/circle-35-ascii.yaml" LOCALE_OVERRIDE="C" ;;
- esac
- - ./ci/show_circle.sh
-
-
-test:
- override:
- - case $CIRCLE_NODE_INDEX in 0) ./ci/run_circle.sh --skip-slow --skip-network ;; 1) ./ci/run_circle.sh --only-slow --skip-network ;; 2) ./ci/run_circle.sh --skip-slow --skip-network ;; 3) ./ci/run_circle.sh --skip-slow --skip-network ;; esac:
- parallel: true
diff --git a/pandas/tests/indexes/datetimes/test_misc.py b/pandas/tests/indexes/datetimes/test_misc.py
index 056924f2c6663..743cbc107cce5 100644
--- a/pandas/tests/indexes/datetimes/test_misc.py
+++ b/pandas/tests/indexes/datetimes/test_misc.py
@@ -1,5 +1,6 @@
import locale
import calendar
+import unicodedata
import pytest
@@ -7,7 +8,7 @@
import pandas as pd
import pandas.util.testing as tm
from pandas import (Index, DatetimeIndex, datetime, offsets,
- date_range, Timestamp)
+ date_range, Timestamp, compat)
class TestTimeSeries(object):
@@ -284,10 +285,24 @@ def test_datetime_name_accessors(self, time_locale):
dti = DatetimeIndex(freq='M', start='2012', end='2013')
result = dti.month_name(locale=time_locale)
expected = Index([month.capitalize() for month in expected_months])
+
+ # work around different normalization schemes
+ # https://github.com/pandas-dev/pandas/issues/22342
+ if not compat.PY2:
+ result = result.str.normalize("NFD")
+ expected = expected.str.normalize("NFD")
+
tm.assert_index_equal(result, expected)
+
for date, expected in zip(dti, expected_months):
result = date.month_name(locale=time_locale)
- assert result == expected.capitalize()
+ expected = expected.capitalize()
+
+ if not compat.PY2:
+ result = unicodedata.normalize("NFD", result)
+ expected = unicodedata.normalize("NFD", result)
+
+ assert result == expected
dti = dti.append(DatetimeIndex([pd.NaT]))
assert np.isnan(dti.month_name(locale=time_locale)[-1])
diff --git a/pandas/tests/io/json/test_compression.py b/pandas/tests/io/json/test_compression.py
index 05ceace20f5a4..1b9cbc57865d2 100644
--- a/pandas/tests/io/json/test_compression.py
+++ b/pandas/tests/io/json/test_compression.py
@@ -2,6 +2,7 @@
import pandas as pd
import pandas.util.testing as tm
+import pandas.util._test_decorators as td
from pandas.util.testing import assert_frame_equal, assert_raises_regex
@@ -31,6 +32,7 @@ def test_read_zipped_json(datapath):
assert_frame_equal(uncompressed_df, compressed_df)
+@td.skip_if_not_us_locale
def test_with_s3_url(compression):
boto3 = pytest.importorskip('boto3')
pytest.importorskip('s3fs')
diff --git a/pandas/tests/io/json/test_pandas.py b/pandas/tests/io/json/test_pandas.py
index bcbac4400c953..b5a2be87de1c4 100644
--- a/pandas/tests/io/json/test_pandas.py
+++ b/pandas/tests/io/json/test_pandas.py
@@ -15,6 +15,7 @@
assert_series_equal, network,
ensure_clean, assert_index_equal)
import pandas.util.testing as tm
+import pandas.util._test_decorators as td
_seriesd = tm.getSeriesData()
_tsd = tm.getTimeSeriesData()
@@ -1040,6 +1041,7 @@ def test_read_inline_jsonl(self):
expected = DataFrame([[1, 2], [1, 2]], columns=['a', 'b'])
assert_frame_equal(result, expected)
+ @td.skip_if_not_us_locale
def test_read_s3_jsonl(self, s3_resource):
# GH17200
diff --git a/pandas/tests/io/parser/test_network.py b/pandas/tests/io/parser/test_network.py
index e2243b8087a5b..72d2c5fd8d18f 100644
--- a/pandas/tests/io/parser/test_network.py
+++ b/pandas/tests/io/parser/test_network.py
@@ -55,10 +55,12 @@ def tips_df(datapath):
@pytest.mark.usefixtures("s3_resource")
+@td.skip_if_not_us_locale()
class TestS3(object):
def test_parse_public_s3_bucket(self, tips_df):
pytest.importorskip('s3fs')
+
# more of an integration test due to the not-public contents portion
# can probably mock this though.
for ext, comp in [('', None), ('.gz', 'gzip'), ('.bz2', 'bz2')]:
diff --git a/pandas/tests/io/test_excel.py b/pandas/tests/io/test_excel.py
index 4e2b2af0ebfe7..20f403e71fd36 100644
--- a/pandas/tests/io/test_excel.py
+++ b/pandas/tests/io/test_excel.py
@@ -576,6 +576,7 @@ def test_read_from_http_url(self, ext):
tm.assert_frame_equal(url_table, local_table)
@td.skip_if_no('s3fs')
+ @td.skip_if_not_us_locale
def test_read_from_s3_url(self, ext):
boto3 = pytest.importorskip('boto3')
moto = pytest.importorskip('moto')
diff --git a/pandas/tests/scalar/timestamp/test_timestamp.py b/pandas/tests/scalar/timestamp/test_timestamp.py
index 4689c7bea626f..e829506e95b53 100644
--- a/pandas/tests/scalar/timestamp/test_timestamp.py
+++ b/pandas/tests/scalar/timestamp/test_timestamp.py
@@ -5,6 +5,7 @@
import dateutil
import calendar
import locale
+import unicodedata
import numpy as np
from dateutil.tz import tzutc
@@ -20,7 +21,7 @@
from pandas._libs.tslibs.timezones import get_timezone, dateutil_gettz as gettz
from pandas.errors import OutOfBoundsDatetime
-from pandas.compat import long, PY3
+from pandas.compat import long, PY3, PY2
from pandas.compat.numpy import np_datetime64_compat
from pandas import Timestamp, Period, Timedelta, NaT
@@ -116,8 +117,21 @@ def test_names(self, data, time_locale):
expected_day = calendar.day_name[0].capitalize()
expected_month = calendar.month_name[8].capitalize()
- assert data.day_name(time_locale) == expected_day
- assert data.month_name(time_locale) == expected_month
+ result_day = data.day_name(time_locale)
+ result_month = data.month_name(time_locale)
+
+ # Work around https://github.com/pandas-dev/pandas/issues/22342
+ # different normalizations
+
+ if not PY2:
+ expected_day = unicodedata.normalize("NFD", expected_day)
+ expected_month = unicodedata.normalize("NFD", expected_month)
+
+ result_day = unicodedata.normalize("NFD", result_day,)
+ result_month = unicodedata.normalize("NFD", result_month)
+
+ assert result_day == expected_day
+ assert result_month == expected_month
# Test NaT
nan_ts = Timestamp(NaT)
diff --git a/pandas/tests/series/test_datetime_values.py b/pandas/tests/series/test_datetime_values.py
index 47798d0ddd7f5..5e924ac5c8894 100644
--- a/pandas/tests/series/test_datetime_values.py
+++ b/pandas/tests/series/test_datetime_values.py
@@ -3,6 +3,7 @@
import locale
import calendar
+import unicodedata
import pytest
from datetime import datetime, date
@@ -13,7 +14,8 @@
from pandas.core.dtypes.common import is_integer_dtype, is_list_like
from pandas import (Index, Series, DataFrame, bdate_range,
date_range, period_range, timedelta_range,
- PeriodIndex, DatetimeIndex, TimedeltaIndex)
+ PeriodIndex, DatetimeIndex, TimedeltaIndex,
+ compat)
import pandas.core.common as com
from pandas.util.testing import assert_series_equal
@@ -309,10 +311,24 @@ def test_dt_accessor_datetime_name_accessors(self, time_locale):
s = Series(DatetimeIndex(freq='M', start='2012', end='2013'))
result = s.dt.month_name(locale=time_locale)
expected = Series([month.capitalize() for month in expected_months])
+
+ # work around https://github.com/pandas-dev/pandas/issues/22342
+ if not compat.PY2:
+ result = result.str.normalize("NFD")
+ expected = expected.str.normalize("NFD")
+
tm.assert_series_equal(result, expected)
+
for s_date, expected in zip(s, expected_months):
result = s_date.month_name(locale=time_locale)
- assert result == expected.capitalize()
+ expected = expected.capitalize()
+
+ if not compat.PY2:
+ result = unicodedata.normalize("NFD", result)
+ expected = unicodedata.normalize("NFD", expected)
+
+ assert result == expected
+
s = s.append(Series([pd.NaT]))
assert np.isnan(s.dt.month_name(locale=time_locale).iloc[-1])
diff --git a/pandas/tests/util/test_util.py b/pandas/tests/util/test_util.py
index 145be7f85b193..c049dfc874940 100644
--- a/pandas/tests/util/test_util.py
+++ b/pandas/tests/util/test_util.py
@@ -433,6 +433,26 @@ def teardown_class(cls):
del cls.locales
del cls.current_locale
+ def test_can_set_locale_valid_set(self):
+ # Setting the default locale should return True
+ assert tm.can_set_locale('') is True
+
+ def test_can_set_locale_invalid_set(self):
+ # Setting an invalid locale should return False
+ assert tm.can_set_locale('non-existent_locale') is False
+
+ def test_can_set_locale_invalid_get(self, monkeypatch):
+ # In some cases, an invalid locale can be set,
+ # but a subsequent getlocale() raises a ValueError
+ # See GH 22129
+
+ def mockgetlocale():
+ raise ValueError()
+
+ with monkeypatch.context() as m:
+ m.setattr(locale, 'getlocale', mockgetlocale)
+ assert tm.can_set_locale('') is False
+
def test_get_locales(self):
# all systems should have at least a single locale
assert len(tm.get_locales()) > 0
@@ -466,7 +486,7 @@ def test_set_locale(self):
enc = codecs.lookup(enc).name
new_locale = lang, enc
- if not tm._can_set_locale(new_locale):
+ if not tm.can_set_locale(new_locale):
with pytest.raises(locale.Error):
with tm.set_locale(new_locale):
pass
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index b7edbff00a4b9..bb79c25126fab 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -478,6 +478,8 @@ def set_locale(new_locale, lc_var=locale.LC_ALL):
A string of the form <language_country>.<encoding>. For example to set
the current locale to US English with a UTF8 encoding, you would pass
"en_US.UTF-8".
+ lc_var : int, default `locale.LC_ALL`
+ The category of the locale being set.
Notes
-----
@@ -489,37 +491,37 @@ def set_locale(new_locale, lc_var=locale.LC_ALL):
try:
locale.setlocale(lc_var, new_locale)
-
- try:
- normalized_locale = locale.getlocale()
- except ValueError:
- yield new_locale
+ normalized_locale = locale.getlocale()
+ if com._all_not_none(*normalized_locale):
+ yield '.'.join(normalized_locale)
else:
- if com._all_not_none(*normalized_locale):
- yield '.'.join(normalized_locale)
- else:
- yield new_locale
+ yield new_locale
finally:
locale.setlocale(lc_var, current_locale)
-def _can_set_locale(lc):
- """Check to see if we can set a locale without throwing an exception.
+def can_set_locale(lc, lc_var=locale.LC_ALL):
+ """
+ Check to see if we can set a locale, and subsequently get the locale,
+ without raising an Exception.
Parameters
----------
lc : str
The locale to attempt to set.
+ lc_var : int, default `locale.LC_ALL`
+ The category of the locale being set.
Returns
-------
- isvalid : bool
+ is_valid : bool
Whether the passed locale can be set
"""
try:
- with set_locale(lc):
+ with set_locale(lc, lc_var=lc_var):
pass
- except locale.Error: # horrible name for a Exception subclass
+ except (ValueError,
+ locale.Error): # horrible name for a Exception subclass
return False
else:
return True
@@ -546,7 +548,7 @@ def _valid_locales(locales, normalize):
else:
normalizer = lambda x: x.strip()
- return list(filter(_can_set_locale, map(normalizer, locales)))
+ return list(filter(can_set_locale, map(normalizer, locales)))
# -----------------------------------------------------------------------------
# Stdout / stderr decorators
| * Pin `blosc` to `1.14.3` (#22429)
* Migrate to CircleCI 2.0 (#21814)
* Pin `openpyxl` to `2.5.5` (#22601)
* Bump `numpy` to `1.9.3` (#22499)
* Fix locale handling (#21739, #22213)
* Upgrade `Cython` to `>=0.28.2` (#21688) | https://api.github.com/repos/pandas-dev/pandas/pulls/22637 | 2018-09-08T16:48:16Z | 2018-09-11T16:40:59Z | 2018-09-11T16:40:59Z | 2018-09-11T20:07:41Z |
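One of the locale fixes above compares month names only after Unicode NFD normalization (the #22342 workaround visible in the test diffs); a minimal, self-contained example of why that matters, using a hard-coded French month name rather than an actual locale:

```python
import unicodedata

composed = "f\u00e9vrier"     # 'février' with a precomposed é (NFC form)
decomposed = "fe\u0301vrier"  # same text as 'e' + combining acute (NFD form)

print(composed == decomposed)                                # False
print(unicodedata.normalize("NFD", composed) == decomposed)  # True
```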
DOC: improve doc string for .aggregate and df.transform | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 2e5da21f573b0..4a74b2b472fda 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -63,6 +63,10 @@
_shared_docs = dict()
_shared_doc_kwargs = dict(
axes='keywords for axes', klass='NDFrame',
+ axis="""
+ axis : {0 or 'index', 1 or 'columns'}, default 0
+ - 0 or 'index': apply function to each column.
+ - 1 or 'columns': apply function to each row.""",
axes_single_arg='int or labels for object',
args_transpose='axes to permute (int or label for object)',
optional_by="""
@@ -4545,17 +4549,16 @@ def pipe(self, func, *args, **kwargs):
Parameters
----------
- func : function, string, dictionary, or list of string/functions
+ func : function, string, list of string/functions or dictionary
Function to use for aggregating the data. If a function, must either
- work when passed a %(klass)s or when passed to %(klass)s.apply. For
- a DataFrame, can pass a dict, if the keys are DataFrame column names.
+ work when passed a %(klass)s or when passed to %(klass)s.apply.
Accepted combinations are:
- - string function name.
- - function.
- - list of functions.
- - dict of column names -> functions (or list of functions).
+ - string function name
+ - function
+ - list of functions and/or function names
+ - dict of axis labels -> functions, function names and/or list of such
%(axis)s
*args
Positional arguments to pass to `func`.
@@ -4581,15 +4584,24 @@ def pipe(self, func, *args, **kwargs):
Parameters
----------
- func : callable, string, dictionary, or list of string/callables
- To apply to column
+ func : function, string, list of string/functions or dictionary
+ Function to use for transforming the data. If a function, must either
+ work when passed a %(klass)s or when passed to %(klass)s.apply.
+ The function (or each function in a list/dict) must return an
+ object with the same length for the provided axis as the
+ calling %(klass)s.
- Accepted Combinations are:
+ Accepted combinations are:
- string function name
- function
- - list of functions
- - dict of column names -> functions (or list of functions)
+ - list of functions and/or function names
+ - dict of axis labels -> functions, function names and/or list of such
+ %(axis)s
+ *args
+ Positional arguments to pass to `func`.
+ **kwargs
+ Keyword arguments to pass to `func`.
Returns
-------
| Since #21224, operations using ``axis=1`` in df.aggregate and df.transform now work the same as when ``axis=0``.
This PR updates the methods' doc strings to reflect that. For example, we can now pass a dict to DataFrame.agg/transform, also when ``axis=1``, and DataFrame.transform now accepts an ``axis`` parameter.
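A quick sketch of the behavior the updated docstrings describe (the frame and labels here are made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3], 'B': [10, 20, 30]})

# A dict of axis labels -> functions; with the default axis=0 the keys
# are column names, and values may be names, callables, or lists.
df.agg({'A': 'sum', 'B': ['min', 'max']})

# The same machinery now works row-wise too.
df.agg('sum', axis=1)

# transform() must return output with the same length as the provided
# axis, and it now accepts axis as well.
df.transform(lambda x: x - x.mean(), axis=1)
```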
It also makes some related minor clarifications. | https://api.github.com/repos/pandas-dev/pandas/pulls/22636 | 2018-09-08T09:44:04Z | 2018-09-08T11:13:44Z | null | 2018-09-08T11:13:44Z
BUG: NaN should have pct rank of NaN | diff --git a/doc/source/whatsnew/v0.23.5.txt b/doc/source/whatsnew/v0.23.5.txt
index 304ab12752ad4..f69e38e7fdd50 100644
--- a/doc/source/whatsnew/v0.23.5.txt
+++ b/doc/source/whatsnew/v0.23.5.txt
@@ -20,6 +20,9 @@ and bug fixes. We recommend that all users upgrade to this version.
Fixed Regressions
~~~~~~~~~~~~~~~~~
+- Calling :meth:`DataFrameGroupBy.rank` and :meth:`SeriesGroupBy.rank` with empty groups
+ and ``pct=True`` was raising a ``ZeroDivisionError`` due to `c1068d9
+ <https://github.com/pandas-dev/pandas/commit/c1068d9d242c22cb2199156f6fb82eb5759178ae>`_ (:issue:`22519`)
-
-
diff --git a/pandas/_libs/groupby_helper.pxi.in b/pandas/_libs/groupby_helper.pxi.in
index b3e9b7c9e69ee..d7885e112a7e0 100644
--- a/pandas/_libs/groupby_helper.pxi.in
+++ b/pandas/_libs/groupby_helper.pxi.in
@@ -587,7 +587,12 @@ def group_rank_{{name}}(ndarray[float64_t, ndim=2] out,
if pct:
for i in range(N):
- out[i, 0] = out[i, 0] / grp_sizes[i, 0]
+ # We don't include NaN values in percentage
+ # rankings, so we assign them percentages of NaN.
+ if out[i, 0] != out[i, 0] or out[i, 0] == NAN:
+ out[i, 0] = NAN
+ else:
+ out[i, 0] = out[i, 0] / grp_sizes[i, 0]
{{endif}}
{{endfor}}
diff --git a/pandas/tests/groupby/test_rank.py b/pandas/tests/groupby/test_rank.py
index 203c3c73bec94..d978e144e5013 100644
--- a/pandas/tests/groupby/test_rank.py
+++ b/pandas/tests/groupby/test_rank.py
@@ -1,7 +1,7 @@
import pytest
import numpy as np
import pandas as pd
-from pandas import DataFrame, concat
+from pandas import DataFrame, Series, concat
from pandas.util import testing as tm
@@ -252,3 +252,20 @@ def test_rank_object_raises(ties_method, ascending, na_option,
df.groupby('key').rank(method=ties_method,
ascending=ascending,
na_option=na_option, pct=pct)
+
+
+def test_rank_empty_group():
+ # see gh-22519
+ column = "A"
+ df = DataFrame({
+ "A": [0, 1, 0],
+ "B": [1., np.nan, 2.]
+ })
+
+ result = df.groupby(column).B.rank(pct=True)
+ expected = Series([0.5, np.nan, 1.0], name="B")
+ tm.assert_series_equal(result, expected)
+
+ result = df.groupby(column).rank(pct=True)
+ expected = DataFrame({"B": [0.5, np.nan, 1.0]})
+ tm.assert_frame_equal(result, expected)
| Backport of #22600. | https://api.github.com/repos/pandas-dev/pandas/pulls/22634 | 2018-09-08T05:41:02Z | 2018-09-11T21:45:26Z | 2018-09-11T21:45:26Z | 2018-09-11T21:45:46Z |
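The regression fixed above, reduced to a standalone snippet (it mirrors the `test_rank_empty_group` test in the diff):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": [0, 1, 0], "B": [1.0, np.nan, 2.0]})

# The A == 1 group holds only NaN, so it has zero rankable values;
# with pct=True the division by that group size previously raised
# ZeroDivisionError.  After the fix the NaN entry simply gets a NaN
# percentage rank: [0.5, NaN, 1.0].
df.groupby("A").B.rank(pct=True)
```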
DOC: Add more documentation showcasing CalendarDay | diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst
index 5dfac98d069e7..d0de34aa5eb4c 100644
--- a/doc/source/timeseries.rst
+++ b/doc/source/timeseries.rst
@@ -900,12 +900,31 @@ calendar time arithmetic. :class:`CalendarDay` is useful preserving calendar day
semantics with date times with have day light savings transitions, i.e. :class:`CalendarDay`
will preserve the hour before the day light savings transition.
+Addition with :class:`CalendarDay`:
+
.. ipython:: python
ts = pd.Timestamp('2016-10-30 00:00:00', tz='Europe/Helsinki')
ts + pd.offsets.Day(1)
ts + pd.offsets.CalendarDay(1)
+Creating a :func:`date_range`:
+
+.. ipython:: python
+
+ start = pd.Timestamp('2016-10-30 00:00:00', tz='Europe/Helsinki')
+ pd.date_range(start, freq='D', periods=3)
+ pd.date_range(start, freq='CD', periods=3)
+
+Resampling a timeseries:
+
+.. ipython:: python
+
+ idx = pd.date_range("2016-10-30", freq='H', periods=4*24, tz='Europe/Helsinki')
+ s = pd.Series(range(len(idx)), index=idx)
+ s.resample('D').count()
+ s.resample('CD').count()
+
Parametric Offsets
~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 232f879285543..3e8f6ef7bd0e1 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -317,7 +317,10 @@ and respect calendar day arithmetic while :class:`Day` and frequency alias ``'D'
will now respect absolute time (:issue:`22274`, :issue:`20596`, :issue:`16980`, :issue:`8774`)
See the :ref:`documentation here <timeseries.dayvscalendarday>` for more information.
-Addition with :class:`CalendarDay` across a daylight savings time transition:
+The difference between :class:`Day` vs :class:`CalendarDay` is most apparent
+with timezone-aware datetime data with a daylight savings time transition:
+
+Addition with :class:`CalendarDay`:
.. ipython:: python
@@ -325,6 +328,23 @@ Addition with :class:`CalendarDay` across a daylight savings time transition:
ts + pd.offsets.Day(1)
ts + pd.offsets.CalendarDay(1)
+Creating a :func:`date_range`:
+
+.. ipython:: python
+
+ start = pd.Timestamp('2016-10-30 00:00:00', tz='Europe/Helsinki')
+ pd.date_range(start, freq='D', periods=3)
+ pd.date_range(start, freq='CD', periods=3)
+
+Resampling a timeseries:
+
+.. ipython:: python
+
+ idx = pd.date_range("2016-10-30", freq='H', periods=4*24, tz='Europe/Helsinki')
+ s = pd.Series(range(len(idx)), index=idx)
+ s.resample('D').count()
+ s.resample('CD').count()
+
.. _whatsnew_0240.api_breaking.period_end_time:
Time values in ``dt.end_time`` and ``to_timestamp(how='end')``
diff --git a/pandas/tests/arrays/categorical/test_constructors.py b/pandas/tests/arrays/categorical/test_constructors.py
index b5f499ba27323..8cb86ee488554 100644
--- a/pandas/tests/arrays/categorical/test_constructors.py
+++ b/pandas/tests/arrays/categorical/test_constructors.py
@@ -291,8 +291,9 @@ def test_constructor_with_datetimelike(self, dtl):
result = repr(c)
assert "NaT" in result
- def test_constructor_from_index_series_datetimetz(self):
- idx = date_range('2015-01-01 10:00', freq='D', periods=3,
+ @pytest.mark.parametrize('freq', ['CD', 'D'])
+ def test_constructor_from_index_series_datetimetz(self, freq):
+ idx = date_range('2015-01-01 10:00', freq=freq, periods=3,
tz='US/Eastern')
result = Categorical(idx)
tm.assert_index_equal(result.categories, idx)
diff --git a/pandas/tests/indexes/multi/test_partial_indexing.py b/pandas/tests/indexes/multi/test_partial_indexing.py
index 40e5e26e9cb0f..a74169418b44b 100644
--- a/pandas/tests/indexes/multi/test_partial_indexing.py
+++ b/pandas/tests/indexes/multi/test_partial_indexing.py
@@ -6,7 +6,8 @@
from pandas import DataFrame, MultiIndex, date_range
-def test_partial_string_timestamp_multiindex():
+@pytest.mark.parametrize('freq', ['D', 'CD'])
+def test_partial_string_timestamp_multiindex(freq):
# GH10331
dr = pd.date_range('2016-01-01', '2016-01-03', freq='12H')
abc = ['a', 'b', 'c']
@@ -89,7 +90,7 @@ def test_partial_string_timestamp_multiindex():
df_swap.loc['2016-01-01']
# GH12685 (partial string with daily resolution or below)
- dr = date_range('2013-01-01', periods=100, freq='D')
+ dr = date_range('2013-01-01', periods=100, freq=freq)
ix = MultiIndex.from_product([dr, ['a', 'b']])
df = DataFrame(np.random.randn(200, 1), columns=['A'], index=ix)
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index c19f8e57f9ae7..3c5e6fcaa05d9 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -2497,11 +2497,12 @@ def test_date_nanos(self):
result = fmt.Datetime64Formatter(x).get_result()
assert result[0].strip() == "1970-01-01 00:00:00.000000200"
- def test_dates_display(self):
+ @pytest.mark.parametrize('freq', ['CD', 'D'])
+ def test_dates_display(self, freq):
# 10170
# make sure that we are consistently display date formatting
- x = Series(date_range('20130101 09:00:00', periods=5, freq='D'))
+ x = Series(date_range('20130101 09:00:00', periods=5, freq=freq))
x.iloc[1] = np.nan
result = fmt.Datetime64Formatter(x).get_result()
assert result[0].strip() == "2013-01-01 09:00:00"
diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py
index dcfeab55f94fc..fcac05c6b6388 100644
--- a/pandas/tests/test_multilevel.py
+++ b/pandas/tests/test_multilevel.py
@@ -71,9 +71,10 @@ def test_append(self):
result = a['A'].append(b['A'])
tm.assert_series_equal(result, self.frame['A'])
- def test_append_index(self):
+ @pytest.mark.parametrize('freq', ['CD', 'D'])
+ def test_append_index(self, freq):
idx1 = Index([1.1, 1.2, 1.3])
- idx2 = pd.date_range('2011-01-01', freq='D', periods=3,
+ idx2 = pd.date_range('2011-01-01', freq=freq, periods=3,
tz='Asia/Tokyo')
idx3 = Index(['A', 'B', 'C'])
@@ -2223,75 +2224,76 @@ def test_set_index_datetime(self):
tm.assert_index_equal(df.index.get_level_values(1), idx2)
tm.assert_index_equal(df.index.get_level_values(2), idx3)
- def test_reset_index_datetime(self):
+ @pytest.mark.parametrize('freq', ['CD', 'D'])
+ @pytest.mark.parametrize('tz', ['UTC', 'Asia/Tokyo', 'US/Eastern'])
+ def test_reset_index_datetime(self, freq, tz):
# GH 3950
- for tz in ['UTC', 'Asia/Tokyo', 'US/Eastern']:
- idx1 = pd.date_range('1/1/2011', periods=5, freq='D', tz=tz,
- name='idx1')
- idx2 = Index(range(5), name='idx2', dtype='int64')
- idx = MultiIndex.from_arrays([idx1, idx2])
- df = DataFrame(
- {'a': np.arange(5, dtype='int64'),
- 'b': ['A', 'B', 'C', 'D', 'E']}, index=idx)
-
- expected = DataFrame({'idx1': [datetime.datetime(2011, 1, 1),
- datetime.datetime(2011, 1, 2),
- datetime.datetime(2011, 1, 3),
- datetime.datetime(2011, 1, 4),
- datetime.datetime(2011, 1, 5)],
- 'idx2': np.arange(5, dtype='int64'),
- 'a': np.arange(5, dtype='int64'),
- 'b': ['A', 'B', 'C', 'D', 'E']},
- columns=['idx1', 'idx2', 'a', 'b'])
- expected['idx1'] = expected['idx1'].apply(
- lambda d: Timestamp(d, tz=tz))
-
- tm.assert_frame_equal(df.reset_index(), expected)
-
- idx3 = pd.date_range('1/1/2012', periods=5, freq='MS',
- tz='Europe/Paris', name='idx3')
- idx = MultiIndex.from_arrays([idx1, idx2, idx3])
- df = DataFrame(
- {'a': np.arange(5, dtype='int64'),
- 'b': ['A', 'B', 'C', 'D', 'E']}, index=idx)
-
- expected = DataFrame({'idx1': [datetime.datetime(2011, 1, 1),
- datetime.datetime(2011, 1, 2),
- datetime.datetime(2011, 1, 3),
- datetime.datetime(2011, 1, 4),
- datetime.datetime(2011, 1, 5)],
- 'idx2': np.arange(5, dtype='int64'),
- 'idx3': [datetime.datetime(2012, 1, 1),
- datetime.datetime(2012, 2, 1),
- datetime.datetime(2012, 3, 1),
- datetime.datetime(2012, 4, 1),
- datetime.datetime(2012, 5, 1)],
- 'a': np.arange(5, dtype='int64'),
- 'b': ['A', 'B', 'C', 'D', 'E']},
- columns=['idx1', 'idx2', 'idx3', 'a', 'b'])
- expected['idx1'] = expected['idx1'].apply(
- lambda d: Timestamp(d, tz=tz))
- expected['idx3'] = expected['idx3'].apply(
- lambda d: Timestamp(d, tz='Europe/Paris'))
- tm.assert_frame_equal(df.reset_index(), expected)
-
- # GH 7793
- idx = MultiIndex.from_product([['a', 'b'], pd.date_range(
- '20130101', periods=3, tz=tz)])
- df = DataFrame(
- np.arange(6, dtype='int64').reshape(
- 6, 1), columns=['a'], index=idx)
-
- expected = DataFrame({'level_0': 'a a a b b b'.split(),
- 'level_1': [
- datetime.datetime(2013, 1, 1),
- datetime.datetime(2013, 1, 2),
- datetime.datetime(2013, 1, 3)] * 2,
- 'a': np.arange(6, dtype='int64')},
- columns=['level_0', 'level_1', 'a'])
- expected['level_1'] = expected['level_1'].apply(
- lambda d: Timestamp(d, freq='D', tz=tz))
- tm.assert_frame_equal(df.reset_index(), expected)
+ idx1 = pd.date_range('1/1/2011', periods=5, freq=freq, tz=tz,
+ name='idx1')
+ idx2 = Index(range(5), name='idx2', dtype='int64')
+ idx = MultiIndex.from_arrays([idx1, idx2])
+ df = DataFrame(
+ {'a': np.arange(5, dtype='int64'),
+ 'b': ['A', 'B', 'C', 'D', 'E']}, index=idx)
+
+ expected = DataFrame({'idx1': [datetime.datetime(2011, 1, 1),
+ datetime.datetime(2011, 1, 2),
+ datetime.datetime(2011, 1, 3),
+ datetime.datetime(2011, 1, 4),
+ datetime.datetime(2011, 1, 5)],
+ 'idx2': np.arange(5, dtype='int64'),
+ 'a': np.arange(5, dtype='int64'),
+ 'b': ['A', 'B', 'C', 'D', 'E']},
+ columns=['idx1', 'idx2', 'a', 'b'])
+ expected['idx1'] = expected['idx1'].apply(
+ lambda d: Timestamp(d, tz=tz))
+
+ tm.assert_frame_equal(df.reset_index(), expected)
+
+ idx3 = pd.date_range('1/1/2012', periods=5, freq='MS',
+ tz='Europe/Paris', name='idx3')
+ idx = MultiIndex.from_arrays([idx1, idx2, idx3])
+ df = DataFrame(
+ {'a': np.arange(5, dtype='int64'),
+ 'b': ['A', 'B', 'C', 'D', 'E']}, index=idx)
+
+ expected = DataFrame({'idx1': [datetime.datetime(2011, 1, 1),
+ datetime.datetime(2011, 1, 2),
+ datetime.datetime(2011, 1, 3),
+ datetime.datetime(2011, 1, 4),
+ datetime.datetime(2011, 1, 5)],
+ 'idx2': np.arange(5, dtype='int64'),
+ 'idx3': [datetime.datetime(2012, 1, 1),
+ datetime.datetime(2012, 2, 1),
+ datetime.datetime(2012, 3, 1),
+ datetime.datetime(2012, 4, 1),
+ datetime.datetime(2012, 5, 1)],
+ 'a': np.arange(5, dtype='int64'),
+ 'b': ['A', 'B', 'C', 'D', 'E']},
+ columns=['idx1', 'idx2', 'idx3', 'a', 'b'])
+ expected['idx1'] = expected['idx1'].apply(
+ lambda d: Timestamp(d, tz=tz))
+ expected['idx3'] = expected['idx3'].apply(
+ lambda d: Timestamp(d, tz='Europe/Paris'))
+ tm.assert_frame_equal(df.reset_index(), expected)
+
+ # GH 7793
+ idx = MultiIndex.from_product([['a', 'b'], pd.date_range(
+ '20130101', periods=3, tz=tz)])
+ df = DataFrame(
+ np.arange(6, dtype='int64').reshape(
+ 6, 1), columns=['a'], index=idx)
+
+ expected = DataFrame({'level_0': 'a a a b b b'.split(),
+ 'level_1': [
+ datetime.datetime(2013, 1, 1),
+ datetime.datetime(2013, 1, 2),
+ datetime.datetime(2013, 1, 3)] * 2,
+ 'a': np.arange(6, dtype='int64')},
+ columns=['level_0', 'level_1', 'a'])
+ expected['level_1'] = expected['level_1'].apply(
+ lambda d: Timestamp(d, freq=freq, tz=tz))
+ tm.assert_frame_equal(df.reset_index(), expected)
def test_reset_index_period(self):
# GH 7746
diff --git a/pandas/tests/test_resample.py b/pandas/tests/test_resample.py
index 669fa9742a705..491974655baf1 100644
--- a/pandas/tests/test_resample.py
+++ b/pandas/tests/test_resample.py
@@ -279,12 +279,13 @@ def test_agg_consistency(self):
# TODO: once GH 14008 is fixed, move these tests into
# `Base` test class
- def test_agg(self):
+ @pytest.mark.parametrize('freq', ['CD', 'D'])
+ def test_agg(self, freq):
# test with all three Resampler apis and TimeGrouper
np.random.seed(1234)
index = date_range(datetime(2005, 1, 1),
- datetime(2005, 1, 10), freq='D')
+ datetime(2005, 1, 10), freq=freq)
index.name = 'date'
df = DataFrame(np.random.rand(10, 2), columns=list('AB'), index=index)
df_col = df.reset_index()
@@ -369,12 +370,13 @@ def test_agg(self):
('r2', 'B', 'mean'),
('r2', 'B', 'sum')])
- def test_agg_misc(self):
+ @pytest.mark.parametrize('freq', ['CD', 'D'])
+ def test_agg_misc(self, freq):
# test with all three Resampler apis and TimeGrouper
np.random.seed(1234)
index = date_range(datetime(2005, 1, 1),
- datetime(2005, 1, 10), freq='D')
+ datetime(2005, 1, 10), freq=freq)
index.name = 'date'
df = DataFrame(np.random.rand(10, 2), columns=list('AB'), index=index)
df_col = df.reset_index()
@@ -473,11 +475,12 @@ def f():
pytest.raises(KeyError, f)
- def test_agg_nested_dicts(self):
+ @pytest.mark.parametrize('freq', ['CD', 'D'])
+ def test_agg_nested_dicts(self, freq):
np.random.seed(1234)
index = date_range(datetime(2005, 1, 1),
- datetime(2005, 1, 10), freq='D')
+ datetime(2005, 1, 10), freq=freq)
index.name = 'date'
df = DataFrame(np.random.rand(10, 2), columns=list('AB'), index=index)
df_col = df.reset_index()
@@ -531,10 +534,11 @@ def test_try_aggregate_non_existing_column(self):
'y': ['median'],
'z': ['sum']})
- def test_selection_api_validation(self):
+ @pytest.mark.parametrize('freq', ['CD', 'D'])
+ def test_selection_api_validation(self, freq):
# GH 13500
index = date_range(datetime(2005, 1, 1),
- datetime(2005, 1, 10), freq='D')
+ datetime(2005, 1, 10), freq=freq)
rng = np.arange(len(index), dtype=np.int64)
df = DataFrame({'date': index, 'a': rng},
@@ -1064,10 +1068,11 @@ def test_resample_rounding(self):
]}, index=date_range('2014-11-08', freq='17s', periods=2))
assert_frame_equal(result, expected)
- def test_resample_basic_from_daily(self):
+ @pytest.mark.parametrize('freq', ['CD', 'D'])
+ def test_resample_basic_from_daily(self, freq):
# from daily
dti = DatetimeIndex(start=datetime(2005, 1, 1),
- end=datetime(2005, 1, 10), freq='D', name='index')
+ end=datetime(2005, 1, 10), freq=freq, name='index')
s = Series(np.random.rand(len(dti)), dti)
@@ -1120,10 +1125,11 @@ def test_resample_basic_from_daily(self):
assert result.iloc[5] == s['1/9/2005']
assert result.index.name == 'index'
- def test_resample_upsampling_picked_but_not_correct(self):
+ @pytest.mark.parametrize('freq', ['CD', 'D'])
+ def test_resample_upsampling_picked_but_not_correct(self, freq):
# Test for issue #3020
- dates = date_range('01-Jan-2014', '05-Jan-2014', freq='D')
+ dates = date_range('01-Jan-2014', '05-Jan-2014', freq=freq)
series = Series(1, index=dates)
result = series.resample('D').mean()
@@ -1137,7 +1143,7 @@ def test_resample_upsampling_picked_but_not_correct(self):
s = Series(np.arange(1., 6), index=[datetime.datetime(
1975, 1, i, 12, 0) for i in range(1, 6)])
expected = Series(np.arange(1., 6), index=date_range(
- '19750101', periods=5, freq='D'))
+ '19750101', periods=5, freq=freq))
result = s.resample('D').count()
assert_series_equal(result, Series(1, index=expected.index))
@@ -1170,7 +1176,8 @@ def test_resample_frame_basic(self):
@pytest.mark.parametrize('loffset', [timedelta(minutes=1),
'1min', Minute(1),
np.timedelta64(1, 'm')])
- def test_resample_loffset(self, loffset):
+ @pytest.mark.parametrize('freq', ['CD', 'D'])
+ def test_resample_loffset(self, loffset, freq):
# GH 7687
rng = date_range('1/1/2000 00:00:00', '1/1/2000 00:13:00', freq='min')
s = Series(np.random.randn(14), index=rng)
@@ -1185,7 +1192,7 @@ def test_resample_loffset(self, loffset):
# from daily
dti = DatetimeIndex(start=datetime(2005, 1, 1),
- end=datetime(2005, 1, 10), freq='D')
+ end=datetime(2005, 1, 10), freq=freq)
ser = Series(np.random.rand(len(dti)), dti)
# to weekly
@@ -1228,10 +1235,11 @@ def test_resample_loffset_count(self):
assert_series_equal(result, expected)
- def test_resample_upsample(self):
+ @pytest.mark.parametrize('freq', ['CD', 'D'])
+ def test_resample_upsample(self, freq):
# from daily
dti = DatetimeIndex(start=datetime(2005, 1, 1),
- end=datetime(2005, 1, 10), freq='D', name='index')
+ end=datetime(2005, 1, 10), freq=freq, name='index')
s = Series(np.random.rand(len(dti)), dti)
@@ -1376,9 +1384,10 @@ def test_resample_dup_index(self):
Period(year=2000, quarter=i + 1, freq='Q') for i in range(4)]
assert_frame_equal(result, expected)
- def test_resample_reresample(self):
+ @pytest.mark.parametrize('freq', ['CD', 'D'])
+ def test_resample_reresample(self, freq):
dti = DatetimeIndex(start=datetime(2005, 1, 1),
- end=datetime(2005, 1, 10), freq='D')
+ end=datetime(2005, 1, 10), freq=freq)
s = Series(np.random.rand(len(dti)), dti)
bs = s.resample('B', closed='right', label='right').mean()
result = bs.resample('8H').mean()
@@ -1520,15 +1529,16 @@ def test_resample_anchored_ticks(self):
expected = ts.resample(freq, closed='left', label='left').mean()
assert_series_equal(result, expected)
- def test_resample_single_group(self):
+ @pytest.mark.parametrize('freq', ['CD', 'D'])
+ def test_resample_single_group(self, freq):
mysum = lambda x: x.sum()
- rng = date_range('2000-1-1', '2000-2-10', freq='D')
+ rng = date_range('2000-1-1', '2000-2-10', freq=freq)
ts = Series(np.random.randn(len(rng)), index=rng)
assert_series_equal(ts.resample('M').sum(),
ts.resample('M').apply(mysum))
- rng = date_range('2000-1-1', '2000-1-10', freq='D')
+ rng = date_range('2000-1-1', '2000-1-10', freq=freq)
ts = Series(np.random.randn(len(rng)), index=rng)
assert_series_equal(ts.resample('M').sum(),
ts.resample('M').apply(mysum))
@@ -1700,7 +1710,8 @@ def test_nanosecond_resample_error(self):
assert_series_equal(result, exp)
- def test_resample_anchored_intraday(self):
+ @pytest.mark.parametrize('freq', ['CD', 'D'])
+ def test_resample_anchored_intraday(self, freq):
# #1471, #1458
rng = date_range('1/1/2012', '4/1/2012', freq='100min')
@@ -1713,7 +1724,7 @@ def test_resample_anchored_intraday(self):
tm.assert_frame_equal(result, expected)
result = df.resample('M', closed='left').mean()
- exp = df.tshift(1, freq='D').resample('M', kind='period').mean()
+ exp = df.tshift(1, freq=freq).resample('M', kind='period').mean()
exp = exp.to_timestamp(how='end')
exp.index = exp.index + Timedelta(1, 'ns') - Timedelta(1, 'D')
@@ -1729,8 +1740,8 @@ def test_resample_anchored_intraday(self):
tm.assert_frame_equal(result, expected)
result = df.resample('Q', closed='left').mean()
- expected = df.tshift(1, freq='D').resample('Q', kind='period',
- closed='left').mean()
+ expected = df.tshift(1, freq=freq).resample('Q', kind='period',
+ closed='left').mean()
expected = expected.to_timestamp(how='end')
expected.index += Timedelta(1, 'ns') - Timedelta(1, 'D')
tm.assert_frame_equal(result, expected)
@@ -1922,7 +1933,8 @@ def test_resample_timegrouper(self):
result = df.groupby(pd.Grouper(freq='M', key='A')).count()
assert_frame_equal(result, expected)
- def test_resample_nunique(self):
+ @pytest.mark.parametrize('freq', ['CD', 'D'])
+ def test_resample_nunique(self, freq):
# GH 12352
df = DataFrame({
@@ -1931,9 +1943,9 @@ def test_resample_nunique(self):
'DATE': {Timestamp('2015-06-05 00:00:00'): '2015-06-05',
Timestamp('2015-06-08 00:00:00'): '2015-06-08'}})
r = df.resample('D')
- g = df.groupby(pd.Grouper(freq='D'))
- expected = df.groupby(pd.Grouper(freq='D')).ID.apply(lambda x:
- x.nunique())
+ g = df.groupby(pd.Grouper(freq=freq))
+ expected = df.groupby(pd.Grouper(freq=freq)).ID.apply(lambda x:
+ x.nunique())
assert expected.name == 'ID'
for t in [r, g]:
@@ -1943,7 +1955,7 @@ def test_resample_nunique(self):
result = df.ID.resample('D').nunique()
assert_series_equal(result, expected)
- result = df.ID.groupby(pd.Grouper(freq='D')).nunique()
+ result = df.ID.groupby(pd.Grouper(freq=freq)).nunique()
assert_series_equal(result, expected)
def test_resample_nunique_with_date_gap(self):
@@ -2599,7 +2611,8 @@ def test_resample_weekly_all_na(self):
expected = ts.asfreq('W-THU').ffill()
assert_series_equal(result, expected)
- def test_resample_tz_localized(self):
+ @pytest.mark.parametrize('freq', ['CD', 'D'])
+ def test_resample_tz_localized(self, freq):
dr = date_range(start='2012-4-13', end='2012-5-1')
ts = Series(lrange(len(dr)), dr)
@@ -2626,7 +2639,7 @@ def test_resample_tz_localized(self):
s = Series([1, 2], index=idx)
result = s.resample('D', closed='right', label='right').mean()
- ex_index = date_range('2001-09-21', periods=1, freq='D',
+ ex_index = date_range('2001-09-21', periods=1, freq=freq,
tz='Australia/Sydney')
expected = Series([1.5], index=ex_index)
@@ -3108,9 +3121,10 @@ def f(x):
result = g.apply(f)
assert_frame_equal(result, expected)
- def test_apply_with_mutated_index(self):
+ @pytest.mark.parametrize('freq', ['CD', 'D'])
+ def test_apply_with_mutated_index(self, freq):
# GH 15169
- index = pd.date_range('1-1-2015', '12-31-15', freq='D')
+ index = pd.date_range('1-1-2015', '12-31-15', freq=freq)
df = DataFrame(data={'col1': np.random.rand(len(index))}, index=index)
def f(x):
@@ -3273,7 +3287,8 @@ def test_fails_on_no_datetime_index(self):
"instance of %r" % name):
df.groupby(TimeGrouper('D'))
- def test_aaa_group_order(self):
+ @pytest.mark.parametrize('freq', ['CD', 'D'])
+ def test_aaa_group_order(self, freq):
# GH 12840
# check TimeGrouper perform stable sorts
n = 20
@@ -3282,7 +3297,7 @@ def test_aaa_group_order(self):
df['key'] = [datetime(2013, 1, 1), datetime(2013, 1, 2),
datetime(2013, 1, 3), datetime(2013, 1, 4),
datetime(2013, 1, 5)] * 4
- grouped = df.groupby(TimeGrouper(key='key', freq='D'))
+ grouped = df.groupby(TimeGrouper(key='key', freq=freq))
tm.assert_frame_equal(grouped.get_group(datetime(2013, 1, 1)),
df[::5])
@@ -3295,7 +3310,8 @@ def test_aaa_group_order(self):
tm.assert_frame_equal(grouped.get_group(datetime(2013, 1, 5)),
df[4::5])
- def test_aggregate_normal(self):
+ @pytest.mark.parametrize('freq', ['CD', 'D'])
+ def test_aggregate_normal(self, freq):
# check TimeGrouper's aggregation is identical as normal groupby
n = 20
@@ -3309,18 +3325,18 @@ def test_aggregate_normal(self):
datetime(2013, 1, 5)] * 4
normal_grouped = normal_df.groupby('key')
- dt_grouped = dt_df.groupby(TimeGrouper(key='key', freq='D'))
+ dt_grouped = dt_df.groupby(TimeGrouper(key='key', freq=freq))
for func in ['min', 'max', 'prod', 'var', 'std', 'mean']:
expected = getattr(normal_grouped, func)()
dt_result = getattr(dt_grouped, func)()
- expected.index = date_range(start='2013-01-01', freq='D',
+ expected.index = date_range(start='2013-01-01', freq=freq,
periods=5, name='key')
assert_frame_equal(expected, dt_result)
for func in ['count', 'sum']:
expected = getattr(normal_grouped, func)()
- expected.index = date_range(start='2013-01-01', freq='D',
+ expected.index = date_range(start='2013-01-01', freq=freq,
periods=5, name='key')
dt_result = getattr(dt_grouped, func)()
assert_frame_equal(expected, dt_result)
@@ -3328,7 +3344,7 @@ def test_aggregate_normal(self):
# GH 7453
for func in ['size']:
expected = getattr(normal_grouped, func)()
- expected.index = date_range(start='2013-01-01', freq='D',
+ expected.index = date_range(start='2013-01-01', freq=freq,
periods=5, name='key')
dt_result = getattr(dt_grouped, func)()
assert_series_equal(expected, dt_result)
@@ -3336,7 +3352,7 @@ def test_aggregate_normal(self):
# GH 7453
for func in ['first', 'last']:
expected = getattr(normal_grouped, func)()
- expected.index = date_range(start='2013-01-01', freq='D',
+ expected.index = date_range(start='2013-01-01', freq=freq,
periods=5, name='key')
dt_result = getattr(dt_grouped, func)()
assert_frame_equal(expected, dt_result)
@@ -3387,7 +3403,8 @@ def test_resample_entirly_nat_window(self, method, unit):
('prod', 1),
('count', 0),
])
- def test_aggregate_with_nat(self, func, fill_value):
+ @pytest.mark.parametrize('freq', ['CD', 'D'])
+ def test_aggregate_with_nat(self, func, fill_value, freq):
# check TimeGrouper's aggregation is identical as normal groupby
# if NaT is included, 'var', 'std', 'mean', 'first','last'
# and 'nth' doesn't work yet
@@ -3402,7 +3419,7 @@ def test_aggregate_with_nat(self, func, fill_value):
datetime(2013, 1, 4), datetime(2013, 1, 5)] * 4
normal_grouped = normal_df.groupby('key')
- dt_grouped = dt_df.groupby(TimeGrouper(key='key', freq='D'))
+ dt_grouped = dt_df.groupby(TimeGrouper(key='key', freq=freq))
normal_result = getattr(normal_grouped, func)()
dt_result = getattr(dt_grouped, func)()
@@ -3411,12 +3428,13 @@ def test_aggregate_with_nat(self, func, fill_value):
columns=['A', 'B', 'C', 'D'])
expected = normal_result.append(pad)
expected = expected.sort_index()
- expected.index = date_range(start='2013-01-01', freq='D',
+ expected.index = date_range(start='2013-01-01', freq=freq,
periods=5, name='key')
assert_frame_equal(expected, dt_result)
assert dt_result.index.name == 'key'
- def test_aggregate_with_nat_size(self):
+ @pytest.mark.parametrize('freq', ['CD', 'D'])
+ def test_aggregate_with_nat_size(self, freq):
# GH 9925
n = 20
data = np.random.randn(n, 4).astype('int64')
@@ -3428,7 +3446,7 @@ def test_aggregate_with_nat_size(self):
datetime(2013, 1, 4), datetime(2013, 1, 5)] * 4
normal_grouped = normal_df.groupby('key')
- dt_grouped = dt_df.groupby(TimeGrouper(key='key', freq='D'))
+ dt_grouped = dt_df.groupby(TimeGrouper(key='key', freq=freq))
normal_result = normal_grouped.size()
dt_result = dt_grouped.size()
@@ -3436,7 +3454,7 @@ def test_aggregate_with_nat_size(self):
pad = Series([0], index=[3])
expected = normal_result.append(pad)
expected = expected.sort_index()
- expected.index = date_range(start='2013-01-01', freq='D',
+ expected.index = date_range(start='2013-01-01', freq=freq,
periods=5, name='key')
assert_series_equal(expected, dt_result)
assert dt_result.index.name == 'key'
diff --git a/pandas/tests/test_window.py b/pandas/tests/test_window.py
index ec6d83062c8b0..ec4e37dce27cf 100644
--- a/pandas/tests/test_window.py
+++ b/pandas/tests/test_window.py
@@ -3491,16 +3491,18 @@ def test_frame_on2(self):
result = df.rolling('2s', on='C')[['A', 'B', 'C']].sum()
tm.assert_frame_equal(result, expected)
- def test_basic_regular(self):
+ @pytest.mark.parametrize('freq', ['D', 'CD'])
+ def test_basic_regular(self, freq):
df = self.regular.copy()
- df.index = pd.date_range('20130101', periods=5, freq='D')
+ df.index = pd.date_range('20130101', periods=5, freq=freq)
expected = df.rolling(window=1, min_periods=1).sum()
result = df.rolling(window='1D').sum()
tm.assert_frame_equal(result, expected)
- df.index = pd.date_range('20130101', periods=5, freq='2D')
+ freq = '2' + freq
+ df.index = pd.date_range('20130101', periods=5, freq=freq)
expected = df.rolling(window=1, min_periods=1).sum()
result = df.rolling(window='2D', min_periods=1).sum()
tm.assert_frame_equal(result, expected)
| xref https://github.com/pandas-dev/pandas/pull/22288#issuecomment-419572535
It highlights usage of `CalendarDay` with `date_range` and `resample` specifically. | https://api.github.com/repos/pandas-dev/pandas/pulls/22633 | 2018-09-08T05:37:26Z | 2018-09-27T20:26:31Z | null | 2018-09-27T20:26:35Z
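For readers on a released pandas, the same wall-clock-versus-absolute distinction can be reproduced without this branch's `CalendarDay` offset: `Timedelta` performs absolute-time arithmetic, while `DateOffset(days=1)` preserves the wall clock.

```python
import pandas as pd

# 2016-10-30 is a 25-hour day in Helsinki: DST ends overnight.
ts = pd.Timestamp('2016-10-30 00:00:00', tz='Europe/Helsinki')

# Absolute time: exactly 24 hours later, so the wall clock only
# reaches 23:00 on the same calendar day.
ts + pd.Timedelta('1D')      # 2016-10-30 23:00:00+02:00

# Calendar-day semantics keep the wall-clock time instead.
ts + pd.DateOffset(days=1)   # 2016-10-31 00:00:00+02:00
```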
BUG: Some sas7bdat files with many columns are not parseable by read_sas | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 649629714c3b1..949bc7b73af7e 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -742,6 +742,8 @@ I/O
- :func:`read_excel()` will correctly show the deprecation warning for previously deprecated ``sheetname`` (:issue:`17994`)
- :func:`read_csv()` will correctly parse timezone-aware datetimes (:issue:`22256`)
- :func:`read_sas()` will parse numbers in sas7bdat-files that have width less than 8 bytes correctly. (:issue:`21616`)
+- :func:`read_sas()` will correctly parse sas7bdat files with many columns (:issue:`22628`)
+- :func:`read_sas()` will correctly parse sas7bdat files with data page types having also bit 7 set (so page type is 128 + 256 = 384) (:issue:`16615`)
Plotting
^^^^^^^^
diff --git a/pandas/io/sas/sas.pyx b/pandas/io/sas/sas.pyx
index 221c07a0631d2..a5bfd5866a261 100644
--- a/pandas/io/sas/sas.pyx
+++ b/pandas/io/sas/sas.pyx
@@ -244,8 +244,8 @@ cdef class Parser(object):
self.parser = parser
self.header_length = self.parser.header_length
self.column_count = parser.column_count
- self.lengths = parser._column_data_lengths
- self.offsets = parser._column_data_offsets
+ self.lengths = parser.column_data_lengths()
+ self.offsets = parser.column_data_offsets()
self.byte_chunk = parser._byte_chunk
self.string_chunk = parser._string_chunk
self.row_length = parser.row_length
@@ -257,7 +257,7 @@ cdef class Parser(object):
# page indicators
self.update_next_page()
- column_types = parser.column_types
+ column_types = parser.column_types()
# map column types
for j in range(self.column_count):
@@ -375,7 +375,7 @@ cdef class Parser(object):
if done:
return True
return False
- elif self.current_page_type == page_data_type:
+ elif self.current_page_type & page_data_type == page_data_type:
self.process_byte_array_with_data(
bit_offset + subheader_pointers_offset +
self.current_row_on_page_index * self.row_length,
@@ -437,7 +437,7 @@ cdef class Parser(object):
elif column_types[j] == column_type_string:
# string
string_chunk[js, current_row] = np.array(source[start:(
- start + lngt)]).tostring().rstrip()
+ start + lngt)]).tostring().rstrip(b"\x00 ")
js += 1
self.current_row_on_page_index += 1
diff --git a/pandas/io/sas/sas7bdat.py b/pandas/io/sas/sas7bdat.py
index efeb306b618d1..3582f538c16bf 100644
--- a/pandas/io/sas/sas7bdat.py
+++ b/pandas/io/sas/sas7bdat.py
@@ -82,7 +82,6 @@ def __init__(self, path_or_buf, index=None, convert_dates=True,
self.compression = ""
self.column_names_strings = []
self.column_names = []
- self.column_types = []
self.column_formats = []
self.columns = []
@@ -90,6 +89,8 @@ def __init__(self, path_or_buf, index=None, convert_dates=True,
self._cached_page = None
self._column_data_lengths = []
self._column_data_offsets = []
+ self._column_types = []
+
self._current_row_in_file_index = 0
self._current_row_on_page_index = 0
self._current_row_in_file_index = 0
@@ -102,6 +103,19 @@ def __init__(self, path_or_buf, index=None, convert_dates=True,
self._get_properties()
self._parse_metadata()
+ def column_data_lengths(self):
+ """Return a numpy int64 array of the column data lengths"""
+ return np.asarray(self._column_data_lengths, dtype=np.int64)
+
+ def column_data_offsets(self):
+ """Return a numpy int64 array of the column offsets"""
+ return np.asarray(self._column_data_offsets, dtype=np.int64)
+
+ def column_types(self):
+ """Returns a numpy character array of the column types:
+ s (string) or d (double)"""
+ return np.asarray(self._column_types, dtype=np.dtype('S1'))
+
def close(self):
try:
self.handle.close()
@@ -287,8 +301,10 @@ def _process_page_meta(self):
pt = [const.page_meta_type, const.page_amd_type] + const.page_mix_types
if self._current_page_type in pt:
self._process_page_metadata()
- return ((self._current_page_type in [256] + const.page_mix_types) or
- (self._current_page_data_subheader_pointers is not None))
+ is_data_page = self._current_page_type & const.page_data_type
+ is_mix_page = self._current_page_type in const.page_mix_types
+ return (is_data_page or is_mix_page
+ or self._current_page_data_subheader_pointers != [])
def _read_page_header(self):
bit_offset = self._page_bit_offset
@@ -503,12 +519,6 @@ def _process_columnattributes_subheader(self, offset, length):
int_len = self._int_length
column_attributes_vectors_count = (
length - 2 * int_len - 12) // (int_len + 8)
- self.column_types = np.empty(
- column_attributes_vectors_count, dtype=np.dtype('S1'))
- self._column_data_lengths = np.empty(
- column_attributes_vectors_count, dtype=np.int64)
- self._column_data_offsets = np.empty(
- column_attributes_vectors_count, dtype=np.int64)
for i in range(column_attributes_vectors_count):
col_data_offset = (offset + int_len +
const.column_data_offset_offset +
@@ -520,16 +530,13 @@ def _process_columnattributes_subheader(self, offset, length):
const.column_type_offset + i * (int_len + 8))
x = self._read_int(col_data_offset, int_len)
- self._column_data_offsets[i] = x
+ self._column_data_offsets.append(x)
x = self._read_int(col_data_len, const.column_data_length_length)
- self._column_data_lengths[i] = x
+ self._column_data_lengths.append(x)
x = self._read_int(col_types, const.column_type_length)
- if x == 1:
- self.column_types[i] = b'd'
- else:
- self.column_types[i] = b's'
+ self._column_types.append(b'd' if x == 1 else b's')
def _process_columnlist_subheader(self, offset, length):
# unknown purpose
@@ -586,7 +593,7 @@ def _process_format_subheader(self, offset, length):
col.name = self.column_names[current_column_number]
col.label = column_label
col.format = column_format
- col.ctype = self.column_types[current_column_number]
+ col.ctype = self._column_types[current_column_number]
col.length = self._column_data_lengths[current_column_number]
self.column_formats.append(column_format)
@@ -599,7 +606,7 @@ def read(self, nrows=None):
elif nrows is None:
nrows = self.row_count
- if len(self.column_types) == 0:
+ if len(self._column_types) == 0:
self.close()
raise EmptyDataError("No columns to parse from file")
@@ -610,8 +617,8 @@ def read(self, nrows=None):
if nrows > m:
nrows = m
- nd = (self.column_types == b'd').sum()
- ns = (self.column_types == b's').sum()
+ nd = self._column_types.count(b'd')
+ ns = self._column_types.count(b's')
self._string_chunk = np.empty((ns, nrows), dtype=np.object)
self._byte_chunk = np.zeros((nd, 8 * nrows), dtype=np.uint8)
@@ -639,11 +646,13 @@ def _read_next_page(self):
self._page_length))
self._read_page_header()
- if self._current_page_type == const.page_meta_type:
+ page_type = self._current_page_type
+ if page_type == const.page_meta_type:
self._process_page_metadata()
- pt = [const.page_meta_type, const.page_data_type]
- pt += [const.page_mix_types]
- if self._current_page_type not in pt:
+
+ is_data_page = page_type & const.page_data_type
+ pt = [const.page_meta_type] + const.page_mix_types
+ if not is_data_page and self._current_page_type not in pt:
return self._read_next_page()
return False
@@ -660,7 +669,7 @@ def _chunk_to_dataframe(self):
name = self.column_names[j]
- if self.column_types[j] == b'd':
+ if self._column_types[j] == b'd':
rslt[name] = self._byte_chunk[jb, :].view(
dtype=self.byte_order + 'd')
rslt[name] = np.asarray(rslt[name], dtype=np.float64)
@@ -674,7 +683,7 @@ def _chunk_to_dataframe(self):
rslt[name] = pd.to_datetime(rslt[name], unit=unit,
origin="1960-01-01")
jb += 1
- elif self.column_types[j] == b's':
+ elif self._column_types[j] == b's':
rslt[name] = self._string_chunk[js, :]
if self.convert_text and (self.encoding is not None):
rslt[name] = rslt[name].str.decode(
@@ -686,6 +695,6 @@ def _chunk_to_dataframe(self):
else:
self.close()
raise ValueError("unknown column type %s" %
- self.column_types[j])
+ self._column_types[j])
return rslt
diff --git a/pandas/tests/io/sas/data/load_log.sas7bdat b/pandas/tests/io/sas/data/load_log.sas7bdat
new file mode 100644
index 0000000000000..dc78925471baf
Binary files /dev/null and b/pandas/tests/io/sas/data/load_log.sas7bdat differ
diff --git a/pandas/tests/io/sas/data/many_columns.csv b/pandas/tests/io/sas/data/many_columns.csv
new file mode 100644
index 0000000000000..307fc30f33b9f
--- /dev/null
+++ b/pandas/tests/io/sas/data/many_columns.csv
@@ -0,0 +1,4 @@
+DATASRC,PDDOCID,age,agegt89,ASSESSA,ASSESS1,ASSESS3,ASSESS4,ASSESS5,ASSESS6,ASSESS7,week,BECK,conf1,conf2,conf3,demo3,demo4,demo5,demo6,demo7,demo11a,demo11b,demo11c,demo11d,derm1b,derm2,derm3,derm4,derm5a,derm5b,derm7,derm7a,derm7b,derm8,derm9,ECG3,ecgrtxt,ecgrhr,ecgrpr,ecgrqrs,ecgrqrsaxis,ecgrqt,ecgrqtc,ecgrrep,ecgrtime,mmse1,mmse2,mmse3,mmse4,mmse5,mmse6,mmse7,mmse8,mmse9,mmse10,mmse11,mmse12,mmse13,mmse14,mmse15,mmse16,mmse17,mmse18,mmse19,mmse20,mmse,mmsescor,mrf1,mrf2,mrf3,mrf4,mrf5,mrf6,mrf7,mrf8,mrf9,mrf10,mrf11,mrf12,mrf13,nvitl1s,nvitl1d,nvitl1r,nvitl2s,nvitl2d,nvitl2r,nvitl3s,nvitl3d,nvitl3r,nvitl4s,nvitl4d,nvitl4r,nvitl5,nvitl1,nvitl2,nvitl3,nvitl4,phys1,phys1a,phys14,phys15a,phys15b,phys15c,phys15d,phys16a,phys16b,phys16c,phys16d,phys17a,phys17b,phys17c,phys17d,phys18a,phys18b,phys18c,phys18d,phys19a,phys19b,phys20,phys22,phys24,phys26,phys28,PREG1,PREG2,updrsa,updrs1,updrs2,updrs3,updrs4,updrs5a,updrs6a,updrs7a,updrs8a,updrs9a,updrs10a,updrs11a,updrs12a,updrs13a,updrs14a,updrs15a,updrs16a,updrs17a,updrs18a,updrs19a,updrs20a1,updrs20b1,updrs20c1,updrs20d1,updrs20e1,updrs21a1,updrs21b1,updrs22a1,updrs22b1,updrs22c1,updrs22d1,updrs22e1,updrs23a1,updrs23b1,updrs24a1,updrs24b1,updrs25a1,updrs25b1,updrs26a1,updrs26b1,updrs26c1,updrs26d1,updrs27a,updrs28a,updrs29a,updrs30a,updrs31a,updrs32a,updrs33a,updrs34a,updrs35,updrs36,updrs37,updrs38,updrs39,updrs5b,updrs6b,updrs7b,updrs8b,updrs9b,updrs10b,updrs11b,updrs12b,updrs13b,updrs14b,updrs15b,updrs16b,updrs17b,updrs18b,updrs19b,updrs20a2,updrs20b2,updrs20c2,updrs20d2,updrs20e2,updrs21a2,updrs21b2,updrs22a2,updrs22b2,updrs22c2,updrs22d2,updrs22e2,updrs23a2,updrs23b2,updrs24a2,updrs24b2,updrs25a2,updrs25b2,updrs26a2,updrs26b2,updrs26c2,updrs26d2,updrs27b,updrs28b,updrs29b,updrs30b,updrs31b,updrs32b,updrs33b,updrs34b,updrs5c,updrs6c,updrs7c,updrs8c,updrs9c,updrs10c,updrs11c,updrs12c,updrs13c,updrs14c,updrs15c,updrs16c,updrs17c,updrs32c,updrs33c,updrs34c,updrsmental,updrsadl,updrsadlon,updrsadloff,updrsadlmin,updrstremor,updrstremortreat,updrstremormin,updrsrigid,updrsrigidtreat,updrsrigidmin,updrsmotor,updrsmotortreat,updrsmotormin,updrs,updrstrt,updrsmin,updrs4a,updrs41,updrs42,updrs43,updrs44,updrs45,updrs46,updrs47,updrs48,updrs49,updrs410,updrs411,vitl1s,vitl1d,vitl2,vitl3s,vitl3d,vitl4,vitl5,vitl6,assess,fbeck,conf,demo1,derm,ecg,ecgr,mrf,nvitl,fphys1,fpreg,fupdrs,fupdrs4,vitl,site,race,rImaged,rPD,rPDlt5,rAgeGt30,rHY,rMed,rMelanoma,rPreclude,rNeed,rEligible,gender,incsae,incsusp,incterm,increlated,inctermat,increason,incafter24,incendp,incres,disp2,disp3,disp4,disp6,inex1,inex2,inex3,inex4,inex5,inex6,inex7,inex8,inex9,inex10,inex11,inex12,inex13,inex14,inex15,inex16,inex17,inex18,inex19,inex20,inex21,inex22,inex23,inex24,inex25,inex26,inex27,inex28,treatment,treat,disp,inex,classify,enrollyr,demoyear,dob_yr,inexdays,demodays,onsetdays,diagdays,medstartdays,physdays,phys21dys,phys23dys,phys25dys,phys27dys,phys29dys,confdays,pregdays,nvitldays,nvitlscandays,vitldays,labdays,ecgdays,ecgtestdays,mrfdays,dermdays,dermexamdays,dermbiopdays,mmsedays,beckdays,updrdays,updr4days,assessdays,daystotherapy,dispdays,endpdys,termdys,SAEdys,resdys,lmeddys,wddays,VISIT_NO
+a030,ab304,43.0,0.0,0.0,0.0,,,,,,-2.0,0.0,1.0,1.0,,2.0,1.0,19.0,0.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,0.0,0.0,,,,,,,0.0,2.0,ABNORMAL,75.0,150.0,100.0,-3.0,410.0,460.0,2.0,1000.0,1.0,1.0,1.0,1.0,1.0,1.0,0.0,1.0,0.0,1.0,3.0,5.0,2.0,1.0,1.0,1.0,0.0,3.0,1.0,1.0,1.0,26.0,0.0,1.0,1.0,1.0,1.0,1.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,150.0,94.0,73.0,155.0,96.0,71.0,148.0,91.0,69.0,146.0,67.0,72.0,1.0,42840.0,46080.0,46980.0,30600.0,100.0,175.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,4.0,4.0,4.0,4.0,2.0,1.0,,1.0,0.0,0.0,0.0,0.0,1.0,0.0,0.0,1.0,1.0,1.0,0.0,0.0,0.0,0.0,1.0,0.0,0.0,1.0,1.0,0.0,0.0,0.5,0.0,0.0,0.0,1.0,1.0,2.0,2.0,1.0,1.5,0.0,1.0,0.0,1.0,0.0,1.0,0.0,1.0,0.0,1.0,0.0,1.0,1.0,1.0,1.0,2.5,95.0,95.0,7.0,,2.0,1.0,1.0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,0.0,5.0,,,5.0,1.5,,1.5,7.5,,7.5,20.0,,20.0,25.0,,25.0,,,,,,,,,,,,,138.0,86.0,72.0,130.0,80.0,80.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,abc,1.0,1.0,1.0,0.0,1.0,34.0,5.0,1.0,1.0,1.0,1.0,1.0,0.0,0.0,0.0,0.0,1.0,1.0,0.0,0.0,1.0,,0.0,3.0,0.0,1.0,0.0,4.0,3.0,,1.0,1.0,1.0,1.0,1.0,1.0,,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,Placebo,1.0,1.0,1.0,1.0,2002.0,2002.0,1914.0,-28.0,-28.0,-404.0,-28.0,0.0,-28.0,,,,,-6.0,-28.0,-13.0,-13.0,-12.0,-28.0,-28.0,-28.0,-28.0,-28.0,-14.0,-14.0,,-28.0,-28.0,-28.0,,-28.0,,659.0,426.0,659.0,,,658.0,100.0,ab
+a030,ab304,43.0,0.0,0.0,0.0,,,,,,0.0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,1000.0,,,,,,,,,,,,,,,,,,,,,0.0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,1.0,0.0,0.0,0.0,0.0,1.0,0.0,0.0,0.0,0.0,1.0,0.0,0.0,0.0,0.0,1.0,0.0,0.0,1.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,1.0,0.0,1.0,0.0,1.0,0.0,1.0,0.0,1.0,0.0,0.0,1.0,2.0,0.0,0.0,1.0,0.0,1.0,2.0,95.0,95.0,7.0,,2.0,1.0,2.0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,0.0,3.0,,,3.0,0.0,,0.0,3.0,,3.0,13.0,,13.0,16.0,,16.0,,,,,,,,,,,,,140.0,86.0,76.0,132.0,80.0,84.0,1.0,1.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,abc,0.0,0.0,1.0,0.0,1.0,34.0,5.0,1.0,1.0,1.0,1.0,1.0,0.0,0.0,0.0,0.0,1.0,1.0,0.0,0.0,1.0,,0.0,3.0,0.0,1.0,0.0,4.0,3.0,,1.0,1.0,1.0,1.0,1.0,1.0,,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,Placebo,1.0,1.0,1.0,1.0,2002.0,,1914.0,-28.0,,,,0.0,,,,,,,,,,,0.0,0.0,,,,,,,,,0.0,,0.0,,659.0,426.0,659.0,,,658.0,100.0,ab
+a030,ab304,43.0,0.0,0.0,0.0,,,,,,4.0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,1000.0,,,,,,,,,,,,,,,,,,,,,0.0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,1.0,0.0,0.0,0.0,0.0,2.0,0.0,0.0,0.0,0.0,1.0,1.0,0.0,0.0,0.0,1.0,0.0,0.0,1.0,1.0,0.0,0.0,0.0,0.0,0.0,0.5,0.0,0.0,1.0,1.0,0.0,0.0,0.0,1.0,0.0,1.0,0.0,1.0,1.0,1.0,1.0,2.0,0.0,1.0,1.0,0.5,1.0,2.0,90.0,95.0,7.0,,2.0,2.0,2.0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,0.0,5.0,,,5.0,0.5,,0.5,2.0,,2.0,16.0,,16.0,21.0,,21.0,0.0,,,,,,,,,,,,149.0,88.0,80.0,136.0,90.0,82.0,1.0,1.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,abc,0.0,0.0,1.0,1.0,1.0,34.0,5.0,1.0,1.0,1.0,1.0,1.0,0.0,0.0,0.0,0.0,1.0,1.0,0.0,0.0,1.0,,0.0,3.0,0.0,1.0,0.0,4.0,3.0,,1.0,1.0,1.0,1.0,1.0,1.0,,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,Placebo,1.0,1.0,1.0,1.0,2002.0,,1914.0,-28.0,,,,0.0,,,,,,,,,,,29.0,29.0,,,,,,,,,29.0,29.0,29.0,,659.0,426.0,659.0,,,658.0,100.0,ab
diff --git a/pandas/tests/io/sas/data/many_columns.sas7bdat b/pandas/tests/io/sas/data/many_columns.sas7bdat
new file mode 100644
index 0000000000000..582316fc59e18
Binary files /dev/null and b/pandas/tests/io/sas/data/many_columns.sas7bdat differ
diff --git a/pandas/tests/io/sas/test_sas7bdat.py b/pandas/tests/io/sas/test_sas7bdat.py
index efde152a918bd..f4b14241ed80e 100644
--- a/pandas/tests/io/sas/test_sas7bdat.py
+++ b/pandas/tests/io/sas/test_sas7bdat.py
@@ -199,6 +199,22 @@ def test_compact_numerical_values(datapath):
tm.assert_series_equal(result, expected, check_exact=True)
+def test_many_columns(datapath):
+ # Test for looking for column information in more places (PR #22628)
+ fname = datapath("io", "sas", "data", "many_columns.sas7bdat")
+ df = pd.read_sas(fname, encoding='latin-1')
+ fname = datapath("io", "sas", "data", "many_columns.csv")
+ df0 = pd.read_csv(fname, encoding='latin-1')
+ tm.assert_frame_equal(df, df0)
+
+
+def test_inconsistent_number_of_rows(datapath):
+ # Regression test for issue #16615. (PR #22628)
+ fname = datapath("io", "sas", "data", "load_log.sas7bdat")
+ df = pd.read_sas(fname, encoding='latin-1')
+ assert len(df) == 2097
+
+
def test_zero_variables(datapath):
# Check if the SAS file has zero variables (PR #18184)
fname = datapath("io", "sas", "data", "zero_variables.sas7bdat")
| - [X] tests added / passed
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [X] whatsnew entry
The reason is that column definitions may be split across multiple pages.
This PR allows column information to be parsed from different pages
and adds a test for it.
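The page-type part of the fix in isolation, as a sketch. It assumes the data-page constant value `256` from `pandas.io.sas.sas_constants`, consistent with the `128 + 256 = 384` note in the whatsnew entry:

```python
PAGE_DATA_TYPE = 256  # pandas.io.sas.sas_constants.page_data_type

def is_data_page(page_type):
    # Some sas7bdat files set extra flag bits on data pages
    # (e.g. 384 == 256 | 128), so an equality test against 256
    # silently skips those pages; the bitmask test does not.
    return page_type & PAGE_DATA_TYPE == PAGE_DATA_TYPE

print(is_data_page(256))  # True
print(is_data_page(384))  # True: bit 7 also set (the GH 16615 files)
print(is_data_page(512))  # False: mix pages are handled separately
```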
| https://api.github.com/repos/pandas-dev/pandas/pulls/22628 | 2018-09-07T15:07:08Z | 2018-09-18T12:13:46Z | 2018-09-18T12:13:46Z | 2018-09-18T12:17:17Z |
CLN/DEPR: removed deprecated as_indexer arg from str.match() | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 1979bde796452..5560d7edeca1a 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -528,7 +528,7 @@ Removal of prior version deprecations/changes
- Removal of the previously deprecated module ``pandas.core.datetools`` (:issue:`14105`, :issue:`14094`)
- Strings passed into :meth:`DataFrame.groupby` that refer to both column and index levels will raise a ``ValueError`` (:issue:`14432`)
- :meth:`Index.repeat` and :meth:`MultiIndex.repeat` have renamed the ``n`` argument to ``repeats``(:issue:`14645`)
--
+- Removal of the previously deprecated ``as_indexer`` keyword completely from ``str.match()`` (:issue:`22356`,:issue:`6581`)
.. _whatsnew_0240.performance:
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index ed1111ed3558a..08709d15c48bf 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -712,7 +712,7 @@ def rep(x, r):
return result
-def str_match(arr, pat, case=True, flags=0, na=np.nan, as_indexer=None):
+def str_match(arr, pat, case=True, flags=0, na=np.nan):
"""
Determine if each string matches a regular expression.
@@ -725,8 +725,6 @@ def str_match(arr, pat, case=True, flags=0, na=np.nan, as_indexer=None):
flags : int, default 0 (no flags)
re module flags, e.g. re.IGNORECASE
na : default NaN, fill value for missing values.
- as_indexer
- .. deprecated:: 0.21.0
Returns
-------
@@ -744,17 +742,6 @@ def str_match(arr, pat, case=True, flags=0, na=np.nan, as_indexer=None):
regex = re.compile(pat, flags=flags)
- if (as_indexer is False) and (regex.groups > 0):
- raise ValueError("as_indexer=False with a pattern with groups is no "
- "longer supported. Use '.str.extract(pat)' instead")
- elif as_indexer is not None:
- # Previously, this keyword was used for changing the default but
- # deprecated behaviour. This keyword is now no longer needed.
- warnings.warn("'as_indexer' keyword was specified but is ignored "
- "(match now returns a boolean indexer by default), "
- "and will be removed in a future version.",
- FutureWarning, stacklevel=3)
-
dtype = bool
f = lambda x: bool(regex.match(x))
@@ -2490,9 +2477,8 @@ def contains(self, pat, case=True, flags=0, na=np.nan, regex=True):
return self._wrap_result(result)
@copy(str_match)
- def match(self, pat, case=True, flags=0, na=np.nan, as_indexer=None):
- result = str_match(self._parent, pat, case=case, flags=flags, na=na,
- as_indexer=as_indexer)
+ def match(self, pat, case=True, flags=0, na=np.nan):
+ result = str_match(self._parent, pat, case=case, flags=flags, na=na)
return self._wrap_result(result)
@copy(str_replace)
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index ab508174fa4a9..25e634c21c5ef 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -947,21 +947,6 @@ def test_match(self):
exp = Series([True, NA, False])
tm.assert_series_equal(result, exp)
- # test passing as_indexer still works but is ignored
- values = Series(['fooBAD__barBAD', NA, 'foo'])
- exp = Series([True, NA, False])
- with tm.assert_produces_warning(FutureWarning):
- result = values.str.match('.*BAD[_]+.*BAD', as_indexer=True)
- tm.assert_series_equal(result, exp)
- with tm.assert_produces_warning(FutureWarning):
- result = values.str.match('.*BAD[_]+.*BAD', as_indexer=False)
- tm.assert_series_equal(result, exp)
- with tm.assert_produces_warning(FutureWarning):
- result = values.str.match('.*(BAD[_]+).*(BAD)', as_indexer=True)
- tm.assert_series_equal(result, exp)
- pytest.raises(ValueError, values.str.match, '.*(BAD[_]+).*(BAD)',
- as_indexer=False)
-
# mixed
mixed = Series(['aBAD_BAD', NA, 'BAD_b_BAD', True, datetime.today(),
'foo', None, 1, 2.])
| - [x] closes #22316
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
This is a renewal of #22356; the git history got tangled up, so I re-forked the repo. | https://api.github.com/repos/pandas-dev/pandas/pulls/22626 | 2018-09-07T09:05:41Z | 2018-09-07T12:47:56Z | 2018-09-07T12:47:56Z | 2018-09-09T09:20:05Z
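For anyone migrating off the removed keyword, the two behaviors it used to toggle now live in separate methods (values borrowed from the deleted test above):

```python
import pandas as pd

s = pd.Series(['fooBAD__barBAD', None, 'foo'])

# match() always returns a boolean indexer now (the long-standing
# default that as_indexer=True used to spell out explicitly).
s.str.match('.*BAD[_]+.*BAD')             # [True, NaN, False]

# The old as_indexer=False behavior, returning the matched groups,
# is what extract() is for.
s.str.extract('.*(BAD[_]+).*(BAD)', expand=True)
```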
STYLE: Fixing #18419 - Fixing flake8 issues to allow for >3.4.1 support | diff --git a/doc/source/conf.py b/doc/source/conf.py
index 29f947e1144ea..e10b788ba7b22 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -565,19 +565,19 @@ def linkcode_resolve(domain, info):
for part in fullname.split('.'):
try:
obj = getattr(obj, part)
- except:
+ except AttributeError:
return None
try:
fn = inspect.getsourcefile(obj)
- except:
+ except TypeError:
fn = None
if not fn:
return None
try:
source, lineno = inspect.getsourcelines(obj)
- except:
+ except OSError:
lineno = None
if lineno:
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 1979bde796452..2399bb0686503 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -737,7 +737,7 @@ Build Changes
- Building pandas for development now requires ``cython >= 0.28.2`` (:issue:`21688`)
- Testing pandas now requires ``hypothesis>=3.58`` (:issue:22280). You can find `the Hypothesis docs here <https://hypothesis.readthedocs.io/en/latest/index.html>`_, and a pandas-specific introduction :ref:`in the contributing guide <using-hypothesis>` .
--
+- ci/lint.sh now supports flake8 > 3.4.1 (:issue:`18419`)
Other
^^^^^
diff --git a/pandas/compat/pickle_compat.py b/pandas/compat/pickle_compat.py
index c1a9a9fc1ed13..7e6cbedd58cf2 100644
--- a/pandas/compat/pickle_compat.py
+++ b/pandas/compat/pickle_compat.py
@@ -33,7 +33,7 @@ def load_reduce(self):
cls = args[0]
stack[-1] = object.__new__(cls)
return
- except:
+ except Exception:
pass
# try to re-encode the arguments
@@ -44,7 +44,7 @@ def load_reduce(self):
try:
stack[-1] = func(*args)
return
- except:
+ except Exception:
pass
# unknown exception, re-raise
@@ -182,7 +182,7 @@ def load_newobj_ex(self):
try:
Unpickler.dispatch[pkl.NEWOBJ_EX[0]] = load_newobj_ex
-except:
+except Exception:
pass
@@ -200,15 +200,11 @@ def load(fh, encoding=None, compat=False, is_verbose=False):
compat: provide Series compatibility mode, boolean, default False
is_verbose: show exception output
"""
+ fh.seek(0)
+ if encoding is not None:
+ up = Unpickler(fh, encoding=encoding)
+ else:
+ up = Unpickler(fh)
+ up.is_verbose = is_verbose
- try:
- fh.seek(0)
- if encoding is not None:
- up = Unpickler(fh, encoding=encoding)
- else:
- up = Unpickler(fh)
- up.is_verbose = is_verbose
-
- return up.load()
- except:
- raise
+ return up.load()
diff --git a/pandas/core/computation/pytables.py b/pandas/core/computation/pytables.py
index 2bd1b0c5b3507..d169c4b7c6b0f 100644
--- a/pandas/core/computation/pytables.py
+++ b/pandas/core/computation/pytables.py
@@ -405,13 +405,13 @@ def visit_Assign(self, node, **kwargs):
return self.visit(cmpr)
def visit_Subscript(self, node, **kwargs):
- # only allow simple suscripts
+ # only allow simple subscripts
value = self.visit(node.value)
slobj = self.visit(node.slice)
try:
value = value.value
- except:
+ except AttributeError:
pass
try:
diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index b8cbb41501dd1..0dc428f3c37bf 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -440,7 +440,7 @@ def is_timedelta64_dtype(arr_or_dtype):
return False
try:
tipo = _get_dtype_type(arr_or_dtype)
- except:
+ except Exception:
return False
return issubclass(tipo, np.timedelta64)
diff --git a/pandas/core/dtypes/dtypes.py b/pandas/core/dtypes/dtypes.py
index f53ccc86fc4ff..a8a25ab4759d5 100644
--- a/pandas/core/dtypes/dtypes.py
+++ b/pandas/core/dtypes/dtypes.py
@@ -344,10 +344,8 @@ def construct_from_string(cls, string):
try:
if string == 'category':
return cls()
- except:
- pass
-
- raise TypeError("cannot construct a CategoricalDtype")
+ except Exception:
+ TypeError("cannot construct a CategoricalDtype")
@staticmethod
def validate_ordered(ordered):
@@ -499,7 +497,7 @@ def __new__(cls, unit=None, tz=None):
if m is not None:
unit = m.groupdict()['unit']
tz = m.groupdict()['tz']
- except:
+ except Exception:
raise ValueError("could not construct DatetimeTZDtype")
elif isinstance(unit, compat.string_types):
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 4faf4e88e5a3c..46be767d7ff8a 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3187,7 +3187,7 @@ def _ensure_valid_index(self, value):
if not len(self.index) and is_list_like(value):
try:
value = Series(value)
- except:
+ except ValueError:
raise ValueError('Cannot set a frame with no defined index '
'and a value that cannot be converted to a '
'Series')
@@ -7621,7 +7621,7 @@ def convert(v):
values = np.array([convert(v) for v in values])
else:
values = convert(values)
- except:
+ except Exception:
values = convert(values)
else:
diff --git a/pandas/core/indexes/frozen.py b/pandas/core/indexes/frozen.py
index 3c6b922178abf..9ecb7538109b7 100644
--- a/pandas/core/indexes/frozen.py
+++ b/pandas/core/indexes/frozen.py
@@ -136,7 +136,7 @@ def searchsorted(self, v, side='left', sorter=None):
# https://github.com/numpy/numpy/issues/5370
try:
v = self.dtype.type(v)
- except:
+ except Exception:
pass
return super(FrozenNDArray, self).searchsorted(
v, side=side, sorter=sorter)
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 955f1461075f9..a499bc7b34428 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -980,12 +980,12 @@ def _try_mi(k):
return _try_mi(key)
except (KeyError):
raise
- except:
+ except Exception:
pass
try:
return _try_mi(Timestamp(key))
- except:
+ except Exception:
pass
raise InvalidIndexError(key)
@@ -1640,7 +1640,7 @@ def append(self, other):
# if all(isinstance(x, MultiIndex) for x in other):
try:
return MultiIndex.from_tuples(new_tuples, names=self.names)
- except:
+ except TypeError:
return Index(new_tuples)
def argsort(self, *args, **kwargs):
@@ -2269,8 +2269,7 @@ def maybe_droplevels(indexer, levels, drop_level):
for i in sorted(levels, reverse=True):
try:
new_index = new_index.droplevel(i)
- except:
-
+ except ValueError:
# no dropping here
return orig_index
return new_index
@@ -2769,11 +2768,11 @@ def _convert_can_do_setop(self, other):
labels=[[]] * self.nlevels,
verify_integrity=False)
else:
- msg = 'other must be a MultiIndex or a list of tuples'
try:
other = MultiIndex.from_tuples(other)
- except:
- raise TypeError(msg)
+ except TypeError:
+ raise TypeError('other must be a MultiIndex or a list '
+ 'of tuples.')
else:
result_names = self.names if self.names == other.names else None
return other, result_names
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index a245ecfa007f3..b83c08d3bd9d2 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -2146,7 +2146,7 @@ def _getitem_tuple(self, tup):
self._has_valid_tuple(tup)
try:
return self._getitem_lowerdim(tup)
- except:
+ except IndexingError:
pass
retval = self.obj
@@ -2705,13 +2705,13 @@ def maybe_droplevels(index, key):
for _ in key:
try:
index = index.droplevel(0)
- except:
+ except ValueError:
# we have dropped too much, so back out
return original_index
else:
try:
index = index.droplevel(0)
- except:
+ except ValueError:
pass
return index
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index e735b35653cd4..b814ce6d37a5e 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -666,7 +666,7 @@ def _astype(self, dtype, copy=False, errors='raise', values=None,
newb = make_block(values, placement=self.mgr_locs,
klass=klass, ndim=self.ndim)
- except:
+ except Exception:
if errors == 'raise':
raise
newb = self.copy() if copy else self
@@ -1142,7 +1142,7 @@ def check_int_bool(self, inplace):
# a fill na type method
try:
m = missing.clean_fill_method(method)
- except:
+ except ValueError:
m = None
if m is not None:
@@ -1157,7 +1157,7 @@ def check_int_bool(self, inplace):
# try an interp method
try:
m = missing.clean_interp_method(method, **kwargs)
- except:
+ except ValueError:
m = None
if m is not None:
@@ -2438,7 +2438,7 @@ def set(self, locs, values, check=False):
try:
if (self.values[locs] == values).all():
return
- except:
+ except Exception:
pass
try:
self.values[locs] = values
@@ -3172,7 +3172,7 @@ def _astype(self, dtype, copy=False, errors='raise', values=None,
def __len__(self):
try:
return self.sp_index.length
- except:
+ except Exception:
return 0
def copy(self, deep=True, mgr=None):
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index f44fb4f6e9e14..2b87502ad1c5c 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -503,7 +503,7 @@ def reduction(values, axis=None, skipna=True):
try:
result = getattr(values, meth)(axis, dtype=dtype_max)
result.fill(np.nan)
- except:
+ except AttributeError:
result = np.nan
else:
result = getattr(values, meth)(axis)
@@ -813,7 +813,7 @@ def _ensure_numeric(x):
elif is_object_dtype(x):
try:
x = x.astype(np.complex128)
- except:
+ except Exception:
x = x.astype(np.float64)
else:
if not np.any(x.imag):
diff --git a/pandas/core/ops.py b/pandas/core/ops.py
index a86e57fd8876d..6ccd5f5407768 100644
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -1546,7 +1546,7 @@ def na_op(x, y):
y = bool(y)
try:
result = libops.scalar_binop(x, y, op)
- except:
+ except Exception:
raise TypeError("cannot compare a dtyped [{dtype}] array "
"with a scalar of type [{typ}]"
.format(dtype=x.dtype,
diff --git a/pandas/core/sparse/array.py b/pandas/core/sparse/array.py
index eb07e5ef6c85f..33302a048e2a8 100644
--- a/pandas/core/sparse/array.py
+++ b/pandas/core/sparse/array.py
@@ -306,7 +306,7 @@ def __setstate__(self, state):
def __len__(self):
try:
return self.sp_index.length
- except:
+ except Exception:
return 0
def __unicode__(self):
diff --git a/pandas/core/tools/datetimes.py b/pandas/core/tools/datetimes.py
index 57387b9ea870a..0b9ba67e74f08 100644
--- a/pandas/core/tools/datetimes.py
+++ b/pandas/core/tools/datetimes.py
@@ -241,7 +241,7 @@ def _convert_listlike_datetimes(arg, box, format, name=None, tz=None,
if format == '%Y%m%d':
try:
result = _attempt_YYYYMMDD(arg, errors=errors)
- except:
+ except Exception:
raise ValueError("cannot convert the input to "
"'%Y%m%d' date format")
@@ -331,7 +331,7 @@ def _adjust_to_origin(arg, origin, unit):
raise ValueError("unit must be 'D' for origin='julian'")
try:
arg = arg - j0
- except:
+ except Exception:
raise ValueError("incompatible 'arg' type for given "
"'origin'='julian'")
@@ -728,21 +728,21 @@ def calc_with_mask(carg, mask):
# try intlike / strings that are ints
try:
return calc(arg.astype(np.int64))
- except:
+ except Exception:
pass
# a float with actual np.nan
try:
carg = arg.astype(np.float64)
return calc_with_mask(carg, notna(carg))
- except:
+ except Exception:
pass
# string with NaN-like
try:
mask = ~algorithms.isin(arg, list(tslib.nat_strings))
return calc_with_mask(arg, mask)
- except:
+ except Exception:
pass
return None
diff --git a/pandas/core/window.py b/pandas/core/window.py
index eed0e97f30dc9..76f2655dbed43 100644
--- a/pandas/core/window.py
+++ b/pandas/core/window.py
@@ -2502,7 +2502,7 @@ def _offset(window, center):
offset = (window - 1) / 2. if center else 0
try:
return int(offset)
- except:
+ except ValueError:
return offset.astype(int)
diff --git a/pandas/io/clipboards.py b/pandas/io/clipboards.py
index 0d564069c681f..8fee3befce528 100644
--- a/pandas/io/clipboards.py
+++ b/pandas/io/clipboards.py
@@ -42,7 +42,7 @@ def read_clipboard(sep=r'\s+', **kwargs): # pragma: no cover
text, encoding=(kwargs.get('encoding') or
get_option('display.encoding'))
)
- except:
+ except Exception:
pass
# Excel copies into clipboard with \t separation
diff --git a/pandas/io/formats/console.py b/pandas/io/formats/console.py
index 45d50ea3fa073..ff6b37d4b34e9 100644
--- a/pandas/io/formats/console.py
+++ b/pandas/io/formats/console.py
@@ -100,7 +100,7 @@ def check_main():
try:
return __IPYTHON__ or check_main() # noqa
- except:
+ except Exception:
return check_main()
@@ -118,7 +118,7 @@ def in_qtconsole():
ip.config.get('IPKernelApp', {}).get('parent_appname', ""))
if 'qtconsole' in front_end.lower():
return True
- except:
+ except Exception:
return False
return False
@@ -137,7 +137,7 @@ def in_ipnb():
ip.config.get('IPKernelApp', {}).get('parent_appname', ""))
if 'notebook' in front_end.lower():
return True
- except:
+ except Exception:
return False
return False
@@ -149,7 +149,7 @@ def in_ipython_frontend():
try:
ip = get_ipython() # noqa
return 'zmq' in str(type(ip)).lower()
- except:
+ except Exception:
pass
return False
diff --git a/pandas/io/formats/terminal.py b/pandas/io/formats/terminal.py
index dcd6f2cf4a718..cbc3b140f814d 100644
--- a/pandas/io/formats/terminal.py
+++ b/pandas/io/formats/terminal.py
@@ -78,7 +78,7 @@ def _get_terminal_size_windows():
h = windll.kernel32.GetStdHandle(-12)
csbi = create_string_buffer(22)
res = windll.kernel32.GetConsoleScreenBufferInfo(h, csbi)
- except:
+ except Exception:
return None
if res:
import struct
@@ -108,7 +108,7 @@ def _get_terminal_size_tput():
output = proc.communicate(input=None)
rows = int(output[0])
return (cols, rows)
- except:
+ except Exception:
return None
@@ -120,7 +120,7 @@ def ioctl_GWINSZ(fd):
import struct
cr = struct.unpack(
'hh', fcntl.ioctl(fd, termios.TIOCGWINSZ, '1234'))
- except:
+ except Exception:
return None
return cr
cr = ioctl_GWINSZ(0) or ioctl_GWINSZ(1) or ioctl_GWINSZ(2)
@@ -129,13 +129,13 @@ def ioctl_GWINSZ(fd):
fd = os.open(os.ctermid(), os.O_RDONLY)
cr = ioctl_GWINSZ(fd)
os.close(fd)
- except:
+ except Exception:
pass
if not cr or cr == (0, 0):
try:
from os import environ as env
cr = (env['LINES'], env['COLUMNS'])
- except:
+ except KeyError:
return None
return int(cr[1]), int(cr[0])
diff --git a/pandas/io/packers.py b/pandas/io/packers.py
index 7a1e72637f4ce..4a77c60bdde59 100644
--- a/pandas/io/packers.py
+++ b/pandas/io/packers.py
@@ -703,7 +703,7 @@ def create_block(b):
dtype = dtype_for(obj[u'dtype'])
try:
return dtype(obj[u'data'])
- except:
+ except Exception:
return dtype.type(obj[u'data'])
elif typ == u'np_complex':
return complex(obj[u'real'] + u'+' + obj[u'imag'] + u'j')
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 8d37bf4c84d5d..371f81c039040 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -1808,7 +1808,7 @@ def close(self):
# close additional handles opened by C parser (for compression)
try:
self._reader.close()
- except:
+ except Exception:
pass
def _set_noconvert_columns(self):
@@ -3037,7 +3037,7 @@ def converter(*date_cols):
errors='ignore',
infer_datetime_format=infer_datetime_format
)
- except:
+ except Exception:
return tools.to_datetime(
parsing.try_parse_dates(strs, dayfirst=dayfirst))
else:
@@ -3266,7 +3266,7 @@ def _floatify_na_values(na_values):
v = float(v)
if not np.isnan(v):
result.add(v)
- except:
+ except Exception:
pass
return result
@@ -3287,11 +3287,11 @@ def _stringify_na_values(na_values):
result.append(str(v))
result.append(v)
- except:
+ except Exception:
pass
try:
result.append(int(x))
- except:
+ except Exception:
pass
return set(result)
diff --git a/pandas/io/pickle.py b/pandas/io/pickle.py
index 6738daec9397c..28d1fe37a2122 100644
--- a/pandas/io/pickle.py
+++ b/pandas/io/pickle.py
@@ -168,12 +168,12 @@ def try_read(path, encoding=None):
return read_wrapper(
lambda f: pc.load(f, encoding=encoding, compat=False))
# compat pickle
- except:
+ except Exception:
return read_wrapper(
lambda f: pc.load(f, encoding=encoding, compat=True))
try:
return try_read(path)
- except:
+ except Exception:
if PY3:
return try_read(path, encoding='latin1')
raise
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index c57b1c3e211f6..1d04833c404f9 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -258,7 +258,7 @@ def _tables():
try:
_table_file_open_policy_is_strict = (
tables.file._FILE_OPEN_POLICY == 'strict')
- except:
+ except Exception:
pass
return _table_mod
@@ -395,11 +395,11 @@ def read_hdf(path_or_buf, key=None, mode='r', **kwargs):
'contains multiple datasets.')
key = candidate_only_group._v_pathname
return store.select(key, auto_close=auto_close, **kwargs)
- except:
+ except Exception:
# if there is an error, close the store
try:
store.close()
- except:
+ except Exception:
pass
raise
@@ -517,10 +517,9 @@ def __getattr__(self, name):
""" allow attribute access to get stores """
try:
return self.get(name)
- except:
- pass
- raise AttributeError("'%s' object has no attribute '%s'" %
- (type(self).__name__, name))
+ except Exception:
+ raise AttributeError("'%s' object has no attribute '%s'" %
+ (type(self).__name__, name))
def __contains__(self, key):
""" check for existence of this key
@@ -675,7 +674,7 @@ def flush(self, fsync=False):
if fsync:
try:
os.fsync(self._handle.fileno())
- except:
+ except Exception:
pass
def get(self, key):
@@ -1161,7 +1160,7 @@ def get_node(self, key):
if not key.startswith('/'):
key = '/' + key
return self._handle.get_node(self.root, key)
- except:
+ except Exception:
return None
def get_storer(self, key):
@@ -1270,7 +1269,7 @@ def _validate_format(self, format, kwargs):
# validate
try:
kwargs['format'] = _FORMAT_MAP[format.lower()]
- except:
+ except Exception:
raise TypeError("invalid HDFStore format specified [{0}]"
.format(format))
@@ -1307,7 +1306,7 @@ def error(t):
try:
pt = _TYPE_MAP[type(value)]
- except:
+ except KeyError:
error('_TYPE_MAP')
# we are actually a table
@@ -1318,7 +1317,7 @@ def error(t):
if u('table') not in pt:
try:
return globals()[_STORER_MAP[pt]](self, group, **kwargs)
- except:
+ except Exception:
error('_STORER_MAP')
# existing node (and must be a table)
@@ -1354,12 +1353,12 @@ def error(t):
fields = group.table._v_attrs.fields
if len(fields) == 1 and fields[0] == u('value'):
tt = u('legacy_frame')
- except:
+ except Exception:
pass
try:
return globals()[_TABLE_MAP[tt]](self, group, **kwargs)
- except:
+ except Exception:
error('_TABLE_MAP')
def _write_to_group(self, key, value, format, index=True, append=False,
@@ -1624,7 +1623,7 @@ def is_indexed(self):
""" return whether I am an indexed column """
try:
return getattr(self.table.cols, self.cname).is_indexed
- except:
+ except Exception:
False
def copy(self):
@@ -1656,7 +1655,7 @@ def convert(self, values, nan_rep, encoding, errors):
kwargs['name'] = _ensure_decoded(self.index_name)
try:
self.values = Index(values, **kwargs)
- except:
+ except Exception:
# if the output freq is different that what we recorded,
# it should be None (see also 'doc example part 2')
@@ -1869,7 +1868,7 @@ def create_for_block(
m = re.search(r"values_block_(\d+)", name)
if m:
name = "values_%s" % m.groups()[0]
- except:
+ except Exception:
pass
return cls(name=name, cname=cname, **kwargs)
@@ -2232,7 +2231,7 @@ def convert(self, values, nan_rep, encoding, errors):
try:
self.data = self.data.astype(dtype, copy=False)
- except:
+ except Exception:
self.data = self.data.astype('O', copy=False)
# convert nans / decode
@@ -2325,7 +2324,7 @@ def set_version(self):
self.version = tuple(int(x) for x in version.split('.'))
if len(self.version) == 2:
self.version = self.version + (0,)
- except:
+ except Exception:
self.version = (0, 0, 0)
@property
@@ -2769,7 +2768,7 @@ def write_array(self, key, value, items=None):
else:
try:
items = list(items)
- except:
+ except TypeError:
pass
ws = performance_doc % (inferred_type, key, items)
warnings.warn(ws, PerformanceWarning, stacklevel=7)
@@ -2843,7 +2842,7 @@ class SeriesFixed(GenericFixed):
def shape(self):
try:
return len(getattr(self.group, 'values')),
- except:
+ except TypeError:
return None
def read(self, **kwargs):
@@ -2961,7 +2960,7 @@ def shape(self):
shape = shape[::-1]
return shape
- except:
+ except Exception:
return None
def read(self, start=None, stop=None, **kwargs):
@@ -3495,7 +3494,7 @@ def create_axes(self, axes, obj, validate=True, nan_rep=None,
if axes is None:
try:
axes = _AXES_MAP[type(obj)]
- except:
+ except KeyError:
raise TypeError("cannot properly create the storer for: "
"[group->%s,value->%s]"
% (self.group._v_name, type(obj)))
@@ -3614,7 +3613,7 @@ def get_blk_items(mgr, blocks):
b, b_items = by_items.pop(items)
new_blocks.append(b)
new_blk_items.append(b_items)
- except:
+ except Exception:
raise ValueError(
"cannot match existing table structure for [%s] on "
"appending data" % ','.join(pprint_thing(item) for
@@ -3642,7 +3641,7 @@ def get_blk_items(mgr, blocks):
if existing_table is not None and validate:
try:
existing_col = existing_table.values_axes[i]
- except:
+ except KeyError:
raise ValueError("Incompatible appended table [%s] with "
"existing table [%s]"
% (blocks, existing_table.values_axes))
@@ -4460,7 +4459,7 @@ def _get_info(info, name):
""" get/create the info for this name """
try:
idx = info[name]
- except:
+ except KeyError:
idx = info[name] = dict()
return idx
@@ -4782,7 +4781,7 @@ def __init__(self, table, where=None, start=None, stop=None, **kwargs):
)
self.coordinates = where
- except:
+ except Exception:
pass
if self.coordinates is None:
diff --git a/pandas/io/sas/sas_xport.py b/pandas/io/sas/sas_xport.py
index 14e7ad9682db6..993af716f7037 100644
--- a/pandas/io/sas/sas_xport.py
+++ b/pandas/io/sas/sas_xport.py
@@ -246,7 +246,7 @@ def __init__(self, filepath_or_buffer, index=None, encoding='ISO-8859-1',
contents = filepath_or_buffer.read()
try:
contents = contents.encode(self._encoding)
- except:
+ except Exception:
pass
self.filepath_or_buffer = compat.BytesIO(contents)
diff --git a/pandas/io/sas/sasreader.py b/pandas/io/sas/sasreader.py
index b8a0bf5733158..0478e1ce8bb82 100644
--- a/pandas/io/sas/sasreader.py
+++ b/pandas/io/sas/sasreader.py
@@ -46,7 +46,7 @@ def read_sas(filepath_or_buffer, format=None, index=None, encoding=None,
format = "sas7bdat"
else:
raise ValueError("unable to infer format of SAS file")
- except:
+ except Exception:
pass
if format.lower() == 'xport':
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index a582d32741ae9..c34d216f17f1e 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -382,7 +382,7 @@ def read_sql(sql, con, index_col=None, coerce_float=True, params=None,
try:
_is_table_name = pandas_sql.has_table(sql)
- except:
+ except Exception:
_is_table_name = False
if _is_table_name:
@@ -847,7 +847,7 @@ def _sqlalchemy_type(self, col):
try:
tz = col.tzinfo # noqa
return DateTime(timezone=True)
- except:
+ except Exception:
return DateTime
if col_type == 'timedelta64':
warnings.warn("the 'timedelta' type is not supported, and will be "
@@ -1360,7 +1360,7 @@ def run_transaction(self):
try:
yield cur
self.con.commit()
- except:
+ except Exception:
self.con.rollback()
raise
finally:
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index efd5f337fdf69..66b66e1f3887e 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -1252,12 +1252,12 @@ def _read_old_header(self, first_char):
try:
self.typlist = [self.TYPE_MAP[typ] for typ in typlist]
- except:
+ except KeyError:
raise ValueError("cannot convert stata types [{0}]"
.format(','.join(str(x) for x in typlist)))
try:
self.dtyplist = [self.DTYPE_MAP[typ] for typ in typlist]
- except:
+ except KeyError:
raise ValueError("cannot convert stata dtypes [{0}]"
.format(','.join(str(x) for x in typlist)))
diff --git a/pandas/tests/frame/test_arithmetic.py b/pandas/tests/frame/test_arithmetic.py
index f142f770a0c54..4c77309a555ce 100644
--- a/pandas/tests/frame/test_arithmetic.py
+++ b/pandas/tests/frame/test_arithmetic.py
@@ -214,7 +214,7 @@ def test_arith_flex_frame(self):
dtype = dict(C=None)
tm.assert_frame_equal(result, exp)
_check_mixed_int(result, dtype=dtype)
- except:
+ except Exception:
printing.pprint_thing("Failing operation %r" % op)
raise
diff --git a/pandas/tests/indexing/common.py b/pandas/tests/indexing/common.py
index cbf1bdbce9574..6fb1f315e1cc9 100644
--- a/pandas/tests/indexing/common.py
+++ b/pandas/tests/indexing/common.py
@@ -157,7 +157,7 @@ def get_result(self, obj, method, key, axis):
with catch_warnings(record=True):
try:
xp = getattr(obj, method).__getitem__(_axify(obj, key, axis))
- except:
+ except Exception:
xp = getattr(obj, method).__getitem__(key)
return xp
@@ -219,7 +219,7 @@ def _print(result, error=None):
try:
xp = self.get_result(obj, method2, k2, a)
- except:
+ except Exception:
result = 'no comp'
_print(result)
return
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index c19f8e57f9ae7..0650f912def0f 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -70,7 +70,7 @@ def has_horizontally_truncated_repr(df):
try: # Check header row
fst_line = np.array(repr(df).splitlines()[0].split())
cand_col = np.where(fst_line == '...')[0][0]
- except:
+ except Exception:
return False
# Make sure each row has this ... in the same place
r = repr(df)
@@ -452,7 +452,7 @@ def test_to_string_repr_unicode(self):
for line in rs[1:]:
try:
line = line.decode(get_option("display.encoding"))
- except:
+ except Exception:
pass
if not line.startswith('dtype:'):
assert len(line) == line_len
diff --git a/pandas/tests/io/test_pytables.py b/pandas/tests/io/test_pytables.py
index ddcfcc0842d1a..3ee90323afb3b 100644
--- a/pandas/tests/io/test_pytables.py
+++ b/pandas/tests/io/test_pytables.py
@@ -47,7 +47,7 @@ def safe_remove(path):
if path is not None:
try:
os.remove(path)
- except:
+ except Exception:
pass
@@ -55,7 +55,7 @@ def safe_close(store):
try:
if store is not None:
store.close()
- except:
+ except Exception:
pass
@@ -113,7 +113,7 @@ def _maybe_remove(store, key):
no content from previous tests using the same table name."""
try:
store.remove(key)
- except:
+ except Exception:
pass
@@ -4590,7 +4590,7 @@ def do_copy(f, new_f=None, keys=None,
safe_close(tstore)
try:
os.close(fd)
- except:
+ except Exception:
pass
safe_remove(new_f)
diff --git a/pandas/tests/io/test_sql.py b/pandas/tests/io/test_sql.py
index 824e5a2b23df3..acf7fb24162eb 100644
--- a/pandas/tests/io/test_sql.py
+++ b/pandas/tests/io/test_sql.py
@@ -1783,7 +1783,7 @@ def test_read_procedure(self):
try:
r1 = connection.execute(proc) # noqa
trans.commit()
- except:
+ except Exception:
trans.rollback()
raise
@@ -2363,7 +2363,7 @@ def setup_class(cls):
# No real user should allow root access with a blank password.
pymysql.connect(host='localhost', user='root', passwd='',
db='pandas_nosetest')
- except:
+ except Exception:
pass
else:
return
@@ -2390,7 +2390,7 @@ def setup_method(self, request, datapath):
# No real user should allow root access with a blank password.
self.conn = pymysql.connect(host='localhost', user='root',
passwd='', db='pandas_nosetest')
- except:
+ except Exception:
pass
else:
return
diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py
index dcfeab55f94fc..494f89ce0974f 100644
--- a/pandas/tests/test_multilevel.py
+++ b/pandas/tests/test_multilevel.py
@@ -1358,7 +1358,7 @@ def f():
try:
df = f()
- except:
+ except Exception:
pass
assert (df['foo', 'one'] == 0).all()
diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py
index a70ee80aee180..75cd2a1b3635a 100644
--- a/pandas/tests/test_nanops.py
+++ b/pandas/tests/test_nanops.py
@@ -141,12 +141,12 @@ def _coerce_tds(targ, res):
if axis != 0 and hasattr(
targ, 'shape') and targ.ndim and targ.shape != res.shape:
res = np.split(res, [targ.shape[0]], axis=0)[0]
- except:
+ except Exception:
targ, res = _coerce_tds(targ, res)
try:
tm.assert_almost_equal(targ, res, check_dtype=check_dtype)
- except:
+ except AssertionError:
# handle timedelta dtypes
if hasattr(targ, 'dtype') and targ.dtype == 'm8[ns]':
@@ -167,11 +167,11 @@ def _coerce_tds(targ, res):
else:
try:
res = res.astype('c16')
- except:
+ except Exception:
res = res.astype('f8')
try:
targ = targ.astype('c16')
- except:
+ except Exception:
targ = targ.astype('f8')
# there should never be a case where numpy returns an object
# but nanops doesn't, so make that an exception
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index b968c52ce3dfd..0fc71465faf11 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -337,13 +337,13 @@ def check_op(op, name):
for op in ops:
try:
check_op(getattr(operator, op), op)
- except:
+ except Exception:
pprint_thing("Failing operation: %r" % op)
raise
if compat.PY3:
try:
check_op(operator.truediv, 'div')
- except:
+ except Exception:
pprint_thing("Failing operation: %r" % 'div')
raise
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index ab508174fa4a9..b11f2ee18f0de 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -2708,7 +2708,7 @@ def test_slice(self):
expected = Series([s[start:stop:step] if not isna(s) else NA
for s in values])
tm.assert_series_equal(result, expected)
- except:
+ except AssertionError:
print('failed on %s:%s:%s' % (start, stop, step))
raise
diff --git a/pandas/tseries/holiday.py b/pandas/tseries/holiday.py
index 33dcf6d64b302..a8a05b88c0efb 100644
--- a/pandas/tseries/holiday.py
+++ b/pandas/tseries/holiday.py
@@ -292,7 +292,7 @@ def _apply_rule(self, dates):
def register(cls):
try:
name = cls.name
- except:
+ except Exception:
name = cls.__name__
holiday_calendars[name] = cls
@@ -424,7 +424,7 @@ def merge_class(base, other):
"""
try:
other = other.rules
- except:
+ except Exception:
pass
if not isinstance(other, list):
@@ -433,7 +433,7 @@ def merge_class(base, other):
try:
base = base.rules
- except:
+ except Exception:
pass
if not isinstance(base, list):
diff --git a/pandas/util/_print_versions.py b/pandas/util/_print_versions.py
index 5600834f3b615..252491511105f 100644
--- a/pandas/util/_print_versions.py
+++ b/pandas/util/_print_versions.py
@@ -21,7 +21,7 @@ def get_sys_info():
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
so, serr = pipe.communicate()
- except:
+ except ValueError:
pass
else:
if pipe.returncode == 0:
@@ -50,7 +50,7 @@ def get_sys_info():
("LANG", "{lang}".format(lang=os.environ.get('LANG', "None"))),
("LOCALE", '.'.join(map(str, locale.getlocale()))),
])
- except:
+ except Exception:
pass
return blob
@@ -108,7 +108,7 @@ def show_versions(as_json=False):
mod = importlib.import_module(modname)
ver = ver_f(mod)
deps_blob.append((modname, ver))
- except:
+ except Exception:
deps_blob.append((modname, None))
if (as_json):
diff --git a/pandas/util/_validators.py b/pandas/util/_validators.py
index a96563051e7de..51c8c95b63b10 100644
--- a/pandas/util/_validators.py
+++ b/pandas/util/_validators.py
@@ -59,7 +59,7 @@ def _check_for_default_values(fname, arg_val_dict, compat_args):
# could not compare them directly, so try comparison
# using the 'is' operator
- except:
+ except Exception:
match = (arg_val_dict[key] is compat_args[key])
if not match:
| - [x] closes #18419
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
There were tons of E722s. In a lot of cases I did not know which exceptions the author was expecting, so I have unfortunately had to fall back to `except Exception`, but where possible I have been more specific. If the reviewers know the expected exceptions, I'm happy to adjust.
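For reference, a minimal sketch of the difference (the `risky` helper is just a stand-in): a bare `except:` catches `BaseException`, so it also swallows `KeyboardInterrupt` and `SystemExit`, while `except Exception` lets those propagate.

```python
def risky():
    raise ValueError("boom")

try:
    risky()
except:            # E722: a bare except also traps KeyboardInterrupt/SystemExit
    pass

try:
    risky()
except Exception:  # still broad, but interpreter-level interrupts propagate
    pass
```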
Tested on flake8 3.5, log below:
```bash
(pandas-dev) ajc:pandas aaron$ ci/lint.sh
inside ci/lint.sh
Linting *.py
Linting *.py DONE
Linting setup.py
Linting setup.py DONE
Linting asv_bench/benchmarks/
Linting asv_bench/benchmarks/*.py DONE
Linting scripts/*.py
Linting scripts/*.py DONE
Linting doc scripts
Linting doc scripts DONE
Linting *.pyx
Linting *.pyx DONE
Linting *.pxi.in
linting -> pandas/src
Linting *.pxi.in DONE
Linting *.pxd
linting -> pandas/_libs
Linting *.pxd DONE
Linting *.c and *.h
linting -> pandas/_libs/src/*.h
ci/lint.sh: line 101: cpplint: command not found
linting -> pandas/_libs/src/parser
ci/lint.sh: line 101: cpplint: command not found
linting -> pandas/_libs/src/ujson
ci/lint.sh: line 101: cpplint: command not found
linting -> pandas/_libs/tslibs/src/datetime
ci/lint.sh: line 107: cpplint: command not found
Linting *.c and *.h DONE
Check for invalid testing
Check for invalid testing DONE
Check for non-standard imports
Check for non-standard imports DONE
Check for incorrect sphinx directives
Check for incorrect sphinx directives DONE
Check for deprecated messages without sphinx directive
Check for deprecated messages without sphinx directive DONE
Check for old-style classes
Check for old-style classes DONE
Check for backticks incorrectly rendering because of missing spaces
Check for backticks incorrectly rendering because of missing spaces DONE
(pandas-dev) ajc:pandas aaron$ flake8 --version
3.5.0 (flake8-comprehensions: 1.4.1, mccabe: 0.6.1, pycodestyle: 2.3.1, pyflakes: 1.6.0) CPython 3.6.5 on Darwin
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/22625 | 2018-09-07T00:03:14Z | 2018-09-07T00:11:06Z | null | 2018-09-07T00:11:06Z |
DataFrame.corrwith() method now supports 'how', 'method' and 'min_periods' params | diff --git a/doc/source/computation.rst b/doc/source/computation.rst
index 95142a7b83435..1e429bc14991c 100644
--- a/doc/source/computation.rst
+++ b/doc/source/computation.rst
@@ -159,8 +159,8 @@ compute the correlation based on histogram intersection:
frame.corr(method=histogram_intersection)
A related method :meth:`~DataFrame.corrwith` is implemented on DataFrame to
-compute the correlation between like-labeled Series contained in different
-DataFrame objects.
+compute the correlation with another DataFrame or Series object, matching
+columns of the same name (``how='pairwise'`` by default):
.. ipython:: python
@@ -168,8 +168,18 @@ DataFrame objects.
columns = ['one', 'two', 'three', 'four']
df1 = pd.DataFrame(np.random.randn(5, 4), index=index, columns=columns)
df2 = pd.DataFrame(np.random.randn(4, 4), index=index[:4], columns=columns)
- df1.corrwith(df2)
- df2.corrwith(df1, axis=1)
+ df1.corrwith(df2) # how='pairwise'
+ df2.corrwith(df1, axis=1) # how='pairwise'
+
+.. _computation.corrwith_all:
+.. versionadded:: 0.24.0
+
+... or compute the correlation matrix against another DataFrame, covering all
+possible column combinations (``how='all'``):
+
+.. ipython:: python
+ df1.corrwith(df2, how='all')
+ df1.corrwith(df2, axis=1, how='all')
.. _computation.ranking:
diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index a2abda019812a..9554ef49f48e2 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -27,6 +27,8 @@ New features
the user to override the engine's default behavior to include or omit the
dataframe's indexes from the resulting Parquet file. (:issue:`20768`)
- :meth:`DataFrame.corr` and :meth:`Series.corr` now accept a callable for generic calculation methods of correlation, e.g. histogram intersection (:issue:`22684`)
+- :meth:`DataFrame.corrwith` can now compute the correlation matrix between all columns/rows of two DataFrame objects using the ``how='all'`` parameter. It also supports the ``method`` and ``min_periods`` parameters with the same behavior as :meth:`DataFrame.corr`.
+ See the :ref:`section on using DataFrame.corrwith() <computation.corrwith_all>`. (:issue:`22622`)
- :func:`DataFrame.to_string` now accepts ``decimal`` as an argument, allowing the user to specify which decimal separator should be used in the output. (:issue:`23614`)
- :func:`read_feather` now accepts ``columns`` as an argument, allowing the user to specify which columns should be read. (:issue:`24025`)
- :func:`DataFrame.to_html` now accepts ``render_links`` as an argument, allowing the user to generate HTML with links to any URLs that appear in the DataFrame.
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index c4537db254132..75a4d0bdd6817 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -7022,56 +7022,101 @@ def cov(self, min_periods=None):
return self._constructor(baseCov, index=idx, columns=cols)
- def corrwith(self, other, axis=0, drop=False):
+ def corrwith(self, other, axis=0, how='pairwise', drop=False,
+ method='pearson', min_periods=1):
"""
- Compute pairwise correlation between rows or columns of two DataFrame
- objects.
+ Compute correlation between rows or columns of two DataFrame objects
+ or DataFrame and Series objects.
Parameters
----------
other : DataFrame, Series
axis : {0 or 'index', 1 or 'columns'}, default 0
0 or 'index' to compute column-wise, 1 or 'columns' for row-wise
+    how : {'pairwise', 'all'}, default 'pairwise'
+        Ignored when ``other`` is a Series.
+ * 'pairwise' to compute correlation between rows or columns with
+ the same name/index,
+ * 'all' to compute correlation between all possible pairs of rows
+ or columns
+
+ .. versionadded:: 0.24.0
drop : boolean, default False
Drop missing indices from result, default returns union of all
+ method : {'pearson', 'kendall', 'spearman'} or callable
+ * 'pearson' : standard correlation coefficient
+ * 'kendall' : Kendall Tau correlation coefficient
+ * 'spearman' : Spearman rank correlation
+ * callable : callable with input two 1d ndarrays
+ and returning a float
+
+ .. versionadded:: 0.24.0
+ min_periods : int, optional
+ Minimum number of observations required per pair of columns
+ to have a valid result. Currently only available for pearson
+            and spearman correlation.
+
+ .. versionadded:: 0.24.0
Returns
-------
- correls : Series
+ correl : Series (how='pairwise') or DataFrame (how='all')
"""
+
axis = self._get_axis_number(axis)
this = self._get_numeric_data()
if isinstance(other, Series):
- return this.apply(other.corr, axis=axis)
+ return this.apply(other.corr, axis=axis,
+ method=method, min_periods=min_periods)
- other = other._get_numeric_data()
+ if not isinstance(other, DataFrame):
+ raise TypeError("'other' parameter should be a DataFrame "
+ "or a Series object")
- left, right = this.align(other, join='inner', copy=False)
+ other = other._get_numeric_data()
- # mask missing values
- left = left + right * 0
- right = right + left * 0
+ if (isinstance(this.columns, MultiIndex)
+ or isinstance(other.columns, MultiIndex)):
+ raise ValueError("MultiIndex is not supported")
if axis == 1:
- left = left.T
- right = right.T
-
- # demeaned data
- ldem = left - left.mean()
- rdem = right - right.mean()
+ this = this.transpose()
+ other = other.transpose()
+
+ if (len(set(this.columns)) != len(this.columns)
+ or len(set(other.columns)) != len(other.columns)):
+ raise ValueError("Non-unique columns are not supported")
+
+ if how == 'all':
+ corr = np.zeros((this.shape[1], other.shape[1]))
+
+ for i in range(len(this.columns)):
+ for j in range(len(other.columns)):
+ corr[i, j] = this.iloc[:, i].corr(
+ other.iloc[:, j],
+ method=method, min_periods=min_periods)
+ return DataFrame(data=corr, index=this.columns,
+ columns=other.columns)
+
+ elif how == 'pairwise':
+ index = []
+ corr = []
+ for i, col in enumerate(this.columns):
+ if col in other.columns:
+ index.append(col)
+ corr.append(this.iloc[:, i].corr(other.loc[:, col],
+ method=method, min_periods=min_periods))
+
+ correl = Series(data=corr, index=index)
+
+ if not drop: # add missing columns to the resulting Series
+ result_index = this._get_axis(1).union(other._get_axis(1))
+ correl = correl.reindex(result_index)
+ return correl
- num = (ldem * rdem).sum()
- dom = (left.count() - 1) * left.std() * right.std()
-
- correl = num / dom
-
- if not drop:
- raxis = 1 if axis == 0 else 0
- result_index = this._get_axis(raxis).union(other._get_axis(raxis))
- correl = correl.reindex(result_index)
-
- return correl
+ else:
+ raise ValueError("'how' parameter should be 'pairwise' or 'all'")
# ----------------------------------------------------------------------
# ndarray-like stats methods
diff --git a/pandas/tests/frame/test_analytics.py b/pandas/tests/frame/test_analytics.py
index 88262220015c7..b6120c216b5cd 100644
--- a/pandas/tests/frame/test_analytics.py
+++ b/pandas/tests/frame/test_analytics.py
@@ -466,6 +466,58 @@ def test_corrwith_mixed_dtypes(self):
expected = pd.Series(data=corrs, index=['a', 'b'])
tm.assert_series_equal(result, expected)
+ def test_corrwith_multindex(self):
+ # PR #22622
+        df = pd.DataFrame(np.random.random((4, 4)))
+        df.columns = pd.MultiIndex.from_product([[1, 2], ['A', 'B']])
+
+        # must raise an exception if columns use MultiIndex
+        with pytest.raises(ValueError):
+            df.corrwith(df)
+
+ def test_corrwith_non_unique_columns(self):
+ # PR #22622
+        df = pd.DataFrame(np.random.randn(10, 2), columns=['a'] * 2)
+
+        with pytest.raises(ValueError):
+            df.corrwith(df)
+
+ def test_corrwith_how_all(self):
+ # PR #22622
+ df = DataFrame({'a': np.random.randn(10000),
+ 'b': np.random.randn(10000)})
+ c1 = df.corrwith(df, how='all').loc['a', 'b']
+ c2 = np.corrcoef(df['a'], df['b'])[0][1]
+
+ tm.assert_almost_equal(c1, c2)
+ assert c1 < 1
+
+ # must raise an exception if other is not a DataFrame
+        with pytest.raises(TypeError):
+            df.corrwith(df.values)
+
+ def test_corrwith_how_all_axis1(self):
+ # PR #22622
+ data = np.random.randn(2, 1000)
+ columns = ['c' + str(i) for i in range(1000)]
+ index = ['a', 'b']
+
+ df = DataFrame(data=data, columns=columns, index=index)
+ c1 = df.corrwith(df, how='all', axis=1).loc['a', 'b']
+ c2 = np.corrcoef(df.loc['a', :], df.loc['b', :])[0][1]
+
+ tm.assert_almost_equal(c1, c2)
+ assert c1 < 1
+
def test_bool_describe_in_mixed_frame(self):
df = DataFrame({
'string_data': ['a', 'b', 'c', 'd', 'e'],
| - [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
The new `how='all'` mode of `DataFrame.corrwith()` creates a correlation matrix between
all columns of the first DataFrame object and all columns of the second
DataFrame object. It differs from the default `how='pairwise'` mode in that it computes correlations between all pairs of first and second columns, not only those that share the same name.
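For illustration, a rough usage sketch of the proposed API (the frames and column names below are made up):

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame(np.random.randn(5, 2), columns=['a', 'b'])
df2 = pd.DataFrame(np.random.randn(5, 3), columns=['b', 'c', 'd'])

# default pairwise mode: a Series over the union of the column names,
# with NaN for columns that have no counterpart
df1.corrwith(df2)

# proposed 'all' mode: a 2x3 DataFrame holding the correlation of
# every (df1 column, df2 column) pair
df1.corrwith(df2, how='all')
```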
| https://api.github.com/repos/pandas-dev/pandas/pulls/22622 | 2018-09-06T17:23:20Z | 2018-12-26T07:28:01Z | null | 2018-12-26T07:28:01Z |
Test in scripts/validate_docstrings.py that the short summary is always one line long | diff --git a/scripts/tests/test_validate_docstrings.py b/scripts/tests/test_validate_docstrings.py
index 0c0757c6963d7..00496f771570b 100644
--- a/scripts/tests/test_validate_docstrings.py
+++ b/scripts/tests/test_validate_docstrings.py
@@ -362,6 +362,15 @@ def multi_line(self):
which is not correct.
"""
+ def two_paragraph_multi_line(self):
+ """
+ Extends beyond one line
+ which is not correct.
+
+ Extends beyond one line, which in itself is correct but the
+ previous short summary should still be an issue.
+ """
+
class BadParameters(object):
"""
@@ -556,7 +565,9 @@ def test_bad_generic_functions(self, func):
('BadSummaries', 'no_capitalization',
('Summary must start with infinitive verb',)),
('BadSummaries', 'multi_line',
- ('a short summary in a single line should be present',)),
+ ('Summary should fit in a single line.',)),
+ ('BadSummaries', 'two_paragraph_multi_line',
+ ('Summary should fit in a single line.',)),
# Parameters tests
('BadParameters', 'missing_params',
('Parameters {**kwargs} not documented',)),
diff --git a/scripts/validate_docstrings.py b/scripts/validate_docstrings.py
index 83bb382480eaa..790a62b53845b 100755
--- a/scripts/validate_docstrings.py
+++ b/scripts/validate_docstrings.py
@@ -163,10 +163,12 @@ def double_blank_lines(self):
@property
def summary(self):
- if not self.doc['Extended Summary'] and len(self.doc['Summary']) > 1:
- return ''
return ' '.join(self.doc['Summary'])
+ @property
+ def num_summary_lines(self):
+ return len(self.doc['Summary'])
+
@property
def extended_summary(self):
if not self.doc['Extended Summary'] and len(self.doc['Summary']) > 1:
@@ -452,6 +454,8 @@ def validate_one(func_name):
errs.append('Summary must start with infinitive verb, '
'not third person (e.g. use "Generate" instead of '
'"Generates")')
+ if doc.num_summary_lines > 1:
+ errs.append("Summary should fit in a single line.")
if not doc.extended_summary:
wrns.append('No extended summary found')
| - [x] closes #22615
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
The previous check, which blanked the summary whenever there was no extended summary, doesn't seem to make sense. The test case I added failed before changing the script. | https://api.github.com/repos/pandas-dev/pandas/pulls/22617 | 2018-09-06T01:16:49Z | 2018-09-18T13:00:19Z | 2018-09-18T13:00:18Z | 2018-09-18T13:27:11Z |
avoid ValueError when overriding eq | diff --git a/pandas/_libs/index.pyx b/pandas/_libs/index.pyx
index 9c906a00bd4fe..e3e48327d14b1 100644
--- a/pandas/_libs/index.pyx
+++ b/pandas/_libs/index.pyx
@@ -157,12 +157,13 @@ cdef class IndexEngine:
ndarray[intp_t, ndim=1] found
int count
- indexer = self._get_index_values() == val
+        indexer = np.array(self._get_index_values() == val,
+                           dtype=bool, copy=False)
found = np.where(indexer)[0]
count = len(found)
if count > 1:
return indexer
+
if count == 1:
return int(found[0])
 | Using pandas >= 0.22 together with the [xpress](https://pypi.org/project/xpress) module that I maintain, I get a `ValueError` when calling `a.loc['foo']` on an index `a`. The reason has to do with xpress' overloading of a NumPy `eq` operation (and also `leq`, `geq`). This is done through NumPy's PyUFunc_FromFuncAndData() function, which is then passed as a value to a dictionary for key `'equal'`; the same happens with `'less_equal'` and `'greater_equal'`.
This overloading works by replacing function pointers in an array of (operand_type, operand_type, result_type) tuples and possibly changing those types. For xpress to work, one of the two elements of the array having `NPY_OBJECT` as operand types should be changed so that the result is also `NPY_OBJECT`. The ValueError is triggered in _maybe_get_bool_indexer(), where `indexer`, an ndarray of bytes, is cython-defined and then assigned the result of the comparison. The comparison runs xpress' code, which realizes it's a comparison of non-xpress objects and just reverts to the original comparison operation, but returns an array of **objects** rather than of bytes. Assigning it to `indexer` thus raises a ValueError.
My change normalizes the result of the comparison with `np.array(..., dtype=bool, copy=False)` before it is assigned to the typed `indexer` array, so an object-dtype result from an overloaded comparison no longer breaks the assignment.
I realize this is not a fix for any bug in pandas, but I believe this should make pandas compatible again with some modules that do the same sort of overloading, such as modeling modules.
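To make the failure mode concrete, here is a rough sketch in plain NumPy (no xpress required); the object-dtype array stands in for what an overloaded comparison can hand back, and the wrapper is the same one the patch applies:

```python
import numpy as np

# stand-in for the object-dtype result an overloaded ufunc can return
mask_obj = np.array([True, False, True], dtype=object)

# assigning this to a typed Cython buffer is what raised the ValueError;
# normalizing first yields a genuine boolean mask (copy=False avoids a
# copy when the input is already boolean)
mask = np.array(mask_obj, dtype=bool, copy=False)
print(mask.dtype)         # bool
print(np.where(mask)[0])  # [0 2]
```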
All tests passed.
[Edit] fixes [#22612](https://github.com/pandas-dev/pandas/issues/22612) | https://api.github.com/repos/pandas-dev/pandas/pulls/22611 | 2018-09-05T16:21:21Z | 2018-12-03T01:46:19Z | null | 2023-05-11T01:18:15Z |
BUG: Check types in Index.__contains__ (#22085) | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 3a44b0260153c..ef6e46976b50c 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -723,6 +723,7 @@ Indexing
- ``Float64Index.get_loc`` now raises ``KeyError`` when boolean key passed. (:issue:`19087`)
- Bug in :meth:`DataFrame.loc` when indexing with an :class:`IntervalIndex` (:issue:`19977`)
- :class:`Index` no longer mangles ``None``, ``NaN`` and ``NaT``, i.e. they are treated as three different keys. However, for numeric Index all three are still coerced to a ``NaN`` (:issue:`22332`)
+- Bug in ``scalar in Index`` if the scalar is a float while the ``Index`` is of integer dtype (:issue:`22085`)
Missing
^^^^^^^
diff --git a/pandas/core/indexes/numeric.py b/pandas/core/indexes/numeric.py
index 8d616468a87d9..7f64fb744c682 100644
--- a/pandas/core/indexes/numeric.py
+++ b/pandas/core/indexes/numeric.py
@@ -6,6 +6,7 @@
pandas_dtype,
needs_i8_conversion,
is_integer_dtype,
+ is_float,
is_bool,
is_bool_dtype,
is_scalar)
@@ -162,7 +163,25 @@ def insert(self, loc, item):
)
-class Int64Index(NumericIndex):
+class IntegerIndex(NumericIndex):
+ """
+ This is an abstract class for Int64Index, UInt64Index.
+ """
+
+ def __contains__(self, key):
+ """
+        Check if key is a float with a fractional part; if so, return False.
+ """
+ hash(key)
+ try:
+ if is_float(key) and int(key) != key:
+ return False
+ return key in self._engine
+ except (OverflowError, TypeError, ValueError):
+ return False
+
+
+class Int64Index(IntegerIndex):
__doc__ = _num_index_shared_docs['class_descr'] % _int64_descr_args
_typ = 'int64index'
@@ -220,7 +239,7 @@ def _assert_safe_casting(cls, data, subarr):
)
-class UInt64Index(NumericIndex):
+class UInt64Index(IntegerIndex):
__doc__ = _num_index_shared_docs['class_descr'] % _uint64_descr_args
_typ = 'uint64index'
diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py
index 33b7c1b8154c7..761c633f89da3 100644
--- a/pandas/tests/indexing/test_indexing.py
+++ b/pandas/tests/indexing/test_indexing.py
@@ -631,6 +631,21 @@ def test_mixed_index_not_contains(self, index, val):
# GH 19860
assert val not in index
+ def test_contains_with_float_index(self):
+ # GH#22085
+ integer_index = pd.Int64Index([0, 1, 2, 3])
+ uinteger_index = pd.UInt64Index([0, 1, 2, 3])
+ float_index = pd.Float64Index([0.1, 1.1, 2.2, 3.3])
+
+ for index in (integer_index, uinteger_index):
+ assert 1.1 not in index
+ assert 1.0 in index
+ assert 1 in index
+
+ assert 1.1 in float_index
+ assert 1.0 not in float_index
+ assert 1 not in float_index
+
def test_index_type_coercion(self):
with catch_warnings(record=True):
| - [x] closes #22085
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
I added an `is_float` check to the new `IntegerIndex.__contains__`.
If the key is a float with a fractional part and the `Index` dtype is integer, `__contains__` returns False.
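In usage terms (mirroring the new test):

```python
import pandas as pd

idx = pd.Int64Index([0, 1, 2, 3])
assert 1 in idx
assert 1.0 in idx      # an integral float still matches
assert 1.1 not in idx  # a fractional float now correctly returns False
```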
And this is the same as #22360. I deleted the branch while rebasing it... 😅 | https://api.github.com/repos/pandas-dev/pandas/pulls/22602 | 2018-09-05T07:07:32Z | 2018-09-19T21:17:13Z | 2018-09-19T21:17:13Z | 2018-09-20T04:13:54Z |
BLD: Fix openpyxl to 2.5.5 | diff --git a/ci/appveyor-27.yaml b/ci/appveyor-27.yaml
index 6843c82236a35..bcd9ddee1715e 100644
--- a/ci/appveyor-27.yaml
+++ b/ci/appveyor-27.yaml
@@ -13,7 +13,7 @@ dependencies:
- matplotlib
- numexpr
- numpy=1.12*
- - openpyxl
+ - openpyxl=2.5.5
- pytables
- python=2.7.*
- pytz
diff --git a/ci/appveyor-36.yaml b/ci/appveyor-36.yaml
index 47b14221bb34b..6230e9b6a1885 100644
--- a/ci/appveyor-36.yaml
+++ b/ci/appveyor-36.yaml
@@ -10,7 +10,7 @@ dependencies:
- matplotlib
- numexpr
- numpy=1.14*
- - openpyxl
+ - openpyxl=2.5.5
- pyarrow
- pytables
- python-dateutil
diff --git a/ci/circle-27-compat.yaml b/ci/circle-27-compat.yaml
index 5dee6b0c8ed07..84ec7e20fc8f1 100644
--- a/ci/circle-27-compat.yaml
+++ b/ci/circle-27-compat.yaml
@@ -8,7 +8,7 @@ dependencies:
- jinja2=2.8
- numexpr=2.4.4 # we test that we correctly don't use an unsupported numexpr
- numpy=1.9.3
- - openpyxl
+ - openpyxl=2.5.5
- psycopg2
- pytables=3.2.2
- python-dateutil=2.5.0
diff --git a/ci/circle-36-locale.yaml b/ci/circle-36-locale.yaml
index 59c8818eaef1e..ef97b85406709 100644
--- a/ci/circle-36-locale.yaml
+++ b/ci/circle-36-locale.yaml
@@ -13,7 +13,7 @@ dependencies:
- nomkl
- numexpr
- numpy
- - openpyxl
+ - openpyxl=2.5.5
- psycopg2
- pymysql
- pytables
diff --git a/ci/circle-36-locale_slow.yaml b/ci/circle-36-locale_slow.yaml
index 7e40bd1a9979e..14b23dd6f3e4c 100644
--- a/ci/circle-36-locale_slow.yaml
+++ b/ci/circle-36-locale_slow.yaml
@@ -14,7 +14,7 @@ dependencies:
- nomkl
- numexpr
- numpy
- - openpyxl
+ - openpyxl=2.5.5
- psycopg2
- pymysql
- pytables
diff --git a/ci/requirements-optional-conda.txt b/ci/requirements-optional-conda.txt
index 18aac30f04aea..376fdb1e14e3a 100644
--- a/ci/requirements-optional-conda.txt
+++ b/ci/requirements-optional-conda.txt
@@ -12,7 +12,7 @@ lxml
matplotlib
nbsphinx
numexpr
-openpyxl
+openpyxl=2.5.5
pyarrow
pymysql
pytables
diff --git a/ci/requirements-optional-pip.txt b/ci/requirements-optional-pip.txt
index 28dafc43b09c0..2e1bf0ca22bcf 100644
--- a/ci/requirements-optional-pip.txt
+++ b/ci/requirements-optional-pip.txt
@@ -14,7 +14,7 @@ lxml
matplotlib
nbsphinx
numexpr
-openpyxl
+openpyxl==2.5.5
pyarrow
pymysql
tables
@@ -28,4 +28,4 @@ statsmodels
xarray
xlrd
xlsxwriter
-xlwt
\ No newline at end of file
+xlwt
diff --git a/ci/travis-35-osx.yaml b/ci/travis-35-osx.yaml
index 797682bec7208..a36f748ded812 100644
--- a/ci/travis-35-osx.yaml
+++ b/ci/travis-35-osx.yaml
@@ -12,7 +12,7 @@ dependencies:
- nomkl
- numexpr
- numpy=1.10.4
- - openpyxl
+ - openpyxl=2.5.5
- pytables
- python=3.5*
- pytz
diff --git a/ci/travis-36-doc.yaml b/ci/travis-36-doc.yaml
index 9cbc46d0a70d7..50626088d5bc4 100644
--- a/ci/travis-36-doc.yaml
+++ b/ci/travis-36-doc.yaml
@@ -22,7 +22,7 @@ dependencies:
- notebook
- numexpr
- numpy=1.13*
- - openpyxl
+ - openpyxl=2.5.5
- pandoc
- pyqt
- pytables
diff --git a/ci/travis-36-slow.yaml b/ci/travis-36-slow.yaml
index 3157ecac3a902..1a7bc53e1b74b 100644
--- a/ci/travis-36-slow.yaml
+++ b/ci/travis-36-slow.yaml
@@ -10,7 +10,7 @@ dependencies:
- matplotlib
- numexpr
- numpy
- - openpyxl
+ - openpyxl=2.5.5
- patsy
- psycopg2
- pymysql
diff --git a/ci/travis-36.yaml b/ci/travis-36.yaml
index 990ad0fe87dd6..3c9daa5f8b73c 100644
--- a/ci/travis-36.yaml
+++ b/ci/travis-36.yaml
@@ -18,7 +18,7 @@ dependencies:
- nomkl
- numexpr
- numpy
- - openpyxl
+ - openpyxl=2.5.5
- psycopg2
- pyarrow
- pymysql
| `2.5.5` --> `2.5.6` broke compatibility with pandas `Timestamp` objects.
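One syntax note, since the two pinned file formats differ: conda environment files pin with a single `=`, while pip requirements files need `==`. For example:

```
# conda (ci/*.yaml, ci/requirements-optional-conda.txt)
openpyxl=2.5.5
# pip (ci/requirements-optional-pip.txt)
openpyxl==2.5.5
```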
Closes #22595. | https://api.github.com/repos/pandas-dev/pandas/pulls/22601 | 2018-09-05T06:51:54Z | 2018-09-05T10:52:28Z | 2018-09-05T10:52:28Z | 2018-09-18T19:56:59Z |
BUG: NaN should have pct rank of NaN | diff --git a/doc/source/whatsnew/v0.23.5.txt b/doc/source/whatsnew/v0.23.5.txt
index 2a1172c8050ad..8f4b1a13c2e9d 100644
--- a/doc/source/whatsnew/v0.23.5.txt
+++ b/doc/source/whatsnew/v0.23.5.txt
@@ -23,6 +23,9 @@ Fixed Regressions
- Constructing a DataFrame with an index argument that wasn't already an
instance of :class:`~pandas.core.Index` was broken in `4efb39f
<https://github.com/pandas-dev/pandas/commit/4efb39f01f5880122fa38d91e12d217ef70fad9e>`_ (:issue:`22227`).
+- Calling :meth:`DataFrameGroupBy.rank` and :meth:`SeriesGroupBy.rank` with empty groups
+ and ``pct=True`` was raising a ``ZeroDivisionError`` due to `c1068d9
+ <https://github.com/pandas-dev/pandas/commit/c1068d9d242c22cb2199156f6fb82eb5759178ae>`_ (:issue:`22519`)
-
-
diff --git a/pandas/_libs/groupby_helper.pxi.in b/pandas/_libs/groupby_helper.pxi.in
index 0062a6c8d31ab..765381d89705d 100644
--- a/pandas/_libs/groupby_helper.pxi.in
+++ b/pandas/_libs/groupby_helper.pxi.in
@@ -584,7 +584,12 @@ def group_rank_{{name}}(ndarray[float64_t, ndim=2] out,
if pct:
for i in range(N):
- out[i, 0] = out[i, 0] / grp_sizes[i, 0]
+                # We don't include NaN values in percentage
+                # rankings, so we assign them percentages of NaN.
+                # (NaN is detected via self-inequality.)
+                if out[i, 0] != out[i, 0]:
+ out[i, 0] = NAN
+ else:
+ out[i, 0] = out[i, 0] / grp_sizes[i, 0]
{{endif}}
{{endfor}}
diff --git a/pandas/tests/groupby/test_rank.py b/pandas/tests/groupby/test_rank.py
index f0dcf768e3607..f337af4d39e54 100644
--- a/pandas/tests/groupby/test_rank.py
+++ b/pandas/tests/groupby/test_rank.py
@@ -1,7 +1,7 @@
import pytest
import numpy as np
import pandas as pd
-from pandas import DataFrame, concat
+from pandas import DataFrame, Series, concat
from pandas.util import testing as tm
@@ -273,3 +273,20 @@ def test_rank_naoption_raises(ties_method, ascending, na_option, pct, vals):
df.groupby('key').rank(method=ties_method,
ascending=ascending,
na_option=na_option, pct=pct)
+
+
+def test_rank_empty_group():
+ # see gh-22519
+ column = "A"
+ df = DataFrame({
+ "A": [0, 1, 0],
+ "B": [1., np.nan, 2.]
+ })
+
+ result = df.groupby(column).B.rank(pct=True)
+ expected = Series([0.5, np.nan, 1.0], name="B")
+ tm.assert_series_equal(result, expected)
+
+ result = df.groupby(column).rank(pct=True)
+ expected = DataFrame({"B": [0.5, np.nan, 1.0]})
+ tm.assert_frame_equal(result, expected)
| Closes #22519. | https://api.github.com/repos/pandas-dev/pandas/pulls/22600 | 2018-09-05T06:35:26Z | 2018-09-08T02:27:24Z | 2018-09-08T02:27:24Z | 2018-09-08T05:39:21Z |
Implement delegate_names to allow decorating delegated attributes | diff --git a/pandas/core/accessor.py b/pandas/core/accessor.py
index 7a853d575aa69..eab529584d1fb 100644
--- a/pandas/core/accessor.py
+++ b/pandas/core/accessor.py
@@ -105,6 +105,38 @@ def f(self, *args, **kwargs):
setattr(cls, name, f)
+def delegate_names(delegate, accessors, typ, overwrite=False):
+ """
+ Add delegated names to a class using a class decorator. This provides
+ an alternative usage to directly calling `_add_delegate_accessors`
+ below a class definition.
+
+ Parameters
+ ----------
+ delegate : the class to get methods/properties & doc-strings
+    accessors : string list of accessors to add
+ typ : 'property' or 'method'
+ overwrite : boolean, default False
+ overwrite the method/property in the target class if it exists
+
+ Returns
+ -------
+ decorator
+
+ Examples
+ --------
+ @delegate_names(Categorical, ["categories", "ordered"], "property")
+ class CategoricalAccessor(PandasDelegate):
+ [...]
+ """
+ def add_delegate_accessors(cls):
+ cls._add_delegate_accessors(delegate, accessors, typ,
+ overwrite=overwrite)
+ return cls
+
+ return add_delegate_accessors
+
+
# Ported with modifications from xarray
# https://github.com/pydata/xarray/blob/master/xarray/core/extensions.py
# 1. We don't need to catch and re-raise AttributeErrors as RuntimeErrors
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 9b7320bf143c2..5410412d5f45b 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -34,7 +34,7 @@
is_dict_like)
from pandas.core.algorithms import factorize, take_1d, unique1d, take
-from pandas.core.accessor import PandasDelegate
+from pandas.core.accessor import PandasDelegate, delegate_names
from pandas.core.base import (PandasObject,
NoNewAttributesMixin, _shared_docs)
import pandas.core.common as com
@@ -2365,6 +2365,15 @@ def isin(self, values):
# The Series.cat accessor
+@delegate_names(delegate=Categorical,
+ accessors=["categories", "ordered"],
+ typ="property")
+@delegate_names(delegate=Categorical,
+ accessors=["rename_categories", "reorder_categories",
+ "add_categories", "remove_categories",
+ "remove_unused_categories", "set_categories",
+ "as_ordered", "as_unordered"],
+ typ="method")
class CategoricalAccessor(PandasDelegate, PandasObject, NoNewAttributesMixin):
"""
Accessor object for categorical properties of the Series values.
@@ -2424,15 +2433,6 @@ def _delegate_method(self, name, *args, **kwargs):
return Series(res, index=self.index, name=self.name)
-CategoricalAccessor._add_delegate_accessors(delegate=Categorical,
- accessors=["categories",
- "ordered"],
- typ='property')
-CategoricalAccessor._add_delegate_accessors(delegate=Categorical, accessors=[
- "rename_categories", "reorder_categories", "add_categories",
- "remove_categories", "remove_unused_categories", "set_categories",
- "as_ordered", "as_unordered"], typ='method')
-
# utility routines
diff --git a/pandas/core/indexes/accessors.py b/pandas/core/indexes/accessors.py
index 6ab8c4659c31e..a1868980faed3 100644
--- a/pandas/core/indexes/accessors.py
+++ b/pandas/core/indexes/accessors.py
@@ -12,7 +12,7 @@
is_timedelta64_dtype, is_categorical_dtype,
is_list_like)
-from pandas.core.accessor import PandasDelegate
+from pandas.core.accessor import PandasDelegate, delegate_names
from pandas.core.base import NoNewAttributesMixin, PandasObject
from pandas.core.indexes.datetimes import DatetimeIndex
from pandas.core.indexes.period import PeriodIndex
@@ -110,6 +110,12 @@ def _delegate_method(self, name, *args, **kwargs):
return result
+@delegate_names(delegate=DatetimeIndex,
+ accessors=DatetimeIndex._datetimelike_ops,
+ typ="property")
+@delegate_names(delegate=DatetimeIndex,
+ accessors=DatetimeIndex._datetimelike_methods,
+ typ="method")
class DatetimeProperties(Properties):
"""
Accessor object for datetimelike properties of the Series values.
@@ -175,16 +181,12 @@ def freq(self):
return self._get_values().inferred_freq
-DatetimeProperties._add_delegate_accessors(
- delegate=DatetimeIndex,
- accessors=DatetimeIndex._datetimelike_ops,
- typ='property')
-DatetimeProperties._add_delegate_accessors(
- delegate=DatetimeIndex,
- accessors=DatetimeIndex._datetimelike_methods,
- typ='method')
-
-
+@delegate_names(delegate=TimedeltaIndex,
+ accessors=TimedeltaIndex._datetimelike_ops,
+ typ="property")
+@delegate_names(delegate=TimedeltaIndex,
+ accessors=TimedeltaIndex._datetimelike_methods,
+ typ="method")
class TimedeltaProperties(Properties):
"""
Accessor object for datetimelike properties of the Series values.
@@ -268,16 +270,12 @@ def freq(self):
return self._get_values().inferred_freq
-TimedeltaProperties._add_delegate_accessors(
- delegate=TimedeltaIndex,
- accessors=TimedeltaIndex._datetimelike_ops,
- typ='property')
-TimedeltaProperties._add_delegate_accessors(
- delegate=TimedeltaIndex,
- accessors=TimedeltaIndex._datetimelike_methods,
- typ='method')
-
-
+@delegate_names(delegate=PeriodIndex,
+ accessors=PeriodIndex._datetimelike_ops,
+ typ="property")
+@delegate_names(delegate=PeriodIndex,
+ accessors=PeriodIndex._datetimelike_methods,
+ typ="method")
class PeriodProperties(Properties):
"""
Accessor object for datetimelike properties of the Series values.
@@ -293,16 +291,6 @@ class PeriodProperties(Properties):
"""
-PeriodProperties._add_delegate_accessors(
- delegate=PeriodIndex,
- accessors=PeriodIndex._datetimelike_ops,
- typ='property')
-PeriodProperties._add_delegate_accessors(
- delegate=PeriodIndex,
- accessors=PeriodIndex._datetimelike_methods,
- typ='method')
-
-
class CombinedDatetimelikeProperties(DatetimeProperties, TimedeltaProperties):
def __new__(cls, data):
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index e3a21efe269ce..45703c220a4be 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -30,6 +30,17 @@
_index_doc_kwargs.update(dict(target_klass='CategoricalIndex'))
+@accessor.delegate_names(
+ delegate=Categorical,
+ accessors=["rename_categories",
+ "reorder_categories",
+ "add_categories",
+ "remove_categories",
+ "remove_unused_categories",
+ "set_categories",
+ "as_ordered", "as_unordered",
+ "min", "max"],
+ typ='method', overwrite=True)
class CategoricalIndex(Index, accessor.PandasDelegate):
"""
@@ -835,24 +846,8 @@ def _delegate_method(self, name, *args, **kwargs):
return res
return CategoricalIndex(res, name=self.name)
- @classmethod
- def _add_accessors(cls):
- """ add in Categorical accessor methods """
-
- CategoricalIndex._add_delegate_accessors(
- delegate=Categorical, accessors=["rename_categories",
- "reorder_categories",
- "add_categories",
- "remove_categories",
- "remove_unused_categories",
- "set_categories",
- "as_ordered", "as_unordered",
- "min", "max"],
- typ='method', overwrite=True)
-
CategoricalIndex._add_numeric_methods_add_sub_disabled()
CategoricalIndex._add_numeric_methods_disabled()
CategoricalIndex._add_logical_methods_disabled()
CategoricalIndex._add_comparison_methods()
-CategoricalIndex._add_accessors()
| This PR defines a `delegate_names` decorator that provides an alternative (and to my taste, much nicer) syntax for pinning delegated attributes to Accessor classes.
Effectively this just moves the call to `_add_delegate_accessors` from after the class definition to a decorator on the class. I find the decorator much harder to overlook when reading a class definition.
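For readers unfamiliar with the pattern, here is a self-contained toy sketch of the idea. The helper body, `_Data`, and `Wrapper` below are invented for illustration; the real decorator routes through pandas' `_add_delegate_accessors` machinery rather than setting attributes directly.

```python
def delegate_names(delegate, accessors, typ):
    """Toy stand-in: forward each named accessor to ``self._values``."""
    def decorator(cls):
        for name in accessors:
            assert hasattr(delegate, name)  # catch typos at import time
            if typ == "property":
                setattr(cls, name, property(
                    lambda self, name=name: getattr(self._values, name)))
            else:  # typ == "method"
                def make(name):
                    def method(self, *args, **kwargs):
                        return getattr(self._values, name)(*args, **kwargs)
                    return method
                setattr(cls, name, make(name))
        return cls
    return decorator


class _Data(list):
    @property
    def size(self):
        return len(self)

    def doubled(self):
        return [x * 2 for x in self]


@delegate_names(delegate=_Data, accessors=["size"], typ="property")
@delegate_names(delegate=_Data, accessors=["doubled"], typ="method")
class Wrapper(object):
    def __init__(self, data):
        self._values = _Data(data)


w = Wrapper([1, 2, 3])
print(w.size, w.doubled())  # 3 [2, 4, 6]
```

The delegated names sit directly above the `class` statement, which is the readability win described above.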
- [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/22599 | 2018-09-05T02:14:45Z | 2018-09-08T02:33:41Z | 2018-09-08T02:33:41Z | 2018-09-08T03:14:07Z |
Fix string format in test runner | diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index aee7dba450a30..01fafd7219382 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -2394,7 +2394,7 @@ def wrapper(*args, **kwargs):
raise
else:
skip("Skipping test due to lack of connectivity"
- " and error {error}".format(e))
+ " and error {error}".format(error=e))
return wrapper
| The format string uses the named placeholder `{error}`, so `.format()` expects a keyword argument; the positional call raised `KeyError: 'error'` instead of skipping. Updated to pass `error=e`, matching the other skip messages nearby.
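A minimal reproduction of the failure mode, for the record: the named placeholder means the positional call raises while building the skip message, rather than skipping.

```python
e = OSError("timed out")
try:
    "Skipping test due to lack of connectivity and error {error}".format(e)
except KeyError as exc:
    print(repr(exc))  # KeyError raised for the missing 'error' keyword
print("Skipping test due to lack of connectivity"
      " and error {error}".format(error=e))  # the fixed form
```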
| https://api.github.com/repos/pandas-dev/pandas/pulls/22598 | 2018-09-04T22:20:33Z | 2018-09-06T19:42:42Z | 2018-09-06T19:42:42Z | 2018-09-06T19:42:49Z |
Set hypothesis healthcheck | diff --git a/pandas/conftest.py b/pandas/conftest.py
index a49bab31f0bc8..fdac045e67ffa 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -9,6 +9,11 @@
from pandas.compat import PY3
import pandas.util._test_decorators as td
+import hypothesis
+hypothesis.settings.suppress_health_check = (hypothesis.HealthCheck.too_slow,)
+# HealthCheck.all() to disable all health checks
+# https://hypothesis.readthedocs.io/en/latest/healthchecks.html
+
def pytest_addoption(parser):
parser.addoption("--skip-slow", action="store_true",
| - [y] closes #22593
- [pending] tests added / passed
- [y] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [N/A] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/22597 | 2018-09-04T22:20:22Z | 2018-09-08T02:32:38Z | 2018-09-08T02:32:37Z | 2018-09-08T02:32:41Z |
TST: add test to io/formats/test_to_html.py to close GH6131 | diff --git a/pandas/tests/io/formats/test_to_html.py b/pandas/tests/io/formats/test_to_html.py
index f69cac62513d4..845fb1ee3dc3a 100644
--- a/pandas/tests/io/formats/test_to_html.py
+++ b/pandas/tests/io/formats/test_to_html.py
@@ -1844,6 +1844,67 @@ def test_to_html_no_index_max_rows(self):
</table>""")
assert result == expected
+ def test_to_html_multiindex_max_cols(self):
+ # GH 6131
+ index = MultiIndex(levels=[['ba', 'bb', 'bc'], ['ca', 'cb', 'cc']],
+ labels=[[0, 1, 2], [0, 1, 2]],
+ names=['b', 'c'])
+ columns = MultiIndex(levels=[['d'], ['aa', 'ab', 'ac']],
+ labels=[[0, 0, 0], [0, 1, 2]],
+ names=[None, 'a'])
+ data = np.array(
+ [[1., np.nan, np.nan], [np.nan, 2., np.nan], [np.nan, np.nan, 3.]])
+ df = DataFrame(data, index, columns)
+ result = df.to_html(max_cols=2)
+ expected = dedent("""\
+ <table border="1" class="dataframe">
+ <thead>
+ <tr>
+ <th></th>
+ <th></th>
+ <th colspan="3" halign="left">d</th>
+ </tr>
+ <tr>
+ <th></th>
+ <th>a</th>
+ <th>aa</th>
+ <th>...</th>
+ <th>ac</th>
+ </tr>
+ <tr>
+ <th>b</th>
+ <th>c</th>
+ <th></th>
+ <th></th>
+ <th></th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th>ba</th>
+ <th>ca</th>
+ <td>1.0</td>
+ <td>...</td>
+ <td>NaN</td>
+ </tr>
+ <tr>
+ <th>bb</th>
+ <th>cb</th>
+ <td>NaN</td>
+ <td>...</td>
+ <td>NaN</td>
+ </tr>
+ <tr>
+ <th>bc</th>
+ <th>cc</th>
+ <td>NaN</td>
+ <td>...</td>
+ <td>3.0</td>
+ </tr>
+ </tbody>
+ </table>""")
+ assert result == expected
+
def test_to_html_notebook_has_style(self):
df = pd.DataFrame({"A": [1, 2, 3]})
result = df.to_html(notebook=True)
| - [x] closes #6131
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/22588 | 2018-09-04T16:00:39Z | 2018-09-09T17:11:29Z | 2018-09-09T17:11:29Z | 2018-09-09T18:39:31Z |
ENH: Replace skiprows with skip_rows to begin standardizing underscore usage in keyword arguments | diff --git a/asv_bench/benchmarks/io/csv.py b/asv_bench/benchmarks/io/csv.py
index 2d4bdc7ae812a..abdd6fc438587 100644
--- a/asv_bench/benchmarks/io/csv.py
+++ b/asv_bench/benchmarks/io/csv.py
@@ -87,9 +87,9 @@ class ReadCSVSkipRows(BaseIO):
goal_time = 0.2
fname = '__test__.csv'
params = [None, 10000]
- param_names = ['skiprows']
+ param_names = ['skip_rows']
- def setup(self, skiprows):
+ def setup(self, skip_rows):
N = 20000
index = tm.makeStringIndex(N)
df = DataFrame({'float1': np.random.randn(N),
@@ -100,8 +100,8 @@ def setup(self, skiprows):
index=index)
df.to_csv(self.fname)
- def time_skipprows(self, skiprows):
- read_csv(self.fname, skiprows=skiprows)
+ def time_skipprows(self, skip_rows):
+ read_csv(self.fname, skip_rows=skip_rows)
class ReadUint64Integers(StringIORewind):
diff --git a/doc/source/cookbook.rst b/doc/source/cookbook.rst
index f6fa9e9f86143..a4576060fec2b 100644
--- a/doc/source/cookbook.rst
+++ b/doc/source/cookbook.rst
@@ -1034,7 +1034,7 @@ Option 1: pass rows explicitly to skip rows
.. ipython:: python
- pd.read_csv(StringIO(data), sep=';', skiprows=[11,12],
+ pd.read_csv(StringIO(data), sep=';', skip_rows=[11,12],
index_col=0, parse_dates=True, header=10)
Option 2: read column names and then data
diff --git a/doc/source/io.rst b/doc/source/io.rst
index c2c8c1c17700f..d9888879e2317 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -186,7 +186,7 @@ false_values : list, default ``None``
Values to consider as ``False``.
skipinitialspace : boolean, default ``False``
Skip spaces after delimiter.
-skiprows : list-like or integer, default ``None``
+skip_rows : list-like or integer, default ``None``
Line numbers to skip (0-indexed) or number of lines to skip (int) at the start
of the file.
@@ -197,7 +197,7 @@ skiprows : list-like or integer, default ``None``
data = 'col1,col2,col3\na,b,1\na,b,2\nc,d,3'
pd.read_csv(StringIO(data))
- pd.read_csv(StringIO(data), skiprows=lambda x: x % 2 != 0)
+ pd.read_csv(StringIO(data), skip_rows=lambda x: x % 2 != 0)
skipfooter : int, default ``0``
Number of lines at bottom of file to skip (unsupported with engine='c').
@@ -326,7 +326,7 @@ comment : str, default ``None``
Indicates remainder of line should not be parsed. If found at the beginning of
a line, the line will be ignored altogether. This parameter must be a single
character. Like empty lines (as long as ``skip_blank_lines=True``), fully
- commented lines are ignored by the parameter `header` but not by `skiprows`.
+ commented lines are ignored by the parameter `header` but not by `skip_rows`.
For example, if ``comment='#'``, parsing '#empty\\na,b,c\\n1,2,3' with
`header=0` will result in 'a,b,c' being treated as the header.
encoding : str, default ``None``
@@ -651,24 +651,24 @@ If ``skip_blank_lines=False``, then ``read_csv`` will not ignore blank lines:
The presence of ignored lines might create ambiguities involving line numbers;
the parameter ``header`` uses row numbers (ignoring commented/empty
- lines), while ``skiprows`` uses line numbers (including commented/empty lines):
+ lines), while ``skip_rows`` uses line numbers (including commented/empty lines):
.. ipython:: python
data = '#comment\na,b,c\nA,B,C\n1,2,3'
pd.read_csv(StringIO(data), comment='#', header=1)
data = 'A,B,C\n#comment\na,b,c\n1,2,3'
- pd.read_csv(StringIO(data), comment='#', skiprows=2)
+ pd.read_csv(StringIO(data), comment='#', skip_rows=2)
- If both ``header`` and ``skiprows`` are specified, ``header`` will be
- relative to the end of ``skiprows``. For example:
+ If both ``header`` and ``skip_rows`` are specified, ``header`` will be
+ relative to the end of ``skip_rows``. For example:
.. ipython:: python
data = '# empty\n# second empty line\n# third empty' \
'line\nX,Y,Z\n1,2,3\nA,B,C\n1,2.,4.\n5.,NaN,10.0'
print(data)
- pd.read_csv(StringIO(data), comment='#', skiprows=4, header=1)
+ pd.read_csv(StringIO(data), comment='#', skip_rows=4, header=1)
.. _io.comments:
@@ -2373,14 +2373,14 @@ Specify a number of rows to skip:
.. code-block:: python
- dfs = pd.read_html(url, skiprows=0)
+ dfs = pd.read_html(url, skip_rows=0)
Specify a number of rows to skip using a list (``xrange`` (Python 2 only) works
as well):
.. code-block:: python
- dfs = pd.read_html(url, skiprows=range(2))
+ dfs = pd.read_html(url, skip_rows=range(2))
Specify an HTML attribute:
diff --git a/pandas/_libs/parsers.pyx b/pandas/_libs/parsers.pyx
index 91faed678192f..e1d7d999a16e4 100644
--- a/pandas/_libs/parsers.pyx
+++ b/pandas/_libs/parsers.pyx
@@ -294,7 +294,7 @@ cdef class TextReader:
object header, orig_header, names, header_start, header_end
object index_col
object low_memory
- object skiprows
+ object skip_rows
object dtype
object encoding
object compression
@@ -348,7 +348,7 @@ cdef class TextReader:
false_values=None,
allow_leading_cols=True,
low_memory=False,
- skiprows=None,
+ skip_rows=None,
skipfooter=0,
verbose=False,
mangle_dupe_cols=True,
@@ -436,8 +436,8 @@ cdef class TextReader:
self.parser.error_bad_lines = int(error_bad_lines)
self.parser.warn_bad_lines = int(warn_bad_lines)
- self.skiprows = skiprows
- if skiprows is not None:
+ self.skip_rows = skip_rows
+ if skip_rows is not None:
self._make_skiprow_set()
self.skipfooter = skipfooter
@@ -605,13 +605,13 @@ cdef class TextReader:
self.parser.quotechar = ord(quote_char)
cdef _make_skiprow_set(self):
- if isinstance(self.skiprows, (int, np.integer)):
- parser_set_skipfirstnrows(self.parser, self.skiprows)
- elif not callable(self.skiprows):
- for i in self.skiprows:
+ if isinstance(self.skip_rows, (int, np.integer)):
+ parser_set_skipfirstnrows(self.parser, self.skip_rows)
+ elif not callable(self.skip_rows):
+ for i in self.skip_rows:
parser_add_skiprow(self.parser, i)
else:
- self.parser.skipfunc = <PyObject *> self.skiprows
+ self.parser.skipfunc = <PyObject *> self.skip_rows
cdef _setup_parser_source(self, source):
cdef:
diff --git a/pandas/io/excel.py b/pandas/io/excel.py
index e2db6643c5ef0..01ea4a139915f 100644
--- a/pandas/io/excel.py
+++ b/pandas/io/excel.py
@@ -130,7 +130,7 @@
.. versionadded:: 0.19.0
-skiprows : list-like
+skip_rows : list-like
Rows to skip at the beginning (0-indexed)
nrows : int, default None
Number of rows to parse
@@ -295,7 +295,7 @@ def read_excel(io,
converters=None,
true_values=None,
false_values=None,
- skiprows=None,
+ skip_rows=None,
nrows=None,
na_values=None,
parse_dates=False,
@@ -330,7 +330,7 @@ def read_excel(io,
converters=converters,
true_values=true_values,
false_values=false_values,
- skiprows=skiprows,
+ skip_rows=skip_rows,
nrows=nrows,
na_values=na_values,
parse_dates=parse_dates,
@@ -422,7 +422,7 @@ def parse(self,
converters=None,
true_values=None,
false_values=None,
- skiprows=None,
+ skip_rows=None,
nrows=None,
na_values=None,
parse_dates=False,
@@ -457,7 +457,7 @@ def parse(self,
converters=converters,
true_values=true_values,
false_values=false_values,
- skiprows=skiprows,
+ skip_rows=skip_rows,
nrows=nrows,
na_values=na_values,
parse_dates=parse_dates,
@@ -511,7 +511,7 @@ def _parse_excel(self,
dtype=None,
true_values=None,
false_values=None,
- skiprows=None,
+ skip_rows=None,
nrows=None,
na_values=None,
verbose=False,
@@ -649,8 +649,8 @@ def _parse_cell(cell_contents, cell_typ):
header_names = []
control_row = [True] * len(data[0])
for row in header:
- if is_integer(skiprows):
- row += skiprows
+ if is_integer(skip_rows):
+ row += skip_rows
data[row], control_row = _fill_mi_header(
data[row], control_row)
@@ -687,7 +687,7 @@ def _parse_cell(cell_contents, cell_typ):
dtype=dtype,
true_values=true_values,
false_values=false_values,
- skiprows=skiprows,
+ skip_rows=skip_rows,
nrows=nrows,
na_values=na_values,
parse_dates=parse_dates,
diff --git a/pandas/io/html.py b/pandas/io/html.py
index cca27db00f48d..d4999d92ae0b3 100644
--- a/pandas/io/html.py
+++ b/pandas/io/html.py
@@ -85,32 +85,36 @@ def _remove_whitespace(s, regex=_RE_WHITESPACE):
return regex.sub(' ', s.strip())
-def _get_skiprows(skiprows):
+def _get_skiprows(skip_rows):
"""Get an iterator given an integer, slice or container.
Parameters
----------
- skiprows : int, slice, container
+ skip_rows : int, slice, container
The iterator to use to skip rows; can also be a slice.
Raises
------
TypeError
- * If `skiprows` is not a slice, integer, or Container
+ * If `skip_rows` is not a slice, integer, or Container
Returns
-------
it : iterable
A proper iterator to use to skip rows of a DataFrame.
"""
- if isinstance(skiprows, slice):
- return lrange(skiprows.start or 0, skiprows.stop, skiprows.step or 1)
- elif isinstance(skiprows, numbers.Integral) or is_list_like(skiprows):
- return skiprows
- elif skiprows is None:
+ if isinstance(skip_rows, slice):
+ return lrange(
+ skip_rows.start or 0,
+ skip_rows.stop,
+ skip_rows.step or 1
+ )
+ elif isinstance(skip_rows, numbers.Integral) or is_list_like(skip_rows):
+ return skip_rows
+ elif skip_rows is None:
return 0
raise TypeError('%r is not a valid type for skipping rows' %
- type(skiprows).__name__)
+ type(skip_rows).__name__)
def _read(obj):
@@ -779,7 +783,7 @@ def _expand_elements(body):
def _data_to_frame(**kwargs):
head, body, foot = kwargs.pop('data')
header = kwargs.pop('header')
- kwargs['skiprows'] = _get_skiprows(kwargs['skiprows'])
+ kwargs['skip_rows'] = _get_skiprows(kwargs['skip_rows'])
if head:
body = head + body
@@ -922,7 +926,7 @@ def _parse(flavor, io, match, attrs, encoding, displayed_only, **kwargs):
def read_html(io, match='.+', flavor=None, header=None, index_col=None,
- skiprows=None, attrs=None, parse_dates=False,
+ skip_rows=None, attrs=None, parse_dates=False,
tupleize_cols=None, thousands=',', encoding=None,
decimal='.', converters=None, na_values=None,
keep_default_na=True, displayed_only=True):
@@ -956,7 +960,7 @@ def read_html(io, match='.+', flavor=None, header=None, index_col=None,
index_col : int or list-like or None, optional
The column (or list of columns) to use to create the index.
- skiprows : int or list-like or slice or None, optional
+ skip_rows : int or list-like or slice or None, optional
0-based. Number of rows to skip after parsing the column integer. If a
sequence of integers or a slice is given, will skip the rows indexed by
that sequence. Note that a single element sequence means 'skip the nth
@@ -1060,7 +1064,7 @@ def read_html(io, match='.+', flavor=None, header=None, index_col=None,
.. versionadded:: 0.21.0
Similar to :func:`~pandas.read_csv` the `header` argument is applied
- **after** `skiprows` is applied.
+ **after** `skip_rows` is applied.
This function will *always* return a list of :class:`DataFrame` *or*
it will fail, e.g., it will *not* return an empty list.
@@ -1077,13 +1081,13 @@ def read_html(io, match='.+', flavor=None, header=None, index_col=None,
_importers()
# Type check here. We don't want to parse only to fail because of an
- # invalid value of an integer skiprows.
- if isinstance(skiprows, numbers.Integral) and skiprows < 0:
+ # invalid value of an integer skip_rows.
+ if isinstance(skip_rows, numbers.Integral) and skip_rows < 0:
raise ValueError('cannot skip rows starting from the end of the '
'data (you passed a negative value)')
_validate_header_arg(header)
return _parse(flavor=flavor, io=io, match=match, header=header,
- index_col=index_col, skiprows=skiprows,
+ index_col=index_col, skip_rows=skip_rows,
parse_dates=parse_dates, tupleize_cols=tupleize_cols,
thousands=thousands, attrs=attrs, encoding=encoding,
decimal=decimal, converters=converters, na_values=na_values,
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 8d37bf4c84d5d..4a3a655417ba4 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -144,7 +144,7 @@
Values to consider as False
skipinitialspace : boolean, default False
Skip spaces after delimiter.
-skiprows : list-like or integer or callable, default None
+skip_rows : list-like or integer or callable, default None
Line numbers to skip (0-indexed) or number of lines to skip (int)
at the start of the file.
@@ -264,7 +264,7 @@
of a line, the line will be ignored altogether. This parameter must be a
single character. Like empty lines (as long as ``skip_blank_lines=True``),
fully commented lines are ignored by the parameter `header` but not by
- `skiprows`. For example, if ``comment='#'``, parsing
+ `skip_rows`. For example, if ``comment='#'``, parsing
``#empty\\na,b,c\\n1,2,3`` with ``header=0`` will result in 'a,b,c' being
treated as the header.
encoding : str, default None
@@ -347,7 +347,7 @@
fields of each line as half-open intervals (i.e., [from, to[ ).
String value 'infer' can be used to instruct the parser to try
detecting the column specifications from the first 100 rows of
- the data which are not being skipped via skiprows (default='infer').
+ the data which are not being skipped via skip_rows (default='infer').
widths : list of ints. optional
A list of field widths which can be used instead of 'colspecs' if
the intervals are contiguous.
@@ -479,7 +479,7 @@ def _read(filepath_or_buffer, kwds):
'index_col': None,
'names': None,
'prefix': None,
- 'skiprows': None,
+ 'skip_rows': None,
'na_values': None,
'true_values': None,
'false_values': None,
@@ -572,7 +572,7 @@ def parser_f(filepath_or_buffer,
true_values=None,
false_values=None,
skipinitialspace=False,
- skiprows=None,
+ skip_rows=None,
nrows=None,
# NA and Missing Data Handling
@@ -617,7 +617,10 @@ def parser_f(filepath_or_buffer,
delim_whitespace=False,
low_memory=_c_parser_defaults['low_memory'],
memory_map=False,
- float_precision=None):
+ float_precision=None,
+
+ # Deprecated with warnings
+ skiprows=None):
# deprecate read_table GH21948
if name == "read_table":
@@ -647,6 +650,15 @@ def parser_f(filepath_or_buffer,
engine = 'c'
engine_specified = False
+ # Handle deprecated kwargs
+ if skiprows:
+ warnings.warn(
+ "skiprows will be deprecated. Use skip_rows instead.",
+ FutureWarning
+ )
+ if not skip_rows:
+ skip_rows = skiprows
+
kwds = dict(delimiter=delimiter,
engine=engine,
dialect=dialect,
@@ -664,7 +676,7 @@ def parser_f(filepath_or_buffer,
index_col=index_col,
names=names,
prefix=prefix,
- skiprows=skiprows,
+ skip_rows=skip_rows,
na_values=na_values,
true_values=true_values,
false_values=false_values,
@@ -960,7 +972,7 @@ def _clean_options(self, options, engine):
names = options['names']
converters = options['converters']
na_values = options['na_values']
- skiprows = options['skiprows']
+ skip_rows = options['skip_rows']
_validate_header_arg(options['header'])
@@ -1009,22 +1021,22 @@ def _clean_options(self, options, engine):
keep_default_na = options['keep_default_na']
na_values, na_fvalues = _clean_na_values(na_values, keep_default_na)
- # handle skiprows; this is internally handled by the
+ # handle skip_rows; this is internally handled by the
# c-engine, so only need for python parsers
if engine != 'c':
- if is_integer(skiprows):
- skiprows = lrange(skiprows)
- if skiprows is None:
- skiprows = set()
- elif not callable(skiprows):
- skiprows = set(skiprows)
+ if is_integer(skip_rows):
+ skip_rows = lrange(skip_rows)
+ if skip_rows is None:
+ skip_rows = set()
+ elif not callable(skip_rows):
+ skip_rows = set(skip_rows)
# put stuff back
result['names'] = names
result['converters'] = converters
result['na_values'] = na_values
result['na_fvalues'] = na_fvalues
- result['skiprows'] = skiprows
+ result['skip_rows'] = skip_rows
return result, engine
@@ -2006,7 +2018,7 @@ def TextParser(*args, **kwds):
parse_dates : boolean, default False
keep_date_col : boolean, default False
date_parser : function, default None
- skiprows : list of integers
+ skip_rows : list of integers
Row numbers to skip
skipfooter : int
Number of line at bottom of file to skip
@@ -2055,12 +2067,12 @@ def __init__(self, f, **kwds):
self.encoding = kwds['encoding']
self.compression = kwds['compression']
self.memory_map = kwds['memory_map']
- self.skiprows = kwds['skiprows']
+ self.skip_rows = kwds['skip_rows']
- if callable(self.skiprows):
- self.skipfunc = self.skiprows
+ if callable(self.skip_rows):
+ self.skipfunc = self.skip_rows
else:
- self.skipfunc = lambda x: x in self.skiprows
+ self.skipfunc = lambda x: x in self.skip_rows
self.skipfooter = _validate_skipfooter_arg(kwds['skipfooter'])
self.delimiter = kwds['delimiter']
@@ -2973,8 +2985,8 @@ def _get_lines(self, rows=None):
new_rows = self.data[self.pos:self.pos + rows]
new_pos = self.pos + rows
- # Check for stop rows. n.b.: self.skiprows is a set.
- if self.skiprows:
+ # Check for stop rows. n.b.: self.skip_rows is a set.
+ if self.skip_rows:
new_rows = [row for i, row in enumerate(new_rows)
if not self.skipfunc(i + self.pos)]
@@ -3000,7 +3012,7 @@ def _get_lines(self, rows=None):
new_rows.append(new_row)
except StopIteration:
- if self.skiprows:
+ if self.skip_rows:
new_rows = [row for i, row in enumerate(new_rows)
if not self.skipfunc(i + self.pos)]
lines.extend(new_rows)
@@ -3364,13 +3376,13 @@ class FixedWidthReader(BaseIterator):
A reader of fixed-width lines.
"""
- def __init__(self, f, colspecs, delimiter, comment, skiprows=None):
+ def __init__(self, f, colspecs, delimiter, comment, skip_rows=None):
self.f = f
self.buffer = None
self.delimiter = '\r\n' + delimiter if delimiter else '\n\r\t '
self.comment = comment
if colspecs == 'infer':
- self.colspecs = self.detect_colspecs(skiprows=skiprows)
+ self.colspecs = self.detect_colspecs(skip_rows=skip_rows)
else:
self.colspecs = colspecs
@@ -3386,14 +3398,14 @@ def __init__(self, f, colspecs, delimiter, comment, skiprows=None):
raise TypeError('Each column specification must be '
'2 element tuple or list of integers')
- def get_rows(self, n, skiprows=None):
+ def get_rows(self, n, skip_rows=None):
"""
Read rows from self.f, skipping as specified.
We distinguish buffer_rows (the first <= n lines)
from the rows returned to detect_colspecs because
it's simpler to leave the other locations with
- skiprows logic alone than to modify them to deal
+ skip_rows logic alone than to modify them to deal
with the fact we skipped some rows here as well.
Parameters
@@ -3401,7 +3413,7 @@ def get_rows(self, n, skiprows=None):
n : int
Number of rows to read from self.f, not counting
rows that are skipped.
- skiprows: set, optional
+ skip_rows: set, optional
Indices of rows to skip.
Returns
@@ -3410,12 +3422,12 @@ def get_rows(self, n, skiprows=None):
A list containing the rows to read.
"""
- if skiprows is None:
- skiprows = set()
+ if skip_rows is None:
+ skip_rows = set()
buffer_rows = []
detect_rows = []
for i, row in enumerate(self.f):
- if i not in skiprows:
+ if i not in skip_rows:
detect_rows.append(row)
buffer_rows.append(row)
if len(detect_rows) >= n:
@@ -3423,11 +3435,11 @@ def get_rows(self, n, skiprows=None):
self.buffer = iter(buffer_rows)
return detect_rows
- def detect_colspecs(self, n=100, skiprows=None):
+ def detect_colspecs(self, n=100, skip_rows=None):
# Regex escape the delimiters
delimiters = ''.join(r'\%s' % x for x in self.delimiter)
pattern = re.compile('([^%s]+)' % delimiters)
- rows = self.get_rows(n, skiprows)
+ rows = self.get_rows(n, skip_rows)
if not rows:
raise EmptyDataError("No rows from which to infer column width")
max_len = max(map(len, rows))
@@ -3470,4 +3482,4 @@ def __init__(self, f, **kwds):
def _make_reader(self, f):
self.data = FixedWidthReader(f, self.colspecs, self.delimiter,
- self.comment, self.skiprows)
+ self.comment, self.skip_rows)
diff --git a/pandas/tests/io/parser/c_parser_only.py b/pandas/tests/io/parser/c_parser_only.py
index 9dc7b070f889d..a0caa92dd3ca8 100644
--- a/pandas/tests/io/parser/c_parser_only.py
+++ b/pandas/tests/io/parser/c_parser_only.py
@@ -423,7 +423,7 @@ def test_comment_whitespace_delimited(self):
9 2 3 # skipped line
# comment"""
df = self.read_csv(StringIO(test_input), comment='#', header=None,
- delimiter='\\s+', skiprows=0,
+ delimiter='\\s+', skip_rows=0,
error_bad_lines=False)
error = sys.stderr.getvalue()
# skipped lines 2, 3, 4, 9
diff --git a/pandas/tests/io/parser/comment.py b/pandas/tests/io/parser/comment.py
index 9987a017cf985..4d2e793d082de 100644
--- a/pandas/tests/io/parser/comment.py
+++ b/pandas/tests/io/parser/comment.py
@@ -65,7 +65,7 @@ def test_comment_skiprows(self):
"""
# this should ignore the first four lines (including comments)
expected = np.array([[1., 2., 4.], [5., np.nan, 10.]])
- df = self.read_csv(StringIO(data), comment='#', skiprows=4)
+ df = self.read_csv(StringIO(data), comment='#', skip_rows=4)
tm.assert_numpy_array_equal(df.values, expected)
def test_comment_header(self):
@@ -91,11 +91,11 @@ def test_comment_skiprows_header(self):
1,2.,4.
5.,NaN,10.0
"""
- # skiprows should skip the first 4 lines (including comments), while
+ # skip_rows should skip the first 4 lines (including comments), while
# header should start from the second non-commented line starting
# with line 5
expected = np.array([[1., 2., 4.], [5., np.nan, 10.]])
- df = self.read_csv(StringIO(data), comment='#', skiprows=4, header=1)
+ df = self.read_csv(StringIO(data), comment='#', skip_rows=4, header=1)
tm.assert_numpy_array_equal(df.values, expected)
def test_custom_comment_char(self):
diff --git a/pandas/tests/io/parser/common.py b/pandas/tests/io/parser/common.py
index 9e871d27f0ce8..38625763522bd 100644
--- a/pandas/tests/io/parser/common.py
+++ b/pandas/tests/io/parser/common.py
@@ -146,7 +146,7 @@ def test_malformed(self):
it = self.read_table(StringIO(data), sep=',',
header=1, comment='#',
iterator=True, chunksize=1,
- skiprows=[2])
+ skip_rows=[2])
it.read(5)
# middle chunk
@@ -162,7 +162,7 @@ def test_malformed(self):
with tm.assert_raises_regex(Exception, msg):
it = self.read_table(StringIO(data), sep=',', header=1,
comment='#', iterator=True, chunksize=1,
- skiprows=[2])
+ skip_rows=[2])
it.read(3)
# last chunk
@@ -178,7 +178,7 @@ def test_malformed(self):
with tm.assert_raises_regex(Exception, msg):
it = self.read_table(StringIO(data), sep=',', header=1,
comment='#', iterator=True, chunksize=1,
- skiprows=[2])
+ skip_rows=[2])
it.read()
# skipfooter is not supported with the C parser yet
@@ -507,8 +507,8 @@ def test_iterator(self):
tm.assert_frame_equal(chunks[1], df[2:4])
tm.assert_frame_equal(chunks[2], df[4:])
- # pass skiprows
- parser = TextParser(lines, index_col=0, chunksize=2, skiprows=[1])
+ # pass skip_rows
+ parser = TextParser(lines, index_col=0, chunksize=2, skip_rows=[1])
chunks = list(parser)
tm.assert_frame_equal(chunks[0], df[1:3])
@@ -745,9 +745,9 @@ def test_utf16_bom_skiprows(self):
from io import TextIOWrapper
s = TextIOWrapper(s, encoding='utf-8')
- result = self.read_csv(path, encoding=enc, skiprows=2,
+ result = self.read_csv(path, encoding=enc, skip_rows=2,
sep=sep)
- expected = self.read_csv(s, encoding='utf-8', skiprows=2,
+ expected = self.read_csv(s, encoding='utf-8', skip_rows=2,
sep=sep)
s.close()
@@ -1041,7 +1041,7 @@ def test_eof_states(self):
# SKIP_LINE
data = 'a,b,c\n4,5,6\nskipme'
- result = self.read_csv(StringIO(data), skiprows=[2])
+ result = self.read_csv(StringIO(data), skip_rows=[2])
tm.assert_frame_equal(result, expected)
# With skip_blank_lines = False
@@ -1144,11 +1144,11 @@ def test_trailing_spaces(self):
# lines with trailing whitespace and blank lines
df = self.read_csv(StringIO(data.replace(',', ' ')),
header=None, delim_whitespace=True,
- skiprows=[0, 1, 2, 3, 5, 6], skip_blank_lines=True)
+ skip_rows=[0, 1, 2, 3, 5, 6], skip_blank_lines=True)
tm.assert_frame_equal(df, expected)
df = self.read_table(StringIO(data.replace(',', ' ')),
header=None, delim_whitespace=True,
- skiprows=[0, 1, 2, 3, 5, 6],
+ skip_rows=[0, 1, 2, 3, 5, 6],
skip_blank_lines=True)
tm.assert_frame_equal(df, expected)
@@ -1157,7 +1157,7 @@ def test_trailing_spaces(self):
"C": [4., 10]})
df = self.read_table(StringIO(data.replace(',', ' ')),
delim_whitespace=True,
- skiprows=[1, 2, 3, 5, 6], skip_blank_lines=True)
+ skip_rows=[1, 2, 3, 5, 6], skip_blank_lines=True)
tm.assert_frame_equal(df, expected)
def test_raise_on_sep_with_delim_whitespace(self):
diff --git a/pandas/tests/io/parser/header.py b/pandas/tests/io/parser/header.py
index ad3d4592bd599..13a36edd62797 100644
--- a/pandas/tests/io/parser/header.py
+++ b/pandas/tests/io/parser/header.py
@@ -152,7 +152,7 @@ def test_header_multiindex_common_format(self):
tm.assert_frame_equal(df, result)
# to_csv, tuples
- result = self.read_csv(StringIO(data), skiprows=3,
+ result = self.read_csv(StringIO(data), skip_rows=3,
names=[('a', 'q'), ('a', 'r'), ('a', 's'),
('b', 't'), ('c', 'u'), ('c', 'v')],
index_col=0)
@@ -161,7 +161,7 @@ def test_header_multiindex_common_format(self):
# to_csv, namedtuples
TestTuple = namedtuple('names', ['first', 'second'])
result = self.read_csv(
- StringIO(data), skiprows=3, index_col=0,
+ StringIO(data), skip_rows=3, index_col=0,
names=[TestTuple('a', 'q'), TestTuple('a', 'r'),
TestTuple('a', 's'), TestTuple('b', 't'),
TestTuple('c', 'u'), TestTuple('c', 'v')])
@@ -177,7 +177,7 @@ def test_header_multiindex_common_format(self):
tm.assert_frame_equal(df, result)
# common, tuples
- result = self.read_csv(StringIO(data), skiprows=2,
+ result = self.read_csv(StringIO(data), skip_rows=2,
names=[('a', 'q'), ('a', 'r'), ('a', 's'),
('b', 't'), ('c', 'u'), ('c', 'v')],
index_col=0)
@@ -186,7 +186,7 @@ def test_header_multiindex_common_format(self):
# common, namedtuples
TestTuple = namedtuple('names', ['first', 'second'])
result = self.read_csv(
- StringIO(data), skiprows=2, index_col=0,
+ StringIO(data), skip_rows=2, index_col=0,
names=[TestTuple('a', 'q'), TestTuple('a', 'r'),
TestTuple('a', 's'), TestTuple('b', 't'),
TestTuple('c', 'u'), TestTuple('c', 'v')])
@@ -202,7 +202,7 @@ def test_header_multiindex_common_format(self):
tm.assert_frame_equal(df.reset_index(drop=True), result)
# common, no index_col, tuples
- result = self.read_csv(StringIO(data), skiprows=2,
+ result = self.read_csv(StringIO(data), skip_rows=2,
names=[('a', 'q'), ('a', 'r'), ('a', 's'),
('b', 't'), ('c', 'u'), ('c', 'v')],
index_col=None)
@@ -211,7 +211,7 @@ def test_header_multiindex_common_format(self):
# common, no index_col, namedtuples
TestTuple = namedtuple('names', ['first', 'second'])
result = self.read_csv(
- StringIO(data), skiprows=2, index_col=None,
+ StringIO(data), skip_rows=2, index_col=None,
names=[TestTuple('a', 'q'), TestTuple('a', 'r'),
TestTuple('a', 's'), TestTuple('b', 't'),
TestTuple('c', 'u'), TestTuple('c', 'v')])
diff --git a/pandas/tests/io/parser/multithread.py b/pandas/tests/io/parser/multithread.py
index 2aaef889db6de..0fb176c09fee9 100644
--- a/pandas/tests/io/parser/multithread.py
+++ b/pandas/tests/io/parser/multithread.py
@@ -43,7 +43,7 @@ def reader(arg):
return self.read_csv(path,
index_col=0,
header=None,
- skiprows=int(start) + 1,
+ skip_rows=int(start) + 1,
nrows=nrows,
parse_dates=[9])
diff --git a/pandas/tests/io/parser/na_values.py b/pandas/tests/io/parser/na_values.py
index 880ab707cfd07..142b6c8dd5078 100644
--- a/pandas/tests/io/parser/na_values.py
+++ b/pandas/tests/io/parser/na_values.py
@@ -104,15 +104,15 @@ def test_custom_na_values(self):
[nan, 5, nan],
[7, 8, nan]])
- df = self.read_csv(StringIO(data), na_values=['baz'], skiprows=[1])
+ df = self.read_csv(StringIO(data), na_values=['baz'], skip_rows=[1])
tm.assert_numpy_array_equal(df.values, expected)
df2 = self.read_table(StringIO(data), sep=',', na_values=['baz'],
- skiprows=[1])
+ skip_rows=[1])
tm.assert_numpy_array_equal(df2.values, expected)
df3 = self.read_table(StringIO(data), sep=',', na_values='baz',
- skiprows=[1])
+ skip_rows=[1])
tm.assert_numpy_array_equal(df3.values, expected)
def test_bool_na_values(self):
diff --git a/pandas/tests/io/parser/parse_dates.py b/pandas/tests/io/parser/parse_dates.py
index ae3c806ac1c8e..2f0c3fdcbaa77 100644
--- a/pandas/tests/io/parser/parse_dates.py
+++ b/pandas/tests/io/parser/parse_dates.py
@@ -349,7 +349,7 @@ def test_parse_dates_custom_euroformat(self):
parser = lambda d: parse_date(d, day_first=True)
pytest.raises(TypeError, self.read_csv,
- StringIO(text), skiprows=[0],
+ StringIO(text), skip_rows=[0],
names=['time', 'Q', 'NTU'], index_col=0,
parse_dates=True, date_parser=parser,
na_values=['NA'])
diff --git a/pandas/tests/io/parser/python_parser_only.py b/pandas/tests/io/parser/python_parser_only.py
index c0616ebbab4a5..6f9d3031fcba0 100644
--- a/pandas/tests/io/parser/python_parser_only.py
+++ b/pandas/tests/io/parser/python_parser_only.py
@@ -68,7 +68,7 @@ def test_sniff_delimiter(self):
baz|7|8|9
"""
data3 = self.read_csv(StringIO(text), index_col=0,
- sep=None, skiprows=2)
+ sep=None, skip_rows=2)
tm.assert_frame_equal(data, data3)
text = u("""ignore this
@@ -85,7 +85,7 @@ def test_sniff_delimiter(self):
from io import TextIOWrapper
s = TextIOWrapper(s, encoding='utf-8')
- data4 = self.read_csv(s, index_col=0, sep=None, skiprows=2,
+ data4 = self.read_csv(s, index_col=0, sep=None, skip_rows=2,
encoding='utf-8')
tm.assert_frame_equal(data, data4)
diff --git a/pandas/tests/io/parser/skiprows.py b/pandas/tests/io/parser/skiprows.py
index fb08ec0447267..7a64e0c272958 100644
--- a/pandas/tests/io/parser/skiprows.py
+++ b/pandas/tests/io/parser/skiprows.py
@@ -30,10 +30,10 @@ def test_skiprows_bug(self):
1/2/2000,4,5,6
1/3/2000,7,8,9
"""
- data = self.read_csv(StringIO(text), skiprows=lrange(6), header=None,
+ data = self.read_csv(StringIO(text), skip_rows=lrange(6), header=None,
index_col=0, parse_dates=True)
- data2 = self.read_csv(StringIO(text), skiprows=6, header=None,
+ data2 = self.read_csv(StringIO(text), skip_rows=6, header=None,
index_col=0, parse_dates=True)
expected = DataFrame(np.arange(1., 10.).reshape((3, 3)),
@@ -52,7 +52,7 @@ def test_deep_skiprows(self):
condensed_text = "a,b,c\n" + \
"\n".join([",".join([str(i), str(i + 1), str(i + 2)])
for i in [0, 1, 2, 3, 4, 6, 8, 9]])
- data = self.read_csv(StringIO(text), skiprows=[6, 8])
+ data = self.read_csv(StringIO(text), skip_rows=[6, 8])
condensed_data = self.read_csv(StringIO(condensed_text))
tm.assert_frame_equal(data, condensed_data)
@@ -68,7 +68,7 @@ def test_skiprows_blank(self):
1/2/2000,4,5,6
1/3/2000,7,8,9
"""
- data = self.read_csv(StringIO(text), skiprows=6, header=None,
+ data = self.read_csv(StringIO(text), skip_rows=6, header=None,
index_col=0, parse_dates=True)
expected = DataFrame(np.arange(1., 10.).reshape((3, 3)),
@@ -90,7 +90,7 @@ def test_skiprow_with_newline(self):
[3, 'line 31', 1]]
expected = DataFrame(expected, columns=[
'id', 'text', 'num_lines'])
- df = self.read_csv(StringIO(data), skiprows=[1])
+ df = self.read_csv(StringIO(data), skip_rows=[1])
tm.assert_frame_equal(df, expected)
data = ('a,b,c\n~a\n b~,~e\n d~,'
@@ -100,7 +100,7 @@ def test_skiprow_with_newline(self):
'a', 'b', 'c'])
df = self.read_csv(StringIO(data),
quotechar="~",
- skiprows=[2])
+ skip_rows=[2])
tm.assert_frame_equal(df, expected)
data = ('Text,url\n~example\n '
@@ -112,7 +112,7 @@ def test_skiprow_with_newline(self):
'Text', 'url'])
df = self.read_csv(StringIO(data),
quotechar="~",
- skiprows=[1, 3])
+ skip_rows=[1, 3])
tm.assert_frame_equal(df, expected)
def test_skiprow_with_quote(self):
@@ -125,7 +125,7 @@ def test_skiprow_with_quote(self):
[3, "line '31' line 32", 1]]
expected = DataFrame(expected, columns=[
'id', 'text', 'num_lines'])
- df = self.read_csv(StringIO(data), skiprows=[1])
+ df = self.read_csv(StringIO(data), skip_rows=[1])
tm.assert_frame_equal(df, expected)
def test_skiprow_with_newline_and_quote(self):
@@ -138,7 +138,7 @@ def test_skiprow_with_newline_and_quote(self):
[3, "line \n'31' line 32", 1]]
expected = DataFrame(expected, columns=[
'id', 'text', 'num_lines'])
- df = self.read_csv(StringIO(data), skiprows=[1])
+ df = self.read_csv(StringIO(data), skip_rows=[1])
tm.assert_frame_equal(df, expected)
data = """id,text,num_lines
@@ -149,7 +149,7 @@ def test_skiprow_with_newline_and_quote(self):
[3, "line '31\n' line 32", 1]]
expected = DataFrame(expected, columns=[
'id', 'text', 'num_lines'])
- df = self.read_csv(StringIO(data), skiprows=[1])
+ df = self.read_csv(StringIO(data), skip_rows=[1])
tm.assert_frame_equal(df, expected)
data = """id,text,num_lines
@@ -160,7 +160,7 @@ def test_skiprow_with_newline_and_quote(self):
[3, "line '31\n' \r\tline 32", 1]]
expected = DataFrame(expected, columns=[
'id', 'text', 'num_lines'])
- df = self.read_csv(StringIO(data), skiprows=[1])
+ df = self.read_csv(StringIO(data), skip_rows=[1])
tm.assert_frame_equal(df, expected)
def test_skiprows_lineterminator(self):
@@ -176,19 +176,19 @@ def test_skiprows_lineterminator(self):
'oflag'])
# test with default line terminators "LF" and "CRLF"
- df = self.read_csv(StringIO(data), skiprows=1, delim_whitespace=True,
+ df = self.read_csv(StringIO(data), skip_rows=1, delim_whitespace=True,
names=['date', 'time', 'var', 'flag', 'oflag'])
tm.assert_frame_equal(df, expected)
df = self.read_csv(StringIO(data.replace('\n', '\r\n')),
- skiprows=1, delim_whitespace=True,
+ skip_rows=1, delim_whitespace=True,
names=['date', 'time', 'var', 'flag', 'oflag'])
tm.assert_frame_equal(df, expected)
# "CR" is not respected with the Python parser yet
if self.engine == 'c':
df = self.read_csv(StringIO(data.replace('\n', '\r')),
- skiprows=1, delim_whitespace=True,
+ skip_rows=1, delim_whitespace=True,
names=['date', 'time', 'var', 'flag', 'oflag'])
tm.assert_frame_equal(df, expected)
@@ -197,29 +197,29 @@ def test_skiprows_infield_quote(self):
data = 'a"\nb"\na\n1'
expected = DataFrame({'a': [1]})
- df = self.read_csv(StringIO(data), skiprows=2)
+ df = self.read_csv(StringIO(data), skip_rows=2)
tm.assert_frame_equal(df, expected)
def test_skiprows_callable(self):
data = 'a\n1\n2\n3\n4\n5'
- skiprows = lambda x: x % 2 == 0
+ skip_rows = lambda x: x % 2 == 0
expected = DataFrame({'1': [3, 5]})
- df = self.read_csv(StringIO(data), skiprows=skiprows)
+ df = self.read_csv(StringIO(data), skip_rows=skip_rows)
tm.assert_frame_equal(df, expected)
expected = DataFrame({'foo': [3, 5]})
- df = self.read_csv(StringIO(data), skiprows=skiprows,
+ df = self.read_csv(StringIO(data), skip_rows=skip_rows,
header=0, names=['foo'])
tm.assert_frame_equal(df, expected)
- skiprows = lambda x: True
+ skip_rows = lambda x: True
msg = "No columns to parse from file"
with tm.assert_raises_regex(EmptyDataError, msg):
- self.read_csv(StringIO(data), skiprows=skiprows)
+ self.read_csv(StringIO(data), skip_rows=skip_rows)
# This is a bad callable and should raise.
msg = "by zero"
- skiprows = lambda x: 1 / 0
+ skip_rows = lambda x: 1 / 0
with tm.assert_raises_regex(ZeroDivisionError, msg):
- self.read_csv(StringIO(data), skiprows=skiprows)
+ self.read_csv(StringIO(data), skip_rows=skip_rows)
diff --git a/pandas/tests/io/parser/test_read_fwf.py b/pandas/tests/io/parser/test_read_fwf.py
index a60f2b5a4c946..8d07613ca695c 100644
--- a/pandas/tests/io/parser/test_read_fwf.py
+++ b/pandas/tests/io/parser/test_read_fwf.py
@@ -377,10 +377,10 @@ def test_skiprows_inference(self):
0.0 1.0
101.6 956.1
""".strip()
- expected = read_csv(StringIO(test), skiprows=2,
+ expected = read_csv(StringIO(test), skip_rows=2,
delim_whitespace=True)
tm.assert_frame_equal(expected, read_fwf(
- StringIO(test), skiprows=2))
+ StringIO(test), skip_rows=2))
def test_skiprows_by_index_inference(self):
test = """
@@ -391,10 +391,10 @@ def test_skiprows_by_index_inference(self):
456 78 9 456
""".strip()
- expected = read_csv(StringIO(test), skiprows=[0, 2],
+ expected = read_csv(StringIO(test), skip_rows=[0, 2],
delim_whitespace=True)
tm.assert_frame_equal(expected, read_fwf(
- StringIO(test), skiprows=[0, 2]))
+ StringIO(test), skip_rows=[0, 2]))
def test_skiprows_inference_empty(self):
test = """
@@ -404,7 +404,7 @@ def test_skiprows_inference_empty(self):
""".strip()
with pytest.raises(EmptyDataError):
- read_fwf(StringIO(test), skiprows=3)
+ read_fwf(StringIO(test), skip_rows=3)
def test_whitespace_preservation(self):
# Addresses Issue #16772
@@ -417,7 +417,7 @@ def test_whitespace_preservation(self):
a bbb
ccdd """
result = read_fwf(StringIO(test_data), widths=[3, 3],
- header=None, skiprows=[0], delimiter="\n\t")
+ header=None, skip_rows=[0], delimiter="\n\t")
tm.assert_frame_equal(result, expected)
@@ -431,6 +431,6 @@ def test_default_delimiter(self):
a \tbbb
cc\tdd """
result = read_fwf(StringIO(test_data), widths=[3, 3],
- header=None, skiprows=[0])
+ header=None, skip_rows=[0])
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/io/parser/test_textreader.py b/pandas/tests/io/parser/test_textreader.py
index c7026e3e0fc88..fccb240c3f8e5 100644
--- a/pandas/tests/io/parser/test_textreader.py
+++ b/pandas/tests/io/parser/test_textreader.py
@@ -329,7 +329,7 @@ def test_empty_field_eof(self):
index=[0, 5, 7, 12])
for _ in range(100):
- df = read_csv(StringIO('a,b\nc\n'), skiprows=0,
+ df = read_csv(StringIO('a,b\nc\n'), skip_rows=0,
names=['a'], engine='c')
assert_frame_equal(df, a)
diff --git a/pandas/tests/io/test_excel.py b/pandas/tests/io/test_excel.py
index 5f27ff719fda1..cc9123c0f24f0 100644
--- a/pandas/tests/io/test_excel.py
+++ b/pandas/tests/io/test_excel.py
@@ -110,11 +110,11 @@ def test_usecols_int(self, ext):
dfref = self.get_csv_refdf('test1')
dfref = dfref.reindex(columns=['A', 'B', 'C'])
df1 = self.get_exceldf('test1', ext, 'Sheet1', index_col=0, usecols=3)
- df2 = self.get_exceldf('test1', ext, 'Sheet2', skiprows=[1],
+ df2 = self.get_exceldf('test1', ext, 'Sheet2', skip_rows=[1],
index_col=0, usecols=3)
with tm.assert_produces_warning(FutureWarning):
- df3 = self.get_exceldf('test1', ext, 'Sheet2', skiprows=[1],
+ df3 = self.get_exceldf('test1', ext, 'Sheet2', skip_rows=[1],
index_col=0, parse_cols=3)
# TODO add index to xls file)
@@ -128,11 +128,11 @@ def test_usecols_list(self, ext):
dfref = dfref.reindex(columns=['B', 'C'])
df1 = self.get_exceldf('test1', ext, 'Sheet1', index_col=0,
usecols=[0, 2, 3])
- df2 = self.get_exceldf('test1', ext, 'Sheet2', skiprows=[1],
+ df2 = self.get_exceldf('test1', ext, 'Sheet2', skip_rows=[1],
index_col=0, usecols=[0, 2, 3])
with tm.assert_produces_warning(FutureWarning):
- df3 = self.get_exceldf('test1', ext, 'Sheet2', skiprows=[1],
+ df3 = self.get_exceldf('test1', ext, 'Sheet2', skip_rows=[1],
index_col=0, parse_cols=[0, 2, 3])
# TODO add index to xls file)
@@ -147,11 +147,11 @@ def test_usecols_str(self, ext):
df1 = dfref.reindex(columns=['A', 'B', 'C'])
df2 = self.get_exceldf('test1', ext, 'Sheet1', index_col=0,
usecols='A:D')
- df3 = self.get_exceldf('test1', ext, 'Sheet2', skiprows=[1],
+ df3 = self.get_exceldf('test1', ext, 'Sheet2', skip_rows=[1],
index_col=0, usecols='A:D')
with tm.assert_produces_warning(FutureWarning):
- df4 = self.get_exceldf('test1', ext, 'Sheet2', skiprows=[1],
+ df4 = self.get_exceldf('test1', ext, 'Sheet2', skip_rows=[1],
index_col=0, parse_cols='A:D')
# TODO add index to xls, read xls ignores index name ?
@@ -162,7 +162,7 @@ def test_usecols_str(self, ext):
df1 = dfref.reindex(columns=['B', 'C'])
df2 = self.get_exceldf('test1', ext, 'Sheet1', index_col=0,
usecols='A,C,D')
- df3 = self.get_exceldf('test1', ext, 'Sheet2', skiprows=[1],
+ df3 = self.get_exceldf('test1', ext, 'Sheet2', skip_rows=[1],
index_col=0, usecols='A,C,D')
# TODO add index to xls file
tm.assert_frame_equal(df2, df1, check_names=False)
@@ -171,7 +171,7 @@ def test_usecols_str(self, ext):
df1 = dfref.reindex(columns=['B', 'C'])
df2 = self.get_exceldf('test1', ext, 'Sheet1', index_col=0,
usecols='A,C:D')
- df3 = self.get_exceldf('test1', ext, 'Sheet2', skiprows=[1],
+ df3 = self.get_exceldf('test1', ext, 'Sheet2', skip_rows=[1],
index_col=0, usecols='A,C:D')
tm.assert_frame_equal(df2, df1, check_names=False)
tm.assert_frame_equal(df3, df1, check_names=False)
@@ -235,12 +235,12 @@ def test_excel_table_sheet_by_index(self, ext):
dfref = self.get_csv_refdf('test1')
df1 = read_excel(excel, 0, index_col=0)
- df2 = read_excel(excel, 1, skiprows=[1], index_col=0)
+ df2 = read_excel(excel, 1, skip_rows=[1], index_col=0)
tm.assert_frame_equal(df1, dfref, check_names=False)
tm.assert_frame_equal(df2, dfref, check_names=False)
df1 = excel.parse(0, index_col=0)
- df2 = excel.parse(1, skiprows=[1], index_col=0)
+ df2 = excel.parse(1, skip_rows=[1], index_col=0)
tm.assert_frame_equal(df1, dfref, check_names=False)
tm.assert_frame_equal(df2, dfref, check_names=False)
@@ -263,7 +263,7 @@ def test_excel_table(self, ext):
dfref = self.get_csv_refdf('test1')
df1 = self.get_exceldf('test1', ext, 'Sheet1', index_col=0)
- df2 = self.get_exceldf('test1', ext, 'Sheet2', skiprows=[1],
+ df2 = self.get_exceldf('test1', ext, 'Sheet2', skip_rows=[1],
index_col=0)
# TODO add index to file
tm.assert_frame_equal(df1, dfref, check_names=False)
@@ -774,7 +774,7 @@ def test_read_excel_multiindex(self, ext):
tm.assert_frame_equal(actual, expected)
actual = read_excel(mi_file, 'both_name_skiprows', index_col=[0, 1],
- header=[0, 1], skiprows=2)
+ header=[0, 1], skip_rows=2)
tm.assert_frame_equal(actual, expected)
@td.skip_if_no('xlsxwriter')
@@ -969,7 +969,7 @@ def test_read_excel_skiprows_list(self, ext):
# GH 4903
actual = pd.read_excel(os.path.join(self.dirpath,
'testskiprows' + ext),
- 'skiprows_list', skiprows=[0, 2])
+ 'skiprows_list', skip_rows=[0, 2])
expected = DataFrame([[1, 2.5, pd.Timestamp('2015-01-01'), True],
[2, 3.5, pd.Timestamp('2015-01-02'), False],
[3, 4.5, pd.Timestamp('2015-01-03'), False],
@@ -979,7 +979,7 @@ def test_read_excel_skiprows_list(self, ext):
actual = pd.read_excel(os.path.join(self.dirpath,
'testskiprows' + ext),
- 'skiprows_list', skiprows=np.array([0, 2]))
+ 'skiprows_list', skip_rows=np.array([0, 2]))
tm.assert_frame_equal(actual, expected)
def test_read_excel_nrows(self, ext):
diff --git a/pandas/tests/io/test_html.py b/pandas/tests/io/test_html.py
index e08899a03d2d7..e1889351f38c5 100644
--- a/pandas/tests/io/test_html.py
+++ b/pandas/tests/io/test_html.py
@@ -153,57 +153,65 @@ def test_spam_header(self):
assert not df.empty
def test_skiprows_int(self):
- df1 = self.read_html(self.spam_data, '.*Water.*', skiprows=1)
- df2 = self.read_html(self.spam_data, 'Unit', skiprows=1)
+ df1 = self.read_html(self.spam_data, '.*Water.*', skip_rows=1)
+ df2 = self.read_html(self.spam_data, 'Unit', skip_rows=1)
assert_framelist_equal(df1, df2)
def test_skiprows_xrange(self):
- df1 = self.read_html(self.spam_data, '.*Water.*', skiprows=range(2))[0]
- df2 = self.read_html(self.spam_data, 'Unit', skiprows=range(2))[0]
+ df1 = self.read_html(
+ self.spam_data,
+ '.*Water.*',
+ skip_rows=range(2)
+ )[0]
+ df2 = self.read_html(self.spam_data, 'Unit', skip_rows=range(2))[0]
tm.assert_frame_equal(df1, df2)
def test_skiprows_list(self):
- df1 = self.read_html(self.spam_data, '.*Water.*', skiprows=[1, 2])
- df2 = self.read_html(self.spam_data, 'Unit', skiprows=[2, 1])
+ df1 = self.read_html(self.spam_data, '.*Water.*', skip_rows=[1, 2])
+ df2 = self.read_html(self.spam_data, 'Unit', skip_rows=[2, 1])
assert_framelist_equal(df1, df2)
def test_skiprows_set(self):
- df1 = self.read_html(self.spam_data, '.*Water.*', skiprows={1, 2})
- df2 = self.read_html(self.spam_data, 'Unit', skiprows={2, 1})
+ df1 = self.read_html(self.spam_data, '.*Water.*', skip_rows={1, 2})
+ df2 = self.read_html(self.spam_data, 'Unit', skip_rows={2, 1})
assert_framelist_equal(df1, df2)
def test_skiprows_slice(self):
- df1 = self.read_html(self.spam_data, '.*Water.*', skiprows=1)
- df2 = self.read_html(self.spam_data, 'Unit', skiprows=1)
+ df1 = self.read_html(self.spam_data, '.*Water.*', skip_rows=1)
+ df2 = self.read_html(self.spam_data, 'Unit', skip_rows=1)
assert_framelist_equal(df1, df2)
def test_skiprows_slice_short(self):
- df1 = self.read_html(self.spam_data, '.*Water.*', skiprows=slice(2))
- df2 = self.read_html(self.spam_data, 'Unit', skiprows=slice(2))
+ df1 = self.read_html(self.spam_data, '.*Water.*', skip_rows=slice(2))
+ df2 = self.read_html(self.spam_data, 'Unit', skip_rows=slice(2))
assert_framelist_equal(df1, df2)
def test_skiprows_slice_long(self):
- df1 = self.read_html(self.spam_data, '.*Water.*', skiprows=slice(2, 5))
- df2 = self.read_html(self.spam_data, 'Unit', skiprows=slice(4, 1, -1))
+ df1 = self.read_html(
+ self.spam_data,
+ '.*Water.*',
+ skip_rows=slice(2, 5)
+ )
+ df2 = self.read_html(self.spam_data, 'Unit', skip_rows=slice(4, 1, -1))
assert_framelist_equal(df1, df2)
def test_skiprows_ndarray(self):
df1 = self.read_html(self.spam_data, '.*Water.*',
- skiprows=np.arange(2))
- df2 = self.read_html(self.spam_data, 'Unit', skiprows=np.arange(2))
+ skip_rows=np.arange(2))
+ df2 = self.read_html(self.spam_data, 'Unit', skip_rows=np.arange(2))
assert_framelist_equal(df1, df2)
def test_skiprows_invalid(self):
with tm.assert_raises_regex(TypeError, 'is not a valid type '
'for skipping rows'):
- self.read_html(self.spam_data, '.*Water.*', skiprows='asdf')
+ self.read_html(self.spam_data, '.*Water.*', skip_rows='asdf')
def test_index(self):
df1 = self.read_html(self.spam_data, '.*Water.*', index_col=0)
@@ -312,18 +320,18 @@ def test_multiindex_header_index(self):
@pytest.mark.slow
def test_multiindex_header_skiprows_tuples(self):
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
- df = self._bank_data(header=[0, 1], skiprows=1,
+ df = self._bank_data(header=[0, 1], skip_rows=1,
tupleize_cols=True)[0]
assert isinstance(df.columns, Index)
@pytest.mark.slow
def test_multiindex_header_skiprows(self):
- df = self._bank_data(header=[0, 1], skiprows=1)[0]
+ df = self._bank_data(header=[0, 1], skip_rows=1)[0]
assert isinstance(df.columns, MultiIndex)
@pytest.mark.slow
def test_multiindex_header_index_skiprows(self):
- df = self._bank_data(header=[0, 1], index_col=[0, 1], skiprows=1)[0]
+ df = self._bank_data(header=[0, 1], index_col=[0, 1], skip_rows=1)[0]
assert isinstance(df.index, MultiIndex)
assert isinstance(df.columns, MultiIndex)
@@ -340,7 +348,7 @@ def test_regex_idempotency(self):
def test_negative_skiprows(self):
with tm.assert_raises_regex(ValueError,
r'\(you passed a negative value\)'):
- self.read_html(self.spam_data, 'Water', skiprows=-1)
+ self.read_html(self.spam_data, 'Water', skip_rows=-1)
@network
def test_multiple_matches(self):
| By [my rough count](https://docs.google.com/spreadsheets/d/1h_iQ5Pexs5SuCT2pfbN8OH6ld6ZyvWqTEaH2W-xkEJE/edit?usp=sharing), the [`read_csv`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) method has nearly 50 keyword arguments.
Of those, 32 arguments are made up of two or more words. Twenty of those multi-word arguments use an underscore to mark the space between words, like `skip_blank_lines` and `parse_dates`. Twelve do not, like `chunksize` and `lineterminator`.
It is my opinion that this is a small flaw in pandas' API, and the library would benefit from standardizing how word boundaries in argument names are handled. It would make pandas more legible and consistent, and therefore easier for users of all experience levels.
Since the underscore method is more common and more legible, I propose it be adopted.
All existing arguments without an underscore will need to be modified. As a first salvo, I have attempted to change the `skiprows` kwarg to `read_csv` and its sibling methods to `skip_rows`. I have included an experimental deprecation warning that aims to continue to support the old argument for some interval into the future.
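For illustration, here is a minimal standalone sketch of one way such a shim can work. It shows the general pattern, not the exact code in this diff; it uses a sentinel default so that legitimate falsy values like `skiprows=0` are still forwarded.

```python
import warnings

_UNSET = object()  # sentinel: distinguishes "not passed" from falsy values


def read_thing(skip_rows=None, skiprows=_UNSET):
    if skiprows is not _UNSET:
        warnings.warn("skiprows is deprecated; use skip_rows instead.",
                      FutureWarning, stacklevel=2)
        if skip_rows is None:
            skip_rows = skiprows
    return skip_rows


print(read_thing(skiprows=0))  # 0, plus a FutureWarning
```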
Due to my lack of expertise on pandas' internals, I expect this request includes some flaws that would need to be corrected before inclusion. However, I hope that the maintainers of the library will agree with the overall aims of this patch, which is to begin a process of introducing greater consistency to the style of keyword arguments.
If you do agree, I would be pleased to lead an effort to gradually standardize the inputs and put in the work to finish the job.
Thank you for your consideration. | https://api.github.com/repos/pandas-dev/pandas/pulls/22587 | 2018-09-04T14:59:27Z | 2018-11-23T03:52:03Z | null | 2018-11-23T03:52:03Z |
Update ecosystem.rst to include Pint | diff --git a/doc/source/ecosystem.rst b/doc/source/ecosystem.rst
index ad389bbe35b71..99bac2a555a08 100644
--- a/doc/source/ecosystem.rst
+++ b/doc/source/ecosystem.rst
@@ -338,6 +338,16 @@ found in NumPy or pandas, which work well with pandas' data containers.
Cyberpandas provides an extension type for storing arrays of IP Addresses. These
arrays can be stored inside pandas' Series and DataFrame.
+`pint`_
+~~~~~~~
+
+`Pint <https://pint.readthedocs.io/en/latest/>` provides an extension type for
+storing numeric arrays with units. These arrays can be stored inside pandas'
+Series and DataFrame. Operations between Series and DataFrame columns which
+use pint's extension array are then units aware.
+
+Note that this feature requires Pint v0.9 or higher.
+
.. _ecosystem.accessors:
Accessors
@@ -352,7 +362,9 @@ Library Accessor Classes
============== ========== =========================
`cyberpandas`_ ``ip`` ``Series``
`pdvega`_ ``vgplot`` ``Series``, ``DataFrame``
+`pint`_ ``pint`` ``Series``, ``DataFrame``
============== ========== =========================
.. _cyberpandas: https://cyberpandas.readthedocs.io/en/latest
.. _pdvega: https://jakevdp.github.io/pdvega/
+.. _pint: https://github.com/hgrecco/pint
| We are working on upgrading pint to be compatible with pandas; see https://github.com/hgrecco/pint/pull/684
I am guessing that the line in the docs,
> If you’re building a library that implements the interface, please publicize it on Extension Data Types.
meant something like this pull request. If that's completely wrong, apologies.
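For flavor, a hypothetical sketch of what units-aware operations could look like once the interface lands. The `pint_pandas` import name and the `pint[...]` dtype strings are assumptions based on the extension-array convention, not something this PR pins down.

```python
import pandas as pd
# assumed companion package; importing it registers the "pint[...]" dtype
import pint_pandas  # noqa: F401

distance = pd.Series([1.0, 2.0, 3.0], dtype="pint[meter]")
time = pd.Series([4.0, 5.0, 6.0], dtype="pint[second]")
speed = distance / time  # units propagate through the arithmetic
print(speed.dtype)       # expected: something like pint[meter / second]
```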
- [x] closes #xxxx (N/A as not directly related to an issue, but makes progress towards #10349 )
- [x] (N/A) tests added / passed
- [x] (N/A) passes `git diff upstream/master -u -- "*.py" | flake8 --diff` *
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/22582 | 2018-09-03T16:13:15Z | 2018-12-19T12:17:33Z | null | 2020-07-07T22:42:02Z |
Add 'name' as argument for index 'to_frame' method | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 232f879285543..3dfb0f70b8142 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -184,6 +184,7 @@ Other Enhancements
- :class:`DatetimeIndex` gained :attr:`DatetimeIndex.timetz` attribute. Returns local time with timezone information. (:issue:`21358`)
- :class:`Resampler` now is iterable like :class:`GroupBy` (:issue:`15314`).
- :meth:`Series.resample` and :meth:`DataFrame.resample` have gained the :meth:`Resampler.quantile` (:issue:`15023`).
+- :meth:`Index.to_frame` now supports overriding column name(s) (:issue:`22580`).
.. _whatsnew_0240.api_breaking:
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index b2b6e02e908c5..ca381160de0df 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -1115,17 +1115,21 @@ def to_series(self, index=None, name=None):
return Series(self._to_embed(), index=index, name=name)
- def to_frame(self, index=True):
+ def to_frame(self, index=True, name=None):
"""
Create a DataFrame with a column containing the Index.
- .. versionadded:: 0.21.0
+ .. versionadded:: 0.24.0
Parameters
----------
index : boolean, default True
Set the index of the returned DataFrame as the original Index.
+ name : object, default None
+ The passed name should substitute for the index name (if it has
+ one).
+
Returns
-------
DataFrame
@@ -1153,10 +1157,19 @@ def to_frame(self, index=True):
0 Ant
1 Bear
2 Cow
+
+ To override the name of the resulting column, specify `name`:
+
+ >>> idx.to_frame(index=False, name='zoo')
+ zoo
+ 0 Ant
+ 1 Bear
+ 2 Cow
"""
from pandas import DataFrame
- name = self.name or 0
+ if name is None:
+ name = self.name or 0
result = DataFrame({name: self.values.copy()})
if index:
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 4f38f61f7b0e4..a7932f667f6de 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -1126,20 +1126,23 @@ def _to_safe_for_reshape(self):
""" convert to object if we are a categorical """
return self.set_levels([i._to_safe_for_reshape() for i in self.levels])
- def to_frame(self, index=True):
+ def to_frame(self, index=True, name=None):
"""
Create a DataFrame with the levels of the MultiIndex as columns.
Column ordering is determined by the DataFrame constructor with data as
a dict.
- .. versionadded:: 0.20.0
+ .. versionadded:: 0.24.0
Parameters
----------
index : boolean, default True
Set the index of the returned DataFrame as the original MultiIndex.
+ name : list / sequence of strings, optional
+ The passed names should substitute index level names.
+
Returns
-------
DataFrame : a DataFrame containing the original MultiIndex data.
@@ -1150,10 +1153,22 @@ def to_frame(self, index=True):
"""
from pandas import DataFrame
+ if name is not None:
+ if not is_list_like(name):
+ raise TypeError("'name' must be a list / sequence "
+ "of column names.")
+
+ if len(name) != len(self.levels):
+ raise ValueError("'name' should have same length as "
+ "number of levels on index.")
+ idx_names = name
+ else:
+ idx_names = self.names
+
result = DataFrame({(name or level):
self._get_level_values(level)
for name, level in
- zip(self.names, range(len(self.levels)))},
+ zip(idx_names, range(len(self.levels)))},
copy=False)
if index:
result.index = self
diff --git a/pandas/tests/indexes/common.py b/pandas/tests/indexes/common.py
index 56f59851d6d04..49a247608ab0b 100644
--- a/pandas/tests/indexes/common.py
+++ b/pandas/tests/indexes/common.py
@@ -66,19 +66,24 @@ def test_to_series_with_arguments(self):
assert s.index is not idx
assert s.name != idx.name
- def test_to_frame(self):
- # see gh-15230
+ @pytest.mark.parametrize("name", [None, "new_name"])
+ def test_to_frame(self, name):
+ # see GH-15230, GH-22580
idx = self.create_index()
- name = idx.name or 0
- df = idx.to_frame()
+ if name:
+ idx_name = name
+ else:
+ idx_name = idx.name or 0
+
+ df = idx.to_frame(name=idx_name)
assert df.index is idx
assert len(df.columns) == 1
- assert df.columns[0] == name
- assert df[name].values is not idx.values
+ assert df.columns[0] == idx_name
+ assert df[idx_name].values is not idx.values
- df = idx.to_frame(index=False)
+ df = idx.to_frame(index=False, name=idx_name)
assert df.index is not idx
def test_shift(self):
diff --git a/pandas/tests/indexes/multi/test_conversion.py b/pandas/tests/indexes/multi/test_conversion.py
index fcc22390e17a1..8c9566b7e651f 100644
--- a/pandas/tests/indexes/multi/test_conversion.py
+++ b/pandas/tests/indexes/multi/test_conversion.py
@@ -37,6 +37,27 @@ def test_to_frame():
expected.index = index
tm.assert_frame_equal(result, expected)
+ # See GH-22580
+ index = MultiIndex.from_tuples(tuples)
+ result = index.to_frame(index=False, name=['first', 'second'])
+ expected = DataFrame(tuples)
+ expected.columns = ['first', 'second']
+ tm.assert_frame_equal(result, expected)
+
+ result = index.to_frame(name=['first', 'second'])
+ expected.index = index
+ expected.columns = ['first', 'second']
+ tm.assert_frame_equal(result, expected)
+
+ msg = "'name' must be a list / sequence of column names."
+ with tm.assert_raises_regex(TypeError, msg):
+ index.to_frame(name='first')
+
+ msg = "'name' should have same length as number of levels on index."
+ with tm.assert_raises_regex(ValueError, msg):
+ index.to_frame(name=['first'])
+
+ # Tests for datetime index
index = MultiIndex.from_product([range(5),
pd.date_range('20130101', periods=3)])
result = index.to_frame(index=False)
@@ -45,12 +66,21 @@ def test_to_frame():
1: np.tile(pd.date_range('20130101', periods=3), 5)})
tm.assert_frame_equal(result, expected)
- index = MultiIndex.from_product([range(5),
- pd.date_range('20130101', periods=3)])
result = index.to_frame()
expected.index = index
tm.assert_frame_equal(result, expected)
+ # See GH-22580
+ result = index.to_frame(index=False, name=['first', 'second'])
+ expected = DataFrame(
+ {'first': np.repeat(np.arange(5, dtype='int64'), 3),
+ 'second': np.tile(pd.date_range('20130101', periods=3), 5)})
+ tm.assert_frame_equal(result, expected)
+
+ result = index.to_frame(name=['first', 'second'])
+ expected.index = index
+ tm.assert_frame_equal(result, expected)
+
def test_to_hierarchical():
index = MultiIndex.from_tuples([(1, 'one'), (1, 'two'), (2, 'one'), (
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Since the Series `to_frame` method has a `name` argument, I believe it makes sense for Index to also have it. | https://api.github.com/repos/pandas-dev/pandas/pulls/22580 | 2018-09-03T11:19:23Z | 2018-09-14T04:45:16Z | 2018-09-14T04:45:16Z | 2018-09-14T12:36:17Z
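A quick usage sketch of the `name` argument added in the PR above:

```
import pandas as pd

idx = pd.Index(['Ant', 'Bear', 'Cow'], name='animal')
idx.to_frame(index=False, name='zoo')  # single column named 'zoo' instead of 'animal'

mi = pd.MultiIndex.from_tuples([('a', 1), ('b', 2)])
mi.to_frame(index=False, name=['letter', 'number'])  # one name per level
```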
BUG: fix failing DataFrame.loc when indexing with an IntervalIndex | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 1979bde796452..3770e130e4dad 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -669,6 +669,7 @@ Indexing
- Bug where indexing with a Numpy array containing negative values would mutate the indexer (:issue:`21867`)
- Bug where mixed indexes wouldn't allow integers for ``.at`` (:issue:`19860`)
- ``Float64Index.get_loc`` now raises ``KeyError`` when boolean key passed. (:issue:`19087`)
+- Bug in :meth:`DataFrame.loc` when indexing with an :class:`IntervalIndex` (:issue:`19977`)
Missing
^^^^^^^
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index a245ecfa007f3..b63f874abff85 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -1491,7 +1491,7 @@ def __getitem__(self, key):
try:
if self._is_scalar_access(key):
return self._getitem_scalar(key)
- except (KeyError, IndexError):
+ except (KeyError, IndexError, AttributeError):
pass
return self._getitem_tuple(key)
else:
diff --git a/pandas/tests/frame/test_indexing.py b/pandas/tests/frame/test_indexing.py
index 6a4cf1ffc6071..f0c4d7be2f293 100644
--- a/pandas/tests/frame/test_indexing.py
+++ b/pandas/tests/frame/test_indexing.py
@@ -3099,6 +3099,28 @@ def test_type_error_multiindex(self):
result = dg['x', 0]
assert_series_equal(result, expected)
+ def test_interval_index(self):
+ # GH 19977
+ index = pd.interval_range(start=0, periods=3)
+ df = pd.DataFrame([[1, 2, 3], [4, 5, 6], [7, 8, 9]],
+ index=index,
+ columns=['A', 'B', 'C'])
+
+ expected = 1
+ result = df.loc[0.5, 'A']
+ assert_almost_equal(result, expected)
+
+ index = pd.interval_range(start=0, periods=3, closed='both')
+ df = pd.DataFrame([[1, 2, 3], [4, 5, 6], [7, 8, 9]],
+ index=index,
+ columns=['A', 'B', 'C'])
+
+ index_exp = pd.interval_range(start=0, periods=2,
+ freq=1, closed='both')
+ expected = pd.Series([1, 4], index=index_exp, name='A')
+ result = df.loc[1, 'A']
+ assert_series_equal(result, expected)
+
class TestDataFrameIndexingDatetimeWithTZ(TestData):
| - [x] closes #19977
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
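In short, the previously failing lookup now works (mirroring the test added above):

```
import pandas as pd

index = pd.interval_range(start=0, periods=3)  # (0, 1], (1, 2], (2, 3]
df = pd.DataFrame([[1, 2, 3], [4, 5, 6], [7, 8, 9]],
                  index=index, columns=['A', 'B', 'C'])

df.loc[0.5, 'A']  # 0.5 falls in (0, 1] -> returns 1 instead of raising
```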
| https://api.github.com/repos/pandas-dev/pandas/pulls/22576 | 2018-09-02T17:06:47Z | 2018-09-08T02:52:59Z | 2018-09-08T02:52:59Z | 2018-09-13T18:53:00Z |
CLN: tests for str.cat | diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index ab508174fa4a9..c238abdd32f5d 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -144,71 +144,50 @@ def test_cat(self):
with tm.assert_raises_regex(ValueError, rgx):
strings.str_cat(one, 'three')
- @pytest.mark.parametrize('container', [Series, Index])
+ @pytest.mark.parametrize('box', [Series, Index])
@pytest.mark.parametrize('other', [None, Series, Index])
- def test_str_cat_name(self, container, other):
- # https://github.com/pandas-dev/pandas/issues/21053
+ def test_str_cat_name(self, box, other):
+ # GH 21053
values = ['a', 'b']
if other:
other = other(values)
else:
other = values
- result = container(values, name='name').str.cat(other, sep=',',
- join='left')
+ result = box(values, name='name').str.cat(other, sep=',', join='left')
assert result.name == 'name'
- @pytest.mark.parametrize('series_or_index', ['series', 'index'])
- def test_str_cat(self, series_or_index):
- # test_cat above tests "str_cat" from ndarray to ndarray;
- # here testing "str.cat" from Series/Index to Series/Index/ndarray/list
- s = Index(['a', 'a', 'b', 'b', 'c', np.nan])
- if series_or_index == 'series':
- s = Series(s)
- t = Index(['a', np.nan, 'b', 'd', 'foo', np.nan])
+ @pytest.mark.parametrize('box', [Series, Index])
+ def test_str_cat(self, box):
+ # test_cat above tests "str_cat" from ndarray;
+ # here testing "str.cat" from Series/Index to ndarray/list
+ s = box(['a', 'a', 'b', 'b', 'c', np.nan])
# single array
result = s.str.cat()
- exp = 'aabbc'
- assert result == exp
+ expected = 'aabbc'
+ assert result == expected
result = s.str.cat(na_rep='-')
- exp = 'aabbc-'
- assert result == exp
+ expected = 'aabbc-'
+ assert result == expected
result = s.str.cat(sep='_', na_rep='NA')
- exp = 'a_a_b_b_c_NA'
- assert result == exp
-
- # Series/Index with Index
- exp = Index(['aa', 'a-', 'bb', 'bd', 'cfoo', '--'])
- if series_or_index == 'series':
- exp = Series(exp)
- # s.index / s is different from t (as Index) -> warning
- with tm.assert_produces_warning(expected_warning=FutureWarning):
- # FutureWarning to switch to alignment by default
- assert_series_or_index_equal(s.str.cat(t, na_rep='-'), exp)
-
- # Series/Index with Series
- t = Series(t)
- # s as Series has same index as t -> no warning
- # s as Index is different from t.index -> warning (tested below)
- if series_or_index == 'series':
- assert_series_equal(s.str.cat(t, na_rep='-'), exp)
+ expected = 'a_a_b_b_c_NA'
+ assert result == expected
- # Series/Index with Series: warning if different indexes
- t.index = t.index + 1
- with tm.assert_produces_warning(expected_warning=FutureWarning):
- # FutureWarning to switch to alignment by default
- assert_series_or_index_equal(s.str.cat(t, na_rep='-'), exp)
+ t = np.array(['a', np.nan, 'b', 'd', 'foo', np.nan], dtype=object)
+ expected = box(['aa', 'a-', 'bb', 'bd', 'cfoo', '--'])
# Series/Index with array
- assert_series_or_index_equal(s.str.cat(t.values, na_rep='-'), exp)
+ result = s.str.cat(t, na_rep='-')
+ assert_series_or_index_equal(result, expected)
# Series/Index with list
- assert_series_or_index_equal(s.str.cat(list(t), na_rep='-'), exp)
+ result = s.str.cat(list(t), na_rep='-')
+ assert_series_or_index_equal(result, expected)
# errors for incorrect lengths
- rgx = 'All arrays must be same length, except.*'
+ rgx = 'All arrays must be same length, except those having an index.*'
z = Series(['1', '2', '3'])
with tm.assert_raises_regex(ValueError, rgx):
@@ -220,122 +199,111 @@ def test_str_cat(self, series_or_index):
with tm.assert_raises_regex(ValueError, rgx):
s.str.cat(list(z))
- @pytest.mark.parametrize('series_or_index', ['series', 'index'])
- def test_str_cat_raises_intuitive_error(self, series_or_index):
- # https://github.com/pandas-dev/pandas/issues/11334
- s = Index(['a', 'b', 'c', 'd'])
- if series_or_index == 'series':
- s = Series(s)
+ @pytest.mark.parametrize('box', [Series, Index])
+ def test_str_cat_raises_intuitive_error(self, box):
+ # GH 11334
+ s = box(['a', 'b', 'c', 'd'])
message = "Did you mean to supply a `sep` keyword?"
with tm.assert_raises_regex(ValueError, message):
s.str.cat('|')
with tm.assert_raises_regex(ValueError, message):
s.str.cat(' ')
- @pytest.mark.parametrize('series_or_index, dtype_caller, dtype_target', [
- ('series', 'object', 'object'),
- ('series', 'object', 'category'),
- ('series', 'category', 'object'),
- ('series', 'category', 'category'),
- ('index', 'object', 'object'),
- ('index', 'object', 'category'),
- ('index', 'category', 'object'),
- ('index', 'category', 'category')
- ])
- def test_str_cat_categorical(self, series_or_index,
- dtype_caller, dtype_target):
+ @pytest.mark.parametrize('dtype_target', ['object', 'category'])
+ @pytest.mark.parametrize('dtype_caller', ['object', 'category'])
+ @pytest.mark.parametrize('box', [Series, Index])
+ def test_str_cat_categorical(self, box, dtype_caller, dtype_target):
s = Index(['a', 'a', 'b', 'a'], dtype=dtype_caller)
- if series_or_index == 'series':
- s = Series(s)
+ s = s if box == Index else Series(s, index=s)
t = Index(['b', 'a', 'b', 'c'], dtype=dtype_target)
- exp = Index(['ab', 'aa', 'bb', 'ac'])
- if series_or_index == 'series':
- exp = Series(exp)
+ expected = Index(['ab', 'aa', 'bb', 'ac'])
+ expected = expected if box == Index else Series(expected, index=s)
- # Series/Index with Index
- # s.index / s is different from t (as Index) -> warning
+ # Series/Index with unaligned Index
with tm.assert_produces_warning(expected_warning=FutureWarning):
# FutureWarning to switch to alignment by default
- assert_series_or_index_equal(s.str.cat(t), exp)
+ result = s.str.cat(t)
+ assert_series_or_index_equal(result, expected)
+
+ # Series/Index with Series having matching Index
+ t = Series(t, index=s)
+ result = s.str.cat(t)
+ assert_series_or_index_equal(result, expected)
- # Series/Index with Series
- t = Series(t)
- # s as Series has same index as t -> no warning
- # s as Index is different from t.index -> warning (tested below)
- if series_or_index == 'series':
- assert_series_equal(s.str.cat(t), exp)
+ # Series/Index with Series.values
+ result = s.str.cat(t.values)
+ assert_series_or_index_equal(result, expected)
- # Series/Index with Series: warning if different indexes
- t.index = t.index + 1
+ # Series/Index with Series having different Index
+ t = Series(t.values, index=t)
with tm.assert_produces_warning(expected_warning=FutureWarning):
# FutureWarning to switch to alignment by default
- assert_series_or_index_equal(s.str.cat(t, na_rep='-'), exp)
+ result = s.str.cat(t)
+ assert_series_or_index_equal(result, expected)
- @pytest.mark.parametrize('series_or_index', ['series', 'index'])
- def test_str_cat_mixed_inputs(self, series_or_index):
+ @pytest.mark.parametrize('box', [Series, Index])
+ def test_str_cat_mixed_inputs(self, box):
s = Index(['a', 'b', 'c', 'd'])
- if series_or_index == 'series':
- s = Series(s)
- t = Series(['A', 'B', 'C', 'D'])
- d = concat([t, Series(s)], axis=1)
+ s = s if box == Index else Series(s, index=s)
- exp = Index(['aAa', 'bBb', 'cCc', 'dDd'])
- if series_or_index == 'series':
- exp = Series(exp)
+ t = Series(['A', 'B', 'C', 'D'], index=s.values)
+ d = concat([t, Series(s, index=s)], axis=1)
- # Series/Index with DataFrame
- # s as Series has same index as d -> no warning
- # s as Index is different from d.index -> warning (tested below)
- if series_or_index == 'series':
- assert_series_equal(s.str.cat(d), exp)
+ expected = Index(['aAa', 'bBb', 'cCc', 'dDd'])
+ expected = expected if box == Index else Series(expected.values,
+ index=s.values)
- # Series/Index with DataFrame: warning if different indexes
- d.index = d.index + 1
- with tm.assert_produces_warning(expected_warning=FutureWarning):
- # FutureWarning to switch to alignment by default
- assert_series_or_index_equal(s.str.cat(d), exp)
+ # Series/Index with DataFrame
+ result = s.str.cat(d)
+ assert_series_or_index_equal(result, expected)
# Series/Index with two-dimensional ndarray
- assert_series_or_index_equal(s.str.cat(d.values), exp)
+ result = s.str.cat(d.values)
+ assert_series_or_index_equal(result, expected)
# Series/Index with list of Series
- # s as Series has same index as t, s -> no warning
- # s as Index is different from t.index -> warning (tested below)
- if series_or_index == 'series':
- assert_series_equal(s.str.cat([t, s]), exp)
-
- # Series/Index with list of Series: warning if different indexes
- tt = t.copy()
- tt.index = tt.index + 1
- with tm.assert_produces_warning(expected_warning=FutureWarning):
- # FutureWarning to switch to alignment by default
- assert_series_or_index_equal(s.str.cat([tt, s]), exp)
+ result = s.str.cat([t, s])
+ assert_series_or_index_equal(result, expected)
+
+ # Series/Index with mixed list of Series/array
+ result = s.str.cat([t, s.values])
+ assert_series_or_index_equal(result, expected)
# Series/Index with list of list-likes
with tm.assert_produces_warning(expected_warning=FutureWarning):
- # nested lists will be deprecated
- assert_series_or_index_equal(s.str.cat([t.values, list(s)]), exp)
+ # nested list-likes will be deprecated
+ result = s.str.cat([t.values, list(s)])
+ assert_series_or_index_equal(result, expected)
+
+ # Series/Index with list of Series; different indexes
+ t.index = ['b', 'c', 'd', 'a']
+ with tm.assert_produces_warning(expected_warning=FutureWarning):
+ # FutureWarning to switch to alignment by default
+ result = s.str.cat([t, s])
+ assert_series_or_index_equal(result, expected)
- # Series/Index with mixed list of Series/list-like
- # s as Series has same index as t -> no warning
- # s as Index is different from t.index -> warning (tested below)
- if series_or_index == 'series':
- assert_series_equal(s.str.cat([t, s.values]), exp)
+ # Series/Index with mixed list; different indexes
+ with tm.assert_produces_warning(expected_warning=FutureWarning):
+ # FutureWarning to switch to alignment by default
+ result = s.str.cat([t, s.values])
+ assert_series_or_index_equal(result, expected)
- # Series/Index with mixed list: warning if different indexes
+ # Series/Index with DataFrame; different indexes
+ d.index = ['b', 'c', 'd', 'a']
with tm.assert_produces_warning(expected_warning=FutureWarning):
# FutureWarning to switch to alignment by default
- assert_series_or_index_equal(s.str.cat([tt, s.values]), exp)
+ result = s.str.cat(d)
+ assert_series_or_index_equal(result, expected)
# Series/Index with iterator of list-likes
with tm.assert_produces_warning(expected_warning=FutureWarning):
# nested list-likes will be deprecated
- assert_series_or_index_equal(s.str.cat(iter([t.values, list(s)])),
- exp)
+ result = s.str.cat(iter([t.values, list(s)]))
+ assert_series_or_index_equal(result, expected)
# errors for incorrect lengths
- rgx = 'All arrays must be same length, except.*'
+ rgx = 'All arrays must be same length, except those having an index.*'
z = Series(['1', '2', '3'])
e = concat([z, z], axis=1)
@@ -357,7 +325,7 @@ def test_str_cat_mixed_inputs(self, series_or_index):
# mixed list of Series/list-like
with tm.assert_raises_regex(ValueError, rgx):
- s.str.cat([z, s.values])
+ s.str.cat([z.values, s])
# errors for incorrect arguments in list-like
rgx = 'others must be Series, Index, DataFrame,.*'
@@ -384,26 +352,23 @@ def test_str_cat_mixed_inputs(self, series_or_index):
with tm.assert_raises_regex(TypeError, rgx):
s.str.cat(1)
- @pytest.mark.parametrize('series_or_index, join', [
- ('series', 'left'), ('series', 'outer'),
- ('series', 'inner'), ('series', 'right'),
- ('index', 'left'), ('index', 'outer'),
- ('index', 'inner'), ('index', 'right')
- ])
- def test_str_cat_align_indexed(self, series_or_index, join):
+ @pytest.mark.parametrize('join', ['left', 'outer', 'inner', 'right'])
+ @pytest.mark.parametrize('box', [Series, Index])
+ def test_str_cat_align_indexed(self, box, join):
# https://github.com/pandas-dev/pandas/issues/18657
s = Series(['a', 'b', 'c', 'd'], index=['a', 'b', 'c', 'd'])
t = Series(['D', 'A', 'E', 'B'], index=['d', 'a', 'e', 'b'])
sa, ta = s.align(t, join=join)
# result after manual alignment of inputs
- exp = sa.str.cat(ta, na_rep='-')
+ expected = sa.str.cat(ta, na_rep='-')
- if series_or_index == 'index':
+ if box == Index:
s = Index(s)
sa = Index(sa)
- exp = Index(exp)
+ expected = Index(expected)
- assert_series_or_index_equal(s.str.cat(t, join=join, na_rep='-'), exp)
+ result = s.str.cat(t, join=join, na_rep='-')
+ assert_series_or_index_equal(result, expected)
@pytest.mark.parametrize('join', ['left', 'outer', 'inner', 'right'])
def test_str_cat_align_mixed_inputs(self, join):
@@ -411,31 +376,34 @@ def test_str_cat_align_mixed_inputs(self, join):
t = Series(['d', 'a', 'e', 'b'], index=[3, 0, 4, 1])
d = concat([t, t], axis=1)
- exp_outer = Series(['aaa', 'bbb', 'c--', 'ddd', '-ee'])
- sa, ta = s.align(t, join=join)
- exp = exp_outer.loc[ta.index]
+ expected_outer = Series(['aaa', 'bbb', 'c--', 'ddd', '-ee'])
+ expected = expected_outer.loc[s.index.join(t.index, how=join)]
# list of Series
- tm.assert_series_equal(s.str.cat([t, t], join=join, na_rep='-'), exp)
+ result = s.str.cat([t, t], join=join, na_rep='-')
+ tm.assert_series_equal(result, expected)
# DataFrame
- tm.assert_series_equal(s.str.cat(d, join=join, na_rep='-'), exp)
+ result = s.str.cat(d, join=join, na_rep='-')
+ tm.assert_series_equal(result, expected)
# mixed list of indexed/unindexed
- u = ['A', 'B', 'C', 'D']
- exp_outer = Series(['aaA', 'bbB', 'c-C', 'ddD', '-e-'])
- # u will be forced have index of s -> use s here as placeholder
- e = concat([t, s], axis=1, join=(join if join == 'inner' else 'outer'))
- sa, ea = s.align(e, join=join)
- exp = exp_outer.loc[ea.index]
+ u = np.array(['A', 'B', 'C', 'D'])
+ expected_outer = Series(['aaA', 'bbB', 'c-C', 'ddD', '-e-'])
+ # joint index of rhs [t, u]; u will be forced have index of s
+ rhs_idx = t.index & s.index if join == 'inner' else t.index | s.index
+
+ expected = expected_outer.loc[s.index.join(rhs_idx, how=join)]
+ result = s.str.cat([t, u], join=join, na_rep='-')
+ tm.assert_series_equal(result, expected)
with tm.assert_produces_warning(expected_warning=FutureWarning):
- # nested lists will be deprecated
- tm.assert_series_equal(s.str.cat([t, u], join=join, na_rep='-'),
- exp)
+ # nested list-likes will be deprecated
+ result = s.str.cat([t, list(u)], join=join, na_rep='-')
+ tm.assert_series_equal(result, expected)
# errors for incorrect lengths
- rgx = 'If `others` contains arrays or lists.*'
+ rgx = r'If `others` contains arrays or lists \(or other list-likes.*'
z = Series(['1', '2', '3']).values
# unindexed object of wrong length
@@ -451,14 +419,14 @@ def test_str_cat_special_cases(self):
t = Series(['d', 'a', 'e', 'b'], index=[3, 0, 4, 1])
# iterator of elements with different types
- exp = Series(['aaa', 'bbb', 'c-c', 'ddd', '-e-'])
- tm.assert_series_equal(s.str.cat(iter([t, s.values]),
- join='outer', na_rep='-'), exp)
+ expected = Series(['aaa', 'bbb', 'c-c', 'ddd', '-e-'])
+ result = s.str.cat(iter([t, s.values]), join='outer', na_rep='-')
+ tm.assert_series_equal(result, expected)
# right-align with different indexes in others
- exp = Series(['aa-', 'd-d'], index=[0, 3])
- tm.assert_series_equal(s.str.cat([t.loc[[0]], t.loc[[3]]],
- join='right', na_rep='-'), exp)
+ expected = Series(['aa-', 'd-d'], index=[0, 3])
+ result = s.str.cat([t.loc[[0]], t.loc[[3]]], join='right', na_rep='-')
+ tm.assert_series_equal(result, expected)
def test_cat_on_filtered_index(self):
df = DataFrame(index=MultiIndex.from_product(
@@ -469,12 +437,11 @@ def test_cat_on_filtered_index(self):
str_year = df.year.astype('str')
str_month = df.month.astype('str')
- str_both = str_year.str.cat(str_month, sep=' ', join='left')
+ str_both = str_year.str.cat(str_month, sep=' ')
assert str_both.loc[1] == '2011 2'
- str_multiple = str_year.str.cat([str_month, str_month],
- sep=' ', join='left')
+ str_multiple = str_year.str.cat([str_month, str_month], sep=' ')
assert str_multiple.loc[1] == '2011 2 2'
@@ -1616,7 +1583,7 @@ def test_empty_str_methods(self):
# GH7241
# (extract) on empty series
- tm.assert_series_equal(empty_str, empty.str.cat(empty, join='left'))
+ tm.assert_series_equal(empty_str, empty.str.cat(empty))
assert '' == empty.str.cat()
tm.assert_series_equal(empty_str, empty.str.title())
tm.assert_series_equal(empty_int, empty.str.count('a'))
@@ -3184,9 +3151,9 @@ def test_method_on_bytes(self):
lhs = Series(np.array(list('abc'), 'S1').astype(object))
rhs = Series(np.array(list('def'), 'S1').astype(object))
if compat.PY3:
- pytest.raises(TypeError, lhs.str.cat, rhs, join='left')
+ pytest.raises(TypeError, lhs.str.cat, rhs)
else:
- result = lhs.str.cat(rhs, join='left')
+ result = lhs.str.cat(rhs)
expected = Series(np.array(
['ad', 'be', 'cf'], 'S2').astype(object))
tm.assert_series_equal(result, expected)
| Cleaning up some left-overs from #20347 and preparing the tests for changing the default of `join` to `'left'` in 1.0 (which would break some tests that assume no alignment currently). As a side benefit, this has:
* nicer parametrization
* nicer switches (and reduced usage of them within the tests)
* now follows `result / expected` pattern instead of calculating result within `assert_..._equal` | https://api.github.com/repos/pandas-dev/pandas/pulls/22575 | 2018-09-02T00:36:07Z | 2018-09-08T03:05:50Z | 2018-09-08T03:05:50Z | 2018-09-09T21:23:30Z |
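For reference, a small sketch of the index-aligned behaviour the reworked tests above pin down (`join='left'`, the intended future default):

```
import pandas as pd

s = pd.Series(['a', 'b', 'c', 'd'], index=['a', 'b', 'c', 'd'])
t = pd.Series(['D', 'A', 'B'], index=['d', 'a', 'b'])

# `t` is aligned to the index of `s`; the missing 'c' entry is filled via na_rep
s.str.cat(t, join='left', na_rep='-')  # -> ['aA', 'bB', 'c-', 'dD']
```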
CLN: Rename 'n' to 'repeats' in .repeat methods | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 7ed92935a0991..1c820c5bcd114 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -527,6 +527,7 @@ Removal of prior version deprecations/changes
- Several private functions were removed from the (non-public) module ``pandas.core.common`` (:issue:`22001`)
- Removal of the previously deprecated module ``pandas.core.datetools`` (:issue:`14105`, :issue:`14094`)
- Strings passed into :meth:`DataFrame.groupby` that refer to both column and index levels will raise a ``ValueError`` (:issue:`14432`)
+- :meth:`Index.repeat` and :meth:`MultiIndex.repeat` have renamed the ``n`` argument to ``repeats`` (:issue:`14645`)
-
.. _whatsnew_0240.performance:
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 7b7fb968b3050..710c9d0e296c9 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -53,7 +53,7 @@
import pandas.core.common as com
from pandas.core import ops
from pandas.util._decorators import (
- Appender, Substitution, cache_readonly, deprecate_kwarg)
+ Appender, Substitution, cache_readonly)
from pandas.core.indexes.frozen import FrozenList
import pandas.core.dtypes.concat as _concat
import pandas.core.missing as missing
@@ -773,7 +773,6 @@ def memory_usage(self, deep=False):
return result
# ops compat
- @deprecate_kwarg(old_arg_name='n', new_arg_name='repeats')
def repeat(self, repeats, *args, **kwargs):
"""
Repeat elements of an Index.
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 5b2e3a76adf05..955f1461075f9 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -27,7 +27,7 @@
from pandas.core.dtypes.missing import isna, array_equivalent
from pandas.errors import PerformanceWarning, UnsortedIndexError
-from pandas.util._decorators import Appender, cache_readonly, deprecate_kwarg
+from pandas.util._decorators import Appender, cache_readonly
import pandas.core.common as com
import pandas.core.missing as missing
import pandas.core.algorithms as algos
@@ -1646,7 +1646,6 @@ def append(self, other):
def argsort(self, *args, **kwargs):
return self.values.argsort(*args, **kwargs)
- @deprecate_kwarg(old_arg_name='n', new_arg_name='repeats')
def repeat(self, repeats, *args, **kwargs):
nv.validate_repeat(args, kwargs)
return MultiIndex(levels=self.levels,
diff --git a/pandas/tests/indexes/multi/test_reshape.py b/pandas/tests/indexes/multi/test_reshape.py
index 85eec6a232180..fa82414680eda 100644
--- a/pandas/tests/indexes/multi/test_reshape.py
+++ b/pandas/tests/indexes/multi/test_reshape.py
@@ -100,10 +100,6 @@ def test_repeat():
numbers, names.repeat(reps)], names=names)
tm.assert_index_equal(m.repeat(reps), expected)
- with tm.assert_produces_warning(FutureWarning):
- result = m.repeat(n=reps)
- tm.assert_index_equal(result, expected)
-
def test_insert_base(idx):
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index c858b4d86cf5e..755b3cc7f1dca 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -2402,15 +2402,6 @@ def test_repeat(self):
result = index.repeat(repeats)
tm.assert_index_equal(result, expected)
- def test_repeat_warns_n_keyword(self):
- index = pd.Index([1, 2, 3])
- expected = pd.Index([1, 1, 2, 2, 3, 3])
-
- with tm.assert_produces_warning(FutureWarning):
- result = index.repeat(n=2)
-
- tm.assert_index_equal(result, expected)
-
@pytest.mark.parametrize("index", [
pd.Index([np.nan]), pd.Index([np.nan, 1]),
pd.Index([1, 2, np.nan]), pd.Index(['a', 'b', np.nan]),
| For `Index` and `MultiIndex`.
xref #14645. | https://api.github.com/repos/pandas-dev/pandas/pulls/22574 | 2018-09-02T00:29:05Z | 2018-09-04T11:13:35Z | 2018-09-04T11:13:35Z | 2018-09-05T03:53:01Z |
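Behaviour after the removal, sketched:

```
import pandas as pd

idx = pd.Index([1, 2, 3])
idx.repeat(repeats=2)  # Index([1, 1, 2, 2, 3, 3])
idx.repeat(n=2)        # raises TypeError now that the deprecated alias is gone
```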
CLN: Remove unused variable in test_reshape.py | diff --git a/pandas/tests/indexes/multi/test_reshape.py b/pandas/tests/indexes/multi/test_reshape.py
index 85eec6a232180..efa9fca752157 100644
--- a/pandas/tests/indexes/multi/test_reshape.py
+++ b/pandas/tests/indexes/multi/test_reshape.py
@@ -126,5 +126,5 @@ def test_delete_base(idx):
assert result.name == expected.name
with pytest.raises((IndexError, ValueError)):
- # either depending on numpy version
- result = idx.delete(len(idx))
+ # Exception raised depends on NumPy version.
+ idx.delete(len(idx))
| Title is self-explanatory.
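For anyone skimming: the binding was dead code, because the statement under `pytest.raises` is expected to raise before the name is ever assigned. A minimal illustration with hypothetical values:

```
import pytest

with pytest.raises(ValueError):
    result = int('not a number')  # raises, so `result` is never bound
```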
| https://api.github.com/repos/pandas-dev/pandas/pulls/22573 | 2018-09-02T00:16:39Z | 2018-09-02T01:17:51Z | 2018-09-02T01:17:51Z | 2018-09-02T01:17:53Z |
Use dispatch_to_series where possible | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 74f760f382c76..d577cfbbd460d 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -4806,13 +4806,14 @@ def _arith_op(left, right):
return ops.dispatch_to_series(this, other, _arith_op)
else:
result = _arith_op(this.values, other.values)
-
- return self._constructor(result, index=new_index, columns=new_columns,
- copy=False)
+ return self._constructor(result,
+ index=new_index, columns=new_columns,
+ copy=False)
def _combine_match_index(self, other, func, level=None):
left, right = self.align(other, join='outer', axis=0, level=level,
copy=False)
+ assert left.index.equals(right.index)
new_data = func(left.values.T, right.values).T
return self._constructor(new_data,
index=left.index, columns=self.columns,
@@ -4821,6 +4822,7 @@ def _combine_match_index(self, other, func, level=None):
def _combine_match_columns(self, other, func, level=None, try_cast=True):
left, right = self.align(other, join='outer', axis=1, level=level,
copy=False)
+ assert left.columns.equals(right.index)
new_data = left._data.eval(func=func, other=right,
axes=[left.columns, self.index],
@@ -4829,12 +4831,7 @@ def _combine_match_columns(self, other, func, level=None, try_cast=True):
def _combine_const(self, other, func, errors='raise', try_cast=True):
if lib.is_scalar(other) or np.ndim(other) == 0:
- new_data = {i: func(self.iloc[:, i], other)
- for i, col in enumerate(self.columns)}
-
- result = self._constructor(new_data, index=self.index, copy=False)
- result.columns = self.columns
- return result
+ return ops.dispatch_to_series(self, other, func)
new_data = self._data.eval(func=func, other=other,
errors=errors,
diff --git a/pandas/core/ops.py b/pandas/core/ops.py
index b25809bf074f7..a86e57fd8876d 100644
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -1638,6 +1638,7 @@ def dispatch_to_series(left, right, func):
"""
# Note: we use iloc to access columns for compat with cases
# with non-unique columns.
+ right = lib.item_from_zerodim(right)
if lib.is_scalar(right):
new_data = {i: func(left.iloc[:, i], right)
for i in range(len(left.columns))}
| Broken off from #22534, since that ended up surfacing a bug elsewhere.
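For context, a simplified sketch of the column-wise dispatch pattern this consolidates on (not the actual internals, which live in `pandas.core.ops`):

```
def dispatch_to_series_sketch(left, right, func):
    # operate column by column; positional access keeps duplicate labels intact
    new_data = {i: func(left.iloc[:, i], right)
                for i in range(len(left.columns))}
    result = type(left)(new_data, index=left.index, copy=False)
    result.columns = left.columns
    return result
```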
| https://api.github.com/repos/pandas-dev/pandas/pulls/22572 | 2018-09-01T17:27:14Z | 2018-09-05T11:29:54Z | 2018-09-05T11:29:54Z | 2018-09-05T13:43:02Z |
DOC: Updating str_repeat docstring | diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index e455c751057d1..776058bd45d22 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -675,20 +675,42 @@ def str_replace(arr, pat, repl, n=-1, case=None, flags=0, regex=True):
def str_repeat(arr, repeats):
"""
- Duplicate each string in the Series/Index by indicated number
- of times.
+ Duplicate each string in the Series or Index.
Parameters
----------
- repeats : int or array
- Same value for all (int) or different value per (array)
+ repeats : int or sequence of int
+ Same value for all (int) or different value per (sequence).
Returns
-------
- repeated : Series/Index of objects
+ Series or Index of object
+ Series or Index of repeated string objects specified by
+ input parameter repeats.
+
+ Examples
+ --------
+ >>> s = pd.Series(['a', 'b', 'c'])
+ >>> s
+ 0 a
+ 1 b
+ 2 c
+
+ Single int repeats string in Series
+
+ >>> s.str.repeat(repeats=2)
+ 0 aa
+ 1 bb
+ 2 cc
+
+ Sequence of int repeats corresponding string in Series
+
+ >>> s.str.repeat(repeats=[1, 2, 3])
+ 0 a
+ 1 bb
+ 2 ccc
"""
if is_scalar(repeats):
-
def rep(x):
try:
return compat.binary_type.__mul__(x, repeats)
| https://api.github.com/repos/pandas-dev/pandas/pulls/22571 | 2018-09-01T14:30:12Z | 2018-09-18T12:40:46Z | 2018-09-18T12:40:46Z | 2018-10-04T16:08:24Z |
|
DOC: Updating str_pad docstring | diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index e455c751057d1..c32431ac253b2 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -1314,23 +1314,57 @@ def str_index(arr, sub, start=0, end=None, side='left'):
def str_pad(arr, width, side='left', fillchar=' '):
"""
- Pad strings in the Series/Index with an additional character to
- specified side.
+ Pad strings in the Series/Index up to width.
Parameters
----------
width : int
Minimum width of resulting string; additional characters will be filled
- with spaces
+ with character defined in `fillchar`.
side : {'left', 'right', 'both'}, default 'left'
- fillchar : str
- Additional character for filling, default is whitespace
+ Side from which to fill resulting string.
+ fillchar : str, default ' '
+ Additional character for filling, default is whitespace.
Returns
-------
- padded : Series/Index of objects
- """
+ Series or Index of object
+ Returns Series or Index with minimum number of char in object.
+ See Also
+ --------
+ Series.str.rjust: Fills the left side of strings with an arbitrary
+ character. Equivalent to ``Series.str.pad(side='left')``.
+ Series.str.ljust: Fills the right side of strings with an arbitrary
+ character. Equivalent to ``Series.str.pad(side='right')``.
+ Series.str.center: Fills both sides of strings with an arbitrary
+ character. Equivalent to ``Series.str.pad(side='both')``.
+ Series.str.zfill: Pad strings in the Series/Index by prepending '0'
+ character. Equivalent to ``Series.str.pad(side='left', fillchar='0')``.
+
+ Examples
+ --------
+ >>> s = pd.Series(["caribou", "tiger"])
+ >>> s
+ 0 caribou
+ 1 tiger
+ dtype: object
+
+ >>> s.str.pad(width=10)
+ 0 caribou
+ 1 tiger
+ dtype: object
+
+ >>> s.str.pad(width=10, side='right', fillchar='-')
+ 0 caribou---
+ 1 tiger-----
+ dtype: object
+
+ >>> s.str.pad(width=10, side='both', fillchar='-')
+ 0 -caribou--
+ 1 --tiger---
+ dtype: object
+ """
if not isinstance(fillchar, compat.string_types):
msg = 'fillchar must be a character, not {0}'
raise TypeError(msg.format(type(fillchar).__name__))
| Added Examples to docstring.
Added documentation to each part of docstring | https://api.github.com/repos/pandas-dev/pandas/pulls/22570 | 2018-09-01T14:29:42Z | 2018-09-25T16:49:13Z | 2018-09-25T16:49:13Z | 2018-09-25T17:16:07Z |
DOC: Updating str_slice docstring | diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index 861739f6c694c..5a23951145cb4 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -1437,17 +1437,69 @@ def str_rsplit(arr, pat=None, n=None):
def str_slice(arr, start=None, stop=None, step=None):
"""
- Slice substrings from each element in the Series/Index
+ Slice substrings from each element in the Series or Index.
Parameters
----------
- start : int or None
- stop : int or None
- step : int or None
+ start : int, optional
+ Start position for slice operation.
+ stop : int, optional
+ Stop position for slice operation.
+ step : int, optional
+ Step size for slice operation.
Returns
-------
- sliced : Series/Index of objects
+ Series or Index of object
+ Series or Index from sliced substring from original string object.
+
+ See Also
+ --------
+ Series.str.slice_replace : Replace a slice with a string.
+ Series.str.get : Return element at position.
+ Equivalent to `Series.str.slice(start=i, stop=i+1)` with `i`
+ being the position.
+
+ Examples
+ --------
+ >>> s = pd.Series(["koala", "fox", "chameleon"])
+ >>> s
+ 0 koala
+ 1 fox
+ 2 chameleon
+ dtype: object
+
+ >>> s.str.slice(start=1)
+ 0 oala
+ 1 ox
+ 2 hameleon
+ dtype: object
+
+ >>> s.str.slice(stop=2)
+ 0 ko
+ 1 fo
+ 2 ch
+ dtype: object
+
+ >>> s.str.slice(step=2)
+ 0 kaa
+ 1 fx
+ 2 caeen
+ dtype: object
+
+ >>> s.str.slice(start=0, stop=5, step=3)
+ 0 kl
+ 1 f
+ 2 cm
+ dtype: object
+
+ Equivalent behaviour to:
+
+ >>> s.str[0:5:3]
+ 0 kl
+ 1 f
+ 2 cm
+ dtype: object
"""
obj = slice(start, stop, step)
f = lambda x: x[obj]
| Added Examples to docstring.
Added documentation to each part of docstring | https://api.github.com/repos/pandas-dev/pandas/pulls/22569 | 2018-09-01T14:29:01Z | 2018-10-04T18:56:00Z | 2018-10-04T18:56:00Z | 2018-10-04T19:13:49Z |
DOC: Updating str_slice docstring | diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index e455c751057d1..3796abafada2b 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -1314,8 +1314,8 @@ def str_index(arr, sub, start=0, end=None, side='left'):
def str_pad(arr, width, side='left', fillchar=' '):
"""
- Pad strings in the Series/Index with an additional character to
- specified side.
+ Pad strings in the Series/Index with additional characters on
+ specified side to fill up to specified width.
Parameters
----------
@@ -1323,12 +1323,31 @@ def str_pad(arr, width, side='left', fillchar=' '):
Minimum width of resulting string; additional characters will be filled
with spaces
side : {'left', 'right', 'both'}, default 'left'
- fillchar : str
+ fillchar : str, default ' '
Additional character for filling, default is whitespace
Returns
-------
- padded : Series/Index of objects
+ Series or Index of objects
+
+ Examples
+ --------
+ >>> s = pd.Series(["panda", "fox"])
+ >>> s
+ 0 panda
+ 1 fox
+
+ >>> s.str.pad(10)
+ 0 panda
+ 1 fox
+
+ >>> s.str.pad(10, 'right')
+ 0 panda
+ 1 fox
+
+ >>> s.str.pad(10, 'both', '-')
+ 0 --panda---
+ 1 ---fox----
"""
if not isinstance(fillchar, compat.string_types):
@@ -1385,17 +1404,40 @@ def str_rsplit(arr, pat=None, n=None):
def str_slice(arr, start=None, stop=None, step=None):
"""
- Slice substrings from each element in the Series/Index
+ Slice substrings from each element in the Series or Index
Parameters
----------
start : int or None
+ Start position for slice operation
stop : int or None
+ Stop position for slice operation
step : int or None
+ Step size for slice operation
Returns
-------
- sliced : Series/Index of objects
+ Series or Index of object
+ Series or Index containing sliced substring from original string
+
+ Examples
+ --------
+ >>> s = pd.Series(["panda", "fox"])
+ >>> s
+ 0 panda
+ 1 fox
+
+ >>> s.str.slice(start=2)
+ 0 nda
+ 1 x
+
+ >>> s.str.slice(stop=2)
+ 0 pa
+ 1 fo
+
+ >>> s.str.slice(step=2)
+ 0 pna
+ 1 fx
"""
obj = slice(start, stop, step)
f = lambda x: x[obj]
| Added Examples to docstring.
Added documentation to each part of docstring | https://api.github.com/repos/pandas-dev/pandas/pulls/22568 | 2018-09-01T14:04:16Z | 2018-09-01T14:26:58Z | null | 2018-09-02T06:16:31Z |
DOC: Updating str_pad docstring | diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index e455c751057d1..98216c3a7455e 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -1314,21 +1314,42 @@ def str_index(arr, sub, start=0, end=None, side='left'):
def str_pad(arr, width, side='left', fillchar=' '):
"""
- Pad strings in the Series/Index with an additional character to
- specified side.
+ Pad strings in the Series/Index with additional characters on
+ specified side to fill up to specified width.
Parameters
----------
width : int
Minimum width of resulting string; additional characters will be filled
- with spaces
+ with character defined in fillchar
side : {'left', 'right', 'both'}, default 'left'
- fillchar : str
+ Side from which to fill resulting string
+ fillchar : str, default ' '
Additional character for filling, default is whitespace
Returns
-------
- padded : Series/Index of objects
+ Series or Index of object
+ Returns Series or Index with minimum number of char in object
+
+ Examples
+ --------
+ >>> s = pd.Series(["panda", "fox"])
+ >>> s
+ 0 panda
+ 1 fox
+
+ >>> s.str.pad(width=10)
+ 0 panda
+ 1 fox
+
+ >>> s.str.pad(width=10, side='right', fillchar='-')
+ 0 panda-----
+ 1 fox-------
+
+ >>> s.str.pad(width=10, side='both', fillchar='-')
+ 0 --panda---
+ 1 ---fox----
"""
if not isinstance(fillchar, compat.string_types):
| - [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
Pad documentation was describing a different behaviour, so I fixed it and added examples.
| https://api.github.com/repos/pandas-dev/pandas/pulls/22567 | 2018-09-01T13:27:07Z | 2018-09-01T14:27:41Z | null | 2018-09-01T14:33:03Z |
DOC: Updating str_repeat docstring | diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index e455c751057d1..6720510f19626 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -680,12 +680,35 @@ def str_repeat(arr, repeats):
Parameters
----------
- repeats : int or array
- Same value for all (int) or different value per (array)
+ repeats : int or sequence of int
+ Same value for all (int) or different value per (sequence)
Returns
-------
- repeated : Series/Index of objects
+ Series or Index of object
+ Series or Index of repeated string object specified by input parameter repeats
+
+ Examples
+ --------
+ >>> s = pd.Series(['a', 'b', 'c'])
+ >>> s
+ 0 a
+ 1 b
+ 2 c
+
+ Single int repeats string in Series
+
+ >>> s.str.repeat(repeats=2)
+ 0 aa
+ 1 bb
+ 2 cc
+
+ Sequence of int repeats corresponding string in Series
+
+ >>> s.str.repeat(repeats=[1, 2, 3])
+ 0 a
+ 1 bb
+ 2 ccc
"""
if is_scalar(repeats):
| - [x] tests passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
Updated the docstring with examples for `str.repeat()`.
| https://api.github.com/repos/pandas-dev/pandas/pulls/22566 | 2018-09-01T12:51:49Z | 2018-09-01T14:27:19Z | null | 2023-05-11T01:18:13Z |
DOC: Formatting in Series.str.extractall | diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index e455c751057d1..75d3349f93540 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -942,19 +942,23 @@ def str_extractall(arr, pat, flags=0):
Parameters
----------
- pat : string
- Regular expression pattern with capturing groups
+ pat : str
+ Regular expression pattern with capturing groups.
flags : int, default 0 (no flags)
- re module flags, e.g. re.IGNORECASE
+ A ``re`` module flag, for example ``re.IGNORECASE``. These allow
+ to modify regular expression matching for things like case, spaces,
+ etc. Multiple flags can be combined with the bitwise OR operator,
+ for example ``re.IGNORECASE | re.MULTILINE``.
Returns
-------
- A DataFrame with one row for each match, and one column for each
- group. Its rows have a MultiIndex with first levels that come from
- the subject Series. The last level is named 'match' and indicates
- the order in the subject. Any capture group names in regular
- expression pat will be used for column names; otherwise capture
- group numbers will be used.
+ DataFrame
+ A ``DataFrame`` with one row for each match, and one column for each
+ group. Its rows have a ``MultiIndex`` with first levels that come from
+ the subject ``Series``. The last level is named 'match' and indexes the
+ matches in each item of the ``Series``. Any capture group names in
+ regular expression pat will be used for column names; otherwise capture
+ group numbers will be used.
See Also
--------
@@ -1000,7 +1004,6 @@ def str_extractall(arr, pat, flags=0):
1 a 2
B 0 b 1
C 0 NaN 1
-
"""
regex = re.compile(pat, flags=flags)
| In Series.str.extractall, corrected the formatting in the return value and added a period at the end of the parameter descriptions. Can also clarify descriptions if useful.
- [ ] closes #xxxx
- [ ] tests added / passed
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
```
################################################################################
################### Docstring (pandas.Series.str.extractall) ###################
################################################################################
For each subject string in the Series, extract groups from all
matches of regular expression pat. When each subject string in the
Series has exactly one match, extractall(pat).xs(0, level='match')
is the same as extract(pat).
.. versionadded:: 0.18.0
Parameters
----------
pat : str
Regular expression pattern with capturing groups.
flags : int, default 0 (no flags)
re module flags, e.g. re.IGNORECASE.
Returns
-------
DataFrame
A DataFrame with one row for each match, and one column for each
group. Its rows have a MultiIndex with first levels that come from
the subject Series. The last level is named 'match' and indexes the
matches in each item of the Series. Any capture group names in regular
expression pat will be used for column names; otherwise capture
group numbers will be used.
See Also
--------
extract : returns first match only (not all matches)
Examples
--------
A pattern with one group will return a DataFrame with one column.
Indices with no matches will not appear in the result.
>>> s = pd.Series(["a1a2", "b1", "c1"], index=["A", "B", "C"])
>>> s.str.extractall(r"[ab](\d)")
0
match
A 0 1
1 2
B 0 1
Capture group names are used for column names of the result.
>>> s.str.extractall(r"[ab](?P<digit>\d)")
digit
match
A 0 1
1 2
B 0 1
A pattern with two groups will return a DataFrame with two columns.
>>> s.str.extractall(r"(?P<letter>[ab])(?P<digit>\d)")
letter digit
match
A 0 a 1
1 a 2
B 0 b 1
Optional groups that do not match are NaN in the result.
>>> s.str.extractall(r"(?P<letter>[ab])?(?P<digit>\d)")
letter digit
match
A 0 a 1
1 a 2
B 0 b 1
C 0 NaN 1
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
Closing quotes should be placed in the line after the last text in the docstring (do not close the quotes in the same line as the text, or leave a blank line between the last text and the quotes)
Use only one blank line to separate sections or paragraphs
Errors in parameters section
Parameter "flags" description should start with a capital letter
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/22565 | 2018-09-01T12:34:03Z | 2018-09-18T12:46:36Z | 2018-09-18T12:46:36Z | 2018-09-18T12:46:36Z |
BUG: Fix (22477) dtype=str converts NaN to 'n' | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index da48d10e9ef58..ba29a17057d02 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1443,6 +1443,7 @@ Reshaping
- Bug in :func:`merge_asof` where confusing error message raised when attempting to merge with missing values (:issue:`23189`)
- Bug in :meth:`DataFrame.nsmallest` and :meth:`DataFrame.nlargest` for dataframes that have a :class:`MultiIndex` for columns (:issue:`23033`).
- Bug in :meth:`DataFrame.append` with a :class:`Series` with a dateutil timezone would raise a ``TypeError`` (:issue:`23682`)
+- Bug in ``Series`` construction when passing no data and ``dtype=str`` (:issue:`22477`)
.. _whatsnew_0240.bug_fixes.sparse:
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index c7c6f89eb13a4..3c5f8830441f7 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -6,7 +6,7 @@
from pandas._libs import lib, tslib, tslibs
from pandas._libs.tslibs import OutOfBoundsDatetime, Period, iNaT
-from pandas.compat import PY3, string_types, text_type
+from pandas.compat import PY3, string_types, text_type, to_str
from .common import (
_INT64_DTYPE, _NS_DTYPE, _POSSIBLY_CAST_DTYPES, _TD_DTYPE, _string_dtypes,
@@ -1216,11 +1216,16 @@ def construct_1d_arraylike_from_scalar(value, length, dtype):
if not isinstance(dtype, (np.dtype, type(np.dtype))):
dtype = dtype.dtype
- # coerce if we have nan for an integer dtype
- # GH 22858: only cast to float if an index
- # (passed here as length) is specified
if length and is_integer_dtype(dtype) and isna(value):
- dtype = np.float64
+ # coerce if we have nan for an integer dtype
+ dtype = np.dtype('float64')
+ elif isinstance(dtype, np.dtype) and dtype.kind in ("U", "S"):
+ # we need to coerce to object dtype to avoid
+ # to allow numpy to take our string as a scalar value
+ dtype = object
+ if not isna(value):
+ value = to_str(value)
+
subarr = np.empty(length, dtype=dtype)
subarr.fill(value)
diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index a01266870b8fc..33177ac452414 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -419,7 +419,7 @@ def is_datetime64_dtype(arr_or_dtype):
return False
try:
tipo = _get_dtype_type(arr_or_dtype)
- except TypeError:
+ except (TypeError, UnicodeEncodeError):
return False
return issubclass(tipo, np.datetime64)
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index ce0cf0d5c089e..f5a445e2cca9a 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -134,6 +134,17 @@ def test_constructor_no_data_index_order(self):
result = pd.Series(index=['b', 'a', 'c'])
assert result.index.tolist() == ['b', 'a', 'c']
+ def test_constructor_no_data_string_type(self):
+ # GH 22477
+ result = pd.Series(index=[1], dtype=str)
+ assert np.isnan(result.iloc[0])
+
+ @pytest.mark.parametrize('item', ['entry', 'ѐ', 13])
+ def test_constructor_string_element_string_type(self, item):
+ # GH 22477
+ result = pd.Series(item, index=[1], dtype=str)
+ assert result.iloc[0] == str(item)
+
def test_constructor_dtype_str_na_values(self, string_dtype):
# https://github.com/pandas-dev/pandas/issues/21083
ser = Series(['x', None], dtype=string_dtype)
| - [x] closes #22477
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
Now:
```
>>> import pandas as pd
>>> result = pd.Series(index=range(5), dtype=str)
>>> result
0 NaN
1 NaN
2 NaN
3 NaN
4 NaN
dtype: object
```
This is implemented by adding a check so that if the `dtype` is `str`, an empty array of object dtype is created and then filled with the values. Two tests have been implemented (a short sketch follows below):
- a test for an empty series, checking that it is filled with NaN and not with 'n';
- a test for scalar values (strings and non-strings), checking that they are stored as their string representation. | https://api.github.com/repos/pandas-dev/pandas/pulls/22564 | 2018-09-01T12:08:09Z | 2018-11-20T15:26:30Z | 2018-11-20T15:26:30Z | 2018-11-20T15:27:16Z
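A short sketch of the two behaviours tested above:

```
import numpy as np
import pandas as pd

empty = pd.Series(index=[1], dtype=str)
assert np.isnan(empty.iloc[0])          # NaN, not the first character 'n'

filled = pd.Series(13, index=[1], dtype=str)
assert filled.iloc[0] == '13'           # the scalar is coerced via str()
```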
DOC: Deleted 'replaced' from Returns docstring | diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index e455c751057d1..79122a677fd09 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -558,7 +558,10 @@ def str_replace(arr, pat, repl, n=-1, case=None, flags=0, regex=True):
Returns
-------
- replaced : Series/Index of objects
+ Series or Index of object
+ A copy of the object with all matching occurrences of `pat` replaced by
+ `repl`.
+
Raises
------
| Deleted the word 'replaced' from the Returns section of the docstring, since it is an unnecessary local variable name. | https://api.github.com/repos/pandas-dev/pandas/pulls/22563 | 2018-09-01T11:50:19Z | 2018-09-02T16:07:00Z | 2018-09-02T16:07:00Z | 2018-09-02T16:07:48Z |
DOC: fix return type of str.extract | diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index e455c751057d1..e078f6ce3d5c7 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -854,8 +854,9 @@ def str_extract(arr, pat, flags=0, expand=True):
pat : string
Regular expression pattern with capturing groups.
flags : int, default 0 (no flags)
- ``re`` module flags, e.g. ``re.IGNORECASE``.
- See :mod:`re`
+ Flags from the ``re`` module, e.g. ``re.IGNORECASE``, that
+ modify regular expression matching for things like case,
+ spaces, etc. For more details, see :mod:`re`.
expand : bool, default True
If True, return DataFrame with one column per capture group.
If False, return a Series/Index if there is one capture group
@@ -865,13 +866,15 @@ def str_extract(arr, pat, flags=0, expand=True):
Returns
-------
- DataFrame with one row for each subject string, and one column for
- each group. Any capture group names in regular expression pat will
- be used for column names; otherwise capture group numbers will be
- used. The dtype of each result column is always object, even when
- no match is found. If expand=False and pat has only one capture group,
- then return a Series (if subject is a Series) or Index (if subject
- is an Index).
+ DataFrame or Series or Index
+ A DataFrame with one row for each subject string, and one
+ column for each group. Any capture group names in regular
+ expression pat will be used for column names; otherwise
+ capture group numbers will be used. The dtype of each result
+ column is always object, even when no match is found. If
+ ``expand=False`` and pat has only one capture group, then
+ return a Series (if subject is a Series) or Index (if subject
+ is an Index).
See Also
--------
| - [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
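The distinction the rewritten docstring spells out, in short:

```
import pandas as pd

s = pd.Series(['a1', 'b2', 'c3'])
s.str.extract(r'([ab])(\d)')          # DataFrame: one column per capture group
s.str.extract(r'(\d)', expand=False)  # Series: single group with expand=False
```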
| https://api.github.com/repos/pandas-dev/pandas/pulls/22562 | 2018-09-01T09:53:33Z | 2018-09-03T16:37:59Z | 2018-09-03T16:37:59Z | 2018-09-03T16:37:59Z |
BUG: DataFrame.apply not adding a frequency if freq=None (#22150) | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 649629714c3b1..fe74258325902 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -635,7 +635,7 @@ Datetimelike
- Bug in :meth:`DataFrame.eq` comparison against ``NaT`` incorrectly returning ``True`` or ``NaN`` (:issue:`15697`, :issue:`22163`)
- Bug in :class:`DatetimeIndex` subtraction that incorrectly failed to raise ``OverflowError`` (:issue:`22492`, :issue:`22508`)
- Bug in :class:`DatetimeIndex` incorrectly allowing indexing with ``Timedelta`` object (:issue:`20464`)
--
+- Bug in :class:`DatetimeIndex` where frequency was being set if original frequency was ``None`` (:issue:`22150`)
Timedelta
^^^^^^^^^
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index 46741ab15aa31..9b00f21668bf5 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -860,8 +860,6 @@ def union_many(self, others):
if isinstance(this, DatetimeIndex):
this._tz = timezones.tz_standardize(tz)
- if this.freq is None:
- this.freq = to_offset(this.inferred_freq)
return this
def join(self, other, how='left', level=None, return_indexers=False,
diff --git a/pandas/tests/frame/test_apply.py b/pandas/tests/frame/test_apply.py
index 8beab3fb816df..1452e1ab8d98d 100644
--- a/pandas/tests/frame/test_apply.py
+++ b/pandas/tests/frame/test_apply.py
@@ -11,6 +11,8 @@
import warnings
import numpy as np
+from hypothesis import given
+from hypothesis.strategies import composite, dates, integers, sampled_from
from pandas import (notna, DataFrame, Series, MultiIndex, date_range,
Timestamp, compat)
@@ -1155,3 +1157,24 @@ def test_agg_cython_table_raises(self, df, func, expected, axis):
# GH21224
with pytest.raises(expected):
df.agg(func, axis=axis)
+
+ @composite
+ def indices(draw, max_length=5):
+ date = draw(
+ dates(
+ min_value=Timestamp.min.ceil("D").to_pydatetime().date(),
+ max_value=Timestamp.max.floor("D").to_pydatetime().date(),
+ ).map(Timestamp)
+ )
+ periods = draw(integers(0, max_length))
+ freq = draw(sampled_from(list("BDHTS")))
+ dr = date_range(date, periods=periods, freq=freq)
+ return pd.DatetimeIndex(list(dr))
+
+ @given(index=indices(5), num_columns=integers(0, 5))
+ def test_frequency_is_original(self, index, num_columns):
+ # GH22150
+ original = index.copy()
+ df = DataFrame(True, index=index, columns=range(num_columns))
+ df.apply(lambda x: x)
+ assert index.freq == original.freq
| - [x] closes #22150
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
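For reference, a minimal sketch of the regression being fixed (per the issue report; before this change, `apply` inferred and set a frequency on the original index as a side effect):

```python
import pandas as pd

# building from a plain list of stamps leaves freq=None
idx = pd.DatetimeIndex(list(pd.date_range('2018-01-01', periods=3, freq='D')))
assert idx.freq is None

df = pd.DataFrame({'a': [1, 2, 3]}, index=idx)
df.apply(lambda x: x)

# previously this assert failed: a 'D' freq had been set on idx
assert idx.freq is None
```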
| https://api.github.com/repos/pandas-dev/pandas/pulls/22561 | 2018-09-01T08:55:02Z | 2018-09-18T12:29:00Z | 2018-09-18T12:28:59Z | 2018-09-18T12:29:02Z |
ENH/BUG: Add is_dst method to DatetimeIndex and Timestamp to solve AmbiguousTimeError | diff --git a/doc/source/api.rst b/doc/source/api.rst
index 77d37ec2a7b2e..ded4e1e3c55e9 100644
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -582,6 +582,7 @@ These can be accessed like ``Series.dt.<property>``.
Series.dt.to_pydatetime
Series.dt.tz_localize
Series.dt.tz_convert
+ Series.dt.is_dst
Series.dt.normalize
Series.dt.strftime
Series.dt.round
@@ -1778,6 +1779,7 @@ Time-specific operations
DatetimeIndex.snap
DatetimeIndex.tz_convert
DatetimeIndex.tz_localize
+ DatetimeIndex.is_dst
DatetimeIndex.round
DatetimeIndex.floor
DatetimeIndex.ceil
@@ -1985,6 +1987,7 @@ Methods
Timestamp.isocalendar
Timestamp.isoformat
Timestamp.isoweekday
+ Timestamp.is_dst
Timestamp.month_name
Timestamp.normalize
Timestamp.now
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 3a360b09ae789..ff0bbeeb34870 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -184,6 +184,7 @@ Other Enhancements
- :class:`DatetimeIndex` gained :attr:`DatetimeIndex.timetz` attribute. Returns local time with timezone information. (:issue:`21358`)
- :class:`Resampler` now is iterable like :class:`GroupBy` (:issue:`15314`).
- :ref:`Series.resample` and :ref:`DataFrame.resample` have gained the :meth:`Resampler.quantile` (:issue:`15023`).
+- :class:`DatetimeIndex` and :class:`Timestamp` have gained an ``is_dst`` method (:issue:`18885`, :issue:`18946`)
.. _whatsnew_0240.api_breaking:
@@ -619,6 +620,8 @@ Timezones
- Bug when setting a new value with :meth:`DataFrame.loc` with a :class:`DatetimeIndex` with a DST transition (:issue:`18308`, :issue:`20724`)
- Bug in :meth:`DatetimeIndex.unique` that did not re-localize tz-aware dates correctly (:issue:`21737`)
- Bug when indexing a :class:`Series` with a DST transition (:issue:`21846`)
+- Bug in :meth:`DatetimeIndex.floor` that raised an ``AmbiguousTimeError`` during a DST transition (:issue:`18946`)
+- Bug in :func:`merge` when merging ``datetime64[ns, tz]`` data that contained a DST transition (:issue:`18885`)
Offsets
^^^^^^^
diff --git a/pandas/_libs/tslib.pyx b/pandas/_libs/tslib.pyx
index bdd279b19208b..fc53b88fd4cfe 100644
--- a/pandas/_libs/tslib.pyx
+++ b/pandas/_libs/tslib.pyx
@@ -146,7 +146,7 @@ def ints_to_pydatetime(int64_t[:] arr, tz=None, freq=None, box="datetime"):
dt64_to_dtstruct(local_value, &dts)
result[i] = func_create(value, dts, tz, freq)
else:
- trans, deltas, typ = get_dst_info(tz)
+ trans, deltas, typ = get_dst_info(tz, False)
if typ not in ['pytz', 'dateutil']:
# static/fixed; in this case we know that len(delta) == 1
diff --git a/pandas/_libs/tslibs/conversion.pyx b/pandas/_libs/tslibs/conversion.pyx
index fe664cf03b0b9..14939427c2da3 100644
--- a/pandas/_libs/tslibs/conversion.pyx
+++ b/pandas/_libs/tslibs/conversion.pyx
@@ -28,11 +28,10 @@ from np_datetime import OutOfBoundsDatetime
from util cimport (is_string_object,
is_datetime64_object,
- is_integer_object, is_float_object, is_array)
+ is_integer_object, is_float_object)
from timedeltas cimport cast_from_unit
from timezones cimport (is_utc, is_tzlocal, is_fixed_offset,
- treat_tz_as_dateutil, treat_tz_as_pytz,
get_utcoffset, get_dst_info,
get_timezone, maybe_get_tz, tz_compare)
from parsing import parse_datetime_string
@@ -540,7 +539,7 @@ cdef inline void localize_tso(_TSObject obj, tzinfo tz):
dt64_to_dtstruct(local_val, &obj.dts)
else:
# Adjust datetime64 timestamp, recompute datetimestruct
- trans, deltas, typ = get_dst_info(tz)
+ trans, deltas, typ = get_dst_info(tz, False)
if is_fixed_offset(tz):
# static/fixed tzinfo; in this case we know len(deltas) == 1
@@ -636,7 +635,7 @@ cdef inline int64_t[:] _tz_convert_dst(int64_t[:] values, tzinfo tz,
int64_t[:] deltas
int64_t v
- trans, deltas, typ = get_dst_info(tz)
+ trans, deltas, typ = get_dst_info(tz, False)
if not to_utc:
# We add `offset` below instead of subtracting it
deltas = -1 * np.array(deltas, dtype='i8')
@@ -888,7 +887,7 @@ def tz_localize_to_utc(ndarray[int64_t] vals, object tz, object ambiguous=None,
"the same size as vals")
ambiguous_array = np.asarray(ambiguous)
- trans, deltas, typ = get_dst_info(tz)
+ trans, deltas, typ = get_dst_info(tz, False)
tdata = <int64_t*> cnp.PyArray_DATA(trans)
ntrans = len(trans)
@@ -1150,7 +1149,7 @@ cdef int64_t[:] _normalize_local(int64_t[:] stamps, object tz):
result[i] = _normalized_stamp(&dts)
else:
# Adjust datetime64 timestamp, recompute datetimestruct
- trans, deltas, typ = get_dst_info(tz)
+ trans, deltas, typ = get_dst_info(tz, False)
if typ not in ['pytz', 'dateutil']:
# static/fixed; in this case we know that len(delta) == 1
@@ -1227,7 +1226,7 @@ def is_date_array_normalized(int64_t[:] stamps, tz=None):
if (dts.hour + dts.min + dts.sec + dts.us) > 0:
return False
else:
- trans, deltas, typ = get_dst_info(tz)
+ trans, deltas, typ = get_dst_info(tz, False)
if typ not in ['pytz', 'dateutil']:
# static/fixed; in this case we know that len(delta) == 1
diff --git a/pandas/_libs/tslibs/nattype.pyx b/pandas/_libs/tslibs/nattype.pyx
index 08d9128ff660c..e05572f66525d 100644
--- a/pandas/_libs/tslibs/nattype.pyx
+++ b/pandas/_libs/tslibs/nattype.pyx
@@ -260,6 +260,20 @@ class NaTType(_NaT):
def is_year_end(self):
return False
+ def is_dst(self):
+ """
+ Returns a boolean indicating if the Timestamp is in daylight savings
+ time. Naive timestamps are considered not to be in daylight savings
+ time.
+
+ Returns
+ -------
+ Boolean
+ True if the Timestamp is in daylight savings time
+ False if the Timestamp is naive or not in daylight savings time
+ """
+ return False
+
def __rdiv__(self, other):
return _nat_rdivide_op(self, other)
diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx
index f68b6d8fdef57..6c29fd0280b02 100644
--- a/pandas/_libs/tslibs/period.pyx
+++ b/pandas/_libs/tslibs/period.pyx
@@ -1516,7 +1516,7 @@ cdef int64_t[:] localize_dt64arr_to_period(int64_t[:] stamps,
result[i] = get_period_ordinal(&dts, freq)
else:
# Adjust datetime64 timestamp, recompute datetimestruct
- trans, deltas, typ = get_dst_info(tz)
+ trans, deltas, typ = get_dst_info(tz, False)
if typ not in ['pytz', 'dateutil']:
# static/fixed; in this case we know that len(delta) == 1
diff --git a/pandas/_libs/tslibs/resolution.pyx b/pandas/_libs/tslibs/resolution.pyx
index 4e3350395400c..7ec3c0062c9e3 100644
--- a/pandas/_libs/tslibs/resolution.pyx
+++ b/pandas/_libs/tslibs/resolution.pyx
@@ -68,7 +68,7 @@ cdef _reso_local(int64_t[:] stamps, object tz):
reso = curr_reso
else:
# Adjust datetime64 timestamp, recompute datetimestruct
- trans, deltas, typ = get_dst_info(tz)
+ trans, deltas, typ = get_dst_info(tz, False)
if typ not in ['pytz', 'dateutil']:
# static/fixed; in this case we know that len(delta) == 1
diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
index 3ab1396c0fe38..c06288f6cf23c 100644
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -722,6 +722,20 @@ class Timestamp(_Timestamp):
raise AttributeError("Cannot directly set timezone. Use tz_localize() "
"or tz_convert() as appropriate")
+ def is_dst(self):
+ """
+ Returns a boolean indicating if the Timestamp is in daylight savings
+ time. Naive timestamps are considered not to be in daylight savings
+ time.
+
+ Returns
+ -------
+ Boolean
+ True if the Timestamp is in daylight savings time
+ False if the Timestamp is naive or not in daylight savings time
+ """
+ return bool(self.dst())
+
def __setstate__(self, state):
self.value = state[0]
self.freq = state[1]
diff --git a/pandas/_libs/tslibs/timezones.pxd b/pandas/_libs/tslibs/timezones.pxd
index 8965b46f747c4..638bd0e79c806 100644
--- a/pandas/_libs/tslibs/timezones.pxd
+++ b/pandas/_libs/tslibs/timezones.pxd
@@ -13,4 +13,4 @@ cpdef object maybe_get_tz(object tz)
cdef get_utcoffset(tzinfo, obj)
cdef bint is_fixed_offset(object tz)
-cdef object get_dst_info(object tz)
+cdef object get_dst_info(object tz, bint dst)
diff --git a/pandas/_libs/tslibs/timezones.pyx b/pandas/_libs/tslibs/timezones.pyx
index 36ec499c7335c..e32cab36a4e82 100644
--- a/pandas/_libs/tslibs/timezones.pyx
+++ b/pandas/_libs/tslibs/timezones.pyx
@@ -108,7 +108,8 @@ def _p_tz_cache_key(tz):
return tz_cache_key(tz)
-# Timezone data caches, key is the pytz string or dateutil file name.
+# Timezone data (UTC offset) caches
+# key is the pytz string or dateutil file name.
dst_cache = {}
@@ -186,16 +187,30 @@ cdef object get_utc_trans_times_from_dateutil_tz(object tz):
return new_trans
-cdef int64_t[:] unbox_utcoffsets(object transinfo):
+cdef int64_t[:] unbox_utcoffsets(object transinfo, bint dst):
+ """
+ Unpack the offset information from the _transition_info attribute of pytz
+ timezones
+
+ Parameters
+ ----------
+ transinfo : list of tuples
+ Each tuple contains (UTC offset, DST offset, tz abbreviation)
+ dst : boolean
+ True returns an array of the DST offsets
+ False returns an array of UTC offsets
+ """
cdef:
Py_ssize_t i, sz
int64_t[:] arr
+ int key
sz = len(transinfo)
arr = np.empty(sz, dtype='i8')
-
for i in range(sz):
- arr[i] = int(transinfo[i][0].total_seconds()) * 1000000000
+ # If dst == True, extract the DST shift in nanoseconds
+ # If dst == False, extract the UTC offset in nanoseconds
+ arr[i] = int(transinfo[i][dst].total_seconds()) * 1000000000
return arr
@@ -204,9 +219,23 @@ cdef int64_t[:] unbox_utcoffsets(object transinfo):
# Daylight Savings
-cdef object get_dst_info(object tz):
+cdef object get_dst_info(object tz, bint dst):
"""
- return a tuple of :
+ Return DST info from a timezone
+
+ Parameters
+ ----------
+ tz : object
+ timezone object
+ dst : bint
+ True returns the DST specific offset and will NOT store the results in
+ dst_cache. dst_cache is reserved for caching UTC offsets.
+ False returns the UTC offset
+ Specific for pytz timezones only
+
+ Returns
+ -------
+ tuple
(UTC times of DST transitions,
UTC offsets in microseconds corresponding to DST transitions,
string of type of transitions)
@@ -221,7 +250,7 @@ cdef object get_dst_info(object tz):
np.array([num], dtype=np.int64),
None)
- if cache_key not in dst_cache:
+ if cache_key not in dst_cache or dst:
if treat_tz_as_pytz(tz):
trans = np.array(tz._utc_transition_times, dtype='M8[ns]')
trans = trans.view('i8')
@@ -230,7 +259,7 @@ cdef object get_dst_info(object tz):
trans[0] = NPY_NAT + 1
except Exception:
pass
- deltas = unbox_utcoffsets(tz._transition_info)
+ deltas = unbox_utcoffsets(tz._transition_info, dst)
typ = 'pytz'
elif treat_tz_as_dateutil(tz):
@@ -273,11 +302,50 @@ cdef object get_dst_info(object tz):
deltas = np.array([num], dtype=np.int64)
typ = 'static'
+ if dst:
+ return trans, deltas, typ
dst_cache[cache_key] = (trans, deltas, typ)
return dst_cache[cache_key]
+def is_dst(int64_t[:] values, object tz):
+ """
+ Return a boolean array indicating whether each epoch timestamp is in
+    daylight savings time with respect to the passed timezone.
+
+ Parameters
+ ----------
+ values : ndarray
+ i8 representation of the datetimes
+ tz : object
+ timezone
+
+ Returns
+ -------
+ ndarray of booleans
+ True indicates daylight savings time
+ """
+ cdef:
+ Py_ssize_t n = len(values)
+ object typ
+
+ result = np.zeros(n, dtype=bool)
+ if tz is None:
+ return result
+ transitions, offsets, typ = get_dst_info(tz, True)
+ offsets = np.array(offsets)
+
+ # Fixed timezone offsets do not have DST transitions
+ if typ not in {'pytz', 'dateutil'}:
+ return result
+ positions = transitions.searchsorted(values, side='right') - 1
+
+ # DST has nonzero offset
+ result = offsets[positions] != 0
+ return result
+
+
def infer_tzinfo(start, end):
if start is not None and end is not None:
tz = start.tzinfo
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 3f8c07fe7cd21..c6ef015e1f54a 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -284,7 +284,7 @@ def _ensure_localized(self, result):
if getattr(self, 'tz', None) is not None:
if not isinstance(result, ABCIndexClass):
result = self._simple_new(result)
- result = result.tz_localize(self.tz)
+ result = result.tz_localize(self.tz, ambiguous=self.is_dst())
return result
def _box_values_as_index(self):
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index 019aad4941d26..50dee697b39fb 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -266,7 +266,7 @@ def _add_comparison_methods(cls):
_datetimelike_methods = ['to_period', 'tz_localize',
'tz_convert',
'normalize', 'strftime', 'round', 'floor',
- 'ceil', 'month_name', 'day_name']
+ 'ceil', 'month_name', 'day_name', 'is_dst']
_is_numeric_dtype = False
_infer_as_myclass = True
@@ -443,6 +443,36 @@ def tz(self, value):
raise AttributeError("Cannot directly set timezone. Use tz_localize() "
"or tz_convert() as appropriate")
+ def is_dst(self):
+ """
+ Returns an Index of booleans indicating if each corresponding timestamp
+ is in daylight savings time.
+
+ If the DatetimeIndex does not have a timezone, returns an Index
+        whose values are all False.
+
+ Returns
+ -------
+ Index
+ True if the timestamp is in daylight savings time else False
+
+        Examples
+        --------
+        >>> dti = pd.date_range('2018-11-04', periods=4, freq='H',
+        ...                     tz='US/Pacific')
+
+ >>> dti
+ DatetimeIndex(['2018-11-04 00:00:00-07:00',
+ '2018-11-04 01:00:00-07:00',
+ '2018-11-04 01:00:00-08:00',
+ '2018-11-04 02:00:00-08:00'],
+ dtype='datetime64[ns, US/Pacific]', freq='H')
+
+ >>> dti.is_dst()
+ Index([True, True, False, False], dtype='object')
+ """
+ return Index(timezones.is_dst(self.asi8, self.tz))
+
@property
def size(self):
# TODO: Remove this when we have a DatetimeTZArray
diff --git a/pandas/tests/indexes/datetimes/test_timezones.py b/pandas/tests/indexes/datetimes/test_timezones.py
index 95531b2d7a7ae..2f5f4650ba948 100644
--- a/pandas/tests/indexes/datetimes/test_timezones.py
+++ b/pandas/tests/indexes/datetimes/test_timezones.py
@@ -1012,6 +1012,22 @@ def test_iteration_preserves_nanoseconds(self, tz):
for i, ts in enumerate(index):
assert ts == index[i]
+ @pytest.mark.parametrize('arg, expected_arg', [
+ [[], []],
+ [date_range('2018-11-04', periods=4, freq='H', tz='US/Pacific'),
+ [True, True, False, False]],
+ [date_range('2018-11-04', periods=4, freq='H'),
+ [False] * 4],
+ [date_range('2018-11-04', periods=4, freq='H', tz=pytz.FixedOffset(3)),
+ [False] * 4],
+ [[pd.NaT], [False]]
+ ])
+ def test_is_dst(self, arg, expected_arg):
+ dti = DatetimeIndex(arg)
+ result = dti.is_dst()
+ expected = Index(expected_arg)
+ tm.assert_index_equal(result, expected)
+
class TestDateRange(object):
"""Tests for date_range with timezones"""
diff --git a/pandas/tests/reshape/merge/test_merge.py b/pandas/tests/reshape/merge/test_merge.py
index 42df4511578f1..2ec5c2b78a04f 100644
--- a/pandas/tests/reshape/merge/test_merge.py
+++ b/pandas/tests/reshape/merge/test_merge.py
@@ -601,6 +601,30 @@ def test_merge_on_datetime64tz(self):
assert result['value_x'].dtype == 'datetime64[ns, US/Eastern]'
assert result['value_y'].dtype == 'datetime64[ns, US/Eastern]'
+ def test_merge_datetime64tz_with_dst_transition(self):
+ # GH 18885
+ df1 = pd.DataFrame(pd.date_range(
+ '2017-10-29 01:00', periods=4, freq='H', tz='Europe/Madrid'),
+ columns=['date'])
+ df1['value'] = 1
+ df2 = pd.DataFrame([
+ pd.to_datetime('2017-10-29 03:00:00'),
+ pd.to_datetime('2017-10-29 04:00:00'),
+ pd.to_datetime('2017-10-29 05:00:00')
+ ],
+ columns=['date'])
+ df2['date'] = df2['date'].dt.tz_localize('UTC').dt.tz_convert(
+ 'Europe/Madrid')
+ df2['value'] = 2
+ result = pd.merge(df1, df2, how='outer', on='date')
+ expected = pd.DataFrame({
+ 'date': pd.date_range(
+ '2017-10-29 01:00', periods=7, freq='H', tz='Europe/Madrid'),
+ 'value_x': [1] * 4 + [np.nan] * 3,
+ 'value_y': [np.nan] * 4 + [2] * 3
+ })
+ assert_frame_equal(result, expected)
+
def test_merge_non_unique_period_index(self):
# GH #16871
index = pd.period_range('2016-01-01', periods=16, freq='M')
diff --git a/pandas/tests/scalar/test_nat.py b/pandas/tests/scalar/test_nat.py
index a6b217a37bd0c..495532e778131 100644
--- a/pandas/tests/scalar/test_nat.py
+++ b/pandas/tests/scalar/test_nat.py
@@ -330,3 +330,7 @@ def test_nat_arithmetic_td64_vector(box, assert_func):
def test_nat_pinned_docstrings():
# GH17327
assert NaT.ctime.__doc__ == datetime.ctime.__doc__
+
+
+def test_is_dst():
+ assert NaT.is_dst() is False
diff --git a/pandas/tests/scalar/timestamp/test_timezones.py b/pandas/tests/scalar/timestamp/test_timezones.py
index 8cebfafeae82a..54977cc7bfbf8 100644
--- a/pandas/tests/scalar/timestamp/test_timezones.py
+++ b/pandas/tests/scalar/timestamp/test_timezones.py
@@ -307,3 +307,15 @@ def test_timestamp_timetz_equivalent_with_datetime_tz(self,
expected = _datetime.timetz()
assert result == expected
+
+ @pytest.mark.parametrize('tz', ['US/Pacific', 'dateutil/US/Pacific'])
+ def test_timestamp_is_dst(self, tz):
+ ts_naive = Timestamp('2018-11-04')
+ assert ts_naive.is_dst() is False
+
+ ts_aware = ts_naive.tz_localize(tz)
+ assert ts_aware.is_dst() is True
+
+ # DST transition at 2am
+ ts_aware = Timestamp('2018-11-04 04:00').tz_localize(tz)
+ assert ts_aware.is_dst() is False
diff --git a/pandas/tests/series/test_datetime_values.py b/pandas/tests/series/test_datetime_values.py
index 5b45c6003a005..9921d880ff78a 100644
--- a/pandas/tests/series/test_datetime_values.py
+++ b/pandas/tests/series/test_datetime_values.py
@@ -37,7 +37,8 @@ def test_dt_namespace_accessor(self):
ok_for_dt = DatetimeIndex._datetimelike_ops
ok_for_dt_methods = ['to_period', 'to_pydatetime', 'tz_localize',
'tz_convert', 'normalize', 'strftime', 'round',
- 'floor', 'ceil', 'day_name', 'month_name']
+ 'floor', 'ceil', 'day_name', 'month_name',
+ 'is_dst']
ok_for_td = TimedeltaIndex._datetimelike_ops
ok_for_td_methods = ['components', 'to_pytimedelta', 'total_seconds',
'round', 'floor', 'ceil']
@@ -95,42 +96,6 @@ def compare(s, name):
expected = Series(exp_values, index=s.index, name='xxx')
tm.assert_series_equal(result, expected)
- # round
- s = Series(pd.to_datetime(['2012-01-01 13:00:00',
- '2012-01-01 12:01:00',
- '2012-01-01 08:00:00']), name='xxx')
- result = s.dt.round('D')
- expected = Series(pd.to_datetime(['2012-01-02', '2012-01-02',
- '2012-01-01']), name='xxx')
- tm.assert_series_equal(result, expected)
-
- # round with tz
- result = (s.dt.tz_localize('UTC')
- .dt.tz_convert('US/Eastern')
- .dt.round('D'))
- exp_values = pd.to_datetime(['2012-01-01', '2012-01-01',
- '2012-01-01']).tz_localize('US/Eastern')
- expected = Series(exp_values, name='xxx')
- tm.assert_series_equal(result, expected)
-
- # floor
- s = Series(pd.to_datetime(['2012-01-01 13:00:00',
- '2012-01-01 12:01:00',
- '2012-01-01 08:00:00']), name='xxx')
- result = s.dt.floor('D')
- expected = Series(pd.to_datetime(['2012-01-01', '2012-01-01',
- '2012-01-01']), name='xxx')
- tm.assert_series_equal(result, expected)
-
- # ceil
- s = Series(pd.to_datetime(['2012-01-01 13:00:00',
- '2012-01-01 12:01:00',
- '2012-01-01 08:00:00']), name='xxx')
- result = s.dt.ceil('D')
- expected = Series(pd.to_datetime(['2012-01-02', '2012-01-02',
- '2012-01-02']), name='xxx')
- tm.assert_series_equal(result, expected)
-
# datetimeindex with tz
s = Series(date_range('20130101', periods=5, tz='US/Eastern'),
name='xxx')
@@ -261,6 +226,45 @@ def get_dir(s):
with pytest.raises(com.SettingWithCopyError):
s.dt.hour[0] = 5
+ @pytest.mark.parametrize('method, dates', [
+ ['round', ['2012-01-02', '2012-01-02', '2012-01-01']],
+ ['floor', ['2012-01-01', '2012-01-01', '2012-01-01']],
+ ['ceil', ['2012-01-02', '2012-01-02', '2012-01-02']]
+ ])
+ def test_dt_round(self, method, dates):
+ # round
+ s = Series(pd.to_datetime(['2012-01-01 13:00:00',
+ '2012-01-01 12:01:00',
+ '2012-01-01 08:00:00']), name='xxx')
+ result = getattr(s.dt, method)('D')
+ expected = Series(pd.to_datetime(dates), name='xxx')
+ tm.assert_series_equal(result, expected)
+
+ def test_dt_round_tz(self):
+ s = Series(pd.to_datetime(['2012-01-01 13:00:00',
+ '2012-01-01 12:01:00',
+ '2012-01-01 08:00:00']), name='xxx')
+
+ result = (s.dt.tz_localize('UTC')
+ .dt.tz_convert('US/Eastern')
+ .dt.round('D'))
+ exp_values = pd.to_datetime(['2012-01-01', '2012-01-01',
+ '2012-01-01']).tz_localize('US/Eastern')
+ expected = Series(exp_values, name='xxx')
+ tm.assert_series_equal(result, expected)
+
+ # GH 18946 round near DST
+ df1 = pd.DataFrame([
+ pd.to_datetime('2017-10-29 02:00:00+02:00', utc=True),
+ pd.to_datetime('2017-10-29 02:00:00+01:00', utc=True),
+ pd.to_datetime('2017-10-29 03:00:00+01:00', utc=True)
+ ],
+ columns=['date'])
+ df1['date'] = df1['date'].dt.tz_convert('Europe/Madrid')
+ result = df1.date.dt.floor('H')
+ expected = df1['date']
+ tm.assert_series_equal(result, expected)
+
def test_dt_namespace_accessor_categorical(self):
# GH 19468
dti = DatetimeIndex(['20171111', '20181212']).repeat(2)
| - [x] closes #18885
- [x] closes #18946
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
The issues above required a new method to keep track of whether each timestamp in the `DatetimeIndex` was previously in daylight savings time, so that the result can be passed into `tz_localize(..., ambiguous=...)`.
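A minimal sketch of that round trip (illustrative only, assuming the API proposed here; it mirrors how `_ensure_localized` uses `is_dst()` internally):

```python
import pandas as pd

# hourly stamps spanning the US/Pacific fall-back transition
dti = pd.date_range('2018-11-04', periods=4, freq='H', tz='US/Pacific')
mask = dti.is_dst()  # Index([True, True, False, False])

# the repeated 01:00 wall time is ambiguous; the mask disambiguates it
wall = dti.tz_localize(None)
restored = wall.tz_localize('US/Pacific', ambiguous=mask)
assert (restored == dti).all()
```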
Therefore, added a `is_dst` method to `DatetimeIndex` and `Timestamp`. | https://api.github.com/repos/pandas-dev/pandas/pulls/22560 | 2018-09-01T02:05:04Z | 2018-09-09T01:42:57Z | null | 2018-09-26T17:11:16Z |
TST: Continue collecting arithmetic tests | diff --git a/pandas/tests/arithmetic/test_numeric.py b/pandas/tests/arithmetic/test_numeric.py
index 9ede1a62aaf2e..d3957330f11e4 100644
--- a/pandas/tests/arithmetic/test_numeric.py
+++ b/pandas/tests/arithmetic/test_numeric.py
@@ -42,6 +42,30 @@ def test_operator_series_comparison_zerorank(self):
expected = 0.0 > pd.Series([1, 2, 3])
tm.assert_series_equal(result, expected)
+ def test_df_numeric_cmp_dt64_raises(self):
+ # GH#8932, GH#22163
+ ts = pd.Timestamp.now()
+ df = pd.DataFrame({'x': range(5)})
+ with pytest.raises(TypeError):
+ df > ts
+ with pytest.raises(TypeError):
+ df < ts
+ with pytest.raises(TypeError):
+ ts < df
+ with pytest.raises(TypeError):
+ ts > df
+
+ assert not (df == ts).any().any()
+ assert (df != ts).all().all()
+
+ def test_compare_invalid(self):
+ # GH#8058
+ # ops testing
+ a = pd.Series(np.random.randn(5), name=0)
+ b = pd.Series(np.random.randn(5))
+ b.name = pd.Timestamp('2000-01-01')
+ tm.assert_series_equal(a / b, 1 / (b / a))
+
# ------------------------------------------------------------------
# Numeric dtypes Arithmetic with Timedelta Scalar
@@ -754,6 +778,51 @@ def check(series, other):
check(tser, 5)
+class TestUFuncCompat(object):
+ @pytest.mark.parametrize('holder', [pd.Int64Index, pd.UInt64Index,
+ pd.Float64Index, pd.Series])
+ def test_ufunc_coercions(self, holder):
+ idx = holder([1, 2, 3, 4, 5], name='x')
+ box = pd.Series if holder is pd.Series else pd.Index
+
+ result = np.sqrt(idx)
+ assert result.dtype == 'f8' and isinstance(result, box)
+ exp = pd.Float64Index(np.sqrt(np.array([1, 2, 3, 4, 5])), name='x')
+ exp = tm.box_expected(exp, box)
+ tm.assert_equal(result, exp)
+
+ result = np.divide(idx, 2.)
+ assert result.dtype == 'f8' and isinstance(result, box)
+ exp = pd.Float64Index([0.5, 1., 1.5, 2., 2.5], name='x')
+ exp = tm.box_expected(exp, box)
+ tm.assert_equal(result, exp)
+
+ # _evaluate_numeric_binop
+ result = idx + 2.
+ assert result.dtype == 'f8' and isinstance(result, box)
+ exp = pd.Float64Index([3., 4., 5., 6., 7.], name='x')
+ exp = tm.box_expected(exp, box)
+ tm.assert_equal(result, exp)
+
+ result = idx - 2.
+ assert result.dtype == 'f8' and isinstance(result, box)
+ exp = pd.Float64Index([-1., 0., 1., 2., 3.], name='x')
+ exp = tm.box_expected(exp, box)
+ tm.assert_equal(result, exp)
+
+ result = idx * 1.
+ assert result.dtype == 'f8' and isinstance(result, box)
+ exp = pd.Float64Index([1., 2., 3., 4., 5.], name='x')
+ exp = tm.box_expected(exp, box)
+ tm.assert_equal(result, exp)
+
+ result = idx / 2.
+ assert result.dtype == 'f8' and isinstance(result, box)
+ exp = pd.Float64Index([0.5, 1., 1.5, 2., 2.5], name='x')
+ exp = tm.box_expected(exp, box)
+ tm.assert_equal(result, exp)
+
+
class TestObjectDtypeEquivalence(object):
# Tests that arithmetic operations match operations executed elementwise
diff --git a/pandas/tests/arithmetic/test_object.py b/pandas/tests/arithmetic/test_object.py
index 2c1cc83c09f88..64d7cbc47fddd 100644
--- a/pandas/tests/arithmetic/test_object.py
+++ b/pandas/tests/arithmetic/test_object.py
@@ -180,3 +180,36 @@ def test_series_with_dtype_radd_timedelta(self, dtype):
result = ser + pd.Timedelta('3 days')
tm.assert_series_equal(result, expected)
+
+ # TODO: cleanup & parametrize over box
+ def test_mixed_timezone_series_ops_object(self):
+ # GH#13043
+ ser = pd.Series([pd.Timestamp('2015-01-01', tz='US/Eastern'),
+ pd.Timestamp('2015-01-01', tz='Asia/Tokyo')],
+ name='xxx')
+ assert ser.dtype == object
+
+ exp = pd.Series([pd.Timestamp('2015-01-02', tz='US/Eastern'),
+ pd.Timestamp('2015-01-02', tz='Asia/Tokyo')],
+ name='xxx')
+ tm.assert_series_equal(ser + pd.Timedelta('1 days'), exp)
+ tm.assert_series_equal(pd.Timedelta('1 days') + ser, exp)
+
+ # object series & object series
+ ser2 = pd.Series([pd.Timestamp('2015-01-03', tz='US/Eastern'),
+ pd.Timestamp('2015-01-05', tz='Asia/Tokyo')],
+ name='xxx')
+ assert ser2.dtype == object
+ exp = pd.Series([pd.Timedelta('2 days'), pd.Timedelta('4 days')],
+ name='xxx')
+ tm.assert_series_equal(ser2 - ser, exp)
+ tm.assert_series_equal(ser - ser2, -exp)
+
+ ser = pd.Series([pd.Timedelta('01:00:00'), pd.Timedelta('02:00:00')],
+ name='xxx', dtype=object)
+ assert ser.dtype == object
+
+ exp = pd.Series([pd.Timedelta('01:30:00'), pd.Timedelta('02:30:00')],
+ name='xxx')
+ tm.assert_series_equal(ser + pd.Timedelta('00:30:00'), exp)
+ tm.assert_series_equal(pd.Timedelta('00:30:00') + ser, exp)
diff --git a/pandas/tests/test_arithmetic.py b/pandas/tests/arithmetic/test_timedelta64.py
similarity index 100%
rename from pandas/tests/test_arithmetic.py
rename to pandas/tests/arithmetic/test_timedelta64.py
diff --git a/pandas/tests/frame/test_arithmetic.py b/pandas/tests/frame/test_arithmetic.py
index f142f770a0c54..a6f4e0e38ec5d 100644
--- a/pandas/tests/frame/test_arithmetic.py
+++ b/pandas/tests/frame/test_arithmetic.py
@@ -48,22 +48,6 @@ def test_mixed_comparison(self):
result = df != other
assert result.all().all()
- def test_df_numeric_cmp_dt64_raises(self):
- # GH#8932, GH#22163
- ts = pd.Timestamp.now()
- df = pd.DataFrame({'x': range(5)})
- with pytest.raises(TypeError):
- df > ts
- with pytest.raises(TypeError):
- df < ts
- with pytest.raises(TypeError):
- ts < df
- with pytest.raises(TypeError):
- ts > df
-
- assert not (df == ts).any().any()
- assert (df != ts).all().all()
-
def test_df_boolean_comparison_error(self):
# GH#4576
# boolean comparisons with a tuple/list give unexpected results
diff --git a/pandas/tests/indexes/test_numeric.py b/pandas/tests/indexes/test_numeric.py
index c8aa7f8fd50fd..1cb2cd46a65db 100644
--- a/pandas/tests/indexes/test_numeric.py
+++ b/pandas/tests/indexes/test_numeric.py
@@ -565,40 +565,6 @@ def test_slice_keep_name(self):
idx = self._holder([1, 2], name='asdf')
assert idx.name == idx[1:].name
- def test_ufunc_coercions(self):
- idx = self._holder([1, 2, 3, 4, 5], name='x')
-
- result = np.sqrt(idx)
- assert isinstance(result, Float64Index)
- exp = Float64Index(np.sqrt(np.array([1, 2, 3, 4, 5])), name='x')
- tm.assert_index_equal(result, exp)
-
- result = np.divide(idx, 2.)
- assert isinstance(result, Float64Index)
- exp = Float64Index([0.5, 1., 1.5, 2., 2.5], name='x')
- tm.assert_index_equal(result, exp)
-
- # _evaluate_numeric_binop
- result = idx + 2.
- assert isinstance(result, Float64Index)
- exp = Float64Index([3., 4., 5., 6., 7.], name='x')
- tm.assert_index_equal(result, exp)
-
- result = idx - 2.
- assert isinstance(result, Float64Index)
- exp = Float64Index([-1., 0., 1., 2., 3.], name='x')
- tm.assert_index_equal(result, exp)
-
- result = idx * 1.
- assert isinstance(result, Float64Index)
- exp = Float64Index([1., 2., 3., 4., 5.], name='x')
- tm.assert_index_equal(result, exp)
-
- result = idx / 2.
- assert isinstance(result, Float64Index)
- exp = Float64Index([0.5, 1., 1.5, 2., 2.5], name='x')
- tm.assert_index_equal(result, exp)
-
class TestInt64Index(NumericInt):
_dtype = 'int64'
diff --git a/pandas/tests/indexes/timedeltas/test_arithmetic.py b/pandas/tests/indexes/timedeltas/test_arithmetic.py
index f3bc523ca525e..e425937fedf4b 100644
--- a/pandas/tests/indexes/timedeltas/test_arithmetic.py
+++ b/pandas/tests/indexes/timedeltas/test_arithmetic.py
@@ -430,38 +430,6 @@ def test_ops_ndarray(self):
if LooseVersion(np.__version__) >= LooseVersion('1.8'):
tm.assert_numpy_array_equal(other - td, expected)
- def test_ops_series_object(self):
- # GH 13043
- s = pd.Series([pd.Timestamp('2015-01-01', tz='US/Eastern'),
- pd.Timestamp('2015-01-01', tz='Asia/Tokyo')],
- name='xxx')
- assert s.dtype == object
-
- exp = pd.Series([pd.Timestamp('2015-01-02', tz='US/Eastern'),
- pd.Timestamp('2015-01-02', tz='Asia/Tokyo')],
- name='xxx')
- tm.assert_series_equal(s + pd.Timedelta('1 days'), exp)
- tm.assert_series_equal(pd.Timedelta('1 days') + s, exp)
-
- # object series & object series
- s2 = pd.Series([pd.Timestamp('2015-01-03', tz='US/Eastern'),
- pd.Timestamp('2015-01-05', tz='Asia/Tokyo')],
- name='xxx')
- assert s2.dtype == object
- exp = pd.Series([pd.Timedelta('2 days'), pd.Timedelta('4 days')],
- name='xxx')
- tm.assert_series_equal(s2 - s, exp)
- tm.assert_series_equal(s - s2, -exp)
-
- s = pd.Series([pd.Timedelta('01:00:00'), pd.Timedelta('02:00:00')],
- name='xxx', dtype=object)
- assert s.dtype == object
-
- exp = pd.Series([pd.Timedelta('01:30:00'), pd.Timedelta('02:30:00')],
- name='xxx')
- tm.assert_series_equal(s + pd.Timedelta('00:30:00'), exp)
- tm.assert_series_equal(pd.Timedelta('00:30:00') + s, exp)
-
def test_timedelta_ops_with_missing_values(self):
# setup
s1 = pd.to_timedelta(Series(['00:00:01']))
diff --git a/pandas/tests/series/test_arithmetic.py b/pandas/tests/series/test_arithmetic.py
index 41064b84abc36..37ba1c91368b3 100644
--- a/pandas/tests/series/test_arithmetic.py
+++ b/pandas/tests/series/test_arithmetic.py
@@ -1,7 +1,6 @@
# -*- coding: utf-8 -*-
import operator
-import numpy as np
import pytest
from pandas import Series
@@ -14,13 +13,6 @@
# Comparisons
class TestSeriesComparison(object):
- def test_compare_invalid(self):
- # GH#8058
- # ops testing
- a = pd.Series(np.random.randn(5), name=0)
- b = pd.Series(np.random.randn(5))
- b.name = pd.Timestamp('2000-01-01')
- tm.assert_series_equal(a / b, 1 / (b / a))
@pytest.mark.parametrize('opname', ['eq', 'ne', 'gt', 'lt', 'ge', 'le'])
def test_ser_flex_cmp_return_dtypes(self, opname):
| This is split off of #22350, which may have been too ambitious.
tests/test_arithmetic.py is moved to tests/arithmetic/test_timedelta64.py. Aside from being moved it is untouched.
| https://api.github.com/repos/pandas-dev/pandas/pulls/22559 | 2018-09-01T01:07:59Z | 2018-09-08T02:48:04Z | 2018-09-08T02:48:04Z | 2018-09-08T03:24:32Z |
[ENH] pull in warning for dialect change from pandas-gbq | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 649629714c3b1..2ba4a6d58d20f 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -170,9 +170,9 @@ Other Enhancements
- :meth:`Series.droplevel` and :meth:`DataFrame.droplevel` are now implemented (:issue:`20342`)
- Added support for reading from Google Cloud Storage via the ``gcsfs`` library (:issue:`19454`)
- :func:`to_gbq` and :func:`read_gbq` signature and documentation updated to
- reflect changes from the `Pandas-GBQ library version 0.5.0
- <https://pandas-gbq.readthedocs.io/en/latest/changelog.html#changelog-0-5-0>`__.
- (:issue:`21627`)
+ reflect changes from the `Pandas-GBQ library version 0.6.0
+ <https://pandas-gbq.readthedocs.io/en/latest/changelog.html#changelog-0-6-0>`__.
+ (:issue:`21627`, :issue:`22557`)
- New method :meth:`HDFStore.walk` will recursively walk the group hierarchy of an HDF5 file (:issue:`10932`)
- :func:`read_html` copies cell data across ``colspan`` and ``rowspan``, and it treats all-``th`` table rows as headers if ``header`` kwarg is not given and there is no ``thead`` (:issue:`17054`)
- :meth:`Series.nlargest`, :meth:`Series.nsmallest`, :meth:`DataFrame.nlargest`, and :meth:`DataFrame.nsmallest` now accept the value ``"all"`` for the ``keep`` argument. This keeps all ties for the nth largest/smallest value (:issue:`16818`)
diff --git a/pandas/io/gbq.py b/pandas/io/gbq.py
index 87a0e4d5d1747..46e1b13631f07 100644
--- a/pandas/io/gbq.py
+++ b/pandas/io/gbq.py
@@ -1,5 +1,7 @@
""" Google BigQuery support """
+import warnings
+
def _try_import():
# since pandas is a dependency of pandas-gbq
@@ -23,7 +25,7 @@ def _try_import():
def read_gbq(query, project_id=None, index_col=None, col_order=None,
reauth=False, private_key=None, auth_local_webserver=False,
- dialect='legacy', location=None, configuration=None,
+ dialect=None, location=None, configuration=None,
verbose=None):
"""
Load data from Google BigQuery.
@@ -65,6 +67,8 @@ def read_gbq(query, project_id=None, index_col=None, col_order=None,
*New in version 0.2.0 of pandas-gbq*.
dialect : str, default 'legacy'
+        Note: The default value is changing to 'standard' in a future version.
+
SQL syntax dialect to use. Value can be one of:
``'legacy'``
@@ -76,6 +80,8 @@ def read_gbq(query, project_id=None, index_col=None, col_order=None,
compliant with the SQL 2011 standard. For more information
see `BigQuery Standard SQL Reference
<https://cloud.google.com/bigquery/docs/reference/standard-sql/>`__.
+
+ .. versionchanged:: 0.24.0
location : str, optional
Location where the query job should run. See the `BigQuery locations
documentation
@@ -108,6 +114,17 @@ def read_gbq(query, project_id=None, index_col=None, col_order=None,
pandas.DataFrame.to_gbq : Write a DataFrame to Google BigQuery.
"""
pandas_gbq = _try_import()
+
+ if dialect is None:
+ dialect = "legacy"
+ warnings.warn(
+ 'The default value for dialect is changing to "standard" in a '
+ 'future version of pandas-gbq. Pass in dialect="legacy" to '
+ "disable this warning.",
+ FutureWarning,
+ stacklevel=2,
+ )
+
return pandas_gbq.read_gbq(
query, project_id=project_id, index_col=index_col,
col_order=col_order, reauth=reauth, verbose=verbose,
diff --git a/pandas/tests/io/test_gbq.py b/pandas/tests/io/test_gbq.py
index dc6c319bb3366..68413d610e615 100644
--- a/pandas/tests/io/test_gbq.py
+++ b/pandas/tests/io/test_gbq.py
@@ -4,11 +4,17 @@
import platform
import os
+try:
+ from unittest import mock
+except ImportError:
+ mock = pytest.importorskip("mock")
+
import numpy as np
import pandas as pd
from pandas import compat, DataFrame
-
from pandas.compat import range
+import pandas.util.testing as tm
+
pandas_gbq = pytest.importorskip('pandas_gbq')
@@ -93,6 +99,16 @@ def make_mixed_dataframe_v2(test_size):
index=range(test_size))
+def test_read_gbq_without_dialect_warns_future_change(monkeypatch):
+ # Default dialect is changing to standard SQL. See:
+ # https://github.com/pydata/pandas-gbq/issues/195
+ mock_read_gbq = mock.Mock()
+ mock_read_gbq.return_value = DataFrame([[1.0]])
+ monkeypatch.setattr(pandas_gbq, 'read_gbq', mock_read_gbq)
+ with tm.assert_produces_warning(FutureWarning):
+ pd.read_gbq("SELECT 1")
+
+
@pytest.mark.single
class TestToGBQIntegrationWithServiceAccountKeyPath(object):
| - [x] ~closes~ towards https://github.com/pydata/pandas-gbq/issues/195
- [x] tests added / passed (N/A)
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
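A runnable sketch of the new default-dialect warning (assumes pandas-gbq is installed and Python 3 for `unittest.mock`; the backend call is mocked so no credentials are needed):

```python
from unittest import mock

import pandas as pd
import pandas_gbq

with mock.patch.object(pandas_gbq, 'read_gbq',
                       return_value=pd.DataFrame([[1.0]])):
    pd.read_gbq("SELECT 1")                    # emits the FutureWarning
    pd.read_gbq("SELECT 1", dialect="legacy")  # explicit dialect: no warning
```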
| https://api.github.com/repos/pandas-dev/pandas/pulls/22557 | 2018-08-31T17:41:20Z | 2018-09-18T12:29:48Z | 2018-09-18T12:29:48Z | 2018-09-18T12:29:51Z |
DOC: Update Series min and max docstring. GH22459 | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 3a7016ce39676..9fa19a5e5329b 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -9589,7 +9589,7 @@ def _add_numeric_operations(cls):
desc="Return the mean absolute deviation of the values "
"for the requested axis",
name1=name, name2=name2, axis_descr=axis_descr,
- min_count='', examples='')
+ min_count='', see_also='', examples='')
@Appender(_num_doc)
def mad(self, axis=None, skipna=None, level=None):
if skipna is None:
@@ -9630,8 +9630,8 @@ def mad(self, axis=None, skipna=None, level=None):
@Substitution(outname='compounded',
desc="Return the compound percentage of the values for "
"the requested axis", name1=name, name2=name2,
- axis_descr=axis_descr,
- min_count='', examples='')
+ axis_descr=axis_descr, min_count='', see_also='',
+ examples='')
@Appender(_num_doc)
def compound(self, axis=None, skipna=None, level=None):
if skipna is None:
@@ -9687,16 +9687,16 @@ def compound(self, axis=None, skipna=None, level=None):
nanops.nanmedian)
cls.max = _make_stat_function(
cls, 'max', name, name2, axis_descr,
- """This method returns the maximum of the values in the object.
- If you want the *index* of the maximum, use ``idxmax``. This is
- the equivalent of the ``numpy.ndarray`` method ``argmax``.""",
- nanops.nanmax, _max_examples)
+ "Return the maximum of the values in the object."
+ "\n\nIf you want the *index* of the maximum, use ``idxmax``. This "
+ "is the equivalent of the ``numpy.ndarray`` method ``argmax``.",
+ nanops.nanmax, _min_max_see_also, _max_examples)
cls.min = _make_stat_function(
cls, 'min', name, name2, axis_descr,
- """This method returns the minimum of the values in the object.
- If you want the *index* of the minimum, use ``idxmin``. This is
- the equivalent of the ``numpy.ndarray`` method ``argmin``.""",
- nanops.nanmin)
+ "Return the minimum of the values in the object."
+ "\n\nIf you want the *index* of the minimum, use ``idxmin``. This "
+ "is the equivalent of the ``numpy.ndarray`` method ``argmin``.",
+ nanops.nanmin, _min_max_see_also, _max_examples)
@classmethod
def _add_series_only_operations(cls):
@@ -10008,21 +10008,32 @@ def _doc_parms(cls):
Parameters
----------
-axis : %(axis_descr)s
-skipna : boolean, default True
+axis : %(axis_descr)s, default 0
+ Indicate which axis should be reduced. Not implemented for Series.
+
+    * 0 / 'index' : reduce the index, return a Series whose index is the
+      original column labels.
+    * 1 / 'columns' : reduce the columns, return a Series whose index is the
+      original index.
+ For a DataFrame the value 0 applies %(outname)s on each column, and 1
+ applies it on each row.
+skipna : bool, default True
Exclude NA/null values when computing the result.
level : int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a
- particular level, collapsing into a %(name1)s
-numeric_only : boolean, default None
- Include only float, int, boolean columns. If None, will attempt to use
+ particular level, collapsing into a %(name1)s.
+numeric_only : bool, default None
+ Include only float, int, bool columns. If None, will attempt to use
everything, then use only numeric data. Not implemented for Series.
%(min_count)s\
+**kwargs : any, default None
+ Additional keyword arguments.
Returns
-------
-%(outname)s : %(name1)s or %(name2)s (if level specified)\
+%(outname)s : %(name1)s or %(name2)s (if level specified)
+%(see_also)s
%(examples)s"""
_num_ddof_doc = """
@@ -10506,6 +10517,19 @@ def _doc_parms(cls):
Series([], dtype: bool)
"""
+_min_max_see_also = """\
+See Also
+--------
+Series.min : Return the minimum.
+Series.max : Return the maximum.
+Series.idxmin : Return the index of the minimum.
+Series.idxmax : Return the index of the maximum.
+DataFrame.min : Return the minimum over the requested axis.
+DataFrame.max : Return the maximum over the requested axis.
+DataFrame.idxmin : Return the index of the minimum over the requested axis.
+DataFrame.idxmax : Return the index of the maximum over the requested axis.
+"""
+
_sum_examples = """\
Examples
--------
@@ -10625,7 +10649,6 @@ def _doc_parms(cls):
dtype: int64
"""
-
_min_count_stub = """\
min_count : int, default 0
The required number of valid values to perform the operation. If fewer than
@@ -10643,7 +10666,7 @@ def _make_min_count_stat_function(cls, name, name1, name2, axis_descr, desc,
f, examples):
@Substitution(outname=name, desc=desc, name1=name1, name2=name2,
axis_descr=axis_descr, min_count=_min_count_stub,
- examples=examples)
+ see_also='', examples=examples)
@Appender(_num_doc)
def stat_func(self, axis=None, skipna=None, level=None, numeric_only=None,
min_count=0,
@@ -10663,9 +10686,10 @@ def stat_func(self, axis=None, skipna=None, level=None, numeric_only=None,
def _make_stat_function(cls, name, name1, name2, axis_descr, desc, f,
- examples=''):
+ see_also='', examples=''):
@Substitution(outname=name, desc=desc, name1=name1, name2=name2,
- axis_descr=axis_descr, min_count='', examples=examples)
+ axis_descr=axis_descr, min_count='', see_also=see_also,
+ examples=examples)
@Appender(_num_doc)
def stat_func(self, axis=None, skipna=None, level=None, numeric_only=None,
**kwargs):
| - [ ] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
Doesn't pass all the tests, but I can't figure out where these errors come from.
> Errors found:
> Closing quotes should be placed in the line after the last text in the docstring (do not close the quotes in the same line as the text, or leave a blank line between the last text and the quotes)
> Use only one blank line to separate sections or paragraphs
Also, I find the way the axes are defined quite confusing. My initial thought was that if you say `axis='index'` it would apply your function on a row-wise basis; the toy example below shows the actual semantics.
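A toy illustration (data invented for the example):

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [30, 40]})

# axis=0 / 'index' reduces *along* the index: one value per column
df.min(axis='index')    # a -> 1, b -> 30

# axis=1 / 'columns' reduces along the columns: one value per row
df.min(axis='columns')  # 0 -> 1, 1 -> 2
```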
| https://api.github.com/repos/pandas-dev/pandas/pulls/22554 | 2018-08-31T16:36:30Z | 2018-12-09T20:46:37Z | null | 2018-12-09T20:46:37Z |
CLN: modernize string formatting | diff --git a/pandas/_libs/algos_common_helper.pxi.in b/pandas/_libs/algos_common_helper.pxi.in
index 42dda15ea2cbb..1efef480f3a29 100644
--- a/pandas/_libs/algos_common_helper.pxi.in
+++ b/pandas/_libs/algos_common_helper.pxi.in
@@ -55,8 +55,9 @@ cpdef map_indices_{{name}}(ndarray[{{c_type}}] index):
Better to do this with Cython because of the enormous speed boost.
"""
- cdef Py_ssize_t i, length
- cdef dict result = {}
+ cdef:
+ Py_ssize_t i, length
+ dict result = {}
length = len(index)
diff --git a/pandas/_libs/parsers.pyx b/pandas/_libs/parsers.pyx
index fba7f210b34a1..91faed678192f 100644
--- a/pandas/_libs/parsers.pyx
+++ b/pandas/_libs/parsers.pyx
@@ -29,7 +29,7 @@ cdef extern from "Python.h":
import numpy as np
cimport numpy as cnp
-from numpy cimport ndarray, uint8_t, uint64_t, int64_t
+from numpy cimport ndarray, uint8_t, uint64_t, int64_t, float64_t
cnp.import_array()
from util cimport UINT64_MAX, INT64_MAX, INT64_MIN
@@ -694,7 +694,7 @@ cdef class TextReader:
if ptr == NULL:
if not os.path.exists(source):
raise compat.FileNotFoundError(
- 'File %s does not exist' % source)
+ 'File {source} does not exist'.format(source=source))
raise IOError('Initializing from file failed')
self.parser.source = ptr
@@ -772,9 +772,10 @@ cdef class TextReader:
if name == '':
if self.has_mi_columns:
- name = 'Unnamed: %d_level_%d' % (i, level)
+ name = ('Unnamed: {i}_level_{lvl}'
+ .format(i=i, lvl=level))
else:
- name = 'Unnamed: %d' % i
+ name = 'Unnamed: {i}'.format(i=i)
unnamed_count += 1
count = counts.get(name, 0)
@@ -849,8 +850,8 @@ cdef class TextReader:
# 'data has %d fields'
# % (passed_count, field_count))
- if self.has_usecols and self.allow_leading_cols and \
- not callable(self.usecols):
+ if (self.has_usecols and self.allow_leading_cols and
+ not callable(self.usecols)):
nuse = len(self.usecols)
if nuse == passed_count:
self.leading_cols = 0
@@ -1027,8 +1028,10 @@ cdef class TextReader:
if self.table_width - self.leading_cols > num_cols:
raise ParserError(
- "Too many columns specified: expected %s and found %s" %
- (self.table_width - self.leading_cols, num_cols))
+ "Too many columns specified: expected {expected} and "
+ "found {found}"
+ .format(expected=self.table_width - self.leading_cols,
+ found=num_cols))
results = {}
nused = 0
@@ -1036,8 +1039,8 @@ cdef class TextReader:
if i < self.leading_cols:
# Pass through leading columns always
name = i
- elif self.usecols and not callable(self.usecols) and \
- nused == len(self.usecols):
+ elif (self.usecols and not callable(self.usecols) and
+ nused == len(self.usecols)):
# Once we've gathered all requested columns, stop. GH5766
break
else:
@@ -1103,7 +1106,7 @@ cdef class TextReader:
col_res = _maybe_upcast(col_res)
if col_res is None:
- raise ParserError('Unable to parse column %d' % i)
+ raise ParserError('Unable to parse column {i}'.format(i=i))
results[i] = col_res
@@ -1222,8 +1225,8 @@ cdef class TextReader:
elif dtype.kind == 'U':
width = dtype.itemsize
if width > 0:
- raise TypeError("the dtype %s is not "
- "supported for parsing" % dtype)
+ raise TypeError("the dtype {dtype} is not "
+ "supported for parsing".format(dtype=dtype))
# unicode variable width
return self._string_convert(i, start, end, na_filter,
@@ -1241,12 +1244,12 @@ cdef class TextReader:
return self._string_convert(i, start, end, na_filter,
na_hashset)
elif is_datetime64_dtype(dtype):
- raise TypeError("the dtype %s is not supported "
+ raise TypeError("the dtype {dtype} is not supported "
"for parsing, pass this column "
- "using parse_dates instead" % dtype)
+ "using parse_dates instead".format(dtype=dtype))
else:
- raise TypeError("the dtype %s is not "
- "supported for parsing" % dtype)
+ raise TypeError("the dtype {dtype} is not "
+ "supported for parsing".format(dtype=dtype))
cdef _string_convert(self, Py_ssize_t i, int64_t start, int64_t end,
bint na_filter, kh_str_t *na_hashset):
@@ -2058,7 +2061,7 @@ cdef kh_float64_t* kset_float64_from_list(values) except NULL:
khiter_t k
kh_float64_t *table
int ret = 0
- cnp.float64_t val
+ float64_t val
object value
table = kh_init_float64()
@@ -2101,7 +2104,7 @@ cdef raise_parser_error(object base, parser_t *parser):
Py_XDECREF(type)
raise old_exc
- message = '%s. C error: ' % base
+ message = '{base}. C error: '.format(base=base)
if parser.error_msg != NULL:
if PY3:
message += parser.error_msg.decode('utf-8')
| Broken off from #22283.
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
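For context, the mechanical pattern applied throughout the diff: old-style `%` interpolation is replaced with named `str.format` fields (values here are illustrative):

```python
source = "data.csv"

# before: positional %-interpolation
msg = 'File %s does not exist' % source

# after: named fields, as introduced in this PR
msg = 'File {source} does not exist'.format(source=source)
```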
| https://api.github.com/repos/pandas-dev/pandas/pulls/22553 | 2018-08-31T15:01:08Z | 2018-09-01T17:28:02Z | 2018-09-01T17:28:02Z | 2018-09-01T17:31:18Z |
Correct assert_frame_equal doc string | diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index 1eedb9e2a8274..089e35e8e93b2 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -1306,33 +1306,40 @@ def assert_frame_equal(left, right, check_dtype=True,
check_categorical=True,
check_like=False,
obj='DataFrame'):
- """Check that left and right DataFrame are equal.
+ """
+ Check that left and right DataFrame are equal.
+
+ This function is intended to compare two DataFrames and output any
+    differences. It is mostly intended for use in unit tests.
+ Additional parameters allow varying the strictness of the
+ equality checks performed.
Parameters
----------
left : DataFrame
+ First DataFrame to compare.
right : DataFrame
+ Second DataFrame to compare.
check_dtype : bool, default True
Whether to check the DataFrame dtype is identical.
- check_index_type : bool / string {'equiv'}, default False
+ check_index_type : {'equiv'} or bool, default 'equiv'
Whether to check the Index class, dtype and inferred_type
are identical.
- check_column_type : bool / string {'equiv'}, default False
+ check_column_type : {'equiv'} or bool, default 'equiv'
Whether to check the columns class, dtype and inferred_type
are identical.
- check_frame_type : bool, default False
+ check_frame_type : bool, default True
Whether to check the DataFrame class is identical.
check_less_precise : bool or int, default False
Specify comparison precision. Only used when check_exact is False.
5 digits (False) or 3 digits (True) after decimal points are compared.
- If int, then specify the digits to compare
+ If int, then specify the digits to compare.
check_names : bool, default True
Whether to check that the `names` attribute for both the `index`
and `column` attributes of the DataFrame is identical, i.e.
* left.index.names == right.index.names
* left.columns.names == right.columns.names
-
by_blocks : bool, default False
Specify how to compare internal data. If False, compare by columns.
If True, compare by blocks.
@@ -1345,10 +1352,39 @@ def assert_frame_equal(left, right, check_dtype=True,
check_like : bool, default False
If True, ignore the order of index & columns.
Note: index labels must match their respective rows
- (same as in columns) - same labels must be with the same data
+ (same as in columns) - same labels must be with the same data.
obj : str, default 'DataFrame'
Specify object name being compared, internally used to show appropriate
- assertion message
+ assertion message.
+
+ See Also
+ --------
+ assert_series_equal : Equivalent method for asserting Series equality.
+ DataFrame.equals : Check DataFrame equality.
+
+ Examples
+ --------
+ This example shows comparing two DataFrames that are equal
+ but with columns of differing dtypes.
+
+ >>> from pandas.util.testing import assert_frame_equal
+ >>> df1 = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
+ >>> df2 = pd.DataFrame({'a': [1, 2], 'b': [3.0, 4.0]})
+
+ df1 equals itself.
+ >>> assert_frame_equal(df1, df1)
+
+ df1 differs from df2 as column 'b' is of a different type.
+ >>> assert_frame_equal(df1, df2)
+ Traceback (most recent call last):
+ AssertionError: Attributes are different
+
+ Attribute "dtype" are different
+ [left]: int64
+ [right]: float64
+
+ Ignore differing dtypes in columns with check_dtype.
+ >>> assert_frame_equal(df1, df2, check_dtype=False)
"""
# instance validation
| Correct default values in assert_frame_equal doc string
| https://api.github.com/repos/pandas-dev/pandas/pulls/22552 | 2018-08-31T13:36:39Z | 2018-09-03T16:46:55Z | 2018-09-03T16:46:55Z | 2018-09-03T16:46:55Z |
Fix incorrect DTI/TDI indexing; warn before dropping tzinfo | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 8445a28a51a5d..7d7dc7f0f17b5 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -525,6 +525,7 @@ Datetimelike API Changes
- :class:`DateOffset` objects are now immutable. Attempting to alter one of these will now raise ``AttributeError`` (:issue:`21341`)
- :class:`PeriodIndex` subtraction of another ``PeriodIndex`` will now return an object-dtype :class:`Index` of :class:`DateOffset` objects instead of raising a ``TypeError`` (:issue:`20049`)
- :func:`cut` and :func:`qcut` now returns a :class:`DatetimeIndex` or :class:`TimedeltaIndex` bins when the input is datetime or timedelta dtype respectively and ``retbins=True`` (:issue:`19891`)
+- :meth:`DatetimeIndex.to_period` and :meth:`Timestamp.to_period` will issue a warning when timezone information will be lost (:issue:`21333`)
.. _whatsnew_0240.api.other:
@@ -626,6 +627,8 @@ Datetimelike
- Bug in :class:`DataFrame` with mixed dtypes including ``datetime64[ns]`` incorrectly raising ``TypeError`` on equality comparisons (:issue:`13128`,:issue:`22163`)
- Bug in :meth:`DataFrame.eq` comparison against ``NaT`` incorrectly returning ``True`` or ``NaN`` (:issue:`15697`,:issue:`22163`)
- Bug in :class:`DatetimeIndex` subtraction that incorrectly failed to raise `OverflowError` (:issue:`22492`, :issue:`22508`)
+- Bug in :class:`DatetimeIndex` incorrectly allowing indexing with ``Timedelta`` object (:issue:`20464`)
+-
Timedelta
^^^^^^^^^
@@ -634,7 +637,7 @@ Timedelta
- Bug in multiplying a :class:`Series` with numeric dtype against a ``timedelta`` object (:issue:`22390`)
- Bug in :class:`Series` with numeric dtype when adding or subtracting an an array or ``Series`` with ``timedelta64`` dtype (:issue:`22390`)
- Bug in :class:`Index` with numeric dtype when multiplying or dividing an array with dtype ``timedelta64`` (:issue:`22390`)
--
+- Bug in :class:`TimedeltaIndex` incorrectly allowing indexing with ``Timestamp`` object (:issue:`20464`)
-
-
diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
index 3ab1396c0fe38..52343593d1cc1 100644
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -737,6 +737,12 @@ class Timestamp(_Timestamp):
"""
from pandas import Period
+ if self.tz is not None:
+ # GH#21333
+ warnings.warn("Converting to Period representation will "
+ "drop timezone information.",
+ UserWarning)
+
if freq is None:
freq = self.freq
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index 629660c899a3f..f780b68a536a1 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -2,7 +2,7 @@
from __future__ import division
import operator
import warnings
-from datetime import time, datetime
+from datetime import time, datetime, timedelta
import numpy as np
from pytz import utc
@@ -730,6 +730,10 @@ def to_period(self, freq=None):
"""
from pandas.core.indexes.period import PeriodIndex
+ if self.tz is not None:
+ warnings.warn("Converting to PeriodIndex representation will "
+ "drop timezone information.", UserWarning)
+
if freq is None:
freq = self.freqstr or self.inferred_freq
@@ -740,7 +744,7 @@ def to_period(self, freq=None):
freq = get_period_alias(freq)
- return PeriodIndex(self.values, name=self.name, freq=freq, tz=self.tz)
+ return PeriodIndex(self.values, name=self.name, freq=freq)
def snap(self, freq='S'):
"""
@@ -1204,6 +1208,12 @@ def get_loc(self, key, method=None, tolerance=None):
key = Timestamp(key, tz=self.tz)
return Index.get_loc(self, key, method, tolerance)
+ elif isinstance(key, timedelta):
+ # GH#20464
+ raise TypeError("Cannot index {cls} with {other}"
+ .format(cls=type(self).__name__,
+ other=type(key).__name__))
+
if isinstance(key, time):
if method is not None:
raise NotImplementedError('cannot yet lookup inexact labels '
diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index 063b578e512de..e0c78d6a1c518 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -1,5 +1,6 @@
""" implement the TimedeltaIndex """
import operator
+from datetime import datetime
import numpy as np
from pandas.core.dtypes.common import (
@@ -487,7 +488,11 @@ def get_loc(self, key, method=None, tolerance=None):
-------
loc : int
"""
- if is_list_like(key):
+ if is_list_like(key) or (isinstance(key, datetime) and key is not NaT):
+ # GH#20464 datetime check here is to ensure we don't allow
+ # datetime objects to be incorrectly treated as timedelta
+ # objects; NaT is a special case because it plays a double role
+ # as Not-A-Timedelta
raise TypeError
if isna(key):
diff --git a/pandas/tests/indexes/datetimes/test_astype.py b/pandas/tests/indexes/datetimes/test_astype.py
index 78b669de95598..be22d80a862e1 100644
--- a/pandas/tests/indexes/datetimes/test_astype.py
+++ b/pandas/tests/indexes/datetimes/test_astype.py
@@ -246,7 +246,9 @@ def setup_method(self, method):
def test_to_period_millisecond(self):
index = self.index
- period = index.to_period(freq='L')
+ with tm.assert_produces_warning(UserWarning):
+ # warning that timezone info will be lost
+ period = index.to_period(freq='L')
assert 2 == len(period)
assert period[0] == Period('2007-01-01 10:11:12.123Z', 'L')
assert period[1] == Period('2007-01-01 10:11:13.789Z', 'L')
@@ -254,7 +256,9 @@ def test_to_period_millisecond(self):
def test_to_period_microsecond(self):
index = self.index
- period = index.to_period(freq='U')
+ with tm.assert_produces_warning(UserWarning):
+ # warning that timezone info will be lost
+ period = index.to_period(freq='U')
assert 2 == len(period)
assert period[0] == Period('2007-01-01 10:11:12.123456Z', 'U')
assert period[1] == Period('2007-01-01 10:11:13.789123Z', 'U')
@@ -264,12 +268,20 @@ def test_to_period_microsecond(self):
dateutil.tz.tzutc()])
def test_to_period_tz(self, tz):
ts = date_range('1/1/2000', '2/1/2000', tz=tz)
- result = ts.to_period()[0]
- expected = ts[0].to_period()
+
+ with tm.assert_produces_warning(UserWarning):
+ # GH#21333 warning that timezone info will be lost
+ result = ts.to_period()[0]
+ expected = ts[0].to_period()
+
assert result == expected
expected = date_range('1/1/2000', '2/1/2000').to_period()
- result = ts.to_period()
+
+ with tm.assert_produces_warning(UserWarning):
+ # GH#21333 warning that timezone info will be lost
+ result = ts.to_period()
+
tm.assert_index_equal(result, expected)
def test_to_period_nofreq(self):
diff --git a/pandas/tests/indexes/datetimes/test_indexing.py b/pandas/tests/indexes/datetimes/test_indexing.py
index 8cffa035721b0..601a7b13e370a 100644
--- a/pandas/tests/indexes/datetimes/test_indexing.py
+++ b/pandas/tests/indexes/datetimes/test_indexing.py
@@ -586,3 +586,17 @@ def test_reasonable_keyerror(self):
with pytest.raises(KeyError) as excinfo:
index.get_loc('1/1/2000')
assert '2000' in str(excinfo.value)
+
+ @pytest.mark.parametrize('key', [pd.Timedelta(0),
+ pd.Timedelta(1),
+ timedelta(0)])
+ def test_timedelta_invalid_key(self, key):
+ # GH#20464
+ dti = pd.date_range('1970-01-01', periods=10)
+ with pytest.raises(TypeError):
+ dti.get_loc(key)
+
+ def test_get_loc_nat(self):
+ # GH#20464
+ index = DatetimeIndex(['1/3/2000', 'NaT'])
+ assert index.get_loc(pd.NaT) == 1
diff --git a/pandas/tests/indexes/timedeltas/test_indexing.py b/pandas/tests/indexes/timedeltas/test_indexing.py
index 08992188265bd..8ba2c81f429d8 100644
--- a/pandas/tests/indexes/timedeltas/test_indexing.py
+++ b/pandas/tests/indexes/timedeltas/test_indexing.py
@@ -1,4 +1,4 @@
-from datetime import timedelta
+from datetime import datetime, timedelta
import pytest
import numpy as np
@@ -41,6 +41,15 @@ def test_getitem(self):
tm.assert_index_equal(result, expected)
assert result.freq == expected.freq
+ @pytest.mark.parametrize('key', [pd.Timestamp('1970-01-01'),
+ pd.Timestamp('1970-01-02'),
+ datetime(1970, 1, 1)])
+ def test_timestamp_invalid_key(self, key):
+ # GH#20464
+ tdi = pd.timedelta_range(0, periods=10)
+ with pytest.raises(TypeError):
+ tdi.get_loc(key)
+
class TestWhere(object):
# placeholder for symmetry with DatetimeIndex and PeriodIndex tests
diff --git a/pandas/tests/scalar/timestamp/test_timestamp.py b/pandas/tests/scalar/timestamp/test_timestamp.py
index 58146cae587fe..872c510094a4f 100644
--- a/pandas/tests/scalar/timestamp/test_timestamp.py
+++ b/pandas/tests/scalar/timestamp/test_timestamp.py
@@ -929,3 +929,11 @@ def test_to_datetime_bijective(self):
with tm.assert_produces_warning(exp_warning, check_stacklevel=False):
assert (Timestamp(Timestamp.min.to_pydatetime()).value / 1000 ==
Timestamp.min.value / 1000)
+
+ def test_to_period_tz_warning(self):
+ # GH#21333 make sure a warning is issued when timezone
+ # info is lost
+ ts = Timestamp('2009-04-15 16:17:18', tz='US/Eastern')
+ with tm.assert_produces_warning(UserWarning):
+ # warning that timezone info will be lost
+ ts.to_period('D')
| Slight hodge-podge.
#21333 is about the `to_period` method on DatetimeIndex silently dropping timezone info. This PR makes `DatetimeIndex.to_period` and `Timestamp.to_period` issue a warning when timezone info is lost.
#20464 is entirely unrelated; it notes that `pd.date_range('1970-01-01', periods=2).get_loc(pd.Timedelta(0))` returns `0` instead of raising, and likewise for indexing a TDI with a Timestamp.
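
A minimal sketch of both changes, adapted from the tests added here (the data below is illustrative):

```python
import pandas as pd

# GH#21333: to_period on a tz-aware DatetimeIndex now warns that
# timezone information will be dropped
dti = pd.date_range('2000-01-01', periods=3, tz='US/Eastern')
dti.to_period('D')  # emits UserWarning

# GH#20464: indexing a DatetimeIndex with a Timedelta now raises
try:
    pd.date_range('1970-01-01', periods=10).get_loc(pd.Timedelta(0))
except TypeError as err:
    print(err)  # Cannot index DatetimeIndex with Timedelta
```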
- [x] closes #21333
- [x] closes #20464
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/22549 | 2018-08-31T03:12:23Z | 2018-09-08T02:44:22Z | 2018-09-08T02:44:21Z | 2018-09-08T03:11:14Z |
Typo: UCT -> UTC | diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst
index a287557297cc9..f5d1007dfbbbb 100644
--- a/doc/source/timeseries.rst
+++ b/doc/source/timeseries.rst
@@ -2228,7 +2228,7 @@ To remove timezone from tz-aware ``DatetimeIndex``, use ``tz_localize(None)`` or
didx.tz_convert(None)
# tz_convert(None) is identical with tz_convert('UTC').tz_localize(None)
- didx.tz_convert('UCT').tz_localize(None)
+ didx.tz_convert('UTC').tz_localize(None)
.. _timeseries.timezone_ambiguous:
| - [(no issue)] closes #xxxx
- [(didn't change code)] tests added / passed
- [(didn't change .py files)] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [(seems unnecessary)] whatsnew entry
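
For context, a quick sketch of the equivalence documented on the corrected line (the index values are illustrative):

```python
import pandas as pd

didx = pd.date_range('2014-08-01 09:00', periods=3, freq='H', tz='US/Eastern')

# dropping the timezone via tz_convert(None) is the same as converting
# to UTC first and then stripping the timezone information
left = didx.tz_convert(None)
right = didx.tz_convert('UTC').tz_localize(None)
assert left.equals(right)
```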
| https://api.github.com/repos/pandas-dev/pandas/pulls/22548 | 2018-08-30T22:07:46Z | 2018-08-31T10:00:27Z | 2018-08-31T10:00:27Z | 2018-08-31T10:00:30Z |
CLN: use 'codes' rather than 'values' internally in Categorical | diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index fa8ce61f1f4bc..9b7320bf143c2 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -1736,7 +1736,7 @@ def fillna(self, value=None, method=None, limit=None):
raise NotImplementedError("specifying a limit for fillna has not "
"been implemented yet")
- values = self._codes
+ codes = self._codes
# pad / bfill
if method is not None:
@@ -1744,7 +1744,7 @@ def fillna(self, value=None, method=None, limit=None):
values = self.to_dense().reshape(-1, len(self))
values = interpolate_2d(values, method, 0, None,
value).astype(self.categories.dtype)[0]
- values = _get_codes_for_values(values, self.categories)
+ codes = _get_codes_for_values(values, self.categories)
else:
@@ -1756,27 +1756,27 @@ def fillna(self, value=None, method=None, limit=None):
values_codes = _get_codes_for_values(value, self.categories)
indexer = np.where(values_codes != -1)
- values[indexer] = values_codes[values_codes != -1]
+ codes[indexer] = values_codes[values_codes != -1]
# If value is not a dict or Series it should be a scalar
elif is_hashable(value):
if not isna(value) and value not in self.categories:
raise ValueError("fill value must be in categories")
- mask = values == -1
+ mask = codes == -1
if mask.any():
- values = values.copy()
+ codes = codes.copy()
if isna(value):
- values[mask] = -1
+ codes[mask] = -1
else:
- values[mask] = self.categories.get_loc(value)
+ codes[mask] = self.categories.get_loc(value)
else:
raise TypeError('"value" parameter must be a scalar, dict '
'or Series, but you passed a '
'"{0}"'.format(type(value).__name__))
- return self._constructor(values, dtype=self.dtype, fastpath=True)
+ return self._constructor(codes, dtype=self.dtype, fastpath=True)
def take_nd(self, indexer, allow_fill=None, fill_value=None):
"""
@@ -2148,14 +2148,12 @@ def mode(self, dropna=True):
"""
import pandas._libs.hashtable as htable
- values = self._codes
+ codes = self._codes
if dropna:
good = self._codes != -1
- values = self._codes[good]
- values = sorted(htable.mode_int64(ensure_int64(values), dropna))
- result = self._constructor(values=values, dtype=self.dtype,
- fastpath=True)
- return result
+ codes = self._codes[good]
+ codes = sorted(htable.mode_int64(ensure_int64(codes), dropna))
+ return self._constructor(values=codes, dtype=self.dtype, fastpath=True)
def unique(self):
"""
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index ab180a13ab4f3..e3a21efe269ce 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -460,9 +460,7 @@ def where(self, cond, other=None):
other = self._na_value
values = np.where(cond, self.values, other)
- cat = Categorical(values,
- categories=self.categories,
- ordered=self.ordered)
+ cat = Categorical(values, dtype=self.dtype)
return self._shallow_copy(cat, **self._get_attributes_dict())
def reindex(self, target, method=None, level=None, limit=None,
| In some places a variable has the name ``values``, where ``codes`` would be more logical, as we're dealing with codes.
Renaming these makes it a bit easier to understand what we're looking at when reading the code, IMO.
DOC: Fix copy-pasted name in `.day_name` docstring | diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index 1dd34cdf73ab5..59d2f33b00c63 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -793,17 +793,27 @@ def month_name(self, locale=None):
"""
Return the month names of the DateTimeIndex with specified locale.
+ .. versionadded:: 0.23.0
+
Parameters
----------
- locale : string, default None (English locale)
- locale determining the language in which to return the month name
+ locale : str, optional
+ Locale determining the language in which to return the month name.
+ Default is English locale.
Returns
-------
- month_names : Index
- Index of month names
+ Index
+ Index of month names.
- .. versionadded:: 0.23.0
+ Examples
+ --------
+ >>> idx = pd.DatetimeIndex(start='2018-01', freq='M', periods=3)
+ >>> idx
+ DatetimeIndex(['2018-01-31', '2018-02-28', '2018-03-31'],
+ dtype='datetime64[ns]', freq='M')
+ >>> idx.month_name()
+ Index(['January', 'February', 'March'], dtype='object')
"""
if self.tz is not None and self.tz is not utc:
values = self._local_timestamps()
@@ -819,17 +829,27 @@ def day_name(self, locale=None):
"""
Return the day names of the DateTimeIndex with specified locale.
+ .. versionadded:: 0.23.0
+
Parameters
----------
- locale : string, default None (English locale)
- locale determining the language in which to return the day name
+ locale : str, optional
+ Locale determining the language in which to return the day name.
+ Default is English locale.
Returns
-------
- month_names : Index
- Index of day names
+ Index
+ Index of day names.
- .. versionadded:: 0.23.0
+ Examples
+ --------
+ >>> idx = pd.DatetimeIndex(start='2018-01-01', freq='D', periods=3)
+ >>> idx
+ DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03'],
+ dtype='datetime64[ns]', freq='D')
+ >>> idx.day_name()
+ Index(['Monday', 'Tuesday', 'Wednesday'], dtype='object')
"""
if self.tz is not None and self.tz is not utc:
values = self._local_timestamps()
| Just a small change... Feel free to close and include this change in any other typo-fixing commit if you like. :blush: | https://api.github.com/repos/pandas-dev/pandas/pulls/22544 | 2018-08-30T16:38:49Z | 2018-09-08T03:02:22Z | 2018-09-08T03:02:22Z | 2018-09-08T03:02:26Z |
Added capture_stderr decorator to test_validate_docstrings | diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index 1eedb9e2a8274..1f6214e64f9c2 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -673,7 +673,7 @@ def capture_stderr(f):
AssertionError: assert 'foo\n' == 'bar\n'
"""
- @wraps(f)
+ @compat.wraps(f)
def wrapper(*args, **kwargs):
try:
sys.stderr = StringIO()
diff --git a/scripts/tests/test_validate_docstrings.py b/scripts/tests/test_validate_docstrings.py
index 933d02cc8c627..0c0757c6963d7 100644
--- a/scripts/tests/test_validate_docstrings.py
+++ b/scripts/tests/test_validate_docstrings.py
@@ -6,6 +6,8 @@
import validate_docstrings
validate_one = validate_docstrings.validate_one
+from pandas.util.testing import capture_stderr
+
class GoodDocStrings(object):
"""
@@ -518,10 +520,12 @@ def _import_path(self, klass=None, func=None):
return base_path
+ @capture_stderr
def test_good_class(self):
assert validate_one(self._import_path(
klass='GoodDocStrings')) == 0
+ @capture_stderr
@pytest.mark.parametrize("func", [
'plot', 'sample', 'random_letters', 'sample_values', 'head', 'head1',
'contains', 'mode'])
@@ -529,10 +533,12 @@ def test_good_functions(self, func):
assert validate_one(self._import_path(
klass='GoodDocStrings', func=func)) == 0
+ @capture_stderr
def test_bad_class(self):
assert validate_one(self._import_path(
klass='BadGenericDocStrings')) > 0
+ @capture_stderr
@pytest.mark.parametrize("func", [
'func', 'astype', 'astype1', 'astype2', 'astype3', 'plot', 'method'])
def test_bad_generic_functions(self, func):
| - [X] closes #22483
- [X] tests added / passed
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
I originally thought it would be nice to have this as a class decorator, but that usage conflicts with the `capsys` fixture used by `test_bad_examples`, so I had to pick and choose where to apply it. Additionally, the decorator doesn't work out of the box for wrapping classes, so supporting that would take a decent amount more code.
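
For reference, a minimal sketch of what the decorator does, based on its implementation in `pandas/util/testing.py` (the function below is just an illustration):

```python
import sys
from pandas.util.testing import capture_stderr

@capture_stderr
def noisy_check():
    # anything written to stderr here goes into a StringIO for the
    # duration of the call, keeping the pytest output clean
    sys.stderr.write('docstring validation chatter\n')
    return 0

assert noisy_check() == 0  # nothing reaches the real stderr
```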
cc @gfyoung: this will create minor merge conflicts with your work in #22531 | https://api.github.com/repos/pandas-dev/pandas/pulls/22543 | 2018-08-30T15:13:19Z | 2018-09-04T21:47:45Z | 2018-09-04T21:47:45Z | 2020-01-16T00:34:28Z |
DOC: Improve the docstring of DataFrame.equals() | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 85bd6065314f4..dd5552151f61b 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -1303,8 +1303,85 @@ def __invert__(self):
def equals(self, other):
"""
- Determines if two NDFrame objects contain the same elements. NaNs in
- the same location are considered equal.
+ Test whether two objects contain the same elements.
+
+ This function allows two Series or DataFrames to be compared against
+ each other to see if they have the same shape and elements. NaNs in
+ the same location are considered equal. The column headers do not
+ need to have the same type, but the elements within the columns must
+ be the same dtype.
+
+ Parameters
+ ----------
+ other : Series or DataFrame
+ The other Series or DataFrame to be compared with the first.
+
+ Returns
+ -------
+ bool
+ True if all elements are the same in both objects, False
+ otherwise.
+
+ See Also
+ --------
+ Series.eq : Compare two Series objects of the same length
+ and return a Series where each element is True if the element
+ in each Series is equal, False otherwise.
+ DataFrame.eq : Compare two DataFrame objects of the same shape and
+ return a DataFrame where each element is True if the respective
+ element in each DataFrame is equal, False otherwise.
+ assert_series_equal : Return True if left and right Series are equal,
+ False otherwise.
+ assert_frame_equal : Return True if left and right DataFrames are
+ equal, False otherwise.
+ numpy.array_equal : Return True if two arrays have the same shape
+ and elements, False otherwise.
+
+ Notes
+ -----
+ This function requires that the elements have the same dtype as their
+ respective elements in the other Series or DataFrame. However, the
+ column labels do not need to have the same type, as long as they are
+ still considered equal.
+
+ Examples
+ --------
+ >>> df = pd.DataFrame({1: [10], 2: [20]})
+ >>> df
+ 1 2
+ 0 10 20
+
+ DataFrames df and exactly_equal have the same types and values for
+ their elements and column labels, which will return True.
+
+ >>> exactly_equal = pd.DataFrame({1: [10], 2: [20]})
+ >>> exactly_equal
+ 1 2
+ 0 10 20
+ >>> df.equals(exactly_equal)
+ True
+
+ DataFrames df and different_column_type have the same element
+ types and values, but have different types for the column labels,
+ which will still return True.
+
+ >>> different_column_type = pd.DataFrame({1.0: [10], 2.0: [20]})
+ >>> different_column_type
+ 1.0 2.0
+ 0 10 20
+ >>> df.equals(different_column_type)
+ True
+
+ DataFrames df and different_data_type have different types for the
+ same values for their elements, and will return False even though
+ their column labels are the same values and types.
+
+ >>> different_data_type = pd.DataFrame({1: [10.0], 2: [20.0]})
+ >>> different_data_type
+ 1 2
+ 0 10.0 20.0
+ >>> df.equals(different_data_type)
+ False
"""
if not isinstance(other, self._constructor):
return False
| - [X] closes #22462
- [ ] tests added / passed
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
```
################################################################################
##################### Docstring (pandas.DataFrame.equals) #####################
################################################################################
Test whether two DataFrame objects contain the same elements.
This function allows two DataFrame objects to be compared against
each other to see if they have the same shape and elements. NaNs in the same
location are considered equal. The column headers do not need to have
the same type, but the elements within the columns must be
the same dtype.
Parameters
----------
other : DataFrame
The other DataFrame to be compared with the first.
Returns
-------
bool
True if all elements are the same in both DataFrames, False
otherwise.
See Also
--------
pandas.Series.eq : Compare two Series objects of the same length
and return a Series where each element is True if the element
in each Series is equal, False otherwise.
numpy.array_equal : Return True if two arrays have the same shape
and elements, False otherwise.
Notes
-----
This function requires that the elements have the same dtype as their
respective elements in the other DataFrame. However, the indices do
not need to have the same type, as long as they are still considered
equal.
Examples
--------
>>> a = pd.DataFrame({1:[0], 0:[1]})
>>> b = pd.DataFrame({1.0:[0], 0.0:[1]})
DataFrames a and b have the same element types and values, but have
different types for the indices, which will still return True.
>>> a.equals(b)
True
DataFrames a and c have different types for the same values for their
elements, and will return False even though the indices are the same
values and types.
>>> c = pd.DataFrame({1:[0.0], 0:[1.0]})
>>> a.equals(c)
False
################################################################################
################################## Validation ##################################
################################################################################
Docstring for "pandas.DataFrame.equals" correct. :)
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/22539 | 2018-08-30T01:41:21Z | 2018-09-05T23:34:52Z | 2018-09-05T23:34:52Z | 2018-09-05T23:34:52Z |
Bug Fix 21755 | diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 61dadd833be35..f71fee22e27eb 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -1919,7 +1919,7 @@ def head(self, n=5):
"""
self._reset_group_selection()
mask = self._cumcount_array() < n
- return self._selected_obj[mask]
+ return self._selected_obj[mask].dropna(subset=[self.keys])
@Substitution(name='groupby')
@Appender(_doc_template)
@@ -1946,7 +1946,7 @@ def tail(self, n=5):
"""
self._reset_group_selection()
mask = self._cumcount_array(ascending=False) < n
- return self._selected_obj[mask]
+ return self._selected_obj[mask].dropna(subset=[self.keys])
GroupBy._add_numeric_operations()
| Corrected `GroupBy.head` and `GroupBy.tail` to drop rows whose group key is NaN, so the result matches the groups actually formed by `groupby` (see the sketch after the checklist).
- [ ] partially closes #21755
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
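
A sketch of the behavior being fixed (illustrative data, per the report in #21755):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1.0, np.nan], 'b': [2, 3]})

# groupby itself drops the NaN group...
df.groupby('a').sum()    # only the a == 1.0 group

# ...but head/tail previously returned the NaN-keyed row as well;
# with this change that row is dropped too
df.groupby('a').head(1)
```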
| https://api.github.com/repos/pandas-dev/pandas/pulls/22536 | 2018-08-29T23:26:27Z | 2018-08-30T00:02:28Z | null | 2018-08-30T00:02:28Z |
DEPR: deprecate integer add/sub with DTI/TDI/PI/Timestamp/Period | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index de111072bef02..22427e1ada1ad 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -955,6 +955,56 @@ Deprecations
- :meth:`Timestamp.tz_localize`, :meth:`DatetimeIndex.tz_localize`, and :meth:`Series.tz_localize` have deprecated the ``errors`` argument in favor of the ``nonexistent`` argument (:issue:`8917`)
- The class ``FrozenNDArray`` has been deprecated. When unpickling, ``FrozenNDArray`` will be unpickled to ``np.ndarray`` once this class is removed (:issue:`9031`)
+.. _whatsnew_0240.deprecations.datetimelike_int_ops:
+
+Integer Addition/Subtraction with Datetime-like Classes Is Deprecated
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+In the past, users could add or subtract integers or integer-dtype arrays
+from :class:`Period`, :class:`PeriodIndex`, and in some cases
+:class:`Timestamp`, :class:`DatetimeIndex` and :class:`TimedeltaIndex`.
+
+This usage is now deprecated. Instead, add or subtract integer multiples of
+the object's ``freq`` attribute (:issue:`21939`).
+
+Previous Behavior:
+
+.. code-block:: ipython
+
+ In [3]: per = pd.Period('2016Q1')
+ In [4]: per + 3
+ Out[4]: Period('2016Q4', 'Q-DEC')
+
+ In [5]: ts = pd.Timestamp('1994-05-06 12:15:16', freq=pd.offsets.Hour())
+ In [6]: ts + 2
+ Out[6]: Timestamp('1994-05-06 14:15:16', freq='H')
+
+ In [7]: tdi = pd.timedelta_range('1D', periods=2)
+ In [8]: tdi - np.array([2, 1])
+ Out[8]: TimedeltaIndex(['-1 days', '1 days'], dtype='timedelta64[ns]', freq=None)
+
+ In [9]: dti = pd.date_range('2001-01-01', periods=2, freq='7D')
+ In [10]: dti + pd.Index([1, 2])
+ Out[10]: DatetimeIndex(['2001-01-08', '2001-01-22'], dtype='datetime64[ns]', freq=None)
+
+Current Behavior:
+
+.. ipython:: python
+    :okwarning:
+
+ per = pd.Period('2016Q1')
+ per + 3
+
+ per = pd.Period('2016Q1')
+ per + 3 * per.freq
+
+ ts = pd.Timestamp('1994-05-06 12:15:16', freq=pd.offsets.Hour())
+ ts + 2 * ts.freq
+
+ tdi = pd.timedelta_range('1D', periods=2)
+ tdi - np.array([2 * tdi.freq, 1 * tdi.freq])
+
+ dti = pd.date_range('2001-01-01', periods=2, freq='7D')
+ dti + pd.Index([1 * dti.freq, 2 * dti.freq])
+
.. _whatsnew_0240.prior_deprecations:
Removal of prior version deprecations/changes
diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx
index 43dc415bfd464..73fccb9125a85 100644
--- a/pandas/_libs/tslibs/period.pyx
+++ b/pandas/_libs/tslibs/period.pyx
@@ -33,7 +33,7 @@ cdef extern from "src/datetime/np_datetime.h":
cimport util
from util cimport is_period_object, is_string_object
-from timestamps import Timestamp
+from timestamps import Timestamp, maybe_integer_op_deprecated
from timezones cimport is_utc, is_tzlocal, get_dst_info
from timedeltas import Timedelta
from timedeltas cimport delta_to_nanoseconds
@@ -1645,6 +1645,8 @@ cdef class _Period(object):
elif other is NaT:
return NaT
elif util.is_integer_object(other):
+ maybe_integer_op_deprecated(self)
+
ordinal = self.ordinal + other * self.freq.n
return Period(ordinal=ordinal, freq=self.freq)
elif (PyDateTime_Check(other) or
@@ -1671,6 +1673,8 @@ cdef class _Period(object):
neg_other = -other
return self + neg_other
elif util.is_integer_object(other):
+ maybe_integer_op_deprecated(self)
+
ordinal = self.ordinal - other * self.freq.n
return Period(ordinal=ordinal, freq=self.freq)
elif is_period_object(other):
@@ -1756,7 +1760,7 @@ cdef class _Period(object):
def end_time(self):
# freq.n can't be negative or 0
# ordinal = (self + self.freq.n).start_time.value - 1
- ordinal = (self + 1).start_time.value - 1
+ ordinal = (self + self.freq).start_time.value - 1
return Timestamp(ordinal)
def to_timestamp(self, freq=None, how='start', tz=None):
@@ -1783,7 +1787,8 @@ cdef class _Period(object):
end = how == 'E'
if end:
- return (self + 1).to_timestamp(how='start') - Timedelta(1, 'ns')
+ endpoint = (self + self.freq).to_timestamp(how='start')
+ return endpoint - Timedelta(1, 'ns')
if freq is None:
base, mult = get_freq_code(self.freq)
diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
index 08b0c5472549e..93fa99ce1bd87 100644
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -40,8 +40,19 @@ from timezones cimport (
_zero_time = datetime_time(0, 0)
_no_input = object()
+
# ----------------------------------------------------------------------
+def maybe_integer_op_deprecated(obj):
+ # GH#22535 add/sub of integers and int-arrays is deprecated
+ if obj.freq is not None:
+ warnings.warn("Addition/subtraction of integers and integer-arrays "
+ "to {cls} is deprecated, will be removed in a future "
+ "version. Instead of adding/subtracting `n`, use "
+ "`n * self.freq`"
+ .format(cls=type(obj).__name__),
+ FutureWarning)
+
cdef inline object create_timestamp_from_ts(int64_t value,
npy_datetimestruct dts,
@@ -315,7 +326,8 @@ cdef class _Timestamp(datetime):
return np.datetime64(self.value, 'ns')
def __add__(self, other):
- cdef int64_t other_int, nanos
+ cdef:
+ int64_t other_int, nanos
if is_timedelta64_object(other):
other_int = other.astype('timedelta64[ns]').view('i8')
@@ -323,6 +335,8 @@ cdef class _Timestamp(datetime):
tz=self.tzinfo, freq=self.freq)
elif is_integer_object(other):
+ maybe_integer_op_deprecated(self)
+
if self is NaT:
# to be compat with Period
return NaT
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index dc7cf51ca109d..2266c7be53523 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -8,6 +8,7 @@
from pandas._libs import lib, iNaT, NaT
from pandas._libs.tslibs import timezones
from pandas._libs.tslibs.timedeltas import delta_to_nanoseconds, Timedelta
+from pandas._libs.tslibs.timestamps import maybe_integer_op_deprecated
from pandas._libs.tslibs.period import (
Period, DIFFERENT_FREQ_INDEX, IncompatibleFrequency)
@@ -634,6 +635,7 @@ def __add__(self, other):
elif lib.is_integer(other):
# This check must come after the check for np.timedelta64
# as is_integer returns True for these
+ maybe_integer_op_deprecated(self)
result = self._time_shift(other)
# array-like others
@@ -647,6 +649,7 @@ def __add__(self, other):
# DatetimeIndex, ndarray[datetime64]
return self._add_datetime_arraylike(other)
elif is_integer_dtype(other):
+ maybe_integer_op_deprecated(self)
result = self._addsub_int_array(other, operator.add)
elif is_float_dtype(other):
# Explicitly catch invalid dtypes
@@ -692,7 +695,9 @@ def __sub__(self, other):
elif lib.is_integer(other):
# This check must come after the check for np.timedelta64
# as is_integer returns True for these
+ maybe_integer_op_deprecated(self)
result = self._time_shift(-other)
+
elif isinstance(other, Period):
result = self._sub_period(other)
@@ -710,6 +715,7 @@ def __sub__(self, other):
# PeriodIndex
result = self._sub_period_array(other)
elif is_integer_dtype(other):
+ maybe_integer_op_deprecated(self)
result = self._addsub_int_array(other, operator.sub)
elif isinstance(other, ABCIndexClass):
raise TypeError("cannot subtract {cls} and {typ}"
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index 31bcac2f4f529..1bbad4b73953d 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -592,7 +592,7 @@ def to_timestamp(self, freq=None, how='start'):
return self.to_timestamp(how='start') + adjust
else:
adjust = Timedelta(1, 'ns')
- return (self + 1).to_timestamp(how='start') - adjust
+ return (self + self.freq).to_timestamp(how='start') - adjust
if freq is None:
base, mult = frequencies.get_freq_code(self.freq)
@@ -718,10 +718,11 @@ def _sub_period(self, other):
@Appender(dtl.DatetimeLikeArrayMixin._addsub_int_array.__doc__)
def _addsub_int_array(
self,
- other, # type: Union[Index, ExtensionArray, np.ndarray[int]]
- op, # type: Callable[Any, Any]
+ other, # type: Union[Index, ExtensionArray, np.ndarray[int]]
+ op # type: Callable[Any, Any]
):
# type: (...) -> PeriodArray
+
assert op in [operator.add, operator.sub]
if op is operator.sub:
other = -other
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index 70a8deb33b7f2..36476a8ecb657 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -1429,7 +1429,7 @@ def _get_time_delta_bins(self, ax):
freq=self.freq,
name=ax.name)
- end_stamps = labels + 1
+ end_stamps = labels + self.freq
bins = ax.searchsorted(end_stamps, side='left')
# Addresses GH #10530
@@ -1443,17 +1443,18 @@ def _get_time_period_bins(self, ax):
raise TypeError('axis must be a DatetimeIndex, but got '
'an instance of %r' % type(ax).__name__)
+ freq = self.freq
+
if not len(ax):
- binner = labels = PeriodIndex(
- data=[], freq=self.freq, name=ax.name)
+ binner = labels = PeriodIndex(data=[], freq=freq, name=ax.name)
return binner, [], labels
labels = binner = PeriodIndex(start=ax[0],
end=ax[-1],
- freq=self.freq,
+ freq=freq,
name=ax.name)
- end_stamps = (labels + 1).asfreq(self.freq, 's').to_timestamp()
+ end_stamps = (labels + freq).asfreq(freq, 's').to_timestamp()
if ax.tzinfo:
end_stamps = end_stamps.tz_localize(ax.tzinfo)
bins = ax.searchsorted(end_stamps, side='left')
diff --git a/pandas/plotting/_converter.py b/pandas/plotting/_converter.py
index fe773a6054db5..444b742ae706e 100644
--- a/pandas/plotting/_converter.py
+++ b/pandas/plotting/_converter.py
@@ -574,7 +574,7 @@ def period_break(dates, period):
Name of the period to monitor.
"""
current = getattr(dates, period)
- previous = getattr(dates - 1, period)
+ previous = getattr(dates - 1 * dates.freq, period)
return np.nonzero(current - previous)[0]
@@ -660,7 +660,7 @@ def first_label(label_flags):
def _hour_finder(label_interval, force_year_start):
_hour = dates_.hour
- _prev_hour = (dates_ - 1).hour
+ _prev_hour = (dates_ - 1 * dates_.freq).hour
hour_start = (_hour - _prev_hour) != 0
info_maj[day_start] = True
info_min[hour_start & (_hour % label_interval == 0)] = True
@@ -674,7 +674,7 @@ def _hour_finder(label_interval, force_year_start):
def _minute_finder(label_interval):
hour_start = period_break(dates_, 'hour')
_minute = dates_.minute
- _prev_minute = (dates_ - 1).minute
+ _prev_minute = (dates_ - 1 * dates_.freq).minute
minute_start = (_minute - _prev_minute) != 0
info_maj[hour_start] = True
info_min[minute_start & (_minute % label_interval == 0)] = True
@@ -687,7 +687,7 @@ def _minute_finder(label_interval):
def _second_finder(label_interval):
minute_start = period_break(dates_, 'minute')
_second = dates_.second
- _prev_second = (dates_ - 1).second
+ _prev_second = (dates_ - 1 * dates_.freq).second
second_start = (_second - _prev_second) != 0
info['maj'][minute_start] = True
info['min'][second_start & (_second % label_interval == 0)] = True
diff --git a/pandas/tests/arithmetic/test_datetime64.py b/pandas/tests/arithmetic/test_datetime64.py
index 5435ec643f813..b71ad08cb523e 100644
--- a/pandas/tests/arithmetic/test_datetime64.py
+++ b/pandas/tests/arithmetic/test_datetime64.py
@@ -1055,7 +1055,8 @@ def test_dti_add_int(self, tz_naive_fixture, one):
tz = tz_naive_fixture
rng = pd.date_range('2000-01-01 09:00', freq='H',
periods=10, tz=tz)
- result = rng + one
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ result = rng + one
expected = pd.date_range('2000-01-01 10:00', freq='H',
periods=10, tz=tz)
tm.assert_index_equal(result, expected)
@@ -1066,14 +1067,16 @@ def test_dti_iadd_int(self, tz_naive_fixture, one):
periods=10, tz=tz)
expected = pd.date_range('2000-01-01 10:00', freq='H',
periods=10, tz=tz)
- rng += one
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ rng += one
tm.assert_index_equal(rng, expected)
def test_dti_sub_int(self, tz_naive_fixture, one):
tz = tz_naive_fixture
rng = pd.date_range('2000-01-01 09:00', freq='H',
periods=10, tz=tz)
- result = rng - one
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ result = rng - one
expected = pd.date_range('2000-01-01 08:00', freq='H',
periods=10, tz=tz)
tm.assert_index_equal(result, expected)
@@ -1084,7 +1087,8 @@ def test_dti_isub_int(self, tz_naive_fixture, one):
periods=10, tz=tz)
expected = pd.date_range('2000-01-01 08:00', freq='H',
periods=10, tz=tz)
- rng -= one
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ rng -= one
tm.assert_index_equal(rng, expected)
# -------------------------------------------------------------
@@ -1096,10 +1100,15 @@ def test_dti_add_intarray_tick(self, box, freq):
# GH#19959
dti = pd.date_range('2016-01-01', periods=2, freq=freq)
other = box([4, -1])
- expected = DatetimeIndex([dti[n] + other[n] for n in range(len(dti))])
- result = dti + other
+
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ expected = DatetimeIndex([dti[n] + other[n]
+ for n in range(len(dti))])
+ result = dti + other
tm.assert_index_equal(result, expected)
- result = other + dti
+
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ result = other + dti
tm.assert_index_equal(result, expected)
@pytest.mark.parametrize('freq', ['W', 'M', 'MS', 'Q'])
@@ -1108,11 +1117,21 @@ def test_dti_add_intarray_non_tick(self, box, freq):
# GH#19959
dti = pd.date_range('2016-01-01', periods=2, freq=freq)
other = box([4, -1])
- expected = DatetimeIndex([dti[n] + other[n] for n in range(len(dti))])
- with tm.assert_produces_warning(PerformanceWarning):
+
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ expected = DatetimeIndex([dti[n] + other[n]
+ for n in range(len(dti))])
+
+ # tm.assert_produces_warning does not handle cases where we expect
+ # two warnings, in this case PerformanceWarning and FutureWarning.
+ # Until that is fixed, we don't catch either
+ with warnings.catch_warnings():
+ warnings.simplefilter("ignore")
result = dti + other
tm.assert_index_equal(result, expected)
- with tm.assert_produces_warning(PerformanceWarning):
+
+ with warnings.catch_warnings():
+ warnings.simplefilter("ignore")
result = other + dti
tm.assert_index_equal(result, expected)
@@ -1646,13 +1665,15 @@ def test_dti_add_offset_array(self, tz_naive_fixture):
dti = pd.date_range('2017-01-01', periods=2, tz=tz)
other = np.array([pd.offsets.MonthEnd(), pd.offsets.Day(n=2)])
- with tm.assert_produces_warning(PerformanceWarning):
+ with tm.assert_produces_warning(PerformanceWarning,
+ clear=[pd.core.arrays.datetimelike]):
res = dti + other
expected = DatetimeIndex([dti[n] + other[n] for n in range(len(dti))],
name=dti.name, freq='infer')
tm.assert_index_equal(res, expected)
- with tm.assert_produces_warning(PerformanceWarning):
+ with tm.assert_produces_warning(PerformanceWarning,
+ clear=[pd.core.arrays.datetimelike]):
res2 = other + dti
tm.assert_index_equal(res2, expected)
@@ -1666,13 +1687,15 @@ def test_dti_add_offset_index(self, tz_naive_fixture, names):
other = pd.Index([pd.offsets.MonthEnd(), pd.offsets.Day(n=2)],
name=names[1])
- with tm.assert_produces_warning(PerformanceWarning):
+ with tm.assert_produces_warning(PerformanceWarning,
+ clear=[pd.core.arrays.datetimelike]):
res = dti + other
expected = DatetimeIndex([dti[n] + other[n] for n in range(len(dti))],
name=names[2], freq='infer')
tm.assert_index_equal(res, expected)
- with tm.assert_produces_warning(PerformanceWarning):
+ with tm.assert_produces_warning(PerformanceWarning,
+ clear=[pd.core.arrays.datetimelike]):
res2 = other + dti
tm.assert_index_equal(res2, expected)
@@ -1682,7 +1705,8 @@ def test_dti_sub_offset_array(self, tz_naive_fixture):
dti = pd.date_range('2017-01-01', periods=2, tz=tz)
other = np.array([pd.offsets.MonthEnd(), pd.offsets.Day(n=2)])
- with tm.assert_produces_warning(PerformanceWarning):
+ with tm.assert_produces_warning(PerformanceWarning,
+ clear=[pd.core.arrays.datetimelike]):
res = dti - other
expected = DatetimeIndex([dti[n] - other[n] for n in range(len(dti))],
name=dti.name, freq='infer')
@@ -1698,7 +1722,8 @@ def test_dti_sub_offset_index(self, tz_naive_fixture, names):
other = pd.Index([pd.offsets.MonthEnd(), pd.offsets.Day(n=2)],
name=names[1])
- with tm.assert_produces_warning(PerformanceWarning):
+ with tm.assert_produces_warning(PerformanceWarning,
+ clear=[pd.core.arrays.datetimelike]):
res = dti - other
expected = DatetimeIndex([dti[n] - other[n] for n in range(len(dti))],
name=names[2], freq='infer')
@@ -1717,18 +1742,21 @@ def test_dti_with_offset_series(self, tz_naive_fixture, names):
expected_add = Series([dti[n] + other[n] for n in range(len(dti))],
name=names[2])
- with tm.assert_produces_warning(PerformanceWarning):
+ with tm.assert_produces_warning(PerformanceWarning,
+ clear=[pd.core.arrays.datetimelike]):
res = dti + other
tm.assert_series_equal(res, expected_add)
- with tm.assert_produces_warning(PerformanceWarning):
+ with tm.assert_produces_warning(PerformanceWarning,
+ clear=[pd.core.arrays.datetimelike]):
res2 = other + dti
tm.assert_series_equal(res2, expected_add)
expected_sub = Series([dti[n] - other[n] for n in range(len(dti))],
name=names[2])
- with tm.assert_produces_warning(PerformanceWarning):
+ with tm.assert_produces_warning(PerformanceWarning,
+ clear=[pd.core.arrays.datetimelike]):
res3 = dti - other
tm.assert_series_equal(res3, expected_sub)
@@ -1762,11 +1790,11 @@ def test_dt64_with_offset_array(klass):
# GH#10699
# array of offsets
box = Series if klass is Series else pd.Index
- dti = DatetimeIndex([Timestamp('2000-1-1'), Timestamp('2000-2-1')])
- s = klass(dti)
+ s = klass([Timestamp('2000-1-1'), Timestamp('2000-2-1')])
- with tm.assert_produces_warning(PerformanceWarning):
+ with tm.assert_produces_warning(PerformanceWarning,
+ clear=[pd.core.arrays.datetimelike]):
result = s + box([pd.offsets.DateOffset(years=1),
pd.offsets.MonthEnd()])
exp = klass([Timestamp('2001-1-1'), Timestamp('2000-2-29')])
diff --git a/pandas/tests/arithmetic/test_period.py b/pandas/tests/arithmetic/test_period.py
index 184e76cfa490f..d2d725b6dc595 100644
--- a/pandas/tests/arithmetic/test_period.py
+++ b/pandas/tests/arithmetic/test_period.py
@@ -509,10 +509,14 @@ def test_pi_sub_offset_array(self, box):
def test_pi_add_iadd_int(self, one):
# Variants of `one` for #19012
rng = pd.period_range('2000-01-01 09:00', freq='H', periods=10)
- result = rng + one
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False,
+ clear=[pd.core.arrays.datetimelike]):
+ result = rng + one
expected = pd.period_range('2000-01-01 10:00', freq='H', periods=10)
tm.assert_index_equal(result, expected)
- rng += one
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False,
+ clear=[pd.core.arrays.datetimelike]):
+ rng += one
tm.assert_index_equal(rng, expected)
def test_pi_sub_isub_int(self, one):
@@ -521,18 +525,24 @@ def test_pi_sub_isub_int(self, one):
the integer 1, e.g. int, long, np.int64, np.uint8, ...
"""
rng = pd.period_range('2000-01-01 09:00', freq='H', periods=10)
- result = rng - one
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False,
+ clear=[pd.core.arrays.datetimelike]):
+ result = rng - one
expected = pd.period_range('2000-01-01 08:00', freq='H', periods=10)
tm.assert_index_equal(result, expected)
- rng -= one
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False,
+ clear=[pd.core.arrays.datetimelike]):
+ rng -= one
tm.assert_index_equal(rng, expected)
@pytest.mark.parametrize('five', [5, np.array(5, dtype=np.int64)])
def test_pi_sub_intlike(self, five):
rng = period_range('2007-01', periods=50)
- result = rng - five
- exp = rng + (-five)
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False,
+ clear=[pd.core.arrays.datetimelike]):
+ result = rng - five
+ exp = rng + (-five)
tm.assert_index_equal(result, exp)
def test_pi_sub_isub_offset(self):
@@ -594,7 +604,9 @@ def test_pi_add_intarray(self, box, op):
# GH#19959
pi = pd.PeriodIndex([pd.Period('2015Q1'), pd.Period('NaT')])
other = box([4, -1])
- result = op(pi, other)
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False,
+ clear=[pd.core.arrays.datetimelike]):
+ result = op(pi, other)
expected = pd.PeriodIndex([pd.Period('2016Q1'), pd.Period('NaT')])
tm.assert_index_equal(result, expected)
@@ -603,12 +615,16 @@ def test_pi_sub_intarray(self, box):
# GH#19959
pi = pd.PeriodIndex([pd.Period('2015Q1'), pd.Period('NaT')])
other = box([4, -1])
- result = pi - other
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False,
+ clear=[pd.core.arrays.datetimelike]):
+ result = pi - other
expected = pd.PeriodIndex([pd.Period('2014Q1'), pd.Period('NaT')])
tm.assert_index_equal(result, expected)
with pytest.raises(TypeError):
- other - pi
+ with tm.assert_produces_warning(FutureWarning,
+ check_stacklevel=False):
+ other - pi
# ---------------------------------------------------------------
# Timedelta-like (timedelta, timedelta64, Timedelta, Tick)
@@ -850,10 +866,13 @@ def test_pi_ops(self):
expected = PeriodIndex(['2011-03', '2011-04', '2011-05', '2011-06'],
freq='M', name='idx')
- self._check(idx, lambda x: x + 2, expected)
- self._check(idx, lambda x: 2 + x, expected)
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False,
+ clear=[pd.core.arrays.datetimelike]):
+ self._check(idx, lambda x: x + 2, expected)
+ self._check(idx, lambda x: 2 + x, expected)
+
+ self._check(idx + 2, lambda x: x - 2, idx)
- self._check(idx + 2, lambda x: x - 2, idx)
result = idx - Period('2011-01', freq='M')
off = idx.freq
exp = pd.Index([0 * off, 1 * off, 2 * off, 3 * off], name='idx')
@@ -870,7 +889,6 @@ def test_pi_ops_errors(self, ng, box):
obj = tm.box_expected(idx, box)
msg = r"unsupported operand type\(s\)"
-
with tm.assert_raises_regex(TypeError, msg):
obj + ng
@@ -898,47 +916,53 @@ def test_pi_ops_nat(self):
freq='M', name='idx')
expected = PeriodIndex(['2011-03', '2011-04', 'NaT', '2011-06'],
freq='M', name='idx')
- self._check(idx, lambda x: x + 2, expected)
- self._check(idx, lambda x: 2 + x, expected)
- self._check(idx, lambda x: np.add(x, 2), expected)
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False,
+ clear=[pd.core.arrays.datetimelike]):
+ self._check(idx, lambda x: x + 2, expected)
+ self._check(idx, lambda x: 2 + x, expected)
+ self._check(idx, lambda x: np.add(x, 2), expected)
- self._check(idx + 2, lambda x: x - 2, idx)
- self._check(idx + 2, lambda x: np.subtract(x, 2), idx)
+ self._check(idx + 2, lambda x: x - 2, idx)
+ self._check(idx + 2, lambda x: np.subtract(x, 2), idx)
# freq with mult
idx = PeriodIndex(['2011-01', '2011-02', 'NaT', '2011-04'],
freq='2M', name='idx')
expected = PeriodIndex(['2011-07', '2011-08', 'NaT', '2011-10'],
freq='2M', name='idx')
- self._check(idx, lambda x: x + 3, expected)
- self._check(idx, lambda x: 3 + x, expected)
- self._check(idx, lambda x: np.add(x, 3), expected)
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False,
+ clear=[pd.core.arrays.datetimelike]):
+ self._check(idx, lambda x: x + 3, expected)
+ self._check(idx, lambda x: 3 + x, expected)
+ self._check(idx, lambda x: np.add(x, 3), expected)
- self._check(idx + 3, lambda x: x - 3, idx)
- self._check(idx + 3, lambda x: np.subtract(x, 3), idx)
+ self._check(idx + 3, lambda x: x - 3, idx)
+ self._check(idx + 3, lambda x: np.subtract(x, 3), idx)
def test_pi_ops_array_int(self):
- idx = PeriodIndex(['2011-01', '2011-02', 'NaT', '2011-04'],
- freq='M', name='idx')
- f = lambda x: x + np.array([1, 2, 3, 4])
- exp = PeriodIndex(['2011-02', '2011-04', 'NaT', '2011-08'],
- freq='M', name='idx')
- self._check(idx, f, exp)
-
- f = lambda x: np.add(x, np.array([4, -1, 1, 2]))
- exp = PeriodIndex(['2011-05', '2011-01', 'NaT', '2011-06'],
- freq='M', name='idx')
- self._check(idx, f, exp)
-
- f = lambda x: x - np.array([1, 2, 3, 4])
- exp = PeriodIndex(['2010-12', '2010-12', 'NaT', '2010-12'],
- freq='M', name='idx')
- self._check(idx, f, exp)
-
- f = lambda x: np.subtract(x, np.array([3, 2, 3, -2]))
- exp = PeriodIndex(['2010-10', '2010-12', 'NaT', '2011-06'],
- freq='M', name='idx')
- self._check(idx, f, exp)
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False,
+ clear=[pd.core.arrays.datetimelike]):
+ idx = PeriodIndex(['2011-01', '2011-02', 'NaT', '2011-04'],
+ freq='M', name='idx')
+ f = lambda x: x + np.array([1, 2, 3, 4])
+ exp = PeriodIndex(['2011-02', '2011-04', 'NaT', '2011-08'],
+ freq='M', name='idx')
+ self._check(idx, f, exp)
+
+ f = lambda x: np.add(x, np.array([4, -1, 1, 2]))
+ exp = PeriodIndex(['2011-05', '2011-01', 'NaT', '2011-06'],
+ freq='M', name='idx')
+ self._check(idx, f, exp)
+
+ f = lambda x: x - np.array([1, 2, 3, 4])
+ exp = PeriodIndex(['2010-12', '2010-12', 'NaT', '2010-12'],
+ freq='M', name='idx')
+ self._check(idx, f, exp)
+
+ f = lambda x: np.subtract(x, np.array([3, 2, 3, -2]))
+ exp = PeriodIndex(['2010-10', '2010-12', 'NaT', '2011-06'],
+ freq='M', name='idx')
+ self._check(idx, f, exp)
def test_pi_ops_offset(self):
idx = PeriodIndex(['2011-01-01', '2011-02-01', '2011-03-01',
diff --git a/pandas/tests/frame/test_timeseries.py b/pandas/tests/frame/test_timeseries.py
index 40089c8e9e477..eecbdc0130f02 100644
--- a/pandas/tests/frame/test_timeseries.py
+++ b/pandas/tests/frame/test_timeseries.py
@@ -423,8 +423,8 @@ def test_truncate(self):
assert_frame_equal(truncated, expected)
pytest.raises(ValueError, ts.truncate,
- before=ts.index[-1] - 1,
- after=ts.index[0] + 1)
+ before=ts.index[-1] - ts.index.freq,
+ after=ts.index[0] + ts.index.freq)
def test_truncate_copy(self):
index = self.tsframe.index
diff --git a/pandas/tests/indexes/datetimelike.py b/pandas/tests/indexes/datetimelike.py
index bb51b47a7fd0a..b798ac34255f1 100644
--- a/pandas/tests/indexes/datetimelike.py
+++ b/pandas/tests/indexes/datetimelike.py
@@ -61,9 +61,8 @@ def test_view(self, indices):
tm.assert_index_equal(result, i_view)
def test_map_callable(self):
-
- expected = self.index + 1
- result = self.index.map(lambda x: x + 1)
+ expected = self.index + self.index.freq
+ result = self.index.map(lambda x: x + x.freq)
tm.assert_index_equal(result, expected)
# map to NaT
@@ -77,7 +76,7 @@ def test_map_callable(self):
lambda values, index: {i: e for e, i in zip(values, index)},
lambda values, index: pd.Series(values, index)])
def test_map_dictlike(self, mapper):
- expected = self.index + 1
+ expected = self.index + self.index.freq
# don't compare the freqs
if isinstance(expected, pd.DatetimeIndex):
diff --git a/pandas/tests/indexes/datetimes/test_arithmetic.py b/pandas/tests/indexes/datetimes/test_arithmetic.py
index de51120baeb58..1b75d6bd34764 100644
--- a/pandas/tests/indexes/datetimes/test_arithmetic.py
+++ b/pandas/tests/indexes/datetimes/test_arithmetic.py
@@ -58,11 +58,17 @@ def test_dti_shift_freqs(self):
def test_dti_shift_int(self):
rng = date_range('1/1/2000', periods=20)
- result = rng + 5
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ # GH#22535
+ result = rng + 5
+
expected = rng.shift(5)
tm.assert_index_equal(result, expected)
- result = rng - 5
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ # GH#22535
+ result = rng - 5
+
expected = rng.shift(-5)
tm.assert_index_equal(result, expected)
diff --git a/pandas/tests/indexes/period/test_period.py b/pandas/tests/indexes/period/test_period.py
index 8b2e91450c8c0..8d10cb8e42a94 100644
--- a/pandas/tests/indexes/period/test_period.py
+++ b/pandas/tests/indexes/period/test_period.py
@@ -338,8 +338,10 @@ def test_is_(self):
assert not index.is_(index[:])
assert not index.is_(index.asfreq('M'))
assert not index.is_(index.asfreq('A'))
- assert not index.is_(index - 2)
- assert not index.is_(index - 0)
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ # GH#22535
+ assert not index.is_(index - 2)
+ assert not index.is_(index - 0)
def test_contains(self):
rng = period_range('2007-01', freq='M', periods=10)
diff --git a/pandas/tests/indexes/timedeltas/test_arithmetic.py b/pandas/tests/indexes/timedeltas/test_arithmetic.py
index 82654a3533132..a03698c9ea0de 100644
--- a/pandas/tests/indexes/timedeltas/test_arithmetic.py
+++ b/pandas/tests/indexes/timedeltas/test_arithmetic.py
@@ -130,26 +130,34 @@ def test_ufunc_coercions(self):
def test_tdi_add_int(self, one):
# Variants of `one` for #19012
rng = timedelta_range('1 days 09:00:00', freq='H', periods=10)
- result = rng + one
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ # GH#22535
+ result = rng + one
expected = timedelta_range('1 days 10:00:00', freq='H', periods=10)
tm.assert_index_equal(result, expected)
def test_tdi_iadd_int(self, one):
rng = timedelta_range('1 days 09:00:00', freq='H', periods=10)
expected = timedelta_range('1 days 10:00:00', freq='H', periods=10)
- rng += one
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ # GH#22535
+ rng += one
tm.assert_index_equal(rng, expected)
def test_tdi_sub_int(self, one):
rng = timedelta_range('1 days 09:00:00', freq='H', periods=10)
- result = rng - one
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ # GH#22535
+ result = rng - one
expected = timedelta_range('1 days 08:00:00', freq='H', periods=10)
tm.assert_index_equal(result, expected)
def test_tdi_isub_int(self, one):
rng = timedelta_range('1 days 09:00:00', freq='H', periods=10)
expected = timedelta_range('1 days 08:00:00', freq='H', periods=10)
- rng -= one
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ # GH#22535
+ rng -= one
tm.assert_index_equal(rng, expected)
# -------------------------------------------------------------
@@ -161,10 +169,15 @@ def test_tdi_add_integer_array(self, box):
rng = timedelta_range('1 days 09:00:00', freq='H', periods=3)
other = box([4, 3, 2])
expected = TimedeltaIndex(['1 day 13:00:00'] * 3)
- result = rng + other
- tm.assert_index_equal(result, expected)
- result = other + rng
- tm.assert_index_equal(result, expected)
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ # GH#22535
+ result = rng + other
+ tm.assert_index_equal(result, expected)
+
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ # GH#22535
+ result = other + rng
+ tm.assert_index_equal(result, expected)
@pytest.mark.parametrize('box', [np.array, pd.Index])
def test_tdi_sub_integer_array(self, box):
@@ -172,10 +185,15 @@ def test_tdi_sub_integer_array(self, box):
rng = timedelta_range('9H', freq='H', periods=3)
other = box([4, 3, 2])
expected = TimedeltaIndex(['5H', '7H', '9H'])
- result = rng - other
- tm.assert_index_equal(result, expected)
- result = other - rng
- tm.assert_index_equal(result, -expected)
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ # GH#22535
+ result = rng - other
+ tm.assert_index_equal(result, expected)
+
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ # GH#22535
+ result = other - rng
+ tm.assert_index_equal(result, -expected)
@pytest.mark.parametrize('box', [np.array, pd.Index])
def test_tdi_addsub_integer_array_no_freq(self, box):
@@ -522,12 +540,12 @@ def test_timedelta_ops_with_missing_values(self):
def test_tdi_ops_attributes(self):
rng = timedelta_range('2 days', periods=5, freq='2D', name='x')
- result = rng + 1
+ result = rng + 1 * rng.freq
exp = timedelta_range('4 days', periods=5, freq='2D', name='x')
tm.assert_index_equal(result, exp)
assert result.freq == '2D'
- result = rng - 2
+ result = rng - 2 * rng.freq
exp = timedelta_range('-2 days', periods=5, freq='2D', name='x')
tm.assert_index_equal(result, exp)
assert result.freq == '2D'
diff --git a/pandas/tests/indexing/test_partial.py b/pandas/tests/indexing/test_partial.py
index 0d596e713fcfc..f27b556366d88 100644
--- a/pandas/tests/indexing/test_partial.py
+++ b/pandas/tests/indexing/test_partial.py
@@ -159,23 +159,24 @@ def f():
columns=['A', 'B', 'C', 'D'])
expected = pd.concat([df_orig,
- DataFrame({'A': 7}, index=[dates[-1] + 1])],
+ DataFrame({'A': 7},
+ index=[dates[-1] + dates.freq])],
sort=True)
df = df_orig.copy()
- df.loc[dates[-1] + 1, 'A'] = 7
+ df.loc[dates[-1] + dates.freq, 'A'] = 7
tm.assert_frame_equal(df, expected)
df = df_orig.copy()
- df.at[dates[-1] + 1, 'A'] = 7
+ df.at[dates[-1] + dates.freq, 'A'] = 7
tm.assert_frame_equal(df, expected)
- exp_other = DataFrame({0: 7}, index=[dates[-1] + 1])
+ exp_other = DataFrame({0: 7}, index=[dates[-1] + dates.freq])
expected = pd.concat([df_orig, exp_other], axis=1)
df = df_orig.copy()
- df.loc[dates[-1] + 1, 0] = 7
+ df.loc[dates[-1] + dates.freq, 0] = 7
tm.assert_frame_equal(df, expected)
df = df_orig.copy()
- df.at[dates[-1] + 1, 0] = 7
+ df.at[dates[-1] + dates.freq, 0] = 7
tm.assert_frame_equal(df, expected)
def test_partial_setting_mixed_dtype(self):
diff --git a/pandas/tests/scalar/period/test_asfreq.py b/pandas/tests/scalar/period/test_asfreq.py
index 2e3867db65604..432d55ef5967a 100644
--- a/pandas/tests/scalar/period/test_asfreq.py
+++ b/pandas/tests/scalar/period/test_asfreq.py
@@ -16,15 +16,17 @@ def test_asfreq_near_zero(self, freq):
per = Period('0001-01-01', freq=freq)
tup1 = (per.year, per.hour, per.day)
- prev = per - 1
- assert (per - 1).ordinal == per.ordinal - 1
+ with tm.assert_produces_warning(FutureWarning):
+ prev = per - 1
+ assert prev.ordinal == per.ordinal - 1
tup2 = (prev.year, prev.month, prev.day)
assert tup2 < tup1
def test_asfreq_near_zero_weekly(self):
# GH#19834
- per1 = Period('0001-01-01', 'D') + 6
- per2 = Period('0001-01-01', 'D') - 6
+ with tm.assert_produces_warning(FutureWarning):
+ per1 = Period('0001-01-01', 'D') + 6
+ per2 = Period('0001-01-01', 'D') - 6
week1 = per1.asfreq('W')
week2 = per2.asfreq('W')
assert week1 != week2
diff --git a/pandas/tests/scalar/period/test_period.py b/pandas/tests/scalar/period/test_period.py
index e360500d443ea..7171b15acbfa1 100644
--- a/pandas/tests/scalar/period/test_period.py
+++ b/pandas/tests/scalar/period/test_period.py
@@ -74,7 +74,9 @@ def test_period_cons_annual(self, month):
exp = Period('1989', freq=freq)
stamp = exp.to_timestamp('D', how='end') + timedelta(days=30)
p = Period(stamp, freq=freq)
- assert p == exp + 1
+
+ with tm.assert_produces_warning(FutureWarning):
+ assert p == exp + 1
assert isinstance(p, Period)
@pytest.mark.parametrize('day', DAYS)
@@ -127,13 +129,16 @@ def test_period_cons_mult(self):
assert p2.freq == offsets.MonthEnd()
assert p2.freqstr == 'M'
- result = p1 + 1
- assert result.ordinal == (p2 + 3).ordinal
+ with tm.assert_produces_warning(FutureWarning):
+ result = p1 + 1
+ assert result.ordinal == (p2 + 3).ordinal
+
assert result.freq == p1.freq
assert result.freqstr == '3M'
- result = p1 - 1
- assert result.ordinal == (p2 - 3).ordinal
+ with tm.assert_produces_warning(FutureWarning):
+ result = p1 - 1
+ assert result.ordinal == (p2 - 3).ordinal
assert result.freq == p1.freq
assert result.freqstr == '3M'
@@ -167,23 +172,27 @@ def test_period_cons_combined(self):
assert p3.freq == offsets.Hour()
assert p3.freqstr == 'H'
- result = p1 + 1
- assert result.ordinal == (p3 + 25).ordinal
+ with tm.assert_produces_warning(FutureWarning):
+ result = p1 + 1
+ assert result.ordinal == (p3 + 25).ordinal
assert result.freq == p1.freq
assert result.freqstr == '25H'
- result = p2 + 1
- assert result.ordinal == (p3 + 25).ordinal
+ with tm.assert_produces_warning(FutureWarning):
+ result = p2 + 1
+ assert result.ordinal == (p3 + 25).ordinal
assert result.freq == p2.freq
assert result.freqstr == '25H'
- result = p1 - 1
- assert result.ordinal == (p3 - 25).ordinal
+ with tm.assert_produces_warning(FutureWarning):
+ result = p1 - 1
+ assert result.ordinal == (p3 - 25).ordinal
assert result.freq == p1.freq
assert result.freqstr == '25H'
- result = p2 - 1
- assert result.ordinal == (p3 - 25).ordinal
+ with tm.assert_produces_warning(FutureWarning):
+ result = p2 - 1
+ assert result.ordinal == (p3 - 25).ordinal
assert result.freq == p2.freq
assert result.freqstr == '25H'
@@ -598,7 +607,7 @@ def test_to_timestamp(self):
from_lst = ['A', 'Q', 'M', 'W', 'B', 'D', 'H', 'Min', 'S']
def _ex(p):
- return Timestamp((p + 1).start_time.value - 1)
+ return Timestamp((p + p.freq).start_time.value - 1)
for i, fcode in enumerate(from_lst):
p = Period('1982', freq=fcode)
@@ -717,14 +726,16 @@ def test_properties_quarterly(self):
#
for x in range(3):
for qd in (qedec_date, qejan_date, qejun_date):
- assert (qd + x).qyear == 2007
- assert (qd + x).quarter == x + 1
+ with tm.assert_produces_warning(FutureWarning):
+ assert (qd + x).qyear == 2007
+ assert (qd + x).quarter == x + 1
def test_properties_monthly(self):
# Test properties on Periods with daily frequency.
m_date = Period(freq='M', year=2007, month=1)
for x in range(11):
- m_ival_x = m_date + x
+ with tm.assert_produces_warning(FutureWarning):
+ m_ival_x = m_date + x
assert m_ival_x.year == 2007
if 1 <= x + 1 <= 3:
assert m_ival_x.quarter == 1
@@ -744,7 +755,8 @@ def test_properties_weekly(self):
assert w_date.quarter == 1
assert w_date.month == 1
assert w_date.week == 1
- assert (w_date - 1).week == 52
+ with tm.assert_produces_warning(FutureWarning):
+ assert (w_date - 1).week == 52
assert w_date.days_in_month == 31
assert Period(freq='W', year=2012,
month=2, day=1).days_in_month == 29
@@ -756,7 +768,8 @@ def test_properties_weekly_legacy(self):
assert w_date.quarter == 1
assert w_date.month == 1
assert w_date.week == 1
- assert (w_date - 1).week == 52
+ with tm.assert_produces_warning(FutureWarning):
+ assert (w_date - 1).week == 52
assert w_date.days_in_month == 31
exp = Period(freq='W', year=2012, month=2, day=1)
@@ -897,10 +910,11 @@ def test_multiples(self):
assert result1.freq == offsets.YearEnd(2)
assert result2.freq == offsets.YearEnd()
- assert (result1 + 1).ordinal == result1.ordinal + 2
- assert (1 + result1).ordinal == result1.ordinal + 2
- assert (result1 - 1).ordinal == result2.ordinal - 2
- assert (-1 + result1).ordinal == result2.ordinal - 2
+ with tm.assert_produces_warning(FutureWarning):
+ assert (result1 + 1).ordinal == result1.ordinal + 2
+ assert (1 + result1).ordinal == result1.ordinal + 2
+ assert (result1 - 1).ordinal == result2.ordinal - 2
+ assert (-1 + result1).ordinal == result2.ordinal - 2
def test_round_trip(self):
@@ -1006,8 +1020,9 @@ class TestMethods(object):
def test_add(self):
dt1 = Period(freq='D', year=2008, month=1, day=1)
dt2 = Period(freq='D', year=2008, month=1, day=2)
- assert dt1 + 1 == dt2
- assert 1 + dt1 == dt2
+ with tm.assert_produces_warning(FutureWarning):
+ assert dt1 + 1 == dt2
+ assert 1 + dt1 == dt2
def test_add_pdnat(self):
p = pd.Period('2011-01', freq='M')
diff --git a/pandas/tests/scalar/timestamp/test_arithmetic.py b/pandas/tests/scalar/timestamp/test_arithmetic.py
index 8f4809c93e28b..0f8ddc53734f6 100644
--- a/pandas/tests/scalar/timestamp/test_arithmetic.py
+++ b/pandas/tests/scalar/timestamp/test_arithmetic.py
@@ -4,6 +4,7 @@
import pytest
import numpy as np
+import pandas.util.testing as tm
from pandas.compat import long
from pandas.tseries import offsets
from pandas import Timestamp, Timedelta
@@ -46,8 +47,10 @@ def test_addition_subtraction_types(self):
# addition/subtraction of integers
ts = Timestamp(dt, freq='D')
- assert type(ts + 1) == Timestamp
- assert type(ts - 1) == Timestamp
+ with tm.assert_produces_warning(FutureWarning):
+ # GH#22535 add/sub with integers is deprecated
+ assert type(ts + 1) == Timestamp
+ assert type(ts - 1) == Timestamp
# Timestamp + datetime not supported, though subtraction is supported
# and yields timedelta more tests in tseries/base/tests/test_base.py
@@ -66,8 +69,11 @@ def test_addition_subtraction_preserve_frequency(self):
td = timedelta(days=1)
original_freq = ts.freq
- assert (ts + 1).freq == original_freq
- assert (ts - 1).freq == original_freq
+ with tm.assert_produces_warning(FutureWarning):
+ # GH#22535 add/sub with integers is deprecated
+ assert (ts + 1).freq == original_freq
+ assert (ts - 1).freq == original_freq
+
assert (ts + td).freq == original_freq
assert (ts - td).freq == original_freq
diff --git a/pandas/tests/test_resample.py b/pandas/tests/test_resample.py
index 5cd31e08e0a9b..69a0613c95475 100644
--- a/pandas/tests/test_resample.py
+++ b/pandas/tests/test_resample.py
@@ -2245,7 +2245,7 @@ def test_asfreq(self, series_and_frame, freq, kind):
expected = obj.to_timestamp().resample(freq).asfreq()
else:
start = obj.index[0].to_timestamp(how='start')
- end = (obj.index[-1] + 1).to_timestamp(how='start')
+ end = (obj.index[-1] + obj.index.freq).to_timestamp(how='start')
new_index = date_range(start=start, end=end, freq=freq,
closed='left')
expected = obj.to_timestamp().reindex(new_index).to_period(freq)
@@ -2467,7 +2467,8 @@ def test_with_local_timezone_pytz(self):
# Create the expected series
# Index is moved back a day with the timezone conversion from UTC to
# Pacific
- expected_index = (pd.period_range(start=start, end=end, freq='D') - 1)
+ expected_index = (pd.period_range(start=start, end=end, freq='D') -
+ offsets.Day())
expected = Series(1, index=expected_index)
assert_series_equal(result, expected)
@@ -2503,7 +2504,7 @@ def test_with_local_timezone_dateutil(self):
# Index is moved back a day with the timezone conversion from UTC to
# Pacific
expected_index = (pd.period_range(start=start, end=end, freq='D',
- name='idx') - 1)
+ name='idx') - offsets.Day())
expected = Series(1, index=expected_index)
assert_series_equal(result, expected)
diff --git a/pandas/tests/tseries/offsets/test_offsets.py b/pandas/tests/tseries/offsets/test_offsets.py
index a0cff6f74b979..cbd3e0903b713 100644
--- a/pandas/tests/tseries/offsets/test_offsets.py
+++ b/pandas/tests/tseries/offsets/test_offsets.py
@@ -2403,7 +2403,11 @@ def test_offset_whole_year(self):
# ensure .apply_index works as expected
s = DatetimeIndex(dates[:-1])
- result = SemiMonthEnd().apply_index(s)
+ with tm.assert_produces_warning(None):
+ # GH#22535 check that we don't get a FutureWarning from adding
+ # an integer array to PeriodIndex
+ result = SemiMonthEnd().apply_index(s)
+
exp = DatetimeIndex(dates[1:])
tm.assert_index_equal(result, exp)
@@ -2499,7 +2503,11 @@ def test_offset(self, case):
def test_apply_index(self, case):
offset, cases = case
s = DatetimeIndex(cases.keys())
- result = offset.apply_index(s)
+ with tm.assert_produces_warning(None):
+ # GH#22535 check that we don't get a FutureWarning from adding
+ # an integer array to PeriodIndex
+ result = offset.apply_index(s)
+
exp = DatetimeIndex(cases.values())
tm.assert_index_equal(result, exp)
@@ -2519,8 +2527,12 @@ def test_vectorized_offset_addition(self, klass):
s = klass([Timestamp('2000-01-15 00:15:00', tz='US/Central'),
Timestamp('2000-02-15', tz='US/Central')], name='a')
- result = s + SemiMonthEnd()
- result2 = SemiMonthEnd() + s
+ with tm.assert_produces_warning(None):
+ # GH#22535 check that we don't get a FutureWarning from adding
+ # an integer array to PeriodIndex
+ result = s + SemiMonthEnd()
+ result2 = SemiMonthEnd() + s
+
exp = klass([Timestamp('2000-01-31 00:15:00', tz='US/Central'),
Timestamp('2000-02-29', tz='US/Central')], name='a')
tm.assert_equal(result, exp)
@@ -2528,8 +2540,13 @@ def test_vectorized_offset_addition(self, klass):
s = klass([Timestamp('2000-01-01 00:15:00', tz='US/Central'),
Timestamp('2000-02-01', tz='US/Central')], name='a')
- result = s + SemiMonthEnd()
- result2 = SemiMonthEnd() + s
+
+ with tm.assert_produces_warning(None):
+ # GH#22535 check that we don't get a FutureWarning from adding
+ # an integer array to PeriodIndex
+ result = s + SemiMonthEnd()
+ result2 = SemiMonthEnd() + s
+
exp = klass([Timestamp('2000-01-15 00:15:00', tz='US/Central'),
Timestamp('2000-02-15', tz='US/Central')], name='a')
tm.assert_equal(result, exp)
@@ -2573,7 +2590,11 @@ def test_offset_whole_year(self):
# ensure .apply_index works as expected
s = DatetimeIndex(dates[:-1])
- result = SemiMonthBegin().apply_index(s)
+ with tm.assert_produces_warning(None):
+ # GH#22535 check that we don't get a FutureWarning from adding
+ # an integer array to PeriodIndex
+ result = SemiMonthBegin().apply_index(s)
+
exp = DatetimeIndex(dates[1:])
tm.assert_index_equal(result, exp)
@@ -2673,7 +2694,12 @@ def test_offset(self, case):
def test_apply_index(self, case):
offset, cases = case
s = DatetimeIndex(cases.keys())
- result = offset.apply_index(s)
+
+ with tm.assert_produces_warning(None):
+ # GH#22535 check that we don't get a FutureWarning from adding
+ # an integer array to PeriodIndex
+ result = offset.apply_index(s)
+
exp = DatetimeIndex(cases.values())
tm.assert_index_equal(result, exp)
@@ -2692,8 +2718,12 @@ def test_onOffset(self, case):
def test_vectorized_offset_addition(self, klass):
s = klass([Timestamp('2000-01-15 00:15:00', tz='US/Central'),
Timestamp('2000-02-15', tz='US/Central')], name='a')
- result = s + SemiMonthBegin()
- result2 = SemiMonthBegin() + s
+ with tm.assert_produces_warning(None):
+ # GH#22535 check that we don't get a FutureWarning from adding
+ # an integer array to PeriodIndex
+ result = s + SemiMonthBegin()
+ result2 = SemiMonthBegin() + s
+
exp = klass([Timestamp('2000-02-01 00:15:00', tz='US/Central'),
Timestamp('2000-03-01', tz='US/Central')], name='a')
tm.assert_equal(result, exp)
@@ -2701,8 +2731,12 @@ def test_vectorized_offset_addition(self, klass):
s = klass([Timestamp('2000-01-01 00:15:00', tz='US/Central'),
Timestamp('2000-02-01', tz='US/Central')], name='a')
- result = s + SemiMonthBegin()
- result2 = SemiMonthBegin() + s
+ with tm.assert_produces_warning(None):
+ # GH#22535 check that we don't get a FutureWarning from adding
+ # an integer array to PeriodIndex
+ result = s + SemiMonthBegin()
+ result2 = SemiMonthBegin() + s
+
exp = klass([Timestamp('2000-01-15 00:15:00', tz='US/Central'),
Timestamp('2000-02-15', tz='US/Central')], name='a')
tm.assert_equal(result, exp)
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
index 650c4d5b21d7f..6fb562e301ac2 100644
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -288,8 +288,11 @@ def apply_index(self, i):
weeks = (kwds.get('weeks', 0)) * self.n
if weeks:
- i = (i.to_period('W') + weeks).to_timestamp() + \
- i.to_perioddelta('W')
+ # integer addition on PeriodIndex is deprecated,
+ # so we directly use _time_shift instead
+ asper = i.to_period('W')
+ shifted = asper._data._time_shift(weeks)
+ i = shifted.to_timestamp() + i.to_perioddelta('W')
timedelta_kwds = {k: v for k, v in kwds.items()
if k in ['days', 'hours', 'minutes',
@@ -536,13 +539,21 @@ def apply_index(self, i):
time = i.to_perioddelta('D')
# to_period rolls forward to next BDay; track and
# reduce n where it does when rolling forward
- shifted = (i.to_perioddelta('B') - time).asi8 != 0
+ asper = i.to_period('B')
if self.n > 0:
+ shifted = (i.to_perioddelta('B') - time).asi8 != 0
+
+ # Integer-array addition is deprecated, so we use
+ # _time_shift directly
roll = np.where(shifted, self.n - 1, self.n)
+ shifted = asper._data._addsub_int_array(roll, operator.add)
else:
+ # Integer addition is deprecated, so we use _time_shift directly
roll = self.n
+ shifted = asper._data._time_shift(roll)
- return (i.to_period('B') + roll).to_timestamp() + time
+ result = shifted.to_timestamp() + time
+ return result
def onOffset(self, dt):
if self.normalize and not _is_normalized(dt):
@@ -1091,6 +1102,7 @@ def _apply(self, n, other):
@apply_index_wraps
def apply_index(self, i):
# determine how many days away from the 1st of the month we are
+ dti = i
days_from_start = i.to_perioddelta('M').asi8
delta = Timedelta(days=self.day_of_month - 1).value
@@ -1107,7 +1119,12 @@ def apply_index(self, i):
time = i.to_perioddelta('D')
# apply the correct number of months
- i = (i.to_period('M') + (roll // 2)).to_timestamp()
+
+ # integer-array addition on PeriodIndex is deprecated,
+ # so we use _addsub_int_array directly
+ asper = i.to_period('M')
+ shifted = asper._data._addsub_int_array(roll // 2, operator.add)
+ i = type(dti)(shifted.to_timestamp())
# apply the correct day
i = self._apply_index_days(i, roll)
@@ -1288,8 +1305,10 @@ def apply(self, other):
@apply_index_wraps
def apply_index(self, i):
if self.weekday is None:
- return ((i.to_period('W') + self.n).to_timestamp() +
- i.to_perioddelta('W'))
+ # integer addition on PeriodIndex is deprecated,
+ # so we use _time_shift directly
+ shifted = i.to_period('W')._data._time_shift(self.n)
+ return shifted.to_timestamp() + i.to_perioddelta('W')
else:
return self._end_apply_index(i)
@@ -1314,10 +1333,16 @@ def _end_apply_index(self, dtindex):
normed = dtindex - off + Timedelta(1, 'D') - Timedelta(1, 'ns')
roll = np.where(base_period.to_timestamp(how='end') == normed,
self.n, self.n - 1)
+ # integer-array addition on PeriodIndex is deprecated,
+ # so we use _addsub_int_array directly
+ shifted = base_period._data._addsub_int_array(roll, operator.add)
+ base = shifted.to_timestamp(how='end')
else:
+ # integer addition on PeriodIndex is deprecated,
+ # so we use _time_shift directly
roll = self.n
+ base = base_period._data._time_shift(roll).to_timestamp(how='end')
- base = (base_period + roll).to_timestamp(how='end')
return base + off + Timedelta(1, 'ns') - Timedelta(1, 'D')
def onOffset(self, dt):
| Discussed in #21939; these operations cause headaches and internal inconsistencies (e.g. at the moment `Timedelta` is the only one of the related classes that doesn't support integer ops)
- [x] closes #21939
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
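
For reviewers, a minimal sketch of the migration this deprecation implies — offset arithmetic replaces bare integers (`p.freq` stands in for whatever frequency the object carries):

```python
import pandas as pd

p = pd.Period('2016-01', freq='M')

p + 1        # deprecated by this PR: now emits a FutureWarning
p + p.freq   # forward-compatible spelling, same result
```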
| https://api.github.com/repos/pandas-dev/pandas/pulls/22535 | 2018-08-29T18:12:08Z | 2018-11-01T00:50:33Z | 2018-11-01T00:50:33Z | 2018-12-16T15:53:46Z |
CLN: use dispatch_to_series where possible | Several PRs touching DataFrame ops have gone through recently. This does some follow-up cleanup to unify how operations are dispatched across a few different methods. | https://api.github.com/repos/pandas-dev/pandas/pulls/22534 | 2018-08-29T17:42:00Z | 2018-09-08T03:03:12Z | null | 2018-09-08T03:11:06Z
|
TST: Move tests/scripts to scripts/tests | diff --git a/Makefile b/Makefile
index 4a82566cf726e..4a4aca21e1b78 100644
--- a/Makefile
+++ b/Makefile
@@ -13,7 +13,7 @@ build: clean_pyc
python setup.py build_ext --inplace
lint-diff:
- git diff master --name-only -- "*.py" | grep "pandas" | xargs flake8
+ git diff master --name-only -- "*.py" | grep -E "pandas|scripts" | xargs flake8
develop: build
-python setup.py develop
diff --git a/ci/lint.sh b/ci/lint.sh
index c7ea92e6a67e6..533e1d18d8e0e 100755
--- a/ci/lint.sh
+++ b/ci/lint.sh
@@ -24,6 +24,11 @@ if [ "$LINT" ]; then
if [ $? -ne "0" ]; then
RET=1
fi
+
+ flake8 scripts/tests --filename=*.py
+ if [ $? -ne "0" ]; then
+ RET=1
+ fi
echo "Linting *.py DONE"
echo "Linting setup.py"
@@ -175,7 +180,7 @@ if [ "$LINT" ]; then
RET=1
fi
echo "Check for old-style classes DONE"
-
+
echo "Check for backticks incorrectly rendering because of missing spaces"
grep -R --include="*.rst" -E "[a-zA-Z0-9]\`\`?[a-zA-Z0-9]" doc/source/
diff --git a/ci/script_single.sh b/ci/script_single.sh
index 60e2fbb33ee5d..ed12ee35b9151 100755
--- a/ci/script_single.sh
+++ b/ci/script_single.sh
@@ -28,6 +28,8 @@ elif [ "$COVERAGE" ]; then
echo pytest -s -m "single" -r xXs --strict --cov=pandas --cov-report xml:/tmp/cov-single.xml --junitxml=/tmp/single.xml $TEST_ARGS pandas
pytest -s -m "single" -r xXs --strict --cov=pandas --cov-report xml:/tmp/cov-single.xml --junitxml=/tmp/single.xml $TEST_ARGS pandas
+ echo pytest -s -r xXs --strict scripts
+ pytest -s -r xXs --strict scripts
else
echo pytest -m "single" -r xXs --junitxml=/tmp/single.xml --strict $TEST_ARGS pandas
pytest -m "single" -r xXs --junitxml=/tmp/single.xml --strict $TEST_ARGS pandas # TODO: doctest
diff --git a/pandas/tests/scripts/__init__.py b/scripts/tests/__init__.py
similarity index 100%
rename from pandas/tests/scripts/__init__.py
rename to scripts/tests/__init__.py
diff --git a/scripts/tests/conftest.py b/scripts/tests/conftest.py
new file mode 100644
index 0000000000000..f8318b8d402af
--- /dev/null
+++ b/scripts/tests/conftest.py
@@ -0,0 +1,3 @@
+def pytest_addoption(parser):
+ parser.addoption("--strict-data-files", action="store_true",
+ help="Unused. For compat with setup.cfg.")
diff --git a/pandas/tests/scripts/test_validate_docstrings.py b/scripts/tests/test_validate_docstrings.py
similarity index 88%
rename from pandas/tests/scripts/test_validate_docstrings.py
rename to scripts/tests/test_validate_docstrings.py
index 25cb1d7aff649..933d02cc8c627 100644
--- a/pandas/tests/scripts/test_validate_docstrings.py
+++ b/scripts/tests/test_validate_docstrings.py
@@ -1,8 +1,10 @@
-import os
-import sys
-
-import numpy as np
+import string
+import random
import pytest
+import numpy as np
+
+import validate_docstrings
+validate_one = validate_docstrings.validate_one
class GoodDocStrings(object):
@@ -44,7 +46,7 @@ def sample(self):
float
Random number generated.
"""
- return random.random() # noqa: F821
+ return random.random()
def random_letters(self):
"""
@@ -60,9 +62,8 @@ def random_letters(self):
letters : str
String of random letters.
"""
- length = random.randint(1, 10) # noqa: F821
- letters = ''.join(random.choice(string.ascii_lowercase) # noqa: F821
- for i in range(length))
+ length = random.randint(1, 10)
+ letters = "".join(random.sample(string.ascii_lowercase, length))
return length, letters
def sample_values(self):
@@ -78,7 +79,7 @@ def sample_values(self):
Random number generated.
"""
while True:
- yield random.random() # noqa: F821
+ yield random.random()
def head(self):
"""
@@ -491,44 +492,6 @@ def no_punctuation(self):
class TestValidator(object):
- @pytest.fixture(autouse=True, scope="class")
- def import_scripts(self):
- """
- Import the validation scripts from the scripts directory.
-
- Because the scripts directory is above the top level pandas package,
- we need to modify `sys.path` so that Python knows where to find it.
-
- The code below traverses up the file system to find the scripts
- directory, adds the location to `sys.path`, and imports the required
- module into the global namespace before as part of class setup.
-
- During teardown, those changes are reverted.
- """
-
- up = os.path.dirname
- global_validate_one = "validate_one"
- file_dir = up(os.path.abspath(__file__))
-
- script_dir = os.path.join(up(up(up(file_dir))), "scripts")
- sys.path.append(script_dir)
-
- try:
- from validate_docstrings import validate_one
- globals()[global_validate_one] = validate_one
- except ImportError:
- # Remove addition to `sys.path`
- sys.path.pop()
-
- # Import will fail if the pandas installation is not inplace.
- raise pytest.skip("pandas/scripts directory does not exist")
-
- yield
-
- # Teardown.
- sys.path.pop()
- del globals()[global_validate_one]
-
def _import_path(self, klass=None, func=None):
"""
Build the required import path for tests in this module.
@@ -545,27 +508,29 @@ def _import_path(self, klass=None, func=None):
str
Import path of specified object in this module
"""
- base_path = 'pandas.tests.scripts.test_validate_docstrings'
+ base_path = "scripts.tests.test_validate_docstrings"
+
if klass:
- base_path = '.'.join([base_path, klass])
+ base_path = ".".join([base_path, klass])
+
if func:
- base_path = '.'.join([base_path, func])
+ base_path = ".".join([base_path, func])
return base_path
def test_good_class(self):
- assert validate_one(self._import_path( # noqa: F821
+ assert validate_one(self._import_path(
klass='GoodDocStrings')) == 0
@pytest.mark.parametrize("func", [
'plot', 'sample', 'random_letters', 'sample_values', 'head', 'head1',
'contains', 'mode'])
def test_good_functions(self, func):
- assert validate_one(self._import_path( # noqa: F821
+ assert validate_one(self._import_path(
klass='GoodDocStrings', func=func)) == 0
def test_bad_class(self):
- assert validate_one(self._import_path( # noqa: F821
+ assert validate_one(self._import_path(
klass='BadGenericDocStrings')) > 0
@pytest.mark.parametrize("func", [
| Moves the `tests/scripts` directory to `scripts/tests`, as these tests don't really belong in the top-level `pandas` directory (they test the `scripts` directory, not `pandas`). | https://api.github.com/repos/pandas-dev/pandas/pulls/22531 | 2018-08-29T00:59:27Z | 2018-09-01T21:56:36Z | 2018-09-01T21:56:36Z | 2018-09-01T21:57:47Z |
CLN: remove unused np version check | diff --git a/pandas/_libs/reduction.pyx b/pandas/_libs/reduction.pyx
index fe993ecc0cdd7..fdcb41da46303 100644
--- a/pandas/_libs/reduction.pyx
+++ b/pandas/_libs/reduction.pyx
@@ -18,8 +18,6 @@ cnp.import_array()
cimport util
from lib import maybe_convert_objects
-is_numpy_prior_1_6_2 = LooseVersion(np.__version__) < '1.6.2'
-
cdef _get_result_array(object obj, Py_ssize_t size, Py_ssize_t cnt):
diff --git a/pandas/_libs/tslib.pyx b/pandas/_libs/tslib.pyx
index be695eabb24bd..bdd279b19208b 100644
--- a/pandas/_libs/tslib.pyx
+++ b/pandas/_libs/tslib.pyx
@@ -46,8 +46,7 @@ from tslibs.nattype cimport checknull_with_nat, NPY_NAT
from tslibs.offsets cimport to_offset
-from tslibs.timestamps cimport (create_timestamp_from_ts,
- _NS_UPPER_BOUND, _NS_LOWER_BOUND)
+from tslibs.timestamps cimport create_timestamp_from_ts
from tslibs.timestamps import Timestamp
@@ -350,8 +349,8 @@ cpdef array_with_unit_to_datetime(ndarray values, unit, errors='coerce'):
# check the bounds
if not need_to_iterate:
- if ((fvalues < _NS_LOWER_BOUND).any()
- or (fvalues > _NS_UPPER_BOUND).any()):
+ if ((fvalues < Timestamp.min.value).any()
+ or (fvalues > Timestamp.max.value).any()):
raise OutOfBoundsDatetime("cannot convert input with unit "
"'{unit}'".format(unit=unit))
result = (iresult * m).astype('M8[ns]')
diff --git a/pandas/_libs/tslibs/timestamps.pxd b/pandas/_libs/tslibs/timestamps.pxd
index d6b649becc479..a162799828cba 100644
--- a/pandas/_libs/tslibs/timestamps.pxd
+++ b/pandas/_libs/tslibs/timestamps.pxd
@@ -6,5 +6,3 @@ from np_datetime cimport npy_datetimestruct
cdef object create_timestamp_from_ts(int64_t value,
npy_datetimestruct dts,
object tz, object freq)
-
-cdef int64_t _NS_UPPER_BOUND, _NS_LOWER_BOUND
| https://api.github.com/repos/pandas-dev/pandas/pulls/22530 | 2018-08-28T18:21:00Z | 2018-08-29T11:00:54Z | 2018-08-29T11:00:54Z | 2018-08-29T13:40:40Z |
|
TST: fixturize series/test_alter_axes.py | diff --git a/pandas/tests/series/conftest.py b/pandas/tests/series/conftest.py
new file mode 100644
index 0000000000000..80a4e81c443ed
--- /dev/null
+++ b/pandas/tests/series/conftest.py
@@ -0,0 +1,43 @@
+import pytest
+
+import pandas.util.testing as tm
+
+from pandas import Series
+
+
+@pytest.fixture
+def datetime_series():
+ """
+ Fixture for Series of floats with DatetimeIndex
+ """
+ s = tm.makeTimeSeries()
+ s.name = 'ts'
+ return s
+
+
+@pytest.fixture
+def string_series():
+ """
+ Fixture for Series of floats with Index of unique strings
+ """
+ s = tm.makeStringSeries()
+ s.name = 'series'
+ return s
+
+
+@pytest.fixture
+def object_series():
+ """
+ Fixture for Series of dtype datetime64[ns] with Index of unique strings
+ """
+ s = tm.makeObjectSeries()
+ s.name = 'objects'
+ return s
+
+
+@pytest.fixture
+def empty_series():
+ """
+ Fixture for empty Series
+ """
+ return Series([], index=[])
diff --git a/pandas/tests/series/test_alter_axes.py b/pandas/tests/series/test_alter_axes.py
index ed3191cf849c0..c3e4cb8bc3abc 100644
--- a/pandas/tests/series/test_alter_axes.py
+++ b/pandas/tests/series/test_alter_axes.py
@@ -6,44 +6,39 @@
from datetime import datetime
import numpy as np
-import pandas as pd
-from pandas import Index, Series
-from pandas.core.index import MultiIndex, RangeIndex
+from pandas import Series, DataFrame, Index, MultiIndex, RangeIndex
from pandas.compat import lrange, range, zip
-from pandas.util.testing import assert_series_equal, assert_frame_equal
import pandas.util.testing as tm
-from .common import TestData
+class TestSeriesAlterAxes(object):
-class TestSeriesAlterAxes(TestData):
-
- def test_setindex(self):
+ def test_setindex(self, string_series):
# wrong type
- series = self.series.copy()
- pytest.raises(TypeError, setattr, series, 'index', None)
+ pytest.raises(TypeError, setattr, string_series, 'index', None)
# wrong length
- series = self.series.copy()
- pytest.raises(Exception, setattr, series, 'index',
- np.arange(len(series) - 1))
+ pytest.raises(Exception, setattr, string_series, 'index',
+ np.arange(len(string_series) - 1))
# works
- series = self.series.copy()
- series.index = np.arange(len(series))
- assert isinstance(series.index, Index)
+ string_series.index = np.arange(len(string_series))
+ assert isinstance(string_series.index, Index)
+
+ # Renaming
- def test_rename(self):
+ def test_rename(self, datetime_series):
+ ts = datetime_series
renamer = lambda x: x.strftime('%Y%m%d')
- renamed = self.ts.rename(renamer)
- assert renamed.index[0] == renamer(self.ts.index[0])
+ renamed = ts.rename(renamer)
+ assert renamed.index[0] == renamer(ts.index[0])
# dict
- rename_dict = dict(zip(self.ts.index, renamed.index))
- renamed2 = self.ts.rename(rename_dict)
- assert_series_equal(renamed, renamed2)
+ rename_dict = dict(zip(ts.index, renamed.index))
+ renamed2 = ts.rename(rename_dict)
+ tm.assert_series_equal(renamed, renamed2)
# partial dict
s = Series(np.arange(4), index=['a', 'b', 'c', 'd'], dtype='int64')
@@ -105,12 +100,12 @@ def test_set_name(self):
assert s.name is None
assert s is not s2
- def test_rename_inplace(self):
+ def test_rename_inplace(self, datetime_series):
renamer = lambda x: x.strftime('%Y%m%d')
- expected = renamer(self.ts.index[0])
+ expected = renamer(datetime_series.index[0])
- self.ts.rename(renamer, inplace=True)
- assert self.ts.index[0] == expected
+ datetime_series.rename(renamer, inplace=True)
+ assert datetime_series.index[0] == expected
def test_set_index_makes_timeseries(self):
idx = tm.makeDateIndex(10)
@@ -135,7 +130,7 @@ def test_reset_index(self):
s = ser.reset_index(drop=True)
s2 = ser
s2.reset_index(drop=True, inplace=True)
- assert_series_equal(s, s2)
+ tm.assert_series_equal(s, s2)
# level
index = MultiIndex(levels=[['bar'], ['one', 'two', 'three'], [0, 1]],
@@ -150,8 +145,8 @@ def test_reset_index(self):
assert isinstance(rs, Series)
def test_reset_index_level(self):
- df = pd.DataFrame([[1, 2, 3], [4, 5, 6]],
- columns=['A', 'B', 'C'])
+ df = DataFrame([[1, 2, 3], [4, 5, 6]],
+ columns=['A', 'B', 'C'])
for levels in ['A', 'B'], [0, 1]:
# With MultiIndex
@@ -189,19 +184,19 @@ def test_reset_index_level(self):
s.reset_index(level=[0, 1, 2])
# Check that .reset_index([],drop=True) doesn't fail
- result = pd.Series(range(4)).reset_index([], drop=True)
- expected = pd.Series(range(4))
- assert_series_equal(result, expected)
+ result = Series(range(4)).reset_index([], drop=True)
+ expected = Series(range(4))
+ tm.assert_series_equal(result, expected)
def test_reset_index_range(self):
# GH 12071
- s = pd.Series(range(2), name='A', dtype='int64')
+ s = Series(range(2), name='A', dtype='int64')
series_result = s.reset_index()
assert isinstance(series_result.index, RangeIndex)
- series_expected = pd.DataFrame([[0, 0], [1, 1]],
- columns=['index', 'A'],
- index=RangeIndex(stop=2))
- assert_frame_equal(series_result, series_expected)
+ series_expected = DataFrame([[0, 0], [1, 1]],
+ columns=['index', 'A'],
+ index=RangeIndex(stop=2))
+ tm.assert_frame_equal(series_result, series_expected)
def test_reorder_levels(self):
index = MultiIndex(levels=[['bar'], ['one', 'two', 'three'], [0, 1]],
@@ -212,11 +207,11 @@ def test_reorder_levels(self):
# no change, position
result = s.reorder_levels([0, 1, 2])
- assert_series_equal(s, result)
+ tm.assert_series_equal(s, result)
# no change, labels
result = s.reorder_levels(['L0', 'L1', 'L2'])
- assert_series_equal(s, result)
+ tm.assert_series_equal(s, result)
# rotate, position
result = s.reorder_levels([1, 2, 0])
@@ -225,17 +220,16 @@ def test_reorder_levels(self):
[0, 0, 0, 0, 0, 0]],
names=['L1', 'L2', 'L0'])
expected = Series(np.arange(6), index=e_idx)
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
- def test_rename_axis_inplace(self):
+ def test_rename_axis_inplace(self, datetime_series):
# GH 15704
- series = self.ts.copy()
- expected = series.rename_axis('foo')
- result = series.copy()
+ expected = datetime_series.rename_axis('foo')
+ result = datetime_series
no_return = result.rename_axis('foo', inplace=True)
assert no_return is None
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
def test_set_axis_inplace_axes(self, axis_series):
# GH14636
@@ -291,25 +285,25 @@ def test_reset_index_drop_errors(self):
# GH 20925
# KeyError raised for series index when passed level name is missing
- s = pd.Series(range(4))
+ s = Series(range(4))
with tm.assert_raises_regex(KeyError, 'must be same as name'):
s.reset_index('wrong', drop=True)
with tm.assert_raises_regex(KeyError, 'must be same as name'):
s.reset_index('wrong')
# KeyError raised for series when level to be dropped is missing
- s = pd.Series(range(4), index=pd.MultiIndex.from_product([[1, 2]] * 2))
+ s = Series(range(4), index=MultiIndex.from_product([[1, 2]] * 2))
with tm.assert_raises_regex(KeyError, 'not found'):
s.reset_index('wrong', drop=True)
def test_droplevel(self):
# GH20342
- ser = pd.Series([1, 2, 3, 4])
- ser.index = pd.MultiIndex.from_arrays([(1, 2, 3, 4), (5, 6, 7, 8)],
- names=['a', 'b'])
+ ser = Series([1, 2, 3, 4])
+ ser.index = MultiIndex.from_arrays([(1, 2, 3, 4), (5, 6, 7, 8)],
+ names=['a', 'b'])
expected = ser.reset_index('b', drop=True)
result = ser.droplevel('b', axis='index')
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
# test that droplevel raises ValueError on axis != 0
with pytest.raises(ValueError):
ser.droplevel(1, axis='columns')
| - [x] prep for #22225, in the sense that it preempts (and splits off) the test-related changes that have been required on the DataFrame side of that PR (see #22236)
- [x] tests modified / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
In particular, it takes the fixture-like attributes of `tests/series/common.TestData` and turns them into fixtures in `tests/series/conftest.py`, with the eventual goal of replacing all the `TestData` attributes in the Series tests, similar to #22471 (I can open a sister issue to that one for the Series tests). | https://api.github.com/repos/pandas-dev/pandas/pulls/22526 | 2018-08-28T06:23:12Z | 2018-09-05T23:43:36Z | 2018-09-05T23:43:36Z | 2018-09-09T21:22:46Z
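
As a usage sketch for the fixtures introduced above (the test name here is hypothetical), a test now requests the data as an argument instead of reading a `TestData` attribute:

```python
# pytest injects the Series built by tm.makeStringSeries() in conftest.py
def test_string_series_name(string_series):
    assert string_series.name == 'series'
```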
Updated condition to skip for pytables build issue on numpy 1.15 #22098 | diff --git a/pandas/compat/numpy/__init__.py b/pandas/compat/numpy/__init__.py
index cb8ad5e3ea46f..a6f586c7f2638 100644
--- a/pandas/compat/numpy/__init__.py
+++ b/pandas/compat/numpy/__init__.py
@@ -16,6 +16,7 @@
_np_version_under1p14 = _nlv < LooseVersion('1.14')
_np_version_under1p15 = _nlv < LooseVersion('1.15')
+
if _nlv < '1.9':
raise ImportError('this version of pandas is incompatible with '
'numpy < 1.9.0\n'
diff --git a/pandas/tests/io/test_pytables.py b/pandas/tests/io/test_pytables.py
index 99c3c659e9b4d..ddcfcc0842d1a 100644
--- a/pandas/tests/io/test_pytables.py
+++ b/pandas/tests/io/test_pytables.py
@@ -14,7 +14,7 @@
from pandas import (Series, DataFrame, Panel, MultiIndex, Int64Index,
RangeIndex, Categorical, bdate_range,
date_range, timedelta_range, Index, DatetimeIndex,
- isna, compat, concat, Timestamp, _np_version_under1p15)
+ isna, compat, concat, Timestamp)
import pandas.util.testing as tm
import pandas.util._test_decorators as td
@@ -2192,9 +2192,9 @@ def test_unimplemented_dtypes_table_columns(self):
pytest.raises(TypeError, store.append, 'df_unimplemented', df)
@pytest.mark.skipif(
- not _np_version_under1p15,
- reason=("pytables conda build package needs build "
- "with numpy 1.15: gh-22098"))
+ LooseVersion(np.__version__) == LooseVersion('1.15.0'),
+ reason=("Skipping pytables test when numpy version is "
+ "exactly equal to 1.15.0: gh-22098"))
def test_calendar_roundtrip_issue(self):
# 8591
| - [ ] closes #22098
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/22522 | 2018-08-27T14:05:32Z | 2018-08-31T10:09:40Z | 2018-08-31T10:09:40Z | 2018-08-31T10:09:42Z |
TST: Streaming of S3 files | diff --git a/pandas/tests/io/test_s3.py b/pandas/tests/io/test_s3.py
index 7a3062f470ce8..a2c3d17f8754a 100644
--- a/pandas/tests/io/test_s3.py
+++ b/pandas/tests/io/test_s3.py
@@ -1,3 +1,7 @@
+import pytest
+
+from pandas import read_csv
+from pandas.compat import BytesIO
from pandas.io.common import is_s3_url
@@ -6,3 +10,18 @@ class TestS3URL(object):
def test_is_s3_url(self):
assert is_s3_url("s3://pandas/somethingelse.com")
assert not is_s3_url("s4://pandas/somethingelse.com")
+
+
+def test_streaming_s3_objects():
+ # GH17135
+ # botocore gained iteration support in 1.10.47, can now be used in read_*
+ pytest.importorskip('botocore', minversion='1.10.47')
+ from botocore.response import StreamingBody
+
+ data = [
+ b'foo,bar,baz\n1,2,3\n4,5,6\n',
+ b'just,the,header\n',
+ ]
+ for el in data:
+ body = StreamingBody(BytesIO(el), content_length=len(el))
+ read_csv(body)
| Issue #17135 was resolved by an upstream change, this is just a unit test to avoid regressions. | https://api.github.com/repos/pandas-dev/pandas/pulls/22520 | 2018-08-27T13:05:00Z | 2018-08-29T12:26:44Z | 2018-08-29T12:26:44Z | 2018-08-29T12:39:27Z |
API: Make .shift always copy (Fixes #22397) | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index f2ec08c61a6d8..5621476d3d4e7 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -544,6 +544,7 @@ Other API Changes
- :class:`Index` subtraction will attempt to operate element-wise instead of raising ``TypeError`` (:issue:`19369`)
- :class:`pandas.io.formats.style.Styler` supports a ``number-format`` property when using :meth:`~pandas.io.formats.style.Styler.to_excel` (:issue:`22015`)
- :meth:`DataFrame.corr` and :meth:`Series.corr` now raise a ``ValueError`` along with a helpful error message instead of a ``KeyError`` when supplied with an invalid method (:issue:`22298`)
+- :meth:`shift` will now always return a copy, instead of the previous behaviour of returning self when shifting by 0 (:issue:`22397`)
.. _whatsnew_0240.deprecations:
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index eb8821382037d..12e1dd1052e0b 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -548,7 +548,7 @@ def shift(self, n, freq=None):
if n == 0:
# immutable so OK
- return self
+ return self.copy()
if self.freq is None:
raise NullFrequencyError("Cannot shift with no freq")
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 2e5da21f573b0..cdc5b4310bce2 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -8282,7 +8282,7 @@ def mask(self, cond, other=np.nan, inplace=False, axis=None, level=None,
@Appender(_shared_docs['shift'] % _shared_doc_kwargs)
def shift(self, periods=1, freq=None, axis=0):
if periods == 0:
- return self
+ return self.copy()
block_axis = self._get_block_manager_axis(axis)
if freq is None:
diff --git a/pandas/tests/generic/test_series.py b/pandas/tests/generic/test_series.py
index 3393d7704e411..f0c6c969f765a 100644
--- a/pandas/tests/generic/test_series.py
+++ b/pandas/tests/generic/test_series.py
@@ -227,3 +227,22 @@ def test_valid_deprecated(self):
# GH18800
with tm.assert_produces_warning(FutureWarning):
pd.Series([]).valid()
+
+ @pytest.mark.parametrize("s", [
+ Series([np.arange(5)]),
+ pd.date_range('1/1/2011', periods=24, freq='H'),
+ pd.Series(range(5), index=pd.date_range("2017", periods=5))
+ ])
+ @pytest.mark.parametrize("shift_size", [0, 1, 2])
+ def test_shift_always_copy(self, s, shift_size):
+ # GH22397
+ assert s.shift(shift_size) is not s
+
+ @pytest.mark.parametrize("move_by_freq", [
+ pd.Timedelta('1D'),
+ pd.Timedelta('1M'),
+ ])
+ def test_datetime_shift_always_copy(self, move_by_freq):
+ # GH22397
+ s = pd.Series(range(5), index=pd.date_range("2017", periods=5))
+ assert s.shift(freq=move_by_freq) is not s
| - [x] closes #22397
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
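
A minimal sketch of the behavioural change:

```python
import pandas as pd

s = pd.Series([1, 2, 3])
shifted = s.shift(0)

assert shifted is not s   # now always a copy; shift(0) used to return self
shifted.iloc[0] = 99      # mutating the result no longer touches the original
assert s.iloc[0] == 1
```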
| https://api.github.com/repos/pandas-dev/pandas/pulls/22517 | 2018-08-27T02:16:57Z | 2018-09-15T12:25:21Z | 2018-09-15T12:25:21Z | 2018-09-15T17:53:00Z |
DOC: Fix to_latex docstring. | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 85bd6065314f4..1e2c0e17dad53 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2521,69 +2521,107 @@ def to_xarray(self):
coords=coords,
)
- _shared_docs['to_latex'] = r"""
+ def to_latex(self, buf=None, columns=None, col_space=None, header=True,
+ index=True, na_rep='NaN', formatters=None, float_format=None,
+ sparsify=None, index_names=True, bold_rows=False,
+ column_format=None, longtable=None, escape=None,
+ encoding=None, decimal='.', multicolumn=None,
+ multicolumn_format=None, multirow=None):
+ r"""
+ Render an object to a LaTeX tabular environment table.
+
Render an object to a tabular environment table. You can splice
- this into a LaTeX document. Requires \\usepackage{booktabs}.
+ this into a LaTeX document. Requires \usepackage{booktabs}.
.. versionchanged:: 0.20.2
Added to Series
- `to_latex`-specific options:
-
- bold_rows : boolean, default False
- Make the row labels bold in the output
- column_format : str, default None
+ Parameters
+ ----------
+ buf : file descriptor or None
+ Buffer to write to. If None, the output is returned as a string.
+ columns : list of label, optional
+ The subset of columns to write. Writes all columns by default.
+ col_space : int, optional
+ The minimum width of each column.
+ header : bool or list of str, default True
+ Write out the column names. If a list of strings is given,
+ it is assumed to be aliases for the column names.
+ index : bool, default True
+ Write row names (index).
+ na_rep : str, default 'NaN'
+ Missing data representation.
+ formatters : list of functions or dict of {str: function}, optional
+ Formatter functions to apply to columns' elements by position or
+ name. The result of each function must be a unicode string.
+ List must be of length equal to the number of columns.
+ float_format : str, optional
+ Format string for floating point numbers.
+ sparsify : bool, optional
+ Set to False for a DataFrame with a hierarchical index to print
+ every multiindex key at each row. By default, the value will be
+ read from the config module.
+ index_names : bool, default True
+ Prints the names of the indexes.
+ bold_rows : bool, default False
+ Make the row labels bold in the output.
+ column_format : str, optional
The columns format as specified in `LaTeX table format
- <https://en.wikibooks.org/wiki/LaTeX/Tables>`__ e.g 'rcl' for 3
- columns
- longtable : boolean, default will be read from the pandas config module
- Default: False.
- Use a longtable environment instead of tabular. Requires adding
- a \\usepackage{longtable} to your LaTeX preamble.
- escape : boolean, default will be read from the pandas config module
- Default: True.
- When set to False prevents from escaping latex special
+ <https://en.wikibooks.org/wiki/LaTeX/Tables>`__ e.g. 'rcl' for 3
+ columns. By default, 'l' will be used for all columns except
+ columns of numbers, which default to 'r'.
+ longtable : bool, optional
+ By default, the value will be read from the pandas config
+ module. Use a longtable environment instead of tabular. Requires
+ adding a \usepackage{longtable} to your LaTeX preamble.
+ escape : bool, optional
+ By default, the value will be read from the pandas config
+ module. When set to False prevents from escaping latex special
characters in column names.
- encoding : str, default None
+ encoding : str, optional
A string representing the encoding to use in the output file,
defaults to 'ascii' on Python 2 and 'utf-8' on Python 3.
- decimal : string, default '.'
+ decimal : str, default '.'
Character recognized as decimal separator, e.g. ',' in Europe.
-
.. versionadded:: 0.18.0
-
- multicolumn : boolean, default True
+ multicolumn : bool, default True
Use \multicolumn to enhance MultiIndex columns.
The default will be read from the config module.
-
.. versionadded:: 0.20.0
-
multicolumn_format : str, default 'l'
The alignment for multicolumns, similar to `column_format`
The default will be read from the config module.
-
+ .. versionadded:: 0.20.0
+ multirow : bool, default False
+ Use \multirow to enhance MultiIndex rows. Requires adding a
+ \usepackage{multirow} to your LaTeX preamble. Will print
+ centered labels (instead of top-aligned) across the contained
+ rows, separating groups via clines. The default will be read
+ from the pandas config module.
.. versionadded:: 0.20.0
- multirow : boolean, default False
- Use \multirow to enhance MultiIndex rows.
- Requires adding a \\usepackage{multirow} to your LaTeX preamble.
- Will print centered labels (instead of top-aligned)
- across the contained rows, separating groups via clines.
- The default will be read from the pandas config module.
+ Returns
+ -------
+ str or None
+ If buf is None, returns the resulting LateX format as a
+ string. Otherwise returns None.
- .. versionadded:: 0.20.0
- """
+ See Also
+ --------
+ DataFrame.to_string : Render a DataFrame to a console-friendly
+ tabular output.
+ DataFrame.to_html : Render a DataFrame as an HTML table.
- @Substitution(header='Write out the column names. If a list of strings '
- 'is given, it is assumed to be aliases for the '
- 'column names.')
- @Appender(_shared_docs['to_latex'] % _shared_doc_kwargs)
- def to_latex(self, buf=None, columns=None, col_space=None, header=True,
- index=True, na_rep='NaN', formatters=None, float_format=None,
- sparsify=None, index_names=True, bold_rows=False,
- column_format=None, longtable=None, escape=None,
- encoding=None, decimal='.', multicolumn=None,
- multicolumn_format=None, multirow=None):
+ Examples
+ --------
+ >>> df = pd.DataFrame({'name': ['Raphael', 'Donatello'],
+ ... 'mask': ['red', 'purple'],
+ ... 'weapon': ['sai', 'bo staff']})
+ >>> df.to_latex(index=False) # doctest: +NORMALIZE_WHITESPACE
+ '\\begin{tabular}{lll}\n\\toprule\n name & mask & weapon
+ \\\\\n\\midrule\n Raphael & red & sai \\\\\n Donatello &
+ purple & bo staff \\\\\n\\bottomrule\n\\end{tabular}\n'
+ """
# Get defaults from the pandas config
if self.ndim == 1:
self = self.to_frame()
| - [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
Fix the DataFrame.to_latex docstring to match `scripts/validate_docstrings.py` as explained in #22459 and add an example.
The docstring was previously in a variable that was only used in to_latex. I put it in the method docstring instead. The `@Substitution` wasn't matching anything, I suspect this dates back to the common docstring in `io/formats/format.py`.
| https://api.github.com/repos/pandas-dev/pandas/pulls/22516 | 2018-08-26T19:32:47Z | 2018-09-08T03:10:33Z | 2018-09-08T03:10:33Z | 2018-09-08T03:10:37Z |
Fix test_sql pytest fixture warnings | diff --git a/pandas/tests/io/test_sql.py b/pandas/tests/io/test_sql.py
index 824e5a2b23df3..e4df7043919ae 100644
--- a/pandas/tests/io/test_sql.py
+++ b/pandas/tests/io/test_sql.py
@@ -253,9 +253,13 @@ def _get_exec(self):
else:
return self.conn.cursor()
- def _load_iris_data(self, datapath):
+ @pytest.fixture(params=[('io', 'data', 'iris.csv')])
+ def load_iris_data(self, datapath, request):
import io
- iris_csv_file = datapath('io', 'data', 'iris.csv')
+ iris_csv_file = datapath(*request.param)
+
+ if not hasattr(self, 'conn'):
+ self.setup_connect()
self.drop_table('iris')
self._get_exec().execute(SQL_STRINGS['create_iris'][self.flavor])
@@ -503,10 +507,14 @@ class _TestSQLApi(PandasSQLTest):
flavor = 'sqlite'
mode = None
- @pytest.fixture(autouse=True)
- def setup_method(self, datapath):
+ def setup_connect(self):
self.conn = self.connect()
- self._load_iris_data(datapath)
+
+ @pytest.fixture(autouse=True)
+ def setup_method(self, load_iris_data):
+ self.load_test_data_and_sql()
+
+ def load_test_data_and_sql(self):
self._load_iris_view()
self._load_test1_data()
self._load_test2_data()
@@ -1027,8 +1035,8 @@ class _EngineToConnMixin(object):
"""
@pytest.fixture(autouse=True)
- def setup_method(self, datapath):
- super(_EngineToConnMixin, self).setup_method(datapath)
+ def setup_method(self, load_iris_data):
+ super(_EngineToConnMixin, self).load_test_data_and_sql()
engine = self.conn
conn = engine.connect()
self.__tx = conn.begin()
@@ -1153,14 +1161,14 @@ def setup_class(cls):
msg = "{0} - can't connect to {1} server".format(cls, cls.flavor)
pytest.skip(msg)
- @pytest.fixture(autouse=True)
- def setup_method(self, datapath):
- self.setup_connect()
-
- self._load_iris_data(datapath)
+ def load_test_data_and_sql(self):
self._load_raw_sql()
self._load_test1_data()
+ @pytest.fixture(autouse=True)
+ def setup_method(self, load_iris_data):
+ self.load_test_data_and_sql()
+
@classmethod
def setup_import(cls):
# Skip this test if SQLAlchemy not available
@@ -1925,15 +1933,17 @@ class TestSQLiteFallback(SQLiteMixIn, PandasSQLTest):
def connect(cls):
return sqlite3.connect(':memory:')
- @pytest.fixture(autouse=True)
- def setup_method(self, datapath):
+ def setup_connect(self):
self.conn = self.connect()
- self.pandasSQL = sql.SQLiteDatabase(self.conn)
-
- self._load_iris_data(datapath)
+ def load_test_data_and_sql(self):
+ self.pandasSQL = sql.SQLiteDatabase(self.conn)
self._load_test1_data()
+ @pytest.fixture(autouse=True)
+ def setup_method(self, load_iris_data):
+ self.load_test_data_and_sql()
+
def test_read_sql(self):
self._read_sql_iris()
@@ -2151,6 +2161,12 @@ def setup_method(self, request, datapath):
self.method = request.function
self.conn = sqlite3.connect(':memory:')
+ # In some test cases we may close db connection
+ # Re-open conn here so we can perform cleanup in teardown
+ yield
+ self.method = request.function
+ self.conn = sqlite3.connect(':memory:')
+
def test_basic(self):
frame = tm.makeTimeDataFrame()
self._check_roundtrip(frame)
@@ -2227,7 +2243,7 @@ def test_execute_fail(self):
with pytest.raises(Exception):
sql.execute('INSERT INTO test VALUES("foo", "bar", 7)', self.conn)
- def test_execute_closed_connection(self, request, datapath):
+ def test_execute_closed_connection(self):
create_sql = """
CREATE TABLE test
(
@@ -2246,9 +2262,6 @@ def test_execute_closed_connection(self, request, datapath):
with pytest.raises(Exception):
tquery("select * from test", con=self.conn)
- # Initialize connection again (needed for tearDown)
- self.setup_method(request, datapath)
-
def test_na_roundtrip(self):
pass
| - [x] closes #22338
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
See the Travis CI Python 3.6 builds [before](https://travis-ci.org/pandas-dev/pandas/jobs/419432794) and [after](https://travis-ci.org/pandas-dev/pandas/jobs/420824121) my changes: the warnings for calling fixtures directly are gone.
You can see this by searching for the string "pandas/tests/io/test_sql.py" to see that well over 100 warnings have been removed. | https://api.github.com/repos/pandas-dev/pandas/pulls/22515 | 2018-08-26T18:42:53Z | 2018-09-14T01:54:18Z | 2018-09-14T01:54:18Z | 2018-09-14T01:54:18Z |
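
For context on the fix above: the warning comes from calling a fixture function by hand; requesting it as an argument lets pytest resolve it instead. A minimal sketch (names simplified from the real test module):

```python
import pytest

@pytest.fixture
def load_iris_data():
    return 'iris rows'  # stand-in for the real loading logic

class TestSQL(object):
    # pytest injects load_iris_data here; invoking the fixture function
    # directly is what used to emit the deprecation warnings
    @pytest.fixture(autouse=True)
    def setup_method(self, load_iris_data):
        self.data = load_iris_data
```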
BUG: Resample raises AmbiguousTimeError if index starts or ends on DST | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 3e22084d98234..ade395b15cd47 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -615,6 +615,7 @@ Timezones
- Bug when setting a new value with :meth:`DataFrame.loc` with a :class:`DatetimeIndex` with a DST transition (:issue:`18308`, :issue:`20724`)
- Bug in :meth:`DatetimeIndex.unique` that did not re-localize tz-aware dates correctly (:issue:`21737`)
- Bug when indexing a :class:`Series` with a DST transition (:issue:`21846`)
+- Bug in :meth:`DataFrame.resample` when :class:`DatetimeIndex` starts or ends on a DST transition (:issue:`10117`, :issue:`19375`)
Offsets
^^^^^^^
diff --git a/pandas/_libs/tslibs/conversion.pyx b/pandas/_libs/tslibs/conversion.pyx
index fe664cf03b0b9..966801a400c37 100644
--- a/pandas/_libs/tslibs/conversion.pyx
+++ b/pandas/_libs/tslibs/conversion.pyx
@@ -893,34 +893,39 @@ def tz_localize_to_utc(ndarray[int64_t] vals, object tz, object ambiguous=None,
tdata = <int64_t*> cnp.PyArray_DATA(trans)
ntrans = len(trans)
+ # Determine whether each date lies left of the DST transition (store in
+ # result_a) or right of the DST transition (store in result_b)
result_a = np.empty(n, dtype=np.int64)
result_b = np.empty(n, dtype=np.int64)
result_a.fill(NPY_NAT)
result_b.fill(NPY_NAT)
- # left side
- idx_shifted = (np.maximum(0, trans.searchsorted(
+ idx_shifted_left = (np.maximum(0, trans.searchsorted(
vals - DAY_NS, side='right') - 1)).astype(np.int64)
- for i in range(n):
- v = vals[i] - deltas[idx_shifted[i]]
- pos = bisect_right_i8(tdata, v, ntrans) - 1
-
- # timestamp falls to the left side of the DST transition
- if v + deltas[pos] == vals[i]:
- result_a[i] = v
-
- # right side
- idx_shifted = (np.maximum(0, trans.searchsorted(
+ idx_shifted_right = (np.maximum(0, trans.searchsorted(
vals + DAY_NS, side='right') - 1)).astype(np.int64)
for i in range(n):
- v = vals[i] - deltas[idx_shifted[i]]
- pos = bisect_right_i8(tdata, v, ntrans) - 1
-
- # timestamp falls to the right side of the DST transition
- if v + deltas[pos] == vals[i]:
- result_b[i] = v
+ v_left = vals[i] - deltas[idx_shifted_left[i]]
+ if v_left in trans:
+ # The vals[i] lies directly on the DST border.
+ result_a[i] = v_left
+ else:
+ pos_left = bisect_right_i8(tdata, v_left, ntrans) - 1
+ # timestamp falls to the left side of the DST transition
+ if v_left + deltas[pos_left] == vals[i]:
+ result_a[i] = v_left
+
+ v_right = vals[i] - deltas[idx_shifted_right[i]]
+ if v_right in trans:
+ # The vals[i] lies directly on the DST border.
+ result_b[i] = v_right
+ else:
+ pos_right = bisect_right_i8(tdata, v_right, ntrans) - 1
+ # timestamp falls to the right side of the DST transition
+ if v_right + deltas[pos_right] == vals[i]:
+ result_b[i] = v_right
if infer_dst:
dst_hours = np.empty(n, dtype=np.int64)
diff --git a/pandas/tests/test_resample.py b/pandas/tests/test_resample.py
index 38801832829b0..73b48fbf511c3 100644
--- a/pandas/tests/test_resample.py
+++ b/pandas/tests/test_resample.py
@@ -2125,6 +2125,26 @@ def test_downsample_across_dst(self):
freq='H'))
tm.assert_series_equal(result, expected)
+ def test_bin_edges_on_DST_transition(self):
+ # GH 10117
+ # Ends on DST boundary
+ idx = date_range("2014-10-26 00:30:00", "2014-10-26 02:30:00",
+ freq="30T", tz="Europe/London")
+ expected = Series(range(len(idx)), index=idx)
+ result = expected.resample('30T').mean()
+ tm.assert_series_equal(result, expected)
+
+ # Starts on DST boundary
+ idx = date_range('2014-03-09 03:00', periods=4,
+ freq='H', tz='America/Chicago')
+ s = Series(range(len(idx)), index=idx)
+ result = s.resample('H', label='right', closed='right').sum()
+ expected = Series([1, 2, 3], index=date_range('2014-03-09 04:00',
+ periods=3,
+ freq='H',
+ tz='America/Chicago'))
+ tm.assert_series_equal(result, expected)
+
def test_resample_with_nat(self):
# GH 13020
index = DatetimeIndex([pd.NaT,
| - [x] closes #10117
- [x] closes #19375
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/22514 | 2018-08-26T18:31:31Z | 2018-08-27T03:24:02Z | null | 2018-09-09T17:06:23Z |
CLN: use self.dtype internally in Categorical | diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index c4144d2e8b086..fa8ce61f1f4bc 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -500,8 +500,7 @@ def _from_sequence(cls, scalars, dtype=None, copy=False):
def copy(self):
""" Copy constructor. """
return self._constructor(values=self._codes.copy(),
- categories=self.categories,
- ordered=self.ordered,
+ dtype=self.dtype,
fastpath=True)
def astype(self, dtype, copy=True):
@@ -1632,8 +1631,8 @@ def sort_values(self, inplace=False, ascending=True, na_position='last'):
self._codes = codes
return
else:
- return self._constructor(values=codes, categories=self.categories,
- ordered=self.ordered, fastpath=True)
+ return self._constructor(values=codes, dtype=self.dtype,
+ fastpath=True)
def _values_for_rank(self):
"""
@@ -1777,8 +1776,7 @@ def fillna(self, value=None, method=None, limit=None):
'or Series, but you passed a '
'"{0}"'.format(type(value).__name__))
- return self._constructor(values, categories=self.categories,
- ordered=self.ordered, fastpath=True)
+ return self._constructor(values, dtype=self.dtype, fastpath=True)
def take_nd(self, indexer, allow_fill=None, fill_value=None):
"""
@@ -1823,8 +1821,7 @@ def take_nd(self, indexer, allow_fill=None, fill_value=None):
codes = take(self._codes, indexer, allow_fill=allow_fill,
fill_value=fill_value)
- result = self._constructor(codes, categories=self.categories,
- ordered=self.ordered, fastpath=True)
+ result = self._constructor(codes, dtype=self.dtype, fastpath=True)
return result
take = take_nd
@@ -1843,9 +1840,8 @@ def _slice(self, slicer):
"categorical")
slicer = slicer[1]
- _codes = self._codes[slicer]
- return self._constructor(values=_codes, categories=self.categories,
- ordered=self.ordered, fastpath=True)
+ codes = self._codes[slicer]
+ return self._constructor(values=codes, dtype=self.dtype, fastpath=True)
def __len__(self):
"""The length of this Categorical."""
@@ -2157,8 +2153,8 @@ def mode(self, dropna=True):
good = self._codes != -1
values = self._codes[good]
values = sorted(htable.mode_int64(ensure_int64(values), dropna))
- result = self._constructor(values=values, categories=self.categories,
- ordered=self.ordered, fastpath=True)
+ result = self._constructor(values=values, dtype=self.dtype,
+ fastpath=True)
return result
def unique(self):
@@ -2298,8 +2294,7 @@ def repeat(self, repeats, *args, **kwargs):
"""
nv.validate_repeat(args, kwargs)
codes = self._codes.repeat(repeats)
- return self._constructor(values=codes, categories=self.categories,
- ordered=self.ordered, fastpath=True)
+ return self._constructor(values=codes, dtype=self.dtype, fastpath=True)
# Implement the ExtensionArray interface
@property
| Some clean-up that makes the code a bit easier to read IMO. | https://api.github.com/repos/pandas-dev/pandas/pulls/22513 | 2018-08-26T17:23:50Z | 2018-08-30T19:42:21Z | 2018-08-30T19:42:21Z | 2018-08-30T19:58:09Z |
ENH: better MultiIndex.__repr__ | diff --git a/doc/source/user_guide/advanced.rst b/doc/source/user_guide/advanced.rst
index 0e68cddde8bc7..fb9b9db428e34 100644
--- a/doc/source/user_guide/advanced.rst
+++ b/doc/source/user_guide/advanced.rst
@@ -182,15 +182,15 @@ on a deeper level.
Defined Levels
~~~~~~~~~~~~~~
-The repr of a ``MultiIndex`` shows all the defined levels of an index, even
+The :class:`MultiIndex` keeps all the defined levels of an index, even
if they are not actually used. When slicing an index, you may notice this.
For example:
.. ipython:: python
- df.columns # original MultiIndex
+ df.columns.levels # original MultiIndex
- df[['foo','qux']].columns # sliced
+ df[['foo','qux']].columns.levels # sliced
This is done to avoid a recomputation of the levels in order to make slicing
highly performant. If you want to see only the used levels, you can use the
@@ -210,7 +210,8 @@ To reconstruct the ``MultiIndex`` with only the used levels, the
.. ipython:: python
- df[['foo', 'qux']].columns.remove_unused_levels()
+ new_mi = df[['foo', 'qux']].columns.remove_unused_levels()
+ new_mi.levels
Data alignment and using ``reindex``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
index 7d123697d3d20..3ccffdedcb895 100644
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -74,6 +74,38 @@ a dict to a Series groupby aggregation (:ref:`whatsnew_0200.api_breaking.depreca
See :ref:`_groupby.aggregate.named` for more.
+
+.. _whatsnew_0250.enhancements.multi_index_repr:
+
+Better repr for MultiIndex
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Printing of :class:`MultiIndex` instances now shows tuples of each row and ensures
+that the tuple items are vertically aligned, so it's now easier to understand
+the structure of the ``MultiIndex``. (:issue:`13480`):
+
+The repr now looks like this:
+
+.. ipython:: python
+
+ pd.MultiIndex.from_product([['a', 'abc'], range(500)])
+
+Previously, outputting a :class:`MultiIndex` printed all the ``levels`` and
+``codes`` of the ``MultiIndex``, which was visually unappealing and made
+the output more difficult to navigate. For example (limiting the range to 5):
+
+.. code-block:: ipython
+
+ In [1]: pd.MultiIndex.from_product([['a', 'abc'], range(5)])
+ Out[1]: MultiIndex(levels=[['a', 'abc'], [0, 1, 2, 3]],
+ ...: codes=[[0, 0, 0, 0, 1, 1, 1, 1], [0, 1, 2, 3, 0, 1, 2, 3]])
+
+In the new repr, all values will be shown, if the number of rows is smaller
+than :attr:`options.display.max_seq_items` (default: 100 items). Horizontally,
+the output will truncate, if it's wider than :attr:`options.display.width`
+(default: 80 characters).
+
+
.. _whatsnew_0250.enhancements.other:
Other Enhancements
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 4fb9c4197109f..cd90ab63fb83d 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -1322,16 +1322,23 @@ def set_names(self, names, level=None, inplace=False):
>>> idx = pd.MultiIndex.from_product([['python', 'cobra'],
... [2018, 2019]])
>>> idx
- MultiIndex(levels=[['cobra', 'python'], [2018, 2019]],
- codes=[[1, 1, 0, 0], [0, 1, 0, 1]])
+ MultiIndex([('python', 2018),
+ ('python', 2019),
+ ( 'cobra', 2018),
+ ( 'cobra', 2019)],
+ )
>>> idx.set_names(['kind', 'year'], inplace=True)
>>> idx
- MultiIndex(levels=[['cobra', 'python'], [2018, 2019]],
- codes=[[1, 1, 0, 0], [0, 1, 0, 1]],
+ MultiIndex([('python', 2018),
+ ('python', 2019),
+ ( 'cobra', 2018),
+ ( 'cobra', 2019)],
names=['kind', 'year'])
>>> idx.set_names('species', level=0)
- MultiIndex(levels=[['cobra', 'python'], [2018, 2019]],
- codes=[[1, 1, 0, 0], [0, 1, 0, 1]],
+ MultiIndex([('python', 2018),
+ ('python', 2019),
+ ( 'cobra', 2018),
+ ( 'cobra', 2019)],
names=['species', 'year'])
"""
@@ -1393,12 +1400,16 @@ def rename(self, name, inplace=False):
... [2018, 2019]],
... names=['kind', 'year'])
>>> idx
- MultiIndex(levels=[['cobra', 'python'], [2018, 2019]],
- codes=[[1, 1, 0, 0], [0, 1, 0, 1]],
+ MultiIndex([('python', 2018),
+ ('python', 2019),
+ ( 'cobra', 2018),
+ ( 'cobra', 2019)],
names=['kind', 'year'])
>>> idx.rename(['species', 'year'])
- MultiIndex(levels=[['cobra', 'python'], [2018, 2019]],
- codes=[[1, 1, 0, 0], [0, 1, 0, 1]],
+ MultiIndex([('python', 2018),
+ ('python', 2019),
+ ( 'cobra', 2018),
+ ( 'cobra', 2019)],
names=['species', 'year'])
>>> idx.rename('species')
Traceback (most recent call last):
@@ -5420,8 +5431,8 @@ def ensure_index_from_sequences(sequences, names=None):
>>> ensure_index_from_sequences([['a', 'a'], ['a', 'b']],
names=['L1', 'L2'])
- MultiIndex(levels=[['a'], ['a', 'b']],
- codes=[[0, 0], [0, 1]],
+ MultiIndex([('a', 'a'),
+ ('a', 'b')],
names=['L1', 'L2'])
See Also
@@ -5461,8 +5472,10 @@ def ensure_index(index_like, copy=False):
Index([('a', 'a'), ('b', 'c')], dtype='object')
>>> ensure_index([['a', 'a'], ['b', 'c']])
- MultiIndex(levels=[['a'], ['b', 'c']],
- codes=[[0, 0], [0, 1]])
+ MultiIndex([('a', 'b'),
+ ('a', 'c')],
+ dtype='object')
+ )
See Also
--------
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 9217b388ce86b..0f457ba799928 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -29,7 +29,8 @@
from pandas.core.indexes.frozen import FrozenList, _ensure_frozen
import pandas.core.missing as missing
-from pandas.io.formats.printing import pprint_thing
+from pandas.io.formats.printing import (
+ format_object_attrs, format_object_summary, pprint_thing)
_index_doc_kwargs = dict(ibase._index_doc_kwargs)
_index_doc_kwargs.update(
@@ -193,8 +194,10 @@ class MultiIndex(Index):
>>> arrays = [[1, 1, 2, 2], ['red', 'blue', 'red', 'blue']]
>>> pd.MultiIndex.from_arrays(arrays, names=('number', 'color'))
- MultiIndex(levels=[[1, 2], ['blue', 'red']],
- codes=[[0, 0, 1, 1], [1, 0, 1, 0]],
+ MultiIndex([(1, 'red'),
+ (1, 'blue'),
+ (2, 'red'),
+ (2, 'blue')],
names=['number', 'color'])
See further examples for how to construct a MultiIndex in the doc strings
@@ -359,8 +362,10 @@ def from_arrays(cls, arrays, sortorder=None, names=None):
--------
>>> arrays = [[1, 1, 2, 2], ['red', 'blue', 'red', 'blue']]
>>> pd.MultiIndex.from_arrays(arrays, names=('number', 'color'))
- MultiIndex(levels=[[1, 2], ['blue', 'red']],
- codes=[[0, 0, 1, 1], [1, 0, 1, 0]],
+ MultiIndex([(1, 'red'),
+ (1, 'blue'),
+ (2, 'red'),
+ (2, 'blue')],
names=['number', 'color'])
"""
error_msg = "Input must be a list / sequence of array-likes."
@@ -420,8 +425,10 @@ def from_tuples(cls, tuples, sortorder=None, names=None):
>>> tuples = [(1, 'red'), (1, 'blue'),
... (2, 'red'), (2, 'blue')]
>>> pd.MultiIndex.from_tuples(tuples, names=('number', 'color'))
- MultiIndex(levels=[[1, 2], ['blue', 'red']],
- codes=[[0, 0, 1, 1], [1, 0, 1, 0]],
+ MultiIndex([(1, 'red'),
+ (1, 'blue'),
+ (2, 'red'),
+ (2, 'blue')],
names=['number', 'color'])
"""
if not is_list_like(tuples):
@@ -477,8 +484,12 @@ def from_product(cls, iterables, sortorder=None, names=None):
>>> colors = ['green', 'purple']
>>> pd.MultiIndex.from_product([numbers, colors],
... names=['number', 'color'])
- MultiIndex(levels=[[0, 1, 2], ['green', 'purple']],
- codes=[[0, 0, 1, 1, 2, 2], [0, 1, 0, 1, 0, 1]],
+ MultiIndex([(0, 'green'),
+ (0, 'purple'),
+ (1, 'green'),
+ (1, 'purple'),
+ (2, 'green'),
+ (2, 'purple')],
names=['number', 'color'])
"""
from pandas.core.arrays.categorical import _factorize_from_iterables
@@ -537,15 +548,19 @@ def from_frame(cls, df, sortorder=None, names=None):
3 NJ Precip
>>> pd.MultiIndex.from_frame(df)
- MultiIndex(levels=[['HI', 'NJ'], ['Precip', 'Temp']],
- codes=[[0, 0, 1, 1], [1, 0, 1, 0]],
+ MultiIndex([('HI', 'Temp'),
+ ('HI', 'Precip'),
+ ('NJ', 'Temp'),
+ ('NJ', 'Precip')],
names=['a', 'b'])
Using explicit names, instead of the column names
>>> pd.MultiIndex.from_frame(df, names=['state', 'observation'])
- MultiIndex(levels=[['HI', 'NJ'], ['Precip', 'Temp']],
- codes=[[0, 0, 1, 1], [1, 0, 1, 0]],
+ MultiIndex([('HI', 'Temp'),
+ ('HI', 'Precip'),
+ ('NJ', 'Temp'),
+ ('NJ', 'Precip')],
names=['state', 'observation'])
"""
if not isinstance(df, ABCDataFrame):
@@ -663,21 +678,29 @@ def set_levels(self, levels, level=None, inplace=False,
>>> idx = pd.MultiIndex.from_tuples([(1, 'one'), (1, 'two'),
(2, 'one'), (2, 'two')],
names=['foo', 'bar'])
- >>> idx.set_levels([['a','b'], [1,2]])
- MultiIndex(levels=[['a', 'b'], [1, 2]],
- codes=[[0, 0, 1, 1], [0, 1, 0, 1]],
+ >>> idx.set_levels([['a', 'b'], [1, 2]])
+ MultiIndex([('a', 1),
+ ('a', 2),
+ ('b', 1),
+ ('b', 2)],
names=['foo', 'bar'])
- >>> idx.set_levels(['a','b'], level=0)
- MultiIndex(levels=[['a', 'b'], ['one', 'two']],
- codes=[[0, 0, 1, 1], [0, 1, 0, 1]],
+ >>> idx.set_levels(['a', 'b'], level=0)
+ MultiIndex([('a', 'one'),
+ ('a', 'two'),
+ ('b', 'one'),
+ ('b', 'two')],
names=['foo', 'bar'])
- >>> idx.set_levels(['a','b'], level='bar')
- MultiIndex(levels=[[1, 2], ['a', 'b']],
- codes=[[0, 0, 1, 1], [0, 1, 0, 1]],
+ >>> idx.set_levels(['a', 'b'], level='bar')
+ MultiIndex([(1, 'a'),
+ (1, 'b'),
+ (2, 'a'),
+ (2, 'b')],
names=['foo', 'bar'])
- >>> idx.set_levels([['a','b'], [1,2]], level=[0,1])
- MultiIndex(levels=[['a', 'b'], [1, 2]],
- codes=[[0, 0, 1, 1], [0, 1, 0, 1]],
+ >>> idx.set_levels([['a', 'b'], [1, 2]], level=[0, 1])
+ MultiIndex([('a', 1),
+ ('a', 2),
+ ('b', 1),
+ ('b', 2)],
names=['foo', 'bar'])
"""
if is_list_like(levels) and not isinstance(levels, Index):
@@ -779,24 +802,34 @@ def set_codes(self, codes, level=None, inplace=False,
Examples
--------
- >>> idx = pd.MultiIndex.from_tuples([(1, 'one'), (1, 'two'),
- (2, 'one'), (2, 'two')],
+ >>> idx = pd.MultiIndex.from_tuples([(1, 'one'),
+ (1, 'two'),
+ (2, 'one'),
+ (2, 'two')],
names=['foo', 'bar'])
- >>> idx.set_codes([[1,0,1,0], [0,0,1,1]])
- MultiIndex(levels=[[1, 2], ['one', 'two']],
- codes=[[1, 0, 1, 0], [0, 0, 1, 1]],
+ >>> idx.set_codes([[1, 0, 1, 0], [0, 0, 1, 1]])
+ MultiIndex([(2, 'one'),
+ (1, 'one'),
+ (2, 'two'),
+ (1, 'two')],
names=['foo', 'bar'])
- >>> idx.set_codes([1,0,1,0], level=0)
- MultiIndex(levels=[[1, 2], ['one', 'two']],
- codes=[[1, 0, 1, 0], [0, 1, 0, 1]],
+ >>> idx.set_codes([1, 0, 1, 0], level=0)
+ MultiIndex([(2, 'one'),
+ (1, 'two'),
+ (2, 'one'),
+ (1, 'two')],
names=['foo', 'bar'])
- >>> idx.set_codes([0,0,1,1], level='bar')
- MultiIndex(levels=[[1, 2], ['one', 'two']],
- codes=[[0, 0, 1, 1], [0, 0, 1, 1]],
+ >>> idx.set_codes([0, 0, 1, 1], level='bar')
+ MultiIndex([(1, 'one'),
+ (1, 'one'),
+ (2, 'two'),
+ (2, 'two')],
names=['foo', 'bar'])
- >>> idx.set_codes([[1,0,1,0], [0,0,1,1]], level=[0,1])
- MultiIndex(levels=[[1, 2], ['one', 'two']],
- codes=[[1, 0, 1, 0], [0, 0, 1, 1]],
+ >>> idx.set_codes([[1, 0, 1, 0], [0, 0, 1, 1]], level=[0, 1])
+ MultiIndex([(2, 'one'),
+ (1, 'one'),
+ (2, 'two'),
+ (1, 'two')],
names=['foo', 'bar'])
"""
if level is not None and not is_list_like(level):
@@ -947,28 +980,25 @@ def _nbytes(self, deep=False):
# --------------------------------------------------------------------
# Rendering Methods
-
- def _format_attrs(self):
+ def _formatter_func(self, tup):
"""
- Return a list of tuples of the (attr,formatted_value)
+ Formats each item in tup according to its level's formatter function.
"""
- attrs = [
- ('levels', ibase.default_pprint(self._levels,
- max_seq_items=False)),
- ('codes', ibase.default_pprint(self._codes,
- max_seq_items=False))]
- if com._any_not_none(*self.names):
- attrs.append(('names', ibase.default_pprint(self.names)))
- if self.sortorder is not None:
- attrs.append(('sortorder', ibase.default_pprint(self.sortorder)))
- return attrs
-
- def _format_space(self):
- return "\n%s" % (' ' * (len(self.__class__.__name__) + 1))
+ formatter_funcs = [level._formatter_func for level in self.levels]
+ return tuple(func(val) for func, val in zip(formatter_funcs, tup))
def _format_data(self, name=None):
- # we are formatting thru the attributes
- return None
+ """
+ Return the formatted data as a unicode string
+ """
+ return format_object_summary(self, self._formatter_func,
+ name=name, line_break_each_value=True)
+
+ def _format_attrs(self):
+ """
+ Return a list of tuples of the (attr,formatted_value).
+ """
+ return format_object_attrs(self, include_dtype=False)
def _format_native_types(self, na_rep='nan', **kwargs):
new_levels = []
@@ -1555,9 +1585,19 @@ def to_hierarchical(self, n_repeat, n_shuffle=1):
>>> idx = pd.MultiIndex.from_tuples([(1, 'one'), (1, 'two'),
(2, 'one'), (2, 'two')])
>>> idx.to_hierarchical(3)
- MultiIndex(levels=[[1, 2], ['one', 'two']],
- codes=[[0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1],
- [0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1]])
+ MultiIndex([(1, 'one'),
+ (1, 'one'),
+ (1, 'one'),
+ (1, 'two'),
+ (1, 'two'),
+ (1, 'two'),
+ (2, 'one'),
+ (2, 'one'),
+ (2, 'one'),
+ (2, 'two'),
+ (2, 'two'),
+ (2, 'two')],
+ )
"""
levels = self.levels
codes = [np.repeat(level_codes, n_repeat) for
@@ -1648,16 +1688,21 @@ def _sort_levels_monotonic(self):
Examples
--------
- >>> i = pd.MultiIndex(levels=[['a', 'b'], ['bb', 'aa']],
- codes=[[0, 0, 1, 1], [0, 1, 0, 1]])
- >>> i
- MultiIndex(levels=[['a', 'b'], ['bb', 'aa']],
- codes=[[0, 0, 1, 1], [0, 1, 0, 1]])
-
- >>> i.sort_monotonic()
- MultiIndex(levels=[['a', 'b'], ['aa', 'bb']],
- codes=[[0, 0, 1, 1], [1, 0, 1, 0]])
+ >>> mi = pd.MultiIndex(levels=[['a', 'b'], ['bb', 'aa']],
+ ... codes=[[0, 0, 1, 1], [0, 1, 0, 1]])
+ >>> mi
+ MultiIndex([('a', 'bb'),
+ ('a', 'aa'),
+ ('b', 'bb'),
+ ('b', 'aa')],
+ )
+ >>> mi.sort_values()
+ MultiIndex([('a', 'aa'),
+ ('a', 'bb'),
+ ('b', 'aa'),
+ ('b', 'bb')],
+ )
"""
if self.is_lexsorted() and self.is_monotonic:
@@ -1706,20 +1751,25 @@ def remove_unused_levels(self):
Examples
--------
- >>> i = pd.MultiIndex.from_product([range(2), list('ab')])
- MultiIndex(levels=[[0, 1], ['a', 'b']],
- codes=[[0, 0, 1, 1], [0, 1, 0, 1]])
+ >>> mi = pd.MultiIndex.from_product([range(2), list('ab')])
+ >>> mi
+ MultiIndex([(0, 'a'),
+ (0, 'b'),
+ (1, 'a'),
+ (1, 'b')],
+ )
- >>> i[2:]
- MultiIndex(levels=[[0, 1], ['a', 'b']],
- codes=[[1, 1], [0, 1]])
+ >>> mi[2:]
+ MultiIndex([(1, 'a'),
+ (1, 'b')],
+ )
The 0 from the first level is not represented
and can be removed
- >>> i[2:].remove_unused_levels()
- MultiIndex(levels=[[1], ['a', 'b']],
- codes=[[0, 0], [0, 1]])
+ >>> mi2 = mi[2:].remove_unused_levels()
+ >>> mi2.levels
+ FrozenList([[1], ['a', 'b']])
"""
new_levels = []
@@ -2026,11 +2076,17 @@ def swaplevel(self, i=-2, j=-1):
>>> mi = pd.MultiIndex(levels=[['a', 'b'], ['bb', 'aa']],
... codes=[[0, 0, 1, 1], [0, 1, 0, 1]])
>>> mi
- MultiIndex(levels=[['a', 'b'], ['bb', 'aa']],
- codes=[[0, 0, 1, 1], [0, 1, 0, 1]])
+ MultiIndex([('a', 'bb'),
+ ('a', 'aa'),
+ ('b', 'bb'),
+ ('b', 'aa')],
+ )
>>> mi.swaplevel(0, 1)
- MultiIndex(levels=[['bb', 'aa'], ['a', 'b']],
- codes=[[0, 1, 0, 1], [0, 0, 1, 1]])
+ MultiIndex([('bb', 'a'),
+ ('aa', 'a'),
+ ('bb', 'b'),
+ ('aa', 'b')],
+ )
"""
new_levels = list(self.levels)
new_codes = list(self.codes)
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index bd756491abd2f..edfd3e7cf2fed 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -2548,8 +2548,9 @@ def rsplit(self, pat=None, n=-1, expand=False):
Which will create a MultiIndex:
>>> idx.str.partition()
- MultiIndex(levels=[['X', 'Y'], [' '], ['123', '999']],
- codes=[[0, 1], [0, 0], [0, 1]])
+ MultiIndex([('X', ' ', '123'),
+ ('Y', ' ', '999')],
+ dtype='object')
Or an index with tuples with ``expand=False``:
diff --git a/pandas/io/formats/printing.py b/pandas/io/formats/printing.py
index bee66fcbfaa82..73d8586a0a8c9 100644
--- a/pandas/io/formats/printing.py
+++ b/pandas/io/formats/printing.py
@@ -265,7 +265,7 @@ class TableSchemaFormatter(BaseFormatter):
def format_object_summary(obj, formatter, is_justify=True, name=None,
- indent_for_name=True):
+ indent_for_name=True, line_break_each_value=False):
"""
Return the formatted obj as a unicode string
@@ -282,6 +282,12 @@ def format_object_summary(obj, formatter, is_justify=True, name=None,
indent_for_name : bool, default True
Whether subsequent lines should be be indented to
align with the name.
+ line_break_each_value : bool, default False
+ If True, inserts a line break for each value of ``obj``.
+ If False, only break lines when the a line of values gets wider
+ than the display width.
+
+ .. versionadded:: 0.25.0
Returns
-------
@@ -306,7 +312,12 @@ def format_object_summary(obj, formatter, is_justify=True, name=None,
space2 = "\n " # space for the opening '['
n = len(obj)
- sep = ','
+ if line_break_each_value:
+ # If we want to vertically align on each value of obj, we need to
+ # separate values by a line break and indent the values
+ sep = ',\n ' + ' ' * len(name)
+ else:
+ sep = ','
max_seq_items = get_option('display.max_seq_items') or n
# are we a truncated display
@@ -334,10 +345,10 @@ def best_len(values):
if n == 0:
summary = '[]{}'.format(close)
- elif n == 1:
+ elif n == 1 and not line_break_each_value:
first = formatter(obj[0])
summary = '[{}]{}'.format(first, close)
- elif n == 2:
+ elif n == 2 and not line_break_each_value:
first = formatter(obj[0])
last = formatter(obj[-1])
summary = '[{}, {}]{}'.format(first, last, close)
@@ -353,21 +364,39 @@ def best_len(values):
# adjust all values to max length if needed
if is_justify:
-
- # however, if we are not truncated and we are only a single
+ if line_break_each_value:
+ # Justify each string in the values of head and tail, so the
+ # strings will right align when head and tail are stacked
+ # vertically.
+ head, tail = _justify(head, tail)
+ elif (is_truncated or not (len(', '.join(head)) < display_width and
+ len(', '.join(tail)) < display_width)):
+ # Each string in head and tail should align with each other
+ max_length = max(best_len(head), best_len(tail))
+ head = [x.rjust(max_length) for x in head]
+ tail = [x.rjust(max_length) for x in tail]
+ # If we are not truncated and we are only a single
# line, then don't justify
- if (is_truncated or
- not (len(', '.join(head)) < display_width and
- len(', '.join(tail)) < display_width)):
- max_len = max(best_len(head), best_len(tail))
- head = [x.rjust(max_len) for x in head]
- tail = [x.rjust(max_len) for x in tail]
+
+ if line_break_each_value:
+ # Now head and tail are of type List[Tuple[str]]. Below we
+ # convert them into List[str], so there will be one string per
+ # value. Also truncate items horizontally if wider than
+ # max_space
+ max_space = display_width - len(space2)
+ value = tail[0]
+ for max_items in reversed(range(1, len(value) + 1)):
+ pprinted_seq = _pprint_seq(value, max_seq_items=max_items)
+ if len(pprinted_seq) < max_space:
+ break
+ head = [_pprint_seq(x, max_seq_items=max_items) for x in head]
+ tail = [_pprint_seq(x, max_seq_items=max_items) for x in tail]
summary = ""
line = space2
- for i in range(len(head)):
- word = head[i] + sep + ' '
+ for max_items in range(len(head)):
+ word = head[max_items] + sep + ' '
summary, line = _extend_line(summary, line, word,
display_width, space2)
@@ -376,8 +405,8 @@ def best_len(values):
summary += line.rstrip() + space2 + '...'
line = space2
- for i in range(len(tail) - 1):
- word = tail[i] + sep + ' '
+ for max_items in range(len(tail) - 1):
+ word = tail[max_items] + sep + ' '
summary, line = _extend_line(summary, line, word,
display_width, space2)
@@ -391,7 +420,7 @@ def best_len(values):
close = ']' + close.rstrip(' ')
summary += close
- if len(summary) > (display_width):
+ if len(summary) > (display_width) or line_break_each_value:
summary += space1
else: # one row
summary += ' '
@@ -402,7 +431,44 @@ def best_len(values):
return summary
-def format_object_attrs(obj):
+def _justify(head, tail):
+ """
+ Justify items in head and tail, so they are right-aligned when stacked.
+
+ Parameters
+ ----------
+ head : list-like of list-likes of strings
+ tail : list-like of list-likes of strings
+
+ Returns
+ -------
+ tuple of list of tuples of strings
+ Same as head and tail, but items are right aligned when stacked
+ vertically.
+
+ Examples
+ --------
+ >>> _justify([['a', 'b']], [['abc', 'abcd']])
+ ([(' a', ' b')], [('abc', 'abcd')])
+ """
+ combined = head + tail
+
+ # For each position for the sequences in ``combined``,
+ # find the length of the largest string.
+ max_length = [0] * len(combined[0])
+ for inner_seq in combined:
+ length = [len(item) for item in inner_seq]
+ max_length = [max(x, y) for x, y in zip(max_length, length)]
+
+ # justify each item in each list-like in head and tail using max_length
+ head = [tuple(x.rjust(max_len) for x, max_len in zip(seq, max_length))
+ for seq in head]
+ tail = [tuple(x.rjust(max_len) for x, max_len in zip(seq, max_length))
+ for seq in tail]
+ return head, tail
+
+
+def format_object_attrs(obj, include_dtype=True):
"""
Return a list of tuples of the (attr, formatted_value)
for common attrs, including dtype, name, length
@@ -411,6 +477,8 @@ def format_object_attrs(obj):
----------
obj : object
must be iterable
+ include_dtype : bool
+ If False, dtype won't be in the returned list
Returns
-------
@@ -418,10 +486,12 @@ def format_object_attrs(obj):
"""
attrs = []
- if hasattr(obj, 'dtype'):
+ if hasattr(obj, 'dtype') and include_dtype:
attrs.append(('dtype', "'{}'".format(obj.dtype)))
if getattr(obj, 'name', None) is not None:
attrs.append(('name', default_pprint(obj.name)))
+ elif getattr(obj, 'names', None) is not None and any(obj.names):
+ attrs.append(('names', default_pprint(obj.names)))
max_seq_items = get_option('display.max_seq_items') or len(obj)
if len(obj) > max_seq_items:
attrs.append(('length', len(obj)))
diff --git a/pandas/tests/indexes/multi/conftest.py b/pandas/tests/indexes/multi/conftest.py
index 956d2e6cc17e3..307772347e8f5 100644
--- a/pandas/tests/indexes/multi/conftest.py
+++ b/pandas/tests/indexes/multi/conftest.py
@@ -1,6 +1,7 @@
import numpy as np
import pytest
+import pandas as pd
from pandas import Index, MultiIndex
@@ -52,3 +53,28 @@ def holder():
def compat_props():
# a MultiIndex must have these properties associated with it
return ['shape', 'ndim', 'size']
+
+
+@pytest.fixture
+def narrow_multi_index():
+ """
+ Return a MultiIndex that is narrower than the display (<80 characters).
+ """
+ n = 1000
+ ci = pd.CategoricalIndex(list('a' * n) + (['abc'] * n))
+ dti = pd.date_range('2000-01-01', freq='s', periods=n * 2)
+ return pd.MultiIndex.from_arrays([ci, ci.codes + 9, dti],
+ names=['a', 'b', 'dti'])
+
+
+@pytest.fixture
+def wide_multi_index():
+ """
+ Return a MultiIndex that is wider than the display (>80 characters).
+ """
+ n = 1000
+ ci = pd.CategoricalIndex(list('a' * n) + (['abc'] * n))
+ dti = pd.date_range('2000-01-01', freq='s', periods=n * 2)
+ levels = [ci, ci.codes + 9, dti, dti, dti]
+ names = ['a', 'b', 'dti_1', 'dti_2', 'dti_3']
+ return pd.MultiIndex.from_arrays(levels, names=names)
diff --git a/pandas/tests/indexes/multi/test_format.py b/pandas/tests/indexes/multi/test_format.py
index c320cb32b856c..8315478d85125 100644
--- a/pandas/tests/indexes/multi/test_format.py
+++ b/pandas/tests/indexes/multi/test_format.py
@@ -55,31 +55,11 @@ def test_repr_with_unicode_data():
assert "\\" not in repr(index) # we don't want unicode-escaped
-@pytest.mark.skip(reason="#22511 will remove this test")
-def test_repr_roundtrip():
-
+def test_repr_roundtrip_raises():
mi = MultiIndex.from_product([list('ab'), range(3)],
names=['first', 'second'])
- str(mi)
-
- tm.assert_index_equal(eval(repr(mi)), mi, exact=True)
-
- mi_u = MultiIndex.from_product(
- [list('ab'), range(3)], names=['first', 'second'])
- result = eval(repr(mi_u))
- tm.assert_index_equal(result, mi_u, exact=True)
-
- # formatting
- str(mi)
-
- # long format
- mi = MultiIndex.from_product([list('abcdefg'), range(10)],
- names=['first', 'second'])
-
- tm.assert_index_equal(eval(repr(mi)), mi, exact=True)
-
- result = eval(repr(mi_u))
- tm.assert_index_equal(result, mi_u, exact=True)
+ with pytest.raises(TypeError):
+ eval(repr(mi))
def test_unicode_string_with_unicode():
@@ -94,3 +74,126 @@ def test_repr_max_seq_item_setting(idx):
with pd.option_context("display.max_seq_items", None):
repr(idx)
assert '...' not in str(idx)
+
+
+class TestRepr:
+
+ def test_repr(self, idx):
+ result = idx[:1].__repr__()
+ expected = """\
+MultiIndex([('foo', 'one')],
+ names=['first', 'second'])"""
+ assert result == expected
+
+ result = idx.__repr__()
+ expected = """\
+MultiIndex([('foo', 'one'),
+ ('foo', 'two'),
+ ('bar', 'one'),
+ ('baz', 'two'),
+ ('qux', 'one'),
+ ('qux', 'two')],
+ names=['first', 'second'])"""
+ assert result == expected
+
+ with pd.option_context('display.max_seq_items', 5):
+ result = idx.__repr__()
+ expected = """\
+MultiIndex([('foo', 'one'),
+ ('foo', 'two'),
+ ...
+ ('qux', 'one'),
+ ('qux', 'two')],
+ names=['first', 'second'], length=6)"""
+ assert result == expected
+
+ def test_rjust(self, narrow_multi_index):
+ mi = narrow_multi_index
+ result = mi[:1].__repr__()
+ expected = """\
+MultiIndex([('a', 9, '2000-01-01 00:00:00')],
+ names=['a', 'b', 'dti'])"""
+ assert result == expected
+
+ result = mi[::500].__repr__()
+ expected = """\
+MultiIndex([( 'a', 9, '2000-01-01 00:00:00'),
+ ( 'a', 9, '2000-01-01 00:08:20'),
+ ('abc', 10, '2000-01-01 00:16:40'),
+ ('abc', 10, '2000-01-01 00:25:00')],
+ names=['a', 'b', 'dti'])"""
+ assert result == expected
+
+ result = mi.__repr__()
+ expected = """\
+MultiIndex([( 'a', 9, '2000-01-01 00:00:00'),
+ ( 'a', 9, '2000-01-01 00:00:01'),
+ ( 'a', 9, '2000-01-01 00:00:02'),
+ ( 'a', 9, '2000-01-01 00:00:03'),
+ ( 'a', 9, '2000-01-01 00:00:04'),
+ ( 'a', 9, '2000-01-01 00:00:05'),
+ ( 'a', 9, '2000-01-01 00:00:06'),
+ ( 'a', 9, '2000-01-01 00:00:07'),
+ ( 'a', 9, '2000-01-01 00:00:08'),
+ ( 'a', 9, '2000-01-01 00:00:09'),
+ ...
+ ('abc', 10, '2000-01-01 00:33:10'),
+ ('abc', 10, '2000-01-01 00:33:11'),
+ ('abc', 10, '2000-01-01 00:33:12'),
+ ('abc', 10, '2000-01-01 00:33:13'),
+ ('abc', 10, '2000-01-01 00:33:14'),
+ ('abc', 10, '2000-01-01 00:33:15'),
+ ('abc', 10, '2000-01-01 00:33:16'),
+ ('abc', 10, '2000-01-01 00:33:17'),
+ ('abc', 10, '2000-01-01 00:33:18'),
+ ('abc', 10, '2000-01-01 00:33:19')],
+ names=['a', 'b', 'dti'], length=2000)"""
+ assert result == expected
+
+ def test_tuple_width(self, wide_multi_index):
+ mi = wide_multi_index
+ result = mi[:1].__repr__()
+ expected = """MultiIndex([('a', 9, '2000-01-01 00:00:00', '2000-01-01 00:00:00', ...)],
+ names=['a', 'b', 'dti_1', 'dti_2', 'dti_3'])"""
+ assert result == expected
+
+ result = mi[:10].__repr__()
+ expected = """\
+MultiIndex([('a', 9, '2000-01-01 00:00:00', '2000-01-01 00:00:00', ...),
+ ('a', 9, '2000-01-01 00:00:01', '2000-01-01 00:00:01', ...),
+ ('a', 9, '2000-01-01 00:00:02', '2000-01-01 00:00:02', ...),
+ ('a', 9, '2000-01-01 00:00:03', '2000-01-01 00:00:03', ...),
+ ('a', 9, '2000-01-01 00:00:04', '2000-01-01 00:00:04', ...),
+ ('a', 9, '2000-01-01 00:00:05', '2000-01-01 00:00:05', ...),
+ ('a', 9, '2000-01-01 00:00:06', '2000-01-01 00:00:06', ...),
+ ('a', 9, '2000-01-01 00:00:07', '2000-01-01 00:00:07', ...),
+ ('a', 9, '2000-01-01 00:00:08', '2000-01-01 00:00:08', ...),
+ ('a', 9, '2000-01-01 00:00:09', '2000-01-01 00:00:09', ...)],
+ names=['a', 'b', 'dti_1', 'dti_2', 'dti_3'])"""
+ assert result == expected
+
+ result = mi.__repr__()
+ expected = """\
+MultiIndex([( 'a', 9, '2000-01-01 00:00:00', '2000-01-01 00:00:00', ...),
+ ( 'a', 9, '2000-01-01 00:00:01', '2000-01-01 00:00:01', ...),
+ ( 'a', 9, '2000-01-01 00:00:02', '2000-01-01 00:00:02', ...),
+ ( 'a', 9, '2000-01-01 00:00:03', '2000-01-01 00:00:03', ...),
+ ( 'a', 9, '2000-01-01 00:00:04', '2000-01-01 00:00:04', ...),
+ ( 'a', 9, '2000-01-01 00:00:05', '2000-01-01 00:00:05', ...),
+ ( 'a', 9, '2000-01-01 00:00:06', '2000-01-01 00:00:06', ...),
+ ( 'a', 9, '2000-01-01 00:00:07', '2000-01-01 00:00:07', ...),
+ ( 'a', 9, '2000-01-01 00:00:08', '2000-01-01 00:00:08', ...),
+ ( 'a', 9, '2000-01-01 00:00:09', '2000-01-01 00:00:09', ...),
+ ...
+ ('abc', 10, '2000-01-01 00:33:10', '2000-01-01 00:33:10', ...),
+ ('abc', 10, '2000-01-01 00:33:11', '2000-01-01 00:33:11', ...),
+ ('abc', 10, '2000-01-01 00:33:12', '2000-01-01 00:33:12', ...),
+ ('abc', 10, '2000-01-01 00:33:13', '2000-01-01 00:33:13', ...),
+ ('abc', 10, '2000-01-01 00:33:14', '2000-01-01 00:33:14', ...),
+ ('abc', 10, '2000-01-01 00:33:15', '2000-01-01 00:33:15', ...),
+ ('abc', 10, '2000-01-01 00:33:16', '2000-01-01 00:33:16', ...),
+ ('abc', 10, '2000-01-01 00:33:17', '2000-01-01 00:33:17', ...),
+ ('abc', 10, '2000-01-01 00:33:18', '2000-01-01 00:33:18', ...),
+ ('abc', 10, '2000-01-01 00:33:19', '2000-01-01 00:33:19', ...)],
+ names=['a', 'b', 'dti_1', 'dti_2', 'dti_3'], length=2000)""" # noqa
+ assert result == expected
diff --git a/pandas/tests/util/test_assert_index_equal.py b/pandas/tests/util/test_assert_index_equal.py
index ec9cbd104d751..445d9c4e482b0 100644
--- a/pandas/tests/util/test_assert_index_equal.py
+++ b/pandas/tests/util/test_assert_index_equal.py
@@ -10,8 +10,11 @@ def test_index_equal_levels_mismatch():
Index levels are different
\\[left\\]: 1, Int64Index\\(\\[1, 2, 3\\], dtype='int64'\\)
-\\[right\\]: 2, MultiIndex\\(levels=\\[\\['A', 'B'\\], \\[1, 2, 3, 4\\]\\],
- codes=\\[\\[0, 0, 1, 1\\], \\[0, 1, 2, 3\\]\\]\\)"""
+\\[right\\]: 2, MultiIndex\\(\\[\\('A', 1\\),
+ \\('A', 2\\),
+ \\('B', 3\\),
+ \\('B', 4\\)\\],
+ \\)"""
idx1 = Index([1, 2, 3])
idx2 = MultiIndex.from_tuples([("A", 1), ("A", 2),
| closes #13480
closes #12423
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
Proposal for a new repr for MultiIndex. The display is based on vertically stacked tuples, as discussed in #13480, which makes the structure of the MultiIndex much easier to read.
In the proposal we get:
* item formatting according to each level's formatting rule,
* right-justification for each tuple item,
* row-wise truncation according to ``pd.options.display.max_seq_items``,
* column-wise truncation according to ``pd.options.display.width``.
A large MultiIndex example will now look like this:
```python
>>> n = 1_000_000
>>> ci = pd.CategoricalIndex(list('a' * n) + (['bcd'] * n),
... categories=['a', 'bcd'], ordered=True)
>>> dti = pd.date_range('2000-01-01', freq='s', periods=2 * n)
>>> mi = pd.MultiIndex.from_arrays([ci, ci.codes + 9, dti, dti, dti],
...                                names=['a', 'b', 'x', 'x2', 'x3'])
>>> mi
MultiIndex([( 'a', 9, '2000-01-01 00:00:00', '2000-01-01 00:00:00', ...),
( 'a', 9, '2000-01-01 00:00:01', '2000-01-01 00:00:01', ...),
( 'a', 9, '2000-01-01 00:00:02', '2000-01-01 00:00:02', ...),
( 'a', 9, '2000-01-01 00:00:03', '2000-01-01 00:00:03', ...),
( 'a', 9, '2000-01-01 00:00:04', '2000-01-01 00:00:04', ...),
( 'a', 9, '2000-01-01 00:00:05', '2000-01-01 00:00:05', ...),
( 'a', 9, '2000-01-01 00:00:06', '2000-01-01 00:00:06', ...),
( 'a', 9, '2000-01-01 00:00:07', '2000-01-01 00:00:07', ...),
( 'a', 9, '2000-01-01 00:00:08', '2000-01-01 00:00:08', ...),
( 'a', 9, '2000-01-01 00:00:09', '2000-01-01 00:00:09', ...),
...
('bcd', 10, '2000-01-24 03:33:10', '2000-01-24 03:33:10', ...),
('bcd', 10, '2000-01-24 03:33:11', '2000-01-24 03:33:11', ...),
('bcd', 10, '2000-01-24 03:33:12', '2000-01-24 03:33:12', ...),
('bcd', 10, '2000-01-24 03:33:13', '2000-01-24 03:33:13', ...),
('bcd', 10, '2000-01-24 03:33:14', '2000-01-24 03:33:14', ...),
('bcd', 10, '2000-01-24 03:33:15', '2000-01-24 03:33:15', ...),
('bcd', 10, '2000-01-24 03:33:16', '2000-01-24 03:33:16', ...),
('bcd', 10, '2000-01-24 03:33:17', '2000-01-24 03:33:17', ...),
('bcd', 10, '2000-01-24 03:33:18', '2000-01-24 03:33:18', ...),
('bcd', 10, '2000-01-24 03:33:19', '2000-01-24 03:33:19', ...)],
dtype='object', names=['a', 'b', 'x', 'x2', 'x3'], length=2000000)
```
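The truncation hooks into the existing display options rather than adding new ones; a minimal sketch (values picked only for illustration):

```python
import pandas as pd

mi = pd.MultiIndex.from_product([['a', 'abc'], range(100)])
# Rows collapse to '...' once len(mi) exceeds display.max_seq_items.
with pd.option_context('display.max_seq_items', 4):
    print(mi)
```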
For further examples, see the added tests in pandas/tests/indexes/multi/test_format.py. | https://api.github.com/repos/pandas-dev/pandas/pulls/22511 | 2018-08-26T08:00:56Z | 2019-06-19T01:05:34Z | 2019-06-19T01:05:34Z | 2019-06-19T01:06:02Z |
DOC: Update series apply docstring. GH22459 | diff --git a/pandas/core/series.py b/pandas/core/series.py
index 36ca2c0c6e097..c9b1a2c45eab3 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -3288,23 +3288,27 @@ def transform(self, func, axis=0, *args, **kwargs):
def apply(self, func, convert_dtype=True, args=(), **kwds):
"""
- Invoke function on values of Series. Can be ufunc (a NumPy function
- that applies to the entire Series) or a Python function that only works
- on single values
+ Invoke function on values of Series.
+
+ Can be ufunc (a NumPy function that applies to the entire Series) or a
+ Python function that only works on single values.
Parameters
----------
func : function
- convert_dtype : boolean, default True
+ Python function or NumPy ufunc to apply.
+ convert_dtype : bool, default True
Try to find better dtype for elementwise function results. If
- False, leave as dtype=object
+ False, leave as dtype=object.
args : tuple
- Positional arguments to pass to function in addition to the value
- Additional keyword arguments will be passed as keywords to the function
+ Positional arguments passed to func after the series value.
+ **kwds
+ Additional keyword arguments passed to func.
Returns
-------
- y : Series or DataFrame if func returns a Series
+ Series or DataFrame
+ If func returns a Series object the result will be a DataFrame.
See Also
--------
@@ -3314,12 +3318,11 @@ def apply(self, func, convert_dtype=True, args=(), **kwds):
Examples
--------
-
Create a series with typical summer temperatures for each city.
- >>> series = pd.Series([20, 21, 12], index=['London',
- ... 'New York','Helsinki'])
- >>> series
+ >>> s = pd.Series([20, 21, 12],
+ ... index=['London', 'New York', 'Helsinki'])
+ >>> s
London 20
New York 21
Helsinki 12
@@ -3329,8 +3332,8 @@ def apply(self, func, convert_dtype=True, args=(), **kwds):
argument to ``apply()``.
>>> def square(x):
- ... return x**2
- >>> series.apply(square)
+ ... return x ** 2
+ >>> s.apply(square)
London 400
New York 441
Helsinki 144
@@ -3339,7 +3342,7 @@ def apply(self, func, convert_dtype=True, args=(), **kwds):
Square the values by passing an anonymous function as an
argument to ``apply()``.
- >>> series.apply(lambda x: x**2)
+ >>> s.apply(lambda x: x ** 2)
London 400
New York 441
Helsinki 144
@@ -3350,9 +3353,9 @@ def apply(self, func, convert_dtype=True, args=(), **kwds):
``args`` keyword.
>>> def subtract_custom_value(x, custom_value):
- ... return x-custom_value
+ ... return x - custom_value
- >>> series.apply(subtract_custom_value, args=(5,))
+ >>> s.apply(subtract_custom_value, args=(5,))
London 15
New York 16
Helsinki 7
@@ -3363,10 +3366,10 @@ def apply(self, func, convert_dtype=True, args=(), **kwds):
>>> def add_custom_values(x, **kwargs):
... for month in kwargs:
- ... x+=kwargs[month]
+ ... x += kwargs[month]
... return x
- >>> series.apply(add_custom_values, june=30, july=20, august=25)
+ >>> s.apply(add_custom_values, june=30, july=20, august=25)
London 95
New York 96
Helsinki 87
@@ -3374,7 +3377,7 @@ def apply(self, func, convert_dtype=True, args=(), **kwds):
Use a function from the Numpy library.
- >>> series.apply(np.log)
+ >>> s.apply(np.log)
London 2.995732
New York 3.044522
Helsinki 2.484907
| - [X] refs #22459
- [x] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
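One documented parameter, `convert_dtype`, still has no example in this diff; a hedged sketch of its effect:

```python
import pandas as pd

s = pd.Series([20, 21, 12])
s.apply(lambda x: x ** 2).dtype                       # int64, inferred
s.apply(lambda x: x ** 2, convert_dtype=False).dtype  # object
```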
Updated `Series.apply` docstring to resolve errors raised from scripts/validate_docstrings.py from #22459. | https://api.github.com/repos/pandas-dev/pandas/pulls/22510 | 2018-08-26T03:07:06Z | 2018-11-24T20:00:47Z | 2018-11-24T20:00:47Z | 2018-11-24T20:00:58Z |
BUG: silent overflow in DateTimeArray subtraction | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 3e22084d98234..21e45294c87a3 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -582,6 +582,7 @@ Datetimelike
- Bug in :class:`DataFrame` comparisons against ``Timestamp``-like objects failing to raise ``TypeError`` for inequality checks with mismatched types (:issue:`8932`,:issue:`22163`)
- Bug in :class:`DataFrame` with mixed dtypes including ``datetime64[ns]`` incorrectly raising ``TypeError`` on equality comparisons (:issue:`13128`,:issue:`22163`)
- Bug in :meth:`DataFrame.eq` comparison against ``NaT`` incorrectly returning ``True`` or ``NaN`` (:issue:`15697`,:issue:`22163`)
+- Bug in :class:`DatetimeIndex` subtraction that incorrectly failed to raise `OverflowError` (:issue:`22492`, :issue:`22508`)
Timedelta
^^^^^^^^^
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index 1dd34cdf73ab5..484eb430c82b1 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -459,7 +459,8 @@ def _sub_datelike_dti(self, other):
self_i8 = self.asi8
other_i8 = other.asi8
- new_values = self_i8 - other_i8
+ new_values = checked_add_with_arr(self_i8, -other_i8,
+ arr_mask=self._isnan)
if self.hasnans or other.hasnans:
mask = (self._isnan) | (other._isnan)
new_values[mask] = iNaT
diff --git a/pandas/tests/arithmetic/test_datetime64.py b/pandas/tests/arithmetic/test_datetime64.py
index 879a4e1b4af1a..d597ea834f097 100644
--- a/pandas/tests/arithmetic/test_datetime64.py
+++ b/pandas/tests/arithmetic/test_datetime64.py
@@ -1570,6 +1570,40 @@ def test_datetimeindex_sub_timestamp_overflow(self):
with pytest.raises(OverflowError):
dtimin - variant
+ def test_datetimeindex_sub_datetimeindex_overflow(self):
+ # GH#22492, GH#22508
+ dtimax = pd.to_datetime(['now', pd.Timestamp.max])
+ dtimin = pd.to_datetime(['now', pd.Timestamp.min])
+
+ ts_neg = pd.to_datetime(['1950-01-01', '1950-01-01'])
+ ts_pos = pd.to_datetime(['1980-01-01', '1980-01-01'])
+
+ # General tests
+ expected = pd.Timestamp.max.value - ts_pos[1].value
+ result = dtimax - ts_pos
+ assert result[1].value == expected
+
+ expected = pd.Timestamp.min.value - ts_neg[1].value
+ result = dtimin - ts_neg
+ assert result[1].value == expected
+
+ with pytest.raises(OverflowError):
+ dtimax - ts_neg
+
+ with pytest.raises(OverflowError):
+ dtimin - ts_pos
+
+ # Edge cases
+ tmin = pd.to_datetime([pd.Timestamp.min])
+ t1 = tmin + pd.Timedelta.max + pd.Timedelta('1us')
+ with pytest.raises(OverflowError):
+ t1 - tmin
+
+ tmax = pd.to_datetime([pd.Timestamp.max])
+ t2 = tmax + pd.Timedelta.min - pd.Timedelta('1us')
+ with pytest.raises(OverflowError):
+ tmax - t2
+
@pytest.mark.parametrize('names', [('foo', None, None),
('baz', 'bar', None),
('bar', 'bar', 'bar')])
| - [x] closes #22492
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
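Minimal reproduction, using the same fixture values as the new test:

```python
import pandas as pd

dtimax = pd.to_datetime(['now', pd.Timestamp.max])
ts_neg = pd.to_datetime(['1950-01-01', '1950-01-01'])
# Previously wrapped around silently; now raises OverflowError.
dtimax - ts_neg
```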
| https://api.github.com/repos/pandas-dev/pandas/pulls/22508 | 2018-08-26T01:07:01Z | 2018-08-31T10:10:59Z | 2018-08-31T10:10:59Z | 2018-08-31T20:11:08Z |
Fix DataFrame.to_string() justification (2) | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 6c91b6374b8af..c067adc8936a2 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -762,6 +762,7 @@ I/O
- :func:`read_sas()` will correctly parse sas7bdat files with many columns (:issue:`22628`)
- :func:`read_sas()` will correctly parse sas7bdat files with data page types having also bit 7 set (so page type is 128 + 256 = 384) (:issue:`16615`)
- Bug in :meth:`detect_client_encoding` where potential ``IOError`` goes unhandled when importing in a mod_wsgi process due to restricted access to stdout. (:issue:`21552`)
+- Bug in :func:`to_string()` that broke column alignment when ``index=False`` and width of first column's values is greater than the width of first column's header (:issue:`16839`, :issue:`13032`)
Plotting
^^^^^^^^
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 1ff0613876838..db86409adc2b0 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -288,8 +288,7 @@ def to_string(self):
if self.index:
result = self.adj.adjoin(3, *[fmt_index[1:], fmt_values])
else:
- result = self.adj.adjoin(3, fmt_values).replace('\n ',
- '\n').strip()
+ result = self.adj.adjoin(3, fmt_values)
if self.header and have_header:
result = fmt_index[0] + '\n' + result
@@ -650,8 +649,6 @@ def to_string(self):
self._chk_truncate()
strcols = self._to_str_columns()
text = self.adj.adjoin(1, *strcols)
- if not self.index:
- text = text.replace('\n ', '\n').strip()
self.buf.writelines(text)
if self.should_show_dimensions:
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index ffbc978b92ba5..03e830fb09ad6 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -1269,18 +1269,42 @@ def test_to_string_specified_header(self):
df.to_string(header=['X'])
def test_to_string_no_index(self):
- df = DataFrame({'x': [1, 2, 3], 'y': [4, 5, 6]})
+ # GH 16839, GH 13032
+ df = DataFrame({'x': [11, 22], 'y': [33, -44], 'z': ['AAA', ' ']})
df_s = df.to_string(index=False)
- expected = "x y\n1 4\n2 5\n3 6"
+ # Leading space is expected for positive numbers.
+ expected = (" x y z\n"
+ " 11 33 AAA\n"
+ " 22 -44 ")
+ assert df_s == expected
+ df_s = df[['y', 'x', 'z']].to_string(index=False)
+ expected = (" y x z\n"
+ " 33 11 AAA\n"
+ "-44 22 ")
assert df_s == expected
def test_to_string_line_width_no_index(self):
+ # GH 13998, GH 22505
df = DataFrame({'x': [1, 2, 3], 'y': [4, 5, 6]})
df_s = df.to_string(line_width=1, index=False)
- expected = "x \\\n1 \n2 \n3 \n\ny \n4 \n5 \n6"
+ expected = " x \\\n 1 \n 2 \n 3 \n\n y \n 4 \n 5 \n 6 "
+
+ assert df_s == expected
+
+ df = DataFrame({'x': [11, 22, 33], 'y': [4, 5, 6]})
+
+ df_s = df.to_string(line_width=1, index=False)
+ expected = " x \\\n 11 \n 22 \n 33 \n\n y \n 4 \n 5 \n 6 "
+
+ assert df_s == expected
+
+ df = DataFrame({'x': [11, 22, -33], 'y': [4, 5, -6]})
+
+ df_s = df.to_string(line_width=1, index=False)
+ expected = " x \\\n 11 \n 22 \n-33 \n\n y \n 4 \n 5 \n-6 "
assert df_s == expected
@@ -1793,7 +1817,7 @@ def test_to_string_without_index(self):
# GH 11729 Test index=False option
s = Series([1, 2, 3, 4])
result = s.to_string(index=False)
- expected = (u('1\n') + '2\n' + '3\n' + '4')
+ expected = (u(' 1\n') + ' 2\n' + ' 3\n' + ' 4')
assert result == expected
def test_unicode_name_in_footer(self):
| - [x] closes #16839,
closes #13032
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
'Competes' with #22437, which attempts to revert `% d` to `%d` as suggested here: https://github.com/pandas-dev/pandas/issues/13032#issue-151973347. That turned out to affect a lot of tests, which in hindsight is expected; the `% d` has been around since at least 2012 (106fe994cb0eb2701).
Instead, this PR reverts parts of #11942 and embraces the leading space even when `index=False`. `df.to_string(index=False)` will now keep the leading space when the first column contains only positive numbers, and will preserve leading/trailing spaces on the first/last lines.
With the following code:
```python
import pandas as pd
def wrap_to_string(df, **kwargs):
s = df.to_string(**kwargs)
print(str(kwargs).center(25, '-'))
for i, line in enumerate(s.split('\n')):
print(f'^{line}$-{i}')
print()
df = pd.DataFrame({'w': [1, 2], 'x': [3, -4], 'y': [555, 666],
'z': [777, -888], 'a': ['AAA', ' ']})
cols_ = list(map(list, ['wxyza', 'xyzaw', 'yzawx', 'zawxy', 'awxyz']))
for cols in cols_:
wrap_to_string(df[cols], index=False)
```
Output with master:
```python
-----{'index': False}---- # last cell (three spaces) disappeared
^w x y z a$-0
^1 3 555 777 AAA$-1
^2 -4 666 -888$-2
-----{'index': False}---- # misaligned
^x y z a w$-0
^3 555 777 AAA 1$-1
^-4 666 -888 2$-2
-----{'index': False}---- # misaligned
^y z a w x$-0
^555 777 AAA 1 3$-1
^666 -888 2 -4$-2
-----{'index': False}---- # misaligned
^z a w x y$-0
^777 AAA 1 3 555$-1
^-888 2 -4 666$-2
-----{'index': False}---- # misaligned
^a w x y z$-0
^AAA 1 3 555 777$-1
^ 2 -4 666 -888$-2
```
Output with this PR:
```python
-----{'index': False}----
^ w x y z a$-0
^ 1 3 555 777 AAA$-1
^ 2 -4 666 -888 $-2
-----{'index': False}----
^ x y z a w$-0
^ 3 555 777 AAA 1$-1
^-4 666 -888 2$-2
-----{'index': False}----
^ y z a w x$-0
^ 555 777 AAA 1 3$-1
^ 666 -888 2 -4$-2
-----{'index': False}----
^ z a w x y$-0
^ 777 AAA 1 3 555$-1
^-888 2 -4 666$-2
-----{'index': False}----
^ a w x y z$-0
^ AAA 1 3 555 777$-1
^ 2 -4 666 -888$-2
```
Similar effect on Series as well.
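A hedged sketch of the Series side (values illustrative):

```python
import pandas as pd

s = pd.Series([1, -22])
# Values stay right-aligned; the leading space on the first line is no
# longer stripped away.
print(s.to_string(index=False))
```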
| https://api.github.com/repos/pandas-dev/pandas/pulls/22505 | 2018-08-25T14:24:20Z | 2018-09-25T12:55:05Z | 2018-09-25T12:55:05Z | 2018-09-25T12:55:09Z |
DOC: Updated docstrings related to DateTimeIndex. GH22459 | diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index f87059ba1f017..33a234c74d01e 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -1413,7 +1413,8 @@ def date_range(start=None, end=None, periods=None, freq=None, tz=None,
>>> pd.date_range(start='2018-04-24', end='2018-04-27', periods=3)
DatetimeIndex(['2018-04-24 00:00:00', '2018-04-25 12:00:00',
- '2018-04-27 00:00:00'], freq=None)
+ '2018-04-27 00:00:00'],
+ dtype='datetime64[ns]', freq=None)
**Other Parameters**
@@ -1484,37 +1485,39 @@ def bdate_range(start=None, end=None, periods=None, freq='B', tz=None,
Parameters
----------
start : string or datetime-like, default None
- Left bound for generating dates
+ Left bound for generating dates.
end : string or datetime-like, default None
- Right bound for generating dates
+ Right bound for generating dates.
periods : integer, default None
- Number of periods to generate
+ Number of periods to generate.
freq : string or DateOffset, default 'B' (business daily)
- Frequency strings can have multiples, e.g. '5H'
+ Frequency strings can have multiples, e.g. '5H'.
tz : string or None
Time zone name for returning localized DatetimeIndex, for example
- Asia/Beijing
+ Asia/Beijing.
normalize : bool, default False
- Normalize start/end dates to midnight before generating date range
+ Normalize start/end dates to midnight before generating date range.
name : string, default None
- Name of the resulting DatetimeIndex
+ Name of the resulting DatetimeIndex.
weekmask : string or None, default None
Weekmask of valid business days, passed to ``numpy.busdaycalendar``,
only used when custom frequency strings are passed. The default
- value None is equivalent to 'Mon Tue Wed Thu Fri'
+ value None is equivalent to 'Mon Tue Wed Thu Fri'.
.. versionadded:: 0.21.0
holidays : list-like or None, default None
Dates to exclude from the set of valid business days, passed to
``numpy.busdaycalendar``, only used when custom frequency strings
- are passed
+ are passed.
.. versionadded:: 0.21.0
closed : string, default None
Make the interval closed with respect to the given frequency to
- the 'left', 'right', or both sides (None)
+ the 'left', 'right', or both sides (None).
+ **kwargs
+ For compatibility. Has no effect on the result.
Notes
-----
@@ -1528,7 +1531,16 @@ def bdate_range(start=None, end=None, periods=None, freq='B', tz=None,
Returns
-------
- rng : DatetimeIndex
+ DatetimeIndex
+
+ Examples
+ --------
+ Note how the two weekend days are skipped in the result.
+
+ >>> pd.bdate_range(start='1/1/2018', end='1/08/2018')
+ DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04',
+ '2018-01-05', '2018-01-08'],
+ dtype='datetime64[ns]', freq='B')
"""
if freq is None:
msg = 'freq must be specified for bdate_range; use date_range instead'
| - [ ] closes #xxxx
- [x] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
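As a side note, the `weekmask`/`holidays` parameters documented here only take effect with a custom frequency; a hedged sketch (dates illustrative):

```python
import pandas as pd

# weekmask requires the custom business-day frequency 'C'; the default
# 'B' frequency ignores weekmask and holidays.
pd.bdate_range(start='2018-01-01', periods=4, freq='C',
               weekmask='Mon Wed Fri')
```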
Updated `pandas.date_range` and `pandas.bdate_range` docstrings to resolve errors raised from `scripts/validate_docstrings.py` from #22459. Also added an example to `pandas.bdate_range`. | https://api.github.com/repos/pandas-dev/pandas/pulls/22504 | 2018-08-25T10:29:06Z | 2018-11-04T14:59:20Z | 2018-11-04T14:59:20Z | 2018-11-04T14:59:24Z |
CLN: Remove versionadded in groupby.rst | diff --git a/doc/source/groupby.rst b/doc/source/groupby.rst
index cf8ba84ecd4f8..fecc336049a40 100644
--- a/doc/source/groupby.rst
+++ b/doc/source/groupby.rst
@@ -103,8 +103,6 @@ consider the following ``DataFrame``:
.. note::
- .. versionadded:: 0.20
-
A string passed to ``groupby`` may refer to either a column or an index level.
If a string matches both a column name and an index level name, a
``ValueError`` will be raised.
| Follow-up to https://github.com/pandas-dev/pandas/pull/22415#pullrequestreview-148462071. | https://api.github.com/repos/pandas-dev/pandas/pulls/22503 | 2018-08-25T09:11:46Z | 2018-08-28T12:04:47Z | 2018-08-28T12:04:47Z | 2018-08-28T19:26:01Z |
Bug: exporting data frames to excel using xlsxwriter with option cons… | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 32085332caf40..c1259019b7efe 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -671,6 +671,7 @@ Missing
- Bug in :func:`DataFrame.fillna` where a ``ValueError`` would raise when one column contained a ``datetime64[ns, tz]`` dtype (:issue:`15522`)
- Bug in :func:`Series.hasnans` that could be incorrectly cached and return incorrect answers if null elements are introduced after an initial call (:issue:`19700`)
- :func:`Series.isin` now treats all nans as equal also for `np.object`-dtype. This behavior is consistent with the behavior for float64 (:issue:`22119`)
+- Bug in: class:`ExcelWriter` where exporting `DataFrames` to Excel using ``xlsxwriter`` with option `constant_memory` set to True, most of the cells are empty. Now raises ``NotImlementedError``. (:issue:`15392`)
MultiIndex
^^^^^^^^^^
diff --git a/pandas/io/excel.py b/pandas/io/excel.py
index e2db6643c5ef0..864ee40fdd608 100644
--- a/pandas/io/excel.py
+++ b/pandas/io/excel.py
@@ -916,6 +916,14 @@ def save(self):
def __init__(self, path, engine=None,
date_format=None, datetime_format=None, mode='w',
**engine_kwargs):
+
+ # check for contant_memory option
+ options = engine_kwargs.get('options', {})
+ constant_memory = options.get('constant_memory', None)
+ if constant_memory:
+ raise NotImplementedError('The option constant_memory=True is '
+ 'not supported.')
+
# validate that this engine can handle the extension
if isinstance(path, string_types):
ext = os.path.splitext(path)[-1]
diff --git a/pandas/tests/io/test_excel.py b/pandas/tests/io/test_excel.py
index 5f27ff719fda1..732278c5abdd7 100644
--- a/pandas/tests/io/test_excel.py
+++ b/pandas/tests/io/test_excel.py
@@ -1831,6 +1831,16 @@ def test_comment_used(self, merge_cells, engine, ext):
result = read_excel(self.path, 'test_c', comment='#')
tm.assert_frame_equal(result, expected)
+ def test_constant_memory_option_raises_NotImplementedError(self, engine):
+ # Re issue # 15392
+ # Test ExcelWriter with constant_memory=True raises NotImplementedError
+ df = DataFrame({'a': ['1', '2'], 'b': ['2', '3']})
+ msg = 'The option constant_memory=True is not supported.'
+ with tm.assert_raises_regex(NotImplementedError, msg):
+ xlw = pd.ExcelWriter(self.path, engine=engine,
+ options=dict(constant_memory=True))
+ df.to_excel(xlw)
+
def test_comment_emptyline(self, merge_cells, engine, ext):
# Re issue #18735
# Test that read_excel ignores commented lines at the end of file
| …tant_memory set to True, most of the cells are empty. Now raises NotImplementedError. #15392
- [x] closes #15392
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
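A sketch of the now-rejected configuration (the path is a placeholder):

```python
import pandas as pd

# xlsxwriter's constant_memory mode writes rows strictly in order, which
# pandas' cell-by-cell writing can't guarantee, so the writer now refuses
# the option at construction time.
pd.ExcelWriter('report.xlsx', engine='xlsxwriter',
               options={'constant_memory': True})
# NotImplementedError: The option constant_memory=True is not supported.
```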
| https://api.github.com/repos/pandas-dev/pandas/pulls/22502 | 2018-08-24T21:54:56Z | 2018-11-23T03:34:00Z | null | 2018-11-23T03:34:00Z |
CI: Bump to NumPy compat to 1.9.3 | diff --git a/ci/circle-27-compat.yaml b/ci/circle-27-compat.yaml
index b5be569eb28a4..5e9842f4742c5 100644
--- a/ci/circle-27-compat.yaml
+++ b/ci/circle-27-compat.yaml
@@ -7,7 +7,7 @@ dependencies:
- cython=0.28.2
- jinja2=2.8
- numexpr=2.4.4 # we test that we correctly don't use an unsupported numexpr
- - numpy=1.9.2
+ - numpy=1.9.3
- openpyxl
- psycopg2
- pytables=3.2.2
diff --git a/ci/travis-27-locale.yaml b/ci/travis-27-locale.yaml
index 78cbe8f59a8e0..73ab424329463 100644
--- a/ci/travis-27-locale.yaml
+++ b/ci/travis-27-locale.yaml
@@ -7,7 +7,7 @@ dependencies:
- cython=0.28.2
- lxml
- matplotlib=1.4.3
- - numpy=1.9.2
+ - numpy=1.9.3
- openpyxl=2.4.0
- python-dateutil
- python-blosc
| 1.9.2 doesn't seem to be available in `/pkgs/main`. We're seeing errors like
```
ImportError: libgfortran.so.1: cannot open shared object file: No such file or directory
```
when importing numpy.
Switching to 1.9.3 (which is available in main) solves that for me locally. | https://api.github.com/repos/pandas-dev/pandas/pulls/22499 | 2018-08-24T13:41:26Z | 2018-08-24T14:20:12Z | 2018-08-24T14:20:12Z | 2018-08-24T14:26:30Z |
CLN: Simplify read_csv tz offset parsing | diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 08fb0172adcff..8d37bf4c84d5d 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -1632,19 +1632,17 @@ def _infer_types(self, values, na_values, try_num_bool=True):
if try_num_bool:
try:
- result = lib.maybe_convert_numeric(np.asarray(values),
- na_values, False)
+ result = lib.maybe_convert_numeric(values, na_values, False)
na_count = isna(result).sum()
except Exception:
result = values
if values.dtype == np.object_:
- na_count = parsers.sanitize_objects(np.asarray(result),
+ na_count = parsers.sanitize_objects(result,
na_values, False)
else:
result = values
if values.dtype == np.object_:
- na_count = parsers.sanitize_objects(np.asarray(values),
- na_values, False)
+ na_count = parsers.sanitize_objects(values, na_values, False)
if result.dtype == np.object_ and try_num_bool:
result = libops.maybe_convert_bool(np.asarray(values),
@@ -3034,7 +3032,7 @@ def converter(*date_cols):
return tools.to_datetime(
ensure_object(strs),
utc=None,
- box=True,
+ box=False,
dayfirst=dayfirst,
errors='ignore',
infer_datetime_format=infer_datetime_format
| I _think_ after #22457 we do not need the `np.asarray` calls that were introduced in #22380. | https://api.github.com/repos/pandas-dev/pandas/pulls/22494 | 2018-08-24T04:43:59Z | 2018-08-29T12:31:53Z | 2018-08-29T12:31:53Z | 2018-08-29T15:56:04Z |
BUG: reorder type check/conversion so wide_to_long handles str arg for… | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 618d7454c67fe..9d559acfa59e7 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -802,6 +802,7 @@ Reshaping
- Bug in :meth:`DataFrame.replace` raises ``RecursionError`` when replacing empty lists (:issue:`22083`)
- Bug in :meth:`Series.replace` and meth:`DataFrame.replace` when dict is used as the ``to_replace`` value and one key in the dict is is another key's value, the results were inconsistent between using integer key and using string key (:issue:`20656`)
- Bug in :meth:`DataFrame.drop_duplicates` for empty ``DataFrame`` which incorrectly raises an error (:issue:`20516`)
+- Bug in :func:`pandas.wide_to_long` when a string is passed to the stubnames argument and a column name is a substring of that stubname (:issue:`22468`)
Build Changes
^^^^^^^^^^^^^
diff --git a/pandas/core/reshape/melt.py b/pandas/core/reshape/melt.py
index f4b96c8f1ca49..26221143c0cdf 100644
--- a/pandas/core/reshape/melt.py
+++ b/pandas/core/reshape/melt.py
@@ -409,14 +409,14 @@ def melt_stub(df, stub, i, j, value_vars, sep):
return newdf.set_index(i + [j])
- if any(col in stubnames for col in df.columns):
- raise ValueError("stubname can't be identical to a column name")
-
if not is_list_like(stubnames):
stubnames = [stubnames]
else:
stubnames = list(stubnames)
+ if any(col in stubnames for col in df.columns):
+ raise ValueError("stubname can't be identical to a column name")
+
if not is_list_like(i):
i = [i]
else:
diff --git a/pandas/tests/reshape/test_melt.py b/pandas/tests/reshape/test_melt.py
index 81570de7586de..e83a2cb483de7 100644
--- a/pandas/tests/reshape/test_melt.py
+++ b/pandas/tests/reshape/test_melt.py
@@ -640,3 +640,24 @@ def test_float_suffix(self):
result = wide_to_long(df, ['result', 'treatment'],
i='A', j='colname', suffix='[0-9.]+', sep='_')
tm.assert_frame_equal(result, expected)
+
+ def test_col_substring_of_stubname(self):
+ # GH22468
+ # Don't raise ValueError when a column name is a substring
+ # of a stubname that's been passed as a string
+ wide_data = {'node_id': {0: 0, 1: 1, 2: 2, 3: 3, 4: 4},
+ 'A': {0: 0.80, 1: 0.0, 2: 0.25, 3: 1.0, 4: 0.81},
+ 'PA0': {0: 0.74, 1: 0.56, 2: 0.56, 3: 0.98, 4: 0.6},
+ 'PA1': {0: 0.77, 1: 0.64, 2: 0.52, 3: 0.98, 4: 0.67},
+ 'PA3': {0: 0.34, 1: 0.70, 2: 0.52, 3: 0.98, 4: 0.67}
+ }
+ wide_df = pd.DataFrame.from_dict(wide_data)
+ expected = pd.wide_to_long(wide_df,
+ stubnames=['PA'],
+ i=['node_id', 'A'],
+ j='time')
+ result = pd.wide_to_long(wide_df,
+ stubnames='PA',
+ i=['node_id', 'A'],
+ j='time')
+ tm.assert_frame_equal(result, expected)
| closes #22468
- [x] closes #xxxx
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
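Minimal reproduction, condensed from the new test:

```python
import pandas as pd

# 'A' is a substring of the stubname 'PA'; with a plain-string stubnames
# argument the old membership check did `'A' in 'PA'` and raised a bogus
# ValueError before the string was wrapped in a list.
df = pd.DataFrame({'node_id': [0, 1], 'A': [0.80, 0.0],
                   'PA0': [0.74, 0.56], 'PA1': [0.77, 0.64]})
pd.wide_to_long(df, stubnames='PA', i=['node_id', 'A'], j='time')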
| https://api.github.com/repos/pandas-dev/pandas/pulls/22490 | 2018-08-23T22:17:05Z | 2018-09-23T20:11:03Z | 2018-09-23T20:11:03Z | 2018-09-24T01:28:42Z |
Wrong error message in HDFStore.append | diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index c57b1c3e211f6..27cce20e9d319 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -1732,10 +1732,11 @@ def validate_col(self, itemsize=None):
itemsize = self.itemsize
if c.itemsize < itemsize:
raise ValueError(
- "Trying to store a string with len [%s] in [%s] "
- "column but\nthis column has a limit of [%s]!\n"
+ "Trying to store a string with len [%s] in the "
+ "column [%s], but\nthis column has a limit of [%s]!\n"
"Consider using min_itemsize to preset the sizes on "
- "these columns" % (itemsize, self.cname, c.itemsize))
+ "these columns" % (itemsize, self.values[0],
+ c.itemsize))
return c.itemsize
return None
| Updated pytables.py to clarify the error message when appending a DataFrame with a None item to a previously string-only column.
- [ ] closes #16300
- [ ] tests passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
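For reference, a hedged sketch of how to hit this message (the path is a placeholder):

```python
import pandas as pd

with pd.HDFStore('store.h5') as store:
    store.append('df', pd.DataFrame({'A': ['ab']}))
    # Appending a longer string than the stored itemsize raises the
    # ValueError, whose message now names the actual column, not cname.
    store.append('df', pd.DataFrame({'A': ['a much longer string']}))
```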
| https://api.github.com/repos/pandas-dev/pandas/pulls/22489 | 2018-08-23T21:21:18Z | 2018-08-24T14:53:34Z | null | 2018-08-24T14:53:34Z |
BUG: resample with TimedeltaIndex, fenceposts are off | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 3a360b09ae789..1979bde796452 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -709,6 +709,7 @@ Groupby/Resample/Rolling
datetime-like index leading to incorrect results and also segfault. (:issue:`21704`)
- Bug in :meth:`Resampler.apply` when passing postiional arguments to applied func (:issue:`14615`).
- Bug in :meth:`Series.resample` when passing ``numpy.timedelta64`` to `loffset` kwarg (:issue:`7687`).
+- Bug in :meth:`Resampler.asfreq` when frequency of ``TimedeltaIndex`` is a subperiod of a new frequency (:issue:`13022`).
Sparse
^^^^^^
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index 2ada4d758d463..1ef8a0854887b 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -963,7 +963,10 @@ def _downsample(self, how, **kwargs):
return self._wrap_result(result)
def _adjust_binner_for_upsample(self, binner):
- """ adjust our binner when upsampling """
+ """
+ Adjust our binner when upsampling.
+ The range of a new index should not be outside specified range
+ """
if self.closed == 'right':
binner = binner[1:]
else:
@@ -1156,17 +1159,11 @@ def _get_binner_for_time(self):
return self.groupby._get_time_delta_bins(self.ax)
def _adjust_binner_for_upsample(self, binner):
- """ adjust our binner when upsampling """
- ax = self.ax
-
- if is_subperiod(ax.freq, self.freq):
- # We are actually downsampling
- # but are in the asfreq path
- # GH 12926
- if self.closed == 'right':
- binner = binner[1:]
- else:
- binner = binner[:-1]
+ """
+ Adjust our binner when upsampling.
+ The range of a new index is allowed to be greater than original range
+ so we don't need to change the length of a binner, GH 13022
+ """
return binner
diff --git a/pandas/tests/test_resample.py b/pandas/tests/test_resample.py
index b60fd10d745c1..530a683c02f9d 100644
--- a/pandas/tests/test_resample.py
+++ b/pandas/tests/test_resample.py
@@ -26,7 +26,6 @@
from pandas.compat import range, lrange, zip, OrderedDict
from pandas.errors import UnsupportedFunctionCall
import pandas.tseries.offsets as offsets
-from pandas.tseries.frequencies import to_offset
from pandas.tseries.offsets import Minute, BDay
from pandas.core.groupby.groupby import DataError
@@ -626,12 +625,7 @@ def test_asfreq(self, series_and_frame, freq):
obj = series_and_frame
result = obj.resample(freq).asfreq()
- if freq == '2D':
- new_index = obj.index.take(np.arange(0, len(obj.index), 2))
- new_index.freq = to_offset('2D')
- else:
- new_index = self.create_index(obj.index[0], obj.index[-1],
- freq=freq)
+ new_index = self.create_index(obj.index[0], obj.index[-1], freq=freq)
expected = obj.reindex(new_index)
assert_almost_equal(result, expected)
@@ -2932,6 +2926,17 @@ def test_resample_with_nat(self):
freq='1S'))
assert_frame_equal(result, expected)
+ def test_resample_as_freq_with_subperiod(self):
+ # GH 13022
+ index = timedelta_range('00:00:00', '00:10:00', freq='5T')
+ df = DataFrame(data={'value': [1, 5, 10]}, index=index)
+ result = df.resample('2T').asfreq()
+ expected_data = {'value': [1, np.nan, np.nan, np.nan, np.nan, 10]}
+ expected = DataFrame(data=expected_data,
+ index=timedelta_range('00:00:00',
+ '00:10:00', freq='2T'))
+ tm.assert_frame_equal(result, expected)
+
class TestResamplerGrouper(object):
| - [x] closes #13022
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/22488 | 2018-08-23T19:59:09Z | 2018-09-05T11:31:04Z | 2018-09-05T11:31:04Z | 2018-09-05T11:31:07Z |
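A minimal sketch of the fixed `asfreq` behaviour, lifted from the test added in this patch:

```python
import numpy as np
import pandas as pd

# Resampling a TimedeltaIndex to a frequency of which the original
# frequency is a subperiod used to trim an endpoint off the binner.
index = pd.timedelta_range('00:00:00', '00:10:00', freq='5T')
df = pd.DataFrame({'value': [1, 5, 10]}, index=index)

result = df.resample('2T').asfreq()
# The new index now spans the full original range at 2-minute steps
# (six rows), with NaN wherever no observation falls on the grid:
expected = pd.DataFrame(
    {'value': [1, np.nan, np.nan, np.nan, np.nan, 10]},
    index=pd.timedelta_range('00:00:00', '00:10:00', freq='2T'))
```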
API: better error-handling for df.set_index | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 2c142bdd7185b..d3cc0dfc866ed 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -714,6 +714,8 @@ Other API Changes
- :class:`pandas.io.formats.style.Styler` supports a ``number-format`` property when using :meth:`~pandas.io.formats.style.Styler.to_excel` (:issue:`22015`)
- :meth:`DataFrame.corr` and :meth:`Series.corr` now raise a ``ValueError`` along with a helpful error message instead of a ``KeyError`` when supplied with an invalid method (:issue:`22298`)
- :meth:`shift` will now always return a copy, instead of the previous behaviour of returning self when shifting by 0 (:issue:`22397`)
+- :meth:`DataFrame.set_index` now allows all one-dimensional list-likes, raises a ``TypeError`` for incorrect types,
+ has an improved ``KeyError`` message, and will not fail on duplicate column names with ``drop=True``. (:issue:`22484`)
- Slicing a single row of a DataFrame with multiple ExtensionArrays of the same type now preserves the dtype, rather than coercing to object (:issue:`22784`)
- :class:`DateOffset` attribute `_cacheable` and method `_should_cache` have been removed (:issue:`23118`)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 8f3873b4299a5..c10d78ce55d0c 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -73,6 +73,7 @@
is_sequence,
is_named_tuple)
from pandas.core.dtypes.concat import _get_sliced_frame_result_type
+from pandas.core.dtypes.generic import ABCSeries, ABCIndexClass, ABCMultiIndex
from pandas.core.dtypes.missing import isna, notna
from pandas.core import algorithms
@@ -3980,6 +3981,25 @@ def set_index(self, keys, drop=True, append=False, inplace=False,
if not isinstance(keys, list):
keys = [keys]
+ missing = []
+ for col in keys:
+ if (is_scalar(col) or isinstance(col, tuple)) and col in self:
+                # tuples can be either column keys or list-likes
+ # if they are valid column keys, everything is fine
+ continue
+ elif is_scalar(col) and col not in self:
+ # tuples that are not column keys are considered list-like,
+ # not considered missing
+ missing.append(col)
+ elif (not is_list_like(col, allow_sets=False)
+ or getattr(col, 'ndim', 1) > 1):
+ raise TypeError('The parameter "keys" may only contain a '
+ 'combination of valid column keys and '
+ 'one-dimensional list-likes')
+
+ if missing:
+ raise KeyError('{}'.format(missing))
+
if inplace:
frame = self
else:
@@ -3989,7 +4009,7 @@ def set_index(self, keys, drop=True, append=False, inplace=False,
names = []
if append:
names = [x for x in self.index.names]
- if isinstance(self.index, MultiIndex):
+ if isinstance(self.index, ABCMultiIndex):
for i in range(self.index.nlevels):
arrays.append(self.index._get_level_values(i))
else:
@@ -3997,29 +4017,29 @@ def set_index(self, keys, drop=True, append=False, inplace=False,
to_remove = []
for col in keys:
- if isinstance(col, MultiIndex):
- # append all but the last column so we don't have to modify
- # the end of this loop
- for n in range(col.nlevels - 1):
+ if isinstance(col, ABCMultiIndex):
+ for n in range(col.nlevels):
arrays.append(col._get_level_values(n))
-
- level = col._get_level_values(col.nlevels - 1)
names.extend(col.names)
- elif isinstance(col, Series):
- level = col._values
- names.append(col.name)
- elif isinstance(col, Index):
- level = col
+ elif isinstance(col, (ABCIndexClass, ABCSeries)):
+ # if Index then not MultiIndex (treated above)
+ arrays.append(col)
names.append(col.name)
- elif isinstance(col, (list, np.ndarray, Index)):
- level = col
+ elif isinstance(col, (list, np.ndarray)):
+ arrays.append(col)
+ names.append(None)
+ elif (is_list_like(col)
+ and not (isinstance(col, tuple) and col in self)):
+ # all other list-likes (but avoid valid column keys)
+                col = list(col)  # ensure iterators do not get read twice etc.
+ arrays.append(col)
names.append(None)
+ # from here, col can only be a column label
else:
- level = frame[col]._values
+ arrays.append(frame[col]._values)
names.append(col)
if drop:
to_remove.append(col)
- arrays.append(level)
index = ensure_index_from_sequences(arrays, names)
@@ -4028,7 +4048,8 @@ def set_index(self, keys, drop=True, append=False, inplace=False,
raise ValueError('Index has duplicate keys: {dup}'.format(
dup=duplicates))
- for c in to_remove:
+ # use set to handle duplicate column names gracefully in case of drop
+ for c in set(to_remove):
del frame[c]
# clear up memory usage
diff --git a/pandas/tests/frame/conftest.py b/pandas/tests/frame/conftest.py
index 348331fc0ccdf..ec66fb6bf55d2 100644
--- a/pandas/tests/frame/conftest.py
+++ b/pandas/tests/frame/conftest.py
@@ -211,12 +211,13 @@ def frame_of_index_cols():
"""
Fixture for DataFrame of columns that can be used for indexing
- Columns are ['A', 'B', 'C', 'D', 'E']; 'A' & 'B' contain duplicates (but
- are jointly unique), the rest are unique.
+ Columns are ['A', 'B', 'C', 'D', 'E', ('tuple', 'as', 'label')];
+ 'A' & 'B' contain duplicates (but are jointly unique), the rest are unique.
"""
df = DataFrame({'A': ['foo', 'foo', 'foo', 'bar', 'bar'],
'B': ['one', 'two', 'three', 'one', 'two'],
'C': ['a', 'b', 'c', 'd', 'e'],
'D': np.random.randn(5),
- 'E': np.random.randn(5)})
+ 'E': np.random.randn(5),
+ ('tuple', 'as', 'label'): np.random.randn(5)})
return df
diff --git a/pandas/tests/frame/test_alter_axes.py b/pandas/tests/frame/test_alter_axes.py
index 4e61c9c62266d..fdeae61f96b93 100644
--- a/pandas/tests/frame/test_alter_axes.py
+++ b/pandas/tests/frame/test_alter_axes.py
@@ -49,7 +49,8 @@ def test_set_index_cast(self):
tm.assert_frame_equal(df, df2)
# A has duplicate values, C does not
- @pytest.mark.parametrize('keys', ['A', 'C', ['A', 'B']])
+ @pytest.mark.parametrize('keys', ['A', 'C', ['A', 'B'],
+ ('tuple', 'as', 'label')])
@pytest.mark.parametrize('inplace', [True, False])
@pytest.mark.parametrize('drop', [True, False])
def test_set_index_drop_inplace(self, frame_of_index_cols,
@@ -72,7 +73,8 @@ def test_set_index_drop_inplace(self, frame_of_index_cols,
tm.assert_frame_equal(result, expected)
# A has duplicate values, C does not
- @pytest.mark.parametrize('keys', ['A', 'C', ['A', 'B']])
+ @pytest.mark.parametrize('keys', ['A', 'C', ['A', 'B'],
+ ('tuple', 'as', 'label')])
@pytest.mark.parametrize('drop', [True, False])
def test_set_index_append(self, frame_of_index_cols, drop, keys):
df = frame_of_index_cols
@@ -88,7 +90,8 @@ def test_set_index_append(self, frame_of_index_cols, drop, keys):
tm.assert_frame_equal(result, expected)
# A has duplicate values, C does not
- @pytest.mark.parametrize('keys', ['A', 'C', ['A', 'B']])
+ @pytest.mark.parametrize('keys', ['A', 'C', ['A', 'B'],
+ ('tuple', 'as', 'label')])
@pytest.mark.parametrize('drop', [True, False])
def test_set_index_append_to_multiindex(self, frame_of_index_cols,
drop, keys):
@@ -114,8 +117,10 @@ def test_set_index_after_mutation(self):
tm.assert_frame_equal(result, expected)
# MultiIndex constructor does not work directly on Series -> lambda
+ # Add list-of-list constructor because list is ambiguous -> lambda
# also test index name if append=True (name is duplicate here for B)
@pytest.mark.parametrize('box', [Series, Index, np.array,
+ list, tuple, iter, lambda x: [list(x)],
lambda x: MultiIndex.from_arrays([x])])
@pytest.mark.parametrize('append, index_name', [(True, None),
(True, 'B'), (True, 'test'), (False, None)])
@@ -126,21 +131,29 @@ def test_set_index_pass_single_array(self, frame_of_index_cols,
df.index.name = index_name
key = box(df['B'])
- # np.array and list "forget" the name of B
- name = [None if box in [np.array, list] else 'B']
+ if box == list:
+ # list of strings gets interpreted as list of keys
+ msg = "['one', 'two', 'three', 'one', 'two']"
+ with tm.assert_raises_regex(KeyError, msg):
+ df.set_index(key, drop=drop, append=append)
+ else:
+ # np.array/tuple/iter/list-of-list "forget" the name of B
+ name_mi = getattr(key, 'names', None)
+ name = [getattr(key, 'name', None)] if name_mi is None else name_mi
- result = df.set_index(key, drop=drop, append=append)
+ result = df.set_index(key, drop=drop, append=append)
- # only valid column keys are dropped
- # since B is always passed as array above, nothing is dropped
- expected = df.set_index(['B'], drop=False, append=append)
- expected.index.names = [index_name] + name if append else name
+ # only valid column keys are dropped
+ # since B is always passed as array above, nothing is dropped
+ expected = df.set_index(['B'], drop=False, append=append)
+ expected.index.names = [index_name] + name if append else name
- tm.assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# MultiIndex constructor does not work directly on Series -> lambda
# also test index name if append=True (name is duplicate here for A & B)
- @pytest.mark.parametrize('box', [Series, Index, np.array, list,
+ @pytest.mark.parametrize('box', [Series, Index, np.array,
+ list, tuple, iter,
lambda x: MultiIndex.from_arrays([x])])
@pytest.mark.parametrize('append, index_name',
[(True, None), (True, 'A'), (True, 'B'),
@@ -152,8 +165,8 @@ def test_set_index_pass_arrays(self, frame_of_index_cols,
df.index.name = index_name
keys = ['A', box(df['B'])]
- # np.array and list "forget" the name of B
- names = ['A', None if box in [np.array, list] else 'B']
+ # np.array/list/tuple/iter "forget" the name of B
+ names = ['A', None if box in [np.array, list, tuple, iter] else 'B']
result = df.set_index(keys, drop=drop, append=append)
@@ -168,10 +181,12 @@ def test_set_index_pass_arrays(self, frame_of_index_cols,
# MultiIndex constructor does not work directly on Series -> lambda
# We also emulate a "constructor" for the label -> lambda
# also test index name if append=True (name is duplicate here for A)
- @pytest.mark.parametrize('box2', [Series, Index, np.array, list,
+ @pytest.mark.parametrize('box2', [Series, Index, np.array,
+ list, tuple, iter,
lambda x: MultiIndex.from_arrays([x]),
lambda x: x.name])
- @pytest.mark.parametrize('box1', [Series, Index, np.array, list,
+ @pytest.mark.parametrize('box1', [Series, Index, np.array,
+ list, tuple, iter,
lambda x: MultiIndex.from_arrays([x]),
lambda x: x.name])
@pytest.mark.parametrize('append, index_name', [(True, None),
@@ -183,21 +198,22 @@ def test_set_index_pass_arrays_duplicate(self, frame_of_index_cols, drop,
df.index.name = index_name
keys = [box1(df['A']), box2(df['A'])]
+ result = df.set_index(keys, drop=drop, append=append)
- # == gives ambiguous Boolean for Series
- if drop and keys[0] is 'A' and keys[1] is 'A':
- with tm.assert_raises_regex(KeyError, '.*'):
- df.set_index(keys, drop=drop, append=append)
- else:
- result = df.set_index(keys, drop=drop, append=append)
+ # if either box was iter, the content has been consumed; re-read it
+ keys = [box1(df['A']), box2(df['A'])]
- # to test against already-tested behavior, we add sequentially,
- # hence second append always True; must wrap in list, otherwise
- # list-box will be illegal
- expected = df.set_index([keys[0]], drop=drop, append=append)
- expected = expected.set_index([keys[1]], drop=drop, append=True)
+ # need to adapt first drop for case that both keys are 'A' --
+ # cannot drop the same column twice;
+ # use "is" because == would give ambiguous Boolean error for containers
+ first_drop = False if (keys[0] is 'A' and keys[1] is 'A') else drop
- tm.assert_frame_equal(result, expected)
+ # to test against already-tested behaviour, we add sequentially,
+ # hence second append always True; must wrap keys in list, otherwise
+ # box = list would be illegal
+ expected = df.set_index([keys[0]], drop=first_drop, append=append)
+ expected = expected.set_index([keys[1]], drop=drop, append=True)
+ tm.assert_frame_equal(result, expected)
@pytest.mark.parametrize('append', [True, False])
@pytest.mark.parametrize('drop', [True, False])
@@ -229,13 +245,24 @@ def test_set_index_verify_integrity(self, frame_of_index_cols):
def test_set_index_raise(self, frame_of_index_cols, drop, append):
df = frame_of_index_cols
- with tm.assert_raises_regex(KeyError, '.*'): # column names are A-E
+ with tm.assert_raises_regex(KeyError, "['foo', 'bar', 'baz']"):
+ # column names are A-E, as well as one tuple
df.set_index(['foo', 'bar', 'baz'], drop=drop, append=append)
# non-existent key in list with arrays
- with tm.assert_raises_regex(KeyError, '.*'):
+ with tm.assert_raises_regex(KeyError, 'X'):
df.set_index([df['A'], df['B'], 'X'], drop=drop, append=append)
+ msg = 'The parameter "keys" may only contain a combination of.*'
+ # forbidden type, e.g. set
+ with tm.assert_raises_regex(TypeError, msg):
+ df.set_index(set(df['A']), drop=drop, append=append)
+
+ # forbidden type in list, e.g. set
+ with tm.assert_raises_regex(TypeError, msg):
+ df.set_index(['A', df['A'], set(df['A'])],
+ drop=drop, append=append)
+
def test_construction_with_categorical_index(self):
ci = tm.makeCategoricalIndex(10)
ci.name = 'B'
| - [x] closes #22484
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
split off of #22236, and so builds on top of it.
| https://api.github.com/repos/pandas-dev/pandas/pulls/22486 | 2018-08-23T15:18:46Z | 2018-10-19T13:13:39Z | 2018-10-19T13:13:38Z | 2018-10-22T14:16:10Z |
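A short sketch of the new `set_index` error handling, based on the tests in this patch (the frame here is illustrative):

```python
import pandas as pd

df = pd.DataFrame({'A': ['foo', 'bar'], 'B': [1, 2]})

# Missing labels now raise a KeyError that names every missing key:
try:
    df.set_index(['A', 'X', 'Y'])
except KeyError as err:
    print(err)  # "['X', 'Y']"

# Disallowed inputs (e.g. sets, or anything not one-dimensional)
# now raise a TypeError up front instead of failing obscurely later:
try:
    df.set_index(set(df['A']))
except TypeError as err:
    print(err)
```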
BUG: loffset has no effect when passing in a numpy.timedelta64 | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 7ed92935a0991..2bfc57d7f5dcd 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -707,6 +707,7 @@ Groupby/Resample/Rolling
- Multiple bugs in :func:`pandas.core.Rolling.min` with ``closed='left'` and a
datetime-like index leading to incorrect results and also segfault. (:issue:`21704`)
- Bug in :meth:`Resampler.apply` when passing positional arguments to applied func (:issue:`14615`).
+- Bug in :meth:`Series.resample` when passing ``numpy.timedelta64`` to `loffset` kwarg (:issue:`7687`).
Sparse
^^^^^^
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index ae59014ac34f4..2ada4d758d463 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -366,7 +366,8 @@ def _apply_loffset(self, result):
"""
needs_offset = (
- isinstance(self.loffset, (DateOffset, timedelta)) and
+ isinstance(self.loffset, (DateOffset, timedelta,
+ np.timedelta64)) and
isinstance(result.index, DatetimeIndex) and
len(result.index) > 0
)
diff --git a/pandas/tests/test_resample.py b/pandas/tests/test_resample.py
index 38801832829b0..b60fd10d745c1 100644
--- a/pandas/tests/test_resample.py
+++ b/pandas/tests/test_resample.py
@@ -1173,27 +1173,20 @@ def test_resample_frame_basic(self):
df.resample('M', kind='period').mean()
df.resample('W-WED', kind='period').mean()
- def test_resample_loffset(self):
+ @pytest.mark.parametrize('loffset', [timedelta(minutes=1),
+ '1min', Minute(1),
+ np.timedelta64(1, 'm')])
+ def test_resample_loffset(self, loffset):
+ # GH 7687
rng = date_range('1/1/2000 00:00:00', '1/1/2000 00:13:00', freq='min')
s = Series(np.random.randn(14), index=rng)
result = s.resample('5min', closed='right', label='right',
- loffset=timedelta(minutes=1)).mean()
+ loffset=loffset).mean()
idx = date_range('1/1/2000', periods=4, freq='5min')
expected = Series([s[0], s[1:6].mean(), s[6:11].mean(), s[11:].mean()],
index=idx + timedelta(minutes=1))
assert_series_equal(result, expected)
-
- expected = s.resample(
- '5min', closed='right', label='right',
- loffset='1min').mean()
- assert_series_equal(result, expected)
-
- expected = s.resample(
- '5min', closed='right', label='right',
- loffset=Minute(1)).mean()
- assert_series_equal(result, expected)
-
assert result.index.freq == Minute(5)
# from daily
| - [x] closes #7687
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/22482 | 2018-08-23T12:52:52Z | 2018-09-04T11:09:18Z | 2018-09-04T11:09:18Z | 2018-09-04T11:09:21Z |
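A sketch of the fixed `loffset` handling, following the parametrized test above:

```python
import numpy as np
import pandas as pd

rng = pd.date_range('1/1/2000 00:00:00', '1/1/2000 00:13:00', freq='min')
s = pd.Series(np.random.randn(14), index=rng)

# np.timedelta64 is now honoured as `loffset`, matching the already
# supported datetime.timedelta, '1min' and pd.offsets.Minute(1):
result = s.resample('5min', closed='right', label='right',
                    loffset=np.timedelta64(1, 'm')).mean()
# result.index starts at 2000-01-01 00:01:00 and steps by 5 minutes.
```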
BUG: fix Series(extension array) + extension array values addition | diff --git a/pandas/core/ops.py b/pandas/core/ops.py
index 8171840c96b6e..a02152a123b48 100644
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -1218,7 +1218,7 @@ def dispatch_to_extension_op(op, left, right):
new_right = [new_right]
new_right = list(new_right)
elif is_extension_array_dtype(right) and type(left) != type(right):
- new_right = list(new_right)
+ new_right = list(right)
else:
new_right = right
diff --git a/pandas/tests/extension/base/ops.py b/pandas/tests/extension/base/ops.py
index 05351c56862b8..ee4a92146128b 100644
--- a/pandas/tests/extension/base/ops.py
+++ b/pandas/tests/extension/base/ops.py
@@ -77,6 +77,12 @@ def test_divmod(self, data):
self._check_divmod_op(s, divmod, 1, exc=TypeError)
self._check_divmod_op(1, ops.rdivmod, s, exc=TypeError)
+ def test_add_series_with_extension_array(self, data):
+ s = pd.Series(data)
+ result = s + data
+ expected = pd.Series(data + data)
+ self.assert_series_equal(result, expected)
+
def test_error(self, data, all_arithmetic_operators):
# invalid ops
op_name = all_arithmetic_operators
diff --git a/pandas/tests/extension/json/test_json.py b/pandas/tests/extension/json/test_json.py
index 0126d771caf7f..93f10b7fbfc23 100644
--- a/pandas/tests/extension/json/test_json.py
+++ b/pandas/tests/extension/json/test_json.py
@@ -261,6 +261,11 @@ class TestArithmeticOps(BaseJSON, base.BaseArithmeticOpsTests):
def test_error(self, data, all_arithmetic_operators):
pass
+ def test_add_series_with_extension_array(self, data):
+ ser = pd.Series(data)
+ with tm.assert_raises_regex(TypeError, "unsupported"):
+ ser + data
+
class TestComparisonOps(BaseJSON, base.BaseComparisonOpsTests):
pass
diff --git a/pandas/tests/extension/test_categorical.py b/pandas/tests/extension/test_categorical.py
index ff66f53eab6f6..c588552572aed 100644
--- a/pandas/tests/extension/test_categorical.py
+++ b/pandas/tests/extension/test_categorical.py
@@ -22,6 +22,7 @@
from pandas.api.types import CategoricalDtype
from pandas import Categorical
from pandas.tests.extension import base
+import pandas.util.testing as tm
def make_data():
@@ -202,6 +203,11 @@ def test_arith_series_with_scalar(self, data, all_arithmetic_operators):
else:
pytest.skip('rmod never called when string is first argument')
+ def test_add_series_with_extension_array(self, data):
+ ser = pd.Series(data)
+ with tm.assert_raises_regex(TypeError, "cannot perform"):
+ ser + data
+
class TestComparisonOps(base.BaseComparisonOpsTests):
diff --git a/pandas/tests/extension/test_integer.py b/pandas/tests/extension/test_integer.py
index 7aa33006dadda..fa5c89d85e548 100644
--- a/pandas/tests/extension/test_integer.py
+++ b/pandas/tests/extension/test_integer.py
@@ -143,6 +143,12 @@ def test_error(self, data, all_arithmetic_operators):
# other specific errors tested in the integer array specific tests
pass
+ @pytest.mark.xfail(reason="EA is listified. GH-22922", strict=True)
+ def test_add_series_with_extension_array(self, data):
+ super(TestArithmeticOps, self).test_add_series_with_extension_array(
+ data
+ )
+
class TestComparisonOps(base.BaseComparisonOpsTests):
| - [x] closes #22478
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/22479 | 2018-08-23T04:57:40Z | 2018-10-03T07:28:08Z | 2018-10-03T07:28:07Z | 2018-10-03T11:51:01Z |
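The underlying bug was a one-name slip: on the branch handling a right-hand extension array of a different type than the left, the code listified a stale `new_right` binding instead of `right`. A schematic of the pattern, not the actual pandas internals:

```python
def coerce_rhs(left, right):
    # Schematic of the branch fixed in dispatch_to_extension_op: the
    # right operand of `Series(EA) + EA` must be listified from
    # `right` itself, not from a leftover `new_right` binding.
    if isinstance(right, (list, tuple)):
        new_right = list(right)
    elif hasattr(right, 'dtype') and type(left) != type(right):
        new_right = list(right)  # the fix; previously list(new_right)
    else:
        new_right = right
    return new_right
```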
DOC: Fix DataFrame.to_csv docstring and add an example. GH22459 | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 85bd6065314f4..6039da839010a 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -9359,80 +9359,100 @@ def to_csv(self, path_or_buf=None, sep=",", na_rep='', float_format=None,
quotechar='"', line_terminator='\n', chunksize=None,
tupleize_cols=None, date_format=None, doublequote=True,
escapechar=None, decimal='.'):
- r"""Write object to a comma-separated values (csv) file
+ r"""
+ Write object to a comma-separated values (csv) file.
+
+ .. versionchanged:: 0.24.0
+ The order of arguments for Series was changed.
Parameters
----------
- path_or_buf : string or file handle, default None
+ path_or_buf : str or file handle, default None
File path or object, if None is provided the result is returned as
a string.
.. versionchanged:: 0.24.0
Was previously named "path" for Series.
- sep : character, default ','
- Field delimiter for the output file.
- na_rep : string, default ''
- Missing data representation
- float_format : string, default None
- Format string for floating point numbers
+ sep : str, default ','
+ String of length 1. Field delimiter for the output file.
+ na_rep : str, default ''
+ Missing data representation.
+ float_format : str, default None
+ Format string for floating point numbers.
columns : sequence, optional
- Columns to write
- header : boolean or list of string, default True
+ Columns to write.
+ header : bool or list of str, default True
Write out the column names. If a list of strings is given it is
- assumed to be aliases for the column names
+ assumed to be aliases for the column names.
.. versionchanged:: 0.24.0
Previously defaulted to False for Series.
- index : boolean, default True
- Write row names (index)
- index_label : string or sequence, or False, default None
+ index : bool, default True
+ Write row names (index).
+ index_label : str or sequence, or False, default None
Column label for index column(s) if desired. If None is given, and
`header` and `index` are True, then the index names are used. A
- sequence should be given if the object uses MultiIndex. If
+ sequence should be given if the object uses MultiIndex. If
False do not print fields for index names. Use index_label=False
- for easier importing in R
+ for easier importing in R.
mode : str
- Python write mode, default 'w'
- encoding : string, optional
+ Python write mode, default 'w'.
+ encoding : str, optional
A string representing the encoding to use in the output file,
defaults to 'ascii' on Python 2 and 'utf-8' on Python 3.
- compression : {'infer', 'gzip', 'bz2', 'zip', 'xz', None},
- default 'infer'
- If 'infer' and `path_or_buf` is path-like, then detect compression
- from the following extensions: '.gz', '.bz2', '.zip' or '.xz'
- (otherwise no compression).
-
+ compression : str, default 'infer'
+ Compression mode among the following possible values: {'infer',
+ 'gzip', 'bz2', 'zip', 'xz', None}. If 'infer' and `path_or_buf`
+ is path-like, then detect compression from the following
+ extensions: '.gz', '.bz2', '.zip' or '.xz'. (otherwise no
+ compression).
.. versionchanged:: 0.24.0
- 'infer' option added and set to default
- line_terminator : string, default ``'\n'``
- The newline character or character sequence to use in the output
- file
+ 'infer' option added and set to default.
quoting : optional constant from csv module
- defaults to csv.QUOTE_MINIMAL. If you have set a `float_format`
+ Defaults to csv.QUOTE_MINIMAL. If you have set a `float_format`
then floats are converted to strings and thus csv.QUOTE_NONNUMERIC
- will treat them as non-numeric
- quotechar : string (length 1), default '\"'
- character used to quote fields
- doublequote : boolean, default True
- Control quoting of `quotechar` inside a field
- escapechar : string (length 1), default None
- character used to escape `sep` and `quotechar` when appropriate
+ will treat them as non-numeric.
+ quotechar : str, default '\"'
+ String of length 1. Character used to quote fields.
+ line_terminator : string, default ``'\n'``
+ The newline character or character sequence to use in the output
+ file.
chunksize : int or None
- rows to write at a time
- tupleize_cols : boolean, default False
- .. deprecated:: 0.21.0
- This argument will be removed and will always write each row
- of the multi-index as a separate row in the CSV file.
-
+ Rows to write at a time.
+ tupleize_cols : bool, default False
Write MultiIndex columns as a list of tuples (if True) or in
the new, expanded format, where each MultiIndex column is a row
in the CSV (if False).
- date_format : string, default None
- Format string for datetime objects
- decimal: string, default '.'
+ .. deprecated:: 0.21.0
+ This argument will be removed and will always write each row
+ of the multi-index as a separate row in the CSV file.
+ date_format : str, default None
+ Format string for datetime objects.
+ doublequote : bool, default True
+ Control quoting of `quotechar` inside a field.
+ escapechar : str, default None
+ String of length 1. Character used to escape `sep` and `quotechar`
+ when appropriate.
+ decimal : str, default '.'
Character recognized as decimal separator. E.g. use ',' for
- European data
+ European data.
- .. versionchanged:: 0.24.0
- The order of arguments for Series was changed.
+ Returns
+ -------
+ None or str
+ If path_or_buf is None, returns the resulting csv format as a
+ string. Otherwise returns None.
+
+ See Also
+ --------
+ pandas.read_csv : Load a CSV file into a DataFrame.
+ pandas.to_excel: Load an Excel file into a DataFrame.
+
+ Examples
+ --------
+ >>> df = pd.DataFrame({'name': ['Raphael', 'Donatello'],
+ ... 'mask': ['red', 'purple'],
+ ... 'weapon': ['sai', 'bo staff']})
+ >>> df.to_csv(index=False)
+ 'name,mask,weapon\nRaphael,red,sai\nDonatello,purple,bo staff\n'
"""
df = self if isinstance(self, ABCDataFrame) else self.to_frame()
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Fix the DataFrame.to_csv docstring to match `scripts/validate_docstrings.py` as explained in #22459. I also added an example. Is the whatsnew entry needed for documentation too?
| https://api.github.com/repos/pandas-dev/pandas/pulls/22475 | 2018-08-23T01:39:36Z | 2018-09-23T15:32:58Z | 2018-09-23T15:32:58Z | 2018-09-24T09:29:02Z |
DOC: Updating Series.agg docstring | diff --git a/ci/doctests.sh b/ci/doctests.sh
index fee33a0f93f40..2af5dbd26aeb1 100755
--- a/ci/doctests.sh
+++ b/ci/doctests.sh
@@ -28,7 +28,7 @@ if [ "$DOCTEST" ]; then
fi
pytest --doctest-modules -v pandas/core/series.py \
- -k"-agg -map -nlargest -nonzero -nsmallest -reindex -searchsorted -to_dict"
+ -k"-nlargest -nonzero -nsmallest -reindex -searchsorted -to_dict"
if [ $? -ne "0" ]; then
RET=1
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 0e5204fcd6524..85bd6065314f4 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -4439,7 +4439,6 @@ def pipe(self, func, *args, **kwargs):
- function.
- list of functions.
- dict of column names -> functions (or list of functions).
-
%(axis)s
*args
Positional arguments to pass to `func`.
@@ -8146,7 +8145,7 @@ def mask(self, cond, other=np.nan, inplace=False, axis=None, level=None,
Parameters
----------
periods : int
- Number of periods to move, can be positive or negative
+ Number of periods to move, can be positive or negative.
freq : DateOffset, timedelta, or time rule string, optional
Increment to use from the tseries module or time rule (e.g. 'EOM').
See Notes.
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 4558314d612d0..ab41954990412 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1132,7 +1132,6 @@ def reset_index(self, level=None, drop=False, name=None, inplace=False):
Examples
--------
-
>>> s = pd.Series([1, 2, 3, 4], name='foo',
... index=pd.Index(['a', 'b', 'c', 'd'], name='idx'))
@@ -3046,21 +3045,27 @@ def _gotitem(self, key, ndim, subset=None):
Examples
--------
- >>> s = pd.Series(np.random.randn(10))
+ >>> s = pd.Series([1, 2, 3, 4])
+ >>> s
+ 0 1
+ 1 2
+ 2 3
+ 3 4
+ dtype: int64
>>> s.agg('min')
- -1.3018049988556679
+ 1
>>> s.agg(['min', 'max'])
- min -1.301805
- max 1.127688
- dtype: float64
+ min 1
+ max 4
+ dtype: int64
See also
--------
- pandas.Series.apply
- pandas.Series.transform
-
+ pandas.Series.apply : Invoke function on a Series.
+ pandas.Series.transform : Transform function producing
+ a Series with like indexes.
""")
@Appender(_agg_doc)
@@ -3315,7 +3320,6 @@ def rename(self, index=None, **kwargs):
Examples
--------
-
>>> s = pd.Series([1, 2, 3])
>>> s
0 1
@@ -3337,7 +3341,6 @@ def rename(self, index=None, **kwargs):
3 2
5 3
dtype: int64
-
"""
kwargs['inplace'] = validate_bool_kwarg(kwargs.get('inplace', False),
'inplace')
@@ -3507,7 +3510,6 @@ def memory_usage(self, index=True, deep=False):
Examples
--------
-
>>> s = pd.Series(range(3))
>>> s.memory_usage()
104
@@ -3591,7 +3593,6 @@ def isin(self, values):
Examples
--------
-
>>> s = pd.Series(['lama', 'cow', 'lama', 'beetle', 'lama',
... 'hippo'], name='animal')
>>> s.isin(['cow', 'lama'])
| With reference to: #22459. I have fixed up the agg/map doc strings for `pandas/core/series.py` so they now pass our doc tests. CC @jorisvandenbossche
Thanks,
| https://api.github.com/repos/pandas-dev/pandas/pulls/22474 | 2018-08-23T00:19:44Z | 2018-08-24T07:28:07Z | 2018-08-24T07:28:07Z | 2018-08-26T18:33:41Z |
WIP: Fix tuple indexing | diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index f0635014b166b..3ec92e4be3faf 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -298,6 +298,10 @@ def __setstate__(self, state):
def _slice(self, slicer):
""" return a slice of my values """
+
+        # Non-tuple multidimensional indexing is deprecated in NumPy
+ if isinstance(slicer, list):
+ slicer = tuple(slicer)
return self.values[slicer]
def reshape_nd(self, labels, shape, ref_items, mgr=None):
| - [x] closes #21360
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/22472 | 2018-08-22T23:15:04Z | 2018-08-26T18:45:13Z | null | 2018-08-26T18:45:13Z |
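The change works around NumPy's deprecation of multidimensional indexing with a non-tuple sequence; a standalone illustration:

```python
import numpy as np

arr = np.arange(12).reshape(3, 4)
slicer = [slice(None), [0, 2]]

# arr[slicer] emits a FutureWarning on recent NumPy ("Using a
# non-tuple sequence for multidimensional indexing is deprecated").
# Converting the outer list to a tuple keeps the old semantics:
result = arr[tuple(slicer)]  # columns 0 and 2 of every row
```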
BUG: resample Grouper in a list grouping on a column with NaT throws an error | diff --git a/pandas/tests/groupby/test_grouping.py b/pandas/tests/groupby/test_grouping.py
index 58e9797dbeea5..737e8a805f3ce 100644
--- a/pandas/tests/groupby/test_grouping.py
+++ b/pandas/tests/groupby/test_grouping.py
@@ -534,6 +534,23 @@ def test_grouping_labels(self, mframe):
exp_labels = np.array([2, 2, 2, 0, 0, 1, 1, 3, 3, 3], dtype=np.intp)
assert_almost_equal(grouped.grouper.labels[0], exp_labels)
+ def test_list_grouper_with_nat(self):
+ # GH 14715
+ df = pd.DataFrame({'date': pd.date_range('1/1/2011',
+ periods=365, freq='D')})
+ df.iloc[-1] = pd.NaT
+ grouper = pd.Grouper(key='date', freq='AS')
+
+ # Grouper in a list grouping
+ result = df.groupby([grouper])
+ expected = {pd.Timestamp('2011-01-01'): pd.Index(list(range(364)))}
+ tm.assert_dict_equal(result.groups, expected)
+
+ # Test case without a list
+ result = df.groupby(grouper)
+ expected = {pd.Timestamp('2011-01-01'): 365}
+ tm.assert_dict_equal(result.groups, expected)
+
# get_group
# --------------------------------
| - [x] closes #14715
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/22470 | 2018-08-22T20:53:16Z | 2018-08-28T12:22:29Z | 2018-08-28T12:22:28Z | 2018-08-28T12:22:32Z |
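The test above doubles as a usage sketch; in short:

```python
import pandas as pd

df = pd.DataFrame({'date': pd.date_range('1/1/2011', periods=365,
                                         freq='D')})
df.iloc[-1] = pd.NaT
grouper = pd.Grouper(key='date', freq='AS')

# Wrapping the Grouper in a list used to raise on the NaT; it now
# groups like the bare Grouper, with the NaT row left out of groups.
groups = df.groupby([grouper]).groups
```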
remove numpy_helper and some unneeded util functions | diff --git a/pandas/_libs/index.pyx b/pandas/_libs/index.pyx
index b7a1471ae5a9e..9c906a00bd4fe 100644
--- a/pandas/_libs/index.pyx
+++ b/pandas/_libs/index.pyx
@@ -104,10 +104,7 @@ cdef class IndexEngine:
loc = self.get_loc(key)
value = convert_scalar(arr, value)
- if PySlice_Check(loc) or util.is_array(loc):
- arr[loc] = value
- else:
- util.set_value_at(arr, loc, value)
+ arr[loc] = value
cpdef get_loc(self, object val):
if is_definitely_invalid_key(val):
diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index 654e7eaf92ff0..a6078da28a3ba 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -492,9 +492,7 @@ def astype_intsafe(ndarray[object] arr, new_dtype):
if is_datelike and checknull(v):
result[i] = NPY_NAT
else:
- # we can use the unsafe version because we know `result` is mutable
- # since it was created from `np.empty`
- util.set_value_at_unsafe(result, i, v)
+ result[i] = v
return result
@@ -505,9 +503,7 @@ cpdef ndarray[object] astype_unicode(ndarray arr):
ndarray[object] result = np.empty(n, dtype=object)
for i in range(n):
- # we can use the unsafe version because we know `result` is mutable
- # since it was created from `np.empty`
- util.set_value_at_unsafe(result, i, unicode(arr[i]))
+ result[i] = unicode(arr[i])
return result
@@ -518,9 +514,7 @@ cpdef ndarray[object] astype_str(ndarray arr):
ndarray[object] result = np.empty(n, dtype=object)
for i in range(n):
- # we can use the unsafe version because we know `result` is mutable
- # since it was created from `np.empty`
- util.set_value_at_unsafe(result, i, str(arr[i]))
+ result[i] = str(arr[i])
return result
diff --git a/pandas/_libs/reduction.pyx b/pandas/_libs/reduction.pyx
index fe993ecc0cdd7..4a5e859b8f50b 100644
--- a/pandas/_libs/reduction.pyx
+++ b/pandas/_libs/reduction.pyx
@@ -282,8 +282,7 @@ cdef class SeriesBinGrouper:
result = _get_result_array(res,
self.ngroups,
len(self.dummy_arr))
-
- util.assign_value_1d(result, i, res)
+ result[i] = res
islider.advance(group_size)
vslider.advance(group_size)
@@ -408,7 +407,7 @@ cdef class SeriesGrouper:
self.ngroups,
len(self.dummy_arr))
- util.assign_value_1d(result, lab, res)
+ result[lab] = res
counts[lab] = group_size
islider.advance(group_size)
vslider.advance(group_size)
diff --git a/pandas/_libs/src/numpy_helper.h b/pandas/_libs/src/numpy_helper.h
deleted file mode 100644
index d9d0fb74da73c..0000000000000
--- a/pandas/_libs/src/numpy_helper.h
+++ /dev/null
@@ -1,31 +0,0 @@
-/*
-Copyright (c) 2016, PyData Development Team
-All rights reserved.
-
-Distributed under the terms of the BSD Simplified License.
-
-The full license is in the LICENSE file, distributed with this software.
-*/
-
-#ifndef PANDAS__LIBS_SRC_NUMPY_HELPER_H_
-#define PANDAS__LIBS_SRC_NUMPY_HELPER_H_
-
-#include "Python.h"
-#include "inline_helper.h"
-#include "numpy/arrayobject.h"
-#include "numpy/arrayscalars.h"
-
-
-PANDAS_INLINE int assign_value_1d(PyArrayObject* ap, Py_ssize_t _i,
- PyObject* v) {
- npy_intp i = (npy_intp)_i;
- char* item = (char*)PyArray_DATA(ap) + i * PyArray_STRIDE(ap, 0);
- return PyArray_DESCR(ap)->f->setitem(v, item, ap);
-}
-
-PANDAS_INLINE PyObject* get_value_1d(PyArrayObject* ap, Py_ssize_t i) {
- char* item = (char*)PyArray_DATA(ap) + i * PyArray_STRIDE(ap, 0);
- return PyArray_Scalar(item, PyArray_DESCR(ap), (PyObject*)ap);
-}
-
-#endif // PANDAS__LIBS_SRC_NUMPY_HELPER_H_
diff --git a/pandas/_libs/util.pxd b/pandas/_libs/util.pxd
index e78f132ada2ca..e05795d74c503 100644
--- a/pandas/_libs/util.pxd
+++ b/pandas/_libs/util.pxd
@@ -30,11 +30,6 @@ cdef extern from *:
const char *get_c_string(object) except NULL
-cdef extern from "src/numpy_helper.h":
- int assign_value_1d(ndarray, Py_ssize_t, object) except -1
- object get_value_1d(ndarray, Py_ssize_t)
-
-
cdef extern from "src/headers/stdint.h":
enum: UINT8_MAX
enum: UINT16_MAX
@@ -116,26 +111,4 @@ cdef inline object get_value_at(ndarray arr, object loc):
Py_ssize_t i
i = validate_indexer(arr, loc)
- return get_value_1d(arr, i)
-
-
-cdef inline set_value_at_unsafe(ndarray arr, object loc, object value):
- """Sets a value into the array without checking the writeable flag.
-
- This should be used when setting values in a loop, check the writeable
- flag above the loop and then eschew the check on each iteration.
- """
- cdef:
- Py_ssize_t i
-
- i = validate_indexer(arr, loc)
- assign_value_1d(arr, i, value)
-
-
-cdef inline set_value_at(ndarray arr, object loc, object value):
- """Sets a value into the array after checking that the array is mutable.
- """
- if not cnp.PyArray_ISWRITEABLE(arr):
- raise ValueError('assignment destination is read-only')
-
- set_value_at_unsafe(arr, loc, value)
+ return arr[i]
diff --git a/pandas/core/sparse/array.py b/pandas/core/sparse/array.py
index 6f0ffbff22028..eb07e5ef6c85f 100644
--- a/pandas/core/sparse/array.py
+++ b/pandas/core/sparse/array.py
@@ -446,7 +446,10 @@ def _get_val_at(self, loc):
if sp_loc == -1:
return self.fill_value
else:
- return libindex.get_value_at(self, sp_loc)
+ # libindex.get_value_at will end up calling __getitem__,
+ # so to avoid recursing we need to unwrap `self` so the
+ # ndarray.__getitem__ implementation is called.
+ return libindex.get_value_at(np.asarray(self), sp_loc)
@Appender(_index_shared_docs['take'] % _sparray_doc_kwargs)
def take(self, indices, axis=0, allow_fill=True,
diff --git a/setup.py b/setup.py
index 964167737c9c6..19438d950e8a7 100755
--- a/setup.py
+++ b/setup.py
@@ -491,8 +491,7 @@ def srcpath(name=None, suffix='.pyx', subdir='src'):
ts_include = ['pandas/_libs/tslibs/src']
-lib_depends = ['pandas/_libs/src/numpy_helper.h',
- 'pandas/_libs/src/parse_helper.h',
+lib_depends = ['pandas/_libs/src/parse_helper.h',
'pandas/_libs/src/compat_helper.h']
np_datetime_headers = [
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/22469 | 2018-08-22T17:36:20Z | 2018-08-29T12:43:57Z | 2018-08-29T12:43:57Z | 2018-08-29T13:41:04Z |
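One reason the removal is safe: plain ndarray assignment already enforces the writeable flag that `set_value_at` checked by hand, as a quick illustration shows:

```python
import numpy as np

arr = np.arange(5)
arr.setflags(write=False)
try:
    arr[0] = 42  # plain __setitem__ checks the writeable flag itself
except ValueError as err:
    print(err)   # "assignment destination is read-only"
```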
TEST: groupby(as_index=False, sort=False).aggregate formerly (?) gave unexpected results with a list-like function | diff --git a/pandas/tests/groupby/aggregate/test_other.py b/pandas/tests/groupby/aggregate/test_other.py
index 606539a564323..61db4cee1ab02 100644
--- a/pandas/tests/groupby/aggregate/test_other.py
+++ b/pandas/tests/groupby/aggregate/test_other.py
@@ -512,3 +512,14 @@ def test_agg_category_nansum(observed):
if observed:
expected = expected[expected != 0]
tm.assert_series_equal(result, expected)
+
+
+def test_agg_list_like_func():
+ # GH 18473
+ df = pd.DataFrame({'A': [str(x) for x in range(3)],
+ 'B': [str(x) for x in range(3)]})
+ grouped = df.groupby('A', as_index=False, sort=False)
+ result = grouped.agg({'B': lambda x: list(x)})
+ expected = pd.DataFrame({'A': [str(x) for x in range(3)],
+ 'B': [[str(x)] for x in range(3)]})
+ tm.assert_frame_equal(result, expected)
| - [x] closes #18473
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/22467 | 2018-08-22T15:08:05Z | 2018-08-29T12:32:42Z | 2018-08-29T12:32:42Z | 2018-08-29T12:32:45Z |
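The test condensed into a usage sketch:

```python
import pandas as pd

df = pd.DataFrame({'A': ['0', '1', '2'], 'B': ['0', '1', '2']})
grouped = df.groupby('A', as_index=False, sort=False)

# Aggregating with a function that returns a list per group now
# yields one single-element list in B for each group:
result = grouped.agg({'B': lambda x: list(x)})
#    A    B
# 0  0  [0]
# 1  1  [1]
# 2  2  [2]
```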
DOC: confusing wording in Dataframe.quantile, term 'a la` | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 052952103e28c..b79c83cccd5ab 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -7249,8 +7249,7 @@ def f(s):
def quantile(self, q=0.5, axis=0, numeric_only=True,
interpolation='linear'):
"""
- Return values at the given quantile over requested axis, a la
- numpy.percentile.
+ Return values at the given quantile over requested axis.
Parameters
----------
@@ -7315,6 +7314,7 @@ def quantile(self, q=0.5, axis=0, numeric_only=True,
See Also
--------
pandas.core.window.Rolling.quantile
+ numpy.percentile
"""
self._check_percentile(q)
| - [x] closes #22463
- [ ] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/22464 | 2018-08-22T13:53:19Z | 2018-08-22T13:54:23Z | 2018-08-22T13:54:23Z | 2023-03-06T06:03:58Z |
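The "See Also" link the patch adds reflects that the two functions agree; a small check (values illustrative):

```python
import numpy as np
import pandas as pd

s = pd.Series([1, 2, 3, 4])

s.quantile(0.5)       # 2.5
np.percentile(s, 50)  # 2.5 -- same linear interpolation by default
```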
BUG: Retain timezone information in to_datetime if box=False | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index d7feb6e547b22..0f5ef1caf1d96 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -237,7 +237,7 @@ without timezone localization. This is inconsistent from parsing the same
datetime string with :class:`Timestamp` which would preserve the UTC
offset in the ``tz`` attribute. Now, :func:`to_datetime` preserves the UTC
offset in the ``tz`` attribute when all the datetime strings have the same
-UTC offset (:issue:`17697`, :issue:`11736`)
+UTC offset (:issue:`17697`, :issue:`11736`, :issue:`22457`)
*Previous Behavior*:
diff --git a/pandas/core/tools/datetimes.py b/pandas/core/tools/datetimes.py
index 90a083557a662..57387b9ea870a 100644
--- a/pandas/core/tools/datetimes.py
+++ b/pandas/core/tools/datetimes.py
@@ -275,14 +275,25 @@ def _convert_listlike_datetimes(arg, box, format, name=None, tz=None,
yearfirst=yearfirst,
require_iso8601=require_iso8601
)
- if tz_parsed is not None and box:
- return DatetimeIndex._simple_new(result, name=name,
- tz=tz_parsed)
+ if tz_parsed is not None:
+ if box:
+ # We can take a shortcut since the datetime64 numpy array
+ # is in UTC
+ return DatetimeIndex._simple_new(result, name=name,
+ tz=tz_parsed)
+ else:
+ # Convert the datetime64 numpy array to an numpy array
+ # of datetime objects
+ result = [Timestamp(ts, tz=tz_parsed).to_pydatetime()
+ for ts in result]
+ return np.array(result, dtype=object)
if box:
+ # Ensure we return an Index in all cases where box=True
if is_datetime64_dtype(result):
return DatetimeIndex(result, tz=tz, name=name)
elif is_object_dtype(result):
+ # e.g. an Index of datetime objects
from pandas import Index
return Index(result, name=name)
return result
diff --git a/pandas/tests/indexes/datetimes/test_tools.py b/pandas/tests/indexes/datetimes/test_tools.py
index 72e5358f21966..bef9b73773f46 100644
--- a/pandas/tests/indexes/datetimes/test_tools.py
+++ b/pandas/tests/indexes/datetimes/test_tools.py
@@ -592,6 +592,17 @@ def test_iso_8601_strings_with_same_offset(self):
result = DatetimeIndex([ts_str] * 2)
tm.assert_index_equal(result, expected)
+ def test_iso_8601_strings_same_offset_no_box(self):
+ # GH 22446
+ data = ['2018-01-04 09:01:00+09:00', '2018-01-04 09:02:00+09:00']
+ result = pd.to_datetime(data, box=False)
+ expected = np.array([
+ datetime(2018, 1, 4, 9, 1, tzinfo=pytz.FixedOffset(540)),
+ datetime(2018, 1, 4, 9, 2, tzinfo=pytz.FixedOffset(540))
+ ],
+ dtype=object)
+ tm.assert_numpy_array_equal(result, expected)
+
def test_iso_8601_strings_with_different_offsets(self):
# GH 17697, 11736
ts_strings = ["2015-11-18 15:30:00+05:30",
diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index 9a6fa70892e26..64d2e155aa9a9 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -330,10 +330,9 @@ def test_datetime64_dtype_array_returned(self):
'2015-01-01T00:00:00.000000000+0000'],
dtype='M8[ns]')
- dt_index = pd.to_datetime(['2015-01-03T00:00:00.000000000+0000',
- '2015-01-01T00:00:00.000000000+0000',
- '2015-01-01T00:00:00.000000000+0000'],
- box=False)
+ dt_index = pd.to_datetime(['2015-01-03T00:00:00.000000000',
+ '2015-01-01T00:00:00.000000000',
+ '2015-01-01T00:00:00.000000000'])
result = algos.unique(dt_index)
tm.assert_numpy_array_equal(result, expected)
assert result.dtype == expected.dtype
| - [x] closes #22446
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
Don't think this requires a whatsnew entry since this was introduced in (and part of) #21822 which is slated for v0.24.0 | https://api.github.com/repos/pandas-dev/pandas/pulls/22457 | 2018-08-22T06:58:21Z | 2018-08-24T03:29:53Z | 2018-08-24T03:29:53Z | 2018-08-24T03:30:45Z |
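The new test as a standalone sketch:

```python
from datetime import datetime

import numpy as np
import pandas as pd
import pytz

data = ['2018-01-04 09:01:00+09:00', '2018-01-04 09:02:00+09:00']
result = pd.to_datetime(data, box=False)

# The +09:00 offset used to be dropped with box=False; the result is
# now an object ndarray of tz-aware datetimes (fixed +540 minutes):
expected = np.array(
    [datetime(2018, 1, 4, 9, 1, tzinfo=pytz.FixedOffset(540)),
     datetime(2018, 1, 4, 9, 2, tzinfo=pytz.FixedOffset(540))],
    dtype=object)
```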
MAINT: Reset sys.path if docstrings import fails | diff --git a/pandas/tests/scripts/test_validate_docstrings.py b/pandas/tests/scripts/test_validate_docstrings.py
index cebcbf5a61465..29da40cd78d22 100644
--- a/pandas/tests/scripts/test_validate_docstrings.py
+++ b/pandas/tests/scripts/test_validate_docstrings.py
@@ -471,6 +471,9 @@ def import_scripts(self):
from validate_docstrings import validate_one
globals()[global_validate_one] = validate_one
except ImportError:
+ # Remove addition to `sys.path`
+ sys.path.pop()
+
# Import will fail if the pandas installation is not inplace.
raise pytest.skip("pandas/scripts directory does not exist")
| Follow-up to https://github.com/pandas-dev/pandas/pull/22413#discussion_r211459766
cc @WillAyd | https://api.github.com/repos/pandas-dev/pandas/pulls/22456 | 2018-08-22T06:45:30Z | 2018-08-22T10:03:49Z | 2018-08-22T10:03:49Z | 2018-08-25T06:07:35Z |
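The cleanup pattern in isolation (the scripts directory path is hypothetical):

```python
import sys

def import_validate_one(scripts_dir):
    # Sketch of the pattern adopted here: undo the sys.path mutation
    # if the optional import fails, then let the caller skip the test.
    sys.path.append(scripts_dir)
    try:
        from validate_docstrings import validate_one
    except ImportError:
        sys.path.pop()  # remove the entry we just added
        raise
    return validate_one
```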